@@ -3,27 +3,31 @@ $TYPEDEF

 Abstract type interface for benchmark problems.

-The following methods are mandatory for benchmarks:
-- [`generate_dataset`](@ref) or [`generate_sample`](@ref)
+# Mandatory methods to implement for any benchmark
+- [`generate_sample`](@ref): primary entry point, called by the default [`generate_dataset`](@ref)
 - [`generate_statistical_model`](@ref)
 - [`generate_maximizer`](@ref)

-The following methods are optional:
-- [`plot_data`](@ref)
-- [`objective_value`](@ref)
-- [`compute_gap`](@ref)
+Override [`generate_dataset`](@ref) directly only when samples cannot be drawn independently.
+
+# Optional methods (defaults provided)
+- [`is_minimization_problem`](@ref): defaults to `true`
+- [`objective_value`](@ref): defaults to `dot(θ, y)`
+- [`compute_gap`](@ref): default implementation provided; override for custom evaluation
+
+# Optional methods (no default)
+- [`plot_data`](@ref), [`plot_instance`](@ref), [`plot_solution`](@ref)
+- [`generate_policies`](@ref)
 """
 abstract type AbstractBenchmark end

 """
     generate_sample(::AbstractBenchmark, rng::AbstractRNG; kwargs...) -> DataSample

-Generate a single [`DataSample`](@ref) for given benchmark.
-This is a low-level function that is used by [`generate_dataset`](@ref) to create
-a dataset of samples. It is not mandatory to implement this method, but it is
-recommended for benchmarks that have a well-defined way to generate individual samples.
-An alternative is to directly implement [`generate_dataset`](@ref) to create a dataset
-without generating individual samples.
+Generate a single [`DataSample`](@ref) for the benchmark.
+This is the primary implementation target: the default [`generate_dataset`](@ref) calls
+it repeatedly. Override [`generate_dataset`](@ref) directly only when samples cannot be
+drawn independently (e.g. when the full dataset must be loaded at once).
 """
 function generate_sample end

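To make the interface contract concrete, here is a sketch of a hypothetical toy benchmark implementing the mandatory methods. The `ToyBenchmark` type, the feature size, the linear cost model, and the keyword `DataSample` constructor are illustrative assumptions, not part of the interface:

```julia
using Random

# Hypothetical toy benchmark: 5 features per sample, costs linear in the
# features, and a "combinatorial" problem that selects nonnegative-cost entries.
struct ToyBenchmark <: AbstractBenchmark end

function generate_sample(::ToyBenchmark, rng::AbstractRNG; kwargs...)
    x = randn(rng, 5)   # features
    θ = 2 .* x          # ground-truth costs
    y = θ .>= 0         # optimal solution of the toy maximizer below
    return DataSample(; x, θ, y)
end

generate_maximizer(::ToyBenchmark; kwargs...) = (θ; kwargs...) -> θ .>= 0
```

With only `generate_sample` defined, the default `generate_dataset` loop takes care of building whole datasets.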
@@ -49,21 +53,22 @@
 """
     generate_maximizer(::AbstractBenchmark; kwargs...)

-Generates a maximizer function.
-Returns a callable f: (θ; kwargs...) -> y, where θ is a cost array and y is a solution.
+Returns a callable `f(θ; kwargs...) -> y` that solves a maximization problem.
 """
 function generate_maximizer end

 """
     generate_statistical_model(::AbstractBenchmark; kwargs...)

-Initializes and return an untrained statistical model of the CO-ML pipeline.
-It's usually a Flux model, that takes a feature matrix x as input, and returns a cost array θ as output.
+Returns an untrained statistical model (usually a Flux neural network) that maps a
+feature matrix `x` to an output array `θ`.
 """
 function generate_statistical_model end

 """
     generate_policies(::AbstractBenchmark) -> Vector{Policy}
+
+Return a list of named baseline policies for the benchmark.
 """
 function generate_policies end

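Taken together, the three generators form the full CO-ML pipeline. A usage sketch, assuming `b` is any concrete `AbstractBenchmark` implementation (the field names `x`, `θ`, `y` on `DataSample` follow the docstrings above):

```julia
using Random

b = ToyBenchmark()                      # hypothetical benchmark from earlier
model = generate_statistical_model(b)   # x -> θ (untrained)
maximizer = generate_maximizer(b)       # θ -> y
sample = generate_sample(b, Random.default_rng())

θ = model(sample.x)    # predicted costs
y = maximizer(θ)       # combinatorial solution for the predicted costs
```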
@@ -99,7 +104,7 @@ function compute_gap end
 """
 $TYPEDSIGNATURES

-Default behaviour of `objective_value`.
+Compute `dot(θ, y)`. Override for non-linear objectives.
 """
 function objective_value(::AbstractBenchmark, θ::AbstractArray, y::AbstractArray)
     return dot(θ, y)
@@ -139,7 +144,8 @@
 """
 $TYPEDSIGNATURES

-Default behaviour of `compute_gap` for a benchmark problem where `features`, `solutions` and `costs` are all defined.
+Default implementation of [`compute_gap`](@ref): average relative optimality gap over `dataset`.
+Requires samples with `x`, `θ`, and `y` fields. Override for custom evaluation logic.
 """
 function compute_gap(
     bench::AbstractBenchmark,
@@ -168,19 +174,18 @@ $TYPEDEF

 Abstract type interface for single-stage stochastic benchmark problems.

-A stochastic benchmark separates the problem into a **deterministic instance** (the
+A stochastic benchmark separates the problem into an **instance** (the
 context known before the scenario is revealed) and a **random scenario** (the uncertain
-part). The combinatorial oracle sees only the instance; scenarios are used to evaluate
-anticipative solutions, generate targets, and compute objective values.
+part). Decisions are made knowing only the instance. Scenarios are used to generate
+anticipative targets and compute objective values.

 # Required methods (exogenous benchmarks, `{true}` only)
 - [`generate_sample`](@ref)`(bench, rng)`: returns a [`DataSample`](@ref) with instance
-  and features but **no scenario**. The scenario is omitted so that
-  [`generate_dataset`](@ref) can draw K independent scenarios from the same instance.
+  and features but **no scenario**. Scenarios are added later by [`generate_dataset`](@ref)
+  via [`generate_scenario`](@ref).
 - [`generate_scenario`](@ref)`(bench, sample, rng)`: draws a random scenario for the
   instance encoded in `sample`. The full sample is passed (not just the instance)
-  because context is tied to the instance and implementations may need fields beyond
-  `sample.instance`.
+  so implementations can access any context field.

 # Optional methods
 - [`generate_anticipative_solver`](@ref)`(bench)`: returns a callable
@@ -202,13 +207,9 @@ supports all three standard structures via `nb_scenarios_per_instance`:
 | N instances with 1 scenario | `generate_dataset(bench, N)` (default) |
 | N instances with K scenarios | `generate_dataset(bench, N; nb_scenarios_per_instance=K)` |

-Extra keyword arguments are forwarded to [`generate_instance_samples`](@ref), enabling
-solver choice to reach target computation (e.g. `algorithm=compact_mip`).
-
 By default, each [`DataSample`](@ref) has `context` holding the instance (solver kwargs)
 and `extra=(; scenario)` holding one scenario. Override
-[`generate_instance_samples`](@ref) to store scenarios differently (e.g.
-`extra=(; scenarios=[ξ₁,…,ξ_K])` for SAA).
+[`generate_instance_samples`](@ref) to store scenarios differently.
 """
 abstract type AbstractStochasticBenchmark{exogenous} <: AbstractBenchmark end

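The dataset structures in the table above can be exercised directly. A hedged sketch (the benchmark type is hypothetical, and whether an `rng` keyword is accepted depends on the concrete `generate_dataset` signature):

```julia
using Random

b = MyStochasticBenchmark()   # hypothetical AbstractStochasticBenchmark{true} subtype

# 10 instances with 5 scenarios each: per the default mapping, this should yield
# 50 DataSamples, grouped by shared `context` (one instance) with one scenario
# apiece in `extra=(; scenario)`.
dataset = generate_dataset(b, 10; nb_scenarios_per_instance=5)
```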
@@ -223,11 +224,14 @@ Draw a random scenario for the instance encoded in `sample`.
 Called once per scenario by the specialised [`generate_dataset`](@ref).

 The full `sample` is passed (not just `sample.instance`) because both the scenario
-and the context are tied to the same instance — implementations may need any field
-of the sample. Consistent with [`generate_environment`](@ref) for dynamic benchmarks.
+and the context are tied to the same instance.
 """
 function generate_scenario end

+# function generate_scenario(b::AbstractStochasticBenchmark{true}, sample::DataSample, rng::AbstractRNG)
+#     return generate_scenario(b, rng; sample.context...)
+# end
+
 """
     generate_anticipative_solver(::AbstractStochasticBenchmark) -> callable

@@ -236,11 +240,6 @@ scenario. The instance and other solver-relevant fields are spread from the samp

     solver = generate_anticipative_solver(bench)
     y = solver(scenario; sample.context...)
-
-This mirrors the maximizer calling convention `maximizer(θ; sample.context...)`.
-
-Used by Imitating Anticipative and DAgger algorithms. Replaces the deprecated
-[`generate_anticipative_solution`](@ref).
 """
 function generate_anticipative_solver(bench::AbstractStochasticBenchmark)
     return (scenario; kwargs...) -> error(
@@ -260,13 +259,6 @@ parametric anticipative subproblem:

 The scenario comes first (it defines the stochastic cost function); `θ` is the
 perturbation added on top, coupling the benchmark to the model output.
-
-The κ weight from the Alternating Minimization algorithm is not a parameter of this
-solver. Since the subproblem is linear in `θ`, the algorithm scales θ by κ before
-calling: `solver(κ * θ, scenario; sample.context...)`.
-
-Partially apply `scenario` to obtain a `(θ; kwargs...) -> y` closure, then wrap in
-`PerturbedAdditive` (InferOpt) to compute targets `μᵢ` during the decomposition step.
 """
 function generate_parametric_anticipative_solver end

@@ -288,7 +280,7 @@ Map K scenarios to [`DataSample`](@ref)s for a single instance (encoded in `samp
 This is the key customisation point for scenario→sample mapping in
 [`generate_dataset`](@ref).

-**Default** (anticipative / DAgger — 1:1 mapping):
+**Default** (1:1 mapping):
 Returns K samples, each with one scenario in `extra=(; scenario=ξ)`.
 When `compute_targets=true`, calls [`generate_anticipative_solver`](@ref) to compute
 an independent anticipative target per scenario.
@@ -371,17 +363,34 @@ end
 """
 $TYPEDEF

-Abstract type interface for dynamic benchmark problems.
-This type should be used for benchmarks that involve multi-stage stochastic optimization problems.
+Abstract type interface for multi-stage stochastic (dynamic) benchmark problems.
+
+Extends [`AbstractStochasticBenchmark`](@ref). The `{exogenous}` parameter retains its
+meaning (whether uncertainty is independent of decisions). For exogenous benchmarks,
+a **scenario** is a full multi-stage realization of uncertainty, embedded in the
+environment rather than drawn via [`generate_scenario`](@ref); hence that method raises
+an error for all dynamic benchmarks.

-It follows the same interface as [`AbstractStochasticBenchmark`](@ref), with the addition of the following methods:
-TODO
+# Differences from [`AbstractStochasticBenchmark`](@ref)
+- [`generate_sample`](@ref) returns a [`DataSample`](@ref) holding the problem **instance**
+  (initial configuration for rollout); there is no (instance, scenario) decomposition.
+- [`generate_scenario`](@ref) raises an error: the full multi-stage scenario unfolds through
+  [`generate_environment`](@ref).
+- [`generate_dataset`](@ref) uses the standard independent-sample loop.
+
+# Additional optional methods
+- [`generate_environment`](@ref)`(bench, instance, rng)`: initialize a rollout environment
+  (holds the multi-stage scenario for exogenous benchmarks).
+- [`generate_environments`](@ref)`(bench, dataset; rng)`: batch version (default provided).
 """
 abstract type AbstractDynamicBenchmark{exogenous} <: AbstractStochasticBenchmark{exogenous} end

-# Dynamic benchmarks do not use the stochastic dataset generation (which draws independent
-# scenarios per instance). They generate each sample independently via `generate_sample`,
-# using the standard AbstractBenchmark default.
+"""
+$TYPEDSIGNATURES
+
+Override of [`generate_dataset`](@ref) for dynamic benchmarks: generates each sample
+independently via [`generate_sample`](@ref), bypassing the stochastic scenario loop.
+"""
 function generate_dataset(
     bench::AbstractDynamicBenchmark,
     dataset_size::Int;
@@ -393,9 +402,6 @@ function generate_dataset(
     return [generate_sample(bench, rng; kwargs...) for _ in 1:dataset_size]
 end

-# Dynamic benchmarks generate complete trajectories via `generate_sample` and do not
-# decompose problems into (instance, scenario) pairs. `generate_scenario` is not
-# applicable to them; this method exists only to provide a clear error.
 function generate_scenario(
     bench::AbstractDynamicBenchmark, sample::DataSample, rng::AbstractRNG; kwargs...
 )
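For dynamic benchmarks these pieces combine into a rollout setup. A sketch (the benchmark type is hypothetical, whether `generate_dataset` accepts an `rng` keyword depends on its full signature, and the environment API beyond `generate_environment` is not shown in this excerpt):

```julia
using Random

b = MyDynamicBenchmark()   # hypothetical AbstractDynamicBenchmark subtype
rng = MersenneTwister(0)

dataset = generate_dataset(b, 3)                 # independent samples, no scenario loop
env = generate_environment(b, dataset[1], rng)   # rollout environment for the first sample
```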
@@ -415,8 +421,7 @@ function generate_environment end
 """
 $TYPEDSIGNATURES

-Default behaviour of `generate_environment` applied to a data sample.
-Uses the info field of the sample as the instance.
+Delegates to `generate_environment(bench, sample.instance, rng; kwargs...)`.
 """
 function generate_environment(
     bench::AbstractDynamicBenchmark, sample::DataSample, rng::AbstractRNG; kwargs...