Commit 5cee253

Update readme
1 parent 7fb87fc commit 5cee253

1 file changed: +8 additions, -73 deletions


README.md

Lines changed: 8 additions & 73 deletions
@@ -6,87 +6,22 @@
 [![Coverage](https://codecov.io/gh/JuliaDecisionFocusedLearning/DecisionFocusedLearningBenchmarks.jl/branch/main/graph/badge.svg)](https://app.codecov.io/gh/JuliaDecisionFocusedLearning/DecisionFocusedLearningBenchmarks.jl)
 [![Code Style: Blue](https://img.shields.io/badge/code%20style-blue-4495d1.svg)](https://github.com/JuliaDiff/BlueStyle)
 
-> [!WARNING]
+> [!WARNING]
 > This package is currently under active development. The API may change in future releases.
-> Please refer to the [documentation](https://JuliaDecisionFocusedLearning.github.io/DecisionFocusedLearningBenchmarks.jl/stable/) for the latest updates.
 
-## What is Decision-Focused Learning?
-
-Decision-Focused Learning (DFL) is a paradigm that integrates machine learning prediction with combinatorial optimization to make better decisions under uncertainty.
-Unlike traditional "predict-then-optimize" approaches that optimize prediction accuracy independently of downstream decision quality, DFL directly optimizes end-to-end decision performance.
-
-A typical DFL algorithm involves training a parametrized policy that combines a statistical predictor with an optimization component:
-
-```math
-x \;\longrightarrow\; \boxed{\,\text{Statistical model } \varphi_w\,}
-\;\xrightarrow{\theta}\; \boxed{\,\text{CO algorithm } f\,}
-\;\longrightarrow\; y
-```
-
-Where:
-- **Statistical model** $\varphi_w$: machine learning predictor (e.g., neural network)
-- **CO algorithm** $f$: combinatorial optimization solver
-- **Instance** $x$: input data (e.g., features, context)
-- **Parameters** $\theta$: predicted parameters for the optimization problem solved by `f`
-- **Solution** $y$: output decision/solution
-
-## Package Overview
-
-**DecisionFocusedLearningBenchmarks.jl** provides a comprehensive collection of benchmark problems for evaluating decision-focused learning algorithms. The package offers:
-
-- **Standardized benchmark problems** spanning diverse application domains
-- **Common interfaces** for creating datasets, statistical models, and optimization algorithms
-- **Ready-to-use DFL policies** compatible with [InferOpt.jl](https://github.com/JuliaDecisionFocusedLearning/InferOpt.jl) and the whole [JuliaDecisionFocusedLearning](https://github.com/JuliaDecisionFocusedLearning) ecosystem
-- **Evaluation tools** for comparing algorithm performance
-
-## Benchmark Categories
-
-The package organizes benchmarks into three main categories based on their problem structure:
-
-### Static Benchmarks (`AbstractBenchmark`)
-Single-stage optimization problems with no randomness involved:
-- [`ArgmaxBenchmark`](@ref): argmax toy problem
-- [`Argmax2DBenchmark`](@ref): 2D argmax toy problem
-- [`RankingBenchmark`](@ref): ranking problem
-- [`SubsetSelectionBenchmark`](@ref): select an optimal subset of items
-- [`PortfolioOptimizationBenchmark`](@ref): portfolio optimization problem
-- [`FixedSizeShortestPathBenchmark`](@ref): shortest path on fixed-size grid graphs
-- [`WarcraftBenchmark`](@ref): shortest path on image maps
-
-### Stochastic Benchmarks (`AbstractStochasticBenchmark`)
-Single-stage optimization problems under uncertainty:
-- [`StochasticVehicleSchedulingBenchmark`](@ref): stochastic vehicle scheduling under delay uncertainty
-
-### Dynamic Benchmarks (`AbstractDynamicBenchmark`)
-Multi-stage sequential decision-making problems:
-- [`DynamicVehicleSchedulingBenchmark`](@ref): multi-stage vehicle scheduling under customer uncertainty
-- [`DynamicAssortmentBenchmark`](@ref): sequential product assortment selection with endogenous uncertainty
-
-## Getting Started
-
-In a few lines of code, you can create benchmark instances, generate datasets, initialize learning components, and evaluate performance, using the same syntax across all benchmarks:
+**DecisionFocusedLearningBenchmarks.jl** provides a collection of benchmark problems for evaluating [Decision-Focused Learning](https://JuliaDecisionFocusedLearning.github.io/DecisionFocusedLearningBenchmarks.jl/stable/) algorithms, spanning static, stochastic, and dynamic settings.
+
+Each benchmark provides a dataset generator, a statistical model architecture, and a combinatorial oracle, ready to plug into any DFL training algorithm:
 
 ```julia
 using DecisionFocusedLearningBenchmarks
 
-# Create a benchmark instance for the argmax problem
-benchmark = ArgmaxBenchmark()
-
-# Generate training data
-dataset = generate_dataset(benchmark, 100)
-
-# Initialize policy components
-model = generate_statistical_model(benchmark)
-maximizer = generate_maximizer(benchmark)
-
-# Training algorithm of your choice
-# ... your training code here ...
-
-# Evaluate performance
-gap = compute_gap(benchmark, dataset, model, maximizer)
+bench = ArgmaxBenchmark()
+dataset = generate_dataset(bench, 100)
+model = generate_statistical_model(bench)
+maximizer = generate_maximizer(bench)
 ```
 
-The only component you need to customize is the training algorithm itself.
+For the full list of benchmarks, the common interface, and detailed usage examples, refer to the [documentation](https://JuliaDecisionFocusedLearning.github.io/DecisionFocusedLearningBenchmarks.jl/stable/).
 
 ## Related Packages
 
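For context, the pipeline described in the README section removed by this commit ($x \to \varphi_w \to \theta \to f \to y$) can be sketched in a few self-contained lines of Julia. This is a toy illustration only, not part of the package API: the names `φ`, `f`, and `policy` are hypothetical, a linear map stands in for the statistical model $\varphi_w$, and a one-hot argmax solver stands in for the CO algorithm $f$.

```julia
# Toy DFL pipeline: instance x -> statistical model φ -> parameters θ
# -> CO algorithm f -> decision y. Illustration only, not the package API.

# A linear map stands in for the statistical model φ_w (assumption).
φ(w, x) = w * x

# A plain argmax returning a one-hot vector stands in for the CO algorithm f.
function f(θ)
    y = zeros(length(θ))
    y[argmax(θ)] = 1.0
    return y
end

# The parametrized policy is the composition of the two components.
policy(w, x) = f(φ(w, x))

w = [1.0 0.0; 0.0 2.0]  # model weights
x = [0.5, 1.0]          # instance features
y = policy(w, x)        # θ = [0.5, 2.0], so y == [0.0, 1.0]
```

In the package itself, `generate_statistical_model` and `generate_maximizer` supply these two components for each benchmark, so any DFL training algorithm (e.g. from InferOpt.jl) can tune the model and then be scored with `compute_gap`.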