Merged
2 changes: 1 addition & 1 deletion Project.toml
Original file line number Diff line number Diff line change
@@ -1,7 +1,7 @@
name = "DecisionFocusedLearningBenchmarks"
uuid = "2fbe496a-299b-4c81-bab5-c44dfc55cf20"
authors = ["Members of JuliaDecisionFocusedLearning"]
version = "0.4.0"
authors = ["Members of JuliaDecisionFocusedLearning"]

[workspace]
projects = ["docs", "test"]
12 changes: 12 additions & 0 deletions docs/src/api.md
@@ -72,6 +72,18 @@ Modules = [DecisionFocusedLearningBenchmarks.FixedSizeShortestPath]
Public = false
```

## Maintenance

```@autodocs
Modules = [DecisionFocusedLearningBenchmarks.Maintenance]
Private = false
```

```@autodocs
Modules = [DecisionFocusedLearningBenchmarks.Maintenance]
Public = false
```

## Portfolio Optimization

```@autodocs
107 changes: 107 additions & 0 deletions docs/src/benchmarks/maintenance.md
@@ -0,0 +1,107 @@
# Maintenance problem with resource constraint

The maintenance problem with resource constraint is a sequential decision-making benchmark in which an agent must repeatedly decide which components to maintain over time. The goal is to minimize the total expected cost while accounting for independent component degradation and limited maintenance capacity.


## Problem Description

### Overview

In this benchmark, a system consists of $N$ identical components, each of which degrades over $n$ discrete states. State $1$ means the component is new; state $n$ means it has failed. At each time step, the agent can maintain up to $K$ components.

This forms an endogenous multistage stochastic optimization problem, where the agent must plan maintenance actions over the horizon.

### Mathematical Formulation

The maintenance problem can be formulated as a finite-horizon Markov Decision Process (MDP) with the following components:

**State Space** $\mathcal{S}$: At time step $t$, the state $s_t \in \{1, \ldots, n\}^N$ collects the degradation state of each component.

**Action Space** $\mathcal{A}$: The action at time $t$ is the set of components that are maintained at time $t$:
```math
a_t \subseteq \{1, 2, \ldots, N\} \text{ such that } |a_t| \leq K
```

### Transition Dynamics

The state transitions depend on whether a component is maintained or not.

For each component $i$ at time $t$:

- **Maintained component** ($i \in a_t$):

```math
s_{t+1}^i = 1 \quad \text{(perfect maintenance)}
```

- **Unmaintained component** ($i \notin a_t$):

```math
s_{t+1}^i =
\begin{cases}
\min(s_t^i + 1, n) & \text{with probability } p,\\
s_t^i & \text{with probability } 1-p.
\end{cases}
```

Here, $p$ is the degradation probability, $s_t^i$ is the current state of component $i$, and $n$ is the maximum (failed) state.
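A minimal Julia sketch of this per-component update (the helper name `step_component` is illustrative, not part of the package):

```julia
using Random: AbstractRNG, MersenneTwister

# One-step transition for a single component.
# s ∈ 1:n is the current state; `maintained` indicates i ∈ aₜ;
# p is the degradation probability; n is the failed state.
function step_component(rng::AbstractRNG, s::Int, maintained::Bool, p::Float64, n::Int)
    maintained && return 1                    # perfect maintenance: back to new
    return rand(rng) < p ? min(s + 1, n) : s  # degrade one level with probability p
end
```

With `p = 0`, an unmaintained component never degrades; with `p = 1`, it degrades by one level every step until it reaches the failed state `n`.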

---

### Cost Function

The immediate cost at time $t$ is:

```math
c(s_t, a_t) = c_m \cdot |a_t| + c_f \cdot \#\{ i : s_t^i = n \}
```

where:

- $c_m$ is the maintenance cost per component.
- $|a_t|$ is the number of components maintained.
- $c_f$ is the failure cost per failed component.
- $\#\{ i : s_t^i = n \}$ counts the number of components in the failed state.

This formulation combines the cost of maintenance actions with the penalty for failed components.
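In code, the immediate cost reduces to a one-liner (a sketch with hypothetical names, mirroring the formula above):

```julia
# c(sₜ, aₜ) = c_m·|aₜ| + c_f·#{i : sₜⁱ = n}
immediate_cost(s::Vector{Int}, a, c_m, c_f, n) =
    c_m * length(a) + c_f * count(==(n), s)
```

For instance, with the default costs `c_m = 3.0` and `c_f = 10.0`, maintaining one component while two components are failed costs `3.0 + 2 × 10.0 = 23.0`.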

**Objective**: Find a policy $\pi: \mathcal{S} \to \mathcal{A}$ that minimizes the expected cumulative cost:
```math
\min_\pi \mathbb{E}\left[\sum_{t=1}^T c(s_t, \pi(s_t)) \right]
```

**Terminal Condition**: The episode terminates after $T$ time steps, with no terminal reward.

## Key Components

### [`MaintenanceBenchmark`](@ref)

The main benchmark configuration with the following parameters:

- `N`: number of components (default: 2)
- `K`: maximum number of components that can be maintained simultaneously (default: 1)
- `n`: number of degradation states per component (default: 3)
- `p`: degradation probability (default: 0.2)
- `c_f`: failure cost (default: 10.0)
- `c_m`: maintenance cost (default: 3.0)
- `max_steps`: number of time steps per episode (default: 80)

### Instance Generation

Each problem instance includes:

- **Starting State**: a random starting degradation state in $\{1, \ldots, n\}$ for each component.

### Environment Dynamics

The environment tracks:
- Current time step
- Current degradation state

**State Observation**: Agents observe a normalized feature vector containing the degradation state of each component.
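With states in $\{1, \ldots, n\}$, one simple normalization (illustrative only; the package's exact feature scaling may differ) is:

```julia
# Map each component's degradation state from 1:n into (0, 1].
observe(s::Vector{Int}, n::Int) = s ./ n
```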

## Benchmark Policies

### Greedy Policy

A greedy policy that maintains components in the last two degradation states, up to the maintenance capacity. This provides a simple baseline.
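A self-contained sketch of this baseline, together with a Monte Carlo rollout of its cumulative cost (hypothetical helper names; the package's own `greedy_policy` may differ in detail):

```julia
using Random: AbstractRNG, MersenneTwister

# Maintain up to K components among those in the last two
# degradation states (n-1 or n), most degraded first.
function greedy_action(s::Vector{Int}, K::Int, n::Int)
    candidates = [i for i in eachindex(s) if s[i] >= n - 1]
    sort!(candidates; by=i -> -s[i])
    return candidates[1:min(K, length(candidates))]
end

# One rollout of the greedy policy over T steps.
function rollout(rng::AbstractRNG, N, K, n, p, c_f, c_m, T)
    s = ones(Int, N)                      # all components start new
    total = 0.0
    for _ in 1:T
        a = greedy_action(s, K, n)
        total += c_m * length(a) + c_f * count(==(n), s)
        for i in 1:N
            if i in a
                s[i] = 1                  # perfect maintenance
            elseif rand(rng) < p
                s[i] = min(s[i] + 1, n)   # random degradation
            end
        end
    end
    return total
end
```

Averaging `rollout` over many independent seeds gives an estimate of the expected cumulative cost of the greedy baseline.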

1 change: 1 addition & 0 deletions docs/src/index.md
@@ -61,6 +61,7 @@ Single-stage optimization problems under uncertainty:
Multi-stage sequential decision-making problems:
- [`DynamicVehicleSchedulingBenchmark`](@ref): multi-stage vehicle scheduling under customer uncertainty
- [`DynamicAssortmentBenchmark`](@ref): sequential product assortment selection with endogenous uncertainty
- [`MaintenanceBenchmark`](@ref): maintenance problem with resource constraint

## Getting Started

3 changes: 3 additions & 0 deletions src/DecisionFocusedLearningBenchmarks.jl
@@ -57,6 +57,7 @@ include("PortfolioOptimization/PortfolioOptimization.jl")
include("StochasticVehicleScheduling/StochasticVehicleScheduling.jl")
include("DynamicVehicleScheduling/DynamicVehicleScheduling.jl")
include("DynamicAssortment/DynamicAssortment.jl")
include("Maintenance/Maintenance.jl")

using .Utils

@@ -89,6 +90,7 @@ using .PortfolioOptimization
using .StochasticVehicleScheduling
using .DynamicVehicleScheduling
using .DynamicAssortment
using .Maintenance

export Argmax2DBenchmark
export ArgmaxBenchmark
@@ -100,5 +102,6 @@ export RankingBenchmark
export StochasticVehicleSchedulingBenchmark
export SubsetSelectionBenchmark
export WarcraftBenchmark
export MaintenanceBenchmark

end # module DecisionFocusedLearningBenchmarks
25 changes: 17 additions & 8 deletions src/DynamicAssortment/environment.jl
@@ -7,7 +7,7 @@ Environment for the dynamic assortment problem.
$TYPEDFIELDS
"""
@kwdef mutable struct Environment{I<:Instance,R<:AbstractRNG,S<:Union{Nothing,Int}} <:
Utils.AbstractEnvironment
AbstractEnvironment
"associated instance"
instance::I
"current step"
@@ -197,16 +197,25 @@ Features observed by the agent at current step, as a concatenation of:
- change in hype and saturation features from the starting state
- normalized current step (divided by max steps and multiplied by 10)
All features are normalized by dividing by 10.

State
Return as a tuple:
- `env.features`: the current feature matrix (feature vector for all items).
- `env.purchase_history`: the purchase history over the most recent steps.
"""
function Utils.observe(env::Environment)
delta_features = env.features[2:3, :] .- env.instance.starting_hype_and_saturation
return vcat(
env.features,
env.d_features,
delta_features,
ones(1, item_count(env)) .* (env.step / max_steps(env) * 10),
) ./ 10,
nothing
features =
vcat(
env.features,
env.d_features,
delta_features,
ones(1, item_count(env)) .* (env.step / max_steps(env) * 10),
) ./ 10

state = (copy(env.features), copy(env.purchase_history))

return features, state
end

"""
144 changes: 144 additions & 0 deletions src/Maintenance/Maintenance.jl
@@ -0,0 +1,144 @@
module Maintenance

using ..Utils

using DocStringExtensions: TYPEDEF, TYPEDFIELDS, TYPEDSIGNATURES, SIGNATURES
using Distributions: Uniform, Categorical
using Flux: Chain, Dense
using LinearAlgebra: dot
using Random: Random, AbstractRNG, MersenneTwister
using Statistics: mean

using Combinatorics: combinations

"""
$TYPEDEF

Benchmark for a standard maintenance problem with resource constraints.
Components are identical and degrade independently over time.
A high cost is incurred for each component that reaches the final degradation level.
A cost is also incurred for maintaining a component.
The number of simultaneous maintenance operations is limited by a maintenance capacity constraint.

# Fields
$TYPEDFIELDS

"""
struct MaintenanceBenchmark <: AbstractDynamicBenchmark{true}
"number of components"
N::Int
"maximum number of components that can be maintained simultaneously"
K::Int
"number of degradation states per component"
n::Int
"degradation probability"
p::Float64
"failure cost"
c_f::Float64
"maintenance cost"
c_m::Float64
"number of steps per episode"
max_steps::Int

function MaintenanceBenchmark(N, K, n, p, c_f, c_m, max_steps)
@assert K <= N "number of maintained components $K > number of components $N"
        @assert K >= 0 && N >= 0 "numbers of components and maintained components should be nonnegative"
@assert 0 <= p <= 1 "degradation probability $p is not in [0, 1]"
return new(N, K, n, p, c_f, c_m, max_steps)
end
end

"""
MaintenanceBenchmark(;
N=2,
K=1,
n=3,
p=0.2,
c_f=10.0,
c_m=3.0,
max_steps=80,
)

Constructor for [`MaintenanceBenchmark`](@ref).
By default, the benchmark has 2 components, maintenance capacity 1, number of degradation levels 3,
degradation probability 0.2, failure cost 10.0, maintenance cost 3.0, 80 steps per episode, and is exogenous.
"""
function MaintenanceBenchmark(; N=2, K=1, n=3, p=0.2, c_f=10.0, c_m=3.0, max_steps=80)
return MaintenanceBenchmark(N, K, n, p, c_f, c_m, max_steps)
end

# Accessor functions
component_count(b::MaintenanceBenchmark) = b.N
maintenance_capacity(b::MaintenanceBenchmark) = b.K
degradation_levels(b::MaintenanceBenchmark) = b.n
degradation_probability(b::MaintenanceBenchmark) = b.p
failure_cost(b::MaintenanceBenchmark) = b.c_f
maintenance_cost(b::MaintenanceBenchmark) = b.c_m
max_steps(b::MaintenanceBenchmark) = b.max_steps

include("instance.jl")
include("environment.jl")
include("policies.jl")
include("maximizer.jl")

"""
$TYPEDSIGNATURES

Outputs a data sample containing an [`Instance`](@ref).
"""
function Utils.generate_sample(b::MaintenanceBenchmark, rng::AbstractRNG)
return DataSample(; instance=Instance(b, rng))
end

"""
$TYPEDSIGNATURES

Generates a statistical model for the maintenance benchmark.
The model is a small neural network with one hidden layer and no activation function.
"""
function Utils.generate_statistical_model(b::MaintenanceBenchmark; seed=nothing)
Random.seed!(seed)
N = component_count(b)
return Chain(Dense(N => N), Dense(N => N), vec)
end

"""
$TYPEDSIGNATURES

Outputs a top-k maximizer, where k is the maintenance capacity of the benchmark.
"""
function Utils.generate_maximizer(b::MaintenanceBenchmark)
return TopKPositiveMaximizer(maintenance_capacity(b))
end

"""
$TYPEDSIGNATURES

Creates an [`Environment`](@ref) from an [`Instance`](@ref) of the maintenance benchmark.
The seed of the environment is randomly generated using the provided random number generator.
"""
function Utils.generate_environment(
::MaintenanceBenchmark, instance::Instance, rng::AbstractRNG; kwargs...
)
seed = rand(rng, 1:typemax(Int))
return Environment(instance; seed)
end

"""
$TYPEDSIGNATURES

Returns a single policy for the maintenance benchmark:
- `Greedy`: maintains components when they are in the last state before failure, up to the maintenance capacity
"""
function Utils.generate_policies(::MaintenanceBenchmark)
greedy = Policy(
"Greedy",
"policy that maintains components when they are in the last state before failure, up to the maintenance capacity",
greedy_policy,
)
return (greedy,)
end

export MaintenanceBenchmark

end