Merged
2 changes: 1 addition & 1 deletion LICENSE
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
MIT License

Copyright (c) 2025 JuliaDecisionFocusedLearning
Copyright (c) 2025 Guillaume Dalle

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,4 +1,4 @@
name = "MathProgBenchmarks"
name = "MathOptBenchmarkInstances"
uuid = "f7f8d0a1-fd34-491e-a7ac-a4cf52f91fe5"
version = "0.1.0"
authors = ["Guillaume Dalle"]
41 changes: 32 additions & 9 deletions README.md
@@ -1,27 +1,50 @@
# MathProgBenchmarks.jl
# MathOptBenchmarkInstances.jl

A Julia package for automatic download and parsing of linear, quadratic and integer programming instances.

Supported datasets:
## Datasets

Please read and abide by the license of the dataset you plan to use.

### LP

- [x] [Netlib](https://www.netlib.org/lp/data/index.html)
- [x] [MIPLIB 2017](https://miplib.zib.de/index.html)
- [x] [Mittelmann LP benchmark](https://plato.asu.edu/ftp/lptestset/)

### MILP

- [x] [MIPLIB 2017](https://miplib.zib.de/index.html)

### QP

- [ ] [QPLIB](https://qplib.zib.de/)
- [ ] [Maros-Meszaros](https://www.doc.ic.ac.uk/~im/#DATA)

## Getting started

1. To see which instances are available, call `list_instances(dataset)` with `dataset in (Netlib, MIPLIB2017, MittelmannLP)`.
2. To read a specific instance, call `read_instance(dataset, name)` where `name isa String`.
3. The returned problem format is `QPSData` from [QPSReader.jl](https://github.com/JuliaSmoothOptimizers/QPSReader.jl).
1. To list the datasets available, call `values(Dataset)`.
2. To list the instances from a `dataset`, call `list_instances(dataset)`.
3. To read a specific instance given its `name`, call `read_instance(dataset, name)`. The return value is a tuple `(problem, path)` where `problem isa QPSData` from [QPSReader.jl](https://github.com/JuliaSmoothOptimizers/QPSReader.jl) and `path` points to the decompressed source file on your computer.
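
The steps above can be sketched as follows (a minimal example; the first call triggers a download, and the exact instance names depend on the dataset):

```julia
using MathOptBenchmarkInstances

# Step 2: list the instances of a dataset (names are `String`s).
dataset = Netlib
names = list_instances(dataset)

# Step 3: read one instance. `problem` is a `QPSData` from QPSReader.jl,
# and `path` points to the decompressed source file on disk.
problem, path = read_instance(dataset, first(names))
```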

More details are available in the docstrings.
See the docstrings for details.

## Tips

The problem source files are downloaded automatically thanks to [DataDeps.jl](https://github.com/oxinabox/DataDeps.jl).
Note that each download has to be validated manually from the REPL.
This doesn't work well when the triggering line of code is executed with VSCode's Julia extension, better run it in the REPL directly.
This doesn't work well when the triggering line of code is executed through VSCode's Julia extension, so it is better to run it in the REPL directly.
An alternative is to set `ENV["DATADEPS_ALWAYS_ACCEPT"] = true`.
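
In scripts or CI, where the interactive prompt is unavailable, the variable can be set before the first download is triggered (a sketch; `"afiro"` is a classic Netlib instance, used here for illustration):

```julia
using MathOptBenchmarkInstances

# Accept all DataDeps download prompts up front:
ENV["DATADEPS_ALWAYS_ACCEPT"] = true

# The first read of an instance now downloads its dataset without asking.
problem, path = read_instance(Netlib, "afiro")
```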

The decompressed instances can be rather large (over 80 GB for the complete MIPLIB 2017 collection).
If you need to clean up some space, you can delete unneeded files inside the folder located at `MathProgBenchmarks.MPS_SCRATCH`.
If you need to clean up some space, you can delete unneeded files inside the folder located at `MathOptBenchmarkInstances.MPS_SCRATCH`.
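
As a sketch, the scratch folder can be inspected and pruned from the REPL (paths and sizes vary by machine; the file name in the final comment is hypothetical):

```julia
using MathOptBenchmarkInstances

dir = MathOptBenchmarkInstances.MPS_SCRATCH
for file in readdir(dir; join = true)
    println(file, "  (", filesize(file), " bytes)")
end
# Delete any file you no longer need, e.g.:
# rm(joinpath(dir, "some_instance.mps"))
```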

## Contributing

To contribute a new dataset:

1. Add its name to the `Dataset` enum.
2. Register a new `DataDep` inside the `__init__()` function of the package.
3. Implement a reader based on the files downloaded by the `DataDep`. This part might need decompression or file conversion steps, for which you can use the `MPS_SCRATCH` folder.
4. Write documentation and tests.

You can (and should) draw inspiration from the implementation of existing datasets.
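
Step 2 might look roughly like this (a hypothetical sketch of the DataDeps.jl API; the dataset name, URL, and description are placeholders):

```julia
using DataDeps

function __init__()
    register(DataDep(
        "MyDataset",  # hypothetical name, matching the new `Dataset` enum value
        "Description, citation and license terms of MyDataset",
        "https://example.com/mydataset.tar.gz";
        post_fetch_method = unpack,  # decompress the archive after download
    ))
end
```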
@@ -1,4 +1,4 @@
module MathProgBenchmarks
module MathOptBenchmarkInstances

import CodecBzip2
using DataDeps
@@ -17,4 +17,4 @@ export Dataset, MIPLIB2017, Netlib, MittelmannLP
export read_instance
export list_instances

end # module MathProgBenchmarks
end # module MathOptBenchmarkInstances
5 changes: 4 additions & 1 deletion test/Project.toml
@@ -1,5 +1,8 @@
[deps]
Aqua = "4c88cf16-eb10-579e-8560-4a9242c79595"
MathProgBenchmarks = "f7f8d0a1-fd34-491e-a7ac-a4cf52f91fe5"
MathOptBenchmarkInstances = "f7f8d0a1-fd34-491e-a7ac-a4cf52f91fe5"
QPSReader = "10f199a5-22af-520b-b891-7ce84a7b1bd0"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[sources]
MathOptBenchmarkInstances = {path = ".."}
4 changes: 2 additions & 2 deletions test/miplib2017.jl
@@ -1,11 +1,11 @@
using MathProgBenchmarks
using MathOptBenchmarkInstances
using QPSReader
using Test

miplib_list = list_instances(MIPLIB2017)
@test length(miplib_list) == 1065

miplib_benchmark_list = MathProgBenchmarks.list_miplib2017_instances(; benchmark_only = true)
miplib_benchmark_list = MathOptBenchmarkInstances.list_miplib2017_instances(; benchmark_only = true)
@test length(miplib_benchmark_list) == 240

for name in miplib_benchmark_list[1:10]
2 changes: 1 addition & 1 deletion test/mittelmann.jl
@@ -1,4 +1,4 @@
using MathProgBenchmarks
using MathOptBenchmarkInstances
using QPSReader
using Test

2 changes: 1 addition & 1 deletion test/netlib.jl
@@ -1,4 +1,4 @@
using MathProgBenchmarks
using MathOptBenchmarkInstances
using QPSReader
using Test

6 changes: 3 additions & 3 deletions test/runtests.jl
@@ -1,12 +1,12 @@
using Aqua
using MathProgBenchmarks
using MathOptBenchmarkInstances
using Test

ENV["DATADEPS_ALWAYS_ACCEPT"] = true

@testset verbose = true "MathProgBenchmarks" begin
@testset verbose = true "MathOptBenchmarkInstances" begin
@testset "Code quality" begin
Aqua.test_all(MathProgBenchmarks; undocumented_names = true)
Aqua.test_all(MathOptBenchmarkInstances; undocumented_names = true)
end
@testset "Netlib" begin
include("netlib.jl")