🎉 Accepted to the ML4PS workshop @ NeurIPS 2024
Benchmark coupled ODE surrogate models on curated datasets with reproducible training, evaluation, and visualization pipelines. CODES helps you answer: Which surrogate architecture fits my data, accuracy target, and runtime budget?
- Baseline surrogates (MultiONet, FullyConnected, LatentNeuralODE, LatentPoly) with configurable hyperparameters
- Rich datasets spanning chemistry, astrophysics, and dynamical systems
- Optional studies for interpolation/extrapolation, sparse data regimes, uncertainty estimation, and batch scaling
- Automated reporting: accuracy tables, resource usage, gradient analyses, and dozens of diagnostic plots
### uv (recommended)

```bash
git clone https://github.com/AstroAI-Lab/CODES-Benchmark.git
cd CODES-Benchmark
uv sync  # creates .venv from pyproject/uv.lock
source .venv/bin/activate
uv run python run_training.py --config configs/train_eval/config_minimal.yaml
uv run python run_eval.py --config configs/train_eval/config_minimal.yaml
```

### pip alternative
```bash
git clone https://github.com/AstroAI-Lab/CODES-Benchmark.git
cd CODES-Benchmark
python -m venv .venv && source .venv/bin/activate
pip install -e .
pip install -r requirements.txt
python run_training.py --config configs/train_eval/config_minimal.yaml
python run_eval.py --config configs/train_eval/config_minimal.yaml
```

Outputs land in `trained/<training_id>`, `results/<training_id>`, and `plots/<training_id>`. The `configs/` folder contains ready-to-use templates (`train_eval/config_minimal.yaml`, `config_full.yaml`, etc.). Copy a file there and adjust datasets/surrogates/modalities before running the CLIs.
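As a rough sketch of the kind of edits involved, a trimmed config might look like the fragment below. The key names here are illustrative assumptions, not the actual CODES schema — consult `configs/train_eval/config_full.yaml` for the real reference:

```yaml
# Hypothetical config sketch — key names are assumptions, not the
# actual CODES schema; see config_full.yaml for the real options.
training_id: my_first_run   # outputs would land in trained/my_first_run, etc.
dataset:
  name: my_dataset          # placeholder; pick one of the bundled datasets
surrogates:                 # surrogate names taken from the feature list above
  - MultiONet
  - LatentNeuralODE
```

Keeping one config file per experiment makes the `trained/`, `results/`, and `plots/` output folders easy to map back to the run that produced them.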
The GitHub Pages site now hosts the narrative guides, configuration reference, and interactive notebooks alongside the generated API docs.
| Path | Purpose |
|---|---|
| `configs/` | Ready-to-edit benchmark configs (`train_eval/`, `tuning/`, etc.) |
| `datasets/` | Bundled datasets + download helper (`data_sources.yaml`) |
| `codes/` | Python package with surrogates, training, tuning, and benchmarking utilities |
| `run_training.py`, `run_eval.py`, `run_tuning.py` | CLI entry points for the main workflows |
| `docs/` | Sphinx project powering the GitHub Pages site (guides, tutorials, API reference) |
| `test/` | Unit and integration tests |
If you use CODES in your research, please cite the original paper, which was accepted at the ML4PS workshop at NeurIPS 2024:
```bibtex
@misc{janssen2024codes,
  title         = {CODES: Benchmarking Coupled ODE Surrogates},
  author        = {Janssen, Robin and Sulzer, Immanuel and Buck, Tobias},
  year          = {2024},
  eprint        = {2410.20886},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  url           = {https://arxiv.org/abs/2410.20886}
}
```

Robin Janssen, Immanuel Sulzer, Tobias Buck
CODES: Benchmarking Coupled ODE Surrogates
NeurIPS ML4PS Workshop, 2024
Initial publication of the CODES framework.
arXiv | bib
Robin Janssen, Lorenzo Branca, Tobias Buck
Systematic selection of surrogate models for nonequilibrium chemistry
Astronomy & Astrophysics, 2026
Extensive application of CODES to four challenging astrochemical datasets, demonstrating practical model selection strategies for stiff chemical ODE systems.
arXiv | bib
Contribution guidelines are documented in CONTRIBUTING.md. In short: open or pick an issue, make your changes on a branch, and submit a pull request with tests and docs updates as needed.
| Contributor | Contributions |
|---|---|
| Robin Janssen | 💻 🖋 🔣 📖 🎨 💡 🤔 🚇 🚧 🔬 👀 🔧 |
| Tobias Buck | 🤔 🚇 📆 🔬 👀 📢 🧑‍🏫 |
| Lorenzo Branca | 🔣 🤔 🔬 |
| Immanuel Sulzer | 💻 🖋 🔣 🎨 📖 🚇 📓 |