4 changes: 4 additions & 0 deletions .all-contributorsrc
@@ -0,0 +1,4 @@
{
"projectName": "CODES-Benchmark",
"projectOwner": "AstroAI-Lab"
}
97 changes: 97 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,97 @@
# Contributing to CODES Benchmark

Thanks for helping improve CODES Benchmark. Contributions of all sizes are welcome, including bug fixes, new features, documentation updates, and benchmark extensions.

## What to contribute

Common contribution types in this repository are:

- Adding a new dataset
- Adding a new surrogate model
- Adding or improving benchmark evaluations and modalities

For extension-specific implementation details, follow the dedicated guide:

- https://astroai-lab.de/CODES-Benchmark/guides/extending-benchmark.html

## Development setup

Use either `uv` (recommended) or a standard `venv` + `pip` setup.

### Option A: uv (recommended)

```bash
git clone https://github.com/robin-janssen/CODES-Benchmark.git
cd CODES-Benchmark
uv sync
source .venv/bin/activate
uv pip install --group dev
```

### Option B: venv + pip

```bash
git clone https://github.com/robin-janssen/CODES-Benchmark.git
cd CODES-Benchmark
python -m venv .venv
source .venv/bin/activate
pip install -e .
pip install -r requirements.txt
pip install black isort pytest sphinx
```

## Code style and formatting

This repository uses:

- [Black](https://black.readthedocs.io/en/stable/) for formatting
- [isort](https://pycqa.github.io/isort/) for import sorting

The CI pipeline applies both tools automatically to contributions. Running them locally is still recommended so you get fast feedback before opening or updating a pull request.

Run both locally with:

```bash
black .
isort .
```

If you use pre-commit, install hooks once and run them across the repository:

```bash
pre-commit install
pre-commit run --all-files
```

## Tests and documentation checks

Before submitting a PR, run:

```bash
pytest
sphinx-build -b html docs/source docs/_build/html
```

If your change affects user-facing behavior, update the relevant docs in `docs/source/`.
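
To preview the built documentation locally, one simple option (a suggestion, not a project requirement) is Python's built-in static file server:

```bash
# Serve the built HTML on http://localhost:8000 (any static file server works)
python -m http.server --directory docs/_build/html 8000
```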

## Standard contribution workflow

1. Create an issue (bug report, feature request, or extension proposal), or choose an open issue to work on.
2. Create a branch from `main` with a clear name (see the command sketch after this list).
3. Implement your changes and include tests/docs updates where relevant.
4. Run formatting, tests, and docs build locally.
5. Open a pull request and link the issue.
6. Address review feedback and keep your branch up to date.
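
For orientation, the steps above map roughly onto the command sequence below. The branch name and commit message are placeholders; adapt them to your change.

```bash
# Branch name and commit message are placeholders -- adapt them to your change.
git checkout main
git pull origin main
git checkout -b fix/short-descriptive-name

# ...implement your changes, then run formatting, tests, and the docs build:
black . && isort .
pytest
sphinx-build -b html docs/source docs/_build/html

git add -A
git commit -m "Fix: short description of the change"
git push -u origin fix/short-descriptive-name
# Finally, open a pull request on GitHub and link the related issue.
```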

## Pull request expectations

Please keep PRs focused and easy to review.

- Explain what changed and why.
- Link related issues.
- Include tests for behavior changes.
- Include documentation updates for new datasets, surrogates, modalities, or config options.

## Need help?

If you are unsure where to start, open an issue with your proposal or question.
40 changes: 24 additions & 16 deletions README.md
@@ -1,10 +1,13 @@
# CODES Benchmark

[![codecov](https://codecov.io/github/robin-janssen/CODES-Benchmark/branch/main/graph/badge.svg?token=TNF9ISCAJK)](https://codecov.io/github/robin-janssen/CODES-Benchmark) ![Static Badge](https://img.shields.io/badge/license-GPLv3-blue) ![Static Badge](https://img.shields.io/badge/NeurIPS-2024-green)
[![All Contributors](https://img.shields.io/github/all-contributors/AstroAI-Lab/CODES-Benchmark?color=ee8449&style=flat-square)](#contributors)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)

🎉 Accepted to the ML4PS workshop @ NeurIPS 2024

Benchmark coupled ODE surrogate models on curated datasets with reproducible training, evaluation, and visualization pipelines. CODES helps you answer: *Which surrogate architecture fits my data, accuracy target, and runtime budget?*
Benchmark coupled ODE surrogate models on curated datasets with reproducible training, evaluation, and visualization pipelines. CODES helps you answer: _Which surrogate architecture fits my data, accuracy target, and runtime budget?_

## What you get

@@ -50,23 +53,28 @@ The GitHub Pages site now hosts the narrative guides, configuration reference, a

## Repository map

| Path | Purpose |
| --- | --- |
| `configs/` | Ready-to-edit benchmark configs (`train_eval/`, `tuning/`, etc.) |
| `datasets/` | Bundled datasets + download helper (`data_sources.yaml`) |
| `codes/` | Python package with surrogates, training, tuning, and benchmarking utilities |
| `run_training.py`, `run_eval.py`, `run_tuning.py` | CLI entry points for the main workflows |
| `docs/` | Sphinx project powering the GitHub Pages site (guides, tutorials, API reference) |
| `scripts/` | Convenience tooling (dataset downloads, analysis utilities) |
| Path | Purpose |
| ------------------------------------------------- | -------------------------------------------------------------------------------- |
| `configs/` | Ready-to-edit benchmark configs (`train_eval/`, `tuning/`, etc.) |
| `datasets/` | Bundled datasets + download helper (`data_sources.yaml`) |
| `codes/` | Python package with surrogates, training, tuning, and benchmarking utilities |
| `run_training.py`, `run_eval.py`, `run_tuning.py` | CLI entry points for the main workflows |
| `docs/` | Sphinx project powering the GitHub Pages site (guides, tutorials, API reference) |
| `scripts/` | Convenience tooling (dataset downloads, analysis utilities) |
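
For orientation, the CLI entry points listed above are driven by configs from `configs/`. The exact arguments are not spelled out here, so treat the invocation below as an assumption and check each script's help text or the docs for the actual interface.

```bash
# The exact arguments are an assumption -- check the help text first:
python run_training.py --help
# ...then launch a workflow with a config from configs/, for example:
# python run_training.py --config configs/train_eval/<your_config>.yaml
```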

## Contributing

Pull requests are welcome! Please include documentation updates, add or update tests when you touch executable code, and run:
Contribution guidelines are documented in [CONTRIBUTING.md](CONTRIBUTING.md).

```bash
uv pip install --group dev
pytest
sphinx-build -b html docs/source/ docs/_build/html
```
In short: open or pick an issue, make your changes in a branch, and submit a pull request with tests/docs updates as needed.

## Contributors

<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->

If you publish a new surrogate or dataset, document it under `docs/guides` / `docs/reference` so users can adopt it quickly. For questions, open an issue on GitHub.
<!-- ALL-CONTRIBUTORS-LIST:END -->