5 changes: 5 additions & 0 deletions .github/workflows/build-docs.yml
Original file line number Diff line number Diff line change
@@ -9,12 +9,14 @@ on:
paths:
- "docs/**"
- "mkdocs.yml"
- "tests/check-docs-drift.py"
pull_request:
branches: [master]
paths:
- "docs/**"
- "mkdocs.yml"
- ".github/workflows/build-docs.yml"
- "tests/check-docs-drift.py"

jobs:
build:
@@ -33,6 +35,9 @@ jobs:
- name: Install dependencies
run: uv sync --group doc

- name: Check docs code examples
run: uv run task docs-check-drift

- name: Build docs
run: |
set -euo pipefail
22 changes: 22 additions & 0 deletions CONTRIBUTING.md
@@ -284,6 +284,28 @@ uv run --only-group doc task docs

to regenerate the html files. For local preview with live reload, run `uv run --only-group doc task docs-serve`.

#### Testing documentation code examples

Python code blocks in the docs can be checked to catch examples that have drifted out of sync with the library. Run the check with:

```sh
uv run task docs-check-drift
```

This executes every ` ```python ` fenced block found under `docs/` using [mktestdocs](https://github.com/koaning/mktestdocs). The check is driven by `tests/check-docs-drift.py`.

**Adding a new code block to the docs?** Use a ` ```python ` fence so it is picked up. If the block cannot run in CI (e.g. it requires TensorFlow, a live API, or other external dependencies), add `# skip` as the first line inside the block:

````markdown
```python
# skip
import tensorflow as tf
...
```
````

Blocks marked `# skip` are excluded from the check but still rendered normally in the documentation.
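
The filtering rule described above can be sketched as follows (a simplified, hypothetical helper, not the actual check script):

```python
def should_run(block: str) -> bool:
    """Return True when a docs code block should be executed by the drift check."""
    # Blocks whose first non-blank line is '# skip' are rendered in the docs
    # but never executed in CI.
    return not block.lstrip().startswith("# skip")
```

For example, `should_run("# skip\nimport tensorflow as tf\n")` returns `False`, while an ordinary block returns `True`.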

### Rebase your branch on master

Before creating a PR, please make sure to rebase your branch on master to avoid merge conflicts and make the review easier. You can do it with the following command:
9 changes: 5 additions & 4 deletions docs/getting-started/api.md
@@ -32,14 +32,15 @@ codecarbon monitor

Or use the API in your code:

``` python
```python
from codecarbon import track_emissions

@track_emissions(save_to_api=True)
def train_model():
# GPU intensive training code goes here
# GPU intensive training code goes here
pass

if __name__ =="__main__":
if __name__ == "__main__":
train_model()
```

@@ -59,7 +60,7 @@ You then have to set your experiment id in CodeCarbon, with two options:

In the code:

``` python
```python
from codecarbon import track_emissions

@track_emissions(
5 changes: 3 additions & 2 deletions docs/getting-started/comet.md
@@ -13,7 +13,7 @@ and more.
To get started with the Comet-CodeCarbon integration, make sure you have
comet-ml installed:

``` python
``` console
pip install comet_ml>=3.2.2
```

@@ -24,7 +24,8 @@ In the
[mnist-comet.py](https://github.com/mlco2/codecarbon/blob/master/examples/mnist-comet.py)
example file, replace the placeholder code with your API key:

``` python
```python
# skip testing this with mktestdocs - would require making comet_ml a doc dependency, which is overkill.
experiment = Experiment(api_key="YOUR API KEY")
```

15 changes: 12 additions & 3 deletions docs/getting-started/examples.md
@@ -13,7 +13,10 @@ automatically and printed at the end of the training.
But you can't get them in your code, see the Context Manager section
below for that.

``` python
```python
# skip mktestdocs testing: a) these examples require TensorFlow, a heavy
# dependency, which would slow down CI setup time, b) these snippets do quite a lot of work which
# would slow things down further.
import tensorflow as tf
from codecarbon import track_emissions

@@ -49,7 +52,10 @@ if __name__ == "__main__":
We think this is the best way to use CodeCarbon. Still only two lines of
code, and you can get the emissions in your code.

``` python
```python
# skip mktestdocs testing: a) these examples require TensorFlow, a heavy
# dependency, which would slow down CI setup time, b) these snippets do quite a lot of work which
# would slow things down further.
import tensorflow as tf

from codecarbon import EmissionsTracker
@@ -95,7 +101,10 @@ CodeCarbon scheduler is stopped. If you don't use
background after your computation code has crashed, so your program will
never finish.

``` python
```python
# skip mktestdocs testing: a) these examples require TensorFlow, a heavy
# dependency, which would slow down CI setup time, b) these snippets do quite a lot of work which
# would slow things down further.
import tensorflow as tf

from codecarbon import EmissionsTracker
33 changes: 19 additions & 14 deletions docs/getting-started/usage.md
@@ -145,7 +145,7 @@ code base, users can instantiate a `EmissionsTracker` object and pass it
as a parameter to function calls to start and stop the emissions
tracking of the compute section.

``` python
```python
from codecarbon import EmissionsTracker
tracker = EmissionsTracker()
tracker.start()
@@ -167,14 +167,16 @@ depending on the configuration, but keep running the experiment.
If you want to monitor a small piece of code, like a model inference, you
could use the task manager:

``` python
```python
from codecarbon import EmissionsTracker

try:
tracker = EmissionsTracker(project_name="bert_inference", measure_power_secs=10)
tracker.start_task("load dataset")
dataset = load_dataset("imdb", split="test")
# do some data loading
imdb_emissions = tracker.stop_task()
tracker.start_task("build model")
model = build_model()
# build some model
model_emissions = tracker.stop_task()
finally:
_ = tracker.stop()
@@ -193,11 +195,11 @@ to interfere with the task measurement.

The `Emissions tracker` also works as a context manager.

``` python
```python
from codecarbon import EmissionsTracker

with EmissionsTracker() as tracker:
# Compute intensive training code goes here
_ = 1 + 1 # Compute intensive training code goes here
```

This mode is recommended when you want to monitor a specific block of
@@ -209,12 +211,13 @@ In case the training code base is wrapped in a function, users can use
the decorator `@track_emissions` within the function to enable tracking
emissions of the training code.

``` python
```python
from codecarbon import track_emissions

@track_emissions
def training_loop():
# Compute intensive training code goes here
pass
```

This mode is recommended if you have a training function.
@@ -239,7 +242,7 @@ can be found on
Developers can use the `OfflineEmissionsTracker` object to track
emissions as follows:

``` python
```python
from codecarbon import OfflineEmissionsTracker
tracker = OfflineEmissionsTracker(country_iso_code="CAN")
tracker.start()
@@ -251,11 +254,12 @@ tracker.stop()

The `OfflineEmissionsTracker` also works as a context manager

``` python
```python
from codecarbon import OfflineEmissionsTracker

with OfflineEmissionsTracker() as tracker:
# GPU intensive training code goes here
with OfflineEmissionsTracker(country_iso_code="CAN") as tracker:
# GPU intensive training code goes here
pass
```

### Decorator
Expand Down Expand Up @@ -334,7 +338,8 @@ CodeCarbon is structured so that you can configure it in a hierarchical manner:
- script parameters will override environment variables if the same
parameter is set in both:

``` python
```python
# skip this block for mktestdocs testing because it tries to call APIs
EmissionsTracker(
api_call_interval=4,
save_to_api=True,
@@ -343,7 +348,7 @@ CodeCarbon is structured so that you can configure it in a hierarchical manner:

Yields attributes:

``` python
```python
{
"measure_power_secs": 10, # from ~/.codecarbon.config
"save_to_file": True, # from ./.codecarbon.config (override ~/.codecarbon.config)
@@ -382,7 +387,7 @@ export HTTPS_PROXY="http://0.0.0.0:0000"

Or in your Python code:

``` python
```python
import os

os.environ["HTTPS_PROXY"] = "http://0.0.0.0:0000"
6 changes: 6 additions & 0 deletions docs/introduction/power_estimation.md
@@ -22,6 +22,11 @@ Instead of relying solely on instantaneous power sensors (which might not repres
The `Power.from_energies_and_delay` method handles this operation:

```python
from codecarbon.core import units

energy_now = units.Energy(kWh=1.0)
energy_previous = units.Energy(kWh=0.5)
delay = units.Time(seconds=3600.0)
delta_energy_kwh = float(abs(energy_now.kWh - energy_previous.kWh))
power_kw = delta_energy_kwh / delay.hours
```
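
With the sample values above (1.0 kWh now, 0.5 kWh previously, over one hour), the delta works out to 0.5 kW. The same arithmetic can be sanity-checked with bare floats, without any codecarbon types:

```python
# Plain-Python restatement of the energy-delta-to-power computation above.
energy_now_kwh = 1.0
energy_previous_kwh = 0.5
delay_hours = 3600.0 / 3600.0  # one hour, expressed in hours

delta_energy_kwh = abs(energy_now_kwh - energy_previous_kwh)
power_kw = delta_energy_kwh / delay_hours
print(power_kw)  # 0.5
```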
@@ -48,6 +53,7 @@ For recording the power, a running sum is maintained:

At the end of an execution task (or when data is exported), the true average power is computed:
```python
# skip mktestdocs testing - illustrative pseudocode using internal variables
avg_gpu_power = _gpu_power_sum / _power_measurement_count
```
This smoothing process prevents singular short measurement anomalies from skewing the final aggregated power values published in `EmissionsData`.
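
A minimal runnable sketch of this running-sum smoothing (the variable names and sample values here are illustrative, not codecarbon internals):

```python
# Accumulate power samples during a task, then average at the end.
power_sum_kw = 0.0
measurement_count = 0

# Three steady readings around 0.3 kW plus one short spike at 0.95 kW.
for sample_kw in [0.30, 0.32, 0.95, 0.31]:
    power_sum_kw += sample_kw
    measurement_count += 1

avg_power_kw = power_sum_kw / measurement_count  # 0.47, not dominated by the spike
```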
2 changes: 2 additions & 0 deletions docs/logging/output.md
@@ -63,6 +63,7 @@ docker-compose up
Run your EmissionsTracker as usual, with `save_to_prometheus=True`:

```python
# skip mktestdocs
tracker = OfflineEmissionsTracker(
project_name="my_project",
country_iso_code="USA",
@@ -86,6 +87,7 @@ CodeCarbon exposes all its metrics with the suffix `codecarbon_`.
Run your EmissionsTracker as usual, with `save_to_logfire=True`:

```python
# skip mktestdocs
tracker = OfflineEmissionsTracker(
project_name="my_project",
country_iso_code="USA",
9 changes: 6 additions & 3 deletions docs/logging/to_logger.md
@@ -20,7 +20,8 @@ or cloud-based collector.

### Python logger

``` python
```python
# skip mktestdocs
import logging

# Create a dedicated logger (log name can be the CodeCarbon project name for example)
@@ -40,7 +41,8 @@ my_logger = LoggerOutput(_logger, logging.INFO)

### Google Cloud Logging

``` python
```python
# skip mktestdocs
import google.cloud.logging


@@ -61,7 +63,8 @@
Create an EmissionTracker saving output to the logger. Other save
options are still usable and valid.

``` python
```python
# skip mktestdocs
tracker = EmissionsTracker(save_to_logger=True, logging_logger=my_logger)
tracker.start()
# Your code here
4 changes: 3 additions & 1 deletion pyproject.toml
@@ -98,6 +98,8 @@ doc = [
"zensical",
"mike",
"mkdocstrings[python]>=0.26",
"mktestdocs",
"pytest",
]

[project.optional-dependencies]
@@ -143,7 +145,7 @@ test-coverage = "CODECARBON_ALLOW_MULTIPLE_RUNS=True pytest --cov --cov-report=x
test-package-integ = "CODECARBON_ALLOW_MULTIPLE_RUNS=True python -m pytest -vv tests/"
docs = "zensical build -f mkdocs.yml"
docs-serve = "zensical serve -f mkdocs.yml"
docs-check-drift = "python scripts/check-docs-drift.py"
docs-check-drift = "pytest tests/check-docs-drift.py -v"
carbonboard = "python codecarbon/viz/carbonboard.py"

[tool.bumpver]
27 changes: 27 additions & 0 deletions tests/check-docs-drift.py
@@ -0,0 +1,27 @@
"""
Check that Python code blocks in docs are runnable.

Blocks whose first line is '# skip' are intentionally excluded
(e.g. examples requiring external services or heavy dependencies).

Run with: pytest tests/check-docs-drift.py -v
"""

import os
import pathlib

import pytest
from mktestdocs import grab_code_blocks
from mktestdocs.__main__ import exec_python

# Suppress CSV output and log noise when tracker examples run in CI.
os.environ["CODECARBON_SAVE_TO_FILE"] = "false"
os.environ["CODECARBON_LOG_LEVEL"] = "error"


@pytest.mark.parametrize("fpath", pathlib.Path("docs").glob("**/*.md"), ids=str)
def test_doc_file(fpath):
text = fpath.read_text()
for block in grab_code_blocks(text, lang="python"):
if not block.lstrip().startswith("# skip"):
exec_python(block)