
Commit 1cc6726

committed
Move documentation
1 parent f7e61c0 commit 1cc6726

File tree

2 files changed (+3, -106 lines)


DATA/common/README.md

Lines changed: 1 addition & 51 deletions
@@ -1,51 +1 @@
-The `setenv.sh` script sets the following environment options:
-* `NTIMEFRAMES`: Number of time frames to process.
-* `TFDELAY`: Delay in seconds between publishing time frames (1 / rate).
-* `NGPUS`: Number of GPUs to use, data distributed round-robin.
-* `GPUTYPE`: GPU tracking backend to use, can be CPU / CUDA / HIP / OCL / OCL2.
-* `SHMSIZE`: Size of the global shared memory segment.
-* `DDSHMSIZE`: Size of the shared memory unmanaged region for DataDistribution input.
-* `GPUMEMSIZE`: Size of allocated GPU memory (if `GPUTYPE != CPU`).
-* `HOSTMEMSIZE`: Size of allocated host memory for GPU reconstruction (0 = default).
-  * For `GPUTYPE = CPU`: TPC tracking scratch memory size. (Default 0 -> dynamic allocation.)
-  * Otherwise: Size of page-locked host memory for GPU processing. (Default 0 -> 1 GB.)
-* `CREATECTFDICT`: Create the CTF dictionary.
-* `SAVECTF`: Save the CTF to a ROOT file.
-  * 0: Read `ctf_dictionary.root` as input.
-  * 1: Create `ctf_dictionary.root`. Note that this is already done automatically if the raw data was simulated with `full_system_test.sh`.
-* `SYNCMODE`: Run only the reconstruction steps of the synchronous reconstruction.
-  * 0: Runs all reconstruction steps, of sync and of async reconstruction, using raw data input.
-  * 1: Runs only the steps of synchronous reconstruction, using raw data input.
-  * Note that there is no `ASYNCMODE`; instead, the `CTFINPUT` option already enforces asynchronous processing.
-* `NUMAGPUIDS`: NUMAID-aware GPU id selection. Needed for the full EPN configuration with 8 GPUs, 2 NUMA domains, and 4 GPUs per domain.
-  In this configuration, 2 instances of `dpl-workflow.sh` must run in parallel.
-  To be used in combination with `NUMAID` to select the id per workflow.
-  `start_tmux.sh` will set up these variables automatically.
-* `NUMAID`: SHM segment id to use for shipping data, as well as the set of GPUs to use (use `0` / `1` for 2 NUMA domains; 0 = GPUs `0` to `NGPUS - 1`, 1 = GPUs `NGPUS` to `2 * NGPUS - 1`).
-* `EXTINPUT`: Receive input from a raw FairMQ channel instead of running o2-raw-file-reader.
-  * 0: `dpl-workflow.sh` can run as a standalone benchmark and will read the input itself.
-  * 1: To be used in combination with `datadistribution.sh`, `raw-reader.sh`, or another DataDistribution instance.
-* `CTFINPUT`: Read input from a CTF ROOT file. This option is incompatible with `EXTINPUT=1`. The CTF ROOT file can be stored via `SAVECTF=1`.
-* `NHBPERTF`: Time frame length (in HBFs).
-* `GLOBALDPLOPT`: Global DPL workflow options appended to o2-dpl-run.
-* `EPNPIPELINES`: Set default EPN pipeline multiplicities.
-  Normally the workflow will start 1 DPL device per processor.
-  For some of the CPU parts this is insufficient to keep pace with the GPU processing rate, e.g. one ITS-TPC matcher on the CPU is slower than the TPC tracking on multiple GPUs.
-  This option adds some multiplicities for CPU processes using DPL's pipeline feature.
-  The settings were tuned for EPN processing with 4 GPUs (i.e. the default multiplicities are per NUMA domain).
-  The multiplicities are scaled with the `NGPUS` setting, i.e. with 1 GPU only 1/4th are applied.
-  You can pass a value different from 1, and it will then be applied as a factor on top of the multiplicities.
-  It is auto-selected by `start_tmux.sh`.
-* `SEVERITY`: Log verbosity (e.g. info or error; default: info).
-* `INFOLOGGER_SEVERITY`: Minimum severity for messages sent to InfoLogger (default: `$SEVERITY`).
-* `SHMTHROW`: Throw an exception when running out of SHM memory.
-  It is suggested to leave this enabled (default) for tests on a laptop, to get an actual error when memory runs out.
-  It is disabled in `start_tmux.sh`, to avoid breaking the processing while there is a chance that another process frees memory and processing can continue.
-* `NORATELOG`: Disable FairMQ rate logging.
-* `INRAWCHANNAME`: FairMQ channel name used by the raw proxy; must match the name used by DataDistribution.
-* `WORKFLOWMODE`: `run` (run the workflow, default), `print` (print the command to stdout), or `dds` (create a partial DDS topology).
-* `FILEWORKDIR`: Directory for all input / output files. E.g. GRP / geometry / dictionaries etc. are read from here, and dictionaries / CTFs etc. are written there.
-  Some files have more fine-grained control via other environment variables (e.g. to store the CTF somewhere else). Such variables are initialized to `$FILEWORKDIR` by default but can be overridden.
-* `EPNSYNCMODE`: Specify that this is a workflow running on the EPN for synchronous processing, e.g. logging goes to InfoLogger, DPL metrics go to the AliECS monitoring, etc.
-* `BEAMTYPE`: Beam type; must be PbPb, pp, pPb, cosmic, or technical.
-* `IS_SIMULATED_DATA`: 1 for MC data, 0 for RAW data.
+For a reference to available env-variables, please check https://github.com/AliceO2Group/AliceO2/blob/dev/prodtests/full-system-test/documentation/env-variables.md
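The removed variables are plain bash environment variables consumed by the full-system-test scripts. A minimal sketch of exporting a few of them before launching the workflow; the default values and the invocation line are illustrative assumptions, not taken from the scripts:

```shell
# Illustrative sketch only: variable names come from the removed README,
# but these default values and the invocation below are hypothetical.
export NTIMEFRAMES=${NTIMEFRAMES:-10}  # number of time frames to process
export NGPUS=${NGPUS:-1}               # GPUs to use, data distributed round-robin
export GPUTYPE=${GPUTYPE:-CPU}         # CPU / CUDA / HIP / OCL / OCL2
export SYNCMODE=${SYNCMODE:-1}         # run only the synchronous reconstruction steps

echo "would run: NTIMEFRAMES=$NTIMEFRAMES NGPUS=$NGPUS GPUTYPE=$GPUTYPE ./dpl-workflow.sh"
```

The `${VAR:-default}` pattern lets a caller override any of these from the environment before sourcing.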

DATA/production/README.md

Lines changed: 2 additions & 55 deletions
@@ -8,61 +8,8 @@ Standalone calibration workflows are contained in `standalone-calibration.desc`.
 
 If processing is to be disabled, please use the `no-processing` workflow in `no-processing.desc`.
 
-# Configuration options
-You can use the following options to change the workflow behavior:
-- `DDMODE` (default `processing`): Must be `processing` (synchronous processing) or `processing-disk` (synchronous processing + storing of raw time frames to disk; note that this is the raw time frame, not the CTF!). The `DDMODE` settings `discard` and `disk` are not compatible with the synchronous processing workflow; you must use the `no-processing.desc` workflow instead.
-- `WORKFLOW_DETECTORS` (default `ALL`): Comma-separated list of detectors for which the processing is enabled. If fewer detectors are listed than participate in the run, data of the other detectors is ignored. If more detectors are listed than participate in the run, the processes for the additional detectors will be started but will not do anything.
-- `WORKFLOW_DETECTORS_QC` (default `ALL`): Comma-separated list of detectors for which to run QC; can be a subset of `WORKFLOW_DETECTORS` (for standalone detector QC) and `WORKFLOW_DETECTORS_MATCHING` (for matching/vertexing QC). If a detector (matching/vertexing step) is not listed in `WORKFLOW_DETECTORS` (`WORKFLOW_DETECTORS_MATCHING`), the QC is automatically disabled for that detector. Only active if `WORKFLOW_PARAMETERS=QC` is set.
-- `WORKFLOW_DETECTORS_CALIB` (default `ALL`): Comma-separated list of detectors for which to run calibration; can be a subset of `WORKFLOW_DETECTORS`. If a detector is not listed in `WORKFLOW_DETECTORS`, the calibration is automatically disabled for that detector. Only active if `WORKFLOW_PARAMETERS=CALIB` is set.
-- `WORKFLOW_DETECTORS_FLP_PROCESSING` (default `TOF` for sync processing on the EPN, `NONE` otherwise): Signals that these detectors have processing on the FLP enabled. The corresponding steps are thus inactive in the EPN dpl-workflow, and the raw-proxy is configured to receive the FLP-processed data instead of the raw data in that case.
-- `WORKFLOW_DETECTORS_RECO` (default `ALL`): Comma-separated list of detectors for which to run reconstruction.
-- `WORKFLOW_DETECTORS_CTF` (default `ALL`): Comma-separated list of detectors to include in the CTF.
-- `WORKFLOW_DETECTORS_MATCHING` (default selected corresponding to the default workflow for sync or async mode, respectively): Comma-separated list of matching / vertexing algorithms to run. Use `ALL` to enable all of them. Currently supported options (see `LIST_OF_GLORECO` in `common/setenv.sh`): `ITSTPC`, `TPCTRD`, `ITSTPCTRD`, `TPCTOF`, `ITSTPCTOF`, `MFTMCH`, `PRIMVTX`, `SECVTX`.
-- `WORKFLOW_EXTRA_PROCESSING_STEPS`: Enable additional processing steps not in the preset for the SYNC / ASYNC mode. Possible values are: `MID_RECO`, `MCH_RECO`, `MFT_RECO`, `FDD_RECO`, `FV0_RECO`, `ZDC_RECO`, `ENTROPY_ENCODER`, `MATCH_ITSTPC`, `MATCH_TPCTRD`, `MATCH_ITSTPCTRD`, `MATCH_TPCTOF`, `MATCH_ITSTPCTOF`, `MATCH_MFTMCH`, `MATCH_PRIMVTX`, `MATCH_SECVTX`. (Here `_RECO` means full async reconstruction, and can be used to enable it also in sync mode.)
-- `WORKFLOW_PARAMETERS` (default `NONE`): Comma-separated list; enables additional features of the workflow. Currently the following features are available:
-  - `GPU`: Performs the TPC processing on the GPU; otherwise everything is processed on the CPU.
-  - `CTF`: Write the CTF to disk (CTF creation is always enabled, but if this parameter is missing, it is not stored).
-  - `EVENT_DISPLAY`: Enable JSON export for the event display.
-  - `QC`: Enable QC.
-  - `CALIB`: Enable calibration (not yet working!)
-- `RECO_NUM_NODES_OVERRIDE` (default `0`): Overrides the number of EPN nodes used for the reconstruction (`0` or empty means default).
-- `MULTIPLICITY_FACTOR_RAWDECODERS` (default `1`): Scales the number of parallel processes used for raw decoding by this factor.
-- `MULTIPLICITY_FACTOR_CTFENCODERS` (default `1`): Scales the number of parallel processes used for CTF encoding by this factor.
-- `MULTIPLICITY_FACTOR_REST` (default `1`): Scales the number of other reconstruction processes by this factor.
-- `QC_JSON_EXTRA` (default `NONE`): Extra QC JSONs to add (if they do not fit those defined via `WORKFLOW_DETECTORS_QC` & (`WORKFLOW_DETECTORS` | `WORKFLOW_DETECTORS_MATCHING`)).
-Most of these settings are configurable in the AliECS GUI. But some of the uncommon settings (`WORKFLOW_DETECTORS_FLP_PROCESSING`, `WORKFLOW_DETECTORS_CTF`, `WORKFLOW_DETECTORS_RECO`, `WORKFLOW_DETECTORS_MATCHING`, `WORKFLOW_EXTRA_PROCESSING_STEPS`, advanced `MULTIPLICITY_FACTOR` settings) can only be set via the "Additional environment variables" field in the GUI, using bash syntax, e.g. `WORKFLOW_DETECTORS_FLP_PROCESSING=TPC`.
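As a concrete illustration of the bash syntax mentioned for the "Additional environment variables" field, the detector and parameter selections are plain variable assignments. The particular subset chosen here is arbitrary:

```shell
# Illustrative values; any comma-separated detector subset works the same way.
export WORKFLOW_DETECTORS=ITS,TPC,TOF   # enable processing for these detectors only
export WORKFLOW_DETECTORS_QC=TPC        # QC for TPC only (needs WORKFLOW_PARAMETERS=QC)
export WORKFLOW_PARAMETERS=GPU,CTF,QC   # TPC on GPU, store the CTF, enable QC

echo "detectors=$WORKFLOW_DETECTORS parameters=$WORKFLOW_PARAMETERS"
```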
-
-# Process multiplicity factors
-- The production workflow has an internal default for how many instances of a process to run in parallel (tuned for Pb-Pb processing).
-- Some critical processes for synchronous pp processing are automatically scaled by the inverse of the number of nodes, i.e. the multiplicity is increased by a factor of 2 if 125 instead of 250 nodes are used, to enable the processing using only a subset of the nodes.
-- Factors can be provided externally to scale the multiplicity of processes further. All these factors are multiplied.
-- One factor can be provided based on the type of the process: raw decoder (`MULTIPLICITY_FACTOR_RAWDECODERS`), CTF encoder (`MULTIPLICITY_FACTOR_CTFENCODERS`), or other reconstruction process (`MULTIPLICITY_FACTOR_REST`).
-- One factor can be provided per detector via `MULTIPLICITY_FACTOR_DETECTOR_[DET]`, using the 3-character detector representation, or `MATCH` for the global matching and vertexing workflows.
-- One factor can be provided per process via `MULTIPLICITY_FACTOR_PROCESS_[PROCESS_NAME]`. In the process name, dashes `-` must be replaced by underscores `_`.
-- The multiplicity of an individual process can be overridden externally (this is an override, not a scaling factor) by using `MULTIPLICITY_PROCESS_[PROCESS_NAME]`. In the process name, dashes `-` must be replaced by underscores `_`.
-- For example, creating the workflow with `MULTIPLICITY_FACTOR_RAWDECODERS=2 MULTIPLICITY_FACTOR_DETECTOR_ITS=3 MULTIPLICITY_FACTOR_PROCESS_mft_stf_decoder=5` will scale the number of ITS raw decoders by 6, of other ITS processes by 3, of other raw decoders by 2, and will run exactly 5 `mft-stf-decoder` processes.
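Since all externally provided factors are multiplied, the worked example composes as a simple product. A sketch of the arithmetic only; the base multiplicity of 1 is an assumption for illustration, and the real scaling logic lives in the workflow creation scripts:

```shell
# Reproduce the scaling arithmetic of the example above.
base=1                                  # assumed default instance count
MULTIPLICITY_FACTOR_RAWDECODERS=2       # all raw decoders x2
MULTIPLICITY_FACTOR_DETECTOR_ITS=3      # all ITS processes x3

# An ITS raw decoder picks up both the type factor and the detector factor:
its_raw_decoders=$((base * MULTIPLICITY_FACTOR_RAWDECODERS * MULTIPLICITY_FACTOR_DETECTOR_ITS))
echo "ITS raw decoders scaled by: $its_raw_decoders"   # 1 * 2 * 3 = 6
```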
-
-# Additional custom control variables
-For user modification of the workflow settings, the following *EXTRA* environment variables exist:
-- `ARGS_ALL_EXTRA`: Extra command line options added to all workflows.
-- `ALL_EXTRA_CONFIG`: Extra config key values added to all workflows.
-- `GPU_EXTRA_CONFIG`: Extra options added to the configKeyValues of the GPU workflow.
-- `ARGS_EXTRA_PROCESS_[WORKFLOW_NAME]`: Extra command line arguments for the workflow binary `WORKFLOW_NAME`. Dashes `-` must be replaced by underscores `_` in the name! E.g. `ARGS_EXTRA_PROCESS_o2_tof_reco_workflow='--output-type clusters'`.
-- `CONFIG_EXTRA_PROCESS_[WORKFLOW_NAME]`: Extra `--configKeyValues` arguments for the workflow binary `WORKFLOW_NAME`. Dashes `-` must be replaced by underscores `_` in the name! E.g. `CONFIG_EXTRA_PROCESS_o2_gpu_reco_workflow='GPU_proc.debugLevel=1;GPU_proc.ompKernels=0;'`.
-
-**IMPORTANT:** When providing additional environment variables, please always use single quotes `'` instead of double quotes `"`, because otherwise there can be issues with whitespace. E.g. `ARGS_EXTRA_PROCESS_o2_eve_display='--filter-time-min 0 --filter-time-max 120'` works, while `ARGS_EXTRA_PROCESS_o2_eve_display="--filter-time-min 0 --filter-time-max 120"` does not.
-
-In case the CTF dictionaries were created from data drastically different from the data being compressed, the default memory allocation for the CTF buffer might be insufficient. One can apply a scaling factor to the buffer size estimate (default 1.5) of a particular detector by defining a variable, e.g. `TPC_ENC_MEMFACT=3.5`.
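The recommended single-quote form from the note above can be sketched as follows. In plain bash both quote styles assign the same string; the single-quote requirement concerns how the GUI field is parsed, so only the recommended form is shown:

```shell
# Single quotes keep the whitespace-separated options together as one value,
# as the README requires for these EXTRA variables.
export ARGS_EXTRA_PROCESS_o2_eve_display='--filter-time-min 0 --filter-time-max 120'
echo "$ARGS_EXTRA_PROCESS_o2_eve_display"
```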
-
-# File input for ctf-reader / raw-tf-reader
-- The variable `$INPUT_FILE_LIST` can be a comma-separated list of files, or a file containing a file list of CTFs / raw TFs.
-- The variable `$INPUT_FILE_COPY_CMD` can provide a custom copy command (the default is to fetch the files from EOS).
-
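The two accepted forms of `$INPUT_FILE_LIST` can be sketched as follows; the file names are made up for illustration:

```shell
# Form 1: comma-separated list of files (names are hypothetical).
export INPUT_FILE_LIST='ctf_tf1.root,ctf_tf2.root'

# Form 2: a text file containing the file list, one entry per line.
printf '%s\n' ctf_tf1.root ctf_tf2.root > my_ctf_list.txt
export INPUT_FILE_LIST=my_ctf_list.txt

echo "input list: $INPUT_FILE_LIST"
```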
-# Remarks on QC
-The JSON files for the individual detectors are merged into one JSON file, which is cached during the run on the shared EPN home folder.
-The default JSON file per detector is defined in `qc-workflow.sh`.
-JSONs per detector can be overridden by exporting `QC_JSON_[DETECTOR_NAME]`, e.g. `QC_JSON_TPC`, when creating the workflow.
-The global section of the merged QC JSON config is taken from `qc-sync/qc-global.json`.
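The per-detector QC override described above is again just an environment variable exported before creating the workflow; the path below is hypothetical:

```shell
# Hypothetical path: override the TPC QC configuration picked up by qc-workflow.sh.
export QC_JSON_TPC=/tmp/custom-tpc-qc.json
echo "TPC QC json: $QC_JSON_TPC"
```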
+# Options for dpl-workflow.sh
+Refer to https://github.com/AliceO2Group/AliceO2/blob/dev/prodtests/full-system-test/documentation/dpl-workflow-options.md
 
 # run-workflow-on-inputlist.sh
 `O2/prodtests/full-system-test/run-workflow-on-inputlist.sh` is a small tool to run the `dpl-workflow.sh` on a list of files.
