
Dear all,

Here is a tentative proposal for creating the TF CTP time-stamp needed for the CCDB queries. Hopefully it covers all our use cases, i.e. the types of DPL processing listed below.

The underlying calculation is (1):

```cpp
#include <cstdint>

// LHC orbit duration; the value (3564 bunch crossings x ~24.95 ns ≈ 88.92 us) is assumed here
constexpr double OrbitPeriodMUS = 3564 * 24.95e-3;

// (1): TF time-stamp = orbit reset time + duration of firstTFOrbit LHC orbits,
// with orbit_reset_time (and the result) in microseconds.
// Note: firstTFOrbit is the absolute orbit number at the start of the TF, so the
// nOrbitsPerTF factor does not enter the product; it is needed only when firstTFOrbit
// itself has to be generated from a TF counter (see use case C below).
uint64_t getTFTimeStamp(uint64_t orbit_reset_time, uint32_t firstTFOrbit)
{
  return orbit_reset_time + static_cast<uint64_t>(firstTFOrbit * OrbitPeriodMUS);
}
```
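
For illustration, a usage sketch with assumed numbers: a TF starting at absolute orbit 10^6 gets a time-stamp roughly 88.9 s after the orbit reset.

```cpp
// usage sketch (numbers assumed): t0 = orbit reset time in us since epoch
uint64_t t0 = 1620000000000000ull;          // ~May 2021 in us since epoch
uint64_t ts = getTFTimeStamp(t0, 1000000u); // 1e6 orbits x 88.92 us ≈ t0 + 88.92 s
```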

so the problem is how to obtain the orbit_reset_time, nOrbitsPerTF and firstTFOrbit in the different use cases.

In real life the CTP will periodically reset the orbit number to 0, normally at the beginning of the stable beam or whenever this is needed (it is guaranteed that the orbit will never wrap to 0 during a run). At every reset it will write a CCDB object containing the CTP-server time of the reset, with validity starting at this time and extending for at least 4 days. Also, I assume that before the actual SOR we can write to the CCDB a GRP object with the essential parameters, including nOrbitsPerTF; the others are the runNumber, declared detectors, readout modes, bunch-filling scheme, mag. field, a time-stamp (e.g. of the GRP creation time), etc.
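
To make the intended queries concrete, here is a minimal sketch. The object paths ("CTP/Calib/OrbitReset", "GLO/GRP/GRP"), the layout of the CTP-reset payload (a single time-stamp) and the GRP accessor name are assumptions, not the final schema:

```cpp
#include <cstdint>
#include <vector>
#include "CCDB/BasicCCDBManager.h"
#include "DataFormatsParameters/GRPObject.h"

// Fetch the orbit reset time and nOrbitsPerTF valid at a given epoch time (in ms,
// the CCDB convention). Paths and payload layout are assumptions.
uint64_t fetchOrbitResetAndNOrbits(int64_t timeWithinRunMS, uint32_t& nOrbitsPerTF)
{
  auto& mgr = o2::ccdb::BasicCCDBManager::instance();
  mgr.setTimestamp(timeWithinRunMS); // any epoch time within the validity of both objects
  // hypothetical path of the object written by the CTP at every orbit reset
  auto* reset = mgr.get<std::vector<int64_t>>("CTP/Calib/OrbitReset");
  // GRP object written before SOR, carrying nOrbitsPerTF among other run parameters
  auto* grp = mgr.get<o2::parameters::GRPObject>("GLO/GRP/GRP");
  nOrbitsPerTF = grp->getNHBFPerTF(); // accessor name assumed
  return static_cast<uint64_t>((*reset)[0]);
}
```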

For most of the use cases I propose to have a simple time-stamp-sender DPL device which calculates the timestamp and sends a message, e.g. ***/TFTIMESTAMP/0, to which all other devices can subscribe (I assume the subscription to this input can be done internally by the DataProcessingDevice, with an option to skip it if requested by the user code; in particular, the time-stamp-sender itself, as well as other devices which generate the time-stamp themselves, e.g. the CTFreader or the AODreader, should not subscribe to this message). The device should have the command-line options below (usage explained in the use cases; a sketch of the device follows the option list):

--grp-from-file         # default = false; if passed, the GRP will be loaded from a local file as now (path provided via --configKeyValues "NameConf.mDirGRP")
--time-stamp-within-run # default = 0; if > 0, provides an epoch time-stamp within the run (to be used for the GRP and CTP-reset queries on the CCDB). With the default 0, `now` is assumed.

An alternative to this --time-stamp-within-run is a --run-number option, which would be used to query the run-lookup CCDB object (an object with infinite validity containing a map<run_number, time-stamp>, updated for every run), as was discussed at the WP4/14 meetings (perhaps a more elegant solution was found meanwhile, I did not follow this topic).
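
Here is a minimal sketch of such a device, under a few assumptions: the concrete origin "CTP" stands in for the *** wildcard of ***/TFTIMESTAMP/0, the CCDB query at the very 1st TF is stubbed out, and getTFTimeStamp is formula (1) above:

```cpp
#include <cstdint>
#include <memory>
#include "Framework/DataProcessorSpec.h"
#include "Framework/AlgorithmSpec.h"
#include "Framework/ConfigParamSpec.h"
#include "Framework/DataRefUtils.h"
#include "Framework/Output.h"
#include "Headers/DataHeader.h"

using namespace o2::framework;

// Sketch of the proposed time-stamp-sender; the spec name and the "CTP" origin
// standing in for *** are assumptions.
DataProcessorSpec getTimeStampSenderSpec()
{
  return DataProcessorSpec{
    "time-stamp-sender",
    Inputs{{"tfdist", "FLP", "DISTSUBTIMEFRAME", 0}},
    Outputs{{"CTP", "TFTIMESTAMP", 0}},
    AlgorithmSpec{adaptStateful([](InitContext&) {
      // in the full implementation this would be filled from the CCDB at the 1st TF
      auto orbitResetTime = std::make_shared<uint64_t>(0);
      return [orbitResetTime](ProcessingContext& pc) {
        // firstTForbit of the current TF from the DISTSUBTIMEFRAME DataHeader
        auto dh = DataRefUtils::getHeader<o2::header::DataHeader*>(pc.inputs().get("tfdist"));
        uint64_t ts = getTFTimeStamp(*orbitResetTime, dh->firstTForbit); // formula (1)
        pc.outputs().snapshot(Output{"CTP", "TFTIMESTAMP", 0}, ts);
      };
    })},
    Options{
      {"grp-from-file", VariantType::Bool, false, {"load GRP from a local file"}},
      {"time-stamp-within-run", VariantType::Int64, 0ll, {"epoch time-stamp within the run (0 = now)"}}}};
}
```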

Below are (hopefully) all the use cases.

A) Sync. reco with input from FLP/DD:

The time-stamp-sender subscribes to the FLP/DISTSUBTIMEFRAME/0 message and at the very 1st TF uses the current time (`now`) to query from the CCDB the GRP and CTP-reset objects, extracting the orbit_reset_time and nOrbitsPerTF (done only once). Earlier I was proposing to use the DataProcessingHeader.startTime of the FLP clock, but in sync. processing both times are guaranteed to be within the validity range of these objects on the CCDB.

The firstTFOrbit comes from the DISTSUBTIMEFRAME DataHeader.firstTForbit for every TF. An alternative to accessing the CCDB at the very beginning (on the FLP this might be time-critical) is to get the orbit_reset_time and nOrbitsPerTF as parameters from the ECS (as command-line options?).

B) Replay of raw data with readout/DD or raw-file-reader:

Same as (A), but the time-stamp-sender receives a --time-stamp-within-run via the command line, which allows it to query the CCDB for the GRP and CTP-reset objects. With real data the CCDB will be used; with emulated MC->raw either the CCDB (particularly for anchored MC) or the local GRP of the MC. For simulated raw data, if --grp-from-file is activated, the nOrbitsPerTF and the time-stamp for the CCDB query are read from the local GRP file, so --time-stamp-within-run is not needed.

C) Reconstruction of anchored MC data from digits/clusters:

Same as (B), except that the time-stamp-sender generates the firstTForbit itself via the common DPL options --orbit-offset-enumeration and --orbit-multiplier-enumeration hooked to the digitization ini file, as is already done for all reco workflows reading digits (see the sketch below).
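
A sketch of what this enumeration could look like; the exact semantics of the two options (an orbit offset plus a per-TF multiplier) is an assumption:

```cpp
#include <cstdint>

// Generate firstTForbit for the tfCounter-th TF from the enumeration options;
// offset/multiplier semantics assumed, function name hypothetical.
uint32_t enumeratedFirstTForbit(uint32_t tfCounter, uint32_t orbitOffset, uint32_t orbitMultiplier)
{
  // with orbitMultiplier == nOrbitsPerTF consecutive TFs get contiguous orbit ranges
  return orbitOffset + orbitMultiplier * tfCounter;
}
```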

D) Reconstruction of non-anchored test MC from digits/clusters (this also applies to the reconstruction of the corresponding emulated raw data):

This is not a problem by itself, as the cases (B) and (C) should work here as well; the question is what we use as the CCDB in this case. In the current test MCs we essentially use a handful of local files (GRP, geometry, material LUT object, etc.) or the default objects created by their class constructors. But obviously this is not a viable real solution: I guess we should have a set of test CCDB servers (for pp, PbPb, cosmics) where detectors can store their default objects, like the AliRoot OCDB. In the meantime, until this is set up, we can put on ccdb-test.cern.ch, as a temporary hack, a dummy CTP-reset object containing e.g. the time-stamp = 0, with the convention that the time-stamp-sender will interpret this time as `now` (see the sketch below).
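
A sketch of this convention; the unit (us since epoch) and the function name are assumptions:

```cpp
#include <chrono>
#include <cstdint>

// Interpret a dummy orbit_reset_time == 0 fetched from ccdb-test.cern.ch as `now`.
uint64_t resolveOrbitResetTime(uint64_t fromCCDB)
{
  using namespace std::chrono;
  if (fromCCDB != 0) {
    return fromCCDB; // a real reset time was stored
  }
  return duration_cast<microseconds>(system_clock::now().time_since_epoch()).count();
}
```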

E) Async. reco from CTF: here nothing is needed, as the CTFreader will push the TF time-stamp stored in the CTF header. The same should be true for the AODreader.
