Conversation

@matthias-kleiner
Contributor

Digitization:

  • adding loading of gas parameters from CCDB for tuning the electron attachment
  • adding loading of GEM parameters for gain tuning

Reconstruction:

  • adding option to load MC time gain calibration

@github-actions
Contributor

REQUEST FOR PRODUCTION RELEASES:
To request that your PR be included in production software, please add the corresponding "async-*" labels to your PR. Add the labels directly (if you have the permissions) or add a comment of the form below (note that labels are separated by ","):

+async-label <label1>, <label2>, !<label3> ...

This will add <label1> and <label2> and remove <label3>.

The following labels are available:
async-2023-pbpb-apass4
async-2023-pp-apass4
async-2024-pp-apass1
async-2022-pp-apass7
async-2024-pp-cpass0
async-2024-PbPb-cpass0
async-2024-PbPb-apass1
async-2024-ppRef-apass1

@matthias-kleiner
Contributor Author

Hello @wiechula ,
if this PR is fine, we need to upload the default CCDB objects for the MC time gain, the gas parameters and the GEM parameters. Do you want to open a ticket for this or should I do it?

@alibuild
Collaborator

alibuild commented Nov 15, 2024

Error while checking build/O2/fullCI for 9c53f6a at 2024-11-19 05:01:

## sw/BUILD/O2-full-system-test-latest/log
Detected critical problem in logfile digi.log
digi.log:[52063:internal-dpl-ccdb-backend]: [04:01:04][ERROR] Exception while running: Fatal error. Rethrowing.
digi.log-[52063:internal-dpl-ccdb-backend]: [04:01:05][FATAL] Unhandled o2::framework::runtime_error reached the top of main of o2-sim-digitizer-workflow, device shutting down. Reason: Fatal error
[52063:internal-dpl-ccdb-backend]: [04:01:00][ERROR] CcdbDownloader finished transfer http://alice-ccdb.cern.ch/TPC/Parameter/Gas for 1550600800000 (agent_id: alientest02-1731988841-tBzZLP) with http code: 404
[52063:internal-dpl-ccdb-backend]: [04:01:00][ERROR] File TPC/Parameter/Gas could not be retrieved. No more hosts to try.
[52063:internal-dpl-ccdb-backend]: [04:01:00][FATAL] Unable to find object TPC/Parameter/Gas/1550600800000
[52063:internal-dpl-ccdb-backend]: [04:01:04][ERROR] Exception while running: Fatal error. Rethrowing.
[52063:internal-dpl-ccdb-backend]: [04:01:05][FATAL] Unhandled o2::framework::runtime_error reached the top of main of o2-sim-digitizer-workflow, device shutting down. Reason: Fatal error
[ERROR] Workflow crashed - PID 52063 (internal-dpl-ccdb-backend) did not exit correctly however it's not clear why. Exit code forced to 128.


## sw/BUILD/o2checkcode-latest/log
--
========== List of errors found ==========
++ GRERR=0
++ grep -v clang-diagnostic-error error-log.txt
++ grep ' error:'
++ GRERR=1
++ [[ 1 == 0 ]]
++ mkdir -p /sw/INSTALLROOT/e93e771473517b8ad13106b859bce9b0cfb5b384/slc8_x86-64/o2checkcode/1.0-local267/etc/modulefiles
++ cat
--

Full log here.

wiechula previously approved these changes Nov 18, 2024
@sawenzel
Collaborator

sawenzel commented Nov 19, 2024

fullCI fails with

JAlienFile::Open>: Accessing file /alice/data/CCDB/TPC/Calib/LaserTracks/14/60057/dd26d771-0929-11ed-8000-2a010e0a0b16 in SE <ALICE::CERN::OCDB>
[52063:internal-dpl-ccdb-backend]: [04:01:00][INFO] ccdb reads http://alice-ccdb.cern.ch/TPC/Calib/LaserTracks/1546300800000/dd26d771-0929-11ed-8000-2a010e0a0b16 for 1550600800000 (load to memory, agent_id: alientest02-1731988841-tBzZLP), 
[52063:internal-dpl-ccdb-backend]: [04:01:00][ERROR] CcdbDownloader finished transfer http://alice-ccdb.cern.ch/TPC/Parameter/Gas for 1550600800000 (agent_id: alientest02-1731988841-tBzZLP) with http code: 404
[52063:internal-dpl-ccdb-backend]: [04:01:00][ERROR] File TPC/Parameter/Gas could not be retrieved. No more hosts to try.
[52063:internal-dpl-ccdb-backend]: [04:01:00][ALARM] Curl request to http://alice-ccdb.cern.ch/TPC/Parameter/Gas/1550600800000/, response code: 404
[52063:internal-dpl-ccdb-backend]: [04:01:00][FATAL] Unable to find object TPC/Parameter/Gas/1550600800000

which appears to be related to this PR. Please make sure to upload CCDB objects (even trivial ones) for all runs; the upload needs to be done prior to merging this PR.
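Before merging, the presence of a default object at the expected CCDB path can be checked by hand. A minimal shell sketch, using the path and validity timestamp taken from the failing log above; the `curl` HEAD request mirrors the downloader's 404 and is left commented out so the snippet runs offline:

```shell
# Check whether a CCDB object exists for a given validity timestamp.
# Host, object path and timestamp are taken from the CI log above.
CCDB_HOST="http://alice-ccdb.cern.ch"
OBJ_PATH="TPC/Parameter/Gas"
TIMESTAMP=1550600800000

URL="${CCDB_HOST}/${OBJ_PATH}/${TIMESTAMP}"
echo "$URL"

# A HEAD request prints the HTTP status code; 404 means the object is
# missing, as in the failing fullCI run:
# curl -s -o /dev/null -w '%{http_code}\n' -I "$URL"
```

This reproduces exactly the URL the `internal-dpl-ccdb-backend` device failed to resolve.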

@wiechula
Collaborator

Yes, @matthias-kleiner will prepare an upload request with the default objects.

@chiarazampolli
Collaborator

Here it is: https://its.cern.ch/jira/browse/O2-5562, already uploaded.

@wiechula wiechula enabled auto-merge (rebase) November 20, 2024 08:13
@wiechula
Collaborator

+async-label async-2023-pbpb-apass4

@github-actions github-actions bot dismissed wiechula’s stale review November 20, 2024 08:15

Labels updated; please review again.

@github-actions github-actions bot added the async-2023-pbpb-apass4 Request porting to async-2023-pbpb-apass4 label Nov 20, 2024
@sawenzel sawenzel disabled auto-merge November 20, 2024 12:32
@sawenzel sawenzel merged commit 2de9c5c into AliceO2Group:dev Nov 20, 2024
11 checks passed
@chiarazampolli
Collaborator

chiarazampolli commented Nov 20, 2024

Hello @matthias-kleiner ,

This does not apply cleanly to async-v1-01-branch; if you want it for the PbPb 2023 apass4 MC, you will need to find the extra commits that are required.
Here is the error:

DEBUG: CONFLICT (content): Merge conflict in GPU/Workflow/include/GPUWorkflow/GPUWorkflowSpec.h
DEBUG: Auto-merging GPU/Workflow/src/GPUWorkflowSpec.cxx
DEBUG: Auto-merging GPU/Workflow/src/gpu-reco-workflow.cxx
DEBUG: CONFLICT (content): Merge conflict in GPU/Workflow/src/gpu-reco-workflow.cxx

Chiara
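A port to a release branch is typically attempted with `git cherry-pick`. A minimal sketch in a throwaway repository; the real branch name (`async-v1-01-branch`) and merge commit (`2de9c5c`) from this PR appear only in comments, and since `2de9c5c` is a merge commit, an actual cherry-pick of it would need `-m 1` to select the mainline parent:

```shell
# Demonstrate porting a commit from the development branch to a release
# branch via cherry-pick, using a self-contained throwaway repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b dev repo && cd repo
git config user.email ci@example.com
git config user.name ci

echo base > file.txt && git add . && git commit -qm "base"
git branch async-v1-01-branch                 # release branch forks here
echo feature >> file.txt
git commit -qam "feature"                     # the change to port
FEATURE=$(git rev-parse HEAD)

git checkout -q async-v1-01-branch
git cherry-pick "$FEATURE"                    # real case: git cherry-pick -m 1 2de9c5c
```

When the release branch has diverged, this is where the `CONFLICT (content)` messages seen above appear, and the missing prerequisite commits have to be cherry-picked first.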

@alcaliva alcaliva added the async-2024-pp-apass1 Request porting to async-2024-pp-apass1 label Feb 12, 2025
