17 changes: 17 additions & 0 deletions Common/Tools/Multiplicity/README.md
@@ -0,0 +1,17 @@
# Multiplicity/centrality tools in O2/O2Physics

This directory aggregates all files necessary for multiplicity
and centrality calibration, from pp to Pb-Pb. It provides simple
slicing in percentiles, as done in proton-proton collisions, as well
as the Glauber + analytical NBD fitter used for anchoring and for
estimating the hadronic event distribution in nucleus-nucleus
collisions.

## Files
* `README.md` this readme
* `CMakeLists.txt` definition of the source files that need compiling
* `multCalibrator.cxx/h` a class to do percentile slicing of a given histogram. Used for all systems.
* `multMCCalibrator.cxx/h` a class to perform data-to-MC matching of the average Nch.
* `multGlauberNBDFitter.cxx/h` a class to do Glauber fits.
* `multModule.h` a class to calculate multiplicity and centrality tables for analysis. Meant to be used inside the main core service wagon 'multcenttable'.

69 changes: 69 additions & 0 deletions Common/Tools/Multiplicity/macros/README.md
@@ -0,0 +1,69 @@
# Example multiplicity calibration macros

This directory contains example macros that allow for the calculation of a Glauber fit and of the corresponding calibration histograms.

A simplified description of the procedure needed to generate a
Glauber + NBD fit to a given histogram follows.

## Performing a Glauber + NBD fit

### First step: calculation of Glauber MC sample

The Glauber + NBD model assumes that the multiplicity / amplitude
distribution one intends to fit can be described as coming
from a certain number N_{ancestor} of particle-emitting sources called 'ancestors'. Each ancestor is assumed to produce particles
according to a negative binomial distribution (NBD). Further,
the number of ancestors is assumed to be related to the basic
Glauber quantities N_{part} and N_{coll} via the formula:

N_{ancestor} = f * N_{part} + (1 - f) * N_{coll}

Usually, f is fixed at 0.8, and thus N_{ancestor}
typically ranges from 0 to about 700 in Pb-Pb collisions.

In order to allow for Glauber + NBD fitting, the correlation of
(N_{part}, N_{coll}) needs to be known. For that purpose,
a Glauber MC tree needs to be generated with TGlauberMC for the relevant nuclei, which also involves choosing an appropriate nuclear profile.

Once TGlauberMC has been used to produce a tree with N_{part} and
N_{coll} values, their correlation needs to be saved to a 2D histogram that serves as input to the Glauber + NBD fitter used in
O2/O2Physics. This is done with the macro `saveCorrelation.C` contained in this directory, which produces a file named `baseHistos.root`. The macro is written
for the Pb-Pb case and may need minor adaptation
in case other nuclei are to be used.

### Second step: execution of Glauber + NBD fitter

The fitting procedure is done via the macro ``. Because
the numerical fitting uses a convolution of the N_{ancestor}
distribution with NBD distributions, it will not be as fast
as typical one-dimensional fits: running times of 10-100 seconds
are not unusual. The macro will produce an output file
that contains:

* the original fitted histogram
* the actual Glauber fit distribution, plotted all the way
down to zero multiplicity

This output can then be used to extract percentiles.

### Third step: calculation of percentile boundaries

Once both the data and Glauber fit distributions are known,
the next step is to create a 'stitched' data/Glauber distribution in
which the distribution at low multiplicities follows
the Glauber fit and the distribution at higher multiplicities
follows the data. The multiplicity value at which the switch from
Glauber to data takes place is called the 'anchor point'.

Because this 'stitching' procedure may need to be tuned and the actual Glauber fit is slow, the stitching is done
in a separate macro called `runCalibration.C`. It is provided
in this directory as well and is the third and last step in calculating percentile boundaries. The macro will print out some
key boundaries as well as save an output file with the calibration.

*Bonus*: once the Glauber fit has been done
and all information is available, a deconvolution process
can be used to calculate the average N_{part} and N_{coll}
in centrality slices. This functionality is also provided
in O2Physics as part of `multGlauberNBDFitter`, and the
`runCalibration.C` macro can optionally also perform that
deconvolution. *Warning*: this procedure might take a moment.
172 changes: 172 additions & 0 deletions Common/Tools/Multiplicity/macros/runCalibration.C
@@ -0,0 +1,172 @@
// Copyright 2019-2020 CERN and copyright holders of ALICE O2.
// See https://alice-o2.web.cern.ch/copyright for details of the copyright holders.
// All rights not expressly granted are reserved.
//
// This software is distributed under the terms of the GNU General Public
// License v3 (GPL Version 3), copied verbatim in the file "COPYING".
//
// In applying this license CERN does not waive the privileges and immunities
// granted to it by virtue of its status as an Intergovernmental Organization
// or submit itself to any jurisdiction.
//
/// \file runCalibration.C
/// \brief Stitches data and Glauber fit distributions and extracts the centrality calibration
/// \author ALICE

#include <iostream>

#include "TCanvas.h"
#include "TF1.h"
#include "TFile.h"
#include "TH1F.h"
#include "TH2F.h"
#include "TProfile.h"
#include "TString.h"

/// @brief function to calibrate centrality
/// @param lInputFileName name of input file.
/// @param anchorPointPercentage anchor point percentage to use
/// @param matchRange width of the region in which data/Glauber matching is done in the rolling anchoring test
/// @param doNpartNcoll whether or not to attempt calculating Npart, Ncoll in centrality bins
void runCalibration(TString lInputFileName = "results/AR_544122_glauberNBD_ancestorMode2_hFT0C_BCs.root", double anchorPointPercentage = 90.0, double matchRange = 200.0, bool doNpartNcoll = false)
{
TFile* file = new TFile(lInputFileName.Data(), "READ");
file->ls();

TH1F* hData = (TH1F*)file->Get("hV0MUltraFine");
TH1F* hGlauberParameters = (TH1F*)file->Get("hGlauberParameters");
TH1F* hGlauberFitRange = (TH1F*)file->Get("hGlauberFitRange");
hData->SetName("hData");
TH1F* hStitched = (TH1F*)hData->Clone("hStitched");
TH1F* hFit = (TH1F*)file->Get("hGlauber");

TCanvas* c1 = new TCanvas("c1", "", 800, 600);
c1->SetLeftMargin(0.17);
c1->SetBottomMargin(0.17);
c1->SetRightMargin(0.15);
c1->SetTopMargin(0.05);
c1->SetTicks(1, 1);
c1->SetLogz();
c1->SetFrameFillStyle(0);
c1->SetFillStyle(0);

cout << "Data bin width: " << hData->GetBinWidth(1) << endl;
cout << "Fit bin width: " << hFit->GetBinWidth(1) << endl;
cout << "Match range to use: " << matchRange << endl;

//____________________________________________
double anchorPointFraction = anchorPointPercentage / 100.f;
double anchorPoint = -1; // the anchor point value in raw

//____________________________________________
// doing partial integration up to certain point for finding anchor point bin
for (int ii = 1; ii < hData->GetNbinsX() + 1; ii++) {
// define the matching window just above this bin
int bin1 = ii + 1;
int bin2 = hData->FindBin(hData->GetBinLowEdge(ii + 1) + matchRange + 1e-3);
double matchRangeData = hData->Integral(bin1, bin2);
double matchRangeFit = hFit->Integral(bin1, bin2);

// rescale fit to match in the vicinity of the region we're at
hFit->Scale(matchRangeData / matchRangeFit);

double integralFit = hFit->Integral(1, ii);
double integralData = hData->Integral(ii + 1, hData->GetNbinsX() + 1);
double integralAll = integralFit + integralData;

cout << "at bin #" << ii << ", integrated up to " << hData->GetBinLowEdge(ii + 1) << " fraction above this value is: " << integralData / integralAll << endl;
anchorPoint = hData->GetBinLowEdge(ii + 1);

if (integralData / integralAll < anchorPointFraction)
break;
}

//____________________________________________
for (int ii = 1; ii < hData->GetNbinsX() + 1; ii++) {
// below the anchor point, replace data by the Glauber fit
if (hData->GetBinCenter(ii) < anchorPoint)
hStitched->SetBinContent(ii, hFit->GetBinContent(ii));
}

cout << "Anchor point determined to be: " << anchorPoint << endl;
cout << "Preparing stitched histogram ... " << endl;

hFit->SetLineColor(kRed);
hStitched->SetLineColor(kBlue);

hData->GetYaxis()->SetTitleSize(0.055);
hData->GetXaxis()->SetTitleSize(0.055);
hData->GetYaxis()->SetLabelSize(0.04);
hData->GetXaxis()->SetLabelSize(0.04);
hData->SetTitle("");
hData->Draw("hist");
hFit->Draw("hist same");
hStitched->Draw("hist same");

// All fine, let's try the calibrator
multCalibrator* lCalib = new multCalibrator("lCalib");
lCalib->SetAnchorPointPercentage(100.0f);
lCalib->SetAnchorPointRaw(-1e-6);

// Set standard Pb-Pb boundaries
lCalib->SetStandardOnePercentBoundaries();

TString calibFileName = lInputFileName.Data();
calibFileName.ReplaceAll("glauberNBD", "calibration");
TFile* fileCalib = new TFile(calibFileName.Data(), "RECREATE");

TH1F* hCalib = lCalib->GetCalibrationHistogram(hStitched, "hCalib");

TCanvas* c2 = new TCanvas("c2", "", 800, 600);
c2->SetLeftMargin(0.17);
c2->SetBottomMargin(0.17);
c2->SetRightMargin(0.15);
c2->SetTopMargin(0.05);
c2->SetTicks(1, 1);
// c2->SetLogz();
c2->SetFrameFillStyle(0);
c2->SetFillStyle(0);

hCalib->GetYaxis()->SetTitleSize(0.055);
hCalib->GetXaxis()->SetTitleSize(0.055);
hCalib->GetYaxis()->SetLabelSize(0.04);
hCalib->GetXaxis()->SetLabelSize(0.04);
hCalib->SetTitle("");
hCalib->Draw();

fileCalib->cd();

hData->Write();
hCalib->Write();
hStitched->Write();
hFit->Write();

if (doNpartNcoll) {
cout << "Will now attempt to calculate % -> Np, Nc map..." << endl;

TProfile* hProfileNpart = new TProfile("hProfileNpart", "", 100, 0, 100);
TProfile* hProfileNcoll = new TProfile("hProfileNcoll", "", 100, 0, 100);
TH2F* h2dNpart = new TH2F("h2dNpart", "", 100, 0, 100, 500, -0.5f, 499.5f);
TH2F* h2dNcoll = new TH2F("h2dNcoll", "", 100, 0, 100, 3000, -0.5f, 2999.5);

// Replay
multGlauberNBDFitter* g = new multGlauberNBDFitter("lglau");
TF1* fitfunc = g->GetGlauberNBD();

// Step 1: open the (Npart, Ncoll) pair information, provide
TFile* fbasefile = new TFile("basehistos.root", "READ");
TH2D* hNpNc = (TH2D*)fbasefile->Get("hNpNc");
g->SetNpartNcollCorrelation(hNpNc);
g->InitializeNpNc();

fitfunc->SetParameter(0, hGlauberParameters->GetBinContent(1));
fitfunc->SetParameter(1, hGlauberParameters->GetBinContent(2));
fitfunc->SetParameter(2, hGlauberParameters->GetBinContent(3));
fitfunc->SetParameter(3, hGlauberParameters->GetBinContent(4));
fitfunc->SetParameter(4, hGlauberParameters->GetBinContent(5));

Double_t lMax = hData->GetBinLowEdge(hData->GetNbinsX() + 1);

// calculate average Npart, Ncoll per centrality slice -> Warning, slow!
g->CalculateAvNpNc(hProfileNpart, hProfileNcoll, h2dNpart, h2dNcoll, hCalib, 0, lMax);

hProfileNpart->Write();
hProfileNcoll->Write();
h2dNpart->Write();
h2dNcoll->Write();
}

fileCalib->Write();
}