
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Showing 585 additions and 28 deletions
contributions/farming/farm-jobs.png

230 KiB

contributions/farming/meltdown.jpg

39.2 KiB

contributions/farming/meltdown2.jpg

28.5 KiB

......@@ -6,12 +6,12 @@
\title{The \Fermi-LAT experiment}
\author{
M Kuss$^{1}$,
F Longo$^{2}$,
M. Kuss$^{1}$,
F. Longo$^{2}$,
on behalf of the \Fermi LAT collaboration}
\address{$^{1}$ Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, I-56127 Pisa, Italy}
\address{$^{2}$ Department of Physics, University of Trieste, via Valerio 2, Trieste and INFN, Sezione di Trieste, via Valerio 2, Trieste, Italy}
\address{$^{1}$ INFN Sezione di Pisa, Pisa, IT}
\address{$^{2}$ University of Trieste and INFN Sezione di Trieste, Trieste, IT}
\ead{michael.kuss@pi.infn.it}
\begin{abstract}
......
......@@ -28,7 +28,7 @@
\section{The GAMMA experiment and the AGATA array}
The strong interaction described by quantum chromodynamics (QCD) is responsible for binding neutrons and protons into nuclei and for the many facets of nuclear structure and reaction physics. Combined with the electroweak interaction, it determines the properties of all nuclei in a similar way as quantum electrodynamics shapes the periodic table of elements. While the latter is well understood, it is still unclear how the nuclear chart emerges from the underlying strong interactions. This requires the development of a unified description of all nuclei based on systematic theories of strong interactions at low energies, advanced few- and many-body methods, as well as a consistent description of nuclear reactions. Nuclear structure and dynamics have not reached the discovery frontier yet (e.g., new isotopes, new elements, …), and a high precision frontier is also being approached with higher beam intensities and purity, along with better efficiency and sensitivity of instruments. The access to new and complementary experiments combined with theoretical advances allows key questions to be addressed such as:
The strong interaction described by quantum chromodynamics (QCD) is responsible for binding neutrons and protons into nuclei and for the many facets of nuclear structure and reaction physics. Combined with the electroweak interaction, it determines the properties of all nuclei in a similar way as quantum electrodynamics shapes the periodic table of elements. While the latter is well understood, it is still unclear how the nuclear chart emerges from the underlying strong interactions. This requires the development of a unified description of all nuclei based on systematic theories of strong interactions at low energies, advanced few- and many-body methods, as well as a consistent description of nuclear reactions. Nuclear structure and dynamics have not reached the discovery frontier yet (e.g. new isotopes, new elements, …), and a high precision frontier is also being approached with higher beam intensities and purity, along with better efficiency and sensitivity of instruments. The access to new and complementary experiments combined with theoretical advances allows key questions to be addressed such as:
How does the nuclear chart emerge from the underlying fundamental interactions?
......@@ -51,8 +51,17 @@ What is the density and isospin dependence of the nuclear equation of state?
\noindent AGATA \cite{ref:gamma_first,ref:gamma_second} is the European Advanced Gamma Tracking Array project for nuclear spectroscopy, consisting of a full shell of high-purity segmented germanium detectors. Being fully instrumented with digital electronics, it exploits the novel technique of gamma-ray tracking. AGATA will be employed at all the large-scale radioactive and stable beam facilities and in the long term will be completed in a 60-detector-unit geometry, in order to realize the envisaged scientific program. AGATA is being realized in phases, with the goal of completing the first phase of 20 units by 2020. AGATA has been successfully operated since 2009 at LNL, GSI and GANIL, taking advantage of different beams and powerful ancillary detector systems. It will be used at LNL again in 2022, with stable beams and later with SPES radioactive beams, and in future years it is planned to be installed at GSI/FAIR, Jyv\"askyl\"a, GANIL again, and HIE-ISOLDE.
\section{AGATA computing model and the role of CNAF}
At present the array consists of 15 units, each composed by a cluster of 3 HPGe crystals. Each individual crystal is composed of 36 segments for a total of 38 associated electronics channels/crystal. The data acquisition rate, including Pulse Shape Analysis, can stand up to 4/5 kHz events per crystal. The bottleneck is presently the Pulse Shape Analysis procedure to extract the interaction positions from the HPGe detectors traces. With future faster processor one expects to be able to process the PSA at 10 kHz/crystal. The amount of raw data per experiment, including traces, is about 20 TB for a standard data taking of about 1 week and can increase to 50 TB for specific experimental configuration. The collaboration is thus acquiring locally about 250 TB of data per year. During data-taking raw data is temporarily stored in a computer farm located at the experimental site and, later on, it is dispatched on the GRID in two different centers, CCIN2P3 (Lyon) and CNAF (INFN Bologna), used as TIER1: the duplication process is a security in case of failures/losses of one of the TIER1.
The GRID itself is seldom used to re-process the data and the users usually download their data set to local storage where they can run emulators able to manage part or the full workflow.
At present the array consists of 15 units, each composed of a cluster of 3 HPGe crystals.
Each individual crystal is divided into 36 segments, for a total of 38 associated electronics channels per crystal.
The data acquisition rate, including the Pulse Shape Analysis (PSA), can reach 4-5 kHz of events per crystal.
The bottleneck is presently the PSA procedure used to extract the interaction positions from the HPGe detector traces.
With future, faster processors one expects to be able to run the PSA at 10 kHz per crystal. The amount of raw data per experiment, including traces,
is about 20 TB for a standard data taking of about 1 week and can increase to 50 TB for specific experimental configurations.
The collaboration is thus acquiring locally about 250 TB of data per year. During data taking, raw data are temporarily stored
in a computer farm located at the experimental site and, later on, dispatched on the GRID to two different centers, CCIN2P3 (Lyon) and CNAF (INFN Bologna),
used as Tier 1 sites: the duplication is a safeguard against failures/losses at one of the Tier 1 sites.
The GRID itself is seldom used to re-process the data; users usually download their data set to local storage,
where they can run emulators able to manage part of or the full workflow.
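As an illustrative cross-check of these figures, the quoted yearly volume can be reproduced with a back-of-the-envelope calculation; the split between standard and special runs per year assumed below is purely illustrative, not an AGATA planning number:
\begin{verbatim}
# Back-of-the-envelope estimate of the AGATA yearly raw-data volume.
# Per-experiment sizes are taken from the text; the number of runs per
# year is an illustrative assumption, not an official AGATA figure.
TB_STANDARD_RUN = 20   # ~1 week of standard data taking, traces included
TB_SPECIAL_RUN  = 50   # special experimental configurations
n_standard, n_special = 10, 1          # assumed yearly schedule
yearly_tb = n_standard * TB_STANDARD_RUN + n_special * TB_SPECIAL_RUN
print(f"estimated yearly raw data: {yearly_tb} TB")   # ~250 TB, as quoted
\end{verbatim}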
\section{References}
......
contributions/icarus/ICARUS-nue-mip.png

106 KiB

contributions/icarus/ICARUS-sterile-e1529944099665.png

36.7 KiB

contributions/icarus/SBN.png

2.93 MiB

contributions/icarus/icarus-nue.png

476 KiB

\documentclass[a4paper]{jpconf}
\usepackage[font=small]{caption}
\usepackage{graphicx}
\begin{document}
\title{ICARUS}
\author{A. Rappoldi$^1$, on behalf of the ICARUS Collaboration}
\address{$^1$ INFN Sezione di Pavia, Pavia, IT}
\ead{andrea.rappoldi@pv.infn.it}
\begin{abstract}
After its successful operation at the INFN underground laboratories
of Gran Sasso (LNGS) from 2010 to 2013, ICARUS has been moved to
Fermilab Laboratory at Chicago (FNAL),
where it represents an important element of the
Short Baseline Neutrino Project (SBN).
Indeed, the ICARUS T600 detector, which has undergone various technical upgrade
operations at CERN to improve its performance and make it more suitable
for operation at shallow depth, will constitute one of the three Liquid Argon (LAr) detectors
exposed to the FNAL Booster Neutrino Beam (BNB).
The purpose of this project is to provide conclusive answers to the
``sterile neutrino puzzle'' raised by the anomalies that various other
experiments have claimed to observe in measurements of the parameters
regulating neutrino flavor oscillations.
\end{abstract}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The ICARUS project}
\label{ICARUS}
The technology of the Liquid Argon Time Projection Chamber (LAr TPC)
was first proposed by Carlo Rubbia in 1977. It was conceived as a tool for
detecting neutrinos in a way that would provide completely uniform, high-accuracy
imaging of massive volumes (several thousand tons).
ICARUS T600, the first large-scale detector exploiting this detection technique,
is the biggest LAr TPC ever realized, with a cryostat containing 760 tons of liquid argon.
Its construction was the culmination of many years of ICARUS collaboration R\&D studies,
with larger and larger laboratory and industrial prototypes, mostly developed thanks
to the Italian National Institute for Nuclear Physics (INFN), with the support of CERN.
Nowadays, it represents the state of the art of this technique, and it marks a major
milestone in the practical realization of large-scale liquid-argon detectors.
The ICARUS T600 detector was previously installed in the underground Italian INFN Gran
Sasso National Laboratory (LNGS) and was the first large-mass LAr TPC operating as a continuously
sensitive general-purpose observatory.
The detector was exposed to the CERN Neutrinos to Gran Sasso (CNGS) beam,
a neutrino beam produced at CERN and
traveling undisturbed straight through Earth for 730 km.
This very successful run lasted 3 years (2010-2013),
during which $8.6 \cdot 10^{19}$ protons on target were collected with a
detector live time exceeding 93\%, recording 2650 CNGS neutrino interactions
(in agreement with expectations) and cosmic rays (with a total exposure of 0.73 kton$\cdot$year).
ICARUS T600 demonstrated the effectiveness of the so-called {\it single-phase} TPC technique
for neutrino physics, providing a series of results from both the technical and the
physics points of view.
Besides the excellent detector performance, both as a tracking device and as a homogeneous calorimeter,
ICARUS demonstrated a remarkable capability in electron-photon separation and particle
identification, exploiting the measurement of dE/dx versus range and the
reconstruction of the invariant mass of photon pairs (coming from $\pi^0$ decay), rejecting to an unprecedented level
the Neutral Current (NC) background to $\nu_e$ Charged Current (CC) events (see Fig.~\ref{Fig1}).
\begin{figure}[ht]
\centering
% \includegraphics[width=0.8\textwidth,natwidth=1540,natheight=340]{icarus-nue.png}
\includegraphics[width=0.8\textwidth]{icarus-nue.png}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{ICARUS-nue-mip.png}
\caption{\label{Fig1} {\it Top:} A typical $\nu_e$ CC event recorded during the ICARUS operation
at LNGS. The neutrino, coming from the right, interacts with an Ar nucleus and produces a
proton (short, heavily ionizing track) and an electron (light gray track), which initiates an electromagnetic
shower developing to the left. {\it Bottom:} The accurate analysis of {\it dE/dx} allows
the track segments where several particles overlap to be easily distinguished,
locating the beginning of the shower with precision.}
\end{figure}
The tiny intrinsic $\nu_e$ component in the CNGS $\nu_{\mu}$
beam allowed ICARUS to perform a sensitive search for anomalous LSND-like $\nu_\mu \rightarrow \nu_e$ oscillations.
Globally, seven electron-like events have been observed, consistent with the $8.5 \pm 1.1$ events
expected from the intrinsic beam $\nu_e$ component and standard oscillations, providing the limits on
the oscillation probability $P(\nu_\mu \rightarrow \nu_e) \le 3.86 \cdot 10^{-3}$ at 90\% CL and
$P(\nu_\mu \rightarrow \nu_e) \le 7.76 \cdot 10^{-3}$ at 99\% CL, as shown in
Fig.~\ref{Fig2}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{ICARUS-sterile-e1529944099665.png}
\caption{\label{Fig2} Exclusion plot for $\nu_\mu \rightarrow \nu_e$ oscillations.
The yellow star marks the MiniBooNE best-fit point.
The ICARUS limits on the oscillation probability are shown as red lines. Most of the
LSND allowed region is excluded, except for a small area around $\sin^2 2 \theta \sim 0.005$,
$\Delta m^2 < 1~\mathrm{eV}^2$.
}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{ICARUS at FNAL}
\label{FNAL}
After its successful operation at LNGS, the ICARUS T600 detector was planned
to be included in the Short Baseline Neutrino project (SBN) at Fermilab\cite{SBN},
in Chicago, aiming to give a definitive answer to the so-called
{\it Sterile Neutrino Puzzle}.
In this context, it will operate as the {\it far detector}, placed along the
Booster Neutrino Beam (BNB) line, 600 meters from the target (see Fig.~\ref{Fig3}).
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{SBN.png}
\caption{\label{Fig3} The Short Baseline Neutrino Project (SBN) at
Fermilab (Chicago) will use three LAr TPC detectors, exposed to the
Booster Neutrino Beam at different distances from the target.
The ICARUS T600 detector, placed at 600 m, will operate as the {\it far detector},
devoted to detecting any anomaly in the beam flux and spectrum with respect to
the initial beam composition measured by the {\it near detector}
(SBND).
Such anomalies, due to neutrino flavour oscillations, would consist of
either $\nu_e$ appearance or $\nu_\mu$ disappearance.
}
\end{figure}
For this purpose, the ICARUS T600 detector underwent an intensive
overhaul at CERN before being shipped to FNAL,
in order to make it better suited to surface operation (as opposed to
the underground environment).
These important technical improvements took place within the CERN
Neutrino Platform framework (WA104) from 2015 to 2017.
In addition to significant mechanical improvements, especially concerning
a new cold vessel, with a purely passive thermal insulation,
some important innovations have been applied to the scintillation
light detection system\cite{PMT} and to the readout
electronics\cite{Electronics}.
% The role of ICARUS will be to detect any anomaly in the neutrino beam flux and
% composition that can occour during its propagation (from the near to the
% far detector), caused by neutrino flavour oscillation.
% This task requires to have an excellent capability to detect and identify
% neutrino interaction within the LAr sensitive volume, rejecting any other
% spurious event with a high level of confidence.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{ICARUS data amount}
\label{Computingreport_2015.pdf}
% The new ICARUS T600 detector (that has been modified and improved to operate
% at FNAL) contains about 54,000 sensitive wires (that give an electric signal
% proportional to the charge released into the LAr volume by ionizing particles)
% and 180 large PMTs, producing a prompt signal coming from the scintillation light.
% Both these analogic signal types are then converted in digital form, by mean of
% fast ADC modules.
%
% During normal run conditions, the trigger rate is about 0.5 Hz, and
% a full event, consisting of the digitized charge signals of all wires
% and all PMTs, has a size of about 80 MB (compressed).
% Therefore, the expected acquisition rate is about 40 MB/s, corrisponding
%to 1 PB/yr.
The data produced by the ICARUS detector (a LAr Time Projection Chamber)
basically consist of a large number of waveforms, generated by sampling the electric
signals induced on the sensing wires by the drift of the charge deposited along
the trajectories of charged particles within the LAr sensitive volume.
The waveforms recorded on about 54,000 wires and 360 PMTs are digitized
(at sampling rates of 2.5 MHz and 500 MHz, respectively) and compressed,
resulting in a total size of about 80 MB/event.
Considering the foreseen acquisition rate of about 0.5 Hz (in normal
run conditions), the expected data flow is about 40 MB/s, corresponding
to a data production of about 1 PB/yr.
The raw data are then processed by automated filters that recognize
and select the various event types (cosmic, beam, background, etc.) and rewrite
them in a more flexible format, suitable for the subsequent analysis,
which is also supported by interactive graphics programs.
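The quoted data flow and yearly volume follow directly from the event size and trigger rate given above; a minimal sketch of the arithmetic, using only the figures quoted in this section:
\begin{verbatim}
# Reproduce the ICARUS data-flow estimate from the figures in the text.
EVENT_SIZE_MB    = 80     # compressed size of one event (all wires + PMTs)
TRIGGER_RATE_HZ  = 0.5    # foreseen rate in normal run conditions
SECONDS_PER_YEAR = 3.15e7

rate_mb_s = EVENT_SIZE_MB * TRIGGER_RATE_HZ        # ~40 MB/s
yearly_pb = rate_mb_s * SECONDS_PER_YEAR / 1e9     # MB -> PB, ~1.3 PB
print(f"data flow: {rate_mb_s:.0f} MB/s, ~{yearly_pb:.1f} PB/yr")
# same order of magnitude as the ~1 PB/yr quoted in the text
\end{verbatim}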
% The experiment is expected to start commissioning phase at the end of 2018,
% with first data coming as soon as the Liquid Argon filling procedure is completed.
% Trigger logic tuning will last not less than a couple of months during which
% one PB of data is expected.
Furthermore, the ICARUS Collaboration is actively working on
producing the Monte Carlo events needed
to design and test the trigger conditions to be implemented on the detector.
This is done using the same analysis and simulation tools
developed at Fermilab for the SBN detectors (the {\it LArSoft} framework), in
order to have a common software platform and to facilitate algorithm testing
and performance checks by all members of the collaboration.
During 2018 many activities related to the detector installation
were still ongoing, and the start of data acquisition
is scheduled for 2019.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Role and contribution of CNAF}
\label{CNAF}
All the data (raw and reduced) will be stored at Fermilab using the local facility;
however, the ICARUS collaboration agreed to have a mirror site in Italy
(located at the CNAF INFN Tier 1) to retain a full replica of the preselected
raw data, both to have redundancy and to provide more direct data access
to the European part of the collaboration.
The CNAF Tier 1 computing resources assigned to ICARUS for 2018 consist of
4,000 HS06 of CPU, 500 TB of disk storage and 1,500 TB of tape archive.
A small fraction of the available storage has been used to
make a copy of all the raw data acquired at LNGS,
which are still being analysed.
During 2018 the ICARUS T600 detector was still in preparation, so
only a limited fraction
of these resources has been used, mainly to perform data transfer tests
(from FNAL to CNAF) and to check the installation of the LArSoft framework
in the Tier 1 environment. For this last purpose, a dedicated virtual
machine with a custom environment was also used.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{References}
\begin{thebibliography}{1}
\bibitem{SBN}
R. Acciarri et al.,
{\it A Proposal for a Three Detector Short-Baseline Neutrino
Oscillation Program in the Fermilab Booster Neutrino Beam},
arXiv:1503.01520 [physics.ins-det]
\bibitem{PMT}
M. Babicz et al.,
{\it Test and characterization of 400 Hamamatsu R5912-MOD
photomultiplier tubes for the ICARUS T600 detector}.
JINST 13 (2018) P10030
\bibitem{Electronics}
L. Bagby et al.,
{\it New read-out electronics for ICARUS-T600 liquid
argon TPC. Description, simulation and tests of the new
front-end and ADC system}.
JINST 13 (2018) P12007
\end{thebibliography}
\end{document}
File added
......@@ -6,13 +6,13 @@
\author{C. Bozza$^1$, T. Chiarusi$^2$, K. Graf$^3$, A. Martini$^4$ for the KM3NeT Collaboration}
\address{$^1$ Department of Physics of the University of Salerno and INFN Gruppo Collegato di Salerno, via Giovanni Paolo II 132, 84084 Fisciano, Italy}
\address{$^1$ University of Salerno and INFN Gruppo Collegato di Salerno, Fisciano (SA), IT}
\address{$^2$ INFN, Sezione di Bologna, v.le C. Berti-Pichat, 6/2, Bologna 40127, Italy}
\address{$^2$ INFN Sezione di Bologna, Bologna, IT}
\address{$^3$ Friedrich-Alexander-Universit{\"a}t Erlangen-N{\"u}rnberg, Erlangen Centre for Astroparticle Physics, Erwin-Rommel-Stra{\ss}e 1, 91058 Erlangen, Germany}
\address{$^3$ Friedrich-Alexander-Universit{\"a}t Erlangen-N{\"u}rnberg, Erlangen, DE}
\address{$^4$ INFN, LNF, Via Enrico Fermi, 40, Frascati, 00044 Italy}
\address{$^4$ INFN-LNF, Frascati, IT}
\ead{cbozza@unisa.it}
......@@ -24,7 +24,7 @@ from astrophysical sources; the ORCA programme is devoted to
investigate the ordering of neutrino mass eigenstates. The
unprecedented size of detectors will imply PByte-scale datasets and
calls for large computing facilities and high-performance data
centres. The data management and processing challenges of KM3NeT are
centers. The data management and processing challenges of KM3NeT are
reviewed as well as the computing model. Specific attention is given
to describing the role and contributions of CNAF.
\end{abstract}
......@@ -80,7 +80,7 @@ way. One ORCA DU was also deployed and operated in 2017, with smooth
data flow and processing. At present time, most of the computing load is
due to simulations for the full building block, now being enriched with
feedback from real data analysis. As a first step, this
was done at CC-IN2P3 in Lyon, but usage of other computing centres is
was done at CC-IN2P3 in Lyon, but usage of other computing centers is
increasing and is expected to soon spread to the full KM3NeT
computing landscape. This process is being driven in accordance to the
goals envisaged in setting up the computing model. The KM3NeT
......@@ -105,14 +105,14 @@ flow with a reduction from $5 GB/s$ to $5 MB/s$ per \emph{building
block}. Quasi-on-line reconstruction is performed for selected
events (alerts, monitoring). The output data are temporarily stored on
a persistent medium and distributed with fixed latency (typically less
than few hours) to various computing centres, which altogether
than few hours) to various computing centers, which altogether
constitute Tier 1, where events are reconstructed by various fitting
models (mostly searching for shower-like or track-like
patterns). Reconstruction further reduces the data rate to about $1
MB/s$ per \emph{building block}. In addition, Tier 1 also takes care
of continuous detector calibration, to optimise pointing accuracy (by
working out the detector shape that changes because of water currents)
and photomultiplier operation. Local analysis centres, logically
and photomultiplier operation. Local analysis centers, logically
allocated in Tier 2 of the computing model, perform physics analysis
tasks. A database system interconnects the three tiers by distributing
detector structure, qualification and calibration data, run
......@@ -124,10 +124,10 @@ book-keeping information, and slow-control and monitoring data.
\label{fig:compmodel}
\end{figure}
KM3NeT exploits computing resources in several centres and in the
KM3NeT exploits computing resources in several centers and in the
GRID, as sketched in Fig.~\ref{fig:compmodel}. The conceptually simple
flow of the three-tier model is then realised by splitting the tasks
of Tier 1 to different processing centres, also optimising the data
of Tier 1 to different processing centers, also optimising the data
flow and the network path. In particular, CNAF and CC-IN2P3 aim at being
mirrors of each other, containing the full data set at any moment. The
implementation for the data transfer from CC-IN2P3 to CNAF (via an
......@@ -144,9 +144,9 @@ for a while becuse of the lack of human resources.
\section{Data size and CPU requirements}
Calibration and reconstruction work in batches. The raw data related
to the batch are transferred to the centre that is in charge of the
to the batch are transferred to the center that is in charge of the
processing before it starts. In addition, a rolling buffer of data is
stored at each computing centre, e.g.\ the last year of data taking.
stored at each computing center, e.g.\ the last year of data taking.
Simulation has special needs because the input is negligible, but the
computing power required is very large compared to the needs of
......@@ -179,9 +179,8 @@ Thanks to the modular design of the detector, it is possible to quote
the computing requirements of KM3NeT per \emph{building block}, having
in mind that the ARCA programme corresponds to two \emph{building
blocks} and ORCA to one. Not all software could be benchmarked, and
some estimates are derived by scaling from ANTARES ones. When needed,
a conversion factor about 10 between cores and HEPSpec2006 (HS06) is
used in the following.
some estimates are derived by scaling from ANTARES ones.
In the following, a conversion factor of about 10 between cores and HEPSpec2006 (HS06) is used.
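As a purely illustrative example of this convention (the core count below is hypothetical, not a KM3NeT requirement):
\begin{verbatim}
# Cores -> HEPSpec2006, using the ~10 HS06/core factor mentioned above.
HS06_PER_CORE = 10
cores_needed = 300   # hypothetical example, not a KM3NeT figure
print(f"{cores_needed} cores correspond to about "
      f"{cores_needed * HS06_PER_CORE} HS06")
\end{verbatim}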
\begin{table}
\caption{\label{cpu}Yearly resource requirements per \emph{building block}.}
......@@ -211,7 +210,7 @@ resources at CNAF has been so far below the figures for a
units are added in the following years. KM3NeT software that
runs on the GRID can use CNAF computing nodes in opportunistic mode.
Already now, the data handling policy to safeguard the products of Tier-0
Already now, the data handling policy to safeguard the products of Tier 0
is in place. Automatic synchronization from each shore station to both
CC-IN2P3 and CNAF runs daily and provides two maximally separated
paths from the data production site to final storage places. Mirroring
......@@ -219,11 +218,11 @@ and redundancy preservation between CC-IN2P3 and CNAF are foreseen and
currently at an early stage.
CNAF has already added relevant contributions to KM3NeT in terms of
know-how for IT solution deployment, e.g.~the above-mentioned synchronisation, software development solutions and the software-defined network at the Tier-0 at
know-how for IT solution deployment, e.g.~the above-mentioned synchronisation, software development solutions and the software-defined network at the Tier 0 at
the Italian site. Setting up Software Defined Networks (SDN) for data
acquisition deserves a special mention. The SDN technology\cite{SDN} is used to
configure and operate the mission-critical fabric of switches/routers
that interconnects all the on-shore resources in Tier-0 stations. The
that interconnects all the on-shore resources in Tier 0 stations. The
KM3NeT DAQ is built around switches compliant with the OpenFlow 1.3
protocol and managed by dedicated controller servers. With a limited
number of Layer-2 forwarding rules, developed on purpose for the KM3NeT
......
contributions/lhcb/T0T1.png

178 KiB

contributions/lhcb/T0T1_MC.png

146 KiB

contributions/lhcb/T2.png

159 KiB

\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{LHCb Computing at CNAF}
\author{S. Perazzini$^1$, C. Bozzi$^{2,3}$}
\address{$^1$ INFN Sezione di Bologna, Bologna, IT}
\address{$^2$ CERN, Gen\`eve, CH}
\address{$^3$ INFN Sezione di Ferrara, Ferrara, IT}
\ead{stefano.perazzini@bo.infn.it, concezio.bozzi@fe.infn.it}
\begin{abstract}
In this document a summary of the LHCb computing activities during 2018 is reported. The usage of the CPU, disk and tape resources spread among the various computing centers is analysed, with particular attention to the performance of the INFN Tier 1 at CNAF. Projections of the necessary resources in the years to come are also briefly discussed.
\end{abstract}
\section{Introduction}
The Large Hadron Collider beauty (LHCb) experiment is dedicated to the study of $c$- and $b$-physics at the Large Hadron Collider (LHC) accelerator at CERN. Exploiting the large production cross section of $b\bar{b}$ and $c\bar{c}$ quark pairs in proton-proton ($p-p$) collisions at the LHC, LHCb is able to analyse unprecedented quantities of heavy-flavoured hadrons, with particular attention to their $C\!P$-violation observables. Besides its core programme, the LHCb experiment is also able to perform analyses of production cross sections and electroweak physics in the forward region. To date, the LHCb collaboration is composed of about 1350 people from 79 institutes in 18 countries. More than 50 physics papers were published by LHCb during 2018, for a total of almost 500 papers since the start of its activities in 2010.
The LHCb detector is a single-arm forward spectrometer covering the pseudorapidity range between 2 and 5. The detector includes a
high-precision tracking system consisting of a silicon-strip vertex detector surrounding the $p-p$ interaction region, a large-area
silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip
detectors and straw drift tubes placed downstream. The combined tracking system provides a momentum measurement with relative
uncertainty that varies from 0.4\% at 5 GeV/$c$ to 0.6\% at 100 GeV/$c$, and impact parameter resolution of 20 $\mu$m for tracks with high transverse momenta. Charged hadrons are identified using two ring-imaging Cherenkov detectors. Photon, electron and hadron candidates are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers. The trigger consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction.
\section{Overview of LHCb computing activities in 2018}
The usage of offline computing resources involved: (a) the production of simulated events, which runs continuously; (b) running user jobs, which is also continuous; (c) stripping cycles before and after the end of data taking; (d) processing (i.e. reconstruction and stripping of the full stream, µDST streaming of the TURBO stream) of data taken in 2018 in proton-proton and heavy ion collisions; (e) centralized production of ntuples for analysis working groups.
Activities related to the 2018 data taking were tested in May and started at the beginning of the LHC physics run in mid-June. Steady processing and export of the data transferred from the pit to offline were observed throughout.
We recall that LHCb implemented in Run 2 a trigger strategy by which the high-level trigger is split into two parts. The first one (HLT1), synchronous with data taking, writes events at a 150 kHz output rate into a temporary disk buffer located on the HLT farm nodes. Real-time calibrations and alignments are then performed and used in the second high-level trigger stage (HLT2), where event reconstruction algorithms as close as possible to those run offline are applied and event selection takes place.
Events passing the high-level trigger selections are sent offline, either via a FULL stream of RAW events, which are then reconstructed and processed as in Run 1, or via a TURBO stream, which directly records the results of the online reconstruction on tape. TURBO data are subsequently reformatted into a µDST format that does not require further processing, are stored on disk and can be used right away for physics analysis.
The information saved for an event of the TURBO stream is customizable and can range from information related to the signal candidates only to the full event. The average size of a TURBO event is about 30 kB, to be compared with a size of 60 kB for the full event. The TURBO output is also split into O(5) streams, in order to optimize data access.
The offline reconstruction of the FULL stream for proton collision data ran from May until November. The reconstruction of heavy-ion collision data was run in December.
A full re-stripping of the 2015, 2016 and 2017 proton collision data, started in autumn 2017, ended in April 2018. A stripping cycle of the 2015 lead collision data was also performed in that period. The stripping cycle concurrent with the 2018 proton collision data taking started in June and ran continuously until November.
The INFN Tier 1 center at CNAF was in downtime from November 2017, due to a major flood incident. However, the site was again fully available in March 2018, allowing the completion of the stripping cycles on hold, which were waiting for the data located at CNAF (about 20\% of the total). Despite the unavailability of CNAF resources in the first months of 2018, the site performed excellently for the rest of the year, as testified by the numbers reported in this document.
As in previous years, LHCb continued to make use of opportunistic resources, which are not pledged to WLCG but significantly contributed to the overall usage.
\section{Resource usage in 2018}
Table~\ref{tab:pledges} shows the resources pledged for LHCb at the various tier levels for the 2018 period.
\begin{table}[htbp]
\caption{LHCb 2018 WLCG pledges.}
\centering
\begin{tabular}{lccc}
\hline
2018 & CPU & Disk & Tape \\
& kHS06 & PB & PB \\
\hline
Tier 0 & 88 & 11.4 & 33.6 \\
Tier 1 & 250 & 26.2 & 56.9 \\
Tier 2 & 164 & 3.7 & \\ \hline
Total WLCG & 502 & 41.3 & 90.5\\ \hline
\end{tabular}
\label{tab:pledges}
\end{table}
The usage of WLCG CPU resources by LHCb is obtained from the different views provided by the EGI Accounting portal. The CPU usage is presented in Figure~\ref{fig:T0T1} for the Tier 0 and Tier 1 sites and in Figure~\ref{fig:T2} for Tier 2 sites. The same data is presented in tabular form in Table~\ref{tab:T0T1} and Table~\ref{tab:T2}, respectively.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{T0T1.png}
\end{center}
\caption{\label{fig:T0T1}Monthly CPU work provided by the Tier 0 and
Tier 1 centers to LHCb during 2018.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{T2.png}
\end{center}
\caption{\label{fig:T2}Monthly CPU work provided by the Tier 2 centers to LHCb during 2018.}
\end{figure}
\begin{table}[htbp]
\caption{Average CPU power provided by the Tier 0 and the Tier 1
centers to LHCb during 2018.}
\centering
\begin{tabular}{lcc}
\hline
$<$Power$>$ & Used & Pledge \\
& kHS06 & kHS06 \\
\hline
CH-CERN & 141.0 & 88 \\
DE-KIT & 51.3 & 42.2 \\
ES-PIC & 12.8 & 14.8 \\
FR-CCIN2P3 & 43.2 & 30.4 \\
IT-INFN-CNAF & 64.1 & 46.8 \\
NL-T1 & 38.0 & 24.6 \\
RRC-KI-T1 & 22.0 & 16.4 \\
UK-T1-RAL & 71.7 & 74.8 \\
\hline
Total & 447.6 & 338.1 \\
\hline
\end{tabular}
\label{tab:T0T1}
\end{table}
\begin{table}[htbp]
\caption{Average CPU power provided by the Tier 2
centers to LHCb during 2018.}
\centering
\begin{tabular}{lcc}
\hline
$<$Power$>$ & Used & Pledge \\
& kHS06 & kHS06 \\
\hline
China & 0.3 & 0 \\
Brazil & 11.5 & 17.2 \\
France & 27.1 & 22.9 \\
Germany & 9.3 & 8.1 \\
Israel & 0.2 & 0 \\
Italy & 8.6 & 26.0 \\
Poland & 7.6 & 4.1 \\
Romania & 2.8 & 6.5 \\
Russia & 16.0 & 19.0 \\
Spain & 7.9 & 7.0 \\
Switzerland & 29.6 & 24.0 \\
UK & 85.7 & 29.3 \\
\hline
Total & 206.3 & 164.0 \\
\hline
\end{tabular}
\label{tab:T2}
\end{table}
The average power used at Tier 0 + Tier 1 sites is about 32\% higher than the pledges. The average power used at Tier 2 sites is about 26\% higher than the pledges.
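The percentages above follow directly from the totals in Tables~\ref{tab:T0T1} and~\ref{tab:T2}; a minimal sketch of the arithmetic:
\begin{verbatim}
# Used vs. pledged CPU power, from the totals of the two tables above.
used_t0t1, pledged_t0t1 = 447.6, 338.1   # kHS06, Tier 0 + Tier 1
used_t2,   pledged_t2   = 206.3, 164.0   # kHS06, Tier 2
excess_t0t1 = 100 * (used_t0t1 / pledged_t0t1 - 1)   # ~32%
excess_t2   = 100 * (used_t2   / pledged_t2   - 1)   # ~26%
print(f"Tier 0+1: +{excess_t0t1:.0f}%   Tier 2: +{excess_t2:.0f}%")
\end{verbatim}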
The average CPU power accounted for by WLCG (including Tier 0/1 + Tier 2) amounts to 654 kHS06, to be compared to the 502 kHS06 of estimated needs quoted in Table~\ref{tab:pledges}. The usage at Tier 0 and Tier 1 sites is generally higher than the pledges. The LHCb computing model is flexible enough to use computing resources for all production workflows wherever available. It is important to note that this is true also for CNAF, even though it started to contribute to the computing activities only in March, after the recovery from the incident. Since then the CNAF Tier 1 has offered great stability, leading to maximal efficiency in the overall exploitation of the resources. The total amount of CPU used at the Tier 0 and Tier 1 centers is detailed in Figure~\ref{fig:T0T1_MC}, showing that about 76\% of the CPU work is due to Monte Carlo simulation. The same plot shows the start of a stripping campaign in March. This corresponds to the recovery of the backlog in the re-stripping of the Run 2 data collected in 2015-2017, caused by the unavailability of CNAF after the incident of November 2017. As visible from the plot, the backlog was recovered by the end of April 2018, before the restart of data-taking operations. Although all the other Tier 1 centers contributed to reprocessing these data, their recall from tape was done exclusively at CNAF. Approximately 580 TB of data were recalled from tape in about 6 weeks, with a maximum throughput of about 250 MB/s.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{T0T1_MC.png}
\end{center}
\caption{\label{fig:T0T1_MC}Usage of LHCb resources at Tier 0 and Tier 1 sites during 2018. The plot shows the normalized CPU usage (kHS06) for the various activities.}
\end{figure}
Since the start of data taking in May 2018, tape storage grew by about 16.7 PB. Of these, 9.5 PB were due to newly collected RAW data. The rest was due to RDST (2.6 PB) and ARCHIVE (4.6 PB), the latter coming from the archival of Monte Carlo productions, the re-stripping of former real data, and new Run 2 data. The total tape occupancy as of December 31st 2018 is 68.9 PB, of which 38.4 PB are used for RAW data, 13.3 PB for RDST and 17.2 PB for archived data. This is 12.9\% lower than the original request of 79.2 PB. The total tape occupancy at CNAF at the end of 2018 was about 9.3 PB, of which 3.3 PB of RAW data, 3.6 PB of ARCHIVE and 2.4 PB of RDST. This corresponds to an increase of about 2.3 PB with respect to the end of 2017. These numbers are in agreement with the share of resources expected from CNAF.
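The tape figures above are internally consistent, as the following sketch of the arithmetic shows (all numbers are those quoted in this paragraph):
\begin{verbatim}
# Cross-check of the 2018 tape figures quoted above (PB).
growth  = 9.5 + 2.6 + 4.6      # RAW + RDST + ARCHIVE      -> 16.7 PB
total   = 38.4 + 13.3 + 17.2   # occupancy on Dec 31, 2018 -> 68.9 PB
request = 79.2                 # original 2018 request
shortfall = 100 * (request - total) / request   # ~13%, i.e. the quoted 12.9%
print(f"growth {growth:.1f} PB, total {total:.1f} PB, "
      f"{shortfall:.1f}% below the original request")
\end{verbatim}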
\begin{table}[htbp]
\caption{Disk Storage resource usage as of February 11$^{\rm th}$ 2019 for the Tier 0 and Tier 1 centers. The top row is taken from the LHCb accounting, the other ones (used, available and installed capacity) are taken from the recently commissioned WLCG Storage Space Accounting tool. The 2018 pledges are shown in the last row.}
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|cc|ccccccc|}
\hline
Disk (PB) & CERN & Tier 1s & CNAF & GRIDKA & IN2P3 & PIC & RAL & RRCKI & SARA \\
\hline
LHCb accounting & 6.00 & 19.78 & 4.33 & 3.32 & 2.77 & 1.21 & 4.55 & 1.63 & 1.96 \\
\hline
SRM T0D1 used & 6.32 & 19.88 & 4.37 & 3.34 & 2.77 & 1.25 & 4.54 & 1.69 & 1.92 \\
SRM T0D1 free & 2.08 & 2.87 & 0.93 & 0.44 & 0.36 & 0.11 & 0.26 & 0.71 & 0.05 \\
SRM T1D0 (used+free) & 1.40 & 2.25 & 1.30 & 0.15 & 0.03 & 0.02 & 0.63 & 0.09 & 0.03 \\
\hline
SRM T0D1+T1D0 total & 9.80 & 25.00 & 6.60 & 3.93 & 3.16 & 1.38 & 5.43 & 2.49 & 2.01 \\
\hline
Pledge '18 & 11.4 & 26.25 & 5.61 & 4.01 & 3.20 & 1.43 & 7.32 & 2.30 & 2.31 \\
\hline
\end{tabular}\label{tab:disk}
}
\end{center}
\end{table}
Table~\ref{tab:disk} shows the situation of disk storage resources at CERN and at the Tier 1 sites as a whole, as well as at each individual Tier 1 site, as of February 11$^{\rm th}$ 2019. The used space includes derived data, i.e. DST and micro-DST of both real and simulated data, and space reserved for users. The latter accounts for 1.2 PB in total, 0.9 PB of which are used. The SRM disk used and SRM disk free information concerns only permanent disk storage (previously known as ``T0D1''). The first two lines show a good agreement between what the sites report and what the LHCb accounting (first line) reports. The sum of the Tier 0 and Tier 1 2018 pledges amounts to 37.7 PB. The available disk space is 35 PB in total, 26 PB of which are used to store real and simulated datasets and user data. A total of 3.7 PB is used as tape buffer; the remaining 5 PB are free and will be used to store the output of the legacy stripping campaigns of Run 1 and Run 2 data that are currently being prepared. The disk space available at CNAF is about 6.6 PB, about 18\% above the pledge.
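The CNAF figure quoted above follows from Table~\ref{tab:disk}:
\begin{verbatim}
# CNAF disk: total provided vs. 2018 pledge (from the disk table above).
cnaf_total_pb  = 6.60   # SRM T0D1 + T1D0 total at CNAF
cnaf_pledge_pb = 5.61   # 2018 pledge
excess = 100 * (cnaf_total_pb / cnaf_pledge_pb - 1)
print(f"CNAF disk is {excess:.0f}% above the pledge")   # ~18%
\end{verbatim}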
In summary, the usage of computing resources in the 2018 calendar year has been quite smooth for LHCb. Simulation is the dominant activity in terms of CPU work. Additional unpledged resources, as well as cloud, on-demand and volunteer computing resources, were also successfully used. They were essential
in providing CPU work during the outage of the CNAF Tier 1 center. The INFN Tier 1 at CNAF came back to its fully operational status in March 2018. After that, the backlog in the re-stripping campaign due to the unavailability of data stored at CNAF was recovered, thanks also to the contribution of other sites, in time for the restart of data taking. After March 2018, CNAF operated in a very efficient and reliable way, even exceeding the pledged resources in terms of delivered CPU power.
\section{Expected growth of resources in 2020-2021}
In terms of CPU requirements, the different activities result in CPU work estimates for 2020-2021, that are apportioned between the different Tiers taking into account the computing model constraints and also capacities that are already installed. This results in the requests shown in Table~\ref{tab:req_CPU} together with the pledged resources for 2019. The CPU work required at CNAF would correspond to about 18\% of the total CPU requested at Tier 1s+Tier 2s sites.
\begin{table}[htbp]
\centering
\caption{CPU power requested at the different Tiers in 2020-2021. Pledged resources for 2019 are also reported}
\label{tab:req_CPU}
\begin{tabular}{lccc}
\hline
CPU power (kHS06) & 2019 & 2020 & 2021 \\
\hline
Tier 0 & 86 & 98 & 125\\
Tier 1 & 268 & 328 & 409\\
Tier 2 & 193 & 185 & 229\\
\hline
Total WLCG & 547 & 611 & 763\\
\hline
\end{tabular}
\end{table}
The forecast total disk and tape space usage at the end of the years 2019-2021 is broken down into fractions to be provided by the different Tiers. These numbers are shown in Table~\ref{tab:req_disk} for disk and in Table~\ref{tab:req_tape} for tape. The disk resources required at CNAF would be about 18\% of those requested for Tier 1 + Tier 2 sites, while for tape storage CNAF is expected to provide about 24\% of the total tape request to Tier 1 sites.
\begin{table}[htbp]
\centering
\caption{LHCb disk request for each Tier level in 2020-2021. Pledged resources for 2019 are also shown.}
\label{tab:req_disk}
\begin{tabular}{lccc}
\hline
Disk (PB) & 2019 & 2020 & 2021 \\
\hline
Tier 0 & 13.4 & 17.2 & 19.5 \\
Tier 1 & 29.0 & 33.2 & 39.0 \\
Tier 2 & 4 & 7.2 & 7.5 \\
\hline
Total WLCG & 46.4 & 57.6 & 66.0 \\
\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{LHCb tape request for each Tier level in 2020-2021. Pledged resources for 2019 are also reported.}
\label{tab:req_tape}
\begin{tabular}{lccc}
\hline
Tape (PB) & 2019 & 2020 & 2021 \\
\hline
Tier 0 & 35.0 & 36.1 & 52.0 \\
Tier 1 & 53.1 & 55.5 & 90.0 \\
\hline
Total WLCG & 88.1 & 91.6 & 142.0 \\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
A description of the LHCb computing activities during 2018 has been given, with particular emphasis on the usage of resources and on the forecasts of resource needs until 2021. As in previous years, the CNAF Tier 1 center gave a substantial contribution to LHCb computing in terms of CPU work and storage made available to the collaboration. This achievement is particularly important this year, as CNAF was recovering from the major incident of November 2017 that unfortunately interrupted its activities. The effects of the CNAF unavailability have been overcome thanks also to extra efforts from other sites and to the opportunistic usage of non-WLCG resources. The main consequence of the incident, in terms of LHCb operations, has been the delay in the re-stripping campaign of the data collected during 2015-2017. The data that were stored at CNAF (approximately 20\% of the total) were processed when the site resumed operations in March 2018. It is worth mentioning that, despite the delay, the re-stripping campaign was completed before the start of data taking, according to the predicted schedule, avoiding further stress on the LHCb computing operations. Emphasis should also be put on the fact that an almost negligible amount of data was lost in the incident and, in any case, it was possible to recover it from backup copies stored at other sites.
\end{document}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{The LHCf experiment}
\author{A Tiberio$^{2,1}$, O Adriani$^{2,1}$, E Berti $^{2,1}$, L Bonechi$^{1}$, M Bongi$^{2,1}$, R D'Alessandro$^{2,1}$, S Ricciarini$^{1,3}$, and A Tricomi$^{4,5}$ for the LHCf Collaboration}
\address{$^1$ INFN, Section of Florence, I-50019 Sesto Fiorentino, Florence, Italy}
\address{$^2$ Department of Physics, University of Florence, I-50019 Sesto Fiorentino, Florence, Italy}
\address{$^3$ IFAC-CNR, I-50019 Sesto Fiorentino, Florence, Italy}
\address{$^4$ INFN, Section of Catania, I-95131 Catania, Italy}
\address{$^5$ Department of Physics, University of Catania, I-95131 Catania, Italy}
\ead{alessio.tiberio@fi.infn.it}
\begin{abstract}
The LHCf experiment is dedicated to the measurement of very forward particle production in high-energy hadron-hadron collisions at the LHC, with the aim of improving the models of cosmic-ray air shower development. Most of the simulations of particle collisions and detector response are produced exploiting the resources available at CNAF. The role of CNAF and the main recent results of the experiment are discussed in the following.
\end{abstract}
\section{Introduction}
The LHCf experiment is dedicated to the measurement of very forward particle production in high-energy hadron-hadron collisions at the LHC. The main purpose of LHCf is to improve the performance of the hadronic interaction models, which are one of the important ingredients of the simulations of the Extensive Air Showers (EAS) produced by primary cosmic rays.
Since 2009 the LHCf detector has taken data in different configurations of the LHC: p-p collisions at center-of-mass energies of 900\,GeV, 2.76\,TeV, 7\,TeV and 13\,TeV, and p-Pb collisions at $\sqrt{s_{NN}}\,=\,5.02$\,TeV and 8.16\,TeV. The main results obtained in 2018 are briefly presented in the next paragraphs.
\section{The LHCf detector}
The LHCf detector is made of two independent electromagnetic calorimeters placed along the beam line at 140\,m on both sides of the ATLAS Interaction Point, IP1 \cite{LHCf_experiment, LHCf_detector}. Each of the two detectors, called Arm1 and Arm2, contains two separate calorimeter towers, allowing the reconstruction of neutral-pion events decaying into pairs of gamma rays to be optimized. During data taking the LHCf detectors are installed in the so-called ``recombination chambers'', where the beam pipe of IP1 splits into two separate pipes, thus allowing small detectors to be inserted right on the interaction line (this position is shared with the ATLAS ZDC e.m. modules). For this reason the size of the calorimeter towers is very limited (a few centimeters). Because of the performance needed to study very high energy particles with the precision required to discriminate between different hadronic interaction models, careful simulations of particle collisions and of the detector response are mandatory. In particular, due to the tiny transverse size of the detectors, large effects are observed due to e.m. shower leakage in and out of the calorimeter towers. Most of the simulations produced by the LHCf Collaboration for the study and calibration of the Arm2 detector have been run exploiting the resources made available at CNAF.
\section{Results obtained in 2018}
During 2018 no experimental operations were performed in the LHC tunnel or in the SPS experimental area, so all the work was concentrated on the analysis of the data collected during the 2015 operation in p-p collisions at 13 TeV and during the 2016 operation in p-Pb collisions at 8.16 TeV.
The final results on photon and neutron production spectra in proton-proton collisions at $\sqrt{s} =$ 13 TeV in the very forward region ($8.81 < \eta < 8.99$ and $\eta > 10.94$ for photons, $8.81 < \eta < 9.22$ and $\eta > 10.76$ for neutrons, where $\eta$ is the pseudorapidity of the particle\footnote{In accelerator experiments the pseudorapidity of a particle is defined as $\eta = - \ln [ \tan(\theta / 2) ]$, where $\theta$ is the angle between the particle momentum and the beam axis.}) were published in Physics Letters B and in the Journal of High Energy Physics, respectively \cite{LHCf_photons, LHCf_neutrons}.
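For illustration only, the definition in the footnote can be used to translate the quoted pseudorapidity boundaries back into polar angles with respect to the beam axis; the sketch below is a simple numerical transcription of that formula, not part of the LHCf analysis software:
\begin{verbatim}
# Polar angle corresponding to a given pseudorapidity,
# inverting eta = -ln(tan(theta/2)) from the footnote above.
import math

def theta_from_eta(eta):
    return 2.0 * math.atan(math.exp(-eta))   # radians

for eta in (8.81, 8.99, 10.94):   # photon-analysis boundaries quoted above
    theta_urad = 1e6 * theta_from_eta(eta)
    print(f"eta = {eta:5.2f} -> theta = {theta_urad:.0f} microrad")
\end{verbatim}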
These are the first published results of the collaboration at the highest available collision energy of 13 TeV at the LHC.
In addition to proton-proton results, preliminary results for photon spectrum in proton-lead collisions at $\sqrt{s_{NN}} = 8.16$ TeV were obtained and presented in several international conferences.
\section{LHCf simulations and data processing}
A full LHCf event involves two kinds of simulations: the first is produced making use of the COSMOS and EPICS libraries, the second making use of the CRMC toolkit. In both cases we used the generators most commonly employed in cosmic-ray physics. For the second group only the secondary particles produced in the collisions were considered, whereas for the first group the transport through the beam pipe and the detector interactions were simulated as well. For this purpose, all this software was first installed on the CNAF dedicated machine, then we performed some debugging and finally we interactively ran some test simulations.
In order to optimize the usage of resources, the simulation production was shared between the Italian and Japanese sides of the collaboration. For this reason, the machine was also used to transfer data from/to the Japanese servers.
In addition to the simulation activity, CNAF resources were important for data analysis, for both experimental and simulated files. This work required applying the whole reconstruction chain, from raw data up to a ROOT file containing all the relevant physics quantities reconstructed from the detector information. For this purpose, the LHCf analysis software was installed, debugged and continuously updated on the system. Because the reconstruction of a single file can take several hours and the number of files to be reconstructed is large, the usage of the queue dedicated to LHCf was necessary to accomplish this task. ROOT files were then transferred to local PCs in Firenze, in order to have more flexibility in the final analysis steps, which do not require long computing times.
In 2018, the CNAF resources were mainly used by LHCf for mass production of MC simulations needed for the $\pi^0$ analysis of LHC data relative to proton-proton collisions at $\sqrt{s} = 13\,$TeV.
In order to extend the rapidity coverage, the $\pi^0$ analysis also uses the data acquired with the detector shifted 5 mm upward with respect to the nominal position.
As a consequence, all the MC simulations involving the detector have to be generated again with that modified geometry.
The full sample of $10^8$ collisions was generated for the QGSJET model, while about 50\% of the EPOS sample was completed.
\section*{References}
\begin{thebibliography}{9}
\bibitem{LHCf_experiment} O. Adriani {\it et~al.}, JINST \textbf{3}, S08006 (2008)
\bibitem{LHCf_detector} O. Adriani {\it et~al.}, JINST \textbf{5}, P01012 (2010)
\bibitem{LHCf_photons} O. Adriani {\it et~al.}, Physics Letters B \textbf{780} (2018) 233–239
\bibitem{LHCf_neutrons} O. Adriani {\it et~al.}, J. High Energ. Phys. (2018) \textbf{2018}: 73.
\end{thebibliography}
\end{document}
......@@ -3,9 +3,9 @@
\begin{document}
\title{CSES-Limadou at CNAF}
\author{Matteo Merg\'e}
\author{Matteo Merg\'e$^1$}
\address{Agenzia Spaziale Italiana, Space Science Data Center ASI-SSDC \newline via del politecnico 1, 00133, Rome, Italy }
\address{$^1$ Agenzia Spaziale Italiana, Space Science Data Center ASI-SSDC, Rome, IT}
\ead{matteo.merge@roma2.infn.it, matteo.merge@ssdc.asi.it}
......@@ -21,7 +21,7 @@ The High-Energy Particle Detector (HEPD), developed by the INFN, detects electro
The instrument consists of several detectors. Two planes of double-sided silicon microstrip sensors placed on top of the instrument provide the direction of the incident particle. Just below, two layers of plastic scintillators, one of which is thin and segmented, provide the trigger; they are followed by a calorimeter made of 16 further scintillators and a layer of LYSO sensors. A scintillator veto system completes the instrument.
\section{HEPD Data}
The reconstruction occurs in three phases, which determine three different data formats, namely 0, 1 and 2, with increasing degree of abstraction. This structure is reflected on the data-persistency format, as well as on the software design. Raw data as downlinked from the CSES. They include ADC counts from the silicon strip, detector, from trigger scintillators, from energy scintillators and from LYSO crystals. ADC counts from lateral veto are also there, together with other very low-level information. Data are usually stored in ROOT format. Level 1 data contain all detector responses after calibration and equalization. The tracker response is clustered (if not already in this format at level0) and corrected for the signal integration time. All scintillator responses are calibrated and equalized. Information on the event itself like time, trigger flags, dead/alive time, etc… are directly inherited from level 0. Data are usually stored in ROOT format. Level 2 data contain higher level information, used to compute final data products. Currently the data are transferred from China as soon as they are downlinked from the CSES satellite and are processed at a dedicated facility at ASI Space Science Data Center (ASI-SSDC \cite{ssdc}) and then distributed to the analysis sites includind CNAF.
The reconstruction occurs in three phases, which determine three different data formats, namely level 0, 1 and 2, with an increasing degree of abstraction. This structure is reflected in the data-persistency format, as well as in the software design. Level 0 data are the raw data as downlinked from CSES. They include ADC counts from the silicon strip detector, from the trigger scintillators, from the energy scintillators and from the LYSO crystals. ADC counts from the lateral veto are also included, together with other very low-level information. Data are usually stored in ROOT format. Level 1 data contain all detector responses after calibration and equalization. The tracker response is clustered (if not already in this format at level 0) and corrected for the signal integration time. All scintillator responses are calibrated and equalized. Information on the event itself, such as time, trigger flags, dead/alive time, etc., is directly inherited from level 0. Data are usually stored in ROOT format. Level 2 data contain higher-level information, used to compute the final data products. Currently the data are transferred from China as soon as they are downlinked from the CSES satellite, processed at a dedicated facility at the ASI Space Science Data Center (ASI-SSDC \cite{ssdc}) and then distributed to the analysis sites including CNAF.
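A purely schematic sketch of the level-0 to level-1 step described above is given below; all names and calibration quantities are hypothetical and do not reflect the actual HEPD software, only the structure (calibrate and equalize each detector response, inherit the event-level information from level 0) follows the text:
\begin{verbatim}
# Schematic, hypothetical level-0 -> level-1 step: calibrate/equalize
# the detector responses and inherit event-level information unchanged.
def to_level1(level0_event, gains, pedestals):
    return {
        "time":  level0_event["time"],           # inherited from level 0
        "flags": level0_event["trigger_flags"],  # inherited from level 0
        "tracker": [
            (adc - pedestals[ch]) * gains[ch]    # equalized signal
            for ch, adc in level0_event["tracker_adc"].items()
        ],
    }
\end{verbatim}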
\section{HEPD Data Analysis at CNAF}
Level 2 data of the HEPD detector are currently produced daily in ROOT format from the raw files. Once a week they are transferred to CNAF using gfal-tools, to be used by the analysis team. Raw data are also transferred to the CNAF facility on a weekly basis and will be moved to tape storage. Most of the data analysis software and tools have been developed to be used at CNAF. Geant4 MC simulations are currently run at CNAF by the collaboration; the facility proved to be crucial to perform the computationally intensive optical-photon simulations needed to simulate the light yield of the plastic scintillators of the detector. Most of the software is written in C++/ROOT, while several attempts to use Machine Learning and Neural Network techniques are pushing the collaboration to use Python more frequently for the analysis.
......