Showing changes with 541 additions and 13 deletions
contributions/lhcb/T0T1.png

178 KiB

contributions/lhcb/T0T1_MC.png

146 KiB

contributions/lhcb/T2.png

159 KiB

\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{LHCb Computing at CNAF}
\author{S. Perazzini$^1$, C. Bozzi$^{2,3}$}
\address{$^1$ INFN Sezione di Bologna, Bologna, IT}
\address{$^2$ CERN, Gen\`eve, CH}
\address{$^3$ INFN Sezione di Ferrara, Ferrara, IT}
\ead{stefano.perazzini@bo.infn.it, concezio.bozzi@fe.infn.it}
\begin{abstract}
In this document a summary of the LHCb computing activities during 2018 is reported. The usage of CPU, disk and tape resources across the various computing centers is analysed, with particular attention to the performance of the INFN Tier 1 at CNAF. Projections of the necessary resources in the years to come are also briefly discussed.
\end{abstract}
\section{Introduction}
The Large Hadron Collider beauty (LHCb) experiment is the experiment dedicated to the study of $c$- and $b$-physics at the Large Hadron Collider (LHC) accelerator at CERN. Exploiting the large production cross section of $b\bar{b}$ and $c\bar{c}$ quark pairs in proton-proton ($p-p$) collisions at the LHC, LHCb is able to analyse unprecedented quantities of heavy-flavoured hadrons, with particular attention to their $C\!P$-violation observables. Besides its core programme, the LHCb experiment is also able to perform analyses of production cross sections and electroweak physics in the forward region. To date, the LHCb collaboration is composed of about 1350 people from 79 institutes in 18 countries around the world. More than 50 physics papers have been published by LHCb during 2018, for a total of almost 500 papers since the start of its activities in 2010.
The LHCb detector is a single-arm forward spectrometer covering the pseudorapidity range between 2 and 5. The detector includes a
high-precision tracking system consisting of a silicon-strip vertex detector surrounding the $p-p$ interaction region, a large-area
silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip
detectors and straw drift tubes placed downstream. The combined tracking system provides a momentum measurement with relative
uncertainty that varies from 0.4\% at 5 GeV/$c$ to 0.6\% at 100 GeV/$c$, and impact parameter resolution of 20 $\mu$m for tracks with high transverse momenta. Charged hadrons are identified using two ring-imaging Cherenkov detectors. Photon, electron and hadron candidates are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers. The trigger consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction.
\section{Overview of LHCb computing activities in 2018}
The usage of offline computing resources involved: (a) the production of simulated events, which runs continuously; (b) running user jobs, which is also continuous; (c) stripping cycles before and after the end of data taking; (d) processing (i.e. reconstruction and stripping of the full stream, µDST streaming of the TURBO stream) of data taken in 2018 in proton-proton and heavy ion collisions; (e) centralized production of ntuples for analysis working groups.
Activities related to the 2018 data taking were tested in May and started at the beginning of the LHC physics run in mid-June. Processing and export of the data transferred from the pit to offline proceeded steadily throughout the run.
We recall that in Run 2 LHCb implemented a trigger strategy in which the high-level trigger is split into two parts. The first one (HLT1), synchronous with data taking, writes events at a 150 kHz output rate into a temporary disk buffer located on the HLT farm nodes. Real-time calibrations and alignments are then performed and used in the second high-level trigger stage (HLT2), where event reconstruction algorithms as close as possible to those run offline are applied and event selection takes place.
Events passing the high-level trigger selections are sent to offline, either via a FULL stream of RAW events, which are then reconstructed and processed as in Run 1, or via a TURBO stream, which directly records the results of the online reconstruction on tape. TURBO data are subsequently reformatted into a µDST format that does not require further processing, are stored on disk and can be used right away for physics analysis.
The information saved for an event of the TURBO stream is customizable and can range from information related to the signal candidates only to the full event. The average size of a TURBO event is about 30 kB, to be compared with a size of 60 kB for the full event. The TURBO output is also split into O(5) streams, in order to optimize data access.
The offline reconstruction of the FULL stream for proton collision data ran from May until November. The reconstruction of heavy-ion collision data was run in December.
A full re-stripping of 2015, 2016 and 2017 proton collision data, started in autumn 2017, ended in April 2018. A stripping cycle of 2015 lead collision data was also performed in that period. The stripping cycle concurrent with the 2018 proton collision data taking started in June and ran continuously until November.
The INFN Tier 1 center at CNAF was in downtime from November 2017, due to a major flood incident. However, the site was again fully available in March 2018, allowing the completion of the stripping cycles that were on hold waiting for the data located at CNAF (about 20\% of the total). Despite the unavailability of CNAF resources in the first months of 2018, the site performed excellently for the rest of the year, as testified by the numbers reported in this document.
As in previous years, LHCb continued to make use of opportunistic resources, which are not pledged to WLCG but contributed significantly to the overall usage.
\section{Resource usage in 2018}
Table~\ref{tab:pledges} shows the resources pledged for LHCb at the various tier levels for the 2018 period.
\begin{table}[htbp]
\caption{LHCb 2018 WLCG pledges.}
\centering
\begin{tabular}{lccc}
\hline
2018 & CPU & Disk & Tape \\
& kHS06 & PB & PB \\
\hline
Tier 0 & 88 & 11.4 & 33.6 \\
Tier 1 & 250 & 26.2 & 56.9 \\
Tier 2 & 164 & 3.7 & \\ \hline
Total WLCG & 502 & 41.3 & 90.5\\ \hline
\end{tabular}
\label{tab:pledges}
\end{table}
The usage of WLCG CPU resources by LHCb is obtained from the different views provided by the EGI Accounting portal. The CPU usage is presented in Figure~\ref{fig:T0T1} for the Tier 0 and Tier 1 sites and in Figure~\ref{fig:T2} for Tier 2 sites. The same data is presented in tabular form in Table~\ref{tab:T0T1} and Table~\ref{tab:T2}, respectively.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{T0T1.png}
\end{center}
\caption{\label{fig:T0T1}Monthly CPU work provided by the Tier 0 and
Tier 1 centers to LHCb during 2018.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{T2.png}
\end{center}
\caption{\label{fig:T2}Monthly CPU work provided by the Tier 2 centers to LHCb during 2018.}
\end{figure}
\begin{table}[htbp]
\caption{Average CPU power provided by the Tier 0 and the Tier 1
centers to LHCb during 2018.}
\centering
\begin{tabular}{lcc}
\hline
$<$Power$>$ & Used & Pledge \\
& kHS06 & kHS06 \\
\hline
CH-CERN & 141.0 & 88 \\
DE-KIT & 51.3 & 42.2 \\
ES-PIC & 12.8 & 14.8 \\
FR-CCIN2P3 & 43.2 & 30.4 \\
IT-INFN-CNAF & 64.1 & 46.8 \\
NL-T1 & 38.0 & 24.6 \\
RRC-KI-T1 & 22.0 & 16.4 \\
UK-T1-RAL & 71.7 & 74.8 \\
\hline
Total & 447.6 & 338.1 \\
\hline
\end{tabular}
\label{tab:T0T1}
\end{table}
\begin{table}[htbp]
\caption{Average CPU power provided by the Tier 2
centers to LHCb during 2018.}
\centering
\begin{tabular}{lcc}
\hline
$<$Power$>$ & Used & Pledge \\
& kHS06 & kHS06 \\
\hline
China & 0.3 & 0 \\
Brazil & 11.5 & 17.2 \\
France & 27.1 & 22.9 \\
Germany & 9.3 & 8.1 \\
Israel & 0.2 & 0 \\
Italy & 8.6 & 26.0 \\
Poland & 7.6 & 4.1 \\
Romania & 2.8 & 6.5 \\
Russia & 16.0 & 19.0 \\
Spain & 7.9 & 7.0 \\
Switzerland & 29.6 & 24.0 \\
UK & 85.7 & 29.3 \\
\hline
Total & 206.3 & 164.0 \\
\hline
\end{tabular}
\label{tab:T2}
\end{table}
The average power used at Tier 0 + Tier 1 sites is about 32\% higher than the pledges. The average power used at Tier 2 sites is about 26\% higher than the pledges.
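These fractions follow directly from the totals in Table~\ref{tab:T0T1} and Table~\ref{tab:T2}:
\[
\frac{447.6}{338.1} \simeq 1.32, \qquad \frac{206.3}{164.0} \simeq 1.26 .
\]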
The average CPU power accounted for by WLCG (including Tier 0/1 + Tier 2) amounts to 654 kHS06, to be compared to the 502 kHS06 estimated needs quoted in Table~\ref{tab:pledges}. The Tier 0 and Tier 1 usage is generally higher than the pledges. The LHCb computing model is flexible enough to use computing resources for all production workflows wherever available. It is important to note that this is true also for CNAF, even though it started contributing to the computing activities only in March, after the recovery from the incident. Since then, the CNAF Tier 1 has offered great stability, leading to maximal efficiency in the overall exploitation of the resources. The total amount of CPU used at Tier 0 and Tier 1 centers is detailed in Figure~\ref{fig:T0T1_MC}, showing that about 76\% of the CPU work is due to Monte Carlo simulation. The same plot shows the start of a stripping campaign in March, corresponding to the recovery of the backlog in the re-stripping of the Run 2 data collected in 2015-2017, accumulated because of the unavailability of CNAF after the incident of November 2017. As visible from the plot, the backlog was recovered by the end of April 2018, before the restart of data-taking operations. Although all the other Tier 1 centers contributed to reprocessing these data, the recall from tape was done exclusively at CNAF. Approximately 580 TB of data were recalled from tape in about 6 weeks, with a maximum throughput of about 250 MB/s.
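For reference, recalling 580 TB in about six weeks corresponds to an average sustained rate of
\[
\frac{580 \times 10^{12}\ \mathrm{B}}{6 \times 7 \times 86400\ \mathrm{s}} \simeq 160\ \mathrm{MB/s},
\]
consistent with the quoted peak of about 250 MB/s.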
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{T0T1_MC.png}
\end{center}
\caption{\label{fig:T0T1_MC}Usage of LHCb resources at Tier 0 and Tier 1 sites during 2018. The plot shows the normalized CPU usage (kHS06) for the various activities.}
\end{figure}
Since the start of data taking in May 2018, tape storage grew by about 16.7 PB. Of these, 9.5 PB were due to newly collected RAW data. The rest was due to RDST (2.6 PB) and ARCHIVE (4.6 PB), the latter due to the archival of Monte Carlo productions, re-stripping of former real data, and new Run 2 data. The total tape occupancy as of December 31st 2018 is 68.9 PB, of which 38.4 PB are used for RAW data, 13.3 PB for RDST and 17.2 PB for archived data. This is 12.9\% lower than the original request of 79.2 PB. The total tape occupancy at CNAF at the end of 2018 was about 9.3 PB, of which 3.3 PB of RAW data, 3.6 PB of ARCHIVE and 2.4 PB of RDST. This corresponds to an increase of about 2.3 PB with respect to the end of 2017. These numbers are in agreement with the share of resources expected from CNAF.
\begin{table}[htbp]
\caption{Disk Storage resource usage as of February 11$^{\rm th}$ 2019 for the Tier 0 and Tier 1 centers. The top row is taken from the LHCb accounting, the other ones (used, available and installed capacity) are taken from the recently commissioned WLCG Storage Space Accounting tool. The 2018 pledges are shown in the last row.}
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|cc|ccccccc|}
\hline
Disk (PB) & CERN & Tier 1s & CNAF & GRIDKA & IN2P3 & PIC & RAL & RRCKI & SARA \\
\hline
LHCb accounting & 6.00 & 19.78 & 4.33 & 3.32 & 2.77 & 1.21 & 4.55 & 1.63 & 1.96 \\
\hline
SRM T0D1 used & 6.32 & 19.88 & 4.37 & 3.34 & 2.77 & 1.25 & 4.54 & 1.69 & 1.92 \\
SRM T0D1 free & 2.08 & 2.87 & 0.93 & 0.44 & 0.36 & 0.11 & 0.26 & 0.71 & 0.05 \\
SRM T1D0 (used+free) & 1.40 & 2.25 & 1.30 & 0.15 & 0.03 & 0.02 & 0.63 & 0.09 & 0.03 \\
\hline
SRM T0D1+T1D0 total & 9.80 & 25.00 & 6.60 & 3.93 & 3.16 & 1.38 & 5.43 & 2.49 & 2.01 \\
\hline
Pledge '18 & 11.4 & 26.25 & 5.61 & 4.01 & 3.20 & 1.43 & 7.32 & 2.30 & 2.31 \\
\hline
\end{tabular}\label{tab:disk}
}
\end{center}
\end{table}
Table~\ref{tab:disk} shows the situation of disk storage resources at CERN and at the Tier 1 sites, both in aggregate and for each site, as of February 11$^{\rm th}$ 2019. The used space includes derived data, i.e. DST and micro-DST of both real and simulated data, and space reserved for users. The latter accounts for 1.2 PB in total, 0.9 PB of which are used. The SRM disk used and SRM disk free information concerns only permanent disk storage (previously known as ``T0D1''). The first two lines show good agreement between what the sites report and what the LHCb accounting (first line) reports. The sum of the Tier 0 and Tier 1 2018 pledges amounts to 37.7 PB. The available disk space is 35 PB in total, 26 PB of which are used to store real and simulated datasets and user data. A total of 3.7 PB is used as tape buffer; the remaining 5 PB are free and will be used to store the output of the legacy stripping campaigns of Run 1 and Run 2 data that are currently being prepared. The disk space available at CNAF is about 6.6 PB, about 18\% above the pledge.
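As a consistency check, the sum of the 2018 pledges in Table~\ref{tab:disk} is $11.4 + 26.25 \simeq 37.7$ PB, and the ratio of the CNAF installed disk to its pledge is $6.60/5.61 \simeq 1.18$, i.e. about 18\% above the pledge.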
In summary, the usage of computing resources in the 2018 calendar year has been quite smooth for LHCb. Simulation is the dominant activity in terms of CPU work. Additional unpledged resources, as well as clouds, on-demand and volunteer computing resources, were also successfully used. They were essential
in providing CPU work during the outage of the CNAF Tier 1 center. As for the INFN Tier 1 at CNAF, it came back to fully operational status in March 2018. After that, the backlog in the re-stripping campaign, caused by the unavailability of the data stored at CNAF, was recovered, thanks also to the contribution of other sites, in time for the restart of data taking. After March 2018, CNAF operated in a very efficient and reliable way, even overperforming in terms of CPU power with respect to the pledged resources.
\section{Expected growth of resources in 2020-2021}
In terms of CPU requirements, the different activities result in CPU work estimates for 2020-2021, which are apportioned among the different Tiers taking into account the computing model constraints and the capacities that are already installed. This results in the requests shown in Table~\ref{tab:req_CPU}, together with the pledged resources for 2019. The CPU work required at CNAF would correspond to about 18\% of the total CPU requested at Tier 1 and Tier 2 sites.
\begin{table}[htbp]
\centering
\caption{CPU power requested at the different Tiers in 2020-2021. Pledged resources for 2019 are also reported.}
\label{tab:req_CPU}
\begin{tabular}{lccc}
\hline
CPU power (kHS06) & 2019 & 2020 & 2021 \\
\hline
Tier 0 & 86 & 98 & 125\\
Tier 1 & 268 & 328 & 409\\
Tier 2 & 193 & 185 & 229\\
\hline
Total WLCG & 547 & 611 & 763\\
\hline
\end{tabular}
\end{table}
The forecast total disk and tape space usage at the end of 2020 and 2021 is broken down into fractions to be provided by the different Tiers. These numbers are shown in Table~\ref{tab:req_disk} for disk and in Table~\ref{tab:req_tape} for tape. The disk resources required at CNAF would be about 18\% of those requested for Tier 1 and Tier 2 sites, while for tape storage CNAF is expected to provide about 24\% of the total tape request to Tier 1 sites.
\begin{table}[htbp]
\centering
\caption{LHCb disk request for each Tier level in 2020-2021. Pledged resources for 2019 are also shown.}
\label{tab:req_disk}
\begin{tabular}{lccc}
\hline
Disk (PB) & 2019 & 2020 & 2021 \\
\hline
Tier 0 & 13.4 & 17.2 & 19.5 \\
Tier 1 & 29.0 & 33.2 & 39.0 \\
Tier 2 & 4 & 7.2 & 7.5 \\
\hline
Total WLCG & 46.4 & 57.6 & 66.0 \\
\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{LHCb tape request for each Tier level in 2020-2021. Pledged resources for 2019 are also reported.}
\label{tab:req_tape}
\begin{tabular}{lccc}
\hline
Tape (PB) & 2019 & 2020 & 2021 \\
\hline
Tier 0 & 35.0 & 36.1 & 52.0 \\
Tier 1 & 53.1 & 55.5 & 90.0 \\
\hline
Total WLCG & 88.1 & 91.6 & 142.0 \\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
A description of the LHCb computing activities during 2018 has been given, with particular emphasis on the usage of resources and on the forecasts of resource needs until 2021. As in previous years, the CNAF Tier 1 center gave a substantial contribution to LHCb computing in terms of CPU work and storage made available to the collaboration. This achievement is particularly important this year, as CNAF was recovering from the major incident of November 2017 that unfortunately interrupted its activities. The effects of the CNAF unavailability were overcome also thanks to extra efforts from other sites and to the opportunistic usage of non-WLCG resources. The main consequence of the incident, in terms of LHCb operations, has been the delay in the re-stripping campaign of the data collected during 2015-2017. The data that were stored at CNAF (approximately 20\% of the total) were processed when the site restarted operations in March 2018. It is worth mentioning that, despite the delay, the re-stripping campaign was completed before the start of data taking, according to the predicted schedule, avoiding further stress on the LHCb computing operations. It should also be emphasised that an almost negligible amount of data was lost in the incident, and in any case it was possible to recover it from backup copies stored at other sites.
\end{document}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{The LHCf experiment}
\author{A Tiberio$^{2,1}$, O Adriani$^{2,1}$, E Berti $^{2,1}$, L Bonechi$^{1}$, M Bongi$^{2,1}$, R D'Alessandro$^{2,1}$, S Ricciarini$^{1,3}$, and A Tricomi$^{4,5}$ for the LHCf Collaboration}
\address{$^1$ INFN, Section of Florence, I-50019 Sesto Fiorentino, Florence, Italy}
\address{$^2$ Department of Physics, University of Florence, I-50019 Sesto Fiorentino, Florence, Italy}
\address{$^3$ IFAC-CNR, I-50019 Sesto Fiorentino, Florence, Italy}
\address{$^4$ INFN, Section of Catania, I-95131 Catania, Italy}
\address{$^5$ Department of Physics, University of Catania, I-95131 Catania, Italy}
\ead{alessio.tiberio@fi.infn.it}
\begin{abstract}
The LHCf experiment is dedicated to the measurement of very forward particle production in high energy hadron-hadron collisions at the LHC, with the aim of improving the models of cosmic-ray air shower development. Most of the simulations of particle collisions and detector response are produced exploiting the resources available at CNAF. The role of CNAF and the main recent results of the experiment are discussed in the following.
\end{abstract}
\section{Introduction}
The LHCf experiment is dedicated to the measurement of very forward particle production in high energy hadron-hadron collisions at the LHC. The main purpose of LHCf is to improve the performance of the hadronic interaction models, which are one of the important ingredients of the simulations of the Extensive Air Showers (EAS) produced by primary cosmic rays.
Since 2009 the LHCf detector has taken data in different configurations of the LHC: p-p collisions at center of mass energies of 900\,GeV, 2.76\,TeV, 7\,TeV and 13\,TeV, and p-Pb collisions at $\sqrt{s_{NN}}\,=\,5.02$\,TeV and 8.16\,TeV. The main results obtained in 2018 are briefly presented in the next paragraphs.
\section{The LHCf detector}
The LHCf detector is made of two independent electromagnetic calorimeters placed along the beam line at 140\,m on both sides of the ATLAS Interaction Point, IP1 \cite{LHCf_experiment, LHCf_detector}. Each of the two detectors, called Arm1 and Arm2, contains two separate calorimeter towers, allowing the optimization of the reconstruction of neutral pion events decaying into pairs of gamma rays. During data taking the LHCf detectors are installed in the so-called ``recombination chambers'', a place where the beam pipe of IP1 splits into two separate pipes, thus allowing small detectors to be inserted just on the interaction line (this position is shared with the ATLAS ZDC e.m. modules). For this reason the size of the calorimeter towers is very limited (a few centimeters). Because of the performance needed to study very high energy particles with the precision required to discriminate between different hadronic interaction models, careful simulations of particle collisions and of the detector response are mandatory. In particular, due to the tiny transverse size of the detectors, large effects are observed due to e.m. shower leakage in and out of the calorimeter towers. Most of the simulations produced by the LHCf Collaboration for the study and calibration of the Arm2 detector have been run exploiting the resources made available at CNAF.
\section{Results obtained in 2018}
During 2018 no experimental operations were performed in the LHC tunnel or in the SPS experimental area, so all the work was concentrated on the analysis of the data collected during the 2015 operation in p-p collisions at 13 TeV and during the 2016 operation in p-Pb collisions at 8.16 TeV.
The final results of photon and neutron production spectra in proton-proton collisions at $\sqrt{s} =$ 13 TeV in the very forward region ($8.81 < \eta < 8.99$ and $\eta > 10.94$ for photons, $8.81 < \eta < 9.22$ and $\eta > 10.76$ for neutrons, where $\eta$ is the pseudorapidity of the particle\footnote{In accelerator experiments the pseudorapidity of a particle is defined as $\eta = - \ln [ \tan(\theta / 2) ]$, where $\theta$ is the angle between the particle momentum and the beam axis.}) were published in Physics Letters B and in the Journal of High Energy Physics, respectively \cite{LHCf_photons, LHCf_neutrons}.
These are the first published results of the collaboration at the highest available collision energy of 13 TeV at the LHC.
In addition to proton-proton results, preliminary results for photon spectrum in proton-lead collisions at $\sqrt{s_{NN}} = 8.16$ TeV were obtained and presented in several international conferences.
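To put these acceptances in perspective, the definition in the footnote can be inverted as $\theta = 2\arctan(e^{-\eta})$; for the lower edge of the photon acceptance,
\[
\theta(\eta = 8.81) = 2\arctan\!\left(e^{-8.81}\right) \simeq 0.3\ \mathrm{mrad},
\]
which at the 140\,m location of the detectors corresponds to a transverse distance of only about 4 cm from the beam axis, consistent with the few-centimeter size of the calorimeter towers.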
\section{LHCf simulations and data processing}
A full LHCf event involves two kinds of simulations: the first one was produced making use of the COSMOS and EPICS libraries, the second one making use of the CRMC toolkit. In both cases we used the most common generators employed in cosmic ray physics. For the second group only secondary particles produced by collisions were considered, whereas for the first group transport through the beam pipe and detector interactions were simulated as well. For this purpose, all this software was first installed on the CNAF dedicated machine, then we performed some debugging and finally we interactively ran some test simulations.
In order to optimize the usage of resources, the simulation production was shared between the Italian and Japanese sides of the collaboration. For this reason, the machine was also used to transfer data from/to the Japanese servers.
In addition to the simulation activity, CNAF resources were important for data analysis, of both experimental and simulated files. This work required applying the full reconstruction chain, from raw data up to a ROOT file containing all the relevant physics quantities reconstructed from the detector information. For this purpose, the LHCf analysis software was installed, debugged and continuously updated on the system. Because the reconstruction of a single file can take several hours and the number of files to be reconstructed is large, the usage of the queue dedicated to LHCf was necessary to accomplish this task. ROOT files were then transferred to local PCs in Firenze, in order to have more flexibility in the final analysis steps, which do not require long computing time.
In 2018, the CNAF resources were mainly used by LHCf for mass production of MC simulations needed for the $\pi^0$ analysis of LHC data relative to proton-proton collisions at $\sqrt{s} = 13\,$TeV.
In order to extend the rapidity coverage, the $\pi^0$ analysis also uses the data acquired with the detector shifted 5 mm upward with respect to the nominal position.
As a consequence, all the MC simulations involving the detector have to be generated again with the modified geometry.
The full sample of $10^8$ collisions was generated for the QGSJET model, while about 50\% of the EPOS sample was completed.
\section*{References}
\begin{thebibliography}{9}
\bibitem{LHCf_experiment} O. Adriani {\it et~al.}, JINST \textbf{3}, S08006 (2008)
\bibitem{LHCf_detector} O. Adriani {\it et~al.}, JINST \textbf{5}, P01012 (2010)
\bibitem{LHCf_photons} O. Adriani {\it et~al.}, Physics Letters B \textbf{780} (2018) 233–239
\bibitem{LHCf_neutrons} O. Adriani {\it et~al.}, J. High Energ. Phys. (2018) \textbf{2018}: 73.
\end{thebibliography}
\end{document}
@@ -3,9 +3,9 @@
\begin{document}
\title{CSES-Limadou at CNAF}
\author{Matteo Merg\'e}
\author{Matteo Merg\'e$^1$}
\address{Agenzia Spaziale Italiana, Space Science Data Center ASI-SSDC \newline via del politecnico 1, 00133, Rome, Italy }
\address{$^1$ Agenzia Spaziale Italiana, Space Science Data Center ASI-SSDC, Rome, IT}
\ead{matteo.merge@roma2.infn.it, matteo.merge@ssdc.asi.it}
@@ -21,7 +21,7 @@ The High-Energy Particle Detector (HEPD), developed by the INFN, detects electro
The instrument consists of several detectors. Two planes of double-side silicon microstrip sensors placed on the top of the instrument provide the direction of the incident particle. Just below, two layers of plastic scintillators, one thin segmented, give the trigger; they are followed by a calorimeter, constituted by other 16 scintillators and a layer of LYSO sensors. A scintillator veto system completes the instrument.
\section{HEPD Data}
The reconstruction occurs in three phases, which determine three different data formats, namely 0, 1 and 2, with increasing degree of abstraction. This structure is reflected on the data-persistency format, as well as on the software design. Raw data as downlinked from the CSES. They include ADC counts from the silicon strip, detector, from trigger scintillators, from energy scintillators and from LYSO crystals. ADC counts from lateral veto are also there, together with other very low-level information. Data are usually stored in ROOT format. Level 1 data contain all detector responses after calibration and equalization. The tracker response is clustered (if not already in this format at level0) and corrected for the signal integration time. All scintillator responses are calibrated and equalized. Information on the event itself like time, trigger flags, dead/alive time, etc… are directly inherited from level 0. Data are usually stored in ROOT format. Level 2 data contain higher level information, used to compute final data products. Currently the data are transferred from China as soon as they are downlinked from the CSES satellite and are processed at a dedicated facility at ASI Space Science Data Center (ASI-SSDC \cite{ssdc}) and then distributed to the analysis sites includind CNAF.
The reconstruction occurs in three phases, which determine three different data formats, namely 0, 1 and 2, with increasing degree of abstraction. This structure is reflected in the data-persistency format, as well as in the software design. Level 0 data are the raw data as downlinked from the CSES. They include ADC counts from the silicon strip detector, from the trigger scintillators, from the energy scintillators and from the LYSO crystals. ADC counts from the lateral veto are also present, together with other very low-level information. Data are usually stored in ROOT format. Level 1 data contain all detector responses after calibration and equalization. The tracker response is clustered (if not already in this format at level 0) and corrected for the signal integration time. All scintillator responses are calibrated and equalized. Information on the event itself, like time, trigger flags, dead/alive time, etc., is directly inherited from level 0. Data are usually stored in ROOT format. Level 2 data contain higher-level information, used to compute the final data products. Currently the data are transferred from China as soon as they are downlinked from the CSES satellite, processed at a dedicated facility at the ASI Space Science Data Center (ASI-SSDC \cite{ssdc}) and then distributed to the analysis sites including CNAF.
\section{HEPD Data Analysis at CNAF}
Level 2 data of the HEPD detector are currently produced daily in ROOT format from the raw files. Once a week they are transferred to CNAF, using gfal-tools, to be used by the analysis team. Raw data are also transferred to the CNAF facility on a weekly basis and will be moved to the tape storage. Most of the data analysis software and tools have been developed to be used at CNAF. Geant4 MC simulations are currently run at CNAF by the collaboration; the facility proved to be crucial to perform the computationally intensive optical photon simulations needed to simulate the light yield of the plastic scintillators of the detector. Most of the software is written in C++/ROOT, while several attempts to use Machine Learning and Neural Network techniques are pushing the collaboration to use Python more frequently for the analysis.
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\bibliographystyle{iopart-num}
%\usepackage{citesort}
\begin{document}
\title{The NA62 experiment at CERN}
\author{Antonino Sergi, on behalf of the NA62 collaboration}
%\address{}
\ead{antonino.sergi@cern.ch}
\begin{abstract}
Rare decays are theoretically clean processes, excellent for testing new physics at the highest scales, complementary to the LHC. The NA62 experiment at the CERN SPS aims to collect of the order of 100 $K^+\to\pi^+\nu\bar\nu$ events in two years of data taking, keeping the background lower than 20\% of the signal.
\end{abstract}
\section{Introduction}
Among the flavour changing neutral current $K$ and $B$ decays, the $K\to\pi\nu\bar\nu$ decays play a key role in the search for new physics through the underlying mechanisms of flavour mixing. These decays are strongly suppressed in the SM (the highest CKM suppression), and are dominated by top-quark loop contributions. The SM branching ratios have been computed to high
precision with respect to other loop-induced meson decays: ${\rm
BR}(K^+\to\pi^+\nu\bar\nu)=8.22(75)\times 10^{-11}$ and ${\rm
BR}(K_L\to\pi^0\nu\bar\nu)=2.57(37)\times 10^{-11}$; the uncertainties are dominated by parametric ones, and the irreducible theoretical uncertainties are at a $\sim 1\%$ level~\cite{br11}. The theoretical cleanness of these decays remains also in certain new physics scenarios. Experimentally, the $K^+\to\pi^+\nu\bar\nu$ decay has been observed by the BNL E787/E949 experiments, and the measured branching ratio is
$\left(1.73^{+1.15}_{-1.05}\right)\times 10^{-10}$~\cite{ar09}. The
achieved precision is inferior to that of the SM expectation.
The main goal of the NA62 experiment at CERN is the measurement of the $K^+\to\pi^+\nu\bar\nu$ decay rate at the 10\% precision level, which would constitute a significant test of the SM. The experiment is expected to collect about 100 signal events in two years of data taking, keeping the systematic uncertainties and backgrounds low. Assuming a 10\% signal acceptance and the SM decay rate, the kaon flux should correspond to at least $10^{13}$ $K^+$ decays in the fiducial volume. In order to achieve a small systematic uncertainty, a rejection factor for generic kaon decays of the order of $10^{12}$ is required, and the background suppression factors need to be measured directly from the data. In order to achieve the required kaon intensity, signal acceptance and
background suppression, most of the NA48/NA62 apparatus used until 2008
was replaced with new detectors. The CERN SPS extraction line used by the NA48 experiment is capable of delivering a beam intensity sufficient for NA62. Consequently, the new setup is housed at the CERN North Area High Intensity Facility where NA48 was located. The decay-in-flight technique is used; optimisation of the signal acceptance drives the
choice of a 75 GeV/$c$ charged kaon beam with 1\% momentum bite. The
experimental setup includes
a $\sim 100$~m long beam line to form the appropriate secondary
beam, a $\sim 80$~m long evacuated decay volume, and a series of
downstream detectors measuring the secondary particles from the
$K^+$ decays in the fiducial decay volume.
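The $10^{13}$ kaon decays quoted above follow from a simple estimate: with a signal acceptance of about 10\% and the SM branching ratio, collecting roughly 100 signal events requires
\[
N_{K^+} \simeq \frac{100}{0.10 \times 8.22\times 10^{-11}} \simeq 1.2\times 10^{13}
\]
decays in the fiducial volume.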
The signal signature is one track in the final state matched to one $K^+$ track in the beam. The integrated rate upstream is about 800 MHz (only 6\% of the beam particles are kaons, the others being mostly $\pi^+$ and protons). The rate seen by the detector downstream is about 10 MHz, mainly due to $K^+$ decays. Timing and
spatial information are required to match the upstream and downstream tracks. Backgrounds come from kaon decays with a single reconstructed track in the final state, including accidentally matched upstream and downstream tracks. The background suppression profits from the high kaon beam momentum. A variety of techniques are employed in combination in order to reach the required level of background rejection. They can be schematically divided into kinematic rejection, precise timing, highly efficient photon and muon veto systems, and precise particle identification systems to distinguish $\pi^+$, $K^+$ and positrons. The above requirements drove the design and the construction of the subdetector systems.
The main NA62 subdetectors are: a differential Cherenkov counter (CEDAR) on the beam line to identify the $K^+$ in the beam; a silicon pixel beam tracker; guard-ring counters surrounding the beam tracker to veto catastrophic interactions of particles; a downstream spectrometer composed of 4 straw chambers operating in vacuum; a RICH detector to identify pions and muons; a scintillator hodoscope; a muon veto detector. The photon veto detectors include a series of annular lead glass calorimeters surrounding the decay and detector volume, the NA48 LKr calorimeter, and two small angle calorimeters to provide hermetic coverage for photons emitted at close to zero angle to the beam. The design of the experimental apparatus and the R\&D of the new subdetectors have been completed. The experiment started collecting physics data in 2015, and since 2016 it has been fully commissioned and in its production phase.
\section{NA62 computing model and the role of CNAF}
NA62 raw data consist of custom binary files, collecting data packets directly from the DAQ electronics after a minimal overall formatting; there is a one-to-one correspondence between files and spills from the SPS. Data contain up to 16 different level-0 trigger streams, for a total maximum rate of 1 MHz, which are filtered by software algorithms to reduce the output rate to less than 50 kHz.
Raw data are stored on CASTOR and promptly calibrated and reconstructed, on a timescale of a few hours, for data quality monitoring, using the batch system at CERN and EOS. Near-line fast physics selection for data quality, off-line data processing and analysis are currently performed using only CERN computing facilities.
Currently NA62 exploits the GRID only for Monte Carlo productions, under the management of the UK GRID-PP collaboration members; in 2018 CNAF resources were used as one of the GRID sites serving the NA62 VO.
\section*{References}
\begin{thebibliography}{99} % Use for 10-99 references
%
\bibitem{br11}
J. Brod, M. Gorbahn and E. Stamou, Phys. Rev. {\bf D83}, 034030
(2011).
%
\bibitem{ar09}
A.V. Artamonov {\it et al.}, Phys. Rev. Lett. {\bf 101} (2008) 191802.
%
\end{thebibliography}
\end{document}
contributions/net/cineca-schema.png

217 KiB

contributions/net/cineca.png

71.7 KiB

contributions/net/connection-schema.png

117 KiB

contributions/net/gpn.png

95.3 KiB

contributions/net/lhcone-opn.png

79.1 KiB

\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{The INFN-Tier 1: Network and Security}
\author{S.~Zani$^1$, D.~De~Girolamo$^1$, L.~Chiarelli$^{1,2}$, V.~Ciaschini$^1$}
\address{$^1$ INFN-CNAF, Bologna, IT}
\address{$^2$ GARR Consortium, Roma, IT}
\ead{stefano.zani@cnaf.infn.it}
%\begin{abstract}
%DA SCRIVERE
%\end{abstract}
\section{Introduction}
The Network unit manages the wide area and local area connections of CNAF.
Moreover, it is responsible for the security of the center, and it contributes to the management of the local CNAF services
(e.g. DNS, Windows domain etc.) and some of the INFN national ICT services. It also gives support to the GARR PoP hosted at CNAF.
\section{Wide Area Network}
The main PoP of the GARR network, based on a fully managed dark fiber infrastructure, is hosted inside the CNAF data center.
CNAF is connected to the WAN via GARR/GEANT essentially with two physical links:
\begin{itemize}
\item General Internet: General IP link is 20 Gbps (2x10 Gbps) via GARR and GEANT;
\item LHCOPN/LHCONE: The link to WLCG destinations is a 200 Gbps (2x100 Gbps) link shared between the LHC-OPN network, for traffic
with the Tier 0 (CERN) and the other Tier 1 sites, and the LHCONE network, mainly for traffic with the Tier 2 centers.
Since summer 2018, the LHCOPN dedicated link to CERN (from Milan GARR POP) has been upgraded to 2x100 Gbps
while the peering to LHCONE is at 100 Gbps (from Milan GARR POP and GEANT GARR POP).
\end{itemize}
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{connection-schema.png}\hspace{2pc}%
%\begin{minipage}[b]{14pc}
\caption{\label{schema-rete}INFN-CNAF connection schema.}
%\end{minipage}
\end{center}
\end{figure}
As shown in Figures~\ref{lhc-opn-usage} and \ref{gpn-usage}, network usage is growing both on LHCOPN/ONE and on General IP,
even if, at the beginning of the year, the traffic was very low because of the flooding that occurred in November 2017
(the Computing Center returned completely online during February 2018).
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{lhcone-opn.png}\hspace{2pc}%
\caption{\label{lhc-opn-usage}LHC OPN + LHC ONE link usage.}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{gpn.png}\hspace{2pc}%
\caption{\label{gpn-usage}General IP link usage.}
\end{center}
\end{figure}
Currently, the dedicated bandwidth for LHCOPN to CERN is 100 Gbps, with a backup link of 4x10 Gbps.
During 2019, the configuration will change and 2x100 Gbps links to the two CERN PoPs will be provided, in order to grant better resiliency
and to potentially give a full 200 Gbps to CERN and the Tier 1 sites.
\section{Data Center Interconnect with CINECA}
At the beginning of 2018, CNAF obtained from CINECA the use of 216 servers based on the Intel Xeon E5-2697 v4 CPU
(with 36 physical cores each), coming from partition 1 of the ``Marconi'' supercomputer, being phased out for HPC workflows.
In order to integrate all of those computing resources into our farm, it was fundamental to guarantee appropriate access bandwidth to the storage resources located at CNAF. This has been implemented, in collaboration with GARR, using the Data Center Interconnect (DCI) technology
provided by a pair of Infinera Cloud Express 2 (CX1200) devices.
The Cloud Express 2 are transponders with 12 x 100 Gigabit Ethernet interfaces on the LAN side and one LC fiber interface on the ``line'' side, capable of up to 1.2 Tbps on a single-mode fiber over a maximum distance of 100 kilometers (CNAF and CINECA are 17 km apart). In the CNAF-CINECA case, the systems are configured for a 400 Gbps connection.
The latency introduced by each CX1200 is $\sim 5\,\mu$s and the total RTT (Round Trip Time) between servers at CNAF and servers at CINECA is 0.48 ms,
comparable to what we observe on the LAN (0.28 ms).
All worker nodes on the network segment at CINECA have IP addresses of the INFN Tier 1 network and are used as if they were installed
at the Tier 1 facility (see Figure~\ref{cineca-schema}). The data access bandwidth is 400 Gbps, but it can scale up to 1.2 Tbps.
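Most of the measured RTT is accounted for by propagation: assuming the typical latency of light in fiber of about $5\,\mu$s/km,
\[
2 \times 17\ \mathrm{km} \times 5\ \mu\mathrm{s/km} + 4 \times 5\ \mu\mathrm{s} \simeq 0.19\ \mathrm{ms}
\]
for the round trip including the CX1200 traversals, leaving roughly 0.3 ms for switching and the host network stacks, comparable to the LAN RTT.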
This DCI interconnection has been implemented rapidly, as a proof of concept
(this is the first time this technology has been used in Italy). Currently it is in production and, as it is becoming
a stable and relevant asset for CNAF (Figure~\ref{cineca-traffic}), we plan to have a second optical fiber between CNAF and CINECA for resiliency reasons.
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{cineca-schema.png}\hspace{2pc}%
\caption{\label{cineca-schema}INFN Tier 1–CINECA Data Center Interconnection.}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{cineca.png}\hspace{2pc}%
\caption{\label{cineca-traffic}INFN Tier 1–CINECA link usage.}
\end{center}
\end{figure}
\section{Security}
The network security policies are mainly implemented as hardware-based ACLs on the access router
and on the core switches (with dedicated ASICs on the devices).
The network group, in coordination with GARR-CERT and EGI-CSIRT, also takes care of security incidents at CNAF
(both compromised systems or credentials and known vulnerabilities of software and grid middleware), cooperating with the involved parties.
During 2018, the CNAF Security Group was reorganized with the formal involvement of at least one representative
for each unit, in order to obtain stronger coordination on the implementation of security policies and a faster reaction to security incidents.
As always, in 2018 CNAF maintained an important commitment to security and was active on several fronts, as described in the following.
\subsection{“Misure Minime” Implementation}
CNAF has had an important role in determining how the whole INFN would implement compliance with the
“Misure Minime”\footnote{Misure Minime is a set of minimum ICT security measures to be adopted
by all the Italian public administrations.} regulation.
It actively contributed to the discussion and to the implementation guidelines for each OS,
and it had a central role in defining the Risk Management procedures, writing the prototype version and co-writing the final definition.
\subsection{Vulnerability scanning}
In order to monitor the security of the center, CNAF has started a campaign of systematic and periodic scanning of all of its machines,
personal and not, looking for vulnerabilities in an effort to find and fix them before they can be actively exploited by an attacker.
As expected, this scanning brought to light a number of issues that were promptly corrected (when possible) or mitigated (when not), thus nipping a number of potential problems in the bud.
\subsection{Security Assessment}
In light of its growing importance, a security assessment of Indigo-IAM has also taken place.
Focused on testing the actual security of the product and on finding ways in which it could be exploited, this assessment brought to light a number of issues of varying importance, which have been reported to and discussed with the developers in order to increase the security and reliability of the product.
\subsection{Technology tracking}
A constant technology tracking activity on security tools and devices is ongoing.
In particular, meetings with some of the main Next Generation Firewall producers were scheduled in 2017 and in 2018.
During the two years, three Next Generation Firewalls, from Fortinet, Huawei and Palo Alto Networks, were tested on production links
in order to define the fundamental characteristics to be included in the tender for the acquisition of the NG Firewall to be installed on the ``General IP'' wide area network link.
%\section*{References}
%\begin{thebibliography}{9}
%\bibitem{iopartnum} IOP Publishing is to grateful Mark A Caprio, Center for Theoretical Physics, Yale University, for permission to include the {\tt iopart-num} \BibTeX package (version 2.0, December 21, 2006) with this documentation. Updates and new releases of {\tt iopart-num} can be found on \verb"www.ctan.org" (CTAN).
%\end{thebibliography}
\end{document}
contributions/net/net-board.png

170 KiB

@@ -12,12 +12,12 @@
M.~Papa$^1$, S.~Pirrone$^{1}$, G.~Politi$^{2,1}$, F.~Rizzo$^{2,3}$,
P.~Russotto$^{3}$, A.~Trifir\`o$^{5,1}$, M~Trimarchi$^{5,1}$ }
\address{$^1$ INFN, Sezione di Catania, Italy}
\address{$^2$ Dip. di Fisica e Astronomia, Universit\`a di Catania, Italy}
\address{$^3$ INFN, Laboratori Nazionali del Sud, Catania, Italy}
\address{$^4$ CSFNSM, Catania, Italy}
\address{$^5$ Dipartimento di Scienze MITF, Universit\`a di Messina, Italy}
\address{$^6$ Universit\`a di Enna, ``Kore'', Italy}
\address{$^1$ INFN Sezione di Catania, Catania, IT}
\address{$^2$ Universit\`a di Catania, Catania, IT}
\address{$^3$ INFN Laboratori Nazionali del Sud, Catania, IT}
\address{$^4$ CSFNSM, Catania, IT}
\address{$^5$ Universit\`a di Messina, Messina, IT}
\address{$^6$ Universit\`a di Enna, Enna, IT}
\ead{defilippo@ct.infn.it}
@@ -29,10 +29,10 @@ the 2018 experiment campaigns.
\section{Introduction}
The CHIMERA 4$\pi$ detector is constituted by 1192 Si-CsI(Tl) telescopes. The first stage of
the telescope is a 300 $\mu$m thick silicon detector followed by a CsI(Tl) crystal, having a
thickness from 6 to 12 cm in length with photodiode readout. One of the key point of this device is the low threshold for simultaneous mass and charge identifications of particles and light ions, the velocity measurement by Time-of-Flight technique and the Pulse Shape Detection (PSD) aiming to measure the rise time of signals for charged particles stopping in the first Silicon detector layer of the telescopes. The CHIMERA array was designed to study the processes responsible for particle productions in nuclear fragmentation, the reaction dynamics and the isospin degree of freedom. Studies of Nuclear Equation of State (EOS) in asymmetric nuclear matter have been performed both at lower densities with respect to nuclear saturation density, in the Fermi energy
thickness from 6 to 12 cm in length with photodiode readout. One of the key points of this device is the low threshold for simultaneous mass and charge identifications of particles and light ions, the velocity measurement by Time-of-Flight technique and the Pulse Shape Detection (PSD) aiming to measure the rise time of signals for charged particles stopping in the first Silicon detector layer of the telescopes. The CHIMERA array was designed to study the processes responsible for particle productions in nuclear fragmentation, the reaction dynamics and the isospin degree of freedom. Studies of Nuclear Equation of State (EOS) in asymmetric nuclear matter have been performed both at lower densities with respect to nuclear saturation density, in the Fermi energy
regime at the LNS Catania facilities \cite{def14}, and at high densities in the relativistic heavy-ion beam energy domain at GSI \cite{rus16}. The production of Radioactive Ion Beams (RIB) at LNS in recent years has also opened the use of the 4$\pi$ detector CHIMERA to nuclear structure and clustering studies \cite{acqu16, mar18}.
FARCOS (Femtoscope ARray for COrrelations and Spectroscopy) is an ancillary and compact multi-detector with high angular granularity and energy resolution for the detection of light charged particles (LCP) and Intermediate Mass Fragments (IMF) \cite{epag16}. It has been designed as an array for particle-particle correlation measurements in order to characterize the time scale and shape of emission sources in the dynamical evolution of heavy ion collisions. The FARCOS array is constituted, in the final project, by 20 independent telescopes. Each telescope is composed by three detection stages: the first $\Delta E$ is a 300 $\mu$m thick DSSSD silicon strip detector with 32x32 strips; the second is a DSSSD, 1500 $\mu$m thick with 32x32 strips; the final stage is constituted by 4 CsI(Tl) scintillators, each one of 6 cm in length.
FARCOS (Femtoscope ARray for COrrelations and Spectroscopy) is an ancillary and compact multi-detector with high angular granularity and energy resolution for the detection of light charged particles (LCP) and Intermediate Mass Fragments (IMF) \cite{epag16}. It has been designed as an array for particle-particle correlation measurements in order to characterize the time scale and shape of emission sources in the dynamical evolution of heavy ion collisions. The FARCOS array is constituted, in the final project, by 20 independent telescopes. Each telescope is composed by three detection stages: the first $\Delta E$ is a 300 $\mu$m thick DSSSD (Double-Sided Silicon Strip Detector) with 32x32 strips; the second is a DSSSD, 1500 $\mu$m thick with 32x32 strips; the final stage is constituted by 4 CsI(Tl) scintillators, each one of 6 cm in length.
\begin{figure}[t]
\begin{center}
@@ -49,7 +49,7 @@ The total number of GET channels for the CHIMERA + FARCOS (20 telescopes) device
\section{CNAF support for Newchim}
In the new digital data acquisition we store the sampled signals, thus producing a huge set of raw data. The data rate can be evaluated at 3-5 TB/day in an experiment (without FARCOS). For example, the last CHIMERA experiment in 2018 collected a total of 70 TB of data in two weeks of beam time.
Clearly this easily saturates our local disk servers storage capabilities. We use the CNAF as main backup storage center: after data merging and processing, the raw data (signals) are reduced to physical variables in ROOT format, while the original raw data are copied and stored at CNAF. Copy is done in the {\it /storage/gpfs...} storage area in the general purpose tier1-UI machines by using the Tier-1 infrastructure and middleware software. In the future could be interesting to use also the CPU resources at CNAF in order to run the data merger and signal processing software directly on the copied data. Indeed we expect a significative increase of the storage resources needed when the FARCOS array will be fully operational.
Clearly this easily saturates our local disk servers' storage capabilities. We use CNAF as the main backup storage center: after data merging and processing, the raw data (signals) are reduced to physical variables in ROOT format, while the original raw data are copied and stored at CNAF. The copy is done in the {\it /storage/gpfs...} storage area on the general purpose tier 1-UI machines, using the Tier 1 infrastructure and middleware software. In the future it could be interesting to also use the CPU resources at CNAF in order to run the data merger and signal processing software directly on the copied data. Indeed, we expect a significant increase of the storage resources needed when the FARCOS array is fully operational.
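These figures are mutually consistent: 70 TB collected in two weeks of beam time corresponds to $70/14 = 5$ TB/day, at the upper end of the 3-5 TB/day estimate, i.e. a sustained average of roughly 60 MB/s.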
\section*{References}
@@ -64,4 +64,4 @@ Clearly this easily saturates our local disk servers storage capabilities. We us
012003
\bibitem{cas18} A. Castoldi, C. Guazzoni, T. Parsani, 2018 {\it Nuovo Cimento C} {\bf 41} 168
\end{thebibliography}
\end{document}
\ No newline at end of file
\end{document}
File added