\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{LHCb Computing at CNAF}

\author{S. Perazzini$^1$, C. Bozzi$^{2,3}$}

\address{$^1$ INFN Sezione di Bologna, Bologna, IT}
\address{$^2$ CERN, Gen\`eve, CH} 
\address{$^3$ INFN Sezione di Ferrara, Ferrara, IT}

\ead{stefano.perazzini@bo.infn.it, concezio.bozzi@fe.infn.it}


\begin{abstract}
This document summarises the LHCb computing activities during 2018. The usage of CPU, disk and tape resources across the various computing centers is analysed, with particular attention to the performance of the INFN Tier 1 at CNAF. Projections of the resources needed in the coming years are also briefly discussed.
\end{abstract}

\section{Introduction}
The Large Hadron Collider beauty (LHCb) experiment is dedicated to the study of $c$- and $b$-physics at the Large Hadron Collider (LHC) accelerator at CERN. Exploiting the large production cross-section of $b\bar{b}$ and $c\bar{c}$ quark pairs in proton-proton ($p-p$) collisions at the LHC, LHCb is able to analyse unprecedented quantities of heavy-flavoured hadrons, with particular attention to their $C\!P$-violation observables. Besides its core programme, the LHCb experiment also performs measurements of production cross-sections and electroweak physics in the forward region. To date, the LHCb collaboration comprises about 1350 people from 79 institutes in 18 countries. More than 50 physics papers were published by LHCb during 2018, for a total of almost 500 papers since the start of its activities in 2010.

The LHCb detector is a single-arm forward spectrometer covering the pseudorapidity range between 2 and 5. The detector includes a
high-precision tracking system consisting of a silicon-strip vertex detector surrounding the $p-p$ interaction region, a large-area
silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip
detectors and straw drift tubes placed downstream. The combined tracking system provides a momentum measurement with a relative
uncertainty that varies from 0.4\% at 5 GeV/$c$ to 0.6\% at 100 GeV/$c$, and an impact-parameter resolution of 20 $\mu$m for tracks with high transverse momentum. Charged hadrons are identified using two ring-imaging Cherenkov detectors. Photon, electron and hadron candidates are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers. The trigger consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction.

\section{Overview of LHCb computing activities in 2018}

The usage of offline computing resources involved: (a) the production of simulated events, which runs continuously; (b) running user jobs, which is also continuous; (c) stripping cycles before and after the end of data taking; (d) processing (i.e. reconstruction and stripping of the FULL stream, $\mu$DST streaming of the TURBO stream) of data taken in 2018 in proton-proton and heavy-ion collisions; (e) centralized production of ntuples for analysis working groups.

Activities related to the 2018 data taking were tested in May and started at the beginning of the LHC physics run in mid-June. Steady processing and export of the data transferred from the pit to offline were observed throughout the period.

We recall that in Run 2 LHCb implemented a trigger strategy in which the high-level trigger is split into two stages. The first one (HLT1), synchronous with data taking, writes events at a 150 kHz output rate into a temporary disk buffer located on the HLT farm nodes. Real-time calibrations and alignments are then performed and used in the second high-level trigger stage (HLT2), where event-reconstruction algorithms as close as possible to those run offline are applied, and event selection takes place.
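As a rough illustration of the load on this buffer (an estimate on our part, assuming the average full-event size of about 60 kB quoted below), the HLT1 output corresponds to a sustained write rate of order
\[
150\,\mathrm{kHz} \times 60\,\mathrm{kB} \simeq 9\,\mathrm{GB/s},
\]
which the disk buffer must absorb until HLT2 has processed the events.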

Events passing the high-level trigger selections are sent offline, either via a FULL stream of RAW events, which are then reconstructed and processed as in Run 1, or via a TURBO stream, which directly records the results of the online reconstruction on tape. TURBO data are subsequently reformatted into a $\mu$DST format that requires no further processing, stored on disk, and can be used right away for physics analysis.

The information saved for an event of the TURBO stream is customizable and can range from information related to the signal candidates only up to the full event. The average size of a TURBO event is about 30 kB, to be compared with a size of 60 kB for the full event. The TURBO output is also split into $O(5)$ streams, in order to optimize data access.
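A back-of-the-envelope comparison based on these average sizes (illustrative only, since the persisted fraction of each event is analysis-dependent): storing an event in TURBO format costs
\[
\frac{30\,\mathrm{kB}}{60\,\mathrm{kB}} = \frac{1}{2}
\]
of the FULL-stream cost, so a fixed storage budget can retain roughly twice as many TURBO events.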

The offline reconstruction of the FULL stream for proton collision data ran from May until November. The reconstruction of heavy-ion collision data was run in December.

A full re-stripping of the 2015, 2016 and 2017 proton collision data, started in autumn 2017, ended in April 2018. A stripping cycle of the 2015 lead collision data was also performed in that period. The stripping cycle concurrent with the 2018 proton collision data taking started in June and ran continuously until November.

The INFN Tier 1 center at CNAF was in downtime from November 2017, due to a major flood incident. The site became fully available again in March 2018, allowing the completion of the stripping cycles that had been on hold waiting for the data located at CNAF (about 20\% of the total). Despite the unavailability of its resources in the first months of 2018, the site performed excellently for the rest of the year, as demonstrated by the numbers reported below.

As in previous years, LHCb continued to make use of opportunistic resources, which are not pledged to WLCG but contributed significantly to the overall usage.

\section{Resource usage in 2018}

Table~\ref{tab:pledges} shows the resources pledged for LHCb at the various tier levels for the 2018 period.
\begin{table}[htbp]
  \caption{LHCb 2018 WLCG pledges.}
  \centering
  \begin{tabular}{lccc}
    \hline
    2018 & CPU & Disk & Tape \\
             & kHS06 & PB & PB \\
\hline
Tier 0	& 88	& 11.4	& 33.6 \\
Tier 1	& 250	& 26.2	& 56.9 \\
Tier 2	& 164	& 3.7	&  \\ \hline
Total WLCG	& 502	& 41.3 & 90.5\\ \hline
  \end{tabular}
  \label{tab:pledges}
\end{table}
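As a simple consistency check, the totals in Table~\ref{tab:pledges} are the sums over the tier levels: for CPU,
\[
88 + 250 + 164 = 502\ \mathrm{kHS06},
\]
and analogously $11.4 + 26.2 + 3.7 = 41.3$~PB of disk and $33.6 + 56.9 = 90.5$~PB of tape, no tape being pledged at the Tier 2 sites.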

The usage of WLCG CPU resources by LHCb is obtained from the different views provided by the EGI Accounting portal. The CPU usage is presented in Figure~\ref{fig:T0T1} for the Tier 0 and Tier 1 sites and in Figure~\ref{fig:T2} for Tier 2 sites. The same data are presented in tabular form in Table~\ref{tab:T0T1} and Table~\ref{tab:T2}, respectively.

\begin{figure}
\begin{center}