\documentclass[a4paper]{jpconf}
\usepackage{graphicx}

\newcommand{\hyp}    {$^{3}_{\Lambda}\mathrm H$}
\newcommand{\antihyp}{$^{3}_{\bar{\Lambda}} \overline{\mathrm H}$}
\newcommand{\fourhhyp}  {$^{4}_{\Lambda}\mathrm H$}
\newcommand{\fourhehyp} {$^{4}_{\Lambda}\mathrm{He}$}
\newcommand{\parantihyp}{$\left(^{3}_{\bar{\Lambda}} \overline{\mathrm H} \right)$}
\newcommand{\he}     {$^{3}\mathrm{He}$}
\newcommand{\antihe} {$^{3}\mathrm{\overline{He}}$}
\newcommand{\hefour} {$^{4}\mathrm{He}$}
\newcommand{\pip}    {$\pi^+$}
\newcommand{\pim}    {$\pi^-$}
\newcommand{\pio}    {$\pi$}
\newcommand{\dedx}   {d$E$/d$x$}
\newcommand{\pp}     {pp\;}
\newcommand{\pPb}    {p--Pb\;}
\newcommand{\PbPb}   {Pb--Pb\;}
\newcommand{\XeXe}   {Xe--Xe\;}
\newcommand{\Mmom}   {\mbox{\rm MeV$\kern-0.15em /\kern-0.12em c$}}
\newcommand{\Gmom}   {\mbox{\rm GeV$\kern-0.15em /\kern-0.12em c$}}
\newcommand{\Gmass}  {\mbox{\rm GeV$\kern-0.15em /\kern-0.12em c^2$}}
\newcommand{\Mmass}  {\mbox{\rm MeV$\kern-0.15em /\kern-0.12em c^2$}}
%\newcommand{\pt}     {$p_{\rm T}$}
\newcommand{\ctau}   {$c \tau$}
\newcommand{\ct}     {$ct$}
\newcommand{\LKz}    {$\Lambda$/$K^{0}$}
\newcommand{\s}      {\sqrt{s}}
\newcommand{\snn}    {\sqrt{s_{\mathrm{NN}}}}
\newcommand{\dndy}   {d$N$/d$y$}
\newcommand{\OO}     {$\mathrm{O^{2}}$}

\begin{document}
\title{ALICE computing at the INFN CNAF Tier 1}

\author{S. Piano$^1$, D. Elia$^2$, S. Bagnasco$^3$, F. Noferini$^4$, N. Jacazio$^5$, G. Vino$^2$}
\address{$^1$ INFN Sezione di Trieste, Trieste, IT}
\address{$^2$ INFN Sezione di Bari, Bari, IT}
\address{$^3$ INFN Sezione di Torino, Torino, IT}
\address{$^4$ INFN Sezione di Bologna, Bologna, IT}
\address{$^5$ INFN-CNAF, Bologna, IT}

\ead{stefano.piano@ts.infn.it}

\begin{abstract}
This paper describes the computing activities of the ALICE experiment at the CERN LHC, in particular the contribution of the Italian community and the role of the Tier1 centre located at INFN CNAF in Bologna.
\end{abstract}

\section{Experimental apparatus and physics goal}
ALICE (A Large Ion Collider Experiment) is a general-purpose heavy-ion experiment specifically designed to study the physics of strongly interacting matter and QGP (Quark-Gluon Plasma) in nucleus-nucleus collisions at the CERN LHC (Large Hadron Collider).
The experimental apparatus consists of a central barrel, which measures hadrons, electrons and photons, and a forward spectrometer to measure muons. It was upgraded for Run2 by installing a second arm complementing the EMCAL at the opposite azimuth, thus enhancing the jet and di-jet physics capabilities. This extension, named DCAL for “Dijet Calorimeter”, was installed during the Long Shutdown 1 (LS1) period. Other detectors were also upgraded or completed: in particular, the last few modules of the TRD and PHOS were installed, while the TPC was refilled with a different gas mixture and equipped with new, redesigned readout electronics. The DAQ and HLT computing farms were also upgraded to match the increased data rate foreseen in Run2 from the TPC and the TRD. A detailed description of the ALICE sub-detectors can be found in~\cite{Abelev:2014ffa}.\\
The main goal of ALICE is the study of the hot and dense matter created in ultra-relativistic nuclear collisions. At high temperature Quantum ChromoDynamics (QCD) predicts a phase transition between hadronic matter, where quarks and gluons are confined inside hadrons, and a deconfined state of matter known as the Quark-Gluon Plasma~\cite{Adam:2015ptt, Adam:2016izf, Acharya:2018qsh}. Such a deconfined state also existed in the primordial matter, a few microseconds after the Big Bang. The ALICE experiment creates the QGP in the laboratory through head-on collisions of heavy nuclei at the unprecedented energies of the LHC. The heavier the colliding nuclei and the higher the center-of-mass energy, the greater the chance of creating the QGP: for this reason, ALICE has chosen lead, one of the largest nuclei readily available. In addition to the Pb-Pb collisions, the ALICE Collaboration studies \pp and \pPb systems, which are also used as reference data for the nucleus-nucleus collisions. During 2017 ALICE also acquired a short pilot run (corresponding to one LHC fill) of \XeXe collisions.


\section{Data taking, physics results and upgrade activities}

The main goal of the 2018 run was to complete the approved Run2 physics program; this goal was fully achieved thanks to the excellent performance of the apparatus.
 
ALICE resumed data taking with beams in April, at the restart of LHC operation with pp collisions ($\s=13$~TeV), and continued to collect statistics with pp collisions from April 2nd to October 25th with the same trigger mix as in 2017. As planned, ALICE operated with the pp luminosity leveled to $2.6\times10^{30}~\mathrm{cm^{-2}s^{-1}}$, providing an interaction rate of 150 kHz. The HLT compression factor was improved to 8.5 throughout the data taking, so that the HLT was able to reject the larger number of spurious clusters anticipated with the Ar-CO$_2$ gas mixture in the TPC. The average RAW data event size after compression was 1.7~MB at the nominal interaction rate (150 kHz), exactly as expected and as used for the resource calculations. At the end of the pp period, ALICE reached a combined efficiency of 43\% (LHC availability of 47\% $\times$ ALICE efficiency of 92\%).
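The quoted combined efficiency is simply the product of the two factors in parentheses:
\[
\varepsilon_{\mathrm{combined}} \;=\; \varepsilon_{\mathrm{LHC}} \times \varepsilon_{\mathrm{ALICE}} \;\simeq\; 0.47 \times 0.92 \;\simeq\; 0.43 .
\]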
 
The \PbPb ($\snn=5.02$~TeV) data taking period started in November 2018 and was scheduled for 24 days. The target was to reach a total integrated luminosity of 1~$\mathrm{nb^{-1}}$ for Run2 and to complete the ALICE goal of collecting a large sample of central and minimum bias collisions. To achieve this, the interaction rate was leveled at 8 kHz ($L = 1.0\times10^{27}~\mathrm{cm^{-2}s^{-1}}$) and data were taken at close to the maximum achievable readout rate. The accelerator conditions differed from those foreseen, mainly because of a 3-4 day delay in the beam start due to a solenoid coil fault in LINAC3 and a 20\% loss of integrated luminosity due to beam sizes 50\% larger at IP2 than at IP1/IP5 during the whole Pb-Pb period. The LHC time in Stable Beams was 47\%, the average ALICE data taking efficiency was 87\% and a maximum HLT compression factor close to 9 was reached during the Pb-Pb period. To compensate for the reduced beam availability, the rates of the different triggers were adjusted to increase as much as possible the statistics in central and semi-central events. Overall, 251M central and mid-central events and 159M minimum bias events were collected. To further minimize the impact of the Pb-Pb run on tape resources, ALICE additionally compressed the non-TPC portion of the RAW data (by applying level 2 gzip compression), resulting in an additional 17\% reduction of the data volume on tape. As a result, the accumulated amount of Pb--Pb RAW data was 5.5~PiB. A total amount of RAW data of 11~PiB, including pp, was written to tape at the Tier0 and then replicated at the Tier1s. The data accumulation curve at the Tier0 is shown in Fig.~\ref{fig:rawdata}; about 4.2~PiB of RAW data were replicated to CNAF during 2018, with a maximum rate of 360 TiB per week, limited only by the tape drive speed given the 100 Gb/s LHCOPN bandwidth between CERN and CNAF, as shown in Fig.~\ref{fig:tottraftape}.
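As a rough consistency check (assuming, for simplicity, a fully saturated link), a 100 Gb/s LHCOPN connection corresponds to
\[
100~\mathrm{Gb/s} \simeq 12.5~\mathrm{GB/s}
\;\Rightarrow\;
12.5~\mathrm{GB/s} \times 6.05\times 10^{5}~\mathrm{s/week} \approx 7.5~\mathrm{PB/week},
\]
well above the observed maximum of 360 TiB per week, confirming that the replication rate was limited by the tape drives rather than by the network.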

\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.75\textwidth]{raw_data_accumulation_run2}
\end{center} 
\caption{Raw data accumulation curve for Run2.}
\label{fig:rawdata}
\end{figure}

The pp data collected in 2018 have been fully calibrated and processed in Pass1, as has the associated general-purpose MC. All productions were executed according to plan and within the CPU and storage budget, in time to free the resources for the Pb-Pb data processing. The Pb-Pb RAW data calibration and offline quality assurance validation started in parallel with the data taking, with samples of data uniformly taken from each LHC fill. The full calibration was completed by 20 December, after which the production pass began at the T0/T1s. On average, 40k CPU cores were used for the production, and the processing was completed by the end of February 2019. In parallel, the general-purpose MC associated with the Pb-Pb data is being validated and prepared for full production.

\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.75\textwidth]{total_traffic_cnaf_tape_2018}
\end{center} 
\caption{ALICE traffic per week and total traffic on the CNAF tape during 2018.}
\label{fig:tottraftape}
\end{figure}

During 2018 many new ALICE physics results have been obtained from the pp, p--Pb, \PbPb and \XeXe collisions of the Run2 data taking, while the collaboration has also continued to work on results from the analysis of the Run1 data. Almost 50 papers have been submitted to journals in the last year, including in particular the main topics reported in the following.

In \pp and in \pPb collisions, for instance, ALICE studied the $\Lambda_{\rm c}^+$ production~\cite{Acharya:2017kfy} and the prompt and non-prompt $\hbox{J}/\psi$ production and nuclear modification at mid-rapidity~\cite{Acharya:2018yud}, and measured the inclusive $\hbox{J}/\psi$ polarization at forward rapidity in \pp collisions at $\s = 8$~TeV~\cite{Acharya:2018uww}.
In \PbPb collisions ALICE studied the $D$-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\snn=5.02$~TeV~\cite{Acharya:2017qps}, the Z$^0$-boson production at large rapidities at $\snn=5.02$~TeV~\cite{Acharya:2017wpf} and the anisotropic flow of identified particles at $\snn=5.02$~TeV~\cite{Acharya:2018zuq}. The anisotropic flow was also studied in \XeXe collisions at $\snn = 5.44$~TeV~\cite{Acharya:2018ihu}, together with the inclusive $\hbox{J}/\psi$ production~\cite{Acharya:2018jvc} and the transverse momentum spectra and nuclear modification factors of charged particles~\cite{Acharya:2018eaq}.\\
 
The general upgrade strategy for Run3 is conceived to cope with the expected \PbPb interaction rates of up to 50 kHz, aiming at an integrated luminosity above 10~$\mathrm{nb^{-1}}$. The five TDRs, namely for the new ITS, the TPC GEM-based readout chambers, the Muon Forward Tracker, the Trigger and Readout system, and the Online/Offline computing system, were fully approved by the CERN Research Board between 2014 and 2015. In 2017 the transition from the R\&D phase to the construction of prototypes of the final detector elements was successfully completed. For the major systems, the final prototype tests and evaluations were performed and the production readiness reviews were passed successfully; production started during 2017 and continued throughout 2018.

\section{Computing model and R\&D activity in Italy}  

The ALICE computing model is still heavily based on Grid distributed computing; since the very beginning, its underlying principle has been that every physicist should have equal access to the data and computing resources~\cite{ALICE:2005aa}. According to this principle, the ALICE peculiarity has always been to operate its Grid as a “cloud” of computing resources (both CPU and storage) with no specific role assigned to any given center, the only difference between centers being the Tier level to which they belong. All resources have to be made available to all ALICE members, according only to the experiment policy and not to the physical location of the resources, and data are distributed according to network topology and resource availability rather than in pre-defined datasets. The only peculiarities of the Tier1s are their size and the availability of tape custodial storage, which holds a collective second copy of the raw data and allows the collaboration to run event reconstruction tasks there. In the ALICE model, though, tape recall is almost never performed: all useful data reside on disk, and the custodial tape copy is used only for safekeeping. All data access is done through the xrootd protocol, either through “native” xrootd storage or, as in many large deployments, through xrootd servers placed in front of a distributed parallel filesystem such as GPFS.\\
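As a minimal illustration of this access model, the following sketch uses the XRootD Python bindings to open and read a file directly from a Grid storage element; the server host and file path are purely hypothetical, and in the actual ALICE workflow the physical file locations are resolved through the experiment's Grid catalogue rather than being hard-coded.
\begin{verbatim}
# Minimal sketch of direct xrootd data access (hypothetical host and path);
# real ALICE jobs resolve file locations through the Grid file catalogue.
from XRootD import client
from XRootD.client.flags import OpenFlags

URL = 'root://xrootd.example.infn.it:1094//alice/data/2018/example.root'

with client.File() as f:
    status, _ = f.open(URL, OpenFlags.READ)
    if not status.ok:
        raise RuntimeError(status.message)
    status, info = f.stat()
    print('file size: %d bytes' % info.size)
    status, chunk = f.read(offset=0, size=1024)  # read the first kilobyte
\end{verbatim}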
The model has not changed significantly for Run2, except for the scavenging of some extra computing power by opportunistically using the HLT farm when it is not needed for data taking. All raw data collected in 2017 have been passed through the calibration stages, including the newly developed track distortion calibration for the TPC, and have been validated by the offline QA process before entering the final reconstruction phase. The ALICE software build system has been extended with additional functionality to validate the AliRoot release candidates with a large set of raw data from different years as well as with various MC generators and configurations. It uses the CERN elastic cloud infrastructure, thus allowing for dynamic provisioning of resources as needed. The Grid utilization in the accounting period remained high, with no major incidents. The CPU/Wall efficiency remained constant at about 85\% across all Tiers, similar to the previous year. The much higher data rate foreseen for Run3, though, will require a major rethinking of the current computing model in all its components, from the software framework to the algorithms and to the distributed infrastructure. The design of the new computing framework for Run3, started in 2013 and mainly based on the concept of Online-Offline integration (the “\OO\ Project”), has been finalized with the corresponding Technical Design Report~\cite{Buncic:2015ari}: the development and implementation phases, as well as performance tests, are currently ongoing.\\
The Italian share of the ALICE distributed computing effort (currently about 17\%) includes resources both from the Tier1 at CNAF and from the Tier2s in Bari, Catania, Torino and Padova-LNL, plus some extra resources in Trieste. The contribution of the Italian community to the ALICE computing in 2018 has been mainly spread over the usual items, such as the development and maintenance of the (AliRoot) software framework, the management of the computing infrastructure (Tier1 and Tier2 sites) and the participation in the Grid operations of the experiment.\\
 
In addition, in the framework of the computing R\&D activities in Italy, the design and development of a site dashboard project, started a couple of years ago, continued in 2017 and was finalized in the first half of 2018. In its original idea, the project aimed at building a monitoring system able to gather information from all the available sources in order to improve the management of a Tier2 datacenter. A centralized site dashboard has been developed, based on specific tools selected to meet tight technical requirements, such as the capability to manage a huge amount of data in a fast way and through an interactive and customizable graphical user interface. Its current version, running at the Bari Tier2 site for more than two years, relies on an open source time-series database (InfluxDB), a dashboard builder for visualizing time-series metrics (Grafana) and dedicated code written to implement the gathering sensors (a minimal example is sketched below). The Bari dashboard was exported to all the other sites during 2016 and 2017: the project has now entered its final phase, in which a unique centralized dashboard for the ALICE computing in Italy is being implemented. The project prospects also include the design of a more general monitoring system for distributed datacenters, able to provide active support to site administrators in detecting critical events as well as to improve problem solving and debugging procedures. A contribution on the Italian dashboard was presented at CHEP 2016~\cite{Elia:2017dzc}. This project also underlies the development of the new ALICE monitoring system for the \OO\ farm at CERN, which was approved by the \OO\ Technical Board; a first prototype of such a monitoring system was made ready to be used for the TPC detector test at P2 in May 2018. This development corresponds to the main activity item for one of the three fellowship contracts provided by the INFN in 2017 for the LHC computing developments towards Run3 and Run4. The other two fellowships are devoted to the analysis framework and to new strategies in the analysis algorithms: in particular, one fellow focuses on implementing a new analysis framework for the \OO\ system, while the other is implementing a general-purpose framework to easily include industry-standard Machine Learning tools in the analysis workflows.\\
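As an example of the kind of gathering sensor mentioned above, the following minimal sketch pushes one sample of a site metric into InfluxDB with the influxdb Python client, from where Grafana can read and plot the corresponding time series; the host, database and measurement names are hypothetical, and the sensors actually deployed at the sites are dedicated custom code that may be structured differently.
\begin{verbatim}
# Minimal sketch of a monitoring sensor writing one sample to InfluxDB 1.x
# (host, database and measurement names are hypothetical).
from influxdb import InfluxDBClient

db = InfluxDBClient(host='dashboard.example.infn.it', port=8086,
                    database='alice_site_monitoring')

point = {
    'measurement': 'running_jobs',
    'tags': {'site': 'INFN-BARI', 'vo': 'alice'},
    'fields': {'value': 1234},  # e.g. number of currently running jobs
}
db.write_points([point])  # Grafana visualizes the same series
\end{verbatim}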

\section{Role and contribution of the INFN Tier1 at CNAF} 

CNAF is a full-fledged ALICE Tier1 center, having been one of the first to join the production infrastructure years ago. According to the ALICE cloud-like computing model, it has no special assigned task or reference community, but provides computing and storage resources to the whole collaboration, along with offering valuable support staff for the experiment’s computing activities. It provides reliable xrootd access both to its disk storage and to the tape infrastructure, through a TSM plugin that was developed by CNAF staff specifically for ALICE use.\\
As a result of the flooding, the CNAF computing center stopped operation on November 8th, 2017; tape access was made available again on January 31st, 2018, and the ALICE Storage Element was fully recovered by February 23rd. The loss of CPU resources during the Tier1 shutdown was partially mitigated by the reallocation of the Tier1 worker nodes located in Bari to the Bari Tier2 queue. At the end of February 2018 the CNAF local farm was powered on again, moving gradually from 50 kHS06 to 140 kHS06. In addition, on March 15th 170 kHS06 at CINECA became available thanks to a dedicated 500 Gb/s link.
Since March, running at CNAF has been remarkably stable: for example, both the disk and tape storage availabilities have been better than 98\%, ranking CNAF among the top 5 most reliable sites for ALICE. The computing resources provided for ALICE at the CNAF Tier1 center were fully used throughout the year, matching and often exceeding the pledged amounts thanks to the access to resources left unused by other collaborations. Overall, about 64\% of the ALICE computing activity was Monte Carlo simulation, 14\% raw data processing (which takes place at the Tier0 and Tier1 centers only) and 22\% analysis activities: Fig.~\ref{fig:runjobsusers} illustrates the share among the different activities in the ALICE running job profile over the last 12 months.\\
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.75\textwidth]{running_jobs_per_users_2018}
\end{center} 
\caption{Share among the different ALICE activities in the 2018 running jobs profile.}
\label{fig:runjobsusers}
\end{figure}
In order to optimize the use of resources and enhance the “CPU efficiency” (the ratio of CPU to Wall Clock times), an effort was started in 2011 to move the analysis tasks from user-submitted “chaotic” jobs to organized, centrally managed “analysis trains”. The current split of analysis activities, in terms of CPU hours, is about 14\% individual jobs and 86\% organized trains.
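Written explicitly, the CPU efficiency quoted throughout this report is
\[
\varepsilon_{\mathrm{CPU}} \;=\; \frac{t_{\mathrm{CPU}}}{t_{\mathrm{wall}}},
\]
i.e. the ratio between the CPU time actually consumed by a job and its wall-clock duration; organized trains improve this ratio mainly because a single pass over the input data serves many analysis tasks at once.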
Since April 2018, CNAF has deployed the pledged resources, corresponding to about 52 kHS06 of CPU, 5140 TB of disk and 13530 TB of tape storage.\\
The INFN Tier1 has provided about 4.9\% of the total CPU hours used by ALICE since March 2018, and about 4.2\% over the whole year, ranking second among the ALICE Tier1 sites despite the flooding incident, as shown in Fig.~\ref{fig:walltimesharet1}. The fraction of CPU hours cumulated by CNAF over the whole year amounts to about 21\% of that of all the ALICE Tier1 sites, second only to FZK in Karlsruhe (24\%).
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.75\textwidth]{wall_time_tier1_2018}
\end{center} 
\caption{Ranking of CNAF among ALICE Tier1 centers in 2018.}
\label{fig:walltimesharet1}
\end{figure}
This amounts to about 44\% of the total Wall Time of the INFN contribution: CNAF successfully completed nearly 10.5 million jobs, for a total of more than 44 million CPU hours; the running job profile at CNAF in 2018 is shown in Fig.~\ref{fig:rjobsCNAFunov}.\\
Since mid-November a new job submission queue has been made available to ALICE and used to successfully test the job queueing mechanism, the scheduling policy, the priority scheme, the resource monitoring and the resource management with HTCondor at CNAF. 
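Purely as an illustration of such a functional test, a minimal submission through the HTCondor Python bindings could look like the sketch below; the executable and submit attributes are hypothetical, and the actual ALICE workload is routed to the queue by the experiment's Grid job submission services rather than by hand.
\begin{verbatim}
# Minimal sketch of an HTCondor test submission (job parameters are
# hypothetical); ALICE production jobs arrive via the Grid services.
import htcondor

submit = htcondor.Submit({
    'executable': '/bin/sleep',
    'arguments':  '60',
    'output':     'test.$(ClusterId).out',
    'error':      'test.$(ClusterId).err',
    'log':        'test.log',
})

schedd = htcondor.Schedd()               # local schedd
result = schedd.submit(submit, count=1)  # requires recent python bindings
print('submitted cluster', result.cluster())
\end{verbatim}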
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.75\textwidth]{running_jobs_CNAF_2018}
\end{center} 
\caption{Running jobs profile at CNAF in 2018.}
\label{fig:rjobsCNAFunov}
\end{figure}
At the end of the year ALICE was keeping on disk at CNAF more than 4.1 PiB of data in nearly 118 million files, plus more than 10 PiB of raw data on custodial tape storage; the reliability of the storage infrastructure is commendable, even taking into account the extra layer of complexity introduced by the xrootd interfaces. The excellent filesystem performance allows data to be analysed from the SE with an average throughput of about 1.6 GB/s and a peak throughput of about 3.0 GB/s, as shown in Fig.~\ref{fig:nettrafse}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.75\textwidth]{network_traffic_cnaf_se_2018}
\end{center} 
\caption{Network traffic on the ALICE xrootd servers at CNAF during 2018.}
\label{fig:nettrafse}
\end{figure}
Network connectivity has also always been reliable; the 100 Gb/s LHCOPN and the 100 Gb/s LHCONE WAN links make CNAF one of the best-connected sites in the ALICE Computing Grid, allowing ALICE to sustain a total traffic of up to 360 TiB of raw data per week from the Tier0 to CNAF, as shown in Fig.~\ref{fig:tottraftape}.


\section*{References}

\begin{thebibliography}{9}

%\cite{Abelev:2014ffa}
\bibitem{Abelev:2014ffa}
  B.~B.~Abelev {\it et al.} [ALICE Collaboration],
  %``Performance of the ALICE Experiment at the CERN LHC,''
  Int.\ J.\ Mod.\ Phys.\ A {\bf 29} (2014) 1430044.
  %doi:10.1142/S0217751X14300440
  %[arXiv:1402.4476 [nucl-ex]].
  %%CITATION = doi:10.1142/S0217751X14300440;%%
  %310 citations counted in INSPIRE as of 01 Mar 2018

%\cite{Adam:2015ptt}
\bibitem{Adam:2015ptt}
  J.~Adam {\it et al.} [ALICE Collaboration],
  %``Centrality dependence of the charged-particle multiplicity density at midrapidity in Pb-Pb collisions at $\snn = 5.02$ TeV,''
  Phys.\ Rev.\ Lett.\  {\bf 116} (2016) no.22,  222302.
  % doi:10.1103/PhysRevLett.116.222302
  % [arXiv:1512.06104 [nucl-ex]].
  %%CITATION = doi:10.1103/PhysRevLett.116.222302;%%
  %73 citations counted in INSPIRE as of 04 Mar 2018

%\cite{Adam:2016izf}
\bibitem{Adam:2016izf}
  J.~Adam {\it et al.} [ALICE Collaboration],
  %``Anisotropic flow of charged particles in Pb-Pb collisions at $\snn=5.02$ TeV,''
  Phys.\ Rev.\ Lett.\  {\bf 116} (2016) no.13,  132302.
  % doi:10.1103/PhysRevLett.116.132302
  % [arXiv:1602.01119 [nucl-ex]].
  %%CITATION = doi:10.1103/PhysRevLett.116.132302;%%
  %69 citations counted in INSPIRE as of 04 Mar 2018
	
%\cite{Acharya:2018qsh}
\bibitem{Acharya:2018qsh}
  S.~Acharya {\it et al.} [ALICE Collaboration],
  %``Transverse momentum spectra and nuclear modification factors of charged particles in pp, p-Pb and Pb-Pb collisions at the LHC,''
  arXiv:1802.09145 [nucl-ex].
  %%CITATION = ARXIV:1802.09145;%%
	
%\cite{Acharya:2017kfy}
\bibitem{Acharya:2017kfy}
  S.~Acharya {\it et al.} [ALICE Collaboration],
  %``$\Lambda_{\rm c}^+$ production in pp collisions at $\s = 7$ TeV and in p-Pb collisions at $\snn = 5.02$ TeV,''
  JHEP {\bf 1804} (2018) 108.
%  doi:10.1007/JHEP04(2018)108
%  [arXiv:1712.09581 [nucl-ex]].
  %%CITATION = doi:10.1007/JHEP04(2018)108;%%
  %34 citations counted in INSPIRE as of 07 May 2019

\bibitem{Acharya:2018yud}
  S.~Acharya {\it et al.} [ALICE Collaboration],
  %``Prompt and non-prompt $\hbox {J}/\psi $ production and nuclear modification at mid-rapidity in p–Pb collisions at $\snn= 5.02}$  TeV,''
  Eur.\ Phys.\ J.\ C {\bf 78} (2018) no.6,  466.
%  doi:10.1140/epjc/s10052-018-5881-2
%  [arXiv:1802.00765 [nucl-ex]].
  %%CITATION = doi:10.1140/epjc/s10052-018-5881-2;%%
  %3 citations counted in INSPIRE as of 07 May 2019

%\cite{Acharya:2018uww}
\bibitem{Acharya:2018uww}
  S.~Acharya {\it et al.} [ALICE Collaboration],
  %``Measurement of the inclusive J/ $\psi $ polarization at forward rapidity in pp collisions at $\s = 8$  TeV,''
  Eur.\ Phys.\ J.\ C {\bf 78} (2018) no.7,  562.
%  doi:10.1140/epjc/s10052-018-6027-2
%  [arXiv:1805.04374 [hep-ex]].
  %%CITATION = doi:10.1140/epjc/s10052-018-6027-2;%%
  %4 citations counted in INSPIRE as of 07 May 2019

%\cite{Acharya:2017qps}
\bibitem{Acharya:2017qps}
  S.~Acharya {\it et al.} [ALICE Collaboration],
  %``$D$-meson azimuthal anisotropy in midcentral Pb-Pb collisions at $\snn=5.02}$ TeV,''
  Phys.\ Rev.\ Lett.\  {\bf 120} (2018) no.10,  102301.
%  doi:10.1103/PhysRevLett.120.102301
%  [arXiv:1707.01005 [nucl-ex]].
  %%CITATION = doi:10.1103/PhysRevLett.120.102301;%%
  %42 citations counted in INSPIRE as of 07 May 2019

%\cite{Acharya:2017wpf}
\bibitem{Acharya:2017wpf}
  S.~Acharya {\it et al.} [ALICE Collaboration],
  %``Measurement of Z$^0$-boson production at large rapidities in Pb-Pb collisions at $\snn=5.02$ TeV,''
  Phys.\ Lett.\ B {\bf 780} (2018) 372.
%  doi:10.1016/j.physletb.2018.03.010
%  [arXiv:1711.10753 [nucl-ex]].
  %%CITATION = doi:10.1016/j.physletb.2018.03.010;%%
  %4 citations counted in INSPIRE as of 07 May 2019

%\cite{Acharya:2018zuq}
\bibitem{Acharya:2018zuq}
  S.~Acharya {\it et al.} [ALICE Collaboration],
  %``Anisotropic flow of identified particles in Pb-Pb collisions at $\snn=5.02 $ TeV,''
  JHEP {\bf 1809} (2018) 006.
%  doi:10.1007/JHEP09(2018)006
%  [arXiv:1805.04390 [nucl-ex]].
  %%CITATION = doi:10.1007/JHEP09(2018)006;%%
  %11 citations counted in INSPIRE as of 07 May 2019

%\cite{Acharya:2018ihu}
\bibitem{Acharya:2018ihu}
  S.~Acharya {\it et al.} [ALICE Collaboration],
  %``Anisotropic flow in Xe-Xe collisions at $\snn = 5.44}$ TeV,''
  Phys.\ Lett.\ B {\bf 784} (2018) 82.
%  doi:10.1016/j.physletb.2018.06.059
%  [arXiv:1805.01832 [nucl-ex]].
  %%CITATION = doi:10.1016/j.physletb.2018.06.059;%%
  %19 citations counted in INSPIRE as of 07 May 2019
  
  %\cite{Acharya:2018jvc}
\bibitem{Acharya:2018jvc}
  S.~Acharya {\it et al.} [ALICE Collaboration],
  %``Inclusive J/$\psi$ production in Xe–Xe collisions at $\snn = 5.44$ TeV,''
  Phys.\ Lett.\ B {\bf 785} (2018) 419.
%  doi:10.1016/j.physletb.2018.08.047
%  [arXiv:1805.04383 [nucl-ex]].
  %%CITATION = doi:10.1016/j.physletb.2018.08.047;%%
  %5 citations counted in INSPIRE as of 07 May 2019
  
  %\cite{Acharya:2018eaq}
\bibitem{Acharya:2018eaq}
  S.~Acharya {\it et al.} [ALICE Collaboration],
  %``Transverse momentum spectra and nuclear modification factors of charged particles in Xe-Xe collisions at $\snn= 5.44$ TeV,''
  Phys.\ Lett.\ B {\bf 788} (2019) 166.
%  doi:10.1016/j.physletb.2018.10.052
%  [arXiv:1805.04399 [nucl-ex]].
  %%CITATION = doi:10.1016/j.physletb.2018.10.052;%%
  %18 citations counted in INSPIRE as of 07 May 2019

%\cite{ALICE:2005aa}
\bibitem{ALICE:2005aa}
  P.~Cortese {\it et al.} [ALICE Collaboration],
  %``ALICE technical design report of the computing,''
  CERN-LHCC-2005-018.
  %%CITATION = CERN-LHCC-2005-018;%%
  %44 citations counted in INSPIRE as of 01 Mar 2018

%\cite{Buncic:2015ari}
\bibitem{Buncic:2015ari}
  P.~Buncic, M.~Krzewicki and P.~Vande Vyvre,
  %``Technical Design Report for the Upgrade of the Online-Offline Computing System,''
  CERN-LHCC-2015-006, ALICE-TDR-019.
  %%CITATION = CERN-LHCC-2015-006, ALICE-TDR-019;%%
  %53 citations counted in INSPIRE as of 01 Mar 2018
	
%\cite{Elia:2017dzc}
\bibitem{Elia:2017dzc}
  D.~Elia {\it et al.} [ALICE Collaboration],
  %``A dashboard for the Italian computing in ALICE,''
  J.\ Phys.\ Conf.\ Ser.\  {\bf 898} (2017) no.9,  092054.
  %doi:10.1088/1742-6596/898/9/092054
  %%CITATION = doi:10.1088/1742-6596/898/9/092054;%%

	
\end{thebibliography}

\end{document}