Commit e0c428e0 authored by bovy89

sync from upstream

parents 26c3d0ea f1bbf4d4
Showing 33968 additions and 22 deletions
@@ -72,21 +72,20 @@ cd ${builddir}
 build_from_source user-support main.tex *.PNG
 build_from_source ams AMS-report-2019.tex AMS_nuovo.pdf contributors.pdf He-MC.pdf input_output.jpg production_jobs.jpg
-#build_from_source alice alice.tex *.png *.eps
+build_from_source alice main.tex *.png
-#build_from_source atlas atlas.tex
+build_from_source atlas atlas.tex
 #build_from_source borexino borexino.tex
 build_from_source cms report-cms-feb-2019.tex tier1-jobs-2018.pdf tier1-readiness-2018.pdf
 link_pdf belle Cnaf-2019-5.0.pdf
 #build_from_source cosa cosa.tex biblio.bib beegfs.PNG
 build_from_source cnprov cnprov.tex
-#build_from_source cta cta.tex *.eps
+build_from_source cta CTA_annualreport_2018_v1.tex *.eps
 #build_from_source cuore cnaf_cuore.tex cnaf_cuore.bib
-#build_from_source cupid cupid.tex cupid.bib
+build_from_source cupid main.tex cupid-biblio.bib
-#link_pdf dampe dampe.pdf
+build_from_source dampe main.tex *.jpg *.png
 #link_pdf darkside ds.pdf
 #build_from_source eee eee.tex EEEarch.eps EEEmonitor.eps EEEtracks.png ELOGquery.png request.png
 #build_from_source exanest exanest.tex biblio.bib monitoring.PNG storage.png
-build_from_source test TEST.tex test.eps
 #build_from_source fazia fazia.tex
 build_from_source fermi fermi.tex
 build_from_source gamma gamma.tex
@@ -94,9 +93,10 @@ build_from_source gamma gamma.tex
 #build_from_source glast glast.tex
 #link_pdf juno juno.pdf
 build_from_source km3net km3net.tex compmodel.png threetier.png
+build_from_source na62 main.tex
 build_from_source newchim repnewchim18.tex fig1.png
 #build_from_source lhcb lhcb.tex *.jpg
-#build_from_source lhcf lhcf.tex
+build_from_source lhcf lhcf.tex
 build_from_source limadou limadou.tex
 #build_from_source lowcostdev lowcostdev.tex *.jpg
 #build_from_source lspe lspe.tex biblio.bib lspe_data_path.pdf
@@ -110,9 +110,9 @@ build_from_source virgo AdV_computing_CNAF.tex
 #build_from_source mw-iam mw-iam.tex
 #build_from_source na62 na62.tex
-#link_pdf padme padme.pdf
+link_pdf padme 2019_PADMEcontribution.pdf
 #build_from_source xenon xenon.tex xenon-computing-model.pdf
-#build_from_source sysinfo sysinfo.tex pres_rundeck.png deploy_grafana.png
+build_from_source sysinfo sysinfo.tex *.png
 #link_pdf virgo VirgoComputing.pdf
 #build_from_source tier1 tier1.tex
@@ -128,9 +128,9 @@ build_from_source virgo AdV_computing_CNAF.tex
 #build_from_source ssnn2 vmware.tex *.JPG *.jpg
 #build_from_source infra Chiller.tex chiller-location.png
+build_from_source audit Audit-2018.tex
 #build_from_source cloud_cnaf cloud_cnaf.tex *.png
-#build_from_source srp SoftRel.tex ar2017.bib
+build_from_source dmsq dmsq2018.tex ar2018.bib
 #build_from_source st StatMet.tex sm2017.bib
 #build_from_source cloud_a cloud_a.tex *.pdf
 #build_from_source cloud_b cloud_b.tex *.png *.jpg
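The two helpers invoked above, build_from_source and link_pdf, are defined earlier in the script and are not part of this hunk. A minimal, hypothetical sketch of what they are assumed to do is shown below: collect a contribution's sources into ${builddir} and produce (or simply copy) its PDF. The directory layout and the pdflatex invocation are assumptions for illustration, not code taken from this repository.

# Hypothetical sketch of the helpers used above (not part of this commit).
build_from_source() {
    local name=$1; shift
    local texfile=$1
    local dest="${builddir}/${name}"
    mkdir -p "${dest}"
    (
        cd "contributions/${name}" || exit 1
        # arguments after the name are the main .tex plus any figures/.bib;
        # globs such as *.png are expanded here, inside the contribution directory
        cp -- $@ "${dest}/"
        cd "${dest}" || exit 1
        pdflatex -interaction=nonstopmode "${texfile}" && \
        pdflatex -interaction=nonstopmode "${texfile}"   # second pass for references
    )
    cp "${dest}/${texfile%.tex}.pdf" "${builddir}/${name}.pdf"
}

link_pdf() {
    # the contribution is already a ready-made PDF: just collect it
    local name=$1 pdf=$2
    cp "contributions/${name}/${pdf}" "${builddir}/${name}.pdf"
}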
@@ -144,32 +144,31 @@ Introducing the sixth annual report of CNAF...
 %\includepdf[pages=1, pagecommand={\thispagestyle{empty}}]{papers/experiment.pdf}
 \cleardoublepage
 \ia{User and Operational Support at CNAF}{user-support}
-%\ia{ALICE computing at the INFN CNAF Tier 1}{alice}
+\ia{ALICE computing at the INFN CNAF Tier1}{alice}
 \ia{AMS-02 data processing and analysis at CNAF}{ams}
 %\ia{The ATLAS experiment at the INFN CNAF Tier 1}{atlas}
 %\ia{The Borexino-SOX experiment at the INFN CNAF Tier 1}{borexino}
-%\ia{The Cherenkov Telescope Array}{cta}
+\ia{The Cherenkov Telescope Array}{cta}
 \ia{The CMS experiment at the INFN CNAF Tier 1}{cms}
 \ia{The Belle II experiment at CNAF}{belle}
 \ia{CSES-Limadou at CNAF}{limadou}
 %\ia{CUORE experiment}{cuore}
-%\ia{CUPID-0 experiment}{cupid}
+\ia{CUPID-0 experiment}{cupid}
-%\ia{DAMPE data processing and analysis at CNAF}{dampe}
+\ia{DAMPE data processing and analysis at CNAF}{dampe}
 %\ia{DarkSide-50 experiment at CNAF}{darkside}
 %\ia{The EEE Project activity at CNAF}{eee}
-\ia{TEST FOR COMMITTEE}{test}
 \ia{The \emph{Fermi}-LAT experiment}{fermi}
 %\ia{Fazia: running dynamical simulations for heavy ion collisions at Fermi energies}{fazia}
 \ia{GAMMA experiment}{gamma}
 %\ia{The GERDA experiment}{gerda}
 %\ia{Juno experimenti at CNAF}{juno}
 \ia{The KM3NeT neutrino telescope network and CNAF}{km3net}
-\ia{The NEWCHIM activity at CNAF for the CHIMERA and FARCOS devices}{newchim}
 %\ia{LHCb Computing at CNAF}{lhcb}
-%\ia{The LHCf experiment}{lhcf}
+\ia{The LHCf experiment}{lhcf}
 %\ia{The LSPE experiment at INFN CNAF}{lspe}
-%\ia{The NA62 experiment at CERN}{na62}
+\ia{The NA62 experiment at CERN}{na62}
-%\ia{The PADME experiment at INFN CNAF}{padme}
+\ia{The NEWCHIM activity at CNAF for the CHIMERA and FARCOS devices}{newchim}
+\ia{The PADME experiment at INFN CNAF}{padme}
 %\ia{XENON computing activities}{xenon}
 \ia{Advanced Virgo computing at CNAF}{virgo}
 %
@@ -191,7 +190,7 @@ Introducing the sixth annual report of CNAF...
 %\ia{Cooling system upgrade and Power Usage Effectiveness improvement in the INFN CNAF Tier 1 infrastructure}{infra}
 %\ia{National ICT Services Infrastructure and Services}{ssnn1}
 %\ia{National ICT Services hardware and software infrastructures for Central Services}{ssnn2}
-%\ia{The INFN Information System}{sysinfo}
+\ia{The INFN Information System}{sysinfo}
 \ia{CNAF Provisioning system: Puppet 5 upgrade}{cnprov}
@@ -202,13 +201,14 @@ Introducing the sixth annual report of CNAF...
 \addtocontents{toc}{\protect\mbox{}\protect\hrulefill\par}
 %\includepdf[pages=1, pagecommand={\thispagestyle{empty}}]{papers/research.pdf}
 \cleardoublepage
+\ia{Internal Auditing INFN for GDPR compliance}{audit}
 %\ia{Continuous Integration and Delivery with Kubernetes}{mw-kube}
 %\ia{Middleware support, maintenance and development}{mw-software}
 %\ia{Evolving the INDIGO IAM service}{mw-iam}
 %\ia{Esaco: an OAuth/OIDC token introspection service}{mw-esaco}
 %\ia{StoRM Quality of Service and Data Lifecycle support through CDMI}{mw-cdmi-storm}
 %\ia{A low-cost platform for space software development}{lowcostdev}
-%\ia{Overview of Software Reliability literature}{srp}
+\ia{Comparing Data Mining Techniques for Software Defect Prediction}{dmsq}
 %\ia{Summary of a tutorial on statistical methods}{st}
 %\ia{Dynfarm: Transition to Production}{dynfarm}
 %\ia{Official testing and increased compatibility for Dataclient}{dataclient}
New images added under contributions/alice/: network_traffic_cnaf_se_2018.png (146 KiB), raw_data_accumulation_run2.png (66.2 KiB), running_jobs_CNAF_2018.png (122 KiB), running_jobs_per_users_2018.png (182 KiB), total_traffic_cnaf_tape_2018.png (65.1 KiB), wall_time_tier1_2018.png (70.5 KiB).
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{The ATLAS Experiment at the INFN CNAF Tier-1}
\author{Alessandro De Salvo$^1$, Lorenzo Rinaldi$^2$}
\address{$^1$ INFN Sezione di Roma-1, piazzale Aldo Moro 2, 00185 Roma, Italy,\\ $^2$ Universit\`a di Bologna e INFN, via Irnerio 46, 40126 Bologna, Italy}
\ead{alessandro.desalvo@roma1.infn.it, lorenzo.rinaldi@bo.infn.it}
\begin{abstract}
The ATLAS experiment at the LHC was fully operational in 2018. In this contribution we describe the ATLAS computing activities performed at the Italian sites of the Collaboration, and in particular the utilisation of the CNAF Tier-1.
\end{abstract}
\section{Introduction}
ATLAS \cite{ATLAS-det} is one of two general-purpose detectors at the Large Hadron Collider (LHC). It investigates a wide range of physics, from the search for the Higgs boson and standard model studies to extra dimensions and particles that could make up dark matter. Beams of particles from the LHC collide at the centre of the ATLAS detector making collision debris in the form of new particles, which fly out from the collision point in all directions. Six different detecting subsystems arranged in layers around the collision point record the paths, momentum, and energy of the particles, allowing them to be individually identified. A huge magnet system bends the paths of charged particles so that their momenta can be measured. The interactions in the ATLAS detectors create an enormous flow of data. To digest the data, ATLAS uses an advanced trigger system to tell the detector which events to record and which to ignore. Complex data-acquisition and computing systems are then used to analyse the collision events recorded. At 46 m long, 25 m high and 25 m wide, the 7000-tons ATLAS detector is the largest volume particle detector ever built. It sits in a cavern 100 m below ground near the main CERN site, close to the village of Meyrin in Switzerland.
More than 3000 scientists from 174 institutes in 38 countries work on the ATLAS experiment.
ATLAS took data from 2010 to 2012 at centre-of-mass energies of 7 and 8 TeV, collecting about 5 and 20 fb$^{-1}$ of integrated luminosity, respectively. During the complete Run-2 phase (2015-2018) ATLAS collected and registered at the Tier-0 147 fb$^{-1}$ of integrated luminosity at a centre-of-mass energy of 13 TeV.
The experiment has been designed to look for New Physics over a very large set of final states and signatures, and for precision measurements of known Standard Model (SM) processes. Its most notable result up to now has been the discovery of a new resonance at a mass of about 125 GeV \cite{ATLAS higgs}, followed by the measurement of its properties (mass, production cross sections in various channels and couplings). These measurements have confirmed the compatibility of the new resonance with the Higgs boson, foreseen by the SM but never observed before.
\section{The ATLAS Computing System}
The ATLAS Computing System \cite{ATLAS-cm} is responsible for the provision of the software framework and services, the data management system, user-support services, and the world-wide data access and job-submission system. The development of detector-specific algorithmic code for simulation, calibration, alignment, trigger and reconstruction is under the responsibility of the detector projects, but the Software and Computing Project plans and coordinates these activities across detector boundaries. In particular, a significant effort has been made to ensure that relevant parts of the “offline” framework and event-reconstruction code can be used in the High Level Trigger. Similarly, close cooperation with Physics Coordination and the Combined Performance groups ensures the smooth development of global event-reconstruction code and of software tools for physics analysis.
\subsection{The ATLAS Computing Model}
The ATLAS Computing Model embraces the Grid paradigm and a high degree of decentralisation and sharing of computing resources. The required level of computing resources means that off-site facilities are vital to the operation of ATLAS in a way that was not the case for previous CERN-based experiments. The primary event processing occurs at CERN in a Tier-0 Facility. The RAW data is archived at CERN and copied (along with the primary processed data) to the Tier-1 facilities around the world. These facilities archive the raw data, provide the reprocessing capacity, provide access to the various processed versions, and allow scheduled analysis of the processed data by physics analysis groups. Derived datasets produced by the physics groups are copied to the Tier-2 facilities for further analysis. The Tier-2 facilities also provide the simulation capacity for the experiment, with the simulated data housed at Tier-1s. In addition, Tier-2 centres provide analysis facilities, and some provide the capacity to produce calibrations based on processing raw data. A CERN Analysis Facility provides an additional analysis capacity, with an important role in the calibration and algorithmic development work. ATLAS has adopted an object-oriented approach to software, based primarily on the C++ programming language, but with some components implemented using FORTRAN and Java. A component-based model has been adopted, whereby applications are built up from collections of plug-compatible components based on a variety of configuration files. This capability is supported by a common framework that provides common data-processing support. This approach results in great flexibility in meeting both the basic processing needs of the experiment, but also for responding to changing requirements throughout its lifetime. The heavy use of abstract interfaces allows for different implementations to be provided, supporting different persistency technologies, or optimized for the offline or high-level trigger environments.
The Athena framework is an enhanced version of the Gaudi framework that was originally developed by the LHCb experiment, but is now a common ATLAS-LHCb project. Major
design principles are the clear separation of data and algorithms, and between transient (in-memory) and persistent (in-file) data. All levels of processing of ATLAS data, from high-level trigger to event simulation, reconstruction and analysis, take place within the Athena framework; in this way it is easier for code developers and users to test and run algorithmic code, with the assurance that all geometry and conditions data will be the same for all types of applications (simulation, reconstruction, analysis, visualization).
One of the principal challenges for ATLAS computing is to develop and operate a data storage and management infrastructure able to meet the demands of a yearly data volume of O(10PB) utilized by data processing and analysis activities spread around the world. The ATLAS Computing Model establishes the environment and operational requirements that ATLAS data-handling systems must support and provides the primary guidance for the development of the data management systems.
The ATLAS Databases and Data Management Project (DB Project) leads and coordinates ATLAS activities in these areas, with a scope encompassing technical data bases (detector production, installation and survey data), detector geometry, online/TDAQ databases, conditions databases (online and offline), event data, offline processing configuration and bookkeeping, distributed data management, and distributed database and data management services. The project is responsible for ensuring the coherent development, integration and operational capability of the distributed database and data management software and infrastructure for ATLAS across these areas.
The ATLAS Computing Model defines the distribution of raw and processed data to Tier-1 and Tier-2 centres, so as to be able to exploit fully the computing resources that are made available to the Collaboration. Additional computing resources are available for data processing and analysis at Tier-3 centres and other computing facilities to which ATLAS may have access. A complex set of tools and distributed services, enabling the automatic distribution and processing of the large amounts of data, has been developed and deployed by ATLAS in cooperation with the LHC Computing Grid (LCG) Project and with the middleware providers of the three large Grid infrastructures we use: EGI, OSG and NorduGrid. The tools are designed in a flexible way, in order to have the possibility to extend them to use other types of Grid middleware in the future.
The main computing operations that ATLAS has to run comprise the preparation, distribution and validation of ATLAS software, and the computing and data management operations run centrally on Tier-0, Tier-1s and Tier-2s. The ATLAS Virtual Organization allows production and analysis users to run jobs and access data at remote sites using the ATLAS-developed Grid tools.
The Computing Model, together with the knowledge of the resources needed to store and process each ATLAS event, gives rise to estimates of required resources that can be used to design and set up the various facilities. It is not assumed that all Tier-1s or Tier-2s are of the same size; however, in order to ensure a smooth operation of the Computing Model, all Tier-1s usually have broadly similar proportions of disk, tape and CPU, and similarly for the Tier-2s.
The organization of the ATLAS Software and Computing Project reflects all areas of activity within the project itself. Strong high-level links are established with other parts of the ATLAS organization, such as the TDAQ Project and Physics Coordination, through cross-representation in the respective steering boards. The Computing Management
Board, and in particular the Planning Officer, acts to make sure that software and computing developments take place coherently across sub-systems and that the project as a whole meets its milestones. The International Computing Board assures the information flow between the ATLAS Software and Computing Project and the national resources and their Funding Agencies.
\section{The role of the Italian Computing facilities in the global ATLAS Computing}
Italy provides Tier-1, Tier-2 and Tier-3 facilities to the ATLAS collaboration. The Tier-1, located at CNAF, Bologna, is the main centre, also referred to as the “regional” centre. The Tier-2 centres are distributed in different areas of Italy, namely in Frascati, Napoli, Milano and Roma. All 4 Tier-2 sites are considered Direct Tier-2 (T2D) sites, meaning that they have a higher importance with respect to normal Tier-2s and can also host primary data. They are also considered satellites of the Tier-1, which is identified as the nucleus. The Tier-2 sites together correspond to more than the total ATLAS size at the Tier-1 in terms of disk and CPU; tape is not available at the Tier-2 sites. A third category of sites is the so-called Tier-3 centres. These are smaller centres, scattered across Italy, that nevertheless contribute in a significant way to the overall computing power, in terms of disk and CPU. The overall size of the Tier-3 sites corresponds roughly to that of a Tier-2 site. The Tier-1 and Tier-2 sites have pledged resources, while the Tier-3 sites do not have any pledged resources.
In terms of pledged resources, Italy contributes to ATLAS computing with 9\% of both CPU and disk at the Tier-1. The share of the Tier-2 facilities corresponds to 7\% of the disk and 9\% of the CPU of the whole ATLAS computing infrastructure. The Italian Tier-1, together with the other Italian centres, provides both resources and expertise to the ATLAS computing community, and manages the so-called Italian computing Cloud. Since 2015 the Italian Cloud includes not only Italian sites, but also Tier-3 sites of other countries, namely South Africa and Greece.
The computing resources, in terms of disk, tape and CPU, available at the Tier-1 at CNAF have been very important for all kinds of activities, including event generation, simulation, reconstruction, reprocessing and analysis, for both Monte Carlo and real data. Its major contribution has been data reprocessing, since this is a very I/O- and memory-intensive operation, normally executed only at Tier-1 centres. In this sense CNAF has played a fundamental role for the precise measurement of the Higgs boson properties \cite{ATLAS higgs} in 2018 and for other analyses. The Italian centres, including CNAF, have been very active not only on the operations side, but have also contributed significantly to various aspects of ATLAS computing, in particular concerning the network, the storage systems, the storage federations and the monitoring tools. The Tier-1 at CNAF has been particularly important for the ATLAS community in 2018 for some specific activities:
\begin{itemize}
\item improvements to the WebDAV/HTTPS access for StoRM, to be used as the main renaming method for ATLAS files in StoRM and for HTTP federation purposes (a minimal, hypothetical access example is sketched just after this list);
\item improvements to the dynamic model of the multi-core resources operated via the LSF resource management system and simplification of the PanDA queues, using the Harvester service to mediate the control and information flow between PanDA and the resources;
\item network troubleshooting via the Perfsonar-PS network monitoring system, used for the LHCONE overlay network, together with the other Tier-1 and Tier-2 sites;
\item planning, readiness testing and implementation of the HTCondor batch system for the farming resources management.
\end{itemize}
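For illustration only, the WebDAV/HTTPS access mentioned in the first item can be exercised with a generic HTTP client authenticating with a VOMS proxy; the endpoint and file paths below are placeholders and do not correspond to the actual CNAF configuration.
\begin{verbatim}
# Hypothetical WebDAV/HTTPS access to a StoRM endpoint
# (endpoint and paths are placeholders).
ENDPOINT=https://storm-webdav.example.infn.it:8443/atlas
PROXY=/tmp/x509up_u$(id -u)

# directory listing (WebDAV PROPFIND)
curl --cert "$PROXY" --key "$PROXY" \
     --capath /etc/grid-security/certificates \
     -X PROPFIND -H "Depth: 1" "$ENDPOINT/datadisk/"

# server-side rename (WebDAV MOVE), the kind of operation used for renaming
curl --cert "$PROXY" --key "$PROXY" \
     --capath /etc/grid-security/certificates \
     -X MOVE -H "Destination: $ENDPOINT/datadisk/file.final.root" \
     "$ENDPOINT/datadisk/file.part.root"
\end{verbatim}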
\section{Main achievements of ATLAS Computing centers in Italy}
The Italian Tier-2 Federation runs all the ATLAS computing activities in the Italian cloud, supporting the operations at CNAF, the Italian Tier-1 centre, and at the Milano, Napoli, Roma1 and Frascati Tier-2 sites. This ensures an optimized use of the resources and fair and efficient data access. The computing activities of the ATLAS collaboration have been carried out continuously over the whole of 2018, in order to analyse the Run-2 data and produce the Monte Carlo samples needed for the 2018 run.
The LHC data taking started in April 2018 and, until the end of operations in December 2018, all the Italian sites, the CNAF Tier-1 and the four Tier-2s, have been involved in all the computing operations of the collaboration: data reconstruction, Monte Carlo simulation, user and group analysis and data transfer among all the sites. Besides these activities, the Italian centers have contributed to the upgrade of the Computing Model, both on the testing side and through the work of specific working groups. ATLAS collected and registered at the Tier-0 about 60.6 fb$^{-1}$ and about 25 PB of raw and derived data, while the cumulative data volume distributed to all the data centers on the grid was of the order of 80 PB. The data have been replicated with an efficiency of 100\% and an average throughput of the order of 13 GB/s during the data taking period, with peaks above 25 GB/s. For Italy alone, the average throughput was of the order of 800 MB/s, with peaks above 2 GB/s. The data replication from Tier-0 to the Tier-2s has been quite fast, with a transfer time lower than 4 hours. The average number of simultaneous jobs running on the grid has been about 110k for production (simulation and reconstruction) and data analysis, with peaks over 150k, and an average CPU efficiency of more than 80\%. The use of the grid for analysis has been stable at about 26k simultaneous jobs, with peaks over 40k around conference periods, showing the reliability and effectiveness of grid tools for data analysis.
The Italian sites contributed to the development of the Xrootd and http/webdav federations. In the latter case the access to the storage resources is managed using the http/webdav protocol, in collaboration with the CERN DPM team, the Belle2 experiment, the Canadian Corporate Cloud and the RAL (UK) site. The purpose is to build a reliable storage federation, alternative to the Xrootd one, to access physics data both on the grid and on cloud storage infrastructures (like Amazon S3, Microsoft Azure, etc.). The Italian community is particularly involved in this project and the first results have been presented to the WLCG collaboration.
The Italian community also contributes to develop new tools for distributed data analysis and management. Another topic of interest is the usage of new computing technologies: in this field the Italian community contributed to the development and testing of muon tracking algorithms in the ATLAS High Level Trigger, using GPGPU. Other topics in which the Italian community is involved are the Machine Learning/Deep Learning for both analysis and Operational Intelligence and their applications to the experiment software and infrastructure, by using accelerators like GPGPU and FPGAs.
The contribution of the Italian sites to the computing activities, in terms of processed jobs and data recorded, has been about 9\%, in line with the resources pledged to the collaboration, with very good performance in terms of availability, reliability and efficiency. All the sites are consistently in the top positions in the ranking of the collaboration sites.
Besides the Tier-1 and Tier-2s, in 2018 the Tier-3s also gave a significant contribution to the Italian physics community for data analysis. The Tier-3s are local farms dedicated to interactive data analysis, the last step of the analysis workflow, and to grid analysis over small data samples. Several Italian groups set up a farm for this purpose at their universities and, after a testing and validation process performed by the distributed computing team of the collaboration, all have been recognized as official Tier-3s of the collaboration.
\section{Impact of CNAF flooding incident on ATLAS computing activities}
The ATLAS Computing Model was designed with sufficient redundancy of the available resources to tackle emergency situations like the flooding that occurred at CNAF on November 9th, 2017. Thanks to the huge effort of the whole CNAF community, the data centre gradually resumed operation from the second half of February 2018. Continuous interaction between the ATLAS distributed computing community and CNAF staff was needed to bring computing operations fully back to normal. The close collaboration was very successful: after one month the site was almost fully operational and the ATLAS data management and processing activities were running smoothly again. In the end, the overall impact of the incident was rather limited, mainly thanks to the relatively quick recovery of the CNAF data centre and to the robustness of the computing model.
\section*{References}
\begin{thebibliography}{9}
\bibitem{ATLAS-det} The ATLAS Computing Technical Design Report ATLAS-TDR-017;
CERN-LHCC-2005-022, June 2005
\bibitem{ATLAS higgs} Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC, the ATLAS Collaboration, Physics Letters B, Volume 716, Issue 1, 17 September 2012, Pages 1–29
\bibitem{ATLAS-cm} The evolution of the ATLAS computing model; R W L Jones and D Barberis 2010 J. Phys.: Conf. Ser. 219 072037 doi:10.1088/1742-6596/219/7/072037
\end{thebibliography}
\end{document}
\documentclass[a4paper]{jpconf}
\bibliographystyle{iopart-num}
\begin{document}
\title{Internal Auditing INFN for GDPR compliance}
\author{V.~Ciaschini, P.~Belluomo}
\address{INFN CNAF, Viale Berti Pichat 6/2, 40127, Bologna, Italy}
\address{INFN sezione di Catania, Via Santa Sofia 64, 95123, Catania, Italy}
\begin{abstract}
With the General Data Protection Regulation (GDPR) coming into
force, INFN had to decide how to implement its principles and
requirements. To monitor their application and in general INFN's
compliance with GDPR, INFN created a new group, called ``Compliance
Auditing,'' whose job is to be internal auditors for all structures.
This article describes the startup activity for the group.
\end{abstract}
\section{Compliance Auditing Group}
\subsection{Rationale for creation}
When discussing GDPR application during the Commissione Calcolo e Reti
(CCR) 2018 workshop in Rimini, it became clear that setting up
a set of rules and assuming that all parts of INFN would correctly
follow them was not, by itself, enough. Indeed it was necessary to
comply with the duty of vigilance, which in turn required periodic
checkups.
To address these worries, and to watch over the proper application of the GDPR, it was soon proposed to create a team, which would take the name of ``compliance auditors,'' whose job was to act as internal auditors for all INFN structures and to check on the proper application of the regulations as implemented by INFN.
\subsection{Startup Activity}
Following the proposal to create the group, the first task to solve was how to staff it. Two people with previous experience in setting up ISO compliance structures for some INFN sections volunteered: Patrizia Belluomo (lead auditor, Sezione di Catania) and Vincenzo Ciaschini (CNAF).
The first activity undertaken by the group was the collection and study of all the norms applicable to INFN's implementation of the GDPR: the text of the regulation itself, other applicable Italian legislation, the documents describing INFN's implementation, and several INFN regulations that, while not specifically about the GDPR, govern issues related to it, e.g. data retention policies.
We also had to decide how to structure the audits, and we chose to follow well-known quality assurance principles. To apply these principles, we settled on a set of topics to be investigated during the audits, and on a set of questions that could, but would not necessarily, be asked during the audits themselves, to act as a set of guidelines and to allow INFN structures to prepare properly.
When the group was formally approved, these procedures were presented at the CCR workshop in Pisa in October, and an indicative audit calendar was created and sent to the structures as a proposal for when they would be audited.
Due to budget limitations, it was also decided that, at least for the first year, most of the audits would be done by telepresence, with on-site audits reserved for the sections that had, or would have, the most critical data, i.e. the structures that hosted or would host INFN's Sistema Informativo.
The rest of the year was devoted to refining this organization and to preparing the formal documentation that would be the output of the audits and the procedures to be followed during them; the audits themselves began in earnest on 9 January 2019 and are therefore out of scope for the 2018 Annual Report.
\end{document}
\documentclass[a4paper]{jpconf}
\usepackage{epsfig}
%\usepackage{epstopdf}
\usepackage{graphicx}
\begin{document}
\title{The Cherenkov Telescope Array}
\author{L. Arrabito$^1$, C. Bigongiari$^2$, F. Di Pierro$^3$ and P. Vallania$^{3,4}$}
\address{$^1$ Laboratoire Univers et Particules de Montpellier, Universit\'e de Montpellier II Place Eug\`ene Bataillon - CC 72, CNRS/IN2P3,
F-34095 Montpellier, France}
\address{$^2$ INAF Osservatorio Astronomico di Roma - Via Frascati 33, 00040, Monte Porzio Catone (RM), Italy}
\address{$^3$ INFN Sezione di Torino - Via Pietro Giuria 1, 10125, Torino (TO), Italy}
\address{$^4$ INAF Osservatorio Astrofisico di Torino - Via Pietro Giuria 1, 10125, Torino (TO), Italy}
\ead{arrabito@in2p3.fr, ciro.bigongiari@oa-roma.inaf.it, federico.dipierro@to.infn.it, piero.vallania@to.infn.it}
\begin{abstract}
The Cherenkov Telescope Array (CTA) is an ongoing worldwide project to build a new generation ground based observatory for Very High Energy (VHE) gamma-ray astronomy.
CTA will feature two arrays of Imaging Atmospheric Cherenkov Telescopes (IACTs), one in each Earth hemisphere, to ensure the full sky coverage and will be operated as an open observatory to maximize its scientific yield.
Each array will be composed of tens of IACTs of different sizes to achieve a ten-fold improvement in sensitivity,
with respect to current generation facilities, over an unprecedented energy range extending from a few tens of GeV to hundreds of TeV.
Imaging Cherenkov telescopes have already discovered tens of VHE gamma-ray emitters, providing a wealth of valuable data and clearly demonstrating the power of the imaging Cherenkov technique.
The much higher telescope multiplicity provided by CTA will lead to greatly improved angular and energy resolution, which will permit more accurate morphological and spectral studies of VHE gamma-ray sources. The CTA project therefore combines guaranteed scientific return, in the form of high precision astrophysics, with considerable potential for major discoveries in astrophysics, cosmology and fundamental physics.
\end{abstract}
\section{Introduction}
Since the discovery of the first VHE gamma-ray source, the Crab Nebula \cite{CrabDiscovery}, by the Whipple collaboration in 1989, ground-based gamma-ray astronomy has undergone an impressive development which has led to the discovery of more than 190 gamma-ray sources in less than 30 years \cite{TevCat}.
Whenever a new generation of ground-based gamma-ray observatories came into play, gamma-ray astronomy experienced a major step forward in the number of discovered sources as well as in the comprehension of the astrophysical phenomena involved in the emission of VHE gamma radiation.
Present generation facilities like H.E.S.S. \cite{HESS}, MAGIC \cite{MAGIC} and VERITAS \cite{VERITAS} have already provided a deep insight into the non-thermal processes responsible for the high-energy emission of many astrophysical sources, like Supernova Remnants, Pulsar Wind Nebulae, Micro-quasars and Active Galactic Nuclei, clearly demonstrating the huge physics potential of this field, which is not restricted to pure astrophysical observations but allows significant contributions to particle physics and cosmology too; see \cite{DeNauroisMazin2015,LemoineGoumard2015} for recent reviews. The impressive physics achievements obtained with the present generation instruments, as well as the technological developments regarding mirror production
and new photon-detectors triggered many projects for a new-generation gamma-ray observatory by groups of astroparticle physicists around the world which later merged to form the CTA consortium \cite{CtaConsortium}.
CTA members are carrying out a worldwide effort to provide the scientific community with a state-of-the-art ground-based gamma-ray observatory, allowing the exploration of cosmic radiation in the very high energy range with unprecedented accuracy and sensitivity.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{CTA_ProjectTimeline_Nov2018.eps}
\caption{\label{CtaTimeline} CTA project time line.}
\end{figure}
VHE gamma-rays can be produced in the collision of highly relativistic particles with surrounding gas clouds or in their interaction with low energy photons or magnetic fields. Possible sources of such energetic particles include jets emerging from active galactic nuclei, remnants of supernova explosions, and the environment of rapidly spinning neutron stars. High-energy gamma-rays can also be produced in top-down scenarios by the decay of heavy particles such as hypothetical dark matter candidates or cosmic strings.
The CTA observations will be used for detailed studies of above-mentioned astrophysical sources as well as for fundamental physics measurements, such as the indirect search of dark matter, searches for high energy violation of Lorentz invariance and searches for axion-like particles.
High-energy gamma-rays can be used moreover to trace the populations of high-energy particles, thus providing insightful information about the sources of cosmic rays.
Close cooperation with observatories of other wavelength ranges of the electromagnetic spectrum, and those using cosmic rays, neutrinos and gravitational waves are foreseen.
To achieve full sky coverage the CTA observatory will consist of two arrays of IACTs, one in each Earth hemisphere. The northern array will be placed at the Observatorio del Roque de los Muchachos on La Palma Island, Spain, while the southern array will be located in Chile at the ESO site close to Cerro Paranal.
The two sites were selected after years of careful consideration of extensive studies of the environmental conditions, simulations of the science performance and assessments of construction and operation costs.
Each array will be composed of IACTs of different sizes to achieve an overall ten-fold improvement in sensitivity with respect to current IACT arrays while extending the covered energy range from about 20 GeV to about 300 TeV.
The southern hemisphere array will feature telescopes of three different sizes to cover the full energy range for a detailed investigation of galactic sources, and in particular of the Galactic center, without neglecting observations of extragalactic objects.
The northern hemisphere array instead will consist of telescopes of two different sizes only covering the low energy end of the above-mentioned range (up to some tens of TeV) and will be dedicated mainly to northern extragalactic objects and cosmology studies.
The CTA observatory with its two arrays will be operated by one single consortium and a significant and increasing fraction of the observation time will be open to the general astrophysical community to maximize CTA scientific return.
The CTA project has entered the pre-construction phase. The first Large Size Telescope (LST) was inaugurated in October 2018, according to the schedule (see Fig. \ref{CtaTimeline}), at the La Palma CTA Northern Site. During 2019 the construction of 3 more LSTs will start. In December 2018 another telescope prototype, the Dual Mirror Medium Size Telescope, was also inaugurated at the Whipple Observatory (Arizona, US).
Meanwhile detailed geophysical characterization of the southern site is ongoing and the agreement between the hosting country and the CTA Observatory has been signed.
The first commissioning data from LST1 were acquired at the end of 2018; the first gamma-ray observations are expected in 2019.
The CTA Observatory is expected to become fully operational by 2025, but precursor mini-arrays are expected to be operating already in 2020.
A detailed description of the project and its expected performance can be found in a dedicated volume of the Astroparticle Physics journal \cite{CtaApP}, while an update on the project status can be found in \cite{Ong2017}.
CTA is included in the 2008 roadmap of the European Strategy Forum on Research Infrastructures (ESFRI),
is one of the Magnificent Seven of the European strategy for astroparticle physics by ASPERA,
and is highly ranked in the strategic plan for European astronomy of ASTRONET.
\section{Computing Model}
In the pre-construction phase the available computing resources are used mainly for the simulation of atmospheric showers and their interaction with the Cherenkov telescopes of the CTA arrays to evaluate the expected performance and optimize many construction parameters.
The simulation of the atmospheric shower development, performed with Corsika \cite{Corsika}, is followed by the simulation of the detector response with sim\_telarray \cite{SimTelarray}, a code developed within the CTA consortium.
It is worth noting that, thanks to the very high rejection of the hadronic background achieved with the IACT technique, huge samples of simulated hadronic events are needed to obtain statistically significant estimates of the CTA performance.
About $10^{11}$ cosmic-ray induced atmospheric showers per site are needed to properly estimate the array sensitivity and the energy and angular resolution, resulting in extensive computing needs in terms of both disk space and CPU power. Given these large storage and computing requirements, the Grid approach was chosen to pursue this task: a Virtual Organization for CTA was created in 2008 and is presently supported by 20 EGI sites and one ARC site spread over 7 countries, with more than 3.6 PB of storage, about 7000 available cores on average and usage peaks as high as 12000 concurrent running jobs.
The CTA production system currently in use \cite{Arrabito2015} is based on the DIRAC framework \cite{Dirac}, which was originally developed to support the production activities of the LHCb (Large Hadron Collider Beauty) experiment and is today extensively used by several particle physics and biology communities. DIRAC offers powerful job submission functionalities and can interface with a palette of heterogeneous resources, such as grid sites, cloud sites, HPC centers, computer clusters and volunteer computing platforms. Moreover, DIRAC provides a layer for interfacing with different types of resources, like computing elements, catalogs or storage systems.
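As a simplified, purely illustrative example of how a single production job can be described and submitted with the standard DIRAC command-line tools (the executable, sandbox files and group name below are placeholders, not the actual CTA production configuration):
\begin{verbatim}
# corsika_job.jdl -- hypothetical job description
JobName       = "corsika_example";
Executable    = "run_corsika.sh";
Arguments     = "input_card.txt";
InputSandbox  = {"run_corsika.sh", "input_card.txt"};
OutputSandbox = {"StdOut", "StdErr"};
CPUTime       = 86400;

# submission and follow-up with the DIRAC WMS commands
dirac-proxy-init -g <vo_group>         # obtain a VO proxy
dirac-wms-job-submit corsika_job.jdl   # prints the assigned JobID
dirac-wms-job-status <JobID>
dirac-wms-job-get-output <JobID>
\end{verbatim}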
A massive production of simulated data was carried out in 2018 to estimate the expected performance with improved telescope models and with different night-sky background levels. A simulation campaign dedicated to the detailed comparison of different Small Size Telescope versions was also carried out. Simulated data have been analyzed with two different analysis chains to cross-check the results and have also been used for the development of the new official CTA reconstruction and analysis pipeline.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{cpu-days-used-2018-bysite.eps}
\caption{\label{CPU} CPU power provided in 2018 by Grid sites in the CTA Virtual Organization.}
\end{figure}
About 2.7 million Grid jobs were executed in 2018 for these tasks, corresponding to about 206.4 million HS06 hours of CPU time and 10 PB of data transferred.
CNAF contributed to this effort with about 16.8 million HS06 hours and 790 TB of disk space, corresponding to 8\% of the overall CPU power used and 17\% of the disk space, making it the second largest contributor in terms of storage and the fourth in terms of CPU time (see Fig. \ref{CPU}-\ref{Disk}).
\begin{figure}[ht]
\includegraphics[width=0.8\textwidth]{normalized-cpu-used-2018-bysite-cumulative.eps}
\caption{\label{CPU-cumu} Cumulative normalized CPU used in 2018 by Grid sites in the CTA Virtual Organization.}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=0.8\textwidth]{transfered-data-2018-bysite.eps}
\caption{\label{Disk} Total transferred data in 2018, for the Grid sites in the CTA Virtual Organization.}
\end{figure}
\clearpage
\section*{References}
\begin{thebibliography}{19}
\bibitem{CrabDiscovery} Weekes T C {\it et al.} 1989 ``Observation of TeV gamma rays from the Crab nebula using the atmospheric Cerenkov imaging technique''
{\it ApJ} {\bf 342} 379-95
\bibitem{TevCat} TevCat web page http://tevcat.uchicago.edu
\bibitem{HESS} H.E.S.S. web page https://www.mpi-hd.mpg.de/hfm/HESS/
\bibitem{MAGIC} MAGIC web page https://magic.mppmu.mpg.de
\bibitem{VERITAS} VERITAS web page http://veritas.sao.arizona.edu
\bibitem{DeNauroisMazin2015} de Naurois M and Mazin D 2015 ``Ground-based detectors in very-high-energy gamma-ray astronomy''
Comptes Rendus - Physique {\bf 16} Issue 6-7, 610-27
\bibitem{LemoineGoumard2015} Lemoine-Goumard M 2015 ``Status of ground-based gamma-ray astronomy'' Proc. of the $34^{th}$ International Cosmic Ray Conference (ICRC 2015), 2015, The Hague,
PoS ICRC2015 (2016) 012
\bibitem{CtaConsortium} CTA web page https://www.cta-observatory.org/about/cta-consortium/
\bibitem{CtaApP} Hinton J, Sarkar S, Torres D and Knapp J 2013 ``Seeing the High-Energy Universe with the Cherenkov Telescope Array. The Science Explored with the CTA'' {\it Astropart. Phys.} {\bf 43} 1-356
%\bibitem{Bigongiari2016} Bigongiari C 2016 ``The Cherenkov Telescope Array'' Proc. of Cosmic Ray International Seminar (CRIS2015), %2015, Gallipoli,
% {\it Nucl. Part. Phys. Proc.} {\bf 279–281} 174-81
\bibitem{Ong2017} Ong R A et al. 2017 ``Cherenkov Telescope Array: The Next Generation Gamma-Ray Observatory''
Proc. of 35th Int. Cosmic Ray Conf. - ICRC2017, 10-20 July, 2017, Busan, Korea (arXiv:1709.05434v1)
\bibitem{Corsika} Heck D, Knapp J, Capdevielle J N, Schatz G and Thouw T 1998 ``CORSIKA: a Monte Carlo code to simulate extensive air showers''
Forschungszentrum Karlsruhe GmbH, Karlsruhe (Germany), Feb 1998, V + 90 p., TIB Hannover, D-30167 Hannover (Germany)
\bibitem{SimTelarray} Bernl{\"o}hr K 2008 ``Simulation of imaging atmospheric Cherenkov telescopes with CORSIKA and sim\_telarray'' {\it Astropart. Phys.} {\bf 30} 149-58
\bibitem{Arrabito2015} Arrabito L, Bregeon J, Haupt A, Graciani Diaz R, Stagni F and Tsaregorodtsev A 2015 ``Prototype of a production system for Cherenkov Telescope Array with DIRAC'' Proc. of the $21^{st}$ Int. Conf. on Computing in High Energy and Nuclear Physics (CHEP2015), 2015, Okinawa,
{\it J. Phys.: Conf. Series} {\bf 664} 032001
\bibitem{Dirac} Tsaregorodtsev A {\it et al.} 2014 ``DIRAC Distributed Computing Services'' Proc. of the $20^{th}$ Int. Conf. on Computing in High Energy and Nuclear Physics (CHEP2013)
{\it J. Phys.: Conf. Series} {\bf 513} 032096
\end{thebibliography}
\end{document}
%% This BibTeX bibliography file was created using BibDesk.
%% http://bibdesk.sourceforge.net/
%% Created for Fabio Bellini at 2018-02-24 11:10:52 +0100
%% Saved with string encoding Unicode (UTF-8)
@article{Azzolini:2018tum,
author = "Azzolini, O. and others",
title = "{CUPID-0: the first array of enriched scintillating
bolometers for $0\nu\beta\beta$ decay investigations}",
collaboration = "CUPID",
journal = "Eur. Phys. J.",
volume = "C78",
year = "2018",
number = "5",
pages = "428",
doi = "10.1140/epjc/s10052-018-5896-8",
eprint = "1802.06562",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
SLACcitation = "%%CITATION = ARXIV:1802.06562;%%"
}
@article{Azzolini:2018dyb,
author = "Azzolini, O. and others",
title = "{First Result on the Neutrinoless Double-$\beta$ Decay of
$^{82}Se$ with CUPID-0}",
collaboration = "CUPID-0",
journal = "Phys. Rev. Lett.",
volume = "120",
year = "2018",
number = "23",
pages = "232502",
doi = "10.1103/PhysRevLett.120.232502",
eprint = "1802.07791",
archivePrefix = "arXiv",
primaryClass = "nucl-ex",
SLACcitation = "%%CITATION = ARXIV:1802.07791;%%"
}
@article{Azzolini:2018yye,
author = "Azzolini, O. and others",
title = "{Analysis of cryogenic calorimeters with light and heat
read-out for double beta decay searches}",
journal = "Eur. Phys. J.",
volume = "C78",
year = "2018",
number = "9",
pages = "734",
doi = "10.1140/epjc/s10052-018-6202-5",
eprint = "1806.02826",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
SLACcitation = "%%CITATION = ARXIV:1806.02826;%%"
}
@article{Azzolini:2018oph,
author = "Azzolini, O. and others",
title = "{Search of the neutrino-less double beta decay of$^{82}$
Se into the excited states of$^{82}$ Kr with CUPID-0}",
collaboration = "CUPID",
journal = "Eur. Phys. J.",
volume = "C78",
year = "2018",
number = "11",
pages = "888",
doi = "10.1140/epjc/s10052-018-6340-9",
eprint = "1807.00665",
archivePrefix = "arXiv",
primaryClass = "nucl-ex",
SLACcitation = "%%CITATION = ARXIV:1807.00665;%%"
}
@article{DiDomizio:2018ldc,
author = "Di Domizio, S. and others",
title = "{A data acquisition and control system for large mass
bolometer arrays}",
journal = "JINST",
volume = "13",
year = "2018",
number = "12",
pages = "P12003",
doi = "10.1088/1748-0221/13/12/P12003",
eprint = "1807.11446",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
SLACcitation = "%%CITATION = ARXIV:1807.11446;%%"
}
@article{Beretta:2019bmm,
author = "Beretta, M. and others",
title = "{Resolution enhancement with light/heat decorrelation in
CUPID-0 bolometric detector}",
year = "2019",
eprint = "1901.10434",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
SLACcitation = "%%CITATION = ARXIV:1901.10434;%%"
}
@article{Azzolini:2019nmi,
author = "Azzolini, O. and others",
title = "{Background Model of the CUPID-0 Experiment}",
collaboration = "CUPID",
year = "2019",
eprint = "1904.10397",
archivePrefix = "arXiv",
primaryClass = "nucl-ex",
SLACcitation = "%%CITATION = ARXIV:1904.10397;%%"
}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\bibliographystyle{iopart-num}
%\usepackage{citesort}
\begin{document}
\title{CUPID-0 experiment}
\author{CUPID-0 collaboration}
%\address{}
\ead{stefano.pirro@lngs.infn.it}
\begin{abstract}
With their excellent energy resolution, efficiency, and intrinsic radio-purity, cryogenic calorimeters are primed for the search of neutrino-less double beta decay (0$\nu$DBD).
CUPID-0 is an array of 24 Zn$^{82}$Se scintillating bolometers used to search for 0$\nu$DBD of $^{82}$Se.
It is the first large mass 0$\nu$DBD experiment exploiting a double read-out technique: the heat signal to accurately measure particle energies and the light signal to identify the particle type.
CUPID-0 has been taking data since March 2017 and has obtained several outstanding scientific results.
The CUPID-0 data processing environment configured on the CNAF computing cluster has been used for the analysis of the first period of data taking.
\end{abstract}
\section{The experiment}
Neutrino-less Double Beta Decay (0$\nu$DBD) is a hypothesized nuclear transition in which a nucleus decays emitting only two electrons.
This process cannot be accommodated in the Standard Model, as the absence of emitted neutrinos would violate lepton number conservation.
Among the several experimental approaches proposed for the search of 0$\nu$DBD, cryogenic calorimeters (bolometers) stand out for the possibility of achieving excellent energy resolution ($\sim$0.1\%), efficiency ($\ge$80\%) and intrinsic radio-purity. Moreover, the crystals that are operated as bolometers can be grown starting from most of the 0$\nu$DBD emitters, enabling the test of different nuclei.
The state of the art of the bolometric technique is represented by CUORE, an experiment composed of 988 bolometers for a total mass of 741 kg, presently in data taking at Laboratori Nazionali del Gran Sasso.
The ultimate limit of the CUORE background suppression resides in the presence of $\alpha$-decaying isotopes located in the detector structure.
The CUPID-0 project \cite{Azzolini:2018dyb,Azzolini:2018tum} was conceived to overcome these limits.
The main breakthrough of CUPID-0 is the addition of independent devices to measure the light signals emitted from scintillation in ZnSe bolometers.
The different properties of the light emission of electrons and $\alpha$ particles will enable event-by-event rejection of $\alpha$ interactions, suppressing the overall background in the region of interest for 0$\nu$DBD by at least one order of magnitude.
The detector is composed of 26 ultra-pure ZnSe bolometers of $\sim$500 g each, enriched to 95\% in $^{82}$Se, the 0$\nu$DBD emitter, and faced by Ge-disk light detectors operated as bolometers.
CUPID-0 is hosted in a dilution refrigerator at the Laboratori Nazionali del Gran Sasso and started the data taking in March 2017.
The first scientific run (Phase I) ended in December 2018, collecting 9.95 kg$\times$y of ZnSe exposure.
Such data were used to set new limits on the $^{82}$Se 0$\nu$DBD~\cite{Azzolini:2018dyb,Azzolini:2018oph} and to develop a full background model of the experiment~\cite{Azzolini:2019nmi}.
Phase II will start in June 2019 with an improved detector configuration.
\section{CUPID-0 computing model and the role of CNAF}
The CUPID-0 computing model is similar to the CUORE one, the only differences being the sampling frequency and the working point of the light detector bolometers.
The full data stream is saved in Root files, and a derivative trigger is generated in software with a channel-dependent threshold.
%Raw data are saved in Root files and contain events in correspondence with energy releases occurred in the bolometers.
Each event contains the waveform of the triggering bolometer and those geometrically close to it, plus some ancillary information.
The non-event-based information is stored in a PostgreSQL database that is also accessed by the offline data analysis software.
The data taking is arranged in runs, each run lasting about two days.
Details of the CUPID-0 data acquisition and control system can be found in \cite{DiDomizio:2018ldc}.
Raw data are transferred from the DAQ computers (LNGS) to the permanent storage area (located at CNAF) at the end of each run.
A full copy of data is also preserved on tape.
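As a purely illustrative sketch of such an end-of-run transfer (host names, paths and the choice of rsync are assumptions, since the transfer tool is not specified here):
\begin{verbatim}
#!/bin/bash
# Hypothetical end-of-run copy from a DAQ machine at LNGS to the
# CNAF storage area; hosts and paths are placeholders.
RUN=$1                                  # run number passed by the DAQ
SRC=/daq/data/run_${RUN}/
DST=cupid@storage.cnaf.example.it:/storage/cupid/raw/run_${RUN}/

rsync -av --checksum "${SRC}" "${DST}" \
  && echo "$(date -u) run ${RUN} transferred" >> /daq/log/transfer.log
\end{verbatim}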
The data analysis flow consists of two steps; in the first level analysis, the event-based quantities are evaluated, while in the second level analysis the energy spectra are produced.
The analysis software is organized in sequences.
Each sequence consists of a collection of modules that scan the events in the Root files sequentially, evaluate some relevant quantities and store them back in the events.
The analysis flow consists of several key steps that can be summarized in pulse amplitude estimation, detector gain correction, energy calibration and search for events in coincidence among multiple bolometers.
The new tools developed for CUPID-0 to handle the light signals are introduced in \cite{Azzolini:2018yye,Beretta:2019bmm}.
The main instance of the database was located at CNAF and the full analysis framework was used to analyze data until November 2017. A web page for offline reconstruction monitoring was maintained.
%During 2017 a more intense usage of the CNAF resources is expected, both in terms of computing resourced and storage space.
\section*{References}
\bibliography{cupid-biblio}
\end{document}