contributions/chnet/metadataSchema.png

31.6 KiB

\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{The CMS Experiment at the INFN CNAF Tier1}
\author{Giuseppe Bagliesi}
\address{INFN Sezione di Pisa, L.go B.Pontecorvo 3, 56127 Pisa, Italy}
\ead{giuseppe.bagliesi@cern.ch}
\begin{abstract}
A brief description of the CMS Computing operations during LHC Run~II and their recent developments is given. The CMS utilization of the CNAF Tier-1 is described.
\end{abstract}
\section{Introduction}
The CMS Experiment \cite{CMS-descr} at CERN collects and analyses data from the pp collisions in the LHC Collider.
The first physics Run, at a centre-of-mass energy of 7-8 TeV, started in late March 2010 and ended in February 2013; more than 25~fb$^{-1}$ of collisions were collected during the Run. Run~II, at 13 TeV, started in 2015 and finished at the end of 2018.
During the first two years of Run~II, the LHC was able to largely exceed its design parameters: already in 2016 the instantaneous luminosity reached $1.5\times 10^{34}\mathrm{cm^{-2}s^{-1}}$, 50\% more than the planned ``high luminosity'' LHC phase. The most remarkable achievement, though, is the huge improvement in the fraction of time the LHC can deliver physics collisions, which increased from about 35\% in Run~I to more than 80\% in some months of 2016.
The most visible effect, computing-wise, is a large increase in the amount of data to be stored, processed and analysed offline, with more than 40 fb$^{-1}$ of physics data collected in 2016.
In 2017 CMS recorded more than 46 fb$^{-1}$ of pp collisions, in addition to the data collected during 2016. These data were collected under considerably higher-than-expected pileup conditions, forcing CMS to request luminosity levelling at PU~55 for the first hours of each LHC fill; this challenged both the computing system and CMS analysts, with more complex events to process than foreseen in the modelling. From the computing operations side, higher pileup meant larger events and longer processing times than anticipated in the 2017 planning. As these data-taking conditions affected only the second part of the year, the average 2017 pileup was in line with that used in the CMS resource planning.
2018 was another excellent year for LHC operations and for the luminosity delivered to the experiments. CMS recorded 64 fb$^{-1}$ of pp collisions during 2018, in addition to the 84 fb$^{-1}$ collected during 2016 and 2017. This brings the total luminosity delivered in Run~II to more than 150 fb$^{-1}$, and the total Run~I + Run~II dataset to more than 190 fb$^{-1}$.
\section{Run II computing operations}
During Run~II, the 2004 computing model designed for Run~I has greatly evolved. The MONARC hierarchical division of sites into Tier-0, Tier-1s and Tier-2s is still present, but is less relevant during operations. All simulation, analysis and processing workflows can now be executed at virtually any site, with a full transfer mesh allowing for point-to-point data movement outside the rigid hierarchy.
Remote access to data, using WAN-aware protocols like XrootD and data federations, is used more and more instead of planned data movement, allowing for an easier exploitation of CPU resources.
Opportunistic computing is becoming a key component, with CMS having explored access to HPC systems and commercial clouds, and having gained the capability of running its workflows on virtually any (sizeable) resource it has access to.
In 2018 CMS deployed Singularity \cite{singu} to all sites supporting the CMS VO. Singularity is a container solution which allows CMS to select the OS on a per-job basis and decouples the OS of the worker nodes from that required by the experiments. Sites can set up worker nodes with a Singularity-supported OS and CMS will choose the appropriate OS image for each job.
CMS deployed a new version of the prompt reconstruction software in July 2018, during the LHC MD2 period. This software is adapted to the detector upgrades and data-taking conditions and includes production-level alignment and calibration algorithms. Data collected before this point have now been reprocessed into a fully consistent data set for analysis, in time for the Moriond 2019 conference. Production and distributed analysis activities continued at a very high level throughout 2018. The MC17 campaign, to be used for the Winter and Summer 2018 conferences, continued throughout the year, with decreasing utilization of resources; overall, more than 15 billion events were available by the summer. The equivalent simulation campaign for 2018 data, MC18, started in October 2018 and is now almost completed.
Developments to increase CMS throughput and disk usage efficiency continue. Of particular interest is the development of the NanoAOD data tier as a new alternative for analysis users.
The NanoAOD size per event is approximately 1 kB, 30-50 times smaller than the MiniAOD data tier, and it relies only on simple data types rather than on the hierarchical data format of the CMS MiniAOD (and AOD) data tiers. NanoAOD samples for the 2016, 2017 and 2018 data and the corresponding Monte Carlo simulations have been produced, and are being used in many analyses. NanoAOD is now automatically produced in all the central production campaigns, and fast reprocessing campaigns from MiniAOD to NanoAOD have been tested and are able to achieve more than 4 billion events per day using only a fraction of the CMS resources.
\section{CMS WLCG Resources and expected increase}
The CMS computing model has been used to request resources for the 2018-19 Run~II data taking and reprocessing, with total requests (Tier-0 + Tier-1s + Tier-2s) exceeding 2073 kHS06 of CPU, 172 PB of disk, and 320 PB of tape.
However, the actual pledged resources have been substantially lower than the requests due to budget restrictions from the funding agencies. To reduce the impact of this issue, CMS achieved and deployed several technological advancements, including reducing the needed amount of AOD(SIM) on disk and the amount of simulated RAW events on tape. In addition, some computing resource providers were able to provide more than their pledged level of resources to CMS during 2018.
Thanks to the optimizations and technological improvements described above, it has been possible to tune the CMS computing model accordingly. Year-by-year resource increases, which would have been large with the reference computing model, have been reduced substantially.
Italy contributes to CMS computing with 13\% of the Tier-1 and Tier-2 resources. The increase of the CNAF pledges for 2019 has been reduced by a factor of two with respect to the original request, due to INFN budget limitations, and the remaining increase has been postponed to 2021.
The 2019 pledges are therefore 78 kHS06 of CPU, 8020 TB of disk, and 26 PB of tape.
CMS usage of CNAF is very intense: it is one of the largest Tier-1 sites in CMS in terms of processed hours, second only to the US Tier-1; the same holds for the total number of processed jobs, as shown in Fig.~\ref{cms-jobs}.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth,bb=0 0 900 900]{tier1-jobs-2018.pdf}
\end{center}
\caption{\label{cms-jobs}Jobs processed at CMS Tier1s during 2018}
\end{figure}
\section{The CNAF flood incident}
On November 9th, 2017, a major incident occurred when the CNAF computer center was flooded.
This caused an interruption of all CNAF services and damage to many disk arrays and servers, as well as to the tape library. About 40 damaged tapes (out of a total of 150) belonged to CMS. They contained the unique copy of MC and RECO data. Six tapes contained a second custodial copy of RAW data.
A special recovery procedure was adopted by the CNAF team, through a specialized company, and no data were permanently lost.
The impact of this incident on CMS, although serious, was mitigated thanks to the intrinsic redundancy of our distributed computing model. Other Tier-1s temporarily increased their share to compensate for the CPU loss, deploying the 2018 pledges as soon as possible.
A full recovery of the CMS services at CNAF was achieved by the beginning of March 2018.
It is important to point out that, despite the incident affecting the first months of 2018, the integrated site readiness of CNAF in 2018 was very good, at the same level as or better than the other CMS Tier-1s, as shown in Fig.~\ref{tier1-cms-sr}.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth,bb=0 0 900 900]{tier1-readiness-2018.pdf}
\end{center}
\caption{\label{tier1-cms-sr}Site readiness of CMS Tier1s in 2018}
\end{figure}
\section{Conclusions}
CNAF is an important asset for the CMS Collaboration, being the second Tier1 in terms of resource utilization, pledges and availability.
The unfortunate incident at the end of 2017 was managed professionally and efficiently by the CNAF staff, guaranteeing the fastest possible recovery, with minimal data loss, at the beginning of 2018.
\section*{References}
\begin{thebibliography}{9}
\bibitem{CMS-descr}CMS Collaboration, The CMS experiment at the CERN LHC, JINST 3 (2008) S08004,
doi:10.1088/1748-0221/3/08/S08004.
\bibitem{singu} http://singularity.lbl.gov/
\end{thebibliography}
\end{document}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{The CMS Experiment at the INFN CNAF Tier 1}
\author{Giuseppe Bagliesi$^1$}
\address{$^1$ INFN Sezione di Pisa, Pisa, IT}
\ead{giuseppe.bagliesi@cern.ch}
\begin{abstract}
A brief description of the CMS Computing operations during LHC Run~II and their recent developments is given. The CMS utilization of the CNAF Tier 1 is described.
\end{abstract}
\section{Introduction}
The CMS Experiment \cite{CMS-descr} at CERN collects and analyses data from the pp collisions in the LHC Collider.
The first physics Run, at a center-of-mass energy of 7-8 TeV, started in late March 2010 and ended in February 2013; more than 25~fb$^{-1}$ of collisions were collected during the Run. Run~II, at 13 TeV, started in 2015 and finished at the end of 2018.
During the first two years of Run~II, the LHC was able to largely exceed its design parameters: already in 2016 the instantaneous luminosity reached $1.5\times 10^{34}\mathrm{cm^{-2}s^{-1}}$, 50\% more than the planned ``high luminosity'' LHC phase. The most remarkable achievement, though, is the huge improvement in the fraction of time the LHC can deliver physics collisions, which increased from about 35\% in Run~I to more than 80\% in some months of 2016.
The most visible effect, computing-wise, is a large increase in the amount of data to be stored, processed and analysed offline, with more than 40 fb$^{-1}$ of physics data collected in 2016.
In 2017 CMS recorded more than 46 fb$^{-1}$ of pp collisions, in addition to the data collected during 2016. These data were collected under considerably higher-than-expected pileup conditions, forcing CMS to request luminosity levelling at PU~55 for the first hours of each LHC fill; this challenged both the computing system and CMS analysts, with more complex events to process than foreseen in the modelling. From the computing operations side, higher pileup meant larger events and longer processing times than anticipated in the 2017 planning. As these data-taking conditions affected only the second part of the year, the average 2017 pileup was in line with that used in the CMS resource planning.
2018 was another excellent year for LHC operations and for the luminosity delivered to the experiments. CMS recorded 64 fb$^{-1}$ of pp collisions during 2018, in addition to the 84 fb$^{-1}$ collected during 2016 and 2017. This brings the total luminosity delivered in Run~II to more than 150 fb$^{-1}$, and the total Run~I + Run~II dataset to about 180 fb$^{-1}$.
\section{Run II computing operations}
During Run~II, the 2004 computing model designed for Run~I has greatly evolved. The MONARC hierarchical division of sites into Tier 0, Tier 1s and Tier 2s is still present, but is less relevant during operations. All simulation, analysis and processing workflows can now be executed at virtually any site, with a full transfer mesh allowing for point-to-point data movement outside the rigid hierarchy.
Remote access to data, using WAN-aware protocols like XrootD and data federations, is used more and more instead of planned data movement, allowing for an easier exploitation of CPU resources.
Opportunistic computing is becoming a key component, with CMS having explored access to HPC systems and commercial clouds, and having gained the capability of running its workflows on virtually any (sizeable) resource it has access to.
In 2018 CMS deployed Singularity \cite{singu} to all sites supporting the CMS VO. Singularity is a container solution which allows CMS to select the OS on a per-job basis and decouples the OS of the worker nodes from that required by the experiments. Sites can set up worker nodes with a Singularity-supported OS and CMS will choose the appropriate OS image for each job.
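As a purely illustrative sketch of this mechanism (the image paths, the job fields and the wrapper itself are hypothetical placeholders, not the actual CMS pilot code), a per-job OS selection could look as follows:
\begin{verbatim}
# Illustrative sketch only (not the CMS pilot code): pick a Singularity
# image matching the OS requested by the job and run the payload in it.
# Image paths and the "required_os" field are hypothetical placeholders.
import subprocess

OS_IMAGES = {
    "rhel6": "/cvmfs/singularity.opensciencegrid.org/cmssw/cms:rhel6",
    "rhel7": "/cvmfs/singularity.opensciencegrid.org/cmssw/cms:rhel7",
}

def run_payload(job):
    image = OS_IMAGES[job["required_os"]]
    cmd = ["singularity", "exec", "--bind", "/cvmfs", image,
           "/bin/bash", job["payload_script"]]
    return subprocess.run(cmd, check=True)

# run_payload({"required_os": "rhel7",
#              "payload_script": "cmsRun_wrapper.sh"})
\end{verbatim}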
CMS deployed a new version of the prompt reconstruction software in July 2018, during the LHC Machine Development 2 period. This software is adapted to the detector upgrades and data-taking conditions and includes production-level alignment and calibration algorithms. Data collected before this point have now been reprocessed into a fully consistent data set for analysis, in time for the Moriond 2019 conference. Production and distributed analysis activities continued at a very high level throughout 2018. The MC17 campaign, to be used for the Winter and Summer 2018 conferences, continued throughout the year, with decreasing utilization of resources; overall, more than 15 billion events were available by the summer. The equivalent simulation campaign for 2018 data, MC18, started in October 2018 and is now almost completed.
Developments to increase CMS throughput and disk usage efficiency continue. Of particular interest is the development of the NanoAOD data tier as a new alternative for analysis users.
The NanoAOD size per event is approximately 1 kB, 30-50 times smaller than the MiniAOD data tier, and it relies only on simple data types rather than on the hierarchical data format of the CMS MiniAOD (and AOD) data tiers. NanoAOD samples for the 2016, 2017 and 2018 data and the corresponding Monte Carlo simulations have been produced, and are being used in many analyses. NanoAOD is now automatically produced in all the central production campaigns, and fast reprocessing campaigns from MiniAOD to NanoAOD have been tested and are able to achieve more than 4 billion events per day using only a fraction of the CMS resources.
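Because NanoAOD stores only flat branches of simple types, a handful of columns can be read directly with generic tools, locally or remotely over XRootD. The following sketch uses the uproot Python library; the file URL and branch names are placeholders chosen for illustration, not a real CMS dataset:
\begin{verbatim}
# Sketch: read a few columns from a flat NanoAOD-like "Events" tree,
# either from a local file or via an XRootD URL. The URL and branch
# names below are placeholders.
import uproot

path = "root://xrootd.example.org//store/user/nanoaod_example.root"
with uproot.open(path) as f:
    events = f["Events"]
    arrays = events.arrays(["Muon_pt", "Muon_eta"], library="np")
    print(len(arrays["Muon_pt"]), "events read")
\end{verbatim}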
\section{CMS WLCG Resources and expected increase}
The CMS computing model has been used to request resources for the 2018-19 Run~II data taking and reprocessing, with total requests (Tier 0 + Tier 1s + Tier 2s) exceeding 2073 kHS06 of CPU, 172 PB of disk, and 320 PB of tape.
However, the actual pledged resources have been substantially lower than the requests due to budget restrictions from the funding agencies. To reduce the impact of this issue, CMS achieved and deployed several technological advancements, including reducing the needed amount of AOD(SIM) on disk and the amount of simulated RAW events on tape. In addition, some computing resource providers were able to provide more than their pledged level of resources to CMS during 2018.
Thanks to the optimizations and technological improvements described above, it has been possible to tune the CMS computing model accordingly. Year-by-year resource increases, which would have been large with the reference computing model, have been reduced substantially.
Italy contributes to CMS computing with 13\% of the Tier 1 and Tier 2 resources. The increase of the CNAF pledges for 2019 has been reduced by a factor of two with respect to the original request, due to INFN budget limitations, and the remaining increase has been postponed to 2021.
The 2019 total pledges are therefore 78 kHS06 of CPU, 8020 TB of disk, and 26 PB of tape.
CMS usage of CNAF is very intense: it is one of the largest Tier 1 sites in CMS in terms of processed hours, second only to the US Tier 1; the same holds for the total number of processed jobs, as shown in Fig.~\ref{cms-jobs}.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth,bb=0 0 900 900]{tier1-jobs-2018.pdf}
\end{center}
\caption{\label{cms-jobs}Jobs processed at CMS Tier 1 sites during 2018}
\end{figure}
\section{The CNAF flood incident}
On November 9th, 2017, a major incident occurred when the CNAF computer center was flooded.
This caused an interruption of all CNAF services and damage to many disk arrays and servers, as well as to the tape library. About 40 damaged tapes (out of a total of 150) belonged to CMS. They contained the unique copy of MC and RECO data. Six tapes contained a second custodial copy of RAW data.
A special recovery procedure was adopted by the CNAF team, through a specialized company, and no data were permanently lost.
The impact of this incident on CMS, although serious, was mitigated thanks to the intrinsic redundancy of our distributed computing model. Other Tier 1 sites temporarily increased their share to compensate for the CPU loss, deploying the 2018 pledges as soon as possible.
A full recovery of the CMS services at CNAF was achieved by the beginning of March 2018.
It is important to point out that, despite the incident affecting the first months of 2018, the integrated site readiness of CNAF in 2018 was very good, at the same level as or better than the other CMS Tier 1 sites, as shown in Fig.~\ref{tier1-cms-sr}.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth,bb=0 0 900 900]{tier1-readiness-2018.pdf}
\end{center}
\caption{\label{tier1-cms-sr}Site readiness of CMS Tier 1s in 2018}
\end{figure}
\section{Conclusions}
CNAF is an important asset for the CMS Collaboration, being the second Tier 1 in terms of resource utilization, pledges and availability.
The unfortunate incident at the end of 2017 was managed professionally and efficiently by the CNAF staff, guaranteeing the fastest possible recovery, with minimal data loss, at the beginning of 2018.
\section*{References}
\begin{thebibliography}{9}
\bibitem{CMS-descr}CMS Collaboration, The CMS experiment at the CERN LHC, JINST 3 (2008) S08004,
doi:10.1088/1748-0221/3/08/S08004.
\bibitem{singu} http://singularity.lbl.gov/
\end{thebibliography}
\end{document}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{eurosym}
\begin{document}
\title{CNAF Provisioning system: Puppet 5 upgrade}
\author{
S. Bovina$^1$,
D. Michelotto$^1$,
E. Fattibene$^1$,
A. Falabella$^1$,
A. Chierici$^1$
}
\address{$^1$ INFN-CNAF, Bologna, IT}
\ead{
stefano.bovina@cnaf.infn.it,
diego.michelotto@cnaf.infn.it,
enrico.fattibene@cnaf.infn.it,
antonio.falabella@cnaf.infn.it,
andrea.chierici@cnaf.infn.it
}
\begin{abstract}
Since 2015 CNAF departments can take advantage of a common provisioning system based on Foreman/Puppet to install and configure heterogeneous sets of physical and virtual machines.
During 2017 and 2018, the CNAF provisioning system, previously based on Puppet~\cite{ref:puppet} version 3, has been upgraded, since that version reached its end of life on 31/12/2016.
Due to other higher-priority tasks, the start of this activity was postponed to 2017.
In this report we describe the activities that have been carried out in order to finalize the migration from Puppet 3 to Puppet 5.
\end{abstract}
\section{Provisioning at CNAF}
The installation and configuration activity, in a big computing center like CNAF, must take into account the size of the resources
(roughly a thousand nodes to manage), the heterogeneity of the systems (virtual vs. physical nodes, computing nodes and different types of servers)
and the different working groups in charge of their management.
To meet this challenge, CNAF implemented a single solution, adopted by all the departments,
based on two well-known open-source technologies: Foreman~\cite{ref:foreman} for the initial installation, and Puppet for the configuration.
\newline
Due to the importance of this infrastructure, it is crucial to upgrade it while minimizing the impact on production systems, thus
preventing service disruptions or broken configurations.
To achieve this, we have worked on the Puppet test suite based on the RSpec~\cite{ref:rspec} and rspec-puppet~\cite{ref:rspec-puppet} tools.
\section{Puppet 5 upgrade: finalization}
Going from Puppet 3 to Puppet 5 implies a major software upgrade, with many configuration and functionality changes.
For this reason, the main activity in 2017 was the setup of automated tests and the development of specific utilities for Puppet modules in order to prepare the migration.
During 2017 we prepared and documented a detailed procedure for the upgrade of all resources administered by the different CNAF departments, and several tests were performed
in order to minimize the risk of issues in production.
At the beginning of 2018 we started the upgrade of the production environment, which consisted of the following steps:
\begin{itemize}
\item Upgrade Foreman to version 1.16.0 in order to support Puppet 5;
\item Ensure that every client configuration contains the ca\_server entry;
\item Ensure that every client is at version 3.8.7 in order to support Puppet 5;
\item Deploy a Puppet 5 CA as a replacement for the Puppet 3 CA;
\item Deploy Puppet 5 workers (3 Puppetserver instances) in addition to the existing ones;
\item Update the server configuration entry (from Puppet 3 to 5) on every client while keeping the client at version 3.8.7;
\item Upgrade the Puppet client to version 5;
\item Remove the old Puppet 3 infrastructure;
\item Upgrade the Puppet Forge modules to newer versions (not Puppet 3 compatible).
\end{itemize}
By the end of 2018, all of the roughly 1500 hosts administered through the CNAF provisioning system were able to exploit the upgraded infrastructure.
\section{Future work}
Currently, we are finalizing the upgrade of the Puppet Forge modules. Once all modules are updated, Foreman will be updated to the latest version and tests for the Puppet 6 migration will begin immediately after.
\section{References}
\begin{thebibliography}{1}
\bibitem{ref:puppet} Puppet webpage: https://puppet.com/
\bibitem{ref:foreman} The Foreman webpage: https://theforeman.org/
\bibitem{ref:rspec} The RSpec webpage: http://rspec.info/
\bibitem{ref:rspec-puppet} The rspec-puppet webpage: http://rspec-puppet.com/
\end{thebibliography}
\end{document}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{eurosym}
\begin{document}
\title{CNAF Provisioning system: Puppet 5 upgrade}
\author{
Stefano Bovina$^1$,
Diego Michelotto$^1$,
Enrico Fattibene$^1$,
Antonio Falabella$^1$,
Andrea Chierici$^1$
}
\address{$^1$ INFN CNAF, Viale Berti Pichat 6/2, 40126, Bologna, Italy}
\ead{
stefano.bovina@cnaf.infn.it,
diego.michelotto@cnaf.infn.it,
enrico.fattibene@cnaf.infn.it,
antonio.falabella@cnaf.infn.it,
andrea.chierici@cnaf.infn.it
}
\begin{abstract}
Since 2015 CNAF departments can take advantage of a common provisioning system based on Foreman/Puppet to install and configure heterogeneous sets of physical and virtual machines.
During 2017 and 2018, the CNAF provisioning system, previously based on Puppet~\cite{ref:puppet} version 3, has been upgraded, since that version reached its end of life on 31/12/2016.
Due to other higher-priority tasks, the start of this activity was postponed to 2017.
In this report we describe the activities that have been carried out in order to finalize the migration from Puppet 3 to Puppet 5.
\end{abstract}
\section{Provisioning at CNAF}
The installation and configuration activity, in a big computing centre like CNAF, must take into account the size of the resources
(roughly a thousand nodes to manage), the heterogeneity of the systems (virtual vs. physical nodes, computing nodes and different types of servers)
and the different working groups in charge of their management.
To meet this challenge, CNAF implemented a single solution, adopted by all the departments,
based on two well-known open-source technologies: Foreman~\cite{ref:foreman} for the initial installation, and Puppet for the configuration.
\newline
Due to the importance of this infrastructure, it is crucial to upgrade it while minimizing the impact on production systems, thus
preventing service disruptions or broken configurations.
To achieve this, we have worked on the Puppet test suite based on the RSpec~\cite{ref:rspec} and rspec-puppet~\cite{ref:rspec-puppet} tools.
\section{Puppet 5 upgrade: finalization}
Going from Puppet 3 to Puppet 5 implies a major software upgrade, with many configuration and functionality changes.
For this reason, the main activity in 2017 was the setup of automated tests and the development of specific utilities for Puppet modules in order to prepare the migration.
During 2017 we prepared and documented a detailed procedure for the upgrade of all resources administered by the different CNAF departments, and several tests were performed
in order to minimize the risk of issues in production.
At the beginning of 2018 we started the upgrade of the production environment, which consisted of the following steps (a sketch of how the client-side checks could be automated is shown after the list):
\begin{itemize}
\item Upgrade Foreman to version 1.16.0 in order to support Puppet 5;
\item Ensure that every client configuration contains the ca\_server entry;
\item Ensure that every client is at version 3.8.7 in order to support Puppet 5;
\item Deploy a Puppet 5 CA as a replacement for the Puppet 3 CA;
\item Deploy Puppet 5 workers (3 Puppetserver instances) in addition to the existing ones;
\item Update the server configuration entry (from Puppet 3 to 5) on every client while keeping the client at version 3.8.7;
\item Upgrade the Puppet client to version 5;
\item Remove the old Puppet 3 infrastructure;
\item Upgrade the Puppet Forge modules to newer versions (not Puppet 3 compatible).
\end{itemize}
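A minimal sketch of how the per-client checks and the repointing step listed above could be scripted is given below. It assumes a Puppet 3 agent keeping its configuration in /etc/puppet/puppet.conf, and the master host names are hypothetical; this is not the tool actually used at CNAF.
\begin{verbatim}
# Sketch only: verify that a Puppet 3.8.7 agent has a ca_server entry,
# then repoint it to (hypothetical) Puppet 5 workers. Paths and host
# names are assumptions, not the actual CNAF configuration.
import configparser
import subprocess

PUPPET_CONF = "/etc/puppet/puppet.conf"
REQUIRED_AGENT = "3.8.7"
NEW_MASTERS = {"server": "puppet5-master.example.org",
               "ca_server": "puppet5-ca.example.org"}

def check_client():
    cfg = configparser.ConfigParser(interpolation=None)
    cfg.read(PUPPET_CONF)
    version = subprocess.run(["puppet", "--version"],
                             capture_output=True, text=True).stdout.strip()
    has_ca = any(cfg.has_option(s, "ca_server")
                 for s in ("main", "agent") if cfg.has_section(s))
    ok = version.startswith(REQUIRED_AGENT) and has_ca
    print(f"agent={version} ca_server={has_ca} ready={ok}")
    return ok

def point_to_puppet5():
    # Repoint the agent to the new masters while keeping version 3.8.7.
    cfg = configparser.ConfigParser(interpolation=None)
    cfg.read(PUPPET_CONF)
    if not cfg.has_section("agent"):
        cfg.add_section("agent")
    for key, value in NEW_MASTERS.items():
        cfg.set("agent", key, value)
    with open(PUPPET_CONF, "w") as fh:
        cfg.write(fh)
\end{verbatim}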
By the end of 2018, all of the roughly 1500 hosts administered through the CNAF provisioning system were able to exploit the upgraded infrastructure.
\section{Future work}
Currently, we are finalizing the upgrade of the Puppet Forge modules. Once all modules are updated, Foreman will be updated to the latest version and tests for the Puppet 6 migration will begin immediately after.
\section{References}
\begin{thebibliography}{1}
\bibitem{ref:puppet} Puppet webpage: https://puppet.com/
\bibitem{ref:foreman} The Foreman webpage: https://theforeman.org/
\bibitem{ref:rspec} The RSpec webpage: http://rspec.info/
\bibitem{ref:rspec-puppet} The rspec-puppet webpage: http://rspec-puppet.com/
\end{thebibliography}
\end{document}
\documentclass[a4paper]{jpconf}
\usepackage{epsfig}
%\usepackage{epstopdf}
\usepackage{graphicx}
\begin{document}
\title{The Cherenkov Telescope Array}
\author{L. Arrabito$^1$, C. Bigongiari$^2$, F. Di Pierro$^3$, P. Vallania$^{3,4}$}
\address{$^1$ Laboratoire Univers et Particules de Montpellier et Universit\'e de Montpellier II, Montpellier, FR}
\address{$^2$ INAF Osservatorio Astronomico di Roma, Monte Porzio Catone (RM), IT}
\address{$^3$ INFN Sezione di Torino, Torino, IT}
\address{$^4$ INAF Osservatorio Astrofisico di Torino, Torino, IT}
\ead{arrabito@in2p3.fr, ciro.bigongiari@oa-roma.inaf.it, federico.dipierro@to.infn.it, piero.vallania@to.infn.it}
\begin{abstract}
The Cherenkov Telescope Array (CTA) is an ongoing worldwide project to build a new-generation ground-based observatory for Very High Energy (VHE) gamma-ray astronomy.
CTA will feature two arrays of Imaging Atmospheric Cherenkov Telescopes (IACTs), one in each Earth hemisphere, to ensure full sky coverage, and will be operated as an open observatory to maximize its scientific yield.
Each array will be composed of tens of IACTs of different sizes to achieve a ten-fold improvement in sensitivity,
with respect to current-generation facilities, over an unprecedented energy range which extends from a few tens of GeV to about one hundred TeV.
Imaging Cherenkov telescopes have already discovered tens of VHE gamma-ray emitters, providing plenty of valuable data and clearly demonstrating the power of the imaging Cherenkov technique.
The much higher telescope multiplicity provided by CTA will lead to highly improved angular and energy resolution, which will permit more accurate morphological and spectral studies of VHE gamma-ray sources. The CTA project therefore combines guaranteed scientific return, in the form of high-precision astrophysics, with considerable potential for major discoveries in astrophysics, cosmology and fundamental physics.
\end{abstract}
\section{Introduction}
Since the discovery of the first VHE gamma-ray source, the Crab Nebula \cite{CrabDiscovery}, by the Whipple collaboration in 1989, ground-based gamma-ray astronomy has undergone an impressive development which has led to the discovery of more than 190 gamma-ray sources in less than 30 years \cite{TevCat}.
Whenever a new generation of ground-based gamma-ray observatories came into play, gamma-ray astronomy experienced a major step forward in the number of discovered sources as well as in the comprehension of the astrophysical phenomena involved in the emission of VHE gamma radiation.
Present-generation facilities like H.E.S.S. \cite{HESS}, MAGIC \cite{MAGIC} and VERITAS \cite{VERITAS} have already provided a deep insight into the non-thermal processes responsible for the high-energy emission of many astrophysical sources, like Supernova Remnants, Pulsar Wind Nebulae, Micro-quasars and Active Galactic Nuclei, clearly demonstrating the huge physics potential of this field, which is not restricted to pure astrophysical observations, but allows significant contributions to particle physics and cosmology too; see \cite{DeNauroisMazin2015,LemoineGoumard2015} for recent reviews. The impressive physics achievements obtained with the present-generation instruments, as well as the technological developments regarding mirror production
and new photon detectors, triggered many projects for a new-generation gamma-ray observatory by groups of astroparticle physicists around the world, which later merged to form the CTA consortium \cite{CtaConsortium}.
CTA members are carrying out a worldwide effort to provide the scientific community with a state-of-the-art ground-based gamma-ray observatory, allowing exploration of cosmic radiation in the very high energy range with unprecedented accuracy and sensitivity.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{CTA_ProjectTimeline_Nov2018.eps}
\caption{\label{CtaTimeline} CTA project time line.}
\end{figure}
VHE gamma-rays can be produced in the collision of highly relativistic particles with surrounding gas clouds or in their interaction with low energy photons or magnetic fields. Possible sources of such energetic particles include jets emerging from active galactic nuclei, remnants of supernova explosions, and the environment of rapidly spinning neutron stars. High-energy gamma-rays can also be produced in top-down scenarios by the decay of heavy particles such as hypothetical dark matter candidates or cosmic strings.
The CTA observations will be used for detailed studies of above-mentioned astrophysical sources as well as for fundamental physics measurements, such as the indirect search of dark matter, searches for high energy violation of Lorentz invariance and searches for axion-like particles.
High-energy gamma-rays can be used moreover to trace the populations of high-energy particles, thus providing insightful information about the sources of cosmic rays.
Close cooperation with observatories covering other wavelength ranges of the electromagnetic spectrum, and with those using cosmic rays, neutrinos and gravitational waves, is foreseen.
To achieve full sky coverage, the CTA observatory will consist of two arrays of IACTs, one in each Earth hemisphere. The northern array will be placed at the Observatorio del Roque de Los Muchachos on La Palma Island, Spain, while the southern array will be located in Chile at the ESO site close to Cerro Paranal.
The two sites were selected after years of careful consideration of extensive studies of the environmental conditions, simulations of the science performance and assessments of construction and operation costs.
Each array will be composed of IACTs of different sizes to achieve an overall ten-fold improvement in sensitivity with respect to current IACT arrays, while extending the covered energy range from about 20 GeV to about 300 TeV.
The southern hemisphere array will feature telescopes of three different sizes to cover the full energy range for a detailed investigation of galactic sources, and in particular of the Galactic center, without neglecting observations of extragalactic objects.
The northern hemisphere array will instead consist of telescopes of only two different sizes, covering the low-energy end of the above-mentioned range (up to some tens of TeV), and will be dedicated mainly to northern extragalactic objects and cosmology studies.
The CTA observatory with its two arrays will be operated by one single consortium and a significant and increasing fraction of the observation time will be open to the general astrophysical community to maximize CTA scientific return.
The CTA project has entered the pre-construction phase. The first Large Size Telescope (LST) was inaugurated in October 2018, according to the schedule (see Fig.~\ref{CtaTimeline}), at the La Palma CTA northern site. During 2019 the construction of three more LSTs will start. In December 2018 another telescope prototype, the Dual-Mirror Medium Size Telescope, was also inaugurated at the Whipple Observatory (Arizona, US).
Meanwhile detailed geophysical characterization of the southern site is ongoing and the agreement between the hosting country and the CTA Observatory has been signed.
The first commissioning data from LST1 started to be acquired at the end of 2018; the first gamma-ray observations are expected in 2019.
The CTA Observatory is expected to become fully operational by 2025, but precursor mini-arrays are expected to operate already in 2020.
A detailed description of the project and its expected performance can be found in a dedicated volume of the Astroparticle Physics journal \cite{CtaApP}, while an update on the project status can be found in \cite{Ong2017}.
CTA is included in the 2008 roadmap of the European Strategy Forum on Research Infrastructures (ESFRI),
is one of the ``Magnificent Seven'' of the European strategy for astroparticle physics by ASPERA,
and is highly ranked in the strategic plan for European astronomy of ASTRONET.
\section{Computing Model}
In the pre-construction phase the available computing resources are used mainly for the simulation of atmospheric showers and their interaction with the Cherenkov telescopes of the CTA arrays to evaluate the expected performance and optimize many construction parameters.
The simulation of the atmospheric shower development, performed with Corsika \cite{Corsika}, is followed by the simulation of the detector response with sim\_telarray \cite{SimTelarray}, a code developed within the CTA consortium.
It is worth noting that, thanks to the very high rejection of the hadronic background achieved with the IACT technique, huge samples of simulated hadronic events are needed to achieve statistically significant estimates of the CTA performance.
About $10^{11}$ cosmic-ray-induced atmospheric showers for each site are needed to properly estimate the array sensitivity and the energy and angular resolution, implying extensive computing needs in terms of both disk space and CPU power. Given these large storage and computing requirements, the Grid approach was chosen to pursue this task: a Virtual Organization for CTA was created in 2008 and is presently supported by 20 EGI sites and one ARC site spread over 7 countries, with more than 3.6 PB of storage, about 7000 available cores on average and usage peaks as high as 12000 concurrent running jobs.
The CTA production system currently in use \cite{Arrabito2015} is based on the DIRAC framework \cite{Dirac}, which has been originally developed to support the production activities of the LHCb (Large Hadron Collider Beauty) experiment and today is extensively used by several particle physics and biology communities. DIRAC offers powerful job submission functionalities and can interface with a palette of heterogeneous resources, such as grid sites, cloud sites, HPC centers, computer clusters and volunteer computing platforms. Moreover, DIRAC provides a layer for interfacing with different types of resources, like computing elements, catalogs or storage systems.
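As a purely illustrative sketch of this submission model (the executable, sandbox content and CPU-time request are placeholders, not the real CTA production configuration), a single job could be submitted through the DIRAC client API roughly as follows:
\begin{verbatim}
# Sketch of a single-job submission with the DIRAC client API; the
# executable, sandboxes and CPU time are placeholders, not the actual
# CTA production workflow.
from DIRAC.Core.Base import Script
Script.parseCommandLine(ignoreErrors=True)  # initialize DIRAC

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("corsika_simtel_example")
job.setExecutable("run_simulation.sh",
                  arguments="--site paranal --nshowers 1000")
job.setInputSandbox(["run_simulation.sh"])
job.setOutputSandbox(["*.log"])
job.setCPUTime(86400)  # requested CPU time in seconds

print(Dirac().submitJob(job))
\end{verbatim}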
A massive production of simulated data was carried out in 2018 to estimate the expected performance with improved telescope models and with different night-sky background levels. A simulation dedicated to the detailed comparison of different Small Size Telescope versions was also carried out. The simulated data have been analyzed with two different analysis chains to cross-check the results, and have also been used for the development of the new official CTA reconstruction and analysis pipeline.
\begin{figure}[ht]
\includegraphics[width=0.8\textwidth]{cpu-days-used-2018-bysite.eps}
\caption{\label{CPU} CPU power provided in 2018 by Grid sites in the CTA Virtual Organization.}
\end{figure}
About 2.7 million Grid jobs were executed in 2018 for this task, corresponding to about 206.4 million HS06 hours of CPU power and 10 PB of data transferred.
CNAF contributed to this effort with about 16.8 million HS06 hours and 790 TB of disk space, corresponding to 8\% of the overall CPU power used and 17\% of the disk space, making it the second contributor in terms of storage and the fourth in terms of CPU time (see Figs.~\ref{CPU}--\ref{Disk}).
\begin{figure}[ht]
\includegraphics[width=0.8\textwidth]{normalized-cpu-used-2018-bysite-cumulative.eps}
\caption{\label{CPU-cumu} Cumulative normalized CPU used in 2018 by Grid sites in the CTA Virtual Organization.}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=0.8\textwidth]{transfered-data-2018-bysite.eps}
\caption{\label{Disk} Total transferred data in 2018, for the Grid sites in the CTA Virtual Organization.}
\end{figure}
\clearpage
\section*{References}
\begin{thebibliography}{19}
\bibitem{CrabDiscovery} Weekes T C {\it et al.} 1989 ``Observation of TeV gamma rays from the Crab nebula using the atmospheric Cerenkov imaging technique''
{\it ApJ} {\bf 342} 379-95
\bibitem{TevCat} TevCat web page http://tevcat.uchicago.edu
\bibitem{HESS} H.E.S.S. web page https://www.mpi-hd.mpg.de/hfm/HESS/
\bibitem{MAGIC} MAGIC web page https://magic.mppmu.mpg.de
\bibitem{VERITAS} VERITAS web page http://veritas.sao.arizona.edu
\bibitem{DeNauroisMazin2015} de Naurois M and Mazin D ``Ground-based detectors in very-high-energy gamma-ray astronomy''
Comptes Rendus - Physique {\bf 16} Issue 6-7, 610-27
\bibitem{LemoineGoumard2015} Lemoine-Goumard M 2015 ``Status of ground-based gamma-ray astronomy'' Proc. of the $34^{th}$ International Cosmic Ray Conference, 2015, The Hague,
PoS ICRC2015 (2016) 012
\bibitem{CtaConsortium} CTA web page https://www.cta-observatory.org/about/cta-consortium/
\bibitem{CtaApP} Hinton J, Sarkar S, Torres D and Knapp J 2013 ``Seeing the High-Energy Universe with the Cherenkov Telescope Array. The Science Explored with the CTA'' {\it Astropart. Phys.} {\bf 43} 1-356
%\bibitem{Bigongiari2016} Bigongiari C 2016 ``The Cherenkov Telescope Array'' Proc. of Cosmic Ray International Seminar (CRIS2015), %2015, Gallipoli,
% {\it Nucl. Part. Phys. Proc.} {\bf 279–281} 174-81
\bibitem{Ong2017} Ong R A et al. 2017 ``Cherenkov Telescope Array: The Next Generation Gamma-Ray Observatory''
Proc. of 35th Int. Cosmic Ray Conf. - ICRC2017, 10-20 July, 2017, Busan, Korea (arXiv:1709.05434v1)
\bibitem{Corsika} Heck D, Knapp J, Capdevielle J N, Schatz G and Thouw T 1998 ``CORSIKA: a Monte Carlo code to simulate extensive air showers''
Forschungszentrum Karlsruhe GmbH, Karlsruhe (Germany), Feb 1998, V + 90 p., TIB Hannover, D-30167 Hannover (Germany)
\bibitem{SimTelarray} Bernl{\"o}hr K 2008 ``Simulation of imaging atmospheric Cherenkov telescopes with CORSIKA and sim\_telarray'' {\it Astropart. Phys.} {\bf 30} 149-58
\bibitem{Arrabito2015} Arrabito L, Bregeon J, Haupt A, Graciani Diaz R, Stagni F and Tsaregorodtsev A 2015 ``Prototype of a production system for Cherenkov Telescope Array with DIRAC'' Proc. of the $21^{st}$ Int. Conf. on Computing in High Energy and Nuclear Physics (CHEP2015), 2015, Okinawa,
{\it J. Phys.: Conf. Series} {\bf 664} 032001
\bibitem{Dirac} Tsaregorodtsev A {\it et al.} 2014 ``DIRAC Distributed Computing Services'' Proc. of the $20^{th}$ Int. Conf. on Computing in High Energy and Nuclear Physics (CHEP2013)
{\it J. Phys.: Conf. Series} {\bf 513} 032096
\end{thebibliography}
\end{document}
%% This BibTeX bibliography file was created using BibDesk.
%% http://bibdesk.sourceforge.net/
%% Created for Fabio Bellini at 2017-02-28 14:54:59 +0100
%% Saved with string encoding Unicode (UTF-8)
@article{Alduino:2017ehq,
author = "Alduino, C. and others",
title = "{First Results from CUORE: A Search for Lepton Number
Violation via $0\nu\beta\beta$ Decay of $^{130}$Te}",
collaboration = "CUORE",
journal = "Phys. Rev. Lett.",
volume = "120",
year = "2018",
number = "13",
pages = "132501",
doi = "10.1103/PhysRevLett.120.132501",
eprint = "1710.07988",
archivePrefix = "arXiv",
primaryClass = "nucl-ex",
SLACcitation = "%%CITATION = ARXIV:1710.07988;%%"
}
@article{Alduino:2016vtd,
Archiveprefix = {arXiv},
Author = {Alduino, C. and others},
Collaboration = {CUORE},
Date-Added = {2017-02-28 13:49:12 +0000},
Date-Modified = {2017-02-28 13:49:12 +0000},
Doi = {10.1140/epjc/s10052-016-4498-6},
Eprint = {1609.01666},
Journal = {Eur. Phys. J.},
Number = {1},
Pages = {13},
Primaryclass = {nucl-ex},
Slaccitation = {%%CITATION = ARXIV:1609.01666;%%},
Title = {{Measurement of the two-neutrino double-beta decay half-life of$^{130}$ Te with the CUORE-0 experiment}},
Volume = {C77},
Year = {2017},
Bdsk-Url-1 = {http://dx.doi.org/10.1140/epjc/s10052-016-4498-6}}
@article{Artusa:2014lgv,
Archiveprefix = {arXiv},
Author = {Artusa, D.R. and others},
Collaboration = {CUORE},
Doi = {10.1155/2015/879871},
Eprint = {1402.6072},
Journal = {Adv.High Energy Phys.},
Pages = {879871},
Primaryclass = {physics.ins-det},
Slaccitation = {%%CITATION = ARXIV:1402.6072;%%},
Title = {{Searching for neutrinoless double-beta decay of $^{130}$Te with CUORE}},
Volume = {2015},
Year = {2015},
Bdsk-Url-1 = {http://dx.doi.org/10.1155/2015/879871}}
@inproceedings{Adams:2018nek,
author = "Adams, D. Q. and others",
title = "{Update on the recent progress of the CUORE experiment}",
booktitle = "{28th International Conference on Neutrino Physics and
Astrophysics (Neutrino 2018) Heidelberg, Germany, June
4-9, 2018}",
collaboration = "CUORE",
url = "https://doi.org/10.5281/zenodo.1286904",
year = "2018",
eprint = "1808.10342",
archivePrefix = "arXiv",
primaryClass = "nucl-ex",
SLACcitation = "%%CITATION = ARXIV:1808.10342;%%"
}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\bibliographystyle{iopart-num}
%\usepackage{citesort}
\begin{document}
\title{CUORE experiment}
\author{CUORE collaboration}
%\address{}
\ead{cuore-spokesperson@lngs.infn.it}
\begin{abstract}
CUORE is a ton-scale bolometric experiment searching for the neutrinoless double beta decay of $^{130}$Te.
The detector started taking data in April 2017 at the Laboratori Nazionali del Gran Sasso of INFN, in Italy.
The projected CUORE sensitivity to the neutrinoless double beta decay half-life of $^{130}$Te is 9$\times$10$^{25}\,$y after five years of live time.
In 2018 the CUORE computing and storage resources at CNAF were used for the data processing and for the production of the Monte Carlo simulations used for a preliminary measurement of the 2$\nu$ double-beta decay of $^{130}$Te.
\end{abstract}
\section{The experiment}
The main goal of the CUORE experiment~\cite{Artusa:2014lgv} is to search for Majorana neutrinos through the neutrinoless double beta decay (0$\nu$DBD): $(A,Z) \rightarrow (A, Z+2) + 2e^-$.
The 0$\nu$DBD has never been observed so far and its half-life is expected to be longer than 10$^{25}$\,y.
CUORE searches for 0$\nu$DBD in a particular isotope of Tellurium ($^{130}$Te), using thermal detectors (bolometers). A thermal detector is a sensitive calorimeter which measures the
energy deposited by a single interacting particle through the temperature rise induced in the calorimeter itself.
This is accomplished by using suitable materials for the detector (dielectric crystals) and by running it at very low temperatures (in the 10 mK range) in a dilution refrigerator. In such conditions a small energy release in the crystal results in a measurable temperature rise. The temperature change is measured by means of a suitable thermal sensor, an NTD germanium thermistor glued onto the crystal.
The bolometers act at the same time as source and detector of the sought signal.
The CUORE detector is an array of 988 TeO$_2$ crystals operated as bolometers, for a total TeO$_2$ mass of 741$\,$kg.
The tellurium used for the crystals has natural isotopic abundances ($\sim$\,34.2\% of $^{130}$Te), thus the CUORE crystals contain overall 206$\,$kg of $^{130}$Te.
The bolometers are arranged in 19 towers; each tower is composed of 13 floors of 4 bolometers each.
A single bolometer is a cubic TeO$_2$ crystal with a 5$\,$cm side and a mass of 0.75$\,$kg.
CUORE will reach a sensitivity on the $^{130}$Te 0$\nu$DBD half life of $9\times10^{25}$\,y.
The cool down of the CUORE detector was completed in January 2017, and after a few weeks of pre-operation and optimization, the experiment started taking physics data in April 2017.
The first CUORE results were released in summer 2017 and were followed by a second data release with an extended exposure in autumn 2017~\cite{Alduino:2017ehq}.
The same data release was used in 2018 to produce a preliminary measurement of the 2-neutrino double-beta decay~\cite{Adams:2018nek}.
In 2018 CUORE acquired less than two months' worth of physics data, due to cryogenic problems that required a long stop of the data taking.
\section{CUORE computing model and the role of CNAF}
The CUORE raw data consist of ROOT files containing the continuous data stream of $\sim$1000 channels recorded by the DAQ at a sampling frequency of 1 kHz. Triggers are implemented via software and saved in a custom format based on the ROOT data analysis framework.
The non event-based information is stored in a PostgreSQL database that is also accessed by the offline data analysis software.
The data taking is organized in runs, each run lasting about one day.
Raw data are transferred from the DAQ computers to the permanent storage area at the end of each run.
In CUORE about 20$\,$TB/y of raw data are being produced.
A full copy of data is maintained at CNAF and preserved also on tape.
The main instance of the CUORE database is located on a computing cluster at the Laboratori Nazionali del Gran Sasso and a replica is synchronized at CNAF.
The full analysis framework at CNAF is working and is kept up to date with the official CUORE software releases.
The CUORE data analysis flow consists of two steps.
In the first-level analysis the event-based quantities are evaluated, while in the second-level analysis the energy spectra are produced.
The analysis software is organized in sequences.
Each sequence consists of a collection of modules that sequentially scan the events in the ROOT files, evaluate some relevant quantities and store them back in the events.
The analysis flow consists of several fundamental steps, which can be summarized as pulse amplitude estimation, detector gain correction, energy calibration, search for events in coincidence among multiple bolometers, and evaluation of the pulse-shape parameters used to select physical events.
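Purely as a toy illustration of the first of these steps (the pulse shape, gain and calibration constants below are invented and do not correspond to the official CUORE modules), the amplitude evaluation and a linear calibration could be sketched as:
\begin{verbatim}
# Toy sketch of pulse amplitude estimation followed by a (hypothetical)
# gain correction and linear energy calibration; all numbers are invented
# and do not reflect the official CUORE analysis modules.
import numpy as np

def pulse_amplitude(waveform, n_baseline=100):
    """Amplitude as the maximum of the baseline-subtracted pulse."""
    baseline = waveform[:n_baseline].mean()
    return waveform.max() - baseline

def calibrate(amplitude, gain=1.0, kev_per_adc=0.5):
    """Apply an assumed gain correction and linear calibration."""
    return amplitude * gain * kev_per_adc

# Synthetic 1 kHz waveform: flat baseline plus an exponential pulse.
t = np.arange(1000)
wf = 50.0 + 300.0 * np.exp(-np.clip(t - 200, 0, None) / 80.0) * (t >= 200)
print("estimated energy: %.1f keV" % calibrate(pulse_amplitude(wf)))
\end{verbatim}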
The CUORE simulation code is based on the GEANT4 package, for which the 4.9.6 and the 10.xx up to 10.03 releases have been installed.
The goal of this work is the evaluation, given the present knowledge of material contaminations, of the background index reachable by the experiment in the region of interest of the energy spectrum (0$\nu$DBD is expected to produce a peak at 2528\,keV).
Depending on the specific efficiency of the simulated radioactive sources (sources located outside the lead shielding are very inefficient), the Monte Carlo simulations can exploit from 5 to 500 computing nodes, with durations of up to some weeks.
Recently Monte Carlo simulations of the CUORE calibration sources were also performed at CNAF.
Thanks to these simulations, it was possible to produce calibration sources with an activity specifically optimized for the CUORE needs.
In 2018 the CNAF computing resources were exploited for the production of a preliminary measurement of the 2-neutrino double-beta decay of $^{130}$Te.
In order to obtain this result, which was based on the 2017 data, both the processing of the experimental data and the production of Monte Carlo simulations were required.
In the last two months of the year a data reprocessing campaign was performed with an updated version of the CUORE analysis software.
This reprocessing campaign, which also included the new data acquired in 2018, allowed us to verify the scalability of the CUORE computing model to the amount of data that CUORE will have to process a few years from now.
\section*{References}
\bibliography{cuore}
\end{document}
%% This BibTeX bibliography file was created using BibDesk.
%% http://bibdesk.sourceforge.net/
%% Created for Fabio Bellini at 2018-02-24 11:10:52 +0100
%% Saved with string encoding Unicode (UTF-8)
@article{Azzolini:2018tum,
author = "Azzolini, O. and others",
title = "{CUPID-0: the first array of enriched scintillating
bolometers for $0\nu\beta\beta$ decay investigations}",
collaboration = "CUPID",
journal = "Eur. Phys. J.",
volume = "C78",
year = "2018",
number = "5",
pages = "428",
doi = "10.1140/epjc/s10052-018-5896-8",
eprint = "1802.06562",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
SLACcitation = "%%CITATION = ARXIV:1802.06562;%%"
}
@article{Azzolini:2018dyb,
author = "Azzolini, O. and others",
title = "{First Result on the Neutrinoless Double-$\beta$ Decay of
$^{82}Se$ with CUPID-0}",
collaboration = "CUPID-0",
journal = "Phys. Rev. Lett.",
volume = "120",
year = "2018",
number = "23",
pages = "232502",
doi = "10.1103/PhysRevLett.120.232502",
eprint = "1802.07791",
archivePrefix = "arXiv",
primaryClass = "nucl-ex",
SLACcitation = "%%CITATION = ARXIV:1802.07791;%%"
}
@article{Azzolini:2018yye,
author = "Azzolini, O. and others",
title = "{Analysis of cryogenic calorimeters with light and heat
read-out for double beta decay searches}",
journal = "Eur. Phys. J.",
volume = "C78",
year = "2018",
number = "9",
pages = "734",
doi = "10.1140/epjc/s10052-018-6202-5",
eprint = "1806.02826",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
SLACcitation = "%%CITATION = ARXIV:1806.02826;%%"
}
@article{Azzolini:2018oph,
author = "Azzolini, O. and others",
title = "{Search of the neutrino-less double beta decay of$^{82}$
Se into the excited states of$^{82}$ Kr with CUPID-0}",
collaboration = "CUPID",
journal = "Eur. Phys. J.",
volume = "C78",
year = "2018",
number = "11",
pages = "888",
doi = "10.1140/epjc/s10052-018-6340-9",
eprint = "1807.00665",
archivePrefix = "arXiv",
primaryClass = "nucl-ex",
SLACcitation = "%%CITATION = ARXIV:1807.00665;%%"
}
@article{DiDomizio:2018ldc,
author = "Di Domizio, S. and others",
title = "{A data acquisition and control system for large mass
bolometer arrays}",
journal = "JINST",
volume = "13",
year = "2018",
number = "12",
pages = "P12003",
doi = "10.1088/1748-0221/13/12/P12003",
eprint = "1807.11446",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
SLACcitation = "%%CITATION = ARXIV:1807.11446;%%"
}
@article{Beretta:2019bmm,
author = "Beretta, M. and others",
title = "{Resolution enhancement with light/heat decorrelation in
CUPID-0 bolometric detector}",
year = "2019",
eprint = "1901.10434",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
SLACcitation = "%%CITATION = ARXIV:1901.10434;%%"
}
@article{Azzolini:2019nmi,
author = "Azzolini, O. and others",
title = "{Background Model of the CUPID-0 Experiment}",
collaboration = "CUPID",
year = "2019",
eprint = "1904.10397",
archivePrefix = "arXiv",
primaryClass = "nucl-ex",
SLACcitation = "%%CITATION = ARXIV:1904.10397;%%"
}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\bibliographystyle{iopart-num}
%\usepackage{citesort}
\begin{document}
\title{CUPID-0 experiment}
\author{CUPID-0 collaboration}
%\address{}
\ead{stefano.pirro@lngs.infn.it}
\begin{abstract}
With their excellent energy resolution, efficiency, and intrinsic radio-purity, cryogenic calorimeters are well suited to the search for neutrino-less double beta decay (0$\nu$DBD).
CUPID-0 is an array of 24 Zn$^{82}$Se scintillating bolometers used to search for 0$\nu$DBD of $^{82}$Se.
It is the first large-mass 0$\nu$DBD experiment exploiting a double read-out technique: the heat signal to accurately measure particle energies and the light signal to identify the particle type.
CUPID-0 has been taking data since March 2017 and has obtained several outstanding scientific results.
The CUPID-0 data processing environment configured on the CNAF computing cluster has been used for the analysis of the first period of data taking.
\end{abstract}
\section{The experiment}
Neutrino-less Double Beta Decay (0$\nu$DBD) is a hypothesized nuclear transition in which a nucleus decays emitting only two electrons.
This process cannot be accommodated in the Standard Model, as the absence of emitted neutrinos would violate lepton number conservation.
Among the several experimental approaches proposed for the search of 0$\nu$DBD, cryogenic calorimeters (bolometers) stand out for the possibility of achieving excellent energy resolution ($\sim$0.1\%), efficiency ($\ge$80\%) and intrinsic radio-purity. Moreover, the crystals that are operated as bolometers can be grown starting from most of the 0$\nu$DBD emitters, enabling the test of different nuclei.
The state of the art of the bolometric technique is represented by CUORE, an experiment composed of 988 bolometers for a total mass of 741 kg, presently in data taking at Laboratori Nazionali del Gran Sasso.
The ultimate limit of the CUORE background suppression resides in the presence of $\alpha$-decaying isotopes located in the detector structure.
The CUPID-0 project \cite{Azzolini:2018dyb,Azzolini:2018tum} was born to overcome these limits.
The main breakthrough of CUPID-0 is the addition of independent devices to measure the light signals emitted from scintillation in ZnSe bolometers.
The different light-emission properties of electrons and $\alpha$ particles will enable event-by-event rejection of $\alpha$ interactions, suppressing the overall background in the region of interest for 0$\nu$DBD by at least one order of magnitude.
The detector is composed of 26 ultra-pure ZnSe bolometers of $\sim$500 g each, enriched to 95\% in $^{82}$Se, the 0$\nu$DBD emitter, and faced by Ge-disk light detectors operated as bolometers.
CUPID-0 is hosted in a dilution refrigerator at the Laboratori Nazionali del Gran Sasso and started the data taking in March 2017.
The first scientific run (i.e., Phase I) ended in December 2018, collecting 9.95 kg$\times$y of ZnSe exposure.
These data were used to set new limits on the $^{82}$Se 0$\nu$DBD~\cite{Azzolini:2018dyb,Azzolini:2018oph} and to develop a full background model of the experiment~\cite{Azzolini:2019nmi}.
Phase II will start in June 2019 with an improved detector configuration.
\section{CUPID-0 computing model and the role of CNAF}
The CUPID-0 computing model is similar to the CUORE one, the only differences being the sampling frequency and the working point of the light-detector bolometers.
The full data stream is saved in ROOT files, and a derivative trigger is generated in software with a channel-dependent threshold.
%Raw data are saved in Root files and contain events in correspondence with energy releases occurred in the bolometers.
Each event contains the waveform of the triggering bolometer and those geometrically close to it, plus some ancillary information.
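A toy version of such a derivative trigger is sketched below; the smoothing window and the per-channel thresholds are invented for illustration and do not correspond to the actual DAQ settings described in \cite{DiDomizio:2018ldc}.
\begin{verbatim}
# Toy derivative trigger with a channel-dependent threshold; the window
# and threshold values are invented, not the real CUPID-0 DAQ settings.
import numpy as np

THRESHOLDS = {"heat_ch01": 15.0, "light_ch01": 3.0}  # ADC/sample, assumed

def derivative_trigger(stream, channel, window=10):
    """Return the sample indices where the smoothed derivative rises
    above the channel threshold (rising edges only)."""
    smoothed = np.convolve(stream, np.ones(window) / window, mode="same")
    derivative = np.diff(smoothed)
    above = derivative > THRESHOLDS[channel]
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1
\end{verbatim}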
The non-event-based information is stored in a PostgreSQL database that is also accessed by the offline data analysis software.
The data taking is arranged in runs, each run lasting about two days.
Details of the CUPID-0 data acquisition and control system can be found in \cite{DiDomizio:2018ldc}.
Raw data are transferred from the DAQ computers (LNGS) to the permanent storage area (located at CNAF) at the end of each run.
A full copy of data is also preserved on tape.
The data analysis flow consists of two steps; in the first level analysis, the event-based quantities are evaluated, while in the second level analysis the energy spectra are produced.
The analysis software is organized in sequences.
Each sequence consists of a collection of modules that scan the events in the ROOT files sequentially, evaluate some relevant quantities and store them back in the events.
The analysis flow consists of several key steps, which can be summarized as pulse amplitude estimation, detector gain correction, energy calibration and search for events in coincidence among multiple bolometers.
The new tools developed for CUPID-0 to handle the light signals are introduced in \cite{Azzolini:2018yye,Beretta:2019bmm}.
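As a minimal, purely illustrative sketch of the light-assisted particle identification (the ratio band below is invented; the real selection relies on the dedicated tools cited above), an $\alpha$ flag could be built from the light-to-heat ratio:
\begin{verbatim}
# Minimal sketch of alpha tagging via the light-to-heat ratio; the
# "beta/gamma band" below is invented and not the CUPID-0 selection.
def is_alpha(light_amplitude, heat_energy_kev, band=(0.5e-3, 2.0e-3)):
    """Flag events whose light/heat ratio lies outside the assumed
    beta/gamma band; alpha events populate a different ratio region."""
    if heat_energy_kev <= 0:
        return False
    low, high = band
    ratio = light_amplitude / heat_energy_kev
    return not (low <= ratio <= high)

# alphas = [e for e in events if is_alpha(e.light, e.heat_kev)]
\end{verbatim}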
The main instance of the database was located at CNAF,
and the full analysis framework was used to analyze data until November 2017. A web page for offline reconstruction monitoring was maintained.
Since the flooding at the INFN Tier 1, we have been using the database on our DAQ servers at LNGS.
%During 2017 a more intense usage of the CNAF resources is expected, both in terms of computing resourced and storage space.
\section*{References}
\bibliography{cupid-biblio}
\end{document}
contributions/dampe/CNAF_HS06_2017.png

62.5 KiB

contributions/dampe/dampe_layout_2.jpg

84.1 KiB

contributions/dampe/figureCNAF2018.png

20.2 KiB