\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\bibliographystyle{iopart-num}
%\usepackage{citesort}
\begin{document}
\title{CUORE experiment}
\author{CUORE collaboration}
%\address{}
\ead{cuore-spokesperson@lngs.infn.it}
\begin{abstract}
CUORE is a ton-scale bolometric experiment searching for the neutrinoless double beta decay of $^{130}$Te.
The detector started taking data in April 2017 at the Laboratori Nazionali del Gran Sasso of INFN, in Italy.
The projected CUORE sensitivity to the neutrinoless double beta decay half-life of $^{130}$Te is 9$\times$10$^{25}\,$y after five years of live time.
In 2018 the CUORE computing and storage resources at CNAF were used for the data processing and for the production of the Monte Carlo simulations employed in a preliminary measurement of the 2$\nu$ double-beta decay of $^{130}$Te.
\end{abstract}
\section{The experiment}
The main goal of the CUORE experiment~\cite{Artusa:2014lgv} is to search for Majorana neutrinos through the neutrinoless double beta decay (0$\nu$DBD): $(A,Z) \rightarrow (A, Z+2) + 2e^-$.
The 0$\nu$DBD has never been observed, and its half-life is expected to be longer than 10$^{25}$\,y.
CUORE searches for 0$\nu$DBD in a particular isotope of Tellurium ($^{130}$Te), using thermal detectors (bolometers). A thermal detector is a sensitive calorimeter which measures the
energy deposited by a single interacting particle through the temperature rise induced in the calorimeter itself.
This is accomplished by using suitable materials for the detector (dielectric crystals) and by running it at very low temperatures (in the 10 mK range) in a dilution refrigerator. In such conditions, a small energy release in the crystal results in a measurable temperature rise. The temperature change is measured by means of a dedicated thermal sensor, an NTD germanium thermistor glued onto the crystal.
The bolometers act at the same time as source and detector of the sought signal.
The CUORE detector is an array of 988 TeO$_2$ crystals operated as bolometers, for a total TeO$_2$ mass of 741$\,$kg.
The tellurium used for the crystals has natural isotopic abundances ($\sim$\,34.2\% of $^{130}$Te), thus the CUORE crystals contain overall 206$\,$kg of $^{130}$Te.
The bolometers are arranged in 19 towers; each tower is composed of 13 floors of 4 bolometers each.
A single bolometer is a cubic TeO$_2$ crystal with a 5$\,$cm side and a mass of 0.75$\,$kg.
CUORE will reach a sensitivity on the $^{130}$Te 0$\nu$DBD half life of $9\times10^{25}$\,y.
The cool down of the CUORE detector was completed in January 2017, and after a few weeks of pre-operation and optimization, the experiment started taking physics data in April 2017.
The first CUORE results were released in summer 2017 and were followed by a second data release with an extended exposure in autumn 2017~\cite{Alduino:2017ehq}.
The same data release was used in 2018 to produce a preliminary measurement of the 2-neutrino double-beta decay~\cite{Adams:2018nek}.
In 2018 CUORE acquired less than two months' worth of physics data, due to cryogenic problems that required a long stop of the data taking.
\section{CUORE computing model and the role of CNAF}
The CUORE raw data consist of ROOT files containing the continuous data stream of $\sim$1000 channels recorded by the DAQ at a sampling frequency of 1 kHz. Triggers are implemented in software and saved in a custom format based on the ROOT data analysis framework.
The non event-based information is stored in a PostgreSQL database that is also accessed by the offline data analysis software.
The data taking is organized in runs, each run lasting about one day.
Raw data are transferred from the DAQ computers to the permanent storage area at the end of each run.
CUORE produces about 20$\,$TB/y of raw data.
A full copy of the data is maintained at CNAF and also preserved on tape.
The main instance of the CUORE database is located on a computing cluster at the Laboratori Nazionali del Gran Sasso and a replica is synchronized at CNAF.
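As an illustration, the run metadata stored in the PostgreSQL replica can be retrieved by the offline software with a simple query. The following Python sketch is only indicative: the connection string, table and column names are assumptions made for illustration, not the actual CUORE schema.
\begin{verbatim}
import psycopg2

# Hedged sketch: read run metadata from the PostgreSQL replica.
# The DSN, table and column names are illustrative assumptions.
def runs_between(start, stop, dsn="dbname=cuore host=db.example.org"):
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT run_number, start_time, stop_time "
                "FROM runs "
                "WHERE start_time >= %s AND stop_time <= %s "
                "ORDER BY run_number",
                (start, stop))
            return cur.fetchall()
\end{verbatim}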
The full analysis framework is operational at CNAF and is kept up to date with the official CUORE software releases.
The CUORE data analysis flow consists of two steps.
In the first-level analysis the event-based quantities are evaluated, while in the second-level analysis the energy spectra are produced.
The analysis software is organized in sequences.
Each sequence consists of a collection of modules that scan the events in the ROOT files sequentially, evaluate some relevant quantities and store them back in the events.
The analysis flow consists of several fundamental steps that can be summarized as pulse amplitude estimation, detector gain correction, energy calibration, search for events in coincidence among multiple bolometers, and evaluation of the pulse-shape parameters used to select physical events.
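As an illustration of this modular organization, the following Python sketch mimics a short sequence of modules acting on events; the module granularity, the event fields and the naive amplitude estimator are assumptions made for illustration and do not correspond to the actual (C++/ROOT based) CUORE analysis code.
\begin{verbatim}
# Hedged sketch of a "sequence of modules" analysis flow (not CUORE code).
def estimate_amplitude(event):
    # e.g. maximum of the baseline-subtracted pulse
    samples = event["waveform"]
    baseline = sum(samples[:100]) / 100.0
    event["amplitude"] = max(s - baseline for s in samples)

def correct_gain(event, gain):
    # stabilize the detector response with a reference gain factor
    event["stab_amplitude"] = event["amplitude"] / gain

def calibrate_energy(event, slope, intercept):
    # convert the stabilized amplitude to energy (keV), linear calibration
    event["energy_keV"] = slope * event["stab_amplitude"] + intercept

def run_sequence(events, gain=1.0, slope=1.0, intercept=0.0):
    # each module scans the events sequentially and stores results back
    for event in events:
        estimate_amplitude(event)
        correct_gain(event, gain)
        calibrate_energy(event, slope, intercept)
    return events
\end{verbatim}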
The CUORE simulation code is based on the GEANT4 package, for which releases 4.9.6 and 10.xx up to 10.03 have been installed.
The goal of this work is the evaluation, at the present knowledge of material contaminations, of the background index reachable by the experiment in the region of interest of the energy spectrum (0$\nu$DBD is expected to produce a peak at 2528\,keV).
Depending on the specific efficiency of the simulated radioactive sources (sources located outside the lead shielding have a very low efficiency), the Monte Carlo simulations can exploit from 5 to 500 computing nodes, with durations of up to a few weeks.
Recently Monte Carlo simulations of the CUORE calibration sources were also performed at CNAF.
Thanks to these simulations, it was possible to produce calibration sources with an activity specifically optimized for the CUORE needs.
In 2018 the CNAF computing resources were exploited for the production of a preliminary measurement of the 2-neutrino double-beta decay of $^{130}$Te.
In order to obtain this result, which was based on the 2017 data, both the processing of the experimental data and the production of Monte Carlo simulations were required.
In the last two months of the year a data reprocessing campaign was performed with an updated version of the CUORE analysis software.
This reprocessing campaign, which also included the new data acquired in 2018, made it possible to verify the scalability of the CUORE computing model to the amount of data that CUORE will have to process a few years from now.
\section*{References}
\bibliography{cuore}
\end{document}
......@@ -38,7 +38,7 @@ Phase II will start in June 2019 with an improved detector configuration.
\section{CUPID-0 computing model and the role of CNAF}
The CUPID-0 computing model is similar to the CUORE one, the only differences being the sampling frequency and the working point of the light-detector bolometers.
The full data stream is saved in root files, and a derivative trigger is software generated with a channel dependent threshold.
The full data stream is saved in ROOT files, and a derivative trigger is generated in software with a channel-dependent threshold.
%Raw data are saved in Root files and contain events in correspondence with energy releases occurred in the bolometers.
Each event contains the waveform of the triggering bolometer and those geometrically close to it, plus some ancillary information.
The non-event-based information is stored in a PostgreSQL database that is also accessed by the offline data analysis software.
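The derivative trigger mentioned above can be sketched as follows; the Python code is purely illustrative, and the thresholds, window length and neighbour map are assumptions rather than the actual CUPID-0 parameters.
\begin{verbatim}
import numpy as np

def derivative_trigger(stream, thresholds, neighbours, window=1000):
    """Hedged sketch of a derivative trigger (not the CUPID-0 code).
    stream:     {channel: 1D numpy array with the continuous data}
    thresholds: {channel: threshold on the sample-to-sample derivative}
    neighbours: {channel: [geometrically close channels]}"""
    events = []
    for ch, samples in stream.items():
        deriv = np.diff(samples)
        last_trigger = -window
        for i in np.where(deriv > thresholds[ch])[0]:
            if i - last_trigger < window:   # simple hold-off
                continue
            last_trigger = i
            lo, hi = max(0, i - window // 2), i + window // 2
            events.append({
                "channel": ch,
                "waveform": samples[lo:hi],
                "side_waveforms": {n: stream[n][lo:hi]
                                   for n in neighbours.get(ch, [])},
            })
    return events
\end{verbatim}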
......@@ -49,10 +49,13 @@ A full copy of data is also preserved on tape.
The data analysis flow consists of two steps; in the first level analysis, the event-based quantities are evaluated, while in the second level analysis the energy spectra are produced.
The analysis software is organized in sequences.
Each sequence consists of a collection of modules that scan the events in the Root files sequentially, evaluate some relevant quantities and store them back in the events.
Each sequence consists of a collection of modules that scan the events in the ROOT files sequentially, evaluate some relevant quantities and store them back in the events.
The analysis flow consists of several key steps that can be summarized in pulse amplitude estimation, detector gain correction, energy calibration and search for events in coincidence among multiple bolometers.
The new tools developed for CUPID-0 to handle the light signals are introduced in \cite{Azzolini:2018yye,Beretta:2019bmm}.
The main instance of the database was located at CNAF and the full analysis framework was used to analyze data until November 2017. A web page for offline reconstruction monitoring was maintained.
The main instance of the database was located at CNAF
and the full analysis framework was used to analyze data until November 2017. A web page for offline reconstruction monitoring was maintained.
Since the flooding of the INFN Tier 1, we have instead been using the database on our DAQ servers at LNGS.
%During 2017 a more intense usage of the CNAF resources is expected, both in terms of computing resourced and storage space.
\section*{References}
......
......@@ -9,16 +9,16 @@
\author{G. Ambrosi$^1$, G. Donvito$^5$, D.F.Droz$^6$, M. Duranti$^1$, D. D'Urso$^{2,3,4}$, F. Gargano$^{5,\ast}$, G. Torralba Elipe$^{7,8}$}
\address{$^1$ INFN, Sezione di Perugia, I-06100 Perugia, Italy}
\address{$^2$ Universit\`a di Sassari, I-07100 Sassari, Italy}
\address{$^3$ ASDC, I-00133 Roma, Italy}
\address{$^4$ INFN-LNS, I-95123 Catania, Italy}
\address{$^1$ INFN Sezione di Perugia, Perugia, IT}
\address{$^2$ Universit\`a di Sassari, Sassari, IT}
\address{$^3$ ASDC, Roma, IT}
\address{$^4$ INFN - Laboratori Nazionali del Sud, Catania, IT}
%\address{$^3$ Universit\`a di Perugia, I-06100 Perugia, Italy}
\address{$^5$ INFN, Sezione di Bari, I-70125 Bari, Italy}
\address{$^6$ University of Geneva, Departement de physique nucléaire et corpusculaire (DPNC), CH-1211, Gen\`eve 4, Switzerland}
\address{$^5$ INFN Sezione di Bari, Bari, IT}
\address{$^6$ University of Geneva, Gen\`eve, CH}
\address{$^7$ Gran Sasso Science Institute, L'Aquila, Italy}
\address{$^8$ INFN - Laboratori Nazionali del Gran Sasso, L'Aquila, Italy}
\address{$^7$ Gran Sasso Science Institute, L'Aquila, IT}
\address{$^8$ INFN - Laboratori Nazionali del Gran Sasso, L'Aquila, IT}
\address{DAMPE experiment \url{http://dpnc.unige.ch/dampe/},
......@@ -59,15 +59,19 @@ PMO is the deputed center for DAMPE data production. Data are collected 4 times
Data processing and reconstruction activities are currently supported by a computing farm consisting of more than 1400 computing cores, able to reprocess 3 years DAMPE data in 1 month.
\subsection{Monte Carlo Production}
Analysis of DAMPE data requires large amounts of Monte Carlo simulation, to fully understand detector capabilities, measurement limits and systematic. In order to facilitate easy work-flow handling and management and also enable effective monitoring of a large number of batch jobs in various states, a NoSQL meta-data database using MongoDB \cite{mongo} was developed with a prototype currently running at the Physics Department of Geneva University. Database access is provided through a web-frontend and command tools based on the flask-web toolkit \cite{flask} with a client-backend of cron scripts that run on the selected computing farm. The design and implementation of this work-flow system were heavily influenced by the implementation of the Fermi-LAT data processing pipeline \cite{latpipeline} and the DIRAC computing framework \cite{dirac}.
Analysis of DAMPE data requires large amounts of Monte Carlo simulation to fully understand the detector capabilities, measurement limits and systematics. In order to facilitate easy work-flow handling and management, and also to enable effective monitoring of a large number of batch jobs in various states, a NoSQL meta-data database using MongoDB \cite{mongo} was developed, with a prototype currently running at the Physics Department of Geneva University. Database access is provided through a web frontend and command-line tools based on the Flask web toolkit \cite{flask}, with a client backend of cron scripts that run on the selected computing farm.
The design and completion of this work-flow system were heavily influenced by the implementation of the Fermi-LAT data processing pipeline \cite{latpipeline}
and the DIRAC computing framework \cite{dirac}.
Once submitted, each batch job continuously reports its status to the database through outgoing HTTP requests. To that end, computing nodes need to allow for outgoing internet access. Each batch job implements a work-flow where input and output data transfers are being performed (and their return codes are reported) as well as the actual running of the payload of a job (which is defined in the metadata description of the job). Dependencies on productions are implemented at the framework level and jobs are only submitted once dependencies are satisfied.
Once submitted, each batch job continuously reports its status to the database through outgoing HTTP requests.
To that end, computing nodes must have outgoing connectivity enabled. Each batch job implements a work-flow in which input and output data transfers are performed (and their return codes reported), as well as the actual payload of the job (defined in its metadata description). Dependencies on productions are implemented at the framework level, and jobs are only submitted once their dependencies are satisfied.
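A minimal sketch of such a job-side status report is given below; the endpoint URL and the payload fields are hypothetical and do not correspond to the actual DAMPE workflow API.
\begin{verbatim}
import requests

WORKFLOW_URL = "https://dampe-workflow.example.org"   # placeholder URL

def report_status(job_id, status, return_code=None):
    # report the current job state to the MongoDB-backed workflow database
    payload = {"job_id": job_id, "status": status}
    if return_code is not None:
        payload["return_code"] = return_code
    r = requests.post(f"{WORKFLOW_URL}/jobs/{job_id}/status",
                      json=payload, timeout=10)
    r.raise_for_status()

# typical use inside the batch job wrapper:
# report_status("mc-00042", "staging-in")
# ... transfer input, run the payload, transfer output ...
# report_status("mc-00042", "done", return_code=0)
\end{verbatim}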
Once the MC data are generated, a secondary job is initiated, which performs the digitization and reconstruction of the existing MC data with a given software release, in bulk for large amounts of data. This process is set up via a cron job at DPNC and occupies up to 200 slots in a 6-hour limited computing queue.
\subsection{Data availability}
DAMPE data are available to the Chinese Collaboration through the PMO institute, while they are kept accessible to the European Collaboration transferring them from PMO to CNAF and from there to the DPNC.
DAMPE data are available to the Chinese Collaboration through the PMO institute, while they are kept accessible to the European Collaboration by transferring them from PMO to CNAF, and from there to the DPNC.
Every time new {\it 1B}, {\it 1F} or {\it 2A} data files are available at PMO, they are copied, using the GridFTP \cite{gridftp} protocol,
to a server at CNAF, \texttt{ gridftp-plain-virgo.cr.cnaf.infn.it}, into the DAMPE storage area. From CNAF, every 4 hours a copy of each stream is triggered to the Geneva computing farm via rsync. Dedicated lsf jobs are submitted once per day to asynchronously verify the checksum of newly transferred data from PMO to CNAF and from CNAF to Geneva.
into the DAMPE storage area at CNAF. From CNAF, every 4 hours a copy of each stream
is triggered to the Geneva computing farm via rsync. Dedicated batch jobs are submitted once per day to asynchronously verify the checksum of newly transferred data from PMO to CNAF and from CNAF to Geneva.
Data verification and copy processes are managed through a dedicated User Interface (UI), \texttt{ui-dampe}.
The connection to China passes through the Orientplus \cite{orientplus} link of the G\'{e}ant Consortium \cite{geant}. The data transfer rate is currently limited by the connection of the PMO to the China Education and Research Network (CERNET), which has a maximum bandwidth of 100 Mb/s. For this reason, the PMO-CNAF copy process is used for daily data production.
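The asynchronous checksum verification can be sketched as follows; the catalogue format (a map from local path to expected MD5) is an illustrative assumption.
\begin{verbatim}
import hashlib

def md5sum(path, chunk_size=8 * 1024 * 1024):
    """Compute the MD5 checksum of a file, reading it in chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            md5.update(block)
    return md5.hexdigest()

def verify_transfers(catalogue):
    """catalogue: {local_path: expected_md5}; returns mismatching paths."""
    return [path for path, expected in catalogue.items()
            if md5sum(path) != expected]
\end{verbatim}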
......@@ -88,10 +92,10 @@ DAMPE activities at CNAF in 2018 have been related to data transfer, Monte Carlo
\subsection{Data transfer}
The daily activity of data transfer from PMO to CNAF and thereafter from CNAF to GVA have been performed all along the year.
Daily transfer rate has been of about 100 GB per day from PMO to CNAF more 100 GB from CNAF to PMO.
The daily activity of data transfer from PMO to CNAF, and thereafter from CNAF to CERN, has been performed throughout the year.
The daily transfer rate has been about 100 GB from PMO to CNAF and more than 100 GB from CNAF to PMO.
The step between PMO and CNAF is performed, as described in the previous sections, via the \texttt{gridftp} protocol.
Two strategies have been, instead, used to copy data from CNAF to PMO: via \texttt{rsync} from the UI and via \texttt{rsync} managed by batch (LSF) jobs.
Two strategies have been, instead, used to copy data from CNAF to PMO: via \texttt{rsync} from the UI and via \texttt{rsync} managed by batch jobs.
DAMPE data have been reprocessed three times during the year, and a dedicated copy task has been carried out to copy the new production releases, in addition to the ordinary daily copy.
......@@ -142,7 +146,7 @@ Most of the analysis in Europe is performed at CNAF and its role has been crucia
\section{Acknowledgments}
The DAMPE mission was founded by the strategic priority science and technology projects in space science of the Chinese Academy of Sciences and in part by the National Key Program for Research and Development, and the 100 Talents program of the Chinese Academy of Sciences. In Europe, the work is supported by the Italian National Institute for Nuclear Physics (INFN), the Italian University and Research Ministry (MIUR), and the University of Geneva. We extend our gratitude to CNAF-T1 for their continued support also beyond providing computing resources.
The DAMPE mission was funded by the strategic priority science and technology projects in space science of the Chinese Academy of Sciences and in part by the National Key Program for Research and Development, and the 100 Talents program of the Chinese Academy of Sciences. In Europe, the work is supported by the Italian National Institute for Nuclear Physics (INFN), the Italian University and Research Ministry (MIUR), and the University of Geneva. We extend our gratitude to INFN-T1 for their continued support, also beyond providing computing resources.
\section*{References}
......@@ -162,4 +166,4 @@ The DAMPE mission was founded by the strategic priority science and technology p
\end{thebibliography}
\end{document}
\ No newline at end of file
\end{document}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\bibliographystyle{iopart-num}
%\usepackage{citesort}
\begin{document}
\title{DarkSide program at CNAF}
\author{S. Bussino, S. M. Mari, S. Sanfilippo}
\address{INFN and Universit\`{a} degli Studi Roma 3}
\ead{bussino@fis.uniroma3.it; stefanomaria.mari@uniroma3.it; simone.sanfilippo@roma3.infn.it}
\begin{abstract}
DarkSide is a direct dark matter research program based at the underground Laboratori Nazionali del Gran Sasso
(\textit{LNGS}), searching for the rare nuclear recoils (possibly) induced by the so-called Weakly
Interacting Massive Particles (\textit{WIMPs}). It is based on a dual-phase Time Projection Chamber filled with liquid
Argon (\textit{LAr-TPC}) from underground sources. The prototype project is a LAr-TPC with a $(46.4\pm0.7)$ kg
active mass, the DarkSide-50 (\textit{DS-50}) experiment, which is installed inside a 30 t organic liquid scintillator
neutron veto, in turn installed at the center of a 1 kt water Cherenkov veto against the residual flux of cosmic
muons. DS-50 has been taking data since November 2013 with Atmospheric Argon (\textit{AAr}) and, since April 2015, has
been operated with Underground Argon (\textit{UAr}) highly depleted in radioactive ${}^{39}Ar$. The exposure of 1422
kg d of AAr has demonstrated that the operation of DS-50 for three years in a background-free condition is a solid
reality, thanks to the excellent performance of the pulse shape analysis. The first release of results from an exposure
of 2616 kg d of UAr has shown no dark matter candidate events. This is the most sensitive dark matter search performed
with an Argon-based detector, corresponding to a 90\% CL upper limit on the WIMP-nucleon spin-independent cross section
of $2\times10^{-44} cm^2$ for a WIMP mass of 100 $GeV/c^2$. DS-50 will be operated until the end of 2019.
Building on the experience of DS-50, the DS-20k project has been proposed, based on a new LAr-TPC of more than 20 tonnes.
\end{abstract}
\section{The DS-50 experiment}
The existence of dark matter is now established from different gravitational effects, but its nature is still a deep mystery. One possibility, motivated by other considerations in elementary particle physics, is that dark matter consists of new undiscovered elementary particles. A leading candidate explanation, motivated by supersymmetry theory (\textit{SUSY}), is that dark matter is composed of as-yet undiscovered Weakly Interacting Massive Particles (\textit{WIMPs}) formed in the early universe and subsequently gravitationally clustered in association with baryonic matter \cite{Good85}. Evidence for new particles that could constitute WIMP dark matter may come from upcoming experiments at the Large Hadron Collider (\textit{LHC}) at CERN or from sensitive astronomical instruments that detect radiation produced by WIMP-WIMP annihilations in galaxy halos. The thermal motion of the WIMPs comprising the dark matter halo surrounding the galaxy and the Earth should result in WIMP-nuclear collisions of sufficient energy to be observable by sensitive laboratory apparatus. WIMPs could in principle be detected in terrestrial experiments through their collisions with ordinary nuclei, giving observable low-energy $<$100 keV nuclear recoils. The predicted low collision rates require ultra-low background detectors with large (0.1-10 ton) target masses, located in deep underground sites to eliminate neutron background from cosmic ray muons. The DarkSide program is the first to employ a Liquid Argon Time Projection Chamber (\textit{LAr-TPC}) with low levels of ${}^{39}Ar$, together with innovations in photon detection and background suppression.
The DS-50 detector is installed in Hall C at the Laboratori Nazionali del Gran Sasso (\textit{LNGS}) at a depth of 3800 m.w.e.\footnote{The meter water equivalent (m.w.e.) is a standard measure of cosmic ray attenuation in underground laboratories.}, and it will continue to take data up to the end of 2019. The project will continue with DarkSide-20k (\textit{DS-20k}) and \textit{Argo}, a multi-ton detector with an expected sensitivity improvement of two orders of magnitude. The DS-50 target volume is hosted in a dual-phase TPC that contains argon in both the liquid and the gaseous phase, the latter on top of the former. The scattering of WIMPs or background particles in the active volume induces a prompt scintillation light, called S1, and ionization. Electrons that do not recombine are drifted by an electric field of 200 V/cm applied along the z-axis. They are then extracted into the gaseous phase above the extraction grid and accelerated by an electric field of about 4200 V/cm. Here a secondary, larger signal due to electroluminescence takes place, the so-called S2. The light is collected by two arrays of 19 3"-PMTs on each side of the TPC, corresponding to a 60\% geometrical coverage of the end plates and 20\% of the total TPC surface. The detector is capable of reconstructing the position of the interaction in 3D: the z-coordinate is easily computed from the electron drift time, while the time profile of the S2 light collected by the top-plate PMTs allows the reconstruction of the \textit{x} and \textit{y} coordinates. The LAr-TPC can exploit Pulse Shape Discrimination (\textit{PSD}) and the ratio of scintillation to ionization (S1/S2) to reject $\beta/\gamma$ background in favor of the nuclear recoil events expected from WIMP scattering \cite{Ben08, Bou06}.\\ Events due to neutrons from cosmogenic sources and from radioactive contamination in the detector components, which also produce nuclear recoils, are suppressed by the combined action of the neutron and cosmic-ray vetoes. The first, in particular, is a 4.0 m-diameter stainless steel sphere filled with 30 t of borated liquid scintillator acting as a Liquid Scintillator Veto (\textit{LSV}). The sphere is lined with \textit{Lumirror} reflecting foils and is equipped with an array of 110 Hamamatsu 8"-PMTs with low-radioactivity components and high-quantum-efficiency photocathodes. The cosmic-ray veto, on the other hand, is an 11 m-diameter, 10 m-high cylindrical tank filled with high-purity water, acting as a Water Cherenkov Detector (\textit{WCD}). The inside surface of the tank is covered with a laminated \textit{Tyvek-polyethylene-Tyvek} reflector and is equipped with an array of 80 ETL 8"-PMTs with low-radioactivity components and high-quantum-efficiency photocathodes.
The exposure of 1422 kg d of AAr has demonstrated that the operation of DS-50 for three years in a background-free condition is a solid reality, thanks to the excellent performance of the pulse shape analysis. The first release of results from an exposure of 2616 kg d of UAr has shown no dark matter candidate events. This is the most sensitive dark matter search performed with an Argon-based detector, corresponding to a 90\% CL upper limit on the WIMP-nucleon spin-independent cross section of $2\times10^{-44} cm^2$ for a WIMP mass of 100 $GeV/c^2$ \cite{Dang16}.
\section{DS-50 at CNAF}
The data readout in the three detector subsystems is managed by dedicated trigger boards: each subsystem is equipped with a user-customizable FPGA unit, in which the trigger logic is implemented. The inputs and outputs of the different trigger modules are processed by a set of electrical-to-optical converters, and the communication between the subsystems uses dedicated optical links. To keep the TPC and the Veto readouts aligned, a pulse per second (\textit{PPS}) generated by a GPS receiver is sent to the two systems, where it is acquired and interpolated with a resolution of 20 ns to allow offline confirmation of event matching.
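A minimal sketch of such an offline matching is given below; the event representation and the 100 ns tolerance (a few times the 20 ns interpolation resolution) are illustrative assumptions.
\begin{verbatim}
def match_events(tpc_times, veto_times, tolerance_ns=100):
    """Match TPC and Veto events on PPS-referenced timestamps (ns).
    Both input lists must be sorted; returns pairs of indices."""
    pairs, j = [], 0
    for i, t in enumerate(tpc_times):
        while j < len(veto_times) and veto_times[j] < t - tolerance_ns:
            j += 1
        if j < len(veto_times) and abs(veto_times[j] - t) <= tolerance_ns:
            pairs.append((i, j))
    return pairs
\end{verbatim}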
To acquire data, the DarkSide detector uses a DAQ machine equipped with a 7 TB storage buffer. Raw data are processed and automatically sent to the CNAF farm via a 10 Gbit optical link (with approximately a 7-hour delay). At CNAF, data are housed on a disk storage system of about 1 PB net capacity, with part of the data (300 TB) backed up on the tape library. Raw data from CNAF, and processed data from LNGS, are then semi-automatically copied to the Fermi National Accelerator Laboratory (\textit{FNAL}) via a 100 Gbit optical link. Part of the reconstructed data (RECO files) are sent back to CNAF via the same link at a rate of about 0.5 TB/month. Data processed and analyzed at FNAL are compared with the analysis performed at CNAF. The INFN Roma 3 group has an active role in maintaining and following, step by step, the overall transfer procedure and in arranging the data management.
\section{The future of DarkSide: DS-20k}
Building on the successful experience in operating the DS-50 detector, the DarkSide program will continue with DS-20k, a direct WIMP search detector using a two-phase Liquid Argon Time Projection Chamber (LAr TPC) with an active (fiducial) mass of 23 t (20 t), which will be built in the coming years. The optical sensors will be Silicon PhotoMultiplier (\textit{SiPM}) matrices with very low radioactivity. Operation of DS-50 demonstrated a major reduction of the dominant ${}^{39}Ar$ background when using argon extracted from an underground source, before applying pulse shape analysis. Data from DS-50, in combination with MC simulations and analytical modelling, also show that a rejection factor for discrimination between electron and nuclear recoils greater than $3\times10^9$ is achievable. The expected large rejection factor, along with the use of the veto system and the adoption of silicon photomultipliers in the LAr-TPC, are the keys to unlock the path to large LAr-TPC detector masses, while maintaining an experiment in which fewer than 0.1 events are expected to occur within the WIMP search region during the planned exposure.
Thanks to the measured ultra-low background, DS-20k will have sensitivity to WIMP-nucleon cross sections of
$1.2\times10^{-47}\ cm^2$ and $1.1\times10^{-46}\ cm^2$ for WIMPs respectively of
$1\ TeV/c^2$ and $10\ TeV/c^2$ mass, to be achieved during a 5 yr run producing an exposure of 100 t yr free from any instrumental background.
DS-20k could then extend its operation to a decade, increasing the exposure to 200 t yr, reaching a sensitivity of $7.4\times10^{-48}\ cm^2$ and $6.9\times10^{-47}\ cm^2$ for WIMPs respectively of $1\ TeV/c^2$ and $10\ TeV/c^2$ mass.
DS-20k will be more than two orders of magnitude larger in size than DS-50 and will utilize SiPM technologies. Therefore, the collaboration plans to build a prototype detector of intermediate size, called DS-Proto, incorporating the new technologies for their full validation. The choice of a mass scale of about 1 t allows a full validation of the technological choices for DS-20k. DS-proto will be built at CERN; data taking is foreseen to start in 2020.
\section{DS-proto at CNAF}
Data from DS-proto will be stored and managed at CNAF. The construction, operation and commissioning of DS-proto will allow the validation of the major innovative technical features of DS-20k. Data taking will start in 2020. The computing resources have been evaluated according to the data throughput, trigger rate and duty cycle of the experiment. A computing power of about 1 kHS06 and 300 TB (net) of disk are needed to fully support DS-proto data taking and data analysis in 2020. In order to perform the CPU-demanding Monte Carlo production at CNAF, 30 TB (net) and 2 kHS06 are needed. The DS-proto data taking is foreseen to last a few years, requiring a total disk space of the order of a few PB and a computing capacity of several kHS06.
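As a rough sketch of how such an estimate scales (the symbols are placeholders, since the individual DS-proto rates are not quoted here), the yearly raw data volume can be written as
\[
V_{\mathrm{raw}} \simeq R_{\mathrm{trig}}\, S_{\mathrm{event}}\, \varepsilon_{\mathrm{duty}}\, T_{\mathrm{year}},
\]
where $R_{\mathrm{trig}}$ is the trigger rate, $S_{\mathrm{event}}$ the average event size, $\varepsilon_{\mathrm{duty}}$ the duty cycle and $T_{\mathrm{year}}$ the live time in one year; the required computing power scales in the same way with the per-event processing cost.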
%However, the goal of DS-20k is a background free exposure of 100 ton-year of liquid Argon which requires further suppression of ${}^{39}Ar$ background with respect to DS-50. The project \textit{URANIA} involves the upgrade of the UAr extraction plant to a massive production rate suitable for multi-ton detectors. The project \textit{ARIA} instead involves the construction of a very tall cryogenic distillation column in the Seruci mine (Sardinia, Italy) with the high-volume capability of chemical and isotopic purification of UAr.\\ The projected sensitivity of DS-20k and Argo reaches a WIMP-nucleon cross section of $10^{-47}\ cm^2$ and $10^{-48}\ cm^2$ respectively, for a WIMP mass of 100 $GeV/cm^2$, exploring the region of the parameters plane down to the irreducible background due to atmospheric neutrinos.
\section*{References}
\begin{thebibliography} {17}
\bibitem{Good85} M.~W.~Goodman, E.~Witten, Phys. Rev. D {\bf 31} 3059 (1985);
\bibitem{Loo83} H.~H.~Loosli, Earth Plan. Sci. Lett. {\bf 63} 51 (1983);
\bibitem{Ben07} P.~Benetti et al. (WARP Collaboration), Nucl. Inst. Meth. A {\bf 574} 83 (2007);
\bibitem{Ben08} P.~Benetti et al. (WARP Collaboration), Astropart. Phys. {\bf 28} 495 (2008);
\bibitem{Bou06} M.~G.~Boulay, A.~Hime, Astropart. Phys. {\bf 25} 179 (2006);
\bibitem{Dang16} D.~D'Angelo et al. (DARKSIDE Collaboration), Il nuovo cimento C {\bf 39} 312 (2016).
\end{thebibliography}
\end{document}
\ No newline at end of file
File added
......@@ -7,9 +7,9 @@
\begin{document}
\title{Comparing Data Mining Techniques for Software Defect Prediction}
\author{Marco Canaparo, Elisabetta Ronchieri}
\author{M. Canaparo$^1$, E. Ronchieri$^1$}
\address{INFN CNAF, Bologna, Italy}
\address{$^1$ INFN-CNAF, Bologna, IT}
\ead{marco.canaparo@cnaf.infn.it, elisabetta.ronchieri@cnaf.infn.it}
......@@ -54,7 +54,7 @@ Concerning software metrics, we have collected all the metrics used in literatur
\noindent\textbf{McCabe} (e.g. Cyclomatic Complexity, Essential Complexity): is used to evaluate the complexity of a software program. It is derived from a flow graph and is mathematically computed using graph theory. Basically, it is determined by counting the number of decision statements in a program \cite{McCabe1976, McCabe1989}.
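For a control-flow graph with $E$ edges, $N$ nodes and $P$ connected components, this count corresponds to the cyclomatic complexity $V(G) = E - N + 2P$.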
\noindent\textbf{Halstead} (e.g. Base Measures, Derived Measures): is used to measure some characteristics of a program module - such as the "Length", the "Potential Volume", "Difficulty", the "Programming Time" - by employing some basic metrics like number of unique operators, number of unique operands, total occurrences of operators, total occurrences of operands \cite{shen, Halstead1977}.
\noindent\textbf{Halstead} (e.g. Base Measures, Derived Measures): is used to measure some characteristics of a program module - such as the ``Length'', the ``Potential Volume'', ``Difficulty'', the ``Programming Time'' - by employing some basic metrics like number of unique operators, number of unique operands, total occurrences of operators, total occurrences of operands \cite{shen, Halstead1977}.
\noindent\textbf{Size} (e.g. Lines of Code, Comment Lines of Code): the Lines of Code (LOC) is used to measure a software module and the accumulated LOC of all the modules for measuring a program \cite{li}.
......
File added
%%
%% This is file `iopams.sty'
%% File to include AMS fonts and extra definitions for bold greek
%% characters for use with iopart.cls
%%
\NeedsTeXFormat{LaTeX2e}
\ProvidesPackage{iopams}[1997/02/13 v1.0]
\RequirePackage{amsgen}[1995/01/01]
\RequirePackage{amsfonts}[1995/01/01]
\RequirePackage{amssymb}[1995/01/01]
\RequirePackage{amsbsy}[1995/01/01]
%
\iopamstrue % \newif\ifiopams in iopart.cls & iopbk2e.cls
% % allows optional text to be in author guidelines
%
% Bold lower case Greek letters
%
\newcommand{\balpha}{\boldsymbol{\alpha}}
\newcommand{\bbeta}{\boldsymbol{\beta}}
\newcommand{\bgamma}{\boldsymbol{\gamma}}
\newcommand{\bdelta}{\boldsymbol{\delta}}
\newcommand{\bepsilon}{\boldsymbol{\epsilon}}
\newcommand{\bzeta}{\boldsymbol{\zeta}}
\newcommand{\bfeta}{\boldsymbol{\eta}}
\newcommand{\btheta}{\boldsymbol{\theta}}
\newcommand{\biota}{\boldsymbol{\iota}}
\newcommand{\bkappa}{\boldsymbol{\kappa}}
\newcommand{\blambda}{\boldsymbol{\lambda}}
\newcommand{\bmu}{\boldsymbol{\mu}}
\newcommand{\bnu}{\boldsymbol{\nu}}
\newcommand{\bxi}{\boldsymbol{\xi}}
\newcommand{\bpi}{\boldsymbol{\pi}}
\newcommand{\brho}{\boldsymbol{\rho}}
\newcommand{\bsigma}{\boldsymbol{\sigma}}
\newcommand{\btau}{\boldsymbol{\tau}}
\newcommand{\bupsilon}{\boldsymbol{\upsilon}}
\newcommand{\bphi}{\boldsymbol{\phi}}
\newcommand{\bchi}{\boldsymbol{\chi}}
\newcommand{\bpsi}{\boldsymbol{\psi}}
\newcommand{\bomega}{\boldsymbol{\omega}}
\newcommand{\bvarepsilon}{\boldsymbol{\varepsilon}}
\newcommand{\bvartheta}{\boldsymbol{\vartheta}}
\newcommand{\bvaromega}{\boldsymbol{\varomega}}
\newcommand{\bvarrho}{\boldsymbol{\varrho}}
\newcommand{\bvarzeta}{\boldsymbol{\varsigma}} %NB really sigma
\newcommand{\bvarsigma}{\boldsymbol{\varsigma}}
\newcommand{\bvarphi}{\boldsymbol{\varphi}}
%
% Bold upright capital Greek letters
%
\newcommand{\bGamma}{\boldsymbol{\Gamma}}
\newcommand{\bDelta}{\boldsymbol{\Delta}}
\newcommand{\bTheta}{\boldsymbol{\Theta}}
\newcommand{\bLambda}{\boldsymbol{\Lambda}}
\newcommand{\bXi}{\boldsymbol{\Xi}}
\newcommand{\bPi}{\boldsymbol{\Pi}}
\newcommand{\bSigma}{\boldsymbol{\Sigma}}
\newcommand{\bUpsilon}{\boldsymbol{\Upsilon}}
\newcommand{\bPhi}{\boldsymbol{\Phi}}
\newcommand{\bPsi}{\boldsymbol{\Psi}}
\newcommand{\bOmega}{\boldsymbol{\Omega}}
%
% Bold versions of miscellaneous symbols
%
\newcommand{\bpartial}{\boldsymbol{\partial}}
\newcommand{\bell}{\boldsymbol{\ell}}
\newcommand{\bimath}{\boldsymbol{\imath}}
\newcommand{\bjmath}{\boldsymbol{\jmath}}
\newcommand{\binfty}{\boldsymbol{\infty}}
\newcommand{\bnabla}{\boldsymbol{\nabla}}
\newcommand{\bdot}{\boldsymbol{\cdot}}
%
% Symbols for caption
%
\renewcommand{\opensquare}{\mbox{$\square$}}
\renewcommand{\opentriangle}{\mbox{$\vartriangle$}}
\renewcommand{\opentriangledown}{\mbox{$\triangledown$}}
\renewcommand{\opendiamond}{\mbox{$\lozenge$}}
\renewcommand{\fullsquare}{\mbox{$\blacksquare$}}
\newcommand{\fulldiamond}{\mbox{$\blacklozenge$}}
\newcommand{\fullstar}{\mbox{$\bigstar$}}
\newcommand{\fulltriangle}{\mbox{$\blacktriangle$}}
\newcommand{\fulltriangledown}{\mbox{$\blacktriangledown$}}
\endinput
%%
%% End of file `iopams.sty'.
%%
%% This is file `jpconf11.clo'
%%
%% This file is distributed in the hope that it will be useful,
%% but WITHOUT ANY WARRANTY; without even the implied warranty of
%% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
%%
%% \CharacterTable
%% {Upper-case \A\B\C\D\E\F\G\H\I\J\K\L\M\N\O\P\Q\R\S\T\U\V\W\X\Y\Z
%% Lower-case \a\b\c\d\e\f\g\h\i\j\k\l\m\n\o\p\q\r\s\t\u\v\w\x\y\z
%% Digits \0\1\2\3\4\5\6\7\8\9
%% Exclamation \! Double quote \" Hash (number) \#
%% Dollar \$ Percent \% Ampersand \&
%% Acute accent \' Left paren \( Right paren \)
%% Asterisk \* Plus \+ Comma \,
%% Minus \- Point \. Solidus \/
%% Colon \: Semicolon \; Less than \<
%% Equals \= Greater than \> Question mark \?
%% Commercial at \@ Left bracket \[ Backslash \\
%% Right bracket \] Circumflex \^ Underscore \_
%% Grave accent \` Left brace \{ Vertical bar \|
%% Right brace \} Tilde \~}
\ProvidesFile{jpconf11.clo}[2005/05/04 v1.0 LaTeX2e file (size option)]
\renewcommand\normalsize{%
\@setfontsize\normalsize\@xipt{13}%
\abovedisplayskip 12\p@ \@plus3\p@ \@minus7\p@
\abovedisplayshortskip \z@ \@plus3\p@
\belowdisplayshortskip 6.5\p@ \@plus3.5\p@ \@minus3\p@
\belowdisplayskip \abovedisplayskip
\let\@listi\@listI}
\normalsize
\newcommand\small{%
\@setfontsize\small\@xpt{12}%
\abovedisplayskip 11\p@ \@plus3\p@ \@minus6\p@
\abovedisplayshortskip \z@ \@plus3\p@
\belowdisplayshortskip 6.5\p@ \@plus3.5\p@ \@minus3\p@
\def\@listi{\leftmargin\leftmargini
\topsep 9\p@ \@plus3\p@ \@minus5\p@
\parsep 4.5\p@ \@plus2\p@ \@minus\p@
\itemsep \parsep}%
\belowdisplayskip \abovedisplayskip}
\newcommand\footnotesize{%
% \@setfontsize\footnotesize\@xpt\@xiipt
\@setfontsize\footnotesize\@ixpt{11}%
\abovedisplayskip 10\p@ \@plus2\p@ \@minus5\p@
\abovedisplayshortskip \z@ \@plus3\p@
\belowdisplayshortskip 6\p@ \@plus3\p@ \@minus3\p@
\def\@listi{\leftmargin\leftmargini
\topsep 6\p@ \@plus2\p@ \@minus2\p@
\parsep 3\p@ \@plus2\p@ \@minus\p@
\itemsep \parsep}%
\belowdisplayskip \abovedisplayskip
}
\newcommand\scriptsize{\@setfontsize\scriptsize\@viiipt{9.5}}
\newcommand\tiny{\@setfontsize\tiny\@vipt\@viipt}
\newcommand\large{\@setfontsize\large\@xivpt{18}}
\newcommand\Large{\@setfontsize\Large\@xviipt{22}}
\newcommand\LARGE{\@setfontsize\LARGE\@xxpt{25}}
\newcommand\huge{\@setfontsize\huge\@xxvpt{30}}
\let\Huge=\huge
\if@twocolumn
\setlength\parindent{14\p@}
\else
\setlength\parindent{18\p@}
\fi
\if@letterpaper%
%\input{letmarg.tex}%
\setlength{\hoffset}{0mm}
\setlength{\marginparsep}{0mm}
\setlength{\marginparwidth}{0mm}
\setlength{\textwidth}{160mm}
\setlength{\oddsidemargin}{-0.4mm}
\setlength{\evensidemargin}{-0.4mm}
\setlength{\voffset}{0mm}
\setlength{\headheight}{8mm}
\setlength{\headsep}{5mm}
\setlength{\footskip}{0mm}
\setlength{\textheight}{230mm}
\setlength{\topmargin}{1.6mm}
\else
%\input{a4marg.tex}%
\setlength{\hoffset}{0mm}
\setlength{\marginparsep}{0mm}
\setlength{\marginparwidth}{0mm}
\setlength{\textwidth}{160mm}
\setlength{\oddsidemargin}{-0.4mm}
\setlength{\evensidemargin}{-0.4mm}
\setlength{\voffset}{0mm}
\setlength{\headheight}{8mm}
\setlength{\headsep}{5mm}
\setlength{\footskip}{0mm}
\setlength{\textheight}{230mm}
\setlength{\topmargin}{1.6mm}
\fi
\setlength\maxdepth{.5\topskip}
\setlength\@maxdepth\maxdepth
\setlength\footnotesep{8.4\p@}
\setlength{\skip\footins} {10.8\p@ \@plus 4\p@ \@minus 2\p@}
\setlength\floatsep {14\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\textfloatsep {24\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\intextsep {16\p@ \@plus 4\p@ \@minus 4\p@}
\setlength\dblfloatsep {16\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\dbltextfloatsep{24\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\@fptop{0\p@}
\setlength\@fpsep{10\p@ \@plus 1fil}
\setlength\@fpbot{0\p@}
\setlength\@dblfptop{0\p@}
\setlength\@dblfpsep{10\p@ \@plus 1fil}
\setlength\@dblfpbot{0\p@}
\setlength\partopsep{3\p@ \@plus 2\p@ \@minus 2\p@}
\def\@listI{\leftmargin\leftmargini
\parsep=\z@
\topsep=6\p@ \@plus3\p@ \@minus3\p@
\itemsep=3\p@ \@plus2\p@ \@minus1\p@}
\let\@listi\@listI
\@listi
\def\@listii {\leftmargin\leftmarginii
\labelwidth\leftmarginii
\advance\labelwidth-\labelsep
\topsep=3\p@ \@plus2\p@ \@minus\p@
\parsep=\z@
\itemsep=\parsep}
\def\@listiii{\leftmargin\leftmarginiii
\labelwidth\leftmarginiii
\advance\labelwidth-\labelsep
\topsep=\z@
\parsep=\z@
\partopsep=\z@
\itemsep=\z@}
\def\@listiv {\leftmargin\leftmarginiv
\labelwidth\leftmarginiv
\advance\labelwidth-\labelsep}
\def\@listv{\leftmargin\leftmarginv
\labelwidth\leftmarginv
\advance\labelwidth-\labelsep}
\def\@listvi {\leftmargin\leftmarginvi
\labelwidth\leftmarginvi
\advance\labelwidth-\labelsep}
\endinput
%%
%% End of file `iopart12.clo'.
contributions/ds_cloud_c/catc_monitoring.png (image added, 31.7 KiB)
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{Cloud@CNAF Management and Evolution}
\author{C. Duma$^1$, A. Costantini$^1$, D. Michelotto$^1$ and D. Salomoni$^1$}
\address{$^1$INFN Division CNAF, Bologna, Italy}
\ead{ds@cnaf.infn.it}
\begin{abstract}
Cloud@CNAF is the cloud infrastructure hosted at CNAF, based on open source solutions and aiming
to serve the different use cases present at CNAF. The infrastructure is the result of
the collaboration of a transversal group of people from all CNAF
functional units: networking, storage, farming, national services and distributed systems.
If 2016 was, for the Cloud@CNAF IaaS (Infrastructure as a Service) based on OpenStack,
a period of consolidation and improvement, 2017 was a year of consolidation and
operation that ended with an extreme event: the flooding of the data center, which occurred when an
aqueduct pipe located in the street near CNAF broke. This event caused the
shutdown of the entire data center, including the Cloud@CNAF infrastructure. This paper
presents the activities carried out throughout 2018 to ensure the functioning
of the cloud infrastructure, which saw its migration from CNAF to INFN-Ferrara:
from the re-design of the entire infrastructure to cope with the limited availability of
space and weight imposed by the new location, to the physical migration of the
racks and the remote management and operation of the infrastructure, in order to continue
to provide high-quality services for our users and communities.
\end{abstract}
\section{Introduction}
The main goal of the Cloud@CNAF project \cite{catc} is to provide a production-quality
cloud infrastructure for CNAF internal activities as well as for national and
international projects hosted at CNAF:
\begin{itemize}
\item Internal activities
\begin{itemize}
\item Provisioning VMs for CNAF departments and staff members
\item Tutorials and courses
\end{itemize}
\item National and international projects
\begin{itemize}
\item Providing VMs for experiments hosted at CNAF, like CMS, ATLAS, EEE and FAZIA
\item Providing testbeds for testing the services developed by projects like INDIGO-DataCloud, eXtreme-DataCloud and DEEP-HybridDataCloud
\end{itemize}
\end{itemize}
The infrastructure made available is based on OpenStack \cite{openstack}, version Mitaka, with all the
services deployed in a High-Availability (HA) setup or in a clustered manner (e.g. for the databases used).
During 2016 the infrastructure was enhanced by adding new compute and network resources, and its operation was improved and guaranteed by
adding monitoring, improving the support and automating the maintenance activities.
Thanks to these enhancements, Cloud@CNAF was able to offer highly reliable services to the users and communities who rely on the infrastructure.
At the end of 2017, early in the morning of November 9th, an aqueduct pipe located in the street near CNAF broke, as documented in Ref. \cite{flood}.
As a result, a river of water and mud flowed towards the Tier1 data center. The level of the water did not exceed the
safety threshold of the waterproof doors but, due to the porosity of the external walls and the floor, it found a way
into the data center. Both electric lines failed at about 7:10 AM CET. Access to the data center was possible only
in the afternoon, after all the water had been pumped out.
As a result, the entire Tier1 data center went down, including the Cloud@CNAF infrastructure.
\section{The resource migration}
A few weeks after the flooding, it was decided to move the Cloud@CNAF core services to a different location
in order to recover the services provided to communities and experiments.
Thanks to a strong relationship, both the University of Parma/INFN-Parma and INFN-Ferrara proposed to host our
core machinery and the related services.
Due to the geographical proximity and the presence of a GARR Point of Presence (PoP), the
Cloud@CNAF core machinery was moved to INFN-Ferrara.
Unfortunately, we were not able to move all the Cloud@CNAF resources, due to the limited power and weight availability in the new location.
For the above-mentioned reason, a re-design of the infrastructure was needed.
As a first step, the services and the related machinery to be moved to the new, temporary location were selected so as to
fit the maximum power consumption and weight estimated for each of the two rooms devoted to hosting the Cloud@CNAF services (see Table \ref{table:1} for details).
\begin{table} [ht]
\centering
\begin{tabular}{ l|c|c|c||c||c| }
\cline{2-6}
& \multicolumn{3}{c||}{Room1} & Room2 & Tot \\
\cline{2-5}
& Rack1 & Rack2 & Tot & Rack3 & \\
\hline
Power consumption (kW) & 8.88 & 4.91 & 13.79 (15) & 5.8 (7) & 19.59\\
Weight (kg) & 201 & 151 & 352 (400 kg/m$^2$) & 92 (400 kg/m$^2$) & 444 \\
Occupancy (U) & 9 & 12 & 21 & 10 & 31 \\
\hline
\end{tabular}
\caption{Power consumption, weight and occupancy for each rack. In parentheses, the maximum value allowed for the room.}
\label{table:1}
\end{table}
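As a simple cross-check of this selection, the figures of Table \ref{table:1} can be summed per room and compared with the corresponding limits; the short Python sketch below follows the table's convention of comparing the summed rack weight directly with the 400 kg/m$^2$ floor-load figure.
\begin{verbatim}
# Cross-check of the rack placement against the room limits of Table 1.
racks = {"Rack1": (8.88, 201), "Rack2": (4.91, 151), "Rack3": (5.8, 92)}
rooms = {"Room1": (["Rack1", "Rack2"], 15.0, 400),
         "Room2": (["Rack3"], 7.0, 400)}

for room, (names, max_kw, max_kg) in rooms.items():
    kw = sum(racks[n][0] for n in names)
    kg = sum(racks[n][1] for n in names)
    ok = kw <= max_kw and kg <= max_kg
    print(f"{room}: {kw:.2f}/{max_kw} kW, {kg}/{max_kg} kg ->",
          "OK" if ok else "OVER")
\end{verbatim}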
\section{Re-design the new infrastructure}
Due to the limitations described in Table~\ref{table:1}, only three racks have been used to host the Cloud@CNAF core services.
Among these three racks, the first hosts the storage resources, the second hosts the OpenStack controller, the network
services and the GPFS cluster, and the third hosts the oVirt and OpenStack compute nodes, together with
some other ancillary services (see Table \ref{table:2} for details).
Rack1 and Rack2 have been connected at 2x40 Gbps through our Brocade VDX switches, and Rack1 and Rack3 have been connected
at 2x10 Gbps through PowerConnect switches.
\begin{table} [ht]
\centering
\begin{tabular}{ c|l|l|l| }
\cline{2-4}
& \multicolumn{1}{|c|}{Rack1} & \multicolumn{1}{|c|}{Rack2} & \multicolumn{1}{|c|}{Rack3}\\
\hline
& VDX & VDX & PowerConnect x2 \\
Resources & EqualLogic & Cloud controllers & Ovirt nodes\\
and & Powervault & Cloud networks & Compute nodes\\
Services & & Gridstore & DBs nodes\\
& & Other services & Cloud UI\\
\hline
\end{tabular}
\caption{List of resources and services hosted per Rack}
\label{table:2}
\end{table}
Moreover, Rack1 is connected to the GARR PoP with a 1x1 Gbps fiber connection to guarantee external connectivity.
A complete overview of the new infrastructure and of the related resource locations is shown in Figure \ref{new_c_at_c}.
As depicted in Figure \ref{new_c_at_c}, and taking into account the limitations described in Table \ref{table:1}, the power consumption
has been limited to 13.79 kW in Room1 (limit 15 kW) and to 5.8 kW in Room2 (limit 7 kW).
The whole migration process (from the design to the reconfiguration of the new infrastructure) took just a business week,
and after that the Cloud@CNAF infrastructure and the related services were up and running, able to serve again the different projects and communities.
Thanks to the experience and documentation gathered, in June 2018 - after the Tier1 returned to its production status -
Cloud@CNAF was migrated back in less than three business days.
\section{Cloud@CNAF evolution}
Starting from the activities carried out in 2016 to improve the infrastructure \cite{catc}, in
2018 (after the return of the core infrastructure services following the flooding)
the growth of the computing resources, in terms of both quality and quantity, continued in order to enhance the
services and the performance offered to users.
Thanks to this activity, during the last year Cloud@CNAF saw a growth in the number of users and of use cases
implemented on the infrastructure: the number of projects increased to 87, using approximately
1035 virtual CPUs and 1.766 TB of RAM, with a total of 267 virtual machines (see Figure \ref{catc_monitor} for more details).
Among others, some of the projects that used the cloud infrastructure are:
\begin{itemize}
\item HARMONY - Proof-of-concept under the TTLab coordination, is a project aimed at finding resourceful medicines offensive against neoplasms in hematology,
\item EEE - Extreme Energy Events - Science inside Schools (EEE), is a special research activity about the origin of cosmic rays carried out with the essential contribution of students and teachers of high schools,
\item CHNET-DHLab - Cultural heritage network of INFN for the development of virtual laboratories services,
\item USER Support - for the development of experiments dashboard and the hosting of the production instance of the dashboard, displayed on the monitor present on the CNAF hallway,
\item EOSC-hub DODAS - Thematic service for the elastic extension of computing centre batch resources on external clouds,
\item Services devoted to EU projects like DEEP-HDC \cite{deep}, XDC \cite{xdc} and EOSC-pilot \cite{pilot}.
\end{itemize}
\section{Conclusions and future work}
Due to damage to an aqueduct pipe located in the street near CNAF, a river of water and mud flowed towards the Tier1 data center, causing the
shutdown of the entire data center. For this reason, the services and the related resources hosted by Cloud@CNAF went down.
To cope with this problem, the decision was taken to temporarily migrate the core resources and services of Cloud@CNAF to INFN-Ferrara.
In order to do this, a complete re-design of the entire infrastructure was needed to tackle the limitations in terms of power consumption and
weight imposed by the new location.
The joint effort and expertise of all the CNAF people and the INFN-Ferrara colleagues made it possible to re-design, migrate and make operational
the Cloud@CNAF infrastructure and the related hosted services in less than a business week.
Thanks to the experience and the documentation gathered, in June 2018 - after the Tier1 returned to its production status - Cloud@CNAF
was migrated back in less than three business days.
Even with the above-described problems, the Cloud@CNAF infrastructure has been maintained and evolved, giving users the possibility
to carry on their activities and obtain their desired results.
For the next year new and challenging activities are planned, in particular the migration to the OpenStack Rocky version and the deployment of a new architecture distributed between
different functional units, the Data Center and SDDS.
\begin{figure}[h]
\centering
\includegraphics[width=15cm,clip]{infn-fe23.png}
\caption{The new architecture of Cloud@CNAF developed to cope with the limitations at INFN-Ferrara.}
\label{new_c_at_c}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=12cm,clip]{catc_monitoring.png}
\caption{Cloud@CNAF monitoring and status}
\label{catc_monitor}
\end{figure}
\section{References}
\begin{thebibliography}{}
\bibitem{catc}
Cloud@CNAF - maintenance and operation, C. Duma, R. Bucchi, A. Costantini, D. Michelotto, M. Panella, D. Salomoni and G. Zizzi, CNAF Annual Report 2016, https://www.cnaf.infn.it/Annual-Report/annual-report-2016.pdf
\bibitem{openstack}
Web site: https://www.openstack.org/
\bibitem{flood}
The flood, L. dell’Agnello, CNAF Annual Report 2017, https://www.cnaf.infn.it/wp-content/uploads/2018/09/cnaf-annual-report-2017.pdf
\bibitem{deep}
Web site: https://deep-hybrid-datacloud.eu/
\bibitem{xdc}
Web site: www.extreme-datacloud.eu
\bibitem{pilot}
Web site: https://eoscpilot.eu
\end{thebibliography}
\end{document}
contributions/ds_cloud_c/infn-fe23.png (image added, 82.5 KiB)
File added
%%
%% This is file `jpconf11.clo'
%%
%% This file is distributed in the hope that it will be useful,
%% but WITHOUT ANY WARRANTY; without even the implied warranty of
%% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
%%
%% \CharacterTable
%% {Upper-case \A\B\C\D\E\F\G\H\I\J\K\L\M\N\O\P\Q\R\S\T\U\V\W\X\Y\Z
%% Lower-case \a\b\c\d\e\f\g\h\i\j\k\l\m\n\o\p\q\r\s\t\u\v\w\x\y\z
%% Digits \0\1\2\3\4\5\6\7\8\9
%% Exclamation \! Double quote \" Hash (number) \#
%% Dollar \$ Percent \% Ampersand \&
%% Acute accent \' Left paren \( Right paren \)
%% Asterisk \* Plus \+ Comma \,
%% Minus \- Point \. Solidus \/
%% Colon \: Semicolon \; Less than \<
%% Equals \= Greater than \> Question mark \?
%% Commercial at \@ Left bracket \[ Backslash \\
%% Right bracket \] Circumflex \^ Underscore \_
%% Grave accent \` Left brace \{ Vertical bar \|
%% Right brace \} Tilde \~}
\ProvidesFile{jpconf11.clo}[2005/05/04 v1.0 LaTeX2e file (size option)]
\renewcommand\normalsize{%
\@setfontsize\normalsize\@xipt{13}%
\abovedisplayskip 12\p@ \@plus3\p@ \@minus7\p@
\abovedisplayshortskip \z@ \@plus3\p@
\belowdisplayshortskip 6.5\p@ \@plus3.5\p@ \@minus3\p@
\belowdisplayskip \abovedisplayskip
\let\@listi\@listI}
\normalsize
\newcommand\small{%
\@setfontsize\small\@xpt{12}%
\abovedisplayskip 11\p@ \@plus3\p@ \@minus6\p@
\abovedisplayshortskip \z@ \@plus3\p@
\belowdisplayshortskip 6.5\p@ \@plus3.5\p@ \@minus3\p@
\def\@listi{\leftmargin\leftmargini
\topsep 9\p@ \@plus3\p@ \@minus5\p@
\parsep 4.5\p@ \@plus2\p@ \@minus\p@
\itemsep \parsep}%
\belowdisplayskip \abovedisplayskip}
\newcommand\footnotesize{%
% \@setfontsize\footnotesize\@xpt\@xiipt
\@setfontsize\footnotesize\@ixpt{11}%
\abovedisplayskip 10\p@ \@plus2\p@ \@minus5\p@
\abovedisplayshortskip \z@ \@plus3\p@
\belowdisplayshortskip 6\p@ \@plus3\p@ \@minus3\p@
\def\@listi{\leftmargin\leftmargini
\topsep 6\p@ \@plus2\p@ \@minus2\p@
\parsep 3\p@ \@plus2\p@ \@minus\p@
\itemsep \parsep}%
\belowdisplayskip \abovedisplayskip
}
\newcommand\scriptsize{\@setfontsize\scriptsize\@viiipt{9.5}}
\newcommand\tiny{\@setfontsize\tiny\@vipt\@viipt}
\newcommand\large{\@setfontsize\large\@xivpt{18}}
\newcommand\Large{\@setfontsize\Large\@xviipt{22}}
\newcommand\LARGE{\@setfontsize\LARGE\@xxpt{25}}
\newcommand\huge{\@setfontsize\huge\@xxvpt{30}}
\let\Huge=\huge
\if@twocolumn
\setlength\parindent{14\p@}
\else
\setlength\parindent{18\p@}
\fi
\if@letterpaper%
%\input{letmarg.tex}%
\setlength{\hoffset}{0mm}
\setlength{\marginparsep}{0mm}
\setlength{\marginparwidth}{0mm}
\setlength{\textwidth}{160mm}
\setlength{\oddsidemargin}{-0.4mm}
\setlength{\evensidemargin}{-0.4mm}
\setlength{\voffset}{0mm}
\setlength{\headheight}{8mm}
\setlength{\headsep}{5mm}
\setlength{\footskip}{0mm}
\setlength{\textheight}{230mm}
\setlength{\topmargin}{1.6mm}
\else
%\input{a4marg.tex}%
\setlength{\hoffset}{0mm}
\setlength{\marginparsep}{0mm}
\setlength{\marginparwidth}{0mm}
\setlength{\textwidth}{160mm}
\setlength{\oddsidemargin}{-0.4mm}
\setlength{\evensidemargin}{-0.4mm}
\setlength{\voffset}{0mm}
\setlength{\headheight}{8mm}
\setlength{\headsep}{5mm}
\setlength{\footskip}{0mm}
\setlength{\textheight}{230mm}
\setlength{\topmargin}{1.6mm}
\fi
\setlength\maxdepth{.5\topskip}
\setlength\@maxdepth\maxdepth
\setlength\footnotesep{8.4\p@}
\setlength{\skip\footins} {10.8\p@ \@plus 4\p@ \@minus 2\p@}
\setlength\floatsep {14\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\textfloatsep {24\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\intextsep {16\p@ \@plus 4\p@ \@minus 4\p@}
\setlength\dblfloatsep {16\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\dbltextfloatsep{24\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\@fptop{0\p@}
\setlength\@fpsep{10\p@ \@plus 1fil}
\setlength\@fpbot{0\p@}
\setlength\@dblfptop{0\p@}
\setlength\@dblfpsep{10\p@ \@plus 1fil}
\setlength\@dblfpbot{0\p@}
\setlength\partopsep{3\p@ \@plus 2\p@ \@minus 2\p@}
\def\@listI{\leftmargin\leftmargini
\parsep=\z@
\topsep=6\p@ \@plus3\p@ \@minus3\p@
\itemsep=3\p@ \@plus2\p@ \@minus1\p@}
\let\@listi\@listI
\@listi
\def\@listii {\leftmargin\leftmarginii
\labelwidth\leftmarginii
\advance\labelwidth-\labelsep
\topsep=3\p@ \@plus2\p@ \@minus\p@
\parsep=\z@
\itemsep=\parsep}
\def\@listiii{\leftmargin\leftmarginiii
\labelwidth\leftmarginiii
\advance\labelwidth-\labelsep
\topsep=\z@
\parsep=\z@
\partopsep=\z@
\itemsep=\z@}
\def\@listiv {\leftmargin\leftmarginiv
\labelwidth\leftmarginiv
\advance\labelwidth-\labelsep}
\def\@listv{\leftmargin\leftmarginv
\labelwidth\leftmarginv
\advance\labelwidth-\labelsep}
\def\@listvi {\leftmargin\leftmarginvi
\labelwidth\leftmarginvi
\advance\labelwidth-\labelsep}
\endinput
%%
%% End of file `jpconf11.clo'.
contributions/ds_devops_pe/CI-tools.png (binary image, 33.2 KiB)
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{Common software lifecycle management in external projects}
\author{C. Duma$^1$, A. Costantini$^1$, D. Michelotto$^1$,
P. Orviz$^2$, D. Salomoni$^1$}
\address{$^1$ INFN-CNAF, Bologna, IT}
\address{$^2$ IFCA, Consejo Superior de Investigaciones Cientificas-CSIC, Santander, SP}
\ead{ds@cnaf.infn.it}
\begin{abstract}
This paper describes the common procedures defined and adopted in the field of Software Lifecycle Management and
Continuous Integration and Delivery to manage new releases, as a first step to ensure
the quality of the provided solutions, services and components, while strengthening the collaboration between
the developers and operations teams of different external projects.
In particular, the paper analyses the common software lifecycle management procedure developed during the
INDIGO-DataCloud project and recently improved and adopted in two EC-funded
projects: eXtreme DataCloud and DEEP Hybrid DataCloud.
\end{abstract}
\section{Introduction}
The eXtreme-DataCloud (XDC) \cite{xdc} and DEEP-HybridDataCloud (DEEP-HDC) \cite{deep} projects aim at addressing requirements from a wide range of User Communities belonging to several disciplines and at testing the developed software solutions against real-life use cases.
The software solutions delivered by both projects are released as Open Source and are based on already existing components (TRL8+) that the projects will enrich with new functionalities and plugins.
The use of standards and protocols widely available in state-of-the-art distributed computing ecosystems may not be enough to guarantee that the released components can be easily plugged into the European e-Infrastructures and, more generally, into cloud-based computing environments; the definition and implementation of the entire Software
Lifecycle Management process therefore becomes mandatory in such projects.
As the software components envisaged by both projects have a history of development
in previous successful European projects (such as the INDIGO-DataCloud \cite{indigo} project) implementing different types of modern software development techniques,
the natural choice was to complement the previous, individual, Continuous Development and Integration services
with Continuous Testing, Deployment and Monitoring as part of a DevOps approach:
\begin{itemize}
\item Continuous Testing - the activity of continuously testing the developed software in order to identify issues in
the early phases of the development. Automation tools are used for Continuous Testing: they enable
the QA teams to test multiple code bases in parallel and to ensure that there are no flaws in the functionality. In
this activity the use of Docker containers to spin up testing environments on the fly is also a preferred choice.
Once the code is tested, it is continuously integrated with the existing code.
\item Continuous Deployment - the activity of continuously updating the production environment once new code is made
available, ensuring that the code is correctly deployed on all the servers. If new functionality
or a new feature is introduced, one should be ready to add resources according to the needs; it is therefore also
the responsibility of the system administrators to scale up the servers. Since new code is deployed on a continuous basis,
automation tools play an important role in executing tasks quickly and frequently. Puppet, Chef, SaltStack and
Ansible are some popular tools that can be used at this step. This activity also covers Configuration
Management, the process of standardising the resource configurations and enforcing their state across
infrastructures in an automated manner. The extensive use of containerisation techniques provides an
entire runtime environment - application/service, all its dependencies, libraries and binaries, and the configuration
files needed to run it - bundled in one package, the container. The project task in charge of this activity (T3.1) also manages the scalability testing, being
able to handle the configurations and perform the deployments of any number of nodes automatically.
\item Continuous Monitoring - a crucial activity in the DevOps model of managing the software lifecycle, which is
aimed at improving the quality of the software by monitoring its performance. This practice involves the participation
of the Operations team, who monitor the users' activity to discover bugs or improper behaviour of the system.
This can also be achieved by making use of dedicated monitoring tools, which continuously monitor the application
performance and highlight issues. Some popular tools useful in this step are Nagios \cite{nagios}, NewRelic \cite{newrelic}
and Sensu \cite{sensu}. These tools
help to monitor the health of the system proactively, improve productivity and increase the reliability of the
systems, reducing IT support costs. Any major issue found can be reported to the Development team and
fixed in the continuous development phase.
\end{itemize}
These DevOps activities are carried out in a continuous loop until the desired product quality is achieved.
Automation plays a central role in all the activities in order to achieve complete release automation, moving the
software from the developers through build and quality assurance checks, to deployment on integration testbeds
and finally to the production sites that are part of the Pilot Infrastructures.
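
As an illustration of the Continuous Testing activity described above, the following minimal Python sketch runs a project's test suite inside a disposable Docker container, as a CI job might do on every change. The base image, mount point and test command are illustrative assumptions and do not refer to the actual XDC or DEEP-HDC pipelines.
\begin{verbatim}
"""Minimal sketch of a CI test step: run a test suite in a throw-away
Docker container. Image name and commands are illustrative assumptions."""
import subprocess
import sys
from pathlib import Path


def run_tests_in_container(src_dir: Path, image: str = "python:3.10") -> int:
    """Bind-mount the source tree into a disposable container and run pytest."""
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{src_dir.resolve()}:/src",  # code under test
        "-w", "/src",                       # work from the mounted directory
        image,
        "sh", "-c", "pip install -q pytest && pytest -q",
    ]
    return subprocess.call(cmd)             # 0 means all tests passed


if __name__ == "__main__":
    sys.exit(run_tests_in_container(Path(".")))
\end{verbatim}
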
In the following sections, an overview of the recently defined best practices adopted in both the XDC and DEEP-HDC projects for Software Lifecycle Management and Continuous Integration and Delivery is presented.
\section{Software Quality Assurance and Control}
Software Quality Assurance (SQA) covers the set of software engineering processes
that foster the quality and reliability of the software produced. The activities involved in this task are mainly focused on:
\begin{itemize}
\item Defining and maintaining a common SQA procedure to guide the software development efforts throughout the whole software life cycle.
\item Formulating a representative set of metrics for the software quality control, in order to follow up on the behaviour of the
software produced and to detect and fix deviations early.
\item Enabling a continuous integration process, eventually complemented by a continuous delivery scenario, promoting
the adoption of automation for the testing, building, deployment and release activities.
\end{itemize}
In order to define the SQA process, the specific context of the software developed in the project has to be taken into account.
The following particularities characterize the corresponding development teams:
\begin{itemize}
\item Heterogeneous developer profiles: different backgrounds and different degrees of expertise.
\item Geographically distributed teams.
\item Different home institutes, which implies different cultures, development technologies, processes and methods.
\item High turnover, due to the limited duration of the projects in which the grid software has been developed so far.
\item More focus on development activities, with limited resources, if any, available for quality assurance activities.
\end{itemize}
The Quality Assurance process has to take all the factors described above into account in order to define the Software Quality Assurance Plan (SQAP).
A set of ``QA Policies'' also has to be defined to guide the development teams towards uniform practices and processes.
These QA Policies define the main activities of the software lifecycle, such as releasing, tracking, packaging and documenting
the software produced by the project. This is done in collaboration with the development teams, making sure the policies are flexible
enough to co-exist as much as possible with the current development methods. The SQA activities have to be monitored and controlled
to track their evolution and to put in place corrective countermeasures in case of deviations.
Moreover, a quality model has to be defined
to help in evaluating the quality of the software products and processes and in setting quality goals for them.
The Quality Model follows the ISO/IEC 25010:2011 ``Systems and software engineering - Systems and software
Quality Requirements and Evaluation (SQuaRE) - System and software quality models'' standard \cite{R18} to identify a set of characteristics (criteria)
that need to be present in software products and processes in order to meet the quality requirements.
Those SQA criteria \cite{R22} aim to:
\begin{itemize}
\item Enhance the visibility, accessibility and distribution of the produced source code through alignment with the Open Source Definition \cite{R23}.
\item Promote code style standards to deliver good quality source code emphasizing its readability and reusability.
\item Improve the quality and reliability of software by covering different testing methods at development and pre-production stages.
\item Propose a change-based scenario where every update to the source code is continuously evaluated by the automated execution of the relevant tests (a minimal sketch of such a quality gate is given after this list).
\item Adopt an agile approach to effectively produce timely and audience-specific documentation.
\item Lower the barriers to software adoption by delivering quality documentation and by relying on automated deployment solutions.
\item Encourage secure coding practices and security static analysis at the development phase while providing recommendations on external security assessment.
\end{itemize}
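
As a minimal sketch of the change-based quality gate mentioned above, the following Python fragment runs a style check and the unit tests on every change and fails the pipeline if either check fails. The tool names (flake8, pytest) are assumptions chosen for illustration and are not mandated by the SQA criteria.
\begin{verbatim}
"""Illustrative quality gate: run a style check and the unit tests on each
change; a non-zero exit code marks the change as rejected."""
import subprocess
import sys

CHECKS = {
    "code style": ["flake8", "."],
    "unit tests": ["pytest", "-q"],
}


def quality_gate() -> bool:
    """Run every configured check and report whether all of them passed."""
    all_passed = True
    for name, cmd in CHECKS.items():
        print(f"running {name}: {' '.join(cmd)}")
        if subprocess.call(cmd) != 0:
            print(f"{name} FAILED")
            all_passed = False
    return all_passed


if __name__ == "__main__":
    sys.exit(0 if quality_gate() else 1)
\end{verbatim}
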
\section{Software Maintenance and Support}
Regarding the software maintenance and support area of the software lifecycle management,
the main objectives that should be defined and described in the Maintenance Plan are:
\begin{itemize}
\item To increase the quality levels of the software by contributing to the implementation and automation
of the Quality Assurance (QA) and Control procedures defined by the project.
\item To boost the software delivery process, relying on automation.
\item To emphasize the communication and feedback with/from end users, in order to guarantee adequate
requirements gathering and support.
\item To guarantee the stability of services already deployed in production and the increase of their readiness
levels, where needed.
\end{itemize}
Moreover, the common practices deal with the definition of the processes and procedures related to software maintenance and
support, and with their continuous execution:
\begin{itemize}
\item Software Maintenance - software preparation \& transition from the developers to the production
repositories and to the final users.
\item Problem Management - analysis \& documentation of problems.
\item Change Management - control of code and configuration changes, and of retirement calendars.
\item Support Coordination - provisioning of adequate support for the released software.
\item Release Management - coordination of the releases and maintenance of the artifact
repositories, defining policies and release cycles.
\end{itemize}
The plan regarding the software maintenance and support management has to follow the guidelines of the
ISO/IEC 14764:2006 standard \cite{R30}, and includes a set of organizational and administrative roles to handle
maintenance implementation, change management and validation, software release, migration and retirement, support
and helpdesk activities.
Component releases are classified as major, minor, revision and emergency, based on the impact of the changes on the
component interface and behavior. Requests for Change (RfC) are managed adopting a priority-driven approach,
so that the risk of compromising the stability of the software deployed in a production environment is minimized.
The User Support activity deals, instead, with the coordination of the support provided to the users of the software components developed within the project activities and included in the main project software distributions.
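
As an illustration of the release classification introduced above, the following toy Python sketch derives the release type from a semantic-version bump, with the emergency class driven by an explicit flag since it depends on the urgency of the underlying RfC rather than on the version number. This is purely illustrative and does not reproduce the projects' actual release tooling.
\begin{verbatim}
"""Toy sketch: classify a component release from its version bump.
Purely illustrative; not the projects' actual release tooling."""


def classify_release(previous: str, new: str, emergency: bool = False) -> str:
    if emergency:
        return "emergency"
    prev = [int(x) for x in previous.split(".")]
    nxt = [int(x) for x in new.split(".")]
    if nxt[0] > prev[0]:
        return "major"     # changes affecting interface and behaviour
    if nxt[1] > prev[1]:
        return "minor"     # backwards-compatible new functionality
    return "revision"      # bug fixes only


print(classify_release("1.4.2", "2.0.0"))                   # major
print(classify_release("1.4.2", "1.4.3", emergency=True))   # emergency
\end{verbatim}
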
\section{Services for continuous integration and SQA}
To support the Software Quality Assurance, the
Continuous Integration and the software release and maintenance activities, a set of tools and services is needed.
Usually, these tools and services are provided by means of publicly available cloud services, for the following reasons:
\begin{itemize}
\item higher public visibility, in line with the project objectives for open source software;
\item a path to further development, support and exploitation beyond the end of the project;
\item a smaller effort needed inside the project to operate and manage those services.
\end{itemize}
The list of services needed is given in Table 1 with a small description for each service and the related Web link.
\begin{figure}[h!]
\centering
Table 1: Tools and services to support DevOps.
\includegraphics[width=10cm,clip]{CI-tools.png}
%\caption{The list of services.}
\label{citools}
\end{figure}
\section{Key Performance Indicators}
Defining appropriate KPIs for the maintenance, release and support activities, and monitoring them during the project lifetime,
helps to highlight the project achievements and to put in place the appropriate corrective actions in case of deviations
(a minimal monitoring sketch is given after the list below).
In principle, the KPIs should address the following impact areas and reflect the related goals:
\begin{itemize}
\item Prepare data and computing e-Infrastructures to absorb the needs of communities that push the envelope in terms of data- and compute-intensive research
\begin{itemize}
\item Goal: Extending the quality \& quantity of services provided by e-infrastructures
\end{itemize}
\item Promote new research possibilities in Europe
\begin{itemize}
\item Goal: Increasing the capacity for innovation and production of new knowledge
\end{itemize}
\end{itemize}
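
As a minimal sketch of the KPI monitoring mentioned above, the following Python fragment compares measured indicator values against their targets and flags deviations that call for corrective actions. The indicator names and target values are invented for illustration and do not correspond to the official project KPIs.
\begin{verbatim}
"""Illustrative KPI check: flag indicators below target.
Indicator names and values are invented for illustration."""

KPIS = {
    # indicator: (measured, target)
    "releases_per_quarter": (3, 2),
    "rfcs_closed_on_time_pct": (78, 90),
    "services_in_production": (5, 4),
}


def report(kpis: dict) -> list:
    """Print the status of every indicator and return those below target."""
    deviations = []
    for name, (measured, target) in kpis.items():
        status = "OK" if measured >= target else "DEVIATION"
        print(f"{name}: measured={measured} target={target} -> {status}")
        if measured < target:
            deviations.append(name)
    return deviations


if __name__ == "__main__":
    if report(KPIS):
        print("corrective actions needed for the indicators above")
\end{verbatim}
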
\section{Conclusions}
The paper describes the common procedures to be applied in the field of software lifecycle management, aimed at managing
new releases and at ensuring the quality of the solutions, services and components provided by the projects.
In particular, the paper described the best practices to adopt in order to i) foster the quality and reliability of the software produced,
ii) define the processes and procedures regarding software maintenance and support, iii) identify the services needed
to support the Software Quality Assurance, the Continuous Integration and the software release and maintenance activities,
and iv) define appropriate KPIs to monitor the project achievements.
The adoption of DevOps practices is becoming mandatory for software development projects, and
the experience gathered throughout this activity is also applicable to the development and distribution of software products coming, for example, from the user communities and from other software product activities.
\section*{Acknowledgments}
DEEP-HybridDataCloud has been funded by the European Commission H2020 research and innovation program under grant agreement RIA 777435.
eXtreme DataCloud has been funded by the European Commission H2020 research and innovation program under grant agreement RIA 777367.
\section{References}
\begin{thebibliography}{}
\bibitem{xdc}
Web site: www.extreme-datacloud.eu
\bibitem{deep}
Web site: www.deep-hybrid-datacloud.eu
\bibitem{indigo}
Web site: www.indigo-datacloud.eu
\bibitem{nagios}
Web site: https://www.nagios.org
\bibitem{newrelic}
Web site: https://newrelic.com
\bibitem{sensu}
Web site: https://sensu.io
\bibitem{R18}
ISO/IEC 25010:2011, ``Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models'': https://www.iso.org/standard/35733.html
\bibitem{R22}
A set of Common Software Quality Assurance Baseline Criteria for Research Projects, http://digital.csic.es/bitstream/10261/160086/4/CommonSQA-v2.pdf
\bibitem{R23}
The Open Source Definition, https://opensource.org/osd
\bibitem{R30}
ISO/IEC 14764:2006 standard, https://www.iso.org/standard/39064.html
\end{thebibliography}
\end{document}