contributions/dampe/figure_all.png

85.1 KiB

contributions/dampe/figure_cnaf.png

41.1 KiB

\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{todonotes}
\begin{document}
\title{DAMPE data processing and analysis at CNAF}
\author{G. Ambrosi$^1$, G. Donvito$^5$, D. F. Droz$^6$, M. Duranti$^1$, D. D'Urso$^{2,3,4}$, F. Gargano$^{5,\ast}$, G. Torralba Elipe$^{7,8}$}
\address{$^1$ INFN Sezione di Perugia, Perugia, IT}
\address{$^2$ Universit\`a di Sassari, Sassari, IT}
\address{$^3$ ASDC, Roma, IT}
\address{$^4$ INFN - Laboratori Nazionali del Sud, Catania, IT}
%\address{$^3$ Universit\`a di Perugia, I-06100 Perugia, Italy}
\address{$^5$ INFN Sezione di Bari, Bari, IT}
\address{$^6$ University of Geneva, Gen\`eve, CH}
\address{$^7$ Gran Sasso Science Institute, L'Aquila, IT}
\address{$^8$ INFN - Laboratori Nazionali del Gran Sasso, L'Aquila, IT}
\address{DAMPE experiment \url{http://dpnc.unige.ch/dampe/},
\url{http://dampe.pg.infn.it}}
\ead{* fabio.gargano@ba.infn.it}
\begin{abstract}
DAMPE (DArk Matter Particle Explorer) is one of the five satellite missions in the framework of the Strategic Pioneer Research Program in Space Science of the Chinese Academy of Sciences (CAS). DAMPE was launched on 17 December 2015 at 08:12 Beijing time into a sun-synchronous orbit at an altitude of 500 km. The satellite is equipped with a powerful space telescope for high energy gamma-ray, electron and cosmic-ray detection.
The CNAF computing center is the mirror of DAMPE data outside China and the main data center for Monte Carlo production. It also supports user data analysis for the Italian DAMPE Collaboration.
\end{abstract}
\section{Introduction}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=20pc]{dampe_layout_2.jpg}
\end{center}
\caption{\label{fig:dampe_layout} DAMPE telescope scheme: a double layer of plastic scintillator strip detector (PSD);
the silicon-tungsten tracker-converter (STK) made of 6 tracking double layers; the imaging calorimeter, about 31 radiation lengths thick, made of 14 layers of Bismuth Germanium Oxide (BGO) bars in a hodoscopic arrangement; and finally
the neutron detector (NUD) placed just below the calorimeter.}
\end{figure}
DAMPE is a space telescope for high energy cosmic-ray detection.
In Fig. \ref{fig:dampe_layout} a scheme of the DAMPE telescope is shown. At the top, the plastic scintillator strip detector (PSD) consists of one double layer of scintillating plastic strips, which serves as an anti-coincidence detector and measures the particle charge. It is followed by a silicon-tungsten tracker-converter (STK), which is made of 6 tracking layers. Each tracking layer consists of two layers of single-sided silicon strip detectors measuring the position in the two orthogonal views perpendicular to the pointing direction of the apparatus. Three layers of tungsten plates with a thickness of 1~mm are inserted in front of tracking layers 3, 4 and 5 to promote photon conversion into electron-positron pairs. The STK is followed by an imaging calorimeter about 31 radiation lengths thick, made up of 14 layers of Bismuth Germanium Oxide (BGO) bars placed in a hodoscopic arrangement. The total thickness of the BGO and the STK corresponds to about 33 radiation lengths, making it the deepest calorimeter ever used in space. Finally, in order to detect delayed neutrons resulting from hadron showers and to improve the electron/proton separation power, a neutron detector (NUD) is placed just below the calorimeter. The NUD consists of 16 boron-doped plastic scintillator plates, each 1~cm thick and 19.5 $\times$ 19.5 cm$^2$ in area, read out by a photomultiplier.
The primary scientific goal of DAMPE is to measure electrons and photons with much higher energy resolution and energy reach than achievable with existing space experiments. This will help to identify possible Dark Matter signatures, and may also advance our understanding of the origin and propagation mechanisms of high energy cosmic rays, possibly leading to new discoveries in high energy gamma-ray astronomy.
DAMPE was designed to have unprecedented sensitivity and energy reach for electrons, photons and heavier cosmic rays (protons and heavy ions). For electrons and photons, the detection range is 2 GeV-10 TeV, with an energy resolution of about 1.5\% at 100 GeV. For protons and heavy ions, the detection range is 100 GeV-100 TeV, with an energy resolution better than 40\% at 800 GeV. The geometrical factor is about 0.3 m$^2$ sr for electrons and photons, and about 0.2 m$^2$ sr for heavier cosmic rays. The angular resolution is 0.1$^{\circ}$ at 100 GeV.
\section{DAMPE Computing Model and Computing Facilities}
DAMPE being a Chinese satellite, its data are collected via the Chinese space communication system and transmitted to the China National Space Administration (CNSA) center in Beijing. From Beijing, data are then transmitted to the Purple Mountain Observatory (PMO) in Nanjing, where they are processed and reconstructed.
On the European side, the DAMPE collaboration consists of research groups from INFN and the Universities of Perugia, Lecce and Bari, and from the Department of Particle and Nuclear Physics (DPNC) at the University of Geneva in Switzerland.
\subsection{Data production}
PMO is the deputed center for DAMPE data production. Data are collected 4 times per day, each time the DAMPE satellite passes over the Chinese ground stations (almost every 6 hours). Once transferred to PMO, the binary data downloaded from the satellite are processed to produce a stream of raw data in ROOT \cite{root} format ({\it 1B} data stream, $\sim$ 7 GB/day), and a second stream that includes the orbital and slow control information ({\it 1F} data stream, $\sim$ 7 GB/day). The {\it 1B} and {\it 1F} streams are used to derive calibration files for the different subdetectors ($\sim$ 400 MB/day). Finally, data are reconstructed using the DAMPE official reconstruction code, and the so-called {\it 2A} data stream (ROOT files, $\sim$ 85 GB/day) is produced. The total data volume produced per day is $\sim$ 100 GB.
Data processing and reconstruction activities are currently supported by a computing farm consisting of more than 1400 computing cores, capable of reprocessing 3 years of DAMPE data in 1 month.
\subsection{Monte Carlo Production}
Analysis of DAMPE data requires large amounts of Monte Carlo simulation to fully understand the detector capabilities, the measurement limits and the systematics. To facilitate work-flow handling and management, and to enable effective monitoring of a large number of batch jobs in various states, a NoSQL meta-data database using MongoDB \cite{mongo} was developed, with a prototype currently running at the Physics Department of Geneva University. Database access is provided through a web frontend and command-line tools based on the flask web toolkit \cite{flask}, with a client backend of cron scripts that run on the selected computing farm.
The design and completion of this work-flow system were heavily influenced by the implementation of the Fermi-LAT data processing pipeline \cite{latpipeline}
and the DIRAC computing framework \cite{dirac}.
Once submitted, each batch job continuously reports its status to the database through outgoing HTTP requests.
To that end, computing nodes must have outgoing connectivity enabled. Each batch job implements a work-flow where input and output data transfers are being performed (and their return codes are reported) as well as the actual running of the payload of a job (which is defined in the metadata description of the job). Dependencies on productions are implemented at the framework level and jobs are only submitted once dependencies are satisfied.
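As an illustration of the status-reporting mechanism described above, the following minimal Python sketch shows how a batch job could post its state to the web frontend over HTTP. The endpoint URL, the payload fields and the job identifiers are our assumptions for illustration, not the actual DAMPE workflow API.
\begin{verbatim}
# Sketch of a batch job reporting its status to the workflow
# database via outgoing HTTP requests (hypothetical endpoint
# and payload; the real DAMPE API may differ).
import os
import requests

SERVER = "https://dampe-workflow.example.org/api"  # placeholder URL

def report_status(task_id, instance_id, status, return_code=None):
    """POST the current job state to the metadata-database frontend."""
    payload = {
        "task": task_id,
        "instance": instance_id,
        "status": status,          # e.g. "Running", "Done", "Failed"
        "return_code": return_code,
        "host": os.uname().nodename,
    }
    # Computing nodes need outgoing connectivity for this to work.
    r = requests.post(SERVER + "/jobs/status", json=payload, timeout=30)
    r.raise_for_status()

report_status("mc-allElectron-v6r0", 42, "Running")
\end{verbatim}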
Once events are generated, a secondary job is initiated which performs digitization and reconstruction of the existing MC data with a given software release, processing large amounts of MC data in bulk. This process is set up via a cron job at DPNC and occupies up to 200 slots in a computing queue with a 6-hour limit.
\subsection{Data availability}
DAMPE data are available to the Chinese Collaboration through the PMO institute, while they are made accessible to the European Collaboration by transferring them from PMO to CNAF, and from there to the DPNC.
Every time new {\it 1B}, {\it 1F} or {\it 2A} data files are available at PMO, they are copied, using the GridFTP \cite{gridftp} protocol,
into the DAMPE storage area at CNAF. From CNAF, a copy of each stream
is triggered every 4 hours to the Geneva computing farm via rsync. Dedicated batch jobs are submitted once per day to asynchronously verify the checksums of the data newly transferred from PMO to CNAF and from CNAF to Geneva.
Data verification and copy processes are managed through a dedicated User Interface (UI), \texttt{ui-dampe}.
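A minimal sketch of how such an asynchronous verification job could work, assuming the conventional approach of comparing locally computed checksums against a manifest produced at the source site (the manifest format and the choice of MD5 are our assumptions):
\begin{verbatim}
# Sketch of an asynchronous checksum-verification job: compare
# newly transferred files against a reference manifest
# (hypothetical format; the production scripts may differ).
import hashlib
import sys

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 of a file, streaming to bound memory use."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest):
    """Manifest lines: '<md5> <path>' as produced at the source."""
    bad = []
    for line in open(manifest):
        expected, path = line.split()
        if md5sum(path) != expected:
            bad.append(path)
    return bad

if __name__ == "__main__":
    failures = verify(sys.argv[1])
    sys.exit(1 if failures else 0)
\end{verbatim}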
The connection to China passes through the Orientplus \cite{orientplus} link of the G\'{e}ant Consortium \cite{geant}. The data transfer rate is currently limited by the connection of the PMO to the China Education and Research Network (CERNET), which has a maximum bandwidth of 100 Mb/s. For this reason, the PMO-CNAF copy process is used only for the daily data production.
To transfer data towards Europe in case of DAMPE data re-processing, and to share in China the Monte Carlo data generated in Europe,
a dedicated DAMPE server has been installed at the Institute of High Energy Physics (IHEP) in Beijing, connected to CERNET with a 1 Gb/s bandwidth. Data synchronization between this server and PMO is done by a manual hard-drive exchange.
To simplify user data access across Europe, an XRootD federation has been implemented: an XRootD redirector has been set up in Bari, with end-point XRootD server installations (providing the actual data) at CNAF, Bari and Geneva. These end-points provide unified read access for users in Europe.
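From the user's point of view, reading through the federation looks like any other XRootD access. A minimal PyROOT sketch follows; the redirector hostname, file path and tree name are purely illustrative:
\begin{verbatim}
# Reading a DAMPE ROOT file through the XRootD federation with
# PyROOT. Host, path and tree name below are illustrative only.
import ROOT

# The redirector transparently forwards the request to whichever
# end-point (CNAF, Bari or Geneva) actually holds the file.
f = ROOT.TFile.Open(
    "root://xrootd-dampe.example.org//dampe/2A/run1234.root")
tree = f.Get("CollectionTree")  # hypothetical tree name
print(tree.GetEntries())
f.Close()
\end{verbatim}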
\section{CNAF contribution}
The CNAF computing center is the mirror of DAMPE data outside China and the main data center for Monte Carlo production.\\
In 2018, a dedicated user interface, 300 TB of disk space and 7.8 kHS06 of CPU power were allocated to the DAMPE activities.
\section{Activities in 2018}
DAMPE activities at CNAF in 2018 have been related to data transfer, Monte Carlo production and data analysis.
\subsection{Data transfer}
The daily transfer of data from PMO to CNAF and thereafter from CNAF to Geneva was performed throughout the year.
The transfer rate has been about 100 GB per day from PMO to CNAF and more than 100 GB per day from CNAF to PMO.
The step between PMO and CNAF is performed, as described in the previous sections, via the GridFTP protocol.
Two strategies have instead been used to copy data from CNAF to PMO: via \texttt{rsync} from the UI and via \texttt{rsync} managed by batch jobs.
DAMPE data have been reprocessed three times during the year, and a dedicated copy task was carried out for the new production releases, in addition to the ordinary daily copy.
\subsection{Monte Carlo Production}
\iffalse
\begin{figure}
\begin{center}
\includegraphics[width=30pc]{CNAF_HS06_2017}
\end{center}
\caption{\label{fig:hs06_2017} CPU time consumption, in terms of HS06 (blue solid for daily computation, dashed for the average over the entire year). The red solid line corresponds to the annual pledge and the green dotted line corresponds to the job efficiency computed in a 14-day sliding window.}
\end{figure}
\fi
\begin{figure}[ht]
\begin{center}
\includegraphics[width=35pc]{figure_cnaf.png}
\end{center}
\caption{\label{fig:figure_cnaf} Status of completed simulation production at CNAF.}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=35pc]{figureCNAF2018.png}
\end{center}
\caption{\label{fig:figure_cnaf_2018} Status of completed simulation production at CNAF in 2018.}
\end{figure}
\iffalse
\begin{figure}[ht]
\begin{center}
\includegraphics[width=35pc]{figure_all.png}
\end{center}
\caption{\label{fig:figure_all} Status of completed simulation production at all DAMPE simulation sites.}
\end{figure}
\fi
As the main data center for Monte Carlo production, CNAF has been strongly involved in the Monte Carlo campaign.
At CNAF almost 300 thousand jobs have been executed, for a total of about 3 billion Monte Carlo events.
The Monte Carlo campaign is still ongoing for different particle species and different energy ranges.
In figure \ref{fig:figure_cnaf} the status of the completed simulation production at CNAF is shown.
During 2019 we will perform a new full simulation campaign with an improved version of our simulation code: this is crucial for all the forthcoming analyses.
\subsection{Data Analysis}
Most of the analysis in Europe is performed at CNAF, and its role has been crucial for all the DAMPE publications, such as the Nature paper on the direct detection of a break in the TeV cosmic-ray spectrum of electrons and positrons \cite{nature}.
\section{Acknowledgments}
The DAMPE mission was funded by the strategic priority science and technology projects in space science of the Chinese Academy of Sciences, in part by the National Key Program for Research and Development, and by the 100 Talents program of the Chinese Academy of Sciences. In Europe, the work is supported by the Italian National Institute for Nuclear Physics (INFN), the Italian University and Research Ministry (MIUR), and the University of Geneva. We extend our gratitude to INFN-T1 for their continued support, also beyond providing computing resources.
\section*{References}
\begin{thebibliography}{9}
\bibitem{root} Antcheva I. {\it et al.} 2009 {\it Computer Physics Communications} {\bf 180} 12, 2499 - 2512, \newline https://root.cern.ch/guides/reference-guide.
\bibitem{mongo} https://www.mongodb.org
\bibitem{flask} http://flask.pocoo.org
\bibitem{latpipeline} Dubois R. 2009 {\it ASP Conference Series} {\bf 411} 189
\bibitem{dirac} Tsaregorodtsev A. et al. 2008 {\it Journal of Physics: Conference Series} {\bf 119} 062048
\bibitem{gridftp} Allcock W, Bresnahan J, Kettimuthu R and Link M 2005 The Globus striped GridFTP framework and server {\it Proc. ACM/IEEE SC 2005 Conference (SC'05)} p 54, doi:10.1109/SC.2005.72 \newline http://www.globus.org/toolkit/docs/latest-stable/gridftp/
\bibitem{nature} Ambrosi G {\it et al.} (DAMPE Collaboration) 2017 Direct detection of a break in the teraelectronvolt cosmic-ray spectrum of electrons and positrons {\it Nature} {\bf 552} 63--66
\bibitem{orientplus} http://www.orientplus.eu
\bibitem{geant} http://www.geant.org
\bibitem{cernet} http://www.cernet.edu.cn/HomePage/english/index.shtml
\bibitem{asdc} http://www.asdc.asi.it
\end{thebibliography}
\end{document}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\bibliographystyle{iopart-num}
%\usepackage{citesort}
\begin{document}
\title{DarkSide program at CNAF}
\author{S. Bussino, S. M. Mari, S. Sanfilippo}
\address{INFN and Universit\`{a} degli Studi Roma 3}
\ead{bussino@fis.uniroma3.it; stefanomaria.mari@uniroma3.it; simone.sanfilippo@roma3.infn.it}
\begin{abstract}
DarkSide is a direct dark matter search program based at the underground Laboratori Nazionali del Gran Sasso
(\textit{LNGS}). It searches for the rare nuclear recoils (possibly) induced by the so-called Weakly
Interacting Massive Particles (\textit{WIMPs}), using a dual-phase Time Projection Chamber filled with liquid
argon from underground sources (\textit{LAr-TPC}). The prototype project is a LAr-TPC with a $(46.4\pm0.7)$ kg
active mass, the DarkSide-50 (\textit{DS-50}) experiment, which is installed inside a 30 t organic liquid scintillator
neutron veto, in turn installed at the center of a 1 kt water Cherenkov veto against the residual flux of cosmic
muons. DS-50 has been taking data since November 2013 with Atmospheric Argon (\textit{AAr}) and, since April 2015, has
been operated with Underground Argon (\textit{UAr}) highly depleted in radioactive ${}^{39}Ar$. The exposure of 1422
kg d of AAr has demonstrated that the operation of DS-50 for three years in a background-free condition is a solid
reality, thanks to the excellent performance of the pulse shape analysis. The first release of results from an exposure
of 2616 kg d of UAr has shown no dark matter candidate events. This is the most sensitive dark matter search performed
with an argon-based detector, corresponding to a 90\% CL upper limit on the WIMP-nucleon spin-independent cross section
of $2\times10^{-44}$ cm$^2$ for a WIMP mass of 100 GeV/$c^2$. DS-50 will be operated until the end of 2019.
Building on the experience of DS-50, the DS-20k project has been proposed, based on a new LAr-TPC of more than 20 tonnes.
\end{abstract}
\section{The DS-50 experiment}
The existence of dark matter is now established from different gravitational effects, but its nature is still a deep mystery. One possibility, motivated by other considerations in elementary particle physics, is that dark matter consists of new undiscovered elementary particles. A leading candidate explanation, motivated by supersymmetry theory (\textit{SUSY}), is that dark matter is composed of as-yet undiscovered Weakly Interacting Massive Particles (\textit{WIMPs}) formed in the early universe and subsequently gravitationally clustered in association with baryonic matter \cite{Good85}. Evidence for new particles that could constitute WIMP dark matter may come from upcoming experiments at the Large Hadron Collider (\textit{LHC}) at CERN or from sensitive astronomical instruments that detect radiation produced by WIMP-WIMP annihilations in galaxy halos. The thermal motion of the WIMPs comprising the dark matter halo surrounding the galaxy and the Earth should result in WIMP-nuclear collisions of sufficient energy to be observable by sensitive laboratory apparatus. WIMPs could in principle be detected in terrestrial experiments through their collisions with ordinary nuclei, giving observable low-energy ($<$100 keV) nuclear recoils. The predicted low collision rates require ultra-low background detectors with large (0.1-10 ton) target masses, located in deep underground sites to eliminate neutron background from cosmic ray muons. The DarkSide program is the first to employ a Liquid Argon Time Projection Chamber (\textit{LAr-TPC}) with low levels of ${}^{39}Ar$, together with innovations in photon detection and background suppression.
The DS-50 detector is installed in Hall C at the Laboratori Nazionali del Gran Sasso (\textit{LNGS}) at a depth of 3800 m.w.e.\footnote{The meter water equivalent (m.w.e.) is a standard measure of cosmic ray attenuation in underground laboratories.}, and it will continue taking data until the end of 2019. The program will continue with DarkSide-20k (\textit{DS-20k}) and \textit{Argo}, a multi-ton detector with an expected sensitivity improvement of two orders of magnitude. The DS-50 target volume is hosted in a dual-phase TPC that contains argon in both phases, liquid and gaseous, the latter on top of the former. The scattering of WIMPs or background particles in the active volume induces a prompt scintillation light, called S1, and ionization. Electrons which do not recombine are drifted by an electric field of 200 V/cm applied along the z-axis. They are then extracted into the gaseous phase above the extraction grid and accelerated by an electric field of about 4200 V/cm. Here a second, larger signal due to electroluminescence is produced, the so-called S2. The light is collected by two arrays of 19 3-inch PMTs on each side of the TPC, corresponding to a 60\% geometrical coverage of the end plates and 20\% of the total TPC surface. The detector is capable of reconstructing the position of the interaction in 3D. The z-coordinate, in particular, is easily computed from the electron drift time, while the time profile of the S2 light collected by the top-plate PMTs allows the reconstruction of the \textit{x} and \textit{y} coordinates. The LAr-TPC can exploit Pulse Shape Discrimination (\textit{PSD}) and the ratio of scintillation to ionization (S1/S2) to reject $\beta/\gamma$ background in favor of the nuclear recoil events expected from WIMP scattering \cite{Ben08, Bou06}.\\ Events due to neutrons from cosmogenic sources and from radioactive contamination in the detector components, which also produce nuclear recoils, are suppressed by the combined action of the neutron and cosmic-ray vetoes. The first, in particular, is a 4.0 m-diameter stainless steel sphere filled with 30 t of borated liquid scintillator acting as a Liquid Scintillator Veto (\textit{LSV}). The sphere is lined with \textit{Lumirror} reflecting foils and is equipped with an array of 110 Hamamatsu 8-inch PMTs with low-radioactivity components and high-quantum-efficiency photocathodes. The cosmic-ray veto, on the other hand, is an 11 m-diameter, 10 m-high cylindrical tank filled with high-purity water which acts as a Water Cherenkov Detector (\textit{WCD}). The inside surface of the tank is covered with a laminated \textit{Tyvek-polyethylene-Tyvek} reflector and is equipped with an array of 80 ETL 8-inch PMTs with low-radioactivity components and high-quantum-efficiency photocathodes.
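For reference, the z-coordinate reconstruction mentioned above reduces, under the simplifying assumption of a constant drift velocity $v_{d}$ in the 200 V/cm drift field, to
\begin{equation}
z \simeq v_{d}\,(t_{S2} - t_{S1}),
\end{equation}
where $t_{S1}$ and $t_{S2}$ are the detection times of the prompt scintillation and of the electroluminescence signal; this notation is ours and is introduced purely for illustration.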
The exposure of 1422 kg d of AAr has demonstrated that the operation of DS-50 for three years in a background-free condition is a solid reality, thanks to the excellent performance of the pulse shape analysis. The first release of results from an exposure of 2616 kg d of UAr has shown no dark matter candidate events. This is the most sensitive dark matter search performed with an argon-based detector, corresponding to a 90\% CL upper limit on the WIMP-nucleon spin-independent cross section of $2\times10^{-44}$ cm$^2$ for a WIMP mass of 100 GeV/$c^2$ \cite{Dang16}.
\section{DS-50 at CNAF}
The data readout in the three detector subsystems is managed by dedicated trigger boards: each subsystem is equipped with a user-customizable FPGA unit, in which the trigger logic is implemented. The inputs and outputs of the different trigger modules are processed by a set of electrical-to-optical converters, and the communication between the subsystems uses dedicated optical links. To keep the TPC and the Veto readouts aligned, a pulse per second (\textit{PPS}) generated by a GPS receiver is sent to the two systems, where it is acquired and interpolated with a resolution of 20 ns to allow offline confirmation of event matching.
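As an illustration of the offline matching enabled by the PPS, the sketch below pairs TPC and Veto timestamps, both disciplined by the GPS clock, within a tolerance of a few 20 ns interpolation ticks. The data layout and the matching window are our assumptions.
\begin{verbatim}
# Sketch of offline TPC/Veto event matching on GPS-disciplined
# timestamps (20 ns interpolation resolution). The matching
# window and data layout are assumptions for illustration.
import bisect

TICK_NS = 20               # PPS interpolation resolution
WINDOW_NS = 5 * TICK_NS    # hypothetical matching tolerance

def match(tpc_times_ns, veto_times_ns, window=WINDOW_NS):
    """Pair each TPC event with the closest Veto event in window."""
    veto_sorted = sorted(veto_times_ns)
    pairs = []
    for t in tpc_times_ns:
        i = bisect.bisect_left(veto_sorted, t)
        candidates = veto_sorted[max(0, i - 1):i + 1]
        if candidates:
            best = min(candidates, key=lambda v: abs(v - t))
            if abs(best - t) <= window:
                pairs.append((t, best))
    return pairs

print(match([1000040, 2000000], [1000020, 3000000]))
\end{verbatim}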
To acquire data, the DarkSide detector uses a DAQ machine equipped with a storage buffer of 7 TB. Raw data are processed and automatically sent to the CNAF farm via a 10 Gbit optical link (with approximately 7 hours of delay). At CNAF, data are housed on a disk storage system of about 1 PB net capacity, with a part of the data (300 TB) backed up on the tape library. Raw data from CNAF and processed data from LNGS are then semi-automatically copied to the Fermi National Accelerator Laboratory (\textit{FNAL}) via a 100 Gbit optical link. Part of the reconstructed data is sent back to CNAF via the same link at a rate of about 0.5 TB/month (RECO files). Data processed and analyzed at FNAL are compared with the analysis performed at CNAF. The INFN Roma 3 group has an active role in maintaining and following, step by step, the overall transfer procedure and in arranging the data management.
\section{The future of DarkSide: DS-20k}
Building on the successful experience in operating the DS-50 detector, the DarkSide program will continue with DS-20k, a direct WIMP search using a two-phase Liquid Argon Time Projection Chamber (LAr-TPC) with an active (fiducial) mass of 23 t (20 t), which will be built in the next years. The optical sensors will be Silicon Photomultiplier (\textit{SiPM}) matrices with very low radioactivity. Operation of DS-50 demonstrated a major reduction in the dominant ${}^{39}Ar$ background when using argon extracted from an underground source, even before applying pulse shape analysis. Data from DS-50, in combination with MC simulations and analytical modelling, also show that a rejection factor for discrimination between electron and nuclear recoils greater than $3\times10^9$ is achievable. The expected large rejection factor, along with the use of the veto system and the adoption of silicon photomultipliers in the LAr-TPC, are the keys to unlock the path to large LAr-TPC detector masses, while maintaining an experiment in which fewer than 0.1 events are expected to occur within the WIMP search region during the planned exposure.
Thanks to the measured ultra-low background, DS-20k will have sensitivity to WIMP-nucleon cross sections of
$1.2\times10^{-47}\ cm^2$ and $1.1\times10^{-46}\ cm^2$ for WIMPs of
$1\ TeV/c^2$ and $10\ TeV/c^2$ mass respectively, to be achieved during a 5 yr run producing an exposure of 100 t yr free from any instrumental background.
DS-20k could then extend its operation to a decade, increasing the exposure to 200 t yr and reaching a sensitivity of $7.4\times10^{-48}\ cm^2$ and $6.9\times10^{-47}\ cm^2$ for WIMPs of $1\ TeV/c^2$ and $10\ TeV/c^2$ mass respectively.
DS-20k will be more than two orders of magnitude larger in size than DS-50 and will utilize SiPM technologies. Therefore, the collaboration plans to build a prototype detector of intermediate size, called DS-Proto, incorporating the new technologies for their full validation. The choice of a mass scale of about 1 t allows a full validation of the technological choices for DS-20k. DS-Proto will be built at the CERN laboratory; data taking is foreseen to start in 2020.
\section{DS-Proto at CNAF}
Data from DS-Proto will be stored and managed at CNAF. The construction, operation and commissioning of DS-Proto will allow validation of the major innovative technical features of DS-20k. Data taking will start in 2020. The computing resources have been evaluated according to the data throughput, trigger rate and duty cycle of the experiment. A computing power of about 1 kHS06 and 300 TB of net disk space are needed to fully support DS-Proto data taking and data analysis in 2020. In order to perform the CPU-demanding Monte Carlo production at CNAF, 30 TB of net disk space and 2 kHS06 are needed. The DS-Proto data taking is foreseen to last a few years, requiring a total disk space of the order of some PB and a computing capacity of several kHS06.
%However, the goal of DS-20k is a background free exposure of 100 ton-year of liquid Argon which requires further suppression of ${}^{39}Ar$ background with respect to DS-50. The project \textit{URANIA} involves the upgrade of the UAr extraction plant to a massive production rate suitable for multi-ton detectors. The project \textit{ARIA} instead involves the construction of a very tall cryogenic distillation column in the Seruci mine (Sardinia, Italy) with the high-volume capability of chemical and isotopic purification of UAr.\\ The projected sensitivity of DS-20k and Argo reaches a WIMP-nucleon cross section of $10^{-47}\ cm^2$ and $10^{-48}\ cm^2$ respectively, for a WIMP mass of 100 $GeV/cm^2$, exploring the region of the parameters plane down to the irreducible background due to atmospheric neutrinos.
\section*{References}
\begin{thebibliography}{17}
\bibitem{Good85} M.~W.~Goodman, E.~Witten, Phys. Rev. D {\bf 31} 3059 (1985);
\bibitem{Loo83} H.~H.~Loosli, Earth Plan. Sci. Lett. {\bf 63} 51 (1983);
\bibitem{Ben07} P.~Benetti et al. (WARP Collaboration), Nucl. Inst. Meth. A {\bf 574} 83 (2007);
\bibitem{Ben08} P.~Benetti et al. (WARP Collaboration), Astropart. Phys. {\bf 28} 495 (2008);
\bibitem{Bou06} M.~G.~Boulay, A.~Hime, Astropart. Phys. {\bf 25} 179 (2006);
\bibitem{Dang16} D.~D'Angelo et al. (DARKSIDE Collaboration), Il nuovo cimento C {\bf 39} 312 (2016).
\end{thebibliography}
\end{document}
\ No newline at end of file
%%
%% This is file `iopams.sty'
%% File to include AMS fonts and extra definitions for bold greek
%% characters for use with iopart.cls
%%
\NeedsTeXFormat{LaTeX2e}
\ProvidesPackage{iopams}[1997/02/13 v1.0]
\RequirePackage{amsgen}[1995/01/01]
\RequirePackage{amsfonts}[1995/01/01]
\RequirePackage{amssymb}[1995/01/01]
\RequirePackage{amsbsy}[1995/01/01]
%
\iopamstrue % \newif\ifiopams in iopart.cls & iopbk2e.cls
% % allows optional text to be in author guidelines
%
% Bold lower case Greek letters
%
\newcommand{\balpha}{\boldsymbol{\alpha}}
\newcommand{\bbeta}{\boldsymbol{\beta}}
\newcommand{\bgamma}{\boldsymbol{\gamma}}
\newcommand{\bdelta}{\boldsymbol{\delta}}
\newcommand{\bepsilon}{\boldsymbol{\epsilon}}
\newcommand{\bzeta}{\boldsymbol{\zeta}}
\newcommand{\bfeta}{\boldsymbol{\eta}}
\newcommand{\btheta}{\boldsymbol{\theta}}
\newcommand{\biota}{\boldsymbol{\iota}}
\newcommand{\bkappa}{\boldsymbol{\kappa}}
\newcommand{\blambda}{\boldsymbol{\lambda}}
\newcommand{\bmu}{\boldsymbol{\mu}}
\newcommand{\bnu}{\boldsymbol{\nu}}
\newcommand{\bxi}{\boldsymbol{\xi}}
\newcommand{\bpi}{\boldsymbol{\pi}}
\newcommand{\brho}{\boldsymbol{\rho}}
\newcommand{\bsigma}{\boldsymbol{\sigma}}
\newcommand{\btau}{\boldsymbol{\tau}}
\newcommand{\bupsilon}{\boldsymbol{\upsilon}}
\newcommand{\bphi}{\boldsymbol{\phi}}
\newcommand{\bchi}{\boldsymbol{\chi}}
\newcommand{\bpsi}{\boldsymbol{\psi}}
\newcommand{\bomega}{\boldsymbol{\omega}}
\newcommand{\bvarepsilon}{\boldsymbol{\varepsilon}}
\newcommand{\bvartheta}{\boldsymbol{\vartheta}}
\newcommand{\bvaromega}{\boldsymbol{\varomega}}
\newcommand{\bvarrho}{\boldsymbol{\varrho}}
\newcommand{\bvarzeta}{\boldsymbol{\varsigma}} %NB really sigma
\newcommand{\bvarsigma}{\boldsymbol{\varsigma}}
\newcommand{\bvarphi}{\boldsymbol{\varphi}}
%
% Bold upright capital Greek letters
%
\newcommand{\bGamma}{\boldsymbol{\Gamma}}
\newcommand{\bDelta}{\boldsymbol{\Delta}}
\newcommand{\bTheta}{\boldsymbol{\Theta}}
\newcommand{\bLambda}{\boldsymbol{\Lambda}}
\newcommand{\bXi}{\boldsymbol{\Xi}}
\newcommand{\bPi}{\boldsymbol{\Pi}}
\newcommand{\bSigma}{\boldsymbol{\Sigma}}
\newcommand{\bUpsilon}{\boldsymbol{\Upsilon}}
\newcommand{\bPhi}{\boldsymbol{\Phi}}
\newcommand{\bPsi}{\boldsymbol{\Psi}}
\newcommand{\bOmega}{\boldsymbol{\Omega}}
%
% Bold versions of miscellaneous symbols
%
\newcommand{\bpartial}{\boldsymbol{\partial}}
\newcommand{\bell}{\boldsymbol{\ell}}
\newcommand{\bimath}{\boldsymbol{\imath}}
\newcommand{\bjmath}{\boldsymbol{\jmath}}
\newcommand{\binfty}{\boldsymbol{\infty}}
\newcommand{\bnabla}{\boldsymbol{\nabla}}
\newcommand{\bdot}{\boldsymbol{\cdot}}
%
% Symbols for caption
%
\renewcommand{\opensquare}{\mbox{$\square$}}
\renewcommand{\opentriangle}{\mbox{$\vartriangle$}}
\renewcommand{\opentriangledown}{\mbox{$\triangledown$}}
\renewcommand{\opendiamond}{\mbox{$\lozenge$}}
\renewcommand{\fullsquare}{\mbox{$\blacksquare$}}
\newcommand{\fulldiamond}{\mbox{$\blacklozenge$}}
\newcommand{\fullstar}{\mbox{$\bigstar$}}
\newcommand{\fulltriangle}{\mbox{$\blacktriangle$}}
\newcommand{\fulltriangledown}{\mbox{$\blacktriangledown$}}
\endinput
%%
%% End of file `iopams.sty'.
%%
%% This is file `jpconf11.clo'
%%
%% This file is distributed in the hope that it will be useful,
%% but WITHOUT ANY WARRANTY; without even the implied warranty of
%% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
%%
%% \CharacterTable
%% {Upper-case \A\B\C\D\E\F\G\H\I\J\K\L\M\N\O\P\Q\R\S\T\U\V\W\X\Y\Z
%% Lower-case \a\b\c\d\e\f\g\h\i\j\k\l\m\n\o\p\q\r\s\t\u\v\w\x\y\z
%% Digits \0\1\2\3\4\5\6\7\8\9
%% Exclamation \! Double quote \" Hash (number) \#
%% Dollar \$ Percent \% Ampersand \&
%% Acute accent \' Left paren \( Right paren \)
%% Asterisk \* Plus \+ Comma \,
%% Minus \- Point \. Solidus \/
%% Colon \: Semicolon \; Less than \<
%% Equals \= Greater than \> Question mark \?
%% Commercial at \@ Left bracket \[ Backslash \\
%% Right bracket \] Circumflex \^ Underscore \_
%% Grave accent \` Left brace \{ Vertical bar \|
%% Right brace \} Tilde \~}
\ProvidesFile{jpconf11.clo}[2005/05/04 v1.0 LaTeX2e file (size option)]
\renewcommand\normalsize{%
\@setfontsize\normalsize\@xipt{13}%
\abovedisplayskip 12\p@ \@plus3\p@ \@minus7\p@
\abovedisplayshortskip \z@ \@plus3\p@
\belowdisplayshortskip 6.5\p@ \@plus3.5\p@ \@minus3\p@
\belowdisplayskip \abovedisplayskip
\let\@listi\@listI}
\normalsize
\newcommand\small{%
\@setfontsize\small\@xpt{12}%
\abovedisplayskip 11\p@ \@plus3\p@ \@minus6\p@
\abovedisplayshortskip \z@ \@plus3\p@
\belowdisplayshortskip 6.5\p@ \@plus3.5\p@ \@minus3\p@
\def\@listi{\leftmargin\leftmargini
\topsep 9\p@ \@plus3\p@ \@minus5\p@
\parsep 4.5\p@ \@plus2\p@ \@minus\p@
\itemsep \parsep}%
\belowdisplayskip \abovedisplayskip}
\newcommand\footnotesize{%
% \@setfontsize\footnotesize\@xpt\@xiipt
\@setfontsize\footnotesize\@ixpt{11}%
\abovedisplayskip 10\p@ \@plus2\p@ \@minus5\p@
\abovedisplayshortskip \z@ \@plus3\p@
\belowdisplayshortskip 6\p@ \@plus3\p@ \@minus3\p@
\def\@listi{\leftmargin\leftmargini
\topsep 6\p@ \@plus2\p@ \@minus2\p@
\parsep 3\p@ \@plus2\p@ \@minus\p@
\itemsep \parsep}%
\belowdisplayskip \abovedisplayskip
}
\newcommand\scriptsize{\@setfontsize\scriptsize\@viiipt{9.5}}
\newcommand\tiny{\@setfontsize\tiny\@vipt\@viipt}
\newcommand\large{\@setfontsize\large\@xivpt{18}}
\newcommand\Large{\@setfontsize\Large\@xviipt{22}}
\newcommand\LARGE{\@setfontsize\LARGE\@xxpt{25}}
\newcommand\huge{\@setfontsize\huge\@xxvpt{30}}
\let\Huge=\huge
\if@twocolumn
\setlength\parindent{14\p@}
\else
\setlength\parindent{18\p@}
\fi
\if@letterpaper%
%\input{letmarg.tex}%
\setlength{\hoffset}{0mm}
\setlength{\marginparsep}{0mm}
\setlength{\marginparwidth}{0mm}
\setlength{\textwidth}{160mm}
\setlength{\oddsidemargin}{-0.4mm}
\setlength{\evensidemargin}{-0.4mm}
\setlength{\voffset}{0mm}
\setlength{\headheight}{8mm}
\setlength{\headsep}{5mm}
\setlength{\footskip}{0mm}
\setlength{\textheight}{230mm}
\setlength{\topmargin}{1.6mm}
\else
%\input{a4marg.tex}%
\setlength{\hoffset}{0mm}
\setlength{\marginparsep}{0mm}
\setlength{\marginparwidth}{0mm}
\setlength{\textwidth}{160mm}
\setlength{\oddsidemargin}{-0.4mm}
\setlength{\evensidemargin}{-0.4mm}
\setlength{\voffset}{0mm}
\setlength{\headheight}{8mm}
\setlength{\headsep}{5mm}
\setlength{\footskip}{0mm}
\setlength{\textheight}{230mm}
\setlength{\topmargin}{1.6mm}
\fi
\setlength\maxdepth{.5\topskip}
\setlength\@maxdepth\maxdepth
\setlength\footnotesep{8.4\p@}
\setlength{\skip\footins} {10.8\p@ \@plus 4\p@ \@minus 2\p@}
\setlength\floatsep {14\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\textfloatsep {24\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\intextsep {16\p@ \@plus 4\p@ \@minus 4\p@}
\setlength\dblfloatsep {16\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\dbltextfloatsep{24\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\@fptop{0\p@}
\setlength\@fpsep{10\p@ \@plus 1fil}
\setlength\@fpbot{0\p@}
\setlength\@dblfptop{0\p@}
\setlength\@dblfpsep{10\p@ \@plus 1fil}
\setlength\@dblfpbot{0\p@}
\setlength\partopsep{3\p@ \@plus 2\p@ \@minus 2\p@}
\def\@listI{\leftmargin\leftmargini
\parsep=\z@
\topsep=6\p@ \@plus3\p@ \@minus3\p@
\itemsep=3\p@ \@plus2\p@ \@minus1\p@}
\let\@listi\@listI
\@listi
\def\@listii {\leftmargin\leftmarginii
\labelwidth\leftmarginii
\advance\labelwidth-\labelsep
\topsep=3\p@ \@plus2\p@ \@minus\p@
\parsep=\z@
\itemsep=\parsep}
\def\@listiii{\leftmargin\leftmarginiii
\labelwidth\leftmarginiii
\advance\labelwidth-\labelsep
\topsep=\z@
\parsep=\z@
\partopsep=\z@
\itemsep=\z@}
\def\@listiv {\leftmargin\leftmarginiv
\labelwidth\leftmarginiv
\advance\labelwidth-\labelsep}
\def\@listv{\leftmargin\leftmarginv
\labelwidth\leftmarginv
\advance\labelwidth-\labelsep}
\def\@listvi {\leftmargin\leftmarginvi
\labelwidth\leftmarginvi
\advance\labelwidth-\labelsep}
\endinput
%%
%% End of file `jpconf11.clo'.
contributions/ds_cloud_c/catc_monitoring.png

31.7 KiB

\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{Cloud@CNAF Management and Evolution}
\author{C. Duma$^1$, A. Costantini$^1$, D. Michelotto$^1$ and D. Salomoni$^1$}
\address{$^1$INFN Division CNAF, Bologna, Italy}
\ead{ds@cnaf.infn.it}
\begin{abstract}
Cloud@CNAF is the cloud infrastructure hosted at CNAF, based on open source solutions and aiming
to serve the different use cases present at the center. The infrastructure is the result of
the collaboration of a transversal group of people from all CNAF
functional units: networking, storage, farming, national services and distributed systems.
If 2016 was, for the Cloud@CNAF IaaS (Infrastructure as a Service) based on OpenStack,
a period of consolidation and improvement, 2017 was a year of consolidation and
operation that ended with an extreme event: the flooding of the data center, caused by the
rupture of an aqueduct pipe located in the street near CNAF. This event brought
down the entire data center, including the Cloud@CNAF infrastructure. This paper
presents the activities carried out throughout 2018 to ensure the functioning
of the center's cloud infrastructure: from its migration from CNAF to INFN-Ferrara,
starting with the re-design of the entire infrastructure to cope with the limited availability of
space and weight imposed by the new location, to the physical migration of the
racks and the remote management and operation of the infrastructure, in order to continue
to provide high-quality services for our users and communities.
\end{abstract}
\section{Introduction}
The main goal of the Cloud@CNAF \cite{catc} project is to provide a production-quality
cloud infrastructure for CNAF internal activities as well as for national and
international projects hosted at CNAF:
\begin{itemize}
\item Internal activities
\begin{itemize}
\item Provisioning VMs for CNAF departments and staff members
\item Tutorials and courses
\end{itemize}
\item National and international projects
\begin{itemize}
\item Providing VMs for experiments hosted at CNAF, like CMS, ATLAS, EEE and FAZIA
\item Testbeds for testing the services developed by projects like INDIGO-DataCloud, eXtreme-DataCloud and DEEP-HybridDataCloud
\end{itemize}
\end{itemize}
The infrastructure made available is based on OpenStack \cite{openstack}, version Mitaka, with all the
services deployed in a High-Availability (HA) setup or in a
clustered manner (e.g. the databases used). During 2016 the infrastructure was
enhanced by adding new compute and network resources, and its operation was improved and guaranteed by
adding monitoring, improving support and automating the maintenance activities.
Thanks to these enhancements, Cloud@CNAF was able to offer highly reliable services to the users and communities who rely on the infrastructure.
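As an illustration of the VM provisioning use case listed above, the following minimal sketch uses the standard openstacksdk Python client; all identifiers (cloud profile, image, flavor and network names) are placeholders, not the actual Cloud@CNAF configuration.
\begin{verbatim}
# Minimal VM-provisioning sketch with openstacksdk. All
# identifiers below are placeholders, not real Cloud@CNAF names.
import openstack

conn = openstack.connect(cloud="cloud-at-cnaf")  # from clouds.yaml

image = conn.compute.find_image("CentOS-7")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("private-net")

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
# Block until the instance reaches ACTIVE state.
server = conn.compute.wait_for_server(server)
print(server.status)
\end{verbatim}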
At the end of 2017, early in the morning of November 9th, an aqueduct pipe located in the street near CNAF broke, as documented in Ref. \cite{flood}.
As a result, a river of water and mud flowed towards the Tier1 data center. The level of the water did not exceed the
safety threshold of the waterproof doors but, due to the porosity of the external walls and the floor, it found a way
into the data center. Both electric lines failed at about 7:10 AM CET. Access to the data center was possible only
in the afternoon, after all the water had been pumped out.
As a result, the entire Tier1 data center went down, including the Cloud@CNAF infrastructure.
\section{The resource migration}
Some weeks after the flooding, it was decided to move the Cloud@CNAF core services to a different location
in order to restore the services provided to communities and experiments.
Thanks to long-standing collaborations, both the University of Parma/INFN-Parma and INFN-Ferrara offered to host our
core machinery and related services.
Due to the geographical proximity and the presence of a GARR Point of Presence (PoP), the
Cloud@CNAF core machinery was moved to INFN-Ferrara.
Unfortunately, we were not able to move all the Cloud@CNAF resources, due to the limited power and weight capacity of the new location.
For this reason, a re-design of the infrastructure was necessary.
As a first step, the services and the related machinery to be moved to the new, temporary location were selected so as to
fit the maximum power consumption and weight estimated for each of the two rooms devoted to hosting the Cloud@CNAF services (see Table \ref{table:1} for details).
\begin{table} [ht]
\centering
\begin{tabular}{ l|c|c|c||c||c| }
\cline{2-6}
& \multicolumn{3}{c||}{Room1} & Room2 & Tot \\
\cline{2-5}
& Rack1 & Rack2 & Tot & Rack3 & \\
\hline
Power consumption (kW) & 8.88 & 4.91 & 13.79 (15) & 5.8 (7) & 19.59\\
Weight (kg) & 201 & 151 & 352 (400 kg/m$^2$) & 92 (400 kg/m$^2$) & 444 \\
Occupancy (U) & 9 & 12 & 21 & 10 & 31 \\
\hline
\end{tabular}
\caption{Power consumption, weight and occupancy for each rack. In brackets, the maximum value allowed for the room.}
\label{table:1}
\end{table}
\section{Re-design the new infrastructure}
Due to the limitations described in Table \ref{table:1}, only three racks have been used to host the Cloud@CNAF core services.
Among these three racks, the first hosts the storage resources; the second hosts the OpenStack controller, the network
services and the GPFS cluster; the third hosts the oVirt and OpenStack compute nodes, together with
some other ancillary services (see Table \ref{table:2} for details).
Rack1 and Rack2 are connected at 2x40 Gb/s through our Brocade VDX switches, and Rack1 and Rack3 are connected
at 2x10 Gb/s through PowerConnect switches.
\begin{table} [ht]
\centering
\begin{tabular}{ c|l|l|l| }
\cline{2-4}
& \multicolumn{1}{|c|}{Rack1} & \multicolumn{1}{|c|}{Rack2} & \multicolumn{1}{|c|}{Rack3}\\
\hline
& VDX & VDX & PowerConnect x2 \\
Resources & EqualLogic & Cloud controllers & Ovirt nodes\\
and & Powervault & Cloud networks & Compute nodes\\
Services & & Gridstore & DBs nodes\\
& & Other services & Cloud UI\\
\hline
\end{tabular}
\caption{List of resources and services hosted per Rack}
\label{table:2}
\end{table}
Moreover, Rack1 is connected to the GARR PoP with a 1x1 Gb/s fiber connection to guarantee external connectivity.
A complete overview of the new infrastructure and the related resource locations is shown in Figure \ref{new_c_at_c}.
As shown in Figure \ref{new_c_at_c}, and taking into account the limitations described in Table \ref{table:1}, the power consumption
was kept below 13.79 kW in Room1 (limit 15 kW) and below 5.8 kW in Room2 (limit 7 kW).
The whole migration process (from the design to the reconfiguration of the new infrastructure) took just a business week,
after which the Cloud@CNAF infrastructure and related services were up and running, able to serve again the different projects and communities.
Thanks to the experience and documentation gathered, in June 2018, after the Tier1 returned to its production status,
Cloud@CNAF was migrated back in less than three business days.
\section{Cloud@CNAF evolution}
Starting from the activities carried out in 2016 related to improvements at the infrastructure level \cite{catc}, in
2018 (after the return of the core infrastructure services displaced by the flooding)
the growth of the computing resources, in terms of quality and quantity, continued in order to enhance both the
services and the performance offered to users.
Thanks to this activity, during the last year Cloud@CNAF saw a growth in the number of users and use cases
implemented on the infrastructure: the number of projects increased up to 87, using approximately
1035 virtual CPUs and 1.766 TB of RAM, with a total of 267 virtual machines (see Figure \ref{catc_monitor} for more details).
Among others, some of the projects that used the cloud infrastructure are:
\begin{itemize}
\item HARMONY - a proof-of-concept under TTLab coordination, part of a project aimed at resourceful medicines offensive against neoplasms in hematology,
\item EEE - Extreme Energy Events - Science inside Schools, a special research activity on the origin of cosmic rays, carried out with the essential contribution of students and teachers of high schools,
\item CHNET-DHLab - the cultural heritage network of INFN, for the development of virtual laboratory services,
\item USER Support - for the development of the experiment dashboards and the hosting of the production instance of the dashboard, displayed on the monitor in the CNAF hallway,
\item EOSC-hub DODAS - a Thematic Service for the elastic extension of computing centre batch resources on external clouds,
\item Services devoted to EU projects like DEEP-HDC \cite{deep}, XDC \cite{xdc} and EOSC-pilot \cite{pilot}.
\end{itemize}
\section{Conclusions and future work}
Due to damage to an aqueduct pipe located in the street near CNAF, a river of water and mud flowed towards the Tier1 data center, causing the
shutdown of the entire data center. For this reason, the services and related resources hosted by Cloud@CNAF went down.
To cope with this problem, the decision was taken to temporarily migrate the core resources and services of Cloud@CNAF to INFN-Ferrara.
In order to do this, a complete re-design of the entire infrastructure was needed to tackle the limitations in terms of power consumption and
weight imposed by the new location.
The joint effort and expertise of the CNAF people and of the INFN-Ferrara colleagues made it possible to re-design, migrate and make operational
the Cloud@CNAF infrastructure and related hosted services in less than a business week.
Thanks to the experience and the documentation gathered, in June 2018, after the Tier1 returned to its production status, Cloud@CNAF
was migrated back in less than three business days.
Even with the above described problems, the Cloud@CNAF infrastructure has been maintained and evolved, giving users the possibility
to carry on their activities and obtain their desired results.
For the next year new and challenging activities are planned, in particular the migration to the OpenStack Rocky version and the deployment of a new architecture distributed between
different functional units, Data Center and SDDS.
\begin{figure}[h]
\centering
\includegraphics[width=15cm,clip]{infn-fe23.png}
\caption{The new architecture of Cloud@CNAF, developed to cope with the limitations at INFN-Ferrara.}
\label{new_c_at_c}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=12cm,clip]{catc_monitoring.png}
\caption{Cloud@CNAF monitoring and status}
\label{catc_monitor}
\end{figure}
\section{References}
\begin{thebibliography}{}
\bibitem{catc}
Duma C, Bucchi R, Costantini A, Michelotto D, Panella M, Salomoni D and Zizzi G, Cloud@CNAF - maintenance and operation, CNAF Annual Report 2016, https://www.cnaf.infn.it/Annual-Report/annual-report-2016.pdf
\bibitem{openstack}
Web site: https://www.openstack.org/
\bibitem{flood}
dell'Agnello L, The flood, CNAF Annual Report 2017, https://www.cnaf.infn.it/wp-content/uploads/2018/09/cnaf-annual-report-2017.pdf
\bibitem{deep}
Web site: https://deep-hybrid-datacloud.eu/
\bibitem{xdc}
Web site: www.extreme-datacloud.eu
\bibitem{pilot}
Web site: https://eoscpilot.eu
\end{thebibliography}
\end{document}
contributions/ds_cloud_c/infn-fe23.png

82.5 KiB

contributions/ds_devops_pe/CI-tools.png

33.2 KiB