Commit 9d88f2d7 authored by Fornari


Merge branch 'master' of https://baltig.infn.it/cnaf/annual-report/ar2018 to add icarus contribution
parents 0a3389fc 545eaf27
Pipeline #20904 passed
......@@ -102,6 +102,7 @@ build_from_source limadou limadou.tex
#build_from_source lowcostdev lowcostdev.tex *.jpg
#build_from_source lspe lspe.tex biblio.bib lspe_data_path.pdf
build_from_source virgo AdV_computing_CNAF.tex
build_from_source xenon main.tex xenon-computing-model.pdf
#build_from_source mw-esaco mw-esaco.tex *.png
......@@ -125,7 +126,7 @@ build_from_source HTC_testbed HTC_testbed_AR2018.tex
#build_from_source seagate seagate.tex biblio.bib *.png *.jpg
#build_from_source dataclient dataclient.tex
#build_from_source ltpd ltpd.tex *.png
#build_from_source net net.tex *.png
build_from_source net main.tex *.png
#build_from_source ssnn1 ssnn.tex *.jpg
#build_from_source ssnn2 vmware.tex *.JPG *.jpg
......
......@@ -170,8 +170,9 @@ Introducing the sixth annual report of CNAF...
\ia{The NA62 experiment at CERN}{na62}
\ia{The NEWCHIM activity at CNAF for the CHIMERA and FARCOS devices}{newchim}
\ia{The PADME experiment at INFN CNAF}{padme}
%\ia{XENON computing activities}{xenon}
\ia{XENON computing model}{xenon}
\ia{Advanced Virgo computing at CNAF}{virgo}
%
% to keep together the next part title with its chapters in the toc
%\addtocontents{toc}{\newpage}
......@@ -187,7 +188,7 @@ Introducing the sixth annual report of CNAF...
%\ia{Data management and storage systems}{storage}
%\ia{Evaluation of the ClusterStor G200 Storage System}{seagate}
%\ia{Activity of the INFN CNAF Long Term Data Preservation (LTDP) group}{ltpd}
%\ia{The INFN Tier 1: Network}{net}
\ia{The INFN-Tier1: Network and Security}{net}
%\ia{Cooling system upgrade and Power Usage Effectiveness improvement in the INFN CNAF Tier 1 infrastructure}{infra}
%\ia{National ICT Services Infrastructure and Services}{ssnn1}
%\ia{National ICT Services hardware and software infrastructures for Central Services}{ssnn2}
......
contributions/net/cineca-schema.png

217 KiB

contributions/net/cineca.png

71.7 KiB

contributions/net/connection-schema.png

117 KiB

contributions/net/gpn.png

95.3 KiB

contributions/net/lhcone-opn.png

79.1 KiB

\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{The INFN-Tier1: Network and Security}
\author{S.~Zani$^1$, D.~De~Girolamo$^1$, L.~Chiarelli$^{1,2}$, V.~Ciaschini$^1$}
\address{$^1$ INFN-CNAF, Bologna, IT}
\address{$^2$ GARR Consortium, Roma, IT}
\ead{stefano.zani@cnaf.infn.it}
%\begin{abstract}
%DA SCRIVERE
%\end{abstract}
\section{Introduction}
The Network unit manages the wide area and local area connections of CNAF. It is responsible for the security of the center, contributes to the management of local CNAF services (e.g., DNS, Windows domain, etc.) and of some of the INFN national ICT services, and also gives support to the GARR PoP hosted at CNAF.
\section{Wide Area Network}
The CNAF datacentre hosts the main PoP of the GARR network, which is based on a fully managed dark fiber infrastructure.
CNAF is connected to the WAN via GARR/GEANT essentially through two physical links:
\begin{itemize}
\item General Internet: the General IP link is 20 Gbps (2$\times$10 Gbps) via GARR and GEANT.
\item LHCOPN/LHCONE: the link to WLCG destinations is a 200 Gbps (2$\times$100 Gbps) link shared between the LHC-OPN network, for traffic with the Tier-0 (CERN) and the other Tier-1s, and the LHCONE network, mainly for traffic with the Tier-2s. Since summer 2018, the dedicated LHCOPN link to CERN (from the Milan GARR PoP) has been upgraded to 2$\times$100 Gbps, while the peering to LHCONE is at 100 Gbps (from the Milan GARR PoP and the GEANT GARR PoP).
\end{itemize}
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{connection-schema.png}\hspace{2pc}%
%\begin{minipage}[b]{14pc}
\caption{\label{schema-rete}INFN CNAF connection schema.}
%\end{minipage}
\end{center}
\end{figure}
As shown in figures~\ref{lhc-opn-usage} and \ref{gpn-usage}, network usage is growing both on LHCOPN/LHCONE and on General IP, even if at the beginning of 2018 the traffic was very low because of the flooding that occurred in November 2017 (the computing center came back fully online during February 2018).
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{lhcone-opn.png}\hspace{2pc}%
\caption{\label{lhc-opn-usage}LHC OPN + LHC ONE link usage.}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{gpn.png}\hspace{2pc}%
\caption{\label{gpn-usage}General IP link usage.}
\end{center}
\end{figure}
Currently the dedicated LHCOPN bandwidth to CERN is 100 Gbps, with a 4$\times$10 Gbps backup link. During 2019 the configuration will change: two 100 Gbps links to the two CERN PoPs will be provided, granting better resiliency and potentially a full 200 Gbps towards CERN and the Tier-1s.
\section{Data Center Interconnect with CINECA}
At the beginning of 2018, CNAF obtained from CINECA the use of 216 servers based on the Intel Xeon E5-2697 v4 CPU (36 physical cores each), coming from partition 1 of the “Marconi” supercomputer, being phased out for HPC workflows.
In order to integrate those computing resources into our farm, it was essential to guarantee adequate access bandwidth to the storage resources located at CNAF. This has been implemented in collaboration with GARR, using the Data Center Interconnect (DCI) technology provided by a pair of Infinera Cloud Express 2 (CX1200) devices.
The Cloud Express 2 is a transponder with 12$\times$100 Gigabit Ethernet interfaces on the LAN side and one LC fiber interface on the “Line” side, capable of up to 1.2 Tbps on a single-mode fiber over a maximum distance of 100 km (CNAF and CINECA are 17 km apart). In the CNAF-CINECA case, the systems are configured for a 400 Gbps connection.
The latency introduced by each CX1200 is $\sim 5\,\mu$s, and the total RTT (Round Trip Time) between servers at CNAF and servers at CINECA is 0.48 ms, comparable to what we observe on the LAN (0.28 ms).
All worker nodes on the network segment at CINECA have IP addresses of the INFN Tier-1 network and are used as if they were installed at the Tier-1 facility (see fig.~\ref{cineca-schema}). The data access bandwidth is 400 Gbps, but it can scale up to 1.2 Tbps.
This DCI interconnection was implemented rapidly as a proof of concept (it is the first time this technology has been used in Italy); it is now in production and, as it is becoming a stable and relevant asset for CNAF (fig.~\ref{cineca-traffic}), we plan to install a second optical fiber between CNAF and CINECA for resiliency reasons.
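As a rough consistency check of these numbers (assuming a typical signal propagation speed in optical fiber of about $2\times10^{8}$ m/s, i.e. roughly 5 $\mu$s/km, a value not taken from this report), the extra round-trip latency expected from the interconnection is
\[
2\times\frac{17\ \mathrm{km}}{2\times10^{8}\ \mathrm{m/s}} + 4\times 5\ \mu\mathrm{s} \simeq 170\ \mu\mathrm{s} + 20\ \mu\mathrm{s} \simeq 0.19\ \mathrm{ms},
\]
where the first term is the fiber propagation delay and the second accounts for the two CX1200 devices, each traversed twice. Added to the $\sim$0.28 ms LAN baseline, this gives $\simeq 0.47$ ms, in good agreement with the measured 0.48 ms.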
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{cineca-schema.png}\hspace{2pc}%
\caption{\label{cineca-schema}INFN Tier-1 – CINECA Data Center Interconnection.}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{cineca.png}\hspace{2pc}%
\caption{\label{cineca-traffic}INFN Tier-1 – CINECA link usage.}
\end{center}
\end{figure}
\section{Security}
The network security policies are mainly implemented as hardware-based ACLs on the access router and on the core switches (with dedicated ASICs on the devices).
The network group, in coordination with GARR-CERT and EGI-CSIRT, also takes care of security incidents at CNAF (both compromised systems or credentials and known vulnerabilities of software and grid middleware), cooperating with the involved parties.
During 2018 the CNAF Security Group was reorganized, with the formal involvement of at least one representative from each unit, in order to obtain stronger coordination on the implementation of security policies and a faster reaction to security incidents.
As always, in 2018 CNAF maintained an important commitment to security, which saw it active on several fronts.
\subsection{“Misure Minime” Implementation}
CNAF has had an important role in determining how the whole of INFN would implement compliance with the “Misure Minime”\footnote{Misure Minime is a set of minimum ICT security measures to be adopted by all the Italian public administrations.} regulation. It actively contributed to the discussion and to the implementation guidelines for each OS, and had a central role in defining the Risk Management procedures, writing the prototype version and co-writing the final definition.
\subsection{Vulnerability scanning}
In an effort to monitor the security of the centre, CNAF started a campaign of systematic and periodic scanning of all of its machines, personal and not, looking for vulnerabilities in order to find and fix them before they could be actively exploited by an attacker.
As expected, this scanning brought to light a number of issues that were promptly corrected (when possible) or mitigated (when not), thus nipping a number of potential problems in the bud.
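As an illustration only (the actual scanning engine, schedule and host inventory used at CNAF are not described in this report), the following minimal sketch shows how such a periodic campaign could be orchestrated, assuming nmap with its NSE “vuln” scripts as the scanner and a plain-text list of target hosts:
\begin{verbatim}
#!/usr/bin/env python3
# Hypothetical sketch: scan every host listed in hosts.txt with the
# nmap NSE "vuln" scripts and keep one report per host per day.
import datetime
import pathlib
import subprocess

HOSTS_FILE = pathlib.Path("hosts.txt")      # assumed host inventory
REPORT_DIR = pathlib.Path("scan-reports")   # assumed output directory

def scan(host: str, outdir: pathlib.Path) -> None:
    """Run a vulnerability-oriented nmap scan and save the report."""
    report = outdir / (host + ".txt")
    subprocess.run(
        ["nmap", "-sV", "--script", "vuln", "-oN", str(report), host],
        check=False,  # keep scanning the other hosts even if one fails
    )

def main() -> None:
    outdir = REPORT_DIR / datetime.date.today().isoformat()
    outdir.mkdir(parents=True, exist_ok=True)
    for host in HOSTS_FILE.read_text().split():
        scan(host, outdir)

if __name__ == "__main__":
    main()
\end{verbatim}
A cron job (or a systemd timer) would run such a script periodically; the resulting reports are then reviewed to decide which findings can be fixed directly and which need to be mitigated.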
\subsection{Security Assessment}
In light of its growing importance, a security assessment of Indigo-IAM has also taken place.
Focused on testing the actual security of the product and finding ways in which it could be exploited, this assessment brought to light a number of issues of varying importance, which were reported to and discussed with the developers in order to increase the security and reliability of the product.
\subsection{Technology tracking}
A constant technology-tracking activity is ongoing on security tools and devices. In particular, meetings with some of the main Next Generation Firewall producers were scheduled in 2017 and 2018. During these two years, three Next Generation Firewalls from Fortinet, Huawei and Palo Alto Networks were tested on production links, in order to define the fundamental characteristics to be included in the tender for the acquisition of the NG Firewall to be installed on the “General IP” Wide Area Network link.
%\section*{References}
%\begin{thebibliography}{9}
%\bibitem{iopartnum} IOP Publishing is to grateful Mark A Caprio, Center for Theoretical Physics, Yale University, for permission to include the {\tt iopart-num} \BibTeX package (version 2.0, December 21, 2006) with this documentation. Updates and new releases of {\tt iopart-num} can be found on \verb"www.ctan.org" (CTAN).
%\end{thebibliography}
\end{document}
contributions/net/net-board.png

170 KiB

\documentclass[a4paper,12pt]{jpconf}
\usepackage[american]{babel}
\usepackage{geometry}
%\usepackage{fancyhdr}
\usepackage{graphicx}
\geometry{a4paper,top=4.0cm,left=2.5cm,right=2.5cm,bottom=2.7cm}
%\usepackage[mmm]{fncychap}
%\fancyhf{} % azzeriamo testatine e piedino
%\fancyhead[L]{\thepage}
%\renewcommand{\sectionmark}[1]{\markleft{\thesection.\ #1}}
%\fancyhead[R]{\bfseries\leftmark}
%\rhead{XENON computing activities}
\begin{document}
\title{XENON computing model}
%\pagestyle{fancy}
\author{M. Selvi}
\address{INFN - Sezione di Bologna}
\ead{marco.selvi@bo.infn.it}
\begin{abstract}
The XENON project is dedicated to the direct search of dark matter at LNGS.
XENON1T, with 2 t of active xenon, was the largest dual-phase TPC built and operated so far; it was decommissioned in December 2018. It successfully set the best worldwide limit on the interaction cross-section of WIMPs with nucleons. In the context of rare-event-search detectors, the amount of data (in the form of raw waveforms) was significant: of the order of 1 PB/year, including both science and calibration runs. The next phase of the experiment, XENONnT, is under construction at LNGS, with a 3 times larger TPC and a correspondingly increased data rate. Its commissioning is foreseen by the end of 2019.
We describe the computing model of the XENON project, with details of the data transfer and management, the massive raw data processing, and the production of Monte Carlo simulation.
All these topics are addressed by exploiting in the most efficient way the computing resources spread mainly across the US and the EU, thanks to the OSG and EGI facilities, including those available at CNAF.
\end{abstract}
\section{The XENON project}
\thispagestyle{empty}
The matter composition of the universe has been a debate topic
among scientists for centuries. In the last couple of decades a series
of astronomical and astrophysical measurements have corroborated
the hypothesis that ordinary matter (e.g. electrons, quarks, neutrinos) represents only 15\% of the total matter in the universe.
The remaining 85\% is thought to be made of a
new, yet-undiscovered exotic species of elementary particles called
dark matter. This indirect evidence of its existence has
triggered a worldwide effort to try to observe its interactions with
ordinary matter in extremely sensitive detectors, but its nature is
still a mystery.
The XENON experimental program \cite{225, mc, instr-1T} is searching
for weakly interacting massive particles (WIMPs), hypothetical
particles that, if existing, could account for dark matter and
that might interact with ordinary matter through nuclear recoil.
XENON1T is the third generation of the experimental
program; it completed data taking at the end of 2018, setting the best worldwide limit on the interaction cross-section of WIMPs with nucleons.
The experiment employs a dual-phase (liquid-gas) xenon
time projection chamber (TPC) featuring two tonnes of ultrapure liquid xenon as the target for WIMPs. The detector is designed to be sensitive to the rare nuclear recoils of xenon
nuclei possibly induced by WIMPs scattering within the detector.
The TPC is surrounded by a water-based muon veto (MV). Each
sub-detector is read out by its own data acquisition system (DAQ).
The detector is located underground at the INFN Laboratori Nazionali
del Gran Sasso in Italy to shield the experiment from cosmic rays.
XENON1T is an order of magnitude larger than any of its predecessor
experiments. This upscaling in detector size produced a
proportional increase in the data rate and computing needs of
the collaboration. The size of the data set required the collaboration to transition from a centralized computing model, in which the entire dataset is stored on a single local facility, to one in which the data are distributed across the resources of the various collaborating institutions. Similarly,
the computing requirements called for incorporating distributed
resources, such as the Open Science Grid (OSG) \cite{osg} and the European
Grid Infrastructure (EGI) \cite{egi}, for main computing tasks,
e.g. initial data processing and Monte Carlo production.
\section{XENON1T}
As far as the data flow is concerned, the XENON1T experiment uses a DAQ machine hosted in the XENON1T service building underground to acquire data. The DAQ rate in dark matter (DM) mode is $\sim$1.3 TB/day, while in calibration mode it can be significantly larger: up to $\sim$13 TB/day.
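As a rough, purely illustrative estimate of how this rate translates into the overall volume quoted in the abstract,
\[
1.3\ \mathrm{TB/day}\times 365\ \mathrm{days} \simeq 0.5\ \mathrm{PB/year}
\]
for DM-mode data alone; the calibration periods, taken at up to ten times that rate, bring the total to the order of 1 PB/year.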
A significant challenge for the collaboration has been that there is
no single institution that has the capacity to store the entire data set.
This requires the data to either be stored in a cloud environment
or be distributed across various collaboration institutions. Storing
the data in a cloud environment is prohibitively expensive at this
point. The data set size and the network traffic charges would
consume the entire computing budget several times over.
The only feasible option was to distribute the data across several
computing facilities associated with collaboration institutions.
The raw data are copied into {\it Rucio}, a data handling system. There are several Rucio endpoints, or Rucio storage elements (RSEs), around the world, including LNGS, NIKHEF, Lyon and Chicago. The raw data are replicated in at least two locations, and there are two mirrored tape backups, at CNAF and in Stockholm, with 5.6 PB in total.
When the data have to be processed, they are first copied onto the Chicago storage and then processed using the OSG. The processed data are then copied back to Chicago and become available for analysis.
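As a purely illustrative sketch (the scope, dataset name and RSE expression below are hypothetical and not necessarily the ones used by the collaboration), a replication policy of the kind described above can be expressed with the Rucio Python client roughly as follows:
\begin{verbatim}
# Hypothetical sketch using the Rucio Python client: ask Rucio to
# keep two disk replicas of a raw-data dataset on any of the listed
# storage elements (RSE names are made up for illustration).
from rucio.client.ruleclient import RuleClient

rule_client = RuleClient()
rule_client.add_replication_rule(
    dids=[{"scope": "xenon_raw", "name": "raw_run_000123"}],
    copies=2,                                   # at least two replicas
    rse_expression="LNGS|NIKHEF|LYON|CHICAGO",  # union of candidate RSEs
    lifetime=None,                              # keep replicas indefinitely
)
\end{verbatim}
The tape copies at CNAF and Stockholm would be handled by analogous rules targeting tape RSEs.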
In addition, each user has a 100 GB home space available on a 10 TB disk. A dedicated server takes care of the data transfer to/from remote facilities. A high-memory, 32-core machine is used to host several virtual machines, each one running a dedicated service: the code (data processing and Monte Carlo) and document repositories on SVN/Git, the run database, the on-line monitoring web interface, the XENON wiki and the Grid UI.
In fig. \ref{fig:xenonCM} we show a sketch of the XENON computing model and data management scheme.
\begin{figure}[t]
\begin{center}
\includegraphics[width=15cm]{xenon-computing-model.pdf}
\end{center}
\caption{Overview of the XENON1T Job and Data Management Scheme.}
\label{fig:xenonCM}
\end{figure}
The resources at CNAF (CPU and disk) have so far been used mainly for the Monte Carlo simulation of the detector (GEANT4 model of the detector and waveform generator), and for the ``real-data'' storage and processing. %Currently we used about XX TB of the XX TB available for 2018.
%For this purpose,
Some improvements were recently made by the Computing Working Group of the experiment. At the beginning, the CNAF disk was not integrated into the Rucio framework, because it was not large enough to justify the amount of work needed for the integration (it was 60 TB up to 2016). For this reason we requested an additional 90 TB for 2018, to reach a total of 200 TB, which the collaboration considers large enough to justify a full integration of the disk space.\\
The second improvement has been to perform the data processing on both the US and the EU Grid (previously it was done in the US only). Some software tools were successfully developed and tested during 2017, and they are now used for fully distributed massive data processing. To fulfil this goal, we requested an additional 300 HS06 of CPU, for a total of 1000 HS06, equivalent to the resources available on the US OSG.\\
The request for tape (1000 TB) in 2018 was made to fulfil the requirement by INFN to have a copy of all the XENON1T data in Italy, as discussed within the INFN Astroparticle Committee. A dedicated automatic data transfer to tape has been developed by CNAF.
The computing model described in this report allowed for fast and effective processing and analysis of the first XENON1T data in 2017, and of the final data in 2018, which led to the best limit in the search for WIMPs so far \cite{sr0, sr1}.
\section{XENONnT}
The planning and initial implementation of the data and job management for the next-generation experiment, XENONnT, has already begun. The experiment is currently under construction at LNGS, and it is scheduled to start taking data by the end of 2019. The current plan is to increase the TPC volume by a factor of 3
to have 6 t of active liquid xenon. The new experimental setup will
also have an additional veto layer called Neutron Veto.
The larger detector will require modifications to the current data
and job management. The processing chain and its products will
undergo significant changes. The larger data volume
and improved knowledge about data access patterns have informed
changes to the data organization. Rather than store the full raw
dataset for later re-processing, the data coming from the detector
will be filtered to only include interesting events. The full raw
dataset will only be stored on tape at one or two sites, where one
of these sites is for long-term archival. The filtered raw dataset will
be stored at OSG/EGI sites for later reprocessing. The overall data
volume of the reduced dataset will be similar to the current data
volume of XENON1T.
\section{References}
\begin{thebibliography}{9}
\bibitem{225} Aprile E. et al (XENON Collaboration), {\it Dark Matter Results from 225 Live Days of XENON100 Data}, Phys. Rev. Lett. {\bf 109} (2012), 181301
\bibitem{mc} Aprile E. et al (XENON Collaboration), {\it Physics reach of the XENON1T dark matter experiment}, JCAP {\bf 04} (2016), 027
\bibitem{instr-1T} Aprile E. et al (XENON Collaboration), {\it The XENON1T Dark Matter Experiment}, Eur. Phys. J. C77 {\bf 12} (2017), 881
\bibitem{osg} Pordes R. et al, {\it The open science grid}, J. Phys.: Conf. Ser. {\bf 78} (2007), 012057
\bibitem{egi} Kranzlmüller D. et al, {\it The European Grid Initiative (EGI)}, in Remote Instrumentation and Virtual Laboratories, Springer US, Boston, MA (2010), 61--66
\bibitem{sr0} Aprile E. et al (XENON Collaboration), {\it First Dark Matter Search Results from the XENON1T Experiment }, Phys. Rev. Lett. {\bf 119} (2017), 181301
\bibitem{sr1} Aprile E. et al (XENON Collaboration), {\it Dark Matter Search Results from a One Ton-Year Exposure of XENON1T}, Phys. Rev. Lett. {\bf 121} (2018), 111302
\end{thebibliography}
\end{document}
File added