\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\author{S.~Zani$^1$, D.~De~Girolamo$^1$, L.~Chiarelli$^{1,2}$, V.~Ciaschini$^1$}
\address{$^1$ INFN-CNAF, Bologna, IT}
\address{$^2$ GARR Consortium, Roma, IT}
\ead{stefano.zani@cnaf.infn.it}
\section{Introduction}
The Network unit manages the wide area and local area connections of CNAF.
Moreover, it is responsible for the security of the center, and it contributes to the management of the local CNAF services
(e.g. DNS, Windows domain, etc.) and of some of the INFN national ICT services. It also gives support to the GARR PoP hosted at CNAF.
The main PoP of the GARR network, based on a fully managed dark fiber infrastructure, is hosted inside the CNAF data center.
CNAF is connected to the WAN via GARR/GEANT essentially through two physical links:
\begin{itemize}
\item General Internet: the General IP link is a 20 Gbps (2x10 Gbps) link via GARR and GEANT;
\item LHCOPN/LHCONE: the link to WLCG destinations is a 200 Gbps (2x100 Gbps) link shared between the LHCOPN network, for traffic
with the Tier 0 (CERN) and the other Tier 1 sites, and the LHCONE network, mainly for traffic with the Tier 2 centers.
Since summer 2018, the dedicated LHCOPN link to CERN (from the Milan GARR PoP) has been upgraded to 2x100 Gbps,
while the peering to LHCONE is at 100 Gbps (from the Milan GARR PoP and the GEANT GARR PoP).
\end{itemize}
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{connection-schema.png}\hspace{2pc}%
%\begin{minipage}[b]{14pc}
\caption{\label{schema-rete}INFN-CNAF connection schema.}
%\end{minipage}
\end{center}
\end{figure}
As shown in Figures~\ref{lhc-opn-usage} and \ref{gpn-usage}, network usage is growing both on LHCOPN/LHCONE and on General IP,
even if, at the beginning of last year, the traffic was very low because of the flood that occurred in November 2017
(the computing center returned fully online during February 2018).
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{lhcone-opn.png}\hspace{2pc}%
\caption{\label{lhc-opn-usage}LHC OPN + LHC ONE link usage.}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{gpn.png}\hspace{2pc}%
\caption{\label{gpn-usage}General IP link usage.}
\end{center}
\end{figure}
Currently, the dedicated bandwidth for LHCOPN to CERN is 100 Gbps, with a backup link of 4x10 Gbps.
During 2019, the configuration will change and 2x100 Gbps links to the two CERN PoPs will be provided, in order to grant better resiliency
and to potentially reach a full 200 Gbps towards CERN and the Tier 1s.
\section{Data Center Interconnect with CINECA}
At the beginning of 2018, CNAF obtained from CINECA the use of 216 servers based on the Intel Xeon E5-2697 v4 CPU
(with 36 physical cores each), coming from partition 1 of the “Marconi” supercomputer, in phase-out for HPC workflows.
In order to integrate all of these computing resources into our farm, it has been fundamental to guarantee appropriate access bandwidth to the storage resources located at CNAF. This has been implemented, with the collaboration of GARR, using the Data Center Interconnect (DCI) technology
provided by a pair of Infinera Cloud Xpress 2 (CX1200) transponders.
Each Cloud Xpress 2 provides 12 x 100 Gigabit Ethernet interfaces on the LAN side and one LC fiber interface on the line side, capable of up to 1.2 Tbps over a single-mode fiber at a maximum distance of 100 km (CNAF and CINECA are 17 km apart). In the CNAF--CINECA case, the systems are configured for a 400 Gbps connection.
The latency introduced by each CX1200 is $\sim 5~\mu$s, and the total RTT (Round Trip Time) between servers at CNAF and servers at CINECA is 0.48 ms,
comparable to what we observe on the LAN (0.28 ms).
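This RTT is consistent with a simple back-of-the-envelope estimate (assuming a typical propagation delay of about $5~\mu$s per km in single-mode fiber, and counting the two CX1200 traversals in each direction):
\[
t_{\mathrm{extra}} \simeq 2 \times 17\,\mathrm{km} \times 5\,\mu\mathrm{s/km} + 4 \times 5\,\mu\mathrm{s} \approx 0.19\,\mathrm{ms},
\]
which, added to the LAN baseline of 0.28 ms, gives roughly 0.47 ms, in good agreement with the measured value.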
All worker nodes on the network segment at CINECA have IP addresses of the INFN Tier 1 network and are used as if they were installed
at the Tier 1 facility (see Figure~\ref{cineca-schema}). The data access bandwidth is 400 Gbps, but it can scale up to 1.2 Tbps.
This DCI interconnection has been implemented rapidly as a proof of concept
(this is the first time this technology has been used in Italy). Currently it is in production and, as it is becoming
a stable and relevant asset for CNAF (Figure~\ref{cineca-traffic}), we plan to add a second optical fiber between CNAF and CINECA for resiliency reasons.
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{cineca-schema.png}\hspace{2pc}%
\caption{\label{cineca-schema}INFN Tier 1–CINECA Data Center Interconnection.}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{cineca.png}\hspace{2pc}%
\caption{\label{cineca-traffic}INFN Tier 1–CINECA link usage.}
\end{center}
\end{figure}
\section{Security}
The network security policies are mainly implemented as hardware-based ACLs on the access router
and on the core switches (with dedicated ASICs on the devices).
The network group, in coordination with GARR-CERT and EGI-CSIRT, also takes care of security incidents at CNAF
(both for compromised systems or credentials and for known vulnerabilities of software and grid middleware), cooperating with the involved parties.
During 2018, the CNAF Security Group has been reorganized with the formal involvement of at least one representative
for each unit, in order to obtain stronger coordination on the implementation of security policies and a faster reaction to security incidents.
As always, in 2018 CNAF has had an important commitment to security, and it has been active on several fronts, as described in the following.
\subsection{“Misure Minime” Implementation}
CNAF has had an important role in determining how the whole INFN would implement compliance with the
“Misure Minime”\footnote{Misure Minime is a set of minimum ICT security measures to be adopted
by all the Italian public administrations.} regulation.
It actively contributed to the discussion and to the implementation guidelines for each OS,
and it had a central role in defining the Risk Management procedures, writing the prototype version and co-writing the final definition.
\subsection{Vulnerability scanning}
In an effort to monitor the security of the center, CNAF has started a campaign of systematic and periodic scanning of all of its machines,
personal and not, looking for vulnerabilities, in order to find and fix them before they can be actively exploited by an attacker.
As expected, this scanning brought to light a number of issues that were promptly corrected (when possible) or mitigated (when not), thus nipping a number of potential problems in the bud.
\subsection{Security Assessment}
In light of its growing importance, a security assessment of INDIGO IAM has also taken place.
Focused on testing the actual security of the product and on finding ways in which it could be exploited, this assessment brought to light a number of issues of varying importance, which have been reported to and discussed with the developers in order to increase the security and reliability of the product.
\subsection{Technology tracking}
A constant technology tracking activity on security tools and devices is ongoing.
In particular, meetings with some of the main Next-Generation Firewall vendors have been held in 2017 and in 2018.
During these two years, three Next-Generation Firewalls from Fortinet, Huawei and Palo Alto Networks have been tested on production links,
in order to define the fundamental characteristics to be included in the tender for the acquisition of the NG Firewall to be installed on the “General IP” wide area network link.
\end{document}