\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{The INFN-Tier 1: Network and Security}
\author{S.~Zani$^1$, D.~De~Girolamo$^1$, L.~Chiarelli$^{1,2}$, V.~Ciaschini$^1$}
\address{$^1$ INFN-CNAF, Bologna, IT}
CNAF is connected to the WAN via GARR/GEANT essentially with two physical links:
\begin{itemize}
\item General Internet: the general IP link is 20 Gbps (2$\times$10 Gbps) via GARR and GEANT.
\item LHCOPN/LHCONE: the link to WLCG destinations is a 200 Gbps (2$\times$100 Gbps) link shared between the LHC-OPN network, for traffic with the Tier 0 (CERN) and the other Tier 1 sites, and the LHCONE network, mainly for traffic with the Tier 2 centres. Since Summer 2018, the dedicated LHCOPN link to CERN (from the Milan GARR POP) has been upgraded to 2$\times$100 Gbps, while the peering to LHCONE is at 100 Gbps (from the Milan GARR POP and the GEANT GARR POP).
\end{itemize}
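The link arithmetic above can be tallied in a few lines; this is a purely illustrative sketch (the dictionary layout and names are ours), with the figures taken from the text.

```python
# Minimal tally of the CNAF WAN links listed above.
# Each entry is (lane count, per-lane rate in Gbps); figures from the text.
links = {
    "General IP (GARR/GEANT)": (2, 10),        # 2x10 Gbps
    "LHCOPN/LHCONE (WLCG, shared)": (2, 100),  # 2x100 Gbps
}

def capacity_gbps(lanes: int, lane_gbps: int) -> int:
    """Aggregate link capacity as lane count times per-lane rate."""
    return lanes * lane_gbps

for name, (lanes, rate) in links.items():
    print(f"{name}: {capacity_gbps(lanes, rate)} Gbps")

total = sum(capacity_gbps(lanes, rate) for lanes, rate in links.values())
print(f"Total WAN capacity: {total} Gbps")  # 220 Gbps
```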
As shown in the figures~\ref{lhc-opn-usage} and \ref{gpn-usage}, the network usage ...
\end{figure}
Currently the dedicated LHCOPN bandwidth to CERN is 100 Gbps, with a backup link of 4$\times$10 Gbps. During 2019 the configuration will change: 2$\times$100 Gb/s links to the two CERN POPs will be provided, granting better resiliency and potentially a full 200 Gbps with CERN and the Tier 1s.
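The capacity change can be summarised as plain arithmetic on the figures quoted above; the per-POP link counts are our reading of the text, not an official configuration.

```python
# Current vs. planned LHCOPN capacity to CERN, per the figures above.
current_primary_gbps = 100    # single dedicated 100 Gbps link
current_backup_gbps = 4 * 10  # 4x10 Gbps backup link
planned_gbps = 2 * 100        # 2x100 Gbps, one link per CERN POP (assumed split)

print(f"current: {current_primary_gbps} Gbps (+{current_backup_gbps} Gbps backup)")
print(f"planned: {planned_gbps} Gbps")  # 200 Gbps
```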
\section{Data Center Interconnect with CINECA}
The Cloud Express 2 are transponders with 12 x 100 Gigabit Ethernet interfaces ...
The latency introduced by each CX1200 is $\sim 5~\mu$s, and the total RTT (Round Trip Time) between servers at CNAF and servers at CINECA is 0.48 ms, comparable to what we observe on the LAN (0.28 ms).
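A rough latency budget follows from these numbers. The count of CX1200 traversals per round trip (four, i.e. two transponders crossed in each direction) is our assumption for illustration, not a figure from the text.

```python
# Rough latency budget for the CNAF-CINECA DCI, using the figures quoted above.
rtt_dci_us = 480.0  # 0.48 ms measured server-to-server RTT
rtt_lan_us = 280.0  # 0.28 ms reference RTT on the local LAN
cx1200_us = 5.0     # ~5 us added per CX1200 traversal
traversals = 4      # assumed: 2 transponders crossed in each direction

extra_us = rtt_dci_us - rtt_lan_us       # extra latency vs. the LAN
transponder_us = traversals * cx1200_us  # share attributable to the CX1200s
fiber_us = extra_us - transponder_us     # remainder: mostly fiber propagation

print(f"extra RTT over LAN: {extra_us:.0f} us")        # 200 us
print(f"transponder share:  {transponder_us:.0f} us")  # 20 us
print(f"fiber/other share:  {fiber_us:.0f} us")        # 180 us
```

Under this assumption the transponders account for only a small fraction of the extra round-trip time; most of it is fiber propagation.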
All worker nodes on the network segment at CINECA have IP addresses of the INFN Tier 1 network and are used as if they were installed at the Tier 1 facility (see fig.~\ref{cineca-schema}). The data access bandwidth is 400 Gbps, but it can scale up to 1.2 Tbps.
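The 1.2 Tbps headroom follows directly from the 100 GbE lane count of the CX1200 transponders (12 x 100 GbE, as noted above); the number of lanes currently lit (four) is our inference from the 400 Gbps figure, not stated explicitly.

```python
# Why 400 Gbps today but 1.2 Tbps of headroom on the CNAF-CINECA DCI.
lane_gbps = 100   # each CX1200 lane is 100 Gigabit Ethernet
lanes_total = 12  # 12 x 100 GbE interfaces per transponder (from the text)
lanes_lit = 4     # assumed: inferred from the 400 Gbps figure

current_gbps = lanes_lit * lane_gbps       # 400 Gbps
max_tbps = lanes_total * lane_gbps / 1000  # 1.2 Tbps
print(f"current: {current_gbps} Gbps, maximum: {max_tbps} Tbps")
```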
This DCI interconnection was implemented rapidly as a proof of concept (the first time this technology has been used in Italy). It is now in production and, as it is becoming a stable and relevant asset for CNAF (fig.~\ref{cineca-traffic}), we plan to add a second optical fiber between CNAF and CINECA for resiliency.
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{cineca-schema.png}\hspace{2pc}%
\caption{\label{cineca-schema}INFN Tier 1 – CINECA Data Center Interconnection.}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=30pc]{cineca.png}\hspace{2pc}%
\caption{\label{cineca-traffic}INFN Tier 1 – CINECA link usage.}
\end{center}
\end{figure}