\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\usepackage{hyperref}
\begin{document}
\title{The INFN Information System}
\author{
S. Bovina$^1$,
M. Canaparo$^1$,
E. Capannini$^1$,
F. Capannini$^1$,
C. Galli$^1$,
G. Guizzunti$^1$,
B. Demin$^1$
}
\address{$^1$ INFN-CNAF, Bologna, IT}
\ead{
stefano.bovina@cnaf.infn.it,
marco.canaparo@cnaf.infn.it,
enrico.capannini@cnaf.infn.it,
fabio.capannini@cnaf.infn.it,
claudio.galli@cnaf.infn.it,
guido.guizzunti@cnaf.infn.it,
barbara.demin@cnaf.infn.it
}
\begin{abstract}
The mission of the Information System Service is the implementation, management and optimization of all the infrastructural and application components of the administrative services of the Institute. In order to guarantee high reliability and redundancy, the same systems are replicated in an analogous infrastructure at the National Laboratories of Frascati (LNF).
The Information System team manages all the administrative services of the Institute,
both from the hardware and the software point of view, and is in charge of carrying out several software projects.
The core of the Information System is made up of the salary and HR systems.
Connected to the core, there are several other systems reachable from a unique web portal:
firstly, the organizational chart system (GODiVA); secondly, the accounting, the time and attendance,
the trip and purchase order and the business intelligence systems.
Finally, there are other systems which manage the training of the employees, their subsidies, their timesheet, the official documents,
the computer protocol, the recruitment, the user support etc.
\end{abstract}
\section{Introduction}
The INFN Information System project was set up in 2001 with the purpose of digitizing and managing all the administrative and accounting processes of the INFN Institute,
and of carrying out a gradual dematerialization of documents.\\
In 2010, INFN decided to transfer the accounting system, based on the Oracle Business Suite (EBS) and the SUN Solaris operating system,
from the National Laboratories of Frascati (LNF) to CNAF, where the SUN Solaris platform was migrated to a RedHat Linux Cluster and implemented on commodity hardware.\\
The ``Information System'' service was officially established at CNAF in 2013 with the aim of developing, maintaining and coordinating many IT services which are critical
for INFN. Together with the corresponding office at the National Laboratories of Frascati, it is actively involved in fields related to INFN management and administration, developing tools for business intelligence and research quality assurance; it is also involved in the dematerialization process and in the provisioning of interfaces between users and INFN administration.\\
Over the years, other services have been added, leading to a complex infrastructure that covers all aspects of the working life of INFN personnel.
In 2018, the Information System service team at CNAF was composed of 8 people, both developers and system engineers.\\
\section{Infrastructure}
In 2018, the infrastructure-related activity was composed of various tasks that can be summarized as follows:
firstly, the consolidation of the Disaster Recovery site in Bari and the restoration of CNAF as primary site;
secondly, the finalization of the Puppet 3 phase-out and the related Foreman upgrades;
thirdly, the improvement of our ELK (Elasticsearch/Logstash/Kibana) and monitoring infrastructure; and finally, several compliance adjustments for the AGID ``Misure Minime'' and the GDPR.
\newline
After the complete overhaul and upgrade of the ELK stack to version 5 last year,
many activities have been carried out to enhance system and application monitoring using this set of tools.
To improve the discovery and resolution of problems, several views and dashboards (see Figure~\ref{fig:presenze_kibana}) have been created on Kibana,
and application logs have been deeply analyzed and customized to include more useful information.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.5]{presenze_kibana.png}
\end{center}
\caption{\label{fig:presenze_kibana} Time and attendance system manual squaring statistics on Kibana (ELK).}
\end{figure}
With the aim of enhancing our cron job management, improving its monitoring, avoiding job overlaps and identifying ``dead-man switches'',
a new cron job management tool has been adopted.
Cron job executions are available both on Kibana and on Grafana (as annotations),
so they can be correlated with system events (see Figure~\ref{fig:cronjob_annotation}); in the same way, software releases are also displayed on Grafana.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.5]{cronjob_annotation.png}
\end{center}
\caption{\label{fig:cronjob_annotation} Annotations for cronjobs on Grafana.}
\end{figure}
\newpage
Because of the recent regulations that came into force (the AGID ``Misure Minime'' and the GDPR), many audits and related adjustments were carried out, relying also on the official Center for Internet Security (CIS) guides and on OpenSCAP scans using the Payment Card Industry Data Security Standard (PCI-DSS) profile.
Afterwards, we introduced a proactive security model on some pilot projects, adopting tools for static code analysis and dependency scanning (see Figure~\ref{fig:deps_scan}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\textwidth]{deps_scan.png}
\end{center}
\caption{\label{fig:deps_scan} Dependencies scan tool in action on Gitlab-CI.}
\end{figure}
In addition to this, the Platform as a Service (PaaS) infrastructure based on RedHat OpenShift Origin (3.x) was upgraded to release 3.11
and signature and scan services were deployed at the container registry level for all container-based projects (see Figure~\ref{fig:container_ci}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\textwidth]{container_ci.png}
\end{center}
\caption{\label{fig:container_ci} Container registry details and related Gitlab-CI pipeline.}
\end{figure}
\newpage
In 2018, the activities related to the Oracle databases concerned their maintenance,
an initial analysis of the work needed to upgrade to later versions, and a study on how to achieve real-time replication
between the Oracle databases of the accounting application. Periodic recovery tests were also conducted on the Bari Disaster Recovery site.
\section{Time and attendance system improvements}
The time and attendance system allows employees to clock in and out electronically via swipe card.
The data is instantly transferred into a database and shown in a web-based application.
This system tracks working hours and offers employees a self-service portal that allows them to handle many time-tracking tasks on their own,
all subject to customizable approval workflows; these tasks include reviewing the hours they have worked, their current and future schedule, and requests for paid or unpaid leave.
In 2018, the activities related to the Time and Attendance system concerned both the introduction of new features and the modification of existing ones. Furthermore, the developers focused on improving the performance of the system through the optimization of some common procedures.
The Time and Attendance system was enabled to ``read'' codes entered together with the clock in/out: through this mechanism, employees can specify the reason for their leave of absence without using the web-based application.
Some modifications have been carried out to implement changes introduced in the national collective agreement. This activity included two new types of leave of absence and an extension from three to four months of the period for checking the average weekly working hours.
As concerns performance, the development team optimized the procedure that manages clocking in/out via the web portal, and the report that shows paid overtime aggregated by sector, employee and month.
\section{Oracle EBS improvements}
In 2018, a new Electronic Payments and Receipts (EPR) Framework was introduced,
in compliance with the standard set by the Agency for Digital Italy (Agenzia per l'Italia Digitale, AgID) and transmitted through SIOPE+.
SIOPE+ is the new infrastructure that enables general government entities and banks that provide treasury services
to exchange information, with the aim of improving the quality of the data used for monitoring government expenditure and tracking the payment times to firms that supply general government entities.
SIOPE+ responds to the following needs:
\begin{itemize}
\item availability of detailed information on payments made by general government bodies without burdening the entities involved in the flow of outlays and collections. This will make it easier to obtain information on the payments of trade receivables and, more broadly, to monitor public sector financial flows in real time.
\item standardization of information exchange between government bodies and treasury service providers by adopting a single digital standard OPI (Ordinativo di Pagamento e Incasso) in place of the previous local standard OIL (Ordinativo Informatico Locale), with the aim of raising the quality of treasury services, facilitating further integration between the accounting systems of the entities and between payment processes, and supporting the development of electronic payment services.
\end{itemize}
\section{Business Intelligence improvements}
In 2018, the main task was investigating alternative technical solutions to the current Business Intelligence installation,
with the aim of reducing licensing costs, while remaining on an open source solution and preserving functionalities and compatibility with other INFN tools and platforms.
At the end of this activity, the current solution, based on the TIBCO platform, was confirmed as the best one.
%At present, we are converting reports that are using deprecated features. Once all reports are converted, the Business Intelligence infrastructure will be upgraded to the last version.
\section{Contratti}
Contratti (previously named Repertorio Contratti) is a new Java application (in test phase) for the long-term preservation of contracts between INFN and external suppliers, based on Alfresco and the mDM protocol.
Each contract is enriched with a full set of metadata which describe the contract in its relevant parts, and suppliers are extracted automatically from the central supplier registry, together with the details of the contract signer.
Last year, several bug fixes and improvements were made in order to meet our customers' requirements. The improvements can be summarized as follows:
\begin{enumerate}
\item integration with mDM protocol:
\begin{itemize}
\item it is now possible to manage a set of folders in which to store the contract file, as in a full folder explorer;
\item before the contract file is stored in mDM, a protocol signature is written onto the document without invalidating the PAdES (PDF Advanced Electronic Signatures) signature of the issuer.
\end{itemize}
\item complete refactoring of the ACLs mechanism used to manage document and app permissions;
\item added email notifications in order to send a contract link to a set of recipients, extracted automatically from GODiVA;
\item it is now possible to print a label containing the relevant characteristics of the contract;
\item complete UI restyling in order to improve both readability and usability of the product.
\end{enumerate}
\end{document}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{Preparing a paper using \LaTeXe\ for publication in \jpcs}
\author{Jacky Mucklow}
\address{Production Editor, \jpcs, \iopp, Dirac House, Temple Back, Bristol BS1~6BE, UK}
\ead{jacky.mucklow@iop.org}
\begin{abstract}
All articles {\it must} contain an abstract. This document describes the preparation of a conference paper to be published in \jpcs\ using \LaTeXe\ and the \cls\ class file. The abstract text should be formatted using 10 point font and indented 25 mm from the left margin. Leave 10 mm space after the abstract before you begin the main text of your article. The text of your article should start on the same page as the abstract. The abstract follows the addresses and should give readers concise information about the content of the article and indicate the main results obtained and conclusions drawn. As the abstract is not part of the text it should be complete in itself; no table numbers, figure numbers, references or displayed mathematical expressions should be included. It should be suitable for direct inclusion in abstracting services and should not normally exceed 200 words. The abstract should generally be restricted to a single paragraph. Since contemporary information-retrieval systems rely heavily on the content of titles and abstracts to identify relevant articles in literature searches, great care should be taken in constructing both.
\end{abstract}
\section{Introduction}
These guidelines show how to prepare articles for publication in \jpcs\ using \LaTeX\ so they can be published quickly and accurately. Articles will be refereed by the \corg s but the accepted PDF will be published with no editing, proofreading or changes to layout. It is, therefore, the author's responsibility to ensure that the content and layout are correct. This document has been prepared using \cls\ so serves as a sample document. The class file and accompanying documentation are available from \verb"http://jpcs.iop.org".
\section{Preparing your paper}
\verb"jpconf" requires \LaTeXe\ and can be used with other package files such
as those loading the AMS extension fonts
\verb"msam" and \verb"msbm" (these fonts provide the
blackboard bold alphabet and various extra maths symbols as well as
symbols useful in figure captions); an extra style file \verb"iopams.sty" is
provided to load these packages and provide extra definitions for bold Greek letters.
\subsection{Headers, footers and page numbers}
Authors should {\it not} add headers, footers or page numbers to the pages of their article---they will
be added by \iopp\ as part of the production process.
\subsection{{\cls\ }package options}
The \cls\ class file has two options `a4paper' and `letterpaper':
\begin{verbatim}
\documentclass[a4paper]{jpconf}
\end{verbatim}
or \begin{verbatim}
\documentclass[letterpaper]{jpconf}
\end{verbatim}
\begin{center}
\begin{table}[h]
\caption{\label{opt}\cls\ class file options.}
%\footnotesize\rm
\centering
\begin{tabular}{@{}*{7}{l}}
\br
Option&Description\\
\mr
\verb"a4paper"&Set the paper size and margins for A4 paper.\\
\verb"letterpaper"&Set the paper size and margins for US letter paper.\\
\br
\end{tabular}
\end{table}
\end{center}
The default paper size is A4 (i.e., the default option is {\tt a4paper}) but this can be changed to Letter by
using \verb"\documentclass[letterpaper]{jpconf}". It is essential that you do not put macros into the text which alter the page dimensions.
\section{The title, authors, addresses and abstract}
The code for setting the title page information is slightly different from
the normal default in \LaTeX\ but please follow these instructions as carefully as possible so all articles within a conference have the same style to the title page.
The title is set in bold unjustified type using the command
\verb"\title{#1}", where \verb"#1" is the title of the article. The
first letter of the title should be capitalized with the rest in lower case.
The next information required is the list of all authors' names followed by
the affiliations. For the authors' names type \verb"\author{#1}",
where \verb"#1" is the
list of all authors' names. The style for the names is initials then
surname, with a comma after all but the last
two names, which are separated by `and'. Initials should {\it not} have
full stops. First names may be used if desired. The command \verb"\maketitle" is not
required.
The addresses of the authors' affiliations follow the list of authors.
Each address should be set by using
\verb"\address{#1}" with the address as the single parameter in braces.
If there is more
than one address then a superscripted number, followed by a space, should come at the start of
each address. In this case each author should also have a superscripted number or numbers following their name to indicate which address is the appropriate one for them.
Please also provide e-mail addresses for any or all of the authors using an \verb"\ead{#1}" command after the last address. \verb"\ead{#1}" provides the text Email: so \verb"#1" is just the e-mail address or a list of emails.
The abstract follows the addresses and
should give readers concise information about the content
of the article and should not normally exceed 200
words. {\bf All articles must include an abstract}. To indicate the start
of the abstract type \verb"\begin{abstract}" followed by the text of the
abstract. The abstract should normally be restricted
to a single paragraph and is terminated by the command
\verb"\end{abstract}"
\subsection{Sample coding for the start of an article}
\label{startsample}
The code for the start of a title page of a typical paper might read:
\begin{verbatim}
\title{The anomalous magnetic moment of the
neutrino and its relation to the solar neutrino problem}
\author{P J Smith$^1$, T M Collins$^2$,
R J Jones$^{3,}$\footnote[4]{Present address:
Department of Physics, University of Bristol, Tyndalls Park Road,
Bristol BS8 1TS, UK.} and Janet Williams$^3$}
\address{$^1$ Mathematics Faculty, Open University,
Milton Keynes MK7~6AA, UK}
\address{$^2$ Department of Mathematics,
Imperial College, Prince Consort Road, London SW7~2BZ, UK}
\address{$^3$ Department of Computer Science,
University College London, Gower Street, London WC1E~6BT, UK}
\ead{williams@ucl.ac.uk}
\begin{abstract}
The abstract appears here.
\end{abstract}
\end{verbatim}
\section{The text}
The text of the article should be produced using standard \LaTeX\ formatting. Articles may be divided into sections and subsections, but the length limit provided by the \corg\ should be adhered to.
\subsection{Acknowledgments}
Authors wishing to acknowledge assistance or encouragement from
colleagues, special work by technical staff or financial support from
organizations should do so in an unnumbered Acknowledgments section
immediately following the last numbered section of the paper. The
command \verb"\ack" sets the acknowledgments heading as an unnumbered
section.
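For example, a minimal sketch of such a section (the wording here is purely illustrative) might read:
\begin{verbatim}
\ack
The authors wish to thank their colleagues for helpful
discussions and organization X for financial support.
\end{verbatim}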
\subsection{Appendices}
Technical detail that it is necessary to include, but that interrupts
the flow of the article, may be consigned to an appendix.
Any appendices should be included at the end of the main text of the paper, after the acknowledgments section (if any) but before the reference list.
If there are two or more appendices they will be called Appendix A, Appendix B, etc.
Numbered equations will be in the form (A.1), (A.2), etc,
figures will appear as figure A1, figure B1, etc and tables as table A1,
table B1, etc.
The command \verb"\appendix" is used to signify the start of the
appendixes. Thereafter \verb"\section", \verb"\subsection", etc, will
give headings appropriate for an appendix. To obtain a simple heading of
`Appendix' use the code \verb"\section*{Appendix}". If it contains
numbered equations, figures or tables the command \verb"\appendix" should
precede it and \verb"\setcounter{section}{1}" must follow it.
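For instance, the two cases just described might be coded as follows (the section title is purely illustrative):
\begin{verbatim}
\appendix
\section{Details of the numerical method}

% or, for a single appendix with the simple heading
% `Appendix' containing numbered equations or tables:
\appendix
\section*{Appendix}
\setcounter{section}{1}
\end{verbatim}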
\section{References}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In the online version of \jpcs\ references will be linked to their original source or to the article within a secondary service such as INSPEC or ChemPort wherever possible. To facilitate this linking extra care should be taken when preparing reference lists.
Two different styles of referencing are in common use: the Harvard alphabetical system and the Vancouver numerical system. For \jpcs, the Vancouver numerical system is preferred but authors should use the Harvard alphabetical system if they wish to do so. In the numerical system references are numbered sequentially throughout the text within square brackets, like this [2], and one number can be used to designate several references.
\subsection{Using \BibTeX}
We highly recommend the {\ttfamily\textbf\selectfont iopart-num} \BibTeX\ package by Mark~A~Caprio \cite{iopartnum}, which is included with this documentation.
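A minimal sketch of its use, assuming the references are collected in a \BibTeX\ database called \verb"refs.bib", is:
\begin{verbatim}
\bibliographystyle{iopart-num}
\bibliography{refs}
\end{verbatim}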
\subsection{Reference lists}
A complete reference should provide the reader with enough information to locate the article concerned, whether published in print or electronic form, and should, depending on the type of reference, consist of:
\begin{itemize}
\item name(s) and initials;
\item date published;
\item title of journal, book or other publication;
\item titles of journal articles may also be included (optional);
\item volume number;
\item editors, if any;
\item town of publication and publisher in parentheses for {\it books};
\item the page numbers.
\end{itemize}
Up to ten authors may be given in a particular reference; where
there are more than ten only the first should be given followed by
`{\it et al}'. If an author is unsure of a particular journal's abbreviated title it is best to leave the title in
full. The terms {\it loc.\ cit.\ }and {\it ibid.\ }should not be used.
Unpublished conferences and reports should generally not be included
in the reference list and articles in the course of publication should
be entered only if the journal of publication is known.
A thesis submitted for a higher degree may be included
in the reference list if it has not been superseded by a published
paper and is available through a library; sufficient information
should be given for it to be traced readily.
\subsection{Formatting reference lists}
Numeric reference lists should contain the references within an unnumbered section (such as \verb"\section*{References}"). The
reference list itself is started by the code
\verb"\begin{thebibliography}{<num>}", where \verb"<num>" is the largest
number in the reference list and is completed by
\verb"\end{thebibliography}".
Each reference starts with \verb"\bibitem{<label>}", where `label' is the label used for cross-referencing. Each \verb"\bibitem" should only contain a reference to a single article (or a single article and a preprint reference to the same article). When one number actually covers a group of two or more references to different articles, \verb"\nonum"
should replace \verb"\bibitem{<label>}" at
the start of each reference in the group after the first.
For an alphabetic reference list use \verb"\begin{thereferences}" ... \verb"\end{thereferences}" instead of the
`thebibliography' environment and each reference can start with just \verb"\item" instead of \verb"\bibitem{label}"
as cross referencing is less useful for alphabetic references.
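For example, an alphabetic list containing a single book reference might be coded as:
\begin{verbatim}
\begin{thereferences}
\item Kurata M 1982 {\it Numerical Analysis for
Semiconductor Devices} (Lexington, MA: Heath)
\end{thereferences}
\end{verbatim}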
\subsection {References to printed journal articles}
A normal reference to a journal article contains three changes of font (see table \ref{jfonts}) and is constructed as follows:
\begin{itemize}
\item the authors should be in the form surname (with only the first letter capitalized) followed by the initials with no periods after the initials. Authors should be separated by a comma except for the last two which should be separated by `and' with no comma preceding it;
\item the article title (if given) should be in lower case letters, except for an initial capital, and should follow the date;
\item the journal title is in italic and is abbreviated. If a journal has several parts denoted by different letters the part letter should be inserted after the journal in Roman type, e.g. {\it Phys. Rev.} A;
\item the volume number should be in bold type;
\item both the initial and final page numbers should be given where possible. The final page number should be in the shortest possible form and separated from the initial page number by an en rule `-- ', e.g. 1203--14, i.e. the numbers `12' are not repeated.
\end{itemize}
A typical (numerical) reference list might begin
\medskip
\begin{thebibliography}{9}
\item Strite S and Morkoc H 1992 {\it J. Vac. Sci. Technol.} B {\bf 10} 1237
\item Jain S C, Willander M, Narayan J and van Overstraeten R 2000
{\it J. Appl. Phys}. {\bf 87} 965
\item Nakamura S, Senoh M, Nagahama S, Iwase N, Yamada T, Matsushita T, Kiyoku H
and Sugimoto Y 1996 {\it Japan. J. Appl. Phys.} {\bf 35} L74
\item Akasaki I, Sota S, Sakai H, Tanaka T, Koike M and Amano H 1996
{\it Electron. Lett.} {\bf 32} 1105
\item O'Leary S K, Foutz B E, Shur M S, Bhapkar U V and Eastman L F 1998
{\it J. Appl. Phys.} {\bf 83} 826
\item Jenkins D W and Dow J D 1989 {\it Phys. Rev.} B {\bf 39} 3317
\end{thebibliography}
\smallskip
\noindent which would be obtained by typing
\begin{verbatim}
\begin{thebibliography}{9}
\item Strite S and Morkoc H 1992 {\it J. Vac. Sci. Technol.} B {\bf 10} 1237
\item Jain S C, Willander M, Narayan J and van Overstraeten R 2000
{\it J. Appl. Phys}. {\bf 87} 965
\item Nakamura S, Senoh M, Nagahama S, Iwase N, Yamada T, Matsushita T, Kiyoku H
and Sugimoto Y 1996 {\it Japan. J. Appl. Phys.} {\bf 35} L74
\item Akasaki I, Sota S, Sakai H, Tanaka T, Koike M and Amano H 1996
{\it Electron. Lett.} {\bf 32} 1105
\item O'Leary S K, Foutz B E, Shur M S, Bhapkar U V and Eastman L F 1998
{\it J. Appl. Phys.} {\bf 83} 826
\item Jenkins D W and Dow J D 1989 {\it Phys. Rev.} B {\bf 39} 3317
\end{thebibliography}
\end{verbatim}
\begin{center}
\begin{table}[h]
\centering
\caption{\label{jfonts}Font styles for a reference to a journal article.}
\begin{tabular}{@{}l*{15}{l}}
\br
Element&Style\\
\mr
Authors&Roman type\\
Date&Roman type\\
Article title (optional)&Roman type\\
Journal title&Italic type\\
Volume number&Bold type\\
Page numbers&Roman type\\
\br
\end{tabular}
\end{table}
\end{center}
\subsection{References to \jpcs\ articles}
Each conference proceeding published in \jpcs\ will be a separate {\it volume};
references should follow the style for conventional printed journals. For example:\vspace{6pt}
\numrefs{1}
\item Douglas G 2004 \textit{J. Phys.: Conf. Series} \textbf{1} 23--36
\endnumrefs
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{References to preprints}
For preprints there are two distinct cases:
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item Where the article has been published in a journal and the preprint is supplementary reference information. In this case it should be presented as:
\medskip
\numrefs{1}
\item Kunze K 2003 T-duality and Penrose limits of spatially homogeneous and inhomogeneous cosmologies {\it Phys. Rev.} D {\bf 68} 063517 ({\it Preprint} gr-qc/0303038)
\endnumrefs
\item Where the only reference available is the preprint. In this case it should be presented as
\medskip
\numrefs{1}
\item Milson R, Coley A, Pravda V and Pravdova A 2004 Alignment and algebraically special tensors {\it Preprint} gr-qc/0401010
\endnumrefs
\end{enumerate}
\subsection{References to electronic-only journals}
In general article numbers are given, and no page ranges, as most electronic-only journals start each article on page 1.
\begin{itemize}
\item For {\it New Journal of Physics} (article number may have from one to three digits)
\numrefs{1}
\item Fischer R 2004 Bayesian group analysis of plasma-enhanced chemical vapour deposition data {\it New. J. Phys.} {\bf 6} 25
\endnumrefs
\item For SISSA journals the volume is divided into monthly issues and these form part of the article number
\numrefs{2}
\item Horowitz G T and Maldacena J 2004 The black hole final state {\it J. High Energy Phys.} JHEP02(2004)008
\item Bentivegna E, Bonanno A and Reuter M 2004 Confronting the IR fixed point cosmology with high-redshift observations {\it J. Cosmol. Astropart. Phys.} JCAP01(2004)001
\endnumrefs
\end{itemize}
\subsection{References to books, conference proceedings and reports}
References to books, proceedings and reports are similar to journal references, but have
only two changes of font (see table~\ref{book}).
\begin{table}
\centering
\caption{\label{book}Font styles for references to books, conference proceedings and reports.}
\begin{tabular}{@{}l*{15}{l}}
\br
Element&Style\\
\mr
Authors&Roman type\\
Date&Roman type\\
Book title (optional)&Italic type\\
Editors&Roman type\\
Place (city, town etc) of publication&Roman type\\
Publisher&Roman type\\
Volume&Roman type\\
Page numbers&Roman type\\
\br
\end{tabular}
\end{table}
Points to note are:
\medskip
\begin{itemize}
\item Book titles are in italic and should be spelt out in full with initial capital letters for all except minor words. Words such as Proceedings, Symposium, International, Conference, Second, etc should be abbreviated to {\it Proc.}, {\it Symp.}, {\it Int.}, {\it Conf.}, {\it 2nd}, respectively, but the rest of the title should be given in full, followed by the date of the conference and the town or city where the conference was held. For Laboratory Reports the Laboratory should be spelt out wherever possible, e.g. {\it Argonne National Laboratory Report}.
\item The volume number, for example vol 2, should be followed by the editors, if any, in a form such as `ed A J Smith and P R Jones'. Use {\it et al} if there are more than two editors. Next comes the town of publication and publisher, within brackets and separated by a colon, and finally the page numbers preceded by p if only one number is given or pp if both the initial and final numbers are given.
\end{itemize}
Examples taken from published papers:
\medskip
\numrefs{99}
\item Kurata M 1982 {\it Numerical Analysis for Semiconductor Devices} (Lexington, MA: Heath)
\item Selberherr S 1984 {\it Analysis and Simulation of Semiconductor Devices} (Berlin: Springer)
\item Sze S M 1969 {\it Physics of Semiconductor Devices} (New York: Wiley-Interscience)
\item Dorman L I 1975 {\it Variations of Galactic Cosmic Rays} (Moscow: Moscow State University Press) p 103
\item Caplar R and Kulisic P 1973 {\it Proc. Int. Conf. on Nuclear Physics (Munich)} vol 1 (Amsterdam: North-Holland/American Elsevier) p 517
\item Cheng G X 2001 {\it Raman and Brillouin Scattering-Principles and Applications} (Beijing: Scientific)
\item Szytula A and Leciejewicz J 1989 {\it Handbook on the Physics and Chemistry of Rare Earths} vol 12, ed K A Gschneidner Jr and L Erwin (Amsterdam: Elsevier) p 133
\item Kuhn T 1998 {\it Density matrix theory of coherent ultrafast dynamics Theory of Transport Properties of Semiconductor Nanostructures} (Electronic Materials vol 4) ed E Sch\"oll (London: Chapman and Hall) chapter 6 pp 173--214
\endnumrefs
\section{Tables and table captions}
Tables should be numbered serially and referred to in the text
by number (table 1, etc, {\bf rather than} tab. 1). Each table should be a float and be positioned within the text at the most convenient place near to where it is first mentioned in the text. It should have an
explanatory caption which should be as concise as possible.
\subsection{The basic table format}
The standard form for a table is:
\begin{verbatim}
\begin{table}
\caption{\label{label}Table caption.}
\begin{center}
\begin{tabular}{llll}
\br
Head 1&Head 2&Head 3&Head 4\\
\mr
1.1&1.2&1.3&1.4\\
2.1&2.2&2.3&2.4\\
\br
\end{tabular}
\end{center}
\end{table}
\end{verbatim}
The above code produces table~\ref{ex}.
\begin{table}[h]
\caption{\label{ex}Table caption.}
\begin{center}
\begin{tabular}{llll}
\br
Head 1&Head 2&Head 3&Head 4\\
\mr
1.1&1.2&1.3&1.4\\
2.1&2.2&2.3&2.4\\
\br
\end{tabular}
\end{center}
\end{table}
Points to note are:
\medskip
\begin{enumerate}
\item The caption comes before the table.
\item The normal style is for tables to be centred in the same way as
equations. This is accomplished
by using \verb"\begin{center}" \dots\ \verb"\end{center}".
\item Columns should, by default, be aligned left.
\item Tables should have only horizontal rules and no vertical ones. The rules at
the top and bottom are thicker than internal rules and are set with
\verb"\br" (bold rule).
The rule separating the headings from the entries is set with
\verb"\mr" (medium rule). These commands do not need a following double backslash.
\item Numbers in columns should be aligned as appropriate, usually on the decimal point;
to help do this a control sequence \verb"\lineup" has been defined
which sets \verb"\0" equal to a space the size of a digit, \verb"\m"
to be a space the width of a minus sign, and \verb"\-" to be a left
overlapping minus sign. \verb"\-" is for use in text mode while the other
two commands may be used in maths or text.
(\verb"\lineup" should only be used within a table
environment after the caption so that \verb"\-" has its normal meaning
elsewhere.) See table~\ref{tabone} for an example of a table where
\verb"\lineup" has been used.
\end{enumerate}
\begin{table}[h]
\caption{\label{tabone}A simple example produced using the standard table commands
and $\backslash${\tt lineup} to assist in aligning columns on the
decimal point. The width of the
table and rules is set automatically by the
preamble.}
\begin{center}
\lineup
\begin{tabular}{*{7}{l}}
\br
$\0\0A$&$B$&$C$&\m$D$&\m$E$&$F$&$\0G$\cr
\mr
\0\023.5&60 &0.53&$-20.2$&$-0.22$ &\01.7&\014.5\cr
\0\039.7&\-60&0.74&$-51.9$&$-0.208$&47.2 &146\cr
\0123.7 &\00 &0.75&$-57.2$&\m--- &--- &---\cr
3241.56 &60 &0.60&$-48.1$&$-0.29$ &41 &\015\cr
\br
\end{tabular}
\end{center}
\end{table}
\section{Figures and figure captions}
Figures must be included in the source code of an article at the appropriate place in the text, not grouped together at the end.
Each figure should have a brief caption describing it and, if
necessary, interpreting the various lines and symbols on the figure.
As much lettering as possible should be removed from the figure itself and
included in the caption. If a figure has parts, these should be
labelled ($a$), ($b$), ($c$), etc.
\Tref{blobs} gives the definitions for describing symbols and lines often
used within figure captions (more symbols are available
when using the optional packages loading the AMS extension fonts).
\begin{table}[h]
\caption{\label{blobs}Control sequences to describe lines and symbols in figure
captions.}
\begin{center}
\begin{tabular}{lllll}
\br
Control sequence&Output&&Control sequence&Output\\
\mr
\verb"\dotted"&\dotted &&\verb"\opencircle"&\opencircle\\
\verb"\dashed"&\dashed &&\verb"\opentriangle"&\opentriangle\\
\verb"\broken"&\broken&&\verb"\opentriangledown"&\opentriangledown\\
\verb"\longbroken"&\longbroken&&\verb"\fullsquare"&\fullsquare\\
\verb"\chain"&\chain &&\verb"\opensquare"&\opensquare\\
\verb"\dashddot"&\dashddot &&\verb"\fullcircle"&\fullcircle\\
\verb"\full"&\full &&\verb"\opendiamond"&\opendiamond\\
\br
\end{tabular}
\end{center}
\end{table}
Authors should try and use the space allocated to them as economically as possible. At times it may be convenient to put two figures side by side or the caption at the side of a figure. To put figures side by side, within a figure environment, put each figure and its caption into a minipage with an appropriate width (e.g. 3in or 18pc if the figures are of equal size) and then separate the figures slightly by adding some horizontal space between the two minipages (e.g. \verb"\hspace{.2in}" or \verb"\hspace{1.5pc}"). To get the caption at the side of the figure add the small horizontal space after the \verb"\includegraphics" command and then put the \verb"\caption" within a minipage of the appropriate width aligned bottom, i.e. \verb"\begin{minipage}[b]{3in}" etc (see code in this file used to generate figures 1--3).
Note that it may be necessary to adjust the size of the figures (using optional arguments to \verb"\includegraphics", for instance \verb"[width=3in]") to get your article to fit within your page allowance or to obtain good page breaks.
\begin{figure}[h]
\begin{minipage}{14pc}
\includegraphics[width=14pc]{name.eps}
\caption{\label{label}Figure caption for first of two sided figures.}
\end{minipage}\hspace{2pc}%
\begin{minipage}{14pc}
\includegraphics[width=14pc]{name.eps}
\caption{\label{label}Figure caption for second of two sided figures.}
\end{minipage}
\end{figure}
\begin{figure}[h]
\includegraphics[width=14pc]{name.eps}\hspace{2pc}%
\begin{minipage}[b]{14pc}\caption{\label{label}Figure caption for a narrow figure where the caption is put at the side of the figure.}
\end{minipage}
\end{figure}
Using the graphicx package figures can be included using code such as:
\begin{verbatim}
\begin{figure}
\begin{center}
\includegraphics{file.eps}
\end{center}
\caption{\label{label}Figure caption}
\end{figure}
\end{verbatim}
\section*{References}
\begin{thebibliography}{9}
\bibitem{iopartnum} IOP Publishing is grateful to Mark A Caprio, Center for Theoretical Physics, Yale University, for permission to include the {\tt iopart-num} \BibTeX\ package (version 2.0, December 21, 2006) with this documentation. Updates and new releases of {\tt iopart-num} can be found on \verb"www.ctan.org" (CTAN).
\end{thebibliography}
\end{document}
%!PS-Adobe-2.0 EPSF-1.2
%%BoundingBox: 71 71 217 217
%test.eps (dummy file containing box)
72 72 moveto
144 0 rlineto
0 144 rlineto
-144 0 rlineto
closepath
stroke
/Times-Roman findfont
12 scalefont
setfont
100 144 moveto
(reserved for figure) show
showpage
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\usepackage{url}
\usepackage{color, colortbl}
\definecolor{LightCyan}{rgb}{0.88,1,1}
\definecolor{LightYellow}{rgb}{1,1,0.88}
\definecolor{Red}{rgb}{1,0,0}
\definecolor{Green}{rgb}{0,1,0}
\definecolor{MediumSpringGreen}{rgb}{0,0.98,0.6} %rgb(0,250,154)
\definecolor{Gold}{rgb}{1,0.84,0}%rgb(255,215,0)
\definecolor{Gainsboro}{rgb}{0.86,0.86,0.86}%rgb(220,220,220)
\begin{document}
\title{The INFN Tier 1}
\author{Luca dell'Agnello$^1$}
\address{$^1$ INFN-CNAF, Bologna, IT}
\ead{luca.dellagnello@cnaf.infn.it}
\section{Introduction}
CNAF hosts the Italian Tier 1 data center for WLCG: over the years, Tier 1 has become the main computing facility for INFN.
Nowadays, besides the four LHC experiments, the INFN Tier 1 provides services and resources to 30 other scientific collaborations,
including Belle II and several astro-particle experiments (see Table \ref{T1-pledge}).
As shown in Fig.~\ref{pledge2018}, besides LHC, the main users are the astro-particle experiments.
\begin{figure}[h]
\begin{center}
\includegraphics[keepaspectratio,width=15cm]{pledge.png}
\caption{\label{pledge2018}Relative requests of resources at INFN Tier 1}
\end{center}
\end{figure}
Despite the flooding that occurred at the end of 2017, we were able to provide the resources committed to the experiments for 2018 almost on time.
\begin{table}
\begin{center}
\begin{tabular}{l|rrr}
\br
\textbf{Experiment}&\textbf{CPU (HS06)}&\textbf{Disk (TB-N)}&\textbf{Tape (TB)}\\
\hline
\rowcolor{MediumSpringGreen}
ALICE&52020&5185&13497\\
\rowcolor{MediumSpringGreen}
ATLAS&85410&6480&17550\\
\rowcolor{MediumSpringGreen}
CMS&72000&7200&24440\\
\rowcolor{MediumSpringGreen}
LHCb&46805&5606&11400\\
\rowcolor{MediumSpringGreen}
\hline
\textbf{LHC Total}&\textbf{256235}&\textbf{24471}&\textbf{66887}\\
\hline
\rowcolor{LightYellow}
Belle2&13000&350&0\\
\rowcolor{LightYellow}
CDF&0&0&4000\\
\rowcolor{LightYellow}
Compass&40&10&40\\
\rowcolor{LightYellow}
KLOE&0&33&3075\\
\rowcolor{LightYellow}
LHCf&6000&90&0\\
\rowcolor{LightYellow}
NA62&3000&250&200\\
\rowcolor{LightYellow}
PADME&1500&10&500\\
\rowcolor{LightYellow}
LHCb Tier2&26085&0&0\\
\rowcolor{LightYellow}
\hline
\rowcolor{LightYellow}
\textbf{CSN 1 Total}&\textbf{49625}&\textbf{743}&\textbf{7815}\\
\hline
\rowcolor{LightCyan}
AMS&15800&1990&510\\
\rowcolor{LightCyan}
ARGO&0&120&1000\\
\rowcolor{LightCyan}
Auger&2000&615&0\\
\rowcolor{LightCyan}
BOREX&2000&185&41\\
\rowcolor{LightCyan}
CTA&4000&796&120\\
\rowcolor{LightCyan}
CUORE&1900&262&0\\
\rowcolor{LightCyan}
Cupid&100&15&10\\
\rowcolor{LightCyan}
DAMPE&8000&200&100\\
\rowcolor{LightCyan}
DARKSIDE&2000&980&300\\
\rowcolor{LightCyan}
ENUBET&500&10&0\\
\rowcolor{LightCyan}
EUCLID&1000&1042&0\\
\rowcolor{LightCyan}
Fermi&500&15&40\\
\rowcolor{LightCyan}
Gerda&40&45&40\\
\rowcolor{LightCyan}
Icarus&4000&500&1500\\
\rowcolor{LightCyan}
JUNO&3000&230&0\\
\rowcolor{LightCyan}
KM3&300&250&200\\
\rowcolor{LightCyan}
LHAASO&300&60&0\\
\rowcolor{LightCyan}
LIMADOU&400&8&0\\
\rowcolor{LightCyan}
LSPE&1000&14&0\\
\rowcolor{LightCyan}
MAGIC&296&65&150\\
\rowcolor{LightCyan}
NEWS&200&60&60\\
\rowcolor{LightCyan}
Opera&200&15&15\\
\rowcolor{LightCyan}
PAMELA&650&100&150\\
\rowcolor{LightCyan}
Virgo&30000&656&1368\\
\rowcolor{LightCyan}
Xenon100&1000&200&1000\\
\rowcolor{LightCyan}
\hline
\rowcolor{LightCyan}
\textbf{CSN 2 Total}&\textbf{79186}&\textbf{8433}&\textbf{6604}\\
\hline
\rowcolor{Gainsboro}
FOOT&200&20&0\\
\rowcolor{Gainsboro}
Famu&2250&15&187\\
\rowcolor{Gainsboro}
GAMMA/AGATA&0&0&1160\\
\rowcolor{Gainsboro}
NEWCHIM/FARCOS&0&10&300\\
\rowcolor{Gainsboro}
\hline
\rowcolor{Gainsboro}
\textbf{CSN 3 Total}&\textbf{2450}&\textbf{45}&\textbf{1460}\\
\hline \hline
\rowcolor{Green}
\textbf{Grand Total}&\textbf{387496}&\textbf{33692}&\textbf{82766}\\
\rowcolor{Green}
\textbf{Installed}&\textbf{340000}&\textbf{34000}&\textbf{71000}\\
\br
\end{tabular}
\end{center}
\caption{Pledged and installed resources at INFN Tier 1 in 2018 (for the CPU power an overlap factor is applied). CSN 1, CSN 2 and CSN 3 are the National Scientific Committees of the INFN, respectively, for experiments in high energy physics with accelerators, astro-particle experiments and experiments in nuclear physics with accelerators.}
\label{T1-pledge}
\hfill
\end{table}
\subsection{Out of the mud}
The year 2018 began with the recovery procedures of the data center after the flooding of November 2017.
Despite the serious damage to the power plants (both power lines were compromised), immediately after the flooding we started the recovery procedures for both the infrastructure and the IT equipment. The first mandatory intervention was to restore at least one of the two power lines (with a leased UPS in the first period). This goal was achieved during December 2017.
In January, after the restart of the chillers, we could proceed to re-open all services, including part of the farm (at the beginning only $\sim$ 50 kHS06, 1/5 of the total power capacity, were online, while 13\% was lost) and, one by one, the storage systems.
The first experiments to resume operations at CNAF were ALICE, Virgo and DarkSide:
in fact, the storage system used by Virgo and DarkSide had been easily recovered after the Christmas break, while ALICE is able to use computing resources relying on remote storage. During February and March, we were able to progressively re-open the services for all other experiments.
%(Fig.\ref{farm2018} shows the restart of the farm). Meanwhile, we had setup a new partition of the farm hosted at CINECA super-computing center premises (see Par.~\ref{CINECAext}).
The final damage inventory shows the loss of $\sim$ 30 kHS06,
1.4 PB of data and 60 tapes; on the other hand, it was possible to repair all the other systems, recovering $\sim$ 20 PB of data.
As regards the infrastructure, the second power line was also recovered (see \cite{FLOODCHEP} for details).
%\begin{figure}[h]
% \begin{center}
% \includegraphics[width=40pc]{t1-img/farm2018.png}\hspace{2pc}%
% \caption{\label{farm2018}Farm usage in 2018}
% \end{center}
%\end{figure}
\subsection{The long-term consequences of the flooding}
The data center was designed taking into account all foreseeable accidents (e.g. fires, power outages), but not very unlikely events
such as the breaking of one of the main water pipelines of Bologna, located in a road next to CNAF,
which is precisely what happened in November 2017.
In fact, it was believed that the only threat from water could come from very heavy rain and, indeed,
waterproof doors had been installed some years ago, after a heavy rain.
The post-mortem analysis showed that the causes, besides the breaking of the pipe, are to be found in the unfavorable position of the data center (two underground levels) and in the excessive permeability of its perimeter (the anti-flood doors, on the other hand, worked). Therefore, an intervention has been carried out to increase the waterproofing of the data center and, moreover, work is planned for summer 2019 to strengthen the perimeter of the building and build a second water collection tank.
Even if the search for a new location for the data center had started before the flooding (the main driver being its limited expandability, unable to cope with the foreseen requirements of the HL-LHC era, when we should scale up to 10 MW of power for IT), the flooding gave us a second strong reason to move.
An opportunity is given by the new ECMWF center, which will be hosted in Bologna, in a new Technopole area, starting from 2019.
In the same area the INFN Tier 1 and the CINECA\footnote{CINECA is the Italian supercomputing center, also located near Bologna ($\sim$17 km from CNAF). See \url{http://www.cineca.it/}} computing centers can be hosted too: funding has been guaranteed to INFN and CINECA by the Italian Government for this purpose. The goal is to have the new data center for the INFN Tier 1 fully operational by the end of 2021.
\section{INFN Tier 1 extension at CINECA}\label{CINECAext}
Out of the 400 kHS06 of CPU power (340 kHS06 pledged) of the CNAF farm, $\sim$180 kHS06 are provided by servers installed in the CINECA data center.
%Each server is equipped with a 10 Gbit uplink connection to the rack switch while each of them, in turn, is connected to the aggregation router with 4x40 Gbit links.
The logical network of the farm partition at CINECA is set as an extension of INFN Tier 1 LAN: a dedicated fiber couple interconnects the aggregation router at CINECA with the core switch at the INFN Tier 1 (see Farm and Network Chapters for more details). %Fig.~\ref{cineca-t1}).
%The transmission on the fiber is managed by a couple of Infinera DCI, allowing to have a logical channel up to 1.2 Tbps (currently it is configured to transmit up to 400 Gbps).
%\begin{figure}
% % \begin{minipage}[b]{0.45\textwidth}
% \begin{center}
% \includegraphics[width=30pc]{t1-img/cineca-t1.png}
% \caption{\label{cineca-t1}Schematic view of the CINECA - INFN Tier-1 interconnection}
% \end{center}
% % \end{minipage}
%\end{figure}
These nodes, in production since March 2018 for the WLCG experiments, have been gradually opened to all other collaborations. %Due the low latency (the RTT is 0.48 ms vs. 0.28 ms measured on the CNAF LAN), there is no need of a disk cache on the CINECA side and the WNs directly access the storage located at CNAF; in fact, the
The efficiency of the jobs\footnote{The efficiency of a job is defined as the ratio between its CPU time and its wall-clock time.} is comparable to the one measured on the farm partition at CNAF.
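In symbols, for a job with total CPU time $t_{\mathrm{CPU}}$ and wall-clock duration $t_{\mathrm{wall}}$, the efficiency is simply
\begin{equation}
\epsilon = \frac{t_{\mathrm{CPU}}}{t_{\mathrm{wall}}}.
\end{equation}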
Since this partition has been installed from the beginning with CentOS 7, legacy applications requiring a different operating system flavour can use it through the Singularity container technology~\cite{singularity}.
%Moreover, this partition has undergone several reconfigurations due to both the hardware and the type of workflow of the experiments. In April we had to upgrade the BIOS to overcome a bug which was preventing the full resource usage, limiting at $\sim$~78\% of the total what we were getting from the nodes. Moreover a reconfiguration of the local RAID configuration of disks is ongoing\footnote{The initial choice of using RAID-1 for local disks instead of RAID-0 has been proven to slow down the system even if safer from an operational point of view.} as well as tests to choose the best number of computing slots.
\section*{References}
\begin{thebibliography}{9}
\bibitem{FLOODCHEP} L. dell'Agnello, ``Disaster recovery of the INFN Tier 1 data center: lesson learned'', to be published in Proceedings of the 23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018), EPJ Web of Conferences
\bibitem{singularity} \url{http://singularity.lbl.gov}
\end{thebibliography}
\end{document}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{User and Operational Support at CNAF}
\author{D. Cesini$^1$, E. Corni$^1$, F. Fornari$^1$, L. Morganti$^1$, C. Pellegrino$^1$, M. V. P. Soares$^1$, M. Tenti$^1$, L. Dell'Agnello$^1$}
\address{$^1$ INFN-CNAF, Bologna, IT}
\ead{user-support@lists.cnaf.infn.it}
\begin{abstract}
Many different research groups, typically organized in Virtual Organizations (VOs),
exploit the Tier 1 data center facilities for computing and/or data storage and management. Moreover, CNAF hosts two small HPC farms and a Cloud infrastructure. The User Support unit provides the users of all CNAF facilities with direct operational support, and promotes common technologies and best practices to access the ICT resources, in order to facilitate the usage of the center and maximize its efficiency.
\end{abstract}
\section{Current status}
Born in April 2012, the User Support team in 2018 was composed of one coordinator and up to five fellows with post-doctoral education or equivalent work experience in scientific research or computing.
The main activities of the team include:
\begin{itemize}
\item providing a prompt feedback to VO-specific issues via ticketing systems or official mail channels;
\item forwarding to the appropriate Tier 1 units those requests which cannot be autonomously satisfied, and taking care of answers and fixes, e.g. via the JIRA tracker, until a solution is delivered to the experiments;
\item supporting the experiments in the definition and debugging of computing models in distributed and Cloud environments;
\item helping the supported experiments by developing code, monitoring frameworks and writing guides and documentation for users (see e.g. https://www.cnaf.infn.it/en/users-faqs/);
\item solving issues on experiment software installation, access problems, new accounts creation and any other daily usage problems;
\item porting applications to new parallel architectures (e.g. GPUs and HPC farms);
\item providing the Tier 1 Run Coordinator, who represents CNAF at the Daily WLCG calls, and reports about resource usage and problems at the monthly meeting of the Tier 1 management body (Comitato di Gestione del Tier 1).
\end{itemize}
People belonging to the User Support team represent INFN Tier 1 inside the VOs.
In some cases, they are directly integrated in the supported experiments. Moreover, they can play the role of a member of any VO for debugging purposes.
The User Support staff is also involved in different CNAF internal projects, notably the Computing on SoC Architectures (COSA) project (www.cosa-project.it) dedicated to the technology tracking and benchmarking of the modern low-power architectures for computing applications.
\section{Supported experiments}
The LHC experiments represent the main users of the data center, handling more than 80\% of the total computing and storage resources funded at CNAF. Besides the four LHC experiments (ALICE, ATLAS, CMS, LHCb) for which CNAF acts as Tier 1 site, the data center also supports an ever-increasing number of experiments from the Astrophysics, Astroparticle physics and High Energy Physics domains, and specifically Agata, AMS-02, Auger, Belle II, Borexino, CDF, Compass, COSMO-WNEXT, CTA, Cuore, Cupid, Dampe, DarkSide-50, Enubet, Famu, Fazia, Fermi-LAT, Gerda, Icarus, LHAASO, LHCf, Limadou, Juno, Kloe, KM3NeT, Magic, NA62, Newchim, NEWS, NTOP, Opera, Padme, Pamela, Panda, Virgo, and XENON.
Clearly, a bigger effort from the User Support team is needed to answer the varied and diverse needs of these non-LHC experiments and to encourage them to adopt more modern technologies, e.g. FTS, DIRAC, token-based authorization.
\begin{figure}[ht]
\centering
\includegraphics[width=32pc]{cpu.PNG}\hspace{0.8pc}%
\caption{\label{fig:cpu} Monthly averaged CPU usage during 2018.
Non-LHC experiments are grouped together (\textit{Other}).}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=32pc]{disco.PNG}\hspace{0.8pc}%
\begin{minipage}[b]{36pc}\caption{\label{fig:disk}
Disk usage for all VOs in 2018. The non LHC VOs are grouped together (\textit{other}). The lines show the pledges and the assignments.}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=32pc]{tape.PNG}\hspace{0.8pc}%
\begin{minipage}[b]{36pc}\caption{\label{fig:tape}Tape used (TB) by the supported experiments during 2018. Non-LHC experiments are grouped together (\textit{other}). The lines show the total pledged and assigned resources.}
\end{minipage}
\end{figure}
The following figures show the resources pledged and used by the supported experiments during 2018: Fig.~\ref{fig:cpu} refers to CPU, Fig.~\ref{fig:disk} to disk and Fig.~\ref{fig:tape} to tape.
Unfortunately, the accounting data for storage, both disk and tape statistics, are available only from after summer 2018, since the restoration of the complex system of accounting sensors after the 2017 flooding had a lower priority than the activities needed for the complete recovery of the storage resources involved in the flood.
\section{Support to HPC and Cloud-based experiments}
Apart from the Tier 1 facilities, CNAF hosts two small HPC farms and a Cloud infrastructure. The first HPC cluster, in production since 2015, is composed of 27 nodes, some of them also equipped with one or more GPUs (NVIDIA Tesla K20, K40 and K1). All nodes are InfiniBand-interconnected and equipped with 2 Intel CPUs with 8 physical cores each and HyperThreading enabled. The cluster is accessible via the LSF batch system. It is open to various INFN communities, but the main users are theoretical physicists dealing with plasma laser acceleration simulations. The cluster is used as a testing infrastructure to prepare the high-resolution runs to be submitted afterwards to supercomputers.
A second HPC cluster entered production in 2017 to serve the CERN accelerator R\&D groups. The cluster consists of 12 OmniPath-interconnected nodes. It can be accessed through batch queues managed by the IBM LSF system.
Support is provided on a daily basis for software installation, access problems, new account creation and any other usage problems.
The User Support team manages an OpenStack-based tenant hosted within the Cloud@CNAF infrastructure. This tenant, provided with 300 vCPUs, is mostly devoted to supporting peculiar use cases which require unusual software configurations, and only for a limited amount of time. The most important of these use cases is the FAZIA experiment, for which 256 vCPUs were provided, distributed over 16 worker nodes with 8 GB of RAM each, where the Debian 8.4 operating system has been installed and configured with LDAP and Kerberos for user authentication and authorization, and NFS 4 for network storage sharing.
Recently, other experiments started accessing the Cloud infrastructure: AMS, EEE, Icarus and NTOF.
\end{document}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{Advanced Virgo computing at CNAF}
%\author{P. Astone$^1$, F. Badaracco$^{2,3}$, S. Bagnasco$^4$, S. Caudill$^5$, F. Carbognani$^6$, A. Cirone$^{7,8}$, G. Fronz\'e$^{4}$, J. Harms$^{2,3}$, I. LaRosa$^1$, C. Lazzaro$^9$, P. Leaci$^1$, S. Lusso$^4$, C. Palomba$^1$, R. DePietri$^{11,12}$, M. Punturo$^{10}$, L. Rei$^8$, L. Salconi$^6$, S. Vallero$^{4}$, on behalf of the Virgo collaboration}
\author{P. Astone$^1$, F. Badaracco$^{2,3}$, S. Bagnasco$^4$, S. Caudill$^5$, F. Carbognani$^6$, A. Cirone$^{7,8}$, M. Drago$^{2,3}$, G. Fronz\'e$^{4}$, J. Harms$^{2,3}$, I. LaRosa$^1$, C. Lazzaro$^9$, P. Leaci$^1$, S. Lusso$^4$, C. Palomba$^1$, R. DePietri$^{11,12}$, M. Punturo$^{10}$, L. Rei$^8$, L. Salconi$^6$, S. Vallero$^{4}$, on behalf of the Virgo collaboration}
\address{$^1$ INFN Sezione di Roma, Roma, IT}
\address{$^2$ Gran Sasso Science Institute (GSSI), L'Aquila, IT}
\address{$^3$ INFN Laboratori Nazionali del Gran Sasso, L'Aquila, IT}
\address{$^4$ INFN Sezione di Torino, Torino, IT}
\address{$^5$ Nikhef, Amsterdam, NL}
\address{$^6$ EGO-European Gravitational Observatory, Cascina (PI), IT}
\address{$^7$ Universit\`a degli Studi di Genova, Genova, IT}
\address{$^8$ INFN Sezione di Genova, Genova, IT}
\address{$^9$ INFN Sezione di Padova, Padova, IT}
\address{$^{10}$ INFN Sezione di Perugia, Perugia, IT}
\address{$^{11}$ Universit\`a degli Studi di Parma, Parma, IT}
\address{$^{12}$ INFN Gruppo Collegato Parma, Parma, IT}
\ead{luca.rei@ge.infn.it}
\begin{abstract}
Advanced Virgo (AdV) is a gravitational wave (GW) interferometric detector located near Pisa, Italy. It is a Michelson laser interferometer with 3 km long Fabry-P\'erot cavities in both arms; it is the largest GW detector in Europe and it operates in a network with the two LIGO detectors in the US. Together they form the LIGO-Virgo Collaboration (\emph{LVC}). During the first two observing runs (O1 and O2), the LVC detected more than 15 possible GW signals, of which 11 were confirmed and one had an electromagnetic counterpart. The LVC has just started a new observing run (O3) on April 1, 2019, with a planned duration of 12 months. The spectral sensitivity of the three detectors has increased considerably over the last year, and with it the accessible volume of space, leading to an event rate of roughly one per week for Binary Black Hole (\emph{BBH}) mergers and one per month for coalescing Binary Neutron Stars (\emph{BNS}). With that in mind, the LVC is developing new cutting-edge data analysis pipelines, also in order to identify and study the electromagnetic (\emph{EM}) counterparts of GW events with very low latency. Since last year, both GW data and software have been freely released to the EM follow-up partners so that they can support our analyses. At the same time, the Gamma-Ray Coordinates Network (\emph{GCN}) automatically triggers and coordinates all the telescopes of the collaboration and releases public alerts. The era of multi-messenger astronomy has just begun, and CNAF is going to play a key role in the Virgo collaboration's data management.
\end{abstract}
\section{Advanced Virgo 2018-2019 achievements}
The amount of data processed during the last few years has confirmed that our General Relativity based models are remarkably robust, while still leaving some room for alternative modified gravity theories. In order to investigate further, the LVC is working hard to improve the detector performance and expand the accessible Universe horizon, which for AdV now reaches 50 Mpc for BNS merger events, almost double the range of nearly 27 Mpc achieved during O2. The AdV data acquisition system has scaled in parallel, moving from 36 MB/s to 50-60 MB/s during the commissioning phases and stabilizing at about 35 MB/s during the scientific run (O3). This has been achieved thanks to a partial reorganization of the data transfer, management and computing facilities.
\section{Advanced Virgo computing model}
\subsection{Data production and data transfer}
The Advanced Virgo data acquisition system writes about 35 MB/s of data (the so-called ``bulk data'') during O3. CNAF and CC-IN2P3 are the Virgo Tier 0 centers: during the science runs, bulk data are stored in a circular buffer located at the Virgo site and simultaneously transferred to the remote computing centers, where they are archived in tape libraries. The transfer is realized through an ad-hoc procedure based on GridFTP (at CNAF) and iRODS (at CC-IN2P3); a schematic sketch of such a transfer is given after the list below. Other data fluxes reach CNAF during science runs:
\begin{itemize}
\item trend data (a few GB/day), periodically transferred using the system described above;
\item Virgo-RDS, or Reduced Data Set (about 100 GB/day), containing the main Virgo channels including the calibrated dark fringe. This data set is currently transferred from Virgo to the LIGO computing repositories using LDR (the LIGO Data Replicator), but the plan is to switch to Rucio shortly (while still using iRODS towards CC-IN2P3);
\item LIGO-RDS, containing the reduced set of data produced by the two LIGO detectors and analysed at CNAF, transferred through LDR.
\end{itemize}
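As an illustration of the ad-hoc GridFTP-based procedure mentioned above, the following minimal sketch copies one frame file from the on-site circular buffer to a remote endpoint; the file name, host and path are purely illustrative assumptions, not the actual production configuration.
\begin{verbatim}
# Minimal sketch of a single GridFTP bulk-data transfer (illustrative).
import subprocess

SRC = "file:///data/buffer/V-raw-1238166018-100.gwf"    # hypothetical frame file
DST = "gsiftp://storage-endpoint.cnaf.example/virgo/"   # hypothetical endpoint

subprocess.run(
    ["globus-url-copy",
     "-p", "4",       # 4 parallel TCP streams
     "-fast",         # reuse data channels
     SRC, DST],
    check=True,
)
\end{verbatim}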
\subsection{Data Analysis at CNAF}
LIGO-Virgo data analysis is organized jointly, meaning that the analysis pipelines are made available to the computing facilities of the LVC network and can be distributed among them. CNAF has been mainly used for Continuous Wave (\emph{CW}) analysis, led by the Roma INFN group, and for the Compact Binary Coalescence python-based analysis (\emph{pyCBC}), submitted via OSG. In particular, CNAF contributed computationally to the analysis of the GW170814 and GW170817 events, respectively the first BBH coalescence also detected by Virgo and the first BNS merger ever observed. During the last month a new extension of CVMFS, the so-called ``big CVMFS'', was mounted at CNAF to support another OSG-based pipeline, BayesWave. The former makes big data files available, in a POSIX-like fashion, from a nearby cache in Amsterdam, instead of accessing the data directly from Nebraska. The latter is a Bayesian algorithm designed to robustly distinguish GW signals from noise and instrumental glitches, without relying on any prior assumption on the waveform shape. During the last year, coherent WaveBurst (\emph{cWB}), an algorithm dedicated to the detection and reconstruction of modelled and unmodelled GW bursts, was also ported to CNAF. Furthermore, new Newtonian Noise cancellation algorithms, currently being developed by the GSSI group, were made available very recently. The increasing number of LVC pipelines running at CNAF has saturated the Advanced Virgo pledge, to which CNAF promptly responded by enlarging the Virgo quota and granting experimental access to GPUs.
\subsubsection{CW pipeline}
In 2018, CNAF has been the main computing center for the Virgo all-sky continuous wave (CW) searches. The search for this kind of signal, emitted by spinning neutron stars, covers a large portion of the source parameter space and consists of several steps organized in a hierarchical analysis pipeline. CNAF has been mainly used for the ``incoherent'' stage, based on a particular implementation of the Hough transform, which is the computationally heaviest part of the analysis. The code implementing the Hough transform has been written in such a way that the exploration of the parameter space can be split into several independent jobs, each covering a range of signal frequencies and a portion of the sky; a schematic sketch of this splitting is shown below. This is an embarrassingly parallel problem, very well suited to run in a distributed computing environment. The analysis jobs have been run using the EGI UMD grid middleware, with input and output files stored in a StoRM-based Storage Element at CNAF. Candidate post-processing, consisting of clusterisation, coincidences and ranking, and parts of the candidate follow-up analysis have also been carried out at CNAF. A typical Hough transform job needs about 4 GB of memory (with a fraction requiring more, up to 8 GB). Over the past year, most of the resources have been used to analyze Advanced LIGO O2 data. Overall, in 2018 more than 10M CPU hours have been used at CNAF for CW searches, by running O($10^5$) jobs, with durations ranging from a few hours to about 3 days.
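The following minimal sketch illustrates, under assumed values for the sub-band width and the sky tiling, how such an embarrassingly parallel search can be split into independent jobs; the numbers are illustrative, not those of the production pipeline.
\begin{verbatim}
# Illustrative splitting of an all-sky CW search into independent jobs:
# each job covers one frequency sub-band and one sky patch.
from itertools import product

F_MIN, F_MAX, BAND = 20.0, 2048.0, 1.0   # Hz; illustrative values
N_SKY_PATCHES = 48                       # illustrative sky tiling

def job_list():
    n_bands = int((F_MAX - F_MIN) / BAND)
    for i, patch in product(range(n_bands), range(N_SKY_PATCHES)):
        yield {"f_start": F_MIN + i * BAND,        # sub-band lower edge
               "f_end": F_MIN + (i + 1) * BAND,    # sub-band upper edge
               "sky_patch": patch}                 # sky-grid index

# With the values above this yields 2028 * 48 = 97344 independent jobs,
# of the same order as the O(10^5) jobs quoted in the text.
print(sum(1 for _ in job_list()))
\end{verbatim}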
\subsubsection{cWB pipeline}
Starting in 2019, the coherent WaveBurst based pipelines have been ported and adapted to run at CNAF, reproducing the cWB environment setup on the worker nodes without the need to access the user home directory at runtime. The plan is to run at CNAF all the Virgo offline long-duration all-sky searches on the data collected during the Observing Run 3 (O3) that started on April 1, 2019. cWB is a data-analysis tool to search for a broad range of gravitational-wave (GW) transients. The pipeline identifies coincident events in the GW data from earth-based interferometric detectors and reconstructs the gravitational wave signal by using a constrained maximum likelihood approach. The algorithm performs a time-frequency analysis of the data, using a wavelet representation, and identifies the events by clustering time-frequency pixels with significant excess coherent power. The likelihood statistic is built as a coherent sum over the responses of the different detectors and estimates the total signal-to-noise ratio of the GW signal in the network. The pipeline splits the total analysis time into sub-periods to be analyzed in parallel jobs, using HTCondor tools (a schematic sketch is shown below), and it is expected to use a substantial amount of CPU hours during 2019.
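A minimal sketch of the time splitting, under assumed values for the segment length and overlap, is the following; in production, each (start, stop) pair would become the arguments of one HTCondor job.
\begin{verbatim}
# Illustrative splitting of the total analysis time into sub-periods.
SEG_LEN = 600   # seconds of data per job (assumed)
OVERLAP = 60    # overlap between consecutive segments (assumed)

def segments(gps_start, gps_stop, seg_len=SEG_LEN, overlap=OVERLAP):
    """Yield (start, stop) GPS pairs covering [gps_start, gps_stop]."""
    t = gps_start
    while t < gps_stop:
        yield (t, min(t + seg_len, gps_stop))
        t += seg_len - overlap

# One HTCondor job per segment, e.g. 'arguments = <start> <stop>'.
for start, stop in segments(1238166018, 1238169618):   # one hour of data
    print("arguments = %d %d" % (start, stop))
\end{verbatim}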
\subsubsection{Newtonian noise pipeline}
The cancellation of gravitational noise from seismic fields will be a major challenge from both the theoretical and the computational point of view, since the involved simulations are very demanding. This activity requires the accurate positioning of a large number of seismometers. A cluster at CNAF was used to run position optimisations of the seismic arrays used for cancellation, and to determine the cancellation performance as a function of the number of sensors as well as its robustness with respect to the sensor-positioning accuracy.
\subsection{Outlook}
The first detection of gravitational waves (GW) and the birth of multi-messenger astrophysics have opened a new field of scientific research. With the possibility to detect GW from various kinds of sources, we can probe new physical phenomena in regions of the Universe we could not explore before, gaining new perspectives on our knowledge of how it works.
Indeed, so far only signals from the coalescence of compact objects have been detected, while one of the most interesting and promising classes of signals, the continuous GW emission from asymmetrically rotating neutron stars, is still missing. Wide searches for this kind of signal require a huge amount of computational power, because the Doppler effect due to the Earth's motion disrupts the incoming signal and dramatically increases the parameter space. This makes it necessary to develop complex algorithms that reduce the computational power needed, at the price of significantly reducing the sensitivity of the search.
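As a schematic reminder of why the parameter space grows so rapidly, the frequency received at the detector is Doppler-modulated by the Earth's motion according to the standard relation
\begin{equation}
f(t) \simeq f_0 \left( 1 + \frac{\vec{v}(t)\cdot\hat{n}}{c} \right),
\end{equation}
where $f_0$ is the intrinsic source frequency, $\vec{v}(t)$ is the detector velocity due to the Earth's rotation and orbital motion, and $\hat{n}$ is the unit vector pointing towards the source. Since $\vec{v}(t)$ depends on time through both the sidereal and the orbital phase, the modulation pattern is different for every sky position and every frequency, so the search templates must cover frequency, spin-down and sky coordinates simultaneously.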
The development of new algorithms exploiting the high efficiency and computational power of modern GPUs showed that the new codes running on a single GPU achieve a factor of ten speed-up with respect to the older ones on a ten times more expensive multi-core CPU, i.e. a gain of roughly two orders of magnitude in performance per unit cost.
For the CW case, using real data from the nine-month-long run of the LIGO detectors, we have estimated that a complete search can be done on a cluster of about 200 GPUs in about a couple of months, to be compared with the several months required by the older code on a 2000-CPU cluster.\\ A GPU cluster would also be extremely useful to test and train Machine Learning algorithms, which in recent years have proven able to face very complex analyses with high efficiency and speed.\\
Advanced Virgo and Advanced LIGO are also exploring different technologies to face the new challenges of GW physics. The growing number of computing centers involved in GW research forces us to rethink our computing model, looking for a way to run different pipelines uniformly on complex and heterogeneous infrastructures. For example, the de-supporting of GridFTP pushes towards the use of Rucio, a well supported and flexible tool for data transfer and management, while the de-supporting of the CREAM-CE suggests a redesign of the job submission strategy, possibly under the control of an overall management system like DIRAC. \\ The CNAF staff is intensively supporting Virgo members in all these tests.
\end{document}