%%
%% This is file `iopams.sty'
%% File to include AMS fonts and extra definitions for bold greek
%% characters for use with iopart.cls
%%
\NeedsTeXFormat{LaTeX2e}
\ProvidesPackage{iopams}[1997/02/13 v1.0]
\RequirePackage{amsgen}[1995/01/01]
\RequirePackage{amsfonts}[1995/01/01]
\RequirePackage{amssymb}[1995/01/01]
\RequirePackage{amsbsy}[1995/01/01]
%
\iopamstrue % \newif\ifiopams in iopart.cls & iopbk2e.cls
% % allows optional text to be in author guidelines
%
% Bold lower case Greek letters
%
\newcommand{\balpha}{\boldsymbol{\alpha}}
\newcommand{\bbeta}{\boldsymbol{\beta}}
\newcommand{\bgamma}{\boldsymbol{\gamma}}
\newcommand{\bdelta}{\boldsymbol{\delta}}
\newcommand{\bepsilon}{\boldsymbol{\epsilon}}
\newcommand{\bzeta}{\boldsymbol{\zeta}}
\newcommand{\bfeta}{\boldsymbol{\eta}}
\newcommand{\btheta}{\boldsymbol{\theta}}
\newcommand{\biota}{\boldsymbol{\iota}}
\newcommand{\bkappa}{\boldsymbol{\kappa}}
\newcommand{\blambda}{\boldsymbol{\lambda}}
\newcommand{\bmu}{\boldsymbol{\mu}}
\newcommand{\bnu}{\boldsymbol{\nu}}
\newcommand{\bxi}{\boldsymbol{\xi}}
\newcommand{\bpi}{\boldsymbol{\pi}}
\newcommand{\brho}{\boldsymbol{\rho}}
\newcommand{\bsigma}{\boldsymbol{\sigma}}
\newcommand{\btau}{\boldsymbol{\tau}}
\newcommand{\bupsilon}{\boldsymbol{\upsilon}}
\newcommand{\bphi}{\boldsymbol{\phi}}
\newcommand{\bchi}{\boldsymbol{\chi}}
\newcommand{\bpsi}{\boldsymbol{\psi}}
\newcommand{\bomega}{\boldsymbol{\omega}}
\newcommand{\bvarepsilon}{\boldsymbol{\varepsilon}}
\newcommand{\bvartheta}{\boldsymbol{\vartheta}}
\newcommand{\bvaromega}{\boldsymbol{\varpi}} %NB really varpi
\newcommand{\bvarrho}{\boldsymbol{\varrho}}
\newcommand{\bvarzeta}{\boldsymbol{\varsigma}} %NB really sigma
\newcommand{\bvarsigma}{\boldsymbol{\varsigma}}
\newcommand{\bvarphi}{\boldsymbol{\varphi}}
%
% Bold upright capital Greek letters
%
\newcommand{\bGamma}{\boldsymbol{\Gamma}}
\newcommand{\bDelta}{\boldsymbol{\Delta}}
\newcommand{\bTheta}{\boldsymbol{\Theta}}
\newcommand{\bLambda}{\boldsymbol{\Lambda}}
\newcommand{\bXi}{\boldsymbol{\Xi}}
\newcommand{\bPi}{\boldsymbol{\Pi}}
\newcommand{\bSigma}{\boldsymbol{\Sigma}}
\newcommand{\bUpsilon}{\boldsymbol{\Upsilon}}
\newcommand{\bPhi}{\boldsymbol{\Phi}}
\newcommand{\bPsi}{\boldsymbol{\Psi}}
\newcommand{\bOmega}{\boldsymbol{\Omega}}
%
% Bold versions of miscellaneous symbols
%
\newcommand{\bpartial}{\boldsymbol{\partial}}
\newcommand{\bell}{\boldsymbol{\ell}}
\newcommand{\bimath}{\boldsymbol{\imath}}
\newcommand{\bjmath}{\boldsymbol{\jmath}}
\newcommand{\binfty}{\boldsymbol{\infty}}
\newcommand{\bnabla}{\boldsymbol{\nabla}}
\newcommand{\bdot}{\boldsymbol{\cdot}}
%
% Symbols for caption
%
\renewcommand{\opensquare}{\mbox{$\square$}}
\renewcommand{\opentriangle}{\mbox{$\vartriangle$}}
\renewcommand{\opentriangledown}{\mbox{$\triangledown$}}
\renewcommand{\opendiamond}{\mbox{$\lozenge$}}
\renewcommand{\fullsquare}{\mbox{$\blacksquare$}}
\newcommand{\fulldiamond}{\mbox{$\blacklozenge$}}
\newcommand{\fullstar}{\mbox{$\bigstar$}}
\newcommand{\fulltriangle}{\mbox{$\blacktriangle$}}
\newcommand{\fulltriangledown}{\mbox{$\blacktriangledown$}}
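%
% Usage sketch (illustrative only, not part of the original file):
% the bold macros above are used in math mode, e.g.
%   $\bnabla \bphi = \blambda \bGamma$
% and the caption symbols in text mode, e.g.
%   \caption{\fullsquare\ method A, \opensquare\ method B.}
%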
\endinput
%%
%% End of file `iopams.sty'.
\NeedsTeXFormat{LaTeX2e}[1995/12/01]
\ProvidesClass{jpconf}
[2007/03/07 v1.1
LaTeX class for Journal of Physics: Conference Series]
%\RequirePackage{graphicx}
\newcommand\@ptsize{1}
\newif\if@restonecol
\newif\if@letterpaper
\newif\if@titlepage
\newif\ifiopams
\@titlepagefalse
\@letterpaperfalse
\DeclareOption{a4paper}
{\setlength\paperheight {297mm}%
\setlength\paperwidth {210mm}%
\@letterpaperfalse}
\DeclareOption{letterpaper}
{\setlength\paperheight {279.4mm}%
\setlength\paperwidth {215.9mm}%
\@letterpapertrue}
\DeclareOption{landscape}
{\setlength\@tempdima {\paperheight}%
\setlength\paperheight {\paperwidth}%
\setlength\paperwidth {\@tempdima}}
\DeclareOption{twoside}{\@twosidetrue \@mparswitchtrue}
\renewcommand\@ptsize{1}
%\ExecuteOptions{a4paper, twoside}
\ExecuteOptions{a4paper}
\ProcessOptions
\DeclareMathAlphabet{\bi}{OML}{cmm}{b}{it}
\DeclareMathAlphabet{\bcal}{OMS}{cmsy}{b}{n}
\input{jpconf1\@ptsize.clo}
\setlength\lineskip{1\p@}
\setlength\normallineskip{1\p@}
\renewcommand\baselinestretch{}
\setlength\parskip{0\p@ \@plus \p@}
\@lowpenalty 51
\@medpenalty 151
\@highpenalty 301
\setlength\parindent{5mm}
\setcounter{topnumber}{8}
\renewcommand\topfraction{1}
\setcounter{bottomnumber}{3}
\renewcommand\bottomfraction{.99}
\setcounter{totalnumber}{8}
\renewcommand\textfraction{0.01}
\renewcommand\floatpagefraction{.8}
\setcounter{dbltopnumber}{6}
\renewcommand\dbltopfraction{1}
\renewcommand\dblfloatpagefraction{.8}
\renewcommand{\title}{\@ifnextchar[{\@stitle}{\@ftitle}}
\pretolerance=5000
\tolerance=8000
% Headings for all pages apart from first
%
\def\ps@headings{%
\let\@oddfoot\@empty
\let\@evenfoot\@empty
\let\@oddhead\@empty
\let\@evenhead\@empty
%\def\@evenhead{\thepage\hfil\itshape\rightmark}%
%\def\@oddhead{{\itshape\leftmark}\hfil\thepage}%
%\def\@evenhead{{\itshape Journal of Physics: Conference Series}\hfill}%
%\def\@oddhead{\hfill {\itshape Journal of Physics: Conference Series}}%%
\let\@mkboth\markboth
\let\sectionmark\@gobble
\let\subsectionmark\@gobble}
%
% Headings for first page
%
\def\ps@myheadings{\let\@oddfoot\@empty\let\@evenfoot\@empty
\let\@oddhead\@empty\let\@evenhead\@empty
\let\@mkboth\@gobbletwo
\let\sectionmark\@gobble
\let\subsectionmark\@gobble}
%
\def\@stitle[#1]#2{\markboth{#1}{#1}%
%\pagestyle{empty}%
\thispagestyle{myheadings}
\vspace*{25mm}{\exhyphenpenalty=10000\hyphenpenalty=10000
%\Large
\fontsize{18bp}{24bp}\selectfont\bf\raggedright\noindent#2\par}}
\def\@ftitle#1{\markboth{#1}{#1}%
\thispagestyle{myheadings}
%\pagestyle{empty}%
\vspace*{25mm}{\exhyphenpenalty=10000\hyphenpenalty=10000
%\Large\raggedright\noindent\bf#1\par}
\fontsize{18bp}{24bp}\selectfont\bf\noindent\raggedright#1\par}}
%AUTHOR
\renewcommand{\author}{\@ifnextchar[{\@sauthor}{\@fauthor}}
\def\@sauthor[#1]#2{\markright{#1} % for production only
\vspace*{1.5pc}%
\begin{indented}%
\item[]\normalsize\bf\raggedright#2
\end{indented}%
\smallskip}
\def\@fauthor#1{%\markright{#1} for production only
\vspace*{1.5pc}%
\begin{indented}%
\item[]\normalsize\bf\raggedright#1
\end{indented}%
\smallskip}
%E-MAIL
\def\eads#1{\vspace*{5pt}\address{E-mail: #1}}
\def\ead#1{\vspace*{5pt}\address{E-mail: \mailto{#1}}}
\def\mailto#1{{\tt #1}}
%ADDRESS
\newcommand{\address}[1]{\begin{indented}
\item[]\rm\raggedright #1
\end{indented}}
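%
% Front-matter usage sketch (illustrative only): the optional arguments
% of \title and \author set the running heads.
%   \title[Short running title]{Full title of the article}
%   \author{A Author$^1$ and B Author$^2$}
%   \address{$^1$ First Institute, City, Country}
%   \address{$^2$ Second Institute, City, Country}
%   \ead{a.author@example.org}
%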
\newlength{\indentedwidth}
\newdimen\mathindent
\mathindent = 6pc
\indentedwidth=\mathindent
% FOOTNOTES
%\renewcommand\footnoterule{%
% \kern-3\p@
% \hrule\@width.4\columnwidth
% \kern2.6\p@}
%\newcommand\@makefntext[1]{%
% \parindent 1em%
% \noindent
% \hb@xt@1.8em{\hss\@makefnmark}#1}
% Footnotes: symbols selected in same order as address indicators
% unless optional argument of [<num>] use to specify required symbol,
% 1=\dag, 2=\ddag, etc
% Usage: \footnote{Text of footnote}
% \footnote[3]{Text of footnote}
%
\def\footnoterule{}%
\setcounter{footnote}{0}
\long\def\@makefntext#1{\parindent 1em\noindent
\makebox[1em][l]{\footnotesize\rm$\m@th{\fnsymbol{footnote}}$}%
\footnotesize\rm #1}
\def\@makefnmark{\normalfnmark}
\def\normalfnmark{\hbox{${\fnsymbol{footnote}}\m@th$}}
\def\altfnmark{\hbox{$^{\rm Note}\ {\fnsymbol{footnote}}\m@th$}}
\def\footNote#1{\let\@makefnmark\altfnmark\footnote{#1}\let\@makefnmark\normalfnmark}
\def\@thefnmark{\fnsymbol{footnote}}
\def\footnote{\protect\pfootnote}
\def\pfootnote{\@ifnextchar[{\@xfootnote}{\stepcounter{\@mpfn}%
\begingroup\let\protect\noexpand
\xdef\@thefnmark{\thempfn}\endgroup
\@footnotemark\@footnotetext}}
\def\@xfootnote[#1]{\setcounter{footnote}{#1}%
\addtocounter{footnote}{-1}\footnote}
\newcommand\ftnote{\protect\pftnote}
\newcommand\pftnote[1]{\setcounter{footnote}{#1}%
\addtocounter{footnote}{-1}\footnote}
\newcommand{\fnm}[1]{\setcounter{footnote}{#1}\footnotetext}
\def\@fnsymbol#1{\ifnum\thefootnote=99\hbox{*}\else^{\thefootnote}\fi\relax}
%
% Address marker
%
\newcommand{\ad}[1]{\noindent\hbox{$^{#1}$}\relax}
\newcommand{\adnote}[2]{\noindent\hbox{$^{#1,}$}\setcounter{footnote}{#2}%
\addtocounter{footnote}{-1}\footnote}
\def\@tnote{}
\newcounter{oldftnote}
\newcommand{\tnote}[1]{*\gdef\@tnote{%
\setcounter{oldftnote}{\c@footnote}%
\setcounter{footnote}{99}%
\footnotetext{#1}%
\setcounter{footnote}{\c@oldftnote}\addtocounter{footnote}{-1}}}
%==================
% Acknowledgments (no heading if letter)
% Usage \ack for Acknowledgments, \ackn for Acknowledgement
\def\ack{\section*{Acknowledgments}}
\def\ackn{\section*{Acknowledgment}}
%SECTION DEFINITIONS
\setcounter{secnumdepth}{3}
\newcounter {section}
\newcounter {subsection}[section]
\newcounter {subsubsection}[subsection]
\newcounter {paragraph}[subsubsection]
\newcounter {subparagraph}[paragraph]
\renewcommand \thesection {\arabic{section}}
\renewcommand\thesubsection {\thesection.\arabic{subsection}}
\renewcommand\thesubsubsection{\thesubsection .\arabic{subsubsection}}
\renewcommand\theparagraph {\thesubsubsection.\arabic{paragraph}}
\renewcommand\thesubparagraph {\theparagraph.\arabic{subparagraph}}
%\nosections
\def\nosections{\vspace{30\p@ plus12\p@ minus12\p@}
\noindent\ignorespaces}
%\renewcommand{\@startsection}[6]
%{%
%\if@noskipsec \leavevmode \fi
%\par
% \@tempskipa #4\relax
%%\@tempskipa 0pt\relax
% \@afterindenttrue
% \ifdim \@tempskipa <\z@
% \@tempskipa -\@tempskipa \@afterindentfalse
% \fi
% \if@nobreak
% \everypar{}%
% \else
% \addpenalty\@secpenalty\addvspace\@tempskipa
% \fi
% \@ifstar
% {\@ssect{#3}{#4}{#5}{#6}}%
% {\@dblarg{\@sect{#1}{#2}{#3}{#4}{#5}{#6}}}}
%\renewcommand{\@sect}[8]{%
% \ifnum #2>\c@secnumdepth
% \let\@svsec\@empty
% \else
% \refstepcounter{#1}%
% \protected@edef\@svsec{\@seccntformat{#1}\relax}%
% \fi
% \@tempskipa #5\relax
% \ifdim \@tempskipa>\z@
% \begingroup
% #6{%
% \@hangfrom{\hskip #3\relax\@svsec}%
% \interlinepenalty \@M #8\@@par}%
% \endgroup
% \csname #1mark\endcsname{#7}%
% \addcontentsline{toc}{#1}{%
% \ifnum #2>\c@secnumdepth \else
% \protect\numberline{\csname the#1\endcsname}%
% \fi
% #7}%
% \else
% \def\@svsechd{%
% #6{\hskip #3\relax
% \@svsec #8}%
% \csname #1mark\endcsname{#7}%
% \addcontentsline{toc}{#1}{%
% \ifnum #2>\c@secnumdepth \else
% \protect\numberline{\csname the#1\endcsname}%
% \fi
% #7}}%
% \fi
% \@xsect{#5}}
%\renewcommand{\@xsect}[1]{%
% \@tempskipa #1\relax
% \ifdim \@tempskipa>\z@
% \par \nobreak
% \vskip \@tempskipa
% \@afterheading
% \else
% \@nobreakfalse
% \global\@noskipsectrue
% \everypar{%
% \if@noskipsec
% \global\@noskipsecfalse
% {\setbox\z@\lastbox}%
% \clubpenalty\@M
% \begingroup \@svsechd \endgroup
% \unskip
% \@tempskipa #1\relax
% \hskip -\@tempskipa
% \else
% \clubpenalty \@clubpenalty
% \everypar{}%
% \fi}%
% \fi
% \ignorespaces}
%========================================================================
\newcommand\section{\@startsection {section}{1}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1sp}%
{\reset@font\normalsize\bfseries\raggedright}}
\newcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1sp}%
{\reset@font\normalsize\itshape\raggedright}}
\newcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{-1em \@plus .2em}%
{\reset@font\normalsize\itshape}}
\newcommand\paragraph{\@startsection{paragraph}{4}{\z@}%
{3.25ex \@plus1ex \@minus.2ex}%
{-1em}%
{\reset@font\normalsize\itshape}}
\newcommand\subparagraph{\@startsection{subparagraph}{5}{\parindent}%
{3.25ex \@plus1ex \@minus .2ex}%
{-1em}%
{\reset@font\normalsize\itshape}}
\def\@sect#1#2#3#4#5#6[#7]#8{\ifnum #2>\c@secnumdepth
\let\@svsec\@empty\else
\refstepcounter{#1}\edef\@svsec{\csname the#1\endcsname. }\fi
\@tempskipa #5\relax
\ifdim \@tempskipa>\z@
\begingroup #6\relax
\noindent{\hskip #3\relax\@svsec}{\interlinepenalty \@M #8\par}%
\endgroup
\csname #1mark\endcsname{#7}\addcontentsline
{toc}{#1}{\ifnum #2>\c@secnumdepth \else
\protect\numberline{\csname the#1\endcsname}\fi
#7}\else
\def\@svsechd{#6\hskip #3\relax %% \relax added 2 May 90
\@svsec #8\csname #1mark\endcsname
{#7}\addcontentsline
{toc}{#1}{\ifnum #2>\c@secnumdepth \else
\protect\numberline{\csname the#1\endcsname}\fi
#7}}\fi
\@xsect{#5}}
%
\def\@ssect#1#2#3#4#5{\@tempskipa #3\relax
\ifdim \@tempskipa>\z@
\begingroup #4\noindent{\hskip #1}{\interlinepenalty \@M #5\par}\endgroup
\else \def\@svsechd{#4\hskip #1\relax #5}\fi
\@xsect{#3}}
% LIST DEFINITIONS
\setlength\leftmargini {2em}
\leftmargin \leftmargini
\setlength\leftmarginii {2em}
\setlength\leftmarginiii {1.8em}
\setlength\leftmarginiv {1.6em}
\setlength\leftmarginv {1em}
\setlength\leftmarginvi {1em}
\setlength\leftmargin{\leftmargini}
\setlength \labelsep {.5em}
\setlength \labelwidth{\leftmargini}
\addtolength\labelwidth{-\labelsep}
\@beginparpenalty -\@lowpenalty
\@endparpenalty -\@lowpenalty
\@itempenalty -\@lowpenalty
\renewcommand\theenumi{\roman{enumi}}
\renewcommand\theenumii{\alph{enumii}}
\renewcommand\theenumiii{\arabic{enumiii}}
\renewcommand\theenumiv{\Alph{enumiv}}
\newcommand\labelenumi{(\theenumi)}
\newcommand\labelenumii{(\theenumii)}
\newcommand\labelenumiii{\theenumiii.}
\newcommand\labelenumiv{(\theenumiv)}
\renewcommand\p@enumii{(\theenumi)}
\renewcommand\p@enumiii{(\theenumi.\theenumii)}
\renewcommand\p@enumiv{(\theenumi.\theenumii.\theenumiii)}
\newcommand\labelitemi{$\m@th\bullet$}
\newcommand\labelitemii{\normalfont\bfseries --}
\newcommand\labelitemiii{$\m@th\ast$}
\newcommand\labelitemiv{$\m@th\cdot$}
\renewcommand \theequation {\@arabic\c@equation}
%%%%%%%%%%%%% Figures
\newcounter{figure}
\renewcommand\thefigure{\@arabic\c@figure}
\def\fps@figure{tbp}
\def\ftype@figure{1}
\def\ext@figure{lof}
\def\fnum@figure{\figurename~\thefigure}
\newenvironment{figure}{\footnotesize\rm\@float{figure}}%
{\end@float\normalsize\rm}
\newenvironment{figure*}{\footnotesize\rm\@dblfloat{figure}}{\end@dblfloat}
\newcounter{table}
\renewcommand\thetable{\@arabic\c@table}
\def\fps@table{tbp}
\def\ftype@table{2}
\def\ext@table{lot}
\def\fnum@table{\tablename~\thetable}
\newenvironment{table}{\footnotesize\rm\@float{table}}%
{\end@float\normalsize\rm}
\newenvironment{table*}{\footnotesize\rm\@dblfloat{table}}%
{\end@dblfloat\normalsize\rm}
\newlength\abovecaptionskip
\newlength\belowcaptionskip
\setlength\abovecaptionskip{10\p@}
\setlength\belowcaptionskip{0\p@}
%Table Environments
%\newenvironment{tableref}[3][\textwidth]{%
%\begin{center}%
%\begin{table}%
%\captionsetup[table]{width=#1}
%\centering\caption{\label{#2}#3}}{\end{table}\end{center}}
%%%%%%%%%%%%%%%%%
%\newcounter{figure}
%\renewcommand \thefigure {\@arabic\c@figure}
%\def\fps@figure{tbp}
%\def\ftype@figure{1}
%\def\ext@figure{lof}
%\def\fnum@figure{\figurename~\thefigure}
%ENVIRONMENT: figure
%\newenvironment{figure}
% {\@float{figure}}
% {\end@float}
%ENVIRONMENT: figure*
%\newenvironment{figure*}
% {\@dblfloat{figure}}
% {\end@dblfloat}
%ENVIRONMENT: table
%\newcounter{table}
%\renewcommand\thetable{\@arabic\c@table}
%\def\fps@table{tbp}
%\def\ftype@table{2}
%\def\ext@table{lot}
%\def\fnum@table{\tablename~\thetable}
%\newenvironment{table}
% {\@float{table}}
% {\end@float}
%ENVIRONMENT: table*
%\newenvironment{table*}
% {\@dblfloat{table}}
% {\end@dblfloat}
%\newlength\abovecaptionskip
%\newlength\belowcaptionskip
%\setlength\abovecaptionskip{10\p@}
%\setlength\belowcaptionskip{0\p@}
% CAPTIONS
% Added redefinition of \@caption so captions are not written to
% aux file therefore less need to \protect fragile commands
%
\long\def\@caption#1[#2]#3{\par\begingroup
\@parboxrestore
\normalsize
\@makecaption{\csname fnum@#1\endcsname}{\ignorespaces #3}\par
\endgroup}
\long\def\@makecaption#1#2{%
\vskip\abovecaptionskip
\sbox\@tempboxa{{\bf #1.} #2}%
\ifdim \wd\@tempboxa >\hsize
{\bf #1.} #2\par
\else
\global \@minipagefalse
\hb@xt@\hsize{\hfil\box\@tempboxa\hfil}%
\fi
\vskip\belowcaptionskip}
\DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm}
\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf}
\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt}
\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit}
\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl}
\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc}
\DeclareRobustCommand*\cal{\@fontswitch\relax\mathcal}
\DeclareRobustCommand*\mit{\@fontswitch\relax\mathnormal}
%\newcommand\@pnumwidth{1.55em}
%\newcommand\@tocrmarg{2.55em}
%\newcommand\@dotsep{4.5}
%\setcounter{tocdepth}{3}
%\newcommand\tableofcontents{%
% \section*{\contentsname
% \@mkboth{%
% \MakeUppercase\contentsname}{\MakeUppercase\contentsname}}%
% \@starttoc{toc}%
% }
%\newcommand*\l@part[2]{%
% \ifnum \c@tocdepth >-2\relax
% \addpenalty\@secpenalty
% \addvspace{2.25em \@plus\p@}%
% \begingroup
% \parindent \z@ \rightskip \@pnumwidth
% \parfillskip -\@pnumwidth
% {\leavevmode
% \large \bfseries #1\hfil \hb@xt@\@pnumwidth{\hss #2}}\par
% \nobreak
% \if@compatibility
% \global\@nobreaktrue
% \everypar{\global\@nobreakfalse\everypar{}}%
% \fi
% \endgroup
% \fi}
%\newcommand*\l@section[2]{%
% \ifnum \c@tocdepth >\z@
% \addpenalty\@secpenalty
% \addvspace{1.0em \@plus\p@}%
% \setlength\@tempdima{1.5em}%
% \begingroup
% \parindent \z@ \rightskip \@pnumwidth
% \parfillskip -\@pnumwidth
% \leavevmode \bfseries
% \advance\leftskip\@tempdima
% \hskip -\leftskip
% #1\nobreak\hfil \nobreak\hb@xt@\@pnumwidth{\hss #2}\par
% \endgroup
% \fi}
%\newcommand*\l@subsection{\@dottedtocline{2}{1.5em}{2.3em}}
%\newcommand*\l@subsubsection{\@dottedtocline{3}{3.8em}{3.2em}}
%\newcommand*\l@paragraph{\@dottedtocline{4}{7.0em}{4.1em}}
%\newcommand*\l@subparagraph{\@dottedtocline{5}{10em}{5em}}
%\newcommand\listoffigures{%
% \section*{\listfigurename
% \@mkboth{\MakeUppercase\listfigurename}%
% {\MakeUppercase\listfigurename}}%
% \@starttoc{lof}%
% }
%\newcommand*\l@figure{\@dottedtocline{1}{1.5em}{2.3em}}
%\newcommand\listoftables{%
% \section*{\listtablename
% \@mkboth{%
% \MakeUppercase\listtablename}{\MakeUppercase\listtablename}}%
% \@starttoc{lot}%
% }
%\let\l@table\l@figure
%======================================
%ENVIRONMENTS
%======================================
%ENVIRONMENT: indented
\newenvironment{indented}{\begin{indented}}{\end{indented}}
\newenvironment{varindent}[1]{\begin{varindent}{#1}}{\end{varindent}}
%
\def\indented{\list{}{\itemsep=0\p@\labelsep=0\p@\itemindent=0\p@
\labelwidth=0\p@\leftmargin=\mathindent\topsep=0\p@\partopsep=0\p@
\parsep=0\p@\listparindent=15\p@}\footnotesize\rm}
\let\endindented=\endlist
\newlength{\varind}
\def\varindent#1{\setlength{\varind}{#1}%
\list{}{\itemsep=0\p@\labelsep=0\p@\itemindent=0\p@
\labelwidth=0\p@\leftmargin=\varind\topsep=0\p@\partopsep=0\p@
\parsep=0\p@\listparindent=15\p@}\footnotesize\rm}
\let\endvarindent=\endlist
%ENVIRONMENT: abstract
\newenvironment{abstract}{%
\vspace{16pt plus3pt minus3pt}
\begin{indented}
\item[]{\bfseries \abstractname.}\quad\rm\ignorespaces}
{\end{indented}\vspace{10mm}}
%ENVIRONMENT: description
\newenvironment{description}
{\list{}{\labelwidth\z@ \itemindent-\leftmargin
\let\makelabel\descriptionlabel}}
{\endlist}
\newcommand\descriptionlabel[1]{\hspace\labelsep
\normalfont\bfseries #1}
%ENVIRONMENT: quotation
\newenvironment{quotation}
{\list{}{\listparindent 1.5em%
\itemindent \listparindent
\rightmargin \leftmargin
\parsep \z@ \@plus\p@}%
\item[]}
{\endlist}
%ENVIRONMENT: quote
\newenvironment{quote}
{\list{}{\rightmargin\leftmargin}%
\item[]}
{\endlist}
%ENVIRONMENT: verse
\newenvironment{verse}
{\let\\=\@centercr
\list{}{\itemsep \z@
\itemindent -1.5em%
\listparindent\itemindent
\rightmargin \leftmargin
\advance\leftmargin 1.5em}%
\item[]}
{\endlist}
%ENVIRONMENT: bibliography
\newdimen\bibindent
\setlength\bibindent{1.5em}
\def\thebibliography#1{\list
{\hfil[\arabic{enumi}]}{\topsep=0\p@\parsep=0\p@
\partopsep=0\p@\itemsep=0\p@
\labelsep=5\p@\itemindent=-10\p@
\settowidth\labelwidth{\footnotesize[#1]}%
\leftmargin\labelwidth
\advance\leftmargin\labelsep
\advance\leftmargin -\itemindent
\usecounter{enumi}}\footnotesize
\def\newblock{\ }
\sloppy\clubpenalty4000\widowpenalty4000
\sfcode`\.=1000\relax}
\let\endthebibliography=\endlist
\def\numrefs#1{\begin{thebibliography}{#1}}
\def\endnumrefs{\end{thebibliography}}
\let\endbib=\endnumrefs
%%%%%%%%%%%%%%%%%%
%\newenvironment{thebibliography}[1]
% {\section*{References}
% \list{\@biblabel{\@arabic\c@enumiv}}%
% {\settowidth\labelwidth{\@biblabel{#1}}%
% \leftmargin\labelwidth
% \advance\leftmargin\labelsep
% \@openbib@code
% \usecounter{enumiv}%
% \let\p@enumiv\@empty
% \renewcommand\theenumiv{\@arabic\c@enumiv}}%
% \sloppy
% \clubpenalty4000
% \@clubpenalty \clubpenalty
% \widowpenalty4000%
% \sfcode`\.\@m}
% {\def\@noitemerr
% {\@latex@warning{Empty `thebibliography' environment}}%
% \endlist}
%\newcommand\newblock{\hskip .11em\@plus.33em\@minus.07em}
%\let\@openbib@code\@empty
%ENVIRONMENT: theindex
\newenvironment{theindex}
{\if@twocolumn
\@restonecolfalse
\else
\@restonecoltrue
\fi
\columnseprule \z@
\columnsep 35\p@
\twocolumn[\section*{\indexname}]%
\@mkboth{\MakeUppercase\indexname}%
{\MakeUppercase\indexname}%
\thispagestyle{plain}\parindent\z@
\parskip\z@ \@plus .3\p@\relax
\let\item\@idxitem}
{\if@restonecol\onecolumn\else\clearpage\fi}
\newcommand\@idxitem{\par\hangindent 40\p@}
\newcommand\subitem{\@idxitem \hspace*{20\p@}}
\newcommand\subsubitem{\@idxitem \hspace*{30\p@}}
\newcommand\indexspace{\par \vskip 10\p@ \@plus5\p@ \@minus3\p@\relax}
%=====================
\def\appendix{\@ifnextchar*{\@appendixstar}{\@appendix}}
\def\@appendix{\eqnobysec\@appendixstar}
\def\@appendixstar{\@@par
\ifnumbysec % Added 30/4/94 to get Table A1,
\@addtoreset{table}{section} % Table B1 etc if numbering by
\@addtoreset{figure}{section}\fi % section
\setcounter{section}{0}
\setcounter{subsection}{0}
\setcounter{subsubsection}{0}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\def\thesection{Appendix \Alph{section}}
\def\theequation{\ifnumbysec
\Alph{section}.\arabic{equation}\else
\Alph{section}\arabic{equation}\fi} % Comment A\arabic{equation} maybe
\def\thetable{\ifnumbysec % better? 15/4/95
\Alph{section}\arabic{table}\else
A\arabic{table}\fi}
\def\thefigure{\ifnumbysec
\Alph{section}\arabic{figure}\else
A\arabic{figure}\fi}}
\def\noappendix{\setcounter{figure}{0}
\setcounter{table}{0}
\def\thetable{\arabic{table}}
\def\thefigure{\arabic{figure}}}
\setlength\arraycolsep{5\p@}
\setlength\tabcolsep{6\p@}
\setlength\arrayrulewidth{.4\p@}
\setlength\doublerulesep{2\p@}
\setlength\tabbingsep{\labelsep}
\skip\@mpfootins = \skip\footins
\setlength\fboxsep{3\p@}
\setlength\fboxrule{.4\p@}
\renewcommand\theequation{\arabic{equation}}
% NAME OF STRUCTURES
\newcommand\contentsname{Contents}
\newcommand\listfigurename{List of Figures}
\newcommand\listtablename{List of Tables}
\newcommand\refname{References}
\newcommand\indexname{Index}
\newcommand\figurename{Figure}
\newcommand\tablename{Table}
\newcommand\partname{Part}
\newcommand\appendixname{Appendix}
\newcommand\abstractname{Abstract}
%Miscellaneous commands
\newcommand{\BibTeX}{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
\newcommand{\jpcsit}{{\bfseries\itshape\selectfont Journal of Physics: Conference Series}}
\newcommand{\jpcs}{{\itshape\selectfont Journal of Physics: Conference Series}}
\newcommand{\iopp}{IOP Publishing}
\newcommand{\cls}{{\upshape\selectfont\texttt{jpconf.cls}}}
\newcommand{\corg}{conference organizer}
\newcommand\today{\number\day\space\ifcase\month\or
January\or February\or March\or April\or May\or June\or
July\or August\or September\or October\or November\or December\fi
\space\number\year}
\setlength\columnsep{10\p@}
\setlength\columnseprule{0\p@}
\newcommand{\Tables}{\clearpage\section*{Tables and table captions}
\def\fps@table{hp}\noappendix}
\newcommand{\Figures}{\clearpage\section*{Figure captions}
\def\fps@figure{hp}\noappendix}
%
\newcommand{\Figure}[1]{\begin{figure}
\caption{#1}
\end{figure}}
%
\newcommand{\Table}[1]{\begin{table}
\caption{#1}
\begin{indented}
\lineup
\item[]\begin{tabular}{@{}l*{15}{l}}}
\def\endTable{\end{tabular}\end{indented}\end{table}}
\let\endtab=\endTable
%
\newcommand{\fulltable}[1]{\begin{table}
\caption{#1}
\lineup
\begin{tabular*}{\textwidth}{@{}l*{15}{@{\extracolsep{0pt plus 12pt}}l}}}
\def\endfulltable{\end{tabular*}\end{table}}
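%
% Usage sketch (illustrative only): \Table opens a captioned, indented
% tabular with 16 left-aligned columns available; close it with
% \endTable (or \endtab). \fulltable spans the full text width and is
% closed with \endfulltable.
%   \Table{\label{tab:ex}Example results.}
%   \br
%   Run & Events \\
%   \mr
%   1 & 1024 \\
%   2 & 2048 \\
%   \br
%   \endTable
%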
%BIBLIOGRAPHY and References
%\newcommand{\Bibliography}[1]{\section*{References}\par\numrefs{#1}}
%\newcommand{\References}{\section*{References}\par\refs}
%\def\thebibliography#1{\list
% {\hfil[\arabic{enumi}]}{\topsep=0\p@\parsep=0\p@
% \partopsep=0\p@\itemsep=0\p@
% \labelsep=5\p@\itemindent=-10\p@
% \settowidth\labelwidth{\footnotesize[#1]}%
% \leftmargin\labelwidth
% \advance\leftmargin\labelsep
% \advance\leftmargin -\itemindent
% \usecounter{enumi}}\footnotesize
% \def\newblock{\ }
% \sloppy\clubpenalty4000\widowpenalty4000
% \sfcode`\.=1000\relax}
%\let\endthebibliography=\endlist
%\def\numrefs#1{\begin{thebibliography}{#1}}
%\def\endnumrefs{\end{thebibliography}}
%\let\endbib=\endnumrefs
\def\thereferences{\list{}{\topsep=0\p@\parsep=0\p@
\partopsep=0\p@\itemsep=0\p@\labelsep=0\p@\itemindent=-18\p@
\labelwidth=0\p@\leftmargin=18\p@
}\footnotesize\rm
\def\newblock{\ }
\sloppy\clubpenalty4000\widowpenalty4000
\sfcode`\.=1000\relax}%
\let\endthereferences=\endlist
% MISC EQUATION STUFF
%\def\[{\relax\ifmmode\@badmath\else
% \begin{trivlist}
% \@beginparpenalty\predisplaypenalty
% \@endparpenalty\postdisplaypenalty
% \item[]\leavevmode
% \hbox to\linewidth\bgroup$ \displaystyle
% \hskip\mathindent\bgroup\fi}
%\def\]{\relax\ifmmode \egroup $\hfil \egroup \end{trivlist}\else \@badmath \fi}
%\def\equation{\@beginparpenalty\predisplaypenalty
% \@endparpenalty\postdisplaypenalty
%\refstepcounter{equation}\trivlist \item[]\leavevmode
% \hbox to\linewidth\bgroup $ \displaystyle
%\hskip\mathindent}
%\def\endequation{$\hfil \displaywidth\linewidth\@eqnnum\egroup \endtrivlist}
%\@namedef{equation*}{\[}
%\@namedef{endequation*}{\]}
%\def\eqnarray{\stepcounter{equation}\let\@currentlabel=\theequation
%\global\@eqnswtrue
%\global\@eqcnt\z@\tabskip\mathindent\let\\=\@eqncr
%\abovedisplayskip\topsep\ifvmode\advance\abovedisplayskip\partopsep\fi
%\belowdisplayskip\abovedisplayskip
%\belowdisplayshortskip\abovedisplayskip
%\abovedisplayshortskip\abovedisplayskip
%$$\halign to
%\linewidth\bgroup\@eqnsel$\displaystyle\tabskip\z@
% {##{}}$&\global\@eqcnt\@ne $\displaystyle{{}##{}}$\hfil
% &\global\@eqcnt\tw@ $\displaystyle{{}##}$\hfil
% \tabskip\@centering&\llap{##}\tabskip\z@\cr}
%\def\endeqnarray{\@@eqncr\egroup
% \global\advance\c@equation\m@ne$$\global\@ignoretrue }
%\mathindent = 6pc
%%
%\def\eqalign#1{\null\vcenter{\def\\{\cr}\openup\jot\m@th
% \ialign{\strut$\displaystyle{##}$\hfil&$\displaystyle{{}##}$\hfil
% \crcr#1\crcr}}\,}
%%
%\def\eqalignno#1{\displ@y \tabskip\z@skip
% \halign to\displaywidth{\hspace{5pc}$\@lign\displaystyle{##}$%
% \tabskip\z@skip
% &$\@lign\displaystyle{{}##}$\hfill\tabskip\@centering
% &\llap{$\@lign\hbox{\rm##}$}\tabskip\z@skip\crcr
% #1\crcr}}
%%
\newif\ifnumbysec
\def\theequation{\ifnumbysec
\arabic{section}.\arabic{equation}\else
\arabic{equation}\fi}
\def\eqnobysec{\numbysectrue\@addtoreset{equation}{section}}
\newcounter{eqnval}
\def\numparts{\addtocounter{equation}{1}%
\setcounter{eqnval}{\value{equation}}%
\setcounter{equation}{0}%
\def\theequation{\ifnumbysec
\arabic{section}.\arabic{eqnval}{\it\alph{equation}}%
\else\arabic{eqnval}{\it\alph{equation}}\fi}}
\def\endnumparts{\def\theequation{\ifnumbysec
\arabic{section}.\arabic{equation}\else
\arabic{equation}\fi}%
\setcounter{equation}{\value{eqnval}}}
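%
% Usage sketch (illustrative only): equations inside
% \numparts ... \endnumparts are numbered as parts, e.g. (1a), (1b):
%   \numparts
%   \begin{equation} a = b \end{equation}
%   \begin{equation} c = d \end{equation}
%   \endnumparts
%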
%
\def\cases#1{%
\left\{\,\vcenter{\def\\{\cr}\normalbaselines\openup1\jot\m@th%
\ialign{\strut$\displaystyle{##}\hfil$&\tqs
\rm##\hfil\crcr#1\crcr}}\right.}%
\def\eqalign#1{\null\vcenter{\def\\{\cr}\openup\jot\m@th
\ialign{\strut$\displaystyle{##}$\hfil&$\displaystyle{{}##}$\hfil
\crcr#1\crcr}}\,}
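%
% Usage sketch (illustrative only): rows of \cases take the form
% <math> & <text>, separated by \\ ; \eqalign aligns at the & marker:
%   $$ f(x) = \cases{0 & for $x \le 0$ \\ x^{2} & otherwise} $$
%   $$ \eqalign{y &= (x+1)^{2} \\ &= x^{2}+2x+1} $$
%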
% OTHER USEFUL BITS
\newcommand{\e}{\mathrm{e}}
\newcommand{\rme}{\mathrm{e}}
\newcommand{\rmi}{\mathrm{i}}
\newcommand{\rmd}{\mathrm{d}}
\renewcommand{\qquad}{\hspace*{25pt}}
\newcommand{\tdot}[1]{\stackrel{\dots}{#1}} % Added 1/9/94
\newcommand{\tqs}{\hspace*{25pt}}
\newcommand{\fl}{\hspace*{-\mathindent}}
\newcommand{\Tr}{\mathop{\mathrm{Tr}}\nolimits}
\newcommand{\tr}{\mathop{\mathrm{tr}}\nolimits}
\newcommand{\Or}{\mathord{\mathrm{O}}} %changed from \mathop 20/1/95
\newcommand{\lshad}{[\![}
\newcommand{\rshad}{]\!]}
\newcommand{\case}[2]{{\textstyle\frac{#1}{#2}}}
\def\pt(#1){({\it #1\/})}
\newcommand{\dsty}{\displaystyle}
\newcommand{\tsty}{\textstyle}
\newcommand{\ssty}{\scriptstyle}
\newcommand{\sssty}{\scriptscriptstyle}
\def\lo#1{\llap{${}#1{}$}}
\def\eql{\llap{${}={}$}}
\def\lsim{\llap{${}\sim{}$}}
\def\lsimeq{\llap{${}\simeq{}$}}
\def\lequiv{\llap{${}\equiv{}$}}
%
\newcommand{\eref}[1]{(\ref{#1})}
%\newcommand{\eqref}[1]{Equation (\ref{#1})}
%\newcommand{\Eqref}[1]{Equation (\ref{#1})}
\newcommand{\sref}[1]{section~\ref{#1}}
\newcommand{\fref}[1]{figure~\ref{#1}}
\newcommand{\tref}[1]{table~\ref{#1}}
\newcommand{\Sref}[1]{Section~\ref{#1}}
\newcommand{\Fref}[1]{Figure~\ref{#1}}
\newcommand{\Tref}[1]{Table~\ref{#1}}
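%
% Usage sketch (illustrative only): \eref{eq:a} prints (1),
% \fref{fig:b} prints figure~2, and the capitalized forms \Sref, \Fref
% and \Tref are meant for the start of a sentence.
%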
\newcommand{\opencircle}{\mbox{\Large$\circ\,$}} % moved Large outside maths
\newcommand{\opensquare}{\mbox{$\rlap{$\sqcap$}\sqcup$}}
\newcommand{\opentriangle}{\mbox{$\triangle$}}
\newcommand{\opentriangledown}{\mbox{$\bigtriangledown$}}
\newcommand{\opendiamond}{\mbox{$\diamondsuit$}}
\newcommand{\fullcircle}{\mbox{{\Large$\bullet\,$}}} % moved Large outside maths
\newcommand{\fullsquare}{\,\vrule height5pt depth0pt width5pt}
\newcommand{\dotted}{\protect\mbox{${\mathinner{\cdotp\cdotp\cdotp\cdotp\cdotp\cdotp}}$}}
\newcommand{\dashed}{\protect\mbox{-\; -\; -\; -}}
\newcommand{\broken}{\protect\mbox{-- -- --}}
\newcommand{\longbroken}{\protect\mbox{--- --- ---}}
\newcommand{\chain}{\protect\mbox{--- $\cdot$ ---}}
\newcommand{\dashddot}{\protect\mbox{--- $\cdot$ $\cdot$ ---}}
\newcommand{\full}{\protect\mbox{------}}
\def\;{\protect\psemicolon}
\def\psemicolon{\relax\ifmmode\mskip\thickmuskip\else\kern .3333em\fi}
\def\lineup{\def\0{\hbox{\phantom{0}}}%
\def\m{\hbox{$\phantom{-}$}}%
\def\-{\llap{$-$}}}
%
%%%%%%%%%%%%%%%%%%%%%
% Tables rules %
%%%%%%%%%%%%%%%%%%%%%
\newcommand{\boldarrayrulewidth}{1\p@}
% Width of bold rule in tabular environment.
\def\bhline{\noalign{\ifnum0=`}\fi\hrule \@height
\boldarrayrulewidth \futurelet \@tempa\@xhline}
\def\@xhline{\ifx\@tempa\hline\vskip \doublerulesep\fi
\ifnum0=`{\fi}}
%
% Rules for tables with extra space around
%
\newcommand{\br}{\ms\bhline\ms}
\newcommand{\mr}{\ms\hline\ms}
%
\newcommand{\centre}[2]{\multispan{#1}{\hfill #2\hfill}}
\newcommand{\crule}[1]{\multispan{#1}{\hspace*{\tabcolsep}\hrulefill
\hspace*{\tabcolsep}}}
\newcommand{\fcrule}[1]{\ifnum\thetabtype=1\multispan{#1}{\hrulefill
\hspace*{\tabcolsep}}\else\multispan{#1}{\hrulefill}\fi}
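%
% Usage sketch (illustrative only): \br (bold rule) and \mr (medium
% rule) replace \hline inside tables; \centre spans a heading over
% several columns and \crule draws a partial rule under it:
%   \begin{tabular}{@{}lll}
%   \br
%   & \centre{2}{Measured} \\
%   \ns
%   & \crule{2} \\
%   Sample & $x$ & $y$ \\
%   \mr
%   A & 1.0 & 2.0 \\
%   \br
%   \end{tabular}
%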
%
% Extra spaces for tables and displayed equations
%
\newcommand{\ms}{\noalign{\vspace{3\p@ plus2\p@ minus1\p@}}}
\newcommand{\bs}{\noalign{\vspace{6\p@ plus2\p@ minus2\p@}}}
\newcommand{\ns}{\noalign{\vspace{-3\p@ plus-1\p@ minus-1\p@}}}
\newcommand{\es}{\noalign{\vspace{6\p@ plus2\p@ minus2\p@}}\displaystyle}%
%
\newcommand{\etal}{{\it et al\/}\ }
\newcommand{\dash}{------}
\newcommand{\nonum}{\par\item[]} %\par added 1/9/93
\newcommand{\mat}[1]{\underline{\underline{#1}}}
%
% abbreviations for IOPP journals
%
\newcommand{\CQG}{{\it Class. Quantum Grav.} }
\newcommand{\CTM}{{\it Combust. Theory Modelling\/} }
\newcommand{\DSE}{{\it Distrib. Syst. Engng\/} }
\newcommand{\EJP}{{\it Eur. J. Phys.} }
\newcommand{\HPP}{{\it High Perform. Polym.} } % added 4/5/93
\newcommand{\IP}{{\it Inverse Problems\/} }
\newcommand{\JHM}{{\it J. Hard Mater.} } % added 4/5/93
\newcommand{\JO}{{\it J. Opt.} }
\newcommand{\JOA}{{\it J. Opt. A: Pure Appl. Opt.} }
\newcommand{\JOB}{{\it J. Opt. B: Quantum Semiclass. Opt.} }
\newcommand{\JPA}{{\it J. Phys. A: Math. Gen.} }
\newcommand{\JPB}{{\it J. Phys. B: At. Mol. Phys.} } %1968-87
\newcommand{\jpb}{{\it J. Phys. B: At. Mol. Opt. Phys.} } %1988 and onwards
\newcommand{\JPC}{{\it J. Phys. C: Solid State Phys.} } %1968--1988
\newcommand{\JPCM}{{\it J. Phys.: Condens. Matter\/} } %1989 and onwards
\newcommand{\JPD}{{\it J. Phys. D: Appl. Phys.} }
\newcommand{\JPE}{{\it J. Phys. E: Sci. Instrum.} }
\newcommand{\JPF}{{\it J. Phys. F: Met. Phys.} }
\newcommand{\JPG}{{\it J. Phys. G: Nucl. Phys.} } %1975--1988
\newcommand{\jpg}{{\it J. Phys. G: Nucl. Part. Phys.} } %1989 and onwards
\newcommand{\MSMSE}{{\it Modelling Simulation Mater. Sci. Eng.} }
\newcommand{\MST}{{\it Meas. Sci. Technol.} } %1990 and onwards
\newcommand{\NET}{{\it Network: Comput. Neural Syst.} }
\newcommand{\NJP}{{\it New J. Phys.} }
\newcommand{\NL}{{\it Nonlinearity\/} }
\newcommand{\NT}{{\it Nanotechnology} }
\newcommand{\PAO}{{\it Pure Appl. Optics\/} }
\newcommand{\PM}{{\it Physiol. Meas.} } % added 4/5/93
\newcommand{\PMB}{{\it Phys. Med. Biol.} }
\newcommand{\PPCF}{{\it Plasma Phys. Control. Fusion\/} } % added 4/5/93
\newcommand{\PSST}{{\it Plasma Sources Sci. Technol.} }
\newcommand{\PUS}{{\it Public Understand. Sci.} }
\newcommand{\QO}{{\it Quantum Opt.} }
\newcommand{\QSO}{{\em Quantum Semiclass. Opt.} }
\newcommand{\RPP}{{\it Rep. Prog. Phys.} }
\newcommand{\SLC}{{\it Sov. Lightwave Commun.} } % added 4/5/93
\newcommand{\SST}{{\it Semicond. Sci. Technol.} }
\newcommand{\SUST}{{\it Supercond. Sci. Technol.} }
\newcommand{\WRM}{{\it Waves Random Media\/} }
\newcommand{\JMM}{{\it J. Micromech. Microeng.\/} }
%
% Other commonly quoted journals
%
\newcommand{\AC}{{\it Acta Crystallogr.} }
\newcommand{\AM}{{\it Acta Metall.} }
\newcommand{\AP}{{\it Ann. Phys., Lpz.} }
\newcommand{\APNY}{{\it Ann. Phys., NY\/} }
\newcommand{\APP}{{\it Ann. Phys., Paris\/} }
\newcommand{\CJP}{{\it Can. J. Phys.} }
\newcommand{\JAP}{{\it J. Appl. Phys.} }
\newcommand{\JCP}{{\it J. Chem. Phys.} }
\newcommand{\JJAP}{{\it Japan. J. Appl. Phys.} }
\newcommand{\JP}{{\it J. Physique\/} }
\newcommand{\JPhCh}{{\it J. Phys. Chem.} }
\newcommand{\JMMM}{{\it J. Magn. Magn. Mater.} }
\newcommand{\JMP}{{\it J. Math. Phys.} }
\newcommand{\JOSA}{{\it J. Opt. Soc. Am.} }
\newcommand{\JPSJ}{{\it J. Phys. Soc. Japan\/} }
\newcommand{\JQSRT}{{\it J. Quant. Spectrosc. Radiat. Transfer\/} }
\newcommand{\NC}{{\it Nuovo Cimento\/} }
\newcommand{\NIM}{{\it Nucl. Instrum. Methods\/} }
\newcommand{\NP}{{\it Nucl. Phys.} }
\newcommand{\PL}{{\it Phys. Lett.} }
\newcommand{\PR}{{\it Phys. Rev.} }
\newcommand{\PRL}{{\it Phys. Rev. Lett.} }
\newcommand{\PRS}{{\it Proc. R. Soc.} }
\newcommand{\PS}{{\it Phys. Scr.} }
\newcommand{\PSS}{{\it Phys. Status Solidi\/} }
\newcommand{\PTRS}{{\it Phil. Trans. R. Soc.} }
\newcommand{\RMP}{{\it Rev. Mod. Phys.} }
\newcommand{\RSI}{{\it Rev. Sci. Instrum.} }
\newcommand{\SSC}{{\it Solid State Commun.} }
\newcommand{\ZP}{{\it Z. Phys.} }
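%
% Usage sketch (illustrative only): the journal abbreviations expand to
% italic journal names inside reference lists, e.g.
%   \bibitem{ex} Author A 1990 \PRL {\bf 65} 1234
%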
%===================
\pagestyle{headings}
\pagenumbering{arabic}
\raggedbottom
\onecolumn
\endinput
%%
%% End of file `jpconf.cls'.
%%
%% This is file `jpconf11.clo'
%%
%% This file is distributed in the hope that it will be useful,
%% but WITHOUT ANY WARRANTY; without even the implied warranty of
%% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
%%
%% \CharacterTable
%% {Upper-case \A\B\C\D\E\F\G\H\I\J\K\L\M\N\O\P\Q\R\S\T\U\V\W\X\Y\Z
%% Lower-case \a\b\c\d\e\f\g\h\i\j\k\l\m\n\o\p\q\r\s\t\u\v\w\x\y\z
%% Digits \0\1\2\3\4\5\6\7\8\9
%% Exclamation \! Double quote \" Hash (number) \#
%% Dollar \$ Percent \% Ampersand \&
%% Acute accent \' Left paren \( Right paren \)
%% Asterisk \* Plus \+ Comma \,
%% Minus \- Point \. Solidus \/
%% Colon \: Semicolon \; Less than \<
%% Equals \= Greater than \> Question mark \?
%% Commercial at \@ Left bracket \[ Backslash \\
%% Right bracket \] Circumflex \^ Underscore \_
%% Grave accent \` Left brace \{ Vertical bar \|
%% Right brace \} Tilde \~}
\ProvidesFile{jpconf11.clo}[2005/05/04 v1.0 LaTeX2e file (size option)]
\renewcommand\normalsize{%
\@setfontsize\normalsize\@xipt{13}%
\abovedisplayskip 12\p@ \@plus3\p@ \@minus7\p@
\abovedisplayshortskip \z@ \@plus3\p@
\belowdisplayshortskip 6.5\p@ \@plus3.5\p@ \@minus3\p@
\belowdisplayskip \abovedisplayskip
\let\@listi\@listI}
\normalsize
\newcommand\small{%
\@setfontsize\small\@xpt{12}%
\abovedisplayskip 11\p@ \@plus3\p@ \@minus6\p@
\abovedisplayshortskip \z@ \@plus3\p@
\belowdisplayshortskip 6.5\p@ \@plus3.5\p@ \@minus3\p@
\def\@listi{\leftmargin\leftmargini
\topsep 9\p@ \@plus3\p@ \@minus5\p@
\parsep 4.5\p@ \@plus2\p@ \@minus\p@
\itemsep \parsep}%
\belowdisplayskip \abovedisplayskip}
\newcommand\footnotesize{%
% \@setfontsize\footnotesize\@xpt\@xiipt
\@setfontsize\footnotesize\@ixpt{11}%
\abovedisplayskip 10\p@ \@plus2\p@ \@minus5\p@
\abovedisplayshortskip \z@ \@plus3\p@
\belowdisplayshortskip 6\p@ \@plus3\p@ \@minus3\p@
\def\@listi{\leftmargin\leftmargini
\topsep 6\p@ \@plus2\p@ \@minus2\p@
\parsep 3\p@ \@plus2\p@ \@minus\p@
\itemsep \parsep}%
\belowdisplayskip \abovedisplayskip
}
\newcommand\scriptsize{\@setfontsize\scriptsize\@viiipt{9.5}}
\newcommand\tiny{\@setfontsize\tiny\@vipt\@viipt}
\newcommand\large{\@setfontsize\large\@xivpt{18}}
\newcommand\Large{\@setfontsize\Large\@xviipt{22}}
\newcommand\LARGE{\@setfontsize\LARGE\@xxpt{25}}
\newcommand\huge{\@setfontsize\huge\@xxvpt{30}}
\let\Huge=\huge
\if@twocolumn
\setlength\parindent{14\p@}
\else
\setlength\parindent{18\p@}
\fi
\if@letterpaper%
%\input{letmarg.tex}%
\setlength{\hoffset}{0mm}
\setlength{\marginparsep}{0mm}
\setlength{\marginparwidth}{0mm}
\setlength{\textwidth}{160mm}
\setlength{\oddsidemargin}{-0.4mm}
\setlength{\evensidemargin}{-0.4mm}
\setlength{\voffset}{0mm}
\setlength{\headheight}{8mm}
\setlength{\headsep}{5mm}
\setlength{\footskip}{0mm}
\setlength{\textheight}{230mm}
\setlength{\topmargin}{1.6mm}
\else
%\input{a4marg.tex}%
\setlength{\hoffset}{0mm}
\setlength{\marginparsep}{0mm}
\setlength{\marginparwidth}{0mm}
\setlength{\textwidth}{160mm}
\setlength{\oddsidemargin}{-0.4mm}
\setlength{\evensidemargin}{-0.4mm}
\setlength{\voffset}{0mm}
\setlength{\headheight}{8mm}
\setlength{\headsep}{5mm}
\setlength{\footskip}{0mm}
\setlength{\textheight}{230mm}
\setlength{\topmargin}{1.6mm}
\fi
\setlength\maxdepth{.5\topskip}
\setlength\@maxdepth\maxdepth
\setlength\footnotesep{8.4\p@}
\setlength{\skip\footins} {10.8\p@ \@plus 4\p@ \@minus 2\p@}
\setlength\floatsep {14\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\textfloatsep {24\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\intextsep {16\p@ \@plus 4\p@ \@minus 4\p@}
\setlength\dblfloatsep {16\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\dbltextfloatsep{24\p@ \@plus 2\p@ \@minus 4\p@}
\setlength\@fptop{0\p@}
\setlength\@fpsep{10\p@ \@plus 1fil}
\setlength\@fpbot{0\p@}
\setlength\@dblfptop{0\p@}
\setlength\@dblfpsep{10\p@ \@plus 1fil}
\setlength\@dblfpbot{0\p@}
\setlength\partopsep{3\p@ \@plus 2\p@ \@minus 2\p@}
\def\@listI{\leftmargin\leftmargini
\parsep=\z@
\topsep=6\p@ \@plus3\p@ \@minus3\p@
\itemsep=3\p@ \@plus2\p@ \@minus1\p@}
\let\@listi\@listI
\@listi
\def\@listii {\leftmargin\leftmarginii
\labelwidth\leftmarginii
\advance\labelwidth-\labelsep
\topsep=3\p@ \@plus2\p@ \@minus\p@
\parsep=\z@
\itemsep=\parsep}
\def\@listiii{\leftmargin\leftmarginiii
\labelwidth\leftmarginiii
\advance\labelwidth-\labelsep
\topsep=\z@
\parsep=\z@
\partopsep=\z@
\itemsep=\z@}
\def\@listiv {\leftmargin\leftmarginiv
\labelwidth\leftmarginiv
\advance\labelwidth-\labelsep}
\def\@listv{\leftmargin\leftmarginv
\labelwidth\leftmarginv
\advance\labelwidth-\labelsep}
\def\@listvi {\leftmargin\leftmarginvi
\labelwidth\leftmarginvi
\advance\labelwidth-\labelsep}
\endinput
%%
%% End of file `jpconf11.clo'.
File added: contributions/storage/danni.PNG (1.52 MiB)

\documentclass[a4paper]{jpconf}
\usepackage{url}
\usepackage[]{color}
\usepackage{graphicx}
\usepackage{makecell}
\usepackage{booktabs}
\usepackage{subfig}
\usepackage{float}
\usepackage{tikz}
\usepackage[binary-units=true,per-mode=symbol]{siunitx}
% \usepackage{pgfplots}
% \usepgfplotslibrary{patchplots}
% \usepackage[binary-units=true,per-mode=symbol]{siunitx}
\begin{document}
\title{Data management and storage systems}
\author{A. Cavalli, D. Cesini, A. Falabella, E. Fattibene, L. Morganti, A. Prosperini and V. Sapunenko}
\address{INFN-CNAF, Bologna, IT}
\ead{vladimir.sapunenko@cnaf.infn.it}
\section{Introduction}
The Data Management group, composed of 7 people (5 permanent), is responsible for the installation, configuration and operation of all data storage systems, including the Storage Area Network (SAN) infrastructure, the disk servers, the Mass Storage System (MSS) and the tape library, as well as data management services such as GridFTP, XRootD, WebDAV and the SRM interfaces (StoRM in our case) to the storage systems.
The installed capacity at the end of 2018 was around 37 PB of net disk space and around 72 PB of tape space.
The storage infrastructure is based on industry standards, allowing the implementation of a data access system that is completely redundant at the hardware level and capable of very high performance.
The infrastructure is structured as follows:
\begin{itemize}
\item Hardware level: mid-range or enterprise-level storage systems interconnected to the disk servers via a Storage Area Network (Fibre Channel or InfiniBand).
\item File-system level: IBM Spectrum Scale (formerly GPFS) to manage the storage systems. The GPFS file systems, one for each main experiment, are directly mounted on the compute nodes, so that user jobs have direct POSIX access to all data. Given that the latency between the two sites is low (comparable to that experienced on the LAN), the CNAF file systems are directly mounted on the CINECA nodes as well.
\item Hierarchical Storage Manager (HSM): GEMSS (Grid Enabled Mass Storage System, see Section 3 below), a thin software layer developed in house, and IBM Spectrum Protect (formerly TSM) to manage the tape library. In the current setup, a total of 5 HSM nodes (one for each LHC experiment and one for all the others) are used in production to handle all data movements between disks and tapes for all the experiments.
\item Data access: users can manage data via StoRM and access the data via POSIX, GridFTP, XRootD and WebDAV/HTTP. Virtualizing the data management servers (StoRM FrontEnd and BackEnd) within a single VM for each major experiment made it possible to consolidate hardware and increase service availability; the data movers, instead, run on
dedicated high-performance hardware.
\end{itemize}
By the end of March 2018, we concluded the installation of the last part of the 2018 tender. The new storage
consisted of 3 Huawei OceanStor 18800v5 systems, for a total of 11.52 PB of usable space, and 12 I/O servers equipped with 2x100 GbE and 2x56 Gbps InfiniBand cards.
By the end of 2018, we decommissioned older storage systems that had been in service for more than 6 years and migrated about 9 PB of data to the recently installed storage. The data migration was performed without any service interruption, thanks to the Spectrum Scale functionality that permits on-the-fly file-system reconfiguration.
A list of the storage systems in production as of 31.12.2018 is given in Table \ref{table:3}.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|}
\hline
Storage System & Quantity & Net Capacity, TB \\
\hline
DDN SFA 12K & 2 & 10240 \\
DELL MD3860 & 4 & 2304 \\
Huawei OS6800v5 & 1 & 5521\\
Huawei OS18000v5 & 5 & 19320\\
\hline
Total (On-line) &&37385 \\
\hline
\end{tabular}
\caption{Storage systems in production as of 31.12.2018.}
\label{table:3}
\end{table}
\section{Recovery from the flood of November 9th, 2017}
The first three months of 2018 were entirely dedicated to recovering the hardware and restoring the services after the flood
of November $9^{th}$, 2017.
At that time, the Tier 1 storage at CNAF consisted of the resources listed in Table \ref{table:1}. Almost all storage resources were damaged or contaminated by dirty water (Figure \ref{fig:danni}).
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
System & Quantity & Net Capacity, TB & Use (\%) \\
\hline
DDN SFA 12K & 2 & 10240 & 95 \\
DDN SFA 10K & 1 & 2500 & 96 \\
DDN S2A 9900 & 6 & 5700 & 80 \\
DELL MD3860 & 4 & 2200 & 97 \\
Huawei OS6800v5 & 1 & 4480 & 48 \\
\hline
Total (On-line) && 25120 &\\
\hline
\end{tabular}
\caption{Storage systems in production as of November $9^{th}$ 2017.}
\label{table:1}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{danni.PNG}
\caption[]{Almost all storage resources were damaged or contaminated by dirty water.}
\label{fig:danni}
\end{figure}
The recovery started as soon as the flooded halls became accessible. As a first step, we extracted all the tape cartridges and hard disks that had come into contact with water from the tape library and the disk enclosures, respectively. After extraction, all of them were labeled with their original position, cleaned, dried and stored in a secure place.
\subsection{Recovery of disk storage systems}
The strategy for recovering the disk storage systems varied depending on the redundancy configuration and on the availability of technical support.
\subsubsection{DDN}
All the DDN storage systems consisted of a pair of controllers and 10 disk enclosures, and were configured with RAID6 (8+2) data protection in such a way that every RAID group was distributed over all 10 enclosures. One damaged disk enclosure in a DDN system therefore meant a reduced level of redundancy. In this case, we decided either to operate the system with reduced redundancy for the time needed to evacuate the data to the newly installed storage, or to substitute the damaged enclosure and the corresponding disks with new ones and rebuild the missing parity.
For the most recent and still-maintained systems, we decided to replace all potentially damaged parts, specifically 3 disk enclosures and 3x84 disks of 8 TB.
After cleaning and drying, we tested several disk drives in our lab and found that helium-filled HDDs, being well sealed, are mostly immune to water contamination.
The only sensitive parts of such drives are the electronic board and the connectors, which are easy to clean even without special equipment.
Cleaning disk enclosures, on the contrary, is much more complicated, or even impossible.
For this reason, we decided to replace only the Disk Array Enclosures (DAEs), populate them with the old but cleaned HDDs, start the systems up, and then replace and rebuild the old disks one by one while in production. In this way we were able to start using the largest part of our storage immediately after our power plant was restored.
For the older DDN systems, the SFA10000 and the S2A 9900, we decided to disconnect the contaminated enclosures (one in each system) and to run them with reduced redundancy (RAID5 8+1 instead of RAID6 8+2) while moving the data to the new storage systems.
\subsubsection{Dell}
After cleaning and drying, air-filled disks demonstrated limited operability (up to 2-3 weeks), usually enough for data evacuation.
For the Dell MD3860f storage systems the situation was quite different, since each system had only 3 DAEs of 60 HDDs each, with 24 contaminated disks,
and data protection was based on Distributed RAID technology.
In this case, working in close contact with the Dell Support Service and trying to minimize costs,
we decided to replace only the contaminated elements, such as electronic boards, backplanes and chassis, leaving the original (cleaned and dried) disks
in place and replacing them one by one with new ones after powering on the system, so as to allow the missing parity to be rebuilt.
Replacement and rebuild took about 3 weeks for each MD3860f system. During this time, we observed only 3 failures (spread over time) of ``wet'' HDDs, each successfully recovered by an automated rebuild using reserved capacity.
\subsubsection{Huawei}
The Huawei OceanStor 6800v5 storage system, consisting of 12 disk enclosures of 75 HDDs each, was installed in 2 cabinets and ended up with two disk enclosures at the lowest level, which were therefore contaminated by water. The two contaminated disk enclosures belonged to two different storage pools.
The data protection scheme was similar to that adopted for the Dell MD3860, i.e. three Distributed RAID groups built on top of storage pools of four disk enclosures. For the recovery we followed the procedure described above and replaced the two disk enclosures. The spare parts were delivered and installed, and the disks were cleaned and put back in their original places. However, when powered on, the system did not recognize the new enclosures: it turned out that the delivered enclosures were incompatible at the firmware level with the controllers. While this issue was being debugged, the system remained powered on and the disks kept deteriorating. By the time the compatibility issue was solved, two weeks later, the number of failed disks had exceeded the supported redundancy. Hence, two out of three RAID sets became permanently damaged, and two thirds of all the data stored on this system were permanently lost.
The total volume of lost data amounts to 1.4 PB, out of the 22 PB stored at the CNAF data center at the moment of the flood.
\subsection{Recovery of tapes and tape library}
The SL8500 tape library was contaminated by water in its lowest 20 cm: enough to damage several components and the 166 cartridges stored in the first two levels of slots (out of a total of 5500 cartridges in the library).
Sixteen of the damaged tapes were still empty.
As a first intervention, the wet tapes were removed and placed in a safe location, so as to let them dry and to start evaluating the potential data loss. The Spectrum Protect database was restored from a backup copy saved on a separate storage system that had been evacuated to the CNR site. This made it possible to identify the content of all the wet tapes.
We communicated the content of each wet tape to the experiments, asking them whether the data on those tapes could be recovered from other sites or possibly be reproduced.
It turned out that the data contained in 75 tapes were unique and non-reproducible, so those cartridges were sent to the laboratory of an external company to be recovered.
The recovery process lasted 6 months, and 6 tapes proved partially unrecoverable (20 TB lost out of a total of 630 TB).
In parallel, non-trivial work started to clean, repair and re-certify the library, finally reinstating the maintenance contract that we still had in place (though temporarily suspended) with Oracle. External technicians disassembled and cleaned the whole library and its modules, which also allowed the underlying damaged floating floor to be replaced. The main power supplies and two robot hands were replaced, and one T10kD tape drive was lost. When the SL8500 was finally ready and turned on again, a control board in the front door panel, clearly damaged by the moisture, burned out and had to be replaced as well.
Once the tape system was back in production, we audited a sample of non-wet cartridges in order to understand whether the humidity had damaged the tapes in the period immediately after the flood. 500 cartridges (4.2 PB), heterogeneous in experiment and age, were chosen. As a result, 90 files on 2 tapes proved unreadable, which is a normal error rate compared to production, so no issue related to the exposure to water was observed.
The flood also affected several tapes (of 8 GB each) containing data taken during RUN1 of the CDF experiment, which ran at Fermilab from 1990 to 1995. When the flood happened, the CNAF team had been working to replicate the CDF data stored on those old media to modern and reliable storage technologies, in order to keep them accessible for further use. Those tapes were dried in the hours immediately after the flood, but their readability was not verified afterwards.
\subsection{Recovery of servers, switches, etc.}
In total, 15 servers were damaged by contact with water, mainly through acid leaking from on-board batteries, which happens after prolonged exposure to moisture. Recovering the servers was in fact not a priority, and all the contaminated servers remained untouched for about a month. Only one server was recovered; 6 servers were replaced by already decommissioned ones still in working condition, and 8 were purchased new.
Three Fibre Channel switches were also affected by the flood: one Brocade 48000 (384 ports) and two Brocade 5300 (96 ports each). All three switches were successfully recovered after cleaning and the replacement of their power supply modules.
\subsection{Results of hardware recovery}
In the end, after the restart of the Tier 1 data center, we had completely recovered all services and most of the hardware, as summarized in Table \ref{table:2}.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|p{4cm}|}
\hline
Category & Device & Q.ty & Tot. Capacity & Status & Comment \\
\hline
SAN & Brocade 48000 & 1 & 384 ports & recovered & repaired power distribution board \\
SAN & Brocade 5300 & 2 & 192 ports & recovered & replaced power supply units \\
Storage & DDN S2A 9900 & 6 & 5.7 PB & recovered & repaired 6 controllers, replaced 30 disks and 6 JBODs using parts from an already decommissioned system, all data preserved \\
Storage & DDN SFA 10000 & 1 & 2.5 PB & recovered & run with reduced redundancy, all data moved to new storage, then decommissioned \\
Storage & DDN SFA 12000 & 3 & 11.7 PB & recovered & replaced 4 JBODs, 240 new disks of 8 TB and 60 disks of 3 TB (the latter from a decommissioned system), all data preserved \\
Storage & Dell MD3860 & 2 & 1.1 PB & recovered & replaced 2 enclosures and 48 disks, all data preserved \\
Storage & Huawei OS6800 & 1 & 4.5 PB & recovered & replaced 2 enclosures and 150 disks, 1.4 PB of user data lost \\
Servers & & 15 & & recovered & 1 recovered, 14 replaced \\
\hline
\end{tabular}
\caption{2017 flood: summary of damages.}
\label{table:2}
\end{table}
\section{Storage infrastructure resiliency}
Considering the increase in single-disk capacity, we have moved from RAID6 data protection to Distributed RAID in order to speed up the rebuild of failed disks. Moreover, given the foreseen (huge) increase of the installed disk capacity, we are consolidating the disk-server infrastructure, sharply decreasing the number of servers: in the last two tenders, each server was configured with 2x100 Gbps Ethernet and 2x56 Gbps (FDR) InfiniBand connections, while the disk density has been increased from about 200 TB-N per server to about 1000 TB-N per server.
Currently, we have about 45 disk servers managing about 37 PB of storage capacity.
The SAN is also being migrated from Fibre Channel to InfiniBand, which is cheaper and offers higher performance, whereas the part dedicated to the tape drives and the TSM servers (the Tape Area Network, or TAN) will remain based on Fibre Channel.
We are trying to keep all our infrastructure redundant: the dual-path connection from the servers to the SAN, coupled with the path-failover mechanism (which also implements load balancing), eliminates several single points of failure (server connections, SAN switches, controllers of the disk storage boxes) and allows a robust, high-performance implementation of clustered file systems like GPFS.
The StoRM instances have also been virtualized, allowing the implementation of high availability (HA).
\section{Tape library and drives}
At present, a single Oracle SL8500 tape library is installed.
The library has undergone various upgrades and is now populated with tape cartridges of 8.4 TB capacity each,
for a total installed capacity of 70 PB at the end of 2018.
Since the present library is expected to be completely filled during 2019, a tender for a new one is ongoing.
In the meantime, the TAN infrastructure has been upgraded to 16 Gbps Fibre Channel.
The 16 T10kD tape drives are shared among the file systems handling the scientific data.
In the current production configuration, there is no way to dynamically allocate more or fewer drives to the recall or migration activities of the different file systems: the HSM system administrators can only set manually the maximum number of migration or recall threads for each file system by modifying the GEMSS configuration file. Due to this static setup, we frequently observe that some drives sit idle while, at the same time, a number of pending recall threads could run on those free drives. To overcome this inefficiency, we designed a software solution, namely a GEMSS extension, that automatically assigns free tape drives to pending recalls and to administrative tasks on the tape storage pools, such as space reclamation or repack. We plan to put this solution into production during 2019.
\section{Data preservation}
CNAF provides the long-term data preservation of the CDF Run-2 dataset (4 PB), collected between 2001 and 2011 and stored on CNAF tapes since 2015. 140 TB of CDF data were unfortunately lost because of the flood that occurred at CNAF in November 2017; however, all these data have now been successfully re-transferred from Fermilab via the GridFTP protocol. The CDF database (based on Oracle), containing information about CDF datasets such as their structure, file locations and metadata, has been imported from FNAL to CNAF.
The Sequential Access via Metadata (SAM) station, a data-handling tool specific to CDF data management and developed at Fermilab,
has been installed on a dedicated SL6 server at CNAF. This is a fundamental step in the perspective of a complete decommissioning of CDF services at Fermilab.
The SAM station makes it possible to manage data transfers and to retrieve information from the CDF database;
it also provides a SAMWeb tool which uses the HTTP protocol to access the CDF database.
Work is ongoing to verify the availability and correctness of all CDF data stored on CNAF tapes: we are reading all files from the tapes,
calculating their checksum and comparing it with the one stored in the database and retrieved through the SAM station.
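The following minimal Python sketch illustrates this verification loop, assuming for illustration an Adler-32 checksum and hypothetical helpers for the file list and for the checksum lookup through the SAM station:
\begin{verbatim}
# Sketch of the tape verification loop: recompute each file's checksum
# and compare it with the value stored in the CDF database.
# files_on_tape and sam_checksum() are hypothetical helpers
# (file list from GPFS/GEMSS, checksum retrieved via the SAM station).
import zlib

def adler32(path, chunk=8 * 1024 * 1024):
    value = 1
    with open(path, "rb") as f:
        while data := f.read(chunk):
            value = zlib.adler32(data, value)
    return format(value & 0xffffffff, "08x")

for path in files_on_tape:
    if adler32(path) != sam_checksum(path):
        print("checksum mismatch:", path)
\end{verbatim}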
Recent tests showed that CDF analysis jobs, using CDF software distributed via CVMFS and requesting the delivery of CDF files stored on CNAF tapes, work properly.
Once some minor issues regarding the use of X.509 certificates for authentication on the CNAF farm are completely solved, CDF users will be able to access CNAF nodes and submit their jobs via the LSF or HTCondor batch systems.
\section{Third Party Copy activities in DOMA}
At the end of the summer, we joined the TPC (Third Party Copy) subgroup of the WLCG DOMA\footnote{Data Organization, Management, and Access; see https://twiki.cern.ch/twiki/bin/view/LCG/DomaActivities} project, dedicated to improving bulk transfers between WLCG sites using non-GridFTP protocols. In particular, the Tier 1 is involved in these activities for what concerns StoRM WebDAV.
In October, the two StoRM WebDAV servers used in production by the ATLAS experiment were upgraded to a version that implements basic support for Third Party Copy, and both endpoints entered the distributed TPC testbed of volunteer sites.
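In HTTP third-party copy, one endpoint is asked to transfer a file directly to or from a remote endpoint by means of a COPY request carrying the remote URL in a header. The following Python sketch shows a pull-mode copy; endpoints, paths and credentials are illustrative only.
\begin{verbatim}
# Sketch of a pull-mode HTTP third-party copy: the destination StoRM
# WebDAV endpoint is asked to fetch the file directly from the source.
# URLs, paths and credentials are illustrative only.
import requests

resp = requests.request(
    "COPY",
    "https://storm-webdav.example.infn.it/atlas/dest/file.root",
    headers={
        "Source": "https://remote-site.example.org/atlas/src/file.root",
        # credential that the destination forwards to the source:
        "TransferHeaderAuthorization": "Bearer <token>",
    },
    cert=("usercert.pem", "userkey.pem"),  # client X.509 credential
)
print(resp.status_code)  # success is reported with a 2xx code
\end{verbatim}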
\end{document}
\documentclass[a4paper]{jpconf}
\usepackage{graphicx}
\begin{document}
\title{INFN CNAF log analysis: a first experience with summer students}
\author{D. Bonacorsi$^1$, A. Ceccanti$^2$, T. Diotalevi$^1$, A. Falabella$^2$, L. Giommi$^2$, B. Martelli$^2$, D. Michelotto$^2$, L. Morganti$^2$, E. Ronchieri$^2$, S. Rossi Tisbeni$^1$, E. Vianello$^2$}
\address{$^1$ University of Bologna, Bologna, IT}
\address{$^2$ INFN-CNAF, Bologna, IT}
\ead{barbara.martelli@cnaf.infn.it}
\begin{abstract}
In 2018, the INFN CNAF computing center started to investigate predictive and preventive maintenance solutions in order to improve fault diagnosis by applying machine learning techniques to hardware and service logs. A valuable experience was carried out by three students who dedicated three summer months to collecting logs of the StoRM services and of the resources that host them, preprocessing these logs in order to remove all information that could bias the analysis, and performing an initial data analysis. Here we present the activities fulfilled by these students, the initial outcome and the ongoing work at the INFN CNAF data center.
\end{abstract}
\section{Introduction}
In recent years, INFN CNAF has put great effort into defining and implementing a common monitoring infrastructure based on Sensu, InfluxDB and Grafana, and into centralizing logs from the most relevant services \cite{bovina2015, bovina2017}. Nowadays, this unified infrastructure has been fully integrated in the data center \cite{fattibene2018}, and there is the intention to face the new challenge, and opportunity, of correlating this vast volume of data and extracting actionable insights.
During the summer of 2018, a first investigation was carried out with the help of three summer students \cite{seminario}. Once a specific system to analyze was identified, i.e. StoRM, the following activities were addressed:
\begin{itemize}
\item Log collection and harmonization
\item Log parsing of various services, such as the StoRM frontend and backend, heartbeat, messages, GridFTP and GPFS (the latter not covered in our study, but potentially interesting)
\item Metrics data adding (from Tier 1 InfluxDB)
\end{itemize}
Furthermore, to provide a first proof of concept for predictive and preventive maintenance, data categorization and the application of machine learning techniques represent two key points; these activities were conducted between the end of 2018 and the middle of 2019.
\section{Log collection and harmonization}
The first part of the work consisted in the collection of StoRM logs from the StoRM servers dedicated to the ATLAS experiment.
Subsequently, the most relevant information was extracted from the logs using the ELK Stack suite \cite{elk}. The ELK stack consists of four components: Beats, used for data collection from multiple sources; Logstash, used for data aggregation and processing; Elasticsearch, used to store and index data; and Kibana, used for data analysis and visualization. In particular, Logstash has been used to ingest data from Beats in a continuous live-feed streaming, filter relevant entries, parse each event identifying named fields to build a user-defined structure, and ship the parsed data to the Elasticsearch engine. Most data were filtered using a \textit{grok} filter, which is based on regular expressions and provides predefined filters together with the ability to define customized ones.
Finally, several dashboards were created using Kibana in order to show in a human-friendly way a summary of the most relevant information derived from StoRM logs (see for example Figure~\ref{fig3}).
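The effect of a grok filter can be illustrated with an equivalent Python regular expression; the log line format below is simplified and purely illustrative:
\begin{verbatim}
# Python equivalent of a grok filter for a simplified, purely
# illustrative StoRM log line of the form:
#   2018-07-12 10:15:02 [INFO] PTG request: SRM_SUCCESS
import re

PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"\[(?P<level>\w+)\] "
    r"(?P<message>.*)"
)

def parse(line):
    m = PATTERN.match(line)
    return m.groupdict() if m else None  # named fields, as with grok

print(parse("2018-07-12 10:15:02 [INFO] PTG request: SRM_SUCCESS"))
\end{verbatim}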
\begin{figure}[h]
\includegraphics[width=20pc]{kibana.png}\hspace{2pc}
\begin{minipage}[b]{14pc}\caption{\label{fig3}An example of a Kibana dashboard.}
\end{minipage}
\end{figure}
\section{Log parsing}
Among the INFN Tier 1 services hosted at the INFN CNAF computing center there are efficient storage systems, like StoRM, a Grid Storage Resource Manager (SRM) solution. Figure \ref{fig1} shows the StoRM architecture: the frontend service manages user authentication and stores request data, while the backend service executes the SRM functionalities and takes care of space and authorization.
The log files basically contain three types of information: timestamps, metrics, and messages.
\begin{figure}[h]
\includegraphics[width=20pc]{StoRM-full-picture.png}\hspace{2pc}%
\begin{minipage}[b]{14pc}\caption{\label{fig1}The StoRM architecture.}
\end{minipage}
\end{figure}
At the beginning of this work (mid 2018), StoRM at the Tier 1 was monitored with InfluxDB and Grafana. Monitored metrics included CPU, RAM, network and disk usage; the number of synchronous SRM requests per minute per host; and the average duration of asynchronous PTG (prepare-to-get) and PTP (prepare-to-put) requests per host. We wanted to add information derived from the analysis of StoRM logs to the already available monitoring information, in order to derive new insights potentially useful to enhance service availability and efficiency, with the long-term intent of implementing a global predictive maintenance solution for the Tier 1. In order to build a machine learning model for anomaly prediction, logs from two different periods were analyzed: a normal-behavior period and a critical-behavior period (the latter due to a wrong configuration of the file system and of the queues coming from the farm).
A four-step activity has been carried out:
\begin{enumerate}
\item Parsing: log files were parsed and deconstructed, converting them to CSV format.
\item Feature selection: messages were grouped based on their common content (the core part of the message). The grouping phase resulted in 20 \textit{Request Types} (Connection, Run, Ping, Ls, Check permission, PTG, PTG status, Get space tokens, PTP, PTP status, BOL status, Put don, Release files, Mv, Mkdir, BOL, Abort request, Abort files, Get space metadata, nan) and 15 \textit{Result Types} (SRM\_SUCCESS, SRM\_FAILURE, SRM\_NOT\_SUPPORTED, SRM\_REQUEST\_QUEUED, SRM\_REQUEST\_INPROGRESS, Protocol check failed, Received 4 protocols, Some protocols supported, SRM\_DUPLICATION\_ERROR, rpcResponseHandler\_AbortFiles, SRM\_INVALID\_REQUEST, SRM\_INVALID\_PATH, Received 5 protocols, SRM\_INTERNAL\_ERROR, nan). A first data exploration phase was performed by counting the occurrences of messages in each group.
The techniques used for the feature selection procedure were: SelectKBest with the chi-squared statistical test, Recursive Feature Elimination, Principal Component Analysis (PCA) and Feature Importance from ensembles of decision tree methods.
\item One-hot encoding: CSV rows were encoded in binary vectors (feature vectors), each representing the summary of 15 minutes of log contents (see the sketch after this list).
\item Labelling: an operation specific to StoRM log files, done by manually discriminating between the normal and the critical period based on help-desk tickets.
\end{enumerate}
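As an illustration of step (iii), the following Python sketch builds such binary vectors with pandas; file and column names are hypothetical:
\begin{verbatim}
# Sketch of step (iii): one-hot encode request/result types and
# summarize them over 15-minute windows (a feature is 1 if the
# message type occurred at least once in the window).
# File and column names are hypothetical.
import pandas as pd

logs = pd.read_csv("storm_frontend.csv", parse_dates=["timestamp"])

onehot = pd.get_dummies(logs[["request_type", "result_type"]])
onehot["timestamp"] = logs["timestamp"]

vectors = (onehot.set_index("timestamp")
                 .resample("15min").max()
                 .fillna(0).astype(int))
\end{verbatim}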
Feature vectors obtained in (iii) and labeled datasets built in (iv) were used to train several ML algorithms and to test their accuracy. Figure \ref{fig2} depicts the results of tests performed on the following algorithms: LogisticRegression (LR), LinearDiscriminantAnalysis (LDA), KNeighborsClassifier (KNN), GaussianNB (GNB), DecisionTreeClassifier (CART), BaggingClassifier (BgDT), RandomForestClassifier (RF), ExtraTreesClassifier (ET), AdaBoostClassifier (AB), GradientBoostingClassifier (GB), XGBoostClassifier (XGB), MultiLayerPerceptronClassifier (MLP).
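A comparison of this kind can be sketched with scikit-learn as follows, where X and y denote the feature vectors and labels from steps (iii) and (iv); only a subset of the listed classifiers is shown:
\begin{verbatim}
# Sketch of the comparison of Figure 2: 10-fold cross-validated
# accuracy for a subset of the listed classifiers. X and y are the
# feature vectors and labels from steps (iii) and (iv).
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def compare(X, y):
    models = {
        "LR": LogisticRegression(max_iter=1000),
        "KNN": KNeighborsClassifier(),
        "CART": DecisionTreeClassifier(),
        "RF": RandomForestClassifier(),
    }
    for name, model in models.items():
        s = cross_val_score(model, X, y, cv=10, scoring="accuracy")
        print(name, round(s.mean(), 3), "+/-", round(s.std(), 3))
\end{verbatim}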
\begin{center}
\begin{figure}[h]
\includegraphics[width=20pc]{MLalgorithms.png}\hspace{2pc}
\begin{minipage}[b]{14pc}\caption{\label{fig2}Machine Learning Algorithms Comparison (scorer=accuracy).}
\end{minipage}
\end{figure}
\end{center}
\section{Metrics data adding}
This activity mainly focused on collecting metric data from InfluxDB in order to relate them to the StoRM logs obtained through the activities explained in the previous sections, and to extract new insights.
Key components of the log files were identified, parsed and structured in a CSV file with the following columns: timestamp, metric, message, descriptive keys and separators. All timestamps were converted to UNIX epoch time in order to be comparable. On the one hand, InfluxDB stores information with different granularity depending on the age of the collected data; on the other hand, StoRM frontend and backend logs are produced with different frequencies (one line per minute for heartbeat logs, multiple lines per minute for metrics logs, one line every five minutes for the more recent InfluxDB data, and so on). Therefore, some concatenation rules have been implemented in order to correctly relate all data sources based on the time of occurrence of the events: backend metrics are split by type, timestamps are rounded off to one-minute precision, in case of overlap the more recent entry is kept, and all CSV files are concatenated and ordered by timestamp.
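A minimal pandas sketch of these concatenation rules, with illustrative file and column names, is the following:
\begin{verbatim}
# Sketch of the concatenation rules: convert timestamps to UNIX epoch,
# round to one-minute precision, keep the most recent entry in case of
# overlap, then merge all sources in time order. File names are
# illustrative.
import pandas as pd

frames = []
for path in ["backend_metrics.csv", "heartbeat.csv", "influxdb.csv"]:
    df = pd.read_csv(path, parse_dates=["timestamp"])
    epoch_s = df["timestamp"].astype("int64") // 10**9
    df["epoch"] = (epoch_s // 60) * 60      # one-minute precision
    df = (df.sort_values("timestamp")
            .drop_duplicates("epoch", keep="last"))
    frames.append(df)

merged = pd.concat(frames).sort_values("epoch")
\end{verbatim}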
\section{Conclusion}
This experience is a good example of a mutually beneficial collaboration between university students and INFN CNAF. The outcome has allowed the master's students (T. Diotalevi and L. Giommi) to publish papers at international conferences \cite{diotalevi, giommi20191}, to win the Giulia Vita Finzi award \cite{giommi20192}, and to start their PhD courses successfully. Furthermore, the undergraduate student (S. Rossi Tisbeni) will obtain a master's degree in Physics in July 2019. On the other hand, the INFN CNAF data center managers have decided to continue exploiting predictive and preventive maintenance, establishing where and when to use it to keep services running optimally.
\section*{References}
\begin{thebibliography}{9}
\bibitem{seminario} Martelli B, Giommi L, Rossi Tisbeni S and Diotalevi T, https://agenda.infn.it/event/17430/, 2018.
\bibitem{bovina2015} Bovina S, Michelotto D and Misurelli G, \emph{CNAF Annual Report}, pp. 111--114, 2015.
\bibitem{bovina2017} Bovina S and Michelotto D, in Proc. of CHEP 2017.
\bibitem{fattibene2018} Fattibene E, Dal Pra S, Falabella A, De Cristofaro T, Cincinelli G and Ruini M, in Proc. of CHEP 2018.
\bibitem{diotalevi} Diotalevi T, Bonacorsi D, Michelotto D and Falabella A, in Proc. of the International Symposium on Grids \& Clouds (ISGC), Taipei, Taiwan, 2019 (under review).
\bibitem{giommi20191} Giommi L, Bonacorsi D, Diotalevi T, Rossi Tisbeni S, Rinaldi L, Morganti L, Falabella A, Ronchieri E, Ceccanti A and Martelli B, in Proc. of the International Symposium on Grids \& Clouds (ISGC), Taipei, Taiwan, 2019 (under review).
\bibitem{giommi20192} Giommi L, in INFN CCR Workshop, La Biodola, 3--7 June 2019.
\bibitem{elk} https://www.elastic.co/, site visited in June 2019.
\end{thebibliography}
\end{document}
\title{The INFN Information System}
\author{
S. Bovina$^1$,
M. Canaparo$^1$,
E. Capannini$^1$,
F. Capannini$^1$,
C. Galli$^1$,
G. Guizzunti$^1$,
B. Demin$^1$
}
\address{$^1$ INFN-CNAF, Bologna, IT}
\ead{
stefano.bovina@cnaf.infn.it,
}
\begin{abstract}
The mission of the Information System Service is the implementation, management and optimization of all the infrastructural and application components of the administrative services of the Institute. In order to guarantee high reliability and redundancy, the same systems are replicated in an analogous infrastructure at the National Laboratories of Frascati (LNF).
The Information System's team manages all the administrative services of the Institute,
both from the hardware and the software point of view, and it is in charge of carrying out several software projects.
The core of the Information System is made up of the salary and HR systems.
Connected to the core, there are several other systems reachable from a unique web portal:
firstly, the organizational chart system (GODiVA); secondly, the accounting, the time and attendance,
the trip and purchase order and the business intelligence systems.
Finally, there are other systems which manage the training of the employees, their subsidies, their timesheet, the official documents,
the computer protocol, the recruitment, the user support, etc.
\end{abstract}
\section{Introduction}
The INFN Information System project was set up in 2001 with the purpose of digitizing and managing all the administrative and accounting processes of the INFN Institute,
and of carrying out a gradual dematerialization of documents.\\
In 2010, INFN decided to transfer the accounting system, based on the Oracle Business Suite (EBS) and the SUN Solaris operating system,
from the National Laboratories of Frascati (LNF) to CNAF, where the SUN Solaris platform was migrated to a RedHat Linux Cluster and implemented on commodity hardware.\\
The Service ``Information System'' was officially established at CNAF in 2013 with the aim of developing, maintaining and coordinating many IT services which are critical
for INFN. Together with the corresponding office at the National Laboratories of Frascati, it is actively involved in fields related to INFN management and administration, developing tools for business intelligence and research quality assurance; it is also involved in the dematerialization process and in the provisioning of interfaces between users and INFN administration.\\
Over the years, other services have been added, leading to a complex infrastructure that covers all aspects of the life of people working at INFN.
In 2018, the Information System service team at CNAF was composed of 8 people, both developers and system engineers.\\
\section{Infrastructure}
In 2018, the infrastructure-related activity comprised various tasks that can be summarized as follows:
firstly, the consolidation of the Disaster Recovery site in Bari and the restoration of CNAF as primary site;
secondly, the finalization of the Puppet 3 phase-out and the related Foreman upgrades;
thirdly, the improvement of our ELK (Elasticsearch/Logstash/Kibana) and monitoring infrastructure; and finally, several ``Misure Minime'' AGID and GDPR compliance adjustments.
\newline
After the complete revisiting and upgrade of the ELK stack to version 5 last year,
many activities have been carried out to enhance system and application monitoring using this set of tools.
To improve the discovery and resolution of problems, several views and dashboards (see Figure~\ref{fig:presenze_kibana}) have been created on Kibana,
together with a deep analysis and customization of application logs to introduce useful information.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.5]{presenze_kibana.png}
\end{center}
\caption{\label{fig:presenze_kibana} Time and attendance system manual squaring statistics on Kibana (ELK).}
\end{figure}
With the aim of enhancing our cronjob management, improving its monitoring, avoiding cronjob overlaps and identifying ``dead-man switches'',
a new cronjob management tool has been adopted.
Cronjob executions are available both on Kibana and Grafana (as annotations),
so they can be correlated with system events (see Figure~\ref{fig:cronjob_annotation}); in the same way, software releases are also displayed on Grafana.
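As an illustration, an annotation of this kind can be pushed at the end of a cronjob run through the Grafana annotations HTTP API; the endpoint, API key and tags below are purely illustrative:
\begin{verbatim}
# Sketch: push a cronjob run to Grafana as an annotation through its
# HTTP API. Endpoint, API key and tags are illustrative only.
import time
import requests

requests.post(
    "https://grafana.example.infn.it/api/annotations",
    headers={"Authorization": "Bearer <api-key>"},
    json={
        "time": int(time.time() * 1000),   # epoch milliseconds
        "tags": ["cronjob", "backup-db"],
        "text": "cronjob backup-db completed",
    },
)
\end{verbatim}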
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.5]{cronjob_annotation.png}
\end{center}
\caption{\label{fig:cronjob_annotation} Annotations for cronjobs on Grafana.}
\end{figure}
\newpage
Because of the recent regulations that came into force (``Misure Minime'' AGID and GDPR), many audits and related adjustments were made, relying also on the official Center for Internet Security (CIS) guides and on OpenSCAP scans, using the Payment Card Industry Data Security Standard (PCI-DSS) profile.
Afterwards, we introduced a proactive security model on some pilot projects, adopting tools for static code analysis and dependency scanning (see Figure~\ref{fig:deps_scan}).
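As an example, a scan of this kind can be launched with the oscap command-line tool against the PCI-DSS profile of the SCAP Security Guide, for instance via Python's subprocess module; the datastream path and profile identifier below are the usual ones on a CentOS 7 host and may differ on other systems:
\begin{verbatim}
# Sketch of an OpenSCAP evaluation against the PCI-DSS profile of the
# SCAP Security Guide, invoked from Python. The datastream path and
# profile id are the usual ones on CentOS 7 and may differ elsewhere.
import subprocess

subprocess.run([
    "oscap", "xccdf", "eval",
    "--profile", "xccdf_org.ssgproject.content_profile_pci-dss",
    "--results", "scan-results.xml",
    "--report", "scan-report.html",
    "/usr/share/xml/scap/ssg/content/ssg-centos7-ds.xml",
])
\end{verbatim}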
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\textwidth]{deps_scan.png}
\end{center}
\caption{\label{fig:deps_scan} Dependencies scan tool in action on Gitlab-CI.}
\end{figure}
In addition to this, the Platform as a Service (PaaS) infrastructure based on RedHat Openshift Origin (3.x) was upgraded to release 3.11,
and a signature/scan service was deployed at container registry level for all container-based projects (see Figure~\ref{fig:container_ci}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\textwidth]{container_ci.png}
\end{center}
\caption{\label{fig:container_ci} Container registry details and related Gitlab-CI pipeline.}
\end{figure}
\newpage
In 2018, the Oracle database-related activities concerned their maintenance,
an initial analysis of the activities necessary to upgrade the databases to later versions, and a study on how to achieve real-time replication
between the Oracle databases of the Accounting application. Periodic recovery tests were also conducted on the Bari Disaster Recovery site.
\section{Time and attendance system improvements}
The time and attendance system allows employees to clock in and out electronically via swipe card.
The data is instantly transferred into a database and shown in a web-based application.
This system tracks the working hours and offers employees a self-service that allows them to handle many time-tracking tasks on their own,
all subject to customizable approval workflows, which include reviewing the hours they have worked, the current and future schedule, and requests for paid or unpaid leaves.
In 2018, the Time and Attendance system related activities concerned both the introduction of new features and the modification of existing ones. Furthermore, developers focused on the performance improvement of the system through the optimization of some common procedures.
The Time and Attendance system was enabled to ``read'' codes introduced together with the clock in/out: through this mechanism, employees can specify the reasons for their leave of absence without using the web-based application.
Some modifications have been carried out to implement changes occurred in the national collective agreement. This activity included two new leaves of absence and an extension from three to four months of the period for the check of the average weekly working hours.
As concerns performance, the developers' team optimized the procedure that manages the clock in/out by web portal, and the report that shows the paid overtime aggregated by sector, employee and month.
\section{Oracle EBS improvements}
In 2018, a new Electronic Payments and Receipts (EPR) Framework was introduced,
in compliance with the standard set by the Agency for Digital Italy (Agenzia per l'Italia Digitale, AgID) and transmitted through SIOPE+.
SIOPE+ is the new infrastructure that enables general government entities and banks that provide treasury services
to exchange information, with the aim of improving the quality of the data used for monitoring government expenditure and tracking the payment times to firms that supply general government entities.
SIOPE+ responds to the following needs:
\begin{itemize}
\item availability of detailed information on payments made by general government bodies without burdening the entities involved in the flow of outlays and collections. This will make it easier to obtain information on the payments of trade receivables and, more broadly, to monitor public sector financial flows in real time;
\item standardization of information exchange between government bodies and treasury service providers by adopting a single digital standard OPI (Ordinativo di Pagamento e Incasso) in place of the previous local standard OIL (Ordinativo Informatico Locale), with the aim of raising the quality of treasury services, facilitating further integration between the accounting systems of the entities and between payment processes, and supporting the development of electronic payment services.
\end{itemize}
\section{Business Intelligence improvements}
In 2018, the main task was investigating alternative technical solutions to the current Business Intelligence installation,
with the aim of reducing licensing costs, while remaining on an open source solution and preserving functionalities and compatibility with other INFN tools and platforms.
At the end of this activity, the current solution, based on the TIBCO platform, was confirmed as the best one.
%At present, we are converting reports that are using deprecated features. Once all reports are converted, the Business Intelligence infrastructure will be upgraded to the last version.
\section{Contratti}
Contratti (previously named Repertorio Contratti) is a new Java application (in test phase) for the long-term preservation of contracts made between INFN and external suppliers, based on Alfresco and the mDM protocol.
Each contract is enriched with a full set of metadata which describe the contract in its relevant parts, and suppliers are extracted automatically from the central supplier registry, together with details of the contract signer.
Last year, several bugfixes and improvements have been made, in order to meet our customers' requirements. The improvements can be summarized as follows:
\begin{enumerate}
\item integration with the mDM protocol:
\begin{itemize}
\item it is now possible to manage a set of folders where to store the contract file, as if it was a complete folder explorer;
\item before the contract file is stored in mDM, a protocol signature is written onto the document, without invalidating the PAdES (PDF Advanced Electronic Signatures) signature of the issuer.
\end{itemize}
\item complete refactoring of the ACLs mechanism used to manage document and app permissions;
\item added email notification in order to send a contract link to a set of recipients, extracted automatically from Godiva;
\item it is now possible to print a label containing the relevant characteristics of the contract;
\item complete UI restyling in order to improve both readability and usability of the product.