Commit a20f2436 authored by Doina Cristina Duma

Update ds_eoscpilot.tex
[...] coordinator of the activities of task T6.3, “Interoperability pilots
(service implementation, integration, validation, provisioning for Science
Demonstrators)”.
One of the project's main Objectives related to WP6 is to:
\begin{itemize}
\item “Develop a number of pilots that integrate services and infrastructures
to demonstrate interoperability in a number of scientific domains”
\end{itemize}
an Objective mapped onto the following specific Objectives addressed by task T6.3:
\begin{itemize}
\item Validating the compliance of services provided by WP5, “Services”,
with specifications and requirements defined by the Science Demonstrators in WP4,
[...]
During 2018 the main activities coordinated by INFN-CNAF were:
[...]
\item Data accessibility \& interoperability of underlying storage systems –
distributed Onedata deployment (see the sketch after this list)
\end{itemize}
\item Continuous interaction and communication with the Science Demonstrator shepherds,
in order to collect any new requirements emerging from the activities carried out in the
implementation of the SD-specific use cases.
\end{itemize}
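The distributed Onedata deployment mentioned above exposes data through a POSIX mount point. As a minimal sketch of how a space is mounted with the standard oneclient tool (the Onezone host and the access token below are placeholders, not the actual pilot endpoints):

\begin{verbatim}
# Mount a Onedata space via a Onezone endpoint (placeholder values):
oneclient -H onezone.example.org -t $ONEDATA_ACCESS_TOKEN ~/onedata

# Files under ~/onedata are then accessible as ordinary POSIX files,
# independently of which Oneprovider actually stores them.
\end{verbatim}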
\subsection{Interoperability pilots: Transparent Networking}
[...]
interoperability pilots between generic, community-agnostic infrastructures,
especially Tier-1 (National HPC/HTC centres), and Tier-2 (HPC/HTC regional centres).
Its main objective is the automation of frequent, community-agnostic data flows
(many large files) and code exchange between HPC (national, regional) and HTC
(national, grid) infrastructures.
During 2018, technical groups were set up:
\begin{itemize}
\item one for building a network of peer-to-peer federations between iRODS zones
(data storage service): between Tier-1 \& Tier-2, between Tier-2 sites, and between
Tier-2 and the grid, as sketched in the configuration example after this list
[...]
machines, using containers, packages for configuration management, and notebooks
\end{itemize}
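As an illustration of the first group's work, the following is a minimal sketch of the federation stanza an iRODS 4.x zone declares in its server configuration in order to peer with a remote zone; the host name, zone name, and keys are placeholders:

\begin{verbatim}
"federation": [
    {
        "catalog_provider_hosts": ["icat.tier2.example.org"],
        "zone_name": "remoteTier2Zone",
        "zone_key": "REMOTE_ZONE_KEY",
        "negotiation_key": "32_byte_server_negotiation_key__"
    }
]
\end{verbatim}

Each side adds such an entry to its /etc/irods/server_config.json and registers the peer zone with iadmin mkzone, after which users can address data in the remote zone by zone name.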
Figure~\ref{fig:1} shows the current status of the project and the sites involved.
\begin{figure}
\centering
[...]
\label{fig:1}
\end{figure}
\subsection{Interoperability pilots: Grid-Cloud interoperability demonstrator
for HEP community}
Dynamic On Demand Analysis Service (DODAS) is a Platform as a Service tool built
by combining several solutions and products developed by the INDIGO-DataCloud H2020
project. It has been extensively tested on a dedicated interoperability testbed
under the umbrella of EOSCpilot during the first year of the project.
Although originally designed for the Compact Muon Solenoid (CMS) experiment at the
LHC, DODAS has been quickly adopted by the Alpha Magnetic Spectrometer (AMS)
astroparticle physics experiment mounted on the ISS as a solution to exploit
opportunistic computing, nowadays an extremely important topic for research domains
where computing needs constantly increase. Given its flexibility and efficiency,
DODAS was selected as one of the Thematic Services that will provide
multi-disciplinary solutions in the EOSC-hub project, an integration and management
system for the European Open Science Cloud that started in January 2018.
During the integration pilot, the usage of any cloud (both public and private)
to seamlessly integrate with the existing Grid computing model of CMS was
demonstrated, as sketched below.
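As an illustration of how such a deployment is driven, the following is a minimal sketch of requesting a cluster deployment from the INDIGO PaaS Orchestrator REST interface; the endpoint URL, the TOSCA template file, and the token handling are hypothetical placeholders:

\begin{verbatim}
# Minimal sketch: request a deployment from an INDIGO PaaS
# Orchestrator instance (placeholder endpoint, template and token).
import requests

ORCHESTRATOR_URL = "https://orchestrator.example.org"  # hypothetical
IAM_TOKEN = "..."  # OAuth2 access token obtained from the IAM service

with open("dodas_cluster.yaml") as f:  # hypothetical TOSCA template
    template = f.read()

resp = requests.post(
    ORCHESTRATOR_URL + "/deployments",
    json={"template": template, "parameters": {}},
    headers={"Authorization": "Bearer " + IAM_TOKEN},
)
resp.raise_for_status()
print("deployment id:", resp.json().get("uuid"))
\end{verbatim}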
Overall, the integration has been successful and much experience has been gained,
resulting in an improved understanding of weaknesses and of aspects to improve
and optimise. These include:
\begin{itemize}
\item Federation: federated access to the underlying IaaS is key. So far we have
experienced several issues; frequently the IaaS provider was already using its own
OpenID Connect Authorization Server and was thus unable to federate additional
services. We adopted the ESACO solution to solve this problem (see the
token-introspection sketch after this list). It would be crucial to have it as an
EOSC-provided service.
\begin{itemize}
\item Such a service for non-proprietary IaaSes would be extremely important in the
EOSC landscape. A scenario where, for example, a commercial cloud is used would
benefit from such functionality for counting the overall HEPSpec.
\end{itemize}
\item Transparent Data Access: so far the only scalable solution we can use is
XrootD (see the access sketch after this list). However, this might not fit all
possible use cases; a more generic solution would be a big plus.
\item Resource monitoring: we did not find a common solution for monitoring
cloud resources. Although we implemented our own, we are convinced that a
common strategy would be extremely valuable.
\item PaaS Orchestration: although the current INDIGO PaaS Orchestrator has
been fully integrated and shows enormous advantages when dealing with multiple
IaaSes, there is room for improvement both in the interface and in the
management of IaaS ranking.
\end{itemize}
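To make the Federation point concrete, the following is a minimal sketch of the standard OAuth 2.0 token-introspection call (RFC 7662) on which an ESACO-like service relies to validate tokens issued by an external Authorization Server; the endpoint URL and client credentials are hypothetical:

\begin{verbatim}
# Minimal sketch of OAuth 2.0 token introspection (RFC 7662);
# the endpoint URL and client credentials are hypothetical.
import requests

INTROSPECT_URL = "https://iam.example.org/introspect"  # hypothetical

def is_token_active(token):
    resp = requests.post(
        INTROSPECT_URL,
        data={"token": token},
        auth=("my-client-id", "my-client-secret"),  # hypothetical client
    )
    resp.raise_for_status()
    return resp.json().get("active", False)
\end{verbatim}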
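Similarly, for the Transparent Data Access point, a minimal sketch of remote access through XrootD; the host and file path are placeholders:

\begin{verbatim}
# Copy a file from a remote XrootD endpoint (placeholder host/path):
xrdcp root://xrootd.example.org//store/user/demo/file.root /tmp/file.root

# Jobs can also stream the data directly over the root:// protocol,
# avoiding a full local copy.
\end{verbatim}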
\subsection{Interoperability pilots: AAI}
TOCHANGE
The software development lifecycle (SDL) process (Figure~\ref{fig:1}) in INDIGO
has been supported by a continuous software improvement process covering software
quality assurance and software maintenance, [...]