2-9 July 2014
Valencia, Spain
Europe/Madrid timezone

PanDA: A New Paradigm for Distributed Computing in HEP Through the Lens of ATLAS and other Experiments

4 Jul 2014, 17:15
15m
Sala 6+7

Oral presentation: Computing and Data Handling

Speaker

Kaushik De (Univ. of Texas at Arlington)

Description

Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide, thousands of physicists analyzing the data need remote access to hundreds of computing sites, the volume of processed data is beyond the exabyte scale, and data processing requires more than a billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of computing in HEP was discarded in favor of a far more flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at a million computing jobs per day, and processing over an exabyte of data in 2013. We will describe the design and implementation of PanDA, present data on the performance of PanDA at the LHC, and discuss plans for future evolution of the system to meet new challenges of scale, heterogeneity and increasing user base.
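
The abstract describes replacing the classic batch job paradigm with a more flexible, scalable workload management model. As a rough illustration of the general idea behind such pull-based, late-binding systems, the sketch below shows lightweight "pilots" fetching payloads from a central task queue only once a resource is actually available. This is a generic, simplified illustration, not PanDA's actual architecture or API; names such as central_task_queue and pilot are purely hypothetical.

```python
# Minimal sketch of a pull-based, late-binding workload model.
# All names are illustrative; this is not the PanDA implementation.

import queue
import threading
import time


def central_task_queue(jobs):
    """Central service holding job definitions until a pilot asks for work."""
    q = queue.Queue()
    for job in jobs:
        q.put(job)
    return q


def pilot(site_name, task_queue, results):
    """A pilot lands on a worker node and then pulls real payloads.

    Late binding: a job is matched to a concrete resource only at this
    moment, not at submission time as in the classic batch paradigm.
    """
    while True:
        try:
            job = task_queue.get_nowait()
        except queue.Empty:
            return  # no more work; pilot exits cleanly
        time.sleep(0.01)  # stand-in for running the payload
        results.append((site_name, job))
        task_queue.task_done()


if __name__ == "__main__":
    jobs = [f"job-{i}" for i in range(20)]
    tasks = central_task_queue(jobs)
    results = []
    # Pilots from heterogeneous sites all pull from the same central queue.
    pilots = [
        threading.Thread(target=pilot, args=(site, tasks, results))
        for site in ("site-A", "site-B", "site-C")
    ]
    for p in pilots:
        p.start()
    for p in pilots:
        p.join()
    print(f"{len(results)} jobs completed by {len(pilots)} pilots")
```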

Primary author

Kaushik De (Univ. of Texas at Arlington)

Co-authors

Alexei Klimentov (Brookhaven National Laboratory), Paul Nilsson (Univ. of Texas at Arlington), Tadashi Maeno (Brookhaven National Laboratory), Torre Wenaus (Brookhaven National Laboratory)

Presentation Materials
