Conveners
Computing and Data Handling
- Gang CHEN (Institute of High Energy Physics, CAS)
- Ian Fisk
Computing and Data Handling
- Borut Paul Kersevan (Faculty of Mathematics and Physics)
- Gang CHEN (Institute of High Energy Physics, CAS)
Description
Allocated times include time for questions, as follows: 15 min (13 + 2), 20 min (17 + 3), 30 min (25 + 5).
Dr. Tim Head (CERN)
03/07/2014, 15:00
Computing and Data Handling
Oral presentation
The LHCb experiment is a spectrometer dedicated to the study of heavy flavour at the LHC. The current LHCb trigger system consists of a hardware level, which reduces the LHC inelastic collision rate of 13 MHz to 1 MHz, at which the entire detector is read out. In a second level, implemented in a farm of 20k parallel-processing CPUs, the event rate is reduced to about 5 kHz. We review the...
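As an illustrative aside (not part of the presentation), the rate figures quoted above imply the following reduction factors; a minimal sketch in Python:

# Illustrative arithmetic for the trigger rates quoted in the abstract:
# 13 MHz inelastic collisions -> 1 MHz hardware-level output -> ~5 kHz software-level output.
stages = [
    ("inelastic collisions", 13e6),   # Hz, input to the hardware level
    ("hardware-level output", 1e6),   # Hz, rate at which the full detector is read out
    ("software-level output", 5e3),   # Hz, rate written out by the CPU farm
]
for (name_in, rate_in), (name_out, rate_out) in zip(stages, stages[1:]):
    print(f"{name_in} -> {name_out}: rejection factor {rate_in / rate_out:.0f}")
print(f"overall reduction: {stages[0][1] / stages[-1][1]:.0f}x")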
Dr. Silvio Pardi (INFN)
03/07/2014, 15:30
Computing and Data Handling
Oral presentation
The existence of a large matter-antimatter asymmetry (CP violation) in the b-quark system, as predicted by the Kobayashi-Maskawa theory, was established by the B-factory experiments Belle and BaBar. However, this cannot explain the magnitude of the matter-antimatter asymmetry of the universe we live in today, which indicates that undiscovered new physics exists. The Belle II experiment, the next...
Prof. Dario Barberis (Università e INFN Genova)
03/07/2014, 16:00
Computing and Data Handling
Oral presentation
ATLAS Computing challenges before the next LHC run
On behalf of the ATLAS Collaboration
ATLAS software and computing are in a period of intensive evolution. The current long shutdown presents an opportunity to assimilate lessons from the very successful Run 1 (2009-2013) and to prepare for the substantially increased computing requirements for Run 2 (from spring 2015). Run 2 will bring a...
Dr. Maria Girone (CERN)
03/07/2014, 16:30
Computing and Data Handling
Oral presentation
The CMS Computing system was successfully commissioned and operated in the first run of the LHC. Beginning in 2015, CMS will collect, process, simulate and analyze 1 kHz of higher-complexity, higher-energy events. In order to meet this increased computing challenge within the expected resource budget, we have had to evolve the computing model and the techniques used. In this presentation we...
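For orientation only, a back-of-envelope sketch of what a 1 kHz recording rate implies in data volume; the event size and yearly live time are assumptions introduced here, not numbers from the presentation:

# Rough data-volume estimate implied by a 1 kHz event rate.
rate_hz = 1_000        # events per second (from the abstract)
event_size_mb = 1.0    # assumed average RAW event size in MB (assumption)
live_seconds = 5e6     # assumed seconds of data taking per year (assumption)
raw_pb_per_year = rate_hz * event_size_mb * live_seconds / 1e9   # MB -> PB
print(f"~{raw_pb_per_year:.0f} PB of RAW data per year under these assumptions")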
Dr. Miao He (Institute of High Energy Physics, Beijing)
03/07/2014, 17:00
Computing and Data Handling
Oral presentation
The Daya Bay Reactor Antineutrino Experiment reported the first observation of the non-zero neutrino mixing angle θ13 using the first 55 days of data. It has also provided the most precise measurement of θ13 with the data set extended to 217 days. Daya Bay will keep running for another 3 years or so. About 100 TB of raw data are produced per year, as well as several copies of reconstruction data...
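A rough storage projection from the figures quoted above; the number of reconstruction copies and their size relative to the raw data are assumptions made here for illustration:

# ~100 TB/year of raw data, plus several copies of reconstruction output, over ~3 more years.
raw_tb_per_year = 100      # from the abstract
years_remaining = 3        # "another 3 years or so"
reco_copies = 2            # assumed number of reconstruction copies kept (assumption)
reco_fraction = 0.5        # assumed reconstruction size relative to raw (assumption)
per_year_tb = raw_tb_per_year * (1 + reco_copies * reco_fraction)
print(f"~{per_year_tb:.0f} TB/year, ~{per_year_tb * years_remaining:.0f} TB over {years_remaining} more years")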
Mr. Josef Novy (Czech Technical University in Prague (Czech Rep.))
04/07/2014, 15:00
Computing and Data Handling
Oral presentation
This paper presents the development and recent status of the new data acquisition system of the COMPASS experiment at CERN, with a trigger rate of up to 50 kHz and an average event size of 36 kB during a 10-second period with beam, followed by an approximately 40-second period without beam. In the original DAQ, event building is performed by software deployed on a switched computer network; moreover, the data...
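The throughput implied by the figures in the abstract (50 kHz trigger rate, 36 kB average event size, 10 s on-spill and roughly 40 s off-spill) can be worked out directly:

# Peak and sustained data rates from the quoted DAQ parameters.
trigger_rate_hz = 50_000
event_size_kb = 36
spill_s, interspill_s = 10, 40
peak_gb_s = trigger_rate_hz * event_size_kb / 1e6      # kB/s -> GB/s during the spill
duty_cycle = spill_s / (spill_s + interspill_s)
print(f"on-spill rate      : {peak_gb_s:.1f} GB/s")
print(f"duty cycle         : {duty_cycle:.0%}")
print(f"sustained average  : {peak_gb_s * duty_cycle:.2f} GB/s")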
Dr. Jose Guillermo Panduro Vazquez (Royal Holloway, University of London)
04/07/2014, 15:15
Computing and Data Handling
Oral presentation
The experience gained during the first period of very successful data taking of the ATLAS experiment (Run I) has inspired a number of ideas for improvement of the data acquisition (DAQ) system that are being put in place during the so-called Long Shutdown 1 of the Large Hadron Collider (LHC), in 2013/14. We have updated the data-flow architecture, rewritten an important fraction of the...
Dr. Federico De Guio (CERN)
04/07/2014, 15:30
Computing and Data Handling
Oral presentation
We review the online and offline workflows designed to align and calibrate the CMS detector. Starting from the experience gained during the first LHC run, we discuss the expected developments for Run II. In particular, we describe the different stages envisioned, from the alignment using cosmic-ray data to the detector alignment and calibration using the first proton-proton collision data (...
Dr. Sue Cheatham (Technion)
04/07/2014, 15:45
Computing and Data Handling
Oral presentation
During the 2011 data-taking run, the Large Hadron Collider (LHC) collided proton beams at a centre-of-mass energy of 7 TeV, as well as heavy ions at a centre-of-mass energy of 2.76 TeV. The ATLAS Trigger is designed to reduce the rate of events from the nominal maximum bunch-crossing rate of 20 MHz to approximately 400 Hz, which is then written to disk for offline analysis. The online...
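For context, the rejection factor implied by the quoted rates, plus an output-bandwidth estimate that relies on an assumed event size (an assumption introduced here):

# Rejection factor and output bandwidth from the abstract's numbers.
input_rate_hz = 20e6       # nominal maximum bunch-crossing rate (from the abstract)
output_rate_hz = 400       # rate written to disk (from the abstract)
event_size_mb = 1.5        # assumed average event size in MB (assumption)
print(f"overall rejection factor: {input_rate_hz / output_rate_hz:.0f}")
print(f"output bandwidth        : {output_rate_hz * event_size_mb:.0f} MB/s")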
Dr. Nancy Marinelli (University of Notre Dame, US)
04/07/2014, 16:00
Computing and Data Handling
Oral presentation
The LHC Run II will confront us with new challenges, mainly due to the higher number of interactions per bunch crossing (pileup) and the reduced time between bunches. Moreover, the higher energy shifts the interest towards complex physics objects such as boosted topologies for jet studies.
In order to be ready for the beginning of the run, in view of an early discovery, the CMS...
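A minimal sketch of how the mean pileup scales with instantaneous luminosity; the Run 2 machine parameters below are illustrative assumptions, not values from the presentation:

# <mu> = sigma_inel * L_inst / (n_bunches * f_rev)
sigma_inel_mb = 69.0      # assumed inelastic pp cross section at 13 TeV, in mb (assumption)
lumi_cm2_s = 1.0e34       # assumed instantaneous luminosity, cm^-2 s^-1 (assumption)
n_bunches = 2244          # assumed number of colliding bunch pairs (assumption)
f_rev_hz = 11245.0        # LHC revolution frequency, Hz
sigma_inel_cm2 = sigma_inel_mb * 1e-27     # 1 mb = 1e-27 cm^2
mu = sigma_inel_cm2 * lumi_cm2_s / (n_bunches * f_rev_hz)
print(f"mean pileup <mu> ~ {mu:.0f} interactions per bunch crossing")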
Alvaro Fernandez (IFIC)
04/07/2014, 16:15
Computing and Data Handling
Oral presentation
The Event Index project consists of the development and deployment of a complete catalogue of events for experiments with large amounts of data, such as the ATLAS experiment at the LHC accelerator at CERN. Data to be stored in the EventIndex are produced by all production jobs that run at CERN or on the Grid; for every permanent output file a snippet of information, containing the file unique...
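A hypothetical sketch of the kind of per-event record such a catalogue might hold; the field names below are illustrative and are not the actual EventIndex schema:

from dataclasses import dataclass

@dataclass(frozen=True)
class EventIndexRecord:
    run_number: int        # run the event belongs to
    event_number: int      # event number within the run
    file_guid: str         # unique identifier of the file containing the event
    trigger_bits: int      # packed trigger decision, enabling trigger-based selections
    processing_tag: str    # processing version that produced the file

index = {}
rec = EventIndexRecord(215456, 123456789, "A1B2C3D4-0000-0000-0000-000000000000", 0b1011, "r5678")
index[(rec.run_number, rec.event_number)] = rec
# Lookup: which file holds a given (run, event) pair?
print(index[(215456, 123456789)].file_guid)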
Dr. Giovanni Franzoni (CERN)
04/07/2014, 16:30
Computing and Data Handling
Oral presentation
Data recorded at the CMS experiment are funnelled into streams, integrated in the HLT menu, and further organised in a hierarchical structure of primary datasets, secondary datasets, and dedicated skims. Datasets are defined according to the final-state particles reconstructed by the high level trigger, the data format and the use case (physics analysis, alignment and calibration, performance...
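A toy illustration (all names hypothetical) of the hierarchy described above, in which HLT streams are split into primary datasets that in turn feed dedicated skims:

# Toy model of streams -> primary datasets -> skims; not actual CMS dataset names.
hierarchy = {
    "StreamPhysics": {                  # stream defined in the HLT menu
        "SingleMuon": ["ZMuSkim"],      # primary dataset and its skims
        "DoubleElectron": ["ZElSkim"],
    },
    "StreamCalibration": {              # stream feeding alignment and calibration
        "AlCaPhiSym": [],
    },
}
for stream, datasets in hierarchy.items():
    for primary, skims in datasets.items():
        print(f"{stream} -> {primary} -> skims: {skims or 'none'}")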
Dr. Cedric Serfon
04/07/2014, 16:45
Computing and Data Handling
Oral presentation
Rucio, the next-generation Data Management system in ATLAS
On behalf of the ATLAS Collaboration
Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address the scaling requirements of HEP experiments. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale...
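A toy model of the scope:name data identifiers and replica bookkeeping that a system like Rucio manages; this is a conceptual sketch, not the Rucio client API, and the dataset and site names are hypothetical:

# Map each data identifier (DID, written scope:name) to the sites holding a replica.
replicas = {}

def add_replica(scope: str, name: str, site: str) -> None:
    """Record that `site` holds a replica of the dataset scope:name."""
    replicas.setdefault(f"{scope}:{name}", set()).add(site)

add_replica("data14_hep", "physics_Main.merge.AOD.r1234", "SITE_A_DATADISK")
add_replica("data14_hep", "physics_Main.merge.AOD.r1234", "SITE_B_DATADISK")
print(replicas["data14_hep:physics_Main.merge.AOD.r1234"])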
Prof. Doris Yangsoo Kim (Soongsil University)
04/07/2014, 17:00
Computing and Data Handling
Oral presentation
The rich physics of heavy-quark decays provides creative and precise ways to look into nature. Experimentally, the B factories have produced prominent discoveries and new insights: CP violation in B meson decays, neutral charm meson oscillations, the discovery of new particles such as the X(3872), and various other significant physics results. Based on these successes, a next generation B...
Kaushik De (Univ. of Texas at Arlington)
04/07/2014, 17:15
Computing and Data Handling
Oral presentation
Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide, thousands of physicists analyzing the data need remote access to hundreds of computing sites, the volume of processed data is beyond the exabyte scale, and data processing requires more than a billion hours of computing usage per year. The PanDA (Production...
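To put "more than a billion hours of computing usage per year" in perspective, simple arithmetic on the quoted figure:

# Equivalent number of continuously running cores for 1e9 CPU-hours per year.
cpu_hours_per_year = 1e9
hours_per_year = 365.25 * 24
print(f"~{cpu_hours_per_year / hours_per_year:,.0f} cores running around the clock")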