Past, present and future of data acquisition systems in high energy physics experiments

Bibliographic Details
Main Authors: Toledo, Jose, Mora-Francisco, J, Müller, Hans
Language: English
Published: 2003
Subjects:
Online Access: https://dx.doi.org/10.1016/S0141-9331(03)00065-6
http://cds.cern.ch/record/725911
Description
Summary: Data Acquisition (DAQ) systems for large high-energy physics (HEP) experiments in the eighties were designed to handle data rates of megabytes per second. The next generation of HEP experiments at CERN (European Laboratory for High Energy Physics) is being designed around the new Large Hadron Collider (LHC) project and will have to cope with gigabyte-per-second data flows. As a consequence, LHC experiments will require challenging new equipment for detector readout, event filtering, event building and storage. The Fastbus and VME-based tree architectures of the eighties run out of steam when applied to LHC's requirements. New concepts and architectures from the nineties have replaced rack-mounted backplane buses with high-speed point-to-point links, abandoned centralized event building, and instead use switched networks and parallel architectures. Following these trends, and in the context of DAQ and trigger systems for LHC experiments, this paper summarizes the earlier architectures and presents the new concepts for DAQ.