Optimization and performance analysis of the Common Readout Unit for the ALICE experiment at CERN
Main author:
Language: eng
Published: 2019
Subjects:
Online access: http://cds.cern.ch/record/2703141
Summary: The ALICE experiment at the CERN Large Hadron Collider is devoted to research in heavy-ion physics, where the goal is to study the formation of the Quark-Gluon Plasma (QGP), a de-confined state of matter consisting of quarks and gluons. To extend the physics reach and to understand the QGP in greater detail, ALICE is upgrading its detectors for data taking in 2021, when the Pb-Pb beam luminosity will increase sixfold to $6\times10^{27}\,\mathrm{cm^{-2}\,s^{-1}}$ at a center-of-mass energy of 5.5 TeV. The increased interaction rates and the requirement of acquiring all event information will result in an unprecedented dataflow of $\sim$3 TB/s from the detectors to the readout system. One of the major goals of the thesis is to design an efficient readout system that copes with this upsurge in data volume by acquiring data at a high rate and recovering from data errors caused by multi-bit upsets in radiation environments. A new FPGA-based Common Readout Unit (CRU) has been designed, which acts as a bridge between the different interfaces and hosts detector-specific processing logic and firmware. The CRU receives data from the detector front-end electronics (FEE) boards located in the harsh radiation zone, performs online data processing, and transfers the data to the back-end servers and storage located in non-radiation areas. As part of the thesis work, optimization and performance analysis of the CRU in the context of the ALICE experiment have been performed. The design aspects, principal tasks and complexities of the CRU are discussed in detail. The prototype development of the CRU hardware is illustrated and detailed qualification tests are executed. The thesis presents performance analysis, evaluation of signal integrity and characterisation tests on the high-speed interfaces.
The measurements of resource utilisation, power consumption, critical-path latencies, eye diagrams and bit error rate (BER) constitute the figures of merit for efficient system performance. Emphasis is placed on the implementation and testing of the error-resilient 4.8 Gbps GBT links, and a Qsys model for system integration is also proposed. The signal quality of the GBT core is characterised at the targeted BER of the order of 1 bit in $10^{12}$ bits. The total jitter is in the range of a few picoseconds. The margin of receiver sensitivity is found to be 2.1 dBm for the two encoding schemes of GBT. An approach to handle the requisites for testing, performance monitoring and parameter tuning of optical interconnects in FPGA-based systems is presented. A strategy is designed and developed for a latency-optimized implementation of the link that aligns the phase of the clocks. CRUs are associated with high rates of data transmission; hence, an optimization methodology for multi-gigabit transceivers is designed and tested to address the challenge of high-frequency losses during data transfer. It is implemented on the state-of-the-art 20 nm Arria 10 FPGA manufactured by Intel. The setup has been validated for the three available high-speed data transmission protocols, namely GBT, Timing-Trigger and Control over Passive Optical Networks (TTC-PON) and the 10 Gbps link. The improvement in signal integrity is gauged by two metrics, the BER and the eye diagram, and it is observed that the technique improves the signal integrity and reduces the BER. The research and development summarized in the thesis is of high relevance for firmware calibration and hardware alignment purposes. The work could be further extended to design a load-prediction model for an efficient data-distribution scheme and to architect a dynamic switching topology to handle sudden rises in data volume.
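The upgraded luminosity quoted in the summary implies the event rate the readout must sustain, via the standard relation $R = L\,\sigma$. A minimal back-of-envelope sketch, assuming an inelastic Pb-Pb cross section of roughly 8 barns at 5.5 TeV (the cross-section value is an assumption for illustration, not a figure taken from the record):

```python
# Estimate the Pb-Pb interaction rate from the quoted peak luminosity.
# ASSUMPTION: sigma_inel(Pb-Pb) ~ 8 b at sqrt(s_NN) = 5.5 TeV; this value
# is illustrative and not stated in the record itself.

LUMINOSITY = 6e27    # cm^-2 s^-1, upgraded peak Pb-Pb luminosity (from the summary)
SIGMA_INEL = 8e-24   # cm^2 (8 barns; 1 b = 1e-24 cm^2) -- assumed

def interaction_rate(luminosity_cm2s: float, sigma_cm2: float) -> float:
    """Collision rate R = L * sigma, in Hz."""
    return luminosity_cm2s * sigma_cm2

rate_hz = interaction_rate(LUMINOSITY, SIGMA_INEL)
print(f"Pb-Pb interaction rate ~ {rate_hz / 1e3:.0f} kHz")
```

Under that assumed cross section the rate comes out near 50 kHz, which gives a sense of the event rate behind the $\sim$3 TB/s dataflow the CRU must absorb.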