
Optimization of RF manipulations in the PS using Reinforcement Learning


Bibliographic details

Main author: Wulff, Joel Axel
Language: English
Published: 2021
Subjects:
Online access: http://cds.cern.ch/record/2780643

Description
Summary: The longitudinal structure of the beam is defined by a sequence of RF manipulations in the PS to achieve a bunch spacing of 25 ns and a target longitudinal emittance of εl = 0.35 eVs for LHC-type beams. To produce bunches with identical intensity and emittance, the RF parameters (voltage, phase, timings) of each manipulation must be controlled precisely. Deep Reinforcement Learning (DRL) offers a way to optimize these RF settings. Agents were trained to detect and correct simulated phase offsets in common RF manipulations such as double, triple, or quadruple splittings. Two DRL algorithms were investigated, Twin-Delayed DDPG (TD3) and Soft Actor-Critic (SAC), with SAC showing the most promise. In general, the agents converged to good performance in their respective simulated environments after training, reaching phase prediction errors of less than 5 degrees within approximately 3 iterations. An agent trained in the quadruple-splitting environment was also tested with beam in the PS and improved the phase settings systematically, with most of the improvement occurring within the first 2-3 iterations.
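The iterative correction scheme described in the summary can be sketched as a simple loop: each iteration, the agent observes a measurement affected by the residual phase offset, predicts a correction, and the correction is applied before the next observation. The sketch below is a toy illustration of that loop only, not the thesis's actual environment or a trained TD3/SAC policy; the `predict` callable and the 80%-accuracy toy agent are hypothetical stand-ins.

```python
def run_correction_loop(true_offset_deg, predict, n_iter=3):
    """Iteratively apply an agent's predicted phase corrections.

    Hypothetical loop: each iteration the agent observes the residual
    phase offset (standing in for a measured bunch profile) and proposes
    a correction, which is subtracted from the residual.
    Returns the absolute phase error after n_iter iterations.
    """
    residual = true_offset_deg
    for _ in range(n_iter):
        observation = residual        # stand-in for a beam measurement
        correction = predict(observation)
        residual -= correction        # apply the RF phase correction
    return abs(residual)

# Toy "agent" that corrects 80% of the observed offset each step;
# a trained SAC policy would play this role in the real setup.
toy_agent = lambda obs: 0.8 * obs

final_error = run_correction_loop(20.0, toy_agent)
# residual shrinks geometrically: 20 -> 4 -> 0.8 -> 0.16 degrees
```

With this toy agent the error falls well below 5 degrees within 3 iterations, mirroring the convergence behaviour reported in the summary.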