Optimization of RF manipulations in the PS using Reinforcement Learning

The longitudinal structure of the beam is defined by a sequence of RF manipulations in the PS to achieve a bunch spacing of 25 ns and a target longitudinal emittance of ε_l = 0.35 eVs for LHC-type beams. To produce bunches with identical intensity and emittance, the RF parameters (voltage, phase, timings) of each manipulation must be controlled. Deep Reinforcement Learning (DRL) offers a way to optimize the RF settings. Agents were trained to detect and correct simulated phase offsets in common RF manipulations such as double, triple, and quadruple splittings. Two DRL algorithms were investigated, Twin-Delayed DDPG (TD3) and Soft Actor-Critic (SAC), with SAC showing the most promise. In general, the agents converged to good performance in their respective simulated environments, reaching phase prediction errors below 5 degrees within approximately 3 iterations. An agent trained in the quadruple-splitting environment was also tested with beam in the PS and improved the phase settings systematically, with most of the improvement occurring within the first 2-3 iterations.
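
To make the approach concrete, below is a minimal sketch of such a training setup in Python, assuming the stable-baselines3 implementation of SAC and a gymnasium-style environment. The PhaseCorrectionEnv class, its toy bunch-profile observation, and its negative-error reward are hypothetical stand-ins for illustration only; the note's actual PS splitting simulation is not reproduced here.

    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces
    from stable_baselines3 import SAC

    class PhaseCorrectionEnv(gym.Env):
        """Toy environment: the agent corrects a hidden RF phase offset."""

        def __init__(self, max_offset_deg=30.0, tol_deg=5.0, max_steps=10):
            super().__init__()
            self.max_offset_deg = max_offset_deg
            self.tol_deg = tol_deg
            self.max_steps = max_steps
            # Action: a phase correction in degrees.
            self.action_space = spaces.Box(
                -max_offset_deg, max_offset_deg, shape=(1,), dtype=np.float32)
            # Observation: a simulated bunch profile of 64 samples.
            self.observation_space = spaces.Box(
                0.0, 1.0, shape=(64,), dtype=np.float32)

        def _profile(self):
            # Stand-in for the simulated bunch profile: the peak shifts
            # with the residual offset, so the observation is informative.
            x = np.linspace(-np.pi, np.pi, 64)
            p = np.exp(-0.5 * ((x - np.deg2rad(self.offset)) / 0.8) ** 2)
            return (p / p.max()).astype(np.float32)

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.offset = self.np_random.uniform(
                -self.max_offset_deg, self.max_offset_deg)
            self.steps = 0
            return self._profile(), {}

        def step(self, action):
            self.offset += float(action[0])  # apply the phase correction
            self.steps += 1
            err = abs(self.offset)
            reward = -err  # fewer degrees of residual offset = better
            terminated = err < self.tol_deg
            truncated = self.steps >= self.max_steps
            return self._profile(), reward, terminated, truncated, {}

    env = PhaseCorrectionEnv()
    model = SAC("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=50_000)

In this sketch the reward is simply the negative residual phase error, so a converged agent mirrors the reported behaviour: driving the error below the 5-degree tolerance within a few correction iterations per episode.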

Bibliographic Details
Main author: Wulff, Joel Axel
Language: eng
Published: 2021
Subjects: Computing and Computers; Accelerators and Storage Rings
Report number: CERN-STUDENTS-Note-2021-136
Record ID: cern-2780643
OAI identifier: oai:cds.cern.ch:2780643
Institution: European Organization for Nuclear Research (CERN)
Online access: http://cds.cern.ch/record/2780643