Trainability barriers and opportunities in quantum generative modeling

Bibliographic Details
Main Authors: Rudolph, Manuel S., Lerch, Sacha, Thanasilp, Supanut, Kiss, Oriel, Vallecorsa, Sofia, Grossi, Michele, Holmes, Zoë
Language: English
Published: 2023
Subjects: stat.ML; Mathematical Physics and Mathematics; hep-ex; Particle Physics - Experiment; cs.LG; Computing and Computers; quant-ph; General Theoretical Physics
Online Access: http://cds.cern.ch/record/2866746

author Rudolph, Manuel S.
Lerch, Sacha
Thanasilp, Supanut
Kiss, Oriel
Vallecorsa, Sofia
Grossi, Michele
Holmes, Zoë
collection CERN
description Quantum generative models, in providing inherently efficient sampling strategies, show promise for achieving a near-term advantage on quantum hardware. Nonetheless, important questions remain regarding their scalability. In this work, we investigate the barriers to the trainability of quantum generative models posed by barren plateaus and exponential loss concentration. We explore the interplay between explicit and implicit models and losses, and show that using implicit generative models (such as quantum circuit-based models) with explicit losses (such as the KL divergence) leads to a new flavour of barren plateau. In contrast, the Maximum Mean Discrepancy (MMD), which is a popular example of an implicit loss, can be viewed as the expectation value of an observable that is either low-bodied and trainable, or global and untrainable depending on the choice of kernel. However, in parallel, we highlight that the low-bodied losses required for trainability cannot in general distinguish high-order correlations, leading to a fundamental tension between exponential concentration and the emergence of spurious minima. We further propose a new local quantum fidelity-type loss which, by leveraging quantum circuits to estimate the quality of the encoded distribution, is both faithful and enjoys trainability guarantees. Finally, we compare the performance of different loss functions for modelling real-world data from the High-Energy-Physics domain and confirm the trends predicted by our theoretical results. [The explicit and implicit losses named here, the KL divergence and the MMD, are written out in a short editorial note at the end of this record.]
id cern-2866746
institution European Organization for Nuclear Research (CERN)
language eng
publishDate 2023
record_format invenio
report_number arXiv:2305.02881
oai oai:cds.cern.ch:2866746
date 2023-05-04
title Trainability barriers and opportunities in quantum generative modeling
topic stat.ML
Mathematical Physics and Mathematics
hep-ex
Particle Physics - Experiment
cs.LG
Computing and Computers
quant-ph
General Theoretical Physics
url http://cds.cern.ch/record/2866746
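
Note on the loss functions in the description above (an editorial addition; the definitions below are standard and are not quoted from the record). Writing p for the target distribution, q_θ for the model distribution, and k for a kernel:

    D_{\mathrm{KL}}(p \,\|\, q_\theta) = \sum_x p(x) \log \frac{p(x)}{q_\theta(x)}

    \mathrm{MMD}^2(p, q_\theta) = \mathbb{E}_{x,x' \sim q_\theta}[k(x,x')] - 2\,\mathbb{E}_{x \sim q_\theta,\, y \sim p}[k(x,y)] + \mathbb{E}_{y,y' \sim p}[k(y,y')]

The KL divergence is "explicit" in that evaluating it requires the model probabilities q_θ(x), which an implicit (circuit-based) model only exposes through sampling; the MMD can be estimated from samples alone. A minimal sketch of an unbiased sample-based MMD² estimate under a Gaussian kernel follows; the function names and the bandwidth parameter sigma are our own illustrative choices, not code from the paper.

    import numpy as np

    def gaussian_kernel(x, y, sigma=1.0):
        # Pairwise k(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for rows of x and y.
        diff = x[:, None, :] - y[None, :, :]
        return np.exp(-np.sum(diff**2, axis=-1) / (2.0 * sigma**2))

    def mmd_squared(model_samples, data_samples, sigma=1.0):
        # Unbiased estimate of MMD^2 = E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')].
        m, n = len(model_samples), len(data_samples)
        k_xx = gaussian_kernel(model_samples, model_samples, sigma)
        k_yy = gaussian_kernel(data_samples, data_samples, sigma)
        k_xy = gaussian_kernel(model_samples, data_samples, sigma)
        term_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))  # drop self-pairs
        term_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
        return term_xx - 2.0 * k_xy.mean() + term_yy

    # Example with placeholder bitstring samples standing in for Born-machine output.
    rng = np.random.default_rng(0)
    model_samples = rng.integers(0, 2, size=(500, 8)).astype(float)
    data_samples = rng.integers(0, 2, size=(500, 8)).astype(float)
    print(mmd_squared(model_samples, data_samples, sigma=1.0))

Loosely, the kernel choice discussed in the abstract maps onto the bandwidth here: a very small sigma pushes the Gaussian kernel toward a delta function (the global, exponentially concentrating regime), while a larger sigma emphasizes low-order correlations (the low-bodied, trainable regime) at the cost of missing high-order structure.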