
Lorentz Group Equivariant Autoencoders

Bibliographic Details
Main Authors: Hao, Zichun, Kansal, Raghav, Duarte, Javier, Chernyavskaya, Nadezda
Language: eng
Published: 2022
Subjects: cs.LG; Computing and Computers; hep-ex; Particle Physics - Experiment
Online Access: https://dx.doi.org/10.1140/epjc/s10052-023-11633-5
http://cds.cern.ch/record/2847556
_version_ 1780976798803165184
author Hao, Zichun
Kansal, Raghav
Duarte, Javier
Chernyavskaya, Nadezda
author_facet Hao, Zichun
Kansal, Raghav
Duarte, Javier
Chernyavskaya, Nadezda
author_sort Hao, Zichun
collection CERN
description There has been significant work recently in developing machine learning (ML) models in high energy physics (HEP) for tasks such as classification, simulation, and anomaly detection. Often these models are adapted from those designed for datasets in computer vision or natural language processing, which lack inductive biases suited to HEP data, such as equivariance to its inherent symmetries. Such biases have been shown to make models more performant and interpretable, and reduce the amount of training data needed. To that end, we develop the Lorentz group autoencoder (LGAE), an autoencoder model equivariant with respect to the proper, orthochronous Lorentz group $\textrm{SO}^+(3,1)$, with a latent space living in the representations of the group. We present our architecture and several experimental results on jets at the LHC and find it outperforms graph and convolutional neural network baseline models on several compression, reconstruction, and anomaly detection metrics. We also demonstrate the advantage of such an equivariant model in analyzing the latent space of the autoencoder, which can improve the explainability of potential anomalies discovered by such ML models.
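To illustrate the equivariance property described in the abstract, the following is a minimal sketch (not the authors' LGAE implementation) of a toy encoder that maps a jet's particle four-momenta to a single latent four-vector using only Minkowski inner products, so that applying a Lorentz boost before or after the encoder gives the same result. The function names (toy_equivariant_encoder, boost_x), the weight parameter w, and the random toy data are illustrative assumptions.

# Illustrative sketch only: a toy Lorentz-equivariant map from a jet's particle
# 4-momenta to one latent 4-vector, built from Minkowski inner products so that
# f(Lambda p) = Lambda f(p) holds by construction.
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, (+,-,-,-) convention

def minkowski_dot(p, q):
    """Pairwise Minkowski inner products between two sets of 4-vectors."""
    return p @ ETA @ q.T

def toy_equivariant_encoder(p, w):
    """Map N particle 4-momenta of shape (N, 4) to a single latent 4-vector.

    Each particle is weighted by a nonlinearity applied to a Lorentz scalar
    (its Minkowski dot with the total jet 4-momentum), so the weighted sum
    transforms as a 4-vector under SO+(3,1).
    """
    jet = p.sum(axis=0)                          # total jet 4-momentum (a 4-vector)
    invariants = minkowski_dot(p, jet[None, :])  # (N, 1) Lorentz scalars
    weights = np.tanh(w * invariants)            # invariant scalar weights
    return (weights * p).sum(axis=0)             # weighted sum of 4-vectors -> 4-vector

def boost_x(beta):
    """Lorentz boost along the x-axis with velocity beta (in units of c)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

rng = np.random.default_rng(0)
p = np.abs(rng.normal(size=(30, 4)))          # toy "jet constituents" (not real data)
p[:, 0] += np.linalg.norm(p[:, 1:], axis=1)   # ensure physical (non-spacelike) momenta
L = boost_x(0.6)

z_then_boost = L @ toy_equivariant_encoder(p, w=0.1)
boost_then_z = toy_equivariant_encoder(p @ L.T, w=0.1)
print(np.allclose(z_then_boost, boost_then_z))  # True: the encoder commutes with the boost

This commutation check is the kind of property the abstract refers to: because each latent component transforms in a known representation of the group, the latent space can be interpreted physically.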
id cern-2847556
institution European Organization for Nuclear Research
language eng
publishDate 2022
record_format invenio
spelling cern-2847556 2023-06-30T06:26:56Z
doi:10.1140/epjc/s10052-023-11633-5
http://cds.cern.ch/record/2847556
eng
Hao, Zichun; Kansal, Raghav; Duarte, Javier; Chernyavskaya, Nadezda
Lorentz Group Equivariant Autoencoders
cs.LG; Computing and Computers; hep-ex; Particle Physics - Experiment
There has been significant work recently in developing machine learning (ML) models in high energy physics (HEP) for tasks such as classification, simulation, and anomaly detection. Often these models are adapted from those designed for datasets in computer vision or natural language processing, which lack inductive biases suited to HEP data, such as equivariance to its inherent symmetries. Such biases have been shown to make models more performant and interpretable, and reduce the amount of training data needed. To that end, we develop the Lorentz group autoencoder (LGAE), an autoencoder model equivariant with respect to the proper, orthochronous Lorentz group $\textrm{SO}^+(3,1)$, with a latent space living in the representations of the group. We present our architecture and several experimental results on jets at the LHC and find it outperforms graph and convolutional neural network baseline models on several compression, reconstruction, and anomaly detection metrics. We also demonstrate the advantage of such an equivariant model in analyzing the latent space of the autoencoder, which can improve the explainability of potential anomalies discovered by such ML models.
arXiv:2212.07347
FERMILAB-PUB-22-963-V
oai:cds.cern.ch:2847556
2022-12-14
spellingShingle cs.LG
Computing and Computers
hep-ex
Particle Physics - Experiment
Hao, Zichun
Kansal, Raghav
Duarte, Javier
Chernyavskaya, Nadezda
Lorentz Group Equivariant Autoencoders
title Lorentz Group Equivariant Autoencoders
title_full Lorentz Group Equivariant Autoencoders
title_fullStr Lorentz Group Equivariant Autoencoders
title_full_unstemmed Lorentz Group Equivariant Autoencoders
title_short Lorentz Group Equivariant Autoencoders
title_sort lorentz group equivariant autoencoders
topic cs.LG
Computing and Computers
hep-ex
Particle Physics - Experiment
url https://dx.doi.org/10.1140/epjc/s10052-023-11633-5
http://cds.cern.ch/record/2847556
work_keys_str_mv AT haozichun lorentzgroupequivariantautoencoders
AT kansalraghav lorentzgroupequivariantautoencoders
AT duartejavier lorentzgroupequivariantautoencoders
AT chernyavskayanadezda lorentzgroupequivariantautoencoders