
Using a Neural Network to Approximate the Negative Log Likelihood Function


Bibliographic Details
Main author: CMS Collaboration
Language: eng
Published: 2023
Subjects: Detectors and Experimental Techniques
Online access: http://cds.cern.ch/record/2860206
_version_ 1780977745761665024
author CMS Collaboration
author_facet CMS Collaboration
author_sort CMS Collaboration
collection CERN
description An increasingly frequent challenge faced in HEP data analysis is to characterize the agreement between a prediction that depends on a dozen or more model parameters, such as predictions coming from an effective field theory (EFT) framework, and the observed data. Traditionally, such characterizations take the form of a negative log likelihood (NLL) distribution, which can only be evaluated numerically. The lack of a closed-form description of the NLL function makes it difficult to convey results of the statistical analysis. Typical results are limited to extracting "best fit" values of the model parameters and 1-D intervals or 2-D contours extracted from scanning the higher-dimensional parameter space. It is desirable to explore these high-dimensional model parameter spaces in more sophisticated ways. One option for overcoming this challenge is to use a neural network to approximate the NLL function. This approach has the advantage of being continuous and differentiable by construction, which are essential properties for an NLL function and may also provide useful handles in exploring the NLL as a function of the model parameters. This note demonstrates the application of this technique to an analysis involving a search for new physics in the top quark sector within the framework of effective field theory.
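The idea in the abstract can be illustrated with a minimal sketch. This is not the CMS implementation: the quadratic toy NLL, the network size, and all names here are assumptions for illustration. A small fully connected network is fit to NLL values sampled across a two-parameter space (standing in for the numerical scan), and its analytic gradient then provides the differentiable handle on the model parameters that a point-by-point numerical NLL cannot.

```python
import numpy as np

# Illustrative sketch only, not the CMS implementation: fit a small neural
# network surrogate to a toy negative log likelihood (NLL) over two
# hypothetical EFT-style model parameters, then use its analytic gradient.
rng = np.random.default_rng(0)

def toy_nll(c):
    # Stand-in for the numerically evaluated NLL: a quadratic bowl with its
    # minimum at (0.5, -1.0). In a real analysis these values would come
    # from the full statistical fit, evaluated point by point.
    return (c[:, 0] - 0.5) ** 2 + 2.0 * (c[:, 1] + 1.0) ** 2

# Sample training points across the parameter space (the "scan") and
# standardize the targets for stable training.
X = rng.uniform(-3.0, 3.0, size=(512, 2))
y = toy_nll(X)[:, None]
y_mean, y_std = y.mean(), y.std()
t = (y - y_mean) / y_std

# One hidden layer with tanh activation: the surrogate is continuous and
# differentiable by construction, as the note emphasizes.
W1 = rng.normal(0.0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

# Full-batch gradient descent on the mean squared error.
lr = 0.01
_, pred0 = forward(X)
loss0 = np.mean((pred0 - t) ** 2)
for _ in range(5000):
    h, pred = forward(X)
    g = 2.0 * (pred - t) / len(X)            # dLoss/dpred (MSE)
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (1.0 - h ** 2)         # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
loss = np.mean((pred - t) ** 2)

def surrogate_grad(c):
    # Analytic gradient of the surrogate NLL at one parameter point c,
    # mapped back to the original NLL scale: the differentiable handle
    # that a purely numerical NLL scan does not provide.
    h = np.tanh(c @ W1 + b1)
    return y_std * (W1 @ ((1.0 - h ** 2) * W2[:, 0]))
```

Once trained, the surrogate can be minimized or scanned with standard gradient-based tools, rather than re-evaluating the expensive numerical NLL at every point.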
id cern-2860206
institution European Organization for Nuclear Research (CERN)
language eng
publishDate 2023
record_format invenio
spelling cern-2860206 | 2023-05-30T21:36:21Z | http://cds.cern.ch/record/2860206 | eng | CMS Collaboration | Using a Neural Network to Approximate the Negative Log Likelihood Function | Detectors and Experimental Techniques | An increasingly frequent challenge faced in HEP data analysis is to characterize the agreement between a prediction that depends on a dozen or more model parameters, such as predictions coming from an effective field theory (EFT) framework, and the observed data. Traditionally, such characterizations take the form of a negative log likelihood (NLL) distribution, which can only be evaluated numerically. The lack of a closed-form description of the NLL function makes it difficult to convey results of the statistical analysis. Typical results are limited to extracting "best fit" values of the model parameters and 1-D intervals or 2-D contours extracted from scanning the higher-dimensional parameter space. It is desirable to explore these high-dimensional model parameter spaces in more sophisticated ways. One option for overcoming this challenge is to use a neural network to approximate the NLL function. This approach has the advantage of being continuous and differentiable by construction, which are essential properties for an NLL function and may also provide useful handles in exploring the NLL as a function of the model parameters. This note demonstrates the application of this technique to an analysis involving a search for new physics in the top quark sector within the framework of effective field theory. | CMS-DP-2023-027 | CERN-CMS-DP-2023-027 | oai:cds.cern.ch:2860206 | 2023-05-19
spellingShingle Detectors and Experimental Techniques
CMS Collaboration
Using a Neural Network to Approximate the Negative Log Likelihood Function
title Using a Neural Network to Approximate the Negative Log Likelihood Function
title_full Using a Neural Network to Approximate the Negative Log Likelihood Function
title_fullStr Using a Neural Network to Approximate the Negative Log Likelihood Function
title_full_unstemmed Using a Neural Network to Approximate the Negative Log Likelihood Function
title_short Using a Neural Network to Approximate the Negative Log Likelihood Function
title_sort using a neural network to approximate the negative log likelihood function
topic Detectors and Experimental Techniques
url http://cds.cern.ch/record/2860206
work_keys_str_mv AT cmscollaboration usinganeuralnetworktoapproximatethenegativeloglikelihoodfunction