In the context of forensic casework, are there meaningful metrics of the degree of calibration?
Forensic-evaluation systems should output likelihood-ratio values that are well calibrated. If they do not, their output will be misleading. Unless a forensic-evaluation system is intrinsically well-calibrated, it should be calibrated using a parsimonious parametric model that is trained using calibration data. The system should then be tested using validation data. Metrics of degree of calibration that are based on the pool-adjacent-violators (PAV) algorithm recalibrate the likelihood-ratio values calculated from the validation data. The PAV algorithm overfits on the validation data because it is both trained and tested on the validation data, and because it is a non-parametric model with weak constraints. For already-calibrated systems, PAV-based ostensive metrics of degree of calibration do not actually measure degree of calibration; they measure sampling variability between the calibration data and the validation data, and overfitting on the validation data. Monte Carlo simulations are used to demonstrate that this is the case. We therefore argue that, in the context of casework, PAV-based metrics are not meaningful metrics of degree of calibration; however, we also argue that, in the context of casework, a metric of degree of calibration is not required.
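The abstract's argument turns on how the pool-adjacent-violators (PAV) algorithm behaves, so a minimal sketch may help. PAV performs isotonic regression: given binary ground-truth labels ordered by system score, it fits the least-squares non-decreasing sequence by pooling adjacent blocks that violate monotonicity. This is an illustrative stand-alone implementation, not the code used in the paper; the function name and data are hypothetical.

```python
# Illustrative sketch (not the paper's implementation): the pool-adjacent-
# violators (PAV) algorithm computes an isotonic (non-decreasing) regression,
# the weakly-constrained non-parametric fit that PAV-based calibration
# metrics apply to validation scores.

def pav(y):
    """Isotonic regression of sequence y with unit weights.

    Returns the least-squares non-decreasing fit, computed by pooling
    adjacent blocks whenever they violate the ordering constraint.
    """
    blocks = []  # each block is [sum_of_values, count]
    for v in y:
        blocks.append([float(v), 1])
        # Pool while the previous block's mean exceeds the new block's mean.
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    return fitted

# Example: binary labels ordered by increasing system score.
print(pav([0, 1, 0, 1]))  # [0.0, 0.5, 0.5, 1.0]
```

Because the fit is trained and evaluated on the same validation data, and is free to smooth over every local ordering violation, it overfits; this is the behaviour the abstract argues makes PAV-based metrics measure sampling variability and overfitting rather than degree of calibration.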
Main author: | Morrison, Geoffrey Stewart |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Elsevier, 2021 |
Subjects: | Interdisciplinary Forensics |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8212664/ https://www.ncbi.nlm.nih.gov/pubmed/34179740 http://dx.doi.org/10.1016/j.fsisyn.2021.100157 |
_version_ | 1783709681380753408 |
---|---|
author | Morrison, Geoffrey Stewart |
collection | PubMed |
description | Forensic-evaluation systems should output likelihood-ratio values that are well calibrated. If they do not, their output will be misleading. Unless a forensic-evaluation system is intrinsically well-calibrated, it should be calibrated using a parsimonious parametric model that is trained using calibration data. The system should then be tested using validation data. Metrics of degree of calibration that are based on the pool-adjacent-violators (PAV) algorithm recalibrate the likelihood-ratio values calculated from the validation data. The PAV algorithm overfits on the validation data because it is both trained and tested on the validation data, and because it is a non-parametric model with weak constraints. For already-calibrated systems, PAV-based ostensive metrics of degree of calibration do not actually measure degree of calibration; they measure sampling variability between the calibration data and the validation data, and overfitting on the validation data. Monte Carlo simulations are used to demonstrate that this is the case. We therefore argue that, in the context of casework, PAV-based metrics are not meaningful metrics of degree of calibration; however, we also argue that, in the context of casework, a metric of degree of calibration is not required. |
format | Online Article Text |
id | pubmed-8212664 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Elsevier |
record_format | MEDLINE/PubMed |
spelling | pubmed-82126642021-06-25 In the context of forensic casework, are there meaningful metrics of the degree of calibration? Morrison, Geoffrey Stewart Forensic Sci Int Synerg Interdisciplinary Forensics Elsevier 2021-06-12 /pmc/articles/PMC8212664/ /pubmed/34179740 http://dx.doi.org/10.1016/j.fsisyn.2021.100157 Text en © 2021 The Author. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). |
title | In the context of forensic casework, are there meaningful metrics of the degree of calibration? |
topic | Interdisciplinary Forensics |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8212664/ https://www.ncbi.nlm.nih.gov/pubmed/34179740 http://dx.doi.org/10.1016/j.fsisyn.2021.100157 |