
Assessing the performance of real-time epidemic forecasts: A case study of Ebola in the Western Area region of Sierra Leone, 2014-15

Real-time forecasts based on mathematical models can inform critical decision-making during infectious disease outbreaks. Yet, epidemic forecasts are rarely evaluated during or after the event, and there is little guidance on the best metrics for assessment. Here, we propose an evaluation approach t...


Bibliographic Details
Main Authors: Funk, Sebastian, Camacho, Anton, Kucharski, Adam J., Lowe, Rachel, Eggo, Rosalind M., Edmunds, W. John
Format: Online Article Text
Language: English
Published: Public Library of Science 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6386417/
https://www.ncbi.nlm.nih.gov/pubmed/30742608
http://dx.doi.org/10.1371/journal.pcbi.1006785
_version_ 1783397381832704000
author Funk, Sebastian
Camacho, Anton
Kucharski, Adam J.
Lowe, Rachel
Eggo, Rosalind M.
Edmunds, W. John
author_facet Funk, Sebastian
Camacho, Anton
Kucharski, Adam J.
Lowe, Rachel
Eggo, Rosalind M.
Edmunds, W. John
author_sort Funk, Sebastian
collection PubMed
description Real-time forecasts based on mathematical models can inform critical decision-making during infectious disease outbreaks. Yet, epidemic forecasts are rarely evaluated during or after the event, and there is little guidance on the best metrics for assessment. Here, we propose an evaluation approach that disentangles different components of forecasting ability using metrics that separately assess the calibration, sharpness and bias of forecasts. This makes it possible to assess not just how close a forecast was to reality but also how well uncertainty has been quantified. We used this approach to analyse the performance of weekly forecasts we generated in real time for Western Area, Sierra Leone, during the 2013–16 Ebola epidemic in West Africa. We investigated a range of forecast model variants based on the model fits generated at the time with a semi-mechanistic model, and found that good probabilistic calibration was achievable at short time horizons of one or two weeks ahead but model predictions were increasingly unreliable at longer forecasting horizons. This suggests that forecasts may have been of good enough quality to inform decision making based on predictions a few weeks ahead of time but not longer, reflecting the high level of uncertainty in the processes driving the trajectory of the epidemic. Comparing forecasts based on the semi-mechanistic model to simpler null models showed that the best semi-mechanistic model variant performed better than the null models with respect to probabilistic calibration, and that this would have been identified from the earliest stages of the outbreak. As forecasts become a routine part of the toolkit in public health, standards for evaluation of performance will be important for assessing quality and improving credibility of mathematical models, and for elucidating difficulties and trade-offs when aiming to make the most useful and reliable forecasts.
format Online
Article
Text
id pubmed-6386417
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-6386417 2019-03-08 Assessing the performance of real-time epidemic forecasts: A case study of Ebola in the Western Area region of Sierra Leone, 2014-15 Funk, Sebastian Camacho, Anton Kucharski, Adam J. Lowe, Rachel Eggo, Rosalind M. Edmunds, W. John PLoS Comput Biol Research Article Real-time forecasts based on mathematical models can inform critical decision-making during infectious disease outbreaks. Yet, epidemic forecasts are rarely evaluated during or after the event, and there is little guidance on the best metrics for assessment. Here, we propose an evaluation approach that disentangles different components of forecasting ability using metrics that separately assess the calibration, sharpness and bias of forecasts. This makes it possible to assess not just how close a forecast was to reality but also how well uncertainty has been quantified. We used this approach to analyse the performance of weekly forecasts we generated in real time for Western Area, Sierra Leone, during the 2013–16 Ebola epidemic in West Africa. We investigated a range of forecast model variants based on the model fits generated at the time with a semi-mechanistic model, and found that good probabilistic calibration was achievable at short time horizons of one or two weeks ahead but model predictions were increasingly unreliable at longer forecasting horizons. This suggests that forecasts may have been of good enough quality to inform decision making based on predictions a few weeks ahead of time but not longer, reflecting the high level of uncertainty in the processes driving the trajectory of the epidemic. Comparing forecasts based on the semi-mechanistic model to simpler null models showed that the best semi-mechanistic model variant performed better than the null models with respect to probabilistic calibration, and that this would have been identified from the earliest stages of the outbreak. As forecasts become a routine part of the toolkit in public health, standards for evaluation of performance will be important for assessing quality and improving credibility of mathematical models, and for elucidating difficulties and trade-offs when aiming to make the most useful and reliable forecasts. Public Library of Science 2019-02-11 /pmc/articles/PMC6386417/ /pubmed/30742608 http://dx.doi.org/10.1371/journal.pcbi.1006785 Text en © 2019 Funk et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Funk, Sebastian
Camacho, Anton
Kucharski, Adam J.
Lowe, Rachel
Eggo, Rosalind M.
Edmunds, W. John
Assessing the performance of real-time epidemic forecasts: A case study of Ebola in the Western Area region of Sierra Leone, 2014-15
title Assessing the performance of real-time epidemic forecasts: A case study of Ebola in the Western Area region of Sierra Leone, 2014-15
title_full Assessing the performance of real-time epidemic forecasts: A case study of Ebola in the Western Area region of Sierra Leone, 2014-15
title_fullStr Assessing the performance of real-time epidemic forecasts: A case study of Ebola in the Western Area region of Sierra Leone, 2014-15
title_full_unstemmed Assessing the performance of real-time epidemic forecasts: A case study of Ebola in the Western Area region of Sierra Leone, 2014-15
title_short Assessing the performance of real-time epidemic forecasts: A case study of Ebola in the Western Area region of Sierra Leone, 2014-15
title_sort assessing the performance of real-time epidemic forecasts: a case study of ebola in the western area region of sierra leone, 2014-15
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6386417/
https://www.ncbi.nlm.nih.gov/pubmed/30742608
http://dx.doi.org/10.1371/journal.pcbi.1006785
work_keys_str_mv AT funksebastian assessingtheperformanceofrealtimeepidemicforecastsacasestudyofebolainthewesternarearegionofsierraleone201415
AT camachoanton assessingtheperformanceofrealtimeepidemicforecastsacasestudyofebolainthewesternarearegionofsierraleone201415
AT kucharskiadamj assessingtheperformanceofrealtimeepidemicforecastsacasestudyofebolainthewesternarearegionofsierraleone201415
AT lowerachel assessingtheperformanceofrealtimeepidemicforecastsacasestudyofebolainthewesternarearegionofsierraleone201415
AT eggorosalindm assessingtheperformanceofrealtimeepidemicforecastsacasestudyofebolainthewesternarearegionofsierraleone201415
AT edmundswjohn assessingtheperformanceofrealtimeepidemicforecastsacasestudyofebolainthewesternarearegionofsierraleone201415
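
The abstract in the record above describes an evaluation approach that scores probabilistic epidemic forecasts separately for calibration, sharpness and bias. As a rough illustration of what such an assessment can look like for sample-based forecasts, the Python sketch below computes a PIT-based calibration check, a median-absolute-deviation measure of sharpness, and a simple over/under-prediction bias score. The function names and exact metric definitions here are illustrative assumptions for this sketch, not necessarily the definitions used in the paper.

```python
import numpy as np
from scipy import stats

# Illustrative sketch only: the metric definitions below are common, generic
# choices for sample-based forecasts, not necessarily those of Funk et al. (2019).

def pit_values(forecasts, observations):
    """Probability integral transform: fraction of predictive samples at or
    below each observed value; roughly uniform on [0, 1] if well calibrated."""
    return np.array([np.mean(f <= y) for f, y in zip(forecasts, observations)])

def calibration_pvalue(forecasts, observations):
    """Kolmogorov-Smirnov test of the PIT values against a uniform distribution,
    used here as a simple stand-in for a formal calibration test."""
    return stats.kstest(pit_values(forecasts, observations), "uniform").pvalue

def sharpness(forecasts):
    """Median absolute deviation of each predictive sample set about its median;
    smaller values mean sharper (more concentrated) forecasts."""
    return np.array([np.median(np.abs(f - np.median(f))) for f in forecasts])

def bias(forecasts, observations):
    """2 * P(forecast > observation) - 1: positive values indicate systematic
    over-prediction, negative values systematic under-prediction."""
    return np.array([2 * np.mean(f > y) - 1 for f, y in zip(forecasts, observations)])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy data: 20 weekly "observations" and, for each, 1000 predictive samples
    # drawn from an upward-shifted forecast so the bias score is visible.
    observations = rng.poisson(50, size=20)
    forecasts = [rng.poisson(55, size=1000) for _ in observations]
    print("calibration p-value:", round(float(calibration_pvalue(forecasts, observations)), 3))
    print("mean sharpness:", round(float(np.mean(sharpness(forecasts))), 2))
    print("mean bias:", round(float(np.mean(bias(forecasts, observations))), 2))
```

In an assessment of this kind, roughly uniform PIT values suggest well-quantified uncertainty, sharpness is only meaningfully compared among forecasts that are already calibrated, and the bias score lies between -1 and 1, with 0 indicating no systematic tendency to over- or under-predict.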