Summarising and validating test accuracy results across multiple studies for use in clinical practice
Following a meta‐analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta‐analysis findings, because of heterogeneity.
Main Authors: | Riley, Richard D., Ahmed, Ikhlaaq, Debray, Thomas P. A., Willis, Brian H., Noordzij, J. Pieter, Higgins, Julian P.T., Deeks, Jonathan J. |
Format: | Online Article Text |
Language: | English |
Published: | John Wiley and Sons Inc., 2015 |
Subjects: | Research Articles |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4973708/ https://www.ncbi.nlm.nih.gov/pubmed/25800943 http://dx.doi.org/10.1002/sim.6471 |
_version_ | 1782446439760134144 |
author | Riley, Richard D.; Ahmed, Ikhlaaq; Debray, Thomas P. A.; Willis, Brian H.; Noordzij, J. Pieter; Higgins, Julian P.T.; Deeks, Jonathan J. |
author_facet | Riley, Richard D.; Ahmed, Ikhlaaq; Debray, Thomas P. A.; Willis, Brian H.; Noordzij, J. Pieter; Higgins, Julian P.T.; Deeks, Jonathan J. |
author_sort | Riley, Richard D. |
collection | PubMed |
description | Following a meta‐analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta‐analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta‐analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta‐analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post‐test probabilities (PPV and NPV) in a new population based on existing meta‐analysis results and propose a cross‐validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post‐test probabilities calibrate better when tailored to the prevalence in the new population, with cross‐validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. |
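As a minimal, hypothetical sketch (not the authors' published code or data), the Python fragment below illustrates two ingredients mentioned in the abstract: tailoring post‐test probabilities (PPV and NPV) to a new population's prevalence via Bayes' theorem, and forming an approximate 95% prediction interval for test accuracy in a new population on the logit scale using the usual random‐effects form, summary estimate plus or minus a critical value times sqrt(tau^2 + SE^2). All numerical inputs are made‐up placeholders, and the function names are assumptions for illustration only.

```python
# Hypothetical illustration only: standard Bayes relationships and an
# approximate random-effects prediction interval, not the authors' method
# in full. All numbers below are placeholders, not results from the paper.
import math


def post_test_probabilities(sens, spec, prev):
    """Tailor PPV and NPV to a new population's prevalence via Bayes' theorem."""
    ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
    npv = (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv


def logit(p):
    return math.log(p / (1 - p))


def inv_logit(x):
    return 1 / (1 + math.exp(-x))


def approximate_prediction_interval(summary_p, se_summary_logit, tau, t_crit=2.0):
    """Approximate 95% prediction interval for the true accuracy in a new
    population, computed on the logit scale as
    logit(summary) +/- t_crit * sqrt(tau^2 + SE^2) and back-transformed.
    t_crit = 2.0 stands in for the appropriate t-distribution quantile."""
    centre = logit(summary_p)
    half_width = t_crit * math.sqrt(tau ** 2 + se_summary_logit ** 2)
    return inv_logit(centre - half_width), inv_logit(centre + half_width)


if __name__ == "__main__":
    # Placeholder meta-analysis summaries and a new population's prevalence.
    summary_sens, summary_spec = 0.75, 0.92
    new_population_prevalence = 0.30

    ppv, npv = post_test_probabilities(summary_sens, summary_spec,
                                       new_population_prevalence)
    print(f"Tailored PPV = {ppv:.3f}, NPV = {npv:.3f}")

    lo, hi = approximate_prediction_interval(summary_sens,
                                             se_summary_logit=0.15, tau=0.40)
    print(f"Approx. 95% prediction interval for sensitivity: ({lo:.2f}, {hi:.2f})")
```

The prevalence‐tailoring step is what the abstract refers to when noting that post‐test probabilities "calibrate better when tailored to the prevalence in the new population"; the prediction interval step corresponds to the proposed probability statements about a test's potential accuracy in a new population.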
format | Online Article Text |
id | pubmed-4973708 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2015 |
publisher | John Wiley and Sons Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-49737082016-08-17 Summarising and validating test accuracy results across multiple studies for use in clinical practice Riley, Richard D. Ahmed, Ikhlaaq Debray, Thomas P. A. Willis, Brian H. Noordzij, J. Pieter Higgins, Julian P.T. Deeks, Jonathan J. Stat Med Research Articles Following a meta‐analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta‐analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta‐analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta‐analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post‐test probabilities (PPV and NPV) in a new population based on existing meta‐analysis results and propose a cross‐validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post‐test probabilities calibrate better when tailored to the prevalence in the new population, with cross‐validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. John Wiley and Sons Inc. 2015-03-20 2015-06-15 /pmc/articles/PMC4973708/ /pubmed/25800943 http://dx.doi.org/10.1002/sim.6471 Text en © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. This is an open access article under the terms of the Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/) License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Articles Riley, Richard D. Ahmed, Ikhlaaq Debray, Thomas P. A. Willis, Brian H. Noordzij, J. Pieter Higgins, Julian P.T. Deeks, Jonathan J. Summarising and validating test accuracy results across multiple studies for use in clinical practice |
title | Summarising and validating test accuracy results across multiple studies for use in clinical practice |
title_full | Summarising and validating test accuracy results across multiple studies for use in clinical practice |
title_fullStr | Summarising and validating test accuracy results across multiple studies for use in clinical practice |
title_full_unstemmed | Summarising and validating test accuracy results across multiple studies for use in clinical practice |
title_short | Summarising and validating test accuracy results across multiple studies for use in clinical practice |
title_sort | summarising and validating test accuracy results across multiple studies for use in clinical practice |
topic | Research Articles |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4973708/ https://www.ncbi.nlm.nih.gov/pubmed/25800943 http://dx.doi.org/10.1002/sim.6471 |
work_keys_str_mv | AT rileyrichardd summarisingandvalidatingtestaccuracyresultsacrossmultiplestudiesforuseinclinicalpractice AT ahmedikhlaaq summarisingandvalidatingtestaccuracyresultsacrossmultiplestudiesforuseinclinicalpractice AT debraythomaspa summarisingandvalidatingtestaccuracyresultsacrossmultiplestudiesforuseinclinicalpractice AT willisbrianh summarisingandvalidatingtestaccuracyresultsacrossmultiplestudiesforuseinclinicalpractice AT noordzijjpieter summarisingandvalidatingtestaccuracyresultsacrossmultiplestudiesforuseinclinicalpractice AT higginsjulianpt summarisingandvalidatingtestaccuracyresultsacrossmultiplestudiesforuseinclinicalpractice AT deeksjonathanj summarisingandvalidatingtestaccuracyresultsacrossmultiplestudiesforuseinclinicalpractice |