Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling
Radiation outcomes prediction (ROP) plays an important role in personalized prescription and adaptive radiotherapy. A clinical decision may depend not only on an accurate prediction of radiation outcomes, but also on an informed understanding of the relationships among patients’...
Main Authors: | Luo, Yi; Tseng, Huan-Hsin; Cui, Sunan; Wei, Lise; Ten Haken, Randall K.; El Naqa, Issam |
Format: | Online Article Text |
Language: | English |
Published: | The British Institute of Radiology, 2019 |
Subjects: | Review Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7592485/ https://www.ncbi.nlm.nih.gov/pubmed/33178948 http://dx.doi.org/10.1259/bjro.20190021 |
_version_ | 1783601195319820288 |
author | Luo, Yi; Tseng, Huan-Hsin; Cui, Sunan; Wei, Lise; Ten Haken, Randall K.; El Naqa, Issam |
author_facet | Luo, Yi; Tseng, Huan-Hsin; Cui, Sunan; Wei, Lise; Ten Haken, Randall K.; El Naqa, Issam |
author_sort | Luo, Yi |
collection | PubMed |
description | Radiation outcomes prediction (ROP) plays an important role in personalized prescription and adaptive radiotherapy. A clinical decision may depend not only on an accurate prediction of radiation outcomes, but also on an informed understanding of the relationships among patients’ characteristics, radiation response and treatment plans. As more of patients’ biophysical information becomes available, machine learning (ML) techniques have great potential for improving ROP. Creating explainable ML methods is an ultimate goal for clinical practice but remains a challenging one. Towards complete explainability, the interpretability of ML approaches needs to be explored first. Hence, this review focuses on the application of ML techniques for clinical adoption in radiation oncology, balancing the accuracy of the predictive model of interest against its interpretability. An ML algorithm can generally be classified as an interpretable (IP) or a non-interpretable (NIP, “black box”) technique. While the former may provide a clearer explanation to aid clinical decision-making, it is generally outperformed by the latter in prediction accuracy. Therefore, great efforts and resources have been dedicated to balancing the accuracy and the interpretability of ML approaches in ROP, but more still needs to be done. This review introduces current progress towards increasing the accuracy of IP ML approaches, and summarizes major trends towards improving the interpretability and alleviating the “black box” stigma of ML in radiation outcomes modeling. Efforts to integrate IP and NIP ML approaches to produce predictive models with both higher accuracy and greater interpretability for ROP are also discussed. |
format | Online Article Text |
id | pubmed-7592485 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | The British Institute of Radiology. |
record_format | MEDLINE/PubMed |
spelling | pubmed-7592485 2020-11-10 Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling Luo, Yi; Tseng, Huan-Hsin; Cui, Sunan; Wei, Lise; Ten Haken, Randall K.; El Naqa, Issam BJR Open Review Article Radiation outcomes prediction (ROP) plays an important role in personalized prescription and adaptive radiotherapy. A clinical decision may depend not only on an accurate prediction of radiation outcomes, but also on an informed understanding of the relationships among patients’ characteristics, radiation response and treatment plans. As more of patients’ biophysical information becomes available, machine learning (ML) techniques have great potential for improving ROP. Creating explainable ML methods is an ultimate goal for clinical practice but remains a challenging one. Towards complete explainability, the interpretability of ML approaches needs to be explored first. Hence, this review focuses on the application of ML techniques for clinical adoption in radiation oncology, balancing the accuracy of the predictive model of interest against its interpretability. An ML algorithm can generally be classified as an interpretable (IP) or a non-interpretable (NIP, “black box”) technique. While the former may provide a clearer explanation to aid clinical decision-making, it is generally outperformed by the latter in prediction accuracy. Therefore, great efforts and resources have been dedicated to balancing the accuracy and the interpretability of ML approaches in ROP, but more still needs to be done. This review introduces current progress towards increasing the accuracy of IP ML approaches, and summarizes major trends towards improving the interpretability and alleviating the “black box” stigma of ML in radiation outcomes modeling. Efforts to integrate IP and NIP ML approaches to produce predictive models with both higher accuracy and greater interpretability for ROP are also discussed. The British Institute of Radiology. 2019-07-04 /pmc/articles/PMC7592485/ /pubmed/33178948 http://dx.doi.org/10.1259/bjro.20190021 Text en © 2019 The Authors. Published by the British Institute of Radiology This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Review Article Luo, Yi; Tseng, Huan-Hsin; Cui, Sunan; Wei, Lise; Ten Haken, Randall K.; El Naqa, Issam Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling |
title | Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling |
title_full | Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling |
title_fullStr | Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling |
title_full_unstemmed | Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling |
title_short | Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling |
title_sort | balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling |
topic | Review Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7592485/ https://www.ncbi.nlm.nih.gov/pubmed/33178948 http://dx.doi.org/10.1259/bjro.20190021 |
work_keys_str_mv | AT luoyi balancingaccuracyandinterpretabilityofmachinelearningapproachesforradiationtreatmentoutcomesmodeling AT tsenghuanhsin balancingaccuracyandinterpretabilityofmachinelearningapproachesforradiationtreatmentoutcomesmodeling AT cuisunan balancingaccuracyandinterpretabilityofmachinelearningapproachesforradiationtreatmentoutcomesmodeling AT weilise balancingaccuracyandinterpretabilityofmachinelearningapproachesforradiationtreatmentoutcomesmodeling AT tenhakenrandallk balancingaccuracyandinterpretabilityofmachinelearningapproachesforradiationtreatmentoutcomesmodeling AT elnaqaissam balancingaccuracyandinterpretabilityofmachinelearningapproachesforradiationtreatmentoutcomesmodeling |
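The review's central contrast, between interpretable (IP) models and non-interpretable (NIP, "black box") models, can be made concrete with a small experiment. The sketch below is illustrative only and is not from the article: it assumes scikit-learn is available, uses synthetic data as a stand-in for patient features and binary radiation outcomes, and (as one common choice) treats logistic regression as the IP model and gradient boosting as the NIP model.

```python
# Illustrative sketch of the accuracy/interpretability trade-off discussed
# in the abstract. The data here is synthetic, NOT from the article.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for patient characteristics vs. a binary radiation outcome.
X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# IP model: each coefficient maps directly to a feature's effect on the outcome.
ip_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
ip_auc = roc_auc_score(y_te, ip_model.predict_proba(X_te)[:, 1])

# NIP ("black box") model: often higher accuracy, but no per-feature
# explanation comparable to a regression coefficient.
nip_model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
nip_auc = roc_auc_score(y_te, nip_model.predict_proba(X_te)[:, 1])

print(f"IP  (logistic regression) AUC: {ip_auc:.3f}")
print(f"NIP (gradient boosting)   AUC: {nip_auc:.3f}")
print("IP coefficients (directly interpretable):", ip_model.coef_.round(2))
```

On data like this the boosted model typically scores a somewhat higher AUC, while only the logistic model yields coefficients a clinician can read directly; that gap is the trade-off the review sets out to balance.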