Temporal quality degradation in AI models
As AI models continue to advance into many real-life applications, their ability to maintain reliable quality over time becomes increasingly important. The principal challenge in this task stems from the very nature of current machine learning models, dependent on the data as it was at the time of training. In this study, we present the first analysis of AI “aging”: the complex, multifaceted phenomenon of AI model quality degradation as more time passes since the last model training cycle. Using datasets from four different industries (healthcare operations, transportation, finance, and weather) and four standard machine learning models, we identify and describe the main temporal degradation patterns. We also demonstrate the principal differences between temporal model degradation and related concepts that have been explored previously, such as data concept drift and continuous learning. Finally, we indicate potential causes of temporal degradation, and suggest approaches to detecting aging and reducing its impact.
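The abstract describes measuring model quality as a function of time elapsed since the last training cycle. The record above does not include the authors' code, datasets, or model configurations, so the following is only a minimal illustrative sketch, under assumed inputs: synthetic drifting data and a scikit-learn RandomForestRegressor (both assumptions, not taken from the paper). It shows one common way to expose temporal degradation: train once on an early time window, freeze the model, and score it on successive later windows.

```python
# Illustrative sketch only; synthetic data and model choice are assumptions,
# not the setup used in the study.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Two synthetic "years" of daily data whose input-output relationship drifts slowly.
n_days = 730
t = np.arange(n_days)
X = rng.normal(size=(n_days, 5))
drift = 1.0 + 0.002 * t                              # the true coefficient changes over time
y = drift * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.2, size=n_days)

# Train on the first 90 days only, then freeze the model.
train_mask = t < 90
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[train_mask], y[train_mask])

# Evaluate the frozen model on consecutive 30-day windows and report error vs. model age.
for start in range(90, n_days, 30):
    window = (t >= start) & (t < start + 30)
    age = start - 90                                 # days since the training window ended
    mae = mean_absolute_error(y[window], model.predict(X[window]))
    print(f"model age ~{age:3d} days  MAE = {mae:.3f}")
```

On data like this the reported error grows with model age, which is the kind of temporal degradation curve the abstract refers to; on stationary data it would stay flat.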
Main Authors: | Vela, Daniel; Sharp, Andrew; Zhang, Richard; Nguyen, Trang; Hoang, An; Pianykh, Oleg S. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9270447/ https://www.ncbi.nlm.nih.gov/pubmed/35803963 http://dx.doi.org/10.1038/s41598-022-15245-z |
_version_ | 1784744471503568896 |
---|---|
author | Vela, Daniel Sharp, Andrew Zhang, Richard Nguyen, Trang Hoang, An Pianykh, Oleg S. |
author_facet | Vela, Daniel Sharp, Andrew Zhang, Richard Nguyen, Trang Hoang, An Pianykh, Oleg S. |
author_sort | Vela, Daniel |
collection | PubMed |
description | As AI models continue to advance into many real-life applications, their ability to maintain reliable quality over time becomes increasingly important. The principal challenge in this task stems from the very nature of current machine learning models, dependent on the data as it was at the time of training. In this study, we present the first analysis of AI “aging”: the complex, multifaceted phenomenon of AI model quality degradation as more time passes since the last model training cycle. Using datasets from four different industries (healthcare operations, transportation, finance, and weather) and four standard machine learning models, we identify and describe the main temporal degradation patterns. We also demonstrate the principal differences between temporal model degradation and related concepts that have been explored previously, such as data concept drift and continuous learning. Finally, we indicate potential causes of temporal degradation, and suggest approaches to detecting aging and reducing its impact. |
format | Online Article Text |
id | pubmed-9270447 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-92704472022-07-10 Temporal quality degradation in AI models Vela, Daniel Sharp, Andrew Zhang, Richard Nguyen, Trang Hoang, An Pianykh, Oleg S. Sci Rep Article As AI models continue to advance into many real-life applications, their ability to maintain reliable quality over time becomes increasingly important. The principal challenge in this task stems from the very nature of current machine learning models, dependent on the data as it was at the time of training. In this study, we present the first analysis of AI “aging”: the complex, multifaceted phenomenon of AI model quality degradation as more time passes since the last model training cycle. Using datasets from four different industries (healthcare operations, transportation, finance, and weather) and four standard machine learning models, we identify and describe the main temporal degradation patterns. We also demonstrate the principal differences between temporal model degradation and related concepts that have been explored previously, such as data concept drift and continuous learning. Finally, we indicate potential causes of temporal degradation, and suggest approaches to detecting aging and reducing its impact. Nature Publishing Group UK 2022-07-08 /pmc/articles/PMC9270447/ /pubmed/35803963 http://dx.doi.org/10.1038/s41598-022-15245-z Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Article Vela, Daniel Sharp, Andrew Zhang, Richard Nguyen, Trang Hoang, An Pianykh, Oleg S. Temporal quality degradation in AI models |
title | Temporal quality degradation in AI models |
title_full | Temporal quality degradation in AI models |
title_fullStr | Temporal quality degradation in AI models |
title_full_unstemmed | Temporal quality degradation in AI models |
title_short | Temporal quality degradation in AI models |
title_sort | temporal quality degradation in ai models |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9270447/ https://www.ncbi.nlm.nih.gov/pubmed/35803963 http://dx.doi.org/10.1038/s41598-022-15245-z |
work_keys_str_mv | AT veladaniel temporalqualitydegradationinaimodels AT sharpandrew temporalqualitydegradationinaimodels AT zhangrichard temporalqualitydegradationinaimodels AT nguyentrang temporalqualitydegradationinaimodels AT hoangan temporalqualitydegradationinaimodels AT pianykholegs temporalqualitydegradationinaimodels |