An explainable self-attention deep neural network for detecting mild cognitive impairment using multi-input digital drawing tasks

Bibliographic Details
Main Authors: Ruengchaijatuporn, Natthanan, Chatnuntawech, Itthi, Teerapittayanon, Surat, Sriswasdi, Sira, Itthipuripat, Sirawaj, Hemrungrojn, Solaphat, Bunyabukkana, Prodpran, Petchlorlian, Aisawan, Chunamchai, Sedthapong, Chotibut, Thiparat, Chunharas, Chaipat
Format: Online Article Text
Language: English
Published: BioMed Central 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9361513/
https://www.ncbi.nlm.nih.gov/pubmed/35945568
http://dx.doi.org/10.1186/s13195-022-01043-2
_version_ 1784764543352700928
author Ruengchaijatuporn, Natthanan
Chatnuntawech, Itthi
Teerapittayanon, Surat
Sriswasdi, Sira
Itthipuripat, Sirawaj
Hemrungrojn, Solaphat
Bunyabukkana, Prodpran
Petchlorlian, Aisawan
Chunamchai, Sedthapong
Chotibut, Thiparat
Chunharas, Chaipat
author_facet Ruengchaijatuporn, Natthanan
Chatnuntawech, Itthi
Teerapittayanon, Surat
Sriswasdi, Sira
Itthipuripat, Sirawaj
Hemrungrojn, Solaphat
Bunyabukkana, Prodpran
Petchlorlian, Aisawan
Chunamchai, Sedthapong
Chotibut, Thiparat
Chunharas, Chaipat
author_sort Ruengchaijatuporn, Natthanan
collection PubMed
description BACKGROUND: Mild cognitive impairment (MCI) is an early stage of cognitive decline that can progress to dementia. Early detection of MCI is a crucial step toward timely prevention and intervention. Recent studies have developed deep learning models to detect MCI and dementia using bedside tasks such as the classic clock drawing test (CDT). However, predicting the early stage of the disease from CDT data alone remains challenging. Moreover, state-of-the-art deep learning techniques still face the black-box problem, making their use in a clinical setting questionable. METHODS: We recruited 918 subjects from King Chulalongkorn Memorial Hospital (651 healthy subjects and 267 MCI patients). We propose a novel deep learning framework that incorporates data from the CDT, cube-copying, and trail-making tests. Soft labels and self-attention were applied to improve model performance and provide visual explanations. The interpretability of the visualizations produced by our model and by the Grad-CAM approach was rated by experienced medical personnel and quantitatively evaluated using the intersection over union (IoU) between the models’ heat maps and the regions of interest. RESULTS: Compared with the baseline VGG16 model, which uses a single CDT image, our proposed model with soft labels and multiple drawing tasks as inputs significantly improves classification between the healthy aging controls and the MCI patients. In particular, the classification accuracy increases from 0.75 (baseline model) to 0.81, the F1-score increases from 0.36 to 0.65, and the area under the receiver operating characteristic curve (AUC) increases from 0.74 to 0.84. Compared with a multi-input model that also offers interpretable visualization via Grad-CAM, our model receives higher interpretability scores from experienced medical experts and higher IoUs. CONCLUSIONS: Our model achieves better classification performance for detecting MCI than the baseline model. In addition, the model provides visual explanations that are superior to those of the baseline model, as quantitatively evaluated by experienced medical personnel. Thus, our work offers an interpretable machine learning model with high classification performance, both of which are crucial aspects of artificial intelligence in medical diagnosis.
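The abstract above evaluates visual explanations with the intersection over union (IoU) between a model's heat map and an expert-annotated region of interest, i.e., IoU = |A ∩ B| / |A ∪ B|. The following minimal Python sketch (not the authors' implementation) illustrates that computation; the function name heatmap_iou, the NumPy-array inputs, and the 0.5 binarization threshold are illustrative assumptions.

    import numpy as np

    def heatmap_iou(heatmap, roi_mask, threshold=0.5):
        # Binarize the continuous attention/Grad-CAM heat map at the chosen threshold.
        pred = np.asarray(heatmap) >= threshold
        roi = np.asarray(roi_mask).astype(bool)
        intersection = np.logical_and(pred, roi).sum()
        union = np.logical_or(pred, roi).sum()
        # Define IoU as 0 when both maps are empty to avoid division by zero.
        return float(intersection) / float(union) if union > 0 else 0.0

    # Toy example: a 4x4 heat map whose high values coincide with the ROI.
    heatmap = np.array([[0.9, 0.8, 0.1, 0.0],
                        [0.7, 0.6, 0.2, 0.0],
                        [0.1, 0.1, 0.0, 0.0],
                        [0.0, 0.0, 0.0, 0.0]])
    roi_mask = np.zeros((4, 4), dtype=bool)
    roi_mask[:2, :2] = True
    print(heatmap_iou(heatmap, roi_mask))  # prints 1.0: the binarized map matches the ROI exactly

A perfect overlap yields IoU = 1.0 and no overlap yields 0.0, which is why higher IoUs in the results indicate heat maps that align more closely with the clinically relevant regions.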
format Online
Article
Text
id pubmed-9361513
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-9361513 2022-08-10 An explainable self-attention deep neural network for detecting mild cognitive impairment using multi-input digital drawing tasks Ruengchaijatuporn, Natthanan Chatnuntawech, Itthi Teerapittayanon, Surat Sriswasdi, Sira Itthipuripat, Sirawaj Hemrungrojn, Solaphat Bunyabukkana, Prodpran Petchlorlian, Aisawan Chunamchai, Sedthapong Chotibut, Thiparat Chunharas, Chaipat Alzheimers Res Ther Research BACKGROUND: Mild cognitive impairment (MCI) is an early stage of cognitive decline that can progress to dementia. Early detection of MCI is a crucial step toward timely prevention and intervention. Recent studies have developed deep learning models to detect MCI and dementia using bedside tasks such as the classic clock drawing test (CDT). However, predicting the early stage of the disease from CDT data alone remains challenging. Moreover, state-of-the-art deep learning techniques still face the black-box problem, making their use in a clinical setting questionable. METHODS: We recruited 918 subjects from King Chulalongkorn Memorial Hospital (651 healthy subjects and 267 MCI patients). We propose a novel deep learning framework that incorporates data from the CDT, cube-copying, and trail-making tests. Soft labels and self-attention were applied to improve model performance and provide visual explanations. The interpretability of the visualizations produced by our model and by the Grad-CAM approach was rated by experienced medical personnel and quantitatively evaluated using the intersection over union (IoU) between the models’ heat maps and the regions of interest. RESULTS: Compared with the baseline VGG16 model, which uses a single CDT image, our proposed model with soft labels and multiple drawing tasks as inputs significantly improves classification between the healthy aging controls and the MCI patients. In particular, the classification accuracy increases from 0.75 (baseline model) to 0.81, the F1-score increases from 0.36 to 0.65, and the area under the receiver operating characteristic curve (AUC) increases from 0.74 to 0.84. Compared with a multi-input model that also offers interpretable visualization via Grad-CAM, our model receives higher interpretability scores from experienced medical experts and higher IoUs. CONCLUSIONS: Our model achieves better classification performance for detecting MCI than the baseline model. In addition, the model provides visual explanations that are superior to those of the baseline model, as quantitatively evaluated by experienced medical personnel. Thus, our work offers an interpretable machine learning model with high classification performance, both of which are crucial aspects of artificial intelligence in medical diagnosis. BioMed Central 2022-08-09 /pmc/articles/PMC9361513/ /pubmed/35945568 http://dx.doi.org/10.1186/s13195-022-01043-2 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
spellingShingle Research
Ruengchaijatuporn, Natthanan
Chatnuntawech, Itthi
Teerapittayanon, Surat
Sriswasdi, Sira
Itthipuripat, Sirawaj
Hemrungrojn, Solaphat
Bunyabukkana, Prodpran
Petchlorlian, Aisawan
Chunamchai, Sedthapong
Chotibut, Thiparat
Chunharas, Chaipat
An explainable self-attention deep neural network for detecting mild cognitive impairment using multi-input digital drawing tasks
title An explainable self-attention deep neural network for detecting mild cognitive impairment using multi-input digital drawing tasks
title_full An explainable self-attention deep neural network for detecting mild cognitive impairment using multi-input digital drawing tasks
title_fullStr An explainable self-attention deep neural network for detecting mild cognitive impairment using multi-input digital drawing tasks
title_full_unstemmed An explainable self-attention deep neural network for detecting mild cognitive impairment using multi-input digital drawing tasks
title_short An explainable self-attention deep neural network for detecting mild cognitive impairment using multi-input digital drawing tasks
title_sort explainable self-attention deep neural network for detecting mild cognitive impairment using multi-input digital drawing tasks
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9361513/
https://www.ncbi.nlm.nih.gov/pubmed/35945568
http://dx.doi.org/10.1186/s13195-022-01043-2
work_keys_str_mv AT ruengchaijatupornnatthanan anexplainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT chatnuntawechitthi anexplainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT teerapittayanonsurat anexplainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT sriswasdisira anexplainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT itthipuripatsirawaj anexplainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT hemrungrojnsolaphat anexplainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT bunyabukkanaprodpran anexplainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT petchlorlianaisawan anexplainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT chunamchaisedthapong anexplainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT chotibutthiparat anexplainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT chunharaschaipat anexplainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT ruengchaijatupornnatthanan explainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT chatnuntawechitthi explainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT teerapittayanonsurat explainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT sriswasdisira explainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT itthipuripatsirawaj explainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT hemrungrojnsolaphat explainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT bunyabukkanaprodpran explainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT petchlorlianaisawan explainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT chunamchaisedthapong explainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT chotibutthiparat explainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks
AT chunharaschaipat explainableselfattentiondeepneuralnetworkfordetectingmildcognitiveimpairmentusingmultiinputdigitaldrawingtasks