
Fully Automated Wound Tissue Segmentation Using Deep Learning on Mobile Devices: Cohort Study

BACKGROUND: The composition of tissue types within a wound is a useful indicator of its healing progression. Tissue composition is used clinically in wound healing tools (eg, the Bates-Jensen Wound Assessment Tool) to assess risk and recommend treatment. However, wound tissue identification and the estimation of relative tissue composition are highly subjective. Consequently, incorrect assessments could be reported, leading to downstream impacts including inappropriate dressing selection, failure to identify wounds at risk of not healing, or failure to make appropriate referrals to specialists.

OBJECTIVE: This study aimed to measure inter- and intrarater variability in manual tissue segmentation and quantification among a cohort of wound care clinicians, and to determine whether an objective assessment of tissue types (ie, size and amount) can be achieved using deep neural networks.

METHODS: A data set of 58 anonymized images of various types of chronic wounds from Swift Medical’s Wound Database was used to conduct the inter- and intrarater agreement study. The data set was split into 3 subsets, with 50% overlap between subsets, to measure intrarater agreement. Four tissue types (epithelial, granulation, slough, and eschar) within the wound bed were independently labeled by 5 wound clinicians at 1-week intervals using a browser-based image annotation tool. In addition, 2 deep convolutional neural network architectures were developed, one for wound segmentation and one for tissue segmentation, and were applied in sequence in the workflow. These models were trained on 465,187 and 17,000 image-label pairs, respectively; this is the largest and most diverse data set reported for training deep learning models for wound and wound tissue segmentation. The resulting models offer robust performance under diverse imaging conditions, are unbiased with respect to skin tone, and can execute in near real time on mobile devices.

RESULTS: Poor to moderate interrater agreement was observed in identifying tissue types in chronic wound images. Agreement was very poor for epithelization (Krippendorff α=.014), whereas granulation was the most consistently identified tissue type. The intrarater intraclass correlation, ICC(3,1), however, indicates that raters were relatively consistent when labeling the same image multiple times over a period. Our deep learning models achieved a mean intersection over union of 0.8644 for wound segmentation and 0.7192 for tissue segmentation. A cohort of wound clinicians, by consensus, rated 91% (53/58) of the tissue segmentation results as between fair and good in terms of tissue identification and segmentation quality.

CONCLUSIONS: The interrater agreement study confirms that clinicians exhibit considerable variability when identifying wound tissues and visually estimating their proportions. The proposed deep learning technique provides objective tissue identification and measurement to assist clinicians in documenting wounds more accurately, and could have a significant impact on wound care when deployed at scale.
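
The record does not include the authors' code. As a rough illustration of the two-stage workflow described in the methods, the minimal Python sketch below runs a wound segmentation model and a tissue segmentation model in sequence and reports each tissue class as a percentage of the wound bed; the `wound_model` and `tissue_model` callables are hypothetical placeholders, not the networks developed in the paper.

```python
# Illustrative sketch only: the two model callables are hypothetical stand-ins
# for a wound-segmentation network and a tissue-segmentation network.
from typing import Callable, Dict
import numpy as np

TISSUE_CLASSES = ["epithelial", "granulation", "slough", "eschar"]

def tissue_composition(
    image: np.ndarray,
    wound_model: Callable[[np.ndarray], np.ndarray],   # returns an HxW boolean wound mask
    tissue_model: Callable[[np.ndarray], np.ndarray],  # returns an HxW integer tissue-class map
) -> Dict[str, float]:
    """Run the two models in sequence and report each tissue class as a
    percentage of the segmented wound bed."""
    wound_mask = wound_model(image).astype(bool)   # stage 1: wound vs. background
    class_map = tissue_model(image)                # stage 2: per-pixel tissue label
    wound_pixels = int(wound_mask.sum())
    if wound_pixels == 0:
        return {name: 0.0 for name in TISSUE_CLASSES}
    labels_in_wound = class_map[wound_mask]
    return {
        name: 100.0 * float((labels_in_wound == idx).sum()) / wound_pixels
        for idx, name in enumerate(TISSUE_CLASSES)
    }
```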

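The segmentation accuracy figures quoted in the results (0.8644 and 0.7192) are mean intersection over union (IoU) values. A minimal sketch of that metric is shown below; averaging conventions vary and the paper's exact scheme is not stated in this record, so this version simply skips classes absent from both label maps.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection over union across classes, given two integer label
    maps of the same shape. Classes absent from both maps are skipped."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class not present in either map
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious)) if ious else 0.0
```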
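
Intrarater consistency is reported as ICC(3,1), the two-way mixed-effects, consistency, single-measurement intraclass correlation. The record does not say how it was computed; the sketch below implements the standard ANOVA formulation under the assumption of a complete targets-by-raters rating matrix (Krippendorff α, also cited in the results, requires a coincidence-matrix computation not shown here).

```python
import numpy as np

def icc_3_1(ratings: np.ndarray) -> float:
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    `ratings` is an (n_targets, k_raters) array with no missing values."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum()   # between targets
    ss_cols = n * ((ratings.mean(axis=0) - grand_mean) ** 2).sum()   # between raters
    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols                          # residual
    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return float((ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error))
```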

Bibliographic Details
Main Authors: Ramachandram, Dhanesh; Ramirez-GarciaLuna, Jose Luis; Fraser, Robert D J; Martínez-Jiménez, Mario Aurelio; Arriaga-Caballero, Jesus E; Allport, Justin
Format: Online Article Text
Language: English
Published: JMIR Publications, 2022
Subjects: Original Paper
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9077502/
https://www.ncbi.nlm.nih.gov/pubmed/35451982
http://dx.doi.org/10.2196/36977

©Dhanesh Ramachandram, Jose Luis Ramirez-GarciaLuna, Robert D J Fraser, Mario Aurelio Martínez-Jiménez, Jesus E Arriaga-Caballero, Justin Allport. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 22.04.2022. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on https://mhealth.jmir.org/, as well as this copyright and license information must be included.