
Eye Tracking for Deep Learning Segmentation Using Convolutional Neural Networks


Bibliographic Details
Main Authors: Stember, J. N., Celik, H., Krupinski, E., Chang, P. D., Mutasa, S., Wood, B. J., Lignelli, A., Moonis, G., Schwartz, L. H., Jambawalikar, S., Bagci, U.
Format: Online Article Text
Language: English
Published: Springer International Publishing 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6646645/
https://www.ncbi.nlm.nih.gov/pubmed/31044392
http://dx.doi.org/10.1007/s10278-019-00220-4
author Stember, J. N.
Celik, H.
Krupinski, E.
Chang, P. D.
Mutasa, S.
Wood, B. J.
Lignelli, A.
Moonis, G.
Schwartz, L. H.
Jambawalikar, S.
Bagci, U.
author_sort Stember, J. N.
collection PubMed
description Deep learning with convolutional neural networks (CNNs) has experienced tremendous growth in multiple healthcare applications and has been shown to have high accuracy in semantic segmentation of medical (e.g., radiology and pathology) images. However, a key barrier in the required training of CNNs is obtaining large-scale and precisely annotated imaging data. We sought to address the lack of annotated data with eye tracking technology. As a proof of principle, our hypothesis was that segmentation masks generated with the help of eye tracking (ET) would be very similar to those rendered by hand annotation (HA). Additionally, our goal was to show that a CNN trained on ET masks would be equivalent to one trained on HA masks, the latter being the current standard approach.

Step 1: Screen captures of 19 publicly available radiologic images of assorted structures within various modalities were analyzed. ET and HA masks for all regions of interest (ROIs) were generated from these image datasets. Step 2: Utilizing a similar approach, ET and HA masks for 356 publicly available T1-weighted postcontrast meningioma images were generated. Three hundred six of these image + mask pairs were used to train a CNN with U-net-based architecture. The remaining 50 images were used as the independent test set.

Step 1: ET and HA masks for the nonneurological images had an average Dice similarity coefficient (DSC) of 0.86 between each other. Step 2: Meningioma ET and HA masks had an average DSC of 0.85 between each other. After separate training using both approaches, the ET approach performed virtually identically to HA on the test set of 50 images. The former had an area under the curve (AUC) of 0.88, while the latter had an AUC of 0.87. ET and HA predictions had trimmed mean DSCs compared to the original HA maps of 0.73 and 0.74, respectively. These trimmed DSCs between ET and HA were found to be statistically equivalent with a p value of 0.015.

We have demonstrated that ET can create segmentation masks suitable for deep learning semantic segmentation. Future work will integrate ET to produce masks in a faster, more natural manner that distracts less from typical radiology clinical workflow.
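The description above scores agreement between ET and HA masks with the Dice similarity coefficient (DSC), DSC = 2|A ∩ B| / (|A| + |B|). As a minimal sketch of that metric (not the authors' code; the toy masks below are invented for illustration), the DSC of two binary segmentation masks can be computed as:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # Convention: two empty masks are in perfect agreement.
    return 2.0 * intersection / total if total else 1.0

# Toy 4x4 masks with partial overlap (4 vs. 3 foreground pixels, 3 shared)
et_mask = np.array([[0, 1, 1, 0],
                    [0, 1, 1, 0],
                    [0, 0, 0, 0],
                    [0, 0, 0, 0]])
ha_mask = np.array([[0, 1, 1, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 0],
                    [0, 0, 0, 0]])
print(round(dice_coefficient(et_mask, ha_mask), 2))  # → 0.86
```

A DSC of 1.0 means identical masks and 0.0 means no overlap, so the reported values of 0.85-0.86 indicate substantial ET/HA agreement.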
format Online
Article
Text
id pubmed-6646645
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Springer International Publishing
record_format MEDLINE/PubMed
spelling pubmed-66466452019-08-14 Eye Tracking for Deep Learning Segmentation Using Convolutional Neural Networks Stember, J. N. Celik, H. Krupinski, E. Chang, P. D. Mutasa, S. Wood, B. J. Lignelli, A. Moonis, G. Schwartz, L. H. Jambawalikar, S. Bagci, U. J Digit Imaging Article Springer International Publishing 2019-05-01 2019-08 /pmc/articles/PMC6646645/ /pubmed/31044392 http://dx.doi.org/10.1007/s10278-019-00220-4 Text en © The Author(s) 2019 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
title Eye Tracking for Deep Learning Segmentation Using Convolutional Neural Networks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6646645/
https://www.ncbi.nlm.nih.gov/pubmed/31044392
http://dx.doi.org/10.1007/s10278-019-00220-4