
Automatic classification of esophageal lesions in endoscopic images using a convolutional neural network

BACKGROUND: Using deep learning techniques in image analysis is a dynamically emerging field. This study aims to use a convolutional neural network (CNN), a deep learning approach, to automatically classify esophageal cancer (EC) and distinguish it from premalignant lesions. METHODS: A total of 1,27...

Full description

Bibliographic Details
Main Authors: Liu, Gaoshuang, Hua, Jie, Wu, Zhan, Meng, Tianfang, Sun, Mengxue, Huang, Peiyun, He, Xiaopu, Sun, Weihao, Li, Xueliang, Chen, Yang
Format: Online Article Text
Language: English
Published: AME Publishing Company 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7210177/
https://www.ncbi.nlm.nih.gov/pubmed/32395530
http://dx.doi.org/10.21037/atm.2020.03.24
author Liu, Gaoshuang
Hua, Jie
Wu, Zhan
Meng, Tianfang
Sun, Mengxue
Huang, Peiyun
He, Xiaopu
Sun, Weihao
Li, Xueliang
Chen, Yang
author_sort Liu, Gaoshuang
collection PubMed
description BACKGROUND: Using deep learning techniques in image analysis is a dynamically emerging field. This study aims to use a convolutional neural network (CNN), a deep learning approach, to automatically classify esophageal cancer (EC) and distinguish it from premalignant lesions. METHODS: A total of 1,272 white-light images were collected from 748 subjects, covering normal cases, premalignant lesions, and cancerous lesions; 1,017 images were used to train the CNN, and the remaining 255 images were used to evaluate the CNN architecture. Our proposed CNN structure consists of two subnetworks (O-stream and P-stream). The original images were used as inputs to the O-stream to extract color and global features, and the pre-processed esophageal images were used as inputs to the P-stream to extract texture and detail features. RESULTS: After fusion of the 2 streams, the CNN system achieved an accuracy of 85.83%, a sensitivity of 94.23%, and a specificity of 94.67%. The classification accuracies for normal esophagus, premalignant lesions, and EC were 94.23%, 82.5%, and 77.14%, respectively, outperforming the Local Binary Patterns (LBP) + Support Vector Machine (SVM) and Histogram of Oriented Gradients (HOG) + SVM methods. A total of 8 of the 35 (22.85%) EC lesions were categorized as premalignant lesions because of their slightly reddish and flat appearance. CONCLUSIONS: The two-stream CNN system demonstrated high sensitivity and specificity on the endoscopic images. It obtained better detection performance than the currently used methods on the same dataset and has great potential to assist endoscopists in distinguishing esophageal lesion subclasses.
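For readers who want a concrete picture of the two-stream design described above, the following minimal sketch shows one way such a network could be assembled. It is an illustration only: the PyTorch framework, the ResNet-18 backbone for each stream, the late fusion by feature concatenation, the 224x224 input size, and all identifiers are assumptions made for this sketch, not details taken from the paper.

# Minimal sketch of a two-stream CNN for three-class esophageal lesion
# classification. Assumptions (not from the paper): PyTorch, a ResNet-18
# backbone per stream, fusion by concatenating the two 512-d feature vectors.
import torch
import torch.nn as nn
from torchvision import models

class TwoStreamLesionClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # O-stream: original white-light image (color / global features).
        self.o_stream = models.resnet18(weights=None)
        self.o_stream.fc = nn.Identity()  # expose the 512-d feature vector
        # P-stream: pre-processed image (texture / detail features).
        self.p_stream = models.resnet18(weights=None)
        self.p_stream.fc = nn.Identity()
        # Fusion of the two streams followed by a 3-way classifier.
        self.classifier = nn.Linear(512 * 2, num_classes)

    def forward(self, original, preprocessed):
        # Concatenate the per-stream features, then classify.
        fused = torch.cat([self.o_stream(original), self.p_stream(preprocessed)], dim=1)
        return self.classifier(fused)  # logits for normal / premalignant / EC

if __name__ == "__main__":
    model = TwoStreamLesionClassifier()
    o = torch.randn(2, 3, 224, 224)  # batch of original images
    p = torch.randn(2, 3, 224, 224)  # batch of pre-processed images
    print(model(o, p).shape)         # torch.Size([2, 3])

In this layout the O-stream receives the original white-light frame, the P-stream receives the pre-processed version of the same frame, and the concatenated 1,024-dimensional feature vector is mapped to the three classes (normal, premalignant lesion, EC).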
format Online
Article
Text
id pubmed-7210177
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher AME Publishing Company
record_format MEDLINE/PubMed
spelling pubmed-7210177 2020-05-11
Automatic classification of esophageal lesions in endoscopic images using a convolutional neural network
Liu, Gaoshuang; Hua, Jie; Wu, Zhan; Meng, Tianfang; Sun, Mengxue; Huang, Peiyun; He, Xiaopu; Sun, Weihao; Li, Xueliang; Chen, Yang
Ann Transl Med, Original Article. AME Publishing Company, 2020-04.
/pmc/articles/PMC7210177/ /pubmed/32395530 http://dx.doi.org/10.21037/atm.2020.03.24
Text en. © 2020 Annals of Translational Medicine. All rights reserved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/
title Automatic classification of esophageal lesions in endoscopic images using a convolutional neural network
title_sort automatic classification of esophageal lesions in endoscopic images using a convolutional neural network
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7210177/
https://www.ncbi.nlm.nih.gov/pubmed/32395530
http://dx.doi.org/10.21037/atm.2020.03.24
work_keys_str_mv AT liugaoshuang automaticclassificationofesophageallesionsinendoscopicimagesusingaconvolutionalneuralnetwork
AT huajie automaticclassificationofesophageallesionsinendoscopicimagesusingaconvolutionalneuralnetwork
AT wuzhan automaticclassificationofesophageallesionsinendoscopicimagesusingaconvolutionalneuralnetwork
AT mengtianfang automaticclassificationofesophageallesionsinendoscopicimagesusingaconvolutionalneuralnetwork
AT sunmengxue automaticclassificationofesophageallesionsinendoscopicimagesusingaconvolutionalneuralnetwork
AT huangpeiyun automaticclassificationofesophageallesionsinendoscopicimagesusingaconvolutionalneuralnetwork
AT hexiaopu automaticclassificationofesophageallesionsinendoscopicimagesusingaconvolutionalneuralnetwork
AT sunweihao automaticclassificationofesophageallesionsinendoscopicimagesusingaconvolutionalneuralnetwork
AT lixueliang automaticclassificationofesophageallesionsinendoscopicimagesusingaconvolutionalneuralnetwork
AT chenyang automaticclassificationofesophageallesionsinendoscopicimagesusingaconvolutionalneuralnetwork