Computer-aided characterization of early cancer in Barrett’s esophagus on i-scan magnification imaging: a multicenter international study
Main authors:
Format: Online Article Text
Language: English
Published: Mosby Yearbook, 2023
Subjects:
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10590905/
https://www.ncbi.nlm.nih.gov/pubmed/36460087
http://dx.doi.org/10.1016/j.gie.2022.11.020
Summary: BACKGROUND AND AIMS: We aimed to develop a computer-aided characterization system that could support the diagnosis of dysplasia in Barrett’s esophagus (BE) on magnification endoscopy. METHODS: Videos were collected in high-definition magnification white-light and virtual chromoendoscopy with i-scan (Pentax Hoya, Japan) imaging in patients with dysplastic and nondysplastic BE (NDBE) from 4 centers. We trained a neural network with a ResNet101 architecture to classify frames as dysplastic or nondysplastic. The network was tested on 3 different scenarios: high-quality still images, all available video frames, and a selected sequence within each video. RESULTS: Fifty-seven patients, each with videos of magnification areas of BE (34 dysplasia, 23 NDBE), were included. Performance was evaluated by a leave-1-patient-out cross-validation method. In all, 60,174 (39,347 dysplasia, 20,827 NDBE) magnification video frames were used to train the network. The testing set included 49,726 i-scan-3/optical enhancement magnification frames. On 350 high-quality still images, the network achieved a sensitivity of 94%, specificity of 86%, and area under the receiver operating characteristic curve (AUROC) of 96%. On all 49,726 available video frames, the network achieved a sensitivity of 92%, specificity of 82%, and AUROC of 95%. On a selected sequence of frames per case (11,471 frames in total), we used an exponentially weighted moving average of classifications on consecutive frames to characterize dysplasia. The network achieved a sensitivity of 92%, specificity of 84%, and AUROC of 96%. The mean assessment speed per frame was 0.0135 seconds (SD ± 0.006). CONCLUSION: Our network can characterize BE dysplasia with high accuracy and speed on high-quality magnification images and sequences of video frames, moving it toward real-time automated diagnosis.
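The sequence-level evaluation described in the summary applies an exponentially weighted moving average (EWMA) to the classifications of consecutive video frames before deciding whether a sequence shows dysplasia. The sketch below illustrates that smoothing step only; the per-frame probabilities, the smoothing factor `alpha`, and the 0.5 decision threshold are illustrative assumptions, since the abstract does not report the values used.

```python
import numpy as np

def ewma_dysplasia_score(frame_probs, alpha=0.3):
    """Return the EWMA-smoothed dysplasia score for each frame, in temporal order.

    frame_probs : per-frame dysplasia probabilities (0..1) from a frame classifier.
    alpha       : smoothing factor; 0.3 is an assumption, not the paper's value.
    """
    smoothed = np.empty(len(frame_probs), dtype=float)
    running = float(frame_probs[0])                      # seed with the first frame
    for i, p in enumerate(frame_probs):
        running = alpha * float(p) + (1.0 - alpha) * running
        smoothed[i] = running
    return smoothed

# Hypothetical usage with placeholder outputs from a ResNet-style frame classifier.
frame_probs = np.array([0.20, 0.45, 0.80, 0.90, 0.85, 0.70])
scores = ewma_dysplasia_score(frame_probs)
sequence_is_dysplastic = bool(scores[-1] >= 0.5)         # 0.5 threshold is illustrative
print(scores.round(3), sequence_is_dysplastic)
```

Smoothing consecutive frame classifications in this way damps single-frame misclassifications, which is why a per-sequence decision can be more stable than any individual frame prediction.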