
Deep Vision for Breast Cancer Classification and Segmentation


Bibliographic Details
Main Authors: Fulton, Lawrence, McLeod, Alex, Dolezel, Diane, Bastian, Nathaniel, Fulton, Christopher P.
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8582536/
https://www.ncbi.nlm.nih.gov/pubmed/34771547
http://dx.doi.org/10.3390/cancers13215384
_version_ 1784597009664049152
author Fulton, Lawrence
McLeod, Alex
Dolezel, Diane
Bastian, Nathaniel
Fulton, Christopher P.
author_facet Fulton, Lawrence
McLeod, Alex
Dolezel, Diane
Bastian, Nathaniel
Fulton, Christopher P.
author_sort Fulton, Lawrence
collection PubMed
description SIMPLE SUMMARY: Breast cancer misdiagnoses increase individual and system stress and costs, and result in increased morbidity and mortality. Digital mammography studies are typically about 80% sensitive and 90% specific. Improvement in the classification of breast cancer imagery is possible using deep vision methods, and these methods may further be used to autonomously identify the regions of interest most closely associated with anomalies to support clinician analysis. This research explores deep vision techniques for improving mammography classification and for identifying associated regions of interest. The findings from this research contribute to the future of automated assistive diagnosis of breast cancer and the isolation of regions of interest. ABSTRACT: (1) Background: The odds of a female breast cancer diagnosis have increased from 1 in 11 in 1975 to 1 in 8 today. Mammography false positive rates (FPR) are associated with overdiagnosis and overtreatment, while false negative rates (FNR) increase morbidity and mortality. (2) Methods: Deep vision supervised learning classifies 299 × 299 pixel de-noised mammography images as negative or non-negative using models built on 55,890 pre-processed training images and applied to 15,364 unseen test images. A small image representation from the fitted training model is returned to evaluate the portion of the loss function gradient with respect to the image that maximizes the classification probability. This gradient is then re-mapped back to the original images, highlighting the areas of the original image that are most influential for classification (perhaps masses or boundary areas). (3) Results: Initial classification results were 97% accurate, 99% specific, and 83% sensitive. Gradient techniques for unsupervised region-of-interest mapping clearly identified the areas most associated with the classification results on positive mammograms and might be used to support clinician analysis.
(4) Conclusions: Deep vision techniques hold promise for reducing overdiagnosis and overtreatment, reducing underdiagnosis, and automating region-of-interest identification in mammography.
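The gradient re-mapping step described in the Methods can be illustrated with a toy example. The sketch below is not the paper's pipeline (which trains a deep network on 299 × 299 mammograms); it assumes a hypothetical linear classifier on an 8 × 8 grid so that the gradient of the classification probability with respect to each pixel can be computed in closed form and rescaled into a saliency map over the image.

```python
import numpy as np

# Toy illustration of gradient-based region-of-interest (saliency) mapping:
# the gradient of the classification output with respect to the input image
# highlights the pixels most influential for the predicted class. A tiny
# linear "model" stands in for the paper's deep network so the gradient
# has a closed form.

rng = np.random.default_rng(0)

H = W = 8
x = rng.random((H, W))            # stand-in "image"

# Linear classifier: p(non-negative) = sigmoid(sum(w * x) + b).
# Weights are nonzero only in a 3x3 patch -- the hypothetical "lesion" region.
w = np.zeros((H, W))
w[2:5, 2:5] = 5.0
b = -2.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

p = sigmoid(np.sum(w * x) + b)    # classification probability

# Gradient of p with respect to each input pixel: dp/dx = p * (1 - p) * w.
grad = p * (1.0 - p) * w

# Saliency map: absolute gradient rescaled to [0, 1] and mapped back onto
# the image grid, mimicking the re-mapping step in the abstract.
saliency = np.abs(grad)
saliency /= saliency.max()

# The most salient pixel falls inside the high-weight "lesion" patch.
peak = np.unravel_index(np.argmax(saliency), saliency.shape)
print(peak)
```

In a real deep network the gradient has no closed form and is obtained by backpropagation to the input layer, but the re-mapping idea is the same: pixels with large gradient magnitude are the ones the classifier relied on, which is why the highlighted regions tend to coincide with masses or boundary areas.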
format Online
Article
Text
id pubmed-8582536
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8582536 2021-11-12 Deep Vision for Breast Cancer Classification and Segmentation Fulton, Lawrence McLeod, Alex Dolezel, Diane Bastian, Nathaniel Fulton, Christopher P. Cancers (Basel) Article SIMPLE SUMMARY: Breast cancer misdiagnoses increase individual and system stress and costs, and result in increased morbidity and mortality. Digital mammography studies are typically about 80% sensitive and 90% specific. Improvement in the classification of breast cancer imagery is possible using deep vision methods, and these methods may further be used to autonomously identify the regions of interest most closely associated with anomalies to support clinician analysis. This research explores deep vision techniques for improving mammography classification and for identifying associated regions of interest. The findings from this research contribute to the future of automated assistive diagnosis of breast cancer and the isolation of regions of interest. ABSTRACT: (1) Background: The odds of a female breast cancer diagnosis have increased from 1 in 11 in 1975 to 1 in 8 today. Mammography false positive rates (FPR) are associated with overdiagnosis and overtreatment, while false negative rates (FNR) increase morbidity and mortality. (2) Methods: Deep vision supervised learning classifies 299 × 299 pixel de-noised mammography images as negative or non-negative using models built on 55,890 pre-processed training images and applied to 15,364 unseen test images. A small image representation from the fitted training model is returned to evaluate the portion of the loss function gradient with respect to the image that maximizes the classification probability. This gradient is then re-mapped back to the original images, highlighting the areas of the original image that are most influential for classification (perhaps masses or boundary areas). (3) Results: Initial classification results were 97% accurate, 99% specific, and 83% sensitive.
Gradient techniques for unsupervised region-of-interest mapping clearly identified the areas most associated with the classification results on positive mammograms and might be used to support clinician analysis. (4) Conclusions: Deep vision techniques hold promise for reducing overdiagnosis and overtreatment, reducing underdiagnosis, and automating region-of-interest identification in mammography. MDPI 2021-10-27 /pmc/articles/PMC8582536/ /pubmed/34771547 http://dx.doi.org/10.3390/cancers13215384 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Fulton, Lawrence
McLeod, Alex
Dolezel, Diane
Bastian, Nathaniel
Fulton, Christopher P.
Deep Vision for Breast Cancer Classification and Segmentation
title Deep Vision for Breast Cancer Classification and Segmentation
title_full Deep Vision for Breast Cancer Classification and Segmentation
title_fullStr Deep Vision for Breast Cancer Classification and Segmentation
title_full_unstemmed Deep Vision for Breast Cancer Classification and Segmentation
title_short Deep Vision for Breast Cancer Classification and Segmentation
title_sort deep vision for breast cancer classification and segmentation
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8582536/
https://www.ncbi.nlm.nih.gov/pubmed/34771547
http://dx.doi.org/10.3390/cancers13215384
work_keys_str_mv AT fultonlawrence deepvisionforbreastcancerclassificationandsegmentation
AT mcleodalex deepvisionforbreastcancerclassificationandsegmentation
AT dolezeldiane deepvisionforbreastcancerclassificationandsegmentation
AT bastiannathaniel deepvisionforbreastcancerclassificationandsegmentation
AT fultonchristopherp deepvisionforbreastcancerclassificationandsegmentation