
Comparison of Diagnostic Performance in Mammography Assessment: Radiologist with Reference to Clinical Information Versus Standalone Artificial Intelligence Detection

Bibliographic Details
Main Authors: Choi, Won Jae, An, Jin Kyung, Woo, Jeong Joo, Kwak, Hee Yong
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9818877/
https://www.ncbi.nlm.nih.gov/pubmed/36611409
http://dx.doi.org/10.3390/diagnostics13010117
collection PubMed
description We compared the diagnostic performance of radiologists with reference to clinical information versus standalone artificial intelligence (AI) detection of breast cancer on digital mammography. This study included 392 women (average age: 57.3 ± 12.1 years, range: 30–94 years) diagnosed with malignancy between January 2010 and June 2021 who underwent digital mammography prior to biopsy. Two radiologists assessed the mammographic findings with reference to clinical symptoms and prior mammography. All mammograms were analyzed by AI. Breast cancer detection performance was compared between the radiologists and AI according to whether the lesion location identified by each analysis method (radiologists or AI) was concordant with the pathological results. The kappa coefficient was used to measure the concordance between the radiologists' or AI analysis and the pathology results. Binomial logistic regression analysis was performed to identify factors influencing the concordance between the radiologists' analysis and the pathology results. Overall, concordance was higher for the radiologists' diagnosis than for the AI analysis (kappa coefficient: 0.819 vs. 0.698). Prior mammography (odds ratio (OR): 8.55, p < 0.001), clinical symptoms (OR: 5.49, p < 0.001), and fatty breast density (OR: 5.18, p = 0.008) were important factors contributing to the concordance of lesion location between the radiologists' diagnosis and the pathology results.
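For context (these are standard definitions, not formulas taken from the article itself): the kappa coefficient quantifies agreement beyond chance between two classifications, here the lesion location given by the radiologists or AI versus the pathology result, and each reported odds ratio corresponds to a fitted binomial logistic regression coefficient:

\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad \mathrm{OR} = e^{\beta}

where p_o is the observed proportion of concordant cases, p_e is the proportion of agreement expected by chance, and \beta is the regression coefficient for a given factor (for example, availability of prior mammography).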
id pubmed-9818877
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling Diagnostics (Basel), Article. MDPI, 2022-12-30. © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
topic Article