Interpretive Performance and Inter-Observer Agreement on Digital Mammography Test Sets
OBJECTIVE: To evaluate the interpretive performance and inter-observer agreement on digital mammographs among radiologists and to investigate whether radiologist characteristics affect performance and agreement. MATERIALS AND METHODS: The test sets consisted of full-field digital mammograms and contained 12 cancer cases among 1000 total cases. …
Main authors: | Kim, Sung Hun; Lee, Eun Hye; Jun, Jae Kwan; Kim, You Me; Chang, Yun-Woo; Lee, Jin Hwa; Kim, Hye-Won; Choi, Eun Jung |
Format: | Online Article Text |
Language: | English |
Published: | The Korean Society of Radiology, 2019 |
Subjects: | Breast Imaging |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6342756/ https://www.ncbi.nlm.nih.gov/pubmed/30672161 http://dx.doi.org/10.3348/kjr.2018.0193 |
_version_ | 1783389158286295040 |
author | Kim, Sung Hun Lee, Eun Hye Jun, Jae Kwan Kim, You Me Chang, Yun-Woo Lee, Jin Hwa Kim, Hye-Won Choi, Eun Jung |
author_facet | Kim, Sung Hun Lee, Eun Hye Jun, Jae Kwan Kim, You Me Chang, Yun-Woo Lee, Jin Hwa Kim, Hye-Won Choi, Eun Jung |
author_sort | Kim, Sung Hun |
collection | PubMed |
description | OBJECTIVE: To evaluate the interpretive performance and inter-observer agreement on digital mammographs among radiologists and to investigate whether radiologist characteristics affect performance and agreement. MATERIALS AND METHODS: The test sets consisted of full-field digital mammograms and contained 12 cancer cases among 1000 total cases. Twelve radiologists independently interpreted all mammograms. Performance indicators included the recall rate, cancer detection rate (CDR), positive predictive value (PPV), sensitivity, specificity, false positive rate (FPR), and area under the receiver operating characteristic curve (AUC). Inter-radiologist agreement was measured. The reporting radiologist characteristics included number of years of experience interpreting mammography, fellowship training in breast imaging, and annual volume of mammography interpretation. RESULTS: The mean and range of interpretive performance were as follows: recall rate, 7.5% (3.3–10.2%); CDR, 10.6 (8.0–12.0 per 1000 examinations); PPV, 15.9% (8.8–33.3%); sensitivity, 88.2% (66.7–100%); specificity, 93.5% (90.6–97.8%); FPR, 6.5% (2.2–9.4%); and AUC, 0.93 (0.82–0.99). Radiologists who annually interpreted more than 3000 screening mammograms tended to exhibit higher CDRs and sensitivities than those who interpreted fewer than 3000 mammograms (p = 0.064). The inter-radiologist agreement showed a percent agreement of 77.2–88.8% and a kappa value of 0.27–0.34. Radiologist characteristics did not affect agreement. CONCLUSION: The interpretative performance of the radiologists fulfilled the mammography screening goal of the American College of Radiology, although there was inter-observer variability. Radiologists who interpreted more than 3000 screening mammograms annually tended to perform better than radiologists who did not. |
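The abstract above reports standard screening-performance measures (recall rate, CDR, PPV, sensitivity, specificity, FPR) and chance-corrected inter-reader agreement (Cohen's kappa). As a rough illustration of how such figures are derived from test-set reads, here is a minimal Python sketch; it is not the authors' actual analysis code, and the function names (`screening_metrics`, `cohen_kappa`) and example counts are hypothetical.

```python
# Minimal sketch of the screening metrics and agreement statistics named in
# the abstract. Decisions and truth labels are hypothetical illustration data.

def screening_metrics(decisions, truth):
    """Compute recall rate, CDR, PPV, sensitivity, specificity, and FPR.

    decisions: list of bools, True = case recalled (positive read)
    truth:     list of bools, True = cancer case
    """
    n = len(decisions)
    tp = sum(d and t for d, t in zip(decisions, truth))
    fp = sum(d and not t for d, t in zip(decisions, truth))
    fn = sum(t and not d for d, t in zip(decisions, truth))
    tn = n - tp - fp - fn
    return {
        "recall_rate": (tp + fp) / n,                 # fraction of cases recalled
        "cdr_per_1000": 1000 * tp / n,                # cancers detected per 1000 exams
        "ppv": tp / (tp + fp) if tp + fp else 0.0,    # cancers among recalled cases
        "sensitivity": tp / (tp + fn),                # fraction of cancers detected
        "specificity": tn / (tn + fp),                # non-cancers correctly cleared
        "fpr": fp / (tn + fp),                        # 1 - specificity
    }

def cohen_kappa(reads_a, reads_b):
    """Chance-corrected agreement between two readers' recall decisions:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from each reader's recall rate."""
    n = len(reads_a)
    p_o = sum(a == b for a, b in zip(reads_a, reads_b)) / n
    p_pos_a = sum(reads_a) / n
    p_pos_b = sum(reads_b) / n
    p_e = p_pos_a * p_pos_b + (1 - p_pos_a) * (1 - p_pos_b)
    return (p_o - p_e) / (1 - p_e)

if __name__ == "__main__":
    # Hypothetical reader on a 1000-case set with 12 cancers (as in the study):
    # recalls all 12 cancers plus 63 benign cases.
    truth = [True] * 12 + [False] * 988
    reads = [True] * 12 + [True] * 63 + [False] * 925
    print(screening_metrics(reads, truth))
```

With these illustrative counts, the reader's recall rate is 75/1000 = 7.5%, CDR 12 per 1000, PPV 12/75 = 16%, sensitivity 100%, and specificity 925/988 ≈ 93.6%, consistent with the mean values reported in the abstract.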
format | Online Article Text |
id | pubmed-6342756 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | The Korean Society of Radiology |
record_format | MEDLINE/PubMed |
spelling | pubmed-6342756 2019-02-01 Interpretive Performance and Inter-Observer Agreement on Digital Mammography Test Sets Kim, Sung Hun Lee, Eun Hye Jun, Jae Kwan Kim, You Me Chang, Yun-Woo Lee, Jin Hwa Kim, Hye-Won Choi, Eun Jung Korean J Radiol Breast Imaging OBJECTIVE: To evaluate the interpretive performance and inter-observer agreement on digital mammographs among radiologists and to investigate whether radiologist characteristics affect performance and agreement. MATERIALS AND METHODS: The test sets consisted of full-field digital mammograms and contained 12 cancer cases among 1000 total cases. Twelve radiologists independently interpreted all mammograms. Performance indicators included the recall rate, cancer detection rate (CDR), positive predictive value (PPV), sensitivity, specificity, false positive rate (FPR), and area under the receiver operating characteristic curve (AUC). Inter-radiologist agreement was measured. The reporting radiologist characteristics included number of years of experience interpreting mammography, fellowship training in breast imaging, and annual volume of mammography interpretation. RESULTS: The mean and range of interpretive performance were as follows: recall rate, 7.5% (3.3–10.2%); CDR, 10.6 (8.0–12.0 per 1000 examinations); PPV, 15.9% (8.8–33.3%); sensitivity, 88.2% (66.7–100%); specificity, 93.5% (90.6–97.8%); FPR, 6.5% (2.2–9.4%); and AUC, 0.93 (0.82–0.99). Radiologists who annually interpreted more than 3000 screening mammograms tended to exhibit higher CDRs and sensitivities than those who interpreted fewer than 3000 mammograms (p = 0.064). The inter-radiologist agreement showed a percent agreement of 77.2–88.8% and a kappa value of 0.27–0.34. Radiologist characteristics did not affect agreement. CONCLUSION: The interpretative performance of the radiologists fulfilled the mammography screening goal of the American College of Radiology, although there was inter-observer variability. Radiologists who interpreted more than 3000 screening mammograms annually tended to perform better than radiologists who did not. The Korean Society of Radiology 2019-02 2018-01-16 /pmc/articles/PMC6342756/ /pubmed/30672161 http://dx.doi.org/10.3348/kjr.2018.0193 Text en Copyright © 2019 The Korean Society of Radiology https://creativecommons.org/licenses/by-nc/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Breast Imaging Kim, Sung Hun Lee, Eun Hye Jun, Jae Kwan Kim, You Me Chang, Yun-Woo Lee, Jin Hwa Kim, Hye-Won Choi, Eun Jung Interpretive Performance and Inter-Observer Agreement on Digital Mammography Test Sets |
title | Interpretive Performance and Inter-Observer Agreement on Digital Mammography Test Sets |
title_full | Interpretive Performance and Inter-Observer Agreement on Digital Mammography Test Sets |
title_fullStr | Interpretive Performance and Inter-Observer Agreement on Digital Mammography Test Sets |
title_full_unstemmed | Interpretive Performance and Inter-Observer Agreement on Digital Mammography Test Sets |
title_short | Interpretive Performance and Inter-Observer Agreement on Digital Mammography Test Sets |
title_sort | interpretive performance and inter-observer agreement on digital mammography test sets |
topic | Breast Imaging |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6342756/ https://www.ncbi.nlm.nih.gov/pubmed/30672161 http://dx.doi.org/10.3348/kjr.2018.0193 |
work_keys_str_mv | AT kimsunghun interpretiveperformanceandinterobserveragreementondigitalmammographytestsets AT leeeunhye interpretiveperformanceandinterobserveragreementondigitalmammographytestsets AT junjaekwan interpretiveperformanceandinterobserveragreementondigitalmammographytestsets AT kimyoume interpretiveperformanceandinterobserveragreementondigitalmammographytestsets AT changyunwoo interpretiveperformanceandinterobserveragreementondigitalmammographytestsets AT leejinhwa interpretiveperformanceandinterobserveragreementondigitalmammographytestsets AT kimhyewon interpretiveperformanceandinterobserveragreementondigitalmammographytestsets AT choieunjung interpretiveperformanceandinterobserveragreementondigitalmammographytestsets |