
Interobserver variability in quality assessment of magnetic resonance images


Bibliographic Details
Main Authors: Obuchowicz, Rafal, Oszust, Mariusz, Piorkowski, Adam
Format: Online Article Text
Language: English
Published: BioMed Central 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7509933/
https://www.ncbi.nlm.nih.gov/pubmed/32962651
http://dx.doi.org/10.1186/s12880-020-00505-z
collection PubMed
description BACKGROUND: The perceptual quality of magnetic resonance (MR) images influences diagnosis and may compromise treatment. The purpose of this study was to evaluate how changes in image quality influence the interobserver variability of its assessment. METHODS: For the variability evaluation, a dataset containing distorted MR images was prepared and then assessed by 31 experienced medical professionals (radiologists). Differences between observers were analyzed using Fleiss’ kappa. However, since the kappa evaluates agreement among radiologists on the basis of aggregated decisions, a typically employed criterion of image quality assessment (IQA) performance was used to provide a more thorough analysis. The IQA performance of each radiologist was evaluated by computing the Spearman correlation coefficient, ρ, between that radiologist’s individual scores and the mean opinion scores (MOS) composed of the subjective opinions of the remaining professionals. RESULTS: The experiments show that there is a significant agreement among radiologists (κ=0.12; 95% confidence interval [CI]: 0.118, 0.121; P<0.001) on the quality of the assessed images. The resulting κ is strongly affected by the subjectivity of the assigned scores, since observers independently assigned close but not identical values. Therefore, ρ was used to identify cases of poor performance and to confirm the consistency of the majority of collected scores (mean ρ = 0.5706). The results for interns (mean ρ = 0.6868) support the finding that the quality assessment of MR images can be successfully taught. CONCLUSIONS: The agreement observed among radiologists from different imaging centers confirms the subjectivity of the perception of MR images. It was shown that the image content and the severity of distortions affect the IQA. Furthermore, the study highlights the importance of the psychosomatic condition of the observers and their attitude.
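The leave-one-out analysis described in METHODS can be sketched as follows: each observer's scores are correlated (Spearman's ρ) with the mean opinion score (MOS) of the remaining observers. This is an illustrative sketch only, not the authors' code; the score matrix is invented toy data (the study used 31 radiologists), and the helper functions `ranks` and `spearman` are hypothetical implementations.

```python
def ranks(xs):
    """Average 1-based ranks, handling ties (as Spearman's rho requires)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# rows = observers, columns = images; toy 1-5 quality scores
scores = [
    [5, 4, 2, 1, 3],
    [4, 5, 1, 2, 3],
    [5, 3, 2, 1, 4],
]

for i, obs in enumerate(scores):
    others = [s for j, s in enumerate(scores) if j != i]
    mos = [sum(col) / len(col) for col in zip(*others)]  # MOS of the rest
    print(f"observer {i}: rho = {spearman(obs, mos):.3f}")
```

A low per-observer ρ under this scheme flags an outlier whose quality ratings diverge from the consensus of the remaining raters, which is how the study separates poor-performance cases from the broadly consistent majority.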
id pubmed-7509933
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-7509933 BMC Med Imaging Research Article BioMed Central 2020-09-22 /pmc/articles/PMC7509933/ /pubmed/32962651 http://dx.doi.org/10.1186/s12880-020-00505-z Text en © The Author(s) 2020 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
topic Research Article