Abdominal fat quantification using convolutional networks
OBJECTIVES: To present software for automated adipose tissue quantification of abdominal magnetic resonance imaging (MRI) data using fully convolutional networks (FCN) and to evaluate its overall performance—accuracy, reliability, processing effort, and time—in comparison with an interactive reference method.
Main Authors: | Schneider, Daniel, Eggebrecht, Tobias, Linder, Anna, Linder, Nicolas, Schaudinn, Alexander, Blüher, Matthias, Denecke, Timm, Busse, Harald |
Format: | Online Article Text |
Language: | English |
Published: | Springer Berlin Heidelberg, 2023 |
Subjects: | Gastrointestinal |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10667157/ https://www.ncbi.nlm.nih.gov/pubmed/37436508 http://dx.doi.org/10.1007/s00330-023-09865-w |
_version_ | 1785139190375120896 |
author | Schneider, Daniel Eggebrecht, Tobias Linder, Anna Linder, Nicolas Schaudinn, Alexander Blüher, Matthias Denecke, Timm Busse, Harald |
author_facet | Schneider, Daniel Eggebrecht, Tobias Linder, Anna Linder, Nicolas Schaudinn, Alexander Blüher, Matthias Denecke, Timm Busse, Harald |
author_sort | Schneider, Daniel |
collection | PubMed |
description | OBJECTIVES: To present software for automated adipose tissue quantification of abdominal magnetic resonance imaging (MRI) data using fully convolutional networks (FCN) and to evaluate its overall performance—accuracy, reliability, processing effort, and time—in comparison with an interactive reference method. MATERIALS AND METHODS: Single-center data of patients with obesity were analyzed retrospectively with institutional review board approval. Ground truth for subcutaneous (SAT) and visceral adipose tissue (VAT) segmentation was provided by semiautomated region-of-interest (ROI) histogram thresholding of 331 full abdominal image series. Automated analyses were implemented using UNet-based FCN architectures and data augmentation techniques. Cross-validation was performed on hold-out data using standard similarity and error measures. RESULTS: The FCN models reached Dice coefficients of up to 0.954 for SAT and 0.889 for VAT segmentation during cross-validation. Volumetric SAT (VAT) assessment resulted in a Pearson correlation coefficient of 0.999 (0.997), relative bias of 0.7% (0.8%), and standard deviation of 1.2% (3.1%). Intraclass correlation (coefficient of variation) within the same cohort was 0.999 (1.4%) for SAT and 0.996 (3.1%) for VAT. CONCLUSION: The presented methods for automated adipose-tissue quantification showed substantial improvements over common semiautomated approaches (no reader dependence, less effort) and thus provide a promising option for adipose tissue quantification. CLINICAL RELEVANCE STATEMENT: Deep learning techniques will likely enable image-based body composition analyses on a routine basis. The presented fully convolutional network models are well suited for full abdominopelvic adipose tissue quantification in patients with obesity. KEY POINTS: • This work compared the performance of different deep-learning approaches for adipose tissue quantification in patients with obesity. 
• Supervised deep learning–based methods using fully convolutional networks were suited best. • Measures of accuracy were equal to or better than the operator-driven approach. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00330-023-09865-w. |
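The Dice coefficients quoted in the RESULTS section (up to 0.954 for SAT, 0.889 for VAT) are a standard overlap measure between a predicted and a reference segmentation mask. The following is a minimal illustrative sketch of that measure, not the authors' code:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary segmentation masks.

    Dice = 2 * |pred AND truth| / (|pred| + |truth|), ranging from
    0 (no overlap) to 1 (identical masks).
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total
```

For example, a predicted mask covering two pixels of which one matches a single-pixel reference mask yields a Dice score of 2/3.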
format | Online Article Text |
id | pubmed-10667157 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Springer Berlin Heidelberg |
record_format | MEDLINE/PubMed |
spelling | pubmed-106671572023-07-12 Abdominal fat quantification using convolutional networks Schneider, Daniel Eggebrecht, Tobias Linder, Anna Linder, Nicolas Schaudinn, Alexander Blüher, Matthias Denecke, Timm Busse, Harald Eur Radiol Gastrointestinal OBJECTIVES: To present software for automated adipose tissue quantification of abdominal magnetic resonance imaging (MRI) data using fully convolutional networks (FCN) and to evaluate its overall performance—accuracy, reliability, processing effort, and time—in comparison with an interactive reference method. MATERIALS AND METHODS: Single-center data of patients with obesity were analyzed retrospectively with institutional review board approval. Ground truth for subcutaneous (SAT) and visceral adipose tissue (VAT) segmentation was provided by semiautomated region-of-interest (ROI) histogram thresholding of 331 full abdominal image series. Automated analyses were implemented using UNet-based FCN architectures and data augmentation techniques. Cross-validation was performed on hold-out data using standard similarity and error measures. RESULTS: The FCN models reached Dice coefficients of up to 0.954 for SAT and 0.889 for VAT segmentation during cross-validation. Volumetric SAT (VAT) assessment resulted in a Pearson correlation coefficient of 0.999 (0.997), relative bias of 0.7% (0.8%), and standard deviation of 1.2% (3.1%). Intraclass correlation (coefficient of variation) within the same cohort was 0.999 (1.4%) for SAT and 0.996 (3.1%) for VAT. CONCLUSION: The presented methods for automated adipose-tissue quantification showed substantial improvements over common semiautomated approaches (no reader dependence, less effort) and thus provide a promising option for adipose tissue quantification. CLINICAL RELEVANCE STATEMENT: Deep learning techniques will likely enable image-based body composition analyses on a routine basis. 
The presented fully convolutional network models are well suited for full abdominopelvic adipose tissue quantification in patients with obesity. KEY POINTS: • This work compared the performance of different deep-learning approaches for adipose tissue quantification in patients with obesity. • Supervised deep learning–based methods using fully convolutional networks were suited best. • Measures of accuracy were equal to or better than the operator-driven approach. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00330-023-09865-w. Springer Berlin Heidelberg 2023-07-12 2023 /pmc/articles/PMC10667157/ /pubmed/37436508 http://dx.doi.org/10.1007/s00330-023-09865-w Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
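The volumetric agreement figures in the abstract (Pearson correlation coefficient, relative bias, and standard deviation of automated versus reference volumes) can be computed along the following lines. This is a generic sketch under common definitions, not the study's implementation:

```python
import numpy as np

def volumetric_agreement(auto_vol, ref_vol):
    """Agreement statistics between automated and reference volume measurements.

    Relative differences are expressed as a percentage of the reference
    volume; bias is their mean, spread is their sample standard deviation.
    """
    auto_vol = np.asarray(auto_vol, dtype=float)
    ref_vol = np.asarray(ref_vol, dtype=float)
    rel_diff = (auto_vol - ref_vol) / ref_vol * 100.0  # percent differences
    r = np.corrcoef(auto_vol, ref_vol)[0, 1]           # Pearson correlation
    return {
        "pearson_r": r,
        "rel_bias_pct": rel_diff.mean(),
        "rel_sd_pct": rel_diff.std(ddof=1),
    }
```

With perfectly proportional measurements (e.g., every automated volume 1% above its reference), the sketch returns a correlation of 1.0, a bias of 1.0%, and a standard deviation near zero.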
spellingShingle | Gastrointestinal Schneider, Daniel Eggebrecht, Tobias Linder, Anna Linder, Nicolas Schaudinn, Alexander Blüher, Matthias Denecke, Timm Busse, Harald Abdominal fat quantification using convolutional networks |
title | Abdominal fat quantification using convolutional networks |
title_full | Abdominal fat quantification using convolutional networks |
title_fullStr | Abdominal fat quantification using convolutional networks |
title_full_unstemmed | Abdominal fat quantification using convolutional networks |
title_short | Abdominal fat quantification using convolutional networks |
title_sort | abdominal fat quantification using convolutional networks |
topic | Gastrointestinal |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10667157/ https://www.ncbi.nlm.nih.gov/pubmed/37436508 http://dx.doi.org/10.1007/s00330-023-09865-w |
work_keys_str_mv | AT schneiderdaniel abdominalfatquantificationusingconvolutionalnetworks AT eggebrechttobias abdominalfatquantificationusingconvolutionalnetworks AT linderanna abdominalfatquantificationusingconvolutionalnetworks AT lindernicolas abdominalfatquantificationusingconvolutionalnetworks AT schaudinnalexander abdominalfatquantificationusingconvolutionalnetworks AT bluhermatthias abdominalfatquantificationusingconvolutionalnetworks AT denecketimm abdominalfatquantificationusingconvolutionalnetworks AT busseharald abdominalfatquantificationusingconvolutionalnetworks |