Novel computer aided diagnostic models on multimodality medical images to differentiate well differentiated liposarcomas from lipomas approached by deep learning methods
Main authors: , , ,
Format: Online Article Text
Language: English
Published: BioMed Central, 2022
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8991509/ | https://www.ncbi.nlm.nih.gov/pubmed/35392952 | http://dx.doi.org/10.1186/s13023-022-02304-x
Summary:

BACKGROUND: Deep learning methods have great potential to predict tumor characteristics, such as histological diagnosis and genetic aberrations. The objective of this study was to evaluate and validate the predictive performance of multimodality imaging-derived models using computer-aided diagnostic (CAD) methods for prediction of MDM2 gene amplification, in order to distinguish well-differentiated liposarcoma (WDLPS) from lipoma.

MATERIALS AND METHODS: A total of 127 patients from two institutions were included between January 2012 and December 2018: 89 patients from one institution for model training and 38 patients from the other for external validation. For each imaging modality, handcrafted radiomics analysis with manual segmentation was applied to extract 851 features, and six pretrained convolutional neural networks (CNNs) automatically extracted 512–2048 deep learning features. The extracted imaging-based features were selected via univariate filter selection methods and the recursive feature elimination algorithm, then classified by a support vector machine for model construction. Integrated with two significant clinical variables, age and LDH level, a clinical-radiological model was constructed to identify WDLPS and lipoma. All differentiation models were evaluated using the area under the receiver operating characteristic curve (AUC) and its 95% confidence interval (CI).

RESULTS: The multimodality model built on deep learning features extracted by the ResNet50 algorithm (RN-DL model) achieved excellent differentiation performance, with an AUC of 0.995 (95% CI 0.987–1.000) in the training cohort, and an AUC of 0.950 (95% CI 0.886–1.000), accuracy of 92.11%, sensitivity of 95.00% (95% CI 73.06–99.74%), and specificity of 88.89% (95% CI 63.93–98.05%) in external validation. The integrated clinical-radiological model achieved an AUC of 0.996 (95% CI 0.989–1.000) in the training cohort, and an AUC of 0.942 (95% CI 0.867–1.000), accuracy of 86.84%, sensitivity of 95.00% (95% CI 73.06–99.74%), and specificity of 77.78% (95% CI 51.92–92.63%) in external validation.

CONCLUSIONS: Imaging-based multimodality models demonstrate effective discrimination between WDLPS and lipoma via CAD methods, and may be a practicable approach to assist treatment decisions.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s13023-022-02304-x.
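The modeling pipeline described above (univariate filter selection, recursive feature elimination, and a support vector machine classifier evaluated by ROC AUC) can be sketched in scikit-learn. This is a minimal illustration on synthetic data, not the authors' implementation: the feature matrix, labels, and all hyperparameters (number of filtered features, number of RFE-selected features) here are assumptions; in the study, the features come from handcrafted radiomics extraction and pretrained CNNs such as ResNet50.

```python
# Hypothetical sketch of the abstract's feature-selection + SVM pipeline.
# Data are synthetic stand-ins: the study used 851 handcrafted radiomics
# features (plus 512-2048 CNN features) from 89 training patients.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_patients, n_features = 89, 851            # counts taken from the abstract
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)     # 0 = lipoma, 1 = WDLPS (synthetic labels)
X[:, :10] += y[:, None] * 0.8               # plant weak signal in a few features

X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

pipe = Pipeline([
    ("scale", StandardScaler()),
    # Step 1: univariate filter selection (ANOVA F-test); k=50 is an assumption.
    ("filter", SelectKBest(f_classif, k=50)),
    # Step 2: recursive feature elimination with a linear SVM; 10 is an assumption.
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=10)),
    # Step 3: final SVM classifier; probability=True enables ROC AUC scoring.
    ("svm", SVC(kernel="linear", probability=True, random_state=0)),
])
pipe.fit(X_tr, y_tr)

# Evaluate discrimination on the held-out split, as the study did with AUC.
auc = roc_auc_score(y_val, pipe.predict_proba(X_val)[:, 1])
print(f"validation AUC: {auc:.3f}")
```

In practice the study additionally reported 95% confidence intervals for AUC, sensitivity, and specificity, and validated on an external cohort from a second institution rather than a random split of the training data.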