
A multidomain fusion model of radiomics and deep learning to discriminate between PDAC and AIP based on (18)F-FDG PET/CT images

Bibliographic Details
Main Authors: Wei, Wenting, Jia, Guorong, Wu, Zhongyi, Wang, Tao, Wang, Heng, Wei, Kezhen, Cheng, Chao, Liu, Zhaobang, Zuo, Changjing
Format: Online Article Text
Language: English
Published: Springer Nature Singapore 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9676903/
https://www.ncbi.nlm.nih.gov/pubmed/36409398
http://dx.doi.org/10.1007/s11604-022-01363-1
Description
Summary:

PURPOSE: To explore a multidomain fusion model of radiomics and deep learning features based on (18)F-fluorodeoxyglucose positron emission tomography/computed tomography ((18)F-FDG PET/CT) images to distinguish pancreatic ductal adenocarcinoma (PDAC) from autoimmune pancreatitis (AIP), and thereby improve the accuracy of disease diagnosis.

MATERIALS AND METHODS: This retrospective study included 48 patients with AIP (mean age, 65 ± 12.0 years; range, 37–90 years) and 64 patients with PDAC (mean age, 66 ± 11.3 years; range, 32–88 years). We compared three methods for discriminating PDAC from AIP based on (18)F-FDG PET/CT images: the radiomics model (RAD_model), the deep learning model (DL_model), and the multidomain fusion model (MF_model). We also compared the classification results obtained from PET/CT, PET, and CT images in these three models. In addition, we explored the attributes of the abstract deep learning features by analyzing their correlation with the radiomics features. Five-fold cross-validation was used to calculate the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), accuracy (Acc), sensitivity (Sen), and specificity (Spe) to quantitatively evaluate the performance of the different classification models.

RESULTS: The multidomain fusion model achieved the best overall performance compared with the radiomics and deep learning models, with an AUC of 96.4% (95% CI 95.4–97.3%), accuracy of 90.1% (95% CI 88.7–91.5%), sensitivity of 87.5% (95% CI 84.3–90.6%), and specificity of 93.0% (95% CI 90.3–95.6%). Our study also showed that the multimodal PET/CT features were superior to either PET or CT features alone, and that first-order radiomics features provided valuable complementary information for the deep learning model.

CONCLUSION: These preliminary results demonstrate that our proposed multidomain fusion model fully exploits the value of radiomics and deep learning features based on (18)F-FDG PET/CT images, providing competitive accuracy for the discrimination of PDAC and AIP.
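
As a rough illustration of the evaluation protocol described in the abstract, the sketch below fuses radiomics and deep learning feature vectors by simple concatenation and scores a binary PDAC-vs-AIP classifier with five-fold cross-validation, reporting AUC, accuracy, sensitivity, and specificity. The placeholder feature arrays, the logistic-regression classifier, and all variable names are assumptions made for illustration; the paper's actual feature extraction and fusion pipeline is not specified in this record.

    # Hypothetical sketch only: the feature arrays are random placeholders and the
    # logistic-regression classifier is a stand-in, not the study's MF_model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold
    from sklearn.preprocessing import StandardScaler
    from sklearn.metrics import roc_auc_score, confusion_matrix

    rng = np.random.default_rng(0)
    n_patients = 112                              # 48 AIP + 64 PDAC, as in the study cohort
    X_rad = rng.normal(size=(n_patients, 100))    # placeholder radiomics features
    X_dl = rng.normal(size=(n_patients, 512))     # placeholder deep-learning features
    y = np.array([0] * 48 + [1] * 64)             # 0 = AIP, 1 = PDAC

    # Multidomain fusion by simple feature concatenation (one common strategy).
    X = np.hstack([X_rad, X_dl])

    aucs, accs, sens, spes = [], [], [], []
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(X, y):
        scaler = StandardScaler().fit(X[train_idx])
        clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X[train_idx]), y[train_idx])
        prob = clf.predict_proba(scaler.transform(X[test_idx]))[:, 1]
        pred = (prob >= 0.5).astype(int)
        tn, fp, fn, tp = confusion_matrix(y[test_idx], pred).ravel()
        aucs.append(roc_auc_score(y[test_idx], prob))
        accs.append((tp + tn) / (tp + tn + fp + fn))
        sens.append(tp / (tp + fn))               # sensitivity for the PDAC class
        spes.append(tn / (tn + fp))               # specificity (correctly identified AIP)

    print(f"AUC {np.mean(aucs):.3f}  Acc {np.mean(accs):.3f}  "
          f"Sen {np.mean(sens):.3f}  Spe {np.mean(spes):.3f}")

Concatenation followed by a linear classifier is only one possible fusion strategy; the reported MF_model may combine the radiomics and deep learning domains differently.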