Deep Transfer Learning-Based Approach for Glucose Transporter-1 (GLUT1) Expression Assessment

Bibliographic Details
Main Authors: Al Zorgani, Maisun Mohamed; Ugail, Hassan; Pors, Klaus; Dauda, Abdullahi Magaji
Format: Online Article Text
Language: English
Published: Springer International Publishing, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10584776/
https://www.ncbi.nlm.nih.gov/pubmed/37670181
http://dx.doi.org/10.1007/s10278-023-00859-0
Description
Summary: Glucose transporter-1 (GLUT-1) expression level is a biomarker of tumour hypoxia in immunohistochemistry (IHC)-stained images, and GLUT-1 scoring is therefore a routine procedure for predicting tumour hypoxia markers in clinical practice. However, visual assessment of GLUT-1 scores is subjective and consequently prone to inter-pathologist variability. This study therefore proposes an automated method for assessing GLUT-1 scores in IHC colorectal carcinoma images. For this purpose, we leverage deep transfer learning to evaluate the performance of six pre-trained convolutional neural network (CNN) architectures: AlexNet, VGG16, GoogLeNet, ResNet50, DenseNet-201 and ShuffleNet. The target CNNs are either fine-tuned as classifiers or adapted as feature extractors combined with a support vector machine (SVM) to classify GLUT-1 scores in IHC images. Our experimental results show that the best-performing model is the SVM classifier trained on the fused deep features (Feat-Concat) extracted from DenseNet-201, ResNet50 and GoogLeNet. It yields the highest prediction accuracy of 98.86%, outperforming the other classifiers on our dataset. Comparing the two methodologies, we also conclude that off-the-shelf feature extraction requires less time and fewer resources for training than fine-tuning.
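
The winning methodology described in the summary, extracting off-the-shelf deep features from several pretrained CNNs, concatenating them (Feat-Concat) and training an SVM on the result, can be sketched as follows. This is an illustrative sketch only, not the authors' code: it assumes PyTorch/torchvision implementations of DenseNet-201, ResNet-50 and GoogLeNet, a scikit-learn SVM with a linear kernel, and 224x224 ImageNet-normalised inputs, none of which are specified in the record.

    # Illustrative sketch of "off-the-shelf feature extraction + SVM" (not the authors' code).
    import torch
    import torch.nn as nn
    from torchvision import models
    from sklearn.svm import SVC

    def build_extractors():
        # Pretrained backbones with their classification heads replaced by
        # identity layers, so each model returns a fixed-length feature vector.
        densenet = models.densenet201(weights="DEFAULT")
        densenet.classifier = nn.Identity()          # 1920-d features
        resnet = models.resnet50(weights="DEFAULT")
        resnet.fc = nn.Identity()                    # 2048-d features
        googlenet = models.googlenet(weights="DEFAULT")
        googlenet.fc = nn.Identity()                 # 1024-d features
        for m in (densenet, resnet, googlenet):
            m.eval()
        return densenet, resnet, googlenet

    @torch.no_grad()
    def extract_fused_features(images, extractors):
        # Concatenate the deep features from all extractors (Feat-Concat).
        # `images` is a batch of shape (N, 3, 224, 224), normalised with the
        # ImageNet statistics the pretrained weights expect.
        feats = [m(images) for m in extractors]
        return torch.cat(feats, dim=1).cpu().numpy()

    if __name__ == "__main__":
        extractors = build_extractors()
        # Hypothetical stand-in data: in the study these would be IHC colorectal
        # carcinoma image patches with their GLUT-1 score labels.
        images = torch.randn(8, 3, 224, 224)
        labels = [0, 1, 2, 3, 0, 1, 2, 3]
        X = extract_fused_features(images, extractors)
        svm = SVC(kernel="linear")                   # kernel choice is an assumption
        svm.fit(X, labels)
        print("Training accuracy:", svm.score(X, labels))

Because the CNN weights stay frozen, only the SVM is trained, which is consistent with the summary's conclusion that this route needs less time and fewer resources than fine-tuning the networks end to end.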