Breast cancer histopathological images classification based on deep semantic features and gray level co-occurrence matrix
Main Authors:
Format: Online Article Text
Language: English
Published: Public Library of Science, 2022
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9070886/
https://www.ncbi.nlm.nih.gov/pubmed/35511877
http://dx.doi.org/10.1371/journal.pone.0267955
Summary: Breast cancer is regarded as the leading killer of women today. Early diagnosis and treatment of breast cancer are the key to improving the survival rate of patients. A method for breast cancer histopathological image recognition based on deep semantic features and gray level co-occurrence matrix (GLCM) features is proposed in this paper. Taking the pre-trained DenseNet201 as the basic model, part of the convolutional-layer features of the last dense block are extracted as the deep semantic features, which are then fused with the three-channel GLCM features, and a support vector machine (SVM) is used for classification. For the BreaKHis dataset, we explore the magnification-specific binary (MSB) and magnification-independent binary (MIB) classification problems and compare the performance with seven baseline models: AlexNet, VGG16, ResNet50, GoogLeNet, DenseNet201, SqueezeNet, and Inception-ResNet-V2. The experimental results show that the proposed method performs better than the pre-trained baseline models on the MSB and MIB classification problems. The highest image-level recognition accuracies at 40×, 100×, 200×, and 400× magnification are 96.75%, 95.21%, 96.57%, and 93.15%, respectively, and the highest patient-level recognition accuracies at the four magnifications are 96.33%, 95.26%, 96.09%, and 92.99%, respectively. The image-level and patient-level recognition accuracies for MIB classification are 95.56% and 95.54%, respectively. In addition, the recognition accuracy of the proposed method is comparable to that of some state-of-the-art methods.
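The pipeline described in the summary (deep semantic features from a pre-trained DenseNet201, three-channel GLCM texture features, and an SVM classifier) can be sketched roughly as below. This is a minimal illustration assuming Keras, scikit-image, and scikit-learn; the exact layer of the last dense block, the GLCM distances and angles, the image size, and the SVM kernel are not given in this record and are assumptions, not the paper's settings.

```python
# Minimal sketch of the described pipeline, assuming Keras, scikit-image and
# scikit-learn. Layer choice, GLCM parameters, image size and SVM kernel are
# assumptions, not the paper's exact settings.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras.applications.densenet import preprocess_input

# Deep semantic features: globally averaged convolutional features from a
# pre-trained DenseNet201 with its top classifier removed (stand-in for the
# paper's use of part of the last dense block).
backbone = DenseNet201(weights="imagenet", include_top=False, pooling="avg")

def deep_features(images):
    """images: float array of shape (N, 224, 224, 3) with values in [0, 255]."""
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

def glcm_features(image, distances=(1,),
                  angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Three-channel GLCM descriptors: texture statistics computed separately
    on the R, G and B channels and concatenated into one vector."""
    props = ("contrast", "correlation", "energy", "homogeneity")
    feats = []
    for c in range(3):
        glcm = graycomatrix(image[..., c], distances=distances, angles=angles,
                            levels=256, symmetric=True, normed=True)
        for p in props:
            feats.append(graycoprops(glcm, p).ravel())
    return np.concatenate(feats)

def fused_features(images):
    """Concatenate deep semantic features with per-image GLCM features."""
    deep = deep_features(images.astype("float32"))
    texture = np.stack([glcm_features(img) for img in images.astype(np.uint8)])
    return np.hstack([deep, texture])

# Binary benign/malignant classification with an SVM on the fused features
# (RBF kernel is an assumption):
# clf = SVC(kernel="rbf").fit(fused_features(train_images), train_labels)
# predictions = clf.predict(fused_features(test_images))
```

The fusion here is plain concatenation of the two feature vectors before the SVM; the record does not specify how the paper combines or normalizes them, so that detail is a placeholder.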