
An imConvNet-based deep learning model for Chinese medical named entity recognition

Bibliographic Details
Main Authors: Zheng, Yuchen, Han, Zhenggong, Cai, Yimin, Duan, Xubo, Sun, Jiangling, Yang, Wei, Huang, Haisong
Format: Online Article Text
Language: English
Published: BioMed Central 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9677659/
https://www.ncbi.nlm.nih.gov/pubmed/36411432
http://dx.doi.org/10.1186/s12911-022-02049-4
Description
Summary: BACKGROUND: With the development of medical technology, information management in the medical field has become increasingly mature. Medical big data analysis is based on the large amount of medical and health data stored in electronic medical systems, such as electronic medical records and medical reports. How to fully exploit the information contained in these medical data has long been a subject of research by many scholars. Named entity recognition (NER) is the basis for text mining, and it has its own particularities in the medical field, where limited text resources and a large number of professional domain terms continue to pose significant challenges for medical NER.

METHODS: We improved the convolutional neural network model (imConvNet) to obtain additional text features, while continuing to use the classical BERT pre-training model and the BiLSTM model for named entity recognition. The imConvNet model extracts additional word vector features and improves named entity recognition accuracy. The proposed model, named BERT-imConvNet-BiLSTM-CRF, is composed of four layers: a BERT embedding layer, which produces the word embedding vectors; an imConvNet layer, which captures the context features of each character; a BiLSTM (Bidirectional Long Short-Term Memory) layer, which captures long-distance dependencies; and a CRF (Conditional Random Field) layer, which labels characters based on their features and transition rules.

RESULTS: Combined with the classical models, the average F1 score on the public medical dataset yidu-s4k reached 91.38%; when real electronic medical record text on impacted wisdom teeth was used as the experimental object, the model's F1 score was 93.89%. Both results are better than those of the classical models.

CONCLUSIONS: The proposed model (imConvNet) significantly improves the recognition accuracy of Chinese medical named entities and is applicable to various medical corpora.
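
The abstract describes the four-layer architecture only at a high level. Below is a minimal PyTorch sketch of a BERT-imConvNet-BiLSTM-CRF-style tagger, assuming the Hugging Face transformers library and the pytorch-crf package. The ConvBlock used here as a stand-in for imConvNet, the hidden sizes, and the number of convolution blocks are illustrative assumptions, since the exact structure of the improved ConvNet is not given in this record.

import torch
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf


class ConvBlock(nn.Module):
    """Hypothetical stand-in for one imConvNet block: a 1-D convolution
    over the character sequence with a residual connection."""
    def __init__(self, hidden, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(hidden, hidden, kernel_size, padding=kernel_size // 2)
        self.act = nn.GELU()

    def forward(self, x):                                   # x: (batch, seq, hidden)
        y = self.conv(x.transpose(1, 2)).transpose(1, 2)    # convolve along the sequence axis
        return x + self.act(y)                              # residual connection


class BertConvBiLstmCrf(nn.Module):
    def __init__(self, num_tags, bert_name="bert-base-chinese",
                 lstm_hidden=256, num_conv_blocks=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)    # BERT embedding layer
        hidden = self.bert.config.hidden_size
        self.convnet = nn.Sequential(                       # imConvNet-style layer (assumed form)
            *[ConvBlock(hidden) for _ in range(num_conv_blocks)])
        self.bilstm = nn.LSTM(hidden, lstm_hidden,           # BiLSTM layer: long-distance context
                              batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * lstm_hidden, num_tags)     # per-character tag scores
        self.crf = CRF(num_tags, batch_first=True)           # CRF layer: tag transition rules

    def forward(self, input_ids, attention_mask, tags=None):
        x = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        x = self.convnet(x)
        x, _ = self.bilstm(x)
        emissions = self.emit(x)
        mask = attention_mask.bool()
        if tags is not None:                                  # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        return self.crf.decode(emissions, mask=mask)          # inference: best tag sequence

In use, a tokenizer such as BertTokenizerFast for "bert-base-chinese" would supply input_ids and attention_mask for each character sequence, and tags would follow a BIO-style labeling over the entity categories of the chosen corpus; these details are assumptions, not taken from the record.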