Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN (Robustly Optimized Bidirectional Encoder Representations From Transformers Pretraining Approach With Whole Word Masking Extended Combining a Convolutional Neural Network) Model: Named Entity Study
BACKGROUND: With the prevalence of online consultation, many patient-doctor dialogues have accumulated, which, in an authentic language environment, are of significant value to the research and development of intelligent question answering and automated triage in recent natural language processing studies.
Main Authors: Sun, Yuanyuan; Gao, Dongping; Shen, Xifeng; Li, Meiting; Nan, Jiale; Zhang, Weining
Format: Online Article Text
Language: English
Published: JMIR Publications, 2022
Subjects: Original Paper
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9073616/ https://www.ncbi.nlm.nih.gov/pubmed/35451969 http://dx.doi.org/10.2196/35606
_version_ | 1784701327986655232 |
author | Sun, Yuanyuan; Gao, Dongping; Shen, Xifeng; Li, Meiting; Nan, Jiale; Zhang, Weining |
author_facet | Sun, Yuanyuan; Gao, Dongping; Shen, Xifeng; Li, Meiting; Nan, Jiale; Zhang, Weining |
author_sort | Sun, Yuanyuan |
collection | PubMed |
description | BACKGROUND: With the prevalence of online consultation, many patient-doctor dialogues have accumulated, which, in an authentic language environment, are of significant value to the research and development of intelligent question answering and automated triage in recent natural language processing studies.
OBJECTIVE: The purpose of this study was to design a front-end task module for the online inquiry of intelligent medical services. Through the study of automatic labeling of real doctor-patient dialogue text on the internet, we explored a method of identifying the negative and positive entities of dialogues with higher accuracy.
METHODS: The data set used for this study was from the Spring Rain Doctor internet online consultation, downloaded from the official data set of Alibaba Tianchi Lab. We proposed a composite abutting joint model that automatically classifies clinical finding entities into the following 4 attributes: positive, negative, other, and empty. We adapted a downstream architecture in Chinese Robustly Optimized Bidirectional Encoder Representations from Transformers Pretraining Approach (RoBERTa) with whole word masking (WWM) extended (RoBERTa-WWM-ext), combining it with a text convolutional neural network (CNN). We used RoBERTa-WWM-ext to express sentence semantics as a text vector and then extracted the local features of the sentence through the CNN; this was our new fusion model. To verify its knowledge learning ability, we chose Enhanced Representation through Knowledge Integration (ERNIE), original Bidirectional Encoder Representations from Transformers (BERT), and Chinese BERT with WWM to perform the same task, and then compared the results. Precision, recall, and macro-F1 were used to evaluate the performance of the methods.
RESULTS: We found that the ERNIE model, which was trained with a large Chinese corpus, had a total score (macro-F1) of 65.78290014, while BERT and BERT-WWM had scores of 53.18247117 and 69.2795315, respectively. Our composite abutting joint model (RoBERTa-WWM-ext + CNN) had a macro-F1 value of 70.55936311, showing that it outperformed the other models in the task.
CONCLUSIONS: The accuracy of the original model can be greatly improved by giving priority to WWM, replacing the character-based mask with whole-word units when classifying and labeling medical entities. Better results can be obtained by optimizing the model's downstream tasks and integrating multiple models later on. The study findings contribute to the translation of online consultation information into machine-readable information. |
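The METHODS paragraph describes encoding each utterance with RoBERTa-WWM-ext and running a text CNN over the token-level output to extract local features before a 4-way attribute classifier. Below is a minimal PyTorch sketch of that fusion; the hfl/chinese-roberta-wwm-ext checkpoint name, kernel sizes, and filter count are assumptions not stated in the record, not the authors' exact configuration.

```python
# Minimal sketch of the RoBERTa-WWM-ext + CNN fusion described in the abstract.
# The checkpoint name and all hyperparameters (kernel sizes, filter count) are
# assumptions, not values reported in the record.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class RobertaWwmExtCnn(nn.Module):
    def __init__(self, num_labels=4, kernel_sizes=(2, 3, 4), num_filters=128):
        super().__init__()
        # Chinese RoBERTa-WWM-ext encoder (public HFL release; assumed here).
        self.encoder = AutoModel.from_pretrained("hfl/chinese-roberta-wwm-ext")
        hidden = self.encoder.config.hidden_size
        # Parallel 1-D convolutions extract local n-gram features from the
        # token-level hidden states produced by the encoder.
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_labels)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) -> (batch, hidden, seq_len) for Conv1d.
        states = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        states = states.transpose(1, 2)
        # Max-pool each feature map over the sequence dimension.
        pooled = [conv(states).relu().max(dim=2).values for conv in self.convs]
        # Logits over the 4 attributes: positive, negative, other, empty.
        return self.classifier(torch.cat(pooled, dim=1))

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
model = RobertaWwmExtCnn()
batch = tokenizer(["头疼三天了"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # shape (1, 4)
```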
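The RESULTS are reported as precision, recall, and macro-F1. The following scikit-learn snippet illustrates how a macro-F1 over the 4 attribute labels is computed; the labels and predictions are invented for the example, not the study's data.

```python
# Illustration of the reported metrics with scikit-learn; the labels and
# predictions here are invented, not the study's data.
from sklearn.metrics import precision_recall_fscore_support

LABELS = ["positive", "negative", "other", "empty"]
y_true = ["positive", "negative", "empty", "other", "positive", "empty"]
y_pred = ["positive", "negative", "empty", "empty", "negative", "empty"]

# Macro averaging gives each of the 4 classes equal weight, so rare
# attributes count as much as frequent ones in the final score.
precision, recall, macro_f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=LABELS, average="macro", zero_division=0
)
print(f"macro precision={precision:.4f} recall={recall:.4f} F1={macro_f1:.4f}")
```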
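The CONCLUSIONS hinge on whole word masking: for Chinese, original BERT masks single characters, while WWM masks every character of a segmented word together. A toy contrast follows; the sentence and its segmentation are invented for illustration.

```python
# Toy contrast between character-level masking (original Chinese BERT) and
# whole word masking (WWM); the sentence and segmentation are invented.
import random

chars = ["头", "疼", "三", "天", "了"]          # character tokens
words = [["头", "疼"], ["三", "天"], ["了"]]    # hypothetical word segmentation

def char_level_mask(chars, rng):
    # Original BERT: pick characters independently; here, exactly one.
    i = rng.randrange(len(chars))
    return ["[MASK]" if j == i else c for j, c in enumerate(chars)]

def whole_word_mask(words, rng):
    # WWM: when a word is picked, every character in it is masked together.
    i = rng.randrange(len(words))
    return [("[MASK]" if j == i else c) for j, w in enumerate(words) for c in w]

rng = random.Random(0)
print(char_level_mask(chars, rng))   # e.g. ['头', '疼', '三', '[MASK]', '了']
print(whole_word_mask(words, rng))   # e.g. ['头', '疼', '[MASK]', '[MASK]', '了']
```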
format | Online Article Text |
id | pubmed-9073616 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | JMIR Publications |
record_format | MEDLINE/PubMed |
spelling | pubmed-9073616 2022-05-07 Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN Model: Named Entity Study. Sun, Yuanyuan; Gao, Dongping; Shen, Xifeng; Li, Meiting; Nan, Jiale; Zhang, Weining. JMIR Med Inform, Original Paper. JMIR Publications 2022-04-21 /pmc/articles/PMC9073616/ /pubmed/35451969 http://dx.doi.org/10.2196/35606 Text en ©Yuanyuan Sun, Dongping Gao, Xifeng Shen, Meiting Li, Jiale Nan, Weining Zhang. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 21.04.2022.
https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included. |
spellingShingle | Original Paper; Sun, Yuanyuan; Gao, Dongping; Shen, Xifeng; Li, Meiting; Nan, Jiale; Zhang, Weining; Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN (Robustly Optimized Bidirectional Encoder Representations From Transformers Pretraining Approach With Whole Word Masking Extended Combining a Convolutional Neural Network) Model: Named Entity Study |
title | Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN (Robustly Optimized Bidirectional Encoder Representations From Transformers Pretraining Approach With Whole Word Masking Extended Combining a Convolutional Neural Network) Model: Named Entity Study |
title_full | Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN (Robustly Optimized Bidirectional Encoder Representations From Transformers Pretraining Approach With Whole Word Masking Extended Combining a Convolutional Neural Network) Model: Named Entity Study |
title_fullStr | Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN (Robustly Optimized Bidirectional Encoder Representations From Transformers Pretraining Approach With Whole Word Masking Extended Combining a Convolutional Neural Network) Model: Named Entity Study |
title_full_unstemmed | Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN (Robustly Optimized Bidirectional Encoder Representations From Transformers Pretraining Approach With Whole Word Masking Extended Combining a Convolutional Neural Network) Model: Named Entity Study |
title_short | Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN (Robustly Optimized Bidirectional Encoder Representations From Transformers Pretraining Approach With Whole Word Masking Extended Combining a Convolutional Neural Network) Model: Named Entity Study |
title_sort | multi-label classification in patient-doctor dialogues with the roberta-wwm-ext + cnn (robustly optimized bidirectional encoder representations from transformers pretraining approach with whole word masking extended combining a convolutional neural network) model: named entity study |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9073616/ https://www.ncbi.nlm.nih.gov/pubmed/35451969 http://dx.doi.org/10.2196/35606 |
work_keys_str_mv | AT sunyuanyuan multilabelclassificationinpatientdoctordialogueswiththerobertawwmextcnnrobustlyoptimizedbidirectionalencoderrepresentationsfromtransformerspretrainingapproachwithwholewordmaskingextendedcombiningaconvolutionalneuralnetworkmodelnamedentitystudy AT gaodongping multilabelclassificationinpatientdoctordialogueswiththerobertawwmextcnnrobustlyoptimizedbidirectionalencoderrepresentationsfromtransformerspretrainingapproachwithwholewordmaskingextendedcombiningaconvolutionalneuralnetworkmodelnamedentitystudy AT shenxifeng multilabelclassificationinpatientdoctordialogueswiththerobertawwmextcnnrobustlyoptimizedbidirectionalencoderrepresentationsfromtransformerspretrainingapproachwithwholewordmaskingextendedcombiningaconvolutionalneuralnetworkmodelnamedentitystudy AT limeiting multilabelclassificationinpatientdoctordialogueswiththerobertawwmextcnnrobustlyoptimizedbidirectionalencoderrepresentationsfromtransformerspretrainingapproachwithwholewordmaskingextendedcombiningaconvolutionalneuralnetworkmodelnamedentitystudy AT nanjiale multilabelclassificationinpatientdoctordialogueswiththerobertawwmextcnnrobustlyoptimizedbidirectionalencoderrepresentationsfromtransformerspretrainingapproachwithwholewordmaskingextendedcombiningaconvolutionalneuralnetworkmodelnamedentitystudy AT zhangweining multilabelclassificationinpatientdoctordialogueswiththerobertawwmextcnnrobustlyoptimizedbidirectionalencoderrepresentationsfromtransformerspretrainingapproachwithwholewordmaskingextendedcombiningaconvolutionalneuralnetworkmodelnamedentitystudy |