What does Chinese BERT learn about syntactic knowledge?
Pre-trained language models such as Bidirectional Encoder Representations from Transformers (BERT) have been applied to a wide range of natural language processing (NLP) tasks and have achieved consistently strong results. A growing body of research has investigated why BERT is so effective...
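The paper probes what syntactic knowledge a Chinese BERT encodes. As a rough illustration of the general probing setup (not the authors' own code), the sketch below extracts per-layer hidden states from a publicly available Chinese BERT checkpoint; the model name bert-base-chinese and the example sentence are assumptions for illustration only.

```python
# Minimal probing sketch (assumed setup, not from the paper): pull per-layer
# hidden states from a Chinese BERT so a small classifier can test whether a
# syntactic property (e.g., a token's POS tag) is linearly recoverable.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese", output_hidden_states=True)
model.eval()

sentence = "我喜欢自然语言处理。"  # "I like natural language processing."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple (embedding layer, layer 1, ..., layer 12),
# each tensor shaped (batch, sequence_length, hidden_size).
for layer_idx, layer in enumerate(outputs.hidden_states):
    print(layer_idx, tuple(layer.shape))

# A probe would feed these per-token vectors, layer by layer, into a small
# linear classifier trained on syntactic labels; the layer where probe
# accuracy peaks suggests where that syntactic knowledge is encoded.
```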
Main Authors: Zheng, Jianyu; Liu, Ying
Format: Online Article Text
Language: English
Published: PeerJ Inc., 2023
Online Access:
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10403162/
- https://www.ncbi.nlm.nih.gov/pubmed/37547407
- http://dx.doi.org/10.7717/peerj-cs.1478
Similar Items
- A BERT-Span model for Chinese named entity recognition in rehabilitation medicine
  by: Zhong, Jinhong, et al.
  Published: (2023)
- BERT-PAGG: a Chinese relationship extraction model fusing PAGG and entity location information
  by: Xu, Bin, et al.
  Published: (2023)
- MeaningBERT: assessing meaning preservation between sentences
  by: Beauchemin, David, et al.
  Published: (2023)
- Detecting racism and xenophobia using deep learning models on Twitter data: CNN, LSTM and BERT
  by: Benítez-Andrades, José Alberto, et al.
  Published: (2022)
- Research on sentiment classification for netizens based on the BERT-BiLSTM-TextCNN model
  by: Jiang, Xuchu, et al.
  Published: (2022)