Probing language identity encoded in pre-trained multilingual models: a typological view
Pre-trained multilingual models have been extensively used in cross-lingual information processing tasks. Existing work focuses on improving the transfer performance of pre-trained multilingual models but ignores the linguistic properties that models preserve at encoding time, i.e., "language identity"...
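As a rough illustration of what "probing" language identity means (not the paper's actual setup; the model choice, mean-pooling, and toy sentences below are assumptions), a probe typically trains a simple classifier on frozen encoder representations to test whether language identity is recoverable from them:

```python
# Minimal probing sketch: freeze a multilingual encoder, extract
# sentence embeddings, and fit a linear classifier to predict each
# sentence's language identity from the frozen representations.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

# Toy data; a real probe would use a held-out multilingual corpus.
sentences = ["The cat sleeps.", "Le chat dort.", "Die Katze schläft.",
             "A dog barks.", "Un chien aboie.", "Ein Hund bellt."]
labels = ["en", "fr", "de", "en", "fr", "de"]

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, return_tensors="pt")
    # Mean-pool the final hidden states into fixed sentence embeddings.
    hidden = model(**enc).last_hidden_state
    mask = enc["attention_mask"].unsqueeze(-1)
    embeddings = (hidden * mask).sum(1) / mask.sum(1)

# The probe itself: a simple linear classifier over frozen features.
probe = LogisticRegression(max_iter=1000).fit(embeddings.numpy(), labels)
print(probe.predict(embeddings.numpy()))
```

High probe accuracy on held-out data would suggest the encoder preserves language identity; the paper's typological analysis goes further than this sketch.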
Main Authors: Zheng, Jianyu; Liu, Ying
Format: Online Article Text
Language: English
Published: PeerJ Inc., 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9044357/ https://www.ncbi.nlm.nih.gov/pubmed/35494801 http://dx.doi.org/10.7717/peerj-cs.899
Similar Items
- Improving text mining in plant health domain with GAN and/or pre-trained language model
  by: Jiang, Shufan, et al.
  Published: (2023)
- Quantification: The View From Natural Language Generation
  by: Carstensen, Kai-Uwe
  Published: (2021)
- Attitudes Toward Multilingualism in Luxembourg. A Comparative Analysis of Online News Comments and Crowdsourced Questionnaire Data
  by: Purschke, Christoph
  Published: (2020)
- Models of Language and Multiword Expressions
  by: Contreras Kallens, Pablo, et al.
  Published: (2022)
- What does Chinese BERT learn about syntactic knowledge?
  by: Zheng, Jianyu, et al.
  Published: (2023)