Augmenting interpretable models with large language models during training
Recent large language models (LLMs), such as ChatGPT, have demonstrated remarkable prediction performance for a growing array of tasks. However, their proliferation into high-stakes domains and compute-limited settings has created a burgeoning need for interpretability and efficiency. We address thi...
Main authors: Singh, Chandan; Askari, Armin; Caruana, Rich; Gao, Jianfeng
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10689442/ https://www.ncbi.nlm.nih.gov/pubmed/38036543 http://dx.doi.org/10.1038/s41467-023-43713-1
Similar items
- Fine-tuning large neural language models for biomedical natural language processing
  by: Tinn, Robert, et al.
  Published: (2023)
- Zero-shot interpretable phenotyping of postpartum hemorrhage using large language models
  by: Alsentzer, Emily, et al.
  Published: (2023)
- GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information
  by: Jin, Qiao, et al.
  Published: (2023)
- Almanac: Retrieval-Augmented Language Models for Clinical Medicine
  by: Zakka, Cyril, et al.
  Published: (2023)
- Development of a Liver Disease-Specific Large Language Model Chat Interface using Retrieval Augmented Generation
  by: Ge, Jin, et al.
  Published: (2023)