Medical image captioning via generative pretrained transformers
The proposed model for automatic clinical image caption generation combines the analysis of radiological scans with structured patient information from textual records. It uses two language models, Show-Attend-Tell and GPT-3, to generate comprehensive, descriptive radiology records. T...
Main authors: Selivanov, Alexander; Rogov, Oleg Y.; Chesakov, Daniil; Shelmanov, Artem; Fedulova, Irina; Dylov, Dmitry V.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10010644/
https://www.ncbi.nlm.nih.gov/pubmed/36914733
http://dx.doi.org/10.1038/s41598-023-31223-5
Similar items
- Visual-Text Reference Pretraining Model for Image Captioning
  by: Li, Pengfei, et al.
  Published: (2022)
- A Future of Smarter Digital Health Empowered by Generative Pretrained Transformer
  by: Miao, Hongyu, et al.
  Published: (2023)
- To pretrain or not? A systematic analysis of the benefits of pretraining in diabetic retinopathy
  by: Srinivasan, Vignesh, et al.
  Published: (2022)
- Deep negative volume segmentation
  by: Belikova, Kristina, et al.
  Published: (2021)
- Pretrained transformer models for predicting the withdrawal of drugs from the market
  by: Mazuz, Eyal, et al.
  Published: (2023)