
Medical image captioning via generative pretrained transformers


Bibliographic Details
Main Authors: Selivanov, Alexander, Rogov, Oleg Y., Chesakov, Daniil, Shelmanov, Artem, Fedulova, Irina, Dylov, Dmitry V.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10010644/
https://www.ncbi.nlm.nih.gov/pubmed/36914733
http://dx.doi.org/10.1038/s41598-023-31223-5
Description
Summary: The proposed model for automatic clinical image caption generation combines the analysis of radiological scans with structured patient information from textual records. It uses two language models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records. The generated textual summary contains essential information about the pathologies found and their locations, along with 2D heatmaps that localize each pathology on the scans. The model has been tested on two medical datasets, Open-I and MIMIC-CXR, as well as the general-purpose MS-COCO, and results measured with natural language assessment metrics demonstrate its effectiveness for chest X-ray image captioning.
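The summary mentions that generated reports were scored with natural language assessment metrics, but does not list the exact metric suite. As a minimal, hedged illustration of that kind of evaluation, the sketch below computes corpus-level BLEU between hypothetical generated reports and hypothetical reference reports using NLTK; the example texts are invented for illustration and are not taken from the paper or its datasets.

```python
# Minimal sketch: scoring generated radiology reports against references with
# corpus-level BLEU, one common natural language assessment metric.
# All report strings below are hypothetical placeholders.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Each hypothesis is a tokenized generated report; each entry in `references`
# is a list of tokenized ground-truth reports for the same study.
hypotheses = [
    "no acute cardiopulmonary abnormality".split(),
    "mild cardiomegaly without pleural effusion".split(),
]
references = [
    ["no acute cardiopulmonary process".split()],
    ["mild cardiomegaly no pleural effusion or pneumothorax".split()],
]

# Smoothing avoids zero scores when short reports lack higher-order n-gram overlap.
smooth = SmoothingFunction().method1
score = corpus_bleu(
    references,
    hypotheses,
    weights=(0.25, 0.25, 0.25, 0.25),  # standard BLEU-4 weighting
    smoothing_function=smooth,
)
print(f"BLEU-4: {score:.3f}")
```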