MedFuseNet: An attention-based multimodal deep learning model for visual question answering in the medical domain
Medical images are difficult to comprehend for a person without expertise. Medical practitioners, who are scarce across the globe, often face physical and mental fatigue due to the high number of cases, inducing human errors during diagnosis. In such scenarios, having an additional o...
Main Authors: Sharma, Dhruv; Purushotham, Sanjay; Reddy, Chandan K.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2021
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8494920/
https://www.ncbi.nlm.nih.gov/pubmed/34615894
http://dx.doi.org/10.1038/s41598-021-98390-1
Similar Items
- Adversarial Learning with Bidirectional Attention for Visual Question Answering
  by: Li, Qifeng, et al.
  Published: (2021)
- Net Improvement of Correct Answers to Therapy Questions After PubMed Searches: Pre/Post Comparison
  by: McKibbon, Kathleen Ann, et al.
  Published: (2013)
- An Effective Dense Co-Attention Networks for Visual Question Answering
  by: He, Shirong, et al.
  Published: (2020)
- Deep Modular Bilinear Attention Network for Visual Question Answering
  by: Yan, Feng, et al.
  Published: (2022)
- Multi-Modal Explicit Sparse Attention Networks for Visual Question Answering
  by: Guo, Zihan, et al.
  Published: (2020)