어텐션 기법 및 의료 영상에의 적용에 관한 최신 동향 (Recent Trends in Attention Techniques and Their Application to Medical Imaging)

Deep learning has recently achieved remarkable results in the field of medical imaging. However, as a deep learning network becomes deeper to improve its performance, it becomes more difficult to interpret the processes within. This can especially be a critical problem in medical fields where diagnostic decisions are directly related to a patient's survival. In order to solve this, explainable artificial intelligence techniques are being widely studied, and an attention mechanism was developed as part of this approach. In this paper, attention techniques are divided into two types: post hoc attention, which aims to analyze a network that has already been trained, and trainable attention, which further improves network performance. Detailed comparisons of each method, examples of applications in medical imaging, and future perspectives will be covered.
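
The abstract distinguishes post hoc attention, which inspects a network after it has already been trained, from trainable attention, which is learned jointly with the network and can also improve its performance. A minimal sketch of each variant, assuming PyTorch (the module and function names below are invented for illustration and are not taken from the paper):

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Trainable attention: a squeeze-and-excitation style gate whose
    per-channel weights are learned jointly with the host network."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze: one value per channel
            nn.Conv2d(channels, channels // reduction, 1),  # bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # restore channel count
            nn.Sigmoid(),                                   # excitation: weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)  # reweight feature maps by learned importance

def saliency_map(model: nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Post hoc attention: gradient-based saliency for an already-trained model.
    Nothing is learned here; we only ask where the class score is most sensitive."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]   # scalar score for one class
    score.backward()                        # d(score) / d(pixel)
    return image.grad.abs().amax(dim=1)     # per-pixel importance, shape (N, H, W)

In practice the trainable gate would sit between convolutional stages and be optimized end to end, while the saliency routine is applied to a frozen, already-trained classifier.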

Bibliographic Details
Format: Online Article Text
Language: English
Published: The Korean Society of Radiology 2020
Subjects: Deep Learning Model for Medical Imaging
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9431827/
https://www.ncbi.nlm.nih.gov/pubmed/36237722
http://dx.doi.org/10.3348/jksr.2020.0150
_version_ 1784780161777926144
collection PubMed
description Deep learning has recently achieved remarkable results in the field of medical imaging. However, as a deep learning network becomes deeper to improve its performance, it becomes more difficult to interpret the processes within. This can especially be a critical problem in medical fields where diagnostic decisions are directly related to a patient's survival. In order to solve this, explainable artificial intelligence techniques are being widely studied, and an attention mechanism was developed as part of this approach. In this paper, attention techniques are divided into two types: post hoc attention, which aims to analyze a network that has already been trained, and trainable attention, which further improves network performance. Detailed comparisons of each method, examples of applications in medical imaging, and future perspectives will be covered.
format Online
Article
Text
id pubmed-9431827
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher The Korean Society of Radiology
record_format MEDLINE/PubMed
spelling pubmed-9431827 2022-10-12 어텐션 기법 및 의료 영상에의 적용에 관한 최신 동향 Taehan Yongsang Uihakhoe Chi Deep Learning Model for Medical Imaging Deep learning has recently achieved remarkable results in the field of medical imaging. However, as a deep learning network becomes deeper to improve its performance, it becomes more difficult to interpret the processes within. This can especially be a critical problem in medical fields where diagnostic decisions are directly related to a patient's survival. In order to solve this, explainable artificial intelligence techniques are being widely studied, and an attention mechanism was developed as part of this approach. In this paper, attention techniques are divided into two types: post hoc attention, which aims to analyze a network that has already been trained, and trainable attention, which further improves network performance. Detailed comparisons of each method, examples of applications in medical imaging, and future perspectives will be covered. The Korean Society of Radiology 2020-11 2020-11-30 /pmc/articles/PMC9431827/ /pubmed/36237722 http://dx.doi.org/10.3348/jksr.2020.0150 Text en Copyright © 2020 The Korean Society of Radiology. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Deep Learning Model for Medical Imaging
어텐션 기법 및 의료 영상에의 적용에 관한 최신 동향
title 어텐션 기법 및 의료 영상에의 적용에 관한 최신 동향
title_full 어텐션 기법 및 의료 영상에의 적용에 관한 최신 동향
title_fullStr 어텐션 기법 및 의료 영상에의 적용에 관한 최신 동향
title_full_unstemmed 어텐션 기법 및 의료 영상에의 적용에 관한 최신 동향
title_short 어텐션 기법 및 의료 영상에의 적용에 관한 최신 동향
title_sort 어텐션 기법 및 의료 영상에의 적용에 관한 최신 동향
topic Deep Learning Model for Medical Imaging
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9431827/
https://www.ncbi.nlm.nih.gov/pubmed/36237722
http://dx.doi.org/10.3348/jksr.2020.0150
work_keys_str_mv AT eotensyeongibeobmichuilyoyeongsangeuijeogyongegwanhanchoesindonghyang