
Multimodal learning for fetal distress diagnosis using a multimodal medical information fusion framework

Cardiotocography (CTG) monitoring is an important medical diagnostic tool for fetal well-being evaluation in late pregnancy. In this regard, intelligent CTG classification based on Fetal Heart Rate (FHR) signals is a challenging research area that can assist obstetricians in making clinical decisions, thereby improving the efficiency and accuracy of pregnancy management. Most existing methods focus on one specific modality, that is, they only detect one type of modality and inevitably have limitations such as incomplete or redundant source-domain feature extraction and poor repeatability. This study focuses on modeling multimodal learning for Fetal Distress Diagnosis (FDD); however, three major challenges exist: unaligned multimodalities; failure to learn and fuse the causality and inclusion relations between multimodal biomedical data; and modality sensitivity, that is, difficulty in performing a task when a modality is absent. To address these three issues, we propose a Multimodal Medical Information Fusion framework named MMIF, in which a Category Constrained-Parallel ViT model (CCPViT) is first proposed to explore multimodal learning tasks and address the misalignment between multimodalities. Based on CCPViT, a cross-attention-based image-text joint component is introduced to establish a Multimodal Representation Alignment Network model (MRAN), which explores the deep-level interactive representation between cross-modal data and assists multimodal learning. Furthermore, we designed a simple-structured FDD test model based on the highly modality-aligned MMIF, realizing task delegation from multimodal model training (image and text) to unimodal pathological diagnosis (image). Extensive experiments, including model parameter sensitivity analysis, cross-modal alignment assessment, and pathological diagnostic accuracy evaluation, were conducted to show our models' superior performance and effectiveness.
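To make the fusion idea concrete: the abstract describes a cross-attention-based image-text joint component (MRAN) in which the two modalities query each other to build aligned representations. Below is a minimal sketch of such a bidirectional cross-attention block in PyTorch. The class name CrossModalFusion, the embedding width, head count, and token counts are illustrative assumptions for exposition only; the paper's actual CCPViT/MRAN design is not reproduced here.

import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    # Hypothetical bidirectional cross-attention block: image tokens attend
    # over text tokens and vice versa, with residual connections + LayerNorm.
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.img_to_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_to_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_img = nn.LayerNorm(dim)
        self.norm_txt = nn.LayerNorm(dim)

    def forward(self, img_tokens, txt_tokens):
        # Image queries attend over text keys/values, and symmetrically.
        img_attn, _ = self.img_to_txt(img_tokens, txt_tokens, txt_tokens)
        txt_attn, _ = self.txt_to_img(txt_tokens, img_tokens, img_tokens)
        return (self.norm_img(img_tokens + img_attn),
                self.norm_txt(txt_tokens + txt_attn))

# Toy usage: a batch of 4 samples with 196 image-patch embeddings (e.g., from
# a ViT over an FHR-derived image) and 32 clinical-text token embeddings.
fusion = CrossModalFusion()
img = torch.randn(4, 196, 256)
txt = torch.randn(4, 32, 256)
fused_img, fused_txt = fusion(img, txt)

A note on the "modality sensitivity" point: because each branch keeps its own token stream, the image branch can in principle run alone at test time, which mirrors the abstract's delegation from multimodal training to image-only diagnosis.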


Bibliographic Details
Main Authors: Zhang, Yefei; Deng, Yanjun; Zhou, Zhixin; Zhang, Xianfei; Jiao, Pengfei; Zhao, Zhidong
Format: Online Article (Text)
Language: English
Published: Frontiers in Physiology (Frontiers Media S.A.), 2022-11-07
Subjects: Physiology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9676934/
https://www.ncbi.nlm.nih.gov/pubmed/36419838
http://dx.doi.org/10.3389/fphys.2022.1021400
Rights: Copyright © 2022 Zhang, Deng, Zhou, Zhang, Jiao and Zhao. Open access under the Creative Commons Attribution License (CC BY); see https://creativecommons.org/licenses/by/4.0/