
Multimodal Deep Learning for Predicting Adverse Birth Outcomes Based on Early Labour Data

Cardiotocography (CTG) is a widely used technique for monitoring the fetal heart rate (FHR) during labour and assessing the health of the baby. However, visual interpretation of CTG signals is subjective and prone to error. Automated methods that mimic clinical guidelines have been developed, but they have failed to improve the detection of abnormal traces. This study aims to classify CTGs with and without severe compromise at birth, using the first 20 min of FHR recordings from routinely collected CTGs of 51,449 term births. Three 1D-CNN- and LSTM-based architectures are compared. We also transform the FHR signal into 2D images using time-frequency representation with spectrogram and scalogram analysis, and subsequently analyse the 2D images using 2D-CNNs. In the proposed multimodal architecture, the 2D-CNN and the 1D-CNN-LSTM are connected in parallel. The models are evaluated in terms of the partial area under the curve (PAUC) over the 0–10% false-positive range, and sensitivity at 95% specificity. The 1D-CNN-LSTM parallel architecture outperformed the other models, achieving a PAUC of 0.20 and a sensitivity of 20% at 95% specificity. Our future work will focus on improving classification performance by employing a larger dataset, analysing longer FHR traces, and incorporating clinical risk factors.
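The time-frequency step described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the sampling rate (4 Hz, a common CTG rate), the window parameters, and the synthetic FHR trace are all assumptions, and only the spectrogram branch (not the scalogram) is shown.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 4.0                                # assumed CTG sampling rate (Hz)
n = int(20 * 60 * fs)                   # first 20 min of FHR samples

# Synthetic FHR trace around a 140 bpm baseline (placeholder for real data)
rng = np.random.default_rng(0)
t = np.arange(n) / fs
fhr = 140 + 10 * np.sin(2 * np.pi * 0.01 * t) + rng.normal(0, 2, n)

# Short-time Fourier transform: 1D signal -> 2D (frequency x time) image
freqs, times, Sxx = spectrogram(fhr, fs=fs, nperseg=256, noverlap=128)
img = np.log1p(Sxx)                     # log scaling, common for CNN inputs
print(img.shape)                        # (129, 36): one input "channel" for a 2D-CNN
```

The resulting 2D array can then be fed to a 2D-CNN in the same way as a grayscale image; a scalogram branch would replace the STFT with a continuous wavelet transform.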


Bibliographic Details
Main Authors: Asfaw, Daniel; Jordanov, Ivan; Impey, Lawrence; Namburete, Ana; Lee, Raymond; Georgieva, Antoniya
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10294944/
https://www.ncbi.nlm.nih.gov/pubmed/37370663
http://dx.doi.org/10.3390/bioengineering10060730
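The evaluation metrics named in the abstract (partial AUC over the 0–10% false-positive range, and sensitivity at 95% specificity) can be computed as sketched below. The labels and scores are synthetic stand-ins, and note that scikit-learn's `max_fpr` option returns the McClish-standardized partial AUC, which may differ from the paper's exact definition.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic binary outcomes and model scores (placeholders for real predictions)
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 1000)
scores = y * 0.5 + rng.normal(0, 1, 1000)

# Partial AUC restricted to the 0-10% false-positive range (standardized)
pauc = roc_auc_score(y, scores, max_fpr=0.10)

# Sensitivity (TPR) at 95% specificity, i.e. at a false-positive rate <= 5%
fpr, tpr, _ = roc_curve(y, scores)
sens_at_95spec = tpr[fpr <= 0.05].max()
print(round(pauc, 3), round(sens_at_95spec, 3))
```

Reporting the low-FPR region rather than the full AUC reflects the clinical setting: only operating points with very few false alarms are usable in practice.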
Record ID: pubmed-10294944
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Bioengineering (Basel)
Published online: 2023-06-19
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).