Multimodal sensor fusion in the latent representation space

A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process. In the first stage, a multimodal generative model is constructed from unlabelled training data. In the second stage, the generative model serves as a reconstruction prior and the search manifold for...

Full description

Bibliographic Details
Main Authors: Piechocki, Robert J., Wang, Xiaoyang, Bocus, Mohammud J.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9898225/
https://www.ncbi.nlm.nih.gov/pubmed/36737463
http://dx.doi.org/10.1038/s41598-022-24754-w
_version_ 1784882381676609536
author Piechocki, Robert J.
Wang, Xiaoyang
Bocus, Mohammud J.
author_facet Piechocki, Robert J.
Wang, Xiaoyang
Bocus, Mohammud J.
author_sort Piechocki, Robert J.
collection PubMed
description A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process. In the first stage, a multimodal generative model is constructed from unlabelled training data. In the second stage, the generative model serves as a reconstruction prior and the search manifold for the sensor fusion tasks. The method also handles cases where observations are accessed only via subsampling, i.e. compressed sensing. We demonstrate its effectiveness and excellent performance on a range of multimodal fusion experiments, such as multisensory classification, denoising, and recovery from subsampled observations.
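The abstract describes the two-stage recipe only at a high level. As a rough illustration of how the second stage could look in practice, the sketch below assumes details not stated in this record: a multimodal VAE with one decoder per modality has already been trained on unlabelled data (stage one), observation noise is Gaussian, and subsampled modalities are described by binary masks. Names such as fuse_in_latent_space, decoders and masks are illustrative only, not the authors' API.

import torch

def fuse_in_latent_space(decoders, observations, masks, latent_dim=64,
                         steps=500, lr=1e-2):
    # Stage 2 (sketch): search the latent space for the code whose decoded
    # modalities best match the (possibly subsampled) observations.
    #   decoders:     dict of modality name -> pre-trained decoder network z -> x_hat
    #   observations: dict of modality name -> observed (subsampled/noisy) tensor
    #   masks:        dict of modality name -> binary subsampling mask
    z = torch.zeros(1, latent_dim, requires_grad=True)  # start at the prior mean
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = z.pow(2).sum()  # standard-normal prior term keeps z on the learned manifold
        for name, decode in decoders.items():
            x_hat = decode(z)  # reconstruct modality `name` from the shared latent code
            # data-fidelity term, evaluated only where the modality was actually observed
            loss = loss + ((masks[name] * (x_hat - observations[name])) ** 2).sum()
        loss.backward()
        opt.step()
    fused = z.detach()
    with torch.no_grad():
        recons = {name: decode(fused) for name, decode in decoders.items()}
    return fused, recons

In this reading, the optimised code z is the fused latent representation, and decoding it yields denoised, completed estimates of every modality, which is consistent with the denoising and subsampled-recovery experiments mentioned in the abstract.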
format Online
Article
Text
id pubmed-9898225
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-98982252023-02-05 Multimodal sensor fusion in the latent representation space Piechocki, Robert J. Wang, Xiaoyang Bocus, Mohammud J. Sci Rep Article A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process. In the first stage, a multimodal generative model is constructed from unlabelled training data. In the second stage, the generative model serves as a reconstruction prior and the search manifold for the sensor fusion tasks. The method also handles cases where observations are accessed only via subsampling, i.e. compressed sensing. We demonstrate its effectiveness and excellent performance on a range of multimodal fusion experiments, such as multisensory classification, denoising, and recovery from subsampled observations. Nature Publishing Group UK 2023-02-03 /pmc/articles/PMC9898225/ /pubmed/36737463 http://dx.doi.org/10.1038/s41598-022-24754-w Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Article
Piechocki, Robert J.
Wang, Xiaoyang
Bocus, Mohammud J.
Multimodal sensor fusion in the latent representation space
title Multimodal sensor fusion in the latent representation space
title_full Multimodal sensor fusion in the latent representation space
title_fullStr Multimodal sensor fusion in the latent representation space
title_full_unstemmed Multimodal sensor fusion in the latent representation space
title_short Multimodal sensor fusion in the latent representation space
title_sort multimodal sensor fusion in the latent representation space
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9898225/
https://www.ncbi.nlm.nih.gov/pubmed/36737463
http://dx.doi.org/10.1038/s41598-022-24754-w
work_keys_str_mv AT piechockirobertj multimodalsensorfusioninthelatentrepresentationspace
AT wangxiaoyang multimodalsensorfusioninthelatentrepresentationspace
AT bocusmohammudj multimodalsensorfusioninthelatentrepresentationspace