Multimodal sensor fusion in the latent representation space
A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process. In the first stage, a multimodal generative model is constructed from unlabelled training data. In the second stage, the generative model serves as a reconstruction prior and the search manifold for...
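To make the two-stage idea in the abstract concrete, here is a minimal, hypothetical PyTorch sketch. It assumes a VAE-style generative model, which the paper's actual architecture may or may not match; the names `MultimodalVAE`, `train_step`, and `fuse_in_latent` are illustrative inventions, not the authors' code. Stage 1 fits the model on unlabelled paired data; stage 2 freezes it and searches the latent space for the code that best reconstructs whichever modalities were observed.

```python
# Hypothetical sketch of two-stage fusion via a latent generative model.
# Stage 1: train a multimodal VAE on unlabelled data.
# Stage 2: use the frozen decoder as a reconstruction prior and optimise
# a latent code to explain the observed modalities (imputing missing ones).
import torch
import torch.nn as nn

class MultimodalVAE(nn.Module):
    """Toy two-modality VAE; linear encoder/decoders are placeholders."""
    def __init__(self, dim_a, dim_b, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(dim_a + dim_b, 2 * latent_dim)  # -> (mu, logvar)
        self.dec_a = nn.Linear(latent_dim, dim_a)  # per-modality decoders
        self.dec_b = nn.Linear(latent_dim, dim_b)
        self.latent_dim = latent_dim

    def encode(self, xa, xb):
        mu, logvar = self.enc(torch.cat([xa, xb], dim=-1)).chunk(2, dim=-1)
        return mu, logvar

    def decode(self, z):
        return self.dec_a(z), self.dec_b(z)

def train_step(model, xa, xb, opt):
    """Stage 1: one ELBO-style step on unlabelled paired samples."""
    mu, logvar = model.encode(xa, xb)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
    ra, rb = model.decode(z)
    recon = ((ra - xa) ** 2).mean() + ((rb - xb) ** 2).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    loss = recon + kl
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def fuse_in_latent(model, xa=None, xb=None, steps=200, lr=1e-2):
    """Stage 2: gradient search over the latent manifold, using the
    trained decoder as a reconstruction prior; missing modalities are
    imputed by decoding the optimised code."""
    for p in model.parameters():          # freeze the generative model
        p.requires_grad_(False)
    z = torch.zeros(1, model.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        ra, rb = model.decode(z)
        loss = z.pow(2).mean()            # standard-normal prior on z
        if xa is not None:
            loss = loss + ((ra - xa) ** 2).mean()
        if xb is not None:
            loss = loss + ((rb - xb) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return model.decode(z)            # fused / imputed reconstructions
```

Passing only one modality (e.g. `fuse_in_latent(model, xa=sensor_a)`) illustrates the appeal of fusing in latent space: the same optimisation handles full fusion and missing-modality imputation without retraining.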
Main Authors: Piechocki, Robert J.; Wang, Xiaoyang; Bocus, Mohammud J.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9898225/
https://www.ncbi.nlm.nih.gov/pubmed/36737463
http://dx.doi.org/10.1038/s41598-022-24754-w
Similar Items
- A comprehensive ultra-wideband dataset for non-cooperative contextual sensing
  by: Bocus, Mohammud J., et al.
  Published: (2022)
- OPERAnet, a multimodal activity recognition dataset acquired from radio frequency and vision-based sensors
  by: Bocus, Mohammud J., et al.
  Published: (2022)
- Multimodal Medical Image Fusion Based on Multiple Latent Low-Rank Representation
  by: Lou, Xi-Cheng, et al.
  Published: (2021)
- Predicting chemical ecotoxicity by learning latent space chemical representations
  by: Gao, Feng, et al.
  Published: (2022)
- Learning Latent Space Representations to Predict Patient Outcomes: Model Development and Validation
  by: Rongali, Subendhu, et al.
  Published: (2020)