Cross-Modality Interaction Network for Equine Activity Recognition Using Imbalanced Multi-Modal Data †
With the recent advances in deep learning, wearable sensors have increasingly been used in automated animal activity recognition. However, there are two major challenges in improving recognition performance—multi-modal feature fusion and imbalanced data modeling. In this study, to improve classifica...
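The abstract names two method-level challenges, multi-modal feature fusion and imbalanced data modeling. As a rough illustration only (not the authors' published architecture), the PyTorch sketch below shows one common way to address both: a small cross-attention module that lets two wearable-sensor modalities interact, plus a class-weighted cross-entropy loss for imbalanced activity labels. The modality names, layer sizes, and class counts are all hypothetical.

```python
# Illustrative sketch only -- NOT the paper's method. Assumes two sensor
# modalities (e.g., accelerometer and gyroscope windows) and shows a generic
# cross-modality fusion module with a class-weighted loss for imbalanced data.
import torch
import torch.nn as nn

class CrossModalFusionClassifier(nn.Module):
    def __init__(self, in_channels=3, feat_dim=64, num_classes=6):
        super().__init__()
        # One 1-D conv encoder per modality (hypothetical architecture).
        self.enc_acc = nn.Sequential(
            nn.Conv1d(in_channels, feat_dim, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.enc_gyr = nn.Sequential(
            nn.Conv1d(in_channels, feat_dim, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        # Cross-modality interaction via multi-head attention over the two
        # modality embeddings (a stand-in for an interaction module).
        self.cross_attn = nn.MultiheadAttention(embed_dim=feat_dim,
                                                num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, x_acc, x_gyr):
        # x_acc, x_gyr: (batch, channels, time)
        f_acc = self.enc_acc(x_acc).squeeze(-1)      # (batch, feat_dim)
        f_gyr = self.enc_gyr(x_gyr).squeeze(-1)
        tokens = torch.stack([f_acc, f_gyr], dim=1)  # (batch, 2, feat_dim)
        fused, _ = self.cross_attn(tokens, tokens, tokens)
        return self.head(fused.flatten(1))           # class logits

# Imbalance handling: weight each class inversely to its frequency.
class_counts = torch.tensor([5000., 1200., 300., 150., 80., 40.])  # hypothetical
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

model = CrossModalFusionClassifier()
logits = model(torch.randn(8, 3, 200), torch.randn(8, 3, 200))
loss = criterion(logits, torch.randint(0, 6, (8,)))
```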
| Main Authors: | Mao, Axiu; Huang, Endai; Gan, Haiming; Parkes, Rebecca S. V.; Xu, Weitao; Liu, Kai |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2021 |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8434387/ https://www.ncbi.nlm.nih.gov/pubmed/34502709 http://dx.doi.org/10.3390/s21175818 |
Similar Items
- FedAAR: A Novel Federated Learning Framework for Animal Activity Recognition with Wearable Sensors
  by: Mao, Axiu, et al.
  Published: (2022)
- Multi-Modality Emotion Recognition Model with GAT-Based Multi-Head Inter-Modality Attention
  by: Fu, Changzeng, et al.
  Published: (2020)
- Cross Task Modality Alignment Network for Sketch Face Recognition
  by: Guo, Yanan, et al.
  Published: (2022)
- Multi-Modal Residual Perceptron Network for Audio–Video Emotion Recognition
  by: Chang, Xin, et al.
  Published: (2021)
- Imbalanced Multi-Modal Multi-Label Learning for Subcellular Localization Prediction of Human Proteins with Both Single and Multiple Sites
  by: He, Jianjun, et al.
  Published: (2012)