
Few-shot pulse wave contour classification based on multi-scale feature extraction

Bibliographic Details
Main Authors: Lu, Peng, Liu, Chao, Mao, Xiaobo, Zhao, Yvping, Wang, Hanzhang, Zhang, Hongpo, Guo, Lili
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7881007/
https://www.ncbi.nlm.nih.gov/pubmed/33580107
http://dx.doi.org/10.1038/s41598-021-83134-y
Description
Summary: The annotation procedure of pulse wave contour (PWC) is expensive and time-consuming, thereby hindering the formation of large-scale datasets to match the requirements of deep learning. To obtain better results under the condition of few-shot PWC, a small-parameter unit structure and a multi-scale feature-extraction model are proposed. In the small-parameter unit structure, information of adjacent cells is transmitted through state variables. Simultaneously, a forgetting gate is used to update the information and retain the long-term dependence of PWC in the form of a unit series. The multi-scale feature-extraction model is an integrated model containing three parts. Convolution neural networks are used to extract spatial features of single-period PWC and rhythm features of multi-period PWC. Recursive neural networks are used to retain the long-term dependence features of PWC. Finally, an inference layer is used for classification through the extracted features. Classification experiments on cardiovascular diseases are performed on a photoplethysmography dataset and a continuous non-invasive blood pressure dataset. Results show that the classification accuracy of the multi-scale feature-extraction model reaches 80% and 96% on the two datasets, respectively.
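
For illustration only, below is a minimal sketch of the three-part architecture described in the summary, not the authors' released code. It assumes a PyTorch implementation; all layer sizes, kernel widths, and the exact gating form of the small-parameter unit are assumptions introduced here, with the forgetting-gate unit sketched as a GRU-like cell whose state variable is passed between cells.

```python
# Hedged sketch of the multi-scale feature-extraction model for PWC
# classification. Hyperparameters and layer choices are illustrative only.
import torch
import torch.nn as nn


class SmallParameterUnit(nn.Module):
    """Sketch of the small-parameter unit: a state variable transmitted
    between adjacent cells, updated by a single forgetting gate."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.forget_gate = nn.Linear(input_size + hidden_size, hidden_size)
        self.candidate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, h):
        z = torch.cat([x, h], dim=-1)
        f = torch.sigmoid(self.forget_gate(z))   # forgetting gate
        h_tilde = torch.tanh(self.candidate(z))  # candidate state
        return f * h + (1.0 - f) * h_tilde       # updated state variable


class MultiScalePWCModel(nn.Module):
    """Integrated model: a CNN branch for single-period spatial features,
    a CNN branch for multi-period rhythm features, a small-parameter
    recurrent unit series for long-term dependence, and an inference
    layer that classifies from the concatenated features."""

    def __init__(self, num_classes=2, hidden_size=32):
        super().__init__()
        self.single_period_cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.multi_period_cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, padding=7), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.unit = SmallParameterUnit(1, hidden_size)
        self.hidden_size = hidden_size
        self.inference = nn.Linear(16 + 16 + hidden_size, num_classes)

    def forward(self, single_period, multi_period):
        # single_period, multi_period: (batch, 1, time)
        spatial = self.single_period_cnn(single_period).squeeze(-1)
        rhythm = self.multi_period_cnn(multi_period).squeeze(-1)
        h = multi_period.new_zeros(multi_period.size(0), self.hidden_size)
        for t in range(multi_period.size(-1)):    # unit series over time
            h = self.unit(multi_period[:, :, t], h)
        return self.inference(torch.cat([spatial, rhythm, h], dim=-1))


if __name__ == "__main__":
    model = MultiScalePWCModel(num_classes=2)
    single = torch.randn(4, 1, 100)    # single-period contour segments
    multi = torch.randn(4, 1, 400)     # multi-period contour segments
    print(model(single, multi).shape)  # torch.Size([4, 2])
```

The recurrent branch keeps the parameter count small (one gate plus one candidate transform), which is consistent with the few-shot motivation in the summary, though the exact structure used by the authors may differ.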