Adoption of Transformer Neural Network to Improve the Diagnostic Performance of Oximetry for Obstructive Sleep Apnea
Main authors: |
Format: | Online Article Text
Language: | English
Published: | MDPI, 2023
Subjects: |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10536445/ https://www.ncbi.nlm.nih.gov/pubmed/37765980 http://dx.doi.org/10.3390/s23187924
Summary: | Scoring polysomnography for obstructive sleep apnea diagnosis is a laborious, long, and costly process. Machine learning approaches, such as deep neural networks, can reduce scoring time and costs. However, most methods require prior filtering and preprocessing of the raw signal. Our work presents a novel method for diagnosing obstructive sleep apnea using a transformer neural network with learnable positional encoding, which outperforms existing state-of-the-art solutions. This approach has the potential to improve the diagnostic performance of oximetry for obstructive sleep apnea and reduce the time and costs associated with traditional polysomnography. Unlike existing approaches, ours performs annotations at one-second granularity, allowing physicians to interpret the model's outcome. We tested different positional encoding designs as the first layer of the model, and the best results were achieved using a learnable positional encoding based on an autoencoder with structural novelty. We also evaluated different temporal resolutions, with granularity levels ranging from 1 to 360 s. All experiments were carried out on an independent test set from the public OSASUD dataset and showed that our approach outperforms current state-of-the-art solutions, with a satisfactory AUC of 0.89, accuracy of 0.80, and F1-score of 0.79.
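The summary's key architectural idea, a learnable positional encoding applied as the first layer over a window of one-second oximetry samples, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the array names, dimensions, and the simple additive table are assumptions, and the autoencoder-based design the authors describe is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: up to 360 one-second SpO2 samples per window,
# each sample embedded into a d_model-dimensional vector.
seq_len, d_model = 360, 64

# A learnable positional table: one trainable vector per time step.
# (In training, these values would be updated by backpropagation.)
pos_table = rng.normal(scale=0.02, size=(seq_len, d_model))

def add_positional_encoding(x: np.ndarray) -> np.ndarray:
    """Add the learned position vectors to a batch of embeddings.

    x: (batch, seq_len, d_model) sample embeddings.
    Returns the same shape with position information added, ready
    to feed into the transformer encoder stack.
    """
    return x + pos_table[: x.shape[1]]

# Example: a batch of 2 windows of 360 embedded samples.
x = rng.normal(size=(2, seq_len, d_model))
y = add_positional_encoding(x)
```

Because each position's vector is a free parameter rather than a fixed sinusoid, the model can learn time-step patterns specific to oximetry signals at the chosen granularity.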