Extracting the Auditory Attention in a Dual-Speaker Scenario From EEG Using a Joint CNN-LSTM Model
Human brain performs remarkably well in segregating a particular speaker from interfering ones in a multispeaker scenario. We can quantitatively evaluate the segregation capability by modeling a relationship between the speech signals present in an auditory scene, and the listener's cortical si...
Main Authors: Kuruvila, Ivine; Muncke, Jan; Fischer, Eghart; Hoppe, Ulrich
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2021
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8365753/
https://www.ncbi.nlm.nih.gov/pubmed/34408661
http://dx.doi.org/10.3389/fphys.2021.700655
Similar Items
- Prediction of Speech Intelligibility by Means of EEG Responses to Sentences in Noise
  by: Muncke, Jan, et al.
  Published: (2022)
- EEG-Based Intersubject Correlations Reflect Selective Attention in a Competing Speaker Scenario
  by: Rosenkranz, Marc, et al.
  Published: (2021)
- Automatic Diagnosis of Schizophrenia in EEG Signals Using CNN-LSTM Models
  by: Shoeibi, Afshin, et al.
  Published: (2021)
- EEG-based emotion recognition using hybrid CNN and LSTM classification
  by: Chakravarthi, Bhuvaneshwari, et al.
  Published: (2022)
- Attention Based CNN-ConvLSTM for Pedestrian Attribute Recognition
  by: Li, Yang, et al.
  Published: (2020)