EnViTSA: Ensemble of Vision Transformer with SpecAugment for Acoustic Event Classification
Main Authors: | Lim, Kian Ming; Lee, Chin Poo; Lee, Zhi Yang; Alqahtani, Ali |
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10674441/ https://www.ncbi.nlm.nih.gov/pubmed/38005472 http://dx.doi.org/10.3390/s23229084 |
author | Lim, Kian Ming; Lee, Chin Poo; Lee, Zhi Yang; Alqahtani, Ali |
collection | PubMed |
description | Recent successes in deep learning have inspired researchers to apply deep neural networks to Acoustic Event Classification (AEC). While deep learning methods can train effective AEC models, they are susceptible to overfitting due to the models’ high complexity. In this paper, we introduce EnViTSA, an innovative approach that tackles key challenges in AEC. EnViTSA combines an ensemble of Vision Transformers with SpecAugment, a novel data augmentation technique, to significantly enhance AEC performance. Raw acoustic signals are transformed into Log Mel-spectrograms using Short-Time Fourier Transform, resulting in a fixed-size spectrogram representation. To address data scarcity and overfitting issues, we employ SpecAugment to generate additional training samples through time masking and frequency masking. The core of EnViTSA resides in its ensemble of pre-trained Vision Transformers, harnessing the unique strengths of the Vision Transformer architecture. This ensemble approach not only reduces inductive biases but also effectively mitigates overfitting. In this study, we evaluate the EnViTSA method on three benchmark datasets: ESC-10, ESC-50, and UrbanSound8K. The experimental results underscore the efficacy of our approach, achieving impressive accuracy scores of 93.50%, 85.85%, and 83.20% on ESC-10, ESC-50, and UrbanSound8K, respectively. EnViTSA represents a substantial advancement in AEC, demonstrating the potential of Vision Transformers and SpecAugment in the acoustic domain. |
format | Online Article Text |
id | pubmed-10674441 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
journal | Sensors (Basel) |
published | 2023-11-10 |
license | © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
title | EnViTSA: Ensemble of Vision Transformer with SpecAugment for Acoustic Event Classification |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10674441/ https://www.ncbi.nlm.nih.gov/pubmed/38005472 http://dx.doi.org/10.3390/s23229084 |
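For readers who want a concrete picture of the pipeline the abstract above outlines, here is a minimal sketch in Python. It assumes torchaudio for the log Mel-spectrogram and the SpecAugment-style time/frequency masking, and timm for the pre-trained Vision Transformers; all model variants and hyperparameters below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the pipeline the abstract describes: log Mel-spectrograms
# via STFT, SpecAugment-style masking, and an averaged ensemble of
# pre-trained Vision Transformers. Every hyperparameter here is an
# illustrative guess, not a value reported in the paper.
import torch
import torchaudio
import timm

SAMPLE_RATE = 22050  # assumed target rate for the benchmark clips

# 1) Raw waveform -> log Mel-spectrogram (the STFT happens inside MelSpectrogram).
to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=1024, hop_length=512, n_mels=128
)
to_db = torchaudio.transforms.AmplitudeToDB(stype="power")

# 2) SpecAugment: mask random frequency bands and time steps (training only).
freq_mask = torchaudio.transforms.FrequencyMasking(freq_mask_param=24)
time_mask = torchaudio.transforms.TimeMasking(time_mask_param=40)

def log_mel(waveform: torch.Tensor, augment: bool = False) -> torch.Tensor:
    spec = to_db(to_mel(waveform))      # (channels, n_mels, frames)
    if augment:
        spec = time_mask(freq_mask(spec))
    return spec

# 3) Ensemble of pre-trained ViTs; the variants and the averaged-softmax
#    fusion rule are assumptions. The heads (num_classes=50, e.g. ESC-50)
#    would be fine-tuned on the target dataset before inference.
models = [
    timm.create_model(name, pretrained=True, num_classes=50, in_chans=1).eval()
    for name in ("vit_small_patch16_224", "vit_base_patch16_224")
]

@torch.no_grad()
def ensemble_predict(spec_batch: torch.Tensor) -> torch.Tensor:
    # spec_batch: (batch, 1, 224, 224) log Mel inputs resized to ViT resolution
    probs = torch.stack([m(spec_batch).softmax(dim=-1) for m in models])
    return probs.mean(dim=0)            # (batch, num_classes)
```

In practice the spectrograms would be resized or tiled to the ViTs' 224×224 input, and averaging class probabilities is only one common fusion rule; this record gives no detail on the ensembling scheme the authors actually use.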