An artificial intelligence-based classifier for musical emotion expression in media education
Music can serve as a potent tool for conveying emotions and regulating learners’ moods, while the systematic application of emotional assessment can help to improve teaching efficiency. However, existing music emotion analysis methods based on Artificial Intelligence (AI) rely primarily on pre-marked content, such as lyrics, and fail to adequately account for the perception, transmission, and recognition of music signals. To address this limitation, this study first employs sound-level segmentation, data frame processing, and threshold determination to enable intelligent segmentation and recognition of notes. Next, based on the extracted audio features, a Radial Basis Function (RBF) model is utilized to construct a music emotion classifier. Finally, correlation feedback is used to further label the classification results and train the classifier. The study compares the music emotion classification method commonly used in Chinese music education with the Hevner emotion model and identifies four emotion categories (Quiet, Happy, Sad, and Excited) for classifying performers’ emotions. The testing results demonstrate that audio feature recognition takes a mere 0.004 min, with an accuracy rate of over 95%, and that classifying performers’ emotions based on audio features is consistent with conventional human cognition.
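The abstract above describes the pipeline only at a high level (threshold-based note segmentation, hand-engineered audio features, an RBF classifier over the labels Quiet, Happy, Sad, and Excited, refined by correlation feedback). The sketch below is a minimal illustration of that kind of pipeline, not the paper’s implementation: every function name, feature choice, and numeric constant here is an assumption introduced for illustration only.

```python
# Minimal sketch, assuming a threshold-based segmenter, hand-picked note
# features, and an RBF network; none of these choices come from the paper.
import numpy as np

EMOTIONS = ["Quiet", "Happy", "Sad", "Excited"]   # labels named in the abstract


def segment_notes(signal, sr, frame_ms=25, hop_ms=10, thresh_ratio=0.1):
    """Threshold-based note segmentation: frame the mono signal, compute
    short-time energy, and return (start, end) sample ranges for contiguous
    runs of frames whose energy exceeds thresh_ratio * max energy."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n_frames = max(0, 1 + (len(signal) - frame) // hop)
    if n_frames == 0:
        return []
    energy = np.array([np.sum(signal[i * hop:i * hop + frame] ** 2)
                       for i in range(n_frames)])
    active = energy > thresh_ratio * energy.max()
    segments, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i
        elif not on and start is not None:
            segments.append((start * hop, i * hop + frame))
            start = None
    if start is not None:
        segments.append((start * hop, len(signal)))
    return segments


def note_features(signal, sr, segment):
    """Per-note descriptors (duration, RMS, zero-crossing rate, spectral
    centroid) standing in for the audio features used in the paper."""
    s = signal[segment[0]:segment[1]]
    rms = np.sqrt(np.mean(s ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(s)))) / 2
    spec = np.abs(np.fft.rfft(s))
    freqs = np.fft.rfftfreq(len(s), 1.0 / sr)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
    return np.array([len(s) / sr, rms, zcr, centroid])


class RBFClassifier:
    """RBF network: Gaussian activations around centres drawn from the
    training set, output weights fitted by least squares on one-hot targets."""

    def __init__(self, n_centers=20, gamma=1.0):
        self.n_centers, self.gamma = n_centers, gamma

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        # y holds integer class indices into EMOTIONS
        rng = np.random.default_rng(0)
        idx = rng.choice(len(X), size=min(self.n_centers, len(X)), replace=False)
        self.centers = X[idx]
        targets = np.eye(len(EMOTIONS))[y]                 # one-hot encoding
        self.W, *_ = np.linalg.lstsq(self._phi(X), targets, rcond=None)
        return self

    def predict(self, X):
        return np.argmax(self._phi(X) @ self.W, axis=1)


def feedback_round(clf, X_train, y_train, X_new, confirmed_labels):
    """Simplest reading of the abstract's correlation-feedback step:
    listener-confirmed labels for new clips are appended to the training
    pool and the classifier is refitted on the enlarged set."""
    X_aug = np.vstack([X_train, X_new])
    y_aug = np.concatenate([y_train, confirmed_labels])
    return clf.fit(X_aug, y_aug), X_aug, y_aug
```

A usage pass under these assumptions would call `segment_notes` on a mono recording, stack `note_features` per segment, fit `RBFClassifier` on labelled clips, and repeat `feedback_round` as listeners confirm or correct predictions.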
Main author: | Lian, Jue |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | PeerJ Inc., 2023 |
Subjects: | Artificial Intelligence |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10403192/ https://www.ncbi.nlm.nih.gov/pubmed/37547395 http://dx.doi.org/10.7717/peerj-cs.1472 |
_version_ | 1785085015356342272 |
---|---|
author | Lian, Jue |
author_facet | Lian, Jue |
author_sort | Lian, Jue |
collection | PubMed |
description | Music can serve as a potent tool for conveying emotions and regulating learners’ moods, while the systematic application of emotional assessment can help to improve teaching efficiency. However, existing music emotion analysis methods based on Artificial Intelligence (AI) rely primarily on pre-marked content, such as lyrics, and fail to adequately account for the perception, transmission, and recognition of music signals. To address this limitation, this study first employs sound-level segmentation, data frame processing, and threshold determination to enable intelligent segmentation and recognition of notes. Next, based on the extracted audio features, a Radial Basis Function (RBF) model is utilized to construct a music emotion classifier. Finally, correlation feedback is used to further label the classification results and train the classifier. The study compares the music emotion classification method commonly used in Chinese music education with the Hevner emotion model and identifies four emotion categories (Quiet, Happy, Sad, and Excited) for classifying performers’ emotions. The testing results demonstrate that audio feature recognition takes a mere 0.004 min, with an accuracy rate of over 95%, and that classifying performers’ emotions based on audio features is consistent with conventional human cognition. |
format | Online Article Text |
id | pubmed-10403192 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | PeerJ Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10403192 2023-08-05 An artificial intelligence-based classifier for musical emotion expression in media education Lian, Jue PeerJ Comput Sci Artificial Intelligence Music can serve as a potent tool for conveying emotions and regulating learners’ moods, while the systematic application of emotional assessment can help to improve teaching efficiency. However, existing music emotion analysis methods based on Artificial Intelligence (AI) rely primarily on pre-marked content, such as lyrics, and fail to adequately account for the perception, transmission, and recognition of music signals. To address this limitation, this study first employs sound-level segmentation, data frame processing, and threshold determination to enable intelligent segmentation and recognition of notes. Next, based on the extracted audio features, a Radial Basis Function (RBF) model is utilized to construct a music emotion classifier. Finally, correlation feedback is used to further label the classification results and train the classifier. The study compares the music emotion classification method commonly used in Chinese music education with the Hevner emotion model and identifies four emotion categories (Quiet, Happy, Sad, and Excited) for classifying performers’ emotions. The testing results demonstrate that audio feature recognition takes a mere 0.004 min, with an accuracy rate of over 95%, and that classifying performers’ emotions based on audio features is consistent with conventional human cognition. PeerJ Inc. 2023-07-14 /pmc/articles/PMC10403192/ /pubmed/37547395 http://dx.doi.org/10.7717/peerj-cs.1472 Text en © 2023 Lian https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited. |
spellingShingle | Artificial Intelligence Lian, Jue An artificial intelligence-based classifier for musical emotion expression in media education |
title | An artificial intelligence-based classifier for musical emotion expression in media education |
title_full | An artificial intelligence-based classifier for musical emotion expression in media education |
title_fullStr | An artificial intelligence-based classifier for musical emotion expression in media education |
title_full_unstemmed | An artificial intelligence-based classifier for musical emotion expression in media education |
title_short | An artificial intelligence-based classifier for musical emotion expression in media education |
title_sort | artificial intelligence-based classifier for musical emotion expression in media education |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10403192/ https://www.ncbi.nlm.nih.gov/pubmed/37547395 http://dx.doi.org/10.7717/peerj-cs.1472 |
work_keys_str_mv | AT lianjue anartificialintelligencebasedclassifierformusicalemotionexpressioninmediaeducation AT lianjue artificialintelligencebasedclassifierformusicalemotionexpressioninmediaeducation |