Multimodal Discourse Analysis of Interactive Environment of Film Discourse Based on Deep Learning
Main Authors: | Man, Shengchong, Li, Zepeng |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Hindawi 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9451998/ https://www.ncbi.nlm.nih.gov/pubmed/36089945 http://dx.doi.org/10.1155/2022/1606926 |
_version_ | 1784784846615216128 |
---|---|
author | Man, Shengchong Li, Zepeng |
author_facet | Man, Shengchong Li, Zepeng |
author_sort | Man, Shengchong |
collection | PubMed |
description | With the advent of the information age, language is no longer the only way to construct meaning. Besides language, a variety of social symbols, such as gestures, images, music, and three-dimensional animation, are increasingly involved in the social practice of meaning construction. Traditional single-modal sentiment analysis methods rely on a single form of expression and cannot fully exploit information from multiple modalities, resulting in low sentiment classification accuracy. Deep learning technology can automatically mine emotional states in images, texts, and videos and can effectively combine information from multiple modalities. In the book Reading Images, Kress and van Leeuwen propose the first systematic and comprehensive framework for visual grammatical analysis and discuss the expression of image meaning from the perspectives of representational, interactive, and compositional meaning, in parallel with the three metafunctions in Halliday's systemic functional grammar. In the past, films were often discussed from the macro perspectives of literary criticism, film criticism, psychology, and aesthetics; multimodal analysis theory provides film researchers with a set of methods for analyzing images, music, and words at the same time. In view of these considerations, this paper adopts the perspective of social semiotics and, based on Halliday's systemic functional linguistics and Kress and van Leeuwen's "visual grammar," builds a multimodal interaction model as a tool for analyzing film discourse, drawing on appraisal theory. |
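The description mentions that deep learning can combine emotional cues from images, text, and video into one prediction. The record does not specify the fusion method, so the following is only a minimal late-fusion sketch under assumed inputs: hypothetical per-modality sentiment logits are averaged with assumed modality weights and normalized with a softmax. The modality names, weights, and scores are all illustrative, not taken from the article.

```python
import math

def softmax(logits):
    """Convert a list of logits into a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def late_fusion(modality_logits, weights):
    """Weighted average of per-modality logits, then softmax over sentiment classes."""
    n_classes = len(next(iter(modality_logits.values())))
    total_w = sum(weights[m] for m in modality_logits)
    fused = [0.0] * n_classes
    for name, logits in modality_logits.items():
        w = weights[name] / total_w  # normalize weights over present modalities
        for i, x in enumerate(logits):
            fused[i] += w * x
    return softmax(fused)

# Hypothetical per-modality logits for classes [negative, neutral, positive].
scores = {
    "text":  [0.2, 0.1, 1.5],   # text model leans positive
    "image": [0.1, 0.3, 0.9],
    "audio": [0.4, 0.2, 0.6],
}
weights = {"text": 0.5, "image": 0.3, "audio": 0.2}  # assumed modality weights
probs = late_fusion(scores, weights)
```

In practice, each modality's logits would come from its own trained network (e.g., a text encoder and an image encoder), and the fusion weights could themselves be learned rather than fixed as here.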
format | Online Article Text |
id | pubmed-9451998 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Hindawi |
record_format | MEDLINE/PubMed |
spelling | pubmed-94519982022-09-08 Multimodal Discourse Analysis of Interactive Environment of Film Discourse Based on Deep Learning Man, Shengchong Li, Zepeng J Environ Public Health Research Article With the advent of the information age, language is no longer the only way to construct meaning. Besides language, a variety of social symbols, such as gestures, images, music, and three-dimensional animation, are increasingly involved in the social practice of meaning construction. Traditional single-modal sentiment analysis methods rely on a single form of expression and cannot fully exploit information from multiple modalities, resulting in low sentiment classification accuracy. Deep learning technology can automatically mine emotional states in images, texts, and videos and can effectively combine information from multiple modalities. In the book Reading Images, Kress and van Leeuwen propose the first systematic and comprehensive framework for visual grammatical analysis and discuss the expression of image meaning from the perspectives of representational, interactive, and compositional meaning, in parallel with the three metafunctions in Halliday's systemic functional grammar. In the past, films were often discussed from the macro perspectives of literary criticism, film criticism, psychology, and aesthetics; multimodal analysis theory provides film researchers with a set of methods for analyzing images, music, and words at the same time. In view of these considerations, this paper adopts the perspective of social semiotics and, based on Halliday's systemic functional linguistics and Kress and van Leeuwen's "visual grammar," builds a multimodal interaction model as a tool for analyzing film discourse, drawing on appraisal theory. Hindawi 2022-08-31 /pmc/articles/PMC9451998/ /pubmed/36089945 http://dx.doi.org/10.1155/2022/1606926 Text en Copyright © 2022 Shengchong Man and Zepeng Li.
https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Article Man, Shengchong Li, Zepeng Multimodal Discourse Analysis of Interactive Environment of Film Discourse Based on Deep Learning |
title | Multimodal Discourse Analysis of Interactive Environment of Film Discourse Based on Deep Learning |
title_full | Multimodal Discourse Analysis of Interactive Environment of Film Discourse Based on Deep Learning |
title_fullStr | Multimodal Discourse Analysis of Interactive Environment of Film Discourse Based on Deep Learning |
title_full_unstemmed | Multimodal Discourse Analysis of Interactive Environment of Film Discourse Based on Deep Learning |
title_short | Multimodal Discourse Analysis of Interactive Environment of Film Discourse Based on Deep Learning |
title_sort | multimodal discourse analysis of interactive environment of film discourse based on deep learning |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9451998/ https://www.ncbi.nlm.nih.gov/pubmed/36089945 http://dx.doi.org/10.1155/2022/1606926 |
work_keys_str_mv | AT manshengchong multimodaldiscourseanalysisofinteractiveenvironmentoffilmdiscoursebasedondeeplearning AT lizepeng multimodaldiscourseanalysisofinteractiveenvironmentoffilmdiscoursebasedondeeplearning |