
Are GRU Cells More Specific and LSTM Cells More Sensitive in Motive Classification of Text?

In the Thematic Apperception Test, a picture story exercise (TAT/PSE; Heckhausen, 1963), it is assumed that unconscious motives can be detected in the stories people tell about the pictures shown in the test. These stories are therefore coded by trained experts according to evaluation rules. We tried to automate this coding and used a recurrent neural network (RNN) because of the sequential input data. There are two cell types designed to improve recurrent neural networks with respect to long-term dependencies in sequential input data: long short-term memory cells (LSTMs) and gated recurrent units (GRUs). Some results indicate that GRUs can outperform LSTMs; others show the opposite, so the question remains when to use GRU or LSTM cells. The results (N = 18,000 samples, 10-fold cross-validated) show that GRUs outperform LSTMs (accuracy = .85 vs. .82) for overall motive coding. Further analysis showed that GRUs have higher specificity (true negative rate) and learn less prevalent content better, whereas LSTMs have higher sensitivity (true positive rate) and learn highly prevalent content better. A closer look at a picture × category matrix reveals that LSTMs outperform GRUs only where deep context understanding is important. As neither technique presents a clear advantage over the other in the domain investigated here, an interesting topic for future work is to develop a method that combines their strengths.
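This record does not include the authors' implementation. As a rough illustration of the kind of comparison the abstract describes, here is a minimal, hypothetical sketch in Keras: the architecture, vocabulary size, and hyperparameters are assumptions, not details from the paper; only the GRU-vs-LSTM swap, the 10-fold cross-validation, and the sensitivity/specificity definitions follow the abstract.

```python
# Hypothetical sketch only: architecture and hyperparameters are assumed,
# not taken from Gruber & Jockisch (2020).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.model_selection import KFold

def build_classifier(cell="gru", vocab_size=20000, embed_dim=128, units=64):
    """Small recurrent text classifier; only the cell type differs."""
    rnn = layers.GRU(units) if cell == "gru" else layers.LSTM(units)
    model = models.Sequential([
        layers.Embedding(vocab_size, embed_dim),  # token ids -> dense vectors
        rnn,                                      # GRU or LSTM over the sequence
        layers.Dense(1, activation="sigmoid"),    # motive present / absent
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true positive rate) and specificity (true negative rate)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

def cross_validate(X, y, cell="gru", folds=10, epochs=3):
    """10-fold CV over padded token sequences X and binary labels y."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=folds, shuffle=True,
                                     random_state=0).split(X):
        model = build_classifier(cell)
        model.fit(X[train_idx], y[train_idx], epochs=epochs, verbose=0)
        pred = (model.predict(X[test_idx], verbose=0).ravel() >= 0.5).astype(int)
        scores.append(sensitivity_specificity(y[test_idx], pred))
    return np.mean(scores, axis=0)  # mean (sensitivity, specificity)
```

Under these assumptions, running cross_validate(X, y, cell="gru") and cross_validate(X, y, cell="lstm") on the same data would reproduce the style of comparison reported in the abstract, though not the paper's actual numbers.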

Bibliographic Details

Main Authors: Gruber, Nicole; Jockisch, Alfred
Format: Online Article (Text)
Language: English
Journal: Frontiers in Artificial Intelligence (Front Artif Intell)
Published: Frontiers Media S.A., 30 June 2020
Subjects: Artificial Intelligence
License: Copyright © 2020 Gruber and Jockisch. Open access under the Creative Commons Attribution License (CC BY).
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7861254/
https://www.ncbi.nlm.nih.gov/pubmed/33733157
http://dx.doi.org/10.3389/frai.2020.00040