Applying Attention-Based Models for Detecting Cognitive Processes and Mental Health Conditions
According to the psychological literature, implicit motives allow for the characterization of behavior, subsequent success, and long-term development. Contrary to personality traits, implicit motives are often deemed to be rather stable personality characteristics. Normally, implicit motives are obt...
Main Authors: | Villatoro-Tello, Esaú, Parida, Shantipriya, Kumar, Sajit, Motlicek, Petr |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer US 2021 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8286051/ https://www.ncbi.nlm.nih.gov/pubmed/34306241 http://dx.doi.org/10.1007/s12559-021-09901-1 |
_version_ | 1783723664461529088 |
---|---|
author | Villatoro-Tello, Esaú Parida, Shantipriya Kumar, Sajit Motlicek, Petr |
author_facet | Villatoro-Tello, Esaú Parida, Shantipriya Kumar, Sajit Motlicek, Petr |
author_sort | Villatoro-Tello, Esaú |
collection | PubMed |
description | According to the psychological literature, implicit motives allow for the characterization of behavior, subsequent success, and long-term development. In contrast to personality traits, implicit motives are deemed to be rather stable personality characteristics. Normally, implicit motives are obtained through operant motives, unconscious intrinsic desires measured by the Operant Motive Test (OMT). The OMT requires participants to freely write descriptions associated with a set of provided images and questions. In this work, we explore recent machine learning techniques and various text representation techniques for addressing the OMT classification task. We focus on advanced language representations (e.g., BERT, XLM, and DistilBERT) and deep supervised autoencoders for solving the OMT task. We performed an exhaustive analysis and compared their performance against fully connected neural networks and traditional support vector classifiers. Our comparative study highlights the importance of BERT, which outperforms the traditional machine learning techniques by a relative improvement of 7.9%. In addition, we analyzed how the BERT attention mechanism is modified during fine-tuning. Our findings indicate that writing-style features acquire higher importance when accurately identifying the different OMT categories. This is the first study to determine the performance of different transformer-based architectures on the OMT task. Similarly, our work proposes, for the first time, the use of deep supervised autoencoders for the OMT classification task. Our experiments demonstrate that transformer-based methods exhibit the best empirical results, obtaining a relative improvement of 7.9% over the competitive baseline suggested as part of the GermEval 2020 challenge. Additionally, we show that features associated with writing style are more important than content-based words. Some of these findings show strong connections to previously reported behavioral research on implicit psychometrics theory. |
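The attention analysis described in the abstract rests on the scaled dot-product attention used by BERT-style models. As a minimal sketch (the toy dimensions, random embeddings, and single-head setup are assumptions for illustration, not the paper's actual model), the following NumPy snippet computes self-attention and exposes the attention-weight matrix that such analyses inspect:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head scaled dot-product attention.

    Returns the attended output and the attention-weight matrix;
    each row of the weights is a distribution over input tokens,
    the quantity inspected when asking which tokens a model attends to.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq, seq) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
X = rng.normal(size=(seq_len, d_model))  # toy token embeddings (assumed)
out, attn = scaled_dot_product_attention(X, X, X)  # self-attention: Q = K = V
```

In a real BERT fine-tuning setup one would read the per-layer, per-head analogues of `attn` from the model rather than compute them by hand; this sketch only shows the quantity being analyzed.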
format | Online Article Text |
id | pubmed-8286051 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Springer US |
record_format | MEDLINE/PubMed |
spelling | pubmed-82860512021-07-19 Applying Attention-Based Models for Detecting Cognitive Processes and Mental Health Conditions Villatoro-Tello, Esaú Parida, Shantipriya Kumar, Sajit Motlicek, Petr Cognit Comput Article Springer US 2021-07-17 2021 /pmc/articles/PMC8286051/ /pubmed/34306241 http://dx.doi.org/10.1007/s12559-021-09901-1 Text en © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021 This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
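The deep supervised autoencoders mentioned in this record combine a reconstruction objective with a supervised classification objective on the latent code. As a hedged illustration (the linear encoder/decoder, toy data, dimensions, learning rate, and loss weighting below are all assumptions, not the paper's configuration), a minimal NumPy sketch of that joint objective:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for document feature vectors with binary labels (assumed data).
n, d_in, d_hid = 64, 10, 4
X = rng.normal(size=(n, d_in))
y = (X[:, 0] > 0).astype(float)

W_enc = rng.normal(scale=0.1, size=(d_in, d_hid))  # encoder
W_dec = rng.normal(scale=0.1, size=(d_hid, d_in))  # decoder
w_clf = rng.normal(scale=0.1, size=d_hid)          # classifier head on the code
lam, lr = 1.0, 0.05  # loss weighting and step size (assumed values)

def joint_loss():
    """Supervised-autoencoder objective: reconstruction + lam * cross-entropy."""
    H = X @ W_enc
    X_hat = H @ W_dec
    p = 1 / (1 + np.exp(-(H @ w_clf)))
    rec = np.mean((X - X_hat) ** 2)
    ce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return rec + lam * ce

loss_before = joint_loss()
for _ in range(200):
    H = X @ W_enc
    X_hat = H @ W_dec
    p = 1 / (1 + np.exp(-(H @ w_clf)))
    # Gradients of the joint objective with respect to each parameter block.
    dX_hat = 2 * (X_hat - X) / (n * d_in)
    dlogit = lam * (p - y) / n
    dW_dec = H.T @ dX_hat
    dw_clf = H.T @ dlogit
    dH = dX_hat @ W_dec.T + np.outer(dlogit, w_clf)
    dW_enc = X.T @ dH
    W_enc -= lr * dW_enc
    W_dec -= lr * dW_dec
    w_clf -= lr * dw_clf
loss_after = joint_loss()
```

The design point is that the latent code `H` is shaped by both losses at once, so it must stay informative about the input while also being discriminative for the labels, which is the property the record's classification setup relies on.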
spellingShingle | Article Villatoro-Tello, Esaú Parida, Shantipriya Kumar, Sajit Motlicek, Petr Applying Attention-Based Models for Detecting Cognitive Processes and Mental Health Conditions |
title | Applying Attention-Based Models for Detecting Cognitive Processes and Mental Health Conditions |
title_full | Applying Attention-Based Models for Detecting Cognitive Processes and Mental Health Conditions |
title_fullStr | Applying Attention-Based Models for Detecting Cognitive Processes and Mental Health Conditions |
title_full_unstemmed | Applying Attention-Based Models for Detecting Cognitive Processes and Mental Health Conditions |
title_short | Applying Attention-Based Models for Detecting Cognitive Processes and Mental Health Conditions |
title_sort | applying attention-based models for detecting cognitive processes and mental health conditions |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8286051/ https://www.ncbi.nlm.nih.gov/pubmed/34306241 http://dx.doi.org/10.1007/s12559-021-09901-1 |
work_keys_str_mv | AT villatorotelloesau applyingattentionbasedmodelsfordetectingcognitiveprocessesandmentalhealthconditions AT paridashantipriya applyingattentionbasedmodelsfordetectingcognitiveprocessesandmentalhealthconditions AT kumarsajit applyingattentionbasedmodelsfordetectingcognitiveprocessesandmentalhealthconditions AT motlicekpetr applyingattentionbasedmodelsfordetectingcognitiveprocessesandmentalhealthconditions |