Comparison of LSTM, Transformers, and MLP-mixer neural networks for gaze based human intention prediction
Main Authors: | Pettersson, Julius; Falkman, Petter |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2023 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10248176/ https://www.ncbi.nlm.nih.gov/pubmed/37304663 http://dx.doi.org/10.3389/fnbot.2023.1157957 |
_version_ | 1785055317027979264 |
---|---|
author | Pettersson, Julius Falkman, Petter |
author_facet | Pettersson, Julius Falkman, Petter |
author_sort | Pettersson, Julius |
collection | PubMed |
description | Collaborative robots have gained popularity in industries, providing flexibility and increased productivity for complex tasks. However, their ability to interact with humans and adapt to their behavior is still limited. Prediction of human movement intentions is one way to improve the robots' adaptation. This paper investigates the performance of Transformer- and MLP-Mixer-based neural networks in predicting the intended human arm movement direction, based on gaze data obtained in a virtual reality environment, and compares the results to an LSTM network. The comparison evaluates the networks on accuracy across several metrics, time ahead of movement completion, and execution time. It is shown in the paper that there exist several network configurations and architectures that achieve comparable accuracy scores. The best-performing Transformer encoder presented in this paper achieved an accuracy of 82.74%, for predictions with high certainty, on continuous data, and correctly classifies 80.06% of the movements at least once. In 99% of the cases, the movements are correctly predicted the first time, before the hand reaches the target, and more than 19% ahead of movement completion in 75% of the cases. The results show that there are multiple ways to utilize neural networks to perform gaze-based arm movement intention prediction, and it is a promising step toward enabling efficient human-robot collaboration. |
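As a rough illustration of the kind of model the abstract describes (not the authors' implementation), one of the compared architectures, an MLP-Mixer, can be sketched as a classifier over fixed-length windows of gaze vectors. All sizes, parameter names, and the single-block depth below are assumptions chosen for brevity; weights are random, so this shows shapes and data flow only, not trained behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP with ReLU, applied row-wise.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

def mixer_block(x, params):
    # x: (time steps, channels). Token-mixing MLP acts across time,
    # channel-mixing MLP acts across features; both with skip connections.
    x = x + mlp(x.T, *params["token"]).T
    x = x + mlp(x, *params["channel"])
    return x

# Hypothetical sizes: a window of T = 50 gaze samples with F = 3 features
# (e.g. a gaze direction vector), embedded into C = 16 channels,
# classified into K = 4 movement directions.
T, F, C, K = 50, 3, 16, 4
proj = rng.normal(size=(F, C)) * 0.1          # per-time-step embedding
params = {
    "token":   (rng.normal(size=(T, 32)) * 0.1, np.zeros(32),
                rng.normal(size=(32, T)) * 0.1, np.zeros(T)),
    "channel": (rng.normal(size=(C, 32)) * 0.1, np.zeros(32),
                rng.normal(size=(32, C)) * 0.1, np.zeros(C)),
}
head = rng.normal(size=(C, K)) * 0.1          # linear classification head

gaze = rng.normal(size=(T, F))                # one window of gaze data
x = mixer_block(gaze @ proj, params)
logits = x.mean(axis=0) @ head                # global average pool over time
probs = np.exp(logits) / np.exp(logits).sum() # softmax over K directions
print(probs.shape)                            # → (4,)
```

The "high certainty" predictions mentioned in the abstract could then correspond to emitting a class only when `probs.max()` exceeds some threshold; the LSTM and Transformer-encoder variants would replace `mixer_block` while keeping the same windowed input and classification head.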
format | Online Article Text |
id | pubmed-10248176 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10248176 2023-06-09 Comparison of LSTM, Transformers, and MLP-mixer neural networks for gaze based human intention prediction Pettersson, Julius; Falkman, Petter Front Neurorobot Neuroscience Frontiers Media S.A. 2023-05-25 /pmc/articles/PMC10248176/ /pubmed/37304663 http://dx.doi.org/10.3389/fnbot.2023.1157957 Text en Copyright © 2023 Pettersson and Falkman. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Pettersson, Julius Falkman, Petter Comparison of LSTM, Transformers, and MLP-mixer neural networks for gaze based human intention prediction |
title | Comparison of LSTM, Transformers, and MLP-mixer neural networks for gaze based human intention prediction |
title_full | Comparison of LSTM, Transformers, and MLP-mixer neural networks for gaze based human intention prediction |
title_fullStr | Comparison of LSTM, Transformers, and MLP-mixer neural networks for gaze based human intention prediction |
title_full_unstemmed | Comparison of LSTM, Transformers, and MLP-mixer neural networks for gaze based human intention prediction |
title_short | Comparison of LSTM, Transformers, and MLP-mixer neural networks for gaze based human intention prediction |
title_sort | comparison of lstm, transformers, and mlp-mixer neural networks for gaze based human intention prediction |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10248176/ https://www.ncbi.nlm.nih.gov/pubmed/37304663 http://dx.doi.org/10.3389/fnbot.2023.1157957 |
work_keys_str_mv | AT petterssonjulius comparisonoflstmtransformersandmlpmixerneuralnetworksforgazebasedhumanintentionprediction AT falkmanpetter comparisonoflstmtransformersandmlpmixerneuralnetworksforgazebasedhumanintentionprediction |