
A novel approach to attention mechanism using kernel functions: Kerformer

Bibliographic Details
Main Authors: Gan, Yao, Fu, Yanyun, Wang, Deyong, Li, Yongming
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10483395/
https://www.ncbi.nlm.nih.gov/pubmed/37692884
http://dx.doi.org/10.3389/fnbot.2023.1214203
_version_ 1785102371795238912
author Gan, Yao
Fu, Yanyun
Wang, Deyong
Li, Yongming
author_facet Gan, Yao
Fu, Yanyun
Wang, Deyong
Li, Yongming
author_sort Gan, Yao
collection PubMed
description Artificial Intelligence (AI) is driving advancements across various fields by simulating and enhancing human intelligence. In Natural Language Processing (NLP), transformer models like the Kerformer, a linear transformer based on a kernel approach, have garnered success. However, traditional attention mechanisms in these models have quadratic calculation costs linked to input sequence lengths, hampering efficiency in tasks with extended orders. To tackle this, Kerformer introduces a nonlinear reweighting mechanism, transforming maximum attention into feature-based dot product attention. By exploiting the non-negativity and non-linear weighting traits of softmax computation, separate non-negativity operations for Query (Q) and Key (K) computations are performed. The inclusion of the SE Block further enhances model performance. Kerformer significantly reduces attention matrix time complexity from O(N²) to O(N), with N representing sequence length. This transformation results in remarkable efficiency and scalability gains, especially for prolonged tasks. Experimental results demonstrate Kerformer's superiority in terms of time and memory consumption, yielding higher average accuracy (83.39%) in NLP and vision tasks. In tasks with long sequences, Kerformer achieves an average accuracy of 58.94% and exhibits superior efficiency and convergence speed in visual tasks. This model thus offers a promising solution to the limitations posed by conventional attention mechanisms in handling lengthy tasks.
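
The core idea in the abstract is that applying a non-negative feature map separately to Q and K lets the matrix products be reassociated, so attention is computed without ever forming the N × N matrix. The sketch below illustrates only that reassociation in plain NumPy; the elu(x)+1 feature map, the function names, and the omission of the paper's SE Block and nonlinear reweighting details are assumptions for illustration, not the authors' implementation.

import numpy as np

def phi(x):
    # Non-negative feature map (assumption: elu(x)+1 stands in for the
    # paper's own non-negativity operation on Q and K).
    return np.where(x > 0, x + 1.0, np.exp(x))

def kernel_linear_attention(Q, K, V, eps=1e-6):
    # Approximates Attention(Q,K,V) as phi(Q)[phi(K)^T V] / (phi(Q)[phi(K)^T 1]).
    # Q, K: (N, d); V: (N, d_v); returns (N, d_v).
    # The bracketed terms are (d, d_v) and (d,), so cost scales with N, not N^2.
    Qp, Kp = phi(Q), phi(K)          # (N, d) each, all entries >= 0
    kv = Kp.T @ V                    # (d, d_v): key-value summary computed once
    z = Kp.sum(axis=0)               # (d,): normaliser replacing the softmax denominator
    return (Qp @ kv) / (Qp @ z[:, None] + eps)

# Tiny usage example
rng = np.random.default_rng(0)
N, d, d_v = 8, 4, 4
Q, K, V = rng.normal(size=(N, d)), rng.normal(size=(N, d)), rng.normal(size=(N, d_v))
out = kernel_linear_attention(Q, K, V)
print(out.shape)  # (8, 4)

Because phi(K)^T V and phi(K)^T 1 are accumulated first, the per-query work is a (d, d_v) product rather than a dot product against all N keys, which is what takes the attention cost from O(N²) down to O(N) in sequence length, as the abstract claims.
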
format Online
Article
Text
id pubmed-10483395
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10483395 2023-09-08 A novel approach to attention mechanism using kernel functions: Kerformer Gan, Yao Fu, Yanyun Wang, Deyong Li, Yongming Front Neurorobot Neuroscience Artificial Intelligence (AI) is driving advancements across various fields by simulating and enhancing human intelligence. In Natural Language Processing (NLP), transformer models like the Kerformer, a linear transformer based on a kernel approach, have garnered success. However, traditional attention mechanisms in these models have quadratic calculation costs linked to input sequence lengths, hampering efficiency in tasks with extended orders. To tackle this, Kerformer introduces a nonlinear reweighting mechanism, transforming maximum attention into feature-based dot product attention. By exploiting the non-negativity and non-linear weighting traits of softmax computation, separate non-negativity operations for Query (Q) and Key (K) computations are performed. The inclusion of the SE Block further enhances model performance. Kerformer significantly reduces attention matrix time complexity from O(N²) to O(N), with N representing sequence length. This transformation results in remarkable efficiency and scalability gains, especially for prolonged tasks. Experimental results demonstrate Kerformer's superiority in terms of time and memory consumption, yielding higher average accuracy (83.39%) in NLP and vision tasks. In tasks with long sequences, Kerformer achieves an average accuracy of 58.94% and exhibits superior efficiency and convergence speed in visual tasks. This model thus offers a promising solution to the limitations posed by conventional attention mechanisms in handling lengthy tasks. Frontiers Media S.A. 2023-08-24 /pmc/articles/PMC10483395/ /pubmed/37692884 http://dx.doi.org/10.3389/fnbot.2023.1214203 Text en Copyright © 2023 Gan, Fu, Wang and Li. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Gan, Yao
Fu, Yanyun
Wang, Deyong
Li, Yongming
A novel approach to attention mechanism using kernel functions: Kerformer
title A novel approach to attention mechanism using kernel functions: Kerformer
title_full A novel approach to attention mechanism using kernel functions: Kerformer
title_fullStr A novel approach to attention mechanism using kernel functions: Kerformer
title_full_unstemmed A novel approach to attention mechanism using kernel functions: Kerformer
title_short A novel approach to attention mechanism using kernel functions: Kerformer
title_sort novel approach to attention mechanism using kernel functions: kerformer
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10483395/
https://www.ncbi.nlm.nih.gov/pubmed/37692884
http://dx.doi.org/10.3389/fnbot.2023.1214203
work_keys_str_mv AT ganyao anovelapproachtoattentionmechanismusingkernelfunctionskerformer
AT fuyanyun anovelapproachtoattentionmechanismusingkernelfunctionskerformer
AT wangdeyong anovelapproachtoattentionmechanismusingkernelfunctionskerformer
AT liyongming anovelapproachtoattentionmechanismusingkernelfunctionskerformer
AT ganyao novelapproachtoattentionmechanismusingkernelfunctionskerformer
AT fuyanyun novelapproachtoattentionmechanismusingkernelfunctionskerformer
AT wangdeyong novelapproachtoattentionmechanismusingkernelfunctionskerformer
AT liyongming novelapproachtoattentionmechanismusingkernelfunctionskerformer