
UPANets: Learning from the Universal Pixel Attention Networks


Bibliographic Details
Main Authors: Tseng, Ching-Hsun, Lee, Shin-Jye, Feng, Jianan, Mao, Shengzhong, Wu, Yu-Ping, Shang, Jia-Yu, Zeng, Xiao-Jun
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9497600/
https://www.ncbi.nlm.nih.gov/pubmed/36141129
http://dx.doi.org/10.3390/e24091243
_version_ 1784794545812144128
author Tseng, Ching-Hsun
Lee, Shin-Jye
Feng, Jianan
Mao, Shengzhong
Wu, Yu-Ping
Shang, Jia-Yu
Zeng, Xiao-Jun
author_facet Tseng, Ching-Hsun
Lee, Shin-Jye
Feng, Jianan
Mao, Shengzhong
Wu, Yu-Ping
Shang, Jia-Yu
Zeng, Xiao-Jun
author_sort Tseng, Ching-Hsun
collection PubMed
description With the success of deep learning in computer vision, building deep convolutional neural networks (CNNs) has become mainstream, thanks to the parameter sharing of convolutional layers. Stacking convolutional layers into a deep structure improves performance, but over-stacking also ramps up the GPU resources required. With the recent surge of Transformers in computer vision, this issue has become even more severe: a resource-hungry model is hard to deploy on limited hardware or a single consumer-grade GPU. This work therefore addresses these concerns and proposes an efficient yet robust backbone equipped with channel- and spatial-direction attention, which helps expand the receptive fields of shallow convolutional layers and pass that information to every layer. An attention-boosted network built on already efficient CNNs, Universal Pixel Attention Networks (UPANets), is proposed. Through a series of experiments, UPANets achieve the goal of learning global information with fewer resources and outperform many existing SOTAs on CIFAR-{10, 100}.
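The description above refers to channel- and spatial-direction attention over feature maps. As a rough, hedged illustration only (this is a generic sigmoid-gated sketch in the squeeze-and-excitation spirit, not the authors' exact UPA block; all function names and the gating choice here are assumptions), the two directions can be sketched in NumPy as:

```python
import numpy as np

def channel_attention(x):
    # x: feature map of shape (C, H, W)
    # squeeze: global average pool over spatial dims -> one statistic per channel
    squeezed = x.mean(axis=(1, 2))            # shape (C,)
    # excite: sigmoid gate per channel (hypothetical gating, for illustration)
    gate = 1.0 / (1.0 + np.exp(-squeezed))    # values in (0, 1)
    # rescale each channel by its gate
    return x * gate[:, None, None]

def spatial_attention(x):
    # x: (C, H, W); derive one weight per pixel from the channel-wise mean
    pixel_stat = x.mean(axis=0)               # shape (H, W)
    gate = 1.0 / (1.0 + np.exp(-pixel_stat))  # sigmoid per pixel
    # rescale every channel at each spatial location
    return x * gate[None, :, :]

# toy feature map: 4 channels, 8x8 spatial grid
x = np.random.default_rng(0).normal(size=(4, 8, 8))
y = spatial_attention(channel_attention(x))
print(y.shape)  # (4, 8, 8)
```

Both gates lie in (0, 1), so the combined attention only rescales the feature map without changing its shape; the paper's actual block additionally propagates this information across layers.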
format Online
Article
Text
id pubmed-9497600
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9497600 2022-09-23 UPANets: Learning from the Universal Pixel Attention Networks Tseng, Ching-Hsun Lee, Shin-Jye Feng, Jianan Mao, Shengzhong Wu, Yu-Ping Shang, Jia-Yu Zeng, Xiao-Jun Entropy (Basel) Article MDPI 2022-09-04 /pmc/articles/PMC9497600/ /pubmed/36141129 http://dx.doi.org/10.3390/e24091243 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Tseng, Ching-Hsun
Lee, Shin-Jye
Feng, Jianan
Mao, Shengzhong
Wu, Yu-Ping
Shang, Jia-Yu
Zeng, Xiao-Jun
UPANets: Learning from the Universal Pixel Attention Networks
title UPANets: Learning from the Universal Pixel Attention Networks
title_full UPANets: Learning from the Universal Pixel Attention Networks
title_fullStr UPANets: Learning from the Universal Pixel Attention Networks
title_full_unstemmed UPANets: Learning from the Universal Pixel Attention Networks
title_short UPANets: Learning from the Universal Pixel Attention Networks
title_sort upanets: learning from the universal pixel attention networks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9497600/
https://www.ncbi.nlm.nih.gov/pubmed/36141129
http://dx.doi.org/10.3390/e24091243
work_keys_str_mv AT tsengchinghsun upanetslearningfromtheuniversalpixelattentionneworks
AT leeshinjye upanetslearningfromtheuniversalpixelattentionneworks
AT fengjianan upanetslearningfromtheuniversalpixelattentionneworks
AT maoshengzhong upanetslearningfromtheuniversalpixelattentionneworks
AT wuyuping upanetslearningfromtheuniversalpixelattentionneworks
AT shangjiayu upanetslearningfromtheuniversalpixelattentionneworks
AT zengxiaojun upanetslearningfromtheuniversalpixelattentionneworks