PEIPNet: Parametric Efficient Image-Inpainting Network with Depthwise and Pointwise Convolution
Research on image-inpainting tasks has mainly focused on enhancing performance by augmenting various stages and modules. However, this trend does not consider the increase in the number of model parameters and operational memory, which increases the burden on computational resources. To solve this problem, we propose a Parametric Efficient Image InPainting Network (PEIPNet) for efficient and effective image-inpainting…
Main Authors: | Ko, Jaekyun; Choi, Wanuk; Lee, Sanghwan
Format: | Online Article Text
Language: | English
Published: | MDPI 2023
Subjects: | Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10575462/ https://www.ncbi.nlm.nih.gov/pubmed/37837143 http://dx.doi.org/10.3390/s23198313
_version_ | 1785120928049397760
author | Ko, Jaekyun; Choi, Wanuk; Lee, Sanghwan
author_facet | Ko, Jaekyun; Choi, Wanuk; Lee, Sanghwan
author_sort | Ko, Jaekyun
collection | PubMed
description | Research on image-inpainting tasks has mainly focused on enhancing performance by augmenting various stages and modules. However, this trend does not consider the increase in the number of model parameters and operational memory, which increases the burden on computational resources. To solve this problem, we propose a Parametric Efficient Image InPainting Network (PEIPNet) for efficient and effective image-inpainting. Unlike other state-of-the-art methods, the proposed model has a one-stage inpainting framework in which depthwise and pointwise convolutions are adopted to reduce the number of parameters and computational cost. To generate semantically appealing results, we selected three unique components: spatially-adaptive denormalization (SPADE), dense dilated convolution module (DDCM), and efficient self-attention (ESA). SPADE was adopted to conditionally normalize activations according to the mask in order to distinguish between damaged and undamaged regions. The DDCM was employed at every scale to overcome the gradient-vanishing obstacle and gradually fill in the pixels by capturing global information along the feature maps. The ESA was utilized to obtain clues from unmasked areas by extracting long-range information. In terms of efficiency, our model has the lowest operational memory compared with other state-of-the-art methods. Both qualitative and quantitative experiments demonstrate the generalized inpainting of our method on three public datasets: Paris StreetView, CelebA, and Places2. |
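The description names the parameter-saving technique (a depthwise convolution followed by a pointwise one) without showing it. Below is a minimal, hypothetical PyTorch sketch of such a depthwise-separable block; the class name, channel sizes, and hyperparameters are illustrative assumptions and are not taken from the PEIPNet implementation.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Illustrative sketch (not PEIPNet's actual block): a depthwise
    convolution (one k x k filter per channel) followed by a 1x1
    pointwise convolution that mixes channels. For C_in -> C_out this
    costs about C_in*k*k + C_in*C_out weights instead of
    C_in*C_out*k*k for a standard convolution."""

    def __init__(self, c_in, c_out, kernel_size=3, dilation=1):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2
        # groups=c_in makes the convolution depthwise
        self.depthwise = nn.Conv2d(c_in, c_in, kernel_size,
                                   padding=pad, dilation=dilation,
                                   groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

if __name__ == "__main__":
    # Parameter comparison against a standard 3x3 convolution
    sep = DepthwiseSeparableConv(64, 128)
    std = nn.Conv2d(64, 128, 3, padding=1, bias=False)
    print(sum(p.numel() for p in sep.parameters()))  # 576 + 8192 = 8768
    print(sum(p.numel() for p in std.parameters()))  # 64*128*3*3 = 73728
```

For a 3×3 kernel mapping 64 to 128 channels, the factorized block needs 8768 weights versus 73,728 for a standard convolution, roughly an 8× reduction; this is the kind of saving the abstract attributes to depthwise and pointwise convolutions. The paper's further components (SPADE, DDCM, ESA) add structure on top of such blocks and are described in the article itself.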
format | Online Article Text |
id | pubmed-10575462 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10575462 2023-10-14 PEIPNet: Parametric Efficient Image-Inpainting Network with Depthwise and Pointwise Convolution Ko, Jaekyun; Choi, Wanuk; Lee, Sanghwan Sensors (Basel) Article Research on image-inpainting tasks has mainly focused on enhancing performance by augmenting various stages and modules. However, this trend does not consider the increase in the number of model parameters and operational memory, which increases the burden on computational resources. To solve this problem, we propose a Parametric Efficient Image InPainting Network (PEIPNet) for efficient and effective image-inpainting. Unlike other state-of-the-art methods, the proposed model has a one-stage inpainting framework in which depthwise and pointwise convolutions are adopted to reduce the number of parameters and computational cost. To generate semantically appealing results, we selected three unique components: spatially-adaptive denormalization (SPADE), dense dilated convolution module (DDCM), and efficient self-attention (ESA). SPADE was adopted to conditionally normalize activations according to the mask in order to distinguish between damaged and undamaged regions. The DDCM was employed at every scale to overcome the gradient-vanishing obstacle and gradually fill in the pixels by capturing global information along the feature maps. The ESA was utilized to obtain clues from unmasked areas by extracting long-range information. In terms of efficiency, our model has the lowest operational memory compared with other state-of-the-art methods. Both qualitative and quantitative experiments demonstrate the generalized inpainting of our method on three public datasets: Paris StreetView, CelebA, and Places2. MDPI 2023-10-08 /pmc/articles/PMC10575462/ /pubmed/37837143 http://dx.doi.org/10.3390/s23198313 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article; Ko, Jaekyun; Choi, Wanuk; Lee, Sanghwan; PEIPNet: Parametric Efficient Image-Inpainting Network with Depthwise and Pointwise Convolution
title | PEIPNet: Parametric Efficient Image-Inpainting Network with Depthwise and Pointwise Convolution |
title_full | PEIPNet: Parametric Efficient Image-Inpainting Network with Depthwise and Pointwise Convolution |
title_fullStr | PEIPNet: Parametric Efficient Image-Inpainting Network with Depthwise and Pointwise Convolution |
title_full_unstemmed | PEIPNet: Parametric Efficient Image-Inpainting Network with Depthwise and Pointwise Convolution |
title_short | PEIPNet: Parametric Efficient Image-Inpainting Network with Depthwise and Pointwise Convolution |
title_sort | peipnet: parametric efficient image-inpainting network with depthwise and pointwise convolution |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10575462/ https://www.ncbi.nlm.nih.gov/pubmed/37837143 http://dx.doi.org/10.3390/s23198313 |
work_keys_str_mv | AT kojaekyun peipnetparametricefficientimageinpaintingnetworkwithdepthwiseandpointwiseconvolution AT choiwanuk peipnetparametricefficientimageinpaintingnetworkwithdepthwiseandpointwiseconvolution AT leesanghwan peipnetparametricefficientimageinpaintingnetworkwithdepthwiseandpointwiseconvolution |