
On Architecture Selection for Linear Inverse Problems with Untrained Neural Networks

In recent years, neural network based image priors have been shown to be highly effective for linear inverse problems, often significantly outperforming conventional methods that are based on sparsity and related notions. While pre-trained generative models are perhaps the most common, it has additionally been shown that even untrained neural networks can serve as excellent priors in various imaging applications. In this paper, we seek to broaden the applicability and understanding of untrained neural network priors by investigating the interaction between architecture selection, measurement models (e.g., inpainting vs. denoising vs. compressive sensing), and signal types (e.g., smooth vs. erratic). We motivate the problem via statistical learning theory, and provide two practical algorithms for tuning architectural hyperparameters. Using experimental evaluations, we demonstrate that the optimal hyperparameters may vary significantly between tasks and can exhibit large performance gaps when tuned for the wrong task. In addition, we investigate which hyperparameters tend to be more important, and which are robust to deviations from the optimum.


Bibliographic Details
Main Authors: Sun, Yang; Zhao, Hangdong; Scarlett, Jonathan
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8623203/
https://www.ncbi.nlm.nih.gov/pubmed/34828179
http://dx.doi.org/10.3390/e23111481
_version_ 1784605875843891200
author Sun, Yang
Zhao, Hangdong
Scarlett, Jonathan
author_facet Sun, Yang
Zhao, Hangdong
Scarlett, Jonathan
author_sort Sun, Yang
collection PubMed
description In recent years, neural network based image priors have been shown to be highly effective for linear inverse problems, often significantly outperforming conventional methods that are based on sparsity and related notions. While pre-trained generative models are perhaps the most common, it has additionally been shown that even untrained neural networks can serve as excellent priors in various imaging applications. In this paper, we seek to broaden the applicability and understanding of untrained neural network priors by investigating the interaction between architecture selection, measurement models (e.g., inpainting vs. denoising vs. compressive sensing), and signal types (e.g., smooth vs. erratic). We motivate the problem via statistical learning theory, and provide two practical algorithms for tuning architectural hyperparameters. Using experimental evaluations, we demonstrate that the optimal hyperparameters may vary significantly between tasks and can exhibit large performance gaps when tuned for the wrong task. In addition, we investigate which hyperparameters tend to be more important, and which are robust to deviations from the optimum.
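As a minimal illustration of the general setup the abstract refers to (and not the paper's two tuning algorithms, which are not reproduced here), the following PyTorch sketch fits an untrained decoder network to linear measurements y = A x, in the spirit of deep-image-prior-style reconstruction; the signal, measurement matrix, and architectural hyperparameters (depth, width, latent size) are purely hypothetical placeholder choices.

# Minimal sketch (illustrative only): untrained network prior for a linear inverse problem.
import torch
import torch.nn as nn

torch.manual_seed(0)

n = 256                                      # signal length (hypothetical 1-D example)
m = 64                                       # number of linear measurements
A = torch.randn(m, n) / m ** 0.5             # compressive-sensing-style measurement matrix
x_true = torch.sin(torch.linspace(0, 8, n))  # a smooth test signal
y = A @ x_true                               # noiseless measurements

# Untrained "prior": a small decoder G_theta(z) with a fixed random latent code z.
# The depth and width below are the kind of architectural hyperparameters the paper
# studies; the specific values here are arbitrary.
z = torch.randn(32)
G = nn.Sequential(
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, n),
)

opt = torch.optim.Adam(G.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    x_hat = G(z)
    loss = ((A @ x_hat - y) ** 2).mean()     # fit the measurements only; no training data
    loss.backward()
    opt.step()

print("reconstruction MSE:", ((G(z) - x_true) ** 2).mean().item())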
format Online
Article
Text
id pubmed-8623203
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8623203 2021-11-27 On Architecture Selection for Linear Inverse Problems with Untrained Neural Networks Sun, Yang Zhao, Hangdong Scarlett, Jonathan Entropy (Basel) Article In recent years, neural network based image priors have been shown to be highly effective for linear inverse problems, often significantly outperforming conventional methods that are based on sparsity and related notions. While pre-trained generative models are perhaps the most common, it has additionally been shown that even untrained neural networks can serve as excellent priors in various imaging applications. In this paper, we seek to broaden the applicability and understanding of untrained neural network priors by investigating the interaction between architecture selection, measurement models (e.g., inpainting vs. denoising vs. compressive sensing), and signal types (e.g., smooth vs. erratic). We motivate the problem via statistical learning theory, and provide two practical algorithms for tuning architectural hyperparameters. Using experimental evaluations, we demonstrate that the optimal hyperparameters may vary significantly between tasks and can exhibit large performance gaps when tuned for the wrong task. In addition, we investigate which hyperparameters tend to be more important, and which are robust to deviations from the optimum. MDPI 2021-11-09 /pmc/articles/PMC8623203/ /pubmed/34828179 http://dx.doi.org/10.3390/e23111481 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Sun, Yang
Zhao, Hangdong
Scarlett, Jonathan
On Architecture Selection for Linear Inverse Problems with Untrained Neural Networks
title On Architecture Selection for Linear Inverse Problems with Untrained Neural Networks
title_full On Architecture Selection for Linear Inverse Problems with Untrained Neural Networks
title_fullStr On Architecture Selection for Linear Inverse Problems with Untrained Neural Networks
title_full_unstemmed On Architecture Selection for Linear Inverse Problems with Untrained Neural Networks
title_short On Architecture Selection for Linear Inverse Problems with Untrained Neural Networks
title_sort on architecture selection for linear inverse problems with untrained neural networks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8623203/
https://www.ncbi.nlm.nih.gov/pubmed/34828179
http://dx.doi.org/10.3390/e23111481
work_keys_str_mv AT sunyang onarchitectureselectionforlinearinverseproblemswithuntrainedneuralnetworks
AT zhaohangdong onarchitectureselectionforlinearinverseproblemswithuntrainedneuralnetworks
AT scarlettjonathan onarchitectureselectionforlinearinverseproblemswithuntrainedneuralnetworks