
Robust Mesh Denoising via Triple Sparsity

Bibliographic Details
Main Authors: Zhong, Saishang; Xie, Zhong; Liu, Jinqin; Liu, Zheng
Format: Online Article (Text)
Language: English
Published: MDPI, 2019
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6427484/
https://www.ncbi.nlm.nih.gov/pubmed/30813651
http://dx.doi.org/10.3390/s19051001
Description
Summary: Mesh denoising aims to recover high-quality meshes from noisy inputs scanned from the real world. It is a crucial step in geometry processing, computer vision, computer-aided design, and related fields. Yet state-of-the-art denoising methods still fall short of handling meshes that contain both sharp features and fine details. Moreover, some of these methods introduce undesired staircase artifacts in smoothly curved regions. These issues become more severe when a mesh is corrupted by various kinds of noise, including Gaussian, impulsive, and mixed Gaussian–impulsive noise. In this paper, we present a novel optimization method for robustly denoising meshes. The proposed method is based on a triple sparsity prior: a double sparse prior on the first-order and second-order variations of the face normal field, and a sparse prior on the residual face normal field. Numerically, we develop an efficient algorithm based on variable splitting and the augmented Lagrangian method to solve the problem. The proposed method not only effectively recovers various features (including sharp features, fine details, and smoothly curved regions), but is also robust against different kinds of noise. We verify the effectiveness of the proposed method on synthetic meshes and a broad variety of scanned data produced by a laser scanner, Kinect v1, Kinect v2, and Kinect Fusion. Intensive numerical experiments show that our method outperforms all of the compared state-of-the-art methods both qualitatively and quantitatively.
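
To make the triple sparsity prior concrete, one plausible form of the underlying objective, written here as a hedged sketch (the operators \mathcal{D}_1, \mathcal{D}_2 and the weights \alpha, \beta, \gamma are our notation for illustration, not taken from the paper itself), is:

    \min_{N}\; \alpha\,\|\mathcal{D}_1 N\|_{1} \;+\; \beta\,\|\mathcal{D}_2 N\|_{1} \;+\; \gamma\,\|N - N^{\mathrm{in}}\|_{1} \quad \text{s.t.}\;\; \|n_i\|_2 = 1 \;\; \forall i,

where N collects the face normals, N^{\mathrm{in}} is the noisy input normal field, and \mathcal{D}_1, \mathcal{D}_2 denote first-order and second-order difference operators over adjacent faces; the L1 penalty on the residual N - N^{\mathrm{in}} is what would provide robustness to impulsive noise. Under this reading, variable splitting would introduce auxiliary variables for \mathcal{D}_1 N, \mathcal{D}_2 N, and the residual, so that each augmented Lagrangian subproblem reduces to a closed-form soft-thresholding step.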