
Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds

Bibliographic Details
Main Authors: Hoskere, Vedhus, Narazaki, Yasutaka, Spencer, Billie F.
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8779227/
https://www.ncbi.nlm.nih.gov/pubmed/35062493
http://dx.doi.org/10.3390/s22020532
_version_ 1784637522654003200
author Hoskere, Vedhus
Narazaki, Yasutaka
Spencer, Billie F.
author_facet Hoskere, Vedhus
Narazaki, Yasutaka
Spencer, Billie F.
author_sort Hoskere, Vedhus
collection PubMed
description Manual visual inspection of civil infrastructure is high-risk, subjective, and time-consuming. The success of deep learning and the proliferation of low-cost consumer robots have spurred rapid growth in research on and application of autonomous inspections. The major components of autonomous inspection include data acquisition, data processing, and decision making, which are usually studied independently. However, for robust real-world applicability, these three aspects of the overall process need to be addressed concurrently, with end-to-end testing that incorporates variations in structure type, color, damage level, camera distance, view angle, and lighting. Developing real-world datasets that span all these scenarios is nearly impossible. In this paper, we propose a framework for creating a virtual visual inspection testbed using 3D synthetic environments that enables end-to-end testing of autonomous inspection strategies. To populate the 3D synthetic environment with virtual damaged buildings, we propose the use of a non-linear finite element model to inform the realistic and automated visual rendering of different damage types, the damage state, and the material textures of what are termed herein physics-based graphics models (PBGMs). To demonstrate the benefits of the autonomous inspection testbed, three experiments are conducted with models of earthquake-damaged reinforced concrete buildings. First, we implement the proposed framework to generate a new large-scale annotated benchmark dataset for post-earthquake building inspections, termed QuakeCity. Second, we demonstrate the improved performance of deep learning models trained on the QuakeCity dataset for inference on real data. Finally, a comparison of deep-learning-based damage state estimation across different data acquisition strategies is carried out. The results demonstrate that PBGMs are an effective testbed for the development and validation of strategies for autonomous vision-based inspections of civil infrastructure.
format Online
Article
Text
id pubmed-8779227
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8779227 2022-01-22 Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds Hoskere, Vedhus Narazaki, Yasutaka Spencer, Billie F. Sensors (Basel) Article MDPI 2022-01-11 /pmc/articles/PMC8779227/ /pubmed/35062493 http://dx.doi.org/10.3390/s22020532 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Hoskere, Vedhus
Narazaki, Yasutaka
Spencer, Billie F.
Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds
title Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds
title_full Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds
title_fullStr Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds
title_full_unstemmed Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds
title_short Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds
title_sort physics-based graphics models in 3d synthetic environments as autonomous vision-based inspection testbeds
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8779227/
https://www.ncbi.nlm.nih.gov/pubmed/35062493
http://dx.doi.org/10.3390/s22020532
work_keys_str_mv AT hoskerevedhus physicsbasedgraphicsmodelsin3dsyntheticenvironmentsasautonomousvisionbasedinspectiontestbeds
AT narazakiyasutaka physicsbasedgraphicsmodelsin3dsyntheticenvironmentsasautonomousvisionbasedinspectiontestbeds
AT spencerbillief physicsbasedgraphicsmodelsin3dsyntheticenvironmentsasautonomousvisionbasedinspectiontestbeds