
Earthquake-Induced Building-Damage Mapping Using Explainable AI (XAI)

Building-damage mapping using remote sensing images plays a critical role in providing quick and accurate information for the first responders after major earthquakes. In recent years, there has been an increasing interest in generating post-earthquake building-damage maps automatically using different artificial intelligence (AI)-based frameworks. These frameworks in this domain are promising, yet not reliable for several reasons, including but not limited to the site-specific design of the methods, the lack of transparency in the AI-model, the lack of quality in the labelled image, and the use of irrelevant descriptor features in building the AI-model. Using explainable AI (XAI) can lead us to gain insight into identifying these limitations and therefore, to modify the training dataset and the model accordingly. This paper proposes the use of SHAP (Shapley additive explanation) to interpret the outputs of a multilayer perceptron (MLP)—a machine learning model—and analyse the impact of each feature descriptor included in the model for building-damage assessment to examine the reliability of the model. In this study, a post-event satellite image from the 2018 Palu earthquake was used. The results show that MLP can classify the collapsed and non-collapsed buildings with an overall accuracy of 84% after removing the redundant features. Further, spectral features are found to be more important than texture features in distinguishing the collapsed and non-collapsed buildings. Finally, we argue that constructing an explainable model would help to understand the model’s decision to classify the buildings as collapsed and non-collapsed and open avenues to build a transferable AI model.


Bibliographic Details
Main Authors: Matin, Sahar S.; Pradhan, Biswajeet
Format: Online Article Text
Language: English
Journal: Sensors (Basel)
Published: MDPI, 30 June 2021
Collection: PubMed (National Center for Biotechnology Information)
Record ID: pubmed-8271973
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8271973/
https://www.ncbi.nlm.nih.gov/pubmed/34209169
http://dx.doi.org/10.3390/s21134489
License: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
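The abstract describes training a multilayer perceptron on per-building feature descriptors and then using SHAP to quantify how much each descriptor contributes to the collapsed versus non-collapsed decision. The sketch below illustrates that general workflow; it is not the authors' implementation. It assumes scikit-learn's MLPClassifier and the shap package, and the feature names and synthetic data are hypothetical placeholders standing in for the spectral and texture descriptors extracted from the post-event satellite image.

# Illustrative sketch only: synthetic stand-ins for per-building spectral and
# texture descriptors; not the dataset or model configuration used in the paper.
import numpy as np
import shap
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical descriptor names: a few spectral statistics and texture measures.
feature_names = ["mean_red", "mean_green", "mean_blue", "brightness",
                 "texture_contrast", "texture_homogeneity", "texture_entropy"]

# Synthetic feature matrix and labels (1 = collapsed, 0 = non-collapsed); the label
# is driven mostly by two descriptors so the SHAP ranking has something to find.
X = rng.normal(size=(600, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 4] + rng.normal(scale=0.5, size=600) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Multilayer perceptron classifier; the architecture here is arbitrary.
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("overall accuracy:", mlp.score(X_test, y_test))

def predict_collapsed(data):
    """Probability of the 'collapsed' class for a batch of descriptor vectors."""
    return mlp.predict_proba(data)[:, 1]

# Model-agnostic SHAP explanation of the "collapsed" probability, using a subset
# of the training data as the background distribution for the explainer.
explainer = shap.KernelExplainer(predict_collapsed, X_train[:100])
shap_values = explainer.shap_values(X_test[:25])

# Mean absolute SHAP value per descriptor gives a global importance ranking,
# which is one way to compare spectral and texture features.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")

With real descriptors computed per building footprint, this kind of mean-absolute-SHAP ranking is how one would compare the contribution of spectral and texture features, which is the comparison the abstract reports.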