
Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery

An unmanned aerial vehicle (UAV) equipped with a global positioning system (GPS) can provide directly georeferenced imagery, mapping an area at high resolution. To date, the major difficulty in wildfire image classification has been the lack of unified identification marks: the fire features of color, shape...


Bibliographic Details
Main Authors: Zhao, Yi, Ma, Jiale, Li, Xiaohui, Zhang, Jie
Format: Online Article Text
Language: English
Published: MDPI 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5876738/
https://www.ncbi.nlm.nih.gov/pubmed/29495504
http://dx.doi.org/10.3390/s18030712
_version_ 1783310570965958656
author Zhao, Yi
Ma, Jiale
Li, Xiaohui
Zhang, Jie
author_facet Zhao, Yi
Ma, Jiale
Li, Xiaohui
Zhang, Jie
author_sort Zhao, Yi
collection PubMed
description An unmanned aerial vehicle (UAV) equipped with a global positioning system (GPS) can provide directly georeferenced imagery, mapping an area at high resolution. To date, the major difficulty in wildfire image classification has been the lack of unified identification marks: the fire features of color, shape, and texture (smoke, flame, or both) and the background can vary significantly from one scene to another. Deep learning (e.g., the deep convolutional neural network, DCNN) is very effective for high-level feature learning; however, a substantial training image dataset is required to optimize its weight values and coefficients. In this work, we propose a new saliency detection algorithm for fast localization and segmentation of the core fire area in aerial images. Because the proposed method effectively avoids the feature loss caused by direct resizing, it is used for data augmentation and for building a standard fire image dataset, ‘UAV_Fire’. A 15-layer DCNN architecture named ‘Fire_Net’ is then presented as a self-learning fire feature extractor and classifier. We evaluated different architectures and several key parameters (dropout ratio, batch size, etc.) of the DCNN model with respect to validation accuracy. The proposed architecture outperformed previous methods, achieving an overall accuracy of 98%. Furthermore, ‘Fire_Net’ achieved an average processing speed of 41.5 ms per image, enabling real-time wildfire inspection. To demonstrate its practical utility, Fire_Net was tested on 40 images sampled from wildfire news reports, and all of them were identified accurately.
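
For readers who want to experiment with the kind of classifier the abstract describes, the sketch below shows a minimal convolutional fire/no-fire image classifier in PyTorch. It is not the authors' Fire_Net: the record does not include the architecture details, so the layer count, channel widths, 128x128 RGB input resolution, and dropout ratio used here are illustrative assumptions only.

# Minimal sketch of a small convolutional fire/no-fire image classifier.
# NOT the authors' Fire_Net: layer widths, kernel sizes, the 128x128 RGB
# input resolution, and the dropout ratio are illustrative assumptions.
import torch
import torch.nn as nn

class SmallFireCNN(nn.Module):
    def __init__(self, num_classes: int = 2, dropout: float = 0.5):
        super().__init__()
        # Three conv blocks (conv -> ReLU -> max-pool), then a
        # dropout-regularized fully connected classifier head.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 128 -> 64
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=dropout),
            nn.Linear(128 * 16 * 16, 256), nn.ReLU(inplace=True),
            nn.Dropout(p=dropout),
            nn.Linear(256, num_classes),          # fire / no-fire logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = SmallFireCNN()
    dummy = torch.randn(4, 3, 128, 128)           # batch of 4 RGB crops
    logits = model(dummy)
    print(logits.shape)                           # torch.Size([4, 2])

In practice, the cropped fire regions produced by a saliency or segmentation step would be resized to the network's input resolution and fed in as the batch above; dropout ratio and batch size are the kind of hyperparameters the paper reports tuning against validation accuracy.
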
format Online
Article
Text
id pubmed-5876738
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-5876738 2018-04-09 Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery Zhao, Yi Ma, Jiale Li, Xiaohui Zhang, Jie Sensors (Basel) Article An unmanned aerial vehicle (UAV) equipped with a global positioning system (GPS) can provide directly georeferenced imagery, mapping an area at high resolution. To date, the major difficulty in wildfire image classification has been the lack of unified identification marks: the fire features of color, shape, and texture (smoke, flame, or both) and the background can vary significantly from one scene to another. Deep learning (e.g., the deep convolutional neural network, DCNN) is very effective for high-level feature learning; however, a substantial training image dataset is required to optimize its weight values and coefficients. In this work, we propose a new saliency detection algorithm for fast localization and segmentation of the core fire area in aerial images. Because the proposed method effectively avoids the feature loss caused by direct resizing, it is used for data augmentation and for building a standard fire image dataset, ‘UAV_Fire’. A 15-layer DCNN architecture named ‘Fire_Net’ is then presented as a self-learning fire feature extractor and classifier. We evaluated different architectures and several key parameters (dropout ratio, batch size, etc.) of the DCNN model with respect to validation accuracy. The proposed architecture outperformed previous methods, achieving an overall accuracy of 98%. Furthermore, ‘Fire_Net’ achieved an average processing speed of 41.5 ms per image, enabling real-time wildfire inspection. To demonstrate its practical utility, Fire_Net was tested on 40 images sampled from wildfire news reports, and all of them were identified accurately. MDPI 2018-02-27 /pmc/articles/PMC5876738/ /pubmed/29495504 http://dx.doi.org/10.3390/s18030712 Text en © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Zhao, Yi
Ma, Jiale
Li, Xiaohui
Zhang, Jie
Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery
title Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery
title_full Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery
title_fullStr Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery
title_full_unstemmed Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery
title_short Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery
title_sort saliency detection and deep learning-based wildfire identification in uav imagery
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5876738/
https://www.ncbi.nlm.nih.gov/pubmed/29495504
http://dx.doi.org/10.3390/s18030712
work_keys_str_mv AT zhaoyi saliencydetectionanddeeplearningbasedwildfireidentificationinuavimagery
AT majiale saliencydetectionanddeeplearningbasedwildfireidentificationinuavimagery
AT lixiaohui saliencydetectionanddeeplearningbasedwildfireidentificationinuavimagery
AT zhangjie saliencydetectionanddeeplearningbasedwildfireidentificationinuavimagery