VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility
Main Authors: | Palvanov, Akmaljon; Cho, Young Im |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2019 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6471280/ https://www.ncbi.nlm.nih.gov/pubmed/30889820 http://dx.doi.org/10.3390/s19061343 |
_version_ | 1783411992542838784 |
---|---|
author | Palvanov, Akmaljon Cho, Young Im |
author_facet | Palvanov, Akmaljon Cho, Young Im |
author_sort | Palvanov, Akmaljon |
collection | PubMed |
description | Visibility is a complex phenomenon influenced by emissions and air pollutants, as well as by factors including sunlight, humidity, temperature, and time of day, all of which reduce the clarity of what is visible through the atmosphere. This paper provides a detailed overview of the state-of-the-art contributions to visibility estimation under various foggy weather conditions. We propose VisNet, a new approach based on deep integrated convolutional neural networks for estimating visibility distances from camera imagery. The implemented network uses three streams of deep integrated convolutional neural networks connected in parallel. In addition, we have collected the largest dataset for this study, comprising three million outdoor images with exact visibility values. To evaluate the model’s performance fairly and objectively, the model is trained on three image datasets with different visibility ranges, each with a different number of classes. Moreover, our proposed model, VisNet, is evaluated under dissimilar fog-density scenarios using a diverse set of images. Before being fed to the network, each input image is filtered in the frequency domain to remove low-level features, and a spectral filter is applied to each input to extract low-contrast regions. Compared to previous methods, our approach achieves the highest classification performance on all three datasets. Furthermore, VisNet considerably outperforms not only classical methods but also state-of-the-art visibility-estimation models. |
format | Online Article Text |
id | pubmed-6471280 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-64712802019-04-26 VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility Palvanov, Akmaljon Cho, Young Im Sensors (Basel) Article Visibility is a complex phenomenon influenced by emissions and air pollutants, as well as by factors including sunlight, humidity, temperature, and time of day, all of which reduce the clarity of what is visible through the atmosphere. This paper provides a detailed overview of the state-of-the-art contributions to visibility estimation under various foggy weather conditions. We propose VisNet, a new approach based on deep integrated convolutional neural networks for estimating visibility distances from camera imagery. The implemented network uses three streams of deep integrated convolutional neural networks connected in parallel. In addition, we have collected the largest dataset for this study, comprising three million outdoor images with exact visibility values. To evaluate the model’s performance fairly and objectively, the model is trained on three image datasets with different visibility ranges, each with a different number of classes. Moreover, our proposed model, VisNet, is evaluated under dissimilar fog-density scenarios using a diverse set of images. Before being fed to the network, each input image is filtered in the frequency domain to remove low-level features, and a spectral filter is applied to each input to extract low-contrast regions. Compared to previous methods, our approach achieves the highest classification performance on all three datasets. Furthermore, VisNet considerably outperforms not only classical methods but also state-of-the-art visibility-estimation models. MDPI 2019-03-18 /pmc/articles/PMC6471280/ /pubmed/30889820 http://dx.doi.org/10.3390/s19061343 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Palvanov, Akmaljon Cho, Young Im VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility |
title | VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility |
title_full | VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility |
title_fullStr | VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility |
title_full_unstemmed | VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility |
title_short | VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility |
title_sort | visnet: deep convolutional neural networks for forecasting atmospheric visibility |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6471280/ https://www.ncbi.nlm.nih.gov/pubmed/30889820 http://dx.doi.org/10.3390/s19061343 |
work_keys_str_mv | AT palvanovakmaljon visnetdeepconvolutionalneuralnetworksforforecastingatmosphericvisibility AT choyoungim visnetdeepconvolutionalneuralnetworksforforecastingatmosphericvisibility |
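
The preprocessing step mentioned in the abstract, filtering each input image in the frequency domain to remove low-level features before it is fed to the network, can be illustrated with a minimal sketch. The ideal high-pass mask, the cutoff radius, the function name `fft_highpass`, and the use of NumPy below are illustrative assumptions for a grayscale frame; they are not taken from the paper, whose exact filter design may differ.

```python
import numpy as np

def fft_highpass(image, cutoff=30):
    """Suppress low-frequency (low-level) content of a 2-D grayscale image
    by zeroing a circular region around the DC component in its Fourier
    spectrum, then transforming back to the spatial domain."""
    # Shift the zero-frequency component to the centre of the spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(image))

    rows, cols = image.shape
    cy, cx = rows // 2, cols // 2
    y, x = np.ogrid[:rows, :cols]
    # Boolean mask that is True outside the cutoff radius,
    # i.e. only high-frequency components are kept.
    mask = (y - cy) ** 2 + (x - cx) ** 2 > cutoff ** 2

    filtered = spectrum * mask
    # Inverse transform; keep the real part as the filtered image.
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

if __name__ == "__main__":
    # Dummy 240x320 grayscale frame standing in for a camera image.
    frame = np.random.rand(240, 320)
    high_freq = fft_highpass(frame, cutoff=30)
    print(high_freq.shape)  # (240, 320)
```

In such a scheme, the filtered output retains edges and fine texture while smooth, low-frequency content (which dense fog tends to dominate) is attenuated; the cutoff radius controls how aggressively low frequencies are removed.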