
Predicting wind-driven spatial deposition through simulated color images using deep autoencoders


Bibliographic Details
Main Authors: Fernández-Godino, M. Giselle; Lucas, Donald D.; Kong, Qingkai
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9876895/
https://www.ncbi.nlm.nih.gov/pubmed/36697487
http://dx.doi.org/10.1038/s41598-023-28590-4
Description:
For centuries, scientists have observed nature to understand the laws that govern the physical world. The traditional process of turning observations into physical understanding is slow. Imperfect models are constructed and tested to explain relationships in data. Powerful new algorithms can enable computers to learn physics by observing images and videos. Inspired by this idea, instead of training machine learning models using physical quantities, we used images, that is, pixel information. For this work, and as a proof of concept, the physics of interest are wind-driven spatial patterns. These phenomena include features in Aeolian dunes and volcanic ash deposition, wildfire smoke, and air pollution plumes. We use computer model simulations of spatial deposition patterns to approximate images from a hypothetical imaging device whose outputs are red, green, and blue (RGB) color images with channel values ranging from 0 to 255. In this paper, we explore deep convolutional neural network-based autoencoders to exploit relationships in wind-driven spatial patterns, which commonly occur in geosciences, and reduce their dimensionality. Reducing the data dimension size with an encoder enables training deep, fully connected neural network models linking geographic and meteorological scalar input quantities to the encoded space. Once this is achieved, full spatial patterns are reconstructed using the decoder. We demonstrate this approach on images of spatial deposition from a pollution source, where the encoder compresses the dimensionality to 0.02% of the original size, and the full predictive model performance on test data achieves a normalized root mean squared error of 8%, a figure of merit in space of 94%, and a precision-recall area under the curve of 0.93.
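The pipeline the abstract describes has three stages: encode each image into a small latent space, train a fully connected network from scalar geographic/meteorological inputs to that latent space, then decode the predicted latent vector back to a full image. A minimal sketch of that data flow, using hypothetical toy dimensions and untrained linear maps as stand-ins for the paper's deep convolutional autoencoder and fully connected network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper compresses to ~0.02% of input size).
H, W, C = 64, 64, 3          # image height, width, RGB channels
D = H * W * C                # flattened image dimension (12288)
LATENT = 8                   # encoded (latent) dimension
N_SCALARS = 4                # e.g. wind speed, wind direction, source coords

# Untrained linear encoder/decoder standing in for the convolutional
# autoencoder; random weights only illustrate the shapes and data flow.
W_enc = rng.normal(size=(D, LATENT)) / np.sqrt(D)
W_dec = rng.normal(size=(LATENT, D)) / np.sqrt(LATENT)

def encode(img):
    """Flatten an (H, W, C) image and project it to the latent space."""
    return img.reshape(-1) @ W_enc

def decode(z):
    """Project a latent vector back to a full (H, W, C) image."""
    return (z @ W_dec).reshape(H, W, C)

# Stand-in for the deep, fully connected network that maps scalar
# input quantities to the encoded space.
W_fc = rng.normal(size=(N_SCALARS, LATENT))

def predict_latent(scalars):
    return scalars @ W_fc

# End-to-end prediction: scalars -> latent vector -> reconstructed image.
scalars = rng.normal(size=N_SCALARS)
image_pred = decode(predict_latent(scalars))

print(image_pred.shape)                  # (64, 64, 3)
print(f"latent/input ratio: {LATENT / D:.4%}")
```

In the paper the encoder and decoder are trained jointly as an autoencoder first, and the scalar-to-latent regressor is trained afterward against the frozen encoded representations; the toy ratio here (8/12288, about 0.07%) is merely illustrative of the reported 0.02% compression.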
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sci Rep
Published Online: 2023-01-25 by Nature Publishing Group UK. © The Author(s) 2023. Open access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).