High-Density Functional Near-Infrared Spectroscopy and Machine Learning for Visual Perception Quantification
The main application scenario for wearable sensors involves the generation of data and monitoring metrics. fNIRS (functional near-infrared spectroscopy) allows the nonintrusive monitoring of human visual perception. The quantification of visual perception by fNIRS facilitates applications in engineering-related fields.
Main Authors: | Xiao, Hongwei; Li, Zhao; Zhou, Yuting; Gao, Zhenhai |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10650008/ https://www.ncbi.nlm.nih.gov/pubmed/37960396 http://dx.doi.org/10.3390/s23218696 |
_version_ | 1785135680832143360 |
---|---|
author | Xiao, Hongwei Li, Zhao Zhou, Yuting Gao, Zhenhai |
author_facet | Xiao, Hongwei Li, Zhao Zhou, Yuting Gao, Zhenhai |
author_sort | Xiao, Hongwei |
collection | PubMed |
description | The main application scenario for wearable sensors involves the generation of data and monitoring metrics. fNIRS (functional near-infrared spectroscopy) allows the nonintrusive monitoring of human visual perception, and quantifying visual perception with fNIRS facilitates applications in engineering-related fields. This study designed a set of experimental procedures to effectively induce visible alterations and to quantify visual perception while acquiring Hbt (total hemoglobin), Hb (hemoglobin), and HbO(2) (oxygenated hemoglobin) data from HfNIRS (high-density functional near-infrared spectroscopy).

The study consisted of one simulated scene, two visual variations, and four visual tasks. The simulated scene was a car-driving setting; the visual variations were changes in the brightness and saturation of the car operator interface; and the visual tasks were questions on layout, color, design, and information, answered in response to each visible change. Twenty-nine volunteers completed the visual tasks in response to the different visual changes in the same simulated scene, while HfNIRS recorded the changes in Hbt, Hb, and HbO(2), the time point of each visible difference, and the time point of each task change.

Data analysis combined channel dimensionality reduction, feature extraction, task classification, and score correlation. Channel downscaling: the mutual information between the 15 HfNIRS channels was computed, a threshold was set, and only the channels whose mutual information exceeded the threshold were retained. Feature extraction: statistics were computed for each visual task, including time, mean, median, variance, range, kurtosis, skewness, information entropy, and approximate entropy. Task classification: the KNN (K-Nearest Neighbors) algorithm was used to classify the visual tasks, and accuracy, precision, recall, and F1 scores were calculated. Score correlation: the visual task scores were matched against the fluctuations of Hbt, Hb, and HbO(2) to observe how those signals varied across scoring levels.

Mutual-information downscaling retained seven channels for analysis under each visual task. The average classification accuracy was 96.3% ± 1.99%; that is, 96.3% of the samples were assigned to the correct visual task. Correlating the scores on the different visual tasks with the fluctuations of Hbt, Hb, and HbO(2) showed that the higher the score, the more pronounced the fluctuations in all three signals. The experiments found that changes in visual perception trigger changes in Hbt, Hb, and HbO(2), and that HfNIRS recordings of Hbt, Hb, and HbO(2), combined with machine learning algorithms, can effectively quantify visual perception. However, this work still needs further refinement: the mathematical relationship between HfNIRS signals and visual perception must be explored further to support a quantitative study of subjective and objective visual perception grounded in that relationship. |
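As a concrete illustration of the channel-downscaling step, here is a minimal Python sketch. The abstract does not state the exact selection rule, the threshold value, or which hemoglobin signal feeds the computation, so the rule assumed here (keep channels whose mean pairwise mutual information with the other channels exceeds a threshold) and the `select_channels` helper are hypothetical.

```python
# Minimal sketch of mutual-information channel selection. Assumed rule:
# keep channels whose mean pairwise MI with the other channels exceeds a threshold.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_channels(signals: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """signals: (n_samples, n_channels) HfNIRS time series, e.g. Hbt.
    Returns the indices of the retained channels. The threshold is illustrative."""
    n_channels = signals.shape[1]
    mi = np.zeros((n_channels, n_channels))
    for j in range(n_channels):
        # MI of every channel against channel j; the diagonal (self-MI) is zeroed below.
        mi[:, j] = mutual_info_regression(signals, signals[:, j])
    np.fill_diagonal(mi, 0.0)
    mean_mi = mi.mean(axis=1)  # average MI of each channel with the rest
    return np.flatnonzero(mean_mi > threshold)

# Example: 15 channels, as in the study; random data stands in for real recordings.
rng = np.random.default_rng(0)
hbt = rng.standard_normal((1000, 15))
print(select_channels(hbt))
```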
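The feature-extraction step can be sketched the same way. The statistics below follow the list in the abstract; the mapping of "time" to a sample count, the histogram binning for information entropy, and the approximate-entropy parameters (m = 2, r = 0.2·std) are conventional assumptions rather than values from the paper.

```python
import numpy as np
from scipy.stats import kurtosis, skew, entropy

def approx_entropy(x: np.ndarray, m: int = 2, r: float | None = None) -> float:
    """Approximate entropy ApEn(m, r) of a 1-D series (standard definition)."""
    if r is None:
        r = 0.2 * np.std(x)  # conventional choice, not from the paper
    def phi(mm: int) -> float:
        # Embed the series into overlapping vectors of length mm.
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        # Chebyshev distance between all pairs of embedded vectors.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = np.mean(d <= r, axis=1)  # fraction of neighbors within r (self included)
        return float(np.mean(np.log(c)))
    return phi(m) - phi(m + 1)

def extract_features(x: np.ndarray) -> dict[str, float]:
    """The statistics named in the abstract, for one channel's task window."""
    hist, _ = np.histogram(x, bins=16)  # binning choice is an assumption
    return {
        "time": float(len(x)),          # window length in samples (assumption)
        "mean": float(np.mean(x)),
        "median": float(np.median(x)),
        "variance": float(np.var(x)),
        "range": float(np.ptp(x)),
        "kurtosis": float(kurtosis(x)),
        "skewness": float(skew(x)),
        "info_entropy": float(entropy(hist)),  # Shannon entropy of the histogram
        "approx_entropy": approx_entropy(x),
    }
```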
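For the classification step, a plain scikit-learn KNN pipeline suffices as a sketch. The number of neighbors, the train/test split, and the macro averaging of precision/recall/F1 are assumptions; the abstract reports only the resulting scores.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical feature matrix: one row per trial (statistics from the retained
# channels), labels 0..3 for the four visual tasks (layout, color, design, information).
rng = np.random.default_rng(0)
X = rng.standard_normal((29 * 4, 9 * 7))  # 29 volunteers x 4 tasks; 9 stats x 7 channels
y = np.tile(np.arange(4), 29)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)  # k=5 is an assumption
pred = clf.predict(X_te)

print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_te, pred, average="macro", zero_division=0))
print("F1       :", f1_score(y_te, pred, average="macro", zero_division=0))
```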
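Finally, the score-correlation step can be illustrated by pairing each trial's score with a summary of its hemodynamic fluctuation. Summarizing "fluctuation" as peak-to-peak amplitude and using a Spearman rank correlation are both assumptions; the abstract describes the score-fluctuation relationship only qualitatively.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=116).astype(float)       # hypothetical task scores
windows = [rng.standard_normal(200) * s for s in scores]  # toy Hbt task windows

fluctuation = np.array([np.ptp(w) for w in windows])      # peak-to-peak amplitude
rho, p = spearmanr(scores, fluctuation)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")           # higher score -> larger fluctuation
```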
format | Online Article Text |
id | pubmed-10650008 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-106500082023-10-25 High-Density Functional Near-Infrared Spectroscopy and Machine Learning for Visual Perception Quantification Xiao, Hongwei; Li, Zhao; Zhou, Yuting; Gao, Zhenhai. Sensors (Basel), Article. MDPI 2023-10-25 /pmc/articles/PMC10650008/ /pubmed/37960396 http://dx.doi.org/10.3390/s23218696 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Xiao, Hongwei Li, Zhao Zhou, Yuting Gao, Zhenhai High-Density Functional Near-Infrared Spectroscopy and Machine Learning for Visual Perception Quantification |
title | High-Density Functional Near-Infrared Spectroscopy and Machine Learning for Visual Perception Quantification |
title_full | High-Density Functional Near-Infrared Spectroscopy and Machine Learning for Visual Perception Quantification |
title_fullStr | High-Density Functional Near-Infrared Spectroscopy and Machine Learning for Visual Perception Quantification |
title_full_unstemmed | High-Density Functional Near-Infrared Spectroscopy and Machine Learning for Visual Perception Quantification |
title_short | High-Density Functional Near-Infrared Spectroscopy and Machine Learning for Visual Perception Quantification |
title_sort | high-density functional near-infrared spectroscopy and machine learning for visual perception quantification |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10650008/ https://www.ncbi.nlm.nih.gov/pubmed/37960396 http://dx.doi.org/10.3390/s23218696 |
work_keys_str_mv | AT xiaohongwei highdensityfunctionalnearinfraredspectroscopyandmachinelearningforvisualperceptionquantification AT lizhao highdensityfunctionalnearinfraredspectroscopyandmachinelearningforvisualperceptionquantification AT zhouyuting highdensityfunctionalnearinfraredspectroscopyandmachinelearningforvisualperceptionquantification AT gaozhenhai highdensityfunctionalnearinfraredspectroscopyandmachinelearningforvisualperceptionquantification |