
Validation of Sensor-Based Food Intake Detection by Multicamera Video Observation in an Unconstrained Environment


Bibliographic Details
Main Authors: Farooq, Muhammad, Doulah, Abul, Parton, Jason, McCrory, Megan A., Higgins, Janine A., Sazonov, Edward
Format: Online Article Text
Language: English
Published: MDPI 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6472006/
https://www.ncbi.nlm.nih.gov/pubmed/30871173
http://dx.doi.org/10.3390/nu11030609
author Farooq, Muhammad
Doulah, Abul
Parton, Jason
McCrory, Megan A.
Higgins, Janine A.
Sazonov, Edward
collection PubMed
description Video observations have been widely used to provide ground truth for wearable food intake monitoring systems in controlled laboratory conditions; however, video observation requires that participants be confined to a defined space. The purpose of this analysis was to test an alternative approach for establishing activity types and food intake bouts in a relatively unconstrained environment. The accuracy of a wearable system for assessing food intake was compared with that of video observation, and the inter-rater reliability of annotation was also evaluated. Forty participants were enrolled. Multiple participants were monitored simultaneously in a four-bedroom apartment using six cameras for three days each. Participants could leave the apartment overnight and for short periods during the day, during which time monitoring did not take place. A wearable system (Automatic Ingestion Monitor, AIM) was used to detect and monitor participants’ food intake at a resolution of 30 s using a neural network classifier. Two food intake detection models were tested: one trained on data from an earlier study and the other on current-study data using leave-one-out cross-validation. Three trained human raters annotated the videos for major activities of daily living, including eating, drinking, resting, walking, and talking. They further annotated individual bites and chewing bouts for each food intake bout. For inter-rater reliability, the raters achieved an average (±standard deviation (STD)) kappa value of 0.74 (±0.02) for activity annotation and an average kappa (Light’s kappa) of 0.82 (±0.04) for food intake annotation. Validity results showed that AIM food intake detection matched human video-annotated food intake with a kappa of 0.77 (±0.10) for activity annotation and 0.78 (±0.12) for food intake bout annotation. Results of a one-way ANOVA suggest no statistically significant differences among the average eating durations estimated from the raters’ annotations and the AIM predictions (p-value = 0.19). These results suggest that the AIM provides accuracy comparable to video observation and may be used to reliably detect food intake in multi-day observational studies.
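The inter-rater statistic reported in the abstract, Light’s kappa, is simply the mean of Cohen’s kappa computed over all pairs of raters. A minimal illustrative sketch follows; this is not the authors’ analysis code, and the three raters’ epoch labels are hypothetical:

```python
from itertools import combinations

def cohen_kappa(a, b):
    """Cohen's kappa for two equal-length sequences of categorical labels."""
    n = len(a)
    labels = set(a) | set(b)
    # Observed agreement: fraction of positions where the two raters agree.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: sum over labels of the product of marginal frequencies.
    pe = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

def lights_kappa(ratings):
    """Light's kappa: average Cohen's kappa over all pairs of raters."""
    pairs = list(combinations(ratings, 2))
    return sum(cohen_kappa(a, b) for a, b in pairs) / len(pairs)

# Hypothetical 30 s epoch labels from three raters.
r1 = ["eat", "eat", "walk", "rest"]
r2 = ["eat", "eat", "walk", "walk"]
r3 = ["eat", "eat", "walk", "rest"]
print(round(lights_kappa([r1, r2, r3]), 3))  # → 0.733
```

In practice one would compute this over the full sequence of 30 s annotation epochs per participant, then average across participants to obtain the reported 0.74/0.82 style summaries.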
id pubmed-6472006
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-6472006 2019-04-25 Validation of Sensor-Based Food Intake Detection by Multicamera Video Observation in an Unconstrained Environment Farooq, Muhammad Doulah, Abul Parton, Jason McCrory, Megan A. Higgins, Janine A. Sazonov, Edward Nutrients Article MDPI 2019-03-13 /pmc/articles/PMC6472006/ /pubmed/30871173 http://dx.doi.org/10.3390/nu11030609 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
title Validation of Sensor-Based Food Intake Detection by Multicamera Video Observation in an Unconstrained Environment
topic Article