
Volumetric Food Quantification Using Computer Vision on a Depth-Sensing Smartphone: Preclinical Study


Bibliographic Details
Main Authors: Herzig, David; Nakas, Christos T; Stalder, Janine; Kosinski, Christophe; Laesser, Céline; Dehais, Joachim; Jaeggi, Raphael; Leichtle, Alexander Benedikt; Dahlweid, Fried-Michael; Stettler, Christoph; Bally, Lia
Format: Online Article Text
Language: English
Published: JMIR Publications, 2020
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7142738/
https://www.ncbi.nlm.nih.gov/pubmed/32209531
http://dx.doi.org/10.2196/15294
Collection: PubMed
Description:
BACKGROUND: Quantification of dietary intake is key to the prevention and management of numerous metabolic disorders. Conventional approaches are challenging, laborious, and lack accuracy. The recent advent of depth-sensing smartphones in conjunction with computer vision could facilitate reliable quantification of food intake.
OBJECTIVE: The objective of this study was to evaluate the accuracy of a novel smartphone app combining depth-sensing hardware with computer vision to quantify meal macronutrient content using volumetry.
METHODS: The app ran on a smartphone with a built-in depth sensor applying structured light (iPhone X). The app estimated weight, macronutrient (carbohydrate, protein, fat), and energy content of 48 randomly chosen meals (breakfasts, cooked meals, snacks) encompassing 128 food items. The reference weight was generated by weighing individual food items using a precision scale. The study endpoints were (1) error of estimated meal weight, (2) error of estimated meal macronutrient content and energy content, (3) segmentation performance, and (4) processing time.
RESULTS: In both absolute and relative terms, the mean (SD) absolute errors of the app's estimates were 35.1 g (42.8 g; relative absolute error: 14.0% [12.2%]) for weight; 5.5 g (5.1 g; relative absolute error: 14.8% [10.9%]) for carbohydrate content; 1.3 g (1.7 g; relative absolute error: 12.3% [12.8%]) for fat content; 2.4 g (5.6 g; relative absolute error: 13.0% [13.8%]) for protein content; and 41.2 kcal (42.5 kcal; relative absolute error: 12.7% [10.8%]) for energy content. Although estimation accuracy was not affected by the viewing angle, the type of meal mattered, with slightly worse performance for cooked meals than for breakfasts and snacks. Segmentation adjustment was required for 7 of the 128 items. Mean (SD) processing time across all meals was 22.9 seconds (8.6 seconds).
CONCLUSIONS: This study evaluated the accuracy of a novel smartphone app with an integrated depth-sensing camera and found highly accurate volume estimation across a broad range of food items. In addition, the system demonstrated high segmentation performance and low processing time, highlighting its usability.
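The error metrics reported above (mean and SD of absolute error, and of relative absolute error as a percentage of the scale-weighed reference) can be sketched as follows. The function and the sample values are illustrative assumptions, not data from the study:

```python
# Sketch of the study's reported error metrics: mean (SD) absolute error
# and mean (SD) relative absolute error between app estimates and
# precision-scale reference values.
import statistics


def error_metrics(estimates, references):
    """Return (mean AE, SD AE, mean relative AE %, SD relative AE %)."""
    abs_errors = [abs(e - r) for e, r in zip(estimates, references)]
    rel_errors = [100 * ae / r for ae, r in zip(abs_errors, references)]
    return (
        statistics.mean(abs_errors), statistics.stdev(abs_errors),
        statistics.mean(rel_errors), statistics.stdev(rel_errors),
    )


# Hypothetical meal weights in grams (app estimate vs. precision scale)
est = [250.0, 180.0, 320.0, 95.0]
ref = [230.0, 200.0, 300.0, 100.0]
mae, sd_ae, mre, sd_re = error_metrics(est, ref)
print(f"absolute error: {mae:.1f} g ({sd_ae:.1f} g)")
print(f"relative absolute error: {mre:.1f}% ({sd_re:.1f}%)")
```

Relative absolute error normalizes each item's error by its own reference value, which is why a 35.1 g mean weight error can correspond to only a 14.0% relative error across meals of varying size.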
ID: pubmed-7142738
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: JMIR Mhealth Uhealth (Original Paper)
Published online: 2020-03-25
© David Herzig, Christos T Nakas, Janine Stalder, Christophe Kosinski, Céline Laesser, Joachim Dehais, Raphael Jaeggi, Alexander Benedikt Leichtle, Fried-Michael Dahlweid, Christoph Stettler, Lia Bally. Originally published in JMIR mHealth and uHealth (http://mhealth.jmir.org), 25.03.2020. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on http://mhealth.jmir.org/, as well as this copyright and license information must be included.
Topic: Original Paper