
NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment

Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson’s disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson’s disease patients.
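
The abstract describes a deep convolutional neural network that maps 512 × 512 pixel images to one of 520 food and drink classes and evaluates real-world smartphone photos by top-five accuracy. The sketch below only illustrates that setup: it is a generic AlexNet-style classifier written in PyTorch, and the FoodImageClassifier name, the layer sizes, and the top-five usage are assumptions made for this example, not the published NutriNet architecture or the authors' code.

# Illustrative sketch only -- an AlexNet-style classifier sized for the
# 512 x 512 inputs and 520 food/drink classes reported in the abstract.
# Layer widths and kernel sizes are assumptions, not the published NutriNet.
import torch
import torch.nn as nn

class FoodImageClassifier(nn.Module):
    def __init__(self, num_classes: int = 520):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),   # 512 -> 127
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                   # 127 -> 63
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                   # 63 -> 31
            nn.Conv2d(192, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                   # 31 -> 15
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.5),
            nn.Linear(256 * 15 * 15, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Top-five prediction, the criterion used in the real-world evaluation reported
# in the abstract: a photo counts as recognised if the true class is among the
# five highest-scoring classes.
model = FoodImageClassifier()
image_batch = torch.randn(1, 3, 512, 512)          # stand-in for a smartphone photo
top5_scores, top5_classes = model(image_batch).topk(5, dim=1)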

Bibliographic Details
Main Authors: Mezgec, Simon, Koroušić Seljak, Barbara
Format: Online Article Text
Language: English
Published: MDPI 2017
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5537777/
https://www.ncbi.nlm.nih.gov/pubmed/28653995
http://dx.doi.org/10.3390/nu9070657
_version_ 1783254242854699008
author Mezgec, Simon
Koroušić Seljak, Barbara
author_facet Mezgec, Simon
Koroušić Seljak, Barbara
author_sort Mezgec, Simon
collection PubMed
description Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson’s disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson’s disease patients.
format Online
Article
Text
id pubmed-5537777
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-5537777 2017-08-04 NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment Mezgec, Simon Koroušić Seljak, Barbara Nutrients Article Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson’s disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson’s disease patients. MDPI 2017-06-27 /pmc/articles/PMC5537777/ /pubmed/28653995 http://dx.doi.org/10.3390/nu9070657 Text en © 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Mezgec, Simon
Koroušić Seljak, Barbara
NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment
title NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment
title_full NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment
title_fullStr NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment
title_full_unstemmed NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment
title_short NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment
title_sort nutrinet: a deep learning food and drink image recognition system for dietary assessment
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5537777/
https://www.ncbi.nlm.nih.gov/pubmed/28653995
http://dx.doi.org/10.3390/nu9070657
work_keys_str_mv AT mezgecsimon nutrinetadeeplearningfoodanddrinkimagerecognitionsystemfordietaryassessment
AT korousicseljakbarbara nutrinetadeeplearningfoodanddrinkimagerecognitionsystemfordietaryassessment
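
The abstract also notes an online training component that continually fine-tunes the recognition model on newly collected images. Below is a minimal sketch of what such an incremental update step could look like; the fine_tune_on_new_images helper, its optimizer settings, and the data shapes are hypothetical illustrations, not the implementation used by the authors.

# Hypothetical sketch of an online fine-tuning step: periodically update an
# already-trained classifier (e.g. the FoodImageClassifier sketched above) on a
# small batch of newly collected, labelled images.
import torch
import torch.nn as nn

def fine_tune_on_new_images(model: nn.Module,
                            images: torch.Tensor,   # shape (N, 3, 512, 512)
                            labels: torch.Tensor,   # shape (N,), class indices
                            lr: float = 1e-4,
                            steps: int = 1) -> None:
    """Run a few gradient steps on newly acquired images (illustrative only)."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    model.eval()  # return the model to inference mode for the recognition service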