VeggieBox - your kitchen assistant for healthy meals
VeggieBox is a home appliance that examines ingredients and suggests the best recipes for a good healthy meal made from those ingredients.
Internet of Things, Artificial Intelligence
Groups
Artificial Intelligence Europe,
Movidius™ Neural Compute Group
Overview / Usage
See the document in the link for more information.
2 Technology
2.1. Vision
At the core of VeggieBox is an Intel Movidius NCS running Deep Learning models for object detection on a low-powered device. It takes as input an image captured by the camera, analyzes it, and detects the ingredients. Using state-of-the-art Deep Learning models, the device can recognize multiple ingredients at once. The system then queries our recipe database and suggests the recipes that best match the given ingredients, and shows a step-by-step guide on its touch screen for preparing the meal. Text-to-speech and a simple voice-command engine provide extra comfort when your hands are busy with chopping and picking.
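The detect-then-match flow above can be sketched in a few lines. The recipe data and the scoring rule here are illustrative assumptions, not the actual VeggieBox database or matching logic; the detection step itself (the Movidius NCS model) is represented only by its output, a set of ingredient labels.

```python
# Hypothetical sketch of the recipe-matching step that follows ingredient
# detection. Recipe names and ingredient sets are made up for illustration.
RECIPES = {
    "tomato soup": {"tomato", "onion", "garlic"},
    "carrot salad": {"carrot", "apple", "lemon"},
}

def rank_recipes(detected, recipes):
    """Score each recipe by the fraction of its required ingredients
    that were detected, and return recipe names best-first."""
    scores = {
        name: len(needed & detected) / len(needed)
        for name, needed in recipes.items()
    }
    return sorted(scores, key=scores.get, reverse=True)
```

For example, if the camera reports `{"tomato", "onion"}`, "tomato soup" ranks first because two of its three ingredients were detected.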
2.2. Energy Saving
Thanks to the Movidius NCS, low-powered home devices can now leverage Deep Learning. Our design of VeggieBox builds on this advantage so that it consumes as little power as possible. The device normally runs in idle mode: in particular, the vision camera and other background services are not running all the time. An infrared sensor detects human interaction with the kitchen table, or other activity close to the device, and then wakes it up. Such an infrared sensor mounted on an Arduino or Raspberry Pi consumes very little energy.
Although it operates in real time, the nature of the problem allows the detection engine to work at a low frame rate. Overall, this leads to low power consumption for the camera, the video-processing chip, and the neural compute stick altogether.
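The idle/wake cycle described above can be modeled as a small state machine: the camera stays off until the infrared sensor reports motion, then runs for a fixed active window. This is a sketch under assumed names (`PowerManager`, a 60-second window), not the actual firmware; on real hardware the `on_motion` call would be wired to a GPIO interrupt from the IR sensor.

```python
class PowerManager:
    """Sketch of the energy-saving cycle: the device idles until the
    infrared sensor reports motion, then enables the camera for a while."""

    def __init__(self, active_seconds=60.0):
        self.active_seconds = active_seconds  # assumed active window
        self.active_until = 0.0               # timestamp when we go idle again

    def on_motion(self, now):
        # IR sensor event: wake up, or extend the current active window.
        self.active_until = now + self.active_seconds

    def camera_enabled(self, now):
        # The camera and detection engine run only inside the active window.
        return now < self.active_until
```

Because the camera and the detection engine are gated on `camera_enabled`, they draw power only in the short windows after someone approaches the device.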
2.3. Connected Search & Personalization
VeggieBox itself is a standalone device, but it has great potential as a front-end device backed by centralized servers that process recipe searches instantly and collect meal photos. The backend uses NodeJS with ExpressJS for serving and ElasticSearch for storage and search.
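One way such a backend search could look is an ElasticSearch `terms` query that matches recipes containing any of the detected ingredients. The index and field names (`recipes`, `ingredients`) and the result size are assumptions for illustration; this sketch only builds the query body, it does not talk to a server.

```python
def recipe_query(ingredients, size=5):
    """Build a hypothetical ElasticSearch query body that matches recipe
    documents whose "ingredients" field contains any detected ingredient.
    Field name and result size are illustrative assumptions."""
    return {
        "size": size,
        "query": {
            "terms": {"ingredients": sorted(ingredients)},
        },
    }
```

The front-end device would POST this body to the centralized server, which in turn forwards it to ElasticSearch and returns the ranked recipes.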
2.4. Narration and Voice Interaction
State-of-the-art text-to-speech models will be used to turn instructions into voice, helping users prepare their meal in the most natural and convenient way. In addition, speech-recognition models will be employed to detect simple voice commands from users. The touch screen will always be available for interacting with the device, while voice-based interaction will further enhance the user's experience.
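Since only simple commands are needed, the step after speech recognition can be a plain keyword lookup over the transcript. The keyword set and command names below are illustrative assumptions, not the actual VeggieBox command grammar.

```python
def parse_command(transcript):
    """Map a recognized utterance to one of a few simple cooking-guide
    commands; the keyword-to-command table is a hypothetical example."""
    keywords = {
        "next": "NEXT_STEP",
        "back": "PREVIOUS_STEP",
        "repeat": "REPEAT_STEP",
        "stop": "STOP",
    }
    for word in transcript.lower().split():
        if word in keywords:
            return keywords[word]
    return None  # no known command in the utterance
```

Keeping the grammar this small is what makes hands-free control practical on a low-powered device: the speech recognizer only has to spot a handful of keywords rather than transcribe free-form speech.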
2.5. Timeline
- Q3 2017: Proposal and Feasibility Study
- Q4 2017: Finish VeggieBox version 1: finalize hardware engineering and the fundamental software features (Machine Learning models and the recipe database).
- Q2 2018: Case design and advanced features: text-to-speech, voice commands, and social-network features.