tassAI Computer Vision
Adam Milton-Barker
Bangor, Wales
- 0 Collaborators
Facial and object recognition on the edge.
Project status: Published/In Market
Internet of Things, Artificial Intelligence
Groups
Internet of Things,
DeepLearning,
Artificial Intelligence Europe,
Movidius™ Neural Compute Group
Intel Technologies
Intel Opt ML/DL Framework,
Movidius NCS
Overview / Usage
tassAI is computer vision software that is able to communicate autonomously with applications and devices via the Internet of Things. There are several versions of tassAI and several different projects that evolved from the concept. Each version of tassAI uses different techniques and technologies to accomplish facial recognition and other computer vision tasks.
The projects are now fully open source and have led to the creation of a number of non-facial-recognition projects, including computer vision projects for detecting breast cancer, recognizing American Sign Language, and classifying white blood cells.
At the Intel® / Microsoft / IoT Solutions World Congress Hackathon 2016 in Barcelona, a version of tassAI was presented as Project H.E.R and won the Intel® Experts Award. Since then tassAI has evolved immensely, and the network version now uses Intel® hardware and software for the local A.I. server.
Methodology / Approach
TASS first detects whether a face, or faces, are present in the frames captured from the cameras, and if so passes the frames through a computer vision algorithm to determine whether the face belongs to a known person or an intruder. In either case, the server communicates with the IoT JumpWay, which executes the relevant commands set by rules, for instance controlling other devices on the network or raising alarms in applications.
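The loop below is a minimal sketch of that flow, assuming OpenCV's bundled Haar cascade for detection; the identify_face() helper, the jumpway_client object and the channel names are placeholders for the project's own components, not the actual IoT JumpWay API.

```python
# Minimal sketch of the TASS frame-processing loop described above.
# identify_face() and jumpway_client are hypothetical placeholders.
import cv2

# Haar cascade face detector shipped with OpenCV
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def identify_face(face_crop):
    """Placeholder for the identification model (Eigenfaces, Inception,
    OpenFace, etc.). Returns a person ID, or None for an intruder."""
    raise NotImplementedError

def process_frame(frame, jumpway_client):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        person = identify_face(gray[y:y + h, x:x + w])
        if person is not None:
            # Known person: platform rules decide what to trigger
            jumpway_client.publish("sensors", {"Sensor": "CCTV", "Value": person})
        else:
            # Unknown face: raise an intruder warning
            jumpway_client.publish("warnings", {"Sensor": "CCTV", "Value": "INTRUDER"})
```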
TASS debuted at the official Intel booth at CodeMotion in Amsterdam in 2017. More recently, TASS was demonstrated at Web Summit alongside A.I. E-Commerce, debuting the current version running on the latest Intel NUC.
IOT CONNECTIVITY:
The IoT connectivity is managed by the IoT JumpWay, an IoT PaaS that currently uses the secure MQTT protocol. Rules can be set up that are triggered by sensor values, warning messages, device status messages, and identified-person or intruder alerts. These rules allow connected devices to interact with each other autonomously, providing an automated smart home/business environment.
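As an illustration only, a device-side warning published over secure MQTT might look like the following (paho-mqtt 1.x style client); the broker address, credentials, topic and payload fields are assumptions, not the actual IoT JumpWay schema.

```python
# Illustration only: publishing a warning over secure MQTT with paho-mqtt.
# Broker address, credentials, topic and payload are placeholders.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.username_pw_set("device_name", "device_password")  # placeholder credentials
client.tls_set()                                           # secure MQTT (TLS)
client.connect("broker.example.com", 8883, keepalive=60)
client.loop_start()

# A rule on the platform side could react to this message, e.g. by
# raising an alarm or controlling another device on the network.
payload = json.dumps({"Sensor": "CCTV", "Type": "Intruder", "Value": "ALERT"})
client.publish("example/device/warnings", payload, qos=1)
```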
ARTIFICIAL INTELLIGENCE:
During the ongoing development of TASS, 8 A.I. solutions have been used and tested before settling on the current one.
- The first solution used OpenCV and Haar Cascades with an Eigenfaces model (see the first sketch after this list); users could upload their training data, which was sent to the device via MQTT for training. This solution was good as a proof of concept, but identification was not accurate enough. It has since been open sourced as an example for the IoT JumpWay Developer Program, with an accompanying tutorial.
- The second solution was developed at the IoT Solutions World Congress Hackathon in Barcelona and won our team the Intel Experts Award for building a deep learning neural network on the Intel Joule. It used OpenCV to detect faces and Caffe to identify them. Although we managed to build the network on the Joule, we were unable to complete the full functionality, but we had a great time working on the project and were honoured to win the award.
- The third solution used OpenCV to detect faces and passed them through a custom-trained Inception V3 model using TensorFlow (see the inference sketch after this list). We created the ability to carry out transfer learning directly on the device (a Raspberry Pi); users could upload their training data, which was sent to the device via MQTT for training. This solution was a massive improvement, and accuracy for detecting trained people was almost 100%. Unfortunately, I identified what I now know to be a common issue: the network would identify anyone unknown as one of the trained people. I am currently writing a Python wrapper for the TensorFlow/Inception/IoT JumpWay method, and the project will soon be released as an IoT JumpWay example.
- The fourth solution was developed on the foundations of OpenFace. We moved to a local Ubuntu server to house the A.I. rather than doing the identification on board, as on-board identification on a Raspberry Pi was quite poor; this means that training is only required on the server rather than on every device. As with the TensorFlow implementation, we came across the issue of unknown people being identified as known people. So far we have resolved this through the use of an unknown class (see the unknown-class sketch after this list), although this solution may not work across the board, and we are working with the OpenFace GitHub community on additional solutions that incorporate multiple models to verify the identification.
- For the fifth solution, the A.I. server was re-homed onto an Intel NUC and the structure of the network changed: the program that handles the facial recognition and identification can now connect to multiple IP cameras (see the streaming sketch after this list), whereas previously the camera devices sent their frames to the broker through MQTT. With this move the identification process became significantly more efficient, the camera devices only need to stream rather than connect to the communication broker, and third-party devices are now supported. In addition to managing multiple IP cameras, the hub can now process and classify frames from a RealSense camera. This version has now been open sourced, with an accompanying tutorial.
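The Eigenfaces approach from the first solution can be sketched as follows, assuming opencv-contrib-python is installed and that face crops have already been detected, resized to a common size and converted to grayscale; the distance threshold is illustrative only.

```python
# Sketch of the first (Eigenfaces) solution. Assumes opencv-contrib-python;
# all face crops must be equally sized grayscale images.
import cv2
import numpy as np

recognizer = cv2.face.EigenFaceRecognizer_create()

def train(face_crops, labels):
    # face_crops: list of equally sized grayscale arrays
    # labels: one integer ID per person
    recognizer.train(face_crops, np.array(labels))

def identify(face_crop, max_distance=4000.0):
    label, distance = recognizer.predict(face_crop)
    # A lower distance means a closer match; the threshold is illustrative.
    return label if distance < max_distance else None  # None -> intruder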
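The inference step of the third solution can be sketched roughly as below, using the TensorFlow 1.x API and the tensor names produced by TensorFlow's stock Inception V3 retraining example; the project's actual graph and tensor names may differ.

```python
# Rough sketch of classifying an image with a retrained Inception V3 graph,
# written against the TensorFlow 1.x API. Tensor names follow TensorFlow's
# stock retrain example and are assumptions here.
import tensorflow as tf

def load_graph(graph_path):
    with tf.gfile.GFile(graph_path, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

def classify_jpeg(jpeg_path, labels):
    with tf.gfile.GFile(jpeg_path, "rb") as f:
        image_data = f.read()
    with tf.Session() as sess:
        softmax = sess.graph.get_tensor_by_name("final_result:0")
        predictions = sess.run(softmax, {"DecodeJpeg/contents:0": image_data})[0]
    best = int(predictions.argmax())
    return labels[best], float(predictions[best])
```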
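The unknown-class fix mentioned in the fourth solution can be sketched as follows, assuming face embeddings (such as the 128-dimensional vectors OpenFace produces) and a scikit-learn classifier; the library choice, class name and confidence floor are assumptions, not the project's exact implementation.

```python
# Sketch of the "unknown class" idea: a classifier trained on face
# embeddings, with an extra class built from faces of people who should
# not be recognised. Names and thresholds are illustrative.
import numpy as np
from sklearn.svm import SVC

def train_classifier(known_embeddings, known_labels, unknown_embeddings):
    X = np.vstack([known_embeddings, unknown_embeddings])
    y = list(known_labels) + ["unknown"] * len(unknown_embeddings)
    clf = SVC(kernel="linear", probability=True)
    clf.fit(X, y)
    return clf

def identify(clf, embedding, min_confidence=0.7):
    probs = clf.predict_proba([embedding])[0]
    best = int(np.argmax(probs))
    label = clf.classes_[best]
    # Anything classified as "unknown", or below the confidence floor,
    # is treated as an intruder.
    if label == "unknown" or probs[best] < min_confidence:
        return None
    return label
```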
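Finally, the multi-camera hub described in the fifth solution essentially polls several streams and passes each frame through the same processing function; the RTSP URLs below are placeholders.

```python
# Sketch of the multi-camera hub: several IP camera streams are polled
# and each frame is handed to one processing function. URLs are placeholders.
import cv2

streams = {
    "hallway": cv2.VideoCapture("rtsp://192.168.1.10/stream1"),  # placeholder
    "office":  cv2.VideoCapture("rtsp://192.168.1.11/stream1"),  # placeholder
}

def poll(process_frame):
    # process_frame(camera_name, frame) is the hub's classification routine
    for name, capture in streams.items():
        ok, frame = capture.read()
        if ok:
            process_frame(name, frame)
```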
For full information about the latest versions of TASS, visit the official project page.
Technologies Used
- Intel Movidius
- Intel UP2
- Raspberry Pi
- Intel NUC
- Intel RealSense
- TensorFlow (Inception V3, YOLO, FaceNet)
- OpenCV
- Dlib
Repository
https://github.com/iotJumpway/IoT-JumpWay-Intel-Examples/tree/master/Intel-Movidius/TASS