Vehicle Advanced Monitoring System (VAMS)
Avirup Basu
Siliguri, West Bengal
We may face situations where we have to hand our car over to another driver and cannot monitor the driving conditions ourselves. We may also face a devastating situation where we meet with an accident and need to analyse what exactly went wrong. In both cases, the only way out is to monitor live data from the vehicle.
Project status: Under Development
Robotics, HPC, Internet of Things, Artificial Intelligence
Intel Technologies
AI DevCloud / Xeon, Intel FPGA, Intel Python, OpenVINO, Movidius NCS
Overview / Usage
VAMS is a proposed system that monitors a vehicle's internal telemetry data and the outside traffic data and superimposes them into a single system. It is closely aligned with the concept of smart cars, and the system can take multiple add-ons depending on usage or demand. The system has two main components.
- Telemetry data acquisition using CAN BUS
- Monitoring and capturing external information from dash-cam feed
The above components are superimposed into a single data frame and stored locally; once a connection with the cloud is established, the data is uploaded. The first component is handled via the CAN bus, with a background process publishing the data at regular intervals. The second component has an object recognition system at its core, whose primary purpose is to detect the following.
- Density of traffic on the road
- The type of road the vehicle is being driven on
Both parameters are obtained using object recognition trained on the IDD dataset.
The final master node combines these two pieces of data and saves the result locally. Only when internet connectivity is established is the data published to the cloud using the Azure IoT stack.
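The store-and-forward behaviour described above can be sketched as follows. This is an illustrative outline only: the class and function names are assumptions, and in the actual project the publish step goes through the Azure IoT stack rather than a plain callback.

```python
import json
import time
from collections import deque


class StoreAndForward:
    """Buffers combined telemetry + vision records locally and flushes
    them to the cloud only when a connection is available.
    (Illustrative sketch; the real system uses the Azure IoT stack.)"""

    def __init__(self, publish_fn):
        self.buffer = deque()          # local store (a file or DB in practice)
        self.publish_fn = publish_fn   # e.g. an IoT Hub client's send method

    def record(self, telemetry, vision):
        # Superimpose both sources into a single timestamped data frame
        frame = {"ts": time.time(), **telemetry, **vision}
        self.buffer.append(frame)

    def flush(self, connected):
        # Dump buffered frames to the cloud once connectivity returns
        sent = 0
        while connected and self.buffer:
            self.publish_fn(json.dumps(self.buffer.popleft()))
            sent += 1
        return sent
```

While offline, `flush` is a no-op and records simply accumulate; after reconnection a single `flush(connected=True)` drains the backlog in order.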
Methodology / Approach
The methodology is based on two sections.
Edge system
The edge system is based on two components.
- Visual processing system
- Telemetry data acquisition unit
Visual processing system:
The VPS is responsible for the heavy video processing. At the core of the VPS lies the Intel® Distribution of OpenVINO™ toolkit, which performs the actual inferencing. But before inference, a model must be trained on a specific dataset. Let's look at the training side of the VPS.
Training:
The Indian driving dataset was used for training mainly for two reasons.
- It is completely based on Indian road conditions
- It includes features such as drivable and non-drivable objects
The dataset is available at https://idd.insaan.iiit.ac.in
Our application relies on real-time inference on a moving vehicle, which means we need speed over accuracy. That is why MobileNet-SSD was chosen as the object detection architecture. The TensorFlow Object Detection API was used as the training and evaluation pipeline.
Inference:
The model was optimized using the Intel® Distribution of OpenVINO™ toolkit, and inference was then executed on various platforms to compare performance across hardware. Given the hardware constraints on a mobile platform, we had to stick to the NCS2 or an FPGA. Live inference was also carried out with the entire setup mounted on a vehicle.
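As a rough illustration of the optimization step, a frozen MobileNet-SSD graph exported from the TensorFlow Object Detection API can be converted to OpenVINO IR with the Model Optimizer. All paths below are placeholders, and flag names changed across OpenVINO releases, so this should be checked against the installed version's documentation.

```shell
# Convert a frozen MobileNet-SSD graph to OpenVINO IR, using FP16 so the
# model can run on the NCS2 (MYRIAD) plugin. Paths are placeholders.
python mo_tf.py \
  --input_model frozen_inference_graph.pb \
  --tensorflow_object_detection_api_pipeline_config pipeline.config \
  --transformations_config ssd_v2_support.json \
  --data_type FP16 \
  --output_dir ir_model/
```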
Telemetry data acquisition unit:
The telemetry data acquisition unit uses an ELM-327 IC to read data from the vehicle's OBD-II port. The python-obd library was used to gather the data.
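The python-obd library handles PID decoding internally; as an illustration of what happens under the hood, the standard OBD-II formulas for two common mode 01 PIDs can be written in a few lines. The function names are our own; the formulas (RPM from PID 0x0C, speed from PID 0x0D) are from the OBD-II standard.

```python
def decode_rpm(a: int, b: int) -> float:
    """Decode engine RPM from the two data bytes (A, B) of a
    mode 01, PID 0x0C OBD-II response: rpm = (256*A + B) / 4."""
    return (256 * a + b) / 4.0


def decode_speed(a: int) -> int:
    """Mode 01, PID 0x0D returns vehicle speed directly in km/h."""
    return a
```

For example, a PID 0x0C response carrying the bytes A=0x1A, B=0xF8 decodes to 1726.0 rpm.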
Combining the VPS and TDAU:
The two units are combined using the publisher-subscriber model, with MQTT used for local data transfer. The final result is displayed in a web application built with Python Flask.
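The publisher-subscriber pattern that links the two units can be illustrated with a toy in-process bus. This is a sketch of the pattern only, with invented names; the actual system routes messages through an MQTT broker rather than in-memory callbacks.

```python
from collections import defaultdict


class MiniBus:
    """Toy in-process publisher/subscriber bus illustrating how the VPS
    and TDAU exchange data (the real system uses an MQTT broker)."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback for every message on this topic
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Deliver the payload to all subscribers of the topic
        for cb in self.subscribers[topic]:
            cb(payload)
```

A master node would subscribe to both a telemetry topic and a vision topic and merge whatever arrives, which is exactly the decoupling MQTT provides between the two units.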
Technologies Used
- Intel® oneAPI toolkit, as part of the OpenVINO™ toolkit
- Intel® Distribution of OpenVINO™ toolkit
- Intel® Distribution for Python
- Intel® Optimization for TensorFlow
- TensorFlow Object Detection API
- Python Flask
- python-obd
- CAN bus
- MQTT
- Azure IoT stack
- Azure IoT Hub
- Azure Functions
- IDD dataset
Repository
https://github.com/avirup171/vams