Self-Driving Toy Car

Jianjie Liu

Boston, Massachusetts

A proof of concept of a small-scale autonomous vehicle.

Project status: Under Development

Robotics, Artificial Intelligence

Overview / Usage

We will be building a small-scale prototype of an autonomous vehicle through the following means:

  1. Assemble a Raspberry-Pi-controlled racing car mounted with a camera and an ultrasonic sensor
  2. Train a deep learning model to
    a. clone normal driving behavior along a track (the track can be laid out with printing paper along both sides)
    b. detect miniature traffic signs and respond accordingly (a simple detection sketch follows this list)
  3. Detect and steer around obstacles on the track using sensor data
  4. Explore efficient perception and planning algorithms that can run on IoT devices and inference hardware like the Intel Movidius Neural Compute Stick
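
For step 2b, a minimal first pass at sign detection could be plain color thresholding with OpenCV, as in the sketch below. The function name, HSV thresholds, and area cutoff are illustrative assumptions, not code from the repo; a trained classifier would eventually replace this heuristic.

```python
# Hypothetical sketch: flag a probable stop sign as a large red region.
# Thresholds and the minimum area are assumptions for illustration.
import cv2
import numpy as np

def looks_like_stop_sign(frame_bgr, min_area=500):
    """Return True if a sufficiently large red blob is in view."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    low_reds = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255]))
    high_reds = cv2.inRange(hsv, np.array([160, 100, 100]), np.array([179, 255, 255]))
    mask = cv2.bitwise_or(low_reds, high_reds)
    # findContours returns 2 values in OpenCV 4 and 3 in OpenCV 3;
    # indexing from the end works for both.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    return any(cv2.contourArea(c) >= min_area for c in contours)
```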

We hope that this prototype can serve as a tool to investigate

  1. An efficient software stack for a computation-constrained environment such as a Raspberry Pi
  2. How ethical principles can be implemented in the software stack of an intelligent agent

Methodology / Approach

First, we need to rewire the racing car's circuit onto a Raspberry Pi so that we can run software on the Pi to control the car's motion. This step is finished, and you can find more details on the hardware in the GitHub repo. Here we will focus on the software stack of the self-driving vehicle.

A real autonomous vehicle has five major components, as follows:

  1. Perception (Computer Vision, ML)
  2. Sensor Fusion and Localization (LIDAR, Kalman and Particle Filters, Mapping)
  3. Motion Planning and Decision Making (incremental search algorithms like D*, sampling-based planning, planning under uncertainty)
  4. Control
  5. System Integration

However, it is not practical to implement the full software stack of an autonomous vehicle in this project. In the early stage of the project, the focus will be on Control and Computer Vision.

For simplicity, we will decompose the car's motion into two separate and independent axes: the x-axis (left and right) and the y-axis (forward and backward).

To control the car's motion along the x-axis, we will construct a CNN-based behavioral cloning network. The network will be trained on video footage of correct driving behavior on a track (i.e. staying in the middle of the track while turning) and should ideally replicate that behavior on unseen tracks. To imitate the behavior, we feed the network an image of the track and it outputs the steering angle needed to keep the car on the track. We will use NVIDIA's end-to-end driving model for this network. I have implemented a similar behavioral cloning network for a very similar task in a previous project (https://github.com/JohnGee96/CarND-Behavioral-Cloning) and will transfer the same model to this task.
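
Below is a minimal Keras sketch of that architecture. The layer sizes follow NVIDIA's end-to-end driving paper; the input shape, normalization, and training configuration here are assumptions for illustration rather than the exact setup in the repo.

```python
# Minimal sketch of NVIDIA's end-to-end steering network in Keras.
# The 66x200x3 input follows the NVIDIA paper; other details are assumed.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Flatten, Dense

def build_model():
    model = Sequential([
        # Normalize pixels to [-0.5, 0.5] inside the graph
        Lambda(lambda x: x / 255.0 - 0.5, input_shape=(66, 200, 3)),
        Conv2D(24, (5, 5), strides=(2, 2), activation='relu'),
        Conv2D(36, (5, 5), strides=(2, 2), activation='relu'),
        Conv2D(48, (5, 5), strides=(2, 2), activation='relu'),
        Conv2D(64, (3, 3), activation='relu'),
        Conv2D(64, (3, 3), activation='relu'),
        Flatten(),
        Dense(100, activation='relu'),
        Dense(50, activation='relu'),
        Dense(10, activation='relu'),
        Dense(1),  # single output: the steering angle
    ])
    # Behavioral cloning is a regression on recorded steering angles
    model.compile(optimizer='adam', loss='mse')
    return model
```

Training then reduces to fitting this model on (image, steering angle) pairs recorded while driving the track manually.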

To control the car's forward motion, we will first implement a simple decision rule: keep moving forward until encountering an obstacle. We can use data from an ultrasonic sensor to detect whether there is an obstacle in front of the car. Later, we can add road conditions to the forward motion, such as stopping for 3 seconds when the car sees a stop sign.
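
A sketch of this rule, assuming an HC-SR04 ultrasonic sensor on the Pi's GPIO header, might look like the following. The pin numbers, stop distance, and motor functions are placeholders, not the project's actual wiring or code.

```python
# Hypothetical forward-motion loop: drive until the ultrasonic sensor
# reports an obstacle closer than a threshold. Pins and motor functions
# are placeholders for the car's real motor driver code.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24          # placeholder BCM pin numbers
STOP_DISTANCE_CM = 20.0

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    """Fire one ultrasonic ping and convert the echo time to distance."""
    GPIO.output(TRIG, True)
    time.sleep(1e-5)                  # 10 us trigger pulse
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:      # wait for the echo to start
        start = time.time()
    while GPIO.input(ECHO) == 1:      # wait for the echo to end
        end = time.time()
    # Sound travels ~34300 cm/s; halve the round trip.
    return (end - start) * 34300 / 2

def drive_forward():
    pass  # placeholder: engage the drive motor via the motor driver

def stop_motors():
    pass  # placeholder: cut power to the drive motor

try:
    while True:
        if read_distance_cm() < STOP_DISTANCE_CM:
            stop_motors()
        else:
            drive_forward()
        time.sleep(0.05)              # ~20 Hz control loop
finally:
    GPIO.cleanup()
```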

Technologies Used

Intel Movidius Neural Compute Stick, Raspberry Pi, Deep Learning, Computer Vision, and Sensor Fusion.

Repository

https://github.com/JohnGee96/AutoDrive
