We will be building a small-scale prototype of an autonomous vehicle through the following means:
- Assemble a Raspberry-Pi-controlled racing car mounted with a camera and an ultrasonic sensor
- Train a deep learning model to clone regular driving behavior along a track (the track can be made of printing paper laid along both sides)
- Explore efficient perception and planning algorithms that can run on IoT devices like the Raspberry Pi and acceleration hardware like the Intel Movidius Compute Stick
We hope that this prototype can serve as a tool to investigate:
- An efficient software stack in a computation-constrained environment like a Raspberry Pi
- How ethical principles can be implemented in the software stack of an intelligent agent
Final Result: https://youtu.be/xME5ly80URw
Special thanks to Zhaoyang Hu for sourcing the hardware and car assembly!
- Project Objectives
- Result Video
- Software Architecture
- Materials
- Hardware Specs
- Circuit Schematics
- Dependency Installation (On Raspberry Pi 3b)
- Problems Encountered and Solutions
- Resources and References
An autonomous vehicle has 5 major components, as follows:
- Perception (Computer Vision, ML)
- Sensor Fusion and Localization (LIDAR, Kalman, Particle Filters, and Mapping)
- Motion Planning and Decision Making (Incremental Search Alg (D*), Sampling Planning Alg, Planning w/ uncertainty)
- Control
- System Integration
However, it is not practical to implement the full software stack of an autonomous vehicle in this project, so we focus on two components: Control and Computer Vision.
We will be using two different platforms to build our prototype. First, we use a simulator from Udacity to produce data for training an end-to-end deep neural network model published by Nvidia, which is responsible for predicting the vehicle's steering angle from images of the road.
The simulator records a video of the user driving the vehicle and tags each video frame with the corresponding steering angle. We collected two full loops of data (about 400 MB) and trained the network for 8 hours on a 1.4 GHz Intel Core i5 CPU. For more information on training this model, see a separate repository.
Once the neural network model is trained, we deploy it onto a Raspberry Pi wired to both the motor and the servo of an RC car. On board the Pi, we build control APIs responsible for translating the desired control parameters (i.e. steering angle and speed) into pulse-width signals that actuate the motors on the prototype.
For simplicity, we decompose the car's motion into two separate and independent motions: transverse motion (left and right) and longitudinal motion (forward and backward).
To control the car's transverse motion, we will construct a CNN-based end-to-end neural network that takes an image and outputs the steering command needed to keep the car in the middle of the track. This network will be trained on video footage of correct driving behavior (i.e. staying in the middle of the track while turning) and ideally should replicate the same behavior on unseen tracks. At inference time, we feed the network an image of the track and it outputs the steering angle needed to stay on the track. We will use Nvidia's model to implement this behavioral cloning network. I have implemented a similar behavioral cloning network for a very similar task in a different project and will transfer the same model to this one.
To control the car's longitudinal motion, we will first implement a simple decision model as follows: keep moving forward until encountering an obstacle. We can use data from the ultrasonic sensor to detect whether there is any obstacle in front of the car. Later, we can add road conditions to the forward motion, such as stopping for 3 seconds when the car sees a stop sign.
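As a minimal sketch of this decision model (the 30 cm threshold and the function name are illustrative assumptions, not values from the project):

```python
# Minimal sketch of the longitudinal decision model: keep moving forward
# until the ultrasonic sensor reports an obstacle within a threshold.
# The 30 cm threshold is an illustrative assumption.
STOP_DISTANCE_CM = 30

def longitudinal_command(distance_cm):
    """Return 'stop' if an obstacle is too close, otherwise 'forward'."""
    return "stop" if distance_cm < STOP_DISTANCE_CM else "forward"
```

The later stop-sign rule could be layered on top of this function as another condition that overrides `'forward'`.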
- Toy Car (Tamiya TT02 Chassis)
- Raspberry Pi 3b
- Raspberry Pi Camera
- HC-SR05 Ultrasonic Sensor
- SD Card (16 GB minimum)
- Power Bank
- Breadboard
- Ethernet Cable (for direct SSH connection without Wi-Fi)
- Ethernet Adaptor (If necessary)
- Wires
- 540j Motor (comes with Tamiya TT02)
  - Motor Output Voltage: 6.03 V
  - Motor Pulse Width Range: 1200-1469 µs
- Tactic TSX40 High-Torque Servo
  - Input Voltage: 4.8/6.0 V
  - Speed (sec/60°): 0.16/0.13
  - Servo Pulse Width Range: 1150-1850 µs (1500 is neutral)
- MOTOR_PIN = 13
- SERVO_PIN = 19
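As a sketch of how these pins might be driven with the pigpio library, the pulse-width ranges come from the hardware specs above; the `clamp` helper and `demo` wiring are my own illustrative additions:

```python
# Pin assignments from the circuit schematics above
MOTOR_PIN = 13
SERVO_PIN = 19

def clamp(width, lo, hi):
    """Keep a pulse width (in microseconds) inside the hardware's tolerated range."""
    return max(lo, min(hi, width))

def send_commands(pi, motor_us, servo_us):
    """Send clamped pulse widths to the motor (via its ESC) and the steering servo."""
    pi.set_servo_pulsewidth(MOTOR_PIN, clamp(motor_us, 1200, 1469))
    pi.set_servo_pulsewidth(SERVO_PIN, clamp(servo_us, 1150, 1850))

def demo():
    """Run on the Pi only; requires the pigpio daemon (`sudo pigpiod`)."""
    import pigpio  # hardware-only import, kept inside the function
    pi = pigpio.pi()
    send_commands(pi, 1300, 1500)  # gentle forward, wheels at neutral
    pi.stop()
```

Here `set_servo_pulsewidth` drives both outputs, since the ESC accepts the same servo-style pulses as the steering servo.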
- Python 3.5
- Pillow 5.3.0 (Python Imaging Library)
- Matplotlib 2.2.2
- Numpy 1.15.4
- Keras 1.2.1
- Tensorflow 1.8
- OpenCV 3.4.3 (here we install `opencv-python-headless`, an unofficial pre-built OpenCV package, from piwheels)
- pigpio 1.38 (natively installed on Raspbian Linux)
- picamera 1.13 (Natively installed on Raspbian Linux)
- Udacity Self-Driving Car Simulator (For training the end-to-end neural network)
Instructions for installing the dependencies on a Raspberry Pi 3b running Raspbian v9.1 (Linux kernel 4.9.59, armv7l).
This assumes a headless setup (without hooking up to a monitor) with SSH enabled. For more details on headless setup, see this post.
Skip this step if not using a headless setup.
- Try pinging the Raspberry Pi: `ping raspberrypi.local`. This should work if mDNS is enabled (see Setup mDNS for assigning .local domain to Raspberry Pi).
- SSH into it: `ssh pi@raspberrypi.local`

If `raspberrypi.local` is unresponsive:

- Connect via Wi-Fi:
  - Find the Raspberry Pi's IP address (you can find this in your router's DHCP lease allocation table), or use `nmap` to scan your network for it: `nmap -v -sn <ROUTER_IP>/24`
  - Then SSH in with: `ssh pi@<IP_ADDRESS>`
- Connect via an Ethernet cable:
  - Scan the internal network for active IPs: `nmap -v -sn 192.168.0.0/16`
  - Then SSH in with: `ssh pi@<IP_ADDRESS>`
You can also set a static IP address for your convenience. Once you have gained access to the Pi, configure a static internal IP address in `/etc/dhcpcd.conf` as follows:

```
interface eth0
static ip_address=192.168.2.2/24
static routers=192.168.2.1
static domain_name_servers=192.168.2.1

interface wlan0
static ip_address=192.168.2.2/24
static routers=192.168.2.1
static domain_name_servers=192.168.2.1
```

Now you can SSH into the Pi with the static IP: `ssh pi@192.168.2.2`
- Install virtualenv and virtualenvwrapper:

  `sudo pip3 install virtualenv virtualenvwrapper`

- Append the following to `~/.bashrc`:

  ```
  # virtualenv and virtualenvwrapper
  export WORKON_HOME=$HOME/.virtualenvs
  export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3
  source /usr/local/bin/virtualenvwrapper.sh
  ```

- Run `source ~/.bashrc`.
Below are some of the basic commands for virtualenvwrapper. Refer to the docs for more details:

- `mkvirtualenv`: Make a new virtual environment.
- `workon`: Activate/switch to a virtual environment. Remember, you can have as many environments as you'd like.
- `deactivate`: Jump out of a virtual environment; you'll then be working with your system Python.
- `rmvirtualenv`: Delete a virtual environment.
- Create a new virtual environment called `autodrive`:

  `mkvirtualenv autodrive -p python3`
- Install the required shared libraries:

  ```
  sudo apt-get update
  sudo apt-get install libatlas-base-dev
  sudo apt-get install libjasper-dev
  sudo apt-get install libhdf5-dev
  sudo apt-get install libhdf5-serial-dev
  sudo apt-get install python3-gi-cairo
  ```
- Install all dependencies with pip (this can take up to 10 minutes):

  `pip3 install -r requirements.txt`
- Toggle on system-wide packages (a hack for resolving matplotlib's dependencies in a Python 3.5.x environment). This also brings into the virtual environment the `picamera` and `pigpio` libraries, which are natively installed on Raspbian Linux:

  `toggleglobalsitepackages autodrive`

  NOTE: `toggleglobalsitepackages` makes the virtual environment include global packages. To turn this off, simply call `toggleglobalsitepackages autodrive` again.
- Spin up the virtual environment: `workon autodrive`
- Turn on the Pi GPIO daemon: `sudo pigpiod`
- Start the pipeline: `python3 autoDrive.py`
- Troublesome remote access (SSH) to the Raspberry Pi due to a dynamic IP

  Solution 1: Create a .local domain name via mDNS and/or configure a static IP through the DHCP request (see Setup mDNS for assigning .local domain to Raspberry Pi in Resources and References).

  Solution 2: Configure a static internal IP address in `/etc/dhcpcd.conf` as follows:

  ```
  interface eth0
  static ip_address=192.168.2.2/24
  static routers=192.168.2.1
  static domain_name_servers=192.168.2.1

  interface wlan0
  static ip_address=192.168.2.2/24
  static routers=192.168.2.1
  static domain_name_servers=192.168.2.1
  ```
- Streaming video from the Raspberry Pi camera

  Host a web server or open a socket from VLC (see Setup video streaming in various methods).

  To turn on the video stream on the Pi:

  ```
  /opt/vc/bin/raspivid -o - -t 0 -hf -w 640 -h 360 -fps 25 | cvlc -vvv stream:///dev/stdin --sout '#standard{access=http,mux=ts,dst=:8090}' :demux=h264
  ```

  To open the video stream in a VLC player: `http://<IP-OF-THE-RPI>:8090`

  NOTE: Streaming through a VLC player introduces extensive latency (2-3 seconds), even over an Ethernet connection.
- Controlling the RC car with the Raspberry Pi

  Connect the Raspberry Pi's GPIO pins to the motor and servo of the RC car according to the circuit schematics.
- Installing OpenCV on the Raspberry Pi

  See the post Install OpenCV on Raspberry Pi in Resources and References.
- NumPy and h5py are missing shared object library files

  Install `libatlas-base-dev` and `libhdf5-serial-dev`.
- Matplotlib has trouble connecting to its GUI framework dependencies (GTK, cairo, Qt4, etc.). This is not a problem unless you want to show figures interactively, and it can also be avoided by plotting inline in a Jupyter Notebook.
- When matplotlib is installed natively in the venv, it is likely to complain about the missing `gi` module among its dependencies. If so, install `gi` and `cairo` system-wide as follows and toggle on system-wide package access:

  ```
  sudo apt-get install python3-gi-cairo
  toggleglobalsitepackages autodrive
  ```

  NOTE: `toggleglobalsitepackages` will toggle off the intended barrier between system-wide dependencies and the virtual environment.

- You may see the error "The Gtk3Agg backend is not known to work on Python 3.x." because `pycairo` is not compatible with Python 3.x. Install a local version of `cairocffi`:

  `pip3 install cairocffi`
- Troubleshooting TensorFlow-backed Keras: if you get the error "ValueError: Tensor Tensor is not an element of this graph" while making predictions with a pretrained model, run the pretrained model on dummy data directly after initializing it. See this post (though in Chinese) for details.
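The warm-up trick can be sketched as a small helper that works with any Keras-style model exposing `predict` (the input shape in the usage comment is an assumption; substitute your model's actual input shape):

```python
import numpy as np

def warm_up(model, input_shape):
    """Run one dummy prediction right after loading a model so the
    TensorFlow graph is fully built before real predictions, avoiding
    the 'Tensor is not an element of this graph' error."""
    dummy = np.zeros((1,) + tuple(input_shape), dtype=np.float32)
    return model.predict(dummy)

# Usage sketch (the shape is an assumption for Nvidia-style input images):
# model = load_model('model.h5')
# warm_up(model, (66, 200, 3))
```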
- Develop a basic software-hardware interface

  Build the foundational control software by mapping the tolerable pulse-width range of the hardware to measurable actions, for example, turning the servo 15 degrees left or running the motor at 50% power (see `./control`).
Mapping the relationship between PWM signal and the induced steering angle
The GPIO library provides the
set_servo_pulsewidth
method to control the width of a pulse signal within the range of[500-2500]
. We conducted an experiment to measure the steering angle of the servo induced from this pulse width range (See ./resources/servo_experiment):There is linear relationship between pulse width and the induced steering angle.
- Left Turn (width: 1200 - 1450):
17.9x + 1439
- Right Turn (width: 1550 - 1800):
15.9x + 1561
Through more test on the hardware, I found it more fitting to use one linear mapping, particularly
17.9x + 1439
, as it produce more balanced steering angle around the range between[-18, 18]
for steering angle degree and[1200, 1780]
for pulse width. - Left Turn (width: 1200 - 1450):
- Dramatically reduce the time it takes to record an image by using the keyword `use_video_port=True`

  The picamera API can record multiple images without re-initializing the buffer. We can take advantage of this with the following API method:

  `camera.capture_continuous(stream, 'jpeg', use_video_port=True)`
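A sketch of a capture loop built on that call; the `grab_frames` helper and the frame limit are my own illustrative additions, written against any object exposing a picamera-style `capture_continuous` so the logic can be exercised without the hardware:

```python
import io

def grab_frames(camera, max_frames):
    """Capture up to max_frames JPEG frames through the camera's video
    port, avoiding buffer re-initialization between captures."""
    frames = []
    stream = io.BytesIO()
    for _ in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
        frames.append(stream.getvalue())  # snapshot the JPEG bytes
        stream.seek(0)
        stream.truncate()                 # reuse the stream for the next frame
        if len(frames) >= max_frames:
            break
    return frames

# On the Pi (sketch):
# from picamera import PiCamera
# with PiCamera(resolution=(640, 360)) as camera:
#     frames = grab_frames(camera, 10)
```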
- We can also use the Movidius Compute Stick to accelerate the inference time of our neural network. See link.
- Setup mDNS for assigning .local domain to Raspberry Pi
- Setup static IP through DHCP
- Setup video streaming in various methods
- Compile FFmpeg in Raspberry Pi to stream video over web server
- Virtualenvwrapper Doc
- Install OpenCV on Raspberry Pi
- Working with Matplotlib in Virtual environments
- Raspberry Pi Controlled ESC and Motor
- PWM and Motor Motion
- PWM Frequency for Controlling Servo Rotation Angle
- GPIO Electrical Specifications
- PIGPIO Library API
- Behavioral Cloning - Udacity Self-Driving Car Nanodegree Project
- Nvidia's End to End Paper