AutoDrive: Building a self-driving toy car

[Image: hardware]

Project Objectives:

We will build a small-scale prototype of an autonomous vehicle by the following means:

  1. Assemble a Raspberry Pi-controlled racing car mounted with a camera and an ultrasonic sensor
  2. Train a deep learning model to clone regular driving behavior along a track (the track can be marked out with printing paper laid along its two sides)
  3. Explore efficient perception and planning algorithms that can run on IoT devices like the Raspberry Pi and on accelerator hardware like the Intel Movidius Compute Stick

We hope that this prototype can serve as a tool to investigate

  1. An efficient software stack in a computation-constrained environment like a Raspberry Pi
  2. How ethical principles can be implemented in the software stack of an intelligent agent

Result Video

Final Result: https://youtu.be/xME5ly80URw

Acknowledgement

Special thanks to Zhaoyang Hu for sourcing the hardware and car assembly!

Table of Contents:

  1. Project Objectives
  2. Result Video
  3. Software Architecture
  4. Materials
  5. Hardware Specs
  6. Circuit Schematics
  7. Dependency Installation (On Raspberry Pi 3b)
  8. Problems Encountered and Solutions
  9. Resources and References

Software Architecture

[Image: self-driving car components]

An autonomous vehicle has five major components, as follows:

  1. Perception (Computer Vision, ML)
  2. Sensor Fusion and Localization (LIDAR, Kalman and particle filters, and mapping)
  3. Motion Planning and Decision Making (incremental search algorithms (D*), sampling-based planning algorithms, planning under uncertainty)
  4. Control
  5. System Integration

However, it is not practical to implement the full software stack of an autonomous vehicle in this project. Our focus will be on Control and Computer Vision.

System Design

[Image: system design]

We will use two different platforms to build our prototype. A simulator from Udacity produces the data used to train an end-to-end deep neural network model published by Nvidia, which predicts the vehicle's steering angle from images of the road.

[Image: simulation example]

The simulator records a video of the user driving the vehicle and tags each video frame with the vehicle's steering angle at that moment. We collected two full loops of data (about 400MB) and trained the network for 8 hours on a 1.4 GHz Intel Core i5 CPU. For more information on training this model, see the separate repository.

Once the neural network model is trained, we deploy it onto a Raspberry Pi wired to both the motor and the servo of an RC car. On board the Pi, we build control APIs responsible for translating the desired control parameters (i.e. steering angle and speed) into the pulse-width signals that actuate the motor and servo on the prototype.

Technical Overview

For simplicity, we decompose the car's motion into two separate and independent components: transverse motion (left and right) and longitudinal motion (forward and backward).

To control the car's transverse motion, we will construct a CNN-based end-to-end neural network that takes an image and outputs the steering command needed to keep the car in the middle of the track. The network will be trained on video footage of correct driving behavior (i.e. staying in the middle of the track while turning) and should ideally replicate that behavior on unseen tracks. At inference time, we feed the network an image of the track, and it outputs the steering angle the car needs to stay on the track. We will use Nvidia's model to implement this behavioral cloning network. I have implemented a similar behavioral cloning network in a different project with a very similar task and will transfer the same model to this one.
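
For reference, a minimal Keras sketch of Nvidia's architecture (as described in their end-to-end paper, linked under Resources and References) looks roughly like this. The activation choice and compile settings are assumptions on my part; the exact training code lives in the separate repository:

    from keras.models import Sequential
    from keras.layers import Lambda, Convolution2D, Flatten, Dense

    # Nvidia's end-to-end network: normalization, 5 conv layers, 3 fully
    # connected layers, and a single steering-angle output (Keras 1.x API).
    model = Sequential()
    model.add(Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)))
    model.add(Convolution2D(24, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(10, activation='relu'))
    model.add(Dense(1))                 # predicted steering angle
    model.compile(optimizer='adam', loss='mse')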

To control the car's longitudinal motion, we will first implement a simple decision model: keep moving forward until encountering an obstacle. We can use data from an ultrasonic sensor to detect whether there is any obstacle blocking the car's path. Later, we can layer road rules onto the forward motion, such as stopping for 3 seconds when the car sees a stop sign.
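
As a sketch of that decision model, the snippet below polls an HC-SR05-style sensor with pigpio and stops short of an obstacle. The trigger/echo pin numbers, the 30 cm threshold, and the stop_motor/drive_forward helpers are illustrative assumptions, not values from this project:

    import time
    import pigpio

    TRIG_PIN = 23            # hypothetical wiring; not this project's pin assignment
    ECHO_PIN = 24            # hypothetical wiring
    STOP_DISTANCE_CM = 30.0  # illustrative threshold

    pi = pigpio.pi()
    pi.set_mode(TRIG_PIN, pigpio.OUTPUT)
    pi.set_mode(ECHO_PIN, pigpio.INPUT)

    def read_distance_cm():
        """Fire a 10 us trigger pulse and time the echo pulse (simple polling)."""
        pi.gpio_trigger(TRIG_PIN, 10, 1)
        timeout = time.time() + 0.02
        while pi.read(ECHO_PIN) == 0:          # wait for the echo to start
            if time.time() > timeout:
                return None                    # no echo: nothing in range
        rise = time.time()
        while pi.read(ECHO_PIN) == 1:          # measure how long the echo stays high
            pass
        return (time.time() - rise) * 17150    # half the round trip at ~343 m/s, in cm

    while True:
        distance = read_distance_cm()
        if distance is not None and distance < STOP_DISTANCE_CM:
            stop_motor()                       # hypothetical helper: cut motor power
        else:
            drive_forward()                    # hypothetical helper: keep moving
        time.sleep(0.05)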

Materials:

  1. Toy Car (Tamiya TT02 Chassis)
  2. Raspberry Pi 3b
  3. Raspberry Pi Camera
  4. HC-SR05 Ultrasonic Sensor
  5. SD Card (16 GB minimum)
  6. Power Bank
  7. Breadboard
  8. Ethernet Cable (for a direct SSH connection when no network is available)
  9. Ethernet Adaptor (If necessary)
  10. Wires

Hardware Specs:

  1. 540j Motor (Comes with Tamiya TT02)

    • Motor Output Voltage: 6.03V
    • Motor Pulse Width Range: 1200-1469 µs
  2. Tactic TSX40 High-Torque Servo

    • Input Voltage: 4.8/6.0V
    • Transit Speed (per 60°): 0.16/0.13 sec
    • Servo Pulse Width Range: 1150-1850 µs (1500 µs is neutral)

Circuit Schematics

[Image: circuit schematic]

Raspberry Pi GPIO Pin

[Image: Raspberry Pi GPIO pinout]

  • MOTOR_PIN = 13
  • SERVO_PIN = 19
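
A minimal pigpio sketch using these pins (the pulse-width values come from the hardware specs above; the pigpio daemon must be running, see Starting the Car below):

    import pigpio

    MOTOR_PIN = 13
    SERVO_PIN = 19

    pi = pigpio.pi()                          # connects to the pigpio daemon (sudo pigpiod)
    pi.set_servo_pulsewidth(SERVO_PIN, 1500)  # 1500 us = neutral steering
    pi.set_servo_pulsewidth(MOTOR_PIN, 1250)  # within the motor's 1200-1469 us range
    # ... drive ...
    pi.set_servo_pulsewidth(SERVO_PIN, 0)     # width 0 switches the pulses off
    pi.set_servo_pulsewidth(MOTOR_PIN, 0)
    pi.stop()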

Software Dependencies

  1. Python 3.5
  2. Pillow 5.3.0 (Python Imaging Library)
  3. Matplotlib 2.2.2
  4. Numpy 1.15.4
  5. Keras 1.2.1
  6. Tensorflow 1.8
  7. OpenCV 3.4.3 (here we install opencv-python-headless from piwheels; it is an unofficial pre-built OpenCV package)
  8. pigpio 1.38 (Natively installed on Raspbian Linux)
  9. picamera 1.13 (Natively installed on Raspbian Linux)
  10. Udacity Self-Driving Car Simulator (For training the end-to-end neural network)

Dependency Installation

Instructions for installing the dependencies on a Raspberry Pi 3b running Raspbian 9.1 (Linux kernel 4.9.59, armv7l).

Connect to Raspberry Pi

This assumes a headless setup (without hooking up to a monitor) with SSH enabled. For more details on headless setup, see this post.

Skip this step if not using a headless setup.

  1. Try pinging the raspberry: ping raspberrypi.local

  2. SSH into it: ssh pi@raspberrypi.local

If raspberrypi.local is unresponsive:

  1. Connect via Wi-Fi

    • Find the Raspberry Pi's IP (you can find this in your router's DHCP lease allocation table)
      • You can use nmap to scan your network to locate the Raspberry Pi: nmap -v -sn <ROUTER_IP>/24
    • ssh pi@<IP_ADDRESS>
  2. Connect via ethernet cable

    • Scan the internal network for active IPs: nmap -v -sn 192.168.0.0/16
    • ssh pi@<IP_ADDRESS>

You can also set a static IP address for convenience.

Once you have gained access to the Pi, configure a static internal IP address in /etc/dhcpcd.conf as follows:

    interface eth0
    static ip_address=192.168.2.2/24
    static routers=192.168.2.1
    static domain_name_servers=192.168.2.1

    interface wlan0
    static ip_address=192.168.2.2/24
    static routers=192.168.2.1
    static domain_name_servers=192.168.2.1

Now you can SSH into the Pi at its static IP: ssh pi@192.168.2.2

Install virtualenv and virtualenvwrapper

  1. sudo pip3 install virtualenv virtualenvwrapper

  2. Append the following to ~/.bashrc

     # virtualenv and virtualenvwrapper
     export WORKON_HOME=$HOME/.virtualenvs
     export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3
     source /usr/local/bin/virtualenvwrapper.sh
    
  3. Run source ~/.bashrc.

    • Below are some of the basic commands for virtualenvwrapper. Refer to the doc for more details:

      • mkvirtualenv : Make a new virtual environment.
      • workon : Activate/switch to a virtual environment. Remember, you can have as many environments as you’d like.
      • deactivate : Exits the virtual environment and returns you to the system Python.
      • rmvirtualenv : Deletes a virtual environment.
  4. Create a new virtual environment called autodrive

     mkvirtualenv autodrive -p python3
    

Install Full Dependencies

  1. Install required shared libraries

     sudo apt-get update
     sudo apt-get install libatlas-base-dev
     sudo apt-get install libjasper-dev
     sudo apt-get install libhdf5-dev
     sudo apt-get install libhdf5-serial-dev
     sudo apt-get install python3-gi-cairo
    
  2. Install all dependencies with pip (this can take up to 10 minutes)

     pip3 install -r requirements.txt
    
  3. Toggle on system-wide packages (a hack for resolving matplotlib's dependencies in a Python 3.5.x environment). This also brings the picamera and pigpio libraries, which are natively installed on Raspbian, into the virtual environment.

     toggleglobalsitepackages autodrive
    

    NOTE: toggleglobalsitepackages makes the virtual environment include global packages. To turn this off, simply run toggleglobalsitepackages autodrive again.

Starting the Car

  1. Spin up the virtual environment

     workon autodrive
    
  2. Turn on the Pi GPIO daemon

     sudo pigpiod
    
  3. Start the pipeline

     python3 autoDrive.py
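
Conceptually, the pipeline started by autoDrive.py ties the earlier pieces together: capture a frame, let the network predict a steering angle, translate it into a pulse width, and stop for obstacles. The sketch below is an illustration of that loop, not the repository's actual code; preprocess, angle_to_pulsewidth, and obstacle_ahead are hypothetical helpers:

    import io
    import numpy as np
    import picamera
    import pigpio
    from keras.models import load_model

    SERVO_PIN, MOTOR_PIN = 19, 13

    pi = pigpio.pi()
    model = load_model('model.h5')            # hypothetical path to the trained network

    with picamera.PiCamera(resolution=(320, 160)) as camera:
        stream = io.BytesIO()
        for _ in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
            stream.seek(0)
            frame = preprocess(stream.read())  # hypothetical: decode + resize to model input
            angle = float(model.predict(frame[np.newaxis])[0])
            pi.set_servo_pulsewidth(SERVO_PIN, angle_to_pulsewidth(angle))  # hypothetical helper
            pi.set_servo_pulsewidth(MOTOR_PIN, 0 if obstacle_ahead() else 1250)  # hypothetical check
            stream.seek(0)
            stream.truncate()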
    

Problems Encountered and Solutions:

Initial Setup

  1. Troublesome Remote Access (SSH) to Raspberry Pi due to dynamic IP

    Solution 1:

    Create a .local domain name via mDNS and/or configure a static IP through the DHCP request (see Setup mDNS for assigning .local domain to Raspberry Pi under Resources and References).

    Solution 2:

    Configure static internal ip address in /etc/dhcpcd.conf as follow:

     interface eth0
     static ip_address=192.168.2.2/24
     static routers=192.168.2.1
     static domain_name_servers=192.168.2.1
    
     interface wlan0
     static ip_address=192.168.2.2/24
     static routers=192.168.2.1
     static domain_name_servers=192.168.2.1
    
  2. Stream video from Raspberry Cam

    Host a web server or open a socket from VLC (see Setup video streaming in various methods under Resources and References).

    To turn on video stream on the Pi:

     /opt/vc/bin/raspivid -o - -t 0 -hf -w 640 -h 360 -fps 25|cvlc -vvv stream:///dev/stdin --sout '#standard{access=http,mux=ts,dst=:8090}' :demux=h264
    

    To open the video stream on VLC player:

     http://<IP-OF-THE-RPI>:8090
    

    NOTE: Streaming with the VLC player introduces considerable latency (2-3 seconds), even over an Ethernet connection.

  3. Controlling RC car with Raspberry Pi

    Connect Raspberry Pi's GPIO with the motor and servo of the RC car according to the hardware schematics

Installing Dependencies

  1. Installing OpenCV on the Raspberry Pi

    See Install OpenCV on Raspberry Pi under Resources and References.

  2. Numpy and h5py are missing shared object libraries.

    Install libatlas-base-dev and libhdf5-serial-dev.

  3. Matplotlib has problems connecting to its GUI framework dependencies (GTK, cairo, qt4, etc.). This is not an issue unless you want to show figures interactively; it can also be avoided by plotting inline in a Jupyter Notebook.

    1. When matplotlib is installed natively in the venv, it is likely to complain about a missing gi module. If so, install the system-wide gi and cairo packages as follows and toggle on system-wide package access:

       sudo apt-get install python3-gi-cairo
       toggleglobalsitepackages autodrive
      

      NOTE: toggleglobalsitepackages will toggle off the intended barrier between system-wide dependencies and the virtual environment.

    2. You may see the error "The Gtk3Agg backend is not known to work on Python 3.x." because pycairo is not compatible with Python 3.x. Install a local version of cairocffi:

       pip3 install cairocffi
      
  4. Troubleshooting TensorFlow-backed Keras: if you get the error "ValueError: Tensor Tensor is not an element of this graph" while making predictions with a pretrained model, run the pretrained model on dummy data directly after initializing it. See this post (in Chinese) for details.
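
    A minimal warm-up sketch following that advice (the model path and input shape here are assumptions):

     import numpy as np
     from keras.models import load_model

     model = load_model('model.h5')       # hypothetical path
     dummy = np.zeros((1, 66, 200, 3))    # must match the model's input shape
     model.predict(dummy)                 # finalizes the graph before real predictions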

Control Motor Motion via GPIO

  1. Develop basic software-hardware interface

    Build the foundational software layer by mapping the hardware's tolerable pulse-width range to measurable actions, e.g., turning the servo 15 degrees left or running the motor at 50% power (see ./control).

  2. Mapping the relationship between PWM signal and the induced steering angle

    The pigpio library provides the set_servo_pulsewidth method to set the width of the pulse signal within the range of 500-2500 µs. We conducted an experiment to measure the steering angle of the servo induced across this pulse-width range (see ./resources/servo_experiment):

    [Plots: PWM to steering angle; average steering angle (right); average steering angle (left)]

    There is a linear relationship between the pulse width and the induced steering angle (x below is the steering angle in degrees):

    • Left turn (width 1200-1450): 17.9x + 1439
    • Right turn (width 1550-1800): 15.9x + 1561

    Through more tests on the hardware, I found it better to use a single linear mapping, namely 17.9x + 1439, as it produces more balanced steering angles across the range of [-18, 18] degrees for the steering angle and [1200, 1780] for the pulse width.
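
    As a sketch, that single mapping translates into code as follows (the clamp to the measured angle range is my addition):

     import pigpio

     SERVO_PIN = 19
     pi = pigpio.pi()

     def angle_to_pulsewidth(angle_deg):
         """Map a steering angle in degrees to a servo pulse width in us,
         using the single linear fit 17.9 * x + 1439 measured above."""
         angle_deg = max(-18.0, min(18.0, angle_deg))   # clamp to the measured range
         return int(17.9 * angle_deg + 1439)

     pi.set_servo_pulsewidth(SERVO_PIN, angle_to_pulsewidth(10.0))  # 10 degrees -> 1618 us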

Optimizing Pipeline

  1. Dramatically reduce the time it takes to record an image by passing the keyword argument use_video_port=True.

    The picamera API can record multiple images without re-initializing the capture buffer. We can take advantage of this with the following API method (see the full sketch after this list):

     camera.capture_continuous(stream, 'jpeg', use_video_port=True)
    
  2. We can also use the Movidius Compute Stick to accelerate the inference time of our neural network. See the link under Resources and References.
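
Expanding item 1 above, a minimal continuous-capture sketch (the resolution, framerate, and 100-frame cutoff are illustrative choices):

    import io
    import time
    import picamera

    with picamera.PiCamera(resolution=(320, 240), framerate=30) as camera:
        time.sleep(2)                       # let the sensor settle
        stream = io.BytesIO()
        for i, _ in enumerate(camera.capture_continuous(stream, 'jpeg', use_video_port=True)):
            jpeg_bytes = stream.getvalue()  # one encoded frame, ready to process
            stream.seek(0)                  # rewind and reuse the same buffer
            stream.truncate()
            if i == 99:
                break                       # stop after 100 frames for this demo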

Resources and References:

Initial Setup Ref

  1. Setup mDNS for assigning .local domain to Raspberry Pi
  2. Setup static IP through DHCP
  3. Setup video streaming in various methods
  4. Compile FFmpeg in Raspberry Pi to stream video over web server

Installing Dependencies Ref

  1. Virtualenvwrapper Doc
  2. Install OpenCV on Raspberry Pi
  3. Working with Matplotlib in Virtual environments

Pulse Width Modulation and GPIO Ref

  1. Raspberry Pi Controlled ESC and Motor
  2. PWM and Motor Motion
  3. PWM Frequency for Controlling Servo Rotation Angle
  4. GPIO Electrical Specifications
  5. PIGPIO Library API

End to End Neural Network Ref

  1. Behavioral Cloning - Udacity Self-Driving Car Nanodegree Project
  2. Nvidia's End to End Paper

Optimizing the Pipeline

  1. Picamera multi-threading
  2. Picamera speed up
  3. Run Keras Model on Movidius Compute Stick
