

CarND Capstone Project: System Integration
It’s not about the pieces but how they work together.

Team

  • James Dunn, lead
  • Oleg Leizerov
  • Aman Agarwal
  • Rajesh Bhatia
  • Yousof Ebneddin

Objective

Create code to drive a vehicle around a closed-circuit test track, both in a Unity-based simulator and in a real-world Lincoln MKZ. This repository contains all ROS nodes implementing the core functionality of an autonomous vehicle system.

Results

From the Udacity review: "Excellent work here! The car drove very smoothly around the waypoints, and made a full stop for the red light. Well done!"

  • Video from the dash camera onboard the test vehicle: on Vimeo (Drive-by-wire is engaged at 2s and disengaged at 38s.)
  • Point cloud visualization: on Vimeo
  • A map of the test run can be found here
  • Log file, ROS bag, and feedback: here

Below is a visualization of the lidar point cloud from the team's test run on the autonomous Lincoln.

Implementation Notes

The diagram below illustrates the system architecture. The autonomous vehicle controller is composed of three major units: perception, planning, and control.

System architecture diagram. Legend: the letters a-k indicate published ROS topics (a node-wiring sketch follows the list):

  a: /camera/image_raw
  b: /current_pose
  c: /current_velocity
  d: /vehicle/dbw_enabled
  e: /traffic_waypoint
  f: /base_waypoints
  g: /final_waypoints
  h: /twist_cmd
  i: /vehicle/throttle_cmd
  j: /vehicle/brake_cmd
  k: /vehicle/steering_cmd
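
As a concrete illustration of how these topics connect, below is a minimal sketch of a waypoint-updater-style node wiring into topics b, e, f, and g. The message types (PoseStamped, Int32, and the project's styx_msgs/Lane) follow the Udacity starter code; treat the class and callback names as illustrative rather than a verbatim excerpt of this repository.

import rospy
from geometry_msgs.msg import PoseStamped
from std_msgs.msg import Int32
from styx_msgs.msg import Lane

class WaypointUpdater(object):
    def __init__(self):
        rospy.init_node('waypoint_updater')
        # inputs: b (/current_pose), f (/base_waypoints), e (/traffic_waypoint)
        rospy.Subscriber('/current_pose', PoseStamped, self.pose_cb)
        rospy.Subscriber('/base_waypoints', Lane, self.base_cb)
        rospy.Subscriber('/traffic_waypoint', Int32, self.traffic_cb)
        # output: g (/final_waypoints), consumed by the waypoint follower
        self.final_pub = rospy.Publisher('/final_waypoints', Lane, queue_size=1)
        rospy.spin()

    def pose_cb(self, msg):
        self.pose = msg                      # latest vehicle pose

    def base_cb(self, msg):
        self.base_waypoints = msg.waypoints  # full track, published once

    def traffic_cb(self, msg):
        self.stop_idx = msg.data             # waypoint index of the next red light, -1 if none

if __name__ == '__main__':
    WaypointUpdater()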

Perception

We employ the MobileNet architecture to efficiently detect and classify traffic lights, applying transfer learning to further train two convolutional neural networks, one for each mode of operation (an inference sketch follows the list):

  • simulator mode: classifies whole images as either red or none. The model was trained on several datasets using the TensorFlow Image Retraining Example (tutorial: https://goo.gl/HgmiVo, code: https://goo.gl/KdVcMi).
  • test-site mode: we employ the "SSD: Single Shot MultiBox Detector" framework to locate a bounding box around a traffic light. We fine-tuned the pre-trained ssd_mobilenet_v1_coco model using the TensorFlow Object Detection API. The training dataset includes camera images from the training, reference, and review rosbags.
  • more...
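
For simulator mode, inference with the retrained network reduces to loading a frozen graph and reading its softmax output. A minimal sketch, assuming the TensorFlow 1.x API and the tensor names produced by the Image Retraining Example ('input:0', 'final_result:0'); the model path and label order are hypothetical:

import numpy as np
import tensorflow as tf  # TensorFlow 1.x API

def load_graph(pb_path):
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(pb_path, 'rb') as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
    return graph

graph = load_graph('light_classifier.pb')              # hypothetical model path
sess = tf.Session(graph=graph)
image_in = graph.get_tensor_by_name('input:0')         # retraining-example tensor names
softmax = graph.get_tensor_by_name('final_result:0')

def classify(image):
    # image: HxWx3 float array, already resized to the network input size
    probs = sess.run(softmax, feed_dict={image_in: image[None, ...]})[0]
    return ('red', 'none')[int(np.argmax(probs))]      # assumed label order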

Planning

The waypoint updater node publishes a queue of n waypoints ahead of the vehicle position, each with a target velocity. For the simulator, n=100 is sufficient; for the site (the real-world test track), we reduce it to n=20. We dequeue traversed waypoints and enqueue newly visible ones, preserving and reusing those in the middle, as sketched below. When a traffic-light state changes, the entire queue is rebuilt. The vehicle stops at the final base waypoint. more...
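
A minimal sketch of that sliding window, tracking waypoint indices in a deque so traversed points are dropped from the front and newly visible ones appended at the back (the function and variable names are illustrative, not the node's actual API):

from collections import deque

LOOKAHEAD = 100  # n=100 in the simulator; n=20 at the test site

def update_queue(queue, closest_idx, total_waypoints):
    """Slide the window of waypoint indices forward to closest_idx."""
    while queue and queue[0] < closest_idx:
        queue.popleft()                  # dequeue traversed waypoints
    start = queue[-1] + 1 if queue else closest_idx
    for i in range(start, min(closest_idx + LOOKAHEAD, total_waypoints)):
        queue.append(i)                  # enqueue newly visible waypoints
    return queue                         # middle entries are reused as-is

q = update_queue(deque(), 0, 1000)       # initial fill: indices 0..99
q = update_queue(q, 3, 1000)             # advance: 3 popped, 3 appended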

Control

The drive-by-wire node adjusts the throttle and brakes according to the velocity targets published by the waypoint follower (which is informed by the waypoint updater node). If the list of waypoints contains a series of descending velocity targets, the PID velocity controller (in the twist controller component of DBW, sketched below) attempts to match them. more...
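
At the heart of that twist controller is a standard PID loop. A minimal sketch, with illustrative gains and saturation limits (the actual gains and reset behavior live in the DBW code):

class PID(object):
    def __init__(self, kp, ki, kd, lo=-1.0, hi=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.lo, self.hi = lo, hi        # clamp output to the actuator range
        self.integral = 0.0
        self.prev_error = 0.0

    def reset(self):
        # call when drive-by-wire is disengaged so the integral term
        # does not wind up while a human is driving
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        # error = target_velocity - current_velocity, dt in seconds
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(self.hi, max(self.lo, u))

controller = PID(kp=0.3, ki=0.1, kd=0.0)         # illustrative gains
throttle = controller.step(error=2.0, dt=0.02)   # 50 Hz control loop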

Operation

There are three modes in which the controller operates:

  • site: launched when running at the test site. This mode can be run simultaneously with a rosbag to test the traffic light classifier (see Real world testing below).
  • sitesim: emulates the test site in the simulator at the first traffic light.
  • styx: launched when using the term3 simulator. The simulator communicates through server.py and bridge.py.

These modes are started by roslaunch. For example, to run the styx (simulator) version we run:

roslaunch launch/styx.launch

Additional Specifications

Traffic light image classification
Waypoint updater
Drive-by-wire

References

Traffic Light Detection and Classification
SSD: Single Shot MultiBox Detector
Machine learning
MobileNets
Transfer learning
Pure Pursuit Algorithm
Quaternion mathematics
Quaternions online visualization
PID control


This is the repository for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project wiki here.

Native Installation

  • Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.

  • If using a virtual machine to install Ubuntu, use at least the following configuration:

    • 2 CPUs
    • 2 GB system memory
    • 25 GB of free hard drive space

    The Udacity-provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using it.

  • Follow these instructions to install ROS

  • Install Dataspeed DBW

  • Download the Udacity Simulator.

Docker Installation

Install Docker

Build the Docker image

docker build . -t capstone

Run the Docker container

docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone

Usage

  1. Clone the project repository
git clone https://github.com/level5-engineers/system-integration.git
  2. Install Python dependencies
cd system-integration
pip install -r requirements.txt
  3. Build the controller
cd ros
catkin_make
  4. In a new terminal window, start roscore
roscore
  5. Start the simulator, select screen resolution 800x600, click SELECT, and uncheck the Manual checkbox. Ideally, run the simulator in the host environment (outside of the virtual machine).

  6. In a new terminal window, start the controller

cd system-integration/ros
source devel/setup.sh
roslaunch launch/styx.launch

Real world testing

  1. Download the training bag that was recorded on the Udacity self-driving car (a bag demonstrating the correct predictions in autonomous mode can be found here)
  2. Unzip the file
unzip traffic_light_bag_files.zip
  3. Play the bag file
rosbag play -l traffic_light_bag_files/loop_with_traffic_light.bag
  4. Launch your project in site mode
cd system-integration/ros
roslaunch launch/site.launch
  5. Confirm that traffic light detection works on real-life images
