# Lane Assist
The lane assist module is responsible for keeping the vehicle in its lane. The lane assist system consists of three parts: lane detection, path generation, and path following. The lane detection uses a sliding-window search algorithm to detect the lines in the image. After the lines are detected, they are filtered so that only the lanes on our part of the road remain.

The path generation module is responsible for generating a path based on the detected lines and the requested lane. The current implementation uses a centerline to generate the path.

The final module is the path following module. Its responsibility is to follow the path produced by the path generation, which it does using a PID controller.
- Usage
  - LaneAssist class
  - StopLineAssist class
- How does it work?
## Usage

The lane assist can hardly be run on its own; it is part of the self-driving car. Before running it, you need to make sure that the cameras are connected and that calibration has been done. An example of how to run the lane assist is shown below.
```python
from src.calibration.data import CalibrationData
from src.config import config
from src.constants import CameraResolution
from src.driving.can import CANController, get_can_bus
from src.driving.speed_controller import SpeedController
from src.lane_assist.lane_assist import LaneAssist, StopLineAssist
from src.lane_assist.preprocessing.generator import td_stitched_image_generator
from src.telemetry.app import TelemetryServer
from src.utils.video_stream import VideoStream

# Initialize the camera streams.
cam_left = VideoStream(config.camera_ids.left, resolution=CameraResolution.NHD)
cam_center = VideoStream(config.camera_ids.center, resolution=CameraResolution.NHD)
cam_right = VideoStream(config.camera_ids.right, resolution=CameraResolution.NHD)

# Initialize the telemetry server and the driving controllers.
telemetry = TelemetryServer()
can_controller = CANController(get_can_bus())
speed_controller = SpeedController(can_controller)

# Load the calibration data and create the image generator.
calibration = CalibrationData.load(config.calibration.calibration_file)
image_generator = td_stitched_image_generator(
    calibration, cam_left, cam_center, cam_right, telemetry
)

stop_line_assist = StopLineAssist(speed_controller, calibration)
lane_assist = LaneAssist(
    image_generator, stop_line_assist, speed_controller, telemetry, calibration
)

# Start the cameras, the controllers, and finally the lane assist.
cam_left.start()
cam_center.start()
cam_right.start()
can_controller.start()
speed_controller.start()
lane_assist.start()
```
### LaneAssist class

The `LaneAssist` class is the main class for the lane assist module. It is responsible for keeping the vehicle in the lane. The class has the following methods:
- `start(multithreading)`: The `start` method is responsible for starting the lane assist module. The `multithreading` argument is a boolean that indicates whether the lane assist should run in a separate thread. By default, the value is `False`.

  ```python
  lane_assist.start()
  ```
- `toggle()`: The `toggle` method is responsible for toggling the lane assist on and off.

  ```python
  lane_assist.toggle()
  ```
### StopLineAssist class

The `StopLineAssist` class is responsible for stopping the vehicle when a stop line is detected. The class has the following methods:
- `detect_and_handle(img, filtered_lines)`: The `detect_and_handle` method is responsible for detecting a stop line in the image and stopping the vehicle. This function takes only the part of the image that is between the already detected lines and rotates that part of the image 90 degrees, so the same algorithm used for detecting the lane lines can also detect the stop line.

  ```python
  stop_line_assist.detect_and_handle(img, filtered_lines)
  ```
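The rotation trick means a stop line, which is horizontal in the top-down image, becomes vertical, which is exactly what the sliding-window search expects. A minimal sketch of the idea; `extract_rotated_region` and the `left_x`/`right_x` bounds are hypothetical names, not the actual implementation:

```python
import cv2
import numpy as np

def extract_rotated_region(img: np.ndarray, left_x: int, right_x: int) -> np.ndarray:
    """Crop the area between the detected lane lines and rotate it 90 degrees.

    After rotation a horizontal stop line appears as a vertical line, so the
    same sliding-window search used for lane lines can be reused on the result.
    """
    region = img[:, left_x:right_x]
    return cv2.rotate(region, cv2.ROTATE_90_CLOCKWISE)
```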
## How does it work?

The lane assist module consists of 3 parts: detecting lines, generating a center path, and following the center path. The following steps explain how these parts work and show some code snippets of what is executed.
To detect lines, we first need to capture and preprocess some images. In the example above this is done by the `td_stitched_image_generator` generator. Calling this function returns a generator that captures and preprocesses images so we can detect the lines in them. This generator also supports a couple of optional functions, but the basics boil down to the following steps. The code in the examples relates to this generator.
Without images we can't do much, so the very first step is to take some images. The first thing we do with the images is convert them to grayscale, since the rest of the code is faster when using grayscale images.
```python
# Capture an image from each camera.
left_image = cam_left.next()
center_image = cam_center.next()
right_image = cam_right.next()

# Convert to grayscale.
left_gray = cv2.cvtColor(left_image, cv2.COLOR_BGR2GRAY)
center_gray = cv2.cvtColor(center_image, cv2.COLOR_BGR2GRAY)
right_gray = cv2.cvtColor(right_image, cv2.COLOR_BGR2GRAY)
```
!todo: insert images
Once we have our grayscale images, we need to stitch them into a top-down view and then convert that to a binary image. This is done with the following code:

```python
topdown = calibration.transform([left_gray, center_gray, right_gray])
thresholded = cv2.threshold(topdown, config.preprocessing.white_threshold, 255, cv2.THRESH_BINARY)[1]
```
Now that we have preprocessed the images, we can detect the lines in the image. This happens in the `src.lane_assist.line_detection.line_detector.get_lines` function.
The first step of detecting the lines is more preprocessing: we need to remove the zebra crossings. These are not detected using the lane assist but using the object detection model, and when left in the image they make the line detection a lot less reliable.
The filtering of the zebra crossings is done by taking a histogram along the y axis of the image. The values in this histogram are then converted into meters per y value. From this histogram the peaks are detected, and the image is filtered based on these peaks; a peak needs to be within a certain range, configured in the config file, for its rows to be removed. Using the preprocessed image from the previous step we get the following result:
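As a rough illustration of that idea (not the actual implementation; `pixels_per_meter` and the width bounds are placeholder names for the configured values):

```python
import numpy as np
from scipy.signal import find_peaks

def filter_zebra_crossings(image, pixels_per_meter, min_width=0.3, max_width=0.8):
    # Histogram along the y axis: the width of white pixels per row, in meters.
    histogram = np.count_nonzero(image, axis=1) / pixels_per_meter
    # Peaks within the configured range are assumed to be zebra-crossing rows.
    peaks, _ = find_peaks(histogram, height=min_width)
    for y in peaks:
        if histogram[y] <= max_width:
            # Blank out the crossing row (a real implementation would blank
            # the surrounding band of rows as well).
            image[y, :] = 0
    return image
```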
Once the filtering is done we can start detecting the lines. To detect the lines we first need to know where they start. Because this is not necessarily at the bottom of the image, we have to do a bit more than grabbing the first white pixels we see. We do this by taking a histogram of the bottom quarter of the image and applying weights to these pixels. The weights allow us to get a more accurate start of the line. This is done with the following code:
```python
pixels = image[image.shape[0] // 4 * 3:, :]
pixels = np.multiply(pixels, np.logspace(0, 1, pixels.shape[0])[:, np.newaxis])
histogram = np.sum(pixels, axis=0)
```
Once we have this histogram, we find its peaks and create the data structures needed for the window search.
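A sketch of that step, using the `Window` structure described next; `min_line_distance`, `peak_minimum`, and `window_margin` are placeholder names for configured values:

```python
from scipy.signal import find_peaks

# Peaks in the weighted histogram mark likely line starts.
peaks, _ = find_peaks(histogram, distance=min_line_distance, height=peak_minimum)

# One window per detected line start, positioned at the bottom of the image.
windows = [Window(x=x, y=image.shape[0] - 1, margin=window_margin) for x in peaks]
```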
Now that we have the start of the lines and the windows needed for the window search algorithm, we can actually start detecting lines. In the previous step we converted the starts of the lines into a new data structure: a `Window`. This window class contains a couple of parameters, which are as follows:
- `x` and `y`: the current position of the window in the image.
- `margin`: the width of the window.
- `points`: the center positions of windows where points were found.
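For reference, a minimal sketch of what such a structure could look like (the actual class likely has more fields and methods):

```python
from dataclasses import dataclass, field

@dataclass
class Window:
    x: int                  # current horizontal position in the image
    y: int                  # current vertical position in the image
    margin: int             # width of the window
    points: list = field(default_factory=list)  # centers where pixels were found

    def move(self, x: int, y: int, found: bool = True) -> None:
        """Move the window, remembering the old center if pixels were found."""
        if found:
            self.points.append((self.x, self.y))
        self.x, self.y = x, y
```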
For each of these windows the same steps are completed. These steps are as follows and are executed in a loop until an end condition has been reached:

- **Check if at the edges of the image**: the first step in the loop is to check whether we are at any of the edges of the image. If we are, we can assume the line won't continue any further before leaving the image; once this happens, we can't know whether it is still the same line or not.
- **Detect pixels**: if we are not at the edge, we check whether there are any white pixels in the window and where these are in the window.
- **Move the window**: described in detail below.
!todo: make sure the move step is correct
The final step, moving the window, is a bit more complex than the previous steps. It has two different cases:

- we have white pixels, or
- we have no white pixels.

The case where we have no pixels is the easiest. In this case we simply move the window a set amount of pixels along the y axis and stay at the same x position. In addition, we increase the width of the window by a set percentage to make sure we can still catch the dotted lines when they are not straight.
The other case is a bit more complex. Depending on the number of previously found points we do different things. If we have fewer than 3 previous points, we offset the current window by the average position of the white pixels. If we have 3 or more points, we check whether the line is suddenly changing direction by taking the average distance between the last 3 points and converting this to an angle. If this angle is greater than we allow, we stop the line. We stop the line this way because lines that are too steep tend to pick up points of the line above them, making the detected lines inaccurate. A sketch of this move step is shown below.
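A minimal sketch of the move step, using the hypothetical `Window` from above; the step size, growth factor, and maximum angle are placeholder values:

```python
import numpy as np

def move_window(window: Window, white_xs: np.ndarray, step: int = 20, max_angle: float = 40.0) -> bool:
    """Move the window one step up the image; return False to stop the line."""
    if len(white_xs) == 0:
        # No white pixels: keep x, step along the y axis, and widen the window
        # so dashed lines that curve away are still caught.
        window.margin = int(window.margin * 1.1)
        window.move(window.x, window.y - step, found=False)
        return True

    mean_x = int(np.mean(white_xs))
    if len(window.points) >= 3:
        # Estimate the direction from the last three points and stop the line
        # if it suddenly becomes too steep.
        last = np.array(window.points[-3:])
        dx = np.mean(np.diff(last[:, 0]))
        dy = np.mean(np.diff(last[:, 1]))
        if np.degrees(np.arctan2(abs(dx), abs(dy))) > max_angle:
            return False

    # Center the window on the average position of the white pixels.
    window.move(mean_x, window.y - step)
    return True
```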
In the following image you can see the windows on an image. A green window means it found enough white pixels, red is where the window ends, and purple is where it has not found enough white pixels.
Once all windows have been stopped, we convert them to the `Line` class. This class is a nicer representation, as it only contains the points and the type of the line. There are 3 types of lines: solid, dashed, and stop lines. A line can be both solid and dashed, but this is not supported. To determine the type of a line we look at the gaps between its y values: if there are enough large gaps between 2 points we count it as a dashed line, and if there are fewer than the threshold we count it as a solid line. For stop lines you need to tell the `Line` class manually that it is a stop line.
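The gap-based classification could look roughly like this (the thresholds are placeholders for the configured values):

```python
import numpy as np

def classify_line(points: np.ndarray, gap_threshold: float, min_gaps: int = 2) -> str:
    """Classify a line as dashed or solid based on gaps between its y values."""
    gaps = np.abs(np.diff(points[:, 1]))
    return "dashed" if np.count_nonzero(gaps > gap_threshold) >= min_gaps else "solid"
```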
Once we have detected all the driving lines in the image, we filter them to only include the relevant lines. This is done by looking outwards from the center of the image until we have found a solid line on both sides; all lines outside of these two are discarded. This is done using the `src.lane_assist.line_detection.line_detector.filter_lines` function.
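A sketch of this center-outward filter, assuming each `Line` exposes its bottom-most x position via `points` and a `line_type` attribute (the names are assumptions):

```python
def filter_lines(lines: list, center_x: int) -> list:
    # Sort the lines from left to right by their starting x position.
    lines = sorted(lines, key=lambda line: line.points[0][0])

    # Walk outwards from the center until a solid line is found on each side.
    left = max((l for l in lines if l.points[0][0] <= center_x and l.line_type == "solid"),
               key=lambda l: l.points[0][0], default=None)
    right = min((l for l in lines if l.points[0][0] > center_x and l.line_type == "solid"),
                key=lambda l: l.points[0][0], default=None)
    if left is None or right is None:
        return lines

    # Discard everything outside the two bounding solid lines.
    return [l for l in lines if left.points[0][0] <= l.points[0][0] <= right.points[0][0]]
```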
Compared to the detection of the lines, the generation of a driving line is relatively simple. The first step is grouping the lines into lanes. This is straightforward, as each line forms a lane together with the line next to it. The rightmost lane is lane 0, and the index increments every time we cross a line going left. This is different if we don't have a solid line on the right: in that case all lanes have their index incremented by 1. Once we have the lanes, we only use the lines of the lane we want to drive on, which is simply indexing the array of lanes. If only one of a lane's lines is detected, you won't be able to move over to that lane, and it will keep you in the closest lane (so 0 -> 1 or 1 -> 0). This can be done because we have a maximum of 2 lanes.
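A sketch of the grouping, pairing each line with its neighbour from right to left (`requested_lane` is a placeholder for the configured lane index):

```python
# Sort the filtered lines from right to left, then pair neighbours into lanes,
# so lanes[0] is the rightmost lane.
lines = sorted(filtered_lines, key=lambda line: -line.points[0][0])
lanes = [(lines[i + 1], lines[i]) for i in range(len(lines) - 1)]

left_line, right_line = lanes[requested_lane]
```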
The next step is to make sure we have enough points in a line to generate a nice driving path. To do this we interpolate the line, using linear interpolation between sets of points in the line, so that there are always at least 100 points. This is done using the following code:
```python
# new_y contains the y values to sample at (at least 100 of them).
new_x = np.interp(new_y, line.points[::-1, 1], line.points[::-1, 0])
```
The generation of the driving path itself is extremely simple: it is just a centerline between the points. Once this is generated, a Savitzky-Golay filter is applied to both the x and y coordinates of the path. Afterwards it is converted into a `Path` object, which calculates the radius of the path; this radius is used to determine the maximum speed for the path.
The code for generating and smoothing the line is as follows:

```python
inter_line_points = len(new_a1_x)
midx = [np.mean([new_a1_x[i], new_a2_x[i]]) for i in range(inter_line_points)]
midy = [np.mean([new_a1_y[i], new_a2_y[i]]) for i in range(inter_line_points)]

# Smooth the centerline.
midx = savgol_filter(midx, 51, 3)
midy = savgol_filter(midy, 51, 3)

return Path(calibration, np.array([midx, midy]).T)
```
Now that we have the line, we have reached the final step of the process: path following. The path following module uses a PID controller to generate a steering angle. In addition, it calculates the maximum speed for the generated path. Based on the curvature of the path, the speed can be calculated using the following code. This speed is then set on the speed controller, which will try to reach it as quickly as possible.
```python
speed = int(math.sqrt(friction_coefficient * 9.81 * path.radius))
```
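This is the classic maximum-cornering-speed formula v = sqrt(mu * g * r). For example, with a friction coefficient of 0.8 and a radius of 10 m, it gives sqrt(0.8 * 9.81 * 10) ≈ 8.9 m/s, which `int()` truncates to 8.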
The next step is to calculate the steering angle. This is done using PID. For this we first need to generate an error value; this error value is passed to the PID controller, which returns a value between `-max_steering_angle` and `max_steering_angle`.
For the PID to function, we need to provide it with an error value. This requires us to pick a point along the path as the target position. To get this point, we first need to calculate the distance traveled since the image was taken, in pixels, plus a lookahead distance. This is done with the following code:
```python
lt = pid._last_time
if lt is None:
    return min_dist

# Get the distance we have traveled since the image was taken.
dt = time.monotonic() - lt
speed = speed_controller.current_speed / 3.6  # km/h to m/s
distance = calibration.get_pixels(speed * dt)
return min_dist + distance
```
Once we have this distance, we take the first point on the path past that distance and use it as the target point. Once we have this point, we compare its x value to the car's position, which is the center of the image. This difference is the error given to the PID controller, which then returns the steering angle.
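Put together, the final computation could look like this sketch (assuming a `simple-pid` style controller where calling `pid(error)` returns the control value; `target_point` is the point found above):

```python
# The car's position is the horizontal center of the top-down image.
car_position = image.shape[1] // 2

# The error is the horizontal offset between the target point and the car.
error = target_point[0] - car_position

# The PID controller converts the error into a steering angle, bounded by
# its configured output limits (-max_steering_angle, max_steering_angle).
steering_angle = pid(error)
```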