
Add a new tool to measure end-to-end delay of the Autoware for sudden obstacles #6547

Closed
16 tasks done
brkay54 opened this issue Mar 5, 2024 · 5 comments · Fixed by #6382 or #5954
Labels

  • component:evaluator - Evaluation tools for planning, localization etc. (auto-assigned)
  • component:system - System design and integration. (auto-assigned)
  • component:tools - Utility and debugging software. (auto-assigned)
  • status:stale - Inactive or outdated issues. (auto-assigned)

Comments

brkay54 (Member) commented Mar 5, 2024

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

In Autoware, it is difficult to pinpoint the first message, among the many topics involved, that responds to a sudden obstacle. There is a need for a tool that can create a test environment, listen to critical topics, and measure the system's reaction across the different pipelines, including Sensing, Perception, Planning, and Control.

This is a follow-up from:

Purpose

The purpose of this tool is to enhance the reliability and responsiveness of Autoware by providing a means to measure and analyze the delay in reacting to sudden obstacles. This will help optimize the system's performance across all stages of the pipeline, supporting safer autonomous driving.

Possible approaches

  1. Develop a simulation environment within Autoware that can dynamically spawn obstacles.
  2. Implement a listener module that subscribes to key topics (those on which messages such as PredictedObjects, DetectedObjects, TrackedObjects, PointCloud2, Trajectory, and AckermannControlCommand are published) to monitor and log reactions (a minimal sketch of such a listener follows this list).
  3. Design a measurement tool that calculates the time delay from the moment the obstacle is spawned to the generation of the first control command.
  4. Incorporate a reporting mechanism to visualize and analyze the reaction times across different modules.
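
To make approaches 2 and 3 concrete, here is a minimal rclpy sketch of such a listener. It assumes the test harness calls `mark_spawn()` at the instant the obstacle is spawned and treats the first decelerating AckermannControlCommand as the reaction; the node name, topic choice, and that criterion are illustrative assumptions, not decided behavior of the tool.

```python
# Sketch only: spawn-to-control-command delay listener (assumed names/criteria).
import rclpy
from rclpy.node import Node
from autoware_auto_control_msgs.msg import AckermannControlCommand


class ReactionListener(Node):
    """Logs the delay between an obstacle spawn and the first braking command."""

    def __init__(self):
        super().__init__('reaction_listener')
        self.spawn_time = None  # set by the test harness at spawn
        self.create_subscription(
            AckermannControlCommand,
            '/control/command/control_cmd',
            self.on_control_cmd,
            1)

    def mark_spawn(self):
        # Called by the harness at the instant the obstacle is spawned.
        self.spawn_time = self.get_clock().now()

    def on_control_cmd(self, msg):
        # Treat the first decelerating command after the spawn as the reaction.
        if self.spawn_time is None or msg.longitudinal.acceleration >= 0.0:
            return
        delay = self.get_clock().now() - self.spawn_time
        self.get_logger().info(f'reaction delay: {delay.nanoseconds / 1e6:.1f} ms')
        self.spawn_time = None  # report only the first reaction


def main():
    rclpy.init()
    rclpy.spin(ReactionListener())


if __name__ == '__main__':
    main()
```

The same pattern generalizes to the other message types listed above by swapping the subscription and the "has reacted" predicate.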

Definition of done

doganulus (Member) commented

> create a test environment, listen to critical topics, and measure the system's reaction

It would be nice to write down these topics of interest, valid test conditions, and acceptable/unacceptable values from such measurements. For example,

  • If the value in /namespace/topic_name is greater than 0, then /namespace/another_topic_name will be greater than 5 within 10 milliseconds (sketched in code after this list).
  • Given /namespace/topic_name equals ENUM_VALUE and /namespace/another_topic_name is greater than 20, then /namespace/topic3 is always true.
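
For instance, the first rule could be checked offline over recorded (timestamp, value) samples; the sketch below is a plain-Python illustration, where the log format and function name are assumptions rather than an existing API.

```python
# Sketch only: offline check of "if trigger > 0, response > 5 within 10 ms".
def check_response_within(trigger_log, response_log, deadline_ns=10_000_000):
    """trigger_log/response_log are lists of (stamp_ns, value) tuples."""
    violations = []
    for t_stamp, t_value in trigger_log:
        if t_value <= 0:
            continue  # rule only fires when the trigger value is positive
        satisfied = any(
            t_stamp <= r_stamp <= t_stamp + deadline_ns and r_value > 5
            for r_stamp, r_value in response_log)
        if not satisfied:
            violations.append(t_stamp)
    return violations  # an empty list means the rule held throughout
```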

What could be other things/constructs you find useful in writing such sentences for such analysis?

brkay54 (Member, Author) commented Mar 6, 2024

Hi @doganulus, thank you for the comment. Currently, we are working on merging this package. In the coming days, however, I am planning to add a detailed report to this issue or present it in a show-and-tell (TBD).

Actually, in this report, I am planning to show 2 different kinds of tests for each node:

  • 1 - Node Execution Time: This will show us how much time each node takes when it is executed. I am going to list the average calculation times of each node in the Sensing, Perception, and Planning pipelines.

  • 2 - Reaction Response Time: In this test, I will measure the reaction times when an obstacle is spawned suddenly in the Perception, Planning, and Control pipelines (a small computation sketch follows this list).
    Reaction Time = (publish time of the first reaction message) - (spawn time of the object)
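
With ROS 2 header stamps, this formula reduces to a nanosecond subtraction; here is a minimal sketch (the helper names are illustrative, not part of the package):

```python
# Sketch only: Reaction Time from two builtin_interfaces/Time stamps.
from builtin_interfaces.msg import Time


def to_ns(stamp: Time) -> int:
    return stamp.sec * 1_000_000_000 + stamp.nanosec


def reaction_time_ms(first_reaction_stamp: Time, spawn_stamp: Time) -> float:
    """(publish time of the first reaction message) - (spawn time of the object)."""
    return (to_ns(first_reaction_stamp) - to_ns(spawn_stamp)) / 1e6
```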

Here are the topics of interest for Reaction Response Time tests:

  • For Planning & Control pipeline:
        obstacle_cruise_planner (or obstacle_stop_planner):
          topic_name: /planning/scenario_planning/lane_driving/trajectory
          message_type: autoware_auto_planning_msgs/msg/Trajectory
        scenario_selector:
          topic_name: /planning/scenario_planning/scenario_selector/trajectory
          message_type: autoware_auto_planning_msgs/msg/Trajectory
        motion_velocity_smoother:
          topic_name: /planning/scenario_planning/motion_velocity_smoother/trajectory
          message_type: autoware_auto_planning_msgs/msg/Trajectory
        planning_validator:
          topic_name: /planning/scenario_planning/trajectory
          message_type: autoware_auto_planning_msgs/msg/Trajectory
        trajectory_follower:
          topic_name: /control/trajectory_follower/control_cmd
          message_type: autoware_auto_control_msgs/msg/AckermannControlCommand
        vehicle_cmd_gate:
          topic_name: /control/command/control_cmd
          message_type: autoware_auto_control_msgs/msg/AckermannControlCommand
  • For Perception pipeline:
        common_ground_filter:
          topic_name: /perception/obstacle_segmentation/single_frame/pointcloud_raw
          message_type: sensor_msgs/msg/PointCloud2
        occupancy_grid_map_outlier:
          topic_name: /perception/obstacle_segmentation/pointcloud
          message_type: sensor_msgs/msg/PointCloud2
        multi_object_tracker:
          topic_name: /perception/object_recognition/tracking/near_objects
          message_type: autoware_auto_perception_msgs/msg/TrackedObjects
        lidar_centerpoint:
          topic_name: /perception/object_recognition/detection/centerpoint/objects
          message_type: autoware_auto_perception_msgs/msg/DetectedObjects
        obstacle_pointcloud_based_validator:
          topic_name: /perception/object_recognition/detection/centerpoint/validation/objects
          message_type: autoware_auto_perception_msgs/msg/DetectedObjects
        decorative_tracker_merger:
          topic_name: /perception/object_recognition/tracking/objects
          message_type: autoware_auto_perception_msgs/msg/TrackedObjects
        detected_object_feature_remover:
          topic_name: /perception/object_recognition/detection/clustering/objects
          message_type: autoware_auto_perception_msgs/msg/DetectedObjects
        detection_by_tracker:
          topic_name: /perception/object_recognition/detection/detection_by_tracker/objects
          message_type: autoware_auto_perception_msgs/msg/DetectedObjects
        object_lanelet_filter:
          topic_name: /perception/object_recognition/detection/objects
          message_type: autoware_auto_perception_msgs/msg/DetectedObjects
        map_based_prediction:
          topic_name: /perception/object_recognition/objects
          message_type: autoware_auto_perception_msgs/msg/PredictedObjects

These are the topics for which I am going to measure the reaction times. Please feel free to share any suggestions or comments with me!

doganulus (Member) commented

Thank you @brkay54. I will wait for your report. Looks very useful.

stale bot commented May 14, 2024

This pull request has been automatically marked as stale because it has not had recent activity.

stale bot added the status:stale label (Inactive or outdated issues; auto-assigned) on May 14, 2024
brkay54 (Member, Author) commented May 17, 2024

I created the following document earlier using the reaction_analyzer.
The tests highlighted in yellow indicate additional delay caused by this issue.
