feat(centerpoint): update centerpoint (autowarefoundation#145)
* fix(lidar_centerpoint): fix function sigmoid()  (autowarefoundation#1555)

fix(lidar_centerpoint): fix function sigmoid() (autowarefoundation#1545)

Signed-off-by: WfHit <[email protected]>
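
The sigmoid fix above concerns the score activation applied to the raw network logits. As a minimal, hedged sketch (standalone illustration only, not the package's exact implementation):

```cpp
#include <cmath>

// Logistic sigmoid that turns a raw heatmap logit into a [0, 1] confidence.
// Illustrative stand-in; the function in lidar_centerpoint may differ in
// signature and placement.
inline float sigmoid(const float x)
{
  return 1.0f / (1.0f + std::exp(-x));
}
```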

* feat(lidar_centerpoint): add score threshold parameter to center point (autowarefoundation#1699)

Signed-off-by: scepter914 <[email protected]>

Signed-off-by: scepter914 <[email protected]>
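
The added `score_threshold` parameter discards low-confidence detections before they are published. A hedged sketch of the idea (the `Box3D` type and function name below are illustrative stand-ins, not the package's API):

```cpp
#include <vector>

// Hypothetical stand-in for the detection type used by the node.
struct Box3D
{
  float score;
};

// Keep only detections whose confidence reaches the configured threshold.
std::vector<Box3D> filterByScore(const std::vector<Box3D> & boxes, const float score_threshold)
{
  std::vector<Box3D> filtered;
  for (const auto & box : boxes) {
    if (box.score >= score_threshold) {
      filtered.push_back(box);
    }
  }
  return filtered;
}
```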

* feat(behavior_path_planner): add new turn signal algorithm (autowarefoundation#1964)

* clean code

Signed-off-by: yutaka <[email protected]>

* clean format

Signed-off-by: yutaka <[email protected]>

* update

Signed-off-by: yutaka <[email protected]>

* update

Signed-off-by: yutaka <[email protected]>

* update

Signed-off-by: yutaka <[email protected]>

* update

Signed-off-by: yutaka <[email protected]>

* update

Signed-off-by: yutaka <[email protected]>

* add test

Signed-off-by: yutaka <[email protected]>

* add test

* add test

Signed-off-by: yutaka <[email protected]>

* fix(avoidance): use new turn signal algorithm

Signed-off-by: satoshi-ota <[email protected]>

* fix(avoidance): fix desired_start_point position

Signed-off-by: satoshi-ota <[email protected]>

* update

Signed-off-by: yutaka <[email protected]>

* change policy

Signed-off-by: yutaka <[email protected]>

* feat(behavior_path_planner): update pull_over for new blinker logic

Signed-off-by: kosuke55 <[email protected]>

* feat(behavior_path_planner): update pull_out for new blinker logic

* tmp: install flask via pip

Signed-off-by: Takayuki Murooka <[email protected]>

* feat(lane_change): added lane change point

* fix start_point and non backward driving turn signal

Signed-off-by: kosuke55 <[email protected]>

* get 3 seconds when constructing lane change path

Signed-off-by: Muhammad Zulfaqar Azmi <[email protected]>

* fix pull over desired start point

Signed-off-by: kosuke55 <[email protected]>

* update

Signed-off-by: yutaka <[email protected]>

* delete

Signed-off-by: yutaka <[email protected]>

* Update Readme

* Update planning/behavior_path_planner/src/scene_module/avoidance/avoidance_module.cpp

Co-authored-by: Fumiya Watanabe <[email protected]>

* Update planning/behavior_path_planner/src/scene_module/avoidance/avoidance_module.cpp

Co-authored-by: Fumiya Watanabe <[email protected]>

* fix format

Signed-off-by: yutaka <[email protected]>

Signed-off-by: yutaka <[email protected]>
Signed-off-by: satoshi-ota <[email protected]>
Signed-off-by: kosuke55 <[email protected]>
Signed-off-by: Takayuki Murooka <[email protected]>
Signed-off-by: Muhammad Zulfaqar Azmi <[email protected]>
Co-authored-by: Satoshi OTA <[email protected]>
Co-authored-by: satoshi-ota <[email protected]>
Co-authored-by: kosuke55 <[email protected]>
Co-authored-by: Takayuki Murooka <[email protected]>
Co-authored-by: Muhammad Zulfaqar Azmi <[email protected]>
Co-authored-by: Fumiya Watanabe <[email protected]>
Signed-off-by: tomoya.kimura <[email protected]>

* fix(lidar_centerpoint): include zero in range of yaw_norm_threshold (autowarefoundation#1830)

Signed-off-by: Shin-kyoto <[email protected]>

Signed-off-by: Shin-kyoto <[email protected]>

* feat(multi_object_tracker): increase max-area for truck and trailer (autowarefoundation#1710)

* feat(multi_object_tracker): increase max-area for truck

Signed-off-by: yukke42 <[email protected]>

* feat: change truck and trailer max-area gate params

Signed-off-by: yukke42 <[email protected]>

* feat: change trailer params

Signed-off-by: yukke42 <[email protected]>

Signed-off-by: yukke42 <[email protected]>
Signed-off-by: tomoya.kimura <[email protected]>

* feat(multi_object_tracker): update bus size (autowarefoundation#1887)

Signed-off-by: Yukihiro Saito <[email protected]>

Signed-off-by: Yukihiro Saito <[email protected]>

* fix(multi_object_tracker): add missing trailer tracker (autowarefoundation#1885)

Signed-off-by: yukke42 <[email protected]>

Signed-off-by: yukke42 <[email protected]>

* feat(lidar_centerpoint): implemented a class remapping according to the detections shape (autowarefoundation#1876)

* Implemented a small package to remap detection classes depending on their shape

Signed-off-by: Kenzo Lobos-Tsunekawa <[email protected]>

* Update perception/detection_class_adapter/include/detection_class_adapter/detection_class_adapter.hpp

Co-authored-by: Yukihiro Saito <[email protected]>

* Removed the hardcoded mapping from centerpoint and pointfusion. Fixed the description of the config file

Signed-off-by: Kenzo Lobos-Tsunekawa <[email protected]>

* Deleted the new package and moved the logic to centerpoint

Signed-off-by: Kenzo Lobos-Tsunekawa <[email protected]>

* Update perception/lidar_centerpoint/config/detection_class_adapter.param.yaml

Co-authored-by: Yusuke Muramatsu <[email protected]>

* Changed: adapter->remapper

* Update perception/lidar_centerpoint/lib/detection_class_remapper.cpp

Co-authored-by: Yukihiro Saito <[email protected]>

* Deleted duplicated file

Signed-off-by: Kenzo Lobos-Tsunekawa <[email protected]>

* Modified the parameters to match autowarefoundation#1710
Now we do not map cars to buses no matter the size

Signed-off-by: Kenzo Lobos-Tsunekawa <[email protected]>

* Update perception/lidar_centerpoint/config/detection_class_remapper.param.yaml

Co-authored-by: Yusuke Muramatsu <[email protected]>

Signed-off-by: Kenzo Lobos-Tsunekawa <[email protected]>
Co-authored-by: Yukihiro Saito <[email protected]>
Co-authored-by: Yusuke Muramatsu <[email protected]>
Signed-off-by: tomoya.kimura <[email protected]>
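
The class remapper introduced above reads `allow_remapping_by_area_matrix`, `min_area_matrix`, and `max_area_matrix` and relabels detections whose footprint area falls inside an allowed range. A hedged sketch of that idea (struct and function names are illustrative, not the package's API):

```cpp
#include <cstddef>
#include <vector>

// Illustrative parameter bundle; the real node loads these matrices from
// detection_class_remapper.param.yaml.
struct RemapperParams
{
  std::size_t num_classes;
  std::vector<int> allow_remapping;  // row-major [from][to], 1 = remapping allowed
  std::vector<double> min_area;      // row-major [from][to], m^2
  std::vector<double> max_area;      // row-major [from][to], m^2
};

// Return the (possibly remapped) class for a detection with the given
// footprint area. Exact rule ordering in the package may differ.
int remapByFootprint(const int from_class, const double area_m2, const RemapperParams & p)
{
  for (std::size_t to = 0; to < p.num_classes; ++to) {
    const std::size_t idx = static_cast<std::size_t>(from_class) * p.num_classes + to;
    if (p.allow_remapping[idx] == 1 && area_m2 >= p.min_area[idx] && area_m2 <= p.max_area[idx]) {
      return static_cast<int>(to);  // e.g. a very large "car" becomes a truck or bus
    }
  }
  return from_class;  // no rule matched: keep the original label
}
```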

* feat(lidar_centerpoint): add IoU-based NMS (autowarefoundation#1935)

* feat(lidar_centerpoint): add IoU-based NMS

Signed-off-by: yukke42 <[email protected]>

* feat: add magic number name

Signed-off-by: yukke42 <[email protected]>

* feat: remove unnecessary headers

Signed-off-by: yukke42 <[email protected]>

* fix: typo

Signed-off-by: yukke42 <[email protected]>

* fix: typo

Signed-off-by: yukke42 <[email protected]>

Signed-off-by: yukke42 <[email protected]>
Signed-off-by: tomoya.kimura <[email protected]>
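
The IoU-based NMS added above suppresses overlapping detections of the target classes. A hedged sketch using axis-aligned bird's-eye-view boxes for brevity (the real logic in lib/postprocess/non_maximum_suppression.cpp may use a different box representation and a 2D search-distance gate):

```cpp
#include <algorithm>
#include <vector>

// Illustrative bird's-eye-view box; not the package's detection type.
struct BevBox
{
  float x_min, y_min, x_max, y_max;
  float score;
};

// Axis-aligned 2D intersection-over-union.
float iou2d(const BevBox & a, const BevBox & b)
{
  const float ix = std::max(0.0f, std::min(a.x_max, b.x_max) - std::max(a.x_min, b.x_min));
  const float iy = std::max(0.0f, std::min(a.y_max, b.y_max) - std::max(a.y_min, b.y_min));
  const float inter = ix * iy;
  const float area_a = (a.x_max - a.x_min) * (a.y_max - a.y_min);
  const float area_b = (b.x_max - b.x_min) * (b.y_max - b.y_min);
  const float uni = area_a + area_b - inter;
  return uni > 0.0f ? inter / uni : 0.0f;
}

// Greedy NMS: keep the highest-scoring box, drop boxes that overlap it above
// the IoU threshold, repeat.
std::vector<BevBox> nms(std::vector<BevBox> boxes, const float iou_threshold)
{
  std::sort(boxes.begin(), boxes.end(), [](const BevBox & a, const BevBox & b) {
    return a.score > b.score;
  });
  std::vector<BevBox> kept;
  for (const auto & candidate : boxes) {
    bool suppressed = false;
    for (const auto & k : kept) {
      if (iou2d(candidate, k) > iou_threshold) {
        suppressed = true;
        break;
      }
    }
    if (!suppressed) {
      kept.push_back(candidate);
    }
  }
  return kept;
}
```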

* feat(lidar_centerpoint): enabled per class yaw norm threshold (autowarefoundation#1962)

* feat(lidar_centerpoint): enabled per class yaw norm threshold

Signed-off-by: Kenzo Lobos-Tsunekawa <[email protected]>

* Updated thresholds

Signed-off-by: Kenzo Lobos-Tsunekawa <[email protected]>

* Moved the yaw threshold parameters to gpu memory

Signed-off-by: Kenzo Lobos-Tsunekawa <[email protected]>

* Moved the thresholds as a member variable

Signed-off-by: Kenzo Lobos-Tsunekawa <[email protected]>

Signed-off-by: Kenzo Lobos-Tsunekawa <[email protected]>
Signed-off-by: tomoya.kimura <[email protected]>
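
CenterPoint regresses orientation as a (cos, sin) pair, and the norm of that pair acts as a confidence measure for the yaw estimate; this change makes the rejection threshold configurable per class. A hedged sketch (names are illustrative, not the package's API):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Reject detections whose yaw vector norm is below the threshold of their
// class. With the configs above (yaw_norm_thresholds: [0.3, 0.0, 0.3] for
// [CAR, PEDESTRIAN, BICYCLE]), a threshold of 0.0 effectively disables the
// check for pedestrians.
bool passesYawNormCheck(
  const float yaw_cos, const float yaw_sin, const int class_id,
  const std::vector<float> & yaw_norm_thresholds)
{
  const float yaw_norm = std::hypot(yaw_cos, yaw_sin);
  return yaw_norm >= yaw_norm_thresholds.at(static_cast<std::size_t>(class_id));
}
```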

* fix(lidar_centerpoint): fix cmake

Signed-off-by: tomoya.kimura <[email protected]>

* revert(lidar_centerpoint): "fix(lidar_centerpoint): fix cmake"

This reverts commit 2efc31d.

Signed-off-by: yukke42 <[email protected]>

* chore(lidar_centerpoint): add trained model of centerpoint (autowarefoundation#1202)

* chore(lidar_centerpoint): add trained model of centerpoint

Signed-off-by: yukke42 <[email protected]>

* docs: add description of models

Signed-off-by: yukke42 <[email protected]>

Signed-off-by: WfHit <[email protected]>
Signed-off-by: scepter914 <[email protected]>
Signed-off-by: yutaka <[email protected]>
Signed-off-by: satoshi-ota <[email protected]>
Signed-off-by: kosuke55 <[email protected]>
Signed-off-by: Takayuki Murooka <[email protected]>
Signed-off-by: Muhammad Zulfaqar Azmi <[email protected]>
Signed-off-by: tomoya.kimura <[email protected]>
Signed-off-by: Shin-kyoto <[email protected]>
Signed-off-by: yukke42 <[email protected]>
Signed-off-by: Yukihiro Saito <[email protected]>
Signed-off-by: Kenzo Lobos-Tsunekawa <[email protected]>
Co-authored-by: WfHit <[email protected]>
Co-authored-by: Satoshi Tanaka <[email protected]>
Co-authored-by: Kenzo Lobos Tsunekawa <[email protected]>
Co-authored-by: Satoshi OTA <[email protected]>
Co-authored-by: satoshi-ota <[email protected]>
Co-authored-by: kosuke55 <[email protected]>
Co-authored-by: Takayuki Murooka <[email protected]>
Co-authored-by: Muhammad Zulfaqar Azmi <[email protected]>
Co-authored-by: Fumiya Watanabe <[email protected]>
Co-authored-by: Shintaro Tomie <[email protected]>
Co-authored-by: Yusuke Muramatsu <[email protected]>
Co-authored-by: Yukihiro Saito <[email protected]>
Co-authored-by: yukke42 <[email protected]>
14 people authored and badai-nguyen committed Feb 1, 2023
1 parent 8449e72 commit be021a9
Showing 32 changed files with 537 additions and 81 deletions.
@@ -22,12 +22,14 @@
3.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 2.0, #BICYCLE
2.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 2.0] #PEDESTRIAN
max_area_matrix:
# NOTE(yukke42): The size of truck is 12 m length x 3 m width.
# NOTE(yukke42): The size of trailer is 20 m length x 3 m width.
#UNKNOWN, CAR, TRUCK, BUS, TRAILER, MOTORBIKE, BICYCLE, PEDESTRIAN
[10000.00, 10000.00, 10000.00, 10000.00, 10000.00, 10000.00, 10000.00, 10000.00, #UNKNOWN
12.10, 12.10, 19.75, 40.00, 40.00, 10000.00, 10000.00, 10000.00, #CAR
19.75, 12.10, 19.75, 40.00, 40.00, 10000.00, 10000.00, 10000.00, #TRUCK
40.00, 12.10, 19.75, 40.00, 40.00, 10000.00, 10000.00, 10000.00, #BUS
40.00, 12.10, 19.75, 40.00, 40.00, 10000.00, 10000.00, 10000.00, #TRAILER
[100.00, 100.00, 100.00, 100.00, 100.00, 100.00, 100.00, 100.00, #UNKNOWN
12.10, 12.10, 36.00, 60.00, 60.00, 10000.00, 10000.00, 10000.00, #CAR
36.00, 12.10, 36.00, 60.00, 60.00, 10000.00, 10000.00, 10000.00, #TRUCK
60.00, 12.10, 36.00, 60.00, 60.00, 10000.00, 10000.00, 10000.00, #BUS
60.00, 12.10, 36.00, 60.00, 60.00, 10000.00, 10000.00, 10000.00, #TRAILER
2.50, 10000.00, 10000.00, 10000.00, 10000.00, 2.50, 2.50, 1.00, #MOTORBIKE
2.50, 10000.00, 10000.00, 10000.00, 10000.00, 2.50, 2.50, 1.00, #BICYCLE
2.00, 10000.00, 10000.00, 10000.00, 10000.00, 1.50, 1.50, 1.00] #PEDESTRIAN
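
The `max_area_matrix` above gates data association by footprint area: a tracker/measurement class pair is rejected when the measured footprint exceeds the configured bound (e.g. 36 m² ≈ 12 m × 3 m for trucks and 60 m² ≈ 20 m × 3 m for trailers, per the NOTE comments). A hedged sketch of such a gate (names and exact gating logic are illustrative, not the tracker's API):

```cpp
#include <cstddef>
#include <vector>

// Return true when the measurement's footprint area is within the maximum
// allowed for this tracker/measurement class pair. The real gate also uses a
// min_area_matrix and other criteria.
bool passesAreaGate(
  const int tracker_class, const int measurement_class, const double area_m2,
  const std::vector<double> & max_area_matrix, const std::size_t num_classes)
{
  const std::size_t idx = static_cast<std::size_t>(tracker_class) * num_classes +
                          static_cast<std::size_t>(measurement_class);
  return area_m2 <= max_area_matrix.at(idx);  // e.g. at most 60 m^2 for a bus-sized measurement
}
```
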
2 changes: 1 addition & 1 deletion perception/image_projection_based_fusion/config/pointpainting.param.yaml
100755 → 100644
@@ -1,10 +1,10 @@
/**:
ros__parameters:
class_names: ["CAR", "PEDESTRIAN", "BICYCLE"]
rename_car_to_truck_and_bus: true
point_feature_size: 6 # x, y, z and car, pedestrian, bicycle
max_voxel_size: 40000
point_cloud_range: [-76.8, -76.8, -3.0, 76.8, 76.8, 5.0]
voxel_size: [0.32, 0.32, 8.0]
downsample_factor: 1
encoder_in_feature_size: 11
yaw_norm_thresholds: [0.3, 0.0, 0.3]
@@ -21,6 +21,7 @@
#include <image_projection_based_fusion/utils/geometry.hpp>
#include <image_projection_based_fusion/utils/utils.hpp>
#include <lidar_centerpoint/centerpoint_trt.hpp>
#include <lidar_centerpoint/detection_class_remapper.hpp>

#include <map>
#include <memory>
@@ -52,9 +53,10 @@ class PointpaintingFusionNode : public FusionNode<sensor_msgs::msg::PointCloud2,
float score_threshold_{0.0};
std::vector<std::string> class_names_;
std::vector<double> pointcloud_range;
bool rename_car_to_truck_and_bus_{false};
bool has_twist_{false};

centerpoint::DetectionClassRemapper detection_class_remapper_;

std::unique_ptr<image_projection_based_fusion::PointPaintingTRT> detector_ptr_{nullptr};

bool out_of_scope(const DetectedObjects & obj);
@@ -21,6 +21,7 @@
<arg name="model_name" default="pointpainting" description="options: `pointpainting`"/>
<arg name="model_path" default="$(find-pkg-share image_projection_based_fusion)/data"/>
<arg name="model_param_path" default="$(find-pkg-share image_projection_based_fusion)/config/$(var model_name).param.yaml"/>
<arg name="class_remapper_param_path" default="$(find-pkg-share lidar_centerpoint)/config/detection_class_remapper.param.yaml"/>

<!-- for eval variable-->
<arg name="input_rois_number" default="$(var input/rois_number)"/>
@@ -74,6 +75,7 @@
<param name="head_onnx_path" value="$(var model_path)/pts_backbone_neck_head_$(var model_name).onnx"/>
<param name="head_engine_path" value="$(var model_path)/pts_backbone_neck_head_$(var model_name).engine"/>
<param from="$(var model_param_path)"/>
<param from="$(var class_remapper_param_path)"/>
<param name="rois_number" value="$(var input/rois_number)"/>

<!-- debug -->
@@ -33,6 +33,8 @@ PointpaintingFusionNode::PointpaintingFusionNode(const rclcpp::NodeOptions & opt
static_cast<float>(this->declare_parameter<double>("score_threshold", 0.4));
const float circle_nms_dist_threshold =
static_cast<float>(this->declare_parameter<double>("circle_nms_dist_threshold", 1.5));
const auto yaw_norm_thresholds =
this->declare_parameter<std::vector<double>>("yaw_norm_thresholds");
// densification param
const std::string densification_world_frame_id =
this->declare_parameter("densification_world_frame_id", "map");
@@ -45,7 +47,6 @@ PointpaintingFusionNode::PointpaintingFusionNode(const rclcpp::NodeOptions & opt
const std::string head_onnx_path = this->declare_parameter("head_onnx_path", "");
const std::string head_engine_path = this->declare_parameter("head_engine_path", "");
class_names_ = this->declare_parameter<std::vector<std::string>>("class_names");
rename_car_to_truck_and_bus_ = this->declare_parameter("rename_car_to_truck_and_bus", false);
has_twist_ = this->declare_parameter("has_twist", false);
const std::size_t point_feature_size =
static_cast<std::size_t>(this->declare_parameter<std::int64_t>("point_feature_size"));
@@ -57,14 +58,22 @@ PointpaintingFusionNode::PointpaintingFusionNode(const rclcpp::NodeOptions & opt
static_cast<std::size_t>(this->declare_parameter<std::int64_t>("downsample_factor"));
const std::size_t encoder_in_feature_size =
static_cast<std::size_t>(this->declare_parameter<std::int64_t>("encoder_in_feature_size"));
const auto allow_remapping_by_area_matrix =
this->declare_parameter<std::vector<int64_t>>("allow_remapping_by_area_matrix");
const auto min_area_matrix = this->declare_parameter<std::vector<double>>("min_area_matrix");
const auto max_area_matrix = this->declare_parameter<std::vector<double>>("max_area_matrix");

detection_class_remapper_.setParameters(
allow_remapping_by_area_matrix, min_area_matrix, max_area_matrix);

centerpoint::NetworkParam encoder_param(encoder_onnx_path, encoder_engine_path, trt_precision);
centerpoint::NetworkParam head_param(head_onnx_path, head_engine_path, trt_precision);
centerpoint::DensificationParam densification_param(
densification_world_frame_id, densification_num_past_frames);
centerpoint::CenterPointConfig config(
class_names_.size(), point_feature_size, max_voxel_size, pointcloud_range, voxel_size,
downsample_factor, encoder_in_feature_size, score_threshold, circle_nms_dist_threshold);
downsample_factor, encoder_in_feature_size, score_threshold, circle_nms_dist_threshold,
yaw_norm_thresholds);

// create detector
detector_ptr_ = std::make_unique<image_projection_based_fusion::PointPaintingTRT>(
@@ -237,11 +246,12 @@ void PointpaintingFusionNode::postprocess(sensor_msgs::msg::PointCloud2 & painte
continue;
}
autoware_auto_perception_msgs::msg::DetectedObject obj;
centerpoint::box3DToDetectedObject(
box3d, class_names_, rename_car_to_truck_and_bus_, has_twist_, obj);
centerpoint::box3DToDetectedObject(box3d, class_names_, has_twist_, obj);
output_obj_msg.objects.emplace_back(obj);
}

detection_class_remapper_.mapClasses(output_obj_msg);

obj_pub_ptr_->publish(output_obj_msg);
}

26 changes: 15 additions & 11 deletions perception/lidar_centerpoint/CMakeLists.txt
@@ -70,10 +70,12 @@ message(STATUS "start to download")
if(TRT_AVAIL AND CUDA_AVAIL AND CUDNN_AVAIL)
# Download trained models
set(DATA_PATH ${CMAKE_CURRENT_SOURCE_DIR}/data)
set(DOWNLOAD_BASE_URL https://awf.ml.dev.web.auto/perception/models/centerpoint)
execute_process(COMMAND mkdir -p ${DATA_PATH})

function(download FILE_NAME FILE_HASH)
message(STATUS "Checking and downloading ${FILE_NAME}")
function(download VERSION FILE_NAME FILE_HASH)
message(STATUS "Checking and downloading ${FILE_NAME} ")
set(DOWNLOAD_URL ${DOWNLOAD_BASE_URL}/${VERSION}/${FILE_NAME})
set(FILE_PATH ${DATA_PATH}/${FILE_NAME})
set(STATUS_CODE 0)
message(STATUS "start ${FILE_NAME}")
@@ -86,31 +88,31 @@ if(TRT_AVAIL AND CUDA_AVAIL AND CUDNN_AVAIL)
else()
message(STATUS "diff ${FILE_NAME}")
message(STATUS "File hash changes. Downloading now ...")
file(DOWNLOAD https://awf.ml.dev.web.auto/perception/models/${FILE_NAME} ${FILE_PATH} STATUS DOWNLOAD_STATUS TIMEOUT 300)
file(DOWNLOAD ${DOWNLOAD_URL} ${FILE_PATH} STATUS DOWNLOAD_STATUS TIMEOUT 300)
list(GET DOWNLOAD_STATUS 0 STATUS_CODE)
list(GET DOWNLOAD_STATUS 1 ERROR_MESSAGE)
endif()
else()
message(STATUS "not found ${FILE_NAME}")
message(STATUS "File doesn't exists. Downloading now ...")
file(DOWNLOAD https://awf.ml.dev.web.auto/perception/models/${FILE_NAME} ${FILE_PATH} STATUS DOWNLOAD_STATUS TIMEOUT 300)
file(DOWNLOAD ${DOWNLOAD_URL} ${FILE_PATH} STATUS DOWNLOAD_STATUS TIMEOUT 300)
list(GET DOWNLOAD_STATUS 0 STATUS_CODE)
list(GET DOWNLOAD_STATUS 1 ERROR_MESSAGE)
endif()
if(${STATUS_CODE} EQUAL 0)
message(STATUS "Download completed successfully!")
else()
message(FATAL_ERROR "Error occurred during download: ${ERROR_MESSAGE}")
message(FATAL_ERROR "Error occurred during download: ${ERROR_MESSAGE} ${DOWNLOAD_URL}")
endif()
endfunction()

# default model
download(pts_voxel_encoder_default.onnx 410f730c537968cb27fbd70c941849a8)
download(pts_backbone_neck_head_default.onnx e97c165c7877222c0e27e44409a07517)
# centerpoint
download(v1 pts_voxel_encoder_centerpoint.onnx 94dcf01ccd53ef14a79787afaada63b4)
download(v1 pts_backbone_neck_head_centerpoint.onnx 7046995e502e11f35c9459668d8afde3)

# aip_x2 model
download(pts_voxel_encoder_aip_x2.onnx 3ae5e9efd7b2ed12115e6f0b28cac58d)
download(pts_backbone_neck_head_aip_x2.onnx 6a406a19e05660677c162486ab332de8)
# centerpoint_tiny
download(v1 pts_voxel_encoder_centerpoint_tiny.onnx 410f730c537968cb27fbd70c941849a8)
download(v1 pts_backbone_neck_head_centerpoint_tiny.onnx e97c165c7877222c0e27e44409a07517)

find_package(ament_cmake_auto REQUIRED)
ament_auto_find_build_dependencies()
@@ -123,10 +125,12 @@
### centerpoint ###
ament_auto_add_library(centerpoint_lib SHARED
lib/centerpoint_trt.cpp
lib/detection_class_remapper.cpp
lib/utils.cpp
lib/ros_utils.cpp
lib/network/network_trt.cpp
lib/network/tensorrt_wrapper.cpp
lib/postprocess/non_maximum_suppression.cpp
lib/preprocess/pointcloud_densification.cpp
lib/preprocess/voxel_generator.cpp
)
51 changes: 41 additions & 10 deletions perception/lidar_centerpoint/README.md
@@ -30,21 +30,45 @@ We trained the models using <https://github.com/open-mmlab/mmdetection3d>.

### Core Parameters

| Name | Type | Default Value | Description |
| ------------------------------- | ------ | ------------- | ----------------------------------------------------------- |
| `score_threshold` | float | `0.4` | detected objects with score less than threshold are ignored |
| `densification_world_frame_id` | string | `map` | the world frame id to fuse multi-frame pointcloud |
| `densification_num_past_frames` | int | `1` | the number of past frames to fuse with the current frame |
| `trt_precision` | string | `fp16` | TensorRT inference precision: `fp32` or `fp16` |
| `encoder_onnx_path` | string | `""` | path to VoxelFeatureEncoder ONNX file |
| `encoder_engine_path` | string | `""` | path to VoxelFeatureEncoder TensorRT Engine file |
| `head_onnx_path` | string | `""` | path to DetectionHead ONNX file |
| `head_engine_path` | string | `""` | path to DetectionHead TensorRT Engine file |
| Name | Type | Default Value | Description |
| ------------------------------- | ------------ | ------------- | ------------------------------------------------------------- |
| `score_threshold` | float | `0.4` | detected objects with score less than threshold are ignored |
| `densification_world_frame_id` | string | `map` | the world frame id to fuse multi-frame pointcloud |
| `densification_num_past_frames` | int | `1` | the number of past frames to fuse with the current frame |
| `trt_precision` | string | `fp16` | TensorRT inference precision: `fp32` or `fp16` |
| `encoder_onnx_path` | string | `""` | path to VoxelFeatureEncoder ONNX file |
| `encoder_engine_path` | string | `""` | path to VoxelFeatureEncoder TensorRT Engine file |
| `head_onnx_path` | string | `""` | path to DetectionHead ONNX file |
| `head_engine_path` | string | `""` | path to DetectionHead TensorRT Engine file |
| `nms_iou_target_class_names` | list[string] | - | target classes for IoU-based Non Maximum Suppression |
| `nms_iou_search_distance_2d` | double | - | If two objects are farther than the value, NMS isn't applied. |
| `nms_iou_threshold` | double | - | IoU threshold for the IoU-based Non Maximum Suppression |

## Assumptions / Known limits

- The `object.existence_probability` stores the classification confidence of the DNN, not a probability.

## Trained Models

You can download the onnx format of trained models by clicking on the links below.

### Changelog

#### v1 (2022/07/06)

| Name | URLs | Description |
| ------------------ | -------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
| `centerpoint` | [pts_voxel_encoder][v1-encoder-centerpoint] <br> [pts_backbone_neck_head][v1-head-centerpoint] | There is a single change due to the limitation in the implementation of this package. `num_filters=[32, 32]` of `PillarFeatureNet` |
| `centerpoint_tiny` | [pts_voxel_encoder][v1-encoder-centerpoint-tiny] <br> [pts_backbone_neck_head][v1-head-centerpoint-tiny] | The same model as `default` of `v0`. |

These changes are compared with [this configuration](https://github.com/tianweiy/CenterPoint/blob/v0.2/configs/waymo/pp/waymo_centerpoint_pp_two_pfn_stride1_3x.py).

#### v0 (2021/12/03)

| Name | URLs | Description |
| --------- | -------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
| `default` | [pts_voxel_encoder][v0-encoder-default] <br> [pts_backbone_neck_head][v0-head-default] | There are two changes from the original CenterPoint architecture. `num_filters=[32]` of `PillarFeatureNet` and `ds_layer_strides=[2, 2, 2]` of `RPN` |

## (Optional) Error detection and handling

<!-- Write how to detect errors and how to recover from them.
@@ -91,3 +115,10 @@
Currently, this package can't handle the chattering obstacles well. We plan to add some probabilistic filters in the perception layer to improve it.
Also, there are some parameters that should be global(e.g. vehicle size, max steering, etc.). These will be refactored and defined as global parameters so that we can share the same parameters between different nodes.
-->

[v0-encoder-default]: https://awf.ml.dev.web.auto/perception/models/pts_voxel_encoder_default.onnx
[v0-head-default]: https://awf.ml.dev.web.auto/perception/models/pts_backbone_neck_head_default.onnx
[v1-encoder-centerpoint]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_voxel_encoder_centerpoint.onnx
[v1-head-centerpoint]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_backbone_neck_head_centerpoint.onnx
[v1-encoder-centerpoint-tiny]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_voxel_encoder_centerpoint_tiny.onnx
[v1-head-centerpoint-tiny]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_backbone_neck_head_centerpoint_tiny.onnx
10 changes: 0 additions & 10 deletions perception/lidar_centerpoint/config/aip_x2.param.yaml

This file was deleted.

15 changes: 15 additions & 0 deletions perception/lidar_centerpoint/config/centerpoint.param.yaml
@@ -0,0 +1,15 @@
/**:
ros__parameters:
class_names: ["CAR", "PEDESTRIAN", "BICYCLE"]
point_feature_size: 4
max_voxel_size: 40000
point_cloud_range: [-89.6, -89.6, -3.0, 89.6, 89.6, 5.0]
voxel_size: [0.32, 0.32, 8.0]
downsample_factor: 1
encoder_in_feature_size: 9
# post-process params
circle_nms_dist_threshold: 0.5
iou_nms_target_class_names: ["CAR"]
iou_nms_search_distance_2d: 10.0
iou_nms_threshold: 0.1
yaw_norm_thresholds: [0.3, 0.0, 0.3]
@@ -1,10 +1,15 @@
/**:
ros__parameters:
class_names: ["CAR", "PEDESTRIAN", "BICYCLE"]
rename_car_to_truck_and_bus: true
point_feature_size: 4
max_voxel_size: 40000
point_cloud_range: [-89.6, -89.6, -3.0, 89.6, 89.6, 5.0]
voxel_size: [0.32, 0.32, 8.0]
downsample_factor: 2
encoder_in_feature_size: 9
# post-process params
circle_nms_dist_threshold: 0.5
iou_nms_target_class_names: ["CAR"]
iou_nms_search_distance_2d: 10.0
iou_nms_threshold: 0.1
yaw_norm_thresholds: [0.3, 0.0, 0.3]
