
fix(euclidean_cluster): fix euclidean cluster params #7662

@@ -106,24 +106,28 @@
temporary_cluster.height = pointcloud_msg->height;
temporary_cluster.fields = pointcloud_msg->fields;
temporary_cluster.point_step = point_step;
temporary_cluster.data.resize(max_cluster_size_ * point_step);
temporary_cluster.data.resize(cluster.indices.size() * point_step);
clusters_data_size.push_back(0);
}

// create vector of point cloud cluster. vector index is voxel grid index.
for (size_t i = 0; i < pointcloud->points.size(); ++i) {
const auto & point = pointcloud->points.at(i);
const int index =
voxel_grid_.getCentroidIndexAt(voxel_grid_.getGridCoordinates(point.x, point.y, point.z));
if (map.find(index) != map.end()) {
auto & cluster_data_size = clusters_data_size.at(map[index]);
if (cluster_data_size + point_step > std::size_t(max_cluster_size_ * point_step)) {
continue;
}
std::memcpy(
&temporary_clusters.at(map[index]).data[cluster_data_size],
&pointcloud_msg->data[i * point_step], point_step);
cluster_data_size += point_step;
if (cluster_data_size == temporary_clusters.at(map[index]).data.size()) {
temporary_clusters.at(map[index])
.data.resize(temporary_clusters.at(map[index]).data.size() * 2);
@badai-nguyen badai-nguyen Jun 26, 2024

Using the maximum value to allocate space will cause excessive CPU and memory usage.

@beginningfan I agree with you.
However, I don't think it is reasonable to cap the output cluster size (cluster_data_size) at double the temporary_cluster size; a runtime_error might occur here for some high-density point clouds.

Suggested change
.data.resize(temporary_clusters.at(map[index]).data.size() * 2);
.data.resize(max_cluster_size_ * point_step);

Or we can increment cluster_data_size and check it repeatedly.

    if (cluster_data_size >= temporary_clusters.at(map[index]).data.size()) {
        temporary_clusters.at(map[index]).data.resize(cluster_data_size + point_step * 10);
      }
      std::memcpy(
        &temporary_clusters.at(map[index]).data[cluster_data_size],
        &pointcloud_msg->data[i * point_step], point_step);
      cluster_data_size += point_step;
    }

Contributor Author


Here, the for loop doubles the allocated space every time it fills up; when the space is full again, it is doubled again. I have tested this with an ouster64, and the dense point cloud is sometimes 50 times larger than the downsampled one. Even so, enough space can be allocated quickly.
If we used max_cluster_size_, we might as well keep the original code. And if we add space for 10 points each time, the vector data will end up being copied many times.

Contributor

@beginningfan I understood. I have checked and it worked well. Thank you for your improvement.

}

Check warning on line 130 in perception/euclidean_cluster/lib/voxel_grid_based_euclidean_cluster.cpp

CodeScene Delta Analysis / CodeScene Cloud Delta Analysis (main)

❌ Getting worse: Complex Method

VoxelGridBasedEuclideanCluster::cluster increases in cyclomatic complexity from 12 to 13, threshold = 9. This function has many conditional statements (e.g. if, for, while), leading to lower code health. Avoid adding more conditionals and code to it without refactoring.
}
}
