
background model capacity #11

Open · h8c2 opened this issue Jul 17, 2024 · 2 comments

h8c2 commented Jul 17, 2024

I have applied your method to my dataset. From what I understand, you scale the poses in advance, whereas I keep the poses at their original scale. This means I need to set scene_box.aabb accordingly (i.e., an aabb in real scale with scale=1.0) and center the poses. Is there anything else I might have missed? I noticed that the background region's result is poor, while the foreground region looks good.

[screenshot: rendered result]
I'm wondering if there might be an issue with my data processing, or if the background model's capacity is limiting the performance.
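For reference, here is roughly what my setup looks like (a minimal sketch assuming nerfstudio's SceneBox; the bounds and pose handling are illustrative, not my exact code):

```python
import torch
from nerfstudio.data.scene_box import SceneBox

# Real-scale setup: the aabb is specified in metres (illustrative bounds)
# and no extra rescaling is applied (scale stays at 1.0).
scene_box = SceneBox(
    aabb=torch.tensor([[-30.0, -30.0, -5.0],
                       [ 30.0,  30.0, 10.0]])
)

# poses: (N, 4, 4) camera-to-world matrices in the original metric scale.
poses = torch.eye(4).unsqueeze(0).repeat(8, 1, 1)  # placeholder poses

# Center the poses so the scene sits around the origin; keep scale at 1.0.
poses[:, :3, 3] -= poses[:, :3, 3].mean(dim=0)
scale_factor = 1.0
```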

h8c2 (Author) commented Jul 17, 2024

I have found that the floaters were introduced by improper occupancy-grid parameters; however, the background is still not satisfactory.

XJay18 (Collaborator) commented Jul 22, 2024

Hi, thank you for your question.

I think this may result from (1) a lack of point cloud data for objects in the background scene box, and (2) limited background model capacity.

For the second case, you can increase the background model's capacity by setting a larger value for pipeline.model.bg-color-grid-max-res or pipeline.model.bg-color-log2-hashmap-size. You can also try fixing the number of importance samples along each ray, i.e., setting pipeline.model.pdf-samples-fixed-ratio to 1.
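On the command line this would look something like the sketch below (the entry point, method name, and values are placeholders; only the flag names come from the options above):

```bash
ns-train <method> <your-usual-args> \
  --pipeline.model.bg-color-grid-max-res 2048 \
  --pipeline.model.bg-color-log2-hashmap-size 21 \
  --pipeline.model.pdf-samples-fixed-ratio 1.0
```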

For the first case, since the model relies on lidar initialization for efficient reconstruction, regions where the density grid is not properly initialized (i.e., objects without lidar observations) may be skipped during occupancy sampling, leading to less satisfactory results. We augment the background point cloud (i.e., $P_{bg}$ in our paper) to alleviate this issue, but there are still some failure cases. If you can obtain point cloud data for the background objects, the results should improve. It would also help to visualize the foreground (red) scene box together with the point cloud data in a 3D viewer, like this:

[screenshot: 3D viewer showing the red foreground scene box overlaid on the point cloud]
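If it helps, here is a minimal sketch of such a check using Open3D (Open3D is just one option; the file path and box bounds are placeholders for your own data):

```python
import numpy as np
import open3d as o3d

# Load the lidar / SfM point cloud (placeholder path).
pcd = o3d.io.read_point_cloud("points.ply")

# Foreground scene box as an axis-aligned bounding box, drawn in red.
# Replace the bounds with your real-scale foreground aabb.
box = o3d.geometry.AxisAlignedBoundingBox(
    min_bound=np.array([-20.0, -20.0, -5.0]),
    max_bound=np.array([20.0, 20.0, 5.0]),
)
box.color = (1.0, 0.0, 0.0)

# Overlay the box on the point cloud to see which objects fall outside it.
o3d.visualization.draw_geometries([pcd, box])
```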

Then you can enlarge the foreground box accordingly, so that it includes regions that initially fell in the background portion, which should improve visual quality.
