The width of pointcloud does not match the rgb image size #2937

Closed
quangmta opened this issue Nov 28, 2023 · 27 comments

@quangmta

quangmta commented Nov 28, 2023

Hello
I run this project in ROS with the config depth_module.profile:=424x240x6 and rgb_camera.profile:=320x180x30.
When I echo the topic /camera/depth/color/points, the height is 1 and the width varies around 43117, 43661, 43700, and so on.
Since the point cloud is built from the RGB image and the depth image, I would expect its width to be 320x180 = 57600. I guess some filters are applied to this point cloud.
Can I get the point cloud at its original size? In my case that would be 57600x1, or better still 320x180. I want to subscribe to this point cloud topic, and I need the original size to determine which point of the point cloud corresponds to which pixel in the RGB image.
My environment is:
Jetson Nano, Ubuntu 20.04, RealSense SDK 2.51.1, Wrapper 4.51.1
Thank you!
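
For reference, the launch instruction implied by this configuration would look roughly like the line below (a sketch only; the exact form can vary between wrapper versions and is not the reporter's confirmed command):

ros2 launch realsense2_camera rs_launch.py depth_module.profile:=424x240x6 rgb_camera.profile:=320x180x30 pointcloud.enable:=true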

@quangmta quangmta changed the title Pointcloud width is not fit with rgb image size The width of Point cloud does not match the rgb image size Nov 28, 2023
@quangmta quangmta changed the title The width of Point cloud does not match the rgb image size The width of pointcloud does not match the rgb image size Nov 28, 2023
@MartyG-RealSense
Collaborator

Hi @quangmta Do you get a pointcloud closer to expectations if you add pointcloud.allow_no_texture_points:=true to your launch instruction, please?
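
Applied to the configuration above, that suggestion would look something like this (again only a sketch of the launch line, not a confirmed command):

ros2 launch realsense2_camera rs_launch.py depth_module.profile:=424x240x6 rgb_camera.profile:=320x180x30 pointcloud.enable:=true pointcloud.allow_no_texture_points:=true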

@quangmta
Author

quangmta commented Nov 28, 2023

Hi @MartyG-RealSense. Thank you for answering so quickly. When I add pointcloud.allow_no_texture_points:=true, the point cloud seems to be aligned to the depth image. At the top of the point cloud, colors are filled in automatically (predicted) and they look incorrect; you can see this in the picture. The width of the point cloud increases to 60266, 59276, and so on, so it is still not fixed. I want to create a point cloud with the correct color (aligned to the RGB image) and a fixed width equal to the size of the RGB image.
[screenshot of the point cloud attached]

@MartyG-RealSense
Collaborator

Which RealSense camera model are you using, please?

If it is one of the 'D435' type models such as D435, D435i or D435f then the depth sensor's field of view is larger than the color sensor's. This means that the outer regions of the original depth image get cut off when a depth-RGB aligned pointcloud is created because the color sensor cannot see all of the areas of a scene that the depth sensor can. So when pointcloud.allow_no_texture_points:=true is used, the missing area is filled in.

@quangmta
Author

quangmta commented Nov 28, 2023

I'm using a D435i. Yes, I agree: the missing area is filled in and its color looks incorrect, so I don't use pointcloud.allow_no_texture_points. But the size of the point cloud is still not fixed.
I tried pointcloud.ordered_pc:=true; then the size of the point cloud is fixed at 424x240, the same as the depth image, and the missing area is filled in. With ordered_pc enabled I also tried adding pointcloud.allow_no_texture_points:=false, but nothing changed: the point cloud is still aligned to the depth image and its size is still 424x240.
Do you know how to create a point cloud aligned to the RGB image with a fixed size equal to the RGB image size?
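
As a side note on the ordered_pc output: when the cloud is organized (height 240, width 424 here), the point for depth pixel (u, v) sits at a fixed position in the message data, so the pixel-to-point correspondence can be recovered directly. A minimal Python sketch, assuming an organized sensor_msgs/PointCloud2 message and looking the field offsets up from the message rather than hard-coding them:

```python
import struct

def point_at_pixel(cloud_msg, u, v):
    """Return (x, y, z) for pixel (u, v) of an organized PointCloud2.

    Assumes the cloud was published with pointcloud.ordered_pc:=true,
    so its height/width match the depth image and points are stored
    in row-major order.
    """
    # Byte offset of each field within one point record.
    offsets = {f.name: f.offset for f in cloud_msg.fields}
    base = v * cloud_msg.row_step + u * cloud_msg.point_step
    fmt = '>f' if cloud_msg.is_bigendian else '<f'
    x = struct.unpack_from(fmt, cloud_msg.data, base + offsets['x'])[0]
    y = struct.unpack_from(fmt, cloud_msg.data, base + offsets['y'])[0]
    z = struct.unpack_from(fmt, cloud_msg.data, base + offsets['z'])[0]
    return x, y, z
```

Note that with ordered_pc the cloud follows the depth image, so (u, v) here are depth-image coordinates, not RGB coordinates.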

@MartyG-RealSense
Collaborator

The ROS wrapper aligns depth to color, meaning that some of the depth detail gets cut off due to aligning to the color sensor's smaller field of view on the D435 type camera models. Whilst color to depth alignment is possible in the librealsense SDK (which stretches the color image to the size of the larger depth image), the wrapper does not support color to depth alignment unfortunately.

@quangmta
Author

Thanks for the help. I'll try another way. If something works out, I'll tell you.

@MartyG-RealSense
Collaborator

Hi @quangmta Do you have an update about this case that you can provide, please? Thanks!

@quangmta
Author

Hi @MartyG-RealSense I just sorted the point cloud by x to get the right horizontal order.
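
For completeness, that reordering is a one-liner once the XYZ values are in a numpy array; a small sketch (how the (N, 3) array is extracted from the PointCloud2 message is left out):

```python
import numpy as np

def sort_by_x(points: np.ndarray) -> np.ndarray:
    """Reorder an (N, 3) array of XYZ points by ascending x."""
    return points[np.argsort(points[:, 0])]
```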

@MartyG-RealSense
Collaborator

It's excellent news that you were successful, @quangmta - thanks very much for the update!

@quangmta
Author

@MartyG-RealSense Thank you very much for the help

@quangmta
Author

Hi @MartyG-RealSense I'm using a D435i, but the depth error is very large, about 10% at 2 m and 20% at 4 m. I want to align the point cloud from the camera with the laser scan from a Hokuyo URG-04LX-UG01 (2D LIDAR). Have you done this before, or do you have any ideas?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 20, 2023

Aligning RealSense and non-RealSense cameras is complicated but possible, like in the case at IntelRealSense/librealsense#10363 where a RealSense camera was aligned with a thermal camera. The method uses the RealSense SDK's software-device interface, which is best used with the C++ language (which may not suit your needs if you are using ROS).

I note that you have a Jetson Nano. A ROS project at the link below that uses a RealSense camera and a Hokuyo LIDAR together with a Jetson may therefore be of interest.

https://github.com/reuben-thomas/NP-DevCar

@quangmta
Author

Thank you. But do you know why I got such a large depth error? I'm using the D435i in my room. I know that the depth error of RealSense can be affected by the light source, the shape of the object, and so on. Do you know how to set up the camera, or anything else I can do, to increase the depth accuracy?
Since the depth from my LIDAR is very accurate, I want to align the point cloud using the LIDAR. But it's a 2D LIDAR, so I think some algorithm is needed to align them smoothly. I'm using a linear approximation function, and it seems to work fine:
[screenshot of the corrected point cloud attached]
But in some cases the point cloud is not smooth.
[screenshot of the problematic point cloud attached]
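
The linear approximation mentioned above is not shown, but the general idea can be sketched as fitting a first-order correction that maps camera depth onto the matching LIDAR range; this is purely an illustrative sketch, not the reporter's actual code:

```python
import numpy as np

def fit_linear_depth_correction(camera_depth, lidar_range):
    """Fit corrected = a * depth + b from matched camera/LIDAR samples (metres)."""
    a, b = np.polyfit(np.asarray(camera_depth, dtype=float),
                      np.asarray(lidar_range, dtype=float), deg=1)
    return a, b

def correct_depth(depth, a, b):
    """Apply the fitted linear correction to a depth array."""
    return a * np.asarray(depth, dtype=float) + b
```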

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 20, 2023

The wavy depth resembles a phenomenon known as "ghost noise". In IntelRealSense/librealsense#4553 and IntelRealSense/librealsense#3741 (comment) possible remedies for this that are suggested by Intel RealSense team members include performing a calibration and applying post-processing filters.

@MartyG-RealSense
Collaborator

Hi @quangmta Do you require further assistance with this case, please? Thanks!

@quangmta
Author

quangmta commented Dec 30, 2023

Hi @MartyG-RealSense I still haven't resolved it. Now I want to try post-processing the depth map instead of the point cloud. Thanks for your dedication!

@MartyG-RealSense
Collaborator

Hi @quangmta Do you need advice about using post-processing filters in ROS, please?

@quangmta
Author

Hi @MartyG-RealSense. Yes. Could you tell me how to create a point cloud from a depth map, please? I want to post-process the depth map and then create a point cloud from it.

@MartyG-RealSense
Collaborator

The filters in the ROS2 wrapper have the following parameter names.

disparity_filter - convert depth to disparity before applying other filters and back.
spatial_filter - filter the depth image spatially.
temporal_filter - filter the depth image temporally.
hole_filling_filter - apply hole-filling filter.
decimation_filter - reduce the resolution of the depth image.

Below is an example of a ROS2 launch instruction that enables the decimation filter and pointcloud.

ros2 launch realsense2_camera rs_launch.py pointcloud.enable:=true decimation_filter.enable:=true

For other types of post-processing filter, change decimation_filter.enable:=true to another filter name, such as temporal_filter.enable:=true
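
Presumably more than one filter can be enabled in the same launch instruction by listing each parameter, along the lines of:

ros2 launch realsense2_camera rs_launch.py pointcloud.enable:=true spatial_filter.enable:=true temporal_filter.enable:=true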

@quangmta
Author

quangmta commented Jan 1, 2024

Hi @MartyG-RealSense But how can I use my manual filter? Thanks and Happy new year!

@MartyG-RealSense
Collaborator

Are you referring to the resolutions for depth and color that you manually defined? If so then when a depth + color 'textured pointcloud' is generated, color data will automatically be mapped onto the depth coordinates to generate a pointcloud where depth is aligned to color.

If you have a D435 or D455 type of camera (such as D435 / D435i or D455 / D455f) then you should be able to set both depth and color to a resolution of 424x240. On the D435 type of camera though, the field of view size of the color sensor is smaller than the depth field of view. This means that when depth and color are aligned, some of the image will be cut off at its outer edges.

You can obtain a full pointcloud by adding the parameter pointcloud.allow_no_texture_points:=true to your launch instruction to color the regions beyond the cut-off area with zeros. You cannot add color texture to the area filled with zeroes though. This is because if the camera's color field of view is smaller than the depth field of view, the RGB sensor cannot see as much of the scene as the depth sensor can, and so color data for the outer regions of the depth image will not exist.

On camera models where the depth and color field of view sizes are approximately the same, such as D455, then the amount of edge area that is cut off in an aligned image is much less.

@quangmta
Author

quangmta commented Jan 4, 2024

I want to subscribe to the depth map and RGB image topics, change something using my own methods, and then create a point cloud from the result. How can I create a point cloud from a depth map? Thanks.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 4, 2024

Including pointcloud.enable:=true in your launch instruction converts the depth and color data into a point cloud published on the topic /camera/camera/depth/color/points
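
If the goal is instead to build the cloud from a depth image that has already been modified, the usual approach is to deproject each depth pixel through the pinhole intrinsics published on the corresponding camera_info topic. Below is a rough Python sketch; the topic names, the use of cv_bridge, and the assumptions that depth is 16UC1 in millimetres and already aligned to color (e.g. via align_depth.enable:=true) are assumptions rather than confirmed details of this setup:

```python
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import CameraInfo, Image
from cv_bridge import CvBridge


class DepthToPoints(Node):
    """Deproject a depth image into XYZ points using pinhole intrinsics."""

    def __init__(self):
        super().__init__('depth_to_points')
        self.bridge = CvBridge()
        self.K = None  # 3x3 intrinsic matrix taken from camera_info
        self.create_subscription(
            CameraInfo, '/camera/aligned_depth_to_color/camera_info',
            self.info_cb, 10)
        self.create_subscription(
            Image, '/camera/aligned_depth_to_color/image_raw',
            self.depth_cb, 10)

    def info_cb(self, msg):
        self.K = np.array(msg.k).reshape(3, 3)

    def depth_cb(self, msg):
        if self.K is None:
            return
        depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
        z = depth.astype(np.float32) / 1000.0           # 16UC1 mm -> metres
        fx, fy = self.K[0, 0], self.K[1, 1]
        cx, cy = self.K[0, 2], self.K[1, 2]
        v, u = np.indices(z.shape)                      # pixel row/column grids
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.dstack((x, y, z)).reshape(-1, 3)    # one row per pixel, row-major
        # 'points' keeps the height*width ordering of the image, so
        # points[v * width + u] corresponds to pixel (u, v); post-process
        # or republish it as a PointCloud2 here.


def main():
    rclpy.init()
    rclpy.spin(DepthToPoints())


if __name__ == '__main__':
    main()
```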

@quangmta
Author

quangmta commented Jan 4, 2024

Ok. Thank you!

@MartyG-RealSense
Collaborator

Hi @quangmta Do you require further assistance with this case, please? Thanks!

@quangmta
Author

quangmta commented Jan 9, 2024

Hi @MartyG-RealSense I don't think I need anything further now. You can close this issue. Thank you for the help!

@MartyG-RealSense
Collaborator

You are very welcome. Thanks very much for the update!
