
Incorrect point cloud reconstruction using aligned RGB and depth images #10313

Closed
vagrant-drift opened this issue Mar 15, 2022 · 17 comments

Comments

@vagrant-drift

vagrant-drift commented Mar 15, 2022

| Required Info | |
| --- | --- |
| Camera Model | D435 |
| Firmware Version | 05.11.01.100 |
| Operating System & Version | Ubuntu 18.04 |
| Kernel Version (Linux Only) | Linux version 4.9.201-tegra |
| Platform | NVIDIA Jetson |
| SDK Version | librealsense2.so.2.49 |
| Language | C++ |
| Segment | Robot |

Issue Description

The point cloud reconstruction of a car's license plate is incorrect when using aligned RGB and depth images.

RGB image:
[image]

Point cloud:
[image]

Code (y1, y2, x1, x2 is the plate's position info):

```cpp
for (int i = y1; i < y2; i++)
{
    for (int j = x1; j < x2; j++)
    {
        // Skip pixels with zero or implausibly small depth values
        if (depthImage.at<ushort>(i, j) == 0 || 3 > depthImage.at<ushort>(i, j))
        {
            continue;
        }
        float point[3] = {0.0, 0.0, 0.0}, pix[2] = {0.0, 0.0};
        pix[0] = j;
        pix[1] = i;
        // Deproject the pixel to a 3D point; depth is converted from mm to metres
        rs2_deproject_pixel_to_point(point, &intric_, pix, depthImage.at<ushort>(i, j) / 1000.0);

        pcl::PointXYZ pont(point[0], point[1], point[2]);
        cloud.push_back(pont);
    }
}
```
The cover material is poly(methyl methacrylate) (PMMA) with a thickness of 5 mm.

[image]

[image]

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 15, 2022

Hi @vagrant-drift. The problem with your point cloud image is likely not due to reflections from the material of the vehicle numberplate, as numberplates have to be non-reflective so that road speed-cameras can capture an image of them.

The thickness of the 5 mm cover material may also not be the problem, as similar covers have been used by other custom housings for RealSense cameras such as the underwater one in the image below, from the discussion IntelRealSense/realsense-ros#1723

[image]

I would recommend giving your housing's cover material a wipe though as it does not look fully clean and transparent.

You could test whether back-reflections from your chosen cover material are negatively impacting the depth image by turning off the camera's projector. This can be done in C++ by setting the SDK's RS2_OPTION_EMITTER_ENABLED option to '0' or by setting RS2_OPTION_LASER_POWER to '0'. The official documentation page linked to below describes how to do this in C++ under the librealsense2 heading.

https://dev.intelrealsense.com/docs/api-how-to#section-controlling-the-laser
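A minimal sketch of what that looks like in C++, following the pattern from the documentation page above (the `pipe` and `profile` names here are placeholders for your own pipeline objects):

```cpp
#include <librealsense2/rs.hpp>

rs2::pipeline pipe;
rs2::pipeline_profile profile = pipe.start();

// The depth sensor owns the emitter-related options
rs2::depth_sensor depth_sensor = profile.get_device().first<rs2::depth_sensor>();

// Turn the projector off entirely...
if (depth_sensor.supports(RS2_OPTION_EMITTER_ENABLED))
    depth_sensor.set_option(RS2_OPTION_EMITTER_ENABLED, 0.f);

// ...or, alternatively, drive the laser power down to zero
if (depth_sensor.supports(RS2_OPTION_LASER_POWER))
    depth_sensor.set_option(RS2_OPTION_LASER_POWER, 0.f);
```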

@vagrant-drift
Author

vagrant-drift commented Mar 15, 2022

@MartyG-RealSense Thank you for your help. I will try what you suggested and report the result. Indoors, the point cloud is correct:

[image]

@vagrant-drift
Author

vagrant-drift commented Mar 16, 2022

Hi @MartyG-RealSense, I set the RS2_OPTION_EMITTER_ENABLED option to '0' and this is the point cloud I get:

[image]

Why is part of the point cloud lost? This is the RGB image:

[image]

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 16, 2022

If you achieve a correct point cloud indoors with the projector enabled, then the problem with the outdoor image is likely not related to back-reflections from the cover material or to dirt on the cover.

Was the image with the missing corner that was produced with the projector disabled taken indoors, please?

When the projector is disabled, the amount of detail on the depth image can be reduced, as the light emission from the projector is removed and it no longer casts a pattern of infrared dots onto surfaces in the scene. The camera can analyze these dots - which act as a readable texture - for depth information instead of reading the depth from the material directly.

In scenes that are well lit though, if the dot pattern is turned off then the camera can alternatively use ambient light in the scene to analyze surfaces for depth detail. It will have difficulty reading plain surfaces that have low texture or no texture though, which is why it is beneficial to have the dot pattern enabled when depth-sensing with such surfaces as the textured dots projected onto the surface can be read instead.

If the missing-corner image was taken indoors then the indoor lighting may be helping to minimize detail loss from having the projector disabled but that particular area of the scene cannot be read without the dots being present in it.

It looks as though the missing corner corresponds on the RGB image to the area where the black bumper begins. It is a general physics principle that dark grey or black colours absorb light, making it difficult for depth cameras to analyze those surfaces for depth detail. The darker the color, the less information is returned to the camera. Without the projector's dot pattern present, the bumper may be unreadable.

@vagrant-drift
Author

@MartyG-RealSense Thank you for your help. The results above were produced outdoors with the projector disabled. I only build the point cloud for the numberplate (red rectangle below), not the black bumper:

[image]

So I think the dark grey is not the problem. The D435 is carried by a robot and is used while the robot is moving. Does this affect the result?

@MartyG-RealSense
Collaborator

The D435 has a fast 'global' shutter on its depth sensor which enables vehicles travelling at full speed to be depth captured. If you are mapping RGB onto the point cloud though, the RGB sensor of the D435 has a slower 'rolling' shutter that may produce blurring on the RGB image during motion. Using auto-exposure with an FPS of 60 for the RGB stream can help to reduce motion blur.

Alternatively, if manual exposure is being used then blurring can be reduced by setting FPS to 6 (not 60) and using a manual RGB exposure value of around 70.
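A hedged sketch of both settings (the `profile` object is assumed to come from `pipe.start()`; the FPS itself is set where the stream is enabled, not through an option):

```cpp
// Get the RGB sensor from the active device
rs2::color_sensor rgb = profile.get_device().first<rs2::color_sensor>();

// Option 1: auto-exposure, with the RGB stream configured at 60 FPS
rgb.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 1.f);

// Option 2: manual exposure of around 70, with the stream configured at 6 FPS
rgb.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 0.f);
rgb.set_option(RS2_OPTION_EXPOSURE, 70.f);
```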

@vagrant-drift
Author

vagrant-drift commented Mar 16, 2022

@MartyG-RealSense
Thanks for your help, but I cannot understand this part:

> The D435 has a fast 'global' shutter on its depth sensor which enables vehicles travelling at full speed to be depth captured. If you are mapping RGB onto the point cloud though, the RGB sensor of the D435 has a slower 'rolling' shutter that may produce blurring on the RGB image during motion.

I do not map RGB onto the point cloud.

I build the point cloud using the aligned RGB and depth frames obtained from the SDK. Partial code (y1, y2, x1, x2 is the plate's position info in the RGB image):

```cpp
for (int i = y1; i < y2; i++)
{
    for (int j = x1; j < x2; j++)
    {
        if (depthImage.at<ushort>(i, j) == 0 || 3 > depthImage.at<ushort>(i, j))
        {
            continue;
        }
        float point[3] = {0.0, 0.0, 0.0}, pix[2] = {0.0, 0.0};
        pix[0] = j;
        pix[1] = i;
        rs2_deproject_pixel_to_point(point, &intric_, pix, depthImage.at<ushort>(i, j) / 1000.0);

        pcl::PointXYZ pont(point[0], point[1], point[2]);
        cloud.push_back(pont);
    }
}
```

Does blur on the RGB image affect the point cloud? The image seems clear:

[image]

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 16, 2022

If you are not mapping RGB onto your point cloud then the point cloud would not be directly affected by the RGB sensor's slower shutter.

If you do have depth and RGB streams enabled at the same time, though, it is possible for the FPS of one of the streams to experience lag (for example, being set to 30 FPS but having an actual FPS of 18). Do you have an RGB stream active at the same time as you are streaming depth for your point cloud, please, or are you using depth only?
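One way to check the actual delivered rate is a simple frame-counting loop. This is a generic timing sketch rather than an official SDK utility (the `pipe` variable is assumed to be a started `rs2::pipeline`):

```cpp
#include <chrono>
#include <iostream>

#include <librealsense2/rs.hpp>

// Count delivered framesets over a one-second window to estimate the real FPS
auto t0 = std::chrono::steady_clock::now();
int frames_received = 0;

while (std::chrono::steady_clock::now() - t0 < std::chrono::seconds(1))
{
    pipe.wait_for_frames(); // blocks until the next frameset arrives
    ++frames_received;
}

std::cout << "Delivered FPS: " << frames_received << std::endl;
```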

@vagrant-drift
Author

@MartyG-RealSense
Thanks for your help. I start both the RGB and depth streams. I use the RGB stream to detect the numberplate and obtain y1, y2, x1, x2, then use those coordinates to get the numberplate's depth information from the depth frame aligned with the RGB frame, and finally produce the point cloud from that depth information.

Init code:

```cpp
m_rs2_cap.rs_cap.cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 15);
m_rs2_cap.rs_cap.cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 15);
m_rs2_cap.rs_cap.pipe.start(m_rs2_cap.rs_cap.cfg);
```

@MartyG-RealSense
Collaborator

You are less likely to be affected by stream lag from having both streams enabled at the same time if you are using 15 FPS instead of 30. So there should not be a problem there.

My PCL programming knowledge is admittedly limited. Your code is generating good quality point clouds though, at least indoors, so it does not seem as though the code should be a cause for concern. It may be worth testing the SDK's official PCL depth-only point cloud example rs-pcl to see how the output of that program compares to your own though.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/pcl/pcl/rs-pcl.cpp

@vagrant-drift
Author

@MartyG-RealSense Thank you. I tested my code and it only gets 3-4 FPS. Does this matter? This is the read code:

```cpp
GetRs2CapInst()->rs_cap.frames = GetRs2CapInst()->rs_cap.pipe.wait_for_frames();

GetRs2CapInst()->rs_cap.frames = align.process(GetRs2CapInst()->rs_cap.frames);

rs2::depth_frame depth_frame = GetRs2CapInst()->rs_cap.frames.get_depth_frame();
rs2::frame color_frame = GetRs2CapInst()->rs_cap.frames.get_color_frame();
```

@MartyG-RealSense
Collaborator

Only getting 3-4 FPS does matter if you defined a rate of 15 FPS, as it indicates a significant drag on FPS.

I see that you are using align.process but before that instruction is used, align_to is usually defined first, soon after the pipeline start instruction, as demonstrated in the C++ rs-align example.

https://github.com/IntelRealSense/librealsense/blob/master/examples/align/rs-align.cpp#L59-L60

Is your script using the align_to instruction, please?
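For reference, here is a minimal sketch of that ordering, modelled on the rs-align example (the stream settings mirror your init code; the surrounding application logic is omitted):

```cpp
#include <librealsense2/rs.hpp>

rs2::pipeline pipe;
rs2::config cfg;
cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 15);
cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 15);
pipe.start(cfg);

// Define the align object once, soon after starting the pipeline;
// constructing it inside the capture loop would be expensive
rs2::align align_to(RS2_STREAM_COLOR);

while (true)
{
    rs2::frameset frames = pipe.wait_for_frames();
    frames = align_to.process(frames);

    rs2::depth_frame depth = frames.get_depth_frame();
    rs2::video_frame color = frames.get_color_frame();
    // ... build the point cloud from the aligned depth frame ...
}
```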

@vagrant-drift
Author

@MartyG-RealSense Thanks for your help. I did not use the align_to instruction. I will add it. Thank you!

@vagrant-drift
Author

@MartyG-RealSense I made a mistake yesterday. The code had in fact added align_to, but not soon after the pipeline start instruction:

```cpp
rs2::align align(RS2_STREAM_COLOR);

// Added: wait for the streams to start up and converge
// do something...
RS2_wait_tolerance(30);
struct timespec s, e;
while (m_rs2_cap.task.is_running)
{
    clock_gettime(CLOCK_MONOTONIC, &s);

    GetRs2CapInst()->rs_cap.frames = GetRs2CapInst()->rs_cap.pipe.wait_for_frames();

    GetRs2CapInst()->rs_cap.frames = align.process(GetRs2CapInst()->rs_cap.frames);

    rs2::depth_frame depth_frame = GetRs2CapInst()->rs_cap.frames.get_depth_frame();
    rs2::frame color_frame = GetRs2CapInst()->rs_cap.frames.get_color_frame();
}
```

I then moved align_to to soon after the pipeline start instruction, but the result is the same: 3-4 FPS.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 18, 2022

Does FPS improve if you comment out the pointcloud generation lines? This will help to show whether or not the pointcloud is responsible for most of the FPS drop.

[image]

As you are using C++ you could also try taking advantage of the RealSense SDK's GLSL processing blocks acceleration for rs2:: functions, such as rs2::align and rs2::pointcloud. This can be done simply by adding gl:: in front of an rs2:: instruction.

For example with your align instruction:

```cpp
rs2::gl::align align(RS2_STREAM_COLOR);
```

#3654 has a very good pros and cons analysis of GLSL processing blocks and when it can be used.

There is also an rs-gl C++ example program for generating a point cloud with GLSL support.

https://github.com/IntelRealSense/librealsense/tree/master/examples/gl
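A rough sketch of the GLSL setup, modelled on the rs-gl example (the window object passed to init_processing is elided here; check the example for the exact initialization your platform needs):

```cpp
#include <librealsense2/rs.hpp>
#include <librealsense2-gl/rs_processing_gl.hpp> // GLSL processing blocks

// An OpenGL context/window must exist before the GL blocks are initialized;
// 'app' stands in for the example's window helper object.
rs2::gl::init_processing(app, true); // true = use GPU processing

// GPU-accelerated drop-in replacements for the CPU processing blocks
rs2::gl::align align(RS2_STREAM_COLOR);
rs2::gl::pointcloud pc;
```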

@MartyG-RealSense
Collaborator

Hi @vagrant-drift Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
