
D435i Distance and Size Calculation Errors #10779

Closed
ghost opened this issue Aug 15, 2022 · 13 comments

@ghost

ghost commented Aug 15, 2022

Required Info
Camera Model: D435i
Firmware Version: RealSense Viewer v2.50.0
Operating System & Version: Linux (Ubuntu 18.04.6 LTS)
Kernel Version (Linux Only): 5.4.0-124-generic
Platform: PC / NVIDIA Jetson Nano
SDK Version: { legacy / 2.0 }
Language: python
Segment: Robot

Wrong Distance Data in the Depth Frame

I am working on a robotics project and the D435i will be the eyes of the robot. I want to estimate the distance from the robot to any obstacle and the size of that obstacle. I apply post-processing and then read the distance data, but there are inaccuracies when moving away from the center of the frame. For example, the D435i measures the distance to an object as 400 mm if it is located at the center of the frame, but 350 mm if it is located at the left side of the frame, and the inaccuracy increases as the distance increases.
On the other hand, to calculate the size of an object I follow the formula given in the link. I use a focal length, sensor height and sensor width of 1.93 mm, 1.44 mm and 2.54 mm, respectively (848x480). I found these values by searching online, but I don't think the calculations give me the correct result.
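For context, a rough sketch of the size calculation I am following (I believe the linked formula is the standard pinhole relation), with the sensor values above and placeholder pixel measurements:

# Rough sketch of the pinhole object-size formula; the pixel values here
# are placeholders, not measured data.
FOCAL_LENGTH_MM = 1.93
SENSOR_HEIGHT_MM = 1.44
IMAGE_HEIGHT_PX = 480      # 848x480 stream

distance_mm = 400          # depth reading to the object (placeholder)
object_height_px = 120     # object height measured in the image (placeholder)

# real height = distance * object height in pixels * sensor height
#               / (focal length * image height in pixels)
object_height_mm = (distance_mm * object_height_px * SENSOR_HEIGHT_MM) / (FOCAL_LENGTH_MM * IMAGE_HEIGHT_PX)
print(f"Estimated object height: {object_height_mm:.1f} mm")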
I would like to know the reason for the distance calculation errors and the possible solutions.
Also, what can I do to calculate the size of an object in 3D with more accurate values/formulas?

Thanks in advance,

Selim

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 15, 2022

Hi @yvzslmozky Are you aligning depth to color and obtaining the distance with the get_distance() instruction, please? If you are then there is a known issue with this method where distance is accurate at the center of the image but increasingly drifts when moving out from the center towards the outer regions of the image in the X or Y axes. #10224 (comment) has links to references about this.

Using the intrinsics of the stream that depth is being aligned to should provide more accurate measurement results in an aligned image than using the depth intrinsics, as described at #3735 (comment)

If depth is being aligned to color, the color intrinsics would be used.


An alternative means of obtaining real-world distance that is more accurate than performing alignment and deprojection is to map color to depth coordinates with map_to, generate a point cloud with pc.calculate and then retrieve the 3D coordinates from the vertices of the pointcloud with points.get_vertices(). The two approaches of deprojection and points.get_vertices are discussed by RealSense team members at #4315
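As a rough illustration only (not tested code), the map_to / calculate / get_vertices sequence could look something like this in Python, assuming 'frames' is a frameset that has already been retrieved and the pixel coordinates are placeholders:

import numpy as np
import pyrealsense2 as rs

# Sketch only: 'frames' is assumed to be an existing frameset from pipeline.wait_for_frames()
color_frame = frames.get_color_frame()
depth_frame = frames.get_depth_frame()

pc = rs.pointcloud()
pc.map_to(color_frame)               # map texture coordinates to the color stream
points = pc.calculate(depth_frame)   # build the pointcloud from the depth frame

# Vertices are stored row-major at the depth frame's resolution, in metres
verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
w = depth_frame.get_width()
x, y = 424, 240                      # placeholder pixel of interest
dx, dy, dz = verts[y * w + x]        # 3D coordinates of that pixel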

@ghost
Author

ghost commented Aug 15, 2022

Hi @MartyG-RealSense ,
I was aligning the depth frame to the color frame since the color frame has a smaller FOV. I will try what you suggested and leave a comment about the result.

Thanks,

Selim

@MartyG-RealSense
Collaborator

Thanks very much @yvzslmozky for the update!

When aligning depth to color or color to depth, the SDK's align processing block makes automatic adjustments for differences in resolution and FOV size between the two streams.
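For illustration, a minimal depth-to-color alignment sketch in Python (the stream settings here are just example values) would be along these lines:

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)        # align depth to the color stream

frames = pipeline.wait_for_frames()
aligned = align.process(frames)          # resolution / FOV differences are handled here
aligned_depth = aligned.get_depth_frame()
color = aligned.get_color_frame()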

@ghost
Author

ghost commented Aug 16, 2022

Hi @MartyG-RealSense

I have been working on the issues I mentioned above. The good news is that the depth, in fact Euclidean, measurements are nearly perfect, especially at short distances. For this I used the rs.rs2_deproject_pixel_to_point() method.

 import math
 import pyrealsense2 as rs  # decimation, depth_frame, color_frame, mouseX and mouseY come from the surrounding script

 # Decimation shrinks the depth frame, so the cursor coordinates must be scaled down
 size_reduction = int(decimation.get_option(rs.option.filter_magnitude))

 distance_z = depth_frame.get_distance(mouseX//size_reduction, mouseY//size_reduction)
 color_intrinsics = color_frame.profile.as_video_stream_profile().intrinsics
 dx, dy, dz = rs.rs2_deproject_pixel_to_point(color_intrinsics, [mouseX//size_reduction, mouseY//size_reduction], distance_z)
 euclidean_distance = math.sqrt(dx**2 + dy**2 + dz**2)

Here I want to point out a few things. I am choosing the pixel from the RGB frame by clicking with the mouse and using the coordinates of the cursor, obtained via the cv2.setMouseCallback() method.
Firstly, one should consider the effect of the decimation filter, since it reduces the size of the depth frame by the decimation magnitude. That is why I added the size_reduction divider.
Secondly, the value coming from depth_frame.get_distance() is exactly the distance along the z-axis, which means we are only measuring one component of the real 3D distance from the object to the camera, so I named it distance_z.
Lastly, aligning the color frame to the depth frame produces some useless black pixels, especially at the edges of objects in the RGB image, which prevents me from using the RGB data the way I want. Therefore, aligning the depth frame to the color frame and then using the color intrinsics works for me.

On the other hand, the points.get_vertices() method does not seem to give more accurate results, at least for me. I modified the data coming from the method and converted it to a color map. As far as I can tell, sometimes something affects it and the entire frame reports nearly zero distance before returning to normal. Overall, it gives results similar to the previous method, but not better. I'm curious whether I'm using the method correctly; my code is below.

 pc = rs.pointcloud()
 pc.map_to(color_frame)
 points = pc.calculate(depth_frame)
 v = points.get_vertices()
 vertices = np.asanyarray(v).view(np.float32).reshape(-1, 3)  # xyz coordinates in metres

 # Euclidean distance of every vertex from the camera origin
 distance_data = np.sqrt(vertices[:, 0]**2 + vertices[:, 1]**2 + vertices[:, 2]**2)
 # The depth frame has been decimated, so reshape to the reduced resolution
 distance_data = distance_data.reshape(FRAME_HEIGHT//size_reduction, FRAME_WIDTH//size_reduction)
 depth_image = cv2.applyColorMap(cv2.convertScaleAbs(distance_data, alpha=255/distance_data.max()), cv2.COLORMAP_JET)

 # Row-major index of the clicked pixel in the decimated frame
 cursor_location = (FRAME_WIDTH//size_reduction) * (mouseY//size_reduction) + mouseX//size_reduction
 dx, dy, dz = vertices[cursor_location]
 euclidean_distance = math.sqrt(dx**2 + dy**2 + dz**2)

While working, my depth frame started giving wrong results at the edge of an object located near the camera. This did not happen before, and I have not done any calibration. I then performed a hardware reset but no luck :(. I am leaving an example image of the issue here. What could be the reason for this?

Thank you for your attention,

Selim

[image: demo]

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 17, 2022

Thanks so much @yvzslmozky for providing such detailed feedback of your experience for the benefit of the RealSense community!

Would it be possible to provide an RGB image of the observed scene please to see whether the object has characteristics that could be making that section difficult for the camera to read depth from correctly?

@ghost
Author

ghost commented Aug 17, 2022

Hi again @MartyG-RealSense

I am providing an image that contains two merged images of the same scene, with and without the object. I also printed the distance value of the same spot on both images to see the difference between the two cases. As I said before, I am sure that the wrong depth frame data did not exist before.

By the way, I cannot see your comments about my usage of points.get_vertices(). If you have any suggestions, I would appreciate them.

Thanks,
Selim

[image: mergedimages]

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 18, 2022

Fluorescent lights such as the ceiling strip lights in the RGB images can be a disruptive influence for 400 Series camera images, as the heated gas inside them flickers at rates that are difficult to see with the human eye. This interference can be reduced by setting an option called power line frequency to a frequency that closely matches the operating frequency of the lights. This may be 60 Hz in North American regions and 50 Hz in European regions.

The power line frequency can be defined in scripting, with values ranging from 0-3 (0 = Disabled, 1 = 50 Hz, 2 = 60 Hz, 3 = Auto).

A C++ example of configuring the option is at #7766 (comment)

[image]

Although I could not find an equivalent Python script, the official pyrealsense2 documentation indicates that it is also defined as an rs_option in Python.

https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.option.html?highlight=power%20line%20frequency#pyrealsense2.option.power_line_frequency
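As a rough, untested sketch of setting that option in Python (assuming the RGB camera is the second sensor reported for the D435i):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start(rs.config())

# On D435i the RGB camera is usually the second sensor reported by the device
color_sensor = profile.get_device().query_sensors()[1]
if color_sensor.supports(rs.option.power_line_frequency):
    color_sensor.set_option(rs.option.power_line_frequency, 1)  # 1 = 50 Hz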

Alternatively, it can be defined in a json camera configuration file. For example:

"controls-color-power-line-frequency": "1",

Regarding the incorrect depth reading for the object edge that is near to the camera, can you confirm if the area with the problem is further than 10 cm (0.1 m) from the camera? If an object is closer to the D435i camera than its minimum depth sensing distance of 0.1 m then the depth image will begin degrading in the part of the image that is below the minimum distance.

@ghost
Author

ghost commented Aug 18, 2022

Hi again,

I tried changing the power line frequency but it does not affect the depth information coming from the sensor. Additionally, the sensor does not support option 3, which you mentioned as 'auto', and gives RuntimeError: xioctl(VIDIOC_S_CTRL) failed Last Error: Numerical result out of range. I am adding my script anyway.

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
pipeline.start(config)
sensor = pipeline.get_active_profile().get_device().query_sensors()[1]  # 0: Stereo Module, 1: RGB Camera, 2: Motion Module
sensor.set_option(rs.option.power_line_frequency, 0)  # 0 = Disabled, 1 = 50 Hz, 2 = 60 Hz

On the other hand, I am aware of the minimum distance at which my depth camera can make correct measurements, and I am not placing objects closer than that minimum distance. As I already mentioned, this problem did not exist before, and I really cannot understand what is causing the depth frame defects.
Also, I noticed something new: the depth frame issue is not only about objects that are near the camera; all of the objects have a similar problem at their left edges. As objects come closer, the defects become bigger. It looks as if the objects' depth information is shifted towards the left.

I just want to ask one more question: how can I entirely reset my camera other than with a hardware reset? I am asking because I am not satisfied with the result of that reset method.

Thanks,
Selim

@MartyG-RealSense
Collaborator

You mentioned earlier that you had not performed calibration. Even if a camera was well calibrated earlier, it could become mis-calibrated and require re-calibration if it received a physical shock such as a hard knock, a drop on the floor or severe vibration. It could also become mis-calibrated if it experienced very high temperature.

As an alternative to performing a hardware reset of the camera, the entire USB port could be reset using a Linux port reset bash script. #8393 provides an example of bash-script code for this.

@ghost
Author

ghost commented Aug 22, 2022

Hey @MartyG-RealSense

If I didn't misunderstand, the bash script you mentioned only simulates unplugging and replugging the USB connection.
Recently I found an issue that I think describes the same problem I have encountered, and you had already commented on it there. My understanding from the link is that the shadow effect comes from the technology used in the D400 series RealSense cameras, so it cannot be solved in code. So far I have overcome many problems thanks to your comments.

Thanks,
Selim

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 22, 2022

A port reset script resets the entire port instead of just the camera hardware. While the end effect may be similar to an unplug-replug, a port reset is useful in situations where the camera cannot be detected: a hardware reset of the camera via the RealSense SDK cannot occur if the camera is not detected, whereas a port reset script does not care whether the camera can be found.

The shadow effect (occlusion) is discussed in Intel's Projection, Texture Mapping and Occlusion white-paper document. In the 'occlusion invalidation' section of that paper linked to below, it states, "The reason we are seeing only horizontal occlusion with Intel RealSense D400 product-line is because in these cameras depth and color sensors are mounted horizontally next to each other. In the Intel RealSense LiDAR Camera L515, for instance, you would see vertical occlusion since its color camera is mounted on top of the lidar receiver".

https://dev.intelrealsense.com/docs/projection-texture-mapping-and-occlusion-with-intel-realsense-depth-cameras#4-occlusion-invalidation

@ghost
Author

ghost commented Aug 23, 2022

Thank you for your attention @MartyG-RealSense
The issue is closed.

Selim

@ghost ghost closed this as completed Aug 23, 2022
@MartyG-RealSense
Collaborator

You are very welcome, @yvzslmozky - thanks very much for the update!
