D435i Distance and Size Calculation Errors #10779
Hi @yvzslmozky Are you aligning depth to color and obtaining the distance with the get_distance() instruction, please? If you are, then there is a known issue with this method where distance is accurate at the center of the image but increasingly drifts when moving out from the center towards the outer regions of the image in the X or Y axes. #10224 (comment) has links to references about this.

Using the intrinsics of the stream that depth is being aligned to should provide more accurate measurement results in an aligned image than using depth intrinsics, as described at #3735 (comment). If depth is being aligned to color, then the color intrinsics would be used.

An alternative means of obtaining real-world distance that is more accurate than performing alignment and deprojection is to map color to depth coordinates with map_to, generate a point cloud with pc.calculate, and then retrieve the 3D coordinates from the vertices of the pointcloud with points.get_vertices(). The two approaches of deprojection and points.get_vertices are discussed by RealSense team members at #4315
Hi @MartyG-RealSense, Thanks, Selim
Thanks very much @yvzslmozky for the update! When aligning depth to color or color to depth, the SDK's align processing block makes automatic adjustments for differences in resolution and FOV size between the two streams.
I have been working on the issues that I mentioned above. The good news is that depth, in fact Euclidean, measurements are nearly perfect, especially at short distances. For this, I used the rs.rs2_deproject_pixel_to_point() method.
Here, I want to point out a few things. I am choosing the pixel from the RGB frame by clicking with the mouse and using the cursor coordinates obtained through the cv2.setMouseCallback() method. On the other hand, the points.get_vertices() method does not seem to give more accurate results, at least for me. I took the data coming from the method and converted it to a color map. As far as I can tell, it is sometimes affected by something: the entire frame briefly reports nearly zero distance and then returns to normal. Overall, it gives results similar to the previous method, but not better at all. I'm curious about your opinion on whether I'm using the method correctly, and I am providing my code below.
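For readers of the thread, here is a minimal sketch (not the poster's actual script, which is not reproduced here) of the usual way to turn points.get_vertices() into per-pixel 3D coordinates; the function name and the NumPy view/reshape idiom follow the standard pyrealsense2 examples and are assumptions.

```python
import numpy as np

def vertex_at_pixel(points, depth_frame, u, v):
    """Return the (x, y, z) vertex, in metres, for depth pixel (u, v).

    `points` is the result of rs.pointcloud().calculate(depth_frame).
    The structured vertex buffer is viewed as a flat float32 array and
    reshaped to one (x, y, z) row per depth pixel, row-major.
    """
    w = depth_frame.get_width()
    verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    return verts[v * w + u]
```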
While working, my depth frame started giving wrong results at the edge of an object located near the camera. The problem did not exist before, and I did not do any calibration. I then performed a hardware reset, but no luck :(. I am leaving an example image of the issue here. What could be the reason for this? Thank you for your attention, Selim
Thanks so much @yvzslmozky for providing such detailed feedback of your experience for the benefit of the RealSense community! Would it be possible to provide an RGB image of the observed scene, please, to see whether the object has characteristics that could be making that section difficult for the camera to read depth from correctly?
Hi again @MartyG-RealSense I am providing an image that contains two merged images of the same scene, with an object and without. I also printed the distance value of the same spot on both images to see the difference between the cases. As I said before, I am sure that the wrong depth frame data did not exist before. By the way, I cannot see your comments about my usage of points.get_vertices(). If you have any suggestions, I will appreciate that. Thanks,
Fluorescent lights such as the ceiling strip lights in the RGB images can be a disruptive influence on 400 Series camera images, as the heated gas inside them flickers at rates that are difficult for the human eye to see. This interference can be reduced by setting an option called power line frequency to a frequency that closely matches the operating frequency of the lights: typically 60 Hz in North American regions and 50 Hz in European regions. The power line frequency can be defined in scripting, with values ranging from 0-3 (0 = Disabled, 1 = 50 Hz, 2 = 60 Hz, 3 = Auto). A C++ example of configuring the option is at #7766 (comment). Although I could not find an equivalent Python script, the official pyrealsense2 documentation indicates that it is also exposed as an rs_option in Python. Alternatively, it can be defined in a json camera configuration file.
Regarding the incorrect depth reading for the object edge near the camera, can you confirm whether the problem area is further than 10 cm (0.1 m) from the camera? If an object is closer to the D435i camera than its minimum depth sensing distance of 0.1 m, the depth image will begin degrading in the part of the image that is below the minimum distance.
Hi again, I tried changing the power line frequency, but it doesn't affect the depth information coming from the sensor. Additionally, the sensor doesn't support option 3, which you mentioned as 'Auto', and gives RuntimeError: xioctl(VIDIOC_S_CTRL) failed Last Error: Numerical result out of range. I am adding my script anyway.
On the other hand, I am aware of the minimum distance at which my depth camera can measure correctly, and I am not placing objects closer than that. As I already mentioned, this problem did not exist before, and I really cannot understand what is causing the depth frame defects. I just want to ask one more question: how can I entirely reset my camera, other than a hardware reset? I am asking because I am not satisfied with the result of that reset method. Thanks,
You mentioned earlier that you had not performed calibration. Even if a camera was well calibrated earlier, it could become mis-calibrated and require re-calibration if it received a physical shock such as a hard knock, a drop on the floor, or severe vibration. It could also become mis-calibrated if it experienced very high temperature. As an alternative to performing a hardware reset of the camera, the entire USB port could be reset using a Linux port reset bash script. #8393 provides an example of bash-script code for this.
If I didn't misunderstand, the bash script you mentioned only simulates plugging and unplugging the USB connection. Thanks,
A port reset script resets the entire port instead of just the camera hardware. So whilst the end effect may be similar to an unplug-replug, a port reset is useful in situations where the camera cannot be detected: a hardware reset of the camera via the RealSense SDK cannot occur if the camera is not detected, whereas it would not matter to a port reset script if the camera could not be found.

The shadow effect (occlusion) is discussed in Intel's Projection, Texture Mapping and Occlusion white-paper document. In the 'occlusion invalidation' section of that paper, it states: "The reason we are seeing only horizontal occlusion with Intel RealSense D400 product-line is because in these cameras depth and color sensors are mounted horizontally next to each other. In the Intel RealSense LiDAR Camera L515, for instance, you would see vertical occlusion since its color camera is mounted on top of the lidar receiver".
Thank you for your attention, @MartyG-RealSense Selim
You are very welcome, @yvzslmozky - thanks very much for the update! |
Wrong Distance Data in the Depth Frame
I am working on a robotics project, and the D435i will be the eyes of that robot. I want to estimate the distance from the robot to any obstacles and the sizes of those obstacles. I am doing post-processing and then obtaining the distance data, but there are some inaccuracies when moving away from the center of the frame. For example, the D435i measures the distance to an object as 400 mm if I locate it at the center of the frame, but measures 350 mm if the object is located at the left side of the frame, and the inaccuracy increases as the distance increases.
On the other hand, while calculating the size of an object, I follow the formula given in the link. I use focal length, sensor height, and sensor width of 1.93 mm, 1.44 mm, and 2.54 mm, respectively (848x480). I found the values by searching online, but I don't think the calculations give me the correct result.
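The formula referred to is the standard pinhole-camera size relation. A sketch of it is below; the 1.93 mm / 1.44 mm figures are the poster's own unverified values, not confirmed D435i specifications.

```python
def object_height_mm(distance_mm, object_height_px,
                     focal_length_mm=1.93, sensor_height_mm=1.44,
                     image_height_px=480):
    """Pinhole-model estimate of real object height in millimetres.

    real_height = distance * object_height_px * sensor_height
                  / (focal_length * image_height_px)
    """
    return (distance_mm * object_height_px * sensor_height_mm) / (
        focal_length_mm * image_height_px)
```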
I just want to know the reason for the distance calculation errors and the possible solutions to them.
Also, what can I do while calculating the size of an object in 3D with more accurate values/formulas?
Thanks in advance,
Selim