Horizontal Occlusion while aligning Color To Depth #8501
Comments
Hi @SupreetKurdekar No need for apologies. :) In reply to your questions:
The areas of light grey color on the drill resemble an effect that I encountered when scanning a small object (a toothpick) using rs-align, the RealSense SDK's C++ version of an example align program and also in the 3D point cloud mode of the RealSense Viewer. I was able to reduce the impact of this effect by changing the depth unit scale value of the camera from its default value of 0.001 to 0.0001. The link to the toothpick scanning case is below:
The case in the link below has additional guidance on programming on-chip calibration in Python.
If you would like the advantages of a large field of view and the same field of view size on both sensors, then the D455 model can provide that. The D455 also has 2x the accuracy over distance of the D435 models (it has the same depth accuracy at 6 meters that the D435i has at 3 meters), though this may not be a significant factor in your particular close-range sensing application. Overall though, the images you obtained from the D435i are good quality. Also, the D435 models have a minimum depth sensing distance of 0.1 meters, whilst the D415's default minimum depth distance is 0.3 meters and the D455's is 0.4 meters. So for close-range scanning, your existing D435i model is advantageous.
Hi @MartyG-RealSense ,
I was also able to get quite good point clouds using the link below.
Below is an image of my point cloud. While this is enough for my application, notice that the sharpness of edges and corners is lost. Do you think this is the limit of the camera's resolution, or might it be due to the algorithm I am using? In any case, I am closing the issue for now. Thank you very much for your quick response.
If you have 'blobby' edges and corners then the Medium Density visual preset may help. It provides a balance between accuracy and 'fill rate' (the amount of image detail). A comparison of the effect that different presets can have on edges and corners can be seen in the link below; the Visual Preset documentation suggests that the Default preset is good for clean edges and for reducing point cloud spray.

https://dev.intelrealsense.com/docs/d400-series-visual-presets#section-preset-table

You could also test whether the speckling on the image is due to the semi-random infrared dot pattern that the camera projects onto the scene by setting the Emitter Enabled option's drop-down menu to Off to disable the IR emitter. The IR pattern that the projector casts over the surface of objects provides a texture source for the camera to analyze for depth information. This is useful for sensing depth detail on surfaces with low or no texture, such as doors, walls, desks and tables. Disabling the IR emitter can therefore reduce the detail in the depth image. You may be able to compensate for some of this detail loss by maximizing the Laser Power option's value: increasing laser power can make a depth image less sparse in detail, whilst reducing it makes the image more sparse (more holes and gaps).
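The emitter and laser power settings above can also be changed from Python rather than the Viewer. The sketch below is my own minimal example, not an official one; it assumes pyrealsense2 is installed and a D400-series camera is connected, and uses the SDK's `rs.option.emitter_enabled` and `rs.option.laser_power` options:

```python
# Hedged sketch: toggle the IR emitter and adjust laser power with
# pyrealsense2. Assumes the package is installed and a D400-series
# camera is attached; without them, only the helper below is defined.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # SDK not installed; the sketch is illustrative only

EMITTER_OFF = 0.0  # value for rs.option.emitter_enabled: projector off
EMITTER_ON = 1.0   # projector on

def configure_depth_sensor(sensor, emitter_on, laser_power=None):
    """Apply emitter and (optionally) laser power settings to a
    pyrealsense2 depth sensor, clamping power to the supported range."""
    sensor.set_option(rs.option.emitter_enabled,
                      EMITTER_ON if emitter_on else EMITTER_OFF)
    if laser_power is not None:
        rng = sensor.get_option_range(rs.option.laser_power)
        sensor.set_option(rs.option.laser_power,
                          max(rng.min, min(rng.max, laser_power)))

if rs is not None:
    devices = rs.context().query_devices()
    if devices.size() > 0:
        depth_sensor = devices[0].first_depth_sensor()
        # Projector off, to check whether the speckling is the dot pattern:
        configure_depth_sensor(depth_sensor, emitter_on=False)
```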
Issue Description
Application:
I'm currently running the align-color example, setting both depth and color streams to 848x480 resolution.
Issue:
When aligning Color to Depth, some depth pixels get repeated colors. (Red circle)
Typically at the right edges of objects.
There is also a region to the left of all objects which has very large depth values. Is this shadow expected, or should it be distributed evenly on both the left and right of the object?
https://dev.intelrealsense.com/docs/projection-texture-mapping-and-occlusion-with-intel-realsense-depth-cameras
Following the above link, I wasn't expecting this to happen.
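My rough sketch of the parallax geometry, in case my understanding is wrong (the focal length and sensor-offset numbers below are my own guesses, not the camera's calibrated values):

```python
# Why an occlusion "shadow" would be one-sided: the color sensor sits
# a fixed baseline to one side of the depth sensor, so background that
# the color sensor cannot see always falls on the same side of a
# foreground edge. Values below are illustrative assumptions only.

def occlusion_width_px(fx_px, baseline_m, z_near_m, z_far_m):
    """Approximate width in pixels of the strip of background (at
    z_far) hidden behind a foreground edge (at z_near) as seen from a
    sensor offset by the given baseline."""
    return fx_px * baseline_m * (1.0 / z_near_m - 1.0 / z_far_m)

# Guessed values: ~425 px focal length at 848x480, ~15 mm
# depth-to-color offset, object at 0.3 m in front of a 1.0 m background:
print(round(occlusion_width_px(425.0, 0.015, 0.3, 1.0), 1))  # ~14.9 px
```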
Q1
Could this be because of the extrinsic calibration between the color and depth cameras?
Q2
How do I perform the extrinsic calibration in pyrealsense?
The on-chip-calibration example was a little bit unclear.
Q3
For my given application, would it be better to use the D415 camera?
I do apologize for the long post, but I'm really new to this and I have a lot of questions.