
Horizontal Occlusion while aligning Color To Depth #8501

Closed
SupreetKurdekar opened this issue Mar 5, 2021 · 3 comments

@SupreetKurdekar

SupreetKurdekar commented Mar 5, 2021


Required Info
Camera Model: D435i
Firmware Version: unsure
Operating System & Version: Linux (Ubuntu 18)
Kernel Version (Linux Only): 5.4.0-66-generic x86_64
Platform: PC
SDK Version: pyrealsense2
Language: Python
Segment: AR

Issue Description

Application:

  1. Take 3D scans and build point clouds of individual objects at close range, about 0.5 m.
    • The objects range in size from 5-20 cm.
    • They are interior parts of a hand drill.
  2. Segment these objects in a scene where they occlude each other.

I'm currently running the align-color example, setting both depth and color streams to 848x480 resolution.
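For reference, a minimal pyrealsense2 sketch of that setup, aligning color to depth at 848x480 (the stream formats and 30 fps frame rate are assumptions; only the resolution is specified above):

```python
import pyrealsense2 as rs

# Both streams at 848x480, as described above.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
pipeline.start(config)

# rs.align(rs.stream.depth) maps the color frame onto the depth
# frame, i.e. color-to-depth alignment as discussed in this issue.
align = rs.align(rs.stream.depth)

try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth_frame = aligned.get_depth_frame()
    color_frame = aligned.get_color_frame()
finally:
    pipeline.stop()
```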

Issue:
When aligning Color to Depth, some depth pixels get repeated colors. (Red circle)
[Image: color aligned to depth; the red circle marks depth pixels with repeated colors]

Typically at the right edges of objects.

There is also a region to the left of every object that has very large depth values. Is this shadow expected, or should it be distributed evenly around both the left and right sides of the object?

https://dev.intelrealsense.com/docs/projection-texture-mapping-and-occlusion-with-intel-realsense-depth-cameras
Following the above link, I wasn't expecting this to happen.
Q1
Could this be because of the extrinsic calibration between the color and depth cameras?

Q2
How do I perform the extrinsic calibration in pyrealsense2?
The on-chip-calibration example was a little bit unclear.

Q3
For my given application, would it be better to use the D415 camera?

I do apologize for the long post, but I'm really new to this and I have a lot of questions.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 5, 2021

Hi @SupreetKurdekar No need for apologies. :) In reply to your questions:

  1. Ideally, an occlusion silhouette will be distributed evenly all around the outer edges of a surface instead of being biased to one particular side, such as the left. A re-calibration of the camera may improve the image if there is a bias to one side of the object. Judging by the non-drill objects in the scene and the outline of your arm, though, your occlusion is actually evenly distributed all around.

The areas of light grey color on the drill resemble an effect that I encountered when scanning a small object (a toothpick) with rs-align, the RealSense SDK's C++ align example, and also in the 3D point cloud mode of the RealSense Viewer.

[Image: toothpick scan showing the light grey edge effect]

I was able to reduce the impact of this effect by changing the depth unit scale value of the camera from its default value of 0.001 to 0.0001.

The link to the toothpick scanning case is below:

#8228
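For anyone following this in Python: the depth unit scale is exposed as rs.option.depth_units on the depth sensor. A minimal sketch of the 0.001 → 0.0001 change described above (taking the first enumerated device is an assumption):

```python
import pyrealsense2 as rs

# Grab the depth sensor of the first connected camera.
ctx = rs.context()
device = ctx.query_devices()[0]
depth_sensor = device.first_depth_sensor()

# The default depth unit is 0.001 m (1 mm). A value of 0.0001 m gives
# finer depth quantization at close range, at the cost of maximum range.
if depth_sensor.supports(rs.option.depth_units):
    depth_sensor.set_option(rs.option.depth_units, 0.0001)
```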

  2. The On-Chip calibration can be accessed and initiated from Python.

https://dev.intelrealsense.com/docs/self-calibration-for-depth-cameras#section-appendix-c-on-chip-calibration-python-api

The case in the link below has additional guidance for programming on-chip calibration instructions in Python.

#7804
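As a rough Python illustration of the Appendix C pattern from the self-calibration white paper (treat this as a sketch: the run_on_chip_calibration overloads and JSON keys can differ between SDK versions, and the parameter values shown are the white paper's defaults):

```python
import json
import pyrealsense2 as rs

# The white paper runs on-chip calibration on a 256x144 @ 90 fps depth stream.
pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 256, 144, rs.format.z16, 90)
device = pipe.start(cfg).get_device()

cal = rs.auto_calibrated_device(device)

# Calibration parameters, following the Appendix C example.
args = json.dumps({'speed': 3, 'scan parameter': 0, 'data sampling': 0})
new_table, health = cal.run_on_chip_calibration(args, 15000)  # 15 s timeout
print('health:', health)  # a magnitude closer to 0 is better

cal.set_calibration_table(new_table)  # apply to the running device
cal.write_calibration()               # persist to the camera's flash
pipe.stop()
```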

  3. The D415 has a smaller field of view than the D435i, so it would see less of the outer regions of the scene. It has the same field of view size on both the RGB and depth sensors though, which can be an advantage when performing depth-to-color alignment. It also has an optimal depth accuracy resolution of 1280x720, compared to 848x480 on the D435i.

If you would like the advantages of a large field of view and the same field of view size on both sensors, the D455 model can provide that. The D455 also has 2x the accuracy over distance of the D435 models (it has the same depth accuracy at 6 meters that the D435i has at 3 meters), though this may not be a significant factor in your particular close-range sensing application.

Overall though, the images you have obtained from the D435i are of good quality. Also, the D435 models have a minimum depth sensing distance of 0.1 meters, whilst the D415's default minimum depth distance is 0.3 meters and the D455's is 0.4 meters. So for close-range scanning, your existing D435i model is advantageous.

@SupreetKurdekar
Author

Hi @MartyG-RealSense ,
Thank you for your response.

  1. You were right, my image quality is quite good.
    On viewing it in the RealSense Viewer I got the following aligned image.
    [Image: aligned frame from the RealSense Viewer]
    The shadow is equally divided on both sides.

I was also able to get quite good pointclouds using the link below.
https://github.com/F2Wang/ObjectDatasetTools

  2. I have also decided to stick with the D435i due to the Min Z it allows me.

Below is an image of my pointcloud
[Image: point cloud of a scanned part]

While this is enough for my application, notice that the sharpness of edges and corners is lost. Do you think this is the limit of the camera's resolution, or might it be due to the algorithm I am using?

In any case I am closing the issue for now.

Thank you very much for your quick response.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 6, 2021

If you have 'blobby' edges and corners then the Medium Density visual preset may help. It provides a balance between accuracy and 'fill rate' (the amount of image detail).

[Image: RealSense Viewer preset drop-down with Medium Density selected]
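The same preset can also be applied from Python rather than through the Viewer; a minimal sketch (starting the pipeline with its default stream configuration is an assumption):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()  # default stream configuration
depth_sensor = profile.get_device().first_depth_sensor()

# Medium Density trades some accuracy for a higher fill rate.
if depth_sensor.supports(rs.option.visual_preset):
    depth_sensor.set_option(rs.option.visual_preset,
                            int(rs.rs400_visual_preset.medium_density))
```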

A comparison of the effect that different presets may have on the edges and corners can be seen in the link below:

#7021 (comment)

The Visual Preset documentation suggests that the Default preset is good for clean edges and reduction of point cloud spray.

https://dev.intelrealsense.com/docs/d400-series-visual-presets#section-preset-table

You could also test whether the speckling on the image is due to the semi-random infrared dot pattern that the camera projects onto the scene by setting the Emitter Enabled option's drop-down menu to Off to disable the IR emitter.

[Image: Emitter Enabled option set to Off in the RealSense Viewer]

The IR pattern that the projector casts over the surface of objects provides a texture source for the camera to analyze for depth information. This is useful for sensing depth detail from surfaces with low texture or no texture such as doors, walls, desks and tables. Disabling the IR emitter can therefore reduce the detail in the depth image. You may be able to compensate for some of this detail loss though by maximizing the Laser Power option's value, as increasing Laser Power can make a depth image less sparse in detail, whilst reducing laser power makes it more sparse (more holes and gaps).

[Image: Laser Power option in the RealSense Viewer]
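Both controls are also available programmatically. A minimal pyrealsense2 sketch of the emitter and Laser Power adjustments described above (taking the first enumerated device is an assumption):

```python
import pyrealsense2 as rs

ctx = rs.context()
depth_sensor = ctx.query_devices()[0].first_depth_sensor()

# Test: disable the IR projector to check whether the speckling
# follows the projected dot pattern.
depth_sensor.set_option(rs.option.emitter_enabled, 0)

# ...or keep the emitter on and raise Laser Power toward its maximum
# to make the depth image less sparse.
depth_sensor.set_option(rs.option.emitter_enabled, 1)
if depth_sensor.supports(rs.option.laser_power):
    power_range = depth_sensor.get_option_range(rs.option.laser_power)
    depth_sensor.set_option(rs.option.laser_power, power_range.max)
```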
