
D435 Camera, The edge of object problem #10133

Closed
GHCK57 opened this issue Jan 6, 2022 · 67 comments

Comments

@GHCK57

GHCK57 commented Jan 6, 2022


Required Info
Camera Model: D400 (D435)
Firmware Version: 05.12.15.50
Operating System & Version: Windows 10
SDK Version: 2.x
Language: Python
Segment: Others

Issue Description

Hi,

While working with the RealSense D435 camera I ran into a problem. When the camera is close to the object, approximately 35-40 cm, the depth frame looks good. But when I move the camera farther away, to approximately 70 cm, the edges of the object are deformed.

I use the opencv_pointcloud_viewer.py example shared by RealSense to get the point cloud data. The code is here:

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_pointcloud_viewer.py

This is what I want to measure: there is a surface, and there is an object on this surface. I am only looking at the edges.

WIN_20220106_12_15_12_Pro

This photo shows the .ply file captured with the camera 70 cm away.

70cm

And this one was captured from 35 cm away.

35cm

As you can see, there is a clear difference in the edges of the object.
I have to work from 70 cm away. Why does this difference occur? How can I deal with this problem? I just want flat edges.

I hope you understood me. I am waiting for your help. Thanks a lot.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 6, 2022

Hi @GHCK57 Whilst the box looks like it has more edges detected at 35 cm, the blue edges actually likely represent empty areas of missing depth data rather than depth detail of the box edges. The same is true for the increased number of blue holes on the table surface.

It also looks like at 70 cm range, the camera is having difficulty distinguishing the plain white edges of the box from the similar plain white texture of the table.

You may achieve a better quality image if you ensure that the IR emitter is enabled in your Python script. When the emitter is enabled, it casts a semi-random pattern of dots onto surfaces in the scene that can be used as a 'texture source' for analyzing low-textured or non-textured surfaces (tables, doors, walls, etc) for depth detail.

You could try integrating the Python code below into opencv_pointcloud_viewer.py, which enables the IR emitter with the set_option(rs.option.emitter_enabled, 1) instruction.

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
pipeline_profile = pipeline.start(config)
device = pipeline_profile.get_device()
depth_sensor = device.query_sensors()[0]
if depth_sensor.supports(rs.option.emitter_enabled):
    depth_sensor.set_option(rs.option.emitter_enabled, 1)

@GHCK57
Author

GHCK57 commented Jan 6, 2022

Hi MartyG,

Thanks for your reply. I inserted these lines into opencv_pointcloud_viewer.py after the pipeline.start(config) call:

device = pipeline_profile.get_device()
depth_sensor = device.query_sensors()[0]

if depth_sensor.supports(rs.option.emitter_enabled):
    depth_sensor.set_option(rs.option.emitter_enabled, 1)

I also used a box of a different color (the RealSense camera's original box). But when I capture the point cloud data from different distances, the problem persists.

This photo shows how the camera views the object.

WIN_20220106_14_51_25_Pro

This is how opencv_pointcloud_viewer.py shows the object.

image

-- Camera 70 cm away from the object --

image

-- Camera 35 cm away from the object --
image

Still, 35 cm is better than 70 cm. What is the problem?

@GHCK57
Author

GHCK57 commented Jan 6, 2022

I also used the call below to enable the infrared stream, but it didn't change anything.

config.enable_stream(rs.stream.infrared, rs.format.y8, 30)

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 6, 2022

It appears that enabling the IR emitter has at least closed up most of the holes in the table surrounding the box, so that is an improvement.

I am reminded of another Python box scanning case at #6857 (comment) where the edge of the box seemed to be pointing diagonally like your 70 cm image above instead of 90 degrees straight downwards. Their eventual solution - shared in #6857 (comment) - was to create a custom json camera configuration file and load it into their project to adjust the camera settings to produce a more accurate box pointcloud.

The increased bumpiness of the cloud at 70 cm compared to 35 cm is a known phenomenon referred to as 'waviness', where the surface becomes increasingly wavy as the camera moves further away from the observed surface. A RealSense user in #1375 (comment) found that using a custom json could reduce some of the waviness.

A RealSense team member also suggested in #1375 (comment) that applying post-processing filtering such as edge-preserving and temporal averaging could help to reduce waviness. These concepts are covered in Intel's post-processing guide in the link below. Performing a page 'find' operation for the terms edge-preserving and temporal averaging can help to find relevant sections of the guide quickly.

https://dev.intelrealsense.com/docs/depth-post-processing
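As a minimal sketch (not from the guide itself, and assuming the standard pyrealsense2 filter classes), the edge-preserving spatial filter and the temporal-averaging filter could be chained per frame like this; the tuning values are illustrative:

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

spatial = rs.spatial_filter()    # edge-preserving smoothing
temporal = rs.temporal_filter()  # averaging across recent frames

# Illustrative tuning values; the defaults are a reasonable starting point.
spatial.set_option(rs.option.filter_smooth_alpha, 0.5)
spatial.set_option(rs.option.filter_smooth_delta, 20)

try:
    for _ in range(30):
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # Feed the filtered frame, not the raw one, to the pointcloud stage.
        filtered = temporal.process(spatial.process(depth))
finally:
    pipeline.stop()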

@GHCK57
Author

GHCK57 commented Jan 6, 2022

Hi @MartyG-RealSense ,

I had already seen and read topics #6857 and #1375, and I tried "ShortrangePreset.json", but it didn't solve my problem. In addition to those topics, I read the white paper you shared with me: https://dev.intelrealsense.com/docs/depth-post-processing . Before sharing the photos here, I exported them as .ply files and viewed them in MeshLab, so I don't think there is a viewer problem.

I tried many configurations in the Intel RealSense Viewer v2.48 GUI. Whatever I did, I could not get a point cloud as precise as the one captured 35 cm from the object.

This is a different configuration, but the camera is still trying to merge the edge of the object with the surface. (Camera mounted 70 cm away from the object.)

image

depth frame

image

color frame

image

Infrared frame

image

@MartyG-RealSense
Collaborator

Applying a post-processing filter that has a hole-filling function will likely help to close up the black area surrounding the box. You can test this in the RealSense Viewer by going to the Post-Processing category of the options side-panel and left-clicking on the red icon beside the Hole-Filling filter (which indicates a Disabled status) to turn it to blue (enabled).

image

If you find that hole-filling works well for you, you could implement hole-filling in your Python application by applying the Spatial post-processing filter.

image
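As a hedged sketch of that suggestion (using the spatial filter's built-in hole-filling option in pyrealsense2; the mode value here is illustrative):

import pyrealsense2 as rs

spatial = rs.spatial_filter()
# holes_fill mode for the spatial filter: 0 = disabled, higher values
# fill progressively larger holes (2 is a moderate setting).
spatial.set_option(rs.option.holes_fill, 2)

# Then, per frame:
# filtered_depth = spatial.process(depth_frame)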

@GHCK57
Author

GHCK57 commented Jan 6, 2022

Thanks for your reply, but I don't need a hole-filling filter; it is not suitable for my application. I really don't know what the problem is. The depth frame is good with the camera 35 cm away, but at a distance of 70 cm it measures incorrectly.

I just want this depth frame when I put the camera farther from the object:

image

@MartyG-RealSense
Collaborator

How does the box look in the RealSense Viewer if you change the Depth Units option from its default of 0.001 to 0.0001? You can left-click on the pencil icon beside the option to type the value in on the keyboard instead of using the adjustment slider.

image
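For reference, a minimal sketch of making the same change from Python (assuming the depth sensor exposes the depth_units option, as the D400 series does):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

# Depth units are metres per raw depth value. 0.0001 gives finer
# granularity but caps the range at roughly 6.5 m with 16-bit depth.
if depth_sensor.supports(rs.option.depth_units):
    depth_sensor.set_option(rs.option.depth_units, 0.0001)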

@GHCK57
Author

GHCK57 commented Jan 7, 2022

Hi @MartyG-RealSense

I had already tried that too.

For Depth Units --> 0.001

image

For Depth Units --> 0.0001

image

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 7, 2022

If your goal is to measure the box then the SDK Python example program box_dimensioner_multicam may meet your needs. It works with a single camera, despite the program's name.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/box_dimensioner_multicam

In that program, the box is placed upon a print-out of a chessboard to calibrate the camera position against the board. A bounding box is then automatically placed around the box on an RGB image, and volume measurements for the box, calculated from the depth data, are displayed.

image

@GHCK57
Author

GHCK57 commented Jan 7, 2022

Hi @MartyG-RealSense

I have worked with this code. I need 3D reconstruction while calculating the dimensions of the box, which is why I need a good depth frame. Because the point cloud data taken from 70 cm is bad, everything goes wrong. As you can see from the photos I shared before, the camera sees the edge of the box and the surface as the same. This problem makes calculating the volume impossible. Please, I need more advice about this. Is it caused by the IR pattern emitted from the camera, or by something else?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 7, 2022

If your project is able to take measurements manually instead of automatically, then you could try using the RealSense Viewer's Measure tool in its 3D pointcloud mode. It enables you to drag out a line between two points on the pointcloud image and receive a measurement of the real-world distance between those two points, or to hold the Shift key to connect more than two points and measure the area.

image

If you prefer to continue with the current method then I would recommend testing with the camera at a straight-ahead or straight-down angle (0 or 90 degrees) instead of the camera being tilted, as indicated by the tilted angle of the table. The angle of the camera may be distorting the point cloud, so having the camera at a straight angle will confirm whether or not the camera angle is the cause.

@GHCK57
Author

GHCK57 commented Jan 7, 2022

I did what you said. I put the camera 90 degrees above the object, approximately 60 cm away. The configuration did not change; everything else is the same. It looks like this. The camera still connects the surface of the table and the edge of the object.

image

When I put the camera 40 cm above the box, everything is fine. That is what I want. But I have to work from a distance of 70-75 cm.

image

@MartyG-RealSense
Collaborator

I attempted to replicate the test in the Viewer with a pack of drinks bottles but did not experience the edge loss at 70 cm.

35 cm

image

70 cm

image

Could you check whether disabling the two GLSL options in the Viewer's settings before taking the scan improves your edge detection, please? This can be done with the instructions at #8110 (comment)

Also, try completely disabling all post-processing filters that the Viewer applies by default by left-clicking on the blue icon beside Post-Processing to turn the icon to red and turning off all filters.

image

@GHCK57
Author

GHCK57 commented Jan 7, 2022

Hi again @MartyG-RealSense

First of all, your data taken from 70 cm is perfect.

I tried disabling the GLSL options. It didn't work. But I discovered something. I will try to explain it step by step, because I don't know what changed. I need your help to figure out the solution.

First, I want to share the photo taken after the GLSL options were disabled. It seems nothing changed.

image

Then, while looking through the #8110 topic, I saw your comment: #8110 (comment)

I checked my tools under Program Files (x86) --> Intel RealSense SDK 2.0 and tried running the exe files. After starting rs-measure.exe I saw the difference: the sides of the box started to look good. But I didn't change anything; all parameters are the same. I just ran rs-measure.exe. The picture below was taken from the camera after that.

There is still some noise, but it is better than before.
image

What changed?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 7, 2022

The main difference that comes to mind about rs-measure is that instead of performing the usual depth-to-color alignment, it performs the much rarer color-to-depth alignment. On the D435 and D435i models, which have a smaller field of view (FOV) on the color sensor than on the depth sensor, the overall image is therefore stretched to fit the full size of the window without letterbox borders at the edges, as the color FOV resizes to match the larger depth FOV.
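As a brief sketch of the two alignment directions in pyrealsense2 (the stream passed to rs.align is the one the other stream is resampled to):

import pyrealsense2 as rs

# Depth-to-color alignment (the common direction used by most examples):
align_to_color = rs.align(rs.stream.color)

# Color-to-depth alignment (the direction rs-measure uses):
align_to_depth = rs.align(rs.stream.depth)

# Per frameset:
# aligned_frames = align_to_depth.process(frames)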

@GHCK57
Author

GHCK57 commented Jan 7, 2022

Okay. But if I hardware-reset the camera using the RealSense Viewer, everything goes back to how it was. What should I do next to get the correct depth frame?

image

@MartyG-RealSense
Collaborator

Is there any positive change in the Viewer image if you maximize the Laser Power option to '360' instead of the default '150'? Increasing laser power makes the IR dot pattern more visible to the camera and can increase the amount of depth detail on an image.

image
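A minimal sketch of setting Laser Power from Python (querying the option's range instead of hard-coding the maximum, which is 360 on this model):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

if depth_sensor.supports(rs.option.laser_power):
    # Query the valid range rather than hard-coding a value.
    power_range = depth_sensor.get_option_range(rs.option.laser_power)
    depth_sensor.set_option(rs.option.laser_power, power_range.max)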

@GHCK57
Author

GHCK57 commented Jan 10, 2022

Hi @MartyG-RealSense

Sorry for my late reply. I generally set the Laser Power option to 330, but as far as I can see there is no visible difference.

@MartyG-RealSense
Collaborator

I looked at the code of rs-measure to see what might account for the image improving when the program is run but reverting to the worse image after a hardware reset. When rs-measure is run, it loads and applies the High Accuracy camera configuration preset. When a camera reset is performed, the camera would return to its default configuration.

You could therefore try adding a preset load instruction for 'High Accuracy' to your opencv_pointcloud_viewer.py script using the Python code in #2577 (comment), or instead selecting the preset from the presets drop-down menu in the Viewer.

image
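A hedged sketch of loading the preset by name (enumerating the visual_preset option's value descriptions rather than assuming a fixed numeric index, since the index-to-name mapping can differ between firmware versions):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

# Walk the preset indices and match on the human-readable name.
preset_range = depth_sensor.get_option_range(rs.option.visual_preset)
for i in range(int(preset_range.max) + 1):
    name = depth_sensor.get_option_value_description(rs.option.visual_preset, i)
    if name == "High Accuracy":
        depth_sensor.set_option(rs.option.visual_preset, i)
        break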

@GHCK57
Author

GHCK57 commented Jan 10, 2022

It looks pretty good. After a hardware reset, before opening the stream, I chose the High Accuracy camera configuration preset. The box looks like this.

image

There is almost no edge connection. But if you look carefully, you can still see it trying to connect the edge of the object and the surface of the table.

image

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 10, 2022

How does it look if you change "High Accuracy" to "Medium Density"?

The Medium Density preset provides a good balance between accuracy and the amount of detail on the image.

@GHCK57
Author

GHCK57 commented Jan 10, 2022

Medium Density Preset

image

@MartyG-RealSense
Collaborator

Is the above image from 70-75 cm distance?

@GHCK57
Author

GHCK57 commented Jan 11, 2022

Yes it is.

@GHCK57
Author

GHCK57 commented Jan 11, 2022

Hi @MartyG-RealSense

This is color frame taken by camera.

image

This is point cloud data.

image

I captured the point cloud of that box and fitted planes to two of its faces (the upper face and the front face). You can see this in the photo. The yellow plane is fitted to the upper face of the box. It is good.

The red plane is fitted to the front face of the box. As you can see, there is a problem on that face. I think it is caused by the sharp edge of the box. For some reason the camera sees it as rounded.

image

image

image

Why does the sharp edge appear as if there were a radius there? Is that caused by alignment or not?

image

And on the bottom left and right corners there are some points. These points are not valid for the box. As you can see, their color is white; I think they are the texture of the table surface. Why is it like this?

This is how I get the point cloud data, where self.w and self.h are the width and height of the frame (640x480 in my application):

    XYZ = np.array([0] * (self.w * self.h * 3), dtype='float').reshape(-1, 3)

    i = 0
    for y in range(0, self.h):
        for x in range(0, self.w):
            depth = frame.get_distance(x, y)
            depth_point_in_meters_camera_coords = rs.rs2_deproject_pixel_to_point(
                self.color_intrinsics, [x, y], depth)
            XYZ[i] = depth_point_in_meters_camera_coords
            i = i + 1

    return XYZ

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 11, 2022

Whilst you certainly can use 640x480 resolution, the optimal depth resolution for accuracy on a D435 is 848x480, so 640x480 depth measurements may be more inaccurate.

If you are using get_distance() and depth-color alignment in your Python application, be aware that alignment is known to cause significant depth value inaccuracies at the outer regions of the image, whilst the values are relatively correct at the image's center.

An example of a case that features this phenomenon is at #6749 (comment) - please read downwards from the point that I have linked to in the discussion (skip past the script to the text underneath it):

If you are creating a pointcloud by performing depth-to-color alignment and then obtaining the 3D real-world point cloud coordinates with rs2_deproject_pixel_to_point (as your script above indicates), the use of alignment may also introduce inaccuracies. Using points.get_vertices() instead to generate the point cloud, and then storing the vertices in a numpy array whose values can be printed out, should provide better accuracy. This subject is discussed in detail in the Python case at #4315
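A minimal sketch of that alternative approach (the same vertex-extraction pattern that opencv_pointcloud_viewer.py itself uses):

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
pc = rs.pointcloud()

frames = pipeline.wait_for_frames()
depth = frames.get_depth_frame()

points = pc.calculate(depth)
# Each vertex is an (x, y, z) triple in metres, in the depth sensor's
# own coordinate system -- no alignment step is involved.
verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
valid = verts[verts[:, 2] > 0]  # drop zero-depth (invalid) points
print(valid.shape)

pipeline.stop()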

@MartyG-RealSense
Collaborator

At #1890 there is a Python case where someone was trying to map a depth pixel to a color pixel, instead of mapping a color pixel to a depth pixel with rs2_project_color_pixel_to_depth_pixel.

@GHCK57
Author

GHCK57 commented Jan 14, 2022

Hi again @MartyG-RealSense

I still haven't solved the problem that I posted in this comment: #10133 (comment)

At the points where the table surface meets the box, stray points appear at the lower right and left corners of the box. These points are not related to the box; I think they belong to the surface of the table. What configuration change should I make to remove these points?

image

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 14, 2022

This problem seems similar to case #8306, which discusses options for tidying pointcloud data that you have already tried, such as changing the depth unit scale and the Second Peak Threshold.

The RealSense user in that case also achieved some improvement by using an Advanced Mode control in the RealSense Viewer called Rsm to reduce noise such as the grey areas.

image

The setting that they changed was Remove Threshold, altering it from the default of '63' to a higher value of '94'.

It is worth noting though that in another case at #5477 the image was improved by reducing the Rsm threshold value to '0'. So I would suggest both increasing and decreasing to see which direction above or below the default provides the best improvement for your image.

In your Python project, the value of the Rsm Remove Threshold setting could be defined in a json using the param-rsmremovethreshold json parameter.

image

If your preference is to set the value with Python code instead of a json, the SDK example python-rs400-advanced-mode-example.py demonstrates setting an Advanced Mode value.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/python-rs400-advanced-mode-example.py#L68
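A hedged sketch of the json route (the file name here is hypothetical; it stands for a preset exported from the Viewer with param-rsmremovethreshold edited to the desired value):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
advnc_mode = rs.rs400_advanced_mode(profile.get_device())

if advnc_mode.is_enabled():
    # 'custom_rsm_preset.json' is a placeholder name for an exported
    # preset whose param-rsmremovethreshold value has been changed.
    with open("custom_rsm_preset.json") as f:
        advnc_mode.load_json(f.read())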

The SDK also has an instruction called set_rau_thresholds_control():

https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.rs400_advanced_mode.html#pyrealsense2.rs400_advanced_mode.set_rau_thresholds_control

A RealSense user at the link below shared a Python script that makes use of this instruction.

https://community.intel.com/t5/Items-with-no-label/D415-in-Advanced-Mode/m-p/543308

image

import pyrealsense2 as rs
import time

DS5_product_ids = ["0AD1", "0AD2", "0AD3", "0AD4", "0AD5", "0AF6", "0AFE", "0AFF", "0B00", "0B01", "0B03", "0B07"]

def find_device_that_supports_advanced_mode():
    ctx = rs.context()
    devices = ctx.query_devices()
    for dev in devices:
        if dev.supports(rs.camera_info.product_id) and str(dev.get_info(rs.camera_info.product_id)) in DS5_product_ids:
            if dev.supports(rs.camera_info.name):
                print("Found device that supports advanced mode:", dev.get_info(rs.camera_info.name))
                return dev
    raise Exception("No device that supports advanced mode was found")

try:
    dev = find_device_that_supports_advanced_mode()
    advnc_mode = rs.rs400_advanced_mode(dev)
    print("Advanced mode is", "enabled" if advnc_mode.is_enabled() else "disabled")

    # Loop until we successfully enable advanced mode
    while not advnc_mode.is_enabled():
        print("Trying to enable advanced mode...")
        advnc_mode.toggle_advanced_mode(True)
        # At this point the device will disconnect and re-connect.
        print("Sleeping for 5 seconds...")
        time.sleep(5)
        # The 'dev' object becomes invalid, so initialize it again
        dev = find_device_that_supports_advanced_mode()
        advnc_mode = rs.rs400_advanced_mode(dev)
        print("Advanced mode is", "enabled" if advnc_mode.is_enabled() else "disabled")

    ctrls = advnc_mode.get_rau_thresholds_control()
    print("RAU Thresholds Control: \n", ctrls)
    ctrls.rauDiffThresholdRed = 200
    ctrls.rauDiffThresholdGreen = 500
    ctrls.rauDiffThresholdBlue = 1000
    advnc_mode.set_rau_thresholds_control(ctrls)
    print("After setting new values, RAU Thresholds Control: \n", advnc_mode.get_rau_thresholds_control())
except Exception as e:
    print(e)

@GHCK57
Author

GHCK57 commented Jan 17, 2022

Hi @MartyG-RealSense

Sorry for my late reply. I enabled advanced mode in my Python project with the code below, where config_D435 is the configuration that enables the streams and pipe_D435 is the pipeline (rs.pipeline()):

       pipeline_profile = pipe_D435.start(config_D435)
       device = pipeline_profile.get_device()
       advanced_mode = rs.rs400_advanced_mode(device)
       with open("HighResHighAccuracyPreset.json") as f:
           advanced_mode.load_json(f.read())

I am using "HighResHighAccuracyPreset.json":

{
"param-disableraucolor": 0,
"param-disablesadcolor": 0,
"param-disablesadnormalize": 0,
"param-disablesloleftcolor": 0,
"param-disableslorightcolor": 1,
"param-lambdaad": 751,
"param-lambdacensus": 6,
"param-leftrightthreshold": 10,
"param-maxscorethreshb": 2893,
"param-medianthreshold": 796,
"param-minscorethresha": 4,
"param-neighborthresh": 108,
"param-raumine": 6,
"param-rauminn": 3,
"param-rauminnssum": 7,
"param-raumins": 2,
"param-rauminw": 2,
"param-rauminwesum": 12,
"param-regioncolorthresholdb": 0.786380709066072,
"param-regioncolorthresholdg": 0.5664810046339115,
"param-regioncolorthresholdr": 0.9857413557742051,
"param-regionshrinku": 3,
"param-regionshrinkv": 0,
"param-regionspatialthresholdu": 7,
"param-regionspatialthresholdv": 3,
"param-robbinsmonrodecrement": 25,
"param-robbinsmonroincrement": 2,
"param-rsmdiffthreshold": 1.6605679586483368,
"param-rsmrauslodiffthreshold": 0.7269914923801174,
"param-rsmremovethreshold": 0.8150280066589434,
"param-scanlineedgetaub": 13,
"param-scanlineedgetaug": 15,
"param-scanlineedgetaur": 30,
"param-scanlinep1": 155,
"param-scanlinep1onediscon": 160,
"param-scanlinep1twodiscon": 59,
"param-scanlinep2": 190,
"param-scanlinep2onediscon": 507,
"param-scanlinep2twodiscon": 493,
"param-secondpeakdelta": 647,
"param-texturecountthresh": 0,
"param-texturedifferencethresh": 1722,
"param-usersm": 1
}

I implemented the code you posted, but the problem persists. Advanced mode is enabled. The output is:

Found device that supports advanced mode: Intel RealSense D435
Advanced mode is enabled
RAU Thresholds Control:
rauDiffThresholdRed: 1007, rauDiffThresholdGreen: 578, rauDiffThresholdBlue: 802
After Setting new value, RAU Thresholds Control:
rauDiffThresholdRed: 200, rauDiffThresholdGreen: 500, rauDiffThresholdBlue: 1000

image

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 17, 2022

I ran extensive further tests with a D435i and a similar cardboard box. I was able to replicate the black areas shown in your image above. They seemed to be strongest when the camera was viewing the box from a horizontal-diagonal angle, and they receded when the camera was placed parallel to the box, off to the side of it, looking straight ahead.

image

image

The D415 model produced a better image using this 'straight ahead and off to the side' view, and even when the camera was turned at an angle towards the box, its image was better than the D435's.

image

@GHCK57
Author

GHCK57 commented Jan 17, 2022

I don't suffer from black areas. My problem is different, but I don't know how to explain it.

The blue lines are a different surface.
The red circle is a different surface.
The green line shows the bottom boundary of the box.
The yellow lines are degraded depth data.

image

This is color frame.

image

@GHCK57
Author

GHCK57 commented Jan 17, 2022

The color frame

image

Bottom view of the box. (point cloud data)

image

Front view of the box. ( point cloud data)

image

Different view

image

image

As you can see, there are many white points. These points belong to the surface of the table, not to the box. I hope I explained it well.

Also, this is a different point cloud from a different scan. There are only a few stray points in this cloud, so it can be ignored. Long story short, sometimes we see these noisy spots and sometimes we don't.

image

@MartyG-RealSense
Collaborator

The kind of point cloud edge trimming required may be beyond the SDK's default point cloud capabilities and require the use of specialist point cloud libraries such as PCL or Open3D to trim off unwanted detail with filters. For example, an outlier removal filter (which can remove unconnected 'islands' of points that are not joined to the main image) could likely deal with the corner blocks in the image above. The RealSense SDK has compatibility wrappers for PCL and Open3D to enable their functions to be accessed from a RealSense project.

@GHCK57
Author

GHCK57 commented Jan 18, 2022

I have already applied different kinds of filters, such as outlier removal, KNN, etc. It is not about this. I think there is a problem, but I don't actually know what it is, because if you look at the picture in this comment, #10133 (comment), there is depth distortion. Is it caused by that?

@GHCK57
Author

GHCK57 commented Jan 18, 2022

I will share one more example. I put two screwdrivers at the sides of the box.

image

If we look at the depth frame, it looks like this.

image

image

I would expect space between the screwdrivers and the box. However, the camera measures depth in that gap: it fills in the area between the screwdriver and the box and sees it at the same distance as the box. Something seems to have gone wrong.

@MartyG-RealSense
Collaborator

Is the curved-wall backdrop on top of the table removable, in order to eliminate the wall's curvature as a possible source of confusion for the camera when depth sensing?

@GHCK57
Author

GHCK57 commented Jan 18, 2022

With the curved wall removed, the depth frame looks like this ("Custom" preset):

image

image

The depth frame with the High Accuracy preset:

image

I uploaded a screen recording:

video.zip

I checked why the black hole grows. It is caused by the lights. When I checked the infrared stream, I didn't see the infrared dot pattern in that area because of the light, even though the light is not strong.

The infrared stream while the lights are turned on:

image

The infrared stream while the lights are turned off:

image

The depth frame while the lights are turned off:

image

@MartyG-RealSense
Collaborator

The above image with the screwdrivers on either side of the box reminds me of when I tested creating a pointcloud of thin toothpicks in #8228 at a similar distance range. The toothpicks had the same kind of black area around them at first. I was able to remove the black areas by using a depth unit scale of '0.0001', like you tried in #10133 (comment)

The speckled effect on all the surfaces in the above image makes me think that the IR dot pattern might be leaking through onto the RGB image, something that is not meant to happen. When it does happen, reducing the Laser Power value can reduce the visibility of the dots.

@gercekcetinkaya

gercekcetinkaya commented Jan 18, 2022

Hi @MartyG-RealSense, it is @GHCK57.

What does it mean for the IR dot pattern to leak onto the RGB image? Tomorrow I will check how changing the Laser Power affects the visibility of the dots. In your comment in #8228, the depth measurement of the toothpick is very successful. Why can't I get healthy depth data?

** Also, I saw your comment here: https://support.intelrealsense.com/hc/en-us/community/posts/360035147633/comments/360009059553 . My camera's temperature is also high, approximately 50 degrees. I will try the solutions you posted there and share the results with you.

@MartyG-RealSense
Collaborator

An infrared image is covered in a projection of semi-random dots from the camera's IR Emitter component, like your IR images above. These dots may appear on the depth image as black dots, an undesirable noise effect called laser speckle. They should not be appearing on the RGB image though, and when they do it can be described as a 'leaking through'. I am not aware of negative effects to the RGB image of such a leak though, other than it looking wrong in terms of the real-life scene not having dots on it.

In regard to the toothpick image: one characteristic of that scene is that it did not have strong lighting. I also put a plain flat background card behind the toothpick and set the depth scale to 0.0001. The depth scale change made by far the largest impact on removing unwanted black and grey areas from the image.

I also experimented with setting the Disparity Shift option (under Advanced Mode > Depth Table in the Viewer) to a value of '50' instead of the default '0' to reduce the camera's minimum depth sensing distance below the default 0.1 meters of D435 / D435i.
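A hedged sketch of changing Disparity Shift programmatically (via the Advanced Mode depth table; the value 50 mirrors the experiment described above):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
advnc_mode = rs.rs400_advanced_mode(profile.get_device())

# Read-modify-write the depth table. Raising disparityShift shortens the
# minimum sensing distance at the cost of maximum range.
depth_table = advnc_mode.get_depth_table()
depth_table.disparityShift = 50
advnc_mode.set_depth_table(depth_table)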

@GHCK57
Author

GHCK57 commented Jan 19, 2022

I tested the change in temperature. It takes approximately 30 minutes to reach 44 degrees. During the test only the color and depth streams were open, and the laser power was 150 mW. No error occurred.

image

The second test is changing the Laser Power.

The laser power is 0:

image

The laser power is 50:

image

The laser power is 150:

image

The laser power is 300:

image

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 19, 2022

It can be observed that as Laser Power increases, the number of black dots increases, as shown in your '300' image. In those particular images, the speckle effect seems to be mostly in the area away from the box rather than negatively affecting the box itself even though the dot pattern is being cast onto the box.

image

This is an expected effect, as the dot visibility can differ depending on the type of material that a surface is made from and the time of day. It has been demonstrated that earlier in the day the pattern may be invisible on one type of material in the scene and strongly visible on another type of material, whilst in late afternoon as light levels reduce, the visibility pattern may reverse, with the formerly invisible area of dots now visible and the strongly visible dots becoming invisible.

@GHCK57
Author

GHCK57 commented Jan 19, 2022

Then what should I do? What are your suggestions? A huge black hole appears behind the box; when the lights are turned off it does not appear. There are also many problems like the ones in #10133 (comment). If the camera does not see the box very well, how can I measure its volume?

@MartyG-RealSense
Collaborator

I recommend determining the lighting level and settings in which the dots are most visible on the box's surface. In your particular scene, that would seem to be Laser Power = 300 and with the lights turned off. The camera can then use the dots on the surface of the box as a 'texture source' to analyze for depth detail. This approach works well for plain surfaces with a low amount of texture detail on them like the cardboard.

As mentioned earlier in #10133 (comment) the RealSense Viewer allows manual selection of multiple points on a pointcloud to calculate the area.

The link below suggests an approach for using PCL to calculate volume.

https://stackoverflow.com/questions/55629892/2019-point-cloud-volume-estimation-with-c/55681149#55681149

Another approach would be to use a RealSense-powered commercial box measurement solution such as the ones in the links below.

https://www.mobileworxs.com/volume-dimensioning-3d-camera-system/
https://www.lips-hci.com/lipsmetric

@GHCK57
Author

GHCK57 commented Jan 19, 2022

In summary, the camera can be affected by the light source, even if the light source is soft. The camera needs to be positioned so that it is not affected by external light sources.

But despite everything, I couldn't get the performance I wanted from the camera. I still haven't found solutions to the questions I wrote earlier. For example, as in #10133 (comment) and #10133 (comment), I could not find out why the edges of the box were trying to connect to the table surface.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 19, 2022

I cannot think of a way to deal with the table surface other than to position the camera top-down at a fixed height and set the threshold post-processing filter with a Maximum Distance value that excludes depth data at the table height and below from the depth image. Even if larger boxes are used, if the table top is a fixed distance from the camera, then the table beneath the base of the box should always be removed whilst the box remains in the image.
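A minimal sketch of that idea (the 0.68 m cut-off is an assumed camera-to-table distance and would need adjusting to the actual rig):

import pyrealsense2 as rs

# Clip depth just short of the table so the box survives but the table
# surface (and anything beyond it) is excluded.
threshold = rs.threshold_filter()
threshold.set_option(rs.option.min_distance, 0.15)
threshold.set_option(rs.option.max_distance, 0.68)  # assumed table at ~0.70 m

# Per frame:
# clipped_depth = threshold.process(depth_frame)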

@GHCK57
Author

GHCK57 commented Jan 19, 2022

For this job, the D415 camera is better than the D435, isn't it?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 19, 2022

I did consistently obtain better box images with the D415 model in my own tests with a similar sized cardboard box. The D415, though it has a smaller field of view, has around 2x better image quality than D435 and 2x less RMS error (noise over distance). It also has an optimal depth accuracy resolution of 1280x720, compared to 848x480 on D435 / D435i. The D415 excels at scanning small static objects, whilst the D435 is optimized for capturing fast motion.

I cannot guarantee that a D415 will resolve the issues that you are experiencing in your own particular scene though. I can only speak for my own test results.

@GHCK57
Author

GHCK57 commented Jan 20, 2022

"The D415 excels at scanning small static objects, whilst the D435 is optimized for capturing fast motion." Is this diffrence thanks to shutter ?

->Is there a planned production end time for these cameras?

-->Does the D415 camera works with same code that written for D435 ?

@MartyG-RealSense
Collaborator

  1. The D415 has a smaller field of view (FOV) on its left and right infrared sensors compared to the D435, which has a wider IR sensor component. A wider sensor lets more light into it, which can result in greater depth noise.

  2. There are no announced plans for EOL of the 400 Series at the time of writing this. My understanding is that the 400 Series will be supported at least through the whole of 2022.

  3. The D415 works with the same code as the D435. An exception to this would be code that looks for a particular camera model name (which would prevent a D435 from working with the script if it only accepted a D415), but most scripts would not use such a check.

@GHCK57
Author

GHCK57 commented Jan 20, 2022

I am very pleased. Thank you very much for your help and information. I will buy a D415 camera and try with that.

@MartyG-RealSense
Collaborator

Hi @GHCK57 Do you require further assistance with this case, please? Thanks!

@GHCK57
Author

GHCK57 commented Jan 26, 2022

Hi @MartyG-RealSense thanks for your help.

@MartyG-RealSense
Collaborator

Thanks very much @GHCK57 for the update. As you are satisfied with the outcome, I will close this case. Thanks again!
