
D415 pointcloud mismatch between RealSense viewer export function and Python 3.7 export function #6718

Closed
freekb1996 opened this issue Jun 29, 2020 · 16 comments

Comments

@freekb1996

freekb1996 commented Jun 29, 2020


Required Info
Camera Model D415
Firmware Version 05.12.05.00
Operating System & Version Win 10
Platform PC
SDK Version 2.34.0.1470
Language Python 3.7

Pointcloud mismatch between RealSense export and Python export

Currently, I'm working on deprojecting 2D coordinates to 3D coordinates. I have recordings made with the RealSense D415 camera. In Python 3.7, I'm aligning the captured depth frames to the captured color images using pc.map_to. Subsequently, I'm creating a 3D pointcloud using pc.calculate, which is exported to a .ply file using the export_to_ply function. To assess the accuracy of this operation, the exported pointcloud is compared with the pointcloud generated by the 3D export function implemented in the RealSense Viewer. However, there is an unexpected displacement between these two pointclouds.

The image below shows a test recording of a cup placed on a chair. The blue pointcloud is the one exported directly from the RealSense Viewer, while the other pointcloud (with color information) was exported after alignment in Python. The displacement is most pronounced in the x- and y-directions; the displacement in the z- (depth) direction is minimal.

[Image: overlay of the two pointclouds, showing the offset between the Viewer export (blue) and the Python export (colored)]

The python code to generate and export the pointcloud is shown below:

#Import the RealSense SDK
import pyrealsense2 as rs

#Create pipeline
pipeline = rs.pipeline()

bag_loc = r"C:...\Test_Object.bag"

#Create a config object
config = rs.config()
#Tell config that we will use a recorded device from file, to be used by the pipeline through playback.
rs.config.enable_device_from_file(config, bag_loc)
#Configure the pipeline to stream the depth stream
config.enable_stream(rs.stream.depth, 1280,720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1920,1080, rs.format.yuyv, 30)

#Start streaming from file
profile = pipeline.start(config)
device = profile.get_device()
playback = device.as_playback()
playback.set_real_time(False)

#Create pointcloud object
pc = rs.pointcloud()
points = rs.points()

#Define filters
#Decimation:
decimation = rs.decimation_filter()                     # filter magnitude between 2 and 8 def = 2
#Depth to disparity        
depth_to_disparity = rs.disparity_transform(True)
disparity_to_depth = rs.disparity_transform(False)
#Spatial:
spatial = rs.spatial_filter()
spatial.set_option(rs.option.holes_fill, 0)             # between 0 and 5 def = 0
spatial.set_option(rs.option.filter_magnitude, 2)       # between 1 and 5 def=2
spatial.set_option(rs.option.filter_smooth_alpha, 0.5)   # between 0.25 and 1 def=0.5
spatial.set_option(rs.option.filter_smooth_delta, 20)   # between 1 and 50 def=20
#Temporal:
temporal = rs.temporal_filter() 
temporal.set_option(rs.option.filter_smooth_alpha, 0.4)   
temporal.set_option(rs.option.filter_smooth_delta, 20)

colorizer = rs.colorizer()
  
#Get info about depth scaling of the device      
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: " , depth_scale)

#Define alignment parameters to align the depth stream to the color stream
align_to = rs.stream.color
align = rs.align(align_to)

yuyv_dec = rs.yuy_decoder()
frames =[]

frame_number = 754

#Streaming loop
while True:
    if rs.playback.current_status(playback) == rs.playback_status.playing:
        
        frames = pipeline.wait_for_frames()
        if not frames: continue
    
    
        aligned_frames = align.process(frames)
        color_frame_or = aligned_frames.get_color_frame()
        frame_num = color_frame_or.get_frame_number()
        print(frame_num)
        
        if frame_num == frame_number:
            rs.playback.pause(playback)
            cur_pos = playback.get_position()
            print("Current position: " + str(cur_pos))
       
            #Apply filters
            pc_filtered = decimation.process(frames)
            pc_filtered = depth_to_disparity.process(pc_filtered)
            pc_filtered = spatial.process(pc_filtered)
            pc_filtered = temporal.process(pc_filtered)
            pc_filtered = disparity_to_depth.process(pc_filtered).as_frameset()
                           
            #Align the depth frame to color frame
            pc_aligned_frames = align.process(pc_filtered)
            
            pc_depth = pc_aligned_frames.get_depth_frame()
            pc_color = pc_aligned_frames.get_color_frame()
            pc_color = yuyv_dec.process(pc_color)
        
            pc_color = pc_color.as_video_frame()
        
            #Create pointcloud 
            pc.map_to(pc_color)
            points = pc.calculate(pc_aligned_frames)
            points.export_to_ply("frame_test.ply", pc_color)   
            
            break

@freekb1996 freekb1996 changed the title D415 pointcloud mismatch between RealSense viewer export function and Python 3.8 export function D415 pointcloud mismatch between RealSense viewer export function and Python 3.7 export function Jun 29, 2020
@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 29, 2020

There have been past cases of depth-to-color point cloud misalignment in Python that were fine when done in the RealSense Viewer. I have linked below to the case from my research that seems most relevant to your question.

#4644

The advice given for tackling this included:

  1. Using 1280x720 depth resolution on D415 and the full HD resolution for RGB. You seem to be doing this already though.

  2. Performing a Dynamic Calibration of the camera using the Dynamic Calibrator tool for the 400 Series cameras. This tool can be downloaded for Windows from the link below:

https://downloadcenter.intel.com/download/29618/Intel-RealSense-D400-Series-Dynamic-Calibration-Tool

@freekb1996
Author

@MartyG-RealSense Thank you for your quick response. Both suggestions were already implemented and tested, but they did not result in better alignment. Do you have any other suggestions that might solve this problem?

@MartyG-RealSense
Collaborator

How are you viewing the exported Python ply, please? If the Depth Units scale of the program that created the ply and the program that imports the ply for viewing are not the same then the ply may be distorted or mis-scaled when viewed in the program that it is imported into.
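As a rough illustration of the scale issue (not RealSense code; a simplified pinhole model with made-up intrinsics and raw depth values), interpreting the same raw depth units under two different Depth Units scales rescales the entire cloud uniformly:

```python
import numpy as np

# Hypothetical raw 16-bit depth values for three points, plus their pixel
# coordinates and simplified (assumed) intrinsics.
raw_depth = np.array([1000, 1500, 2000])        # sensor units
u = np.array([640.0, 700.0, 580.0])             # pixel x
v = np.array([360.0, 400.0, 300.0])             # pixel y
fx = fy = 900.0                                  # focal length (pixels)
cx, cy = 640.0, 360.0                            # principal point

def deproject(depth_scale):
    """Deproject pixels to 3D using a given depth-units scale (metres/unit)."""
    z = raw_depth * depth_scale
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    return np.column_stack([x, y, z])

# The D415 default scale is 0.001 m/unit; a viewer that assumes 0.0001
# produces the same cloud shrunk by a factor of 10.
cloud_correct = deproject(0.001)
cloud_wrong = deproject(0.0001)
print(np.allclose(cloud_wrong, cloud_correct * 0.1))
```

Because x and y are proportional to z, a wrong scale rescales the whole cloud rather than shifting it, so a pure x/y offset like the one reported here points at something other than the Depth Units setting.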

@freekb1996
Author

@MartyG-RealSense I'm using CloudCompare v2.11.0 to view both pointclouds, which allows easy assessment. I'm also able to import them into Python using the open3d package, but this results in the same misalignment.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 29, 2020

Have you tried removing post-processing filters from your Python script one at a time to test whether a particular one (such as Decimation particularly) may be affecting the alignment?
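One way to structure such an ablation test (an illustrative helper, not part of the RealSense API; the toy lambdas below stand in for the real decimation/spatial/temporal processing blocks) is to run the filter chain repeatedly with one filter skipped each time:

```python
# Generic ablation helper: given an ordered chain of (name, callable) filters,
# run the chain once per filter with that filter disabled, so the culprit can
# be identified by comparing the results.

def ablate_filters(data, filters):
    """Return {skipped_name: result} for the chain run with each filter skipped."""
    results = {}
    for skipped_name, _ in filters:
        out = data
        for name, f in filters:
            if name != skipped_name:
                out = f(out)
        results[skipped_name] = out
    return results

# Toy stand-ins for the real post-processing blocks; in the actual script the
# callables would be e.g. decimation.process, spatial.process, temporal.process.
chain = [
    ("decimation", lambda x: x // 2),
    ("spatial", lambda x: x + 1),
    ("temporal", lambda x: x * 3),
]
print(ablate_filters(100, chain))
```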

@freekb1996
Author

I ran the code without any filters and got the following result:

[Image: overlay of the two pointclouds after running the code without filters]

If you look closely, the color information is now aligned to the blue pointcloud (the one exported from the RealSense Viewer). However, the depth data is still misaligned.

@freekb1996
Author

I also tried running the code without the line pc.map_to(pc_color), but this had no effect on the exported pointclouds. On top of that, replacing this line with pc.map_to(pc_depth) had no effect either.

@MartyG-RealSense
Collaborator

What procedure did you use to produce depth to color alignment on the Viewer, please? Did you set the Texture Source drop-down menu to 'D415 Color'?

[Screenshot: RealSense Viewer 3D view with the Texture Source drop-down menu set to 'D415 Color']

@freekb1996
Author

Yes I'm using the same procedure.
[Screenshot: the same Texture Source setting in the poster's Viewer]

@MartyG-RealSense
Collaborator

The Occlusion on the Viewer image looks very misaligned. The black outline around an object that Occlusion produces should be relatively equal all round, but yours is shifted off to one side.

I note that at the top of the Viewer, you have Occlusion Removal set to 'Off'. Could you try setting it to 'On' please?

@freekb1996
Author

I tried your suggestion, but it did not change the appearance of the object.

What I also just tried is aligning the color stream to the depth stream, instead of depth to color. On top of that, I changed pc.map_to(pc_color) to pc.map_to(pc_depth). These two alterations seem to resolve the bug I got:

[Image: the two pointclouds now overlap correctly]

So it seems to be better to align the color stream to the depth stream, which is not often seen in examples.
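For reference, the two changes described above could look roughly like this (an untested sketch, not runnable without the camera or bag playback; it reuses the names from the full script earlier in the thread: pipeline, pc, yuyv_dec):

```python
# Sketch of the align-to-depth variant. Assumes pyrealsense2 imported as rs
# and the pipeline already started, as in the full script above.

align = rs.align(rs.stream.depth)   # align COLOR to DEPTH, not depth to color

frames = pipeline.wait_for_frames()
aligned = align.process(frames)

depth = aligned.get_depth_frame()   # depth keeps its original pixel grid
color = yuyv_dec.process(aligned.get_color_frame()).as_video_frame()

# With color aligned to depth, the two frames share the depth intrinsics,
# which is presumably why pc.map_to(pc_depth) worked for the poster; mapping
# to the aligned color frame should give the same texture coordinates.
pc.map_to(color)
points = pc.calculate(depth)
points.export_to_ply("frame_aligned_to_depth.ply", color)
```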

@MartyG-RealSense
Collaborator

That certainly does seem to be unusual. Does this method provide you with an acceptable solution, please?

@freekb1996
Author

I found this suggestion here: #4315 . For now it seems to be an acceptable solution. Thank you for your suggestions!

@Desmenga

The issue is further explained in the linked thread. Specifically, this comment explains the shift of the depth coordinates in the x and y directions:

Aligned depth - when aligning depth to color, the depth coordinates are rounded to the nearest pixel on the RGB grid. So when deprojecting aligned depth back to 3D space, the resulting x and y coordinates will be shifted.
In other words, the alignment process keeps z, but does not preserve the x and y of the original depth.
This, in turn, scales up the produced error.

So in cases where depth accuracy is prioritized (which is most likely the case in the 3D viewer), the color image should be mapped to the depth image.
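The rounding effect in that quoted comment can be demonstrated with a toy round-trip (a simplified pinhole model with assumed intrinsics, not the SDK's own projection code): z survives exactly, while x and y are perturbed by up to half a pixel's worth of metric distance.

```python
# Project a 3D point onto the RGB grid, snap it to the nearest pixel the way
# depth-to-color alignment does, then deproject it back: z is preserved but
# the recovered x and y have moved.

fx = fy = 900.0                 # assumed focal length (pixels)
cx, cy = 640.0, 360.0           # assumed principal point

def project(x, y, z):
    return (fx * x / z + cx, fy * y / z + cy)

def deproject(u, v, z):
    return ((u - cx) / fx * z, (v - cy) / fy * z, z)

x, y, z = 0.1234, -0.0567, 1.5          # original 3D point (metres)
u, v = project(x, y, z)
u_r, v_r = round(u), round(v)           # alignment snaps to the RGB pixel grid
x2, y2, z2 = deproject(u_r, v_r, z)

print(f"z preserved: {z2 == z}")
print(f"x shift: {abs(x2 - x) * 1000:.3f} mm, y shift: {abs(y2 - y) * 1000:.3f} mm")
```

The per-point shift is bounded by 0.5 * z / fx, so it grows with distance, and it accumulates on top of calibration error, which may be why the offset is clearly visible in the exported clouds.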

@MartyG-RealSense
Collaborator

@Desmenga Thanks so much for highlighting that link!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
