Warped/Bent Point Cloud #573

Open · Windson9 opened this issue Feb 13, 2024 · 7 comments

@Windson9
Hello, and thanks for the great work.

I am trying to access/convert a point cloud using the aditofpython library. I am on the v4.3.0 branch, and all the examples build and run successfully. However, the point cloud extracted with frame.getData("xyz") appears warped. I also tried creating a point cloud with Open3D from the depth and RGB images, using both the raw and the undistorted depth; the point cloud improved, but it still doesn't look correct. The config I am using is config_adsd3500_adsd3100.json. Please refer to the attached code and images below.

import aditofpython as tof
import numpy as np
import matplotlib.pyplot as plt
import sys
import open3d as o3d
import cv2

# Parse command-line arguments: <config> for USB/local, <ip> <config> for network
if len(sys.argv) < 2 or sys.argv[1] == "--help" or sys.argv[1] == "-h":
    print("first_frame.py usage:")
    print("USB / Local connection: first_frame.py <config>")
    print("Network connection: first_frame.py <ip> <config>")
    exit(1)

system = tof.System()

cameras = []
ip = ""
if len(sys.argv) == 3:
    ip = sys.argv[1]
    config = sys.argv[2]
    print(f"Looking for camera on network @ {ip}. Will use {config}.")
    ip = "ip:" + ip
elif len(sys.argv) == 2:
    config = sys.argv[1]
    print(f"Looking for camera on UVC. Will use {config}.")
else:
    print("Too many arguments provided!")
    exit(1)

status = system.getCameraList(cameras, ip)
print("system.getCameraList()", status)

camera1 = cameras[0]
point_cloud = o3d.geometry.PointCloud()

status = camera1.setControl("initialization_config", config)
print("camera1.setControl()", status)

status = camera1.initialize()
print("camera1.initialize()", status)

types = []
status = camera1.getAvailableFrameTypes(types)
print("camera1.getAvailableFrameTypes()", status)
print(types)

camDetails = tof.CameraDetails()
print('#' * 100)
print(f'camDetails - {camDetails.intrinsics.fx}')
status = camera1.getDetails(camDetails)
print("camera1.getDetails()", status)
print("camera1 details:", "id:", camDetails.cameraId, "connection:", camDetails.connection)


intrinsicParameters = camDetails.intrinsics
fx = intrinsicParameters.fx
fy = intrinsicParameters.fy
cx = intrinsicParameters.cx
cy = intrinsicParameters.cy
k1 = intrinsicParameters.k1
k2 = intrinsicParameters.k2
k3 = intrinsicParameters.k3
p1 = intrinsicParameters.p1
p2 = intrinsicParameters.p2
print('Total intrinsic parameters: ', dir(intrinsicParameters))

# Scale depth values (max range 4096) into 8-bit for colormap display
camera_range = 4096
distance_scale = 255.0 / camera_range

print('#' * 100)
print(fx, fy, cx, cy, k1, k2, k3, p1, p2)


status = camera1.setFrameType("sr-qnative")
print("camera1.setFrameType()", status)
# print("lrqmp")

status = camera1.start()
print("camera1.start()", status)

frame = tof.Frame()
status = camera1.requestFrame(frame)
print("camera1.requestFrame()", status)

frameDataDetails = tof.FrameDataDetails()
status = frame.getDataDetails("depth", frameDataDetails)
print("frame.getDataDetails()", status)
print("depth frame details:", "width:", frameDataDetails.width, "height:", frameDataDetails.height, "type:", frameDataDetails.type)

status = camera1.stop()
print("camera1.stop()", status)

depth_map = np.array(frame.getData("depth"), dtype="uint16", copy=False)

plt.figure(num='Raw Depth Image')
plt.imshow(depth_map)
plt.show()

np.save('depth_image.npy', depth_map)
# OpenCV expects distortion coefficients in (k1, k2, p1, p2, k3) order
distortion_coeffs = np.array([k1, k2, p1, p2, k3])
# Intrinsics are halved to match the binned sr-qnative resolution
undistorted_depth = cv2.undistort(depth_map,
                                  np.array([[fx/2, 0, cx/2], [0, fy/2, cy/2], [0, 0, 1]]),
                                  distortion_coeffs)

plt.figure(num='Undistorted Depth Image')
plt.imshow(undistorted_depth)
plt.show()

np.save('undistort_depth_image.npy', undistorted_depth)
ir_map = np.array(frame.getData("ir"), dtype="uint16", copy=False)
# XYZ frame: interleaved int16 (X, Y, Z) triplets, one per depth pixel
xyz_map = np.array(frame.getData("xyz"), dtype="int16", copy=False)
xyz_points = np.resize(xyz_map, (depth_map.shape[0] * depth_map.shape[1], 3))
print('number of points: ', depth_map.shape[0] * depth_map.shape[1])

cameraIntrinsics = o3d.camera.PinholeCameraIntrinsic(frameDataDetails.width, frameDataDetails.height, fx/2, fy/2, cx/2, cy/2)
#vis = o3d.visualization.Visualizer()
#vis.create_window("PointCloud", 1600, 1600)
first_time_render = True
final_pcd = o3d.geometry.PointCloud()

# create open3d compatible image type
depth_16 = o3d.geometry.Image(undistorted_depth)

# Normalize depth values to [0, 1] for the colormap
normalized_depth = depth_map / np.max(depth_map)

# Apply colormap and drop the alpha channel (Open3D expects 3-channel color)
color_map = plt.cm.jet
temp_color_8 = (color_map(normalized_depth)[:, :, :3] * 255).astype(np.uint8)
plt.figure(num='Color image')
plt.imshow(temp_color_8)
plt.show()
# Create Open3D color image
color_8 = o3d.geometry.Image(temp_color_8)

print('cameraIntrinsics: ', cameraIntrinsics.intrinsic_matrix)
# Create RGBD image (depth_scale=1000.0: mm -> m, depth_trunc=3.0 m)
rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color_8, depth_16, depth_scale=1000.0, depth_trunc=3.0, convert_rgb_to_intensity=False)

# Create point cloud from RGBD image
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd_image, cameraIntrinsics)

d_pcd = o3d.geometry.PointCloud.create_from_depth_image(
    depth_16, 
    cameraIntrinsics)

print('created d_pcd')

# Flip about the X axis so the clouds display upright in the Open3D viewer
d_pcd.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
o3d.visualization.draw_geometries([d_pcd], window_name='Created using depth only')
pcd.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
o3d.visualization.draw_geometries([pcd], window_name='Created using RGBD')
# if first_time_render:
#     vis.add_geometry(final_pcd)
#     first_time_render = 0
# vis.update_geometry()
# vis.poll_events()
# vis.update_renderer()


# Colorize the depth map for display (not used by the point cloud below)
depth_map = distance_scale * depth_map
depth_map = np.uint8(depth_map)
depth_map = cv2.applyColorMap(depth_map, cv2.COLORMAP_WINTER)
point_cloud.points = o3d.utility.Vector3dVector(xyz_points)
point_cloud.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
np.save('xyz_2.npy', xyz_points)

o3d.visualization.draw_geometries([point_cloud], window_name='Raw XYZ data from ToF')

[Image: Raw depth image]

[Image: Undistorted depth image]

[Image: Color image]

[Image: Raw XYZ values captured directly from the ToF camera]

[Image: Point cloud created using RGB and depth data]

[Image: Point cloud created using only depth data]

[Image: Setup photo for reference (captured with a phone)]

As you can see, the point cloud created with Open3D does not have the desk/floor surface perpendicular to the checkered plate. I also tried converting the depth array to a point cloud using only NumPy, but the issue still persists. Any help would be great.
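For reference, here is roughly the NumPy-only conversion I tried (a minimal sketch assuming a pinhole model, planar Z-depth in millimetres, and the halved intrinsics from the script above; depth_map is as loaded from frame.getData("depth")):

# Minimal NumPy back-projection sketch (pinhole model, assumes planar Z-depth)
h, w = depth_map.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))
z = depth_map.astype(np.float64) / 1000.0  # mm -> m
x = (u - cx / 2) * z / (fx / 2)
y = (v - cy / 2) * z / (fy / 2)
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)

pcd_np = o3d.geometry.PointCloud()
pcd_np.points = o3d.utility.Vector3dVector(points)
o3d.visualization.draw_geometries([pcd_np], window_name='NumPy back-projection')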

Thanks, Mayank.

@SeptimiuVana
Collaborator

Hello, we are looking into this and will come back with an answer as soon as possible.

Regards,
Septimiu.

@Windson9
Author

Windson9 commented Apr 2, 2024

Hello. I learned that the first frame can be bent because the temperature sensing is slightly off, so I tried capturing consecutive frames and noticed an improvement over the first frame. However, the issue persists (images attached below).

The same behaviour is observed for the 2nd and all subsequent frames. We noticed that the ToF GUI doesn't seem to have this issue. Are there any post-processing steps that I am missing?
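For what it's worth, the warm-up workaround is simply to request and discard a few frames before using one (a sketch with the same camera1 object as above; the count of 10 is an arbitrary choice, not an SDK recommendation):

# Discard the first few frames while the sensor temperature settles
frame = tof.Frame()
for _ in range(10):  # arbitrary warm-up count
    status = camera1.requestFrame(frame)
depth_map = np.array(frame.getData("depth"), dtype="uint16", copy=False)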

[Image: gui]
[Image: pcd]
[Image: setup]

Thanks,
Mayank

@andrestraker
Collaborator

andrestraker commented Apr 10, 2024

Hi @Windson9 - in general, we do not do any processing on the point cloud once it has been generated by the depth compute library or by the open-source radial-to-XYZ conversion.

Can you take a look at this? It was developed with tag v4.3.0.
https://github.com/analogdevicesinc/ToF/blob/main/bindings/python/examples/skeletal_tracking_in_pointcloud/skeletal_tracking_in_pointcloud.py
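The relevant point of that example is that it builds the cloud directly from the SDK's XYZ frame rather than back-projecting the depth image through intrinsics. A condensed sketch (not a verbatim excerpt; names follow the script posted above):

# Use the SDK's XYZ frame directly - no intrinsics or back-projection needed
xyz_map = np.array(frame.getData("xyz"), dtype="int16", copy=False)
xyz_points = xyz_map.reshape(-1, 3).astype(np.float64) / 1000.0  # mm -> m
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz_points)
o3d.visualization.draw_geometries([pcd], window_name='XYZ from SDK')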

@Windson9
Author

Hello, @andrestraker. I checked the code you mentioned, and the point cloud now appears correct. However, I noticed that it uses the XYZ data directly for the point cloud. If you look at my attached code above, you will see that I am converting the depth image to a point cloud using the camera's intrinsic matrix, and in that case the problem persists. I tried undistorting the image, but observed the same results. Could you please check on your end whether you can convert the depth image to a point cloud successfully?

Thanks,
Mayank

@andrestraker
Collaborator

@Windson9 - our depth compute library does the point cloud conversion.

@naitiknakrani-eic

Is there any difference in logic between the depth compute library's conversion and Open3D's create_point_cloud_from_depth_image (https://www.open3d.org/docs/0.7.0/python_api/open3d.geometry.create_point_cloud_from_depth_image.html)?

@andrestraker
Collaborator

We have an open-source version here:
https://github.com/analogdevicesinc/ToF/tree/main/sdk/common/adi

Start from tofiCompute.cpp.
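One plausible source of the discrepancy, going by the "radial to XYZ" naming above: if the SDK's depth frame stores radial distance (range along the ray through each pixel) while Open3D's create_point_cloud_from_depth_image assumes planar Z-depth, back-projecting it directly would bow flat surfaces. A sketch of converting radial depth to Z-depth first (this is an assumption to verify against tofiCompute.cpp, using the halved intrinsics from the script above):

# Hypothetical radial -> planar Z conversion (verify against tofiCompute.cpp)
h, w = depth_map.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))
# Length of the unit-Z ray through each pixel: (x', y', 1)
ray_norm = np.sqrt(((u - cx / 2) / (fx / 2)) ** 2 +
                   ((v - cy / 2) / (fy / 2)) ** 2 + 1.0)
z_depth = (depth_map / ray_norm).astype(np.uint16)  # radial range -> Z component
depth_z16 = o3d.geometry.Image(z_depth)  # hand this to Open3D instead of the raw depth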
