
Intel RealSense camera - noisy image #11912

Closed
Gopi-GK02 opened this issue Jun 14, 2023 · 60 comments

@Gopi-GK02


Required Info
Camera Model: D435
Firmware Version: 5.15.0.2
Operating System & Version: Windows 11
Platform: PC
SDK Version: 2.54.1

The stereo module produces grainy/blurry output

I'm new to 3D development and this is my first time using an Intel RealSense camera.
I'm trying to get the point cloud of an object and visualize it in PyBullet, but the stereo module of the camera produces blurry/noisy images.

image
image
image

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 14, 2023

Hi @Gopi-GK02 The large black hole at the center of your depth image will be because of the intensity of the round lamp being too strong for the camera to obtain depth information from that part of the scene.

If the rectangular lamp in the top corner of the image is a fluorescent light such as a ceiling strip light then that type of lighting can create depth noise. This is because they contain heated gases that flicker at frequencies that are difficult to see with the human eye.

If you do have a ceiling strip light then the flicker can be compensated for using a setting in the RGB section of the Viewer's options side-panel called Power Line Frequency. This setting has a choice of '50' and '60' values and should be matched as closely as possible with the operating frequency of the light. Typically this operating frequency will be 50 in European regions and 60 in North American regions.
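As a rough pyrealsense2 sketch of the same change from Python (this is not code from this issue; it assumes the RGB sensor is the second sensor returned by query_sensors() and that values '1' / '2' map to 50 Hz / 60 Hz, which should be checked on your own setup):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

# assumption: index 1 is the RGB sensor on D400 series cameras
color_sensor = profile.get_device().query_sensors()[1]
if color_sensor.supports(rs.option.power_line_frequency):
    # typically 1 = 50 Hz, 2 = 60 Hz
    color_sensor.set_option(rs.option.power_line_frequency, 2)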

I note that your USB connection type is being detected as 2.1 by the RealSense Viewer, which has slower performance and more limited resolution / FPS options than a faster USB 3 connection. Can you confirm whether your camera is plugged into a USB 3 port on your PC, please? If it is in a USB 3 port, using a USB 2 cable or a USB 2 hub with the camera could also cause it to be detected as USB 2.1 speed.

If the RGB image is blurry then maximizing the Sharpness RGB setting to '100' can greatly reduce RGB blurring.

@Gopi-GK02
Author

Gopi-GK02 commented Jun 15, 2023

Hi @MartyG-RealSense, these are the results after I changed the Laser Power Frequency.

image

The reflection of the light still caused flickering so I changed to a place with minimal light reflection and these are the results.

image
image

You were right, the depth image is better after changing the laser power frequency, but still, I don't think it will produce a better 3d image.

Any suggestion on how to improve it?

@MartyG-RealSense
Collaborator

There is a relationship between the Laser Power setting and depth image quality. As Laser Power is reduced from its default value of '156', more holes and gaps can appear in the image, whilst increasing Laser Power above 156 (its maximum value is 360) can increase depth image detail and help to fill in holes. Laser Power is lowered to '60' on your images.

It looks as though the incorrect setting has been changed. 'Laser Power' should not have been reduced to 60. The fault was mine, as I should have said Power Line Frequency in my comment above instead of Laser Power Frequency. I have edited the comment to correct it and offer my sincere apologies.

Power Line Frequency in the RGB section of the side-panel should be set to '60' in its drop-down menu.

image

@Gopi-GK02
Author

Still getting the same output
image
image

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 15, 2023

The large black area in the foreground may be because that part of the scene is very close to the camera. By default with the D435, areas up to around 10 cm from the camera lenses will not be rendered on the depth image because the camera has a minimum depth sensing distance, below which the depth image breaks up.

Setting the Disparity Shift option to '50' instead of the default '0' may enable more of the foreground depth to be rendered, as increasing Disparity Shift reduces the minimum depth sensing distance of the camera (though also reduces maximum observable depth distance at the same time). This option is located in the Stereo Module > Advanced Controls > Depth Table section of the Viewer's side-panel.

image
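For reference, a minimal pyrealsense2 sketch of the same Disparity Shift change made in code rather than in the Viewer (this assumes the advanced-mode API is available for your device; toggling advanced mode resets the camera, so in practice you may need to re-query the device afterwards):

import pyrealsense2 as rs

ctx = rs.context()
dev = ctx.query_devices()[0]

advnc_mode = rs.rs400_advanced_mode(dev)
if not advnc_mode.is_enabled():
    advnc_mode.toggle_advanced_mode(True)   # note: this resets the camera

depth_table = advnc_mode.get_depth_table()
depth_table.disparityShift = 50             # default is 0
advnc_mode.set_depth_table(depth_table)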

Black-surfaced parts of objects may also not show up on the depth image. This is because it is a general physics principle (not specific to RealSense) that dark grey or black absorbs light and so makes it more difficult for depth cameras to read depth information from such surfaces. The darker the color shade, the more light that is absorbed and so the less depth detail that the camera can obtain. Casting a strong light source onto black objects can help to bring out depth detail from them.

@Gopi-GK02
Author

Gopi-GK02 commented Jun 17, 2023

Results after modifying the Disparity Shift.
image
image

Since you mentioned that it reduces the observable distance, below are the results of a closer object.

image
image

@MartyG-RealSense
Collaborator

The depth image above does not look correct. If it is a flat wall then you would expect it to be mostly the same color instead of a spectrum of colors ranging from blue (near) to red (far).

The empty black area that the cable is in is expected though, because - as mentioned earlier - the camera has difficulty seeing black-colored surfaces and objects such as black cables.

The Infrared 2 (right-side infrared) image has some noise and distortion on it that the left infrared image does not have. They should be approximately the same image aside from one of the two infrared images being slightly horizontally offset from the other due to the different positions of the left and right sensors on the front of the camera.

Next, please try resetting the camera to its factory-new calibration settings using the instructions at #10182 (comment) to see whether it improves the image.

@Gopi-GK02
Author

image

I followed the steps, and when I click the Restore Factory option, a notification with no data appears.
I didn't notice any changes in the images after clicking the Write Table button.

Images after resetting camera
image
image

@MartyG-RealSense
Collaborator

The Infrared 2 image at least no longer has corruption on it after the write-table reset.

With such a dense dot pattern on the table surface for the camera to analyze for depth information, it is unusual that the depth image is broken like this.

As you are on a Windows 11 computer, it is also unusual that the camera is being detected as USB 2.1 when on a modern computer I would expect it to have USB 3 ports. Can you confirm that the camera is plugged into a USB 3 port please? If it is then the USB 2.1 status is a mis-detection of the USB connection that would affect its performance, as USB 2.1 is slower than USB 3.

@Gopi-GK02
Author

The ports are USB 3.0, but I am using a USB 2.0 cable

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 20, 2023

The camera will be detected as USB 2 in a USB 3 port if the cable is USB 2. This is because a USB 3 cable has extra wires in it that a USB 2 cable does not have, and those extra wires enable USB 3 detection. A USB 3 cable should be used with the camera if possible so that it is detected as being on a USB 3 connection.
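If it helps to confirm what librealsense itself is detecting, a small sketch that prints the detected USB connection type for each attached device:

import pyrealsense2 as rs

ctx = rs.context()
for dev in ctx.query_devices():
    print(dev.get_info(rs.camera_info.name),
          "- USB type:", dev.get_info(rs.camera_info.usb_type_descriptor))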

@Gopi-GK02
Author

I don't have a USB 3.0 cable as of now; I will try it out as soon as I get one.
Is there any way to improve the point cloud obtained from the RealSense camera?
The point cloud I have obtained is blurry and has some holes.
I have applied the hole-filling filter, which removes some of the holes in the point cloud, but it is still blurry in some parts and at the corners.

I am using python.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 21, 2023

If it is the RGB aspect of the pointcloud that is blurry then you can improve that by maximizing RGB sharpness by setting it to a value of '100'.

It can be set with Python code by defining a 'color_sensor' to access the RGB sensor instead of the depth sensor and then using the SDK instruction rs.option.sharpness to set the RGB sharpness. The instruction should be placed after the pipeline start line.

profile = pipeline.start(config)
color_sensor = profile.get_device().query_sensors()[1]  # sensor index 1 is the RGB sensor on D400 series cameras
color_sensor.set_option(rs.option.sharpness, 100)

@MartyG-RealSense
Collaborator

Hi @Gopi-GK02 Do you require further assistance with this case, please? Thanks!

@Gopi-GK02
Author

Hi @MartyG-RealSense, the issues are not resolved with a USB 3.0 cable.

image
image

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 30, 2023

Do the black holes and gaps reduce if you change the Depth Units option from its default value of 0.001 to the lower value of 0.0001?

The 'Depth Units' setting can be found under the Controls sub-section of the Stereo Module options. You can input 0.0001 manually by left-clicking on the pencil icon beside the option.

image
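A minimal sketch of the same Depth Units change from Python, assuming a D400 depth sensor that exposes the depth_units option:

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

depth_sensor = profile.get_device().first_depth_sensor()
depth_sensor.set_option(rs.option.depth_units, 0.0001)  # default is 0.001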

If changing the Depth Units does not improve the image, please next try using the instructions at the link below to reset the Viewer to its defaults. This is a different procedure from resetting the camera hardware.

https://support.intelrealsense.com/hc/en-us/community/posts/360052013914/comments/1500000416522

@Gopi-GK02
Author

Changing the depth units did not improve the image, I have reset the viewer.

Image after resetting the viewer
image
image

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 1, 2023

The projected dot pattern on the infrared image is strong and bright, so it is not clear why the camera is having difficulty with depth-analyzing the surface of the object.

I note that the Asic Temperature reading is 43 degrees C. This temperature represents the internal operating temperature of the camera. Officially the maximum recommended operating temperature is 35 degrees C, though the camera may be able to go up to 42 degrees in real-world conditions before glitches occur above that level. Is there heat in the environment that the camera is in, such as hot weather, that could be causing this high temperature?

How many of the holes in the depth image are filled in if you increase the Laser Power option to its maximum of '360'?

Using the Medium Density Visual Preset instead of the default Custom may also help to fill in some of the holes.

The Post-Processing section of the options side-panel also has a Spatial filter that is enabled by default that has a 'holes filling mode' as a sub-option. Set its drop-down menu to 'Unlimited' if it is not set to that already in order to maximize the filter's hole-filling effect.

image
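A rough pyrealsense2 sketch of the same three changes from code (the value 5 for the spatial filter's holes_fill is an assumption based on its 0-5 range, where 5 corresponds to Unlimited):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

depth_sensor.set_option(rs.option.laser_power, 360)   # maximum laser power
depth_sensor.set_option(rs.option.visual_preset,
                        int(rs.rs400_visual_preset.medium_density))

spatial = rs.spatial_filter()
spatial.set_option(rs.option.holes_fill, 5)           # 5 = Unlimited

frames = pipeline.wait_for_frames()
filtered_depth = spatial.process(frames.get_depth_frame())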

@Gopi-GK02
Author

Gopi-GK02 commented Jul 3, 2023

The temperature of the environment is around 27 °C. I have noticed that the ASIC temperature goes above 40 °C after 5 minutes.

The holes increase when I increase the Laser Power.
image

Result after setting the preset to medium density and setting the hole filling mode as unlimited
image
image

I have another issue.

I'm trying to store frames in a bag file and read them back. The code below reads the recorded bag:

import numpy as np
import cv2
import pyrealsense2 as rs

try:
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_device_from_file("C:/Users/gk/PycharmProjects/ICP_Registration/color.bag",False)
    config.enable_stream(rs.stream.color, rs.format.bgr8, 30)
    profile=pipeline.start(config)
    playback = profile.get_device().as_playback()
    playback.set_real_time(False)
    colorizer = rs.colorizer()
    count=0
    while True:
        success, frames = pipeline.try_wait_for_frames(1000)
        if not success:
            # playback has delivered all frames in the bag
            break
        frame_number = frames.get_frame_number()
        print("frame number : ",frame_number)
        count += 1
        color_frame = frames.get_color_frame()
        depth_color_frame = colorizer.colorize(color_frame)
        color_image = np.asanyarray(depth_color_frame.get_data())
        cv2.imshow("Color Stream", color_image)
        key = cv2.waitKey(1)
        if key == 27:
            cv2.destroyAllWindows()
            break
finally:
    print(count)
    print(playback.get_duration().total_seconds())
    pass

image

In the above image, the last frame number retrieved by frames.get_frame_number() is 2087, but the final value of the count variable is 1800.

I stored the frames at 30 fps (640x480) for 60 seconds, so the number of frames should be at most around 1800 (30 fps × 60 s).

The code below is used to store the frames.

import time
from time import sleep

import pyrealsense2 as rs
import open3d as o3d
import cv2
import numpy as np

pipeline = rs.pipeline()
config = rs.config()
pipeline_wrapper =rs.pipeline_wrapper(pipeline)
pipeline_profile =config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_record_to_file("color.bag")
profile = pipeline.start(config)
device = profile.get_device()
recorder = device.as_recorder()
rs.recorder.pause(recorder)
sleep(10)
rs.recorder.resume(recorder)
t_end = time.time() + 60 * 1
while time.time() < t_end:
    frames = pipeline.wait_for_frames()
    color_frame = frames.get_color_frame()
    color_image = np.asanyarray(color_frame.get_data())
    cv2.imshow('RealSense', color_image)

    cv2.waitKey(1)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

pipeline.stop()

@MartyG-RealSense
Collaborator

Setting the preset to medium density and setting the hole filling mode as unlimited does seem to provide some improvement compared to previous images.

When exporting frames to a bag file, the number of frames in the bag will rarely match the number of live frames during the recording, as some frames are usually dropped while writing to the file.

Working out the number of expected frames by multiplying FPS by the number of seconds is a commonly used technique but is typically not an accurate method.
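If it is useful, a sketch of counting the frames that are actually stored in the bag, rather than estimating from FPS x seconds, by playing it back in non-real-time mode until no more frames are returned (the bag filename is a placeholder):

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_device_from_file("color.bag", repeat_playback=False)
profile = pipeline.start(config)

playback = profile.get_device().as_playback()
playback.set_real_time(False)   # deliver every stored frame, as fast as it can be read

count = 0
while True:
    success, frames = pipeline.try_wait_for_frames(1000)
    if not success:
        break                   # end of the bag reached
    count += 1

print("frames in bag:", count)
pipeline.stop()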

@Gopi-GK02
Author

Gopi-GK02 commented Jul 3, 2023

It is improved, but it is still not good.
image

Regarding the frame rate,
the result provided by frames.get_frame_number() is 2087. I'm not sure whether it is correct, since I am capturing frames at 30 fps for 60 s.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 3, 2023

The areas of the image surrounding the object that are large black gaps look as though they are dark grey or black colored in the real-world scene and also highly reflective (it looks like the floor). Both factors will make such surfaces difficult for the camera to read. If you compare to the image at #11912 (comment) where a different surface is in the image, there are much fewer large black gaps. So moving the object to a location with a lighter, less reflective floor surface if possible might enhance the image.

In regard to finding the "correct" frame number versus the number of frames in the bag, it is best not to think too much about it and just aim to get a number of bag frames that is as close as possible to the number of frames that were in the live recording session. It will likely not be possible to avoid losing some frames from the bag.

@MartyG-RealSense
Collaborator

Hi @Gopi-GK02 Do you require further assistance with this case, please? Thanks!

@Gopi-GK02
Author

Hi @MartyG-RealSense, is there any function to get the camera matrix and distortion coefficients of the D435 camera?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 10, 2023

You can access intrinsics and extrinsics in Python with the SDK instructions get_intrinsics and get_extrinsics_to. See #3986 for an example of get_intrinsics and #10180 (comment) for get_extrinsics_to.

On the D435 camera model the five distortion coefficient values will all be zero. The reason for this is provided by a RealSense team member at #1430 (comment)

There are newer RealSense models such as the D455 that are exceptions to the all-zero rule, as they have non-zero coefficient values for the RGB color stream.
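A minimal Python sketch of reading the camera matrix and the distortion model/coefficients from an active stream profile (depth is shown here; use rs.stream.color for the RGB stream):

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

intr = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_intrinsics()
K = np.array([[intr.fx, 0, intr.ppx],
              [0, intr.fy, intr.ppy],
              [0, 0, 1]])
print(K)
print(intr.model, intr.coeffs)   # distortion model and its five coefficients

pipeline.stop()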

@Gopi-GK02
Author

Gopi-GK02 commented Jul 10, 2023

Hi @MartyG-RealSense, I'm trying to understand the depth frame and trying to find the x,y,z values of a point.

The two monochrome Infrared streams are used to form the depth stream, therefore each pixel in the depth stream represents the distance of that point from the camera base in the real world.

Is my understanding correct?

depth_pixel_x = [150, 100]
depth_value_x = (depth_frame.get_distance(150,100))*depth_scale
depth_point_x = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel_x, depth_value_x * depth_scale)
print(depth_point_x)

Is this code correct ?

#1413 (comment)
In the above comment, I don't understand how the raw depth is converted into meters.

I'm trying to find the x, y, z coordinates of a particular point from the depth frame.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 10, 2023

The left and right infrared frames that are used to construct the depth frames are raw frames inside the camera hardware and are not the Infrared and Infrared 2 streams. This is why depth can be streamed even when the infrared streams are inactive, as these IR streams are not involved in depth frame generation.

In regards to the code, #10037 (comment) may be a helpful reference for obtaining 3D xyz with rs2_deproject_pixel_to_point.

Regarding #1413 (comment) - it is discussing how the real-world Z-distance in meters can be obtained by multiplying the raw pixel depth value (uint16_t) by the depth unit scale to obtain the real-world distance. For example, if the raw pixel depth value for a particular coordinate is 6500 and the depth scale is 0.001 then 6500 x 0.001 = 6.5 (meters) distance for that particular coordinate.
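As a sketch of both routes to the metric Z value for a pixel such as (150, 100), followed by deprojection to a 3D point - note that get_distance() already returns meters, so it should not be multiplied by the depth scale again:

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
intrin = depth_frame.profile.as_video_stream_profile().get_intrinsics()

x, y = 150, 100
raw = np.asanyarray(depth_frame.get_data())[y, x]   # raw uint16 depth value
z_from_raw = raw * depth_scale                      # meters
z_from_api = depth_frame.get_distance(x, y)         # already meters

point = rs.rs2_deproject_pixel_to_point(intrin, [x, y], z_from_api)
print(z_from_raw, z_from_api, point)                # point is [X, Y, Z] in meters

pipeline.stop()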

@MartyG-RealSense
Collaborator

Hi @Gopi-GK02 Do you require further assistance with this case, please? Thanks!

@Gopi-GK02
Author

Okay, Thank you @MartyG-RealSense

@Gopi-GK02
Author

Hi @MartyG-RealSense,

I'm trying to work with librealsense in C++.


#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
         // Include short list of convenience functions for rendering
//#include <opencv2/opencv.hpp>

#include <iostream>
#include <ctime>   // for clock() and CLOCKS_PER_SEC

using namespace std;
//using namespace cv;

// Helper functions

int main() 
{
    rs2::frame color_frame;
    rs2::frame depth_frame;
    rs2::pointcloud pc;
        // We want the points object to be persistent so we can display the last cloud when a frame drops
    rs2::points points;

    clock_t initialTime = clock(); 
    while (true)
    {

        
        cout<<"Hello";

    
        rs2::colorizer color_map;

        rs2::pipeline pipe;
        rs2::config pipe_config;

        //adjust resolution for pipelines
        pipe_config.enable_stream(rs2_stream::RS2_STREAM_DEPTH, 640, 480, rs2_format::RS2_FORMAT_Z16, 30);
        pipe_config.enable_stream(rs2_stream::RS2_STREAM_COLOR, 640, 480, rs2_format::RS2_FORMAT_BGR8, 30);
        rs2::pipeline_profile profile = pipe.start(pipe_config);

        //color stream align to depth stream
        
        //initial all stream
        rs2::frameset camera_framest = pipe.wait_for_frames();
        color_frame = camera_framest.get_color_frame();
        depth_frame= camera_framest.get_depth_frame().apply_filter(color_map);

        cout<<"Runnung";

        
        if ((clock() - initialTime) / CLOCKS_PER_SEC >= 10)
        { // time in seconds 
            break; 
        }

    }
    cout<<"Getting the pcd";
    points = pc.calculate(depth_frame);
    cout<<"Got it !";



    return 0;
}

The code shows the below error,
image

If I remove the pointcloud calculation, points = pc.calculate(depth_frame), the code runs without error.

@MartyG-RealSense
Collaborator

Hi @Gopi-GK02 The pc.calculate() instruction is usually preceded by a map_to() instruction to map depth and color together. That is the proper way to align depth and color when using pc.calculate().

pc.map_to(color_frame);
points = pc.calculate(depth_frame);

So your depth_frame line likely does not require the apply_filter command.

depth_frame= camera_framest.get_depth_frame();

@Gopi-GK02
Author

The error is resolved, but the pointcloud is not captured.

image
The above image is the visualization of the saved ply file using Open3D.

@MartyG-RealSense
Collaborator

How do you save your pointcloud data to a ply file, please? Your script does not have any export_to_ply() code in it.

@Gopi-GK02
Author

Below is the code


#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
         // Include short list of convenience functions for rendering
//#include <opencv2/opencv.hpp>

#include <iostream>
#include <ctime>   // for clock() and CLOCKS_PER_SEC

using namespace std;
//using namespace cv;

// Helper functions

int main() 
{
    rs2::frame color_frame;
    rs2::frame depth_frame;
    rs2::pointcloud pc;
        // We want the points object to be persistent so we can display the last cloud when a frame drops
    rs2::points points;

    clock_t initialTime = clock();  
    while (true)
    {

        
        cout<<"Hello";

    
        rs2::colorizer color_map;

        rs2::pipeline pipe;
        rs2::config pipe_config;

        //adjust resolution for pipelines
        pipe_config.enable_stream(rs2_stream::RS2_STREAM_DEPTH, 640, 480, rs2_format::RS2_FORMAT_Z16, 30);
        pipe_config.enable_stream(rs2_stream::RS2_STREAM_COLOR, 640, 480, rs2_format::RS2_FORMAT_BGR8, 30);
        rs2::pipeline_profile profile = pipe.start(pipe_config);

        //color stream align to depth stream
        
        //initial all stream
        rs2::frameset camera_framest = pipe.wait_for_frames();
        color_frame = camera_framest.get_color_frame();
        depth_frame= camera_framest.get_depth_frame();

        cout<<"Runnung";

        
        if ((clock() - initialTime) / CLOCKS_PER_SEC >= 10)
        { // time in seconds 
            break; 
        }

    }
    cout<<"Getting the pcd";
    pc.map_to(color_frame);
    points = pc.calculate(depth_frame);
    points.export_to_ply("bin1.ply",color_frame);
    cout<<"Got it !";
   
    


    return 0;
}

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 18, 2023

I can't see anything obviously wrong with your code.

As the top of this discussion says that you are using Windows, you could try loading the ply file into Windows' 3D Viewer tool to see if the file has any content. You should be able to find the tool by typing 3d viewer into the text box at the bottom of your Windows screen.

image

Then when 3D Viewer launches, drag and drop the ply file into the main panel of the tool to display its contents.

@Gopi-GK02
Author

For librealsense, I am using Ubuntu.

I tried to load the model in 3D Viewer on Windows, and it shows that it couldn't load the 3D model.

import open3d as o3d
import numpy as np

pcd = o3d.io.read_point_cloud("bin1.ply")

print("File opened")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
print(pcd)
o3d.visualization.draw_geometries([pcd],
                                  )

print('run Poisson surface reconstruction')
with o3d.utility.VerbosityContextManager(
        o3d.utility.VerbosityLevel.Debug) as cm:
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)
print(mesh)

o3d.visualization.draw_geometries([mesh],
                                  )
 
print(np.asarray(pcd.points))

Above is the Python code that I use to load and display the pointcloud.
image

@MartyG-RealSense
Collaborator

As you have the ability to use Python (your earlier pointcloud script at #11912 (comment) was C++), you could try an Open3D Python pointcloud script posted last week at #12090 to see whether it exports a ply correctly if you add export_to_ply to the end of it.

#3579 has an example of an export_to_ply program for Python. However, there is a well-known issue with pyrealsense2 where it will not export color to a ply. The only example of color ply export in Python that has previously worked is at #6194 (comment)

@Gopi-GK02
Author

Gopi-GK02 commented Aug 18, 2023

I have been using the Python version to generate a pointcloud along with color.
Below is my code.

import open3d as o3d
import cv2
import pyrealsense2 as rs
import numpy as np


pipeline = rs.pipeline()
config = rs.config()

# Get device product line for setting a supporting resolution
pipeline_wrapper = rs.pipeline_wrapper(pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))

found_rgb = False
for s in device.sensors:
    if s.get_info(rs.camera_info.name) == 'RGB Camera':
        found_rgb = True
        break
if not found_rgb:
    print("The demo requires Depth camera with Color sensor")
    exit(0)

config.enable_stream(rs.stream.depth, 424, 240, rs.format.z16, 60)

if device_product_line == 'L500':
    config.enable_stream(rs.stream.color, 960, 540, rs.format.bgr8, 30)
else:
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)
color_sensor = profile.get_device().query_sensors()[1]
color_sensor.set_option(rs.option.sharpness, 100)
# Start streaming


try:
    while True:

        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()




        hole_filling = rs.hole_filling_filter(1)
        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())


        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

        depth_colormap_dim = depth_colormap.shape
        color_colormap_dim = color_image.shape



        # If depth and color resolutions are different, resize color image to match depth image for display
        if depth_colormap_dim != color_colormap_dim:
            resized_color_image = cv2.resize(color_image, dsize=(depth_colormap_dim[1], depth_colormap_dim[0]), interpolation=cv2.INTER_AREA)
            images = np.hstack((resized_color_image, depth_colormap))
        else:
            images = np.hstack((color_image, depth_colormap))


        # Show images
        cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('RealSense', images)
        cv2.waitKey(1)
        if cv2.waitKey(10) & 0xFF == ord('q'):
            break
    colorizer = rs.colorizer()
    colorized = colorizer.process(frames)

    spatial = rs.spatial_filter()

    spatial.set_option(rs.option.filter_magnitude, 5)
    spatial.set_option(rs.option.filter_smooth_alpha, 1)
    spatial.set_option(rs.option.filter_smooth_delta, 50)
    filtered_depth = spatial.process(depth_frame)
    colorized_depth = np.asanyarray(colorizer.colorize(filtered_depth).get_data())
    depth_frame=filtered_depth

    print("Done")

    pc = rs.pointcloud();
    pc.map_to(color_frame);
    print(type(depth_frame))
    pointcloud = pc.calculate(depth_frame);
    vtx = np.asanyarray(pointcloud.get_vertices())
    np.save("bin1_numpy",vtx)


    pointcloud.export_to_ply("bin1.ply", color_frame);


finally:
    pipeline.stop()
pcd = o3d.io.read_point_cloud("bin1.ply")

print("File opened")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
print(pcd)
o3d.visualization.draw_geometries([pcd],
                                  )

print('run Poisson surface reconstruction')
with o3d.utility.VerbosityContextManager(
        o3d.utility.VerbosityLevel.Debug) as cm:
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)
print(mesh)

o3d.visualization.draw_geometries([mesh],
                                  )

print(np.asarray(pcd.points))


The above Python code works; it generates a pointcloud with color and normals.

@MartyG-RealSense
Collaborator

Thanks so much for sharing your Python code above.

So the pointcloud is successfully generated but it is not exporting successfully to a ply file?

If the exported ply is empty then an alternative to using export_to_ply is to use save_to_ply instead, which offers more options for configuring the export like the options in the RealSense Viewer's ply export interface.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/export_ply_example.py
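A condensed sketch along the lines of that example (stream settings here are defaults; adjust resolutions and the output filename as needed):

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)
pipeline.start(config)

colorizer = rs.colorizer()
frames = pipeline.wait_for_frames()
colorized = colorizer.process(frames)

ply = rs.save_to_ply("out.ply")
ply.set_option(rs.save_to_ply.option_ply_binary, False)   # write a text ply
ply.set_option(rs.save_to_ply.option_ply_normals, True)   # include normals
ply.process(colorized)

pipeline.stop()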

@Gopi-GK02
Author

Gopi-GK02 commented Aug 19, 2023

The pointcloud is generated and exported to a ply file in Python successfully.
But it's not working in C++.
I have given the C++ code in previous comments.
Do you have any suggestions?
I don't know why points are not generated in C++.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 19, 2023

You could try running librealsense's C++ rs-pointcloud example program to see if it works, as it also makes use of map_to and pc.calculate.

https://github.com/IntelRealSense/librealsense/tree/master/examples/pointcloud

If your librealsense installation has the example programs installed then you should be able to run a pre-made executable version of rs-pointcloud without having to build it yourself. Examples on Linux are located in the /usr/local/bin folder, whilst on Windows they can be found at C: > Program Files (x86) > Intel RealSense SDK 2.0 > tools if you have installed the full RealSense SDK with the Intel.RealSense.SDK-WIN10 installer program.

@Gopi-GK02
Author

I have built librealsense with the examples and the rs-pointcloud example works.
I guess there is some issue in storing it?

@MartyG-RealSense
Collaborator

Is there any difference if you change the setup of your color_frames and depth_frames lines so that they are the same as those of rs-pointcloud:

auto frames = pipe.wait_for_frames();
auto color_frames = frames.get_color_frame();
auto depth_frames = frames.get_depth_frame();

@Gopi-GK02
Author

Yes, it works if I change the type to auto.
But the point cloud is not colored and there are very few points.
image

@MartyG-RealSense
Collaborator

In your C++ script at #11912 (comment) it looks as though you could use RGB8 as the color format instead of BGR8 as the script is not using OpenCV commands.

pipe_config.enable_stream(rs2_stream::RS2_STREAM_COLOR, 640, 480, rs2_format::RS2_FORMAT_RGB8, 30);

Have you also tried commenting out your clock code to make sure that it is not affecting the script negatively with its break command?

if ((clock() - initialTime) / CLOCKS_PER_SEC >= 10)
{ // time in seconds 
break; 
}

@Gopi-GK02
Author

Gopi-GK02 commented Nov 21, 2023

Hi @MartyG-RealSense,

We are trying to improve the depth image by applying the post-processing filters.
As per the Intel RealSense docs, it is recommended to apply the alignment after applying the filters.

We are not sure how to do it.

I have attached the code below

        frames = pipeline.wait_for_frames()
        aligned_frames = align.process(frames)

        frames_array = []
        for x in range(10):
            frames = pipeline.wait_for_frames()
            aligned_frames = align.process(frames)
            frames_array.append(aligned_frames.get_depth_frame())

        temporal = rs.temporal_filter()
        temporal.set_option(rs.option.holes_fill, 3)


        profile = aligned_frames.get_profile()
        intr = profile.as_video_stream_profile().get_intrinsics()
        matrix_coefficients = np.array([[intr.fx, 0, intr.ppx],
                                        [0, intr.fy, intr.ppy],
                                        [0, 0, 1]])

        color_frame = aligned_frames.get_color_frame()

        hole_filling = rs.hole_filling_filter(2)

        spatial = rs.spatial_filter()

        spatial.set_option(rs.option.filter_magnitude, 5)
        spatial.set_option(rs.option.filter_smooth_alpha, 1)
        spatial.set_option(rs.option.filter_smooth_delta, 50)


 
        depth_to_disparity = rs.disparity_transform(True)
        disparity_to_depth = rs.disparity_transform(False)

        for x in range(10):
            frame = frames_array[x]
            frame = depth_to_disparity.process(frame)
            frame = spatial.process(frame)
            frame = temporal.process(frame)
            frame = disparity_to_depth.process(frame)
            frame = hole_filling.process(frame)

        depth_frame = frame

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

We are also observing some spikes in the pointcloud
image

@MartyG-RealSense
Collaborator

I hope that the script at #11246 will be a helpful Python reference for performing align after post-processing.

@Gopi-GK02
Author

Gopi-GK02 commented Nov 21, 2023

The script applies post processing filters after the aligning.

@MartyG-RealSense
Collaborator

Yes, it does apply align after the post processing filters.

@Gopi-GK02
Author

Sorry @MartyG-RealSense,
I didn't analyze the code properly.
In the script, the post-processing is applied after the aligning:

frameset = align.process(frameset)
depth_frame = frameset.get_depth_frame()
depth_frame = filter_depth_data(depth_frame)

@MartyG-RealSense
Collaborator

The filter and alignment instructions are applied with .process instructions. Until that instruction is used, the filter or alignment is not activated.

In the script from the top of #11246 the list of post-processing filters are applied first with .process instructions and then the alignment is applied with .process afterwards, further down the script.

image

@Gopi-GK02
Author

I still don't understand, @MartyG-RealSense.
I can see that the post-processing filters are in a function filter_depth_data, and it is called after align.process().

I don't understand where and how the .process method applies the filters first and then aligns the frames.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 21, 2023

When a script is run, the computer starts reading the script at the first line and proceeds downwards through the script until it reaches the bottom. So if the .process instructions for the filters are listed first in the script and the .process instruction for align is listed after the filters, further down the script, then alignment will be activated after all of the filters have been activated. This is because the line that activates alignment comes after the filter lines. The computer has to go through each line of the script in the order that it is written.

An exception to this rule is if there is an instruction in the script to jump to another section of the script, but this particular script does not have that.

@Gopi-GK02
Author

Gopi-GK02 commented Nov 29, 2023

@MartyG-RealSense
Collaborator

Hi @Gopi-GK02 Any polarizing filter should work so long as it is linear, except for the circularly polarized ones used in 3D glasses. This means that thin-film polarizing filters can be purchased inexpensively by searching stores such as Amazon for the term linear polarization filter sheet

@Gopi-GK02
Author

Okay,
Thank you @MartyG-RealSense
