How to construct and visualize a pointcloud of an isolated part of the color image #10894
Hi @alessio8russo May I first ask whether the cable is black? Because of general physics principles (not RealSense specifically), color shades such as dark grey and black absorb light, making it difficult for depth cameras to analyze those surfaces for depth information. So a depth image containing a black cable may appear to show a cable but it may actually just be a cable-shaped black empty area with no depth values in it. If the cable is black, changing to a different colored cable would help it to be better detected. Alternatively, projecting a strong light source onto a black area can help to bring out depth information from it.
Hi, thanks for replying so quickly, but I don't think the problem is the color of the cable. My problem is how to code what I need to do: after applying the hole-filling filter (to resolve the problem of random holes with 0 depth in the cable), I have resolved every problem with the depth values.
I note that you are applying the hole-filling post-processing filter before the depth-color alignment is applied, which is the order that Intel recommends. So the alignment part of the script is likely okay. I also see that you seem to be taking account of the OpenCV default color space being BGR instead of RGB, so that you are not getting a reverse-color image after converting the RealSense image to OpenCV. The impression that I get from the image, though, is that the RGB image of the cable is not aligned with the depth image of the cable. Does the pointcloud look better if you test the Open3D pyrealsense2 pointcloud script at isl-org/Open3D#473 (comment)?
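As an aside on the BGR/RGB point above: for a 3-channel image, the RGB-to-BGR conversion that OpenCV expects is just a reversal of the channel axis. A minimal NumPy sketch (the tiny test image is made up for illustration):

```python
import numpy as np

# A 1x2 RGB image: one red pixel, one green pixel
rgb = np.array([[[255, 0, 0], [0, 255, 0]]], dtype=np.uint8)

# OpenCV expects BGR; reversing the channel axis is equivalent to
# cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR) for 3-channel images
bgr = rgb[..., ::-1]
print(bgr[0, 0])  # [  0   0 255]
```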
The C++ rs-grabcuts OpenCV example, which also creates a cut-out, takes the approach of creating a foreground and background mask and then extracting the foreground pixels. Although it is a C++ example, its OpenCV code may provide some useful insights for your own isolation project.
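The mask-then-extract idea used by rs-grabcuts can be illustrated without a camera. A minimal NumPy sketch, where the color image and binary mask are synthetic stand-ins (a real mask would come from GrabCut or the color threshold in the script above):

```python
import numpy as np

# Synthetic stand-ins for an aligned color image and a binary foreground mask
color = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1  # pretend these 4 pixels belong to the cable

# Zero out the background, as the cut-out images in rs-grabcuts do
foreground = color * mask[:, :, None]

# Or extract only the foreground pixels as an (N, 3) array
fg_pixels = color[mask.astype(bool)]
print(fg_pixels.shape)  # (4, 3)
```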
Ok, thank you for the reply, I will try that. So is the approach of using np.nonzero and rs2_deproject_pixel_to_point() not good? I have some doubts about how to use it: how do I visualize the points that I obtain from rs2_deproject_pixel_to_point()?
It's great to hear that you made significant progress. Thanks so much for sharing your code with the RealSense community! Open3D has an official tutorial for pointcloud outlier removal at the link below. http://www.open3d.org/docs/latest/tutorial/Advanced/pointcloud_outlier_removal.html In regard to reducing the number of points, you could try applying the High Accuracy camera configuration preset. This will be stricter with the depth values that it considers accurate and exclude from the image the depth values that it has less confidence in. An example of a pyrealsense2 script for setting the High Accuracy preset can be found at #2577 (comment) |
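A minimal sketch of setting the High Accuracy preset mentioned above, assuming a connected D400-series camera (this is a configuration fragment and will only run with hardware attached):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

# Ask the depth sensor to use the 'High Accuracy' visual preset;
# depth values the SDK is less confident about will be excluded
depth_sensor = profile.get_device().first_depth_sensor()
depth_sensor.set_option(rs.option.visual_preset,
                        int(rs.rs400_visual_preset.high_accuracy))
```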
Thank you so much. I will try both. Is there any down-sampling for the color image or a 1-channel image?
The 'High Accuracy' preset will only affect depth coordinates. It does not down-sample; it just decides whether a depth point is included in the image or not, based on how confident the SDK is in the accuracy of that point's depth value.
I read it. That is why I asked for a method that can be applied to the color points, so that I can down-sample the color image.
Down-sampling of RGB is supported by the Decimation post-processing filter for the RGB sensor. There is no code example for the RGB decimation filter, but I would speculate that the code would be similar to the decimation filter for depth, except for setting the sensor to be accessed to index number '1' (the RGB sensor), as described at #10002 (comment)
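Following the speculative suggestion above, a sketch of what applying the decimation filter to the color frame might look like (this is untested and assumes the filter accepts color frames, which is the speculative part; it also requires a connected camera):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

# Speculative: apply the decimation filter to the color frame
# instead of the depth frame
decimation = rs.decimation_filter()
decimation.set_option(rs.option.filter_magnitude, 2)  # halve resolution

frames = pipeline.wait_for_frames()
color_frame = frames.get_color_frame()
decimated_color = decimation.process(color_frame)
```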
Thank you, now I'm trying the outlier removal of Open3D to reduce the points.
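In Open3D the relevant call is `pcd.remove_statistical_outlier(nb_neighbors, std_ratio)`. The idea behind that filter can be shown in plain NumPy on made-up data: compute each point's mean distance to its k nearest neighbors, then drop points whose mean distance is far above the average:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 3)) * 0.01          # a tight cluster
points = np.vstack([points, [[1.0, 1.0, 1.0]]])    # plus one far outlier

k, std_ratio = 10, 2.0
# Pairwise distances (O(N^2); fine for a small demo)
d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
d.sort(axis=1)
mean_knn = d[:, 1:k + 1].mean(axis=1)  # skip the self-distance in column 0

# Keep points whose mean neighbor distance is within std_ratio sigmas
keep = mean_knn < mean_knn.mean() + std_ratio * mean_knn.std()
inliers = points[keep]
print(len(points) - len(inliers))  # 1 -> the outlier is removed
```

Open3D's `voxel_down_sample(voxel_size=...)` is also worth a look if the goal is purely to reduce the point count.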
Hi @alessio8russo Do you require further assistance with this case, please? Thanks! |
No, now I'm looking into other issues, related to how to interpolate the obtained points. Any suggestion?
I researched Open3D interpolation in regard to RealSense. The small number of references that were available all interpolate with approximately the same cv2.INTER_AREA instruction that you are already using in your own script. An example of such a reference is isl-org/Open3D#3452
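For an integer downscale factor, `cv2.resize` with `cv2.INTER_AREA` amounts to averaging each factor-by-factor block of pixels. A NumPy sketch of that equivalence on a made-up image (for uint8 images OpenCV additionally rounds the result):

```python
import numpy as np

img = np.arange(16, dtype=np.float64).reshape(4, 4)

f = 2  # downscale factor
# Block-mean downsampling: for integer factors this matches
# cv2.resize(img, None, fx=1/f, fy=1/f, interpolation=cv2.INTER_AREA)
small = img.reshape(4 // f, f, 4 // f, f).mean(axis=(1, 3))
print(small)
# [[ 2.5  4.5]
#  [10.5 12.5]]
```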
Issue Description
Hi, I'm very new to Python and pyrealsense2, so this is probably an easy question. I constructed a background removal + color mask in order to isolate only the cable, so all the pixels that do not belong to the cable are black. But obviously, when I construct the pointcloud this way using Open3D, the result also contains the black points. This is my code:
So I obtained a pointcloud like:
So, in order to resolve this problem, I thought to use np.nonzero on the mask to get the indices of the pixels that are different from zero, and then use these indices in rs2_deproject_pixel_to_point(). But I have difficulties defining the depth value for each pixel. This is my code to check that they have the same depth, which gives me the error "IndexError: index 2 is out of bounds for axis 1 with size 2":
And even if I were able to use rs2_deproject_pixel_to_point(), I didn't understand how to use these points to construct and visualize the pointcloud with Open3D. So is there an easier way to construct only that pointcloud?
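One likely cause of the IndexError described above: `np.nonzero` returns `(row_indices, col_indices)`, while `rs2_deproject_pixel_to_point()` expects a pixel as `[x, y]`, i.e. `[col, row]`, so the index order has to be swapped. A sketch of the whole deprojection step in plain NumPy, using the no-distortion pinhole model that `rs2_deproject_pixel_to_point()` implements (the intrinsics, depth scale, and mask below are made up for illustration; real values would come from `depth_intrinsics` and `depth_sensor.get_depth_scale()`):

```python
import numpy as np

# Made-up intrinsics standing in for depth_intrinsics
fx, fy, ppx, ppy = 600.0, 600.0, 320.0, 240.0
depth_scale = 0.001  # depth units -> meters (D400 default)

# Synthetic aligned depth image and binary mask of the cable
depth = np.full((480, 640), 500, dtype=np.uint16)  # 0.5 m everywhere
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:110, 200:210] = 255

# np.nonzero returns (rows, cols); a pixel for deprojection is [col, row]
rows, cols = np.nonzero(mask)
z = depth[rows, cols] * depth_scale

# Vectorized equivalent of calling rs2_deproject_pixel_to_point per pixel
x = (cols - ppx) / fx * z
y = (rows - ppy) / fy * z
points = np.column_stack((x, y, z))  # (N, 3)
print(points.shape)  # (100, 3)
```

To visualize the result, the `(N, 3)` array can be handed straight to Open3D: `pcd = o3d.geometry.PointCloud(); pcd.points = o3d.utility.Vector3dVector(points); o3d.visualization.draw_geometries([pcd])`.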