Is the depth solution of D435 based on binocular parallax principle or structured light scheme? #11811

Closed
pgswag opened this issue May 17, 2023 · 8 comments


pgswag commented May 17, 2023

| Required Info | |
|---|---|
| Camera Model | { R200 / F200 / SR300 / ZR300 / D400 } |
| Firmware Version | (Open RealSense Viewer --> Click info) |
| Operating System & Version | { Win (8.1/10) / Linux (Ubuntu 14/16/17) / MacOS } |
| Kernel Version (Linux Only) | (e.g. 4.14.13) |
| Platform | PC / Raspberry Pi / NVIDIA Jetson / etc. |
| SDK Version | { legacy / 2.. } |
| Language | { C / C# / labview / nodejs / opencv / pcl / python / unity } |
| Segment | { Robot / Smartphone / VR / AR / others } |

Issue Description

Is the depth solution of the D435 based on the binocular parallax principle or a structured light scheme?


MartyG-RealSense commented May 17, 2023

Hi @pgswag The RealSense 400 Series camera range, including D435, uses stereo depth constructed from left and right sensors. The depth pixel value is a measurement from the parallel plane of the imagers and not the absolute range.

[Image: diagram of stereo depth sensing]

The left and right imagers capture the scene and send imager data to the depth imaging (vision) processor, which calculates a depth value for each pixel by correlating points on the left image with the right image and measuring the shift (disparity) between a point's position in the left and right images. The depth pixel values are processed to generate a depth frame, and successive depth frames make up a depth video stream.
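
As a rough illustration of the disparity-to-depth relationship described above (a sketch only, not code from the SDK): depth is inversely proportional to the pixel shift between the two images. The baseline and focal length below are illustrative assumptions, not values read from a specific D435.

```python
# Minimal sketch of stereo disparity-to-depth, assuming a simple pinhole model.
# baseline_m and focal_px are assumed example values, not calibrated ones.
baseline_m = 0.050      # assumed separation between left and right imagers, in metres
focal_px = 640.0        # assumed focal length expressed in pixels
disparity_px = 16.0     # example pixel shift of one point between left and right images

# Z = f * B / d : a larger shift (disparity) means the point is closer.
depth_m = focal_px * baseline_m / disparity_px
print(f"disparity {disparity_px:.0f} px -> depth {depth_m:.2f} m")  # -> 2.00 m
```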

The RealSense SR300 and SR305 models use coded light technology, which is similar to structured light.

@pgswag
Copy link
Author

pgswag commented May 18, 2023

> “The depth pixel value is a measurement from the parallel plane of the imagers and not the absolute range.”

So does that mean the depth is "a", as shown in the figure below, rather than "b"?
[Image: IMG_8055 – sketch comparing distances "a" and "b"]


pgswag commented May 18, 2023

When I hold the D435 parallel to a wall (at an angle of zero), is the depth of all points in the D435's field of view theoretically the same? For example, does the point at the center of the field of view have the same depth as the points at the four corners of the field of view?
[Image: IMG_8056 – sketch of the camera held parallel to a wall]

In other words, the depth is the projection onto the z-axis, not the distance from the origin, right?

@MartyG-RealSense

A diagram at #7279 (comment) provided by a RealSense team member further illustrates the relationship between range and Z-depth.

They add "The content of the depth frame is "Z" values calculated for every pixel in the camera's frustum (or cropped FOV). In the sketch it is clear that while range (or radial distance) may coincide with depth (or "Z"), in 99.99% they will be different".


pgswag commented May 31, 2023

Thank you for your answer!

pgswag closed this as completed May 31, 2023

5204338 commented Apr 1, 2024

> Hi @pgswag The RealSense 400 Series camera range, including D435, uses stereo depth constructed from left and right sensors. The depth pixel value is a measurement from the parallel plane of the imagers and not the absolute range.
>
> The left and right imagers capture the scene and send imager data to the depth imaging (vision) processor, which calculates a depth value for each pixel by correlating points on the left image with the right image and measuring the shift (disparity) between a point's position in the left and right images. The depth pixel values are processed to generate a depth frame, and successive depth frames make up a depth video stream.
>
> The RealSense SR300 and SR305 models use coded light technology, which is similar to structured light.

Based on your response, the depth measurement principle of the RealSense D435 camera is based on binocular (stereo) vision, while the RealSense D300 series cameras are based on structured light measurement. Is that correct?

@MartyG-RealSense

Hi @5204338 The RealSense 400 Series cameras use stereo depth (constructed from left and right sensors), so these cameras are like binoculars. The SR300 camera is Coded Light, which is similar to structured light.


5204338 commented Apr 1, 2024

OK, I understand now. Thank you very much.
