
questions about infrared image #5382

Closed
tonyjonyjack opened this issue Dec 5, 2019 · 12 comments
@tonyjonyjack

Required Info:

- Camera Model: D435
- Firmware Version: (Open RealSense Viewer --> Click info)
- Operating System & Version: Win 10
- Platform: PC
- SDK Version: legacy / 2.0
- Language: C++ / OpenCV

Issue Description

I have three questions:

1. I want to turn off the IR projector, because the dot pattern influences my image processing.

[image]
[image]

2. Stereo image sensing technologies use two infrared cameras to calculate depth, so can I get depth information in a dark environment?

3. How do I transform pixel coordinates in the infrared image to camera coordinates (x, y, z)?

Thanks in advance

@MartyG-RealSense
Collaborator

1. The IR Projector that projects the semi-random dot pattern can be turned off in the RealSense Viewer program by disabling the 'Emitter Enabled' option.

[image]

If you wish to turn the emitter off with scripting, the link below (under the 'Librealsense2' heading) has example code for disabling the IR Projector.

https://github.com/IntelRealSense/librealsense/wiki/API-How-To#controlling-the-laser

If the location that the camera is used in is well lit then the camera can use the location's ambient light to analyze objects for their depth information instead of using the dot pattern to do so.
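As a rough C++ sketch of the scripted route (assuming librealsense2 is installed and a depth camera is connected, so this cannot run without the hardware), the emitter can be turned off through the depth sensor's options:

```cpp
#include <librealsense2/rs.hpp>

int main() {
    // Start a default pipeline and get the active device.
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();
    rs2::device dev = profile.get_device();

    // The IR projector is controlled through the depth sensor's options.
    auto depth_sensor = dev.first<rs2::depth_sensor>();
    if (depth_sensor.supports(RS2_OPTION_EMITTER_ENABLED))
        depth_sensor.set_option(RS2_OPTION_EMITTER_ENABLED, 0.f); // 0 = off

    return 0;
}
```

Setting the option back to 1.f re-enables the projector.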

2. Yes, the camera can operate in a dark environment. The link below discusses the camera's ability to operate in low 'lux' illumination levels.

https://support.intelrealsense.com/hc/en-us/community/posts/360037389393/comments/360009537113

3. 2D pixel coordinates can be transformed to 3D 'world' coordinates using an instruction called rs2_deproject_pixel_to_point.

This instruction is discussed in the link below.

#1413

@tonyjonyjack
Author

> 3. 2D pixel coordinates can be transformed to 3D 'world' coordinates using an instruction called rs2_deproject_pixel_to_point.

I have used rs2_deproject_pixel_to_point to transform pixel coordinates in the RGB image to camera coordinates, but this time the pixel coordinates are in the infrared image, and I don't know how to do that.

@MartyG-RealSense
Collaborator

Apologies for the delay in responding further, as I was thinking carefully about your follow-up question about how to deproject IR pixels.

I am only familiar with deprojecting 2D color pixels. I could not find an answer in my research for IR stream pixels that I feel confident enough in to suggest. One of the Intel staff on this forum should be able to answer this question better than I can. I do apologize.

@ev-mp
Collaborator

ev-mp commented Dec 11, 2019

@tonyjonyjack hello, regarding the last point -

> 3. How do I transform pixel coordinates in the infrared image to camera coordinates (x, y, z)?

The left IR image and the depth frame are inherently aligned, so for a given [i,j] index in the left IR frame you can find its depth by deprojecting the value of the [i,j] pixel in the depth frame.
For the right IR, the transformation is similar in nature to RGB->Depth alignment and requires you either to run the transformations manually or to use the align processing block. Check the rs-align demo for details.
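A hedged C++ sketch of that left-IR path (assuming librealsense2 and a connected camera, so it cannot run standalone; it relies on the depth frame sharing the left IR imager's intrinsics):

```cpp
#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h> // rs2_deproject_pixel_to_point

int main() {
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH);
    cfg.enable_stream(RS2_STREAM_INFRARED, 1); // index 1 = left imager
    pipe.start(cfg);

    rs2::frameset fs = pipe.wait_for_frames();
    rs2::depth_frame depth = fs.get_depth_frame();

    // A pixel picked in the LEFT IR image can be looked up directly in
    // the depth frame, because the two streams are pixel-aligned.
    int i = 320, j = 240;                  // example pixel
    float dist = depth.get_distance(i, j); // depth in meters

    // Deproject with the depth stream's intrinsics (same as left IR).
    rs2_intrinsics intr = depth.get_profile()
                               .as<rs2::video_stream_profile>()
                               .get_intrinsics();
    float pixel[2] = { (float)i, (float)j };
    float point[3];
    rs2_deproject_pixel_to_point(point, &intr, pixel, dist);
    // point[0..2] now hold x, y, z in the camera coordinate frame.
    return 0;
}
```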

@tonyjonyjack
Author

> The left IR image and the Depth frame are inherently aligned, so for a given [i,j] index in left IR frame you can find its depth by deprojecting the value of [i,j] pixel in the depth frame.

Thanks, but the depth map is based on the binocular (stereo) imaging principle; it is a virtual depth camera model, and I don't know how to get the intrinsics of this depth 'camera' (fx, fy, ppx, etc.).

@ev-mp
Collaborator

ev-mp commented Dec 17, 2019

@tonyjonyjack ,

> I don't know how to get the intrinsics of the depth 'camera' (fx, fy, ppx, etc.)

The intrinsic parameters are attached to the stream profiles; see the documentation.

You can run rs-enumerate-devices -c to inspect all the available per-stream intrinsic parameters.
The relevant code is https://github.com/IntelRealSense/librealsense/blob/master/tools/enumerate-devices/rs-enumerate-devices.cpp#L105
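As a sketch of the same lookup in code (again assuming librealsense2 and a connected camera), the intrinsics that rs-enumerate-devices prints can be read from a stream profile:

```cpp
#include <librealsense2/rs.hpp>
#include <cstdio>

int main() {
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();

    // Fetch the depth stream's profile and its intrinsics.
    auto stream = profile.get_stream(RS2_STREAM_DEPTH)
                         .as<rs2::video_stream_profile>();
    rs2_intrinsics intr = stream.get_intrinsics();

    // fx, fy, ppx, ppy plus the distortion model and coefficients.
    std::printf("fx=%f fy=%f ppx=%f ppy=%f model=%d\n",
                intr.fx, intr.fy, intr.ppx, intr.ppy, (int)intr.model);
    return 0;
}
```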

@tonyjonyjack
Author

> You can run rs-enumerate-devices -c to inspect all the available per-stream intrinsic parameters.

Maybe I didn't express it clearly. RGB camera intrinsics can be obtained through camera calibration. Some depth cameras are based on ToF (time of flight), and I obtained their depth camera intrinsics through calibration as well. But RealSense depth maps are based on the binocular imaging principle; it is a virtual depth camera model, and I'm curious how you get the camera parameters for it.

@ev-mp
Collaborator

ev-mp commented Jan 6, 2020

@tonyjonyjack hello, sorry for the late response -
The calibration process for the IR sensors is similar to that for the RGB camera; the only notable difference is that the output is the standard camera matrix (intrinsic + extrinsic) for each of the left and right imagers.
As the left and right IR frames are acquired, they are undistorted and rectified in hardware in order to calculate the depth map.

Since the depth map is calculated from the image of the left IR sensor, the intrinsics of the "virtual" depth channel are the same as those of its physical origin, the left IR imager.

@RealSenseCustomerSupport
Collaborator


Hi @tonyjonyjack,

Has the comment provided by @ev-mp helped to find an answer to your question?

Thank you!

@RealSenseCustomerSupport
Collaborator


@tonyjonyjack,

Do you still need help with this question?

@ohsm1125

I know that Intel RealSense outputs depth map information.

I want to create a depth map myself by analyzing an IR dot-pattern image
(not the depth map output by the Intel RealSense).

Do you know of any such algorithms?

If you have any open source or related information, please leave a comment.

@MartyG-RealSense
Collaborator

Hi @ohsm1125, is the structured-light reference for OpenCV at the link below helpful to you, please?

https://answers.opencv.org/question/145777/measuring-the-depth-using-a-laser-pattern-projector-and-a-single-camera/
