Using software-device alignment with rs-camera and non-realsense camera #10363
Hi @ilyak93 In answer to your questions:

I would recommend using the last approach in your case, the one that uses rs2::align, so that the RealSense SDK's align processing block is used for alignment. The processing block automatically adjusts for differences in resolution and FOV size.

In the case of your change from rs2::align align(RS2_STREAM_COLOR); to rs2::align align(RS2_STREAM_DEPTH); it looks as though you are aligning color to depth instead of the much more common depth to color, the method demonstrated by the SDK's C++ rs-measure example program for measuring the real-world distance between two points on the image. When color-to-depth alignment is performed and the color FOV is smaller than the depth FOV, the color image will be stretched out to match the depth FOV. This can be useful if you want the aligned image to fill the whole screen instead of having a border around it, as typically happens when aligning depth to a color stream with a smaller FOV.

The link below has links to scripting references for alignment in the MATLAB wrapper.

https://support.intelrealsense.com/hc/en-us/community/posts/4415502202771/comments/4415513801235

I would say a cautious 'yes' about whether it is a good sign that the intrinsics calculated by MATLAB closely match the stream intrinsics, though I do not know enough about MATLAB calibration to give an absolute answer to this question.

As the information at the top of this case states that you are using SDK 2.34.0, I would recommend considering a newer SDK version if possible; 2.34.0 had a problem with continuously generating timing errors for some RealSense users.
Can you explain that in a little more detail? So far I've noticed that, surprisingly, the first approach gave much better alignment, and by tweaking the ppx and ppy a little I managed to improve it. I also tried undistorting in MATLAB, where I also did the calibration, so it is the same environment, passing the calibration parameters to MATLAB's undistort function, i.e.:
But the result wasn't good. I think that however good the calibration is, it isn't ideal; otherwise I don't know how else I could change the result. So maybe tuning it by hand and checking the results on a few dozen images is the best I can do, although it isn't a quantitative approach (I can't really measure it except by eye).

I think another factor favoring the first approach (besides the automatic adjustment you mentioned) is that stretching the 512 x 640 image by up-sampling it to 720 x 1280 introduces too much noise for the calibration; that's why in the images you can see that the aligned thermal looks like it shrank too much. I could probably then adjust the intrinsics manually (the focal length, I think) to control these dimensions, but it seems that either way the bottleneck is the calibration (or the way I use its parameters, though I described the process here and it does look fine). That's why the most profitable approach is the one that makes the fewest changes to the images (no stretching or other pre-processing), as in the first approach. That's my guess, at least.

Following this observation, maybe a good thing to try is the one I mentioned: lowering the RealSense resolution to one closer to the thermal camera's. By the way, the same resolution constraint comes mainly from using MATLAB's calibration. Originally the objects in the thermal images are just a little bigger than the same objects in the RealSense images, which is why I suspect the first approach gives me better results: the calibration is easier, so the parameters are better, and there is no shrinking effect, just an inaccurate alignment that I fix by hand (and it does look good for the big objects in the images). You can notice that in the left corner the small device is not exactly aligned, but the big objects are aligned, which is already progress.
The original of that thermal image looks like this: So you can see it shrank the image, and then I fixed the ppx and the ppy a little to fit, as you see above. That's my best shot so far.
If alignment is performed in your code using rs2::align then the SDK's Align Processing Block mechanism makes automatic adjustments for differences in resolution and FOV size. If you do not use rs2::align and instead create your own alignment method, the Align Processing Block is not used and you will have to create your own mechanism for compensating for those differences, like the various methods you described above. So using rs2::align and letting the Align Processing Block handle adjustments between streams automatically is much easier than implementing the adjustments yourself. But it is certainly not compulsory to use rs2::align. What matters in the end is what works best for your project and produces the best results, and if your own align method achieves that then it is totally fine to use it.
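For readers implementing their own method, the core of what any align step does (whether the SDK's processing block or a custom one) can be sketched in plain NumPy: deproject each depth pixel to a 3-D point with the depth intrinsics, transform it through the extrinsics, and reproject it with the target camera's intrinsics. All numbers below are hypothetical, for illustration only; real code would take these values from the calibration or the stream profiles, and would also handle distortion and invalid depth.

```python
import numpy as np

def deproject(intrin, u, v, depth_m):
    """Back-project pixel (u, v) with depth in meters to a 3-D point
    in the source camera's frame (pinhole model, no distortion)."""
    x = (u - intrin["ppx"]) / intrin["fx"]
    y = (v - intrin["ppy"]) / intrin["fy"]
    return np.array([x * depth_m, y * depth_m, depth_m])

def transform(R, t, point):
    """Apply extrinsics: rotate then translate into the target camera's frame."""
    return R @ point + t

def project(intrin, point):
    """Project a 3-D point onto the target camera's image plane."""
    u = point[0] / point[2] * intrin["fx"] + intrin["ppx"]
    v = point[1] / point[2] * intrin["fy"] + intrin["ppy"]
    return np.array([u, v])

# Hypothetical intrinsics, for illustration only.
depth_intrin = {"fx": 640.0, "fy": 640.0, "ppx": 640.0, "ppy": 360.0}
color_intrin = {"fx": 900.0, "fy": 900.0, "ppx": 640.0, "ppy": 360.0}
R = np.eye(3)                  # identity extrinsics for the demo
t = np.array([0.0, 0.0, 0.0])

p = deproject(depth_intrin, 700, 400, 1.5)      # a pixel seen at 1.5 m
uv = project(color_intrin, transform(R, t, p))  # where it lands in the other image
```

Note how the same 3-D point lands at a different pixel purely because the two cameras have different focal lengths; this is the resolution/FOV compensation the Align Processing Block does for you.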
Hi @ilyak93 Do you require further assistance with this case, please? Thanks!
Main goal: align a RealSense camera with a non-RealSense thermal camera (image mode: RS2_FORMAT_Z16).
The alignment code was checked with RealSense RGB-depth images and works fine.
I did the calibration of the cameras with matlab:
and I got a pretty good re-projection error of 0.51, and the intrinsics visualization shows that the relative location of the cameras is correct:
The RealSense is a little bit above the thermal camera, as it is in the physical setup, so it does seem reliable.
Also the reprojection itself (red line) looks good:
(The green markers are the detected points; they are detected on the RGB image, and on the thermal image the same locations as in the RGB are used, so ignore those, but the reprojection (red) looks perfect.)
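For reference, the mean re-projection error that MATLAB's calibrator reports is just the average pixel distance between the detected corners and the corners re-projected through the fitted camera model. A minimal sketch with made-up point coordinates (not the actual calibration data):

```python
import numpy as np

# Hypothetical detected checkerboard corners vs. the corners re-projected
# through the calibrated model (pixel coordinates, illustration only).
detected    = np.array([[100.0, 200.0], [150.0, 210.0], [300.0, 400.0]])
reprojected = np.array([[100.4, 200.3], [149.7, 210.5], [300.2, 399.6]])

# Per-point Euclidean distance, then the mean -- the figure reported
# as "mean reprojection error" (in pixels).
errors = np.linalg.norm(detected - reprojected, axis=1)
mean_error = errors.mean()
```

An error around half a pixel, as here, is generally considered a good calibration, which matches the 0.51 figure above.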
The resolutions of the cameras are different, as the realsense is 720 x 1280 and the thermal camera is 512 x 640, so I tried two approaches:
cv2.resize(tc1_np, size, interpolation=cv2.INTER_AREA)
Using the code for alignment didn't give correct results.
I also wondered whether to use the intrinsics MATLAB calculated for the RealSense or the original ones from the stream (rs2 API); I tried both and didn't see anything significant (although I think the MATLAB intrinsics were a little better).
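One detail worth checking: if an image is resized before alignment (e.g. the 512 x 640 thermal up-sampled to 720 x 1280), the intrinsics have to be rescaled with it, or neither the MATLAB nor the stream intrinsics will match the resized image. A sketch assuming a simple pinhole model; the thermal intrinsics below are hypothetical placeholders:

```python
def scale_intrinsics(intrin, new_w, new_h):
    """Rescale pinhole intrinsics when the image is resized.
    fx/ppx scale with width, fy/ppy scale with height; distortion
    coefficients are dimensionless and stay the same."""
    sx = new_w / intrin["width"]
    sy = new_h / intrin["height"]
    return {
        "width": new_w, "height": new_h,
        "fx": intrin["fx"] * sx, "fy": intrin["fy"] * sy,
        "ppx": intrin["ppx"] * sx, "ppy": intrin["ppy"] * sy,
    }

# Hypothetical thermal intrinsics at the native 512 x 640 resolution.
thermal = {"width": 640, "height": 512,
           "fx": 500.0, "fy": 500.0, "ppx": 320.0, "ppy": 256.0}

# Intrinsics that go with the thermal image up-sampled to 1280 x 720.
thermal_up = scale_intrinsics(thermal, 1280, 720)
```

Note that width and height scale by different factors here (2.0 vs. 1.40625), so the up-sampling also changes the aspect ratio, which could itself explain some of the shrinking/stretching seen in the aligned images.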
I did hope the second approach would work, as RealSense seems to use it. I don't really know the inner implementation, but I do know that before the alignment both images are 720 x 1280, as in this approach. What I don't know is how the calibration parameters are calculated, though it seems most logical that they are calculated for both images while they are 720 x 1280, and then
register_extrinsics_to
from the thermal to the RealSense should do the job. So my questions, I guess, are:
To make sure I'm plugging the intrinsics and extrinsics in right, here is their structure from MATLAB:
Camera1 is the RealSense and Camera2 is the thermal camera; here are their intrinsics:
So as much as I understand, rotation and translation are the parameters for `register_extrinsics_to`.

Then the structure of the intrinsics matrix:

is the transpose of this one (my guess, correct me if I'm wrong):

where `f_x` of the matrix is the `fx` parameter for creating an `rs2_intrinsics` for a `sensor`, the same for `f_y`, and `c_x` is `ppx` and `c_y` is `ppy`.

The coefficients, as much as I can tell from the docs, are for Brown-Conrady distortion:
I deduced this by looking at the MATLAB documentation and the naming convention (correct me if I'm wrong):
so the k's (k1, k2, k3) are the radial distortion coefficients and p1, p2 are the tangential coefficients.
(I think it is correct:
)
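Putting the guesses above together, the mapping from MATLAB's conventions into rs2_intrinsics-style fields can be sketched like this. The numbers are hypothetical; the two key points are that MATLAB's IntrinsicMatrix is stored transposed relative to the usual layout (MATLAB works with row vectors), and that librealsense's Brown-Conrady model expects the coefficients in the order [k1, k2, p1, p2, k3]:

```python
import numpy as np

# Hypothetical MATLAB output, for illustration only.
# MATLAB's cameraParameters.IntrinsicMatrix is the TRANSPOSE of the
# conventional [fx 0 cx; 0 fy cy; 0 0 1] form.
K_matlab = np.array([[901.0,   0.0, 0.0],
                     [  0.0, 902.0, 0.0],
                     [640.5, 360.5, 1.0]])
radial     = [0.10, -0.25, 0.05]   # MATLAB RadialDistortion:     [k1 k2 k3]
tangential = [0.001, -0.002]       # MATLAB TangentialDistortion: [p1 p2]

K = K_matlab.T                     # back to the conventional layout
intrin = {
    "fx":  K[0, 0],
    "fy":  K[1, 1],
    "ppx": K[0, 2],
    "ppy": K[1, 2],
    # librealsense Brown-Conrady coefficient order: [k1, k2, p1, p2, k3]
    "coeffs": [radial[0], radial[1],
               tangential[0], tangential[1],
               radial[2]],
}
```

Note the k3 coefficient goes last, after the two tangential terms, rather than directly after k2.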
P.S.: I tried using the transposed rotation matrix. I think the result was closer than with the matrix as-is, but I can't tell if this is an indicator of something or just a random effect.
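The transpose helping is likely not a random effect: MATLAB's calibration tools report rotations in a row-vector (postmultiply) convention, p_cam2 = p_cam1 * R + t, while librealsense extrinsics use the column-vector (premultiply) convention, p_cam2 = R_col @ p_cam1 + t, and the two rotations differ by exactly a transpose. A quick NumPy check of that equivalence (rotation and translation values are arbitrary, for illustration):

```python
import numpy as np

theta = np.deg2rad(10.0)            # arbitrary 10-degree rotation about Z
R = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])
t = np.array([0.0, -0.05, 0.0])     # arbitrary 5 cm vertical offset

p = np.array([0.1, 0.2, 1.0])       # some 3-D point
row_form = p @ R + t                # MATLAB row-vector convention
col_form = R.T @ p + t              # column-vector convention, transposed R
# Both conventions map the point to the same place once R is transposed.
```

So if the calibration comes from MATLAB, passing the transposed rotation to `register_extrinsics_to` is the expected conversion, assuming the SDK's column-vector convention.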
The whole alignment code:
Here is the content of the calibration file (with the MATLAB intrinsics for the RealSense, not the stream ones from the API):
The original color image (I've cropped my face out of the color image, so there is another upper part):
The results (second approach) with the rotation as-is from the MATLAB calibration:
With transposed rotation:
The second is better: when I overlay one image on the other, the objects are closer to one another, but it still isn't a good alignment result.
First approach results:
Color:
Thermal aligned with transposed rotation matrix:
Thermal aligned with as is rotation matrix:
Those are at least overlapping (pretty close).
In this approach I also tried using

`rs2::align align(RS2_STREAM_DEPTH);`

instead of

`rs2::align align(RS2_STREAM_COLOR);`

and saving the aligned color images, so I get:

The only thing that comes to my mind is to try to do a better calibration (more images, more positions).
I would be glad if you revisited the things I've done and corrected me if I'm wrong anywhere, and for any other comments.
Does it look to you guys like it works and it's just a matter of more effort in the calibration (though the re-projection looked good and the error was small)?