
How to change untextured pointcloud resolution (size) independently from the RGB setting? #1924

Closed
mszuyx opened this issue Jun 9, 2021 · 30 comments

@mszuyx

mszuyx commented Jun 9, 2021

Hi RS community,

For my application, I need a high-resolution RGB stream for image classification and a low-resolution pointcloud (without texture) stream for object detection. I originally assumed the size of the pointcloud was determined by "depth_width" and "depth_height" (i.e., the width and height of the sensor_msgs::PointCloud2 message would match these two arguments), but it turns out the pointcloud size is determined by "color_width" and "color_height" instead (which makes sense, as the D455 uses stereo images to recover depth and the pointcloud).
However, to maintain a high RGB resolution I have to live with an unnecessarily detailed pointcloud. Does the ROS wrapper offer a way to change the pointcloud size without touching the RGB settings? If not, I will explore ways to sub-sample the pointcloud, as sketched below. (The whole point here is to reduce CPU usage: sub-sampling will help, but not subscribing to or processing the full-size pointcloud at all would be better.)
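(If it comes to that, the sub-sampling fallback I have in mind is something like the rough, untested sketch below, using PCL's VoxelGrid; the topic name /d455/depth/color/points is what I expect given my camera namespace and may differ.)

    // Rough sketch of the sub-sampling fallback: subscribe to the full cloud,
    // downsample it with a voxel grid, and republish the smaller cloud.
    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <pcl_conversions/pcl_conversions.h>
    #include <pcl/filters/voxel_grid.h>

    ros::Publisher pub;

    void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
    {
      pcl::PCLPointCloud2::Ptr cloud(new pcl::PCLPointCloud2());
      pcl_conversions::toPCL(*msg, *cloud);

      pcl::VoxelGrid<pcl::PCLPointCloud2> voxel;
      voxel.setInputCloud(cloud);
      voxel.setLeafSize(0.05f, 0.05f, 0.05f);   // 5 cm voxels; tune for the scene
      pcl::PCLPointCloud2 downsampled;
      voxel.filter(downsampled);

      sensor_msgs::PointCloud2 out;
      pcl_conversions::fromPCL(downsampled, out);
      pub.publish(out);
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "pointcloud_subsampler");
      ros::NodeHandle nh;
      pub = nh.advertise<sensor_msgs::PointCloud2>("points_downsampled", 1);
      ros::Subscriber sub = nh.subscribe("/d455/depth/color/points", 1, cloudCallback);
      ros::spin();
      return 0;
    }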

Thank you!! : )

Yu

@MartyG-RealSense
Copy link
Collaborator

Hi @mszuyx The RealSense ROS wrapper provides a decimation filter that can be used to reduce depth scene complexity. It can be enabled by adding filters:=decimation to the roslaunch instruction.
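For example, with the pre-made ROS1 launch file:

roslaunch realsense2_camera rs_camera.launch filters:=decimation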

In ROS1, once the decimation filter is defined, its value (Magnitude) can be configured with dynamic_reconfigure, as demonstrated in the link below.

#583

ROS2, however, does not use dynamic_reconfigure. Doronhi, the RealSense ROS wrapper developer, offers advice in the link below about setting filter values in ROS2.

#1710 (comment)


Alternatively, a RealSense ROS user in the link below took the approach of custom-defining a low resolution in the roslaunch instruction, which they said significantly reduced the number of points per message in their pointcloud configuration.

#1881 (comment)

When defining a custom resolution, three factors should be provided for each stream type that is being defined: width, height and FPS. If the three factors are not provided then the custom configuration is recognized as invalid and the default configuration of the particular camera model being used is applied instead.
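For example (the values below are illustrative; 424x240 depth and 1280x720 color at 30 FPS are both modes supported by the D455):

roslaunch realsense2_camera rs_camera.launch depth_width:=424 depth_height:=240 depth_fps:=30 color_width:=1280 color_height:=720 color_fps:=30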

@mszuyx
Author

mszuyx commented Jun 10, 2021

@MartyG-RealSense Thank you! I am using ROS1 with the latest SDK and ROS wrapper. I have actually tried the decimation filter, but I couldn't see any change in the pointcloud resolution.
Here I am turning the decimation filter on/off in realsense-viewer, and you can see that the resolution of the depth image clearly drops:
[Screenshots: realsense-viewer with the decimation filter off and on]
But if I include the filter in my ROS launch file like this:
[Screenshot: launch file filters argument including decimation]
and view the depth image with rosrun image_view image_view image:=/d455/depth/image_rect_raw:
This is the depth image without the decimation filter:
[Image: depth image without decimation]
This is the depth image with the decimation filter:
[Image: depth image with decimation]
Both images have 848 x 480 resolution (the same as my launch file depth resolution setting)
I also checked the pointcloud size using:
std::cout << rawPointCloud.width << std::endl;
std::cout << rawPointCloud.height << std::endl;
And I got 1280 and 720 w/o decimation filter. (My RGB resolution setting is 1280 x 720)
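(For context, those prints live in a minimal subscriber along the lines of the sketch below; the topic name is the one I expect from my camera namespace and may differ.)

    #include <iostream>
    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud2.h>

    // Print the organized cloud's dimensions for each incoming message.
    void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
    {
      const sensor_msgs::PointCloud2& rawPointCloud = *msg;
      std::cout << rawPointCloud.width << std::endl;    // 1280 in my setup
      std::cout << rawPointCloud.height << std::endl;   // 720 in my setup
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "pointcloud_size_check");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("/d455/depth/color/points", 1, cloudCallback);
      ros::spin();
      return 0;
    }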

Here is my launch file; please advise if I did something wrong. Thank you!

  <arg name="usb_port_id"         default=""/>
  <arg name="device_type"         default="d455"/>
  <arg name="json_file_path"      default=""/>
  <arg name="camera"              default="d455"/>
  <arg name="tf_prefix"           default="$(arg camera)"/>
  <arg name="external_manager"    default="false"/>
  <arg name="manager"             default="realsense2_camera_manager"/>
  <arg name="output"              default="screen"/>
  <arg name="initial_reset"       default="true"/>
  
  
  <arg name="fisheye_width"       default="-1"/>
  <arg name="fisheye_height"      default="-1"/>
  <arg name="enable_fisheye"      default="false"/>

  <arg name="depth_width"         default="848"/>
  <arg name="depth_height"        default="480"/>
  <arg name="enable_depth"        default="true"/>

  <arg name="confidence_width"    default="-1"/>
  <arg name="confidence_height"   default="-1"/>
  <arg name="enable_confidence"   default="false"/>
  <arg name="confidence_fps"      default="-1"/>

  <arg name="infra_width"         default="-1"/>
  <arg name="infra_height"        default="-1"/>
  <arg name="enable_infra"        default="false"/>
  <arg name="enable_infra1"       default="false"/>
  <arg name="enable_infra2"       default="false"/>
  <arg name="infra_rgb"           default="false"/>

  <arg name="color_width"         default="1280"/>
  <arg name="color_height"        default="720"/>
  <arg name="enable_color"        default="true"/>

  <arg name="fisheye_fps"         default="-1"/>
  <arg name="depth_fps"           default="30"/>
  <arg name="infra_fps"           default="30"/>
  <arg name="color_fps"           default="30"/>
  <arg name="gyro_fps"            default="-1"/>
  <arg name="accel_fps"           default="-1"/>
  <arg name="enable_gyro"         default="false"/>
  <arg name="enable_accel"        default="false"/>

  <arg name="enable_pointcloud"         default="false"/>
  <arg name="pointcloud_texture_stream" default="RS2_STREAM_ANY"/>
  <arg name="pointcloud_texture_index"  default="0"/>
  <arg name="allow_no_texture_points"   default="true"/>
  <arg name="ordered_pc"                default="true"/>

  <arg name="enable_sync"               default="true"/>
  <arg name="align_depth"               default="true"/>

  <arg name="publish_tf"                default="true"/>
  <arg name="tf_publish_rate"           default="0"/>

  <arg name="filters"                   default="spatial,temporal,decimation,pointcloud"/>
  <arg name="clip_distance"             default="7"/>
  <arg name="linear_accel_cov"          default="0.01"/>
  <arg name="unite_imu_method"          default="linear_interpolation"/>
  <arg name="topic_odom_in"             default="odom_in"/>
  <arg name="calib_odom_file"           default=""/>
  <arg name="publish_odom_tf"           default="true"/>

  <arg name="stereo_module/exposure/1"  default="7500"/>
  <arg name="stereo_module/gain/1"      default="16"/>
  <arg name="stereo_module/exposure/2"  default="1"/>
  <arg name="stereo_module/gain/2"      default="16"/>
  
  
  <group ns="$(arg camera)">
    <include file="$(find realsense2_camera)/launch/includes/nodelet.launch.xml">
      <arg name="tf_prefix"                value="$(arg tf_prefix)"/>
      <arg name="external_manager"         value="$(arg external_manager)"/>
      <arg name="manager"                  value="$(arg manager)"/>
      <arg name="output"                   value="$(arg output)"/>
      <arg name="serial_no"                value="$(arg serial_no)"/>
      <arg name="usb_port_id"              value="$(arg usb_port_id)"/>
      <arg name="device_type"              value="$(arg device_type)"/>
      <arg name="json_file_path"           value="$(arg json_file_path)"/>

      <arg name="enable_pointcloud"        value="$(arg enable_pointcloud)"/>
      <arg name="pointcloud_texture_stream" value="$(arg pointcloud_texture_stream)"/>
      <arg name="pointcloud_texture_index"  value="$(arg pointcloud_texture_index)"/>
      <arg name="enable_sync"              value="$(arg enable_sync)"/>
      <arg name="align_depth"              value="$(arg align_depth)"/>

      <arg name="fisheye_width"            value="$(arg fisheye_width)"/>
      <arg name="fisheye_height"           value="$(arg fisheye_height)"/>
      <arg name="enable_fisheye"           value="$(arg enable_fisheye)"/>

      <arg name="depth_width"              value="$(arg depth_width)"/>
      <arg name="depth_height"             value="$(arg depth_height)"/>
      <arg name="enable_depth"             value="$(arg enable_depth)"/>

      <arg name="confidence_width"         value="$(arg confidence_width)"/>
      <arg name="confidence_height"        value="$(arg confidence_height)"/>
      <arg name="enable_confidence"        value="$(arg enable_confidence)"/>
      <arg name="confidence_fps"           value="$(arg confidence_fps)"/>

      <arg name="color_width"              value="$(arg color_width)"/>
      <arg name="color_height"             value="$(arg color_height)"/>
      <arg name="enable_color"             value="$(arg enable_color)"/>

      <arg name="infra_width"              value="$(arg infra_width)"/>
      <arg name="infra_height"             value="$(arg infra_height)"/>
      <arg name="enable_infra"             value="$(arg enable_infra)"/>
      <arg name="enable_infra1"            value="$(arg enable_infra1)"/>
      <arg name="enable_infra2"            value="$(arg enable_infra2)"/>
      <arg name="infra_rgb"                value="$(arg infra_rgb)"/>

      <arg name="fisheye_fps"              value="$(arg fisheye_fps)"/>
      <arg name="depth_fps"                value="$(arg depth_fps)"/>
      <arg name="infra_fps"                value="$(arg infra_fps)"/>
      <arg name="color_fps"                value="$(arg color_fps)"/>
      <arg name="gyro_fps"                 value="$(arg gyro_fps)"/>
      <arg name="accel_fps"                value="$(arg accel_fps)"/>
      <arg name="enable_gyro"              value="$(arg enable_gyro)"/>
      <arg name="enable_accel"             value="$(arg enable_accel)"/>

      <arg name="publish_tf"               value="$(arg publish_tf)"/>
      <arg name="tf_publish_rate"          value="$(arg tf_publish_rate)"/>

      <arg name="filters"                  value="$(arg filters)"/>
      <arg name="clip_distance"            value="$(arg clip_distance)"/>
      <arg name="linear_accel_cov"         value="$(arg linear_accel_cov)"/>
      <arg name="initial_reset"            value="$(arg initial_reset)"/>
      <arg name="unite_imu_method"         value="$(arg unite_imu_method)"/>
      <arg name="topic_odom_in"            value="$(arg topic_odom_in)"/>
      <arg name="calib_odom_file"          value="$(arg calib_odom_file)"/>
      <arg name="publish_odom_tf"          value="$(arg publish_odom_tf)"/>
      <arg name="stereo_module/exposure/1" value="$(arg stereo_module/exposure/1)"/>
      <arg name="stereo_module/gain/1"     value="$(arg stereo_module/gain/1)"/>
      <arg name="stereo_module/exposure/2" value="$(arg stereo_module/exposure/2)"/>
      <arg name="stereo_module/gain/2"     value="$(arg stereo_module/gain/2)"/>

      <arg name="allow_no_texture_points"  value="$(arg allow_no_texture_points)"/>
      <arg name="ordered_pc"               value="$(arg ordered_pc)"/>
      
    </include>
    <rosparam>  
      /d455/rgb_camera/auto_exposure_priority : False
    </rosparam>
  </group>
</launch>

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 11, 2021

It looks as though you are setting up the decimation filter within your launch file, but this does not set the filter's magnitude value, which determines the extent to which the filter affects the ROS image. As mentioned earlier, in ROS1 the magnitude can be set at runtime after launch with dynamic_reconfigure:

rosrun dynamic_reconfigure dynparam set camera/decimation 'Filter Magnitude' 1

@mszuyx
Author

mszuyx commented Jun 11, 2021

@MartyG-RealSense Thank you! I will try that ASAP and let you know the outcome! (I haven't tried it exactly the way you suggest, but I did play around with the magnitude via rosrun rqt_reconfigure rqt_reconfigure and it didn't appear to do anything either. I also believe the default magnitude of the decimation filter is 2, so the resolution should at least go down by a factor of two just from enabling it in the launch file... Anyway, I will let you know how it goes!)

@mszuyx
Author

mszuyx commented Jun 13, 2021

@MartyG-RealSense Hi!

I tried setting the magnitude value of the decimation filter with the following lines in my launch file:

    <rosparam>  
      /d455/rgb_camera/auto_exposure_priority : False
      /d455/decimation/filter_magnitude : 8
    </rosparam>

I also verified with rosrun rqt_reconfigure rqt_reconfigure that the targeted parameter was properly set (the magnitude value was 2 before launching the new launch file):
[Screenshots: rqt_reconfigure showing /d455/decimation/filter_magnitude set to 8]

However,
std::cout << rawPointCloud.width << std::endl;
std::cout << rawPointCloud.height << std::endl;
still output 1280 x 720.

Furthermore, rosrun image_view image_view image:=/d455/depth/image_rect_raw shows that the depth image's resolution remains unchanged as well:
[Image: depth image, resolution unchanged]

Please advise!

@mszuyx
Author

mszuyx commented Jun 13, 2021

Also, it would be nice to know how to get rid of these two warnings:
[Screenshot: two parameter warnings printed at launch]
I couldn't locate the code that triggers the setting of these two params.
Thanks!!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 13, 2021

The limiters for depth auto-exposure and auto-gain in librealsense were introduced in SDK 2.42.0 and require firmware version 5.12.11.0 or newer. Which firmware version does your camera currently have installed, please?

IntelRealSense/librealsense#8220

As this feature just sets maximum limits for the auto-exposure and gain values, removal of the settings from dynamic_reconfigure, as indicated by the warnings, is unlikely to have a significant effect on your roslaunch.

Looking at the launch file that you kindly provided earlier, it appears to be a custom launch file rather than a pre-made one like rs_camera.launch. So I cannot be certain whether your rosparam code for defining the decimation filter, or the dynamic_reconfigure value for the filter, is correct, because I do not have a similar reference to compare it against, unfortunately.

Could you please try defining the filter the 'standard' way to see whether it makes a difference to the depth image: first with the roslaunch below, then the rosrun.

roslaunch realsense2_camera rs_camera.launch filters:=decimation

rosrun dynamic_reconfigure dynparam set camera/decimation 'Filter Magnitude' 8

@mszuyx
Author

mszuyx commented Jun 15, 2021

Hi @MartyG-RealSense, the RealSense SDK is version 2.47.0 and my D455 cameras have firmware 5.12.14.50 (recently updated). I am also using the latest release of the realsense-ros wrapper.
I tried the "standard" way, launching the camera with rs_camera.launch, and it did work!
So I compared my custom launch file against rs_camera.launch line by line and found the cause of the issue: align_depth:=true.
If this param is set to true, the pointcloud size equals the resolution of the RGB stream, regardless of the decimation filter setting.
For my application, I need to filter some objects based on the textureless pointcloud and later project the masked points onto the 2D image for classification. In short, it would be nice if I could still use align_depth:=true alongside the decimation filter. If not, I guess I can transform the points myself based on the transformation between the depth frame and the RGB frame, and then project them as sketched below? Does that make sense? Or could RealSense add a patch to make these two settings compatible?
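(For reference, the projection step I have in mind is a plain pinhole projection -- a rough sketch is below, assuming the points are already expressed in the color optical frame and taking fx, fy, cx, cy from the K matrix published on /d455/color/camera_info; it ignores lens distortion.)

    // Pinhole projection of a 3D point (in the color optical frame) onto the RGB image plane.
    struct Pixel { double u; double v; };

    inline bool projectToColor(double x, double y, double z,
                               double fx, double fy, double cx, double cy,
                               Pixel& px)
    {
      if (z <= 0.0) return false;   // point behind the camera or invalid depth
      px.u = fx * x / z + cx;
      px.v = fy * y / z + cy;
      return true;
    }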

@mszuyx
Author

mszuyx commented Jun 15, 2021

As a side note, for some reason rosrun dynamic_reconfigure dynparam set camera/decimation 'Filter Magnitude' 8 doesn't work, but the filter_magnitude spelling does. A version difference?
[Screenshot: terminal output of the two dynparam commands]

@MartyG-RealSense
Collaborator

The filter_magnitude / Filter Magnitude naming issue is mentioned in the #583 (comment) case linked earlier, though I do not have any knowledge about it beyond that comment.
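If it helps, one way to check which parameter name your particular wrapper version exposes is to dump the decimation filter's current configuration (node path taken from your d455 camera namespace):

rosrun dynamic_reconfigure dynparam get /d455/decimation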

Do you experience any difference in the point cloud in regard to the RGB resolution if you set allow_no_texture_points to False in your launch file instead of True?

@mszuyx
Author

mszuyx commented Jun 16, 2021

Hi, @MartyG-RealSense
allow_no_texture_points:=true + align_depth:=false -> pointcloud size 424 x 240 (filter_magnitude = 2)
allow_no_texture_points:=false + align_depth:=false -> pointcloud size 424 x 240 (filter_magnitude = 2)
allow_no_texture_points:=true + align_depth:=true -> pointcloud size 1280 x 720 (filter_magnitude = 2)
allow_no_texture_points:=false + align_depth:=true -> pointcloud size 1280 x 720 (filter_magnitude = 2)
I guess allow_no_texture_points doesn't have any impact on the pointcloud size when the decimation filter is active.
The D455 has a larger RGB FOV than the D435 (RGB FOV > depth FOV), so setting this param to false won't clip the pointcloud as it does with the D435.
As the realsense-ros documentation states:
align_depth: If set to true, will publish additional topics for the "aligned depth to color" image.:
/camera/aligned_depth_to_color/image_raw, /camera/aligned_depth_to_color/camera_info.
The pointcloud, if enabled, will be built based on the aligned_depth_to_color image.

I believe this is the root cause of my issue. The aligned_depth_to_color/image_raw image will always share the same resolution as the RGB image, hence the generated pointcloud will have the same resolution too. May I expect an update to realsense-ros that makes align_depth and decimation compatible? If not, I can try projecting the pointcloud myself. For that matter, could you please direct me to the transformation / projection matrix used for align_depth on the D455 camera? Much appreciated!

@MartyG-RealSense
Collaborator

Thanks very much @mszuyx for the test data. I will refer the questions in the above comment to @doronhi, the RealSense ROS wrapper developer, since the question about a future update to the wrapper is a development-related query.

@doronhi would also be the best person to answer your question about the transformation / projection matrix used for align_depth on D455 in the ROS wrapper. Thanks!

@mszuyx
Author

mszuyx commented Jun 17, 2021

Hi @doronhi ! Would you kindly direct me to the right documentation? Thanks : )

@doronhi
Contributor

doronhi commented Jun 20, 2021

Regarding the request to split the align_depth flag for images and for pointcloud - I think it's a valid request. I'll try to attend to it shortly but I don't have a timetable yet.

The transformation between the color and depth sensors is available using rosrun tf tf_echo camera_color_optical_frame camera_depth_optical_frame.
For your purpose, since you require a pointcloud without texture, I wonder if it wouldn't be easier to modify the wrapper:
Replace these lines in base_realsense_node.cpp - lines 1665-1673

            for (std::vector<NamedFilter>::const_iterator filter_it = _filters.begin(); filter_it != _filters.end(); filter_it++)
            {
                ROS_DEBUG("Applying filter: %s", filter_it->_name.c_str());
                if ((filter_it->_name == "pointcloud") && (!original_depth_frame))
                    continue;
                if ((filter_it->_name == "align_to_color") && (!is_color_frame))
                    continue;
                frameset = filter_it->_filter->process(frameset);
            }

with these:

            for (std::vector<NamedFilter>::const_iterator filter_it = _filters.begin(); filter_it != _filters.end(); filter_it++)
            {
                ROS_DEBUG("Applying filter: %s", filter_it->_name.c_str());
                if (filter_it->_name == "pointcloud")   // Prevent the pointcloud filter to work on the aligned frameset.
                    continue;
                if ((filter_it->_name == "align_to_color") && (!is_color_frame))
                    continue;
                frameset = filter_it->_filter->process(frameset);
            }
            if (_pointcloud_filter && original_depth_frame)
            { // run the pointcloud filter on the original depth frame and publish the results:
                publishPointCloud(_pointcloud_filter->process(original_depth_frame).as<rs2::points>(), t, frameset);
            }
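(After editing base_realsense_node.cpp, the wrapper needs to be rebuilt and relaunched -- for example, assuming a catkin workspace at ~/catkin_ws:)

cd ~/catkin_ws && catkin_make && source devel/setup.bash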

@mszuyx
Author

mszuyx commented Jun 21, 2021

@doronhi Thanks! This is awesome! I will try it and let you know the outcome ASAP! : )

@MartyG-RealSense
Collaborator

Hi @mszuyx Do you have an update that you can provide about this case, please? Thanks!

@mszuyx
Author

mszuyx commented Jun 28, 2021

@MartyG-RealSense Sorry! I am working on it!

@MartyG-RealSense
Collaborator

No problem at all - thanks for the update!

@mszuyx
Author

mszuyx commented Jun 28, 2021

@doronhi @MartyG-RealSense Just did what doronhi suggested.

Updates:

Now, even if align_depth is set to true, the resolution of camera/depth/image_rect_raw and the associated pointcloud matches the resolution setting for depth_width and depth_height (instead of matching color_width and color_height as before - great!).

However, the decimation filter still doesn't seem to affect camera/depth/image_rect_raw or the pointcloud after this modification, which doesn't seem like the intended behavior (i.e., even if align_depth is set to true, the pointcloud resolution should respect the depth resolution setting and be further reduced according to the decimation filter setting).

Some additional questions:

  1. After this modification, is the camera/depth/image_rect_raw I am seeing aligned with the RGB frame or not? Based on doronhi's kind comments, it seems the depth image is in the original depth frame (not aligned) and the pointcloud is generated from this unaligned image. Am I correct? I just want to be sure.
  2. Is the resolution of the pointcloud always equal to the depth_width and depth_height setting regardless of the decimation filter usage? (i.e., does decimation only serve as an averaging filter, without changing the output size?)


@MartyG-RealSense
Collaborator

@mszuyx I will defer to @doronhi on the questions in the comment above as I do not have advice to give on those subjects.

@doronhi
Contributor

doronhi commented Jul 1, 2021

Hi @mszuyx,
The topic camera/depth/image_rect_raw gives the depth image with its original orientation.
The aligned depth image is published on another topic: /camera/aligned_depth_to_color/image_raw

The resolution of the pointcloud matches the resolution of the depth image it was built upon.
In the original implementation, without any modifications, the pointcloud is built from the depth image that has passed through all the filters. That means that if you set the "align_depth" filter, the pointcloud will be built from a depth image aligned to the color image, and it too will be aligned to the color image. If you set the "decimation" filter, the pointcloud will have the resolution resulting from the decimation filter.
In the bypass I suggested, the pointcloud is built from the original depth image. Therefore it is not affected by the "align_depth" or "decimation" filters.
By modifying this section of the code you could make the pointcloud be affected by whatever filters you need.
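If the goal is a cloud that is decimated but not aligned, a hypothetical variation on the bypass above (untested, reusing the names from the snippets already posted) would be to pass the original depth frame through the decimation filter before the pointcloud filter:

            if (_pointcloud_filter && original_depth_frame)
            {
                // Hypothetical and untested: decimate the original (unaligned) depth frame
                // before building the pointcloud from it.
                rs2::frame pc_input = original_depth_frame;
                for (std::vector<NamedFilter>::const_iterator filter_it = _filters.begin(); filter_it != _filters.end(); filter_it++)
                {
                    if (filter_it->_name == "decimation")
                        pc_input = filter_it->_filter->process(pc_input);
                }
                publishPointCloud(_pointcloud_filter->process(pc_input).as<rs2::points>(), t, frameset);
            }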

@mszuyx
Author

mszuyx commented Jul 5, 2021

@doronhi I am working on this. The behavior I am looking for is that the decimation filter can be applied to the /camera/aligned_depth_to_color/image_raw frame, so that the resolution of the pointcloud generated from this frame is also reduced accordingly.

Based on your last reply: "In the original implementation, without any modifications, the pointcloud is built from the depth image that has passed through all the filters. That means that if you set the "align_depth" filter, the pointcloud will be built from a depth image aligned to the color image, and it too will be aligned to the color image. If you set the "decimation" filter, the pointcloud will have the resolution resulting from the decimation filter." It seems like the original implementation aims for the same behavior.

But based on my observations (please refer to my earlier comments), once align_depth is set to true, the pointcloud is built from the /camera/aligned_depth_to_color/image_raw frame as you described, but the resolution of this frame is always equal to that of the RGB frame, regardless of the depth resolution setting or the decimation filter setting. As a result, the pointcloud generated from this frame also has the same resolution as the RGB frame.

To me, it seems the aligned depth frame is untouched by the decimation filter, even though the code clearly says frameset = filter_it->_filter->process(frameset);. Can you share some insight on how to approach this problem?

Thanks!!

@MartyG-RealSense
Collaborator

Hi @mszuyx Do you have an update about this case that you can provide, please? Thanks!

@mszuyx
Author

mszuyx commented Jul 14, 2021

@MartyG-RealSense Hi, not really. I am still waiting on doronhi's response.

@MartyG-RealSense
Collaborator

Thanks for the information @mszuyx

@MartyG-RealSense
Collaborator

Hi @doronhi Could you take a look at the question by @mszuyx at #1924 (comment) please? Thanks!

@doronhi
Contributor

doronhi commented Aug 9, 2021

Hi, sorry for the late response.
After alignment, the depth image is aligned to the color image both in point of view and in image size. This means that even though the depth image passed through the decimation filter and its resolution went down from, say, 848x480 to 424x240, if the color image is at 1280x720 then the aligned_depth_to_color image will have the same 1280x720 resolution.
What you need to do is also reduce the color resolution.
The decimation filter, by default, works only on the depth stream. You can see its parameters using rqt_reconfigure under camera->decimation->stream_filter/stream_format_filter.
To apply the decimation filter to all streams, change both of the filter's parameters to the "Process - Any (0)" value.
You can also set it like so in your launch file:

  <rosparam>
      /camera/decimation/stream_filter: 0
      /camera/decimation/stream_format_filter: 0
  </rosparam>
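The same two parameters can presumably also be changed at runtime with dynamic_reconfigure, e.g.:

rosrun dynamic_reconfigure dynparam set /camera/decimation stream_filter 0
rosrun dynamic_reconfigure dynparam set /camera/decimation stream_format_filter 0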

Out of curiosity - why use the decimation filter instead of requesting the reduced resolution in the stream definition? i.e. depth_width:=424 color_width:=424 etc....

@mszuyx
Author

mszuyx commented Aug 9, 2021

@doronhi Thank you for your response! Yes, I have also come to the realization that the aligned_depth_to_color image is not affected by the decimation filter in any way.

To answer your question, my application requires a high-resolution RGB stream (>1280x720 for object recognition) and a manageably sized pointcloud stream (after some trial and error, 424x240 is sufficient and preferable). In the end, all information is presented in a traversability map, so it is preferable for the align_depth flag to be set to true.
But if I do so, the resolution settings (depth_width, etc.) and the decimation filter setting are not applied to the pointcloud generated from the aligned_depth_to_color image -- this is my dilemma.

My current solution is to set the align_depth flag to false and perform the alignment myself, as sketched below. (The transformation should be easy; I am not so sure about the FOV difference, but I am using the D455 so I think the two images should be similar.)

Does this sound like a reasonable workaround? (Meanwhile, it would be really handy if the resolution setting and decimation could also affect the aligned image directly. I think it would benefit many people who use RS cameras for navigation and recognition.)
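(A rough, untested sketch of that manual step is below: it simply re-expresses the untextured cloud in the color optical frame via TF; the frame and topic names assume the d455 tf_prefix from my launch file. Projecting the points onto the RGB image afterwards would additionally need the color intrinsics from /d455/color/camera_info.)

    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <geometry_msgs/TransformStamped.h>
    #include <tf2_ros/transform_listener.h>
    #include <tf2_sensor_msgs/tf2_sensor_msgs.h>

    tf2_ros::Buffer* tf_buffer = NULL;
    ros::Publisher pub;

    void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
    {
      try
      {
        // Static extrinsics between the depth and color optical frames (what tf_echo reports).
        geometry_msgs::TransformStamped t = tf_buffer->lookupTransform(
            "d455_color_optical_frame", msg->header.frame_id, ros::Time(0));
        sensor_msgs::PointCloud2 out;
        tf2::doTransform(*msg, out, t);
        pub.publish(out);
      }
      catch (const tf2::TransformException& ex)
      {
        ROS_WARN_THROTTLE(1.0, "TF lookup failed: %s", ex.what());
      }
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "cloud_to_color_frame");
      ros::NodeHandle nh;
      tf2_ros::Buffer buffer;
      tf2_ros::TransformListener listener(buffer);
      tf_buffer = &buffer;
      pub = nh.advertise<sensor_msgs::PointCloud2>("points_in_color_frame", 1);
      ros::Subscriber sub = nh.subscribe("/d455/depth/color/points", 1, cloudCallback);
      ros::spin();
      return 0;
    }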

@doronhi
Contributor

doronhi commented Aug 24, 2021

If I understand correctly, you would like to apply the decimation filter to the RGB image after the aligned_depth_to_color image has already been created.
The wrapper does not support that kind of flexibility in the order in which it applies the filters. Maybe a change request is in order.
For now, I think the best approach for you would be to modify the frame_callback() function, the way you did before. I think you should apply the filters there rather than applying the transformation yourself at a later stage.

@MartyG-RealSense
Collaborator

Hi @mszuyx Do you have an update about this case that you can provide, please? Thanks!
