
How to align frames from two different sensors' pipelines? #10338

Closed
YoshiyasuIzumi opened this issue Mar 25, 2022 · 33 comments

@YoshiyasuIzumi

YoshiyasuIzumi commented Mar 25, 2022


Required Info
Camera Model D455
Firmware Version 05.12.15.50
Operating System & Version Win 10
Kernel Version (Linux Only) -
Platform PC
SDK Version 2.49.0.3474 (pyrealsense2)
Language python
Segment Robot

Issue Description

I encountered a frame-drop issue when streaming depth and color through a frame queue. This problem seems to be handled by giving each sensor its own frame queue, as described in this post.
However, when I tried to align color and depth with align.process, I could not apply align to the frames.
Could you give me advice on aligning color and depth when the two streams are delivered through separate frame queues?
Thanks!

import pyrealsense2 as rs
import time

ctx = rs.context()
device = ctx.devices[0]

# One frame queue per sensor so that depth and color are buffered independently
depth_sensor = device.query_sensors()[0]
depth_queue = rs.frame_queue(50)

color_sensor = device.query_sensors()[1]
color_queue = rs.frame_queue(50)


# Find the stream profile on this sensor that matches the requested settings
def GetProfile(sensor, stream_type, width, height, stream_format, fps):
    profiles = sensor.get_stream_profiles()

    for i in range(len(profiles)):
        profile = profiles[i].as_video_stream_profile()
        if profile.stream_type() != stream_type:
            continue
        if profile.width() != width:
            continue
        if profile.height() != height:
            continue
        if profile.format() != stream_format:
            continue
        if profile.fps() != fps:
            continue
        return profile

    return profiles[0]


depth_profile = GetProfile(depth_sensor, rs.stream.depth, 848, 480, rs.format.z16, 30)
color_profile = GetProfile(color_sensor, rs.stream.color, 848, 480, rs.format.bgr8, 30)

prev_depth_frame_num = 0
prev_color_frame_num = 0

start = time.time()

depth_sensor.open(depth_profile)
color_sensor.open(color_profile)

depth_sensor.start(depth_queue)
color_sensor.start(color_queue)

align_to = rs.stream.color
align = rs.align(align_to)

while time.time() - start < 3600:
    depth_frames = depth_queue.wait_for_frame()
    depth_frame_num = depth_frames.get_frame_number()
    print("D: " + str(depth_frame_num))

    color_frames = color_queue.wait_for_frame()
    color_frame_num = color_frames.get_frame_number()
    print("C: " + str(color_frame_num))

    # alined_frame = align.process(depth_frames)
    # # TypeError: process(): incompatible function arguments. The following argument types are supported:
    # #     1. (self: pyrealsense2.pyrealsense2.align, frames: pyrealsense2.pyrealsense2.composite_frame) -> pyrealsense2.pyrealsense2.composite_frame
    # #
    # # Invoked with: <pyrealsense2.pyrealsense2.align object at 0x02013DC0>, <pyrealsense2.frame Z16 #1>

    # alined_frame = align.process(depth_frames.as_frameset())
    # # RuntimeError: null pointer passed for argument "frame"

    if depth_frame_num % 20 == 0:
        time.sleep(1/4)

    if depth_frame_num - prev_depth_frame_num > 2:
        print('dropped ' + str(depth_frame_num - prev_depth_frame_num - 1) + ' depth frames')
    prev_depth_frame_num = depth_frame_num

    if color_frame_num - prev_color_frame_num > 2:
        print('dropped ' + str(color_frame_num - prev_color_frame_num - 1) + ' color frames')
    prev_color_frame_num = color_frame_num

depth_sensor.stop()
color_sensor.stop()

depth_sensor.close()
color_sensor.close()
@YoshiyasuIzumi YoshiyasuIzumi changed the title from "How to align frames from two streams?" to "How to align frames from two different sensorsstreams?" Mar 25, 2022
@YoshiyasuIzumi YoshiyasuIzumi changed the title from "How to align frames from two different sensorsstreams?" to "How to align frames from two different sensors' stream?" Mar 25, 2022
@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 25, 2022

Hi @YoshiyasuIzumi It usually is not necessary to put the depth and color streams on separate pipelines. In Python, two separate pipelines for different sensors would typically be used if IMU streams were enabled in addition to depth and color, with IMU on one pipeline and depth + color together on the other pipeline.

The RealSense Python wrapper has an example Python alignment program called align_depth2color.py that you could test if you have not done so already to see whether it meets your needs.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/align-depth2color.py
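For reference, the core of that example's single-pipeline pattern looks roughly like this (a condensed sketch rather than the full script; the 848x480 / 30 FPS settings simply mirror the ones used elsewhere in this thread):

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Map each depth frame onto the color sensor's viewpoint
align = rs.align(rs.stream.color)

try:
    while True:
        frames = pipeline.wait_for_frames()   # synchronized frameset
        aligned = align.process(frames)       # align.process expects a frameset
        depth_frame = aligned.get_depth_frame()
        color_frame = aligned.get_color_frame()
        if not depth_frame or not color_frame:
            continue
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        # ... work with the aligned images here ...
finally:
    pipeline.stop()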

In your sleep instruction (if depth_frame_num % 20 == 0: time.sleep(1/4)), are you trying to skip the first frames after the pipeline is started, please? If so, a simpler way to skip the first several frames, in order to allow the auto-exposure time to settle down, would be to use a 'for i in range' instruction directly before wait_for_frames(). For example:

for i in range(5):
    depth_frames = depth_queue.wait_for_frames()

Please note that I changed your code's 'wait_for_frame()' to 'wait_for_frames()', as wait_for_frames() is the correct name for the instruction.

@YoshiyasuIzumi
Author

YoshiyasuIzumi commented Mar 25, 2022

Hi @MartyG-RealSense
Thanks for the quick reply and advice! Let me add more details.
First, as you suggested, I tried to manage the color and depth frames in the same pipeline. But "rs.frame_queue(queue_size, keep_frames=True)" sometimes skipped depth frames. (The buffer size seems to be large enough, and color frames are not skipped.) Then I found the comment "[Provide a frame_queue to each individual Sensor object to asynchronously buffer frames from multiple streams.]" (#9022 (comment)). So I started looking for a way to align frames from two streams.

My goal is to get aligned color and depth images without skipping frames. I don't mind whether a single pipeline or multiple pipelines are used. Do you have any advice for this situation?

One more thing: in my case I don't use the auto-exposure feature, and this sleep timer is there to emulate a disturbance. Also, if I change "wait_for_frame" to "wait_for_frames", I get the error shown in the attached screenshot.
[Screenshot from 2022-03-25 17-14-13: error output]

Thanks!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 25, 2022

Ah, I see that you are using wait_for_frame() in terms of a frame queue rather than a pipeline. That makes sense. Thanks very much for the clarification.

http://docs.ros.org/en/kinetic/api/librealsense2/html/classrs2_1_1frame__queue.html#a1efb6d2e4fda101edaa2a08879d58291

Setting a custom frame queue size instead of using the default size of '1' is permitted but not recommended, as you can accidentally break the streams in doing so. The SDK's official frame buffer management documentation recommends a queue size of '2' if using two streams (such as depth and color).

https://github.com/IntelRealSense/librealsense/wiki/Frame-Buffering-Management-in-RealSense-SDK-2.0#latency-vs-performance
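As an illustration of that documentation (a minimal sketch of my own, assuming the sensors expose the frames_queue_size option described in the wiki; this is not code taken from the documentation):

import pyrealsense2 as rs

ctx = rs.context()
device = ctx.devices[0]

# Apply the recommended queue size of '2' for a depth + color setup to
# every sensor that supports the frames_queue_size option
for sensor in device.query_sensors():
    if sensor.supports(rs.option.frames_queue_size):
        sensor.set_option(rs.option.frames_queue_size, 2)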

@YoshiyasuIzumi
Author

Thank you for the inputs. I want to understand your advice correctly.
Are you recommending method 1 or method 2 below?

Method 1: A single stream with one frame queue for color and depth. Please see this reference.

Method 2: Two streams with two frame queues, one each for color and depth. Please see this reference also.

To avoid frame drops for color and depth, I thought method 2 was the recommended one, based on this comment:
#9022 (comment)

Or do you suggest doubling the queue size for a single stream (color and depth) with method 1?

Thank you for your help!

@MartyG-RealSense
Collaborator

When you say 'single stream' or 'two streams', I think that 'pipeline' would be the right term. The stream is the type of data that is being provided (depth, color, etc).

A single pipeline with two stream types (depth and color) and a custom Frame Queue Size set to '2' would be the most common implementation for those who have more than one stream enabled and wish to set their own custom frame queue size.

It is not the only way to implement a custom frame queue size though, of course. It depends on which method works best for you and your particular project.

Method 1 represented a script that didn't work correctly for the RealSense user that created it, whilst Method 2 was the script that did work for them. So I would suggest trying Method 2 and if it works for you then see if you can adapt it for your own project. You probably do not need to adjust the frame queue size at all though.

@YoshiyasuIzumi
Author

Thank you for the correction! As you pointed out, 'pipeline' would be the appropriate term.
To be honest, I encountered the skipped-frame issue with method 1 even with a queue size of 2, so I would prefer to try method 2, as you also suggested.
However, if I choose method 2, I cannot use align.process...
Could you let me know the recommended method to align color and depth with two pipelines (not with a single pipeline)?
Thank you for your help!

@MartyG-RealSense
Collaborator

I could not find a reference for putting color and depth on two separate pipelines, unfortunately.

An alternative to having two separate pipelines is to use callbacks, as described at #5417
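For illustration, a callback-based approach for aligning depth and color could look roughly like this (a minimal sketch of my own, not code taken from #5417; heavy work inside the callback can stall the streaming thread, so in practice you may want to only copy data out of the callback):

import queue
import pyrealsense2 as rs

results = queue.Queue()
align = rs.align(rs.stream.color)

def on_frames(frame):
    # A pipeline started with a callback delivers synchronized framesets
    if frame.is_frameset():
        aligned = align.process(frame.as_frameset())
        results.put(aligned.get_depth_frame().get_frame_number())

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
pipeline.start(config, on_frames)

# The main thread is now free to consume items from 'results'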

@YoshiyasuIzumi YoshiyasuIzumi changed the title from "How to align frames from two different sensors' stream?" to "How to align frames from two different sensors' pipelines?" Mar 25, 2022
@YoshiyasuIzumi
Author

> I could not find a reference for putting color and depth on two separate pipelines, unfortunately.

I see... If you get any idea to handle this case, please let me know!

> An alternative to having two separate pipelines is to use callbacks, as described at #5417

Thank you for sharing a useful reference. Related to the issue you picked up, I found an approach using rs.frame_source.allocate_composite_frame.
Manual frameset creation #5847

I tried to make it run, but I encountered a frame timeout error, the same as in the comments below.
#5847 (comment)
#9900 (comment)

I checked your advice, but I couldn't manage it...
Is there any chance that you or your team members (such as @ev-mp?) could help me solve this?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 26, 2022

I researched the issue further but didn't find anything that would help.

If your aim is to reduce frame drops, an alternative to improving performance through scripting may be to build the librealsense SDK from source with CMake and include the build term -DBUILD_WITH_OPENMP=true. This enables librealsense to automatically take advantage of multiple cores on the computer's CPU when performing depth-color alignment, which reduces the chance of maxing out a single core's usage and degrading performance under the heavy processing burden of the alignment operation.

@YoshiyasuIzumi
Author

I see...
Thanks for your help. I'll try that build option.
Please close this issue, but could you kindly let me know if you find any useful information about this issue?
Thank you.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 27, 2022

Research on this issue would stop once the case is closed, so it may be worth keeping the case open for a further period and first trying the OpenMP build option. If that does not resolve your problem, let me know and I can look at other possible approaches to resolving your frame drops.

For example, you mentioned earlier in #10338 (comment) that you do not have auto-exposure enabled. If manual exposure is set in a certain range that exceeds an 'upper bound' then it can cause the FPS rate to lag, as discussed at #1957
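As a rough illustration of that point (a sketch of my own; the 8500 value and the ~33 ms bound are assumptions derived from the 30 FPS frame interval, not figures taken from #1957):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

# Disable auto-exposure and choose a manual exposure (in microseconds) that
# stays below the ~33 ms frame interval of a 30 FPS stream so FPS does not lag
depth_sensor.set_option(rs.option.enable_auto_exposure, 0)
depth_sensor.set_option(rs.option.exposure, 8500)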

@MartyG-RealSense
Collaborator

Hi @YoshiyasuIzumi Was the information provided in the comment above helpful to you, please? Thanks!

@YoshiyasuIzumi
Author

YoshiyasuIzumi commented Apr 5, 2022

Hi @MartyG-RealSense
Sorry for my late response.
I tested the frame_queue behavior with color and depth using two builds of pyrealsense2: one installed with pip and one built with the "-DBUILD_WITH_OPENMP=true" option. In my case the frame_queue drops frames in some cases with both builds, but let me explain what I observed.
In my test the color and depth FPS was 30 and the sleep timer was set to 1/4 s, so while the timer waited, 30 * 1/4 = 7.5 frames were enqueued. I therefore tried queue sizes of 2, 8 and 24, but in all cases both builds dropped frames during the process.
The frequency of frame drops increased as the queue size was decreased.
So could you give me guidance on determining an appropriate queue size that avoids frame drops with a frame_queue for color and depth?
Thanks!

import pyrealsense2.pyrealsense2 as rs  # For import built module
# import pyrealsense2 as rs # For import pip installed module
import time

prev_frame_num = 0
prev_color_frame_num = 0
prev_depth_frame_num = 0

def slow_processing(frame, diff_t):
    global prev_frame_num
    global prev_color_frame_num
    global prev_depth_frame_num

    n = frame.get_frame_number()
    if n % 20 == 0:
        time.sleep(1/4)

    color_fn = frame.as_frameset().get_color_frame().get_frame_number()
    depth_fn = frame.as_frameset().get_depth_frame().get_frame_number()

    if color_fn - prev_color_frame_num > 1:
        print(f'[{diff_t}] dropped color {color_fn - prev_color_frame_num - 1} frames in {n} frame')

    if depth_fn - prev_depth_frame_num > 1:
        print(f'[{diff_t}] dropped depth {depth_fn - prev_depth_frame_num - 1} frames in {n} frame')

    if n - prev_frame_num > 1:
        print(f'[{diff_t}] dropped {n - prev_frame_num - 1} frames in {n} frame')
    prev_frame_num = n
    prev_color_frame_num = color_fn
    prev_depth_frame_num = depth_fn

#    print(n)

try:
   pipeline = rs.pipeline()

   config = rs.config()
   config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
   config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)

   print("Slow callback + queue")
   queue = rs.frame_queue(2,True)
   # queue = rs.frame_queue(8,True)
   # queue = rs.frame_queue(24,True)
   pipeline.start(config, queue)
   start = time.time()
   t0 = time.time()
   while time.time() - start < 3600 * 6:
       t1 = time.time()
       diff_t = t1 - t0
       t0 = t1
       frames = queue.wait_for_frame()
       slow_processing(frames, diff_t)
   pipeline.stop()

except Exception as e:
   print(e)
except:
   print("A different Error")
else:
   print("Done")

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 6, 2022

Normally, the advice would be to use a frame queue size of '2' when using depth + color. Your sleep timer to emulate disturbance may be causing frame drop problems that would not otherwise occur though. Could you try setting the frame queue to '2' and commenting out the two lines of the sleep instruction to see what difference it makes?

# if depth_frame_num % 20 == 0:
#     time.sleep(1/4)

@Andyshen555

@YoshiyasuIzumi Could you please share the final code where you solved the queue and align together? Appreciated.

@YoshiyasuIzumi
Author

Hi @MartyG-RealSense and @Andyshen555
Sorry for my late response again.
I confirmed by commenting out the sleep instruction and setting the frame queue size to '2', running for 12 hours, and I did not see any frame drops.
But what I want to achieve is that even if my processing is delayed, the queue does not drop frames as long as the delay is within an expected range. In other words, I want to use the frame_queue feature like this example, but for color and depth (with alignment).
Please help us!
Thanks!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 11, 2022

@YoshiyasuIzumi The Python case #7067 may be a useful reference. It looks at inconsistent performance from the frame queue when retrieving and aligning frames, and tries using the RealSense SDK's Keep() function to solve the problem.

Keep() stores frames in the computer's memory instead of writing them to the computer's storage. When the pipeline is closed, you can then perform batch processing on all the stored images from that pipeline session at the same time (such as alignment and post-processing) and then save all the frames to file in a single action. A disadvantage of Keep(), though, is that because the stored frames consume computer memory, it is suited to short-duration processing sessions, such as 10 seconds on a low-end computing device or 30 seconds on a higher-end computer such as a PC.
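A condensed sketch of that Keep() workflow (my own illustration, assuming a short capture of roughly 10 seconds at 30 FPS):

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
pipeline.start(config)

stored = []
for _ in range(300):                  # roughly 10 seconds at 30 FPS
    frames = pipeline.wait_for_frames()
    frames.keep()                     # keep the frameset in memory
    stored.append(frames)
pipeline.stop()

# Batch-process everything after the pipeline is closed, e.g. alignment
align = rs.align(rs.stream.color)
for frames in stored:
    aligned = align.process(frames)
    # ... save or post-process aligned.get_depth_frame() / get_color_frame() ...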

@YoshiyasuIzumi
Author

@MartyG-RealSense Thank you for the advice. Let me confirm my understanding. According to your comments, to access frames kept with the Keep() function, the pipeline needs to be closed, right? So if I want to process frames while streaming, the Keep() function seems not to be appropriate, right? If that is not the case, please add your insight.
Also, I couldn't find a frame_queue in the code in #7067, and the keep method seems to be called after the pipe.wait_for_frames() method. I assume we use a frame_queue with keep_frames=True in order to avoid frame drops with pipe.wait_for_frames(). The frame_queue seems to work with color or depth alone, but with a frame_queue for color and depth together we observed frame drops. That is the situation from my point of view.
So I want to know: is there any plan to update frame_queue to support color and depth without unexpected frame drops?

@MartyG-RealSense
Collaborator

Correct, you need to close the pipeline before processing the set of frames that are stored in memory.

#7067 does not contain a reference to changing the frame queue size. The script in it that I was referring to acts as an example of using Keep() with alignment. If this script performs well for you then you may not need to change the frame queue size.

# get video length
frame_counts = get_num_frames(example_file)
print(frame_counts)
for i in range(frame_counts):
    print(i)
    frames = pipe.wait_for_frames()
    if not frames:
        print("Error: not all frames are extracted")
        break
    frames.keep()
    frameset.append(frames)
pipe.stop()
for i in range(len(frameset)):
    align_to = rs.stream.color
    dimg, cimg = align_frame(frameset[i], align_to)
    io.imsave("./test_res2/" + str(i) + ".jpg", np.hstack((cimg, dimg)))

In regard to frame drops, RealSense cameras and the RealSense SDK are used with so many different hardware and software configurations that it is likely that it would not be possible to develop a single 'one size fits all' solution that guarantees that frame drops cannot occur.

@YoshiyasuIzumi
Author

@MartyG-RealSense
Thank you for your reply. I found a condition that avoids the frame drops, with the 1/4 s sleep every 20 frames, over a 12-hour run. I'm not sure this setting will always avoid frame drops, but it succeeded when I simply increased the queue size to 50.
I'm really not sure why 50 works and 24 doesn't. Do you have any insight into why this happens? And if you have any advice for choosing the queue size, please share it with us.
Thank you.

import pyrealsense2.pyrealsense2 as rs  # For import built module
# import pyrealsense2 as rs # For import pip installed module
import time

prev_frame_num = 0
prev_color_frame_num = 0
prev_depth_frame_num = 0

def slow_processing(frame, diff_t):
    global prev_frame_num
    global prev_color_frame_num
    global prev_depth_frame_num

    n = frame.get_frame_number()
    if n % 20 == 0:
        time.sleep(1/4)

    color_fn = frame.as_frameset().get_color_frame().get_frame_number()
    depth_fn = frame.as_frameset().get_depth_frame().get_frame_number()

    if color_fn - prev_color_frame_num > 1:
        print(f'[{diff_t}] dropped color {color_fn - prev_color_frame_num - 1} frames in {n} frame')

    if depth_fn - prev_depth_frame_num > 1:
        print(f'[{diff_t}] dropped depth {depth_fn - prev_depth_frame_num - 1} frames in {n} frame')

    if n - prev_frame_num > 1:
        print(f'[{diff_t}] dropped {n - prev_frame_num - 1} frames in {n} frame')
    prev_frame_num = n
    prev_color_frame_num = color_fn
    prev_depth_frame_num = depth_fn

try:
   pipeline = rs.pipeline()

   config = rs.config()
   config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
   config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)

   print("Slow callback + queue")
   queue = rs.frame_queue(50,True)  # No frame drop in my test
   # queue = rs.frame_queue(24,True)  # Frame drop in my test
   pipeline.start(config, queue)
   start = time.time()
   t0 = time.time()
   while time.time() - start < 3600 * 12:
       t1 = time.time()
       diff_t = t1 - t0
       t0 = t1
       frames = queue.wait_for_frame()
       slow_processing(frames, diff_t)
   pipeline.stop()

except Exception as e:
   print(e)
except:
   print("A different Error")
else:
   print("Done")

@MartyG-RealSense
Collaborator

A RealSense team member advises at #5041 (comment) that the frame queue size controls the total number of frames in circulation across all queues. So the higher the frame queue number, the greater the number of frames that can be in the queues at the same time. As the frame queue size is increased, latency also increases but the risk of frames being dropped reduces.

@MartyG-RealSense
Collaborator

Hi @YoshiyasuIzumi Do you require further assistance with this case, please? Thanks!

@YoshiyasuIzumi
Author

Hi @MartyG-RealSense , let me confirm this thread's summary.
[Summary]

  • (To avoid frame drops for color and depth,) provide a frame_queue to each individual Sensor object to asynchronously buffer frames from multiple streams. Reference
  • But the above method has no way to align depth to color, at least in Python.

[Side note]

  • When I set a large frame queue size, I succeeded in avoiding frame drops for 12 hours. But there is no guide for deciding a sufficiently large frame_queue size, and there is no guarantee of avoiding frame drops.

If you have any correction, please let me know.
Thanks.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 21, 2022

There is no official guidance for the frame queue size other than the '2' that is recommended by the frame buffering management documentation for a 2-stream setup. I have also seen '10' and '50' used in an official SDK example script, similar to how you are using 50 in your own script.

Aside from that, it may be a matter of trial and error testing to find a value that provides the stability that you require without breaking the streams.


To be honest, I do not have enough knowledge of the specific subject of frame queue programming to provide confirmation or correction of the information in #9022 - however, ev-mp is a senior Intel RealSense team member and leading expert in RealSense programming, so certainly any statements that are made by him can be accepted as completely reliable.

@MartyG-RealSense
Collaborator

Hi @YoshiyasuIzumi Do you require further assistance with this case, please? Thanks!

@YoshiyasuIzumi
Author

YoshiyasuIzumi commented Apr 28, 2022

@MartyG-RealSense
Thank you for following up. I think we can close this case. But I believe it is important to be able to use a frame queue to avoid frame drops for color and depth and to align them. So if you find a better way to achieve this goal, please share it with us!
Regards,

@MartyG-RealSense
Collaborator

I do not have any further information to offer on this subject at this time. RealSense community members reading this case are welcome to leave comments to share their own knowledge though. As suggested, I will close the case in the meantime. Thanks again!

@maxstrobel

Is there in the meantime a solution available to buffer frames via queues AND align the buffered frames afterwards?

@MartyG-RealSense
Collaborator

@maxstrobel A comment at #10042 (comment) about storing frames in memory with the Keep() instruction, and the response beneath it, might provide useful insights.

@maxstrobel

@MartyG-RealSense Thanks for the quick response.

I looked into #10042, #1000 and #6146.

However, I am not sure whether keep is really what we need.
My use case is quite similar to what @YoshiyasuIzumi described:

I have a realtime stream of data, both RGB and depth. Most of the time my post-processing is fast enough to keep pace with the data acquisition. However sometimes, I encounter some missed frames, because the post-processing took too long or there was some other high load on the machine.

Is there an option to buffer the data, RGB and depth, with a queue AND align the buffered frames afterwards?

@MartyG-RealSense
Collaborator

@maxstrobel If you are using Python then multiprocessing could allow you to do two separate operations simultaneously, as described at #9085

Though if the processing bottleneck is occurring at your post-processing filters then it would be logical to improve performance by optimizing the filters.

If you are using the Spatial filter then I would recommend removing it, as the Spatial filter has one of the highest processing costs for the least benefit.
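As a sketch of the multiprocessing idea (my own illustration, not code from #9085; frame objects are not picklable, so the data is copied into numpy arrays before being handed to the worker process):

import multiprocessing as mp
import numpy as np
import pyrealsense2 as rs

def worker(job_queue):
    # Heavy post-processing runs in a separate process
    while True:
        item = job_queue.get()
        if item is None:
            break
        depth_image, color_image = item
        # ... post-process the numpy arrays here ...

def main():
    job_queue = mp.Queue(maxsize=50)
    proc = mp.Process(target=worker, args=(job_queue,))
    proc.start()

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
    pipeline.start(config)
    align = rs.align(rs.stream.color)

    try:
        for _ in range(900):  # about 30 seconds at 30 FPS
            frames = pipeline.wait_for_frames()
            aligned = align.process(frames)   # alignment stays in this process
            depth_image = np.asanyarray(aligned.get_depth_frame().get_data()).copy()
            color_image = np.asanyarray(aligned.get_color_frame().get_data()).copy()
            job_queue.put((depth_image, color_image))
    finally:
        pipeline.stop()
        job_queue.put(None)
        proc.join()

if __name__ == "__main__":
    main()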

@maxstrobel

@MartyG-RealSense - If I use multiprocessing or other approaches, where I get separate frames for the individual sensors, e.g. depth and RGB, is there any possibility to align the frames afterwards?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Sep 20, 2023

If you store the frames in memory with the Keep() instruction then you can perform batch-processing actions on all the frames at the same time at the end after the pipeline has been closed, such as saving the frames to file, post-processing them or aligning them.
