Multi-camera operation #12686
Hi @jpfsaunders Use of wait_for_frames() is not recommended for applications that use multiple cameras; for multicam, poll_for_frames() is ideal. The RealSense SDK's rs-multicam example program, which creates a separate pipeline for each camera and uses poll_for_frames(), is a good test of multicam performance: https://github.com/IntelRealSense/librealsense/tree/master/examples/multicam

If you are able to test your 4 cameras with the rs-multicam example, how do they perform in that program compared to your own project?

A feature called Global Time is enabled by default on RealSense 400 Series cameras, so multiple cameras should already be synced to a common timestamp without your having to create your own sync mechanism, as described at #3909

By default, the RealSense pipeline can hold a maximum of 16 frames for each individual stream type. As you suggest, this frame queue capacity can be increased to hold more frames in the pipeline simultaneously, though this may result in more of the computer's memory being consumed by the program.
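For reference, the rs-multicam pattern described above (one pipeline per connected device, serviced with non-blocking polling) can be sketched roughly like this. This is a sketch, not the exact example code; the stream settings are illustrative and it assumes all connected devices support the requested profiles:

```cpp
// Sketch of the rs-multicam pattern: one rs2::pipeline per detected device,
// each serviced with non-blocking poll_for_frames().
// Stream settings (848x480 @ 15 fps) are illustrative.
#include <librealsense2/rs.hpp>
#include <map>
#include <string>

int main() {
    rs2::context ctx;
    std::map<std::string, rs2::pipeline> pipes;

    // Start one pipeline per detected device, keyed by serial number.
    for (auto&& dev : ctx.query_devices()) {
        std::string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        rs2::config cfg;
        cfg.enable_device(serial);
        cfg.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 15);
        cfg.enable_stream(RS2_STREAM_COLOR, 848, 480, RS2_FORMAT_BGR8, 15);
        rs2::pipeline pipe(ctx);
        pipe.start(cfg);
        pipes.emplace(serial, pipe);
    }

    while (true) {
        for (auto& [serial, pipe] : pipes) {
            rs2::frameset fs;
            // Non-blocking: returns false immediately if no new frameset
            // is ready for this camera, so no camera stalls the others.
            if (pipe.poll_for_frames(&fs)) {
                // ... process this camera's frameset ...
            }
        }
    }
}
```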
Hello @MartyG-RealSense I don't use wait_for_frames() in my application; I use poll_for_frames(). However, I only do that to get framesets from the 3 cameras not directly associated with the callback.

In my original application from several months ago, I used the rs-multicam example to manage data from 3 cameras. In that case I just used a big state machine that first polled for frames on each camera, and then, when all framesets were obtained, moved on to the next state to work with all 3 cameras' frames together. That works, but I don't really like that method for my newer 4-camera app because it makes everything sequential. I would prefer to obtain the framesets as they become available and let my main code loop work with the most recent frames (i.e., to more or less read new frames and work on them in parallel, as managed by threads).

To get that to happen, I am trying to use a callback associated with 1 of the 4 cameras. When the callback executes I know there is a frameset available for that camera, so I get it and then poll for frames in sequence on the other 3 cameras. All of that is working, but strangely there is a delay of about 1 second on the frames acquired from the 1 camera associated directly with the callback. The 1-second delay is with respect to the frames obtained from the other 3 cameras (all of those appear to be in sync and ahead of the callback camera). Really I was just trying to figure out why the 1-second delay existed and whether there was a way to eliminate it.

After I initiated this issue/question, I decided to re-write my code to not use a callback and instead manage the frameset acquisition in a method similar to the Intel background-processing example: https://github.com/IntelRealSense/librealsense/wiki/API-How-To#do-processing-on-a-background-thread
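The background-processing pattern from that wiki page can be sketched as follows. This is a hedged sketch of the general technique (capture thread enqueues, worker thread consumes), not a copy of the wiki code; the queue capacity of 1 is an assumption chosen so that only the newest frameset survives:

```cpp
// Sketch of the "processing on a background thread" pattern from the
// librealsense wiki: the capture loop hands framesets to an rs2::frame_queue,
// and a separate worker thread does the heavy processing.
// The capacity of 1 (keep only the newest frameset) is an assumption here.
#include <librealsense2/rs.hpp>
#include <thread>

int main() {
    rs2::pipeline pipe;
    rs2::frame_queue queue(1);  // small capacity: stale framesets are dropped

    // Worker thread: blocks until the capture loop enqueues something.
    std::thread worker([&]() {
        while (true) {
            rs2::frame f = queue.wait_for_frame();
            if (auto fs = f.as<rs2::frameset>()) {
                // ... heavy processing here, off the capture thread ...
            }
        }
    });

    pipe.start();
    while (true) {
        rs2::frameset fs = pipe.wait_for_frames();
        queue.enqueue(fs);  // hand off; capture loop stays responsive
    }

    worker.join();
}
```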
Let me know how using frameset acquisition instead of a callback works out for you, please. Good luck!
Hello @MartyG-RealSense I am still not sure why the original method I tried resulted in a delay on one camera, but the 4-callback scheme is acceptable as a solution.
Thanks so much @jpfsaunders for sharing your solution. It's excellent to hear that you were successful.
Issue Description
I am reading depth and RGB from four D455 cameras. To start off I am just trying to sync them roughly via software. I have a callback on one camera, and within the callback I get the frameset from the associated camera and also use rs2::poll_for_frames() on the other 3 cameras in sequence. Frame rate = 15 fps, and resolution of the cameras (both depth & color) is 848x480. I enable streams for depth, infrared and color.
This method seems to work well except that if I display the resulting 4 color frames to verify all is working, the frame from the camera which triggers the callback is delayed with respect to the others by about 1 second. All 3 other cameras seem to be in sync with little or no noticeable delay. I am guessing that the one camera has a larger queue size (maybe = 16) than the others for some reason. I see there is a way to adjust the rs2::frame_queue size on a camera, but in the examples the queue is used with wait_for_frames() or poll_for_frames(), and I seem to need to use it with the callback instead. Is there a way I can get rid of the delay and still use the callback? If not, I can change my code to use a different method of getting the camera framesets.
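One way to combine an explicit queue size with callback-style delivery, sketched under the assumption that this fits the setup above: an rs2::frame_queue constructed with a chosen capacity can itself be passed to pipeline::start() in place of a lambda callback, so the queue receives the frames and its capacity bounds how many can back up. The capacity of 1 below is illustrative, not a recommendation:

```cpp
// Sketch: passing an rs2::frame_queue to pipeline::start() so the queue
// acts as the frame callback, with an explicit (illustrative) capacity of 1
// so old framesets are dropped rather than accumulating.
#include <librealsense2/rs.hpp>

int main() {
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 15);
    cfg.enable_stream(RS2_STREAM_COLOR, 848, 480, RS2_FORMAT_BGR8, 15);

    rs2::frame_queue queue(1);  // bounded capacity: newest frameset wins
    pipe.start(cfg, queue);     // the queue is used as the frame callback

    while (true) {
        rs2::frameset fs;
        // Non-blocking check; when it succeeds, fs is the latest frameset.
        if (queue.poll_for_frame(&fs)) {
            // ... work on the most recent frameset ...
        }
    }
}
```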
Thank you.