Add computation of size contribution to info verb #1726
Conversation
@nicolaloi Thanks for your contribution.
Without deserialization, the per-topic message size calculation will be fairly fast in the vast majority of cases, likely taking no more than a few seconds. I think adding a separate CLI argument like --size-contribution doesn't make sense. It would be much simpler to output the topics' size contribution by default with the --verbose CLI option.
See my detailed comments and suggestions inline.
Force-pushed from 586e4fb to a723f75
Addressed review comments, added size contribution for services, and updated tests and design doc.
@ros-pull-request-builder retest this please
I will edit the file with the style divergence error. I had missed it since the tests no longer gave me any errors locally.
@nicolaloi Thanks for addressing the review comments.
Overall, it looks good apart from a few nitpicks, mainly related to using const references.
Thanks, looks good to me if CI passes green.
@nicolaloi The Rpr job failed because a rebase is needed.
Force-pushed from e3e5f8d to b131440
Signed-off-by: Nicola Loi <[email protected]>
Co-authored-by: Michael Orlov <[email protected]>
- Also update rosbag2_tests
- Also add new test and update design doc
Force-pushed from b131440 to 1a52858
@MichaelOrlov I have rebased the branch and updated the new size contribution test I added to match the latest changes in rolling, but now the Rpr test is failing due to an unrelated rcpputils::fs issue, as in the latest rolling merged PR #1740 (log L32773)
@nicolaloi No worries. Those test failures are unrelated to the changes from this PR; the tests fail because the CI job from GitHub Actions uses binaries for the other dependent packages from the latest release on rolling. Anyway, I will run CI jobs on the ROS 2 build farm, which compiles all packages from source. If everything is correct, all tests should pass.
Pulls: #1726
Addresses #1601 feature request (Show size contribution of each topic with ros2 bag info). This is a test draft.
Output of `ros2 bag info test_bag -v --size-contribution`:

The total size contribution of each topic is computed by sequentially reading every message in the rosbag. As mentioned in #1601, this approach can be slow.
In a first test A (previous output), the computation of the size contribution took ~520 ms for a 2.6 GiB, 39.5 s long bag with 2310 messages: two sensor_msgs/msg/Image topics with 763 messages each, and one sensor_msgs/msg/PointCloud2 topic with 765 messages.
In a second test B, the size computation took ~78 ms for a 5.2 MiB, 74.4 s long bag with 14577 messages, with one sensor_msgs/msg/Imu topic with 14559 messages.
So to me, it seems the computation time depends not simply on the number of messages, but on their types and individual sizes too. Out of curiosity, I tried skipping the message serialization step inside the reader via a crude modification of the mcap_storage interface that directly accesses the messageView.message.dataSize variable. However, this only resulted in a 15-30% improvement in computation time.
As another test, removing the actual size computation code and just reading the rosbag in an empty while loop doesn't really change the timing (the only real difference was in test B, 78 ms -> 68 ms).
In `rosbag2_py/_info.cpp`, the new function `compute_topics_size_contribution` can be combined with the existing `read_service_info` to avoid reading the rosbag twice.

@MichaelOrlov looking forward to your feedback and suggested changes.