
Update service/client construction/destruction API return codes. #247

Merged (1 commit into master, Sep 24, 2020)

Conversation

hidmic (Contributor) commented Sep 23, 2020

Connected to ros2/rmw#276. Precisely what the title says.
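For context, the rmw API update this connects to pins down which return codes the service/client construction/destruction entry points should produce, for example RMW_RET_INVALID_ARGUMENT for null handles and RMW_RET_INCORRECT_RMW_IMPLEMENTATION for handles created by a different implementation. A minimal sketch of that contract on a destruction path follows; the function name, identifier constant, and error messages are illustrative assumptions, not code from this PR.

  #include <cstring>

  #include "rmw/error_handling.h"
  #include "rmw/rmw.h"

  // Illustrative identifier; the real constant lives inside rmw_cyclonedds_cpp.
  static const char * const kRmwId = "rmw_cyclonedds_cpp";

  // Hypothetical destruction entry point showing the expected return codes:
  // invalid arguments and mismatched implementations get dedicated codes
  // instead of a generic RMW_RET_ERROR.
  rmw_ret_t example_destroy_client(rmw_node_t * node, rmw_client_t * client)
  {
    if (node == nullptr || client == nullptr) {
      RMW_SET_ERROR_MSG("node or client handle is null");
      return RMW_RET_INVALID_ARGUMENT;
    }
    if (std::strcmp(node->implementation_identifier, kRmwId) != 0 ||
      std::strcmp(client->implementation_identifier, kRmwId) != 0)
    {
      RMW_SET_ERROR_MSG("handle was not created by this rmw implementation");
      return RMW_RET_INCORRECT_RMW_IMPLEMENTATION;
    }
    // ... tear down the middleware entities here ...
    return RMW_RET_OK;
  }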

hidmic (Contributor, Author) commented Sep 23, 2020

CI up to test_rmw_implementation and rcl:

  • Linux Build Status
  • Linux-aarch64 Build Status
  • macOS Build Status
  • Windows Build Status

Lobotuerk (Contributor) left a comment

LGTM

Comment on lines +3848 to +3853
+ // before proceeding to outright ignore given QoS policies, sanity check them
+ dds_qos_t * qos;
- if ((qos = dds_create_qos()) == nullptr) {
+ if ((qos = create_readwrite_qos(qos_policies, false)) == nullptr) {
+   goto fail_qos;
+ }
+ dds_reset_qos(qos);
Member

can you explain this?

Collaborator

I also don't understand this, but then I also don't understand what the QoS policies are doing here in the first place:

  • best-effort, keep-last, or a lifespan are fundamentally broken, because the request or the response may then be dropped silently;
  • service invocations are (by definition) aperiodic data, so deadline doesn't apply, and, it would seem, neither does liveliness;
  • transient-local would be an interesting choice for dealing with the discovery latencies (and can safely be done with Cyclone by leaving the "durability service history" at "keep-last 1"), but the discovery latencies have been dealt with in another way already, so it doesn't make sense either.

Allowing avoid_ros_namespace_conventions might be useful, but I think one should really only use that for interfacing with existing non-ROS 2 systems. It seems unlikely that those systems would happen to implement the service invocation protocol.

Arguably that leaves only ignore_local_publications, which could be interpreted as a request to ignore services in the same node, or participant, or, indeed, process. But I'm not sure what the interpretation of it is supposed to be and it doesn't seem to be supported by all RMW implementations anyway. So I doubt it would be wise to interpret it.

And so I'd argue none of the QoS policies are meaningful, and if you want to check the validity, you should check whether they match the above, or are set to "system default" ...
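A hypothetical helper sketching that suggestion at the rmw profile level; the whitelist below simply mirrors the reasoning above (reliable, keep-all, volatile, no deadline/lifespan/liveliness constraints, or "system default") and is not code from this PR.

  #include "rmw/types.h"

  // Hypothetical check: accept a service QoS profile only if each policy is
  // either left at "system default" or set to the one value that is safe for
  // request/reply traffic.
  static bool service_qos_is_acceptable(const rmw_qos_profile_t & qos)
  {
    const auto unset = [](const rmw_time_t & t) {return t.sec == 0 && t.nsec == 0;};
    const bool reliability_ok =
      qos.reliability == RMW_QOS_POLICY_RELIABILITY_SYSTEM_DEFAULT ||
      qos.reliability == RMW_QOS_POLICY_RELIABILITY_RELIABLE;
    const bool history_ok =
      qos.history == RMW_QOS_POLICY_HISTORY_SYSTEM_DEFAULT ||
      qos.history == RMW_QOS_POLICY_HISTORY_KEEP_ALL;
    const bool durability_ok =
      qos.durability == RMW_QOS_POLICY_DURABILITY_SYSTEM_DEFAULT ||
      qos.durability == RMW_QOS_POLICY_DURABILITY_VOLATILE;
    const bool liveliness_ok =
      qos.liveliness == RMW_QOS_POLICY_LIVELINESS_SYSTEM_DEFAULT ||
      qos.liveliness == RMW_QOS_POLICY_LIVELINESS_AUTOMATIC;
    return reliability_ok && history_ok && durability_ok && liveliness_ok &&
           unset(qos.deadline) && unset(qos.lifespan) &&
           unset(qos.liveliness_lease_duration);
  }

A false result would then feed whatever invalid-profile error path the implementation uses.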

Member

> I also don't understand this, but then I also don't understand what the QoS policies are doing here in the first place:

Yes, we should probably have separate service/topic QoS profile definitions.
Service QoS settings are mostly unusable right now.
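To illustrate what a separate definition could mean (purely hypothetical, not an existing or proposed rmw type), a service-specific profile would only expose the knobs that make sense for request/reply:

  #include "rmw/types.h"

  // Purely hypothetical service-specific QoS profile, shown only to illustrate
  // the idea of separating service and topic QoS definitions.
  typedef struct example_rmw_service_qos_profile_s
  {
    // How long a client is willing to wait for a response (illustrative knob).
    rmw_time_t response_timeout;
    // Bypass ROS naming conventions, e.g. when bridging to non-ROS 2 systems.
    bool avoid_ros_namespace_conventions;
  } example_rmw_service_qos_profile_t;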

Contributor Author

Indeed, I've never seen service QoS policies being used in any meaningful way out there.

However, I won't argue here with the decisions made by each implementation. This change is simply ensuring rmw_cyclonedds_cpp does not accept invalid QoS profiles, as the API mandates.

Member

I imagine that this create_readwrite_qos/dds_reset_qos combo is the same as calling dds_create_qos.
If that's the case, LGTM.

Collaborator

It is the same.

Contributor Author

Oh, yes, it is!
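For reference, the pattern the thread converged on can be summarized as follows; create_readwrite_qos is the rmw_cyclonedds_cpp helper visible in the diff above, and its exact signature is assumed here rather than quoted.

  #include "dds/dds.h"
  #include "rmw/types.h"

  // Assumed signature of the rmw_cyclonedds_cpp helper referenced in the diff.
  dds_qos_t * create_readwrite_qos(const rmw_qos_profile_t * qos_policies, bool ignore_local_publications);

  // Build a QoS from the ROS profile (which fails on invalid policy values),
  // then reset it so no policy remains set. The result is in the same state as
  // a freshly allocated dds_create_qos() object; the detour only adds the
  // sanity check on qos_policies.
  dds_qos_t * sanity_checked_default_qos(const rmw_qos_profile_t * qos_policies)
  {
    dds_qos_t * qos = create_readwrite_qos(qos_policies, false);
    if (qos == nullptr) {
      return nullptr;  // invalid profile: the caller reports the error and bails out
    }
    dds_reset_qos(qos);  // clear every policy create_readwrite_qos set
    return qos;          // equivalent to the result of dds_create_qos()
  }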

hidmic (Contributor, Author) commented Sep 24, 2020

CI up to test_rmw_implementation and rcl:

  • Linux Build Status
  • Linux-aarch64 Build Status
  • macOS Build Status
  • Windows Build Status

hidmic (Contributor, Author) commented Sep 24, 2020

@eboasson @ivanpauno anything else?

ivanpauno (Member) left a comment

LGTM

Checking the validity of a QoS profile that is being ignored feels weird to me.
That aside, having consistent error handling across the different RMW implementations is a great improvement.

hidmic (Contributor, Author) commented Sep 24, 2020

> Checking the validity of a QoS profile that is being ignored feels weird to me.

Yeah, it is odd. I don't want to start a discussion here and now, but this is the only Tier 1 implementation that chose to ignore QoS settings when the API suggests (and now explicitly says) otherwise. Perhaps we owe ourselves a discussion about this, but right now we need a (seemingly) consistent API.

hidmic (Contributor, Author) commented Sep 24, 2020

Thanks for the review, @ivanpauno!

hidmic merged commit d7748d8 into master on Sep 24, 2020
The delete-merged-branch bot deleted the hidmic/compliant-service-n-client-creation branch on September 24, 2020 at 18:04
ahcorde pushed a commit that referenced this pull request Oct 9, 2020
ahcorde pushed a commit that referenced this pull request Oct 15, 2020