Support for 429 Retry-After header #5011
travis-minke-sap started this conversation in Ideas
Replies: 1 comment · 1 reply
-
Sooo many opinions!!! ;) I created PR #5285 with a proposed initial step - it doesn't change the API shape or current behavior, and is opt-in for now...
-
Hi All,
During today's Knative Eventing WG meeting I started a conversation about whether / how we might support the 429 `Retry-After` header as described in the CloudEvents WebHook Specification. Specifically, Section 2.2 notes that `Retry-After` is a standard header as described in https://tools.ietf.org/html/rfc7231#section-7.1.3.
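For context, RFC 7231 allows `Retry-After` to carry either a number of seconds or an HTTP-date. A minimal, illustrative Go sketch of parsing it (the `parseRetryAfter` helper is hypothetical, not existing Knative code):

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// parseRetryAfter interprets the Retry-After header per RFC 7231 Section 7.1.3,
// which allows either a delay in seconds or an HTTP-date. Hypothetical helper,
// not existing Knative code.
func parseRetryAfter(resp *http.Response) (time.Duration, bool) {
	value := resp.Header.Get("Retry-After")
	if value == "" {
		return 0, false
	}
	// Delay-seconds form, e.g. "Retry-After: 120".
	if seconds, err := strconv.Atoi(value); err == nil && seconds >= 0 {
		return time.Duration(seconds) * time.Second, true
	}
	// HTTP-date form, e.g. "Retry-After: Fri, 31 Dec 1999 23:59:59 GMT".
	if t, err := http.ParseTime(value); err == nil {
		if delay := time.Until(t); delay > 0 {
			return delay, true
		}
	}
	return 0, false
}

func main() {
	resp := &http.Response{Header: http.Header{"Retry-After": []string{"120"}}}
	if delay, ok := parseRetryAfter(resp); ok {
		fmt.Printf("server asked us to wait %v before retrying\n", delay)
	}
}
```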
It seems as though respecting this header and delaying subsequent requests would provide some minimal form of back-pressure sanity short of the full rate-limiting auth handshake that is described in Sections 4.1.4 and 4.2.2 of the specification.
My initial inclination was to enhance the current `eventing/pkg/channel/message_dispatcher.go` logic, which performs the retries, so that it respects this header and waits whichever duration is greater: the `Retry-After` value or the next retry attempt's backoff. This would, though, synchronously block the `DispatchMessageWithRetries()` request, which may or may not be desirable. It also wouldn't guarantee any coordination between other threads attempting to send to that WebHook (eventing-kafka's distributed KafkaChannel has scalable Dispatcher replicas).

Would love to hear your thoughts - thanks!
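To make the "whichever duration is greater" idea concrete, here is a minimal sketch of the delay calculation under those assumptions; `nextRetryDelay` and the simple exponential backoff are hypothetical, not the actual `DispatchMessageWithRetries()` logic:

```go
package retrysketch

import "time"

// nextRetryDelay is a hypothetical helper: compute the normal exponential
// backoff for this attempt and, if the previous 429 response carried a
// Retry-After delay, honor whichever duration is greater. It is NOT the
// actual message_dispatcher.go code.
func nextRetryDelay(attempt int, base time.Duration, retryAfter time.Duration) time.Duration {
	backoff := base * time.Duration(1<<uint(attempt)) // e.g. 1s, 2s, 4s, ... for base = 1s
	if retryAfter > backoff {
		return retryAfter
	}
	return backoff
}
```

The dispatcher would then sleep for this duration before the next attempt, which is exactly the synchronous-blocking trade-off mentioned above.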
Questions...
- Does this belong in the shared eventing code (message_dispatcher.go's `DispatchMessageWithRetries()`), or in individual channel implementations, in downstream intermediaries, or maybe somewhere else?
- If it is added to the shared eventing code, should this be configurable and at what level? For example, a `RespectRetryAfter` option on `DispatchMessageWithRetries()` for channels to hardcode or expose as needed?
- Would this take effect ALWAYS or ONLY if retry was enabled in Subscription.Delivery?
- Would we want an override capability if the `Retry-After` value was excessively large? Something that says... "support Retry-After but max out at X seconds/minutes/etc"? (One hypothetical shape for such an opt-in setting, including a cap, is sketched just after this list.)
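To anchor the configurability and max-cap questions, one hypothetical shape such an opt-in setting could take is sketched below; `RetryAfterConfig`, `RespectRetryAfter`, and `MaxRetryAfter` are illustrative names only, not an agreed-upon API:

```go
package retrysketch

import "time"

// RetryAfterConfig is a hypothetical, opt-in configuration illustrating the
// questions above; none of these names are an agreed-upon Knative API.
type RetryAfterConfig struct {
	// RespectRetryAfter enables honoring the 429 Retry-After header at all.
	RespectRetryAfter bool
	// MaxRetryAfter caps an excessively large server-provided delay
	// ("support Retry-After but max out at X seconds/minutes/etc").
	// Zero means no cap.
	MaxRetryAfter time.Duration
}

// effectiveRetryAfter applies the config to a server-provided delay.
func (c RetryAfterConfig) effectiveRetryAfter(retryAfter time.Duration) time.Duration {
	if !c.RespectRetryAfter {
		return 0
	}
	if c.MaxRetryAfter > 0 && retryAfter > c.MaxRetryAfter {
		return c.MaxRetryAfter
	}
	return retryAfter
}
```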
Notes...
- The `eventing/pkg/kncloudevents/message_sender.go` implementation contains several different retry checks, some of which only retry above 300, while others don't retry on 300 and instead include a few 400s (specifically 404, 429, 409).
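For illustration only, the second style of check described in the note might look something like this; it is not copied from `message_sender.go`:

```go
package retrysketch

// selectiveRetry sketches the second style of check described above: rather
// than retrying on everything >= 300, retry on 5xx plus a few specific 4xx
// codes (404, 409, 429). Illustrative only, not the actual message_sender.go
// predicate.
func selectiveRetry(statusCode int) bool {
	switch statusCode {
	case 404, 409, 429:
		return true
	}
	return statusCode >= 500
}
```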