Feature Flag Routing #4736
A nit: this is only true if a breaking API change is needed. The core Kube APIs (e.g. Pod) have continued to evolve long past v1 to incorporate new (non-trivial) functionality. An example of this is projection of Secrets and ConfigMaps into environment variables, which didn't land until 1.3 (IIRC), and this is far from the last change. Do you have a reason to think this would require a breaking API change?
I am keen to add a new abstraction that can do these kinds of things as well:
I've always seen this as a higher-level abstraction (atop Service) because often the things you are switching over are fairly different. e.g.
My feeling is that this should target… We should be careful about folding everything into a single abstraction as Istio's VirtualService has, as it can become incredibly hard to use properly. This is (part of) why my bias is towards much more focused and clear abstractions that can be composed. Another notable challenge this introduces is that it will require an expansion of our networking interface. Istio's API can clearly handle it, but I'm unsure about Gloo (or others I have heard are in the works).
Off the top of my head I see one potential area… I agree with you that this is mostly a higher-level abstraction thing, and mainly something that can be additive after v1.0 if we don't make the timeline. But I wanted to bring it up now for two reasons:
And so far, it's just that… But I'd like to hear if others know of spots to think about.
Closes: knative#4736 Signed-off-by: Doug Davis <[email protected]>
I do agree with @mattmoor's point that this can be folded into a higher-level abstraction. My understanding of the need for percentage-based traffic splitting is that we need a mechanism for unobtrusive rollouts. Percentage-based traffic splitting allows us to do what Deployment does with its rolling upgrade, while also keeping deployments immutable (the rolling is a result of mutating the Deployment). Other means are undoubtedly super useful for various use-cases, but maybe they should not be part of the core types, to keep those focused on what they need to be and to not push further requirements into the networking layer, as Matt mentioned.
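For context, the percentage-based splitting discussed above is expressed today in the `traffic` block of a Knative Service. A minimal sketch (the service name, image, and revision name are hypothetical):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app                       # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: example.com/my-app:v2   # hypothetical candidate image
  traffic:
    - revisionName: my-app-00001     # stable revision keeps most traffic
      percent: 90
    - latestRevision: true           # candidate revision gets the rest
      percent: 10
```

Because each revision is immutable, shifting the percentages rolls traffic without mutating the workloads themselves, which is the property the comment above is pointing at.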
Issues go stale after 90 days of inactivity. Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra. /lifecycle stale
Stale issues rot after 30 days of inactivity. Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra. /lifecycle rotten
Rotten issues close after 30 days of inactivity. Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra. /close
@knative-housekeeping-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@mattmoor (or anyone) can I get this reopened? I missed the lifecycle messages.
/reopen
@vagababov: Reopened this issue. In response to this:
@vagababov thanks!
Rotten issues close after 30 days of inactivity. Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra. /close
@knative-housekeeping-robot: Closing this issue. In response to this:
/reopen
@duglin: Reopened this issue. In response to this:
Rotten issues close after 30 days of inactivity. Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra. /close
@knative-housekeeping-robot: Closing this issue. In response to this:
/reopen
@duglin: Reopened this issue. In response to this:
Some user interest in this feature: https://stackoverflow.com/questions/63615721/knative-routing-based-on-custom-headers/63618289
/lifecycle frozen This was sent to our mailing list as well. I'm a little reluctant to creep ksvc too much vs. adding new resources that compose well with ksvc (and others), in the same way we're trying to do vanity domains. I sketched some thoughts here over the weekend, and have a PoC that does (part of) this: https://docs.google.com/document/d/1Cp_h4MIRGt2Vy-EE0Yy5L5Y0wRNaMyp6SUq0TCi6XLM/edit
Other than the request that the user be able to choose the request header, it seems like header-based tag routing should be able to do this? /triage needs-user-input
Assuming that header-based tag routing will work here, unless someone wants to chime in with a reason why it won't. /close
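For readers landing here later, the header-based tag routing referred to above is an experimental Knative feature toggled via the `config-network` ConfigMap; once enabled, a request carrying the `Knative-Serving-Tag` header is routed to the revision bearing that tag. A sketch under those assumptions (service and revision names are hypothetical):

```yaml
# Enable the experimental feature flag in the networking ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  tag-header-based-routing: "enabled"
---
# Tag a candidate revision; percent: 0 keeps it out of mainline
# traffic, but it stays reachable via its tag.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app                       # hypothetical
spec:
  traffic:
    - latestRevision: true
      percent: 100
    - revisionName: my-app-00002     # hypothetical candidate revision
      tag: canary
      percent: 0
```

With this in place, a request sent with `Knative-Serving-Tag: canary` should reach the tagged revision. Note this does not address the ask above of letting the user choose an arbitrary header name.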
@evankanderson: Closing this issue. In response to this:
/reopen
@duglin: Reopened this issue. In response to this:
Agree with @duglin. It would help us a lot to support routing traffic by custom headers because, in our case, we can't guarantee we can inject the…
Agree with @duglin, it would be helpful for us too.
In what area(s)?
/area API
/area networking
/kind spec
/kind proposal
Describe the feature
One of the things we hear from customers is that routing between revisions based on percentages isn't really of interest to them once they're in production. In practice they tend to lean towards a model where only a select group of users is expected to use the latest version, so that everyone else (who needs more stability) is not impacted during the testing phase. We call that "Feature Flag Routing": the routers (e.g. Istio) route requests to certain versions based on some metadata in the message (perhaps an HTTP header) rather than by random percentages. We'd like Knative to eventually support this as an option. It's not clear whether this can be done solely under the covers or whether it might result in an API change. If an API change might be needed, it would be best to at least design it out prior to v1.0, when the API is locked down.
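One partial answer to the "select group" scenario that already exists in Knative is tag-based addressing: a tagged revision with `percent: 0` receives no mainline traffic but gets its own dedicated hostname (by default `{tag}-{route}.{namespace}.{domain}`), so testers can opt in by URL while everyone else stays on the stable revision. A sketch, with hypothetical names and domain:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app                       # hypothetical
spec:
  traffic:
    - revisionName: my-app-00007     # stable revision, all regular traffic
      percent: 100
    - latestRevision: true
      tag: testers                   # select group opts in via the tag URL,
      percent: 0                     # e.g. https://testers-my-app.default.example.com
```

This differs from the header-based routing requested in this issue, since the discriminator is the hostname rather than message metadata, but it serves a similar "only a select group sees the new version" purpose.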
Some of the uses for the "select group":