[GLBC] Expose GCE backend parameters in Ingress object API #28
From @bprashanth on February 7, 2017 18:31 This will need to be a per-Service configuration, since ingresses currently share the backend for a given nodeport, so it makes more sense to specify it as an annotation on the Service. Basically, it would be nice if the Service author could publish some timeouts for their Service, and any/all load balancers fronting the Service would respect those settings.
From @thockin on February 8, 2017 8:34 The reason for it to be an annotation is that we're not ready to add it to every implementation of Services yet. Maybe never. I would suggest something like …
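For illustration only, a minimal sketch of what such a Service-level timeout annotation could look like; the annotation key `service.alpha.kubernetes.io/backend-timeout-seconds` is purely hypothetical, not an implemented API:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Hypothetical annotation key, shown only to illustrate a Service author
    # publishing a timeout that any fronting load balancer could respect.
    service.alpha.kubernetes.io/backend-timeout-seconds: "600"
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```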
From @itamaro on February 14, 2017 17:23 I think I understand the reasoning for an annotation, thanks. Attempting to tackle this, I got this far: …
From @itamaro on February 22, 2017 16:48 Following up on the discussion on the CL, I'd appreciate thoughts from more contributors on the subject of a generic timeout annotation vs. a GCLB-specific one. The issue is whether to take the generic path, with something like … WDYT? @thockin, feel free to tag specific contributors :-)
From @nicksardo on March 8, 2017 23:00 What about using the …
From @nicksardo on March 8, 2017 23:06 Oops, forgot that liveness/readiness probes don't live in the Service. We would have to look up a pod under the service selector and check, similar to what we do for getting the health-check request path.
From @nicksardo on April 5, 2017 21:19 Many folks have expressed interest in this, so I'd like to keep the ball rolling. After reading the discussion on the CL, I'm also inclined to go with a generic annotation on Service. As I mentioned above, an easy option would be to look at a probe's configuration.
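As a sketch of that option, the controller could inspect a probe on a pod matched by the Service's selector, the same way it already derives the health-check request path; the pod, image, and field values below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app            # matched by the Service's selector
spec:
  containers:
  - name: web
    image: gcr.io/example/web:1.0   # placeholder image
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz     # GLBC already reads this path for the health check
        port: 8080
      timeoutSeconds: 5    # a controller could derive a backend timeout from probe fields like this
```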
From @thockin on April 5, 2017 21:40 Are we happy with a single timeout, or do we need one per host/path (Service)?
From @nicksardo on April 5, 2017 21:46 If we went with a single timeout, a user is bound to come along with a use case for multiple. Their argument will be that GCP supports different timeouts, so the controller should support that feature too.
From @thockin on April 5, 2017 21:54 Is it generic, then? Or can things like nginx reasonably implement this?
From @nicksardo on April 5, 2017 22:05 Pinging @aledbf for thoughts on having this for … Tim, you mentioned possibly going straight to a field, but the question of being generic had to be answered first.
From @thockin on April 5, 2017 22:15 Yeah, if it is generic, let's run with fields, if we can.
From @aledbf on April 5, 2017 22:19 In nginx we have three settings related to timeouts, with predefined defaults and annotations on the ingress that allow custom values (these settings are used to send the request to a different endpoint if there's more than one). From what I've seen, these defaults are OK unless you are running something like a Docker registry or exposing a service for file upload.
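For reference, a sketch of the nginx-style per-Ingress overrides @aledbf describes; the annotation names below match the nginx ingress controller of that era (newer releases use the `nginx.ingress.kubernetes.io/` prefix), and the host and timeout values are illustrative:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: upload
  annotations:
    # Timeouts in seconds; the controller's defaults apply when omitted.
    ingress.kubernetes.io/proxy-connect-timeout: "15"
    ingress.kubernetes.io/proxy-send-timeout: "600"
    ingress.kubernetes.io/proxy-read-timeout: "600"
spec:
  rules:
  - host: uploads.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: upload-svc
          servicePort: 80
```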
From @porridge on April 6, 2017 7:49 My use case is phpMyAdmin, for which the default 30s GCLB timeout is not enough; we'd like something on the order of 10 minutes. Re: "which object should be annotated", I'm leaning towards "I don't really care as long as it works ASAP", since my current need is so unsophisticated :-)

With my sysadmin hat on, it feels like it should be on the ingress: there are a bunch of possible timeout parameters (as the nginx example shows), and it's more natural to think about some of them in the context of the ingress, it being an abstraction of an LB. I also imagine that in a larger organization one team might own the deployment and service(s), and another might own the ingress(es). Since a single service might be fronted by different ingresses with different needs, and therefore different timeouts (e.g. one for internal use and another exposed to external users), it also would make sense to specify the timeouts on the ingress rather than the service. As such, the fact that ingresses share backends seems like an artificial restriction. But these are just my conjectures; I don't know how large organizations in fact use k8s.
From @nicksardo on April 6, 2017 17:51 For clarity, two questions are being discussed before proceeding: whether timeouts should be backend-service specific or a single value, and whether they should be specified on the Service or on the Ingress.

Hybrid solution?: support a description on the Service with an optional per-service override annotation on the Ingress (possibly overkill).
From @itamaro on April 20, 2017 10:40 My answers to the two questions formulated by @nicksardo:

1. Timeouts should be backend-service specific.
2. They should be specified on the Service object.

Also, in relation to looking up probes under the service selector: it's another topic that got me confused. Since a service selector can match multiple pods, which don't necessarily all have the same health-check specification, the existing behavior looks odd. It would seem more reasonable to have another health-check specification at the service level, no? (Off topic?) Back to the timeout definition: … WDYT? Where do we stand on this issue?
From @nicksardo on April 24, 2017 21:12 @itamaro thanks for providing your feedback. Regarding the first question, I agree that timeouts should be backend-service specific; we should put this question to rest.

For the second question, I like your response to the problem of having a service with varied expectations of timeouts. I agree that the owner of a service should be the most qualified to reason about which timeouts are appropriate. However, I have a hard time getting past the use case of a file-upload feature in a web service. From the standpoint of the service owner, the timeout "depends" on which path you're talking about, or they might give an umbrella "X seconds" with X being the longest expected timeout. This Ingress-Service dilemma has existed for a while and doesn't seem to have a right answer. Two proposals, "Better Ingress" and "Composite Services", would seem to help this situation if one of them were implemented.

Since we don't currently have a way to express HTTP paths/attributes on a service, I'm leaning towards noting the timeout on the Ingress object, where we do have paths defined. Since nginx has an ingress-wide timeout setting, I also believe that this annotation should be GCP-specific. Thoughts/comments?
From @thockin on April 25, 2017 6:30 I think I agree with Nick's assessment.
From @itamaro on April 26, 2017 14:05 Well, I'm not sure I completely follow all the reasoning; anyway, I digress... I trust that you've seen more diverse use cases, and you can choose the best tradeoff for this. I prefer an implemented good-enough solution over a theoretically perfect one :-)
From @thockin on April 28, 2017 15:23 My feeling with timeouts is that they are part of the "how to use" rather …
From @tsloughter on May 31, 2017 1:57 For the time being, is there any workaround where I can manually change the timeout in the Google console and not have the controller revert the change?
From @evanj on May 31, 2017 13:06 This is exactly what I've done, and it seems to work. I have a test where I check it periodically, and it has kept its settings for at least a month now. I am a bit scared about the next time I make any change to the ingress, of course :)
From @tsloughter on May 31, 2017 15:48 @evanj what is it you did?
From @evanj on June 1, 2017 2:04 @tsloughter I edited the load balancer's backend timeout through the Google Cloud Console web UI. I changed the timeout from 30s to 10m, and it is still working about one month later.
From @tsloughter on June 1, 2017 18:18 @evanj oh, hm, OK. I assumed you meant something else, because I had tried that and the timeout seemed to revert to 30s after a short period of time. I'll try again, thanks!
From @nicksardo on June 1, 2017 18:42 @tsloughter The ingress controller does not update the timeout value. However, if you change the ports of your service and a new nodeport is generated, the backend service will be regenerated.
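Since the GCE backend is keyed to the nodeport, one way to reduce the chance of a manually edited timeout being wiped out is to pin the nodePort explicitly so it never changes; a sketch with illustrative names and port values:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080  # pinned: the backend service is not regenerated while this stays fixed
```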
From @brugz on June 9, 2017 0:33 Hi guys and gals, what's the current status here? Do we expect this feature to make it into a future release? Any idea on the time frame? Cheers
From @nicksardo on June 9, 2017 23:08 This is probably what we're looking at. Thoughts?

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    cloud.google.com/service-settings: |
      {
        "default": {
          "timeoutSec": 321
        },
        "foo.bar.com/foo": {
          "timeoutSec": 123,
          "iap": {
            "enabled": true,
            "oauth2ClientId": "....",
            "oauth2ClientSecret": "..."
          }
        },
        "foo.bar.com/bar/*": {
          "enableCDN": true
        }
      }
spec:
  backend:
    serviceName: s0
    servicePort: 80
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar/*
        backend:
          serviceName: s2
          servicePort: 80
```
+1

+1
Is there any timeline for when timeout configuration will be available in …
Yeah, we could really use a timeout parameter.

We hear you :-)
👍
Is it possible to set up IAP using nginx-ingress? The old issue was merged with this one, but they seem somewhat unrelated.
+1
@bowei et al, question: going forward, will …
This commit adds a new Connection section to the BackendConfig that enables setting the timeout.
@jpalomaki No, BackendConfig will not cover NEGs. The documentation you linked is how it's done.
All, thanks to @bpineau, we will soon be launching support for timeout, session-affinity, and connection-draining parameters on the BackendService. As of now, all GA features on the BackendService (except custom health checks) have been implemented in BackendConfig. Please look out for some documentation in the near future on how you (the community) can contribute further to BackendConfig and FrontendConfig (which will be coming soon). If there is no objection, I am going to close this bug, since the primary ask has been implemented. If you have further requests, please file additional issues for easier tracking.
/close
@rramkumar1: Closing this issue.
@rramkumar1, can you point us to the documentation on how to configure timeout, session affinity, and connection draining in Kubernetes / GKE?
@mofirouz Documentation will be posted at the start of next week.
@rramkumar1 "start of next week" is due now?
Honestly, I don't understand why issues are closed without proper documentation; I have no idea how to use BackendServices to do this. I do understand that from Google's point of view this is closed, but not from the users'. It is misleading to close issues so that users have to dig deeper to find out that this actually does not work yet: #513 (comment)
@matti Rollout of the actual feature has been delayed due to issues outside our control, and thus the documentation is also delayed. Apologies for not providing an update here sooner. Regarding the closing of this issue: the implementation to support GCE BackendService features is already out (BackendConfig). Any additional feature support on top of this existing CRD is outside the scope of this issue.
@rramkumar1 I'd argue that the issue still exists at this point for consumers of the GCE Ingress. None of us has control over the deployment. Given that, from my viewpoint the issue still exists; at this point in time no value has been delivered, since it can't be used.
@cerealcable The crux of this issue was to come up with a way to expose BackendService parameters. We delivered that with BackendConfig, and several features like IAP, CDN, and Cloud Armor are already being used today (e.g. https://cloud.google.com/iap/docs/enabling-kubernetes-howto). Any requests we get to support more features in BackendConfig should be filed as separate issues rather than conflating everything in this one. I absolutely agree that we need to do a better job of keeping the community informed about which features are landing and when; this is something we are working on in the form of much better documentation and changelogs.
@mofirouz and FYI for all: support for session affinity, timeout, and connection draining is now available via BackendConfig. This has launched on GKE for cluster versions at or above 1.11.3-gke.18! Docs: https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service
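A sketch based on the linked documentation of that era; the `timeoutSec` field answers the original ask in this issue. These APIs were beta at the time, so the apiVersion and annotation prefix may differ on newer clusters, and all names and values below are illustrative:

```yaml
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 600                     # backend service timeout (default was 30s)
  connectionDraining:
    drainingTimeoutSec: 60
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 3600
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Maps service port 80 to the BackendConfig above.
    beta.cloud.google.com/backend-config: '{"ports": {"80": "my-backendconfig"}}'
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```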
Yay, awesome, @rramkumar1, thank you. One question in regard to existing services/ingresses: I've noticed that if I update some of the ingress configs (like paths) in the GCP Console, they get reset to what they are in the Kube ingress definition. Do timeouts and other backend configs have similar behaviour? Do I need to apply the backend config retrospectively to those environments?
@mofirouz BackendConfig is a first-class citizen: any settings specified in BackendConfig will be asserted in GCP, and any updates to BackendConfig will be reflected in GCP. If you previously modified settings such as timeout manually, I would highly recommend migrating to BackendConfig.
This is great, thanks! Since this issue is closed, is there a separate thread to follow progress on custom health checks? That is personally our last piece that requires a "manual override" in GCP.
From @itamaro on February 7, 2017 10:09
When using the GCE Ingress controller (GLBC), GCE backends are provisioned with a bunch of default parameters.
It would be great if it were possible to tweak the parameters that are currently "untweakable" from the Ingress object API (i.e., from my YAMLs).
Specific use case: GCE backends are provisioned with a default timeout of 30 seconds, which is not sufficient for some long requests. I'd like to be able to control the timeout per backend.
Copied from original issue: kubernetes/ingress-nginx#243