canary ingress does not update nginx.conf and has no effect #9865
Comments
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the appropriate triage label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug
Yes, the ingress has the ingressClassName configured:
In the output of kubectl describe ingress .. you can see that there is no ingressClassName annotation. Both behave the same when you remove and reinstall the ingress.
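For reference, a minimal sketch of how the ingress class can be declared, either via the preferred spec.ingressClassName field or the legacy annotation (all names and the hostname here are placeholders, not taken from the reporter's setup):

```yaml
# Hypothetical example: ingress class set via the spec field.
# The legacy alternative is the annotation
# kubernetes.io/ingress.class: nginx -- avoid setting both.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-vip
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc
                port:
                  number: 80
```

kubectl describe ingress shows the spec field as "Ingress Class", while a class set only via the legacy annotation appears under "Annotations".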
Not sure if I messaged wrong or if you got it wrong. With the data visible so far;
It will help to know what I am missing.
Here is the other ingress object with the same host but without canary annotations:
curl output (hostname+ip anonymized):
@uwebartels I think someone needs to run this https://kubesphere.io/blogs/canary-release-with-nginx-ingress/ test on a minikube or kind cluster first, to establish a baseline for whether the canary release feature itself is broken. Then a second test needs to tweak the environment toward your config, with the highlight being a weight of 100.
Hi, I did the setup with minikube and see the same behavior in the nginx.conf of the ingress controller. Unfortunately I cannot access the ingresses from outside due to my local setup. There are some configurations in the ingress controller which are not needed here, but I wanted to use a configuration as close as possible to the production environment. In the final check you can see that there is no entry for the ingress nginx2-vip and only one entry for the upstream dev-nginx2-http. I experience the same behavior in my production setup. So I have no idea how the controller could identify requests meant for nginx2-vip or send them to dev-nginx2-http. Best...

start minikube
setup namespaces
prepare 2 applications
install 2 applications
install nginx-ingress
install vip ingresses (one ingress with final hostname, one ingress with canary annotations and same hostname)
nginx config check
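For context, the pair of "vip" ingress objects in such a setup would look roughly like this. This is a sketch, assuming namespace dev and a service named nginx2 with an http port (inferred from the upstream name dev-nginx2-http); the hostname and the canary's object and service names are placeholders. Only the annotations differ between the two objects:

```yaml
# Primary ingress: serves the final hostname, no canary annotations
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx2-vip
  namespace: dev
spec:
  ingressClassName: nginx
  rules:
    - host: nginx2.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx2
                port:
                  name: http
---
# Canary ingress: same host, marked as canary with a traffic weight
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx2-vip-canary   # placeholder name
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "100"
spec:
  ingressClassName: nginx
  rules:
    - host: nginx2.example.com   # same host as the primary ingress
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx2-canary   # placeholder canary service
                port:
                  name: http
```

The reported symptom is that only the primary object's upstream shows up in the generated nginx.conf, while the canary object leaves no trace at all.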
Your test has multiple aspects that seem important to you, but from the point of view of a reader here these aspects do not relate to the issue you have raised. Examples of the confusion I am having;
I tried to make the behavior reproducible, so whoever takes a closer look can do the kubectl describe themselves. I did this in my problematic environment. So I see your aspects. Best...
I will have to reproduce with the highlighted configs, like canary-weight set to 100. Note that there is a concept of "total-weight", and it used to be 100 by default. Then recently we merged a PR where the total-weight could be set to 1000 so that fractional percentages of traffic could be shaped. Search merged PRs for the PR number. I don't know if that relates to your use case, but if it does then you would need to define the total-weight as well as the canary-weight.
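The total-weight behaviour mentioned above can be sketched with annotations like the following. The annotation keys are ingress-nginx's canary annotations; the values are illustrative, not taken from the reporter's config:

```yaml
# Illustrative canary annotations on the canary Ingress object
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    # Send 500 out of 1000 (i.e. 50%) of requests to the canary backend
    nginx.ingress.kubernetes.io/canary-weight: "500"
    # Raise the weight denominator from the default of 100 to 1000,
    # allowing fractional percentages of traffic to be shaped
    nginx.ingress.kubernetes.io/canary-weight-total: "1000"
```

With the default total of 100, a canary-weight of "100" means all traffic goes to the canary; if a larger canary-weight-total is set, the same canary-weight of "100" would route only a fraction.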
The existing canary functionality works. In our application we have a caching problem that caused us to see the wrong content.
What happened:
After upgrading the ingress-nginx helm chart from 3.33.0 to 4.4.0 we experience problems with a canary ingress.
The service/pod behind this ingress does not receive any traffic.
I see that the ingress is validated in the log file of the ingress controller:
I0417 05:38:28.059525 7 main.go:100] "successfully validated configuration, accepting" ingress="prod/webclient-prod2-vip"
I see that there is no entry for this ingress in the nginx.conf. webclient-prod1-vip has no canary configuration, webclient-prod2-vip has a canary configuration:
/etc/nginx $ grep prod1-vip nginx.conf
set $ingress_name "webclient-prod1-vip";
set $service_name "webclient-prod1-vip";
set $proxy_upstream_name "prod-webclient-prod1-vip-80";
/etc/nginx $ grep prod2-vip nginx.conf
/etc/nginx $
What you expected to happen:
When the canary ingress is configured with a weight of 100% I expect the service to receive traffic. I also expect to find an entry for this ingress in the nginx.conf of the ingress controller.
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):
Kubernetes version (use kubectl version):
Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.17-eks-48e63af", GitCommit:"47b89ea2caa1f7958bc6539d6865820c86b4bf60", GitTreeState:"clean", BuildDate:"2023-01-24T09:34:06Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Environment:
uname -a:
Linux ingress-nginx-public-controller-7bb5d7b7f4-bp8m6 5.4.235-144.344.amzn2.x86_64 #1 SMP Sun Mar 12 12:50:22 UTC 2023 x86_64 Linux
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"darwin/amd64"}
kubectl get nodes -o wide
helm ls -A | grep -i ingress
helm -n <ingresscontrollernamespace> get values <helmreleasename>
If helm was not used, then copy/paste the complete precise command used to install the controller, along with the flags and options used.
If you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances.
Current State of the controller:
kubectl describe ingressclasses
kubectl -n <ingresscontrollernamespace> get all -o wide
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
kubectl -n <appnamespace> get all,ing -o wide
kubectl -n <appnamespace> describe ing <ingressname>
kubectl describe ... of any custom configmap(s) created and in use