
canary ingress does not update nginx.conf and has no effect #9865

Closed
uwebartels opened this issue Apr 17, 2023 · 11 comments
Labels
needs-kind  needs-priority  needs-triage

Comments

@uwebartels

uwebartels commented Apr 17, 2023

What happened:

After upgrading the ingress-nginx Helm chart from 3.33.0 to 4.4.0 we are experiencing problems with a canary ingress.
The service/pod behind this ingress does not receive any traffic.

I can see in the ingress controller's log that the ingress is validated:
I0417 05:38:28.059525 7 main.go:100] "successfully validated configuration, accepting" ingress="prod/webclient-prod2-vip"

However, there is no entry for this ingress in nginx.conf. webclient-prod1-vip has no canary configuration, while webclient-prod2-vip has one:
/etc/nginx $ grep prod1-vip nginx.conf
set $ingress_name "webclient-prod1-vip";
set $service_name "webclient-prod1-vip";
set $proxy_upstream_name "prod-webclient-prod1-vip-80";
/etc/nginx $ grep prod2-vip nginx.conf
/etc/nginx $

What you expected to happen:
When the canary ingress is configured with a weight of 100%, I expect the service to receive traffic. I also expect to find an entry for this ingress in the nginx.conf of the ingress controller.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

/etc/nginx $ /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.5.1
  Build:         d003aae913cc25f375deb74f898c7f3c65c06f05
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6

-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):
Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.17-eks-48e63af", GitCommit:"47b89ea2caa1f7958bc6539d6865820c86b4bf60", GitTreeState:"clean", BuildDate:"2023-01-24T09:34:06Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Environment:

  • Cloud provider or hardware configuration: AWS EKS
  • OS (e.g. from /etc/os-release): Alpine Linux v3.16
  • Kernel (e.g. uname -a): Linux ingress-nginx-public-controller-7bb5d7b7f4-bp8m6 5.4.235-144.344.amzn2.x86_64 #1 SMP Sun Mar 12 12:50:22 UTC 2023 x86_64 Linux
  • Install tools:
    • terraform with terraform-aws-modules/eks/aws version 17.24.0
  • Basic cluster related info: Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"darwin/amd64"}
    • kubectl get nodes -o wide:
NAME                                          STATUS   ROLES    AGE   VERSION                INTERNAL-IP   EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                 CONTAINER-RUNTIME
ip-10-8-1-123.eu-central-1.compute.internal   Ready    <none>   16d   v1.22.17-eks-a59e1f0   10.8.1.123    <none>        Amazon Linux 2   5.4.235-144.344.amzn2.x86_64   docker://20.10.17
ip-10-8-22-32.eu-central-1.compute.internal   Ready    <none>   16d   v1.22.17-eks-a59e1f0   10.8.22.32    <none>        Amazon Linux 2   5.4.235-144.344.amzn2.x86_64   docker://20.10.17
ip-10-8-23-51.eu-central-1.compute.internal   Ready    <none>   16d   v1.22.17-eks-a59e1f0   10.8.23.51    <none>        Amazon Linux 2   5.4.235-144.344.amzn2.x86_64   docker://20.10.17
ip-10-8-42-21.eu-central-1.compute.internal   Ready    <none>   16d   v1.22.17-eks-a59e1f0   10.8.42.21    <none>        Amazon Linux 2   5.4.235-144.344.amzn2.x86_64   docker://20.10.17
ip-10-8-7-74.eu-central-1.compute.internal    Ready    <none>   16d   v1.22.17-eks-a59e1f0   10.8.7.74     <none>        Amazon Linux 2   5.4.235-144.344.amzn2.x86_64   docker://20.10.17
  • How was the ingress-nginx-controller installed:
    • If helm was used then please show output of helm ls -A | grep -i ingress:
ingress-nginx               	infra      	7       	2023-04-15 12:29:57.240055 +0200 CEST  	deployed	ingress-nginx-4.4.0                	1.5.1      
ingress-nginx-public        	infra      	11      	2023-04-15 13:38:18.409633 +0200 CEST  	deployed	ingress-nginx-4.4.0                	1.5.1      
  • If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>:
USER-SUPPLIED VALUES:
controller:
  autoscaling:
    enabled: true
    maxReplicas: 10
    minReplicas: 2
    targetCPUUtilizationPercentage: 70
    targetMemoryUtilizationPercentage: 70
  extraArgs:
    ingress-class: nginx-public
  ingressClass: nginx-public
  ingressClassByName: true
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx-public
    default: "false"
    enabled: "true"
    name: nginx-public
  priorityClassName: system-cluster-critical
  resources:
    limits:
      cpu: 200m
      memory: 200Mi
    requests:
      cpu: 200m
      memory: 200Mi
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: eks.production.domain.org.
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
    externalTrafficPolicy: Local
  serviceMonitor:
    enabled: false
defaultBackend:
  autoscaling:
    enabled: true
    maxReplicas: 10
    minReplicas: 2
  enabled: true
  image:
    readOnlyRootFilesystem: false
    repository: registry.infrastructure.domain.org/development/devops/dockerfiles/run/ingress-nginx-defaultbackend
    tag: 1.0.0-0
  priorityClassName: system-cluster-critical
  resources:
    limits:
      cpu: 10m
      memory: 20Mi
    requests:
      cpu: 10m
      memory: 20Mi

  • If helm was not used, then copy/paste the complete precise command used to install the controller, along with the flags and options used

  • if you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances

  • Current State of the controller:

    • kubectl describe ingressclasses:
Name:         alb
Labels:       app.kubernetes.io/instance=aws-load-balancer-controller
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=aws-load-balancer-controller
              app.kubernetes.io/version=v2.4.3
              helm.sh/chart=aws-load-balancer-controller-1.4.4
Annotations:  meta.helm.sh/release-name: aws-load-balancer-controller
              meta.helm.sh/release-namespace: infra
Controller:   ingress.k8s.aws/alb
Events:       <none>


Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.5.1
              helm.sh/chart=ingress-nginx-4.4.0
Annotations:  ingressclass.kubernetes.io/is-default-class: true
              meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: infra
Controller:   k8s.io/ingress-nginx
Events:       <none>


Name:         nginx-public
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx-public
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.5.1
              helm.sh/chart=ingress-nginx-4.4.0
Annotations:  ingressclass.kubernetes.io/is-default-class: true
              meta.helm.sh/release-name: ingress-nginx-public
              meta.helm.sh/release-namespace: infra
Controller:   k8s.io/ingress-nginx-public
Events:       <none>

  • kubectl -n <ingresscontrollernamespace> get all -o wide:
NAME                                                             READY   STATUS      RESTARTS      AGE    IP            NODE                                          NOMINATED NODE   READINESS GATES
pod/alb-dns-external-dns-86dd74f988-m696s                        1/1     Running     0             16d    10.8.44.228   ip-10-8-42-21.eu-central-1.compute.internal   <none>           <none>
pod/aws-load-balancer-controller-68bb4894fb-nc5nf                1/1     Running     1 (16d ago)   16d    10.8.43.183   ip-10-8-42-21.eu-central-1.compute.internal   <none>           <none>
pod/aws-load-balancer-controller-68bb4894fb-xwrfg                1/1     Running     0             16d    10.8.29.130   ip-10-8-22-32.eu-central-1.compute.internal   <none>           <none>
pod/aws-node-termination-handler-74v9w                           1/1     Running     0             16d    10.8.7.74     ip-10-8-7-74.eu-central-1.compute.internal    <none>           <none>
pod/aws-node-termination-handler-gmwvq                           1/1     Running     0             16d    10.8.23.51    ip-10-8-23-51.eu-central-1.compute.internal   <none>           <none>
pod/aws-node-termination-handler-mln4g                           1/1     Running     0             16d    10.8.1.123    ip-10-8-1-123.eu-central-1.compute.internal   <none>           <none>
pod/aws-node-termination-handler-nq8z2                           1/1     Running     0             16d    10.8.22.32    ip-10-8-22-32.eu-central-1.compute.internal   <none>           <none>
pod/aws-node-termination-handler-qn5qp                           1/1     Running     0             16d    10.8.42.21    ip-10-8-42-21.eu-central-1.compute.internal   <none>           <none>
pod/cert-manager-cainjector-68b749489c-h5wwv                     1/1     Running     0             16d    10.8.31.118   ip-10-8-22-32.eu-central-1.compute.internal   <none>           <none>
pod/cert-manager-cd79bdf86-lng5z                                 1/1     Running     0             16d    10.8.2.9      ip-10-8-7-74.eu-central-1.compute.internal    <none>           <none>
pod/cert-manager-webhook-66b5689d76-k2psc                        1/1     Running     1 (16d ago)   16d    10.8.21.216   ip-10-8-23-51.eu-central-1.compute.internal   <none>           <none>
pod/cluster-autoscaler-aws-cluster-autoscaler-86c9c79f7d-hr8b5   1/1     Running     1 (16d ago)   16d    10.8.36.86    ip-10-8-42-21.eu-central-1.compute.internal   <none>           <none>
pod/cluster-overprovisioner-default-5c5cf7d4cd-7sg2p             1/1     Running     0             16d    10.8.14.73    ip-10-8-1-123.eu-central-1.compute.internal   <none>           <none>
pod/cluster-overprovisioner-default-5c5cf7d4cd-9g6w5             1/1     Running     0             16d    10.8.39.11    ip-10-8-42-21.eu-central-1.compute.internal   <none>           <none>
pod/cluster-overprovisioner-default-5c5cf7d4cd-mlnml             1/1     Running     0             16d    10.8.29.16    ip-10-8-22-32.eu-central-1.compute.internal   <none>           <none>
pod/descheduler-28028468-6kjxq                                   0/1     Completed   0             5m7s   10.8.2.89     ip-10-8-1-123.eu-central-1.compute.internal   <none>           <none>
pod/descheduler-28028470-8sxw2                                   0/1     Completed   0             3m7s   10.8.2.89     ip-10-8-1-123.eu-central-1.compute.internal   <none>           <none>
pod/descheduler-28028472-hpfv2                                   0/1     Completed   0             67s    10.8.1.152    ip-10-8-1-123.eu-central-1.compute.internal   <none>           <none>
pod/external-dns-79994df4d8-9qwd7                                1/1     Running     0             41h    10.8.12.134   ip-10-8-1-123.eu-central-1.compute.internal   <none>           <none>
pod/ingress-nginx-controller-797455bf64-4bzwk                    1/1     Running     0             16d    10.8.12.67    ip-10-8-7-74.eu-central-1.compute.internal    <none>           <none>
pod/ingress-nginx-controller-797455bf64-nn7d5                    1/1     Running     0             16d    10.8.45.31    ip-10-8-42-21.eu-central-1.compute.internal   <none>           <none>
pod/ingress-nginx-defaultbackend-cf997c8db-gjhtv                 1/1     Running     0             16d    10.8.40.229   ip-10-8-42-21.eu-central-1.compute.internal   <none>           <none>
pod/ingress-nginx-defaultbackend-cf997c8db-p2984                 1/1     Running     0             16d    10.8.22.247   ip-10-8-23-51.eu-central-1.compute.internal   <none>           <none>
pod/ingress-nginx-public-controller-7bb5d7b7f4-bp8m6             1/1     Running     0             41h    10.8.9.100    ip-10-8-1-123.eu-central-1.compute.internal   <none>           <none>
pod/ingress-nginx-public-controller-7bb5d7b7f4-p9mnw             1/1     Running     0             41h    10.8.47.15    ip-10-8-42-21.eu-central-1.compute.internal   <none>           <none>
pod/ingress-nginx-public-defaultbackend-75d5f77f54-4xk6z         1/1     Running     0             16d    10.8.47.60    ip-10-8-42-21.eu-central-1.compute.internal   <none>           <none>
pod/ingress-nginx-public-defaultbackend-75d5f77f54-8brpp         1/1     Running     0             16d    10.8.0.194    ip-10-8-7-74.eu-central-1.compute.internal    <none>           <none>

NAME                                                TYPE           CLUSTER-IP       EXTERNAL-IP                                                                        PORT(S)                      AGE    SELECTOR
service/alb-dns-external-dns                        ClusterIP      172.20.52.32     <none>                                                                             7979/TCP                     167d   app.kubernetes.io/instance=alb-dns,app.kubernetes.io/name=external-dns
service/aws-load-balancer-webhook-service           ClusterIP      172.20.205.116   <none>                                                                             443/TCP                      167d   app.kubernetes.io/instance=aws-load-balancer-controller,app.kubernetes.io/name=aws-load-balancer-controller
service/cert-manager                                ClusterIP      172.20.194.254   <none>                                                                             9402/TCP                     558d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=cert-manager
service/cert-manager-webhook                        ClusterIP      172.20.54.224    <none>                                                                             443/TCP                      558d   app.kubernetes.io/component=webhook,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=webhook
service/cluster-autoscaler-aws-cluster-autoscaler   ClusterIP      172.20.224.169   <none>                                                                             8085/TCP                     534d   app.kubernetes.io/instance=cluster-autoscaler,app.kubernetes.io/name=aws-cluster-autoscaler
service/external-dns                                ClusterIP      172.20.188.181   <none>                                                                             7979/TCP                     558d   app.kubernetes.io/instance=external-dns,app.kubernetes.io/name=external-dns
service/ingress-nginx-controller                    LoadBalancer   172.20.231.152   a00fe19d673a14015a17e74028558dd8-e9598176691f9b82.elb.eu-central-1.amazonaws.com   80:31910/TCP,443:32210/TCP   558d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission          ClusterIP      172.20.71.110    <none>                                                                             443/TCP                      558d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-defaultbackend                ClusterIP      172.20.192.97    <none>                                                                             80/TCP                       558d   app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-public-controller             LoadBalancer   172.20.17.86     abff6742514594e449947754e9a7bbbd-1f356b0e92edce83.elb.eu-central-1.amazonaws.com   80:32601/TCP,443:30984/TCP   558d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-public-controller-admission   ClusterIP      172.20.217.130   <none>                                                                             443/TCP                      558d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-public-defaultbackend         ClusterIP      172.20.177.223   <none>                                                                             80/TCP                       558d   app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx

NAME                                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE    CONTAINERS                     IMAGES                                                        SELECTOR
daemonset.apps/aws-node-termination-handler   5         5         5       5            5           kubernetes.io/os=linux   558d   aws-node-termination-handler   public.ecr.aws/aws-ec2/aws-node-termination-handler:v1.14.0   app.kubernetes.io/instance=aws-node-termination-handler,app.kubernetes.io/name=aws-node-termination-handler,kubernetes.io/os=linux

NAME                                                        READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS                      IMAGES                                                                                                                    SELECTOR
deployment.apps/alb-dns-external-dns                        1/1     1            1           167d   external-dns                    docker.io/bitnami/external-dns:0.13.2-debian-11-r0                                                                        app.kubernetes.io/instance=alb-dns,app.kubernetes.io/name=external-dns
deployment.apps/aws-load-balancer-controller                2/2     2            2           167d   aws-load-balancer-controller    602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller:v2.4.3                                   app.kubernetes.io/instance=aws-load-balancer-controller,app.kubernetes.io/name=aws-load-balancer-controller
deployment.apps/cert-manager                                1/1     1            1           558d   cert-manager-controller         quay.io/jetstack/cert-manager-controller:v1.11.0                                                                          app.kubernetes.io/component=controller,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=cert-manager
deployment.apps/cert-manager-cainjector                     1/1     1            1           558d   cert-manager-cainjector         quay.io/jetstack/cert-manager-cainjector:v1.11.0                                                                          app.kubernetes.io/component=cainjector,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=cainjector
deployment.apps/cert-manager-webhook                        1/1     1            1           558d   cert-manager-webhook            quay.io/jetstack/cert-manager-webhook:v1.11.0                                                                             app.kubernetes.io/component=webhook,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=webhook
deployment.apps/cluster-autoscaler-aws-cluster-autoscaler   1/1     1            1           534d   aws-cluster-autoscaler          k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.1                                                                         app.kubernetes.io/instance=cluster-autoscaler,app.kubernetes.io/name=aws-cluster-autoscaler
deployment.apps/cluster-overprovisioner-default             3/3     3            3           534d   cluster-overprovisioner         registry.k8s.io/pause:3.9                                                                                                 app.cluster-overprovisioner/deployment=default,app.kubernetes.io/instance=cluster-overprovisioner,app.kubernetes.io/name=cluster-overprovisioner
deployment.apps/external-dns                                1/1     1            1           558d   external-dns                    docker.io/bitnami/external-dns:0.13.2-debian-11-r0                                                                        app.kubernetes.io/instance=external-dns,app.kubernetes.io/name=external-dns
deployment.apps/ingress-nginx-controller                    2/2     2            2           558d   controller                      registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
deployment.apps/ingress-nginx-defaultbackend                2/2     2            2           558d   ingress-nginx-default-backend   registry.k8s.io/defaultbackend-amd64:1.5                                                                                  app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
deployment.apps/ingress-nginx-public-controller             2/2     2            2           558d   controller                      registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx
deployment.apps/ingress-nginx-public-defaultbackend         2/2     2            2           558d   ingress-nginx-default-backend   registry.infrastructure.domain.org/development/devops/dockerfiles/run/ingress-nginx-defaultbackend:1.0.0-0            app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx

NAME                                                                   DESIRED   CURRENT   READY   AGE    CONTAINERS                      IMAGES                                                                                                                    SELECTOR
replicaset.apps/alb-dns-external-dns-86dd74f988                        1         1         1       90d    external-dns                    docker.io/bitnami/external-dns:0.13.2-debian-11-r0                                                                        app.kubernetes.io/instance=alb-dns,app.kubernetes.io/name=external-dns,pod-template-hash=86dd74f988
replicaset.apps/alb-dns-external-dns-8dfffb44                          0         0         0       167d   external-dns                    docker.io/bitnami/external-dns:0.12.2-debian-11-r14                                                                       app.kubernetes.io/instance=alb-dns,app.kubernetes.io/name=external-dns,pod-template-hash=8dfffb44
replicaset.apps/aws-load-balancer-controller-68bb4894fb                2         2         2       167d   aws-load-balancer-controller    602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller:v2.4.3                                   app.kubernetes.io/instance=aws-load-balancer-controller,app.kubernetes.io/name=aws-load-balancer-controller,pod-template-hash=68bb4894fb
...
replicaset.apps/ingress-nginx-controller-679db5cfbd                    0         0         0       534d   controller                      k8s.gcr.io/ingress-nginx/controller:v0.47.0@sha256:a1e4efc107be0bb78f32eaec37bef17d7a0c81bec8066cdf2572508d21351d0b       app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=679db5cfbd
replicaset.apps/ingress-nginx-controller-797455bf64                    2         2         2       16d    controller                      registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=797455bf64
replicaset.apps/ingress-nginx-controller-bf46f755d                     0         0         0       558d   controller                      k8s.gcr.io/ingress-nginx/controller:v0.47.0@sha256:a1e4efc107be0bb78f32eaec37bef17d7a0c81bec8066cdf2572508d21351d0b       app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=bf46f755d
replicaset.apps/ingress-nginx-defaultbackend-7f645fd4b5                0         0         0       534d   ingress-nginx-default-backend   k8s.gcr.io/defaultbackend-amd64:1.5                                                                                       app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7f645fd4b5
replicaset.apps/ingress-nginx-defaultbackend-cb7bcf6d7                 0         0         0       558d   ingress-nginx-default-backend   k8s.gcr.io/defaultbackend-amd64:1.5                                                                                       app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=cb7bcf6d7
replicaset.apps/ingress-nginx-defaultbackend-cf997c8db                 2         2         2       16d    ingress-nginx-default-backend   registry.k8s.io/defaultbackend-amd64:1.5                                                                                  app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=cf997c8db
replicaset.apps/ingress-nginx-public-controller-6bccd6849d             0         0         0       534d   controller                      k8s.gcr.io/ingress-nginx/controller:v0.47.0@sha256:a1e4efc107be0bb78f32eaec37bef17d7a0c81bec8066cdf2572508d21351d0b       app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx,pod-template-hash=6bccd6849d
replicaset.apps/ingress-nginx-public-controller-75d498b5c6             0         0         0       558d   controller                      k8s.gcr.io/ingress-nginx/controller:v0.47.0@sha256:a1e4efc107be0bb78f32eaec37bef17d7a0c81bec8066cdf2572508d21351d0b       app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx,pod-template-hash=75d498b5c6
replicaset.apps/ingress-nginx-public-controller-7bb5d7b7f4             2         2         2       41h    controller                      registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7bb5d7b7f4
replicaset.apps/ingress-nginx-public-controller-7c9f88c778             0         0         0       15d    controller                      registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7c9f88c778
replicaset.apps/ingress-nginx-public-controller-7d5f8948b8             0         0         0       16d    controller                      registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7d5f8948b8
replicaset.apps/ingress-nginx-public-controller-84d6846cd6             0         0         0       501d   controller                      k8s.gcr.io/ingress-nginx/controller:v0.47.0@sha256:a1e4efc107be0bb78f32eaec37bef17d7a0c81bec8066cdf2572508d21351d0b       app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx,pod-template-hash=84d6846cd6
replicaset.apps/ingress-nginx-public-defaultbackend-5b59f667fc         0         0         0       534d   ingress-nginx-default-backend   k8s.gcr.io/defaultbackend-amd64:1.5                                                                                       app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5b59f667fc
replicaset.apps/ingress-nginx-public-defaultbackend-6ff6564896         0         0         0       468d   ingress-nginx-default-backend   registry.infrastructure.domain.org/development/devops/dockerfiles/run/ingress-nginx-defaultbackend:1.0.0-0            app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx,pod-template-hash=6ff6564896
replicaset.apps/ingress-nginx-public-defaultbackend-75d5f77f54         2         2         2       16d    ingress-nginx-default-backend   registry.infrastructure.domain.org/development/devops/dockerfiles/run/ingress-nginx-defaultbackend:1.0.0-0            app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx,pod-template-hash=75d5f77f54
replicaset.apps/ingress-nginx-public-defaultbackend-7bf48cb4ff         0         0         0       558d   ingress-nginx-default-backend   k8s.gcr.io/defaultbackend-amd64:1.5                                                                                       app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7bf48cb4ff

NAME                                                                      REFERENCE                                        TARGETS            MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/ingress-nginx-controller              Deployment/ingress-nginx-controller              35%/70%, 1%/70%    2         10        2          534d
horizontalpodautoscaler.autoscaling/ingress-nginx-defaultbackend          Deployment/ingress-nginx-defaultbackend          19%/50%, 10%/50%   2         10        2          534d
horizontalpodautoscaler.autoscaling/ingress-nginx-public-controller       Deployment/ingress-nginx-public-controller       35%/70%, 1%/70%    2         10        2          534d
horizontalpodautoscaler.autoscaling/ingress-nginx-public-defaultbackend   Deployment/ingress-nginx-public-defaultbackend   29%/50%, 10%/50%   2         10        2          534d

NAME                        SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE    CONTAINERS    IMAGES                                            SELECTOR
cronjob.batch/descheduler   */2 * * * *   False     0        70s             534d   descheduler   registry.k8s.io/descheduler/descheduler:v0.26.0   <none>

NAME                             COMPLETIONS   DURATION   AGE     CONTAINERS    IMAGES                                            SELECTOR
job.batch/descheduler-28028468   1/1           6s         5m10s   descheduler   registry.k8s.io/descheduler/descheduler:v0.26.0   controller-uid=f67481d8-113b-4d1d-8523-d7020fe727fa
job.batch/descheduler-28028470   1/1           6s         3m10s   descheduler   registry.k8s.io/descheduler/descheduler:v0.26.0   controller-uid=8c57ccff-f2e4-4660-8636-05b43b23026e
job.batch/descheduler-28028472   1/1           6s         70s     descheduler   registry.k8s.io/descheduler/descheduler:v0.26.0   controller-uid=9e3fe2e2-5cba-4859-ba64-8d31688fd0e9
  • kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>:
Name:                 ingress-nginx-public-controller-7bb5d7b7f4-bp8m6
Namespace:            infra
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 ip-10-8-1-123.eu-central-1.compute.internal/10.8.1.123
Start Time:           Sat, 15 Apr 2023 13:38:59 +0200
Labels:               app.kubernetes.io/component=controller
                      app.kubernetes.io/instance=ingress-nginx-public
                      app.kubernetes.io/name=ingress-nginx
                      pod-template-hash=7bb5d7b7f4
Annotations:          kubernetes.io/psp: eks.privileged
Status:               Running
IP:                   10.8.9.100
IPs:
  IP:           10.8.9.100
Controlled By:  ReplicaSet/ingress-nginx-public-controller-7bb5d7b7f4
Containers:
  controller:
    Container ID:  docker://e7d3c5a5546c338d000a98d31f659ab3ef9c212e50b4cdd97b861cc0cadf8f49
    Image:         registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629
    Image ID:      docker-pullable://registry.k8s.io/ingress-nginx/controller@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=$(POD_NAMESPACE)/ingress-nginx-public-defaultbackend
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-public-controller
      --election-id=ingress-nginx-public-leader
      --controller-class=k8s.io/ingress-nginx-public
      --ingress-class=nginx-public
      --configmap=$(POD_NAMESPACE)/ingress-nginx-public-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --ingress-class-by-name=true
      --ingress-class=nginx-public
    State:          Running
      Started:      Sat, 15 Apr 2023 13:39:01 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  200Mi
    Requests:
      cpu:      200m
      memory:   200Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-public-controller-7bb5d7b7f4-bp8m6 (v1:metadata.name)
      POD_NAMESPACE:  infra (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-df98t (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-public-admission
    Optional:    false
  kube-api-access-df98t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
  • kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Name:                     ingress-nginx-public-controller
Namespace:                infra
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx-public
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.5.1
                          helm.sh/chart=ingress-nginx-4.4.0
Annotations:              external-dns.alpha.kubernetes.io/hostname: eks.production.domain.org.
                          meta.helm.sh/release-name: ingress-nginx-public
                          meta.helm.sh/release-namespace: infra
                          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
                          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: true
                          service.beta.kubernetes.io/aws-load-balancer-type: nlb
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-public,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       172.20.17.86
IPs:                      172.20.17.86
LoadBalancer Ingress:     abff6742514594e449947754e9a7bbbd-1f356b0e92edce83.elb.eu-central-1.amazonaws.com
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32601/TCP
Endpoints:                10.8.47.15:80,10.8.9.100:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30984/TCP
Endpoints:                10.8.47.15:443,10.8.9.100:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     32688
Events:                   <none>

  • Current state of ingress object, if applicable:
    • kubectl -n <appnnamespace> get all,ing -o wide
NAME                                            READY   STATUS      RESTARTS       AGE     IP            NODE                                          NOMINATED NODE   READINESS GATES
pod/adminportal-64897bf8bf-wmxtm                1/1     Running     0              16d     10.8.17.218   ip-10-8-23-51.eu-central-1.compute.internal   <none>           <none>
pod/adminportal-dex-cc74fbdcf-fqfxv             1/1     Running     0              16d     10.8.39.109   ip-10-8-42-21.eu-central-1.compute.internal   <none>           <none>
pod/adminportal-oauth2-proxy-5dfbcb59cb-vmwh9   1/1     Running     15 (16d ago)   16d     10.8.36.8     ip-10-8-42-21.eu-central-1.compute.internal   <none>           <none>
...
pod/webclient-prod1-58cc5d9d79-54hm8            1/1     Running     0              4d20h   10.8.10.61    ip-10-8-7-74.eu-central-1.compute.internal    <none>           <none>
pod/webclient-prod1-58cc5d9d79-tlrfc            1/1     Running     0              4d20h   10.8.30.70    ip-10-8-22-32.eu-central-1.compute.internal   <none>           <none>
pod/webclient-prod2-59fb88f579-6f5ld            1/1     Running     0              3d22h   10.8.32.91    ip-10-8-42-21.eu-central-1.compute.internal   <none>           <none>
pod/webclient-prod2-59fb88f579-ph2xk            1/1     Running     0              3d22h   10.8.4.235    ip-10-8-7-74.eu-central-1.compute.internal    <none>           <none>

NAME                                  TYPE           CLUSTER-IP       EXTERNAL-IP                                        PORT(S)             AGE    SELECTOR
service/adminportal                   ClusterIP      172.20.147.224   <none>                                             80/TCP              180d   app.kubernetes.io/name=adminportal
service/adminportal-dex               ClusterIP      172.20.10.93     <none>                                             5556/TCP,5558/TCP   186d   app.kubernetes.io/instance=adminportal-dex,app.kubernetes.io/name=dex
service/adminportal-oauth2-proxy      ClusterIP      172.20.164.137   <none>                                             80/TCP,44180/TCP    186d   app.kubernetes.io/instance=adminportal-oauth2-proxy,app.kubernetes.io/name=adminportal-oauth2-proxy
...
service/webclient-prod1               ClusterIP      172.20.81.176    <none>                                             80/TCP              502d   app.kubernetes.io/name=webclient-prod1
service/webclient-prod1-vip           ClusterIP      172.20.69.128    <none>                                             80/TCP              500d   app.kubernetes.io/name=webclient-prod1
service/webclient-prod2               ClusterIP      172.20.224.174   <none>                                             80/TCP              502d   app.kubernetes.io/name=webclient-prod2
service/webclient-prod2-vip           ClusterIP      172.20.165.251   <none>                                             80/TCP              500d   app.kubernetes.io/name=webclient-prod2

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS           IMAGES                                                                                                        SELECTOR
deployment.apps/adminportal                1/1     1            1           180d   adminportal          registry.infrastructure.domain.org/development/backend/adminportal/adminportal:d62a7699                   app.kubernetes.io/name=adminportal
deployment.apps/adminportal-dex            1/1     1            1           186d   dex                  ghcr.io/dexidp/dex:v2.35.3                                                                                    app.kubernetes.io/instance=adminportal-dex,app.kubernetes.io/name=dex
deployment.apps/adminportal-oauth2-proxy   1/1     1            1           186d   oauth2-proxy         quay.io/oauth2-proxy/oauth2-proxy:v7.4.0                                                                      app.kubernetes.io/instance=adminportal-oauth2-proxy,app.kubernetes.io/name=adminportal-oauth2-proxy
...
deployment.apps/webclient-prod1            2/2     2            2           502d   webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:091e97c7               app.kubernetes.io/name=webclient-prod1
deployment.apps/webclient-prod2            2/2     2            2           502d   webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:fe0b624f               app.kubernetes.io/name=webclient-prod2

NAME                                                  DESIRED   CURRENT   READY   AGE     CONTAINERS           IMAGES                                                                                                        SELECTOR
replicaset.apps/adminportal-5894c577f5                0         0         0       180d    adminportal          registry.infrastructure.domain.org/development/backend/adminportal/adminportal:39033c25                   app.kubernetes.io/name=adminportal,pod-template-hash=5894c577f5
replicaset.apps/adminportal-64897bf8bf                1         1         1       123d    adminportal          registry.infrastructure.domain.org/development/backend/adminportal/adminportal:d62a7699                   app.kubernetes.io/name=adminportal,pod-template-hash=64897bf8bf
replicaset.apps/adminportal-6fd59c599c                0         0         0       137d    adminportal          registry.infrastructure.domain.org/development/backend/adminportal/adminportal:96fc1083                   app.kubernetes.io/name=adminportal,pod-template-hash=6fd59c599c
replicaset.apps/adminportal-76668f4b55                0         0         0       180d    adminportal          registry.infrastructure.domain.org/development/backend/adminportal/adminportal:6e59b2a8                   app.kubernetes.io/name=adminportal,pod-template-hash=76668f4b55
replicaset.apps/adminportal-7ffb66977c                0         0         0       177d    adminportal          registry.infrastructure.domain.org/development/backend/adminportal/adminportal:b8fd0783                   app.kubernetes.io/name=adminportal,pod-template-hash=7ffb66977c
replicaset.apps/adminportal-dex-85fc78c584            0         0         0       186d    dex                  ghcr.io/dexidp/dex:v2.31.1                                                                                    app.kubernetes.io/instance=adminportal-dex,app.kubernetes.io/name=dex,pod-template-hash=85fc78c584
replicaset.apps/adminportal-dex-cc74fbdcf             1         1         1       90d     dex                  ghcr.io/dexidp/dex:v2.35.3                                                                                    app.kubernetes.io/instance=adminportal-dex,app.kubernetes.io/name=dex,pod-template-hash=cc74fbdcf
replicaset.apps/adminportal-oauth2-proxy-5dd8d99798   0         0         0       186d    oauth2-proxy         quay.io/oauth2-proxy/oauth2-proxy:v7.2.0                                                                      app.kubernetes.io/instance=adminportal-oauth2-proxy,app.kubernetes.io/name=adminportal-oauth2-proxy,pod-template-hash=5dd8d99798
replicaset.apps/adminportal-oauth2-proxy-5dfbcb59cb   1         1         1       90d     oauth2-proxy         quay.io/oauth2-proxy/oauth2-proxy:v7.4.0                                                                      app.kubernetes.io/instance=adminportal-oauth2-proxy,app.kubernetes.io/name=adminportal-oauth2-proxy,pod-template-hash=5dfbcb59cb
...
replicaset.apps/webclient-prod1-564f46fd8b            0         0         0       13d     webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:5abde1b8               app.kubernetes.io/name=webclient-prod1,pod-template-hash=564f46fd8b
replicaset.apps/webclient-prod1-58cc5d9d79            2         2         2       4d20h   webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:091e97c7               app.kubernetes.io/name=webclient-prod1,pod-template-hash=58cc5d9d79
replicaset.apps/webclient-prod1-65974fc74             0         0         0       46d     webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:8dc19f52               app.kubernetes.io/name=webclient-prod1,pod-template-hash=65974fc74
replicaset.apps/webclient-prod1-67456db7c8            0         0         0       16d     webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:e900c73a               app.kubernetes.io/name=webclient-prod1,pod-template-hash=67456db7c8
replicaset.apps/webclient-prod1-697c599895            0         0         0       236d    webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:a3dd7c46               app.kubernetes.io/name=webclient-prod1,pod-template-hash=697c599895
replicaset.apps/webclient-prod1-6bd4fc6fc8            0         0         0       124d    webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:2e206ac7               app.kubernetes.io/name=webclient-prod1,pod-template-hash=6bd4fc6fc8
replicaset.apps/webclient-prod1-6bf45f69f             0         0         0       52d     webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:d80b8c5c               app.kubernetes.io/name=webclient-prod1,pod-template-hash=6bf45f69f
replicaset.apps/webclient-prod1-744c5757d7            0         0         0       124d    webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:6c365714               app.kubernetes.io/name=webclient-prod1,pod-template-hash=744c5757d7
replicaset.apps/webclient-prod1-77857fdb9d            0         0         0       53d     webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:48c66d30               app.kubernetes.io/name=webclient-prod1,pod-template-hash=77857fdb9d
replicaset.apps/webclient-prod1-8bb778d95             0         0         0       129d    webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:31ddd092               app.kubernetes.io/name=webclient-prod1,pod-template-hash=8bb778d95
replicaset.apps/webclient-prod1-d5889c654             0         0         0       296d    webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:e7658dfd               app.kubernetes.io/name=webclient-prod1,pod-template-hash=d5889c654
replicaset.apps/webclient-prod2-5558cb5b8             0         0         0       52d     webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:48c66d30               app.kubernetes.io/name=webclient-prod2,pod-template-hash=5558cb5b8
replicaset.apps/webclient-prod2-55cc54f468            0         0         0       16d     webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:e900c73a               app.kubernetes.io/name=webclient-prod2,pod-template-hash=55cc54f468
replicaset.apps/webclient-prod2-59fb88f579            2         2         2       3d22h   webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:fe0b624f               app.kubernetes.io/name=webclient-prod2,pod-template-hash=59fb88f579
replicaset.apps/webclient-prod2-5b854d8cb4            0         0         0       313d    webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:23aa03c5               app.kubernetes.io/name=webclient-prod2,pod-template-hash=5b854d8cb4
replicaset.apps/webclient-prod2-5d446546b9            0         0         0       293d    webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:9a98d56a               app.kubernetes.io/name=webclient-prod2,pod-template-hash=5d446546b9
replicaset.apps/webclient-prod2-5f7476c8f7            0         0         0       52d     webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:d80b8c5c               app.kubernetes.io/name=webclient-prod2,pod-template-hash=5f7476c8f7
replicaset.apps/webclient-prod2-6b4db655cd            0         0         0       341d    webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:09aee4d8               app.kubernetes.io/name=webclient-prod2,pod-template-hash=6b4db655cd
replicaset.apps/webclient-prod2-6bdfdcc9c8            0         0         0       341d    webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:6b501044               app.kubernetes.io/name=webclient-prod2,pod-template-hash=6bdfdcc9c8
replicaset.apps/webclient-prod2-7cc757bc56            0         0         0       46d     webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:8dc19f52               app.kubernetes.io/name=webclient-prod2,pod-template-hash=7cc757bc56
replicaset.apps/webclient-prod2-9d5ffdd5              0         0         0       123d    webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:6c365714               app.kubernetes.io/name=webclient-prod2,pod-template-hash=9d5ffdd5
replicaset.apps/webclient-prod2-fdc5997f6             0         0         0       235d    webclient            registry.infrastructure.domain.org/development/buergerplatform/webclient/webclient:a3dd7c46               app.kubernetes.io/name=webclient-prod2,pod-template-hash=fdc5997f6

NAME                                                  REFERENCE                    TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/adminportal       Deployment/adminportal       0%/60%    1         10        1          180d
horizontalpodautoscaler.autoscaling/webclient-prod1   Deployment/webclient-prod1   0%/60%    2         10        2          502d
horizontalpodautoscaler.autoscaling/webclient-prod2   Deployment/webclient-prod2   0%/60%    2         10        2          502d

NAME                                                 CLASS          HOSTS                                                  ADDRESS                                                                            PORTS     AGE
ingress.networking.k8s.io/adminportal                nginx          adminportal.eks.production.domain.org              a00fe19d673a14015a17e74028558dd8-e9598176691f9b82.elb.eu-central-1.amazonaws.com   80, 443   3d
ingress.networking.k8s.io/adminportal-dex            <none>         adminportal-dex.prod.eks.production.domain.org     internal-k8s-prod-adminpor-f693d70f51-493482359.eu-central-1.elb.amazonaws.com     80, 443   186d
ingress.networking.k8s.io/adminportal-oauth2-proxy   <none>         adminportal-oauth.prod.eks.production.domain.org   internal-k8s-prod-adminpor-514b109b18-79272633.eu-central-1.elb.amazonaws.com      80, 443   186d
ingress.networking.k8s.io/webclient-prod1            nginx-public   webclient.prod1.eks.production.domain.org          abff6742514594e449947754e9a7bbbd-1f356b0e92edce83.elb.eu-central-1.amazonaws.com   80, 443   44h
ingress.networking.k8s.io/webclient-prod1-vip        nginx-public   webclient.eks.production.domain.org                abff6742514594e449947754e9a7bbbd-1f356b0e92edce83.elb.eu-central-1.amazonaws.com   80, 443   40h
ingress.networking.k8s.io/webclient-prod2            nginx-public   webclient.prod2.eks.production.domain.org          abff6742514594e449947754e9a7bbbd-1f356b0e92edce83.elb.eu-central-1.amazonaws.com   80, 443   44h
ingress.networking.k8s.io/webclient-prod2-vip        nginx-public   webclient.eks.production.domain.org                abff6742514594e449947754e9a7bbbd-1f356b0e92edce83.elb.eu-central-1.amazonaws.com   80, 443   44h

  • kubectl -n <appnamespace> describe ing <ingressname>
Name:             webclient-prod2-vip
Namespace:        prod
Address:          abff6742514594e449947754e9a7bbbd-1f356b0e92edce83.elb.eu-central-1.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  webclient-prod2-vip-tls terminates webclient.eks.production.domain.org
Rules:
  Host                                     Path  Backends
  ----                                     ----  --------
  webclient.eks.production.domain.org  
                                           /   webclient-prod2-vip:80 (10.8.32.91:8080,10.8.4.235:8080)
Annotations:                               cert-manager.io/cluster-issuer: letsencrypt
                                           nginx.ingress.kubernetes.io/canary: true
                                           nginx.ingress.kubernetes.io/canary-weight: 100
Events:                                    <none>
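
For reference, a minimal sketch of what this canary Ingress manifest presumably looks like, reconstructed from the describe output above (apiVersion, pathType and the exact TLS layout are assumptions; the names, host, backend and annotations are taken from this issue):

apiVersion: networking.k8s.io/v1   # assumed
kind: Ingress
metadata:
  name: webclient-prod2-vip
  namespace: prod
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "100"
spec:
  ingressClassName: nginx-public
  tls:
    - hosts:
        - webclient.eks.production.domain.org
      secretName: webclient-prod2-vip-tls
  rules:
    - host: webclient.eks.production.domain.org
      http:
        paths:
          - path: /
            pathType: Prefix   # assumed
            backend:
              service:
                name: webclient-prod2-vip
                port:
                  number: 80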

  • Others:
    • Any other related information like ;
      • copy/paste of the snippet (if applicable)
      • kubectl describe ... of any custom configmap(s) created and in use
      • Any other related information that may help
@uwebartels added the kind/bug label Apr 17, 2023
@k8s-ci-robot added the needs-triage label Apr 17, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@longwuyuan
Contributor

/remove-kind bug
@uwebartels you can check and verify whether the ingress object is configured with an ingress.spec.ingressClassName field or with the ingressClass annotation.
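
For illustration, the two variants referred to here, as they would appear in an Ingress manifest (a sketch; the class name nginx-public is taken from this issue):

# variant 1: the spec field
spec:
  ingressClassName: nginx-public

# variant 2: the legacy annotation
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx-public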

@k8s-ci-robot k8s-ci-robot added needs-kind Indicates a PR lacks a `kind/foo` label and requires one. and removed kind/bug Categorizes issue or PR as related to a bug. labels Apr 17, 2023
@uwebartels
Author

Yes, the ingress has the ingressClassName configured:

  ingressClassName: nginx-public

In the output of kubectl describe ingress above you can see that there is no ingressClass annotation. Both are refused when you remove and re-install the ingress.

@longwuyuan
Contributor

Not sure if I worded it wrong or if you got it wrong. With the data visible so far;

  • you are on K8S v1.22
  • ingressClassName is required on K8S v1.22
  • I see kubectl describe output of only one ingress object
  • I can comment if I see at least the non-canary ingress for that same FQDN
  • I don't see the curl request with -v and its response, so I cannot comment much

It will help to know what I am missing

@uwebartels
Author

  • you are on K8S v1.22: yes
  • ingressClassName is required on K8S v1.22: yes, and it is set for the ingress with canary annotations as well as for the ingress with the same host without canary annotations

Here is the other ingress object with the same host but without canary annotations:

Name:             webclient-prod1-vip
Namespace:        prod
Address:          abff6742514594e449947754e9a7bbbd-1f356b0e92edce83.elb.eu-central-1.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  webclient-prod1-vip-tls terminates webclient.eks.production.domain.org
Rules:
  Host                                     Path  Backends
  ----                                     ----  --------
  webclient.eks.production.domain.org  
                                           /   webclient-prod1-vip:80 (10.8.10.61:8080,10.8.30.70:8080)
Annotations:                               cert-manager.io/cluster-issuer: letsencrypt
Events:                                    <none>

curl output (hostname+ip anonymized):

$ curl -v --insecure https://webclient.eks.production.domain.org
* Rebuilt URL to: https://webclient.eks.production.domain.org/
*   Trying 1.2.3.4...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* Connected to webclient.eks.production.domain.org (1.2.3.4) port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: /opt/local/share/curl/curl-ca-bundle.crt
  CApath: none
* TLSv1.2, TLS Unknown, Unknown (22):
} [5 bytes data]
* TLSv1.2, TLS handshake, Client hello (1):
} [512 bytes data]
* SSLv2, Unknown (22):
{ [5 bytes data]
* TLSv1.2, TLS handshake, Server hello (2):
{ [93 bytes data]
* SSLv2, Unknown (22):
{ [5 bytes data]
* TLSv1.2, TLS handshake, CERT (11):
{ [4071 bytes data]
* SSLv2, Unknown (22):
{ [5 bytes data]
* TLSv1.2, TLS handshake, Server key exchange (12):
{ [333 bytes data]
* SSLv2, Unknown (22):
{ [5 bytes data]
* TLSv1.2, TLS handshake, Server finished (14):
{ [4 bytes data]
* SSLv2, Unknown (22):
} [5 bytes data]
* TLSv1.2, TLS handshake, Client key exchange (16):
} [70 bytes data]
* SSLv2, Unknown (20):
} [5 bytes data]
* TLSv1.2, TLS change cipher, Client hello (1):
} [1 bytes data]
* SSLv2, Unknown (22):
} [5 bytes data]
* TLSv1.2, TLS handshake, Finished (20):
} [16 bytes data]
* SSLv2, Unknown (20):
{ [5 bytes data]
* TLSv1.2, TLS change cipher, Client hello (1):
{ [1 bytes data]
* SSLv2, Unknown (22):
{ [5 bytes data]
* TLSv1.2, TLS handshake, Finished (20):
{ [16 bytes data]
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* Server certificate:
* 	 subject: CN=webclient.eks.production.domain.org
* 	 start date: 2023-03-25 08:59:02 GMT
* 	 expire date: 2023-06-23 08:59:01 GMT
* 	 issuer: C=US; O=Let's Encrypt; CN=R3
* 	 SSL certificate verify result: certificate has expired (10), continuing anyway.
* SSLv2, Unknown (23):
} [5 bytes data]
> GET / HTTP/1.1
> User-Agent: curl/7.41.0
> Host: webclient.eks.production.domain.org
> Accept: */*
> 
* SSLv2, Unknown (23):
{ [5 bytes data]
< HTTP/1.1 200 OK
< Date: Mon, 17 Apr 2023 16:14:32 GMT
< Content-Type: text/html
< Content-Length: 795
< Connection: keep-alive
< Last-Modified: Thu, 06 Apr 2023 07:56:52 GMT
< Vary: Accept-Encoding
< ETag: "642e7b44-31b"
< Accept-Ranges: bytes
< Strict-Transport-Security: max-age=15724800; includeSubDomains
< 
{ [795 bytes data]
100   795  100   795    0     0   4208      0 --:--:-- --:--:-- --:--:--  4297
* Connection #0 to host webclient.eks.production.domain.org left intact
<!doctype html><html lang="de"><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"/><meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=1,user-scalable=no"/><meta name="mobile-web-app-capable" content="yes"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><link href="https://fonts.googleapis.com/css2?family=Ubuntu:wght@300;400;500;700&display=swap" rel="stylesheet"/><link rel="stylesheet" href="https://js.arcgis.com/4.24/esri/themes/light/main.css"/><title>MIG Bürgerplattform</title><link rel="icon" href="favicon.jpg"><script defer="defer" src="3666.9b69445a5743164a9a7b.js"></script><script defer="defer" src="4826.03ebbdfcc0f1228388c3.js"></script><link href="index.css" rel="stylesheet"></head><body><div id="root"/></body></html>

@longwuyuan
Contributor

@uwebartels I think someone needs to run this https://kubesphere.io/blogs/canary-release-with-nginx-ingress/ test on a minikube or kind cluster first, to establish a baseline of whether the canary release feature itself is broken.

And then a second test needs to tweak the environment to match your config, the highlight being a weight of 100.

@uwebartels
Author

Hi,

I did the setup with minikube and see the same behavior in the nginx.conf of the ingress controller. Unfortunately I cannot access the ingresses from outside due to my local setup.

There are some configurations in the ingress-controller which are not needed here, but I wanted to use a configuration as close as possible to the production environment.

In the final check you can see that there is no entry for the ingress nginx2-vip and only one entry for the upstream dev-nginx2-http. I see the same behavior in my production setup. So I have no idea how the controller would identify requests intended for nginx2-vip or send them to dev-nginx2-http.

Best...
Uwe

start minikube

minikube start --kubernetes-version=v1.22.17

setup namespaces

echo 'apiVersion: v1
kind: Namespace
metadata:
  labels:
    kubernetes.io/metadata.name: dev
    role: dev
  name: dev
spec:
  finalizers:
  - kubernetes
' | kubectl apply -f -
echo 'apiVersion: v1
kind: Namespace
metadata:
  labels:
    kubernetes.io/metadata.name: infra
    role: infra
  name: infra
spec:
  finalizers:
  - kubernetes
' | kubectl apply -f -

prepare 2 applications

echo 'ingress:
  enabled: true
  hostname: sample1
  ingressClassName: nginx
serviceAccount:
  create: true
  name: sample1
' > nginx1.values.yaml
echo 'ingress:
  enabled: true
  hostname: sample2
  ingressClassName: nginx
serviceAccount:
  create: true
  name: sample2
' > nginx2.values.yaml

install 2 applications
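
If the Bitnami chart repository is not yet configured locally, it would need to be added first (assuming the standard repo URL):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update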

helm upgrade --install nginx1 bitnami/nginx --version 13.2.34 -n dev -f nginx1.values.yaml
helm upgrade --install nginx2 bitnami/nginx --version 13.2.34 -n dev -f nginx2.values.yaml

install nginx-ingress

echo 'controller:
  autoscaling:
    enabled: true
    maxReplicas: 10
    minReplicas: 2
    targetCPUUtilizationPercentage: 70
    targetMemoryUtilizationPercentage: 70
  ingressClassResource:
    default: true
  priorityClassName: system-cluster-critical
  resources:
    limits:
      cpu: 200m
      memory: 200Mi
    requests:
      cpu: 200m
      memory: 200Mi
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: eks.production.domain.org.
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
    enabled: true
    loadBalancerSourceRanges:
    - 10.0.0.0/8
  serviceMonitor:
    enabled: false
defaultBackend:
  autoscaling:
    enabled: true
    maxReplicas: 10
    minReplicas: 2
  enabled: true
  priorityClassName: system-cluster-critical
  resources:
    limits:
      cpu: 10m
      memory: 20Mi
    requests:
      cpu: 10m
      memory: 20Mi
' > ingress-nginx.values.yaml
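
Likewise, the ingress-nginx chart repository may need to be added before the install (assuming the standard repo URL):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update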

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -n infra -f ingress-nginx.values.yaml --version 4.4.0

install vip ingresses (one ingress with final hostname, one ingress with canary annotations and same hostname)

echo '---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx1-vip
  namespace: "dev"
  labels:
    app.kubernetes.io/instance: nginx1-vip
  annotations:
spec:
  ingressClassName: "nginx"
  rules:
    - host: "sample"
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: nginx1
                port:
                  name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "100"
  labels:
    app.kubernetes.io/instance: nginx2-vip
  name: nginx2-vip
  namespace: "dev"
spec:
  ingressClassName: "nginx"
  rules:
    - host: "sample"
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: nginx2
                port:
                  name: http
' > ingress-vip.yaml 

k apply -f ingress-vip.yaml
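
To confirm that both ingress objects were admitted and carry the expected class and canary annotations before looking at nginx.conf, something like this can be run (k being the kubectl alias used here):

k get ingress -n dev
k describe ingress nginx2-vip -n dev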

nginx config check

k exec -it -n infra <nginx controller pod> -- sh

/etc/nginx $ grep ingress_name nginx.conf
	# $ingress_name
			set $ingress_name   "";
			set $ingress_name   "nginx1-vip";
			set $ingress_name   "nginx1";
			set $ingress_name   "nginx2";
/etc/nginx $ 
/etc/nginx $ grep -E 'ingress_name|proxy_upstream_name' nginx.conf
	# $ingress_name
	log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';
		set $proxy_upstream_name "-";
			set $ingress_name   "";
			set $proxy_upstream_name "upstream-default-backend";
			set $proxy_host          $proxy_upstream_name;
		set $proxy_upstream_name "-";
			set $ingress_name   "nginx1-vip";
			set $proxy_upstream_name "dev-nginx1-http";
			set $proxy_host          $proxy_upstream_name;
		set $proxy_upstream_name "-";
			set $ingress_name   "nginx1";
			set $proxy_upstream_name "dev-nginx1-http";
			set $proxy_host          $proxy_upstream_name;
		set $proxy_upstream_name "-";
			set $ingress_name   "nginx2";
			set $proxy_upstream_name "dev-nginx2-http";
			set $proxy_host          $proxy_upstream_name;
		set $proxy_upstream_name "internal";
		set $proxy_upstream_name "internal";
	lua_add_variable $proxy_upstream_name;
/etc/nginx $ 
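
For context: as far as I understand the controller, a canary ingress is not rendered as its own server/location block in nginx.conf at all; it is merged into the matching non-canary ingress as an alternative backend, and the weight-based split is applied at request time by the Lua balancer. If that is the case, the place to look is the dynamic backend configuration rather than nginx.conf, e.g. with the dbg tool shipped in the controller image (assuming it is present in this version):

k exec -it -n infra <nginx controller pod> -- /dbg backends all

The output should then list dev-nginx2-http with a traffic-shaping policy and show it as an alternative backend of dev-nginx1-http; if it does not appear there either, that would point to the canary ingress really being ignored.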


@longwuyuan
Contributor

Your test has multiple aspects that seem important to you, but from the point of view of a reader here these aspects do not relate to the issue you have raised. Examples of the confusion I am having;

  • I don't see live state anywhere, like the output of kubectl describe
  • I see canary-weight set to 100. That sort of defies what a canary release for an app is, if that 100 represents all traffic
  • I see annotations related to AWS and a heading saying minikube, so I am not sure how AWS-related annotations work in minikube

@uwebartels
Author

I tried to make the behavior reproducible, so whoever takes a closer look can do the kubectl describe. I did this in my problematic environment.
Regarding canary-weight 100: we use this for switching traffic from one backend to another. So far this has worked perfectly.
The annotations related to AWS won't have an effect as there is no AWS driver. As I wrote, I tried to make this reproducible and as close as possible to our environment.

So I see your aspects.
But do you see mine?
Is the shown behavior correct?

Best...
Uwe

@longwuyuan
Contributor

I will have to reproduce with the highlighted configs like canary-weight set to 100.

Note that there is a concept of "total-weight" and it used to be 100 by default. Then recently we merged a PR where the total-weight can be set to 1000, so that a fractional percentage of traffic can be shaped. Search the merged PRs for the PR number. I don't know if that relates to your use case, but if it does then you would need to define the total-weight as well as the canary-weight.
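
If that newer total-weight behaviour applies to the installed version, the annotations would presumably look something like this (the annotation name canary-weight-total is my assumption; please verify it against the docs for the controller version in use):

  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "5"
    nginx.ingress.kubernetes.io/canary-weight-total: "1000"   # 5/1000 = 0.5% of traffic to the canary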

@uwebartels
Author

The existing canary functionality works. In our application we have a caching problem that caused us to see the wrong content.
