Corner Case: upstream name duplication causing ingress pointing to wrong service [following issue template] #11938

Open
Revolution1 opened this issue Sep 6, 2024 · 23 comments · May be fixed by #11942
Labels
kind/bug Categorizes issue or PR as related to a bug. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@Revolution1

What happened: HTTP returns content from the wrong backend

What you expected to happen: HTTP returns content from the correct backend that the Ingress is configured with.

What do you think went wrong?:

links to #11937

The function upstreamName outputs the same upstream name for different Ingress backends.
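A minimal sketch of the collision, using the %s-%s-%s format that upstreamName applies to named ports (the namespace and names below mirror the reproduction manifests further down):

package main

import "fmt"

func main() {
	ns := "default"
	// Backend 1: Service "service", port name "pod-http"
	a := fmt.Sprintf("%s-%s-%s", ns, "service", "pod-http")
	// Backend 2: Service "service-pod", port name "http"
	b := fmt.Sprintf("%s-%s-%s", ns, "service-pod", "http")
	fmt.Println(a)      // default-service-pod-http
	fmt.Println(b)      // default-service-pod-http
	fmt.Println(a == b) // true: two distinct backends, one upstream name
}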

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
v1.11.1 (and the latest main)

Kubernetes version (use kubectl version):

Environment:

  • Cloud provider or hardware configuration: local

  • OS (e.g. from /etc/os-release): wsl ubuntu

  • Kernel (e.g. uname -a): Linux 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools:

    • Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.
  • Basic cluster related info:

    • kubectl version
    Client Version: v1.30.0
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.29.2
    
    • kubectl get nodes -o wide
    NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION                       CONTAINER-RUNTIME
    kind-control-plane   Ready    control-plane   10d   v1.29.2   172.19.0.2    <none>        Debian GNU/Linux 12 (bookworm)   5.15.146.1-microsoft-standard-WSL2   containerd://1.7.13
    
  • How was the ingress-nginx-controller installed:

    • If helm was used then please show output of helm ls -A | grep -i ingress
    nginx           ingress-nginx           3               2024-09-06 18:44:48.292292472 +0800 HKT deployed        ingress-nginx-4.11.1    1.11.1
    
    • If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>
    USER-SUPPLIED VALUES:
    controller:
      admissionWebhooks:
        certManager:
          enabled: true
        patch:
          image:
            digest: ""
            registry: registry.local:5000
      extraArgs:
        publish-status-address: 127.0.0.1
      hostPort:
        enabled: true
      image:
        digest: ""
        digestChroot: ""
        registry: registry.local:5000
      ingressClassResource:
        default: true
      metrics:
        enabled: true
        port: 10254
      nodeSelector:
        ingress-ready: "true"
        kubernetes.io/os: linux
      publishService:
        enabled: false
      updateStrategy:
        rollingUpdate:
          maxUnavailable: 1
    
  • Current State of the controller:

    • kubectl describe ingressclasses
    Name:         nginx
    Labels:       app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=nginx
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=ingress-nginx
                  app.kubernetes.io/part-of=ingress-nginx
                  app.kubernetes.io/version=1.11.1
                  helm.sh/chart=ingress-nginx-4.11.1
    Annotations:  ingressclass.kubernetes.io/is-default-class: true
                  meta.helm.sh/release-name: nginx
                  meta.helm.sh/release-namespace: ingress-nginx
    Controller:   k8s.io/ingress-nginx
    Events:       <none>
    
    • kubectl -n <ingresscontrollernamespace> get all -o wide
    • kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
    • kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
  • Current state of ingress object, if applicable:

    • kubectl -n <appnamespace> get all,ing -o wide
    NAME                                                  READY   STATUS    RESTARTS      AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
    pod/nginx-ingress-nginx-controller-76dcf989d8-v2qm8   1/1     Running   2 (24h ago)   10d   10.244.0.14   kind-control-plane   <none>           <none>
    
    NAME                                               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
    service/nginx-ingress-nginx-controller             LoadBalancer   10.96.75.220    <pending>     80:31222/TCP,443:31220/TCP   10d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx
    service/nginx-ingress-nginx-controller-admission   ClusterIP      10.96.138.105   <none>        443/TCP                      10d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx
    service/nginx-ingress-nginx-controller-metrics     ClusterIP      10.96.152.203   <none>        10254/TCP                    10d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx
    
    NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                 SELECTOR
    deployment.apps/nginx-ingress-nginx-controller   1/1     1            1           10d   controller   registry.local:5000/ingress-nginx/controller:v1.11.1   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx
    
    NAME                                                        DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                 SELECTOR
    replicaset.apps/nginx-ingress-nginx-controller-76dcf989d8   1         1         1       10d   controller   registry.local:5000/ingress-nginx/controller:v1.11.1   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=76dcf989d8
    
    • kubectl -n <appnamespace> describe ing <ingressname>
    • If applicable, then, your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag
  • Others:
    The full manifests that reproduce this problem:

# pod1
apiVersion: v1
kind: Pod
metadata:
  name: "pod1"
  labels:
    app: "pod1"
spec:
  containers:
    - name: pod1
      image: "busybox:latest"
      command:
        - "sh"
        - "-c"
        - |
          while true; do
          echo -e "HTTP/1.1 200 OK\n\n Responsing From: $HOSTNAME" | nc -l -p 8000;
          done;
      resources:
        limits:
          cpu: 200m
          memory: 500Mi
        requests:
          cpu: 100m
          memory: 200Mi
      ports:
        - containerPort: 8000
---
# pod2
apiVersion: v1
kind: Pod
metadata:
  name: "pod2"
  labels:
    app: "pod2"
spec:
  containers:
    - name: pod1
      image: "busybox:latest"
      command:
        - "sh"
        - "-c"
        - |
          while true; do
          echo -e "HTTP/1.1 200 OK\n\n Responsing From: $HOSTNAME" | nc -l -p 8000;
          done;
      resources:
        limits:
          cpu: 200m
          memory: 500Mi
        requests:
          cpu: 100m
          memory: 200Mi
      ports:
        - containerPort: 8000
---
# service1
apiVersion: v1
kind: Service
metadata:
  name: "service-pod"
spec:
  selector:
    app: "pod1"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
      name: http
---
# service2
apiVersion: v1
kind: Service
metadata:
  name: "service"
spec:
  selector:
    app: "pod2"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
      name: pod-http
---
# ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
    - host: "example.local"
      http:
        paths:
          - path: "/service1"
            pathType: Prefix
            backend:
              service:
                name: "service-pod"
                port:
                  name: "http"
          - path: "/service2"
            pathType: Prefix
            backend:
              service:
                name: "service"
                port:
                  name: "pod-http"

How to reproduce this issue:

$ curl example.local/service1
 Responsing From: pod1

$ curl example.local/service2
 Responsing From: pod1
# expected: Responsing From: pod2
@Revolution1 Revolution1 added the kind/bug Categorizes issue or PR as related to a bug. label Sep 6, 2024
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Sep 6, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@longwuyuan
Contributor

longwuyuan commented Sep 6, 2024


@Revolution1 this is a community project, so even if the bot requested information, you can help others in the community by providing detailed information, so others can help you better.

  • Why does my reproduction attempt fail when I use one hostname but 2 different paths for different backends?
  • Why do I get a response from the correct pod1 or pod2 when I change the path in my request URL?
NAME       READY   STATUS    RESTARTS   AGE
pod/pod1   1/1     Running   0          5m39s
pod/pod2   1/1     Running   0          5m39s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   6d3h
service/svc1         ClusterIP   10.110.143.170   <none>        80/TCP    5m39s
service/svc2         ClusterIP   10.96.241.33     <none>        80/TCP    5m39s

NAME                                 CLASS   HOSTS           ADDRESS        PORTS   AGE
ingress.networking.k8s.io/ingress1   nginx   example.local   192.168.49.2   80      5m39s

% k describe ing ingress1 
Name:             ingress1
Labels:           <none>
Namespace:        default
Address:          192.168.49.2
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host           Path  Backends
  ----           ----  --------
  example.local  
                 /service1   svc1:80 (10.244.0.22:8000)
                 /service2   svc2:80 (10.244.0.21:8000)
Annotations:     <none>
Events:
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  Sync    8m13s (x2 over 8m45s)  nginx-ingress-controller  Scheduled for sync
[~] 
% 

[~] 
% curl example.local/service1 --resolve example.local:80:`minikube ip`
 Responsing From: pod1
[~] 
% curl example.local/service2 --resolve example.local:80:`minikube ip`
 Responsing From: pod2
[~] 
% 

% logs
192.168.49.1 - - [06/Sep/2024:14:25:59 +0000] "GET /service1 HTTP/1.1" 200 34 "-" "curl/7.81.0" 85 0.001 [default-svc1-80] [] 10.244.0.22:8000 23 0.001 200 188b5f0b3c5c28278305d28dd6c0a353
192.168.49.1 - - [06/Sep/2024:14:26:06 +0000] "GET /service2 HTTP/1.1" 200 34 "-" "curl/7.81.0" 85 0.001 [default-svc2-80] [] 10.244.0.21:8000 23 0.001 200 830fce8bd89d7cba2c9caed579a89181
192.168.49.1 - - [06/Sep/2024:14:26:34 +0000] "GET /service1 HTTP/1.1" 200 34 "-" "curl/7.81.0" 85 0.001 [default-svc1-80] [] 10.244.0.22:8000 23 0.001 200 00dd290851b42bf2d411aeb364c54284
192.168.49.1 - - [06/Sep/2024:14:26:38 +0000] "GET /service2 HTTP/1.1" 200 34 "-" "curl/7.81.0" 85 0.001 [default-svc2-80] [] 10.244.0.21:8000 23 0.001 200 faa6c97d114e94a9f8204aef9aa5e711

% k -n ingress-nginx exec ingress-nginx-controller-7b7b559f8b-pdx9c -- grep svc1 /etc/nginx/nginx.conf
                        set $service_name   "svc1";
                        set $proxy_upstream_name "default-svc1-80";
                        set $service_name   "svc1";
                        set $proxy_upstream_name "default-svc1-80";
[~] 
% k -n ingress-nginx exec ingress-nginx-controller-7b7b559f8b-pdx9c -- grep svc2 /etc/nginx/nginx.conf
                        set $service_name   "svc2";
                        set $proxy_upstream_name "default-svc2-80";
                        set $service_name   "svc2";
                        set $proxy_upstream_name "default-svc2-80";
[~] 



  • Why are you naming the same port 2 times with 2 different names, when the port is not relevant to your suspected bug?

  • My edited manifest is below. What is wrong with it ?

# pod1
apiVersion: v1
kind: Pod
metadata:
  name: "pod1"
  labels:
    app: "pod1"
spec:
  containers:
    - name: pod1
      image: "busybox:latest"
      command:
        - "sh"
        - "-c"
        - |
          while true; do
          echo -e "HTTP/1.1 200 OK\n\n Responsing From: $HOSTNAME" | nc -l -p 8000;
          done;
      resources:
        limits:
          cpu: 200m
          memory: 500Mi
        requests:
          cpu: 100m
          memory: 200Mi
      ports:
        - containerPort: 8000
---
# pod2
apiVersion: v1
kind: Pod
metadata:
  name: "pod2"
  labels:
    app: "pod2"
spec:
  containers:
    - name: pod1
      image: "busybox:latest"
      command:
        - "sh"
        - "-c"
        - |
          while true; do
          echo -e "HTTP/1.1 200 OK\n\n Responsing From: $HOSTNAME" | nc -l -p 8000;
          done;
      resources:
        limits:
          cpu: 200m
          memory: 500Mi
        requests:
          cpu: 100m
          memory: 200Mi
      ports:
        - containerPort: 8000
---
# service1
apiVersion: v1
kind: Service
metadata:
  name: "svc1"
spec:
  selector:
    app: "pod1"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
---
# service2
apiVersion: v1
kind: Service
metadata:
  name: "svc2"
spec:
  selector:
    app: "pod2"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
---
# ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress1
spec:
  ingressClassName: nginx
  rules:
    - host: "example.local"
      http:
        paths:
          - path: "/service1"
            pathType: Prefix
            backend:
              service:
                name: "svc1"
                port:
                  number: 80
          - path: "/service2"
            pathType: Prefix
            backend:
              service:
                name: "svc2"
                port:
                  number: 80

@Revolution1
Author

@longwuyuan
Please don't edit the manifest if you want to reproduce it.

Anyway, thank you for paying attention. 🙏
Could you @-mention any of the developer maintainers? I believe they'll get it once they see the troubleshooting in the linked issue.

@longwuyuan
Contributor

/remove-kind bug
cc @Gacko

@k8s-ci-robot k8s-ci-robot added needs-kind Indicates a PR lacks a `kind/foo` label and requires one. and removed kind/bug Categorizes issue or PR as related to a bug. labels Sep 6, 2024
@Revolution1
Author

This bug is not related to the hostname or path you use; it's only related to the service name and service port name.

@longwuyuan
Contributor

@Revolution1 I cc'd another contributor

  • For other people who read this and don't understand, can you answer the questions below:

  • Why are you first naming port number 80 "http" and then renaming the same port number 80 "pod-http"?

  • Why are you defining 2 names for one port number in your manifest, when just port number 80 is enough to create a valid Ingress?

  • Why are you not specifying any ingressClassName while using controller v1.11.x?

@Revolution1
Author

Revolution1 commented Sep 6, 2024

@longwuyuan

Maybe the way it was reproduced looks quite confusing, but it does come from a real-life case, and it triggers the bug.

So:

Why are you first naming port number 80 "http" and then renaming the same port number 80 "pod-http"?

Because that makes fmt.Sprintf("%s-%s-%s", ns, "service", "pod-http") and fmt.Sprintf("%s-%s-%s", ns, "service-pod", "http") output the same upstream name, which overwrites one of the paths' backends.

Why are you defining 2 names for one port number in your manifest, when just port number 80 is enough to create a valid Ingress?

Because in real life, I have:

  • a Pod containing 2 containers, both of which have a port named rpc, so I had to name one of them container1-rpc and the other container2-rpc
  • later we decided to separate the 2 containers into two Pods, so that when we want to recreate one of them, we don't have to recreate the other as well. There are 2 Pods now, pod-container1 and pod-container2, and both port names were renamed back to rpc

So then I have 2 Ingresses: one points to the old pod, the other to the new pod-container1.
The paths look like this:

# ingress that points to pod
host: fasdfsadfsf
- path: /
  backend:
    service:
      name: pod
      port:
        name: container1-rpc
# ingress that points to pod-container1
host: wqeqweqwe
- path: /
  backend:
    service:
      name: pod-container1
      port:
        name: rpc

I set the nginx IngressClass as my default ingress class, so I omitted the ingressClassName; that is irrelevant to this issue.

@Revolution1
Author

Revolution1 commented Sep 6, 2024

To fix this bug, one only needs to modify the file ingress-nginx/internal/ingress/controller/util.go and change one character, like this:

// upstreamName returns a formatted upstream name based on namespace, service, and port
func upstreamName(namespace string, service *networking.IngressServiceBackend) string {
	if service != nil {
		if service.Port.Number > 0 {
			return fmt.Sprintf("%s-%s-%d", namespace, service.Name, service.Port.Number)
		}
		if service.Port.Name != "" {
			// return fmt.Sprintf("%s-%s-%s", namespace, service.Name, service.Port.Name)
			return fmt.Sprintf("%s-%s_%s", namespace, service.Name, service.Port.Name)
		}
	}
	return fmt.Sprintf("%s-INVALID", namespace)
}
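A hyphen can legally appear inside both Service names and port names, so the hyphen-joined triple is ambiguous, while an underscore can appear in neither (both are DNS-1123-style labels), which is why switching the separator makes the name unique. Below is a hedged regression-test sketch for the fix; the test name and placement are assumptions, not code from the repo:

package controller

import (
	"testing"

	networking "k8s.io/api/networking/v1"
)

// TestUpstreamNameNoCollision checks that the two backends from this
// issue no longer map to the same upstream name after the fix.
func TestUpstreamNameNoCollision(t *testing.T) {
	mk := func(name, port string) *networking.IngressServiceBackend {
		return &networking.IngressServiceBackend{
			Name: name,
			Port: networking.ServiceBackendPort{Name: port},
		}
	}
	a := upstreamName("default", mk("service", "pod-http"))
	b := upstreamName("default", mk("service-pod", "http"))
	if a == b {
		t.Fatalf("upstream names collide: %q == %q", a, b)
	}
}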

@longwuyuan
Contributor

I may be asking a dumb question here, but the reproduction manifest you posted names the same port number 80 repeatedly. So are you naming the same port number in your real life as well? Or are they different port numbers?

@Revolution1
Author

Revolution1 commented Sep 6, 2024

When the 2 containers are in the same Pod, of course they can only use different port numbers, but they serve the same purpose (like rpc, metrics, admin, etc.), so we name them by containername-purpose.

When separated: same port, same name.

@Revolution1
Author

Revolution1 commented Sep 6, 2024

I understand that my manifest, especially the naming, looks very confusing.
That's why I call this issue a "corner case".

But since these are legal Ingresses and a real-life case, I think it definitely needs to be fixed.

@longwuyuan
Contributor

If a user does not try to give 2 names to the same port number, then there is no bug?

And the design of your K8s gives 2 different names to the same port number?

And you want to use 2 different names for the same port number concurrently on the same cluster?

So that you can use the same port number on 2 different Pods, with a different unique port name for each Pod (when the actual port number behind those names is the same)?

@Revolution1
Author

Revolution1 commented Sep 6, 2024

You can't predict what users will input; any corner case that's not covered can be called a bug.

We don't usually design things that way; it can be considered a coincidence along the path of our infra's evolution, but it is indeed a real-life case.

The port number does not mean anything; we name ports by their usage. Thus naming a port number with different names is very common: 8080 can be HTTP, gRPC, metrics, or whatever.

@longwuyuan
Contributor

@Revolution1, thanks for the info. Readers can get a better idea with this information.

Basically, complexity exists in this project as well, and resources are in acutely short supply. With the detailed information, the action items become a little clearer, and prioritising becomes a little more possible.

I hope you can just confirm in a comment whether you are using 2 names for one single port number. If yes, then please confirm that, based on this requirement of yours, you feel this is a bug: that the function "upstreamName" returns identical upstream names for 2 services when a user configures one port number with 2 different port names.

This kind of summary helps readers and maintainers.
I hope others comment on this and you get feedback.

@Revolution1
Author

@longwuyuan OK.

So, in short:

When there are 2 or more DIFFERENT backends of Ingress(es)

  1. that are in the same namespace,
  2. that use service.name and service.port.name to specify the targets, and
  3. whose names and ports, formatted as "{service.name}-{service.port.name}", produce the same result,

then the upstream in the rendered nginx.conf and the backend object get the same name. Therefore nginx cannot tell the difference between these backends, and they all end up pointing to the same backend (randomly one of them), as the sketch below illustrates.

The trigger is the combined output of service name and port name, not just "using 2 names for one single port number".
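To illustrate the overwrite, a simplified sketch assuming upstreams are collected in a map keyed by the generated name (an illustration of the described behavior with made-up endpoint IPs, not the controller's actual bookkeeping):

package main

import "fmt"

func main() {
	upstreams := map[string][]string{} // upstream name -> endpoints
	// Backend for /service1: Service "service-pod", port "http" -> pod1
	upstreams[fmt.Sprintf("%s-%s-%s", "default", "service-pod", "http")] = []string{"10.244.0.1:8000"}
	// Backend for /service2: Service "service", port "pod-http" -> pod2.
	// Same key, so this write replaces the entry above instead of
	// adding a second upstream; one backend silently disappears.
	upstreams[fmt.Sprintf("%s-%s-%s", "default", "service", "pod-http")] = []string{"10.244.0.2:8000"}
	fmt.Println(len(upstreams)) // 1, not 2
}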

@Revolution1
Author

This is 10000% a bug, don't remove the tag.

@longwuyuan
Contributor

longwuyuan commented Sep 7, 2024

Thanks @Revolution1 for the comments. You can apply the label yourself if you prefer not to wait for the triaging to be completed (or if you think triaging is already complete).
You can also choose to ignore my questions and wait for a developer to engage here.

I am going to be very, very specific. Based on the criteria for the bug that you typed, see the question below.
I clearly do have 2 different backends, in the same namespace, in my test. See below;

pod/pod1   1/1     Running   0          5m39s
pod/pod2   1/1     Running   0          5m39s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/svc1         ClusterIP   10.110.143.170   <none>        80/TCP    5m39s
service/svc2         ClusterIP   10.96.241.33     <none>        80/TCP    5m39s

And I get the correct "{service.name}-{service.port.name}". See below;

% k -n ingress-nginx exec ingress-nginx-controller-7b7b559f8b-pdx9c -- grep svc1 /etc/nginx/nginx.conf
                        set $service_name   "svc1";
                        set $proxy_upstream_name "default-svc1-80";
                        set $service_name   "svc1";
                        set $proxy_upstream_name "default-svc1-80";
[~] 
% k -n ingress-nginx exec ingress-nginx-controller-7b7b559f8b-pdx9c -- grep svc2 /etc/nginx/nginx.conf
                        set $service_name   "svc2";
                        set $proxy_upstream_name "default-svc2-80";
                        set $service_name   "svc2";
                        set $proxy_upstream_name "default-svc2-80";
[~] 

How do you explain this ?

@Revolution1
Author

"default-svc1-80" != "default-svc2-80"
while
"service-pod-http" == "service-pod-http"

@longwuyuan
Contributor

longwuyuan commented Sep 7, 2024

It seems you did not answer my very clear and very specific question.
Your response above is this;

"default-svc1-80" != "default-svc2-80"
while
"service-pod-http" == "service-pod-http"

This is a comment on how the problematic config varied between your test and my test. That's obvious.
But it is not the answer to the very precise and very specific question that I asked.

Let me ask again.
Please look at your description of the criteria for a bug. I am putting a screenshot of your criteria below.
[screenshot: the bug criteria quoted from the earlier comment]

My test meets your criteria.
I have 2 different Services, in the same namespace, and they are referenced by service name + port name.
And yet there is no problem in my test.
I repeated my test (but this time I added a port name, as per your criteria). See below

  • My manifest using portnames
# pod1
apiVersion: v1
kind: Pod
metadata:
  name: "pod1"
  labels:
    app: "pod1"
spec:
  containers:
    - name: pod1
      image: "busybox:latest"
      command:
        - "sh"
        - "-c"
        - |
          while true; do
          echo -e "HTTP/1.1 200 OK\n\n Responsing From: $HOSTNAME" | nc -l -p 8000;
          done;
      resources:
        limits:
          cpu: 200m
          memory: 500Mi
        requests:
          cpu: 100m
          memory: 200Mi
      ports:
        - containerPort: 8000
---
# pod2
apiVersion: v1
kind: Pod
metadata:
  name: "pod2"
  labels:
    app: "pod2"
spec:
  containers:
    - name: pod1
      image: "busybox:latest"
      command:
        - "sh"
        - "-c"
        - |
          while true; do
          echo -e "HTTP/1.1 200 OK\n\n Responsing From: $HOSTNAME" | nc -l -p 8000;
          done;
      resources:
        limits:
          cpu: 200m
          memory: 500Mi
        requests:
          cpu: 100m
          memory: 200Mi
      ports:
        - containerPort: 8000
---
# service1
apiVersion: v1
kind: Service
metadata:
  name: "svc1"
spec:
  selector:
    app: "pod1"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
      name: "http"
---
# service2
apiVersion: v1
kind: Service
metadata:
  name: "svc2"
spec:
  selector:
    app: "pod2"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
      name: "http"
---
# ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress1
spec:
  ingressClassName: nginx
  rules:
    - host: "example.local"
      http:
        paths:
          - path: "/service1"
            pathType: Prefix
            backend:
              service:
                name: "svc1"
                port:
                  name: "http"
          - path: "/service2"
            pathType: Prefix
            backend:
              service:
                name: "svc2"
                port:
                  name: "http"
  • My curl test gives a response from the correct pod
[~/Downloads] 
% k get all,ing
NAME       READY   STATUS    RESTARTS   AGE
pod/pod1   1/1     Running   0          4m22s
pod/pod2   1/1     Running   0          4m22s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   6d16h
service/svc1         ClusterIP   10.105.218.126   <none>        80/TCP    4m22s
service/svc2         ClusterIP   10.98.59.191     <none>        80/TCP    4m22s

NAME                                 CLASS   HOSTS           ADDRESS        PORTS   AGE
ingress.networking.k8s.io/ingress1   nginx   example.local   192.168.49.2   80      4m22s
[~/Downloads] 
% k describe ingress ingress1 
Name:             ingress1
Labels:           <none>
Namespace:        default
Address:          192.168.49.2
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host           Path  Backends
  ----           ----  --------
  example.local  
                 /service1   svc1:http (10.244.0.40:8000)
                 /service2   svc2:http (10.244.0.39:8000)
Annotations:     <none>
Events:
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  Sync    4m30s (x2 over 4m38s)  nginx-ingress-controller  Scheduled for sync
[~/Downloads] 
% curl example.local/service1 --resolve example.local:80:`minikube ip`
 Responsing From: pod1
[~/Downloads] 
% curl example.local/service2 --resolve example.local:80:`minikube ip`
 Responsing From: pod2
[~/Downloads] 

  • The nginx.conf is correctly configured with "$servicename+portname"
% k -n ingress-nginx exec -ti ingress-nginx-controller-7b7b559f8b-pdx9c -- sh
/etc/nginx $ grep svc1 /etc/nginx/nginx.conf -n
693:                    set $service_name   "svc1";
737:                    set $proxy_upstream_name "default-svc1-http";
813:                    set $service_name   "svc1";
857:                    set $proxy_upstream_name "default-svc1-http";
/etc/nginx $ grep svc2 /etc/nginx/nginx.conf -n
453:                    set $service_name   "svc2";
497:                    set $proxy_upstream_name "default-svc2-http";
573:                    set $service_name   "svc2";
617:                    set $proxy_upstream_name "default-svc2-http";
/etc/nginx $ 

  • What is different is that you use 2 names for the same port number 80, while I use the same name "http" for port number 80.

  • How do you explain this ?

@Revolution1
Author

N (>1) different Ingress backends should generate N different upstreams in nginx.
But since their generated upstream names are duplicated, there was only one upstream left.

@Revolution1
Author

/kind bug

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. and removed needs-kind Indicates a PR lacks a `kind/foo` label and requires one. labels Sep 7, 2024
@longwuyuan
Contributor

N (>1) different Ingress backends should generate N different upstreams in nginx. But since their generated upstream names are duplicated, there was only one upstream left.

@Revolution1 My test generated 2 (N) different upstreams in nginx. There are no duplicates. The proof is right there in the details of my test ;

% k -n ingress-nginx exec -ti ingress-nginx-controller-7b7b559f8b-pdx9c -- sh
/etc/nginx $ grep svc1 /etc/nginx/nginx.conf -n
693:                    set $service_name   "svc1";
737:                    set $proxy_upstream_name "default-svc1-http";
813:                    set $service_name   "svc1";
857:                    set $proxy_upstream_name "default-svc1-http";
/etc/nginx $ grep svc2 /etc/nginx/nginx.conf -n
453:                    set $service_name   "svc2";
497:                    set $proxy_upstream_name "default-svc2-http";
573:                    set $service_name   "svc2";
617:                    set $proxy_upstream_name "default-svc2-http";
/etc/nginx $ 

So please help out and explain your claim that it's generating duplicates.

@longwuyuan
Contributor

@Revolution1 your test manifest runs busybox with netcat, which is not really a webserver. The relevance here is that the default backend-protocol used is HTTP.

So I recreated the test with webserver pods (nginx & tomcat), and my test shows no duplicates in the upstream name.
