
TCP-services and proxy-protocol without LoadBalancer on microk8s - client IP replaced with controller internal IP #9685

Closed
Azbesciak opened this issue Mar 3, 2023 · 31 comments
Labels
kind/support, lifecycle/frozen, needs-priority, needs-triage

Comments

@Azbesciak commented Mar 3, 2023

What happened:
I am exposing my services via the TCP services config map, not in the normal way ingress does on port 80 (although one service is on it too, with the default 80/443 remapped to 7998 and 7999) -- all mapping goes through it.
I need to retrieve my client IP.

I have the following config in the controller's config map

data:
  allow-snippet-annotations: "true"
  use-proxy-protocol: "true"
  forwarded-for-header: "proxy_protocol"
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
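
For reference, the TCP mappings themselves live in the separate ConfigMap referenced by --tcp-services-configmap. A minimal sketch of that ConfigMap based on the documented format (the grafana entry matches the upstream name visible in the generated config below; the PROXY flags on the second entry are illustrative - per the "exposing TCP/UDP services" docs, the two optional trailing fields enable proxy-protocol decoding on the listener and encoding towards the upstream):

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # format: "<external port>": "<namespace>/<service>:<port>[:PROXY[:PROXY]]"
  "3000": "observability/kube-prom-stack-grafana:80"
  # hypothetical entry: decode proxy protocol from the client and send it upstream
  "8000": "test/map-view:80:PROXY:PROXY"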

The controller's service is of type LoadBalancer and has externalTrafficPolicy: Local; in general everything works: I can access Grafana on 3000, the Kubernetes dashboard on 9000, and my services on their desired ports. That is all fine.

What does not work: even with the above config, I cannot retrieve my client IP.

I checked the nginx config inside the controller; find it attached (sorry for the extension, GitHub does not support .conf files):
[ingress-ngnix.txt]; the most interesting fragment is below.

stream {
       ...
        server {
                preread_by_lua_block {
                        ngx.var.proxy_upstream_name="tcp-observability-kube-prom-stack-grafana-80";
                }

                listen                  3000;

                listen                  [::]:3000;

                proxy_timeout           600s;
                proxy_next_upstream     on;
                proxy_next_upstream_timeout 600s;
                proxy_next_upstream_tries   3;

                proxy_pass              upstream_balancer;

        }
        ...
}
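
For comparison, if I read the docs right, enabling the optional PROXY fields for this entry in the tcp-services ConfigMap should make the generated stream server look roughly like the hand-written sketch below (not actual controller output):

stream {
        server {
                listen                  3000 proxy_protocol;
                listen                  [::]:3000 proxy_protocol;
                proxy_protocol          on;   # pass the client address on to the upstream
                proxy_timeout           600s;
                proxy_pass              upstream_balancer;
        }
}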

This is the block for Grafana - as you can see, the proxy config is incomplete compared to the corresponding section in the direct http.server.listen.

I see in http.server.listen that there is a redirect, but as mentioned, I still get an invalid client IP. Instead I get the controller's internal IP (10.1.38.72, for example).

[screenshot: app log showing the controller's internal IP as the client address]

What you expected to happen:
I want to see my client IP

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.4.0
  Build:         50be2bf95fd1ef480420e2aa1d6c5c7c138c95ea
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.10

-------------------------------------------------------------------------------

I also checked with v1.5.1, no difference

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.6", GitCommit:"ff2c119726cc1f8926fb0585c74b25921e866a28", GitTreeState:"clean", BuildDate:"2023-01-25T14:27:37Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.6", GitCommit:"ff2c119726cc1f8926fb0585c74b25921e866a28", GitTreeState:"clean", BuildDate:"2023-01-25T14:28:19Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: bare metal
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
  • Kernel (e.g. uname -a):
Linux maptest01 5.4.0-136-generic #153-Ubuntu SMP Thu Nov 24 15:56:58 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • microk8s, but ingress is deployed with a customized Helm chart - I needed the TCP services config. It is based on your static file.
      The chart is attached (ingress-helm.zip) and installed with:
helm upgrade --install ingress-nginx ./ingress --create-namespace -n ingress-nginx -f ingress-values.yaml
  • Basic cluster related info:
    • kubectl version
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.6", GitCommit:"ff2c119726cc1f8926fb0585c74b25921e866a28", GitTreeState:"clean", BuildDate:"2023-01-25T14:27:37Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.6", GitCommit:"ff2c119726cc1f8926fb0585c74b25921e866a28", GitTreeState:"clean", BuildDate:"2023-01-25T14:28:19Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
  • kubectl get nodes -o wide
maptest02   Ready    <none>   101d   v1.25.6   10.20.18.31   <none>        Ubuntu 20.04.5 LTS   5.4.0-136-generic   containerd://1.6.8
maptest01   Ready    <none>   101d   v1.25.6   10.20.18.30   <none>        Ubuntu 20.04.5 LTS   5.4.0-136-generic   containerd://1.6.8

(as you can see, ingress is pinned to maptest01)

How to reproduce this issue:

I suppose microk8s is not the problem here; you have the whole Helm chart attached. The service which expects the client IP is also an nginx (a web application serving static files), but as mentioned, it sees the controller's internal IP, which also changes when I restart the controller. (I also checked enable-real-ip; no difference, except that 0.0.0.0 was set in stream.server.)

Anything else we need to know:
I checked, for example, #6163 and #6136 (and the config map docs) - no help.

If it is not a bug, please excuse me and give me some hints on how to solve that. I cannot change the way I use these TCP services.

Update with request tracing

The relevant part of ipconfig on Windows (I am connected via VPN, but there were no problems with the docker-compose setup over this VPN, and nothing in our architecture has changed since then except that we replaced docker-compose with k8s):

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::2337:39b5:5263:9c87%16
   IPv4 Address. . . . . . . . . . . : 10.20.14.39
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :

The request comes from the web app, in Chrome.
Below is the generated curl for bash:

curl 'http://10.20.18.30:8000/requestUrl ' \
  -H 'Accept: application/json, text/plain, */*' \
  -H 'Accept-Language: pl,pl-PL;q=0.9,en-US;q=0.8,pl;q=0.7,en;q=0.6' \
  -H 'Authorization: Bearer <token>' \
  -H 'Connection: keep-alive' \
  -H 'Cookie: authMode=token; username=default; grafana_session=1c451e2c10688e9d0b201baeb21e3236' \
  -H 'DNT: 1' \
  -H 'Referer: http://10.20.18.30:8000/' \
  -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36' \
  --compressed \
  --insecure

I also used tcpflow to see how it looks on the server side (same node where the ingress-nginx-controller is placed); find it below:

010.020.018.001.50679-010.020.018.030.08000: GET /requestUrl HTTP/1.1
Host: 10.20.18.30:8000
Connection: keep-alive
Accept: application/json, text/plain, */*
DNT: 1
Accept-Language: pl,pl-PL;q=0.9,en-US;q=0.8,pl;q=0.7,en;q=0.6
Authorization: Bearer <token>
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36
Referer: http://10.20.18.30:8000/ <--- this is where the app is hosted
Accept-Encoding: gzip, deflate
Cookie: authMode=token; username=default; grafana_session=1c451e2c10688e9d0b201baeb21e3236

010.020.018.030.08000-010.020.018.001.50679: HTTP/1.1 200 OK
Server: nginx/1.23.3
Date: Sat, 04 Mar 2023 08:08:21 GMT
Content-Type: application/json
Content-Length: 378
Connection: keep-alive
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-Frame-Options: DENY
X-XSS-Protection: 0
Referrer-Policy: no-referrer

Response headers from Chrome - same as above, but copied via Chrome's 'copy response headers':

HTTP/1.1 200 OK
Server: nginx/1.23.3
Date: Sat, 04 Mar 2023 08:12:04 GMT
Content-Type: application/json
Content-Length: 358
Connection: keep-alive
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-Frame-Options: DENY
X-XSS-Protection: 0
Referrer-Policy: no-referrer

kubectl logs $ingresscontrollerpodname -n $ingresscontrollernamespace

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.5.1
  Build:         d003aae913cc25f375deb74f898c7f3c65c06f05
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6

-------------------------------------------------------------------------------

W0304 04:58:54.944487       6 client_config.go:617] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0304 04:58:54.945137       6 main.go:209] "Creating API client" host="https://10.152.183.1:443"
I0304 04:58:54.967009       6 main.go:253] "Running in Kubernetes cluster" major="1" minor="25" git="v1.25.6" state="clean" commit="ff2c119726cc1f8926fb0585c74b25921e866a28" platform="linux/amd64"
I0304 04:58:55.578641       6 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0304 04:58:55.642899       6 ssl.go:533] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0304 04:58:55.679116       6 nginx.go:260] "Starting NGINX Ingress controller"
I0304 04:58:55.998055       6 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"56863919-bf7a-42cc-a96d-eb295b586875", APIVersion:"v1", ResourceVersion:"17378176", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0304 04:58:55.998095       6 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"d1b82cd4-71c1-486c-99bb-79d7c6044c8a", APIVersion:"v1", ResourceVersion:"17378175", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0304 04:58:57.098244       6 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-controller-leader...
I0304 04:58:57.098360       6 nginx.go:303] "Starting NGINX process"
I0304 04:58:57.099259       6 nginx.go:323] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0304 04:58:57.099933       6 controller.go:168] "Configuration changes detected, backend reload required"
I0304 04:58:57.276498       6 status.go:84] "New leader elected" identity="ingress-nginx-controller-5978d5d5dc-jlnw8"
I0304 04:58:57.386114       6 controller.go:185] "Backend successfully reloaded"
I0304 04:58:57.386212       6 controller.go:196] "Initial sync, sleeping for 1 second"
I0304 04:58:57.386522       6 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-5978d5d5dc-w89z2", UID:"96503cc0-127d-43ff-8570-a95645785a35", APIVersion:"v1", ResourceVersion:"17378242", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0304 04:59:41.874351       6 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-controller-leader
I0304 04:59:41.874617       6 status.go:84] "New leader elected" identity="ingress-nginx-controller-5978d5d5dc-w89z2"
...

[10.20.18.1] [04/Mar/2023:08:05:27 +0000] TCP 200 0 0 60.000
[10.20.18.1] [04/Mar/2023:08:05:50 +0000] TCP 200 8953 93 0.180
[10.20.18.1] [04/Mar/2023:08:06:38 +0000] TCP 200 9135 6246 131.734
[10.20.18.1] [04/Mar/2023:08:09:26 +0000] TCP 200 2587 2205 89.781
[10.20.18.1] [04/Mar/2023:08:13:38 +0000] TCP 200 1468 1538 95.338

All requests above have the same IP. BTW, not every request shows up there; I executed a couple more and they were not appended. I tried to make use of that log, but I do not know what it really is. I also tried to inspect the access log (I even enabled it with enable-access-log-for-default-backend: "true" - no difference). And just to be sure, inside ingress-nginx-controller I invoked:

bash-5.1$ ls /var/log/nginx/access.log -lah
lrwxrwxrwx    1 www-data www-data      11 Nov  8 22:45 /var/log/nginx/access.log -> /dev/stdout

kubectl get svc,ing -A -o wide

As I mentioned in the comments, service/ingress-nginx-controller was tried both as LoadBalancer and as NodePort - no difference; it also has externalTrafficPolicy: Local.


NAMESPACE       NAME                                                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                                       AGE     SELECTOR
default         service/kubernetes                                           ClusterIP      10.152.183.1     <none>        443/TCP                                                                                                                       102d    <none>
kube-system     service/metrics-server                                       ClusterIP      10.152.183.127   <none>        443/TCP                                                                                                                       102d    k8s-app=metrics-server
kube-system     service/dashboard-metrics-scraper                            ClusterIP      10.152.183.17    <none>        8000/TCP                                                                                                                      102d    k8s-app=dashboard-metrics-scraper
kube-system     service/kubernetes-dashboard                                 NodePort       10.152.183.93    <none>        443:32741/TCP                                                                                                                 102d    k8s-app=kubernetes-dashboard
observability   service/kube-prom-stack-kube-prome-prometheus                ClusterIP      10.152.183.121   <none>        9090/TCP                                                                                                                      54d     app.kubernetes.io/name=prometheus,prometheus=kube-prom-stack-kube-prome-prometheus
kube-system     service/kube-prom-stack-kube-prome-kube-etcd                 ClusterIP      None             <none>        2381/TCP                                                                                                                      54d     component=etcd
kube-system     service/kube-prom-stack-kube-prome-kube-scheduler            ClusterIP      None             <none>        10259/TCP                                                                                                                     54d     <none>
kube-system     service/kube-prom-stack-kube-prome-kube-proxy                ClusterIP      None             <none>        10249/TCP                                                                                                                     54d     k8s-app=kube-proxy
kube-system     service/kube-prom-stack-kube-prome-kube-controller-manager   ClusterIP      None             <none>        10257/TCP                                                                                                                     54d     <none>
kube-system     service/kube-prom-stack-kube-prome-coredns                   ClusterIP      None             <none>        9153/TCP                                                                                                                      54d     k8s-app=kube-dns
observability   service/kube-prom-stack-grafana                              ClusterIP      10.152.183.102   <none>        80/TCP                                                                                                                        54d     app.kubernetes.io/instance=kube-prom-stack,app.kubernetes.io/name=grafana
observability   service/kube-prom-stack-kube-state-metrics                   ClusterIP      10.152.183.71    <none>        8080/TCP                                                                                                                      54d     app.kubernetes.io/instance=kube-prom-stack,app.kubernetes.io/name=kube-state-metrics
observability   service/kube-prom-stack-prometheus-node-exporter             ClusterIP      10.152.183.151   <none>        9100/TCP                                                                                                                      54d     app.kubernetes.io/instance=kube-prom-stack,app.kubernetes.io/name=prometheus-node-exporter
observability   service/kube-prom-stack-kube-prome-alertmanager              ClusterIP      10.152.183.190   <none>        9093/TCP                                                                                                                      54d     alertmanager=kube-prom-stack-kube-prome-alertmanager,app.kubernetes.io/name=alertmanager
observability   service/kube-prom-stack-kube-prome-operator                  ClusterIP      10.152.183.164   <none>        443/TCP                                                                                                                       54d     app=kube-prometheus-stack-operator,release=kube-prom-stack
kube-system     service/kube-prom-stack-kube-prome-kubelet                   ClusterIP      None             <none>        10250/TCP,10255/TCP,4194/TCP                                                                                                  54d     <none>
observability   service/alertmanager-operated                                ClusterIP      None             <none>        9093/TCP,9094/TCP,9094/UDP                                                                                                    54d     app.kubernetes.io/name=alertmanager
observability   service/prometheus-operated                                  ClusterIP      None             <none>        9090/TCP                                                                                                                      54d     app.kubernetes.io/name=prometheus
observability   service/loki-memberlist                                      ClusterIP      None             <none>        7946/TCP                                                                                                                      54d     app=loki,release=loki
observability   service/loki-headless                                        ClusterIP      None             <none>        3100/TCP                                                                                                                      54d     app=loki,release=loki
observability   service/loki                                                 ClusterIP      10.152.183.154   <none>        3100/TCP                                                                                                                      54d     app=loki,release=loki
observability   service/tempo                                                ClusterIP      10.152.183.203   <none>        3100/TCP,16687/TCP,16686/TCP,6831/UDP,6832/UDP,14268/TCP,14250/TCP,9411/TCP,55680/TCP,55681/TCP,4317/TCP,4318/TCP,55678/TCP   54d     app.kubernetes.io/instance=tempo,app.kubernetes.io/name=tempo
observability   service/nfs-server                                           ClusterIP      10.152.183.182   <none>        2049/TCP,20048/TCP,111/TCP                                                                                                    54d     io.kompose.service=nfs-server
kube-system     service/kube-dns                                             ClusterIP      10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP                                                                                                        21d     k8s-app=kube-dns
test            service/nfs-server                                           ClusterIP      10.152.183.183   <none>        2049/TCP,20048/TCP,111/TCP                                                                                                    47h     io.kompose.service=nfs-server
test            service/redis                                                ClusterIP      None             <none>        6379/TCP,16379/TCP                                                                                                            47h     io.kompose.service=redis
test            service/mongodb                                              ClusterIP      None             <none>        27017/TCP                                                                                                                     47h     io.kompose.service=mongodb
test            service/app-manager                                          ClusterIP      10.152.183.228   <none>        80/TCP,443/TCP                                                                                                                47h     io.kompose.service=app-manager
test            service/auth-service                                         ClusterIP      10.152.183.64    <none>        8082/TCP                                                                                                                      47h     io.kompose.service=auth-service
test            service/geocode-service                                      ClusterIP      10.152.183.213   <none>        8083/TCP                                                                                                                      47h     io.kompose.service=geocode-service
test            service/layers-service                                       ClusterIP      10.152.183.86    <none>        8084/TCP                                                                                                                      47h     io.kompose.service=layers-service
test            service/route-service                                        ClusterIP      10.152.183.44    <none>        8081/TCP,5005/TCP                                                                                                             47h     io.kompose.service=route-service
test            service/map-view                                             ClusterIP      10.152.183.80    <none>        80/TCP,443/TCP                                                                                                                47h     io.kompose.service=map-view
test            service/api-gateway                                          ClusterIP      10.152.183.73    <none>        8080/TCP                                                                                                                      47h     io.kompose.service=api-gateway
ingress-nginx   service/ingress-nginx-controller-admission                   ClusterIP      10.152.183.212   <none>        443/TCP                                                                                                                       3h31m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx   service/ingress-nginx-controller                             LoadBalancer   10.152.183.124   10.20.18.30   8000:30410/TCP,4430:30243/TCP,8888:31716/TCP,9000:32307/TCP,3000:32077/TCP                                                    3h31m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

kubectl describe pod $ingresscontrollerpodname -n $ingresscontrollernamespace

Name:             ingress-nginx-controller-5978d5d5dc-w89z2
Namespace:        ingress-nginx
Priority:         0
Service Account:  ingress-nginx
Node:             maptest01/10.20.18.30
Start Time:       Sat, 04 Mar 2023 04:58:51 +0000
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=ingress-nginx
                  app.kubernetes.io/name=ingress-nginx
                  pod-template-hash=5978d5d5dc
Annotations:      cni.projectcalico.org/containerID: 9349d4cc65ee85a09cfb0827c35f7e437e53a31764ddb52ccac485d9c66af9fa
                  cni.projectcalico.org/podIP: 10.1.38.119/32
                  cni.projectcalico.org/podIPs: 10.1.38.119/32
Status:           Running
IP:               10.1.38.119
IPs:
  IP:           10.1.38.119
Controlled By:  ReplicaSet/ingress-nginx-controller-5978d5d5dc
Containers:
  controller:
    Container ID:  containerd://e3ac588fbb766ab1852ace73c6ed60c7c2f4348be5998a64c6654ab714892fbc
    Image:         registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629
    Image ID:      registry.k8s.io/ingress-nginx/controller@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629
    Ports:         8000/TCP, 4430/TCP, 8888/TCP, 9000/TCP, 3000/TCP, 8443/TCP
    Host Ports:    8000/TCP, 4430/TCP, 8888/TCP, 9000/TCP, 3000/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      --validating-webhook=:8443
      --http-port=7998
      --https-port=7999
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --watch-ingress-without-class=false
      --publish-status-address=localhost
    State:          Running
      Started:      Sat, 04 Mar 2023 04:58:54 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-5978d5d5dc-w89z2 (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7f5p8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-7f5p8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

kubectl describe svc $ingresscontrollersvcname -n $ingresscontrollernamespace

Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=0.1.0
                          helm.sh/chart=ingress-0.1.0
Annotations:              meta.helm.sh/release-name: ingress-nginx
                          meta.helm.sh/release-namespace: ingress-nginx
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.152.183.124
IPs:                      10.152.183.124
External IPs:             10.20.18.30
Port:                     http-map-p  8000/TCP
TargetPort:               8000/TCP
NodePort:                 http-map-p  30410/TCP
Endpoints:                10.1.38.119:8000
Port:                     https-map-p  4430/TCP
TargetPort:               4430/TCP
NodePort:                 https-map-p  30243/TCP
Endpoints:                10.1.38.119:4430
Port:                     app-mng-map-p  8888/TCP
TargetPort:               8888/TCP
NodePort:                 app-mng-map-p  31716/TCP
Endpoints:                10.1.38.119:8888
Port:                     dashboard  9000/TCP
TargetPort:               9000/TCP
NodePort:                 dashboard  32307/TCP
Endpoints:                10.1.38.119:9000
Port:                     grafana  3000/TCP
TargetPort:               3000/TCP
NodePort:                 grafana  32077/TCP
Endpoints:                10.1.38.119:3000
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     30751
Events:                   <none>

kubectl -n $appnamespace describe svc $svcname

Name:              map-view
Namespace:         test
Labels:            app.kubernetes.io/instance=map
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=map-deployment
                   app.kubernetes.io/version=0.1.0
                   helm.sh/chart=map-deployment-0.1.0
                   io.kompose.service=map-view
Annotations:       meta.helm.sh/release-name: map
                   meta.helm.sh/release-namespace: test
Selector:          io.kompose.service=map-view
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.152.183.80
IPs:               10.152.183.80
Port:              80  80/TCP
TargetPort:        80/TCP
Endpoints:         10.1.38.106:80
Port:              443  443/TCP
TargetPort:        443/TCP
Endpoints:         10.1.38.106:443
Session Affinity:  None
Events:            <none>

kubectl -n $appnamespace describe ing $ingressname

...I have no Ingress objects in the app namespace because, as I mentioned, I am using TCP services, which redirect directly to the given service.
In general, kubectl describe ing -A gives `No resources found` (my app is working fine on 8000, the others on 3000, 9000, etc.)

kubectl -n $appnamespace logs $apppodname

Only the relevant ones; all IP addresses are the same. The nginx log format is:

$remote_addr - $remote_user [$time_local] "$request" $status $bytes_sent "$http_referer" "$http_user_agent"
10.1.38.119 - - [04/Mar/2023:08:22:10 +0000] "GET /requestUrl  HTTP/1.1" 200 811 "http://10.20.18.30:8000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"
10.1.38.119 - - [04/Mar/2023:08:22:10 +0000] "GET /requestUrl  HTTP/1.1" 200 829 "http://10.20.18.30:8000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"
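
Side note: if proxy protocol were passed all the way through, this app-side nginx would itself have to accept it. A sketch of what that would take (the trusted CIDR is an assumption based on the pod IPs above):

server {
    listen 80 proxy_protocol;         # accept the PROXY protocol header from the controller
    set_real_ip_from 10.1.38.0/24;    # trust the controller pod network (assumed CIDR)
    real_ip_header proxy_protocol;    # populate $remote_addr from the PROXY header
    # ... rest of the app's server config
}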

Just for my own verification, I created a simple nginx docker-compose setup:

version: '3.4'
services:
  nginx-test:
    image: nginx:latest
    restart: always
    ports:
      - 8800:80
    volumes:
      - ./conf.d:/etc/nginx/conf.d

with a config which contains only one path (the log pattern is the same as above, so I get $remote_addr):

server {
    listen 80;
    # the "access" log_format is the same pattern as shown earlier
    location /check {
        access_log /dev/stdout access;
        add_header 'Content-Type' 'application/json';
        return 200 '{"status":"OK"}';
    }
}

It returns the IP 10.20.18.1, so the same as seen in ingress-nginx-controller.
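
The check was exercised with a request along these lines (node IP from above, port from the compose file):

curl -s http://10.20.18.30:8800/check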

@Azbesciak added the kind/bug label Mar 3, 2023
@k8s-ci-robot added the needs-triage label Mar 3, 2023
@k8s-ci-robot (Contributor)

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@longwuyuan (Contributor)

/remove-kind bug

show the output of

  • kubectl get svc,ing -A -o wide
  • kubectl describe svc -n $ingresscontrollernamespace
  • Don't enable proxy-protocol and also set externalTrafficPolicy to Local
  • Remove the proxy-protocol and other config
  • Only change externalTrafficPolicy to Local (see the sketch after this list)
  • Are you using metallb?
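
For example, assuming the default service name and namespace, that single change can be applied with:

kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'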

@k8s-ci-robot added the needs-kind label and removed the kind/bug label Mar 3, 2023
@Azbesciak (Author) commented Mar 3, 2023

@longwuyuan

kubectl get svc,ing -A -o wide

NAMESPACE       NAME                                                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                                       AGE     SELECTOR
default         service/kubernetes                                           ClusterIP      10.152.183.1     <none>        443/TCP                                                                                                                       101d    <none>
kube-system     service/metrics-server                                       ClusterIP      10.152.183.127   <none>        443/TCP                                                                                                                       101d    k8s-app=metrics-server
kube-system     service/dashboard-metrics-scraper                            ClusterIP      10.152.183.17    <none>        8000/TCP                                                                                                                      101d    k8s-app=dashboard-metrics-scraper
kube-system     service/kubernetes-dashboard                                 NodePort       10.152.183.93    <none>        443:32741/TCP                                                                                                                 101d    k8s-app=kubernetes-dashboard
observability   service/kube-prom-stack-kube-prome-prometheus                ClusterIP      10.152.183.121   <none>        9090/TCP                                                                                                                      53d     app.kubernetes.io/name=prometheus,prometheus=kube-prom-stack-kube-prome-prometheus
kube-system     service/kube-prom-stack-kube-prome-kube-etcd                 ClusterIP      None             <none>        2381/TCP                                                                                                                      53d     component=etcd
kube-system     service/kube-prom-stack-kube-prome-kube-scheduler            ClusterIP      None             <none>        10259/TCP                                                                                                                     53d     <none>
kube-system     service/kube-prom-stack-kube-prome-kube-proxy                ClusterIP      None             <none>        10249/TCP                                                                                                                     53d     k8s-app=kube-proxy
kube-system     service/kube-prom-stack-kube-prome-kube-controller-manager   ClusterIP      None             <none>        10257/TCP                                                                                                                     53d     <none>
kube-system     service/kube-prom-stack-kube-prome-coredns                   ClusterIP      None             <none>        9153/TCP                                                                                                                      53d     k8s-app=kube-dns
observability   service/kube-prom-stack-grafana                              ClusterIP      10.152.183.102   <none>        80/TCP                                                                                                                        53d     app.kubernetes.io/instance=kube-prom-stack,app.kubernetes.io/name=grafana
observability   service/kube-prom-stack-kube-state-metrics                   ClusterIP      10.152.183.71    <none>        8080/TCP                                                                                                                      53d     app.kubernetes.io/instance=kube-prom-stack,app.kubernetes.io/name=kube-state-metrics
observability   service/kube-prom-stack-prometheus-node-exporter             ClusterIP      10.152.183.151   <none>        9100/TCP                                                                                                                      53d     app.kubernetes.io/instance=kube-prom-stack,app.kubernetes.io/name=prometheus-node-exporter
observability   service/kube-prom-stack-kube-prome-alertmanager              ClusterIP      10.152.183.190   <none>        9093/TCP                                                                                                                      53d     alertmanager=kube-prom-stack-kube-prome-alertmanager,app.kubernetes.io/name=alertmanager
observability   service/kube-prom-stack-kube-prome-operator                  ClusterIP      10.152.183.164   <none>        443/TCP                                                                                                                       53d     app=kube-prometheus-stack-operator,release=kube-prom-stack
kube-system     service/kube-prom-stack-kube-prome-kubelet                   ClusterIP      None             <none>        10250/TCP,10255/TCP,4194/TCP                                                                                                  53d     <none>
observability   service/alertmanager-operated                                ClusterIP      None             <none>        9093/TCP,9094/TCP,9094/UDP                                                                                                    53d     app.kubernetes.io/name=alertmanager
observability   service/prometheus-operated                                  ClusterIP      None             <none>        9090/TCP                                                                                                                      53d     app.kubernetes.io/name=prometheus
observability   service/loki-memberlist                                      ClusterIP      None             <none>        7946/TCP                                                                                                                      53d     app=loki,release=loki
observability   service/loki-headless                                        ClusterIP      None             <none>        3100/TCP                                                                                                                      53d     app=loki,release=loki
observability   service/loki                                                 ClusterIP      10.152.183.154   <none>        3100/TCP                                                                                                                      53d     app=loki,release=loki
observability   service/tempo                                                ClusterIP      10.152.183.203   <none>        3100/TCP,16687/TCP,16686/TCP,6831/UDP,6832/UDP,14268/TCP,14250/TCP,9411/TCP,55680/TCP,55681/TCP,4317/TCP,4318/TCP,55678/TCP   53d     app.kubernetes.io/instance=tempo,app.kubernetes.io/name=tempo
observability   service/nfs-server                                           ClusterIP      10.152.183.182   <none>        2049/TCP,20048/TCP,111/TCP                                                                                                    53d     io.kompose.service=nfs-server
kube-system     service/kube-dns                                             ClusterIP      10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP                                                                                                        20d     k8s-app=kube-dns
test            service/nfs-server                                           ClusterIP      10.152.183.183   <none>        2049/TCP,20048/TCP,111/TCP                                                                                                    33h     io.kompose.service=nfs-server
test            service/redis                                                ClusterIP      None             <none>        6379/TCP,16379/TCP                                                                                                            33h     io.kompose.service=redis
test            service/mongodb                                              ClusterIP      None             <none>        27017/TCP                                                                                                                     33h     io.kompose.service=mongodb
test            service/app-manager                                          ClusterIP      10.152.183.228   <none>        80/TCP,443/TCP                                                                                                                33h     io.kompose.service=app-manager
test            service/auth-service                                         ClusterIP      10.152.183.64    <none>        8082/TCP                                                                                                                      33h     io.kompose.service=auth-service
test            service/geocode-service                                      ClusterIP      10.152.183.213   <none>        8083/TCP                                                                                                                      33h     io.kompose.service=geocode-service
test            service/layers-service                                       ClusterIP      10.152.183.86    <none>        8084/TCP                                                                                                                      33h     io.kompose.service=layers-service
test            service/route-service                                        ClusterIP      10.152.183.44    <none>        8081/TCP,5005/TCP                                                                                                             33h     io.kompose.service=route-service
test            service/map-view                                             ClusterIP      10.152.183.80    <none>        80/TCP,443/TCP                                                                                                                33h     io.kompose.service=map-view
test            service/api-gateway                                          ClusterIP      10.152.183.73    <none>        8080/TCP                                                                                                                      33h     io.kompose.service=api-gateway
ingress-nginx   service/ingress-nginx-controller-admission                   ClusterIP      10.152.183.236   <none>        443/TCP                                                                                                                       8m37s   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx   service/ingress-nginx-controller                             LoadBalancer   10.152.183.36    <pending>     8000:32043/TCP,4430:31224/TCP,8888:31063/TCP,9000:32164/TCP,3000:31801/TCP                                                    8m37s   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

kubectl describe svc -n $ingresscontrollernamespace

Name:              ingress-nginx-controller-admission
Namespace:         ingress-nginx
Labels:            app.kubernetes.io/component=controller
                   app.kubernetes.io/instance=ingress-nginx
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=ingress-nginx
                   app.kubernetes.io/part-of=ingress-nginx
                   app.kubernetes.io/version=0.1.0
                   helm.sh/chart=ingress-0.1.0
Annotations:       meta.helm.sh/release-name: ingress-nginx
                   meta.helm.sh/release-namespace: ingress-nginx
Selector:          app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.152.183.236
IPs:               10.152.183.236
Port:              https-webhook  443/TCP
TargetPort:        webhook/TCP
Endpoints:         10.1.38.88:8443
Session Affinity:  None
Events:            <none>


Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=0.1.0
                          helm.sh/chart=ingress-0.1.0
Annotations:              meta.helm.sh/release-name: ingress-nginx
                          meta.helm.sh/release-namespace: ingress-nginx
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.152.183.36
IPs:                      10.152.183.36
Port:                     http-map-p  8000/TCP
TargetPort:               8000/TCP
NodePort:                 http-map-p  32043/TCP
Endpoints:                10.1.38.88:8000
Port:                     https-map-p  4430/TCP
TargetPort:               4430/TCP
NodePort:                 https-map-p  31224/TCP
Endpoints:                10.1.38.88:4430
Port:                     app-mng-map-p  8888/TCP
TargetPort:               8888/TCP
NodePort:                 app-mng-map-p  31063/TCP
Endpoints:                10.1.38.88:8888
Port:                     dashboard  9000/TCP
TargetPort:               9000/TCP
NodePort:                 dashboard  32164/TCP
Endpoints:                10.1.38.88:9000
Port:                     grafana  3000/TCP
TargetPort:               3000/TCP
NodePort:                 grafana  31801/TCP
Endpoints:                10.1.38.88:3000
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     30441
Events:                   <none>

Regarding disabling proxy-protocol etc.: I did that now; I even removed the whole deployment and deployed it again. It did not work. I started with that, and I tried every on/off permutation of the options I described. And I had externalTrafficPolicy: Local and type: LoadBalancer, as you may have seen in the attached Helm chart.

No, I do not have metallb. microk8s status below - as I mentioned, ingress is 100% based on the attached Helm chart.

addons:
  enabled:
    dashboard            # (core) The Kubernetes dashboard
    dns                  # (core) CoreDNS
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    storage              # (core) Alias to hostpath-storage add-on, deprecated
  disabled:
    cert-manager         # (core) Cloud native certificate management
    community            # (core) The community addons repository
    gpu                  # (core) Automatic enablement of Nvidia CUDA
    host-access          # (core) Allow Pods connecting to Host services smoothly
    ingress              # (core) Ingress controller for external access
    kube-ovn             # (core) An advanced network fabric for Kubernetes
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000

@Azbesciak (Author) commented Mar 3, 2023

And as I mentioned, that IP comes from the ingress controller - please see the image I attached; the IP is from inside it. The log on the left is from my app.

I also changed it back to NodePort

ingress-nginx   service/ingress-nginx-controller                             NodePort    10.152.183.96    <none>        8000:30982/TCP,4430:32359/TCP,8888:30386/TCP,9000:30640/TCP,3000:30137/TCP                                                    25s    app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

No difference

@Azbesciak (Author)

BTW, this is the nginx logging pattern in the mentioned service:

log_format access '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $bytes_sent '
                       '"$http_referer" "$http_user_agent"';

@longwuyuan (Contributor)

The ingress-controller status is pending, so none of your curl/test data is valid. Please fix that and then test.

service/ingress-nginx-controller                             LoadBalancer   10.152.183.36    <pending>  

@Azbesciak (Author) commented Mar 4, 2023

@longwuyuan
I did it just after I responded, see #9685 (comment)

I changed to NodePort, no difference (BUT IT IS NOT PENDING, it just does not give me the client IP). With LoadBalancer it will never be ready.

And according to https://stackoverflow.com/a/44112285/9658307, that is all I can do: I can assign the IP myself - which I also did:

ingress-nginx   service/ingress-nginx-controller                             LoadBalancer   10.152.183.20    10.20.18.30   8000:31741/TCP,4430:31558/TCP,8888:30962/TCP,9000:32123/TCP,3000:31446/TCP                                                    90s    app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

And it also did not change anything (I undeployed the whole chart, waited some time, and deployed it again, so there was nothing like a grace period in effect - the service was reachable from outside; the IP was still invalid).

I have literally this config:

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    {{- include "ingress.labels" . | nindent 4 }}
  name: ingress-nginx-controller
  namespace: {{ .Release.Namespace }}
spec:
  externalTrafficPolicy: Local
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  # I manage 80/443 via tcp services, default 80/443 is overridden in controller to 7998/7999, that service I mention operates on 8000
    {{- range .Values.endpoints }}
    - port: {{ .port }}
      targetPort: {{ .port }}
      name: {{ .name }}
      protocol: {{ .protocol }}
    {{- end }}
  selector:
    app.kubernetes.io/component: controller
    {{- include "ingress.selectorLabels" . | nindent 4 }}
  type: LoadBalancer
  externalIPs:
  - 10.20.18.30
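
For completeness, the endpoints values feeding the range loop above look roughly like this (reconstructed from the port names in the service output, so treat it as a sketch):

endpoints:
  - name: http-map-p
    port: 8000
    protocol: TCP
  - name: https-map-p
    port: 4430
    protocol: TCP
  - name: app-mng-map-p
    port: 8888
    protocol: TCP
  - name: dashboard
    port: 9000
    protocol: TCP
  - name: grafana
    port: 3000
    protocol: TCP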

and the config map for it (deployed runtime version):

kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  uid: f37f2643-0d6f-4248-b66e-0567f222aa31
  resourceVersion: '17375231'
  creationTimestamp: '2023-03-04T04:34:26Z'
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 0.1.0
    helm.sh/chart: ingress-0.1.0
  annotations:
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
  managedFields:
    - manager: helm
      operation: Update
      apiVersion: v1
      time: '2023-03-04T04:34:26Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:data:
          .: {}
          f:allow-snippet-annotations: {}
        f:metadata:
          f:annotations:
            .: {}
            f:meta.helm.sh/release-name: {}
            f:meta.helm.sh/release-namespace: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/component: {}
            f:app.kubernetes.io/instance: {}
            f:app.kubernetes.io/managed-by: {}
            f:app.kubernetes.io/name: {}
            f:app.kubernetes.io/part-of: {}
            f:app.kubernetes.io/version: {}
            f:helm.sh/chart: {}
data:
  allow-snippet-annotations: 'true'

As I mentioned, I just have a bare-metal server where I installed Kubernetes and deployed ingress-nginx.

@longwuyuan (Contributor)

@Azbesciak I think you have provided information as per your own thought process and convenience.
What is actually needed here, though, is information relevant to the issue you are reporting.
So please see the new issue template and answer those questions in your original post.

The information that is related to client-ip is as follows to begin with ;

  • Client IP address as visible in the ip a command on Linux, at the client's command prompt
  • Your complete curl command with -v and its response, from the command prompt of the client where you showed the IP address
  • kubectl logs $ingresscontrollerpodname -n $ingresscontrollernamespace
  • kubectl get svc,ing -A -o wide
  • kubectl describe pod $ingresscontrollerpodname -n $ingresscontrollernamespace
  • kubectl describe svc $ingresscontrollersvcname -n $ingresscontrollernamespace
  • kubectl -n $appnamespace describe svc $svcname
  • kubectl -n $appnamespace describe ing $ingressname
  • kubectl -n $appnamespace logs $apppodname

You can delete the other information from this issue as it has no relevance. Also, you need to factor in that layer 7 inspection of the headers in the client's request (which carry the client IP address) will not happen for a TCP/UDP port that has been exposed in the service of type LoadBalancer via this project's ingress-nginx controller config.

@Azbesciak (Author)

@longwuyuan

I added a new section in the initial ticket (Update with request tracing).
I added - not replaced - because IMO the previous info might be usable; please read it all carefully.

BTW, the initial info matched what the template expects; I only removed the last section because the whole Helm deployment is based on yours (as mentioned). I also provided the whole config and the Helm chart itself. And it all looked fine.

I also want to note that the app as a whole was migrated from docker-compose, and it has the same architecture except that k8s now sits in between. With docker-compose everything worked fine - I was able to see client IPs (I mean that we did not change anything on the outside).

Also you mentioned

Also, you need to factor in that layer 7 inspection of the headers in the client's request, which carry the client IP address, will not happen for a TCP/UDP port exposed via this project's ingress-nginx controller config on a Service of type LoadBalancer.

Can you elaborate on that? Please note that I also changed the service type to NodePort (it is not in the logs above, but I did, as mentioned) and it made no difference.

Thank you for your time and support.
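For reference, externalTrafficPolicy: Local preserves the client source IP only as far as the controller pod; past the controller, the IP has to be carried by proxy-protocol (or by headers on layer 7 routes). A sketch of the relevant Service fields, assuming the names from the stock baremetal manifest and a hypothetical nodePort:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local   # avoid cross-node SNAT of the source IP
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: map-view
      port: 8000
      targetPort: 8000
      nodePort: 30800            # hypothetical; NodePort range is 30000-32767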

@longwuyuan
Contributor

@Azbesciak after reading all the content here, my opinion is that you keep providing information and updates from your own point of view, while paying less attention to the details of the requests for information and to the info relevant for triaging this issue. You may sincerely be trying to help, but somehow I am not able to make the progress I wish I could. I am not an expert, but I can surely help triage this.

I have experienced some odd error messages while testing this on release v1.6.4 of the controller. So I was hoping to get on the same page with you, but it's not happening. Here are some significant observations:

  • I don't see any ingress object in the output of kubectl get svc,ing -A -o wide, so why are you sending a curl request?

@longwuyuan
Contributor

And I have just now tested the controller on minikube, and I can get the real client IP address in the logs of the controller, so there is no problem to be solved in the controller code related to getting the real client IP address.

@Azbesciak
Author

Azbesciak commented Mar 4, 2023

@longwuyuan
image
I added it.
I provided every command you expected.
With comments.

And I have just now tested the controller on minikube, and I can get the real client IP address in the logs of the controller, so there is no problem to be solved in the controller code related to getting the real client IP address.

So why does the controller receive the client IP, while on my app side I see the controller's internal IP?

@Azbesciak
Author

Azbesciak commented Mar 4, 2023

I don't see any ingress object in the output of kubectl get svc,ing -A -o wide, so why are you sending a curl request?

image
image

And I also told you that my app is working - I get my expected message from my production app, not an example "ok" or something.

@Azbesciak
Author

Azbesciak commented Mar 4, 2023

@longwuyuan
Ok, let us approach this from the other side.
Why do you think there is no issue on the controller side? I get the exact controller IP in my app.
Look, below is ifconfig executed inside the ingress-nginx-controller:

image

Now, look at my app logs:

image

No surprise: when I invoke curl 0.0.0.0:8000 from inside the ingress-nginx-controller, it also shows up in the app log under the same IP.

@longwuyuan
Contributor

What is the real, complete URL you are using to access your app?

@Azbesciak
Author

Azbesciak commented Mar 4, 2023

http://10.20.18.30:8000 - this is our test server, but the production one has the same issue (on prod it is on 80/443).
The whole app is behind a VPN.
The API is behind the /api/1 path. It does not matter; on / and any other path, index.html is returned.

And all traffic on a given port is redirected to the app, so it does not matter whether it is http://10.20.18.30:8000 or http://10.20.18.30:8000/my/favourite/path or something.

@longwuyuan
Contributor

Where is the IP address 10.20.18.30?

@Azbesciak
Author

Yes, we are in a private network. But this is a separate server, not my laptop or something.

@longwuyuan
Contributor

longwuyuan commented Mar 4, 2023

Well, I hope someone can solve your issue. I am not getting an answer to a simple question like "where is the IP address". I really would like to understand where the IP address is, because you mentioned you have the controller listening on a nodePort, so I expected you would need the node's IP address plus the nodePort in your URL.

On a completely different note, I think you should get on the K8S Slack and discuss it there, as there are more people there. nodePort is never a good choice for real use.

The interface on which you terminate your connection needs to be capable of working with a layer 7 process that can understand proxy-protocol and forward the headers to the upstream. In cloud environments like AWS, the service provider offers configurable parameters to enable proxy-protocol attributes, like preserving the real client IP address while forwarding traffic to the upstream.
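To illustrate what "understand proxy-protocol" means for the process terminating the connection: if the backend behind the TCP forward is itself nginx, it has to be told to expect the PROXY preamble and to restore the client address from it via the realip module. A sketch, with a hypothetical trusted CIDR:

server {
    listen 80 proxy_protocol;         # expect a PROXY protocol preamble

    # trust preambles coming from the controller's pod network
    # (10.1.0.0/16 is hypothetical; use your cluster's CIDR)
    set_real_ip_from 10.1.0.0/16;
    real_ip_header  proxy_protocol;   # restore $remote_addr from the preamble

    location / {
        return 200 "client: $remote_addr\n";
    }
}

Without the proxy_protocol flag on the listen directive, the preamble arrives as garbage bytes at the start of the connection.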

@Azbesciak
Author

Sorry, I did not get you - I thought you were asking about geolocation.

That IP address belongs to the main cluster machine; it is entirely hosted on our servers (no AWS, Azure or anything like that).
This machine is also the entry point to the cluster; there is no load balancer or other endpoint on top of it.
Our cluster contains 2 machines; the controller is hosted on this one, and so is the app.

@Azbesciak
Author

configurable parameters to enable proxy-protocol attributes, like preserving the real client IP address while forwarding traffic to the upstream.

Ok, but... since

  • the controller receives the valid client IP
  • my own mock nginx outside kubernetes also gets the valid client IP
  • but my app, behind the ingress-nginx-controller, does not...

so my natural understanding is that the controller does not pass it on.

And when I look at the internal nginx config inside that controller:

server {
                preread_by_lua_block {
                        ngx.var.proxy_upstream_name="tcp-test-map-view-80";
                }

                listen                  8000;

                listen                  [::]:8000;

                proxy_timeout           600s;
                proxy_next_upstream     on;
                proxy_next_upstream_timeout 600s;
                proxy_next_upstream_tries   3;

                proxy_pass              upstream_balancer;

        }

I suppose the problem is there. I know the headers are set up in the main http section, but maybe something does not work there?
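One thing worth checking against the project's "Exposing TCP and UDP services" documentation: as far as I can tell, the tcp-services ConfigMap value accepts optional PROXY flags, <namespace>/<service>:<port>[:PROXY][:PROXY], where the first flag decodes proxy-protocol on the client side and the second emits it towards the upstream. If that reading is right, enabling only the upstream side for this port would look like the sketch below (service name inferred from the tcp-test-map-view-80 upstream above); the backend then has to accept the preamble itself:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # raw TCP from clients in, PROXY protocol preamble out to the upstream
  "8000": "test/map-view:80::PROXY"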

@longwuyuan
Copy link
Contributor

It seems to me that you are not using the documented and supported install https://kubernetes.github.io/ingress-nginx/deploy/#microk8s

I don't see data here that points to a problem in the controller.
I do see data here that something you want to achieve is not happening.
Since you are following neither this project's documented install nor the microk8s documentation, I am not sure what the next steps are.
I hope there are other users out there who are doing the same thing you are doing and have already solved the problem you are trying to solve. I hope they help you.

@Azbesciak
Author

Azbesciak commented Mar 4, 2023

@longwuyuan
Thank you for your help.
Yes, I do not have the default microk8s ingress installation - but why does that make a difference here...? From my point of view there is no difference. Ok - I do not have any Ingress objects, but an Ingress would only let me route traffic to port 80, whereas all my apps are on other ports - so it would be useless. All other configuration is as described there.

And btw, the installation I have comes from your repo - as also mentioned.
https://github.com/kubernetes/ingress-nginx/blob/main/deploy/static/provider/baremetal/deploy.yaml
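As an aside, with that static manifest the tcp-services ConfigMap is only consulted if the controller is started with the matching flag. A sketch of the relevant Deployment args, using the conventional names (worth verifying against the manifest actually in use):

containers:
  - name: controller
    args:
      - /nginx-ingress-controller
      - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services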

@Azbesciak
Author

@longwuyuan
BTW, I found the same issue, from 8 April 2021:
#7022

You were also included there

@longwuyuan
Contributor

longwuyuan commented Mar 4, 2023 via email

@Azbesciak
Author

I added printing of the X-Forwarded-For and X-Real-Ip headers in my nginx app, like this:

[upstream_http_x_forwarded_for=$upstream_http_x_forwarded_for upstream_http_x_real_ip=$upstream_http_x_real_ip http_x_real_ip=$http_x_real_ip http_x_forwarded_for=$http_x_forwarded_for]

I know that only the http_ variables make sense there, but I checked all of them to be sure.
All four are empty:

[upstream_http_x_forwarded_for=- upstream_http_x_real_ip=- http_x_real_ip=- http_x_forwarded_for=-] 10.1.38.69
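That result is consistent with the port being forwarded in the stream context: no HTTP header is ever injected, so the $http_* variables stay empty. For reference, a sketch of how such debug variables would be wired into the app's nginx config (the log format name is hypothetical; log_format belongs in the http block):

http {
    log_format clientip '$remote_addr '
                        '[http_x_real_ip=$http_x_real_ip '
                        'http_x_forwarded_for=$http_x_forwarded_for]';

    server {
        listen 80;
        access_log /var/log/nginx/access.log clientip;

        location / {
            return 200 "ok\n";
        }
    }
}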

@longwuyuan
Contributor

/retitle proxy-protocol without LoadBalancer on microk8s
/kind support

@k8s-ci-robot k8s-ci-robot changed the title client IP not reachable on custom route declared by tcp-services proxy-protocol without LoadBalancer on microk8s Mar 4, 2023
@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. and removed needs-kind Indicates a PR lacks a `kind/foo` label and requires one. labels Mar 4, 2023
@Azbesciak Azbesciak changed the title proxy-protocol without LoadBalancer on microk8s TCP-services and proxy-protocol without LoadBalancer on microk8s - client IP is lost Mar 10, 2023
@Azbesciak Azbesciak changed the title TCP-services and proxy-protocol without LoadBalancer on microk8s - client IP is lost TCP-services and proxy-protocol without LoadBalancer on microk8s - client IP replaced with controller internal IP Mar 10, 2023
@github-actions

This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach #ingress-nginx-dev on Kubernetes Slack.

@github-actions github-actions bot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Apr 10, 2023
@maximemoreillon

I managed to fix this issue in my Microk8s v1.30 Kubernetes cluster, where the NGINX ingress controller is installed using the Microk8s ingress addon.

To do so, I edited the nginx-load-balancer-microk8s-conf configmap in the ingress namespace and added the following:

data:
  enable-real-ip: "true"
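For reference, the same edit can be applied in one command; and if I read the ingress-nginx ConfigMap docs correctly, enable-real-ip activates the realip module globally, deriving the client address from the proxy-protocol preamble when use-proxy-protocol is also set:

# one-line equivalent of the edit described above
kubectl -n ingress patch configmap nginx-load-balancer-microk8s-conf \
  --type merge -p '{"data":{"enable-real-ip":"true"}}'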

@longwuyuan
Contributor

The project is deprecating TCP/UDP forwarding (#11666), so there is no action item to be tracked in this issue. Hence closing the issue.

/close

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.

In response to this:

The project is deprecating TCP/UDP forwarding (#11666), so there is no action item to be tracked in this issue. Hence closing the issue.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
