Modsecurity log doesn't use $request_id from nginx when modsec blocks (403) #11288

Closed
husa570 opened this issue Apr 22, 2024 · 16 comments
Labels: kind/support, needs-priority, needs-triage, triage/needs-information

Comments

husa570 commented Apr 22, 2024

The modsecurity log doesn't have the same transaction_id (unique_id in the log) as the nginx log (request_id).
I have renamed all "sensitive internal" info below.

I have tested custom error pages in this cluster earlier (about a year ago), but after that we have upgraded the ingress, so that config is (hopefully) gone.

The ingress has the annotation

nginx.ingress.kubernetes.io/modsecurity-transaction-id: "$request_id"

and the generated config in the controller has modsecurity_transaction_id "$request_id"; set under location / for that ingress.
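
One quick way to confirm that the annotation was actually rendered is to grep the generated config inside the controller pod (same pod and cat command as used for the dumps further down):

kubectl exec -it -n ingress-nginx ingress-nginx-controller-8kgrl -- cat /etc/nginx/nginx.conf | grep modsecurity_transaction_id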

Nginx log with request_id=eeb4c975-6097-4bc8-9456-e22ae7c866ce

{"time": "2024-04-22T08:20:12+02:00", "remote_address": "<hidden-src-ip>", "remote_user": "-", "request": "GET /?id=1+union+select+1,2,3/* HTTP/1.1", "response_code": "403", "referer": "-", "useragent": "curl/8.4.0", "request_length": "334", "request_time": "0.000", "proxy_upstream_uname": "<hidden-proxy>", "proxy_alternative_upstream_name": "", "upstream_addr": "-", "upstream_response_length": "-", "upstream_response_time": "-", "upstream_status": "-", "request_id": "eeb4c975-6097-4bc8-9456-e22ae7c866ce", "x-forward-for": "<hidden-src-ip>, <hidden-src-ip>", "uri": "/", "request_query": "id=1+union+select+1,2,3/*", "method": "GET", "http_referrer": "-", "vhost": "change-hostname.example.local"}

Modsec log with unique_id=0cb9025a0230b70127cc25f7591ef443

2024/04/22 08:20:12 [error] 186#186: *2810440 [client <hidden-src-ip>] ModSecurity: Access denied with code 403 (phase 2). Matched "Operator `Ge' with parameter `5' against variable `TX:ANOMALY_SCORE' (Value: `15' ) [file "/etc/nginx/owasp-modsecurity-crs/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "81"] [id "949110"] [rev ""] [msg "Inbound Anomaly Score Exceeded (Total Score: 15)"] [data ""] [severity "2"] [ver "OWASP_CRS/3.3.5"] [maturity "0"] [accuracy "0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"] [hostname "10.244.100.251"] [uri "/"] [unique_id "0cb9025a0230b70127cc25f7591ef443"] [ref ""], client: <hidden-src-ip>, server: change-hostname.example.local, request: "GET /?id=1+union+select+1,2,3/* HTTP/1.1", host: "change-hostname.example.local"
2024/04/22 08:20:12 [info] 186#186: *2810440 ModSecurity: Warning. Matched "Operator `Rx' with parameter `(?i:(?:^[\W\d]+\s*?(?:(?:alter\s*(?:a(?:(?:pplication\s*rol|ggregat)e|s(?:ymmetric\s*ke|sembl)y|u(?:thorization|dit)|vailability\s*group)|c(?:r(?:yptographic\s*provider|edential)|o(?:l(?:latio|um)|nve (1040 characters omitted)' against variable `ARGS:id' (Value: `1 union select 1,2,3/*' ) [file "/etc/nginx/owasp-modsecurity-crs/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf"] [line "426"] [id "942360"] [rev ""] [msg "Detects concatenated basic SQL injection and SQLLFI attempts"] [data "Matched Data: 1 union select found within ARGS:id: 1 union select 1,2,3/*"] [severity "2"] [ver "OWASP_CRS/3.3.5"] [maturity "0"] [accuracy "0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-sqli"] [tag "paranoia-level/1"] [tag "OWASP_CRS"] [tag "capec/1000/152/248/66"] [tag "PCI/6.5.2"] [hostname "10.244.100.251"] [uri "/"] [unique_id "0cb9025a0230b70127cc25f7591ef443"] [ref "o0,14v9,22t:urlDecodeUni"], client: <hidden-src-ip>, server: change-hostname.example.local, request: "GET /?id=1+union+select+1,2,3/* HTTP/1.1", host: "change-hostname.example.local"
2024/04/22 08:20:12 [info] 186#186: *2810440 ModSecurity: Warning. Matched "Operator `Rx' with parameter `(?i:(?:[\"'`](?:;?\s*?(?:having|select|union)\b\s*?[^\s]|\s*?!\s*?[\"'`\w])|(?:c(?:onnection_id|urrent_user)|database)\s*?\([^\)]*?|u(?:nion(?:[\w(\s]*?select| select @)|ser\s*?\([^\)]*?)|s(?:chema\s* (165 characters omitted)' against variable `ARGS:id' (Value: `1 union select 1,2,3/*' ) [file "/etc/nginx/owasp-modsecurity-crs/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf"] [line "165"] [id "942190"] [rev ""] [msg "Detects MSSQL code execution and information gathering attempts"] [data "Matched Data: union select found within ARGS:id: 1 union select 1,2,3/*"] [severity "2"] [ver "OWASP_CRS/3.3.5"] [maturity "0"] [accuracy "0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-sqli"] [tag "paranoia-level/1"] [tag "OWASP_CRS"] [tag "capec/1000/152/248/66"] [tag "PCI/6.5.2"] [hostname "10.244.100.251"] [uri "/"] [unique_id "0cb9025a0230b70127cc25f7591ef443"] [ref "o2,12v9,22t:urlDecodeUni"], client: <hidden-src-ip>, server: change-hostname.example.local, request: "GET /?id=1+union+select+1,2,3/* HTTP/1.1", host: "change-hostname.example.local"
2024/04/22 08:20:12 [info] 186#186: *2810440 ModSecurity: Warning. detected SQLi using libinjection. [file "/etc/nginx/owasp-modsecurity-crs/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf"] [line "46"] [id "942100"] [rev ""] [msg "SQL Injection Attack Detected via libinjection"] [data "Matched Data: 1UE1c found within ARGS:id: 1 union select 1,2,3/*"] [severity "2"] [ver "OWASP_CRS/3.3.5"] [maturity "0"] [accuracy "0"] [hostname "10.244.100.251"] [uri "/"] [unique_id "0cb9025a0230b70127cc25f7591ef443"] [ref "v9,22"], client: <hidden-src-ip>, server: change-hostname.example.local, request: "GET /?id=1+union+select+1,2,3/* HTTP/1.1", host: "change-hostname.example.local"

The interesting part of the annotations. In the ingress I have commented out some rule examples:

annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Request-Id: $request_id";
    nginx.ingress.kubernetes.io/enable-modsecurity: "true"
    nginx.ingress.kubernetes.io/modsecurity-transaction-id: "$request_id"
    nginx.ingress.kubernetes.io/modsecurity-snippet: |
        #nginx 0.25.0 and above
        Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
        SecRuleEngine On
        SecAction "id:900000,phase:1,nolog,pass,t:none,setvar:tx.paranoia_level=1"

kubectl exec -it -n ingress-nginx ingress-nginx-controller-8kgrl -- /nginx-ingress-controller --version

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.9.6
  Build:         6a73aa3b05040a97ef8213675a16142a9c95952a
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6

-------------------------------------------------------------------------------

The nginx config for the ingress

kubectl exec -it -n ingress-nginx ingress-nginx-controller-8kgrl -- cat /etc/nginx/nginx.conf

        ## start server change-hostname.example.local
        server {
                server_name change-hostname.example.local ;

                listen 80  ;
                listen [::]:80  ;
                listen 443  ssl http2 ;
                listen [::]:443  ssl http2 ;

                set $proxy_upstream_name "-";

                ssl_certificate_by_lua_block {
                        certificate.call()
                }

                location / {

                        set $namespace      "echo";
                        set $ingress_name   "waf-ingress";
                        set $service_name   "echoapp-svc";
                        set $service_port   "8080";
                        set $location_path  "/";
                        set $global_rate_limit_exceeding n;

                        rewrite_by_lua_block {
                                lua_ingress.rewrite({
                                        force_ssl_redirect = false,
                                        ssl_redirect = true,
                                        force_no_ssl_redirect = false,
                                        preserve_trailing_slash = false,
                                        use_port_in_redirects = false,
                                        global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
                                })
                                balancer.rewrite()
                                plugins.run()
                        }

                        # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
                        # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
                        # other authentication method such as basic auth or external auth useless - all requests will be allowed.
                        #access_by_lua_block {
                        #}

                        header_filter_by_lua_block {
                                lua_ingress.header()
                                plugins.run()
                        }

                        body_filter_by_lua_block {
                                plugins.run()
                        }

                        log_by_lua_block {
                                balancer.log()

                                monitor.call()

                                plugins.run()
                        }

                        port_in_redirect off;

                        set $balancer_ewma_score -1;
                        set $proxy_upstream_name "echo-echoapp-svc-8080";
                        set $proxy_host          $proxy_upstream_name;
                        set $pass_access_scheme  $scheme;

                        set $pass_server_port    $server_port;

                        set $best_http_host      $http_host;
                        set $pass_port           $pass_server_port;

                        set $proxy_alternative_upstream_name "";

                        modsecurity on;
                        modsecurity_rules '
                        #Note: If you use both enable-owasp-core-rules and modsecurity-snippet annotations together, only the modsecurity-snippet will take effect.
                        #If you wish to include the OWASP Core Rule Set or recommended configuration simply use the include statement:
                        #nginx 0.24.1 and below
                        #Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
                        #Include /etc/nginx/modsecurity/modsecurity.conf
                        #nginx 0.25.0 and above
                        Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
                        SecRuleEngine On
                        #SecRuleEngine DetectionOnly
                        # Paranoidlevel for crs version 3.x
                        SecAction "id:900000,phase:1,nolog,pass,t:none,setvar:tx.paranoia_level=1"
                        # Paranoidlevel for crs version 4.x
                        #SecAction "id:900000,phase:1,nolog,pass,t:none,setvar:tx.blocking_paranoia_level=1"
                        #SecAction "id:900001,phase:1,nolog,pass,t:none,setvar:tx.detection_paranoia_level=4"
                        #SecAction "id:900000,phase:1,nolog,pass,t:none,setvar:tx.paranoia_level=1"
                        # Change traffic to ba controlled to 50%
                        #SecAction "id:900400,phase:1,nolog,pass,t:none,setvar:tx.sampling_percentage=50"
                        # Change anomaly_score_threshold to 500 resp 400 (default 5 and 4)
                        #SecAction "id:900110,phase:1,nolog,pass,t:none,setvar:tx.inbound_anomaly_score_threshold=500,setvar:tx.outbound_anomaly_score_threshold=400"
                        #SecRuleRemoveById 942360
                        #SecRuleRemoveById 942190
                        #SecRuleRemoveById 942100
                        #SecRuleRemoveByTag attack-sqli
                        #SecRule REQUEST_URI "@beginsWith /path-ok" "id:1200,phase:1,nolog,pass,ctl:ruleRemoveById=942360,ctl:ruleRemoveById=942190,ctl:ruleRemoveById=942100"
                        SecRule REQUEST_URI "@beginsWith /path-ok/" "id:1200,phase:1,nolog,pass,ctl:ruleRemoveByTag=attack-sqli"
                        #Exemple 1 with blocking not valid URI regex
                        #SecRule REQUEST_URI "!\/(path-ok|path-1|path-2)\/" "id:1201,phase:1,deny,log,t:none,status:403"
                        #Exemple 2 with blocking not valid URI pm
                        #SecRule REQUEST_URI "!@pm /path-ok/ /path-1/ /path-2/" "id:1201,phase:1,deny,log,t:none,status:403"
                        #Exemple 3 with blocking not valid URI secmarker
                        #SecMarker BEGIN_VALID_URL_CHECK
                        #SecRule REQUEST_URI "@beginsWith /path-ok/"  "id:1201,nolog,pass,phase:1,t:none,skipAfter:END_VALID_URL_CHECK"
                        #SecRule REQUEST_URI "@beginsWith /path-1/" "id:1202,nolog,pass,phase:1,t:none,skipAfter:END_VALID_URL_CHECK"
                        #SecRule REQUEST_URI "@beginsWith /path-2/" "id:1203,nolog,pass,phase:1,t:none,skipAfter:END_VALID_URL_CHECK"
                        #Allow admin path from specific IP ranges
                        #SecRule REQUEST_URI "@beginsWith /admin/" "id:1204,nolog,pass,phase:1,t:none,chain,skipAfter:END_VALID_URL_CHECK" SecRule REMOTE_ADDR "@ipMatch 192.168.145.0/24" "phase:1,t:none"
                        #Allow admin path from several IP ranges
                        #SecRule REQUEST_URI "@beginsWith /admin/" "id:1204,nolog,pass,phase:1,t:none,chain,skipAfter:END_VALID_URL_CHECK" SecRule REMOTE_ADDR "@ipMatch 192.168.145.0/24,192.168.95.0/24" "phase:1,t:none"
                        #SecRule REQUEST_URI "." "id:1209,phase:1,deny,log,t:none,phase:1,status:403"
                        #SecMarker END_VALID_URL_CHECK
                        #SecDebugLog /dev/stdout
                        #SecDebugLogLevel 4 # 0 No logging,1 Errors (e.g., fatal processing errors, blocked transactions),2     Warnings (e.g., non-blocking rule matches),3 Notices (e.g., non-fatal processing errors), 4 Informational,5 Detailed,9 Everything!

                        ';
                        modsecurity_transaction_id "$request_id";

                        client_max_body_size                    1m;

                        proxy_set_header Host                   $best_http_host;

                        # Pass the extracted client certificate to the backend

                        # Allow websocket connections
                        proxy_set_header                        Upgrade           $http_upgrade;

                        proxy_set_header                        Connection        $connection_upgrade;

                        proxy_set_header X-Request-ID           $req_id;
                        proxy_set_header X-Real-IP              $remote_addr;

                        proxy_set_header X-Forwarded-For        $remote_addr;

                        proxy_set_header X-Forwarded-Host       $best_http_host;
                        proxy_set_header X-Forwarded-Port       $pass_port;
                        proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
                        proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;

                        proxy_set_header X-Scheme               $pass_access_scheme;

                        # Pass the original X-Forwarded-For
                        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

                        # mitigate HTTPoxy Vulnerability
                        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                        proxy_set_header Proxy                  "";

                        # Custom headers to proxied server

                        proxy_connect_timeout                   5s;
                        proxy_send_timeout                      60s;
                        proxy_read_timeout                      60s;

                        proxy_buffering                         off;
                        proxy_buffer_size                       4k;
                        proxy_buffers                           4 4k;

                        proxy_max_temp_file_size                1024m;

                        proxy_request_buffering                 on;
                        proxy_http_version                      1.1;

                        proxy_cookie_domain                     off;
                        proxy_cookie_path                       off;

                        # In case of errors try the next upstream server before returning an error
                        proxy_next_upstream                     error timeout;
                        proxy_next_upstream_timeout             0;
                        proxy_next_upstream_tries               3;

                        more_set_headers "Request-Id: $request_id";

                        proxy_pass http://upstream_balancer;

                        proxy_redirect                          off;

                }

        }
        ## end server change-hostname.example.local

Kubernetes version:
kubectl version
Client Version: v1.28.4
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.0

Environment:

  • HPE DL380 G10 / ESXi 8.0.2
  • SUSE Linux Enterprise Server 15 SP5
  • Linux bvin01-k865m-01 5.14.21-150500.55.52-default #1 SMP PREEMPT_DYNAMIC Tue Mar 5 16:53:41 UTC 2024 (a62851f) x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
    • kubeadm
  • Basic cluster related info:

kubectl get nodes -o wide

NAME              STATUS   ROLES           AGE      VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                              KERNEL-VERSION                 CONTAINER-RUNTIME
bvin01-k865m-01   Ready    control-plane   2y318d   v1.29.0   192.168.95.46   <none>        SUSE Linux Enterprise Server 15 SP5   5.14.21-150500.55.52-default   containerd://1.7.10
bvin01-k865m-03   Ready    control-plane   2y318d   v1.29.0   192.168.95.47   <none>        SUSE Linux Enterprise Server 15 SP5   5.14.21-150500.55.52-default   containerd://1.7.10
bvin01-k865m-05   Ready    control-plane   552d     v1.29.0   192.168.95.48   <none>        SUSE Linux Enterprise Server 15 SP5   5.14.21-150500.55.52-default   containerd://1.7.10
bvin01-k865w-01   Ready    <none>          2y318d   v1.29.0   192.168.95.49   <none>        SUSE Linux Enterprise Server 15 SP5   5.14.21-150500.55.52-default   containerd://1.7.10
bvin01-k865w-03   Ready    <none>          2y318d   v1.29.0   192.168.95.50   <none>        SUSE Linux Enterprise Server 15 SP5   5.14.21-150500.55.52-default   containerd://1.7.10
bvin01-k865w-05   Ready    <none>          2y318d   v1.29.0   192.168.95.51   <none>        SUSE Linux Enterprise Server 15 SP5   5.14.21-150500.55.52-default   containerd://1.7.10
bvin01-k865w-07   Ready    <none>          2y318d   v1.29.0   192.168.95.52   <none>        SUSE Linux Enterprise Server 15 SP5   5.14.21-150500.55.52-default   containerd://1.7.10

helm ls -A | grep -i ingress

ingress-nginx           ingress-nginx           10              2024-04-16 14:39:02.841537105 +0200 CEST        deployed        ingress-nginx-4.9.1             1.9.6

helm -n ingress-nginx get values ingress-nginx

USER-SUPPLIED VALUES:
commonLabels: {}
controller:
  addHeaders:
    Content-Security-Policy: 'default-src https: data: ''unsafe-inline'' ''unsafe-eval'''
    X-Content-Type-Options: nosniff
    X-Frame-Options: SAMEORIGIN
    X-XSS-Protection: 1; mode=block
  admissionWebhooks:
    annotations: {}
    certManager:
      admissionCert:
        duration: ""
      enabled: false
      rootCert:
        duration: ""
    certificate: /usr/local/certificates/cert
    createSecretJob:
      resources: {}
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
          - ALL
        seccompProfile:
          type: RuntimeDefault
    enabled: true
    existingPsp: ""
    extraEnvs: []
    failurePolicy: Fail
    key: /usr/local/certificates/key
    labels: {}
    namespaceSelector: {}
    objectSelector: {}
    patch:
      enabled: true
      image:
        digest: sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80
        image: ingress-nginx/kube-webhook-certgen
        pullPolicy: IfNotPresent
        registry: library
        tag: v20231011-8b53cabe0
      labels: {}
      nodeSelector:
        kubernetes.io/os: linux
      podAnnotations: {}
      priorityClassName: ""
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      tolerations: []
    patchWebhookJob:
      resources: {}
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
          - ALL
        seccompProfile:
          type: RuntimeDefault
    port: 8443
    service:
      annotations: {}
      externalIPs: []
      loadBalancerSourceRanges: []
      servicePort: 443
      type: ClusterIP
  affinity: {}
  allowSnippetAnnotations: true
  annotations: {}
  autoscaling:
    annotations: {}
    behavior: {}
    enabled: false
    maxReplicas: 11
    minReplicas: 1
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  autoscalingTemplate: []
  config:
    enable-real-ip: "true"
    error-log-level: info
    generate-request-id: "True"
    hsts: "true"
    hsts-include-subdomains: "false"
    large-client-header-buffers: 4 12k
    log-format-upstream: '{"time": "$time_iso8601", "remote_address": "$remote_addr",
      "remote_user": "$remote_user", "request": "$request", "response_code": "$status",
      "referer": "$http_referer", "useragent": "$http_user_agent", "request_length":
      "$request_length", "request_time": "$request_time", "proxy_upstream_uname":
      "$proxy_upstream_name", "proxy_alternative_upstream_name": "$proxy_alternative_upstream_name",
      "upstream_addr": "$upstream_addr", "upstream_response_length": "$upstream_response_length",
      "upstream_response_time": "$upstream_response_time", "upstream_status": "$upstream_status",
      "request_id": "$req_id", "x-forward-for": "$proxy_add_x_forwarded_for", "uri":
      "$uri", "request_query": "$args", "method": "$request_method", "http_referrer":
      "$http_referer", "vhost": "$host"}'
    proxy-real-ip-cidr: 192.168.47.0/24
    real-ip-header: X-Forwarded-For
    server-tokens: "false"
    skip-access-log-urls: /healthz
    use-forwarded-headers: "true"
    use-proxy-protocol: "False"
  configAnnotations: {}
  configMapNamespace: ""
  containerName: controller
  containerPort:
    healthz: 10254
    http: 80
    https: 443
  containerSecurityContext:
    allowPrivilegeEscalation: false
    capabilities:
      add:
      - NET_BIND_SERVICE
      drop:
      - ALL
    runAsNonRoot: true
    runAsUser: 101
    seccompProfile:
      type: RuntimeDefault
  customTemplate:
    configMapKey: ""
    configMapName: ""
  dnsConfig: {}
  dnsPolicy: ClusterFirst
  electionID: ""
  enableAnnotationValidations: false
  enableMimalloc: true
  enableTopologyAwareRouting: false
  extraArgs:
    default-ssl-certificate: $(POD_NAMESPACE)/default-ssl-certificate-tls
  extraContainers: []
  extraEnvs:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  extraInitContainers: []
  extraModules: []
  extraVolumeMounts:
  - mountPath: /etc/localtime
    name: tz-config
    readOnly: true
    subPath: localtime
  extraVolumes:
  - configMap:
      items:
      - key: localtime
        path: localtime
      name: tzdata
    name: tz-config
  healthCheckHost: ""
  healthCheckPath: /healthz
  hostAliases: []
  hostNetwork: false
  hostPort:
    enabled: true
    ports:
      http: 80
      https: 443
  hostname: {}
  image:
    allowPrivilegeEscalation: true
    chroot: false
    digest: sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c
    digestChroot: sha256:7eb46ff733429e0e46892903c7394aff149ac6d284d92b3946f3baf7ff26a096
    existingPsp: ""
    image: ingress-nginx/controller
    pullPolicy: IfNotPresent
    registry: library
    runAsUser: 101
    tag: v1.9.6
  ingressClass: nginx
  ingressClassByName: false
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx
    default: true
    enabled: true
    name: nginx
    parameters: {}
  keda:
    apiVersion: keda.sh/v1alpha1
    behavior: {}
    cooldownPeriod: 300
    enabled: false
    maxReplicas: 11
    minReplicas: 1
    pollingInterval: 30
    restoreToOriginalReplicaCount: false
    scaledObject:
      annotations: {}
    triggers: []
  kind: DaemonSet
  labels: {}
  lifecycle:
    preStop:
      exec:
        command:
        - /wait-shutdown
  livenessProbe:
    failureThreshold: 5
    httpGet:
      path: /healthz
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  maxmindLicenseKey: ""
  metrics:
    enabled: true
    port: 10254
    portName: metrics
    prometheusRule:
      additionalLabels: {}
      enabled: true
      rules:
      - alert: NGINXConfigFailed
        annotations:
          description: bad ingress config - nginx config test failed
          summary: uninstall the latest ingress changes to allow config reloads to
            resume
        expr: count(nginx_ingress_controller_config_last_reload_successful == 0) >
          0
        for: 1s
        labels:
          group: MW-rules
          severity: critical
      - alert: NGINXTooMany503s
        annotations:
          description: Too many 503s
          summary: More than 10% of all requests returned 503, this requires your
            attention
        expr: 100 * (sum by (host)(rate(nginx_ingress_controller_requests{status=~"503"}[10m]))
          / sum by (host)(rate(nginx_ingress_controller_requests[10m])) ) > 10
        for: 10m
        labels:
          group: MW-rules
          severity: warning
      - alert: NGINSTooMany500s
        annotations:
          description: Too many 503s
          summary: More than 10% of all requests returned 503, this requires your
            attention
        expr: 100 * (sum by (host)(rate(nginx_ingress_controller_requests{status=~"5.."}[10m]))
          / sum by (host)(rate(nginx_ingress_controller_requests[10m])) ) > 10
        for: 10m
        labels:
          severity: warning
      - alert: NGINXTooMany400s
        annotations:
          description: Too many 4XXs
          summary: More than 15% of all requests returned 4XX, this requires your
            attention
        expr: 100 * ( sum(rate(nginx_ingress_controller_requests{status=~"4.+"}[10m]))
          / sum(rate(nginx_ingress_controller_requests[10m])) ) > 15
        for: 10m
        labels:
          severity: warning
    service:
      annotations: {}
      externalIPs: []
      labels: {}
      loadBalancerSourceRanges: []
      servicePort: 10254
      type: ClusterIP
    serviceMonitor:
      additionalLabels: {}
      enabled: true
      metricRelabelings: []
      namespace: ""
      namespaceSelector: {}
      relabelings: []
      scrapeInterval: 30s
      targetLabels: []
  minAvailable: 1
  minReadySeconds: 1
  name: controller
  networkPolicy:
    enabled: false
  nodeSelector:
    kubernetes.io/os: linux
  opentelemetry:
    containerSecurityContext:
      allowPrivilegeEscalation: false
    enabled: false
    image: registry.k8s.io/ingress-nginx/opentelemetry:v20230721-3e2062ee5@sha256:13bee3f5223883d3ca62fee7309ad02d22ec00ff0d7033e3e9aca7a9f60fd472
    resources: {}
  podAnnotations: {}
  podLabels: {}
  podSecurityContext: {}
  priorityClassName: ""
  proxySetHeaders: {}
  publishService:
    enabled: false
    pathOverride: ""
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /healthz
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  replicaCount: 1
  reportNodeInternalIp: false
  resources:
    requests:
      cpu: 100m
      memory: 235Mi
  scope:
    enabled: false
    namespace: ""
    namespaceSelector: ""
  service:
    annotations: {}
    appProtocol: true
    enableHttp: true
    enableHttps: true
    enabled: false
    external:
      enabled: true
    externalIPs: []
    internal:
      annotations: {}
      enabled: false
      loadBalancerIP: ""
      loadBalancerSourceRanges: []
      ports: {}
      targetPorts: {}
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    labels: {}
    loadBalancerClass: ""
    loadBalancerIP: ""
    loadBalancerSourceRanges: []
    nodePorts:
      http: ""
      https: ""
      tcp: {}
      udp: {}
    ports:
      http: 80
      https: 443
    targetPorts:
      http: http
      https: https
    type: LoadBalancer
  shareProcessNamespace: false
  sysctls: {}
  tcp:
    annotations: {}
    configMapNamespace: ""
  terminationGracePeriodSeconds: 300
  tolerations: []
  topologySpreadConstraints: []
  udp:
    annotations: {}
    configMapNamespace: ""
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  watchIngressWithoutClass: false
defaultBackend:
  affinity: {}
  autoscaling:
    annotations: {}
    enabled: false
    maxReplicas: 2
    minReplicas: 1
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  containerSecurityContext: {}
  enabled: true
  existingPsp: ""
  extraArgs: {}
  extraEnvs: []
  extraVolumeMounts: []
  extraVolumes: []
  image:
    allowPrivilegeEscalation: false
    image: defaultbackend-amd64
    pullPolicy: IfNotPresent
    readOnlyRootFilesystem: true
    registry: library/k8
    runAsNonRoot: true
    runAsUser: 65534
    tag: "1.5"
  labels: {}
  livenessProbe:
    failureThreshold: 3
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  minAvailable: 1
  minReadySeconds: 0
  name: defaultbackend
  networkPolicy:
    enabled: false
  nodeSelector:
    kubernetes.io/os: linux
  podAnnotations: {}
  podLabels: {}
  podSecurityContext: {}
  port: 8080
  priorityClassName: ""
  readinessProbe:
    failureThreshold: 6
    initialDelaySeconds: 0
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5
  replicaCount: 1
  resources: {}
  service:
    annotations: {}
    externalIPs: []
    loadBalancerSourceRanges: []
    servicePort: 80
    type: ClusterIP
  serviceAccount:
    automountServiceAccountToken: true
    create: false
    name: default
  tolerations: []
  updateStrategy: {}
dhParam: ""
imagePullSecrets: []
podSecurityPolicy:
  enabled: false
portNamePrefix: ""
rbac:
  create: true
  scope: false
revisionHistoryLimit: 10
serviceAccount:
  annotations: {}
  automountServiceAccountToken: true
  create: true
  name: ""
tcp: {}
udp: {}

Current State of the controller:
kubectl describe ingressclasses

Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.9.6
              helm.sh/chart=ingress-nginx-4.9.1
Annotations:  ingressclass.kubernetes.io/is-default-class: true
              meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: ingress-nginx
Controller:   k8s.io/ingress-nginx
Events:       <none>

kubectl -n ingress-nginx describe pod ingress-nginx-controller-8kgrl

Name:             ingress-nginx-controller-8kgrl
Namespace:        ingress-nginx
Priority:         0
Service Account:  ingress-nginx
Node:             bvin01-k865w-03/192.168.95.50
Start Time:       Thu, 18 Apr 2024 17:18:55 +0200
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=ingress-nginx
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=ingress-nginx
                  app.kubernetes.io/part-of=ingress-nginx
                  app.kubernetes.io/version=1.9.6
                  controller-revision-hash=5cf4bb47c8
                  helm.sh/chart=ingress-nginx-4.9.1
                  pod-template-generation=1
Annotations:      cni.projectcalico.org/containerID: 0e3aada85301502343c8a6ffef9ae5b528bb60588f07c2cc5afc1b16fd3f6cd6
                  cni.projectcalico.org/podIP: 10.244.230.205/32
                  cni.projectcalico.org/podIPs: 10.244.230.205/32
Status:           Running
IP:               10.244.230.205
IPs:
  IP:           10.244.230.205
Controlled By:  DaemonSet/ingress-nginx-controller
Containers:
  controller:
    Container ID:    containerd://22db1e1ec774269b26ac871f50b7d45cd5a8a2e620c4b3bd16cd9a8fcbecb2ba
    Image:           library/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c
    Image ID:        docker.io/library/ingress-nginx/controller@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c
    Ports:           10254/TCP, 80/TCP, 443/TCP, 10254/TCP, 8443/TCP
    Host Ports:      10254/TCP, 80/TCP, 443/TCP, 0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    Args:
      /nginx-ingress-controller
      --default-backend-service=$(POD_NAMESPACE)/ingress-nginx-defaultbackend
      --election-id=ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --default-ssl-certificate=$(POD_NAMESPACE)/default-ssl-certificate-tls
    State:          Running
      Started:      Thu, 18 Apr 2024 17:18:56 +0200
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   235Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-8kgrl (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      POD_NAME:       ingress-nginx-controller-8kgrl (v1:metadata.name)
    Mounts:
      /etc/localtime from tz-config (ro,path="localtime")
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x9c68 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  tz-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      tzdata
    Optional:  false
  kube-api-access-x9c68:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason  Age                  From                      Message
  ----    ------  ----                 ----                      -------
  Normal  RELOAD  20m (x2 over 3d15h)  nginx-ingress-controller  NGINX reload triggered due to a change in configuration

kubectl -n ingress-nginx describe svc

Name:              ingress-nginx-controller-admission
Namespace:         ingress-nginx
Labels:            app.kubernetes.io/component=controller
                   app.kubernetes.io/instance=ingress-nginx
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=ingress-nginx
                   app.kubernetes.io/part-of=ingress-nginx
                   app.kubernetes.io/version=1.9.6
                   helm.sh/chart=ingress-nginx-4.9.1
Annotations:       meta.helm.sh/release-name: ingress-nginx
                   meta.helm.sh/release-namespace: ingress-nginx
Selector:          app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.106.105.41
IPs:               10.106.105.41
Port:              https-webhook  443/TCP
TargetPort:        webhook/TCP
Endpoints:         10.244.100.251:8443,10.244.196.53:8443,10.244.230.205:8443 + 1 more...
Session Affinity:  None
Events:            <none>


Name:              ingress-nginx-controller-metrics
Namespace:         ingress-nginx
Labels:            app.kubernetes.io/component=controller
                   app.kubernetes.io/instance=ingress-nginx
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=ingress-nginx
                   app.kubernetes.io/part-of=ingress-nginx
                   app.kubernetes.io/version=1.9.6
                   helm.sh/chart=ingress-nginx-4.9.1
Annotations:       meta.helm.sh/release-name: ingress-nginx
                   meta.helm.sh/release-namespace: ingress-nginx
Selector:          app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.100.173.201
IPs:               10.100.173.201
Port:              metrics  10254/TCP
TargetPort:        metrics/TCP
Endpoints:         10.244.100.251:10254,10.244.196.53:10254,10.244.230.205:10254 + 1 more...
Session Affinity:  None
Events:            <none>


Name:              ingress-nginx-defaultbackend
Namespace:         ingress-nginx
Labels:            app.kubernetes.io/component=default-backend
                   app.kubernetes.io/instance=ingress-nginx
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=ingress-nginx
                   app.kubernetes.io/part-of=ingress-nginx
                   app.kubernetes.io/version=1.9.6
                   helm.sh/chart=ingress-nginx-4.9.1
Annotations:       meta.helm.sh/release-name: ingress-nginx
                   meta.helm.sh/release-namespace: ingress-nginx
Selector:          app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.110.88.10
IPs:               10.110.88.10
Port:              http  80/TCP
TargetPort:        http/TCP
Endpoints:         10.244.54.59:8080
Session Affinity:  None
Events:            <none>

kubectl get -n ingress-nginx all,ing -o wide

NAME                                               READY   STATUS    RESTARTS   AGE     IP               NODE              NOMINATED NODE   READINESS GATES
pod/ingress-nginx-controller-8kgrl                 1/1     Running   0          3d15h   10.244.230.205   bvin01-k865w-03   <none>           <none>
pod/ingress-nginx-controller-bn94w                 1/1     Running   0          3d15h   10.244.196.53    bvin01-k865w-07   <none>           <none>
pod/ingress-nginx-controller-jfpsq                 1/1     Running   0          3d15h   10.244.100.251   bvin01-k865w-05   <none>           <none>
pod/ingress-nginx-controller-t6k2j                 1/1     Running   0          3d15h   10.244.54.51     bvin01-k865w-01   <none>           <none>
pod/ingress-nginx-defaultbackend-b8d7d8b66-g8dbm   1/1     Running   0          5d19h   10.244.54.59     bvin01-k865w-01   <none>           <none>

NAME                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE     SELECTOR
service/ingress-nginx-controller-admission   ClusterIP   10.106.105.41    <none>        443/TCP     5d19h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-metrics     ClusterIP   10.100.173.201   <none>        10254/TCP   5d19h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-defaultbackend         ClusterIP   10.110.88.10     <none>        80/TCP      5d19h   app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE     CONTAINERS   IMAGES                                                                                                            SELECTOR
daemonset.apps/ingress-nginx-controller   4         4         4       4            4           kubernetes.io/os=linux   5d19h   controller   library/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS                      IMAGES                                SELECTOR
deployment.apps/ingress-nginx-defaultbackend   1/1     1            1           5d19h   ingress-nginx-default-backend   library/k8/defaultbackend-amd64:1.5   app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                                     DESIRED   CURRENT   READY   AGE     CONTAINERS                      IMAGES                                SELECTOR
replicaset.apps/ingress-nginx-defaultbackend-b8d7d8b66   1         1         1       5d19h   ingress-nginx-default-backend   library/k8/defaultbackend-amd64:1.5   app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=b8d7d8b66

kubectl -n echo describe ingress waf-ingress

Name:             waf-ingress
Labels:           <none>
Namespace:        echo
Address:          192.168.95.49,192.168.95.50,192.168.95.51,192.168.95.52
Ingress Class:    nginx
Default backend:  <default>
TLS:
  SNI routes change-hostname.example.local
Rules:
  Host                                    Path  Backends
  ----                                    ----  --------
  change-hostname.example.local
                                          /   echoapp-svc:8080 (10.244.100.210:8080,10.244.196.43:8080,10.244.230.204:8080 + 1 more...)
Annotations:                              nginx.ingress.kubernetes.io/configuration-snippet: more_set_headers "Request-Id: $request_id";
                                          nginx.ingress.kubernetes.io/enable-modsecurity: true
                                          nginx.ingress.kubernetes.io/modsecurity-snippet:
                                            #Note: If you use both enable-owasp-core-rules and modsecurity-snippet annotations together, only the modsecurity-snippet will take effect...
                                            #If you wish to include the OWASP Core Rule Set or recommended configuration simply use the include statement:
                                            #nginx 0.24.1 and below
                                            #Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
                                            #Include /etc/nginx/modsecurity/modsecurity.conf
                                            #nginx 0.25.0 and above
                                            Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
                                            SecRuleEngine On
                                            #SecRuleEngine DetectionOnly
                                            # Paranoidlevel for crs version 3.x
                                            SecAction "id:900000,phase:1,nolog,pass,t:none,setvar:tx.paranoia_level=1"
                                            # Paranoidlevel for crs version 4.x
                                            #SecAction "id:900000,phase:1,nolog,pass,t:none,setvar:tx.blocking_paranoia_level=1"
                                            #SecAction "id:900001,phase:1,nolog,pass,t:none,setvar:tx.detection_paranoia_level=4"
                                            #SecAction "id:900000,phase:1,nolog,pass,t:none,setvar:tx.paranoia_level=1"
                                            # Change traffic to ba controlled to 50%
                                            #SecAction "id:900400,phase:1,nolog,pass,t:none,setvar:tx.sampling_percentage=50"
                                            # Change anomaly_score_threshold to 500 resp 400 (default 5 and 4)
                                            #SecAction "id:900110,phase:1,nolog,pass,t:none,setvar:tx.inbound_anomaly_score_threshold=500,setvar:tx.outbound_anomaly_score_threshold=4...
                                            #SecRuleRemoveById 942360
                                            #SecRuleRemoveById 942190
                                            #SecRuleRemoveById 942100
                                            #SecRuleRemoveByTag attack-sqli
                                            #SecRule REQUEST_URI "@beginsWith /path-ok" "id:1200,phase:1,nolog,pass,ctl:ruleRemoveById=942360,ctl:ruleRemoveById=942190,ctl:ruleRemove...
                                            SecRule REQUEST_URI "@beginsWith /path-ok/" "id:1200,phase:1,nolog,pass,ctl:ruleRemoveByTag=attack-sqli"
                                            #Exemple 1 with blocking not valid URI regex
                                            #SecRule REQUEST_URI "!\/(path-ok|path-1|path-2)\/" "id:1201,phase:1,deny,log,t:none,status:403"
                                            #Exemple 2 with blocking not valid URI pm
                                            #SecRule REQUEST_URI "!@pm /path-ok/ /path-1/ /path-2/" "id:1201,phase:1,deny,log,t:none,status:403"
                                            #Exemple 3 with blocking not valid URI secmarker
                                            #SecMarker BEGIN_VALID_URL_CHECK
                                            #SecRule REQUEST_URI "@beginsWith /path-ok/"  "id:1201,nolog,pass,phase:1,t:none,skipAfter:END_VALID_URL_CHECK"
                                            #SecRule REQUEST_URI "@beginsWith /path-1/" "id:1202,nolog,pass,phase:1,t:none,skipAfter:END_VALID_URL_CHECK"
                                            #SecRule REQUEST_URI "@beginsWith /path-2/" "id:1203,nolog,pass,phase:1,t:none,skipAfter:END_VALID_URL_CHECK"
                                            #Allow admin path from specific IP ranges
                                            #SecRule REQUEST_URI "@beginsWith /admin/" "id:1204,nolog,pass,phase:1,t:none,chain,skipAfter:END_VALID_URL_CHECK" SecRule REMOTE_ADDR "@i...
                                            #Allow admin path from several IP ranges
                                            #SecRule REQUEST_URI "@beginsWith /admin/" "id:1204,nolog,pass,phase:1,t:none,chain,skipAfter:END_VALID_URL_CHECK" SecRule REMOTE_ADDR "@i...
                                            #SecRule REQUEST_URI "." "id:1209,phase:1,deny,log,t:none,phase:1,status:403"
                                            #SecMarker END_VALID_URL_CHECK
                                            #SecDebugLog /dev/stdout
                                            #SecDebugLogLevel 4 # 0 No logging,1 Errors (e.g., fatal processing errors, blocked transactions),2  Warnings (e.g., non-blocking rule matc...
                                          nginx.ingress.kubernetes.io/modsecurity-transaction-id: $request_id
Events:
  Type    Reason  Age                  From                      Message
  ----    ------  ----                 ----                      -------
  Normal  Sync    24m (x2 over 3d16h)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    24m (x2 over 3d16h)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    24m (x2 over 3d16h)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    24m (x2 over 3d16h)  nginx-ingress-controller  Scheduled for sync
husa570 added the kind/bug label Apr 22, 2024
k8s-ci-robot added the needs-triage label Apr 22, 2024
k8s-ci-robot (Contributor) commented:

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

longwuyuan (Contributor) commented Apr 22, 2024

/remove-kind bug

  • Can you please reproduce this on a minimal configuration on a cluster created using kind or minikube
  • Please describe why your ingress-nginx-controller service is of type ClusterIP

k8s-ci-robot added needs-kind and removed kind/bug labels Apr 22, 2024
longwuyuan (Contributor) commented:

/triage needs-information

k8s-ci-robot added the triage/needs-information label Apr 22, 2024
husa570 (Author) commented Apr 22, 2024

/remove-kind bug

* Can you please reproduce this on a minimal configuration on a cluster created using kind or minikube

I will try to see if I can do that

* Please describe why your ingress-nginx-controller service is of type ClusterIP

We use hostPorts on the controller and have HAProxy in front of the cluster.
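
For context, this is the relevant fragment of the helm values quoted earlier in this issue (the chart's LoadBalancer service for the controller is not created, and the controller pods bind ports 80/443 directly on the nodes):

controller:
  hostPort:
    enabled: true
    ports:
      http: 80
      https: 443
  service:
    enabled: false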

longwuyuan (Contributor) commented:

Thanks for updating.

  • Appreciate that you will try reproducing in minikube or kind. Please ensure that you use a service of --type LoadBalancer, or the unique kind networking configuration as we do it in CI (https://github.com/kubernetes/ingress-nginx/blob/main/build/kind.yaml). We don't test this HAProxy-in-front-of-ingress-nginx networking in CI, so this will help a lot.

  • At some point I hope you will also be testing with a service of type LoadBalancer in front of ingress-nginx, as well as that long ModSecurity snippet. The idea is to install MetalLB in minikube (if you choose minikube) and configure the minikube IP address as both the start and the end of the address pool. That way the service of type LoadBalancer gets that external IP (see the sketch below).
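
A rough sketch of that MetalLB setup on minikube, assuming the built-in metallb addon is used (the exact prompts vary between minikube versions):

minikube addons enable metallb
minikube ip                          # note the node address
minikube addons configure metallb    # enter the minikube IP as both the start and the end of the pool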

longwuyuan (Contributor) commented:

I would test in stages:

  • No modifications except enabling modsec
  • Then make one change, like the req_id annotation, but no complete set of rules if possible (see the sketch after this list)
  • Next, add the complete rules
  • But I would also check the log_format_upstream (because I have not ingested all the info)
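
For the first two stages, the minimal annotations would be roughly the following (values copied from the annotations already shown in this issue; the second line is the single change added in stage two):

nginx.ingress.kubernetes.io/enable-modsecurity: "true"
nginx.ingress.kubernetes.io/modsecurity-transaction-id: "$request_id"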

longwuyuan (Contributor) commented:

/kind support

k8s-ci-robot added kind/support and removed needs-kind labels Apr 22, 2024
longwuyuan (Contributor) commented:

If you meant to say you can reproduce on minikube, then please do this.

From your minikube cluster, copy/paste the output of these commands here in one single post:

  • kubectl cluster-info
  • helm -n ingress-nginx get values ingress-nginx
  • kubectl get all,ing -A -o wide
  • kubectl -n ingress-nginx get cm -o wide
  • kubectl -n ingress-nginx describe cm ingress-nginx-controller
  • kubectl -n ingress-nginx describe po $ingress-nginx-controller-pod-name
  • kubectl -n $appnamespace describe ing
  • kubectl -n $appnamespace logs $apppodname
  • Curl command, complete and exactly as used, with -v and its response
  • kubectl -n ingress-nginx logs $ingress-nginx-controller-podname
  • Any other related info

longwuyuan (Contributor) commented:

You can actually reduce the clutter here by deleting less informative posts and posting all the important minikube info in the original issue description.

longwuyuan (Contributor) commented:

Also, the controller v1.10.x is using nginx v1.25 (it was v1.21 earlier), so we have to check if any upstream nginx changes impacted your log_format, nginx vars, modsec config, etc.
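
For reference, the same command that was used earlier in this issue prints the bundled nginx version, so it can be re-run after an upgrade to see what changed (the pod name is just an example):

kubectl exec -it -n ingress-nginx <ingress-nginx-controller-pod> -- /nginx-ingress-controller --version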

longwuyuan (Contributor) commented:

Thanks.

I asked for those command outputs so I can reproduce. I suspect that if there is a genuine problem and if it is caused by the controller, then maybe the upgrade of the internal component nginx (stating that nginx is a component of the controller) from v1.21 to v1.25 has introduced changes that are related.

husa570 (Author) commented Apr 24, 2024

if you meant to say you can reproduce on minikube, then please do this.

From your minikube cluster, copy/paste the output of commands here in one single post;

* kubectl cluster-info

* helm -n ingress-nginx get values ingress-nginx

* kubectl get all,ing -A -o wide

* kubectl -n ingress-nginx get cm -o wide

* kubectl -n ingress-nginx describe cm ingress-nginx-controller

* kubectl -n ingress-nginx describe po $ingress-nginx-controller-pod-name

* kubectl -n $appnamespace describe ing

* kubectl -n $appnamespace logs $apppodname

* Curl command, complete and exactly as used, with -v and its response

* kubectl -n ingress-nginx logs $ingress-nginx-controller-podname

* Any other related info

I will see what I can do; some of these commands extract information that might be sensitive for us, but parts of it I might be able to anonymize.

husa570 (Author) commented Apr 24, 2024

I'm stuck at the moment, I can't reproduce it in minikube. One difference between minikube and our cluster is that we use containerd (ver 1.7.10) and not Docker, and unfortunately I don't seem to have the knowledge to run minikube on containerd.
So at the moment I'm stuck with the fact that the ingress-nginx nginx logs the same req_id twice (happened when we upgraded to 1.10.0) and ModSecurity uses its own unique_id.

husa570 (Author) commented Apr 24, 2024

Deleted most of my "clutter" posts and closing this issue unresolved.

longwuyuan (Contributor) commented:

minikube start --container-runtime --help should show you this

[screenshot of the minikube start --container-runtime help output]

@husa570 we can do a Zoom session if you think you are OK with that way to make progress.

husa570 (Author) commented Apr 24, 2024

minikube start --container-runtime --help should show you this

[screenshot of the minikube start --container-runtime help output]

@husa570 we can do a Zoom session if you think you are OK with that way to make progress.

Thanks, but this was another dead end. Minikube worked as expected.
Minikube start:

minikube start --container-runtime=containerd
😄  minikube v1.33.0 on Ubuntu 20.04 (amd64)
✨  Automatically selected the docker driver. Other choices: none, ssh
📌  Using Docker driver with root privileges
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🚜  Pulling base image v0.0.43 ...
💾  Downloading Kubernetes v1.30.0 preload ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
📦  Preparing Kubernetes v1.30.0 on containerd 1.6.31 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

The request

curl --resolve waf-demo.localdev.me:8080:127.0.0.1 http://waf-demo.localdev.me:8080/?id=1+union+select+1,2,3/*

And the logs, where everything works as expected:
unique_id = request_id

2024/04/24 13:19:01 [error] 775#775: *7024 [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Matched "Operator `Ge' with parameter `5' against variable `TX:ANOMALY_SCORE' (Value: `15' ) [file "/etc/nginx/owasp-modsecurity-crs/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "81"] [id "949110"] [rev ""] [msg "Inbound Anomaly Score Exceeded (Total Score: 15)"] [data ""] [severity "2"] [ver "OWASP_CRS/3.3.5"] [maturity "0"] [accuracy "0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"] [hostname "127.0.0.1"] [uri "/"] [unique_id "41b34f98bb4e4a703d54e6597a5785a1"] [ref ""], client: 127.0.0.1, server: waf-demo.localdev.me, request: "GET /?id=1+union+select+1,2,3/* HTTP/1.1", host: "waf-demo.localdev.me:8080"
{"time": "2024-04-24T13:19:01+00:00", "remote_address": "127.0.0.1", "remote_user": "-", "request": "GET /?id=1+union+select+1,2,3/* HTTP/1.1", "response_code": "403", "referer": "-", "useragent": "curl/7.68.0", "request_length": "115", "request_time": "0.000", "proxy_upstream_uname": "default-demo-80", "proxy_alternative_upstream_name": "", "upstream_addr": "-", "upstream_response_length": "-", "upstream_response_time": "-", "upstream_status": "-", "request_id": "41b34f98bb4e4a703d54e6597a5785a1", "x-forward-for": "127.0.0.1", "uri": "/", "request_query": "id=1+union+select+1,2,3/*", "method": "GET", "http_referrer": "-", "vhost": "waf-demo.localdev.me"}
