Is this a BUG REPORT or FEATURE REQUEST? (choose one):
This is a bug report.
NGINX Ingress controller version:
0.9.0. We tried 0.10.0 but hit redirect loops and have not had time to debug that yet.
Kubernetes version (use kubectl version):
1.7.11+coreos.0. The bug happens in 1.7.7 too.
Environment:
Cloud provider or hardware configuration:
AWS.
OS (e.g. from /etc/os-release):
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1576.5.0
VERSION_ID=1576.5.0
BUILD_ID=2018-01-05-1121
PRETTY_NAME="Container Linux by CoreOS 1576.5.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
Kernel (e.g. uname -a):
Linux ip-10-120-125-69.intermedium.local 4.14.11-coreos #1 SMP Fri Jan 5 11:00:14 UTC 2018 x86_64 Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz GenuineIntel GNU/Linux
Install tools:
We are using kube-aws 0.9.8 to provision the cluster.
Others:
What happened:
The nginx-ingress-controller configmap is being updated every few seconds, which triggers nginx reloads. Under some load patterns this causes communication errors (HTTP 502, 503) while the nginx process is reloading.
The only attribute that changes in the configmap between reloads is ResourceVersion.
We tested with flannel alone and with calico+flannel; both behaved about the same. With calico enabled I apparently saw more errors, but I have not measured this precisely yet.
We are using one deployment per namespace, with two to three pods per deployment.
This issue is similar to #1269, and we even tried the image referenced in that issue, but in our case the trigger is the configmap update.
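To make the churn easier to see, a loop like the following (a hypothetical check, not part of our setup; it only assumes kubectl access to the hml namespace) prints the ConfigMap's resourceVersion every few seconds and shows it increasing even when nothing else changes:

# Print the resourceVersion of the leader-election ConfigMap every 5 seconds.
# If it keeps increasing while no Ingress/Service/Endpoints change, the updates
# are coming from something other than our workloads.
while true; do
  kubectl get cm ingress-controller-leader-nginx --namespace hml -o jsonpath='{.metadata.resourceVersion}'
  echo
  sleep 5
done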
What you expected to happen:
Nginx should reload only when a service is created, modified, or deleted, or when a deployment scales in or out.
How to reproduce it (as minimally and precisely as possible):
It just started reloading constantly after some time.
Anything else we need to know:
Contents of the configmap, taken 15 seconds apart, three times:
kubectl get cm ingress-controller-leader-nginx --namespace hml -o yaml; sleep 15 ; kubectl get cm ingress-controller-leader-nginx --namespace hml -o yaml; sleep 15 ; kubectl get cm ingress-controller-leader-nginx --namespace hml -o yaml

apiVersion: v1
data:
  proxy-connect-timeout: "60"
  proxy-read-timeout: "600"
  proxy-send-timeout: "600"
kind: ConfigMap
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"nginx-ingress-controller-4212645903-3r5tx","leaseDurationSeconds":30,"acquireTime":"2018-01-25T12:06:46Z","renewTime":"2018-01-26T12:48:45Z","leaderTransitions":3}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"proxy-connect-timeout":"60","proxy-read-timeout":"600","proxy-send-timeout":"600"},"kind":"ConfigMap","metadata":{"annotations":{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"nginx-ingress-controller-4212645903-z436q\",\"leaseDurationSeconds\":30,\"acquireTime\":\"2018-01-11T19:47:52Z\",\"renewTime\":\"2018-01-23T19:30:51Z\",\"leaderTransitions\":0}"},"name":"ingress-controller-leader-nginx","namespace":"hml"}}
  creationTimestamp: 2018-01-08T12:58:02Z
  name: ingress-controller-leader-nginx
  namespace: hml
  resourceVersion: "6805098"
  selfLink: /api/v1/namespaces/hml/configmaps/ingress-controller-leader-nginx
  uid: 8f452e1e-f473-11e7-87eb-02a3e10fb2a2

apiVersion: v1
data:
  proxy-connect-timeout: "60"
  proxy-read-timeout: "600"
  proxy-send-timeout: "600"
kind: ConfigMap
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"nginx-ingress-controller-4212645903-3r5tx","leaseDurationSeconds":30,"acquireTime":"2018-01-25T12:06:46Z","renewTime":"2018-01-26T12:49:01Z","leaderTransitions":3}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"proxy-connect-timeout":"60","proxy-read-timeout":"600","proxy-send-timeout":"600"},"kind":"ConfigMap","metadata":{"annotations":{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"nginx-ingress-controller-4212645903-z436q\",\"leaseDurationSeconds\":30,\"acquireTime\":\"2018-01-11T19:47:52Z\",\"renewTime\":\"2018-01-23T19:30:51Z\",\"leaderTransitions\":0}"},"name":"ingress-controller-leader-nginx","namespace":"hml"}}
  creationTimestamp: 2018-01-08T12:58:02Z
  name: ingress-controller-leader-nginx
  namespace: hml
  resourceVersion: "6805159"
  selfLink: /api/v1/namespaces/hml/configmaps/ingress-controller-leader-nginx
  uid: 8f452e1e-f473-11e7-87eb-02a3e10fb2a2

apiVersion: v1
data:
  proxy-connect-timeout: "60"
  proxy-read-timeout: "600"
  proxy-send-timeout: "600"
kind: ConfigMap
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"nginx-ingress-controller-4212645903-3r5tx","leaseDurationSeconds":30,"acquireTime":"2018-01-25T12:06:46Z","renewTime":"2018-01-26T12:49:16Z","leaderTransitions":3}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"proxy-connect-timeout":"60","proxy-read-timeout":"600","proxy-send-timeout":"600"},"kind":"ConfigMap","metadata":{"annotations":{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"nginx-ingress-controller-4212645903-z436q\",\"leaseDurationSeconds\":30,\"acquireTime\":\"2018-01-11T19:47:52Z\",\"renewTime\":\"2018-01-23T19:30:51Z\",\"leaderTransitions\":0}"},"name":"ingress-controller-leader-nginx","namespace":"hml"}}
  creationTimestamp: 2018-01-08T12:58:02Z
  name: ingress-controller-leader-nginx
  namespace: hml
  resourceVersion: "6805213"
  selfLink: /api/v1/namespaces/hml/configmaps/ingress-controller-leader-nginx
  uid: 8f452e1e-f473-11e7-87eb-02a3e10fb2a2
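Comparing the three dumps above, the fields that differ between reads are metadata.resourceVersion and the renewTime inside the control-plane.alpha.kubernetes.io/leader annotation. A quick way to check this (a hypothetical sketch using the same kubectl command as above; the /tmp paths are just examples) is to diff two snapshots taken 15 seconds apart:

# Diff two snapshots of the ConfigMap taken 15 seconds apart.
kubectl get cm ingress-controller-leader-nginx --namespace hml -o yaml > /tmp/cm-before.yaml
sleep 15
kubectl get cm ingress-controller-leader-nginx --namespace hml -o yaml > /tmp/cm-after.yaml
diff /tmp/cm-before.yaml /tmp/cm-after.yaml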
Log of one of the pods, taken at approximately the same time:
I0126 12:48:23.399076 11 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"hml", Name:"ingress-controller-leader-nginx", UID:"8f452e1e-f473-11e7-87eb-02a3e10fb2a2", APIVersion:"v1", ResourceVersion:"6805010", FieldPath:""}): type: 'Normal' reason: 'UPDATE' ConfigMap hml/ingress-controller-leader-nginx
W0126 12:48:23.399912 11 controller.go:811] service hml/service-a does not have any active endpoints
W0126 12:48:23.400182 11 controller.go:811] service hml/service-b does not have any active endpoints
W0126 12:48:23.400258 11 controller.go:811] service hml/service-c does not have any active endpoints
W0126 12:48:23.400611 11 controller.go:811] service hml/service-d does not have any active endpoints
I0126 12:48:23.401450 11 controller.go:211] backend reload required
I0126 12:48:23.488872 11 controller.go:220] ingress backend successfully reloaded...
W0126 12:48:26.382631 11 controller.go:811] service hml/service-a does not have any active endpoints
W0126 12:48:26.382966 11 controller.go:811] service hml/service-b does not have any active endpoints
W0126 12:48:26.383069 11 controller.go:811] service hml/service-c does not have any active endpoints
W0126 12:48:26.383432 11 controller.go:811] service hml/service-d does not have any active endpoints
I0126 12:48:30.937211 11 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"hml", Name:"ingress-controller-leader-nginx", UID:"8f452e1e-f473-11e7-87eb-02a3e10fb2a2", APIVersion:"v1", ResourceVersion:"6805045", FieldPath:""}): type: 'Normal' reason: 'UPDATE' ConfigMap hml/ingress-controller-leader-nginx
W0126 12:48:30.938284 11 controller.go:811] service hml/service-a does not have any active endpoints
W0126 12:48:30.938556 11 controller.go:811] service hml/service-b does not have any active endpoints
W0126 12:48:30.938626 11 controller.go:811] service hml/service-c does not have any active endpoints
W0126 12:48:30.939007 11 controller.go:811] service hml/service-d does not have any active endpoints
I0126 12:48:30.941159 11 controller.go:211] backend reload required
I0126 12:48:31.031896 11 controller.go:220] ingress backend successfully reloaded...
I0126 12:48:38.463805 11 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"hml", Name:"ingress-controller-leader-nginx", UID:"8f452e1e-f473-11e7-87eb-02a3e10fb2a2", APIVersion:"v1", ResourceVersion:"6805067", FieldPath:""}): type: 'Normal' reason: 'UPDATE' ConfigMap hml/ingress-controller-leader-nginx
W0126 12:48:38.464723 11 controller.go:811] service hml/service-a does not have any active endpoints
W0126 12:48:38.464981 11 controller.go:811] service hml/service-b does not have any active endpoints
W0126 12:48:38.465053 11 controller.go:811] service hml/service-c does not have any active endpoints
W0126 12:48:38.465375 11 controller.go:811] service hml/service-d does not have any active endpoints
I0126 12:48:38.466204 11 controller.go:211] backend reload required
I0126 12:48:38.556001 11 controller.go:220] ingress backend successfully reloaded...
I0126 12:48:46.016916 11 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"hml", Name:"ingress-controller-leader-nginx", UID:"8f452e1e-f473-11e7-87eb-02a3e10fb2a2", APIVersion:"v1", ResourceVersion:"6805098", FieldPath:""}): type: 'Normal' reason: 'UPDATE' ConfigMap hml/ingress-controller-leader-nginx
W0126 12:48:46.018098 11 controller.go:811] service hml/service-a does not have any active endpoints
W0126 12:48:46.018395 11 controller.go:811] service hml/service-b does not have any active endpoints
W0126 12:48:46.018473 11 controller.go:811] service hml/service-c does not have any active endpoints
W0126 12:48:46.018854 11 controller.go:811] service hml/service-d does not have any active endpoints
I0126 12:48:46.019687 11 controller.go:211] backend reload required
I0126 12:48:46.106200 11 controller.go:220] ingress backend successfully reloaded...
I0126 12:48:53.533864 11 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"hml", Name:"ingress-controller-leader-nginx", UID:"8f452e1e-f473-11e7-87eb-02a3e10fb2a2", APIVersion:"v1", ResourceVersion:"6805126", FieldPath:""}): type: 'Normal' reason: 'UPDATE' ConfigMap hml/ingress-controller-leader-nginx
W0126 12:48:53.534748 11 controller.go:811] service hml/service-a does not have any active endpoints
W0126 12:48:53.535046 11 controller.go:811] service hml/service-b does not have any active endpoints
W0126 12:48:53.535113 11 controller.go:811] service hml/service-c does not have any active endpoints
W0126 12:48:53.535463 11 controller.go:811] service hml/service-d does not have any active endpoints
I0126 12:48:53.536316 11 controller.go:211] backend reload required
I0126 12:48:53.635153 11 controller.go:220] ingress backend successfully reloaded...
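To quantify the reload rate from the log, something like the following can be used (a hypothetical one-liner; the pod name in angle brackets is a placeholder for one of the controller pods):

# Count how many reloads the controller performed.
kubectl logs --namespace hml <nginx-ingress-controller-pod> | grep -c 'backend reload required'
# Count how many ConfigMap UPDATE events it logged.
kubectl logs --namespace hml <nginx-ingress-controller-pod> | grep -c "reason: 'UPDATE' ConfigMap"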
Deployment/service/RBAC roles/RBAC bindings of the deployment:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount-hml
  namespace: hml
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole-hml
rules:
resources:
verbs:
resources:
verbs:
resources:
verbs:
resources:
verbs:
resources:
verbs:
resources:
verbs:
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: hml
rules:
resources:
verbs:
resources:
resourceNames:
# Defaults to "-"
# Here: "-"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
verbs:
resources:
verbs:
resources:
verbs:
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: hml
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - name: nginx-ingress-serviceaccount-hml
    namespace: hml
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding-hml
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole-hml
subjects:
  - name: nginx-ingress-serviceaccount-hml
    namespace: hml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: hml
spec:
  type: NodePort
  ports:
    - port: 80
      name: http
      nodePort: 30745
  selector:
    k8s-app: nginx-ingress-lb
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
  namespace: hml
spec:
  replicas: 2
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (kubernetes/kubernetes#31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      # hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount-hml
      terminationGracePeriodSeconds: 60
      nodeSelector:
        slot: hml
      containers:
        - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
          name: nginx-ingress-controller
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 1
          ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          args:
            - /nginx-ingress-controller
            - --default-backend-service=hml/default-http-backend
            - --watch-namespace=$(POD_NAMESPACE)
            - --configmap=$(POD_NAMESPACE)/ingress-controller-leader-nginx
Closing. You cannot use the configmap used for leader election as configuration. Please use a different configmap name, like --configmap=$(POD_NAMESPACE)/ingress-controller-configuration.
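For reference, a minimal sketch of the suggested change, assuming the settings currently stored in the leader-election configmap are moved to a dedicated ConfigMap named ingress-controller-configuration (the name is only the example from the comment above). The controller then keeps ingress-controller-leader-nginx for leader election, so its lease renewals no longer trigger reloads:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-controller-configuration
  namespace: hml
data:
  proxy-connect-timeout: "60"
  proxy-read-timeout: "600"
  proxy-send-timeout: "600"

In the Deployment, the --configmap flag would then point at the new ConfigMap instead of the leader-election one:

args:
  - /nginx-ingress-controller
  - --default-backend-service=hml/default-http-backend
  - --watch-namespace=$(POD_NAMESPACE)
  - --configmap=$(POD_NAMESPACE)/ingress-controller-configuration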