Cilium creates a service of type LoadBalancer for ingress support. In a managed Kubernetes cluster, this will typically result in an external load balancer being created that assigns an external IP address for that service. On test clusters created with Kind or Minikube this doesn't automatically happen, so the Ingress will not be assigned an address.
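For reference, a quick way to confirm the symptom is to look at the LoadBalancer service Cilium creates and at the Ingress itself; on Kind the external address simply never gets assigned (the Ingress name below is just an example):

```sh
# The Cilium-managed Ingress service is of type LoadBalancer; on Kind its
# EXTERNAL-IP column stays <pending> because nothing hands out an address
kubectl get svc -A | grep LoadBalancer

# The Ingress resource itself (example name) will show an empty ADDRESS column
kubectl get ingress my-ingress
```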
It may be possible to use MetalLB or similar to provide the load balancer in Kind, but this has not been tested.
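For anyone who wants to try it, an (untested here) MetalLB layer-2 setup on Kind would look roughly like the sketch below. The MetalLB version and the address range are assumptions: the range has to come from the Docker network Kind uses, which you can find with `docker network inspect kind`.

```sh
# Install MetalLB (pin whichever release you actually want)
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml

# Give MetalLB a pool of addresses taken from the Kind docker network (example range)
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-pool
  namespace: metallb-system
spec:
  addresses:
    - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - kind-pool
EOF
```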
There is a further complication: the load balancer would need to be on a different node to the backend services being load balanced. This is because, in a multi-node Kind cluster, it's hard for socket-level eBPF programs to reliably detect the host network namespace for the node (because there are multiple nodes sharing the same kernel, unlike the situation in "production" Kubernetes where there is one node per VM). As a result, NodePort traffic originating in a Kind node's own network namespace will not reach its destination correctly. Traffic will reach its destination if that destination is on another node.
> On test clusters created with Kind or Minikube this doesn't automatically happen, so the Ingress will not be assigned an address.
In this case it is possible to look up the assigned NodePort with `kubectl get svc -A` and combine it with any of the Kind node addresses to reach the Ingress service. It is not ideal, but it works for testing when an external load balancer is not available.
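As a rough illustration (the node container name, addresses and port are examples; on Kind the node address can be read from Docker):

```sh
# Find the NodePort that backs the LoadBalancer service, e.g. 80:31234/TCP
kubectl get svc -A | grep LoadBalancer

# Get the address of one of the Kind nodes (container name is an example)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kind-worker

# Reach the Ingress via <node-ip>:<node-port>; per the caveat above, pick a node
# other than the one running the backend pods
curl http://172.18.0.3:31234/
```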
> As a result, NodePort traffic originating in a Kind node's own network namespace will not reach its destination correctly. Traffic will reach its destination if that destination is on another node.

> To fully enable Cilium’s kube-proxy replacement (Kubernetes Without kube-proxy), cgroup v1 controllers net_cls and net_prio have to be disabled, or cgroup v1 has to be disabled (e.g. by setting the kernel cgroup_no_v1="all" parameter).
Note that we have so far only tested the latter part of this workaround: setting the kernel boot parameter cgroup_no_v1=all allows Cilium Ingress to work properly regardless of which node the traffic lands on.
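A rough sketch of checking and applying that workaround on the host running Kind (the GRUB commands are assumptions about a typical GRUB/systemd distro; adjust for your setup):

```sh
# Check whether the cgroup v1 net_cls / net_prio controllers are active on the host
grep -E 'net_cls|net_prio' /proc/cgroups

# See what parameters the kernel was booted with
cat /proc/cmdline

# Disable all cgroup v1 controllers on the next boot: add cgroup_no_v1=all to
# GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the config and reboot
sudo update-grub   # or: sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot
```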
May I request that the service type Cilium provisions be configurable? In clusters that ingress traffic through an outbound tunnel (cloudflared, for instance), there is no LoadBalancer Kubernetes resource at all.