This repository has been archived by the owner on Sep 8, 2022. It is now read-only.

Ingress may not work reliably on clusters without external LoadBalancer support (such as Kind clusters) #3

Open · lizrice opened this issue Dec 13, 2021 · 3 comments

Comments

lizrice (Member) commented Dec 13, 2021

Cilium creates a service of type LoadBalancer for ingress support. In a managed Kubernetes cluster, this will typically result in an external load balancer being created that assigns an external IP address for that service. On test clusters created with Kind or Minikube this doesn't automatically happen, so the Ingress will not be assigned an address.
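As an illustration (the service name, IPs, and port below are made up for this sketch), the symptom on a Kind cluster is a LoadBalancer service that never leaves the pending state:

```
# The Service Cilium creates for the Ingress stays pending on Kind,
# because nothing provisions an external load balancer for it.
$ kubectl get svc -n kube-system
NAME                          TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
cilium-ingress-basic-ingress  LoadBalancer   10.96.193.7   <pending>     80:31988/TCP
```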

It may be possible to use MetalLB or similar to provide the load balancer in Kind, but this has not been tested.
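For anyone who wants to try it, here is an untested sketch using MetalLB's ConfigMap-based configuration; the version pin and the address range are assumptions, and the range must be carved out of the subnet of Kind's Docker network:

```
# Find the subnet Kind's Docker network uses (often 172.18.0.0/16)
# and pick an unused range from it for the address pool below.
docker network inspect kind

# Install MetalLB (version pinned here purely for illustration).
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml

# Layer 2 address pool taken from the Kind network's subnet.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.255.200-172.18.255.250
EOF
```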

There is a further complication: the load balancer would need to be on a different node from the backend services being load balanced. In a multi-node Kind cluster, it is hard for socket-level eBPF programs to reliably detect the host network namespace for a node, because multiple nodes share the same kernel (unlike "production" Kubernetes, where there is one node per VM). As a result, NodePort traffic originating in a Kind node's own network namespace will not reach its destination correctly; it only arrives if the destination is on another node.
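A hypothetical way to observe this from the host (node names and the port are placeholders, and this assumes curl is available inside the kindest/node image):

```
# From inside a Kind node, traffic to a NodePort whose backend lives
# on that same node may never arrive:
docker exec kind-worker curl --max-time 5 http://localhost:31988/

# Traffic to a backend on a *different* node reaches its destination
# (172.18.0.3 stands in for another node's address):
docker exec kind-worker curl --max-time 5 http://172.18.0.3:31988/
```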

jrajahalme (Member) commented Dec 13, 2021

> On test clusters created with Kind or Minikube this doesn't automatically happen, so the Ingress will not be assigned an address.

In this case it is possible to observe the assigned NodePort in kubectl get svc -A and use it with any of the Kind node addresses to reach the Ingress service. Not ideal, but it works for testing when an external load balancer is not available.
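A sketch of that workaround (the namespace, service name, port mapping, and node address are placeholders):

```
# Find the NodePort assigned alongside the LoadBalancer service
# (an 80:31988/TCP mapping, say).
kubectl get svc -A

# Find the internal address of any Kind node.
kubectl get nodes -o wide

# Reach the Ingress through <node-address>:<node-port>.
curl http://172.18.0.3:31988/
```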

> As a result, NodePort traffic originating in a Kind node's own network namespace will not reach its destination correctly; it only arrives if the destination is on another node.

There is a workaround for this documented in the Cilium Kind Getting Started Guide:

> To fully enable Cilium’s kube-proxy replacement (Kubernetes Without kube-proxy), cgroup v1 controllers net_cls and net_prio have to be disabled, or cgroup v1 has to be disabled (e.g. by setting the kernel cgroup_no_v1="all" parameter).

Note that we have so far only tested the latter part of this workaround: setting the kernel boot parameter cgroup_no_v1=all allows Cilium Ingress to work properly regardless of which node the traffic lands on.
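For reference, a sketch of applying and verifying that boot parameter on a GRUB-based Linux host (paths and commands differ across distributions):

```
# Append cgroup_no_v1=all to the kernel command line via GRUB
# (adjust for your distribution; some use grub2-mkconfig instead).
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&cgroup_no_v1=all /' /etc/default/grub
sudo update-grub
sudo reboot

# After the reboot, confirm the parameter took effect...
grep -o 'cgroup_no_v1=all' /proc/cmdline

# ...and that /sys/fs/cgroup is a pure cgroup v2 mount.
mount | grep cgroup
```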

maorkuriel commented

On test clusters created with Minikube, run minikube tunnel.
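That is, a minimal sketch:

```
# Terminal 1: create routes to LoadBalancer services (stays in the
# foreground and may prompt for sudo).
minikube tunnel

# Terminal 2: the Ingress service should now receive an external IP.
kubectl get svc -A
```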

SohumB commented May 5, 2022

May I request that the service type Cilium provisions be made configurable? In clusters that use outbound tunnels for ingress traffic (cloudflared, for instance), there is no LoadBalancer Kubernetes resource at all.
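Hypothetically, that could look like a Helm value along these lines (the ingressController.service.type key is illustrative, not a confirmed chart option at the time of this issue):

```
# Hypothetical sketch: choose the Service type for the Ingress at
# install time instead of always getting a LoadBalancer.
helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
  --set ingressController.service.type=NodePort   # illustrative key
```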
