Layer 3 / load balancer / Service route #3351
Replies: 2 comments 1 reply
-
Thank you for posting about this. I would like us to spend some cycles iterating on the "what", "why" and "who", and not worry too much about the "how" for now: I don't necessarily want to be concerned yet with implementation details like ….

Long term, what I personally really want to see is Kubernetes core providing focused and composable network APIs that can be used in conjunction (for instance, ultimately to re-implement …).

This topic feels quite large in scope. It would be very reasonable to put it on the agenda for live discussion in multiple upcoming community syncs. Additionally, we could consider organizing a separate meeting focused just on this topic.
-
Hey, MetalLB maintainer here. I am still wrapping my head around the various concepts here, so what I am going to write might not be 100% accurate (and my ramblings on TCPRoute / UDPRoute happened a while back too).

I am having a hard time understanding what the mapping between a potential L3 route and a Service would look like, because a Service expresses more than a binding between an (internal, external) IP and a set of pods: it also involves ports. What is being suggested here is that there should be a way to say "all the traffic directed to this VIP should go to this Service (and thus, to its endpoints)", which doesn't really map to what a Service is.

What I think it would be useful for is to act as the implementation of the last mile I described in metallb/metallb#847 (comment), where the user's ultimate goal is still to have some port mapping, and another infra component (possibly the CNI / kube-proxy) does the translation between the VIP and the corresponding Service's VIP (note that today you can have multiple Services with the same VIP but different ports, mapped to different Kubernetes Services).

All in all, if the backend is a Kubernetes Service, I find it difficult to think about something that does not involve a port (but if I missed something, please tell me).
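To make the "same VIP, different ports" case concrete, here is a minimal sketch of two Services sharing one external IP, split by port, using MetalLB's IP-sharing annotation. The IP, names, labels, and annotation value are placeholders; see the MetalLB docs for the caveats around IP sharing.

```yaml
# Two Services sharing one external VIP, split by port (MetalLB IP sharing).
# 192.0.2.10, the names, selectors, and "shared-vip" are placeholder values.
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    metallb.universe.tf/allow-shared-ip: "shared-vip"
spec:
  type: LoadBalancer
  loadBalancerIP: 192.0.2.10
  selector:
    app: web
  ports:
    - name: https
      port: 443
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: dns
  annotations:
    metallb.universe.tf/allow-shared-ip: "shared-vip"
spec:
  type: LoadBalancer
  loadBalancerIP: 192.0.2.10
  selector:
    app: dns
  ports:
    - name: dns
      port: 53
      protocol: UDP
```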
-
I've kept the name intentionally vague, as my knowledge in this area is limited.
While I have experience as a consumer of various load balancers, my understanding of their internal workings is less extensive and relies on insights from maintainers, such as those of MetalLB.
In this discussion, I use "load balancers" as a general term. I understand that some organizations may have specific definitions, such as Google's use of "layer 7 load balancer".
Here, I refer to tools operating primarily at Layer 3 and implicitly supporting Layer 4 protocols beyond TCP/UDP, though this discussion does not focus on those protocols.
I believe this effectively operates at the same network layer as a Kubernetes Service does.
Problem statement
The current TCP/UDP/HTTP/gRPC routes are effective for reverse proxying but seem insufficient for load balancer implementations. As the Gateway API currently exists, load-balancer-based implementations need to handle more of the networking stack than before, which hampers adoption.
To illustrate the adoption problem, see the discussion in the MetalLB project.
Not the problem statement
This issue only arises when Layer 7 capabilities are not tightly integrated with the load balancer.
For instance, major cloud providers offer Layer 7 load balancer implementations.
These implementations support a reverse proxy behind the load balancer but require Layer 7 communication with it.
If a load balancer controller chooses to handle Layer 7, this is not an issue. Similarly, they might choose to implement TCPRoute or UDPRoute; again, that is not a problem this proposal aims to address, as it is already possible today.
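For reference, a minimal sketch of what an existing (experimental channel) TCPRoute looks like today; "my-gateway", its "tcp" listener section, and "my-service" are placeholder names. Note that even this lowest-level existing route is still expressed in terms of a listener and a backend port.

```yaml
# Minimal sketch of a Gateway API experimental-channel TCPRoute.
# "my-gateway", the "tcp" listener section, and "my-service" are placeholders.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: example-tcp-route
spec:
  parentRefs:
    - name: my-gateway
      sectionName: tcp
  rules:
    - backendRefs:
        - name: my-service
          port: 8080
```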
Intent
I propose finding a mechanism (likely a new type of route) that offloads responsibilities to the reverse proxy behind the load balancer, akin to traditional load balancer controller implementations.
The goal is to be able to express the capability: "provision a cluster-external endpoint and ensure every packet is forwarded to a given cluster-internal IP", which would presumably be the Kubernetes Service of a reverse proxy.
As has been suggested in Slack, this seems to point to a ServiceRoute being introduced.
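To be explicit that this is only a sketch: no such kind exists in the Gateway API today, and the apiVersion, kind, and every field below are assumptions, meant only to illustrate the "forward every packet arriving at the provisioned VIP to this Service" intent.

```yaml
# Purely hypothetical: this resource does not exist in the Gateway API today.
apiVersion: gateway.networking.k8s.io/v1alpha1
kind: ServiceRoute
metadata:
  name: forward-all-traffic
spec:
  parentRefs:
    - name: my-load-balancer      # hypothetical Gateway that provisions the external VIP
  rules:
    - backendRefs:
        - kind: Service
          name: my-reverse-proxy  # every packet is forwarded to this Service's cluster IP
```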
This approach aims to:
Hypothetical examples of implementations using this proposed pattern:
I am working on this as a personal project, which highlighted this broader issue.
My current workaround involves building an HTTPRoute (a rough sketch is included at the end of this post), but I prefer offloading Layer 4+ responsibilities to a reverse proxy.
This proposed route would enable provisioning network load balancers that handle Layer 4 traffic, and would allow other cloud providers to offer similar functionality.
These examples illustrate:
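As a rough illustration of the HTTPRoute workaround mentioned above, it might look something like the following, where "my-gateway" and "my-reverse-proxy" are placeholder names. The drawback is that the load balancer controller must understand HTTP, even though the reverse proxy behind it would handle Layer 7 anyway.

```yaml
# Rough sketch of the current HTTPRoute workaround; names are placeholders.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: proxy-passthrough
spec:
  parentRefs:
    - name: my-gateway
  rules:
    - backendRefs:
        - name: my-reverse-proxy
          port: 80
```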