ExternalIP allows access to node part 2 #1209
I think that the intention is that for a NodePort service it should be controlled by the `--nodeport-bindon-all-ip` option. If you have not enabled that option, then the service is only bound to the node's primary IP (InternalIP, or falling through to ExternalIP), in which case it doesn't matter whether iptables lets the traffic through or not, because there won't be anything listening to receive it.

I could see potentially changing `service_endpoints_sync` when the user hasn't enabled `--nodeport-bindon-all-ip`, and possibly also filtering on the destination port range defined by `--service-node-port-range`, but otherwise I think that the current logic is working as intended.
---
I'm not sure if we're talking about the same thing. Imagine a machine with 2 interfaces. In that setup the rules are no longer effective and allow access to any service listening on 0.0.0.0 within the default network namespace. Is this expected behavior?
---

Maybe we aren't talking about the same thing... Are you talking about potential exposure of services that are not containerized, or are containerized but running in the host network namespace? Because kube-router doesn't comment on host-based services, only Kubernetes-based containers and services running within the container's network namespace. The rule that you mention doesn't result in an ACCEPT; rather, the traffic is passed along to future chains, and it is left up to the administrator to filter those services appropriately in subsequent chains.
---

I want access only to exposed ports: e.g. if 10.10.10.111 tcp/80 is exposed through a Service, then I don't want to let in traffic to 10.10.10.111 tcp/22, which is the case now. The rule I pointed to was made to resolve this exact issue. However, that works only with an ECMP setup, because in that case all the service IPs, including ExternalIPs, are assigned to the local dummy interface.
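To illustrate the ECMP case (a sketch: the address reuses the example above, and `kube-dummy-if` is kube-router's dummy interface name):

```shell
# In an ECMP/IPVS setup, kube-router assigns each service IP, including
# ExternalIPs, to its local dummy interface; kube-router itself performs
# the equivalent of (illustrative address, run by kube-router, not the admin):
ip addr add 10.10.10.111/32 dev kube-dummy-if

# With a floating address (ARP mode), the IP instead lives on a regular
# interface that kube-router doesn't manage, so this assignment never happens.
```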
---

I think that you're missing my point though. There is nothing in kube-router that would specifically comment on accepting traffic on port 22. The only thing that the rule you're talking about is doing is not rejecting the traffic. This is by design: in this way it allows the user to comment on what they want to do with local services in future chains. If you want to block traffic on port 22 because you have SSH set up there, all you would need to do is something like:
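For example (a sketch: the address reuses the example above, and filtering in the INPUT chain after kube-router's chains is an assumption about the setup):

```shell
# kube-router has merely "not rejected" this traffic, so a later,
# user-managed rule can still filter it. Reject inbound SSH to the
# ExternalIP while leaving the exposed tcp/80 service alone:
iptables -A INPUT -d 10.10.10.111 -p tcp --dport 22 -j REJECT
```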
---
I see what you mean, but still, how's the case I described above any different from #282?
---

That's a very old issue. Since that issue, and specifically in the last year, we've made significant changes to kube-router's network policy handling. Previously, we would ACCEPT all traffic once we matched it. In cases like yours, where we matched it incorrectly, rather than just "not rejecting" the traffic we would actually accept it. This meant that in order to allow users to secure their nodes we had to specifically account for non-Kubernetes traffic, because we were causing the problem by making judgements about it. This added a lot of complexity to kube-router, and eventually it became impossible to satisfy all use-cases.

Over the last year, and at the request of several users, we have changed our posture. We have refactored much of our network policy implementation to only comment on traffic that kube-router should control, leaving other host-based traffic types to fall through to future, user-defined iptables chains, so that system administrators have a choice in how they deal with non-Kubernetes traffic that hits their nodes.
---

Define "non-Kubernetes" traffic. Are packets destined to an ExternalIP non-Kubernetes traffic? Then why are packets coming to unexposed ports being rejected by the rule, instead of flowing further down the chains as you said? My point is that kube-router currently treats the same traffic differently, relying only upon a network interface name.
---

ExternalIP services are considered Kubernetes traffic.
---

So, back to the question: why, if an interface name doesn't contain 'Kube', 'Docker' or 'Dummy' in it, is unexpected traffic to ExternalIPs not rejected, while if it does, unexpected traffic to ExternalIPs is rejected? Can we make this part flexible for users rather than hardcoded?
---

Because those are the interfaces that kube-router manages, so it makes sense for us to only consider those. If users add additional interfaces that kube-router doesn't know about, that isn't the domain of kube-router. I don't see us adding a command-line option, or exposing an annotation or some such, to allow users to configure interfaces, as that brings too much complexity into kube-router to manage.

We cannot just start rejecting traffic from interfaces that we don't know about, because that would include potentially valid flows to the node's IP address. kube-router doesn't manage or create ARP-based floating address interfaces, so it shouldn't attempt to manage them. If the user creates them, then the user should expect to manage them via the method I described above.
---

Follow-up to #618.
This solution is designed exclusively for an ECMP setup and doesn't work for floating addresses (aka ARP mode), because it relies on the `kube-router-local-ips` set, which is filled with "local" IP addresses by iterating over all interfaces available on the host and retrieving their IP addresses:

> kube-router/pkg/controllers/proxy/network_services_controller.go, lines 2063 to 2090 in 2ca39f1

so that rule has no effect:

> kube-router/pkg/controllers/proxy/network_services_controller.go, lines 633 to 637 in 2ca39f1
What if we add an extra argument that accepts a list of tracked interfaces, so we iterate only over those interfaces?
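A sketch of what that could look like (the flag name `--tracked-interfaces` and the variable below are hypothetical, not existing kube-router options):

```shell
#!/bin/sh
# Emulate the proposed behavior: collect "local" IPv4 addresses only from
# an explicit, user-supplied list of interfaces instead of every interface
# on the host. "lo" is used here so the sketch runs anywhere.
TRACKED_INTERFACES="lo"   # e.g. --tracked-interfaces=eth0,kube-dummy-if

for iface in $(echo "$TRACKED_INTERFACES" | tr ',' ' '); do
  # "ip -o -4 addr show dev X" prints one line per address; field 4 is
  # the CIDR (e.g. 127.0.0.1/8), so strip the prefix length.
  ip -o -4 addr show dev "$iface" | awk '{ sub(/\/.*/, "", $4); print $4 }'
done
```

Only addresses on the listed interfaces would then land in the `kube-router-local-ips` set, leaving floating addresses on other interfaces out of it.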