Issues accessing pods from host network with several interfaces #337
Comments
Can you list the iptables rules on your host?
Can you put that in a gist instead?
Yup. Done.
Have you set --cluster-cidr on kube-router and/or kube-proxy?
Hmm, I haven't. The kube-proxy config has it defined right:
Should I do this as well?
Where should it be defined for kube-router? This is my current config: https://github.com/dimm0/prp_k8s_config/blob/master/kubeadm-kuberouter.yaml
Try adding --cluster-cidr here: https://github.com/dimm0/prp_k8s_config/blob/master/kubeadm-kuberouter.yaml#L48. I'm not 100% sure it will fix this issue, but I had a similar issue a while back and that fixed it for me.
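A sketch of what that change could look like in the kube-router container args of the DaemonSet. The CIDR value here is only an example and is not from the thread; it has to match the cluster's pod CIDR (the same value kube-proxy gets for --cluster-cidr):

```shell
# Hypothetical sketch of the kube-router invocation with --cluster-cidr added.
# 10.244.0.0/16 is an example value; use your cluster's actual pod CIDR.
kube-router \
  --run-router=true \
  --run-firewall=true \
  --run-service-proxy=true \
  --cluster-cidr=10.244.0.0/16
```

If kube-proxy and kube-router disagree on the cluster CIDR, masquerade and routing decisions for pod traffic can diverge, which is why keeping the two in sync matters.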
Hmm... How about this? #137
Didn't help. I changed the ds definition and deleted the kube-router pod on the broken node.
I'm having a similar issue:
- tcpdump
- iptables -L -t nat
- kubectl get svc -o wide
- kubectl get pods -o wide
- kubectl get ep -o wide
- kube-router version and other details
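For anyone gathering the same details, the list above maps to commands along these lines. The interface name, namespace, and DaemonSet name are assumptions based on a typical kube-router install, not taken from this thread:

```shell
# Capture traffic on the suspect interface while reproducing the failure
# (enp9s0 is an example interface name):
tcpdump -ni enp9s0

# Dump the NAT rules kube-router/kube-proxy installed:
iptables -L -t nat -n -v

# Cluster state: services, pods, and endpoints with node/IP columns:
kubectl get svc -o wide
kubectl get pods -o wide
kubectl get ep -o wide

# kube-router version, read from the DaemonSet image tag
# (assumes it runs as ds/kube-router in kube-system):
kubectl -n kube-system get ds kube-router \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```

These all require access to the affected node and cluster, so they are only useful run in place.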
Switched to Calico, no troubles since then.
@dimm0 The reason I am sticking with kube-router is mainly the performance, but it seems like none of the kube-router experts are looking at this ticket :(
Sorry, this fell off my radar, will try to find time to dig into it in the next few weeks. If anyone has more information that will help debug please let me know. |
@dimm0 sorry things did not work out with kube-router for you. You guys were among the very first large-scale users of kube-router, back when the project was still in its infancy, and a lot of valuable feedback came from you. With multiple interfaces, things get a little bizarre with Kubernetes; there is just a one-line prerequisite: https://kubernetes.io/docs/tasks/tools/install-kubeadm/#check-network-adapters Basically, when using Kubernetes with multiple interfaces, the data path has to pick the right source IP and interface when sending packets. If you look at the tcpdump shared by @dimm0,
you will see the source IP address used is from the wrong interface. There is a fix checked in 359ab1d which should have fixed this issue. I ran into some issues recently reported by @ieugen, so I guess there is still some missing piece. I will revisit this issue and see what could be going wrong.
Hi, I'm having this issue and found a workaround to get host-to-service networking working: I have to replace one of the rules kube-router creates in iptables' nat table with a modified version. My guess is that the MASQUERADE target is somewhat confused by all this setup.
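The exact before/after rules were elided from this capture, so the following is only a hedged reconstruction of that kind of swap, with example addresses. The idea is to replace the generic MASQUERADE (which lets the kernel pick a source address, possibly from the wrong interface) with an explicit SNAT pinned to the intended node IP:

```shell
# Illustrative only; the thread's actual rules were not preserved.
# kube-router installs a POSTROUTING masquerade rule for pod traffic,
# conceptually similar to (10.244.0.0/16 is an example pod CIDR):
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE

# The workaround replaces MASQUERADE with SNAT and a fixed source address,
# so the kernel can no longer choose a source IP from the wrong interface
# (192.168.1.10 is an example node IP on the intended interface):
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 \
  -j SNAT --to-source 192.168.1.10
```

MASQUERADE resolves the source address dynamically from the egress interface at packet time, while SNAT uses the address you give it, which is why pinning it can paper over a wrong-interface routing decision.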
@FabienZouaoui which version of kube-router are you running? We had a recent PR (#668) that is part of the v1.0.0-rc releases.
@murali-reddy sorry to have missed that PR. In the meantime, I'm thinking that this rewrite rule creates useless overhead in this specific case (node-to-pod communication); the routing rule already does a good job of setting the right IP address.
Closing as resolved in v1.0.0-rc.
I have a node (several actually, all having same behavior) with 2 interfaces up in different subnets:
enp9s0 is the default one:
The tunnels are bound to the right interface:
The tunnels are working fine pod-to-pod. But when I try to access a pod on another host from the physical host (or from a pod bound to the host network), the packets go out the wrong interface and never return:
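One quick way to confirm which interface and source IP the kernel selects for a given destination is `ip route get`. The pod IP below is an example; substitute a real one from `kubectl get pods -o wide`:

```shell
# Ask the kernel which route it would use for a remote pod IP
# (10.244.2.5 is an example); the output names the egress device
# and the source address that would be stamped on the packet:
ip route get 10.244.2.5

# Compare against the routing table and the addresses on each interface
# (enp9s0 is the default interface mentioned above):
ip route show
ip addr show enp9s0
```

If `ip route get` reports the second interface (or its address as `src`) for pod destinations, that matches the wrong-interface behavior described here.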
Please help!