ClusterIP services not accessible when using flannel CNI from host machines in Kubernetes #1243
Comments
Exactly the same as my experience. My setup is Kubernetes 1.17.2 + Flannel.
Our workaround is to manually add the route to DNS through a DaemonSet, as soon as there is at least one pod running on all workers (so that the cni0 bridge exists on every node).
Related issue on kubernetes/kubernetes: […]
@nonsense do you have an example?
Using @mikebryant's workaround did the trick for me for now: […]
Just changed to host-gw. Here is my report of the response time of a minio service (in seconds) before and after the change: […] The checks ran on the nodes themselves.
Yes, here it is: https://github.com/ipfs/testground/blob/master/infra/k8s/sidecar.yaml#L23 Note that this won't work unless you have one pod on every host (i.e. another DaemonSet), so that cni0 exists on every node. In our case the first pod we expect on every host is […]
@nonsense I fixed it by changing the backend of flannel to host-gw instead of vxlan. Maybe this works for you as well.
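For reference, a sketch of how that change can be applied with the stock flannel manifest; the ConfigMap and DaemonSet names (kube-flannel-cfg, kube-flannel-ds) and the kube-system namespace are assumptions based on the upstream kube-flannel.yml and may differ in your deployment:

```sh
# Edit net-conf.json in the flannel ConfigMap:
# change "Type": "vxlan" to "Type": "host-gw".
kubectl -n kube-system edit configmap kube-flannel-cfg

# Restart the flannel pods so they re-read the backend config.
kubectl -n kube-system rollout restart daemonset kube-flannel-ds
```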
If you have issues with all network traffic, and not just with reaching services from pods with hostNetwork: true, then you have some other issue.
Same problem for me. My workaround: ip r add 10.96.0.0/16 dev cni0
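For anyone trying this, a quick way to verify the route took effect; the service CIDR 10.96.0.0/16 and the cluster DNS IP 10.96.0.10 are kubeadm defaults and are assumptions here:

```sh
# Assumes kubeadm defaults: service CIDR 10.96.0.0/16, cluster DNS at 10.96.0.10.
ip route add 10.96.0.0/16 dev cni0

# Verify: the kernel should now route service VIPs via the cni0 bridge...
ip route get 10.96.0.10

# ...and cluster DNS should answer from the host network.
dig +short kubernetes.default.svc.cluster.local @10.96.0.10
```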
The host-gw option is only possible on infrastructures that support layer 2 interaction, since it programs direct routes to node IPs; all nodes must sit on the same L2 segment. A quick sanity check is sketched below.
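A minimal way to check L2 adjacency between two nodes (10.0.0.2 is a placeholder for a peer node's IP):

```sh
# If the output contains no "via <gateway>", the peer is on-link,
# i.e. on the same L2 segment, and host-gw can work.
ip route get 10.0.0.2
```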
Hi. It turns out that host-gw fixed my problem as well: #1268. To me this is a critical bug somewhere in the vxlan-based pipeline.
I had similar issues after upgrading our cluster from […]. I also can't reproduce this issue on our dev cluster after replacing […]. Could this issue be caused by changes in […]?
Just curious, how many folks running into this issue are using hyperkube?
I tried reverting from 1.17.3 to 1.16.8, but I was still experiencing the same problem.
I tried this on a node and in a pod with hostNetwork: true (pod network 10.244.2.0/24). Without the route, tcpdump does not show the packet on the other side of the vxlan tunnel. With the route, the source IP is changed to the address from cni0, not the flannel.1 interface, and access to the service IP range works fine. I then tried removing the iptables rule created by kube-proxy, and I got an answer from CoreDNS. It also works with […].
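A minimal sketch of that debugging session, assuming flannel's default interface name (flannel.1) and the kubeadm default cluster DNS IP (10.96.0.10):

```sh
# Watch whether DNS queries actually traverse the vxlan interface.
tcpdump -ni flannel.1 udp port 53

# Check which source address and device the kernel picks for a service VIP.
ip route get 10.96.0.10

# Inspect the kube-proxy NAT rules that rewrite service traffic.
iptables -t nat -L KUBE-SERVICES -n
```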
Sorry for being late to the party... I just installed a clean v1.17 cluster; there are no duplicate iptables rules in there. So it seems like they only occur after upgrading. Anyway, the issue persists. I'll continue investigating...
Just a side note: the issue doesn't happen on the node where the pods balanced by the service are deployed.
@nonsense could you please provide another example manifest for this? The link above ends in a 404.
@malikbenkirane change ipfs/testground to testground/infra (the repo moved): https://github.com/testground/infra/blob/master/k8s/sidecar.yaml
Thanks, I like the idea. Though I've found that using Calico rather than Flannel works for me. I just had to set […].
I had the same issue on an HA cluster provisioned by kubeadm with RHEL7 nodes. Both of the options (turning off […]) worked for me. This did not affect a RHEL8 cluster provisioned by kubeadm (that one was also not an HA cluster).
I guess this can be closed since the related issues have been fixed in Kubernetes.
@Gacko could you link the issue/PR for that, please?
@rafzei thanks 👍
+1 |
I've bumped into the same issue with an RKE network config:

```diff
 plugin: canal
-options: {}
+options:
+  # workaround to get hostnetworked pods' DNS resolution working on nodes
+  # that don't have a CoreDNS replica running
+  # do the `rke up`, then reboot all nodes to apply
+  # @see: https://github.com/coreos/flannel/issues/1243#issuecomment-589542796
+  # @see: https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/
+  canal_flannel_backend_type: host-gw
 mtu: 0
 node_selector: {}
 update_strategy: null
```
I am also getting an intermittent issue while running StatefulSets in Kubernetes with hostNetwork. I got it resolved by following the steps below: […] Also, you can temporarily fix this by running your DNS pod on the same node on which your application pod is running.
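A hypothetical way to do that temporarily with kubectl; the node name worker-1 is an assumption, and the selector should be removed again afterwards:

```sh
# Pin a CoreDNS replica next to the app pod.
# "worker-1" is an assumed node name; undo this once you are done.
kubectl -n kube-system patch deployment coredns --type merge -p '
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-1
'
```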
I just upgraded our bare-metal Kubernetes cluster (v1.23.13, running on physical servers) from flannel v0.17.0 to v0.20.1, and we are having this issue. My pods with […]
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I am trying to access a Kubernetes service through its ClusterIP, from a pod that is attached to its host's network and has access to DNS, with: […] However, the host machine has no IP routes set up for the service CIDR.
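For example, on an affected host (a sketch; the service VIP 100.64.0.10 is an assumed address inside the service CIDR from the report below):

```sh
# Pod-network routes are present on the host...
ip route | grep -E 'cni0|flannel'

# ...but a service VIP (100.64.0.10 here) falls through to the default route.
ip route get 100.64.0.10
```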
Expected Behavior
I expect that I should be able to reach services running on Kubernetes from the host machines, but I can only access headless services (those that return a pod IP). The pod CIDR has ip routes set up, but the services CIDR doesn't.
Current Behavior
Can't access services through their ClusterIPs from the host network.
Possible Solution
If I manually add an ip route to 100.64.0.0/16 via 100.96.1.1, ClusterIPs are accessible. But this route is not there by default.
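As a command, the workaround described above (addresses as given in the report) would be:

```sh
# Add the missing route for the service CIDR via the flannel gateway.
ip route add 100.64.0.0/16 via 100.96.1.1
```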
Your Environment