Antrea IPAM picks overlapping pod IP addresses #119
Comments
Thanks for reporting this!
I see that GKE currently supports the GKE native CNI and Calico, so the existing Pods should get their IPs from one of them. Even if the IPs don't conflict, I don't think it's guaranteed that Pods whose networks are created by different CNIs can communicate, as Pods are connected via different approaches (Linux bridge, routing mode, Open vSwitch bridge), and the gateway interfaces created by each CNI in the default namespace might have IP conflicts too. If a GKE cluster can start in a non-CNI-enabled state like …
Ah, interesting point about the gateway interfaces. Yes, that would also be unfortunate - we were just lucky that the default GKE CNI doesn't create any gateway devices. But you're right - that's a confusing situation, and I can't imagine any production clusters wanting to be in that mode. One easy way to fix this is to just have all the Pods restart after the Antrea CNI is installed. Then, when they all get recreated, they can join the Antrea overlay. Thanks!
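A minimal sketch of that restart approach for the kube-dns Pods from the report below. The `k8s-app=kube-dns` label selector is an assumption (it is the usual GKE label, but verify on your cluster); deleting the Pods makes their Deployment recreate them, and the replacements get wired up (and given IPs) by Antrea:

```
# Assumption: kube-dns Pods carry the k8s-app=kube-dns label (GKE default).
# Deleting them forces the Deployment to recreate them behind the Antrea CNI.
kubectl delete pods -n kube-system -l k8s-app=kube-dns
```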
Should we take the action item to add some documentation to the getting started guide about deploying Antrea in a cluster which already uses a different CNI plugin? We could suggest the following steps: 1) delete the existing CNI, 2) apply Antrea's yaml, 3) drain / uncordon each node one by one (see the sketch below).
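A rough sketch of steps 2 and 3 (step 1 is CNI-specific and not shown; the drain flags match kubectl of this era, and looping over every node is illustrative):

```
# 2) Apply Antrea's manifest.
kubectl apply -f https://raw.githubusercontent.com/vmware-tanzu/antrea/master/build/yamls/antrea.yml

# 3) Drain and uncordon each node one by one so every Pod is recreated
#    behind the Antrea CNI.
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  kubectl drain "$node" --ignore-daemonsets --delete-local-data
  kubectl uncordon "$node"
done
```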
Similarly, we may want to make sure that Antrea cleans up after itself properly (deleting the gw interface, etc.) when it is deleted from a cluster :/
Makes sense to me. We might add a cleanup DaemonSet to do the cleanup.
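A rough sketch of what such a cleanup could run on each node. The interface, bridge, and config-file names below are assumptions based on Antrea's defaults, not confirmed by this thread; verify them against your deployment:

```
# Assumed names: antrea-gw0 (host gateway interface), br-int (OVS bridge),
# 10-antrea.conflist (CNI config file).
ip link delete antrea-gw0                  # remove the host gateway interface
ovs-vsctl --if-exists del-br br-int        # remove the OVS integration bridge
rm -f /etc/cni/net.d/10-antrea.conflist    # remove Antrea's CNI config
```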
Assigning this to me. Discussed at the 12/04/2019 Antrea community meeting. The only action item is to document how to deploy Antrea on a cluster which already has a CNI / running Pods.
Describe the bug
When installing Antrea on an existing cluster, the Pod IPs that Antrea allocates may conflict with those of previously created Pods (such as kube-dns).
To Reproduce
Create a GKE cluster. Install Antrea. Create enough Pods so that some of them land on nodes that already have existing kube-system Pods.
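A hedged sketch of the reproduction (the cluster name, node count, iperf image, and replica count are illustrative; any workload that schedules Pods onto nodes already running kube-system Pods will do):

```
# Illustrative names and image; adjust to your environment.
gcloud container clusters create antrea-test --num-nodes 2
kubectl apply -f https://raw.githubusercontent.com/vmware-tanzu/antrea/master/build/yamls/antrea.yml
kubectl create deployment iperf-server --image=networkstatic/iperf3
kubectl scale deployment iperf-server --replicas=4
```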
Expected
Pods are allocated IPs that do not overlap with existing Pod IPs.
On a 2-node GKE cluster, with Antrea deployed using the vmware-tanzu/antrea/master/build/yamls/antrea.yml manifest and kube-dns Pods already running on the nodes, adding a few iperf Pods shows the following (note that each iperf Pod is assigned the same IP as the kube-dns Pod on its node):
```
kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE     IP          NODE
default       iperf-client                   1/1     Running   0          2m37s   10.44.0.2   gke-antrea-test-default-pool-41d51a90-ltw0
default       iperf-server-cbdb86575-qrl5p   1/1     Running   0          2m40s   10.44.1.2   gke-antrea-test-default-pool-41d51a90-5bvq
kube-system   kube-dns-79868f54c5-h9c4m      4/4     Running   0          9m31s   10.44.0.2   gke-antrea-test-default-pool-41d51a90-ltw0
kube-system   kube-dns-79868f54c5-km95f      4/4     Running   0          8m52s   10.44.1.2   gke-antrea-test-default-pool-41d51a90-5bvq
```
This is a cluster running Kubernetes v1.13.11.