I have 2 VPCs connected with VPC peering. I set up an EKS cluster with 3 workers running in vpc-2, with security groups open to all traffic, and deployed a simple nginx Deployment exposing port 80. From any server in vpc-2 (the same VPC running EKS), I can connect to port 80 of these pods, but from any server in vpc-1 I can only reach 9 of the 12 nginx pods.
I cannot hit 10.1.131.223, 10.1.135.210, or 10.1.138.6. I ran /opt/cni/bin/aws-cni-support.sh on each worker and found that these three pods have one thing in common: their IPs belong to a secondary ENI, not the primary interface (eth0).
pod.output on ip-10-1-129-172.us-west-2.compute.internal
"nginx-deployment-6c479b78c5-9qp9z_testing_eb35b8a44be06854517d834cdd459cea98a66f17c28317e185fa82bccb09ccd2": {
"IP": "10.1.131.223",
"DeviceNumber": 2
},
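For reference, the same mapping can be pulled straight from the CNI's local introspection endpoint instead of running the whole support script (the port and path below are what my CNI version exposes, so treat them as assumptions):

# Dump the pod -> IP/ENI mapping that aws-cni-support.sh saves as pod.output.
# DeviceNumber 0 is the primary interface (eth0); anything higher means the
# pod's IP was allocated from a secondary ENI.
curl -s http://localhost:61679/v1/pods | python -m json.tool | grep -A 2 nginx-deployment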
I picked the pod with IP 10.1.135.210, running on worker ip-10-1-134-238.us-west-2.compute.internal, to keep checking the routing setup.
[ec2-user@ip-10-1-134-238 ~]$ ip rule show
0: from all lookup local
512: from all to 10.1.132.11 lookup main
512: from all to 10.1.134.230 lookup main
512: from all to 10.1.133.129 lookup main
512: from all to 10.1.135.193 lookup main
512: from all to 10.1.135.47 lookup main
512: from all to 10.1.133.95 lookup main
512: from all to 10.1.133.82 lookup main
512: from all to 10.1.135.210 lookup main
512: from all to 10.1.132.160 lookup main
512: from all to 10.1.133.30 lookup main
1024: from all fwmark 0x80/0x80 lookup main
1536: from 10.1.135.210 to 10.1.0.0/16 lookup 2
32766: from all lookup main
32767: from all lookup default
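Note that the rule at priority 1536 only covers destinations inside the VPC CIDR (10.1.0.0/16). To see what that rule actually resolves to, table 2 can be dumped as well; on this CNI version it should just hold the secondary ENI's subnet route and a default route out of that interface:

# Table 2 is the per-ENI routing table that the priority-1536 rule points at.
ip route show table 2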
[ec2-user@ip-10-1-134-238 ~]$ ip route show table main
default via 10.1.132.1 dev eth0
10.1.132.0/22 dev eth0 proto kernel scope link src 10.1.134.238
10.1.132.11 dev eni3367e1e163b scope link
10.1.132.160 dev eni79af95bd2b2 scope link
10.1.133.30 dev eni7976e813d36 scope link
10.1.133.82 dev enia1d23be782e scope link
10.1.133.95 dev enid5a84d7679a scope link
10.1.133.129 dev eni4ab52ffff90 scope link
10.1.134.230 dev eni78b12ef97b2 scope link
10.1.135.47 dev eni10d2e7ffdce scope link
10.1.135.193 dev eni242a6cbb667 scope link
10.1.135.210 dev eni1dc02e51912 scope link
169.254.169.254 dev eth0
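Those rules explain why only traffic from vpc-1 breaks: a reply from 10.1.135.210 to a vpc-1 address matches none of the "to 10.1.0.0/16" rules, falls through to the main table, and leaves via eth0 even though the pod's IP lives on a secondary ENI. A route lookup pinned to the pod's source address should show this (10.0.0.10 is just a placeholder host in vpc-1):

# Simulate the reply path from the pod to a vpc-1 host; before the fix this
# resolves via the main table and eth0.
ip route get 10.0.0.10 from 10.1.135.210 iif eni1dc02e51912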
So there is no rule on the worker node that routes traffic from this pod back to vpc-1 (10.0.0.0/16) through the secondary ENI's table. Adding a new rule fixed it perfectly:
[root@ip-10-1-134-238 ec2-user]# ip rule add from 10.1.135.210 to 10.0.0.0/16 priority 1537 table 2
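Doing this by hand on every worker would boil down to something like the sketch below: mirror each per-pod "to <VPC CIDR>" rule the CNI installed for the peered CIDR. This assumes the per-pod rules always sit at priority 1536 (as on my nodes above) and that 10.0.0.0/16 is the only peered VPC; it also has to be re-run whenever pods are rescheduled, which is exactly the problem.

#!/bin/bash
# Workaround sketch: for every "from <pod IP> to <VPC CIDR> lookup <table>" rule
# at priority 1536, add a matching rule for the peered VPC's CIDR.
PEER_CIDR="10.0.0.0/16"
ip rule show | awk '/^1536:/ {print $3, $NF}' | while read -r pod_ip rt_table; do
    ip rule add from "$pod_ip" to "$PEER_CIDR" priority 1537 table "$rt_table" 2>/dev/null
done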
My question is: should the vpc-cni-k8s plugin add this routing automatically when it sees VPC peering? If I have multiple VPCs that need to talk to EKS, I have to add routing manually on every worker node, and that's not good.
dmai-apixio changed the title from "Missing route in multi VPCs environment" to "Missing route in VPC peering" on Jun 3, 2019.
My limited understanding is that VPC peers should be handled by the default route from the machine's perspective: do the two VPCs that you've paired have entries for the peering connection in their route tables?
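For what it's worth, a quick way to check is to list each VPC's route tables and look for routes whose target is the peering connection (pcx-...); the VPC ID below is a placeholder:

# List destination CIDR and target for every route in the VPC's route tables.
# Each VPC should have a route for the other VPC's CIDR via the pcx- connection.
aws ec2 describe-route-tables \
    --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
    --query 'RouteTables[].Routes[].[DestinationCidrBlock,GatewayId,VpcPeeringConnectionId]' \
    --output text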