
Issues accessing pods from host network with several interfaces #337

Closed

dimm0 opened this issue Mar 12, 2018 · 19 comments

Comments

@dimm0
Contributor

dimm0 commented Mar 12, 2018

I have a node (several actually, all showing the same behavior) with 2 interfaces up in different subnets:

eno1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 171.64.20.83  netmask 255.255.254.0  broadcast 171.64.21.255
        ether 0c:c4:7a:31:31:d6  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xfbb00000-fbb7ffff

enp9s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 171.66.4.10  netmask 255.255.255.240  broadcast 171.66.4.15
        inet6 fe80::f652:14ff:fe63:6cd0  prefixlen 64  scopeid 0x20<link>
        ether f4:52:14:63:6c:d0  txqueuelen 10000  (Ethernet)
        RX packets 878347471  bytes 596966317122 (555.9 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 769905817  bytes 632561514596 (589.1 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp9s0 is the default one:

[root@netw-fiona ~]# ip route
default via 171.66.4.1 dev enp9s0
10.244.0.0/24 dev tun-675853147 proto 17
10.244.1.0/24 dev tun-1981710146 proto 17
10.244.2.0/24 dev tun128125214139 proto 17
10.244.3.0/24 dev tun-675853155 proto 17
10.244.4.0/24 dev tun128111174188 proto 17
10.244.5.0/24 dev tun-13475115132 proto 17
10.244.6.0/24 dev tun-675853159 proto 17
10.244.7.0/24 dev tun-675853156 proto 17
10.244.9.0/24 dev tun-675853158 proto 17
10.244.10.0/24 dev tun-12811410976 proto 17
10.244.11.0/24 dev tun-1981710169 proto 17
10.244.12.0/24 dev tun-1981710170 proto 17
10.244.13.0/24 dev tun-1921542254 proto 17
10.244.14.0/24 dev tun-12811410970 proto 17
10.244.15.0/24 dev tun-675853146 proto 17
10.244.16.0/24 dev tun-1301911031 proto 17
10.244.18.0/24 dev tun-1382310466 proto 17
10.244.19.0/24 dev tun128117212248 proto 17
10.244.21.0/24 dev kube-bridge proto kernel scope link src 10.244.21.1
10.244.22.0/24 dev tun-13019149222 proto 17
10.244.23.0/24 dev tun-12817112310 proto 17
169.254.0.0/16 dev eno1 scope link metric 1002
169.254.0.0/16 dev ens15 scope link metric 1004
169.254.0.0/16 dev enp9s0 scope link metric 1005
171.64.20.0/23 dev eno1 proto kernel scope link src 171.64.20.83
171.66.4.0/28 dev enp9s0 proto kernel scope link src 171.66.4.10
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1

The tunnels are bound to the right interface:

[root@netw-fiona ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
    link/ether 0c:c4:7a:31:31:d6 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 0c:c4:7a:31:31:d7 brd ff:ff:ff:ff:ff:ff
4: ens15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq portid f452140300636cb0 state UP mode DEFAULT qlen 1000
    link/ether f4:52:14:63:6c:b0 brd ff:ff:ff:ff:ff:ff
5: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq portid f452140300636cd0 state UP mode DEFAULT qlen 10000
    link/ether f4:52:14:63:6c:d0 brd ff:ff:ff:ff:ff:ff
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT
    link/ether 02:42:ab:2d:65:8d brd ff:ff:ff:ff:ff:ff
7: kube-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether 0a:58:0a:f4:15:01 brd ff:ff:ff:ff:ff:ff
9: veth746479cc@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master kube-bridge state UP mode DEFAULT
    link/ether 02:2e:aa:45:59:77 brd ff:ff:ff:ff:ff:ff link-netnsid 1
10: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
11: tun-1981710169@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 198.17.101.69
12: tun-675853155@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 67.58.53.155
13: tun-12811410970@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 128.114.109.70
14: tun128111174188@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 128.111.174.188
15: tun-675853156@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 67.58.53.156
16: tun-675853159@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 67.58.53.159
17: tun-1981710170@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 198.17.101.70
18: tun-12811410976@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 128.114.109.76
19: tun-675853146@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 67.58.53.146
20: tun-675853158@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 67.58.53.158
21: tun-1382310466@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 138.23.104.66
22: tun-12817112310@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 128.171.123.10
23: tun-1981710146@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 198.17.101.46
24: tun-675853147@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 67.58.53.147
25: tun-13019149222@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 130.191.49.222
26: tun-1301911031@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 130.191.103.1
27: tun128117212248@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 128.117.212.248
28: tun128125214139@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 128.125.214.139
29: tun-1921542254@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 192.154.2.254
30: veth6880acbc@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master kube-bridge state UP mode DEFAULT
    link/ether 8a:9f:d7:a5:f1:85 brd ff:ff:ff:ff:ff:ff link-netnsid 2
41: vethd2b18509@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master kube-bridge state UP mode DEFAULT
    link/ether ca:77:0f:07:80:8e brd ff:ff:ff:ff:ff:ff link-netnsid 7
42: veth5650c15d@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master kube-bridge state UP mode DEFAULT
    link/ether be:e3:61:e0:1a:36 brd ff:ff:ff:ff:ff:ff link-netnsid 4
44: veth4162359f@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master kube-bridge state UP mode DEFAULT
    link/ether a6:44:5a:f0:3b:7c brd ff:ff:ff:ff:ff:ff link-netnsid 0
45: vethc8ace34e@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master kube-bridge state UP mode DEFAULT
    link/ether 6a:83:e5:86:07:99 brd ff:ff:ff:ff:ff:ff link-netnsid 5
60: tun-13475115132@enp9s0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8960 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ipip 171.66.4.10 peer 134.75.115.132
61: veth50227e34@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master kube-bridge state UP mode DEFAULT
    link/ether 02:bc:7e:55:f9:17 brd ff:ff:ff:ff:ff:ff link-netnsid 3

The tunnels are working fine pod-to-pod. But when I try to access a pod on another host from the physical host (or from a pod bound to the host network), the packets go out from the wrong interface and never return:

[root@netw-fiona ~]# tcpdump -i tun-13019149222 | grep 10.244.22.90
tcpdump: listening on tun-13019149222, link-type RAW (Raw IP), capture size 262144 bytes
    171.64.20.83.46108 > 10.244.22.90.80: Flags [S], cksum 0xe10f (incorrect -> 0x0fff), seq 1732815011, win 35680, options [mss 8920,sackOK,TS val 3881173733 ecr 0,nop,wscale 14], length 0
    171.64.20.83.46108 > 10.244.22.90.80: Flags [S], cksum 0xe10f (incorrect -> 0x0c07), seq 1732815011, win 35680, options [mss 8920,sackOK,TS val 3881174749 ecr 0,nop,wscale 14], length 0
    171.64.20.83.46108 > 10.244.22.90.80: Flags [S], cksum 0xe10f (incorrect -> 0x0407), seq 1732815011, win 35680, options [mss 8920,sackOK,TS val 3881176797 ecr 0,nop,wscale 14], length 0
    171.64.20.83.46108 > 10.244.22.90.80: Flags [S], cksum 0xe10f (incorrect -> 0xf446), seq 1732815011, win 35680, options [mss 8920,sackOK,TS val 3881180829 ecr 0,nop,wscale 14], length 0
    171.64.20.83.46108 > 10.244.22.90.80: Flags [S], cksum 0xe10f (incorrect -> 0xd3c6), seq     
    171.64.20.83 > 10.244.22.90: ICMP echo request, id 25284, seq 1, length 64
    171.64.20.83 > 10.244.22.90: ICMP echo request, id 25284, seq 2, length 64
    171.64.20.83 > 10.244.22.90: ICMP echo request, id 25284, seq 3, length 64
    171.64.20.83 > 10.244.22.90: ICMP echo request, id 25284, seq 4, length 64
    171.64.20.83 > 10.244.22.90: ICMP echo request, id 25284, seq 5, length 64
    171.64.20.83 > 10.244.22.90: ICMP echo request, id 28681, seq 1, length 64
    171.64.20.83 > 10.244.22.90: ICMP echo request, id 28681, seq 2, length 64
    171.64.20.83 > 10.244.22.90: ICMP echo request, id 28681, seq 3, length 64
[root@netw-fiona ~]# tcpdump -i enp9s0 | grep 10.244.22.90
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp9s0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:14:00.995847 IP netw-fiona.stanford.edu > fiona.sdsu.edu: IP netw-fiona-pine.stanford.edu > 10.244.22.90: ICMP echo request, id 19328, seq 1, length 64 (ipip-proto-4)
15:14:02.035004 IP netw-fiona.stanford.edu > fiona.sdsu.edu: IP netw-fiona-pine.stanford.edu > 10.244.22.90: ICMP echo request, id 19328, seq 2, length 64 (ipip-proto-4)
15:14:10.672920 IP netw-fiona.stanford.edu > fiona.sdsu.edu: IP netw-fiona-pine.stanford.edu.58004 > 10.244.22.90.http: Flags [S], seq 1944923732, win 35680, options [mss 8920,sackOK,TS val 3882967266 ecr 0,nop,wscale 14], length 0 (ipip-proto-4)
15:14:11.698986 IP netw-fiona.stanford.edu > fiona.sdsu.edu: IP netw-fiona-pine.stanford.edu.58004 > 10.244.22.90.http: Flags [S], seq 1944923732, win 35680, options [mss 8920,sackOK,TS val 3882968293 ecr 0,nop,wscale 14], length 0 (ipip-proto-4)

Please help!

@andrewsykim
Collaborator

Can you list the iptables rules on your host with sudo iptables-save?

@dimm0
Contributor Author

dimm0 commented Mar 13, 2018

@andrewsykim
Collaborator

Can you put that in a gist instead?

@dimm0
Contributor Author

dimm0 commented Mar 13, 2018

Yup. Done

@andrewsykim
Collaborator

andrewsykim commented Mar 13, 2018

Have you set --cluster-cidr on kube-router and/or kube-proxy?

@dimm0
Contributor Author

dimm0 commented Mar 13, 2018

Hmm, I haven't. The kube-proxy config has it defined correctly:

dimm:k8s_portal dimm$ kubectl get configmaps -n kube-system  kube-proxy -o yaml
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: 10.244.0.0/16

Should I do this as well?

Where should it be defined for kube-router?

This is my current config: https://github.com/dimm0/prp_k8s_config/blob/master/kubeadm-kuberouter.yaml

@andrewsykim
Collaborator

Try adding --cluster-cidr here: https://github.com/dimm0/prp_k8s_config/blob/master/kubeadm-kuberouter.yaml#L48. I'm not 100% sure it will fix this issue, but I had a similar issue a while back and it fixed it for me.
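
For illustration, assuming the line linked above is the kube-router container's args list in that DaemonSet, the addition would look something like this (using the same 10.244.0.0/16 pod CIDR that the kube-proxy config above already sets; adjust to your cluster):

      containers:
      - name: kube-router
        image: cloudnativelabs/kube-router
        args:
        # ... existing args unchanged ...
        - --cluster-cidr=10.244.0.0/16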

@dimm0
Contributor Author

dimm0 commented Mar 13, 2018

Hmm... How about this? #137

@dimm0
Contributor Author

dimm0 commented Mar 14, 2018

Didn't help. I changed the DaemonSet definition and deleted the kube-router pod on the broken node.

@acloudiator

I'm having a similar issue.
@dimm0 Have you found any remedy/workaround for it?

@acloudiator

tcpdump

18:15:46.739079 IP 192.168.0.27.48986 > 10.98.14.4.15201: Flags [S], seq 1460892127, win 29200, options [mss 1460,sackOK,TS val 1347052227 ecr 0,nop,wscale 7], length 0
18:15:47.754993 IP 192.168.0.27.48986 > 10.98.14.4.15201: Flags [S], seq 1460892127, win 29200, options [mss 1460,sackOK,TS val 1347053243 ecr 0,nop,wscale 7], length 0
18:15:49.770950 IP 192.168.0.27.48986 > 10.98.14.4.15201: Flags [S], seq 1460892127, win 29200, options [mss 1460,sackOK,TS val 1347055259 ecr 0,nop,wscale 7], length 0
18:15:54.026906 IP 192.168.0.27.48986 > 10.98.14.4.15201: Flags [S], seq 1460892127, win 29200, options [mss 1460,sackOK,TS val 1347059515 ecr 0,nop,wscale 7], length 0
18:16:02.218914 IP 192.168.0.27.48986 > 10.98.14.4.15201: Flags [S], seq 1460892127, win 29200, options [mss 1460,sackOK,TS val 1347067707 ecr 0,nop,wscale 7], length 0
18:16:18.346957 IP 192.168.0.27.48986 > 10.98.14.4.15201: Flags [S], seq 1460892127, win 29200, options [mss 1460,sackOK,TS val 1347083835 ecr 0,nop,wscale 7], length 0
18:16:52.138961 IP 192.168.0.27.48986 > 10.98.14.4.15201: Flags [S], seq 1460892127, win 29200, options [mss 1460,sackOK,TS val 1347117626 ecr 0,nop,wscale 7], length 0

iptables -L -t nat

KUBE-MARK-MASQ  all  --  192.168.0.27         anywhere             /* default/iperf3-node1-clusterip: */
DNAT       tcp  --  anywhere             anywhere             /* default/iperf3-node1-clusterip: */ tcp to:192.168.0.27:5201
KUBE-MARK-MASQ  tcp  -- !192.168.0.0/16       10.98.14.4           /* default/iperf3-node1-clusterip: cluster IP */ tcp dpt:15201
KUBE-SVC-RS2UP6RGOOXCI2LQ  tcp  --  anywhere             10.98.14.4           /* default/iperf3-node1-clusterip: cluster IP */ tcp dpt:15201

kubectl get svc -o wide

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE       SELECTOR
iperf3-node1-clusterip   ClusterIP   10.98.14.4      <none>        15201/TCP         26m       app=iperf3-node1-clusterip

kubectl get pods -o wide

NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
iperf3-node1-clusterip   1/1       Running   0          27m       192.168.0.27   al-server-a

kubectl get ep -o wide

NAME                     ENDPOINTS             AGE
iperf3-node1-clusterip   192.168.0.27:5201     28m

kuberouter version and other details

/usr/local/bin/kube-router version v0.2.0-beta.7, built on 2018-06-08T09:34:34+0000, go1.8.7

@dimm0
Contributor Author

dimm0 commented Jul 3, 2018

Switched to calico, no troubles since then

@acloudiator

@dimm0 The reason I am sticking with kube-router is mainly the performance, but it seems like none of the kube-router experts are looking at this ticket :(

@andrewsykim
Collaborator

Sorry, this fell off my radar; I will try to find time to dig into it in the next few weeks. If anyone has more information that would help debug this, please let me know.

@murali-reddy
Member

@dimm0 sorry things did not work out with kube-router for you. You guys were among the very first large-scale users of kube-router when the project was still in its infancy, and a lot of valuable feedback came from you.

With multiple interfaces, things get a little bizarre with Kubernetes. There is just a one-line prerequisite:

https://kubernetes.io/docs/tasks/tools/install-kubeadm/#check-network-adapters
and corresponding issue
kubernetes/kubeadm#102 (comment)

Basically, when using Kubernetes with multiple interfaces, the data path has to use the right source IP and interface when sending packets.
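
One quick way to see which source address the kernel will pick for a given pod destination (shown here purely as an illustration):

# ask the kernel how it would route to the remote pod and which source it selects
ip route get 10.244.22.90
# with no explicit "src" hint on the tunnel route, the chosen source address can
# come from a different interface (eno1's 171.64.20.83, as seen in the capture below)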

If you look at the tcpdump shared by dimm0:

[root@netw-fiona ~]# tcpdump -i tun-13019149222 | grep 10.244.22.90
tcpdump: listening on tun-13019149222, link-type RAW (Raw IP), capture size 262144 bytes
    171.64.20.83.46108 > 10.244.22.90.80: Flags [S], cksum 0xe10f (incorrect -> 0x0fff), seq 1732815011, win 35680, options [mss 8920,sackOK,TS val 3881173733 ecr 0,nop,wscale 14], length 0
    171.64.20.83.46108 > 10.244.22.90.80: Flags [S], cksum 0xe10f (incorrect -> 0x0c07), seq 1732815011, win 35680, options [mss 8920,sackOK,TS val 3881174749 ecr 0,nop,wscale 14], length 0
    171.64.20.83.46108 > 10.244.22.90.80: Flags [S], cksum 0xe10f (incorrect -> 0x0407), seq 1732815011, win 35680, options [mss 8920,sackOK,TS val 3881176797 ecr 0,nop,wscale 14], length 0
    171.64.20.83.46108 > 10.244.22.90.80: Flags [S], cksum 0xe10f (incorrect -> 0xf446), seq 1732815011, win 35680, options [mss 8920,sackOK,TS val 3881180829 ecr 0,nop,wscale 14], length 0
    171.64.20.83.46108 > 10.244.22.90.80: Flags [S], cksum 0xe10f (incorrect -> 0xd3c6), seq     

You will see that the source IP address used is eno1's, while the packets are sent over the tunnel/enp9s0. This is wrong. We need to add a route so that the proper source address is used.
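
For illustration only, a manually added equivalent of such a route on the node above would look something like this (the exact routes kube-router programs may differ):

# pin the inner source address for this pod subnet to the tunnel-local/enp9s0 IP
ip route replace 10.244.22.0/24 dev tun-13019149222 proto 17 src 171.66.4.10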

There is a fix checked in (359ab1d) which should have addressed this issue.

I ran into an issue recently reported by @ieugen (#484), so I guess there is still some missing piece. I will revisit this and see what could be going wrong.

@FabienZouaoui

Hi, I'm having this issue and found a workaround to get host-to-service networking working.

I have to replace this rule in iptables' nat table (created by kube-router):
-A POSTROUTING ! -s NODE_POD_NET ! -d NODE_POD_NET -m ipvs --vdir ORIGINAL --vmethod MASQ -m comment --comment "" -j MASQUERADE

with:
-A POSTROUTING ! -s NODE_POD_NET ! -d NODE_POD_NET -m ipvs --vdir ORIGINAL --vmethod MASQ -m comment --comment "" -j SNAT --to-source CORRECT_IP_ADDRESS

My guess is that the MASQUERADE target is somewhat confused by all this setup.
Does it seem OK to add a flag to kube-router to specify which IP address should be used for this rule (and keep the MASQUERADE target when the flag is absent)?
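
For anyone wanting to try the same workaround by hand, it amounts to something like the following (NODE_POD_NET and CORRECT_IP_ADDRESS are placeholders for your node's pod subnet and preferred source address; kube-router may re-add its own rule on its next sync):

# remove the MASQUERADE rule installed by kube-router
iptables -t nat -D POSTROUTING ! -s NODE_POD_NET ! -d NODE_POD_NET -m ipvs --vdir ORIGINAL --vmethod MASQ -m comment --comment "" -j MASQUERADE
# re-add it as SNAT to a fixed source address
iptables -t nat -A POSTROUTING ! -s NODE_POD_NET ! -d NODE_POD_NET -m ipvs --vdir ORIGINAL --vmethod MASQ -m comment --comment "" -j SNAT --to-source CORRECT_IP_ADDRESS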

@murali-reddy
Member

@FabienZouaoui what version of kube-router are you running? We had a recent PR (#668), part of the v1.0.0-rc1 release, which adds a similar rule. Can you please try it and see if that addresses the issue?

@FabienZouaoui

@murali-reddy sorry to have missed that PR.
I was running v0.4.0 and I can confirm that this issue is resolved for me with v1.0.0-rc1.

That said, I think this rewrite rule creates unnecessary overhead in this specific case (node-to-pod communication). The routing rule already does a good job of setting the right source IP address.

@aauren
Collaborator

aauren commented Apr 24, 2020

Closing as resolved in v1.0.0-rc

@aauren aauren closed this as completed Apr 24, 2020