
1 Node stopped from connecting to other nodes in Kubernetes Cluster #3392

Closed
shahbour opened this issue Sep 3, 2018 · 17 comments

shahbour commented Sep 3, 2018

What you expected to happen?

What happened?

Yesterday one of the 4 nodes in my Kubernetes cluster stopped working: traffic from pods on that node to pods on the other nodes stopped, and vice versa.

How to reproduce it?

I could not find any difference in configuration, so I can't reproduce it, but the problem is still present now.

Anything else we need to know?

This is node 192.168.70.230, the one that stopped talking to the others, with reason: Received update for IP range I own at 10.44.0.0 v4961: incoming ...

(⎈ |production:kube-system)➜  ~ kubectl exec -n kube-system weave-net-n98pg -c weave -- /home/weave/weave --local status connections
-> 172.16.71.11:6783     failed      Received update for IP range I own at 10.44.0.0 v4961: incoming message says owner da:68:9b:b7:51:65 v4997, retry: 2018-09-03 14:03:18.569215277 +0000 UTC m=+2667.309046958
-> 192.168.70.232:6783   failed      Received update for IP range I own at 10.44.0.0 v4961: incoming message says owner da:68:9b:b7:51:65 v4997, retry: 2018-09-03 14:05:40.259769405 +0000 UTC m=+2808.999601088
-> 192.168.70.230:6783   failed      cannot connect to ourself, retry: never
(⎈ |production:kube-system)➜  ~ kubectl exec -n kube-system weave-net-5p9zb -c weave -- /home/weave/weave --local status connections
-> 192.168.70.232:6783   established sleeve 2a:44:ef:34:94:3b(kube-master) mtu=1438
-> 192.168.70.231:6783   established sleeve da:68:9b:b7:51:65(engine02) mtu=1438
-> 192.168.70.230:6783   failed      read tcp4 172.16.71.11:33627->192.168.70.230:6783: read: connection reset by peer, retry: 2018-09-03 14:02:50.598792999 +0000 UTC m=+23008.104583296
-> 172.16.71.11:6783     failed      cannot connect to ourself, retry: never
(⎈ |production:kube-system)➜  ~ kubectl exec -n kube-system weave-net-ts4fp -c weave -- /home/weave/weave --local status connections
<- 172.16.71.11:60522    established sleeve 9e:1f:f3:46:dd:12(engine03) mtu=1438
<- 192.168.70.232:52895  established fastdp 2a:44:ef:34:94:3b(kube-master) mtu=1376
-> 192.168.70.230:6783   failed      read tcp4 192.168.70.231:49715->192.168.70.230:6783: read: connection reset by peer, retry: 2018-09-03 14:04:38.797392876 +0000 UTC m=+4938783.801228928
-> 192.168.70.231:6783   failed      cannot connect to ourself, retry: never
(⎈ |production:kube-system)➜  ~ kubectl exec -n kube-system weave-net-wzbt9 -c weave -- /home/weave/weave --local status connections
<- 172.16.71.11:52226    established sleeve 9e:1f:f3:46:dd:12(engine03) mtu=1438
-> 192.168.70.231:6783   established fastdp da:68:9b:b7:51:65(engine02) mtu=1376
-> 192.168.70.232:6783   failed      cannot connect to ourself, retry: never
-> 192.168.70.230:6783   failed      read tcp4 192.168.70.232:46196->192.168.70.230:6783: read: connection reset by peer, retry: 2018-09-03 14:05:36.038993072 +0000 UTC m=+23243.658647602
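
The allocator's view of ring ownership can be compared across nodes with the same exec pattern; a minimal sketch (pod names taken from the outputs above, and assuming `--local status ipam` behaves like the `--local status connections` calls used here):

# Dump the IP allocation (ring ownership) view from each weave pod,
# to see where 10.44.0.0 is claimed by 72:be:1a:b8:98:77 vs da:68:9b:b7:51:65
for p in weave-net-n98pg weave-net-5p9zb weave-net-ts4fp weave-net-wzbt9; do
  echo "== $p =="
  kubectl exec -n kube-system "$p" -c weave -- /home/weave/weave --local status ipam
done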

Versions:

$ weave version
weave script 2.3.0
$ docker version
Client:
 Version:         1.13.1
 API version:     1.26
 Package version: <unknown>
 Go version:      go1.8.3
 Git commit:      774336d/1.13.1
 Built:           Wed Mar  7 17:06:16 2018
 OS/Arch:         linux/amd64

Server:
 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: <unknown>
 Go version:      go1.8.3
 Git commit:      774336d/1.13.1
 Built:           Wed Mar  7 17:06:16 2018
 OS/Arch:         linux/amd64
 Experimental:    false
$ uname -a
Linux engine01 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T22:29:25Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Logs:

(⎈ |production:kube-system)➜  ~ kubectl logs  weave-net-n98pg weave
DEBU: 2018/09/03 13:18:51.236888 [kube-peers] Checking peer "72:be:1a:b8:98:77" against list &{[{2a:44:ef:34:94:3b kube-master} {72:be:1a:b8:98:77 engine01.} {da:68:9b:b7:51:65 engine02} {9e:1f:f3:46:dd:12 engine03}]}
INFO: 2018/09/03 13:18:51.311055 Command line options: map[ipalloc-init:consensus=3 ipalloc-range:10.32.0.0/12 no-dns:true datapath:datapath db-prefix:/weavedb/weave-net docker-api: host-root:/host metrics-addr:0.0.0.0:6782 port:6783 expect-npc:true name:72:be:1a:b8:98:77 nickname:engine01. conn-limit:100 http-addr:127.0.0.1:6784]
INFO: 2018/09/03 13:18:51.311312 weave  2.3.0
INFO: 2018/09/03 13:18:51.597063 Bridge type is bridged_fastdp
INFO: 2018/09/03 13:18:51.597131 Communication between peers is unencrypted.
INFO: 2018/09/03 13:18:51.623481 Our name is 72:be:1a:b8:98:77(engine01.)
INFO: 2018/09/03 13:18:51.623560 Launch detected - using supplied peer list: [192.168.70.230 172.16.71.11 192.168.70.232]
INFO: 2018/09/03 13:18:51.623684 Checking for pre-existing addresses on weave bridge
INFO: 2018/09/03 13:18:51.624044 weave bridge has address 10.36.0.0/12
INFO: 2018/09/03 13:18:51.659581 Found address 10.36.0.81/12 for ID _
INFO: 2018/09/03 13:18:51.678220 Found address 10.36.0.85/12 for ID _
INFO: 2018/09/03 13:18:51.696947 Found address 10.36.0.91/12 for ID _
INFO: 2018/09/03 13:18:51.714631 Found address 10.36.0.81/12 for ID _
INFO: 2018/09/03 13:18:51.732737 Found address 10.36.0.85/12 for ID _
INFO: 2018/09/03 13:18:51.745257 Found address 10.36.0.91/12 for ID _
INFO: 2018/09/03 13:18:51.758333 Found address 10.36.0.92/12 for ID _
INFO: 2018/09/03 13:18:51.769979 Found address 10.36.0.92/12 for ID _
INFO: 2018/09/03 13:18:51.785279 Found address 10.36.0.95/12 for ID _
INFO: 2018/09/03 13:18:51.798937 Found address 10.36.0.95/12 for ID _
INFO: 2018/09/03 13:18:51.800843 [allocator 72:be:1a:b8:98:77] Initialising with persisted data
INFO: 2018/09/03 13:18:51.801210 Sniffing traffic on datapath (via ODP)
INFO: 2018/09/03 13:18:51.802815 ->[172.16.71.11:6783] attempting connection
INFO: 2018/09/03 13:18:51.802990 ->[192.168.70.230:6783] attempting connection
INFO: 2018/09/03 13:18:51.803123 ->[192.168.70.232:6783] attempting connection
INFO: 2018/09/03 13:18:51.803364 ->[192.168.70.230:50218] connection accepted
INFO: 2018/09/03 13:18:51.805241 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection ready; using protocol version 2
INFO: 2018/09/03 13:18:51.805415 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01.)]: connection shutting down due to error: cannot connect to ourself
INFO: 2018/09/03 13:18:51.805531 ->[192.168.70.230:50218|72:be:1a:b8:98:77(engine01.)]: connection shutting down due to error: cannot connect to ourself
INFO: 2018/09/03 13:18:51.805664 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2018/09/03 13:18:51.805687 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using fastdp
INFO: 2018/09/03 13:18:51.805790 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection added (new peer)
INFO: 2018/09/03 13:18:51.805823 Listening for metrics requests on 0.0.0.0:6782
INFO: 2018/09/03 13:18:51.805876 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection ready; using protocol version 2
INFO: 2018/09/03 13:18:51.807567 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 13:18:51.808297 ->[192.168.70.231:6783] attempting connection
INFO: 2018/09/03 13:18:51.810538 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection added (new peer)
INFO: 2018/09/03 13:18:51.811838 ->[192.168.70.231:6783|da:68:9b:b7:51:65(engine02.)]: connection ready; using protocol version 2
INFO: 2018/09/03 13:18:51.811945 overlay_switch ->[da:68:9b:b7:51:65(engine02.)] using fastdp
INFO: 2018/09/03 13:18:51.811988 ->[192.168.70.231:6783|da:68:9b:b7:51:65(engine02.)]: connection added (new peer)
INFO: 2018/09/03 13:18:51.812019 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4961: incoming message says owner da:68:9b:b7:51:65 v4995
INFO: 2018/09/03 13:18:51.812173 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection deleted
INFO: 2018/09/03 13:18:51.812271 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4961: incoming message says owner da:68:9b:b7:51:65 v4995
INFO: 2018/09/03 13:18:51.813052 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] sleeve write tcp4 192.168.70.230:39220->172.16.71.11:6783: use of closed network connection
INFO: 2018/09/03 13:18:51.814581 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection deleted
INFO: 2018/09/03 13:18:51.815535 ->[192.168.70.231:6783|da:68:9b:b7:51:65(engine02.)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4961: incoming message says owner da:68:9b:b7:51:65 v4995
INFO: 2018/09/03 13:18:51.815700 ->[192.168.70.231:6783|da:68:9b:b7:51:65(engine02.)]: connection deleted
INFO: 2018/09/03 13:18:51.815761 Removed unreachable peer da:68:9b:b7:51:65(engine02.)
INFO: 2018/09/03 13:18:51.815798 Removed unreachable peer 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/03 13:18:51.815825 Removed unreachable peer 2a:44:ef:34:94:3b(kube-master)
INFO: 2018/09/03 13:18:52.304924 Weave version 2.4.0 is available; please update at https://github.com/weaveworks/weave/releases/download/v2.4.0/weave
INFO: 2018/09/03 13:18:52.829705 [kube-peers] Added myself to peer list &{[{2a:44:ef:34:94:3b kube-master} {72:be:1a:b8:98:77 engine01.} {da:68:9b:b7:51:65 engine02.} {9e:1f:f3:46:dd:12 engine03}]}
DEBU: 2018/09/03 13:18:52.833896 [kube-peers] Nodes that have disappeared: map[engine02.:{da:68:9b:b7:51:65 engine02.}]
DEBU: 2018/09/03 13:18:52.833956 [kube-peers] Preparing to remove disappeared peer {da:68:9b:b7:51:65 engine02.}
DEBU: 2018/09/03 13:18:52.833978 [kube-peers] Noting I plan to remove  da:68:9b:b7:51:65
DEBU: 2018/09/03 13:18:52.840629 weave DELETE to http://127.0.0.1:6784/peer/da:68:9b:b7:51:65 with map[]
INFO: 2018/09/03 13:18:52.842911 [kube-peers] rmpeer of da:68:9b:b7:51:65: 0 IPs taken over from da:68:9b:b7:51:65
INFO: 2018/09/03 13:18:52.847470 ->[172.16.71.11:6783] attempting connection
INFO: 2018/09/03 13:18:52.851480 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection ready; using protocol version 2
INFO: 2018/09/03 13:18:52.851778 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using fastdp
INFO: 2018/09/03 13:18:52.851858 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection added (new peer)
INFO: 2018/09/03 13:18:52.853624 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4961: incoming message says owner da:68:9b:b7:51:65 v4995
INFO: 2018/09/03 13:18:52.853890 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection deleted
INFO: 2018/09/03 13:18:52.853959 Removed unreachable peer 9e:1f:f3:46:dd:12(engine03)
DEBU: 2018/09/03 13:18:52.868152 [kube-peers] Nodes that have disappeared: map[]
10.36.0.0
INFO: 2018/09/03 13:18:53.623710 ->[192.168.70.232:6783] attempting connection
INFO: 2018/09/03 13:18:53.630233 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection ready; using protocol version 2
INFO: 2018/09/03 13:18:53.630644 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 13:18:53.630716 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection added (new peer)
INFO: 2018/09/03 13:18:53.631638 ->[192.168.70.231:6783] attempting connection
INFO: 2018/09/03 13:18:53.632233 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4961: incoming message says owner da:68:9b:b7:51:65 v4995
INFO: 2018/09/03 13:18:53.632377 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection deleted
INFO: 2018/09/03 13:18:53.632436 Removed unreachable peer 2a:44:ef:34:94:3b(kube-master)
INFO: 2018/09/03 13:18:53.632458 Removed unreachable peer da:68:9b:b7:51:65(engine02.)
INFO: 2018/09/03 13:18:53.632473 Removed unreachable peer 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/03 13:18:53.633775 ->[192.168.70.231:6783] connection shutting down due to error during handshake: Found unknown remote name: da:68:9b:b7:51:65 at 192.168.70.231:6783
INFO: 2018/09/03 13:18:55.427538 ->[192.168.70.232:6783] attempting connection
INFO: 2018/09/03 13:18:55.430304 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection ready; using protocol version 2
INFO: 2018/09/03 13:18:55.430504 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 13:18:55.430571 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection added (new peer)
INFO: 2018/09/03 13:18:55.432532 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4961: incoming message says owner da:68:9b:b7:51:65 v4995
INFO: 2018/09/03 13:18:55.432725 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection deleted
INFO: 2018/09/03 13:18:55.432785 Removed unreachable peer 2a:44:ef:34:94:3b(kube-master)
INFO: 2018/09/03 13:18:56.914843 ->[172.16.71.11:6783] attempting connection
INFO: 2018/09/03 13:18:56.917841 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection ready; using protocol version 2
INFO: 2018/09/03 13:18:56.918074 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using fastdp
INFO: 2018/09/03 13:18:56.918169 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection added (new peer)
INFO: 2018/09/03 13:18:56.919823 ->[192.168.70.231:6783] attempting connection
INFO: 2018/09/03 13:18:56.921437 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4961: incoming message says owner da:68:9b:b7:51:65 v4995
INFO: 2018/09/03 13:18:56.921580 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] sleeve write tcp4 192.168.70.230:59466->172.16.71.11:6783: use of closed network connection
INFO: 2018/09/03 13:18:56.921627 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection deleted
INFO: 2018/09/03 13:18:56.921683 Removed unreachable peer 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/03 13:18:56.921709 Removed unreachable peer 2a:44:ef:34:94:3b(kube-master)
INFO: 2018/09/03 13:18:56.921734 Removed unreachable peer da:68:9b:b7:51:65(engine02.)
INFO: 2018/09/03 13:18:56.923764 ->[192.168.70.231:6783] connection shutting down due to error during handshake: Found unknown remote name: da:68:9b:b7:51:65 at 192.168.70.231:6783
INFO: 2018/09/03 13:18:59.315929 ->[192.168.70.232:6783] attempting connection
INFO: 2018/09/03 13:18:59.319051 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection ready; using protocol version 2
INFO: 2018/09/03 13:18:59.319263 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 13:18:59.319342 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection added (new peer)
INFO: 2018/09/03 13:18:59.320601 ->[192.168.70.231:6783] attempting connection
INFO: 2018/09/03 13:18:59.321230 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4961: incoming message says owner da:68:9b:b7:51:65 v4995
INFO: 2018/09/03 13:18:59.321387 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection deleted
INFO: 2018/09/03 13:18:59.321450 Removed unreachable peer 2a:44:ef:34:94:3b(kube-master)
INFO: 2018/09/03 13:18:59.321470 Removed unreachable peer da:68:9b:b7:51:65(engine02.)
INFO: 2018/09/03 13:18:59.321497 Removed unreachable peer 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/03 13:18:59.322891 ->[192.168.70.231:6783] connection shutting down due to error during handshake: Found unknown remote name: da:68:9b:b7:51:65 at 192.168.70.231:6783
INFO: 2018/09/03 13:19:01.974685 ->[172.16.71.11:6783] attempting connection
INFO: 2018/09/03 13:19:01.977493 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection ready; using protocol version 2
INFO: 2018/09/03 13:19:01.977673 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using fastdp
INFO: 2018/09/03 13:19:01.977741 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection added (new peer)
INFO: 2018/09/03 13:19:01.978891 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4961: incoming message says owner da:68:9b:b7:51:65 v4995
INFO: 2018/09/03 13:19:01.979292 ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: connection deleted
INFO: 2018/09/03 13:19:01.979369 Removed unreachable peer 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/03 13:19:06.110495 ->[192.168.70.232:6783] attempting connection

Network:

$ ip route
default via 192.168.70.225 dev eno16777984 proto static metric 100 
10.32.0.0/12 dev weave proto kernel scope link src 10.36.0.0 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.70.224/28 dev eno16777984 proto kernel scope link src 192.168.70.230 metric 100 
$ ip -4 -o addr
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: eno16777984    inet 192.168.70.230/28 brd 192.168.70.239 scope global eno16777984\       valid_lft forever preferred_lft forever
3: docker0    inet 172.17.0.1/16 scope global docker0\       valid_lft forever preferred_lft forever
6: weave    inet 10.36.0.0/12 brd 10.47.255.255 scope global weave\       valid_lft forever preferred_lft forever
$ sudo iptables-save
# Generated by iptables-save v1.4.21 on Mon Sep  3 14:12:05 2018
*nat
:PREROUTING ACCEPT [1:60]
:INPUT ACCEPT [1:60]
:OUTPUT ACCEPT [3:180]
:POSTROUTING ACCEPT [3:180]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-2T2JSMHSSEXICVWQ - [0:0]
:KUBE-SEP-4FAFKUO5PDFMMDJZ - [0:0]
:KUBE-SEP-5VHD65JGHBHEH3OW - [0:0]
:KUBE-SEP-6CEMAHSACHX7EJ7U - [0:0]
:KUBE-SEP-6W2PTZRXA3PIDZ55 - [0:0]
:KUBE-SEP-6WT2QPM3S5LK4PNK - [0:0]
:KUBE-SEP-7DAQXCZAGL6F2WLE - [0:0]
:KUBE-SEP-7KP7D5HJPDVZ255S - [0:0]
:KUBE-SEP-B3V2WOWBYTP3PD3L - [0:0]
:KUBE-SEP-BBDC6QBC4WFRDAZF - [0:0]
:KUBE-SEP-D6ROLV4NHZ77JI47 - [0:0]
:KUBE-SEP-DWJ53WG5H2T7VC2T - [0:0]
:KUBE-SEP-E3QV3WKMDKT5BIMM - [0:0]
:KUBE-SEP-EQU7W3UR3KH7R56E - [0:0]
:KUBE-SEP-EZUERM5EL56PPGZ5 - [0:0]
:KUBE-SEP-FJZZMNOWVSDGOSM3 - [0:0]
:KUBE-SEP-GJFNYPBSZIUQLAEL - [0:0]
:KUBE-SEP-JAB644JEIH72D7NY - [0:0]
:KUBE-SEP-JVZTSKHLN3KBIP5L - [0:0]
:KUBE-SEP-KH5PW5K2NW74EL6U - [0:0]
:KUBE-SEP-LGQVMCXJGZVPU2ZD - [0:0]
:KUBE-SEP-MS7CXDNM35F2O7EZ - [0:0]
:KUBE-SEP-MVHGEIUXD4RAT7UH - [0:0]
:KUBE-SEP-N2JFJLJRMCNZGU3T - [0:0]
:KUBE-SEP-NAIHDCL2YFPCGYUR - [0:0]
:KUBE-SEP-ORRCHWATARHYOER6 - [0:0]
:KUBE-SEP-RAJQ32SYKW6X5VTL - [0:0]
:KUBE-SEP-RBKQ3UO2BD5574UU - [0:0]
:KUBE-SEP-RDFVHZ3YEYWMVRZR - [0:0]
:KUBE-SEP-S7WJ4KDV72DZRKXY - [0:0]
:KUBE-SEP-TQR4NGDI7QBPAEOB - [0:0]
:KUBE-SEP-UJC7W3YPF5XF2GWW - [0:0]
:KUBE-SEP-VXGODP34ZKHNR4NT - [0:0]
:KUBE-SEP-WQJGR5GL3DRFNR6C - [0:0]
:KUBE-SEP-XDKQFSDP7SK3DG4G - [0:0]
:KUBE-SEP-YE6T6BNVTOPUMLPP - [0:0]
:KUBE-SEP-YR4QPLQBXIPH5XEL - [0:0]
:KUBE-SEP-YWGOVFCG7K5KS42B - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-2TWB3W26JFEUCPTC - [0:0]
:KUBE-SVC-5E6L6IMJXXB5PRYI - [0:0]
:KUBE-SVC-5E6NQUE2DDZQYXK4 - [0:0]
:KUBE-SVC-5O4RX4I7LOPQUYD5 - [0:0]
:KUBE-SVC-6NXSGHI464OKUGD3 - [0:0]
:KUBE-SVC-7BB4GED2QYDGC4GN - [0:0]
:KUBE-SVC-BJM46V3U5RZHCFRZ - [0:0]
:KUBE-SVC-CR2BJB4IXVLPMKXG - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-FQZVPXPAVNSYM4O6 - [0:0]
:KUBE-SVC-HNUXZZTQTMLMGJUH - [0:0]
:KUBE-SVC-ITDPLE4X2C3GITQN - [0:0]
:KUBE-SVC-IUGSYNO5UD5ENGHX - [0:0]
:KUBE-SVC-JBRDQBMTQUIIYW2F - [0:0]
:KUBE-SVC-JRXTEHDDTAFMSEAS - [0:0]
:KUBE-SVC-K7J76NXP7AUZVFGS - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-Q6XJQ2I55QTBQCWT - [0:0]
:KUBE-SVC-R4UENAYGNDN2K6E7 - [0:0]
:KUBE-SVC-RKD3UH2OQ33CLZE3 - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-WUIYBZBCYLXPRZMM - [0:0]
:KUBE-SVC-X7DVPRUYWJS6MW3F - [0:0]
:KUBE-SVC-XAHVVRAYYBPGG4CC - [0:0]
:KUBE-SVC-XWYU5Y3HAZFCHZ5X - [0:0]
:KUBE-SVC-ZEJADLV2PZJ4IDUH - [0:0]
:KUBE-SVC-ZW2RLD2NAMM5L3LY - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:http" -m tcp --dport 30076 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:http" -m tcp --dport 30076 -j KUBE-SVC-ITDPLE4X2C3GITQN
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/monitoring-grafana:" -m tcp --dport 31413 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/monitoring-grafana:" -m tcp --dport 31413 -j KUBE-SVC-JRXTEHDDTAFMSEAS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:2222-tcp" -m tcp --dport 32711 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:2222-tcp" -m tcp --dport 32711 -j KUBE-SVC-5E6L6IMJXXB5PRYI
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:https" -m tcp --dport 31884 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:https" -m tcp --dport 31884 -j KUBE-SVC-5O4RX4I7LOPQUYD5
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-2T2JSMHSSEXICVWQ -s 172.17.0.2/32 -m comment --comment "kube-system/fluent-bit:" -j KUBE-MARK-MASQ
-A KUBE-SEP-2T2JSMHSSEXICVWQ -p tcp -m comment --comment "kube-system/fluent-bit:" -m tcp -j DNAT --to-destination 172.17.0.2:24224
-A KUBE-SEP-4FAFKUO5PDFMMDJZ -s 192.168.70.231/32 -m comment --comment "kube-system/glusterfs-cluster-system:" -j KUBE-MARK-MASQ
-A KUBE-SEP-4FAFKUO5PDFMMDJZ -p tcp -m comment --comment "kube-system/glusterfs-cluster-system:" -m tcp -j DNAT --to-destination 192.168.70.231:9091
-A KUBE-SEP-5VHD65JGHBHEH3OW -s 10.44.0.75/32 -m comment --comment "kube-system/fluent-bit:" -j KUBE-MARK-MASQ
-A KUBE-SEP-5VHD65JGHBHEH3OW -p tcp -m comment --comment "kube-system/fluent-bit:" -m tcp -j DNAT --to-destination 10.44.0.75:24224
-A KUBE-SEP-6CEMAHSACHX7EJ7U -s 10.44.0.79/32 -m comment --comment "default/ui:" -j KUBE-MARK-MASQ
-A KUBE-SEP-6CEMAHSACHX7EJ7U -p tcp -m comment --comment "default/ui:" -m tcp -j DNAT --to-destination 10.44.0.79:8085
-A KUBE-SEP-6W2PTZRXA3PIDZ55 -s 192.168.70.230/32 -m comment --comment "kube-system/glusterfs-cluster-system:" -j KUBE-MARK-MASQ
-A KUBE-SEP-6W2PTZRXA3PIDZ55 -p tcp -m comment --comment "kube-system/glusterfs-cluster-system:" -m tcp -j DNAT --to-destination 192.168.70.230:9091
-A KUBE-SEP-6WT2QPM3S5LK4PNK -s 10.44.0.95/32 -m comment --comment "kube-system/tiller-deploy:tiller" -j KUBE-MARK-MASQ
-A KUBE-SEP-6WT2QPM3S5LK4PNK -p tcp -m comment --comment "kube-system/tiller-deploy:tiller" -m tcp -j DNAT --to-destination 10.44.0.95:44134
-A KUBE-SEP-7DAQXCZAGL6F2WLE -s 172.16.71.11/32 -m comment --comment "default/dmz-nginx-ingress-controller:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-7DAQXCZAGL6F2WLE -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:https" -m tcp -j DNAT --to-destination 172.16.71.11:443
-A KUBE-SEP-7KP7D5HJPDVZ255S -s 10.44.0.77/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-7KP7D5HJPDVZ255S -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.44.0.77:53
-A KUBE-SEP-B3V2WOWBYTP3PD3L -s 10.36.0.85/32 -m comment --comment "kube-system/mon-kubernetes-dashboard:" -j KUBE-MARK-MASQ
-A KUBE-SEP-B3V2WOWBYTP3PD3L -p tcp -m comment --comment "kube-system/mon-kubernetes-dashboard:" -m tcp -j DNAT --to-destination 10.36.0.85:8443
-A KUBE-SEP-BBDC6QBC4WFRDAZF -s 10.44.0.88/32 -m comment --comment "kube-system/monitoring-influxdb:" -j KUBE-MARK-MASQ
-A KUBE-SEP-BBDC6QBC4WFRDAZF -p tcp -m comment --comment "kube-system/monitoring-influxdb:" -m tcp -j DNAT --to-destination 10.44.0.88:8086
-A KUBE-SEP-D6ROLV4NHZ77JI47 -s 172.16.71.11/32 -m comment --comment "default/dmz-nginx-ingress-controller:2222-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-D6ROLV4NHZ77JI47 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:2222-tcp" -m tcp -j DNAT --to-destination 172.16.71.11:2222
-A KUBE-SEP-DWJ53WG5H2T7VC2T -s 172.16.71.11/32 -m comment --comment "default/dmz-nginx-ingress-controller-stats:stats" -j KUBE-MARK-MASQ
-A KUBE-SEP-DWJ53WG5H2T7VC2T -p tcp -m comment --comment "default/dmz-nginx-ingress-controller-stats:stats" -m tcp -j DNAT --to-destination 172.16.71.11:18080
-A KUBE-SEP-E3QV3WKMDKT5BIMM -s 10.32.0.17/32 -m comment --comment "kube-system/fluent-bit:" -j KUBE-MARK-MASQ
-A KUBE-SEP-E3QV3WKMDKT5BIMM -p tcp -m comment --comment "kube-system/fluent-bit:" -m tcp -j DNAT --to-destination 10.32.0.17:24224
-A KUBE-SEP-EQU7W3UR3KH7R56E -s 172.16.71.11/32 -m comment --comment "default/dmz-nginx-ingress-controller:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-EQU7W3UR3KH7R56E -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:http" -m tcp -j DNAT --to-destination 172.16.71.11:80
-A KUBE-SEP-EZUERM5EL56PPGZ5 -s 10.44.0.81/32 -m comment --comment "default/hazelcast:hzport" -j KUBE-MARK-MASQ
-A KUBE-SEP-EZUERM5EL56PPGZ5 -p tcp -m comment --comment "default/hazelcast:hzport" -m tcp -j DNAT --to-destination 10.44.0.81:5701
-A KUBE-SEP-FJZZMNOWVSDGOSM3 -s 10.32.0.18/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-FJZZMNOWVSDGOSM3 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.18:53
-A KUBE-SEP-GJFNYPBSZIUQLAEL -s 10.44.0.78/32 -m comment --comment "default/sonus:" -j KUBE-MARK-MASQ
-A KUBE-SEP-GJFNYPBSZIUQLAEL -p tcp -m comment --comment "default/sonus:" -m tcp -j DNAT --to-destination 10.44.0.78:9010
-A KUBE-SEP-JAB644JEIH72D7NY -s 10.44.0.93/32 -m comment --comment "kube-system/heapster:" -j KUBE-MARK-MASQ
-A KUBE-SEP-JAB644JEIH72D7NY -p tcp -m comment --comment "kube-system/heapster:" -m tcp -j DNAT --to-destination 10.44.0.93:8082
-A KUBE-SEP-JVZTSKHLN3KBIP5L -s 10.44.0.76/32 -m comment --comment "default/auth:" -j KUBE-MARK-MASQ
-A KUBE-SEP-JVZTSKHLN3KBIP5L -p tcp -m comment --comment "default/auth:" -m tcp -j DNAT --to-destination 10.44.0.76:9191
-A KUBE-SEP-KH5PW5K2NW74EL6U -s 10.36.0.81/32 -m comment --comment "kube-system/fluent-bit:" -j KUBE-MARK-MASQ
-A KUBE-SEP-KH5PW5K2NW74EL6U -p tcp -m comment --comment "kube-system/fluent-bit:" -m tcp -j DNAT --to-destination 10.36.0.81:24224
-A KUBE-SEP-LGQVMCXJGZVPU2ZD -s 172.17.0.3/32 -m comment --comment "default/dmz-nginx-ingress-default-backend:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-LGQVMCXJGZVPU2ZD -p tcp -m comment --comment "default/dmz-nginx-ingress-default-backend:http" -m tcp -j DNAT --to-destination 172.17.0.3:8080
-A KUBE-SEP-MS7CXDNM35F2O7EZ -s 192.168.70.231/32 -m comment --comment "default/glusterfs-cluster:" -j KUBE-MARK-MASQ
-A KUBE-SEP-MS7CXDNM35F2O7EZ -p tcp -m comment --comment "default/glusterfs-cluster:" -m tcp -j DNAT --to-destination 192.168.70.231:9090
-A KUBE-SEP-MVHGEIUXD4RAT7UH -s 192.168.70.230/32 -m comment --comment "rate-jobs/glusterfs-cluster:" -j KUBE-MARK-MASQ
-A KUBE-SEP-MVHGEIUXD4RAT7UH -p tcp -m comment --comment "rate-jobs/glusterfs-cluster:" -m tcp -j DNAT --to-destination 192.168.70.230:9092
-A KUBE-SEP-N2JFJLJRMCNZGU3T -s 192.168.70.232/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-N2JFJLJRMCNZGU3T -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-N2JFJLJRMCNZGU3T --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 192.168.70.232:6443
-A KUBE-SEP-NAIHDCL2YFPCGYUR -s 10.44.0.77/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-NAIHDCL2YFPCGYUR -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.44.0.77:53
-A KUBE-SEP-ORRCHWATARHYOER6 -s 10.44.0.90/32 -m comment --comment "kube-system/kibana-logging:" -j KUBE-MARK-MASQ
-A KUBE-SEP-ORRCHWATARHYOER6 -p tcp -m comment --comment "kube-system/kibana-logging:" -m tcp -j DNAT --to-destination 10.44.0.90:5601
-A KUBE-SEP-RAJQ32SYKW6X5VTL -s 10.44.0.80/32 -m comment --comment "default/common:" -j KUBE-MARK-MASQ
-A KUBE-SEP-RAJQ32SYKW6X5VTL -p tcp -m comment --comment "default/common:" -m tcp -j DNAT --to-destination 10.44.0.80:8083
-A KUBE-SEP-RBKQ3UO2BD5574UU -s 10.44.0.82/32 -m comment --comment "default/hazelcast:hzport" -j KUBE-MARK-MASQ
-A KUBE-SEP-RBKQ3UO2BD5574UU -p tcp -m comment --comment "default/hazelcast:hzport" -m tcp -j DNAT --to-destination 10.44.0.82:5701
-A KUBE-SEP-RDFVHZ3YEYWMVRZR -s 172.17.0.4/32 -m comment --comment "hub/hub-docker-registry:registry" -j KUBE-MARK-MASQ
-A KUBE-SEP-RDFVHZ3YEYWMVRZR -p tcp -m comment --comment "hub/hub-docker-registry:registry" -m tcp -j DNAT --to-destination 172.17.0.4:5000
-A KUBE-SEP-S7WJ4KDV72DZRKXY -s 10.32.0.18/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-S7WJ4KDV72DZRKXY -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.18:53
-A KUBE-SEP-TQR4NGDI7QBPAEOB -s 10.36.0.95/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-TQR4NGDI7QBPAEOB -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.36.0.95:53
-A KUBE-SEP-UJC7W3YPF5XF2GWW -s 192.168.70.231/32 -m comment --comment "rate-jobs/glusterfs-cluster:" -j KUBE-MARK-MASQ
-A KUBE-SEP-UJC7W3YPF5XF2GWW -p tcp -m comment --comment "rate-jobs/glusterfs-cluster:" -m tcp -j DNAT --to-destination 192.168.70.231:9092
-A KUBE-SEP-VXGODP34ZKHNR4NT -s 10.36.0.95/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-VXGODP34ZKHNR4NT -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.36.0.95:53
-A KUBE-SEP-WQJGR5GL3DRFNR6C -s 10.44.0.96/32 -m comment --comment "kube-system/monitoring-grafana:" -j KUBE-MARK-MASQ
-A KUBE-SEP-WQJGR5GL3DRFNR6C -p tcp -m comment --comment "kube-system/monitoring-grafana:" -m tcp -j DNAT --to-destination 10.44.0.96:3000
-A KUBE-SEP-XDKQFSDP7SK3DG4G -s 10.44.0.92/32 -m comment --comment "kube-system/elasticsearch-logging:" -j KUBE-MARK-MASQ
-A KUBE-SEP-XDKQFSDP7SK3DG4G -p tcp -m comment --comment "kube-system/elasticsearch-logging:" -m tcp -j DNAT --to-destination 10.44.0.92:9200
-A KUBE-SEP-YE6T6BNVTOPUMLPP -s 172.16.71.11/32 -m comment --comment "default/dmz-nginx-ingress-controller-metrics:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-YE6T6BNVTOPUMLPP -p tcp -m comment --comment "default/dmz-nginx-ingress-controller-metrics:metrics" -m tcp -j DNAT --to-destination 172.16.71.11:10254
-A KUBE-SEP-YR4QPLQBXIPH5XEL -s 192.168.70.230/32 -m comment --comment "default/glusterfs-cluster:" -j KUBE-MARK-MASQ
-A KUBE-SEP-YR4QPLQBXIPH5XEL -p tcp -m comment --comment "default/glusterfs-cluster:" -m tcp -j DNAT --to-destination 192.168.70.230:9090
-A KUBE-SEP-YWGOVFCG7K5KS42B -s 10.44.0.55/32 -m comment --comment "default/gateway:" -j KUBE-MARK-MASQ
-A KUBE-SEP-YWGOVFCG7K5KS42B -p tcp -m comment --comment "default/gateway:" -m tcp -j DNAT --to-destination 10.44.0.55:8080
-A KUBE-SERVICES -d 10.98.219.96/32 -p tcp -m comment --comment "default/auth: cluster IP" -m tcp --dport 9191 -j KUBE-SVC-5E6NQUE2DDZQYXK4
-A KUBE-SERVICES -d 10.99.215.43/32 -p tcp -m comment --comment "kube-system/elasticsearch-logging: cluster IP" -m tcp --dport 9200 -j KUBE-SVC-7BB4GED2QYDGC4GN
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.102.143.89/32 -p tcp -m comment --comment "rate-jobs/glusterfs-cluster: cluster IP" -m tcp --dport 9092 -j KUBE-SVC-XWYU5Y3HAZFCHZ5X
-A KUBE-SERVICES -d 10.110.234.188/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:http cluster IP" -m tcp --dport 80 -j KUBE-SVC-ITDPLE4X2C3GITQN
-A KUBE-SERVICES -d 46.17.72.162/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:http external IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 46.17.72.162/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:http external IP" -m tcp --dport 80 -m physdev ! --physdev-is-in -m addrtype ! --src-type LOCAL -j KUBE-SVC-ITDPLE4X2C3GITQN
-A KUBE-SERVICES -d 46.17.72.162/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:http external IP" -m tcp --dport 80 -m addrtype --dst-type LOCAL -j KUBE-SVC-ITDPLE4X2C3GITQN
-A KUBE-SERVICES -d 10.97.209.16/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-default-backend:http cluster IP" -m tcp --dport 80 -j KUBE-SVC-X7DVPRUYWJS6MW3F
-A KUBE-SERVICES -d 46.17.72.162/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-default-backend:http external IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 46.17.72.162/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-default-backend:http external IP" -m tcp --dport 80 -m physdev ! --physdev-is-in -m addrtype ! --src-type LOCAL -j KUBE-SVC-X7DVPRUYWJS6MW3F
-A KUBE-SERVICES -d 46.17.72.162/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-default-backend:http external IP" -m tcp --dport 80 -m addrtype --dst-type LOCAL -j KUBE-SVC-X7DVPRUYWJS6MW3F
-A KUBE-SERVICES -d 10.111.84.201/32 -p tcp -m comment --comment "kube-system/kibana-logging: cluster IP" -m tcp --dport 5601 -j KUBE-SVC-IUGSYNO5UD5ENGHX
-A KUBE-SERVICES -d 10.100.13.121/32 -p tcp -m comment --comment "kube-system/monitoring-grafana: cluster IP" -m tcp --dport 80 -j KUBE-SVC-JRXTEHDDTAFMSEAS
-A KUBE-SERVICES -d 10.102.10.101/32 -p tcp -m comment --comment "kube-system/glusterfs-cluster-system: cluster IP" -m tcp --dport 9091 -j KUBE-SVC-XAHVVRAYYBPGG4CC
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.162.218/32 -p tcp -m comment --comment "kube-system/mon-kubernetes-dashboard: cluster IP" -m tcp --dport 443 -j KUBE-SVC-CR2BJB4IXVLPMKXG
-A KUBE-SERVICES -d 10.110.234.188/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:2222-tcp cluster IP" -m tcp --dport 2222 -j KUBE-SVC-5E6L6IMJXXB5PRYI
-A KUBE-SERVICES -d 46.17.72.162/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:2222-tcp external IP" -m tcp --dport 2222 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 46.17.72.162/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:2222-tcp external IP" -m tcp --dport 2222 -m physdev ! --physdev-is-in -m addrtype ! --src-type LOCAL -j KUBE-SVC-5E6L6IMJXXB5PRYI
-A KUBE-SERVICES -d 46.17.72.162/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:2222-tcp external IP" -m tcp --dport 2222 -m addrtype --dst-type LOCAL -j KUBE-SVC-5E6L6IMJXXB5PRYI
-A KUBE-SERVICES -d 10.108.180.123/32 -p tcp -m comment --comment "kube-system/tiller-deploy:tiller cluster IP" -m tcp --dport 44134 -j KUBE-SVC-K7J76NXP7AUZVFGS
-A KUBE-SERVICES -d 10.106.153.70/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller-stats:stats cluster IP" -m tcp --dport 18080 -j KUBE-SVC-WUIYBZBCYLXPRZMM
-A KUBE-SERVICES -d 10.97.204.29/32 -p tcp -m comment --comment "default/hazelcast:hzport cluster IP" -m tcp --dport 5701 -j KUBE-SVC-2TWB3W26JFEUCPTC
-A KUBE-SERVICES -d 10.110.233.145/32 -p tcp -m comment --comment "default/sonus: cluster IP" -m tcp --dport 9010 -j KUBE-SVC-FQZVPXPAVNSYM4O6
-A KUBE-SERVICES -d 10.106.124.123/32 -p tcp -m comment --comment "default/glusterfs-cluster: cluster IP" -m tcp --dport 9090 -j KUBE-SVC-6NXSGHI464OKUGD3
-A KUBE-SERVICES -d 10.104.77.93/32 -p tcp -m comment --comment "hub/hub-docker-registry:registry cluster IP" -m tcp --dport 5000 -j KUBE-SVC-ZEJADLV2PZJ4IDUH
-A KUBE-SERVICES -d 10.108.182.166/32 -p tcp -m comment --comment "kube-system/heapster: cluster IP" -m tcp --dport 80 -j KUBE-SVC-BJM46V3U5RZHCFRZ
-A KUBE-SERVICES -d 10.103.4.107/32 -p tcp -m comment --comment "default/ui: cluster IP" -m tcp --dport 8085 -j KUBE-SVC-RKD3UH2OQ33CLZE3
-A KUBE-SERVICES -d 10.99.215.196/32 -p tcp -m comment --comment "kube-system/fluent-bit: cluster IP" -m tcp --dport 24224 -j KUBE-SVC-HNUXZZTQTMLMGJUH
-A KUBE-SERVICES -d 10.109.97.78/32 -p tcp -m comment --comment "kube-system/monitoring-influxdb: cluster IP" -m tcp --dport 8086 -j KUBE-SVC-Q6XJQ2I55QTBQCWT
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.107.41.164/32 -p tcp -m comment --comment "default/common: cluster IP" -m tcp --dport 8083 -j KUBE-SVC-JBRDQBMTQUIIYW2F
-A KUBE-SERVICES -d 10.107.248.201/32 -p tcp -m comment --comment "default/gateway: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-ZW2RLD2NAMM5L3LY
-A KUBE-SERVICES -d 10.104.251.108/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller-metrics:metrics cluster IP" -m tcp --dport 9913 -j KUBE-SVC-R4UENAYGNDN2K6E7
-A KUBE-SERVICES -d 10.110.234.188/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-5O4RX4I7LOPQUYD5
-A KUBE-SERVICES -d 46.17.72.162/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:https external IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 46.17.72.162/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:https external IP" -m tcp --dport 443 -m physdev ! --physdev-is-in -m addrtype ! --src-type LOCAL -j KUBE-SVC-5O4RX4I7LOPQUYD5
-A KUBE-SERVICES -d 46.17.72.162/32 -p tcp -m comment --comment "default/dmz-nginx-ingress-controller:https external IP" -m tcp --dport 443 -m addrtype --dst-type LOCAL -j KUBE-SVC-5O4RX4I7LOPQUYD5
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-2TWB3W26JFEUCPTC -m comment --comment "default/hazelcast:hzport" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-EZUERM5EL56PPGZ5
-A KUBE-SVC-2TWB3W26JFEUCPTC -m comment --comment "default/hazelcast:hzport" -j KUBE-SEP-RBKQ3UO2BD5574UU
-A KUBE-SVC-5E6L6IMJXXB5PRYI -m comment --comment "default/dmz-nginx-ingress-controller:2222-tcp" -j KUBE-SEP-D6ROLV4NHZ77JI47
-A KUBE-SVC-5E6NQUE2DDZQYXK4 -m comment --comment "default/auth:" -j KUBE-SEP-JVZTSKHLN3KBIP5L
-A KUBE-SVC-5O4RX4I7LOPQUYD5 -m comment --comment "default/dmz-nginx-ingress-controller:https" -j KUBE-SEP-7DAQXCZAGL6F2WLE
-A KUBE-SVC-6NXSGHI464OKUGD3 -m comment --comment "default/glusterfs-cluster:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YR4QPLQBXIPH5XEL
-A KUBE-SVC-6NXSGHI464OKUGD3 -m comment --comment "default/glusterfs-cluster:" -j KUBE-SEP-MS7CXDNM35F2O7EZ
-A KUBE-SVC-7BB4GED2QYDGC4GN -m comment --comment "kube-system/elasticsearch-logging:" -j KUBE-SEP-XDKQFSDP7SK3DG4G
-A KUBE-SVC-BJM46V3U5RZHCFRZ -m comment --comment "kube-system/heapster:" -j KUBE-SEP-JAB644JEIH72D7NY
-A KUBE-SVC-CR2BJB4IXVLPMKXG -m comment --comment "kube-system/mon-kubernetes-dashboard:" -j KUBE-SEP-B3V2WOWBYTP3PD3L
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-S7WJ4KDV72DZRKXY
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-VXGODP34ZKHNR4NT
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-NAIHDCL2YFPCGYUR
-A KUBE-SVC-FQZVPXPAVNSYM4O6 -m comment --comment "default/sonus:" -j KUBE-SEP-GJFNYPBSZIUQLAEL
-A KUBE-SVC-HNUXZZTQTMLMGJUH -m comment --comment "kube-system/fluent-bit:" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-E3QV3WKMDKT5BIMM
-A KUBE-SVC-HNUXZZTQTMLMGJUH -m comment --comment "kube-system/fluent-bit:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-KH5PW5K2NW74EL6U
-A KUBE-SVC-HNUXZZTQTMLMGJUH -m comment --comment "kube-system/fluent-bit:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-5VHD65JGHBHEH3OW
-A KUBE-SVC-HNUXZZTQTMLMGJUH -m comment --comment "kube-system/fluent-bit:" -j KUBE-SEP-2T2JSMHSSEXICVWQ
-A KUBE-SVC-ITDPLE4X2C3GITQN -m comment --comment "default/dmz-nginx-ingress-controller:http" -j KUBE-SEP-EQU7W3UR3KH7R56E
-A KUBE-SVC-IUGSYNO5UD5ENGHX -m comment --comment "kube-system/kibana-logging:" -j KUBE-SEP-ORRCHWATARHYOER6
-A KUBE-SVC-JBRDQBMTQUIIYW2F -m comment --comment "default/common:" -j KUBE-SEP-RAJQ32SYKW6X5VTL
-A KUBE-SVC-JRXTEHDDTAFMSEAS -m comment --comment "kube-system/monitoring-grafana:" -j KUBE-SEP-WQJGR5GL3DRFNR6C
-A KUBE-SVC-K7J76NXP7AUZVFGS -m comment --comment "kube-system/tiller-deploy:tiller" -j KUBE-SEP-6WT2QPM3S5LK4PNK
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-N2JFJLJRMCNZGU3T --mask 255.255.255.255 --rsource -j KUBE-SEP-N2JFJLJRMCNZGU3T
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-N2JFJLJRMCNZGU3T
-A KUBE-SVC-Q6XJQ2I55QTBQCWT -m comment --comment "kube-system/monitoring-influxdb:" -j KUBE-SEP-BBDC6QBC4WFRDAZF
-A KUBE-SVC-R4UENAYGNDN2K6E7 -m comment --comment "default/dmz-nginx-ingress-controller-metrics:metrics" -j KUBE-SEP-YE6T6BNVTOPUMLPP
-A KUBE-SVC-RKD3UH2OQ33CLZE3 -m comment --comment "default/ui:" -j KUBE-SEP-6CEMAHSACHX7EJ7U
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-FJZZMNOWVSDGOSM3
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-TQR4NGDI7QBPAEOB
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-7KP7D5HJPDVZ255S
-A KUBE-SVC-WUIYBZBCYLXPRZMM -m comment --comment "default/dmz-nginx-ingress-controller-stats:stats" -j KUBE-SEP-DWJ53WG5H2T7VC2T
-A KUBE-SVC-X7DVPRUYWJS6MW3F -m comment --comment "default/dmz-nginx-ingress-default-backend:http" -j KUBE-SEP-LGQVMCXJGZVPU2ZD
-A KUBE-SVC-XAHVVRAYYBPGG4CC -m comment --comment "kube-system/glusterfs-cluster-system:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-6W2PTZRXA3PIDZ55
-A KUBE-SVC-XAHVVRAYYBPGG4CC -m comment --comment "kube-system/glusterfs-cluster-system:" -j KUBE-SEP-4FAFKUO5PDFMMDJZ
-A KUBE-SVC-XWYU5Y3HAZFCHZ5X -m comment --comment "rate-jobs/glusterfs-cluster:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-MVHGEIUXD4RAT7UH
-A KUBE-SVC-XWYU5Y3HAZFCHZ5X -m comment --comment "rate-jobs/glusterfs-cluster:" -j KUBE-SEP-UJC7W3YPF5XF2GWW
-A KUBE-SVC-ZEJADLV2PZJ4IDUH -m comment --comment "hub/hub-docker-registry:registry" -j KUBE-SEP-RDFVHZ3YEYWMVRZR
-A KUBE-SVC-ZW2RLD2NAMM5L3LY -m comment --comment "default/gateway:" -j KUBE-SEP-YWGOVFCG7K5KS42B
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Mon Sep  3 14:12:05 2018
# Generated by iptables-save v1.4.21 on Mon Sep  3 14:12:05 2018
*filter
:INPUT ACCEPT [90:32607]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [92:16055]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -o weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "flux/flux: has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 32629 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "kube-system/prometheus-service: has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 30000 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "grafana/cm-grafana-tls-kusoq:http has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 32592 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-SERVICES -d 10.109.72.150/32 -p tcp -m comment --comment "default/hazel-management-service: has no endpoints" -m tcp --dport 8080 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.105.194.19/32 -p tcp -m comment --comment "default/ace: has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.106.241.185/32 -p tcp -m comment --comment "flux/flux: has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.98.70.92/32 -p tcp -m comment --comment "prometheus/prometheus-alertmanager:http has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.110.180.16/32 -p tcp -m comment --comment "default/bootadmin: has no endpoints" -m tcp --dport 8082 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.97.156.213/32 -p tcp -m comment --comment "default/email-fetcher-sell: has no endpoints" -m tcp --dport 8092 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.181.215/32 -p tcp -m comment --comment "default/rate-parser: has no endpoints" -m tcp --dport 8087 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.100.50.102/32 -p tcp -m comment --comment "default/rabbitmq:amqp-port has no endpoints" -m tcp --dport 5672 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.100.50.102/32 -p tcp -m comment --comment "default/rabbitmq:mgmt-port has no endpoints" -m tcp --dport 15672 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.97.224.87/32 -p tcp -m comment --comment "default/cdrimport:cdrimport has no endpoints" -m tcp --dport 8080 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.104.192.63/32 -p tcp -m comment --comment "prometheus/prometheus-pushgateway:http has no endpoints" -m tcp --dport 9091 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.99.75.239/32 -p tcp -m comment --comment "kube-system/prometheus-service: has no endpoints" -m tcp --dport 9090 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.99.236/32 -p tcp -m comment --comment "default/rate-server-job: has no endpoints" -m tcp --dport 8086 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.106.75.32/32 -p tcp -m comment --comment "default/rate-buy: has no endpoints" -m tcp --dport 8089 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.105.12.181/32 -p tcp -m comment --comment "default/email-fetcher-buy: has no endpoints" -m tcp --dport 8091 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.106.54.227/32 -p tcp -m comment --comment "default/rate-sell: has no endpoints" -m tcp --dport 8084 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.106.3.8/32 -p tcp -m comment --comment "grafana/cm-grafana-tls-kusoq:http has no endpoints" -m tcp --dport 8089 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.99.18.130/32 -p tcp -m comment --comment "prometheus/prometheus-server:http has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.107.78.174/32 -p tcp -m comment --comment "default/notify: has no endpoints" -m tcp --dport 9011 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.45.102/32 -p tcp -m comment --comment "grafana/grafana:service has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.106.152.75/32 -p tcp -m comment --comment "kube-system/docker-registry:registry has no endpoints" -m tcp --dport 5000 -j REJECT --reject-with icmp-port-unreachable
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC -m set ! --match-set weave-local-pods dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-E.1.0W^NGSp]0_t5WwH/]gX@L dst -m comment --comment "DefaultAllow isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-{:0H+H[*m}fgriHv%.OP^S=(U dst -m comment --comment "DefaultAllow isolation for namespace: gitlab" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-YgP/w8oy:.|@DL0zbTkRYvngS dst -m comment --comment "DefaultAllow isolation for namespace: grafana" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-c(0OtpYZYdGHPcpUCzKY;DKhj dst -m comment --comment "DefaultAllow isolation for namespace: hub" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-}6c5!v0DzyABi!o;xECv{g0N_ dst -m comment --comment "DefaultAllow isolation for namespace: prometheus" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-!{J3zam6S_P8]#xPo}E28I!pd dst -m comment --comment "DefaultAllow isolation for namespace: rate-jobs" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-5TyI|mk/)b{Evv#uP~Ppl.6EV dst -m comment --comment "DefaultAllow isolation for namespace: vmware" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-)QlcpgF%gBprY=ov]e=XR4dD_ dst -m comment --comment "DefaultAllow isolation for namespace: flux" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-5EOQqp?$*llGgXygq!yv|rNDg dst -m comment --comment "DefaultAllow isolation for namespace: jenkins" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-0EHD/vdN#O4]V?o4Tx7kS;APH dst -m comment --comment "DefaultAllow isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-?b%zl9GIe0AET1(QI^7NWe*fO dst -m comment --comment "DefaultAllow isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-O6n3/lObK~qGu[7olR9UaL2hA dst -m comment --comment "DefaultAllow isolation for namespace: panel-gitlab" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-ZVi%*+sg.quw+[m].KfxT#VUX dst -m comment --comment "DefaultAllow isolation for namespace: rabbitmq" -j ACCEPT
COMMIT
# Completed on Mon Sep  3 14:12:05 2018
bboreham (Contributor) commented Sep 3, 2018

Do you have the logs from the node with ID {da:68:9b:b7:51:65 engine02}, from the time of the failure?
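
A minimal sketch for locating that node's weave pod and pulling its log, assuming the DaemonSet pods carry the standard name=weave-net label:

# List weave pods with the node each one runs on, then fetch the log from the pod on engine02
kubectl -n kube-system get pods -l name=weave-net -o wide
kubectl -n kube-system logs <weave-pod-on-engine02> -c weave > engine02-weave.log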

shahbour (Author) commented Sep 3, 2018

Yes, below is the log; the error started today at 2018/09/03 02:01:09.890616, and I copied the log from yesterday until after the error started.

Engine02 is still working; Engine01 is the one that stopped. Unfortunately I restarted it several times, so I don't know if I can still capture the log from Engine01.
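
For next time, a minimal sketch for saving the weave logs from every node before a restart discards them (again assuming the name=weave-net label; --previous only helps if the container was restarted in place, not if the pod was recreated or the node rebooted):

# Save current and, where available, previous weave container logs from all weave-net pods
for p in $(kubectl -n kube-system get pods -l name=weave-net -o jsonpath='{.items[*].metadata.name}'); do
  kubectl -n kube-system logs "$p" -c weave            > "${p}.log"
  kubectl -n kube-system logs "$p" -c weave --previous > "${p}.previous.log" 2>/dev/null || true
done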

INFO: 2018/09/02 07:57:39.947749 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 08:20:13.514069 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 08:37:49.940102 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 08:37:56.897223 Discovered remote MAC 82:a8:ee:f9:43:c7 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 08:38:31.620601 Discovered remote MAC 5e:76:e7:30:8a:82 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 08:38:46.654588 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 09:03:47.850445 Discovered remote MAC 82:a8:ee:f9:43:c7 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 09:05:20.679644 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 09:05:39.610444 Discovered remote MAC 82:e1:db:87:bb:cc at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 09:14:53.087907 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 09:30:17.859595 Discovered remote MAC 82:a8:ee:f9:43:c7 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 09:30:29.405022 Discovered remote MAC 5e:76:e7:30:8a:82 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 09:40:28.296713 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 09:57:59.148413 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 10:11:11.402068 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 10:11:11.403067 Discovered remote MAC 4e:8d:c6:c5:b7:3f at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 10:35:20.129633 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 10:35:34.337247 Discovered remote MAC ba:2b:4b:02:ba:33 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 10:35:34.422560 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 10:35:34.423791 Discovered remote MAC 4e:8d:c6:c5:b7:3f at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 11:00:03.429587 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 11:19:21.107324 Discovered remote MAC 4e:8d:c6:c5:b7:3f at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 11:19:27.064823 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 11:22:51.489002 Discovered remote MAC aa:6c:ff:c8:67:8e at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 11:38:30.310062 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 11:58:18.597562 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 11:58:29.286691 Discovered remote MAC 82:e1:db:87:bb:cc at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 12:20:36.871167 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 12:50:26.713598 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 13:01:04.090857 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 13:01:09.820950 Discovered remote MAC 82:a8:ee:f9:43:c7 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 13:02:26.574532 Discovered remote MAC 5e:76:e7:30:8a:82 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 13:06:24.541316 Discovered remote MAC 82:e1:db:87:bb:cc at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 13:17:29.496544 Discovered remote MAC 5e:76:e7:30:8a:82 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 13:58:37.699882 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 14:20:25.993693 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 14:37:33.032799 Weave version 2.4.0 is available; please update at https://github.com/weaveworks/weave/releases/download/v2.4.0/weave
INFO: 2018/09/02 14:49:08.591552 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 14:58:32.178835 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 15:12:56.212298 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 15:20:35.562991 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 15:41:36.134000 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 15:48:40.129956 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 15:58:01.012097 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 16:18:34.035792 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 16:18:52.965142 Discovered remote MAC 4e:8d:c6:c5:b7:3f at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 16:18:59.950369 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 16:19:12.421304 Discovered remote MAC 82:a8:ee:f9:43:c7 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 16:20:05.788052 Discovered remote MAC 5e:76:e7:30:8a:82 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 16:34:18.023375 Discovered remote MAC 82:a8:ee:f9:43:c7 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 16:34:36.652489 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 16:35:04.349994 Discovered remote MAC 5e:76:e7:30:8a:82 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 16:38:33.034092 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 16:57:59.565781 Discovered remote MAC 5e:76:e7:30:8a:82 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 17:20:24.768960 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 17:57:15.534162 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 18:18:34.075730 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 18:51:24.402316 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 19:20:13.865525 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 19:43:35.303324 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 19:58:35.155542 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 20:51:58.973838 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 21:25:49.018756 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 21:35:35.338692 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 21:35:38.376224 Discovered remote MAC 82:a8:ee:f9:43:c7 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 21:41:23.881150 Discovered remote MAC 82:e1:db:87:bb:cc at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 21:58:54.842481 Weave version 2.4.0 is available; please update at https://github.com/weaveworks/weave/releases/download/v2.4.0/weave
INFO: 2018/09/02 21:59:54.278181 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 22:28:44.045935 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 23:20:22.198513 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 23:42:51.907136 Discovered remote MAC 4e:7c:ec:9a:fe:c6 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 23:44:25.715324 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/02 23:45:46.780895 Discovered remote MAC 82:a8:ee:f9:43:c7 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 23:49:40.447411 Discovered remote MAC 5e:76:e7:30:8a:82 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/02 23:58:28.189598 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 00:06:37.413210 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/03 00:17:27.859132 Discovered remote MAC aa:6c:ff:c8:67:8e at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 00:20:21.749392 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/03 00:35:48.205470 Discovered remote MAC 82:a8:ee:f9:43:c7 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 00:58:38.023448 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 00:58:38.239916 Discovered remote MAC 82:e1:db:87:bb:cc at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 01:31:16.392851 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 01:58:47.719606 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:01:09.890616 ->[192.168.70.230:36422|72:be:1a:b8:98:77(engine01)]: connection shutting down due to error: read tcp4 192.168.70.231:6783->192.168.70.230:36422: read: connection reset by peer
INFO: 2018/09/03 02:01:10.049001 ->[192.168.70.230:36422|72:be:1a:b8:98:77(engine01)]: connection deleted
INFO: 2018/09/03 02:01:10.049884 ->[192.168.70.230:6783] attempting connection
INFO: 2018/09/03 02:01:10.050732 ->[192.168.70.230:6783] error during connection attempt: dial tcp4 :0->192.168.70.230:6783: connect: connection refused
INFO: 2018/09/03 02:01:10.282928 Removed unreachable peer 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:01:32.156972 ->[192.168.70.230:6783] attempting connection
INFO: 2018/09/03 02:01:32.157686 ->[192.168.70.230:6783] error during connection attempt: dial tcp4 :0->192.168.70.230:6783: connect: connection refused
INFO: 2018/09/03 02:01:53.887483 ->[192.168.70.230:6783] attempting connection
INFO: 2018/09/03 02:01:53.888291 ->[192.168.70.230:6783] error during connection attempt: dial tcp4 :0->192.168.70.230:6783: connect: connection refused
INFO: 2018/09/03 02:01:59.701008 ->[192.168.70.230:41901] connection accepted
INFO: 2018/09/03 02:01:59.783638 ->[192.168.70.230:41901|72:be:1a:b8:98:77(engine01)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:01:59.785635 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 02:01:59.785733 ->[192.168.70.230:41901|72:be:1a:b8:98:77(engine01)]: connection added (new peer)
INFO: 2018/09/03 02:01:59.788291 ->[192.168.70.230:41901|72:be:1a:b8:98:77(engine01)]: connection fully established
INFO: 2018/09/03 02:01:59.834532 Discovered remote MAC 3a:be:9e:66:51:59 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:01:59.836940 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:01:59.837778 sleeve ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:01:59.970644 Discovered remote MAC f6:31:e4:ec:f0:b1 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:01:59.970794 Discovered remote MAC 12:c8:48:21:7f:8a at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:01:59.971190 Discovered remote MAC 32:83:33:d0:ab:fb at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:01:59.972070 Discovered remote MAC ae:8f:dc:e6:af:5e at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:01:59.972692 Discovered remote MAC ce:96:48:f8:ef:b9 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:02:00.259710 Discovered remote MAC 72:be:1a:b8:98:77 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:02:00.459619 Discovered remote MAC de:12:18:f8:59:78 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:02:00.464077 Discovered remote MAC 8a:ae:60:1a:bb:f7 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:02:00.480709 Discovered remote MAC 22:53:7b:91:50:9a at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:02:00.630096 ->[192.168.70.230:41901|72:be:1a:b8:98:77(engine01)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:02:00.630392 ->[192.168.70.230:41901|72:be:1a:b8:98:77(engine01)]: connection deleted
INFO: 2018/09/03 02:02:01.109592 Discovered remote MAC 72:46:9b:71:91:06 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:02:01.914453 Discovered remote MAC de:50:2a:17:e6:db at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:02:03.252788 ->[192.168.70.230:36418] connection accepted
INFO: 2018/09/03 02:02:03.254390 ->[192.168.70.230:36418|72:be:1a:b8:98:77(engine01)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:02:03.254807 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 02:02:03.254890 ->[192.168.70.230:36418|72:be:1a:b8:98:77(engine01)]: connection added
INFO: 2018/09/03 02:02:03.260519 ->[192.168.70.230:36418|72:be:1a:b8:98:77(engine01)]: connection fully established
INFO: 2018/09/03 02:02:03.265140 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:02:03.266109 sleeve ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:02:03.997683 Discovered remote MAC 5e:aa:46:e4:99:6c at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:02:05.118862 ->[172.16.71.11:54661|9e:1f:f3:46:dd:12(engine03)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:02:05.119057 ->[172.16.71.11:54661|9e:1f:f3:46:dd:12(engine03)]: connection deleted
INFO: 2018/09/03 02:02:05.120796 ->[172.16.71.11:55641] connection accepted
INFO: 2018/09/03 02:02:05.121123 ->[172.16.71.11:6783] attempting connection
INFO: 2018/09/03 02:02:05.121799 ->[172.16.71.11:6783] error during connection attempt: dial tcp4 :0->172.16.71.11:6783: connect: no route to host
INFO: 2018/09/03 02:02:05.122659 ->[172.16.71.11:55641|9e:1f:f3:46:dd:12(engine03)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:02:05.122794 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using fastdp
INFO: 2018/09/03 02:02:05.122844 ->[172.16.71.11:55641|9e:1f:f3:46:dd:12(engine03)]: connection added
INFO: 2018/09/03 02:02:05.127467 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using sleeve
INFO: 2018/09/03 02:02:05.127536 ->[172.16.71.11:55641|9e:1f:f3:46:dd:12(engine03)]: connection fully established
INFO: 2018/09/03 02:02:05.127837 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:02:05.131496 sleeve ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:02:06.886923 Discovered remote MAC 5e:2f:3d:94:9b:46 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:02:09.886430 Discovered remote MAC aa:6c:ff:c8:67:8e at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:02:10.788929 Discovered remote MAC 4e:7c:ec:9a:fe:c6 at 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:02:14.761621 Discovered remote MAC 9e:1f:f3:46:dd:12 at 9e:1f:f3:46:dd:12(engine03)
ERRO: 2018/09/03 02:02:14.761727 Captured frame from MAC (9e:1f:f3:46:dd:12) to (8a:ae:60:1a:bb:f7) associated with another peer 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/03 02:02:25.471127 ->[192.168.70.232:59220|2a:44:ef:34:94:3b(kube-master)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:02:25.471415 ->[192.168.70.232:59220|2a:44:ef:34:94:3b(kube-master)]: connection deleted
INFO: 2018/09/03 02:02:25.473648 ->[192.168.70.232:60448] connection accepted
INFO: 2018/09/03 02:02:25.474887 ->[192.168.70.232:60448|2a:44:ef:34:94:3b(kube-master)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:02:25.476170 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 02:02:25.478602 ->[192.168.70.232:60448|2a:44:ef:34:94:3b(kube-master)]: connection added
INFO: 2018/09/03 02:02:25.533042 ->[192.168.70.232:60448|2a:44:ef:34:94:3b(kube-master)]: connection fully established
INFO: 2018/09/03 02:02:25.979721 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:02:25.980443 sleeve ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:02:35.118197 ->[172.16.71.11:55641|9e:1f:f3:46:dd:12(engine03)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:02:35.118314 ->[172.16.71.11:55641|9e:1f:f3:46:dd:12(engine03)]: connection deleted
INFO: 2018/09/03 02:02:35.120296 ->[172.16.71.11:6783] attempting connection
INFO: 2018/09/03 02:02:36.122441 ->[172.16.71.11:6783] error during connection attempt: dial tcp4 :0->172.16.71.11:6783: connect: no route to host
INFO: 2018/09/03 02:02:36.613377 ->[172.16.71.11:40426] connection accepted
INFO: 2018/09/03 02:02:36.614988 ->[172.16.71.11:40426|9e:1f:f3:46:dd:12(engine03)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:02:36.615409 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using fastdp
INFO: 2018/09/03 02:02:36.615503 ->[172.16.71.11:40426|9e:1f:f3:46:dd:12(engine03)]: connection added
INFO: 2018/09/03 02:02:36.628394 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using sleeve
INFO: 2018/09/03 02:02:36.628484 ->[172.16.71.11:40426|9e:1f:f3:46:dd:12(engine03)]: connection fully established
INFO: 2018/09/03 02:02:36.628901 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:02:36.630378 sleeve ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: Effective MTU verified at 1438
ERRO: 2018/09/03 02:02:44.624193 Captured frame from MAC (9e:1f:f3:46:dd:12) to (f6:31:e4:ec:f0:b1) associated with another peer 9e:1f:f3:46:dd:12(engine03)
INFO: 2018/09/03 02:02:56.992808 ->[192.168.70.230:36418|72:be:1a:b8:98:77(engine01)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:02:56.992926 ->[192.168.70.230:36418|72:be:1a:b8:98:77(engine01)]: connection deleted
INFO: 2018/09/03 02:02:56.995225 ->[192.168.70.230:6783] attempting connection
INFO: 2018/09/03 02:02:56.998690 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:02:56.998982 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 02:02:56.999047 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection added
INFO: 2018/09/03 02:02:57.001195 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:02:57.001453 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using sleeve
INFO: 2018/09/03 02:02:57.001514 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection fully established
INFO: 2018/09/03 02:02:57.001739 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 02:02:57.002895 sleeve ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:03:05.118258 ->[172.16.71.11:40426|9e:1f:f3:46:dd:12(engine03)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:03:05.118536 ->[172.16.71.11:40426|9e:1f:f3:46:dd:12(engine03)]: connection deleted
INFO: 2018/09/03 02:03:07.330127 ->[172.16.71.11:6783] attempting connection
INFO: 2018/09/03 02:03:07.331412 ->[172.16.71.11:6783] error during connection attempt: dial tcp4 :0->172.16.71.11:6783: connect: no route to host
INFO: 2018/09/03 02:03:07.552033 ->[172.16.71.11:60290] connection accepted
INFO: 2018/09/03 02:03:07.553640 ->[172.16.71.11:60290|9e:1f:f3:46:dd:12(engine03)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:03:07.555789 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using fastdp
INFO: 2018/09/03 02:03:07.555941 ->[172.16.71.11:60290|9e:1f:f3:46:dd:12(engine03)]: connection added
INFO: 2018/09/03 02:03:08.056081 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using sleeve
INFO: 2018/09/03 02:03:08.056271 ->[172.16.71.11:60290|9e:1f:f3:46:dd:12(engine03)]: connection fully established
INFO: 2018/09/03 02:03:08.056567 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:03:08.057930 sleeve ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:03:35.120098 ->[172.16.71.11:60290|9e:1f:f3:46:dd:12(engine03)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:03:35.120730 ->[172.16.71.11:60290|9e:1f:f3:46:dd:12(engine03)]: connection deleted
INFO: 2018/09/03 02:03:41.821102 ->[172.16.71.11:57576] connection accepted
INFO: 2018/09/03 02:03:41.822649 ->[172.16.71.11:57576|9e:1f:f3:46:dd:12(engine03)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:03:41.822929 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using fastdp
INFO: 2018/09/03 02:03:41.823005 ->[172.16.71.11:57576|9e:1f:f3:46:dd:12(engine03)]: connection added
INFO: 2018/09/03 02:03:42.325311 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using sleeve
INFO: 2018/09/03 02:03:42.325450 ->[172.16.71.11:57576|9e:1f:f3:46:dd:12(engine03)]: connection fully established
INFO: 2018/09/03 02:03:42.325468 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:03:42.327661 sleeve ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:03:55.471303 ->[192.168.70.232:60448|2a:44:ef:34:94:3b(kube-master)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:03:55.471486 ->[192.168.70.232:60448|2a:44:ef:34:94:3b(kube-master)]: connection deleted
INFO: 2018/09/03 02:03:55.474116 ->[192.168.70.232:47543] connection accepted
INFO: 2018/09/03 02:03:55.474440 ->[192.168.70.232:6783] attempting connection
INFO: 2018/09/03 02:03:55.476426 ->[192.168.70.232:47543|2a:44:ef:34:94:3b(kube-master)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:03:55.476723 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 02:03:55.476802 ->[192.168.70.232:47543|2a:44:ef:34:94:3b(kube-master)]: connection added
INFO: 2018/09/03 02:03:55.484673 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:03:55.484960 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 02:03:55.485405 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection shutting down due to error: Multiple connections to 2a:44:ef:34:94:3b(kube-master) added to da:68:9b:b7:51:65(engine02)
INFO: 2018/09/03 02:03:55.490909 ->[192.168.70.232:47543|2a:44:ef:34:94:3b(kube-master)]: connection fully established
INFO: 2018/09/03 02:03:55.491077 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:03:55.490799 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using sleeve
INFO: 2018/09/03 02:03:55.503919 sleeve ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:03:55.978484 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 02:03:56.992391 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:03:56.992498 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection deleted
INFO: 2018/09/03 02:03:56.993892 ->[192.168.70.230:6783] attempting connection
INFO: 2018/09/03 02:03:57.012175 ->[192.168.70.230:56102] connection accepted
INFO: 2018/09/03 02:03:57.013167 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:03:57.013229 ->[192.168.70.230:56102|72:be:1a:b8:98:77(engine01)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:03:57.013360 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 02:03:57.013380 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 02:03:57.013402 ->[192.168.70.230:56102|72:be:1a:b8:98:77(engine01)]: connection added
INFO: 2018/09/03 02:03:57.013722 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection shutting down due to error: Multiple connections to 72:be:1a:b8:98:77(engine01.teltacworldwide.co) added to da:68:9b:b7:51:65(engine02)
INFO: 2018/09/03 02:03:57.015726 ->[192.168.70.230:56102|72:be:1a:b8:98:77(engine01)]: connection fully established
INFO: 2018/09/03 02:03:57.028830 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:03:57.029464 sleeve ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:04:05.118852 ->[172.16.71.11:57576|9e:1f:f3:46:dd:12(engine03)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:04:05.119057 ->[172.16.71.11:57576|9e:1f:f3:46:dd:12(engine03)]: connection deleted
INFO: 2018/09/03 02:04:05.121674 ->[172.16.71.11:6783] attempting connection
INFO: 2018/09/03 02:04:06.124245 ->[172.16.71.11:6783] error during connection attempt: dial tcp4 :0->172.16.71.11:6783: connect: no route to host
INFO: 2018/09/03 02:04:12.485802 ->[172.16.71.11:44010] connection accepted
INFO: 2018/09/03 02:04:12.487551 ->[172.16.71.11:44010|9e:1f:f3:46:dd:12(engine03)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:04:12.487738 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using fastdp
INFO: 2018/09/03 02:04:12.487815 ->[172.16.71.11:44010|9e:1f:f3:46:dd:12(engine03)]: connection added
INFO: 2018/09/03 02:04:12.993284 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using sleeve
INFO: 2018/09/03 02:04:12.994048 ->[172.16.71.11:44010|9e:1f:f3:46:dd:12(engine03)]: connection fully established
INFO: 2018/09/03 02:04:12.994246 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:04:12.995819 sleeve ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:04:25.472032 ->[192.168.70.232:47543|2a:44:ef:34:94:3b(kube-master)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:04:25.472239 ->[192.168.70.232:47543|2a:44:ef:34:94:3b(kube-master)]: connection deleted
INFO: 2018/09/03 02:04:25.475207 ->[192.168.70.232:6783] attempting connection
INFO: 2018/09/03 02:04:25.477777 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:04:25.478081 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 02:04:25.478142 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection added
INFO: 2018/09/03 02:04:25.480997 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection fully established
INFO: 2018/09/03 02:04:25.481155 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:04:25.485508 sleeve ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:04:35.117789 ->[172.16.71.11:44010|9e:1f:f3:46:dd:12(engine03)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:04:35.117955 ->[172.16.71.11:44010|9e:1f:f3:46:dd:12(engine03)]: connection deleted
INFO: 2018/09/03 02:04:50.123856 ->[172.16.71.11:48117] connection accepted
INFO: 2018/09/03 02:04:50.125463 ->[172.16.71.11:48117|9e:1f:f3:46:dd:12(engine03)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:04:50.125678 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using fastdp
INFO: 2018/09/03 02:04:50.125767 ->[172.16.71.11:48117|9e:1f:f3:46:dd:12(engine03)]: connection added
INFO: 2018/09/03 02:04:50.628065 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using sleeve
INFO: 2018/09/03 02:04:50.628150 ->[172.16.71.11:48117|9e:1f:f3:46:dd:12(engine03)]: connection fully established
INFO: 2018/09/03 02:04:50.628382 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:04:50.629520 sleeve ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:04:55.471836 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:04:55.472179 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection deleted
INFO: 2018/09/03 02:04:55.473460 ->[192.168.70.232:51359] connection accepted
INFO: 2018/09/03 02:04:55.477211 ->[192.168.70.232:51359|2a:44:ef:34:94:3b(kube-master)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:04:55.477555 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 02:04:55.477627 ->[192.168.70.232:51359|2a:44:ef:34:94:3b(kube-master)]: connection added
INFO: 2018/09/03 02:04:55.493741 ->[192.168.70.232:51359|2a:44:ef:34:94:3b(kube-master)]: connection fully established
INFO: 2018/09/03 02:04:55.505409 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:04:55.507382 sleeve ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:05:05.117753 ->[172.16.71.11:48117|9e:1f:f3:46:dd:12(engine03)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:05:05.117912 ->[172.16.71.11:48117|9e:1f:f3:46:dd:12(engine03)]: connection deleted
INFO: 2018/09/03 02:05:23.802106 ->[172.16.71.11:58682] connection accepted
INFO: 2018/09/03 02:05:23.803838 ->[172.16.71.11:58682|9e:1f:f3:46:dd:12(engine03)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:05:23.804138 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using fastdp
INFO: 2018/09/03 02:05:23.804210 ->[172.16.71.11:58682|9e:1f:f3:46:dd:12(engine03)]: connection added
INFO: 2018/09/03 02:05:24.306594 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using sleeve
INFO: 2018/09/03 02:05:24.306688 ->[172.16.71.11:58682|9e:1f:f3:46:dd:12(engine03)]: connection fully established
INFO: 2018/09/03 02:05:24.306759 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:05:24.308601 sleeve ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:05:25.472288 ->[192.168.70.232:51359|2a:44:ef:34:94:3b(kube-master)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:05:25.472629 ->[192.168.70.232:51359|2a:44:ef:34:94:3b(kube-master)]: connection deleted
INFO: 2018/09/03 02:05:25.474392 ->[192.168.70.232:6783] attempting connection
INFO: 2018/09/03 02:05:25.477780 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:05:25.477942 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 02:05:25.477993 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection added
INFO: 2018/09/03 02:05:25.480285 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:05:25.480325 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection fully established
INFO: 2018/09/03 02:05:25.480288 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using sleeve
INFO: 2018/09/03 02:05:25.495980 sleeve ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:05:25.496555 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 02:05:26.992492 ->[192.168.70.230:56102|72:be:1a:b8:98:77(engine01)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:05:26.992671 ->[192.168.70.230:56102|72:be:1a:b8:98:77(engine01)]: connection deleted
INFO: 2018/09/03 02:05:26.994314 ->[192.168.70.230:6783] attempting connection
INFO: 2018/09/03 02:05:26.995375 ->[192.168.70.230:56945] connection accepted
INFO: 2018/09/03 02:05:26.996587 ->[192.168.70.230:56945|72:be:1a:b8:98:77(engine01)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:05:26.996804 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 02:05:26.996942 ->[192.168.70.230:56945|72:be:1a:b8:98:77(engine01)]: connection added
INFO: 2018/09/03 02:05:26.997562 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:05:26.997720 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 02:05:26.997781 ->[192.168.70.230:56945|72:be:1a:b8:98:77(engine01)]: connection shutting down due to error: Multiple connections to 72:be:1a:b8:98:77(engine01.teltacworldwide.co) added to da:68:9b:b7:51:65(engine02)
INFO: 2018/09/03 02:05:26.997806 ->[192.168.70.230:56945|72:be:1a:b8:98:77(engine01)]: connection deleted
INFO: 2018/09/03 02:05:26.998274 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection added
INFO: 2018/09/03 02:05:26.998893 overlay_switch ->[72:be:1a:b8:98:77(engine01)] fastdp write tcp4 192.168.70.231:6783->192.168.70.230:56945: use of closed network connection
INFO: 2018/09/03 02:05:27.000108 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using sleeve
INFO: 2018/09/03 02:05:27.000171 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection fully established
INFO: 2018/09/03 02:05:27.000417 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:05:27.001890 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 02:05:27.001934 sleeve ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:05:35.118815 ->[172.16.71.11:58682|9e:1f:f3:46:dd:12(engine03)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:05:35.119070 ->[172.16.71.11:58682|9e:1f:f3:46:dd:12(engine03)]: connection deleted
INFO: 2018/09/03 02:05:55.471430 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:05:55.471639 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection deleted
INFO: 2018/09/03 02:05:55.479248 ->[192.168.70.232:47588] connection accepted
INFO: 2018/09/03 02:05:55.483644 ->[192.168.70.232:47588|2a:44:ef:34:94:3b(kube-master)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:05:55.483843 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 02:05:55.483882 ->[192.168.70.232:47588|2a:44:ef:34:94:3b(kube-master)]: connection added
INFO: 2018/09/03 02:05:55.488008 ->[192.168.70.232:47588|2a:44:ef:34:94:3b(kube-master)]: connection fully established
INFO: 2018/09/03 02:05:55.986736 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:05:55.987594 sleeve ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:06:06.373127 ->[172.16.71.11:48277] connection accepted
INFO: 2018/09/03 02:06:06.375099 ->[172.16.71.11:48277|9e:1f:f3:46:dd:12(engine03)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:06:06.375360 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using fastdp
INFO: 2018/09/03 02:06:06.375451 ->[172.16.71.11:48277|9e:1f:f3:46:dd:12(engine03)]: connection added
INFO: 2018/09/03 02:06:06.878483 overlay_switch ->[9e:1f:f3:46:dd:12(engine03)] using sleeve
INFO: 2018/09/03 02:06:06.878589 ->[172.16.71.11:48277|9e:1f:f3:46:dd:12(engine03)]: connection fully established
INFO: 2018/09/03 02:06:06.878748 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:06:06.879992 sleeve ->[172.16.71.11:6783|9e:1f:f3:46:dd:12(engine03)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:06:25.470952 ->[192.168.70.232:47588|2a:44:ef:34:94:3b(kube-master)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:06:25.471147 ->[192.168.70.232:47588|2a:44:ef:34:94:3b(kube-master)]: connection deleted
INFO: 2018/09/03 02:06:25.474918 ->[192.168.70.232:6783] attempting connection
INFO: 2018/09/03 02:06:25.480237 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:06:25.480437 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp
INFO: 2018/09/03 02:06:25.480528 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection added
INFO: 2018/09/03 02:06:25.482511 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection fully established
INFO: 2018/09/03 02:06:25.484273 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:06:25.488256 sleeve ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:06:26.992428 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:06:26.992628 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection deleted
INFO: 2018/09/03 02:06:26.994004 ->[192.168.70.230:6783] attempting connection
INFO: 2018/09/03 02:06:26.994634 ->[192.168.70.230:38378] connection accepted
INFO: 2018/09/03 02:06:26.999731 ->[192.168.70.230:38378|72:be:1a:b8:98:77(engine01)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:06:26.999900 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 02:06:26.999917 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:06:26.999957 ->[192.168.70.230:38378|72:be:1a:b8:98:77(engine01)]: connection added
INFO: 2018/09/03 02:06:27.000220 ->[192.168.70.230:38378|72:be:1a:b8:98:77(engine01)]: connection shutting down due to error: read tcp4 192.168.70.231:6783->192.168.70.230:38378: read: connection reset by peer
INFO: 2018/09/03 02:06:27.000485 ->[192.168.70.230:38378|72:be:1a:b8:98:77(engine01)]: connection deleted
INFO: 2018/09/03 02:06:27.000595 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 02:06:27.001015 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection added
INFO: 2018/09/03 02:06:27.001096 overlay_switch ->[72:be:1a:b8:98:77(engine01)] fastdp write tcp4 192.168.70.231:6783->192.168.70.230:38378: use of closed network connection
INFO: 2018/09/03 02:06:27.001130 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using sleeve
INFO: 2018/09/03 02:06:27.003824 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using sleeve
INFO: 2018/09/03 02:06:27.003898 ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: connection fully established
INFO: 2018/09/03 02:06:27.004004 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2018/09/03 02:06:27.004360 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 02:06:27.005554 sleeve ->[192.168.70.230:6783|72:be:1a:b8:98:77(engine01)]: Effective MTU verified at 1438
INFO: 2018/09/03 02:06:35.118957 ->[172.16.71.11:48277|9e:1f:f3:46:dd:12(engine03)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:06:35.119373 ->[172.16.71.11:48277|9e:1f:f3:46:dd:12(engine03)]: connection deleted
INFO: 2018/09/03 02:06:35.125596 ->[172.16.71.11:6783] attempting connection
INFO: 2018/09/03 02:06:36.128395 ->[172.16.71.11:6783] error during connection attempt: dial tcp4 :0->172.16.71.11:6783: connect: no route to host
INFO: 2018/09/03 02:06:55.471862 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection shutting down due to error: Received update for IP range I own at 10.44.0.0 v4959: incoming message says owner 72:be:1a:b8:98:77 v4961
INFO: 2018/09/03 02:06:55.472044 ->[192.168.70.232:6783|2a:44:ef:34:94:3b(kube-master)]: connection deleted
INFO: 2018/09/03 02:06:55.480760 ->[192.168.70.232:57145] connection accepted
INFO: 2018/09/03 02:06:55.486244 ->[192.168.70.232:57145|2a:44:ef:34:94:3b(kube-master)]: connection ready; using protocol version 2
INFO: 2018/09/03 02:06:55.486481 overlay_switch ->[2a:44:ef:34:94:3b(kube-master)] using fastdp

bboreham (Contributor) commented Sep 3, 2018

The symptom is that two nodes in your cluster disagree on the contents of the IPAM data structure. Since there's no clue in the first log you provided as to why, I asked for the other.

Absent any clues as to what happened at the time of the failure, this is the same as #3310, and I wrote about how you can clean up at #3310 (comment)
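For reference, one way to see the disagreement directly (a sketch, assuming the stock weave-net DaemonSet labels its pods with name=weave-net) is to compare each peer's IPAM view:

# Compare every peer's view of the address ring; healthy peers agree on ownership
for pod in $(kubectl get pods -n kube-system -l name=weave-net -o name); do
  echo "== ${pod} =="
  kubectl exec -n kube-system "${pod#pod/}" -c weave -- /home/weave/weave --local status ipam
done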

shahbour (Author) commented Sep 3, 2018

It did work; I just deleted that file and rebooted.

Everything is working now. To give you more info: I found errors on some of my pods complaining about "max open files". I am not sure whether that is related or whether it caused this.
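If it helps to rule that out, here is a quick check of the file-descriptor limits on the node (a sketch; the weaver process name is an assumption about the router binary):

# System-wide allocated / free / max file handles
cat /proc/sys/fs/file-nr
# Per-process limit for the weave router
cat /proc/$(pgrep -f weaver | head -n1)/limits | grep -i 'open files'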

Thanks for your help. I spent around six hours digging to figure out what happened.

brb (Contributor) commented Sep 4, 2018

@shahbour Thanks for the logs.

Do you know whether engine02 was running at the time of the following?

INFO: 2018/09/03 13:18:52.829705 [kube-peers] Added myself to peer list &{[{2a:44:ef:34:94:3b kube-master} {72:be:1a:b8:98:77 engine01.} {da:68:9b:b7:51:65 engine02.} {9e:1f:f3:46:dd:12 engine03}]}
DEBU: 2018/09/03 13:18:52.833896 [kube-peers] Nodes that have disappeared: map[engine02.:{da:68:9b:b7:51:65 engine02.}]
DEBU: 2018/09/03 13:18:52.833956 [kube-peers] Preparing to remove disappeared peer {da:68:9b:b7:51:65 engine02.}
DEBU: 2018/09/03 13:18:52.833978 [kube-peers] Noting I plan to remove  da:68:9b:b7:51:65
DEBU: 2018/09/03 13:18:52.840629 weave DELETE to http://127.0.0.1:6784/peer/da:68:9b:b7:51:65 with map[]
INFO: 2018/09/03 13:18:52.842911 [kube-peers] rmpeer of da:68:9b:b7:51:65: 0 IPs taken over from da:68:9b:b7:51:65

Unfortunately, I did restart it several times; I don't know if I can still capture the log from engine01.

Maybe you can find the stopped container with docker ps -a?
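Something like this might still recover them (a sketch; the name filter assumes weave-kube's container names contain "weave"):

# List exited containers and grab the last log lines from the stopped weave router container
docker ps -a --filter status=exited | grep weave
docker logs <container-id> 2>&1 | tail -n 200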

shahbour (Author) commented Sep 4, 2018

Yes, engine02 was on the whole time; the errors started at 02:01, as shown below:

INFO: 2018/09/03 02:01:09.890616 ->[192.168.70.230:36422|72:be:1a:b8:98:77(engine01)]: connection shutting down due to error: read tcp4 192.168.70.231:6783->192.168.70.230:36422: read: connection reset by peer
INFO: 2018/09/03 02:01:10.049001 ->[192.168.70.230:36422|72:be:1a:b8:98:77(engine01)]: connection deleted
INFO: 2018/09/03 02:01:10.049884 ->[192.168.70.230:6783] attempting connection
INFO: 2018/09/03 02:01:10.050732 ->[192.168.70.230:6783] error during connection attempt: dial tcp4 :0->192.168.70.230:6783: connect: connection refused
INFO: 2018/09/03 02:01:10.282928 Removed unreachable peer 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 02:01:32.156972 ->[192.168.70.230:6783] attempting connection
INFO: 2018/09/03 02:01:32.157686 ->[192.168.70.230:6783] error during connection attempt: dial tcp4 :0->192.168.70.230:6783: connect: connection refused
INFO: 2018/09/03 02:01:53.887483 ->[192.168.70.230:6783] attempting connection
INFO: 2018/09/03 02:01:53.888291 ->[192.168.70.230:6783] error during connection attempt: dial tcp4 :0->192.168.70.230:6783: connect: connection refused
INFO: 2018/09/03 02:01:59.701008 ->[192.168.70.230:41901] connection accepted

At 13:18 I think I was restarting engine01 to check whether it would come back up, and engine02 was definitely up because all our pods were running on it and the system was up.

The logs below are from engine02 at 13:18:52:

INFO: 2018/09/03 13:18:09.010613 Removed unreachable peer 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 13:18:51.787727 ->[192.168.70.230:52588] connection accepted
INFO: 2018/09/03 13:18:51.790587 ->[192.168.70.230:52588|72:be:1a:b8:98:77(engine01)]: connection ready; using protocol version 2
INFO: 2018/09/03 13:18:51.790733 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 13:18:51.790803 ->[192.168.70.230:52588|72:be:1a:b8:98:77(engine01)]: connection added (new peer)
INFO: 2018/09/03 13:18:51.793576 ->[192.168.70.230:52588|72:be:1a:b8:98:77(engine01)]: connection fully established
INFO: 2018/09/03 13:18:51.794588 ->[192.168.70.230:52588|72:be:1a:b8:98:77(engine01)]: connection shutting down due to error: read tcp4 192.168.70.231:6783->192.168.70.230:52588: read: connection reset by peer
INFO: 2018/09/03 13:18:51.794895 ->[192.168.70.230:52588|72:be:1a:b8:98:77(engine01)]: connection deleted
INFO: 2018/09/03 13:18:51.795131 Removed unreachable peer 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 13:18:51.797380 Removed unreachable peer 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 13:18:52.834296 Removed unreachable peer 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 13:18:53.610733 ->[192.168.70.230:57000] connection accepted
INFO: 2018/09/03 13:18:53.612583 ->[192.168.70.230:57000|72:be:1a:b8:98:77(engine01)]: connection ready; using protocol version 2
INFO: 2018/09/03 13:18:53.612721 overlay_switch ->[72:be:1a:b8:98:77(engine01)] using fastdp
INFO: 2018/09/03 13:18:53.612849 ->[192.168.70.230:57000|72:be:1a:b8:98:77(engine01)]: connection added (new peer)
INFO: 2018/09/03 13:18:53.613091 ->[192.168.70.230:57000|72:be:1a:b8:98:77(engine01)]: connection shutting down due to error: read tcp4 192.168.70.231:6783->192.168.70.230:57000: read: connection reset by peer
INFO: 2018/09/03 13:18:53.614032 ->[192.168.70.230:57000|72:be:1a:b8:98:77(engine01)]: connection deleted
INFO: 2018/09/03 13:18:53.614160 Removed unreachable peer 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 13:18:53.614727 Removed unreachable peer 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 13:18:55.416425 Removed unreachable peer 72:be:1a:b8:98:77(engine01)
INFO: 2018/09/03 13:18:56.900218 ->[192.168.70.230:55045] connection accepted
INFO: 2018/09/03 13:18:56.901104 Removed unreachable peer 72:be:1a:b8:98:77(engine01)

brb (Contributor) commented Sep 4, 2018

@shahbour Could you run kubectl get nodes -o wide?

shahbour (Author) commented Sep 4, 2018

Here we go:

(⎈ |production:kube-system)➜  ~ kubectl get node -o wide
NAME                          STATUS                     ROLES     AGE       VERSION   INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
engine01                      Ready,SchedulingDisabled   <none>    59d       v1.11.2   192.168.70.230   192.168.70.230   CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
engine02                      Ready                      <none>    59d       v1.11.0   <none>           <none>           CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
engine03                      Ready                      <none>    59d       v1.11.2   172.16.71.11     172.16.71.11     CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
kube-master                   Ready                      master    89d       v1.11.2   192.168.70.232   192.168.70.232   CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1

brb (Contributor) commented Sep 4, 2018

Sweet! Here is the culprit: engine02 has neither an internalIP nor an externalIP, so https://github.com/weaveworks/weave/blob/master/prog/kube-utils/main.go#L28 drops the node from the list of valid peers, and the weave-kube reclaimer then tries to remove that peer and take over its address space, which leads to the reported bug.
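A quick way to spot such a node (a sketch using kubectl's jsonpath output) is to print each node's status.addresses; a removal candidate shows an empty list:

# Print each node name followed by its reported addresses
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[*].address}{"\n"}{end}'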

Any idea why it does not have any internalIP?

shahbour (Author) commented Sep 4, 2018

No, I don't. I noticed that a few weeks ago and tried to set it manually, but it did not work for me and I forgot about it.

brb (Contributor) commented Sep 4, 2018

I don't see how to set it manually, so you might need to drain the node and re-deploy k8s on it.
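A minimal sketch of that drain/re-deploy cycle (the kubeadm-style steps and flags are assumptions about this setup):

# Evict workloads from the node, re-provision kubelet/kubeadm on it, then let it rejoin
kubectl drain engine02 --ignore-daemonsets --delete-local-data
# ...re-join the node (e.g. kubeadm reset, then kubeadm join ...) ...
kubectl uncordon engine02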

I'm working on a fix for Weave Net but it will take a while until it gets released.

shahbour (Author) commented Sep 4, 2018

I did fix the internal and external IPs:

(⎈ |production:default)➜  ~ kubectl get node -o wide
NAME                          STATUS                     ROLES     AGE       VERSION   INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
engine01                      Ready                      <none>    59d       v1.11.2   192.168.70.230   192.168.70.230   CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
engine02                      Ready,SchedulingDisabled   <none>    59d       v1.11.2   192.168.70.231   192.168.70.231   CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
engine03                      Ready                      <none>    59d       v1.11.2   172.16.71.11     172.16.71.11     CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
kube-master                   Ready                      master    89d       v1.11.2   192.168.70.232   192.168.70.232   CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1

brb (Contributor) commented Sep 4, 2018

Just out of curiosity, how did you set the IP addresses?

shahbour (Author) commented Sep 4, 2018

As you said, I drained the node, updated kubelet (it needed to be updated to 1.11.2), and restarted the node.

After it came back up, it was fixed.

shahbour (Author) commented Sep 5, 2018

This morning I checked again, and it seems the node is losing its IP address somehow:

➜ kubectl get node -o wide
NAME                          STATUS    ROLES     AGE       VERSION   INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
engine01.teltacworldwide.co   Ready     <none>    60d       v1.11.2   192.168.70.230   192.168.70.230   CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
engine02.teltacworldwide.co   Ready     <none>    60d       v1.11.2   <none>           <none>           CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
engine03                      Ready     <none>    60d       v1.11.2   172.16.71.11     172.16.71.11     CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
kube-master                   Ready     master    90d       v1.11.2   192.168.70.232   192.168.70.232   CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
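If the address keeps disappearing, one workaround (a sketch; the file path is the CentOS/kubeadm packaging default and the IP is engine02's address from the earlier output, so both are assumptions) is to pin it with kubelet's --node-ip flag:

# /etc/sysconfig/kubelet on engine02
KUBELET_EXTRA_ARGS=--node-ip=192.168.70.231
# then reload and restart kubelet so the node reports the address again:
#   systemctl daemon-reload && systemctl restart kubelet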

brb modified the milestones: 2.5, 2.4.1 (Sep 6, 2018)
bboreham added a commit that referenced this issue on Sep 6, 2018: "Do not exclude k8s node without any IP addr in reclaim"
brb (Contributor) commented Sep 12, 2018

Fixed in #3393

Slutzky commented Oct 31, 2019

In my case, only one node was unable to connect to the other nodes in the Weave cluster.
The solution was (a command-level sketch follows the list):

  1. Identify the failing node
  2. SSH to the node
  3. rm /var/lib/weave/weave-netdata.db
  4. Restart the weave-net pod running on the problematic node.
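As commands (a sketch; the pod label and data path assume the stock weave-net manifest):

# On the failing node: remove weave's persisted peer/IPAM state
sudo rm /var/lib/weave/weave-netdata.db
# Find the weave-net pod scheduled on that node and delete it so the DaemonSet recreates it with a clean state
kubectl get pods -n kube-system -l name=weave-net -o wide
kubectl delete pod -n kube-system <weave-net-pod-on-that-node>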
