This repository has been archived by the owner on Jun 20, 2024. It is now read-only.

weave-net pod fails to get peers after Kubernetes v1.12.0 upgrade #3420

Closed
tailtwo opened this issue Sep 28, 2018 · 24 comments

tailtwo commented Sep 28, 2018

What you expected to happen?

Weave automatically fills the KUBE_PEERS env var and the connection between peers can be initiated.

What happened?

After upgrading a working v1.11.3 Kubernetes cluster to v1.12.0, the weave container in one of the two weave pods fails to obtain the peer list and enters the CrashLoopBackOff state, logging only "Failed to get peers". The weave-npc container fails to contact 10.96.0.1 (the kube-api service IP), and every list request it issues fails with a timeout. Manually entering the peers in the daemonset allows the connection between the weave pods to be established successfully.
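
For reference, the manual workaround was roughly this in the env section of the weave container in the daemonset (illustrative snippet only; KUBE_PEERS is the variable the launch script normally fills, and the two node IPs are listed space-separated here):

        - name: KUBE_PEERS
          value: "192.168.20.6 192.168.20.7"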

How to reproduce it?

Upgrade a Kubernetes cluster from v1.11.3 to v1.12.0 with kubeadm.

Anything else we need to know?

The cluster is running on bare metal with a single worker node and a control-plane node. I also upgraded CNI from 0.6.0 to 0.7.1. I have specified the cluster CIDR in the kube-proxy daemonset (- --cluster-cidr=10.32.0.0/12).

Versions:

$ weave version
weave script 2.4.1
weave 2.4.1

$ docker version
Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:16:31 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:16:31 2018
  OS/Arch:          linux/amd64
  Experimental:     false

$ uname -a
Linux server-01 4.14.67-coreos #1 SMP Mon Sep 10 23:14:26 UTC 2018 x86_64 ...

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T16:55:41Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

Logs:

# On the failing pod

$ kubectl -n kube-system logs -f weave-net-c7nfb weave
Failed to get peers

# On the running pod

$ kubectl -n kube-system logs -f weave-net-47gdt weave
DEBU: 2018/09/28 10:01:22.164268 [kube-peers] Checking peer "b2:de:01:96:13:b4" against list &{[{b2:de:01:96:13:b4 server-01} {2e:53:d1:b4:0c:dd server-02}]}
INFO: 2018/09/28 10:01:22.358106 Command line options: map[no-dns:true db-prefix:/weavedb/weave-net host-root:/host mtu:8900 nickname:server-01 no-masq-local:true conn-limit:100 docker-api: ipalloc-init:consensus=2 metrics-addr:0.0.0.0:6782 datapath:datapath expect-npc:true ipalloc-range:10.32.0.0/12 name:b2:de:01:96:13:b4 http-addr:127.0.0.1:6784 log-level:debug port:6783]
INFO: 2018/09/28 10:01:22.358229 weave  2.4.1
INFO: 2018/09/28 10:01:22.742314 Re-exposing 10.40.0.0/12 on bridge "weave"
INFO: 2018/09/28 10:01:22.810321 Bridge type is bridged_fastdp
INFO: 2018/09/28 10:01:22.810369 Communication between peers is unencrypted.
INFO: 2018/09/28 10:01:22.965628 Our name is b2:de:01:96:13:b4(server-01)
INFO: 2018/09/28 10:01:22.965673 Launch detected - using supplied peer list: [192.168.20.7 192.168.20.6]
INFO: 2018/09/28 10:01:22.965708 Using "no-masq-local" LocalRangeTracker
INFO: 2018/09/28 10:01:22.965716 Checking for pre-existing addresses on weave bridge
INFO: 2018/09/28 10:01:22.965918 weave bridge has address 10.40.0.0/12
INFO: 2018/09/28 10:01:22.998075 Found address 10.40.0.9/12 for ID _
INFO: 2018/09/28 10:01:22.998406 Found address 10.40.0.9/12 for ID _
INFO: 2018/09/28 10:01:23.000696 Found address 10.40.0.11/12 for ID _
INFO: 2018/09/28 10:01:23.001317 Found address 10.40.0.11/12 for ID _
INFO: 2018/09/28 10:01:23.004646 Found address 10.40.0.9/12 for ID _
INFO: 2018/09/28 10:01:23.020759 Found address 10.40.0.6/12 for ID _
INFO: 2018/09/28 10:01:23.020895 Found address 10.40.0.7/12 for ID _
INFO: 2018/09/28 10:01:23.021175 Found address 10.40.0.6/12 for ID _
INFO: 2018/09/28 10:01:23.021322 Found address 10.40.0.5/12 for ID _
INFO: 2018/09/28 10:01:23.021439 Found address 10.40.0.7/12 for ID _
INFO: 2018/09/28 10:01:23.021576 Found address 10.40.0.5/12 for ID _
INFO: 2018/09/28 10:01:23.021720 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.021852 Found address 10.40.0.10/12 for ID _
INFO: 2018/09/28 10:01:23.021997 Found address 10.40.0.11/12 for ID _
INFO: 2018/09/28 10:01:23.022107 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.022220 Found address 10.40.0.10/12 for ID _
INFO: 2018/09/28 10:01:23.022718 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.022817 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.023192 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.023292 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.023387 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.023481 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.023574 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.023661 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.023750 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.023849 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.023951 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.024045 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.024137 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.024234 Found address 10.40.0.8/12 for ID _
INFO: 2018/09/28 10:01:23.024349 Found address 10.40.0.10/12 for ID _
INFO: 2018/09/28 10:01:23.024438 Found address 10.40.0.10/12 for ID _
INFO: 2018/09/28 10:01:23.024536 Found address 10.40.0.10/12 for ID _
INFO: 2018/09/28 10:01:23.024629 Found address 10.40.0.10/12 for ID _
INFO: 2018/09/28 10:01:23.024718 Found address 10.40.0.10/12 for ID _
INFO: 2018/09/28 10:01:23.026770 adding entry 10.40.0.0/13 to weaver-no-masq-local of 
INFO: 2018/09/28 10:01:23.026787 added entry 10.40.0.0/13 to weaver-no-masq-local of 
INFO: 2018/09/28 10:01:23.027991 [allocator b2:de:01:96:13:b4] Initialising with persisted data
DEBU: 2018/09/28 10:01:23.028082 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.0/12 for weave:expose
DEBU: 2018/09/28 10:01:23.028130 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.9/12 for ID _ having existing ID as 5704052a8c81ca8e844d6480d3175623e26af7c346a1bbfa561b311db60d0ca1
DEBU: 2018/09/28 10:01:23.028161 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.9/12 for ID _ having existing ID as 5704052a8c81ca8e844d6480d3175623e26af7c346a1bbfa561b311db60d0ca1
DEBU: 2018/09/28 10:01:23.028184 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.11/12 for ID _ having existing ID as 21c3fe888aa6fdd8ee571f727662fe88f7f65b46b0808cd880ed696c8bcb817e
DEBU: 2018/09/28 10:01:23.028204 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.11/12 for ID _ having existing ID as 21c3fe888aa6fdd8ee571f727662fe88f7f65b46b0808cd880ed696c8bcb817e
DEBU: 2018/09/28 10:01:23.028223 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.9/12 for ID _ having existing ID as 5704052a8c81ca8e844d6480d3175623e26af7c346a1bbfa561b311db60d0ca1
DEBU: 2018/09/28 10:01:23.028242 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.6/12 for ID _ having existing ID as 5fd80d9d62e2cdecd8070846bb45725f4476c36aed63ba61fbb34169bfecf733
DEBU: 2018/09/28 10:01:23.028261 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.7/12 for ID _ having existing ID as bbe64a5def512cdb2a8f22740d80c3680067efa471f4a8fd526cc89d3c0112f2
DEBU: 2018/09/28 10:01:23.028279 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.6/12 for ID _ having existing ID as 5fd80d9d62e2cdecd8070846bb45725f4476c36aed63ba61fbb34169bfecf733
DEBU: 2018/09/28 10:01:23.028299 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.5/12 for ID _ having existing ID as dfe8a081908b7317e57113ca4f4bd3764bd2cc0c45cd3f1dbb971be1bec460df
DEBU: 2018/09/28 10:01:23.028319 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.7/12 for ID _ having existing ID as bbe64a5def512cdb2a8f22740d80c3680067efa471f4a8fd526cc89d3c0112f2
DEBU: 2018/09/28 10:01:23.028337 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.5/12 for ID _ having existing ID as dfe8a081908b7317e57113ca4f4bd3764bd2cc0c45cd3f1dbb971be1bec460df
DEBU: 2018/09/28 10:01:23.028356 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028384 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.10/12 for ID _ having existing ID as b92578e6ab2d5aeecc8f37a8ca386448ce5e237abecef56bc5144b77b179c44d
DEBU: 2018/09/28 10:01:23.028404 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.11/12 for ID _ having existing ID as 21c3fe888aa6fdd8ee571f727662fe88f7f65b46b0808cd880ed696c8bcb817e
DEBU: 2018/09/28 10:01:23.028422 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028442 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.10/12 for ID _ having existing ID as b92578e6ab2d5aeecc8f37a8ca386448ce5e237abecef56bc5144b77b179c44d
DEBU: 2018/09/28 10:01:23.028462 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028480 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028498 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028516 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028535 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028558 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028584 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028603 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028627 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028658 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028678 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028701 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028722 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028741 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.8/12 for ID _ having existing ID as 589631e93ea890e3dc0ed144e95427a98ba4f6d7a78de4ca23bd7621bf114cd2
DEBU: 2018/09/28 10:01:23.028760 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.10/12 for ID _ having existing ID as b92578e6ab2d5aeecc8f37a8ca386448ce5e237abecef56bc5144b77b179c44d
DEBU: 2018/09/28 10:01:23.028779 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.10/12 for ID _ having existing ID as b92578e6ab2d5aeecc8f37a8ca386448ce5e237abecef56bc5144b77b179c44d
DEBU: 2018/09/28 10:01:23.028798 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.10/12 for ID _ having existing ID as b92578e6ab2d5aeecc8f37a8ca386448ce5e237abecef56bc5144b77b179c44d
DEBU: 2018/09/28 10:01:23.028822 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.10/12 for ID _ having existing ID as b92578e6ab2d5aeecc8f37a8ca386448ce5e237abecef56bc5144b77b179c44d
DEBU: 2018/09/28 10:01:23.028841 [allocator b2:de:01:96:13:b4]: Re-Claimed 10.40.0.10/12 for ID _ having existing ID as b92578e6ab2d5aeecc8f37a8ca386448ce5e237abecef56bc5144b77b179c44d
INFO: 2018/09/28 10:01:23.028929 Sniffing traffic on datapath (via ODP)
INFO: 2018/09/28 10:01:23.029369 ->[192.168.20.6:6783] attempting connection
INFO: 2018/09/28 10:01:23.029515 ->[192.168.20.7:6783] attempting connection
INFO: 2018/09/28 10:01:23.029686 ->[192.168.20.6:51265] connection accepted
INFO: 2018/09/28 10:01:23.030007 ->[192.168.20.7:6783] error during connection attempt: dial tcp4 :0->192.168.20.7:6783: connect: connection refused
INFO: 2018/09/28 10:01:23.030492 ->[192.168.20.6:6783|b2:de:01:96:13:b4(server-01)]: connection shutting down due to error: cannot connect to ourself
INFO: 2018/09/28 10:01:23.030666 ->[192.168.20.6:51265|b2:de:01:96:13:b4(server-01)]: connection shutting down due to error: cannot connect to ourself
INFO: 2018/09/28 10:01:23.033900 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2018/09/28 10:01:23.035258 Listening for metrics requests on 0.0.0.0:6782
DEBU: 2018/09/28 10:01:23.114862 fastdp: broadcast{1 false} {b2:de:01:96:13:b4 ff:ff:ff:ff:ff:ff}
DEBU: 2018/09/28 10:01:23.114946 Discovered local MAC b2:de:01:96:13:b4
DEBU: 2018/09/28 10:01:23.115104 Creating ODP flow FlowSpec{keys: [InPortFlowKey{vport: 1} EthernetFlowKey{src: b2:de:01:96:13:b4, dst: ff:ff:ff:ff:ff:ff}], actions: [OutputAction{vport: 0}]}
DEBU: 2018/09/28 10:01:23.214435 [http] GET /status
DEBU: 2018/09/28 10:01:24.406060 ODP miss: map[22:UnknownFlowKey{type: 22, key: 00000000, mask: exact} 24:UnknownFlowKey{type: 24, key: 00000000, mask: exact} 4:EthernetFlowKey{src: 2e:53:d1:b4:0c:dd, dst: ff:ff:ff:ff:ff:ff} 20:BlobFlowKey{type: 20, key: 00000000, mask: ffffffff} 13:BlobFlowKey{type: 13, key: 0a2000010a28000500012e53d1b40cdd0000000000000000, mask: ffffffffffffffffffffffffffffffffffffffffffffffff} 2:BlobFlowKey{type: 2, key: 00000000, mask: ffffffff} 23:UnknownFlowKey{type: 23, key: 0000, mask: exact} 19:BlobFlowKey{type: 19, key: 00000000, mask: ffffffff} 25:UnknownFlowKey{type: 25, key: 00000000000000000000000000000000, mask: exact} 6:BlobFlowKey{type: 6, key: 0806, mask: ffff} 3:InPortFlowKey{vport: 2} 15:BlobFlowKey{type: 15, key: 00000000, mask: ffffffff} 16:TunnelFlowKey{id: 0000000000f4f4d8, ipv4src: 192.168.20.7, ipv4dst: 192.168.20.6, ttl: 64, tpsrc: 48884, tpdst: 6784}] on port 2
INFO: 2018/09/28 10:01:25.416266 ->[192.168.20.7:6783] attempting connection
INFO: 2018/09/28 10:01:25.416933 ->[192.168.20.7:6783] error during connection attempt: dial tcp4 :0->192.168.20.7:6783: connect: connection refused
DEBU: 2018/09/28 10:01:25.465323 ODP miss: map[16:TunnelFlowKey{id: 0000000000f4f4d8, ipv4src: 192.168.20.7, ipv4dst: 192.168.20.6, ttl: 64, tpsrc: 48884, tpdst: 6784} 25:UnknownFlowKey{type: 25, key: 00000000000000000000000000000000, mask: exact} 6:BlobFlowKey{type: 6, key: 0806, mask: ffff} 22:UnknownFlowKey{type: 22, key: 00000000, mask: exact} 15:BlobFlowKey{type: 15, key: 00000000, mask: ffffffff} 4:EthernetFlowKey{src: 2e:53:d1:b4:0c:dd, dst: ff:ff:ff:ff:ff:ff} 20:BlobFlowKey{type: 20, key: 00000000, mask: ffffffff} 19:BlobFlowKey{type: 19, key: 00000000, mask: ffffffff} 23:UnknownFlowKey{type: 23, key: 0000, mask: exact} 24:UnknownFlowKey{type: 24, key: 00000000, mask: exact} 3:InPortFlowKey{vport: 2} 13:BlobFlowKey{type: 13, key: 0a2000010a28000500012e53d1b40cdd0000000000000000, mask: ffffffffffffffffffffffffffffffffffffffffffffffff} 2:BlobFlowKey{type: 2, key: 00000000, mask: ffffffff}] on port 2
DEBU: 2018/09/28 10:01:26.489271 ODP miss: map[24:UnknownFlowKey{type: 24, key: 00000000, mask: exact} 4:EthernetFlowKey{src: 2e:53:d1:b4:0c:dd, dst: ff:ff:ff:ff:ff:ff} 22:UnknownFlowKey{type: 22, key: 00000000, mask: exact} 25:UnknownFlowKey{type: 25, key: 00000000000000000000000000000000, mask: exact} 20:BlobFlowKey{type: 20, key: 00000000, mask: ffffffff} 19:BlobFlowKey{type: 19, key: 00000000, mask: ffffffff} 16:TunnelFlowKey{id: 0000000000f4f4d8, ipv4src: 192.168.20.7, ipv4dst: 192.168.20.6, ttl: 64, tpsrc: 48884, tpdst: 6784} 23:UnknownFlowKey{type: 23, key: 0000, mask: exact} 2:BlobFlowKey{type: 2, key: 00000000, mask: ffffffff} 3:InPortFlowKey{vport: 2} 15:BlobFlowKey{type: 15, key: 00000000, mask: ffffffff} 6:BlobFlowKey{type: 6, key: 0806, mask: ffff} 13:BlobFlowKey{type: 13, key: 0a2000010a28000500012e53d1b40cdd0000000000000000, mask: ffffffffffffffffffffffffffffffffffffffffffffffff}] on port 2
DEBU: 2018/09/28 10:01:29.406274 ODP miss: map[25:UnknownFlowKey{type: 25, key: 00000000000000000000000000000000, mask: exact} 13:BlobFlowKey{type: 13, key: 0a2000010a28000500012e53d1b40cdd0000000000000000, mask: ffffffffffffffffffffffffffffffffffffffffffffffff} 2:BlobFlowKey{type: 2, key: 00000000, mask: ffffffff} 22:UnknownFlowKey{type: 22, key: 00000000, mask: exact} 24:UnknownFlowKey{type: 24, key: 00000000, mask: exact} 23:UnknownFlowKey{type: 23, key: 0000, mask: exact} 6:BlobFlowKey{type: 6, key: 0806, mask: ffff} 20:BlobFlowKey{type: 20, key: 00000000, mask: ffffffff} 4:EthernetFlowKey{src: 2e:53:d1:b4:0c:dd, dst: ff:ff:ff:ff:ff:ff} 19:BlobFlowKey{type: 19, key: 00000000, mask: ffffffff} 16:TunnelFlowKey{id: 0000000000f4f4d8, ipv4src: 192.168.20.7, ipv4dst: 192.168.20.6, ttl: 64, tpsrc: 48884, tpdst: 6784} 3:InPortFlowKey{vport: 2} 15:BlobFlowKey{type: 15, key: 00000000, mask: ffffffff}] on port 2
INFO: 2018/09/28 10:01:29.559178 ->[192.168.20.7:6783] attempting connection
INFO: 2018/09/28 10:01:29.559825 ->[192.168.20.7:6783] error during connection attempt: dial tcp4 :0->192.168.20.7:6783: connect: connection refused

Network:

$ ip route # node with failed weave pod
default via <redacted>.123.1 dev vlan1 proto static 
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.20.0/24 dev vlan20 proto kernel scope link src 192.168.20.7 
192.168.34.0/24 dev vlan1 proto kernel scope link src 192.168.34.254 
<redacted>.123.0/24 dev vlan1 proto kernel scope link src <redacted>.123.160

$ ip route # control plane with running weave pod
default via <redacted>.123.1 dev vlan1 proto static 
10.32.0.0/12 dev weave proto kernel scope link src 10.40.0.0 
169.254.95.0/24 dev enp0s20f0u1u6 proto kernel scope link src 169.254.95.120 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.20.0/24 dev vlan20 proto kernel scope link src 192.168.20.6 
192.168.34.0/24 dev vlan1 proto kernel scope link src 192.168.34.253 
<redacted>.123.0/24 dev vlan1 proto kernel scope link src <redacted>.123.161 

$ ip -4 -o addr # node with failed weave pod
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
7: vlan1    inet <redacted>.123.160/24 brd <redacted>.123.255 scope global vlan1\       valid_lft forever preferred_lft forever
7: vlan1    inet 192.168.34.254/24 brd 192.168.34.255 scope global vlan1\       valid_lft forever preferred_lft forever
8: vlan20    inet 192.168.20.7/24 brd 192.168.20.255 scope global vlan20\       valid_lft forever preferred_lft forever
9: docker0    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\       valid_lft forever preferred_lft forever
12: weave    inet 10.32.0.1/12 brd 10.47.255.255 scope global weave\       valid_lft forever preferred_lft forever

$ ip -4 -o addr # control plane with running weave pod
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
7: enp0s20f0u1u6    inet 169.254.95.120/24 brd 169.254.95.255 scope global dynamic enp0s20f0u1u6\       valid_lft 499sec preferred_lft 499sec
8: vlan20    inet 192.168.20.6/24 brd 192.168.20.255 scope global vlan20\       valid_lft forever preferred_lft forever
9: vlan1    inet <redacted>.123.161/24 brd <redacted>.123.255 scope global vlan1\       valid_lft forever preferred_lft forever
9: vlan1    inet 192.168.34.253/24 brd 192.168.34.255 scope global vlan1\       valid_lft forever preferred_lft forever
12: docker0    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\       valid_lft forever preferred_lft forever
62: weave    inet 10.40.0.0/12 brd 10.47.255.255 scope global weave\       valid_lft forever preferred_lft forever

$ sudo iptables-save
# Would be too long to paste here; tell me if this is really needed.
@bboreham
Contributor

Thanks for the detailed report.

The "running pod" is running because you manually inserted the list, right?

Can you shell into the weave container and run /home/weave/kube-utils -v=8 please?


tailtwo commented Sep 28, 2018

The running pod is scheduled on an uncordoned control plane. It can retrieve peers even without the peer list variable, while the other weave-net pod scheduled on a worker node can't retrieve them automatically.

Here are my logs:

# Failing pod scheduled on a worker node; the command hangs and the pod gets restarted

$ kubectl -n kube-system exec -it weave-net-5txmw /home/weave/kube-utils -v=8
I0928 14:59:36.840502   15871 loader.go:359] Config loaded from file /home/core/.kube/config
I0928 14:59:36.841149   15871 loader.go:359] Config loaded from file /home/core/.kube/config
I0928 14:59:36.841900   15871 loader.go:359] Config loaded from file /home/core/.kube/config
I0928 14:59:36.842992   15871 round_trippers.go:383] GET https://192.168.20.6:6443/api/v1/namespaces/kube-system/pods/weave-net-5txmw
I0928 14:59:36.843007   15871 round_trippers.go:390] Request Headers:
I0928 14:59:36.843017   15871 round_trippers.go:393]     Accept: application/json, */*
I0928 14:59:36.843026   15871 round_trippers.go:393]     User-Agent: kubectl/v1.12.0 (linux/amd64) kubernetes/0ed3388
I0928 14:59:36.871828   15871 round_trippers.go:408] Response Status: 200 OK in 28 milliseconds
I0928 14:59:36.871866   15871 round_trippers.go:411] Response Headers:
I0928 14:59:36.871881   15871 round_trippers.go:414]     Content-Type: application/json
I0928 14:59:36.871892   15871 round_trippers.go:414]     Date: Fri, 28 Sep 2018 14:59:36 GMT
I0928 14:59:36.872042   15871 request.go:942] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"weave-net-5txmw","generateName":"weave-net-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/weave-net-5txmw","uid":"19358ed3-c32f-11e8-8466-1aeeb4bf4c74","resourceVersion":"974971","creationTimestamp":"2018-09-28T14:59:28Z","labels":{"controller-revision-hash":"554488cfc7","name":"weave-net","pod-template-generation":"24"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"weave-net","uid":"858e8705-bcf0-11e8-af8f-1aeeb4bf4c74","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"weavedb","hostPath":{"path":"/var/lib/weave","type":""}},{"name":"cni-bin","hostPath":{"path":"/opt","type":""}},{"name":"cni-bin2","hostPath":{"path":"/home","type":""}},{"name":"cni-conf","hostPath":{"path":"/etc","type":""}},{"name":"dbus","hostPath":{"path":"/var/lib/dbus","type":""}},{"name":"lib-modules","hostPath":{"path":"/lib/modules","type":""}},{"name":"xtables-lock","hostPath":{"path":"/run/xtables.lock" [truncated 4219 chars]
Defaulting container name to weave.
Use 'kubectl describe pod/weave-net-5txmw -n kube-system' to see all of the containers in this pod.
I0928 14:59:36.880577   15871 round_trippers.go:383] POST https://192.168.20.6:6443/api/v1/namespaces/kube-system/pods/weave-net-5txmw/exec?command=%2Fhome%2Fweave%2Fkube-utils&container=weave&container=weave&stdin=true&stdout=true&tty=true
I0928 14:59:36.880608   15871 round_trippers.go:390] Request Headers:
I0928 14:59:36.880625   15871 round_trippers.go:393]     X-Stream-Protocol-Version: v4.channel.k8s.io
I0928 14:59:36.880642   15871 round_trippers.go:393]     X-Stream-Protocol-Version: v3.channel.k8s.io
I0928 14:59:36.880657   15871 round_trippers.go:393]     X-Stream-Protocol-Version: v2.channel.k8s.io
I0928 14:59:36.880672   15871 round_trippers.go:393]     X-Stream-Protocol-Version: channel.k8s.io
I0928 14:59:36.880687   15871 round_trippers.go:393]     User-Agent: kubectl/v1.12.0 (linux/amd64) kubernetes/0ed3388
I0928 14:59:36.939761   15871 round_trippers.go:408] Response Status: 101 Switching Protocols in 59 milliseconds
I0928 14:59:36.939779   15871 round_trippers.go:411] Response Headers:
I0928 14:59:36.939785   15871 round_trippers.go:414]     X-Stream-Protocol-Version: v4.channel.k8s.io
I0928 14:59:36.939794   15871 round_trippers.go:414]     Date: Fri, 28 Sep 2018 14:59:36 GMT
I0928 14:59:36.939803   15871 round_trippers.go:414]     Connection: Upgrade
I0928 14:59:36.939812   15871 round_trippers.go:414]     Upgrade: SPDY/3.1
# Running pod scheduled on the control plane

$ kubectl -n kube-system exec -it weave-net-zrnnc /home/weave/kube-utils -v=8
I0928 15:01:17.023511   18177 loader.go:359] Config loaded from file /home/core/.kube/config
I0928 15:01:17.024236   18177 loader.go:359] Config loaded from file /home/core/.kube/config
I0928 15:01:17.025085   18177 loader.go:359] Config loaded from file /home/core/.kube/config
I0928 15:01:17.026271   18177 round_trippers.go:383] GET https://192.168.20.6:6443/api/v1/namespaces/kube-system/pods/weave-net-zrnnc
I0928 15:01:17.026287   18177 round_trippers.go:390] Request Headers:
I0928 15:01:17.026295   18177 round_trippers.go:393]     Accept: application/json, */*
I0928 15:01:17.026303   18177 round_trippers.go:393]     User-Agent: kubectl/v1.12.0 (linux/amd64) kubernetes/0ed3388
I0928 15:01:17.053146   18177 round_trippers.go:408] Response Status: 200 OK in 26 milliseconds
I0928 15:01:17.053186   18177 round_trippers.go:411] Response Headers:
I0928 15:01:17.053204   18177 round_trippers.go:414]     Content-Type: application/json
I0928 15:01:17.053220   18177 round_trippers.go:414]     Date: Fri, 28 Sep 2018 15:01:17 GMT
I0928 15:01:17.053437   18177 request.go:942] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"weave-net-zrnnc","generateName":"weave-net-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/weave-net-zrnnc","uid":"1a772fcd-c32f-11e8-8466-1aeeb4bf4c74","resourceVersion":"974989","creationTimestamp":"2018-09-28T14:59:30Z","labels":{"controller-revision-hash":"554488cfc7","name":"weave-net","pod-template-generation":"24"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"weave-net","uid":"858e8705-bcf0-11e8-af8f-1aeeb4bf4c74","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"weavedb","hostPath":{"path":"/var/lib/weave","type":""}},{"name":"cni-bin","hostPath":{"path":"/opt","type":""}},{"name":"cni-bin2","hostPath":{"path":"/home","type":""}},{"name":"cni-conf","hostPath":{"path":"/etc","type":""}},{"name":"dbus","hostPath":{"path":"/var/lib/dbus","type":""}},{"name":"lib-modules","hostPath":{"path":"/lib/modules","type":""}},{"name":"xtables-lock","hostPath":{"path":"/run/xtables.lock" [truncated 4225 chars]
Defaulting container name to weave.
Use 'kubectl describe pod/weave-net-zrnnc -n kube-system' to see all of the containers in this pod.
I0928 15:01:17.067501   18177 round_trippers.go:383] POST https://192.168.20.6:6443/api/v1/namespaces/kube-system/pods/weave-net-zrnnc/exec?command=%2Fhome%2Fweave%2Fkube-utils&container=weave&container=weave&stdin=true&stdout=true&tty=true
I0928 15:01:17.067546   18177 round_trippers.go:390] Request Headers:
I0928 15:01:17.067564   18177 round_trippers.go:393]     X-Stream-Protocol-Version: v4.channel.k8s.io
I0928 15:01:17.067579   18177 round_trippers.go:393]     X-Stream-Protocol-Version: v3.channel.k8s.io
I0928 15:01:17.067594   18177 round_trippers.go:393]     X-Stream-Protocol-Version: v2.channel.k8s.io
I0928 15:01:17.067609   18177 round_trippers.go:393]     X-Stream-Protocol-Version: channel.k8s.io
I0928 15:01:17.067624   18177 round_trippers.go:393]     User-Agent: kubectl/v1.12.0 (linux/amd64) kubernetes/0ed3388
I0928 15:01:17.147833   18177 round_trippers.go:408] Response Status: 101 Switching Protocols in 80 milliseconds
I0928 15:01:17.147893   18177 round_trippers.go:411] Response Headers:
I0928 15:01:17.147912   18177 round_trippers.go:414]     X-Stream-Protocol-Version: v4.channel.k8s.io
I0928 15:01:17.147928   18177 round_trippers.go:414]     Date: Fri, 28 Sep 2018 15:01:17 GMT
I0928 15:01:17.147943   18177 round_trippers.go:414]     Connection: Upgrade
I0928 15:01:17.147958   18177 round_trippers.go:414]     Upgrade: SPDY/3.1
192.168.20.7
192.168.20.6

@bboreham
Contributor

kubectl -n kube-system exec -it weave-net-5txmw /home/weave/kube-utils -v=8

This is printing the details of calls done by kubectl.
This is more like what I meant:

kubectl -n kube-system exec -it weave-net-5txmw -c weave -- /home/weave/kube-utils -v=8

However, it turns out -v is not wired up in kube-utils, or is otherwise not doing what I wanted it to.

@bboreham
Contributor

The fact that the second container returns only one address seems relevant - I'm wondering if this is the same as #3392 (comment)

Can you do kubectl get nodes -o wide please?


tailtwo commented Sep 28, 2018

NAME      STATUS   ROLES    AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                        KERNEL-VERSION   CONTAINER-RUNTIME
server-02   Ready    <none>   7d23h   v1.12.0   192.168.20.7   <none>        Container Linux by CoreOS 1855.4.0 (Rhyolite)   4.14.67-coreos   docker://18.6.1
server-01   Ready    master   7d23h   v1.12.0   192.168.20.6   <none>        Container Linux by CoreOS 1855.4.0 (Rhyolite)   4.14.67-coreos   docker://18.6.1

@bboreham
Contributor

OK, not the same as #3392.

I figured out how to make -v do what I wanted; please run:

kubectl -n kube-system exec -it weave-net-5txmw -c weave -- /home/weave/kube-utils -v=8 -alsologtostderr


tailtwo commented Oct 1, 2018

$ kubectl -n kube-system exec -it weave-net-v5ppz -c weave -- /home/weave/kube-utils -v=8 -alsologtostderr
I1001 08:54:41.083285   28904 round_trippers.go:414] GET https://10.96.0.1:443/api/v1/nodes
I1001 08:54:41.083804   28904 round_trippers.go:421] Request Headers:
I1001 08:54:41.083821   28904 round_trippers.go:424]     Accept: application/json, */*
I1001 08:54:41.083832   28904 round_trippers.go:424]     User-Agent: kube-utils/v0.0.0 (linux/amd64) kubernetes/$Format
I1001 08:54:41.083849   28904 round_trippers.go:424]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJ3ZWF2ZS1uZXQtdG9rZW4tZGZ3dnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoid2VhdmUtbmV0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiODU4MmQyYmItYmNmMC0xMWU4LWFmOGYtMWFlZWI0YmY0Yzc0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOndlYXZlLW5ldCJ9.k6ztNyG2EG47Km11g8zPoJvw4sZquYKr1cKh367HANUI9cEq-86AsSzO5o27m-5oggXf-1IGxx7BFbBCOJCwFhiFYjtAtOmWW7AAExAEAVZc92NyAmYOfao53Y4PFLjUxBqFBX1eChoOLMGRN16U8zk8M9xh4PTxyLgubMiU81LA3zEOPYt7Rru9eJne-8Kov5aGWnkovyNLr6MViDXiB4-M4fEm1QFD4rRMBn8HufJws9j6rYEaBWgb7b7VlqmOy4viqlWkV67AAmflX-cizfyPcfoGxmZu7yUgdK8hi_piVuCf6aeyhHoDTxg23xeyDR886aiT7vDySABSTXeIMg

Then it times out and restarts.


brb commented Oct 1, 2018


bboreham commented Oct 5, 2018

I believe what we are looking at now is not exactly the same as the original behaviour:

$ kubectl -n kube-system logs -f weave-net-c7nfb weave
Failed to get peers

because it shouldn't get as far as "Failed to get peers" if the whole pod times out.

Nonetheless, your system is not going to do anything if the node can't access the Kubernetes api-server.

I wonder if this is the thing where Linux picks the wrong source IP address for DNATted packets?
Try adding a route to 10.96.0.0/24 via the appropriate network interface. I see you have multiple interfaces on each host, for different VLANs.
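
Something like this, as a sketch (assuming vlan20, the 192.168.20.x interface that can reach the api-server, is the right one on the failing node; adjust the interface and source address to your topology):

$ sudo ip route add 10.96.0.0/24 dev vlan20 src 192.168.20.7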


gousse commented Oct 5, 2018

Hello, same behaviour here in some of our clusters.

It seems that weave wants to connect to the api-server (kubernetes.default:443) but for some reason kube-proxy does not forward it to master_ip:6443:

Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                  provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.96.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         10.0.2.15:6443
Session Affinity:  None
Events:            <none>
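
A quick way to check whether kube-proxy has actually installed the DNAT for that service (a sketch; KUBE-SERVICES is the chain kube-proxy's iptables mode uses by default):

$ sudo iptables-save -t nat | grep 10.96.0.1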


bboreham commented Oct 5, 2018

@gousse it's impossible to diagnose without a lot of detail; please open your own issue and fill in the information requested.


gousse commented Oct 5, 2018

@bboreham understood, I just wanted to point out that we have the exact same situation, and found that the problem is kube-proxy failing to reach the apiserver.


bboreham commented Oct 5, 2018

OK, I appreciate you are trying to help, but just to be clear to anyone else reading this: your comment makes assumptions ahead of the information available in this particular case.

"the thing where Linux picks the wrong source address" is described more fully at kubernetes/kubeadm#102 (comment)

@marcosnils

@bboreham I just hit this issue after updating to 1.12, and after seeing your comment (kubernetes/kubeadm#102 (comment)), adding the route rule fixed the problem. The thing is that it fails consistently in 1.12 and works in 1.8 with the same kernel version and bootstrap steps (kubeadm).

I do see that ip ro get 10.96.0.1 picks the default interface (which is not the cluster one) in both installations but for some reason in 1.8 it still routes the traffic correctly.

If you want to reproduce this you can go to https://play-with-k8s.com (PWK co-creator here) and bootstrap the cluster with kubeadm to see what I'm saying. You can select between 1.12 and 1.8 with the gear icon in the left menu.


@bboreham
Contributor

Why do you think this is an issue with Weave Net? My understanding of the problem is that a process on a host cannot establish a TCP connection to an address it has been given. The same thing would happen for any process.

@marcosnils

@bboreham I'm aware it's not a Weave Net issue. I'm just adding some information here as it's the only place where I found this issue being referenced. I'll try to run some more tests and probably open an issue in the k8s tracker.

@bboreham
Contributor

@marcosnils sorry, I mis-read the context

@marcosnils

After investigating, I found what was causing the problem in v1.12.1. As stated in kubernetes/kubeadm#102 (comment), the following iptables rule was missing in the new version:
iptables -t nat -I KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ

I'm not sure how/when/if that rule is necessary, but after adding it, things started working. I'd also love to know where K8s networking uses that MARK to forward the packet to the correct interface; I haven't found any routing table that actually uses that mark.

@bboreham
Contributor

KUBE-MARK-MASQ is used by another kube-proxy rule to masquerade the packet (change its source address). Since the root problem is that Linux has initially made a bad choice of source address, changing the source address makes that problem go away.
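
On a stock kube-proxy (iptables mode) ruleset the pairing looks roughly like this; illustrative only, and 0x4000 is just kube-proxy's default masquerade mark bit:

-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -j MASQUERADE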

I prefer adding a route as it has less overhead, and you might want to keep the original source address for tracing or policy reasons.


tailtwo commented Nov 12, 2018

I think I just figured out what was wrong with my setup. The pod CIDR (10.32.0.0/12) was not passed to some Kubernetes system components because I did not initialize the cluster with the correct kubeadm parameter. This resulted in unreachable Kubernetes API calls from the worker node.
Adding it in the correct places also fixed my nginx controller not being able to get the source IP of clients.
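
For anyone hitting the same thing, the parameter in question is roughly this at cluster bootstrap (a sketch; the CIDR is this cluster's range, adjust to yours), which kubeadm then propagates to components such as kube-proxy's --cluster-cidr:

$ sudo kubeadm init --pod-network-cidr=10.32.0.0/12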

The Kubernetes documentation is slightly misleading: it advises setting a pod CIDR for various pod network add-ons, but not for Weave. See: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network

This issue can be closed.

@murali-reddy
Contributor

Thanks for reporting back @tailtwo. FYI, it's documented under "Things to watch out for" in the Weave Net docs.


RSwarnkar commented Dec 11, 2020

After investigating, I found what was causing the problem in v1.12.1. As stated in kubernetes/kubeadm#102 (comment), the following iptables rule was missing in the new version:
iptables -t nat -I KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ

I'm not sure how/when/if that rule is necessary, but after adding it, things started working. I'd also love to know where K8s networking uses that MARK to forward the packet to the correct interface; I haven't found any routing table that actually uses that mark.

Could you kindly explain the command a bit, so that anyone referring to this can understand what it is trying to do and adapt it if need be?

I keep getting an error from iptables:

iptables v1.8.5 (nf_tables): Couldn't load match `comment': No such file or directory

Please help; this is turning into recursive troubleshooting for me.

@bboreham
Contributor

Could you kindly explain the command

The question about what MARK does is answered at #3420 (comment); you can find descriptions of other iptables options at https://linux.die.net/man/8/iptables

iptables v1.8.5 (nf_tables): Couldn't load match comment

That suggests your Linux doesn't have the "comment" extension for iptables. It is normally installed as standard.
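
As a quick check (a sketch; xt_comment is the kernel module name I would expect for the comment match on most distributions):

$ lsmod | grep xt_comment
$ sudo modprobe xt_comment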

@phanimullapudi

iptables -t nat -I KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ

This worked and solved my issue.
