
'weave launch' does not work with iptables 1.8 #3844

Open
mathiasbrito opened this issue Aug 13, 2020 · 2 comments


mathiasbrito commented Aug 13, 2020

Hi everyone, I'm posting this issue here because I could not find a solution in the documentation. I also asked for help in the Slack channel; some people were able to reproduce the problem by following my instructions, but there is still no solution.

What did you expect to happen?

Containers started with Weave enabled should be able to ping each other.

What happened?

The containers cannot reach each other: ping fails in both directions, and the netcat example does not work either. Despite that, they do resolve each other's names and get the correct IPs.

How to reproduce it?

To reproduce the problem, I wrote a Vagrantfile (attached as Vagrantfile.zip) that sets up two virtual machines and installs Docker (19.03.1) and Weave (2.7.0). If you have Vagrant installed, a vagrant up is all you need to get the machines up and running. After that, the results of configuring Weave and running the containers can be seen in the image.

screenshot

Basically, after setting up Weave with weave launch and connecting the peers, I run one BusyBox container on each machine. The containers get their IPs from Weave and resolve each other's names, but they cannot communicate.
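
For reference, this is roughly the sequence of commands I run on the two hosts (the container names and exact docker run flags here are illustrative; the peer address is the eth1 IP from the network dump below):

# on manager (192.168.15.74)
$ weave launch
$ eval $(weave env)
$ docker run -dti --name a1 busybox

# on vm-node-1 (192.168.15.75)
$ weave launch 192.168.15.74
$ eval $(weave env)
$ docker run -dti --name a2 busybox

# name resolution works, but no replies ever come back
$ docker exec a1 ping -c 3 a2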

Anything else we need to know?

I reproduced this in two different setups:

  1. Vagrant with VirtualBox (two VMs running Debian Buster)
  • VirtualBox version 6.1.12 r139181
  • macOS Catalina (v10.15.5)
  • Vagrant 2.2.9
  2. Two Raspberry Pis
  • Two Raspberry Pi 4 boards
  • Raspberry Pi OS (based on Debian Buster)

Versions:

weave script 2.7.0
weave 2.7.0
Docker version 19.03.1, build 74b1e89
Linux manager 4.19.0-9-amd64 #1 SMP Debian 4.19.118-2 (2020-04-29) x86_64 GNU/Linux
No Kubernetes

Logs:

Logs from node with hostname manager
$ docker logs weave

weave-manager-node.log

Logs from node with hostname vm-node-1
$ docker logs weave

weave-vm-node-1-node.log

Network:

manager Node

$ ip route
default via 10.0.2.2 dev eth0
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.18.0.0/16 dev br-adc1d69dcf59 proto kernel scope link src 172.18.0.1 linkdown
192.168.15.0/24 dev eth1 proto kernel scope link src 192.168.15.74
$ ip -4 -o addr
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: eth0    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0\       valid_lft 85663sec preferred_lft 85663sec
3: eth1    inet 192.168.15.74/24 brd 192.168.15.255 scope global dynamic eth1\       valid_lft 13676sec preferred_lft 13676sec
4: docker0    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\       valid_lft forever preferred_lft forever
30: br-adc1d69dcf59    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-adc1d69dcf59\       valid_lft forever preferred_lft forever
$ sudo iptables-save
# Generated by xtables-save v1.8.2 on Thu Aug 13 15:56:23 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o br-adc1d69dcf59 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-adc1d69dcf59 -j DOCKER
-A FORWARD -i br-adc1d69dcf59 ! -o br-adc1d69dcf59 -j ACCEPT
-A FORWARD -i br-adc1d69dcf59 -o br-adc1d69dcf59 -j ACCEPT
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i br-adc1d69dcf59 ! -o br-adc1d69dcf59 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o br-adc1d69dcf59 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Thu Aug 13 15:56:23 2020
# Generated by xtables-save v1.8.2 on Thu Aug 13 15:56:23 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.18.0.0/16 ! -o br-adc1d69dcf59 -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i br-adc1d69dcf59 -j RETURN
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Thu Aug 13 15:56:23 2020
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them

vm-node-1 node

$ ip route
default via 10.0.2.2 dev eth0
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.15.0/24 dev eth1 proto kernel scope link src 192.168.15.75

$ ip -4 -o addr
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: eth0    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0\       valid_lft 85462sec preferred_lft 85462sec
3: eth1    inet 192.168.15.75/24 brd 192.168.15.255 scope global dynamic eth1\       valid_lft 13466sec preferred_lft 13466sec
4: docker0    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\       valid_lft forever preferred_lft forever

$ sudo iptables-save
# Generated by xtables-save v1.8.2 on Thu Aug 13 15:59:57 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Thu Aug 13 15:59:57 2020
# Generated by xtables-save v1.8.2 on Thu Aug 13 15:59:57 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Thu Aug 13 15:59:57 2020
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them
@bboreham
Contributor

Warning: iptables-legacy tables present, use iptables-legacy-save to see them

I think this is a repeat of #3465: Weave Net is talking to "iptables-legacy", while your system is set up with iptables 1.8, which defaults to the nftables backend.

It was fixed for Kubernetes by getting the launch script there to figure out which one to use, but not for regular weave launch.
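
Roughly, the Kubernetes launcher works out which backend to use by looking at where rules already exist. A simplified sketch of that kind of check (not the actual script) would be:

# count the rules each backend can see and prefer the one actually in use
legacy_rules=$(iptables-legacy-save 2>/dev/null | grep -c '^-')
nft_rules=$(iptables-nft-save 2>/dev/null | grep -c '^-')
if [ "$nft_rules" -gt "$legacy_rules" ]; then
    echo "use the nft backend"
else
    echo "use the legacy backend"
fi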

You can switch the hosts to legacy mode as noted at #3465 (comment)
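
On Debian Buster that is done with update-alternatives; roughly (restart Docker and re-launch Weave afterwards):

$ sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
$ sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
$ sudo systemctl restart docker
$ weave reset && weave launch <peers>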

@mathiasbrito
Author

OK, I can confirm that switching to iptables-legacy solves the problem on Debian Buster. I can now ping containers in every scenario that was not working previously.
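
In case it helps others: after the switch, iptables --version on the hosts reports the legacy backend instead of nf_tables (the exact version string may differ):

$ iptables --version
iptables v1.8.2 (legacy)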

@bboreham changed the title from "Containers not pinging each other (No communication between them)" to "'weave launch' does not work with iptables 1.8" on Aug 14, 2020