Docker bridge network leaks internal IP addresses (masquerade not working) #44015

Open
flobernd opened this issue Aug 23, 2022 · 6 comments
Labels
area/networking kind/bug

Comments

@flobernd

Description

Docker containers using the bridge network sometimes send packets with their internal (172.17.0.X) source IP out through the host's network interface without masquerading them.

Reproduce

Run a docker container of your choice (in my case portainer/portainer-ce) using the default bridge network. Inspect outgoing traffic using tcpdump (e.g. on the router device).
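
Roughly (a sketch; the image, the published port, and the interface name are assumptions, adjust to your setup):

  # on the Docker host
  docker run -d --name portainer -p 9000:9000 portainer/portainer-ce

  # on the router (or any device upstream of the host), look for packets that
  # should never appear with an internal source address
  tcpdump -ni eth0 'src net 172.17.0.0/16'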

Expected behavior

Docker containers using only the bridge network should not send any packets with internal IP addresses to the outside.
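
For the default bridge network this is normally handled by a masquerade rule in the nat table, which can be checked with something like the following (a sketch; exact rules differ between versions):

  iptables -t nat -S POSTROUTING | grep 172.17

  # expected default rule for docker0:
  # -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE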

docker version

Client: Docker Engine - Community
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 17:02:55 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       4484c46d9d
  Built:            Wed Sep 16 17:01:25 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

docker info

Client:
 Debug Mode: false

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 1
 Server Version: 19.03.13
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 8fba4e9a7d01810a393d5d25a3621dc101981175
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.19.0-11-amd64
 Operating System: Debian GNU/Linux 10 (buster)
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 986.3MiB
 Name: PORTAINER
 ID: GIFN:75F4:YB36:FWJV:HKOX:55OX:67OQ:HOZM:XWBO:TN3P:AEVT:7X6T
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

Additional Info

In my example the docker container is running on a Debian VM which is running on a VMware ESXi host.

I first noticed the leaked IP addresses in the "client overview" of my networking hardware (Ubiquiti UniFi). This list shows the currently assigned IP address for each connected client. For all VMs running a Docker container on the bridge network, this IP changes to 172.17.0.X from time to time for a few seconds before switching back to the correct value.
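
Since the leak is intermittent, it may help to watch whether the masquerade rule is momentarily missing or bypassed while the wrong client IP is shown (a diagnostic sketch, not from the original report; the interface name is an assumption):

  # on the Docker host, in one terminal
  watch -n 1 'iptables -t nat -S POSTROUTING | grep 172.17'

  # in another terminal, capture any un-masqueraded packet leaving the uplink
  tcpdump -ni ens192 'src net 172.17.0.0/16'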


Original issue (created Oct 2020): docker/for-linux#1126

I copied over the details from the original issue. Versions are outdated by now, but the problem is still not fixed in the latest version.

@flobernd added the kind/bug and status/0-triage labels on Aug 23, 2022

@Beanow commented Sep 25, 2022

Yes, masquerade seems to be all kinds of messed up. Not only do I see internal IPs leaking as described above, but I also hit a regression in connecting to ports published on the docker_gwbridge, namely:

I have a DNS server (pihole) publishing tcp and udp port 53 in mode: host, resulting in entries like:

<filter table>
Chain DOCKER (2 references)
...
ACCEPT     tcp  --  0.0.0.0/0            172.18.0.11          tcp dpt:53

<nat table>
Chain POSTROUTING (policy ACCEPT)
...
MASQUERADE  tcp  --  172.18.0.11          172.18.0.11          tcp dpt:53

Chain DOCKER (2 references)
...
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:53 to:172.18.0.11:53

The host machine points to this DNS server by its LAN IP address (10.20.0.20).
Previously this worked fine, but after a docker-ce update, swarm tasks are no longer able to connect to this DNS server.
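
For context, the mode: host publishing that produces the rules above looks roughly like this in the stack file (a sketch; the service name and image are assumptions):

  services:
    pihole:
      image: pihole/pihole
      ports:
        - target: 53
          published: 53
          protocol: tcp
          mode: host
        - target: 53
          published: 53
          protocol: udp
          mode: host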

Failing (these worked before updating):

  • nslookup -type=a google.com 10.20.0.20
  • nslookup -type=a google.com 172.18.0.11

Workaround

  • nslookup -type=a google.com 172.18.0.1 (the gateway's gateway 😂)

So for the time being I've added 172.18.0.1 as the first DNS server in /etc/docker/daemon.json.
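
i.e. something along these lines in /etc/docker/daemon.json (a sketch; 10.20.0.20 is the LAN DNS server mentioned above):

  {
    "dns": ["172.18.0.1", "10.20.0.20"]
  }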

Alternatively, publishing the port with mode: ingress instead did work. The problem then is that the (pihole) DNS server no longer sees the real client IP; all DNS requests appear to come from an IP in the ingress network range.

@polarathene (Contributor)

If you're affected and can reproduce for external connections:

If your host is reachable via IPv6 and you have the default userland-proxy: true (change it via /etc/docker/daemon.json), the connection will IIRC be routed through docker-proxy, and this replaces the remote client IP with an IPv4 gateway IP belonging to the target container's docker bridge network.

You can enable ip6tables: true (which presently also requires experimental: true) as an alternative that fixes that issue, while still allowing userland-proxy: true to accept IPv6 connections to the docker host and route them to the IPv4-only docker network.

Likewise, if you have an IPv6 ULA subnet in the docker bridge network, you'd have a similar problem, but the gateway IP would be IPv6 instead. With userland-proxy: false, external connections would be initiated but hit a DROP rule, so you should enable ip6tables: true there too. An IPv6 GUA subnet should be fine IIRC.
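
For reference, the daemon.json settings being referred to look roughly like this (a sketch; only the keys named above, with the suggested values):

  {
    "experimental": true,
    "ip6tables": true,
    "userland-proxy": true
  }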


If you have connections within the docker host (host to container, container to container via host IP, etc), these can behave in a similar way and replace the client IP.

Especially for a container that reaches itself via a host IP + published port, there is a MASQUERADE rule (with the container IP as both source and destination) that needs to rewrite the source to the gateway IP, otherwise the connection is dropped. I don't think you can do much to avoid that, other than not accessing the container indirectly that way.
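
That hairpin rule is visible in the nat table; for a container at 172.17.0.2 with published port 80 (addresses and port are illustrative), Docker adds something like:

  iptables -t nat -S POSTROUTING | grep MASQUERADE

  # per published container port:
  # -A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE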

@high-code

Hello

I'm affected by this problem too; maybe someone can help me avoid it in my setup.
I have a 3-node Swarm cluster with a reverse proxy container (traefik) deployed on one of the nodes using host mode networking, plus a bunch of containers with other HTTP services (jenkins, gitea, etc.) that are reachable through the proxy. To make those services accessible only from specific external networks, I'm using the traefik whitelist middleware (see the sketch below). The problem: if a container that needs to call the proxy (for example a git container posting a webhook to jenkins) runs on the same machine as the proxy container, the proxy sees the request coming from the docker_gwbridge address (172.18.0.1) instead of the LAN IP address, so my whitelists, which are defined using LAN addresses, don't work as intended. After deploying to another node everything works, because the reverse proxy then sees the LAN address of the other machine; the issue can only be reproduced when the reverse proxy and the git container run on the same machine.
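
The whitelist middleware in question looks roughly like this (a sketch; the middleware name and subnet are placeholders for the actual LAN ranges):

  http:
    middlewares:
      lan-only:
        ipWhiteList:
          sourceRange:
            - "192.168.1.0/24"

With the proxy and the calling container on the same node, the source address this middleware sees is 172.18.0.1, which is not in the allowed range.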
Any help is highly appreciated.
