
podman inspect only shows NetworkSettings from the first network a container is attached to #4907

Closed
abalage opened this issue Jan 20, 2020 · 10 comments
Labels: kind/bug, locked - please file new issue/PR

abalage commented Jan 20, 2020

/kind bug

Description

I created multiple pods, each containing containers. A container may be attached to more than one network created in advance with podman network create.
Manually checking the network interfaces inside the containers looks correct; however, podman inspect <container> shows only the settings of the first network the container was attached to.

Steps to reproduce the issue:

  1. Create two new networks (bridge with shipped defaults, except the firewall backend was changed from iptables to firewalld)
# podman network create  --subnet 172.22.0.0/16 net_proxy
# podman network inspect net_proxy
[
        {
                "cniVersion": "0.4.0",
                "name": "net_proxy",
                "plugins": [
                        {
                                "bridge": "cni-podman3",
                                "ipMasq": true,
                                "ipam": {
                                        "ranges": [
                                                [
                                                        {
                                                                "gateway": "172.22.0.1",
                                                                "subnet": "172.22.0.0/16"
                                                        }
                                                ]
                                        ],
                                        "routes": [
                                                {
                                                        "dst": "0.0.0.0/0"
                                                }
                                        ],
                                        "type": "host-local"
                                },
                                "isGateway": true,
                                "type": "bridge"
                        },
                        {
                                "capabilities": {
                                        "portMappings": true
                                },
                                "type": "portmap"
                        },
                        {
                                "backend": "firewalld",
                                "type": "firewall"
                        }
                ]
        }
]

# podman network create --subnet 172.23.0.0/16 net_elk
# podman network inspect net_elk
[
        {
                "cniVersion": "0.4.0",
                "name": "net_elk",
                "plugins": [
                        {
                                "bridge": "cni-podman4",
                                "ipMasq": true,
                                "ipam": {
                                        "ranges": [
                                                [
                                                        {
                                                                "gateway": "172.23.0.1",
                                                                "subnet": "172.23.0.0/16"
                                                        }
                                                ]
                                        ],
                                        "routes": [
                                                {
                                                        "dst": "0.0.0.0/0"
                                                }
                                        ],
                                        "type": "host-local"
                                },
                                "isGateway": true,
                                "type": "bridge"
                        },
                        {
                                "capabilities": {
                                        "portMappings": true
                                },
                                "type": "portmap"
                        },
                        {
                                "backend": "firewalld",
                                "type": "firewall"
                        }
                ]
        }
]
  2. Create two pods. I use ubuntu:18.04 images for the tests; at a high level there is one pod with a single container (a reverse proxy) and another pod with two containers (elasticsearch, kibana). (192.168.122.253 is the host OS's IP.)
# podman pod create --name reverse_proxy -p 192.168.122.253:80:80 -p 192.168.122.253:443:443
# podman run -d --name proxy --hostname proxy --expose 80 --expose 443 --pod reverse_proxy --network=net_proxy ubuntu:18.04 sleep 6000

# podman pod create --name elk -p 172.23.0.1:80:80 -p 172.23.0.1:443:443
# podman run -d --name elasticsearch --hostname elasticsearch --expose 9200 --pod elk --network=net_elk ubuntu:18.04 sleep 6000
# podman run -d --name kibana --hostname kibana --expose 5601 --pod elk --network=net_elk,net_proxy ubuntu:18.04 sleep 6000

Describe the results you received:
Check the network interfaces and IP addresses in containers. Container 'proxy' looks fine.

# podman exec -ti proxy ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 22:73:3a:32:c8:6f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.22.0.12/16 brd 172.22.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2073:3aff:fe32:c86f/64 scope link 
       valid_lft forever preferred_lft forever

# podman inspect -f "{{.NetworkSettings.IPAddress}}" proxy
172.22.0.12

The full NetworkSettings section, with the rest of the output omitted:

    "NetworkSettings": {
      "Bridge": "",
      "SandboxID": "",
      "HairpinMode": false,
      "LinkLocalIPv6Address": "",
      "LinkLocalIPv6PrefixLen": 0,
      "Ports": [],
      "SandboxKey": "/var/run/netns/cni-4fb3cbfa-4e92-179b-c8b5-4719fc923c6e",
      "SecondaryIPAddresses": null,
      "SecondaryIPv6Addresses": null,
      "EndpointID": "",
      "Gateway": "172.22.0.1",
      "GlobalIPv6Address": "",
      "GlobalIPv6PrefixLen": 0,
      "IPAddress": "172.22.0.12",
      "IPPrefixLen": 16,
      "IPv6Gateway": "",
      "MacAddress": "22:73:3a:32:c8:6f"
    },

However, the container 'kibana' is attached to two networks (net_elk, net_proxy), yet podman inspect shows only the IP address from the first network it was attached to (net_elk).

# podman exec -ti kibana ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether da:3e:fa:92:2d:54 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.23.0.17/16 brd 172.23.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::d83e:faff:fe92:2d54/64 scope link 
       valid_lft forever preferred_lft forever
5: eth1@if47: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 5e:4d:43:3f:38:d1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.22.0.14/16 brd 172.22.255.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5c4d:43ff:fe3f:38d1/64 scope link 
       valid_lft forever preferred_lft forever
# podman inspect -f "{{.NetworkSettings.IPAddress}}" kibana
172.23.0.17

    "NetworkSettings": {
      "Bridge": "",
      "SandboxID": "",
      "HairpinMode": false,
      "LinkLocalIPv6Address": "",
      "LinkLocalIPv6PrefixLen": 0,
      "Ports": [],
      "SandboxKey": "/var/run/netns/cni-62027f0b-04e5-6183-3502-d9476a445165",
      "SecondaryIPAddresses": null,
      "SecondaryIPv6Addresses": null,
      "EndpointID": "",
      "Gateway": "172.23.0.1",
      "GlobalIPv6Address": "",
      "GlobalIPv6PrefixLen": 0,
      "IPAddress": "172.23.0.17",
      "IPPrefixLen": 16,
      "IPv6Gateway": "",
      "MacAddress": "da:3e:fa:92:2d:54"
    },

Describe the results you expected:
I expected podman inspect to show the network settings of every network the container has been attached to, e.g. as an array or map.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 1.7.0

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.12.2
  podman version: 1.7.0
host:
  BuildahVersion: 1.12.0
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.9-lp151.19.1.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.9, commit: unknown'
  Distribution:
    distribution: '"opensuse-leap"'
    version: "15.1"
  MemFree: 7713280000
  MemTotal: 8357810176
  OCIRuntime:
    name: runc
    package: runc-1.0.0~rc6-lp151.1.2.x86_64
    path: /usr/sbin/runc
    version: |-
      runc version 1.0.0-rc6
      spec: 1.0.1-dev
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 2
  eventlogger: file
  hostname: linux-thwt
  kernel: 4.12.14-lp151.28.36-default
  os: linux
  rootless: false
  uptime: 1h 48m 8.97s (Approximately 0.04 days)
registries:
  search:
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 5
  GraphDriverName: overlay
  GraphOptions: {}
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 9
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.7.0-lp151.2.1.x86_64
cni-0.7.1-lp151.10.1.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):
Test VM of OpenSUSE 15.1 x86_64 in KVM. The podman and cni packages are from devel:cubic repository.

rhatdan (Member) commented Jan 20, 2020

Any chance you would be interested in opening a PR to fix this issue?

mheon (Member) commented Jan 20, 2020

Need to check how Docker formats things when this happens - is it an array of the network settings struct, or just additional fields appended to the struct?

mheon (Member) commented Jan 20, 2020

            "Networks": {
                "test1": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "",
                    "EndpointID": "",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": ""
                },
                "test2": {
                    "IPAMConfig": {},
                    "Links": null,
                    "Aliases": [
                        "67c0c0443fda"
                    ],
                    "NetworkID": "",
                    "EndpointID": "",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": ""
                }
            },

Looks like a map keyed by network name
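To illustrate what consuming that Docker-style layout looks like, here is a minimal sketch that reads per-network IP addresses out of a NetworkSettings.Networks map. The JSON is an assumed, abbreviated inspect output (values borrowed from the kibana example above), not real command output:

```python
import json

# Assumed, abbreviated Docker-style inspect output: NetworkSettings.Networks
# is a map keyed by network name, one entry per attached network.
inspect_output = json.loads("""
{
  "NetworkSettings": {
    "Networks": {
      "net_elk":   {"IPAddress": "172.23.0.17", "Gateway": "172.23.0.1"},
      "net_proxy": {"IPAddress": "172.22.0.14", "Gateway": "172.22.0.1"}
    }
  }
}
""")

def addresses_by_network(settings: dict) -> dict:
    """Return {network_name: ip_address} for every attached network."""
    return {name: endpoint["IPAddress"]
            for name, endpoint in settings["NetworkSettings"]["Networks"].items()}

print(addresses_by_network(inspect_output))
```

With a map keyed by name, a caller can ask for a specific network's address directly instead of getting only whichever network happened to be first.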

mheon (Member) commented Jan 20, 2020

To be clear, this is a subset of the NetworkSettings struct - NetworkSettings itself is still present but no part of it save for Networks is populated.

mheon (Member) commented Jan 20, 2020

I don't know how much of this we can populate; a lot of it lives inside CNI, and we can't easily get at it. We have one CNI Result struct for each network we attach to, which gives us interfaces, IP addresses, routes, and DNS. Matching that up with the name of the network that produced the result could be difficult, and it doesn't give us things like aliases, IPAMConfig, etc.

mheon self-assigned this Jan 23, 2020
rhatdan (Member) commented Feb 17, 2020

@mheon any further ideas? Have you talked to the CNI guys?

mheon (Member) commented Feb 17, 2020

Not yet. I'll ask.

mheon (Member) commented Feb 17, 2020

Update: the ordering of results from CNI is the same as the order the networks were passed in. Will try to get to this tomorrow.
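Since CNI returns one result per network in the same order the networks were requested, pairing results back up with network names reduces to a positional zip. A hedged sketch of that idea (the names and result fields are illustrative, not libpod's actual types):

```python
# Illustrative: CNI hands back one result per network, in request order,
# so index i of the results corresponds to network i.
network_names = ["net_elk", "net_proxy"]   # order the networks were passed in
cni_results = [                            # order the results came back
    {"ips": ["172.23.0.17/16"], "mac": "da:3e:fa:92:2d:54"},
    {"ips": ["172.22.0.14/16"], "mac": "5e:4d:43:3f:38:d1"},
]

# Build a Docker-style map keyed by network name.
networks = {name: result for name, result in zip(network_names, cni_results)}
print(networks["net_proxy"]["ips"])
```

This only works because the ordering guarantee holds; without it, matching a result to the network that produced it would be the hard part, as noted earlier in the thread.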

rhatdan (Member) commented Feb 18, 2020

Nice.

mheon (Member) commented Feb 21, 2020

Fixed by #5295

snj33v pushed a commit to snj33v/libpod that referenced this issue May 31, 2020
When inspecting containers, info on CNI networks added to the
container by name (e.g. --net=name1) should be displayed
separately from the configuration of the default network, in a
separate map called Networks.

This patch adds this separation, improving our Docker
compatibility and also adding the ability to see if a container
has more than one IPv4 and IPv6 address and more than one MAC
address.

Fixes containers#4907

Signed-off-by: Matthew Heon <[email protected]>
github-actions bot added the locked - please file new issue/PR label Sep 23, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 23, 2023