
podman run leaks dbus-daemon processes #4483

Closed
ajeddeloh opened this issue Nov 8, 2019 · 21 comments · Fixed by #6569
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. stale-issue

Comments

@ajeddeloh

/kind bug

Description

Running podman run --rm busybox (the container image doesn't matter) leaks two dbus-daemon processes. I am running rootless. I do not have a user dbus daemon running, only a system one.

Logs with --log-level debug: https://gist.github.com/ajeddeloh/4884936398d9ce6203a4ba4e40c39b73

Steps to reproduce the issue:

  1. pgrep dbus-daemon | wc -l

  2. podman run --rm busybox

  3. pgrep dbus-daemon | wc -l and note that the count is two higher.
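The before/after counting in the steps above can be wrapped in a small helper. This is just a sketch; `count_procs` and `leak_delta` are made-up names, and the podman invocation at the bottom is the command under test:

```shell
#!/bin/sh
# Count processes whose exact name is $1; prints 0 when nothing matches.
count_procs() {
  n=$(pgrep -c -x "$1" 2>/dev/null)
  echo "${n:-0}"
}

# Hypothetical helper: run a command and report how many processes
# matching $1 appeared (or disappeared) across the run.
leak_delta() {
  name="$1"; shift
  before=$(count_procs "$name")
  "$@" >/dev/null 2>&1 || true
  after=$(count_procs "$name")
  echo $((after - before))
}

# e.g. leak_delta dbus-daemon podman run --rm busybox
# On an affected system this reports 2; on a fixed one, 0.
```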

Describe the results you received:
Two dbus-daemon processes were leaked

Describe the results you expected:
No processes are leaked

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:            1.6.3
RemoteAPI Version:  1
Go Version:         go1.12.9
Built:              Mon Nov  4 12:00:53 2019
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.12.9
  podman version: 1.6.3
host:
  BuildahVersion: 1.12.0-dev
  CgroupVersion: v1
  Conmon:
    package: Unknown
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.1, commit: 4dc8bcfec41e10ca760c8e2089474c2843dfd066'
  Distribution:
    distribution: gentoo
    version: unknown
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 10000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 10000
      size: 65536
  MemFree: 4907253760
  MemTotal: 16680652800
  OCIRuntime:
    name: runc
    package: Unknown
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc9
      spec: 1.0.1-dev
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  eventlogger: journald
  hostname: grape
  kernel: 5.3.6-gentoo-r1
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: Unknown
    Version: |-
      slirp4netns version 0.4.1
      commit: 4d38845e2e311b684fc8d1c775c725bfcd5ddc27
  uptime: 244h 10m 29.94s (Approximately 10.17 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - quay.io
  - registry.fedoraproject.org
store:
  ConfigFile: /home/andrew/.config/containers/storage.conf
  ContainerStore:
    number: 8
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /home/andrew/bin/fuse-overlayfs
      Package: Unknown
      Version: |-
        fuse-overlayfs: version 0.6.4
        FUSE library version 3.7.0
        using FUSE kernel interface version 7.31
  GraphRoot: /home/andrew/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 7
  RunRoot: /run/user/1000
  VolumePath: /home/andrew/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

andrew@grape ~ $ equery u libpod
[ Legend : U - final flag setting for installation]
[        : I - package is installed with flag     ]
[ Colors : set, unset                             ]
 * Found these USE flags for app-emulation/libpod-1.6.3:
 U I
 - - apparmor : Enable AppArmor support.
 - - btrfs    : Enables dependencies for the "btrfs" graph driver, including necessary kernel flags.
 - - ostree   : Enables dependencies for handling of OSTree images.
 + + rootless : Enables dependencies for running in rootless mode.

Additional environment details (AWS, VirtualBox, physical, etc.):
On laptop, running Gentoo.

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Nov 8, 2019
@mheon
Member

mheon commented Nov 8, 2019

Definitely looks like our linger code, or similar

@giuseppe
Member

giuseppe commented Nov 8, 2019

we are not enabling linger mode automatically anymore.

I wonder if it is just systemd spawning these processes when required (socket activation?).

Does the number keep increasing if you launch more containers or is it constant?

@ajeddeloh
Author

It always leaks two processes, regardless of whether I've already run it before.

@ajeddeloh
Author

Just did some more tests: if I have a user dbus daemon already running, it does not leak (or perhaps doesn't spawn in the first place) any dbus-daemon processes.

@giuseppe
Member

giuseppe commented Nov 8, 2019

so I think it is socket activation: systemd spawns dbus-daemon if it is needed and not already running. Podman doesn't do anything to start it up, it just opens a connection.
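For reference, the socket activation giuseppe describes comes from systemd's per-user D-Bus socket unit: systemd holds the listening socket at `$XDG_RUNTIME_DIR/bus` and starts dbus-daemon on the first connection. A typical `dbus.socket` user unit looks roughly like this (exact contents vary by distribution; `%t` expands to the user's runtime directory):

```ini
[Unit]
Description=D-Bus User Message Bus Socket

[Socket]
ListenStream=%t/bus

[Install]
WantedBy=sockets.target
```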

@avikivity

I'm seeing this problem; is there any resolution? The user is a distcc daemon, so I can't expect it to spawn a dbus-daemon process.

@adrianlzt

Same problem here with

dbus-1.10.24-13.el7_6.x86_64
systemd-219-67.el7_7.2.x86_64
podman-1.6.4-16.el7_8.x86_64
Running podman info spawns a new dbus-daemon process each time.

But in a newer system it does not happen:

dbus 1.12.16-5
systemd 245.4-2
podman 1.8.2-1

@adrianlzt

strace -fs 200 podman info |& grep -i dbus
...
[pid 64778] execve("/usr/bin/dbus-launch", ["dbus-launch"], [/* 32 vars */] <unfinished ...>
[pid 64778] open("/lib64/libdbus-1.so.3", O_RDONLY|O_CLOEXEC <unfinished ...>
[pid 64778] open("/var/lib/dbus/machine-id", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 64778] open("/var/lib/dbus/machine-id.21n00U16", O_WRONLY|O_CREAT|O_EXCL, 0644) = -1 EACCES (Permission denied)
[pid 64780] execve("/usr/bin/dbus-daemon", ["/usr/bin/dbus-daemon", "--fork", "--print-pid", "4", "--print-address", "6", "--session"], [/* 32 vars */] <unfinished ...>

Podman runs dbus-launch, which starts the dbus-daemon.
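The fallback chain visible in the strace can be sketched in shell. This is a simplified rendition of how a session bus address gets picked, not real libdbus code; the function name is made up:

```shell
#!/bin/sh
# Sketch of the session-bus address lookup order (not actual libdbus code).
session_bus_address() {
  runtime_dir="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}"
  if [ -n "${DBUS_SESSION_BUS_ADDRESS:-}" ]; then
    # An explicit address in the environment wins; nothing is spawned.
    echo "$DBUS_SESSION_BUS_ADDRESS"
  elif [ -S "$runtime_dir/bus" ]; then
    # systemd's socket-activated user bus; also nothing is spawned.
    echo "unix:path=$runtime_dir/bus"
  else
    # Fallback: dbus-launch gets exec'd, forking a fresh dbus-daemon --
    # the process seen leaking in the strace above.
    echo "autolaunch:"
  fi
}
```

This also explains the `dbus-run-session` workaround mentioned below in the thread: it pre-sets DBUS_SESSION_BUS_ADDRESS so the autolaunch branch is never reached.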

@mheon
Member

mheon commented Apr 20, 2020

Looks like this is the godbus library, not Podman itself. Are you seeing it create duplicate DBus daemons after the first one is launched?

@adrianlzt

There are two dbus sessions generated.
The first one can be avoided using:

dbus-run-session -- podman info

The second one is executed when the container is stopped.
conmon calls podman when the container is terminated with a call like:

/usr/bin/podman --root /var/lib/zabbix/.local/share/containers/storage --runroot /tmp/run-776 --log-level error --cgroup-manager cgroupfs --tmpdir /tmp/run-776/libpod/tmp --runtime runc --storage-driver overlay --storage-opt overlay.mount_program=/usr/bin/fuse-overlayfs --events-backend journald container cleanup 1afa4381c48bd0ec17f35709199378aa2992ea99fe8d5fe324b3d7271e0e94d8

My use case is Zabbix (the monitoring solution) having to run a container. Probably something related to "zabbix" not being a regular login user.

@mrinaldhillon

I have the same issue with a podman v1.9.0 rootless container. It seems podman expects the DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus path to be present. It got resolved once I created the directory /run/user/1000/bus. There is no systemd running in the base container.

@Everberg

Same here on podman 1.6.4. It is sufficient to run "podman ps" a number of times to get an equal number of dbus-daemon processes. The problem started after upgrading from RHEL 7.7 to RHEL 7.8.
Creating the /run/user/.../bus directory indeed solves it.

@mrinaldhillon

@mheon could you please re-open?

@mheon mheon reopened this Apr 27, 2020
@mheon
Member

mheon commented Apr 27, 2020

@giuseppe PTAL

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented May 28, 2020

@giuseppe Did you ever get a chance to look at this?

@giuseppe
Member

I'll try to carve out some time next week; go-dbus probably spawns a dbus-daemon if there is not already one running. We should not try to use systemd if it is not available.
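The "is systemd available" check giuseppe mentions is conventionally done the way sd_booted(3) does it: by testing for the /run/systemd/system directory, which only exists when systemd is the init. A minimal sketch; the helper name and the overridable path parameter are mine, added so the check can be exercised against arbitrary paths:

```shell
#!/bin/sh
# sd_booted(3)-style check: systemd is the running init iff
# /run/systemd/system exists. The optional argument makes the
# path overridable for testing.
is_systemd_booted() {
  [ -d "${1:-/run/systemd/system}" ]
}

if is_systemd_booted; then
  echo "systemd available: user bus integration is safe"
else
  echo "no systemd: skip DBus/systemd integration"
fi
```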

@skorhone

skorhone commented Jun 1, 2020

Creating a directory named bus seems to fix the leak. But considering /run/user/.../bus should be a socket, it makes me worried about possible side effects.
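The concern above can be checked directly: on a healthy systemd user session the bus path is a unix socket, while the workaround in this thread puts a plain directory there. A small sketch (the function name is made up):

```shell
#!/bin/sh
# Classify what sits at the user-bus path: "socket" is the healthy
# systemd-managed case, "directory" is the mkdir workaround from
# this thread, "missing" means neither exists.
bus_path_kind() {
  if   [ -S "$1" ]; then echo socket
  elif [ -d "$1" ]; then echo directory
  elif [ -e "$1" ]; then echo other
  else echo missing
  fi
}

# e.g. bus_path_kind "/run/user/$(id -u)/bus"
```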

@rhatdan
Member

rhatdan commented Jun 9, 2020

@giuseppe Did you ever get to this?

giuseppe added a commit to giuseppe/libpod that referenced this issue Jun 11, 2020
drop check for current cgroup ownership if the cgroup manager is not
set to systemd.

Closes: containers#4483

Signed-off-by: Giuseppe Scrivano <[email protected]>
@giuseppe
Member

I've opened a PR here: #6569

@e00E

e00E commented Jan 25, 2023

I would like to reopen this issue.

I'm experiencing this issue. podman 4.3.1 on arch linux. My /etc/containers/containers.conf contains cgroup_manager = "cgroupfs".

I see that the parent process of the dbus-daemon processes is PID 1. It seems they are created at podman's request through the system bus at /var/run/dbus/system_bus_socket.
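A quick way to confirm the reparenting described above is to list matching processes whose parent is PID 1, i.e. processes whose original parent has exited. A sketch; the function name is made up:

```shell
#!/bin/sh
# List PIDs of processes named $1 that have been reparented to PID 1 --
# the signature of a leaked daemon whose launcher exited.
orphans_of() {
  for pid in $(pgrep -x "$1" 2>/dev/null); do
    ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')
    [ "$ppid" = "1" ] && echo "$pid"
  done
  return 0
}

# e.g. orphans_of dbus-daemon
```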

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 3, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 3, 2023