Not able to run rootless podman in a k8s pod #5488
Comments
@rhatdan You were working on containers inside unprivileged containers - any input?
The base container is privileged. Its entry point sets up an unprivileged user that works on containers with podman. FYI, buildah v1.14.2 is working fine with the unprivileged user; the warnings are just annoying.
You want to disable cgroups and have the cgroups of the parent container control the podman inside of the container: podman run --cgroups disabled ...
Please note that this requires the crun OCI runtime.
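A minimal sketch of the suggested invocation, run from inside the outer (privileged) container; the alpine image and the echo command are illustrative choices, not from this thread:

```sh
# Run a nested rootless container without podman managing cgroups;
# the parent container's cgroup limits apply to the nested one instead.
podman run --cgroups disabled --rm docker.io/library/alpine \
    echo "nested podman without cgroup management"
```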
Didn't the command above do this for you?
@rhatdan podman run --cgroups disabled fails. Even podman info is failing:
podman info fails in the setupRootless function when calling UserOwnsCurrentSystemdCgroup(): https://github.com/containers/libpod/blob/c617484c15db0c0f227cab3c57f36a1585092a31/pkg/cgroups/cgroups_supported.go#L73 The reason is that /proc/self/cgroup shows the full cgroup hierarchy of the parent container, but that hierarchy is not actually mounted into the parent container. This check is done very early in podman commands.
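A rough shell illustration of that mismatch, using the concrete path from the error output quoted later in this issue:

```sh
# Inside the pod, /proc/self/cgroup reports the full host-side hierarchy...
cat /proc/self/cgroup
# ...but that subtree is not mounted at the matching path inside the
# container, so the early ownership check's stat fails:
stat /sys/fs/cgroup/systemd/kubepods/burstable/pod3cac3187-5e77-11ea-9c85-2c600c7b896a/2961aa806fbeb848a8de8b64d08666dae15f207102a2d5ca5d6afc06b618453c
```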
Also, could you please clarify: when the uid of the current user is 43346, how is the uid resolved to 0 here?
@giuseppe Thoughts? But with --cgroups none, we should probably not be doing this check.
OCI runtimes work around cgroup v1 delegation by mounting only the container's subtree, but /proc/self/cgroup still shows the full hierarchy. The first attempt I'd make is to run the container with a cgroup namespace (--cgroupns=private) and see if it makes any difference.
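For a container started by hand rather than by k8s, a minimal sketch of that attempt; the image is an illustrative choice, and --privileged mirrors the reporter's privileged base container:

```sh
# Start the outer container with a private cgroup namespace so that
# /proc/self/cgroup shows only the delegated subtree.
podman run --cgroupns=private --privileged --rm -it \
    docker.io/library/fedora:31 bash
```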
...and k8s doesn't support specifying the cgroupns, so you might need some sort of wrapper around podman for doing that. The good news is that cgroup v2 support for k8s will default to creating a new cgroup namespace. That is only the first part of the problem, though; the second is that you still need CAP_SETUID/CAP_SETGID for your container.
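A minimal sketch of the kind of wrapper mentioned above, assuming util-linux's unshare supports -C/--cgroup and the pod is privileged enough to create namespaces:

```sh
#!/bin/sh
# Hypothetical podman wrapper: enter a fresh cgroup namespace before
# exec'ing podman, approximating what --cgroupns=private would give
# if k8s allowed setting it.
exec unshare --cgroup -- podman "$@"
```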
@giuseppe Why does the podman info command need to check cgroup ownership? Shouldn't cgroup ownership be checked only when running containers? Commit afd0818326aa37f03a3bc74f0269a06a403db16d seems relevant for running containers, not for podman configuration. The following is the error for podman info:
do not fail if we cannot detect the cgroup ownership. The detection fails when running in a container, since the cgroup shown in /proc/self/cgroup is not accessible, due to the runtime mounting it directly as the cgroup root. Closes: containers#5488 Signed-off-by: Giuseppe Scrivano <[email protected]>
Agreed, this error should not be fatal. I've opened a PR to address it. Could you give it a try?
Hello @giuseppe, I used the latest podman image, quay.io/podman/stable:master; I guess it should contain the latest unreleased changes. But I still see this error:
And disabling cgroups didn't help me:
What crun version are you using? It looks like an issue that was fixed in 0.13.
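To check which runtime version is in use, something like:

```sh
# Print the OCI runtime version; 0.13 or later should carry the fix.
crun --version
```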
Yes, we had some breakage, and I have been working on getting rootless podman to work. The thing that is blocking me right now is podman trying to manage cgroups inside of a container.
The version was indeed lower than 0.13:
But the issue persists even after updating runc:
Thanks a lot, good luck!
Hello! I've installed podman on my jenkins-worker node as follows:
However, when I try to build an image with podman inside the jenkins-worker container, I get this error:
podman version: 1.6.4
So I've fixed the problem by mapping /sys/fs/cgroup from the host into my jenkins-worker container. The problem is solved now.
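For reference, a sketch of that workaround, assuming the jenkins-worker container is started with docker; the image and container names are hypothetical:

```sh
# Bind-mount the host cgroup filesystem into the worker container so
# podman can resolve its own cgroup path.
docker run -d --name jenkins-worker \
    -v /sys/fs/cgroup:/sys/fs/cgroup \
    my-registry/jenkins-worker:latest
```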
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Steps to reproduce the issue:
Describe the results you received:
```
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with:
           loginctl enable-linger 43346
           (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with:
           loginctl enable-linger 43346
           (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
Error: stat /sys/fs/cgroup/systemd/kubepods/burstable/pod3cac3187-5e77-11ea-9c85-2c600c7b896a/2961aa806fbeb848a8de8b64d08666dae15f207102a2d5ca5d6afc06b618453c: no such file or directory
```
Describe the results you expected:
Additional information you deem important (e.g. issue happens only occasionally):
Output of `podman version`:

Output of `podman info --debug`:
Unable to get debug info in rootless mode. Here is the output when run as the root user:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Additional environment details (AWS, VirtualBox, physical, etc.):
```
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.6 LTS
Release:        16.04
Codename:       xenial
```