not able to run rootless podman in k8 pod #5488

Closed
mrinaldhillon opened this issue Mar 13, 2020 · 20 comments · Fixed by #5664
Labels: kind/bug, locked - please file new issue/PR

Comments

@mrinaldhillon:

Is this a BUG REPORT or FEATURE REQUEST?

/kind bug

Description

Steps to reproduce the issue:

  1. ./podman info

Describe the results you received:
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: loginctl enable-linger 43346 (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: loginctl enable-linger 43346 (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
Error: stat /sys/fs/cgroup/systemd/kubepods/burstable/pod3cac3187-5e77-11ea-9c85-2c600c7b896a/2961aa806fbeb848a8de8b64d08666dae15f207102a2d5ca5d6afc06b618453c: no such file or directory

Describe the results you expected:

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 1.8.1

Output of podman info --debug:
Unable to get debug info in rootless mode; here is the output when run as the root user:

./podman info
host:
  BuildahVersion: 1.14.2
  CgroupVersion: v1
  Conmon:
    package: Unknown
    path: /usr/local/conmon
    version: 'conmon version 2.0.11, commit: ff9d97a08d7a4b58267ac03719786e4e7258cecf'
  Distribution:
    distribution: ubuntu
    version: "16.04"
  MemFree: 473352593408
  MemTotal: 540959875072
  OCIRuntime:
    name: runc
    package: Unknown
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 24
  eventlogger: file
  hostname: buildcontainer
  kernel: 4.4.98+
  os: linux
  rootless: false
  uptime: 1018h 37m 52.15s (Approximately 42.42 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions: {}
  GraphRoot: /var/lib/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 0
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

built from source

Additional environment details (AWS, VirtualBox, physical, etc.):
Distributor ID: Ubuntu
Description: Ubuntu 16.04.6 LTS
Release: 16.04
Codename: xenial

@openshift-ci-robot added the kind/bug label on Mar 13, 2020
@mheon (Member) commented Mar 13, 2020:

@rhatdan You were working on containers inside unprivileged containers - any input?

@mrinaldhillon (Author) commented:

The base container is privileged. Its entry point sets up an unprivileged user that works on containers with podman.
tmpdir is /var/run/user/43346/
mdhillon:100000:65536 in /etc/{subuid,subgid}
HOME=/homes/mdhillon is mounted from the host.
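
For reference, a minimal sketch of the kind of entrypoint setup described above; the uid, user name, and paths are taken from this report, but the exact commands are an assumption, not the actual entrypoint:

# hypothetical entrypoint fragment run inside the privileged base container
useradd -u 43346 mdhillon                                   # the unprivileged build user
echo "mdhillon:100000:65536" >> /etc/subuid                 # subordinate uid range
echo "mdhillon:100000:65536" >> /etc/subgid                 # subordinate gid range
mkdir -p /var/run/user/43346 && chown 43346:43346 /var/run/user/43346
export XDG_RUNTIME_DIR=/var/run/user/43346                  # tmpdir used by rootless podman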

FYI, buildah v1.14.2 is working fine with the unprivileged user; the warnings are just annoying.

buildah info
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 43346` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
{
    "host": {
        "CgroupVersion": "v1",
        "Distribution": {
            "distribution": "ubuntu",
            "version": "16.04"
        },
        "MemTotal": 540959875072,
        "MenFree": 473544286208,
        "OCIRuntime": "runc",
        "SwapFree": 0,
        "SwapTotal": 0,
        "arch": "amd64",
        "cpus": 24,
        "hostname": "buildcontainer",
        "kernel": "4.4.98+",
        "os": "linux",
        "rootless": true,
        "uptime": "1025h 18m 1.09s (Approximately 42.71 days)"
    },
    "store": {
        "ContainerStore": {
            "number": 4
        },
        "GraphDriverName": "vfs",
        "GraphOptions": null,
        "GraphRoot": "/b/workspace/.local/share/containers/storage",
        "GraphStatus": {},
        "ImageStore": {
            "number": 1
        },
        "RunRoot": "/run/user/43346"
    }
}

buildah run $(./buildah from alpine) sh
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 43346` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
/ #

@rhatdan (Member) commented Mar 13, 2020:

You want to disable cgroups and have the cgroups of the parent container control the podman inside the container.

podman run --cgroups disabled ...

@mheon (Member) commented Mar 13, 2020:

Please note that this requires the crun OCI runtime
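
A hedged sketch of combining the two suggestions above; the crun path is an assumption and depends on where crun is installed:

# point podman at crun explicitly and skip cgroup creation for the inner container
podman --runtime /usr/bin/crun run --cgroups=disabled -it alpine sh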

@mrinaldhillon (Author) commented:

@rhatdan all podman commands are failing in rootless mode.
@mheon I installed crun and set it as the default runtime; even that made no difference.

Is there a way to disable the cgroups manager?
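
A hedged sketch of forcing the cgroupfs manager; the flag appears in the warnings above, while the libpod.conf location is an assumption for this build-from-source setup:

# per invocation
podman --cgroup-manager=cgroupfs info
# or persistently in libpod.conf (e.g. ~/.config/containers/libpod.conf), set:
# cgroup_manager = "cgroupfs"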

@rhatdan (Member) commented Mar 16, 2020:

Didn't the command above do this for you?
podman run --cgroups disabled ...

@mrinaldhillon (Author) commented:

@rhatdan podman run --cgroups disabled fails. Even podman info is failing:

podman info
ERRO[0000] stat /sys/fs/cgroup/systemd/docker/da09413f3e39d82a2be453a6caf14c55cd3127611549557cc70c50565ef5f0f6: no such file or directory

podman info fails in the setupRootless function when calling UserOwnsCurrentSystemdCgroup(): https://github.com/containers/libpod/blob/c617484c15db0c0f227cab3c57f36a1585092a31/pkg/cgroups/cgroups_supported.go#L73

The reason is that /proc/self/cgroup shows the full cgroup hierarchy of the parent container, but that hierarchy is not actually mounted into the parent container. This check is done very early in podman commands.

cat /proc/self/cgroup | grep systemd
1:name=systemd:/docker/da09413f3e39d82a2be453a6caf14c55cd3127611549557cc70c50565ef5f0f6
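
A sketch of what the failing check effectively runs into, assuming cgroup v1 with the name=systemd hierarchy:

# the path the kernel advertises for this process...
CG=$(awk -F: '$2 == "name=systemd" {print $3}' /proc/self/cgroup)
# ...is not present under the mount inside the container, so the stat fails
stat /sys/fs/cgroup/systemd$CG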

Also, could you please clarify: when the uid of the current user is 43346, how is the uid resolved to 0 here?
https://github.com/containers/libpod/blob/bd9386ddac4ef6730fbe6ce4104e80f56a48fe43/cmd/podman/main_local.go#L170

@rhatdan (Member) commented Mar 19, 2020:

@giuseppe Thoughts?
Currently, running podman inside a container is not supported. I am not sure why it is checking for this cgroup or how we work around it.

But with --cgroups none, we should probably not be doing this check.

@mrinaldhillon (Author) commented:

@rhatdan @giuseppe --cgroups=none is relevant for commands like podman run. Why are cgroups checked for the podman info command?

@giuseppe (Member) commented:

> @giuseppe Thoughts?
> Currently, running podman inside a container is not supported. I am not sure why it is checking for this cgroup or how we work around it.

OCI runtimes work around cgroup v1 delegation by mounting only the container subtree, but /proc/self/mountinfo still shows the full path.

The first attempt I'd do is to try running the container with a cgroup namespace (--cgroupns=private) and see if it makes any difference.
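
A minimal sketch of that first attempt, assuming the outer build container is started by hand with podman rather than by k8s (the image name is a placeholder):

# give the outer container its own cgroup namespace, keeping it privileged as in this setup
podman run --cgroupns=private --privileged -it <build-image> bash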

@giuseppe (Member) commented:

...and k8s doesn't support specifying the cgroupns.

So you might need some sort of wrapper around podman for doing that.

The good news is that cgroup v2 support for k8s will default to creating a new cgroup namespace.

That is only the first part of the problem, though; the second is that you still need CAP_SETUID/CAP_SETGID for your container.
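
A hedged sketch of how the capability part might look in a pod spec; the pod name and image are placeholders, not taken from this issue:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: podman-builder                        # placeholder name
spec:
  containers:
  - name: builder
    image: example.com/build-image:latest     # placeholder image
    securityContext:
      capabilities:
        add: ["SETUID", "SETGID"]             # required for the rootless user namespace setup
EOF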

@mrinaldhillon (Author) commented:

@giuseppe why does the podman info command need to check cgroup ownership? Shouldn't cgroup ownership be checked only when running containers? This commit seems to be relevant for running containers, not for podman configuration: afd0818326aa37f03a3bc74f0269a06a403db16d

The following is the error for podman info:

podman info
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: loginctl enable-linger 43346 (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
Error: stat /sys/fs/cgroup/systemd/kubepods/burstable/pod3cac3187-5e77-11ea-9c85-2c600c7b896a/2961aa806fbeb848a8de8b64d08666dae15f207102a2d5ca5d6afc06b618453c: no such file or directory

giuseppe added a commit to giuseppe/libpod that referenced this issue Mar 30, 2020
do not fail if we cannot detect the cgroup ownership.  The detection
fails when running in a container, since the cgroup showed in
/proc/self/cgroup is not accessible, due to the runtime mounting it
directly as the cgroup root.

Closes: containers#5488

Signed-off-by: Giuseppe Scrivano <[email protected]>
@giuseppe (Member) commented:

Agreed, this error should not be fatal. I've opened a PR to address it.

Could you give it a try?

@AndrienkoAleksandr commented:

Hello @giuseppe, I used the latest podman image quay.io/podman/stable:master; I guess it should contain the latest unreleased changes. But I still see this error:

sh-5.0$ podman run --log-level=debug busybox sh
DEBU[0000] using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/theia/.containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver vfs                       
DEBU[0000] Using graph root /home/theia/.containers/storage 
DEBU[0000] Using run root /var/tmp/containers/storage   
DEBU[0000] Using static dir /home/theia/.containers/storage/libpod 
DEBU[0000] Using tmp dir /var/tmp/containers/runtime/libpod/tmp 
DEBU[0000] Using volume path /home/theia/.containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] Not configuring container store              
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] error from newuidmap:                        
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids 
DEBU[0000] write setgroups file exited with 0           
DEBU[0000] write uid_map exited with 0                  
DEBU[0000] error from newgidmap:                        
WARN[0000] the current user namespace doesn't match the configuration in /etc/subuid or /etc/subgid 
WARN[0000] you can use `podman system migrate` to recreate the user namespace and restart the containers 
DEBU[0000] using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/theia/.containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver vfs                       
DEBU[0000] Using graph root /home/theia/.containers/storage 
DEBU[0000] Using run root /var/tmp/containers/storage   
DEBU[0000] Using static dir /home/theia/.containers/storage/libpod 
DEBU[0000] Using tmp dir /var/tmp/containers/runtime/libpod/tmp 
DEBU[0000] Using volume path /home/theia/.containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "vfs"   
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] Initialized SHM lock manager at path /libpod_rootless_lock_1000530000 
DEBU[0000] Podman detected system restart - performing state refresh 
ERRO[0000] unable to write system event: "write unixgram @00033->/run/systemd/journal/socket: sendmsg: no such file or directory" 
ERRO[0000] stat /sys/fs/cgroup/systemd/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod595acb35_d4ce_4a54_9ae5_3d03a84f8209.slice/crio-5c1b706cfb087f626b1a73c0357d33bda6877eee4baf32731b2cd36702235f57.scope: no such file or directory 

And disabling cgroups didn't help either:

sh-5.0$ podman run --cgroups=disabled --log-level=debug busybox sh
WARN[0000] the current user namespace doesn't match the configuration in /etc/subuid or /etc/subgid 
WARN[0000] you can use `podman system migrate` to recreate the user namespace and restart the containers 
DEBU[0000] using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/theia/.containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver vfs                       
DEBU[0000] Using graph root /home/theia/.containers/storage 
DEBU[0000] Using run root /var/tmp/containers/storage   
DEBU[0000] Using static dir /home/theia/.containers/storage/libpod 
DEBU[0000] Using tmp dir /var/tmp/containers/runtime/libpod/tmp 
DEBU[0000] Using volume path /home/theia/.containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "vfs"   
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
ERRO[0000] stat /sys/fs/cgroup/systemd/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod595acb35_d4ce_4a54_9ae5_3d03a84f8209.slice/crio-5c1b706cfb087f626b1a73c0357d33bda6877eee4baf32731b2cd36702235f57.scope: no such file or directory

@giuseppe (Member) commented:

What crun version are you using? It looks like an issue that was fixed in 0.13.

@TomSweeneyRedHat (Member) commented:

@giuseppe FWIW, @rhatdan discovered that the stable image is still at v1.6.2 as of this morning. We have an issue with the build of the final RPM that @lsm5 is looking into.

@rhatdan (Member) commented Apr 13, 2020:

Yes, we had some breakage, and I have been working on getting rootless podman to work.

The thing that is blocking me right now is podman trying to manage cgroups inside of a container.
This PR should help fix that:
containers/common#115

@AndrienkoAleksandr commented:

> What crun version are you using? It looks like an issue that was fixed in 0.13.

The version was indeed lower than 0.13:

$ crun version 0.12.1
commit: df5f2b2369b3d9f36d175e1183b26e5cee55dd0a
spec: 1.0.0

But the issue is still present even after updating runc:

runc --version
runc version 1.0.0-rc10
commit: 96f6022b37cbe12b26c9ad33a24677bec72a9cc3
spec: 1.0.1-dev

> The thing that is blocking me right now is podman trying to manage cgroups inside of a container.
> This PR should help fix that:
> containers/common#115

Thanks a lot, Good luck!

snj33v pushed a commit to snj33v/libpod that referenced this issue May 31, 2020
(same commit message as above, Closes: containers#5488)
@nickyfoster commented Sep 29, 2020:

Hello!
I'm facing the same issue.
I'm trying to build an image inside a Jenkins container in an OKD cluster (version 3.11).

I've installed podman in my jenkins-worker image as follows:

FROM docker.io/openshift/jenkins-agent-maven-35-centos7:v3.11
USER root
RUN yum -y install podman
RUN chown 1001:1001 /home/jenkins/.local && echo 'jenkins:200000:1001' > /etc/subuid && echo 'jenkins:200000:1001' > /etc/subgid
USER 1001

However, when I try to build an image with podman inside the jenkins-worker container, I get this error:

sh-4.2$ podman
Error: stat /sys/fs/cgroup/systemd/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod571d3a9e_026c_11eb_ae3c_9600005007ef.slice/docker-13376c9c59ccf01aa948dec771429c10eae4229b763cea6509dd30a5425f7256.scope: no such file or directory

podman version: 1.6.4

@nickyfoster commented:

So I've fixed the problem by mapping /sys/fs/cgroup from the host into my jenkins-worker container.

The problem is solved now
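
A hedged sketch of what such a mapping might look like as a hostPath volume in the pod spec; the names and image are placeholders, and bind-mounting the host cgroup tree into a container has security implications:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-worker-example                   # placeholder name
spec:
  containers:
  - name: jenkins-worker
    image: example.com/jenkins-worker:latest     # placeholder image
    volumeMounts:
    - name: cgroup
      mountPath: /sys/fs/cgroup
  volumes:
  - name: cgroup
    hostPath:
      path: /sys/fs/cgroup
EOF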
