
Default cpu-shares are 0, but documentation says it is 1024 #4822

Closed
marusak opened this issue Jan 9, 2020 · 8 comments · Fixed by #4952
Labels
kind/bug: Categorizes issue or PR as related to a bug.
locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@marusak
Contributor

marusak commented Jan 9, 2020

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description
From doc:

To modify the proportion from the default of 1024, use the --cpu-shares flag to set the weighting to 2 or higher.
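(Shares are a relative weight that only matters under CPU contention: a container left at the 1024 default gets roughly twice the CPU time of one set to 512 when both are busy.)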

But when I run a container without specifying --cpu-shares, it has cpu-shares set to 0.

Steps to reproduce the issue:

$ sudo podman run --name is_it_1024 -dit fedora bash
c9cfa73320f239f1aa07acf94f9ee926df06b4f5bc308368fcde791b8256bea4

$ sudo podman inspect --format '{{.HostConfig.CpuShares}}' is_it_1024
0

Describe the results you expected:

1024

Output of podman version:

Version:            1.6.2
RemoteAPI Version:  1
Go Version:         go1.13.1
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.13.1
  podman version: 1.6.2
host:
  BuildahVersion: 1.11.3
  CgroupVersion: v2
  Conmon:
    package: conmon-2.0.2-1.fc31.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.2, commit: 186a550ba0866ce799d74006dab97969a2107979'
  Distribution:
    distribution: fedora
    version: "31"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 1935008
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 1935008
      size: 65536
  MemFree: 1709568000
  MemTotal: 16426700800
  OCIRuntime:
    name: crun
    package: crun-0.10.6-1.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.10.6
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  eventlogger: journald
  hostname: zahlmaschine
  kernel: 5.3.15-300.fc31.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: slirp4netns-0.4.0-20.1.dev.gitbbd6f25.fc31.x86_64
    Version: |-
      slirp4netns version 0.4.0-beta.3+dev
      commit: bbd6f25c70d5db2a1cd3bfb0416a8db99a75ed7e
  uptime: 44h 16m 49.74s (Approximately 1.83 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
store:
  ConfigFile: /home/mmarusak/.config/containers/storage.conf
  ContainerStore:
    number: 2
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.2-2.fc31.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 0.7.2
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  GraphRoot: /home/mmarusak/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 3
  RunRoot: /run/user/1000

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.6.2-2.fc31.x86_64
@openshift-ci-robot added the kind/bug label Jan 9, 2020
@vrothberg
Member

Hi @marusak, thanks for reaching out!

@giuseppe, using --cpu-shares yields the following error:

Error: invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd

Is that correct or a regression? It worked with v1 at some point, didn't it?

@giuseppe
Member

giuseppe commented Jan 9, 2020

Is that correct or a regression? It worked with v1 at some point, didn't it?

are you running as rootless?

@vrothberg
Member

are you running as rootless?

🙈 yes ... which totally explains the difference. So we should make the default value conditional and update the docs?

@marusak
Contributor Author

marusak commented Jan 9, 2020

In my example I was running as root, as it is not supported as rootless (to my knowledge).

$ podman run --name is_it_1024 --cpu-shares=1024 -dit fedora bash
Error: writing file '/sys/fs/cgroup/user.slice/user-1000.slice/[email protected]/cgroup.subtree_control': No such file or directory: OCI runtime command not found error

(this is cgroups v2 on Fedora 31)

@giuseppe
Member

giuseppe commented Jan 9, 2020

🙈 yes ... which totally explains the difference. So we should make the default value conditional and update the docs?

1024 is the default also when running as rootless. The difference is that it cannot be modified.

I think the issue here may be that 0 means "unspecified", given that the minimum value for cpu shares on cgroup v1 is 2. I think we should just convert 0 to 1024.

@vrothberg
Member

@giuseppe the default is set to 0 (see https://github.com/containers/libpod/blob/master/cmd/podman/common.go#L192).

@giuseppe
Member

giuseppe commented Jan 9, 2020

@giuseppe the default is set to 0 (see https://github.com/containers/libpod/blob/master/cmd/podman/common.go#L192).

which is fine. When it is set to 0, its value is not written into the OCI spec file, as libpod treats it as unspecified by the user: https://github.com/containers/libpod/blob/master/pkg/spec/spec.go#L187-L189

So I think the issue is only in inspect.

In the inspect output, should we convert 0 to 1024 which is the default "most of the time" value used for the cgroup? We can't really assume it though as we could re-use an existing cgroup or configure systemd to use a different value.

I don't see how we can make the inspect output better and make it clear that 0 means "it uses whatever value the cgroup is configured with". The same problem exists with other cgroup settings as well.
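
For readers following along, here is a minimal Go sketch of the guard giuseppe describes, using the OCI runtime-spec types. applyCPUShares is a hypothetical helper, not libpod's actual code; the real logic lives in the pkg/spec/spec.go lines linked above.

package main

import (
	"fmt"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

// applyCPUShares is a hypothetical helper: a value of 0 means the user
// never passed --cpu-shares, so nothing is written into the OCI spec and
// the cgroup's own default (1024 on cgroup v1) applies.
func applyCPUShares(s *specs.Spec, shares uint64) {
	if shares == 0 {
		return // unspecified: leave the spec untouched
	}
	if s.Linux == nil {
		s.Linux = &specs.Linux{}
	}
	if s.Linux.Resources == nil {
		s.Linux.Resources = &specs.LinuxResources{}
	}
	if s.Linux.Resources.CPU == nil {
		s.Linux.Resources.CPU = &specs.LinuxCPU{}
	}
	s.Linux.Resources.CPU.Shares = &shares
}

func main() {
	var s specs.Spec
	applyCPUShares(&s, 0)
	fmt.Println("after 0, Linux section set:", s.Linux != nil) // false
	applyCPUShares(&s, 512)
	fmt.Println("after 512, shares:", *s.Linux.Resources.CPU.Shares) // 512
}

This is why inspect later reports 0: the value was never recorded in the spec, so inspect has nothing to show but the field's zero value.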

@mheon
Member

mheon commented Jan 9, 2020

I would think we should set it to 1024 by default in the output, given that we only override it if the resource limits struct is actually present and populated in the spec?
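
A rough sketch of the display-side fix mheon proposes; displayCPUShares and defaultCPUShares are illustrative names, not libpod identifiers, and the change that actually landed (see the commit referenced below) differs in detail.

package main

import "fmt"

// defaultCPUShares is the usual cgroup v1 default weight.
const defaultCPUShares = 1024

// displayCPUShares is a hypothetical helper: if no CPU-shares limit was
// ever recorded in the spec (nil pointer, i.e. the resource limits struct
// was never populated), report the 1024 default instead of 0.
func displayCPUShares(specShares *uint64) uint64 {
	if specShares == nil {
		return defaultCPUShares
	}
	return *specShares
}

func main() {
	fmt.Println(displayCPUShares(nil)) // 1024: user never set --cpu-shares
	shares := uint64(512)
	fmt.Println(displayCPUShares(&shares)) // 512: explicit user value
}

As giuseppe notes above, 1024 is only the "most of the time" default: a pre-existing or systemd-configured cgroup could carry a different weight, so this is a display convention rather than a guarantee.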

marusak added a commit to marusak/cockpit-podman that referenced this issue Jan 10, 2020
Minimum value is 2.
Default value is 1024. We still test that if we don't set it up, it is
0, but for that see containers/podman#4822

Fixes cockpit-project#287
marusak added a commit to marusak/cockpit-podman that referenced this issue Jan 13, 2020
Minimum value is 2.
Default value is 1024. We still test that if we don't set it up, it is
0, but for that see containers/podman#4822

Fixes: cockpit-project#287
Closes: cockpit-project#289
marusak added a commit to cockpit-project/cockpit-podman that referenced this issue Jan 13, 2020
Minimum value is 2.
The default value is 1024 (or value set up in cgroups).
Let's not assume any value. Also podman reports `0` as default. We still test that if we don't set it up, it is 0, but for that see containers/podman#4822

Fixes: #287
Closes: #289
mheon added a commit to mheon/libpod that referenced this issue Jan 23, 2020
This is purely a display change - we weren't initializing the
default value to display for the CPUShares field, which defaults
to 1024.

Fixes containers#4822

Signed-off-by: Matthew Heon <[email protected]>
@github-actions bot added the locked - please file new issue/PR label Sep 23, 2023
@github-actions bot locked as resolved and limited conversation to collaborators Sep 23, 2023