
CapDrop are printed in random and different order each time in inspection #9490

Closed
sshnaidm opened this issue Feb 23, 2021 · 5 comments · Fixed by #9494
Assignees: mheon
Labels: In Progress (actively being worked by the assignee), kind/bug, locked - please file new issue/PR

Comments

@sshnaidm (Member)

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Steps to reproduce the issue:

ubuntu@ubuntu20-vmtest:~$ sudo podman run -d --rm --name qqqq alpine sleep 1d
ubuntu@ubuntu20-vmtest:~$ sudo podman inspect qqqq | jq -r '.[0].HostConfig.CapDrop'
[
  "CAP_MKNOD",
  "CAP_NET_RAW",
  "CAP_AUDIT_WRITE"
]
ubuntu@ubuntu20-vmtest:~$ sudo podman inspect qqqq | jq -r '.[0].HostConfig.CapDrop'
[
  "CAP_AUDIT_WRITE",
  "CAP_MKNOD",
  "CAP_NET_RAW"
]

Describe the results you received:
CapDrop is printed in a random, arbitrary order. CapAdd and other arrays are probably printed the same way.

Describe the results you expected:
Arrays in podman inspect output should be sorted.

Additional information you deem important (e.g. issue happens only occasionally):

The output was ordered before version 3.0.0.

Output of podman version:

Version:      3.0.1
API Version:  3.0.0
Go Version:   go1.15.2
Built:        Thu Jan  1 00:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.19.4
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.26, commit: '
  cpus: 4
  distribution:
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: ubuntu20-vmtest
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.4.0-65-generic
  linkmode: dynamic
  memFree: 779091968
  memTotal: 2083704832
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 0.17.7-5502-dirty
      commit: fd582c529489c0738e7039cbc036781d1d039014
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    selinuxEnabled: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.1.8
      commit: unknown
      libslirp: 4.3.1-git
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.4.3
  swapFree: 0
  swapTotal: 0
  uptime: 16h 25m 40.64s (Approximately 0.67 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /home/ubuntu/.config/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 3
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: 'fuse-overlayfs: /usr/bin/fuse-overlayfs'
      Version: |-
        fusermount3 version: 3.9.0
        fuse-overlayfs: version 1.4
        FUSE library version 3.9.0
        using FUSE kernel interface version 7.31
  graphRoot: /home/ubuntu/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 20
  runRoot: /run/user/1000/containers
  volumePath: /home/ubuntu/.local/share/containers/storage/volumes
version:
  APIVersion: 3.0.0
  Built: 0
  BuiltTime: Thu Jan  1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 3.0.1


Package info (e.g. output of rpm -q podman or apt list podman):

podman/unknown,now 100:3.0.1-2 amd64 [installed]

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

No

Additional environment details (AWS, VirtualBox, physical, etc.):
Ubuntu 20.04

openshift-ci-robot added the kind/bug label Feb 23, 2021
@mheon (Member) commented Feb 23, 2021

I'll take this

mheon self-assigned this Feb 23, 2021
mheon added the In Progress label Feb 23, 2021
@mheon (Member) commented Feb 23, 2021

I can't reproduce - ordering is 100% consistent on my machine.

@mheon (Member) commented Feb 23, 2021

Ah, nevermind, it's only CapDrop not CapAdd. Interesting.

@edsantiago (Member)

FWIW this is not a regression: I get random order in podman-2.2.1-1.fc33

@mheon (Member) commented Feb 23, 2021

Fix in #9494

mheon added a commit to mheon/libpod that referenced this issue Feb 23, 2021
The order of CapAdd when inspecting containers is deterministic.
However, the order of CapDrop is not (for unclear reasons). Add a
quick sort on the final array to guarantee a consistent order.

Fixes containers#9490

Signed-off-by: Matthew Heon <[email protected]>
github-actions bot added the locked - please file new issue/PR label Sep 22, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023