Wrong ownership of volume created with podman volume create #9608

Closed
zakkak opened this issue Mar 3, 2021 · 16 comments · Fixed by #9768
Assignees: mheon
Labels: In Progress, kind/bug, locked - please file new issue/PR

Comments

@zakkak

zakkak commented Mar 3, 2021

/kind bug

Description

A volume created with podman volume create is owned by root:root in podman 3, despite being mounted to a directory owned by quarkus:quarkus. The issue is not present in podman 2.

Steps to reproduce the issue:

  1. podman volume create test-volume
  2. podman run --rm --entrypoint /bin/bash -v test-volume:/project quay.io/quarkus/ubi-quarkus-native-image:21.0.0-java11 -c "ls -la"

Describe the results you received:

total 0
drwxr-xr-x.  2 root root 6 Mar  3 22:51 .
dr-xr-xr-x. 19 root root 6 Mar  3 22:51 ..

Describe the results you expected:

total 0
drwxr-xr-x.  2 quarkus quarkus 6 Mar  3 22:52 .
dr-xr-xr-x. 19 root    root    6 Mar  3 22:52 ..

Additional information you deem important (e.g. issue happens only occasionally):

This works as expected in podman 2.

It also works in podman 3 if I create the volume implicitly through podman run or podman create, e.g.:

$ podman create -v test-volume:/project quay.io/quarkus/ubi-quarkus-native-image:21.0.0-java11
$ podman run --rm --entrypoint /bin/bash -v test-volume:/project registry.access.redhat.com/quarkus/mandrel-20-rhel8:latest -c "ls -la"
total 0
drwxr-xr-x.  2 quarkus quarkus 6 Mar  3 23:08 .
dr-xr-xr-x. 19 root    root    6 Mar  3 23:08 ..

Output of podman version:

Version:      3.0.1
API Version:  3.0.0
Go Version:   go1.15.8
Built:        Fri Feb 19 18:56:17 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.19.4
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.26-1.fc33.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.26, commit: 777074ecdb5e883b9bec233f3630c5e7fa37d521'
  cpus: 8
  distribution:
    distribution: fedora
    version: "33"
  eventLogger: journald
  hostname: slimhat
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.10.19-200.fc33.x86_64
  linkmode: dynamic
  memFree: 15687917568
  memTotal: 33436680192
  ociRuntime:
    name: crun
    package: crun-0.18-1.fc33.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.18
      commit: 808420efe3dc2b44d6db9f1a3fac8361dde42a95
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    selinuxEnabled: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.fc33.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 12884893696
  swapTotal: 12884893696
  uptime: 9h 52m 54.98s (Approximately 0.38 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/zakkak/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.4.0-1.fc33.x86_64
      Version: |-
        fusermount3 version: 3.9.3
        fuse-overlayfs: version 1.4
        FUSE library version 3.9.3
        using FUSE kernel interface version 7.31
  graphRoot: /home/zakkak/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 30
  runRoot: /run/user/1000/containers
  volumePath: /home/zakkak/.local/share/containers/storage/volumes
version:
  APIVersion: 3.0.0
  Built: 1613753777
  BuiltTime: Fri Feb 19 18:56:17 2021
  GitCommit: ""
  GoVersion: go1.15.8
  OsArch: linux/amd64
  Version: 3.0.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.0.1-1.fc33.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

@mheon
Member

mheon commented Mar 12, 2021

I'll take a look next week.

@mheon mheon self-assigned this Mar 12, 2021
@mheon mheon added the In Progress label Mar 19, 2021
@mheon
Member

mheon commented Mar 19, 2021

Confirmed this reproduces. Working on identifying cause.

@mheon
Member

mheon commented Mar 19, 2021

#9768 should fix this.

@jdoss
Contributor

jdoss commented Mar 23, 2021

Hey @mheon, I think I am seeing this issue when trying to use Podman 3 with the official Elasticsearch container:

/usr/bin/podman volume create podman-es-test
/usr/bin/podman run --name podman-es-test  --replace --rm -it -e ES_JAVA_OPTS="-Xms512m -Xmx512m"  --volume podman-es-test:/usr/share/elasticsearch/data:Z elasticsearch:7.5.2 ls -lah

Podman 3 has incorrect ownership of the data directory (root:root):

# podman version
Version:      3.0.1
API Version:  3.0.0
Go Version:   go1.15.8
Built:        Fri Feb 19 16:56:17 2021
OS/Arch:      linux/amd64
# /usr/bin/podman volume create podman-es-test
podman-es-test
# /usr/bin/podman run --name podman-es-test  --replace --rm -it -e ES_JAVA_OPTS="-Xms512m -Xmx512m"  --volume podman-es-test:/usr/share/elasticsearch/data:Z elasticsearch:7.5.2 ls -lah
total 560K
drwxrwxr-x.  1 elasticsearch root   17 Jan 15  2020 .
drwxr-xr-x.  1 root          root   27 Jan 15  2020 ..
-rw-r--r--.  1 elasticsearch root   18 Aug  8  2019 .bash_logout
-rw-r--r--.  1 elasticsearch root  193 Aug  8  2019 .bash_profile
-rw-r--r--.  1 elasticsearch root  231 Aug  8  2019 .bashrc
-rw-r--r--.  1 elasticsearch root  14K Jan 15  2020 LICENSE.txt
-rw-r--r--.  1 elasticsearch root 511K Jan 15  2020 NOTICE.txt
-rw-r--r--.  1 elasticsearch root 8.0K Jan 15  2020 README.asciidoc
drwxr-xr-x.  2 elasticsearch root 4.0K Jan 15  2020 bin
drwxrwxr-x.  2 elasticsearch root  148 Jan 15  2020 config
drwxr-xr-x.  2 root          root    6 Mar 23 21:34 data
drwxr-xr-x.  1 elasticsearch root   17 Jan 15  2020 jdk
drwxr-xr-x.  3 elasticsearch root 4.0K Jan 15  2020 lib
drwxrwxr-x.  2 elasticsearch root    6 Jan 15  2020 logs
drwxr-xr-x. 38 elasticsearch root 4.0K Jan 15  2020 modules
drwxr-xr-x.  2 elasticsearch root    6 Jan 15  2020 plugins

Podman 2 has the correct ownership of the data directory (elasticsearch:root):

# podman version
Version:      2.1.1
API Version:  2.0.0
Go Version:   go1.15.2
Built:        Wed Oct  7 16:21:20 2020
OS/Arch:      linux/amd64
# /usr/bin/podman volume create podman-es-test
podman-es-test
# /usr/bin/podman run --name podman-es-test  --replace --rm -it -e ES_JAVA_OPTS="-Xms512m -Xmx512m"  --volume podman-es-test:/usr/share/elasticsearch/data:Z elasticsearch:7.5.2 ls -lah
total 560K
drwxrwxr-x.  1 elasticsearch root   17 Jan 15  2020 .
drwxr-xr-x.  1 root          root   27 Jan 15  2020 ..
-rw-r--r--.  1 elasticsearch root   18 Aug  8  2019 .bash_logout
-rw-r--r--.  1 elasticsearch root  193 Aug  8  2019 .bash_profile
-rw-r--r--.  1 elasticsearch root  231 Aug  8  2019 .bashrc
-rw-r--r--.  1 elasticsearch root  14K Jan 15  2020 LICENSE.txt
-rw-r--r--.  1 elasticsearch root 511K Jan 15  2020 NOTICE.txt
-rw-r--r--.  1 elasticsearch root 8.0K Jan 15  2020 README.asciidoc
drwxr-xr-x.  2 elasticsearch root 4.0K Jan 15  2020 bin
drwxrwxr-x.  2 elasticsearch root  148 Jan 15  2020 config
drwxrwxr-x.  2 elasticsearch root    6 Mar 23 21:35 data
drwxr-xr-x.  1 elasticsearch root   17 Jan 15  2020 jdk
drwxr-xr-x.  3 elasticsearch root 4.0K Jan 15  2020 lib
drwxrwxr-x.  2 elasticsearch root    6 Jan 15  2020 logs
drwxr-xr-x. 38 elasticsearch root 4.0K Jan 15  2020 modules
drwxr-xr-x.  2 elasticsearch root    6 Jan 15  2020 plugins

If you don't mount a volume in on Podman 3, the data directory has the correct elasticsearch:root ownership:

# /usr/bin/podman run --name podman-es-test  --replace --rm -it -e ES_JAVA_OPTS="-Xms512m -Xmx512m" elasticsearch:7.5.2 ls -lah
total 560K
drwxrwxr-x.  1 elasticsearch root   17 Jan 15  2020 .
drwxr-xr-x.  1 root          root   27 Jan 15  2020 ..
-rw-r--r--.  1 elasticsearch root   18 Aug  8  2019 .bash_logout
-rw-r--r--.  1 elasticsearch root  193 Aug  8  2019 .bash_profile
-rw-r--r--.  1 elasticsearch root  231 Aug  8  2019 .bashrc
-rw-r--r--.  1 elasticsearch root  14K Jan 15  2020 LICENSE.txt
-rw-r--r--.  1 elasticsearch root 511K Jan 15  2020 NOTICE.txt
-rw-r--r--.  1 elasticsearch root 8.0K Jan 15  2020 README.asciidoc
drwxr-xr-x.  2 elasticsearch root 4.0K Jan 15  2020 bin
drwxrwxr-x.  2 elasticsearch root  148 Jan 15  2020 config
drwxrwxr-x.  2 elasticsearch root    6 Jan 15  2020 data
drwxr-xr-x.  1 elasticsearch root   17 Jan 15  2020 jdk
drwxr-xr-x.  3 elasticsearch root 4.0K Jan 15  2020 lib
drwxrwxr-x.  2 elasticsearch root    6 Jan 15  2020 logs
drwxr-xr-x. 38 elasticsearch root 4.0K Jan 15  2020 modules
drwxr-xr-x.  2 elasticsearch root    6 Jan 15  2020 plugins

Will your PR fix this issue too? The reason I ask is because the Elasticsearch container runs as root but uses its entrypoint to drop to the elasticsearch user (https://github.com/elastic/elasticsearch/blob/7.5/distribution/docker/src/docker/bin/docker-entrypoint.sh), so I think with the changes in Podman 3 this container will still be broken compared to its old behavior in Podman 2.
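
For reference, the privilege-drop pattern in that entrypoint looks roughly like this (a hand-written sketch of the pattern, not the actual script linked above):

#!/bin/bash
# Sketch: the container starts as root, performs its setup, then
# re-executes the server as the unprivileged elasticsearch user.
if [ "$(id -u)" = "0" ]; then
  exec runuser -u elasticsearch -- "$@"
fi
exec "$@"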

mheon added a commit to mheon/libpod that referenced this issue Mar 24, 2021
As part of a fix for an earlier bug (containers#5698) we added the ability
for Podman to chown volumes to correctly match the user running
in the container, even in adverse circumstances (where we don't
know the right UID/GID until very late in the process). However,
we only did this for volumes created automatically by a
`podman run` or `podman create`. Volumes made by
`podman volume create` do not get this chown, so their
permissions may not be correct. I've looked, and I don't think
there's a good reason not to do this chown for all volumes the
first time the container is started.

I would prefer to do this as part of volume copy-up, but I don't
think that's really possible (copy-up happens earlier in the
process and we don't have a spec). There is a small chance, as
things stand, that a copy-up happens for one container and then
a chown for a second, unrelated container, but the odds of this
are astronomically small (we'd need a very close race between two
starting containers).

Fixes containers#9608

Signed-off-by: Matthew Heon <[email protected]>
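
With this change, the original reproducer should show the volume being chowned the first time a container that uses it starts:

podman volume create test-volume
podman run --rm --entrypoint /bin/bash -v test-volume:/project quay.io/quarkus/ubi-quarkus-native-image:21.0.0-java11 -c "ls -la"

The expected output is the podman 2 behavior shown above, with /project owned by quarkus:quarkus.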
@mheon
Member

mheon commented Mar 24, 2021

@jdoss I just tested with my PR and it looks as though it's working properly - the data directory is owned by elasticsearch:root.

@jdoss
Contributor

jdoss commented Mar 24, 2021

@mheon heck yeah! Thank you!

mheon added a commit to mheon/libpod that referenced this issue Mar 29, 2021
@jdoss
Contributor

jdoss commented Apr 8, 2021

@mheon did this make it into Podman 3.1.0? Volume ownership is still busted on FCOS 34.20210328.1.1 with Podman 3.1.0:

# cat /etc/os-release |grep VERSION
VERSION="34.20210328.1.1 (CoreOS Prerelease)"
VERSION_ID=34
VERSION_CODENAME=""
REDHAT_BUGZILLA_PRODUCT_VERSION=34
REDHAT_SUPPORT_PRODUCT_VERSION=34
OSTREE_VERSION='34.20210328.1.1'
[root@example (example.example.wtf) data]# podman version
Version:      3.1.0
API Version:  3.1.0
Go Version:   go1.16
Built:        Tue Mar 30 13:29:36 2021
OS/Arch:      linux/amd64
[root@example (example.example.wtf) data]# /usr/bin/podman run --name podman-es-test  --replace --rm -it -e ES_JAVA_OPTS="-Xms512m -Xmx512m"  --volume podman-es-test:/usr/share/elasticsearch/data:Z elasticsearch:7.5.2 ls -lah
total 560K
drwxrwxr-x.  1 elasticsearch root   17 Jan 15  2020 .
drwxr-xr-x.  1 root          root   27 Jan 15  2020 ..
-rw-r--r--.  1 elasticsearch root   18 Aug  8  2019 .bash_logout
-rw-r--r--.  1 elasticsearch root  193 Aug  8  2019 .bash_profile
-rw-r--r--.  1 elasticsearch root  231 Aug  8  2019 .bashrc
-rw-r--r--.  1 elasticsearch root  14K Jan 15  2020 LICENSE.txt
-rw-r--r--.  1 elasticsearch root 511K Jan 15  2020 NOTICE.txt
-rw-r--r--.  1 elasticsearch root 8.0K Jan 15  2020 README.asciidoc
drwxr-xr-x.  2 elasticsearch root 4.0K Jan 15  2020 bin
drwxrwxr-x.  2 elasticsearch root  148 Jan 15  2020 config
drwxr-xr-x.  2 root          root    6 Apr  8 21:39 data
drwxr-xr-x.  1 elasticsearch root   17 Jan 15  2020 jdk
drwxr-xr-x.  3 elasticsearch root 4.0K Jan 15  2020 lib
drwxrwxr-x.  2 elasticsearch root    6 Jan 15  2020 logs
drwxr-xr-x. 38 elasticsearch root 4.0K Jan 15  2020 modules
drwxr-xr-x.  2 elasticsearch root    6 Jan 15  2020 plugins

@mheon
Member

mheon commented Apr 8, 2021

Yep, this one made it into 3.1.0, so my patch must not have been a complete fix.

@mheon
Member

mheon commented Apr 8, 2021

Though - to verify, podman-es-test - is that a fresh volume, or preexisting?

This will only fix newly-created volumes (i.e., made after upgrading to v3.1.0).
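
A volume created before the upgrade can pick up the new behavior by being recreated (assuming it holds no data worth keeping):

podman volume rm podman-es-test
podman volume create podman-es-test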

@jdoss
Contributor

jdoss commented Apr 8, 2021

This is a brand new VM with a brand new volume. Also, if you do a --volume /opt/elasticsearch/data:/usr/share/elasticsearch/data:Z it is still owned as root:root.

@EduardoVega
Contributor

I think the problem is between Podman and the entrypoint script of the elasticsearch image; it seems the correct user/group (elasticsearch:root) is not being detected.

Similar scenarios do work with mariadb:

Regular volume

podman run -d  -e 'MYSQL_USER=podman' -e 'MYSQL_PASSWORD=podman' -e 'MYSQL_ROOT_PASSWORD=podman' -e 'MYSQL_DATABASE=podman' -v /tmp/mariadb-data-01:/var/lib/mysql:Z --name mariadb01 mariadb

Named volume

podman run -d  -e 'MYSQL_USER=podman' -e 'MYSQL_PASSWORD=podman' -e 'MYSQL_ROOT_PASSWORD=podman' -e 'MYSQL_DATABASE=podman' -v mariadb-data-01:/var/lib/mysql:Z --name mariadb02 mariadb

As a workaround, I think you can run elasticsearch with the new U volume option. This only works for regular volumes.

podman run --name podman-es-test  --replace --rm -it -e ES_JAVA_OPTS="-Xms512m -Xmx512m"  --volume /tmp/elastic-data:/usr/share/elasticsearch/data:Z,U elasticsearch:7.5.2

@mheon
Member

mheon commented Apr 9, 2021

@jdoss Your second example (--volume /opt/elasticsearch/data:/usr/share/elasticsearch/data:Z) will never chown by default - that's not a bug. Podman will only chown named volumes, not host paths. As of 3.1.0 you can explicitly request a chown to appropriate permissions using :U (so :U,Z instead of just :Z).

@jdoss
Contributor

jdoss commented Apr 9, 2021

@mheon Running your example still results in root:root, which is different behavior than Podman 2.x:

podman run --name podman-es-test  --replace --rm -it -e ES_JAVA_OPTS="-Xms512m -Xmx512m"  --volume /tmp/elastic-data:/usr/share/elasticsearch/data:U,Z elasticsearch:7.5.2 ls -lah
total 584K
drwxrwxr-x. 10 elasticsearch root 4.0K Jan 15  2020 .
drwxr-xr-x. 53 root          root 4.0K Jan 15  2020 ..
-rw-r--r--.  1 elasticsearch root   18 Aug  8  2019 .bash_logout
-rw-r--r--.  1 elasticsearch root  193 Aug  8  2019 .bash_profile
-rw-r--r--.  1 elasticsearch root  231 Aug  8  2019 .bashrc
-rw-r--r--.  1 elasticsearch root  14K Jan 15  2020 LICENSE.txt
-rw-r--r--.  1 elasticsearch root 511K Jan 15  2020 NOTICE.txt
-rw-r--r--.  1 elasticsearch root 8.0K Jan 15  2020 README.asciidoc
drwxr-xr-x.  2 elasticsearch root 4.0K Jan 15  2020 bin
drwxrwxr-x.  2 elasticsearch root 4.0K Jan 15  2020 config
drwxr-xr-x.  2 root          root   40 Apr  9 15:19 data
drwxr-xr-x.  9 elasticsearch root 4.0K Jan 15  2020 jdk
drwxr-xr-x.  3 elasticsearch root 4.0K Jan 15  2020 lib
drwxrwxr-x.  2 elasticsearch root 4.0K Jan 15  2020 logs
drwxr-xr-x. 38 elasticsearch root 4.0K Jan 15  2020 modules
drwxr-xr-x.  2 elasticsearch root 4.0K Jan 15  2020 plugins

@mheon
Member

mheon commented Apr 9, 2021

If :U isn't working, then I definitely concur with @EduardoVega that the issue isn't with the volume code, but with the user the container is running as. Podman must think the container is running as root:root (maybe the entrypoint script drops to an unprivileged user?)
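
One way to check what user Podman resolves from the image config (empty output means the image defaults to root):

podman image inspect --format '{{.Config.User}}' elasticsearch:7.5.2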

@EduardoVega
Contributor

EduardoVega commented Apr 9, 2021

@jdoss @mheon

The :U option does work

Run the container without changing the CMD:

podman run --name podman-es-test  --replace --rm -it -e ES_JAVA_OPTS="-Xms512m -Xmx512m"  --volume /tmp/elastic-data:/usr/share/elasticsearch/data:Z,U elasticsearch:7.5.2

Run the container changing the CMD (ls -lah). Here I need to use the --user flag, because otherwise it will be root:root.

podman run --name podman-es-test  --replace --rm -it -e ES_JAVA_OPTS="-Xms512m -Xmx512m"  --volume /tmp/elastic-data:/usr/share/elasticsearch/data:Z,U --user elasticsearch:root elasticsearch:7.5.2 ls -lah
total 560K
drwxrwxr-x. 10 elasticsearch root   17 Jan 15  2020 .
drwxr-xr-x. 53 root          root   27 Jan 15  2020 ..
-rw-r--r--.  1 elasticsearch root   18 Aug  8  2019 .bash_logout
-rw-r--r--.  1 elasticsearch root  193 Aug  8  2019 .bash_profile
-rw-r--r--.  1 elasticsearch root  231 Aug  8  2019 .bashrc
-rw-r--r--.  1 elasticsearch root  14K Jan 15  2020 LICENSE.txt
-rw-r--r--.  1 elasticsearch root 511K Jan 15  2020 NOTICE.txt
-rw-r--r--.  1 elasticsearch root 8.0K Jan 15  2020 README.asciidoc
drwxr-xr-x.  2 elasticsearch root 4.0K Jan 15  2020 bin
drwxrwxr-x.  2 elasticsearch root  148 Jan 15  2020 config
drwxrwxr-x.  2 elasticsearch root   40 Apr  9 16:59 data
drwxr-xr-x.  9 elasticsearch root   17 Jan 15  2020 jdk
drwxr-xr-x.  3 elasticsearch root 4.0K Jan 15  2020 lib
drwxrwxr-x.  2 elasticsearch root    6 Jan 15  2020 logs
drwxr-xr-x. 38 elasticsearch root 4.0K Jan 15  2020 modules
drwxr-xr-x.  2 elasticsearch root    6 Jan 15  2020 plugins

I do believe there is a problem with named volumes, but only with the Elasticsearch image, because the mariadb image does work.

@jdoss
Contributor

jdoss commented Apr 9, 2021

Hey @EduardoVega and @mheon, thanks for the replies and for taking the time to clear things up for me. You are right, it does work!

I pointed out in my OP on this issue that the entrypoint in the official elasticsearch container runs as the root user but then drops to elasticsearch:root if it is starting up elasticsearch:

Will your PR fix this issue too? The reason I ask is because the Elasticsearch container runs as root but uses its entrypoint to drop to the elasticsearch user (https://github.com/elastic/elasticsearch/blob/7.5/distribution/docker/src/docker/bin/docker-entrypoint.sh), so I think with the changes in Podman 3 this container will still be broken compared to its old behavior in Podman 2.

I am able to get elasticsearch running in my systemd unit by adding --user elasticsearch:root and using this as the volume mount: --volume /opt/elasticsearch/data/elasticsearch:/usr/share/elasticsearch/data:U,Z.
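
Putting the two flags together, the podman run line inside the unit looks roughly like this (a trimmed sketch; the rest of the unit is omitted):

/usr/bin/podman run --name podman-es-test --replace --rm --user elasticsearch:root -e ES_JAVA_OPTS="-Xms512m -Xmx512m" --volume /opt/elasticsearch/data/elasticsearch:/usr/share/elasticsearch/data:U,Z elasticsearch:7.5.2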

jmguzik pushed a commit to jmguzik/podman that referenced this issue Apr 26, 2021
@github-actions github-actions bot added the locked - please file new issue/PR label Sep 22, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023