Wrong ownership of volume created with podman volume create
#9608
I'll take a look next week.
Confirmed this reproduces. Working on identifying cause.
#9768 should fix
Hey @mheon, I think I am seeing this issue when trying to use Podman 3 with the official Elasticsearch container:
Podman 3 has the incorrect ownership of the data directory (root:root)
Podman 2 has the correct ownership of the data directory (elasticsearch:root)
If you don't mount a volume in on Podman 3, the data directory has the correct ownership.
Will your PR fix this issue too? The reason I ask is because the Elasticsearch container runs as root but uses the entrypoint to run as the elasticsearch user.
As part of a fix for an earlier bug (containers#5698) we added the ability for Podman to chown volumes to correctly match the user running in the container, even in adverse circumstances (where we don't know the right UID/GID until very late in the process). However, we only did this for volumes created automatically by a `podman run` or `podman create`. Volumes made by `podman volume create` do not get this chown, so their permissions may not be correct. I've looked, and I don't think there's a good reason not to do this chown for all volumes the first time the container is started. I would prefer to do this as part of volume copy-up, but I don't think that's really possible (copy-up happens earlier in the process and we don't have a spec). There is a small chance, as things stand, that a copy-up happens for one container and then a chown for a second, unrelated container, but the odds of this are astronomically small (we'd need a very close race between two starting containers). Fixes containers#9608 Signed-off-by: Matthew Heon <[email protected]>
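The chown-on-first-start behavior described in the commit message can be sketched in miniature. The following is an illustrative Python sketch of the idea only, not Podman's actual Go implementation; the function name and arguments are invented for this example:

```python
import os

def ensure_volume_ownership(volume_path: str, uid: int, gid: int) -> bool:
    """Chown volume_path to uid:gid if it is not already owned by them.

    Returns True if a chown was performed, False if ownership already
    matched. Mirrors the idea of chowning a volume once, on first use
    by a container, rather than at volume-creation time.
    """
    st = os.stat(volume_path)
    if st.st_uid == uid and st.st_gid == gid:
        return False  # ownership already correct; nothing to do
    # Changing to a different owner requires sufficient privileges
    # (root, or a matching entry in the user namespace mapping).
    os.chown(volume_path, uid, gid)
    return True
```

Calling this with the directory's current owner is a no-op, which is why a volume that was chowned on its first use is left alone on subsequent starts.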
@jdoss I just tested with my PR and it looks as though it's working properly.
@mheon heck yeah! Thank you!
@mheon did this make it into Podman 3.1.0? Volume ownership is still busted on FCOS 34.20210328.1.1 with Podman 3.1.0.
Yep, this one made it into 3.1.0, so my patch must not have been a complete fix.
Though, to verify: this will only fix newly-created volumes (i.e., made after the upgrade to v3.1.0).
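Since the chown only happens the first time a volume is used, a volume created before the upgrade would keep its old ownership. One workaround sketch, assuming the volume name from the reproduction steps and that its existing data can be discarded (this destroys the volume's contents):

```shell
# WARNING: removes the volume and all data stored in it
podman volume rm test-volume
podman volume create test-volume
# The first container started with the recreated volume should now
# trigger the chown introduced in 3.1.0.
```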
This is a brand new VM with a brand new volume.
I think that the problem is between Podman and the entrypoint script of the Elasticsearch image; it seems that the correct user/group (elasticsearch:root) is not being detected. Similar scenarios do work with mariadb.
Regular volume:
Named volume:
As a workaround, I think you can run Elasticsearch with the new `:U` volume option. This only works for regular volumes.
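The `:U` option mentioned above tells Podman to chown the mounted volume's contents to match the container's user at start. A hedged sketch of that workaround (the volume name, image reference, and mount path here are illustrative, not taken from the thread):

```shell
podman volume create es-data

# The :U suffix asks Podman to chown the volume to the container user,
# avoiding the root:root ownership problem for pre-existing volumes.
podman run --rm -v es-data:/usr/share/elasticsearch/data:U \
  docker.elastic.co/elasticsearch/elasticsearch:7.11.1 \
  ls -ld /usr/share/elasticsearch/data
```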
@jdoss Your second example (…)
@mheon Running your example still results in root:root ownership.
Run container without changing the CMD:
Run container changing the CMD (`ls -lah`). Here I need to use the `--user` flag because otherwise it will be root:root:
I do believe there is a problem with named volumes and only the Elasticsearch image, because the mariadb image does work.
Hey @EduardoVega and @mheon, thanks for the replies and for taking the time to clear things up for me. You are right, it does work! I pointed out in my OP on this issue that the entrypoint in the official Elasticsearch container does run as the elasticsearch user.
I am able to get Elasticsearch running in my systemd unit by adding (…).
/kind bug
Description
A volume created with `podman volume create` is owned by root:root in Podman 3, despite being mounted to a directory owned by quarkus:quarkus. The issue is not present in Podman 2.

Steps to reproduce the issue:

```shell
podman volume create test-volume

podman run --rm --entrypoint /bin/bash -v test-volume:/project quay.io/quarkus/ubi-quarkus-native-image:21.0.0-java11 -c "ls -la"
```
Describe the results you received:
Describe the results you expected:
Additional information you deem important (e.g. issue happens only occasionally):
This works as expected in Podman 2. It also works in Podman 3 if I create the volume implicitly through `podman run` or `podman create`, e.g.:

Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):
):Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Yes