Error reading image ... image not known #3982
Comments
It looks like that changes the error I get when I try to prune, but I'm still getting …
I do use …
Hm. Do you have any Buildah containers created? This looks like it may have something to do with lingering Buildah containers after builds.
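One way to check for lingering Buildah containers (a sketch; the Podman flag is version-dependent and an assumption here):

```
# List Buildah working containers, including ones created by other tools.
buildah containers --all

# Newer Podman can also show containers that exist only in shared storage;
# depending on the version the flag is --external or --storage (assumption).
podman ps --all --external
```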
A …
I don't suspect this is …
That's probably why … We should probably just ignore those errors: if an image fails to prune because it's in use by a container, we should not have been pruning it in the first place.
This doesn't resolve the …
Should I create a separate issue for …?
Agreed - it'll probably be a trivial fix, so we should track it separately.
I just created a new issue for … Thanks!
@vrothberg You mind taking a look at the …?
Looking into it now.
Hard to judge from afar how the … @bmaupin, did you manually remove data from the storage, or did those errors occur without manual intervention? Could you share the output of …? That might help point us in the right direction. In the meantime, I'll open a PR for c/storage to include the image ID in the …
I'm not a power user; the first time I've ever manually touched the storage was when I was requested to run …
Thanks!
Sorry for the late reply. First, thanks for sharing the data! If you still suffer from this issue, could you repaste the output from …?
This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.
@bmaupin, are you still seeing the issue?
Sorry, after filing 4-5 different bugs for podman I ended up switching back to Docker so I could get my work done. But I still have podman installed. I don't believe I've touched podman since our last interaction, and it looks like the error is slightly different now:
It looks like I'm running a newer version of podman:
I started this command and piped it to a file. It's been several minutes, the file's nearly 200 MB, and it's still running. When it finishes I'll compress the output, and if it's a reasonable size I'll attach it here. Thanks!
Unclosing.
@bmaupin, I am going to close it as we can't reproduce the issue :( Please reopen if there's a reproducer.
I'm seeing this issue as well:
Doing a …
@vrothberg my machine :) But I suspect that just "rm -rf"-ing one of those directories under …
If that's the case, we're between a rock and a hard place. If files are removed from the storage, bad things will happen :^)
I didn't do it on purpose though; in fact, I didn't poke into the …
The report from Paolo is really helpful. One thing in common with the original report is that both his configuration and the original one use directories that systemd-tmpfiles can clean up. We don't set the sticky bit on any of our files (and we cannot really do that for image files). @bonzini, are you using the default configuration for systemd-tmpfiles, or have you tweaked it? In the default configuration I have on Fedora 32, I see:
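(Not the author's original paste, but for reference: on a stock Fedora install the relevant entries usually come from /usr/lib/tmpfiles.d/tmp.conf and look roughly like the following, meaning anything under /tmp untouched for 10 days, or under /var/tmp for 30 days, becomes eligible for deletion.)

```
# /usr/lib/tmpfiles.d/tmp.conf -- typical Fedora defaults, shown for illustration
q /tmp 1777 root root 10d
q /var/tmp 1777 root root 30d
```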
Wait, what? I was completely unaware of this. This sounds like it could break Podman's alive lock on long-running systems...
Nope, I have not tweaked the configuration in any way. That's a great catch @giuseppe!!
Yes, we should not use … We could probably fix the rundir side and set the sticky bit on each file, but that is difficult to get right, as we will certainly miss some of the files. I don't see any way in systemd to do it for directories (unless the directory is a mountpoint named …). As a workaround, I can't even think of a good place where we could periodically …
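One possible mitigation (a sketch only, not something Podman shipped at the time; the paths are assumptions for a rootless setup whose run/graph directories ended up under /tmp) is a tmpfiles.d drop-in that tells systemd-tmpfiles to skip those trees:

```
# /etc/tmpfiles.d/podman-skip-cleanup.conf -- hypothetical drop-in
# 'x' excludes a path and its contents from age-based cleanup ('X' excludes
# only the path itself). The globs below are assumptions; adjust them to
# wherever your rootless runroot/graphroot actually live.
x /tmp/podman-run-*
x /tmp/containers-user-*
```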
No problem, I'll move the graphroot under /var/lib and just regenerate all the images; in effect it's just like dropping a cache. One thing that can be done from your side is to add a "podman image fsck" command that drops inaccessible images from the json directory, which would un-hose my graphroot.
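No such command exists at this point, but a rough manual approximation (a sketch, assuming rootless storage in the default location with the overlay driver; it only reports problems and changes nothing) is to cross-check the image metadata against the layers on disk:

```
#!/bin/sh
# Requires jq. Paths assume rootless storage with the overlay driver.
storage="$HOME/.local/share/containers/storage"

# For every image, verify that its top layer directory still exists.
jq -r '.[] | "\(.id) \(.layer)"' "$storage/overlay-images/images.json" |
while read -r image layer; do
    if [ ! -d "$storage/overlay/$layer" ]; then
        echo "image $image references missing layer $layer"
    fi
done
```

Any image flagged this way is a candidate for manual removal from images.json (after backing the file up), which is roughly what a hypothetical podman image fsck could automate.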
Is it possible to create a libpod-specific directory in /tmp, move everything there, and set the sticky bit on it?
The sticky bit cannot be used on a directory to make tmpfiles skip cleaning it, because the sticky bit already has a different meaning for directories.
That could be useful to have in general, but for this particular issue I'd still feel uncomfortable that files can suddenly disappear from the storage. If systemd-tmpfiles deletes the right whiteout file, then the super-secret file covered by the image's uppermost layer becomes accessible again...
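(For readers unfamiliar with whiteouts: with the kernel overlay driver they appear as 0:0 character devices inside a layer's diff directory; rootless fuse-overlayfs may represent them differently. As an illustration only, assuming rootful storage in the default location:)

```
# List whiteout entries (character devices) in overlay layer diffs.
find /var/lib/containers/storage/overlay/*/diff -type c -ls 2>/dev/null
```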
@rhatdan should we worry about the security implications of having the storage on such directories, or is it out of scope / a wrong configuration?
It's not just whiteouts: deleting random files in an image can also screw up the installation or apply the wrong configuration (e.g. it could enable password-less sudo). In fact, before noticing this symptom, I had found the top image in the chain to be completely broken because half of /usr had disappeared from the … So I don't think the security implications are serious; it's just a very bad idea to remove random files from the storage graphroot, no matter whether you do it directly or systemd-tmpfiles decides to.
This issue happens at my OpenShift customer's site too. The customer definitely did not change the podman configuration. Unfortunately, there are no reproduction steps.
Same thing just happened to me, after deleting an image using podman version 2.0.2.
Hi. I faced the same issue. Any workaround? I didn't do anything but a graceful reboot of CoreOS. This happened on OpenShift 4.5, which is the latest version.
[core@control-0 ~]$ sudo podman images
podman info --debug
I will keep my environment around for this month, so please leave a comment if there is anything you want to try out.
Hi, I tested, but I can pull a new image without any problem, and it is stored properly.
[root@control-0 ~]# podman pull nginx
[root@control-0 ~]# podman images
[root@control-0 ~]# podman pull quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a99ba49caf5063f24f139b2eb9326122364592c95ae72c7d0112d2cee1d0e30c
Trying to pull quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a99ba49caf5063f24f139b2eb9326122364592c95ae72c7d0112d2cee1d0e30c...
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Several times since I switched to podman from docker, I end up getting an "image not known" error. I'm unable to remove the offending image, and the only solution I've found (besides manually combing through the storage and JSON files to clean it out) is to wipe ~/.local/share/containers/storage and start over.
I typically first notice the error when running podman image:
I can't remove the offending image:
I can't do a prune either:
Where does f2061acd5a510ad39a7ec7923d2a1aa416210ef4c2cd5c28afec92d6c4a677a1 come from? It's not in the list of images or containers, and I can't remove it either.
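A hedged way to hunt for where such an orphaned ID is still referenced (assuming the default rootless storage location and the overlay driver; read-only, it fixes nothing) is to grep the storage metadata directly:

```
storage="$HOME/.local/share/containers/storage"
grep -rl f2061acd5a510ad39a7ec7923d2a1aa416210ef4c2cd5c28afec92d6c4a677a1 \
    "$storage/overlay-images" \
    "$storage/overlay-layers" \
    "$storage/overlay-containers" 2>/dev/null
```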
Steps to reproduce the issue:
Although this has happened several times, I'm not sure what the cause is and so I'm not sure how to reproduce it.
Describe the results you received:
See Description
Describe the results you expected:
See Description
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
Output of podman info --debug:
Package info (e.g. output of rpm -q podman or apt list podman):
Additional environment details (AWS, VirtualBox, physical, etc.):
Physical machine running Ubuntu 18.04.
Thanks!