
[rawhide+podman-next] podman push/pull registry roundtrip changes the image #20611

Open
martinpitt opened this issue Nov 6, 2023 · 32 comments · Fixed by cockpit-project/cockpit-podman#1477
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@martinpitt
Contributor

martinpitt commented Nov 6, 2023

Issue Description

This is about the regression that started in PR #20595 and since then has failed the cockpit-podman rawhide test on every PR (example). I don't see a clean pattern to this yet. #20595 fails all tests (yours and ours) on all OSes, and yet that schema bump seems harmless; other PRs succeed your tests, but still fail ours in the same way, but only on rawhide; and your own tests pass in other PRs (mostly). It is related to the containers-common update from 1-97 to 1-98 (-99 fails as well), as downgrading that package makes it work again.

But something doesn't add up. Perhaps the podman-next COPR gets builds not only from main, but from some PRs, or the PRs do builds without rebasing, or don't build against the latest podman-next, or that containers-common has some indirect effect which I don't understand.

The failing test checks image uploading and downloading to/from a registry. Until yesterday, that ended up as the same image, but now it's a different one.

Steps to reproduce the issue

This is a CLI version of the relevant part of the test:

# update to podman-next:
sudo dnf -y copr enable rhcontainerbot/podman-next >&2; sudo dnf -y update --repo 'copr*'

# run local registry
podman run -d -p 5000:5000 --name registry quay.io/libpod/registry:2.8

# take some container image, note its SHA
podman pull docker.io/busybox
# → docker.io/library/busybox  latest      a416a98b71e2  3 months ago   4.5 MB

# upload it to the registry
podman tag docker.io/library/busybox:latest localhost:5000/my-busybox
podman push localhost:5000/my-busybox
podman rmi localhost:5000/my-busybox

# download it again
podman pull localhost:5000/my-busybox

# compare SHAs
podman images | grep busybox
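As the later discussion makes clear, the ID shown by `podman images` is not the manifest digest. An optional extra step for the reproducer that compares both (a sketch, guarded so it degrades gracefully where podman is unavailable):

```shell
# Optional: compare manifest digests as well as image IDs.
if command -v podman >/dev/null 2>&1; then
  podman images --digests | grep busybox
  # Image ID and manifest digest are distinct identifiers:
  podman inspect --format '{{.Id}} {{.Digest}}' localhost:5000/my-busybox
else
  echo "podman not installed; skipping digest comparison"
fi
```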

Describe the results you received

With podman-next:

podman-4.8.0~dev-1.20231106154052317574.main.2390.886f932b0.fc40.x86_64
containers-common-1-99.fc40.noarch

the downloaded image is different from the original:

docker.io/library/busybox  latest      a416a98b71e2  3 months ago   4.5 MB
localhost:5000/my-busybox  latest      5ed23df91f27  3 months ago   4.49 MB

Describe the results you expected

With current rawhide:

podman-4.7.0-1.fc40.x86_64
containers-common-1-97.fc40.noarch

the downloaded image is identical to the original docker.io one:

localhost:5000/my-busybox  latest      a416a98b71e2  3 months ago   4.5 MB
docker.io/library/busybox  latest      a416a98b71e2  3 months ago   4.5 MB

podman info output

host:
  arch: amd64
  buildahVersion: 1.33.0-dev
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-3.fc39.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 97.43
    systemPercent: 1.22
    userPercent: 1.34
  cpus: 1
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: cloud
    version: "40"
  eventLogger: journald
  freeLocks: 2047
  hostname: fedora-rawhide-127-0-0-2-2201
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.7.0-0.rc0.20231031git5a6a09e97199.2.fc40.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 405979136
  memTotal: 1135865856
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.8.0-1.20231103152128612668.main.26.g0b97b25.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.9.0-dev
    package: netavark-1.8.0-1.20231103122905869245.main.24.gb7e144d.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.9.0-dev
  ociRuntime:
    name: crun
    package: crun-1.11.1-1.20231106135232645587.main.7.g2e35a99.fc40.x86_64
    path: /usr/bin/crun
    version: |-
      crun version UNKNOWN
      commit: 3af84aa5c314ce41d579d2cfa0a0ccc0059ca8aa
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20231004.gf851084-1.fc40.x86_64
    version: |
      pasta 0^20231004.gf851084-1.fc40.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-1.fc40.x86_64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 1131147264
  swapTotal: 1135603712
  uptime: 1h 19m 59.00s (Approximately 0.04 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  localhost:5000:
    Blocked: false
    Insecure: true
    Location: localhost:5000
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: localhost:5000
    PullFromMirror: ""
  localhost:6000:
    Blocked: false
    Insecure: true
    Location: localhost:6000
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: localhost:6000
    PullFromMirror: ""
  search:
  - localhost:5000
  - localhost:6000
store:
  configFile: /home/admin/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/admin/.local/share/containers/storage
  graphRootAllocated: 12798898176
  graphRootUsed: 2198786048
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 5
  runRoot: /tmp/containers-user-1000/containers
  transientStore: false
  volumePath: /home/admin/.local/share/containers/storage/volumes
version:
  APIVersion: 4.8.0-dev-886f932b0
  Built: 1699285434
  BuiltTime: Mon Nov  6 15:43:54 2023
  GitCommit: ""
  GoVersion: go1.21.3
  Os: linux
  OsArch: linux/amd64
  Version: 4.8.0-dev-886f932b0

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Standard Fedora rawhide cloud image

Additional information

Always happens. Running sudo dnf downgrade containers-common twice to downgrade to 4:1-97.fc40 goes back to the previous working state.

@martinpitt martinpitt added the kind/bug Categorizes issue or PR as related to a bug. label Nov 6, 2023
@rhatdan
Member

rhatdan commented Nov 6, 2023

@giuseppe PTAL

@rhatdan
Member

rhatdan commented Nov 6, 2023

@vrothberg @mtrmac PTAL

An image created by pulling a gzip tarball is different from one pulled as zstd:chunked; is this expected?

@martinpitt
Contributor Author

Let's ignore that particular failure in the test runs for the time being. It just creates a firehose of failures and notifications, which is annoying and hides potential further regressions.

Is that zstd change expected to modify images on a push/pull roundtrip? It feels rather unexpected to me; so far the image hash has been a good and reliable indicator of integrity and a precise name. If that is intended, I can drop that assumption from c-podman's tests. But as I can't technically judge this, I'll wait for your investigation. Thanks!

@martinpitt martinpitt changed the title [podman-next] podman push/pull registry roundtrip changes the image [rawhide+podman-next] podman push/pull registry roundtrip changes the image Nov 7, 2023
@martinpitt
Contributor Author

This also started to fail in rawhide proper, so it's not limited to the podman-next repo any more. containers-common 1-98 landed in rawhide.

jelly pushed a commit to cockpit-project/bots that referenced this issue Nov 7, 2023
@vrothberg
Member

vrothberg commented Nov 7, 2023

Note that the ID is not the digest; in the examples above, we are looking at the ID. In general, there is no guarantee that digests stay the same when images are pushed and pulled around. For details, please refer to #15969 (comment).

@martinpitt, I think the cockpit test could fail at any point in the future independent of zstd or gzip.
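A toy illustration of why recompression moves digests (generic shell, no podman involved, assuming `gzip` and `sha256sum` are available): layer digests are computed over the compressed bytes, so identical content compressed differently hashes differently, while a hash over the uncompressed bytes stays stable.

```shell
payload='identical layer content'

# sha256 of the gzip-compressed bytes (-n omits timestamp/name for determinism):
compressed=$(printf '%s' "$payload" | gzip -n | sha256sum | cut -d' ' -f1)
# sha256 of the raw, uncompressed bytes:
plain=$(printf '%s' "$payload" | sha256sum | cut -d' ' -f1)

echo "compressed:   $compressed"
echo "uncompressed: $plain"
# The two differ even though the underlying content is identical, which is
# why switching compression (gzip -> zstd) changes layer/manifest digests.
[ "$compressed" != "$plain" ] && echo "digests differ"
```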

martinpitt added a commit to martinpitt/cockpit-podman that referenced this issue Nov 7, 2023
Until very recently, a cycle of `podman push`/`pull` preserved an
image's ID. Starting with containers-common 1-98 this is not the case
any more, as that changed the on-disk compression format. Quoting podman
devs: "The digests validate a specific image _representation_, not some
abstract "sameness" of an image."

Instead, remove the original busybox image for the user as well (that
already happened for the system user), and more explicitly check deletion
of an image with multiple tags.

Fixes containers/podman#20611

Obsoletes cockpit-project/bots#5515
@martinpitt
Contributor Author

Ack, thanks @vrothberg ! I sent cockpit-project/cockpit-podman#1477 to drop that assumption then, I just wanted to get a confirmation that this is expected behaviour.

martinpitt added a commit to cockpit-project/cockpit-podman that referenced this issue Nov 7, 2023
@mtrmac
Collaborator

mtrmac commented Nov 7, 2023

Uh, this report looks confusing.

Digests are expected to change (they can change at any push, and they will change on a push with a different compression format). Image IDs are not expected to change (… yet; we might actually do that on a zstd pull, somewhere around containers/image#1980 ).

If the situation is that the image ID (and a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824 is an image ID, not a manifest digest; digests of that image are 3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 at the top level, and for the amd64/linux platform instance) changes, that is indeed surprising and worth investigation.

Re-opening to either try to reproduce this myself, or to get a positive confirmation that this is actually about digests.


In the future, in “steps to reproduce”: … “the downloaded image is different from the original:”, please actually include the command writing that output.

@mtrmac mtrmac reopened this Nov 7, 2023
@martinpitt
Contributor Author

martinpitt commented Nov 7, 2023

@mtrmac Note that until yesterday I didn't even know what a "manifest digest" was. The "image ID" is what's visible with podman pull and podman images, and that's what I considered the unique identifier of a particular image build, as opposed to the tag (where the thing it points to can move). I.e. my mental model was that an "image ID" is similar to a git commit SHA, and an image tag similar to a git tag. Apparently this is not correct, which is why I asked above whether this ID was supposed to change with a roundtrip to a registry.

I didn't compare digests (I wouldn't even know how to display them without looking for that in the manpages or googling). It's very likely that the digests didn't change, but due to the gzip → zstd recompression the image ID changed. But short of changing the whole concept (i.e. making the image ID equal to the manifest digest or whatnot) this may be unavoidable. (Again: this is my gut feeling, I have no technical qualification here).

I did write the command that wrote the output:

# compare SHAs
podman images | grep busybox

but the form splits the "what did you do" from "what is supposed to happen". That's maybe a bit confusing in this case.

Thanks for investigating!

@giuseppe
Member

giuseppe commented Nov 7, 2023

Uh, this report looks confusing.

Digests are expected to change (they can change at any push, and they will change on a push with a different compression format). Image IDs are not expected to change (… yet; we might actually do that on a zstd pull, somewhere around containers/image#1980 ).

If the situation is that the image ID (and a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824 is an image ID, not a manifest digest; digests of that image are 3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 at the top level, and for the amd64/linux platform instance) changes, that is indeed surprising and worth investigation.

Re-opening to either try to reproduce this myself, or to get a positive confirmation that this is actually about digests.

isn't it expected that all the digests change when we change compression? On fedora:rawhide we are defaulting to compression_format = "zstd:chunked" so that affects all the layers. Why wouldn't the image ID be affected?

@mtrmac
Collaborator

mtrmac commented Nov 7, 2023

Until something like #1980 chooses different layer IDs for layers pulled the traditional way vs. layers pulled based on the TOC, OCI image ID = config digest. So, just pushing/pulling with zstd is not currently expected to change the image ID.
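The "image ID = config digest" relation can be sketched generically: the ID is the sha256 over the raw config blob, so it only moves when the config itself changes (the DiffIDs inside the config cover uncompressed layers, which is why recompression alone historically left the ID untouched). A toy computation, not using a real image:

```shell
# A minimal stand-in for an image config blob (real configs carry
# rootfs.diff_ids, history, etc.):
config='{"architecture":"amd64","os":"linux"}'

# The image ID is the sha256 over these exact bytes:
id=$(printf '%s' "$config" | sha256sum | cut -d' ' -f1)
echo "image ID would be: sha256:$id"

# With a real image, the same value is visible via:
#   podman inspect --format '{{.Id}}' IMAGE
```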

@giuseppe
Member

giuseppe commented Nov 7, 2023

The original busybox image is in Docker (schema2) format, while we convert to OCI with zstd:chunked. I get the same result with just podman push -f oci localhost:5000/my-busybox, without requiring zstd:chunked compression

@mtrmac
Collaborator

mtrmac commented Nov 7, 2023

Oh. Yes, that would explain it.

@giuseppe
Member

giuseppe commented Nov 8, 2023

can we close the issue or is there anything more you'd like to check?

@edsantiago
Member

Let's not close this please. The recent zstd disaster on rawhide is triggering a bug that looks related. I'm working to find a short reproducer.

@edsantiago
Member

edsantiago commented Nov 8, 2023

Here's the shortest reproducer I can manage. There may be better ones. Assumes: checked-out and built podman@main on current rawhide.

# iii=quay.io/libpod/testimage:20221018
# rrr=quay.io/libpod/registry:2.8
# bin/podman pull $rrr $iii
# mkdir /tmp/pmpm
# htpasswd -Bbn uuuu pppp >/tmp/pmpm/htpasswd
# bin/podman run -d -p 127.0.0.1:5000:5000 --name reg -v /tmp/pmpm:/auth:Z -e REGISTRY_AUTH=htpasswd -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd $rrr
95dfcf3be45a4441b0f2cef3e6531607a92b1e75ed687717cc62c86f6a782356
# bin/podman login --tls-verify=false localhost:5000
Username: uuuu
Password:   pppp
Login Succeeded!
# bin/podman push -q --tls-verify=false $iii localhost:5000/foo:bar
# rm -f /tmp/foo.tar; bin/podman image save -q -o /tmp/foo.tar $iii

So far, so good. Now the problem comes when pulling back that pushed image:

# bin/podman pull --tls-verify=false  localhost:5000/foo:bar
Trying to pull localhost:5000/foo:bar...
Getting image source signatures
Copying blob 4ffca960952d skipped: already exists  
Copying blob 16254220078a skipped: already exists  
Copying config f5a99120db done   | 
Writing manifest to image destination
f5a99120db6452661930a1db3bf7390eec9b963f5f62c068fa32dc1d550afad3

# rm -f /tmp/foo.tar; bin/podman image save -q -o /tmp/foo.tar $iii
Error: creating an updated image manifest: Error during manifest conversion: "application/vnd.oci.image.layer.v1.tar+zstd": zstd compression is not supported for docker images

The damage is in /var/lib/containers:

# grep -Rl zstd /var/lib/containers/storage/ 2>/dev/null
/var/lib/containers/storage/overlay-images/f5a99120db6452661930a1db3bf7390eec9b963f5f62c068fa32dc1d550afad3/manifest
/var/lib/containers/storage/overlay-images/f5a99120db6452661930a1db3bf7390eec9b963f5f62c068fa32dc1d550afad3/=bWFuaWZlc3Qtc2hhMjU2OmFkYWIyMmVkY2Y3NGY3MThiNDg2NzIwZjMxMjc5YzhjZjdmNjAwM2MwNWRiYTc1YmQzYWY3ZDdiMTRiMmFhMjU=
^C

This is not a harmless bug. This is causing podman system tests to fail on rawhide, because one test does a push/pull, and all subsequent saves fail.

@vrothberg
Member

The problem is that podman save defaults to --format=docker-archive. Since zstd is an OCI-only feature, conversion to Docker will fail.

In this specific case, changing the test to use --format=oci-archive should fix it. Not sure whether other tests (e2e?) will fail later on.
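A quick way to tell the two archive flavors apart is that OCI archives carry an `oci-layout` file at the archive root. A sketch with a synthetic tarball (with a real image you would first run something like `podman save --format oci-archive -o foo.tar IMAGE`):

```shell
dir=$(mktemp -d)
# Fake the top-level entries of an OCI archive:
touch "$dir/oci-layout" "$dir/index.json"
tar -C "$dir" -cf "$dir/foo.tar" oci-layout index.json

# Classic docker-archive tarballs lack oci-layout; OCI archives have it:
if tar -tf "$dir/foo.tar" | grep -qx 'oci-layout'; then
  echo "looks like an oci-archive"
else
  echo "looks like a docker-archive"
fi
rm -r "$dir"
```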

@vrothberg
Member

I think that somebody needs to sit down, change the default to zstd, and make sure that podman/buildah CI passes. Changing rawhide first seems expensive to debug.

@edsantiago
Member

Thanks, @vrothberg. IIUC, a small correction to your suggestion:

changing the test to use --format=oci-archive should fix it.

to:

changing all podman tests that run podman-save to use --format=oci-archive should fix it

Because as the above clearly shows, there are times when podman save will work, and times when it will not, and there is no sane way to predict which will be which.

Better solution: change podman save to default to something that will work with zstd.

And, full agreement: testing and fixing and designing and changing should be done BEFORE changing this default on rawhide.

@vrothberg
Member

Thanks for the correction, @edsantiago!

I think this brings up a good point. Which parts of Podman should "implicitly" (?) change as well when the default compression is set to zstd? I think podman save is a good candidate.

So, instead of changing the tests, it may be worth changing the behavior of Podman.

@mtrmac
Collaborator

mtrmac commented Nov 8, 2023

(As a matter of pedantic tidiness, note that this bug seems to have shifted from “pull+push changes image ID” to “other kinds of breakage due to the default compression change”. It might have been cleaner to track the two (or more?) issues separately. If this is going to turn into a tracking bug, shall we retitle it?)


The problem is that podman save defaults to --format=**docker-archive**. Since zstd is an OCI-only feature, conversion to Docker will fail.

But docker-archive does not compress at all, so whether the image was originally zstd (and even whether it was OCI) should in principle not matter.

The conversion failing is just a c/image bug.


In this case I think we got lucky on timing: with no coordination at all, I think I fixed this in a recent bug week in containers/image#2151 .

If so, it should not reproduce with #20621 (which drags in more changes) — or, cleaner, just with a dependency update in 7ca3c3d .

@edsantiago
Member

this bug seems to have shifted

That is probably my fault. I was trying to show that (1) yes, push/pull changes an existing loaded image, and (2) those changes are very bad. Discussion of (2) should probably continue elsewhere, I just want it 100% clear that push/pull is now destructive.

@mtrmac
Collaborator

mtrmac commented Nov 8, 2023

I do agree that “pull+push+pull changes image ID” is pretty likely to break assumptions in quite a few automated systems. We always told users that manifest digests can change on push, and I think it’s very likely we either said or suggested that image IDs don’t change and are a better option. And now image IDs change as well.

I don’t know that I would recommend any kind of bottom-up image matching (vs. a top-down “this is the set of applications, execute them”) but there certainly is a class of users that wants to do matching and deduplication and sanity-checks, and those users will have hard-coded assumptions about what hash value changes when.

@giuseppe
Member

giuseppe commented Nov 8, 2023

I think that somebody needs to sit down and change default to zstd and make sure that podman/buildah CI passes. Changing rawhide first seems expensive to debug.

just opened: #20633

@giuseppe
Member

giuseppe commented Nov 8, 2023

the "save" test failures are probably fixed with containers/image#1980

I'll vendor that in as well for my test PR

@mtrmac
Collaborator

mtrmac commented Nov 9, 2023

the "save" test failures are probably fixed with containers/image#1980

No, this happens during manifest format conversion. At the moment the manifest format conversion is attempted, layers were already successfully copied. This should be containers/image#2151 .

@mtrmac
Collaborator

mtrmac commented Nov 10, 2023

The original busybox image is in Docker (schema2) format, while we convert to OCI with zstd:chunked. I get the same result with just podman push -f oci localhost:5000/my-busybox, without requiring zstd:chunked compression

Yes, confirming the cause is that the push with zstd:chunked triggered a conversion to OCI. So, when zstd:chunked becomes the default (again), it will be expected that a pull of a schema2 image + push + pull will change the image ID. (But a pull of a zstd:chunked image + push + pull should preserve the image ID right now; containers/image#1980 might change that.)

We actually got lucky: the code tries to convert to schema1 first, and in my testing, that conversion incorrectly succeeds, and only because the registry refuses the upload of schema1 by default: containers/image#2181 .

I think that bug is a blocker for enabling zstd:chunked by default.

@mtrmac
Collaborator

mtrmac commented Nov 10, 2023

bin/podman image save -q -o /tmp/foo.tar $iii
Error: creating an updated image manifest: Error during manifest conversion: "application/vnd.oci.image.layer.v1.tar+zstd": zstd compression is not supported for docker images

containers/image#2151 is necessary but not sufficient: containers/image#2182 .

(The latter bug also affects podman save --uncompressed --format oci-dir: the data is uncompressed but the manifest continues to say tar+zstd.)
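To check what a manifest claims about layer compression (e.g. after `podman save --uncompressed --format oci-dir`), one can grep the layer media types out of the manifest JSON. Sketched here on a synthetic manifest fragment; with a real oci-dir you would read the manifest blob under `blobs/sha256/` instead:

```shell
# Synthetic OCI manifest fragment for illustration:
manifest='{"layers":[{"mediaType":"application/vnd.oci.image.layer.v1.tar+zstd","size":123}]}'

# List the claimed layer media types:
printf '%s' "$manifest" | grep -o '"mediaType":"[^"]*"'
```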

@mtrmac
Collaborator

mtrmac commented Nov 10, 2023

So, to summarize:

@mtrmac
Collaborator

mtrmac commented Nov 13, 2023

I have started containers/image#2189 , trying to list the various to-do items at a fairly detailed granularity.

@Romain-Geissler-1A
Contributor

Hi,

Jumping into this "old" thread, as we have just hit the "podman save" issue internally following the move to zstd compression in our company base image.

I see in the release notes for Docker 25, released a couple of days ago (https://docs.docker.com/engine/release-notes/25.0/), that moby/moby#44598 was merged (and judging from the GitHub updates, it seems it broke some Docker users). So Docker now generates OCI-compatible images with "docker save". Should we also update "podman save" to default to the "oci-archive" rather than the "docker-archive" format?

@Romain-Geissler-1A
Contributor

Romain-Geissler-1A commented Jan 24, 2024

Concrete example with Docker 25; it now generates this kind of tar by default:

rgeissler@ncearmdev002:/tmp> docker save quay.io/fedora/fedora:latest | tar -t
blobs/
blobs/sha256/
blobs/sha256/00ebae014d06f969091b23d46dbe605ccce6729277968bc4e5c60ecfc7302a77
blobs/sha256/5663d2b0f39cbf19384039014059b724d3e69c4c6585956f862c61164e77495f
blobs/sha256/830f03be397f9f663501e8fb3485a3e9ae88ece897e95ffb060e53ce47301153
blobs/sha256/e755c02d8377dc92b46e528d59cd7c246fcd19af47318f65b690ba155cf90b0a
index.json
manifest.json
oci-layout
repositories
rgeissler@ncearmdev002:/tmp> docker --version
Docker version 25.0.1, build 29cf629

@mtrmac
Collaborator

mtrmac commented Jan 24, 2024

@Romain-Geissler-1A Please file new issues for things that are clearly different from the subject as stated. No-one is going to look for a podman save discussion here.
