
image showing different digests when pulling and pushing #15969

Closed
amokkara opened this issue Sep 28, 2022 · 13 comments
Labels
kind/bug · locked - please file new issue/PR

Comments

@amokkara

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description
The image shows one digest when I pull it, and a different digest when I later push it to a registry.

Steps to reproduce the issue:

  1. Pull any image:
     podman pull busybox:musl

  2. Look at the digest of the pulled image:
     ~/aditya-poc/acc/images# podman image list --digests
     REPOSITORY                 TAG   DIGEST                                                                   IMAGE ID      CREATED      SIZE
     docker.io/library/busybox  musl  sha256:49cbafcd38052e3dd9c92203fb9abcdb7c2f08b4cb5c9dc16ec964be6164619d  ba49bb78d342  2 weeks ago  1.62 MB

  3. Push the image to the registry:
     podman push busybox:musl /test/podman/busybox:musl1

  4. Query the image digest from the registry using the v2 API:
     GET https:///v2/test/podman/busybox/manifests/musl1
     Digest in response: sha256:0236a2d4606f27aa8019deaa6ffb84a9f9cc55fd586b3069deb066f513573322
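For context on why the two digests differ: the digest a registry reports for a manifest request is simply the SHA-256 of the exact manifest bytes it stores, so any change to those bytes produces a new digest. A minimal sketch of that computation, using a hypothetical manifest snippet rather than a real busybox manifest:

```python
import hashlib
import json

# Hypothetical manifest content for illustration; real manifests also list
# a config blob, media types, and sizes.
manifest = {"schemaVersion": 2, "layers": [{"digest": "sha256:abc"}]}

# The registry digest is the SHA-256 of the exact manifest bytes.
compact = json.dumps(manifest, separators=(",", ":")).encode()
digest_a = "sha256:" + hashlib.sha256(compact).hexdigest()

# The same logical manifest serialized differently is different bytes,
# so it gets a different digest.
pretty = json.dumps(manifest, indent=2).encode()
digest_b = "sha256:" + hashlib.sha256(pretty).hexdigest()

print(digest_a != digest_b)  # True: byte-identical bytes are required for a stable digest
```

If the push path serializes or recompresses the image differently than the pulled copy, the bytes change and so does the digest, even though the image contents are logically the same.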

Describe the results you received:
A different digest is reported after the image is pushed to the registry with podman push.

Describe the results you expected:
The digest should remain unchanged after pushing to the registry.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:      3.4.2
API Version:  3.4.2
Go Version:   go1.15.2
Built:        Thu Jan  1 05:30:00 1970
OS/Arch:      linux/amd64

Output of podman info:

host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.1.2, commit: '
  cpus: 4
  distribution:
    codename: focal
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: scspo2673281001
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.4.0-37-generic
  linkmode: dynamic
  logDriver: journald
  memFree: 606396416
  memTotal: 8348454912
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version UNKNOWN
      commit: ea1fe3938eefa14eb707f1d22adff4db670645d6
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: true
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.1.8
      commit: unknown
      libslirp: 4.3.1-git
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.4.3
  swapFree: 0
  swapTotal: 0
  uptime: 456h 36m 43.58s (Approximately 19.00 days)
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  hub.docker.netapp.com:
    Blocked: false
    Insecure: true
    Location: hub.docker.netapp.com
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: hub.docker.netapp.com
  search:
  - hub.docker.netapp.com
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 1
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.4.2
  Built: 0
  BuiltTime: Thu Jan  1 05:30:00 1970
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 3.4.2

Package info (e.g. output of rpm -q podman or apt list podman):

apt list podman:
Listing... Done
podman/unknown,now 100:3.4.2-5 amd64 [installed]
podman/unknown 100:3.4.2-5 arm64
podman/unknown 100:3.4.2-5 armhf
podman/unknown 100:3.4.2-5 s390x

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

No

Additional environment details (AWS, VirtualBox, physical, etc.):
ubuntu 20.04

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Sep 28, 2022
@flouthoc
Collaborator

Hi @amokkara, I think this happens because:

  1. The format is switched from Docker to OCI when the push happens.
  2. If the compression format is unknown, c/image will re-compress the blobs, which changes the digests.

This should not happen when the compression format is known, so if you pull the pushed image from the registry and push it again, the digest should stay the same.

But @mtrmac can confirm my points better.

@amokkara
Author

amokkara commented Sep 28, 2022

Hi @flouthoc
I also tried another scenario.

I built a sample image using podman - podman build -f Dockerfile_nginx -t nginx .

The digest shown is sha256:1346ba21d6a15476b0aca96c03929bf5ead3736c062a54e063c364cbb6771020

Saved the podman image to a tar file - podman save > nginx.tar nginx:latest

Removed the existing podman image.

Loaded the image from the saved tar file - podman load -i nginx.tar

The digest shown after loading the image is different - sha256:e7068929ce3a484b39191c4b9353107f4634fad09153efe290e6e6ddc383115d

I am not pushing the image at all here, just saving to tar and loading again. Even this seems to alter the digest. Is there a reason for this behavior? If so, how can I make it consistent?

Dockerfile:

FROM debian:latest

RUN apt-get update && apt-get install --no-install-recommends -y nginx; \
 echo "daemon off;" >> /etc/nginx/nginx.conf

EXPOSE 80

CMD ["/usr/sbin/nginx"]

@flouthoc
Collaborator

I think the digest changes across save and load because the timestamp changes while producing the tar. You can use skopeo copy if you want to push to a registry without any modifications.
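The timestamp point can be demonstrated without podman at all: two tar archives containing identical file contents but different mtimes are different byte streams, so they hash to different digests. A minimal sketch (the file name and contents are made up for illustration):

```python
import hashlib
import io
import tarfile

def layer_tar_digest(mtime: int) -> str:
    """Build an in-memory tar with one fixed file and return its SHA-256."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        data = b"identical file contents"
        info = tarfile.TarInfo(name="etc/example.conf")
        info.size = len(data)
        info.mtime = mtime  # the only thing we vary
        tar.addfile(info, io.BytesIO(data))
    return hashlib.sha256(buf.getvalue()).hexdigest()

# Same contents, different timestamps -> different archive bytes -> different digests.
print(layer_tar_digest(0) != layer_tar_digest(1_664_000_000))  # True
```

Since the tar header records the mtime, anything that regenerates the archive with fresh timestamps changes its digest even when every file inside is unchanged.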

@amokkara
Author

I'm using the digest to verify that the image is not tampered with during or after pushing to the registry.
If the digest keeps changing due to timestamps, it wouldn't serve that purpose, right?

@flouthoc
Collaborator

I think signing container images is the correct way to verify the integrity of an image and to make sure it has not been tampered with. There has also been recent work on cosign/sigstore by @mtrmac that can help here. I'd wait for @mtrmac's reply on this one.

@amokkara
Author

@mtrmac could you please provide some insight into this behavior? Thanks.

@amokkara
Author

I tried with the latest version of Podman, 4.3.0-dev, and see the same behavior: digests differ after saving to and loading from a tar file, and after pushing to a repo.

@amokkara
Author

@flouthoc there has been no response from @mtrmac; could you please tag anyone else who can shed some light on this? Thank you.

@mtrmac
Collaborator

mtrmac commented Sep 29, 2022

This is fundamentally how it works. The digests validate a specific image representation, not some abstract “sameness” of an image.

If you want to preserve the digest, you must preserve the original image representation. That means no pull+push round-trips (which, in general, recompress, and make different compression choices and create different byte streams), no save+load (which don’t preserve the manifest, and other parts of the image, at all).

Use skopeo copy, probably skopeo copy --preserve-digests, to copy images, not pull+push. (Alternatively, for a signed image, skopeo copy would automatically turn on the --preserve-digests mode.)

@amokkara
Author

@mtrmac thanks for the detailed explanation! Would you by any chance know how Docker does this? No operation on an image seems to change its digest when using Docker.

@mtrmac
Collaborator

mtrmac commented Sep 29, 2022

That’s just not true; docker pull + docker push will, eventually, change the digest.

Anyone can get lucky, at a specific time, when the specific compression implementation used during push is exactly the same as the one creating the pulled version, and makes exactly the same compression choices. And that will break when that implementation is updated or chooses to make different choices.
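The point about compression choices is easy to reproduce with plain gzip: the same payload compressed at different levels decompresses to identical bytes, yet the compressed streams (and therefore their digests) differ. A sketch using Python's gzip module, with mtime pinned to 0 so only the compression level varies:

```python
import gzip
import hashlib

payload = b"layer contents " * 4096

# Two perfectly valid gzip streams of the same data, produced with
# different compression choices (mtime=0 keeps the header deterministic).
fast = gzip.compress(payload, compresslevel=1, mtime=0)
best = gzip.compress(payload, compresslevel=9, mtime=0)

# Both round-trip to the identical original bytes...
assert gzip.decompress(fast) == payload
assert gzip.decompress(best) == payload

# ...but the compressed byte streams differ, so their digests differ.
print(hashlib.sha256(fast).hexdigest() != hashlib.sha256(best).hexdigest())  # True
```

A layer digest covers the compressed blob, so any change in the compressor, its version, or its settings between pull and push changes the digest even though the layer's contents are untouched.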

@amokkara
Copy link
Author

We have a process where we save an image to a tar file, load it from the tar file on another machine, and push it to a registry of our choice.
Then we verify that the image in the registry has the same digest as the one before it was saved to the tar file.
We can't get lucky every time, especially in an automated process. Maybe Docker has a fixed compression process? I need to dig further.

@mtrmac
Collaborator

mtrmac commented Sep 29, 2022

This is not a theory; I have seen digests change on a Docker upgrade.

You have a recommendation on what to do instead to get reliable results — as well as enough general pointers about the structure of the problem to decide whether the risk of unexpected breakage is worth it to you.

@mtrmac mtrmac closed this as not planned Won't fix, can't repro, duplicate, stale Sep 29, 2022
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 13, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 13, 2023