
looking up local images should consider search registries #6381

Closed
juhp opened this issue May 26, 2020 · 11 comments · Fixed by #7823
Assignees: vrothberg
Labels:
- In Progress: This issue is actively being worked by the assignee; please do not work on this at this time.
- kind/bug: Categorizes issue or PR as related to a bug.
- locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@juhp (Contributor) commented May 26, 2020

/kind bug

Description

podman run <imagename> chose an older local image.

Steps to reproduce the issue:

  1. podman pull docker.io/fedora:33 && podman pull registry.fedoraproject.org/fedora:33
  2. podman run -it --rm fedora:33
  3. check which image is chosen

Describe the results you received:
For me, the older image was run (in my case, my docker.io/fedora:33 was several months old).

Describe the results you expected:
The newer image to be used.
Or is this a security feature?

Output of podman version:

Version:            1.9.2
RemoteAPI Version:  1
Go Version:         go1.13.10
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  gitCommit: ""
  goVersion: go1.13.10
  podmanVersion: 1.9.2
host:
  arch: amd64
  buildahVersion: 1.14.8
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.16-2.fc31.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.16, commit: aec991fec16dc45935de184f2ea06a6ffca200a0'
  cpus: 8
  distribution:
    distribution: fedora
    version: "31"
  eventLogger: file
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.6.13-200.fc31.x86_64
  memFree: 8420085760
  memTotal: 16416141312
  ociRuntime:
    name: crun
    package: crun-0.13-2.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.13
      commit: e79e4de4ac16da0ce48777afb72c6241de870525
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.0.1-1.fc31.x86_64
    version: |-
      slirp4netns version 1.0.1
      commit: 6a7b16babc95b6a3056b33fb45b74a6f62262dd4
      libslirp: 4.1.0
  swapFree: 8296329216
  swapTotal: 8296329216
  uptime: 22m 41.8s
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /var/home/petersen/.config/containers/storage.conf
  containerStore:
    number: 8
    paused: 0
    running: 1
    stopped: 7
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.0.0-1.fc31.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 1.0.0
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  graphRoot: /var/home/petersen/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 39
  runRoot: /run/user/1000
  volumePath: /var/home/petersen/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.9.2-1.fc31.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):

Physical: Fedora 31 Silverblue

@openshift-ci-robot added the kind/bug label May 26, 2020
@rhatdan (Member) commented May 26, 2020

This is based on the search path in your registries.conf. Since docker.io comes first, it will be chosen first. If you want the Fedora registry images to be used first, you need to either fully qualify the name with the Fedora registry or switch the search order.

@rhatdan (Member) commented May 26, 2020

registries.conf in Fedora by default has this search path:
unqualified-search-registries = ['registry.fedoraproject.org', 'registry.access.redhat.com', 'registry.centos.org', 'docker.io']
Which means that docker.io should be chosen last.

@rhatdan closed this as completed May 26, 2020
@juhp (Contributor, Author) commented May 26, 2020

> registries.conf in Fedora by default has this search path:
> unqualified-search-registries = ['registry.fedoraproject.org', 'registry.access.redhat.com', 'registry.centos.org', 'docker.io']
> Which means that docker.io should be chosen last.

It wasn't :-)

I think you mean for remote "resolution", whereas I am talking about local images.

I am wondering if it depends on the order of the original image pulls?
Or is it doing lexical ordering?

@juhp (Contributor, Author) commented May 27, 2020

To clarify my original report/question:

$ podman images fedora:33
REPOSITORY                          TAG   IMAGE ID       CREATED        SIZE
registry.fedoraproject.org/fedora   33    a23de53c4305   39 hours ago   190 MB
docker.io/library/fedora            33    1910ec046b4b   3 months ago   207 MB
$ podman run -it --rm fedora:33 rpm -q rpm
rpm-4.15.1-2.fc32.1.x86_64
$ podman run -it --rm registry.fedoraproject.org/fedora:33 rpm -q rpm
rpm-4.15.90-0.git14971.12.fc33.x86_64

So my question was: why does podman choose the older (or first-installed) image in my case?

@juhp changed the title from "does podman run imagename choose older image?" to "does podman run imagename choose older local image?" May 27, 2020
@vrothberg reopened this May 27, 2020
@vrothberg (Member):

I can reproduce with:

libpod (master) $ ./bin/podman inspect --format '{{.RepoTags}}' fedora:33
[docker.io/library/fedora:33]

Note that the described order of unqualified-search-registries only applies when pulling (and searching), not to other operations.

@vrothberg (Member):

I reopened this to find consensus on how Podman should behave.

@mheon (Member) commented May 27, 2020

This is one of the things I really wanted to solve in the refactor of the image package to be shared with Buildah & CRI-O: consistent resolution of short names is a significant issue in our tools.

@rhatdan (Member) commented Jun 9, 2020

@ParkerVR @ryanchpowell This is a case where the short-name table would help out, so that a short name always resolves to the same image for the user.
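
For context, here is a hedged sketch of how such a short-name table could pin resolution; resolveShortName is invented for illustration, though a mechanism along these lines later shipped as the [aliases] table in registries.conf:

package main

import "fmt"

// resolveShortName sketches the short-name table idea: a pinned alias always
// resolves to the same fully-qualified repository; only unpinned names fall
// back to the search-registry list. Hypothetical helper, not podman code.
func resolveShortName(repo string, aliases map[string]string, search []string) []string {
	// Tag/digest handling omitted; "repo" is the name without a tag.
	if full, ok := aliases[repo]; ok {
		return []string{full} // always the same image for this short name
	}
	candidates := make([]string, 0, len(search))
	for _, reg := range search {
		candidates = append(candidates, reg+"/"+repo)
	}
	return candidates
}

func main() {
	aliases := map[string]string{"fedora": "registry.fedoraproject.org/fedora"}
	search := []string{"registry.fedoraproject.org", "docker.io"}
	fmt.Println(resolveShortName("fedora", aliases, search)) // pinned: one candidate
	fmt.Println(resolveShortName("alpine", aliases, search)) // unpinned: search order
}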

@github-actions (bot):

A friendly reminder that this issue had no activity for 30 days.

@vrothberg (Member):

I think that we should tackle it now and teach NewFromLocal(name string) to iterate the search registries for short names. That should even out the semantics across commands.

@mtrmac, I can tackle it but I want your ACK on the idea.
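
A minimal sketch of the proposed lookup order; isQualified and candidateNames are hypothetical names invented for illustration (the actual fix landed in #7823):

package main

import (
	"fmt"
	"strings"
)

// isQualified reports whether the first path component names a registry
// (contains "." or ":", or is "localhost"), i.e. the name is not short.
func isQualified(name string) bool {
	i := strings.IndexRune(name, '/')
	if i < 0 {
		return false // no registry component at all
	}
	host := name[:i]
	return strings.ContainsAny(host, ".:") || host == "localhost"
}

// candidateNames returns the fully-qualified names to try, in order, when
// looking up a local image: "localhost/" first, then each
// unqualified-search registry from registries.conf.
func candidateNames(name string, searchRegistries []string) []string {
	if isQualified(name) {
		return []string{name}
	}
	candidates := []string{"localhost/" + name}
	for _, reg := range searchRegistries {
		// Real code would also normalize docker.io names to
		// docker.io/library/<name>; omitted here.
		candidates = append(candidates, reg+"/"+name)
	}
	return candidates
}

func main() {
	search := []string{
		"registry.fedoraproject.org",
		"registry.access.redhat.com",
		"registry.centos.org",
		"docker.io",
	}
	for _, c := range candidateNames("fedora:33", search) {
		fmt.Println(c)
	}
}

For the reporter's configuration this would try localhost/fedora:33 first and docker.io/fedora:33 last, so the registry.fedoraproject.org image would win, matching the documented search order.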

@vrothberg self-assigned this Sep 10, 2020
@vrothberg changed the title from "does podman run imagename choose older local image?" to "looking up local images should consider search registries" Sep 10, 2020
@mtrmac (Collaborator) commented Sep 11, 2020

Yes, I would love for NewFromLocal and pullGoalFromPossiblyUnqualifiedName to be consistent — and pullRefPair.dstRef values already are storage references, so it should not be very hard to do.

OTOH, read findImageInRepotags. IIRC I was last involved with that code only to preserve the existing behavior via suspiciousRefNameTagValuesForSearch, which also indicates my lack of understanding of the purpose of that code.

I would love for that to be replaced with something else, but if you want an ACK that a specific kind of replacement makes sense, I'm not the person to ask because I can only guess at the goals of that search code; that might have to be tracked down through the history, maybe all the way to atomic.

Whatever the goals of the search code, I have a strong suspicion that removing that search is going to break some users that incorrectly relied on those very loose matching semantics. That may well be worth it, but I want to make sure it’s a conscious decision.
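
To make that concern concrete, here is a hedged illustration of loose versus strict matching; looseMatch is invented and only approximates the semantics under discussion, not the actual findImageInRepotags code:

package main

import (
	"fmt"
	"strings"
)

// looseMatch roughly mimics the permissive matching described above: a short
// name matches any stored repo tag whose path ends with it.
func looseMatch(short string, repoTags []string) []string {
	var hits []string
	for _, tag := range repoTags {
		if tag == short || strings.HasSuffix(tag, "/"+short) {
			hits = append(hits, tag)
		}
	}
	return hits
}

func main() {
	stored := []string{
		"docker.io/library/fedora:33",
		"quay.io/example/fedora:33", // not under any search registry
	}
	// Loose matching finds both tags; a strict candidate list built from
	// unqualified-search-registries would never consider the quay.io one,
	// which is exactly the kind of breakage warned about here.
	fmt.Println(looseMatch("fedora:33", stored))
}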

@vrothberg added the In Progress label Sep 28, 2020
vrothberg added a commit to vrothberg/libpod that referenced this issue Sep 30, 2020
When looking up local images, take the unqualified-search registries of
registries.conf into account (on top of "localhost/").

Also extend the integration tests to prevent future regressions.

Fixes: containers#6381
Signed-off-by: Valentin Rothberg <[email protected]>
mheon pushed a commit to mheon/libpod that referenced this issue Oct 14, 2020
When looking up local images, take the unqualified-search registries of
registries.conf into account (on top of "localhost/").

Also extend the integration tests to prevent future regressions.

Fixes: containers#6381
Signed-off-by: Valentin Rothberg <[email protected]>
@github-actions bot added the locked - please file new issue/PR label Sep 22, 2023
@github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023