switch the default kubeadm image registry to registry.k8s.io #2671
/cc
PR for this in 1.25:
kubernetes/kubernetes#109938 broke our e2e tests for 'latest'. I think what's happening is the following: a cluster is created with the older version of kubeadm, and ClusterConfiguration.imageRepository is defaulted to "k8s.gcr.io". Then, when the upgrade happens, the new kubeadm binary thinks that "k8s.gcr.io" is a custom repository, and kinder is not pre-pulling the right images. For kubeadm we need to mutate the image repo field to registry.k8s.io. For kinder we need to ensure it uses the output of "kubeadm config images" for the prepull (might be the case already).
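The mutation described above can be sketched in Go. This is a hypothetical sketch, not the actual kubeadm code (which landed in kubernetes/kubernetes#110343); `mutateImageRepository` is an assumed name:

```go
package main

import "fmt"

const (
	oldDefaultRepository = "k8s.gcr.io"
	newDefaultRepository = "registry.k8s.io"
)

// mutateImageRepository rewrites the legacy default registry to the new one,
// while leaving genuinely custom repositories untouched. Hypothetical sketch
// of the logic discussed above; the real implementation is in kubeadm.
func mutateImageRepository(repo string) string {
	if repo == oldDefaultRepository {
		return newDefaultRepository
	}
	return repo
}

func main() {
	fmt.Println(mutateImageRepository("k8s.gcr.io"))     // old default is rewritten
	fmt.Println(mutateImageRepository("my.registry.io")) // custom repos are untouched
}
```

This keeps upgrades of clusters that never customized the field on the project default, while respecting anyone who deliberately set a mirror.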
The first thing we have to do is here; once it merges, I can try debugging the kinder workflows.
kubernetes/kubernetes#110343 merged, and I was expecting it to be a sufficient fix, but it seems like https://storage.googleapis.com/k8s-release-dev/ci/latest.txt is not pointing to the latest CI build, thus the jobs are continuing to fail. Notified #release-management and #release-ci-signal on Slack:
that's actually tricky to fix, because the images are created from tars (see kubeadm/kinder/pkg/build/alter/alter.go, line 225 at commit 5f13a39).
alternatively, we could just add a workaround for the
Re-tagging seems to be the simplest way here (or a trick change in fixImageTar to tag it to both registries).
I'm testing a hack in fixImageTar right now. It's not great, but it can be removed once we no longer test the 1.24 k8s / 1.25 kubeadm skew.
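The dual-tag idea could be sketched as below. This is an assumed helper, not the actual fixImageTar change; the real retagging of a loaded image would be done with something like `docker tag <old> <new>` on the node:

```go
package main

import (
	"fmt"
	"strings"
)

const (
	oldRegistry = "k8s.gcr.io"
	newRegistry = "registry.k8s.io"
)

// newRegistryRef maps a legacy image reference onto the new registry.
// Hypothetical sketch: after loading an image tar built with the old
// registry name, a tool could add a second tag under this new name so
// that both references resolve locally.
func newRegistryRef(ref string) string {
	if strings.HasPrefix(ref, oldRegistry+"/") {
		return newRegistry + strings.TrimPrefix(ref, oldRegistry)
	}
	return ref // not a legacy reference; leave it alone
}

func main() {
	fmt.Println(newRegistryRef("k8s.gcr.io/kube-apiserver:v1.24.0"))
}
```

Tagging to both registries means the prepull check passes regardless of which default the kubeadm binary in the skew test expects.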
@neolit123 is there anything that needs to be done here? Happy to provide some help.
@ameukam in the OP we have a task to do some cleanups in 1.26.
Does something need to be done for kubeadm < 1.25 too, or can they just rely on the image redirect? This also goes for the old tarballs that were generated with the content from
Like:
They could need some retagging after image loading, if so.
k8s.gcr.io will continue to exist as a source of truth, and is itself an alias to the actual registries (unadvertised). It would be a breaking change for older releases to switch the primary registry alias, e.g. for the reasons you mentioned, so they will not be updated. However, in the near future k8s.gcr.io may begin to redirect to registry.k8s.io, which will again contain all the same images; the only impact will be the need to allow it through the firewall, as the image names will not change. A notice about this was just sent to the dev mailing list, but it needs wider circulation.
The flip was done on Oct 3rd. kubeadm CI has been green thus far; let's reopen this if we need to change something else.
After discussion with @BenTheElder, we should actually backport this to the >= 1.23 releases (1.23, 1.24); >= 1.25 already has the changes. Note: this is an exception, as we only backport bugfixes, but in this case it has to be done. Looks like we need to backport these PRs:
We could leave the backports without doing the cleanups that we did here:
cc @fabriziopandini @sbueringer in case this affects CAPI.
We need to do some retagging effort in minikube, or it would break the old preloads and caches (that only have the old registry). Alternatively, we could re-generate all the preloads, but that could already be "too late" if they are being cached and have been downloaded. It would only break air-gapped installs (and China?); the others would just be able to fetch the "new" image.
Only if using the new patch release for which the images don't exist yet. Also, for older releases we still currently plan to publish the tags to k8s.gcr.io; it just won't be the default. Upgrades should always be taken carefully, and we'll certainly need a prominent release note.
Yeah, on second thought this would just affect new (minor) releases of kubeadm. So all that is needed is to mirror the version selection logic, like today (>= 1.25.0-alpha.1).
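Mirroring that version-selection logic might look like the sketch below. This is a simplified assumption, not the actual minikube or kubeadm code: real kubeadm compares full semver including pre-release tags (>= 1.25.0-alpha.1), while this sketch only inspects major.minor:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// defaultRegistryFor picks the default image registry based on the target
// Kubernetes version: v1.25 and newer default to registry.k8s.io, older
// versions keep k8s.gcr.io. Hypothetical sketch; parse errors fall back
// to the old registry for simplicity.
func defaultRegistryFor(version string) string {
	v := strings.TrimPrefix(version, "v")
	parts := strings.SplitN(v, ".", 3)
	if len(parts) < 2 {
		return "k8s.gcr.io"
	}
	major, err1 := strconv.Atoi(parts[0])
	minor, err2 := strconv.Atoi(parts[1])
	if err1 != nil || err2 != nil {
		return "k8s.gcr.io"
	}
	if major > 1 || (major == 1 && minor >= 25) {
		return "registry.k8s.io"
	}
	return "k8s.gcr.io"
}

func main() {
	fmt.Println(defaultRegistryFor("v1.24.3")) // old registry
	fmt.Println(defaultRegistryFor("v1.25.0")) // new registry
}
```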
cherrypicks for 1.22, 1.23, 1.24:
It seems the work is done in v1.26.
@pacoxu: Closing this issue.
I added the backports to minikube. For most users, the cache is not used anyway; by default, minikube will fetch the preload from GCS instead of using the cache: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/ But with
The k8s project is moving away from k8s.gcr.io to registry.k8s.io.
https://groups.google.com/g/kubernetes-sig-testing/c/U7b_im9vRrM/m/7qywJeUTBQAJ
xref kubernetes/k8s.io#1834
1.25
Move from k8s.gcr.io to registry.k8s.io kubernetes#109938
kubeadm: mutate ClusterConfiguration.imageRepository to "registry.k8s.io" kubernetes#110343
kinder: apply a hack to handle the registry.k8s.io transition bugs #2705
update kubeadm pages to use registry.k8s.io website#34163
1.26
cleanup TODOs from 1.25
kubeadm: remove MutateImageRepository for registry change kubernetes#112006
cherrypicks for 1.22, 1.23, 1.24:
Manual-cherry-pick-for-1.22: kubeadm: apply registry.k8s.io changes kubernetes#113388
Manual-cherry-pick-for-1.23: kubeadm: apply registry.k8s.io changes kubernetes#113393
Manual-cherry-pick-for-1.24: kubeadm: apply registry.k8s.io changes kubernetes#113395
(minus cleanups)