
Fail running on Raspberry Pi Ubuntu 21.10 #4234

Closed
1 task
manedurphy opened this issue Oct 18, 2021 · 10 comments

Comments

@manedurphy

Environmental Info:
K3s Version: 1.21.5+k3s2

Node(s) CPU architecture, OS, and Version: Linux k3s-server-01 5.13.0-1008-raspi #9-Ubuntu SMP PREEMPT Wed Sep 29 08:27:44 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

Cluster Configuration: 1 server

Describe the bug:
I have upgraded my Raspberry Pi 4s to Ubuntu 21.10. There seems to be an issue when running the basic installation script for K3s.

Steps To Reproduce:
Prior to installation, I appended the following to /boot/firmware/cmdline.txt and rebooted the system: cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
Installed K3s with the command: curl -sfL https://get.k3s.io | sh -

  • Installed K3s: k3s version v1.21.5+k3s2
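The pre-install steps above can be sketched as a script (a hedged sketch assuming the stock Ubuntu 21.10 Raspberry Pi image, where /boot/firmware/cmdline.txt is a single line; the sed invocation is my own illustration, not part of the K3s installer):

```shell
# Append the cgroup flags to the single-line kernel command line, then reboot
sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/firmware/cmdline.txt
sudo reboot

# After the reboot, install K3s with the stock installer
curl -sfL https://get.k3s.io | sh -

# Confirm the installed version
k3s --version
```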

Expected behavior:
I expect the server to run the same as it does on my desktop.

Actual behavior:
It tries to start, fails, and then restarts the process. Since I do not see this issue when running the installation script in an Ubuntu 21.10 VM on my desktop, I am wondering whether there are extra steps required to get this working on a Raspberry Pi 4. I do not fully understand everything in the logs, so if there is any extra information you need, I am happy to provide it.
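The crash/restart loop can be captured and reduced to the fatal lines like this (a small helper sketch of my own; the file path /tmp/k3s.log is arbitrary, and the grep pattern just matches the logrus level field in the journal output below):

```shell
# Save today's k3s service logs, then pull out the fatal/error lines for the report
journalctl -u k3s --no-pager --since "today" > /tmp/k3s.log
grep -E 'level=(fatal|error)' /tmp/k3s.log
```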

Additional context / logs:

-- Journal begins at Wed 2021-10-13 13:32:52 UTC. --

Oct 18 12:50:00 k3s-server-01 k3s[3438]: time="2021-10-18T12:50:00.793178690Z" level=info msg="Cluster-Http-Server 2021/10/18 12:50:00 http: TLS handshake error from 10.0.0.76:44766: remote error: tls: bad certificate"

Oct 18 12:50:00 k3s-server-01 k3s[3438]: I1018 12:50:00.799753    3438 network_policy_controller.go:154] Starting network policy controller full sync goroutine

Oct 18 12:50:01 k3s-server-01 k3s[3438]: I1018 12:50:01.054465    3438 controller.go:611] quota admission added evaluator for: deployments.apps

Oct 18 12:50:01 k3s-server-01 k3s[3438]: time="2021-10-18T12:50:01.257207928Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"b29ebfef-01c1-46cb-af94-7f318b2c5859\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"300\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""

Oct 18 12:50:01 k3s-server-01 k3s[3438]: I1018 12:50:01.258239    3438 kube.go:123] Node controller sync successful

Oct 18 12:50:01 k3s-server-01 k3s[3438]: I1018 12:50:01.258441    3438 vxlan.go:123] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false

Oct 18 12:50:01 k3s-server-01 k3s[3438]: time="2021-10-18T12:50:01.264271721Z" level=fatal msg="flannel exited: operation not supported"

Oct 18 12:50:01 k3s-server-01 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE

Oct 18 12:50:01 k3s-server-01 systemd[1]: k3s.service: Failed with result 'exit-code'.

Oct 18 12:50:01 k3s-server-01 systemd[1]: k3s.service: Consumed 31.845s CPU time.

Oct 18 12:50:06 k3s-server-01 systemd[1]: k3s.service: Scheduled restart job, restart counter is at 2.

Oct 18 12:50:06 k3s-server-01 systemd[1]: Stopped Lightweight Kubernetes.

Oct 18 12:50:06 k3s-server-01 systemd[1]: k3s.service: Consumed 31.845s CPU time.

Oct 18 12:50:06 k3s-server-01 systemd[1]: Starting Lightweight Kubernetes...

Oct 18 12:50:06 k3s-server-01 sh[3639]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service

Oct 18 12:50:06 k3s-server-01 sh[3640]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory

Oct 18 12:50:07 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:07.294425002Z" level=info msg="Starting k3s v1.21.5+k3s2 (724ef700)"

Oct 18 12:50:07 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:07.295196202Z" level=info msg="Cluster bootstrap already complete"

Oct 18 12:50:07 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:07.353123234Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"

Oct 18 12:50:07 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:07.353259641Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."

Oct 18 12:50:07 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:07.353730731Z" level=info msg="Database tables and indexes are up to date"

Oct 18 12:50:07 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:07.363761756Z" level=info msg="Kine listening on unix://kine.sock"

Oct 18 12:50:07 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:07.364358290Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

Oct 18 12:50:07 k3s-server-01 k3s[3643]: Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.

Oct 18 12:50:07 k3s-server-01 k3s[3643]: I1018 12:50:07.368404    3643 server.go:656] external host was not specified, using 10.0.0.251

Oct 18 12:50:07 k3s-server-01 k3s[3643]: I1018 12:50:07.369445    3643 server.go:195] Version: v1.21.5+k3s2

Oct 18 12:50:07 k3s-server-01 k3s[3643]: I1018 12:50:07.381768    3643 shared_informer.go:240] Waiting for caches to sync for node_authorizer

Oct 18 12:50:07 k3s-server-01 k3s[3643]: I1018 12:50:07.385506    3643 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

Oct 18 12:50:07 k3s-server-01 k3s[3643]: I1018 12:50:07.385597    3643 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

Oct 18 12:50:07 k3s-server-01 k3s[3643]: I1018 12:50:07.390709    3643 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

Oct 18 12:50:07 k3s-server-01 k3s[3643]: I1018 12:50:07.390808    3643 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

Oct 18 12:50:07 k3s-server-01 k3s[3643]: I1018 12:50:07.491618    3643 instance.go:283] Using reconciler: lease

Oct 18 12:50:07 k3s-server-01 k3s[3643]: I1018 12:50:07.604285    3643 rest.go:130] the default service ipfamily for this cluster is: IPv4

Oct 18 12:50:08 k3s-server-01 k3s[3643]: W1018 12:50:08.685638    3643 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.

Oct 18 12:50:08 k3s-server-01 k3s[3643]: W1018 12:50:08.723257    3643 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.

Oct 18 12:50:08 k3s-server-01 k3s[3643]: W1018 12:50:08.735738    3643 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.

Oct 18 12:50:08 k3s-server-01 k3s[3643]: W1018 12:50:08.758202    3643 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.

Oct 18 12:50:08 k3s-server-01 k3s[3643]: W1018 12:50:08.767506    3643 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.

Oct 18 12:50:08 k3s-server-01 k3s[3643]: W1018 12:50:08.790369    3643 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.

Oct 18 12:50:08 k3s-server-01 k3s[3643]: W1018 12:50:08.790476    3643 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.

Oct 18 12:50:08 k3s-server-01 k3s[3643]: I1018 12:50:08.820476    3643 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

Oct 18 12:50:08 k3s-server-01 k3s[3643]: I1018 12:50:08.820574    3643 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

Oct 18 12:50:08 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:08.865325852Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"

Oct 18 12:50:08 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:08.865354426Z" level=info msg="Waiting for API server to become available"

Oct 18 12:50:08 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:08.867833414Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"

Oct 18 12:50:08 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:08.869673702Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"

Oct 18 12:50:08 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:08.873199241Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"

Oct 18 12:50:08 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:08.873341481Z" level=info msg="To join node to cluster: k3s agent -s https://10.0.0.251:6443 -t ${NODE_TOKEN}"

Oct 18 12:50:08 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:08.876896834Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"

Oct 18 12:50:08 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:08.877066815Z" level=info msg="Run: k3s kubectl"

Oct 18 12:50:08 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:08.981295929Z" level=info msg="Cluster-Http-Server 2021/10/18 12:50:08 http: TLS handshake error from 127.0.0.1:46934: remote error: tls: bad certificate"

Oct 18 12:50:09 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:09.001348870Z" level=info msg="Cluster-Http-Server 2021/10/18 12:50:09 http: TLS handshake error from 127.0.0.1:46944: remote error: tls: bad certificate"

Oct 18 12:50:09 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:09.049363165Z" level=info msg="certificate CN=k3s-server-01 signed by CN=k3s-server-ca@1634561348: notBefore=2021-10-18 12:49:08 +0000 UTC notAfter=2022-10-18 12:50:09 +0000 UTC"

Oct 18 12:50:09 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:09.061714830Z" level=info msg="certificate CN=system:node:k3s-server-01,O=system:nodes signed by CN=k3s-client-ca@1634561348: notBefore=2021-10-18 12:49:08 +0000 UTC notAfter=2022-10-18 12:50:09 +0000 UTC"

Oct 18 12:50:09 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:09.366439430Z" level=info msg="Module overlay was already loaded"

Oct 18 12:50:09 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:09.366563874Z" level=info msg="Module nf_conntrack was already loaded"

Oct 18 12:50:09 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:09.366616985Z" level=info msg="Module br_netfilter was already loaded"

Oct 18 12:50:09 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:09.366669003Z" level=info msg="Module iptable_nat was already loaded"

Oct 18 12:50:09 k3s-server-01 k3s[3643]: W1018 12:50:09.366791    3643 sysinfo.go:203] Nodes topology is not available, providing CPU topology

Oct 18 12:50:09 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:09.374527633Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"

Oct 18 12:50:09 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:09.374819799Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"

Oct 18 12:50:10 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:10.379382832Z" level=info msg="Containerd is now running"

Oct 18 12:50:10 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:10.425194587Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"

Oct 18 12:50:10 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:10.435161727Z" level=info msg="Handling backend connection request [k3s-server-01]"

Oct 18 12:50:10 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:10.437616604Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/5b1d238ed00ea9bbae6aa96d867c1264a96ad48b42e55d56570d54d05c8d8a7a/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3s-server-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"

Oct 18 12:50:10 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:10.439301189Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3s-server-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"

Oct 18 12:50:10 k3s-server-01 k3s[3643]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.

Oct 18 12:50:10 k3s-server-01 k3s[3643]: Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.

Oct 18 12:50:10 k3s-server-01 k3s[3643]: Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.

Oct 18 12:50:10 k3s-server-01 k3s[3643]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.

Oct 18 12:50:10 k3s-server-01 k3s[3643]: W1018 12:50:10.440225    3643 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.

Oct 18 12:50:10 k3s-server-01 k3s[3643]: I1018 12:50:10.496268    3643 server.go:436] "Kubelet version" kubeletVersion="v1.21.5+k3s2"

Oct 18 12:50:10 k3s-server-01 k3s[3643]: E1018 12:50:10.534578    3643 node.go:161] Failed to retrieve node info: nodes "k3s-server-01" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope

Oct 18 12:50:10 k3s-server-01 k3s[3643]: I1018 12:50:10.591543    3643 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt

Oct 18 12:50:10 k3s-server-01 k3s[3643]: W1018 12:50:10.591703    3643 manager.go:159] Cannot detect current cgroup on cgroup v2

Oct 18 12:50:10 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:10.841260345Z" level=info msg="Cluster-Http-Server 2021/10/18 12:50:10 http: TLS handshake error from 10.0.0.76:44774: remote error: tls: bad certificate"

Oct 18 12:50:11 k3s-server-01 k3s[3643]: E1018 12:50:11.565577    3643 node.go:161] Failed to retrieve node info: nodes "k3s-server-01" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope

Oct 18 12:50:13 k3s-server-01 k3s[3643]: E1018 12:50:13.952095    3643 node.go:161] Failed to retrieve node info: nodes "k3s-server-01" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope

Oct 18 12:50:15 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:15.430956039Z" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: W1018 12:50:15.613895    3643 sysinfo.go:203] Nodes topology is not available, providing CPU topology

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.621056    3643 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.621941    3643 container_manager_linux.go:291] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.622202    3643 container_manager_linux.go:296] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.622322    3643 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.622377    3643 container_manager_linux.go:327] "Initializing Topology Manager" policy="none" scope="container"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.622411    3643 container_manager_linux.go:332] "Creating device plugin manager" devicePluginEnabled=true

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.623070    3643 kubelet.go:404] "Attempting to sync node with API server"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.623146    3643 kubelet.go:272] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.623246    3643 kubelet.go:283] "Adding apiserver pod source"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.623311    3643 apiserver.go:42] "Waiting for node sync before watching apiserver pods"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.629371    3643 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="v1.4.11-k3s1" apiVersion="v1alpha2"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.644223    3643 server.go:1191] "Started kubelet"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.648809    3643 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: E1018 12:50:15.655905    3643 cri_stats_provider.go:369] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: E1018 12:50:15.656013    3643 kubelet.go:1306] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.657326    3643 server.go:149] "Starting to listen" address="0.0.0.0" port=10250

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.660149    3643 server.go:409] "Adding debug handlers to kubelet server"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.665903    3643 volume_manager.go:271] "Starting Kubelet Volume Manager"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.666114    3643 desired_state_of_world_populator.go:141] "Desired state populator starts to run"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.766367    3643 kuberuntime_manager.go:1044] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.769650    3643 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.797056    3643 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.804553    3643 kubelet_node_status.go:71] "Attempting to register node" node="k3s-server-01"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:15.867866720Z" level=info msg="Cluster-Http-Server 2021/10/18 12:50:15 http: TLS handshake error from 10.0.0.76:44780: remote error: tls: bad certificate"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.908773    3643 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.908969    3643 status_manager.go:157] "Starting to sync pod status with apiserver"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: I1018 12:50:15.909967    3643 kubelet.go:1846] "Starting kubelet main sync loop"

Oct 18 12:50:15 k3s-server-01 k3s[3643]: E1018 12:50:15.910607    3643 kubelet.go:1870] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"

Oct 18 12:50:16 k3s-server-01 k3s[3643]: E1018 12:50:16.010768    3643 kubelet.go:1870] "Skipping pod synchronization" err="container runtime status check may not have completed yet"

Oct 18 12:50:16 k3s-server-01 k3s[3643]: E1018 12:50:16.211310    3643 kubelet.go:1870] "Skipping pod synchronization" err="container runtime status check may not have completed yet"

Oct 18 12:50:16 k3s-server-01 k3s[3643]: E1018 12:50:16.612382    3643 kubelet.go:1870] "Skipping pod synchronization" err="container runtime status check may not have completed yet"

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.639367    3643 apiserver.go:52] "Watching apiserver"

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.791840    3643 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.791840    3643 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.793053    3643 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.794013    3643 secure_serving.go:202] Serving securely on 127.0.0.1:6444

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.794099    3643 tlsconfig.go:240] Starting DynamicServingCertificateController

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.794222    3643 customresource_discovery_controller.go:209] Starting DiscoveryController

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.794285    3643 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.796222    3643 apf_controller.go:294] Starting API Priority and Fairness config controller

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.797059    3643 apiservice_controller.go:97] Starting APIServiceRegistrationController

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.797127    3643 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.797742    3643 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.797800    3643 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.797880    3643 autoregister_controller.go:141] Starting autoregister controller

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.797908    3643 cache.go:32] Waiting for caches to sync for autoregister controller

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.797956    3643 controller.go:86] Starting OpenAPI controller

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.798007    3643 naming_controller.go:291] Starting NamingConditionController

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.798053    3643 establishing_controller.go:76] Starting EstablishingController

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.798105    3643 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.798156    3643 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.798202    3643 crd_finalizer.go:266] Starting CRDFinalizer

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.804947    3643 controller.go:83] Starting OpenAPI AggregationController

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.794299    3643 available_controller.go:475] Starting AvailableConditionController

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.805163    3643 cache.go:32] Waiting for caches to sync for AvailableConditionController controller

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.860853    3643 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.860986    3643 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.862126    3643 crdregistration_controller.go:111] Starting crd-autoregister controller

Oct 18 12:50:16 k3s-server-01 k3s[3643]: I1018 12:50:16.862191    3643 shared_informer.go:240] Waiting for caches to sync for crd-autoregister

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.060011    3643 trace.go:205] Trace[840102342]: "Create" url:/api/v1/namespaces/default/events,user-agent:k3s/v1.21.5+k3s2 (linux/arm64) kubernetes/724ef70,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/1.1 (18-Oct-2021 12:50:15.734) (total time: 1325ms):

Oct 18 12:50:17 k3s-server-01 k3s[3643]: Trace[840102342]: ---"Object stored in database" 1324ms (12:50:00.059)

Oct 18 12:50:17 k3s-server-01 k3s[3643]: Trace[840102342]: [1.325227626s] [1.325227626s] END

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.062761    3643 shared_informer.go:247] Caches are synced for crd-autoregister

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.082019    3643 shared_informer.go:247] Caches are synced for node_authorizer

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.097018    3643 apf_controller.go:299] Running API Priority and Fairness config worker

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.098430    3643 cache.go:39] Caches are synced for autoregister controller

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.099114    3643 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.099928    3643 cache.go:39] Caches are synced for APIServiceRegistrationController controller

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.107274    3643 cache.go:39] Caches are synced for AvailableConditionController controller

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.124615    3643 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.156719    3643 trace.go:205] Trace[389610618]: "GuaranteedUpdate etcd3" type:*coordination.Lease (18-Oct-2021 12:50:15.727) (total time: 1428ms):

Oct 18 12:50:17 k3s-server-01 k3s[3643]: Trace[389610618]: ---"Transaction prepared" 1397ms (12:50:00.124)

Oct 18 12:50:17 k3s-server-01 k3s[3643]: Trace[389610618]: [1.428672466s] [1.428672466s] END

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.157162    3643 trace.go:205] Trace[1086018545]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k3s-server-01,user-agent:k3s/v1.21.5+k3s2 (linux/arm64) kubernetes/724ef70,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/1.1 (18-Oct-2021 12:50:15.727) (total time: 1429ms):

Oct 18 12:50:17 k3s-server-01 k3s[3643]: Trace[1086018545]: ---"Object stored in database" 1428ms (12:50:00.156)

Oct 18 12:50:17 k3s-server-01 k3s[3643]: Trace[1086018545]: [1.429437223s] [1.429437223s] END

Oct 18 12:50:17 k3s-server-01 k3s[3643]: E1018 12:50:17.163728    3643 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.182789    3643 trace.go:205] Trace[1868172453]: "Create" url:/api/v1/nodes,user-agent:k3s/v1.21.5+k3s2 (linux/arm64) kubernetes/724ef70,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/1.1 (18-Oct-2021 12:50:15.809) (total time: 1373ms):

Oct 18 12:50:17 k3s-server-01 k3s[3643]: Trace[1868172453]: [1.373238491s] [1.373238491s] END

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.204721    3643 kubelet_node_status.go:109] "Node was previously registered" node="k3s-server-01"

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.205259    3643 kubelet_node_status.go:74] "Successfully registered node" node="k3s-server-01"

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.225706    3643 setters.go:577] "Node became not ready" node="k3s-server-01" condition={Type:Ready Status:False LastHeartbeatTime:2021-10-18 12:50:17.225516411 +0000 UTC m=+10.650340102 LastTransitionTime:2021-10-18 12:50:17.225516411 +0000 UTC m=+10.650340102 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}

Oct 18 12:50:17 k3s-server-01 k3s[3643]: E1018 12:50:17.413186    3643 kubelet.go:1870] "Skipping pod synchronization" err="container runtime status check may not have completed yet"

Oct 18 12:50:17 k3s-server-01 k3s[3643]: I1018 12:50:17.840493    3643 storage_scheduling.go:148] all system priority classes are created successfully or already exist.

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.075039    3643 cpu_manager.go:199] "Starting CPU manager" policy="none"

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.075129    3643 cpu_manager.go:200] "Reconciling" reconcilePeriod="10s"

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.075204    3643 state_mem.go:36] "Initialized new in-memory state store"

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.075795    3643 state_mem.go:88] "Updated default CPUSet" cpuSet=""

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.075877    3643 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.075911    3643 policy_none.go:44] "None policy: Start"

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.089095    3643 manager.go:600] "Failed to retrieve checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.091380    3643 plugin_manager.go:114] "Starting Kubelet Plugin Manager"

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.333010    3643 node.go:172] Successfully retrieved node IP: 10.0.0.251

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.333109    3643 server_others.go:141] Detected node IP 10.0.0.251

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.349129    3643 server_others.go:207] kube-proxy running in dual-stack mode, IPv4-primary

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.349222    3643 server_others.go:213] Using iptables Proxier.

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.349264    3643 server_others.go:220] creating dualStackProxier for iptables.

Oct 18 12:50:18 k3s-server-01 k3s[3643]: W1018 12:50:18.349306    3643 server_others.go:519] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.351565    3643 server.go:643] Version: v1.21.5+k3s2

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.354108    3643 config.go:315] Starting service config controller

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.354216    3643 shared_informer.go:240] Waiting for caches to sync for service config

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.354299    3643 config.go:224] Starting endpoint slice config controller

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.354328    3643 shared_informer.go:240] Waiting for caches to sync for endpoint slice config

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.454734    3643 shared_informer.go:247] Caches are synced for endpoint slice config

Oct 18 12:50:18 k3s-server-01 k3s[3643]: I1018 12:50:18.454813    3643 shared_informer.go:247] Caches are synced for service config

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.013753    3643 topology_manager.go:187] "Topology Admit Handler"

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.014046    3643 topology_manager.go:187] "Topology Admit Handler"

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.014239    3643 topology_manager.go:187] "Topology Admit Handler"

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.014422    3643 topology_manager.go:187] "Topology Admit Handler"

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.014636    3643 topology_manager.go:187] "Topology Admit Handler"

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.141595    3643 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4vzg\" (UniqueName: \"kubernetes.io/projected/3d987250-1bdc-486a-afbd-343795ef2687-kube-api-access-r4vzg\") pod \"metrics-server-86cbb8457f-8pbxh\" (UID: \"3d987250-1bdc-486a-afbd-343795ef2687\") "

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.141839    3643 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"values\" (UniqueName: \"kubernetes.io/configmap/49ee4f82-abbd-4eb3-a255-427f862a7cfd-values\") pod \"helm-install-traefik-vpf9z\" (UID: \"49ee4f82-abbd-4eb3-a255-427f862a7cfd\") "

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.141991    3643 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/49ee4f82-abbd-4eb3-a255-427f862a7cfd-content\") pod \"helm-install-traefik-vpf9z\" (UID: \"49ee4f82-abbd-4eb3-a255-427f862a7cfd\") "

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.142082    3643 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92w5g\" (UniqueName: \"kubernetes.io/projected/20d73a21-a45f-4412-9514-e49a531633f6-kube-api-access-92w5g\") pod \"helm-install-traefik-crd-qq97x\" (UID: \"20d73a21-a45f-4412-9514-e49a531633f6\") "

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.142200    3643 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmssf\" (UniqueName: \"kubernetes.io/projected/a9b0985f-2d49-473f-a17d-5b90ecfe7c17-kube-api-access-xmssf\") pod \"local-path-provisioner-5ff76fc89d-8t5zd\" (UID: \"a9b0985f-2d49-473f-a17d-5b90ecfe7c17\") "

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.142288    3643 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/528a0883-71d3-4eae-b695-856966266802-config-volume\") pod \"coredns-7448499f4d-6tm6f\" (UID: \"528a0883-71d3-4eae-b695-856966266802\") "

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.142385    3643 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls5dl\" (UniqueName: \"kubernetes.io/projected/528a0883-71d3-4eae-b695-856966266802-kube-api-access-ls5dl\") pod \"coredns-7448499f4d-6tm6f\" (UID: \"528a0883-71d3-4eae-b695-856966266802\") "

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.142475    3643 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"values\" (UniqueName: \"kubernetes.io/configmap/20d73a21-a45f-4412-9514-e49a531633f6-values\") pod \"helm-install-traefik-crd-qq97x\" (UID: \"20d73a21-a45f-4412-9514-e49a531633f6\") "

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.142584    3643 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/20d73a21-a45f-4412-9514-e49a531633f6-content\") pod \"helm-install-traefik-crd-qq97x\" (UID: \"20d73a21-a45f-4412-9514-e49a531633f6\") "

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.142674    3643 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9b0985f-2d49-473f-a17d-5b90ecfe7c17-config-volume\") pod \"local-path-provisioner-5ff76fc89d-8t5zd\" (UID: \"a9b0985f-2d49-473f-a17d-5b90ecfe7c17\") "

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.142749    3643 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3d987250-1bdc-486a-afbd-343795ef2687-tmp-dir\") pod \"metrics-server-86cbb8457f-8pbxh\" (UID: \"3d987250-1bdc-486a-afbd-343795ef2687\") "

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.142826    3643 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k64gd\" (UniqueName: \"kubernetes.io/projected/49ee4f82-abbd-4eb3-a255-427f862a7cfd-kube-api-access-k64gd\") pod \"helm-install-traefik-vpf9z\" (UID: \"49ee4f82-abbd-4eb3-a255-427f862a7cfd\") "

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.142876    3643 reconciler.go:157] "Reconciler: start to sync state"

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.732643    3643 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).

Oct 18 12:50:19 k3s-server-01 k3s[3643]: I1018 12:50:19.732766    3643 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).

Oct 18 12:50:19 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:19.999233053Z" level=info msg="Kube API server is now running"

Oct 18 12:50:19 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:19.999367553Z" level=info msg="k3s is up and running"

Oct 18 12:50:19 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:19.999434441Z" level=info msg="Waiting for cloud-controller-manager privileges to become available"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: Flag --address has been deprecated, see --bind-address instead.

Oct 18 12:50:20 k3s-server-01 k3s[3643]: I1018 12:50:20.013579    3643 controllermanager.go:175] Version: v1.21.5+k3s2

Oct 18 12:50:20 k3s-server-01 k3s[3643]: I1018 12:50:20.014574    3643 deprecated_insecure_serving.go:56] Serving insecurely on 127.0.0.1:10252

Oct 18 12:50:20 k3s-server-01 k3s[3643]: W1018 12:50:20.081995    3643 authorization.go:47] Authorization is disabled

Oct 18 12:50:20 k3s-server-01 k3s[3643]: W1018 12:50:20.082101    3643 authentication.go:47] Authentication is disabled

Oct 18 12:50:20 k3s-server-01 k3s[3643]: I1018 12:50:20.082141    3643 deprecated_insecure_serving.go:54] Serving healthz insecurely on 127.0.0.1:10251

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.182887687Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-9.18.2.tgz"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.184691588Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-9.18.2.tgz"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.337229039Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.338040832Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.338898792Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.339828974Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.340484435Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.341102599Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.341831448Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.342369983Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.343086333Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.343735293Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.344278958Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.345092770Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: I1018 12:50:20.414100    3643 request.go:668] Waited for 1.165820018s due to client-side throttling, not priority and fairness, request: POST:https://127.0.0.1:6443/api/v1/namespaces/kube-system/serviceaccounts/metrics-server/token

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.445771611Z" level=info msg="Stopped tunnel to 127.0.0.1:6443"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.447847770Z" level=info msg="Connecting to proxy" url="wss://10.0.0.251:6443/v1-k3s/connect"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.448478786Z" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.449367413Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.452781919Z" level=info msg="Starting /v1, Kind=Secret controller"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.454521653Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"d1e52c9e-3876-4ce1-b80c-5aacd1b8ffc1\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"284\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.455866260Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.513856106Z" level=info msg="Handling backend connection request [k3s-server-01]"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: E1018 12:50:20.519025    3643 dns.go:136] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 75.75.75.75 75.75.76.76 2001:558:feed::1"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.604123948Z" level=info msg="Node CIDR assigned for: k3s-server-01"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: I1018 12:50:20.606292    3643 flannel.go:92] Determining IP address of default interface

Oct 18 12:50:20 k3s-server-01 k3s[3643]: I1018 12:50:20.607230    3643 flannel.go:105] Using interface with name eth0 and address 10.0.0.251

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.608237322Z" level=info msg="Cluster dns configmap already exists"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: I1018 12:50:20.622816    3643 kube.go:116] Waiting 10m0s for node controller to sync

Oct 18 12:50:20 k3s-server-01 k3s[3643]: I1018 12:50:20.622954    3643 kube.go:299] Starting kube subnet manager

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.631183795Z" level=info msg="labels have already set on node: k3s-server-01"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.671377335Z" level=info msg="Starting /v1, Kind=Pod controller"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.671725427Z" level=info msg="Starting /v1, Kind=Endpoints controller"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.673052107Z" level=info msg="Starting /v1, Kind=Node controller"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.673458143Z" level=info msg="Starting /v1, Kind=Service controller"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.723258297Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"d1e52c9e-3876-4ce1-b80c-5aacd1b8ffc1\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"284\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""

Oct 18 12:50:20 k3s-server-01 k3s[3643]: I1018 12:50:20.738244    3643 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.748791001Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"b29ebfef-01c1-46cb-af94-7f318b2c5859\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"300\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.753848186Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.754774924Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.755921938Z" level=info msg="Starting batch/v1, Kind=Job controller"

Oct 18 12:50:20 k3s-server-01 systemd[1]: Started Lightweight Kubernetes.

Oct 18 12:50:20 k3s-server-01 k3s[3643]: I1018 12:50:20.778824    3643 network_policy_controller.go:144] Starting network policy controller

Oct 18 12:50:20 k3s-server-01 k3s[3643]: I1018 12:50:20.846843    3643 controller.go:611] quota admission added evaluator for: helmcharts.helm.cattle.io

Oct 18 12:50:20 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:20.905859546Z" level=info msg="Cluster-Http-Server 2021/10/18 12:50:20 http: TLS handshake error from 10.0.0.76:44784: remote error: tls: bad certificate"

Oct 18 12:50:21 k3s-server-01 k3s[3643]: I1018 12:50:21.153197    3643 network_policy_controller.go:154] Starting network policy controller full sync goroutine

Oct 18 12:50:21 k3s-server-01 k3s[3643]: I1018 12:50:21.241415    3643 controller.go:611] quota admission added evaluator for: deployments.apps

Oct 18 12:50:21 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:21.348450201Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"b29ebfef-01c1-46cb-af94-7f318b2c5859\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"300\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""

Oct 18 12:50:21 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:21.418358677Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"local-storage\", UID:\"2133183c-c306-415c-9e1f-bdc4afa0c24d\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"314\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""

Oct 18 12:50:21 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:21.525500522Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"local-storage\", UID:\"2133183c-c306-415c-9e1f-bdc4afa0c24d\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"314\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""

Oct 18 12:50:21 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:21.538269440Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"052728ab-76e5-4ab3-a541-a31c666c8103\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"320\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""

Oct 18 12:50:21 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:21.565681677Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"052728ab-76e5-4ab3-a541-a31c666c8103\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"320\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""

Oct 18 12:50:21 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:21.589396908Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"64ce18e9-c806-483c-aaeb-d774b3c05653\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"326\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""

Oct 18 12:50:21 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:21.614747263Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"64ce18e9-c806-483c-aaeb-d774b3c05653\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"326\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""

Oct 18 12:50:21 k3s-server-01 k3s[3643]: I1018 12:50:21.629630    3643 kube.go:123] Node controller sync successful

Oct 18 12:50:21 k3s-server-01 k3s[3643]: I1018 12:50:21.629853    3643 vxlan.go:123] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false

Oct 18 12:50:21 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:21.635670930Z" level=fatal msg="flannel exited: operation not supported"

Oct 18 12:50:21 k3s-server-01 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE

Oct 18 12:50:21 k3s-server-01 systemd[1]: k3s.service: Failed with result 'exit-code'.

Oct 18 12:50:21 k3s-server-01 systemd[1]: k3s.service: Unit process 3826 (k3s-server) remains running after unit stopped.

Oct 18 12:50:21 k3s-server-01 systemd[1]: k3s.service: Consumed 31.190s CPU time.

Backporting

  • Needs backporting to older releases
@brandond
Member

Oct 18 12:50:21 k3s-server-01 k3s[3643]: I1018 12:50:21.629853    3643 vxlan.go:123] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
Oct 18 12:50:21 k3s-server-01 k3s[3643]: time="2021-10-18T12:50:21.635670930Z" level=fatal msg="flannel exited: operation not supported"

This suggests that your kernel is missing VXLAN support, which is unusual since it's available pretty much everywhere. Can you successfully modprobe vxlan?
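If modprobe itself is inconclusive, a quick filesystem check shows whether the module was ever shipped with the running kernel. This is a minimal sketch (not from the thread), assuming the standard /lib/modules layout; the helper name is made up for illustration:

```shell
#!/bin/sh
# has_vxlan_module: report whether vxlan.ko (possibly compressed) exists
# under /lib/modules for the given kernel version. A missing file means
# modprobe can never succeed for that kernel.
has_vxlan_module() {
  kver="$1"
  if find "/lib/modules/$kver" -name 'vxlan.ko*' 2>/dev/null | grep -q .; then
    echo "vxlan module present for $kver"
  else
    echo "vxlan module missing for $kver"
  fi
}

has_vxlan_module "$(uname -r)"
```

On the affected Ubuntu 21.10 Raspberry Pi image this reports the module missing, matching the modprobe FATAL error below.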

@manedurphy
Author

Hi Brandon, thank you for the quick response. Here is the output from that command:

modprobe: FATAL: Module vxlan not found in directory /lib/modules/5.13.0-1008-raspi

I will start researching on my end to see if I can find anything. I appreciate the help on this matter.

@sjmach

sjmach commented Oct 20, 2021

I tried updating the Linux kernel from 5.13.0-1008-raspi to 5.14.x, but it defaults back to 5.13.0-1008-raspi. I installed Ubuntu Server 20.04.3 LTS and can confirm it works fine on my Raspberry Pi 4 (Linux version 5.4.0-1042-raspi (buildd@bos02-arm64-010)).

@brandond
Member

Probably related to #4188

@manedurphy
Author

I see, thank you both for your help. I will close this issue, as it is not a problem with k3s itself. Hopefully this will be addressed in the future.

@siretart

I can confirm the symptoms, and was able to fix it with:

sudo apt install linux-modules-extra-raspi && reboot
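After installing the extra-modules package and rebooting, loading the module should succeed. A small verification sketch (not from the thread; the modprobe needs root, so the fallback branches cover unprivileged shells):

```shell
#!/bin/sh
# Try to load vxlan; on success the module shows up in /proc/modules
# (unless it is built into the kernel). Root is required for modprobe.
if modprobe vxlan 2>/dev/null; then
  echo "vxlan loaded"
elif grep -qw vxlan /proc/modules 2>/dev/null; then
  echo "vxlan already loaded"
else
  echo "vxlan not available"
fi
```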

@sjmach

sjmach commented Oct 21, 2021

I have raised the issue with the Ubuntu team and they have confirmed it's a bug at their end: https://bugs.launchpad.net/bugs/1947628

@siretart

I have raised the issue with the Ubuntu team and they have confirmed it's a bug at their end: https://bugs.launchpad.net/bugs/1947628

I think there was a misunderstanding. The Ubuntu kernel maintainers confirmed that this is intended behavior. I'd suggest documenting in the k3s docs that from Ubuntu 21.10 onward, the new linux-modules-extra-* package is needed.

nerdynick pushed a commit to nerdynick/k3s-ansible that referenced this issue Oct 20, 2022
@MrGoldWatch

I can confirm the symptoms, and was able to fix it with:

sudo apt install linux-modules-extra-raspi && reboot

This resolved my issues as well.

@brandond
Member

This has been included in the docs for a while now.

@k3s-io k3s-io locked as resolved and limited conversation to collaborators Jan 24, 2023