kind supports v1.31 #3937

Closed
weizhoublue opened this issue Aug 23, 2024 · 4 comments · May be fixed by #4089

@weizhoublue
Collaborator

Which tests are failing?

https://github.com/spidernet-io/spiderpool/actions/runs/10514834350/job/29133837113

helm/[email protected] , use (default: v0.21.0)
https://github.com/helm/kind-action

Latest kind release:
https://github.com/kubernetes-sigs/kind/releases/tag/v0.24.0
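
For reference, a minimal sketch of checking this locally with a pinned kind binary instead of the action default (the download URL pattern and the cluster name below are assumptions, not taken from the failing job):

```bash
# Pin kind v0.24.0 (the release linked above) rather than the v0.21.0
# default bundled with helm/[email protected].
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64
chmod +x ./kind

# Try to bring up a cluster with the v1.31.0 node image used by the CI job.
./kind create cluster --name kind-v131-check --image docker.io/kindest/node:v1.31.0

# Verify the nodes came up, then clean up.
./kind get nodes --name kind-v131-check
./kind delete cluster --name kind-v131-check
```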

Job's Link

No response

Reason for failure (if possible)

No response

Anything else we need to know?

No response

weizhoublue changed the title from "kind support v1.31" to "kind supports v1.31" on Aug 23, 2024
@ty-dc
Collaborator

ty-dc commented Aug 23, 2024

same as #3897

@weizhoublue
Collaborator Author

Question 1: The new kind release already supports 1.31, so why can't I fix this in that PR?
Question 2: Why did the PR that updated the k8s version matrix pass CI and get merged, while the daily job exposes this problem?

@ty-dc
Collaborator

ty-dc commented Aug 28, 2024

Deploying Kubernetes 1.31.0 still fails even with the latest kind binary (https://github.com/kubernetes-sigs/kind/releases/tag/v0.24.0).

-------------
setup kind with docker.io/kindest/node:v1.31.0
Creating cluster "spiderpool0828034955" ...
 • Ensuring node image (docker.io/kindest/node:v1.31.0) 🖼  ...
 ✓ Ensuring node image (docker.io/kindest/node:v1.31.0) 🖼
 • Preparing nodes 📦 📦   ...
 ✓ Preparing nodes 📦 📦 
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✗ Starting control-plane 🕹️
Deleted nodes: ["spiderpool0828034955-worker" "spiderpool0828034955-control-plane"]
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged spiderpool0828034955-control-plane kubeadm init --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1

Command Output: I0828 03:51:01.140433     147 initconfiguration.go:261] loading configuration from "/kind/kubeadm.conf"
W0828 03:51:01.140918     147 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0828 03:51:01.141353     147 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0828 03:51:01.141648     147 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "JoinConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0828 03:51:01.141862     147 initconfiguration.go:361] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.31.0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0828 03:51:01.142847     147 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0828 03:51:01.329094     147 certs.go:473] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost spiderpool0828034955-control-plane] and IPs [10.233.0.1 172.18.0.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0828 03:51:01.604136     147 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0828 03:51:01.771472     147 certs.go:473] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0828 03:51:01.942856     147 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0828 03:51:02.153519     147 certs.go:473] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost spiderpool0828034955-control-plane] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost spiderpool0828034955-control-plane] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0828 03:51:02.532556     147 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0828 03:51:02.632075     147 kubeconfig.go:111] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0828 03:51:02.767657     147 kubeconfig.go:111] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I0828 03:51:02.833592     147 kubeconfig.go:111] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0828 03:51:02.969039     147 kubeconfig.go:111] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0828 03:51:03.008739     147 kubeconfig.go:111] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0828 03:51:03.167402     147 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0828 03:51:03.167433     147 manifests.go:103] [control-plane] getting StaticPodSpecs
I0828 03:51:03.167705     147 certs.go:473] validating certificate period for CA certificate
I0828 03:51:03.167771     147 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0828 03:51:03.167785     147 manifests.go:129] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0828 03:51:03.167792     147 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0828 03:51:03.167798     147 manifests.go:129] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0828 03:51:03.167804     147 manifests.go:129] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0828 03:51:03.168366     147 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0828 03:51:03.168381     147 manifests.go:103] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0828 03:51:03.168551     147 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0828 03:51:03.168561     147 manifests.go:129] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0828 03:51:03.168566     147 manifests.go:129] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0828 03:51:03.168570     147 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0828 03:51:03.168573     147 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0828 03:51:03.168580     147 manifests.go:129] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0828 03:51:03.168588     147 manifests.go:129] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0828 03:51:03.169104     147 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0828 03:51:03.169118     147 manifests.go:103] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0828 03:51:03.169286     147 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0828 03:51:03.169630     147 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0828 03:51:03.169648     147 kubelet.go:68] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0828 03:51:03.350989     147 loader.go:395] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 2.000952583s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
I0828 03:55:03.853907     147 round_trippers.go:553] GET https://spiderpool0828034955-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0828 03:55:04.353991     147 round_trippers.go:553] GET https://spiderpool0828034955-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0828 03:55:04.854066     147 round_trippers.go:553] GET https://spiderpool0828034955-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
[api-check] The API server is not healthy after 4m0.000112322s

Unfortunately, an error has occurred:
	context deadline exceeded

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
could not initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:112
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:132
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:128
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:985
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:1117
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:1041
k8s.io/kubernetes/cmd/kubeadm/app.Run
	k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:47
main.main
	k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	runtime/proc.go:271
runtime.goexit
	runtime/asm_amd64.s:1695
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:128
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:985
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:1117
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:1041
k8s.io/kubernetes/cmd/kubeadm/app.Run
	k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:47
main.main
	k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	runtime/proc.go:271
runtime.goexit
	runtime/asm_amd64.s:1695
make[3]: *** [Makefile:141: setup_kind] Error 1
make[3]: Leaving directory '/home/runner/work/spiderpool/spiderpool/test'
make[2]: *** [Makefile:31: kind-init] Error 2
make[2]: Leaving directory '/home/runner/work/spiderpool/spiderpool/test'
make[1]: *** [Makefile:299: e2e_init] Error 2
make[1]: Leaving directory '/home/runner/work/spiderpool/spiderpool'
make: *** [Makefile:323: e2e_init_underlay] Error 2
debug
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2024-08-28 03:44:20 UTC; 10min ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 944 (dockerd)
      Tasks: 15
     Memory: 3.3G
        CPU: 1min 2.651s
     CGroup: /system.slice/docker.service
             └─944 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Aug 28 03:49:55 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:55.198525974Z" level=warning msg="failed to prune image docker.io/library/node@sha256:17514b20acef0e79691285e7a59f3ae561f7a1702a9adc72a515aef23f326729: No such image: node@sha256:17514b20acef0e79691285e7a59f3ae561f7a1702a9adc72a515aef23f326729"
Aug 28 03:50:56 fv-az1121-872 dockerd[944]: time="2024-08-28T03:50:56.925149205Z" level=error msg="Could not add route to IPv6 network fc00:f853:ccd:e793::1/64 via device br-8dd48bfe0bd5: network is down"
Aug 28 03:50:59 fv-az1121-872 dockerd[944]: time="2024-08-28T03:50:59.952501557Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers"
Aug 28 03:50:59 fv-az1121-872 dockerd[944]: time="2024-08-28T03:50:59.955026943Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers"
Aug 28 03:51:00 fv-az1121-872 dockerd[944]: 2024/08/28 03:51:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
Aug 28 03:51:00 fv-az1121-872 dockerd[944]: 2024/08/28 03:51:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
Aug 28 03:55:05 fv-az1121-872 dockerd[944]: time="2024-08-28T03:55:05.494924664Z" level=info msg="ignoring event" container=ecb853f2a758d4db63021b436263f58433555caa5d9a6e6c3bc22927e64525be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 03:55:05 fv-az1121-872 dockerd[944]: time="2024-08-28T03:55:05.510448540Z" level=warning msg="ShouldRestart failed, container will not be restarted" container=ecb853f2a758d4db63021b436263f58433555caa5d9a6e6c3bc22927e64525be daemonShuttingDown=false error="restart canceled" execDuration=4m5.243856638s exitStatus="{137 2024-08-28 03:55:05.485646734 +0000 UTC}" hasBeenManuallyStopped=true restartCount=0
Aug 28 03:55:05 fv-az1121-872 dockerd[944]: time="2024-08-28T03:55:05.658906964Z" level=info msg="ignoring event" container=a9b5fd85a6cc6f8802273902967af69ed821cd65a10c2d494c2f7fb938b632f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 03:55:05 fv-az1121-872 dockerd[944]: time="2024-08-28T03:55:05.670040297Z" level=warning msg="ShouldRestart failed, container will not be restarted" container=a9b5fd85a6cc6f8802273902967af69ed821cd65a10c2d494c2f7fb938b632f2 daemonShuttingDown=false error="restart canceled" execDuration=4m5.4029529s exitStatus="{137 2024-08-28 03:55:05.646891847 +0000 UTC}" hasBeenManuallyStopped=true restartCount=0
Aug 21 11:19:06 fv-az1121-872 dockerd[1143]: time="2024-08-21T11:19:06.233618185Z" level=info msg="Processing signal 'terminated'"
Aug 21 11:19:06 fv-az1121-872 dockerd[1143]: time="2024-08-21T11:19:06.235825707Z" level=info msg="Daemon shutdown complete"
Aug 21 11:19:06 fv-az1121-872 systemd[1]: Stopping Docker Application Container Engine...
░░ Subject: A stop job for unit docker.service has begun execution
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A stop job for unit docker.service has begun execution.
░░ 
░░ The job identifier is 1142.
Aug 21 11:19:06 fv-az1121-872 systemd[1]: docker.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit docker.service has successfully entered the 'dead' state.
Aug 21 11:19:06 fv-az1121-872 systemd[1]: Stopped Docker Application Container Engine.
░░ Subject: A stop job for unit docker.service has finished
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A stop job for unit docker.service has finished.
░░ 
░░ The job identifier is 1142 and the job result is done.
-- Boot 59a5ee4d603143a58f3f7491e5c4b811 --
Aug 28 03:44:19 fv-az1121-872 systemd[1]: Starting Docker Application Container Engine...
░░ Subject: A start job for unit docker.service has begun execution
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit docker.service has begun execution.
░░ 
░░ The job identifier is 170.
Aug 28 03:44:19 fv-az1121-872 dockerd[944]: time="2024-08-28T03:44:19.626609452Z" level=info msg="Starting up"
Aug 28 03:44:19 fv-az1121-872 dockerd[944]: time="2024-08-28T03:44:19.629789148Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf"
Aug 28 03:44:19 fv-az1121-872 dockerd[944]: time="2024-08-28T03:44:19.792506276Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Aug 28 03:44:19 fv-az1121-872 dockerd[944]: time="2024-08-28T03:44:19.856106178Z" level=info msg="Loading containers: start."
Aug 28 03:44:20 fv-az1121-872 dockerd[944]: time="2024-08-28T03:44:20.261192130Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 28 03:44:20 fv-az1121-872 dockerd[944]: time="2024-08-28T03:44:20.300261203Z" level=info msg="Loading containers: done."
Aug 28 03:44:20 fv-az1121-872 dockerd[944]: time="2024-08-28T03:44:20.319212407Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 28 03:44:20 fv-az1121-872 dockerd[944]: time="2024-08-28T03:44:20.319293469Z" level=info msg="Docker daemon" commit=8e96db1 containerd-snapshotter=false storage-driver=overlay2 version=26.1.3
Aug 28 03:44:20 fv-az1121-872 dockerd[944]: time="2024-08-28T03:44:20.319494905Z" level=info msg="Daemon has completed initialization"
Aug 28 03:44:20 fv-az1121-872 systemd[1]: Started Docker Application Container Engine.
░░ Subject: A start job for unit docker.service has finished successfully
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit docker.service has finished successfully.
░░ 
░░ The job identifier is 170.
Aug 28 03:44:20 fv-az1121-872 dockerd[944]: time="2024-08-28T03:44:20.344827212Z" level=info msg="API listen on /run/docker.sock"
Aug 28 03:49:49 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:49.757074066Z" level=warning msg="failed to prune image docker.io/library/alpine@sha256:5292533eb4efd4b5cf35e93b5a2b7d0e07ea193224c49446c7802c19ee4f2da5: No such image: alpine@sha256:5292533eb4efd4b5cf35e93b5a2b7d0e07ea193224c49446c7802c19ee4f2da5"
Aug 28 03:49:50 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:50.088683489Z" level=warning msg="failed to prune image docker.io/library/debian@sha256:58ce6f1271ae1c8a2006ff7d3e54e9874d839f573d8009c20154ad0f2fb0a225: No such image: debian@sha256:58ce6f1271ae1c8a2006ff7d3e54e9874d839f573d8009c20154ad0f2fb0a225"
Aug 28 03:49:50 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:50.455682783Z" level=warning msg="failed to prune image docker.io/library/node@sha256:a7ff16657263663c1e92ba3060cdbba0e77329a0a4cb3c27bbbbe90c6e20bd87: No such image: node@sha256:a7ff16657263663c1e92ba3060cdbba0e77329a0a4cb3c27bbbbe90c6e20bd87"
Aug 28 03:49:50 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:50.701012544Z" level=warning msg="failed to prune image docker.io/library/ubuntu@sha256:adbb90115a21969d2fe6fa7f9af4253e16d45f8d4c1e930182610c4731962658: No such image: ubuntu@sha256:adbb90115a21969d2fe6fa7f9af4253e16d45f8d4c1e930182610c4731962658"
Aug 28 03:49:50 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:50.908853051Z" level=warning msg="failed to prune image docker.io/library/ubuntu@sha256:fa17826afb526a9fc7250e0fbcbfd18d03fe7a54849472f86879d8bf562c629e: No such image: ubuntu@sha256:fa17826afb526a9fc7250e0fbcbfd18d03fe7a54849472f86879d8bf562c629e"
Aug 28 03:49:50 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:50.946655267Z" level=warning msg="failed to prune image docker.io/library/alpine@sha256:452e7292acee0ee16c332324d7de05fa2c99f9994ecc9f0779c602916a672ae4: No such image: alpine@sha256:452e7292acee0ee16c332324d7de05fa2c99f9994ecc9f0779c602916a672ae4"
Aug 28 03:49:50 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:50.986313986Z" level=warning msg="failed to prune image docker.io/moby/buildkit@sha256:e0b10610709509aded9b101a61a090e24a5161f46d5eb8a479297fe96aa5d8ac: No such image: moby/buildkit@sha256:e0b10610709509aded9b101a61a090e24a5161f46d5eb8a479297fe96aa5d8ac"
Aug 28 03:49:51 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:51.179512202Z" level=warning msg="failed to prune image docker.io/library/node@sha256:eb8101caae9ac02229bd64c024919fe3d4504ff7f329da79ca60a04db08cef52: No such image: node@sha256:eb8101caae9ac02229bd64c024919fe3d4504ff7f329da79ca60a04db08cef52"
Aug 28 03:49:51 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:51.215085781Z" level=warning msg="failed to prune image docker.io/library/alpine@sha256:ef813b2faa3dd1a37f9ef6ca98347b72cd0f55e4ab29fb90946f1b853bf032d9: No such image: alpine@sha256:ef813b2faa3dd1a37f9ef6ca98347b72cd0f55e4ab29fb90946f1b853bf032d9"
Aug 28 03:49:52 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:52.826173767Z" level=warning msg="failed to prune image docker.io/library/node@sha256:d3c8ababe9566f9f3495d0d365a5c4b393f607924647dd52e75bf4f8a54effd3: No such image: node@sha256:d3c8ababe9566f9f3495d0d365a5c4b393f607924647dd52e75bf4f8a54effd3"
Aug 28 03:49:53 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:53.150986377Z" level=warning msg="failed to prune image docker.io/library/debian@sha256:0bb606aad3307370c8b4502eff11fde298e5b7721e59a0da3ce9b30cb92045ed: No such image: debian@sha256:0bb606aad3307370c8b4502eff11fde298e5b7721e59a0da3ce9b30cb92045ed"
Aug 28 03:49:53 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:53.186242097Z" level=warning msg="failed to prune image docker.io/library/alpine@sha256:95c16745f100f44cf9a0939fd3f357905f845f8b6fa7d0cde0e88c9764060185: No such image: alpine@sha256:95c16745f100f44cf9a0939fd3f357905f845f8b6fa7d0cde0e88c9764060185"
Aug 28 03:49:54 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:54.656488769Z" level=warning msg="failed to prune image docker.io/library/node@sha256:f77a1aef2da8d83e45ec990f45df50f1a286c5fe8bbfb8c6e4246c6389705c0b: No such image: node@sha256:f77a1aef2da8d83e45ec990f45df50f1a286c5fe8bbfb8c6e4246c6389705c0b"
Aug 28 03:49:54 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:54.970659613Z" level=warning msg="failed to prune image docker.io/library/node@sha256:a1f9d027912b58a7c75be7716c97cfbc6d3099f3a97ed84aa490be9dee20e787: No such image: node@sha256:a1f9d027912b58a7c75be7716c97cfbc6d3099f3a97ed84aa490be9dee20e787"
Aug 28 03:49:55 fv-az1121-872 dockerd[944]: time="2024-08-28T03:49:55.198525974Z" level=warning msg="failed to prune image docker.io/library/node@sha256:17514b20acef0e79691285e7a59f3ae561f7a1702a9adc72a515aef23f326729: No such image: node@sha256:17514b20acef0e79691285e7a59f3ae561f7a1702a9adc72a515aef23f326729"
Aug 28 03:50:56 fv-az1121-872 dockerd[944]: time="2024-08-28T03:50:56.925149205Z" level=error msg="Could not add route to IPv6 network fc00:f853:ccd:e793::1/64 via device br-8dd48bfe0bd5: network is down"
Aug 28 03:50:59 fv-az1121-872 dockerd[944]: time="2024-08-28T03:50:59.952501557Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers"
Aug 28 03:50:59 fv-az1121-872 dockerd[944]: time="2024-08-28T03:50:59.955026943Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers"
Aug 28 03:51:00 fv-az1121-872 dockerd[944]: 2024/08/28 03:51:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
Aug 28 03:51:00 fv-az1121-872 dockerd[944]: 2024/08/28 03:51:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
Aug 28 03:55:05 fv-az1121-872 dockerd[944]: time="2024-08-28T03:55:05.494924664Z" level=info msg="ignoring event" container=ecb853f2a758d4db63021b436263f58433555caa5d9a6e6c3bc22927e64525be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 03:55:05 fv-az1121-872 dockerd[944]: time="2024-08-28T03:55:05.510448540Z" level=warning msg="ShouldRestart failed, container will not be restarted" container=ecb853f2a758d4db63021b436263f58433555caa5d9a6e6c3bc22927e64525be daemonShuttingDown=false error="restart canceled" execDuration=4m5.243856638s exitStatus="{137 2024-08-28 03:55:05.485646734 +0000 UTC}" hasBeenManuallyStopped=true restartCount=0
Aug 28 03:55:05 fv-az1121-872 dockerd[944]: time="2024-08-28T03:55:05.658906964Z" level=info msg="ignoring event" container=a9b5fd85a6cc6f8802273902967af69ed821cd65a10c2d494c2f7fb938b632f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 03:55:05 fv-az1121-872 dockerd[944]: time="2024-08-28T03:55:05.670040297Z" level=warning msg="ShouldRestart failed, container will not be restarted" container=a9b5fd85a6cc6f8802273902967af69ed821cd65a10c2d494c2f7fb938b632f2 daemonShuttingDown=false error="restart canceled" execDuration=4m5.4029529s exitStatus="{137 2024-08-28 03:55:05.646891847 +0000 UTC}" hasBeenManuallyStopped=true restartCount=0
restart docker before trying again
Failed to restart docker.service: Interactive authentication required.
See system logs and 'systemctl status docker.service' for details.
WARNING: Retry command threw the error Command failed: echo "debug"
systemctl status docker
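
Two follow-ups that the log itself suggests, sketched below under assumptions (the cluster name and output paths are hypothetical; /kind/kubeadm.conf is the path reported by kubeadm above): keep the failed nodes with kind's --retain flag so their logs can be exported, and run the kubeadm config migrate command that the v1beta3 deprecation warnings recommend.

```bash
# NOTE: the cluster name 'spiderpool-debug' is arbitrary, for illustration only.
# Recreate the failure but keep the nodes for inspection (kind normally
# deletes them, as in the "Deleted nodes:" line above).
kind create cluster --name spiderpool-debug \
  --image docker.io/kindest/node:v1.31.0 --retain || true

# Collect kubelet, containerd, and static-pod logs from the retained nodes.
kind export logs ./kind-debug-logs --name spiderpool-debug

# Follow the deprecation warnings: migrate the kind-generated kubeadm config
# (kubeadm.k8s.io/v1beta3) to the newer API version understood by kubeadm 1.31.
docker exec spiderpool-debug-control-plane \
  kubeadm config migrate --old-config /kind/kubeadm.conf --new-config /tmp/kubeadm-new.conf
docker exec spiderpool-debug-control-plane cat /tmp/kubeadm-new.conf
```

The deprecation messages are warnings only; the exported kubelet and apiserver logs are what should show why the API server never became healthy within the 4m window.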

@ty-dc
Collaborator

ty-dc commented Oct 22, 2024

same as #4110

ty-dc closed this as completed on Oct 22, 2024