
Failed to use kind to pull up the k8s cluster of version v1.31.0. #3897

Closed

github-actions bot opened this issue Aug 18, 2024 · 3 comments

Comments

@github-actions
action url: https://github.com/spidernet-io/spiderpool/actions/runs/10443386041

@ty-dc
Collaborator

ty-dc commented Aug 19, 2024

ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged spider-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0819 07:41:42.211599     192 initconfiguration.go:261] loading configuration from "/kind/kubeadm.conf"
W0819 07:41:42.213022     192 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0819 07:41:42.214074     192 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0819 07:41:42.214603     192 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "JoinConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0818 20:16:22.316460     140 round_trippers.go:553] GET https://spiderpool0818201108-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0818 20:16:22.816477     140 round_trippers.go:553] GET https://spiderpool0818201108-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0818 20:16:23.316440     140 round_trippers.go:553] GET https://spiderpool0818201108-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0818 20:16:23.816528     140 round_trippers.go:553] GET https://spiderpool0818201108-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0818 20:16:24.316577     140 round_trippers.go:553] GET https://spiderpool0818201108-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0818 20:16:24.816512     140 round_trippers.go:553] GET https://spiderpool0818201108-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0818 20:16:25.316548     140 round_trippers.go:553] GET https://spiderpool0818201108-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0818 20:16:25.816513     140 round_trippers.go:553] GET https://spiderpool0818201108-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
[api-check] The API server is not healthy after 4m0.000182641s

Unfortunately, an error has occurred:
	context deadline exceeded

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
could not initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:112
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:132
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:128
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:[985](https://github.com/spidernet-io/spiderpool/actions/runs/10443386041/job/28916818552#step:12:986)
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:1117
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:1041
k8s.io/kubernetes/cmd/kubeadm/app.Run
	k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:47
main.main
	k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	runtime/proc.go:271
runtime.goexit
	runtime/asm_amd64.s:1695
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:128
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:985
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:1117
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:1041
k8s.io/kubernetes/cmd/kubeadm/app.Run
	k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:47
main.main
	k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	runtime/proc.go:271
runtime.goexit
	runtime/asm_amd64.s:1695
make[3]: *** [Makefile:141: setup_kind] Error 1
make[3]: Leaving directory '/home/runner/work/spiderpool/spiderpool/test'
make[2]: *** [Makefile:31: kind-init] Error 2
make[2]: Leaving directory '/home/runner/work/spiderpool/spiderpool/test'
make[1]: *** [Makefile:299: e2e_init] Error 2
make[1]: Leaving directory '/home/runner/work/spiderpool/spiderpool'
make: *** [Makefile:323: e2e_init_underlay] Error 2

@ty-dc ty-dc changed the title Nightly K8s Matrix CI 2024-08-18: Failed In K8s version 1.31.0, spiderpool-controller Pod health check fails Aug 19, 2024
@ty-dc ty-dc changed the title In K8s version 1.31.0, spiderpool-controller Pod health check fails Failed to use kind to pull up the k8s cluster of version v1.31.0. Aug 19, 2024
@ty-dc
Collaborator

ty-dc commented Aug 19, 2024

https://github.com/cri-o/cri-o/releases

The kind/node v1.31.0 image depends on cri-o 1.31, but cri-o has not yet published a matching release.
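The mismatch can be sketched as a simple minor-version comparison. The cri-o version string below is an assumption standing in for the latest release published at the time this issue was filed:

```shell
# Minimal sketch: kind's v1.31.0 node image expects a cri-o release with a
# matching 1.31 minor version. crio_latest is an assumed placeholder for the
# newest cri-o tag available when this issue was open.
kind_k8s="1.31.0"
crio_latest="1.30.5"

# Extract "major.minor" from a semver string.
minor() { echo "$1" | cut -d. -f1-2; }

if [ "$(minor "$kind_k8s")" != "$(minor "$crio_latest")" ]; then
  echo "cri-o $(minor "$crio_latest") lags Kubernetes $(minor "$kind_k8s")"
fi
```

Until the cri-o 1.31 tag appears on the releases page linked above, the comparison fails and the CI job cannot bring up a matching cluster.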

@ty-dc
Collaborator

ty-dc commented Sep 14, 2024

same as #3937

@ty-dc ty-dc closed this as completed Sep 14, 2024