
'make test.integration' fails in my local environment #4390

Closed
1 task done
parkjeongryul opened this issue Jul 21, 2023 · 3 comments
Labels
bug Something isn't working

Comments

parkjeongryul (Contributor) commented Jul 21, 2023

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I want to run the integration tests on my MacBook with Docker Desktop, but they fail.
(They also failed in a CentOS VM with the same error.)

I noticed that the KIC workflows run on Ubuntu runners. Do I need an Ubuntu environment to run the integration tests?

Has anyone else encountered a similar problem?

These are my logs:

$ make test.integration
INFO: container environment ready DOCKER=(Docker version 20.10.12, build e91ed57) KIND=(kind v0.12.0 go1.17.8 darwin/arm64)
(cd third_party && go mod tidy && \
                GOBIN=/Users/user/Works/jeong-ryul-park/parkjeongryul/kubernetes-ingress-controller/bin go generate -tags=third_party ./go-junit-report.go )
KONG_CLUSTER_VERSION="v1.27.1" \
                TEST_DATABASE_MODE="off" \
                GOFLAGS="-tags=integration_tests" \
                KONG_CONTROLLER_FEATURE_GATES="GatewayAlpha=true" \
                go test  \
                -v \
                -timeout "45m" \
                -parallel 8 \
                -race \
                -covermode=atomic \
                -coverpkg=./pkg/...,./internal/... \
                -coverprofile=coverage.dbless.out \
                ./test/integration | \
        /Users/user/Works/jeong-ryul-park/parkjeongryul/kubernetes-ingress-controller/bin/go-junit-report -iocopy -out /dev/null -parser gotest
INFO: setting up test environment
INFO: configuring cluster for testing environment
INFO: no existing cluster found, deploying using Kubernetes In Docker (KIND)
INFO: build a new KIND cluster with version 1.27.1
INFO: building test environment
WARNING: failure occurred, performing test cleanup
Error: tests failed: failed to create cluster 0ecc82d3-835f-4ba0-84fa-389e978bb8e3: Creating cluster "0ecc82d3-835f-4ba0-84fa-389e978bb8e3" ...
 • Ensuring node image (kindest/node:v1.27.1) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.27.1) 🖼
 • Preparing nodes 📦   ...
 ✓ Preparing nodes 📦 
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✗ Starting control-plane 🕹️
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged 0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1

Command Output: I0721 16:31:17.938797     132 initconfiguration.go:255] loading configuration from "/kind/kubeadm.conf"
W0721 16:31:17.945303     132 initconfiguration.go:332] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.27.1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0721 16:31:18.008719     132 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0721 16:31:18.345425     132 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.19.0.4 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0721 16:31:18.603366     132 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0721 16:31:18.737060     132 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0721 16:31:18.971929     132 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0721 16:31:19.306141     132 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane localhost] and IPs [172.19.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane localhost] and IPs [172.19.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0721 16:31:20.089710     132 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0721 16:31:20.308015     132 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0721 16:31:20.425150     132 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0721 16:31:20.642084     132 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0721 16:31:20.828448     132 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0721 16:31:21.018707     132 kubelet.go:67] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0721 16:31:21.186985     132 manifests.go:99] [control-plane] getting StaticPodSpecs
I0721 16:31:21.189025     132 certs.go:519] validating certificate period for CA certificate
I0721 16:31:21.189139     132 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0721 16:31:21.189146     132 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0721 16:31:21.189150     132 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0721 16:31:21.189152     132 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0721 16:31:21.189155     132 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0721 16:31:21.194862     132 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0721 16:31:21.194902     132 manifests.go:99] [control-plane] getting StaticPodSpecs
I0721 16:31:21.195255     132 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0721 16:31:21.195282     132 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0721 16:31:21.195293     132 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0721 16:31:21.195304     132 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0721 16:31:21.195313     132 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0721 16:31:21.195326     132 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0721 16:31:21.195336     132 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0721 16:31:21.196062     132 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0721 16:31:21.196199     132 manifests.go:99] [control-plane] getting StaticPodSpecs
I0721 16:31:21.196432     132 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0721 16:31:21.196998     132 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0721 16:31:21.197868     132 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
I0721 16:31:21.199644     132 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0721 16:31:21.199992     132 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
I0721 16:31:21.201168     132 loader.go:373] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0721 16:31:21.222662     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 10 milliseconds
I0721 16:31:21.724876     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0721 16:31:22.224557     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0721 16:31:22.735919     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 2 milliseconds
I0721 16:31:23.224565     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0721 16:31:23.723960     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0721 16:31:24.225182     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0721 16:31:24.724727     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0721 16:31:25.224928     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0721 16:31:25.724176     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0721 16:31:26.224505     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0721 16:31:26.724609     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0721 16:31:27.232722     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0721 16:31:27.731935     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0721 16:31:28.231840     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0721 16:31:28.733265     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0721 16:31:29.232074     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0721 16:31:29.731136     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0721 16:31:30.231663     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
...
I0721 16:35:21.115513     132 round_trippers.go:553] GET https://0ecc82d3-835f-4ba0-84fa-389e978bb8e3-control-plane:6443/healthz?timeout=10s  in 1 milliseconds

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
        cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:108
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:111
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_arm64.s:1172
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:111
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_arm64.s:1172
: exit status 1
FAIL    github.com/kong/kubernetes-ingress-controller/v2/test/integration       267.048s
FAIL

Expected Behavior

The integration tests should run in my local environment.

Steps To Reproduce

1. Start Docker.
2. Run make test.integration (a reproduction sketch follows below).
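
A minimal reproduction sketch, assuming Docker Desktop is running and Go is installed; the clone URL is inferred from the module path in the failure output above:

```
# Clone the controller repository and run the integration test target
git clone https://github.com/Kong/kubernetes-ingress-controller.git
cd kubernetes-ingress-controller
make test.integration
```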

Kong Ingress Controller version

main branch

Kubernetes version

DOCKER=(Docker version 20.10.12, build e91ed57) KIND=(kind v0.12.0 go1.17.8 darwin/arm64)

Anything else?

No response

parkjeongryul added the bug label Jul 21, 2023
rainest (Contributor) commented Jul 21, 2023

The environment setup is unable to start a KIND cluster.

You should review your cluster setup logs after starting a cluster independently of the KIC tests (by running kind create cluster; see the sketch below) and check with KIND support if you cannot determine a fix yourself.
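
A minimal debugging sketch, assuming kind is on your PATH; the cluster name kic-debug and the ./kind-logs output directory are arbitrary choices:

```
# Create a standalone KIND cluster, outside the KIC test harness
kind create cluster --name kic-debug

# If creation fails, export node, kubelet, and container logs for inspection
kind export logs ./kind-logs --name kic-debug

# Remove the cluster when done
kind delete cluster --name kic-debug
```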

If you can create a KIND cluster using the kind command but cannot start one with our test scripts, please file an issue against the test framework repo.

rainest closed this as completed Jul 21, 2023
parkjeongryul (Contributor, Author) commented

I can create a KIND cluster with kind create cluster.

$ kind create cluster --name "test-cluster"
Creating cluster "test-cluster" ...
 ✓ Ensuring node image (kindest/node:v1.23.4) 🖼 
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-test-cluster"
You can now use your cluster with:

kubectl cluster-info --context kind-test-cluster

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/

> please file an issue against the test framework repo

Thanks! I will do this.

parkjeongryul (Contributor, Author) commented Jul 24, 2023

FYI

macOS

I found that KTF doesn't support macOS or Windows:
https://github.com/Kong/kubernetes-testing-framework

Linux (RHEL 7)

I found that the latest KIND version (v0.20.0) has an issue with RHEL 7:
kubernetes-sigs/kind#3311

I downgraded KIND to v0.19.0 and the tests succeeded (a sketch follows below).
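
A minimal sketch of the downgrade, assuming Go is installed (go install places the kind binary in $(go env GOPATH)/bin):

```
# Pin KIND to v0.19.0 to work around the RHEL 7 issue seen with v0.20.0
go install sigs.k8s.io/kind@v0.19.0

# Confirm the active version on PATH
kind version
```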
