
error: the server doesn't have a resource type "machinedeployment" #447

Closed
Kube-ASY opened this issue May 14, 2019 · 4 comments · Fixed by #450
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor.
Milestone

Comments

@Kube-ASY

What happened:
I tried to delete a cluster with the following config:

apiVersion: kubeone.io/v1alpha1
kind: KubeOneCluster
name: demo
versions:
  kubernetes: "1.13.2"
cloudProvider:
  name: "none"
hosts:
- publicAddress: '1.2.3.4'
  privateAddress: '172.18.0.1'
  sshPort: 22 # can be left out if using the default (22)
  sshUsername: ubuntu
machineController:
  deploy: false
  provider: "none"

This failed with the following issue:

~/go/bin/kubeone reset config.yaml -v
INFO[08:22:07 CEST] Resetting kubeadm…
INFO[08:22:07 CEST] Destroying worker nodes…                      node=172.18.0.1
+ kubectl cluster-info
+ 1> /dev/null
+ kubectl annotate --all --overwrite node 'kubermatic.io/skip-eviction=true'
node/master0 annotated
+ kubectl delete machinedeployment -n kube-system --all
error: the server doesn't have a resource type "machinedeployment"
Error: failed to exec command: export "PATH=$PATH:/sbin:/usr/local/bin:/opt/bin"

set -xeu pipefail


if kubectl cluster-info > /dev/null; then
  kubectl annotate --all --overwrite node kubermatic.io/skip-eviction=true
  kubectl delete machinedeployment -n "kube-system" --all
  kubectl delete machineset -n "kube-system" --all
  kubectl delete machine -n "kube-system" --all

  for try in {1..30}; do
    if kubectl get machine -n "kube-system" 2>&1 | grep -q  'No resources found.'; then
      exit 0
    fi
    sleep 10s
  done

  echo "Error: Couldn't delete all machines!"
  exit 1
fi
: Process exited with status 1
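The script in the log above deletes `machinedeployment` objects unconditionally, which fails when the machine-controller (and therefore its CRDs) was never deployed. A minimal sketch of a guard, assuming the machine-controller registers the cluster-api CRD name `machinedeployments.cluster.k8s.io` (an assumption, verify with `kubectl get crd` on your cluster); `kubectl` is stubbed here so the sketch runs standalone:

```shell
#!/bin/sh
# Sketch only: skip machine cleanup when the machine-controller CRDs are
# absent (e.g. machineController.deploy=false). The CRD name below is an
# assumption based on cluster-api conventions.

# Stub so the sketch runs without a cluster: pretend the CRD is missing.
# On a real control-plane node, remove this function.
kubectl() { return 1; }

if kubectl get crd machinedeployments.cluster.k8s.io >/dev/null 2>&1; then
  kubectl delete machinedeployment -n kube-system --all
  kubectl delete machineset -n kube-system --all
  kubectl delete machine -n kube-system --all
else
  # prints: machine-controller CRDs not found; skipping worker cleanup
  echo "machine-controller CRDs not found; skipping worker cleanup"
fi
```

With the guard in place, `reset` on a cluster built with `machineController.deploy: false` would fall through to the skip branch instead of aborting.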

What is the expected behavior:
Deletion proceeds.

How to reproduce the issue:
Try to delete a cluster when the following is set in config.yaml:

machineController:
  deploy: false
  provider: "none"

Anything else we need to know?

Information about the environment:
KubeOne version (kubeone version): v0.6.0-6-g6407139
Operating system: CentOS Linux release 7.6.1810 (Core)
Provider you're deploying cluster on: Bare-metal
Operating system you're deploying on: CentOS Linux release 7.6.1810 (Core)

@kron4eg
Member

kron4eg commented May 14, 2019

Hey, thanks for the report!

We're already aware of this issue.
Closing as a duplicate of #398.

@kron4eg kron4eg closed this as completed May 14, 2019
@xmudrii
Member

xmudrii commented May 14, 2019

@Kube-ASY Thanks for reporting the issue!

This is a known issue and we already have a ticket open for it: #398. Since this issue has more details than #398, I'll leave this one open and close the other. We'll get this fixed for the v0.7.0 release, due in the upcoming weeks. As a workaround, you can skip destroying workers with the --destroy-workers=false flag.

@xmudrii xmudrii added this to the v0.7.0 milestone May 14, 2019
@xmudrii xmudrii reopened this May 14, 2019
@kron4eg kron4eg modified the milestones: v0.7.0, v0.8.0 May 14, 2019
@kron4eg
Member

kron4eg commented May 14, 2019

Changing the milestone to 0.8, as the original issue was in it.

@xmudrii
Member

xmudrii commented May 14, 2019

Considering this issue shouldn't take too long to fix, I think we should get it done for 0.7, so I'll move it back and assign it to myself.

/assign
