Can't spin up a working cluster with default settings #1803
Possibly being affected by this: kubernetes/kubeadm#484
I deployed an environment by hand with the following settings:
Then I tried to list nodes with my generated credentials:
The UI works as well. If you want to access the UI but not enable kube_basic_auth (or can't, because you are using kubeadm mode), you can access the UI with:
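The usual way to reach the dashboard without basic auth is through the API server's authenticated proxy. A minimal sketch, assuming a working kubeconfig on your machine (the service name and namespace are the defaults the dashboard add-on used at the time and may differ on your cluster):

```shell
# Start a local proxy to the API server; it reuses the credentials
# from your kubeconfig, so no extra auth is needed in the browser.
kubectl proxy --port=8001 &

# The dashboard is then reachable locally, without exposing it publicly:
#   http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
```

This keeps the dashboard off the public ELB entirely, which sidesteps the basic-auth question.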
@mattymo Thanks for this. Like I said, I used all defaults and that used to give me what I wanted. I guess things have changed and I will need to look through those vars files from now on. I also need to look up kubeadm, as I don't know what that is. I will recreate the cluster soon with those settings. Thanks.
We disabled basic auth because it's a much weaker way to protect an account with admin privileges than a proper x509 cert.
@mattymo I need to look into it more. I have an ELB pointed at my masters on port 6883 (or whichever port the API is listening on), and that is how I always reached the cluster and used basic auth to hit the dashboard. If I can still reach the UI and run commands like …
@mattymo Ahh, look at what user floreks wrote here: kubernetes/kubernetes#31665. I will try to install that x509 cert in my browser and see if I can reach the API and dashboard through my nginx reverse proxy, mainly because I don't want to use a VPN to hit the cluster.
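For reference, browsers generally won't import a bare PEM cert/key pair; they want a PKCS#12 bundle. A hedged sketch of the conversion with openssl (the file names are placeholders — substitute the admin cert and key your deployment actually generated; here a throwaway self-signed pair is created so the example is self-contained):

```shell
# Create a throwaway self-signed cert/key pair, standing in for the
# real admin client cert and key (placeholder names).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout admin-key.pem -out admin.pem \
  -subj "/CN=kube-admin" -days 1

# Bundle cert + key into a PKCS#12 file the browser can import.
openssl pkcs12 -export \
  -in admin.pem -inkey admin-key.pem \
  -out admin.p12 -name kube-admin \
  -passout pass:changeit

# Import admin.p12 into the browser's certificate store; the apiserver
# will then request the client cert during the TLS handshake.
```

Note that a plain nginx reverse proxy terminates TLS, so the client cert would have to be presented to nginx and re-sent upstream (e.g. via `proxy_ssl_certificate`), not passed through transparently.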
@shadycuz Enabling kube_basic_auth is easier than authenticating with a cert in a browser, in my experience.
@mattymo I will just do that; it's a personal cluster for fun and learning. I will set a really long random password and call it a day =). Thanks for adding that kubeconfig role to set it up for the end user. Super nice!
@mattymo Maybe I should start a new issue, but I spun up a new cluster this morning. I used …
Unfortunately I lost the Ansible output, but all tasks were changed or ok until it was time to wait for the API servers to come up. 20 tries, I think, and all 3 failed. It was checking 127.0.0.1:8080/health or something like that. I checked the host's Docker and something isn't right...
Why are you deploying Kubernetes version 1.6.7 instead of 1.8.0?
@mattymo Idk?
I guess I thought the defaults, without changing much, would give me a good cluster out of the box. I know yesterday when I launched a cluster with all defaults it came up okay, except basic auth wasn't enabled.
You shouldn't remove the group vars that are included. We can't guarantee that the role defaults are set correctly, because they're not covered by CI. I have a fix in review for the kube version issue: #1845
@mattymo What do you mean, remove group vars? I didn't? You mean I need to put my inventory in kubespray's inventory directory and run kubespray from inside the kubespray directory?
@mattymo I think most of my problems have been from not using the group vars... =/ I will stand up a new cluster and put my inventory in the proper place, and then I will submit my vars via the CLI so they take precedence over everything else, because I'm not really sure where I should change what.
@shadycuz You can make your own JSON or YAML file with vars and then specify it on the CLI, like:
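A minimal sketch of that workflow. The variable names below are assumptions drawn from the discussion (kube_version, kube_basic_auth, kube_api_pwd) — check your kubespray checkout's group_vars for the current spelling before relying on them:

```shell
# Keep all overrides in one file instead of editing group_vars in place.
cat > my-vars.yml <<'EOF'
# Hypothetical overrides -- verify names against your kubespray version
kube_version: v1.8.0
kube_basic_auth: true
kube_api_pwd: "change-me-to-a-long-random-password"
EOF

# Then (not run here) pass the file with -e; extra-vars have the highest
# precedence in Ansible, so they win over group_vars and role defaults:
#   ansible-playbook -b -u root -i inventory/hosts cluster.yml -e @my-vars.yml
```

This keeps your customizations in one reviewable file that survives a `git pull` of kubespray.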
@mattymo The Ansible config file inside of kubespray does not specify a path to an inventory, so dropping an inventory into …
Okay, group vars should have worked?
@mattymo The playbook ran without errors. The first thing I noticed was no artifacts dir? I used … and …
@mattymo Well, this ain't good. I don't know what has happened to kubespray, but at one point in time it actually worked with defaults out of the box with my exact setup.
@mattymo from the dashboard pod
I didn't use the @ sign...
Tried again with a new cluster
@shadycuz did you ever figure out a solution to this error you were getting? fatal: [Compute02 -> None]: FAILED! => {"attempts": 10, "changed": false, "content": "", "failed": true, "msg": "Status code was not [200]: Request failed: <urlopen error ('_ssl.c:574: The handshake operation timed out',)>", "redirected": false, "status": -1, "url": "https://localhost:2379/health"} I get the same thing sometimes for the "wait for etcd" task and every single time on the "Master | wait for the apiserver to be running" handler |
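When that handshake timeout shows up, it helps to run the same health check the playbook does, by hand, on an etcd node. A hedged sketch — the cert paths are assumptions based on where kubespray typically places etcd node certs, so adjust to what is actually on your hosts:

```shell
# Query etcd's health endpoint over TLS, the same URL the "wait for etcd"
# task polls. Paths below are assumed defaults; verify on your node.
curl --cacert /etc/ssl/etcd/ssl/ca.pem \
     --cert   /etc/ssl/etcd/ssl/node-$(hostname).pem \
     --key    /etc/ssl/etcd/ssl/node-$(hostname)-key.pem \
     https://localhost:2379/health

# A healthy member answers with {"health": "true"}. A TLS handshake
# timeout usually means etcd never came up at all -- check its logs:
#   docker logs $(docker ps -qf name=etcd)
```

If curl times out too, the problem is etcd itself (or the certs), not the Ansible task.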
@rhino5oh I did get past this, but I don't remember how. If you are using the latest code from here, it might be something different from my issue. I would ask on the Kubernetes Slack; Kubespray has a room and I always get good help from people. You might want to SSH into your etcd nodes, check the logs, and see what they say. Also, make a new issue =)
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Environment:
Scaleway (bare hardware)
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
Linux 4.4.92-mainline-rev1 x86_64
NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
Version of Ansible (ansible --version):
ansible 2.4.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/etc/ansible/module']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]
Kubespray version (commit) (git rev-parse --short HEAD): 92d0380
Network plugin used:
Default
Copy of your inventory file:
Command used to invoke ansible:
ansible-playbook -b kubespray/cluster.yml -u root -i /etc/ansible/inventory/hosts
Output of ansible run:
Anything else we need to know:
Cluster seems healthy, I can launch deployments etc. from the console of one of the three master servers. When trying to launch a dashboard using
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.6.3.yaml
everything is created normally. When trying to access it remotely using /ui, I was able to log in but received an error. That's when I noticed this user doesn't seem to be able to do anything?
Trying to reach /api/v1/nodes returns ???
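One way to narrow this down is to hit the API directly and see whether the credentials, rather than the cluster, are the problem. A sketch with placeholder host, port, user, and password:

```shell
# Query the node list straight from the apiserver. -k skips CA
# verification for a quick test only; prefer --cacert in real use.
# "master-1", "kube", and "changeit" are placeholders.
curl -k -u kube:changeit https://master-1:6443/api/v1/nodes

# A 200 with a NodeList JSON body means auth works; an "Unauthorized"
# response points at basic auth being disabled, while "Forbidden"
# points at the user lacking RBAC permissions for nodes.
```

Distinguishing 401 from 403 here tells you whether to fix authentication (kube_basic_auth, credentials) or authorization (RBAC bindings for the user).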
Maybe I am missing something? It's been a while since I stood something up with kubespray but in the past using defaults was always a sure thing.