
Kubernetes API cannot reach Kubernetes Dashboard #2673

Closed
bpasson opened this issue Nov 26, 2016 · 3 comments

bpasson commented Nov 26, 2016

In my setup I have a master and a node, both with a public IP and connected to each other using a VPN. Kubernetes was deployed using kubeadm and Weave Net was deployed using https://git.io/weave-kube. kubeadm was run with the --api-advertise-addresses parameter set to only the VPN-assigned address of the master.
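
For context, the deployment boiled down to something like this (the VPN address shown is just a placeholder; the flag name is the one kubeadm used at the time):

$ kubeadm init --api-advertise-addresses=10.8.0.1   # 10.8.0.1 = placeholder for the master's VPN address
$ kubectl apply -f https://git.io/weave-kube        # install Weave Net as the pod network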

This all works: the master and node can see each other across the VPN. Weave starts and both sides find each other using the public IPs (eth0); the two machines are not connected to the same switch.

This gives us the following networking situation:

Master          Node
eth0 (IPv4)     eth0 (IPv4)
tun0 (VPN)      tun0 (VPN)
weave           weave

Next I deployed the Kubernetes Dashboard, which was scheduled on the node. The dashboard starts and connects to the API server on the master.

To access the dashboard I use the following command on the master node:

curl -v -L http://127.0.0.1:8080/ui

After a while it fails with:

Error: 'dial tcp 10.32.0.1:9090: i/o timeout'

After some searching I noticed the weave interface on the master node had no address assigned. I had a chat with @bboreham, and doing a manual local expose in the Weave pod on the master solved my problem. He told me Weave only assigns an address to the interface if a pod on that host uses the pod network. In my case only host-network pods were running on the master, so no address was assigned.
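
A quick way to check whether a node is affected is to look at the weave bridge directly (the interface name is the Weave Net default):

$ ip -4 addr show weave

On an affected node this prints no inet address at all; after a local expose it shows an address from the pod network range (10.32.0.0/12 by default).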

This leads to the problem that the Kubernetes API server cannot reach the pod network where the Kubernetes Dashboard pod runs, so requests proxied through it to the dashboard (10.32.0.1:9090) time out.

The setup consists of two unremarkable Ubuntu 16.04 machines. uname -a reports for both:

Linux 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

I hope this report helps.


je2ryw commented Nov 27, 2016

@bpasson @bboreham could you share the manual steps for exposing the Kubernetes service (API server)? I ran into a similar situation:

  • 3 Kubernetes master nodes running inside a private network (CoreOS 1185.3.0/stable + hyperkube + weave-kube)
  • 2 worker nodes (CoreOS 1185.3.0/stable + hyperkube + weave-kube) reachable over the public Internet

The weave-npc containers running on the worker nodes are unable to connect to the Kubernetes apiserver service (e.g. 10.13.0.1), which causes all Weave network connections (only inside pod containers, not on the host OS) to be blocked by the weave-npc container.

kubectl logs -c weave-npc weave-net-xxxx

E1127 03:55:02.816764    1955 reflector.go:214] github.com/weaveworks/weave/vendor/k8s.io/client-go/tools/cache/reflector.go:109: Failed to list *v1.Namespace: Get https://10.16.0.1:443/api/v1/namespaces?resourceVersion=0: dial tcp 10.16.0.1:443: i/o timeout

/home/weave # curl -kv http://localhost:6781/metrics

...
# HELP weavenpc_blocked_connections_total Connection attempts blocked by policy controller.
# TYPE weavenpc_blocked_connections_total counter
weavenpc_blocked_connections_total{dport="53",protocol="udp"} 50190
weavenpc_blocked_connections_total{dport="80",protocol="tcp"} 11


bpasson commented Nov 27, 2016

Log in to the machine where the weave interface has no address assigned and use docker ps to locate the weaveworks/weave-kube container. Once you have found it, open a shell in it as follows:

$ docker exec -it <container-id> sh

Inside the container you can then run the following to get an address assigned to the interface:

# /home/weave/weave --local expose

Keep in mind that if the machine reboots you need to do this again. In my setup I created a DaemonSet running a very small HTTP server on the pod network, which forces Weave to do a local expose on every node; a sketch follows.
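
For reference, a minimal sketch of such a DaemonSet (the name and image are placeholders and the apiVersion matches Kubernetes of that era; depending on how your master is tainted you may also need a toleration so a pod gets scheduled there):

$ kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: weave-expose-kicker
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: weave-expose-kicker
    spec:
      # deliberately NOT hostNetwork: the point is to run one pod per node
      # on the pod network so the Weave CNI plugin runs and exposes the bridge
      containers:
      - name: http
        image: nginx:alpine
        ports:
        - containerPort: 80
EOF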

bboreham commented

Hi, sorry, this was most likely an unintended consequence of #2637, which moved the expose call inside the CNI plugin. You are getting this symptom because the plugin never runs if no pods on the node use the pod network.

Therefore another workaround is to arrange for some pod to run on the node each time it boots; it doesn't matter whether it then keeps running or finishes. It does need to use the pod network, though; all of Kubernetes' own pods on the master run on the host network.
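
If you only need a one-off fix after a boot rather than a permanent DaemonSet, any pod that uses the pod network and simply exits is enough to trigger the plugin. A sketch, with the node name left as a placeholder:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: expose-kick
spec:
  nodeName: <name-of-the-affected-node>  # pin to the node whose weave bridge has no address
  restartPolicy: Never
  containers:
  - name: kick
    image: busybox
    command: ["true"]                     # exits immediately; the CNI setup has run by then
EOF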
