kubernetes client does not support InsecureSkipVerify #876
Hey @jonaz @emilevauge I can't think of any valid use case for this, and the Kubernetes client will actually prevent you from going insecure by raising an error if you try to set the Insecure flag. A potential (not sure if useful at all) use case could be an "out of cluster" Kubernetes provider reading a kubeconfig file (just like https://github.com/kubernetes/client-go/blob/124670e99da15091e13916f0ad4b2b2df2a39cd5/examples/out-of-cluster/main.go#L36).
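For context on the "out of cluster" path mentioned above: a kubeconfig file already has a per-cluster field that disables TLS verification, which is roughly the behavior this issue asks for. A minimal sketch (cluster name, server URL, and user entry are all illustrative, not from this thread):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: example
  cluster:
    server: https://apiserver-lb:6443
    # Skips verification of the API server's certificate chain and SANs
    # for this cluster entry only.
    insecure-skip-tls-verify: true
contexts:
- name: example
  context:
    cluster: example
    user: admin
current-context: example
users:
- name: admin
  user:
    token: <redacted>
```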
I agree. At the moment the k8s provider implies that Traefik is running in cluster, with credentials and CA provided by the service account; running Traefik externally makes a lot of things uncertain networking-wise. Unless there is a strong use case to support this, in a real cluster it seems irrelevant.
My use case was Traefik running inside a cluster, with an HAProxy in front of the Kubernetes API servers; the common name I dialed was not included in the SANs of the API server certs, so I needed Traefik to work anyway. My SANs looked like this: But my service for the HAProxy was named apiserver-lb, which was not included in the SAN list. I solved it by running Traefik on hostNetwork instead, so it could access the HAProxy on localhost, which was included in the SANs.
Running in cluster you shouldn't need HAProxy; kube-proxy should work just fine... if kube-dns is running, kubernetes.default should route correctly...
That's not true. Kubernetes does not reconfigure the service LB when a master node is down. That's why the official HA guide suggests setting up an LB in front of the API servers; otherwise every third request to the cluster will fail. There is a pretty old bug report for this on the Kubernetes issue tracker.
Like the other providers, we should have a `--kubernetes.tls.insecureskipverify` flag, which makes the client in `provider/k8s/client.go` set `c.tls.InsecureSkipVerify`.