Nginx HA #1674
Comments
@dkirrane what do you mean?
@aledbf to clarify, I mean external requests coming from outside the Kubernetes cluster through the ingress. Is it possible to give the nginx ingress controller an IP so that if one pod dies the other replica takes over the IP? At least that's how VRRP in NGINX Plus works, using a Virtual IP.
@dkirrane Wouldn't it be an option to run your Ingress Controller on specific nodes and use something like keepalived to share an external node IP?
@rikatz do you mean running 1 ingress controller per node via a DaemonSet? Any examples of running a keepalived pod and assigning nginx an IP one can access from outside the cluster?
@dkirrane Yeah, that should work :) I've never run keepalived inside a container, just giving you some ideas.
@dkirrane you can use https://github.com/aledbf/kube-keepalived-vip
Closing. Please reopen if you have more questions.
Updated keepalived example https://github.com/kubernetes/contrib/tree/master/keepalived-vip |
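With kube-keepalived-vip, the VIP-to-service mapping is driven by a ConfigMap whose keys are virtual IPs and whose values are `namespace/service` pairs. A minimal sketch, assuming a hypothetical VIP `10.4.0.50` and a service named `echoheaders` in the `default` namespace (substitute your own VIP and service):

```yaml
# Hypothetical example: maps VIP 10.4.0.50 to the service default/echoheaders.
# The keepalived-vip DaemonSet watches this ConfigMap and announces the VIP
# via VRRP; if the node holding the VIP dies, another replica takes it over.
apiVersion: v1
kind: ConfigMap
metadata:
  name: vip-configmap
  namespace: default
data:
  10.4.0.50: default/echoheaders
```

Pointing the key at the nginx ingress controller's own service would give the controller a single failover IP reachable from outside the cluster.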
QUESTION or FEATURE REQUEST
I'd like to make the nginx ingress controller HA: have one IP that can be used from outside the Kubernetes cluster to communicate through the ingress to my services/pods.
I can add 2 nginx ingress controller replicas, but how can I share a VIP between them?
NGINX Plus uses keepalived + VRRP with a Virtual IP to do this. The Active nginx holds the VIP; if it dies, the Backup takes the VIP and becomes the new Active:
https://www.nginx.com/products/nginx/high-availability/
But how can I do this in Kubernetes?
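For reference, outside Kubernetes the NGINX Plus approach boils down to a keepalived VRRP instance on each node sharing one virtual IP. A minimal sketch, assuming interface `eth0` and a hypothetical VIP `192.168.1.100` (values are illustrative, not from this issue):

```
# /etc/keepalived/keepalived.conf on the Active node
# (the Backup node uses state BACKUP and a lower priority, e.g. 90)
vrrp_instance VI_1 {
    state MASTER            # this node starts as the VIP holder
    interface eth0          # interface the VIP is bound to
    virtual_router_id 51    # must match on all nodes in the VRRP group
    priority 100            # highest priority wins the VIP
    virtual_ipaddress {
        192.168.1.100       # the shared virtual IP
    }
}
```

The question below is how to achieve the same failover semantics for the ingress controller inside Kubernetes.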
NGINX Ingress controller version:
0.9.0-beta.15
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
CentOS Linux release 7.4.1708 (Core)
Cloud provider or hardware configuration:
Kubernetes cluster created on VMware with 4 CentOS 7 VMs using kubeadm: 1 master, 3 minions.
OS (e.g. from /etc/os-release):
CentOS Linux release 7.4.1708 (Core)
Kernel (e.g. uname -a):
Linux master 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Install tools:
kubeadm version: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Others:
kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}