Internal Floating IP #25

Open
kaotika opened this issue Jan 29, 2020 · 3 comments

Comments

@kaotika

kaotika commented Jan 29, 2020

Some cloud and bare-metal environments provide so-called floating or failover IPs, which can be reassigned to any server as needed. Combined with a daemon that tracks server liveness, such as keepalived, they make it easy to build a simple high-availability setup. This is primarily useful for binding a domain to a single static IP.
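For reference, the keepalived side of such a setup typically looks roughly like this (a hypothetical minimal sketch; the interface name, router ID, priority and virtual IP are all placeholders):

```
# Hypothetical minimal keepalived.conf (VRRP) sketch.
vrrp_instance VI_1 {
    state MASTER           # the peer node would use "state BACKUP"
    interface eth0
    virtual_router_id 51
    priority 100           # the backup node gets a lower priority
    advert_int 1
    virtual_ipaddress {
        203.0.113.10/32    # the floating IP follows whichever node is MASTER
    }
}
```

When the MASTER stops sending VRRP advertisements, the BACKUP promotes itself and claims the virtual address.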

For some use cases it would be useful to have a similar feature inside the VPN: an IP that always points to a server that is alive.

A real-world use case:

Below is a typical self-managed HA Kubernetes cluster setup.

  • LB Node 1:
    • HAProxy
    • keepalived master
    • Public IP: 76.0.0.11
    • Public Floating IP assigned: 100.0.0.3
    • VPN IP 10.0.0.11
    • VPN Floating IP assigned: 10.0.0.99
  • LB Node 2:
    • HAProxy
    • keepalived backup
    • Public IP: 76.0.0.12
    • VPN IP 10.0.0.12
  • Master Node 1:
    • k8s control plane
    • VPN IP 10.0.0.21
  • Master Node 2:
    • k8s control plane
    • VPN IP 10.0.0.22
  • Master Node 3:
    • k8s control plane
    • VPN IP 10.0.0.23
  • Worker Node 1
    • k8s worker
    • VPN IP 10.0.0.31
  • Worker Node 2
    • k8s worker
    • VPN IP 10.0.0.32

Every k8s component has to communicate with the control plane through a load balancer. If you want all traffic routed through the VPN, you have to make sure the load balancer nodes are always reachable. Both LB nodes run a liveness daemon (e.g. keepalived); if one node goes offline, the other automatically assigns the VPN floating IP to itself.
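On a plain wireguard mesh the tricky part is that the floating 10.0.0.99 must appear in the other peers' AllowedIPs for whichever LB currently holds it. A keepalived notify script would have to repoint it on failover, roughly like this (hypothetical commands, assuming the interface is named wg0 and the peer public keys are placeholders):

```
# Hypothetical failover step, run on every mesh node when LB node 2
# takes over: route 10.0.0.99 to LB node 2's peer instead of LB node 1's.
wg set wg0 peer <LB2-public-key> allowed-ips 10.0.0.12/32,10.0.0.99/32
wg set wg0 peer <LB1-public-key> allowed-ips 10.0.0.11/32
```

Since wireguard only accepts a given IP in one peer's AllowedIPs at a time, every node has to be updated, which is exactly the kind of coordination a mesh manager like wesher would need to provide.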

Is this already possible somehow with wireguard and wesher? Starting a second wesher instance on an already-connected server understandably failed, because the port is still in use.
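The port clash at the end is ordinary socket behaviour rather than anything wesher-specific; a minimal Python sketch reproduces it (the port is picked dynamically for the demo, though wireguard's default is 51820):

```python
import errno
import socket

# First "instance" binds a UDP port (wireguard uses UDP as transport).
s1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s1.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
port = s1.getsockname()[1]

# A second "instance" trying the same port fails with EADDRINUSE,
# just like starting a second wesher on an already-connected server.
s2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    s2.bind(("127.0.0.1", port))
    print("unexpected: second bind succeeded")
except OSError as e:
    print("second bind failed:", errno.errorcode[e.errno])  # EADDRINUSE
finally:
    s1.close()
    s2.close()
```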

@costela
Owner

costela commented Jan 29, 2020

I'm not sure I understand your example. K8s already takes care of creating its own overlay network on top of the wesher mesh, so I don't see why the control-plane traffic has to go through the LBs. What am I missing?

Putting the specific example aside, the actual feature being requested - assuming I understood it correctly - is not currently supported: wesher assigns each node one mesh IP address and currently doesn't do any routing.
If we do get around to solving #12, this might become possible, though.

@costela
Owner

costela commented Jan 29, 2020

BTW, since you seem to be using Hetzner, you might be interested in this instead of (or as a complement to) keepalived.

@kaiyou
Contributor

kaiyou commented May 24, 2020

This should come as an additional benefit of the currently proposed fix for #28.

The implementation allows nodes to announce routed prefixes through wesher. It currently focuses on device-scope routes, so it should be compatible with keepalived.
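For context, a device-scope route is one attached directly to an interface, with no gateway nexthop; this is the kind of route that results from keepalived assigning a floating address to an interface. A hypothetical example (interface name and prefix are placeholders):

```
# A device/link-scope route: traffic to 10.0.0.99 goes straight out wg0,
# with no gateway. "scope link" is also the default for plain dev routes.
ip route add 10.0.0.99/32 dev wg0 scope link
```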
