
LBaaS : Private to Private load balancing and support for larger LBs (Octavia integration) #104

Closed
Nico-VanHaaster opened this issue May 18, 2021 · 50 comments
Labels
Performance Compute, Storage or Network fundamentals

Comments

@Nico-VanHaaster

Nico-VanHaaster commented May 18, 2021

As a Kubernetes cluster user
I want to use a fully integrated load balancer for various use cases, such as high-traffic scenarios or private-to-private load balancing,
so that I can benefit from full Kubernetes integration and automation for all possible business use cases.

Note: We currently only support public to public and public to private load balancers, and we only have one fixed integrated load balancer size, described here (with limitations such as
2,000 req/s and 200 Mbit/s). A workaround is to set up a self-managed load balancer and/or use the OVHcloud IP Load Balancer solution https://www.ovh.ie/solutions/load-balancer, but this is challenging because you then miss the live integration, for example when a pod is rescheduled or a node is added/removed.

All those use cases will be covered in 2023 with the integration of the Octavia load balancer in Managed Kubernetes.


Original issue:
It is quite common for a Kubernetes load balancer to work with a private IP (vRack-assigned). This allows other private services to interact with the services/pods in a Kubernetes cluster without exposing the cluster to the public web.

In other platforms an annotation is added to the ingress controller to create the load balancer using a Private IP Address. For example in Azure we would use the annotation

service.beta.kubernetes.io/azure-load-balancer-internal: "true"
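For illustration, a minimal Service manifest using this Azure annotation could look like the following sketch (the service name, ports and selector are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app   # hypothetical name
  annotations:
    # Ask the Azure cloud provider for a private (VNet) IP instead of a public one
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: internal-app
```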

This is quite helpful for cases such as

  • A firewall / WAF external to the cluster managing all ingress
  • Orchestrating multiple clusters through common endpoints
  • Creating services that are not exposed to the public web (e.g. monitoring/telemetry, cache, database, etc.)
@giudicelli

Indeed, it would be nice... all the more so since OVH's documentation is very confusing and might actually lead people to believe it's already in place! :(

@mhurtrel changed the title from "Managed Kubernetes: Internal IP Load balancer" to "LBaaS : Private to Private load balancing" on May 19, 2021
@mhurtrel
Collaborator

Hello

I confirm that we currently support only Public to Public and Public to Private load balancing, but not Private to Private.
This is a planned feature from our network colleagues, and we will support an annotation for this when they make it available. In the meantime, we invite you to use ClusterIP/NodePort services for internal traffic routing.
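For reference, a minimal sketch of that workaround, exposing a workload on every node's private (vRack) IP through a NodePort Service; the name and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-api    # illustrative name
spec:
  type: NodePort        # reachable on each node's private IP at the nodePort below
  ports:
    - port: 80          # Service port inside the cluster
      targetPort: 8080  # container port
      nodePort: 30080   # must fall in the NodePort range (default 30000-32767)
      protocol: TCP
  selector:
    app: internal-api
```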

@giudicelli

Well, actually there seems to be a major bug in your public to private implementation... https://gitter.im/ovh/kubernetes?at=60a3c10f233397424dcd6e1f
And I'm not the only one if I refer to people's posts :)

@mhurtrel
Collaborator

mhurtrel commented May 19, 2021

@giudicelli I confirm this bug is being addressed by my network colleagues (it is non-systematic: sometimes a Neutron port is created in the wrong region for one side of the LB, which makes it unreachable).

@mhurtrel
Collaborator

@giudicelli This bug was fixed a couple of weeks ago. Nevertheless, I am leaving the issue open because private to private load balancing still has to be developed.

@matmicro

matmicro commented Nov 7, 2021

What is the ETA for this? Beta / GA?

@mhurtrel
Collaborator

mhurtrel commented Nov 7, 2021

Hello @matmicro, I don't have an ETA yet, but this should be pretty soon after regions are moved to a recent version of OpenStack and expose Octavia, which is the technology we plan to leverage for this need.

@matmicro

matmicro commented Nov 7, 2021

Can we expect it before the end of 2021 then?

@mhurtrel
Collaborator

mhurtrel commented Nov 7, 2021

@matmicro no, this will be during the first semester of 2022.

@pierreis

pierreis commented Mar 22, 2022

@mhurtrel Is the first semester ETA still accurate?

@mhurtrel
Collaborator

Hello @pierreis. There are some delays in adding public to public support for Octavia, which will be leveraged for both use cases after integration in Kubernetes. The current ETA is Q3, sorry for the delay.

@mscheer93

Is the ETA still Q3?

@mhurtrel
Collaborator

Hi @mscheer93. Our Octavia colleagues are having some delays in making Octavia fully compliant with all the K8s use cases we need (public to public, public to private and private to private). We need all these scenarios to be usable before we can make the switch. Our best guess at the moment is Q4 of the calendar year. Sorry for the delay.

@mhurtrel
Collaborator

mhurtrel commented Jun 22, 2022

@Nico-VanHaaster I updated the issue to also cover the 'high load' scenario. Both will be covered by the Octavia integration. Unfortunately, we have a delay from the Octavia team in enabling all the scenarios needed for Kubernetes. I will give a new ETA as soon as I have a clear one from them.

@Nico-VanHaaster
Author

@JacquesMrz thanks for the update. At this point we have stopped using OVH K8s and gone with Azure until the K8s offering is finished. This feature is one of the really important features to be implemented.

Looking forward to the GA release.

@arcalys

arcalys commented Aug 1, 2022

Any news or any ETA for this?

@yctn

yctn commented Oct 13, 2022

Any news or any ETA for this?

@matmicro

As an MKS user, I would like to benefit from the following improvements on the Load Balancer:

  • the ability to set a name (from the kube YAML file) and see this name in the Manager UI
  • display of the LB's Kubernetes cluster in the Manager UI
  • display of LB metrics in the Manager UI
    • requests/s
    • bandwidth

Then I will be able to clearly optimize my LB usage and anticipate migration between the S, M and L flavors.

@yctn

yctn commented Oct 26, 2022

Any news or any ETA for this?

@mhurtrel changed the title from "LBaaS : Private to Private load balancing" to "LBaaS : Private to Private load balancing and support for larger LBs (Octavia integration)" on Dec 1, 2022
@qualifio-infrastructure

qualifio-infrastructure commented Jan 4, 2023

So here we are, almost 2 years later, and this just moved to "prioritized", so we should expect this by ~2025, right?

@mhurtrel
Collaborator

mhurtrel commented Jan 4, 2023

Hi @qualifio-infrastructure, I fully understand and share part of your frustration, but please understand that there were numerous dependencies to support this.
We depend on those to move forward. Octavia is being released by my network colleagues in a first production-ready version, and they are now progressing on supporting all the scenarios we need for Managed Kubernetes (including the missing public to public management).
As soon as we have that, we will integrate Octavia and support this. We currently target this for Spring, though there may be additional small delays on those dependencies I don't have full control over.

@pierreis

How is this looking at the moment? This is critical for many uses.

@LisaPerrier

Hi @pierreis,
We are putting effort into deploying Octavia (public to public and public to private management) for the end of summer 2023. Then we will be able to work on our dependency issues and make private to private deployment possible for 2024.

@pierreis

Less than 3 months ago spring was targeted, but it has now shifted by a whole additional year? Alright.

@Nico-VanHaaster
Author

Hi, do you have any update on this feature?

@zyguy I opened this thread over 2 years ago, in May 2021, and there has been nothing but delays and no responses. We moved our entire stack off OVH to Azure; with properly configured clusters we were able to achieve price points close to OVH's with 10x the capabilities. We are now using private routing from the edge, HA clusters and much more.

I wouldn’t expect to see much from this team.

@bcouetil

bcouetil commented Aug 2, 2023

we were able to achieve near close price points to OVH with 10x the capabilities

😂

If your goal is price and top-notch capabilities, you could have started directly with Azure/GCP/AWS, as any Cloud Engineer who knows the market would tell you.

OVH is a sovereign outsider, my friend... its dev budget is an order of magnitude lower...

@LisaPerrier

Hello all,

Thank you for your comment and concern. We completely understand the importance of the Octavia integration and want to reassure you that our teams, including the network team, have been actively working on it for several months now. Rest assured, we are committed to delivering the integration as planned by the end of summer, just as we recently stated and answered.

The Octavia integration will indeed bring a host of new features to our platform. Some of the highlights include support for private-to-private load balancing, load balancer resizing, customizable TTL (Time to Live), and much more. We are excited about the possibilities this will open up for our users, and we're confident that these enhancements will greatly improve the overall performance and functionality of our service.

Please know that your feedback and suggestions are invaluable to us, and we are working diligently to address your needs and requirements. We look forward to delivering the Octavia integration and its exciting features by the end of summer.

@JustDoItSascha

Hello all,

Thank you for your comment and concern. We completely understand the importance of the Octavia integration and want to reassure you that our teams, including the network team, have been actively working on it for several months now. Rest assured, we are committed to delivering the integration as planned by the end of summer, just as we recently stated and answered.

The Octavia integration will indeed bring a host of new features to our platform. Some of the highlights include support for private-to-private load balancing, load balancer resizing, customizable TTL (Time to Live), and much more. We are excited about the possibilities this will open up for our users, and we're confident that these enhancements will greatly improve the overall performance and functionality of our service.

Please know that your feedback and suggestions are invaluable to us, and we are working diligently to address your needs and requirements. We look forward to delivering the Octavia integration and its exciting features by the end of summer.

You mean summer '23? Right? ;-)

Yeah, it's a bit frustrating. Especially when you say they have been working for months now on this feature, which could be implemented in a day or two.

I also told your developers months ago that there is a problem with Octavia load balancers and the Member Connect Timeout. But nothing is happening on that front; it's a reproducible error which nobody is solving. Sad :(

@mhurtrel
Collaborator

mhurtrel commented Aug 2, 2023

We do indeed mean summer 2023.
I think you largely underestimate the work required to make that work and be maintainable on the thousands of production clusters we have, with the different scenarios our customers want us to support (public to public, private to private, public to private). As for your last comment, please understand that the Octavia Load Balancer is still a new feature at OVHcloud, so part of the job is also to detect challenges at scale, etc., that our network colleagues could not yet experience.

@mhurtrel added the "Performance Compute, Storage or Network fundamentals" label on Aug 3, 2023
@JustDoItSascha

JustDoItSascha commented Aug 3, 2023

Switching the default setting cannot be that hard... And the previous comments show the problem with OVH. I really like your services, but they are not very reliable. Every couple of months there are issues with routers or the network which can only be solved if I ask on Discord. They are always undetected until that point.

Then there is this huge, reproducible bug with the Member Connect Timeout. I manage like 5-6 customer clusters and all of them have this problem. If you create a service of type NodePort, create an Octavia load balancer and add the nodes as members, you have a 50/50 chance that your requests run into the member connect timeout, which is initially set to 5 seconds. All clusters have this, with all kinds of service configs.
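(For reference, the upstream openstack-cloud-controller-manager exposes this listener timeout as a Service annotation in milliseconds; whether the OVHcloud integration passes it through is an assumption, but a sketch to raise it would look like this, with an illustrative service name:)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app   # illustrative name
  annotations:
    # Octavia's default member connect timeout is 5000 ms (the 5 seconds mentioned above)
    loadbalancer.openstack.org/timeout-member-connect: "10000"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: my-app
```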

The problem itself is not the topic of this thread, but it shows one thing: either the Octavia load balancers are not used that much, or you don't care about that kind of issue, or you are not able to solve it.

I understand everyone who criticizes OVH and I will not recommend it to future clients. It's just not reliable: neither the infrastructure nor the roadmap.

@ilhanadiyaman

Do you have any update on this feature? It seems that yesterday it moved to the "partially released" section. What does that mean? Is there any possibility of becoming an early tester?

@LisaPerrier

Hello @ilhanadiyaman,

Exactly, the Private Beta for Octavia is available as of yesterday; I will try to find you on Discord to let you in on it. :)
Private to private load balancing is already available with this beta version.

@everybody, if you are interested in using it, please let me know so that I can add you to our private Discord channel.

@ilhanadiyaman

Thank you @LisaPerrier, just joined Discord. I'll send you a DM there.

@fkalinowski

Hi @LisaPerrier,

This is good news!

Can you also share the pricing and technical specs (bandwidth, number of connections, ...) for these private-to-private Octavia load balancers?

@mhurtrel
Collaborator

Hi @fkalinowski, pricing is composed of:

  • A gateway (shared across all your load balancers)
  • Octavia load balancers
  • Public IPv4 (not for private to private)

You can find prices here https://www.ovhcloud.com/en-ie/public-cloud/prices/ and some explanation here: https://help.ovhcloud.com/csm/en-public-cloud-network-load-balancer-concepts?id=kb_article_view&sysparm_article=KB0059277

(Instructions on leveraging the load balancer in Kubernetes will be shared privately with private beta users first.)

@fkalinowski

Hi @mhurtrel, does it mean that the "Public network" column will also apply to private-only traffic? Or can we expect higher bandwidth for private-only traffic?

@yomovh
Member

yomovh commented Sep 14, 2023

@fkalinowski
To date, the private & public bandwidths are the same.
Also, after discussing with @mhurtrel, here is a bit of clarification on the components (and the associated costs) that are required depending on your traffic pattern:

  • Pub-to-Priv: Floating IP + Gateway S* + Load Balancer
  • Pub-to-Pub: Floating IP + Gateway S* + Load Balancer
  • Priv-to-Priv: Load Balancer
  • In all cases where a Gateway is needed, the bandwidth used for the Load Balancer is the Load Balancer's own. Of course, should you use the Gateway for needs other than the Load Balancer itself, you will need to choose the size appropriately.

@fkalinowski

@yomovh @mhurtrel Thanks for this clarification!

IMO it seems weird that private bandwidth is billed at the same price as public bandwidth.
Indeed, like most other Public Cloud services with network connectivity, for a given price I was expecting two different bandwidths for public and private connectivity, with a higher value for the private one.

@DanielVilaFalcon

DanielVilaFalcon commented Oct 18, 2023

Hello @ilhanadiyaman,

Exactly, the Private Beta for Octavia is now available as of yesterday, I will try to find you on discord to let you in on it. :) Private to private load balancing is already available with this beta version

@everybody, If you are interested in using it please let me know so that I can add you to our private Discord channel.

I am really interested in using this feature, but I have opened a ticket with OVH support and they do not know about it.
@LisaPerrier could you tell me how to use a LoadBalancer with the UDP protocol in a k8s cluster, please?

@antonin-a
Collaborator

Hello @DanielVilaFalcon, I'm taking over from Lisa.
The best way to get support on MKS + Octavia usage is to join our private Beta channel on Discord.
If you do not have access yet, you can join our dedicated Discord channel (https://discord.com/channels/850031577277792286/1143208429872226325) and reach out to me to get the access code.

@DanielVilaFalcon

DanielVilaFalcon commented Oct 19, 2023

Hello @DanielVilaFalcon, I'm taking over from Lisa. The best way to get support on MKS + Octavia usage is to join our private Beta channel on Discord. If you do not have access yet, you can join our dedicated Discord channel (https://discord.com/channels/850031577277792286/1143208429872226325) and reach out to me to get the access code.

@antonin-a I do not have access to the channel; I opened the link but it does not work properly. Could you send me the channel invitation link, please?

@antonin-a
Collaborator

Sorry @DanielVilaFalcon, my mistake. You can join our Discord using this link: https://discord.gg/ovhcloud
Then go to "Container & Orchestration" > "beta-info-containers-orchestration".
Send me a PM on Discord to get the access code (user: antonin.anchisi).

@Nico-VanHaaster
Author

So after this issue being open for years, it appears there is some traction on it. However, my original issue has been edited many times. I would like to confirm that the original requirements are now available:

  1. I can create a load balancer that will provide TCP based private to private VNET routing (vrack) by providing custom annotations to a Kubernetes Service (Type=Loadbalancer)
  2. This new load balancer will only operate over the private network (no public ip address)
  3. Conforms to a standard Kubernetes Load Balancer Service
  4. Can be destroyed and recreated as required as a standard Kubernetes object.
  5. When created will maintain its public (private only) IP Address

I stopped using OVH quite some time ago and would like to move back; however, I am concerned I won't be able to set up a solution that is on par with our current one.

Given that OVH has no plans to invest in Windows K8s nodes, we would be required to run a hybrid solution with K8s nodes + Windows Public Cloud instances. Our platform was originally built on Windows and most of the vital components are hosted on Windows. This is slowly being reduced, but it is a long-term plan. Currently the major web front-end components have been migrated to Linux; however, the Windows portion still runs the critical data services.

What we are looking to set up is the following:

  1. The K8s cluster is the primary ingress for the application components; these include:
     • most of the major front application parts
     • database server
     • caching services (Redis)
     • message queue services (RabbitMQ)
     • block / blob storage for the services
  2. Windows Public Cloud services (all private), including:
     • batch processing services
     • web front-end services (limited, non-critical)
  3. Networking:
     • The K8s cluster will manage all network routing, both to internal services (within K8s) and to Public Cloud instances (private networking).
     • The Public Cloud networking will be handled through separate proxy servers within the vRack with static IPs.
     • The Public Cloud instances will use private to private networking to the K8s clusters to access the database, cache and message queue services.

And to sum this all up: the private to private networking would allow our Public Cloud servers to access services within K8s through the new Octavia services.

Does this sound accurate and achievable with this design?

@antonin-a
Collaborator

Hello @Nico-VanHaaster, a lot of questions (and a few issues with markdown :) ), but I will try to answer as best I can:

  1. I can create a load balancer that will provide TCP based private to private VNET routing (vrack) by providing custom annotations to a Kubernetes Service (Type=Loadbalancer)

Yes, private to private LB creation using an annotation is supported: service.beta.kubernetes.io/openstack-internal-load-balancer
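For example, a minimal sketch of a Service using that annotation to expose a TCP backend only on the private network; the name, port and selector are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-private   # illustrative name
  annotations:
    # Request a private (vRack) address instead of a public one
    service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 6379
      targetPort: 6379
      protocol: TCP
  selector:
    app: redis
```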

  2. This new load balancer will only operate over the private network (no public ip address)

Ok

  3. Conforms to a standard Kubernetes Load Balancer Service

Ok

  4. Can be destroyed and recreated as required as a standard Kubernetes object.

Ok

  5. When created will maintain its public (private only) IP Address

Not sure I understand this one. Do you mean "will maintain its private IP"? As all your services (DB, cache, MQ, ...) will be hosted on your k8s cluster and exposed behind the LBaaS, you probably need to keep the same private IP in case of deletion/creation of the LB?
If so, the target is to use port_id to attach an existing port (+ private IP) to the LB at creation. Please note that it is not supported yet, but we plan to support it when we release in GA.

Does this sound accurate and achievable with this design?

Yes, except 5. in the current situation.

@linus-ha

Is the Octavia integration the groundwork needed for enabling public IPv6+IPv4 to private LB in the future? #167

@vincenthartmann

Hello, are you sure OVH supports service.beta.kubernetes.io/openstack-internal-load-balancer? I tried to add it to a service and it always gives me an external IP.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: hello-world
```

@antonin-a
Collaborator

antonin-a commented Feb 13, 2024

Hello @vincenthartmann, the historical Load Balancer for Managed Kubernetes Service does not support private mode.
The topic we are discussing in this ticket is the usage of the new OVHcloud Public Cloud Load Balancer with MKS. This implementation is currently in the Alpha phase and you can join the alpha using these instructions.

I do confirm that for this new Load Balancer, service.beta.kubernetes.io/openstack-internal-load-balancer: "true" is supported.

@antonin-a
Collaborator

antonin-a commented Sep 13, 2024

Solved by #603

Projects
Status: Partially released