LBaaS : Private to Private load balancing and support for larger LBs (Octavia integration) #104
Comments
Indeed, it would be nice... even more so since OVH's documentation is very confusing and might actually lead people to believe it's already in place! :(
Hello, I confirm that we currently support only Public to Public and Public to Private load balancing, but not Private to Private.
Well, actually there seems to be a major bug in your public to private implementation... https://gitter.im/ovh/kubernetes?at=60a3c10f233397424dcd6e1f
@giudicelli I confirm this bug is being addressed by my network colleagues (it is non-systematic: sometimes they create a Neutron port in the wrong region for one side of the LB, which makes it unreachable).
@giudicelli This bug was fixed a couple of weeks ago. Nevertheless, I am leaving the issue open because private to private load balancing is still to be developed.
What is the ETA for this? Beta / GA
Hello @matmicro I don't have an ETA yet, but this should come pretty soon after regions are moved to a recent version of OpenStack and expose Octavia, which is the technology we plan to leverage for this need.
Can we expect this before the end of 2021 then?
@matmicro no, this will be during the first semester of 2022.
@mhurtrel Is the first semester ETA still accurate?
Hello @pierreis There are some delays in adding public to public support for Octavia, which will be leveraged for both use cases after integration in Kubernetes. The current ETA is Q3, sorry for the delay.
Is the ETA still Q3?
Hi @mscheer93. Our Octavia colleagues are having some delays in making Octavia fully compliant with all the K8s use cases we will need (public to public, public to private and private to private). We need all these scenarios to be usable before we can make the switch. Our best guess at the moment is Q4 of the calendar year. Sorry for the delay.
@Nico-VanHaaster I updated the issue to also cover the 'high load' scenario. Both will be covered with the Octavia integration. Unfortunately, we have delays from the Octavia team in enabling all the scenarios needed for Kubernetes. I will give a new ETA as soon as I have a clear one from them.
@JacquesMrz thanks for the update. At this point we have stopped using OVH K8s and gone with Azure until the K8s offering is finished. This feature is one of the really important features to be implemented. Looking forward to the GA release.
Any news or any ETA for this?
As an MKS user I would like to benefit from the below improvements on the LoadBalancer:
Any news or any ETA for this?
So we are almost 2 years later, and this just moved to "prioritized", so we should expect this by ~2025, right?
Hi @qualifio-infrastructure I fully understand and share part of your frustration, but please understand that there were numerous dependencies to support this.
How is this looking at the moment? This is critical for many uses.
Hi @pierreis,
Less than 3 months ago spring was targeted, but now it has shifted by a whole additional year? Alright.
@zyguy I opened this thread over 2 years ago, in May 2021, and there has been nothing but delays and no responses. We moved our entire stack off OVH to Azure; with properly configured clusters we were able to achieve price points close to OVH's with 10x the capabilities. We now use private routing from the edge, HA clusters and much more. I wouldn't expect to see much from this team.
😂 If your goal is price and top-notch capabilities, you could have started directly with Azure/GCP/AWS, as any Cloud Engineer who knows the market would tell you. OVH is a sovereign outsider, my friend... its dev budget is an order of magnitude lower...
Hello all, thank you for your comments and concern. We completely understand the importance of the Octavia integration and want to reassure you that our teams, including the network team, have been actively working on it for several months now. Rest assured, we are committed to delivering the integration as planned by the end of summer, just as we recently stated.

The Octavia integration will indeed bring a host of new features to our platform. Some of the highlights include support for private-to-private load balancing, load balancer resizing, customizable TTL (Time to Live), and much more. We are excited about the possibilities this will open up for our users, and we are confident that these enhancements will greatly improve the overall performance and functionality of our service.

Please know that your feedback and suggestions are invaluable to us, and we are working diligently to address your needs and requirements. We look forward to delivering the Octavia integration and its exciting features by the end of summer.
You mean summer '23? Right? ;-) Yeah, it's a bit frustrating. Especially when you say they have been working for months now on this feature, which could be implemented in a day or two. I also told your developers months ago that there is a problem with Octavia load balancers and the Member Connect Timeout. But nothing happens on that front; it's a reproducible error which nobody is solving. Sad :(
We do indeed mean summer 2023.
Switching the default setting cannot be that hard... And the previous comments show the problem with OVH. I really like your services, but they are not very reliable. Every couple of months there are issues with routers or the network which can only be solved if I ask on Discord; they always go undetected until that point.

Then there is this huge, reproducible bug with the Member Connect Timeout. I manage 5-6 customer clusters and all of them have this problem: if you create a service of type NodePort, create an Octavia load balancer and add the nodes as members, you have a 50/50 chance that your requests run into the member connect timeout, which is initially set to 5 seconds. All clusters have this, with all kinds of service configs.

The problem itself is not the topic of this thread, but it shows one thing: either the Octavia load balancers are not used that much, or you don't care about this kind of issue, or you are not able to solve it. I understand everyone who criticizes OVH and I will not recommend it to future clients. It's just not reliable, neither the infrastructure nor the roadmap.
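The timeout described above corresponds to Octavia's listener `timeout_member_connect`. Upstream cloud-provider-openstack exposes these listener timeouts as Service annotations (values in milliseconds); whether the OVH integration honors them is an assumption, not something confirmed in this thread. A minimal sketch:

```yaml
# Sketch only: standard cloud-provider-openstack timeout annotations.
# OVH support for these annotations is assumed, not confirmed here.
apiVersion: v1
kind: Service
metadata:
  name: my-app                     # hypothetical name
  annotations:
    # Raise the member connect timeout from the 5 s default (values in ms).
    loadbalancer.openstack.org/timeout-member-connect: "10000"
    loadbalancer.openstack.org/timeout-member-data: "60000"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```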
Do you have any update on this feature? It seems it moved to the "partially released" section yesterday. What does that mean? Is there any possibility to become an early tester?
Hello @ilhanadiyaman, exactly, the Private Beta for Octavia is available as of yesterday. I will try to find you on Discord to let you in on it. :) @everybody, if you are interested in using it, please let me know so that I can add you to our private Discord channel.
Thank you @LisaPerrier, just joined Discord. I'll send you a DM there.
Hi @LisaPerrier, this is good news! Can you also share the pricing and technical specs (bandwidth, number of connections, ...) for these Private-to-Private Octavia load balancers?
Hi @fkalinowski, pricing is composed of:
You can find prices here https://www.ovhcloud.com/en-ie/public-cloud/prices/ and some explanation here: https://help.ovhcloud.com/csm/en-public-cloud-network-load-balancer-concepts?id=kb_article_view&sysparm_article=KB0059277 (instructions on leveraging the load balancer in Kubernetes will be shared privately with private beta users first).
Hi @mhurtrel does it mean that the "Public network" column will also apply to private-only traffic? Or can we expect higher bandwidth for private-only traffic?
@fkalinowski
@yomovh @mhurtrel Thanks for this clarification! IMO it seems weird that private bandwidth is billed at the same price as public bandwidth.
I am really interested in using this feature. I have opened a ticket with OVH support but they do not know about it.
Hello @DanielVilaFalcon I'm taking over from Lisa.
@antonin-a I do not have access to the channel; I opened the link but it does not work properly. Could you send me the channel invitation link, please?
Sorry @DanielVilaFalcon, my mistake. You can join our Discord using this link: https://discord.gg/ovhcloud
So after this issue being open for years, it appears there is finally some traction. However, my original issue has been edited many times. I would like to confirm the original requirements are now available:
I stopped using OVH quite some time ago and would like to move back; however, I am concerned I won't be able to set up a solution that is on par with our current one. Given that OVH has no plans to invest in Windows K8s nodes, we would be required to run a hybrid solution with K8s nodes + Windows Public Cloud instances. Our platform was originally built on Windows and most of the vital components are hosted on Windows. This is slowly being reduced, but it is a long-term plan. Currently the major web front-end components have been migrated to Linux; however, the Windows portion still runs the critical data services. What we are looking to set up is the following:
And to sum this all up: the private to private networking would allow our Public Cloud instances (servers) to access services within K8s through the new Octavia services. Does this sound accurate and achievable with this design?
Hello @Nico-VanHaaster, a lot of questions (and a few issues with markdown :)) but I will try to answer as best as I can:
Yes, private to private LB creation using an annotation is supported; see the sketch after this reply.
Ok
Ok
Ok
Not sure I understand this one. Do you mean "will maintain its private IP"? As all your services (DB, Cache, MQ, ...) will be hosted on your K8s cluster and exposed behind the LBaaS, you probably need to keep the same private IP in case of deletion/creation of the LB?
Yes, except for 5. in the current situation.
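The annotation referred to above is the standard cloud-provider-openstack one for internal load balancers. A minimal sketch of a private-to-private Service, assuming the OVH Octavia integration honors it (name, selector and ports are hypothetical):

```yaml
# Minimal sketch: private-to-private LoadBalancer Service.
# Assumes the OVH Octavia integration honors the standard
# cloud-provider-openstack internal-LB annotation.
apiVersion: v1
kind: Service
metadata:
  name: internal-api               # hypothetical name
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
spec:
  type: LoadBalancer
  selector:
    app: internal-api
  ports:
    - port: 443
      targetPort: 8443
```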
Is the Octavia integration the groundwork needed for enabling public ipv6+ipv4 to private LB in the future? #167
Hello, are you sure OVH supports service.beta.kubernetes.io/openstack-internal-load-balancer? I tried to add it to a service and it always gives me an external IP.
Hello @vincenthartmann, the historical Load Balancer for Managed Kubernetes Service does not support private mode. I do confirm that it is supported with this new Load Balancer.
Solved by #603
As a Kubernetes cluster user
I want to use a fully integrated load balancer for a wider variety of use cases, such as high-traffic scenarios or private-to-private load balancing,
so that I can benefit from full Kubernetes integration and automation for all possible business use cases.
Note: we currently only support public to public and public to private load balancers, and we only have one fixed integrated load balancer size, described here (with limitations such as 2,000 req/s and 200 Mbit/s). A workaround is to set up a self-managed load balancer and/or use the OVHcloud IP Load Balancer solution https://www.ovh.ie/solutions/load-balancer, but this is challenging because you then miss the live integration, for example when a pod is rescheduled or a node is added/removed.
All those use cases will be covered in 2023 with the integration of the Octavia load balancer in Managed Kubernetes.
Original issue:
It is quite common for a K8s load balancer to work with a private IP (vRack-assigned). This allows other private services to interact with the services/pods in a K8s cluster without exposing the cluster to the public web.
In other platforms, an annotation is added to the ingress controller to create the load balancer using a private IP address. For example, in Azure we would use the annotation
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
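For illustration, the annotation goes in the Service metadata; a minimal sketch of an internal load balancer Service on AKS, with hypothetical names and ports:

```yaml
# Minimal sketch: internal (private-IP) load balancer on AKS.
# Service name, selector and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: private-ingress
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: private-ingress
  ports:
    - port: 80
      targetPort: 8080
```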
This is quite helpful for cases such as