
Please provide example of multi-cluster container-native load balancing #59

Closed
naseemkullah opened this issue Oct 16, 2019 · 11 comments · Fixed by #57
Assignees
Labels
enhancement New feature or request triaged Scoped and ready for work

Comments

@naseemkullah
Contributor

As per https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing ... thanks!

@morgante
Contributor

@naseemkullah As the linked doc explains, GKE load balancers can (and should) be created using Kubernetes objects.

Is there a specific ask you have that is not accommodated by the kubectl tutorial?

@naseemkullah
Contributor Author

naseemkullah commented Oct 16, 2019

Thanks @morgante, I am new to GCLB (I have used a regional LB fronting nginx-ingress until now), and as my org is working on distributing services globally we want to use GCLB.

Terraform is what we use to deploy GCP resources so I thought it would have to be created via TF.
My mistake!

@morgante
Contributor

No worries! You should be able to use the Kubernetes resources directly (the GKE controller handles creating the GCP resources for you).

@naseemkullah
Contributor Author

Great, thanks! That saved me a lot of messing around. :)

@naseemkullah
Contributor Author

@morgante My issue is that I want a multi-cluster ingress defined declaratively; is this possible without using Terraform?

kubernetes/ingress-gce#794

@morgante
Contributor

Got it, for that case you might want to look at autoneg: https://github.com/GoogleCloudPlatform/gke-autoneg-controller

@morgante morgante reopened this Oct 16, 2019
@Dev25
Contributor

Dev25 commented Oct 16, 2019

You could do this today via standalone NEGs + this module with some tweaks (support for max_*_per_endpoint backend group vars) + using the NEG datasource.

I do have a large/breaking PR, #57, that adds NEG support and includes an example of doing that.

You could also make a smaller PR that just includes the minimum changes (those backend vars) needed to support NEGs.

@naseemkullah naseemkullah changed the title Please provide example of container-native load balancing Please provide example of multi-cluster container-native load balancing Oct 17, 2019
@naseemkullah
Contributor Author

naseemkullah commented Oct 18, 2019

> Got it, for that case you might want to look at autoneg: https://github.com/GoogleCloudPlatform/gke-autoneg-controller

This looks really interesting @morgante

@naseemkullah
Contributor Author

naseemkullah commented Oct 19, 2019

> You could do this today via standalone NEGs + this module with some tweaks (support for max_*_per_endpoint backend group vars) + using the NEG datasource.
>
> I do have a large/breaking PR, #57, that adds NEG support and includes an example of doing that.
>
> You could also make a smaller PR that just includes the minimum changes (those backend vars) needed to support NEGs.

That's excellent @Dev25

May I ask: once your PR goes through, could you please confirm whether this is the correct workflow?

  1. Use TF to deploy GKE, and use this module to deploy the LB with its backend services
  2. Use whatever method you like to deploy your app to GKE, including the Service annotated for standalone NEGs
  3. Go back to TF and configure this module, adding the new standalone NEGs (via the NEG datasource) as backends of the backend service created in step 1

@Dev25
Contributor

Dev25 commented Oct 23, 2019

Yes, that would be correct @naseemkullah. Once you create the NEG Service inside GKE you can figure out the Compute Engine NEG name (there is a cloud.google.com/neg-status annotation on the Kubernetes Service) and use that in TF/this module.

@naseemkullah
Contributor Author

Thanks for clarifying, I look forward to your PR being merged!

@aaron-lane aaron-lane added the enhancement New feature or request label Nov 4, 2019
@aaron-lane aaron-lane added the triaged Scoped and ready for work label Nov 4, 2019