
Support the necessary load-balancing functions on Gateway and Route #1074

Closed
sdjksdajshd opened this issue Mar 28, 2022 · 19 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@sdjksdajshd

What would you like to be added:

Standardized support for load-balancing algorithms, session persistence, CORS, and health checks.

Why this is needed:

From my work experience as a product manager: public-cloud load balancers generally support these four capabilities, and they are treated as required configuration for load balancing.

@sdjksdajshd sdjksdajshd added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 28, 2022
@hbagdi
Contributor

hbagdi commented Apr 1, 2022

Hey @sdjksdajshd, thanks for the issue!
Gateway API aims to provide a portable set of expressive features while allowing for extensions.
We have had lengthy discussions about supporting load-balancing features such as LB algorithms, timeout policies, and health checks, and we have concluded that supporting these in a way that is consistent across implementations is very challenging. We have come up with a mechanism to make these extensions easy and intuitive via the ReferencePolicy resource.
Please take a look at that resource and let us know your thoughts!
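
For readers who haven't seen the resource yet, here is a minimal ReferencePolicy sketch (v1alpha2 API; all names and namespaces are hypothetical) allowing Routes in one namespace to reference Services in another:

```yaml
# Hypothetical example: allow HTTPRoutes in "route-ns" to reference
# Services in "backend-ns". ReferencePolicy lives in the *target* namespace.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: ReferencePolicy
metadata:
  name: allow-routes-to-backends
  namespace: backend-ns
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: route-ns
  to:
  - group: ""          # core API group, i.e. Service
    kind: Service
```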

@sdjksdajshd
Author

sdjksdajshd commented Apr 14, 2022

Hey @hbagdi, thanks for your answer!

I have read about ReferencePolicy, and we have already implemented part of it.

A big question for me: we are preparing to provide a Gateway API UI for users.

We provide customers with a complete commercial Kubernetes service, including highly customized permissions and UI.

We provided customers with a load-balancing function as an enhanced version of Ingress, which was well received. Our customers have at least two roles in LB management: manager and developer.

The manager is responsible for creating and allocating LB resources.

The developer is responsible for listeners and rules.

After discussion, we decided to provide a Gateway API LB UI in a community-friendly way, which we expect to improve the experience of LB developers.

But if everything other than routing rules can only be configured through a separate resource such as ReferencePolicy, then for the sake of user experience the UI must restrict users to editing a single resource: a Route or a Gateway would only be allowed to attach one Policy resource.

I expect that other products will probably do the same if they build a UI. This could be a huge disaster for protocol compatibility.

There are two solutions that could better solve this problem.

One: settle on a very specific API design, or a set of recommended designs, so that I can choose one to use. The expectation is that a Policy implementation could then be built directly on top of it.

Two: let Gateway and Route support more parameters directly, as sketched below.
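
To make the second option concrete, here is a purely hypothetical sketch of what inlined load-balancing parameters on a Route might look like; the `loadBalancing` stanza does not exist in the Gateway API and is invented only to illustrate the request:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: my-route
spec:
  rules:
  - backendRefs:
    - name: my-service
      port: 80
    loadBalancing:            # invented field, not part of the API
      algorithm: WRR
      sessionPersistence: Cookie
      healthCheck:
        path: /healthz
        intervalSeconds: 10
```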

Thank you, and I look forward to your reply.

@sdjksdajshd
Author

mark

@youngnick
Contributor

Thanks for your response @sdjksdajshd. I'm not sure I quite understand what you are asking for though.

It seems like you are saying that you're building a Gateway API UI, which is awesome.

Could you clarify a bit more what you'd like us to do? From your comments here, it seems like you would like there to be a way to limit users to only editing one resource type? Or perhaps your question is about what resources people need access to for a UI to work?

I think that for a Gateway API UI to work, users will require read access to all the resource types (GatewayClass, Gateway, ReferencePolicy, plus any supported Route types), and only have write access to things in namespaces they control.
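
As a rough illustration of that access pattern, a cluster-wide read-only role plus a namespaced write role might look like this (resource lists abbreviated; exact names depend on the Route types and API version in use):

```yaml
# Hypothetical ClusterRole: read access to Gateway API types everywhere.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gateway-api-reader
rules:
- apiGroups: ["gateway.networking.k8s.io"]
  resources: ["gatewayclasses", "gateways", "httproutes", "referencepolicies"]
  verbs: ["get", "list", "watch"]
---
# Hypothetical namespaced Role: write access only in a namespace the user owns.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gateway-api-editor
  namespace: team-a            # the user's own namespace
rules:
- apiGroups: ["gateway.networking.k8s.io"]
  resources: ["httproutes", "referencepolicies"]
  verbs: ["create", "update", "patch", "delete"]
```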

I think that we will end up with a common pattern that has cluster-wide Gateways in some shared namespace, with the whole cluster having Read access to those Gateways (so you can see they are there to attach to).

The whole point of ReferencePolicy is that it should be created in the namespace of the object it is allowing references to. So users should be able to create ReferencePolicy objects in whatever namespaces they own.

I should also note that, although the name looks like it, ReferencePolicy is unrelated to the other Policy attachment we have in the API (see Policy Attachment for that).

I hope this helps, and hope that we can help you more.

@sdjksdajshd
Author

I am very glad to hear from you @youngnick.

If the user wants to configure a Policy, I don't think they want to configure multiple Policies on a single Route to achieve one goal.

So I was hoping for a more standardized Policy API that would allow common configuration to be set on a single resource.

Also, is there a standard way to show the configuration that actually takes effect?

Thank you, and looking forward to a reply.

@sdjksdajshd
Author

We agree that we should provide enough functionality to end users; that is the goal of our Gateway API UI.

@youngnick
Contributor

It sounds like our naming is a bit confusing. ReferencePolicy is a part of the main API, not an extension mechanism like the other Policy mechanism we discussed. It's configured alongside objects that are not part of the Gateway API, like Secret or Service, to safely do cross-namespace references. ReferencePolicy objects do not need to be configured on a Route.

Other Policy objects are currently implementation-specific, and I don't think we've got many examples of them actually being implemented yet. They are intended to be an extension point for implementations to do things that are difficult to do within the main API resources. The main API resources need to be used by many implementations with different capabilities, so we need to ensure that the capabilities used are available to everyone.
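
For context, the policy-attachment pattern mentioned here hangs an implementation-specific resource off a Gateway API object via a targetRef; everything in this sketch (group, kind, and fields) is invented for illustration:

```yaml
apiVersion: example.io/v1alpha1        # hypothetical implementation group
kind: LoadBalancingPolicy              # hypothetical policy kind
metadata:
  name: wrr-for-my-route
  namespace: route-ns
spec:
  targetRef:                           # GEP-713-style attachment point
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: my-route
  default:
    algorithm: WeightedRoundRobin      # illustrative field
```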

In terms of a standard way to show that configuration has been accepted, the "Accepted" Condition in the status is the closest thing we have to this. We've got some open issues to clear up how this Condition works (see #1111 for more info).
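
For illustration, an "Accepted" Condition surfaces in an object's status roughly like this (field values are examples only; on Routes the conditions sit under per-parent status):

```yaml
status:
  conditions:
  - type: Accepted
    status: "True"
    reason: Accepted
    message: "Route accepted by the Gateway"      # example message
    observedGeneration: 1
    lastTransitionTime: "2022-05-01T00:00:00Z"
```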

I think that a UI to show how Gateway API resources are linked would be an amazing tool!

@sdjksdajshd
Author

@youngnick Is a field missing that shows what is actually in effect?

GatewayClass: attach Policy1, load-balancing algorithm is WR

Gateway: attach Policy2, load-balancing algorithm is NULL

Route: attach Policy3, load-balancing algorithm is WRR

Now we need to tell the customer, for every resource, which load-balancing algorithm actually takes effect; see the hypothetical sketch below.
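
Purely as a sketch of the ask, a status field like the following (which does not exist in the Gateway API) could surface the values that actually apply after merging:

```yaml
status:
  effectivePolicy:                   # invented field, illustration only
    loadBalancingAlgorithm: WRR      # the Route-level Policy3 value wins here
    source: Policy3                  # which attached policy supplied the value
```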

Sorry for my English; what I mean may differ from what I write.

@youngnick
Contributor

Ah, you are saying that we don't have any way to show what policy is in effect on an object. That is a really good point that we haven't addressed yet.

I agree that we should do that.

@sdjksdajshd
Author

@youngnick

If we think a uniform Policy standard is hard, why not make an experimental version? Logic is always how we make sense of the world.

Maybe a Policy standard would be easy for LB providers.

@sdjksdajshd
Author

sdjksdajshd commented May 10, 2022

We have implemented a Policy CRD.

I will go discuss it with our developers; if our work is useful, we will open a pull request.

---- update
Sorry, actually we have not finished a demo; I misunderstood my colleague.

@sdjksdajshd
Author

Thanks very much. I expect the Gateway API to become the new de facto standard.

When can we have a beta version? Looking forward to it. (*❦ω❦)

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 8, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 7, 2022
@youngnick
Contributor

/remove-lifecycle rotten

@sdjksdajshd I'm sorry I didn't get back to you - if you've implemented a Policy resource, I'd love to see a writeup or presentation to one of our meetings, or anything else you can do to share. Policy resources are a bit underspecified at the moment, because there aren't a lot of examples yet.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Sep 7, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 6, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 5, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Feb 4, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
