
ServiceExternalIP: Allow multiple services sharing one IP #4309

Closed
meibensteiner opened this issue Oct 18, 2022 · 19 comments · Fixed by #6480

@meibensteiner

Describe the problem/challenge you have
Our production environment requires some services to share a single IP. This is currently not possible, and it is the single reason we wouldn't be able to remove MetalLB altogether and just use Antrea with the ServiceExternalIP feature flag.

Describe the solution you'd like
A solution similar to MetalLB's would be nice: a simple annotation on the Service, metallb.universe.tf/allow-shared-ip.

Anything else you would like to add?
I'm currently looking into proxying those requests to allow IP sharing, but a native solution in Antrea would be preferred.

@meibensteiner meibensteiner added the kind/feature label Oct 18, 2022
@tnqn
Member

tnqn commented Oct 18, 2022

Thanks @meibensteiner for the proposal. I think technically it would work if the services use different ports. We also received another feature request for sharing the same IP between an Egress and a Service LoadBalancerIP (#3334). Actually, if we specify the same IP for multiple Egresses, it should already work. So I'm wondering whether we should just adjust the IP binding strategy a little: instead of allowing an external IP to be bound to only one Egress or one Service, we could allow specifying the same IP for multiple objects, whether Egresses or Services, to address both requests. Egresses and Services that don't specify IPs explicitly would still be allocated an IP that is not in use.

Sharing an IP between Services would then look like this:

apiVersion: v1
kind: Service
metadata:
  name: my-service1
  annotations:
    service.antrea.io/external-ip-pool: "service-external-ip-pool"
spec:
  selector:
    app: MyApp1
  loadBalancerIP: "10.10.0.2"
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: my-service2
  annotations:
    service.antrea.io/external-ip-pool: "service-external-ip-pool"
spec:
  selector:
    app: MyApp2
  loadBalancerIP: "10.10.0.2"
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 9376
  type: LoadBalancer

Does this usage make sense to you, or is an annotation like "metallb.universe.tf/allow-shared-ip" expected for some reason? @meibensteiner
cc @jianjuns @antoninbas @xliuxu for opinions too.

@antoninbas
Contributor

The value of the annotation metallb.universe.tf/allow-shared-ip is unclear to me. I thought it was a means to enable IP sharing without having to explicitly provide an IP, but that's not really the case:

If these conditions are satisfied, MetalLB may colocate the two services on the same IP, but does not have to.

(from https://metallb.universe.tf/usage/#ip-address-sharing)
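
For reference, MetalLB's sharing model per the linked docs is opt-in via a common sharing key on both Services, roughly like this sketch (names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: dns-tcp
  annotations:
    # Both Services carry the same sharing key to opt in to colocation.
    metallb.universe.tf/allow-shared-ip: "dns-sharing-key"
spec:
  type: LoadBalancer
  selector:
    app: dns
  ports:
    - protocol: TCP
      port: 53
---
apiVersion: v1
kind: Service
metadata:
  name: dns-udp
  annotations:
    metallb.universe.tf/allow-shared-ip: "dns-sharing-key"
spec:
  type: LoadBalancer
  selector:
    app: dns
  ports:
    - protocol: UDP
      port: 53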

I am fine with Quan's proposal. Do we plan to have some sort of validation if the user mistakenly provides the same IP AND port for 2 Services?

@xliuxu
Contributor

xliuxu commented Oct 19, 2022

+1 for Quan's proposal. Something in my mind:

  1. I recall that for Egress we need to assign the external IP to a dummy interface, which conflicts with the implementation of ServiceExternalIP. So I think it is not easy to share an IP between an Egress and a Service.
  2. What is the expected behavior if Service1 does not have spec.loadBalancerIP set, but the user sets Service2's spec.loadBalancerIP to an IP that has already been allocated to Service1?

@xliuxu xliuxu self-assigned this Oct 19, 2022
@meibensteiner
Author

@tnqn No, I always considered that MetalLB annotation unnecessary; I only brought it up to illustrate my use case.

Did I read this correctly that multiple services using one IP is already possible? I thought I had tried it and was unsuccessful because the IP was already claimed. I will try again and circle back.

@xliuxu
Contributor

xliuxu commented Oct 19, 2022

@meibensteiner Sharing an IP among multiple Services is not possible at the moment. We need to finalize the API design and implement it in a future release.

@tnqn
Member

tnqn commented Oct 19, 2022

@meibensteiner I meant that multiple Egresses using one IP should work today. Egress is another Antrea feature, managing SNAT for egress traffic, which also consumes the external IP pool.
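
For illustration, two Egresses pinned to the same IP would look like this (a sketch; crd.antrea.io/v1alpha2 was the Egress API version at the time):

apiVersion: crd.antrea.io/v1alpha2
kind: Egress
metadata:
  name: egress-app1
spec:
  appliedTo:
    podSelector:
      matchLabels:
        app: MyApp1
  egressIP: 10.10.0.2   # same IP as egress-app2 below
---
apiVersion: crd.antrea.io/v1alpha2
kind: Egress
metadata:
  name: egress-app2
spec:
  appliedTo:
    podSelector:
      matchLabels:
        app: MyApp2
  egressIP: 10.10.0.2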

@tnqn
Member

tnqn commented Oct 19, 2022

I am fine with Quan's proposal. Do we plan to have some sort of validation if the user mistakenly provides the same IP AND port for 2 Services?

Yes. If you don't mean a validating webhook for the Service object, I think we could validate whether the Services can share an IP, and use the ServiceStatus and events associated with the Service to report errors. For example:
When Service A and Service B specify the same LoadBalancer IP and the same port, and Service A is processed first, Service A will get the LoadBalancer IP; we would not set Service B's status.loadBalancer.ingress.ip, but would create an event for it describing the conflict.
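
A sketch of what that could look like (the event message wording is hypothetical, nothing here is an implemented API):

# Service A, processed first, gets the IP reflected in its status:
status:
  loadBalancer:
    ingress:
      - ip: 10.10.0.2
---
# Service B keeps an empty status.loadBalancer; a Warning event with
# hypothetical wording such as the following would describe the conflict:
# "cannot assign 10.10.0.2: port 80/TCP is already used by my-service1"
status:
  loadBalancer: {}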

  • I recall that for Egress we need to assign the external IP to a dummy interface, which conflicts with the implementation of ServiceExternalIP. So I think it is not easy to share an IP between an Egress and a Service.

Now I remember it. Then we could probably start with supporting IP sharing among objects of the same kind first.

  • What is the expected behavior if Service1 does not have spec.loadBalancerIP set, but the user sets Service2's spec.loadBalancerIP to an IP that has already been allocated to Service1?

I think we don't have to care whether the specified IP was previously allocated automatically or manually. We could just allow specifying an IP that is already allocated to other Services when the new Service doesn't conflict with the others.

@vrabbi
Contributor

vrabbi commented Oct 25, 2022

A side note, as I just saw it in the example above:
The spec.loadBalancerIP field is deprecated as of Kubernetes 1.24 and is planned to be removed from core Kubernetes in a few releases. The Kubernetes community's recommendation is for providers to offer a provider-specific annotation for specifying the LoadBalancer IP. It may be worth adding such an annotation to Antrea sooner rather than later, to give users more time before the field is removed.
kubernetes/kubernetes#107235
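
For example, such an annotation could look like the sketch below; the annotation name service.antrea.io/load-balancer-ip is purely hypothetical, not an existing Antrea API:

apiVersion: v1
kind: Service
metadata:
  name: my-service1
  annotations:
    service.antrea.io/external-ip-pool: "service-external-ip-pool"
    # Hypothetical replacement for the deprecated spec.loadBalancerIP field:
    service.antrea.io/load-balancer-ip: "10.10.0.2"
spec:
  selector:
    app: MyApp1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  type: LoadBalancer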

@meibensteiner
Author

Another side note: a feature gate was introduced in Kubernetes 1.20 that allows using different protocols (TCP and UDP) on one LoadBalancer Service. It would be great if Antrea supported that as well.
kubernetes/enhancements#1435

Not a dealbreaker, however, since you could just create two Services: one for UDP, one for TCP.
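
For reference, with the MixedProtocolLBService feature gate enabled, a single Service can expose both protocols, e.g.:

apiVersion: v1
kind: Service
metadata:
  name: dns
spec:
  type: LoadBalancer
  selector:
    app: dns
  ports:
    - name: dns-tcp
      protocol: TCP
      port: 53
    - name: dns-udp
      protocol: UDP
      port: 53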

@github-actions
Contributor

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment, or this will be closed in 90 days

@github-actions github-actions bot added the lifecycle/stale label Jan 24, 2023
@github-actions github-actions bot closed this as not planned Apr 25, 2023
@antoninbas
Contributor

Re-opening this, as it seems like a worthy issue. @tnqn ?

@antoninbas antoninbas reopened this Apr 25, 2023
@antoninbas antoninbas removed the lifecycle/stale label Apr 25, 2023
@tnqn tnqn added this to the Antrea v1.12 release milestone Apr 25, 2023
@tnqn
Member

tnqn commented Apr 25, 2023

Re-opening this, as it seems like a worthy issue. @tnqn ?

I agree. Let me add it to the 1.12 milestone first and see if we can make it. Assigning it to myself for now.

@tnqn tnqn self-assigned this Apr 25, 2023
@tnqn
Member

tnqn commented May 22, 2023

I have some code to support it, but it's risky to include it in 1.12, which is expected to be released this Wednesday. Moving it to 1.13.

@github-actions
Contributor

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment, or this will be closed in 90 days

@github-actions github-actions bot added the lifecycle/stale label Oct 13, 2023
@tnqn tnqn removed the lifecycle/stale label Oct 13, 2023
@luolanzone luolanzone removed this from the Antrea v1.15 release milestone Dec 13, 2023
@github-actions
Contributor

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment, or this will be closed in 90 days

@github-actions github-actions bot added the lifecycle/stale label Mar 13, 2024
@tnqn tnqn removed the lifecycle/stale label Mar 13, 2024
@github-actions
Contributor

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment, or this will be closed in 90 days

@github-actions github-actions bot added the lifecycle/stale label Jun 12, 2024
@tnqn tnqn removed the lifecycle/stale label Jun 12, 2024
@lezruk

lezruk commented Jun 20, 2024

Good afternoon @luolanzone. Looking at this conversation thread, it is not clear whether this was already implemented, dropped, or assigned to a future release. Could you clarify the current state?

@antoninbas
Contributor

@lezruk We have not dropped this feature, but it has not been implemented yet, and we have not committed to a timeline. To emphasize: we are still planning to support this, which is why the issue has been kept open.

@tnqn
Member

tnqn commented Jun 21, 2024

Reminded by the latest comments, I picked up the draft code I had for this. I should be able to create a PR next week; hopefully we can get it into v2.1.
