feat: Enforce service specific rate limits for Lightpush #667
Rate-limiting would be required for the following protocols. @jm-clius, @alrevuelta: since RLN would be enabled, I don't think we would need to enforce additional rate-limiting on lightpush. Please confirm. This task requires analyzing what sort of limits should be applied: req/sec, max streams, max connections, etc. We also need to come up with the configurations we want to expose to users. @jm-clius, @fryorcraken: is this something that is required for the launch of The Waku Network? I think it would be.
RLN rules need to be applied to lightpush on the server side so that the relaying node doesn't get descored on gossipsub for forwarding messages with incorrect or missing proofs (when RLN applies).
I think this is about limiting the number of requests that can be opened for the protocol on the server side. I imagine we want this rate limit to apply at a higher layer than the message generation rate (which is what RLN limits). Perhaps some token bucket mechanism applying to all request-response protocols?
I'd say yes, at least in a basic form. One thing here is that we may not want to make this overly configurable at first. We could choose some value (e.g. max 5 requests per second) and design something simple around that.
Makes sense. cc @richard-ramos in case it is not already being done. We can take this up as part of #655?
Agreed, we can start simple without any configuration by enforcing the two types of limits below. Do provide any comments.
But we may need to modify the ratio of relay to service peers defined in waku-org/nwaku#1813. This sub-item can be taken up as part of #679.
Yes, correct. We can use a token-bucket filter to achieve such a limit for other req/resp protocols. Refer to the comment above on what limits we can apply.
Already done! In go-libp2p-pubsub, messages are evaluated with validators before being broadcast, so relaying nodes will not be scored negatively due to lightpush messages containing invalid proofs / waku messages. However, validating proofs does have some cost associated with it, so perhaps it makes sense to add a service rate limit to prevent lightpush clients from DoSing lightpush nodes?
@jm-clius where does this fit best in the Network roadmap? waku-org/research#3
Probably under the DoS Protection track?
Yeah, I'd include this under the DoS Protection track.
After some thinking, I believe we might need a new epic/milestone for issues like this that do not fall neatly into existing epics or the strictly necessary scope for the gen 0 network launch. Taking up this conversation elsewhere.
Good morning and interesting point! 🥇 In my opinion, I vote for having only one rate limit per node and not having different rate limits per service because:
Nevertheless, we cannot make any decision until we measure the actual limits. @jm-clius, @vpavlin: I've created a draft issue in
Valid points, @Ivansete-status.
You are absolutely right @chaitanyaprem in the sense that it may require different resources. I'm not saying that your approach is wrong. I think we should take measurements of such limits; that will help us decide the correct approach. For me, the simpler the better, though :)
Thinking about it in detail, I think the following are the two reasons for having such rate limits:
The second point would also probably need a protocol upgrade to indicate to light clients what a service node's capacity is and how much it can allocate to the client, as part of some capability exchange. That would give a light client an idea of whether to continue using the service node or not. But anyway, that is for later. I agree with you for now: since the objective is only DoS prevention, defining an overall rate limit would suffice. cc @harsh-98: as per the discussion above, we can start with a simple rate-limit implementation covering all services for now to prevent DoS. To consider a fine-grained rate limit per service, we need to do some measurements and more analysis to come up with an approach.
Review Q1 2024: not yet implemented in nwaku.
This would be required for status-desktop, as it will be providing Filter and Lightpush services per status-im/status-go#4655. The scope can be limited to only the Lightpush service for now, since Filter is restricted by the number of Filter subscriptions already set in the status-go PR mentioned above. Use a simple leaky or token bucket to restrict req/sec for Lightpush. Some sort of rate-limiting for Store also has to be enabled if StoreNode functionality gets enabled in status-desktop. @chair28980 adding this to the status-waku-integ board for tracking.
Not planned for status. The functionality was removed from status-go. I'll attempt to implement a leaky bucket since it's a simple addition to the code.
Removing from status-waku-integ per Richard's comment.
I think @richard-ramos was talking about Store. For lightpush, since we have enabled it on desktop, this would be required as part of status-waku-integ in order to protect the desktop node from being DoSed with lightpush requests. cc @chair28980
@chaitanyaprem thank you for the clarity :)
Problem
As discussed in #594 (comment), we should enforce rate limits for heavy protocols such as store; otherwise a node can be DoSed by store requests. Example: only allow serving x requests/second or y bytes/second, and beyond that, temporarily blacklist the peer.
Suggested solution
We can use go-libp2p's scaling limit config, e.g. the limits go-libp2p sets for its default services: https://github.com/libp2p/go-libp2p/blob/master/limits.go#L15. We can control the maximum number of streams and the memory allocated to each service.
Provide configuration for these rate limits, along with good defaults.
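A per-service limit with go-libp2p's resource manager might look roughly like the sketch below. This is an assumption-heavy illustration: the exact `rcmgr` API is version-dependent, the `"lightpush"` service name is hypothetical, and all numeric limits are placeholders rather than measured values.

```go
package main

import (
	"log"

	"github.com/libp2p/go-libp2p"
	rcmgr "github.com/libp2p/go-libp2p/p2p/host/resource-manager"
)

func main() {
	// Start from go-libp2p's default scaling limits and add a per-service
	// limit for a hypothetical "lightpush" service. All numbers here are
	// illustrative placeholders, not measured values.
	scaling := rcmgr.DefaultLimits
	scaling.AddServiceLimit("lightpush",
		rcmgr.BaseLimit{Streams: 64, StreamsInbound: 32, StreamsOutbound: 32, Memory: 4 << 20},
		rcmgr.BaseLimitIncrease{Streams: 16, StreamsInbound: 8, StreamsOutbound: 8, Memory: 1 << 20},
	)

	// Scale the base limits with the machine's available resources.
	limiter := rcmgr.NewFixedLimiter(scaling.AutoScale())
	mgr, err := rcmgr.NewResourceManager(limiter)
	if err != nil {
		log.Fatal(err)
	}

	host, err := libp2p.New(libp2p.ResourceManager(mgr))
	if err != nil {
		log.Fatal(err)
	}
	defer host.Close()
}
```

Note that the resource manager caps streams, connections, and memory per service; it does not by itself enforce a requests-per-second rate, so a token or leaky bucket would still be needed on top of it for req/sec limits.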
Alternatives considered
None
Acceptance criteria
Validate by pumping service traffic and verifying rate limits are applied as configured.