added queue manager #192

Open

wants to merge 1 commit into main
Conversation


@Krysztal Krysztal commented Apr 27, 2021

The WoW API can handle only 100 requests per second; beyond that rate, most requests fail.
This is my proposal for how we can manage it. I created a queue manager, which should be a singleton (per region) and has to be added to all WarcraftClient instances.

This is not in a finished state. I want to know your opinion about functionality like this before I finish it. Maybe you will propose a better solution, as more advanced programmers/architects.
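The per-region singleton queue manager described above could be sketched as follows. The project itself is C#; this is a language-neutral illustration in Python, and all names (`QueueManager`, `for_region`, `wait_for_slot`) are hypothetical, not taken from the PR's code:

```python
import threading
import time
from collections import deque


class QueueManager:
    """Per-region singleton that throttles callers to at most
    `max_per_second` requests in any one-second sliding window."""

    _instances = {}
    _instances_lock = threading.Lock()

    def __init__(self, max_per_second=100):
        self.max_per_second = max_per_second
        self._timestamps = deque()   # send times within the last second
        self._lock = threading.Lock()

    @classmethod
    def for_region(cls, region):
        # One shared instance per region, as the PR proposes.
        with cls._instances_lock:
            if region not in cls._instances:
                cls._instances[region] = cls()
            return cls._instances[region]

    def wait_for_slot(self):
        # Block until a request can be sent without exceeding the cap.
        while True:
            with self._lock:
                now = time.monotonic()
                # Drop timestamps older than one second.
                while self._timestamps and now - self._timestamps[0] >= 1.0:
                    self._timestamps.popleft()
                if len(self._timestamps) < self.max_per_second:
                    self._timestamps.append(now)
                    return
                sleep_for = 1.0 - (now - self._timestamps[0])
            time.sleep(max(sleep_for, 0.001))
```

Each client instance would call `QueueManager.for_region("eu").wait_for_slot()` before sending, so all clients for a region share one rate budget.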

@danjagnow danjagnow self-requested a review April 29, 2021 01:09
Collaborator

@danjagnow danjagnow left a comment


This is a great idea! I will look this over more closely and provide some feedback.

@willwolfram18
Collaborator

TL;DR: I feel like we don't want to couple the WarcraftClient itself to Blizzard's rate limit requirements, and instead developers should do their own compensation for it. If we really want this, maybe it should be a separate package, possibly using Polly, to implement this type of behavior.

I've been trying to think on this PR a bit, and I'm not sure how I feel about it. On the one hand, I totally get why this might be implemented in the HTTP client. On the other hand, it feels like this isn't something that WarcraftClient itself should be responsible for: its purpose and focus is making the HTTP requests, not necessarily handling rate limiting. Thinking about other frameworks, the Azure Cosmos DB client comes to mind: it throws exceptions to let applications know that a rate limit (HTTP 429) was hit, forcing application developers to compensate for it.

I think I would err on the side of the latter, asking developers to determine the best approach for their specific situation and use case if they are encountering Blizzard's rate limits. I also think it would adhere to SOLID principles a little better if the WarcraftClient continues focusing on HTTP requests, and something that wraps or extends it handles rate-limit logic. Perhaps there is a way to implement something like this in a separate extensions project that leverages Polly for request retries in the event of 429s?
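The wrap-and-retry idea above could look roughly like this. Polly itself is a .NET library; this is a language-neutral sketch in Python, and the names `with_429_retries` and `RateLimitedError` are hypothetical:

```python
import time


class RateLimitedError(Exception):
    """Raised when the underlying API answers with HTTP 429."""


def with_429_retries(send, max_attempts=3, base_delay=0.25):
    """Wrap a request-sending callable so that 429 responses are
    retried with exponential backoff. The client stays focused on
    HTTP; the retry policy lives outside it."""

    def wrapped(*args, **kwargs):
        for attempt in range(max_attempts):
            try:
                return send(*args, **kwargs)
            except RateLimitedError:
                if attempt == max_attempts - 1:
                    raise  # out of attempts; surface the error
                # Back off: base_delay, 2x, 4x, ...
                time.sleep(base_delay * (2 ** attempt))

    return wrapped
```

A separate extensions package could apply exactly this shape of decorator to the client's request method, leaving the core client untouched.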

@danjagnow
Collaborator

I finally got some time to look into this a bit more. According to the Throttling section of the Getting Started documentation...

API clients are limited to 36,000 requests per hour at a rate of 100 requests per second. Exceeding the hourly quota results in slower service until traffic decreases. Exceeding the per-second limit results in a 429 error for the remainder of the second until the quota refreshes.

So the hourly cap allows for an average of 10 requests per second (36,000 / 60 / 60 = 10), and there is an additional per-second cap to limit bursts within that window. So you could theoretically burn through the entire 36,000 requests in the first 6 minutes of the hour at 100 requests per second, and then you'd have "slower service" for the remaining 54 minutes of the hour.

There are a few possible approaches to dealing with this in this package:

  1. Do nothing. Rely on the package consumer to understand the throttling rules and make provision for them (if their use case even demands it, as with bursty batch processing or a heavy-traffic website).
  2. Retry 429s. Have the package (or a companion package) handle automatic retries for 429 responses. Other than the delay, this behavior would be transparent to the consumer.
  3. Proactively rate-limit. The most elaborate approach would involve proactively rate-limiting the calls to the service in an attempt to avoid 429 responses altogether.

In addition to the proposed code in this PR, there are some interesting approaches to option 3, such as the token bucket algorithm. There are .NET implementations of this algorithm, like RateLimiters (package at Bert.RateLimiters) and Esendex.TokenBucket. The difference between the burst limit (100 per second) and the hourly limit (36,000 per hour) makes this a bit more nuanced, though.
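A minimal sketch of the token bucket algorithm mentioned above, plus a hypothetical two-bucket composition for the dual per-second/hourly limits (illustrative Python only; this is not code from either of the named packages):

```python
import time


class TokenBucket:
    """Classic token bucket: holds at most `capacity` tokens,
    refilled at `refill_rate` tokens per second; each request
    consumes one token."""

    def __init__(self, capacity, refill_rate):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def try_consume(self, tokens=1):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last_refill) * self.refill_rate,
        )
        self.last_refill = now
        if self.tokens >= tokens:
            self.tokens -= tokens
            return True
        return False


def blizzard_limiter():
    # Hypothetical composition: a request goes out only if both the
    # burst bucket (100/sec) and the hourly bucket (36,000/hour,
    # i.e. 10 tokens/sec average) permit it. Note this naive `and`
    # still spends a burst token when the hourly bucket refuses; a
    # production version would check both before consuming either.
    burst = TokenBucket(capacity=100, refill_rate=100)
    hourly = TokenBucket(capacity=36_000, refill_rate=10)

    def allow():
        return burst.try_consume() and hourly.try_consume()

    return allow
```

The nuance mentioned above shows up in the capacities: the hourly bucket is what eventually throttles sustained traffic to the 10/second average, while the burst bucket only shapes short spikes.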

My inclination is to look at option 2. That seems like a simple approach that provides good value for most consumers, doesn't involve much code, and seems to have a low risk of unintended consequences.

Thoughts?

3 participants