This repository has been archived by the owner on May 3, 2022. It is now read-only.

Rate limiting #40

Open
pwnall opened this issue Oct 17, 2018 · 2 comments
pwnall commented Oct 17, 2018

We should have a story for what happens when a user consumes too many resources. Examples:

  • too many pending requests
  • too many open stores / pending open requests
  • a keys() / values() call returning too many objects (assuming we keep some form of convenience APIs that batch all keys/values in a single array)

AFAIK, the typical story is that the implementer (browser) has to figure all this out. There are many good reasons to go this route, like not capping future workloads / machines. At the same time, this approach also leads to bugs and confusion, as apps eventually hit the real limits of available RAM or address space. So, while I imagine this API will follow the established pattern, I'd like us to be deliberate about it -- it's easy to add rate limiting early on and relax the limits over time, and nearly impossible to add limits after an API gets widespread adoption.

As a concrete example, we (Chrome) see a non-trivial amount of out-of-memory crashes due to buggy code that creates a lot of IndexedDB transactions in a loop or code that queues up a lot of requests before waiting for any results.
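To make the bug pattern concrete, here is a hypothetical illustration using a promise-based `store` stand-in (not a real IndexedDB API): queuing every request up front versus awaiting bounded batches.

```javascript
// Hypothetical illustration of the bug pattern described above. `store` is a
// stand-in for a promise-based storage API, not a real IndexedDB interface.
async function writeAllAtOnce(store, entries) {
  // Anti-pattern: all requests are queued immediately, so memory for every
  // pending request is held at once.
  await Promise.all(entries.map(([key, value]) => store.set(key, value)));
}

async function writeInBatches(store, entries, batchSize = 100) {
  // Bounded alternative: at most `batchSize` requests are pending at a time.
  for (let i = 0; i < entries.length; i += batchSize) {
    const batch = entries.slice(i, i + batchSize);
    await Promise.all(batch.map(([key, value]) => store.set(key, value)));
  }
}
```

The batch size here is arbitrary; the point is only that the second version keeps the number of in-flight requests bounded regardless of how many entries are written.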

Strawman proposal:

  • at most 1,000 pending requests
  • at most 100 open stores
  • at most 10,000 results from keys() / values() (we could implement that by passing a limit of 10,001 to getAll() and throwing if we get 10,001 results back)
  • exceeding any limit results in rejected promises
  • (maybe) these limits can be read and changed by calling some method on localStore; changing the limits loses any guarantee that the code won't crash
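The keys() / values() cap from the third bullet could be sketched like this; `backend.getAll(count)` is an assumption standing in for something like `IDBObjectStore.getAll(query, count)`, and the default limit matches the strawman.

```javascript
// Sketch of the keys()/values() cap proposed above: ask a hypothetical
// backend for one result more than the limit, and reject if that extra
// result actually comes back. `backend.getAll(count)` is an assumption,
// not a real API.
async function boundedKeys(backend, limit = 10000) {
  const results = await backend.getAll(limit + 1);
  if (results.length > limit) {
    // Exceeding the limit results in a rejected promise, per the strawman.
    throw new Error('keys() would return more than ' + limit + ' results');
  }
  return results;
}
```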

Note: It is quite possible to stay within these limits and crash the browser. The numbers are meant as a rough separator between "reasonable" workloads and "probably bugs".
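A minimal sketch of the pending-request limit, assuming a promise-based wrapper around some backend. In reality the browser would enforce this internally; the class and method names here are illustrative only.

```javascript
// Minimal sketch of the pending-request limit from the strawman. `backend`
// and the method names are assumptions for illustration; a real
// implementation would live inside the browser.
class RateLimitedStore {
  constructor(backend, maxPendingRequests = 1000) {
    this.backend = backend;
    this.maxPendingRequests = maxPendingRequests;
    this.pendingRequests = 0;
  }

  async get(key) {
    if (this.pendingRequests >= this.maxPendingRequests) {
      // Exceeding the limit rejects the promise instead of queuing further.
      throw new Error('too many pending requests');
    }
    this.pendingRequests++;
    try {
      return await this.backend.get(key);
    } finally {
      this.pendingRequests--;
    }
  }
}
```

Because the counter is decremented in `finally`, capacity is released whether the underlying request resolves or rejects.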

domenic (Collaborator) commented Oct 21, 2018

It would make the most sense to me to add this kind of limitation capability to IndexedDB, as it's a general problem affecting all IDB operations, and then use it (or allow it as a similar option) for async local storage. What do you think?

In other words, it's unclear to me what new problems async local storage brings to the table here. Is the idea just that we get a chance to set new defaults that we wish we'd had all along in IndexedDB?

pwnall commented Oct 21, 2018

Your last sentence nails my intention -- thank you for clarifying this!

  1. IDB is fairly widely deployed at this point, so the changes I've outlined may not be Web-compatible. Figuring this out requires resources that my team does not have at the moment.

  2. I see async-local-storage as an easy intro to client-side storage. If my understanding is correct, I think the API should (within reason) guard against the common problems we've seen in IDB usage. The most popular ways to crash Chrome using IDB involve DoS-ing yourself with too many requests or overly large objects (the subject of separate issues).
