
Added queueMicrotask support #88

Open
wants to merge 4 commits into base: master
Conversation

tinchoz49

No description provided.

@calvinmetcalf
Collaborator

You'll note the ridiculousness we use for grabbing setTimeout. That's because this is sometimes run in environments that don't have globals set, so we'd want to check for the presence of that method inside of a try/catch, since we also don't know what the global object will be named and we don't want an error about unknown variables.
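A minimal sketch of the detection pattern described above (an assumption about the approach, not the package's exact code): grab setTimeout inside a try/catch so environments with no global binding can't blow up with a ReferenceError, and fall back to a stub that throws a descriptive error.

```javascript
// Grab setTimeout defensively: `typeof` on an undeclared identifier is safe,
// and the surrounding try/catch guards against exotic hosts where even the
// lookup can throw. If it's missing, install a stub with a clear error.
var cachedSetTimeout;
try {
  if (typeof setTimeout === 'function') {
    cachedSetTimeout = setTimeout;
  } else {
    cachedSetTimeout = function () {
      throw new Error('setTimeout is not defined');
    };
  }
} catch (e) {
  cachedSetTimeout = function () {
    throw new Error('setTimeout is not defined');
  };
}
```

The same dance would be needed for queueMicrotask, since its presence can no more be assumed than setTimeout's.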

This would be a great pull on immediate, but I'm not so sure about here, especially because I'm not actually sure how much of a benefit you'll get. You may get up to 4 ms of latency reduction in certain apps, but that's a best case, and since setTimeout is not used for recursive calls you're never going to get much more than that at a time. Plus, this is latency, not performance; those 4 ms aren't going to be blocking anything.

@tinchoz49
Author

tinchoz49 commented Jun 6, 2020

> You'll note the ridiculousness we use for grabbing setTimeout. That's because this is sometimes run in environments that don't have globals set, so we'd want to check for the presence of that method inside of a try/catch, since we also don't know what the global object will be named and we don't want an error about unknown variables.

Makes sense. I didn't notice that code before; I can update it to use that approach.

> this would be a great pull on immediate

Yes, I can do a PR for immediate :)

> but I'm not so sure about here, especially because I'm not actually sure how much of a benefit you'll get. You may get up to 4 ms of latency reduction in certain apps, but that's a best case, and since setTimeout is not used for recursive calls you're never going to get much more than that at a time. Plus, this is latency, not performance; those 4 ms aren't going to be blocking anything.

I never had a latency issue with this process.nextTick shim before, until I tried to use level-js in the browser.

I started seeing that it takes seconds to iterate over a readable stream in leveldb, and it seems that was caused by the process.nextTick call here: https://github.com/nodejs/readable-stream/blob/master/lib/_stream_readable.js#L561

I made a repository with the results:

https://github.com/tinchoz49/level-bench

Using setTimeout, it takes around 4.5 seconds to iterate over 1000 items; using queueMicrotask, 76 ms.
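That gap is easy to reproduce with a small sketch (hypothetical, not the linked benchmark): every setTimeout hop pays the timer minimum (about 1 ms in Node, clamped to 4 ms in browsers after a few levels of nesting), while chained microtasks all drain back-to-back before the event loop turns.

```javascript
// Chain n async hops through a given scheduler and report the elapsed time.
function chain(schedule, n) {
  return new Promise(function (resolve) {
    var start = Date.now();
    (function step(i) {
      if (i === n) return resolve(Date.now() - start);
      schedule(function () { step(i + 1); });
    })(0);
  });
}

async function compare() {
  // 1000 microtask hops drain within a single event-loop turn.
  var micro = await chain(queueMicrotask, 1000);
  // Even 100 setTimeout hops pay the per-timer minimum delay each time.
  var timer = await chain(function (fn) { setTimeout(fn, 0); }, 100);
  console.log('1000 queueMicrotask hops:', micro, 'ms');
  console.log('100 setTimeout hops:', timer, 'ms');
  return { micro: micro, timer: timer };
}
```

With ten times as many hops, the microtask chain still finishes orders of magnitude sooner, which matches the seconds-vs-milliseconds numbers above.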

@tinchoz49
Author

btw, thank you for reviewing the PR

@calvinmetcalf
Collaborator

Both level-mem (via memdown) and abstract-leveldown use my library immediate, but since readable-stream uses process.nextTick, that means process.nextTick is never called from within another process.nextTick call. If you change memdown and abstract-leveldown to use (the slow) process.nextTick instead of (the faster, MutationObserver-based) immediate, then the tests speed up dramatically.

Sigh. One of my assumptions had been that the initial setTimeout would only be called rarely and would be unlikely to be called in a recursive situation, but I guess I didn't take into account it being called repeatedly from a different nextTick shim.

I guess the only questions are:

  1. Should we still use our own little task queue?
  2. Should we be handling the case where somebody later changes queueMicrotask because they want to do something stupid like make their tests synchronous so they run faster? (We are currently handling that.)
  3. Should we be handling the case where, in some strange environment, queueMicrotask is not initially defined but is defined later via a shim or something?
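One hypothetical way to cover questions 2 and 3 (a sketch under assumptions, not the PR's actual code) is to look queueMicrotask up lazily on every call, so a shim installed later, or a replacement swapped in later, is picked up, with setTimeout as the fallback:

```javascript
// Re-check queueMicrotask on each call: `typeof` on an undeclared global is
// safe, so this works even where queueMicrotask never existed, and it also
// picks up a shim (or a test double) installed after this module loaded.
function nextTick(fn) {
  if (typeof queueMicrotask === 'function') {
    queueMicrotask(fn);
  } else {
    setTimeout(fn, 0);
  }
}
```

The trade-off is a typeof check on every call; caching the lookup once at module load would be marginally faster but would miss a shim defined later, which is exactly the case question 3 asks about.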

@tinchoz49
Author

Updated to try to address these:

> Should we still use our own little task queue?

> Should we be handling the case where, in some strange environment, queueMicrotask is not initially defined but is defined later via a shim or something?

@tinchoz49 force-pushed the tinchoz49/add-queuemicrotask-support branch 3 times, most recently from a0110de to 685c6e7 on June 10, 2020 at 14:02
@tinchoz49 force-pushed the tinchoz49/add-queuemicrotask-support branch from 685c6e7 to 197c0ae on June 10, 2020 at 14:03
@gsvarovsky

Re #88 (comment)

The memdown/readable-stream interaction is also giving me a headache, so thanks very much for looking at this, guys. This issue is so deeply entrenched in browser/JavaScript mechanics, I'd love to have a canonical solution for it.

I've tested the tinchoz49/add-queuemicrotask-support branch and I see a 10x speed improvement for my test case.

@feross
Contributor

feross commented Feb 10, 2021

Getting queueMicrotask into this package would be a huge performance win for users with modern browsers (most users!)

What do folks think about dropping support for ancient browsers and strange environments to massively simplify this package? As one data point, over in buffer we've even dropped IE11 support. Many browserify shims don't support IE10 and older anymore.

If we release this as a new major version then folks who still need the hacks can pin to this version of process as described here: browserify/browserify#1980

@calvinmetcalf
Collaborator

@feross, this package gets pulled in automatically in just a mind-boggling number of weird places without the user explicitly asking for it, so again I'm really hesitant to drop any support.

@ljharb

ljharb commented Feb 10, 2021

Please don't drop support for anything.
