fix(debounceTime): improves performance on quick succession of emits #6049

Merged Feb 24, 2021 · 1 commit
Conversation

@voliva (Contributor) commented Feb 24, 2021

Addresses #6045

This turned out to be similar to debounce, but with the optimisation on the scheduling bit. The existing tests are unchanged and still green; I'm not sure how I can add a test for performance.

This optimization can't be used in the debounce operator, as that one takes in a stream, where the only way of getting the "time window" is by unsubscribing from the previous stream and subscribing to the current one. With debounceTime, since the dueTime is always the same, we can optimise.
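
Roughly, the idea is something like the sketch below. To be clear, this is a simplified illustration written as a standalone operator, not the actual code in this PR; the name debounceTimeSketch is made up and the real implementation handles more edge cases.

import {
  MonoTypeOperatorFunction,
  Observable,
  SchedulerLike,
  Subscription,
  asyncScheduler,
} from "rxjs"

function debounceTimeSketch<T>(
  dueTime: number,
  scheduler: SchedulerLike = asyncScheduler,
): MonoTypeOperatorFunction<T> {
  return (source) =>
    new Observable<T>((subscriber) => {
      let lastValue: T | undefined
      let hasValue = false
      let lastTime = 0
      let activeTask: Subscription | null = null

      const emit = () => {
        if (hasValue) {
          hasValue = false
          subscriber.next(lastValue!)
        }
      }

      const subscription = source.subscribe({
        next: (value) => {
          lastValue = value
          hasValue = true
          lastTime = scheduler.now()
          // Key difference: on a quick succession of emits we don't cancel
          // and re-schedule an async action per value; we only record when
          // the latest value arrived and keep the one pending action alive.
          if (!activeTask) {
            activeTask = scheduler.schedule(function () {
              const elapsed = scheduler.now() - lastTime
              if (elapsed < dueTime) {
                // Newer values arrived while waiting: push this same action
                // out for the remaining time instead of creating a new one.
                this.schedule(undefined, dueTime - elapsed)
              } else {
                activeTask = null
                emit()
              }
            }, dueTime)
          }
        },
        error: (err) => subscriber.error(err),
        complete: () => {
          // debounceTime flushes the pending value on complete
          emit()
          subscriber.complete()
        },
      })

      return () => {
        activeTask?.unsubscribe()
        subscription.unsubscribe()
      }
    })
}

At most one scheduled action exists per burst, so thousands of synchronous emissions only update lastValue and lastTime instead of tearing down and re-creating scheduling work each time.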

In case this PR moves forward, would it be worth also having the same optimisation on RxJS@^6's debounceTime?

@josepot (Contributor) commented Feb 24, 2021

Thanks a lot for this, @voliva!

I would like to describe a situation I found myself in pretty recently, which I think highlights the value of this change:

I was trying to understand why a task was getting stuck for ~400ms every time a certain kind of message came through the websocket. At first I thought it was a problem with a library's parser, but then I realized the issue was a piece of code that looked more or less like this:

const gePricesHistory$ = (symbol: string) =>
  concat(
    getRemoteProcedureCall$<Price[], string>(
      "priceHistory",
      "getPriceHistory",
      symbol,
    ).pipe(mergeAll()),
    getStream$<Price, string>("pricing", "getPriceUpdates", symbol),
  ).pipe(
    scan((acc, price) => {
      const result = acc.concat(price)
      return result.length <= HISTORY_SIZE ? result : result.slice(1)
    }, [] as HistoryPrice[]),
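    // each of the thousands of entries flattened by mergeAll() hits this synchronously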
    debounceTime(0),
  )

The problem was that in some instances the RPC call that returns the initial history would have thousands of entries, which meant that the JS thread would "freeze" for ~400ms because debounceTime(0) was not behaving the way I thought it would. Once I refactored the code to something like this:

const gePricesHistory$ = (symbol: string) =>
  mergeWithKey(
    {
      init: getRemoteProcedureCall$<Price[], string>(
        "priceHistory",
        "getPriceHistory",
        symbol,
      ).pipe(
        map((history) => history.slice(Math.max(0, history.length - HISTORY_SIZE))),
      ),
      update: getStream$<Price, string>("pricing", "getPriceUpdates", symbol),
    },
    1,
  ).pipe(
    scan((acc, event) => {
      if (event.type === "init") return event.payload
      const result = acc.concat(event.payload)
      return result.length <= HISTORY_SIZE ? result : result.slice(1)
    }, [] as HistoryPrice[]),
  )

then the performance issue was instantly gone.

I know that the second version will always perform better than the first one, but I would never have anticipated a debounceTime(0) having that kind of performance overhead.
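
In case it helps anyone reproduce this, a minimal sketch of the same shape of problem is just pushing a large synchronous burst through debounceTime(0); the operators are plain RxJS, but the burst size and timing label here are made up:

import { from } from "rxjs"
import { debounceTime } from "rxjs/operators"

// Simulate a large synchronous burst, like the flattened price history
const burst = Array.from({ length: 10_000 }, (_, i) => i)

console.time("debounceTime(0) over a synchronous burst")
from(burst)
  .pipe(debounceTime(0))
  .subscribe({
    // debounceTime flushes the pending value when the source completes,
    // so this fires right after the synchronous burst has been processed
    complete: () => console.timeEnd("debounceTime(0) over a synchronous burst"),
  })

As far as I understand, before this change every one of those next calls scheduled and then tore down an async action, which is where the overhead was going; with it, the per-value work during the burst is much smaller.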

@benlesh (Member) left a comment

Extremely well done!

I have some very minor nits around stylistic and comment type stuff, but nothing even worth mentioning. Very impressed.
