While getting ready for Meteor 3.0, we've made a number of internal changes (#412, #413, #424, #428; soon also one for #443). Most of them simply build on top of the existing logic to support Promises virtually everywhere; some are actual redesigns (e.g., the shape of _scopeBindings).
Of course, adding more logic always comes with a cost, and we should check whether it's acceptable. So far we haven't heard of any visible regressions, but it should be verified nevertheless.
It's strongly connected to #383, since the majority of the benchmarks should remain synchronous-only, so that we can compare the results with previous versions.
I've looked into it over the week, and I'm having trouble deciding how to approach it.
On the one hand, we'd like to measure the performance change in a real-life scenario. That involves a complete data change -> render -> DOM change cycle. However, the non-DOM operations take so little computation here that they're barely visible in the profiler.
On the other, we could focus on the underlying helpers, like Spacebars.dot. However, as noted above, in a real-life scenario these account for only 1-3% of CPU time.
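To illustrate the micro-benchmark angle, here is a minimal sketch of how one could isolate the cost of a Promise-awareness check on a hot lookup path. The `dotSync` and `dotAsyncAware` functions below are simplified stand-ins for illustration only, not Blaze's actual Spacebars.dot implementation:

```javascript
// Simplified stand-ins (NOT Blaze's real code) showing how an extra
// Promise check per lookup adds overhead to a hot path.
function dotSync(obj, key) {
  return obj == null ? undefined : obj[key];
}

// Async-aware variant: one extra instanceof check on every call.
function dotAsyncAware(obj, key) {
  if (obj instanceof Promise) {
    return obj.then(resolved => dotAsyncAware(resolved, key));
  }
  return obj == null ? undefined : obj[key];
}

// Time `iterations` nested lookups of `a.b` and return milliseconds.
function bench(fn, iterations) {
  const obj = { a: { b: 1 } };
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn(fn(obj, 'a'), 'b');
  return Number(process.hrtime.bigint() - start) / 1e6;
}

const N = 1e7;
console.log('sync-only  :', bench(dotSync, N).toFixed(1), 'ms');
console.log('async-aware:', bench(dotAsyncAware, N).toFixed(1), 'ms');
```

Even if the async-aware variant is measurably slower in isolation, that difference is diluted to near-invisibility once DOM work dominates the cycle, which is exactly the dilemma described above.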
I did prepare a basic example that rerenders many times on click.