Memory leak with optimistic response #4210
Hello, does anyone have a workaround so far? Downgrading is a bit painful because of #4125 :( (version matching...). Thank you :)
Yeah, this bit me too, I think. Had to temporarily comment out all my optimisticResponses... would love an update here, please :)
Currently experiencing this on Spectrum (https://github.com/withspectrum/spectrum) - this is particularly painful for a chat app that uses optimistic responses for each message sent. I can reliably crash my browser with an OOM error, and the trace is coming from …
Even with optimistic response turned off (before which the app crashed after about 10 updates, once memory usage exceeded 1 GB), it seems very weird how the memory usage of my app changes over time. This gives me a very bad feeling.
We have been hit by this issue too and would love to see a fix for it. Any updates or workarounds would be highly appreciated. Also, is there anything we can do to help?
This should help with #4210: temporary optimistic `ObjectCache` objects will no longer be `instanceof DepTrackingCache`, so the result caching system will be disabled for those reads, and thus the temporary objects will not be retained in any cache keys:
https://github.com/apollographql/apollo-client/blob/7d0ed16a97e57cb3789f352fd4f359eaaa2eec0b/packages/apollo-cache-inmemory/src/readFromStore.ts#L123
https://github.com/apollographql/apollo-client/blob/7d0ed16a97e57cb3789f352fd4f359eaaa2eec0b/packages/apollo-cache-inmemory/src/readFromStore.ts#L144
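In essence, the guard works like this minimal, self-contained sketch (stand-in types and a stand-in result cache, not the actual apollo-cache-inmemory source at the lines linked above):

```ts
interface NormalizedCache {
  get(dataId: string): object | undefined;
}

// Stand-in for the long-lived, dependency-tracked store.
class DepTrackingCache implements NormalizedCache {
  get(_dataId: string): object | undefined {
    return undefined;
  }
}

// Stand-in for the result cache keyed by query + store identity.
const resultCache = new Map<string, object>();

function executeQuery(store: NormalizedCache, queryKey: string): object {
  if (store instanceof DepTrackingCache) {
    // Only reads against the persistent store are memoized, so only that
    // store can be captured in long-lived cache keys.
    const hit = resultCache.get(queryKey);
    if (hit) return hit;
    const result = {}; // ...actually read from the store here...
    resultCache.set(queryKey, result);
    return result;
  }
  // Temporary optimistic ObjectCache stores take the unmemoized path and
  // become garbage-collectable as soon as the read returns.
  return {}; // ...actually read from the store here...
}
```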
Can folks try running `npm install [email protected]` per #4251?
I did not see a difference. But my project still has optimisticResponse turned off, so I guess that was to be expected.
That's right, this change only prevents memory leaks stemming from optimistic reads. If you're feeling adventurous, you could try re-enabling optimistic responses to see whether the additional memory usage is less than before?
@barbalex The fact that your memory drops back to a few MB shows that you don't have a real leak; otherwise the GC would not be able to collect those 600-700 MB. The only question is why the GC does not kick in earlier, or why collecting those objects is not possible at that time. I recommend investigating this with the Chrome Allocation Profiler.

@benjamn Thanks, I will give it a try tomorrow. Quick question: wouldn't it be sufficient to use the …
O.k., tested with optimistic response. Here comes the test of clicking an option repeatedly and watching memory usage:

The difference between using optimistic response and not using it may be accidental, as the numbers vary. But the update definitely seems to solve the crashing when using optimistic response in my app.
Tested this locally for Spectrum:
Although this is a rather drastic option, creating an `InMemoryCache` that does not attempt to cache previous results may help reduce memory usage, at the expense of cache performance on repeated reads:

```ts
new InMemoryCache({
  resultCaching: false, // defaults to true
})
```

See this comment by @barbalex for one such situation: #4210 (comment). At the very least, this option should help diagnose problems with result caching, even if it's not the right long-term solution.
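For context, wiring that option into a client might look like this (a sketch assuming the apollo-client 2.x packages; the endpoint is hypothetical):

```ts
import ApolloClient from "apollo-client";
import { InMemoryCache } from "apollo-cache-inmemory";
import { HttpLink } from "apollo-link-http";

const client = new ApolloClient({
  link: new HttpLink({ uri: "https://example.com/graphql" }), // hypothetical endpoint
  // Trade repeated-read performance for lower memory retention.
  cache: new InMemoryCache({ resultCaching: false }),
});
```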
@barbalex Since result caching is just an optimization, and it sounds like it may be contributing to memory problems that significantly worsen performance for you, I've pushed another change (in …). I imagine the …
Ok, …
Thank you @benjamn - really appreciate the fix here!
@mpoeter Yes, but the choice was deliberate! In 45c4169 we switched from …
@benjamn I was using `InMemoryCache` directly for a project: I was writing one big query and reading pieces using `readFragment`. With very little load, my instance was quickly running out of memory because of memory spikes. With the latest version, it does not. Thanks for the quick fix on this, very much appreciated by GraphQL lovers out there.
@benjamn Thanks for the clarification. I saw the commit to remove the … BTW: our tests also confirm that the memory issue is now fixed! 👍
As I suspected when we were originally working on this issue, optimistic cache reads could be implemented much more efficiently. This PR should improve the speed and memory consumption of optimistic reads and writes considerably: #4319
Sounds great!
Previously, the `InMemoryCache` maintained an array of recorded optimistic updates, which it would merge together into an entirely new `ObjectCache` whenever performing any single optimistic read. This merging behavior was wasteful, but the real performance killer was calling `this.extract(true)` each time, which copied the entire underlying (non-optimistic) cache, just to create a throw-away `ObjectCache` for the duration of the optimistic read. Worse still, `this.extract(true)` was also called in `recordOptimisticTransaction`, to create a throw-away `RecordingCache` object.

Instead of creating temporary copies of the entire cache, `InMemoryCache` now maintains a linked list of `OptimisticCacheLayer` objects, which extend `ObjectCache` and implement the `NormalizedCache` interface, but cleverly delegate to a parent cache for any missing properties. This delegation happens simply by calling `this.parent.get(dataId)`, so there is no need to extract/copy the parent cache. When no optimistic data are currently active, `cache.optimisticData` will be the same (`===`) as `cache.data`, which means there are no additional layers on top of the actual data.

Lookup time is proportional to the number of `OptimisticCacheLayer` objects in the linked list, so it's best if that list remains reasonably short, but at least that's something the application developer can control. Calling `removeOptimistic(id)` removes all `OptimisticCacheLayer` objects with the given `id`, and then reapplies the remaining layers by re-running their transaction functions.

These improvements probably would have made the excessive memory usage reported in #4210 much less severe, though disabling dependency tracking for optimistic reads (the previous solution) still seems like a good idea.
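A minimal sketch of that delegation pattern (simplified stand-ins, not the actual implementation from the PR):

```ts
type StoreObject = Record<string, any>;

// Stand-in for the plain normalized store.
class ObjectCache {
  protected data: Record<string, StoreObject> = Object.create(null);
  public get(dataId: string): StoreObject | undefined {
    return this.data[dataId];
  }
  public set(dataId: string, value: StoreObject): void {
    this.data[dataId] = value;
  }
}

// Each optimistic layer records only its own writes and delegates every
// other lookup to its parent (the previous layer, or the root cache), so
// no copy of the underlying data is ever made.
class OptimisticCacheLayer extends ObjectCache {
  constructor(
    public readonly optimisticId: string,
    public readonly parent: ObjectCache,
  ) {
    super();
  }
  public get(dataId: string): StoreObject | undefined {
    return dataId in this.data ? this.data[dataId] : this.parent.get(dataId);
  }
}
```

Removing a layer by its `optimisticId` then just relinks the list and replays the remaining layers' transactions, as described above.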
When using mutations with an `optimisticResponse`, we found a significant increase in memory usage.

Intended outcome:
No or only a small increase in memory usage.

Actual outcome:
Huge increase in memory usage.

How to reproduce the issue:
Given an application with multiple watched queries:
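A hypothetical reproduction along those lines (query shapes, endpoint, and counts are illustrative, not taken from the original report):

```ts
import ApolloClient from "apollo-client";
import { InMemoryCache } from "apollo-cache-inmemory";
import { HttpLink } from "apollo-link-http";
import gql from "graphql-tag";

const client = new ApolloClient({
  link: new HttpLink({ uri: "https://example.com/graphql" }), // hypothetical
  cache: new InMemoryCache(),
});

// Many watched queries, as on a typical app screen.
const TODOS = gql`query Todos { todos { id text } }`;
for (let i = 0; i < 100; i++) {
  client.watchQuery({ query: TODOS }).subscribe(() => {});
}

// Take a heap snapshot, run a mutation with an optimisticResponse,
// then take another snapshot and compare retained memory.
const ADD_TODO = gql`
  mutation AddTodo($text: String!) {
    addTodo(text: $text) { id text }
  }
`;
client.mutate({
  mutation: ADD_TODO,
  variables: { text: "hello" },
  optimisticResponse: {
    addTodo: { __typename: "Todo", id: "temp-1", text: "hello" },
  },
});
```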
Alternatively, use the memory profiling function of Chrome instead of taking separate snapshots - this has the advantage that we get call stacks for the allocations.
Versions
We suspect that this was introduced in #3394. In particular, I suspect commit 45c4169 to be the main cause.
If `InMemoryCache.optimistic` contains some entry, `read` and `diff` create temporary stores with that optimistic response:

apollo-client/packages/apollo-cache-inmemory/src/inMemoryCache.ts, lines 137 to 139 in 48a224d

apollo-client/packages/apollo-cache-inmemory/src/inMemoryCache.ts, lines 167 to 169 in 48a224d
These temporary stores are then used in the calls to `readQueryFromStore`/`diffQueryAgainstStore` and should later get garbage collected. But unfortunately these store objects are internally used as keys in calls to `cacheKeyRoot.lookup`:

apollo-client/packages/apollo-cache-inmemory/src/readFromStore.ts, lines 124 to 130 in 48a224d

apollo-client/packages/apollo-cache-inmemory/src/readFromStore.ts, lines 144 to 150 in 48a224d
45c4169 replaced the `defaultMakeCacheKey` function from optimism (which internally uses a `WeakMap`) with the newly introduced `CacheKeyNode` concept, causing these temporary stores to end up as keys in one of the (strong) `Map`s, effectively leaking them (see the sketch below).

The more watched queries you have, the bigger the problem, because `maybeBroadcastWatch` calls `diff` for every single watcher. So we end up creating (and leaking) a temporary store for every single watcher for a single optimistic response (apart from the leakage, there is certainly some room for optimization here).

Not sure if this is the only problem or if there are other leaks as well. We saw a single mutation (that updates a single field) cause a memory increase of 40-80 MB - not sure if such a huge increase can be caused by this problem alone.
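To illustrate the retention difference (generic TypeScript, not the optimism or apollo-cache-inmemory source):

```ts
const strongKeys = new Map<object, string>();   // like the CacheKeyNode maps
const weakKeys = new WeakMap<object, string>(); // like defaultMakeCacheKey

function performRead(): void {
  // Stand-in for the temporary store created for one optimistic read.
  const tempStore = { data: new Array(1_000_000).fill(0) };

  strongKeys.set(tempStore, "cache key"); // tempStore is pinned forever
  weakKeys.set(tempStore, "cache key");   // tempStore stays collectable
}

performRead();
// After performRead returns, the WeakMap entry is eligible for garbage
// collection, but the strong Map still references tempStore, so every
// optimistic read leaks one temporary store per watcher.
```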
@benjamn Since you were the main author of #3394, could you take a look at this? Let me know if you need more information!