Iterator time is exponential in number of Cache wrappings #10310
Comments
We should get a pprof output out and analyze what's going on! I tried getting a pprof, but I can't build the ethermint library on my machine... But also, wrapping the store 34 times as mentioned in that PR sounds like a misdesign that should be fixed. (There are lots of overheads in doing so, outside of anything in the CacheKVStore.) Can you either get a pprof output or make a simplified PR here and I can try to get one? (Just add
Ah, in the code this is definitely exponential time with respect to CacheKVStore wrapping depth. https://github.com/cosmos/cosmos-sdk/blob/master/store/cachekv/store.go#L168-L184 It's creating its own iterator over its own cache and over its parent's. Its parent will then create an iterator over its own cache and its parent's. This causes a branching factor of 2 at every depth, hence time will follow 2^n. Can you explain more about why you're nesting the caches? For a nested cache, you basically want one iterator at every level, but implementing that would require a native concept of 'depth' in the stores, and ways of distinguishing thin store layers (gasKV) from 'thick' ones, e.g. CacheKVStore.
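A minimal, self-contained model of the construction described above (illustrative names only, not the SDK's actual types): each cache layer's Iterator() builds one iterator over its own write buffer and asks its parent for another, so a stack of N cache layers produces a chain of N merge iterators, each consulting both children on every call.

package main

import "fmt"

type iterator interface{ Valid() bool }

// memIter stands in for the iterator over one layer's sorted dirty keys.
type memIter struct{ keys []string }

func (m *memIter) Valid() bool { return len(m.keys) > 0 }

// mergeIter stands in for the merge iterator: it wraps the parent's
// iterator and this layer's memIter, and asks both on every call.
type mergeIter struct {
	parent iterator
	cache  iterator
}

func (m *mergeIter) Valid() bool { return m.parent.Valid() || m.cache.Valid() }

// newIterator models CacheKVStore.Iterator at a given wrapping depth.
func newIterator(depth int) iterator {
	if depth == 0 {
		return &memIter{} // the underlying store's iterator
	}
	return &mergeIter{
		parent: newIterator(depth - 1), // recurses into the next layer down
		cache:  &memIter{},
	}
}

func main() {
	it := newIterator(34) // e.g. 34 wrappings, as mentioned above
	fmt.Println(it.Valid())
}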
👍
Nested caches are convenient for implementing nested exception revert, which is needed when we embed the EVM. There are other ways to do it, but nested caches are the simplest.
@ValarDragon for reference, pprof from a related issue:
@yihuang will you be able to handle this issue?
I feel like this is something we should emit a warning for, but otherwise get downstream things to just not use this access pattern. Also, increasing the IAVL cache size helps significantly here.
In ethermint we already changed our approach to avoid a deep cache context stack.
So normally this is not an issue in the core SDK. I'm not sure how we could log a warning without tracking the nesting level: we would need to add one more parameter to the cache and increment it when we create a new cache.
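A hypothetical sketch of that idea (the SDK's cachekv.Store has no depth field and its constructor takes no such parameter; the names and threshold below are purely illustrative). Instead of adding a parameter, the depth could also be inferred by type-asserting the parent:

package cachekv

import "log"

// warnNestingDepth is an arbitrary threshold chosen for illustration.
const warnNestingDepth = 10

// KVStore is a stand-in for the SDK's store interface.
type KVStore interface {
	Get(key []byte) []byte
	Set(key, value []byte)
}

// Store sketches a cache layer that records how deeply it is nested.
type Store struct {
	parent KVStore
	depth  int
}

// NewStore wraps parent one level deeper and warns when the stack gets tall.
func NewStore(parent KVStore) *Store {
	depth := 1
	if p, ok := parent.(*Store); ok {
		depth = p.depth + 1
	}
	if depth > warnNestingDepth {
		log.Printf("cachekv: %d nested cache wrappings; iterators will be slow", depth)
	}
	return &Store{parent: parent, depth: depth}
}

// Get and Set simply delegate here; the real store buffers writes locally.
func (s *Store) Get(key []byte) []byte { return s.parent.Get(key) }
func (s *Store) Set(key, value []byte) { s.parent.Set(key, value) }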
Maybe just put something in the comments for now?
I think my prior comment was incorrect; I don't see how this is exponential time in the number of cache wrappings. The parent would then cause two iterations, but the local cache would cause no further branching, as it's just over this layer's local sorted cache.
I wrote a benchmark for this before: https://github.com/tharsis/ethermint/blob/release/v0.7.x/x/evm/keeper/benchmark_test.go#L100
Not sure if it's exponential, but the curve is steep.
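A rough benchmark sketch in the spirit of the linked ethermint benchmark, assuming cosmos-sdk v0.44-era store packages (store/cachekv, store/dbadapter) and tm-db; import paths and constructors may differ in other versions:

package cachekv_test

import (
	"fmt"
	"testing"

	"github.com/cosmos/cosmos-sdk/store/cachekv"
	"github.com/cosmos/cosmos-sdk/store/dbadapter"
	"github.com/cosmos/cosmos-sdk/store/types"
	dbm "github.com/tendermint/tm-db"
)

func BenchmarkDeepCacheIterator(b *testing.B) {
	for _, depth := range []int{1, 4, 16, 64} {
		b.Run(fmt.Sprintf("depth-%d", depth), func(b *testing.B) {
			var store types.KVStore = dbadapter.Store{DB: dbm.NewMemDB()}
			store.Set([]byte("key"), []byte("value"))
			// wrap the store `depth` times, as a deep snapshot stack would
			for i := 0; i < depth; i++ {
				store = cachekv.NewStore(store)
			}
			b.ResetTimer()
			for i := 0; i < b.N; i++ {
				it := store.Iterator(nil, nil)
				for ; it.Valid(); it.Next() {
				}
				it.Close()
			}
		})
	}
}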
I did some profiling and optimization. Before:
After:
Closes: cosmos#10310. Solution: cache the valid status.
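A sketch of the idea behind "cache the valid status" (illustrative names, not the exact code from the linked PR): the merge iterator remembers whether it is valid instead of recomputing the answer by asking both children on every call, so the cost no longer compounds down a deep stack of wrapped iterators.

package cachekv

type iterator interface {
	Valid() bool
	Next()
}

type mergeIterator struct {
	parent iterator
	cache  iterator
	valid  bool // cached result of the validity check
}

func newMergeIterator(parent, cache iterator) *mergeIterator {
	m := &mergeIterator{parent: parent, cache: cache}
	m.valid = m.computeValid() // computed once up front
	return m
}

// computeValid recurses into both children; without caching, this cost is
// paid again on every Valid() call at every nesting level.
func (m *mergeIterator) computeValid() bool {
	return m.parent.Valid() || m.cache.Valid()
}

func (m *mergeIterator) Valid() bool { return m.valid }

func (m *mergeIterator) Next() {
	// advance the children as appropriate (elided), then refresh the
	// cached status exactly once per step
	m.valid = m.computeValid()
}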
Co-authored-by: Aleksandr Bezobchuk <[email protected]> Co-authored-by: Marko <[email protected]> Closes #10310
Summary of Bug
evmos/ethermint#626
benchmark: evmos/ethermint#627
Code of concern: cosmos-sdk/store/cachekv/mergeiterator.go, line 205 (at commit 1c468de)
In ethermint we use a nested cache context stack to support the Snapshot and RevertToSnapshot APIs for the EVM. When the EVM calls Snapshot, we push a new cache context based on the top one; when it calls RevertToSnapshot, we discard the cache contexts above that snapshot. After the execution, we commit all the cache contexts. What we found is that when the context stack is deep, it's extremely slow to create an iterator on it.
Version
v0.44.1
Steps to Reproduce