Linear cache: Avoid linear time iterations on all resources when applying delta updates. No longer build the versionMap when not using delta watches #547

Merged: 2 commits into envoyproxy:main on Mar 24, 2022

Conversation

@valerian-roche (Contributor) commented on Mar 17, 2022

The linear cache currently rebuilds the versionMap on every update it receives.
This is quite expensive in two cases:

  • When delta watches are not used, the versionMap is never read yet is constantly maintained; this is especially costly with SetResources.
  • When delta watches are used together with delta updates (UpdateResource/DeleteResource/UpdateResources), the cache iterates over the entire resource set instead of only the modified objects.

This PR also adds a constant-time accessor for the number of entries in the cache, since GetResources is no longer constant time.
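
Below is a minimal, hypothetical sketch of the technique (not the actual pkg/cache/v3/linear.go code; all names and the hashing scheme are illustrative): mutators only record which resource names changed, and per-resource version hashes are recomputed lazily, for those names only, and only when a delta watch actually needs them.

```go
// Hypothetical sketch only; not the real go-control-plane implementation.
package sketch

import (
	"fmt"
	"hash/fnv"
)

type resourceVersionCache struct {
	resources       map[string][]byte   // marshaled resources, keyed by name
	versionMap      map[string]string   // per-resource stable versions for delta watches
	modified        map[string]struct{} // names touched since the last refresh
	deltaWatchCount int                 // > 0 only while delta watches are open
}

// markModified is called by UpdateResource/DeleteResource/SetResources-style
// mutators instead of eagerly rebuilding the whole version map.
func (c *resourceVersionCache) markModified(names ...string) {
	for _, name := range names {
		c.modified[name] = struct{}{}
	}
}

// refreshVersionMap runs lazily, only when a delta watch needs per-resource
// versions, and only rehashes the entries recorded as modified.
func (c *resourceVersionCache) refreshVersionMap() {
	if c.deltaWatchCount == 0 {
		return // SotW-only users never pay for the version map
	}
	for name := range c.modified {
		data, ok := c.resources[name]
		if !ok {
			delete(c.versionMap, name) // resource was deleted
			continue
		}
		h := fnv.New64a()
		h.Write(data)
		c.versionMap[name] = fmt.Sprintf("%x", h.Sum64())
	}
	c.modified = make(map[string]struct{})
}
```

With this shape, SetResources only marks names as modified, so SotW-only deployments never compute a hash at all, and delta deployments hash a handful of entries instead of the full set.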

Current profile of our deployment (updates usually touch less than 1% of the resources):

  • A lot of time is spent in GetResources (mostly called to report statistics on the number of entries).
  • updateVersionMap takes a large share of the time. We do need it since we use delta watches, but most users relying only on SotW won't.
    [profiling screenshot]
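
As a rough way to see the cost difference behind this profile, one could benchmark a full SetResources call against a single-resource delta update on a pre-populated cache. The sketch below uses the public pkg/cache/v3 API (NewLinearCache, SetResources, UpdateResource); the resource type, counts, and names are illustrative assumptions.

```go
package linearbench

import (
	"fmt"
	"testing"

	endpoint "github.com/envoyproxy/go-control-plane/envoy/config/endpoint/v3"
	cache "github.com/envoyproxy/go-control-plane/pkg/cache/v3"
	"github.com/envoyproxy/go-control-plane/pkg/cache/types"
	resource "github.com/envoyproxy/go-control-plane/pkg/resource/v3"
)

// buildResources creates n trivial endpoint resources keyed by name.
func buildResources(n int) map[string]types.Resource {
	resources := make(map[string]types.Resource, n)
	for i := 0; i < n; i++ {
		name := fmt.Sprintf("endpoint-%d", i)
		resources[name] = &endpoint.ClusterLoadAssignment{ClusterName: name}
	}
	return resources
}

// BenchmarkSetResources replaces the full resource set on every iteration.
func BenchmarkSetResources(b *testing.B) {
	resources := buildResources(10000)
	c := cache.NewLinearCache(resource.EndpointType)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		c.SetResources(resources)
	}
}

// BenchmarkUpdateSingleResource touches ~0.01% of the resources per iteration.
func BenchmarkUpdateSingleResource(b *testing.B) {
	c := cache.NewLinearCache(resource.EndpointType)
	c.SetResources(buildResources(10000))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = c.UpdateResource("endpoint-0", &endpoint.ClusterLoadAssignment{
			ClusterName: fmt.Sprintf("cluster-v%d", i),
		})
	}
}
```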

Commit: …rces. No longer build the versionMap when not using delta watches

Signed-off-by: Valerian Roche <[email protected]>
@alecholmez (Contributor) left a comment

Overall this looks good to me. Just a few nits in the comments, but other than that it seems like a solid performance increase. One of the things I actually need to get around to is porting over a change to the simple cache where we only hash modified resources instead of the entire set. Nice addition for the linear cache.

Review threads on pkg/cache/v3/linear.go (outdated, resolved)
Follow-up commit: Signed-off-by: Valerian Roche <[email protected]>
@valerian-roche (Contributor, Author)

> Overall this looks good to me. Just a few nits in the comments, but other than that it seems like a solid performance increase. One of the things I actually need to get around to is porting over a change to the simple cache where we only hash modified resources instead of the entire set. Nice addition for the linear cache.

Not sure how the snapshot cache can be optimized on that; in most cases the user will rebuild the entire snapshot for the node. On the linear cache side, building the version map in SetResources is really bad today (in our production systems we reached ~1s spent just on it), so it can probably become quite bad for the snapshot cache as well.
Are you planning on supporting delta snapshot updates in the simple cache? That would be very useful for us (e.g. if we want to use runtime flags in the same simple cache).
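
For reference, a small usage sketch of the delta-style calls mentioned above, which avoid the full SetResources cost when only a few resources change. It assumes UpdateResources takes a map of resources to update plus a slice of names to delete, as exposed by pkg/cache/v3; the resource names are hypothetical.

```go
package example

import (
	endpoint "github.com/envoyproxy/go-control-plane/envoy/config/endpoint/v3"
	cache "github.com/envoyproxy/go-control-plane/pkg/cache/v3"
	"github.com/envoyproxy/go-control-plane/pkg/cache/types"
)

// applyPartialUpdate touches only the named resources instead of replacing
// (and re-versioning) the whole set as SetResources would.
func applyPartialUpdate(c *cache.LinearCache) error {
	toUpdate := map[string]types.Resource{
		"endpoint-a": &endpoint.ClusterLoadAssignment{ClusterName: "endpoint-a"},
	}
	toDelete := []string{"endpoint-b"}
	return c.UpdateResources(toUpdate, toDelete)
}
```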

@alecholmez (Contributor)

@valerian-roche We would like to add support for incremental snapshot updates with delta. We have this open: #423

I haven't given it much thought yet, but I imagine that designing an API for per-resource updates would significantly improve performance in the simple cache. We have a similar problem there: we're constantly building a big hash map regardless of what changed.

@alecholmez merged commit d75a9e0 into envoyproxy:main on Mar 24, 2022
@valerian-roche (Contributor, Author)

Definitely makes sense to me to allow delta updates of snapshots, especially to no longer require rebuilding the entire snapshot just to update one resource.
Thanks for looking into improving that!
