fix todo [tls cache] #13902
Conversation
Signed-off-by: ls-2018 <[email protected]>
Codecov Report
@@ Coverage Diff @@
## main #13902 +/- ##
==========================================
- Coverage 72.40% 72.35% -0.05%
==========================================
Files 469 469
Lines 38414 38430 +16
==========================================
- Hits 27813 27806 -7
- Misses 8812 8832 +20
- Partials 1789 1792 +3
Thank you
I know there is a TODO, but I don't think we should assume that its author really thought through whether a cache is needed here. A correctly implemented cache lets us trade disk IO & CPU for memory. However, it's not always obvious that a cache is beneficial; depending on the hit vs. miss ratio, it might lower performance or leak resources.
There are two things we should definitely do before merging:
- Add tests
- Guarantee maximum size of cache (implement LRU cache)
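To make the second point concrete, here is a minimal sketch of a size-bounded LRU cache built only on the standard library's `container/list`. The names (`lruCache`, `newLRUCache`) are illustrative, not the PR's actual code:

```go
package main

import (
	"container/list"
	"fmt"
)

type entry struct {
	key, val string
}

// lruCache is a hypothetical size-bounded cache: the list front is the
// most recently used entry, and inserts beyond max evict from the back.
type lruCache struct {
	max   int
	ll    *list.List
	items map[string]*list.Element
}

func newLRUCache(max int) *lruCache {
	return &lruCache{max: max, ll: list.New(), items: map[string]*list.Element{}}
}

func (c *lruCache) Add(key, val string) {
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el)
		el.Value.(*entry).val = val
		return
	}
	c.items[key] = c.ll.PushFront(&entry{key, val})
	if c.ll.Len() > c.max { // evict the least recently used entry
		oldest := c.ll.Back()
		c.ll.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}

func (c *lruCache) Get(key string) (string, bool) {
	el, ok := c.items[key]
	if !ok {
		return "", false
	}
	c.ll.MoveToFront(el) // a hit refreshes recency
	return el.Value.(*entry).val, true
}

func main() {
	c := newLRUCache(2)
	c.Add("a", "1")
	c.Add("b", "2")
	c.Get("a")      // touch "a" so "b" becomes least recently used
	c.Add("c", "3") // evicts "b"
	_, ok := c.Get("b")
	fmt.Println(ok) // false
}
```

Because the map and list always hold at most `max` entries, memory use is bounded regardless of the hit/miss ratio.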
I agree that an LRU-based implementation (ideally with O(1min) expiration) would be better. A few observations:
Summary:
I agree with the LRU-based implementation and I will implement it next.
Signed-off-by: ls-2018 <[email protected]>
Thank you for the fix.
I wonder whether we can avoid maintaining our own implementation of LRU.
E.g. using https://github.com/hashicorp/golang-lru with a wrapper that
substitutes expired entries might be sufficient and imply less maintenance cost.
I've also seen https://github.com/hnlq715/golang-lru,
but it's a fork without its own branding, so it would be misleading to depend on it.
client/pkg/lruutil/lru_cache.go
Outdated
@@ -0,0 +1,125 @@
// Copyright 2022 Google LLC
Usually we use: // Copyright 2022 The etcd Authors
Signed-off-by: ls-2018 <[email protected]>
@ptabor I've seen https://github.com/hnlq715/golang-lru, but it doesn't support time-based eviction strategies. The CI action fails just from changing the copyright header.
If we take any existing LRU, we can simulate expiration on fetch.
Even better, wrapped in a function:
Signed-off-by: ls-2018 <[email protected]>
fix
// TODO: cache