
oom_get_keyvalue_cached should be eliminated #6

Open
donboll opened this issue Mar 14, 2018 · 3 comments

Comments


donboll commented Mar 14, 2018

No description provided.


donboll commented Mar 14, 2018

oom_get_keyvalue_cached() is almost identical to oom_get_keyvalue(). Pass in an optional flag to force use of cached data, and apply the flag like this:

    if (mm[key][0] == 0) or use_cached:

Do that in both the SFF and CFP clauses.
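A minimal sketch of the proposed merge, assuming a per-key map mm of (dynamic_flag, decoder) tuples as hinted above. The Port class, read_hw(), and the key names here are hypothetical stand-ins for illustration, not the real OOM shim interface:

```python
class Port:
    """Hypothetical port with a raw-data buffer and a hardware-read counter."""
    def __init__(self):
        self.reads = 0     # counts simulated hardware accesses
        self.buffer = {}   # raw data retained from the last hardware read

    def read_hw(self, key):
        self.reads += 1                      # simulate touching the hardware
        self.buffer[key] = 40 + self.reads   # fake raw value, refreshes buffer
        return self.buffer[key]

# mm maps key -> (dynamic_flag, decoder); dynamic_flag == 0 means static data
mm = {
    "temperature": (1, lambda raw: raw / 2.0),   # dynamic key
    "vendor_name": (0, lambda raw: str(raw)),    # static key
}

def oom_get_keyvalue(port, key, use_cached=False):
    """Single merged accessor: static keys and cached reads reuse the
    buffer; dynamic keys hit the hardware unless use_cached is set."""
    if key not in mm:
        return None
    dynamic_flag, decode = mm[key]
    if ((dynamic_flag == 0) or use_cached) and key in port.buffer:
        raw = port.buffer[key]    # reuse previously fetched bytes
    else:
        raw = port.read_hw(key)   # fresh read, refreshes the buffer
    return decode(raw)
```

With one routine, a cached call simply reuses the buffer filled by the last fresh read, so a future fix only has to land in one place.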


insekt commented Aug 20, 2019

Yes, you are right.
By the way, it would be great if you could explain the purpose of the "cached" reads. It was a big surprise to me when I found the "cached" read path while looking through the code, and I didn't find any obvious reason for it in the code or docs. I'd be very glad for an explanation.


donboll commented Apr 10, 2020

The purpose of cached reads is to allow multiple dynamic keys to be read in a single access. The obvious case is reading DOM data. This requires a read of temp, voltage, Tx/Rx power and laser bias (x4 lanes for QSFP). That's 5-14 data points for one DOM read. With the cached path, I can invalidate the cache to trigger a read, then read each of those 14 keys (cached). The buffer gets filled once (the data is dynamic, so I need that refresh), then the remaining reads all use the same buffer data. The data is also consistent this way, with all of the keys gathered at the same time.

The original issue here is that the code is redundant. If I collapse those two routines together, then any changes in the future will only need to be made once, reducing the chance of fixing one and forgetting the other.
