Some indexes may have their own sets of internal metrics that are meaningful to collect. For example, for an LSM tree like LevelDB we could collect compaction speed. These are not included in the standard PiBench metrics and are not needed for every index. It would be convenient if the index/wrapper developer could define a set of extra metrics that PiBench summarizes, for easier stats gathering, plotting, etc.
I don't have a concrete design in mind, but one potential solution is for the index itself to keep collecting these extra metrics (which it already does) and register a callback function with PiBench that is invoked after the benchmark finishes. PiBench would then simply summarize and display these metrics.
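To make the callback idea a bit more concrete, here is a minimal sketch of what such a registration hook might look like. The names (`stats_callback_t`, `stats_registry_t`, `register_callback`, `report`) are hypothetical and not part of any existing PiBench API; this is only one possible shape for the interface.

```cpp
#include <cstdio>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: a metrics callback returns name -> value pairs that
// PiBench prints (or plots) alongside its standard metrics.
using stats_callback_t = std::function<std::map<std::string, double>()>;

class stats_registry_t
{
public:
    // Called by the wrapper during setup to register its extra metrics.
    void register_callback(stats_callback_t cb)
    {
        callbacks_.push_back(std::move(cb));
    }

    // Called by PiBench once the benchmark run has finished.
    void report() const
    {
        for (const auto& cb : callbacks_)
            for (const auto& [name, value] : cb())
                std::printf("%-30s %f\n", name.c_str(), value);
    }

private:
    std::vector<stats_callback_t> callbacks_;
};
```

A LevelDB wrapper could then register a lambda that extracts compaction statistics from the DB and returns them as a map, while wrappers without extra metrics simply register nothing.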
What if internal metrics were left entirely to the wrapper to handle?
For example, in the case of LevelDB, the wrapper could set up the options required for collecting metrics in its constructor and write the metrics to a file in its destructor (see the sketch below).
Advantage: PiBench stays agnostic to internal metrics and nothing needs to change, since it might be hard to design a generic interface that suits all possible wrappers.
Disadvantage: Any change to how internal metrics are collected must be made directly in the wrapper source code, and the wrapper must be recompiled.
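A rough sketch of the wrapper-only approach, assuming a hypothetical `leveldb_wrapper` class (the class name, file name, and omitted PiBench interface methods are illustrative, not taken from an existing wrapper):

```cpp
#include <cstdio>
#include <string>
#include <leveldb/db.h>

// Hypothetical sketch: the wrapper owns the whole lifecycle of its
// internal metrics, so PiBench never sees them.
class leveldb_wrapper
{
public:
    explicit leveldb_wrapper(const std::string& path)
    {
        leveldb::Options options;
        options.create_if_missing = true;
        leveldb::DB::Open(options, path, &db_);  // error handling omitted
    }

    ~leveldb_wrapper()
    {
        // Dump LevelDB's internal stats (compactions per level, etc.)
        // to a file when the benchmark tears the wrapper down.
        std::string stats;
        if (db_ != nullptr && db_->GetProperty("leveldb.stats", &stats))
        {
            if (FILE* out = std::fopen("leveldb_internal_stats.txt", "w"))
            {
                std::fputs(stats.c_str(), out);
                std::fclose(out);
            }
        }
        delete db_;
    }

    // ... insert/find/update/remove/scan methods implementing the PiBench
    // wrapper interface would go here ...

private:
    leveldb::DB* db_ = nullptr;
};
```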