Continuous benchmarking #30

Open
tarcieri opened this issue May 1, 2023 · 3 comments

Comments

@tarcieri (Member) commented May 1, 2023

We've had a couple of notable performance regressions lately, namely in keccak and p256.

It'd be good to have a reusable solution for continuous benchmarking.

Some options:

@Slixe commented May 2, 2023

Criterion may be a good alternative too:

EDIT: Looks like you're already using the criterion crate in some parts of the code.
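
For reference, a typical criterion benchmark looks roughly like the sketch below; the sha3 crate and the 64-byte input are assumptions for illustration, not taken from this repo.

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use sha3::{Digest, Sha3_256};

// Benchmark a single SHA3-256 digest over a fixed 64-byte input.
fn bench_sha3_256(c: &mut Criterion) {
    c.bench_function("sha3_256/64B", |b| {
        b.iter(|| Sha3_256::digest(black_box(&[0u8; 64])))
    });
}

criterion_group!(benches, bench_sha3_256);
criterion_main!(benches);
```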

@tarcieri (Member, Author) commented May 2, 2023

Yeah, we already use criterion extensively.

For this particular problem, though, we need to be able to produce reliable benchmark results in a virtualized CI environment, which isn't something criterion solves on its own.

That particular problem is what makes something like iai interesting.
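
As a rough illustration of the difference, iai runs each benchmark once under Valgrind's Cachegrind and reports instruction and cache-access counts rather than wall-clock time, so results should stay stable on shared CI runners. A minimal sketch, where the sha3 call is an assumption for illustration:

```rust
use iai::black_box;
use sha3::{Digest, Sha3_256};

// iai executes this once under Cachegrind and reports instruction
// and cache-access counts instead of wall-clock time.
fn iai_sha3_256() -> [u8; 32] {
    Sha3_256::digest(black_box(&[0u8; 64])).into()
}

iai::main!(iai_sha3_256);
```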

@Slixe commented May 2, 2023

Agreed, iai looks promising and seems like the perfect choice for benchmarking hashing algorithms.
