
fix: Add benchmarks for loaded params #915

Closed (wants to merge 1 commit)

Conversation

winston-h-zhang
Member

This PR adjusts the fibonacci bench to also test loaded params. In the benchmark, we first test with a set of freshly generated params, writing them to disk if necessary, and then we test again with a loaded set of params.
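The fresh-then-load flow described above can be sketched as follows. This is a minimal stand-in, not the PR's actual code: the real public parameters and their (de)serialization live in the `lurk` crate, and the `PublicParams` type, field names, and helper functions here are hypothetical.

```rust
use std::fs;
use std::path::Path;

// Hypothetical stand-in for Lurk's public parameters; the real type is
// far larger and expensive to generate.
#[derive(Debug, PartialEq)]
struct PublicParams {
    rc: usize,     // reduction count, as in the "rc = 100" benchmark names
    data: Vec<u8>, // placeholder for the actual parameter data
}

impl PublicParams {
    // Fresh generation (placeholder for the real, costly setup phase).
    fn generate(rc: usize) -> Self {
        PublicParams { rc, data: vec![42u8; rc] }
    }

    // Naive (de)serialization, for illustration only: 8-byte LE length
    // prefix for `rc`, followed by the raw data.
    fn to_bytes(&self) -> Vec<u8> {
        let mut out = (self.rc as u64).to_le_bytes().to_vec();
        out.extend_from_slice(&self.data);
        out
    }

    fn from_bytes(bytes: &[u8]) -> Self {
        let rc = u64::from_le_bytes(bytes[..8].try_into().unwrap()) as usize;
        PublicParams { rc, data: bytes[8..].to_vec() }
    }
}

// Mirrors the benchmark flow: first use freshly generated params, writing
// them to disk if they are not already cached, then run again with params
// loaded from that cache.
fn fresh_then_load(cache: &Path, rc: usize) -> (PublicParams, PublicParams) {
    let fresh = PublicParams::generate(rc);
    if !cache.exists() {
        fs::write(cache, fresh.to_bytes()).expect("write params");
    }
    let loaded = PublicParams::from_bytes(&fs::read(cache).expect("read params"));
    (fresh, loaded)
}

fn main() {
    let cache = std::env::temp_dir().join("fib_params_rc100.bin");
    let (fresh, loaded) = fresh_then_load(&cache, 100);
    // Loaded params must match the fresh ones for the comparison to be fair.
    assert_eq!(fresh, loaded);
    println!("fresh == loaded: ok");
}
```

The point of benchmarking both paths is to measure whether deserializing cached params is materially cheaper (or, unexpectedly, no cheaper) than regenerating them.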

@winston-h-zhang
Member Author

!gpu-benchmark

Contributor

Benchmark for 3b9c434

| Test | Base | PR | % |
|:-----|-----:|---:|--:|
| LEM Fibonacci Prove - rc = 100/fib/num-100 | 6.0±0.06s | N/A | N/A |
| LEM Fibonacci Prove - rc = 100/fib/num-200 | 12.3±0.08s | N/A | N/A |
| LEM Fibonacci Prove Fresh - rc = 100/fib/num-100 | 5.9±0.07s | N/A | N/A |
| LEM Fibonacci Prove Fresh - rc = 100/fib/num-200 | 12.7±0.03s | N/A | N/A |
| LEM Fibonacci Prove Load - rc = 100/fib/num-100 | 6.0±0.07s | N/A | N/A |
| LEM Fibonacci Prove Load - rc = 100/fib/num-200 | 12.6±0.02s | N/A | N/A |

@winston-h-zhang
Member Author

!gpu-benchmark

Contributor

Benchmark for 3b9c434

| Test | Base | PR | % |
|:-----|-----:|---:|--:|
| LEM Fibonacci Prove - rc = 100/fib/num-100 | 3.9±0.06s | N/A | N/A |
| LEM Fibonacci Prove - rc = 100/fib/num-200 | 8.0±0.27s | N/A | N/A |
| LEM Fibonacci Prove Fresh - rc = 100/fib/num-100 | 3.9±0.03s | N/A | N/A |
| LEM Fibonacci Prove Fresh - rc = 100/fib/num-200 | 8.5±0.14s | N/A | N/A |
| LEM Fibonacci Prove Fresh - rc = 600/fib/num-100 | 3.1±0.01s | N/A | N/A |
| LEM Fibonacci Prove Fresh - rc = 600/fib/num-200 | 7.0±0.01s | N/A | N/A |
| LEM Fibonacci Prove Load - rc = 100/fib/num-100 | 3.9±0.02s | N/A | N/A |
| LEM Fibonacci Prove Load - rc = 100/fib/num-200 | 7.8±0.06s | N/A | N/A |
| LEM Fibonacci Prove Load - rc = 600/fib/num-100 | 3.1±0.01s | N/A | N/A |
| LEM Fibonacci Prove Load - rc = 600/fib/num-200 | 7.0±0.03s | N/A | N/A |

@winston-h-zhang
Member Author

The strangest thing is that there seems to be no slowdown on this GPU with loaded params.

@winston-h-zhang winston-h-zhang marked this pull request as ready for review November 22, 2023 21:16
@winston-h-zhang winston-h-zhang requested review from a team as code owners November 22, 2023 21:16
@arthurpaulino
Member

I'm concerned about the increase in cost for this benchmark. Can we measure it in another place?

Member

@huitseeker left a comment


All of our benchmark machines are configured so that the application never actually writes anything to disk, while still letting every read and write succeed: written bytes are sent to /dev/null, and reads return zero-sized files.
(For more details: https://github.com/abbbi/nullfsvfs)

As a consequence, modifying the fibonacci benchmark is not a good way to test for public parameter loading.
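The effect described above can be approximated on any machine with /dev/null (nullfsvfs, roughly speaking, extends this behavior to a whole mount point). This sketch only illustrates why a disk-caching benchmark measures nothing on such a machine; it does not use nullfsvfs itself.

```shell
# Writes "succeed" but the bytes are discarded:
printf 'many megabytes of public params' > /dev/null

# Reads also "succeed" but always see an empty file:
wc -c < /dev/null    # prints 0
```

So on the bench machines, "loading params from disk" deserializes a zero-byte file, which is why the Fresh and Load rows in the tables above are indistinguishable.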

@winston-h-zhang
Member Author

Ah, thank you for the heads up. That explains the absence of any slowdown, then. I guess I'll have to manually test any loading-related regressions.

Member

@huitseeker left a comment


Leaving this in "request changes" for clarity.

@winston-h-zhang winston-h-zhang force-pushed the load-params-ci branch 2 times, most recently from 7abc39a to 059ecad Compare November 27, 2023 19:02
@winston-h-zhang winston-h-zhang requested a review from a team as a code owner November 27, 2023 19:02
@winston-h-zhang
Member Author

Closed after argumentcomputer/arecibo#137

@winston-h-zhang winston-h-zhang deleted the load-params-ci branch December 6, 2023 21:29
3 participants