Initial benchmarking implementation: steps to run benchmarks and comb… #418
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@           Coverage Diff           @@
##           master     #418   +/-   ##
=======================================
  Coverage   90.45%   90.45%
=======================================
  Files          51       51
  Lines        2796     2796
=======================================
  Hits         2529     2529
  Misses        267      267
```

Flags with carried forward coverage won't be shown.
For convenience, I will leave this link here: https://juliahealth.org/KomaMRI.jl/benchmarks/
I think we can merge this tomorrow after I delete the lines marked `#TO-DO: remove this`. I still want to see if there is a way we can run the CPU benchmarks on the same platform each time. We'll also need to make sure we delete the data.js file in gh-pages/benchmarks before merging.
This pull request creates a pipeline to run benchmarks on all supported backends, combine the results into a single JSON file, and pass that file to https://github.com/benchmark-action/github-action-benchmark. So far, I've written the benchmarking scripts and the Buildkite pipeline for running and combining the benchmarks. Once I confirm this is working, I'll add the GitHub Actions step for downloading the resulting artifact and passing it to github-action-benchmark.
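The combine step could be as simple as concatenating the per-backend result files. A minimal sketch (the file names and the `combine_results` helper are hypothetical, and this assumes each per-backend file already holds a JSON array of `{name, unit, value}` entries, which is the format github-action-benchmark's `customSmallerIsBetter` tool consumes):

```julia
using JSON  # assumes JSON.jl is in the benchmark environment

# Merge the per-backend result arrays into one JSON array for
# github-action-benchmark to ingest.
function combine_results(files::Vector{String}, out::String)
    combined = Any[]
    for f in files
        append!(combined, JSON.parsefile(f))
    end
    open(out, "w") do io
        JSON.print(io, combined)
    end
end

# Hypothetical artifact names produced by the Buildkite jobs:
combine_results(["results-cpu.json", "results-gpu.json"], "combined-results.json")
```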
Currently, I have it running the CPU benchmarks with 1, 2, 4, and 8 threads, but we can change this if we want. The two benchmarks are the examples from lit-04-3DSliceSelective.jl and MRiLab_speed.jl, with the names "Slice Selection 3D" and "MRI Lab" (we can change these names). I also noticed that the lit-04-3DSliceSelective.jl example takes fairly long to run (around a minute on my machine). Since BenchmarkTools.jl works within a fixed time budget, a long-running benchmark gets fewer trials, which makes the measurements noisier, so we may want to shorten this example.
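One option, instead of (or in addition to) shortening the example, is to raise the time budget so BenchmarkTools.jl can collect more samples. A sketch of the relevant knobs (the `sleep` call is just a stand-in for the actual simulation call):

```julia
using BenchmarkTools

# By default BenchmarkTools caps each benchmark at ~5 s of total runtime,
# so a ~1 min simulation would get only a single sample. Raising the
# budget allows multiple trials per run:
b = @benchmarkable sleep(0.1)  # stand-in for the KomaMRI simulation
results = run(b; seconds = 300, samples = 5)
minimum(results).time  # report the minimum, which is least noise-sensitive
```

The trade-off is CI time: 5 samples of a one-minute benchmark per backend adds up quickly, which is another argument for a shorter example.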
We should also start thinking about how we want the benchmark results to display. Currently, I have the benchmark structure as:
benchmark -> sim method -> cpu / gpu -> # of threads / gpu backend
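This nesting maps directly onto a nested `BenchmarkGroup` from BenchmarkTools.jl. A sketch of how the suite could be laid out (group names taken from the structure above; the `sleep` calls are stand-ins for the actual simulation calls):

```julia
using BenchmarkTools

suite = BenchmarkGroup()

# benchmark -> sim method -> cpu / gpu -> # of threads / gpu backend
bloch = addgroup!(addgroup!(suite, "MRI Lab"), "Bloch")
cpu   = addgroup!(bloch, "CPU")
gpu   = addgroup!(bloch, "GPU")

cpu["8 threads"] = @benchmarkable sleep(0.01)  # stand-in for simulate(...)
gpu["CUDA"]      = @benchmarkable sleep(0.01)  # one entry per GPU backend

results = run(suite)
```

Keeping the group keys identical to the names used on the results page would make the later conversion to github-action-benchmark's JSON format a mechanical walk over the nested groups.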
As of now, the only sim method is Bloch (is it interesting to also profile BlochDict?). When a kernel-based sim method is written, we can add it as well. If we want our page to look similar to https://lux.csail.mit.edu/benchmarks/, we can have two columns for CPU and GPU, and the graphs will have titles like "MRI Lab + Bloch + GPU", with one line per backend (and per thread count for the CPU). This will require some updates to the index.html page that github-action-benchmark provides, but that can be done after this pull request is merged.