Save benchmark + Haddock artifacts during CI #281
Since I didn't know what this meant, here's a link for the benefit of others who don't know either: https://docs.github.com/en/actions/advanced-guides/storing-workflow-data-as-artifacts
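For concreteness, here is a minimal sketch of such an upload step using the `actions/upload-artifact` action; the log file name, artifact name, and workflow context are placeholders, not the project's actual setup:

```yaml
# Sketch of an upload step in a GitHub Actions workflow.
# Assumes a previous step wrote benchmark output to bench.log;
# the file name and artifact name are illustrative.
- name: Upload benchmark log
  uses: actions/upload-artifact@v4
  with:
    name: benchmark-log
    path: bench.log
```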
Marking the benchmark logs as artifacts sounds fine to me. But as for plotting them, couldn't we just add some commands to the GitHub Action to plot the benchmarks right after generating them, and save the plot(s) as artifacts too?
If we install […]. If the actions do not measure performance well, then at least we would have a script to do it locally 🙂 The only other useful artefact I can think of is the generated documentation, which could be cool to show off. 🤓
As a note, we do not currently run benchmarks in the CI. We could add them if we wanted to; for example, see #415.
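If we did add them, a minimal sketch of such a step might look like the following; this assumes a criterion-based benchmark suite (so criterion's `--csv` flag applies), and it would pair with an upload step like the one sketched above:

```yaml
# Sketch of running benchmarks in CI; assumes a criterion-based suite
# so that criterion's --csv flag is available. The results file could
# then be uploaded with actions/upload-artifact as sketched above.
- name: Run benchmarks
  run: cabal bench --benchmark-options='--csv results.csv'
```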
We do run Haddock in the CI, so it is only a matter of getting the output from there to GitHub Pages. For example, Cabal seems to have been using GitHub Pages and automatically pushing the Haddock documentation there. But that approach is outdated now, so we should check whether there is an easier way with GitHub Actions. Worst case scenario, we add a secret to allow pushing to a new swarm-website repository and manually do the clone/push after Haddock is built; at least that part we could copy-paste from the linked commit.
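One commonly used option is the third-party peaceiris/actions-gh-pages action, which pushes a directory to a gh-pages branch using the built-in token. A sketch, where the publish_dir path is a placeholder for wherever the Haddock output actually lands:

```yaml
# Sketch using the third-party peaceiris/actions-gh-pages action to
# publish built Haddocks; the publish_dir path is illustrative.
- name: Deploy Haddock to GitHub Pages
  uses: peaceiris/actions-gh-pages@v3
  if: github.ref == 'refs/heads/main'
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    publish_dir: ./haddock-output
```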
Examples of pushing built Haddocks somewhere during CI:
See also https://github.com/haskell/actions.
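For the build side, a sketch of generating the docs and collecting the output directory; the exact path under dist-newstyle varies with platform, GHC version, and package version, hence the find:

```yaml
# Sketch of building Haddocks and copying the HTML out of dist-newstyle;
# the exact doc path depends on platform/GHC/package version.
- name: Build Haddock
  run: cabal haddock
- name: Collect Haddock output
  run: |
    mkdir -p haddock-output
    cp -r "$(find dist-newstyle -type d -name html | head -n 1)"/* haddock-output/
```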
There is https://github.com/benchmark-action/github-action-benchmark, which can be passed results from any benchmark tool as JSON containing a list of benchmarks with names and values. It creates nice HTML graphs on GitHub Pages and posts alerts on the commit page if a commit really degrades performance. The history is stored using […]. I think we should make this a separate GitHub Action running only on the main branch. I am not sure what needs to be done to create GitHub Pages, but it should be relatively easy.
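A sketch of wiring that action up, assuming the benchmark output has first been converted to its "customSmallerIsBetter" JSON format (a list of objects with name, unit, and value fields); the file name is a placeholder:

```yaml
# Sketch of publishing results with github-action-benchmark; assumes
# results.json holds [{"name": ..., "unit": "ms", "value": ...}, ...]
# in the action's customSmallerIsBetter format.
- name: Publish benchmark results
  uses: benchmark-action/github-action-benchmark@v1
  if: github.ref == 'refs/heads/main'
  with:
    tool: 'customSmallerIsBetter'
    output-file-path: results.json
    github-token: ${{ secrets.GITHUB_TOKEN }}
    auto-push: true        # pushes the accumulated history to gh-pages
    alert-threshold: '150%'
    comment-on-alert: true
```

With auto-push enabled, the action maintains the graphs on the gh-pages branch itself, which would also answer the GitHub Pages setup question.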
The GitHub runners are often under different loads, so we might want to run the benchmark together with a comparison benchmark: I think we could publish a benchmark binary in the release and use it as a baseline, then run A, B, A, B, take averages, and compare. Another interesting thing to consider is which compiler to use, as the numbers change quite a lot (see #752).
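A rough shell sketch of the interleaved A/B idea; the binary names, iteration count, and use of plain wall-clock timing are all placeholders, and a real version would parse the benchmark tool's own output instead:

```sh
#!/bin/sh
# Sketch: interleave baseline (A) and current (B) runs so that load
# spikes on the runner hit both roughly equally. Binary names and the
# use of wall-clock timing are illustrative only.
for i in 1 2 3; do
  /usr/bin/time -o "a-$i.txt" ./bench-baseline   # A: published baseline binary
  /usr/bin/time -o "b-$i.txt" ./bench-current    # B: binary built from this commit
done
# Average the timings from a-*.txt and b-*.txt and compare the means.
```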
But to keep this issue reasonable, let's just set up GitHub Pages and an Action that will publish the Haddock docs and the benchmark results. 😅 By using the benchmarks as-is, we might be able to use the same format as Rust's criterion benchmarks. 🤔
It would be nice to mark the benchmark logs as artifacts.
We could then write a script that uses the GitHub CLI tool to download the artifacts of the main branch and plot the benchmarks.
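A sketch of such a script with the gh CLI; the workflow name ("CI") and artifact name ("benchmark-results") are assumptions, and the plotting step is left as a comment since the tool of choice is open:

```sh
#!/bin/sh
# Sketch: fetch the benchmark artifact from the latest main-branch CI run.
# Workflow and artifact names are assumptions; adjust to match the
# project's actual workflow.
run_id=$(gh run list --branch main --workflow CI --limit 1 \
          --json databaseId --jq '.[0].databaseId')
gh run download "$run_id" --name benchmark-results --dir bench-data
# Plot bench-data/results.csv with the tool of your choice (gnuplot, etc.).
```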