
Add automated benchmarking #377

Merged Oct 23, 2024 (17 commits)

Conversation

@peytondmurray (Collaborator) commented on Oct 2, 2024:

This PR adds automated benchmarking on GCP via Cirun:

  • Added a .cirun.yml configuration. It already seems to be working well 🎉
  • Made a benchmarking workflow that pushes results to master. When the docs build, the asv results are rendered into static HTML, and a link to them is added to the front page of the docs
  • The benchmarking install code has been greatly simplified
  • Reduced the number of versions in the delete_version benchmark, which was taking far too long

Closes #343.

This was tested with a pure-Python version of build_data_dict, as the current one is broken both on CI and on my local machine: https://github.com/peytondmurray/versioned-hdf5/actions/runs/11391825172
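For context, asv discovers benchmarks as plain classes whose `time_*` methods it times, with `params` controlling the parametrization set up before each run. A minimal sketch of a parametrized delete-style benchmark (the class, method names, and parameter values here are illustrative, not the actual code in this PR) might look like:

```python
class TimeDeleteVersion:
    """Illustrative asv-style benchmark; asv times each ``time_*`` method.

    ``params`` controls how many versions ``setup`` creates before each
    timed run -- shrinking this list is how a slow benchmark like
    delete_version gets cheaper, as described above.
    """

    # Hypothetical parameter: number of versions to create before deleting.
    params = [5, 10]
    param_names = ["num_versions"]

    def setup(self, num_versions):
        # Stand-in for creating a versioned file with N versions.
        self.versions = [f"r{i}" for i in range(num_versions)]

    def time_delete_version(self, num_versions):
        # Stand-in for the deletion operation being benchmarked.
        self.versions.pop()
```

Because asv benchmarks are plain classes, they can also be instantiated and exercised directly in a unit test, independent of the asv runner.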

@peytondmurray force-pushed the 343-asv-benchmarks branch 3 times, most recently from 1767d8f to d3ffd87 on October 15, 2024 at 06:37
@peytondmurray changed the title from [WIP] Add automated benchmarking to Add automated benchmarking on Oct 15, 2024
@peytondmurray force-pushed the 343-asv-benchmarks branch 2 times, most recently from f20584d to 1ebee1d on October 15, 2024 at 23:25
@peytondmurray marked this pull request as ready for review on October 17, 2024 at 21:05
Review threads on benchmarks_install.py (outdated) and docs/index.rst were resolved.
@crusaderky (Collaborator) commented on Oct 22, 2024:

@peytondmurray I think it would be incredibly useful to be able to rerun asv in a controlled environment (GCP) on a PR before it gets merged and compare the results. Do I understand correctly, however, that this PR can only run on master? Would it be a sensible follow-up to add the ability to run a comparative benchmark on an open PR (perhaps triggered by a GitHub PR label)?

@peytondmurray (Collaborator, Author) commented:

> I think it would be incredibly useful to be able to rerun asv in a controlled environment (GCP) on a PR before it gets merged and compare the results

Sure, this is definitely possible, especially if you're fine with just a table of benchmark results as output. I can follow up in another PR with a bot that posts benchmark comparisons in a comment, if that makes the most sense.
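A comparative run like this is typically driven by `asv continuous <base> <head>`. One way a follow-up bot might render results into the comment table mentioned above (the input shape here is a made-up `{benchmark: seconds}` mapping for illustration, not asv's actual result schema) is:

```python
def format_comparison(base, head, threshold=1.1):
    """Render two {benchmark_name: seconds} dicts as a markdown table.

    Rows whose head/base ratio exceeds ``threshold`` are flagged as
    possible regressions. The input shape is illustrative only, not
    asv's real JSON schema.
    """
    lines = [
        "| Benchmark | Base | Head | Ratio |",
        "| --- | --- | --- | --- |",
    ]
    for name in sorted(base):
        ratio = head[name] / base[name]
        flag = " :warning:" if ratio > threshold else ""
        lines.append(
            f"| {name} | {base[name]:.3f}s | {head[name]:.3f}s | {ratio:.2f}{flag} |"
        )
    return "\n".join(lines)


# Example: a 25% slowdown in one benchmark gets flagged.
table = format_comparison(
    {"time_delete_version": 1.00},
    {"time_delete_version": 1.25},
)
print(table)
```

A bot would post this table via the GitHub API as a PR comment; the flagging threshold is an arbitrary choice here.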

@peytondmurray (Collaborator, Author) commented:

Here's the latest version; I merged this PR into my own fork's master to trigger the build: https://peytondmurray.github.io/versioned-hdf5/benchmarks/index.html

I think the graphs aren't displaying correctly because there's only one data point (the latest commit comparison), but it could also be that master~1 doesn't have the H5Close commit, so the benchmark failed. Either way, that's not terribly relevant, as the machinery there is being changed out soon.

@crusaderky crusaderky merged commit c24211b into deshaw:master Oct 23, 2024
8 checks passed
@crusaderky (Collaborator) commented:

@peytondmurray it's failing in master:

  1. still the dcpl.id not being found, and
  2. something about git permissions when the workflow tries to push

https://github.com/deshaw/versioned-hdf5/actions/runs/11476804181/job/31937575352

@crusaderky (Collaborator) commented:

You're installing libhdf5 from conda but h5py from pip. Could you try moving h5py to conda too?

@peytondmurray peytondmurray deleted the 343-asv-benchmarks branch October 23, 2024 20:08
@peytondmurray (Collaborator, Author) commented:

Yep, will do now.

Successfully merging this pull request may close these issues.

Get airspeed velocity benchmarks working again (PyInf#12733)