Better benchmarking #1906

Closed · not-my-profile opened this issue Jan 16, 2023 · 6 comments

not-my-profile (Contributor) commented Jan 16, 2023

  • create a script for running the CPython benchmark
  • include the benchmark SVG shown in the README in the repository and add a script to regenerate it
    • include the versions of the linters in the SVG
    • add ruff --select ALL to the chart
  • introduce tooling to benchmark the current working directory against a given git branch (main by default); see the sketch after this list
  • create tooling that breaks down where the total time ruff takes is spent (how much of it goes to reading the files, parsing them with RustPython, and how much to checkers::ast, checkers::lines, checkers::tokens, etc.)
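
A minimal sketch of the branch-comparison idea above, assuming cargo, git, and hyperfine are on PATH and that a large lint target (e.g. a CPython checkout) is available. The ruff flags (--no-cache, --exit-zero), the script name, and the paths are illustrative assumptions, not the project's actual tooling:

```python
#!/usr/bin/env python3
"""Sketch: benchmark the working directory against a baseline branch."""
import argparse
import subprocess
import tempfile
from pathlib import Path


def build(repo: Path) -> Path:
    """Build ruff in release mode inside `repo` and return the binary path."""
    subprocess.run(["cargo", "build", "--release"], cwd=repo, check=True)
    return repo / "target" / "release" / "ruff"


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--baseline", default="main", help="branch to compare against")
    parser.add_argument("--target", default="crates", help="directory to lint")
    args = parser.parse_args()

    # Build the current working directory first.
    cwd_binary = build(Path.cwd())

    with tempfile.TemporaryDirectory() as tmp:
        # Check out the baseline branch into a throwaway worktree and build it too.
        worktree = Path(tmp) / "baseline"
        subprocess.run(
            ["git", "worktree", "add", "--detach", str(worktree), args.baseline],
            check=True,
        )
        try:
            baseline_binary = build(worktree)
            # Let hyperfine handle warmup, repetition, and the comparison report.
            subprocess.run(
                [
                    "hyperfine",
                    "--warmup", "3",
                    f"{cwd_binary} {args.target} --no-cache --exit-zero",
                    f"{baseline_binary} {args.target} --no-cache --exit-zero",
                ],
                check=True,
            )
        finally:
            subprocess.run(
                ["git", "worktree", "remove", "--force", str(worktree)], check=True
            )


if __name__ == "__main__":
    main()
```

Run from the repository root, e.g. `python scripts/benchmark_branch.py --baseline main --target ../cpython` (file name and paths hypothetical).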
charliermarsh (Member) commented

For what it’s worth, I care more about better tooling to benchmark Ruff than I do about benchmarking against other tools (and recreating the graph, etc.). Most useful here by far would be automating Ruff’s internal benchmark, measuring it continuously, and breaking it down by where time is spent.

charliermarsh (Member) commented

(I'm actually regenerating the graph for unrelated reasons, so I'll check in some instructions and such. Also, if helpful, all of the versions are encoded in scripts/pyproject.toml.)
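
For illustration, a small sketch of how those pinned versions could be read when labeling the chart, assuming Python 3.11+ (tomllib) and that the pins appear as strings like "pylint==2.15.10" under [project] dependencies; the exact layout of scripts/pyproject.toml is an assumption here:

```python
# Sketch: read pinned linter versions from scripts/pyproject.toml for chart labels.
import tomllib
from pathlib import Path

with Path("scripts/pyproject.toml").open("rb") as f:
    data = tomllib.load(f)

# Assumed structure: dependency strings like "pylint==2.15.10" under [project].
for dep in data.get("project", {}).get("dependencies", []):
    name, _, version = dep.partition("==")
    print(f"{name}: {version or 'unpinned'}")
```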

charliermarsh (Member) commented

(Updated the graph and added some more instructions in #1907.)

jvdd commented Feb 2, 2023

I just discovered codspeed: https://codspeed.io/ - a tool for continuous benchmarking - and I find it amazing!

It can monitor Python & Rust benchmarks and gives feedback in pull requests about improvements / regressions - see example.
After quickly toying around with it, the benchmark results look really stable (on par with or better than what I get on my development server).

Some large Rust libraries that have integrated codspeed:

P.S.: I am not affiliated / involved with codspeed - just very pleased with it and thought it could be worthwhile to share with such a performance-oriented library as this one :)

charliermarsh (Member) commented

I'm interested in giving it a try :) The author reached out to me a while ago and kindly gave me beta access, but I haven't had a chance to set it up 😂

charliermarsh (Member) commented

Closing, as we added cargo benchmark, which helps with some of the above. Anything remaining can be opened as separate tickets in the future if we choose to prioritize it.
