
Add asyncio_tcp benchmark #254

Merged

kumaraditya303 merged 3 commits into python:main on Dec 23, 2022
Conversation

@kumaraditya303 (Contributor)

From python/cpython#91166

We can show off this optimization as part of the 3.12 performance improvements :)
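For context, here is a minimal sketch of the kind of workload an asyncio TCP benchmark exercises. This is illustrative only, not the code added by this PR; `CHUNK_SIZE` and `N_ROUNDTRIPS` are made-up values.

```python
# Illustrative sketch of an asyncio TCP echo benchmark (not the actual
# benchmark added by this PR). A server echoes data back to a client
# over a loopback connection, and we time the round trips.
import asyncio
import time

CHUNK_SIZE = 256 * 1024   # bytes per write (illustrative value)
N_ROUNDTRIPS = 100        # request/response cycles (illustrative value)
PAYLOAD = b"x" * CHUNK_SIZE

async def handle_client(reader, writer):
    # Echo everything back until the client closes its end.
    while data := await reader.read(CHUNK_SIZE):
        writer.write(data)
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 0)
    host, port = server.sockets[0].getsockname()[:2]
    reader, writer = await asyncio.open_connection(host, port)

    t0 = time.perf_counter()
    for _ in range(N_ROUNDTRIPS):
        writer.write(PAYLOAD)
        await writer.drain()
        await reader.readexactly(CHUNK_SIZE)  # wait for the full echo
    elapsed = time.perf_counter() - t0

    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    print(f"{N_ROUNDTRIPS * CHUNK_SIZE / elapsed / 1e6:.1f} MB/s per direction")

asyncio.run(main())
```

A workload like this spends most of its time in the event loop and the stream/transport machinery rather than in user code, which is why transport-level optimizations show up so clearly in it.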

@brandtbucher (Member)

Is the goal for this benchmark to guide our future improvements, or just to highlight the improvement already made in the linked issue?

We should really avoid the latter... the last thing I want is to add synthetic benchmarks just to boost our 3.12 numbers.

@kumaraditya303 (Contributor, Author)

> Is the goal for this benchmark to guide our future improvements, or just to highlight the improvement already made in the linked issue?

It is to guide future improvements and, more importantly, to avoid regressions. As it stands there are no benchmarks for asyncio, so it is hard to evaluate the impact of any bug fix on performance; having this in pyperformance will help track that (see the recent python/cpython#100053).

> We should really avoid the latter... the last thing I want is to add synthetic benchmarks just to boost our 3.12 numbers.

That's true, though this isn't any different from other benchmarks: even if pyperformance shows an improvement, it doesn't mean a user's application will be that much faster, or faster at all.

I am fine with keeping this in my own fork, but I would prefer to have it in pyperformance so it can be tracked on speed.python.org.

@kumaraditya303 (Contributor, Author)

FYI, if this is acceptable then I have other benchmarks to add for asyncio, notably for UDP and SSL performance.
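For illustration only, a companion UDP benchmark might look roughly like the sketch below. This is hypothetical (the PR adds only the TCP benchmark), and `N_PACKETS` and the payload size are made-up values.

```python
# Hypothetical sketch of an asyncio UDP echo benchmark (not part of
# this PR). Assumes loopback delivery is lossless, which is fine for
# a sketch but not for a robust benchmark.
import asyncio
import time

N_PACKETS = 10_000     # illustrative value
PAYLOAD = b"x" * 1024  # illustrative value

class EchoServerProtocol(asyncio.DatagramProtocol):
    def connection_made(self, transport):
        self.transport = transport

    def datagram_received(self, data, addr):
        # Echo each datagram straight back to its sender.
        self.transport.sendto(data, addr)

class EchoClientProtocol(asyncio.DatagramProtocol):
    def __init__(self):
        self.reply = None  # future set by main() before each send

    def datagram_received(self, data, addr):
        # Wake up the coroutine waiting for this echo.
        self.reply.set_result(data)

async def main():
    loop = asyncio.get_running_loop()
    server_tr, _ = await loop.create_datagram_endpoint(
        EchoServerProtocol, local_addr=("127.0.0.1", 0))
    client_tr, client = await loop.create_datagram_endpoint(
        EchoClientProtocol,
        remote_addr=server_tr.get_extra_info("sockname"))

    t0 = time.perf_counter()
    for _ in range(N_PACKETS):
        client.reply = loop.create_future()
        client_tr.sendto(PAYLOAD)   # remote_addr was set at creation
        await client.reply          # wait for the echoed datagram
    elapsed = time.perf_counter() - t0

    client_tr.close()
    server_tr.close()
    print(f"{N_PACKETS / elapsed:.0f} round trips/s")

asyncio.run(main())
```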

@corona10 (Member) commented Dec 14, 2022

A general comment, not a judgment on this PR:
Adding a new benchmark increases the total running time of the suite. Since we cannot add a benchmark for every dimension, each benchmark should cover as many dimensions of performance as possible.

From this point of view:

  • An asyncio-based HTTP benchmark would be better than an asyncio-based TCP benchmark.
  • An asyncio-based HTTPS benchmark would be better than an asyncio-based SSL benchmark.
  • An asyncio-based HTTP/3 benchmark would be better than an asyncio-based UDP benchmark.

@kumaraditya303 (Contributor, Author)

> Adding a new benchmark increases the total running time of the suite.

I was under the impression that the cost of running the benchmarks was not an issue. If it is, then this isn't worth it: I can run it myself when necessary, so there would be no need to have it in pyperformance.

@ericsnowcurrently (Member)

> Adding a new benchmark increases the total running time of the suite.
> Since we cannot add a benchmark for every dimension, each benchmark should cover as many dimensions of performance as possible.

If running time is a concern, we can restrict the set of benchmarks that are run by default. That is easily done in the manifest file. Then we can do longer runs whenever we feel comfortable with the extra time by passing the --benchmarks all (or maybe it is ""?) option to pyperformance.
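For concreteness, a sketch of what that selection could look like on the command line. The `all` group name is just the guess from the paragraph above, and the output file names are illustrative; `--benchmarks` and `-o` are the real pyperformance options.

```console
# Run only the default benchmark set:
pyperformance run -o results.json

# Opt in to everything, including benchmarks left out of the default set:
pyperformance run --benchmarks all -o results-full.json
```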

In the long term, ideally we would have a solid set of workload-oriented ("macro") benchmarks that measure real-world performance (close enough, at least). This is what we would use to measure performance effectively and those results are what we would communicate to users. However, we still need the wide variety of feature-oriented ("micro") benchmarks, to help us avoid regressions and narrow down causes when we have regressions, as well as point to where possible improvements might be. See https://github.com/faster-cpython/ideas/wiki/All-About-Python-Benchmarking.

Currently, pretty much all of the pyperformance benchmarks are feature-oriented ("micro"). So we have plenty of work ahead of us. (Note that the pyston benchmarks are much closer to workload-oriented and we have been incorporating them into the faster-cpython results as much as possible.)

@corona10 (Member)

> If running time is a concern, we can restrict the set of benchmarks that are run by default.

Ah okay, in that case I am okay with adding it.
I was just worried about a proliferation of bench_asyncio_xxxx.py benchmarks.

@kumaraditya303 (Contributor, Author)

Final benchmark result:

| Benchmark | 3.10 | patch |
| --- | --- | --- |
| asyncio_tcp | 1.41 sec | 724 ms: 1.95x faster |

I plan to merge this by the weekend if there are no objections.
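For reference, a comparison like the table above is the kind of output pyperf's compare_to command produces (a sketch; the JSON result file names are illustrative):

```console
python3 -m pyperf compare_to cpython310.json patched.json --table
```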

@gvanrossum (Member)

@mdboom What do you think?

@kumaraditya303 (Contributor, Author)

I'll merge this now, as python/cpython#31871 is almost done and awaiting final review and merge. We can sort out any issues with this later down the road.

kumaraditya303 merged commit b3c5e70 into python:main on Dec 23, 2022
pablogsal pushed a commit to pablogsal/pyperformance that referenced this pull request on Jan 1, 2023