Set desired running time #16
Comments
That would be interesting, but execution time varies greatly from machine to machine. The only meaningful result a benchmark can give is a comparison of A to B. A and B could be different (competing?) libraries/frameworks, or different versions of the same code (to track performance regressions).
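For illustration, here is a minimal sketch of that A-versus-B approach using Athletic's `@iterations` docblock annotation; the class name, method names, and the two implementations being compared are invented for the example:

```php
<?php

use Athletic\AthleticEvent;

/**
 * Two candidate implementations benchmarked in the same event class,
 * so the report shows A and B side by side on the same machine.
 */
class EncodingComparisonEvent extends AthleticEvent
{
    /** @iterations 1000 */
    public function implementationA()
    {
        json_encode(array_fill(0, 100, 'payload'));
    }

    /** @iterations 1000 */
    public function implementationB()
    {
        serialize(array_fill(0, 100, 'payload'));
    }
}
```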
I see your point, but if you set the boundaries based on the minimal hardware requirements, that won't be a problem. The better the hardware, the better the speed. But in a language such as PHP, chasing raw performance is futile anyway: a good algorithm may perform thousands of times worse than the same one written in C as an extension.
I like this in theory, but I doubt it would be useful in practice. For example, we run CI tests on all our different codebases at my job. The timeouts need to be set ludicrously high because, depending on what other jobs are building or running at the same time, performance profiles vary wildly. This happens with code that normally executes in 100 ms but sometimes takes up to 15 seconds due to resource contention. Disk I/O is the usual culprit, since it is very difficult to regulate even in tightly controlled VMs; CPU can sometimes do it too. I'm just not sure how practical it would be for someone to base tests on performance metrics that can be wildly variable, even on a single machine.
I was quite interested in this feature too, knowing it is hard as hell to implement from the start. However, I've found a couple of things that could turn Athletic into a test-like utility:
Some annotations that one could set to a "desired running time" would be nice. E.g.
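A minimal sketch of what such an annotation could look like, assuming Athletic's existing docblock style; `@maxRunningTime` (read here as mean milliseconds per iteration) is an invented annotation name, not something Athletic actually supports:

```php
<?php

use Athletic\AthleticEvent;

class UserLookupEvent extends AthleticEvent
{
    /**
     * @iterations     1000
     * @maxRunningTime 2.5
     */
    public function findById()
    {
        // method under benchmark; should average under 2.5 ms per iteration
    }
}
```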
This way, the benchmarking could also serve as a form of testing: we get quick feedback on methods that do not run within the preferred time window.
This would translate as:
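Roughly, such an annotation could boil down to a check like the following sketch (plain PHP; the threshold value and the `$measurements` array of per-iteration timings are illustrative assumptions, not Athletic internals):

```php
<?php

// Sketch of the check an annotation like @maxRunningTime could expand to.
// $measurements is assumed to hold per-iteration timings in seconds.
$maxRunningTimeMs = 2.5; // value taken from the @maxRunningTime annotation
$meanMs = (array_sum($measurements) / count($measurements)) * 1000;

if ($meanMs > $maxRunningTimeMs) {
    printf(
        "FAILED: findById() averaged %.3f ms, limit is %.3f ms\n",
        $meanMs,
        $maxRunningTimeMs
    );
}
```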