Set desired running time #16

Open
davidsteinsland opened this issue Dec 2, 2013 · 4 comments

@davidsteinsland

Some annotations for setting a "desired running time" would be nice. E.g.

"this" method should preferably run below "0.0002" on average

This way, the benchmarking could also serve as a form of testing: we get quick feedback on methods that do not run within the preferred time window.

<?php
class TestEvent
{
    /**
     * @iterations 1000
     * @maxTime 0.05
     * @avgTime 0.002
     */
    public function testSomething()
    {
        // the benchmarked method body
    }
}

This would translate as:

the method should run 1000 iterations, with no single iteration allowed above the maximum time limit, and the average time should be less than 0.002
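
A minimal sketch of how a runner might enforce such annotations, assuming the per-iteration wall-clock timings have already been collected; the function and its name are illustrative only, not part of Athletic's API:

<?php
// Illustrative sketch: check collected per-iteration timings against the
// proposed @maxTime and @avgTime limits; returns a list of failure messages.
function checkTimings(array $iterationTimes, $maxTime, $avgTime)
{
    $failures = [];

    // No single iteration may exceed the hard per-iteration limit.
    $slowest = max($iterationTimes);
    if ($slowest > $maxTime) {
        $failures[] = sprintf('slowest iteration %.6fs exceeded @maxTime %.6fs', $slowest, $maxTime);
    }

    // The mean across all iterations must stay below the average limit.
    $mean = array_sum($iterationTimes) / count($iterationTimes);
    if ($mean > $avgTime) {
        $failures[] = sprintf('average %.6fs exceeded @avgTime %.6fs', $mean, $avgTime);
    }

    return $failures; // an empty array means the benchmark passed
}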

@mnapoli

mnapoli commented Dec 3, 2013

That would be interesting, but execution time varies greatly from machine to machine. The only meaningful result in a benchmark is comparing A to B. A and B could be different (competing?) libraries/frameworks, or different versions of the same code (to track performance regressions).

@davidsteinsland
Author

I see your point, but if you set the boundaries based on the minimum hardware requirements, that won't be a problem. The better the hardware, the better the speed. But in a language such as PHP, chasing raw performance is somewhat futile anyway: a good algorithm may perform thousands of times worse than the same algorithm written in C as an extension.

@polyfractal
Owner

I like this in theory, but I doubt it would be useful in practice. For example, we run CI tests on all our different codebases at my job. The timeouts need to be set ludicrously high because, depending on what other jobs are building/running at the same time, performance profiles vary wildly. This is on code that normally executes in 100ms, but sometimes takes up to 15 seconds due to resource contention.

Disk I/O is the usual culprit, since that is very difficult to regulate even in tightly controlled VMs. CPU can sometimes do it too.

I'm just not sure how practical it would be for someone to base tests on performance metrics that can be wildly variable, even on a single machine.

@etki
Contributor

etki commented Dec 28, 2014

I was quite interested in this feature too, knowing from the start that it is hard as hell to implement. However, I've found a couple of things that could turn Athletic into a test-like utility:

  1. getrusage() returns the time the processor has actually spent on the script. Judging by the comments on the manual page, it will be a little tricky to ensure the timings are correct, but it may help eliminate latency problems; also, good testing should always use mocks (so there would be no latency problem at all). As I've been told, this doesn't work on Windows. See the sketch after this list.
  2. An even more hardcore solution: perf stat -e instructions -e cycles %your_script% reports the real number of CPU instructions and cycles (though I'm not exactly sure what the difference between them is). Those values tend to vary by something like 5% for cycles and less than 1% for instructions on my notebook, so I guess it's possible to set thresholds close to the real values. And yes, this too is a Linux-only solution, and it requires installing extra utility packages (linux-tools-generic), but it could be adapted to ensure script performance across Linux machines.
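
A minimal sketch of the getrusage() approach from point 1, measuring user-mode CPU time around a callable; the helper name and the surrounding loop are illustrative only and not part of Athletic:

<?php
// Illustrative sketch: measure user-mode CPU time per iteration with getrusage(),
// so time spent waiting on I/O or other processes is not counted.
function averageCpuTime(callable $fn, $iterations = 1000)
{
    $before = getrusage();
    for ($i = 0; $i < $iterations; $i++) {
        $fn();
    }
    $after = getrusage();

    // ru_utime is reported as separate seconds and microseconds fields.
    $start = $before['ru_utime.tv_sec'] + $before['ru_utime.tv_usec'] / 1e6;
    $end   = $after['ru_utime.tv_sec'] + $after['ru_utime.tv_usec'] / 1e6;

    return ($end - $start) / $iterations; // average CPU seconds per iteration
}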
