Add new cprofile options.
ionelmc committed Oct 28, 2024
1 parent 61229eb commit e1dcd91
Showing 5 changed files with 88 additions and 45 deletions.
6 changes: 6 additions & 0 deletions CHANGELOG.rst
@@ -12,6 +12,12 @@ Changelog
   Contributed by Tony Kuo in `#257 <https://github.com/ionelmc/pytest-benchmark/pull/257>`_.
 * Fixes spelling in some help texts.
   Contributed by Eugeniy in `#267 <https://github.com/ionelmc/pytest-benchmark/pull/267>`_.
+* Added new cprofile options:
+
+  - ``--benchmark-cprofile-loops=LOOPS`` - previously profiling only ran the function once; this allows customization.
+  - ``--benchmark-cprofile-top=COUNT`` - allows showing more rows.
+  - ``--benchmark-cprofile-dump=[FILENAME-PREFIX]`` - allows saving to a file (that you can load in `snakeviz <https://pypi.org/project/snakeviz/>`_ or other tools).
+

 4.0.0 (2022-10-26)
 ------------------
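A quick sketch of how the new flags compose with the existing --benchmark-cprofile option, driven through pytest's own pytest.main() API. The test file name test_foo.py and the chosen values are hypothetical:

    # Equivalent to: pytest test_foo.py --benchmark-cprofile=cumtime \
    #   --benchmark-cprofile-loops=10 --benchmark-cprofile-top=25 \
    #   --benchmark-cprofile-dump=profiles/run
    import pytest

    pytest.main([
        'test_foo.py',                             # hypothetical test module
        '--benchmark-cprofile=cumtime',            # sort the profile table by cumulative time
        '--benchmark-cprofile-loops=10',           # profile 10 loops instead of a single run
        '--benchmark-cprofile-top=25',             # show 25 rows in the profile table
        '--benchmark-cprofile-dump=profiles/run',  # write profiles/run-<test_name>.prof
    ])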
86 changes: 49 additions & 37 deletions docs/usage.rst
@@ -77,20 +77,20 @@ Commandline options
   --benchmark-max-time=SECONDS
                         Maximum run time per test - it will be repeated until
                         this total time is reached. It may be exceeded if test
-                        function is very slow or --benchmark-min-rounds is
-                        large (it takes precedence). Default: '1.0'
+                        function is very slow or --benchmark-min-rounds is large
+                        (it takes precedence). Default: '1.0'
   --benchmark-min-rounds=NUM
-                        Minimum rounds, even if total time would exceed
-                        `--max-time`. Default: 5
+                        Minimum rounds, even if total time would exceed `--max-
+                        time`. Default: 5
   --benchmark-timer=FUNC
                         Timer to use when measuring time. Default:
                         'time.perf_counter'
   --benchmark-calibration-precision=NUM
-                        Precision to use when calibrating number of
-                        iterations. Precision of 10 will make the timer look
-                        10 times more accurate, at a cost of less precise
-                        measure of deviations. Default: 10
-  --benchmark-warmup=KIND
+                        Precision to use when calibrating number of iterations.
+                        Precision of 10 will make the timer look 10 times more
+                        accurate, at a cost of less precise measure of
+                        deviations. Default: 10
+  --benchmark-warmup=[KIND]
                         Activates warmup. Will run the test function up to
                         number of times in the calibration phase. See
                         `--benchmark-warmup-iterations`. Note: Even the warmup
@@ -104,11 +104,11 @@ Commandline options
                         Disable GC during benchmarks.
   --benchmark-skip      Skip running any tests that contain benchmarks.
   --benchmark-disable   Disable benchmarks. Benchmarked functions are only run
-                        once and no stats are reported. Use this if you want
-                        to run the test but don't do any benchmarking.
-  --benchmark-enable    Forcibly enable benchmarks. Use this option to
-                        override --benchmark-disable (in case you have it in
-                        pytest configuration).
+                        once and no stats are reported. Use this if you want to
+                        run the test but don't do any benchmarking.
+  --benchmark-enable    Forcibly enable benchmarks. Use this option to override
+                        --benchmark-disable (in case you have it in pytest
+                        configuration).
   --benchmark-only      Only run benchmarks. This overrides --benchmark-skip.
   --benchmark-save=NAME
                         Save the current run into 'STORAGE-PATH/counter-
@@ -123,49 +123,61 @@ Commandline options
                         stats.
   --benchmark-json=PATH
                         Dump a JSON report into PATH. Note that this will
-                        include the complete data (all the timings, not just
-                        the stats).
-  --benchmark-compare=NUM
+                        include the complete data (all the timings, not just the
+                        stats).
+  --benchmark-compare=[NUM|_ID]
                         Compare the current run against run NUM (or prefix of
                         _id in elasticsearch) or the latest saved run if
                         unspecified.
-  --benchmark-compare-fail=EXPR
+  --benchmark-compare-fail=EXPR [EXPR ...]
                         Fail test if performance regresses according to given
                         EXPR (eg: min:5% or mean:0.001 for number of seconds).
                         Can be used multiple times.
   --benchmark-cprofile=COLUMN
-                        If specified measure one run with cProfile and stores
-                        10 top functions. Argument is a column to sort by.
-                        Available columns: 'ncalls_recursion', 'ncalls',
-                        'tottime', 'tottime_per', 'cumtime', 'cumtime_per',
-                        'function_name'.
+                        If specified cProfile will be enabled. Top functions
+                        will be stored for the given column. Available columns:
+                        'ncalls_recursion', 'ncalls', 'tottime', 'tottime_per',
+                        'cumtime', 'cumtime_per', 'function_name'.
+  --benchmark-cprofile-loops=LOOPS
+                        How many times to run the function in cprofile.
+                        Available options: 'auto', or an integer.
+  --benchmark-cprofile-top=COUNT
+                        How many rows to display.
+  --benchmark-cprofile-dump=[FILENAME-PREFIX]
+                        Save cprofile dumps as FILENAME-PREFIX-test_name.prof.
+                        If FILENAME-PREFIX contains slashes ('/') then
+                        directories will be created. Default:
+                        'benchmark_20241028_160327'
+  --benchmark-time-unit=COLUMN
+                        Unit to scale the results to. Available units: 'ns',
+                        'us', 'ms', 's'. Default: 'auto'.
   --benchmark-storage=URI
                         Specify a path to store the runs as uri in form
-                        file\:\/\/path or elasticsearch+http[s]\:\/\/host1,host2/[in
-                        dex/doctype?project_name=Project] (when --benchmark-
-                        save or --benchmark-autosave are used). For backwards
+                        file://path or elasticsearch+http[s]://host1,host2/[inde
+                        x/doctype?project_name=Project] (when --benchmark-save
+                        or --benchmark-autosave are used). For backwards
                         compatibility unexpected values are converted to
-                        file\:\/\/<value>. Default: 'file\:\/\/./.benchmarks'.
-  --benchmark-netrc=BENCHMARK_NETRC
+                        file://<value>. Default: 'file://./.benchmarks'.
+  --benchmark-netrc=[BENCHMARK_NETRC]
                         Load elasticsearch credentials from a netrc file.
                         Default: ''.
   --benchmark-verbose   Dump diagnostic and progress information.
-  --benchmark-sort=COL  Column to sort on. Can be one of: 'min', 'max',
-                        'mean', 'stddev', 'name', 'fullname'. Default: 'min'
-  --benchmark-group-by=LABELS
-                        Comma-separated list of categories by which to
-                        group tests. Can be one or more of: 'group', 'name',
-                        'fullname', 'func', 'fullfunc', 'param' or
-                        'param:NAME', where NAME is the name passed to
-                        @pytest.parametrize. Default: 'group'
+  --benchmark-quiet     Disable reporting. Verbose mode takes precedence.
+  --benchmark-sort=COL  Column to sort on. Can be one of: 'min', 'max', 'mean',
+                        'stddev', 'name', 'fullname'. Default: 'min'
+  --benchmark-group-by=LABEL
+                        How to group tests. Can be one of: 'group', 'name',
+                        'fullname', 'func', 'fullfunc', 'param' or 'param:NAME',
+                        where NAME is the name passed to @pytest.parametrize.
+                        Default: 'group'
   --benchmark-columns=LABELS
                         Comma-separated list of columns to show in the result
                         table. Default: 'min, max, mean, stddev, median, iqr,
                         outliers, ops, rounds, iterations'
   --benchmark-name=FORMAT
                         How to format names in results. Can be one of 'short',
                         'normal', 'long', or 'trial'. Default: 'normal'
-  --benchmark-histogram=FILENAME-PREFIX
+  --benchmark-histogram=[FILENAME-PREFIX]
                         Plot graphs of min/max/avg/stddev over time in
                         FILENAME-PREFIX-test_name.svg. If FILENAME-PREFIX
                         contains slashes ('/') then directories will be
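A dump produced by --benchmark-cprofile-dump is a regular cProfile stats file, so besides snakeviz it can be read with the stdlib pstats module. A minimal sketch, assuming a hypothetical dump file named with the default prefix shown above:

    import pstats

    # Load the dumped profile and print the 10 functions with the highest
    # cumulative time; file names follow FILENAME-PREFIX-test_name.prof.
    stats = pstats.Stats('benchmark_20241028_160327-test_foo.prof')
    stats.sort_stats('cumulative').print_stats(10)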
17 changes: 15 additions & 2 deletions src/pytest_benchmark/fixture.py
@@ -6,10 +6,12 @@
 import traceback
 import typing
 from math import ceil
+from pathlib import Path

 from .timers import compute_timer_precision
 from .utils import NameWrapper
 from .utils import format_time
+from .utils import slugify

 try:
     import statistics
@@ -45,6 +47,7 @@ def __init__(
         disabled,
         cprofile,
         cprofile_loops,
+        cprofile_dump,
         group=None,
     ):
         self.name = node.name
@@ -75,6 +78,7 @@ def __init__(
         self._mode = None
         self.cprofile = cprofile
         self.cprofile_loops = cprofile_loops
+        self.cprofile_dump = cprofile_dump
         self.cprofile_stats = None
         self.stats = None
@@ -134,6 +138,15 @@ def _make_stats(self, iterations):
         self.stats = bench_stats
         return bench_stats

+    def _save_cprofile(self, profile: cProfile.Profile):
+        stats = pstats.Stats(profile)
+        self.stats.cprofile_stats = stats
+        if self.cprofile_dump:
+            output_file = Path(f'{self.cprofile_dump}-{slugify(self.name)}.prof')
+            output_file.parent.mkdir(parents=True, exist_ok=True)
+            stats.dump_stats(output_file)
+            self._logger.info(f'Saved profile: {output_file}', bold=True)
+
     def __call__(self, function_to_benchmark, *args, **kwargs):
         if self._mode:
             self.has_error = True
@@ -191,7 +204,7 @@ def _raw(self, function_to_benchmark, *args, **kwargs):
             profile = cProfile.Profile()
             for _ in cprofile_loops:
                 function_result = profile.runcall(function_to_benchmark, *args, **kwargs)
-            self.stats.cprofile_stats = pstats.Stats(profile)
+            self._save_cprofile(profile)
         else:
             function_result = function_to_benchmark(*args, **kwargs)
         return function_result
@@ -260,7 +273,7 @@ def make_arguments(args=args, kwargs=kwargs):
                 args, kwargs = make_arguments()
                 for _ in cprofile_loops:
                     profile.runcall(target, *args, **kwargs)
-                self.stats.cprofile_stats = pstats.Stats(profile)
+                self._save_cprofile(profile)

         return result
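The loops in _raw() and the pedantic path rely on cProfile accumulating measurements across calls: a single cProfile.Profile instance keeps adding to its counters on every runcall(), so profiling LOOPS iterations is just repeated runcall() against one profiler. A standalone sketch of that behaviour, where work() and the loop count stand in for the benchmarked function and cprofile_loops:

    import cProfile
    import pstats

    def work():
        # Stand-in for the function under benchmark.
        return sum(i * i for i in range(1000))

    profile = cProfile.Profile()
    for _ in range(10):  # stands in for `for _ in cprofile_loops`
        result = profile.runcall(work)  # runcall() returns work()'s result
    # All 10 runs are aggregated in the same stats object.
    pstats.Stats(profile).sort_stats('tottime').print_stats(5)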
11 changes: 11 additions & 0 deletions src/pytest_benchmark/plugin.py
@@ -287,6 +287,17 @@ def pytest_addoption(parser):
         type=int,
         help='How many rows to display.',
     )
+    cprofile_dump_prefix = f'benchmark_{get_current_time()}'
+    group.addoption(
+        '--benchmark-cprofile-dump',
+        action='append',
+        metavar='FILENAME-PREFIX',
+        nargs='?',
+        default=[],
+        const=cprofile_dump_prefix,
+        help='Save cprofile dumps as FILENAME-PREFIX-test_name.prof. If FILENAME-PREFIX contains'
+        f" slashes ('/') then directories will be created. Default: {cprofile_dump_prefix!r}",
+    )
     group.addoption(
         '--benchmark-time-unit',
         metavar='COLUMN',
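The addoption() call above combines argparse's action='append' with nargs='?' and a const: omitting the flag leaves the empty-list default, passing the bare flag appends the timestamped const, and passing a value appends that value. A self-contained sketch with plain argparse, where benchmark_PREFIX stands in for the generated default:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--benchmark-cprofile-dump',
        action='append', metavar='FILENAME-PREFIX',
        nargs='?', default=[], const='benchmark_PREFIX',
    )

    print(parser.parse_args([]).benchmark_cprofile_dump)
    # -> []  (flag omitted: default empty list)
    print(parser.parse_args(['--benchmark-cprofile-dump']).benchmark_cprofile_dump)
    # -> ['benchmark_PREFIX']  (bare flag: const is appended)
    print(parser.parse_args(['--benchmark-cprofile-dump=out/run']).benchmark_cprofile_dump)
    # -> ['out/run']  (explicit value is appended)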
13 changes: 7 additions & 6 deletions src/pytest_benchmark/session.py
@@ -43,6 +43,10 @@ def __init__(self, config):
             default_machine_id=self.machine_id,
             netrc=config.getoption('benchmark_netrc'),
         )
+        self.cprofile_sort_by = config.getoption('benchmark_cprofile')
+        self.cprofile_loops = config.getoption('benchmark_cprofile_loops')
+        self.cprofile_top = config.getoption('benchmark_cprofile_top')
+        self.cprofile_dump = first_or_value(config.getoption('benchmark_cprofile_dump'), False)
         self.options = {
             'min_time': SecondsDecimal(config.getoption('benchmark_min_time')),
             'min_rounds': config.getoption('benchmark_min_rounds'),
@@ -52,14 +56,12 @@ def __init__(self, config):
             'disable_gc': config.getoption('benchmark_disable_gc'),
             'warmup': config.getoption('benchmark_warmup'),
             'warmup_iterations': config.getoption('benchmark_warmup_iterations'),
-            'cprofile': bool(config.getoption('benchmark_cprofile')),
-            'cprofile_loops': config.getoption('benchmark_cprofile_loops'),
+            'cprofile': bool(self.cprofile_sort_by),
+            'cprofile_loops': self.cprofile_loops,
+            'cprofile_dump': self.cprofile_dump,
         }
         self.skip = config.getoption('benchmark_skip')
         self.disabled = config.getoption('benchmark_disable') and not config.getoption('benchmark_enable')
-        self.cprofile_sort_by = config.getoption('benchmark_cprofile')
-        self.cprofile_loops = config.getoption('benchmark_cprofile_loops')
-        self.cprofile_top = config.getoption('benchmark_cprofile_top')

         if config.getoption('dist', 'no') != 'no' and not self.skip and not self.disabled:
             self.logger.warning(
@@ -93,7 +95,6 @@ def __init__(self, config):
         self.compare = config.getoption('benchmark_compare')
         self.compare_fail = config.getoption('benchmark_compare_fail')
         self.name_format = NAME_FORMATTERS[config.getoption('benchmark_name')]
-
         self.histogram = first_or_value(config.getoption('benchmark_histogram'), False)

     def get_machine_info(self):
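Because the option is append-style, session.py reads it through first_or_value() to collapse the list into a single setting, with False meaning no dump was requested. A hedged sketch of the assumed semantics (the real helper lives in pytest_benchmark.utils):

    def first_or_value(values, fallback):
        # Assumed behaviour: the flag arrives as a list (action='append'),
        # so take the first entry if present, else the fallback.
        return values[0] if values else fallback

    assert first_or_value([], False) is False
    assert first_or_value(['profiles/run'], False) == 'profiles/run'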
