add benchmark implementation docs
bcumming committed Mar 28, 2019
1 parent 05e9079 commit be5d220
Showing 7 changed files with 53 additions and 10 deletions.
File renamed without changes.
1 change: 1 addition & 0 deletions benchmarks/models/kway/config.sh
1 change: 1 addition & 0 deletions benchmarks/models/ring/config.sh
52 changes: 47 additions & 5 deletions docs/benchmarks.rst
@@ -3,10 +3,52 @@
Benchmarks
==================

For a given model and simulation engine, a benchmark is run as a "parameter sweep": for
example, it is run multiple times with an increasing number of cells, to understand how performance scales.
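
As a sketch, such a sweep over the number of cells might be driven by a loop like the
following (the runner script and cell counts are purely illustrative, not part of NSuite):

.. code-block:: bash

    # Purely illustrative sweep: run a hypothetical benchmark runner with an
    # increasing number of cells to see how run time scales.
    for ncells in 1000 4000 16000 64000
    do
        ./run_benchmark.sh --cells "$ncells"
    done
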
Architecture
------------

.. _benchmark-config:

Benchmarks are set up in the NSuite source tree according to a specific layout.
Different benchmark models can share an underlying benchmark implementation. For example,
the *ring* and *kway* benchmarks are different configurations of
what we call a *busy-ring* model. In this case, *busy-ring* is the benchmark
*ENGINE*, and *kway* is a benchmark *MODEL*. All scripts
and inputs for an *ENGINE* are in the path ``benchmarks/engines/ENGINE``, and
inputs for a *MODEL* are in ``benchmarks/models/MODEL``.

Every model *MODEL* must provide a configuration
script ``benchmarks/models/MODEL/config.sh`` that takes the following arguments:

.. code-block:: bash

    config.sh $model \                # model name
              $config \               # configuration name
              $ns_base_path \         # the base path of nsuite
              $ns_config_path \       # the path of the nsuite configuration
              $ns_bench_input_path \  # the base path for benchmark inputs
              $ns_bench_output \      # base for the output path
              $output_format          # format string for simulator+model+config

The script will in turn generate a benchmark runner for each simulation engine:

1. ``$ns_bench_input_path/$model/$config/run_arb.sh``
2. ``$ns_bench_input_path/$model/$config/run_nrn.sh``
3. ``$ns_bench_input_path/$model/$config/run_corenrn.sh``

These scripts should generate benchmark output in the per-simulator path
``$ns_bench_input_path/$output_format``, where ``$output_format`` defaults to ``$model/$config/$engine``.
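
The following is a hypothetical, minimal sketch of such a ``config.sh``, not one of the
scripts shipped in ``benchmarks/models``; in particular, the simulator label ``arb`` and
the expansion of ``%m``/``%p``/``%s`` in the format string are assumptions made for illustration.

.. code-block:: bash

    #!/usr/bin/env bash
    # Hypothetical, minimal config.sh; the real scripts in benchmarks/models
    # do more work (parameter files, per-engine model inputs, and so on).
    model="$1"; config="$2"
    ns_base_path="$3"; ns_config_path="$4"
    ns_bench_input_path="$5"; ns_bench_output="$6"; output_format="$7"

    run_path="$ns_bench_input_path/$model/$config"
    mkdir -p "$run_path"

    # Expand the output format string for the Arbor run
    # (assuming %m = model, %p = configuration, %s = simulator).
    out_arb="$ns_bench_output/$(echo "$output_format" | sed -e "s|%m|$model|" -e "s|%p|$config|" -e "s|%s|arb|")"

    # Generate the Arbor runner; run_nrn.sh and run_corenrn.sh are analogous.
    cat > "$run_path/run_arb.sh" << EOF
    #!/usr/bin/env bash
    mkdir -p "$out_arb"
    echo "a real runner would launch the Arbor benchmark here" > "$out_arb/run.log"
    EOF
    chmod +x "$run_path/run_arb.sh"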

.. Note::
    NSuite does not specify how the contents of ``benchmarks/engines/ENGINE``
    have to be laid out.

Performance reporting
"""""""""""""""""""""

Each benchmark run has to report metrics such as simulation time, memory consumption, and the number of cells in the model.
These are output in the formats described in :ref:`bench-outputs`.

Arbor has a standardised way of measuring and reporting metrics using what it calls *meters*.
NSuite provides a utility module, ``common/python/metering.py``, that offers the same
functionality in Python, and can be used by NEURON benchmarks written in Python.
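
For example, a metered NEURON benchmark script might look something like the sketch below.
The meter interface shown here (``MeterManager``, ``start``, ``checkpoint``, ``print``) is an
assumption made for illustration; refer to ``common/python/metering.py`` for the actual API.

.. code-block:: python

    import time

    # Hypothetical sketch: the metering interface below is assumed for
    # illustration and may not match common/python/metering.py exactly.
    from metering import MeterManager

    meters = MeterManager()
    meters.start()                    # begin collecting wall time, memory, ...

    time.sleep(0.1)                   # stand-in for building the model
    meters.checkpoint('model-init')

    time.sleep(0.5)                   # stand-in for running the simulation
    meters.checkpoint('model-run')

    # Emit the standard report consumed by scripts/csv_bench.sh.
    meters.print()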

With this standard output, the ``scripts/csv_bench.sh`` script can be used to automatically generate the CSV output.
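
An invocation might look like the following; this is an assumption made for illustration,
so check ``scripts/csv_bench.sh`` for the arguments it actually expects.

.. code-block:: bash

    # Assumed invocation for illustration only.
    scripts/csv_bench.sh "$ns_bench_output/ring/small/arb" > ring-small-arb.csv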

Benchmark Configurations
-------------------------
3 changes: 1 addition & 2 deletions docs/engines.rst
@@ -118,7 +118,7 @@ in ``scripts/environment.sh``, in the ``default_environment``
Add engine to ``install-local.sh``
""""""""""""""""""""""""""""""""""""""""""""""

The ``install-local.sh`` script will have to be extended to support optional
The ``install-local.sh`` script has to be extended to support optional
installation of the new simulation engine. Follow the steps used by the existing
simulation engines.

@@ -127,7 +127,6 @@ simulation engines.
benchmark and validation models, follow the example of how Arbor performs this
step in ``scripts/build_arbor.sh``.


Implement benchmarks and validation tests
""""""""""""""""""""""""""""""""""""""""""""""

4 changes: 2 additions & 2 deletions docs/running.rst
@@ -82,12 +82,12 @@ characteristics of simpler models.
Likewise, models in *large* configuration take much longer to run, with considerably more parallel
work for benchmarking performance of large models on powerful HPC nodes.

For more information on how to provide custom configurations, see :ref:`benchmark-config`.

.. Note::
    NEURON is used to generate input models for CoreNEURON. Before running a benchmark in
    CoreNEURON, the benchmark must first be run in NEURON.

.. _bench-outputs:

Benchmark output
"""""""""""""""""""""""""""

2 changes: 1 addition & 1 deletion run-bench.sh
@@ -167,7 +167,7 @@ do

model_input_path="$ns_bench_input_path/$model/$config"

"$ns_base_path/scripts/bench_config.sh" "$model" "$config" "$ns_base_path" "$ns_config_path" "$ns_bench_input_path" "$ns_bench_output" "${ns_bench_output_format:-%m/%p/%s}"
"$model_config_path/config.sh" "$model" "$config" "$ns_base_path" "$ns_config_path" "$ns_bench_input_path" "$ns_bench_output" "${ns_bench_output_format:-%m/%p/%s}"

if [ "$run_arb" == "true" ]; then
msg benchmark: arbor $model-$config
