
Commit

Small fixes for documentation. (#60)
Sam Yates committed Mar 29, 2019
1 parent 168140c commit 9ab7457
Showing 6 changed files with 48 additions and 37 deletions.
16 changes: 8 additions & 8 deletions docs/benchmarks.rst
@@ -22,9 +22,9 @@ script ``benchmarks/models/MODEL/config.sh`` that takes the following arguments:
config.sh $model \ # model name
$config \ # configuration name
$ns_base_path \ # the base path of nsuite
$ns_config_path \ # the base path of nsuite
$ns_bench_input_path \ # the base path of nsuite
$ns_bench_output \ # base for the output path
$ns_config_path \ # path to config directory
$ns_bench_input_path \ # path to benchmark input base directory
$ns_bench_output \ # path to benchmark output base directory
$output_format # format string for simulator+model+config
The script will in turn generate a benchmark runner for each simulation engine:
@@ -34,7 +34,7 @@ The script will in turn generate a benchmark runner for each simulation engine:
3. ``$ns_bench_input_path/$model/$config/run_corenrn.sh``

These scripts should generate benchmark output in the per-simulator path
``$ns_bench_input_path/$output_format`` where the ``$output_format`` defaults to ``$model/$config/$engine``.
``$ns_bench_output/$output_format`` where the ``$output_format`` defaults to ``$model/$config/$engine``.
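As an illustration of how that default resolves (all concrete values below are hypothetical, not paths NSuite prescribes), a *ring* benchmark with configuration *small* run under Arbor would write its output to:

```shell
# Hypothetical example values; nsuite sets these variables itself.
model=ring
config=small
engine=arbor
ns_bench_output=/home/user/nsuite/output/benchmarks

# The documented default: output_format = $model/$config/$engine
output_format="$model/$config/$engine"
echo "$ns_bench_output/$output_format"
# → /home/user/nsuite/output/benchmarks/ring/small/arbor
```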

.. Note::
NSuite does not specify how the contents of ``benchmarks/engines/ENGINE``
@@ -43,12 +43,12 @@ These scripts should generate benchmark output in the per-simulator path
Performance reporting
"""""""""""""""""""""

Each benchmark run has to report metrics like simulation time, memory consumption, number of cells in model, and so on.
Each benchmark run has to report metrics such as simulation time, memory consumption, the number of cells in the model, and so on.
These are output in the formats described in :ref:`bench-outputs`.

Arbor has a standardised way of measuring and reporting metrics using what it calls *meters*.
NSuite provides utility Python module in ``common/python/metering.py`` that provides the
same functionality in Python, which can be used for NEUORN benchmarks in Python.
NSuite provides a Python module in ``common/python/metering.py`` that offers the
same functionality in Python, which can be used for the NEURON benchmarks.
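The exact interface of ``metering.py`` is not reproduced in this document; the self-contained sketch below only illustrates the meter idea — named checkpoints that record the time elapsed between phases of a run — and every name in it is an assumption, not the module's real API.

```python
import time

class Meters:
    """Toy checkpoint-style meter, illustrating the concept only
    (not the actual common/python/metering.py interface)."""

    def __init__(self):
        self.checkpoints = []          # list of (name, seconds) pairs
        self._last = time.perf_counter()

    def checkpoint(self, name):
        """Record the time elapsed since the previous checkpoint."""
        now = time.perf_counter()
        self.checkpoints.append((name, now - self._last))
        self._last = now

meters = Meters()
# ... build the model here ...
meters.checkpoint("model-init")
# ... run the simulation here ...
meters.checkpoint("model-run")

for name, elapsed in meters.checkpoints:
    print(f"{name}: {elapsed:.3f} s")
```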

With this standard output, ``scrpts/csv_bench.sh`` script can be used to automatically generate the CSV output.
With this standard output format, the ``scripts/csv_bench.sh`` script can be used to automatically generate the CSV output.

6 changes: 3 additions & 3 deletions docs/engines.rst
@@ -34,12 +34,12 @@ it must support the following features:
* Output of gid and times for spikes.

.. Note::
If a simulation engine doesn't suport a feature required to run a test,
If a simulation engine doesn't support a feature required to run a test,
the test will be skipped. For example, the only simulation output
provided by CoreNEURON is spike times, so validation tests that require
other information such as voltage traces are skipped when testing CoreNEURON.

NSuite does not prescribe models using universal model descriptions such as
NSuite does not describe models using universal model descriptions such as
`SONATA <https://github.com/AllenInstitute/sonata>`_ or `NeuroML <https://www.neuroml.org>`_.
Instead, benchmark and validation models are described using simulation engine-specific descriptions.

@@ -54,7 +54,7 @@ NEURON models
""""""""""""""""""""""""""""""""""""""""""

Models to run in NEURON are described using NEURON's Python interface.
The bencmarking and validation runners launch the models using with the Python 3
The benchmarking and validation runners launch the models with the Python 3
interpreter specified by the ``ns_python`` variable (see :ref:`vars_general`).

CoreNEURON models
2 changes: 1 addition & 1 deletion docs/index.rst
@@ -14,7 +14,7 @@ There are three motivations for the development of NSuite:
simulation engines on HPC systems.
2. The need to verify the performance and correctness of individual simulation engines
as they change over time.
3. The need to thest that changes to an HPC system do not cause performance or
3. The need to test that changes to an HPC system do not cause performance or
correctness regressions in simulation engines.

The framework currently supports the simulation engines Arbor, NEURON, and CoreNEURON,
38 changes: 19 additions & 19 deletions docs/install.rst
@@ -4,7 +4,7 @@ Installing NSuite
================================

The first stage of the NSuite workflow is to install the simulation engine(s) to benchmark or validate.
This page describes how to obtain NSuite, then perform this step so that benchmarks and validation tests can be run.
This page describes how to obtain NSuite and then perform this step so that benchmarks and validation tests can be run.

Obtaining NSuite
--------------------------------
@@ -20,20 +20,20 @@ The simplest way to do this is to clone the repository using git:
cd nsuite
git checkout v1.0
Above ``git checkout`` is used to pick a tagged version of NSuite. If not called
the latest development version in the master branch will be used.
In the example above, ``git checkout v1.0`` is used to pick a tagged version of NSuite.
If omitted, the latest development version in the master branch will be used.

**TODO** guide on how to download zipped/tarred version from tags.
..
**TODO** guide on how to download zipped/tarred version from tags.
Installing Simulation Engines
--------------------------------

NSuite provides a script ``install-local.sh`` that performs the role of
NSuite provides a script ``install-local.sh`` that performs the following operations:

* Obtaining the source code for simulation engines.
* Compiling and installing simulation engines.
* Compiling and installing benchmark and validation test drivers.
* Generating input data sets for validation tests.
* Obtain the source code for simulation engines.
* Compile and install the simulation engines.
* Compile and install benchmark and validation test drivers.

Basic usage of ``install-local.sh`` is best illustrated with some examples:

@@ -108,12 +108,12 @@ If no prefix is provided, the directory structure is created in the nsuite path.
The contents of each sub-directory are summarised:

==================== ======================================================
``build``` Source code for simulation engines is checked out, and compiled here.
``install``` Installation target for the simulation engine libraries, executables, headers, etc.
``config``` The environment used to build each simulation engine is stored here, to load per-simulator when running benchmarks and validation tests.
``cache``` Validation data sets are stored here when generated during the installation phase.
``input``` **generated by running benchmarks** Input files for benchmark runs in sub-directories for each benchmark configuration.
``output``` **generated by running benchmarks/validation** Benchmark and validation outputs in sub-directories for each benchmark/validation configuration.
``build`` Source code for simulation engines is checked out, and compiled here.
``install`` Installation target for the simulation engine libraries, executables, headers, etc.
``config`` The environment used to build each simulation engine is stored here, to load per-simulator when running benchmarks and validation tests.
``cache`` Validation data sets are stored here when generated during the installation phase.
``input`` **generated by running benchmarks** Input files for benchmark runs in sub-directories for each benchmark configuration.
``output`` **generated by running benchmarks/validation** Benchmark and validation outputs in sub-directories for each benchmark/validation configuration.
==================== ======================================================

Customizing the environment
@@ -160,7 +160,7 @@ Variable Default value Explanation
``gcc``/``clang`` on Linux/OS X
``ns_cxx`` ``mpicxx`` if available, else The C++ compiler for compiling simulation engines.
``g++``/``clang++`` on Linux/OS X
``ns_with_mpi`` ``ON`` iff MPI is detectedl ``ON``/``OFF`` to compile simulation engines with MPI enabled.
``ns_with_mpi`` ``ON`` iff MPI is detected ``ON``/``OFF`` to compile simulation engines with MPI enabled.
Also controls whether mpirun is used to launch benchmarks.
``ns_makej`` 4 Number of parallel jobs to use when compiling.
``ns_python`` ``which python3`` The Python interpreter to use. Must be Python 3.
@@ -189,7 +189,7 @@ Variable Default value Explanat
======================== =========================================== ======================================================

The NEURON-specific options are for configuring where to get NEURON's source from.
NEURON can be dowloaded from a tar ball for a specific version, or cloned from a Git repository.
NEURON can be downloaded from a tar ball for a specific version, or cloned from a Git repository.

The official versions of NEURON's source code available to download are inconsistently packaged, so it
is not possible to automatically determine how to download and install from a version string alone, e.g. "7.6.2".
@@ -223,7 +223,7 @@ Example custom environment
Below is a custom configuration script for a Cray cluster with Intel KNL processors.
It configures all platform-specific details that can't be automatically detected by

* loading and swaping required modules;
* loading and swapping required modules;
* setting a platform-specific magic variable ``CRAYPE_LINK_TYPE`` required to make CMake play nice;
* configuring MPI with the Cray MPI wrapper;
* configuring Arbor to compile with KNL support;
@@ -234,7 +234,7 @@ It configures all platform-specific details that can't be automatically detected

.. code-block:: bash
# set up Cray Programming environmnet to use GNU toolchain
# set up Cray Programming environment to use GNU toolchain
[ "$PE_ENV" = "CRAY" ] && module swap PrgEnv-cray PrgEnv-gnu
# load python, gcc version and CMake
16 changes: 13 additions & 3 deletions docs/running.rst
@@ -56,9 +56,9 @@ simulator none Which simulation engines to benchmar
Use ``--help`` for all format string options.
==================== ================= ======================================================

The ``--model`` and ``-config`` flags specify which benchmarks to run,
The ``--model`` and ``--config`` flags specify which benchmarks to run
and how they should be configured. Currently there are two benchmark models,
*ring* and *kway*, detailed descriptions are in :ref:`benchmarks`.
*ring* and *kway*; detailed descriptions are in :ref:`benchmarks`.

.. container:: example-code

@@ -191,6 +191,13 @@ The `run-validation.sh` script runs all or a subset of the models for one or mor
installed simulators, saving test artefacts in a configurable output directory
and presenting a pass/fail status for each test on standard output.

Requirements
""""""""""""

The existing validation scripts use functionality from the ``scipy`` and
``xarray`` Python modules. These modules need to be available in the
Python module search path.
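A quick, generic way to confirm both modules are importable by the interpreter you intend to use (this check is not part of NSuite itself):

```python
import importlib.util

# Modules required by the existing validation scripts.
required = ("scipy", "xarray")
missing = [m for m in required if importlib.util.find_spec(m) is None]

if missing:
    print("missing modules:", ", ".join(missing))
else:
    print("all validation dependencies found")
```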

Invocation
""""""""""

@@ -234,10 +241,12 @@ of ``%s/%m/%p``. Fields in the ``FORMAT`` string are substituted as follows:
+--------+---------------------------------------------------------------------+
| ``%h`` | NSuite git commit short hash (with ``+`` suffix if modified) |
+--------+---------------------------------------------------------------------+
| ``%S`` | System name (if defined in system environment script) or host name. |
| ``%S`` | System name (if defined in system environment script) or host name |
+--------+---------------------------------------------------------------------+
| ``%s`` | Simulator name |
+--------+---------------------------------------------------------------------+
| ``%m`` | Model name |
+--------+---------------------------------------------------------------------+
| ``%p`` | Parameter set name |
+--------+---------------------------------------------------------------------+
| ``%%`` | Literal '%' |
@@ -260,6 +269,7 @@ record information in the per-test output directories:
+-------------+-------------------------------------------+

The status is one of:

1. ``pass`` — validation test succeeded.
2. ``fail`` — validation test failed.
3. ``missing`` — no implementation for the validation test found for requested simulator.
7 changes: 4 additions & 3 deletions docs/validation.rst
@@ -4,9 +4,9 @@ Validation
==================

A validation test runs a particular model, representing some physical system to
simulate, against one or more sets of parameters, and compares the output to a
simulate, against one or more sets of parameters and compares the output to a
reference solution. If the output deviates from the reference by more than a
given threshold, the respective test is marked as a FAIL for that simulator.
given threshold, the respective test is marked as failed for that simulator.

Simulator output for each model and parameter set is by convention stored in
NetCDF format, where it can be analysed with generic tools.
@@ -34,6 +34,7 @@ Model run scripts
"""""""""""""""""

A run script is invoked with the following arguments:

1. The output directory.
2. The simulator name.
3. The parameter set name.
@@ -68,7 +69,7 @@ data for a particular model *MODEL* should be stored in a subdirectory
of the cache directory also named *MODEL*.

If a validation run script does use cached data, that data should
be regenerated regardless if the environment variable ``ns_cache_refresh``
be regenerated if the environment variable ``ns_cache_refresh``
has a non-empty value.
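A run script might implement that check along the following lines (a hypothetical sketch, not code taken from NSuite):

```shell
# Regenerate cached reference data when ns_cache_refresh is non-empty.
if [ -n "${ns_cache_refresh:-}" ]; then
    echo "regenerating cached data"
else
    echo "reusing cached data"
fi
```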

Building tests
