
Neurogenesis

Abstract

Neurogenesis is a framework that allows a large number of OMNeT++ simulations (or, with a bit of work, arbitrary jobs) to be distributed across several cluster machines. It uses the mpi4py MPI wrapper to distribute jobs.

Installation

First, select a machine as the master controller. It is responsible for coordinating the distribution of simulation data. All file paths given in this section are the recommended default locations. You can choose your own, but then you have to pass them as additional parameters to the distsim command, or modify the default parameters of the distsim command accordingly.

In order to exchange simulation results, all machines have to share a common file system. We recommend NFS for this.

  • Mount the common file system on each machine at /mnt/distsim.
  • Extract and build the current OMNeT++ version in /mnt/distsim/omnet.
  • Extract and build your INET library in /mnt/distsim/inet.
  • Create the working directory for all simulations at /mnt/distsim/simulations.
  • Clone the Neurogenesis repository to /mnt/distsim/simtools.
  • Add the distsim command to your $PATH variable.

Neurogenesis uses MPI to distribute the workload across machines. This means that your master controller needs passwordless SSH access to all machines. The MPI hostfile should be placed at /mnt/distsim/simtools/hostfile (a sketch is shown at the end of this section). Finally, make sure that all required Python dependencies are installed on your machines. Neurogenesis uses Python 2.7 and has only a few non-standard-library dependencies:

  • argparse
  • mpi4py

Optionally, for plotting and analysis we recommend the matplotlib library together with seaborn.

Please make sure that the dynamically linked dependencies of OMNeT++ are also present on your simulation workers. Now you should have everything you need to start simulating.
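For illustration, with Open MPI the hostfile simply lists one machine per line with an optional slot count; the hostnames and slot counts below are placeholders for your own cluster, and other MPI implementations use a slightly different syntax:

    # /mnt/distsim/simtools/hostfile (Open MPI syntax, placeholder hosts)
    node01 slots=4
    node02 slots=4
    node03 slots=8

The Python dependencies can usually be installed via pip:

    pip install argparse mpi4py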

Usage

Neurogenesis simulation projects typically share the following directory structure:

 .
+-- my_omnet_parameter_study
|   +-- omnetpp.ini
|   +-- some_ned_file.ned
|   +-- scalars.txt
|   +-- plot.py
|   +-- meta.pickle
|   +-- pushover_secret
  • omnetpp.ini is the standard OMNeT++ simulation configuration. If you want to simulate several different parameter values, you can specify them in the ini file in the form parameter = {value1, value2, value3}. Neurogenesis automatically generates all possible combinations of the different parameter/value configurations and passes them to the simulation workers (see the example after this list).

  • If your simulation requires additional ned files, they have to be passed to distsim with the --additionalFiles parameter. Please note that there is currently a bug in the namespacing system of OMNeT++ which requires that all .ned files which are not within the default $NEDPATH be placed in the root of the NED namespace. This is done simply by commenting out the package directive in the ned file.

  • scalars.txt contains the list of scalars written during the simulation that you are interested in for further analysis and plotting.

  • plot.py is the script called by distsim for plotting simulation results. For details, see the "Example parameter study" section below.

  • meta.pickle is an auto-generated, serialized Python object which stores meta information as well as simulation results needed for later processing.

  • pushover_secret (optional) contains pushover[1] user API keys which you can use to send notifications to your desktop/mobile client when a simulation has finished.
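For illustration, a minimal parameter study could use files like the following. All network, parameter, and scalar names are placeholders and only serve to show the expected file formats:

    # omnetpp.ini -- Neurogenesis expands every {value1, value2, ...} list and
    # generates one run per combination (here: 3 x 2 = 6 runs)
    [General]
    network = ExampleNetwork
    sim-time-limit = 100s
    **.coreChannel.delay = {1ms, 5ms, 10ms}
    **.coreChannel.datarate = {1Mbps, 10Mbps}

    # scalars.txt -- one scalar name per line
    AverageQueueingDelay
    PacketLossRate

    // some_ned_file.ned -- package directive commented out so that the file lives
    // in the root of the NED namespace (see the note on the namespacing bug above)
    // package my.parameter.study;
    simple ExampleNode
    {
        gates:
            inout port;
    }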

Example parameter study

Let's assume you placed an omnetpp.ini with different run configurations together with its corresponding NED file (example.ned) in an arbitrary folder (e.g. ~/simulations/test_simulation) on your master controller.

Then you can start a distributed simulation simply by calling

   distsim simulate --inifile omnetpp.ini --additionalFiles example.ned

This call performs several things for you: it parses the omnetpp.ini file for all parameters which should be varied in your parameter study. It automagically generates all possible combinations of the configuration settings and writes them to the /mnt/distsim/simulations/ folder. Then it parses the MPI hostfile, extracts the maximum number of available ranks and starts that many worker ranks. It then distributes the generated simulation runs to these newly spawned MPI workers. The results of these simulations are written to the NFS share.
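Conceptually, the combination step is a Cartesian product over all swept parameters. The following is only a sketch of the idea with made-up parameter names, not the actual distsim implementation:

    import itertools

    # swept parameters as they would be parsed from omnetpp.ini (illustrative values)
    parameters = {
        "**.coreChannel.delay": ["1ms", "5ms", "10ms"],
        "**.coreChannel.datarate": ["1Mbps", "10Mbps"],
    }

    names = sorted(parameters)
    for values in itertools.product(*(parameters[name] for name in names)):
        run_config = dict(zip(names, values))
        print(run_config)  # one concrete parameter assignment per simulation run, 6 in total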

   distsim extract

The next step is to collect the results generated by the OMNeT++ instances during simulation. By executing this command, distsim reads the scalars.txt file (newline-separated scalar names) and extracts the desired values from the generated .sca files. The simulation results are then written to the meta.pickle file. The advantage of this approach is that this single file can then be downloaded from the master controller whenever plotting or analysis should be performed.
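Because all results end up in this single file, you can copy meta.pickle to your workstation and inspect it there. A minimal sketch, assuming the file holds the hash-to-SimulationRun dictionary described below:

    import pickle

    # the SimulationRun class from the Neurogenesis sources must be importable
    # for unpickling to succeed
    with open("meta.pickle", "rb") as f:
        simulations = pickle.load(f)  # {md5 hash of generated ini: SimulationRun}

    for run_hash, run in simulations.items():
        print(run_hash, run.last_exit_code)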

   distsim plot

reads the previously serialized data back in. distsim automatically calls a function conventionally named plot(simulations) in the plot.py file in the current working directory. Here, simulations is a dictionary that maps the md5 hash of each generated ini-file instance to an instance of SimulationRun().

This object looks roughly like this:

class SimulationRun():
    def __init__(self):
        self.hash = ""
        self.config = []
        self.last_executed = 0
        self.path = ""
        self.executable_path = ""
        self.results = {}
        self.result_vectors = {}  # {('module', 'name'): [(event, simSec, value), ...], ...}
        self.parameters = []
        self.execute_binary = ""
        self.last_exit_code = -1
        self.last_executed_on_rank = -1
        self.last_run_at = 0

In order to ease analysis of the data generated by the simulations, a helper function get_datapoints_in_buckets(simulations, x_axis_parameter, y_axis_parameter, [(filter)]) is provided. It can be called from within the plot.py script.

An example could look as follows:

datapoints = get_datapoints_in_buckets(simulations, "**.coreChannel.delay", "**.AverageQueueingDelay", [('**.coreChannel.datarate', 1)])

The first parameter this function takes is the simulations object provided to the plot() function. The second and third parameters specify the names of the parameters that should be displayed on the x and y axes of a plot. The last parameter is a list of tuples which are interpreted as filters; it can be used to narrow down the results.

This function returns a list of tuples, where the first element is a double and the second element is a list of doubles (in Python: [(double, [double])]). When performing parameter studies, you normally run the same simulation configuration several times in order to minimize phase effects. Therefore the value corresponding to the y-axis is a list.
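Putting it all together, a plot.py could look roughly like the sketch below. The parameter names are the illustrative ones from above, averaging the repeated runs is just one possible reduction, and it is assumed that get_datapoints_in_buckets is available in the script's scope when distsim calls it:

    # plot.py -- called by `distsim plot`, which invokes plot(simulations)
    import matplotlib
    matplotlib.use("Agg")  # render to a file; no display needed on the master controller
    import matplotlib.pyplot as plt


    def plot(simulations):
        # x axis: configured channel delay, y axis: measured average queueing delay,
        # restricted to runs with a datarate of 1 (names are illustrative)
        datapoints = get_datapoints_in_buckets(
            simulations,
            "**.coreChannel.delay",
            "**.AverageQueueingDelay",
            [('**.coreChannel.datarate', 1)])

        xs = [x for x, _ in datapoints]
        ys = [sum(values) / float(len(values)) for _, values in datapoints]  # average repeated runs

        plt.plot(xs, ys, marker="o")
        plt.xlabel("coreChannel.delay")
        plt.ylabel("AverageQueueingDelay")
        plt.savefig("queueing_delay.png")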

If you have any further questions, please do not hesitate to contact me.

[1] https://pushover.net/
