Cleanup and address review comments
Importantly, use MPI_SGI_vtune_is_running instead of
MPI_SGI_init to identify the HMPT library, because we
have to distinguish between the MPT and HMPT versions.
coreneuron submodule update
pramodk committed Oct 14, 2021
1 parent 41fd45e commit c174509
Showing 5 changed files with 21 additions and 11 deletions.
8 changes: 7 additions & 1 deletion packaging/python/Dockerfile_gpu
@@ -3,15 +3,21 @@ LABEL authors="Pramod Kumbhar, Fernando Pereira, Alexandru Savulescu"

WORKDIR /root

# download nvhpc 21.2 rpms. Note that newer versions, up to at least 21.7, have
# various bugs, so we are sticking to 21.2 until we verify the latest release.
# see https://github.com/BlueBrain/CoreNeuron/issues/605
RUN wget --no-verbose \
https://developer.download.nvidia.com/hpc-sdk/21.2/nvhpc-21-2-21.2-1.x86_64.rpm \
https://developer.download.nvidia.com/hpc-sdk/21.2/nvhpc-2021-21.2-1.x86_64.rpm \
https://developer.download.nvidia.com/hpc-sdk/21.2/nvhpc-21-2-cuda-multi-21.2-1.x86_64.rpm \
&& yum install -y *.rpm \
&& rm *.rpm && yum clean all

# setup nvhpc environment for building wheel and interactive usage
# note that with 21.2, as we load the nvhpc module in build_wheel.sh,
# the latest cuda v11.2 from nvhpc is still used for building the wheel.
RUN yum install -y environment-modules && yum clean all \
&& echo "module use /opt/nvidia/hpc_sdk/modulefiles" >> ~/.bashrc \
&& echo "export PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/21.2/cuda/11.0/bin:\$PATH" >> ~/.bashrc \
&& /opt/nvidia/hpc_sdk/Linux_x86_64/21.2/compilers/bin/makelocalrc -x \
-gcc `which gcc` -gpp `which g++` -g77 `which gfortran` # -cuda 11.0 this option is valid for 21.7
-gcc `which gcc` -gpp `which g++` -g77 `which gfortran` # -cuda 11.0 option is valid for >=21.7
10 changes: 6 additions & 4 deletions packaging/python/README.md
@@ -24,10 +24,10 @@ We mount local neuron repository inside docker as a volume to preserve any code
```
git clone https://github.com/neuronsimulator/nrn.git
docker run -v $PWD/nrn:/root/nrn -v $PWD/mpt-headers/2.21:/nrnwheel/mpt -it neuronsimulator/neuron_wheel bash
docker run -v $PWD/nrn:/root/nrn -v $PWD/mpt-headers/2.21:/nrnwheel/mpt/include -it neuronsimulator/neuron_wheel bash
```

where `$PWD/nrn` is the neuron repository on the host machine and `$PWD/mpt-headers` is a directory containing the HPE-MPT MPI headers (optional). We mount these directories inside the container at `/root/nrn` and `/nrnwheel/mpt`. The MPT headers are optional and maintained in a separate repository, as MPT is not an open source library. You can download the headers with:
where `$PWD/nrn` is the neuron repository on the host machine and `$PWD/mpt-headers` is a directory containing the HPE-MPT MPI headers (optional). We mount these directories inside the container at `/root/nrn` and `/nrnwheel/mpt/include`. The MPT headers are optional and maintained in a separate repository, as MPT is not an open source library. You can download the headers with:

```
git clone ssh://bbpcode.epfl.ch/user/kumbhar/mpt-headers
@@ -36,7 +36,7 @@ git clone ssh://bbpcode.epfl.ch/user/kumbhar/mpt-headers
If you want to build a wheel with *GPU support* via CoreNEURON, you have to use the image `neuronsimulator/neuron_wheel_gpu`, i.e.

```
docker run -v $PWD/nrn:/root/nrn -v $PWD/mpt-headers/2.21:/nrnwheel/mpt -it neuronsimulator/neuron_wheel_gpu bash
docker run -v $PWD/nrn:/root/nrn -v $PWD/mpt-headers/2.21:/nrnwheel/mpt/include -it neuronsimulator/neuron_wheel_gpu bash
```

Note that for OS X there is no docker image, but on a system where all dependencies exist you can perform the next build step directly.
@@ -60,7 +60,9 @@ To build a wheel with GPU support you have to pass an additional argument:
* coreneuron-gpu : build wheel with coreneuron and gpu support

```
bash packaging/python/build_wheels.bash linux 3* coreneuron-gpu
bash packaging/python/build_wheels.bash linux 3* coreneuron
or
bash packaging/python/build_wheels.bash linux 38 coreneuron-gpu
```

In the above example we are passing `3*` to build the wheels for all python 3 versions.
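The way a version argument like `3*` or `38` can select python versions is sketched below (a hypothetical illustration using a shell `case` glob; `match_versions` is not part of the real script, whose internals may differ):

```shell
# Hypothetical sketch: select python versions matching a glob pattern,
# in the spirit of passing "3*" or "38" to build_wheels.bash.
match_versions() {
  pattern="$1"; shift
  for v in "$@"; do
    # Unquoted $pattern in a case label is glob-matched against $v.
    case "$v" in
      $pattern) echo "$v" ;;
    esac
  done
}

match_versions '3*' 36 37 38 39   # selects all python 3 versions
match_versions 38 36 37 38 39     # selects only python 3.8
```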
6 changes: 3 additions & 3 deletions packaging/python/build_wheels.bash
@@ -78,8 +78,8 @@ build_wheel_linux() {
setup_args="--enable-coreneuron"
elif [ "$2" == "coreneuron-gpu" ]; then
setup_args="--enable-coreneuron --enable-gpu"
# nvhpc is required for GPU support but make
# sure CC and CXX are unset for building wheel extensions
# nvhpc is required for GPU support but make sure
# CC and CXX are unset for building python extensions
module load nvhpc
unset CC CXX
fi
@@ -191,7 +191,7 @@ case "$1" in
;;

*)
echo "Usage: $(basename $0) < linux | osx > [python version 36|37|38|3*] [coreneuron | coreneuron-gpu]"
echo "Usage: $(basename $0) < linux | osx > [python version 36|37|38|39|3*] [coreneuron | coreneuron-gpu]"
exit 1
;;

6 changes: 4 additions & 2 deletions src/nrnmpi/nrnmpi_dynam.cpp
@@ -132,7 +132,7 @@ sprintf(pmes+strlen(pmes), "Is openmpi or mpich installed? If not in default loc
if (!load_nrnmpi("libnrnmpi_msmpi.dll", pmes+strlen(pmes))){
return pmes;
}
corenrn_mpi_library = std::string(prefix) + "libcorenrnmpi_msmpi.dll";
corenrn_mpi_library = "libcorenrnmpi_msmpi.dll";
}else{
ismes = 1;
return pmes;
@@ -180,7 +180,9 @@ sprintf(pmes+strlen(pmes), "Is openmpi or mpich installed? If not in default loc
if (dlsym(handle, "ompi_mpi_init")) { /* it is openmpi */
sprintf(lname, "%slibnrnmpi_ompi.so", prefix);
corenrn_mpi_library = std::string(prefix) + "libcorenrnmpi_ompi.so";
}else if (dlsym(handle, "MPI_SGI_init")) { /* it is sgi-mpt */
}else if (dlsym(handle, "MPI_SGI_vtune_is_running")) { /* it is sgi-mpt */
// MPI_SGI_init exists in both mpt and hmpt, hence we look for
// MPI_SGI_vtune_is_running, which exists only in the non-hmpt version.
sprintf(lname, "%slibnrnmpi_mpt.so", prefix);
corenrn_mpi_library = std::string(prefix) + "libcorenrnmpi_mpt.so";
}else{ /* must be mpich. Could check for MPID_nem_mpich_init...*/
