
GPU wheel support #2446

Open · olupton opened this issue Jul 27, 2023 · 0 comments
Labels: building, coreneuron, enhancement, gpu, improvement, python, wheel

olupton commented Jul 27, 2023

Overview of the feature

Ideally, a simple pip install neuron (or similar) would yield a NEURON installation capable of using CoreNEURON's GPU support, i.e.

from neuron import coreneuron
coreneuron.cell_permute = 2
coreneuron.enable = True
coreneuron.gpu = True
...
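
For context, a fuller runnable sketch of the intended user experience, assuming the documented NEURON Python API (the single-compartment model and the cache_efficient call are illustrative additions, not part of the original proposal):

from neuron import h, coreneuron

h.load_file("stdrun.hoc")

# hypothetical model: one compartment using only the built-in hh mechanism
soma = h.Section(name="soma")
soma.insert("hh")

h.CVode().cache_efficient(True)  # needed by CoreNEURON on some NEURON versions
coreneuron.enable = True
coreneuron.gpu = True
coreneuron.cell_permute = 2

pc = h.ParallelContext()
h.finitialize(-65)
pc.set_maxstep(10)
pc.psolve(10.0)  # run 10 ms of simulated time through CoreNEURON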

A previous attempt at this was introduced in #1452 and ultimately removed in #2378. This ticket concerns a (hypothetical) future attempt to re-introduce GPU wheel support in a more sustainable way.

Retrospective on previous efforts

All NEURON and CoreNEURON workflows involving custom MOD files use the nrnivmodl (or nrnivmodl-core) scripts on the end user's machine. In essence, these scripts:

  • generate C++ code
  • compile C++ code
  • create shared libraries and dynamically linked executables that link against shared libraries that were shipped as part of the wheel distribution

Note in particular that the last step links code that was compiled on the user's machine, with the user's toolchain, at nrnivmodl runtime, against [Core]NEURON libraries that were compiled much earlier, on a different machine and with a different toolchain.
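
Concretely, the workflow looks roughly like this (a sketch; mod/ and init.hoc are hypothetical names for the user's MOD-file directory and launch script):

import subprocess

# steps 1 and 2: nrnivmodl translates the MOD files in mod/ to C++ and compiles them
subprocess.run(["nrnivmodl", "mod"], check=True)

# step 3: the resulting binary links the freshly compiled code against the
# libraries (libnrniv.so and friends) that shipped, precompiled, inside the wheel
subprocess.run(["./x86_64/special", "init.hoc"], check=True)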

This already causes problems that have nothing to do with GPU support; see, for example, #1963.

With GPU support enabled, GPU-aware code exists on both sides: some is compiled on the user's machine, while some is compiled and distributed as part of the wheel distribution.

Because CoreNEURON's GPU support currently requires the NVIDIA HPC C++ compiler, nvc++, the old GPU-enabled wheels had to be compiled with nvc++ (not g++, which is used for the regular Linux wheels). As NVIDIA do not, so far as we know, provide any guarantees about forward/backward link compatibility between GPU-enabled code built with different versions of nvc++, we adopted the conservative approach of requiring an exact match.

This implied that the user had to manually install, on their own machine, the same version of the NVIDIA HPC SDK that had been used to build the GPU-enabled wheel they had installed. This is obviously possible, but it was not user-friendly. (Furthermore, it plainly would not scale if any other package shipped nvc++-compiled wheels requiring a different version.)
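
To make the constraint concrete, the exact-match requirement amounts to a check like the following at nrnivmodl time (a sketch only; no such automated check necessarily existed, and the version string is hypothetical):

import subprocess

EXPECTED = "23.1"  # hypothetical nvc++ version recorded when the wheel was built

out = subprocess.run(["nvc++", "--version"], capture_output=True, text=True).stdout
if EXPECTED not in out:
    raise RuntimeError(
        "this GPU wheel was linked with nvc++ " + EXPECTED
        + "; please install the matching NVIDIA HPC SDK release"
    )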

In practice, limited developer resources meant that the version used to build the wheels was not regularly updated, and it was quite old by the time the old GPU wheel support was removed.

In addition, the auditwheel infrastructure caused significant problems with the extra runtime libraries used by nvc++.
In order for the binaries shipped in the wheel (such as nrniv) to be usable standalone, without a local installation of the NVIDIA HPC SDK, some of these runtime libraries were shipped as part of the wheel. This was not a problem in and of itself -- binaries such as nrniv worked as expected -- but it was problematic in the nrnivmodl workflow described above.

Specifically, the shipped CoreNEURON .so linked against several libnvidiastuff-{hash}.so libraries, where the {hash} suffix was added by the wheel-building infrastructure. Inside nrnivmodl, nvc++ would link libnvidiastuff.so from the user's NVIDIA HPC SDK installation, as well as linking to the CoreNEURON library, resulting in an executable that links against both libnvidiastuff.so (from the system) and libnvidiastuff-{hash}.so (from the wheel). Unsurprisingly, this caused problems. Working around this involved using patchelf inside nrnivmodl to remove the link dependency on libnvidiastuff-{hash}.so. Needless to say, this is unpleasant and fragile.
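
The workaround amounted to something like the following (a sketch; the library name, hash suffix, and binary path are all placeholders):

import subprocess

# drop the wheel-local copy of the runtime library from the freshly linked
# binary, so that only the system libnvidiastuff.so remains as a dependency
subprocess.run(
    ["patchelf", "--remove-needed", "libnvidiastuff-abc123.so", "x86_64/special"],
    check=True,
)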

Proposed way forward

A lot of the complication above is due to the existence of GPU-enabled code inside the shared library that is shipped, precompiled, inside the wheel. This code is principally responsible for:

  • running the matrix solver
  • managing data movement between CPU and GPU
  • providing implementations of the default mechanisms, when nrnivmodl is not run

If the source code were reorganised (in the direction of shipping more source code) so that all GPU-enabled code is compiled inside nrnivmodl, the main NEURON and CoreNEURON libraries (libnrniv.so and so on) could be compiled with g++ as normal, meaning that the binary wheels would no longer be tied to a specific version of nvc++. This would also allow all compute-intensive code to be compiled on the user's machine, with knowledge of the CPU/GPU hardware that is being used.

Without extra effort, this would imply that even models that only use default mechanisms (no custom MOD files) require nrnivmodl to be available on the user's machine. With a little extra effort, it might be possible to additionally ship a GPU-enabled mechanism library, compiled for lowest-common-denominator hardware with some specific version of nvc++, that nrniv could fall back to if nrnivmodl has not been run. Because the GPU-enabled code would be restricted to the default mechanism library, which is not an input to nrnivmodl (or at least need not be), we would avoid the nvc++ mismatches described above.
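
The fallback logic could be as simple as a path check at launch time; a minimal sketch, with all paths and library names hypothetical:

import os

def pick_mechanism_library(arch="x86_64"):
    # prefer mechanisms built locally by nrnivmodl, which know about the
    # user's exact CPU/GPU hardware and toolchain
    local = os.path.join(arch, "libcorenrnmech.so")
    if os.path.exists(local):
        return local
    # otherwise fall back to the shipped, lowest-common-denominator,
    # GPU-enabled default mechanism library (hypothetical path and name)
    return "/path/to/wheel/lib/libcorenrnmech_gpu.so"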

Foreseeable Impact

A user-friendly GPU-enabled wheel would further reduce the barrier to entry to using CoreNEURON's most performant mode of operation.

  • Area(s) of change: CoreNEURON C++, packaging, CMake
  • Possible issues: high maintenance, insufficient ease-of-use
    • Using the GPU-enabled wheels needs to be much easier in practice than building NEURON from source; otherwise it simply does not justify the truckload of caveats and limitations that come with the wheels.
  • External dependencies, such as requiring users to install the NVIDIA HPC SDK