
Commit

Merge branch 'development' of https://github.com/ECP-WarpX/WarpX into docs-simplify-citation-urls
eebasso committed Dec 19, 2023
2 parents 0b1dc90 + e460e60 commit 87f4f67
Showing 25 changed files with 620 additions and 842 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/cuda.yml
@@ -115,7 +115,7 @@ jobs:
         which nvcc || echo "nvcc not in PATH!"
         git clone https://github.com/AMReX-Codes/amrex.git ../amrex
-        cd ../amrex && git checkout --detach ecaa46d0be4b5c79b8806e48e3469000d8bb7252 && cd -
+        cd ../amrex && git checkout --detach ef38229189e3213f992a2e89dbe304fb49db9287 && cd -
         make COMP=gcc QED=FALSE USE_MPI=TRUE USE_GPU=TRUE USE_OMP=FALSE USE_PSATD=TRUE USE_CCACHE=TRUE -j 2
         ccache -s
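The hunk above pins AMReX to a new fixed commit. As a minimal local sketch (assuming only ``git`` and the repository URL shown in the hunk), the same pinning can be reproduced with:

```bash
# Clone AMReX next to the current checkout and pin it to the commit
# referenced in the updated CI workflow (a detached HEAD is intentional here).
git clone https://github.com/AMReX-Codes/amrex.git ../amrex
cd ../amrex
git checkout --detach ef38229189e3213f992a2e89dbe304fb49db9287
cd -
```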
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -76,7 +76,7 @@ repos:
 # Sorts Python imports according to PEP8
 # https://www.python.org/dev/peps/pep-0008/#imports
 - repo: https://github.com/pycqa/isort
-  rev: 5.13.0
+  rev: 5.13.2
   hooks:
   - id: isort
     name: isort (python)
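This bump refreshes the pinned isort hook revision. A short sketch of how such a pin is typically refreshed and exercised locally (assuming ``pre-commit`` is installed; these are standard ``pre-commit`` commands, not part of this commit):

```bash
# Re-resolve the pinned revision for the isort hook repo, then run it repo-wide
pre-commit autoupdate --repo https://github.com/pycqa/isort
pre-commit run isort --all-files
```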
2 changes: 1 addition & 1 deletion Docs/source/index.rst
@@ -109,7 +109,7 @@ Theory
    theory/intro
    theory/pic
    theory/amr
-   theory/PML
+   theory/boundary_conditions
    theory/boosted_frame
    theory/input_output
    theory/collisions
2 changes: 1 addition & 1 deletion Docs/source/install/dependencies.rst
@@ -18,7 +18,7 @@ Optional dependencies include:
 - for on-node accelerated compute *one of either*:

   - `OpenMP 3.1+ <https://www.openmp.org>`__: for threaded CPU execution or
-  - `CUDA Toolkit 11.0+ (11.3+ recommended) <https://developer.nvidia.com/cuda-downloads>`__: for Nvidia GPU support (see `matching host-compilers <https://gist.github.com/ax3l/9489132>`_) or
+  - `CUDA Toolkit 11.7+ <https://developer.nvidia.com/cuda-downloads>`__: for Nvidia GPU support (see `matching host-compilers <https://gist.github.com/ax3l/9489132>`_) or
   - `ROCm 5.2+ (5.5+ recommended) <https://gpuopen.com/learn/amd-lab-notes/amd-lab-notes-rocm-installation-readme/>`__: for AMD GPU support
 - `FFTW3 <http://www.fftw.org>`_: for spectral solver (PSATD) support when running on CPU or SYCL
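The documented CUDA floor moves from 11.0+ (11.3+ recommended) to 11.7+. A quick, hedged way to check a local toolkit against the new minimum (reusing the ``nvcc`` probe from the CI workflow above):

```bash
# Confirm nvcc is present and report its release; WarpX now documents CUDA 11.7+
which nvcc || echo "nvcc not in PATH!"
nvcc --version | grep -i release
```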
195 changes: 54 additions & 141 deletions Docs/source/install/hpc/karolina.rst
@@ -12,83 +12,60 @@ Introduction
 If you are new to this system, **please see the following resources**:

 * `IT4I user guide <https://docs.it4i.cz>`__
-* Batch system: `PBS <https://docs.it4i.cz/general/job-submission-and-execution/>`__
+* Batch system: `SLURM <https://docs.it4i.cz/general/job-submission-and-execution/>`__
 * Jupyter service: not provided/documented (yet)
 * `Filesystems <https://docs.it4i.cz/karolina/storage/>`__:

   * ``$HOME``: per-user directory, use only for inputs, source and scripts; backed up (25GB default quota)
-  * ``/scratch/``: `production directory <https://docs.it4i.cz/karolina/storage/#scratch-file-system>`__; very fast for parallel jobs (20TB default)
+  * ``/scratch/``: `production directory <https://docs.it4i.cz/karolina/storage/#scratch-file-system>`__; very fast for parallel jobs (10TB default)
   * ``/mnt/proj<N>/<proj>``: per-project work directory, used for long term data storage (20TB default)


 .. _building-karolina-preparation:

-Preparation
------------
-
-Use the following commands to download the WarpX source code:
-
-.. code-block:: bash
-
-   git clone https://github.com/ECP-WarpX/WarpX.git $HOME/src/warpx
-
-On Karolina, you can run either on GPU nodes with fast A100 GPUs (recommended) or CPU nodes.
-
-.. tab-set::
-
-   .. tab-item:: A100 GPUs
-
-      We use system software modules, add environment hints and further dependencies via the file ``$HOME/karolina_gpu_warpx.profile``.
-      Create it now:
-
-      .. code-block:: bash
-
-         cp $HOME/src/warpx/Tools/machines/karolina-it4i/karolina_gpu_warpx.profile.example $HOME/karolina_gpu_warpx.profile
-
-      .. dropdown:: Script Details
-         :color: light
-         :icon: info
-         :animate: fade-in-slide-down
-
-         .. literalinclude:: ../../../../Tools/machines/karolina-it4i/karolina_gpu_warpx.profile.example
-            :language: bash
-
-      Edit the 2nd line of this script, which sets the ``export proj=""`` variable.
-      For example, if you are member of the project ``DD-23-83``, then run ``vi $HOME/karolina_gpu_warpx.profile``.
-      Enter the edit mode by typing ``i`` and edit line 2 to read:
-
-      .. code-block:: bash
-
-         export proj="DD-23-83"
-
-      Exit the ``vi`` editor with ``Esc`` and then type ``:wq`` (write & quit).
-
-      .. important::
-
-         Now, and as the first step on future logins to Karolina, activate these environment settings:
-
-         .. code-block:: bash
-
-            source $HOME/karolina_gpu_warpx.profile
-
-      Finally, since Karolina does not yet provide software modules for some of our dependencies, install them once:
-
-      .. code-block:: bash
-
-         bash $HOME/src/warpx/Tools/machines/karolina-it4i/install_gpu_dependencies.sh
-         source $HOME/sw/karolina/gpu/venvs/warpx-gpu/bin/activate
-
-      .. dropdown:: Script Details
-         :color: light
-         :icon: info
-         :animate: fade-in-slide-down
-
-         .. literalinclude:: ../../../../Tools/machines/karolina-it4i/install_gpu_dependencies.sh
-            :language: bash
-
-   .. tab-item:: CPU Nodes
-
-      CPU usage is documentation is TODO.
+Installation
+------------
+
+We show how to install from scratch all the dependencies using `Spack <https://spack.io>`__.
+
+For size reasons it is not advisable to install WarpX in the ``$HOME`` directory, it should be installed in the "work directory". For this purpose we set an environment variable ``$WORK`` with the path to the "work directory".
+
+Profile file
+^^^^^^^^^^^^
+
+One can use the pre-prepared ``karolina_warpx.profile`` script below,
+which you can copy to ``${HOME}/karolina_warpx.profile``, edit as required and then ``source``.
+
+.. dropdown:: Script Details
+   :color: light
+   :icon: info
+   :animate: fade-in-slide-down
+
+   .. literalinclude:: ../../../../Tools/machines/karolina-it4i/karolina_warpx.profile.example
+      :language: bash
+      :caption: Copy the contents of this file to ``${HOME}/karolina_warpx.profile``.
+
+To have the environment activated on every login, add the following line to ``${HOME}/.bashrc``:
+
+.. code-block:: bash
+
+   source $HOME/karolina_warpx.profile
+
+To install the ``spack`` environment and Python packages:
+
+.. code-block:: bash
+
+   bash $WORK/src/warpx/Tools/machines/karolina-it4i/install_dependencies.sh
+
+.. dropdown:: Script Details
+   :color: light
+   :icon: info
+   :animate: fade-in-slide-down
+
+   .. literalinclude:: ../../../../Tools/machines/karolina-it4i/install_dependencies.sh
+      :language: bash


.. _building-karolina-compilation:

@@ -98,117 +98,53 @@ Compilation

 Use the following :ref:`cmake commands <building-cmake>` to compile the application executable:

-.. tab-set::
-
-   .. tab-item:: A100 GPUs
-
-      .. code-block:: bash
-
-         cd $HOME/src/warpx
-         rm -rf build_gpu
-
-         cmake -S . -B build_gpu -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
-         cmake --build build_gpu -j 12
-
-      The WarpX application executables are now in ``$HOME/src/warpx/build_gpu/bin/``.
-      Additionally, the following commands will install WarpX as a Python module:
-
-      .. code-block:: bash
-
-         rm -rf build_gpu_py
-
-         cmake -S . -B build_gpu_py -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_APP=OFF -DWarpX_PYTHON=ON -DWarpX_DIMS="1;2;RZ;3"
-         cmake --build build_gpu_py -j 12 --target pip_install
-
-   .. tab-item:: CPU Nodes
-
-      .. code-block:: bash
-
-         cd $HOME/src/warpx
-         rm -rf build_cpu
-
-         cmake -S . -B build_cpu -DWarpX_COMPUTE=OMP -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
-         cmake --build build_cpu -j 12
-
-      The WarpX application executables are now in ``$HOME/src/warpx/build_cpu/bin/``.
-      Additionally, the following commands will install WarpX as a Python module:
-
-      .. code-block:: bash
-
-         cd $HOME/src/warpx
-         rm -rf build_cpu_py
-
-         cmake -S . -B build_cpu_py -DWarpX_COMPUTE=OMP -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_APP=OFF -DWarpX_PYTHON=ON -DWarpX_DIMS="1;2;RZ;3"
-         cmake --build build_cpu_py -j 12 --target pip_install
+.. code-block:: bash
+
+   cd $WORK/src/warpx
+   rm -rf build_gpu
+
+   cmake -S . -B build_gpu -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
+   cmake --build build_gpu -j 48
+
+The WarpX application executables are now in ``$WORK/src/warpx/build_gpu/bin/``.
+Additionally, the following commands will install WarpX as a Python module:
+
+.. code-block:: bash
+
+   cd $WORK/src/warpx
+   rm -rf build_gpu_py
+
+   cmake -S . -B build_gpu_py -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_APP=OFF -DWarpX_PYTHON=ON -DWarpX_DIMS="1;2;RZ;3"
+   cmake --build build_gpu_py -j 48 --target pip_install

 Now, you can :ref:`submit Karolina compute jobs <running-cpp-karolina>` for WarpX :ref:`Python (PICMI) scripts <usage-picmi>` (:ref:`example scripts <usage-examples>`).
 Or, you can use the WarpX executables to submit Karolina jobs (:ref:`example inputs <usage-examples>`).
 For executables, you can reference their location in your :ref:`job script <running-cpp-karolina>` or copy them to a location in ``/scratch/``.


 .. _building-karolina-update:

 Update WarpX & Dependencies
 ---------------------------

 If you already installed WarpX in the past and want to update it, start by getting the latest source code:

 .. code-block:: bash

    cd $HOME/src/warpx

    # read the output of this command - does it look ok?
    git status

    # get the latest WarpX source code
    git fetch
    git pull

    # read the output of these commands - do they look ok?
    git status
    git log # press q to exit

 And, if needed,

 - :ref:`update the karolina_gpu_warpx.profile or karolina_cpu_warpx.profile files <building-karolina-preparation>`,
 - log out and into the system, activate the now updated environment profile as usual,
 - :ref:`execute the dependency install scripts <building-karolina-preparation>`.

 As a last step, clean the build directory ``rm -rf $HOME/src/warpx/build_*`` and rebuild WarpX.


 .. _running-cpp-karolina:

 Running
 -------

-.. tab-set::
-
-   .. tab-item:: A100 (40GB) GPUs
-
-      The batch script below can be used to run a WarpX simulation on multiple GPU nodes (change ``#PBS -l select=`` accordingly) on the supercomputer Karolina at IT4I.
-      This partition as up to `72 nodes <https://docs.it4i.cz/karolina/hardware-overview/>`__.
-      Every node has 8x A100 (40GB) GPUs and 2x AMD EPYC 7763, 64-core, 2.45 GHz processors.
-
-      Replace descriptions between chevrons ``<>`` by relevant values, for instance ``<proj>`` could be ``DD-23-83``.
-      Note that we run one MPI rank per GPU.
-
-      .. literalinclude:: ../../../../Tools/machines/karolina-it4i/karolina_gpu.qsub
-         :language: bash
-         :caption: You can copy this file from ``$HOME/src/warpx/Tools/machines/karolina-it4i/karolina_gpu.qsub``.
-
-      To run a simulation, copy the lines above to a file ``karolina_gpu.qsub`` and run
-
-      .. code-block:: bash
-
-         qsub karolina_gpu.qsub
-
-      to submit the job.
-
-   .. tab-item:: CPU Nodes
-
-      CPU usage is documentation is TODO.
+The batch script below can be used to run a WarpX simulation on multiple GPU nodes (change ``#SBATCH --nodes=`` accordingly) on the supercomputer Karolina at IT4I.
+This partition has up to `72 nodes <https://docs.it4i.cz/karolina/hardware-overview/>`__.
+Every node has 8x A100 (40GB) GPUs and 2x AMD EPYC 7763, 64-core, 2.45 GHz processors.
+
+Replace descriptions between chevrons ``<>`` by relevant values, for instance ``<proj>`` could be ``DD-23-83``.
+Note that we run one MPI rank per GPU.
+
+.. literalinclude:: ../../../../Tools/machines/karolina-it4i/karolina_gpu.sbatch
+   :language: bash
+   :caption: You can copy this file from ``$WORK/src/warpx/Tools/machines/karolina-it4i/karolina_gpu.sbatch``.
+
+To run a simulation, copy the lines above to a file ``karolina_gpu.sbatch`` and run
+
+.. code-block:: bash
+
+   sbatch karolina_gpu.sbatch
+
+to submit the job.


 .. _post-processing-karolina:
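Taken together, this file's changes replace the PBS (``qsub``) flow with SLURM (``sbatch``) and move builds from ``$HOME`` to ``$WORK``. A condensed sketch of the new end-to-end flow, assembled only from commands appearing in the diff above (assuming ``$WORK`` and the profile are set up as the revised page describes):

```bash
# One-time setup: environment profile and Spack-based dependencies
source $HOME/karolina_warpx.profile
bash $WORK/src/warpx/Tools/machines/karolina-it4i/install_dependencies.sh

# Build the GPU executables
cd $WORK/src/warpx
rm -rf build_gpu
cmake -S . -B build_gpu -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
cmake --build build_gpu -j 48

# Submit a job with the SLURM batch script (one MPI rank per GPU)
sbatch karolina_gpu.sbatch
```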
20 changes: 11 additions & 9 deletions Docs/source/refs.bib
@@ -58,17 +58,19 @@ @article{Turner2013
   year = {2013}
 }

-@article{winske2022hybrid,
-  archivePrefix = {arXiv},
-  author = {D. Winske and Homa Karimabadi and Ari Le and N. Omidi and Vadim Roytershteyn and Adam Stanier},
-  eprint = {2204.01676},
-  journal = {arXiv},
-  primaryClass = {physics.plasm-ph},
-  title = {{Hybrid codes (massless electron fluid)}},
-  year = {2022}
+@Inbook{WinskeInBook2023,
+  author = {Winske, Dan and Karimabadi, Homa and Le, Ari Yitzchak and Omidi, Nojan Nick and Roytershteyn, Vadim and Stanier, Adam John},
+  bookTitle = {Space and Astrophysical Plasma Simulation: Methods, Algorithms, and Applications},
+  doi = {10.1007/978-3-031-11870-8_3},
+  editor = {B{\"u}chner, J{\"o}rg},
+  isbn = {978-3-031-11870-8},
+  pages = {63--91},
+  publisher = {Springer International Publishing},
+  title = {{Hybrid-Kinetic Approach: Massless Electrons}},
+  year = {2023}
 }

-@incollection{NIELSON1976,
+@incollection{Nielson1976,
   author = {Clair W. Nielson and H. Ralph Lewis},
   booktitle = {Controlled Fusion},
   doi = {10.1016/B978-0-12-460816-0.50015-4},
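Because both BibTeX keys change here (``winske2022hybrid`` to ``WinskeInBook2023``, ``NIELSON1976`` to ``Nielson1976``), any citation still using an old key would break the docs build. A hedged one-liner to hunt for stragglers (standard ``grep``, not part of this commit):

```bash
# List any docs still citing the old keys; no output means the rename is complete
grep -rn "winske2022hybrid\|NIELSON1976" Docs/ || echo "no stale keys found"
```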

