Merge branch 'development' of https://github.com/ECP-WarpX/WarpX into rst-files-cleanup
eebasso committed Dec 19, 2023
2 parents 425507e + e460e60 commit 1102c84
Showing 27 changed files with 327 additions and 611 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/cuda.yml
Original file line number Diff line number Diff line change
@@ -115,7 +115,7 @@ jobs:
which nvcc || echo "nvcc not in PATH!"
git clone https://github.com/AMReX-Codes/amrex.git ../amrex
cd ../amrex && git checkout --detach ecaa46d0be4b5c79b8806e48e3469000d8bb7252 && cd -
cd ../amrex && git checkout --detach ef38229189e3213f992a2e89dbe304fb49db9287 && cd -
make COMP=gcc QED=FALSE USE_MPI=TRUE USE_GPU=TRUE USE_OMP=FALSE USE_PSATD=TRUE USE_CCACHE=TRUE -j 2
ccache -s
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -76,7 +76,7 @@ repos:
# Sorts Python imports according to PEP8
# https://www.python.org/dev/peps/pep-0008/#imports
- repo: https://github.com/pycqa/isort
rev: 5.13.0
rev: 5.13.2
hooks:
- id: isort
name: isort (python)
2 changes: 1 addition & 1 deletion Docs/source/index.rst
@@ -107,7 +107,7 @@ Theory
:hidden:

theory/intro
theory/picsar_theory
theory/pic
theory/amr
theory/boundary_conditions
theory/boosted_frame
2 changes: 1 addition & 1 deletion Docs/source/install/dependencies.rst
@@ -18,7 +18,7 @@ Optional dependencies include:
- for on-node accelerated compute *one of either*:

- `OpenMP 3.1+ <https://www.openmp.org>`__: for threaded CPU execution or
- `CUDA Toolkit 11.0+ (11.3+ recommended) <https://developer.nvidia.com/cuda-downloads>`__: for Nvidia GPU support (see `matching host-compilers <https://gist.github.com/ax3l/9489132>`_) or
- `CUDA Toolkit 11.7+ <https://developer.nvidia.com/cuda-downloads>`__: for Nvidia GPU support (see `matching host-compilers <https://gist.github.com/ax3l/9489132>`_) or
- `ROCm 5.2+ (5.5+ recommended) <https://gpuopen.com/learn/amd-lab-notes/amd-lab-notes-rocm-installation-readme/>`__: for AMD GPU support
- `FFTW3 <http://www.fftw.org>`_: for spectral solver (PSATD) support when running on CPU or SYCL

195 changes: 54 additions & 141 deletions Docs/source/install/hpc/karolina.rst
@@ -12,83 +12,60 @@ Introduction
If you are new to this system, **please see the following resources**:

* `IT4I user guide <https://docs.it4i.cz>`__
* Batch system: `PBS <https://docs.it4i.cz/general/job-submission-and-execution/>`__
* Batch system: `SLURM <https://docs.it4i.cz/general/job-submission-and-execution/>`__
* Jupyter service: not provided/documented (yet)
* `Filesystems <https://docs.it4i.cz/karolina/storage/>`__:

* ``$HOME``: per-user directory, use only for inputs, source and scripts; backed up (25GB default quota)
* ``/scratch/``: `production directory <https://docs.it4i.cz/karolina/storage/#scratch-file-system>`__; very fast for parallel jobs (20TB default)
* ``/scratch/``: `production directory <https://docs.it4i.cz/karolina/storage/#scratch-file-system>`__; very fast for parallel jobs (10TB default)
* ``/mnt/proj<N>/<proj>``: per-project work directory, used for long term data storage (20TB default)


.. _building-karolina-preparation:

Preparation
-----------

Use the following commands to download the WarpX source code:
Installation
------------

.. code-block:: bash
We show how to install all of the dependencies from scratch using `Spack <https://spack.io>`__.

git clone https://github.com/ECP-WarpX/WarpX.git $HOME/src/warpx
For size reasons it is not advisable to install WarpX in the ``$HOME`` directory; it should be installed in the "work directory" instead. For this purpose we set an environment variable ``$WORK`` with the path to the "work directory".
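As a minimal sketch (the path below is a placeholder, not a real Karolina project directory; substitute your own ``/mnt/proj<N>/<proj>`` path), this amounts to:

```shell
# Placeholder: on Karolina, point this at your /mnt/proj<N>/<proj> directory.
export WORK=${WORK:-/tmp/karolina-work-demo}
mkdir -p "$WORK/src"
echo "WarpX sources and builds will live under $WORK/src"
```

Adding the ``export WORK=...`` line to your profile keeps it set on every login.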

On Karolina, you can run either on GPU nodes with fast A100 GPUs (recommended) or on CPU nodes.

.. tab-set::

.. tab-item:: A100 GPUs

We use system software modules, add environment hints and further dependencies via the file ``$HOME/karolina_gpu_warpx.profile``.
Create it now:

.. code-block:: bash
cp $HOME/src/warpx/Tools/machines/karolina-it4i/karolina_gpu_warpx.profile.example $HOME/karolina_gpu_warpx.profile
.. dropdown:: Script Details
:color: light
:icon: info
:animate: fade-in-slide-down

.. literalinclude:: ../../../../Tools/machines/karolina-it4i/karolina_gpu_warpx.profile.example
:language: bash

Edit the 2nd line of this script, which sets the ``export proj=""`` variable.
For example, if you are member of the project ``DD-23-83``, then run ``vi $HOME/karolina_gpu_warpx.profile``.
Enter the edit mode by typing ``i`` and edit line 2 to read:

.. code-block:: bash
Profile file
^^^^^^^^^^^^

export proj="DD-23-83"
You can use the pre-prepared ``karolina_warpx.profile`` script below:
copy it to ``${HOME}/karolina_warpx.profile``, edit it as required, and then ``source`` it.

Exit the ``vi`` editor with ``Esc`` and then type ``:wq`` (write & quit).
.. dropdown:: Script Details
:color: light
:icon: info
:animate: fade-in-slide-down

.. important::
.. literalinclude:: ../../../../Tools/machines/karolina-it4i/karolina_warpx.profile.example
:language: bash
:caption: Copy the contents of this file to ``${HOME}/karolina_warpx.profile``.

Now, and as the first step on future logins to Karolina, activate these environment settings:
To have the environment activated on every login, add the following line to ``${HOME}/.bashrc``:

.. code-block:: bash
source $HOME/karolina_gpu_warpx.profile
Finally, since Karolina does not yet provide software modules for some of our dependencies, install them once:

.. code-block:: bash
.. code-block:: bash
bash $HOME/src/warpx/Tools/machines/karolina-it4i/install_gpu_dependencies.sh
source $HOME/sw/karolina/gpu/venvs/warpx-gpu/bin/activate
source $HOME/karolina_warpx.profile
.. dropdown:: Script Details
:color: light
:icon: info
:animate: fade-in-slide-down
To install the ``spack`` environment and Python packages:

.. literalinclude:: ../../../../Tools/machines/karolina-it4i/install_gpu_dependencies.sh
:language: bash
.. code-block:: bash
bash $WORK/src/warpx/Tools/machines/karolina-it4i/install_dependencies.sh
.. tab-item:: CPU Nodes
.. dropdown:: Script Details
:color: light
:icon: info
:animate: fade-in-slide-down

CPU usage documentation is TODO.
.. literalinclude:: ../../../../Tools/machines/karolina-it4i/install_dependencies.sh
:language: bash


.. _building-karolina-compilation:
@@ -98,117 +75,53 @@ Compilation

Use the following :ref:`cmake commands <building-cmake>` to compile the application executable:

.. tab-set::

.. tab-item:: A100 GPUs

.. code-block:: bash
cd $HOME/src/warpx
rm -rf build_gpu
cmake -S . -B build_gpu -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
cmake --build build_gpu -j 12
The WarpX application executables are now in ``$HOME/src/warpx/build_gpu/bin/``.
Additionally, the following commands will install WarpX as a Python module:

.. code-block:: bash
rm -rf build_gpu_py
cmake -S . -B build_gpu_py -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_APP=OFF -DWarpX_PYTHON=ON -DWarpX_DIMS="1;2;RZ;3"
cmake --build build_gpu_py -j 12 --target pip_install
.. tab-item:: CPU Nodes

.. code-block:: bash
.. code-block:: bash
cd $HOME/src/warpx
rm -rf build_cpu
cd $WORK/src/warpx
rm -rf build_gpu
cmake -S . -B build_cpu -DWarpX_COMPUTE=OMP -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
cmake --build build_cpu -j 12
cmake -S . -B build_gpu -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
cmake --build build_gpu -j 48
The WarpX application executables are now in ``$HOME/src/warpx/build_cpu/bin/``.
Additionally, the following commands will install WarpX as a Python module:
The WarpX application executables are now in ``$WORK/src/warpx/build_gpu/bin/``.
Additionally, the following commands will install WarpX as a Python module:

.. code-block:: bash
.. code-block:: bash
cd $HOME/src/warpx
rm -rf build_cpu_py
cd $WORK/src/warpx
rm -rf build_gpu_py
cmake -S . -B build_cpu_py -DWarpX_COMPUTE=OMP -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_APP=OFF -DWarpX_PYTHON=ON -DWarpX_DIMS="1;2;RZ;3"
cmake --build build_cpu_py -j 12 --target pip_install
cmake -S . -B build_gpu_py -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_APP=OFF -DWarpX_PYTHON=ON -DWarpX_DIMS="1;2;RZ;3"
cmake --build build_gpu_py -j 48 --target pip_install
Now, you can :ref:`submit Karolina compute jobs <running-cpp-karolina>` for WarpX :ref:`Python (PICMI) scripts <usage-picmi>` (:ref:`example scripts <usage-examples>`).
Or, you can use the WarpX executables to submit Karolina jobs (:ref:`example inputs <usage-examples>`).
For executables, you can reference their location in your :ref:`job script <running-cpp-karolina>` or copy them to a location in ``/scratch/``.
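A sketch of the second option (all paths here are hypothetical examples; adjust the project and build directories to your setup):

```shell
# Hypothetical run directory; on Karolina this would live under /scratch/.
RUN_DIR=${RUN_DIR:-/tmp/scratch-demo/runs/my_run}
mkdir -p "$RUN_DIR"
# Copy the executable next to the inputs file if it has been built already;
# otherwise just reference its build path from the job script.
BIN="$WORK/src/warpx/build_gpu/bin/warpx.3d"
if [ -f "$BIN" ]; then
    cp "$BIN" "$RUN_DIR/"
else
    echo "not built yet, reference $BIN from the job script instead"
fi
```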


.. _building-karolina-update:

Update WarpX & Dependencies
---------------------------

If you already installed WarpX in the past and want to update it, start by getting the latest source code:

.. code-block:: bash
cd $HOME/src/warpx
# read the output of this command - does it look ok?
git status
# get the latest WarpX source code
git fetch
git pull
# read the output of these commands - do they look ok?
git status
git log # press q to exit
And, if needed,

- :ref:`update the karolina_gpu_warpx.profile or karolina_cpu_warpx.profile files <building-karolina-preparation>`,
- log out and into the system, activate the now updated environment profile as usual,
- :ref:`execute the dependency install scripts <building-karolina-preparation>`.

As a last step, clean the build directory ``rm -rf $HOME/src/warpx/build_*`` and rebuild WarpX.


.. _running-cpp-karolina:

Running
-------

.. tab-set::
The batch script below can be used to run a WarpX simulation on multiple GPU nodes (change ``#SBATCH --nodes=`` accordingly) on the supercomputer Karolina at IT4I.
This partition has up to `72 nodes <https://docs.it4i.cz/karolina/hardware-overview/>`__.
Every node has 8x A100 (40GB) GPUs and 2x AMD EPYC 7763, 64-core, 2.45 GHz processors.

.. tab-item:: A100 (40GB) GPUs
Replace the descriptions between chevrons ``<>`` with relevant values; for instance, ``<proj>`` could be ``DD-23-83``.
Note that we run one MPI rank per GPU.

The batch script below can be used to run a WarpX simulation on multiple GPU nodes (change ``#PBS -l select=`` accordingly) on the supercomputer Karolina at IT4I.
This partition has up to `72 nodes <https://docs.it4i.cz/karolina/hardware-overview/>`__.
Every node has 8x A100 (40GB) GPUs and 2x AMD EPYC 7763, 64-core, 2.45 GHz processors.
.. literalinclude:: ../../../../Tools/machines/karolina-it4i/karolina_gpu.sbatch
:language: bash
:caption: You can copy this file from ``$WORK/src/warpx/Tools/machines/karolina-it4i/karolina_gpu.sbatch``.
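For orientation, a SLURM job script for this setup has roughly the following shape. The partition name, directive set, and paths are assumptions for illustration only; defer to the shipped ``karolina_gpu.sbatch`` file:

```shell
#!/bin/bash
#SBATCH --job-name=warpx
#SBATCH --account=<proj>
#SBATCH --partition=qgpu         # assumed GPU partition name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8      # one MPI rank per GPU
#SBATCH --gpus-per-node=8
#SBATCH --time=01:00:00

source $HOME/karolina_warpx.profile
srun $WORK/src/warpx/build_gpu/bin/warpx.3d inputs_3d
```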

Replace the descriptions between chevrons ``<>`` with relevant values; for instance, ``<proj>`` could be ``DD-23-83``.
Note that we run one MPI rank per GPU.

.. literalinclude:: ../../../../Tools/machines/karolina-it4i/karolina_gpu.qsub
:language: bash
:caption: You can copy this file from ``$HOME/src/warpx/Tools/machines/karolina-it4i/karolina_gpu.qsub``.

To run a simulation, copy the lines above to a file ``karolina_gpu.qsub`` and run

.. code-block:: bash
qsub karolina_gpu.qsub
to submit the job.
To run a simulation, copy the lines above to a file ``karolina_gpu.sbatch`` and run

.. code-block:: bash
.. tab-item:: CPU Nodes
sbatch karolina_gpu.sbatch
CPU usage documentation is TODO.
to submit the job.


.. _post-processing-karolina:
6 changes: 3 additions & 3 deletions Docs/source/latex_theory/Makefile
@@ -7,14 +7,14 @@ all: $(SRC_FILES) clean
pandoc Boosted_frame/Boosted_frame.tex --mathjax --wrap=preserve --bibliography allbibs.bib -o boosted_frame.rst
pandoc input_output/input_output.tex --mathjax --wrap=preserve --bibliography allbibs.bib -o input_output.rst
mv *.rst ../theory
cd ../../../../picsar/Doxygen/pages/latex_theory/; pandoc theory.tex --mathjax --wrap=preserve --bibliography allbibs.bib -o picsar_theory.rst
mv ../../../../picsar/Doxygen/pages/latex_theory/picsar_theory.rst ../theory
cd ../../../../picsar/Doxygen/pages/latex_theory/; pandoc theory.tex --mathjax --wrap=preserve --bibliography allbibs.bib -o pic.rst
mv ../../../../picsar/Doxygen/pages/latex_theory/pic.rst ../theory
cp ../../../../picsar/Doxygen/images/PIC.png ../theory
cp ../../../../picsar/Doxygen/images/Yee_grid.png ../theory

clean:
rm -f ../theory/intro.rst
rm -f ../theory/warpx_theory.rst
rm -f ../theory/picsar_theory.rst
rm -f ../theory/pic.rst
rm -f ../theory/PIC.png
rm -f ../theory/Yee_grid.png
22 changes: 12 additions & 10 deletions Docs/source/refs.bib
@@ -60,20 +60,22 @@ @article{Turner2013
year = {2013}
}

@article{winske2022hybrid,
archivePrefix = {arXiv},
author = {D. Winske and Homa Karimabadi and Ari Le and N. Omidi and Vadim Roytershteyn and Adam Stanier},
eprint = {2204.01676},
journal = {arXiv},
primaryClass = {physics.plasm-ph},
title = {{Hybrid codes (massless electron fluid)}},
year = {2022}
@Inbook{WinskeInBook2023,
author = {Winske, Dan and Karimabadi, Homa and Le, Ari Yitzchak and Omidi, Nojan Nick and Roytershteyn, Vadim and Stanier, Adam John},
bookTitle = {Space and Astrophysical Plasma Simulation: Methods, Algorithms, and Applications},
doi = {10.1007/978-3-031-11870-8_3},
editor = {B{\"u}chner, J{\"o}rg},
isbn = {978-3-031-11870-8},
pages = {63--91},
publisher = {Springer International Publishing},
title = {{Hybrid-Kinetic Approach: Massless Electrons}},
year = {2023}
}

@incollection{NIELSON1976,
@incollection{Nielson1976,
author = {Clair W. Nielson and H. Ralph Lewis},
booktitle = {Controlled Fusion},
doi = {https://doi.org/10.1016/B978-0-12-460816-0.50015-4},
doi = {10.1016/B978-0-12-460816-0.50015-4},
editor = {John Killeen},
issn = {0076-6860},
pages = {367--388},
2 changes: 1 addition & 1 deletion Docs/source/theory/boundary_conditions.rst
@@ -300,4 +300,4 @@ the right boundary is reflecting.
PEC boundary current deposition along the ``x``-axis. The left boundary is absorbing while the right boundary is reflecting.

.. bibliography::
:keyprefix: bc-
:keyprefix: bc-
6 changes: 3 additions & 3 deletions Docs/source/theory/kinetic_fluid_hybrid_model.rst
@@ -19,7 +19,7 @@ of light.

Many authors have described variations of the kinetic ion & fluid electron model,
generally referred to as particle-fluid hybrid or just hybrid-PIC models. The implementation
in WarpX follows the outline from :cite:t:`c-winske2022hybrid`.
in WarpX follows the outline from :cite:t:`kfhm-WinskeInBook2023`.
This description follows mostly from that reference.

Model
@@ -29,7 +29,7 @@ The basic justification for the hybrid model is
applied is dominated by ion kinetics, with ions moving much slower than electrons
and photons. In this scenario two critical approximations can be made, namely,
neutrality (:math:`n_e=n_i`) and the Maxwell-Ampere equation can be simplified by
neglecting the displacement current term :cite:p:`c-NIELSON1976`, giving,
neglecting the displacement current term :cite:p:`kfhm-Nielson1976`, giving,

.. math::
@@ -168,4 +168,4 @@ The isothermal limit is given by :math:`\gamma = 1` while :math:`\gamma = 5/3`
(default) produces the adiabatic limit.

.. bibliography::
:keyprefix: c-
:keyprefix: kfhm-
