The GEMINI model (Geospace Environment Model of Ion-Neutral Interactions) is a three-dimensional ionospheric fluid-electrodynamic model written (mostly) in object-oriented Fortran (2008+ standard). GEMINI is used for various scientific studies including:
- effects of auroras on the terrestrial ionosphere
- natural hazard effects on the space environment
- effects of ionospheric fluid instabilities on radio propagation
The detailed mathematical formulation of GEMINI is included in GEMINI-docs. Subroutine-level documentation describing individual program units is generated from inline source-code comments and rendered as webpages. GEMINI uses generalized orthogonal curvilinear coordinates and has been tested with dipole and Cartesian coordinates.
Generally, the Git main branch has the current development version and is the best place to start, while more thoroughly tested releases are made regularly.
Specific releases used for published results are generally noted in the corresponding journal article.
The GEMINI development team values input from our user community, particularly in the form of error reports. These allow us to ensure that the code functions properly for a wider range of conditions, platforms, and use cases than we would otherwise be able to directly test.
Please open a GitHub Issue if you experience difficulty with GEMINI, providing as much detail as possible so that we can reproduce the error.
Gemini is intended to be OS / CPU arch / platform / compiler agnostic. Operating system support includes Linux, MacOS, and Windows. CPU arch support includes Intel, AMD, ARM, IBM POWER, Cray, and more. GEMINI can run on hardware ranging from a Raspberry Pi to a laptop to a high-performance computing (HPC) cluster. Generally speaking, one can run large 2D or modest-resolution 3D simulations (less than 10 million grid points) on a quad-core workstation, with some patience.
For large 3D simulations (many tens-to-hundreds of millions of grid points), GEMINI is best run in a cluster environment or on a very "large" multi-core workstation (e.g. 16 or more cores). Runtime depends heavily on the grid spacing used, which determines the time step needed to ensure stability. For example, we have found that a 20M grid point simulation takes about 4 hours on 72 Xeon E5 cores, while 200M grid point simulations can take up to a week on 256 cores. It has generally been found that acceptable performance requires > 1 GB of memory per core; moreover, a large amount of storage (hundreds of GB to several TB) is needed to store results from large simulations.
Building Gemini3D and running the self-tests takes about 10 minutes on a laptop. Gemini3D uses several external libraries that are built as a required one-time procedure. Once initially set up, Gemini3D works offline, that is, without internet access.
Requirements:
- C, C++, and Fortran compilers. See Readme_compilers for further details.
- GCC ≥ 9 with OpenMPI or MPICH
- Clang with OpenMPI
- Intel oneAPI
- Cray with GCC or Intel oneAPI backend
- Python and/or MATLAB for scripting front- and back-ends
- CMake: if your CMake is too old, download a newer release or `python -m pip install cmake`
- MPI: any of OpenMPI, IntelMPI, MPICH, MS-MPI. See Readme_mpi if needed. Without MPI, Gemini3D uses one CPU core only, which runs much more slowly than with MPI.
Build the Gemini3D code
```sh
git clone https://github.com/gemini3d/gemini3d.git
cd gemini3d
cmake -B build
cmake --build build --parallel
```
Non-default build options may be used; Gemini3D developer options enable checks such as array bounds checking.
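As a sketch of a non-default configuration using only standard CMake options (the Gemini3D-specific developer switches are listed in Readme_cmake):

```sh
# Debug builds typically enable compiler runtime checks such as array bounds checking.
# Only standard CMake flags are used here; see Readme_cmake for Gemini3D-specific options.
cmake -B build -DCMAKE_BUILD_TYPE=Debug
cmake --build build --parallel
```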
GEMINI has self tests that compare the output from a "known" test problem to a reference output. To verify your GEMINI build, run the self-tests.
```sh
ctest --test-dir build
```
Note: some HPC systems only have internet access on a login node, but cannot run MPI simulations on the login node, and batch sessions, including interactive ones, may be offline. To run CTest in such an environment, download the test data once from the login node:

```sh
ctest --test-dir build --preset download
```

then, from an interactive batch session, run the tests:

```sh
ctest --test-dir build --preset offline
```
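Relatedly, large production runs on HPC systems are usually submitted as batch jobs. The sketch below assumes a SLURM scheduler; the job name, node counts, time limit, and simulation path are all illustrative:

```sh
#!/bin/bash
#SBATCH --job-name=gemini3d        # illustrative job name
#SBATCH --nodes=2                  # adjust to your grid size and site policy
#SBATCH --ntasks-per-node=36
#SBATCH --time=08:00:00

# hypothetical paths: point these at your build directory and simulation directory
mpiexec -np "$SLURM_NTASKS" build/gemini.bin ~/mysim3d/big3d
```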
For various numerical solutions Gemini relies on:
- LAPACK
- ScaLAPACK
- MUMPS
For file input/output we also use:
- HDF5
- h5fortran, funded in part by NASA NNH19ZDA001N-HDEE grant 80NSSC20K0176
- zlib
For basic operations, the GEMINI main program simply needs to be run from the command line with arguments specifying the number of processes to be used for the simulation and the directory where the input files are located and where the output files are to be written:
```sh
mpiexec -np <number of processors> build/gemini.bin <output directory>
```

for example:

```sh
mpiexec -np 4 build/gemini.bin ~/mysim3d/arecibo
```
GEMINI can also be run via the PyGemini scripting frontend, `python -m gemini3d.run -np`, or via the `gemini3d.run` executable. Development of `gemini3d.run` was funded by NASA NNH19ZDA001N-HDEE grant 80NSSC20K0176.
By default, only the current simulation time and a few other messages are shown to keep logs uncluttered. gemini.bin command line options include:

- `-d` | `-debug`: print verbosely; this can produce hundreds of megabytes of text on a long simulation and is intended for advanced debugging.
- `-nooutput`: do not write data to disk. This is for benchmarking file output time; since the simulation output is lost, this option is rarely used.
- `-manual_grid <# x2 images> <# x3 images>`: forces the code to adopt a specific domain decomposition in x2 and x3 using the given integers. If not specified, the code will attempt to find its own x2,x3 decomposition. The number of grid points in x2 and x3 must be evenly divisible by the number of user-specified images in each direction, respectively.
- `-dryrun`: only run the first time step and do not write any files. This can be useful to diagnose issues not seen in unit tests, particularly issues with gridding. It runs in a few seconds (or less than a minute for larger sims), something that can be done before queuing an HPC job (see the example below).
If you prefer to issue the GEMINI run command through a scripting environment, you may do so (via Python) in the following way:

- make a config.nml with desired parameters for an equilibrium sim (a sketch of the file layout is shown after this list).
- run the equilibrium sim: `python -m gemini3d.run /path_to/config_eq.nml /path_to/sim_eq/`
- create a new config.nml for the actual simulation and run: `python -m gemini3d.run /path_to/config.nml /path_to/sim_out/`
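For orientation only, config.nml is a Fortran namelist file. The group and variable names below are an illustrative sketch, not a template to copy; consult Readme_input for the authoritative contents:

```fortran
! illustrative sketch of a config.nml -- see Readme_input for the
! authoritative namelist groups and variable names
&base
  ymd = 2013,2,20      ! simulation start date: year, month, day
  UTsec0 = 18000.0     ! start time, UT seconds
  tdur = 300.0         ! simulation duration, seconds
  dtout = 60.0         ! output cadence, seconds
/
```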
See Readme_input for details on how to prepare input data for GEMINI. Generally speaking, there are Python and MATLAB scripts available in the pygemini and mat_gemini repositories, respectively, that will save data in the appropriate format once generated.
GEMINI uses Python for essential interfaces, plotting, and analysis. MATLAB scripts relevant to GEMINI are in the mat_gemini repo.
Only the essential scripts needed to set up a simple example and plot the results are included in the main GEMINI repository. The Gemini-scripts and Gemini-examples repositories contain scripts used for various published and ongoing analyses.
See Readme_output for a description of how to load the simulation output files and the different variable names, meanings, and units.
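For a quick look at what an output file contains, plain h5py works; this is not the PyGemini API, the file name below is hypothetical, and the variable meanings and units are documented in Readme_output:

```python
# illustrative only: GEMINI writes HDF5 output frames; list the datasets in one file.
# The path is hypothetical; see Readme_output for variable names, meanings, and units.
from pathlib import Path
import h5py

outfile = Path("~/mysim3d/arecibo/20130220_18000.000000.h5").expanduser()  # hypothetical file name
with h5py.File(outfile, "r") as f:
    for name, dataset in f.items():
        print(name, getattr(dataset, "shape", ""))
```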
An auxiliary program, magcalc.f90, can be used to compute magnetic field perturbations from a complete disturbance simulation. See Readme_magcalc for a full description of how this program works.
- Readme_output - information about data included in the output files of a GEMINI simulation
- Readme_input - information on how input files should be prepared and formatted.
- Readme_compilers - details regarding various compilers
- Readme_cmake - CMake build options
- Readme_docs - information about model documentation
- Readme_mpi - help with MPI-related issues
- Readme_magcalc - some documentation for the magnetic field calculation program
- Readme_VEGA - information on how to deploy and run GEMINI on ERAU's VEGA HPC system.
- Readme_prereqs - details on how to install prerequisites on common platforms.
- Generating equilibrium conditions can be a bit tricky with curvilinear grids. A low-res run can be done, but it will not necessarily interpolate properly onto a finer grid due to issues with the way the grids are made with ghost cells, etc. A workaround is to use a slightly narrower (x2) grid in the high-res run (a quarter of a degree seems to work most of the time).
- Magnetic field calculations on an open 2D grid do not appear completely consistent with model prototype results, although they are quite close. This may have been related to sign errors in the FAC calculations; these tests should be retried at some point.
- Occasionally MUMPS will throw an error because it underestimated the amount of memory needed for a solve. If this happens, a workaround is to add the line `mumps_par%ICNTL(14)=50` to the potential solver being used for your simulations. If the problem persists, try changing the number to 100.
- There are potentially some issues with the way the stability condition is evaluated: it is computed before the perpendicular drifts are solved, so it is possible to overrun it when using input data, especially if your target CFL number is > 0.8 or so. Code was added as of 8/20/2018 to throttle how much dt is allowed to change between time steps, and this seems to completely fix the issue; theoretically it could still happen, but this is probably very unlikely.
- Occasionally one will see edge artifacts in either the field-aligned currents or other parameters for non-periodic in x3 solves. This may be related to the divergence calculations needed for the parallel current (under the EFL formulation) and for compression calculations in the multifluid module, but this needs to be investigated further. These artifacts do not appear to affect solutions in the interior of the grid domain and can probably be safely ignored if your region of interest is sufficiently far from the boundary (which is always good practice anyway).
- Occasionally on Windows you may get system error code `0xc0000005` when trying to run Gemini. This typically requires rebooting the Windows computer. If this is annoying, please let us know; it happens rarely enough that we are not sure whether it is a Microsoft MPI bug or something else.