
Commit

Merge branch 'main' into surface_callback_tracking
lamkina authored Jun 3, 2024
2 parents 4b79221 + 8d3d035 commit 261d6ae
Showing 3 changed files with 46 additions and 29 deletions.
16 changes: 14 additions & 2 deletions config/defaults/config.LINUX_INTEL.mk
@@ -7,8 +7,20 @@ PMAKE = make -j 4

# ------- Define the MPI Compilers--------------------------------------
ifdef I_MPI_ROOT # Using Intel MPI
-FF90 = mpiifort
-CC = mpiicc
+# Note that ";" is there to avoid make shell optimization, otherwise the shell command may fail
+ICC_EXISTS := $(shell command -v icc;)
+
+ifdef ICC_EXISTS
+# icc only exists on older Intel versions
+# Assume that we want to use the old compilers
+FF90 = mpiifort
+CC = mpiicc
+else
+# Use the new compilers
+FF90 = mpiifx
+CC = mpiicx
+endif
+
else # Using HPE MPI
FF90 = ifort -lmpi
CC = icc -lmpi
16 changes: 14 additions & 2 deletions config/defaults/config.LINUX_INTEL_AVX2.mk
@@ -7,8 +7,20 @@ PMAKE = make -j 4

# ------- Define the MPI Compilers--------------------------------------
ifdef I_MPI_ROOT # Using Intel MPI
-FF90 = mpiifort
-CC = mpiicc
+# Note that ";" is there to avoid make shell optimization, otherwise the shell command may fail
+ICC_EXISTS := $(shell command -v icc;)
+
+ifdef ICC_EXISTS
+# icc only exists on older Intel versions
+# Assume that we want to use the old compilers
+FF90 = mpiifort
+CC = mpiicc
+else
+# Use the new compilers
+FF90 = mpiifx
+CC = mpiicx
+endif
+
else # Using HPE MPI
FF90 = ifort -lmpi
CC = icc -lmpi
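The change above, applied identically to both Intel configs, keeps the classic compiler wrappers (mpiifort/mpiicc) whenever the old icc binary is still on the PATH and otherwise falls back to the newer oneAPI LLVM-based wrappers (mpiifx/mpiicx). Purely as a hypothetical illustration, not part of the build system, the same decision logic can be written in Python, with shutil.which playing the role of the shell's "command -v":

    # Hypothetical illustration only -- not part of ADflow's build system.
    # Mirrors the Makefile logic above: shutil.which("icc") stands in for
    # the shell's `command -v icc`.
    import shutil

    def pick_intel_mpi_compilers():
        """Return (FF90, CC) the way the updated configs choose them."""
        if shutil.which("icc"):
            # icc only ships with older Intel toolchains, so assume the
            # classic MPI wrappers are wanted.
            return "mpiifort", "mpiicc"
        # Newer oneAPI installs drop icc; use the LLVM-based wrappers.
        return "mpiifx", "mpiicx"

    if __name__ == "__main__":
        ff90, cc = pick_intel_mpi_compilers()
        print(f"FF90={ff90}  CC={cc}")

In the Makefile itself, ifdef is false when ICC_EXISTS is empty, which is what $(shell command -v icc;) yields when icc is not found; as the in-line comment notes, the trailing ";" forces make to run the command through a shell rather than invoking it directly, since "command" is a shell builtin.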
43 changes: 18 additions & 25 deletions doc/introduction.rst
@@ -3,44 +3,37 @@
Introduction
============

-ADflow is a multi-block structured flow solver initially developed in
-the Stanford University under the sponsorship of the Department of
-Energy Advanced Strategic Computing (ASC) Initiative. It solves the
-compressible Euler, laminar Navier-Stokes and Reynolds-Averaged Navier-Stokes
-equations. Although its primary objective in this program was to compute the
-flows in the rotating components of jet engines, ADflow has been developed as
-a completely general solver and it is therefore applicable to a variety of
-other types of problems, including external aerodynamic flows.

-ADflow is a parallel code, suited for running on massively parallel platforms.
-The parallelization is hidden as much as possible from the end user, i.e.
-there is only one grid file, one volume solution file and one surface solution file. The
-only thing the end user needs to do is to specify the number of processors
-he/she wants to use via the mpirun (or equivalent) command.
+ADflow is a multi-block and overset structured flow solver initially developed at Stanford University under the sponsorship of the Department of Energy Advanced Strategic Computing (ASC) Initiative.
+It solves the compressible Euler, laminar Navier-Stokes and Reynolds-Averaged Navier-Stokes (RANS) equations using a second-order finite volume discretization.
+It currently features three solvers: multigrid, approximate Newton-Krylov, and Newton-Krylov; more details are given in :ref:`solvers <adflow_solvers>`.
+Users control the CFD solver via a solver option dictionary.
+More details are in :ref:`options <adflow_options>`.

+Although its primary objective in this program was to compute the flows in the rotating components of jet engines, ADflow has been developed as a completely general solver, and it is therefore applicable to a variety of other types of problems, including external aerodynamic flows.
+ADflow is a parallel code, suited for running on massively parallel platforms.
+The parallelization is hidden as much as possible from the end user, i.e. there is only one grid file, one volume solution file, and one surface solution file.
+The only thing the end user needs to do is to specify the number of processors they want to use via the mpirun (or equivalent) command.

A summary of the various features that can be found in ADflow is given below:

-* Compressible, URANS flow solver with various turbulence modeling options (Spalart-Allmaras, k-w, SST, v2-f)
+* Compressible, URANS flow solver with various turbulence modeling options (Spalart-Allmaras, k-:math:`\omega`, SST, v2-f)

-* Multiblock structured approach with arbitrary connectivity. One-to-one mesh point matching with subfacing
-(C-0 multiblock) and point mismatched abutting meshes at block interfaces (C-1 multiblock) are allowed.
-CGNS I/O (mesh and solution) as well as native, MPI-IO parallel I/O option with back and forth conversion
-utilities.
+* Multi-block structured approach with arbitrary connectivity.
+One-to-one mesh point matching with subfacing (C-0 multiblock) and point mismatched abutting meshes at block interfaces (C-1 multiblock) are allowed.
+CGNS I/O (mesh and solution), as well as a native MPI-IO parallel I/O option with back-and-forth conversion utilities.

* Massively parallel (both CPU and memory scalable) implementation using MPI.

-* ALE Deforming grid implementation using ``pyWarp``
+* ALE Deforming grid implementation using ``IDWarp``

* Interface to conservative and consistent load and displacement transfer for aeroelastic computations.

-* Multigrid, Runge-Kutta solver for the mean flow and DD-ADI solution methodology for the turbulence
-equations.
+* Multigrid, Runge-Kutta solver for the mean flow and DD-ADI solution methodology for the turbulence equations.

* Central difference discretization (second order in space) with various options for artificial dissipation.

* Adaptive wall functions for poor quality meshes.

-* Unsteady time integration using second- or third-order (in time) backwards difference formulae (BDF) or a
-time-spectral approach for time-periodic flows.
+* Unsteady time integration using second- or third-order (in time) backwards difference formulae (BDF) or a time-spectral approach for time-periodic flows.

* Fully parallel, scalable pre-processor responsible for load balancing.
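The revised introduction above says that users control the solver through a Python options dictionary and launch it in parallel via mpirun. A minimal sketch of that workflow might look as follows; the option names, mesh file, and reference values are illustrative assumptions based on the ADflow Python wrapper and the baseclasses AeroProblem class, not values taken from this commit:

    # Hypothetical minimal run script; typically launched in parallel, e.g.
    #   mpirun -np 4 python run_wing.py
    from adflow import ADFLOW
    from baseclasses import AeroProblem

    # The solver is configured entirely through an options dictionary
    # (the introduction above points to the options page for the full list).
    options = {
        "gridFile": "wing.cgns",      # assumed multi-block CGNS mesh
        "equationType": "RANS",
        "useANKSolver": True,         # approximate Newton-Krylov start-up
        "useNKSolver": True,          # Newton-Krylov for deep convergence
        "monitorvariables": ["resrho", "cl", "cd"],
    }
    CFDSolver = ADFLOW(options=options)

    # Flow conditions and reference quantities for this (made-up) case.
    ap = AeroProblem(name="wing", mach=0.8, altitude=10000.0, alpha=1.5,
                     areaRef=45.5, chordRef=3.25, evalFuncs=["cl", "cd"])

    CFDSolver(ap)                     # solve the flow
    funcs = {}
    CFDSolver.evalFunctions(ap, funcs)
    print(funcs)                      # keyed as "<name>_<func>", e.g. "wing_cl"

Because the parallelization is hidden from the user, the same script runs unchanged on one process or many; only the mpirun invocation changes.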
