Commit

Merge branch 'main' into fsbc-SPfix

gawng authored Sep 16, 2024
2 parents 6525adb + 20ff867 commit 5a52fd0
Showing 65 changed files with 6,185 additions and 3,331 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -21,6 +21,7 @@ doc/tmp*.rst
*.bak
src/build/adflow_project.dep
src/build/libadflow-f2pywrappers2.f90
src/build/libadflow-f2pywrappers.f
src/build/libadflowmodule.c
src/build/importTest.py

2 changes: 1 addition & 1 deletion adflow/__init__.py
@@ -1,4 +1,4 @@
__version__ = "2.9.1"
__version__ = "2.10.0"

from mpi4py import MPI

Expand Down
239 changes: 207 additions & 32 deletions adflow/mphys/mphys_adflow.py

Large diffs are not rendered by default.

333 changes: 258 additions & 75 deletions adflow/pyADflow.py

Large diffs are not rendered by default.

16 changes: 14 additions & 2 deletions config/defaults/config.LINUX_INTEL.mk
@@ -7,8 +7,20 @@ PMAKE = make -j 4

# ------- Define the MPI Compilers--------------------------------------
ifdef I_MPI_ROOT # Using Intel MPI
FF90 = mpiifort
CC = mpiicc
# Note that ";" is there to avoid make shell optimization, otherwise the shell command may fail
ICC_EXISTS := $(shell command -v icc;)

ifdef ICC_EXISTS
# icc only exists on older Intel versions
# Assume that we want to use the old compilers
FF90 = mpiifort
CC = mpiicc
else
# Use the new compilers
FF90 = mpiifx
CC = mpiicx
endif

else # Using HPE MPI
FF90 = ifort -lmpi
CC = icc -lmpi
16 changes: 14 additions & 2 deletions config/defaults/config.LINUX_INTEL_AVX2.mk
@@ -7,8 +7,20 @@ PMAKE = make -j 4

# ------- Define the MPI Compilers--------------------------------------
ifdef I_MPI_ROOT # Using Intel MPI
FF90 = mpiifort
CC = mpiicc
# Note that ";" is there to avoid make shell optimization, otherwise the shell command may fail
ICC_EXISTS := $(shell command -v icc;)

ifdef ICC_EXISTS
# icc only exists on older Intel versions
# Assume that we want to use the old compilers
FF90 = mpiifort
CC = mpiicc
else
# Use the new compilers
FF90 = mpiifx
CC = mpiicx
endif

else # Using HPE MPI
FF90 = ifort -lmpi
CC = icc -lmpi
15 changes: 15 additions & 0 deletions doc/costFunctions.yaml
@@ -104,6 +104,21 @@ cofzz:
See ``cofxx`` description for more details.
Units: ``Meter``
colx:
desc: >
Center of lift force, ``x`` coordinate.
Units: ``Meter``
coly:
desc: >
Center of lift force, ``y`` coordinate.
Units: ``Meter``
colz:
desc: >
Center of lift force, ``z`` coordinate.
Units: ``Meter``
fx:
desc: >
Force from surface stresses (this includes shear/viscous, normal/pressure, and momentum/unsteady stresses) integrated in the global :math:`x` direction.
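
Assuming the new center-of-lift functions are requested like the existing cost functions, through the ``evalFuncs`` list of an AeroProblem, a minimal sketch (the AeroProblem/evalFunctions calls are the usual ADflow Python workflow, not part of this diff; flow conditions are placeholders):

ap = AeroProblem(name="wing", mach=0.8, altitude=10000.0,
                 areaRef=45.5, chordRef=3.25,
                 evalFuncs=["fx", "colx", "coly", "colz"])
CFDSolver(ap)                       # solve the flow
funcs = {}
CFDSolver.evalFunctions(ap, funcs)  # results keyed by AeroProblem name, e.g. funcs["wing_colx"]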
43 changes: 18 additions & 25 deletions doc/introduction.rst
@@ -3,44 +3,37 @@
Introduction
============

ADflow is a multi-block structured flow solver initially developed in
the Stanford University under the sponsorship of the Department of
Energy Advanced Strategic Computing (ASC) Initiative. It solves the
compressible Euler, laminar Navier-Stokes and Reynolds-Averaged Navier-Stokes
equations. Although its primary objective in this program was to compute the
flows in the rotating components of jet engines, ADflow has been developed as
a completely general solver and it is therefore applicable to a variety of
other types of problems, including external aerodynamic flows.

ADflow is a parallel code, suited for running on massively parallel platforms.
The parallelization is hidden as much as possible from the end user, i.e.
there is only one grid file, one volume solution file and one surface solution file. The
only thing the end user needs to do is to specify the number of processors
he/she wants to use via the mpirun (or equivalent) command.
ADflow is a multi-block and overset structured flow solver initially developed at Stanford University under the sponsorship of the Department of Energy Advanced Strategic Computing (ASC) Initiative.
It solves the compressible Euler, laminar Navier-Stokes and Reynolds-Averaged Navier-Stokes (RANS) equations using a second-order finite volume discretization.
It currently features three solvers: multigrid, approximate Newton-Krylov, and Newton-Krylov, more details of which are under :ref:`solvers <adflow_solvers>`.
Users control the CFD solver via a solver option dictionary.
More details are in :ref:`options <adflow_options>`.

Although its primary objective in this program was to compute the flows in the rotating components of jet engines, ADflow has been developed as a completely general solver, and it is therefore applicable to a variety of other types of problems, including external aerodynamic flows.
ADflow is a parallel code, suited for running on massively parallel platforms.
The parallelization is hidden as much as possible from the end user, i.e. there is only one grid file, one volume solution file, and one surface solution file.
The only thing the end user needs to do is to specify the number of processors they want to use via the mpirun (or equivalent) command.
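
For illustration, a minimal sketch of a typical run script driven by such an options dictionary (the grid file, flow conditions, and option values below are placeholders, not ADflow defaults or part of this commit):

from mpi4py import MPI
from adflow import ADFLOW
from baseclasses import AeroProblem

options = {
    "gridFile": "wing.cgns",                            # placeholder CGNS mesh
    "monitorVariables": ["cpu", "resrho", "cl", "cd"],
    "useNKSolver": True,                                # hand off to the Newton-Krylov solver
    "L2Convergence": 1e-10,
}

CFDSolver = ADFLOW(options=options, comm=MPI.COMM_WORLD)
ap = AeroProblem(name="wing", mach=0.8, altitude=10000.0, alpha=1.5,
                 areaRef=45.5, chordRef=3.25, evalFuncs=["cl", "cd"])

CFDSolver(ap)                       # solve the flow for this AeroProblem
funcs = {}
CFDSolver.evalFunctions(ap, funcs)  # e.g. funcs["wing_cl"], funcs["wing_cd"]

The script itself is then launched in parallel with, for example, ``mpirun -np 4 python run_wing.py``.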

A summary of the various features that can be found in ADflow is given below:

* Compressible, URANS flow solver with various turbulence modeling options (Spalart-Allmaras, k-w, SST, v2-f)
* Compressible, URANS flow solver with various turbulence modeling options (Spalart-Allmaras, k-:math:`\omega`, SST, v2-f)

* Multiblock structured approach with arbitrary connectivity. One-to-one mesh point matching with subfacing
(C-0 multiblock) and point mismatched abutting meshes at block interfaces (C-1 multiblock) are allowed.
CGNS I/O (mesh and solution) as well as native, MPI-IO parallel I/O option with back and forth conversion
utilities.
* Multi-block structured approach with arbitrary connectivity.
One-to-one mesh point matching with subfacing (C-0 multiblock) and point mismatched abutting meshes at block interfaces (C-1 multiblock) are allowed.
CGNS I/O (mesh and solution) as well as native, MPI-IO parallel I/O option with back and forth conversion utilities.

* Massively parallel (both CPU and memory scalable) implementation using MPI.

* ALE Deforming grid implementation using ``pyWarp``
* ALE Deforming grid implementation using ``IDWarp``

* Interface to conservative and consistent load and displacement transfer for aeroelastic computations.

* Multigrid, Runge-Kutta solver for the mean flow and DD-ADI solution methodology for the turbulence
equations.
* Multigrid, Runge-Kutta solver for the mean flow and DD-ADI solution methodology for the turbulence equations.

* Central difference discretization (second order in space) with various options for artificial dissipation.

* Adaptive wall functions for poor quality meshes.

* Unsteady time integration using second- or third-order (in time) backwards difference formulae (BDF) or a
time-spectral approach for time-periodic flows.
* Unsteady time integration using second- or third-order (in time) backwards difference formulae (BDF) or a time-spectral approach for time-periodic flows.

* Fully parallel, scalable pre-processor responsible for load balancing.
72 changes: 70 additions & 2 deletions doc/options.yaml
@@ -110,7 +110,7 @@ isosurface:
desc: >
Dictionary specifying the type and values to be used for isosurfaces.
Any of the ``volumeVariables`` may be used.
An example dictionary is : ``{"Vx":-0.001, "shock":1.0}``.
An example dictionary is : ``{"vx":-0.001, "shock":1.0}``.
This will place an isosurface at (essentially) 0 x-velocity and an isosurface at the shock sensor value of 1 (used to visualize the shock region).
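
A sketch of the corresponding entries in the Python options dictionary (key casing follows this documentation and assumes ADflow's case-insensitive option handling; the values are the example ones above):

options = {
    # ...
    "isoVariables": ["vx", "shock"],
    "isosurface": {"vx": -0.001, "shock": 1.0},
}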
isoVariables:
@@ -255,6 +255,12 @@ useApproxWallDistance:
After the geometry deforms (such as during an optimization) the spatial search algorithm is not run, but the distance between the (new) parametric location and the (new) grid cell center is computed and taken as the wall distance.
This is substantially faster and permits efficient wall-distance updates for use in aerostructural analysis.
updateWallAssociations:
desc: >
Flag to update wall associations, even if the approximate wall distance routines are used.
By default, we don't update the associations because the update itself cannot be differentiated.
However, for large mesh changes, users might still want to update the associations for more accurate wall distance values.
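
A sketch of how these two flags might be combined for a case with large mesh deformations (the surrounding dictionary is illustrative):

options = {
    # ...
    "useApproxWallDistance": True,   # fast parametric wall-distance updates
    "updateWallAssociations": True,  # also refresh the associations after large mesh changes
}

Note that, per the description above, the association update itself is not differentiated.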
eulerWallTreatment:
desc: >
Specifies how wall boundary conditions are implemented for inviscid simulations.
@@ -712,6 +718,14 @@ oversetPriority:
A lower factor will encourage the usage of that block mesh.
This option may be required to get the flooding algorithm working properly.
recomputeOverlapMatrix:
desc: >
Flag to determine if the domain overlap matrix is re-computed when a full overset update is done.
This overlap matrix determines the connections between domains for the donor search algorithm; for the code to search a domain for potential donors, that domain must overlap with the current domain.
This matrix is re-computed when a full overset update is done because the domain overlaps can change.
If the overlap matrix is outdated, the code will fail to find donors in cases where there are many potential donors available from overlapping domains that are not represented in the matrix.
If the user knows that the overlap matrix will not change, and they want to improve the efficiency of the full overset update, they can set this option to False.
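
For example, if the domain overlaps are known to stay fixed throughout an analysis, the recomputation can be skipped (illustrative sketch):

options = {
    # ...
    "recomputeOverlapMatrix": False,  # only safe when the domain overlaps cannot change
}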
oversetDebugPrint:
desc: >
Flag to enable or disable the debug printout from the overset algorithm when a hole cutting process fails.
@@ -889,12 +903,20 @@ NKASMOverlap:
More overlap levels result in a stronger preconditioner, at the expense of more expensive iterations and more memory.
Typical values range from 1 for easy cases and up to 2 or 3 for more difficult cases.
NKASMOverlapCoarse:
desc: >
Same as :py:data:`NKASMOverlap` but for the coarse levels when using ``multigrid`` for :py:data:`NKGlobalPreconditioner`.
NKPCILUFill:
desc: >
The number of levels of fill to use on the local Incomplete LU (ILU) factorization in the NK solver.
Typical values are 1 for easy cases and up to 3 for more difficult cases.
More levels of fill result in a stronger preconditioner which will result in fewer (linear) iterations, but individual iterations will be more costly and consume more memory.
NKPCILUFillCoarse:
desc: >
Same as :py:data:`NKPCILUFill` but for the coarse levels when using ``multigrid`` for :py:data:`NKGlobalPreconditioner`.
NKJacobianLag:
desc: >
The option determines the frequency at which the NK preconditioner is reformed.
@@ -914,6 +936,10 @@ NKInnerPreconIts:
More iterations may help converge the linear system faster.
This should be left at 1 unless a very difficult problem is encountered.
NKInnerPreconItsCoarse:
desc: >
Same as :py:data:`NKInnerPreconIts` but for the coarse levels when using ``multigrid`` for :py:data:`NKGlobalPreconditioner`.
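
A sketch of a multigrid-preconditioned NK configuration using the new coarse-level options (the numeric values are illustrative, not defaults):

options = {
    # ...
    "NKGlobalPreconditioner": "multigrid",
    "NKASMOverlap": 2,
    "NKASMOverlapCoarse": 1,
    "NKPCILUFill": 2,
    "NKPCILUFillCoarse": 1,
    "NKInnerPreconIts": 1,
    "NKInnerPreconItsCoarse": 1,
}

The ANK solver has the analogous ``ANK...Coarse`` options described below.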
NKOuterPreconIts:
desc: >
Number of global preconditioning iterations for the NK solver adjoint solution.
@@ -1011,10 +1037,18 @@ ANKASMOverlap:
desc: >
Similar to the :py:data:`NKASMOverlap` option but for the ANK solver.
ANKASMOverlapCoarse:
desc: >
Same as :py:data:`ANKASMOverlap` but for the coarse levels when using ``multigrid`` for :py:data:`ANKGlobalPreconditioner`.
ANKPCILUFill:
desc: >
Similar to the :py:data:`NKPCILUFill` option but for the ANK solver.
ANKPCILUFillCoarse:
desc: >
Same as :py:data:`ANKPCILUFill` but for the coarse levels when using ``multigrid`` for :py:data:`ANKGlobalPreconditioner`.
ANKJacobianLag:
desc: >
Number of nonlinear iterations between every preconditioner update.
Expand All @@ -1026,6 +1060,10 @@ ANKInnerPreconIts:
desc: >
Similar to the :py:data:`NKInnerPreconIts` option but for the ANK solver.
ANKInnerPreconItsCoarse:
desc: >
Same as :py:data:`ANKInnerPreconIts` but for the coarse levels when using ``multigrid`` for :py:data:`ANKGlobalPreconditioner`.
ANKOuterPreconIts:
desc: >
Similar to the :py:data:`NKOuterPreconIts` option but for the ANK solver.
@@ -1220,6 +1258,11 @@ numberSolutions:
desc: >
Flag to set whether to attach the numbering of the AeroProblem to the grid solution file.
writeSolutionDigits:
desc: >
Number of digits in the solution output filenames.
A value of 4 will give, for example, 0023, while a value of 3 will give 023.
printIterations:
desc: >
Flag to set whether to print out the monitoring values at each iteration.
@@ -1248,6 +1291,10 @@ printNegativeVolumes:
desc: >
Flag to print the block indices, cell center coordinates, and volume for each negative volume cell in the mesh.
printBadlySkewedCells:
desc: >
Flag to print the block indices, cell center coordinates, and skewness for each cell whose skewness is above the value defined in ``meshMaxSkewness``. Only used when ``useSkewnessCheck`` is active.
monitorVariables:
desc: >
List of the variables whose convergence should be monitored.
@@ -1441,18 +1488,30 @@ ILUFill:
Typical values are 1 for easy cases and up to 3 for more difficult cases.
More levels of fill result in a stronger preconditioner which will result in fewer (linear) iterations, but individual iterations will be more costly and consume more memory.
ILUFillCoarse:
desc: >
Same as :py:data:`ILUFill` but for the coarse levels when using ``multigrid`` for :py:data:`globalPreconditioner`.
ASMOverlap:
desc: >
The number of overlap levels in the additive Schwarz preconditioner for the adjoint solution.
More overlap levels result in a stronger preconditioner, at the expense of more expensive iterations and more memory.
Typical values range from 1 for easy cases and up to 2 or 3 for more difficult cases.
ASMOverlapCoarse:
desc: >
Same as :py:data:`ASMOverlap` but for the coarse levels when using ``multigrid`` for :py:data:`globalPreconditioner`.
innerPreconIts:
desc: >
Number of local preconditioning iterations for the adjoint solution.
Increasing this number may help with difficult problems.
However, each iteration will take more time.
innerPreconItsCoarse:
desc: >
Same as :py:data:`innerPreconIts` but for the coarse levels when using ``multigrid`` for :py:data:`globalPreconditioner`.
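
Similarly, a sketch for the adjoint solver with the coarse-level preconditioner options added here (values illustrative):

options = {
    # ...
    "globalPreconditioner": "multigrid",
    "ASMOverlap": 2,
    "ASMOverlapCoarse": 1,
    "ILUFill": 2,
    "ILUFillCoarse": 1,
    "innerPreconIts": 1,
    "innerPreconItsCoarse": 1,
}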
outerPreconIts:
desc: >
Number of global preconditioning iterations for the adjoint solution.
@@ -1545,4 +1604,13 @@ cavSensorSharpness:
cavExponent:
desc: >
The exponent for the numerator term (- Cp - cavitationnumber) of the cavitation sensor.
meshMaxSkewness:
desc: >
Adflow throws an error and fails if the `skewness` of the mesh is above this value. `Skewness` is defined as described `here <https://www.simscale.com/docs/simulation-setup/meshing/mesh-quality/#skewness>`__. Only used when ``useSkewnessCheck`` is active.
useSkewnessCheck:
desc: >
When set to true, ADflow computes the `skewness` of each cell and throws an error if it is above ``meshMaxSkewness``. See also ``printBadlySkewedCells``.
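
A sketch enabling the new skewness check (the threshold value is illustrative, not a recommended default):

options = {
    # ...
    "useSkewnessCheck": True,
    "meshMaxSkewness": 0.9,          # illustrative threshold
    "printBadlySkewedCells": True,   # list the offending cells when the check trips
}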
12 changes: 10 additions & 2 deletions src/NKSolver/NKSolvers.F90
@@ -33,8 +33,11 @@ module NKSolver
integer(kind=intType) :: NK_jacobianLag
integer(kind=intType) :: NK_subspace
integer(kind=intType) :: NK_asmOverlap
integer(kind=intType) :: NK_asmOverlapCoarse
integer(kind=intType) :: NK_iluFill
integer(kind=intType) :: NK_iluFillCoarse
integer(kind=intType) :: NK_innerPreConIts
integer(kind=intType) :: NK_innerPreConItsCoarse
integer(kind=intType) :: NK_outerPreConIts
integer(kind=intType) :: NK_AMGLevels
integer(kind=intType) :: NK_AMGNSmooth
@@ -421,7 +424,8 @@ subroutine FormJacobianNK
else
call setupStandardMultigrid(NK_KSP, kspObjectType, NK_subSpace, &
preConSide, NK_asmOverlap, NK_outerPreConIts, &
localOrdering, NK_iluFill, NK_innerPreConIts)
localOrdering, NK_iluFill, NK_innerPreConIts, &
NK_asmOverlapCoarse, NK_iluFillCoarse, NK_innerPreConItsCoarse)
end if

! Don't do iterative refinement
@@ -1660,8 +1664,11 @@ module ANKSolver
integer(kind=intType) :: ANK_subSpace
integer(kind=intType) :: ANK_maxIter
integer(kind=intType) :: ANK_asmOverlap
integer(kind=intType) :: ANK_asmOverlapCoarse
integer(kind=intType) :: ANK_iluFill
integer(kind=intType) :: ANK_iluFillCoarse
integer(kind=intType) :: ANK_innerPreConIts
integer(kind=intType) :: ANK_innerPreConItsCoarse
integer(kind=intType) :: ANK_outerPreConIts
integer(kind=intType) :: ANK_AMGLevels
integer(kind=intType) :: ANK_AMGNSmooth
@@ -2021,7 +2028,8 @@ subroutine FormJacobianANK
else if (ANK_precondType == 'mg') then
call setupStandardMultigrid(ANK_KSP, kspObjectType, subSpace, &
preConSide, ANK_asmOverlap, outerPreConIts, &
localOrdering, ANK_iluFill, ANK_innerPreConIts)
localOrdering, ANK_iluFill, ANK_innerPreConIts, &
ANK_asmOverlapCoarse, ANK_iluFillCoarse, ANK_innerPreConItsCoarse)
end if

! Don't do iterative refinement
8 changes: 6 additions & 2 deletions src/adjoint/Makefile_tapenade
@@ -56,6 +56,7 @@ ALL_RES_FILES = $(SRC)/adjoint/adjointExtra.F90\
$(SRC)/solver/surfaceIntegrations.F90\
$(SRC)/solver/zipperIntegrations.F90\
$(SRC)/solver/ALEUtils.F90\
$(SRC)/solver/actuatorRegion.F90\
$(SRC)/initFlow/initializeFlow.F90\
$(SRC)/turbulence/turbUtils.F90\
$(SRC)/turbulence/turbBCRoutines.F90\
@@ -103,6 +104,9 @@ bcData%BCDataSubsonicOutflow(bcVarArray, Pref) > \
bcData%BCDataSupersonicInflow(bcVarArray, muRef, rhoRef, Pref, uRef, wInf, pInfCorr) > \
(bcData%rho, bcData%velx, bcData%vely, bcData%velz, bcData%ps, bcData%turbInlet, muRef, rhoRef, Pref, uRef, wInf, pInfCorr) \
\
actuatorRegion%computeActuatorRegionVolume(vol, actuatorRegions%volLocal) > \
(vol, actuatorRegions%volLocal)\
\
adjointExtra%xhalo_block(x) > \
(x) \
\
@@ -185,8 +189,8 @@ sa%saViscous(w, vol, si, sj, sk, rlv, scratch) > \
sa%saResScale(scratch, dw) > \
(dw) \
\
residuals%sourceTerms_block(w, pref, uref, plocal, dw, actuatorRegions%force, actuatorRegions%heat) > \
(w, pref, uref, plocal, dw, actuatorRegions%force, actuatorRegions%heat) \
residuals%sourceTerms_block(w, pref, uref, plocal, dw, vol, actuatorRegions%force, actuatorRegions%heat, actuatorRegions%volume) > \
(w, pref, uref, plocal, dw, vol, actuatorRegions%force, actuatorRegions%heat, actuatorRegions%volume) \
\
residuals%initres_block(dw, fw, flowDoms%w, flowDoms%vol, dscalar, dvector) > \
(dw, fw, flowDoms%w, flowDoms%vol, dscalar, dvector) \
3 changes: 2 additions & 1 deletion src/adjoint/adjointAPI.F90
@@ -918,7 +918,8 @@ subroutine setupPETScKsp
else if (PreCondType == 'mg') then

call setupStandardMultigrid(adjointKSP, ADjointSolverType, adjRestart, adjointPCSide, &
overlap, outerPreconIts, matrixOrdering, fillLevel, innerPreConIts)
overlap, outerPreconIts, matrixOrdering, fillLevel, innerPreConIts, &
overlapCoarse, fillLevelCoarse, innerPreConItsCoarse)
end if

! Setup monitor if necessary: