diff --git a/docs/ODEs.md b/docs/ODEs.md deleted file mode 100644 index 995fd87fa..000000000 --- a/docs/ODEs.md +++ /dev/null @@ -1,57 +0,0 @@ -Let us consider the transient problem $A(u,\dot{u}) = 0$. We can consider that this is a ODE, since the spatial part is not important here. The most standard situation is the one in which -$$ -A(u,d_t{u}) = Md_t{u} + K u, -$$ -i.e., $A$ is linear with respect to $\dot{u}$, but we can consider the more general case here. - -# $θ$-method - -Our motivation here is to split the time domain into time steps, and for each time step $[t^n,t^{n+1}]$, create an approximation of the map $u^n \mapsto u^{n+1}$ such that $A(R(u^{n+1},u^n),\Delta_t(u^{n+1},u^n)) = 0$. The operator $\Delta_t$ is an approximation of the time derivative and the operator $R$ is some time approximation of $u$ in $[t^n,t^{n+1}]$. - -Let us consider the $θ$-method to fix ideas. In this case, we consider at each time step the initial value $u^n$, we approximate the time derivative using finite differences as $\Delta_t(u^n,u^{n+1}) = \frac{u^{n+1}-u^{n}}{\delta t}$. For Backward-Euler, we compute $R(u^n,u^{n+1}) = u^{n+1}$, for Forward Euler $R(u^n,u^{n+1}) = u^{n}$, and for Crank-Nicolson $R(u^n,u^{n+1}) = \frac{u^{n+1}+u^{n}}{2}$ (or more precisely, $R(u^n,u^{n+1}) = u^{n+1}(t-t^n) + u^n(t^{n+1}-t)$). - -Now, we want to approximate the operator using any of these methods. We can readily check that we can write the Newton linearisation of any of these problems as: -$$ -[\frac{∂A}{∂u}\frac{\partial R}{\partial u^{n+1}} + \frac{∂A}{∂\dot{u}}\frac{\partial \Delta}{\partial u^{n+1}} ] \delta u^{n+1} = - A(R(u^{n+1},u^n),\Delta_t(u^{n+1},u^n)). -$$ -We can denote $J_0 \doteq \frac{∂A}{∂u}$ and $J_1 \doteq \frac{∂A}{∂\dot{u}}$. - -E.g., for BE we have -$$ -\frac{\partial R}{\partial u^{n+1}} = 1, \qquad \frac{\partial \Delta}{\partial u^{n+1}} = 1/δt, -$$ -for FE we have -$$ -\frac{\partial R}{\partial u^{n+1}} = 0, \qquad \frac{\partial \Delta}{\partial u^{n+1}} = 1/δt, -$$ -and for CN we have -$$ -\frac{\partial R}{\partial u^{n+1}} = 1/2, \qquad \frac{\partial \Delta}{\partial u^{n+1}} = 1/δt. -$$ -Analogously, we can define $\gamma_0 \doteq \frac{\partial R}{\partial u^{n+1}}$ and $\gamma_1 \doteq \frac{\partial \Delta}{\partial u^{n+1}}$. Note that the standard Jacobian is pre-multiplied by $\gamma_0$. Thus, it makes no sense to compute $J_0$ when $\gamma_0 = 0$. The same happens for the case of $\gamma_1 = 0$ and $J_1$. But this is not the case for ODEs. $J_1$ is the mass matrix. - -We have decided to write the problem in terms of $u^{n+1}$, but we could do something different. E.g., for the $\theta$-method, we can write the problem in terms of $u^{n+\theta}$ for $\theta > 0$. In this case, we have to define the $R$ and $Δ_t$ operators in terms of $u^{n+\theta}$ and $u^n$ and perform exactly as above. The only difference is a final update step. - -From the discussion above, it seems quite clear that FE should work with the current machinery, since we would be computing $M + 0*L$. That is the reason why I say that the current machinery should work for FE but we need to avoid computing $L$ by using a if statement in the code, and only compute it for $\gamma_0 > 0$. - -When FE works, we can start the discussion about the general RK solver. - -# RK methods - -In DIRK methods, we make the following assumption: -$$ -A(t,u,d_t(u)) \doteq M d_t(u) + K(t,u) = 0. -$$ -Given a Butcher tableau, the method reads as follows. 
Given $u^{n}$, compute for $s = 1,\ldots,n$, -$$ -M k_s = -K(t_n + c_s \delta t, u^{n} + \delta t \sum_{i=1}^{s} a_{s,i} k_i), -$$ -or -$$ -M \delta k_s = - M k_s -K(t_n + c_s \delta t, u^{n} + \delta t \sum_{i=1}^{s} a_{s,i} k_i, k_s). -$$ -Then, $u^{n+1} = u^n + \delta t \sum_{i=1}^{n} b_i k_i$. Note that we are only considering DIRK methods, since we only allow $i = 1, \ldots, s$ at each stage computation. In this case, the jacobians to be computed are as above, $J_0$ and $J_1$. However, $J_1 = M$ and $J_0 = \frac{\partial K}{\partial u}$. In any case, we could just define $A$ as above and compute its Jacobians as above. - -At each stage, we have $\gamma_1^s = 1$ and $\gamma_0^s = \delta t a_{s,s}$. We can use exactly the same machinery as above with these coefficients. - -In the case of explicit methods, $a_{s,s} = 0$ and we don't need to compute $J_0$, as above for FE. diff --git a/docs/src/ODEs.md b/docs/src/ODEs.md new file mode 100644 index 000000000..501611e39 --- /dev/null +++ b/docs/src/ODEs.md @@ -0,0 +1,452 @@ +# A general framework for the numerical approximation of ODEs +We consider an initial value problem written in the form +```math +\left\{\begin{array}{rcll}\boldsymbol{r}(t, \partial_{t}^{0} \boldsymbol{u}, \ldots, \partial_{t}^{n} \boldsymbol{u}) &=& \boldsymbol{0}_{d}, \\ \partial_{t}^{k} \boldsymbol{u}(t_{0}) &=& \boldsymbol{u}_{0}^{k} & 0 \leq k \leq n-1,\end{array}\right. +``` +where +* $\boldsymbol{u}: \mathbb{R} \to \mathbb{R}^{d}$ is the unknown of the problem, +* $n \in \mathbb{N}$ is the order of the ODE, +* $t_{0} \in \mathbb{R}$ is the initial time and $\\{\{\boldsymbol{u}\_{0}^{k}\}\\}\_{0 \leq k \leq n-1} \in (\mathbb{R}^{d})^{n-1}$ are the initial conditions, and +* $\boldsymbol{r}: \mathbb{R} \times (\mathbb{R}^{d})^{n} \to \mathbb{R}^{d}$ is the residual. + +> We illustrate these notations on the semi-discretisation of the heat equation. It is a first-order ODE so we have $n = 1$, and $d$ is the number of degrees of freedom. The residual and initial condition have the form $$\boldsymbol{r}(t, \boldsymbol{u}, \dot{\boldsymbol{u}}) \doteq \boldsymbol{M} \dot{\boldsymbol{u}} + \boldsymbol{K}(t) \boldsymbol{u} - \boldsymbol{f}(t), \qquad \boldsymbol{u}(t_{0}),$$ where $\boldsymbol{M} \in \mathbb{R}^{d \times d}$ is the mass matrix, $\boldsymbol{K}: \mathbb{R} \to \mathbb{R}^{d \times d}$ is the stiffness matrix, and $\boldsymbol{f}: \mathbb{R} \to \mathbb{R}^{d}$ is the forcing term. + +Suppose that we are willing to approximate $\boldsymbol{u}$ at a time $t_{F} > t_{0}$. A numerical scheme splits the time interval $[t_{0}, t_{F}]$ into smaller intervals $[t_{n}, t_{n+1}]$ (that do not have to be of equal length) and propagates the information at time $t_{n}$ to time $t_{n+1}$. More formally, we consider a general framework consisting of a starting, an update, and a finishing map defined as follows +* The **starting map** $\mathcal{I}: (\mathbb{R}^{d})^{n} \to (\mathbb{R}^{d})^{s}$ converts the initial conditions into $s$ state vectors, where $s \geq n$. +* The **marching map** $\mathcal{U}: \mathbb{R} \times (\mathbb{R}^{d})^{s} \to (\mathbb{R}^{d})^{s}$ updates the state vectors from time $t_{n}$ to time $t_{n+1}$. +* The **finishing map** $\mathcal{F}: \mathbb{R} \times (\mathbb{R}^{d})^{s} \to \mathbb{R}^{d}$ converts the state vectors into the evaluation of $\boldsymbol{u}$ at the current time. + +> In the simplest case, the time step $h = h_{n} = t_{n+1} - t_{n}$ is prescribed and constant across all iterations. 
The state vectors are simply the initial conditions, i.e. $s = n$ and $\mathcal{I} = \mathrm{id}$, and assuming that the initial conditions are given by increasing order of time derivative, $\mathcal{F}$ returns its first input. +> +> Some schemes need nontrivial starting and finishing maps. (See the generalised- $\alpha$ schemes below.) When higher-order derivatives can be retrieved from the state vectors, it is also possible to take another definition for $\mathcal{F}$ so that it returns the evaluation of $\boldsymbol{u}$ and higher-order derivatives at the current time. + +These three maps need to be designed such that the following recurrence produces approximations of $\boldsymbol{u}$ at the times of interest $t_{n}$ +```math +\left\{\begin{array}{rcl} +\left\{\boldsymbol{s}\right\}_{n+1} &=& \mathcal{U}(h_{n}, \left\{\boldsymbol{s}\right\}_{n}) \\ +\left\{\boldsymbol{s}\right\}_{0} &=& \mathcal{I}(\boldsymbol{u}_{0}^{0}, \ldots, \boldsymbol{u}_{0}^{n-1}) +\end{array}\right., \qquad \boldsymbol{u}_{n} = \mathcal{F}(\left\{\boldsymbol{s}\right\}_{n}). +``` +More precisely, we would like $\boldsymbol{u}\_{n}$ to be close to $\boldsymbol{u}(t_{n})$. Here the notation $\\{\boldsymbol{s}\\}\_{n}$ stands for the state vector, i.e. a vector of $s$ vectors: $\\{\boldsymbol{s}\\}\_{n} = (\boldsymbol{s}\_{n, i})\_{1 \leq i \leq s}$. In particular, we notice that we need the exactness condition $\mathcal{F} \circ \mathcal{I}(\boldsymbol{u}\_{0}^{0}, \ldots, \boldsymbol{u}\_{0}^{n-1}) = \boldsymbol{u}_{0}$. This is a condition on the design of the pair ($\mathcal{I}$, $\mathcal{F}$). + +# Classification of ODEs and numerical schemes +Essentially, a numerical scheme converts a (continuous) ODE into (discrete) nonlinear systems of equations. These systems of equations can be linear under special conditions on the nature of the ODE and the numerical scheme. Since numerical methods for linear and nonlinear systems of equations can be quite different in terms of cost and implementation, we are interested in solving linear systems whenever possible. This leads us to perform the following classifications. + +## Classification of ODEs +We define a few nonlinearity types based on the expression of the residual. +* **Nonlinear**. Nothing special can be said about the residual. +* **Quasilinear**. The residual is linear with respect to the highest-order time derivative and the corresponding linear form may depend on time and lower-order time derivatives, i.e. $$\boldsymbol{r}(t, \partial_{t}^{0} \boldsymbol{u}, \ldots, \partial_{t}^{n} \boldsymbol{u}) = \boldsymbol{M}(t, \partial_{t}^{0} \boldsymbol{u}, \ldots, \partial_{t}^{n-1} \boldsymbol{u}) \partial_{t}^{n} \boldsymbol{u} + \boldsymbol{f}(t, \partial_{t}^{0} \boldsymbol{u}, \ldots, \partial_{t}^{n-1} \boldsymbol{u}).$$ We call the matrix $\boldsymbol{M}: \mathbb{R} \to \mathbb{R}^{d \times d}$ the mass matrix. In particular, a quasilinear ODE is a nonlinear ODE. +* **Semilinear**. The residual is quasilinear and the mass matrix may only depend on time, i.e. $$\boldsymbol{r}(t, \partial_{t}^{0} \boldsymbol{u}, \ldots, \partial_{t}^{n} \boldsymbol{u}) = \boldsymbol{M}(t) \partial_{t}^{n} \boldsymbol{u} + \boldsymbol{f}(t, \partial_{t}^{0} \boldsymbol{u}, \ldots, \partial_{t}^{n-1} \boldsymbol{u}).$$ In particular, a semilinear ODE is a quasilinear ODE. +* **Linear**. The residual is linear with respect to all time derivatives, i.e. 
$$\boldsymbol{r}(t, \partial_{t}^{0} \boldsymbol{u}, \ldots, \partial_{t}^{n} \boldsymbol{u}) = \sum_{0 \leq k \leq n} \boldsymbol{A}\_{k}(t) \partial_{t}^{k} \boldsymbol{u} + \boldsymbol{f}(t).$$ We refer to the matrix $\boldsymbol{A}\_{k}: \mathbb{R} \to \mathbb{R}^{d \times d}$ as the $k$-th linear form of the residual. We may still define the mass matrix $\boldsymbol{M} = \boldsymbol{A}_{n}$. In particular, a linear ODE is a semilinear ODE. + +> Note that for residuals of order zero (i.e. "standard" systems of equations), the definitions of quasilinear, semilinear, and linear coincide. + +We consider an extra ODE type that is motivated by stiff problems. We say that an ODE has an implicit-explicit (IMEX) decomposition if it can be written as the sum of a residual of order $n$ and another residual of order $n-1$, i.e. $$\boldsymbol{r}(t, \partial_{t}^{0} \boldsymbol{u}, \ldots, \partial_{t}^{n} \boldsymbol{u}) = \boldsymbol{r}\_{\text{implicit}}(t, \partial_{t}^{0} \boldsymbol{u}, \ldots, \partial_{t}^{n} \boldsymbol{u}) + \boldsymbol{r}\_{\text{explicit}}(t, \partial_{t}^{0} \boldsymbol{u}, \ldots, \partial_{t}^{n-1} \boldsymbol{u}).$$ The decomposition takes the form above so that the mass matrix of the global residual is fully contained in the implicit part. The table below indicates the type of the corresponding global ODE. + +| Explicit \ Implicit | Nonlinear | Quasilinear | Semilinear | Linear | +|---------------------|-------------|-------------|------------|------------| +| Nonlinear | Nonlinear | Quasilinear | Semilinear | Semilinear | +| Linear | Nonlinear | Quasilinear | Semilinear | Linear | + +In particular, for the global residual to be linear, both the implicit and explicit parts need to be linear too. + +> In the special case where the implicit part is linear and the explicit part is quasilinear or semilinear, we could, in theory, identify two linear forms for the global residual. However, introducing this difference would call for an order-dependent classification of ODEs and this would create (infinitely) many new types. Since numerical schemes can rarely take advantage of this extra structure in practice, we still say that the global residual is semilinear in these cases. + +## Classification of numerical schemes +We introduce a classification of numerical schemes based on where they evaluate the residual during the state update. + +* If it is possible (up to a change of variables) to write the system of equations for the state update as evaluations of the residual at known values (that depend on the solution at the current time) for all but the highest-order derivative, we say that the scheme is explicit. +* Otherwise, we say that the scheme is implicit. + +> For example, when solving a first-order ODE, the state update would involve solving one or more equations of the type $$\boldsymbol{r}(t_{k}, \boldsymbol{u}\_{k}(\boldsymbol{x}), \boldsymbol{v}\_{k}(\boldsymbol{x})) = \boldsymbol{0},$$ where $\boldsymbol{x}$ is the unknown of the state update. The scheme is explicit if it is possible to introduce a change of variables such that $\boldsymbol{u}_{k}$ does not depend on $\boldsymbol{x}$. Otherwise, it is implicit. + +## Classification of systems of equations +It is advantageous to introduce this classification of ODEs and numerical schemes because the system of equations arising from the discretisation of the ODE by a numerical scheme will be linear or nonlinear depending on whether the scheme is explicit, implicit, or implicit-explicit, and on the type of the ODE.
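+ +> For instance, consider an explicit scheme applied to a first-order semilinear ODE, $\boldsymbol{r}(t, \boldsymbol{u}, \partial_{t} \boldsymbol{u}) = \boldsymbol{M}(t) \partial_{t} \boldsymbol{u} + \boldsymbol{f}(t, \boldsymbol{u})$. Each stage evaluates the residual at a known value of $\boldsymbol{u}$, so the stage unknown $\boldsymbol{x}$ only has to solve the linear system $\boldsymbol{M}(t_{i}) \boldsymbol{x} + \boldsymbol{f}(t_{i}, \boldsymbol{u}_{\text{known}}) = \boldsymbol{0}$, where $\boldsymbol{u}_{\text{known}}$ is an illustrative notation for the known value at which the residual is evaluated. An implicit scheme, on the contrary, keeps the dependence of $\boldsymbol{u}$ on $\boldsymbol{x}$, so the stage system is nonlinear in general.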
More precisely, we have the following table. + +| Scheme \ ODE | Nonlinear | Quasilinear | Semilinear | Linear | +|-------------------|-------------|-------------|------------|--------| +| Explicit | Nonlinear | Linear | Linear | Linear | +| Implicit | Nonlinear | Nonlinear | Nonlinear | Linear | + +When the system is linear, another important practical consideration is whether the matrix of the system is constant across iterations or not. This is important because a linear solver typically performs a factorisation of the matrix, and this operation may only be performed once if the matrix is constant. +* If the linear system comes from an explicit scheme, the matrix of the system is constant if the mass matrix is. In our classification, this means that the ODE has to be semilinear with a constant mass matrix. +* If the linear system comes from an implicit scheme, all the linear forms must be constant for the system to have a constant matrix. + +## Reuse across iterations +For performance reasons, it is thus important that the ODE be described in the most specific way. In particular, we consider that the mass term of a quasilinear ODE is not constant, because if it is, the ODE is semilinear. We enable the user to specify the following constant annotations: +* For nonlinear and quasilinear ODEs, no quantity can be described as constant. +* For a semilinear ODE, whether the mass term is constant. +* For a linear ODE, whether all the linear forms are constant. + +If a linear form is constant, regardless of whether the numerical scheme relies on a linear or nonlinear system, it is always possible to compute the jacobian of the residual with respect to the corresponding time derivative only once and retrieve it in subsequent computations of the jacobian. + +# High-level API in Gridap +The ODE module of `Gridap` relies on the following structure. + +## Finite element spaces +The time-dependent counterpart of `TrialFESpace` is `TransientTrialFESpace`. It is built from a standard `TestFESpace` and is equipped with time-dependent Dirichlet boundary conditions. +> By definition, test spaces have zero boundary conditions so they need not be seen as time-dependent objects. + +A `TransientTrialFESpace` can be evaluated at any time derivative order, and the corresponding Dirichlet values are the time derivatives of the Dirichlet boundary conditions. + +For example, the following creates a transient `FESpace` and evaluates its first two time derivatives. +``` +g(t::Real) = x -> x[1] + x[2] * t +V = FESpace(model, reffe, dirichlet_tags="boundary") +U = TransientTrialFESpace(V, g) + +t0 = 0.0 +U0 = U(t0) + +∂tU = ∂t(U) +∂tU0 = ∂tU(t0) + +∂ttU = ∂tt(U) # or ∂ttU = ∂t(∂t(U)) +∂ttU0 = ∂ttU(t0) +``` + +## Cell fields +The time-dependent equivalent of `CellField` is `TransientCellField`. It stores the cell field itself together with its derivatives up to the order of the ODE. + +For example, the following creates a `TransientCellField` with two time derivatives. +``` +u0 = zero(get_free_dof_values(U0)) +∂tu0 = zero(get_free_dof_values(∂tU0)) +∂ttu0 = zero(get_free_dof_values(∂ttU0)) +u = TransientCellField(u0, (∂tu0, ∂ttu0)) +``` + +## Finite element operators +The time-dependent analog of `FEOperator` is `TransientFEOperator`. It has the following constructors based on the nonlinearity type of the underlying ODE. + +* `TransientFEOperator(res, jacs, trial, test)` and `TransientFEOperator(res, trial, test; order)` for the version with automatic jacobians. The residual is expected to have the signature `residual(t, u, v)`.
+* `TransientQuasilinearFEOperator(mass, res, jacs, trial, test)` and `TransientQuasilinearFEOperator(mass, res, trial, test; order)` for the version with automatic jacobians. The mass and residual are expected to have the signatures `mass(t, u, dtNu, v)` and `residual(t, u, v)`, i.e. the mass is written as a linear form of the highest-order time derivative `dtNu`. In this setting, the mass matrix is supposed to depend on lower-order time derivatives, so `u` is provided for the nonlinearity of the mass matrix. +* `TransientSemilinearFEOperator(mass, res, jacs, trial, test; constant_mass)` and `TransientSemilinearFEOperator(mass, res, trial, test; order, constant_mass)` for the version with automatic jacobians. (The jacobian with respect to $\partial_{t}^{n} \boldsymbol{u}$ is simply the mass term). The mass and residual are expected to have the signatures `mass(t, dtNu, v)` and `residual(t, u, v)`, where here again `dtNu` is the highest-order derivative. In particular, the mass is specified as a linear form of `dtNu`. +* `TransientLinearFEOperator(forms, res, jacs, trial, test; constant_forms)` and `TransientLinearFEOperator(forms, res, trial, test; constant_forms)` for the version with automatic jacobians. (In fact, the jacobians are simply the forms themselves). The forms and residual are expected to have the signatures `form_k(t, dtku, v)` and `residual(t, v)`, i.e. `form_k` is a linear form of the $k$-th order derivative, and the residual does not depend on `u`. + +It is important to note that all the terms are gathered in the residual, including the forcing term. In the common case where the forcing term is on the right-hand side, it will need to be made negative in this description. + +Here, in the signature of the residual, `t` is the time at which the residual is evaluated, `u` is a function in the trial space, and `v` is a test function. Time derivatives of `u` can be included in the residual via the `∂t` operator, applied as many times as needed, or using the shortcut `∂t(u, N)`. + +Let us take the heat equation as an example. The original ODE is $$\partial_{t} u - \nabla \cdot (\kappa(t) \nabla u) = f(t),$$ where $\kappa$ is the (time-dependent) thermal conductivity and $f$ is the forcing term. We readily obtain the weak form $$\int_{\Omega} (\partial_{t} u) v + \kappa(t) \nabla u \cdot \nabla v \ \mathrm{d} \Omega = \int_{\Omega} f(t) v \ \mathrm{d} \Omega.$$ It could be described as follows. +* As a `TransientFEOperator`: +``` +res(t, u, v) = ∫( ∂t(u) ⋅ v + (κ(t) * ∇(u)) ⋅ ∇(v) - f(t) ⋅ v ) * dΩ +TransientFEOperator(res, U, V) +``` +* As a `TransientQuasilinearFEOperator`: +``` +mass(t, u, dtNu, v) = ∫( dtNu ⋅ v ) * dΩ +res(t, u, v) = ∫( (κ(t) * ∇(u)) ⋅ ∇(v) - f(t) ⋅ v ) * dΩ +TransientQuasilinearFEOperator(mass, res, U, V) +``` +* As a `TransientSemilinearFEOperator`: +``` +mass(t, dtu, v) = ∫( dtu ⋅ v ) * dΩ +res(t, u, v) = ∫( (κ(t) * ∇(u)) ⋅ ∇(v) - f(t) ⋅ v ) * dΩ +TransientSemilinearFEOperator(mass, res, U, V, constant_mass=true) +``` +* As a `TransientLinearFEOperator`: +``` +stiffness(t, u, v) = ∫( (κ(t) * ∇(u)) ⋅ ∇(v) ) * dΩ +mass(t, dtu, v) = ∫( dtu ⋅ v ) * dΩ +res(t, u, v) = ∫( -f(t) ⋅ v ) * dΩ +TransientLinearFEOperator((stiffness, mass), res, U, V, constant_forms=(false, true)) +``` +If $\kappa$ is constant, the keyword `constant_forms` could be replaced by `(true, true)`. + +## Solver and solution +The next step is to choose an ODE solver (see below for a full list) and specify the boundary conditions. 
The solution can then be iterated over until the final time is reached. + +For example, to use the $\theta$-method with a nonlinear solver, one could write +``` +t0 = 0.0 +tF = 1.0 +dt = 0.1 +uh0 = interpolate_everywhere(t0, U(t0)) + +res(t, u, v) = ∫( ∂t(u) ⋅ v + (κ(t) * ∇(u)) ⋅ ∇(v) - f(t) ⋅ v ) * dΩ +jac(t, u, du, v) = ∫( (κ(t) * ∇(du)) ⋅ ∇(v) ) * dΩ +jac_t(t, u, dtu, v) = ∫( dtu ⋅ v ) * dΩ +tfeop = TransientFEOperator(res, (jac, jac_t), U, V) + +ls = LUSolver() +nls = NLSolver(ls, show_trace=true, method=:newton, iterations=10) +odeslvr = ThetaMethod(nls, dt, 0.5) + +sol = solve(odeslvr, tfeop, t0, tF, uh0) +for (tn, un) in sol + # ... +end +``` + +# Low-level implementation +We now briefly describe the low-level implementation of the ODE module in `Gridap`. + +## ODE operators +The `ODEOperator` type represents an ODE according to the description above. It implements the `NonlinearOperator` interface, which enables the computation of residuals and jacobians. + +The algebraic equivalent of `TransientFEOperator` is an `ODEOpFromTFEOp`, which is a subtype of `ODEOperator`. Conceptually, `ODEOpFromTFEOp` can be thought of as an assembled `TransientFEOperator`, i.e. it deals with vectors of degrees of freedom. This operator comes with a cache (`ODEOpFromTFEOpCache`) that stores the transient space, its evaluation at the current time step, a cache for the `TransientFEOperator` itself (if any), and the constant forms (if any). + +> For now `TransientFEOperator` does not implement the `FEOperator` interface, i.e. it is not possible to evaluate residuals and jacobians directly on it. Rather, they are meant to be evaluated on the `ODEOpFromTFEOp`. This is to cut down on the number of conversions between a `TransientCellField` and its vectors of degrees of freedom (one per time derivative). Indeed, when linear forms are constant, no conversion is needed as the jacobian matrix will be stored. + +## ODE solvers +An ODE solver has to implement the following interface. +* `allocate_odecache(odeslvr, odeop, t0, us0)`. This function allocates a cache that can be reused across the three functions `ode_start`, `ode_march!`, and `ode_finish!`. In particular, it is necessary to call `allocate_odeopcache` within this function, so as to instantiate the `ODEOpFromTFEOpCache` and be able to update the Dirichlet boundary conditions in the subsequent functions. +* `ode_start(odeslvr, odeop, t0, us0, odecache)`. This function creates the state vectors from the initial conditions. By default, this is the identity. +* `ode_march!(stateF, odeslvr, odeop, t0, state0, odecache)`. This is the update map that evolves the state vectors. +* `ode_finish!(uF, odeslvr, odeop, t0, tF, stateF, odecache)`. This function converts the state vectors into the evaluation of the solution at the current time step. By default, this copies the first state vector into `uF`. + +## Stage operator +A `StageOperator` represents the linear or nonlinear operator that a numerical scheme relies on to evolve the state vector. It is essentially a special kind of `NonlinearOperator` but it overwrites the behaviour of nonlinear and linear solvers to take advantage of the matrix of a linear system being constant. The following subtypes of `StageOperator` are the building blocks of all numerical schemes. +* `LinearStageOperator` represents the system $\boldsymbol{J} \boldsymbol{x} + \boldsymbol{r} = \boldsymbol{0}$, and can build $\boldsymbol{J}$ and $\boldsymbol{r}$ by evaluating the residual at a given point.
+* `NonlinearStageOperator` represents $\boldsymbol{r}(\boldsymbol{t}, \boldsymbol{\ell}\_{0}(\boldsymbol{x}), \ldots, \boldsymbol{\ell}\_{N}(\boldsymbol{x})) = \boldsymbol{0}$, where it is assumed that all the $\boldsymbol{\ell}_{k}(\boldsymbol{x})$ are linear in $\boldsymbol{x}$. + +## ODE solution +This type is a simple wrapper around an `ODEOperator`, an `ODESolver`, and initial conditions that can be iterated on to evolve the ODE. + +# Numerical schemes formulation and implementation +We conclude this note by describing some numerical schemes and their implementation in `Gridap`. + +Suppose that the scheme has been evolved up to time $t_{n}$ already and that the state vectors $\\{\boldsymbol{s}\\}\_{n}$ are known. We are willing to evolve the ODE up to time $t_{n+1} > t_{n}$, i.e. compute the state vectors $\\{\boldsymbol{s}\\}\_{n+1}$. Generally speaking, a numerical scheme constructs an approximation of the map $\\{\boldsymbol{s}\\}\_{n} \to \\{\boldsymbol{s}\\}\_{n+1}$ by solving one or more relationships of the type +$$\boldsymbol{r}(t_{i}, \Delta_{i}^{0}(\\{\boldsymbol{s}\\}\_{n}, \\{\boldsymbol{s}\\}\_{n+1}), \ldots, \Delta_{i}^{n}(\\{\boldsymbol{s}\\}\_{n}, \\{\boldsymbol{s}\\}\_{n+1})) = \boldsymbol{0},$$ +where $t_{i}$ is an intermediate time and $\Delta_{i}^{k}$ are discrete operators that approximates the $k$-th order time derivative of $\boldsymbol{u}$ at time $t_{i}$. + +We now describe the numerical schemes implemented in `Gridap` using this framework. It is usually convenient to perform a change of variables so that the unknown $\boldsymbol{x}$ has the dimension of the highest-order time derivative of $\boldsymbol{u}$, i.e. $[\boldsymbol{x}] = [t]^{-n} [\boldsymbol{u}]$ (where $[\bullet]$ stands for "the dimension of $\bullet$"). We always perform such a change of variable in practice. + +We also briefly characterise these schemes in terms of their order and linear stability. + +## $\theta$-method +This scheme is used to solve first-order ODEs and relies on the simple state vector $\\{\boldsymbol{s}(t)\\} = \\{\boldsymbol{u}(t)\\}$. This means that the starting and finishing procedures are simply the identity. + +The $\theta$-method relies on the following approximation +$$\boldsymbol{u}(t_{n+1}) = \boldsymbol{u}(t_{n}) + \int_{t_{n}}^{t_{n+1}} \partial_{t} \boldsymbol{u}(t) \ \mathrm{d} t \approx \boldsymbol{u}(t_{n}) + h_{n} \partial_{t} \boldsymbol{u}(t_{n + \theta}),$$ +where we have introduced the intermediate time $t_{n + \theta} \doteq (1 - \theta) t_{n} + \theta t_{n+1}$. By replacing $\boldsymbol{u}(t_{n})$ and $\boldsymbol{u}(t_{n+1})$ by their discrete equivalents, we have $\partial_{t} \boldsymbol{u}(t_{n + \theta}) \approx \frac{1}{h} (\boldsymbol{u}\_{n+1} - \boldsymbol{u}\_{n})$. This quantity is found by enforcing that the residual is zero at $t_{n + \theta}$. In that sense, the $\theta$-method can be framed as a collocation method at $t_{n + \theta}$. For that purpose, we use the same quadrature rule as above to approximate $\boldsymbol{u}(t_{n + \theta})$, i.e. $\boldsymbol{u}(t_{n + \theta}) \approx \boldsymbol{u}\_{n} + \theta h\_{n} \partial_{t} \boldsymbol{u}(t_{n + \theta})$. Using the notations of the framework above, we have defined +```math +\begin{align*} +t_{1} &= (1 - \theta) t_{n} + \theta t_{n+1}, \\ +\Delta_{1}^{0} &= (1 - \theta) \boldsymbol{u}_{n} + \theta \boldsymbol{u}_{n+1}, \\ +\Delta_{1}^{1} &= \frac{1}{h} (\boldsymbol{u}_{n+1} - \boldsymbol{u}_{n}). 
+\end{align*} +``` + +To summarize and to be more concrete, let $\boldsymbol{x} = \frac{1}{h} (\boldsymbol{u}\_{n+1} - \boldsymbol{u}\_{n})$. The $\theta$-method solves the following stage operator +$$\boldsymbol{r}(t_{n} + \theta h_{n}, \boldsymbol{u}\_{n} + \theta h_{n} \boldsymbol{x}, \boldsymbol{x}) = \boldsymbol{0},$$ +and sets $\boldsymbol{u}\_{n+1} = \boldsymbol{u}\_{n} + h_{n} \boldsymbol{x}$. The output state is simply $\\{\boldsymbol{s}\\}\_{n+1} = \\{\boldsymbol{u}_{n+1}\\}$. + + +**Analysis** +Since this scheme uses $\boldsymbol{u}(t)$ as its only state vector, the amplification matrix has dimension one, and its coefficient is the stabilisation function, given by +$$\rho(z) = \frac{1 + (1 - \theta) z}{1 - \theta z}.$$ +We plug the Taylor expansion of $\boldsymbol{u}\_{n+1}$ around $\boldsymbol{u}\_{n}$ in $\boldsymbol{u}\_{n+1} = \rho(z) \boldsymbol{u}\_{n}$ and obtain the exactness condition $\rho(z) - \exp(z) = 0$. We then seek to match as many coefficients in the Taylor expansion of both sides to obtain order conditions. We readily obtain the following expansion +$$\rho(z) - \exp(z) = \sum_{k \geq 0} \left[\theta^{k} - \frac{1}{(k+1)!}\right] z^{k+1}.$$ +The order conditions are as follows. +* **Order 0 and 1**. The first two coefficients are always zero, so the method has at least order one. +* **Order 2**. The third coefficient is $\theta - \frac{1}{2}$, and it is zero when $\theta = \frac{1}{2}$. This value of $\theta$ corresponds to a second-order scheme. The next coefficient is $\theta^{2} - \frac{1}{6}$, so this method cannot reach order three. + +By looking at the behaviour of the stability function at infinity, we find that the scheme is $L$-stable only when $\theta = 1$. We determine whether the scheme is $A$-stable or not by looking at stability region. We distinguish three cases based on the value of $\theta$. +* $\theta < \frac{1}{2}$. The stability region is the circle of radius $\frac{1}{1 - 2 \theta}$ centered at $\left(\frac{-1}{1 - 2 \theta}, 0\right)$. In particular, it is not $A$-stable. The special case $\theta = 0$ is known as the Forward Euler scheme, which is the only explicit scheme of the $\theta$-method family. +* $\theta = \frac{1}{2}$. The stability region is the whole left complex plane, so the scheme is $A$-stable. This case is known as the implicit midpoint scheme. +* $\theta > \frac{1}{2}$. The stability region is the whole complex plane except the circle of radius $\frac{1}{2 \theta - 1}$ centered at $\left(\frac{1}{2 \theta - 1}, 0\right)$. In particular, the scheme is $A$-stable. The special case $\theta = 1$ is known as the Backward Euler scheme. + +## Generalised- $\alpha$ scheme for first-order ODEs +This scheme relies on the state vector $\\{\boldsymbol{s}(t)\\} = \\{\boldsymbol{u}(t), \partial_{t} \boldsymbol{u}(t)\\}$. In particular, it needs a nontrivial starting procedure that evaluates $\partial_{t} \boldsymbol{u}(t_{0})$ by enforcing a zero residual at $t_{0}$. The finaliser can still return the first vector of the state vectors. For convenience, let $\partial_{t} \boldsymbol{u}\_{n}$ denote the approximation $\partial_{t} \boldsymbol{u}(t_{n})$. 
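+ +> In the notation of the general framework above, the starting map can be sketched as $$\mathcal{I}(\boldsymbol{u}_{0}) = \\{\boldsymbol{u}_{0}, \partial_{t} \boldsymbol{u}_{0}\\}, \qquad \text{with } \partial_{t} \boldsymbol{u}_{0} \text{ the solution of } \boldsymbol{r}(t_{0}, \boldsymbol{u}_{0}, \partial_{t} \boldsymbol{u}_{0}) = \boldsymbol{0}.$$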
+ +This method extends the $\theta$-method by considering the two-point quadrature rule $$\boldsymbol{u}(t_{n+1}) = \boldsymbol{u}\_{n} + \int_{t_{n}}^{t_{n+1}} \partial_{t} \boldsymbol{u}(t) \ \mathrm{d} t \approx \boldsymbol{u}\_{n} + h_{n} [(1 - \gamma) \partial_{t} \boldsymbol{u}(t_{n}) + \gamma \partial_{t} \boldsymbol{u}(t_{n+1})],$$ +where $0 \leq \gamma \leq 1$ is a free parameter. The question is now how to estimate $\partial_{t} \boldsymbol{u}(t_{n+1})$. This is achieved by enforcing a zero residual at $t_{n + \alpha_{F}} \doteq (1 - \alpha_{F}) t_{n} + \alpha_{F} t_{n+1}$, where $0 \leq \alpha_{F} \leq 1$ is another free parameter. The value of $\boldsymbol{u}$ at that time, $\boldsymbol{u}\_{n + \alpha_{F}}$, is obtained by the same linear combination of $\boldsymbol{u}$ at $t_{n}$ and $t_{n+1}$. Regarding $\partial_{t} \boldsymbol{u}$, it is taken as a linear combination weighted by another free parameter $0 < \alpha_{M} \leq 1$ of the time derivative at times $t_{n}$ and $t_{n+1}$. Note that $\alpha_{M}$ cannot be zero. Altogether, we have defined the discrete operators +```math +\begin{align*} +t_{1} &= (1 - \alpha_{F}) t_{n} + \alpha_{F} t_{n+1}, \\ +\Delta_{1}^{0} &= (1 - \alpha_{F}) \boldsymbol{u}_{n} + \alpha_{F} \boldsymbol{u}_{n+1}, \\ +\Delta_{1}^{1} &= (1 - \alpha_{M}) \partial_{t} \boldsymbol{u}_{n} + \alpha_{M} \partial_{t} \boldsymbol{u}_{n+1}. +\end{align*} +``` + +In more concrete terms, we solve the following system: +```math +\begin{align*} +\boldsymbol{0} &= \boldsymbol{r}(t_{n + \alpha_{F}}, \boldsymbol{u}_{n + \alpha_{F}}, \partial_{t} \boldsymbol{u}_{n + \alpha_{M}}), \\ +t_{n + \alpha_{F}} &= (1 - \alpha_{F}) t_{n} + \alpha_{F} t_{n+1}, \\ +\boldsymbol{u}_{n + \alpha_{F}} &= (1 - \alpha_{F}) \boldsymbol{u}_{n} + \alpha_{F} \boldsymbol{u}_{n+1}, \\ +\partial_{t} \boldsymbol{u}_{n + \alpha_{M}} &= (1 - \alpha_{M}) \partial_{t} \boldsymbol{u}_{n} + \alpha_{M} \partial_{t} \boldsymbol{u}_{n+1}, \\ +\boldsymbol{u}_{n+1} &= \boldsymbol{u}_{n} + h_{n} [(1 - \gamma) \partial_{t} \boldsymbol{u}_{n} + \gamma \boldsymbol{x}], \\ +\partial_{t} \boldsymbol{u}_{n+1} &= \boldsymbol{x}. +\end{align*} +``` +The state vector is updated to $\\{\boldsymbol{s}\\}\_{n+1} = \\{\boldsymbol{u}\_{n+1}, \partial_{t} \boldsymbol{u}_{n+1}\\}$. + +**Analysis** +The amplification matrix for the state vector is +```math +\boldsymbol{A}(z) = \frac{1}{\alpha_{M} - \alpha_{F} \gamma z} \begin{bmatrix}\alpha_{M} + (1 - \alpha_{F}) \gamma z & \alpha_{M} - \gamma \\ z & \alpha_{M} - 1 + \alpha_{F} (1 - \gamma) z\end{bmatrix}. +``` +It is then immediate to see that $\boldsymbol{u}\_{n+1} = \mathrm{tr}(\boldsymbol{A}) \boldsymbol{u}\_{n} - \det(\boldsymbol{A}) \boldsymbol{u}\_{n-1}$. This time, plugging the Taylor expansion of $\boldsymbol{u}\_{n+1}$ and $\boldsymbol{u}\_{n-1}$ around $\boldsymbol{u}\_{n}$ in this expression, the exactness condition is $\mathrm{tr}(\boldsymbol{A}(z)) - \det(\boldsymbol{A}(z)) \exp(-z) - \exp(z) = 0$. 
To simplify the analysis, we write the trace and determinant of $\boldsymbol{A}$ as follows +$$\mathrm{tr}(\boldsymbol{A}(z)) = a + \frac{b}{1 - c z}, \qquad \det(\boldsymbol{A}(z)) = d + \frac{e}{1 - c z},$$ +where +```math +\begin{align*} +a &= 2 - \frac{1}{\alpha_{F}} - \frac{1}{\gamma}, \\ +b &= \frac{1}{\alpha_{F}} + \frac{1}{\gamma} - \frac{1}{\alpha_{M}}, \\ +c &= \frac{\alpha_{F} \gamma}{\alpha_{M}}, \\ +d &= \frac{(1 - \alpha_{F}) (1 - \gamma)}{\alpha_{F} \gamma}, \\ +e &= \frac{\alpha_{M} (\alpha_{F} + \gamma - 1) - \alpha_{F} \gamma}{\alpha_{F} \alpha_{M} \gamma}. +\end{align*} +``` +Next, we obtain the Taylor expansion of the exactness condition and find +$$(a + b - d - e - 1) + \sum_{k \geq 1} \left(b c^{k} - \frac{1}{k!} - \frac{(-1)^{k}}{k!} d - \sum_{0 \leq l \leq k} e c^{(k - l)}\frac{(-1)^{l}}{l!}\right) z^{k} = 0.$$ +The order conditions are as follows. +* **Order 0 and 1**. The first two coefficients are always zero, so the method is at least of order $1$. +* **Order 2**. The third coefficient has a zero at $\gamma = \frac{1}{2} + \alpha_{M} - \alpha_{F}$. +* **Order 3**. The fourth coefficient has a zero at $\alpha_{M} = \frac{1 + 6 \alpha_{F} - 12 \alpha_{F}^{2}}{6(1 - 2 \alpha_{F})}$ (provided that $\alpha_{F} \neq \frac{1}{2}$). In that case we simplify $\gamma$ into $\gamma = \frac{2 - 3 \alpha_{F}}{3(1 - 2 \alpha_{F})}$. +* **Order 4**. The fifth coefficient has zeros at $\alpha_{F} = \frac{3 \pm \sqrt{3}}{6}$ and poles at $\alpha_{F} = \frac{3 \pm \sqrt{21}}{12}$. The corresponding values of $\alpha_{M}$ and $\gamma$ are $\alpha_{M} = \frac{1}{2}$, $\gamma = \frac{3 \mp \sqrt{3}}{6}$. + +We finally study the stability in the extreme cases $|z| \to 0$ and $|z| \to +\infty$. We want the spectral radius of the amplification matrix to be smaller than one so that perturbations are damped away. +* When $|z| \to 0$, we have $\rho(\boldsymbol{A}(z)) \to \max\\{1, \left|1 - \frac{1}{\alpha_{M}}\right|\\}$. +* When $|z| \to +\infty$, we have $\rho(\boldsymbol{A}(z)) \to \max\\{\left|1 - \frac{1}{\alpha_{F}}\right|, \left|1 - \frac{1}{\gamma}\right|\\}$. + +We thus require $\alpha_{M} \geq \frac{1}{2}$, $\alpha_{F} \geq \frac{1}{2}$ and $\gamma \geq \frac{1}{2}$ to ensure stability. In particular when the scheme has order $3$, the stability conditions become $\alpha_{M} \geq \alpha_{F} \geq \frac{1}{2}$. We verify that the scheme is unstable whenever it has an order greater than $3$. We notice that $L$-stability is only achieved when $\alpha_{F} = 1$ and $\gamma = 1$. The corresponding value of $\alpha_{M}$ for a third-order scheme is $\alpha_{M} = \frac{3}{2}$. + +This scheme was originally devised to control the damping of high frequencies. One parameterisation consists in prescribing the eigenvalues at $|z| \to +\infty$, and this leads to +$$\alpha_{F} = \gamma = \frac{1}{1 + \rho_{\infty}}, \qquad \alpha_{M} = \frac{3 - \rho_{\infty}}{2 (1 + \rho_{\infty})},$$ +where $\rho_{\infty}$ is the spectral radius at infinity. Setting $\rho_{\infty} = 0$ cuts all the highest frequencies in one step, whereas taking $\rho_{\infty} = 1$ preserves high frequencies. + +## Runge-Kutta +Runge-Kutta methods are multi-stage, i.e. they build estimates of $\boldsymbol{u}$ at intermediate times between $t_{n}$ and $t_{n+1}$.
They can be written as follows +```math +\begin{align*} +\boldsymbol{0} &= \boldsymbol{r}(t_{n} + c_{i} h_{n}, \boldsymbol{u}_{n} + \sum_{1 \leq j \leq p} a_{ij} h_{n} \boldsymbol{x}_{j}, \boldsymbol{x}_{i}), & 1 \leq i \leq p \\ +\boldsymbol{u}_{n+1} &= \boldsymbol{u}_{n} + \sum_{1 \leq i \leq p} b_{i} h_{n} \boldsymbol{x}_{i}, +\end{align*} +``` +where $p$ is the number of stages, $\boldsymbol{A} = (a_{ij})\_{1 \leq i, j \leq p}$ is a matrix of free parameters, $\boldsymbol{b} = (b_{i})\_{1 \leq i \leq p}$ and $\boldsymbol{c} = (c_{i})\_{1 \leq i \leq p}$ are two vectors of free parameters. The stage unknowns $(\boldsymbol{x}\_{i})_{1 \leq i \leq p}$ are involved in a coupled system of equations. This system can take a simpler form when the matrix $\boldsymbol{A}$ has a particular structure. +* When $\boldsymbol{A}$ is lower triangular, each equation only involves the current and previous stage unknowns, so the equations can be solved sequentially. These schemes are called Diagonally-Implicit Runge-Kutta (DIRK). If the diagonal coefficients of the matrix $\boldsymbol{A}$ are the same, the method is called Singly-Diagonally Implicit (SDIRK). +* If the diagonal coefficients are also zero, the method is explicit. These schemes are called Explicit Runge-Kutta (EXRK). + +**Implementation details** It is particularly advantageous to save the factorisation of the matrices of the stage operators for Runge-Kutta methods. This is always possible when the method is explicit and the mass matrix is constant, in which case all the stage matrices are the mass matrix. When the method is diagonally-implicit and the stiffness and mass matrices are constant, the matrices of the stage operators are $\boldsymbol{M} + a_{ii} h_{n} \boldsymbol{K}$. In particular, if two diagonal coefficients coincide, the corresponding operators will have the same matrix. We implement these reuse strategies by storing them in `CompressedArray`s, and introducing a map `i -> NumericalSetup`. + +**Analysis** +The stability function of a Runge-Kutta scheme is +$$\rho(z) = 1 + z \boldsymbol{b}^{T} (\boldsymbol{I} - z \boldsymbol{A})^{-1} \boldsymbol{1}.$$ + +The analysis of Runge-Kutta methods is well-established but we only derive order conditions for schemes with one, two, or three stages in the diagonally-implicit case. +* **One stage**. These schemes coincide with the $\theta$-method presented above. + +* **Two stages**. We solve the order conditions given by the differential trees and find the following families of tableaus of orders two and three +```math +\begin{array}{c|cc} +\alpha & \alpha & \\ +\beta & \beta - \hat{\beta} & \hat{\beta} \\ \hline +& \frac{2 \beta - 1}{2 (\beta - \alpha)} & \frac{1 - 2 \alpha}{2 (\beta - \alpha)} +\end{array}, \qquad +\begin{array}{c|cc} +\frac{1}{2} - \frac{\sqrt{3}}{6} \frac{1}{\lambda} & \frac{1}{2} - \frac{\sqrt{3}}{6} \frac{1}{\lambda} & \\ +\frac{1}{2} + \frac{\sqrt{3}}{6} \lambda & \frac{\sqrt{3}}{3} \lambda & \frac{1}{2} - \frac{\sqrt{3}}{6} \lambda \\ \hline +& \frac{\lambda^{2}}{\lambda^{2} + 1} & \frac{1}{\lambda^{2} + 1}. +\end{array} +``` + +* **Three stages**. We only solve the explicit schemes in full generality.
We find three families of order three +```math +\begin{array}{c|cc} +0 & \\ +\alpha & \alpha & \\ +\beta & \beta - \frac{\beta (\beta - \alpha)}{\alpha (2 - 3\alpha)} & \frac{\beta (\beta - \alpha)}{ \alpha(2 - 3 \alpha)} \\ \hline +& 1 - \frac{3 (\beta + \alpha) - 2}{6 \alpha \beta} & \frac{3 \beta - 2}{6 \alpha (\beta - \alpha)} & \frac{2 - 3 \alpha}{6 \beta (\beta - \alpha)} +\end{array}, \qquad +\begin{array}{c|cc} +0 & \\ +\frac{2}{3} & \frac{2}{3} & \\ +\frac{2}{3} & \frac{2}{3} - \frac{1}{4 \alpha} & \frac{1}{4 \alpha} \\ \hline +& \frac{1}{4} & \frac{3}{4} - \alpha & \alpha +\end{array}, \qquad +\begin{array}{c|cc} +0 & \\ +\frac{2}{3} & \frac{2}{3} & \\ +0 & -\frac{1}{4 \alpha} & \frac{1}{4 \alpha} \\ \hline +& \frac{1}{4} - \alpha & \frac{3}{4} & \alpha +\end{array}. +``` + +## Implicit-Explicit Runge-Kutta +When the residual has an implicit-explicit decomposition, usually because we can identify a stiff part that we want to solve implicitly and a nonstiff part that we want to solve explicitly, the Runge-Kutta method reads as follows +```math +\begin{align*} +\boldsymbol{0} &= \boldsymbol{r}(t_{n} + c_{i} h_{n}, \boldsymbol{u}_{n} + \sum_{1 \leq j \leq i-1} (a_{i, j} h_{n} \boldsymbol{x}_{j} + \hat{a}_{i, j} h_{n} \hat{\boldsymbol{x}}_{j}) + a_{i, i} h_{n} \boldsymbol{x}_{i}, \boldsymbol{x}_{i}), \\ +\boldsymbol{0} &= \hat{\boldsymbol{r}}(t_{n} + c_{i} h_{n}, \boldsymbol{u}_{n} + \sum_{1 \leq j \leq i-1} (a_{i, j} h_{n} \boldsymbol{x}_{j} + \hat{a}_{i, j} h_{n} \hat{\boldsymbol{x}}_{j}) + a_{i, i} h_{n} \boldsymbol{x}_{i}, \hat{\boldsymbol{x}}_{i}), & 1 \leq i \leq p \\ +\boldsymbol{u}_{n+1} &= \boldsymbol{u}_{n} + \sum_{1 \leq i \leq p} (b_{i} h_{n} \boldsymbol{x}_{i} + \hat{b}_{i} h_{n} \hat{\boldsymbol{x}}_{i}). +\end{align*} +``` +In these expressions, quantities that wear a hat are the explicit counterparts of the implicit quantity with the same name. The implicit and explicit stages are alternated, i.e. the implicit and explicit stage unknowns $\boldsymbol{x}\_{i}$ and $\hat{\boldsymbol{x}}\_{i}$ are solved alternatively. As seen above, we require that the nodes $c_{i}$ of the implicit and explicit tableaus coincide. This implies that the first step for the implicit part is actually explicit. + +**Implementation details** +Many methods can be created by padding a DIRK tableau with zeros to give it an additional step. In this case, the first stage for the implicit part does not need to be solved, as all linear combinations give it a zero weight. As an example, an $L$-stable, $2$-stage, second-order SDIRK IMEX scheme is given by +```math +\begin{array}{c|ccc} +0 & 0 & 0 & 0 \\ +\frac{2 - \sqrt{2}}{2} & 0 & \frac{2 - \sqrt{2}}{2} & 0 \\ +1 & 0 & \frac{\sqrt{2}}{2} & \frac{2 - \sqrt{2}}{2} \\ \hline + & 0 & \frac{\sqrt{2}}{2} & \frac{2 - \sqrt{2}}{2} +\end{array}, \qquad +\begin{array}{c|ccc} +0 & 0 & 0 & 0 \\ +\frac{2 - \sqrt{2}}{2} & \frac{2 - \sqrt{2}}{2} & 0 & 0 \\ +1 & -\frac{\sqrt{2}}{2} & 1 + \frac{\sqrt{2}}{2} & 0 \\ \hline + & -\frac{\sqrt{2}}{2} & 1 + \frac{\sqrt{2}}{2} & 0 +\end{array}. +``` +We note that the first column of the matrix and the first weight are all zero, so the first stage for the implicit part does not need to be solved. + +## Generalised- $\alpha$ scheme for second-order ODEs +This scheme relies on the state vector $\\{\boldsymbol{s}(t)\\} = \\{\boldsymbol{u}(t), \partial_{t} \boldsymbol{u}(t), \partial_{tt} \boldsymbol{u}(t)\\}$. 
It needs a nontrivial starting procedure that evaluates $\partial_{tt} \boldsymbol{u}(t_{0})$ by enforcing a zero residual at $t_{0}$. The finaliser can still return the first vector of the state vectors. For convenience, let $\partial_{tt} \boldsymbol{u}\_{n}$ denote the approximation $\partial_{tt} \boldsymbol{u}(t_{n})$. + +This method is built out of the following update rule +```math +\begin{align*} +\boldsymbol{0} &= \boldsymbol{r}(t_{n + 1 - \alpha_{F}}, \boldsymbol{u}_{n + 1 - \alpha_{F}}, \partial_{t} \boldsymbol{u}_{n + 1 - \alpha_{F}}, \partial_{tt} \boldsymbol{u}_{n + 1 - \alpha_{M}}), \\ +t_{n + 1 - \alpha_{F}} &= \alpha_{F} t_{n} + (1 - \alpha_{F}) t_{n+1}, \\ +\boldsymbol{u}_{n + 1 - \alpha_{F}} &= \alpha_{F} \boldsymbol{u}_{n} + (1 - \alpha_{F}) \boldsymbol{u}_{n+1}, \\ +\partial_{t} \boldsymbol{u}_{n + 1 - \alpha_{F}} &= \alpha_{F} \partial_{t} \boldsymbol{u}_{n} + (1 - \alpha_{F}) \partial_{t} \boldsymbol{u}_{n+1}, \\ +\partial_{tt} \boldsymbol{u}_{n + 1 - \alpha_{M}} &= \alpha_{M} \partial_{tt} \boldsymbol{u}_{n} + (1 - \alpha_{M}) \partial_{tt} \boldsymbol{u}_{n+1}, \\ +\boldsymbol{u}_{n+1} &= \boldsymbol{u}_{n} + h_{n} \partial_{t} \boldsymbol{u}_{n} + \frac{1}{2} h_{n}^{2} [(1 - 2 \beta) \partial_{tt} \boldsymbol{u}_{n} + 2 \beta \boldsymbol{x}] \\ +\partial_{t} \boldsymbol{u}_{n+1} &= \partial_{t} \boldsymbol{u}_{n} + h_{n} [(1 - \gamma) \partial_{tt} \boldsymbol{u}_{n} + \gamma \boldsymbol{x}], \\ +\partial_{tt} \boldsymbol{u}_{n+1} &= \boldsymbol{x} +\end{align*} +``` +The state vector is then updated to $\\{\boldsymbol{s}\\}\_{n+1} = \\{\boldsymbol{u}\_{n+1}, \partial_{t} \boldsymbol{u}\_{n+1}, \partial_{tt} \boldsymbol{u}_{n+1}\\}$. + +**Analysis** The amplification matrix for the state vector is +```math +\boldsymbol{A}(z) = \frac{1}{1 - \alpha_{M} + (1 - \alpha_{F}) \beta z^{2}} \begin{bmatrix} +1 - \alpha_{M} - \alpha_{F} \beta z^{2} & 1 - \alpha_{M} & \frac{1}{2} (1 - 2 \beta) (1 - \alpha_{M}) - \beta \alpha_{M} \\ +-\gamma z^{2} & (1 - \alpha_{M}) + (1 - \alpha_{F})(\beta - \gamma) z^{2} & (1 - \alpha_{M}) (1 - \gamma) - \alpha_{M} \gamma + (1 - \alpha_{F}) [(1 - \gamma) \beta - \frac{1}{2} (1 - 2 \beta) \gamma] z^{2} \\ +-z^{2} & -(1 - \alpha_{F}) z^{2} & -\alpha_{M} - \frac{1}{2} (1 - \alpha_{F}) (1 - 2 \beta) z^{2} +\end{bmatrix}. +``` +Here again, we immediately see that $\boldsymbol{u}_{n+1}$ satisfies the recurrence +```math +\boldsymbol{u}_{n+1} = \mathrm{tr}(\boldsymbol{A}(z)) \boldsymbol{u}_{n} - \frac{1}{2} (\mathrm{tr}(\boldsymbol{A}(z))^{2} - \mathrm{tr}(\boldsymbol{A}(z)^{2})) \boldsymbol{u}_{n-1} + \det(\boldsymbol{A}(z)) \boldsymbol{u}_{n-2}. +``` +By plugging the Taylor expansion of $\boldsymbol{u}$ at times $t_{n+1}$, $t_{n-1}$ and $t_{n-2}$, we obtain the exactness condition +$$\cos(z) = \mathrm{tr}(\boldsymbol{A}(z)) - \frac{1}{2} (\mathrm{tr}(\boldsymbol{A}(z))^{2} - \mathrm{tr}(\boldsymbol{A}(z)^{2})) \cos(z) + \det(\boldsymbol{A}(z)) \cos(2z).$$ +These conditions are hard to examine analytically, but one can verify that this scheme is at least of order $1$. Second-order is achieved by setting $\gamma = \frac{1}{2} - \alpha_{M} + \alpha_{F}$. + +It is easier to consider the limit cases $|z| \to 0$ and $|z| \to +\infty$ and look at the eigenvalues of the amplification matrix. +* When $|z| \to 0$, we find $\rho(\\boldsymbol{A}(z)) = \max\\{1, \left|\frac{\alpha_{M}}{1 - \alpha_{M}}\right|\\}$. 
+* When $|z| \to +\infty$, we find $\rho(\\boldsymbol{A}(z)) = \max\\{\left|\frac{\alpha_{F}}{1 - \alpha_{F}}\right|, \left|\frac{4 \beta - (1 + 2 \gamma) \pm \sqrt{(1 + 2 \gamma)^{2} - 16 \beta}}{4 \beta}\right|\\}$. + +For all these eigenvalues to have a modulus smaller than one, we need $\alpha_{M} \leq \frac{1}{2}$, $\alpha_{F} \leq \frac{1}{2}$, $\gamma \geq \frac{1}{2}$, i.e. $\alpha_{F} \geq \alpha_{M}$ and $\beta \geq \frac{1}{2} \gamma$. Since the dissipation of high frequencies is maximised when the eigenvalues are real at infinity, we also impose $\beta = \frac{1}{16} (1 + 2 \gamma)^{2}$, i.e. $\beta = \frac{1}{4} (1 - \alpha_{M} + \alpha_{F})^{2}$. + +This method was also designed to damp high-frequency perturbations so it is common practice to parameterise this scheme in terms of its spectral radius. +* The Hilber-Hughes-Taylor- $\alpha$ (HHT- $\alpha$) method is obtained by setting $\alpha_{M} = 0$, $\alpha_{F} = \frac{1 - \rho_{\infty}}{1 + \rho_{\infty}}$. +* The Wood-Bossak-Zienkiewicz- $\alpha$ (WBZ- $\alpha$) method is recovered by setting $\alpha_{F} = 0$ and $\alpha_{M} = \frac{\rho_{\infty} - 1}{\rho_{\infty} + 1}$. +* The standard generalised- $\alpha$ method is obtained by setting $\alpha_{M} = \frac{2 \rho_{\infty} - 1}{\rho_{\infty} + 1}$, $\alpha_{F} = \frac{\rho_{\infty}}{\rho_{\infty} + 1}$. +* The Newmark method corresponds to $\alpha_{F} = \alpha_{M} = 0$. In this case, the values of $\beta$ and $\gamma$ are usually chosen as $\beta = 0$, $\gamma = \frac{1}{2}$ (explicit central difference scheme), or $\beta = \frac{1}{4}$ and $\gamma = \frac{1}{2}$ (midpoint rule). + +# TODO +Some Runge-Kutta schemes have the First-Same-As-Last (FSAL) property, which enables sharing a residual evaluation from one step to the next. It may not be possible to have this optimisation in our case because of the Dirichlet boundary conditions. + +# Ideas for later +* Adaptive time-stepping with embedded Runge-Kutta methods or Richardson extrapolation +* Linear multistep methods (Adams-Bashforth, Adams-Moulton, Backward Difference Formula) +* General linear methods +* Numerical methods for differential-algebraic systems of equations (DAEs) diff --git a/src/Algebra/Algebra.jl b/src/Algebra/Algebra.jl index 7b47f0b97..72f2782ac 100644 --- a/src/Algebra/Algebra.jl +++ b/src/Algebra/Algebra.jl @@ -28,6 +28,7 @@ export allocate_in_domain export allocate_in_range export add_entries! export muladd! +export axpy_entries! export nz_counter export nz_allocation export create_from_nz diff --git a/src/Algebra/AlgebraInterfaces.jl b/src/Algebra/AlgebraInterfaces.jl index 605c68076..41ed27f4f 100644 --- a/src/Algebra/AlgebraInterfaces.jl +++ b/src/Algebra/AlgebraInterfaces.jl @@ -244,6 +244,60 @@ else end end +""" + axpy_entries!(α::Number, A::T, B::T) where {T<: AbstractMatrix} -> T + +Efficient implementation of axpy! for sparse matrices. +""" +function axpy_entries!(α::Number, A::T, B::T) where {T<:AbstractMatrix} + iszero(α) && return B + + axpy!(α, A, B) + B +end + +# For sparse matrices, it is surprisingly quicker to call `@. B += α * A` than +# `axpy!(α, A, B)`. Calling axpy! on the nonzero values of A and B is the most +# efficient approach but this is only possible when A and B have the same +# sparsity pattern. The checks add some non-negligible overhead so we make them +# optional by adding a keyword. +const cannot_axpy_entries_msg = """ +It is only possible to efficiently add two sparse matrices that have the same +sparsity pattern.
+""" + +function axpy_entries!( + α::Number, A::T, B::T; + check::Bool=true +) where {T<:SparseMatrixCSC} + iszero(α) && return B + + if check + msg = cannot_axpy_entries_msg + @check rowvals(A) == rowvals(B) msg + @check all(nzrange(A, j) == nzrange(B, j) for j in axes(A, 2)) msg + end + + axpy!(α, nonzeros(A), nonzeros(B)) + B +end + +function axpy_entries!( + α::Number, A::T, B::T; + check::Bool=true +) where {T<:Union{SparseMatrixCSR,SymSparseMatrixCSR}} + iszero(α) && return B + + if check + msg = cannot_axpy_entries_msg + @check colvals(A) == colvals(B) msg + @check all(nzrange(A, j) == nzrange(B, j) for j in axes(A, 1)) msg + end + + axpy!(α, nonzeros(A), nonzeros(B)) + B +end + # # Some API associated with assembly routines # diff --git a/src/Arrays/Interface.jl b/src/Arrays/Interface.jl index 322bcb1ad..cecb96b72 100644 --- a/src/Arrays/Interface.jl +++ b/src/Arrays/Interface.jl @@ -168,6 +168,16 @@ function testvalue(::Type{T}) where T<:AbstractArray{E,N} where {E,N} similar(T,tfill(0,Val(N))...) end +# When the jacobian of a residual is obtained through automatic differentiation, +# the return type is BlockArray{<:SubArray} and the behaviour of testvalue +# does not allow broadcasting operations between BlockArray{<:AbstractMatrix} +# and BlockArray{<:SubArray}. This function returns a matrix of size a +# P-dimensional array where each dimension has length 0, i.e., (0, ..., 0). +function testvalue(::Type{<:SubArray{T,P,AT}}) where {T,P,AT} + a = testvalue(AT) + return SubArray(a, ntuple(_ -> 0:-1, P)) +end + function testvalue(::Type{T}) where T<:Transpose{E,A} where {E,A} a = testvalue(A) Transpose(a) @@ -272,5 +282,3 @@ function test_array( end true end - - diff --git a/src/Exports.jl b/src/Exports.jl index 9ccbbd189..782625bba 100644 --- a/src/Exports.jl +++ b/src/Exports.jl @@ -182,7 +182,25 @@ using Gridap.CellData: ∫; export ∫ @publish Visualization createpvd @publish Visualization savepvd -include("ODEs/Exports.jl") +@publish ODEs ∂t +@publish ODEs ∂tt +@publish ODEs ForwardEuler +@publish ODEs ThetaMethod +@publish ODEs MidPoint +@publish ODEs BackwardEuler +@publish ODEs GeneralizedAlpha1 +@publish ODEs ButcherTableau +@publish ODEs available_tableaus +@publish ODEs RungeKutta +# @publish ODEs GeneralizedAlpha2 +# @publish ODEs Newmark +@publish ODEs TransientTrialFESpace +@publish ODEs TransientMultiFieldFESpace +@publish ODEs TransientFEOperator +@publish ODEs TransientIMEXFEOperator +@publish ODEs TransientSemilinearFEOperator +@publish ODEs TransientQuasilinearFEOperator +@publish ODEs TransientLinearFEOperator # Deprecated / removed diff --git a/src/MultiField/MultiFieldCellFields.jl b/src/MultiField/MultiFieldCellFields.jl index 117983e5f..ff2378fc0 100644 --- a/src/MultiField/MultiFieldCellFields.jl +++ b/src/MultiField/MultiFieldCellFields.jl @@ -39,3 +39,4 @@ num_fields(a::MultiFieldCellField) = length(a.single_fields) Base.getindex(a::MultiFieldCellField,i::Integer) = a.single_fields[i] Base.iterate(a::MultiFieldCellField) = iterate(a.single_fields) Base.iterate(a::MultiFieldCellField,state) = iterate(a.single_fields,state) +Base.length(a::MultiFieldCellField) = num_fields(a) diff --git a/src/MultiField/MultiFieldFEFunctions.jl b/src/MultiField/MultiFieldFEFunctions.jl index 49bd67e28..6578e8fdc 100644 --- a/src/MultiField/MultiFieldFEFunctions.jl +++ b/src/MultiField/MultiFieldFEFunctions.jl @@ -67,3 +67,4 @@ num_fields(m::MultiFieldFEFunction) = length(m.single_fe_functions) Base.iterate(m::MultiFieldFEFunction) = iterate(m.single_fe_functions) 
Base.iterate(m::MultiFieldFEFunction,state) = iterate(m.single_fe_functions,state) Base.getindex(m::MultiFieldFEFunction,field_id::Integer) = m.single_fe_functions[field_id] +Base.length(m::MultiFieldFEFunction) = num_fields(m) diff --git a/src/ODEs/Exports.jl b/src/ODEs/Exports.jl deleted file mode 100644 index e8b15158d..000000000 --- a/src/ODEs/Exports.jl +++ /dev/null @@ -1,30 +0,0 @@ -macro publish_gridapodes(mod,name) - quote - using Gridap.ODEs.$mod: $name; export $name - end -end - -# Mostly used from ODETools -@publish_gridapodes ODETools BackwardEuler -@publish_gridapodes ODETools ForwardEuler -@publish_gridapodes ODETools MidPoint -@publish_gridapodes ODETools ThetaMethod -@publish_gridapodes ODETools RungeKutta -@publish_gridapodes ODETools IMEXRungeKutta -@publish_gridapodes ODETools EXRungeKutta -@publish_gridapodes ODETools Newmark -@publish_gridapodes ODETools GeneralizedAlpha -@publish_gridapodes ODETools ∂t -@publish_gridapodes ODETools ∂tt - -# Mostly used from TransientFETools -@publish_gridapodes TransientFETools TransientTrialFESpace -@publish_gridapodes TransientFETools TransientMultiFieldTrialFESpace -@publish_gridapodes TransientFETools TransientMultiFieldFESpace -@publish_gridapodes TransientFETools TransientFEOperator -@publish_gridapodes TransientFETools TransientAffineFEOperator -@publish_gridapodes TransientFETools TransientConstantFEOperator -@publish_gridapodes TransientFETools TransientConstantMatrixFEOperator -@publish_gridapodes TransientFETools TransientRungeKuttaFEOperator -@publish_gridapodes TransientFETools TransientIMEXRungeKuttaFEOperator -@publish_gridapodes TransientFETools TransientEXRungeKuttaFEOperator diff --git a/src/ODEs/ODEOperators.jl b/src/ODEs/ODEOperators.jl new file mode 100644 index 000000000..4401987df --- /dev/null +++ b/src/ODEs/ODEOperators.jl @@ -0,0 +1,649 @@ +################### +# ODEOperatorType # +################### +""" + abstract type ODEOperatorType <: GridapType end + +Trait that indicates the linearity type of an ODE operator. +""" +abstract type ODEOperatorType <: GridapType end +struct NonlinearODE <: ODEOperatorType end + +""" + abstract type AbstractQuasilinearODE <: ODEOperatorType end + +ODE operator whose residual is linear with respect to the highest-order time +derivative, i.e. +```math +residual(t, ∂t^0[u], ..., ∂t^N[u]) = mass(t, ∂t^0[u], ..., ∂t^(N-1)[u]) ∂t^N[u] + + res(t, ∂t^0[u], ..., ∂t^(N-1)[u]), +``` +where `N` is the order of the ODE operator, `∂t^k[u]` is the `k`-th-order time +derivative of `u`, and both `mass` and `res` have order `N-1`. +""" +abstract type AbstractQuasilinearODE <: ODEOperatorType end +struct QuasilinearODE <: AbstractQuasilinearODE end + +""" + abstract type AbstractSemilinearODE <: AbstractQuasilinearODE end + +ODE operator whose residual is linear with respect to the highest-order time +derivative, and whose mass matrix only depend on time, i.e. +```math +residual(t, ∂t^0[u], ..., ∂t^N[u]) = mass(t) ∂t^N[u] + + res(t, ∂t^0[u], ..., ∂t^(N-1)[u]), +``` +where `N` is the order of the ODE operator, `∂t^k[u]` is the `k`-th-order time +derivative of `u`, `mass` is independent of `u` and `res` has order `N-1`. +""" +abstract type AbstractSemilinearODE <: AbstractQuasilinearODE end +struct SemilinearODE <: AbstractSemilinearODE end + +""" + abstract type AbstractLinearODE <: AbstractSemilinearODE end + +ODE operator whose residual is linear with respect to all time derivatives, i.e. 
+```math +residual(t, ∂t^0[u], ..., ∂t^N[u]) = ∑_{0 ≤ k ≤ N} A_k(t) ∂t^k[u] + res(t), +``` +where `N` is the order of the ODE operator, and `∂t^k[u]` is the `k`-th-order +time derivative of `u`. +""" +abstract type AbstractLinearODE <: AbstractSemilinearODE end +struct LinearODE <: AbstractLinearODE end + +################ +# IMEX Helpers # +################ +""" + check_imex_compatibility(im_order::Integer, ex_order::Integer) -> Bool + +Check whether two operators can make a valid IMEX operator decomposition. This +function should be called in the constructors of concrete IMEX operators. +""" +function check_imex_compatibility(im_order::Integer, ex_order::Integer) + msg = """ + The explicit operator of an IMEX operator decomposition must have one order + less than the implicit operator. + """ + @assert (im_order == ex_order + 1) msg +end + +""" + IMEXODEOperatorType( + T_im::Type{<:ODEOperatorType}, + T_ex::Type{<:ODEOperatorType} + ) -> ODEOperatorType + +Return the `ODEOperatorType` of the operator defined by an IMEX decomposition. +This function should be called in the constructors of concrete IMEX operators. +""" +function IMEXODEOperatorType( + T_im::Type{<:ODEOperatorType}, + T_ex::Type{<:ODEOperatorType} +) + T_im +end + +function IMEXODEOperatorType( + T_im::Type{<:AbstractLinearODE}, + T_ex::Type{<:ODEOperatorType} +) + SemilinearODE +end + +# We should theoretically dispatch on T_ex <: AbstractQuasilinearODE because +# in that case we can write the decomposition as +# im_A_N(t) ∂t^N[u] +# + [im_A_(N-1)(t) + ex_mass(t, ∂t^0[u], ..., ∂t^(N-2)[u])] ∂t^(N-1)[u] +# + ∑_{0 ≤ k ≤ N-2} im_A_k(t) ∂t^k[u] + im_res(t) + ex_res(t, ∂t^0[u], ..., ∂t^(N-1)[u]) +# so we can identify two linear forms corresponding to the two highest-order +# time derivatives, and then the rest of the residual. We decide to still +# define the global operator as semilinear for the following reasons: +# * For a first-order ODE, the explicit part has order zero, so the definitions +# of quasilinear, semilinear and linear coincide. This will default to the +# case below. This means there is only a special case when the residual has +# order two or higher. +# * We would need to have a new type when we can identify two linear forms +# corresponding to the two highest-order time derivatives. This would +# recursively force us to create order-dependent linearity types based on how +# many linear forms have been identified. +# * This distinction is not common in the litterature and indeed there does not +# seem to exist ODE solvers that take advantage of this kind of multi-form +# operator decomposition. + +function IMEXODEOperatorType( + T_im::Type{<:AbstractLinearODE}, + T_ex::Type{<:AbstractLinearODE} +) + T_im +end + +############### +# ODEOperator # +############### +""" + abstract type ODEOperator <: GridapType end + +General implicit, nonlinear ODE operator defined by a residual of the form +```math +residual(t, ∂t^0[u], ..., ∂t^N[u]) = 0, +``` +where `N` is the order of the ODE operator and `∂t^k[u]` is the `k`-th-order +time derivative of `u`. 
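+
+For illustration only, a minimal first-order operator with constant dense mass
+`M`, stiffness `K` and a forcing `f(t)` returning a vector could implement the
+mandatory methods listed below along the following lines
+(`ToyLinearODEOperator` is a hypothetical name, not part of the library):
+```julia
+struct ToyLinearODEOperator <: ODEOperator{NonlinearODE}
+  M::Matrix{Float64}
+  K::Matrix{Float64}
+  f::Function
+end
+
+Polynomials.get_order(odeop::ToyLinearODEOperator) = 1
+
+function Algebra.allocate_residual(
+  odeop::ToyLinearODEOperator,
+  t::Real, us::Tuple{Vararg{AbstractVector}}, odeopcache
+)
+  zeros(size(odeop.M, 1))
+end
+
+function Algebra.residual!(
+  r::AbstractVector, odeop::ToyLinearODEOperator,
+  t::Real, us::Tuple{Vararg{AbstractVector}}, odeopcache; add::Bool=false
+)
+  u, v = us  # v is the first-order time derivative of u
+  add || fill!(r, zero(eltype(r)))
+  r .+= odeop.M * v .+ odeop.K * u .- odeop.f(t)
+  r
+end
+
+function Algebra.allocate_jacobian(
+  odeop::ToyLinearODEOperator,
+  t::Real, us::Tuple{Vararg{AbstractVector}}, odeopcache
+)
+  zeros(size(odeop.M))
+end
+
+function jacobian_add!(
+  J::AbstractMatrix, odeop::ToyLinearODEOperator,
+  t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, odeopcache
+)
+  # ws[1] weights ∂residual/∂u and ws[2] weights ∂residual/∂v
+  J .+= ws[1] .* odeop.K .+ ws[2] .* odeop.M
+  J
+end
+```
+The sketch is declared `NonlinearODE` only to keep it short: declaring it
+`LinearODE` would additionally require `get_forms`. In practice, ODE operators
+are usually obtained from a `TransientFEOperator` through `ODEOpFromTFEOp`
+rather than written by hand.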
+ +# Mandatory +- [`get_order(odeop)`](@ref) +- [`get_forms(odeop)`](@ref) +- [`allocate_residual(odeop, t, us, odeopcache)`](@ref) +- [`residual!(r, odeop, t, us, odeopcache; add::Bool)`](@ref) +- [`allocate_jacobian(odeop, t, us, odeopcache)`](@ref) +- [`jacobian_add!(J, odeop, t, us, ws, odeopcache)`](@ref) + +# Optional +- [`get_num_forms(odeop)`](@ref) +- [`is_form_constant(odeop, k)`](@ref) +- [`allocate_odeopcache(odeop, t, us)`](@ref) +- [`update_odeopcache!(odeopcache, odeop, t)`](@ref) +- [`residual(odeop, t, us, odeopcache)`](@ref) +- [`jacobian!(odeop, t, us, ws, odeopcache)`](@ref) +- [`jacobian(odeop, t, us, ws, odeopcache)`](@ref) +""" +abstract type ODEOperator{T<:ODEOperatorType} <: GridapType end + +""" + ODEOperatorType(odeop::ODEOperator) -> ODEOperatorType + +Return the `ODEOperatorType` of the `ODEOperator`. +""" +ODEOperatorType(::ODEOperator{T}) where {T} = T +ODEOperatorType(::Type{<:ODEOperator{T}}) where {T} = T + +""" + get_order(odeop::ODEOperator) -> Integer + +Return the order of the `ODEOperator`. +""" +function Polynomials.get_order(odeop::ODEOperator) + @abstractmethod +end + +""" + get_num_forms(odeop::ODEOperator) -> Integer + +Return the number of linear forms of the `ODEOperator`. See [`get_forms`](@ref) +""" +function get_num_forms(odeop::ODEOperator) + 0 +end + +function get_num_forms(odeop::ODEOperator{<:AbstractQuasilinearODE}) + 1 +end + +function get_num_forms(odeop::ODEOperator{<:AbstractLinearODE}) + get_order(odeop) + 1 +end + +""" + get_forms(odeop::ODEOperator) -> Tuple{Vararg{Function}} + +Return the linear forms of the `ODEOperator`: +* For a general ODE operator, return an empty tuple, +* For a quasilinear ODE operator, return a tuple with the mass matrix, +* For a linear ODE operator, return all the linear forms. +""" +function get_forms(odeop::ODEOperator) + () +end + +function get_forms(odeop::ODEOperator{<:AbstractQuasilinearODE}) + @abstractmethod +end + +""" + is_form_constant(odeop::ODEOperator, k::Integer) -> Bool + +Indicate whether the linear form of the `ODEOperator` corresponding to the +`k`-th-order time derivative of `u` is constant with respect to `t`. +""" +function is_form_constant(odeop::ODEOperator, k::Integer) + false +end + +""" + allocate_odeopcache( + odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, args... + ) -> CacheType + +Allocate the cache required by the `ODEOperator`. +""" +function allocate_odeopcache( + odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, args... +) + nothing +end + +""" + update_odeopcache!(odeopcache, odeop::ODEOperator, t::Real, args...) -> CacheType + +Update the cache of the `ODEOperator`. +""" +function update_odeopcache!(odeopcache, odeop::ODEOperator, t::Real, args...) + odeopcache +end + +""" + allocate_residual( + odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache + ) -> AbstractVector + +Allocate a residual vector for the `ODEOperator`. +""" +function Algebra.allocate_residual( + odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache +) + @abstractmethod +end + +""" + residual!( + r::AbstractVector, odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache; add::Bool=false + ) -> AbstractVector + +Compute the residual of the `ODEOperator`. If `add` is true, this function adds +to `r` instead of erasing it. 
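+
+When `add` is `true`, the new contributions are accumulated on top of the
+current content of `r`. This is how `IMEXODEOperator` (defined below) sums the
+residuals of its implicit and explicit parts into a single vector.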
+""" +function Algebra.residual!( + r::AbstractVector, odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache; add::Bool=false +) + @abstractmethod +end + +""" + residual( + odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache + ) -> AbstractVector + +Allocate a vector and evaluate the residual of the `ODEOperator`. +""" +function Algebra.residual( + odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache +) + r = allocate_residual(odeop, t, us, odeopcache) + residual!(r, odeop, t, us, odeopcache) + r +end + +""" + allocate_jacobian( + odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache + ) -> AbstractMatrix + +Allocate a jacobian matrix for the `ODEOperator`. +""" +function Algebra.allocate_jacobian( + odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache +) + @abstractmethod +end + +const jacobian_weights_order_msg = """ +The weights are ordered by increasing order of time derivative, i.e. the first +weight corresponds to `∂residual / ∂u` and the last to +`∂residual / ∂(d^N u / dt^N)`. +""" + +""" + jacobian_add!( + J::AbstractMatrix, odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, + odeopcache + ) -> AbstractMatrix + +Add the jacobian of the residual of the `ODEOperator` with respect to all time +derivatives, weighted by some factors `ws`. + +$(jacobian_weights_order_msg) +""" +function jacobian_add!( + J::AbstractMatrix, odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, + odeopcache +) + @abstractmethod +end + +""" + jacobian!( + J::AbstractMatrix, odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, + odeopcache + ) -> AbstractMatrix + +Compute the jacobian of the residual of the `ODEOperator` with respect to all +time derivatives, weighted by some factors `ws`. + +$(jacobian_weights_order_msg) +""" +function Algebra.jacobian!( + J::AbstractMatrix, odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, + odeopcache +) + fillstored!(J, zero(eltype(J))) + jacobian_add!(J, odeop, t, us, ws, odeopcache) + J +end + +""" + jacobian( + odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, + odeopcache + ) -> AbstractMatrix + +Allocate a jacobian matrix for the `ODEOperator` and compute the jacobian of +the residual of the `ODEOperator` with respect to all time derivatives, +weighted by some factors `ws`. + +$(jacobian_weights_order_msg) +""" +function Algebra.jacobian( + odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, + odeopcache +) + J = allocate_jacobian(odeop, t, us, odeopcache) + jacobian!(J, odeop, t, us, ws, odeopcache) + J +end + +################### +# IMEXODEOperator # +################### +""" + abstract type IMEXODEOperator <: ODEOperator end + +Implicit-Explicit decomposition of a residual defining an `ODEOperator`: +```math +residual(t, ∂t^0[u], ..., ∂t^N[u]) = implicit_residual(t, ∂t^0[u], ..., ∂t^N[u]) + + explicit_residual(t, ∂t^0[u], ..., ∂t^(N-1)[u]), +``` +where +* The implicit operator defined by the implicit residual is considered stiff +and is meant to be solved implicitly, +* The explicit operator defined by the explicit residual is considered non-stiff +and is meant to be solved explicitly. 
+ +# Important +The explicit operator must have one order less than the implicit operator, so +that the mass term of the global operator is fully contained in the implicit +operator. + +# Mandatory +- [`get_imex_operators(odeop)`](@ref) +""" +abstract type IMEXODEOperator{T<:ODEOperatorType} <: ODEOperator{T} end + +# IMEX Helpers +function check_imex_compatibility(im_odeop::ODEOperator, ex_odeop::ODEOperator) + im_order, ex_order = get_order(im_odeop), get_order(ex_odeop) + check_imex_compatibility(im_order, ex_order) +end + +function IMEXODEOperatorType(im_odeop::ODEOperator, ex_odeop::ODEOperator) + T_im, T_ex = ODEOperatorType(im_odeop), ODEOperatorType(ex_odeop) + IMEXODEOperatorType(T_im, T_ex) +end + +# IMEXODEOperator interface +""" + get_imex_operators(odeop::IMEXODEOperator) -> (ODEOperator, ODEOperator) + +Return the implicit and explicit parts of the `IMEXODEOperator`. +""" +function get_imex_operators(odeop::IMEXODEOperator) + @abstractmethod +end + +# ODEOperator interface +function Polynomials.get_order(odeop::IMEXODEOperator) + im_odeop, _ = get_imex_operators(odeop) + get_order(im_odeop) +end + +function get_forms(odeop::IMEXODEOperator{<:AbstractQuasilinearODE}) + im_odeop, _ = get_imex_operators(odeop) + im_forms = get_forms(im_odeop) + (last(im_forms),) +end + +function get_forms(odeop::IMEXODEOperator{<:AbstractLinearODE}) + im_odeop, ex_odeop = get_imex_operators(odeop) + im_forms, ex_forms = get_forms(im_odeop), get_forms(ex_odeop) + forms = () + for (im_form, ex_form) in zip(im_forms, ex_forms) + form = (t, u) -> im_form(t, u) + ex_form(t, u) + forms = (forms..., form) + end + (forms..., last(im_forms)) +end + +function is_form_constant(odeop::IMEXODEOperator, k::Integer) + im_odeop, ex_odeop = get_imex_operators(odeop) + im_const = is_form_constant(im_odeop, k) + ex_const = true + if k < get_order(odeop) + ex_const = is_form_constant(ex_odeop, k) + end + im_const && ex_const +end + +function allocate_odeopcache( + odeop::IMEXODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, args... +) + im_us, ex_us = us, ntuple(i -> us[i], length(us) - 1) + im_odeop, ex_odeop = get_imex_operators(odeop) + im_odeopcache = allocate_odeopcache(im_odeop, t, im_us, args...) + ex_odeopcache = allocate_odeopcache(ex_odeop, t, ex_us, args...) + (im_odeopcache, ex_odeopcache) +end + +function update_odeopcache!( + odeopcache, odeop::IMEXODEOperator, + t::Real, args... +) + im_odeop, ex_odeop = get_imex_operators(odeop) + im_odeopcache, ex_odeopcache = odeopcache + update_odeopcache!(im_odeopcache, im_odeop, t, args...) + update_odeopcache!(ex_odeopcache, ex_odeop, t, args...) 
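+  # Return the updated caches, implicit part first and explicit part second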
+ (im_odeopcache, ex_odeopcache) +end + +function Algebra.allocate_residual( + odeop::IMEXODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache +) + im_us, ex_us = us, ntuple(i -> us[i], length(us) - 1) + im_odeop, ex_odeop = get_imex_operators(odeop) + im_odeopcache, ex_odeopcache = odeopcache + im_res = allocate_residual(im_odeop, t, im_us, im_odeopcache) + ex_res = allocate_residual(ex_odeop, t, ex_us, ex_odeopcache) + axpy!(1, ex_res, im_res) + im_res +end + +function Algebra.residual!( + r::AbstractVector, odeop::IMEXODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache; add::Bool=false +) + im_us, ex_us = us, ntuple(i -> us[i], length(us) - 1) + im_odeop, ex_odeop = get_imex_operators(odeop) + im_odeopcache, ex_odeopcache = odeopcache + residual!(r, im_odeop, t, im_us, im_odeopcache; add) + residual!(r, ex_odeop, t, ex_us, ex_odeopcache; add=true) + r +end + +function Algebra.allocate_jacobian( + odeop::IMEXODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache +) + im_us, ex_us = us, ntuple(i -> us[i], length(us) - 1) + im_odeop, ex_odeop = get_imex_operators(odeop) + im_odeopcache, ex_odeopcache = odeopcache + + # TODO Ideally, we want to allocate the jacobian matrix of both parts and sum them into + # a new sparse matrix that has the sparsity structure of the sum. This is not fully + # implemented for now. + # * When both parts come from a TransientFEOperator, we replicate the code of + # `allocate_jacobian` and simply merge the DomainContribution of both parts into a + # single DomainContribution. + # * Otherwise, for now, we allocate the two jacobians separately and add them. This will + # break if they do not have the same sparsity structure. + if im_odeop isa ODEOpFromTFEOp && ex_odeop isa ODEOpFromTFEOp + # Common + Ut = evaluate(get_trial(im_odeop.tfeop), nothing) + du = get_trial_fe_basis(Ut) + V = get_test(im_odeop.tfeop) + v = get_fe_basis(V) + assembler = get_assembler(im_odeop.tfeop) + dc = DomainContribution() + + # Implicit part + uh = _make_uh_from_us(im_odeop, us, im_odeopcache.Us) + jacs = get_jacs(im_odeop.tfeop) + for k in 0:get_order(im_odeop.tfeop) + jac = jacs[k+1] + dc = dc + jac(t, uh, du, v) + end + + # Explicit part + uh = _make_uh_from_us(ex_odeop, us, ex_odeopcache.Us) + jacs = get_jacs(ex_odeop.tfeop) + for k in 0:get_order(ex_odeop.tfeop) + jac = jacs[k+1] + dc = dc + jac(t, uh, du, v) + end + + matdata = collect_cell_matrix(Ut, V, dc) + allocate_matrix(assembler, matdata) + else + im_jac = allocate_jacobian(im_odeop, t, im_us, im_odeopcache) + ex_jac = allocate_jacobian(ex_odeop, t, ex_us, ex_odeopcache) + try + axpy_entries!(1, ex_jac, im_jac) + catch + msg = """ + You are trying to define an IMEX operator where the jacobian of the implicit and + explicit parts do not share the same sparsity structure. For now, this is only + implemented when the implicit and explicit operators are `TransientFEOperator`. 
+ """ + @error msg + end + im_jac + end +end + +function jacobian_add!( + J::AbstractMatrix, odeop::IMEXODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, + odeopcache +) + im_us, ex_us = us, ntuple(i -> us[i], length(us) - 1) + im_ws, ex_ws = ws, ntuple(i -> ws[i], length(ws) - 1) + im_odeop, ex_odeop = get_imex_operators(odeop) + im_odeopcache, ex_odeopcache = odeopcache + jacobian_add!(J, im_odeop, t, im_us, im_ws, im_odeopcache) + jacobian_add!(J, ex_odeop, t, ex_us, ex_ws, ex_odeopcache) + J +end + +########################## +# GenericIMEXODEOperator # +########################## +""" + struct GenericIMEXODEOperator <: IMEXODEOperator end + +Generic `IMEXODEOperator`. +""" +struct GenericIMEXODEOperator{T} <: IMEXODEOperator{T} + im_odeop::ODEOperator + ex_odeop::ODEOperator + + function GenericIMEXODEOperator(im_odeop::ODEOperator, ex_odeop::ODEOperator) + check_imex_compatibility(im_odeop, ex_odeop) + T = IMEXODEOperatorType(im_odeop, ex_odeop) + new{T}(im_odeop, ex_odeop) + end +end + +# Default constructor +function IMEXODEOperator(im_odeop::ODEOperator, ex_odeop::ODEOperator) + GenericIMEXODEOperator(im_odeop, ex_odeop) +end + +# IMEXODEOperator interface +function get_imex_operators(odeop::GenericIMEXODEOperator) + (odeop.im_odeop, odeop.ex_odeop) +end + +######## +# Test # +######## +""" + test_ode_operator( + odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, args... + ) -> Bool + +Test the interface of `ODEOperator` specializations. +""" +function test_ode_operator( + odeop::ODEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}}, args... +) + num_forms = get_num_forms(odeop) + for k in 0:num_forms-1 + @test is_form_constant(odeop, k) isa Bool + end + + odeopcache = allocate_odeopcache(odeop, t, us, args...) + odeopcache = update_odeopcache!(odeopcache, odeop, t, args...) + + r = allocate_residual(odeop, t, us, odeopcache) + @test r isa AbstractVector + + residual!(r, odeop, t, us, odeopcache) + + J = allocate_jacobian(odeop, t, us, odeopcache) + @assert J isa AbstractMatrix + + ws = ntuple(_ -> 1, get_order(odeop) + 1) + jacobian!(J, odeop, t, us, ws, odeopcache) + + true +end diff --git a/src/ODEs/ODEOpsFromTFEOps.jl b/src/ODEs/ODEOpsFromTFEOps.jl new file mode 100644 index 000000000..56bb775ec --- /dev/null +++ b/src/ODEs/ODEOpsFromTFEOps.jl @@ -0,0 +1,422 @@ +####################### +# ODEOpFromTFEOpCache # +####################### +""" + struct ODEOpFromTFEOpCache <: GridapType + +Structure that stores the `TransientFESpace` and cache of a +`TransientFEOperator`, as well as the jacobian matrices and residual if they +are constant. +""" +mutable struct ODEOpFromTFEOpCache <: GridapType + Us + Uts + tfeopcache + const_forms +end + +################## +# ODEOpFromTFEOp # +################## +""" + struct ODEOpFromTFEOp <: ODEOperator end + +Wrapper that transforms a `TransientFEOperator` into an `ODEOperator`, i.e. +takes `residual(t, uh, ∂t[uh], ..., ∂t^N[uh], vh)` and returns +`residual(t, us)`, where `us[k] = ∂t^k[us]` and `uf` represents the free values +of `uh`. +""" +struct ODEOpFromTFEOp{T} <: ODEOperator{T} + tfeop::TransientFEOperator{T} + + function ODEOpFromTFEOp(tfeop::TransientFEOperator{T}) where {T} + order = get_order(tfeop) + if order == 0 + is_quasilinear = T <: AbstractQuasilinearODE + is_linear = T <: AbstractLinearODE + if is_quasilinear && !is_linear + msg = """ + For an operator of order zero, the definitions of quasilinear, + semilinear and linear coincide. 
Make sure that you have defined the + transient FE operator as linear. + """ + @unreachable msg + else + new{T}(tfeop) + end + else + new{T}(tfeop) + end + end +end + +# ODEOperator interface +function Polynomials.get_order(odeop::ODEOpFromTFEOp) + get_order(odeop.tfeop) +end + +function get_num_forms(odeop::ODEOpFromTFEOp) + get_num_forms(odeop.tfeop) +end + +function get_forms(odeop::ODEOpFromTFEOp) + get_forms(odeop.tfeop) +end + +function is_form_constant(odeop::ODEOpFromTFEOp, k::Integer) + is_form_constant(odeop.tfeop, k) +end + +function allocate_odeopcache( + odeop::ODEOpFromTFEOp, + t::Real, us::Tuple{Vararg{AbstractVector}} +) + # Allocate FE spaces for derivatives + order = get_order(odeop) + Ut = get_trial(odeop.tfeop) + U = allocate_space(Ut) + Uts = (Ut,) + Us = (U,) + for k in 1:order + Uts = (Uts..., ∂t(Uts[k])) + Us = (Us..., allocate_space(Uts[k+1])) + end + + # Allocate the cache of the FE operator + tfeopcache = allocate_tfeopcache(odeop.tfeop, t, us) + + # Variables for assembly + uh = _make_uh_from_us(odeop, us, Us) + V = get_test(odeop.tfeop) + v = get_fe_basis(V) + Ut = evaluate(get_trial(odeop.tfeop), nothing) + du = get_trial_fe_basis(Ut) + assembler = get_assembler(odeop.tfeop) + + # Store the forms that are constant + const_forms = () + num_forms = get_num_forms(odeop.tfeop) + jacs = get_jacs(odeop.tfeop) + + # We want the stored jacobians to have the same sparsity as the full jacobian + # (when all orders are considered), so we start by allocating it and we will assemble + # the constant jacobians in a copy of the full jacobian + # We need a little workaround here since when the `ODEOperator` is quasilinear or + # semilinear but not linear, it has only one form but `order+1` jacobians. + dc = DomainContribution() + for k in 0:order + jac = jacs[k+1] + dc = dc + jac(t, uh, du, v) + end + matdata = collect_cell_matrix(Ut, V, dc) + J_full = allocate_matrix(assembler, matdata) + + odeoptype = ODEOperatorType(odeop) + if odeoptype <: AbstractLinearODE + for k in 0:num_forms-1 + const_form = nothing + if is_form_constant(odeop, k) + jac = jacs[k+1] + dc = jac(t, uh, du, v) + matdata = collect_cell_matrix(Ut, V, dc) + const_form = copy(J_full) + fillstored!(const_form, zero(eltype(const_form))) + assemble_matrix_add!(const_form, assembler, matdata) + end + const_forms = (const_forms..., const_form) + end + elseif odeoptype <: AbstractQuasilinearODE + const_form = nothing + k = order + if is_form_constant(odeop, k) + jac = jacs[k+1] + dc = jac(t, uh, du, v) + matdata = collect_cell_matrix(Ut, V, dc) + const_form = copy(J_full) + fillstored!(const_form, zero(eltype(const_form))) + assemble_matrix_add!(const_form, assembler, matdata) + end + const_forms = (const_forms..., const_form) + end + + ODEOpFromTFEOpCache(Us, Uts, tfeopcache, const_forms) +end + +function update_odeopcache!(odeopcache, odeop::ODEOpFromTFEOp, t::Real) + Us = () + for k in 0:get_order(odeop) + Us = (Us..., evaluate!(odeopcache.Us[k+1], odeopcache.Uts[k+1], t)) + end + odeopcache.Us = Us + + tfeopcache, tfeop = odeopcache.tfeopcache, odeop.tfeop + odeopcache.tfeopcache = update_tfeopcache!(tfeopcache, tfeop, t) + + odeopcache +end + +function Algebra.allocate_residual( + odeop::ODEOpFromTFEOp, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache +) + uh = _make_uh_from_us(odeop, us, odeopcache.Us) + V = get_test(odeop.tfeop) + v = get_fe_basis(V) + assembler = get_assembler(odeop.tfeop) + + res = get_res(odeop.tfeop) + vecdata = collect_cell_vector(V, res(t, uh, v)) + 
allocate_vector(assembler, vecdata) +end + +function Algebra.residual!( + r::AbstractVector, odeop::ODEOpFromTFEOp, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache; add::Bool=false +) + uh = _make_uh_from_us(odeop, us, odeopcache.Us) + V = get_test(odeop.tfeop) + v = get_fe_basis(V) + assembler = get_assembler(odeop.tfeop) + + !add && fill!(r, zero(eltype(r))) + + res = get_res(odeop.tfeop) + dc = res(t, uh, v) + vecdata = collect_cell_vector(V, dc) + assemble_vector_add!(r, assembler, vecdata) + + r +end + +function Algebra.residual!( + r::AbstractVector, odeop::ODEOpFromTFEOp{<:AbstractQuasilinearODE}, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache; add::Bool=false +) + uh = _make_uh_from_us(odeop, us, odeopcache.Us) + V = get_test(odeop.tfeop) + v = get_fe_basis(V) + assembler = get_assembler(odeop.tfeop) + + !add && fill!(r, zero(eltype(r))) + + # Residual + res = get_res(odeop.tfeop) + dc = res(t, uh, v) + + # Mass + order = get_order(odeop) + mass = get_forms(odeop.tfeop)[1] + ∂tNuh = ∂t(uh, Val(order)) + dc = dc + mass(t, uh, ∂tNuh, v) + + vecdata = collect_cell_vector(V, dc) + assemble_vector_add!(r, assembler, vecdata) + + r +end + +function Algebra.residual!( + r::AbstractVector, odeop::ODEOpFromTFEOp{<:AbstractSemilinearODE}, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache; add::Bool=false +) + uh = _make_uh_from_us(odeop, us, odeopcache.Us) + V = get_test(odeop.tfeop) + v = get_fe_basis(V) + assembler = get_assembler(odeop.tfeop) + + !add && fill!(r, zero(eltype(r))) + + # Residual + res = get_res(odeop.tfeop) + dc = res(t, uh, v) + + # Mass + order = get_order(odeop) + mass = get_forms(odeop.tfeop)[1] + ∂tNuh = ∂t(uh, Val(order)) + dc = dc + mass(t, ∂tNuh, v) + + vecdata = collect_cell_vector(V, dc) + assemble_vector_add!(r, assembler, vecdata) + + r +end + +function Algebra.residual!( + r::AbstractVector, odeop::ODEOpFromTFEOp{<:AbstractLinearODE}, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache; add::Bool=false +) + uh = _make_uh_from_us(odeop, us, odeopcache.Us) + V = get_test(odeop.tfeop) + v = get_fe_basis(V) + assembler = get_assembler(odeop.tfeop) + + !add && fill!(r, zero(eltype(r))) + + # Residual + res = get_res(odeop.tfeop) + dc = res(t, uh, v) + + # Forms + order = get_order(odeop) + forms = get_forms(odeop.tfeop) + ∂tkuh = uh + for k in 0:order + form = forms[k+1] + dc = dc + form(t, ∂tkuh, v) + if k < order + ∂tkuh = ∂t(∂tkuh) + end + end + + vecdata = collect_cell_vector(V, dc) + assemble_vector_add!(r, assembler, vecdata) + + r +end + +function Algebra.allocate_jacobian( + odeop::ODEOpFromTFEOp, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache +) + uh = _make_uh_from_us(odeop, us, odeopcache.Us) + Ut = evaluate(get_trial(odeop.tfeop), nothing) + du = get_trial_fe_basis(Ut) + V = get_test(odeop.tfeop) + v = get_fe_basis(V) + assembler = get_assembler(odeop.tfeop) + + jacs = get_jacs(odeop.tfeop) + dc = DomainContribution() + for k in 0:get_order(odeop.tfeop) + jac = jacs[k+1] + dc = dc + jac(t, uh, du, v) + end + matdata = collect_cell_matrix(Ut, V, dc) + allocate_matrix(assembler, matdata) +end + +function jacobian_add!( + J::AbstractMatrix, odeop::ODEOpFromTFEOp, + t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, + odeopcache +) + uh = _make_uh_from_us(odeop, us, odeopcache.Us) + Ut = evaluate(get_trial(odeop.tfeop), nothing) + du = get_trial_fe_basis(Ut) + V = get_test(odeop.tfeop) + v = get_fe_basis(V) + assembler = get_assembler(odeop.tfeop) + + jacs = get_jacs(odeop.tfeop) + dc = 
DomainContribution() + for k in 0:get_order(odeop) + w = ws[k+1] + iszero(w) && continue + jac = jacs[k+1] + dc = dc + w * jac(t, uh, du, v) + end + + if num_domains(dc) > 0 + matdata = collect_cell_matrix(Ut, V, dc) + assemble_matrix_add!(J, assembler, matdata) + end + + J +end + +function jacobian_add!( + J::AbstractMatrix, odeop::ODEOpFromTFEOp{<:AbstractQuasilinearODE}, + t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, + odeopcache +) + uh = _make_uh_from_us(odeop, us, odeopcache.Us) + Ut = evaluate(get_trial(odeop.tfeop), nothing) + du = get_trial_fe_basis(Ut) + V = get_test(odeop.tfeop) + v = get_fe_basis(V) + assembler = get_assembler(odeop.tfeop) + + order = get_order(odeop) + jacs = get_jacs(odeop.tfeop) + dc = DomainContribution() + for k in 0:order-1 + w = ws[k+1] + iszero(w) && continue + jac = jacs[k+1] + dc = dc + w * jac(t, uh, du, v) + end + + # Special case for the mass matrix + k = order + w = ws[k+1] + if !iszero(w) + if is_form_constant(odeop, k) + axpy_entries!(w, odeopcache.const_forms[1], J) + else + jac = jacs[k+1] + dc = dc + w * jac(t, uh, du, v) + end + end + + if num_domains(dc) > 0 + matdata = collect_cell_matrix(Ut, V, dc) + assemble_matrix_add!(J, assembler, matdata) + end + + J +end + +function jacobian_add!( + J::AbstractMatrix, odeop::ODEOpFromTFEOp{<:AbstractLinearODE}, + t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, + odeopcache +) + uh = _make_uh_from_us(odeop, us, odeopcache.Us) + Ut = evaluate(get_trial(odeop.tfeop), nothing) + du = get_trial_fe_basis(Ut) + V = get_test(odeop.tfeop) + v = get_fe_basis(V) + assembler = get_assembler(odeop.tfeop) + + jacs = get_jacs(odeop.tfeop) + dc = DomainContribution() + for k in 0:get_order(odeop) + w = ws[k+1] + iszero(w) && continue + if is_form_constant(odeop, k) + axpy_entries!(w, odeopcache.const_forms[k+1], J) + else + jac = jacs[k+1] + dc = dc + w * jac(t, uh, du, v) + end + end + + if num_domains(dc) > 0 + matdata = collect_cell_matrix(Ut, V, dc) + assemble_matrix_add!(J, assembler, matdata) + end + + J +end + +######### +# Utils # +######### +# NOTE it seems that EvaluationFunction could be replaced by FEFunction. There +# is only a difference between the two functions when the underlying FESpace +# is zero mean (EvaluationFunction does not constrain the DOFs) +function _make_uh_from_us(odeop, us, Us) + u = EvaluationFunction(Us[1], us[1]) + dus = () + for k in 1:get_order(odeop) + dus = (dus..., EvaluationFunction(Us[k+1], us[k+1])) + end + TransientCellField(u, dus) +end diff --git a/src/ODEs/ODESolutions.jl b/src/ODEs/ODESolutions.jl new file mode 100644 index 000000000..c268abed4 --- /dev/null +++ b/src/ODEs/ODESolutions.jl @@ -0,0 +1,164 @@ +############### +# ODESolution # +############### +""" + abstract type ODESolution <: GridapType end + +Wrapper around an `ODEOperator` and `ODESolver` that represents the solution at +a set of time steps. It is an iterator that computes the solution at each time +step in a lazy fashion when accessing the solution. + +# Mandatory +- [`Base.iterate(odesltn)`](@ref) +- [`Base.iterate(odesltn, state)`](@ref) +""" +abstract type ODESolution <: GridapType end + +""" + Base.iterate(odesltn::ODESolution) -> ((Real, AbstractVector), StateType) + +Allocate the operators and cache and perform one time step of the `ODEOperator` +with the `ODESolver` attached to the `ODESolution`. 
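+
+In practice `iterate` is rarely called by hand: an `ODESolution` is meant to be
+traversed with a `for` loop, where each iteration yields the current time and
+the free values of the solution, e.g.
+```julia
+for (t_n, u_n) in odesltn
+  # u_n is an AbstractVector with the solution at time t_n
+end
+```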
+""" +function Base.iterate(odesltn::ODESolution) + @abstractmethod +end + +""" + Base.iterate(odesltn::ODESolution) -> ((Real, AbstractVector), StateType) + +Perform one time step of the `ODEOperator` with the `ODESolver` attached to the +`ODESolution`. +""" +function Base.iterate(odesltn::ODESolution, state) + @abstractmethod +end + +Base.IteratorSize(::Type{<:ODESolution}) = Base.SizeUnknown() + +###################### +# GenericODESolution # +###################### +""" + struct GenericODESolution <: ODESolution end + +Generic wrapper for the evolution of an `ODEOperator` with an `ODESolver`. +""" +struct GenericODESolution <: ODESolution + odeslvr::ODESolver + odeop::ODEOperator + t0::Real + tF::Real + us0::Tuple{Vararg{AbstractVector}} +end + +function Base.iterate(odesltn::GenericODESolution) + odeslvr, odeop = odesltn.odeslvr, odesltn.odeop + t0, us0 = odesltn.t0, odesltn.us0 + + # Allocate cache + odecache = allocate_odecache(odeslvr, odeop, t0, us0) + + # Starting procedure + state0, odecache = ode_start( + odeslvr, odeop, + t0, us0, + odecache + ) + + # Marching procedure + stateF = copy.(state0) + tF, stateF, odecache = ode_march!( + stateF, + odeslvr, odeop, + t0, state0, + odecache + ) + + # Finishing procedure + uF = copy(first(us0)) + uF, odecache = ode_finish!( + uF, + odeslvr, odeop, + t0, tF, stateF, + odecache + ) + + # Update iterator + data = (tF, uF) + state = (tF, stateF, state0, uF, odecache) + (data, state) +end + +function Base.iterate(odesltn::GenericODESolution, state) + odeslvr, odeop = odesltn.odeslvr, odesltn.odeop + t0, state0, stateF, uF, odecache = state + + if t0 >= odesltn.tF - ε + return nothing + end + + # Marching procedure + tF, stateF, odecache = ode_march!( + stateF, + odeslvr, odeop, + t0, state0, + odecache + ) + + # Finishing procedure + uF, odecache = ode_finish!( + uF, + odeslvr, odeop, + t0, tF, stateF, + odecache + ) + + # Update iterator + data = (tF, uF) + state = (tF, stateF, state0, uF, odecache) + (data, state) +end + +############################## +# Default behaviour of solve # +############################## +""" + solve( + odeslvr::ODESolver, odeop::ODEOperator, + t0::Real, tF::Real, us0::Tuple{Vararg{AbstractVector}}, + ) -> ODESolution + +Create an `ODESolution` wrapper around the `ODEOperator` and `ODESolver`, +starting with state `us0` at time `t0`, to be evolved until `tF`. +""" +function Algebra.solve( + odeslvr::ODESolver, odeop::ODEOperator, + t0::Real, tF::Real, us0::Tuple{Vararg{AbstractVector}}, +) + GenericODESolution(odeslvr, odeop, t0, tF, us0) +end + +function Algebra.solve( + odeslvr::ODESolver, odeop::ODEOperator, + t0::Real, tF::Real, u0::AbstractVector, +) + us0 = (u0,) + solve(odeslvr, odeop, t0, tF, us0) +end + +######## +# Test # +######## +""" + test_ode_solution(odesltn::ODESolution) -> Bool + +Test the interface of `ODESolution` specializations. +""" +function test_ode_solution(odesltn::ODESolution) + for (t_n, us_n) in odesltn + @test t_n isa Real + @test us_n isa AbstractVector + end + true +end diff --git a/src/ODEs/ODESolvers.jl b/src/ODEs/ODESolvers.jl new file mode 100644 index 000000000..c17b9d52c --- /dev/null +++ b/src/ODEs/ODESolvers.jl @@ -0,0 +1,206 @@ +############# +# ODESolver # +############# +""" + abstract type ODESolver <: GridapType end + +An `ODESolver` is a map that update state vectors. These state vectors are +created at the first iteration from the initial conditions, and are then +converted back into the evaluation of the solution at the current time step. 
+ +In the simplest case, the state vectors correspond to the first `N-1` time +derivatives of `u` at time `t_n`, where `N` is the order of the `ODEOperator`, +but some solvers rely on other state variables (values at previous times, + higher-order derivatives...). + +# Mandatory +- [`allocate_odecache(odeslvr, odeop, t0, us0)`](@ref) +- [`ode_march!(stateF, odeslvr, odeop, t0, state0, odecache)`](@ref) + +# Optional +- [`ode_start(odeslvr, odeop, t0, us0, odecache)`](@ref) +- [`ode_finish!(uF, odeslvr, odeop, t0, tF, stateF, odecache)`](@ref) +""" +abstract type ODESolver <: GridapType end + +""" + allocate_odecache( + odeslvr::ODESolver, odeop::ODEOperator, + t0::Real, us0::Tuple{Vararg{AbstractVector}} + ) -> CacheType + +Allocate the cache of the `ODESolver` applied to the `ODEOperator`. +""" +function allocate_odecache( + odeslvr::ODESolver, odeop::ODEOperator, + t0::Real, us0::Tuple{Vararg{AbstractVector}} +) + @abstractmethod +end + +""" + ode_start( + odeslvr::ODESolver, odeop::ODEOperator, + t0::Real, us0::Tuple{Vararg{AbstractVector}}, + odecache + ) -> (Tuple{Vararg{AbstractVector}}, CacheType) + +Convert the initial conditions into state vectors. +""" +function ode_start( + odeslvr::ODESolver, odeop::ODEOperator, + t0::Real, us0::Tuple{Vararg{AbstractVector}}, + odecache +) + state0 = copy.(us0) + (state0, odecache) +end + +""" + ode_march!( + stateF::Tuple{Vararg{AbstractVector}}, + odeslvr::ODESolver, odeop::ODEOperator, + t0::Real, state0::Tuple{Vararg{AbstractVector}}, + odecache + ) -> (Real, Tuple{Vararg{AbstractVector}}, CacheType) + +March the state vector for one time step. +""" +function ode_march!( + stateF::Tuple{Vararg{AbstractVector}}, + odeslvr::ODESolver, odeop::ODEOperator, + t0::Real, state0::Tuple{Vararg{AbstractVector}}, + odecache +) + @abstractmethod +end + +""" + ode_finish!( + uF::AbstractVector, + odeslvr::ODESolver, odeop::ODEOperator, + t0::Real, tF, stateF::Tuple{Vararg{AbstractVector}}, + odecache + ) -> (AbstractVector, CacheType) + +Convert the state vectors into the evaluation of the solution of the ODE at the +current time. +""" +function ode_finish!( + uF::AbstractVector, + odeslvr::ODESolver, odeop::ODEOperator, + t0::Real, tF, stateF::Tuple{Vararg{AbstractVector}}, + odecache +) + copy!(uF, first(stateF)) + (uF, odecache) +end + +######## +# Test # +######## +""" + test_ode_solver( + odeslvr::ODESolver, odeop::ODEOperator, + t0::Real, us0::Tuple{Vararg{AbstractVector}} + ) -> Bool + +Test the interface of `ODESolver` specializations. 
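+
+The test runs the same start, march and finish sequence that
+`GenericODESolution` uses to advance the solution in time (see
+`ODESolutions.jl`).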
+""" +function test_ode_solver( + odeslvr::ODESolver, odeop::ODEOperator, + t0::Real, us0::Tuple{Vararg{AbstractVector}} +) + odecache = allocate_odecache(odeslvr, odeop, t0, us0) + + # Starting procedure + state0, odecache = ode_start( + odeslvr, odeop, + t0, us0, + odecache + ) + @test state0 isa Tuple{Vararg{AbstractVector}} + + # Marching procedure + stateF = copy.(state0) + tF, stateF, odecache = ode_march!( + stateF, + odeslvr, odeop, + t0, state0, + odecache + ) + @test tF isa Real + @test stateF isa Tuple{Vararg{AbstractVector}} + + # Finishing procedure + uF = copy(first(us0)) + uF, odecache = ode_finish!( + uF, + odeslvr, odeop, + t0, tF, stateF, + odecache + ) + @test uF isa AbstractVector + + true +end + +################## +# Import solvers # +################## +# First-order +include("ODESolvers/ForwardEuler.jl") + +include("ODESolvers/ThetaMethod.jl") + +include("ODESolvers/GeneralizedAlpha1.jl") + +include("ODESolvers/Tableaus.jl") + +include("ODESolvers/RungeKuttaEX.jl") + +include("ODESolvers/RungeKuttaDIM.jl") + +include("ODESolvers/RungeKuttaIMEX.jl") + +# Second-order +include("ODESolvers/GeneralizedAlpha2.jl") + +######### +# Utils # +######### +function _setindex_all!(a::CompressedArray, v, i::Integer) + # This is a straightforward implementation of setindex! for `CompressedArray` + # when we want to update the value associated to all pointers currently + # pointing to the same value + idx = a.ptrs[i] + a.values[idx] = v + a +end + +function RungeKutta( + sysslvr_nl::NonlinearSolver, sysslvr_l::NonlinearSolver, + dt::Real, tableau::AbstractTableau +) + type = TableauType(tableau) + if type == ExplicitTableau + EXRungeKutta(sysslvr_nl, dt, tableau) + elseif type == DiagonallyImplicitTableau + DIMRungeKutta(sysslvr_nl, sysslvr_l, dt, tableau) + elseif type == ImplicitExplicitTableau + IMEXRungeKutta(sysslvr_nl, sysslvr_l, dt, tableau) + # elseif type == FullyImplicitTableau + # FIMRungeKutta(sysslvr_nl, sysslvr_l, dt, tableau) + end +end + +function RungeKutta( + sysslvr_nl::NonlinearSolver, sysslvr_l::NonlinearSolver, + dt::Real, name::Symbol +) + RungeKutta(sysslvr_nl, sysslvr_l, dt, ButcherTableau(name)) +end + +function RungeKutta(sysslvr_nl::NonlinearSolver, dt::Real, tableau) + RungeKutta(sysslvr_nl, sysslvr_nl, dt, name) +end diff --git a/src/ODEs/ODESolvers/ForwardEuler.jl b/src/ODEs/ODESolvers/ForwardEuler.jl new file mode 100644 index 000000000..443814684 --- /dev/null +++ b/src/ODEs/ODESolvers/ForwardEuler.jl @@ -0,0 +1,164 @@ +""" + struct ForwardEuler <: ODESolver end + +Forward Euler ODE solver. +```math +residual(tx, ux, vx) = 0, + +tx = t_n +ux = u_n +vx = x, + +u_(n+1) = u_n + dt * x. 
+``` +""" +struct ForwardEuler <: ODESolver + sysslvr::NonlinearSolver + dt::Real +end + +################## +# Nonlinear case # +################## +function allocate_odecache( + odeslvr::ForwardEuler, odeop::ODEOperator, + t0::Real, us0::NTuple{1,AbstractVector} +) + u0 = us0[1] + us0N = (u0, u0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + sysslvrcache = nothing + odeslvrcache = (sysslvrcache,) + + (odeslvrcache, odeopcache) +end + +function ode_march!( + stateF::NTuple{1,AbstractVector}, + odeslvr::ForwardEuler, odeop::ODEOperator, + t0::Real, state0::NTuple{1,AbstractVector}, + odecache +) + # Unpack inputs + u0 = state0[1] + odeslvrcache, odeopcache = odecache + sysslvrcache, = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + dt = odeslvr.dt + + # Define scheme + x = stateF[1] + tx = t0 + usx(x) = (u0, x) + ws = (0, 1) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + stageop = NonlinearStageOperator( + odeop, odeopcache, + tx, usx, ws + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + + # Update state + tF = t0 + dt + stateF = _update_euler!(stateF, state0, dt, x) + + # Pack outputs + odeslvrcache = (sysslvrcache,) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +############### +# Linear case # +############### +function allocate_odecache( + odeslvr::ForwardEuler, odeop::ODEOperator{<:AbstractQuasilinearODE}, + t0::Real, us0::NTuple{1,AbstractVector} +) + u0 = us0[1] + us0N = (u0, u0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + is_semilinear = (ODEOperatorType(odeop) <: AbstractSemilinearODE) + constant_mass = is_form_constant(odeop, 1) + reuse = (is_semilinear && constant_mass) + + J = allocate_jacobian(odeop, t0, us0N, odeopcache) + r = allocate_residual(odeop, t0, us0N, odeopcache) + + sysslvrcache = nothing + odeslvrcache = (reuse, J, r, sysslvrcache) + + (odeslvrcache, odeopcache) +end + +function ode_march!( + stateF::NTuple{1,AbstractVector}, + odeslvr::ForwardEuler, odeop::ODEOperator{<:AbstractQuasilinearODE}, + t0::Real, state0::NTuple{1,AbstractVector}, + odecache +) + # Unpack inputs + u0 = state0[1] + odeslvrcache, odeopcache = odecache + reuse, J, r, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + dt = odeslvr.dt + + # Define scheme + # Set x to zero to split jacobian and residual + x = stateF[1] + fill!(x, zero(eltype(x))) + tx = t0 + usx = (u0, x) + ws = (0, 1) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + stageop = LinearStageOperator( + odeop, odeopcache, + tx, usx, ws, + J, r, reuse, sysslvrcache + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + + # Update state + tF = t0 + dt + stateF = _update_euler!(stateF, state0, dt, x) + + # Pack outputs + odeslvrcache = (reuse, J, r, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +######### +# Utils # +######### +function _update_euler!( + stateF::NTuple{1,AbstractVector}, state0::NTuple{1,AbstractVector}, + dt::Real, x::AbstractVector +) + # uF = u0 + dt * x + # We always have x === uF + u0 = state0[1] + uF = stateF[1] + + rmul!(uF, dt) + axpy!(1, u0, uF) + + (uF,) +end diff --git a/src/ODEs/ODESolvers/GeneralizedAlpha1.jl b/src/ODEs/ODESolvers/GeneralizedAlpha1.jl new file mode 100644 index 000000000..96dd235ef --- /dev/null +++ b/src/ODEs/ODESolvers/GeneralizedAlpha1.jl @@ -0,0 +1,298 @@ +""" + struct GeneralizedAlpha1 <: 
ODESolver + +Generalized-α first-order ODE solver. +```math +residual(tx, ux, vx) = 0, + +tx = (1 - αf) * t_n + αf * t_(n+1) +ux = (1 - αf) * u_n + αf * u_(n+1) +vx = (1 - αm) * v_n + αm * v_(n+1), + +u_(n+1) = u_n + dt * ((1 - γ) * v_n + γ * x) +v_(n+1) = x. +``` +""" +struct GeneralizedAlpha1 <: ODESolver + sysslvr::NonlinearSolver + dt::Real + αf::Real + αm::Real + γ::Real +end + +# Constructors +function GeneralizedAlpha1( + sysslvr::NonlinearSolver, + dt::Real, ρ∞::Real +) + ρ∞01 = clamp(ρ∞, 0, 1) + if ρ∞01 != ρ∞ + msg = """ + The parameter ρ∞ of the generalized-α scheme must lie between zero and one. + Setting ρ∞ to $(ρ∞01). + """ + @warn msg + ρ∞ = ρ∞01 + end + + αf = 1 / (1 + ρ∞) + αm = (3 - ρ∞) / (1 + ρ∞) / 2 + γ = 1 / 2 + αm - αf + + GeneralizedAlpha1(sysslvr, dt, αf, αm, γ) +end + +################## +# Nonlinear case # +################## +function allocate_odecache( + odeslvr::GeneralizedAlpha1, odeop::ODEOperator, + t0::Real, us0::NTuple{1,AbstractVector} +) + u0 = us0[1] + us0N = (u0, u0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + uα, vα = copy(u0), copy(u0) + + sysslvrcache = nothing + odeslvrcache = (uα, vα, sysslvrcache) + + (odeslvrcache, odeopcache) +end + +function ode_start( + odeslvr::GeneralizedAlpha1, odeop::ODEOperator, + t0::Real, us0::NTuple{1,AbstractVector}, + odecache +) + # Unpack inputs + u0 = us0[1] + odeslvrcache, odeopcache = odecache + uα, vα, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + + # Allocate state + s0, s1 = copy(u0), copy(u0) + + # Define scheme + x = s1 + tx = t0 + usx(x) = (u0, x) + ws = (0, 1) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + stageop = NonlinearStageOperator( + odeop, odeopcache, + tx, usx, ws + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + + # Update state + state0 = (s0, s1) + + # Pack outputs + odeslvrcache = (uα, vα, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (state0, odecache) +end + +function ode_march!( + stateF::NTuple{2,AbstractVector}, + odeslvr::GeneralizedAlpha1, odeop::ODEOperator, + t0::Real, state0::NTuple{2,AbstractVector}, + odecache +) + # Unpack inputs + u0, v0 = state0[1], state0[2] + odeslvrcache, odeopcache = odecache + uα, vα, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + dt, αf, αm, γ = odeslvr.dt, odeslvr.αf, odeslvr.αm, odeslvr.γ + + # Define scheme + x = stateF[2] + tx = t0 + αf * dt + function usx(x) + # uα = u0 + αf * dt * [(1 - γ) * v0 + γ * x] + copy!(uα, u0) + axpy!(αf * (1 - γ) * dt, v0, uα) + axpy!(αf * γ * dt, x, uα) + + # vα = (1 - αm) * v0 + αm * x + copy!(vα, v0) + rmul!(vα, 1 - αm) + axpy!(αm, x, vα) + + (uα, vα) + end + ws = (αf * γ * dt, αm) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + stageop = NonlinearStageOperator( + odeop, odeopcache, + tx, usx, ws + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + + # Update state + tF = t0 + dt + stateF = _update_alpha1!(stateF, state0, dt, x, γ) + + # Pack outputs + odeslvrcache = (uα, vα, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +############### +# Linear case # +############### +function allocate_odecache( + odeslvr::GeneralizedAlpha1, odeop::ODEOperator{<:AbstractLinearODE}, + t0::Real, us0::NTuple{1,AbstractVector} +) + u0 = us0[1] + us0N = (u0, u0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + uα, vα = zero(u0), zero(u0) + + 
constant_stiffness = is_form_constant(odeop, 0) + constant_mass = is_form_constant(odeop, 1) + reuse = (constant_stiffness && constant_mass) + + J = allocate_jacobian(odeop, t0, us0N, odeopcache) + r = allocate_residual(odeop, t0, us0N, odeopcache) + + sysslvrcache = nothing + odeslvrcache = (reuse, uα, vα, J, r, sysslvrcache) + + (odeslvrcache, odeopcache) +end + +function ode_start( + odeslvr::GeneralizedAlpha1, odeop::ODEOperator{<:AbstractLinearODE}, + t0::Real, us0::NTuple{1,AbstractVector}, + odecache +) + # Unpack inputs + u0 = us0[1] + odeslvrcache, odeopcache = odecache + reuse, uα, vα, J, r, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + + # Allocate state + s0, s1 = copy(u0), copy(u0) + + # Define scheme + # Set x to zero to split jacobian and residual + x = s1 + fill!(x, zero(eltype(x))) + tx = t0 + usx = (u0, x) + ws = (0, 1) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + stageop = LinearStageOperator( + odeop, odeopcache, + tx, usx, ws, + J, r, false, sysslvrcache + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + + # Update state + state0 = (s0, s1) + + # Pack outputs + odeslvrcache = (reuse, uα, vα, J, r, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (state0, odecache) +end + +function ode_march!( + stateF::NTuple{2,AbstractVector}, + odeslvr::GeneralizedAlpha1, odeop::ODEOperator{<:AbstractLinearODE}, + t0::Real, state0::NTuple{2,AbstractVector}, + odecache +) + # Unpack inputs + u0, v0 = state0[1], state0[2] + odeslvrcache, odeopcache = odecache + reuse, uα, vα, J, r, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + dt, αf, αm, γ = odeslvr.dt, odeslvr.αf, odeslvr.αm, odeslvr.γ + + # Define scheme + dtα = αf * dt + tx = t0 + dtα + x = stateF[2] + copy!(uα, u0) + axpy!(αf * (1 - γ) * dt, v0, uα) + copy!(vα, v0) + rmul!(vα, 1 - αm) + usx = (uα, vα) + ws = (αf * γ * dt, αm) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Solve the discrete ODE operator + stageop = LinearStageOperator( + odeop, odeopcache, + tx, usx, ws, + J, r, reuse, sysslvrcache + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + + # Update state + tF = t0 + dt + stateF = _update_alpha1!(stateF, state0, dt, x, γ) + + # Pack outputs + odeslvrcache = (reuse, uα, vα, J, r, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +######### +# Utils # +######### +function _update_alpha1!( + stateF::NTuple{2,AbstractVector}, state0::NTuple{2,AbstractVector}, + dt::Real, x::AbstractVector, γ::Real +) + # uF = u0 + dt * ((1 - γ) * v0 + γ * x) + # vF = x + # We always have x === vF + u0, v0 = state0[1], state0[2] + uF, vF = stateF[1], stateF[2] + + copy!(uF, u0) + axpy!((1 - γ) * dt, v0, uF) + axpy!(γ * dt, x, uF) + + (uF, vF) +end diff --git a/src/ODEs/ODESolvers/GeneralizedAlpha2.jl b/src/ODEs/ODESolvers/GeneralizedAlpha2.jl new file mode 100644 index 000000000..6d50a01a2 --- /dev/null +++ b/src/ODEs/ODESolvers/GeneralizedAlpha2.jl @@ -0,0 +1,334 @@ +""" + struct GeneralizedAlpha2 <: ODESolver + +Generalized-α second-order ODE solver. +```math +residual(tx, ux, vx, ax) = 0, + +tx = αf * t_n + (1 - αf) * t_(n+1) +ux = αf * u_n + (1 - αf) * u_(n+1) +vx = αf * v_n + (1 - αf) * v_(n+1) +ax = αm * a_n + (1 - αm) * a_(n+1), + +u_(n+1) = u_n + dt * v_n + dt^2 / 2 * ((1 - 2 * β) * a_n + 2 * β * x) +v_(n+1) = v_n + dt * ((1 - γ) * a_n + γ * x) +a_(n+1) = x. 
+``` +""" +struct GeneralizedAlpha2 <: ODESolver + sysslvr::NonlinearSolver + dt::Real + αf::Real + αm::Real + γ::Real + β::Real +end + +# Constructors +function GeneralizedAlpha2(sysslvr::NonlinearSolver, dt::Real, ρ∞::Real) + ρ∞01 = clamp(ρ∞, 0, 1) + if ρ∞01 != ρ∞ + msg = """ + The parameter ρ∞ of the generalized-α scheme must lie between zero and one. + Setting ρ∞ to $(ρ∞01). + """ + @warn msg + ρ∞ = ρ∞01 + end + + αf = ρ∞ / (1 + ρ∞) + αm = (2 * ρ∞ - 1) / (1 + ρ∞) + γ = 1 / 2 - αm + αf + β = (1 - αm + αf)^2 / 4 + GeneralizedAlpha2(sysslvr, dt, αf, αm, γ, β) +end + +function Newmark(sysslvr::NonlinearSolver, dt::Real, γ::Real, β::Real) + γ01 = clamp(γ, 0, 1) + if γ01 != γ + msg = """ + The parameter γ of the Newmark scheme must lie between zero and one. + Setting γ to $(γ01). + """ + @warn msg + γ = γ01 + end + + β01 = clamp(β, 0, 1) + if β01 != β + msg = """ + The parameter β of the Newmark scheme must lie between zero and one. + Setting β to $(β01). + """ + @warn msg + β = β01 + end + + αf, αm = 0.0, 0.0 + GeneralizedAlpha2(sysslvr, dt, αf, αm, γ, β) +end + +################## +# Nonlinear case # +################## +function allocate_odecache( + odeslvr::GeneralizedAlpha2, odeop::ODEOperator, + t0::Real, us0::NTuple{2,AbstractVector} +) + u0, v0 = us0[1], us0[2] + us0N = (u0, v0, v0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + uα, vα, aα = copy(u0), copy(v0), copy(v0) + + sysslvrcache = nothing + odeslvrcache = (uα, vα, aα, sysslvrcache) + + (odeslvrcache, odeopcache) +end + +function ode_start( + odeslvr::GeneralizedAlpha2, odeop::ODEOperator, + t0::Real, us0::NTuple{2,AbstractVector}, + odecache +) + # Unpack inputs + u0, v0 = us0[1], us0[2] + odeslvrcache, odeopcache = odecache + uα, vα, aα, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + + # Allocate state + s0, s1, s2 = copy(u0), copy(v0), copy(v0) + + # Define scheme + x = s2 + tx = t0 + usx(x) = (u0, v0, x) + ws = (0, 0, 1) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + stageop = NonlinearStageOperator( + odeop, odeopcache, + tx, usx, ws + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + + # Update state + state0 = (s0, s1, s2) + + # Pack outputs + odeslvrcache = (uα, vα, aα, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (state0, odecache) +end + +function ode_march!( + stateF::NTuple{3,AbstractVector}, + odeslvr::GeneralizedAlpha2, odeop::ODEOperator, + t0::Real, state0::NTuple{3,AbstractVector}, + odecache +) + # Unpack inputs + u0, v0, a0 = state0[1], state0[2], state0[3] + odeslvrcache, odeopcache = odecache + uα, vα, aα, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + dt, αf, αm, γ, β = odeslvr.dt, odeslvr.αf, odeslvr.αm, odeslvr.γ, odeslvr.β + + # Define scheme + tx = t0 + (1 - αf) * dt + x = stateF[3] + function usx(x) + copy!(uα, u0) + axpy!((1 - αf) * dt, v0, uα) + axpy!((1 - αf) * (1 - 2 * β) * dt^2 / 2, a0, uα) + axpy!((1 - αf) * β * dt^2, x, uα) + + copy!(vα, v0) + axpy!((1 - αf) * (1 - γ) * dt, a0, vα) + axpy!((1 - αf) * γ * dt, x, vα) + + copy!(aα, a0) + rmul!(aα, αm) + axpy!(1 - αm, x, aα) + + (uα, vα, aα) + end + ws = ((1 - αf) * β * dt^2, (1 - αf) * γ * dt, 1 - αm) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + stageop = NonlinearStageOperator( + odeop, odeopcache, + tx, usx, ws + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + + # Update state + tF = t0 + dt + 
stateF = _update_alpha2!(stateF, state0, dt, x, γ, β) + + # Pack outputs + odeslvrcache = (uα, vα, aα, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +############### +# Linear case # +############### +function allocate_odecache( + odeslvr::GeneralizedAlpha2, odeop::ODEOperator{<:AbstractLinearODE}, + t0::Real, us0::NTuple{2,AbstractVector} +) + u0, v0 = us0[1], us0[2] + us0N = (u0, v0, v0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + uα, vα, aα = zero(u0), zero(v0), zero(v0) + + constant_stiffness = is_form_constant(odeop, 0) + constant_damping = is_form_constant(odeop, 1) + constant_mass = is_form_constant(odeop, 2) + reuse = (constant_stiffness && constant_damping && constant_mass) + + J = allocate_jacobian(odeop, t0, us0N, odeopcache) + r = allocate_residual(odeop, t0, us0N, odeopcache) + + sysslvrcache = nothing + odeslvrcache = (reuse, uα, vα, aα, J, r, sysslvrcache) + + (odeslvrcache, odeopcache) +end + +function ode_start( + odeslvr::GeneralizedAlpha2, odeop::ODEOperator{<:AbstractLinearODE}, + t0::Real, us0::NTuple{2,AbstractVector}, + odecache +) + # Unpack inputs + u0, v0 = us0[1], us0[2] + odeslvrcache, odeopcache = odecache + reuse, uα, vα, aα, J, r, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + + # Allocate state + s0, s1, s2 = copy(u0), copy(v0), copy(v0) + + # Define scheme + # Set x to zero to split jacobian and residual + x = s2 + fill!(x, zero(eltype(x))) + tx = t0 + usx = (u0, v0, x) + ws = (0, 0, 1) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + stageop = LinearStageOperator( + odeop, odeopcache, + tx, usx, ws, + J, r, false, sysslvrcache + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + + # Update state + state0 = (s0, s1, s2) + + # Pack outputs + odeslvrcache = (reuse, uα, vα, aα, J, r, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (state0, odecache) +end + +function ode_march!( + stateF::NTuple{3,AbstractVector}, + odeslvr::GeneralizedAlpha2, odeop::ODEOperator{<:AbstractLinearODE}, + t0::Real, state0::NTuple{3,AbstractVector}, + odecache +) + # Unpack inputs + u0, v0, a0 = state0[1], state0[2], state0[3] + odeslvrcache, odeopcache = odecache + reuse, uα, vα, aα, J, r, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + dt, αf, αm, γ, β = odeslvr.dt, odeslvr.αf, odeslvr.αm, odeslvr.γ, odeslvr.β + + # Define scheme + x = stateF[3] + tx = t0 + (1 - αf) * dt + copy!(uα, u0) + axpy!((1 - αf) * dt, v0, uα) + axpy!((1 - αf) * (1 - 2 * β) * dt^2 / 2, a0, uα) + copy!(vα, v0) + axpy!((1 - αf) * (1 - γ) * dt, a0, vα) + copy!(aα, a0) + rmul!(aα, αm) + usx = (uα, vα, aα) + ws = ((1 - αf) * β * dt^2, (1 - αf) * γ * dt, 1 - αm) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Solve the discrete ODE operator + stageop = LinearStageOperator( + odeop, odeopcache, + tx, usx, ws, + J, r, reuse, sysslvrcache + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + + # Update state + tF = t0 + dt + stateF = _update_alpha2!(stateF, state0, dt, x, γ, β) + + # Pack outputs + odeslvrcache = (reuse, uα, vα, aα, J, r, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +######### +# Utils # +######### +function _update_alpha2!( + stateF::NTuple{3,AbstractVector}, state0::NTuple{3,AbstractVector}, + dt::Real, x::AbstractVector, γ::Real, β::Real +) + # uF = u0 + dt * v0 + dt^2 / 2 * ((1 - 2 * β) * a0 + 2 * β * x) + # 
vF = v0 + dt * ((1 - γ) * a0 + γ * x) + # We always have x === aF + u0, v0, a0 = state0[1], state0[2], state0[3] + uF, vF, aF = stateF[1], stateF[2], stateF[3] + + copy!(uF, u0) + axpy!(dt, v0, uF) + axpy!((1 - 2 * β) * dt^2 / 2, a0, uF) + axpy!(β * dt^2, x, uF) + + copy!(vF, v0) + axpy!((1 - γ) * dt, a0, vF) + axpy!(γ * dt, x, vF) + + (uF, vF, aF) +end diff --git a/src/ODEs/ODESolvers/RungeKuttaDIM.jl b/src/ODEs/ODESolvers/RungeKuttaDIM.jl new file mode 100644 index 000000000..9b84cd83f --- /dev/null +++ b/src/ODEs/ODESolvers/RungeKuttaDIM.jl @@ -0,0 +1,269 @@ + +################# +# DIMRungeKutta # +################# +""" + struct DIMRungeKutta <: ODESolver end + +Diagonally-implicit Runge-Kutta ODE solver. +```math +residual(tx, ux, vx) = 0, + +tx = t_n + c[i] * dt +ux = u_n + dt * ∑_{1 ≤ j < i} A[i, j] * slopes[j] + dt * A[i, i] * x +vx = x, + +u_(n+1) = u_n + dt * ∑_{1 ≤ i ≤ s} b[i] * slopes[i]. +``` +""" +struct DIMRungeKutta <: ODESolver + sysslvr_nl::NonlinearSolver + sysslvr_l::NonlinearSolver + dt::Real + tableau::AbstractTableau{DiagonallyImplicitTableau} +end + +################## +# Nonlinear case # +################## +function allocate_odecache( + odeslvr::DIMRungeKutta, odeop::ODEOperator, + t0::Real, us0::NTuple{1,AbstractVector} +) + u0 = us0[1] + us0N = (u0, u0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + ui_pre, ui = zero(u0), zero(u0) + num_stages = length(get_nodes(odeslvr.tableau)) + slopes = [zero(u0) for _ in 1:num_stages] + + odeoptype = ODEOperatorType(odeop) + has_explicit = odeoptype <: AbstractQuasilinearODE + is_semilinear = odeoptype <: AbstractSemilinearODE + mass_constant = is_form_constant(odeop, 1) + reuse = (is_semilinear && mass_constant) + + J, r = nothing, nothing + if has_explicit + # Allocate J, r if there are explicit stages + A = get_matrix(odeslvr.tableau) + if any(i -> iszero(A[i, i]), axes(A, 2)) + J = allocate_jacobian(odeop, t0, us0N, odeopcache) + r = allocate_residual(odeop, t0, us0N, odeopcache) + end + end + + sysslvrcaches = (nothing, nothing) + odeslvrcache = (reuse, has_explicit, ui_pre, ui, slopes, J, r, sysslvrcaches) + + (odeslvrcache, odeopcache) +end + +function ode_march!( + stateF::NTuple{1,AbstractVector}, + odeslvr::DIMRungeKutta, odeop::ODEOperator, + t0::Real, state0::NTuple{1,AbstractVector}, + odecache +) + # Unpack inputs + u0 = state0[1] + odeslvrcache, odeopcache = odecache + reuse, has_explicit, ui_pre, ui, slopes, J, r, sysslvrcaches = odeslvrcache + sysslvrcache_nl, sysslvrcache_l = sysslvrcaches + + # Unpack solver + sysslvr_nl, sysslvr_l = odeslvr.sysslvr_nl, odeslvr.sysslvr_l + dt, tableau = odeslvr.dt, odeslvr.tableau + A, b, c = get_matrix(tableau), get_weights(tableau), get_nodes(tableau) + + for i in eachindex(c) + # Define scheme + x = slopes[i] + tx = t0 + c[i] * dt + copy!(ui_pre, u0) + for j in 1:i-1 + axpy!(A[i, j] * dt, slopes[j], ui_pre) + end + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Decide whether the stage is explicit or implicit + # The stage becomes explicit when aii = 0 and the operator is quasilinear, + # which is precomputed in has_explicit + aii = A[i, i] + explicit_stage = iszero(aii) && has_explicit + + if explicit_stage + # Define scheme + # Set x to zero to split jacobian and residual + fill!(x, zero(eltype(x))) + usx = (ui_pre, x) + ws = (0, 1) + + # Create and solve stage operator + stageop = LinearStageOperator( + odeop, odeopcache, + tx, usx, ws, + J, r, reuse, sysslvrcache_l + ) + + sysslvrcache_l = solve!(x, sysslvr_l, stageop, 
sysslvrcache_l) + else + # Define scheme + function usx(x) + copy!(ui, ui_pre) + axpy!(aii * dt, x, ui) + (ui, x) + end + ws = (aii * dt, 1) + + # Create and solve stage operator + stageop = NonlinearStageOperator( + odeop, odeopcache, + tx, usx, ws + ) + + sysslvrcache_nl = solve!(x, sysslvr_nl, stageop, sysslvrcache_nl) + end + end + + # Update state + tF = t0 + dt + stateF = _update_dimrk!(stateF, state0, dt, slopes, b) + + # Pack outputs + sysslvrcaches = (sysslvrcache_nl, sysslvrcache_l) + odeslvrcache = (reuse, has_explicit, ui_pre, ui, slopes, J, r, sysslvrcaches) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +############### +# Linear case # +############### +function allocate_odecache( + odeslvr::DIMRungeKutta, odeop::ODEOperator{<:AbstractLinearODE}, + t0::Real, us0::NTuple{1,AbstractVector} +) + u0 = us0[1] + us0N = (u0, u0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + ui_pre = zero(u0) + num_stages = length(get_nodes(odeslvr.tableau)) + slopes = [zero(u0) for _ in 1:num_stages] + + stiffness_constant = is_form_constant(odeop, 0) + mass_constant = is_form_constant(odeop, 1) + reuse = (stiffness_constant && mass_constant) + + J = allocate_jacobian(odeop, t0, us0N, odeopcache) + r = allocate_residual(odeop, t0, us0N, odeopcache) + + # Numerical setups for the linear solver + # * If the mass and stiffness matrices are constant, we can reuse numerical + # setups and we allocate one for each distinct aii. + # * Otherwise, there will be no reuse so we only need one numerical setup + # that will be updated. + # To be general, we build a map sysslvrcaches: step -> NumericalSetup. + # We will probably never need more than 256 stages so we can use Int8. + if !reuse + n = 1 + ptrs = fill(Int8(1), num_stages) + else + A = get_matrix(odeslvr.tableau) + d = Dict{eltype(A),Int8}() + n = 0 + ptrs = zeros(Int8, num_stages) + for i in 1:num_stages + aii = A[i, i] + if !haskey(d, aii) + n += 1 + d[aii] = n + end + ptrs[i] = d[aii] + end + end + values = Vector{NumericalSetup}(undef, n) + + sysslvrcaches = CompressedArray(values, ptrs) + odeslvrcache = (reuse, ui_pre, slopes, J, r, sysslvrcaches) + + (odeslvrcache, odeopcache) +end + +function ode_march!( + stateF::NTuple{1,AbstractVector}, + odeslvr::DIMRungeKutta, odeop::ODEOperator{<:AbstractLinearODE}, + t0::Real, state0::NTuple{1,AbstractVector}, + odecache +) + # Unpack inputs + u0 = state0[1] + odeslvrcache, odeopcache = odecache + reuse, ui_pre, slopes, J, r, sysslvrcaches = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr_l + dt, tableau = odeslvr.dt, odeslvr.tableau + A, b, c = get_matrix(tableau), get_weights(tableau), get_nodes(tableau) + + for i in eachindex(c) + # Define scheme + # Set x to zero to split jacobian and residual + x = slopes[i] + fill!(x, zero(eltype(x))) + tx = t0 + c[i] * dt + copy!(ui_pre, u0) + for j in 1:i-1 + axpy!(A[i, j] * dt, slopes[j], ui_pre) + end + usx = (ui_pre, x) + ws = (A[i, i] * dt, 1) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + # sysslvrcaches[i] will be unassigned at the first iteration + sysslvrcache = isassigned(sysslvrcaches, i) ? 
sysslvrcaches[i] : nothing + stageop = LinearStageOperator( + odeop, odeopcache, + tx, usx, ws, + J, r, reuse, sysslvrcache + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + sysslvrcaches = _setindex_all!(sysslvrcaches, sysslvrcache, i) + end + + # Update state + tF = t0 + dt + stateF = _update_dimrk!(stateF, state0, dt, slopes, b) + + # Pack outputs + odeslvrcache = (reuse, ui_pre, slopes, J, r, sysslvrcaches) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +######### +# Utils # +######### +function _update_dimrk!( + stateF::NTuple{1,AbstractVector}, state0::NTuple{1,AbstractVector}, + dt::Real, slopes::AbstractVector, b::AbstractVector +) + # uF = u0 + ∑_{1 ≤ i ≤ s} b[i] * dt * slopes[i] + u0 = state0[1] + uF = stateF[1] + + copy!(uF, u0) + for (bi, slopei) in zip(b, slopes) + axpy!(bi * dt, slopei, uF) + end + + (uF,) +end diff --git a/src/ODEs/ODESolvers/RungeKuttaEX.jl b/src/ODEs/ODESolvers/RungeKuttaEX.jl new file mode 100644 index 000000000..983f1c873 --- /dev/null +++ b/src/ODEs/ODESolvers/RungeKuttaEX.jl @@ -0,0 +1,193 @@ +################ +# EXRungeKutta # +################ +""" + struct EXRungeKutta <: ODESolver end + +Explicit Runge-Kutta ODE solver. +```math +residual(tx, ux, vx) = 0, + +tx = t_n + c[i] * dt +ux = u_n + ∑_{1 ≤ j < i} A[i, j] * dt * slopes[j] +vx = x +slopes[i] = x, + +u_(n+1) = u_n + ∑_{1 ≤ i ≤ s} b[i] * dt * slopes[i]. +``` +""" +struct EXRungeKutta <: ODESolver + sysslvr::NonlinearSolver + dt::Real + tableau::AbstractTableau{ExplicitTableau} +end + +################## +# Nonlinear case # +################## +function allocate_odecache( + odeslvr::EXRungeKutta, odeop::ODEOperator, + t0::Real, us0::NTuple{1,AbstractVector} +) + u0 = us0[1] + us0N = (u0, u0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + ui_pre = zero(u0) + num_stages = length(get_nodes(odeslvr.tableau)) + slopes = [zero(u0) for _ in 1:num_stages] + + sysslvrcache = nothing + odeslvrcache = (ui_pre, slopes, sysslvrcache) + + (odeslvrcache, odeopcache) +end + +function ode_march!( + stateF::NTuple{1,AbstractVector}, + odeslvr::EXRungeKutta, odeop::ODEOperator, + t0::Real, state0::NTuple{1,AbstractVector}, + odecache +) + # Unpack inputs + u0 = state0[1] + odeslvrcache, odeopcache = odecache + ui_pre, slopes, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + dt, tableau = odeslvr.dt, odeslvr.tableau + A, b, c = get_matrix(tableau), get_weights(tableau), get_nodes(tableau) + + for i in eachindex(c) + # Define scheme + x = slopes[i] + tx = t0 + c[i] * dt + copy!(ui_pre, u0) + for j in 1:i-1 + axpy!(A[i, j] * dt, slopes[j], ui_pre) + end + usx(x) = (ui_pre, x) + ws = (0, 1) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + stageop = NonlinearStageOperator( + odeop, odeopcache, + tx, usx, ws + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + end + + # Update state + tF = t0 + dt + stateF = _update_exrk!(stateF, state0, dt, slopes, b) + + # Pack outputs + odeslvrcache = (ui_pre, slopes, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +############### +# Linear case # +############### +function allocate_odecache( + odeslvr::EXRungeKutta, odeop::ODEOperator{<:AbstractQuasilinearODE}, + t0::Real, us0::NTuple{1,AbstractVector} +) + u0 = us0[1] + us0N = (u0, u0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + ui_pre = zero(u0) + num_stages = length(get_nodes(odeslvr.tableau)) + slopes = [zero(u0) for _ 
in 1:num_stages] + + is_semilinear = ODEOperatorType(odeop) <: AbstractSemilinearODE + constant_mass = is_form_constant(odeop, 1) + reuse = (is_semilinear && constant_mass) + + J = allocate_jacobian(odeop, t0, us0N, odeopcache) + r = allocate_residual(odeop, t0, us0N, odeopcache) + + sysslvrcache = nothing + odeslvrcache = (reuse, ui_pre, slopes, J, r, sysslvrcache) + + (odeslvrcache, odeopcache) +end + + +function ode_march!( + stateF::NTuple{1,AbstractVector}, + odeslvr::EXRungeKutta, odeop::ODEOperator{<:AbstractQuasilinearODE}, + t0::Real, state0::NTuple{1,AbstractVector}, + odecache +) + # Unpack inputs + u0 = state0[1] + odeslvrcache, odeopcache = odecache + reuse, ui_pre, slopes, J, r, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + dt, tableau = odeslvr.dt, odeslvr.tableau + A, b, c = get_matrix(tableau), get_weights(tableau), get_nodes(tableau) + + for i in eachindex(c) + # Define scheme + # Set x to zero to split jacobian and residual + x = slopes[i] + fill!(x, zero(eltype(x))) + tx = t0 + c[i] * dt + copy!(ui_pre, u0) + for j in 1:i-1 + axpy!(A[i, j] * dt, slopes[j], ui_pre) + end + usx = (ui_pre, x) + ws = (0, 1) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + stageop = LinearStageOperator( + odeop, odeopcache, + tx, usx, ws, + J, r, reuse, sysslvrcache + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + end + + # Update state + tF = t0 + dt + stateF = _update_exrk!(stateF, state0, dt, slopes, b) + + # Pack outputs + odeslvrcache = (reuse, ui_pre, slopes, J, r, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +######### +# Utils # +######### +function _update_exrk!( + stateF::NTuple{1,AbstractVector}, state0::NTuple{1,AbstractVector}, + dt::Real, slopes::AbstractVector, b::AbstractVector +) + # uF = u0 + ∑_{1 ≤ i ≤ s} b[i] * dt * slopes[i] + u0 = state0[1] + uF = stateF[1] + + copy!(uF, u0) + for (bi, slopei) in zip(b, slopes) + axpy!(bi * dt, slopei, uF) + end + + (uF,) +end diff --git a/src/ODEs/ODESolvers/RungeKuttaIMEX.jl b/src/ODEs/ODESolvers/RungeKuttaIMEX.jl new file mode 100644 index 000000000..9e4630c4a --- /dev/null +++ b/src/ODEs/ODESolvers/RungeKuttaIMEX.jl @@ -0,0 +1,417 @@ +""" + struct IMEXRungeKutta <: ODESolver + +Implicit-Explicit Runge-Kutta ODE solver. +```math +mass(tx, ux) vx + im_res(tx, ux) = 0, + +tx = t_n + c[i] * dt +ux = u_n + ∑_{1 ≤ j < i} im_A[i, j] * dt * im_slopes[j] + im_A[i, i] * dt * x + + ∑_{1 ≤ j < i} ex_A[i, j] * dt * ex_slopes[j] +vx = x +im_slopes[i] = x, + +mass(tx, ux) vx + ex_res(tx, ux) = 0, + +tx = t_n + c[i] * dt +ux = u_n + ∑_{1 ≤ j ≤ i} im_A[i, j] * dt * im_slopes[j] + + ∑_{1 ≤ j < i} ex_A[i, j] * dt * ex_slopes[j] +vx = x +ex_slopes[i] = x, + +u_(n+1) = u_n + ∑_{1 ≤ i ≤ s} im_b[i] * dt * im_slopes[i] + + ∑_{1 ≤ i ≤ s} ex_b[i] * dt * ex_slopes[i]. +``` +""" +struct IMEXRungeKutta <: ODESolver + sysslvr_nl::NonlinearSolver + sysslvr_l::NonlinearSolver + dt::Real + tableau::AbstractTableau{ImplicitExplicitTableau} +end + +####################### +# Notimplemented case # +####################### +const imex_rk_not_implemented_msg = """ +IMEX Runge-Kutta is only implemented for IMEX ODE operators whose implicit +residual is quasilinear. 
+""" + +function allocate_odecache( + odeslvr::IMEXRungeKutta, odeop::IMEXODEOperator, + t0::Real, us0::NTuple{1,AbstractVector} +) + @unreachable imex_rk_not_implemented_msg +end + +function ode_march!( + stateF::NTuple{1,AbstractVector}, + odeslvr::IMEXRungeKutta, odeop::IMEXODEOperator, + t0::Real, state0::NTuple{1,AbstractVector}, + odecache +) + @unreachable imex_rk_not_implemented_msg +end + +# Dispatch on the IMEX decomposition +function allocate_odecache( + odeslvr::IMEXRungeKutta, odeop::IMEXODEOperator{<:AbstractQuasilinearODE}, + t0::Real, us0::NTuple{1,AbstractVector} +) + im_odeop, ex_odeop = get_imex_operators(odeop) + allocate_odecache(odeslvr, odeop, im_odeop, ex_odeop, t0, us0) +end + +function ode_march!( + stateF::NTuple{1,AbstractVector}, + odeslvr::IMEXRungeKutta, odeop::IMEXODEOperator{<:AbstractQuasilinearODE}, + t0::Real, state0::NTuple{1,AbstractVector}, + odecache +) + im_odeop, ex_odeop = get_imex_operators(odeop) + ode_march!(stateF, odeslvr, odeop, im_odeop, ex_odeop, t0, state0, odecache) +end + +################## +# Nonlinear case # +################## +# This is very similar to `DIMRungeKutta` applied to a nonlinear `ODEOperator` +function allocate_odecache( + odeslvr::IMEXRungeKutta, odeop, + im_odeop::ODEOperator{<:AbstractQuasilinearODE}, + ex_odeop::ODEOperator, + t0::Real, us0::NTuple{1,AbstractVector} +) + u0 = us0[1] + us0N = (u0, u0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + ui_pre, ui = zero(u0), zero(u0) + im_tableau, ex_tableau = get_imex_tableaus(odeslvr.tableau) + im_num_stages = length(get_nodes(im_tableau)) + im_slopes = [zero(u0) for _ in 1:im_num_stages] + ex_num_stages = length(get_nodes(ex_tableau)) + ex_slopes = [zero(u0) for _ in 1:ex_num_stages] + + is_semilinear = ODEOperatorType(im_odeop) <: AbstractSemilinearODE + mass_constant = is_form_constant(odeop, 1) + reuse = (is_semilinear && mass_constant) + + # The explicit stage is always going to be linear so we always need to + # allocate a jacobian and residual. + J = allocate_jacobian(odeop, t0, us0N, odeopcache) + r = allocate_residual(odeop, t0, us0N, odeopcache) + + # We can share the numerical setup across the implicit and explicit parts + # because the linear solver will only be called when the implicit part goes + # through an explicit stage (aii = 0) and on the explicit part. In both cases + # the matrix of the linear stage operator is the mass matrix. 
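+  # Both solver caches start out as nothing and are created by the first corresponding call to solve!.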
+ sysslvrcaches = (nothing, nothing) + odeslvrcache = (reuse, ui_pre, ui, im_slopes, ex_slopes, J, r, sysslvrcaches) + + (odeslvrcache, odeopcache) +end + +function ode_march!( + stateF::NTuple{1,AbstractVector}, + odeslvr::IMEXRungeKutta, odeop, + im_odeop::ODEOperator{<:AbstractQuasilinearODE}, + ex_odeop::ODEOperator, + t0::Real, state0::NTuple{1,AbstractVector}, + odecache +) + # Unpack inputs + u0 = state0[1] + odeslvrcache, odeopcache = odecache + im_odeopcache, ex_odeopcache = odeopcache + reuse, ui_pre, ui, im_slopes, ex_slopes, J, r, sysslvrcaches = odeslvrcache + sysslvrcache_nl, sysslvrcache_l = sysslvrcaches + + # Unpack solver + sysslvr_nl, sysslvr_l = odeslvr.sysslvr_nl, odeslvr.sysslvr_l + dt, tableau = odeslvr.dt, odeslvr.tableau + im_tableau, ex_tableau = get_imex_tableaus(tableau) + im_A, im_b = get_matrix(im_tableau), get_weights(im_tableau) + ex_A, ex_b = get_matrix(ex_tableau), get_weights(ex_tableau) + c = get_nodes(im_tableau) + + for i in eachindex(c) + # Define scheme + x = im_slopes[i] + tx = t0 + c[i] * dt + copy!(ui_pre, u0) + for j in 1:i-1 + axpy!(im_A[i, j] * dt, im_slopes[j], ui_pre) + axpy!(ex_A[i, j] * dt, ex_slopes[j], ui_pre) + end + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # 1. Implicit part + aii = im_A[i, i] + + # If the implicit tableau is padded, we can skip the first implicit solve + # and set im_slopes[1] to zero + if i == 1 && is_padded(tableau) + fill!(x, zero(eltype(x))) + else + # The stage becomes explicit when aii = 0 because the implicit part is + # quasilinear + explicit_stage = iszero(aii) + + if explicit_stage + # Define scheme + # Set x to zero to split jacobian and residual + fill!(x, zero(eltype(x))) + usx = (ui_pre, x) + ws = (0, 1) + + # Create and solve stage operator + im_stageop = LinearStageOperator( + im_odeop, im_odeopcache, + tx, usx, ws, + J, r, reuse, sysslvrcache_l + ) + + sysslvrcache_l = solve!(x, sysslvr_l, im_stageop, sysslvrcache_l) + else + # Define scheme + function usx(x) + copy!(ui, ui_pre) + axpy!(aii * dt, x, ui) + (ui, x) + end + ws = (aii * dt, 1) + + # Create and solve stage operator + im_stageop = NonlinearStageOperator( + im_odeop, im_odeopcache, + tx, usx, ws + ) + + sysslvrcache_nl = solve!(x, sysslvr_nl, im_stageop, sysslvrcache_nl) + end + end + + # 2. 
Explicit part + # vx does not matter + # Compute ui from ui_pre and im_slopes[i] + copy!(ui, ui_pre) + axpy!(aii * dt, im_slopes[i], ui) + usx = (ui, u0) + + # This stage operator is a little more complicated than usual so we build + # the jacobian and residual here: + # [m(ti, ui)] x + [ex_res(ti, ui)] = 0 + # * The explicit part does not contain the mass, we take the full residual + # * The jacobian is the mass matrix, which is stored in the implicit part + residual!(r, ex_odeop, tx, usx, ex_odeopcache) + if isnothing(sysslvrcache_l) || !reuse + ws = (0, 1) + jacobian!(J, im_odeop, tx, usx, ws, im_odeopcache) + end + ex_stageop = LinearStageOperator(J, r, reuse) + + x = ex_slopes[i] + sysslvrcache_l = solve!(x, sysslvr_l, ex_stageop, sysslvrcache_l) + end + + # Update state + tF = t0 + dt + stateF = _update_imexrk!(stateF, state0, dt, im_slopes, im_b, ex_slopes, ex_b) + + # Pack outputs + sysslvrcaches = (sysslvrcache_nl, sysslvrcache_l) + odeslvrcache = (reuse, ui_pre, ui, im_slopes, ex_slopes, J, r, sysslvrcaches) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +############### +# Linear case # +############### +# This is very similar to `DIMRungeKutta` applied to a linear `ODEOperator` +function allocate_odecache( + odeslvr::IMEXRungeKutta, odeop, + im_odeop::ODEOperator{<:AbstractLinearODE}, + ex_odeop::ODEOperator, + t0::Real, us0::NTuple{1,AbstractVector} +) + u0 = us0[1] + us0N = (u0, u0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + ui_pre = zero(u0) + im_tableau, ex_tableau = get_imex_tableaus(odeslvr.tableau) + im_num_stages = length(get_nodes(im_tableau)) + im_slopes = [zero(u0) for _ in 1:im_num_stages] + ex_num_stages = length(get_nodes(ex_tableau)) + ex_slopes = [zero(u0) for _ in 1:ex_num_stages] + + stiffness_constant = is_form_constant(im_odeop, 0) + mass_constant = is_form_constant(im_odeop, 1) + reuse = (stiffness_constant && mass_constant) + + # The explict part will always bring about a linear stage operator, and its + # matrix is always the mass matrix. From the constraint on the nodes of the + # implicit and explicit tableaus, we know that the first stage of the + # implicit part is in fact explicit so the corresponding matrix is also the + # mass matrix. This means that the numerical setup of the explict part can + # always be chosen as the same as numerical setup of the first stage of the + # implicit part. + + # For the same reason, the jacobian and residual can be shared across the + # implicit and explicit parts. + J = allocate_jacobian(odeop, t0, us0N, odeopcache) + r = allocate_residual(odeop, t0, us0N, odeopcache) + + # Numerical setups for the linear solver + # * If the mass and stiffness matrices are constant, we can reuse numerical + # setups and we allocate one for each distinct aii. + # * Otherwise, there will be no reuse so we only need one numerical setup + # that will be updated. + # To be general, we build a map sysslvrcaches: step -> NumericalSetup. 
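+  # Below, ptrs[i] stores the index of the numerical setup used at stage i.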
+ # We will probably never need more than 256 stages so we can use Int8 + if !reuse + n = 1 + ptrs = fill(Int8(1), im_num_stages) + else + im_A = get_matrix(im_tableau) + d = Dict{eltype(im_A),Int8}() + n = 0 + ptrs = zeros(Int8, im_num_stages) + for i in 1:im_num_stages + aii = im_A[i, i] + if !haskey(d, aii) + n += 1 + d[aii] = n + end + ptrs[i] = d[aii] + end + end + values = Vector{NumericalSetup}(undef, n) + + sysslvrcaches = CompressedArray(values, ptrs) + odeslvrcache = (reuse, ui_pre, im_slopes, ex_slopes, J, r, sysslvrcaches) + + (odeslvrcache, odeopcache) +end + +function ode_march!( + stateF::NTuple{1,AbstractVector}, + odeslvr::IMEXRungeKutta, odeop, + im_odeop::ODEOperator{<:AbstractLinearODE}, + ex_odeop::ODEOperator, + t0::Real, state0::NTuple{1,AbstractVector}, + odecache +) + # Unpack inputs + u0 = state0[1] + odeslvrcache, odeopcache = odecache + im_odeopcache, ex_odeopcache = odeopcache + reuse, ui_pre, im_slopes, ex_slopes, J, r, sysslvrcaches = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr_l + dt, tableau = odeslvr.dt, odeslvr.tableau + im_tableau, ex_tableau = get_imex_tableaus(tableau) + im_A, im_b = get_matrix(im_tableau), get_weights(im_tableau) + ex_A, ex_b = get_matrix(ex_tableau), get_weights(ex_tableau) + c = get_nodes(im_tableau) + + for i in eachindex(c) + # Define scheme + # Set x to zero to split jacobian and residual + x = im_slopes[i] + fill!(x, zero(eltype(x))) + tx = t0 + c[i] * dt + copy!(ui_pre, u0) + for j in 1:i-1 + axpy!(im_A[i, j] * dt, im_slopes[j], ui_pre) + axpy!(ex_A[i, j] * dt, ex_slopes[j], ui_pre) + end + usx = (ui_pre, x) + ws = (im_A[i, i] * dt, 1) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + # 1. Implicit stage + # If the implicit tableau is padded, we can skip the first implicit solve + # and set im_slopes[1] to zero + if i == 1 && is_padded(tableau) + fill!(x, zero(eltype(x))) + else + # sysslvrcaches[i] will be unassigned at the first iteration + sysslvrcache = isassigned(sysslvrcaches, i) ? sysslvrcaches[i] : nothing + + im_stageop = LinearStageOperator( + im_odeop, im_odeopcache, + tx, usx, ws, + J, r, reuse, sysslvrcache + ) + + sysslvrcache = solve!(x, sysslvr, im_stageop, sysslvrcache) + sysslvrcaches = _setindex_all!(sysslvrcaches, sysslvrcache, i) + end + + # 2. Explicit part + # vx does not matter + # Compute ui from ui_pre and im_slopes[i] + ui = axpy!(im_A[i, i] * dt, im_slopes[i], ui_pre) + usx = (ui, u0) + + # This stage operator is a little more complicated than usual so we build + # the jacobian and residual here: + # [m(ti, ui)] x + [ex_res(ti, ui)] = 0 + # * The explicit part does not contain the mass, we take the full residual + # * The jacobian is the mass matrix, which is stored in the implicit part + residual!(r, ex_odeop, tx, usx, ex_odeopcache) + sysslvrcache = isassigned(sysslvrcaches, 1) ? 
sysslvrcaches[1] : nothing + if isnothing(sysslvrcache) || !reuse + ws = (0, 1) + jacobian!(J, im_odeop, tx, usx, ws, im_odeopcache) + end + ex_stageop = LinearStageOperator(J, r, reuse) + + x = ex_slopes[i] + sysslvrcache = solve!(x, sysslvr, ex_stageop, sysslvrcache) + sysslvrcaches = _setindex_all!(sysslvrcaches, sysslvrcache, i) + end + + # Update state + tF = t0 + dt + stateF = _update_imexrk!(stateF, state0, dt, im_slopes, im_b, ex_slopes, ex_b) + + # Pack outputs + odeslvrcache = (reuse, ui_pre, im_slopes, ex_slopes, J, r, sysslvrcaches) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +######### +# Utils # +######### +function _update_imexrk!( + stateF::NTuple{1,AbstractVector}, state0::NTuple{1,AbstractVector}, + dt::Real, im_slopes::AbstractVector{<:AbstractVector}, im_b::AbstractVector, + ex_slopes::AbstractVector{<:AbstractVector}, ex_b::AbstractVector +) + # uF = u0 + ∑_{1 ≤ i ≤ s} im_b[i] * dt * im_slopes[i] + # + ∑_{1 ≤ i ≤ s} ex_b[i] * dt * ex_slopes[i] + u0 = state0[1] + uF = stateF[1] + + copy!(uF, u0) + for (bi, slopei) in zip(im_b, im_slopes) + axpy!(bi * dt, slopei, uF) + end + for (bi, slopei) in zip(ex_b, ex_slopes) + axpy!(bi * dt, slopei, uF) + end + + (uF,) +end diff --git a/src/ODEs/ODESolvers/Tableaus.jl b/src/ODEs/ODESolvers/Tableaus.jl new file mode 100644 index 000000000..81901aef5 --- /dev/null +++ b/src/ODEs/ODESolvers/Tableaus.jl @@ -0,0 +1,336 @@ +############### +# TableauType # +############### +""" + abstract type TableauType <: GridapType end + +Trait that indicates whether a tableau is explicit, implicit or +implicit-explicit. +""" +abstract type TableauType <: GridapType end + +""" + struct ExplicitTableau <: TableauType end + +Tableau whose matrix is strictly lower triangular. +""" +struct ExplicitTableau <: TableauType end + +""" + abstract type ImplicitTableau <: TableauType end + +Tableau whose matrix has at least one nonzero coefficient outside its strict +lower triangular part. +""" +abstract type ImplicitTableau <: TableauType end + +""" + struct DiagonallyImplicitTableau <: ImplicitTableau end + +Tableau whose matrix is lower triangular, with at least one nonzero diagonal +coefficient. +""" +struct DiagonallyImplicitTableau <: ImplicitTableau end + +""" + struct FullyImplicitTableau <: ImplicitTableau end + +Tableau whose matrix has at least one nonzero coefficient in its strict upper +triangular part. +""" +struct FullyImplicitTableau <: ImplicitTableau end + +""" + struct ImplicitExplicitTableau <: ImplicitTableau end + +Pair of implicit and explicit tableaus that form a valid implicit-explicit +scheme. +""" +struct ImplicitExplicitTableau <: TableauType end + +################### +# AbstractTableau # +################### +""" + abstract type AbstractTableau{T} <: GridapType end + +Type that stores the Butcher tableau corresponding to a Runge-Kutta scheme. +""" +abstract type AbstractTableau{T<:TableauType} <: GridapType end + +""" + TableauType(::AbstractTableau) -> TableauType + +Return the `TableauType` of the tableau. +""" +TableauType(::AbstractTableau{T}) where {T} = T + +""" + get_matrix(tableau::AbstractTableau) -> AbstractMatrix + +Return the matrix of the tableau. +""" +function Algebra.get_matrix(tableau::AbstractTableau) + @abstractmethod +end + +""" + get_weights(tableau::AbstractTableau) -> AbstractVector + +Return the weights of the tableau. 
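+These are the coefficients b[i] in the update u_(n+1) = u_n + dt * ∑_{1 ≤ i ≤ s} b[i] * slopes[i].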
+""" +function ReferenceFEs.get_weights(tableau::AbstractTableau) + @abstractmethod +end + +""" + get_nodes(tableau::AbstractTableau) -> AbstractVector + +Return the nodes of the tableau. +""" +function ReferenceFEs.get_nodes(tableau::AbstractTableau) + @abstractmethod +end + +""" + get_order(tableau::AbstractTableau) -> Integer + +Return the order of the scheme corresponding to the tableau. +""" +function Polynomials.get_order(tableau::AbstractTableau) + @abstractmethod +end + +################## +# GenericTableau # +################## +""" + struct GenericTableau <: AbstractTableau end + +Generic type that stores any type of Butcher tableau. +""" +struct GenericTableau{T<:TableauType} <: AbstractTableau{T} + matrix::Matrix + weights::Vector + nodes::Vector + order::Integer + + function GenericTableau(matrix, weights, order) + nodes = reshape(sum(matrix, dims=2), size(matrix, 1)) + T = _tableau_type(matrix) + new{T}(matrix, weights, nodes, order) + end +end + +function Algebra.get_matrix(tableau::GenericTableau) + tableau.matrix +end + +function ReferenceFEs.get_weights(tableau::GenericTableau) + tableau.weights +end + +function ReferenceFEs.get_nodes(tableau::GenericTableau) + tableau.nodes +end + +function Polynomials.get_order(tableau::GenericTableau) + tableau.order +end + +function _tableau_type(matrix::Matrix) + T = ExplicitTableau + n = size(matrix, 1) + for i in 1:n + if any(j -> !iszero(matrix[i, j]), i+1:n) + T = FullyImplicitTableau + break + elseif !iszero(matrix[i, i]) + T = DiagonallyImplicitTableau + end + end + T +end + +################### +# EmbeddedTableau # +################### +""" + struct EmbeddedTableau <: AbstractTableau end + +Generic type that stores any type of embedded Butcher tableau. +""" +struct EmbeddedTableau{T} <: AbstractTableau{T} + tableau::AbstractTableau{T} + emb_weights::Vector + emb_order::Integer +end + +function Algebra.get_matrix(tableau::EmbeddedTableau) + get_matrix(tableau.tableau) +end + +function ReferenceFEs.get_weights(tableau::EmbeddedTableau) + get_weights(tableau.tableau) +end + +function ReferenceFEs.get_nodes(tableau::EmbeddedTableau) + get_nodes(tableau.tableau) +end + +function Polynomials.get_order(tableau::EmbeddedTableau) + get_order(tableau.tableau) +end + +""" + get_embedded_weights(tableau::EmbeddedTableau) -> AbstractVector + +Return the embedded weight of the tableau. +""" +function get_embedded_weights(tableau::EmbeddedTableau) + tableau.emb_weights +end + +""" + get_embedded_order(tableau::EmbeddedTableau) -> Integer + +Return the embedded order of the tableau. +""" +function get_embedded_order(tableau::EmbeddedTableau) + tableau.emb_order +end + +############### +# IMEXTableau # +############### +""" + struct IMEXTableau <: AbstractTableau end + +Generic type that stores any type of implicit-explicit pair of Butcher tableaus, +that form a valid IMEX scheme. 
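+The nodes of the implicit and explicit tableaus are required to coincide.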
+""" +struct IMEXTableau <: AbstractTableau{ImplicitExplicitTableau} + im_tableau::AbstractTableau{<:ImplicitTableau} + ex_tableau::AbstractTableau{ExplicitTableau} + imex_order::Integer + is_padded::Bool + + function IMEXTableau(im_tableau, ex_tableau, imex_order) + Tim = TableauType(im_tableau) + Tex = TableauType(ex_tableau) + + msg = """Invalid IMEX tableau: + the first tableau must be implicit and the second must be explicit.""" + @assert (Tim <: ImplicitTableau && Tex == ExplicitTableau) msg + + msg = """Invalid IMEX tableau: + the nodes of the implicit and explicit tableaus must coincide.""" + @assert isapprox(get_nodes(im_tableau), get_nodes(ex_tableau)) msg + + is_padded = _is_padded(im_tableau) + + new(im_tableau, ex_tableau, imex_order, is_padded) + end +end + +function Polynomials.get_order(tableau::IMEXTableau) + tableau.imex_order +end + +function get_imex_tableaus(tableau::IMEXTableau) + (tableau.im_tableau, tableau.ex_tableau) +end + +function is_padded(tableau::IMEXTableau) + tableau.is_padded +end + +function _is_padded(tableau::AbstractTableau) + A = get_matrix(tableau) + b = get_matrix(tableau) + iszero(b[1]) && all(i -> iszero(A[i, 1]), axes(A, 1)) +end + +############################ +# Concrete implementations # +############################ +""" + abstract type TableauName <: GridapType end + +Name of a Butcher tableau. +""" +abstract type TableauName <: GridapType end + +""" + ButcherTableau(name::TableauName, type::Type) -> AbtractTableau + +Builds the Butcher tableau corresponding to a `TableauName`. +""" +function ButcherTableau(name::TableauName, type::Type=Float64) + @abstractmethod +end + +function ButcherTableau(name::Symbol, type::Type=Float64) + eval(:(ButcherTableau($name(), $type))) +end + +################## +# Import schemes # +################## + +include("TableausEX.jl") + +include("TableausDIM.jl") + +include("TableausIMEX.jl") + +const available_tableaus = [ + :EXRK_Euler_1_1, + :EXRK_Midpoint_2_2, + :EXRK_SSP_2_2, + :EXRK_Heun_2_2, + :EXRK_Ralston_2_2, + :EXRK_Kutta_3_3, + :EXRK_Heun_3_3, + :EXRK_Wray_3_3, + :EXRK_VanDerHouwen_3_3, + :EXRK_Ralston_3_3, + :EXRK_SSP_3_3, + :EXRK_SSP_3_2, + :EXRK_Fehlberg_3_2, + :EXRK_RungeKutta_4_4, + :EXRK_Simpson_4_4, + :EXRK_Ralston_4_4, + :EXRK_SSP_4_3, + :EXRK_BogackiShampine_4_3, + :SDIRK_Euler_1_1, + :SDIRK_Midpoint_1_2, + :DIRK_CrankNicolson_2_2, + :SDIRK_QinZhang_2_2, + :DIRK_LobattoIIIA_2_2, + :DIRK_RadauI_2_3, + :DIRK_RadauII_2_3, + :SDIRK_LobattoIIIC_2_2, + :SDIRK_2_2, + :SDIRK_SSP_2_3, + :SDIRK_Crouzeix_2_3, + :SDIRK_3_2, + :DIRK_TRBDF_3_2, + :DIRK_TRX_3_2, + :SDIRK_3_3, + :SDIRK_Crouzeix_3_4, + :SDIRK_Norsett_3_4, + :DIRK_LobattoIIIC_3_4, + :SDIRK_4_3, +] + +const available_imex_tableaus = [ + :IMEXRK_1_1_1, + :IMEXRK_1_2_1, + :IMEXRK_1_2_2, + :IMEXRK_2_2_2, + :IMEXRK_2_3_2, + :IMEXRK_2_3_3, + :IMEXRK_3_4_3, + :IMEXRK_4_4_3, +] diff --git a/src/ODEs/ODESolvers/TableausDIM.jl b/src/ODEs/ODESolvers/TableausDIM.jl new file mode 100644 index 000000000..ab5799b85 --- /dev/null +++ b/src/ODEs/ODESolvers/TableausDIM.jl @@ -0,0 +1,354 @@ +########### +# 1 stage # +########### +function SDIRK11(α::Real, ::Type{T}=Float64) where {T<:Real} + matrix = T[α;;] + weights = T[1] + cond2 = (α ≈ 1 / 2) + order = cond2 ? 
2 : 1 + GenericTableau(matrix, weights, order) +end + +function SDIRK12(T::Type{<:Real}=Float64) + SDIRK11(1 / 2, T) +end + +""" +SDIRK_Euler_1_1 +""" +struct SDIRK_Euler_1_1 <: TableauName end + +function ButcherTableau(::SDIRK_Euler_1_1, ::Type{T}=Float64) where {T} + SDIRK11(1, T) +end + +""" +SDIRK_Midpoint_1_2 +""" +struct SDIRK_Midpoint_1_2 <: TableauName end + +function ButcherTableau(::SDIRK_Midpoint_1_2, ::Type{T}=Float64) where {T} + SDIRK12(T) +end + +############ +# 2 stages # +############ +function DIRK22(α::Real, β::Real, γ::Real, ::Type{T}=Float64) where {T<:Real} + δ = β - γ + θ = (1 - 2 * α) / 2 / (β - α) + matrix = T[ + α 0 + δ γ + ] + weights = T[1-θ, θ] + cond31 = ((1 - θ) * α^2 + θ * β^2 ≈ 1 / 3) + cond32 = ((1 - θ) * α^2 + θ * (δ * α + γ * β) ≈ 1 / 6) + cond3 = cond31 && cond32 + order = cond3 ? 3 : 2 + GenericTableau(matrix, weights, order) +end + +function DIRK23(λ::Real, ::Type{T}=Float64) where {T<:Real} + α = 1 / 2 - sqrt(3) / 6 / λ + β = sqrt(3) / 3 * λ + γ = 1 / 2 - sqrt(3) / 6 * λ + θ = 1 / (λ^2 + 1) + matrix = T[ + α 0 + β γ + ] + weights = T[1-θ, θ] + order = 3 + GenericTableau(matrix, weights, order) +end + +""" +DIRK_CrankNicolson_2_2 +""" +struct DIRK_CrankNicolson_2_2 <: TableauName end + +function ButcherTableau(::DIRK_CrankNicolson_2_2, ::Type{T}=Float64) where {T} + DIRK22(0, 1, 1 // 2, T) +end + +""" +SDIRK_QinZhang_2_2 +""" +struct SDIRK_QinZhang_2_2 <: TableauName end + +function ButcherTableau(::SDIRK_QinZhang_2_2, ::Type{T}=Float64) where {T} + DIRK22(1 // 4, 3 // 4, 1 // 4, T) +end + +""" +DIRK_LobattoIIIA_2_2 +""" +struct DIRK_LobattoIIIA_2_2 <: TableauName end + +function ButcherTableau(::DIRK_LobattoIIIA_2_2, ::Type{T}=Float64) where {T} + tableau = DIRK22(0, 1, 1 // 2, T) + emb_weights = T[1, 0] + emb_order = 1 + EmbeddedTableau(tableau, emb_weights, emb_order) +end + +""" +DIRK_RadauI_2_3 +""" +struct DIRK_RadauI_2_3 <: TableauName end + +function ButcherTableau(::DIRK_RadauI_2_3, ::Type{T}=Float64) where {T<:Real} + a = 1 // 3 + b = 1 // 4 + c = 3 // 4 + matrix = T[ + 0 0 + a a + ] + weights = T[b, c] + order = 3 + GenericTableau(matrix, weights, order) +end + +""" +DIRK_RadauII_2_3 +""" +struct DIRK_RadauII_2_3 <: TableauName end + +function ButcherTableau(::DIRK_RadauII_2_3, ::Type{T}=Float64) where {T<:Real} + a = 1 // 3 + b = 3 // 4 + c = 1 // 4 + matrix = T[ + a 0 + 1 0 + ] + weights = T[b, c] + order = 3 + GenericTableau(matrix, weights, order) +end + +""" +SDIRK_LobattoIIIC_2_2 +""" +struct SDIRK_LobattoIIIC_2_2 <: TableauName end + +function ButcherTableau(::SDIRK_LobattoIIIC_2_2, ::Type{T}=Float64) where {T} + DIRK22(0, 1, 0, T) +end + +""" +SDIRK_2_2 +""" +struct SDIRK_2_2 <: TableauName end + +function ButcherTableau(::SDIRK_2_2, ::Type{T}=Float64) where {T} + DIRK22(1, 0, 1, T) +end + +""" +SDIRK_SSP_2_3 +SDIRK_Crouzeix_2_3 +""" +struct SDIRK_SSP_2_3 <: TableauName end + +function ButcherTableau(::SDIRK_SSP_2_3, ::Type{T}=Float64) where {T} + DIRK23(-1, T) +end + +struct SDIRK_Crouzeix_2_3 <: TableauName end + +function ButcherTableau(::SDIRK_Crouzeix_2_3, ::Type{T}=Float64) where {T} + ButcherTableau(SDIRK_SSP_2_3(), T) +end + +############ +# 3 stages # +############ +""" +SDIRK_3_2 +""" +struct SDIRK_3_2 <: TableauName end + +function ButcherTableau(::SDIRK_3_2, ::Type{T}=Float64) where {T<:Real} + c = (2 - sqrt(2)) / 2 + b = (1 - 2 * c) / (4 * c) + a = 1 - b - c + matrix = T[ + 0 0 0 + c c 0 + a b c + ] + weights = T[a, b, c] + order = 2 + tableau = GenericTableau(matrix, weights, order) + + ĉ = -2 * (c^2) * (1 - c + c^2) 
/ (2 * c - 1) + b̂ = c * (-2 + 7 * c - 5(c^2) + 4(c^3)) / (2 * (2 * c - 1)) + â = 1 - b̂ - ĉ + emb_weights = T[â, b̂, ĉ] + emb_order = 1 + EmbeddedTableau(tableau, emb_weights, emb_order) +end + +""" +DIRK_TRBDF_3_2 +""" +struct DIRK_TRBDF_3_2 <: TableauName end + +function ButcherTableau(::DIRK_TRBDF_3_2, ::Type{T}=Float64) where {T<:Real} + γ = 2 - sqrt(2) + d = γ / 2 + w = sqrt(2) / 4 + matrix = T[ + 0 0 0 + d d 0 + w w d + ] + weights = T[w, w, d] + order = 3 + tableau = GenericTableau(matrix, weights, order) + + ĉ = d / 3 + b̂ = (3 * w + 1) / 3 + â = 1 - b̂ - ĉ + emb_weights = [â, b̂, ĉ] + emb_order = 2 + EmbeddedTableau(tableau, emb_weights, emb_order) +end + +""" +DIRK_TRX_3_2 +""" +struct DIRK_TRX_3_2 <: TableauName end + +function ButcherTableau(::DIRK_TRX_3_2, ::Type{T}=Float64) where {T<:Real} + a = 1 / 4 + b = 1 / 2 + matrix = T[ + 0 0 0 + a a 0 + a b a + ] + weights = T[a, b, a] + order = 3 + tableau = GenericTableau(matrix, weights, order) + + c = 1 / 6 + d = 2 / 3 + emb_weights = [c, d, c] + emb_order = 2 + EmbeddedTableau(tableau, emb_weights, emb_order) +end + +""" +SDIRK_3_3 +""" +struct SDIRK_3_3 <: TableauName end + +function ButcherTableau(::SDIRK_3_3, ::Type{T}=Float64) where {T} + α = 2 * cospi(1 // 18) / sqrt(3) + a = (1 - α) / 2 + b = -3 * α^2 + 4 * α - 1 // 4 + c = 3 * α^2 - 5 * α + 5 // 4 + matrix = T[ + α 0 0 + a α 0 + b c α + ] + weights = T[b, c, α] + order = 3 + GenericTableau(matrix, weights, order) +end + +""" +SDIRK_Crouzeix_3_4 +""" +struct SDIRK_Crouzeix_3_4 <: TableauName end + +function ButcherTableau(::SDIRK_Crouzeix_3_4, ::Type{T}=Float64) where {T} + α = 2 * cospi(1 // 18) / sqrt(3) + a = (1 + α) / 2 + b = -α / 2 + c = 1 + α + d = -(1 + 2 * α) + e = 1 // 6 / α^2 + f = 1 - 1 // 3 / α^2 + matrix = T[ + a 0 0 + b a 0 + c d a + ] + weights = T[e, f, e] + order = 4 + GenericTableau(matrix, weights, order) +end + +""" +SDIRK_Norsett_3_4 +""" +struct SDIRK_Norsett_3_4 <: TableauName end + +function ButcherTableau(::SDIRK_Norsett_3_4, ::Type{T}=Float64) where {T} + # One of the three roots of x^3 - 3*x^2 + x/2 - 1/24 + # The largest one brings most stability + α = 1.0685790213016289 + b = 1 // 2 - α + c = 2 * α + d = 1 - 4 * α + e = 1 // 6 / (1 - 2 * α)^2 + f = 1 - 1 // 3 / (1 - 2 * α)^2 + matrix = T[ + α 0 0 + b α 0 + c d α + ] + weights = T[e, f, e] + order = 4 + GenericTableau(matrix, weights, order) +end + +""" +DIRK_LobattoIIIC_3_4 +""" +struct DIRK_LobattoIIIC_3_4 <: TableauName end + +function ButcherTableau(::DIRK_LobattoIIIC_3_4, ::Type{T}=Float64) where {T} + a = 1 // 4 + b = 1 // 6 + c = 2 // 3 + matrix = T[ + 0 0 0 + a a 0 + 0 1 0 + ] + weights = T[b, c, b] + order = 4 + GenericTableau(matrix, weights, order) +end + +############ +# 4 stages # +############ +""" +SDIRK_4_3 +""" +struct SDIRK_4_3 <: TableauName end + +function ButcherTableau(::SDIRK_4_3, ::Type{T}=Float64) where {T} + a = 1 // 2 + b = 1 // 6 + c = -1 // 2 + d = 3 // 2 + e = -3 // 2 + matrix = T[ + a 0 0 0 + b a 0 0 + c a a 0 + d e a a + ] + weights = T[d, e, a, a] + order = 4 + GenericTableau(matrix, weights, order) +end diff --git a/src/ODEs/ODESolvers/TableausEX.jl b/src/ODEs/ODESolvers/TableausEX.jl new file mode 100644 index 000000000..4a6ae3392 --- /dev/null +++ b/src/ODEs/ODESolvers/TableausEX.jl @@ -0,0 +1,335 @@ +########### +# 1 stage # +########### +function EXRK11(::Type{T}) where {T} + matrix = T[0;;] + weights = T[1] + order = 1 + GenericTableau(matrix, weights, order) +end + +""" +EXRK_Euler_1_1 +FE +""" +struct EXRK_Euler_1_1 <: TableauName end + +function 
ButcherTableau(::EXRK_Euler_1_1, ::Type{T}=Float64) where {T} + EXRK11(T) +end + +############ +# 2 stages # +############ +function EXRK22(α::Real, ::Type{T}=Float64) where {T} + a = 1 // 2 / α + matrix = T[ + 0 0 + α 0 + ] + weights = T[1-a, a] + order = 2 + GenericTableau(matrix, weights, order) +end + +""" +EXRK_Midpoint_2_2 +""" +struct EXRK_Midpoint_2_2 <: TableauName end + +function ButcherTableau(::EXRK_Midpoint_2_2, ::Type{T}=Float64) where {T} + EXRK22(1 // 2, T) +end + +""" +EXRK_SSP_2_2 +EXRK_Heun_2_2 +""" +struct EXRK_SSP_2_2 <: TableauName end + +function ButcherTableau(::EXRK_SSP_2_2, ::Type{T}=Float64) where {T} + tableau = EXRK22(1, T) + emb_weights = T[1, 0] + emb_order = 1 + EmbeddedTableau(tableau, emb_weights, emb_order) +end + +struct EXRK_Heun_2_2 <: TableauName end + +function ButcherTableau(::EXRK_Heun_2_2, ::Type{T}=Float64) where {T} + ButcherTableau(EXRK_SSP_2_2(), T) +end + +""" +EXRK_Ralston_2_2 +""" +struct EXRK_Ralston_2_2 <: TableauName end + +function ButcherTableau(::EXRK_Ralston_2_2, ::Type{T}=Float64) where {T} + EXRK22(2 // 3, T) +end + +############ +# 3 stages # +############ +function EXRK33(α::Real, β::Real, ::Type{T}=Float64) where {T} + b = β * (β - α) / α / (2 - 3 * α) + c = β - b + d = (3 * β - 2) // 6 / α / (β - α) + e = (2 - 3 * α) // 6 / β / (β - α) + matrix = T[ + 0 0 0 + α 0 0 + c b 0 + ] + weights = T[1-d-e, d, e] + order = 3 + GenericTableau(matrix, weights, order) +end + +function EXRK33_1(α::Real, ::Type{T}=Float64) where {T} + a = 2 // 3 + b = 1 // 4 / α + c = 2 // 3 - b + d = 1 // 4 + matrix = T[ + 0 0 0 + a 0 0 + c b 0 + ] + weights = T[d, 1-α-d, α] + order = 3 + GenericTableau(matrix, weights, order) +end + +function EXRK33_2(α::Real, ::Type{T}=Float64) where {T} + a = 2 // 3 + b = 1 // 4 / α + c = -b + d = 3 // 4 + matrix = T[ + 0 0 0 + a 0 0 + c b 0 + ] + weights = T[1-α-d, d, α] + order = 3 + GenericTableau(matrix, weights, order) +end + +""" +EXRK_Kutta_3_3 +""" +struct EXRK_Kutta_3_3 <: TableauName end + +function ButcherTableau(::EXRK_Kutta_3_3, ::Type{T}=Float64) where {T} + EXRK33(1 // 2, 1, T) +end + +""" +EXRK_Heun_3_3 +""" +struct EXRK_Heun_3_3 <: TableauName end + +function ButcherTableau(::EXRK_Heun_3_3, ::Type{T}=Float64) where {T} + EXRK33(1 // 3, 2 // 3, T) +end + +""" +EXRK_Wray_3_3 +EXRK_VanDerHouwen_3_3 +""" +struct EXRK_Wray_3_3 <: TableauName end + +function ButcherTableau(::EXRK_Wray_3_3, ::Type{T}=Float64) where {T} + EXRK33(8 // 15, 2 // 3, T) +end + +struct EXRK_VanDerHouwen_3_3 <: TableauName end + +function ButcherTableau(::EXRK_VanDerHouwen_3_3, ::Type{T}=Float64) where {T} + ButcherTableau(EXRK_Wray_3_3(), T) +end + +""" +EXRK_Ralston_3_3 +""" +struct EXRK_Ralston_3_3 <: TableauName end + +function ButcherTableau(::EXRK_Ralston_3_3, ::Type{T}=Float64) where {T} + EXRK33(1 // 2, 3 // 4, T) +end + +""" +EXRK_SSP_3_3 +""" +struct EXRK_SSP_3_3 <: TableauName end + +function ButcherTableau(::EXRK_SSP_3_3, ::Type{T}=Float64) where {T} + EXRK33(1, 1 // 2, T) +end + +""" +EXRK_SSP_3_2 +""" +struct EXRK_SSP_3_2 <: TableauName end + +function ButcherTableau(::EXRK_SSP_3_2, ::Type{T}=Float64) where {T} + a = 1 // 2 + b = 1 // 3 + matrix = T[ + 0 0 0 + a 0 0 + a a 0 + ] + weights = T[b, b, b] + order = 2 + GenericTableau(matrix, weights, order) +end + +""" +EXRK_Fehlberg_3_2 +""" +struct EXRK_Fehlberg_3_2 <: TableauName end + +function ButcherTableau(::EXRK_Fehlberg_3_2, ::Type{T}=Float64) where {T} + a = 1 // 2 + b = 1 // 256 + c = 255 // 256 + d = 1 // 512 + matrix = T[ + 0 0 0 + a 0 0 + b c 0 + ] + weights = 
T[d, c, d] + order = 2 + tableau = GenericTableau(matrix, weights, order) + + emb_weights = T[b, c, 0] + emb_order = 1 + EmbeddedTableau(tableau, emb_weights, emb_order) +end + +############ +# 4 stages # +############ +""" +EXRK_RungeKutta_4_4 +""" +struct EXRK_RungeKutta_4_4 <: TableauName end + +function ButcherTableau(::EXRK_RungeKutta_4_4, ::Type{T}=Float64) where {T} + a = 1 // 2 + b = 1 // 6 + c = 1 // 3 + matrix = T[ + 0 0 0 0 + a 0 0 0 + 0 a 0 0 + 0 0 1 0 + ] + weights = T[b, c, c, b] + order = 4 + GenericTableau(matrix, weights, order) +end + +""" +EXRK_Simpson_4_4 +""" +struct EXRK_Simpson_4_4 <: TableauName end + +function ButcherTableau(::EXRK_Simpson_4_4, ::Type{T}=Float64) where {T} + a = 1 // 3 + b = -1 // 3 + c = -1 + d = 1 // 8 + e = 3 // 8 + matrix = T[ + 0 0 0 0 + a 0 0 0 + b 1 0 0 + 1 c 1 0 + ] + weights = T[d, e, e, d] + order = 4 + GenericTableau(matrix, weights, order) +end + +""" +EXRK_Ralston_4_4 +""" +struct EXRK_Ralston_4_4 <: TableauName end + +function ButcherTableau(::EXRK_Ralston_4_4, ::Type{T}=Float64) where {T} + α = 2 // 5 + β = 7 // 8 - 3 * sqrt(5) / 16 + a = β * (β - α) / 2 / α / (1 - 2 * α) + b = β - a + c = (1 - α) * (α + β - 1 - (2 * β - 1)^2) / 2 / α / (β - α) / (6 * α * β - 4 * (α + β) + 3) + d = (1 - 2 * α) * (1 - α) * (1 - β) / β / (β - α) / (6 * α * β - 4 * (α + β) + 3) + e = 1 - c - d + f = 1 // 2 + (1 - 2 * (α + β)) / 12 / α / β + g = (2 * β - 1) / 12 / α / (β - α) / (1 - α) + h = (1 - 2 * α) / 12 / β / (β - α) / (1 - β) + i = 1 // 2 + (2 * (α + β) - 3) / 12 / (1 - α) / (1 - β) + matrix = T[ + 0 0 0 0 + α 0 0 0 + b a 0 0 + e c d 0 + ] + weights = T[f, g, h, i] + order = 4 + GenericTableau(matrix, weights, order) +end + +""" +EXRK_SSP_4_3 +""" +struct EXRK_SSP_4_3 <: TableauName end + +function ButcherTableau(::EXRK_SSP_4_3, ::Type{T}=Float64) where {T} + a = 1 // 2 + b = 1 // 6 + c = 1 // 4 + matrix = T[ + 0 0 0 0 + a 0 0 0 + a a 0 0 + b b b 0 + ] + weights = T[b, b, b, a] + order = 3 + tableau = GenericTableau(matrix, weights, order) + + emb_weights = T[c, c, c, c] + emb_order = 2 + EmbeddedTableau(tableau, emb_weights, emb_order) +end + +""" +EXRK_BogackiShampine_4_3 +""" +struct EXRK_BogackiShampine_4_3 <: TableauName end + +function ButcherTableau(::EXRK_BogackiShampine_4_3, ::Type{T}=Float64) where {T} + a = 1 // 2 + b = 3 // 4 + c = 2 // 9 + d = 1 // 3 + e = 4 // 9 + matrix = T[ + 0 0 0 0 + a 0 0 0 + 0 b 0 0 + c d e 0 + ] + weights = T[c, d, e, 0] + order = 3 + tableau = GenericTableau(matrix, weights, order) + + emb_weights = T[0, a, b, 1] + emb_order = 2 + EmbeddedTableau(tableau, emb_weights, emb_order) +end diff --git a/src/ODEs/ODESolvers/TableausIMEX.jl b/src/ODEs/ODESolvers/TableausIMEX.jl new file mode 100644 index 000000000..e297d0469 --- /dev/null +++ b/src/ODEs/ODESolvers/TableausIMEX.jl @@ -0,0 +1,276 @@ +# All these schemes come from the following paper +# Implicit-explicit Runge-Kutta methods for time-dependent partial differential +# equations, Uri M. Ascher, Steven J. Ruuth, Raymond J. Spiteri, Applied +# numerical mathematics 1997. 
+# https://www.sciencedirect.com/science/article/abs/pii/S0168927497000561 + +############ +# 2 stages # +############ +""" +IMEXRK_1_1_1 +Backward-Forward Euler pair, order 1 +""" +struct IMEXRK_1_1_1 <: TableauName end + +function ButcherTableau(::IMEXRK_1_1_1, T::Type{<:Real}=Float64) + im_matrix = T[ + 0 0 + 0 1 + ] + im_weights = T[0, 1] + im_order = 1 + im_tableau = GenericTableau(im_matrix, im_weights, im_order) + + ex_matrix = T[ + 0 0 + 1 0 + ] + ex_weights = T[1, 0] + ex_order = 1 + ex_tableau = GenericTableau(ex_matrix, ex_weights, ex_order) + + imex_order = 1 + IMEXTableau(im_tableau, ex_tableau, imex_order) +end + +""" +IMEXRK_1_2_1 +Backward-Forward Euler pair with same weights, order 1 +""" +struct IMEXRK_1_2_1 <: TableauName end + +function ButcherTableau(::IMEXRK_1_2_1, T::Type{<:Real}=Float64) + im_matrix = T[ + 0 0 + 0 1 + ] + im_weights = T[0, 1] + im_order = 1 + im_tableau = GenericTableau(im_matrix, im_weights, im_order) + + ex_matrix = T[ + 0 0 + 1 0 + ] + ex_weights = T[0, 1] + ex_order = 1 + ex_tableau = GenericTableau(ex_matrix, ex_weights, ex_order) + + imex_order = 1 + IMEXTableau(im_tableau, ex_tableau, imex_order) +end + +""" +IMEXRK_1_2_2 +Implicit-Explicit midpoint pair, order 2 +""" +struct IMEXRK_1_2_2 <: TableauName end + +function ButcherTableau(::IMEXRK_1_2_2, T::Type{<:Real}=Float64) + a = 1 / 2 + im_matrix = T[ + 0 0 + 0 a + ] + im_weights = T[0, 1] + im_order = 2 + im_tableau = GenericTableau(im_matrix, im_weights, im_order) + + ex_matrix = T[ + 0 0 + a 0 + ] + ex_weights = T[0, 1] + ex_order = 2 + ex_tableau = GenericTableau(ex_matrix, ex_weights, ex_order) + + imex_order = 2 + IMEXTableau(im_tableau, ex_tableau, imex_order) +end + +""" +IMEXRK_2_2_2 +L-stable, 2-stage, 2-order SDIRK +""" +struct IMEXRK_2_2_2 <: TableauName end + +function ButcherTableau(::IMEXRK_2_2_2, T::Type{<:Real}=Float64) + a = (2 - sqrt(2)) / 2 + b = 1 - a + c = 1 - 1 / 2 / a + d = 1 - c + im_matrix = T[ + 0 0 0 + 0 a 0 + 0 b a + ] + im_weights = T[0, b, a] + im_order = 2 + im_tableau = GenericTableau(im_matrix, im_weights, im_order) + + ex_matrix = T[ + 0 0 0 + a 0 0 + c d 0 + ] + ex_weights = T[c, d, 0] + ex_order = 2 + ex_tableau = GenericTableau(ex_matrix, ex_weights, ex_order) + + imex_order = 2 + IMEXTableau(im_tableau, ex_tableau, imex_order) +end + +""" +IMEXRK_2_3_2 +L-stable, 2-stage, 2-order SDIRK +""" +struct IMEXRK_2_3_2 <: TableauName end + +function ButcherTableau(::IMEXRK_2_3_2, T::Type{<:Real}=Float64) + a = (2 - sqrt(2)) / 2 + b = 1 - a + c = -2 * sqrt(2) / 3 + d = 1 - c + im_matrix = T[ + 0 0 0 + 0 a 0 + 0 b a + ] + im_weights = T[0, b, a] + im_order = 3 + im_tableau = GenericTableau(im_matrix, im_weights, im_order) + + ex_matrix = T[ + 0 0 0 + a 0 0 + c d 0 + ] + ex_weights = T[0, b, a] + ex_order = 3 + ex_tableau = GenericTableau(ex_matrix, ex_weights, ex_order) + + imex_order = 3 + IMEXTableau(im_tableau, ex_tableau, imex_order) +end + +""" +IMEXRK_2_3_3 +2-stage, 3-order SDIRK scheme with best damping properties +""" +struct IMEXRK_2_3_3 <: TableauName end + +function ButcherTableau(::IMEXRK_2_3_3, T::Type{<:Real}=Float64) + a = (3 + sqrt(3)) / 6 + b = 1 - 2 * a + c = 1 // 2 + d = a - 1 + e = 2 * (1 - a) + im_matrix = T[ + 0 0 0 + 0 a 0 + 0 b a + ] + im_weights = T[0, c, c] + im_order = 3 + im_tableau = GenericTableau(im_matrix, im_weights, im_order) + + ex_matrix = T[ + 0 0 0 + a 0 0 + d e 0 + ] + ex_weights = T[0, c, c] + ex_order = 3 + ex_tableau = GenericTableau(ex_matrix, ex_weights, ex_order) + + imex_order = 3 + IMEXTableau(im_tableau, 
ex_tableau, imex_order) +end + +""" +IMEXRK_3_4_3 +L-stable, 3-stage, 3-order SDIRK +""" +struct IMEXRK_3_4_3 <: TableauName end + +function ButcherTableau(::IMEXRK_3_4_3, T::Type{<:Real}=Float64) + # a is the middle root of 6 * x^3 - 18 * x^2 + 9 * x - 1 + a = 0.435866521508459 + b = (1 - a) / 2 + c = -3 * a^2 / 2 + 4 * a - 1 // 4 + d = 3 * a^2 / 2 - 5 * a + 5 / 4 + h = 0.5529291479 + e = (1 - 9 * a / 2 + 3 * a^2 / 2 + 11 // 4 - 21 * a / 2 + 15 * a^2 / 4) * h - 7 // 2 + 13 * a - 9 * a^2 / 2 + f = (-1 + 9 * a / 2 - 3 * a^2 / 2 - 11 // 4 + 21 * a / 2 - 15 * a^2 / 4) * h + 4 - 25 * a / 2 + 9 * a^2 / 2 + g = 1 - 2 * h + im_matrix = T[ + 0 0 0 0 + 0 a 0 0 + 0 b a 0 + 0 c d a + ] + im_weights = T[0, c, d, a] + im_order = 3 + im_tableau = GenericTableau(im_matrix, im_weights, im_order) + + ex_matrix = T[ + 0 0 0 0 + a 0 0 0 + e f 0 0 + g h h 0 + ] + ex_weights = T[0, c, d, a] + ex_order = 3 + ex_tableau = GenericTableau(ex_matrix, ex_weights, ex_order) + + imex_order = 3 + IMEXTableau(im_tableau, ex_tableau, imex_order) +end + +""" +IMEXRK_4_4_3 +L-stable, 4-stage, 3-order SDIRK +""" +struct IMEXRK_4_4_3 <: TableauName end + +function ButcherTableau(::IMEXRK_4_4_3, T::Type{<:Real}=Float64) + a = 1 // 2 + b = 1 // 6 + c = -1 // 2 + d = 3 // 2 + e = -3 // 2 + f = 11 // 18 + g = 1 // 18 + h = 5 // 6 + i = -5 // 6 + j = 1 // 4 + k = 7 // 4 + l = 3 // 4 + m = -7 // 4 + im_matrix = T[ + 0 0 0 0 0 + 0 a 0 0 0 + 0 b a 0 0 + 0 c a a 0 + 0 d e a a + ] + im_weights = T[0, d, e, a, a] + im_order = 3 + im_tableau = GenericTableau(im_matrix, im_weights, im_order) + + ex_matrix = T[ + 0 0 0 0 0 + a 0 0 0 0 + f g 0 0 0 + h i a 0 0 + j k l m 0 + ] + ex_weights = T[j, k, l, m, 0] + ex_order = 3 + ex_tableau = GenericTableau(ex_matrix, ex_weights, ex_order) + + imex_order = 3 + IMEXTableau(im_tableau, ex_tableau, imex_order) +end diff --git a/src/ODEs/ODESolvers/ThetaMethod.jl b/src/ODEs/ODESolvers/ThetaMethod.jl new file mode 100644 index 000000000..77e301f7f --- /dev/null +++ b/src/ODEs/ODESolvers/ThetaMethod.jl @@ -0,0 +1,191 @@ +""" + struct ThetaMethod <: ODESolver end + +θ-method ODE solver. +```math +residual(tx, ux, vx) = 0, + +tx = t_n + θ * dt +ux = u_n + θ * dt * x +vx = x, + +u_(n+1) = u_n + dt * x. +``` +""" +struct ThetaMethod <: ODESolver + sysslvr::NonlinearSolver + dt::Real + θ::Real + + function ThetaMethod(sysslvr, dt, θ) + θ01 = clamp(θ, 0, 1) + if θ01 != θ + msg = """ + The parameter θ of the θ-method must lie between zero and one. + Setting θ to $(θ01). 
+ """ + @warn msg + end + + if iszero(θ01) + ForwardEuler(sysslvr, dt) + else + new(sysslvr, dt, θ01) + end + end +end + +MidPoint(sysslvr, dt) = ThetaMethod(sysslvr, dt, 0.5) +BackwardEuler(sysslvr, dt) = ThetaMethod(sysslvr, dt, 1) + +################## +# Nonlinear case # +################## +function allocate_odecache( + odeslvr::ThetaMethod, odeop::ODEOperator, + t0::Real, us0::NTuple{1,AbstractVector} +) + u0 = us0[1] + us0N = (u0, u0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + uθ = copy(u0) + + sysslvrcache = nothing + odeslvrcache = (uθ, sysslvrcache) + + (odeslvrcache, odeopcache) +end + +function ode_march!( + stateF::NTuple{1,AbstractVector}, + odeslvr::ThetaMethod, odeop::ODEOperator, + t0::Real, state0::NTuple{1,AbstractVector}, + odecache +) + # Unpack inputs + u0 = state0[1] + odeslvrcache, odeopcache = odecache + uθ, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + dt, θ = odeslvr.dt, odeslvr.θ + + # Define scheme + x = stateF[1] + dtθ = θ * dt + tx = t0 + dtθ + function usx(x) + copy!(uθ, u0) + axpy!(dtθ, x, uθ) + (uθ, x) + end + ws = (dtθ, 1) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + stageop = NonlinearStageOperator( + odeop, odeopcache, + tx, usx, ws + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + + # Update state + tF = t0 + dt + stateF = _udate_theta!(stateF, state0, dt, x) + + # Pack outputs + odeslvrcache = (uθ, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +############### +# Linear case # +############### +function allocate_odecache( + odeslvr::ThetaMethod, odeop::ODEOperator{<:AbstractLinearODE}, + t0::Real, us0::NTuple{1,AbstractVector} +) + u0 = us0[1] + us0N = (u0, u0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + constant_stiffness = is_form_constant(odeop, 0) + constant_mass = is_form_constant(odeop, 1) + reuse = (constant_stiffness && constant_mass) + + J = allocate_jacobian(odeop, t0, us0N, odeopcache) + r = allocate_residual(odeop, t0, us0N, odeopcache) + + sysslvrcache = nothing + odeslvrcache = (reuse, J, r, sysslvrcache) + + (odeslvrcache, odeopcache) +end + +function ode_march!( + stateF::NTuple{1,AbstractVector}, + odeslvr::ThetaMethod, odeop::ODEOperator{<:AbstractLinearODE}, + t0::Real, state0::NTuple{1,AbstractVector}, + odecache +) + # Unpack inputs + u0 = state0[1] + odeslvrcache, odeopcache = odecache + reuse, J, r, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + dt, θ = odeslvr.dt, odeslvr.θ + + # Define scheme + # Set x to zero to split jacobian and residual + x = stateF[1] + fill!(x, zero(eltype(x))) + dtθ = θ * dt + tx = t0 + dtθ + usx = (u0, x) + ws = (dtθ, 1) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Solve the discrete ODE operator + stageop = LinearStageOperator( + odeop, odeopcache, + tx, usx, ws, + J, r, reuse, sysslvrcache + ) + + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + + # Update state + tF = t0 + dt + stateF = _udate_theta!(stateF, state0, dt, x) + + # Pack outputs + odeslvrcache = (reuse, J, r, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +######### +# Utils # +######### +function _udate_theta!( + stateF::NTuple{1,AbstractVector}, state0::NTuple{1,AbstractVector}, + dt::Real, x::AbstractVector +) + # uF = u0 + dt * x + # We always have x === uF + u0 = state0[1] + uF = stateF[1] + rmul!(uF, dt) + axpy!(1, u0, uF) + stateF = 
(uF,) +end diff --git a/src/ODEs/ODETools/AffineNewmark.jl b/src/ODEs/ODETools/AffineNewmark.jl deleted file mode 100644 index 13b0e4439..000000000 --- a/src/ODEs/ODETools/AffineNewmark.jl +++ /dev/null @@ -1,129 +0,0 @@ -function solve_step!( - x1::NTuple{3,AbstractVector}, - solver::Newmark, - op::AffineODEOperator, - x0::NTuple{3,AbstractVector}, - t0::Real, - cache) # -> (uF,tF) - - dt = solver.dt - γ = solver.γ - β = solver.β - t1 = t0+dt - u0, v0, a0 = x0 - u1, v1, a1 = x1 - newmatrix = true - - if cache === nothing - # Allocate caches - newmark_cache = allocate_cache(op,v0,a0) - (v,a, ode_cache) = newmark_cache - - # Allocate matrices and vectors - A, b = _allocate_matrix_and_vector(op,t0,x0,ode_cache) - - # Create affine operator cache - affOp_cache = (A,b,nothing) - else - newmark_cache, affOp_cache = cache - end - - # Unpack and update caches - (v,a, ode_cache) = newmark_cache - ode_cache = update_cache!(ode_cache,op,t1) - A,b,l_cache = affOp_cache - - # Define Newmark operator - newmark_affOp = NewmarkAffineOperator(op,t1,dt,γ,β,x0,newmark_cache) - - # Fill matrix and vector - _matrix_and_vector!(A,b,newmark_affOp,u1) - - # Create affine operator with updated RHS - affOp = AffineOperator(A,b) - l_cache = solve!(u1,solver.nls,affOp,l_cache,newmatrix) - - # Update auxiliary variables - @. u1 = u1 + u0 - @. v1 = γ/(β*dt)*(u1-u0) + (1-γ/β)*v0 + dt*(1-γ/(2*β))*a0 - @. a1 = 1.0/(β*dt^2)*(u1-u0) - 1.0/(β*dt)*v0 - (1-2*β)/(2*β)*a0 - - # Pack caches - affOp_cache = A,b,l_cache - cache = (newmark_cache, affOp_cache) - x1 = (u1,v1,a1) - - return (x1,t1,cache) - -end - -""" -Affine operator that represents the Newmark Affine operator at a -given time step, i.e., M(t)(u_n+1-u_n)/dt + K(t)u_n+1 + b(t) -""" -struct NewmarkAffineOperator <: NonlinearOperator - odeop::AffineODEOperator - t1::Float64 - dt::Float64 - γ::Float64 - β::Float64 - x0::NTuple{3,AbstractVector} - ode_cache -end - -function _matrix_and_vector!( - A::AbstractMatrix, - b::AbstractVector, - affOp::NewmarkAffineOperator, - x::AbstractVector) - jacobian!(A,affOp,x) - residual!(b,affOp,x) -end - -function residual!(b::AbstractVector,op::NewmarkAffineOperator,x::AbstractVector) - u1 = x - u0, v0, a0 = op.x0 - v1, a1, cache = op.ode_cache - @. a1 = (1.0/(op.β*op.dt^2)) * (u1 - u0) - (1.0/(op.β*op.dt)) * v0 - ((1.0-2.0*op.β)/(2.0*op.β)) * a0 - @. v1 = op.γ/(op.β*op.dt)*(u1-u0) + (1-op.γ/op.β)*v0 + op.dt*(1-op.γ/(2*op.β))*a0 - residual!(b,op.odeop,op.t1,(u1,v1,a1),cache) - b .*= -1.0 -end - -function jacobian!(A::AbstractMatrix,op::NewmarkAffineOperator,x::AbstractVector) - u1 = x - u0, v0, a0 = op.x0 - v1, a1, cache = op.ode_cache - @. a1 = 1.0/(op.β*op.dt^2)*(u1-u0) - 1.0/(op.β*op.dt)*v0 - (1-2*op.β)/(2*op.β)*a0 - @. 
v1 = op.γ/(op.β*op.dt)*(u1-u0) + (1-op.γ/op.β)*v0 + op.dt*(1-op.γ/(2*op.β))*a0 - z = zero(eltype(A)) - fillstored!(A,z) - jacobians!(A,op.odeop,op.t1,(u1,v1,a1),(1.0,op.γ/(op.β*op.dt),1.0/(op.β*op.dt^2)),cache) -end - -function _allocate_matrix(odeop::ODEOperator,t0::Real,x::Tuple{Vararg{AbstractVector}},ode_cache) - A = allocate_jacobian(odeop,t0,x[1],ode_cache) - return A -end - -function _allocate_matrix_and_vector(odeop::ODEOperator,t0::Real,x::Tuple{Vararg{AbstractVector}},ode_cache) - b = allocate_residual(odeop,t0,x[1],ode_cache) - A = allocate_jacobian(odeop,t0,x[1],ode_cache) - return A, b -end - -# # function allocate_residual(op::NewmarkAffineOperator,x::AbstractVector) -# # v1, a1, cache = op.ode_cache -# # allocate_residual(op.odeop,x,cache) -# # end - -# # function allocate_jacobian(op::NewmarkAffineOperator,x::AbstractVector) -# # v1, a1, cache = op.ode_cache -# # allocate_jacobian(op.odeop,x,cache) -# # end - -# function _allocate_matrix_and_vector(op::NewmarkAffineOperator,x::AbstractVector) -# A = allocate_jacobian(op,x) -# b = allocate_residual(op,x) -# return A,b -# end diff --git a/src/ODEs/ODETools/AffineThetaMethod.jl b/src/ODEs/ODETools/AffineThetaMethod.jl deleted file mode 100644 index 5f9a5148c..000000000 --- a/src/ODEs/ODETools/AffineThetaMethod.jl +++ /dev/null @@ -1,149 +0,0 @@ -function solve_step!(uf::AbstractVector, - solver::ThetaMethod, - op::AffineODEOperator, - u0::AbstractVector, - t0::Real, - cache) # -> (uF,tF) - - dt = solver.dt - solver.θ == 0.0 ? dtθ = dt : dtθ = dt*solver.θ - tθ = t0+dtθ - - if cache === nothing - ode_cache = allocate_cache(op) - vθ = similar(u0) - vθ .= 0.0 - l_cache = nothing - A, b = _allocate_matrix_and_vector(op,t0,u0,ode_cache) - else - ode_cache, vθ, A, b, l_cache = cache - end - - ode_cache = update_cache!(ode_cache,op,tθ) - - _matrix_and_vector!(A,b,op,tθ,dtθ,u0,ode_cache,vθ) - afop = AffineOperator(A,b) - - newmatrix = true - l_cache = solve!(uf,solver.nls,afop,l_cache,newmatrix) - - uf = uf + u0 - if 0.0 < solver.θ < 1.0 - uf = uf*(1.0/solver.θ)-u0*((1-solver.θ)/solver.θ) - end - - cache = (ode_cache, vθ, A, b, l_cache) - - tf = t0+dt - return (uf,tf,cache) - -end - -function solve_step!(uf::AbstractVector, - solver::ThetaMethod, - op::ConstantODEOperator, - u0::AbstractVector, - t0::Real, - cache) # -> (uF,tF) - - dt = solver.dt - solver.θ == 0.0 ? dtθ = dt : dtθ = dt*solver.θ - tθ = t0+dtθ - - if cache === nothing - ode_cache = allocate_cache(op) - vθ = similar(u0) - vθ .= 0.0 - A, b = _allocate_matrix_and_vector(op,t0,u0,ode_cache) - A = _matrix!(A,op,tθ,dtθ,u0,ode_cache,vθ) - b = _vector!(b,op,tθ,dtθ,vθ,ode_cache,vθ) - M = _allocate_matrix(op,t0,u0,ode_cache) - M = _mass_matrix!(M,op,tθ,dtθ,u0,ode_cache,vθ) - _u0 = similar(u0,(axes(M)[2],)) # Needed for the distributed case - copy!(_u0,u0) - l_cache = nothing - newmatrix = true - else - ode_cache, _u0, vθ, A, b, M, l_cache = cache - newmatrix = false - copy!(_u0,u0) - end - - ode_cache = update_cache!(ode_cache,op,tθ) - - vθ = b + M*_u0 - afop = AffineOperator(A,vθ) - - l_cache = solve!(uf,solver.nls,afop,l_cache,newmatrix) - - if 0.0 < solver.θ < 1.0 - @. 
uf = uf * (1.0/solver.θ) - u0 * ((1-solver.θ)/solver.θ) - end - - cache = (ode_cache, _u0, vθ, A, b, M, l_cache) - - tf = t0+dt - return (uf,tf,cache) - -end - -""" -Affine operator that represents the θ-method affine operator at a -given time step, i.e., M(t)(u_n+θ-u_n)/dt + K(t)u_n+θ + b(t) -""" -function ThetaMethodAffineOperator(odeop::AffineODEOperator,tθ::Float64,dtθ::Float64, - u0::AbstractVector,ode_cache,vθ::AbstractVector) - # vθ .= 0.0 - A, b = _allocate_matrix_and_vector(odeop,t0,u0,ode_cache) - _matrix_and_vector!(A,b,odeop,tθ,dtθ,u0,ode_cache,vθ) - afop = AffineOperator(A,b) -end - -function _matrix_and_vector!(A,b,odeop,tθ,dtθ,u0,ode_cache,vθ) - _matrix!(A,odeop,tθ,dtθ,u0,ode_cache,vθ) - _vector!(b,odeop,tθ,dtθ,u0,ode_cache,vθ) -end - -function _matrix!(A,odeop,tθ,dtθ,u0,ode_cache,vθ) - z = zero(eltype(A)) - fillstored!(A,z) - jacobians!(A,odeop,tθ,(vθ,vθ),(1.0,1/dtθ),ode_cache) -end - -function _mass_matrix!(A,odeop,tθ,dtθ,u0,ode_cache,vθ) - z = zero(eltype(A)) - fillstored!(A,z) - jacobian!(A,odeop,tθ,(vθ,vθ),2,(1/dtθ),ode_cache) -end - -function _vector!(b,odeop,tθ,dtθ,u0,ode_cache,vθ) - residual!(b,odeop,tθ,(u0,vθ),ode_cache) - b .*= -1.0 -end - -function _allocate_matrix(odeop,t0,u0,ode_cache) - A = allocate_jacobian(odeop,t0,u0,ode_cache) - return A -end - -function _allocate_matrix_and_vector(odeop,t0,u0,ode_cache) - b = allocate_residual(odeop,t0,u0,ode_cache) - A = allocate_jacobian(odeop,t0,u0,ode_cache) - return A, b -end - -""" -Affine operator that represents the θ-method affine operator at a -given time step, i.e., M(t)(u_n+θ-u_n)/dt + K(t)u_n+θ + b(t) -""" -function ThetaMethodConstantOperator(odeop::ConstantODEOperator,tθ::Float64,dtθ::Float64, - u0::AbstractVector,ode_cache,vθ::AbstractVector) - b = allocate_residual(odeop,tθ,u0,ode_cache) - A = allocate_jacobian(odeop,tθ,u0,ode_cache) - residual!(b,odeop,tθ,(u0,vθ),ode_cache) - @. b = -1.0 * b - z = zero(eltype(A)) - fillstored!(A,z) - jacobians!(A,odeop,tθ,(vθ,vθ),(1.0,1/dtθ),ode_cache) - return A, b -end diff --git a/src/ODEs/ODETools/ConstantMatrixNewmark.jl b/src/ODEs/ODETools/ConstantMatrixNewmark.jl deleted file mode 100644 index 795f8097a..000000000 --- a/src/ODEs/ODETools/ConstantMatrixNewmark.jl +++ /dev/null @@ -1,108 +0,0 @@ -function solve_step!( - x1::NTuple{3,AbstractVector}, - solver::Newmark, - op::ConstantMatrixODEOperator, - x0::NTuple{3,AbstractVector}, - t0::Real, - cache) # -> (uF,tF) - - dt = solver.dt - γ = solver.γ - β = solver.β - t1 = t0+dt - u0, v0, a0 = x0 - u1, v1, a1 = x1 - - if cache === nothing - newmatrix = true - - # Allocate caches - newmark_cache = allocate_cache(op,v0,a0) - (v,a, ode_cache) = newmark_cache - - # Define Newmark operator - newmark_affOp = NewmarkConstantMatrixOperator(op,t1,dt,γ,β,x0,newmark_cache) - - # Allocate matrices and vectors - A, b = _allocate_matrix_and_vector(op,t0,x0,ode_cache) - jacobian!(A,newmark_affOp,u1) - - # Create affine operator cache - affOp_cache = (A,b,newmark_affOp,nothing) - else - newmatrix = false - newmark_cache, affOp_cache = cache - end - - # Unpack and update caches - v,a,ode_cache = newmark_cache - ode_cache = update_cache!(ode_cache,op,t1) - A,b,newmark_affOp,l_cache = affOp_cache - - # Fill vector - newmark_affOp = NewmarkConstantMatrixOperator(op,t1,dt,γ,β,x0,newmark_cache) - residual!(b,newmark_affOp,u1) - - # Create affine operator with updated RHS - affOp = AffineOperator(A,b) - l_cache = solve!(u1,solver.nls,affOp,l_cache,newmatrix) - - # Update auxiliary variables - @. u1 = u1 + u0 - @. 
v1 = γ/(β*dt)*(u1-u0) + (1-γ/β)*v0 + dt*(1-γ/(2*β))*a0 - @. a1 = 1.0/(β*dt^2)*(u1-u0) - 1.0/(β*dt)*v0 - (1-2*β)/(2*β)*a0 - - # Pack caches - affOp_cache = A,b,newmark_affOp,l_cache - cache = (newmark_cache, affOp_cache) - x1 = (u1,v1,a1) - - return (x1,t1,cache) - -end - -""" -Affine operator that represents the Newmark Affine operator with constant -matrix at a given time step, i.e., M(u_n+1-u_n)/dt + K u_n+1 + b(t) -""" -mutable struct NewmarkConstantMatrixOperator <: NonlinearOperator - odeop::ConstantMatrixODEOperator - t1::Float64 - dt::Float64 - γ::Float64 - β::Float64 - x0::NTuple{3,AbstractVector} - ode_cache -end - -function residual!(b::AbstractVector,op::NewmarkConstantMatrixOperator,x::AbstractVector) - u1 = x - u0, v0, a0 = op.x0 - v1, a1, cache = op.ode_cache - @. a1 = 1.0/(op.β*op.dt^2)*(u1-u0) - 1.0/(op.β*op.dt)*v0 - (1-2*op.β)/(2*op.β)*a0 - @. v1 = op.γ/(op.β*op.dt)*(u1-u0) + (1-op.γ/op.β)*v0 + op.dt*(1-op.γ/(2*op.β))*a0 - residual!(b,op.odeop,op.t1,(u1,v1,a1),cache) - b .*= -1.0 -end - -function jacobian!(A::AbstractMatrix,op::NewmarkConstantMatrixOperator,x::AbstractVector) - u1 = x - u0, v0, a0 = op.x0 - v1, a1, cache = op.ode_cache - @. a1 = 1.0/(op.β*op.dt^2)*(u1-u0) - 1.0/(op.β*op.dt)*v0 - (1-2*op.β)/(2*op.β)*a0 - @. v1 = op.γ/(op.β*op.dt)*(u1-u0) + (1-op.γ/op.β)*v0 + op.dt*(1-op.γ/(2*op.β))*a0 - z = zero(eltype(A)) - fillstored!(A,z) - jacobians!(A,op.odeop,op.t1,(u1,v1,a1),(1.0,op.γ/(op.β*op.dt),1.0/(op.β*op.dt^2)),cache) -end - -function _allocate_matrix(odeop::NewmarkConstantMatrixOperator,t0::Real,x::Tuple{Vararg{AbstractVector}},ode_cache) - A = allocate_jacobian(odeop,t0,x[1],ode_cache) - return A -end - -function _allocate_matrix_and_vector(odeop::NewmarkConstantMatrixOperator,t0::Real,x::Tuple{Vararg{AbstractVector}},ode_cache) - b = allocate_residual(odeop,t0,x[1],ode_cache) - A = allocate_jacobian(odeop,t0,x[1],ode_cache) - return A, b -end diff --git a/src/ODEs/ODETools/ConstantNewmark.jl b/src/ODEs/ODETools/ConstantNewmark.jl deleted file mode 100644 index f9cb848f1..000000000 --- a/src/ODEs/ODETools/ConstantNewmark.jl +++ /dev/null @@ -1,152 +0,0 @@ -function solve_step!( - x1::NTuple{3,AbstractVector}, - solver::Newmark, - op::ConstantODEOperator, - x0::NTuple{3,AbstractVector}, - t0::Real, - cache) # -> (uF,tF) - - dt = solver.dt - γ = solver.γ - β = solver.β - t1 = t0+dt - u0, v0, a0 = x0 - u1, v1, a1 = x1 - - if cache === nothing - # Auxiliary variables - newmatrix = true - - # Allocate caches - newmark_cache = allocate_cache(op,v0,a0) - (v,a, ode_cache) = newmark_cache - - # Allocate matrices and vectors - A, b = _allocate_matrix_and_vector(op,t0,x0,ode_cache) - M = _allocate_matrix(op,t0,x0,ode_cache) - C = _allocate_matrix(op,t0,x0,ode_cache) - b1 = similar(b) - b1 .= 0.0 - - # Define Newmark operator - newmark_affOp = NewmarkConstantOperator(op,t1,dt,γ,β,(u0,v0,a0),newmark_cache) - - # Fill matrices and vector - _matrix!(A,newmark_affOp,u1) - _mass_matrix!(M,newmark_affOp,u1) - _damping_matrix!(C,newmark_affOp,u1) - - # Create affine operator cache - affOp_cache = (A,b,b1,M,C,newmark_affOp,nothing) - else - newmark_cache, affOp_cache = cache - newmatrix = false - end - - # Unpack and update caches - (v,a, ode_cache) = newmark_cache - ode_cache = update_cache!(ode_cache,op,t1) - A,b,b1,M,C,newmark_affOp,l_cache = affOp_cache - - # Update RHS - _vector!(b,newmark_affOp,u1) - b1 .= b .+ ( M*(1.0/(β*dt^2)) + C*(γ/(β*dt)) )*u0 .+ - ( M*(1.0/(β*dt)) - C*(1-γ/β) )*v0 .+ - ( M*(1-2*β)/(2*β) - C*(dt*(1-γ/(2*β))) )*a0 - - # Create affine operator with 
updated RHS - affOp = AffineOperator(A,b1) - l_cache = solve!(u1,solver.nls,affOp,l_cache,newmatrix) - - # Update auxiliary variables - v1 = γ/(β*dt)*(u1-u0) + (1-γ/β)*v0 + dt*(1-γ/(2*β))*a0 - a1 = 1.0/(β*dt^2)*(u1-u0) - 1.0/(β*dt)*v0 - (1-2*β)/(2*β)*a0 - - # Pack caches - affOp_cache = A,b,b1,M,C,newmark_affOp,l_cache - cache = (newmark_cache, affOp_cache) - x1 = (u1,v1,a1) - - return (x1,t1,cache) - -end - -""" -Constant operator that represents the Newmark Affine operator at a -given time step, i.e., M(t)(u_n+1-u_n)/dt + K(t)u_n+1 + b(t) -""" -struct NewmarkConstantOperator <: NonlinearOperator - odeop::ConstantODEOperator - t1::Float64 - dt::Float64 - γ::Float64 - β::Float64 - x0::NTuple{3,AbstractVector} - ode_cache -end - -function _matrix_and_vector!( - A::AbstractMatrix, - b::AbstractVector, - affOp::NewmarkConstantOperator, - x::AbstractVector) - jacobian!(A,affOp,x) - residual!(b,affOp,x) -end - -function _matrix!( - A::AbstractMatrix, - affOp::NewmarkConstantOperator, - x::AbstractVector) - jacobian!(A,affOp,x) -end - -function _vector!( - b::AbstractVector, - affOp::NewmarkConstantOperator, - x::AbstractVector) - residual!(b,affOp,x) -end - -function residual!(b::AbstractVector,op::NewmarkConstantOperator,x::AbstractVector) - u1 = x - u0, v0, a0 = op.x0 - v1, a1, cache = op.ode_cache - @. a1 = 1.0/(op.β*op.dt^2)*(u1-u0) - 1.0/(op.β*op.dt)*v0 - (1-2*op.β)/(2*op.β)*a0 - @. v1 = op.γ/(op.β*op.dt)*(u1-u0) + (1-op.γ/op.β)*v0 + op.dt*(1-op.γ/(2*op.β))*a0 - residual!(b,op.odeop,op.t1,(u1,v1,a1),cache) - b .*= -1.0 -end - -function jacobian!(A::AbstractMatrix,op::NewmarkConstantOperator,x::AbstractVector) - u1 = x - u0, v0, a0 = op.x0 - v1, a1, cache = op.ode_cache - @. a1 = 1.0/(op.β*op.dt^2)*(u1-u0) - 1.0/(op.β*op.dt)*v0 - (1-2*op.β)/(2*op.β)*a0 - @. v1 = op.γ/(op.β*op.dt)*(u1-u0) + (1-op.γ/op.β)*v0 + op.dt*(1-op.γ/(2*op.β))*a0 - z = zero(eltype(A)) - fillstored!(A,z) - jacobians!(A,op.odeop,op.t1,(u1,v1,a1),(1.0,op.γ/(op.β*op.dt),1.0/(op.β*op.dt^2)),cache) -end - -function _mass_matrix!(A::AbstractMatrix,op::NewmarkConstantOperator,x::AbstractVector) - u1 = x - u0, v0, a0 = op.x0 - v1, a1, cache = op.ode_cache - @. a1 = 1.0/(op.β*op.dt^2)*(u1-u0) - 1.0/(op.β*op.dt)*v0 - (1-2*op.β)/(2*op.β)*a0 - @. v1 = op.γ/(op.β*op.dt)*(u1-u0) + (1-op.γ/op.β)*v0 + op.dt*(1-op.γ/(2*op.β))*a0 - z = zero(eltype(A)) - fillstored!(A,z) - jacobian!(A,op.odeop,op.t1,(u1,v1,a1),3,1.0,cache) -end - -function _damping_matrix!(A::AbstractMatrix,op::NewmarkConstantOperator,x::AbstractVector) - u1 = x - u0, v0, a0 = op.x0 - v1, a1, cache = op.ode_cache - @. a1 = 1.0/(op.β*op.dt^2)*(u1-u0) - 1.0/(op.β*op.dt)*v0 - (1-2*op.β)/(2*op.β)*a0 - @. 
v1 = op.γ/(op.β*op.dt)*(u1-u0) + (1-op.γ/op.β)*v0 + op.dt*(1-op.γ/(2*op.β))*a0 - z = zero(eltype(A)) - fillstored!(A,z) - jacobian!(A,op.odeop,op.t1,(u1,v1,a1),2,1.0,cache) -end diff --git a/src/ODEs/ODETools/DiffOperators.jl b/src/ODEs/ODETools/DiffOperators.jl deleted file mode 100644 index 31cbfbac8..000000000 --- a/src/ODEs/ODETools/DiffOperators.jl +++ /dev/null @@ -1,25 +0,0 @@ -function time_derivative(f::Function) - function time_derivative_f(x,t) - fxt = zero(return_type(f,x,t)) - _time_derivative_f(f,x,t,fxt) - end - time_derivative_f(x::VectorValue) = t -> time_derivative_f(x,t) - time_derivative_f(t) = x -> time_derivative_f(x,t) -end - -const ∂t = time_derivative - -function _time_derivative_f(f,x,t,fxt) - ForwardDiff.derivative(t->f(x,t),t) -end - -function _time_derivative_f(f,x,t,fxt::VectorValue) - VectorValue(ForwardDiff.derivative(t->get_array(f(x,t)),t)) - # VectorValue(ForwardDiff.derivative(t->f(x,t),t)) -end - -function _time_derivative_f(f,x,t,fxt::TensorValue) - TensorValue(ForwardDiff.derivative(t->get_array(f(x,t)),t)) -end - -∂tt(f::Function) = ∂t(∂t(f)) diff --git a/src/ODEs/ODETools/EXRungeKutta.jl b/src/ODEs/ODETools/EXRungeKutta.jl deleted file mode 100644 index 1057bc218..000000000 --- a/src/ODEs/ODETools/EXRungeKutta.jl +++ /dev/null @@ -1,154 +0,0 @@ -""" -Explicit Runge-Kutta ODE solver -""" -struct EXRungeKutta <: ODESolver - ls::LinearSolver - dt::Float64 - tableau::EXButcherTableau - function EXRungeKutta(ls::LinearSolver, dt, type::Symbol) - bt = EXButcherTableau(type) - new(ls, dt, bt) - end -end - -""" -solve_step!(uf,odesol,op,u0,t0,cache) -""" -function solve_step!(uf::AbstractVector, - solver::EXRungeKutta, - op::ODEOperator, - u0::AbstractVector, - t0::Real, - cache) - - # Unpack variables - dt = solver.dt - s = solver.tableau.s - a = solver.tableau.a - b = solver.tableau.b - c = solver.tableau.c - d = solver.tableau.d - - # Create cache if not there - if cache === nothing - ode_cache = allocate_cache(op) - vi = similar(u0) - ki = [similar(u0) for i in 1:s] - M = allocate_jacobian(op,t0,uf,ode_cache) - get_mass_matrix!(M,op,t0,uf,ode_cache) - l_cache = nothing - else - ode_cache, vi, ki, M, l_cache = cache - end - - lop = EXRungeKuttaStageOperator(op,t0,dt,u0,ode_cache,vi,ki,0,a,M) - - for i in 1:s - - # solve at stage i - ti = t0 + c[i]*dt - ode_cache = update_cache!(ode_cache,op,ti) - update!(lop,ti,ki[i],i) - l_cache = solve!(uf,solver.ls,lop,l_cache) - - update!(lop,ti,uf,i) - - end - - # update final solution - tf = t0 + dt - - @. uf = u0 - for i in 1:s - @. 
uf = uf + dt*b[i]*lop.ki[i] - end - - cache = (ode_cache, vi, ki, M, l_cache) - - return (uf,tf,cache) - - -end - - - - -mutable struct EXRungeKuttaStageOperator <: RungeKuttaNonlinearOperator - odeop::ODEOperator - ti::Float64 - dt::Float64 - u0::AbstractVector - ode_cache - vi::AbstractVector - ki::AbstractVector - i::Int - a::Matrix - M::AbstractMatrix -end - - -""" -ODE: A(t,u,∂u = M ∂u/∂t + K(t,u) = 0 -> solve for u -EX-RK: A(t,u,ki) = M ki + K(ti,u0 + dt ∑_{j solve for ki -where ui = u0 + dt ∑_{j (uF,tF) - - if cache === nothing - ode_cache = allocate_cache(op) - vf = similar(u0) - nl_cache = nothing - else - ode_cache, vf, nl_cache = cache - end - - dt = solver.dt - tf = t0+dt - # The space should have the boundary conditions at tf - ode_cache = update_cache!(ode_cache,op,t0) - - nlop = ForwardEulerNonlinearOperator(op,t0,dt,u0,ode_cache,vf) - - nl_cache = solve!(uf,solver.nls,nlop,nl_cache) - - cache = (ode_cache, vf, nl_cache) - - return (uf,tf,cache) - -end - -""" -Nonlinear operator that represents the Forward Euler nonlinear operator at a -given time step, i.e., A(t,u_n,(u_n+1-u_n)/dt) -""" -struct ForwardEulerNonlinearOperator <: NonlinearOperator - odeop::ODEOperator - tf::Float64 - dt::Float64 - u0::AbstractVector - ode_cache - vf::AbstractVector -end - -function residual!(b::AbstractVector,op::ForwardEulerNonlinearOperator,x::AbstractVector) - vf = op.vf - @. vf = (x-op.u0)/op.dt - residual!(b,op.odeop,op.tf,(op.u0,vf),op.ode_cache) -end - -function jacobian!(A::AbstractMatrix,op::ForwardEulerNonlinearOperator,x::AbstractVector) - vf = op.vf - @. vf = (x-op.u0)/op.dt - z = zero(eltype(A)) - fillstored!(A,z) - jacobians!(A,op.odeop,op.tf,(op.u0,vf),(0,1/op.dt),op.ode_cache) -end - -function allocate_residual(op::ForwardEulerNonlinearOperator,x::AbstractVector) - allocate_residual(op.odeop,op.tf,x,op.ode_cache) -end - -function allocate_jacobian(op::ForwardEulerNonlinearOperator,x::AbstractVector) - allocate_jacobian(op.odeop,op.tf,x,op.ode_cache) -end - -function zero_initial_guess(op::ForwardEulerNonlinearOperator) - x0 = similar(op.u0) - fill!(x0,zero(eltype(x0))) - x0 -end diff --git a/src/ODEs/ODETools/GeneralizedAlpha.jl b/src/ODEs/ODETools/GeneralizedAlpha.jl deleted file mode 100644 index a5ad9a34f..000000000 --- a/src/ODEs/ODETools/GeneralizedAlpha.jl +++ /dev/null @@ -1,231 +0,0 @@ -""" -Generalized-α ODE solver -""" -struct GeneralizedAlpha <: ODESolver - nls::NonlinearSolver - dt::Float64 - ρ∞::Float64 -end - -function solve_step!( - x1::NTuple{2,AbstractVector}, - solver::GeneralizedAlpha, - op::ODEOperator, - x0::NTuple{2,AbstractVector}, - t0::Real, - cache) # -> (uF,tF) - - dt = solver.dt - ρ∞ = solver.ρ∞ - αf = 1.0/(1.0 + ρ∞) - αm = 0.5 * (3-ρ∞) / (1+ρ∞) - γ = 0.5 + αm - αf - tαf = t0+αf*dt - u0, v0 = x0 - u1, v1 = x1 - - if cache === nothing - generalizedAlpha_cache = allocate_cache(op,v0) - nl_cache = nothing - else - generalizedAlpha_cache, nl_cache = cache - end - - (v, ode_cache) = generalizedAlpha_cache - ode_cache = update_cache!(ode_cache,op,tαf) - nlop = GeneralizedAlphaNonlinearOperator(op,tαf,dt,αm,αf,γ,x0,generalizedAlpha_cache) - nl_cache = solve!(u1,solver.nls,nlop,nl_cache) - - @. u1 = u1/αf + (1-1/αf)*u0 - @. 
v1 = 1/(γ*dt) * (u1-u0) + (1-1/γ)*v0 - - cache = (generalizedAlpha_cache, nl_cache) - x1 = (u1,v1) - t1 = t0+dt - - return (x1,t1,cache) - -end - - -function solve_step!( - x1::NTuple{3,AbstractVector}, - solver::GeneralizedAlpha, - op::ODEOperator, - x0::NTuple{3,AbstractVector}, - t0::Real, - cache) # -> (uF,tF) - - dt = solver.dt - ρ∞ = solver.ρ∞ - αf = ρ∞/(ρ∞ + 1.0) - αm = (2*ρ∞ - 1.0)/(ρ∞ + 1.0) - γ = 0.5 - αm + αf - β = 0.25*((1.0 - αm + αf)^2) - tαf = t0 + (1.0-αf)*dt - u0, v0, a0 = x0 - u1, v1, a1 = x1 - - if cache === nothing - generalizedAlphaDtt_cache = allocate_cache(op,v0,a0) - nl_cache = nothing - else - generalizedAlphaDtt_cache, nl_cache = cache - end - - (v, a, ode_cache) = generalizedAlphaDtt_cache - ode_cache = update_cache!(ode_cache,op,tαf) - nlop = GeneralizedAlphaDttNonlinearOperator(op,tαf,dt,αm,αf,γ,β,x0,generalizedAlphaDtt_cache) - nl_cache = solve!(u1,solver.nls,nlop,nl_cache) - - - @. u1 = 1.0 / (1.0 - αf) * u1 - - αf / (1.0 - αf) * u0 - - @. v1 = γ / (β*dt) * (u1 - u0) - - (γ - β) / β * v0 - - (γ - 2.0 * β) / (2.0 * β) * dt * a0 - - @. a1 = 1.0 / (β * dt * dt) * (u1 - u0) - - 1.0 / (β * dt) * v0 - - (1.0 - 2.0 * β) / (2.0 * β) * a0 - - cache = (generalizedAlphaDtt_cache, nl_cache) - x1 = (u1,v1,a1) - t1 = t0+dt - - return (x1,t1,cache) - -end - - -""" -Generalized-α 1st order ODE solver -Nonlinear operator that represents the Generalized-α method nonlinear operator at a -given time step, i.e., A(t_αf,u_n+αf,v_n+αm) -""" -struct GeneralizedAlphaNonlinearOperator <: NonlinearOperator - odeop::ODEOperator - tαf::Float64 - dt::Float64 - αm::Float64 - αf::Float64 - γ::Float64 - x0::NTuple{2,AbstractVector} - ode_cache -end - -function residual!(b::AbstractVector,op::GeneralizedAlphaNonlinearOperator,x::AbstractVector) - uαf = x - u0, v0 = op.x0 - vαm, cache = op.ode_cache - @. vαm = (1 - op.αm/op.γ ) * v0 + op.αm/(op.γ*op.αf*op.dt) * (uαf - u0) - residual!(b,op.odeop,op.tαf,(uαf,vαm),cache) -end - -function jacobian!(A::AbstractMatrix,op::GeneralizedAlphaNonlinearOperator,x::AbstractVector) - uαf = x - u0, v0 = op.x0 - vαm, cache = op.ode_cache - @. vαm = (1 - op.αm/op.γ ) * v0 + op.αm/(op.γ*op.αf*op.dt) * (uαf - u0) - z = zero(eltype(A)) - fillstored!(A,z) - jacobians!(A,op.odeop,op.tαf,(uαf,vαm),(1.0,op.αm/(op.αf*op.γ*op.dt)),cache) -end - -function allocate_residual(op::GeneralizedAlphaNonlinearOperator,x::AbstractVector) - vαm, cache = op.ode_cache - allocate_residual(op.odeop,op.tαf,x,cache) -end - -function allocate_jacobian(op::GeneralizedAlphaNonlinearOperator,x::AbstractVector) - vαm, cache = op.ode_cache - allocate_jacobian(op.odeop,op.tαf,x,cache) -end - -function zero_initial_guess(op::GeneralizedAlphaNonlinearOperator) - x0 = similar(op.u0) - fill!(x0,zero(eltype(x0))) - x0 -end - - -""" -Generalized-α 2nd order ODE solver -Nonlinear operator that represents the Generalized-α method nonlinear operator at a -given time step, i.e., A(t_(n+1-αf),u_(n+1-αf),v_(n+1-αf),a_(n+1-αm)) -""" - -struct GeneralizedAlphaDttNonlinearOperator <: NonlinearOperator - odeop::ODEOperator - tαf::Float64 - dt::Float64 - αm::Float64 - αf::Float64 - γ::Float64 - β::Float64 - x0::NTuple{3,AbstractVector} - ode_cache -end - - -function residual!(b::AbstractVector,op::GeneralizedAlphaDttNonlinearOperator,x::AbstractVector) - uαf = x - u0, v0, a0 = op.x0 - vαf, aαm, cache = op.ode_cache - - @. vαf = (op.γ) / (op.β * op.dt) * (uαf - u0) + - (op.αf - 1.0) * (op.γ - op.β) / op.β * v0 + - (op.αf - 1.0) * (op.γ - 2.0*op.β) / (2.0 * op.β) * op.dt * a0 + - op.αf * v0 - - @. 
aαm = (1.0 - op.αm) / (1.0 - op.αf) / (op.β * op.dt * op.dt) * (uαf - u0) + - (op.αm - 1.0) / (op.β * op.dt) * v0 + - (op.αm - 1.0) * (1.0 - 2.0*op.β) / (2.0 * op.β) * a0 + - op.αm * a0 - - residual!(b,op.odeop,op.tαf,(uαf,vαf,aαm),cache) -end - - -function jacobian!(A::AbstractMatrix,op::GeneralizedAlphaDttNonlinearOperator,x::AbstractVector) - uαf = x - u0, v0, a0 = op.x0 - vαf, aαm, cache = op.ode_cache - - @. vαf = (op.γ) / (op.β * op.dt) * (uαf - u0) + - (op.αf - 1.0) * (op.γ - op.β) / op.β * v0 + - (op.αf - 1.0) * (op.γ - 2.0*op.β) / (2.0 * op.β) * op.dt * a0 + - op.αf * v0 - - @. aαm = (1.0 - op.αm) / (1.0 - op.αf) / (op.β * op.dt * op.dt) * (uαf - u0) + - (op.αm - 1.0) / (op.β * op.dt) * v0 + - (op.αm - 1.0) * (1.0 - 2.0*op.β) / (2.0 * op.β) * a0 + - op.αm * a0 - - z = zero(eltype(A)) - fillstored!(A,z) - jacobians!(A,op.odeop,op.tαf,(uαf,vαf,aαm), - (1.0, op.γ/(op.β * op.dt), - (1.0 - op.αm) / (1.0 - op.αf) / (op.β * op.dt * op.dt)), - cache) -end - - -function allocate_residual(op::GeneralizedAlphaDttNonlinearOperator,x::AbstractVector) - vαf, aαm, cache = op.ode_cache - allocate_residual(op.odeop,op.tαf,x,cache) -end - - -function allocate_jacobian(op::GeneralizedAlphaDttNonlinearOperator,x::AbstractVector) - vαf, aαm, cache = op.ode_cache - allocate_jacobian(op.odeop,op.tαf,x,cache) -end - - -function zero_initial_guess(op::GeneralizedAlphaDttNonlinearOperator) - x0 = similar(op.u0) - fill!(x0,zero(eltype(x0))) - x0 -end diff --git a/src/ODEs/ODETools/IMEXRungeKutta.jl b/src/ODEs/ODETools/IMEXRungeKutta.jl deleted file mode 100644 index 781e85da9..000000000 --- a/src/ODEs/ODETools/IMEXRungeKutta.jl +++ /dev/null @@ -1,285 +0,0 @@ -""" -Implicit-Explicit Runge-Kutta ODE solver. - -This struct defines an ODE solver for the system of ODEs - - M(u,t)du/dt = f(u,t) + g(u,t) - -where `f` is a nonlinear function of `u` and `t` that will treated implicitly and - `g` is a nonlinear function of `u` and `t` that will be treated explicitly. - The ODE is solved using an implicit-explicit Runge-Kutta method. -""" -struct IMEXRungeKutta <: ODESolver - nls_stage::NonlinearSolver - nls_update::NonlinearSolver - dt::Float64 - tableau::IMEXButcherTableau - function IMEXRungeKutta(nls_stage::NonlinearSolver, nls_update::NonlinearSolver, dt, type::Symbol) - bt = IMEXButcherTableau(type) - new(nls_stage, nls_update, dt, bt) - end -end - -""" -solve_step!(uf,odesol,op,u0,t0,cache) - -Solve one step of the ODE problem defined by `op` using the ODE solver `odesol` - with initial solution `u0` at time `t0`. The solution is stored in `uf` and - the final time in `tf`. The cache is used to store the solution of the - nonlinear system of equations and auxiliar variables. 
-""" -function solve_step!(uf::AbstractVector, - solver::IMEXRungeKutta, - op::ODEOperator, - u0::AbstractVector, - t0::Real, - cache) - - # Unpack variables - dt = solver.dt - s = solver.tableau.s - aᵢ = solver.tableau.aᵢ - bᵢ = solver.tableau.bᵢ - aₑ = solver.tableau.aₑ - bₑ = solver.tableau.bₑ - c = solver.tableau.c - d = solver.tableau.d - - # Create cache if not there - if cache === nothing - ode_cache = allocate_cache(op) - vi = similar(u0) - fi = Vector{typeof(u0)}(undef,0) - gi = Vector{typeof(u0)}(undef,0) - for i in 1:s - push!(fi,similar(u0)) - push!(gi,similar(u0)) - end - nls_stage_cache = nothing - nls_update_cache = nothing - else - ode_cache, vi, fi, gi, nls_stage_cache, nls_update_cache = cache - end - - # Create RKNL stage operator - nlop_stage = IMEXRungeKuttaStageNonlinearOperator(op,t0,dt,u0,ode_cache,vi,fi,gi,0,aᵢ,aₑ) - - # Compute intermediate stages - for i in 1:s - - # Update time - ti = t0 + c[i]*dt - ode_cache = update_cache!(ode_cache,op,ti) - update!(nlop_stage,ti,fi,gi,i) - - if(aᵢ[i,i]==0) - # Skip stage solve if a_ii=0 => u_i=u_0, f_i = f_0, gi = g_0 - @. uf = u0 - else - # solve at stage i - nls_stage_cache = solve!(uf,solver.nls_stage,nlop_stage,nls_stage_cache) - end - - # Update RHS at stage i using solution at u_i - rhs!(nlop_stage, uf) - explicit_rhs!(nlop_stage, uf) - - end - - # Update final time - tf = t0+dt - - # Skip final update if not necessary - if !(c[s]==1.0 && aᵢ[s,:] == bᵢ && aₑ[s,:] == bₑ) - - # Create RKNL final update operator - ode_cache = update_cache!(ode_cache,op,tf) - nlop_update = IMEXRungeKuttaUpdateNonlinearOperator(op,tf,dt,u0,ode_cache,vi,fi,gi,s,bᵢ,bₑ) - - # solve at final update - nls_update_cache = solve!(uf,solver.nls_update,nlop_update,nls_update_cache) - - end - - # Update final cache - cache = (ode_cache, vi, fi, gi, nls_stage_cache, nls_update_cache) - - return (uf, tf, cache) - -end - -""" -IMEXRungeKuttaStageNonlinearOperator <: NonlinearOperator - -Nonlinear operator for the implicit-explicit Runge-Kutta stage. - At a given stage `i` it represents the nonlinear operator A(t,u_i,(u_i-u_n)/dt) such that -```math -A(t,u_i,(u_i-u_n)/dt) = M(u_i,t)(u_i-u_n)/Δt - ∑aᵢ[i,j] * f(u_j,t_j) - ∑aₑ[i,j] * g(u_j,t_j) = 0 -``` -""" -mutable struct IMEXRungeKuttaStageNonlinearOperator <: RungeKuttaNonlinearOperator - odeop::ODEOperator - ti::Float64 - dt::Float64 - u0::AbstractVector - ode_cache - vi::AbstractVector - fi::Vector{AbstractVector} - gi::Vector{AbstractVector} - i::Int - aᵢ::Matrix{Float64} - aₑ::Matrix{Float64} -end - -""" -IMEXRungeKuttaUpdateNonlinearOperator <: NonlinearOperator - -Nonlinear operator for the implicit-explicit Runge-Kutta final update. - At the final update it represents the nonlinear operator A(t,u_t,(u_t-u_n)/dt) such that - ```math - A(t,u_f,(u_f-u_n)/dt) = M(u_f,t)(u_f-u_n)/Δt - ∑aᵢ[i,j] * f(u_j,t_j) - ∑aₑ[i,j] * g(u_j,t_j) = 0 - ``` -""" -mutable struct IMEXRungeKuttaUpdateNonlinearOperator <: RungeKuttaNonlinearOperator - odeop::ODEOperator - ti::Float64 - dt::Float64 - u0::AbstractVector - ode_cache - vi::AbstractVector - fi::Vector{AbstractVector} - gi::Vector{AbstractVector} - s::Int - bᵢ::Vector{Float64} - bₑ::Vector{Float64} -end - -IMEXRungeKuttaNonlinearOperator = Union{IMEXRungeKuttaStageNonlinearOperator, - IMEXRungeKuttaUpdateNonlinearOperator} - -""" -residual!(b,op::IMEXRungeKuttaStageNonlinearOperator,x) - -Compute the residual of the IMEXR Runge-Kutta nonlinear operator `op` at `x` and -store it in `b` for a given stage `i`. 
-```math -b = A(t,x,(x-x₀)/dt) = ∂ui/∂t - ∑aᵢ[i,j] * f(xj,tj) -``` - -Uses the vector b as auxiliar variable to store the residual of the left-hand side of -the i-th stage ODE operator, then adds the corresponding contribution from right-hand side -at all earlier stages. -```math -b = M(ui,ti)∂u/∂t -b - ∑_{j<=i} aᵢ_ij * f(uj,tj) - ∑_{j (uF,tF) - - dt = solver.dt - γ = solver.γ - β = solver.β - t1 = t0+dt - u0, v0, a0 = x0 - u1, v1, a1 = x1 - - if cache === nothing - newmark_cache = allocate_cache(op,v0,a0) - nl_cache = nothing - else - newmark_cache, nl_cache = cache - end - - (v,a, ode_cache) = newmark_cache - ode_cache = update_cache!(ode_cache,op,t1) - nlop = NewmarkNonlinearOperator(op,t1,dt,γ,β,(u0,v0,a0),newmark_cache) - nl_cache = solve!(u1,solver.nls,nlop,nl_cache) - - v1 = γ/(β*dt)*(u1-u0) + (1-γ/β)*v0 + dt*(1-γ/(2*β))*a0 - a1 = 1.0/(β*dt^2)*(u1-u0) - 1.0/(β*dt)*v0 - (1-2*β)/(2*β)*a0 - - cache = (newmark_cache, nl_cache) - x1 = (u1,v1,a1) - - return (x1,t1,cache) - -end - -""" -Nonlinear operator that represents the Newmark nonlinear operator at a -given time step, i.e., A(t,u_n+1,v_n+1,a_n+1) -""" -struct NewmarkNonlinearOperator <: NonlinearOperator - odeop::ODEOperator - t1::Float64 - dt::Float64 - γ::Float64 - β::Float64 - x0::NTuple{3,AbstractVector} - ode_cache -end - -function residual!(b::AbstractVector,op::NewmarkNonlinearOperator,x::AbstractVector) - u1 = x - u0, v0, a0 = op.x0 - v1, a1, cache = op.ode_cache - a1 = 1.0/(op.β*op.dt^2)*(u1-u0) - 1.0/(op.β*op.dt)*v0 - (1-2*op.β)/(2*op.β)*a0 - v1 = op.γ/(op.β*op.dt)*(u1-u0) + (1-op.γ/op.β)*v0 + op.dt*(1-op.γ/(2*op.β))*a0 - residual!(b,op.odeop,op.t1,(u1,v1,a1),cache) -end - -function jacobian!(A::AbstractMatrix,op::NewmarkNonlinearOperator,x::AbstractVector) - u1 = x - u0, v0, a0 = op.x0 - v1, a1, cache = op.ode_cache - a1 = 1.0/(op.β*op.dt^2)*(u1-u0) - 1.0/(op.β*op.dt)*v0 - (1-2*op.β)/(2*op.β)*a0 - v1 = op.γ/(op.β*op.dt)*(u1-u0) + (1-op.γ/op.β)*v0 + op.dt*(1-op.γ/(2*op.β))*a0 - z = zero(eltype(A)) - fillstored!(A,z) - jacobians!(A,op.odeop,op.t1,(u1,v1,a1),(1.0,op.γ/(op.β*op.dt),1.0/(op.β*op.dt^2)),cache) -end - -function allocate_residual(op::NewmarkNonlinearOperator,x::AbstractVector) - v1, a1, cache = op.ode_cache - allocate_residual(op.odeop,op.t1,x,cache) -end - -function allocate_jacobian(op::NewmarkNonlinearOperator,x::AbstractVector) - v1, a1, cache = op.ode_cache - allocate_jacobian(op.odeop,op.t1,x,cache) -end - -function zero_initial_guess(op::NewmarkNonlinearOperator) - x0 = similar(op.u0) - fill!(x0,zero(eltype(x0))) - x0 -end diff --git a/src/ODEs/ODETools/ODEOperators.jl b/src/ODEs/ODETools/ODEOperators.jl deleted file mode 100644 index c2cf58694..000000000 --- a/src/ODEs/ODETools/ODEOperators.jl +++ /dev/null @@ -1,135 +0,0 @@ -""" -Trait for `ODEOperator` that tells us whether the operator depends on the solution -(including its time derivatives), it is an affine operator that depends on time -or it is a constant operator (affine and time-indepedendent) -""" -abstract type OperatorType end -struct Nonlinear <: OperatorType end -struct Affine <: OperatorType end -struct Constant <: OperatorType end -struct ConstantMatrix <: OperatorType end - - -""" -It represents the operator in an implicit N-th order ODE, i.e., A(t,u,∂tu,∂t^2u,...,∂t^Nu) -where the implicit PDE reads A(t,u,∂tu,∂t^2u,...,∂t^Nu) = 0, when ∂t^iu is the -i-th time derivative of u, with i=0,..,N. The trait `{C}` determines whether the -operator is fully nonlinear, affine or constant in time. 
-""" -abstract type ODEOperator{C<:OperatorType} <: GridapType end - -""" -It represents an _affine_ operator in an implicit ODE, i.e., an ODE operator of -the form A(t,u,∂tu,...,∂t^Nu) = A_N(t)∂t^Nu + ...A_1(t)∂tu + A_0(t)u + f(t) -""" -const AffineODEOperator = ODEOperator{Affine} - -""" -It represents a constant operator in an implicit ODE, i.e., an ODE operator of -the form A(t,u,∂tu,...,∂t^Nu) = A_N∂t^Nu + ...A_1∂tu + A_0u + f -""" -const ConstantODEOperator = ODEOperator{Constant} - -""" -It represents an affine operator in an implicit ODE with constant matrix, but -time-dependent right-hand side, i.e., an ODE operator of -the form A(t,u,∂tu,...,∂t^Nu) = A_N∂t^Nu + ...A_1∂tu + A_0u + f(t) -""" -const ConstantMatrixODEOperator = ODEOperator{ConstantMatrix} - -""" -Returns the `OperatorType`, i.e., nonlinear, affine, or constant in time -""" -OperatorType(::ODEOperator{C}) where {C} = C - -""" -Returns the order of the ODE operator -""" -function get_order(::ODEOperator) - @abstractmethod -end - -""" -It provides A(t,u,∂tu,...,∂t^Nu) for a given (t,u,∂tu,...,∂t^Nu) -""" -function residual!( - r::AbstractVector, - op::ODEOperator, - t::Real, - u::Union{AbstractVector,Tuple{Vararg{AbstractVector}}}, - ode_cache) - @abstractmethod -end - -""" -""" -function allocate_residual( - op::ODEOperator, - t0::Real, - u::Union{AbstractVector,Tuple{Vararg{AbstractVector}}}, - ode_cache) - @abstractmethod -end - -""" -It adds contribution to the Jacobian with respect to the i-th time derivative, -with i=0,...,N. That is, adding γ_i*[∂A/∂(∂t^iu)](t,u,∂tu,...,∂t^Nu) for a -given (t,u,∂tu,...,∂t^Nu) to a given matrix J, where γ_i is a scaling coefficient -provided by the `ODESolver`, e.g., 1/Δt for Backward Euler; It represents -∂(δt^i(u))/∂(u), in which δt^i(⋅) is the approximation of ∂t^i(⋅) in the solver. -Note that for i=0, γ_i=1.0. -""" -function jacobian!( - J::AbstractMatrix, - op::ODEOperator, - t::Real, - u::Tuple{Vararg{AbstractVector}}, - i::Integer, - γᵢ::Real, - ode_cache) - @abstractmethod - # Add values to J -end - -""" -Add the contribution of all jacobians ,i.e., ∑ᵢ γ_i*[∂A/∂(∂t^iu)](t,u,∂tu,...,∂t^Nu) -""" -function jacobians!( - J::AbstractMatrix, - op::ODEOperator, - t::Real, - u::Tuple{Vararg{AbstractVector}}, - γ::Tuple{Vararg{Real}}, - ode_cache) - @abstractmethod - # Add values to J -end - -""" -""" -function allocate_jacobian(op::ODEOperator,t0::Real,u::AbstractVector,ode_cache) - @abstractmethod -end - -""" -Allocates the cache data required by the `ODESolution` for a given `ODEOperator` -""" -allocate_cache(op::ODEOperator) = @abstractmethod - -#@fverdugo to be used as `cache = update_cache!(cache,op,t)` -update_cache!(cache,op::ODEOperator,t::Real) = @abstractmethod - -""" -Tests the interface of `ODEOperator` specializations -""" -function test_ode_operator(op::ODEOperator,t::Real,u::AbstractVector,u_t::AbstractVector) - cache = allocate_cache(op) - cache = update_cache!(cache,op,0.0) - r = allocate_residual(op,0.0,u,cache) - residual!(r,op,t,(u,u_t),cache) - J = allocate_jacobian(op,0.0,u,cache) - jacobian!(J,op,t,(u,u_t),1,1.0,cache) - jacobian!(J,op,t,(u,u_t),2,1.0,cache) - jacobians!(J,op,t,(u,u_t),(1.0,1.0),cache) - true -end diff --git a/src/ODEs/ODETools/ODESolutions.jl b/src/ODEs/ODETools/ODESolutions.jl deleted file mode 100644 index 791912561..000000000 --- a/src/ODEs/ODETools/ODESolutions.jl +++ /dev/null @@ -1,114 +0,0 @@ - -# Represents a lazy iterator over all solution in a time interval -""" -It represents the solution of a ODE at a given time interval. 
It is a lazy implementation, -i.e., the object is an iterator that computes the solution at each time step -when accessing the solution at each time step. -""" -abstract type ODESolution <: GridapType end - -# First time step -function iterate(u::ODESolution) # (u0,t0)-> (uf,tf) or nothing - @abstractmethod -end - -# Following time steps -function iterate(u::ODESolution,state) # (u0,t0)-> (uf,tf) or nothing - @abstractmethod -end - -# tester - -function test_ode_solution(sol::ODESolution) - for (u_n,t_n) in sol - @test isa(t_n,Real) - @test isa(u_n,AbstractVector) - end - true -end - -# Specialization - -struct GenericODESolution{T} <: ODESolution - solver::ODESolver - op::ODEOperator - u0::T - t0::Real - tF::Real -end - -function Base.iterate(sol::GenericODESolution{T}) where {T<:AbstractVector} - - uf = copy(sol.u0) - u0 = copy(sol.u0) - t0 = sol.t0 - - # Solve step - uf, tf, cache = solve_step!(uf,sol.solver,sol.op,u0,t0) - - # Update - u0 .= uf - state = (uf,u0,tf,cache) - - return (uf, tf), state -end - -function Base.iterate(sol::GenericODESolution{T}, state) where {T<:AbstractVector} - - uf,u0,t0,cache = state - - if t0 >= sol.tF - ϵ - return nothing - end - - # Solve step - uf, tf, cache = solve_step!(uf,sol.solver,sol.op,u0,t0,cache) - - # Update - u0 .= uf - state = (uf,u0,tf,cache) - - return (uf, tf), state -end - -function Base.iterate(sol::GenericODESolution{T}) where {T<:Tuple{Vararg{AbstractVector}}} - - uf = () - u0 = () - for i in 1:length(sol.u0) - uf = (uf...,copy(sol.u0[i])) - u0 = (u0...,copy(sol.u0[i])) - end - t0 = sol.t0 - - # Solve step - uf, tf, cache = solve_step!(uf,sol.solver,sol.op,u0,t0) - - # Update - for i in 1:length(uf) - u0[i] .= uf[i] - end - state = (uf,u0,tf,cache) - - return (uf[1], tf), state -end - -function Base.iterate(sol::GenericODESolution{T}, state) where {T<:Tuple{Vararg{AbstractVector}}} - - uf,u0,t0,cache = state - - if t0 >= sol.tF - ϵ - return nothing - end - - # Solve step - uf, tf, cache = solve_step!(uf,sol.solver,sol.op,u0,t0,cache) - - # Update - for i in 1:length(uf) - u0[i] .= uf[i] - end - state = (uf,u0,tf,cache) - - return (uf[1], tf), state -end diff --git a/src/ODEs/ODETools/ODESolvers.jl b/src/ODEs/ODETools/ODESolvers.jl deleted file mode 100644 index c803674c6..000000000 --- a/src/ODEs/ODETools/ODESolvers.jl +++ /dev/null @@ -1,70 +0,0 @@ -# Now, we need an abstract type representing a numerical discretization scheme -# for the ODE -""" -Represents a map that given (t_n,u_n) returns (t_n+1,u_n+1) and cache for the -corresponding `ODEOperator` and `NonlinearOperator` -""" -abstract type ODESolver <: GridapType end - -function solve_step!( - uF::Union{AbstractVector,Tuple{Vararg{AbstractVector}}}, - solver::ODESolver, - op::ODEOperator, - u0::Union{AbstractVector,Tuple{Vararg{AbstractVector}}}, - t0::Real, - cache) # -> (uF,tF,cache) - @abstractmethod -end - -# Default API - -function solve_step!( - uF::Union{AbstractVector,Tuple{Vararg{AbstractVector}}}, - solver::ODESolver, - op::ODEOperator, - u0::Union{AbstractVector,Tuple{Vararg{AbstractVector}}}, - t0::Real) # -> (uF,tF,cache) - solve_step!(uF,solver,op,u0,t0,nothing) -end - -function solve( - solver::ODESolver, - op::ODEOperator, - u0::T, - t0::Real, - tf::Real) where {T} - GenericODESolution{T}(solver,op,u0,t0,tf) -end - -# testers - -function test_ode_solver(solver::ODESolver,op::ODEOperator,u0,t0,tf) - solution = solve(solver,op,u0,t0,tf) - test_ode_solution(solution) -end - -# Specialization - -include("Tableaus.jl") - -include("ForwardEuler.jl") - 
-include("ThetaMethod.jl") - -include("AffineThetaMethod.jl") - -include("RungeKutta.jl") - -include("IMEXRungeKutta.jl") - -include("EXRungeKutta.jl") - -include("Newmark.jl") - -include("AffineNewmark.jl") - -include("ConstantNewmark.jl") - -include("ConstantMatrixNewmark.jl") - -include("GeneralizedAlpha.jl") diff --git a/src/ODEs/ODETools/ODETools.jl b/src/ODEs/ODETools/ODETools.jl deleted file mode 100644 index e5977b5eb..000000000 --- a/src/ODEs/ODETools/ODETools.jl +++ /dev/null @@ -1,91 +0,0 @@ -""" - -The exported names are -$(EXPORTS) -""" -module ODETools - -using Test - -using DocStringExtensions - -using ForwardDiff -using LinearAlgebra: fillstored!, rmul! -using SparseArrays: issparse - -const ϵ = 100*eps() -export ∂t -export ∂tt -export time_derivative - -using Gridap.Fields: VectorValue, TensorValue -using Gridap.Fields: return_type -using Gridap.Arrays: get_array - -using Gridap.Helpers: GridapType -using Gridap.Helpers -using Gridap.Algebra: NonlinearSolver -using Gridap.Algebra: LinearSolver -using Gridap.Algebra: NonlinearOperator -using Gridap.Algebra: AffineOperator - -export ODEOperator -export AffineODEOperator -export ConstantODEOperator -export ConstantMatrixODEOperator -export SecondOrderODEOperator -export OperatorType -export Nonlinear -export Affine -export Constant -export ConstantMatrix -using Gridap.Algebra: residual -using Gridap.Algebra: jacobian -using Gridap.Algebra: symbolic_setup -using Gridap.Algebra: numerical_setup -using Gridap.Algebra: numerical_setup! -using Gridap.Algebra: LinearSolverCache -import Gridap.Algebra: residual! -import Gridap.Algebra: jacobian! -import Gridap.Algebra: allocate_residual -import Gridap.Algebra: allocate_jacobian -export allocate_cache -export update_cache! -export jacobian! -export jacobian_t! -export jacobian_and_jacobian_t! -export test_ode_operator -export lhs! -export rhs! -export explicit_rhs! - -export ODESolver -export solve_step! -export test_ode_solver -import Gridap.Algebra: solve -import Gridap.Algebra: solve! 
-import Gridap.Algebra: zero_initial_guess - -export BackwardEuler -export ForwardEuler -export MidPoint -export ThetaMethod -export RungeKutta -export IMEXRungeKutta -export EXRungeKutta -export Newmark -export GeneralizedAlpha - -export ODESolution -export test_ode_solution -import Base: iterate - -include("DiffOperators.jl") - -include("ODEOperators.jl") - -include("ODESolvers.jl") - -include("ODESolutions.jl") - -end #module diff --git a/src/ODEs/ODETools/RungeKutta.jl b/src/ODEs/ODETools/RungeKutta.jl deleted file mode 100644 index 479b966ee..000000000 --- a/src/ODEs/ODETools/RungeKutta.jl +++ /dev/null @@ -1,202 +0,0 @@ - -""" -Runge-Kutta ODE solver -""" -struct RungeKutta <: ODESolver - nls_stage::NonlinearSolver - nls_update::NonlinearSolver - dt::Float64 - bt::ButcherTableau - function RungeKutta(nls_stage::NonlinearSolver,nls_update::NonlinearSolver,dt,type::Symbol) - bt = ButcherTableau(type) - new(nls_stage,nls_update,dt,bt) - end -end - -function solve_step!(uf::AbstractVector, - solver::RungeKutta, - op::ODEOperator, - u0::AbstractVector, - t0::Real, - cache) - - # Unpack variables - dt = solver.dt - s = solver.bt.s - a = solver.bt.a - b = solver.bt.b - c = solver.bt.c - d = solver.bt.d - - # Create cache if not there - if cache === nothing - ode_cache = allocate_cache(op) - ui = similar(u0) - ki = Vector{typeof(u0)}() - sizehint!(ki,s) - [push!(ki,similar(u0)) for i in 1:s] - rhs = similar(u0) - nls_stage_cache = nothing - nls_update_cache = nothing - else - ode_cache, ui, ki, rhs, nls_stage_cache, nls_update_cache = cache - end - - # Initialize states to zero - for i in 1:s - @. ki[i] *= 0.0 - end - - # Create RKNL stage operator - nlop_stage = RungeKuttaStageNonlinearOperator(op,t0,dt,u0,ode_cache,ui,ki,rhs,0,a) - - # Compute intermediate stages - for i in 1:s - - # Update time - ti = t0 + c[i]*dt - ode_cache = update_cache!(ode_cache,op,ti) - update!(nlop_stage,ti,i) - - # solve at stage i - nls_stage_cache = solve!(uf,solver.nls_stage,nlop_stage,nls_stage_cache) - - # Update stage unknown - @. nlop_stage.ki[i] = uf - - end - - # Update final time - tf = t0+dt - - # Update final solution - @. uf = u0 - for i in 1:s - @. uf = uf + dt * b[i] * nlop_stage.ki[i] - end - - # Update final cache - cache = (ode_cache, ui, ki, rhs, nls_stage_cache, nls_update_cache) - - return (uf,tf,cache) - -end - -abstract type RungeKuttaNonlinearOperator <: NonlinearOperator end - -""" -Nonlinear operator that represents the Runge-Kutta nonlinear operator at a -given time step and stage, i.e., A(tᵢ,uᵢ,kᵢ) -""" -mutable struct RungeKuttaStageNonlinearOperator <: RungeKuttaNonlinearOperator - odeop::ODEOperator - ti::Float64 - dt::Float64 - u0::AbstractVector - ode_cache - ui::AbstractVector - ki::Vector{AbstractVector} - rhs::AbstractVector - i::Int - a::Matrix -end - -""" -Compute the residual of the Runge-Kutta nonlinear operator at stage i. -```math -A(t,ui,ki) = M(ti) ki - f(u₀ + ∑_{j<=i} Δt * a_ij * kj, tj) = 0 -``` - -Uses the vector b as auxiliar variable to store the residual of the left-hand side of -the i-th stage ODE operator, then adds the corresponding contribution from right-hand side -at all earlier stages. -```math -b = M(ti) Ki -b - f(u₀ + ∑_{j<=i} Δt * a_ij * kj, tj) = 0 -``` -""" -function residual!(b::AbstractVector,op::RungeKuttaStageNonlinearOperator,x::AbstractVector) - rhs!(op,x) - lhs!(b,op,x) - @. b = b - op.rhs - b -end - -function jacobian!(A::AbstractMatrix,op::RungeKuttaStageNonlinearOperator,x::AbstractVector) - u = op.ui - @. u = op.u0 - for j in 1:op.i-1 - @. 
u = u + op.dt * op.a[op.i,j] * op.ki[j] - end - @. u = u + op.dt * op.a[op.i,op.i] * x - z = zero(eltype(A)) - fillstored!(A,z) - jacobians!(A,op.odeop,op.ti,(u,x),(op.dt*op.a[op.i,op.i],1.0),op.ode_cache) -end - -function allocate_residual(op::RungeKuttaNonlinearOperator,x::AbstractVector) - allocate_residual(op.odeop,op.ti,x,op.ode_cache) -end - -function allocate_jacobian(op::RungeKuttaNonlinearOperator,x::AbstractVector) - allocate_jacobian(op.odeop,op.ti,x,op.ode_cache) -end - -function zero_initial_guess(op::RungeKuttaNonlinearOperator) - x0 = similar(op.u0) - fill!(x0,zero(eltype(x0))) - x0 -end - -function rhs!(op::RungeKuttaStageNonlinearOperator, x::AbstractVector) - u = op.ui - @. u = op.u0 - for j in 1:op.i-1 - @. u = u + op.dt * op.a[op.i,j] * op.ki[j] - end - @. u = u + op.dt * op.a[op.i,op.i] * x - rhs!(op.rhs,op.odeop,op.ti,(u,x),op.ode_cache) -end - -function lhs!(b::AbstractVector, op::RungeKuttaNonlinearOperator, x::AbstractVector) - u = op.ui - @. u *= 0 - lhs!(b,op.odeop,op.ti,(u,x),op.ode_cache) -end - -function update!(op::RungeKuttaNonlinearOperator,ti::Float64,i::Int) - op.ti = ti - op.i = i -end - -# Redefining solve! function to enforce computation of the jacobian within -# each stage of the Runge-Kutta method when the solver is "LinearSolver". -function solve!(x::AbstractVector, - ls::LinearSolver, - op::RungeKuttaNonlinearOperator, - cache::Nothing) - fill!(x,zero(eltype(x))) - b = residual(op, x) - A = jacobian(op, x) - ss = symbolic_setup(ls, A) - ns = numerical_setup(ss,A) - rmul!(b,-1) - solve!(x,ns,b) - LinearSolverCache(A,b,ns) -end - -function solve!(x::AbstractVector, - ls::LinearSolver, - op::RungeKuttaNonlinearOperator, - cache) - fill!(x,zero(eltype(x))) - b = cache.b - A = cache.A - ns = cache.ns - residual!(b, op, x) - jacobian!(A, op, x) - numerical_setup!(ns,A) - rmul!(b,-1) - solve!(x,ns,b) - cache -end diff --git a/src/ODEs/ODETools/Tableaus.jl b/src/ODEs/ODETools/Tableaus.jl deleted file mode 100644 index 741e0817c..000000000 --- a/src/ODEs/ODETools/Tableaus.jl +++ /dev/null @@ -1,258 +0,0 @@ -abstract type ButcherTableauType end - -struct BE_1_0_1 <: ButcherTableauType end -struct CN_2_0_2 <: ButcherTableauType end -struct SDIRK_2_0_2 <: ButcherTableauType end -struct SDIRK_2_0_3 <: ButcherTableauType end -struct ESDIRK_3_1_2 <: ButcherTableauType end -struct TRBDF2_3_2_3 <: ButcherTableauType end - -""" -Butcher tableau -""" -struct ButcherTableau{T <: ButcherTableauType} - s::Int # stages - p::Int # embedded order - q::Int # order - a::Matrix # A_ij - b::Vector # b_j - c::Vector # c_i - d::Vector # d_j (embedded) -end - -# Butcher Tableaus constructors -""" -Backward-Euler - -number of stages: 1 -embedded method: no -order: 1 -""" -function ButcherTableau(::BE_1_0_1) - s = 1 - p = 0 - q = 1 - a = reshape([1.0],1,1) - b = [1.0] - c = [1.0] - d = [0.0] - ButcherTableau{BE_1_0_1}(s,p,q,a,b,c,d) -end - -""" -Crank-Nicolson (equivalent to trapezoidal rule) - -number of stages: 2 -embedded method: no -order: 2 -""" -function ButcherTableau(type::CN_2_0_2) - s = 2 - p = 0 - q = 2 - a = [0.0 0.0; 0.5 0.5] - b = [0.5, 0.5] - c = [0.0, 1.0] - d = [0.0, 0.0] - ButcherTableau{CN_2_0_2}(s,p,q,a,b,c,d) -end - -""" -Qin and Zhang's SDIRK - -number of stages: 2 -embedded method: no -order: 2 -""" -function ButcherTableau(type::SDIRK_2_0_2) - s = 2 - p = 0 - q = 2 - a = [0.25 0.0; 0.5 0.25] - b = [0.5, 0.5] - c = [0.25, 0.75] - d = [0.0, 0.0] - ButcherTableau{SDIRK_2_0_2}(s,p,q,a,b,c,d) -end - -""" -3rd order SDIRK - -number of stages: 2 -embedded method: 
no -order: 3 -""" -function ButcherTableau(type::SDIRK_2_0_3) - s = 2 - p = 0 - q = 3 - γ = (3-√(3))/6 - a = [γ 0.0; 1-2γ γ] - b = [0.5, 0.5] - c = [γ, 1-γ] - d = [0.0, 0.0] - ButcherTableau{SDIRK_2_0_3}(s,p,q,a,b,c,d) -end - -function ButcherTableau(type::ESDIRK_3_1_2) -s = 3 -p = 1 -q = 2 -γ = (2-√(2))/2 -b₂ = (1 − 2γ)/(4γ) -b̂₂ = γ*(−2 + 7γ − 5(γ^2) + 4(γ^3)) / (2(2γ − 1)) -b̂₃ = −2*(γ^2)*(1 − γ + γ^2) / (2γ − 1) -a = [0.0 0.0 0.0; γ γ 0.0; (1 − b₂ − γ) b₂ γ] -b = [(1 − b₂ − γ), b₂, γ] -c = [0.0, 2γ, 1.0] -d = [(1 − b̂₂ − b̂₃), b̂₂, b̂₃] -ButcherTableau{ESDIRK_3_1_2}(s,p,q,a,b,c,d) -end - -function ButcherTableau(type::TRBDF2_3_2_3) - s = 3 - p = 2 - q = 3 - aux = 2.0-√2.0 - a = [0.0 0.0 0.0; aux/2 aux/2 0.0; √2/4 √2/4 aux/2] - b = [√2/4, √2/4, aux/2] - c = [0.0, aux, 1.0] - d = [(1.0-(√2/4))/3, ((3*√2)/4+1.0)/3, aux/6] - ButcherTableau{TRBDF2_3_2_3}(s,p,q,a,b,c,d) -end - -function ButcherTableau(type::Symbol) - eval(:(ButcherTableau($type()))) -end - -abstract type IMEXButcherTableauType end - -struct IMEX_FE_BE_2_0_1 <: IMEXButcherTableauType end -struct IMEX_Midpoint_2_0_2 <: IMEXButcherTableauType end - -""" -Implicit-Explicit Butcher tableaus -""" -struct IMEXButcherTableau{T <: IMEXButcherTableauType} - s::Int # stages - p::Int # embedded order - q::Int # order - aᵢ::Matrix # A_ij implicit - aₑ::Matrix # A_ij explicit - bᵢ::Vector # b_j implicit - bₑ::Vector # b_j explicit - c::Vector # c_i - d::Vector # d_j (embedded) -end - -# IMEX Butcher Tableaus constructors -""" -IMEX Forward-Backward-Euler - -number of stages: 2 -embedded method: no -order: 1 -""" -function IMEXButcherTableau(::IMEX_FE_BE_2_0_1) - s = 2 - p = 0 - q = 1 - aᵢ = [0.0 0.0; 0.0 1.0] - aₑ = [0.0 0.0; 1.0 0.0] - bᵢ = [0.0, 1.0] - bₑ = [0.0, 1.0] - c = [0.0, 1.0] - d = [0.0, 0.0] - IMEXButcherTableau{IMEX_FE_BE_2_0_1}(s,p,q,aᵢ,aₑ,bᵢ,bₑ,c,d) -end - -""" -IMEX Midpoint - -number of stages: 2 -embedded method: no -order: 2 -""" -function IMEXButcherTableau(::IMEX_Midpoint_2_0_2) - s = 2 - p = 0 - q = 2 - aᵢ = [0.0 0.0; 0.0 0.5] - aₑ = [0.0 0.0; 0.5 0.0] - bᵢ = [0.0, 1.0] - bₑ = [0.0, 1.0] - c = [0.0, 0.5] - d = [0.0, 0.0] - IMEXButcherTableau{IMEX_Midpoint_2_0_2}(s,p,q,aᵢ,aₑ,bᵢ,bₑ,c,d) -end - - -function IMEXButcherTableau(type::Symbol) - eval(:(IMEXButcherTableau($type()))) -end - -""" -Explicit Butcher tableaus -""" - -abstract type EXButcherTableauType end - -struct EX_FE_1_0_1 <: EXButcherTableauType end -struct EX_SSP_3_0_3 <: EXButcherTableauType end - -""" -Explicit Butcher tableaus -""" -struct EXButcherTableau{T <: EXButcherTableauType} - s::Int # stages - p::Int # embedded order - q::Int # order - a::Matrix # A_ij explicit - b::Vector # b_j explicit - c::Vector # c_i explicit - d::Vector # d_j (embedded) -end - -# EX Butcher Tableaus constructors - -""" -EX Forward-Backward-Euler - -number of stages: 1 -embedded method: no -order: 1 -""" -function EXButcherTableau(::EX_FE_1_0_1) - s = 1 - p = 0 - q = 1 - a = reshape([0.0],1,1) - b = [1.0] - c = [0.0] - d = [0.0] - EXButcherTableau{EX_FE_1_0_1}(s,p,q,a,b,c,d) -end - -""" -EX SSPRK3 - -number of stages: 3 -embedded method: no -order: 3 -""" -function EXButcherTableau(::EX_SSP_3_0_3) - s = 3 - p = 0 - q = 3 - a = [0.0 0.0 0.0; 1.0 0.0 0.0; 1/4 1/4 0.0] - b = [1/6, 1/6, 2/3] - c = [0.0, 1.0, 1/2] - d = [0.0, 0.0, 0.0] - - - EXButcherTableau{EX_SSP_3_0_3}(s,p,q,a,b,c,d) -end - -function EXButcherTableau(type::Symbol) - eval(:(EXButcherTableau($type()))) -end diff --git a/src/ODEs/ODETools/ThetaMethod.jl b/src/ODEs/ODETools/ThetaMethod.jl deleted file mode 100644 
index cdc9c5dbb..000000000 --- a/src/ODEs/ODETools/ThetaMethod.jl +++ /dev/null @@ -1,98 +0,0 @@ -""" -θ-method ODE solver -""" -struct ThetaMethod <: ODESolver - nls::NonlinearSolver - dt::Float64 - θ::Float64 - function ThetaMethod(nls,dt,θ) - if θ > 0.0 - return new(nls,dt,θ) - else - return ForwardEuler(nls,dt) - end - end -end - - -BackwardEuler(nls,dt) = ThetaMethod(nls,dt,1.0) -MidPoint(nls,dt) = ThetaMethod(nls,dt,0.5) - -function solve_step!(uf::AbstractVector, - solver::ThetaMethod, - op::ODEOperator, - u0::AbstractVector, - t0::Real, - cache) # -> (uF,tF) - - dt = solver.dt - solver.θ == 0.0 ? dtθ = dt : dtθ = dt*solver.θ - tθ = t0+dtθ - - if cache === nothing - ode_cache = allocate_cache(op) - vθ = similar(u0) - nl_cache = nothing - else - ode_cache, vθ, nl_cache = cache - end - - ode_cache = update_cache!(ode_cache,op,tθ) - - nlop = ThetaMethodNonlinearOperator(op,tθ,dtθ,u0,ode_cache,vθ) - - nl_cache = solve!(uf,solver.nls,nlop,nl_cache) - - if 0.0 < solver.θ < 1.0 - @. uf = uf * (1.0/solver.θ) - u0 * ((1-solver.θ)/solver.θ) - end - - cache = (ode_cache, vθ, nl_cache) - - tf = t0+dt - return (uf,tf,cache) - -end - -""" -Nonlinear operator that represents the θ-method nonlinear operator at a -given time step, i.e., A(t,u_n+θ,(u_n+θ-u_n)/dt) -""" -struct ThetaMethodNonlinearOperator <: NonlinearOperator - odeop::ODEOperator - tθ::Float64 - dtθ::Float64 - u0::AbstractVector - ode_cache - vθ::AbstractVector -end - -function residual!(b::AbstractVector,op::ThetaMethodNonlinearOperator,x::AbstractVector) - uθ = x - vθ = op.vθ - @. vθ = (x - op.u0) / op.dtθ - residual!(b,op.odeop,op.tθ,(uθ,vθ),op.ode_cache) -end - -function jacobian!(A::AbstractMatrix,op::ThetaMethodNonlinearOperator,x::AbstractVector) - uF = x - vθ = op.vθ - @. vθ = (x - op.u0) / op.dtθ - z = zero(eltype(A)) - fillstored!(A,z) - jacobians!(A,op.odeop,op.tθ,(uF,vθ),(1.0,1/op.dtθ),op.ode_cache) -end - -function allocate_residual(op::ThetaMethodNonlinearOperator,x::AbstractVector) - allocate_residual(op.odeop,op.tθ,x,op.ode_cache) -end - -function allocate_jacobian(op::ThetaMethodNonlinearOperator,x::AbstractVector) - allocate_jacobian(op.odeop,op.tθ,x,op.ode_cache) -end - -function zero_initial_guess(op::ThetaMethodNonlinearOperator) - x0 = similar(op.u0) - fill!(x0,zero(eltype(x0))) - x0 -end diff --git a/src/ODEs/ODEs.jl b/src/ODEs/ODEs.jl index 22ab64320..c172124d8 100644 --- a/src/ODEs/ODEs.jl +++ b/src/ODEs/ODEs.jl @@ -5,14 +5,178 @@ $(EXPORTS) """ module ODEs +using Test using DocStringExtensions -include("ODETools/ODETools.jl") +using LinearAlgebra +using LinearAlgebra: fillstored! 
+using SparseArrays +using BlockArrays +using NLsolve +using ForwardDiff -include("TransientFETools/TransientFETools.jl") +using Gridap.Helpers +using Gridap.Algebra +using Gridap.Algebra: NLSolversCache +using Gridap.Arrays +using Gridap.TensorValues +using Gridap.Fields +using Gridap.Polynomials +using Gridap.ReferenceFEs +using Gridap.Geometry +using Gridap.CellData +using Gridap.CellData: OperationCellField +using Gridap.CellData: CellFieldAt +using Gridap.FESpaces +using Gridap.FESpaces: SingleFieldFEBasis +using Gridap.MultiField -# include("DiffEqsWrappers/DiffEqsWrappers.jl") +const ε = 100 * eps() -end #module +include("TimeDerivatives.jl") +export time_derivative +export ∂t +export ∂tt + +include("ODEOperators.jl") + +export ODEOperatorType +export NonlinearODE +export AbstractQuasilinearODE +export QuasilinearODE +export AbstractSemilinearODE +export SemilinearODE +export AbstractLinearODE +export LinearODE + +export ODEOperator +export get_num_forms +export get_forms +export is_form_constant +export allocate_odeopcache +export update_odeopcache! +export jacobian_add! + +export IMEXODEOperator +export get_imex_operators + +export GenericIMEXODEOperator + +export test_ode_operator + +include("StageOperators.jl") + +export StageOperator +export NonlinearStageOperator +export LinearStageOperator + +export massless_residual_weights + +include("ODESolvers.jl") + +export ODESolver +export allocate_odecache +export ode_start +export ode_march! +export ode_finish! + +export test_ode_solver + +export ForwardEuler + +export ThetaMethod +export MidPoint +export BackwardEuler + +export GeneralizedAlpha1 + +export TableauType +export ExplicitTableau +export ImplicitTableau +export FullyImplicitTableau +export DiagonallyImplicitTableau +export ImplicitExplicitTableau + +export AbstractTableau +export GenericTableau +export EmbeddedTableau +export get_embedded_weights +export get_embedded_order +export IMEXTableau +export get_imex_tableaus +export is_padded + +export TableauName +export ButcherTableau +export available_tableaus +export available_imex_tableaus + +export RungeKutta + +export GeneralizedAlpha2 +export Newmark + +include("ODESolutions.jl") + +export ODESolution +export GenericODESolution + +export test_ode_solution + +include("TransientFESpaces.jl") + +export allocate_space + +export TransientTrialFESpace +export TransientMultiFieldFESpace + +export test_tfe_space + +include("TransientCellFields.jl") + +export TransientCellField +export TransientSingleFieldCellField +export TransientMultiFieldCellField +export TransientFEBasis + +include("TransientFEOperators.jl") + +export TransientFEOperator +export get_assembler +export get_res +export get_jacs +export allocate_tfeopcache +export update_tfeopcache! + +export TransientFEOpFromWeakForm +export TransientQuasilinearFEOpFromWeakForm +export TransientQuasilinearFEOperator +export TransientSemilinearFEOpFromWeakForm +export TransientSemilinearFEOperator +export TransientLinearFEOpFromWeakForm +export TransientLinearFEOperator + +export TransientIMEXFEOperator +export GenericTransientIMEXFEOperator + +export test_tfe_operator + +include("ODEOpsFromTFEOps.jl") + +export ODEOpFromTFEOpCache +export ODEOpFromTFEOp + +include("TransientFESolutions.jl") + +export TransientFESolution + +export test_tfe_solution +export test_tfe_solver + +# include("_DiffEqsWrappers.jl") + +end # module ODEs + +# TODO useful? 
const GridapODEs = ODEs diff --git a/src/ODEs/StageOperators.jl b/src/ODEs/StageOperators.jl new file mode 100644 index 000000000..12f7e282f --- /dev/null +++ b/src/ODEs/StageOperators.jl @@ -0,0 +1,288 @@ +########################## +# NonlinearStageOperator # +########################## +""" + abstract type StageOperator <: NonlinearOperator end + +Operator used to perform one stage within one time step of an `ODESolver`. + +# Mandatory +- [`allocate_residual(nlop, x)`] +- [`residual!(r, nlop, x)`] +- [`allocate_jacobian(nlop, x)`] +- [`jacobian!(J, nlop, x)`] +""" +abstract type StageOperator <: NonlinearOperator end + +########################## +# NonlinearStageOperator # +########################## +""" + struct NonlinearStageOperator <: StageOperator end + +Nonlinear stage operator representing `res(x) = residual(t, us(x)...) = 0`, +where `x` is the stage unknown and `us(x)` denotes the point where the residual +of the ODE is to be evaluated. It is assumed that the coordinates of `us(x)` +are linear in `x`, and the coefficients in front of `x` called `ws` are scalar, +i.e. `ws[k] = d/dx us[k](x)` is a scalar constant. +""" +struct NonlinearStageOperator <: StageOperator + odeop::ODEOperator + odeopcache + tx::Real + usx::Function + ws::Tuple{Vararg{Real}} +end + +# NonlinearOperator interface +function Algebra.allocate_residual( + nlop::NonlinearStageOperator, x::AbstractVector +) + odeop, odeopcache = nlop.odeop, nlop.odeopcache + tx = nlop.tx + usx = nlop.usx(x) + allocate_residual(odeop, tx, usx, odeopcache) +end + +function Algebra.residual!( + r::AbstractVector, + nlop::NonlinearStageOperator, x::AbstractVector +) + odeop, odeopcache = nlop.odeop, nlop.odeopcache + tx = nlop.tx + usx = nlop.usx(x) + residual!(r, odeop, tx, usx, odeopcache) +end + +function Algebra.allocate_jacobian( + nlop::NonlinearStageOperator, x::AbstractVector +) + odeop, odeopcache = nlop.odeop, nlop.odeopcache + tx = nlop.tx + usx = nlop.usx(x) + allocate_jacobian(odeop, tx, usx, odeopcache) +end + +function Algebra.jacobian!( + J::AbstractMatrix, + nlop::NonlinearStageOperator, x::AbstractVector +) + odeop, odeopcache = nlop.odeop, nlop.odeopcache + tx = nlop.tx + usx = nlop.usx(x) + ws = nlop.ws + jacobian!(J, odeop, tx, usx, ws, odeopcache) + J +end + +####################### +# LinearStageOperator # +####################### +""" + struct LinearStageOperator <: StageOperator end + +Linear stage operator representing `res(x) = J(t, us) x + r(t, us) = 0`, +where `x` is the stage unknown and `us` denotes the point where the residual +of the ODE is to be evaluated. 
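+
+For example, an `ODESolver` would typically build and solve this operator as
+follows (a minimal sketch; the names `odeop`, `odeopcache`, `tx`, `usx`, `ws`,
+`J`, `r` and `x` are assumed to have been set up by the solver beforehand):
+```
+lop = LinearStageOperator(odeop, odeopcache, tx, usx, ws, J, r, false, nothing)
+ns = solve!(x, LUSolver(), lop, nothing) # returns the numerical setup for reuse
+```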
+""" +struct LinearStageOperator <: StageOperator + J::AbstractMatrix + r::AbstractVector + reuse::Bool +end + +function LinearStageOperator( + odeop::ODEOperator, odeopcache, + tx::Real, usx::Tuple{Vararg{AbstractVector}}, + ws::Tuple{Vararg{Real}}, + J::AbstractMatrix, r::AbstractVector, reuse::Bool, sysslvrcache +) + residual!(r, odeop, tx, usx, odeopcache) + + if isnothing(sysslvrcache) || !reuse + jacobian!(J, odeop, tx, usx, ws, odeopcache) + end + + LinearStageOperator(J, r, reuse) +end + +# NonlinearOperator interface +function Algebra.allocate_residual( + lop::LinearStageOperator, x::AbstractVector +) + r = allocate_in_range(typeof(lop.r), lop.J) + fill!(r, zero(eltype(r))) + r +end + +function Algebra.residual!( + r::AbstractVector, + lop::LinearStageOperator, x::AbstractVector +) + mul!(r, lop.J, x) + axpy!(1, lop.r, r) + r +end + +function Algebra.allocate_jacobian( + lop::LinearStageOperator, x::AbstractVector +) + lop.J +end + +function Algebra.jacobian!( + J::AbstractMatrix, + lop::LinearStageOperator, x::AbstractVector +) + copy_entries!(J, lop.J) + J +end + +################################### +# NonlinearSolver / StageOperator # +################################### +# Default behaviour from Gridap.Algebra. + +######################################### +# NonlinearSolver / LinearStageOperator # +######################################### +# Skip numerical setup update if possible. Since we cannot dispatch on the +# numerical setup to prevent it from updating itself when the matrix is +# constant, we have to overwrite the `NonlinearSolver` interface. + +function Algebra._update_nlsolve_cache!( + cache::NLSolversCache, + x0::AbstractVector, lop::LinearStageOperator +) + f!(r, x) = residual!(r, lop, x) + j!(j, x) = jacobian!(j, lop, x) + fj!(r, j, x) = residual_and_jacobian!(r, j, lop, x) + f0, j0 = cache.f0, cache.j0 + residual_and_jacobian!(f0, j0, lop, x0) + df = OnceDifferentiable(f!, j!, fj!, x0, f0, j0) + + ns = cache.ns + if !lop.reuse + numerical_setup!(ns, j0) + end + + NLSolversCache(f0, j0, df, ns, nothing) +end + +function Algebra._nlsolve_with_updated_cache!( + x::AbstractVector, + nls::NLSolver, lop::LinearStageOperator, + cache::NLSolversCache +) + # After checking NLsolve.jl, the linsolve argument is only passed to Newton + # and it is only called on the jacobian (j!), so we can save from updating + # the numerical setup + ns = cache.ns + function linsolve!(x, A, b) + if !lop.reuse + numerical_setup!(ns, A) + end + solve!(x, ns, b) + end + + result = nlsolve(cache.df, x; linsolve=linsolve!, nls.kwargs...) + cache.result = result + copy_entries!(x, result.zero) +end + +# IMPORTANT: because `NewtonRaphsonSolver` calls `numerical_setup!` internally, +# we would need to rewrite the functions `solve!` and `_solve_nr!` entirely +# for `LinearStageOperator` in order to skip numerical setup updates when the +# matrix is constant. To be on the safe side, and since `NewtonRaphsonSolver` +# is not exported anyway, we just prevent the user from using it at as a +# nonlinear solver for `LinearStageOperator`. + +const nr_on_lop_msg = """ +You are trying to use `NewtonRaphsonSolver` to solve a `LinearStageOperator`. +Since this is not optimised (yet), it is forbidden for now. Consider using a +nonlinear solver coming from `NLSolvers`, e.g. 
+``` + ls = LUSolver() + nls = NLSolver(ls, show_trace=true, method=:newton, iterations=10) +``` +""" + +function Algebra.solve!( + x::AbstractVector, + nls::NewtonRaphsonSolver, lop::LinearStageOperator, + cache::Nothing +) + @unreachable nr_on_lop_msg +end + +function Algebra.solve!( + x::AbstractVector, + nls::NewtonRaphsonSolver, lop::LinearStageOperator, + cache +) + @unreachable nr_on_lop_msg +end + +################################ +# LinearSolver / StageOperator # +################################ +# Forbid solving `StageOperator`s with `LinearSolver`s. For now it is already +# forbidden to solve a generic `FEOperator` with a `LinearSolver`, but it is +# still possible to solve a `NonlinearOperator` with a `LinearSolver`. The +# following should be replicated in Gridap.Algebra for `NonlinearOperator`s at +# some point. +const ls_on_nlop = """ +Cannot solve a generic `StageOperator` with a `LinearSolver`. +""" + +function Algebra.solve!( + x::AbstractVector, + ls::LinearSolver, nlop::StageOperator, + cache::Nothing +) + @unreachable ls_on_nlop +end + +function Algebra.solve!( + x::AbstractVector, + ls::LinearSolver, nlop::StageOperator, + cache +) + @unreachable ls_on_nlop +end + +###################################### +# LinearSolver / LinearStageOperator # +###################################### +function Algebra.solve!( + x::AbstractVector, + ls::LinearSolver, lop::LinearStageOperator, + ns::Nothing +) + J = lop.J + ss = symbolic_setup(ls, J) + ns = numerical_setup(ss, J) + + r = lop.r + rmul!(r, -1) + + solve!(x, ns, r) + ns +end + +function Algebra.solve!( + x::AbstractVector, + ls::LinearSolver, lop::LinearStageOperator, + ns +) + if !lop.reuse + J = lop.J + numerical_setup!(ns, J) + end + + r = lop.r + rmul!(r, -1) + + solve!(x, ns, r) + ns +end diff --git a/src/ODEs/TimeDerivatives.jl b/src/ODEs/TimeDerivatives.jl new file mode 100644 index 000000000..df66902ac --- /dev/null +++ b/src/ODEs/TimeDerivatives.jl @@ -0,0 +1,94 @@ +############################# +# time_derivative interface # +############################# +""" + time_derivative(f::DerivableType) -> DerivableType + +Build the first-order time derivative operator for `f`. +""" +function time_derivative(f) + @abstractmethod +end + +""" + time_derivative(f::DerivableType, ::Val{k}) -> DerivableType + +Build the `k`-th-order time derivative operator for `f`. +""" +function time_derivative(f, ::Val{0}) + f +end + +function time_derivative(f, ::Val{1}) + time_derivative(f) +end + +function time_derivative(f, ::Val{k}) where {k} + time_derivative(time_derivative(f), Val(k - 1)) +end + +""" + ∂t(f::DerivableType) -> DerivableType + +Build the first-th-order time derivative operator for `f`. + +Alias for `time_derivative(f)`. +""" +function ∂t(f) + time_derivative(f) +end + +""" + ∂t(f::DerivableType, ::Val{k}) -> DerivableType + +Build the `k`-th-order time derivative operator for `f`. + +Alias for `time_derivative(f, Val(k))`. +""" +function ∂t(f, ::Val{k}) where {k} + time_derivative(f, Val(k)) +end + +""" + ∂tt(f::DerivableType) -> DerivableType + +Second-order time derivative operator for `f`. + +Alias for `time_derivative(f, Val(2))`. 
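+
+A small sketch of the expected behaviour on a space-time function (the function
+`f` below is hypothetical):
+
+```julia
+f(x, t) = sin(t) * x[1]
+ftt = ∂tt(f)                # same as time_derivative(f, Val(2))
+ftt(VectorValue(1.0), 0.0)  # ≈ -sin(0.0) * 1.0 = 0.0
+```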
+""" +function ∂tt(f) + time_derivative(f, Val(2)) +end + +################################# +# Specialisation for `Function` # +################################# +function time_derivative(f::Function) + function dfdt(x, t) + z = zero(return_type(f, x, t)) + _time_derivative(f, x, t, z) + end + # Extend definition to include restrictions + _dfdt(x, t) = dfdt(x, t) + _dfdt(x::VectorValue) = t -> dfdt(x, t) + _dfdt(t::Real) = x -> dfdt(x, t) + return _dfdt +end + +function _time_derivative(f, x, t, z) + ForwardDiff.derivative(t -> f(x, t), t) +end + +function _time_derivative(f, x, t, z::VectorValue) + VectorValue(ForwardDiff.derivative(t -> get_array(f(x, t)), t)) + # VectorValue(ForwardDiff.derivative(t -> f(x, t), t)) +end + +function _time_derivative(f, x, t, z::TensorValue) + TensorValue(ForwardDiff.derivative(t -> get_array(f(x, t)), t)) +end + +############################### +# Specialisation for `Number` # +############################### +time_derivative(x::Number) = zero(x) diff --git a/src/ODEs/TransientCellFields.jl b/src/ODEs/TransientCellFields.jl new file mode 100644 index 000000000..02bd687a6 --- /dev/null +++ b/src/ODEs/TransientCellFields.jl @@ -0,0 +1,287 @@ +###################### +# TransientCellField # +###################### +""" + abstract type TransientCellField <: CellField end + +Transient version of `CellField`. + +# Mandatory +- [`time_derivative(f)`](@ref) +""" +abstract type TransientCellField <: CellField end + +# CellField interface +CellData.get_data(f::TransientCellField) = @abstractmethod + +CellData.get_triangulation(f::TransientCellField) = @abstractmethod + +CellData.DomainStyle(::Type{TransientCellField}) = @abstractmethod + +function CellData.change_domain( + f::TransientCellField, trian::Triangulation, target_domain::DomainStyle +) + @abstractmethod +end + +Fields.gradient(f::TransientCellField) = @abstractmethod + +Fields.∇∇(f::TransientCellField) = @abstractmethod + +# TransientCellField interface +function time_derivative(f::TransientCellField) + @abstractmethod +end + +################################# +# TransientSingleFieldCellField # +################################# +""" + struct TransientSingleFieldCellField <: TransientCellField end + +Transient `CellField` for a single-field `FESpace`. 
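+
+A hypothetical construction sketch, assuming `uh` and `uht` are `CellField`s
+(e.g. `FEFunction`s) holding the value and its first time derivative:
+
+```julia
+u = TransientCellField(uh, (uht,))  # dispatches to TransientSingleFieldCellField
+∂t(u)                               # TransientCellField wrapping uht
+```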
+""" +struct TransientSingleFieldCellField{A} <: TransientCellField + cellfield::A + derivatives::Tuple # {Vararg{A,B} where B} +end + +# Default constructor (see `TransientMultiFieldCellField` for the implementations +# of `TransientCellField` when the field is a `MultiFieldCellField`) +function TransientCellField(field::CellField, derivatives::Tuple) + TransientSingleFieldCellField(field, derivatives) +end + +# CellField interface +CellData.get_data(f::TransientSingleFieldCellField) = get_data(f.cellfield) + +CellData.get_triangulation(f::TransientSingleFieldCellField) = get_triangulation(f.cellfield) + +CellData.DomainStyle(::Type{<:TransientSingleFieldCellField{A}}) where {A} = DomainStyle(A) + +function CellData.change_domain( + f::TransientSingleFieldCellField, trian::Triangulation, + target_domain::DomainStyle) + change_domain(f.cellfield, trian, target_domain) +end + +Fields.gradient(f::TransientSingleFieldCellField) = gradient(f.cellfield) + +Fields.∇∇(f::TransientSingleFieldCellField) = ∇∇(f.cellfield) + +# Skeleton-related operations +function Base.getproperty(f::TransientSingleFieldCellField, sym::Symbol) + if sym in (:⁺, :plus, :⁻, :minus) + derivatives = () + if sym in (:⁺, :plus) + cellfield = CellFieldAt{:plus}(f.cellfield) + for iderivative in f.derivatives + derivatives = (derivatives..., CellFieldAt{:plus}(iderivative)) + end + elseif sym in (:⁻, :minus) + cellfield = CellFieldAt{:minus}(f.cellfield) + for iderivative in f.derivatives + derivatives = (derivatives..., CellFieldAt{:plus}(iderivative)) + end + end + return TransientSingleFieldCellField(cellfield, derivatives) + else + return getfield(f, sym) + end +end + +# TransientCellField interface +function time_derivative(f::TransientSingleFieldCellField) + cellfield, derivatives = first_and_tail(f.derivatives) + TransientCellField(cellfield, derivatives) +end + +################################ +# TransientMultiFieldCellField # +################################ +""" + struct TransientMultiFieldCellField <: TransientCellField end + +Transient `CellField` for a multi-field `FESpace`. +""" +struct TransientMultiFieldCellField{A} <: TransientCellField + cellfield::A + derivatives::Tuple + transient_single_fields::Vector{<:TransientCellField} # used to iterate +end + +const MultiFieldTypes = Union{MultiFieldCellField,MultiFieldFEFunction} +function TransientMultiFieldCellField(fields::MultiFieldTypes, derivatives::Tuple) + _flat = _to_transient_single_fields(fields, derivatives) + TransientMultiFieldCellField(fields, derivatives, _flat) +end + +# Default constructors +function TransientCellField(fields::MultiFieldTypes, derivatives::Tuple) + TransientMultiFieldCellField(fields, derivatives) +end + +function TransientCellField(fields::TransientMultiFieldCellField, derivatives::Tuple) + TransientMultiFieldCellField(fields, derivatives) +end + +# CellField interface +function CellData.get_data(f::TransientMultiFieldCellField) + s = """ + Function `get_data` is not implemented for `TransientMultiFieldCellField` at + this moment. You need to extract the individual fields and then evaluate them + separately. + + If this function is ever to be implemented, evaluating a `MultiFieldCellField` + directly would provide, at each evaluation point, a tuple with the value of + the different fields. 
+ """ + @notimplemented s +end + +CellData.get_triangulation(f::TransientMultiFieldCellField) = get_triangulation(f.cellfield) + +CellData.DomainStyle(::Type{TransientMultiFieldCellField{A}}) where {A} = DomainStyle(A) + +function CellData.change_domain( + f::TransientMultiFieldCellField, trian::Triangulation, + target_domain::DomainStyle +) + change_domain(f.cellfield, trian, target_domain) +end + +Fields.gradient(f::TransientMultiFieldCellField) = gradient(f.cellfield) + +Fields.∇∇(f::TransientMultiFieldCellField) = ∇∇(f.cellfield) + +# MultiField interface +MultiField.num_fields(f::TransientMultiFieldCellField) = length(f.cellfield) + +function Base.getindex(f::TransientMultiFieldCellField, index::Integer) + sub_cellfield = f.cellfield[index] + + sub_derivatives = () + for derivative in f.derivatives + sub_derivative = derivative[index] + sub_derivatives = (sub_derivatives..., sub_derivative) + end + + TransientSingleFieldCellField(sub_cellfield, sub_derivatives) +end + +function Base.getindex( + f::TransientMultiFieldCellField, + indices::AbstractVector{<:Integer} +) + sub_cellfield = MultiFieldCellField( + f.cellfield[indices], + DomainStyle(f.cellfield) + ) + + sub_derivatives = () + for derivative in f.derivatives + sub_derivative = MultiFieldCellField( + derivative[indices], + DomainStyle(derivative) + ) + sub_derivatives = (sub_derivatives..., sub_derivative) + end + + _sub_flat = _to_transient_single_fields(sub_cellfield, sub_derivatives) + TransientMultiFieldCellField(sub_cellfield, sub_derivatives, _sub_flat) +end + +function Base.iterate(f::TransientMultiFieldCellField) + iterate(f.transient_single_fields) +end + +function Base.iterate(f::TransientMultiFieldCellField, state) + iterate(f.transient_single_fields, state) +end + +# TransientCellField interface +function time_derivative(f::TransientMultiFieldCellField) + cellfield, derivatives = first_and_tail(f.derivatives) + + single_field_derivatives = map(cellfield, derivatives...) do cellfield, derivatives... + TransientSingleFieldCellField(cellfield, derivatives) + end + + TransientMultiFieldCellField( + cellfield, derivatives, + single_field_derivatives + ) +end + +#################### +# TransientFEBasis # +#################### +""" + struct TransientFEBasis <: FEBasis end + +Transient `FEBasis`. +""" +struct TransientFEBasis{A} <: FEBasis + febasis::A + derivatives::Tuple{Vararg{A}} +end + +# CellField interface +CellData.get_data(f::TransientFEBasis) = get_data(f.febasis) + +CellData.get_triangulation(f::TransientFEBasis) = get_triangulation(f.febasis) + +CellData.DomainStyle(::Type{<:TransientFEBasis{A}}) where {A} = DomainStyle(A) + +function CellData.change_domain( + f::TransientFEBasis, trian::Triangulation, + target_domain::DomainStyle +) + change_domain(f.febasis, trian, target_domain) +end + +Fields.gradient(f::TransientFEBasis) = gradient(f.febasis) + +Fields.∇∇(f::TransientFEBasis) = ∇∇(f.febasis) + +# FEBasis interface +FESpaces.BasisStyle(::Type{<:TransientFEBasis{A}}) where {A} = BasisStyle(A) + +# Transient FEBasis interface +function time_derivative(f::TransientFEBasis) + cellfield, derivatives = first_and_tail(f.derivatives) + TransientCellField(cellfield, derivatives) +end + +######### +# Utils # +######### +""" + _to_transient_single_fields( + multi_field, + derivatives + ) -> Vector{<:TransientSingleFieldCellField} + +Convert a `TransientMultiFieldCellField` into a vector of +`TransientSingleFieldCellField`s. 
+""" +function _to_transient_single_fields(multi_field, derivatives) + transient_single_fields = TransientCellField[] + + for index in 1:num_fields(multi_field) + single_field = multi_field[index] + + single_derivatives = () + for derivative in derivatives + single_derivatives = (single_derivatives..., derivative[index]) + end + + transient_single_field = TransientSingleFieldCellField( + single_field, + single_derivatives + ) + push!(transient_single_fields, transient_single_field) + end + + transient_single_fields +end diff --git a/src/ODEs/TransientFEOperators.jl b/src/ODEs/TransientFEOperators.jl new file mode 100644 index 000000000..3244a42d1 --- /dev/null +++ b/src/ODEs/TransientFEOperators.jl @@ -0,0 +1,877 @@ +""" + abstract type TransientFEOperator <: GridapType end + +Transient version of `FEOperator` corresponding to a residual of the form +```math +residual(t, u, v) = 0, +``` +where `residual` is linear in `v`. Time derivatives of `u` can be included by +using the `∂t` operator. + +# Important +For now, the residual and jacobians cannot be directly computed on a +`TransientFEOperator`. They have to be evaluated on the corresponding +algebraic operator, which is an `ODEOperator`. As such, `TransientFEOperator` +is not exactly a subtype of `FEOperator`, but rather at the intersection of +`FEOperator` and `ODEOperator`. This is because the `ODEOperator` works with +vectors and it is optimised to take advantage of constant forms. + +# Mandatory +- [`get_test(tfeop)`](@ref) +- [`get_trial(tfeop)`](@ref) +- [`get_order(tfeop)`](@ref) +- [`get_res(tfeop::TransientFEOperator)`](@ref) +- [`get_jacs(tfeop::TransientFEOperator)`](@ref) +- [`get_forms(tfeop::TransientFEOperator)`](@ref) +- [`get_assembler(tfeop)`](@ref) + +# Optional +- [`get_algebraic_operator(tfeop)`](@ref) +- [`get_num_forms(tfeop::TransientFEOperator)`](@ref) +- [`is_form_constant(tfeop, k)`](@ref) +- [`allocate_tfeopcache(tfeop)`](@ref) +- [`update_tfeopcache!(tfeopcache, tfeop, t)`](@ref) +""" +abstract type TransientFEOperator{T<:ODEOperatorType} <: GridapType end + +""" + ODEOperatorType(::Type{<:TransientFEOperator}) -> ODEOperatorType + +Return the `ODEOperatorType` of the `TransientFEOperator`. +""" +ODEOperatorType(::TransientFEOperator{T}) where {T} = T +ODEOperatorType(::Type{<:TransientFEOperator{T}}) where {T} = T + +# FEOperator interface +function FESpaces.get_test(tfeop::TransientFEOperator) + @abstractmethod +end + +function FESpaces.get_trial(tfeop::TransientFEOperator) + @abstractmethod +end + +function FESpaces.get_algebraic_operator(tfeop::TransientFEOperator) + ODEOpFromTFEOp(tfeop) +end + +# ODEOperator interface +function Polynomials.get_order(tfeop::TransientFEOperator) + @abstractmethod +end + +# TransientFEOperator interface +""" + get_res(tfeop::TransientFEOperator) -> Function + +Return the lowest-order element in the decomposition of the residual of the +`ODEOperator`: +* In the general case, return the whole residual, +* For an `AbstractQuasilinearODE`, return the residual excluding the mass term, +* For an `AbstractLinearODE`, return the forcing term. +""" +function get_res(tfeop::TransientFEOperator) + @abstractmethod +end + +""" + get_jacs(tfeop::TransientFEOperator) -> Tuple{Vararg{Function}} + +Return the jacobians of the `TransientFEOperator`. +""" +function get_jacs(tfeop::TransientFEOperator) + @abstractmethod +end + +""" + get_num_forms(tfeop::TransientFEOperator) -> Integer + +Return the number of bilinear forms of the `TransientFEOperator`. See +[`get_forms`](@ref). 
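+
+For instance, a second-order `TransientLinearFEOperator` built from stiffness,
+damping and mass forms has `get_num_forms(tfeop) == 3`, whereas a general
+(nonlinear) `TransientFEOperator` has none.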
+""" +function get_num_forms(tfeop::TransientFEOperator) + 0 +end + +function get_num_forms(tfeop::TransientFEOperator{<:AbstractQuasilinearODE}) + 1 +end + +function get_num_forms(tfeop::TransientFEOperator{<:AbstractLinearODE}) + get_order(tfeop) + 1 +end + +""" + get_forms(tfeop::TransientFEOperator) -> Function + +Return the bilinear forms of the `TransientFEOperator`: +* For a general transient FE operator, return nothing, +* For a quasilinear transient FE operator, return the mass matrix, +* For a linear transient FE operator, return all the linear forms. +""" +function get_forms(tfeop::TransientFEOperator) + () +end + +function get_forms(tfeop::TransientFEOperator{<:AbstractQuasilinearODE}) + @abstractmethod +end + +""" + is_form_constant(tfeop::TransientFEOperator, k::Integer) -> Bool + +Indicate whether the bilinear form of the `TransientFEOperator` corresponding +to the `k`-th-order time derivative of `u` is constant with respect to `t`. +""" +function is_form_constant(tfeop::TransientFEOperator, k::Integer) + false +end + +""" + get_assembler(tfeop::TransientFEOperator) -> Assembler + +Return the assembler of the `TransientFEOperator`. +""" +function get_assembler(tfeop::TransientFEOperator) + @abstractmethod +end + +""" + allocate_tfeopcache( + tfeop::TransientFEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}} + ) -> CacheType + +Allocate the cache of the `TransientFEOperator`. +""" +function allocate_tfeopcache( + tfeop::TransientFEOperator, + t::Real, us::Tuple{Vararg{AbstractVector}} +) + nothing +end + +""" + update_tfeopcache!(tfeopcache, tfeop::TransientFEOperator, t::Real) -> CacheType + +Update the cache of the `TransientFEOperator` at time `t`. +""" +function update_tfeopcache!(tfeopcache, tfeop::TransientFEOperator, t::Real) + tfeopcache +end + +# Broken FESpaces interface +const res_jac_on_transient_tfeop_msg = """ +For now, the residual and jacobians cannot be directly computed on a +`TransientFEOperator`. They have to be evaluated on the corresponding +algebraic operator, which is an `ODEOperator`. + +This is because the `ODEOperator` works with vectors and it is optimised to +take advantage of constant jacobians. +""" + +function Algebra.allocate_residual(tfeop::TransientFEOperator, u) + @unreachable res_jac_on_transient_tfeop_msg +end + +function Algebra.residual!(r::AbstractVector, tfeop::TransientFEOperator, u) + @unreachable res_jac_on_transient_tfeop_msg +end + +function Algebra.residual(tfeop::TransientFEOperator, u) + @unreachable res_jac_on_transient_tfeop_msg +end + +function Algebra.allocate_jacobian(tfeop::TransientFEOperator, u) + @unreachable res_jac_on_transient_tfeop_msg +end + +function Algebra.jacobian!(J::AbstractMatrix, tfeop::TransientFEOperator, u) + @unreachable res_jac_on_transient_tfeop_msg +end + +function Algebra.jacobian(tfeop::TransientFEOperator, u) + @unreachable res_jac_on_transient_tfeop_msg +end + +const default_linear_msg = """ +For an operator of order zero, the definitions of quasilinear, semilinear and +linear coincide. Defaulting to linear. +""" + +############################# +# TransientFEOpFromWeakForm # +############################# +""" + struct TransientFEOpFromWeakForm <: TransientFEOperator end + +Generic `TransientFEOperator` constructed from the weak formulation of a +partial differential equation. 
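+
+A hedged construction sketch for a first-order problem, assuming `U` and `V` are
+the (transient) trial and test spaces, `dΩ` is a `Measure` and `f` is a
+hypothetical time-dependent forcing:
+
+```julia
+res(t, u, v) = ∫( ∂t(u) * v + ∇(u) ⋅ ∇(v) - f(t) * v )dΩ
+tfeop = TransientFEOperator(res, U, V)  # jacobians built by automatic differentiation
+```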
+""" +struct TransientFEOpFromWeakForm <: TransientFEOperator{NonlinearODE} + res::Function + jacs::Tuple{Vararg{Function}} + assembler::Assembler + trial::FESpace + test::FESpace + order::Integer +end + +# Constructor with manual jacobians +function TransientFEOperator( + res::Function, jacs::Tuple{Vararg{Function}}, + trial, test +) + order = length(jacs) - 1 + assembler = SparseMatrixAssembler(trial, test) + TransientFEOpFromWeakForm( + res, jacs, + assembler, trial, test, order + ) +end + +# Constructors with flat arguments (orders 0, 1, 2) +function TransientFEOperator( + res::Function, + jac::Function, + trial, test +) + TransientFEOperator( + res, (jac,), + trial, test + ) +end + +function TransientFEOperator( + res::Function, + jac::Function, jac_t::Function, + trial, test +) + TransientFEOperator( + res, (jac, jac_t), + trial, test + ) +end + +function TransientFEOperator( + res::Function, + jac::Function, jac_t::Function, jac_tt::Function, + trial, test +) + TransientFEOperator( + res, (jac, jac_t, jac_tt), + trial, test + ) +end + +# Constructor with automatic jacobians +function TransientFEOperator( + res::Function, + trial, test; + order::Integer=1 +) + function jac_0(t, u, du, v) + function res_0(y) + u0 = TransientCellField(y, u.derivatives) + res(t, u0, v) + end + jacobian(res_0, u.cellfield) + end + jacs = (jac_0,) + + for k in 1:order + function jac_k(t, u, duk, v) + function res_k(y) + derivatives = (u.derivatives[1:k-1]..., y, u.derivatives[k+1:end]...) + uk = TransientCellField(u.cellfield, derivatives) + res(t, uk, v) + end + jacobian(res_k, u.derivatives[k]) + end + jacs = (jacs..., jac_k) + end + + TransientFEOperator(res, jacs, trial, test) +end + +# TransientFEOperator interface +FESpaces.get_test(tfeop::TransientFEOpFromWeakForm) = tfeop.test + +FESpaces.get_trial(tfeop::TransientFEOpFromWeakForm) = tfeop.trial + +Polynomials.get_order(tfeop::TransientFEOpFromWeakForm) = tfeop.order + +get_res(tfeop::TransientFEOpFromWeakForm) = tfeop.res + +get_jacs(tfeop::TransientFEOpFromWeakForm) = tfeop.jacs + +get_assembler(tfeop::TransientFEOpFromWeakForm) = tfeop.assembler + +######################################## +# TransientQuasilinearFEOpFromWeakForm # +######################################## +""" + struct TransientQuasilinearFEOpFromWeakForm <: TransientFEOperator end + +Transient `FEOperator` defined by a transient weak form +```math +residual(t, u, v) = mass(t, u, ∂t^N[u], v) + res(t, u, v) = 0. +``` +Let `N` be the order of the operator. We impose the following conditions: +* `mass` is linear in the `N`-th-order time derivative of `u`, +* `res` has order `N-1`, +* both `mass` and `res` are linear in `v`. + +For convenience, the mass matrix has to be specified as a function of `u` for +the nonlinear part, and `∂t^N[u]`. 
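+
+A hedged first-order sketch, assuming trial and test spaces `U` and `V`, a
+measure `dΩ` and a hypothetical solution-dependent coefficient `κ`:
+
+```julia
+mass(t, u, dtu, v) = ∫( (κ ∘ u) * dtu * v )dΩ
+res(t, u, v) = ∫( ∇(u) ⋅ ∇(v) )dΩ
+tfeop = TransientQuasilinearFEOperator(mass, res, U, V)
+```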
+""" +struct TransientQuasilinearFEOpFromWeakForm <: TransientFEOperator{QuasilinearODE} + mass::Function + res::Function + jacs::Tuple{Vararg{Function}} + assembler::Assembler + trial::FESpace + test::FESpace + order::Integer +end + +# Constructor with manual jacobians +function TransientQuasilinearFEOperator( + mass::Function, res::Function, jacs::Tuple{Vararg{Function}}, + trial, test +) + order = length(jacs) - 1 + if order == 0 + @warn default_linear_msg + return TransientLinearFEOperator((mass,), res, jacs, trial, test) + end + + assembler = SparseMatrixAssembler(trial, test) + TransientQuasilinearFEOpFromWeakForm( + mass, res, jacs, + assembler, trial, test, order + ) +end + +# Constructor with flat arguments (orders 0, 1, 2) +function TransientQuasilinearFEOperator( + mass::Function, res::Function, + jac::Function, + trial, test +) + @warn default_linear_msg + TransientLinearFEOperator(mass, res, jac, trial, test) +end + +function TransientQuasilinearFEOperator( + mass::Function, res::Function, + jac::Function, jac_t::Function, + trial, test +) + TransientQuasilinearFEOperator( + mass, res, (jac, jac_t), + trial, test + ) +end + +function TransientQuasilinearFEOperator( + mass::Function, res::Function, + jac::Function, jac_t::Function, jac_tt::Function, + trial, test +) + TransientQuasilinearFEOperator( + mass, res, (jac, jac_t, jac_tt), + trial, test + ) +end + +# Constructor with automatic jacobians +function TransientQuasilinearFEOperator( + mass::Function, res::Function, + trial, test; + order::Integer=1 +) + if order == 0 + @warn default_linear_msg + return TransientLinearFEOperator(mass, res, trial, test) + end + + jacs = () + if order > 0 + function jac_0(t, u, du, v) + function res_0(y) + u0 = TransientCellField(y, u.derivatives) + ∂tNu0 = ∂t(u0, Val(order)) + mass(t, u0, ∂tNu0, v) + res(t, u0, v) + end + jacobian(res_0, u.cellfield) + end + jacs = (jacs..., jac_0) + end + + for k in 1:order-1 + function jac_k(t, u, duk, v) + function res_k(y) + derivatives = (u.derivatives[1:k-1]..., y, u.derivatives[k+1:end]...) + u0 = TransientCellField(u.cellfield, derivatives) + ∂tNu0 = ∂t(u0, Val(order)) + mass(t, u0, ∂tNu0, v) + res(t, u0, v) + end + jacobian(res_k, u.derivatives[k]) + end + jacs = (jacs..., jac_k) + end + + # When the operator is quasilinear, the jacobian of the residual w.r.t. the + # highest-order term is simply the mass term. + jac_N(t, u, duN, v) = mass(t, u, duN, v) + jacs = (jacs..., jac_N) + + TransientQuasilinearFEOperator( + mass, res, jacs, trial, test + ) +end + +# TransientFEOperator interface +FESpaces.get_test(tfeop::TransientQuasilinearFEOpFromWeakForm) = tfeop.test + +FESpaces.get_trial(tfeop::TransientQuasilinearFEOpFromWeakForm) = tfeop.trial + +Polynomials.get_order(tfeop::TransientQuasilinearFEOpFromWeakForm) = tfeop.order + +get_res(tfeop::TransientQuasilinearFEOpFromWeakForm) = tfeop.res + +get_jacs(tfeop::TransientQuasilinearFEOpFromWeakForm) = tfeop.jacs + +get_forms(tfeop::TransientQuasilinearFEOpFromWeakForm) = (tfeop.mass,) + +get_assembler(tfeop::TransientQuasilinearFEOpFromWeakForm) = tfeop.assembler + +######################################## +# TransientQuasilinearFEOpFromWeakForm # +######################################## +""" + struct TransientSemilinearFEOpFromWeakForm <: TransientFEOperator end + +Transient `FEOperator` defined by a transient weak form +```math +residual(t, u, v) = mass(t, ∂t^N[u], v) + res(t, u, v) = 0. +``` +Let `N` be the order of the operator. 
We impose the following conditions: +* `mass` is linear in the `N`-th-order time derivative of `u`, +* `res` has order `N-1`, +* both `mass` and `res` are linear in `v`. + +For convenience, the mass matrix has to be specified as a function of +`∂t^N[u]`, i.e. as a linear form. +""" +struct TransientSemilinearFEOpFromWeakForm <: TransientFEOperator{SemilinearODE} + mass::Function + res::Function + jacs::Tuple{Vararg{Function}} + constant_mass::Bool + assembler::Assembler + trial::FESpace + test::FESpace + order::Integer +end + +# Constructor with manual jacobians +function TransientSemilinearFEOperator( + mass::Function, res::Function, jacs::Tuple{Vararg{Function}}, + trial, test; + constant_mass::Bool=false, +) + order = length(jacs) - 1 + if order == 0 + @warn default_linear_msg + return TransientLinearFEOperator( + (mass,), res, jacs, trial, test; constant_mass + ) + end + + assembler = SparseMatrixAssembler(trial, test) + TransientSemilinearFEOpFromWeakForm( + mass, res, jacs, constant_mass, + assembler, trial, test, order + ) +end + +# Constructor with flat arguments (orders 0, 1, 2) +function TransientSemilinearFEOperator( + mass::Function, res::Function, + jac::Function, + trial, test; + constant_mass::Bool=false, +) + @warn default_linear_msg + TransientLinearFEOperator(mass, res, jac, trial, test; constant_mass) +end + +function TransientSemilinearFEOperator( + mass::Function, res::Function, + jac::Function, jac_t::Function, + trial, test; + constant_mass::Bool=false, +) + TransientSemilinearFEOperator( + mass, res, (jac, jac_t), + trial, test; + constant_mass + ) +end + +function TransientSemilinearFEOperator( + mass::Function, res::Function, + jac::Function, jac_t::Function, jac_tt::Function, + trial, test; + constant_mass::Bool=false, +) + TransientSemilinearFEOperator( + mass, res, (jac, jac_t, jac_tt), + trial, test; + constant_mass + ) +end + +# Constructor with automatic jacobians +function TransientSemilinearFEOperator( + mass::Function, res::Function, + trial, test; + order::Integer=1, + constant_mass::Bool=false +) + if order == 0 + @warn default_linear_msg + return TransientLinearFEOperator(mass, res, trial, test; constant_mass) + end + + # When the operator is semilinear, the mass term can be omitted in the + # computation of the other jacobians. + jacs = () + if order > 0 + function jac_0(t, u, du, v) + function res_0(y) + u0 = TransientCellField(y, u.derivatives) + res(t, u0, v) + end + jacobian(res_0, u.cellfield) + end + jacs = (jacs..., jac_0) + end + + for k in 1:order-1 + function jac_k(t, u, duk, v) + function res_k(y) + derivatives = (u.derivatives[1:k-1]..., y, u.derivatives[k+1:end]...) + uk = TransientCellField(u.cellfield, derivatives) + res(t, uk, v) + end + jacobian(res_k, u.derivatives[k]) + end + jacs = (jacs..., jac_k) + end + + # When the operator is semilinear, the jacobian of the residual w.r.t. the + # highest-order term is simply the mass term. 
+ jac_N(t, u, duN, v) = mass(t, duN, v) + jacs = (jacs..., jac_N) + + TransientSemilinearFEOperator( + mass, res, jacs, trial, test; + constant_mass + ) +end + +# TransientFEOperator interface +FESpaces.get_test(tfeop::TransientSemilinearFEOpFromWeakForm) = tfeop.test + +FESpaces.get_trial(tfeop::TransientSemilinearFEOpFromWeakForm) = tfeop.trial + +Polynomials.get_order(tfeop::TransientSemilinearFEOpFromWeakForm) = tfeop.order + +get_res(tfeop::TransientSemilinearFEOpFromWeakForm) = tfeop.res + +get_jacs(tfeop::TransientSemilinearFEOpFromWeakForm) = tfeop.jacs + +get_forms(tfeop::TransientSemilinearFEOpFromWeakForm) = (tfeop.mass,) + +function is_form_constant(tfeop::TransientSemilinearFEOpFromWeakForm, k::Integer) + (k == get_order(tfeop)) && tfeop.constant_mass +end + +get_assembler(tfeop::TransientSemilinearFEOpFromWeakForm) = tfeop.assembler + +################################### +# TransientLinearFEOpFromWeakForm # +################################### +""" + struct TransientLinearFEOpFromWeakForm <: TransientFEOperator end + +Transient `FEOperator` defined by a transient weak form +```math +residual(t, u, v) = ∑_{0 ≤ k ≤ N} form_k(t, ∂t^k[u], v) + res(t, v) = 0, +``` +where `N` is the order of the operator, `form_k` is linear in `∂t^k[u]` and +does not depend on the other time derivatives of `u`, and the `form_k` and +`res` are linear in `v`. + +For convenience, the form corresponding to order `k` has to be written as a +function of `∂t^k[u]`, i.e. as a linear form, and the residual as a function +of `t` and `v` only. +""" +struct TransientLinearFEOpFromWeakForm <: TransientFEOperator{LinearODE} + forms::Tuple{Vararg{Function}} + res::Function + jacs::Tuple{Vararg{Function}} + constant_forms::Tuple{Vararg{Bool}} + assembler::Assembler + trial::FESpace + test::FESpace + order::Integer +end + +# Constructor with manual jacobians +function TransientLinearFEOperator( + forms::Tuple{Vararg{Function}}, res::Function, jacs::Tuple{Vararg{Function}}, + trial, test; + constant_forms::Tuple{Vararg{Bool}}=ntuple(_ -> false, length(forms)) +) + order = length(jacs) - 1 + assembler = SparseMatrixAssembler(trial, test) + TransientLinearFEOpFromWeakForm( + forms, res, jacs, constant_forms, + assembler, trial, test, order + ) +end + +# No constructor with flat arguments: would clash with the constructors +# below with flat forms and automatic jacobians, which are more useful + +# Constructor with automatic jacobians +function TransientLinearFEOperator( + forms::Tuple{Vararg{Function}}, res::Function, + trial, test; + constant_forms::Tuple{Vararg{Bool}}=ntuple(_ -> false, length(forms)) +) + # When the operator is linear, the jacobians are the forms themselves + order = length(forms) - 1 + jacs = ntuple(k -> ((t, u, duk, v) -> forms[k](t, duk, v)), order + 1) + + TransientLinearFEOperator( + forms, res, jacs, trial, test; + constant_forms + ) +end + +# Constructor with flat forms and automatic jacobians (orders 0, 1, 2) +function TransientLinearFEOperator( + mass::Function, res::Function, + trial, test; + constant_forms::NTuple{1,Bool}=(false,) +) + TransientLinearFEOperator( + (mass,), res, + trial, test; constant_forms + ) +end + +function TransientLinearFEOperator( + stiffness::Function, mass::Function, res::Function, + trial, test; + constant_forms::NTuple{2,Bool}=(false, false) +) + TransientLinearFEOperator( + (stiffness, mass), res, + trial, test; constant_forms + ) +end + +function TransientLinearFEOperator( + stiffness::Function, damping::Function, mass::Function, res::Function, + trial, 
test; + constant_forms::NTuple{3,Bool}=(false, false, false) +) + TransientLinearFEOpFromWeakForm( + (stiffness, damping, mass), res, + trial, test; constant_forms + ) +end + +# TransientFEOperator interface +FESpaces.get_test(tfeop::TransientLinearFEOpFromWeakForm) = tfeop.test + +FESpaces.get_trial(tfeop::TransientLinearFEOpFromWeakForm) = tfeop.trial + +Polynomials.get_order(tfeop::TransientLinearFEOpFromWeakForm) = tfeop.order + +get_res(tfeop::TransientLinearFEOpFromWeakForm) = (t, u, v) -> tfeop.res(t, v) + +get_jacs(tfeop::TransientLinearFEOpFromWeakForm) = tfeop.jacs + +get_forms(tfeop::TransientLinearFEOpFromWeakForm) = tfeop.forms + +function is_form_constant(tfeop::TransientLinearFEOpFromWeakForm, k::Integer) + tfeop.constant_forms[k+1] +end + +get_assembler(tfeop::TransientLinearFEOpFromWeakForm) = tfeop.assembler + +########################### +# TransientIMEXFEOperator # +########################### +""" + abstract type TransientIMEXFEOperator <: TransientFEOperator end + +Implicit-Explicit decomposition of a residual defining a `TransientFEOperator`: +```math +residual(t, u, v) = implicit_residual(t, u, v) + + explicit_residual(t, u, v), +``` +where +* The implicit operator defined by the implicit residual is considered stiff +and is meant to be solved implicitly, +* The explicit operator defined by the explicit residual is considered non-stiff +and is meant to be solved explicitly. +* Both the implicit and explicit residuals are linear in `v`. + +# Important +The explicit operator must have one order less than the implicit operator, so +that the mass term of the global operator is fully contained in the implicit +operator. + +# Mandatory +- [`get_imex_operators(tfeop)`](@ref) + +# Optional +- [`get_test(tfeop)`](@ref) +- [`get_trial(tfeop)`](@ref) +- [`get_algebraic_operator(tfeop)`](@ref) +""" +abstract type TransientIMEXFEOperator{T<:ODEOperatorType} <: TransientFEOperator{T} end + +""" + get_imex_operators(tfeop::TransientIMEXFEOperator) -> (TransientFEOperator, TransientFEOperator) + +Return the implicit and explicit parts of the `TransientIMEXFEOperator`. +""" +function get_imex_operators(tfeop::TransientIMEXFEOperator) + @abstractmethod +end + +# TransientFEOperator interface +# Only these function need to be implemented because all other functions of the +# interface are going to be called on the implicit and explicit +# `ODEOpFromFEOp`s within the `IMEXODEOperator` interface, and in turn called +# on the implicit and explicit `TransientFEOperator`s separately +function FESpaces.get_test(tfeop::TransientIMEXFEOperator) + im_tfeop, _ = get_imex_operators(tfeop) + get_test(im_tfeop) +end + +function FESpaces.get_trial(tfeop::TransientIMEXFEOperator) + im_tfeop, _ = get_imex_operators(tfeop) + get_trial(im_tfeop) +end + +function FESpaces.get_algebraic_operator(tfeop::TransientIMEXFEOperator) + im_tfeop, ex_tfeop = get_imex_operators(tfeop) + im_odeop, ex_odeop = ODEOpFromTFEOp(im_tfeop), ODEOpFromTFEOp(ex_tfeop) + GenericIMEXODEOperator(im_odeop, ex_odeop) +end + +function FESpaces.get_order(tfeop::TransientIMEXFEOperator) + im_tfeop, _ = get_imex_operators(tfeop) + get_order(im_tfeop) +end + +# IMEX Helpers +function check_imex_compatibility( + im_tfeop::TransientFEOperator, ex_tfeop::TransientFEOperator +) + msg = """ + The implicit and explicit parts of a `TransientIMEXFEOperator` must be + defined on the same test and trial spaces and have the same assembler. 
+ """ + @assert (get_test(im_tfeop) == get_test(ex_tfeop)) msg + @assert (get_trial(im_tfeop) == get_trial(ex_tfeop)) msg + @assert (get_assembler(im_tfeop) == get_assembler(ex_tfeop)) msg + + im_order, ex_order = get_order(im_tfeop), get_order(ex_tfeop) + check_imex_compatibility(im_order, ex_order) +end + +function IMEXODEOperatorType( + im_tfeop::TransientFEOperator, ex_tfeop::TransientFEOperator +) + T_im, T_ex = ODEOperatorType(im_tfeop), ODEOperatorType(ex_tfeop) + IMEXODEOperatorType(T_im, T_ex) +end + +################################## +# GenericTransientIMEXFEOperator # +################################## +""" + struct GenericTransientIMEXFEOperator <: TransientIMEXFEOperator end +""" +struct GenericTransientIMEXFEOperator{T<:ODEOperatorType} <: TransientIMEXFEOperator{T} + im_tfeop::TransientFEOperator + ex_tfeop::TransientFEOperator + + function GenericTransientIMEXFEOperator( + im_tfeop::TransientFEOperator, + ex_tfeop::TransientFEOperator + ) + check_imex_compatibility(im_tfeop, ex_tfeop) + T = IMEXODEOperatorType(im_tfeop, ex_tfeop) + new{T}(im_tfeop, ex_tfeop) + end +end + +# Default constructor +function TransientIMEXFEOperator( + im_tfeop::TransientFEOperator, + ex_tfeop::TransientFEOperator +) + GenericTransientIMEXFEOperator(im_tfeop, ex_tfeop) +end + +# TransientIMEXFEOperator interface +function get_imex_operators(tfeop::GenericTransientIMEXFEOperator) + (tfeop.im_tfeop, tfeop.ex_tfeop) +end + +######## +# Test # +######## +""" + test_tfe_operator( + tfeop::TransientFEOperator, + t::Real, uh::TransientCellField + ) -> Bool + +Test the interface of `TransientFEOperator` specializations. +""" +function test_tfe_operator( + tfeop::TransientFEOperator, + t::Real, uh::TransientCellField +) + U = get_trial(tfeop) + Ut = U(t) + @test Ut isa FESpace + + V = get_test(tfeop) + @test V isa FESpace + + odeop = get_algebraic_operator(tfeop) + @test odeop isa ODEOperator + + us = (get_free_dof_values(uh.cellfield),) + for derivative in uh.derivatives + us = (us..., get_free_dof_values(derivative)) + end + + test_ode_operator(odeop, t, us) + + true +end diff --git a/src/ODEs/TransientFESolutions.jl b/src/ODEs/TransientFESolutions.jl new file mode 100644 index 000000000..132096980 --- /dev/null +++ b/src/ODEs/TransientFESolutions.jl @@ -0,0 +1,169 @@ +####################### +# TransientFESolution # +####################### +""" + abstract type TransientFESolution <: GridapType end + +Wrapper around a `TransientFEOperator` and `ODESolver` that represents the +solution at a set of time steps. It is an iterator that computes the solution +at each time step in a lazy fashion when accessing the solution. + +# Mandatory +- [`Base.iterate(tfesltn)`](@ref) +- [`Base.iterate(tfesltn, state)`](@ref) +""" +abstract type TransientFESolution <: GridapType end + +""" + Base.iterate(tfesltn::TransientFESolution) -> ((Real, FEFunction), StateType) + +Allocate a cache and perform one step of the `ODEOperator` with the `ODESolver` +attached to the `TransientFESolution`. +""" +function Base.iterate(tfesltn::TransientFESolution) + @abstractmethod +end + +""" + Base.iterate(tfesltn::TransientFESolution) -> ((Real, FEFunction), StateType) + +Perform one step of the `ODEOperator` with the `ODESolver` attached to the +`TransientFESolution`. 
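+
+In practice the iterator is consumed with a `for` loop; a minimal sketch,
+assuming an `ODESolver` `odeslvr`, a `TransientFEOperator` `tfeop`, times `t0`
+and `tF` and an initial condition `uh0` are available (the post-processing call
+is illustrative only):
+
+```julia
+tfesltn = solve(odeslvr, tfeop, t0, tF, uh0)
+for (t_n, uh_n) in tfesltn
+  # uh_n is the FEFunction at time t_n, e.g. dump it to VTK:
+  # writevtk(Ω, "results_$t_n", cellfields=["uh" => uh_n])
+end
+```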
+""" +function Base.iterate(tfesltn::TransientFESolution, state) + @abstractmethod +end + +Base.IteratorSize(::Type{<:TransientFESolution}) = Base.SizeUnknown() + +############################## +# GenericTransientFESolution # +############################## +""" + struct GenericTransientFESolution <: TransientFESolution end + +Generic wrapper for the evolution of an `TransientFEOperator` with an +`ODESolver`. +""" + +struct GenericTransientFESolution <: TransientFESolution + odesltn::ODESolution + trial +end + +# Constructors +function GenericTransientFESolution( + odeslvr::ODESolver, tfeop::TransientFEOperator, + t0::Real, tF::Real, uhs0::Tuple{Vararg{CellField}} +) + odeop = get_algebraic_operator(tfeop) + us0 = get_free_dof_values.(uhs0) + odesltn = solve(odeslvr, odeop, t0, tF, us0) + trial = get_trial(tfeop) + GenericTransientFESolution(odesltn, trial) +end + +function GenericTransientFESolution( + odeslvr::ODESolver, tfeop::TransientFEOperator, + t0::Real, tF::Real, uh0::CellField, +) + uhs0 = (uh0,) + GenericTransientFESolution(odeslvr, tfeop, t0, tF, uhs0) +end + +function Base.iterate(tfesltn::GenericTransientFESolution) + ode_it = iterate(tfesltn.odesltn) + if isnothing(ode_it) + return nothing + end + + ode_it_data, ode_it_state = ode_it + tF, uF = ode_it_data + + Uh = allocate_space(tfesltn.trial) + Uh = evaluate!(Uh, tfesltn.trial, tF) + uhF = FEFunction(Uh, uF) + + tfe_it_data = (tF, uhF) + tfe_it_state = (Uh, ode_it_state) + (tfe_it_data, tfe_it_state) +end + +function Base.iterate(tfesltn::GenericTransientFESolution, state) + Uh, ode_it_state = state + + ode_it = iterate(tfesltn.odesltn, ode_it_state) + if isnothing(ode_it) + return nothing + end + + ode_it_data, ode_it_state = ode_it + tF, uF = ode_it_data + + Uh = evaluate!(Uh, tfesltn.trial, tF) + uhF = FEFunction(Uh, uF) + + tfe_it_data = (tF, uhF) + tfe_it_state = (Uh, ode_it_state) + (tfe_it_data, tfe_it_state) +end + +############################## +# Default behaviour of solve # +############################## +""" + solve( + odeslvr::ODESolver, tfeop::TransientFEOperator, + t0::Real, tF::Real, uhs0 + ) -> TransientFESolution + +Create a `TransientFESolution` wrapper around the `TransientFEOperator` and +`ODESolver`, starting at time `t0` with state `us0`, to be evolved until `tF`. +""" +function Algebra.solve( + odeslvr::ODESolver, tfeop::TransientFEOperator, + t0::Real, tF::Real, uhs0::Tuple{Vararg{CellField}} +) + GenericTransientFESolution(odeslvr, tfeop, t0, tF, uhs0) +end + +function Algebra.solve( + odeslvr::ODESolver, tfeop::TransientFEOperator, + t0::Real, tF::Real, uh0::CellField +) + uhs0 = (uh0,) + solve(odeslvr, tfeop, t0, tF, uhs0) +end + +######## +# Test # +######## +""" + test_tfe_solution(tfesltn::TransientFESolution) -> Bool + +Test the interface of `TransientFESolution` specializations. +""" +function test_tfe_solution(tfesltn::TransientFESolution) + for (t_n, uh_n) in tfesltn + @test t_n isa Real + @test uh_n isa FEFunction + end + + true +end + +""" + test_tfe_solver( + odeslvr::ODESolver, tfeop::TransientFEOperator, + t0::Real, tF::Real, uhs0 + ) -> Bool + +Test the interface of `ODESolver` specializations on `TransientFEOperator`s. 
+""" +function test_tfe_solver( + odeslvr::ODESolver, tfeop::TransientFEOperator, + t0::Real, tF::Real, uhs0::Tuple{Vararg{AbstractVector}} +) + tfesltn = solve(odeslvr, tfeop, t0, tF, uhs0) + test_tfe_solution(tfesltn) +end diff --git a/src/ODEs/TransientFESpaces.jl b/src/ODEs/TransientFESpaces.jl new file mode 100644 index 000000000..75a55a54c --- /dev/null +++ b/src/ODEs/TransientFESpaces.jl @@ -0,0 +1,244 @@ +######################### +# TransientTrialFESpace # +######################### +""" + struct TransientTrialFESpace <: SingleFieldFESpace end + +Transient version of `TrialFESpace`: the Dirichlet boundary conditions are +allowed to be time-dependent. + +# Mandatory +- [`allocate_space(space)`](@ref) +- [`evaluate!(space, t)`](@ref) +- [`evaluate(space, t)`](@ref) +- [`time_derivative(space)`](@ref) + +# Optional +- [`evaluate(space, t::Real)`](@ref) +""" +struct TransientTrialFESpace{U,U0} <: SingleFieldFESpace + space::U + homogeneous_space::U0 + transient_dirichlet::Union{Function,AbstractVector{<:Function}} + + function TransientTrialFESpace( + space::FESpace, transient_dirichlet::Union{Function,AbstractVector{<:Function}} + ) + homogeneous_space = HomogeneousTrialFESpace(space) + U = typeof(space) + U0 = typeof(homogeneous_space) + new{U,U0}(space, homogeneous_space, transient_dirichlet) + end +end + +# Constructors +function TransientTrialFESpace(space) + HomogeneousTrialFESpace(space) +end + +""" + allocate_space(space::TransientTrialFESpace) -> FESpace + +Allocate a transient space, intended to be updated at every time step. +""" +function allocate_space(U::TransientTrialFESpace) + HomogeneousTrialFESpace(U.space) +end + +""" + evaluate!( + transient_space::FESpace, + space::TransientTrialFESpace, t::Real + ) -> FESpace + +Replace the Dirichlet values of the space by those at time `t`. +""" +function Arrays.evaluate!(Ut::FESpace, U::TransientTrialFESpace, t::Real) + if U.transient_dirichlet isa AbstractVector + dirichlets_at_t = map(o -> o(t), U.transient_dirichlet) + else + dirichlets_at_t = U.transient_dirichlet(t) + end + TrialFESpace!(Ut, dirichlets_at_t) + Ut +end + +""" + evaluate(space::TransientTrialFESpace, t::Real) -> FESpace + +Allocate a transient space and evaluate the Dirichlet values at time `t`. +""" +function Arrays.evaluate(U::TransientTrialFESpace, t::Real) + Ut = allocate_space(U) + evaluate!(Ut, U, t) + Ut +end + +""" + evaluate(space::TransientTrialFESpace, t::Nothing) -> FESpace + +Evaluating at `nothing` means that the Dirichlet values are not important. +""" +function Arrays.evaluate(U::TransientTrialFESpace, t::Nothing) + U.homogeneous_space +end + +""" + (space::TransientTrialFESpace)(t) -> FESpace + +Alias for [`evaluate(space, t)`](@ref). +""" +(space::TransientTrialFESpace)(t) = evaluate(space, t) + +""" + time_derivative(space::TransientTrialFESpace) -> FESpace + +First-order time derivative of the Dirichlet functions. 
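+
+Hypothetical usage, assuming a test space `V` with Dirichlet boundary tags and
+the usual two-method convention for transient Dirichlet data:
+
+```julia
+g(x, t) = sin(t) * x[1]
+g(t::Real) = x -> g(x, t)
+U = TransientTrialFESpace(V, g)
+Ut = ∂t(U)     # trial space whose Dirichlet data is dg/dt
+Ut0 = Ut(0.0)  # TrialFESpace with dg/dt evaluated at t = 0
+```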
+""" +function time_derivative(U::TransientTrialFESpace) + TransientTrialFESpace(U.space, time_derivative.(U.transient_dirichlet)) +end + +# FESpace interface +FESpaces.get_free_dof_ids(f::TransientTrialFESpace) = get_free_dof_ids(f.space) +FESpaces.get_vector_type(f::TransientTrialFESpace) = get_vector_type(f.space) +Geometry.get_triangulation(f::TransientTrialFESpace) = get_triangulation(f.space) +FESpaces.get_cell_dof_ids(f::TransientTrialFESpace) = get_cell_dof_ids(f.space) +FESpaces.get_fe_basis(f::TransientTrialFESpace) = get_fe_basis(f.space) +FESpaces.get_fe_dof_basis(f::TransientTrialFESpace) = get_fe_dof_basis(f.space) +FESpaces.ConstraintStyle(::Type{<:TransientTrialFESpace{U}}) where {U} = ConstraintStyle(U) +function FESpaces.get_cell_constraints(f::TransientTrialFESpace, c::Constrained) + get_cell_constraints(f.space, c) +end +function FESpaces.get_cell_isconstrained(f::TransientTrialFESpace, c::Constrained) + get_cell_isconstrained(f.space, c) +end + +# SingleFieldFESpace interface +FESpaces.get_dirichlet_dof_ids(f::TransientTrialFESpace) = get_dirichlet_dof_ids(f.space) +FESpaces.num_dirichlet_tags(f::TransientTrialFESpace) = num_dirichlet_tags(f.space) +FESpaces.get_dirichlet_dof_tag(f::TransientTrialFESpace) = get_dirichlet_dof_tag(f.space) +function FESpaces.scatter_free_and_dirichlet_values(f::TransientTrialFESpace, free_values, dirichlet_values) + scatter_free_and_dirichlet_values(f.space, free_values, dirichlet_values) +end +function FESpaces.gather_free_and_dirichlet_values!(free_values, dirichlet_values, f::TransientTrialFESpace, cell_vals) + gather_free_and_dirichlet_values!(free_values, dirichlet_values, f.space, cell_vals) +end + +function FESpaces.get_dirichlet_dof_values(f::TransientTrialFESpace) + msg = """ + It does not make sense to get the Dirichlet DOF values of a transient FE space. You + should first evaluate the transient FE space at a point in time and get the Dirichlet + DOF values from there. + """ + @unreachable msg +end + +# function FESpaces.SparseMatrixAssembler( +# trial::TransientTrialFESpace, +# test::FESpace +# ) +# SparseMatrixAssembler(evaluate(trial, nothing), test) +# end + +########### +# FESpace # +########### +allocate_space(space::FESpace) = space +Arrays.evaluate!(transient_space::FESpace, space::FESpace, t::Real) = space +Arrays.evaluate(space::FESpace, t::Real) = space +Arrays.evaluate(space::FESpace, t::Nothing) = space + +# TODO why is this needed? 
+@static if VERSION >= v"1.3" + (space::FESpace)(t) = evaluate(space, t) +end +(space::TrialFESpace)(t) = evaluate(space, t) +(space::ZeroMeanFESpace)(t) = evaluate(space, t) + +function time_derivative(space::SingleFieldFESpace) + HomogeneousTrialFESpace(space) +end + +##################### +# MultiFieldFESpace # +##################### +# This is only for backward compatibility, we could remove it +const TransientMultiFieldFESpace = MultiFieldFESpace + +function has_transient(U::MultiFieldFESpace) + any(space -> space isa TransientTrialFESpace, U.spaces) +end + +function allocate_space(U::MultiFieldFESpace) + if !has_transient(U) + return U + end + spaces = map(allocate_space, U) + style = MultiFieldStyle(U) + MultiFieldFESpace(spaces; style) +end + +function Arrays.evaluate!(Ut::MultiFieldFESpace, U::MultiFieldFESpace, t::Real) + if !has_transient(U) + return Ut + end + for (Uti, Ui) in zip(Ut, U) + evaluate!(Uti, Ui, t) + end + Ut +end + +function Arrays.evaluate(U::MultiFieldFESpace, t::Real) + if !has_transient(U) + return U + end + Ut = allocate_space(U) + evaluate!(Ut, U, t) +end + +function Arrays.evaluate(U::MultiFieldFESpace, t::Nothing) + if !has_transient(U) + return U + end + spaces = map(space -> evaluate(space, t), U.spaces) + style = MultiFieldStyle(U) + MultiFieldFESpace(spaces; style) +end + +function time_derivative(U::MultiFieldFESpace) + spaces = map(time_derivative, U.spaces) + style = MultiFieldStyle(U) + MultiFieldFESpace(spaces; style) +end + +######## +# Test # +######## +""" + test_tfe_space(U::FESpace) -> Bool + +Test the transient interface of `FESpace` specializations. +""" +function test_tfe_space(U::FESpace) + UX = evaluate(U, nothing) + @test UX isa FESpace + + t = 0.0 + + U0 = allocate_space(U) + U0 = evaluate!(U0, U, t) + @test U0 isa FESpace + + U0 = evaluate(U, t) + @test U0 isa FESpace + + U0 = U(t) + @test U0 isa FESpace + + Ut = ∂t(U) + Ut0 = Ut(t) + @test Ut0 isa FESpace + + true +end diff --git a/src/ODEs/TransientFETools/ODEOperatorInterfaces.jl b/src/ODEs/TransientFETools/ODEOperatorInterfaces.jl deleted file mode 100644 index b31dc9cae..000000000 --- a/src/ODEs/TransientFETools/ODEOperatorInterfaces.jl +++ /dev/null @@ -1,177 +0,0 @@ -""" -A wrapper of `TransientFEOperator` that transforms it to `ODEOperator`, i.e., -takes A(t,uh,∂tuh,∂t^2uh,...,∂t^Nuh,vh) and returns A(t,uF,∂tuF,...,∂t^NuF) -where uF,∂tuF,...,∂t^NuF represent the free values of the `EvaluationFunction` -uh,∂tuh,∂t^2uh,...,∂t^Nuh. 
-""" -struct ODEOpFromFEOp{C} <: ODEOperator{C} - feop::TransientFEOperator{C} -end - -get_order(op::ODEOpFromFEOp) = get_order(op.feop) - -function allocate_cache(op::ODEOpFromFEOp) - Ut = get_trial(op.feop) - U = allocate_trial_space(Ut) - Uts = (Ut,) - Us = (U,) - for i in 1:get_order(op) - Uts = (Uts...,∂t(Uts[i])) - Us = (Us...,allocate_trial_space(Uts[i+1])) - end - fecache = allocate_cache(op.feop) - ode_cache = (Us,Uts,fecache) - ode_cache -end - -function allocate_cache(op::ODEOpFromFEOp,v::AbstractVector) - ode_cache = allocate_cache(op) - _v = similar(v) - (_v, ode_cache) -end - -function allocate_cache(op::ODEOpFromFEOp,v::AbstractVector,a::AbstractVector) - ode_cache = allocate_cache(op) - _v = similar(v) - _a = similar(a) - (_v,_a, ode_cache) -end - -function update_cache!(ode_cache,op::ODEOpFromFEOp,t::Real) - _Us,Uts,fecache = ode_cache - Us = () - for i in 1:get_order(op)+1 - Us = (Us...,evaluate!(_Us[i],Uts[i],t)) - end - fecache = update_cache!(fecache,op.feop,t) - (Us,Uts,fecache) -end - -function allocate_residual(op::ODEOpFromFEOp,t0::Real,uhF::AbstractVector,ode_cache) - Us,Uts,fecache = ode_cache - uh = EvaluationFunction(Us[1],uhF) - allocate_residual(op.feop,t0,uh,fecache) -end - -function allocate_jacobian(op::ODEOpFromFEOp,t0::Real,uhF::AbstractVector,ode_cache) - Us,Uts,fecache = ode_cache - uh = EvaluationFunction(Us[1],uhF) - allocate_jacobian(op.feop,t0,uh,fecache) -end - -""" -It provides A(t,uh,∂tuh,...,∂t^Nuh) for a given (t,uh,∂tuh,...,∂t^Nuh) -""" -function residual!( - b::AbstractVector, - op::ODEOpFromFEOp, - t::Real, - xhF::Tuple{Vararg{AbstractVector}}, - ode_cache) - Xh, = ode_cache - dxh = () - for i in 2:get_order(op)+1 - dxh = (dxh...,EvaluationFunction(Xh[i],xhF[i])) - end - xh=TransientCellField(EvaluationFunction(Xh[1],xhF[1]),dxh) - residual!(b,op.feop,t,xh,ode_cache) -end - - -""" -It adds contribution to the Jacobian with respect to the i-th time derivative, -with i=0,...,N. That is, adding γ_i*[∂A/∂(∂t^iuh)](t,uh,∂tuh,...,∂t^Nuh) for a -given (t,uh,∂tuh,...,∂t^Nuh) to a given matrix J, where γ_i is a scaling coefficient -provided by the `ODESolver`, e.g., 1/Δt for Backward Euler; It represents -∂(δt^i(uh))/∂(uh), in which δt^i(⋅) is the approximation of ∂t^i(⋅) in the solver. -Note that for i=0, γ_i=1.0. 
-""" -function jacobian!( - A::AbstractMatrix, - op::ODEOpFromFEOp, - t::Real, - xhF::Tuple{Vararg{AbstractVector}}, - i::Integer, - γᵢ::Real, - ode_cache) - Xh, = ode_cache - dxh = () - for i in 2:get_order(op)+1 - dxh = (dxh...,EvaluationFunction(Xh[i],xhF[i])) - end - xh=TransientCellField(EvaluationFunction(Xh[1],xhF[1]),dxh) - jacobian!(A,op.feop,t,xh,i,γᵢ,ode_cache) -end - -""" -Add the contribution of all jacobians ,i.e., ∑ᵢ γ_i*[∂A/∂(∂t^iuh)](t,uh,∂tuh,...,∂t^Nuh,vh) -""" -function jacobians!( - J::AbstractMatrix, - op::ODEOpFromFEOp, - t::Real, - xhF::Tuple{Vararg{AbstractVector}}, - γ::Tuple{Vararg{Real}}, - ode_cache) - Xh, = ode_cache - dxh = () - for i in 2:get_order(op)+1 - dxh = (dxh...,EvaluationFunction(Xh[i],xhF[i])) - end - xh=TransientCellField(EvaluationFunction(Xh[1],xhF[1]),dxh) - jacobians!(J,op.feop,t,xh,γ,ode_cache) -end - -""" -It provides the Left hand side, RHS, of LHS(t,uh,∂tuh) = RHS(t,uh) for a given (t,uh,∂tuh,...,∂t^Nuh) -""" -function lhs!( - lhs::AbstractVector, - op::ODEOpFromFEOp, - t::Real, - xhF::Tuple{Vararg{AbstractVector}}, - ode_cache) - Xh, = ode_cache - dxh = () - for i in 2:get_order(op)+1 - dxh = (dxh...,EvaluationFunction(Xh[i],xhF[i])) - end - xh=TransientCellField(EvaluationFunction(Xh[1],xhF[1]),dxh) - lhs!(lhs,op.feop,t,xh,ode_cache) -end - -""" -It provides the Right hand side, RHS, of LHS(t,uh,∂tuh) = RHS(t,uh) for a given (t,uh,∂tuh,...,∂t^Nuh) -""" -function rhs!( - rhs::AbstractVector, - op::ODEOpFromFEOp, - t::Real, - xhF::Tuple{Vararg{AbstractVector}}, - ode_cache) - Xh, = ode_cache - dxh = () - for i in 2:get_order(op)+1 - dxh = (dxh...,EvaluationFunction(Xh[i],xhF[i])) - end - xh=TransientCellField(EvaluationFunction(Xh[1],xhF[1]),dxh) - rhs!(rhs,op.feop,t,xh,ode_cache) -end - -""" -It provides the explicit right hand side, E_RHS, of LHS(t,uh,∂tuh) = I_RHS(t,uh) + E_RHS(t,uh) for a given (t,uh,∂tuh,...,∂t^Nuh) -""" -function explicit_rhs!( - explicit_rhs::AbstractVector, - op::ODEOpFromFEOp, - t::Real, - xhF::Tuple{Vararg{AbstractVector}}, - ode_cache) - Xh, = ode_cache - dxh = () - for i in 2:get_order(op)+1 - dxh = (dxh...,EvaluationFunction(Xh[i],xhF[i])) - end - xh=TransientCellField(EvaluationFunction(Xh[1],xhF[1]),dxh) - explicit_rhs!(explicit_rhs,op.feop,t,xh,ode_cache) -end diff --git a/src/ODEs/TransientFETools/TransientCellField.jl b/src/ODEs/TransientFETools/TransientCellField.jl deleted file mode 100644 index 815bfacc3..000000000 --- a/src/ODEs/TransientFETools/TransientCellField.jl +++ /dev/null @@ -1,74 +0,0 @@ -# Transient CellField -abstract type TransientCellField <: CellField end - -get_data(f::TransientCellField) = @abstractmethod -get_triangulation(f::TransientCellField) = @abstractmethod -DomainStyle(::Type{TransientCellField}) = @abstractmethod -gradient(f::TransientCellField) = @abstractmethod -∇∇(f::TransientCellField) = @abstractmethod -function change_domain(f::TransientCellField,trian::Triangulation,target_domain::DomainStyle) - @abstractmethod -end - -struct TransientSingleFieldCellField{A} <: TransientCellField - cellfield::A - derivatives::Tuple#{Vararg{A,B} where B} -end - -SingleFieldTypes = Union{GenericCellField,SingleFieldFEFunction} - -function TransientCellField(single_field::SingleFieldTypes,derivatives::Tuple) - TransientSingleFieldCellField(single_field,derivatives) -end - -# CellField methods -get_data(f::TransientSingleFieldCellField) = get_data(f.cellfield) -get_triangulation(f::TransientSingleFieldCellField) = get_triangulation(f.cellfield) 
-DomainStyle(::Type{<:TransientSingleFieldCellField{A}}) where A = DomainStyle(A) -gradient(f::TransientSingleFieldCellField) = gradient(f.cellfield) -∇∇(f::TransientSingleFieldCellField) = ∇∇(f.cellfield) -change_domain(f::TransientSingleFieldCellField,trian::Triangulation,target_domain::DomainStyle) = change_domain(f.cellfield,trian,target_domain) - -# Skeleton related Operations -function Base.getproperty(f::TransientSingleFieldCellField, sym::Symbol) - if sym in (:⁺,:plus,:⁻, :minus) - derivatives = () - if sym in (:⁺,:plus) - cellfield = CellFieldAt{:plus}(f.cellfield) - for iderivative in f.derivatives - derivatives = (derivatives...,CellFieldAt{:plus}(iderivative)) - end - elseif sym in (:⁻, :minus) - cellfield = CellFieldAt{:minus}(f.cellfield) - for iderivative in f.derivatives - derivatives = (derivatives...,CellFieldAt{:plus}(iderivative)) - end - end - return TransientSingleFieldCellField(cellfield,derivatives) - else - return getfield(f, sym) - end -end - -# Transient FEBasis -struct TransientFEBasis{A} <: FEBasis - febasis::A - derivatives::Tuple{Vararg{A}} -end - -# FEBasis methods -get_data(f::TransientFEBasis) = get_data(f.febasis) -get_triangulation(f::TransientFEBasis) = get_triangulation(f.febasis) -DomainStyle(::Type{<:TransientFEBasis{A}}) where A = DomainStyle(A) -BasisStyle(::Type{<:TransientFEBasis{A}}) where A = BasisStyle(A) -gradient(f::TransientFEBasis) = gradient(f.febasis) -∇∇(f::TransientFEBasis) = ∇∇(f.febasis) -change_domain(f::TransientFEBasis,trian::Triangulation,target_domain::DomainStyle) = change_domain(f.febasis,trian,target_domain) - -# Time derivative -function ∂t(f::Union{TransientCellField,TransientFEBasis}) - cellfield, derivatives = first_and_tail(f.derivatives) - TransientCellField(cellfield,derivatives) -end - -∂tt(f::Union{TransientCellField,TransientFEBasis}) = ∂t(∂t(f::Union{TransientCellField,TransientFEBasis})) diff --git a/src/ODEs/TransientFETools/TransientFEOperators.jl b/src/ODEs/TransientFETools/TransientFEOperators.jl deleted file mode 100644 index f195baa65..000000000 --- a/src/ODEs/TransientFETools/TransientFEOperators.jl +++ /dev/null @@ -1,666 +0,0 @@ -""" -A transient version of the `Gridap` `FEOperator` that depends on time -""" -abstract type TransientFEOperator{C<:OperatorType} <: GridapType end - -""" -Returns the test space -""" -function get_test(op::TransientFEOperator) - @abstractmethod -end - -""" -Returns the (possibly) time-dependent trial space -""" -function get_trial(op::TransientFEOperator) - @abstractmethod # time dependent -end - -function allocate_residual(op::TransientFEOperator,t0,uh,cache) - @abstractmethod -end - -function allocate_jacobian(op::TransientFEOperator,t0,uh,cache) - @notimplemented -end - -""" -Idem as `residual!` of `ODEOperator` -""" -function residual!( - b::AbstractVector, - op::TransientFEOperator, - t::Real, - xh::Union{AbstractVector,Tuple{Vararg{AbstractVector}}}, - cache) - @abstractmethod -end - -""" -Idem as `jacobian!` of `ODEOperator` -""" -function jacobian!( - A::AbstractMatrix, - op::TransientFEOperator, - t::Real, - xh::Union{AbstractVector,Tuple{Vararg{AbstractVector}}}, - i::Int, - γᵢ::Real, - cache) - @abstractmethod -end - -""" -Idem as `jacobians!` of `ODEOperator` -""" -function jacobians!( - A::AbstractMatrix, - op::TransientFEOperator, - t::Real, - x::Tuple{Vararg{AbstractVector}}, - γ::Tuple{Vararg{Real}}, - cache) - @abstractmethod -end - -""" -Returns the assembler, which is constant for all time steps for a given FE -operator. 
- -Note: adaptive FE spaces involve to generate new FE spaces and -corresponding operators, due to the ummutable approach in `Gridap` -""" -get_assembler(feop::TransientFEOperator) = @abstractmethod - - -# Default API - -""" -Returns a `ODEOperator` wrapper of the `TransientFEOperator` that can be -straightforwardly used with the `ODETools` module. -""" -function get_algebraic_operator(feop::TransientFEOperator{C}) where C - ODEOpFromFEOp{C}(feop) -end - -OperatorType(::Type{<:TransientFEOperator{C}}) where C = C - -# @fverdugo This function is just in case we need to override it in the future for some specialization. -# This default implementation is enough for the moment. -function allocate_cache(op::TransientFEOperator) - nothing -end - -function update_cache!(cache::Nothing,op::TransientFEOperator,t::Real) - nothing -end - -# Specializations - -""" -Transient FE operator that is defined by a transient Weak form -""" -struct TransientFEOperatorFromWeakForm{C} <: TransientFEOperator{C} - res::Function - rhs::Function - jacs::Tuple{Vararg{Function}} - assem_t::Assembler - trials::Tuple{Vararg{Any}} - test::FESpace - order::Integer -end - -function TransientConstantFEOperator(m::Function,a::Function,b::Function, - trial,test) - res(t,u,v) = -1.0 * b(v) - rhs(t,u,v) = b(v) - jac(t,u,du,v) = a(du,v) - jac_t(t,u,dut,v) = m(dut,v) - assem_t = SparseMatrixAssembler(trial,test) - TransientFEOperatorFromWeakForm{Constant}(res,rhs,(jac,jac_t),assem_t,(trial,∂t(trial)),test,1) -end - -function TransientConstantMatrixFEOperator(m::Function,a::Function,b::Function, - trial,test) - res(t,u,v) = m(∂t(u),v) + a(u,v) - b(t,v) - rhs(t,u,v) = b(t,v) - a(u,v) - jac(t,u,du,v) = a(du,v) - jac_t(t,u,dut,v) = m(dut,v) - assem_t = SparseMatrixAssembler(trial,test) - TransientFEOperatorFromWeakForm{ConstantMatrix}(res,rhs,(jac,jac_t),assem_t,(trial,∂t(trial)),test,1) -end - -function TransientAffineFEOperator(m::Function,a::Function,b::Function, - trial,test) - res(t,u,v) = m(t,∂t(u),v) + a(t,u,v) - b(t,v) - rhs(t,u,v) = b(t,v) - a(t,u,v) - jac(t,u,du,v) = a(t,du,v) - jac_t(t,u,dut,v) = m(t,dut,v) - assem_t = SparseMatrixAssembler(trial,test) - TransientFEOperatorFromWeakForm{Affine}(res,rhs,(jac,jac_t),assem_t,(trial,∂t(trial)),test,1) -end - -function TransientFEOperator(res::Function,jac::Function,jac_t::Function, - trial,test) - assem_t = SparseMatrixAssembler(trial,test) - TransientFEOperatorFromWeakForm{Nonlinear}(res,rhs_error,(jac,jac_t),assem_t,(trial,∂t(trial)),test,1) -end - - -function TransientConstantFEOperator(m::Function,c::Function,a::Function,b::Function, - trial,test) - res(t,u,v) = -1.0 * b(v) - rhs(t,u,v) = b(v) - jac(t,u,du,v) = a(du,v) - jac_t(t,u,dut,v) = c(dut,v) - jac_tt(t,u,dutt,v) = m(dutt,v) - assem_t = SparseMatrixAssembler(trial,test) - trial_t = ∂t(trial) - trial_tt = ∂t(trial_t) - TransientFEOperatorFromWeakForm{Constant}( - res,rhs,(jac,jac_t,jac_tt),assem_t,(trial,trial_t,trial_tt),test,2) -end - -function TransientConstantMatrixFEOperator(m::Function,c::Function,a::Function,b::Function, - trial,test) - res(t,u,v) = m(∂tt(u),v) + c(∂t(u),v) + a(u,v) - b(t,v) - rhs(t,u,v) = b(t,v) - c(∂t(u),v) - a(u,v) - jac(t,u,du,v) = a(du,v) - jac_t(t,u,dut,v) = c(dut,v) - jac_tt(t,u,dutt,v) = m(dutt,v) - assem_t = SparseMatrixAssembler(trial,test) - trial_t = ∂t(trial) - trial_tt = ∂t(trial_t) - TransientFEOperatorFromWeakForm{ConstantMatrix}( - res,rhs,(jac,jac_t,jac_tt),assem_t,(trial,trial_t,trial_tt),test,2) -end - -function 
TransientAffineFEOperator(m::Function,c::Function,a::Function,b::Function, - trial,test) - res(t,u,v) = m(t,∂tt(u),v) + c(t,∂t(u),v) + a(t,u,v) - b(t,v) - rhs(t,u,v) = b(t,v) - c(t,∂t(u),v) - a(t,u,v) - jac(t,u,du,v) = a(t,du,v) - jac_t(t,u,dut,v) = c(t,dut,v) - jac_tt(t,u,dutt,v) = m(t,dutt,v) - assem_t = SparseMatrixAssembler(trial,test) - trial_t = ∂t(trial) - trial_tt = ∂t(trial_t) - TransientFEOperatorFromWeakForm{Affine}( - res,rhs,(jac,jac_t,jac_tt),assem_t,(trial,trial_t,trial_tt),test,2) -end - -function TransientFEOperator(res::Function,jac::Function,jac_t::Function, - jac_tt::Function,trial,test) - assem_t = SparseMatrixAssembler(trial,test) - trial_t = ∂t(trial) - trial_tt = ∂t(trial_t) - TransientFEOperatorFromWeakForm{Nonlinear}( - res,rhs_error,(jac,jac_t,jac_tt),assem_t,(trial,trial_t,trial_tt),test,2) -end - -function TransientFEOperator(res::Function,trial,test;order::Integer=1) - function jac_0(t,x,dx0,dv) - function res_0(y) - x0 = TransientCellField(y,x.derivatives) - res(t,x0,dv) - end - jacobian(res_0,x.cellfield) - end - jacs = (jac_0,) - for i in 1:order - function jac_i(t,x,dxi,dv) - function res_i(y) - derivatives = (x.derivatives[1:i-1]...,y,x.derivatives[i+1:end]...) - xi = TransientCellField(x.cellfield,derivatives) - res(t,xi,dv) - end - jacobian(res_i,x.derivatives[i]) - end - jacs = (jacs...,jac_i) - end - TransientFEOperator(res,jacs...,trial,test) -end - -function allocate_residual( - op::TransientFEOperatorFromWeakForm, - t0::Real, - uh::T, - cache) where T - V = get_test(op) - v = get_fe_basis(V) - dxh = () - for i in 1:get_order(op) - dxh = (dxh...,uh) - end - xh = TransientCellField(uh,dxh) - vecdata = collect_cell_vector(V,op.res(t0,xh,v)) - allocate_vector(op.assem_t,vecdata) -end - -function residual!( - b::AbstractVector, - op::TransientFEOperatorFromWeakForm, - t::Real, - xh::T, - cache) where T - V = get_test(op) - v = get_fe_basis(V) - vecdata = collect_cell_vector(V,op.res(t,xh,v)) - assemble_vector!(b,op.assem_t,vecdata) - b -end - -""" -Transient FE operator that is defined by a transient Weak form with the -form: LHS(t,u,∂u/∂t,...) ∂u/∂t = RHS(t,u,∂u/∂t,...). Used in Runge-Kutta schemes -""" -struct TransientRKFEOperatorFromWeakForm{C} <: TransientFEOperator{C} - lhs::Function - rhs::Function - jacs::Tuple{Vararg{Function}} - assem_t::Assembler - trials::Tuple{Vararg{Any}} - test::FESpace - order::Integer -end - -function TransientRungeKuttaFEOperator(lhs::Function,rhs::Function,jac::Function, - jac_t::Function,trial,test) - assem_t = SparseMatrixAssembler(trial,test) - TransientRKFEOperatorFromWeakForm{Nonlinear}(lhs,rhs,(jac,jac_t),assem_t,(trial,∂t(trial)),test,1) -end - -function TransientRungeKuttaFEOperator(lhs::Function,rhs::Function,trial,test) - res(t,u,v) = lhs(t,u,v) - rhs(t,u,v) - function jac_0(t,x,dx0,dv) - function res_0(y) - x0 = TransientCellField(y,x.derivatives) - res(t,x0,dv) - end - jacobian(res_0,x.cellfield) - end - jacs = (jac_0,) - function jac_t(t,x,dxt,dv) - function res_t(y) - derivatives = (y,x.derivatives[2:end]...) 
- xt = TransientCellField(x.cellfield,derivatives) - res(t,xt,dv) - end - jacobian(res_t,x.derivatives[1]) - end - jacs = (jac_0,jac_t) - TransientRungeKuttaFEOperator(lhs,rhs,jacs...,trial,test) -end - -function allocate_residual( - op::TransientRKFEOperatorFromWeakForm, - t0::Real, - uh::T, - cache) where T - V = get_test(op) - v = get_fe_basis(V) - dxh = () - for i in 1:get_order(op) - dxh = (dxh...,uh) - end - xh = TransientCellField(uh,dxh) - vecdata = collect_cell_vector(V,op.lhs(t0,xh,v)) - allocate_vector(op.assem_t,vecdata) -end - -function lhs!( - b::AbstractVector, - op::TransientRKFEOperatorFromWeakForm, - t::Real, - xh::T, - cache) where T - V = get_test(op) - v = get_fe_basis(V) - vecdata = collect_cell_vector(V,op.lhs(t,xh,v)) - assemble_vector!(b,op.assem_t,vecdata) - b -end - -function rhs!( - rhs::AbstractVector, - op::TransientRKFEOperatorFromWeakForm, - t::Real, - xh::T, - cache) where T - V = get_test(op) - v = get_fe_basis(V) - vecdata = collect_cell_vector(V,op.rhs(t,xh,v)) - assemble_vector!(rhs,op.assem_t,vecdata) - rhs -end - -# IMEX-RK Transient FE operators -""" -Transient FE operator that is defined by a transient Weak form with the -form: LHS(t,u,∂u/∂t,...) ∂u/∂t = I_RHS(t,u,∂u/∂t,...) + E_RHS(t,u,∂u/∂t,...). -Used in Implicit-Explicit Runge-Kutta schemes -""" -struct TransientIMEXRKFEOperatorFromWeakForm{C} <: TransientFEOperator{C} - lhs::Function - rhs::Function - explicit_rhs::Function - jacs::Tuple{Vararg{Function}} - assem_t::Assembler - trials::Tuple{Vararg{Any}} - test::FESpace - order::Integer -end - -function TransientIMEXRungeKuttaFEOperator(lhs::Function,rhs::Function, - explicit_rhs::Function,jac::Function,jac_t::Function,trial,test) - assem_t = SparseMatrixAssembler(trial,test) - TransientIMEXRKFEOperatorFromWeakForm{Nonlinear}(lhs,rhs,explicit_rhs,(jac,jac_t),assem_t,(trial,∂t(trial)),test,1) -end - -function TransientIMEXRungeKuttaFEOperator(lhs::Function,rhs::Function, - explicit_rhs::Function,trial,test) - res(t,u,v) = lhs(t,u,v) - rhs(t,u,v) - function jac_0(t,x,dx0,dv) - function res_0(y) - x0 = TransientCellField(y,x.derivatives) - res(t,x0,dv) - end - jacobian(res_0,x.cellfield) - end - jacs = (jac_0,) - function jac_t(t,x,dxt,dv) - function res_t(y) - derivatives = (y,x.derivatives[2:end]...) 
- xt = TransientCellField(x.cellfield,derivatives) - res(t,xt,dv) - end - jacobian(res_t,x.derivatives[1]) - end - jacs = (jac_0,jac_t) - TransientIMEXRungeKuttaFEOperator(lhs,rhs,explicit_rhs,jacs...,trial,test) -end - -function allocate_residual( - op::TransientIMEXRKFEOperatorFromWeakForm, - t0::Real, - uh::T, - cache) where T - V = get_test(op) - v = get_fe_basis(V) - dxh = () - for i in 1:get_order(op) - dxh = (dxh...,uh) - end - xh = TransientCellField(uh,dxh) - vecdata = collect_cell_vector(V,op.lhs(t0,xh,v)) - allocate_vector(op.assem_t,vecdata) -end - -function lhs!( - b::AbstractVector, - op::TransientIMEXRKFEOperatorFromWeakForm, - t::Real, - xh::T, - cache) where T - V = get_test(op) - v = get_fe_basis(V) - vecdata = collect_cell_vector(V,op.lhs(t,xh,v)) - assemble_vector!(b,op.assem_t,vecdata) - b -end - -function rhs!( - rhs::AbstractVector, - op::TransientIMEXRKFEOperatorFromWeakForm, - t::Real, - xh::T, - cache) where T - V = get_test(op) - v = get_fe_basis(V) - vecdata = collect_cell_vector(V,op.rhs(t,xh,v)) - assemble_vector!(rhs,op.assem_t,vecdata) - rhs -end - -function explicit_rhs!( - explicit_rhs::AbstractVector, - op::TransientIMEXRKFEOperatorFromWeakForm, - t::Real, - xh::T, - cache) where T - V = get_test(op) - v = get_fe_basis(V) - vecdata = collect_cell_vector(V,op.explicit_rhs(t,xh,v)) - assemble_vector!(explicit_rhs,op.assem_t,vecdata) - explicit_rhs -end - - -# EX-RK Transient FE operators -""" -Used in Explicit Runge-Kutta schemes -""" -struct TransientEXRKFEOperatorFromWeakForm{C} <: TransientFEOperator{C} - res::Function - lhs::Function - rhs::Function - jacs::Tuple{Vararg{Function}} - assem_t::Assembler - trials::Tuple{Vararg{Any}} - test::FESpace - order::Integer -end - - -function TransientEXRungeKuttaFEOperator(lhs::Function,rhs::Function,jac::Function, - jac_t::Function,trial,test) - res(t,u,v) = lhs(t,u,v) - rhs(t,u,v) - assem_t = SparseMatrixAssembler(trial,test) - TransientEXRKFEOperatorFromWeakForm{Nonlinear}(res,lhs,rhs,(jac,jac_t),assem_t,(trial,∂t(trial)),test,1) -end - - -function allocate_residual( - op::TransientEXRKFEOperatorFromWeakForm, - t0::Real, - uh::T, - cache) where T - V = get_test(op) - v = get_fe_basis(V) - dxh = () - for i in 1:get_order(op) - dxh = (dxh...,uh) - end - xh = TransientCellField(uh,dxh) - vecdata = collect_cell_vector(V,op.res(t0,xh,v)) - allocate_vector(op.assem_t,vecdata) -end - -function residual!( - b::AbstractVector, - op::TransientEXRKFEOperatorFromWeakForm, - t::Real, - xh::T, - cache) where T - V = get_test(op) - v = get_fe_basis(V) - vecdata = collect_cell_vector(V,op.res(t,xh,v)) - assemble_vector!(b,op.assem_t,vecdata) - b -end - - -function rhs!( - rhs::AbstractVector, - op::TransientEXRKFEOperatorFromWeakForm, - t::Real, - xh::T, - cache) where T - V = get_test(op) - v = get_fe_basis(V) - vecdata = collect_cell_vector(V,op.rhs(t,xh,v)) - assemble_vector!(rhs,op.assem_t,vecdata) - rhs -end - -function lhs!( - b::AbstractVector, - op::TransientEXRKFEOperatorFromWeakForm, - t::Real, - xh::T, - cache) where T - V = get_test(op) - v = get_fe_basis(V) - vecdata = collect_cell_vector(V,op.lhs(t,xh,v)) - assemble_vector!(b,op.assem_t,vecdata) - b -end - -# Common functions - -TransientFEOperatorsFromWeakForm = Union{TransientFEOperatorFromWeakForm, -TransientRKFEOperatorFromWeakForm, TransientIMEXRKFEOperatorFromWeakForm, -TransientEXRKFEOperatorFromWeakForm} - -function SparseMatrixAssembler( - trial::Union{TransientTrialFESpace,TransientMultiFieldTrialFESpace}, - test::FESpace) - 
SparseMatrixAssembler(evaluate(trial,nothing),test) -end - -get_assembler(op::TransientFEOperatorsFromWeakForm) = op.assem_t - -get_test(op::TransientFEOperatorsFromWeakForm) = op.test - -get_trial(op::TransientFEOperatorsFromWeakForm) = op.trials[1] - -get_order(op::TransientFEOperatorsFromWeakForm) = op.order - - -function allocate_jacobian( - op::TransientFEOperatorsFromWeakForm, - t0::Real, - uh::CellField, - cache) - _matdata_jacobians = fill_initial_jacobians(op,t0,uh) - matdata = _vcat_matdata(_matdata_jacobians) - allocate_matrix(op.assem_t,matdata) -end - -function jacobian!( - A::AbstractMatrix, - op::TransientFEOperatorsFromWeakForm, - t::Real, - xh::T, - i::Integer, - γᵢ::Real, - cache) where T - matdata = _matdata_jacobian(op,t,xh,i,γᵢ) - assemble_matrix_add!(A,op.assem_t, matdata) - A -end - -function jacobians!( - A::AbstractMatrix, - op::TransientFEOperatorsFromWeakForm, - t::Real, - xh::TransientCellField, - γ::Tuple{Vararg{Real}}, - cache) - _matdata_jacobians = fill_jacobians(op,t,xh,γ) - matdata = _vcat_matdata(_matdata_jacobians) - assemble_matrix_add!(A,op.assem_t, matdata) - A -end - -function fill_initial_jacobians(op::TransientFEOperatorsFromWeakForm,t0::Real,uh) - dxh = () - for i in 1:get_order(op) - dxh = (dxh...,uh) - end - xh = TransientCellField(uh,dxh) - _matdata = () - for i in 1:get_order(op)+1 - _matdata = (_matdata...,_matdata_jacobian(op,t0,xh,i,0.0)) - end - return _matdata -end - -function fill_jacobians( - op::TransientFEOperatorsFromWeakForm, - t::Real, - xh::T, - γ::Tuple{Vararg{Real}}) where T - _matdata = () - for i in 1:get_order(op)+1 - if (γ[i] > 0.0) - _matdata = (_matdata...,_matdata_jacobian(op,t,xh,i,γ[i])) - end - end - return _matdata -end - -function _vcat_matdata(_matdata) - term_to_cellmat_j = () - term_to_cellidsrows_j = () - term_to_cellidscols_j = () - for j in 1:length(_matdata) - term_to_cellmat_j = (term_to_cellmat_j...,_matdata[j][1]) - term_to_cellidsrows_j = (term_to_cellidsrows_j...,_matdata[j][2]) - term_to_cellidscols_j = (term_to_cellidscols_j...,_matdata[j][3]) - end - - term_to_cellmat = vcat(term_to_cellmat_j...) - term_to_cellidsrows = vcat(term_to_cellidsrows_j...) - term_to_cellidscols = vcat(term_to_cellidscols_j...) - - matdata = (term_to_cellmat,term_to_cellidsrows, term_to_cellidscols) -end - -function _matdata_jacobian( - op::TransientFEOperatorsFromWeakForm, - t::Real, - xh::T, - i::Integer, - γᵢ::Real) where T - Uh = evaluate(get_trial(op),nothing) - V = get_test(op) - du = get_trial_fe_basis(Uh) - v = get_fe_basis(V) - matdata = collect_cell_matrix(Uh,V,γᵢ*op.jacs[i](t,xh,du,v)) -end - -function rhs_error(t::Real,xh,v) - error("The \"rhs\" function is not defined for this TransientFEOperator. 
- Please, try to use another type of TransientFEOperator that supports this - functionality.") -end - -# Tester - -function test_transient_fe_operator(op::TransientFEOperator,uh) - odeop = get_algebraic_operator(op) - @test isa(odeop,ODEOperator) - cache = allocate_cache(op) - V = get_test(op) - @test isa(V,FESpace) - U = get_trial(op) - U0 = U(0.0) - @test isa(U0,FESpace) - r = allocate_residual(op,0.0,uh,cache) - @test isa(r,AbstractVector) - xh = TransientCellField(uh,(uh,)) - residual!(r,op,0.0,xh,cache) - @test isa(r,AbstractVector) - J = allocate_jacobian(op,0.0,uh,cache) - @test isa(J,AbstractMatrix) - jacobian!(J,op,0.0,xh,1,1.0,cache) - @test isa(J,AbstractMatrix) - jacobian!(J,op,0.0,xh,2,1.0,cache) - @test isa(J,AbstractMatrix) - jacobians!(J,op,0.0,xh,(1.0,1.0),cache) - @test isa(J,AbstractMatrix) - cache = update_cache!(cache,op,0.0) - true -end diff --git a/src/ODEs/TransientFETools/TransientFESolutions.jl b/src/ODEs/TransientFETools/TransientFESolutions.jl deleted file mode 100644 index a4da218ad..000000000 --- a/src/ODEs/TransientFETools/TransientFESolutions.jl +++ /dev/null @@ -1,111 +0,0 @@ -""" -It represents a FE function at a set of time steps. It is a wrapper of a ODE -solution for free values combined with data for Dirichlet values. Thus, it is a -lazy iterator that computes the solution at each time step when accessing the -solution. -""" -struct TransientFESolution - odesol::ODESolution - trial -end - - -function TransientFESolution( - solver::ODESolver, op::TransientFEOperator, uh0, t0::Real, tF::Real) - - ode_op = get_algebraic_operator(op) - u0 = get_free_dof_values(uh0) - ode_sol = solve(solver,ode_op,u0,t0,tF) - trial = get_trial(op) - - TransientFESolution(ode_sol, trial) -end - -function TransientFESolution( - solver::ODESolver, - op::TransientFEOperator, - xh0::Tuple{Vararg{Any}}, - t0::Real, - tF::Real) - - ode_op = get_algebraic_operator(op) - x0 = () - for xhi in xh0 - x0 = (x0...,get_free_dof_values(xhi)) - end - ode_sol = solve(solver,ode_op,x0,t0,tF) - trial = get_trial(op) - - TransientFESolution(ode_sol, trial) -end - -# Solve functions - -function solve( - solver::ODESolver,op::TransientFEOperator,u0,t0::Real,tf::Real) - TransientFESolution(solver,op,u0,t0,tf) -end - -function solve( - solver::ODESolver,op::TransientFEOperator,u0,v0,a0,t0::Real,tf::Real) - TransientFESolution(solver,op,u0,v0,a0,t0,tf) -end - -function test_transient_fe_solver(solver::ODESolver,op::TransientFEOperator,u0,t0,tf) - solution = solve(solver,op,u0,t0,tf) - test_transient_fe_solution(solution) -end - -#@fverdugo this is a general implementation of iterate for TransientFESolution -# We could also implement another one for the very common case that the -# underlying ode_op is a ODEOpFromFEOp object - -function Base.iterate(sol::TransientFESolution) - - odesolnext = Base.iterate(sol.odesol) - - if odesolnext === nothing - return nothing - end - - (uf, tf), odesolstate = odesolnext - - Uh = allocate_trial_space(sol.trial) - Uh = evaluate!(Uh,sol.trial,tf) - uh = FEFunction(Uh,uf) - - state = (Uh, odesolstate) - - (uh, tf), state -end - -function Base.iterate(sol::TransientFESolution, state) - - Uh, odesolstate = state - - odesolnext = Base.iterate(sol.odesol,odesolstate) - - if odesolnext === nothing - return nothing - end - - (uf, tf), odesolstate = odesolnext - - Uh = evaluate!(Uh,sol.trial,tf) - uh = FEFunction(Uh,uf) - - state = (Uh, odesolstate) - - (uh, tf), state - -end - -Base.IteratorSize(::Type{TransientFESolution}) = Base.SizeUnknown() - -function 
test_transient_fe_solution(fesol::TransientFESolution) - for (uhn,tn) in fesol - @test isa(uhn,FEFunction) - @test isa(tn,Real) - end - true -end diff --git a/src/ODEs/TransientFETools/TransientFESpaces.jl b/src/ODEs/TransientFETools/TransientFESpaces.jl deleted file mode 100644 index 29c3bb733..000000000 --- a/src/ODEs/TransientFETools/TransientFESpaces.jl +++ /dev/null @@ -1,228 +0,0 @@ -""" -A single field FE space with transient Dirichlet data (see Multifield below). -""" -struct TransientTrialFESpace{A,B} - space::A - dirichlet_t::Union{Function,Vector{<:Function}} - Ud0::B - - function TransientTrialFESpace(space::A,dirichlet_t::Union{Function,Vector{<:Function}}) where A - Ud0 = HomogeneousTrialFESpace(space) - B = typeof(Ud0) - new{A,B}(space,dirichlet_t,Ud0) - end -end - -function TransientTrialFESpace(space::A) where A - HomogeneousTrialFESpace(space) -end - -""" -Time evaluation without allocating Dirichlet vals -""" -function evaluate!(Ut::T,U::TransientTrialFESpace,t::Real) where T - if isa(U.dirichlet_t,Vector) - objects_at_t = map( o->o(t), U.dirichlet_t) - else - objects_at_t = U.dirichlet_t(t) - end - TrialFESpace!(Ut,objects_at_t) - Ut -end - -""" -Allocate the space to be used as first argument in evaluate! -""" -function allocate_trial_space(U::TransientTrialFESpace) - HomogeneousTrialFESpace(U.space) -end - -""" -Time evaluation allocating Dirichlet vals -""" -function evaluate(U::TransientTrialFESpace,t::Real) - Ut = allocate_trial_space(U) - evaluate!(Ut,U,t) - return Ut -end - -""" -We can evaluate at `nothing` when we do not care about the Dirichlet vals -""" -function evaluate(U::TransientTrialFESpace,t::Nothing) - return U.Ud0 -end - -evaluate(U::TrialFESpace,t::Nothing) = U - -""" -Functor-like evaluation. It allocates Dirichlet vals in general. 
-""" -(U::TransientTrialFESpace)(t) = evaluate(U,t) - -(U::TrialFESpace)(t) = U -(U::ZeroMeanFESpace)(t) = U -# (U::Union{TrialFESpace,ZeroMeanFESpace})(t) = U - -""" -Time derivative of the Dirichlet functions -""" -∂t(U::TransientTrialFESpace) = TransientTrialFESpace(U.space,∂t.(U.dirichlet_t)) -∂t(U::SingleFieldFESpace) = HomogeneousTrialFESpace(U) -∂t(U::MultiFieldFESpace) = MultiFieldFESpace(∂t.(U.spaces)) -∂t(t::T) where T<:Number = zero(T) - -""" -Time 2nd derivative of the Dirichlet functions -""" -∂tt(U::TransientTrialFESpace) = TransientTrialFESpace(U.space,∂tt.(U.dirichlet_t)) -∂tt(U::SingleFieldFESpace) = HomogeneousTrialFESpace(U) -∂tt(U::MultiFieldFESpace) = MultiFieldFESpace(∂tt.(U.spaces)) -∂tt(t::T) where T<:Number = zero(T) - -zero_free_values(f::TransientTrialFESpace) = zero_free_values(f.space) -has_constraints(f::TransientTrialFESpace) = has_constraints(f.space) -get_dof_value_type(f::TransientTrialFESpace) = get_dof_value_type(f.space) -get_vector_type(f::TransientTrialFESpace) = get_vector_type(f.space) - -# Testing the interface - -function test_transient_trial_fe_space(Uh) - UhX = evaluate(Uh,nothing) - @test isa(UhX,FESpace) - Uh0 = allocate_trial_space(Uh) - Uh0 = evaluate!(Uh0,Uh,0.0) - @test isa(Uh0,FESpace) - Uh0 = evaluate(Uh,0.0) - @test isa(Uh0,FESpace) - Uh0 = Uh(0.0) - @test isa(Uh0,FESpace) - Uht=∂t(Uh) - Uht0=Uht(0.0) - @test isa(Uht0,FESpace) - true -end - -# Define the TransientTrialFESpace interface for stationary spaces - -evaluate!(Ut::FESpace,U::FESpace,t::Real) = U -allocate_trial_space(U::FESpace) = U -evaluate(U::FESpace,t::Real) = U -evaluate(U::FESpace,t::Nothing) = U - -@static if VERSION >= v"1.3" - (U::FESpace)(t) = U -end - -# Define the interface for MultiField - -struct TransientMultiFieldTrialFESpace{MS<:MultiFieldStyle,CS<:ConstraintStyle,V} - vector_type::Type{V} - spaces::Vector - multi_field_style::MS - constraint_style::CS - function TransientMultiFieldTrialFESpace( - ::Type{V}, - spaces::Vector, - multi_field_style::MultiFieldStyle) where V - @assert length(spaces) > 0 - - MS = typeof(multi_field_style) - if any( map(has_constraints,spaces) ) - constraint_style = Constrained() - else - constraint_style = UnConstrained() - end - CS = typeof(constraint_style) - new{MS,CS,V}(V,spaces,multi_field_style,constraint_style) - end -end - -# Default constructors -function TransientMultiFieldFESpace(spaces::Vector; - style = ConsecutiveMultiFieldStyle()) - Ts = map(get_dof_value_type,spaces) - T = typeof(*(map(zero,Ts)...)) - if isa(style,BlockMultiFieldStyle) - style = BlockMultiFieldStyle(style,spaces) - VT = typeof(mortar(map(zero_free_values,spaces))) - else - VT = Vector{T} - end - TransientMultiFieldTrialFESpace(VT,spaces,style) -end - -function TransientMultiFieldFESpace(::Type{V},spaces::Vector) where V - TransientMultiFieldTrialFESpace(V,spaces,ConsecutiveMultiFieldStyle()) -end - -function TransientMultiFieldFESpace(spaces::Vector{<:SingleFieldFESpace}; - style = ConsecutiveMultiFieldStyle()) - MultiFieldFESpace(spaces,style=style) -end - -function TransientMultiFieldFESpace(::Type{V},spaces::Vector{<:SingleFieldFESpace}) where V - MultiFieldFESpace(V,spaces,ConsecutiveMultiFieldStyle()) -end - -Base.iterate(m::TransientMultiFieldTrialFESpace) = iterate(m.spaces) -Base.iterate(m::TransientMultiFieldTrialFESpace,state) = iterate(m.spaces,state) -Base.getindex(m::TransientMultiFieldTrialFESpace,field_id::Integer) = m.spaces[field_id] -Base.length(m::TransientMultiFieldTrialFESpace) = length(m.spaces) - -function 
evaluate!(Ut::T,U::TransientMultiFieldTrialFESpace,t::Real) where T - spaces_at_t = [evaluate!(Uti,Ui,t) for (Uti,Ui) in zip(Ut,U)] - mfs = MultiFieldStyle(U) - return MultiFieldFESpace(spaces_at_t;style=mfs) -end - -function allocate_trial_space(U::TransientMultiFieldTrialFESpace) - spaces = allocate_trial_space.(U.spaces) - mfs = MultiFieldStyle(U) - return MultiFieldFESpace(spaces;style=mfs) -end - -function evaluate(U::TransientMultiFieldTrialFESpace,t::Real) - Ut = allocate_trial_space(U) - evaluate!(Ut,U,t) - return Ut -end - -function evaluate(U::TransientMultiFieldTrialFESpace,t::Nothing) - spaces = [evaluate(fesp,nothing) for fesp in U.spaces] - mfs = MultiFieldStyle(U) - MultiFieldFESpace(spaces;style=mfs) -end - -(U::TransientMultiFieldTrialFESpace)(t) = evaluate(U,t) - -function ∂t(U::TransientMultiFieldTrialFESpace) - spaces = ∂t.(U.spaces) - mfs = MultiFieldStyle(U) - TransientMultiFieldFESpace(spaces;style=mfs) -end - -function zero_free_values(f::TransientMultiFieldTrialFESpace{<:BlockMultiFieldStyle{NB,SB,P}}) where {NB,SB,P} - block_ranges = get_block_ranges(NB,SB,P) - block_num_dofs = map(range->sum(map(num_free_dofs,f.spaces[range])),block_ranges) - block_vtypes = map(range->get_vector_type(first(f.spaces[range])),block_ranges) - values = mortar(map(allocate_vector,block_vtypes,block_num_dofs)) - fill!(values,zero(eltype(values))) - return values -end - -get_dof_value_type(f::TransientMultiFieldTrialFESpace{MS,CS,V}) where {MS,CS,V} = eltype(V) -get_vector_type(f::TransientMultiFieldTrialFESpace) = f.vector_type -ConstraintStyle(::Type{TransientMultiFieldTrialFESpace{S,B,V}}) where {S,B,V} = B() -ConstraintStyle(::TransientMultiFieldTrialFESpace) = ConstraintStyle(typeof(f)) -MultiFieldStyle(::Type{TransientMultiFieldTrialFESpace{S,B,V}}) where {S,B,V} = S() -MultiFieldStyle(f::TransientMultiFieldTrialFESpace) = MultiFieldStyle(typeof(f)) - -function SparseMatrixAssembler(mat,vec, - trial::TransientMultiFieldTrialFESpace{MS}, - test ::TransientMultiFieldTrialFESpace{MS}, - strategy::AssemblyStrategy=DefaultAssemblyStrategy() - ) where MS <: BlockMultiFieldStyle - mfs = MultiFieldStyle(test) - return BlockSparseMatrixAssembler(mfs,trial,test,SparseMatrixBuilder(mat),ArrayBuilder(vec),strategy) -end diff --git a/src/ODEs/TransientFETools/TransientFETools.jl b/src/ODEs/TransientFETools/TransientFETools.jl deleted file mode 100644 index 288bfd5fc..000000000 --- a/src/ODEs/TransientFETools/TransientFETools.jl +++ /dev/null @@ -1,145 +0,0 @@ -""" - -The exported names are -$(EXPORTS) -""" -module TransientFETools - -using Test -using DocStringExtensions - -using Gridap.Helpers - -export ∂t - -import Gridap.ODEs.ODETools: ∂t, ∂tt -import Gridap.ODEs.ODETools: time_derivative - -export TransientTrialFESpace -export TransientMultiFieldFESpace -export test_transient_trial_fe_space -import Gridap.Fields: evaluate -import Gridap.Fields: evaluate! -import Gridap.MultiField: MultiFieldFESpace -using Gridap.FESpaces: FESpace -using Gridap.FESpaces: SingleFieldFESpace -using Gridap.FESpaces: TrialFESpace -using Gridap.FESpaces: ZeroMeanFESpace -using Gridap.FESpaces: get_free_dof_values -using Gridap.FESpaces: get_dirichlet_dof_values -using Gridap.FESpaces: TrialFESpace! 
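As a quick orientation for the deleted module, this is roughly how the transient spaces exported above are used; a sketch that assumes these names are re-exported by `Gridap` (as in the transient tutorials), with an arbitrary model and element choice.

```julia
# Sketch only: usage of the (removed) transient trial-space API.
using Gridap

model = CartesianDiscreteModel((0, 1, 0, 1), (4, 4))
reffe = ReferenceFE(lagrangian, Float64, 1)
V = TestFESpace(model, reffe, dirichlet_tags="boundary")

g(t) = x -> sin(2π * t) * x[1]          # time-dependent Dirichlet datum
U = TransientTrialFESpace(V, g)

U0 = U(0.0)                              # TrialFESpace with g(0.0) interpolated on the Dirichlet dofs
Ut = ∂t(U)                               # transient space whose Dirichlet datum is ∂t(g)
X = TransientMultiFieldFESpace([U, U])   # multi-field version; X(t) returns a MultiFieldFESpace
```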
-using Gridap.FESpaces: HomogeneousTrialFESpace -using Gridap.FESpaces: jacobian - -import Gridap.Geometry: Triangulation -import Gridap.CellData: Measure -using Gridap.FESpaces: ∫ - -export TransientFEOperator -export TransientAffineFEOperator -export TransientConstantFEOperator -export TransientConstantMatrixFEOperator -export TransientRungeKuttaFEOperator -export TransientIMEXRungeKuttaFEOperator -export TransientEXRungeKuttaFEOperator -using Gridap.FESpaces: Assembler -using Gridap.FESpaces: SparseMatrixAssembler -import Gridap.ODEs.ODETools: allocate_cache -import Gridap.ODEs.ODETools: update_cache! -import Gridap.ODEs.ODETools: ODEOperator -import Gridap.ODEs.ODETools: AffineODEOperator -import Gridap.ODEs.ODETools: ConstantODEOperator -import Gridap.ODEs.ODETools: ConstantMatrixODEOperator -import Gridap.ODEs.ODETools: allocate_residual -import Gridap.ODEs.ODETools: allocate_jacobian -import Gridap.ODEs.ODETools: residual! -import Gridap.ODEs.ODETools: jacobian! -import Gridap.ODEs.ODETools: jacobians! -import Gridap.ODEs.ODETools: lhs! -import Gridap.ODEs.ODETools: rhs! -import Gridap.ODEs.ODETools: explicit_rhs! -import Gridap.ODEs.ODETools: OperatorType -using Gridap.ODEs.ODETools: Nonlinear -using Gridap.ODEs.ODETools: Affine -using Gridap.ODEs.ODETools: Constant -using Gridap.ODEs.ODETools: ConstantMatrix -import Gridap.FESpaces: get_algebraic_operator -import Gridap.FESpaces: assemble_vector! -import Gridap.FESpaces: assemble_matrix_add! -import Gridap.FESpaces: allocate_vector -import Gridap.FESpaces: allocate_matrix -using Gridap.FESpaces: get_fe_basis -using Gridap.FESpaces: get_trial_fe_basis -using Gridap.FESpaces: collect_cell_vector -using Gridap.FESpaces: collect_cell_matrix -using Gridap.FESpaces: return_type -import Gridap.FESpaces: SparseMatrixAssembler -import Gridap.FESpaces: get_trial -import Gridap.FESpaces: get_test -using Gridap.ODEs.ODETools: test_ode_operator -export test_transient_fe_operator - -import Gridap.FESpaces: FESolver -import Gridap.ODEs.ODETools: ODESolver -import Gridap.Algebra: solve -import Gridap.Algebra: solve! -import Gridap.ODEs.ODETools: solve_step! 
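The operator constructors exported above expect a transient weak form `res(t, u, v)`, optionally with its Jacobians. A hedged heat-equation sketch, continuing the spaces `U`, `V` and `model` from the previous sketch, is:

```julia
# Sketch only: building a TransientFEOperator from a transient weak form.
Ω = Triangulation(model)
dΩ = Measure(Ω, 2)

κ(t) = 1.0 + 0.5 * sin(2π * t)                         # time-dependent conductivity
res(t, u, v) = ∫( ∂t(u) * v + κ(t) * (∇(u) ⋅ ∇(v)) )*dΩ
jac(t, u, du, v) = ∫( κ(t) * (∇(du) ⋅ ∇(v)) )*dΩ
jac_t(t, u, dut, v) = ∫( dut * v )*dΩ

op = TransientFEOperator(res, jac, jac_t, U, V)        # user-provided Jacobians
op_ad = TransientFEOperator(res, U, V)                 # Jacobians via automatic differentiation
```

Either operator is then advanced in time through the removed `solve(odeslvr, op, uh0, t0, tF)` entry point of `TransientFESolutions.jl`, which returns a lazy `TransientFESolution` iterator.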
-export test_transient_fe_solver - -export TransientFEFunction -import Gridap.FESpaces: FEFunction -import Gridap.FESpaces: SingleFieldFEFunction -import Gridap.FESpaces: EvaluationFunction -import Gridap.MultiField: MultiFieldFEFunction -import Gridap.MultiField: num_fields - -export TransientFESolution -import Gridap.Algebra: solve -import Gridap.ODEs.ODETools: ODESolution -import Gridap.ODEs.ODETools: GenericODESolution -import Base: iterate -export test_transient_fe_solution - -export TransientCellField -using Gridap.CellData: CellField -using Gridap.CellData: CellFieldAt -using Gridap.CellData: GenericCellField -using Gridap.MultiField: MultiFieldCellField -using Gridap.FESpaces: FEBasis -import Gridap.CellData: get_data -import Gridap.CellData: get_triangulation -import Gridap.CellData: DomainStyle -import Gridap.CellData: gradient -import Gridap.CellData: ∇∇ -import Gridap.CellData: change_domain -import Gridap.FESpaces: BasisStyle -using Gridap.FESpaces: Constrained, UnConstrained, AssemblyStrategy -using Gridap.MultiField: ConsecutiveMultiFieldStyle, BlockSparseMatrixAssembler -import Gridap.MultiField: ConstraintStyle, MultiFieldStyle, BlockMultiFieldStyle -import Gridap.FESpaces: zero_free_values, has_constraints, SparseMatrixAssembler -import Gridap.FESpaces: get_dof_value_type, get_vector_type - -using BlockArrays - -include("TransientFESpaces.jl") - -include("TransientCellField.jl") - -include("TransientMultiFieldCellField.jl") - -include("TransientFEOperators.jl") - -include("ODEOperatorInterfaces.jl") - -include("TransientFESolutions.jl") - -# export FETerm -# function FETerm(args...) -# Helpers.@unreachable """\n -# Function FETerm has been removed. The API for specifying the weak form has changed significantly. -# See the gridap/Tutorials repo for some examples of how to use the new API. -# This error message will be deleted in future versions. -# """ -# end - -end #module diff --git a/src/ODEs/TransientFETools/TransientMultiFieldCellField.jl b/src/ODEs/TransientFETools/TransientMultiFieldCellField.jl deleted file mode 100644 index b48430bb3..000000000 --- a/src/ODEs/TransientFETools/TransientMultiFieldCellField.jl +++ /dev/null @@ -1,81 +0,0 @@ -struct TransientMultiFieldCellField{A} <: TransientCellField - cellfield::A - derivatives::Tuple - transient_single_fields::Vector{<:TransientCellField} # used to iterate -end - -MultiFieldTypes = Union{MultiFieldCellField,MultiFieldFEFunction} - -function TransientCellField(multi_field::MultiFieldTypes,derivatives::Tuple) - transient_single_fields = _to_transient_single_fields(multi_field,derivatives) - TransientMultiFieldCellField(multi_field,derivatives,transient_single_fields) -end - -function get_data(f::TransientMultiFieldCellField) - s = """ - Function get_data is not implemented for TransientMultiFieldCellField at this moment. - You need to extract the individual fields and then evaluate them separately. - - If ever implement this, evaluating a `MultiFieldCellField` directly would provide, - at each evaluation point, a tuple with the value of the different fields. 
- """ - @notimplemented s -end - -get_triangulation(f::TransientMultiFieldCellField) = get_triangulation(f.cellfield) -DomainStyle(::Type{TransientMultiFieldCellField{A}}) where A = DomainStyle(A) -num_fields(f::TransientMultiFieldCellField) = length(f.cellfield) -gradient(f::TransientMultiFieldCellField) = gradient(f.cellfield) -∇∇(f::TransientMultiFieldCellField) = ∇∇(f.cellfield) -change_domain(f::TransientMultiFieldCellField,trian::Triangulation,target_domain::DomainStyle) = change_domain(f.cellfield,trian,target_domain) - -# Get single index -function Base.getindex(f::TransientMultiFieldCellField,ifield::Integer) - single_field = f.cellfield[ifield] - single_derivatives = () - for ifield_derivatives in f.derivatives - single_derivatives = (single_derivatives...,getindex(ifield_derivatives,ifield)) - end - TransientSingleFieldCellField(single_field,single_derivatives) -end - -# Get multiple indices -function Base.getindex(f::TransientMultiFieldCellField,indices::Vector{<:Int}) - cellfield = MultiFieldCellField(f.cellfield[indices],DomainStyle(f.cellfield)) - derivatives = () - for derivative in f.derivatives - derivatives = (derivatives...,MultiFieldCellField(derivative[indices],DomainStyle(derivative))) - end - transient_single_fields = _to_transient_single_fields(cellfield,derivatives) - TransientMultiFieldCellField(cellfield,derivatives,transient_single_fields) -end - -function _to_transient_single_fields(multi_field,derivatives) - transient_single_fields = TransientCellField[] - for ifield in 1:num_fields(multi_field) - single_field = multi_field[ifield] - single_derivatives = () - for ifield_derivatives in derivatives - single_derivatives = (single_derivatives...,getindex(ifield_derivatives,ifield)) - end - transient_single_field = TransientSingleFieldCellField(single_field,single_derivatives) - push!(transient_single_fields,transient_single_field) - end - transient_single_fields -end - -# Iterate functions -Base.iterate(f::TransientMultiFieldCellField) = iterate(f.transient_single_fields) -Base.iterate(f::TransientMultiFieldCellField,state) = iterate(f.transient_single_fields,state) - -# Time derivative -function ∂t(f::TransientMultiFieldCellField) - cellfield, derivatives = first_and_tail(f.derivatives) - transient_single_field_derivatives = TransientCellField[] - for transient_single_field in f.transient_single_fields - push!(transient_single_field_derivatives,∂t(transient_single_field)) - end - TransientMultiFieldCellField(cellfield,derivatives,transient_single_field_derivatives) -end - -∂tt(f::TransientMultiFieldCellField) = ∂t(∂t(f)) diff --git a/src/ODEs/DiffEqsWrappers/DiffEqsWrappers.jl b/src/ODEs/_DiffEqsWrappers.jl similarity index 63% rename from src/ODEs/DiffEqsWrappers/DiffEqsWrappers.jl rename to src/ODEs/_DiffEqsWrappers.jl index 54e5722f8..0cb5cc25d 100644 --- a/src/ODEs/DiffEqsWrappers/DiffEqsWrappers.jl +++ b/src/ODEs/_DiffEqsWrappers.jl @@ -3,17 +3,9 @@ The exported names are $(EXPORTS) """ -module DiffEqWrappers +module DiffEqsWrappers -using Test - -using Gridap.ODEs.TransientFETools: TransientFEOperator - -using Gridap.ODEs.ODETools: allocate_cache -using Gridap.ODEs.ODETools: update_cache! -using Gridap.ODEs.ODETools: residual! -using Gridap.ODEs.ODETools: jacobians! -using Gridap.ODEs.ODETools: jacobian! 
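For context on the `DiffEqsWrappers` hunks in this file: the functions returned by `diffeq_wrappers` are meant to be consumed by the SciML DAE interface. A sketch of that hand-off follows; `op` (a `TransientFEOperator`) and `u0` (its free dof values) are assumed to exist, and `DAEProblem`/`IDA` are the standard DifferentialEquations.jl and Sundials.jl entry points.

```julia
# Sketch only: handing the diffeq_wrappers output to the SciML DAE interface.
using DifferentialEquations: DAEProblem, solve
using Sundials: IDA

res!, jac!, mass!, stif! = diffeq_wrappers(op)   # op::TransientFEOperator (assumed)
du0 = zero(u0)                                   # initial guess for ∂t(u)
tspan = (0.0, 1.0)
prob = DAEProblem(res!, du0, u0, tspan)          # res!(out, du, u, p, t) matches the DAE signature
sol = solve(prob, IDA())
```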
+using DocStringExtensions using Gridap.Algebra: allocate_jacobian @@ -42,35 +34,35 @@ export diffeq_wrappers """ function diffeq_wrappers(op) - ode_op = get_algebraic_operator(op) - ode_cache = allocate_cache(ode_op) + odeop = get_algebraic_operator(op) + odeopcache = allocate_cache(odeop) function _residual!(res, du, u, p, t) # TO DO (minor): Improve update_cache! st do nothing if same time t as in the cache # now it would be done twice (residual and jacobian) - ode_cache = update_cache!(ode_cache, ode_op, t) - residual!(res, ode_op, t, (u, du), ode_cache) + odeopcache = update_cache!(odeopcache, odeop, t) + residual!(res, odeop, t, (u, du), odeopcache) end function _jacobian!(jac, du, u, p, gamma, t) - ode_cache = update_cache!(ode_cache, ode_op, t) + odeopcache = update_cache!(odeopcache, odeop, t) z = zero(eltype(jac)) fillstored!(jac, z) - jacobians!(jac, ode_op, t, (u, du), (1.0, gamma), ode_cache) + jacobians!(jac, odeop, t, (u, du), (1.0, gamma), odeopcache) end function _mass!(mass, du, u, p, t) - ode_cache = update_cache!(ode_cache, ode_op, t) + odeopcache = update_cache!(odeopcache, odeop, t) z = zero(eltype(mass)) fillstored!(mass, z) - jacobian!(mass, ode_op, t, (u, du), 2, 1.0, ode_cache) + jacobian!(mass, odeop, t, (u, du), 2, 1.0, odeopcache) end function _stiffness!(stif, du, u, p, t) - ode_cache = update_cache!(ode_cache, ode_op, t) + odeopcache = update_cache!(odeopcache, odeop, t) z = zero(eltype(stif)) fillstored!(stif, z) - jacobian!(stif, ode_op, t, (u, du), 1, 1.0, ode_cache) + jacobian!(stif, odeop, t, (u, du), 1, 1.0, odeopcache) end return _residual!, _jacobian!, _mass!, _stiffness! @@ -81,14 +73,14 @@ end It allocates the Jacobian (or mass or stiffness) matrix, given the `FEOperator` and a vector of size total number of unknowns """ -function prototype_jacobian(op::TransientFEOperator,u0) - ode_op = get_algebraic_operator(op) - ode_cache = allocate_cache(ode_op) # Not acceptable in terms of performance - return allocate_jacobian(ode_op, u0, ode_cache) +function prototype_jacobian(op::TransientFEOperator, u0) + odeop = get_algebraic_operator(op) + odeopcache = allocate_cache(odeop) # Not acceptable in terms of performance + return allocate_jacobian(odeop, u0, odeopcache) end const prototype_mass = prototype_jacobian const prototype_stiffness = prototype_jacobian -end #module +end # module DiffEqsWrappers diff --git a/test/ODEsTests/DiffEqsWrappersTests/runtests.jl b/test/ODEsTests/DiffEqsWrappersTests/runtests.jl deleted file mode 100644 index fd0d0a10e..000000000 --- a/test/ODEsTests/DiffEqsWrappersTests/runtests.jl +++ /dev/null @@ -1,7 +0,0 @@ -module DiffEqsWrappersTests - -using Test - -@testset "DiffEqWrappers" begin include("DiffEqsTests.jl") end - -end # module diff --git a/test/ODEsTests/ODEOperatorsMocks.jl b/test/ODEsTests/ODEOperatorsMocks.jl new file mode 100644 index 000000000..aec9cb723 --- /dev/null +++ b/test/ODEsTests/ODEOperatorsMocks.jl @@ -0,0 +1,109 @@ +using LinearAlgebra +using SparseArrays: spzeros + +using Gridap +using Gridap.Algebra +using Gridap.Polynomials +using Gridap.ODEs + +################### +# ODEOperatorMock # +################### +""" + struct ODEOperatorMock <: ODEOperator end + +Mock linear ODE of arbitrary order +```math +∑_{0 ≤ k ≤ N} form_k(t) ∂t^k u + forcing(t) = 0. 
+``` +""" +struct ODEOperatorMock{T} <: ODEOperator{T} + forms::Tuple{Vararg{Function}} + forcing::Function +end + +Polynomials.get_order(odeop::ODEOperatorMock) = length(odeop.forms) - 1 + +ODEs.get_forms(odeop::ODEOperatorMock{<:AbstractQuasilinearODE}) = (odeop.forms[end],) +ODEs.get_forms(odeop::ODEOperatorMock{<:AbstractLinearODE}) = odeop.forms + +function Algebra.allocate_residual( + odeop::ODEOperatorMock, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache +) + f = forcing(t) + copy(f) +end + +function Algebra.residual!( + r::AbstractVector, odeop::ODEOperatorMock, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache; add::Bool=false +) + order = get_order(odeop) + !add && fill!(r, zero(eltype(r))) + axpy!(1, odeop.forcing(t), r) + for k in 0:order + mat = odeop.forms[k+1](t) + axpy!(1, mat * us[k+1], r) + end + r +end + +function Algebra.residual!( + r::AbstractVector, odeop::ODEOperatorMock{<:AbstractQuasilinearODE}, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache; add::Bool=false +) + order = get_order(odeop) + !add && fill!(r, zero(eltype(r))) + axpy!(1, odeop.forcing(t), r) + for k in 0:order-1 + mat = odeop.forms[k+1](t) + axpy!(1, mat * us[k+1], r) + end + k = order + mat = odeop.forms[k+1](t) + axpy!(1, mat * us[k+1], r) + r +end + +function Algebra.residual!( + r::AbstractVector, odeop::ODEOperatorMock{<:AbstractLinearODE}, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache; add::Bool=false +) + order = get_order(odeop) + !add && fill!(r, zero(eltype(r))) + axpy!(1, odeop.forcing(t), r) + for k in 0:order + mat = odeop.forms[k+1](t) + axpy!(1, mat * us[k+1], r) + end + r +end + +function Algebra.allocate_jacobian( + odeop::ODEOperatorMock, + t::Real, us::Tuple{Vararg{AbstractVector}}, + odeopcache +) + T = eltype(first(us)) + n = length(first(us)) + J = spzeros(T, n, n) + fill!(J, 1) + J +end + +function ODEs.jacobian_add!( + J::AbstractMatrix, odeop::ODEOperatorMock, + t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, + odeopcache +) + for (w, form) in zip(ws, odeop.forms) + iszero(w) && continue + jac = form(t) + axpy_entries!(w, jac, J) + end +end diff --git a/test/ODEsTests/ODEOperatorsTests.jl b/test/ODEsTests/ODEOperatorsTests.jl new file mode 100644 index 000000000..4e448d6db --- /dev/null +++ b/test/ODEsTests/ODEOperatorsTests.jl @@ -0,0 +1,127 @@ +module ODEOperatorsTests + +using Test +using SparseArrays + +using Gridap +using Gridap.ODEs + +include("ODEOperatorsMocks.jl") + +num_eqs = 5 +order_max = 5 + +all_mats = ntuple(_ -> sprandn(num_eqs, num_eqs, 1.0), order_max + 1) +all_forms = ntuple(k -> (t -> all_mats[k] .* cospi(t)), order_max + 1) + +vec = randn(num_eqs) +forcing(t) = vec .* cospi(t) + +mat0 = sprand(num_eqs, num_eqs, 1.0) +nonzeros(mat0) .= 0 +form0(t) = mat0 + +vec0 = zeros(num_eqs) +forcing0(t) = vec0 + +t = randn() +all_us = ntuple(i -> randn(num_eqs), order_max + 1) + +exp_r = zeros(num_eqs) +exp_J = spzeros(num_eqs, num_eqs) + +Ts = (NonlinearODE, QuasilinearODE, SemilinearODE, LinearODE) + +for N in 0:order_max + us = tuple((all_us[k] for k in 1:N+1)...) + forms = all_forms[1:N+1] + + for T in Ts + standard_odeop = ODEOperatorMock{T}(forms, forcing) + odeops = (standard_odeop,) + + # Randomly create a `IMEXODEOperator`s + if N > 0 + im_forms = () + ex_forms = () + for k in 0:N-1 + form = forms[k+1] + to_im = rand(Bool) + im_forms = (im_forms..., to_im ? form : form0) + ex_forms = (ex_forms..., to_im ? 
form0 : form) + end + im_forms = (im_forms..., last(forms)) + + to_im = rand(Bool) + im_forcing = to_im ? forcing : forcing0 + ex_forcing = to_im ? forcing0 : forcing + + im_odeop = ODEOperatorMock{T}(im_forms, im_forcing) + for T_ex in Ts + ex_odeop = ODEOperatorMock{T_ex}(ex_forms, ex_forcing) + imex_odeop = IMEXODEOperator(im_odeop, ex_odeop) + # odeops = (odeops..., imex_odeop) + end + end + + # Compute expected residual + f = forcing(t) + copy!(exp_r, f) + for (ui, formi) in zip(us, forms) + form = formi(t) + exp_r .+= form * ui + end + + for odeop in odeops + num_forms = get_num_forms(odeop) + if odeop isa IMEXODEOperator + im_odeop, ex_odeop = get_imex_operators(odeop) + T_im, T_ex = ODEOperatorType(im_odeop), ODEOperatorType(ex_odeop) + if T_im <: AbstractLinearODE + if T_ex <: AbstractLinearODE + @test num_forms == get_order(im_odeop) + 1 + else + @test num_forms == 1 + end + elseif T_im <: AbstractQuasilinearODE + @test num_forms == 1 + else + @test num_forms == 0 + end + else + if T <: AbstractLinearODE + @test num_forms == get_order(odeop) + 1 + elseif T <: AbstractQuasilinearODE + @test num_forms == 1 + else + @test num_forms == 0 + end + end + + odeopcache = allocate_odeopcache(odeop, t, us) + update_odeopcache!(odeopcache, odeop, t) + + r = allocate_residual(odeop, t, us, odeopcache) + @test size(r) == (num_eqs,) + + J = allocate_jacobian(odeop, t, us, odeopcache) + @test size(J) == (num_eqs, num_eqs) + + residual!(r, odeop, t, us, odeopcache) + @test r ≈ exp_r + + fill!(exp_J, zero(eltype(exp_J))) + fill!(J, zero(eltype(J))) + for k in 0:N + exp_J .+= forms[k+1](t) + end + ws = ntuple(_ -> 1, N + 1) + jacobian!(J, odeop, t, us, ws, odeopcache) + @test J ≈ exp_J + + @test test_ode_operator(odeop, t, us) + end + end +end + +end # module ODEOperatorsTests diff --git a/test/ODEsTests/ODEProblemsTests.jl b/test/ODEsTests/ODEProblemsTests.jl new file mode 100644 index 000000000..c43886706 --- /dev/null +++ b/test/ODEsTests/ODEProblemsTests.jl @@ -0,0 +1,11 @@ +module ODESolversAllTests + +using Test + +@time @testset "Tableaus" begin include("ODEProblemsTests/TableausTests.jl") end + +@time @testset "Order1ODE" begin include("ODEProblemsTests/Order1ODETests.jl") end + +@time @testset "Order2ODE" begin include("ODEProblemsTests/Order2ODETests.jl") end + +end # module ODESolversAllTests diff --git a/test/ODEsTests/ODEProblemsTests/Order1ODETests.jl b/test/ODEsTests/ODEProblemsTests/Order1ODETests.jl new file mode 100644 index 000000000..544e1f33d --- /dev/null +++ b/test/ODEsTests/ODEProblemsTests/Order1ODETests.jl @@ -0,0 +1,133 @@ +module Order1ODETests + +using Test +using SparseArrays + +using Gridap +using Gridap.ODEs + +include("../ODEOperatorsMocks.jl") +include("../ODESolversMocks.jl") + +# M u̇ + M * Diag(λ) u = M * f(t), +# where f(t) = exp(Diag(α) * t) +t0 = 0.0 +dt = 1.0e-3 +tF = t0 + 10 * dt + +num_eqs = 5 + +M = sprandn(num_eqs, num_eqs, 1.0) +λ = randn(num_eqs) +K = M * spdiagm(-λ) + +mass(t) = M +stiffness(t) = K +forms = (stiffness, mass) +mat0 = sprand(num_eqs, num_eqs, 1.0) +nonzeros(mat0) .= 0 +form_zero(t) = mat0 + +α = randn(num_eqs) +forcing(t) = -M * exp.(α .* t) +forcing_zero(t) = zeros(typeof(t), num_eqs) + +u0 = randn(num_eqs) + +function u(t) + s = zeros(typeof(t), num_eqs) + for i in 1:num_eqs + # Homogeneous solution + s[i] += exp(λ[i] * t) * u0[i] + # Particular solution + s[i] += (exp(λ[i] * t) - exp(α[i] * t)) / (λ[i] - α[i]) + end + s +end + +odeop_nl = ODEOperatorMock{NonlinearODE}(forms, forcing) +odeop_ql = ODEOperatorMock{QuasilinearODE}(forms, 
forcing) +odeop_sl = ODEOperatorMock{SemilinearODE}(forms, forcing) +odeop_l = ODEOperatorMock{LinearODE}(forms, forcing) + +# Testing some random combinations of `ODEOperatorType`s +odeop_imex1 = GenericIMEXODEOperator( + ODEOperatorMock{LinearODE}((form_zero, mass), forcing_zero), + ODEOperatorMock{QuasilinearODE}((stiffness,), forcing) +) + +odeop_imex2 = GenericIMEXODEOperator( + ODEOperatorMock{SemilinearODE}((form_zero, mass), forcing_zero), + ODEOperatorMock{NonlinearODE}((stiffness,), forcing) +) + +odeops = ( + odeop_nl, + odeop_ql, + odeop_sl, + odeop_l, + odeop_imex1, + odeop_imex2, +) + +function test_solver(odeslvr, odeop, us0, tol) + odesltn = solve(odeslvr, odeop, t0, tF, us0) + + for (t_n, uh_n) in odesltn + eh_n = u(t_n) - uh_n + e_n = sqrt(sum(abs2, eh_n)) + @test e_n < tol + end +end + +tol = 1.0e-4 +atol = 1.0e-12 +rtol = 1.0e-8 +maxiter = 100 +sysslvr_l = LUSolver() +sysslvr_nl = NonlinearSolverMock(rtol, atol, maxiter) + +# odeslvrs = ( +# ForwardEuler(sysslvr_nl, dt), +# ThetaMethod(sysslvr_nl, dt, 0.2), +# MidPoint(sysslvr_nl, dt), +# ThetaMethod(sysslvr_nl, dt, 0.8), +# BackwardEuler(sysslvr_nl, dt), +# GeneralizedAlpha1(sysslvr_nl, dt, 0.0), +# GeneralizedAlpha1(sysslvr_nl, dt, 0.5), +# GeneralizedAlpha1(sysslvr_nl, dt, 1.0), +# ) +# for tableau in available_tableaus +# global odeslvrs +# odeslvr = RungeKutta(sysslvr_nl, sysslvr_l, dt, tableau) +# odeslvrs = (odeslvrs..., odeslvr) +# end + +# us0 = (u0,) +# for odeslvr in odeslvrs +# for odeop in odeops +# test_solver(odeslvr, odeop, us0, tol) +# end +# end + +# Solvers for `IMEXODEOperator`s +odeops = ( + odeop_imex1, + odeop_imex2, +) + +odeslvrs = () +for tableau in available_imex_tableaus + global odeslvrs + odeslvr = RungeKutta(sysslvr_nl, sysslvr_l, dt, tableau) + odeslvrs = (odeslvrs..., odeslvr) +end + +us0 = (u0,) +for odeslvr in odeslvrs + for odeop in odeops + test_solver(odeslvr, odeop, us0, tol) + end +end + +end # module Order1ODETests diff --git a/test/ODEsTests/ODEProblemsTests/Order2ODETests.jl b/test/ODEsTests/ODEProblemsTests/Order2ODETests.jl new file mode 100644 index 000000000..b663d11a8 --- /dev/null +++ b/test/ODEsTests/ODEProblemsTests/Order2ODETests.jl @@ -0,0 +1,109 @@ +module Order2ODETests + +using Test +using SparseArrays + +using Gridap +using Gridap.ODEs + +include("../ODEOperatorsMocks.jl") +include("../ODESolversMocks.jl") + +t0 = 0.0 +dt = 1.0e-3 +tF = t0 + 10 * dt + +num_eqs = 5 + +M = sprandn(num_eqs, num_eqs, 1.0) +λ = randn(num_eqs) +μ = randn(num_eqs) +C = M * spdiagm(-(λ .+ μ)) +K = M * spdiagm(λ .* μ) + +mass(t) = M +damping(t) = C +stiffness(t) = K +forms = (stiffness, damping, mass) +mat0 = sprand(num_eqs, num_eqs, 1.0) +nonzeros(mat0) .= 0 +form_zero(t) = mat0 + +α = randn(num_eqs) +forcing(t) = -M * exp.(α .* t) +forcing_zero(t) = zeros(typeof(t), num_eqs) + +u0 = randn(num_eqs) +v0 = randn(num_eqs) + +function u(t) + s = zeros(typeof(t), num_eqs) + for i in 1:num_eqs + # Homogeneous solution + s[i] += (μ[i] * u0[i] - v0[i]) / (μ[i] - λ[i]) * exp(λ[i] * t) + s[i] += (λ[i] * u0[i] - v0[i]) / (λ[i] - μ[i]) * exp(μ[i] * t) + # Particular solution + s[i] += (exp(λ[i] * t) - exp(α[i] * t)) / (λ[i] - μ[i]) / (λ[i] - α[i]) + s[i] += (exp(μ[i] * t) - exp(α[i] * t)) / (μ[i] - λ[i]) / (μ[i] - α[i]) + end + s +end + +odeop_nl = ODEOperatorMock{NonlinearODE}(forms, forcing) +odeop_ql = ODEOperatorMock{QuasilinearODE}(forms, forcing) +odeop_sl = ODEOperatorMock{SemilinearODE}(forms, forcing) +odeop_l = ODEOperatorMock{LinearODE}(forms, forcing) + +# Testing some random combinations 
of `IMEXODEOperator`s +odeop_imex1 = GenericIMEXODEOperator( + ODEOperatorMock{LinearODE}((stiffness, form_zero, mass), forcing_zero), + ODEOperatorMock{QuasilinearODE}((form_zero, damping,), forcing) +) + +odeop_imex2 = GenericIMEXODEOperator( + ODEOperatorMock{SemilinearODE}((form_zero, damping, mass), forcing_zero), + ODEOperatorMock{NonlinearODE}((stiffness, form_zero,), forcing) +) + +odeops = ( + odeop_nl, + odeop_ql, + odeop_sl, + odeop_l, + odeop_imex1, + odeop_imex2, +) + +function test_solver(odeslvr, odeop, us0, tol) + odesltn = solve(odeslvr, odeop, t0, tF, us0) + + for (t_n, uh_n) in odesltn + eh_n = u(t_n) - uh_n + e_n = sqrt(sum(abs2, eh_n)) + @test e_n < tol + end +end + +tol = 1.0e-4 +atol = 1.0e-12 +rtol = 1.0e-8 +maxiter = 100 +sysslvr_l = LUSolver() +sysslvr_nl = NonlinearSolverMock(rtol, atol, maxiter) + +odeslvrs = ( + GeneralizedAlpha2(sysslvr_nl, dt, 0.0), + GeneralizedAlpha2(sysslvr_nl, dt, 0.5), + GeneralizedAlpha2(sysslvr_nl, dt, 1.0), + Newmark(sysslvr_nl, dt, 0.5, 0.0), + Newmark(sysslvr_nl, dt, 0.5, 0.25), +) + +us0 = (u0, v0) +for odeslvr in odeslvrs + for odeop in odeops + test_solver(odeslvr, odeop, us0, tol) + end +end + +end # module Order2ODETests diff --git a/test/ODEsTests/ODEProblemsTests/TableausTests.jl b/test/ODEsTests/ODEProblemsTests/TableausTests.jl new file mode 100644 index 000000000..eb6ca6164 --- /dev/null +++ b/test/ODEsTests/ODEProblemsTests/TableausTests.jl @@ -0,0 +1,41 @@ +module TableausTests + +using Test + +using Gridap +using Gridap.ODEs +matrix = [ + 1 2 3 + 4 5 6 + 7 8 9 +] +@test ODEs._tableau_type(matrix) == FullyImplicitTableau + +matrix = [ + 1 0 3 + 4 5 0 + 7 8 0 +] +@test ODEs._tableau_type(matrix) == FullyImplicitTableau + +matrix = [ + 1 0 0 + 4 5 0 + 7 8 0 +] +@test ODEs._tableau_type(matrix) == DiagonallyImplicitTableau + +matrix = [ + 0 0 0 + 4 0 0 + 7 8 0 +] +@test ODEs._tableau_type(matrix) == ExplicitTableau + +for tableauname in available_tableaus + ButcherTableau(tableauname) + tableau = eval(Meta.parse("ODEs." * string(tableauname) * "()")) + ButcherTableau(tableau, Float64) +end + +end # module TableausTests diff --git a/test/ODEsTests/ODESolutionsTests.jl b/test/ODEsTests/ODESolutionsTests.jl new file mode 100644 index 000000000..363c8121f --- /dev/null +++ b/test/ODEsTests/ODESolutionsTests.jl @@ -0,0 +1,89 @@ +module ODESolutionsTests + +using Test +using SparseArrays + +using Gridap +using Gridap.ODEs + +include("ODEOperatorsMocks.jl") +include("ODESolversMocks.jl") + +num_eqs = 5 +order_max = 5 + +all_mats = ntuple(_ -> sprandn(num_eqs, num_eqs, 1.0), order_max + 1) +all_forms = ntuple(k -> (t -> all_mats[k] .* cospi(t)), order_max + 1) + +vec = randn(num_eqs) +forcing(t) = vec .* cospi(t) + +mat0 = sprand(num_eqs, num_eqs, 1.0) +nonzeros(mat0) .= 0 +form0(t) = mat0 + +vec0 = zeros(num_eqs) +forcing0(t) = vec0 + +t0 = randn() +tF = t0 + rand() +dt = (tF - t0) / 5 +all_us0 = ntuple(i -> randn(num_eqs), order_max + 1) +all_usF = copy.(all_us0) +exp_usF = copy.(all_us0) + +atol = 1.0e-12 +rtol = 1.0e-8 +maxiter = 100 +sysslvr = NonlinearSolverMock(rtol, atol, maxiter) +odeslvr = ODESolverMock(sysslvr, dt) + +Ts = (NonlinearODE, QuasilinearODE, SemilinearODE, LinearODE) + +for N in 1:order_max + us0 = tuple((all_us0[k] for k in 1:N)...) + usF = tuple((all_usF[k] for k in 1:N)...) 
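For reference, the homogeneous coefficients in the closed-form solution `u(t)` of `Order2ODETests.jl` above come from matching the initial conditions componentwise: after inverting the mass matrix `M`, each component of the mock problem reads u̇̇_i - (λ_i + μ_i) u̇_i + λ_i μ_i u_i = e^{α_i t}, whose characteristic roots are λ_i and μ_i, so the homogeneous part is A_i e^{λ_i t} + B_i e^{μ_i t} with

```math
A_i + B_i = u_{0,i}, \qquad \lambda_i A_i + \mu_i B_i = v_{0,i}
\quad\Longrightarrow\quad
A_i = \frac{\mu_i u_{0,i} - v_{0,i}}{\mu_i - \lambda_i}, \qquad
B_i = \frac{\lambda_i u_{0,i} - v_{0,i}}{\lambda_i - \mu_i},
```

which are exactly the first two terms accumulated into `s[i]` in that test.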
+ forms = all_forms[1:N+1] + + for T in Ts + standard_odeop = ODEOperatorMock{T}(forms, forcing) + odeops = (standard_odeop,) + + # Create an IMEXODEOperator randomly + im_forms = () + ex_forms = () + for k in 0:N-1 + form = forms[k+1] + to_im = rand(Bool) + im_forms = (im_forms..., to_im ? form : form0) + ex_forms = (ex_forms..., to_im ? form0 : form) + end + im_forms = (im_forms..., last(forms)) + + to_im = rand(Bool) + im_forcing = to_im ? forcing : forcing0 + ex_forcing = to_im ? forcing0 : forcing + + for T_ex in Ts + imex_odeop = IMEXODEOperator( + ODEOperatorMock{T}(im_forms, im_forcing), + ODEOperatorMock{T_ex}(ex_forms, ex_forcing) + ) + odeops = (odeops..., imex_odeop) + end + + for odeop in odeops + odesltn = solve(odeslvr, odeop, t0, tF, us0) + + tprev = t0 + for (t_n, u_n) in odesltn + @test t_n ≈ tprev + dt + tprev = t_n + end + + @test test_ode_solution(odesltn) + end + end +end + +end # module ODESolutionsTests diff --git a/test/ODEsTests/ODESolversMocks.jl b/test/ODEsTests/ODESolversMocks.jl new file mode 100644 index 000000000..9efb08af8 --- /dev/null +++ b/test/ODEsTests/ODESolversMocks.jl @@ -0,0 +1,199 @@ +using LinearAlgebra +using LinearAlgebra: fillstored! + +using Gridap +using Gridap.Algebra +using Gridap.ODEs +using Gridap.Polynomials + +####################### +# NonlinearSolverMock # +####################### +""" + struct NonlinearSolverMock <: NonlinearSolver end + +Mock `NonlinearSolver` for `NonlinearStageOperator` (simple Newton-Raphson with +Backslash) which defaults to Backslash for `linearStageOperator`. +""" +struct NonlinearSolverMock <: NonlinearSolver + atol::Real + rtol::Real + maxiter::Integer +end + +function Algebra.solve!( + x::AbstractVector, nls::NonlinearSolverMock, + nlop::NonlinearStageOperator, cache +) + atol, rtol, maxiter = nls.atol, nls.rtol, nls.maxiter + + if isnothing(cache) + J = allocate_jacobian(nlop, x) + r = allocate_residual(nlop, x) + else + J, r = cache + end + + # Update residual and check convergence + residual!(r, nlop, x) + nr_prev = norm(r) + converged = (nr_prev < atol) + + iter = 0 + while !converged + if iter > maxiter + throw("NonlinearSolverMock did not converge.") + end + + # Update x + jacobian!(J, nlop, x) + dx = J \ r + axpy!(-1, dx, x) + + # Update residual and check convergence + residual!(r, nlop, x) + nr = norm(r) + converged = (nr < atol) || (nr < rtol * nr_prev) + nr_prev = nr + + iter += 1 + end + + (J, r) +end + +function Algebra.solve!( + x::AbstractVector, nls::NonlinearSolverMock, + lop::LinearStageOperator, cache +) + copy!(x, lop.J \ lop.r) + rmul!(x, -1) + cache +end + +################# +# ODESolverMock # +################# +""" + struct ODESolverMock <: ODESolver end + +Mock `ODESolver` for ODEs of arbitrary order, using a backward Euler scheme. +```math +res(tx, ux[0], ..., ux[N]) = 0, + +tx = t_n + dt +ux[i] = ∂t^i[u](t_n) + ∑_{i + 1 ≤ j ≤ N} 1/(j - i)! dt^(j - i) ux[j]. +ux[N] = x + +∂t^i[u](t_(n+1)) = ux[i]. 
+``` +""" +struct ODESolverMock <: ODESolver + sysslvr::NonlinearSolverMock + dt::Real +end + +################## +# Nonlinear case # +################## +function ODEs.allocate_odecache( + odeslvr::ODESolverMock, odeop::ODEOperator, + t0::Real, us0::Tuple{Vararg{AbstractVector}} +) + u0 = us0[1] + us0N = (us0..., u0) + odeopcache = allocate_odeopcache(odeop, t0, us0N) + + usx = copy.(us0) + + N = get_order(odeop) + dt = odeslvr.dt + ws = ntuple(i -> 0, N + 1) + ws = Base.setindex(ws, 1, N + 1) + for i in N:-1:1 + wi, coef = 0, 1 + for j in i+1:N+1 + coef = coef * dt / (j - i) + wi += coef * ws[j] + end + ws = Base.setindex(ws, wi, i) + end + + sysslvrcache = nothing + odeslvrcache = (usx, ws, sysslvrcache) + + (odeslvrcache, odeopcache) +end + +function ODEs.ode_march!( + stateF::Tuple{Vararg{AbstractVector}}, + odeslvr::ODESolverMock, odeop::ODEOperator, + t0::Real, state0::Tuple{Vararg{AbstractVector}}, + odecache +) + # Unpack inputs + us0, usF = state0, stateF + odeslvrcache, odeopcache = odecache + usx, ws, sysslvrcache = odeslvrcache + + # Unpack solver + sysslvr = odeslvr.sysslvr + dt = odeslvr.dt + + # Define scheme + tx = t0 + dt + _usx(x) = _stage_mock!(usx, us0, dt, x) + + # Update ODE operator cache + update_odeopcache!(odeopcache, odeop, tx) + + # Create and solve stage operator + stageop = NonlinearStageOperator(odeop, odeopcache, tx, _usx, ws) + + x = usF[1] + sysslvrcache = solve!(x, sysslvr, stageop, sysslvrcache) + + # Update state + tF = t0 + dt + copy!(usx[1], x) + x = usx[1] + usF = _convert_mock!(usF, us0, dt, x) + stateF = usF + + # Pack outputs + odeslvrcache = (usx, ws, sysslvrcache) + odecache = (odeslvrcache, odeopcache) + (tF, stateF, odecache) +end + +######### +# Utils # +######### +function _stage_mock!( + usx::Tuple{Vararg{AbstractVector}}, us0::Tuple{Vararg{AbstractVector}}, + dt::Real, x::AbstractVector +) + _convert_mock!(usx, us0, dt, x) + (usx..., x) +end + +function _convert_mock!( + usF::Tuple{Vararg{AbstractVector}}, us0::Tuple{Vararg{AbstractVector}}, + dt::Real, x::AbstractVector +) + # usF[i] = us0[i] + ∑_{i < j ≤ N+1} 1/(j - i)! 
dt^(j - i) usF[j] + N = length(us0) + for i in N:-1:1 + ui0, uiF = us0[i], usF[i] + copy!(uiF, ui0) + coef = 1 + for j in i+1:N + coef = coef * dt / (j - i) + axpy!(coef, usF[j], uiF) + end + j = N + 1 + coef = coef * dt / (j - i) + axpy!(coef, x, uiF) + end + usF +end diff --git a/test/ODEsTests/ODESolversTests.jl b/test/ODEsTests/ODESolversTests.jl new file mode 100644 index 000000000..0273bc814 --- /dev/null +++ b/test/ODEsTests/ODESolversTests.jl @@ -0,0 +1,163 @@ +module ODESolversTests + +using Test +using SparseArrays + +using Gridap +using Gridap.ODEs + +include("ODEOperatorsMocks.jl") +include("ODESolversMocks.jl") + +num_eqs = 5 +order_max = 5 + +all_mats = ntuple(_ -> sprandn(num_eqs, num_eqs, 1.0), order_max + 1) +all_forms = ntuple(k -> (t -> all_mats[k] .* cospi(t)), order_max + 1) + +vec = randn(num_eqs) +forcing(t) = vec .* cospi(t) + +mat0 = sprand(num_eqs, num_eqs, 1.0) +nonzeros(mat0) .= 0 +form0(t) = mat0 + +vec0 = zeros(num_eqs) +forcing0(t) = vec0 + +t0 = randn() +tF = t0 + rand() +dt = (tF - t0) / 10 +all_us0 = ntuple(i -> randn(num_eqs), order_max + 1) +all_usF = copy.(all_us0) +exp_usF = copy.(all_us0) + +exp_J = spzeros(num_eqs, num_eqs) +exp_r = zeros(num_eqs) + +atol = 1.0e-12 +rtol = 1.0e-8 +maxiter = 100 +sysslvr = NonlinearSolverMock(rtol, atol, maxiter) +odeslvr = ODESolverMock(sysslvr, dt) + +for N in 1:order_max + us0 = ntuple(k -> all_us0[k], N) + usF = ntuple(k -> all_usF[k], N) + forms = all_forms[1:N+1] + + # Compute the expected solution after one step of the ODE solver + tx = t0 + dt + + f = forcing(tx) + fill!(exp_r, zero(eltype(exp_r))) + exp_r .+= f + + m = last(forms)(tx) + fillstored!(exp_J, zero(eltype(exp_J))) + exp_J .+= m + + ws = ntuple(i -> 0, N + 1) + ws = Base.setindex(ws, 1, N + 1) + for i in N:-1:1 + ui0, uiF = us0[i], usF[i] + copy!(uiF, ui0) + wi, coef = 0, 1 + for j in i+1:N + coef = coef * dt / (j - i) + wi += coef * ws[j] + axpy!(coef, usF[j], uiF) + end + coef = coef * dt / (N + 1 - i) + wi += coef * ws[N+1] + ws = Base.setindex(ws, wi, i) + + usF = Base.setindex(usF, uiF, i) + + form = forms[i](tx) + exp_r .+= form * uiF + exp_J .+= wi .* form + end + + # Solve system + rmul!(exp_r, -1) + exp_x = exp_J \ exp_r + + # Update state + for i in N:-1:1 + global exp_usF + ui0, exp_uiF = us0[i], exp_usF[i] + copy!(exp_uiF, ui0) + coef = 1 + for j in i+1:N + coef = coef * dt / (j - i) + axpy!(coef, exp_usF[j], exp_uiF) + end + coef = coef * dt / (N + 1 - i) + axpy!(coef, exp_x, exp_uiF) + exp_usF = Base.setindex(exp_usF, exp_uiF, i) + end + + for C in (NonlinearODE, QuasilinearODE, SemilinearODE, LinearODE,) + standard_odeop = ODEOperatorMock{C}(forms, forcing) + + # Create an IMEXODEOperator randomly + im_forms = () + ex_forms = () + for k in 0:N-1 + form = forms[k+1] + to_im = rand(Bool) + im_forms = (im_forms..., to_im ? form : form0) + ex_forms = (ex_forms..., to_im ? form0 : form) + end + im_forms = (im_forms..., last(forms)) + + to_im = rand(Bool) + im_forcing = to_im ? forcing : forcing0 + ex_forcing = to_im ? 
forcing0 : forcing + + im_odeop = ODEOperatorMock{C}(im_forms, im_forcing) + ex_odeop = ODEOperatorMock{C}(ex_forms, ex_forcing) + imex_odeop = IMEXODEOperator(im_odeop, ex_odeop) + + for odeop in (standard_odeop, imex_odeop,) + # Allocate cache + odecache = allocate_odecache(odeslvr, odeop, t0, us0) + + # Starting procedure + state0, odecache = ode_start( + odeslvr, odeop, + t0, us0, + odecache + ) + + # Marching procedure + stateF = copy.(state0) + tF, stateF, odecache = ode_march!( + stateF, + odeslvr, odeop, + t0, state0, + odecache + ) + + # Finishing procedure + uF = copy(first(us0)) + uF, odecache = ode_finish!( + uF, + odeslvr, odeop, + t0, tF, stateF, + odecache + ) + + usF = stateF + for i in 1:N + @test usF[i] ≈ exp_usF[i] + end + @test uF ≈ first(exp_usF) + + @test test_ode_solver(odeslvr, odeop, t0, us0) + end + end +end + +end # module ODESolversTests diff --git a/test/ODEsTests/ODEsTests/DiffOperatorsTests.jl b/test/ODEsTests/ODEsTests/DiffOperatorsTests.jl deleted file mode 100644 index 452838bed..000000000 --- a/test/ODEsTests/ODEsTests/DiffOperatorsTests.jl +++ /dev/null @@ -1,72 +0,0 @@ -module DiffOperatorsTests - -using Gridap -using Test -using Gridap.ODEs -using Gridap.ODEs.ODETools: ∂t, ∂tt - -using ForwardDiff - -f(x,t) = 5*x[1]*x[2]+x[2]^2*t^3 -∂tf(x,t) = x[2]^2*3*t^2 - -tv = rand(Float64) -xv = Point(rand(Float64,2)...) -@test ∂tf(xv,tv) ≈ ∂t(f)(xv,tv) - -F(x,t) = VectorValue([5*x[1]*x[2],x[2]^2*t^3]) -∂tF(x,t) = VectorValue([0.0,x[2]^2*3*t^2]) - -tv = rand(Float64) -xv = Point(rand(Float64,2)...) -@test ∂tF(xv,tv) ≈ ∂t(F)(xv,tv) - -# Time derivatives - -f(x,t) = t^2 -dtf = (x,t) -> ForwardDiff.derivative(t->f(x,t),t) -@test dtf(xv,tv) ≈ ∂t(f)(xv,tv) ≈ ∂t(f)(xv)(tv) ≈ ∂t(f)(tv)(xv) - -f2(x,t) = x[1]^2 -dtf2 = (x,t) -> ForwardDiff.derivative(t->f2(x,t),t) -@test dtf2(xv,tv) ≈ ∂t(f2)(xv,tv) ≈ ∂t(f2)(xv)(tv) ≈ ∂t(f2)(tv)(xv) - -f2(x,t) = x[1]^t^2 -dtf2 = (x,t) -> ForwardDiff.derivative(t->f2(x,t),t) -@test dtf2(xv,tv) ≈ ∂t(f2)(xv,tv) ≈ ∂t(f2)(xv)(tv) ≈ ∂t(f2)(tv)(xv) - -f2(x,t) = VectorValue(x[1]^2,0.0) -dtf2 = (x,t) -> VectorValue(ForwardDiff.derivative(t -> get_array(f2(x,t)),t)) -@test dtf2(xv,tv) ≈ ∂t(f2)(xv,tv) ≈ ∂t(f2)(xv)(tv) ≈ ∂t(f2)(tv)(xv) - -f2(x,t) = VectorValue(x[1]^2,t) -dtf2 = (x,t) -> VectorValue(ForwardDiff.derivative(t -> get_array(f2(x,t)),t)) -@test dtf2(xv,tv) ≈ ∂t(f2)(xv,tv) ≈ ∂t(f2)(xv)(tv) ≈ ∂t(f2)(tv)(xv) - -f6(x,t) = TensorValue(x[1]*t,x[2]*t,x[1]*x[2],x[1]*t^2) -dtf6 = (x,t) -> TensorValue(ForwardDiff.derivative(t->get_array(f6(x,t)),t)) -@test dtf6(xv,tv) ≈ ∂t(f6)(xv,tv) ≈ ∂t(f6)(xv)(tv) ≈ ∂t(f6)(tv)(xv) - -# Spatial derivatives -f2(x,t) = VectorValue(x[1]^2,t) -∇f2(x,t) = ∇(y->f2(y,t))(x) -∇f2(xv,tv) -# @santiagobadia : Is there any way to make this transparent to the user -# I guess not unless we create a type for these analytical (space-only or -# space-time via a trait) functions -# Probably a try-catch? 
- -# 2nd time derivative -f(x,t) = t^2 -dtf = (x,t) -> ForwardDiff.derivative(t->f(x,t),t) -dttf = (x,t) -> ForwardDiff.derivative(t->dtf(x,t),t) -@test dttf(xv,tv) ≈ ∂tt(f)(xv,tv) ≈ ∂tt(f)(xv)(tv) ≈ ∂tt(f)(tv)(xv) -@test ∂tt(f)(xv,tv) ≈ 2.0 - -f2(x,t) = x[1]*t^2 -dtf2 = (x,t) -> ForwardDiff.derivative(t->f2(x,t),t) -dttf2 = (x,t) -> ForwardDiff.derivative(t->dtf2(x,t),t) -@test dttf2(xv,tv) ≈ ∂tt(f2)(xv,tv) ≈ ∂tt(f2)(xv)(tv) ≈ ∂tt(f2)(tv)(xv) -@test ∂tt(f2)(xv,tv) ≈ 2.0*xv[1] - -end #module diff --git a/test/ODEsTests/ODEsTests/ODEOperatorMocks.jl b/test/ODEsTests/ODEsTests/ODEOperatorMocks.jl deleted file mode 100644 index bfc3bb1f6..000000000 --- a/test/ODEsTests/ODEsTests/ODEOperatorMocks.jl +++ /dev/null @@ -1,142 +0,0 @@ -# Toy linear ODE with 2 DOFs -# u_1_t - a * u_1 = 0 -# u_2_t - b * u_1 - c * u_2 = 0 -# For a = 1, b = 0, c = 1, the analytical solution is: -# u_1 = u0*exp(t) -# u_2 = u0*exp(t) - -# Toy 2nd order ODE with 2 DOFs -# u_1_tt + b * u_1_t - a * u_1 = 0 -# u_2_tt + a * u_1_t - b * u_1 - c * u_2 = 0 - -import Gridap.ODEs.ODETools: ODEOperator -import Gridap.ODEs.ODETools: AffineODEOperator -import Gridap.ODEs.ODETools: ConstantODEOperator -import Gridap.ODEs.ODETools: allocate_cache -import Gridap.ODEs.ODETools: update_cache! -import Gridap.ODEs.ODETools: allocate_residual -import Gridap.ODEs.ODETools: jacobian! -import Gridap.ODEs.ODETools: jacobians! -import Gridap.ODEs.ODETools: allocate_jacobian -import Gridap.ODEs.ODETools: residual! -import Gridap.ODEs.ODETools: rhs! -import Gridap.ODEs.ODETools: explicit_rhs! -import Gridap.ODEs.ODETools: lhs! -using SparseArrays: spzeros - -struct ODEOperatorMock{T<:Real,C} <: ODEOperator{C} - a::T - b::T - c::T - order::Integer -end - -get_order(op::ODEOperatorMock) = op.order - -function residual!(r::AbstractVector,op::ODEOperatorMock,t::Real,x::NTuple{2,AbstractVector},ode_cache) - u,u_t = x - r .= 0 - r[1] = u_t[1] - op.a * u[1] - r[2] = u_t[2] - op.b * u[1] - op.c * u[2] - r -end - -function rhs!(r::AbstractVector,op::ODEOperatorMock,t::Real,x::NTuple{2,AbstractVector},ode_cache) - u,u_t = x - r .= 0 - r[1] = op.a * u[1] - r[2] = op.b * u[1] + op.c * u[2] - r -end - -function explicit_rhs!(r::AbstractVector,op::ODEOperatorMock,t::Real,x::NTuple{2,AbstractVector},ode_cache) - u,u_t = x - r .= 0 - r -end - -function lhs!(r::AbstractVector,op::ODEOperatorMock,t::Real,x::NTuple{2,AbstractVector},ode_cache) - u,u_t = x - r .= 0 - r[1] = u_t[1] - r[2] = u_t[2] - r -end - -function residual!(r::AbstractVector,op::ODEOperatorMock,t::Real,x::NTuple{3,AbstractVector},ode_cache) - u,u_t,u_tt = x - r .= 0 - r[1] = u_tt[1] + op.b * u_t[1] - op.a * u[1] - r[2] = u_tt[2] + op.a * u_t[1]- op.b * u[1] - op.c * u[2] - r -end - -function allocate_residual(op::ODEOperatorMock,t0::Real,u::AbstractVector,cache) - zeros(2) -end - -function jacobian!(J::AbstractMatrix, - op::ODEOperatorMock, - t::Real, - x::NTuple{2,AbstractVector}, - i::Int, - γᵢ::Real, - ode_cache) - @assert get_order(op) == 1 - @assert 0 < i <= get_order(op)+1 - if i==1 - J[1,1] += -op.a*γᵢ - J[2,1] += -op.b*γᵢ - J[2,2] += -op.c*γᵢ - elseif i==2 - J[1,1] += 1.0*γᵢ - J[2,2] += 1.0*γᵢ - end - J -end - -function jacobian!(J::AbstractMatrix, - op::ODEOperatorMock, - t::Real, - x::NTuple{3,AbstractVector}, - i::Int, - γᵢ::Real, - ode_cache) - @assert get_order(op) == 2 - @assert 0 < i <= get_order(op)+1 - if i==1 - J[1,1] += -op.a*γᵢ - J[2,1] += -op.b*γᵢ - J[2,2] += -op.c*γᵢ - elseif i==2 - J[1,1] += op.b*γᵢ - J[2,2] += op.a*γᵢ - elseif i==3 - J[1,1] += 1.0*γᵢ - J[2,2] += 1.0*γᵢ - 
end - J -end - -function jacobians!( - J::AbstractMatrix, - op::ODEOperatorMock, - t::Real, - x::Tuple{Vararg{AbstractVector}}, - γ::Tuple{Vararg{Real}}, - ode_cache) - @assert length(γ) == get_order(op) + 1 - for order in 1:get_order(op)+1 - jacobian!(J,op,t,x,order,γ[order],ode_cache) - end - J -end - -function allocate_jacobian(op::ODEOperatorMock,t0::Real,u::AbstractVector,cache) - spzeros(2,2) -end - -allocate_cache(op::ODEOperatorMock) = nothing -allocate_cache(op::ODEOperatorMock,v::AbstractVector) = (similar(v),nothing) -allocate_cache(op::ODEOperatorMock,v::AbstractVector,a::AbstractVector) = (similar(v),similar(a),nothing) -update_cache!(cache,op::ODEOperatorMock,t::Real) = cache diff --git a/test/ODEsTests/ODEsTests/ODEOperatorsTests.jl b/test/ODEsTests/ODEsTests/ODEOperatorsTests.jl deleted file mode 100644 index ad1660f70..000000000 --- a/test/ODEsTests/ODEsTests/ODEOperatorsTests.jl +++ /dev/null @@ -1,51 +0,0 @@ -module ODEOperatorsTests - -using Gridap.ODEs.ODETools -using Test - -import Gridap.ODEs.ODETools: test_ode_operator - -include("ODEOperatorMocks.jl") - -op = ODEOperatorMock{Float64,Constant}(1.0,2.0,3.0,1) - -u = ones(2) -u_t = ones(2)*2.0 - -@assert(length(u) == 2) -@assert(length(u_t) == 2) - -cache = allocate_cache(op) -update_cache!(cache,op,0.0) - -t = 0.0 -r = allocate_residual(op,t,u,cache) -@test r == zeros(2) - -J = allocate_jacobian(op,t,u,cache) -@test J == zeros(2,2) - -residual!(r,op,t,(u,u_t),cache) -_r = zeros(2) -_r[1] = u_t[1] - op.a * u[1] -_r[2] = u_t[2] - op.b * u[1] - op.c * u[2] -@test all(r .== _r) - -J .= 0 -jacobian!(J,op,t,(u,u_t),1,1.0,cache) -_J = zeros(2,2) -_J[1,1] = -op.a -_J[2,1] = -op.b -_J[2,2] = -op.c -@test all(J .== _J) - -jacobian!(J,op,t,(u,u_t),2,1.0,cache) -_J[1,1] += 1.0 -_J[2,2] += 1.0 -@test all(J .== _J) -_J -J - -@test test_ode_operator(op,t,u,u_t) - -end #module diff --git a/test/ODEsTests/ODEsTests/ODESolutionsTests.jl b/test/ODEsTests/ODEsTests/ODESolutionsTests.jl deleted file mode 100644 index b5782eee3..000000000 --- a/test/ODEsTests/ODEsTests/ODESolutionsTests.jl +++ /dev/null @@ -1,58 +0,0 @@ -module ODESolversTests - -using Gridap.ODEs.ODETools: GenericODESolution -using Gridap.ODEs.ODETools: BackwardEuler -using Gridap.ODEs.ODETools: BackwardEulerNonlinearOperator -using Gridap.ODEs.ODETools: solve! - -using Test -using Gridap -using Gridap.ODEs -using Gridap.ODEs.ODETools - - -include("ODEOperatorMocks.jl") - -op = ODEOperatorMock(1.0,0.0,1.0,1) - -include("ODESolverMocks.jl") - -t0 = 0.0 -tF = 1.0 -dt = 0.1 - -u0 = ones(2)*2 - -nls = NLSolverMock() - -solver = BackwardEuler(nls,dt) - -steps = solve(solver,op,u0,t0,tF) - -uf = copy(u0) -uf.=1.0 -current, state = Base.iterate(steps) -uf, tf = current -uf, u0, tf, cache = state -cache -@test tf==t0+dt -@test all(uf.≈1+11/9) -# current, state = Base.iterate(steps) -current, state = Base.iterate(steps,state) -uf, tf = current -@test tf≈t0+2*dt -uf, u0, tf, cache = state - -_t_n = t0 -for (u_n, t_n) in steps - global _t_n - _t_n += dt - @test t_n≈_t_n -end - -steps - -@test test_ode_solution(steps) -# println("The solution at time $(t_n) is $(u_n)") - -end diff --git a/test/ODEsTests/ODEsTests/ODESolverMocks.jl b/test/ODEsTests/ODEsTests/ODESolverMocks.jl deleted file mode 100644 index f4568cf4b..000000000 --- a/test/ODEsTests/ODEsTests/ODESolverMocks.jl +++ /dev/null @@ -1,104 +0,0 @@ -using Gridap.Algebra: residual -using Gridap.Algebra: jacobian -import Gridap.Algebra: NonlinearSolver -import Gridap.Algebra: NonlinearOperator -import Gridap.Algebra: solve! 
-import Gridap.ODEs.ODETools: solve_step! -import Gridap.ODEs.ODETools: ODESolver -import Gridap.ODEs.ODETools: zero_initial_guess -import Gridap.ODEs.ODETools: residual! -import Gridap.ODEs.ODETools: jacobian! -import Gridap.ODEs.ODETools: solve! -import Gridap.ODEs.ODETools: allocate_residual -import Gridap.ODEs.ODETools: allocate_jacobian - -struct OperatorMock <: NonlinearOperator - odeop - tf::Float64 - dt::Float64 - u0::AbstractVector - cache -end - -function OperatorMock(odeop::ODEOperator,tf::Real,dt::Real,u0::AbstractVector) - cache = nothing - OperatorMock(odeop,tf,dt,u0,cache) -end - -function residual!(b::AbstractVector,op::OperatorMock,x::AbstractVector) - uf = x - uf_t = (x-op.u0)/op.dt - residual!(b,op.odeop,op.tf,(uf,uf_t),op.cache) -end - -function jacobian!(A::AbstractMatrix,op::OperatorMock,x::AbstractVector) - uf = x - uf_t = (x-op.u0)/op.dt - fill!(A,0.0) - # jacobian!(A,op.odeop,op.tf,(uf,uf_t),1,1.0,op.cache) - # jacobians!(A,op.odeop,op.tf,(uf,uf_t),(1/op.dt),op.cache) - jacobians!(A,op.odeop,op.tf,(uf,uf_t),(1.0,1.0/op.dt),op.cache) -end - -function allocate_residual(op::OperatorMock,x::AbstractVector) - allocate_residual(op.odeop,op.tf,x,op.cache) -end - -function allocate_jacobian(op::OperatorMock,x::AbstractVector) - allocate_jacobian(op.odeop,op.tf,x,op.cache) -end - -function zero_initial_guess(op::OperatorMock) - x0 = similar(op.u0) - fill!(x0,zero(eltype(x0))) - x0 -end - -struct NLSolverMock <: NonlinearSolver -end - -function solve!(x::AbstractVector,nls::NLSolverMock,nlop::NonlinearOperator,cache::Nothing) - r = residual(nlop,x) - J = jacobian(nlop,x) - dx = inv(Matrix(J))*(-r) - x.= x.+dx - cache = (r,J,dx) -end - -function solve!(x::AbstractVector,nls::NLSolverMock,nlop::NonlinearOperator,cache) - r, J, dx = cache - residual!(r, nlop, x) - jacobian!(J, nlop, x) - dx = inv(Matrix(J))*(-r) - x.= x.+dx -end - -struct ODESolverMock <: ODESolver - nls::NLSolverMock - dt::Float64 -end - -function solve_step!( - uf::AbstractVector,solver::ODESolverMock,op::ODEOperator,u0::AbstractVector,t0::Real, cache) # -> (uF,tF) - - dt = solver.dt - tf = t0+dt - if (cache == nothing) - ode_cache = allocate_cache(op) - else - ode_cache, nl_cache = cache - update_cache!(ode_cache,op,tf) - end - - nlop = OperatorMock(op,tf,dt,u0,ode_cache) - - if (cache==nothing) - nl_cache = solve!(uf,solver.nls,nlop) - else - nl_cache = solve!(uf,solver.nls,nlop,nl_cache) - end - - cache = ode_cache, nl_cache - - return (uf, tf, cache) -end diff --git a/test/ODEsTests/ODEsTests/ODESolversTests.jl b/test/ODEsTests/ODEsTests/ODESolversTests.jl deleted file mode 100644 index 5aaa46341..000000000 --- a/test/ODEsTests/ODEsTests/ODESolversTests.jl +++ /dev/null @@ -1,288 +0,0 @@ -module ODESolversTests - -using Gridap.ODEs -using Gridap.ODEs.ODETools: GenericODESolution -using Gridap.ODEs.ODETools: BackwardEuler -using Gridap.ODEs.ODETools: RungeKutta -using Gridap.ODEs.ODETools: IMEXRungeKutta -using Gridap.ODEs.ODETools: EXRungeKutta -using Gridap.ODEs.ODETools: ForwardEuler -using Gridap.ODEs.ODETools: ThetaMethodNonlinearOperator -using Gridap.ODEs.ODETools: GeneralizedAlpha -using Gridap.ODEs.ODETools: solve! 
-using Gridap.ODEs -using Gridap.ODEs.ODETools -using Gridap -using Test - -# using Gridap.Algebra: residual, jacobian - -include("ODEOperatorMocks.jl") - -op = ODEOperatorMock{Float64,Constant}(1.0,0.0,1.0,1) - -include("ODESolverMocks.jl") - -t0 = 0.0 -tf = 1.0 -dt = 0.1 -u0 = ones(2)*2 - -# NonlinearOperator tests - -sop = OperatorMock(op,tf,dt,u0) -isa(sop,NonlinearOperator) - -ode_cache = allocate_cache(op) - -x = zero_initial_guess(sop) -x .+= 1.0 -isa(sop,OperatorMock) -isa(x,AbstractVector) -r = allocate_residual(sop,x) -J = allocate_jacobian(sop,x) -residual!(r,sop,x) -jacobian!(J,sop,x) -@test all(r .== [ -11.0 -11.0]) -@test all(J .== [ 9.0 0.0; 0.0 9.0]) -_r = residual(sop,x) -_J = jacobian(sop,x) -@test all(_r .== [ -11.0 -11.0]) -@test all(_J .== [ 9.0 0.0; 0.0 9.0]) - -# NLSolver tests - -nls = NLSolverMock() -cache = solve!(x,nls,sop) -r, J, dx = cache -@test all(r.==_r) -@test all(J.==_J) -@test all(dx.≈11/9) -@test all(x.≈1+11/9) - -#ODESolver tests - -odesol = ODESolverMock(nls,dt) -uf = copy(u0) -uf.=1.0 - -uf, tf, cache = solve_step!(uf,odesol,op,u0,t0,nothing) -uf -@test tf==t0+dt -@test all(uf.≈x) - -# ODESolutions - -tF = 10.0 -sol = GenericODESolution(odesol,op,u0,t0,tF) -current, state = Base.iterate(sol) -uf, tf = current -@test tf==t0+dt -@test all(uf.≈x) - -# BackwardEulerNonlinearOperator tests - -tf = t0+dt -vf = copy(u0) -sop = ThetaMethodNonlinearOperator(op,tf,dt,u0,ode_cache,vf) # See below -x = zero_initial_guess(sop) -x .+= 1.0 -r = allocate_residual(sop,x) -J = allocate_jacobian(sop,x) -residual!(r,sop,x) -jacobian!(J,sop,x) -@test all(r .== [ -11.0 -11.0]) -@test all(J .== [ 9.0 0.0; 0.0 9.0]) -_r = residual(sop,x) -_J = jacobian(sop,x) -@test all(_r .== [ -11.0 -11.0]) -@test all(_J .== [ 9.0 0.0; 0.0 9.0]) - -ls = LUSolver() - -# BackwardEuler tests -dt = 0.01 -odesol = BackwardEuler(ls,dt) -uf = copy(u0) -uf.=1.0 -cache = nothing -uf, tf, cache = solve_step!(uf,odesol,op,u0,t0,cache) -@test tf==t0+dt -@test all( (uf.- u0*exp(dt)) .< 1e-3 ) -@test test_ode_solver(odesol,op,u0,t0,tf) - -# Affine and nonlinear solvers -op = ODEOperatorMock{Float64,Nonlinear}(1.0,0.0,1.0,1) -cache = nothing -uf, tf, cache = solve_step!(uf,odesol,op,u0,t0,cache) -@test tf==t0+dt -@test all( (uf.- u0*exp(dt)) .< 1e-3 ) -@test test_ode_solver(odesol,op,u0,t0,tf) - -op = ODEOperatorMock{Float64,Affine}(1.0,0.0,1.0,1) -cache = nothing -uf, tf, cache = solve_step!(uf,odesol,op,u0,t0,cache) -@test tf==t0+dt -@test all( (uf.- u0*exp(dt)) .< 1e-3 ) -@test test_ode_solver(odesol,op,u0,t0,tf) - -# ThetaMethod tests -odesolθ = ThetaMethod(ls,dt,0.5) -ufθ = copy(u0) -ufθ.=1.0 -ufθ, tf, cache = solve_step!(ufθ,odesolθ,op,u0,t0,nothing) -@test all( (uf.- u0*exp(dt)) .< 1e-3 ) -@test test_ode_solver(odesol,op,u0,t0,tf) - -# RK tests -# RK: BE equivalent -# u1-u0 = dt*u1 => u1 = u0/(1-dt) = 2.2222222222222223 -# uf-u0 = dt*u1 => uf = u1 -odesol = RungeKutta(ls,ls,dt,:BE_1_0_1) -cache = nothing -uf, tf, cache = solve_step!(uf,odesol,op,u0,t0,cache) -@test tf==t0+dt -@test all( (uf.- u0*exp(dt)) .< 1e-3 ) -@test test_ode_solver(odesol,op,u0,t0,tf) - -# RK: CN 2nd order -# k1 = u0 -# k2 = u0 + dt * 0.5 * u0 + dt * 0.5 * k2 -# k2 = u0 * (1+dt*0.5)/(1-dt*0.5) -# un+1 = u0 + dt * 0.5 * u0 + dt * 0.5 * u0 * (1+dt*0.5)/(1-dt*0.5) -# un+1 = u0 * (1+ dt * 0.5 + dt * 0.5* (1+dt*0.5)/(1-dt*0.5)) -odesol = RungeKutta(ls,ls,dt,:CN_2_0_2) -cache = nothing -uf, tf, cache = solve_step!(uf,odesol,op,u0,t0,cache) -@test tf==t0+dt -@test all( (uf.- u0*exp(dt)) .< 1e-3 ) -@test 
test_ode_solver(odesol,op,u0,t0,tf) - -# RK: SDIRK 2nd order -# k1 = u0 + dt * 0.25 * k1 -# k1 = u0 * 1/(1-dt*0.25) -# k2 = u0 + dt * 0.5 * k1 + dt * 0.25 * k2 -# k2 = u0 * 1/(1-dt*0.25) * (1 + dt*0.5/(1-dt*0.25)) -# un+1 = u0 + dt * 0.5 * k1 + dt * 0.5 * k2 -# un+1 = u0 + dt * 0.5 * u0 * 1/(1-dt*0.25) + dt * 0.5 * u0 * 1/(1-dt*0.25) * (1 + dt*0.5/(1-dt*0.25)) -# un+1 = u0 * (1 + dt*0.5/(1-dt*0.25) + dt*0.5/(1-dt*0.25) * (1 + dt*0.5/(1-dt*0.25)) -odesol = RungeKutta(ls,ls,dt,:SDIRK_2_0_2) -cache = nothing -uf, tf, cache = solve_step!(uf,odesol,op,u0,t0,cache) -@test tf==t0+dt -@test all( (uf.- u0*exp(dt)) .< 1e-3 ) -@test test_ode_solver(odesol,op,u0,t0,tf) - -# RK: TRBDF (2nd order with some 0 on the diagonal) -odesol = RungeKutta(ls,ls,dt,:TRBDF2_3_2_3) -cache = nothing -uf, tf, cache = solve_step!(uf,odesol,op,u0,t0,cache) -@test tf==t0+dt -@test all( (uf.- u0*exp(dt)) .< 1e-3 ) -@test test_ode_solver(odesol,op,u0,t0,tf) - -# IMEX RK tests (explicit part = 0) -odesol = IMEXRungeKutta(ls,ls,dt,:IMEX_FE_BE_2_0_1) -cache = nothing -uf, tf, cache = solve_step!(uf,odesol,op,u0,t0,cache) -@test tf==t0+dt -@test all( (uf.- u0*exp(dt)) .< 1e-3 ) -@test test_ode_solver(odesol,op,u0,t0,tf) - -# EX-RK: FE equivalent -odesol = EXRungeKutta(ls,dt,:EX_FE_1_0_1) -cache = nothing -uf, tf, cache = solve_step!(uf,odesol,op,u0,t0,cache) -@test tf==t0+dt -@test all( (uf.- u0*exp(dt)) .< 1e-3 ) -@test test_ode_solver(odesol,op,u0,t0,tf) - -# EX-RK: SSP equivalent -odesol = EXRungeKutta(ls,dt,:EX_SSP_3_0_3) -cache = nothing -uf, tf, cache = solve_step!(uf,odesol,op,u0,t0,cache) -@test tf==t0+dt -@test all( (uf.- u0*exp(dt)) .< 1e-3 ) -@test test_ode_solver(odesol,op,u0,t0,tf) - -# Forward Euler test -odesol = ForwardEuler(ls,dt) -cache = nothing -uf, tf, cache = solve_step!(uf,odesol,op,u0,t0,cache) -@test tf==t0+dt -@test all( (uf.- u0*exp(dt)) .< 1e-3 ) -@test test_ode_solver(odesol,op,u0,t0,tf) - -# Newmark test -op_const = ODEOperatorMock{Float64,Constant}(1.0,0.0,0.0,2) -op_const_mat = ODEOperatorMock{Float64,ConstantMatrix}(1.0,0.0,0.0,2) -op_affine = ODEOperatorMock{Float64,Affine}(1.0,0.0,0.0,2) -op_nonlinear = ODEOperatorMock{Float64,Nonlinear}(1.0,0.0,0.0,2) -ops = [op_const, op_const_mat, op_affine, op_nonlinear] -ls = LUSolver() -γ = 0.5 -β = 0.25 -odesol = Newmark(ls,dt,γ,β) -v0 = ones(2)*(β*dt) -a0 = 0.0*ones(2) -for op in ops - _uf = copy(u0) - _uf.=1.0 - _vf = copy(v0) - _af = copy(a0) - _cache = nothing - (_uf, _vf, _af), _tf, _cache = solve_step!((_uf,_vf,_af),odesol,op,(u0,v0,a0),t0,_cache) - aᵧ = γ*_af .+ (1-γ)*a0 - aᵦ = 2*β*_af .+ (1-2*β)*a0 - @test _tf==t0+dt - @test all(_vf .≈ (v0 + dt*aᵧ)) - @test all(_uf .≈ (u0 + dt*v0 + 0.5*dt^2*aᵦ)) -end - -# GeneralizedAlpha test -op = ODEOperatorMock{Float64,Nonlinear}(1.0,0.0,1.0,1) -ρ∞ = 1.0 # Equivalent to θ-method with θ=0.5 -αf = 1.0/(1.0 + ρ∞) -αm = 0.5 * (3-ρ∞) / (1+ρ∞) -γ = 0.5 + αm - αf -odesolα = GeneralizedAlpha(ls,dt,ρ∞) -ufα = copy(u0) -v0 = 0.0*ones(2) -vf = copy(v0) -ufα.=1.0 -(ufα, vf), tf, cache = solve_step!((ufα,vf),odesolα,op,(u0,v0),t0,nothing) -@test tf==t0+dt -@test all(ufα.≈ufθ) -@test all(vf.≈ 1/(γ*dt) * (ufα-u0) + (1-1/γ)*v0) - -# GeneralizedAlpha ∂tt test -op = ODEOperatorMock{Float64,Nonlinear}(0.0,0.0,0.0,2) -γ = 0.5 -β = 0.25 -ρ∞ = 1.0 # Equivalent to Newmark(0.5, 0.25) -odesolN = Newmark(ls,dt,γ,β) -odesolα = GeneralizedAlpha(ls, dt, ρ∞) -u0 = ones(2)*2 -v0 = 0.0*ones(2) -a0 = 0.0*ones(2) -ufN = copy(u0) -ufN .= 1.0 -vfN = copy(v0) -afN = copy(a0) -(ufN, vfN, afN), tfN, cache = - 
solve_step!((ufN,vfN,afN),odesolN,op,(u0,v0,a0),t0,nothing) -u0 = ones(2)*2 -v0 = 0.0*ones(2) -a0 = 0.0*ones(2) -ufα = copy(u0) -ufα .= 1.0 -vfα = copy(v0) -afα = copy(a0) -(ufα, vfα, afα), tfα, cache = - solve_step!((ufα,vfα,afα),odesolα,op,(u0,v0,a0),t0,nothing) -@test tfα==tfN -@test sqrt(sum(abs2.(ufα - ufN))) < 1.0e-10 -@test sqrt(sum(abs2.(vfα - vfN))) < 1.0e-10 -@test sqrt(sum(abs2.(afα - afN))) < 1.0e-10 - -end #module diff --git a/test/ODEsTests/ODEsTests/runtests.jl b/test/ODEsTests/ODEsTests/runtests.jl deleted file mode 100644 index ea45a1bb4..000000000 --- a/test/ODEsTests/ODEsTests/runtests.jl +++ /dev/null @@ -1,11 +0,0 @@ -module ODEToolsTests - -using Test - -@testset "DiffOperators" begin include("DiffOperatorsTests.jl") end - -@testset "ODEOperators" begin include("ODEOperatorsTests.jl") end - -@testset "ODESolvers" begin include("ODESolversTests.jl") end - -end # module diff --git a/test/ODEsTests/TimeDerivativesTests.jl b/test/ODEsTests/TimeDerivativesTests.jl new file mode 100644 index 000000000..10a74b0e6 --- /dev/null +++ b/test/ODEsTests/TimeDerivativesTests.jl @@ -0,0 +1,101 @@ +module TimeDerivativesTests + +using Test + +using ForwardDiff + +using Gridap +using Gridap.ODEs + +# First time derivative, scalar-valued +f1(x, t) = 5 * x[1] * x[2] + x[2]^2 * t^3 +∂tf1(x, t) = 3 * x[2]^2 * t^2 + +f2(x, t) = t^2 +∂tf2(x, t) = 2 * t + +f3(x, t) = x[1]^2 +∂tf3(x, t) = zero(x[1]) + +f4(x, t) = x[1]^t^2 +∂tf4(x, t) = 2 * t * log(x[1]) * f4(x, t) + +for (f, ∂tf) in ((f1, ∂tf1), (f2, ∂tf2), (f3, ∂tf3), (f4, ∂tf4),) + dtf = (x, t) -> ForwardDiff.derivative(t -> f(x, t), t) + + tv = rand(Float64) + xv = Point(rand(Float64, 2)...) + @test ∂t(f)(xv, tv) ≈ ∂tf(xv, tv) + @test ∂t(f)(xv, tv) ≈ dtf(xv, tv) + @test ∂t(f)(xv, tv) ≈ ∂t(f)(xv)(tv) ≈ ∂t(f)(tv)(xv) +end + +# First time derivative, vector-valued +f1(x, t) = VectorValue(5 * x[1] * x[2], x[2]^2 * t^3) +∂tf1(x, t) = VectorValue(zero(x[1]), x[2]^2 * 3 * t^2) + +f2(x, t) = VectorValue(x[1]^2, zero(x[2])) +∂tf2(x, t) = VectorValue(zero(x[1]), zero(x[2])) + +f3(x, t) = VectorValue(x[1]^2, t) +∂tf3(x, t) = VectorValue(zero(x[1]), one(t)) + +for (f, ∂tf) in ((f1, ∂tf1), (f2, ∂tf2), (f3, ∂tf3),) + dtf = (x, t) -> VectorValue(ForwardDiff.derivative(t -> get_array(f(x, t)), t)) + + tv = rand(Float64) + xv = Point(rand(Float64, 2)...) + @test ∂t(f)(xv, tv) ≈ ∂tf(xv, tv) + @test ∂t(f)(xv, tv) ≈ dtf(xv, tv) + @test ∂t(f)(xv, tv) ≈ ∂t(f)(xv)(tv) ≈ ∂t(f)(tv)(xv) +end + +# First time derivative, tensor-valued +f1(x, t) = TensorValue(x[1] * t, x[2] * t, x[1] * x[2], x[1] * t^2) +∂tf1(x, t) = TensorValue(x[1], x[2], zero(x[1]), 2 * x[1] * t) + +for (f, ∂tf) in ((f1, ∂tf1),) + dtf = (x, t) -> TensorValue(ForwardDiff.derivative(t -> get_array(f(x, t)), t)) + + tv = rand(Float64) + xv = Point(rand(Float64, 2)...) + @test ∂t(f)(xv, tv) ≈ ∂tf(xv, tv) + @test ∂t(f)(xv, tv) ≈ dtf(xv, tv) + @test ∂t(f)(xv, tv) ≈ ∂t(f)(xv)(tv) ≈ ∂t(f)(tv)(xv) +end + +# Spatial derivatives +# f(x, t) = VectorValue(x[1]^2, t) +# ∇f(x, t) = ∇(y -> f(y, t))(x) + +# tv = rand(Float64) +# xv = Point(rand(Float64, 2)...) +# ∇f(xv, tv) + +# TODO +# @santiagobadia : Is there any way to make this transparent to the user +# I guess not unless we create a type for these analytical (space-only or +# space-time via a trait) functions +# Probably a try-catch? 
+ +# Second time derivative, scalar-valued +f1(x, t) = t^2 +∂tf1(x, t) = 2 * t +∂ttf1(x, t) = 2 * one(t) + +f2(x, t) = x[1] * t^2 +∂tf2(x, t) = 2 * x[1] * t +∂ttf2(x, t) = 2 * x[1] + +for (f, ∂tf, ∂ttf) in ((f1, ∂tf1, ∂ttf1), (f2, ∂tf2, ∂ttf2),) + dtf = (x, t) -> ForwardDiff.derivative(t -> f(x, t), t) + dttf = (x, t) -> ForwardDiff.derivative(t -> dtf(x, t), t) + + tv = rand(Float64) + xv = Point(rand(Float64, 2)...) + @test ∂tt(f)(xv, tv) ≈ ∂ttf(xv, tv) + @test ∂tt(f)(xv, tv) ≈ dttf(xv, tv) + @test ∂tt(f)(xv, tv) ≈ ∂tt(f)(xv)(tv) ≈ ∂tt(f)(tv)(xv) +end + +end # module TimeDerivativesTests diff --git a/test/ODEsTests/TransientCellFieldsTests.jl b/test/ODEsTests/TransientCellFieldsTests.jl new file mode 100644 index 000000000..bbe8a6a6b --- /dev/null +++ b/test/ODEsTests/TransientCellFieldsTests.jl @@ -0,0 +1,142 @@ +module TransientCellFieldsTests + +using Test +using LinearAlgebra +using SparseArrays +using BlockArrays + +using Gridap +using Gridap.CellData +using Gridap.FESpaces +using Gridap.MultiField +using Gridap.ODEs +using Gridap.ODEs: TransientMultiFieldCellField + +f(x, t) = sum(x) +f(t::Real) = x -> f(x, t) + +domain = (0, 1, 0, 1) +partition = (5, 5) +model = CartesianDiscreteModel(domain, partition) + +order = 1 +reffe = ReferenceFE(lagrangian, Float64, order) +V = TestFESpace(model, reffe, conformity=:H1, dirichlet_tags="boundary") +U = TransientTrialFESpace(V, f) + +Ω = Triangulation(model) +degree = 2 * order +dΩ = Measure(Ω, degree) + +############### +# SingleField # +############### +m(t, ∂ₜu, v) = ∫(∂ₜu ⋅ v) * dΩ +b(t, u, v) = ∫(∇(u) ⋅ ∇(v) + u ⋅ v) * dΩ +l(t, v) = ∫(v) * dΩ + +res(t, u, v) = m(t, ∂t(u), v) + b(t, u, v) - l(t, v) +jac(t, u, du, v) = b(t, du, v) +jac_t(t, u, dut, v) = m(t, dut, v) + +t0 = 0.0 +U0 = U(t0) +u = get_trial_fe_basis(U0) +uₜ = TransientCellField(u, (u,)) +v = get_fe_basis(V) + +dc = DomainContribution() +dc = dc + jac(t0, uₜ, u, v) +dc = dc + jac_t(t0, uₜ, u, v) +matdata = collect_cell_matrix(U0, V, dc) +vecdata = collect_cell_vector(V, l(t0, v)) + +assembler = SparseMatrixAssembler(U, V) +mat = assemble_matrix(assembler, matdata) +vec = assemble_vector(assembler, vecdata) + +############## +# MultiField # +############## +m2(t, (∂ₜu1, ∂ₜu2), (v1, v2)) = ∫(∂ₜu1 ⋅ v1) * dΩ +b2(t, (u1, u2), (v1, v2)) = ∫(∇(u1) ⋅ ∇(v1) + u2 ⋅ v2 - u1 ⋅ v2) * dΩ +l2(t, (v1, v2)) = ∫(v1 - v2) * dΩ + +m3(t, (∂ₜu1, ∂ₜu2, ∂ₜu3), (v1, v2, v3)) = ∫(∂ₜu1 ⋅ v1) * dΩ +b3(t, (u1, u2, u3), (v1, v2, v3)) = ∫(∇(u1) ⋅ ∇(v1) + u2 ⋅ v2 - u1 ⋅ v2 - u3 ⋅ v2 - u2 ⋅ v3) * dΩ +l3(t, (v1, v2, v3)) = ∫(v1 - v2 + v3) * dΩ + +function test_multifield(n, mfs, m, b, l, U, V) + res(t, u, v) = m(t, ∂t(u), v) + b(t, u, v) - l(t, v) + jac(t, u, du, v) = b(t, du, v) + jac_t(t, u, dut, v) = m(t, dut, v) + + # Normal assembly + Y = MultiFieldFESpace(fill(V, n)) + X = TransientMultiFieldFESpace(fill(U, n)) + + t0 = 0.0 + X0 = X(t0) + test_fe_space(Y) + test_fe_space(X0) + + u = get_trial_fe_basis(X0) + uₜ = TransientMultiFieldCellField(u, (u,)) + v = get_fe_basis(Y) + + dc = DomainContribution() + dc = dc + jac(t0, uₜ, u, v) + dc = dc + jac_t(t0, uₜ, u, v) + matdata = collect_cell_matrix(X0, Y, dc) + vecdata = collect_cell_vector(Y, l(t0, v)) + + assembler = SparseMatrixAssembler(X, Y) + mat = assemble_matrix(assembler, matdata) + vec = assemble_vector(assembler, vecdata) + + # Block MultiFieldStyle + Y_blocks = MultiFieldFESpace(fill(V, n); style=mfs) + X_blocks = TransientMultiFieldFESpace(fill(U, n); style=mfs) + X0_blocks = X_blocks(t0) + test_fe_space(Y_blocks) + test_fe_space(X0_blocks) + + 
u_blocks = get_trial_fe_basis(X0_blocks) + uₜ_blocks = TransientMultiFieldCellField(u_blocks, (u_blocks,)) + v_blocks = get_fe_basis(Y_blocks) + + dc = DomainContribution() + dc = dc + jac(t0, uₜ_blocks, u_blocks, v_blocks) + dc = dc + jac_t(t0, uₜ_blocks, u_blocks, v_blocks) + matdata_blocks = collect_cell_matrix(X0_blocks, Y_blocks, dc) + vecdata_blocks = collect_cell_vector(Y_blocks, l(t0, v_blocks)) + + # Block Assembly + assembler_blocks = SparseMatrixAssembler(X_blocks, Y_blocks) + + mat_blocks = assemble_matrix(assembler_blocks, matdata_blocks) + vec_blocks = assemble_vector(assembler_blocks, vecdata_blocks) + @test mat_blocks ≈ mat + @test vec_blocks ≈ vec + + matvec = similar(vec) + mul!(matvec, mat, vec) + matvec_blocks = similar(vec_blocks) + mul!(matvec_blocks, mat_blocks, vec_blocks) + @test matvec_blocks ≈ matvec + + mat_blocks = allocate_matrix(assembler_blocks, matdata_blocks) + vec_blocks = allocate_vector(assembler_blocks, vecdata_blocks) + assemble_matrix!(mat_blocks, assembler_blocks, matdata_blocks) + assemble_vector!(vec_blocks, assembler_blocks, vecdata_blocks) + @test mat_blocks ≈ mat + @test vec_blocks ≈ vec +end + +for (n, m, b, l) in ((2, m2, b2, l2), (3, m3, b3, l3),) + for mfs in (BlockMultiFieldStyle(), BlockMultiFieldStyle(2, (1, n - 1))) + test_multifield(n, mfs, m, b, l, U, V) + end +end + +end # module TransientCellFieldsTests diff --git a/test/ODEsTests/TransientFEOperatorsSolutionsTests.jl b/test/ODEsTests/TransientFEOperatorsSolutionsTests.jl new file mode 100644 index 000000000..f6bc81ff2 --- /dev/null +++ b/test/ODEsTests/TransientFEOperatorsSolutionsTests.jl @@ -0,0 +1,301 @@ +module TransientFEOperatorsSolutionsTests + +# This file tests both TransientFEOperators.jl and TransientFESolutions.jl + +using Test + +using LinearAlgebra +using LinearAlgebra: fillstored! 
+ +using Gridap +using Gridap.Algebra +using Gridap.FESpaces +using Gridap.ODEs + +include("ODESolversMocks.jl") + +# Analytical functions +u(x, t) = x[1] * (1 - x[2]) * (1 + t) +u(t::Real) = x -> u(x, t) +u(x) = t -> u(x, t) + +∂tu(x, t) = ∂t(u)(x, t) +∂tu(t::Real) = x -> ∂tu(x, t) + +# Geometry +domain = (0, 1, 0, 1) +partition = (5, 5) +model = CartesianDiscreteModel(domain, partition) + +# FE spaces +order = 2 +reffe = ReferenceFE(lagrangian, Float64, order) +V = FESpace(model, reffe, conformity=:H1, dirichlet_tags="boundary") +U = TransientTrialFESpace(V, u) + +# Integration +Ω = Triangulation(model) +degree = 2 * order +dΩ = Measure(Ω, degree) + +# Initial conditions +t0 = 0.0 +tF = 1.0 +dt = 0.2 +dt⁻¹ = 1 / dt +dt⁻² = dt⁻¹^2 + +U0 = U(t0) +uh0 = interpolate_everywhere(u(t0), U0) +∂ₜuh0 = FEFunction(U0, get_free_dof_values(uh0) .* dt⁻¹) +∂ₜₜuh0 = FEFunction(U0, get_free_dof_values(uh0) .* dt⁻²) + +uh0_dof = get_free_dof_values(uh0) +∂ₜuh0_dof = get_free_dof_values(∂ₜuh0) +∂ₜₜuh0_dof = get_free_dof_values(∂ₜₜuh0) + +# ODE solver +atol = 1.0e-12 +rtol = 1.0e-8 +maxiter = 100 +sysslvr = NonlinearSolverMock(rtol, atol, maxiter) +odeslvr = ODESolverMock(sysslvr, dt) + +function test_transient_operator( + tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms +) + odeop = get_algebraic_operator(tfeop) + odeopcache = allocate_odeopcache(odeop, t0, uhs0_dof) + update_odeopcache!(odeopcache, odeop, t0) + + if !(tfeop isa TransientIMEXFEOperator) + @test test_tfe_operator(tfeop, t0, uhs0_cf) + end + + fesltn = solve(odeslvr, tfeop, t0, tF, uhs0_fe) + @test test_tfe_solution(fesltn) + + # Test storage of constant forms + if tfeop isa TransientIMEXFEOperator + im_tfeop, ex_tfeop = get_imex_operators(tfeop) + im_odeopcache, ex_odeopcache = odeopcache + tfeops_odeopcaches = ((im_tfeop, im_odeopcache), (ex_tfeop, ex_odeopcache)) + else + tfeops_odeopcaches = ((tfeop, odeopcache),) + end + for (tfeop, odeopcache) in tfeops_odeopcaches + num_forms = get_num_forms(tfeop) + if num_forms == 1 + order = get_order(tfeop) + constant_form = constant_forms[1] + @test is_form_constant(tfeop, order) == constant_form + if constant_form + @test !isnothing(odeopcache.const_forms[1]) + end + else + for k in 0:num_forms-1 + constant_form = constant_forms[k+1] + @test is_form_constant(tfeop, k) == constant_form + if constant_form + @test !isnothing(odeopcache.const_forms[k+1]) + end + end + end + end +end + +############### +# First-order # +############### +order = 1 +f(t) = x -> ∂t(u)(x, t) - Δ(u(t))(x) + +mass(t, ∂ₜu, v) = ∫(∂ₜu ⋅ v) * dΩ +mass(t, u, ∂ₜu, v) = mass(t, ∂ₜu, v) +stiffness(t, u, v) = ∫(∇(u) ⊙ ∇(v)) * dΩ +forcing(t, v) = ∫(f(t) ⋅ v) * dΩ + +res(t, u, v) = mass(t, u, ∂t(u), v) + stiffness(t, u, v) - forcing(t, v) +jac(t, u, du, v) = stiffness(t, du, v) +jac_t(t, u, dut, v) = mass(t, u, dut, v) + +res_ql(t, u, v) = stiffness(t, u, v) - forcing(t, v) +res_l(t, v) = (-1) * forcing(t, v) + +# TODO could think of a simple and optimised way to create a zero residual or +# jacobian without assembling the vector / matrix +im_res(t, u, v) = ∫(0 * u * v) * dΩ +im_jac(t, u, du, v) = ∫(0 * du * v) * dΩ +im_jac_t(t, u, dut, v) = mass(t, u, dut, v) + +ex_res(t, u, v) = stiffness(t, u, v) - forcing(t, v) +ex_jac(t, u, du, v) = stiffness(t, du, v) + +# Initial data +uhs0_fe = (uh0,) +uhs0_cf = TransientCellField(uh0, (∂ₜuh0,)) +uhs0_dof = (uh0_dof, ∂ₜuh0_dof) + +# TransientFEOperator +constant_forms = () + +tfeop = TransientFEOperator(res, (jac, jac_t), U, V) +test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, 
constant_forms) + +tfeop = TransientFEOperator(res, U, V; order) +test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + +# TransientQuasilinearFEOperator +constant_forms = (false,) + +tfeop = TransientQuasilinearFEOperator(mass, res_ql, (jac, jac_t), U, V) +test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + +tfeop = TransientQuasilinearFEOperator(mass, res_ql, U, V; order) +test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + +for constant_mass in (true, false) + # TransientSemilinearFEOperator + constant_forms = (constant_mass,) + + tfeop = TransientSemilinearFEOperator( + mass, res_ql, (jac, jac_t), U, V; + constant_mass + ) + test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + + tfeop = TransientSemilinearFEOperator( + mass, res_ql, U, V; + constant_mass, order + ) + test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + + # TransientIMEXFEOperator + constant_forms = (constant_mass,) + + im_tfeop = TransientSemilinearFEOperator( + mass, im_res, (im_jac, im_jac_t), U, V; + constant_mass + ) + ex_tfeop = TransientFEOperator(ex_res, (ex_jac,), U, V) + tfeop = TransientIMEXFEOperator(im_tfeop, ex_tfeop) + test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + + for constant_stiffness in (true, false) + # TransientLinearFEOperator + constant_forms = (constant_stiffness, constant_mass) + + tfeop = TransientLinearFEOperator( + (stiffness, mass), res_l, (jac, jac_t), U, V; + constant_forms + ) + test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + + tfeop = TransientLinearFEOperator( + (stiffness, mass), res_l, U, V; + constant_forms + ) + test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + end +end + +################ +# Second-order # +################ +order = 2 +f(t) = x -> ∂tt(u)(x, t) + ∂t(u)(x, t) - Δ(u(t))(x) + +mass(t, ∂ₜₜu, v) = ∫(∂ₜₜu ⋅ v) * dΩ +mass(t, u, ∂ₜₜu, v) = mass(t, ∂ₜₜu, v) +damping(t, ∂ₜu, v) = ∫(∂ₜu ⋅ v) * dΩ +stiffness(t, u, v) = ∫(∇(u) ⊙ ∇(v)) * dΩ +forcing(t, v) = ∫(f(t) ⋅ v) * dΩ + +res(t, u, v) = mass(t, u, ∂tt(u), v) + damping(t, ∂t(u), v) + stiffness(t, u, v) - forcing(t, v) +jac(t, u, du, v) = stiffness(t, du, v) +jac_t(t, u, dut, v) = damping(t, dut, v) +jac_tt(t, u, dutt, v) = mass(t, dutt, v) + +res_ql(t, u, v) = damping(t, ∂t(u), v) + stiffness(t, u, v) - forcing(t, v) +res_l(t, v) = (-1) * forcing(t, v) + +im_res(t, u, v) = ∫(0 * u * v) * dΩ +im_jac(t, u, du, v) = ∫(0 * du * v) * dΩ +im_jac_t(t, u, dut, v) = ∫(0 * dut * v) * dΩ +im_jac_tt(t, u, dutt, v) = mass(t, u, dutt, v) + +ex_res(t, u, v) = damping(t, ∂t(u), v) + stiffness(t, u, v) - forcing(t, v) +ex_jac(t, u, du, v) = stiffness(t, du, v) +ex_jac_t(t, u, dut, v) = damping(t, dut, v) + +# Initial data +uhs0_fe = (uh0, ∂ₜuh0) +uhs0_cf = TransientCellField(uh0, (∂ₜuh0, ∂ₜₜuh0,)) +uhs0_dof = (uh0_dof, ∂ₜuh0_dof, ∂ₜₜuh0_dof) + +# TransientFEOperator +constant_forms = () + +tfeop = TransientFEOperator(res, (jac, jac_t, jac_tt), U, V) +test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + +tfeop = TransientFEOperator(res, U, V; order) +test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + +for constant_mass in (true, false) + # TransientQuasilinearFEOperator + constant_forms = (false,) + + tfeop = TransientQuasilinearFEOperator(mass, res_ql, (jac, jac_t, jac_tt), U, V) + test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + + tfeop = 
TransientQuasilinearFEOperator(mass, res_ql, U, V; order) + test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + + # TransientSemilinearFEOperator + constant_forms = (constant_mass,) + + tfeop = TransientSemilinearFEOperator( + mass, res_ql, (jac, jac_t, jac_tt), U, V; + constant_mass + ) + test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + + tfeop = TransientSemilinearFEOperator( + mass, res_ql, U, V; + constant_mass, order + ) + test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + + # TransientIMEXFEOperator + constant_forms = (constant_mass,) + + im_tfeop = TransientSemilinearFEOperator( + mass, im_res, (im_jac, im_jac_t, im_jac_tt), U, V; + constant_mass + ) + ex_tfeop = TransientFEOperator(ex_res, (ex_jac, ex_jac_t), U, V) + tfeop = TransientIMEXFEOperator(im_tfeop, ex_tfeop) + test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + + for constant_damping in (true, false) + for constant_stiffness in (true, false) + # TransientLinearFEOperator + constant_forms = (constant_stiffness, constant_damping, constant_mass) + + tfeop = TransientLinearFEOperator( + (stiffness, damping, mass), res_l, (jac, jac_t, jac_tt), U, V; + constant_forms + ) + test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + + tfeop = TransientLinearFEOperator( + (stiffness, damping, mass), res_l, U, V; + constant_forms + ) + test_transient_operator(tfeop, uhs0_fe, uhs0_cf, uhs0_dof, constant_forms) + end + end +end + +end # module TransientFEOperatorsSolutionsTests diff --git a/test/ODEsTests/TransientFEProblemsTests.jl b/test/ODEsTests/TransientFEProblemsTests.jl new file mode 100644 index 000000000..49886a04c --- /dev/null +++ b/test/ODEsTests/TransientFEProblemsTests.jl @@ -0,0 +1,21 @@ +module TransientFEProblemsTests + +using Test + +@testset "HeatEquationScalar" begin include("TransientFEProblemsTests/HeatEquationScalarTests.jl") end + +@testset "HeatEquationVector" begin include("TransientFEProblemsTests/HeatEquationVectorTests.jl") end + +@testset "HeatEquationMultiField" begin include("TransientFEProblemsTests/HeatEquationMultiFieldTests.jl") end + +@testset "HeatEquationNeumann" begin include("TransientFEProblemsTests/HeatEquationNeumannTests.jl") end + +@testset "HeatEquationDG" begin include("TransientFEProblemsTests/HeatEquationDGTests.jl") end + +@testset "StokesEquation" begin include("TransientFEProblemsTests/StokesEquationTests.jl") end + +@testset "FreeSurfacePotentialFlow" begin include("TransientFEProblemsTests/FreeSurfacePotentialFlowTests.jl") end + +@testset "SecondOrderEquation" begin include("TransientFEProblemsTests/SecondOrderEquationTests.jl") end + +end # module TransientFEProblemsTests diff --git a/test/ODEsTests/TransientFEProblemsTests/FreeSurfacePotentialFlowTests.jl b/test/ODEsTests/TransientFEProblemsTests/FreeSurfacePotentialFlowTests.jl new file mode 100644 index 000000000..675d1be19 --- /dev/null +++ b/test/ODEsTests/TransientFEProblemsTests/FreeSurfacePotentialFlowTests.jl @@ -0,0 +1,115 @@ +module FreeSurfacePotentialFlowTests + +using Test + +using Gridap +using Gridap.Geometry + +# Parameters +L = 2 * π +H = 1.0 +n = 8 +order = 2 +g = 9.81 +ξ = 0.1 +λ = L / 2 +k = 2 * π / L +h = L / n +ω = √(g * k * tanh(k * H)) +t0 = 0.0 +dt = h / (2 * λ * ω) +tF = 10 * dt # 2 * π +α = 2 / dt +tol = 1.0e-2 + +# Exact solution +ϕₑ(x, t) = ω / k * ξ * (cosh(k * (x[2]))) / sinh(k * H) * sin(k * x[1] - ω * t) +ηₑ(x, t) = ξ * cos(k * x[1] - ω * t) +ϕₑ(t::Real) = x -> ϕₑ(x, t) 
+ηₑ(t::Real) = x -> ηₑ(x, t) + +# Domain +domain = (0, L, 0, H) +partition = (n, n) +model = CartesianDiscreteModel(domain, partition; isperiodic=(true, false)) + +# Boundaries +labels = get_face_labeling(model) +add_tag_from_tags!(labels, "bottom", [1, 2, 5]) +add_tag_from_tags!(labels, "free_surface", [3, 4, 6]) + +# Triangulation +Ω = Interior(model) +Γ = Boundary(model, tags="free_surface") +dΩ = Measure(Ω, 2 * order) +dΓ = Measure(Γ, 2 * order) + +# FE spaces +reffe = ReferenceFE(lagrangian, Float64, order) +V = TestFESpace(Ω, reffe, conformity=:H1) +V_Γ = TestFESpace(Γ, reffe, conformity=:H1) +U = TransientTrialFESpace(V) +U_Γ = TransientTrialFESpace(V_Γ) +X = TransientMultiFieldFESpace([U, U_Γ]) +Y = MultiFieldFESpace([V, V_Γ]) + +# Weak form +m(t, (ϕt, ηt), (w, v)) = ∫(0.5 * (α / g * (w * ϕt) + v * ϕt) - (w * ηt))dΓ +a(t, (ϕ, η), (w, v)) = ∫(∇(ϕ) ⋅ ∇(w))dΩ + ∫(0.5 * (α * (w * η) + g * v * η))dΓ +b(t, (w, v)) = ∫(0.0 * w)dΓ + +res(t, x, y) = m(t, ∂t(x), y) + a(t, x, y) - b(t, y) +jac(t, x, dx, y) = a(t, dx, y) +jac_t(t, x, dxt, y) = m(t, dxt, y) + +# Optimal transient FE Operator +op_const = TransientLinearFEOperator((a, m), b, (jac, jac_t), X, Y, constant_forms=(true, true)) + +# TransientFEOperator exploiting automatic differentiation (testing purposes) +op_trans = TransientFEOperator(res, (jac, jac_t), X, Y) +op_ad = TransientFEOperator(res, X, Y) + +# TransientFEOperator exploiting time derivative of separate fields (TransientMultiFieldCellField) +res2(t, (ϕ, η), y) = m(t, (∂t(ϕ), ∂t(η)), y) + a(t, (ϕ, η), y) - b(t, y) +op_multifield = TransientFEOperator(res2, (jac, jac_t), X, Y) + +# Solver +sysslvr_l = LUSolver() +sysslvr_nl = NLSolver(sysslvr_l, show_trace=false, method=:newton, iterations=10) +odeslvr = ThetaMethod(sysslvr_nl, dt, 0.5) + +# Initial solution +U0 = U(t0) +UΓ0 = U_Γ(t0) +X0 = X(t0) +uh0 = interpolate_everywhere(ϕₑ(t0), U0) +uhΓ0 = interpolate_everywhere(ηₑ(t0), UΓ0) +xh0 = interpolate_everywhere([uh0, uhΓ0], X0); +xhs0 = (xh0,) + +function test_flow_operator(op) + fesltn = solve(odeslvr, op, t0, tF, xhs0) + + # Post-process + l2_Ω(v) = √(∑(∫(v ⋅ v) * dΩ)) + l2_Γ(v) = √(∑(∫(v ⋅ v) * dΓ)) + E_kin(v) = 0.5 * ∑(∫(∇(v) ⋅ ∇(v)) * dΩ) + E_pot(v) = g * 0.5 * ∑(∫(v * v)dΓ) + Eₑ = 0.5 * g * ξ^2 * L + + for (tn, (ϕn, ηn)) in fesltn + E = E_kin(ϕn) + E_pot(ηn) + error_ϕ = l2_Ω(ϕn - ϕₑ(tn)) + error_η = l2_Γ(ηn - ηₑ(tn)) + @test abs(E / Eₑ - 1.0) <= tol + @test error_ϕ <= tol + @test error_η <= tol + end +end + +# op_ad not working yet +for op in (op_const, op_trans, op_multifield) + test_flow_operator(op) +end + +end diff --git a/test/ODEsTests/TransientFEProblemsTests/HeatEquationDGTests.jl b/test/ODEsTests/TransientFEProblemsTests/HeatEquationDGTests.jl new file mode 100644 index 000000000..9a650d8b4 --- /dev/null +++ b/test/ODEsTests/TransientFEProblemsTests/HeatEquationDGTests.jl @@ -0,0 +1,105 @@ +module HeatEquationDGTests + +using Test + +using LinearAlgebra + +using Gridap +using Gridap.Algebra +using Gridap.FESpaces +using Gridap.ODEs + +# Analytical functions +u(x, t) = x[1] * (1 - x[2]) * (1 + t) +u(t::Real) = x -> u(x, t) +u(x) = t -> u(x, t) + +∂tu(x, t) = ∂t(u)(x, t) +∂tu(t::Real) = x -> ∂tu(x, t) + +# Geometry +domain = (0, 1, 0, 1) +partition = (5, 5) +model = CartesianDiscreteModel(domain, partition) + +# FE spaces +order = 2 +reffe = ReferenceFE(lagrangian, Float64, order) +V = FESpace(model, reffe, conformity=:L2) +U = TransientTrialFESpace(V, u) + +# Integration +Ω = Triangulation(model) +degree = 2 * order +dΩ = Measure(Ω, degree) + +Γ = 
BoundaryTriangulation(model) +dΓ = Measure(Γ, degree) +nΓ = get_normal_vector(Γ) + +Λ = SkeletonTriangulation(model) +dΛ = Measure(Λ, degree) +nΛ = get_normal_vector(Λ) + +# FE operator +f(t) = x -> ∂t(u)(x, t) - Δ(u(t))(x) +h = 1 / 5 +γ = order * (order + 1) + +mass(t, ∂ₜu, v) = ∫(∂ₜu ⋅ v) * dΩ +stiffness_Ω(t, u, v) = ∫(∇(u) ⊙ ∇(v)) * dΩ +stiffness_Γ(t, u, v) = ∫((γ / h) * (u ⋅ v) - (∇(u) ⋅ nΓ) ⋅ v - u ⋅ (∇(v) ⋅ nΓ)) * dΓ +stiffness_Λ(t, u, v) = ∫((γ / h) * (jump(u * nΛ) ⊙ jump(v * nΛ)) - mean(∇(u)) ⊙ jump(v * nΛ) - jump(u * nΛ) ⊙ mean(∇(v))) * dΛ +stiffness(t, u, v) = stiffness_Ω(t, u, v) + stiffness_Γ(t, u, v) + stiffness_Λ(t, u, v) +forcing_Ω(t, v) = ∫(f(t) ⋅ v) * dΩ +forcing_Γ(t, v) = ∫((γ / h) * v ⋅ u(t) - u(t) ⋅ (∇(v) ⋅ nΓ)) * dΓ +forcing(t, v) = forcing_Ω(t, v) + forcing_Γ(t, v) + +res(t, u, v) = mass(t, ∂t(u), v) + stiffness(t, u, v) - forcing(t, v) +jac(t, u, du, v) = stiffness(t, du, v) +jac_t(t, u, dut, v) = mass(t, dut, v) + +tfeop_nl_man = TransientFEOperator(res, (jac, jac_t), U, V) + +# TODO there is an issue with AD here. The issue is already there in the +# current version of Gridap.ODEs. This happens when calling +# TransientCellFieldType(y, u.derivatives) in the construction of the jacobians +# with AD +tfeop_nl_ad = TransientFEOperator(res, U, V) + +tfeops = ( + tfeop_nl_man, + # tfeop_nl_ad +) + +# Initial conditions +t0 = 0.0 +tF = 1.0 +dt = 0.1 + +U0 = U(t0) +uh0 = interpolate_everywhere(u(t0), U0) +uhs0 = (uh0,) + +# ODE Solver +tol = 1.0e-6 +sysslvr_l = LUSolver() +sysslvr_nl = NLSolver(sysslvr_l, show_trace=false, method=:newton, iterations=10) +odeslvrs = ( + ThetaMethod(sysslvr_nl, dt, 0.5), +) + +# Tests +for odeslvr in odeslvrs + for tfeop in tfeops + fesltn = solve(odeslvr, tfeop, t0, tF, uhs0) + + for (t_n, uh_n) in fesltn + eh_n = u(t_n) - uh_n + e_n = sqrt(sum(∫(eh_n ⋅ eh_n) * dΩ)) + @test e_n < tol + end + end +end + +end # module HeatEquationDGTests diff --git a/test/ODEsTests/TransientFEProblemsTests/HeatEquationMultiFieldTests.jl b/test/ODEsTests/TransientFEProblemsTests/HeatEquationMultiFieldTests.jl new file mode 100644 index 000000000..61bd123c6 --- /dev/null +++ b/test/ODEsTests/TransientFEProblemsTests/HeatEquationMultiFieldTests.jl @@ -0,0 +1,120 @@ +module HeatEquationMultifieldTests + +using Test + +using LinearAlgebra + +using Gridap +using Gridap.Algebra +using Gridap.FESpaces +using Gridap.ODEs + +# Analytical functions +u(x, t) = x[1] * (1 - x[2]) * (1 + t) +u(t::Real) = x -> u(x, t) +u(x) = t -> u(x, t) + +∂tu(x, t) = ∂t(u)(x, t) +∂tu(t::Real) = x -> ∂tu(x, t) + +# Geometry +domain = (0, 1, 0, 1) +partition = (5, 5) +model = CartesianDiscreteModel(domain, partition) + +# FE spaces +order = 2 +reffe = ReferenceFE(lagrangian, Float64, order) +V = FESpace(model, reffe, conformity=:H1, dirichlet_tags="boundary") +U = TransientTrialFESpace(V, u) + +Y = MultiFieldFESpace([V, V]) +X = TransientMultiFieldFESpace([U, U]) + +# Integration +Ω = Triangulation(model) +degree = 2 * order +dΩ = Measure(Ω, degree) + +# FE operator +f(t) = x -> ∂t(u)(x, t) - Δ(u(t))(x) +_mass(t, ∂ₜu, v) = ∫(∂ₜu ⋅ v) * dΩ +_mass(t, u, ∂ₜu, v) = _mass(t, ∂ₜu, v) +_stiffness(t, u, v) = ∫(∇(u) ⊙ ∇(v)) * dΩ +_forcing(t, v) = ∫(f(t) ⋅ v) * dΩ + +_res(t, u, v) = _mass(t, ∂t(u), v) + _stiffness(t, u, v) - _forcing(t, v) +_res_ql(t, u, v) = _stiffness(t, u, v) - _forcing(t, v) +_res_l(t, v) = (-1) * _forcing(t, v) + +mass(t, (∂ₜu1, ∂ₜu2), (v1, v2)) = _mass(t, ∂ₜu1, v1) + _mass(t, ∂ₜu2, v2) +mass(t, (u1, u2), (∂ₜu1, ∂ₜu2), (v1, v2)) = _mass(t, u1, ∂ₜu1, v1) + _mass(t, u2, ∂ₜu2, v2) 
+stiffness(t, (u1, u2), (v1, v2)) = _stiffness(t, u1, v1) + _stiffness(t, u2, v2) + +res(t, (u1, u2), (v1, v2)) = _res(t, u1, v1) + _res(t, u2, v2) +jac(t, x, (du1, du2), (v1, v2)) = _stiffness(t, du1, v1) + _stiffness(t, du2, v2) +jac_t(t, x, (dut1, dut2), (v1, v2)) = _mass(t, dut1, v1) + _mass(t, dut2, v2) + +res_ql(t, (u1, u2), (v1, v2)) = _res_ql(t, u1, v1) + _res_ql(t, u2, v2) +res_l(t, (v1, v2)) = _res_l(t, v1) + _res_l(t, v2) + +args_man = ((jac, jac_t), X, Y) +tfeop_nl_man = TransientFEOperator(res, args_man...) +tfeop_ql_man = TransientQuasilinearFEOperator(mass, res_ql, args_man...) +tfeop_sl_man = TransientSemilinearFEOperator(mass, res_ql, args_man...) +tfeop_l_man = TransientLinearFEOperator((stiffness, mass), res_l, args_man...) + +args_ad = (X, Y) +tfeop_nl_ad = TransientFEOperator(res, args_ad...) +tfeop_ql_ad = TransientQuasilinearFEOperator(mass, res_ql, args_ad...) +tfeop_sl_ad = TransientSemilinearFEOperator(mass, res_ql, args_ad...) +tfeop_l_ad = TransientLinearFEOperator((stiffness, mass), res_l, args_ad...) + +tfeops = ( + tfeop_nl_man, + tfeop_ql_man, + tfeop_sl_man, + tfeop_l_man, + tfeop_nl_ad, + tfeop_ql_ad, + tfeop_sl_ad, + tfeop_l_ad, +) + +# Initial conditions +t0 = 0.0 +tF = 1.0 +dt = 0.1 + +U0 = U(t0) +uh0 = interpolate_everywhere(u(t0), U0) +X0 = X(t0) +xh0 = interpolate_everywhere([uh0, uh0], X0) +xhs0 = (xh0,) + +# ODE Solver +tol = 1.0e-6 +sysslvr_l = LUSolver() +sysslvr_nl = NLSolver(sysslvr_l, show_trace=false, method=:newton, iterations=10) +odeslvrs = ( + ThetaMethod(sysslvr_nl, dt, 0.5), +) + +# Tests +for odeslvr in odeslvrs + for tfeop in tfeops + fesltn = solve(odeslvr, tfeop, t0, tF, xhs0) + + for (t_n, xhs_n) in fesltn + eh_n = u(t_n) - xhs_n[1] + e_n = sqrt(sum(∫(eh_n ⋅ eh_n) * dΩ)) + @test e_n < tol + + eh_n = u(t_n) - xhs_n[2] + e_n = sqrt(sum(∫(eh_n ⋅ eh_n) * dΩ)) + @test e_n < tol + end + end +end + +end # module HeatEquationMultifieldTests diff --git a/test/ODEsTests/TransientFEProblemsTests/HeatEquationNeumannTests.jl b/test/ODEsTests/TransientFEProblemsTests/HeatEquationNeumannTests.jl new file mode 100644 index 000000000..f9a75c90f --- /dev/null +++ b/test/ODEsTests/TransientFEProblemsTests/HeatEquationNeumannTests.jl @@ -0,0 +1,111 @@ +module HeatEquationNeumannTests + +using Test + +using LinearAlgebra + +using Gridap +using Gridap.Algebra +using Gridap.FESpaces +using Gridap.ODEs + +# Analytical functions +u(x, t) = x[1] * (1 - x[2]) * (1 + t) +u(t::Real) = x -> u(x, t) +u(x) = t -> u(x, t) + +∂tu(x, t) = ∂t(u)(x, t) +∂tu(t::Real) = x -> ∂tu(x, t) + +# Geometry +domain = (0, 1, 0, 1) +partition = (5, 5) +model = CartesianDiscreteModel(domain, partition) +dirichlet_tags = [1, 2, 3, 4, 5, 6] +neumanntags = [7, 8] + +# FE spaces +order = 2 +reffe = ReferenceFE(lagrangian, Float64, order) +V = FESpace(model, reffe, conformity=:H1, dirichlet_tags=dirichlet_tags) +U = TransientTrialFESpace(V, u) + +# Integration +Ω = Triangulation(model) +degree = 2 * order +dΩ = Measure(Ω, degree) + +Γ = BoundaryTriangulation(model, tags=neumanntags) +dΓ = Measure(Γ, degree) +nΓ = get_normal_vector(Γ) + +# FE operator +f(t) = x -> ∂t(u)(x, t) - Δ(u(t))(x) +mass(t, ∂ₜu, v) = ∫(∂ₜu ⋅ v) * dΩ +mass(t, u, ∂ₜu, v) = mass(t, ∂ₜu, v) +stiffness(t, u, v) = ∫(∇(u) ⊙ ∇(v)) * dΩ +forcing_Ω(t, v) = ∫(f(t) ⋅ v) * dΩ +forcing_Γ(t, v) = ∫((∇(u(t)) ⋅ nΓ) ⋅ v) * dΓ +forcing(t, v) = forcing_Ω(t, v) + forcing_Γ(t, v) + +res(t, u, v) = mass(t, ∂t(u), v) + stiffness(t, u, v) - forcing(t, v) +jac(t, u, du, v) = stiffness(t, du, v) +jac_t(t, u, dut, v) = mass(t, dut, v) + 
+res_ql(t, u, v) = stiffness(t, u, v) - forcing(t, v) +res_l(t, v) = (-1) * forcing(t, v) + +args_man = ((jac, jac_t), U, V) +tfeop_nl_man = TransientFEOperator(res, args_man...) +tfeop_ql_man = TransientQuasilinearFEOperator(mass, res_ql, args_man...) +tfeop_sl_man = TransientSemilinearFEOperator(mass, res_ql, args_man...) +tfeop_l_man = TransientLinearFEOperator((stiffness, mass), res_l, args_man...) + +args_ad = (U, V) +tfeop_nl_ad = TransientFEOperator(res, args_ad...) +tfeop_ql_ad = TransientQuasilinearFEOperator(mass, res_ql, args_ad...) +tfeop_sl_ad = TransientSemilinearFEOperator(mass, res_ql, args_ad...) +tfeop_l_ad = TransientLinearFEOperator((stiffness, mass), res_l, args_ad...) + +tfeops = ( + tfeop_nl_man, + tfeop_ql_man, + tfeop_sl_man, + tfeop_l_man, + tfeop_nl_ad, + tfeop_ql_ad, + tfeop_sl_ad, + tfeop_l_ad +) + +# Initial conditions +t0 = 0.0 +tF = 1.0 +dt = 0.1 + +U0 = U(t0) +uh0 = interpolate_everywhere(u(t0), U0) +uhs0 = (uh0,) + +# ODE Solver +tol = 1.0e-6 +sysslvr_l = LUSolver() +sysslvr_nl = NLSolver(sysslvr_l, show_trace=false, method=:newton, iterations=10) +odeslvrs = ( + ThetaMethod(sysslvr_nl, dt, 0.5), +) + +# Tests +for odeslvr in odeslvrs + for tfeop in tfeops + fesltn = solve(odeslvr, tfeop, t0, tF, uhs0) + + for (t_n, uh_n) in fesltn + eh_n = u(t_n) - uh_n + e_n = sqrt(sum(∫(eh_n ⋅ eh_n) * dΩ)) + @test e_n < tol + end + end +end + +end # module HeatEquationNeumannTests diff --git a/test/ODEsTests/TransientFEProblemsTests/HeatEquationScalarTests.jl b/test/ODEsTests/TransientFEProblemsTests/HeatEquationScalarTests.jl new file mode 100644 index 000000000..7038afa73 --- /dev/null +++ b/test/ODEsTests/TransientFEProblemsTests/HeatEquationScalarTests.jl @@ -0,0 +1,149 @@ +module Order1FETests + +using Test + +using LinearAlgebra + +using Gridap +using Gridap.Algebra +using Gridap.FESpaces +using Gridap.ODEs + +# Analytical functions +u(x, t) = x[1] * (1 - x[2]) * (1 + t) +∂tu(x, t) = ∂t(u)(x, t) + +u(t::Real) = x -> u(x, t) +∂tu(t::Real) = x -> ∂tu(x, t) + +# Geometry +domain = (0, 1, 0, 1) +partition = (5, 5) +model = CartesianDiscreteModel(domain, partition) + +# FE spaces +order = 2 +reffe = ReferenceFE(lagrangian, Float64, order) +V = FESpace(model, reffe, conformity=:H1, dirichlet_tags="boundary") +U = TransientTrialFESpace(V, u) + +# Integration +Ω = Triangulation(model) +degree = 2 * order +dΩ = Measure(Ω, degree) + +# FE operator +f(t) = x -> ∂t(u)(x, t) - Δ(u(t))(x) + +mass(t, ∂ₜu, v) = ∫(∂ₜu ⋅ v) * dΩ +mass(t, u, ∂ₜu, v) = mass(t, ∂ₜu, v) +stiffness(t, u, v) = ∫(∇(u) ⊙ ∇(v)) * dΩ +forcing(t, v) = ∫(f(t) ⋅ v) * dΩ + +res(t, u, v) = mass(t, u, ∂t(u), v) + stiffness(t, u, v) - forcing(t, v) +jac(t, u, du, v) = stiffness(t, du, v) +jac_t(t, u, dut, v) = mass(t, u, dut, v) + +res_ql(t, u, v) = stiffness(t, u, v) - forcing(t, v) +res_l(t, v) = (-1) * forcing(t, v) + +res0(t, u, v) = ∫(0 * u * v) * dΩ +jac0(t, u, du, v) = ∫(0 * du * v) * dΩ + +args_man = ((jac, jac_t), U, V) +args0_man = ((jac0,), U, V) +tfeop_nl_man = TransientFEOperator(res, args_man...) +tfeop_ql_man = TransientQuasilinearFEOperator(mass, res_ql, args_man...) +tfeop_sl_man = TransientSemilinearFEOperator(mass, res_ql, args_man...) +tfeop_l_man = TransientLinearFEOperator((stiffness, mass), res_l, args_man...) + +tfeop_im_man = TransientSemilinearFEOperator(mass, res_ql, args_man...) +tfeop_ex_man = TransientFEOperator(res0, args0_man...) +tfeop_imex_man = TransientIMEXFEOperator(tfeop_im_man, tfeop_ex_man) + +args_ad = (U, V) +tfeop_nl_ad = TransientFEOperator(res, args_ad...) 
+tfeop_ql_ad = TransientQuasilinearFEOperator(mass, res_ql, args_ad...) +tfeop_sl_ad = TransientSemilinearFEOperator(mass, res_ql, args_ad...) +tfeop_l_ad = TransientLinearFEOperator((stiffness, mass), res_l, args_ad...) + +tfeop_im_ad = TransientSemilinearFEOperator(mass, res_ql, args_ad...) +tfeop_ex_ad = TransientFEOperator(res0, args_ad..., order=0) +tfeop_imex_ad = TransientIMEXFEOperator(tfeop_im_ad, tfeop_ex_ad) + +tfeops = ( + tfeop_nl_man, + tfeop_ql_man, + tfeop_sl_man, + tfeop_l_man, + tfeop_imex_man, + tfeop_nl_ad, + tfeop_ql_ad, + tfeop_sl_ad, + tfeop_l_ad, + tfeop_imex_ad, +) + +# Initial conditions +t0 = 0.0 +tF = 1.0 +dt = 0.1 + +U0 = U(t0) +uh0 = interpolate_everywhere(u(t0), U0) + +tol = 1.0e-6 +sysslvr_l = LUSolver() +sysslvr_nl = NLSolver(sysslvr_l, show_trace=false, method=:newton, iterations=10) + +# Testing function +function test_transient_heat_scalar(odeslvr, tfeop, uhs0) + fesltn = solve(odeslvr, tfeop, t0, tF, uhs0) + + for (t_n, uh_n) in fesltn + eh_n = u(t_n) - uh_n + e_n = sqrt(sum(∫(eh_n ⋅ eh_n) * dΩ)) + @test e_n < tol + end +end + +# Do not try explicit solvers +odeslvrs = ( + ThetaMethod(sysslvr_nl, dt, 0.2), + MidPoint(sysslvr_nl, dt), + ThetaMethod(sysslvr_nl, dt, 0.8), + BackwardEuler(sysslvr_nl, dt), + RungeKutta(sysslvr_nl, sysslvr_l, dt, :SDIRK_Euler_1_1), + RungeKutta(sysslvr_nl, sysslvr_l, dt, :SDIRK_Midpoint_1_2), + RungeKutta(sysslvr_nl, sysslvr_l, dt, :DIRK_CrankNicolson_2_2), + RungeKutta(sysslvr_nl, sysslvr_l, dt, :SDIRK_QinZhang_2_2), + GeneralizedAlpha1(sysslvr_nl, dt, 0.0), + GeneralizedAlpha1(sysslvr_nl, dt, 0.5), + GeneralizedAlpha1(sysslvr_nl, dt, 1.0), +) + +uhs0 = (uh0,) +for odeslvr in odeslvrs + for tfeop in tfeops + test_transient_heat_scalar(odeslvr, tfeop, uhs0) + end +end + +# Solvers for IMEX decompositions +tfeops = ( + tfeop_imex_man, + tfeop_imex_ad, +) + +odeslvrs = ( + RungeKutta(sysslvr_nl, sysslvr_l, dt, :IMEXRK_1_2_2), +) + +uhs0 = (uh0,) +for odeslvr in odeslvrs + for tfeop in tfeops + test_transient_heat_scalar(odeslvr, tfeop, uhs0) + end +end + +end # module Order1FETests diff --git a/test/ODEsTests/TransientFEProblemsTests/HeatEquationVectorTests.jl b/test/ODEsTests/TransientFEProblemsTests/HeatEquationVectorTests.jl new file mode 100644 index 000000000..22f88a6bf --- /dev/null +++ b/test/ODEsTests/TransientFEProblemsTests/HeatEquationVectorTests.jl @@ -0,0 +1,103 @@ +module HeatEquationVectorTests + +using Test + +using LinearAlgebra + +using Gridap +using Gridap.Algebra +using Gridap.FESpaces +using Gridap.ODEs + +# Analytical functions +u(x, t) = VectorValue(x[1] * (1 - x[2]), (1 - x[1]) * x[2]) * (1 + t) +∂tu(x, t) = ∂t(u)(x, t) + +u(t::Real) = x -> u(x, t) +∂tu(t::Real) = x -> ∂tu(x, t) + +# Geometry +domain = (0, 1, 0, 1) +partition = (5, 5) +model = CartesianDiscreteModel(domain, partition) + +# FE spaces +order = 2 +reffe = ReferenceFE(lagrangian, VectorValue{2,Float64}, order) +V = FESpace(model, reffe, conformity=:H1, dirichlet_tags="boundary") +U = TransientTrialFESpace(V, u) + +# Integration +Ω = Triangulation(model) +degree = 2 * order +dΩ = Measure(Ω, degree) + +# FE operator +f(t) = x -> ∂t(u)(x, t) - Δ(u(t))(x) + +mass(t, ∂ₜu, v) = ∫(∂ₜu ⋅ v) * dΩ +mass(t, u, ∂ₜu, v) = mass(t, ∂ₜu, v) +stiffness(t, u, v) = ∫(∇(u) ⊙ ∇(v)) * dΩ +forcing(t, v) = ∫(f(t) ⋅ v) * dΩ + +res(t, u, v) = mass(t, u, ∂t(u), v) + stiffness(t, u, v) - forcing(t, v) +jac(t, u, du, v) = stiffness(t, du, v) +jac_t(t, u, dut, v) = mass(t, u, dut, v) + +res_ql(t, u, v) = stiffness(t, u, v) - forcing(t, v) +res_l(t, v) = (-1) * forcing(t, 
v) + +args_man = ((jac, jac_t), U, V) +tfeop_nl_man = TransientFEOperator(res, args_man...) +tfeop_ql_man = TransientQuasilinearFEOperator(mass, res_ql, args_man...) +tfeop_sl_man = TransientSemilinearFEOperator(mass, res_ql, args_man...) +tfeop_l_man = TransientLinearFEOperator((stiffness, mass), res_l, args_man...) + +args_ad = (U, V) +tfeop_nl_ad = TransientFEOperator(res, args_ad...) +tfeop_ql_ad = TransientQuasilinearFEOperator(mass, res_ql, args_ad...) +tfeop_sl_ad = TransientSemilinearFEOperator(mass, res_ql, args_ad...) +tfeop_l_ad = TransientLinearFEOperator((stiffness, mass), res_l, args_ad...) + +tfeops = ( + tfeop_nl_man, + tfeop_ql_man, + tfeop_sl_man, + tfeop_l_man, + tfeop_nl_ad, + tfeop_ql_ad, + tfeop_sl_ad, + tfeop_l_ad, +) + +# Initial conditions +t0 = 0.0 +tF = 1.0 +dt = 0.1 + +U0 = U(t0) +uh0 = interpolate_everywhere(u(t0), U0) +uhs0 = (uh0,) + +# ODE Solver +tol = 1.0e-6 +sysslvr_l = LUSolver() +sysslvr_nl = NLSolver(sysslvr_l, show_trace=false, method=:newton, iterations=10) +odeslvrs = ( + ThetaMethod(sysslvr_nl, dt, 0.5), +) + +# Tests +for odeslvr in odeslvrs + for tfeop in tfeops + fesltn = solve(odeslvr, tfeop, t0, tF, uhs0) + + for (t_n, uh_n) in fesltn + eh_n = u(t_n) - uh_n + e_n = sqrt(sum(∫(eh_n ⋅ eh_n) * dΩ)) + @test e_n < tol + end + end +end + +end # module HeatEquationVectorTests diff --git a/test/ODEsTests/TransientFEProblemsTests/SecondOrderEquationTests.jl b/test/ODEsTests/TransientFEProblemsTests/SecondOrderEquationTests.jl new file mode 100644 index 000000000..c5213610e --- /dev/null +++ b/test/ODEsTests/TransientFEProblemsTests/SecondOrderEquationTests.jl @@ -0,0 +1,128 @@ +module Order2FETests + +using Test + +using LinearAlgebra + +using Gridap +using Gridap.Algebra +using Gridap.FESpaces +using Gridap.ODEs + +# Analytical functions +u(x, t) = (1 - x[1]) * x[2] * (t^2 + 3.0) +∂tu(x, t) = ∂t(u)(x, t) +∂ttu(x, t) = ∂tt(u)(x, t) + +u(t::Real) = x -> u(x, t) +∂tu(t::Real) = x -> ∂tu(x, t) +∂ttu(t::Real) = x -> ∂ttu(x, t) + +# Geometry +domain = (0, 1, 0, 1) +partition = (5, 5,) +model = CartesianDiscreteModel(domain, partition) + +# FE spaces +order = 2 +reffe = ReferenceFE(lagrangian, Float64, order) +V = FESpace(model, reffe, conformity=:H1, dirichlet_tags="boundary") +U = TransientTrialFESpace(V, u) + +# Integration +Ω = Triangulation(model) +degree = 2 * order +dΩ = Measure(Ω, degree) + +# FE operator +order = 2 +f(t) = x -> ∂tt(u)(x, t) + ∂t(u)(x, t) - Δ(u(t))(x) + +mass(t, ∂ₜₜu, v) = ∫(∂ₜₜu ⋅ v) * dΩ +mass(t, u, ∂ₜₜu, v) = mass(t, ∂ₜₜu, v) +damping(t, ∂ₜu, v) = ∫(∂ₜu ⋅ v) * dΩ +stiffness(t, u, v) = ∫(∇(u) ⊙ ∇(v)) * dΩ +forcing(t, v) = ∫(f(t) ⋅ v) * dΩ + +res(t, u, v) = mass(t, u, ∂tt(u), v) + damping(t, ∂t(u), v) + stiffness(t, u, v) - forcing(t, v) +jac(t, u, du, v) = stiffness(t, du, v) +jac_t(t, u, dut, v) = damping(t, dut, v) +jac_tt(t, u, dutt, v) = mass(t, dutt, v) + +res_ql(t, u, v) = damping(t, ∂t(u), v) + stiffness(t, u, v) - forcing(t, v) +res_l(t, v) = (-1) * forcing(t, v) + +res0(t, u, v) = ∫(0 * u * v) * dΩ +jac0(t, u, du, v) = ∫(0 * du * v) * dΩ + +args_man = ((jac, jac_t, jac_tt), U, V) +args0_man = ((jac0, jac0), U, V) +tfeop_nl_man = TransientFEOperator(res, args_man...) +tfeop_ql_man = TransientQuasilinearFEOperator(mass, res_ql, args_man...) +tfeop_sl_man = TransientSemilinearFEOperator(mass, res_ql, args_man...) +tfeop_l_man = TransientLinearFEOperator((stiffness, damping, mass), res_l, args_man...) + +tfeop_im_man = TransientSemilinearFEOperator(mass, res_ql, args_man...) 
+tfeop_ex_man = TransientFEOperator(res0, args0_man...) +tfeop_imex_man = TransientIMEXFEOperator(tfeop_im_man, tfeop_ex_man) + +args_ad = (U, V) +tfeop_nl_ad = TransientFEOperator(res, args_ad..., order=2) +tfeop_ql_ad = TransientQuasilinearFEOperator(mass, res_ql, args_ad..., order=2) +tfeop_sl_ad = TransientSemilinearFEOperator(mass, res_ql, args_ad..., order=2) +tfeop_l_ad = TransientLinearFEOperator((stiffness, damping, mass), res_l, args_ad...) + +tfeop_im_ad = TransientSemilinearFEOperator(mass, res_ql, args_ad..., order=2) +tfeop_ex_ad = TransientFEOperator(res0, args_ad..., order=1) +tfeop_imex_ad = TransientIMEXFEOperator(tfeop_im_ad, tfeop_ex_ad) + +tfeops = ( + tfeop_nl_man, + tfeop_ql_man, + tfeop_sl_man, + tfeop_l_man, + tfeop_imex_man, + tfeop_nl_ad, + tfeop_ql_ad, + tfeop_sl_ad, + tfeop_l_ad, + tfeop_imex_ad, +) + +# Initial conditions +t0 = 0.0 +tF = 1.0 +dt = 0.1 + +U0 = U(t0) +uh0 = interpolate_everywhere(u(t0), U0) +∂tuh0 = interpolate_everywhere(∂tu(t0), U0) +∂ttuh0 = interpolate_everywhere(∂ttu(t0), U0) + +tol = 1.0e-6 +sysslvr_l = LUSolver() +sysslvr_nl = NLSolver(sysslvr_l, show_trace=false, method=:newton, iterations=10) + +# Testing function +function test_tfeop_order2(odeslvr, tfeop, uhs0) + fesltn = solve(odeslvr, tfeop, t0, tF, uhs0) + + for (t_n, uh_n) in fesltn + eh_n = u(t_n) - uh_n + e_n = sqrt(sum(∫(eh_n ⋅ eh_n) * dΩ)) + @test e_n < tol + end +end + +odeslvrs = ( + Newmark(sysslvr_nl, dt, 0.5, 0.25), +) + +uhs0 = (uh0, ∂tuh0) +for odeslvr in odeslvrs + for tfeop in tfeops + test_tfeop_order2(odeslvr, tfeop, uhs0) + end +end + +end # module Order2FETests diff --git a/test/ODEsTests/TransientFEProblemsTests/StokesEquationTests.jl b/test/ODEsTests/TransientFEProblemsTests/StokesEquationTests.jl new file mode 100644 index 000000000..c9f9702a4 --- /dev/null +++ b/test/ODEsTests/TransientFEProblemsTests/StokesEquationTests.jl @@ -0,0 +1,99 @@ +module StokesEquationTests + +using Test + +using LinearAlgebra + +using Gridap +using Gridap.Algebra +using Gridap.FESpaces +using Gridap.ODEs + +# Analytical functions +u(x, t) = VectorValue(x[1], x[2]) * t +u(t::Real) = x -> u(x, t) + +p(x, t) = (x[1] - x[2]) * t +p(t::Real) = x -> p(x, t) +q(x) = t -> p(x, t) + +# Geometry +domain = (0, 1, 0, 1) +partition = (5, 5) +model = CartesianDiscreteModel(domain, partition) + +# FE spaces +order = 2 +reffe_u = ReferenceFE(lagrangian, VectorValue{2,Float64}, order) +V = FESpace(model, reffe_u, conformity=:H1, dirichlet_tags="boundary") +U = TransientTrialFESpace(V, u) + +reffe_p = ReferenceFE(lagrangian, Float64, order - 1) +Q = FESpace(model, reffe_p, conformity=:H1, constraint=:zeromean) +P = TrialFESpace(Q) + +X = TransientMultiFieldFESpace([U, P]) +Y = MultiFieldFESpace([V, Q]) + +# Integration +Ω = Triangulation(model) +degree = 2 * order +dΩ = Measure(Ω, degree) + +# FE operator +f(t) = x -> ∂t(u)(t)(x) - Δ(u(t))(x) + ∇(p(t))(x) +g(t) = x -> (∇ ⋅ u(t))(x) +mass(t, ∂ₜu, v) = ∫(∂ₜu ⋅ v) * dΩ +stiffness(t, u, v) = ∫(∇(u) ⊙ ∇(v)) * dΩ +forcing(t, (v, q)) = ∫(f(t) ⋅ v) * dΩ + ∫(g(t) * q) * dΩ + +res(t, (u, p), (v, q)) = mass(t, ∂t(u), v) + stiffness(t, u, v) - forcing(t, (v, q)) - ∫(p * (∇ ⋅ v)) * dΩ + ∫((∇ ⋅ u) * q) * dΩ +jac(t, (u, p), (du, dp), (v, q)) = stiffness(t, du, v) - ∫(dp * (∇ ⋅ v)) * dΩ + ∫((∇ ⋅ du) * q) * dΩ +jac_t(t, (u, p), (dut, dpt), (v, q)) = mass(t, dut, v) + +tfeop_nl_man = TransientFEOperator(res, (jac, jac_t), X, Y) +tfeop_nl_ad = TransientFEOperator(res, X, Y) +tfeops = ( + tfeop_nl_man, + tfeop_nl_ad, +) + +# Initial conditions +t0 = 0.0 +tF = 1.0 +dt = 
0.1 + +U0 = U(t0) +uh0 = interpolate_everywhere(u(t0), U0) +P0 = P(t0) +ph0 = interpolate_everywhere(p(t0), P0) +X0 = X(t0) +xh0 = interpolate_everywhere([uh0, ph0], X0) +xhs0 = (xh0,) + +# ODE Solver +tol = 1.0e-6 +sysslvr_l = LUSolver() +sysslvr_nl = NLSolver(sysslvr_l, show_trace=false, method=:newton, iterations=10) +odeslvrs = ( + ThetaMethod(sysslvr_nl, dt, 0.5), +) + +# Tests +for odeslvr in odeslvrs + for tfeop in tfeops + fesltn = solve(odeslvr, tfeop, t0, tF, xhs0) + + for (t_n, xhs_n) in fesltn + eh_n = u(t_n) - xhs_n[1] + e_n = sqrt(sum(∫(eh_n ⋅ eh_n) * dΩ)) + @test e_n < tol + + eh_n = p(t_n) - xhs_n[2] + e_n = sqrt(sum(∫(eh_n ⋅ eh_n) * dΩ)) + @test e_n < tol + end + end +end + +end # module StokesEquationTests diff --git a/test/ODEsTests/TransientFESpacesTests.jl b/test/ODEsTests/TransientFESpacesTests.jl new file mode 100644 index 000000000..a2ee48d3b --- /dev/null +++ b/test/ODEsTests/TransientFESpacesTests.jl @@ -0,0 +1,82 @@ +module TransientFESpacesTests + +using Test + +using Gridap +using Gridap.Fields +using Gridap.ODEs + +u1(x, t) = (x[1] + x[2]) * t +u1(t::Real) = x -> u1(x, t) + +∂tu1(t) = x -> x[1] + x[2] +ODEs.∂t(::typeof(u1)) = ∂tu1 + +∂ttu1(t) = x -> zero(x[1]) +ODEs.∂tt(::typeof(u1)) = ∂ttu1 + +u2(x, t) = x[1] * t^2 + x[2] * t +u2(t::Real) = x -> u2(x, t) + +∂tu2(t) = x -> 2 * t * x[1] + x[2] +ODEs.∂t(::typeof(u2)) = ∂tu2 + +∂ttu2(t) = x -> 2 * x[1] +ODEs.∂tt(::typeof(u2)) = ∂ttu2 + +domain = (0, 1, 0, 1) +partition = (5, 5) +model = CartesianDiscreteModel(domain, partition) + +order = 1 +reffe = ReferenceFE(lagrangian, Float64, order) +V1 = TestFESpace(model, reffe, conformity=:H1, dirichlet_tags="boundary") +U1 = TransientTrialFESpace(V1, u1) +@test test_tfe_space(U1) + +V2 = TestFESpace(model, reffe, conformity=:H1, dirichlet_tags=["tag_1", "tag_2"]) +U2 = TransientTrialFESpace(V2, [u1, u2]) +@test test_tfe_space(U2) + +ts = randn(5) +for (U, V, us) in ((U1, V1, [u1]), (U2, V2, [u1, u2])) + Ut = ∂t(U) + Utt = ∂tt(U) + + for t in ts + # Dirichlet values of U + ust = [ui(t) for ui in us] + ust = (length(ust) == 1) ? ust[1] : ust + _U0 = TrialFESpace(V, ust) + U0 = U(t) + + _ud0 = get_dirichlet_dof_values(_U0) + ud0 = get_dirichlet_dof_values(U0) + + @test all(ud0 .≈ _ud0) + + # Dirichlet values of ∂t(U) + ∂tust = [∂t(ui)(t) for ui in us] + ∂tust = (length(∂tust) == 1) ? ∂tust[1] : ∂tust + _Ut0 = TrialFESpace(V, ∂tust) + Ut0 = Ut(t) + + _utd0 = get_dirichlet_dof_values(_Ut0) + utd0 = get_dirichlet_dof_values(Ut0) + + @test all(utd0 .≈ _utd0) + + # Dirichlet values of ∂tt(U) + ∂ttust = [∂tt(ui)(t) for ui in us] + ∂ttust = (length(∂ttust) == 1) ? 
∂ttust[1] : ∂ttust + _Utt0 = TrialFESpace(V, ∂ttust) + Utt0 = Utt(t) + + _uttd0 = get_dirichlet_dof_values(_Utt0) + uttd0 = get_dirichlet_dof_values(Utt0) + + @test all(uttd0 .≈ _uttd0) + end +end + +end # module TransientFESpacesTests diff --git a/test/ODEsTests/TransientFEsTests/AffineFEOperatorsTests.jl b/test/ODEsTests/TransientFEsTests/AffineFEOperatorsTests.jl deleted file mode 100644 index 4dc4e72dd..000000000 --- a/test/ODEsTests/TransientFEsTests/AffineFEOperatorsTests.jl +++ /dev/null @@ -1,68 +0,0 @@ -module AffineFEOperatorsTests - -using Gridap -using ForwardDiff -using LinearAlgebra -using Test -using Gridap.FESpaces: get_algebraic_operator -using LineSearches: BackTracking - -θ = 1.0 - -u(x,t) = (1.0-x[1])*x[1]*(1.0-x[2])*x[2]*t -u(t::Real) = x -> u(x,t) -∂tu = ∂t(u) - -f(t) = x -> ∂t(u)(x,t)-Δ(u(t))(x) # or ∂t(u)(t)(x)-Δ(u(t))(x) - -domain = (0,1,0,1) -partition = (2,2) -model = CartesianDiscreteModel(domain,partition) - -order = 2 - -reffe = ReferenceFE(lagrangian,Float64,order) -V0 = FESpace( - model, - reffe, - conformity=:H1, - dirichlet_tags="boundary" -) -U = TransientTrialFESpace(V0,u) - -Ω = Triangulation(model) -degree = 2*order -dΩ = Measure(Ω,degree) - -# -a(t,u,v) = ∫(∇(v)⋅∇(u))dΩ -b(t,v) = ∫(v*f(t))dΩ -m(t,ut,v) = ∫(ut*v)dΩ - -op = TransientAffineFEOperator(m,a,b,U,V0) - -t0 = 0.0 -tF = 1.0 -dt = 0.1 - -U0 = U(0.0) -uh0 = interpolate_everywhere(u(0.0),U0) - -ls = LUSolver() -ode_solver = ThetaMethod(ls,dt,θ) -sol_t = solve(ode_solver,op,uh0,t0,tF) - -l2(w) = w*w - -tol = 1.0e-6 -_t_n = t0 - -for (uh_tn, tn) in sol_t - global _t_n - _t_n += dt - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -end #module diff --git a/test/ODEsTests/TransientFEsTests/BoundaryHeatEquationTests.jl b/test/ODEsTests/TransientFEsTests/BoundaryHeatEquationTests.jl deleted file mode 100644 index 0754767bd..000000000 --- a/test/ODEsTests/TransientFEsTests/BoundaryHeatEquationTests.jl +++ /dev/null @@ -1,90 +0,0 @@ -module BoundaryHeatEquationTests - -using Gridap -using ForwardDiff -using LinearAlgebra -using Test -using Gridap.FESpaces: get_algebraic_operator - -import Gridap: ∇ -import Gridap.ODEs.TransientFETools: ∂t - -θ = 0.2 - -# Analytical functions -# u(x,t) = (x[1]+x[2])*t -# u(x,t) = (2*x[1]+x[2])*t -u(x,t) = (1.0-x[1])*x[1]*(1.0-x[2])*x[2]*t -u(t::Real) = x -> u(x,t) -v(x) = t -> u(x,t) -∂tu(t) = x -> ForwardDiff.derivative(v(x),t) -∂tu(x,t) = ∂tu(t)(x) -∂t(::typeof(u)) = ∂tu -f(t) = x -> ∂t(u)(x,t)-Δ(u(t))(x) - -domain = (0,1,0,1) -partition = (2,2) -model = CartesianDiscreteModel(domain,partition) - -order = 2 - -reffe = ReferenceFE(lagrangian,Float64,order) -V0 = FESpace( - model, - reffe, - conformity=:H1, - dirichlet_tags=[1,2,3,4,5,6] -) -U = TransientTrialFESpace(V0,u) - -Ω = Triangulation(model) -degree = 2*order -dΩ = Measure(Ω,degree) - -neumanntags = [7,8] -Γ = BoundaryTriangulation(model,tags=neumanntags) -dΓ = Measure(Γ,degree) -nb = get_normal_vector(Γ) - -# -a(u,v) = ∫(∇(v)⋅∇(u))dΩ -b(v,t) = ∫(v*f(t))dΩ -m(ut,v) = ∫(ut*v)dΩ -b_Γ(v,t) = ∫(v*(∇(u(t))⋅nb))dΓ - -res(t,u,v) = a(u,v) + m(∂t(u),v) - b(v,t) - b_Γ(v,t) -jac(t,u,du,v) = a(du,v) -jac_t(t,u,dut,v) = m(dut,v) - -op = TransientFEOperator(res,jac,jac_t,U,V0) - -t0 = 0.0 -tF = 1.0 -dt = 0.1 - -U0 = U(0.0) -uh0 = interpolate_everywhere(u(0.0),U0) - -ls = LUSolver() -using Gridap.Algebra: NewtonRaphsonSolver -# nls = NLSolver(ls;show_trace=true,method=:newton) #linesearch=BackTracking()) -ode_solver = ThetaMethod(ls,dt,θ) - -sol_t = solve(ode_solver,op,uh0,t0,tF) - -# Juno.@enter 
Base.iterate(sol_t) - -l2(w) = w*w - -tol = 1.0e-6 -_t_n = t0 - -for (uh_tn, tn) in sol_t - global _t_n - _t_n += dt - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -end #module diff --git a/test/ODEsTests/TransientFEsTests/ConstantFEOperatorsTests.jl b/test/ODEsTests/TransientFEsTests/ConstantFEOperatorsTests.jl deleted file mode 100644 index d5aade45b..000000000 --- a/test/ODEsTests/TransientFEsTests/ConstantFEOperatorsTests.jl +++ /dev/null @@ -1,68 +0,0 @@ -module ConstantFEOperatorsTests - -using Gridap -using ForwardDiff -using LinearAlgebra -using Test -using Gridap.FESpaces: get_algebraic_operator -using LineSearches: BackTracking - -θ = 1.0 - -u(x,t) = (1.0-x[1])*x[1]*(1.0-x[2])*x[2] -u(t::Real) = x -> u(x,t) -∂tu = ∂t(u) - -f(t) = x -> ∂t(u)(x,t)-Δ(u(t))(x) - -domain = (0,1,0,1) -partition = (2,2) -model = CartesianDiscreteModel(domain,partition) - -order = 2 - -reffe = ReferenceFE(lagrangian,Float64,order) -V0 = FESpace( - model, - reffe, - conformity=:H1, - dirichlet_tags="boundary" -) -U = TransientTrialFESpace(V0,u) - -Ω = Triangulation(model) -degree = 2*order -dΩ = Measure(Ω,degree) - -# -a(u,v) = ∫(∇(v)⋅∇(u))dΩ -b(v) = ∫(v*f(0.0))dΩ -m(ut,v) = ∫(ut*v)dΩ - -op = TransientConstantFEOperator(m,a,b,U,V0) - -t0 = 0.0 -tF = 1.0 -dt = 0.1 - -U0 = U(0.0) -uh0 = interpolate_everywhere(u(0.0),U0) - -ls = LUSolver() -ode_solver = ThetaMethod(ls,dt,θ) -sol_t = solve(ode_solver,op,uh0,t0,tF) - -l2(w) = w*w - -tol = 1.0e-6 -_t_n = t0 - -for (uh_tn, tn) in sol_t - global _t_n - _t_n += dt - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -end #module diff --git a/test/ODEsTests/TransientFEsTests/DGHeatEquationTests.jl b/test/ODEsTests/TransientFEsTests/DGHeatEquationTests.jl deleted file mode 100644 index 9bdb1869e..000000000 --- a/test/ODEsTests/TransientFEsTests/DGHeatEquationTests.jl +++ /dev/null @@ -1,87 +0,0 @@ -module DGHeatEquationTests - -using Gridap -using ForwardDiff -using LinearAlgebra -using Test -using Gridap.FESpaces: get_algebraic_operator - -θ = 0.2 - -u(x,t) = (1.0-x[1])*x[1]*(1.0-x[2])*x[2]*t -u(t::Real) = x -> u(x,t) -f(t) = x -> ∂t(u)(x,t)-Δ(u(t))(x) - -L= 1.0 -n = 2 -domain = (0,L,0,L) -partition = (n,n) -model = CartesianDiscreteModel(domain,partition) - -order = 2 - -reffe = ReferenceFE(lagrangian,Float64,order) -V0 = FESpace( - model, - reffe, - conformity=:L2 -) -U = TransientTrialFESpace(V0) - -Ω = Triangulation(model) -degree = 2*order -dΩ = Measure(Ω,degree) - -Γ = BoundaryTriangulation(model) -dΓ = Measure(Γ,degree) -nb = get_normal_vector(Γ) - -Λ = SkeletonTriangulation(model) -dΛ = Measure(Λ,degree) -ns = get_normal_vector(Λ) - -a(u,v) = ∫(∇(v)⋅∇(u))dΩ -b(v,t) = ∫(v*f(t))dΩ -m(u,v) = ∫(v*u)dΩ - -h = 1.0 / n -γ = order*(order+1) -a_Γ(u,v) = ∫( (γ/h)*v*u - v*(∇(u)⋅nb) - (∇(v)⋅nb)*u )dΓ -b_Γ(v,t) = ∫( (γ/h)*v*u(t) - (∇(v)⋅nb)*u(t) )dΓ - -a_Λ(u,v) = ∫( (γ/h)*jump(v*ns)⊙jump(u*ns) - jump(v*ns)⊙mean(∇(u)) - mean(∇(v))⊙jump(u*ns) )dΛ - - -res(t,u,v) = a(u,v) + m(∂t(u),v) + a_Γ(u,v) + a_Λ(u,v) - b(v,t) - b_Γ(v,t) -jac(t,u,du,v) = a(du,v) + a_Γ(du,v) + a_Λ(du,v) -jac_t(t,u,dut,v) = m(dut,v) - -op = TransientFEOperator(res,jac,jac_t,U,V0) - -t0 = 0.0 -tF = 1.0 -dt = 0.1 - -U0 = U(0.0) -uh0 = interpolate_everywhere(u(0.0),U0) - -ls = LUSolver() -using Gridap.Algebra: NewtonRaphsonSolver -ode_solver = ThetaMethod(ls,dt,θ) - -sol_t = solve(ode_solver,op,uh0,t0,tF) - -l2(w) = w*w - -tol = 1.0e-6 -_t_n = t0 - -for (uh_tn, tn) in sol_t - global _t_n - _t_n += dt - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - 
@test el2 < tol -end - -end #module diff --git a/test/ODEsTests/TransientFEsTests/ForwardEulerHeatEquationTests.jl b/test/ODEsTests/TransientFEsTests/ForwardEulerHeatEquationTests.jl deleted file mode 100644 index d8997b977..000000000 --- a/test/ODEsTests/TransientFEsTests/ForwardEulerHeatEquationTests.jl +++ /dev/null @@ -1,78 +0,0 @@ -module ForwardEulerHeatEquationTests - -using Gridap -using ForwardDiff -using LinearAlgebra -using Test -using Gridap.FESpaces: get_algebraic_operator - -import Gridap.ODEs.TransientFETools: ∂t - -θ = 0.0 - -# Analytical functions -# u(x,t) = (x[1]+x[2])*t -# u(x,t) = (2*x[1]+x[2])*t -u(x,t) = (1.0-x[1])*x[1]*(1.0-x[2])*x[2]*t -u(t::Real) = x -> u(x,t) -v(x) = t -> u(x,t) -∂tu(t) = x -> ForwardDiff.derivative(v(x),t) -∂tu(x,t) = ∂tu(t)(x) -∂t(::typeof(u)) = ∂tu -f(t) = x -> ∂t(u)(x,t)-Δ(u(t))(x) - -domain = (0,1,0,1) -partition = (2,2) -model = CartesianDiscreteModel(domain,partition) - -order = 2 - -reffe = ReferenceFE(lagrangian,Float64,order) -V0 = FESpace( - model, - reffe, - conformity=:H1, - dirichlet_tags="boundary" -) -U = TransientTrialFESpace(V0,u) - -Ω = Triangulation(model) -degree = 2*order -dΩ = Measure(Ω,degree) - -# -a(u,v) = ∇(v)⋅∇(u) -b(v,t) = v*f(t) - -res(t,u,v) = ∫( a(u,v) + ∂t(u)*v - b(v,t) )dΩ -jac(t,u,du,v) = ∫( a(du,v) )dΩ -jac_t(t,u,dut,v) = ∫( dut*v )dΩ - -op = TransientFEOperator(res,jac,jac_t,U,V0) - -t0 = 0.0 -tF = 1.0 -dt = 0.1 - -U0 = U(0.0) -uh0 = interpolate_everywhere(u(0.0),U0) - -ls = LUSolver() -ode_solver = ThetaMethod(ls,dt,θ) - -sol_t = solve(ode_solver,op,uh0,t0,tF) - -l2(w) = w*w - -tol = 1.0e-4 -_t_n = t0 - -for (uh_tn, tn) in sol_t - global _t_n - _t_n += dt - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -end #module diff --git a/test/ODEsTests/TransientFEsTests/FreeSurfacePotentialFlowTests.jl b/test/ODEsTests/TransientFEsTests/FreeSurfacePotentialFlowTests.jl deleted file mode 100644 index efc7fbf6c..000000000 --- a/test/ODEsTests/TransientFEsTests/FreeSurfacePotentialFlowTests.jl +++ /dev/null @@ -1,109 +0,0 @@ -module FreeSurfacePotentialFlowTests - -using Gridap -using Gridap.Geometry -using Test - -# Parameters -L = 2*π -H = 1.0 -n = 8 -order = 2 -g = 9.81 -ξ = 0.1 -λ = L/2 -k = 2*π/L -h = L/n -ω = √(g*k*tanh(k*H)) -t₀ = 0.0 -tf = 2*π -Δt = h/(2*λ*ω) -θ = 0.5 - -# Exact solution -ϕₑ(x,t) = ω/k * ξ * (cosh(k*(x[2]))) / sinh(k*H) * sin(k*x[1] - ω*t) -ηₑ(x,t) = ξ * cos(k*x[1] - ω*t) -ϕₑ(t::Real) = x -> ϕₑ(x,t) -ηₑ(t::Real) = x -> ηₑ(x,t) - -# Domain -domain = (0,L,0,H) -partition = (n,n) -model = CartesianDiscreteModel(domain,partition;isperiodic=(true,false)) - -# Boundaries -labels = get_face_labeling(model) -add_tag_from_tags!(labels,"bottom",[1,2,5]) -add_tag_from_tags!(labels,"free_surface",[3,4,6]) - -# Triangulation -Ω = Interior(model) -Γ = Boundary(model,tags="free_surface") -dΩ = Measure(Ω,2*order) -dΓ = Measure(Γ,2*order) - -# FE spaces -reffe = ReferenceFE(lagrangian,Float64,order) -V = TestFESpace(Ω,reffe,conformity=:H1) -V_Γ = TestFESpace(Γ,reffe,conformity=:H1) -U = TransientTrialFESpace(V) -U_Γ = TransientTrialFESpace(V_Γ) -X = TransientMultiFieldFESpace([U,U_Γ]) -Y = MultiFieldFESpace([V,V_Γ]) - -# Weak form -α = 2/Δt - -# Optimal transient FE Operator: -m((ϕt,ηt),(w,v)) = ∫( 0.5*(α/g*(w*ϕt) + v*ϕt) - (w*ηt) )dΓ -a((ϕ,η),(w,v)) = ∫( ∇(ϕ)⋅∇(w) )dΩ + ∫( 0.5*(α*(w*η) + g*v*η) )dΓ -b((w,v)) = ∫( 0.0*w )dΓ -op_const = TransientConstantFEOperator(m,a,b,X,Y) - -# TransientFEOperator exploiting automatic differentiation (testing purposes) -res(t,x,y) = m(∂t(x),y) + 
a(x,y) - b(y) -jac(t,x,dx,y) = a(dx,y) -jac_t(t,x,dxt,y) = m(dxt,y) -op_trans = TransientFEOperator(res,jac,jac_t,X,Y) -op_ad = TransientFEOperator(res,X,Y) - -# TransientFEOperator exploiting time derivative of separate fields (TransientMultiFieldCellField) -res2(t,(ϕ,η),y) = m((∂t(ϕ),∂t(η)),y) + a((ϕ,η),y) - b(y) -op_multifield = TransientFEOperator(res2,jac,jac_t,X,Y) - - -# Solver -ls = LUSolver() -ode_solver = ThetaMethod(ls,Δt,θ) - -# Initial solution -x₀ = interpolate_everywhere([ϕₑ(0.0),ηₑ(0.0)],X(0.0)) - -function test_operator(op) - # Solution - sol_t = solve(ode_solver,op,x₀,t₀,tf) - - # Post-process - l2_Ω(w) = √(∑(∫(w*w)dΩ)) - l2_Γ(v) = √(∑(∫(v*v)dΓ)) - E_kin(w) = 0.5*∑( ∫(∇(w)⋅∇(w))dΩ ) - E_pot(v) = g*0.5*∑( ∫(v*v)dΓ ) - Eₑ = 0.5*g*ξ^2*L - - tol = 1.0e-2 - for ((ϕn,ηn),tn) in sol_t - E = E_kin(ϕn) + E_pot(ηn) - error_ϕ = l2_Ω(ϕn-ϕₑ(tn)) - error_η = l2_Γ(ηn-ηₑ(tn)) - @test abs(E/Eₑ-1.0) <= tol - @test error_ϕ <= tol - @test error_η <= tol - end -end - -test_operator(op_const) -test_operator(op_trans) -test_operator(op_multifield) -# test_operator(op_ad) # Not working yet - -end diff --git a/test/ODEsTests/TransientFEsTests/HeatEquationAutoDiffTests.jl b/test/ODEsTests/TransientFEsTests/HeatEquationAutoDiffTests.jl deleted file mode 100644 index 2d45a9b1a..000000000 --- a/test/ODEsTests/TransientFEsTests/HeatEquationAutoDiffTests.jl +++ /dev/null @@ -1,91 +0,0 @@ -module HeatEquationAutoDiffTests - -using Gridap -using ForwardDiff -using LinearAlgebra -using Test -using Gridap.ODEs.ODETools -using Gridap.ODEs.TransientFETools -using Gridap.FESpaces: get_algebraic_operator -using Gridap.Arrays: test_array - -θ = 0.2 - -u(x,t) = (1.0-x[1])*x[1]*(1.0-x[2])*x[2]*t -u(t::Real) = x -> u(x,t) -f(t) = x -> ∂t(u)(x,t)-Δ(u(t))(x) - -domain = (0,1,0,1) -partition = (2,2) -model = CartesianDiscreteModel(domain,partition) - -order = 2 - -reffe = ReferenceFE(lagrangian,Float64,order) -V0 = FESpace( - model, - reffe, - conformity=:H1, - dirichlet_tags="boundary" -) -U = TransientTrialFESpace(V0,u) - -Ω = Triangulation(model) -degree = 2*order -dΩ = Measure(Ω,degree) - -# -a(u,v) = ∫(∇(v)⋅∇(u))dΩ -b(v,t) = ∫(v*f(t))dΩ - -res(t,u,v) = a(u,v) + ∫(∂t(u)*v)dΩ - b(v,t) -jac(t,u,du,v) = a(du,v) -jac_t(t,u,dut,v) = ∫(dut*v)dΩ - -U₀ = evaluate(U,nothing) -dv = get_fe_basis(V0) -du = get_trial_fe_basis(U₀) -uh = FEFunction(U₀,rand(num_free_dofs(U₀))) -uh_t = TransientCellField(uh,(uh,)) - -cell_j = get_array(jac(0.5,uh_t,du,dv)) -cell_j_t = get_array(jac_t(0.5,uh_t,du,dv)) - -cell_j_auto = get_array(jacobian(x->res(0.5,TransientCellField(x,(uh,)),dv),uh)) -cell_j_t_auto = get_array(jacobian(x->res(0.5,TransientCellField(uh,(x,)),dv),uh)) - -test_array(cell_j_auto,cell_j,≈) -test_array(cell_j_t_auto,cell_j_t,≈) - -op = TransientFEOperator(res,U,V0) - -t0 = 0.0 -tF = 1.0 -dt = 0.1 - -U0 = U(0.0) -uh0 = interpolate_everywhere(u(0.0),U0) - -ls = LUSolver() -using Gridap.Algebra: NewtonRaphsonSolver -nls = NLSolver(ls;show_trace=false,method=:newton) #linesearch=BackTracking()) -ode_solver = ThetaMethod(ls,dt,θ) - -sol_t = solve(ode_solver,op,uh0,t0,tF) - -# Juno.@enter Base.iterate(sol_t) - -l2(w) = w*w - -tol = 1.0e-6 -_t_n = t0 - -for (uh_tn, tn) in sol_t - global _t_n - _t_n += dt - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -end #module diff --git a/test/ODEsTests/TransientFEsTests/HeatEquationTests.jl b/test/ODEsTests/TransientFEsTests/HeatEquationTests.jl deleted file mode 100644 index 542c35779..000000000 --- a/test/ODEsTests/TransientFEsTests/HeatEquationTests.jl +++ 
/dev/null @@ -1,73 +0,0 @@ -module HeatEquationTests - -using Gridap -using ForwardDiff -using LinearAlgebra -using Test -using Gridap.FESpaces: get_algebraic_operator - -θ = 0.2 - -u(x,t) = (1.0-x[1])*x[1]*(1.0-x[2])*x[2]*t -u(t::Real) = x -> u(x,t) -f(t) = x -> ∂t(u)(x,t)-Δ(u(t))(x) - -domain = (0,1,0,1) -partition = (2,2) -model = CartesianDiscreteModel(domain,partition) - -order = 2 - -reffe = ReferenceFE(lagrangian,Float64,order) -V0 = FESpace( - model, - reffe, - conformity=:H1, - dirichlet_tags="boundary" -) -U = TransientTrialFESpace(V0,u) - -Ω = Triangulation(model) -degree = 2*order -dΩ = Measure(Ω,degree) - -# -a(u,v) = ∫(∇(v)⋅∇(u))dΩ -b(v,t) = ∫(v*f(t))dΩ - -res(t,u,v) = a(u,v) + ∫(∂t(u)*v)dΩ - b(v,t) -jac(t,u,du,v) = a(du,v) -jac_t(t,u,dut,v) = ∫(dut*v)dΩ - -op = TransientFEOperator(res,jac,jac_t,U,V0) - -t0 = 0.0 -tF = 1.0 -dt = 0.1 - -U0 = U(0.0) -uh0 = interpolate_everywhere(u(0.0),U0) - -ls = LUSolver() -using Gridap.Algebra: NewtonRaphsonSolver -nls = NLSolver(ls;show_trace=false,method=:newton) #linesearch=BackTracking()) -ode_solver = ThetaMethod(ls,dt,θ) - -sol_t = solve(ode_solver,op,uh0,t0,tF) - -# Juno.@enter Base.iterate(sol_t) - -l2(w) = w*w - -tol = 1.0e-6 -_t_n = t0 - -for (uh_tn, tn) in sol_t - global _t_n - _t_n += dt - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -end #module diff --git a/test/ODEsTests/TransientFEsTests/HeatVectorEquationTests.jl b/test/ODEsTests/TransientFEsTests/HeatVectorEquationTests.jl deleted file mode 100644 index 63f55987d..000000000 --- a/test/ODEsTests/TransientFEsTests/HeatVectorEquationTests.jl +++ /dev/null @@ -1,141 +0,0 @@ -module HeatVectorEquationTests - -using Gridap -using ForwardDiff -using LinearAlgebra -using Test -using Gridap.FESpaces: get_algebraic_operator -using Gridap.ODEs.ODETools -using Gridap.ODEs.TransientFETools - -θ = 0.5 - -u(x,t) = VectorValue(x[1],x[2])*t -u(t::Real) = x -> u(x,t) - -f(t) = x -> ∂t(u)(x,t)-Δ(u(t))(x) - -domain = (0,1,0,1) -partition = (1,1) -model = CartesianDiscreteModel(domain,partition) - -order = 2 - -reffe = ReferenceFE(lagrangian,VectorValue{2,Float64},order) -V0 = FESpace( - model, - reffe, - conformity=:H1, - dirichlet_tags="boundary" -) - -U = TransientTrialFESpace(V0,u) - -Ω = Triangulation(model) -degree = 2*order -dΩ = Measure(Ω,degree) - -a(u,v) = ∫(∇(v)⊙∇(u))dΩ -m(u,v) = ∫(u⋅v)dΩ -b(v,t) = ∫(v⋅f(t))dΩ - -res(t,u,v) = a(u,v) + m(∂t(u),v) - b(v,t) -jac(t,u,du,v) = a(du,v) -jac_t(t,u,dut,v) = m(dut,v) - -op = TransientFEOperator(res,jac,jac_t,U,V0) - -t0 = 0.0 -tF = 1.0 -dt = 0.1 - - -U0 = U(0.0) -uh0 = interpolate_everywhere(u(0.0),U0) - -ls = LUSolver() -ode_solver = ThetaMethod(ls,dt,θ) - -sol_t = solve(ode_solver,op,uh0,t0,tF) - -l2(w) = w⋅w - -tol = 1.0e-6 -_t_n = t0 - -rv, _ = Base.iterate(sol_t) -uh_tn, tn = rv -uh_tn.free_values - -_t_n = t0 -for (uh_tn, tn) in sol_t - global _t_n - _t_n += dt - @test tn≈_t_n - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -u0 = get_free_dof_values(uh0) -uf = copy(get_free_dof_values(uh0)) - -odeop = get_algebraic_operator(op) - -ode_cache = allocate_cache(odeop) -vθ = similar(u0) -nl_cache = nothing - -ode_solver.θ == 0.0 ? 
dtθ = dt : dtθ = dt*ode_solver.θ -tθ = t0+dtθ -ode_cache = update_cache!(ode_cache,odeop,tθ) - -using Gridap.ODEs.ODETools: ThetaMethodNonlinearOperator -nlop = ThetaMethodNonlinearOperator(odeop,tθ,dtθ,u0,ode_cache,vθ) - -nl_cache = solve!(uf,ode_solver.nls,nlop,nl_cache) -uf - -K = nl_cache.A -h = nl_cache.b - -# Steady version of the problem to extract the Laplacian and mass matrices -tf = tθ -Utf = U(tf) -fst(x) = f(tf)(x) -a(u,v) = ∫(∇(v)⊙∇(u))dΩ - -function extract_matrix_vector(a,fst) - btf(v) = ∫(v⋅fst)dΩ - op = AffineFEOperator(a,btf,Utf,V0) - ls = LUSolver() - solver = LinearFESolver(ls) - uh = solve(solver,op) - - tol = 1.0e-6 - e = uh-u(tf) - l2(e) = e⋅e - l2e = sqrt(sum( ∫(l2(e))dΩ )) - # @test l2e < tol - - Ast = op.op.matrix - bst = op.op.vector - - @test uh.free_values ≈ Ast \ bst - - return Ast, bst -end - -A,rhs = extract_matrix_vector(a,fst) - -gst(x) = -1.0*u(0.0)(x) -m(u,v) = (1/(θ*dt))*∫(u⋅v)dΩ - -M,rhs2 = extract_matrix_vector(m,gst) - -@test rhs + rhs2 ≈ h -@test A+M ≈ K -@test K \ h ≈ uf -@test K \ h ≈ uf - -end #module diff --git a/test/ODEsTests/TransientFEsTests/StokesEquationAutoDiffTests.jl b/test/ODEsTests/TransientFEsTests/StokesEquationAutoDiffTests.jl deleted file mode 100644 index a977d48a4..000000000 --- a/test/ODEsTests/TransientFEsTests/StokesEquationAutoDiffTests.jl +++ /dev/null @@ -1,126 +0,0 @@ -module StokesEquationAutoDiffTests - -using Gridap -using ForwardDiff -using LinearAlgebra -using Test -using Gridap.ODEs.TransientFETools -using Gridap.FESpaces -using Gridap.Arrays: test_array - -θ = 0.5 - -u(x,t) = VectorValue(x[1],x[2])*t -u(t::Real) = x -> u(x,t) - -p(x,t) = (x[1]-x[2])*t -p(t::Real) = x -> p(x,t) -q(x) = t -> p(x,t) - -f(t) = x -> ∂t(u)(t)(x)-Δ(u(t))(x)+ ∇(p(t))(x) -g(t) = x -> (∇⋅u(t))(x) - -domain = (0,1,0,1) -partition = (2,2) -model = CartesianDiscreteModel(domain,partition) - -order = 2 - -reffeᵤ = ReferenceFE(lagrangian,VectorValue{2,Float64},order) -V0 = FESpace( - model, - reffeᵤ, - conformity=:H1, - dirichlet_tags="boundary" -) - -reffeₚ = ReferenceFE(lagrangian,Float64,order-1) -Q = TestFESpace( - model, - reffeₚ, - conformity=:H1, - constraint=:zeromean -) - -U = TransientTrialFESpace(V0,u) - -P = TrialFESpace(Q) - -Ω = Triangulation(model) -degree = 2*order -dΩ = Measure(Ω,degree) - -# -a(u,v) = ∫(∇(u)⊙∇(v))dΩ -b((v,q),t) = ∫(v⋅f(t))dΩ + ∫(q*g(t))dΩ -m(ut,v) = ∫(ut⋅v)dΩ - -X = TransientMultiFieldFESpace([U,P]) -Y = MultiFieldFESpace([V0,Q]) - -res(t,(u,p),(v,q)) = a(u,v) + m(∂t(u),v) - ∫((∇⋅v)*p)dΩ + ∫(q*(∇⋅u))dΩ - b((v,q),t) -jac(t,(u,p),(du,dp),(v,q)) = a(du,v) - ∫((∇⋅v)*dp)dΩ + ∫(q*(∇⋅du))dΩ -jac_t(t,(u,p),(dut,dpt),(v,q)) = m(dut,v) - -b((v,q)) = b((v,q),0.0) - -mat((du1,du2),(v1,v2)) = a(du1,v1)+a(du2,v2) - -X₀ = evaluate(X,nothing) -dy = get_fe_basis(Y) -dx = get_trial_fe_basis(X₀) -xh = FEFunction(X₀,rand(num_free_dofs(X₀))) -xh_t = TransientCellField(xh,(xh,)) - -cell_j = get_array(jac(0.5,xh_t,dx,dy)) -cell_j_t = get_array(jac_t(0.5,xh_t,dx,dy)) - -cell_j_auto = get_array(jacobian(x->res(0.5,TransientCellField(x,(xh,)),dy),xh)) -cell_j_t_auto = get_array(jacobian(x->res(0.5,TransientCellField(xh,(x,)),dy),xh)) - -for i in 1:length(cell_j) - test_array(cell_j[i].array[1,1],cell_j_auto[i].array[1,1],≈) - test_array(cell_j[i].array[1,2],cell_j_auto[i].array[1,2],≈) - test_array(cell_j[i].array[2,1],cell_j_auto[i].array[2,1],≈) - test_array(cell_j_t[i].array[1,1],cell_j_t_auto[i].array[1,1],≈) -end - -op = TransientFEOperator(res,X,Y) - -U0 = U(0.0) -P0 = P(0.0) -X0 = X(0.0) -uh0 = interpolate_everywhere(u(0.0),U0) 
-ph0 = interpolate_everywhere(p(0.0),P0) -xh0 = interpolate_everywhere([uh0,ph0],X0) - -t0 = 0.0 -tF = 1.0 -dt = 0.1 - -ls = LUSolver() -ode_solver = ThetaMethod(ls,dt,θ) - -sol_t = solve(ode_solver,op,xh0,t0,tF) - -l2(w) = w⋅w - - -tol = 1.0e-6 -_t_n = t0 - -result = Base.iterate(sol_t) - -for (xh_tn, tn) in sol_t - global _t_n - _t_n += dt - uh_tn = xh_tn[1] - ph_tn = xh_tn[2] - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - e = p(tn) - ph_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -end #module diff --git a/test/ODEsTests/TransientFEsTests/StokesEquationTests.jl b/test/ODEsTests/TransientFEsTests/StokesEquationTests.jl deleted file mode 100644 index b32e2bea6..000000000 --- a/test/ODEsTests/TransientFEsTests/StokesEquationTests.jl +++ /dev/null @@ -1,106 +0,0 @@ -module StokesEquationTests - -using Gridap -using ForwardDiff -using LinearAlgebra -using Test -using Gridap.FESpaces: get_algebraic_operator - - -θ = 0.5 - -u(x,t) = VectorValue(x[1],x[2])*t -u(t::Real) = x -> u(x,t) - -p(x,t) = (x[1]-x[2])*t -p(t::Real) = x -> p(x,t) -q(x) = t -> p(x,t) - -f(t) = x -> ∂t(u)(t)(x)-Δ(u(t))(x)+ ∇(p(t))(x) -g(t) = x -> (∇⋅u(t))(x) - -domain = (0,1,0,1) -partition = (2,2) -model = CartesianDiscreteModel(domain,partition) - -order = 2 - -reffeᵤ = ReferenceFE(lagrangian,VectorValue{2,Float64},order) -V0 = FESpace( - model, - reffeᵤ, - conformity=:H1, - dirichlet_tags="boundary" -) - -reffeₚ = ReferenceFE(lagrangian,Float64,order-1) -Q = TestFESpace( - model, - reffeₚ, - conformity=:H1, - constraint=:zeromean -) - -U = TransientTrialFESpace(V0,u) - -P = TrialFESpace(Q) - -Ω = Triangulation(model) -degree = 2*order -dΩ = Measure(Ω,degree) - -# -a(u,v) = ∫(∇(u)⊙∇(v))dΩ -b((v,q),t) = ∫(v⋅f(t))dΩ + ∫(q*g(t))dΩ -m(ut,v) = ∫(ut⋅v)dΩ - -X = TransientMultiFieldFESpace([U,P]) -Y = MultiFieldFESpace([V0,Q]) - -res(t,(u,p),(v,q)) = a(u,v) + m(∂t(u),v) - ∫((∇⋅v)*p)dΩ + ∫(q*(∇⋅u))dΩ - b((v,q),t) -jac(t,(u,p),(du,dp),(v,q)) = a(du,v) - ∫((∇⋅v)*dp)dΩ + ∫(q*(∇⋅du))dΩ -jac_t(t,(u,p),(dut,dpt),(v,q)) = m(dut,v) - -b((v,q)) = b((v,q),0.0) - -mat((du1,du2),(v1,v2)) = a(du1,v1)+a(du2,v2) - -U0 = U(0.0) -P0 = P(0.0) -X0 = X(0.0) -uh0 = interpolate_everywhere(u(0.0),U0) -ph0 = interpolate_everywhere(p(0.0),P0) -xh0 = interpolate_everywhere([uh0,ph0],X0) - -op = TransientFEOperator(res,jac,jac_t,X,Y) - -t0 = 0.0 -tF = 1.0 -dt = 0.1 - -ls = LUSolver() -ode_solver = ThetaMethod(ls,dt,θ) - -sol_t = solve(ode_solver,op,xh0,t0,tF) - -l2(w) = w⋅w - - -tol = 1.0e-6 -_t_n = t0 - -result = Base.iterate(sol_t) - -for (xh_tn, tn) in sol_t - global _t_n - _t_n += dt - uh_tn = xh_tn[1] - ph_tn = xh_tn[2] - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - e = p(tn) - ph_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -end #module diff --git a/test/ODEsTests/TransientFEsTests/Transient2ndOrderFEOperatorsTests.jl b/test/ODEsTests/TransientFEsTests/Transient2ndOrderFEOperatorsTests.jl deleted file mode 100644 index 22dc61c6a..000000000 --- a/test/ODEsTests/TransientFEsTests/Transient2ndOrderFEOperatorsTests.jl +++ /dev/null @@ -1,120 +0,0 @@ -module Transient2nOrderFEOperatorsTests - -using Gridap -using Test - -# Analytical functions -u(x,t) = (1.0-x[1])*x[1]*(t^2+3.0) -u(t::Real) = x -> u(x,t) -v(t::Real) = ∂t(u)(t) -a(t::Real) = ∂tt(u)(t) -f(t) = x -> ∂tt(u)(x,t) + ∂t(u)(x,t) - Δ(u(t))(x) - -u_const(x,t) = (1.0-x[1])*x[1]*(3.0) -u_const(t::Real) = x -> u_const(x,t) -v_const(t::Real) = ∂t(u_const)(t) -a_const(t::Real) = ∂tt(u_const)(t) -f_const(t) = x -> ∂tt(u_const)(x,t) + ∂t(u_const)(x,t) - 
Δ(u_const(t))(x) - -domain = (0,1) -partition = (2,) -model = CartesianDiscreteModel(domain,partition) - -order = 2 -reffe = ReferenceFE(lagrangian,Float64,order) -V0 = FESpace( - model, - reffe, - conformity=:H1, - dirichlet_tags="boundary") -U = TransientTrialFESpace(V0,u) - -Ω = Triangulation(model) -degree = 2*order -dΩ = Measure(Ω,degree) - -m(utt,v) = ∫(v*utt)dΩ -c(ut,v) = ∫(v*ut)dΩ -a(u,v) = ∫(∇(v)⊙∇(u))dΩ -b(t,v) = ∫(v*f(t))dΩ -b_const(v) = ∫(v*f_const(0.0))dΩ -m(t,utt,v) = m(utt,v) -c(t,ut,v) = c(ut,v) -a(t,u,v) = a(u,v) - -res(t,u,v) = m(∂tt(u),v) + c(∂t(u),v) + a(u,v) - b(t,v) -jac(t,u,du,v) = a(du,v) -jac_t(t,u,dut,v) = c(dut,v) -jac_tt(t,u,dutt,v) = m(dutt,v) - -op = TransientFEOperator(res,jac,jac_t,jac_tt,U,V0) -op_affine = TransientAffineFEOperator(m,c,a,b,U,V0) -op_const = TransientConstantFEOperator(m,c,a,b_const,U,V0) -op_const_mat = TransientConstantMatrixFEOperator(m,c,a,b,U,V0) - -t0 = 0.0 -tF = 1.0 -dt = 0.1 -γ = 0.5 -β = 0.25 - -U0 = U(0.0) -uh0 = interpolate_everywhere(u(0.0),U0) -vh0 = interpolate_everywhere(v(0.0),U0) -ah0 = interpolate_everywhere(a(0.0),U0) -vh0_const = interpolate_everywhere(v_const(0.0),U0) -ah0_const = interpolate_everywhere(a_const(0.0),U0) - -ls = LUSolver() -ode_solver = Newmark(ls,dt,γ,β) - -sol_t = solve(ode_solver,op,(uh0,vh0,ah0),t0,tF) -sol_affine_t = solve(ode_solver,op_affine,(uh0,vh0,ah0),t0,tF) -sol_const_t = solve(ode_solver,op_const,(uh0,vh0_const,ah0_const),t0,tF) -sol_const_mat_t = solve(ode_solver,op_const_mat,(uh0,vh0,ah0),t0,tF) - -l2(w) = w*w - -tol = 1.0e-6 -_t_n = t0 - -for (uh_tn, tn) in sol_t - global _t_n - _t_n += dt - @test tn≈_t_n - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -_t_n = t0 -for (uh_tn, tn) in sol_affine_t - global _t_n - _t_n += dt - @test tn≈_t_n - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -_t_n = t0 -for (uh_tn, tn) in sol_const_t - global _t_n - _t_n += dt - @test tn≈_t_n - e = u_const(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -_t_n = t0 -for (uh_tn, tn) in sol_const_mat_t - global _t_n - _t_n += dt - @test tn≈_t_n - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -end diff --git a/test/ODEsTests/TransientFEsTests/TransientBlockMultiFieldStyleTests.jl b/test/ODEsTests/TransientFEsTests/TransientBlockMultiFieldStyleTests.jl deleted file mode 100644 index 6c6e4dbe3..000000000 --- a/test/ODEsTests/TransientFEsTests/TransientBlockMultiFieldStyleTests.jl +++ /dev/null @@ -1,103 +0,0 @@ -module TransientBlockMultiFieldStyleTests -using Test, BlockArrays, SparseArrays, LinearAlgebra - -using Gridap -using Gridap.FESpaces, Gridap.ReferenceFEs, Gridap.MultiField -using Gridap.ODEs.TransientFETools - -function main(n_spaces,mfs,weakform,Ω,dΩ,U,V) - mass, biform, liform = weakform - res(t,x,y) = mass(t,∂t(x),y) + biform(t,x,y) - liform(t,y) - jac(t,x,dx,y) = biform(t,dx,y) - jac_t(t,xt,dxt,y) = mass(t,dxt,y) - - ############################################################################################ - # Normal assembly - - Y = MultiFieldFESpace(fill(V,n_spaces)) - X = TransientMultiFieldFESpace(fill(U,n_spaces)) - - u = get_trial_fe_basis(X(0.0)) - v = get_fe_basis(Y) - uₜ = TransientCellField(u,(u,)) - - matdata_jac = collect_cell_matrix(X(0),Y,jac(0,uₜ,u,v)) - matdata_jac_t = collect_cell_matrix(X(0),Y,jac_t(0,uₜ,u,v)) - matdata_jacs = (matdata_jac,matdata_jac_t) - matdata = TransientFETools._vcat_matdata(matdata_jacs) - vecdata = collect_cell_vector(Y,liform(0,v)) - - assem = 
SparseMatrixAssembler(X(0),Y) - A1 = assemble_matrix(assem,matdata) - b1 = assemble_vector(assem,vecdata) - - ############################################################################################ - # Block MultiFieldStyle - - Yb = MultiFieldFESpace(fill(V,n_spaces);style=mfs) - Xb = TransientMultiFieldFESpace(fill(U,n_spaces);style=mfs) - test_fe_space(Yb) - test_fe_space(Xb(0)) - - ub = get_trial_fe_basis(Xb(0)) - vb = get_fe_basis(Yb) - ubₜ = TransientCellField(ub,(ub,)) - - bmatdata_jac = collect_cell_matrix(Xb(0),Yb,jac(0,ubₜ,ub,vb)) - bmatdata_jac_t = collect_cell_matrix(Xb(0),Yb,jac_t(0,ubₜ,ub,vb)) - bmatdata_jacs = (bmatdata_jac,bmatdata_jac_t) - bmatdata = TransientFETools._vcat_matdata(bmatdata_jacs) - bvecdata = collect_cell_vector(Yb,liform(0,vb)) - - ############################################################################################ - # Block Assembly - - assem_blocks = SparseMatrixAssembler(Xb,Yb) - - A1_blocks = assemble_matrix(assem_blocks,bmatdata) - b1_blocks = assemble_vector(assem_blocks,bvecdata) - @test A1 ≈ A1_blocks - @test b1 ≈ b1_blocks - - y1_blocks = similar(b1_blocks) - mul!(y1_blocks,A1_blocks,b1_blocks) - y1 = similar(b1) - mul!(y1,A1,b1) - @test y1_blocks ≈ y1 - - A3_blocks = allocate_matrix(assem_blocks,bmatdata) - b3_blocks = allocate_vector(assem_blocks,bvecdata) - assemble_matrix!(A3_blocks,assem_blocks,bmatdata) - assemble_vector!(b3_blocks,assem_blocks,bvecdata) - @test A3_blocks ≈ A1 - @test b3_blocks ≈ b1_blocks - -end - -############################################################################################ - -sol(x,t) = sum(x) -sol(t::Real) = x->sol(x,t) - -model = CartesianDiscreteModel((0.0,1.0,0.0,1.0),(5,5)) -Ω = Triangulation(model) - -reffe = LagrangianRefFE(Float64,QUAD,1) -V = FESpace(Ω, reffe; dirichlet_tags="boundary") -U = TransientTrialFESpace(V,sol) - -dΩ = Measure(Ω, 2) -mass2(t,(u1t,u2t),(v1,v2)) = ∫(u1t⋅v1)*dΩ -biform2(t,(u1,u2),(v1,v2)) = ∫(∇(u1)⋅∇(v1) + u2⋅v2 - u1⋅v2)*dΩ -liform2(t,(v1,v2)) = ∫(v1 - v2)*dΩ -mass3(t,(u1t,u2t,u3t),(v1,v2,v3)) = ∫(u1t⋅v1)*dΩ -biform3(t,(u1,u2,u3),(v1,v2,v3)) = ∫(∇(u1)⋅∇(v1) + u2⋅v2 - u1⋅v2 - u3⋅v2 - u2⋅v3)*dΩ -liform3(t,(v1,v2,v3)) = ∫(v1 - v2 + 2.0*v3)*dΩ - -for (n_spaces,weakform) in zip([2,3],[(mass2,biform2,liform2),(mass3,biform3,liform3)]) - for mfs in [BlockMultiFieldStyle(),BlockMultiFieldStyle(2,(1,n_spaces-1))] - main(n_spaces,mfs,weakform,Ω,dΩ,U,V) - end -end - -end # module diff --git a/test/ODEsTests/TransientFEsTests/TransientFEOperatorsTests.jl b/test/ODEsTests/TransientFEsTests/TransientFEOperatorsTests.jl deleted file mode 100644 index dcdd80aa9..000000000 --- a/test/ODEsTests/TransientFEsTests/TransientFEOperatorsTests.jl +++ /dev/null @@ -1,173 +0,0 @@ -module TransientFEOperatorsTests - -using Gridap -using ForwardDiff -using LinearAlgebra -using Test -using Gridap.ODEs.ODETools -using Gridap.ODEs.TransientFETools -using Gridap.FESpaces: get_algebraic_operator - -θ = 0.4 - -# Analytical functions -u(x,t) = (1.0-x[1])*x[1]*(1.0-x[2])*x[2]*(t+3.0) -u(t::Real) = x -> u(x,t) -v(x) = t -> u(x,t) -f(t) = x -> ∂t(u)(x,t)-Δ(u(t))(x) -∂tu(x,t) = ∂t(u)(x,t) -∂tu(t::Real) = x -> ∂tu(x,t) - -# Domain and triangulations -domain = (0,1,0,1) -partition = (2,2) -model = CartesianDiscreteModel(domain,partition) -order = 2 -reffe = ReferenceFE(lagrangian,Float64,order) -V0 = FESpace( - model, - reffe, - conformity=:H1, - dirichlet_tags="boundary") -U = TransientTrialFESpace(V0,u) -Ω = Triangulation(model) -# Γ = BoundaryTriangulation(model,tags="boundary") -degree = 2*order -dΩ 
= Measure(Ω,degree) -# dΓ = Measure(Γ,degree) -# nΓ = get_normal_vector(Γ) -# h = 1/partition[1] - -# Affine FE operator -a(u,v) = ∫(∇(v)⊙∇(u))dΩ #- ∫(0.0*v⋅(nΓ⋅∇(u)) + u⋅(nΓ⋅∇(v)) - 10/h*(v⋅u))dΓ -m(u,v) = ∫(v*u)dΩ -b(v,t) = ∫(v*f(t))dΩ #- ∫(u(t)⋅(nΓ⋅∇(v)) - 10/h*(v⋅u(t)) )dΓ -res(t,u,v) = a(u,v) + m(∂t(u),v) - b(v,t) -lhs(t,u,v) = m(∂t(u),v) -rhs(t,u,v) = b(v,t) - a(u,v) -irhs(t,u,v) = b(v,t) - a(u,v)#∫( -1.0*(∇(v)⊙∇(u)))dΩ -erhs(t,u,v) = ∫( 0.0*(∇(v)⊙∇(u)))dΩ#b(v,t) -jac(t,u,du,v) = a(du,v) -jac_t(t,u,dut,v) = m(dut,v) -op = TransientFEOperator(res,jac,jac_t,U,V0) -opRK = TransientRungeKuttaFEOperator(lhs,rhs,jac,jac_t,U,V0) -opIMEXRK = TransientIMEXRungeKuttaFEOperator(lhs,irhs,erhs,jac,jac_t,U,V0) - -# Time stepping -t0 = 0.0 -tF = 1.0 -dt = 0.1 - -# Initial solution -U0 = U(0.0) -uh0 = interpolate_everywhere(u(0.0),U0) -∂tuh0 = interpolate_everywhere(∂tu(0.0),U0) - -function test_ode_solver(ode_solver,op,xh0) - sol_t = solve(ode_solver,op,xh0,t0,tF) - - l2(w) = w*w - - tol = 1.0e-6 - _t_n = t0 - - for (uh_tn, tn) in sol_t - # global _t_n - _t_n += dt - @test tn≈_t_n - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol - end - - @test length( [uht for uht in sol_t] ) == ceil((tF - t0)/dt) - -end - -# Linear solver -ls = LUSolver() - -# ODE solvers -ode_solvers = [] -push!(ode_solvers,(ThetaMethod(ls,dt,θ),op,uh0)) -push!(ode_solvers,(BackwardEuler(ls,dt),op,uh0)) -push!(ode_solvers,(MidPoint(ls,dt),op,uh0)) -push!(ode_solvers,(GeneralizedAlpha(ls,dt,1.0),op,(uh0,∂tuh0))) -push!(ode_solvers,(RungeKutta(ls,ls,dt,:BE_1_0_1),opRK,uh0)) -push!(ode_solvers,(RungeKutta(ls,ls,dt,:CN_2_0_2),opRK,uh0)) -push!(ode_solvers,(RungeKutta(ls,ls,dt,:SDIRK_2_0_2),opRK,uh0)) -push!(ode_solvers,(IMEXRungeKutta(ls,ls,dt,:IMEX_FE_BE_2_0_1),opIMEXRK,uh0)) -for ode_solver in ode_solvers - test_ode_solver(ode_solver...) -end -# - -u0 = get_free_dof_values(uh0) -uf = get_free_dof_values(uh0) - -odeop = get_algebraic_operator(op) - -ode_cache = allocate_cache(odeop) -vθ = similar(u0) -nl_cache = nothing - -# tf = t0+dt - -# Nonlinear ThetaMethod -ode_solver = ThetaMethod(ls,dt,θ) -ode_solver.θ == 0.0 ? 
dtθ = dt : dtθ = dt*ode_solver.θ -tθ = t0+dtθ -ode_cache = update_cache!(ode_cache,odeop,tθ) - -using Gridap.ODEs.ODETools: ThetaMethodNonlinearOperator -nlop = ThetaMethodNonlinearOperator(odeop,tθ,dtθ,u0,ode_cache,vθ) - -nl_cache = solve!(uf,ode_solver.nls,nlop,nl_cache) - -K = nl_cache.A -h = nl_cache.b - -# Steady version of the problem to extract the Laplacian and mass matrices -# tf = 0.1 -tf = tθ -Utf = U(tf) -# fst(x) = -Δ(u(tf))(x) -fst(x) = f(tf)(x) -a(u,v) = ∫(∇(v)⊙∇(u))dΩ - -function extract_matrix_vector(a,fst) - btf(v) = ∫(v*fst)dΩ - op = AffineFEOperator(a,btf,Utf,V0) - ls = LUSolver() - solver = LinearFESolver(ls) - uh = solve(solver,op) - - tol = 1.0e-6 - e = uh-u(tf) - l2(e) = e*e - l2e = sqrt(sum( ∫(l2(e))dΩ )) - # @test l2e < tol - - Ast = op.op.matrix - bst = op.op.vector - - @test uh.free_values ≈ Ast \ bst - - return Ast, bst -end - -A,vec = extract_matrix_vector(a,fst) - -gst(x) = u(tf)(x) -m(u,v) = ∫(u*v)dΩ - -M,_ = extract_matrix_vector(m,gst) - -@test vec ≈ h -@test A+M/(θ*dt) ≈ K - -rhs -h - - -end #module diff --git a/test/ODEsTests/TransientFEsTests/TransientFETests.jl b/test/ODEsTests/TransientFEsTests/TransientFETests.jl deleted file mode 100644 index cf28e8887..000000000 --- a/test/ODEsTests/TransientFEsTests/TransientFETests.jl +++ /dev/null @@ -1,247 +0,0 @@ -module TransientFETests - -using Gridap -using Test -using Gridap.ODEs.ODETools -using Gridap.ODEs.TransientFETools -using Gridap.FESpaces: get_algebraic_operator - -u(x,t) = (x[1] + x[2])*t -u(t::Real) = x -> u(x,t) -∇u(x,t) = VectorValue(1,1)*t -∇u(t::Real) = x -> ∇u(x,t) -import Gridap: ∇ -∇(::typeof(u)) = ∇u -∇(u) === ∇u - -θ = 1.0 - -∂tu(t) = x -> x[1]+x[2] -import Gridap.ODEs.TransientFETools: ∂t -∂t(::typeof(u)) = ∂tu -@test ∂t(u) === ∂tu - -f(t) = x -> (x[1]+x[2]) - -domain = (0,1,0,1) -partition = (2,2) -model = CartesianDiscreteModel(domain,partition) - -order = 1 -reffe = ReferenceFE(lagrangian,Float64,order) -V0 = TestFESpace( - model, - reffe, - conformity=:H1, - dirichlet_tags="boundary" -) - -U = TransientTrialFESpace(V0,u) -U0 = TrialFESpace(V0,u(0.0)) -@test test_transient_trial_fe_space(U) - -U0 = U(1.0) -ud0 = copy(get_dirichlet_dof_values(U0)) -_ud0 = get_dirichlet_dof_values(U0) -U1 = U(2.0) -ud1 = copy(get_dirichlet_dof_values(U1)) -_ud1 = get_dirichlet_dof_values(U1) -@test all(ud0 .≈ 0.5ud1) -all(_ud0 .≈ _ud1) - -Ut = ∂t(U) -Ut.dirichlet_t -Ut0 = Ut(0.0) -Ut0.dirichlet_values - -Ut1 = Ut(1.0) -utd0 = copy(get_dirichlet_dof_values(Ut0)) -utd1 = copy(get_dirichlet_dof_values(Ut1)) -@test all(utd0 .== utd1) -@test all(utd1 .== ud0) - -Ω = Triangulation(model) -degree = 2 -dΩ = Measure(Ω,degree) - -a(u,v) = ∫(∇(v)⋅∇(u))dΩ -b(v,t) = ∫(v*f(t))dΩ - -res(t,u,v) = a(u,v) + ∫(∂t(u)*v)dΩ - b(v,t) -jac(t,u,du,v) = a(du,v) -jac_t(t,u,dut,v) = ∫(dut*v)dΩ - -U0 = U(0.0) -_res(u,v) = a(u,v) + 10.0*∫(u*v)dΩ - b(v,0.0) -_jac(u,du,v) = a(du,v) + 10.0*∫(du*v)dΩ -_op = FEOperator(_res,_jac,U0,V0) - -uh = interpolate_everywhere(0.0,U0)#1.0) -using Gridap.FESpaces: allocate_residual, allocate_jacobian -_r = allocate_residual(_op,uh) -_J = allocate_jacobian(_op,uh) -using Gridap.FESpaces: residual!, jacobian! 
-residual!(_r,_op,uh) -jacobian!(_J,_op,uh) - -op = TransientFEOperator(res,jac,jac_t,U,V0) -odeop = get_algebraic_operator(op) -cache = allocate_cache(odeop) - -r = allocate_residual(op,0.0,uh,cache) -J = allocate_jacobian(op,0.0,uh,cache) -uh10 = interpolate_everywhere(0.0,U0)#10.0) -xh = TransientCellField(uh,(uh10,)) -residual!(r,op,0.0,xh,cache) -jacobian!(J,op,1.0,xh,1,1.0,cache) -jacobian!(J,op,1.0,xh,2,10.0,cache) -@test all(r.≈_r) -@test all(J.≈_J) - -U0 = U(0.0) -uh0 = interpolate_everywhere(0.0,U0) -@test test_transient_fe_operator(op,uh0) - -u0 = u(0.0) -t0 = 0.0 -tF = 1.0 -dt = 0.1 - -ls = LUSolver() -# using LineSearches: BackTracking -tol = 1.0 -maxiters = 20 -using Gridap.Algebra: NewtonRaphsonSolver -nls = NLSolver(ls;show_trace=false,method=:newton) #linesearch=BackTracking()) -ode_solver = ThetaMethod(nls,dt,1.0) -@test test_transient_fe_solver(ode_solver,op,uh0,t0,tF) - -xh = TransientCellField(uh,(uh,)) -residual!(r,op,0.1,xh,cache) -jacobian!(J,op,1.0,xh,1,1.0,cache) -jacobian!(J,op,1.0,xh,2,10.0,cache) - -u0 = get_free_dof_values(uh0) -solver = ode_solver -t0 = 0.0 -ode_cache = allocate_cache(odeop) -cache = nothing -uf = copy(u0) -dt = solver.dt -tf = t0+dt -update_cache!(ode_cache,odeop,tf) -using Gridap.ODEs.ODETools: ThetaMethodNonlinearOperator -vf = copy(u0) -nlop = ThetaMethodNonlinearOperator(odeop,tf,dt,u0,ode_cache,vf) - -x = copy(nlop.u0) - -b1 = allocate_residual(nlop,x) -residual!(b1,nlop,x) -b2 = allocate_residual(nlop,x) -residual!(b2,nlop.odeop,nlop.tθ,(x,10.0*x),nlop.ode_cache) -@test all(b1 .≈ b2) -J1 = allocate_jacobian(nlop,x) -jacobian!(J1,nlop,x) -J2 = allocate_jacobian(nlop,x) -jacobian!(J2,nlop.odeop,nlop.tθ,(x,10.0*x),1,1.0,nlop.ode_cache) -jacobian!(J2,nlop.odeop,nlop.tθ,(x,10.0*x),2,10.0,nlop.ode_cache) -@test all(J1 .≈ J2) -using Gridap.Algebra: test_nonlinear_operator -test_nonlinear_operator(nlop,x,b1,jac=J1) - -x .= 0.0 -r = allocate_residual(nlop,x) -residual!(r,nlop,x) -J = allocate_jacobian(nlop,x) -jacobian!(J,nlop,x) - -cache = solve!(uf,solver.nls,nlop) -df = cache.df -ns = cache.ns - -function linsolve!(x,A,b) - numerical_setup!(ns,A) - solve!(x,ns,b) -end - -p = copy(x) -p .= 0.0 -l_sol = linsolve!(p,J,-r) -J*l_sol .≈ -r -x = x + l_sol -@test all(abs.(residual!(r,nlop,x)) .< 1e-6) - -residual!(r,nlop,x) -jacobian!(J,nlop,x) -p .= 0.0 -l_sol = linsolve!(p,J,-r) - -cache = solve!(uf,solver.nls,nlop) -@test all(uf .≈ x) -solve!(uf,solver.nls,nlop,cache) -@test all(uf .≈ x) - -uf .= 0.0 -x = copy(nlop.u0) -cache = Gridap.Algebra._new_nlsolve_cache(x,nls,nlop) -df = cache.df -ns = cache.ns -x .= 0.0 -l_sol = linsolve!(x,df.DF,df.F) -@test all(df.DF*l_sol.≈df.F) -x .= 0 -Gridap.Algebra.nlsolve(df,x;linsolve=linsolve!,nls.kwargs...) 
- -using Gridap.FESpaces: get_algebraic_operator -odeop = get_algebraic_operator(op) -sol_ode_t = solve(ode_solver,odeop,u0,t0,tF) - -test_ode_solution(sol_ode_t) -_t_n = t0 -for (u_n, t_n) in sol_ode_t - global _t_n - _t_n += dt - @test t_n≈_t_n - @test all(u_n .≈ t_n) -end - -ode_solver = ThetaMethod(nls,dt,θ) -sol_ode_t = solve(ode_solver,odeop,u0,t0,tF) -test_ode_solution(sol_ode_t) -_t_n = t0 -un, tn = Base.iterate(sol_ode_t) -for (u_n, t_n) in sol_ode_t - global _t_n - _t_n += dt - @test t_n≈_t_n - @test all(u_n .≈ t_n) -end - -sol_t = solve(ode_solver,op,uh0,t0,tF) -@test test_transient_fe_solution(sol_t) - -_t_n = 0.0 -for (u_n, t_n) in sol_t - global _t_n - _t_n += dt - @test t_n≈_t_n - @test all(u_n.free_values .≈ t_n) -end - -l2(w) = w*w - -# h1(w) = a(w,w) + l2(w) - -_t_n = t0 -for (uh_tn, tn) in sol_t - global _t_n - _t_n += dt - @test tn≈_t_n - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol - # writevtk(trian,"sol at time: $tn",cellfields=["u" => uh_tn]) -end - -end #module diff --git a/test/ODEsTests/TransientFEsTests/VectorHeatEquationTests.jl b/test/ODEsTests/TransientFEsTests/VectorHeatEquationTests.jl deleted file mode 100644 index 26b394415..000000000 --- a/test/ODEsTests/TransientFEsTests/VectorHeatEquationTests.jl +++ /dev/null @@ -1,84 +0,0 @@ -module VectorHeatEquationTests - -using Gridap -using ForwardDiff -using LinearAlgebra -using Test -using Gridap.FESpaces: get_algebraic_operator - -θ = 1.0 - -u(x,t) = (1.0-x[1])*x[1]*(1.0-x[2])*x[2]*t -u(t::Real) = x -> u(x,t) -v(x) = t -> u(x,t) -f(t) = x -> ∂t(u)(t)(x)-Δ(u(t))(x) - -domain = (0,1,0,1) -partition = (2,2) -model = CartesianDiscreteModel(domain,partition) - -order = 2 - -reffe = ReferenceFE(lagrangian,Float64,order) -V0 = FESpace( - model, - reffe, - conformity=:H1, - dirichlet_tags="boundary" -) -U = TransientTrialFESpace(V0,u) - -Ω = Triangulation(model) -degree = 2*order -dΩ = Measure(Ω,degree) - -# -a(u,v) = ∫(∇(v)⊙∇(u))dΩ -b(v,t) = ∫(v⋅f(t))dΩ -m(ut,v) = ∫(ut⋅v)dΩ - -X = TransientMultiFieldFESpace([U,U]) -Y = MultiFieldFESpace([V0,V0]) - -_res(t,u,v) = a(u,v) + m(∂t(u),v) - b(v,t) - -res(t,(u1,u2),(v1,v2)) = _res(t,u1,v1) + _res(t,u2,v2) -jac(t,x,(du1,du2),(v1,v2)) = a(du1,v1) + a(du2,v2) -jac_t(t,x,(du1t,du2t),(v1,v2)) = m(du1t,v1) + m(du2t,v2) - -op = TransientFEOperator(res,jac,jac_t,X,Y) - -t0 = 0.0 -tF = 1.0 -dt = 0.1 - -U0 = U(0.0) -X0 = X(0.0) -uh0 = interpolate_everywhere(u(0.0),U0) -xh0 = interpolate_everywhere([uh0,uh0],X0) - -ls = LUSolver() -# using Gridap.Algebra: NewtonRaphsonSolver -# nls = NLSolver(ls;show_trace=true,method=:newton) #linesearch=BackTracking()) -ode_solver = ThetaMethod(ls,dt,θ) - -sol_t = solve(ode_solver,op,xh0,t0,tF) - -l2(w) = w⋅w - - -tol = 1.0e-6 -_t_n = t0 - -result = Base.iterate(sol_t) - -for (xh_tn, tn) in sol_t - global _t_n - _t_n += dt - uh_tn = xh_tn[1] - e = u(tn) - uh_tn - el2 = sqrt(sum( ∫(l2(e))dΩ )) - @test el2 < tol -end - -end #module diff --git a/test/ODEsTests/TransientFEsTests/runtests.jl b/test/ODEsTests/TransientFEsTests/runtests.jl deleted file mode 100644 index 83b6fc658..000000000 --- a/test/ODEsTests/TransientFEsTests/runtests.jl +++ /dev/null @@ -1,35 +0,0 @@ -module TransientFEToolsTests - -using Test - -@testset "TransientFETests" begin include("TransientFETests.jl") end - -@testset "TransientFEOperatorsTests" begin include("TransientFEOperatorsTests.jl") end - -@testset "Transient2ndOrderFEOperatorsTests" begin include("Transient2ndOrderFEOperatorsTests.jl") end - -@testset "AffineFEOperatorsTests" begin 
include("AffineFEOperatorsTests.jl") end - -@testset "ConstantFEOperatorsTests" begin include("ConstantFEOperatorsTests.jl") end - -@testset "HeatEquationTests" begin include("HeatEquationTests.jl") end - -@testset "HeatVectorEquationTests" begin include("HeatVectorEquationTests.jl") end - -@testset "VectorHeatEquationTests" begin include("VectorHeatEquationTests.jl") end - -@testset "StokesEquationTests" begin include("StokesEquationTests.jl") end - -@testset "BoundaryEquationTests" begin include("BoundaryHeatEquationTests.jl") end - -@testset "DGHeatEquationTests" begin include("DGHeatEquationTests.jl") end - -@testset "FreeSurfacePotentialFlowTests" begin include("FreeSurfacePotentialFlowTests.jl") end - -@testset "HeatEquationAutoDiffTests" begin include("HeatEquationAutoDiffTests.jl") end - -@testset "StokesEquationAutoDiffTests" begin include("StokesEquationAutoDiffTests.jl") end - -@testset "ForwardEulerHeatEquationTests" begin include("ForwardEulerHeatEquationTests.jl") end - -end # module diff --git a/test/ODEsTests/DiffEqsWrappersTests/DiffEqsTests.jl b/test/ODEsTests/_DiffEqsWrappersTests.jl similarity index 81% rename from test/ODEsTests/DiffEqsWrappersTests/DiffEqsTests.jl rename to test/ODEsTests/_DiffEqsWrappersTests.jl index 0f5d7262b..785675a50 100644 --- a/test/ODEsTests/DiffEqsWrappersTests/DiffEqsTests.jl +++ b/test/ODEsTests/_DiffEqsWrappersTests.jl @@ -1,11 +1,9 @@ -module DiffEqsWrapperTests +module DiffEqsWrappersTests using Test + using Gridap using Gridap.ODEs -using Gridap.ODEs.ODETools -using Gridap.ODEs.TransientFETools -using Gridap.ODEs.DiffEqWrappers # using DifferentialEquations # using Sundials @@ -23,12 +21,12 @@ function fe_problem(u, n) order = 1 - reffe = ReferenceFE(lagrangian,Float64,order) + reffe = ReferenceFE(lagrangian, Float64, order) V0 = FESpace( model, reffe, - conformity = :H1, - dirichlet_tags = "boundary", + conformity=:H1, + dirichlet_tags="boundary", ) U = TransientTrialFESpace(V0, u) @@ -36,9 +34,9 @@ function fe_problem(u, n) degree = 2 * order dΩ = Measure(Ω, degree) - a(u, v) = ∫( ∇(v) ⋅ ∇(u) )dΩ - b(v, t) = ∫( v * f(t) )dΩ - m(u, v) = ∫( v * u )dΩ + a(u, v) = ∫(∇(v) ⋅ ∇(u))dΩ + b(v, t) = ∫(v * f(t))dΩ + m(u, v) = ∫(v * u)dΩ res(t, u, v) = a(u, v) + m(∂t(u), v) - b(v, t) jac(t, u, du, v) = a(du, v) @@ -67,28 +65,33 @@ u(t) = x -> u(x, t) # ISSUE 2: When I pass `jac_prototype` the code gets stuck n = 3 # cells per dim (2D) -op, u0 = fe_problem(u,n) +op, u0 = fe_problem(u, n) # Some checks res!, jac!, mass!, stif! 
= diffeq_wrappers(op) -J = prototype_jacobian(op,u0) +J = prototype_jacobian(op, u0) r = copy(u0) -θ = 1.0; t0 = 0.0; tF = 1.0; dt = 0.1; tθ = 1.0; dtθ = dt*θ +θ = 1.0 +t0 = 0.0 +tF = 1.0 +dt = 0.1 +tθ = 1.0 +dtθ = dt * θ res!(r, u0, u0, nothing, tθ) jac!(J, u0, u0, nothing, (1 / dtθ), tθ) -K = prototype_jacobian(op,u0) -M = prototype_jacobian(op,u0) +K = prototype_jacobian(op, u0) +M = prototype_jacobian(op, u0) stif!(K, u0, u0, nothing, tθ) mass!(M, u0, u0, nothing, tθ) # Here you have the mass matrix M -@test (1/dtθ)*M+K ≈ J +@test (1 / dtθ) * M + K ≈ J # To explore the Sundials solver options, e.g., BE with fixed time step dtd -f_iip = DAEFunction{true}(res!; jac = jac!)#, jac_prototype=J) +f_iip = DAEFunction{true}(res!; jac=jac!)#, jac_prototype=J) # jac_prototype is the way to pass my pre-allocated jacobian matrix -prob_iip = DAEProblem{true}(f_iip, u0, u0, tspan, differential_vars = [true,true,true,true]) +prob_iip = DAEProblem{true}(f_iip, u0, u0, tspan, differential_vars=[true, true, true, true]) # When I pass `jac_prototype` the code get stuck here: # sol_iip = Sundials.solve(prob_iip, IDA(), reltol = 1e-8, abstol = 1e-8) # @show sol_iip.u @@ -99,10 +102,10 @@ prob_iip = DAEProblem{true}(f_iip, u0, u0, tspan, differential_vars = [true,true # Show using integrators as iterators # for i in take(integ, 100) - # @show integ.u +# @show integ.u # end -end # module +end # module DiffEqsWrappersTests # FUTURE WORK: Check other options, not only Sundials diff --git a/test/ODEsTests/runtests.jl b/test/ODEsTests/runtests.jl index efab0ea42..02cec51ff 100644 --- a/test/ODEsTests/runtests.jl +++ b/test/ODEsTests/runtests.jl @@ -2,12 +2,26 @@ module ODEsTests using Test -@time @testset "ODETools" begin include("ODEsTests/runtests.jl") end +@time @testset "TimeDerivatives" begin include("TimeDerivativesTests.jl") end -@time @testset "TransientFETools" begin include("TransientFEsTests/runtests.jl") end +@time @testset "ODEOperators" begin include("ODEOperatorsTests.jl") end -# @time @testset "DiffEqsWrappers" begin include("DiffEqsWrappersTests/runtests.jl") end +@time @testset "ODESolvers" begin include("ODESolversTests.jl") end + +@time @testset "ODEProblems" begin include("ODEProblemsTests.jl") end + +@time @testset "ODESolutions" begin include("ODESolutionsTests.jl") end + +@time @testset "TransientFESpaces" begin include("TransientFESpacesTests.jl") end + +@time @testset "TransientCellFields" begin include("TransientCellFieldsTests.jl") end + +@time @testset "TransientFEOperatorsSolutions" begin include("TransientFEOperatorsSolutionsTests.jl") end + +@time @testset "TransientFEProblems" begin include("TransientFEProblemsTests.jl") end + +# @time @testset "DiffEqsWrappers" begin include("_DiffEqsWrappersTests.jl") end # include("../bench/runbenchs.jl") -end #module +end # module ODEsTests
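
For orientation, the new transient FE tests added in this diff all follow the same pattern: define a manufactured solution, build the weak-form residual (and optionally its Jacobians), wrap it in a `TransientFEOperator`, march with an ODE solver such as `ThetaMethod`, and check the L2 error at every step. The block below is a minimal, self-contained sketch of that pattern, assuming the `Gridap.ODEs` API exercised in the tests above; the mesh size, tolerance, and the module name `MinimalTransientHeatSketch` are illustrative only and not part of this PR.

```julia
module MinimalTransientHeatSketch

using Test
using Gridap
using Gridap.ODEs

# Manufactured solution, linear in time so a second-order scheme resolves it exactly
u(x, t) = x[1] * (1 - x[2]) * (1 + t)
u(t::Real) = x -> u(x, t)

# Geometry and FE spaces (test space V, time-dependent trial space U)
model = CartesianDiscreteModel((0, 1, 0, 1), (5, 5))
reffe = ReferenceFE(lagrangian, Float64, 2)
V = FESpace(model, reffe, conformity=:H1, dirichlet_tags="boundary")
U = TransientTrialFESpace(V, u)

Ω = Triangulation(model)
dΩ = Measure(Ω, 4)

# Weak form of the heat equation with manufactured forcing
f(t) = x -> ∂t(u)(x, t) - Δ(u(t))(x)
mass(t, ∂ₜu, v) = ∫(∂ₜu ⋅ v) * dΩ
stiffness(t, u, v) = ∫(∇(u) ⊙ ∇(v)) * dΩ
res(t, u, v) = mass(t, ∂t(u), v) + stiffness(t, u, v) - ∫(f(t) ⋅ v) * dΩ

# Jacobians given manually; the "_ad" variants in the tests omit them and rely on AD
jac(t, u, du, v) = stiffness(t, du, v)
jac_t(t, u, dut, v) = mass(t, dut, v)
tfeop = TransientFEOperator(res, (jac, jac_t), U, V)

# Time marching with the θ-method (θ = 0.5, i.e. Crank-Nicolson)
t0, tF, dt = 0.0, 1.0, 0.1
uh0 = interpolate_everywhere(u(t0), U(t0))
sysslvr_l = LUSolver()
sysslvr_nl = NLSolver(sysslvr_l, show_trace=false, method=:newton, iterations=10)
odeslvr = ThetaMethod(sysslvr_nl, dt, 0.5)

# Solve and verify the L2 error at every time step
for (t_n, uh_n) in solve(odeslvr, tfeop, t0, tF, (uh0,))
  eh_n = u(t_n) - uh_n
  @test sqrt(sum(∫(eh_n ⋅ eh_n) * dΩ)) < 1.0e-6
end

end # module MinimalTransientHeatSketch
```

The multifield, Neumann, vector-valued, second-order, and Stokes tests above are variations of this same skeleton, differing only in the spaces, the weak form, and the operator flavour (`TransientQuasilinearFEOperator`, `TransientSemilinearFEOperator`, `TransientLinearFEOperator`, or `TransientIMEXFEOperator`).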