
functional inverse problem #572

Open
YichengDWu opened this issue Jul 7, 2022 · 5 comments

Comments

@YichengDWu
Member

YichengDWu commented Jul 7, 2022

I have an inverse problem where the unknown parameter function takes the solution as its input. What is the correct way to implement this?

using ModelingToolkit, NeuralPDE, Lux, Random, NNlib
using Optimization, OptimizationOptimJL, Zygote
import ModelingToolkit: Interval

@parameters x
@variables u(..), b(..)

f(x) = (1 + sin(π*x)^2) / (1 + 2 * sin(π*x)^2) * π^2 * sin(π*x) + sin(π*x)
Dxx = Differential(x)^2
eq = [-b(u(x)) * Dxx(u(x)) + u(x) ~ f(x)]

bcs = [u(0) ~ 0, u(1) ~ 0]
domain = [x ∈ Interval(0.0, 1.0)]

@named pdesys = PDESystem(eq, bcs, domain, [x], [u(x), b(u(x))])

# the neural networks
chain_u =  Chain(Dense(1, 20, tanh), Dense(20,20, tanh), Dense(20,1))
chain_b =  Chain(Dense(1, 20, tanh), Dense(20,20, tanh), Dense(20,1))

x_data = reshape(LinRange(0,1,101)[2:end-1],1,:) |> collect
u_data = sin.(π .* x_data)

function additional_loss(phi, θ, p)
    return sum(abs2, phi[1](x_data, θ.depvar.u).-u_data)/length(x_data)
end

dx = 0.01
discretization = NeuralPDE.PhysicsInformedNN([chain_u, chain_b],
                                             NeuralPDE.GridTraining(dx),
                                             additional_loss=additional_loss)  

prob = NeuralPDE.discretize(pdesys, discretization)                
callback = function (p,l)
    println("Current loss is: $l")
    return false
end
res = Optimization.solve(prob, BFGS(), callback = callback, maxiters=500)
@YichengDWu YichengDWu changed the title Take the solution as the input of another nn Take the solution as the input of another NN Jul 7, 2022
@YichengDWu
Member Author

julia> prob = NeuralPDE.discretize(pdesys, discretization)
ERROR: MethodError: no method matching nameof(::Term{Real, Base.ImmutableDict{DataType, Any}})
Closest candidates are:
  nameof(::Sym) at C:\Users\Luffy\.julia\packages\SymbolicUtils\vnuIf\src\types.jl:144
  nameof(::ModelingToolkit.AbstractSystem) at C:\Users\Luffy\.julia\packages\ModelingToolkit\tMgaW\src\systems\abstractsystem.jl:139
  nameof(::DataType) at C:\Users\Luffy\AppData\Local\Programs\Julia-1.7.2\share\julia\base\reflection.jl:223
  ...
Stacktrace:
 [1] (::NeuralPDE.var"#40#41")(argument::Term{Real, Base.ImmutableDict{DataType, Any}})
   @ NeuralPDE .\none:0
 [2] iterate
   @ .\generator.jl:47 [inlined]
 [3] collect(itr::Base.Generator{Vector{Term{Real, Base.ImmutableDict{DataType, Any}}}, NeuralPDE.var"#40#41"})
   @ Base .\array.jl:724
 [4] get_vars(indvars_::Vector{Num}, depvars_::Vector{Num})
   @ NeuralPDE C:\Users\Luffy\.julia\packages\NeuralPDE\iNhvg\src\symbolic_utilities.jl:353
 [5] symbolic_discretize(pde_system::PDESystem, discretization::PhysicsInformedNN{GridTraining{Float64}, Nothing, Vector{NeuralPDE.Phi{Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Dense{true, typeof(tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}}, typeof(NeuralPDE.numeric_derivative), Bool, typeof(additional_loss), Nothing, Nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}})
   @ NeuralPDE C:\Users\Luffy\.julia\packages\NeuralPDE\iNhvg\src\discretize.jl:420
 [6] discretize(pde_system::PDESystem, discretization::PhysicsInformedNN{GridTraining{Float64}, Nothing, Vector{NeuralPDE.Phi{Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Dense{true, typeof(tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}}, typeof(NeuralPDE.numeric_derivative), Bool, typeof(additional_loss), Nothing, Nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}})
   @ NeuralPDE C:\Users\Luffy\.julia\packages\NeuralPDE\iNhvg\src\discretize.jl:669
 [7] top-level scope
   @ REPL[106]:1
 [8] top-level scope
   @ C:\Users\Luffy\.julia\packages\CUDA\tTK8Y\src\initialization.jl:52

@YichengDWu
Member Author

YichengDWu commented Jul 7, 2022

Changing b(u(x)) to b(x) would work, but that is not what I want.
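
For reference, a minimal sketch of that workaround, keeping everything else from the original post unchanged. b becomes a function of x directly, so the parser accepts it, at the cost of recovering b(x) along the solution rather than the constitutive relation b(u):

# Workaround sketch: declare b as a function of x instead of u(x).
eq = [-b(x) * Dxx(u(x)) + u(x) ~ f(x)]

@named pdesys = PDESystem(eq, bcs, domain, [x], [u(x), b(x)])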

@ChrisRackauckas
Member

I'm not sure this case will parse. It probably needs a few changes to be supported. I can't think of a quick workaround either, so I'll say this is unsupported right now.

I'm going to post about how the parsing and codegen in this repo should be changed, and that would make it easy to support this, but it won't happen "soon", so I'd just write the PINN out by hand here if you need it. But this is a good thing to target, and the code you posted is probably what the front end should be.
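
To illustrate the "write the PINN out by hand" route, here is a rough sketch. This is not NeuralPDE's API; the ComponentArrays flattening, the finite-difference second derivative on the grid, and the BFGS setup are assumptions made for the sketch, not anything guaranteed by this repo:

using Lux, ComponentArrays, Optimization, OptimizationOptimJL, Zygote, Random

# Two networks: u_net approximates u(x), b_net approximates b(u).
u_net = Chain(Dense(1, 20, tanh), Dense(20, 20, tanh), Dense(20, 1))
b_net = Chain(Dense(1, 20, tanh), Dense(20, 20, tanh), Dense(20, 1))

rng = Random.default_rng()
ps_u, st_u = Lux.setup(rng, u_net)
ps_b, st_b = Lux.setup(rng, b_net)

# Collocation grid, data, and the manufactured right-hand side from the issue.
dx = 0.01
xs = reshape(collect(0.0:dx:1.0), 1, :)
x_data = reshape(collect(LinRange(0, 1, 101))[2:end-1], 1, :)
u_data = sin.(π .* x_data)
f(x) = (1 + sin(π*x)^2) / (1 + 2 * sin(π*x)^2) * π^2 * sin(π*x) + sin(π*x)

function loss(θ, _)
    u(z) = first(u_net(z, θ.u, st_u))
    b(v) = first(b_net(v, θ.b, st_b))
    uxs = u(xs)
    # Central finite differences stand in for the second derivative at interior points.
    uxx = (uxs[:, 3:end] .- 2 .* uxs[:, 2:end-1] .+ uxs[:, 1:end-2]) ./ dx^2
    ui  = uxs[:, 2:end-1]
    # PDE residual: -b(u) * u'' + u - f(x).
    resid = -b(ui) .* uxx .+ ui .- f.(xs[:, 2:end-1])
    pde_loss  = sum(abs2, resid) / length(resid)
    bc_loss   = sum(abs2, u(reshape([0.0, 1.0], 1, :)))        # u(0) = u(1) = 0
    data_loss = sum(abs2, u(x_data) .- u_data) / length(u_data)
    return pde_loss + bc_loss + data_loss
end

θ0 = ComponentArray(u = ps_u, b = ps_b)
optf = OptimizationFunction(loss, Optimization.AutoZygote())
prob = OptimizationProblem(optf, θ0)
res = solve(prob, BFGS(); maxiters = 500)

After training, composing b_net with u_net gives the learned b(u) on the range of values that u actually visits.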

@YichengDWu
Member Author

I'll just move on if it's not supported yet.

@YichengDWu
Member Author

But yeah, a very good feature to have.

@YichengDWu YichengDWu changed the title Take the solution as the input of another NN functional inverse problem Aug 1, 2022