Version bump and Documentation Updates #110

Merged: 6 commits, Jul 21, 2023
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,7 +1,7 @@
name = "RobustNeuralNetworks"
uuid = "a1f18e6b-8af1-433f-a85d-2e1ee636a2b8"
authors = ["Nicholas H. Barbara", "Max Revay", "Ruigang Wang", "Jing Cheng", "Jerome Justin", "Ian R. Manchester"]
version = "0.2.2"
version = "0.2.3"

[deps]
Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
2 changes: 0 additions & 2 deletions docs/Project.toml
@@ -1,12 +1,10 @@
[deps]
BSON = "fbb218c0-5317-5bc6-957e-2ee96dd4b1f0"
CairoMakie = "13f3f980-e62b-5c42-98c6-ff1f3baf88f0"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
LiveServer = "16fef848-5104-11e9-1b77-fb7a48bbb589"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
RobustNeuralNetworks = "a1f18e6b-8af1-433f-a85d-2e1ee636a2b8"
Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

[compat]
Documenter = "0.27"
Binary file removed docs/src/assets/lbdn-mnist/dense_mnist.bson
Binary file removed docs/src/assets/lbdn-mnist/lbdn_mnist.bson
Binary file removed docs/src/assets/lbdn-mnist/mnist_data.bson
6 changes: 5 additions & 1 deletion docs/src/examples/box_obsv.md
@@ -1,8 +1,12 @@
# Observer Design with REN

*Full example code can be found [here](https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/examples/src/ren_obsv_box.jl).*

In [Reinforcement Learning with LBDN](@ref), we designed a controller for a simple nonlinear system consisting of a box sitting in a tub of fluid, suspended between two springs. We assumed the controller had *full state knowledge*: i.e., it had access to both the position and velocity of the box. In many practical situations, we might only be able to measure some of the system states. For example, our box may have a camera to estimate its position but not its velocity. In these cases, we need a [*state observer*](https://en.wikipedia.org/wiki/State_observer) to estimate the full state of the system for feedback control.

In this example, we will show how a contracting REN can be used to learn stable observers for dynamical systems. A common approach to designing state estimators for nonlinear systems is the *Extended Kalman Filter* ([EKF](https://en.wikipedia.org/wiki/Extended_Kalman_filter)). In our case, we'll consider observer design as a supervised learning problem. For a detailed explanation of the theory behind this example, please refer to Section VIII of [Revay, Wang & Manchester (2021)](https://ieeexplore.ieee.org/document/10179161). See [PDE Observer Design with REN](@ref) for explanation of a more complex example from the paper.
In this example, we will show how a contracting REN can be used to learn stable observers for dynamical systems. A common approach to designing state estimators for nonlinear systems is the *Extended Kalman Filter* ([EKF](https://en.wikipedia.org/wiki/Extended_Kalman_filter)). In our case, we'll consider observer design as a supervised learning problem. For a detailed explanation of the theory behind this example, please refer to Section VIII of [Revay, Wang & Manchester (2021)](https://ieeexplore.ieee.org/document/10179161).

See [PDE Observer Design with REN](@ref) for an explanation of a more complex example from the paper.
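For intuition, the classical building block that learned observers generalise is the Luenberger observer: propagate an estimate through the model dynamics and correct it with the measurement error. The sketch below is a minimal illustration of that estimate-and-correct structure for a scalar linear system, not the REN-based method described in this example; the system pole `a` and observer gain `L` are assumed values chosen for the demonstration.

```python
# Luenberger observer for the scalar linear system
#   x_{t+1} = a * x_t,   y_t = x_t
# The estimate is corrected by the measurement error (y - xhat).
a, L = 0.9, 0.5          # system pole and observer gain (assumed values)
x, xhat = 1.0, 0.0       # true state and (deliberately wrong) initial estimate

for _ in range(50):
    y = x                              # measurement of the true state
    xhat = a * xhat + L * (y - xhat)   # predict, then correct with innovation
    x = a * x                          # true dynamics evolve

print(abs(x - xhat) < 1e-3)  # → True: the estimation error has contracted
```

The estimation error obeys `e_{t+1} = (a - L) e_t`, so any gain with `|a - L| < 1` makes the observer converge; a contracting REN plays the analogous role for nonlinear systems, with contraction guaranteeing the estimate forgets its initial condition.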

## 1. Background theory

4 changes: 2 additions & 2 deletions docs/src/examples/echo_ren.md
@@ -1,6 +1,6 @@
# (Convex) Nonlinear Control with REN

*This example was first presented in Section IX of [Revay, Wang & Manchester (2021)](https://ieeexplore.ieee.org/document/10179161).*
*This example was first presented in Section IX of [Revay, Wang & Manchester (2021)](https://ieeexplore.ieee.org/document/10179161). Full example code can be found [here](https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/examples/src/echo_ren.jl).*


RENs and LBDNs can be used for a lot more than just learning-based problems. In this example, we'll see how RENs can be used to design nonlinear feedback controllers with stability guarantees for linear dynamical systems with constraints. Introducing constraints (e.g., minimum/maximum control inputs) often means that nonlinear controllers perform better than linear policies. A common approach is to use *Model Predictive Control* ([MPC](https://en.wikipedia.org/wiki/Model_predictive_control)). In our case, we'll use convex optimisation to design a nonlinear controller. The controller will be an [*echo state network*](https://en.wikipedia.org/wiki/Echo_state_network) based on a contracting REN. We'll use this alongside the [*Youla-Kucera parameterisation*](https://www.sciencedirect.com/science/article/pii/S1367578820300249) to guarantee stability of the final controller.
@@ -249,7 +249,7 @@ With the problem all nicely defined, all we have to do is solve it and investiga
using BSON
using Mosek, MosekTools

# Optimize the closed-loop response
# Optimise the closed-loop response
problem = minimize(J, constraints)
Convex.solve!(problem, Mosek.Optimizer)

7 changes: 5 additions & 2 deletions docs/src/examples/lbdn_curvefit.md
@@ -1,5 +1,7 @@
# Fitting a Curve with LBDN

*Full example code can be found [here](https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/examples/src/lbdn_curvefit.jl).*

For our first example, let's fit a Lipschitz-bounded Deep Network (LBDN) to a curve in one dimension. Consider the step function below.
```math
f(x) =
@@ -54,7 +56,7 @@ model = DiffLBDN(model_ps)
Note that we first constructed the model parameters `model_ps`, and *then* created a callable `model`. In `RobustNeuralNetworks.jl`, model parameterisations are separated from "explicit" definitions of a model used for evaluation on data. See [Direct & explicit parameterisations](@ref) for more information.

!!! info "A layer-wise approach"
We have also provided single LBDN layers with [`SandwichFC`](@ref) to mimic the layer-wise construction of models like with [`Flux.Dense`](https://fluxml.ai/Flux.jl/stable/models/layers/#Flux.Dense). This may be more convenient for users used to working with `Flux.jl`.
    We have also provided single LBDN layers with [`SandwichFC`](@ref). Introduced in [Wang & Manchester (2023)](https://proceedings.mlr.press/v202/wang23v.html), the [`SandwichFC`](@ref) layer is a fully-connected or dense layer with a guaranteed Lipschitz bound of 1.0. We have designed the user interface for [`SandwichFC`](@ref) to be as similar to that of [`Flux.Dense`](https://fluxml.ai/Flux.jl/stable/models/layers/#Flux.Dense) as possible. This may be more convenient for users accustomed to working with `Flux.jl`.

For example, we can construct an identical model to the LBDN `model` above with the following.
```julia
@@ -129,7 +131,8 @@ using Printf

# Estimate Lipschitz lower-bound
Empirical_Lipschitz = lip(model, xs, dx)
@printf "Empirical lower Lipschitz bound: %.2f\n" Empirical_Lipschitz
@printf "Imposed Lipschitz upper bound: %.2f\n" get_lipschitz(model)
@printf "Empirical Lipschitz lower bound: %.2f\n" Empirical_Lipschitz
```
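The empirical lower bound above comes from finite differences: perturb each sample by a small step and take the largest observed gain. A minimal language-agnostic sketch of the idea (in Python, with a hypothetical `empirical_lipschitz` helper standing in for the `lip` function used above):

```python
def empirical_lipschitz(f, xs, dx):
    """Lower-bound the Lipschitz constant of f via finite differences."""
    return max(abs(f(x + dx) - f(x)) / dx for x in xs)

# For the 1-Lipschitz function f(x) = |x|, the estimate is (close to) 1.0
xs = [i / 50 - 1.0 for i in range(101)]   # uniform grid on [-1, 1]
print(round(empirical_lipschitz(abs, xs, 1e-4), 3))  # → 1.0
```

Because it only samples finitely many points, this is always a *lower* bound; the imposed Lipschitz bound of the LBDN is a certified *upper* bound, so the empirical estimate should never exceed it.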

We can now plot the results to see what our model looks like.