# Docs redux Part II #135

Merged · 8 commits · Jun 9, 2018
**LICENSE.md** — 2 changes: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
The SDDP.jl package is licensed under the Mozilla Public License, Version 2.0:

> Copyright (c) 2017: Oscar Dowson.
> Copyright (c) 2017-2018: Oscar Dowson.
>
>
> Mozilla Public License, version 2.0
**README.md** — 40 changes: 2 additions & 38 deletions
@@ -4,32 +4,11 @@
|:-----------------:|:--------------------:|:----------------:|
| [![][docs-latest-img]][docs-latest-url] | [![Build Status][build-img]][build-url] | [![Codecov branch][codecov-img]][codecov-url]

## Installation
This package is unregistered so you will need to `Pkg.clone` it as follows:
```julia
Pkg.clone("https://github.com/odow/SDDP.jl.git")
```

## Documentation

The documentation is still very incomplete, however the user-facing API from the examples should
be stable enough to use.

**If you are stuggling to figure out how to use something, raise a Github issue!**

However, you can find some documentation at https://odow.github.io/SDDP.jl/latest/

In addition, most functions are documented, and this can be accessed via the Julia
help. e.g.:
```julia
julia>? @state
```
You can find the documentation at https://odow.github.io/SDDP.jl/latest/.

Some other resources include:
- many examples: https://github.com/odow/SDDP.jl/tree/master/examples
- a paper on Optimization-Online:
http://www.optimization-online.org/DB_HTML/2017/12/6388.html
- an example of a large-scale model here: https://github.com/odow/MilkPOWDER
**If you are struggling to figure out how to use something, raise a GitHub issue!**

## Examples

@@ -43,21 +22,6 @@ Bonus points for models where you know the optimal first stage objective value.
We need your bug reports! We've only stressed a few code paths on real-world models.
If you run into any problems, [file an issue here](https://github.com/odow/SDDP.jl/issues/new).

## FAQ

**Q.** How do I make the constraint coefficients random?

**A.** Due to the design of JuMP, it's difficult to efficiently modify constraint
coefficients. Therefore, you can only vary the right-hand-side of a constraint
using the `@rhsnoise` macro.

As a work around, we suggest you either reformulate the model so the uncertainty
appears in the RHS, or model the uncertainty as a markov process. Take a look at
the [asset management example](https://github.com/odow/SDDP.jl/blob/master/examples/AssetManagement/asset_management.jl)
to see an example of this. Make sure you keep in mind that a new value function
is built at each markov state which increases the computation time and memory
requirements.

## Other Packages

`SDDP.jl` isn't the only Julia package for solving multi-stage stochastic programs.
**appveyor.yml** — 34 changes: 0 additions & 34 deletions

This file was deleted.

**docs/make.jl** — 5 changes: 3 additions & 2 deletions
@@ -17,10 +17,11 @@ makedocs(
"tutorial/05_risk.md",
"tutorial/06_cut_selection.md",
"tutorial/07_plotting.md",
"tutorial/08_odds_and_ends.md"
"tutorial/08_odds_and_ends.md",
"tutorial/09_nonlinear.md",
"tutorial/10_parallel.md"
],
"Readings" => "readings.md",
"Old Manual" => "oldindex.md",
"Reference" => "apireference.md"
],
assets = [
**docs/src/apireference.md** — 2 changes: 2 additions & 0 deletions
@@ -49,6 +49,8 @@ DynamicPriceInterpolation
solve
MonteCarloSimulation
BoundConvergence
Asynchronous
Serial
```
## Understanding the solution
```@docs
**docs/src/examples.md** — 54 changes: 0 additions & 54 deletions

This file was deleted.

**docs/src/index.md** — 21 changes: 17 additions & 4 deletions
@@ -12,9 +12,6 @@ optimization, the SDDP algorithm, Julia, and JuMP.
If you don't have that background, you may want to brush up on some
[Readings](@ref).

!!! note
You can find the old, terribly incomplete documentation at [Old Manual](@ref).

## Getting started

This package is unregistered so you will need to `Pkg.clone` it as follows:
@@ -27,7 +24,7 @@ If you want to use the parallel features of SDDP.jl, you should start Julia with
some worker processes (`julia -p N`), or add some by running `julia> addprocs(N)` in
a running Julia session.

Once you've got SDDP.jl installed, you should read some tutorials, beginnng with
Once you've got SDDP.jl installed, you should read some tutorials, beginning with
[Tutorial One: first steps](@ref).

## Citing SDDP.jl
@@ -42,3 +39,19 @@ If you use SDDP.jl, we ask that you please cite the following [paper](http://www
year = {2017}
}
```

## FAQ

**Q.** How do I make the constraint coefficients random?

**A.** Due to the design of JuMP, it's difficult to efficiently modify constraint
coefficients. Therefore, you can only vary the right-hand side of a constraint
using the `@rhsnoise` macro.

As a workaround, we suggest you either reformulate the model so the uncertainty
appears in the RHS, or model the uncertainty as a Markov process.
[Tutorial Four: Markovian policy graphs](@ref) explains how to implement this.
You might also want to take a look at the [asset management example](https://github.com/odow/SDDP.jl/blob/master/examples/AssetManagement/asset_management.jl)
to see an example of this. Keep in mind that a new value function
is built at each Markov state, which increases the computation time and memory
requirements.
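
To illustrate the RHS reformulation, here is a minimal sketch using the `SDDPModel` API described in these docs. The reservoir setting, variable names, and data are illustrative assumptions, not taken from the examples above:

```julia
using SDDP, JuMP, Clp

# Sketch, assuming the SDDPModel-era API: the randomness (inflow) enters only
# the right-hand side of a constraint, which is the form @rhsnoise supports.
m = SDDPModel(
        stages          = 3,
        sense           = :Min,
        solver          = ClpSolver(),
        objective_bound = 0.0
    ) do sp, t
    @state(sp, 0 <= reservoir <= 200, reservoir0 == 200)
    @variables(sp, begin
        hydro   >= 0
        thermal >= 0
    end)
    # The random inflow appears additively (it moves to the constant RHS term),
    # never as a coefficient of a decision variable.
    @rhsnoise(sp, inflow in [0.0, 50.0, 100.0],
        reservoir == reservoir0 - hydro + inflow)
    @constraint(sp, hydro + thermal >= 150)
    @stageobjective(sp, 10 * thermal)
end
```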