diff --git a/README.md b/README.md
index de81100..96deea4 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ These notebooks assume some familiarity with the [Julia programming language](ht
 
 For reference on Gen see:
 
-- The [Gen documentation](https://www.gen.dev/dev/)
+- The [Gen documentation](https://www.gen.dev/docs/dev/)
 
 - Documentation for [GenPyTorch.jl](https://probcomp.github.io/GenPyTorch.jl/dev/) and [GenTF.jl](https://probcomp.github.io/GenTF/dev/)
 
diff --git a/tutorials/Data-Driven Proposals in Gen.ipynb b/tutorials/Data-Driven Proposals in Gen.ipynb
index 6ea0e07..f0fe191 100644
--- a/tutorials/Data-Driven Proposals in Gen.ipynb
+++ b/tutorials/Data-Driven Proposals in Gen.ipynb
@@ -909,7 +909,7 @@
 "\n",
 "To see how to use the built-in importance resampling function, run\n",
 "```?Gen.importance_resampling``` or check out the\n",
-"[documentation](https://www.gen.dev/dev/ref/importance/#Gen.importance_resampling)."
+"[documentation](https://www.gen.dev/docs/dev/ref/importance/#Gen.importance_resampling)."
 ]
 },
 {
@@ -1319,7 +1319,7 @@
 "metadata": {},
 "source": [
 "We will propose the x-coordinate of the destination from a\n",
-"[piecewise_uniform](https://www.gen.dev/dev/ref/distributions/#Gen.piecewise_uniform)\n",
+"[piecewise_uniform](https://www.gen.dev/docs/dev/ref/distributions/#Gen.piecewise_uniform)\n",
 "distribution, where we set higher probability for certain bins based on the\n",
 "heuristic described above and use a uniform continuous distribution for the\n",
 "coordinate within a bin. The `compute_bin_probs` function below computes the\n",
@@ -1774,7 +1774,7 @@
 "source": [
 "Our choice of the `score_high` value of 5.0 was somewhat arbitrary. To use a\n",
 "more informed value, we can make `score_high` into a [*trainable\n",
-"parameter*](https://www.gen.dev/dev/ref/gfi/#Trainable-parameters-1)\n",
+"parameter*](https://www.gen.dev/docs/dev/ref/gfi/#Trainable-parameters-1)\n",
 "of the generative function. Below, we write a new version of the proposal\n",
 "function that makes `score_high` trainable. However, the optimization\n",
 "algorithms we will use for training work best with *unconstrained* parameters\n",
@@ -1928,7 +1928,7 @@
 "Next, we choose the type of optimization algorithm we will use for training. Gen\n",
 "supports a set of gradient-based optimization algorithms (see [Optimizing\n",
 "Trainable\n",
-"Parameters](https://www.gen.dev/dev/ref/parameter_optimization/#Optimizing-Trainable-Parameters-1)).\n",
+"Parameters](https://www.gen.dev/docs/dev/ref/parameter_optimization/#Optimizing-Trainable-Parameters-1)).\n",
 "Here we will use gradient descent with a fixed step size of 0.001."
 ]
 },
diff --git a/tutorials/Data-Driven Proposals in Gen.jl b/tutorials/Data-Driven Proposals in Gen.jl
index 0d67675..c360545 100644
--- a/tutorials/Data-Driven Proposals in Gen.jl
+++ b/tutorials/Data-Driven Proposals in Gen.jl
@@ -390,7 +390,7 @@ measurements = [
 #
 # To see how to use the built-in importance resampling function, run
 # ```?Gen.importance_resampling``` or check out the
-# [documentation](https://www.gen.dev/dev/ref/importance/#Gen.importance_resampling).
+# [documentation](https://www.gen.dev/docs/dev/ref/importance/#Gen.importance_resampling).
 # We have provided some starter code.
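For reference while reviewing the link updates above, here is a minimal sketch of how `Gen.importance_resampling` is typically called, per the documentation page the hunk points at. The model `toy_model`, its addresses, and the data below are hypothetical placeholders, not the tutorial's actual model; the assumed signature is `(trace, log_ml_est) = importance_resampling(model, args, observations, num_samples)`.

```julia
using Gen

# Hypothetical stand-in for the tutorial's model: infer a latent
# destination coordinate from noisy measurements.
@gen function toy_model(n::Int)
    dest_x ~ uniform(0.0, 1.0)            # latent quantity to infer
    for i in 1:n
        {(:m, i)} ~ normal(dest_x, 0.1)   # noisy observations
    end
end

# Observations enter as a choice map from addresses to recorded values.
observations = Gen.choicemap()
for (i, m) in enumerate([0.21, 0.25, 0.22])
    observations[(:m, i)] = m
end

# Draw 100 weighted traces from the prior and keep one in proportion to its
# importance weight; also returns a log marginal likelihood estimate.
(trace, log_ml_est) = Gen.importance_resampling(toy_model, (3,), observations, 100)
println(trace[:dest_x])
```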
@@ -614,7 +614,7 @@ num_x_bins = 5
 num_y_bins = 5;
 
 # We will propose the x-coordinate of the destination from a
-# [piecewise_uniform](https://www.gen.dev/dev/ref/distributions/#Gen.piecewise_uniform)
+# [piecewise_uniform](https://www.gen.dev/docs/dev/ref/distributions/#Gen.piecewise_uniform)
 # distribution, where we set higher probability for certain bins based on the
 # heuristic described above and use a uniform continuous distribution for the
 # coordinate within a bin. The `compute_bin_probs` function below computes the
@@ -839,7 +839,7 @@ visualize_inference(measurements, scene, start, computation_amt=5, samples=1000)
 
 # Our choice of the `score_high` value of 5.0 was somewhat arbitrary. To use a
 # more informed value, we can make `score_high` into a [*trainable
-# parameter*](https://www.gen.dev/dev/ref/gfi/#Trainable-parameters-1)
+# parameter*](https://www.gen.dev/docs/dev/ref/gfi/#Trainable-parameters-1)
 # of the generative function. Below, we write a new version of the proposal
 # function that makes `score_high` trainable. However, the optimization
 # algorithms we will use for training work best with *unconstrained* parameters
@@ -927,7 +927,7 @@ end;
 # Next, we choose the type of optimization algorithm we will use for training. Gen
 # supports a set of gradient-based optimization algorithms (see [Optimizing
 # Trainable
-# Parameters](https://www.gen.dev/dev/ref/parameter_optimization/#Optimizing-Trainable-Parameters-1)).
+# Parameters](https://www.gen.dev/docs/dev/ref/parameter_optimization/#Optimizing-Trainable-Parameters-1)).
 # Here we will use gradient descent with a fixed step size of 0.001.
 
 update = Gen.ParamUpdate(Gen.FixedStepGradientDescent(0.001), custom_dest_proposal_trainable);
 
diff --git a/tutorials/Introduction to Modeling in Gen.ipynb b/tutorials/Introduction to Modeling in Gen.ipynb
index d3875e4..252c18d 100644
--- a/tutorials/Introduction to Modeling in Gen.ipynb
+++ b/tutorials/Introduction to Modeling in Gen.ipynb
@@ -9269,7 +9269,7 @@
 "What if we wanted to predict `ys` given `xs`?\n",
 "\n",
 "Using the API method\n",
-"[`generate`](https://www.gen.dev/dev/ref/gfi/#Gen.generate), we\n",
+"[`generate`](https://www.gen.dev/docs/dev/ref/gfi/#Gen.generate), we\n",
 "can generate a trace of a generative function in which the values of certain\n",
 "random choices are constrained to given values. The constraints are a choice\n",
 "map that maps the addresses of the constrained random choices to their\n",
diff --git a/tutorials/Introduction to Modeling in Gen.jl b/tutorials/Introduction to Modeling in Gen.jl
index 59c7a26..8a577af 100644
--- a/tutorials/Introduction to Modeling in Gen.jl
+++ b/tutorials/Introduction to Modeling in Gen.jl
@@ -512,7 +512,7 @@ scatter(xs, ys_sine, color="black", label=nothing)
 # What if we wanted to predict `ys` given `xs`?
 #
 # Using the API method
-# [`generate`](https://www.gen.dev/dev/ref/gfi/#Gen.generate), we
+# [`generate`](https://www.gen.dev/docs/dev/ref/gfi/#Gen.generate), we
 # can generate a trace of a generative function in which the values of certain
 # random choices are constrained to given values. The constraints are a choice
 # map that maps the addresses of the constrained random choices to their
diff --git a/tutorials/Particle Filtering in Gen.ipynb b/tutorials/Particle Filtering in Gen.ipynb
index 33b5309..bcbda92 100644
--- a/tutorials/Particle Filtering in Gen.ipynb
+++ b/tutorials/Particle Filtering in Gen.ipynb
@@ -46,12 +46,12 @@
 "\"bearings only tracking\" problem described in [4]. \n",
 "\n",
 "This notebook will also introduce you to the \n",
-"[`Unfold`](https://www.gen.dev/dev/ref/combinators/#Unfold-combinator-1) combinator, \n",
+"[`Unfold`](https://www.gen.dev/docs/dev/ref/combinators/#Unfold-combinator-1) combinator, \n",
 "which can be used to improve performance of SMC.\n",
 "`Unfold` is just one example of the levers that Gen provides for\n",
 "improving performance; once you understand it, you can check\n",
 "Gen's documentation to see how similar principles apply to the \n",
-"[`Map`](https://www.gen.dev/dev/ref/combinators/#Map-combinator-1) combinator \n",
+"[`Map`](https://www.gen.dev/docs/dev/ref/combinators/#Map-combinator-1) combinator \n",
 "and to the static DSL. (These features are also covered in the previous tutorial,\n",
 "[Scaling with Combinators and the Static Modeling Language](Scaling%20with%20Combinators%20and%20the%20Static%20Modeling%20Language.ipynb).)\n",
 "\n",
@@ -607,7 +607,7 @@
 "filter produces.\n",
 "\n",
 "Gen provides methods for initializing and updating the state of a particle\n",
-"filter, documented in [Particle Filtering](https://www.gen.dev/dev/ref/pf/).\n",
+"filter, documented in [Particle Filtering](https://www.gen.dev/docs/dev/ref/pf/).\n",
 "\n",
 "- `Gen.initialize_particle_filter`\n",
 "\n",
@@ -678,7 +678,7 @@
 "- The new arguments to the generative function for this step. In our case,\n",
 "  this is the number of measurements beyond the first measurement.\n",
 "\n",
-"- The [argdiff](https://www.gen.dev/dev/ref/gfi/#Argdiffs-1)\n",
+"- The [argdiff](https://www.gen.dev/docs/dev/ref/gfi/#Argdiffs-1)\n",
 "  value, which provides detailed information about the change to the\n",
 "  arguments between the previous step and this step. We will revisit this\n",
 "  value later. For now, we indicate that we do not know how the `T::Int`\n",
@@ -42966,7 +42966,7 @@
 "DSL to be very flexible and to have a simple implementation, at the cost of\n",
 "performance. There are several ways of improving performance after one has a\n",
 "prototype written in the built-in modeling DSL. One of these is [Generative\n",
-"Function Combinators](https://www.gen.dev/dev/ref/combinators/), which make \n",
+"Function Combinators](https://www.gen.dev/docs/dev/ref/combinators/), which make \n",
 "the flow of information through the generative process more explicit to Gen, \n",
 "and enable asymptotically more efficient inference programs.\n",
 "\n",
@@ -42997,7 +42997,7 @@
 "This `for` loop has a very specific pattern of information flow—there is a\n",
 "sequence of states (represented by `x`, `y`, `vx`, and `vy`), and each state is\n",
 "generated from the previous state. This is exactly the pattern that the\n",
-"[Unfold](https://www.gen.dev/dev/ref/combinators/#Unfold-combinator-1)\n",
+"[Unfold](https://www.gen.dev/docs/dev/ref/combinators/#Unfold-combinator-1)\n",
 "generative function combinator is designed to handle.\n",
 "\n",
 "Below, we re-express the Julia `for` loop over the state sequence using the\n",
diff --git a/tutorials/Particle Filtering in Gen.jl b/tutorials/Particle Filtering in Gen.jl
index 6b08674..b2e511d 100644
--- a/tutorials/Particle Filtering in Gen.jl
+++ b/tutorials/Particle Filtering in Gen.jl
@@ -56,12 +56,12 @@
 # "bearings only tracking" problem described in [4]. 
 #
 # This notebook will also introduce you to the
-# [`Unfold`](https://www.gen.dev/dev/ref/combinators/#Unfold-combinator-1) combinator,
+# [`Unfold`](https://www.gen.dev/docs/dev/ref/combinators/#Unfold-combinator-1) combinator,
 # which can be used to improve performance of SMC.
 # `Unfold` is just one example of the levers that Gen provides for
 # improving performance; once you understand it, you can check
 # Gen's documentation to see how similar principles apply to the
-# [`Map`](https://www.gen.dev/dev/ref/combinators/#Map-combinator-1) combinator
+# [`Map`](https://www.gen.dev/docs/dev/ref/combinators/#Map-combinator-1) combinator
 # and to the static DSL. (These features are also covered in the previous tutorial,
 # [Scaling with Combinators and the Static Modeling Language](Scaling%20with%20Combinators%20and%20the%20Static%20Modeling%20Language.ipynb).)
 #
@@ -243,7 +243,7 @@ title!("Observed bearings (lines) and positions (dots)")
 # filter produces.
 #
 # Gen provides methods for initializing and updating the state of a particle
-# filter, documented in [Particle Filtering](https://www.gen.dev/dev/ref/pf/).
+# filter, documented in [Particle Filtering](https://www.gen.dev/docs/dev/ref/pf/).
 #
 # - `Gen.initialize_particle_filter`
 #
@@ -302,7 +302,7 @@ end;
 # - The new arguments to the generative function for this step. In our case,
 #   this is the number of measurements beyond the first measurement.
 #
-# - The [argdiff](https://www.gen.dev/dev/ref/gfi/#Argdiffs-1)
+# - The [argdiff](https://www.gen.dev/docs/dev/ref/gfi/#Argdiffs-1)
 #   value, which provides detailed information about the change to the
 #   arguments between the previous step and this step. We will revisit this
 #   value later. For now, we indicate that we do not know how the `T::Int`
@@ -592,7 +592,7 @@ title!("Rejuvenation with resimulation MH on the starting points")
 # DSL to be very flexible and to have a simple implementation, at the cost of
 # performance. There are several ways of improving performance after one has a
 # prototype written in the built-in modeling DSL. One of these is [Generative
-# Function Combinators](https://www.gen.dev/dev/ref/combinators/), which make
+# Function Combinators](https://www.gen.dev/docs/dev/ref/combinators/), which make
 # the flow of information through the generative process more explicit to Gen,
 # and enable asymptotically more efficient inference programs.
 #
@@ -623,7 +623,7 @@ title!("Rejuvenation with resimulation MH on the starting points")
 # This `for` loop has a very specific pattern of information flow—there is a
 # sequence of states (represented by `x`, `y`, `vx`, and `vy`), and each state is
 # generated from the previous state. This is exactly the pattern that the
-# [Unfold](https://www.gen.dev/dev/ref/combinators/#Unfold-combinator-1)
+# [Unfold](https://www.gen.dev/docs/dev/ref/combinators/#Unfold-combinator-1)
 # generative function combinator is designed to handle.
 #
 # Below, we re-express the Julia `for` loop over the state sequence using the
diff --git a/tutorials/Reversible-Jump MCMC in Gen.ipynb b/tutorials/Reversible-Jump MCMC in Gen.ipynb
index 559071b..23ff89d 100644
--- a/tutorials/Reversible-Jump MCMC in Gen.ipynb
+++ b/tutorials/Reversible-Jump MCMC in Gen.ipynb
@@ -83,7 +83,7 @@
 "\n",
 "### Using `@dist` to define new distributions for convenience \n",
 "To sample the number of segments, we need a distribution with support only on\n",
-"the positive integers. We create one using the [`@dist` DSL](https://www.gen.dev/dev/ref/distributions/#dist_dsl-1):"
+"the positive integers. We create one using the [`@dist` DSL](https://www.gen.dev/docs/dev/ref/distributions/#dist_dsl-1):"
 ]
 },
 {
diff --git a/tutorials/Reversible-Jump MCMC in Gen.jl b/tutorials/Reversible-Jump MCMC in Gen.jl
index 0d6f8ab..8a0480c 100644
--- a/tutorials/Reversible-Jump MCMC in Gen.jl
+++ b/tutorials/Reversible-Jump MCMC in Gen.jl
@@ -74,7 +74,7 @@ Logging.disable_logging(Logging.Info);
 #
 # ### Using `@dist` to define new distributions for convenience
 # To sample the number of segments, we need a distribution with support only on
-# the positive integers. We create one using the [`@dist` DSL](https://www.gen.dev/dev/ref/distributions/#dist_dsl-1):
+# the positive integers. We create one using the [`@dist` DSL](https://www.gen.dev/docs/dev/ref/distributions/#dist_dsl-1):
 
 # A distribution that is guaranteed to be 1 or higher.
 @dist poisson_plus_one(rate) = poisson(rate) + 1;
 
diff --git a/tutorials/Scaling with Combinators and the Static Modeling Language.ipynb b/tutorials/Scaling with Combinators and the Static Modeling Language.ipynb
index 0b9e7ea..98e5138 100644
--- a/tutorials/Scaling with Combinators and the Static Modeling Language.ipynb
+++ b/tutorials/Scaling with Combinators and the Static Modeling Language.ipynb
@@ -11,11 +11,11 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Up until this point, we have been using [Gen's generic built-in modeling language](https://www.gen.dev/dev/ref/modeling/), which is a very flexible modeling language that is shallowly embedded in Julia. However, better performance and scaling characteristics can be obtained using specialized modeling languages or modeling constructs. This notebook introduces two built-in features of Gen:\n",
+"Up until this point, we have been using [Gen's generic built-in modeling language](https://www.gen.dev/docs/dev/ref/modeling/), which is a very flexible modeling language that is shallowly embedded in Julia. However, better performance and scaling characteristics can be obtained using specialized modeling languages or modeling constructs. This notebook introduces two built-in features of Gen:\n",
 "\n",
-"- A more specialized [Static Modeling Language](https://www.gen.dev/dev/ref/modeling/#Static-Modeling-Language-1) which is built-in to Gen.\n",
+"- A more specialized [Static Modeling Language](https://www.gen.dev/docs/dev/ref/modeling/#Static-Modeling-Language-1) which is built into Gen.\n",
 "\n",
-"- A class of modeling constructs called [Generative function combinators](https://www.gen.dev/dev/ref/combinators/).\n",
+"- A class of modeling constructs called [Generative function combinators](https://www.gen.dev/docs/dev/ref/combinators/).\n",
 "\n",
 "These features provide both constant-factor speedups and improvements in asymptotic orders of growth over the generic built-in modeling language.\n",
 "\n",
diff --git a/tutorials/Scaling with Combinators and the Static Modeling Language.jl b/tutorials/Scaling with Combinators and the Static Modeling Language.jl
index b9cf0f4..1fe3176 100644
--- a/tutorials/Scaling with Combinators and the Static Modeling Language.jl
+++ b/tutorials/Scaling with Combinators and the Static Modeling Language.jl
@@ -15,11 +15,11 @@
 
 # # Scaling with Combinators and the Static Modeling Language
 
-# Up until this point, we have been using [Gen's generic built-in modeling language](https://www.gen.dev/dev/ref/modeling/), which is a very flexible modeling language that is shallowly embedded in Julia. However, better performance and scaling characteristics can be obtained using specialized modeling languages or modeling constructs. This notebook introduces two built-in features of Gen:
+# Up until this point, we have been using [Gen's generic built-in modeling language](https://www.gen.dev/docs/dev/ref/modeling/), which is a very flexible modeling language that is shallowly embedded in Julia. However, better performance and scaling characteristics can be obtained using specialized modeling languages or modeling constructs. This notebook introduces two built-in features of Gen:
 #
-# - A more specialized [Static Modeling Language](https://www.gen.dev/dev/ref/modeling/#Static-Modeling-Language-1) which is built-in to Gen.
+# - A more specialized [Static Modeling Language](https://www.gen.dev/docs/dev/ref/modeling/#Static-Modeling-Language-1) which is built into Gen.
 #
-# - A class of modeling constructs called [Generative function combinators](https://www.gen.dev/dev/ref/combinators/).
+# - A class of modeling constructs called [Generative function combinators](https://www.gen.dev/docs/dev/ref/combinators/).
 #
 # These features provide both constant-factor speedups and improvements in asymptotic orders of growth over the generic built-in modeling language.
 #
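For reference on the two features the hunks above link to, here is a minimal sketch of the static modeling language and the `Map` combinator. The model, data, and names are hypothetical, not taken from the tutorials; `Gen.@load_generated_functions` is assumed to be required by the Gen versions these tutorials target (older releases used the function form `Gen.load_generated_functions()`).

```julia
using Gen

# Kernel applied once per data point (hypothetical example model).
@gen (static) function datum(x::Float64, slope::Float64)
    y ~ normal(slope * x, 0.1)
    return y
end

# `Map` lifts the kernel to act across vectors of arguments, making the
# independence between data points explicit to Gen.
data = Map(datum)

@gen (static) function model(xs::Vector{Float64})
    slope ~ normal(0.0, 2.0)
    slopes = fill(slope, length(xs))
    ys ~ data(xs, slopes)
    return ys
end

# Compile the static-DSL functions once all definitions are loaded.
Gen.@load_generated_functions

trace = Gen.simulate(model, ([0.0, 1.0, 2.0],))
```

Making the per-data-point structure explicit through `Map` is what allows Gen to touch only the affected addresses during trace updates, which is the source of the asymptotic efficiency gains the linked pages describe.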