diff --git a/doc/source/rllib/core-concepts.rst b/doc/source/rllib/core-concepts.rst
index e62630a09e45..6d65b6443763 100644
--- a/doc/source/rllib/core-concepts.rst
+++ b/doc/source/rllib/core-concepts.rst
@@ -32,7 +32,7 @@ An environment in RL is the agent's world, it is a simulation of the problem to
An RLlib environment consists of:
1. all possible actions (**action space**)
-2. a complete omniscient description of the environment, nothing hidden (**state space**)
+2. a complete description of the environment, nothing hidden (**state space**)
3. an observation by the agent of certain parts of the state (**observation space**)
4. **reward**, which is the only feedback the agent receives per action.
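+
+As a concrete sketch (not RLlib-specific; it assumes the classic ``gym`` API,
+where ``step()`` returns a 4-tuple, and uses ``Taxi-v3`` purely as an example),
+these pieces map onto a Gym environment like this:
+
+.. code-block:: python
+
+    import gym
+
+    env = gym.make("Taxi-v3")
+    print(env.action_space)       # all possible actions, e.g. Discrete(6)
+    print(env.observation_space)  # what the agent observes, e.g. Discrete(500)
+
+    obs = env.reset()
+    # The reward returned by step() is the only per-action feedback.
+    obs, reward, done, info = env.step(env.action_space.sample())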
diff --git a/doc/source/rllib/index.rst b/doc/source/rllib/index.rst
index 7192bf8262a7..b19a80491ad8 100644
--- a/doc/source/rllib/index.rst
+++ b/doc/source/rllib/index.rst
@@ -66,7 +66,7 @@ To be able to run our Atari examples, you should also install:
After these quick pip installs, you can start coding against RLlib.
-Here is an example of running a PPO Trainer on the "`Taxi domain `_"
-for a few training iterations, then perform a single evaluation loop
+Here is an example of running a PPO Trainer on the `Taxi domain `_
+for a few training iterations, then performing a single evaluation loop
(with rendering enabled):
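+
+A minimal sketch of such a script (it assumes the
+``ray.rllib.agents.ppo.PPOTrainer`` API; config keys such as
+``evaluation_config`` and ``render_env`` can vary across RLlib versions):
+
+.. code-block:: python
+
+    from ray.rllib.agents.ppo import PPOTrainer
+
+    trainer = PPOTrainer(config={
+        "env": "Taxi-v3",
+        # Reserve one worker for evaluation and render the env during it.
+        "evaluation_num_workers": 1,
+        "evaluation_config": {"render_env": True},
+    })
+
+    # A few training iterations.
+    for _ in range(3):
+        print(trainer.train())
+
+    # A single evaluation loop on the evaluation workers.
+    trainer.evaluate()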
diff --git a/doc/source/rllib/rllib-env.rst b/doc/source/rllib/rllib-env.rst
index 77fa27b645bf..3725538cd7e9 100644
--- a/doc/source/rllib/rllib-env.rst
+++ b/doc/source/rllib/rllib-env.rst
@@ -13,6 +13,8 @@ RLlib works with several different types of environments, including `OpenAI Gym
.. image:: images/rllib-envs.svg
+.. _configuring-environments:
+
Configuring Environments
------------------------
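+
+As a quick taste of what this section covers, here is a minimal sketch of handing
+RLlib a custom environment class (``MyEnv`` is hypothetical; the sketch assumes
+the classic ``gym.Env`` interface and the ``ray.rllib.agents.ppo.PPOTrainer`` API):
+
+.. code-block:: python
+
+    import gym
+    from gym.spaces import Discrete
+    from ray.rllib.agents.ppo import PPOTrainer
+
+    class MyEnv(gym.Env):
+        """Trivial env: action 1 ends the episode with reward 1.0."""
+
+        def __init__(self, env_config):
+            # env_config is whatever you pass via config["env_config"].
+            self.action_space = Discrete(2)
+            self.observation_space = Discrete(1)
+
+        def reset(self):
+            return 0
+
+        def step(self, action):
+            return 0, float(action), bool(action), {}
+
+    trainer = PPOTrainer(config={"env": MyEnv, "env_config": {}})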