Commit

Merge pull request #86 from gtbook/frank_jul17
Fixes and comments in 6.5 and 6.6
dellaert authored Jul 17, 2024
2 parents eb4f92a + bf63b0a commit 68d82fb
Showing 4 changed files with 438 additions and 444 deletions.
4 changes: 2 additions & 2 deletions S57_diffdrive_summary.ipynb
@@ -272,7 +272,7 @@
"The kinematics of differential drive robots are described in detail in [Introduction to Autonomous Mobile Robots](https://mitpress.mit.edu/9780262015356/introduction-to-autonomous-mobile-robots/) by Siegwart, Nourbakhsh, Scaramuzza {cite:p}`Siegwart11book_robots`\n",
"\n",
"The first mathematically rigorous book on robot motion planning was written by Latombe\n",
"in the early nineties {cite:p}`JCL:91`.\n",
"in the early nineties {cite:p}`Latombe91book`.\n",
"Brian Eno once remarked\n",
"that only about 1,000 people bought the first Velvet Underground album, but every one of them formed a rock 'n' roll band. Latombe's book held this status in robotics; if you owned it, likely as not, you went on to become a researcher in robot motion planning.\n",
"In subsequent years, [Principles of Robot Motion](https://mitpress.mit.edu/9780262033275/principles-of-robot-motion/) by Choset, et al. {cite:p}`Choset05book_motion`\n",
@@ -283,7 +283,7 @@
"Excellent introductions to the material on machine learning can be found in\n",
"[Deep Learning](https://www.deeplearningbook.org/) by Goodfellow, Bengio, and Courville {cite:p}`Goodfellow16book_dl`\n",
"and\n",
"[Dive into Deep Learning](https://d2l.ai/) by Zhang et al. {cite:p}`Zhang20book_d2l`"
"[Dive into Deep Learning](https://d2l.ai/) by Zhang et al. {cite:p}`Zhang20book_d2l`."
]
}
],
16 changes: 8 additions & 8 deletions S65_driving_planning.ipynb
@@ -197,7 +197,7 @@
"<figcaption>Initial and final configuration for a lane change maneuver. </figcaption>\n",
"</figure>\n",
"\n",
"Here, we have taken the $s$-axis to be longitudinal direction (parallel to the highway), and the $d$-axis is along the lateral direction (perpendicular to the highway). This choice of coordinates will be convenient below, when we generalize trajectrories to arbitrary curves. In addition, for now let us assume that the car speed satisfies $s=t$.\n",
"Here, we have taken the $s$-axis to be longitudinal direction (parallel to the highway), and the $d$-axis is along the lateral direction (perpendicular to the highway). This choice of coordinates will be convenient below, when we generalize trajectories to arbitrary curves. In addition, for now let us assume that the car speed satisfies $s=t$.\n",
"Below, we will generalize further by defining $s$ to be the distance along the path\n",
"(instead of a linear distance along a straight lane), and\n",
"$s(t)$ to be the time parameterization of the path.\n",
@@ -244,15 +244,15 @@
"So, if we wish to match initial and final conditions on heading (which is defined by the first derivative of $d$),\n",
"we would require a cubic polynomial, and if we wished to also satisfy lateral acceleration constraints, we would need a fifth order polynomial.\n",
"The lateral velocity and acceleration for the trajectory are given by the first and second derivatives of $d$,\n",
"which we denote by $d$ and $d’’$, respectively.\n",
"which we denote by $d'$ and $d''$, respectively.\n",
"For a fifth order polynomial, we have \n",
"\n",
"\n",
"$$ \n",
"\\begin{aligned}\n",
"d(s) &=& \\alpha_0 + \\alpha_1 s + \\alpha_2 s^2 + \\alpha_3 s^3 + \\alpha_4 s^4 + \\alpha_5 s^5\\\\\n",
"d(s) &=& \\alpha_1 + 2 \\alpha_2 s + 3 \\alpha_3 s^2 + 4 \\alpha_4 s^3 + 5 \\alpha_5 s^4\\\\\n",
"d’’(s) &=& 2 \\alpha_2 +6 \\alpha_3 s + 12 \\alpha_4 s^2 + 20 \\alpha_5 s^3 \n",
"d'(s) &=& \\alpha_1 + 2 \\alpha_2 s + 3 \\alpha_3 s^2 + 4 \\alpha_4 s^3 + 5 \\alpha_5 s^4\\\\\n",
"d''(s) &=& 2 \\alpha_2 +6 \\alpha_3 s + 12 \\alpha_4 s^2 + 20 \\alpha_5 s^3 \n",
"\\end{aligned}\n",
"$$"
]
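The quintic and its first two derivatives above are straightforward to evaluate numerically. A minimal NumPy sketch (our own illustration, not code from the notebook; the coefficient values are placeholders):

```python
import numpy as np

# Placeholder coefficients [alpha_0, ..., alpha_5]; purely illustrative values.
alpha = np.array([0.0, 0.0, 0.0, 0.1, -0.05, 0.005])

def d(s, a):
    """d(s) = a0 + a1 s + a2 s^2 + a3 s^3 + a4 s^4 + a5 s^5."""
    return a @ s ** np.arange(6)

def d_prime(s, a):
    """d'(s) = a1 + 2 a2 s + 3 a3 s^2 + 4 a4 s^3 + 5 a5 s^4."""
    k = np.arange(1, 6)
    return (k * a[1:]) @ s ** (k - 1)

def d_double_prime(s, a):
    """d''(s) = 2 a2 + 6 a3 s + 12 a4 s^2 + 20 a5 s^3."""
    k = np.arange(2, 6)
    return (k * (k - 1) * a[2:]) @ s ** (k - 2)

print(d(1.0, alpha), d_prime(1.0, alpha), d_double_prime(1.0, alpha))
```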
@@ -266,13 +266,13 @@
"$$ \n",
"\\begin{aligned}\n",
"y(0) = 0& =& \\alpha_0 \\\\\n",
"y(0) = 0 & =& \\alpha_1 \\\\\n",
"y’’(0) = 0 &=& 2 \\alpha_2 \\\\\n",
"y'(0) = 0 & =& \\alpha_1 \\\\\n",
"y''(0) = 0 &=& 2 \\alpha_2 \\\\\n",
"y(x_\\mathrm{g}) = y_\\mathrm{g} &=&\n",
" \\alpha_0 + \\alpha_1 x_\\mathrm{g} + \\alpha_2 x_\\mathrm{g}^2 + \\alpha_3 x_\\mathrm{g}^3 + \\alpha_4 x_\\mathrm{g}^4 + \\alpha_5 x_\\mathrm{g}^5 \\\\\n",
"y(x_\\mathrm{g}) = 0 &=&\n",
"y'(x_\\mathrm{g}) = 0 &=&\n",
" \\alpha_1 + 2 \\alpha_2 x_\\mathrm{g} + 3 \\alpha_3 x_\\mathrm{g}^2 + 4 \\alpha_4 x_\\mathrm{g}^3 + 5 \\alpha_5 x_\\mathrm{g}^4 \\\\\n",
"y’’(x_\\mathrm{g}) = 0 &=&\n",
"y''(x_\\mathrm{g}) = 0 &=&\n",
" 2 \\alpha_2 +6 \\alpha_3 x_\\mathrm{g} + 12 \\alpha_4 x_\\mathrm{g}^2 + 20 \\alpha_5 x_\\mathrm{g}^3 \n",
"\\end{aligned}\n",
"$$\n",
42 changes: 24 additions & 18 deletions S66_driving_DRL.ipynb
@@ -68,7 +68,7 @@
"\n",
"<img src=\"Figures6/S66-Autonomous_Vehicle_with_LIDAR_and_cameras-03.jpg\" alt=\"Splash image with steampunk autonomous car\" width=\"40%\" align=center style=\"vertical-align:middle;margin:10px 0px\">\n",
"\n",
"Deep reinforcement learning (DRL) brings the power of deep learning to much more complex domains than what we were able to tackle with the Markov Decision Processes and RL concepts introduced in Chapter 3. The use of large, expressive neural networks has allowed researchers and practitioners alike to work with high bandwidth sensors such as video streams and LIDAR, and bring the promise of RL into real-world domains such as autonomous driving. This is still a field of active discovery and research, however, and we can give but a brief introduction here about what is a vast literature and problem space."
"Deep reinforcement learning (DRL) applies the power of deep learning to bring reinforcement learning to much more complex domains than what we were able to tackle with the Markov Decision Processes and RL concepts introduced in Chapter 3. The use of large, expressive neural networks has allowed researchers and practitioners alike to work with high bandwidth sensors such as video streams and LIDAR, and bring the promise of RL into real-world domains such as autonomous driving. This is still a field of active discovery and research, however, and we can give but a brief introduction here about what is a vast literature and problem space."
]
},
{
@@ -82,14 +82,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"A simple example in the autonomous driving domain is *lane switching*. Suppose we are driving along at 3-lane highway, and we can see some ways ahead, and some ways behind us. We are driving at a speed that is comfortable to us, but other cars have different ideas about the optimal speed to drive at. Hence, sometimes we would like to change lanes, and we could learn a policy to do this for us. This is called **lateral control**. A more sophisticated example would also allow us to adapt our speed to the traffic pattern, but by relying on a smart cruise control system we could safely ignore this **longitudinal control** problem."
"A simple example in the autonomous driving domain is *lane switching*. Suppose we are driving along at 3-lane highway, and we can see some ways ahead, and some ways behind us. We are driving at a speed that is comfortable to us, but other cars have different ideas about the optimal speed to drive at. Hence, sometimes we would like to change lanes, and we could learn a policy to do this for us. As discussed in Section 6.5, this is **lateral control**. A more sophisticated example would also allow us to adapt our speed to the traffic pattern, but by relying on a smart cruise control system we could safely ignore the **longitudinal control** problem."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To turn this into a reinforcement learning problem, we first need to define a state space ${\\cal X}$ and an action space ${\\cal A}$. There are a variety of ways to engineer this aspect of the problem. For example, we could somehow encode the longitudinal distance and lane index for each of the K closest cars, where K is a parameter, say 5 or 10. One problem is that the number of cars that are *actually* present is variable, which is difficult to deal with. Another approach is to make this into an image processing problem, by creating a finite element representation of the road before and behind us, and marking a cell as occupied or not. The latter is fairly compatible with automotive sensors such as LIDAR."
"To turn this into a reinforcement learning problem, we first need to define a state space ${\\cal X}$ and an action space ${\\cal A}$. There are a variety of ways to engineer this aspect of the problem. For example, we could somehow encode the longitudinal distance and lane index for each of the K closest cars, where K is a parameter, say 5 or 10. One problem is that the number of cars that are *actually* present is variable, which is difficult to deal with. Another approach is to make this into an image processing problem, by creating a finite element representation of the road before and behind us, and marking each cell as occupied or not. The latter is fairly compatible with automotive sensors such as LIDAR."
]
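As a rough illustration of the occupancy-style state representation described above — the grid dimensions, cell size, and sensing horizon are assumptions for the sketch, not values from the book:

```python
import numpy as np

# Illustrative grid: 3 lanes, 5 m cells, covering 50 m behind to 50 m ahead of ego.
NUM_LANES, CELL_LENGTH, HORIZON = 3, 5.0, 50.0

def occupancy_state(ego_s, other_cars):
    """other_cars: list of (s, lane_index) pairs for surrounding vehicles."""
    num_cells = int(2 * HORIZON / CELL_LENGTH)
    grid = np.zeros((NUM_LANES, num_cells), dtype=np.float32)
    for s, lane in other_cars:
        offset = s - ego_s
        if -HORIZON <= offset < HORIZON:
            cell = int((offset + HORIZON) // CELL_LENGTH)
            grid[lane, cell] = 1.0   # mark the cell as occupied
    return grid

state = occupancy_state(ego_s=100.0, other_cars=[(112.0, 0), (95.0, 2)])
```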
},
{
@@ -98,7 +98,7 @@
"source": [
"In terms of actions, the easiest approach is to have a number of *discrete* choices to go `left`, `right`, or `stay` in the current lane. We could be more sophisticated about it and have both \"aggressive\" and \"slow\" versions of these in addition to a default version, akin to the motion primitives we previously discussed.\n",
"\n",
"Actually implementing this on an autonomous vehicle, or even sketching an implementation in a notebook with recorded or simulated data, is beyond what we can accomplish in a notebook. Hence, we will be content below to sketch three popular foundational methods from deep reinforcement learning, without actually implementing them here. At the end of this chapter we provide some references where you can delve into these topics more deeply."
"Actually implementing this on an autonomous vehicle, or even sketching an implementation in a notebook with recorded or simulated data, is beyond what we can accomplish in a jupyter notebook. Hence, we will be content below to sketch three popular foundational methods from deep reinforcement learning, without actually implementing them here. At the end of this chapter we provide some references where you can delve into these topics more deeply."
]
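A minimal sketch of such a discrete action set, including the optional aggressive variants mentioned above (the exact set is an assumption, not the book's definition):

```python
from enum import Enum

class LaneAction(Enum):
    """Illustrative discrete action set for lane changing."""
    LEFT = 0
    STAY = 1
    RIGHT = 2
    LEFT_AGGRESSIVE = 3   # optional refinement, akin to motion primitives
    RIGHT_AGGRESSIVE = 4
```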
},
{
@@ -115,13 +115,13 @@
"\\pi^*(x) = \\arg \\max_a Q^*(x,a)\n",
"$$\n",
"\n",
"where $Q^*(x,a)$ denote the Q-values for the *optimal* policy. In Q-learning, we start with some random Q-values and then gradually estimate the optimal Q-values by alpha-blending between old and new estimates:\n",
"where $Q^*(x,a)$ denote the Q-values for the *optimal* policy. In Q-learning, we start with some random Q-values and then iteratively improve the estimate for the optimal Q-values by alpha-blending between old and new estimates:\n",
"\n",
"$$\n",
"\\hat{Q}(x,a) \\leftarrow (1-\\alpha) \\hat{Q}(x,a) + \\alpha~\\text{target}(x,a,x')\n",
"$$\n",
"\n",
"where $\\text{target}(x,a,x') \\doteq R(x,a,x') + \\gamma \\max_{a'} \\hat{Q}(x',a')$ is the \"target\" value that we think is an improvment on the previous value $\\hat{Q}(x,a)$."
"where $\\text{target}(x,a,x') \\doteq R(x,a,x') + \\gamma \\max_{a'} \\hat{Q}(x',a')$ is the \"target\" value that we think is an improvement on the previous value $\\hat{Q}(x,a)$. Indeed: the target $\\text{target}(x,a,x')$ uses the current estimate of the Q-values for future states, but improves on this by using the *known* reward $R(x,a,x')$ for the current action in the current state."
]
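The alpha-blended update above is easy to state in code for the tabular case. A sketch with made-up state and action counts (our own, not from the notebook):

```python
import numpy as np

def q_update(Q, x, a, r, x_next, alpha=0.1, gamma=0.99):
    """One alpha-blended Q-learning step on a tabular Q array of shape
    (num_states, num_actions), following the update rule above."""
    target = r + gamma * np.max(Q[x_next])
    Q[x, a] = (1 - alpha) * Q[x, a] + alpha * target
    return Q

# Tiny example: 5 states, 3 actions (left, stay, right).
Q = np.zeros((5, 3))
Q = q_update(Q, x=2, a=1, r=1.0, x_next=3)
```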
},
{
@@ -134,15 +134,21 @@
"Q^*(x,a) \\approx Q(x,a; \\theta)\n",
"$$\n",
"\n",
"DQN as a method uses two additional ideas that are crucial in making the training converge to something sensible in difficult problems. The first is splitting the training into *execution* and *experience replay* phases:\n",
"It might be worthwhile to re-visit Section 5.6, where we introduced neural networks and how to train them using stochastic gradient descent (SGD). In the context of RL, the DQN method uses two additional ideas that are crucial in making the training converge to something sensible in difficult problems. The first is splitting the training into *execution* and *experience replay* phases:\n",
"\n",
"- during the **execution phase**, it executes the policy (possibly with some degree of randomness) and stores the experiences $(x,a,r,x')$, with $r$ the reward, in a dataset $D$;\n",
"- during **experience replay**, it *randomly samples* from these experiences to create mini-batches of data, which are in turn used to perform stochastic gradient descent (SGD) on the parameters $\\theta$.\n",
"- during the **execution phase**, the policy is executed (possibly with some degree of randomness) and the experiences $(x,a,r,x')$, with $r$ the reward, are stored in a dataset $D$;\n",
"- during **experience replay**, we *randomly sample* from these experiences to create mini-batches of data, which are in turn used to perform SGD on the parameters $\\theta$.\n",
"\n",
"The second idea is to calculate the target values $\\text{target}(x,a,x') \\doteq R(x,a,x') + \\gamma \\max_{a'} \\hat{Q}(x',a'; \\theta^{old})$ with the parameters $\\theta^{old}$ from the previous epoch, to provide a more stable approximation target. The mini-batch loss we minimize using SGD is then\n",
"The second idea is to calculate the target values \n",
"\n",
"$$\n",
"\\sum_{(x,a,r,x')} [R(x,a,x') + \\gamma \\max_{a'} \\hat{Q}(x',a'; \\theta^{old}) - Q(x,a; \\theta)]^2\n",
"\\text{target}(x,a,x') \\doteq R(x,a,x') + \\gamma \\max_{a'} \\hat{Q}(x',a'; \\theta^{old})$$\n",
"\n",
"with the parameters $\\theta^{old}$ from the previous epoch, to provide a more stable approximation target.\n",
"The mini-batch loss we minimize using SGD is then\n",
"\n",
"$$\n",
"\\mathcal{L}_{\\text{DQN}}(\\theta; D) \\doteq \\sum_{(x,a,r,x')\\in D} [R(x,a,x') + \\gamma \\max_{a'} \\hat{Q}(x',a'; \\theta^{old}) - Q(x,a; \\theta)]^2\n",
"$$\n",
"\n",
"With this basic scheme, a team from DeepMind was able to achieve human or super-human performance on about 50 Atari 2600 games in 2015 {cite:p}`Mnih15nature_dqn`.\n",
@@ -161,12 +161,12 @@
"Whereas the above gets at an optimal policy indirectly, via deep Q-learning, a different and very popular idea is to directly parameterize the policy using a neural network, with weights $\\theta$. It is common to make this a **stochastic policy**,\n",
"\n",
"$$\n",
"\\pi(a|x, \\theta)\n",
"\\pi(a|x; \\theta)\n",
"$$\n",
"\n",
"where $a \\in {\\cal A}$ is an action, $x \\in {\\cal X}$ is a state, and the policy outputs a *probability* for each action $a$ based on the state $x$. One of the reasons to prefer stochastic policies is that they are differentiable, which allows us to optimize for them via fradient descent, as we explore in the next section.\n",
"where $a \\in {\\cal A}$ is an action, $x \\in {\\cal X}$ is a state, and the policy outputs a *probability* for each action $a$ based on the state $x$. One of the reasons to prefer stochastic policies is that they are differentiable, as they output continuous values rather than discrete actions. This allows us to optimize for them via gradient descent, as we explore in the next section.\n",
"\n",
"In Section 5 we used *supervised* learning to train neural networks, and we just this above for learning Q-values in DQN. It is useful to consider how this might work for training a *policy*. Recall from Section 5.6 that we defined the empirical cross-entropy loss as\n",
"In Chapter 5 we used *supervised* learning to train neural networks, and we just applied this for learning Q-values in DQN. It is useful to consider how this might work for training a *policy*. Recall from Section 5.6 that we defined the empirical cross-entropy loss as\n",
"\n",
"$$\\mathcal{L}_{\\text{CE}}(\\theta; D) \\doteq \\sum_c \\sum_{(x,y=c)\\in D}\\log\\frac{1}{p_c(x;\\theta)}$$\n",
"\n",
@@ -176,7 +176,7 @@
"\n",
"This formulation is equivalent, but now we are summing over all classes for each data point, with $y_c$ acting as a *weight*, either one or zero. When someone is so kind as to give us the optimal action $y_a$ (as a one-hot encoding) for every state $x$ in some dataset $D$, we can apply this same loss function to a stochastic policy, obtaining\n",
"\n",
"$$\\mathcal{L}_{\\text{CE}}(\\theta; D) = -\\sum_{(x,y)\\in D} \\sum_a y_a \\log \\pi(a| x, \\theta)$$"
"$$\\mathcal{L}_{\\text{CE}}(\\theta; D) = -\\sum_{(x,y)\\in D} \\sum_a y_a \\log \\pi(a| x; \\theta)$$"
]
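To make the weighted cross-entropy loss above concrete, a small sketch (our own names; `policy(x, theta)` is assumed to return a probability vector over actions, and `y` is the one-hot expert action):

```python
import numpy as np

def cross_entropy_policy_loss(policy, theta, dataset):
    """Sum over (x, y) pairs of -sum_a y_a log pi(a|x; theta)."""
    loss = 0.0
    for x, y in dataset:
        probs = policy(x, theta)
        loss -= np.sum(y * np.log(probs + 1e-12))   # small constant avoids log(0)
    return loss
```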
},
{
@@ -192,7 +192,7 @@
"\\hat{Q}(x_{it},a_{it}) \\doteq \\sum_{k=t}^H \\gamma^{k-t}R(x_{ik},a_{ik},x_{ik}').\n",
"$$\n",
"\n",
"We can then use this as an alternative to the \"one or zero\" weight above, obtaining\n",
"Note in each rollout we can only sum until $k=H$, so Q-values earlier in the rollout will be estimated more accurately. Regardless, we can then use these estimated Q-values as an alternative to the \"one or zero\" weight above, obtaining\n",
"\n",
"$$\n",
"\\mathcal{L}(\\theta) = - \\sum_i \\sum_{t=1}^H \\hat{Q}(x_{it},a_{it}) \\log \\pi(a_{it}|x_{it}, \\theta)\n",
@@ -246,8 +252,8 @@
"\n",
"where $\\alpha$ is a learning rate.\n",
"\n",
"The algorithm above, using the estimated Q-values, is almost identical to the REINFORCE method {cite:p}`Williams92ml_reinforce`. That algorithm further improves on performance by not using the raw Q-values but rather the difference between the Q-values and some baseline policy. This has the effect of reducing the variance that comes from estimating the Q-values from a finite amount of data each time.\n",
"The REINFORCE algorithm was introduced in 1992 and hence pre-dates the deep-learning revolution by about 20 years. It should also be said that in DRL, the neural networks that are used are typically not very deep. Several modern methods, such as \"proximal policy optimization\" (PPO) apply a number of techniques to improve this basic method even further and make it more sample-efficient. PPO is now one of the most often-used DRL methods."
"The algorithm above, using the estimated Q-values, is almost identical to the REINFORCE method {cite:p}`Williams92ml_reinforce`. That algorithm further improves on performance by not using the raw Q-values but rather the difference between the Q-values and some baseline policy. This has the effect of reducing the variance in the estimated Q-values due to using only a finite amount of data.\n",
"The REINFORCE algorithm was introduced in 1992 and hence pre-dates the deep-learning revolution by about 20 years. It should also be said that in DRL, the neural networks that are used are typically not very deep. Several modern methods, such as \"proximal policy optimization\" (PPO) {cite:p}`Schulman17_PPO` apply a number of techniques to improve this basic method even further and make it more sample-efficient. PPO is now one of the most often-used DRL methods."
]
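A sketch tying this section together: reward-to-go estimates of the Q-values from rollouts, plugged into the weighted log-probability loss above. All names are ours; `policy(x, theta)` is assumed to return a probability vector over actions:

```python
import numpy as np

def estimate_q_values(rewards, gamma=0.99):
    """Discounted reward-to-go Q-hat(x_t, a_t) for one rollout, i.e. the sum over k = t..H."""
    q_hat = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        q_hat[t] = running
    return q_hat

def policy_gradient_loss(policy, theta, rollouts, gamma=0.99):
    """- sum_i sum_t Q-hat(x_it, a_it) log pi(a_it | x_it; theta).
    Each rollout is a list of (x, a, r) triples."""
    loss = 0.0
    for rollout in rollouts:
        rewards = [r for _, _, r in rollout]
        q_hat = estimate_q_values(rewards, gamma)
        for (x, a, _), q in zip(rollout, q_hat):
            loss -= q * np.log(policy(x, theta)[a] + 1e-12)
    return loss
```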
},
{