A D4PG agent learns to play tennis against itself.
In this multi-agent reinforcement learning task, two agents sharing the same neural network weights play tennis, each with the goal of maximizing its own score. Specifically, the environment provides:
- States: A stack of 3 state vectors, each of size 8, for a total of 24 state variables per time step. The state variables correspond to the agent's horizontal and vertical position and velocity, as well as the ball's position.
- Actions: A vector of size 2 corresponding to the agent's movement away from or toward the net, and jumping.
- Rewards: A reward of 0.1 when the agent hits the ball over the net, and a reward of -0.01 when the agent hits the ball out of bounds or lets it hit the ground on its side of the court.

Each episode ends when the ball flies out of bounds or hits the ground. The episode score is the maximum of the two scores achieved by the agents. Finally, the environment is considered solved when the average episode score over a window of 100 episodes reaches 0.5.
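To make the scoring rule concrete, here is a minimal sketch (not code from the repository) of how the per-episode score and the 100-episode solve criterion can be computed:

```python
from collections import deque

import numpy as np

scores_window = deque(maxlen=100)  # last 100 episode scores

def record_episode(agent_scores, scores_window):
    """agent_scores: the two agents' undiscounted returns for one episode."""
    episode_score = np.max(agent_scores)  # episode score = max over the two agents
    scores_window.append(episode_score)
    # Solved once the rolling 100-episode average reaches 0.5.
    solved = len(scores_window) == 100 and np.mean(scores_window) >= 0.5
    return episode_score, solved
```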
Follow the instructions here to:
- Create a `conda` environment.
- Clone the Udacity Deep RL repository.
- Install Python packages into the environment.
- Create an IPython kernel using the environment.
The OpenAI Gym instructions can be skipped.
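As a quick sanity check after setup (a sketch, assuming the packages from the Udacity repository installed cleanly), confirm that the key imports resolve inside the new kernel:

```python
# Run from the IPython kernel created above.
import torch
from unityagents import UnityEnvironment  # installed with the Udacity Deep RL repo

print("PyTorch version:", torch.__version__)
```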
In order to watch the agent play the game, you also need to download the environment by following the instructions here.
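Once the environment file is downloaded, it can be opened with the `unityagents` wrapper. The sketch below assumes a Linux binary at `Tennis_Linux/Tennis.x86_64`; adjust `file_name` for your OS. It prints the sizes described above:

```python
from unityagents import UnityEnvironment

# file_name is an assumption for Linux; on macOS it would be e.g. "Tennis.app".
env = UnityEnvironment(file_name="Tennis_Linux/Tennis.x86_64")
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

env_info = env.reset(train_mode=True)[brain_name]
print("Number of agents:", len(env_info.agents))       # 2
print("States:", env_info.vector_observations.shape)   # (2, 24)
print("Action size:", brain.vector_action_space_size)  # 2
```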
Once you've completed the setup, you can:
- Open `Tennis.ipynb`.
- Select the kernel created during setup.
- Run all the cells in the notebook to train the agent (a sketch of the training loop is shown below).
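For orientation, the notebook's cells implement a self-play loop along the lines of the sketch below. The `Agent` interface (`act`, `step`, `reset`) is a hypothetical stand-in for the repository's D4PG implementation, not its actual API:

```python
import numpy as np

def train(env, agent, brain_name, n_episodes=2000):
    """Self-play training loop sketch; both rackets share one agent."""
    for i_episode in range(1, n_episodes + 1):
        env_info = env.reset(train_mode=True)[brain_name]
        states = env_info.vector_observations    # shape (2, 24)
        scores = np.zeros(len(env_info.agents))  # one return per agent
        agent.reset()                            # e.g. reset exploration noise
        while True:
            actions = agent.act(states)          # shape (2, 2), clipped to [-1, 1]
            env_info = env.step(actions)[brain_name]
            agent.step(states, actions, env_info.rewards,
                       env_info.vector_observations, env_info.local_done)
            scores += env_info.rewards
            states = env_info.vector_observations
            if np.any(env_info.local_done):      # episode ends for both agents
                break
        print(f"Episode {i_episode}: score {np.max(scores):.2f}")
```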
Follow the instructions here, load the saved neural network weights (`actor.pth` and `critic.pth`), and watch the trained agent interact with the environment!
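In code, watching the trained agent looks roughly like the sketch below, which continues from the environment-loading snippet above and again assumes a hypothetical agent whose `actor_local`/`critic_local` networks match the saved checkpoints:

```python
import numpy as np
import torch

# Hypothetical attribute names; they must match how the networks were saved.
agent.actor_local.load_state_dict(torch.load("actor.pth"))
agent.critic_local.load_state_dict(torch.load("critic.pth"))

env_info = env.reset(train_mode=False)[brain_name]  # train_mode=False plays at viewing speed
states = env_info.vector_observations
while True:
    actions = agent.act(states, add_noise=False)    # act greedily, no exploration noise
    env_info = env.step(actions)[brain_name]
    states = env_info.vector_observations
    if np.any(env_info.local_done):
        break
env.close()
```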