Reproduce (performance of) the following reinforcement learning methods:
- Nature-DQN in: Human-level Control Through Deep Reinforcement Learning
- Double-DQN in: Deep Reinforcement Learning with Double Q-learning
- Dueling-DQN in: Dueling Network Architectures for Deep Reinforcement Learning (see the sketch after this list for how the three DQN targets differ)
- A3C in: Asynchronous Methods for Deep Reinforcement Learning. (I used a modified version where each batch contains transitions from different simulators, which I called "Batch-A3C".)
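As a quick reference for how the three DQN variants above differ, here is a minimal numpy sketch of their training targets. This is my own illustration of the formulas from the papers, not code from `DQN.py`:

```python
import numpy as np

def nature_dqn_target(r, done, q_target_next, gamma=0.99):
    # Nature-DQN: bootstrap from the max Q-value of the target network.
    return r + gamma * (1.0 - done) * q_target_next.max(axis=1)

def double_dqn_target(r, done, q_online_next, q_target_next, gamma=0.99):
    # Double-DQN: the online network selects the next action,
    # the target network evaluates it, which reduces overestimation.
    a = q_online_next.argmax(axis=1)
    return r + gamma * (1.0 - done) * q_target_next[np.arange(len(a)), a]

def dueling_q(value, advantage):
    # Dueling-DQN changes the network head rather than the target:
    # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
    return value + advantage - advantage.mean(axis=1, keepdims=True)
```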
Claimed performance in the papers can be reproduced on several games I've tested.
On one GTX 1080Ti, the ALE version took ~3 hours of training to reach a score of 21 (the maximum) on Pong, and ~15 hours to reach a score of 400 on Breakout. It runs at 50 batches (~3.2k trained frames, 200 seen frames, 800 game frames) per second on a GTX 1080Ti.
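These figures are internally consistent; a quick back-of-the-envelope check (the batch size of 64 and frame skip of 4 below are my assumptions, inferred from the quoted ratios):

```python
batches_per_sec = 50
batch_size = 64      # assumed: 3200 trained frames / 50 batches
frame_skip = 4       # standard Atari frame skip: 800 game frames / 200 seen frames

trained_frames = batches_per_sec * batch_size   # 3200 frames sampled from replay per second
seen_frames = 200                               # new frames entering the replay buffer per second
game_frames = seen_frames * frame_skip          # 800 raw emulator frames per second

# On average, each frame is sampled for training 3200 / 200 = 16 times.
```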
Install ALE and gym.

Download an Atari ROM, e.g.:
```
wget https://github.com/openai/atari-py/raw/master/atari_py/atari_roms/breakout.bin
```
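To check that ALE can load the ROM, a minimal sketch (this assumes the `ale_python_interface` Python binding; adjust the import to match your ALE installation):

```python
from ale_python_interface import ALEInterface

ale = ALEInterface()
ale.loadROM(b'breakout.bin')        # ROM downloaded above; the binding expects bytes
print(ale.getMinimalActionSet())    # the legal actions for this game
```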
Start Training:
```
./DQN.py --env breakout.bin
# use `--algo` to select other DQN algorithms. See `-h` for more options.
```
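For example, to train the Double-DQN variant (`Double` is my guess at the flag value; run `-h` for the exact algorithm names):

```
./DQN.py --env breakout.bin --algo Double
```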
Watch the agent play:
```
# Download pretrained models or use one you trained:
wget http://models.tensorpack.com/DeepQNetwork/DoubleDQN-Breakout.npz
./DQN.py --env breakout.bin --task play --load DoubleDQN-Breakout.npz
```
Alternatively, train on gym's Atari environments instead of ALE. Install gym and atari_py, then:
```
./DQN.py --env BreakoutDeterministic-v4
```
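The `BreakoutDeterministic-v4` name refers to gym's Breakout with a fixed frame skip of 4 and no repeat-action stochasticity. A quick way to inspect such an environment with the classic gym API:

```python
import gym

env = gym.make('BreakoutDeterministic-v4')
obs = env.reset()                     # (210, 160, 3) RGB frame
print(env.action_space)               # Discrete(4) for Breakout
obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```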
A3C code and models for Atari games in OpenAI Gym are released in examples/A3C-Gym.