Competitive Pong | Competitive Car-Racing
In this repo, we provide two competitive RL environments:
- Competitive Pong (cPong): extends the classic Atari game Pong into a competitive setting in which both sides can be trainable agents.
- Competitive Car-Racing (cCarRacing): allows multiple cars to race and compete on the same map.
pip install git+https://github.com/cuhkrlcourse/competitive-rl.git
You can easily create the vectorized environment with this function:
from competitive_rl import make_envs
envs = make_envs("CompetitivePongDouble-v0", num_envs=num_envs, asynchronous=True)
See docs in make_envs.py for more information.
Note that since the Pong environment is built on the Atari Pong game, we recommend following the standard Atari preprocessing pipeline for observations: convert the image to grayscale, resize it, and apply frame stacking. Please refer to this function and our wrapper for more information.
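The preprocessing steps above can be sketched in plain NumPy. This is a minimal illustration, not the repo's actual wrapper: the function names (`to_grayscale`, `resize_nearest`, `FrameStack`) and the 84x84 target size are assumptions chosen for the example.

```python
import numpy as np
from collections import deque


def to_grayscale(frame):
    # frame: (H, W, 3) uint8 RGB image -> (H, W) float32 grayscale
    return frame.astype(np.float32) @ np.array([0.299, 0.587, 0.114], dtype=np.float32)


def resize_nearest(img, h, w):
    # Nearest-neighbour resize via index arrays (avoids a cv2 dependency)
    rows = np.arange(h) * img.shape[0] // h
    cols = np.arange(w) * img.shape[1] // w
    return img[rows[:, None], cols]


class FrameStack:
    """Keep the last k preprocessed frames stacked along the channel axis."""

    def __init__(self, k=4):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, frame):
        # Fill the buffer with copies of the first frame
        for _ in range(self.k):
            self.frames.append(frame)
        return np.stack(self.frames, axis=-1)

    def step(self, frame):
        self.frames.append(frame)
        return np.stack(self.frames, axis=-1)


# Example with a fake cPong-v0 frame of shape (210, 160, 3):
obs = np.random.randint(0, 255, (210, 160, 3), dtype=np.uint8)
processed = resize_nearest(to_grayscale(obs), 84, 84)
stacked = FrameStack(k=4).reset(processed)
print(stacked.shape)  # (84, 84, 4)
```

The real wrapper in the repo may differ (e.g. in target size or interpolation); this sketch only shows the shape transformations a learning pipeline would see.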
If you want to create a single Gym environment instance:
import gym
import competitive_rl
competitive_rl.register_competitive_envs()
pong_single_env = gym.make("cPong-v0")
pong_double_env = gym.make("cPongDouble-v0")
racing_single_env = gym.make("cCarRacing-v0")
racing_double_env = gym.make("cCarRacingDouble-v0")
The observation spaces:
- cPong-v0: Box(210, 160, 3)
- cPongDouble-v0: Tuple(Box(210, 160, 3), Box(210, 160, 3))
- cCarRacing-v0: Box(96, 96, 1)
- cCarRacingDouble-v0: Box(96, 96, 1)
The action spaces:
- cPong-v0: Discrete(3)
- cPongDouble-v0: Tuple(Discrete(3), Discrete(3))
- cCarRacing-v0: Box(2,)
- cCarRacingDouble-v0: Dict(0: Box(2,), 1: Box(2,))
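The action spaces above imply the following action formats when calling `env.step(action)`. This is an illustrative sketch built only from the space definitions; the semantics of the two continuous racing dimensions (e.g. steering vs. throttle) are an assumption, so check the environment source before relying on them.

```python
import numpy as np

# cPong-v0: one discrete action in {0, 1, 2}
pong_action = 1

# cPongDouble-v0: a tuple with one discrete action per side
pong_double_action = (1, 2)

# cCarRacing-v0: a continuous 2-D control vector
# (the meaning of each dimension is an assumption here)
racing_action = np.array([0.0, 0.5], dtype=np.float32)

# cCarRacingDouble-v0: a dict mapping each car's index to its control vector
racing_double_action = {
    0: np.array([0.0, 0.5], dtype=np.float32),
    1: np.array([-0.2, 0.3], dtype=np.float32),
}
```

Each of these would be passed to the corresponding environment's `step` method; the double environments return per-agent observations and rewards in matching tuple/dict structures.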
This repo is contributed by many students and alumni from CUHK: Zhenghao Peng (@pengzhenghao), Edward Hui (@Edwardhk), Yi Zhang (@1155107756), Billy Ho (@Poiutrew1004), Joe Lam (@JoeLamKC)