saikrishna-1996/deep_pepper_chess
Deep Pepper

MCTS-based algorithm for parallel training of a chess engine. Adapted from existing deep learning game engines such as Giraffe and AlphaZero, Deep Pepper is a clean-room implementation of a chess engine that leverages Stockfish for the opening and closing book, and learns a policy entirely through self-play.
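As a rough illustration of the MCTS loop behind self-play training, here is a minimal UCT sketch. It runs on a toy counting game rather than chess, and everything in it (the game, the names, the exploration constant) is an illustrative assumption, not this repository's implementation:

```python
import math
import random

# Toy game (an illustrative assumption, not the chess environment):
# players alternately add 1 or 2 to a running count; whoever reaches
# exactly 10 wins.
TARGET = 10

def legal_moves(count):
    return [m for m in (1, 2) if count + m <= TARGET]

def is_terminal(count):
    return count >= TARGET

class Node:
    def __init__(self, count, parent=None, move=None):
        self.count = count
        self.parent = parent
        self.move = move          # move that produced this node
        self.children = []
        self.visits = 0
        self.wins = 0.0           # wins for the player who just moved

def uct_child(node, c=1.4):
    # Selection rule: exploitation term plus exploration bonus.
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout_wins_for_mover(count):
    # Random playout: does the player to move at `count` win?
    turn = 0
    while not is_terminal(count):
        count += random.choice(legal_moves(count))
        if is_terminal(count):
            return turn == 0
        turn = 1 - turn
    return False  # position was already won by the previous player

def mcts_best_move(root_count, iterations=2000):
    root = Node(root_count)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded, non-terminal nodes.
        while node.children and len(node.children) == len(legal_moves(node.count)):
            node = uct_child(node)
        # 2. Expansion: add one untried move, if the node is not terminal.
        if not is_terminal(node.count):
            tried = {ch.move for ch in node.children}
            move = random.choice([m for m in legal_moves(node.count) if m not in tried])
            child = Node(node.count + move, parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new node.
        reward = 0.0 if rollout_wins_for_mover(node.count) else 1.0
        # 4. Backpropagation: flip the reward perspective at each level.
        while node is not None:
            node.visits += 1
            node.wins += reward
            reward = 1.0 - reward
            node = node.parent
    # Recommend the most-visited root move, as is conventional.
    return max(root.children, key=lambda ch: ch.visits).move
```

In AlphaZero-style engines the random rollout is typically replaced by a learned value network and the selection rule gains a policy prior (PUCT), but the four-phase select/expand/simulate/backpropagate loop is the same.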

Technologies Used

We use the following technologies to train the model and interface with the Stockfish chess engine.

  • python-chess - For handling the chess environment and gameplay.
  • pytorch - For training and inference.
  • Stockfish - For value function and endgame evaluation.
  • Tensorboard - For visualizing training progress.
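One detail worth spelling out for the Stockfish bullet: to use an engine evaluation as a value-function target, the centipawn score has to be squashed into the network's output range. The helper below is a hedged sketch of one common mapping (tanh with an assumed 400-centipawn scale); the function name and the scale are illustrative, not taken from this repository:

```python
import math

def score_to_value(centipawns, scale=400.0):
    """Map an engine centipawn score to a value in [-1, 1].

    `scale` controls how fast the value saturates: with the assumed
    400 cp scale, a clearly winning +400 cp maps to tanh(1) ~ 0.76.
    Both the mapping and the scale are illustrative assumptions.
    """
    return math.tanh(centipawns / scale)
```

An even position maps to 0.0, and very large scores saturate near ±1, which keeps extreme engine evaluations from dominating the training loss.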

Setup Instructions

  1. Run pip install -r requirements.txt to install the necessary dependencies.
  2. Run python launch_script.py to start training the Chess Engine.

Acknowledgements
