Simultaneously Learning and Advising in Multiagent Reinforcement Learning

This is the code used in the AAMAS 2017 paper, which proposes Ad Hoc Advising as a means of accelerating learning in multiagent systems composed of simultaneously learning agents. You are free to use all or part of the code presented here for any purpose, provided that the paper is properly cited and the original authors are properly credited. All files shared here come with no warranties.

Paper bib entry:

@inproceedings{SilvaAndCosta2017,
  author    = {Silva, Felipe Leno da and Glatt, Ruben and Costa, Anna Helena Reali},
  title     = {{Simultaneously Learning and Advising in Multiagent Reinforcement Learning}},
  booktitle = {Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS)},
  year      = {2017},
  pages     = {1100--1108}
}


This project was built on Python 2.7. All experiments are executed on the HFO platform (https://github.com/LARG/HFO); we include the version we used in the HFO folder (slightly different from the standard HFO). For the graph generation code, you will need to install Jupyter Notebook (http://jupyter.readthedocs.io/en/latest/install.html).
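
For reference, a minimal way to get Jupyter, assuming pip is available (the linked page lists alternatives, such as installing via Anaconda):

```bash
# Install Jupyter Notebook via pip (see the linked instructions for alternatives)
pip install jupyter
```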

Files

The folder HFO contains the HFO server we used for experiments.

The folder AdHoc contains our implementation of all algorithms and experiments.

Finally, the folder ProcessedFiles contains already-processed .csv files for graph plotting and data visualization.

How to use

First, install HFO following the instructions at https://github.com/LARG/HFO; a sketch of the build steps follows.
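
As a minimal sketch, the bundled server can be built with the standard CMake steps from the HFO instructions (the exact dependencies and flags are assumptions here; check the linked README if they differ):

```bash
# Build the HFO server shipped in the HFO folder
# (standard CMake-based build described in the HFO instructions)
cd HFO
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j4        # compile; adjust -j to your core count
make install    # install step from the HFO instructions
```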

In the AdHoc folder, executing the script experiment1and2.sh is enough to run the first and second experiments, as shown below. However, it will take a very long time for the experiments to complete, so it may be worth running more than one algorithm at the same time if you have enough computing power.
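
For example, assuming a Unix shell:

```bash
cd AdHoc
bash experiment1and2.sh   # runs the first and second experiments (long-running)
```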

Executing experiment3.sh runs the third experiment. Before running it, execute the script pretrain.sh to store the Q-table of the already-trained agent; the two scripts are run in order, as sketched below.
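
In shell terms (again from the AdHoc folder):

```bash
cd AdHoc
bash pretrain.sh      # stores the Q-table of the already-trained agent
bash experiment3.sh   # runs the third experiment using the stored Q-table
```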

The result of any experiment is a folder of .csv files, which can be used to generate graphs with evaluation-leno.ipynb in Jupyter Notebook (all files used for the paper are in the ProcessedFiles folder).
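
For example, assuming Jupyter is started from the folder containing the notebook:

```bash
# Open the graph-generation notebook; point it at your experiment's
# .csv output folder (or at ProcessedFiles for the paper's data)
jupyter notebook evaluation-leno.ipynb
```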

Contact

For questions about the code or the paper, please send an email to the first author.
