
[Feature Request] Support for distributional DQN algorithms (C51, Rainbow) #2269

Open
roger-creus opened this issue Jul 5, 2024 · 2 comments


roger-creus commented Jul 5, 2024

Is the Distributional Q-Value Actor currently fully supported? If so, are there any plans to integrate C51 and, more importantly, Rainbow into the list of sota-implementations?

roger-creus added the enhancement label Jul 5, 2024

vmoens commented Jul 5, 2024

We have a version of this here:
https://pytorch.org/rl/stable/reference/generated/torchrl.objectives.DistributionalDQNLoss.html#torchrl.objectives.DistributionalDQNLoss
but I don't think we have an official version of Rainbow yet (even though it was the first thing we had in the lib; for some reason we never made a script that was high-quality enough to be made public!).
LMK if you need further help with it!
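For reference, a minimal sketch of how that loss is typically wired up, assuming the torchrl API from the docs linked above; the observation size, support range, network depth, and key names below are illustrative choices, not values from this thread:

```python
import torch
from tensordict.nn import TensorDictModule
from torchrl.modules import MLP, DistributionalQValueActor
from torchrl.objectives import DistributionalDQNLoss

n_obs, n_actions, n_atoms = 8, 4, 51            # illustrative sizes
support = torch.linspace(-10.0, 10.0, n_atoms)  # C51 value support z_1..z_N

# Network emitting one logit per (atom, action) pair.
net = MLP(in_features=n_obs, out_features=(n_atoms, n_actions), depth=2)
module = TensorDictModule(net, in_keys=["observation"], out_keys=["action_value"])

# The actor normalizes the logits over atoms and picks greedy actions
# from the expected value of each action's distribution.
actor = DistributionalQValueActor(
    module=module, support=support, action_space="one_hot"
)

# Unlike most torchrl losses, this one takes the discount in its constructor.
loss_fn = DistributionalDQNLoss(actor, gamma=0.99)
```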

roger-creus (Author) commented

I have implemented a first version of Rainbow containing all the tricks (Dueling DQN, Distributional RL, Prioritized Experience Replay, etc.), and I am now running some preliminary experiments to debug its performance and make sure it works well.

However, I had to change this line to

```python
Tz = reward + (1 - terminated.to(reward.dtype)) * discount.unsqueeze(-1) * support.repeat(batch_size, 1)
```

Otherwise I would get shape errors. Let me know if this makes sense!
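For anyone following along, here is a self-contained sketch of the shape reasoning behind that fix. It is an illustrative re-derivation of the C51 target support with made-up shapes, not the actual torchrl code:

```python
import torch

# Illustrative shapes (assumptions, not from torchrl): one scalar reward
# and terminated flag per transition, a per-transition discount, and a
# fixed atom support shared across the batch.
batch_size, n_atoms = 32, 51
v_min, v_max = -10.0, 10.0
support = torch.linspace(v_min, v_max, n_atoms)            # [n_atoms]
reward = torch.randn(batch_size, 1)                        # [batch, 1]
terminated = torch.zeros(batch_size, 1, dtype=torch.bool)  # [batch, 1]
discount = torch.full((batch_size,), 0.99)                 # [batch]

# Bellman-updated atoms Tz = r + gamma * z_j, with the bootstrap term
# zeroed at terminal states. Casting `terminated` to the reward dtype
# turns the boolean flag into an arithmetic mask, and unsqueezing
# `discount` to [batch, 1] lets it broadcast against the
# [batch, n_atoms] repeated support.
Tz = reward + (1 - terminated.to(reward.dtype)) * discount.unsqueeze(-1) \
    * support.repeat(batch_size, 1)
Tz = Tz.clamp(v_min, v_max)                                # [batch, n_atoms]
print(Tz.shape)  # torch.Size([32, 51])
```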
