[Bug Report] actor's std becomes "nan" during PPO training #33
Comments
Me too.
Can confirm I've experienced it too. In my case, I had introduced some sparse rewards to my environment. Not sure that's the cause, though.
Same problem here. When visualizing the training data in TensorBoard, I noticed that Loss/value_function suddenly goes to infinity.
Same problem.
Thanks for your answer, I'll try it out.
When you hit the std >= 0 error, check the reported 'Value Function Loss' to see whether it is inf. If it is, there is a fix you can try. Based on issue ray-project/ray#19291, its fix ray-project/ray#22171, and commit ray-project/ray@ddd1160, the code starting at L159 in the ppo.py file of rsl_rl (version 2.0.2) needs to be modified as follows:
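(The snippet originally attached here did not survive. Below is only a minimal sketch of the kind of change the Ray fix makes: clamping the squared value error directly rather than taking the pessimistic max of clipped and unclipped errors. The function name, argument names, and the default clip value are assumptions for illustration, not the exact rsl_rl code.)

```python
import torch

def value_function_loss(value_batch: torch.Tensor,
                        returns_batch: torch.Tensor,
                        use_clipped_value_loss: bool = True,
                        vf_clip: float = 10.0) -> torch.Tensor:
    """Sketch of a Ray-style clipped value loss.

    The squared error itself is clamped to [0, vf_clip], so a handful of
    exploding value predictions cannot drive the mean loss to inf.
    """
    value_losses = (value_batch - returns_batch).pow(2)
    if use_clipped_value_loss:
        value_losses = torch.clamp(value_losses, 0.0, vf_clip)
    return value_losses.mean()
```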
Note that this kind of method may not work, and it may slow down learning. I tested it with 'iteration: 30000' and 'num_envs: 12000 to 30000' while training my own robot, and the training process still failed randomly somewhere between 1,000 and 18,000 iterations. I checked the 'value batch' and 'return batch': once training failed, both contained very large positive or negative numbers. I ultimately completed the entire training run by modifying the rewards and penalties. Since I'm still new to RL, I don't know exactly what happened. I also tried modifying the PPO hyperparameters and the network structure, but that didn't work either. I would greatly appreciate any information on this topic. There is also a crude way to keep training going: when the std >= 0 assertion fires and the Value Function Loss shows inf, you can adjust some parameters in the project and then use --resume to load the last checkpoint and continue training.
Adding actions = torch.clip(actions, min=-6.28, max=6.28) before env.step(actions) seems to help. It is also better to add a penalty on the action magnitude to discourage the actor from outputting very large values.
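As a concrete illustration (a minimal sketch only; policy, env, the clip limit of 6.28, and the penalty weight are placeholders, not values taken from this project), the clipping and the action-magnitude penalty could look like this:

```python
import torch

def step_with_clipped_actions(policy, env, obs, limit: float = 6.28):
    # Clip the sampled actions before stepping the environment so extreme
    # actor outputs cannot destabilize the simulation.
    actions = policy(obs)
    actions = torch.clip(actions, min=-limit, max=limit)
    return env.step(actions)

def action_magnitude_penalty(actions: torch.Tensor, weight: float = 0.01) -> torch.Tensor:
    # Negative reward proportional to the squared action magnitude, which
    # discourages the actor from drifting toward very large outputs.
    return -weight * torch.sum(actions.pow(2), dim=-1)
```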
I am conducting reinforcement learning for a robot using rsl_rl and Isaac Lab. It works fine with simple settings, but when I switch to more complex settings (such as domain randomization), the following error occurs after some progress in training, indicating that the actor's standard deviation does not satisfy the condition of being ≥ 0. Has anyone experienced a similar error?
num_envs is 3600.
I investigated the value of std (self.scale) and found that the std value for one of the environments becomes nan. (The number of columns corresponds to the robot's action dimensions.)
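To narrow this down, a debugging check along these lines can flag which environments produce a non-finite std before the distribution assertion fires (a sketch only; the function name and the way the std tensor is obtained are assumptions):

```python
import torch

def check_action_std(std: torch.Tensor) -> None:
    # std is expected to have shape (num_envs, num_actions); report any
    # environments whose std contains NaN or inf.
    bad = ~torch.isfinite(std)
    if bad.any():
        env_ids = bad.any(dim=-1).nonzero(as_tuple=False).flatten()
        print(f"Non-finite action std in envs: {env_ids.tolist()}")
```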