Training speed becomes slower iteration by iteration #10
I found the reason: setting the learning rate outside the training loop avoids this problem.
@philokey Hi, I ran into the same problem. What do you mean by "setting the learning rate outside the loop avoids this problem"? That seems strange to me. Have you found the root cause?
@flowice Every time you set the learning rate, TensorFlow adds a new node to the graph. So if you set the learning rate in every iteration, the graph keeps accumulating nodes and training becomes very slow.
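A minimal sketch of that fix (illustrative variable names, not the project's actual code; assumes a TF1-style graph, which is still available in TF2 through the `compat.v1` layer). Instead of calling `tf.assign` on the learning rate inside the loop, the rate is defined once as a placeholder and fed each step, so the graph never grows:

```python
# Sketch only: hypothetical names (w, loss, lr_ph) stand in for the real model.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Slow pattern (avoid): each tf.assign(...) call inside the loop adds a new
# node to the graph, so every iteration gets a little slower:
#   for it in range(max_iters):
#       sess.run(tf.assign(lr, base_lr * decay ** it))  # grows the graph!
#       sess.run(train_op, feed_dict=feed_dict)

# Fix: define the learning rate once, outside the loop, and feed its value.
w = tf.Variable(1.0)
loss = tf.square(w)
lr_ph = tf.placeholder(tf.float32, shape=[])  # fed a fresh value each step
train_op = tf.train.GradientDescentOptimizer(lr_ph).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    n_nodes = len(tf.get_default_graph().as_graph_def().node)
    for it in range(100):
        sess.run(train_op, feed_dict={lr_ph: 0.1 * 0.9 ** it})
    # No new nodes were added, so per-iteration time stays flat.
    assert len(tf.get_default_graph().as_graph_def().node) == n_nodes
```

The same reasoning applies to any op construction (`tf.assign`, summaries, etc.) placed inside the training loop: build the graph once, then only run it.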
@philokey Wonderful! I will give it a try. Thank you.
Hello,
When I train the model, the per-iteration speed slows down: at the beginning it is about 0.4 s/iter, but after 10,000 iterations it drops to about 1 s/iter. However, the time spent in the TensorFlow session call
rpn_loss_cls_value, rpn_loss_box_value, loss_cls_value, loss_box_value, _ = sess.run([rpn_cross_entropy, rpn_loss_box, cross_entropy, loss_box, train_op], feed_dict=feed_dict)
does not increase.
What's more, CPU time is much higher than at the beginning, and GPU utilization is often 0%. I therefore suspect something is wrong in roi_data_layer, which runs on the CPU.
I have checked the code but cannot find a bug. Has anyone met this problem, and how can it be solved?
Thank you.
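One way to localize a slowdown like this is to time each phase of the loop separately, so a per-phase log shows whether the CPU-side data layer or the session call is the part that grows. A small helper (the loop body below is a hypothetical sketch — `data_layer.forward` stands in for whatever builds `feed_dict` in the real code):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, seconds elapsed)."""
    t0 = time.perf_counter()
    out = fn(*args, **kwargs)
    return out, time.perf_counter() - t0

# Hypothetical usage inside the training loop:
#   blobs, t_data = timed(data_layer.forward)                    # minibatch prep (CPU)
#   _,     t_sess = timed(sess.run, train_op, feed_dict=feed_dict)
#   print("iter %d: data %.3fs  sess %.3fs" % (it, t_data, t_sess))
```

Plotting `t_data` and `t_sess` against the iteration number makes it obvious which phase (if either) is the one that drifts upward.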