@aitalk Not recommended. Multiple batch sizes have been tested and reported: up to 32 it works well, but beyond that training can't keep up with the data flow. The original authors reported a batch size of 16 as ideal.
I've personally tested batch sizes up to 1024; it takes roughly the same amount of time to converge but uses at least 4x the resources.
Keep it at 16.
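For concreteness, here is a minimal, self-contained sketch of wiring that recommended batch size into a PyTorch DataLoader. The dummy dataset and loop below are illustrative stand-ins, not this repo's actual Dataset or training code.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

BATCH_SIZE = 16  # recommended above; >32 reportedly gives no convergence speedup

# Dummy tensors standing in for the repo's real dataset (hypothetical shapes).
features = torch.randn(256, 10)
targets = torch.randint(0, 2, (256,))

train_loader = DataLoader(
    TensorDataset(features, targets),
    batch_size=BATCH_SIZE,
    shuffle=True,
    pin_memory=torch.cuda.is_available(),
)

for x, y in train_loader:
    pass  # forward pass, loss, and optimizer step would go here
```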
I am a bit confused here. When I set the batch size to 50 I get 1 it/s (9 GB of memory used), whereas when I leave it at 16 I get 2.6 it/s (7 GB of memory used) on an NVIDIA 1080 Ti. Should I still use batch size 16?
Is it possible to train it on multiple GPUs? Multiple nodes? Thanks.
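For reference, multi-GPU data-parallel training in PyTorch is usually done with DistributedDataParallel launched via torchrun; whether this repo supports it out of the box is not confirmed here. The sketch below uses a dummy model and dataset as stand-ins for the repo's own.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 2).cuda(local_rank)   # stand-in for the real model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    dataset = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
    sampler = DistributedSampler(dataset)             # shards data across ranks
    loader = DataLoader(dataset, batch_size=16, sampler=sampler)

    for x, y in loader:
        x, y = x.cuda(local_rank), y.cuda(local_rank)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
```

Multi-node runs would use the same script with torchrun's rendezvous flags pointing at a common master address.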