Hi, thank you for the excellent work! I noticed that in Table 5 of the appendix of your paper, you report computation efficiency, stating that fine-tuning DynamiCrafter for 20k iterations on 8*A800 takes 8 hours (averaging about 1.4 seconds per iteration). However, the comments in the train_512.yaml file indicate about 3 seconds per step. I was wondering what the reason for this discrepancy is.
Additionally, I tried to fine-tune on 8*A6000 (49G), and the monitor showed that one step takes about 7 seconds (batch_size=48). I would greatly appreciate it if you could offer any suggestions for speeding this up.
Hi @yujiangpu20 ,
Thank you for your kind words! train_512.yaml was copied from the original DynamiCrafter repo, and the 3s comment was carried over from it. Since our training configuration differs from the original DynamiCrafter repo's, that comment isn't applicable to our training process. We've now removed it from the file.
As for speeding up training, enabling DeepSpeed ZeRO-2 through the accelerate package may help.
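For reference, a minimal sketch of what an accelerate config enabling ZeRO-2 might look like is below. This is an illustrative assumption rather than a config from this repo: the file name `zero2_config.yaml`, the `mixed_precision` choice, and the process count are placeholders, and the training entry point would need to be adapted to launch through accelerate.

```yaml
# zero2_config.yaml (hypothetical file name) -- accelerate config enabling DeepSpeed ZeRO-2
compute_environment: LOCAL_MACHINE
distributed_type: DEEPSPEED
deepspeed_config:
  zero_stage: 2                    # shard optimizer states and gradients across GPUs
  offload_optimizer_device: none   # keep optimizer states on GPU; 'cpu' trades speed for memory
  offload_param_device: none
  gradient_accumulation_steps: 1
mixed_precision: bf16              # assumes the GPUs support bfloat16
num_machines: 1
num_processes: 8                   # one process per GPU
```

You would then launch with something like `accelerate launch --config_file zero2_config.yaml train.py` (train.py is a placeholder for the actual entry script). ZeRO-2 shards optimizer states and gradients across the 8 GPUs, which mainly lowers per-GPU memory; any speedup comes indirectly, by freeing memory for a larger micro-batch or less activation checkpointing.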