As of 46aee85, when sampling images during training, CUDA keeps the (now unused) pipeline data cached in VRAM after the method exits, possibly causing overcommit (8.5~8.9 / 8.0 GB in my case). This can slow down training, as well as other applications that are also using the graphics card, due to constant VRAM<->RAM swapping.

Unloading the pipeline and clearing the CUDA cache before exiting sample_images, i.e. just before this line:
sd-scripts/library/train_util.py, line 2359 in 46aee85
should mitigate this issue and keep VRAM usage on method exit (7.0~7.2 / 8.0 GB in my case) the same as it was before sample_images was called.
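For reference, a minimal sketch of that kind of cleanup, assuming the sampling pipeline is held in a local variable named pipeline (the exact snippet proposed above is not reproduced here):

```python
import gc

import torch

# Drop the last reference to the sampling pipeline so its weights can be
# garbage-collected, then return the cached VRAM blocks to the driver
# before sample_images exits.
del pipeline
gc.collect()
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```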