I have code that uses the PyTorch AdamW optimizer, and almost immediately it emits the following warning:
\cuda\Lib\site-packages\torch\optim\adamw.py:547: UserWarning: The operator 'aten::_foreach_mul_.Scalar' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at C:\__w\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:17.)
torch._foreach_mul_(device_params, 1 - lr * weight_decay)
The training loop is taking much longer than it needs to, since this step falls back to the CPU instead of running on the GPU.
It would be great if this operator could be implemented ASAP, so I can get my optimizer and training loop to run at optimal speed.
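A minimal reproduction sketch of what triggers the fallback (assuming torch-directml is installed and exposes the device via torch_directml.device(); the model size and hyperparameters here are arbitrary):

```python
import torch
import torch_directml  # assumes the torch-directml package is installed

device = torch_directml.device()

# A tiny model and a single optimizer step are enough: AdamW's decoupled
# weight decay calls torch._foreach_mul_ on the parameter list, which the
# DML backend currently falls back to the CPU for.
model = torch.nn.Linear(16, 4).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

x = torch.randn(8, 16, device=device)
loss = model(x).sum()
loss.backward()
optimizer.step()  # emits the UserWarning about 'aten::_foreach_mul_.Scalar'
```

As a possible interim workaround (an assumption, not something confirmed in this issue), constructing the optimizer with foreach=False should force the single-tensor implementation and avoid the _foreach_* calls entirely, at the cost of losing the foreach speedup.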