I'm trying to run this with xformers 0.0.25 because I have to use the latest Torch 2.2.1, which Google Colab just updated to; xformers 0.0.24 only works with Torch 2.2.0, so installing 0.0.24 takes ~5 minutes plus a runtime restart to downgrade to Torch 2.2.0. I got it working in my app at DiffusionDeluxe.com with the recommended 0.0.24 (although it keeps running out of RAM), but with 0.0.25 I'm getting this error:
Traceback (most recent call last):
  File "/content/sdd_colab.py", line 46547, in run_crm
    crm_model = CRM(specs).to(torch_device)
  File "/content/CRM/model/crm/model.py", line 46, in __init__
    self.unet2 = UNetPP(in_channels=self.dec.c_dim)
  File "/content/CRM/model/archs/unet.py", line 43, in __init__
    self.unet.enable_xformers_memory_efficient_attention()
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 295, in enable_xformers_memory_efficient_attention
    self.set_use_memory_efficient_attention_xformers(True, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 259, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 255, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 255, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 255, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 252, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 253, in set_use_memory_efficient_attention_xformers
    raise ModuleNotFoundError(
ModuleNotFoundError: Refer to https://github.com/facebookresearch/xformers for more information on how to install xformers
I'm hoping you can figure out a fix for the breaking change so it's compatible with both versions; it's always nice to be on the newest releases. Thanks. I tried to trace the problem down myself, but didn't understand it well enough.
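One possible workaround (a sketch on my part, not tested against CRM itself, and the helper name is mine rather than anything from CRM or diffusers) would be to make the xformers call in model/archs/unet.py optional, so the model falls back to diffusers' default attention when the installed xformers build can't actually be imported, e.g. because the wheel was compiled against a different Torch version:

```python
def enable_xformers_if_available(model):
    """Try to enable xformers memory-efficient attention on a diffusers model.

    Returns True if it was enabled, False if xformers is missing or broken;
    in the False case the model keeps its default attention processor.
    """
    try:
        import xformers  # noqa: F401  # fails when the wheel doesn't match Torch
        import xformers.ops  # noqa: F401
        model.enable_xformers_memory_efficient_attention()
        return True
    except Exception:
        # Catches the ImportError from a Torch/xformers mismatch as well as
        # the ModuleNotFoundError that diffusers raises internally.
        return False
```

This would replace the unconditional `self.unet.enable_xformers_memory_efficient_attention()` call at unet.py line 43. On Torch >= 2.0, diffusers' default attention path already uses `torch.nn.functional.scaled_dot_product_attention`, so the fallback should still be reasonably memory-efficient.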
Along the way, I also encountered some strange things:
I found that the .pth and .bin files are stored under ./.cache/huggingface/hub/model*, which makes it difficult for me to move them directly.
I cannot run python run.py "imgfile" in Jupyter; it raises errors about the connection to huggingface.co, but the same command works in the terminal.
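For both of those, one thing worth trying (an assumption on my part, since the exact connection error isn't shown) is controlling the Hugging Face cache and network behavior through environment variables instead of moving files by hand. Set these before importing diffusers or huggingface_hub:

```python
import os

# HF_HOME relocates the download cache (instead of moving
# ./.cache/huggingface manually); the path below is hypothetical.
os.environ["HF_HOME"] = "/content/hf_cache"

# Once the weights are already cached, the offline flags stop
# huggingface_hub/transformers from contacting huggingface.co at all,
# which may sidestep the connection errors seen inside Jupyter.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"
```

These variables must be set before the libraries are imported in the notebook kernel, which is likely why the same script behaves differently in a terminal session.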