
Error with Xformers newest version #8

Open
Skquark opened this issue Mar 20, 2024 · 2 comments
Skquark commented Mar 20, 2024

I'm trying to run this with xformers 0.0.25 because I have to use the latest torch 2.2.1, which Google Colab just updated to. xformers 0.0.24 only works with torch 2.2.0, so installing 0.0.24 takes ~5 minutes plus a runtime restart to downgrade to torch 2.2.0. I got it working in my app at DiffusionDeluxe.com using the recommended 0.0.24 (although it keeps running out of RAM), but with 0.0.25 I get this error:

Traceback (most recent call last):
  File "/content/sdd_colab.py", line 46547, in run_crm
    crm_model = CRM(specs).to(torch_device)
  File "/content/CRM/model/crm/model.py", line 46, in __init__
    self.unet2 = UNetPP(in_channels=self.dec.c_dim)
  File "/content/CRM/model/archs/unet.py", line 43, in __init__
    self.unet.enable_xformers_memory_efficient_attention()
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 295, in enable_xformers_memory_efficient_attention
    self.set_use_memory_efficient_attention_xformers(True, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 259, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 255, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 255, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 255, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 252, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 253, in set_use_memory_efficient_attention_xformers
    raise ModuleNotFoundError(
ModuleNotFoundError: Refer to https://github.com/facebookresearch/xformers for more information on how to install xformers

I'm hoping you can find a fix for the breaking change so the code is compatible with both versions; it's always nice to be on the newest releases. Thanks. I tried to trace the problem down myself but didn't understand it well enough.
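One possible workaround, sketched here as an untested illustration rather than the repository's actual fix, is to guard the `enable_xformers_memory_efficient_attention()` call (`model/archs/unet.py`, line 43 in the traceback) so the model falls back to the default attention implementation when xformers is missing or incompatible. The helper name and the stand-in class below are hypothetical:

```python
def try_enable_xformers(unet):
    """Attempt to switch `unet` to xformers memory-efficient attention.

    Returns True on success; returns False (keeping the default
    attention processor) when xformers is absent or incompatible,
    instead of crashing like the traceback above.
    """
    try:
        unet.enable_xformers_memory_efficient_attention()
        return True
    except (ModuleNotFoundError, ImportError):
        # xformers not usable with the installed torch build.
        return False


# Minimal stand-in (not the real diffusers UNet) to demonstrate
# the fallback path without xformers installed:
class _FakeUNet:
    def enable_xformers_memory_efficient_attention(self):
        raise ModuleNotFoundError("xformers not usable")


print(try_enable_xformers(_FakeUNet()))  # prints: False
```

With a guard like this, the model would still load on xformers 0.0.25/torch 2.2.1, just without the memory-efficient attention speedup.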

@thuwzy
Collaborator

thuwzy commented Mar 25, 2024

My testing environment is torch==1.13.0+cu117. Maybe you can try torch 1.x? I am not sure whether torch 2.x will work.
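Before downgrading, it may help to confirm exactly which torch/xformers pair is installed in the runtime. A small stdlib-only check (the helper name is illustrative):

```python
from importlib import metadata


def installed_version(package):
    """Return the installed version string, or None if the package is absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None


for pkg in ("torch", "xformers"):
    print(pkg, "->", installed_version(pkg) or "not installed")
```

xformers also ships its own diagnostic, `python -m xformers.info`, which reports the torch build it was compiled against.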

@Riyue120

Riyue120 commented Apr 3, 2024

I have tested a new environment (torch 2.2.2+cu118) with the newest xformers.
You can use the official method to get the newest version:

# cuda 11.8 version
pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu118

Here are my steps:

# Python version: 3.8
# torch version: 2.2.2+cu118
git clone https://github.com/thu-ml/CRM.git
cd CRM
pip install -r requirements.txt
pip install ninja
pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu118
pip install git+https://github.com/NVlabs/nvdiffrast
# plus a few other packages (I forget which)

I made a Docker image on the CodeWithGPU platform; here is the page: Page Link.
It is under review and should be available in about two days.

docker pull registry.cn-beijing.aliyuncs.com/codewithgpu2/thu-ml-crm:x00G39kpFb

Along the way, I also encountered some strange things:

  1. I found that the .pth and .bin files are stored under ./.cache/huggingface/hub/model*; it is difficult for me to move them directly.

  2. I cannot run python run.py "imgfile" in Jupyter; it raises errors about the connection to huggingface.co, but the same command works in the terminal.
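For the cache-location problem in point 1, the Hugging Face libraries honor a few environment variables; a hedged sketch (the target path is an assumption, and the variables must be set before any diffusers/transformers import):

```python
import os

# Redirect the Hugging Face cache (default: ~/.cache/huggingface)
# to a directory that is easier to manage; "/content/hf_cache" is
# just an example path for Colab.
os.environ["HF_HOME"] = "/content/hf_cache"

# Optional: once the models are downloaded, skip network lookups
# entirely, which may also sidestep the huggingface.co connection
# errors seen in Jupyter (point 2).
os.environ["HF_HUB_OFFLINE"] = "1"

print(os.environ["HF_HOME"])
```

This does not explain why Jupyter fails where the terminal succeeds (likely a proxy or environment difference between the two), but offline mode avoids the lookup altogether after the first successful download.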
