
NotImplementedError: Cannot copy out of meta tensor; no data! #372

Closed
Csiling opened this issue Sep 14, 2023 · 3 comments

Comments

@Csiling

Csiling commented Sep 14, 2023

Hi, could you help me with this error? I have no idea what to do.

C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.0.12
Inference Engine exists.
Inference Engine checkout finished.
Total VRAM 2048 MB, total RAM 16325 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --normalvram
Running on local URL: http://127.0.0.1:7860
xformers version: 0.0.20

To create a public link, set share=True in launch().
Set vram state to: LOW_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1050 : native
Using xformers cross attention
Fooocus Text Processing Pipelines are retargeted to cuda:0
model_type EPS
adm 2816
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: sd_xl_base_1.0_0.9vae.safetensors
model_type EPS
adm 2560
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Refiner model loaded: sd_xl_refiner_1.0_0.9vae.safetensors
LoRAs loaded: [('sd_xl_offset_example-lora_1.0.safetensors', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5)]
Fooocus Expansion engine loaded.
App started successful. Use the app with http://127.0.0.1:7860/ or 127.0.0.1:7860
[Fooocus] Initializing ...
[Fooocus] Loading models ...
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] New suffix: extremely detailed digital painting, in the style of fenghua zhong and ruan jia and jeremy lipking and peter mohrbacher, mystical colors, rim light, beautiful lighting, 8 k, stunning scene, raytracing, octane, trending on artstation
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] New suffix: intricate, elegant, highly detailed, my rendition, digital painting, artstation, concept art, smooth, sharp focus, radiant light, illustration, art by artgerm and greg rutkowski and alphonse mucha
[Fooocus] Encoding base positive #1 ...
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\Fooocus\modules\async_worker.py", line 209, in worker
    handler(task)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\Fooocus\modules\async_worker.py", line 131, in handler
    t['c'][0] = pipeline.clip_encode(sd=pipeline.xl_base_patched, texts=t['positive'],
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\Fooocus\modules\default_pipeline.py", line 140, in clip_encode
    cond, pooled = clip_encode_single(clip, text)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\Fooocus\modules\default_pipeline.py", line 117, in clip_encode_single
    result = clip.encode_from_tokens(tokens, return_pooled=True)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd.py", line 556, in encode_from_tokens
    cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sdxl_clip.py", line 62, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd1_clip.py", line 18, in encode_token_weights
    out, pooled = self.encode(to_encode)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd1_clip.py", line 161, in encode
    return self(tokens)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd1_clip.py", line 143, in forward
    outputs = self.transformer(input_ids=tokens, output_hidden_states=self.layer=="hidden")
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\python_embeded\lib\site-packages\accelerate\hooks.py", line 160, in new_forward
    args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\python_embeded\lib\site-packages\accelerate\hooks.py", line 290, in pre_forward
    return send_to_device(args, self.execution_device), send_to_device(
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\python_embeded\lib\site-packages\accelerate\utils\operations.py", line 160, in send_to_device
    {
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\python_embeded\lib\site-packages\accelerate\utils\operations.py", line 161, in <dictcomp>
    k: t if k in skip_keys else send_to_device(t, device, non_blocking=non_blocking, skip_keys=skip_keys)
  File "C:\Users\Angéla\Downloads\Fontok\Fooocus_win64_2-0-0\python_embeded\lib\site-packages\accelerate\utils\operations.py", line 167, in send_to_device
    return tensor.to(device, non_blocking=non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
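(For context: the error at the bottom of the traceback comes from PyTorch's "meta" device. It appears that accelerate's offload hooks left the CLIP weights as meta tensors, which record shape and dtype only and have no storage, so `send_to_device` has no data to copy to `cuda:0`. A minimal sketch reproducing the same failure, assuming a recent PyTorch build:)

```python
import torch

# A meta tensor has metadata (shape, dtype) but no backing storage, so any
# attempt to copy its nonexistent data to a real device raises the same
# NotImplementedError that appears in the traceback above.
t = torch.empty(4, device="meta")
try:
    t.to("cpu")
except NotImplementedError as e:
    print(e)  # message begins with: Cannot copy out of meta tensor; no data!

# Materializing allocates real (uninitialized) storage of the same shape;
# actual checkpoint weights must then be loaded into it before use.
real = torch.empty_like(t, device="cpu")
print(real.shape)
```

Note that materializing with `torch.empty_like` (or `torch.nn.Module.to_empty`) only allocates storage; the real weights still have to be loaded into it, which is presumably the step that never happens when offloading runs out of memory.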

@lllyasviel
Owner

I am not sure why the log says it cannot copy the model,
but it does show that the total VRAM is only 2 GB. Fooocus requires at least 4 GB of VRAM; consider using Colab or another device...
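(The 2 GB figure in the startup log can be double-checked directly with PyTorch's CUDA API, independent of Fooocus. A small sketch, assuming device index 0 is the GPU in question:)

```python
import torch

# Report the total VRAM of each visible CUDA device, mirroring the
# "Total VRAM 2048 MB" line in the Fooocus startup log.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        total_mb = props.total_memory // (1024 ** 2)
        print(f"cuda:{i} {props.name}: {total_mb} MB")
else:
    print("No CUDA device visible")
```

A GTX 1050 typically reports around 2048 MB here, which is what triggers Fooocus's automatic low-VRAM mode in the log above.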

@PangFayue-stack

I am not sure why the log says it cannot copy the model but the log says that the VRAM is 2GB - Fooocus requires at least 4GB VRAM - consider using Colab or other devices...

Hello,
I also ran into this problem, but my GPU has 8 GB. It ran well this morning, but just now, after generating a picture, it stopped working because of this error. I have no idea what to do about it. By the way, I haven't changed any of the code.

@mashb1t
Collaborator

mashb1t commented Jan 1, 2024

This issue is stale, and similar issues haven't been reported since. Closing for now; feel free to reopen with new feedback, which is much appreciated. Thanks!

@mashb1t closed this as not planned on Jan 1, 2024

4 participants