
XlabsSampler. Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype. #5541

Open
Djon253 opened this issue Nov 8, 2024 · 3 comments
Labels: Potential Bug (User is reporting a bug. This should be tested.)

Comments

Djon253 commented Nov 8, 2024

### Expected Behavior

Run the inference

### Actual Behavior

The error occurs while the XlabsSampler node is processing.

### Steps to Reproduce

1

### Debug Logs

# ComfyUI Error Report
## Error Details
- **Node Type:** XlabsSampler
- **Exception Type:** TypeError
- **Exception Message:** Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
## Stack Trace

  File "/Users/osborn/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "/Users/osborn/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "/Users/osborn/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/Users/osborn/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/nodes.py", line 354, in sampling
    x = denoise(

  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 193, in denoise
    pred = model_forward(

  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 28, in model_forward
    img = model.img_in(img)

  File "/Users/osborn/miniconda3/envs/comfyui_1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "/Users/osborn/miniconda3/envs/comfyui_1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)

  File "/Users/osborn/ComfyUI/comfy/ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)

  File "/Users/osborn/ComfyUI/comfy/ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)

  File "/Users/osborn/ComfyUI/comfy/ops.py", line 42, in cast_bias_weight
    bias = comfy.model_management.cast_to(s.bias, bias_dtype, device, non_blocking=non_blocking, copy=has_function)

  File "/Users/osborn/ComfyUI/comfy/model_management.py", line 851, in cast_to
    return weight.to(dtype=dtype, copy=copy)
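
The trace shows the failure is in ComfyUI's weight-casting path: `cast_to` asks PyTorch to move a `float8_e4m3fn` bias onto the `mps` device, and PyTorch 2.3 has no float8 support on MPS. For reference, the failing call reduces to a two-line reproduction (a sketch assuming an Apple Silicon machine and PyTorch ≥ 2.1, where the float8 dtypes exist):

```python
import torch

# Creating a float8 tensor on the CPU is fine...
w = torch.zeros(8, dtype=torch.float8_e4m3fn)

# ...but moving it to the MPS backend raises the error above:
# TypeError: Trying to convert Float8_e4m3fn to the MPS backend
# but it does not have support for that dtype.
w.to("mps")
```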

## System Information

  • ComfyUI Version: v0.2.7-6-g2865f91
  • Arguments: main.py
  • OS: posix
  • Python Version: 3.10.15 (main, Oct 3 2024, 02:24:49) [Clang 14.0.6 ]
  • Embedded Python: false
  • PyTorch Version: 2.3.1

## Devices

  • Name: mps
    • Type: mps
    • VRAM Total: 68719476736 (64 GiB)
    • VRAM Free: 24615649280 (~22.9 GiB)
    • Torch VRAM Total: 68719476736 (64 GiB)
    • Torch VRAM Free: 24615649280 (~22.9 GiB)

## Logs

2024-11-08 17:43:36,324 - root - INFO - Total VRAM 65536 MB, total RAM 65536 MB
2024-11-08 17:43:36,324 - root - INFO - pytorch version: 2.3.1
2024-11-08 17:43:36,324 - root - INFO - Set vram state to: SHARED
2024-11-08 17:43:36,324 - root - INFO - Device: mps
2024-11-08 17:43:36,860 - root - INFO - Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
2024-11-08 17:43:37,466 - root - INFO - [Prompt Server] web root: /Users/osborn/ComfyUI/web
2024-11-08 17:43:46,618 - root - INFO - --------------
2024-11-08 17:43:46,618 - root - INFO - ### Mixlab Nodes: Loaded
2024-11-08 17:43:46,623 - root - INFO - ChatGPT.available True
2024-11-08 17:43:46,624 - root - INFO - edit_mask.available True
2024-11-08 17:43:46,834 - root - INFO - ClipInterrogator.available True
2024-11-08 17:43:46,930 - root - INFO - PromptGenerate.available True
2024-11-08 17:43:46,930 - root - INFO - ChinesePrompt.available True
2024-11-08 17:43:46,930 - root - INFO - RembgNode_.available False
2024-11-08 17:43:47,176 - root - INFO - TripoSR.available
2024-11-08 17:43:47,176 - root - INFO - MiniCPMNode.available
2024-11-08 17:43:47,195 - root - INFO - Scenedetect.available
2024-11-08 17:43:47,196 - root - INFO - FishSpeech.available False
2024-11-08 17:43:47,202 - root - INFO - SenseVoice.available
2024-11-08 17:43:47,215 - root - INFO - Whisper.available False
2024-11-08 17:43:47,219 - root - INFO - FalVideo.available
2024-11-08 17:43:47,219 - root - INFO - --------------
2024-11-08 17:43:49,953 - root - INFO - 
Import times for custom nodes:
2024-11-08 17:43:49,953 - root - INFO -    0.0 seconds: /Users/osborn/ComfyUI/custom_nodes/websocket_image_save.py
2024-11-08 17:43:49,953 - root - INFO -    0.0 seconds: /Users/osborn/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale
2024-11-08 17:43:49,953 - root - INFO -    0.0 seconds: /Users/osborn/ComfyUI/custom_nodes/rgthree-comfy
2024-11-08 17:43:49,953 - root - INFO -    0.0 seconds: /Users/osborn/ComfyUI/custom_nodes/ComfyUI-GGUF
2024-11-08 17:43:49,953 - root - INFO -    0.0 seconds: /Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui
2024-11-08 17:43:49,953 - root - INFO -    0.1 seconds: /Users/osborn/ComfyUI/custom_nodes/ComfyUI-Manager
2024-11-08 17:43:49,953 - root - INFO -    2.7 seconds: /Users/osborn/ComfyUI/custom_nodes/ComfyUI-PuLID-Flux
2024-11-08 17:43:49,953 - root - INFO -    9.5 seconds: /Users/osborn/ComfyUI/custom_nodes/comfyui-mixlab-nodes
2024-11-08 17:43:49,953 - root - INFO - 
2024-11-08 17:43:49,958 - root - INFO - Starting server

2024-11-08 17:43:49,958 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-11-08 17:44:20,104 - root - INFO - got prompt
2024-11-08 17:44:20,139 - root - INFO - Using split attention in VAE
2024-11-08 17:44:20,140 - root - INFO - Using split attention in VAE
2024-11-08 17:44:20,382 - root - INFO - Requested to load FluxClipModel_
2024-11-08 17:44:20,382 - root - INFO - Loading 1 new model
2024-11-08 17:44:20,387 - root - INFO - loaded completely 0.0 4777.53759765625 True
2024-11-08 17:44:20,473 - root - WARNING - clip missing: ['text_projection.weight']
2024-11-08 17:44:28,011 - root - INFO - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-11-08 17:44:28,011 - root - INFO - model_type FLUX
2024-11-08 17:45:22,170 - root - INFO - Requested to load Flux
2024-11-08 17:45:22,170 - root - INFO - Loading 1 new model
2024-11-08 17:45:22,185 - root - INFO - loaded completely 0.0 11350.048889160156 True
2024-11-08 17:45:22,379 - root - ERROR - !!! Exception during processing !!! Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
2024-11-08 17:45:22,383 - root - ERROR - Traceback (most recent call last):
  File "/Users/osborn/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/osborn/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/osborn/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Users/osborn/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/nodes.py", line 354, in sampling
    x = denoise(
  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 193, in denoise
    pred = model_forward(
  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 28, in model_forward
    img = model.img_in(img)
  File "/Users/osborn/miniconda3/envs/comfyui_1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/osborn/miniconda3/envs/comfyui_1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/osborn/ComfyUI/comfy/ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File "/Users/osborn/ComfyUI/comfy/ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
  File "/Users/osborn/ComfyUI/comfy/ops.py", line 42, in cast_bias_weight
    bias = comfy.model_management.cast_to(s.bias, bias_dtype, device, non_blocking=non_blocking, copy=has_function)
  File "/Users/osborn/ComfyUI/comfy/model_management.py", line 851, in cast_to
    return weight.to(dtype=dtype, copy=copy)
TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.

2024-11-08 17:45:22,383 - root - INFO - Prompt executed in 62.28 seconds
2024-11-08 17:51:44,705 - root - INFO - got prompt
2024-11-08 17:51:44,811 - root - ERROR - !!! Exception during processing !!! Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
2024-11-08 17:51:44,812 - root - ERROR - Traceback (most recent call last):
  File "/Users/osborn/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/osborn/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/osborn/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Users/osborn/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/nodes.py", line 354, in sampling
    x = denoise(
  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 193, in denoise
    pred = model_forward(
  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 28, in model_forward
    img = model.img_in(img)
  File "/Users/osborn/miniconda3/envs/comfyui_1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/osborn/miniconda3/envs/comfyui_1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/osborn/ComfyUI/comfy/ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File "/Users/osborn/ComfyUI/comfy/ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
  File "/Users/osborn/ComfyUI/comfy/ops.py", line 42, in cast_bias_weight
    bias = comfy.model_management.cast_to(s.bias, bias_dtype, device, non_blocking=non_blocking, copy=has_function)
  File "/Users/osborn/ComfyUI/comfy/model_management.py", line 851, in cast_to
    return weight.to(dtype=dtype, copy=copy)
TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.

2024-11-08 17:51:44,812 - root - INFO - Prompt executed in 0.10 seconds

## Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":24,"last_link_id":42,"nodes":[{"id":7,"type":"VAEDecode","pos":{"0":1371,"1":152},"size":{"0":210,"1":46},"flags":{},"order":8,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":42,"slot_index":0},{"name":"vae","type":"VAE","link":7}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[31],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":19,"type":"CLIPTextEncodeFlux","pos":{"0":97,"1":123},"size":{"0":400,"1":200},"flags":{},"order":6,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":27,"slot_index":0}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[40],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"CLIPTextEncodeFlux"},"widgets_values":["","",4]},{"id":10,"type":"UNETLoader","pos":{"0":209,"1":387},"size":{"0":315,"1":82},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[36],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"UNETLoader"},"widgets_values":["flux1-dev.safetensors","fp8_e4m3fn"]},{"id":23,"type":"FluxLoraLoader","pos":{"0":506,"1":231},"size":{"0":315,"1":82},"flags":{},"order":4,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":36}],"outputs":[{"name":"MODEL","type":"MODEL","links":[38],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"FluxLoraLoader"},"widgets_values":["furry_lora.safetensors",1]},{"id":5,"type":"CLIPTextEncodeFlux","pos":{"0":518,"1":-63},"size":{"0":400,"1":200},"flags":{},"order":5,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":2,"slot_index":0}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[39],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"CLIPTextEncodeFlux"},"widgets_values":["furry in the city with text \"hello world\"","furry in the city with text \"hello world\"",3.5]},{"id":8,"type":"VAELoader","pos":{"0":1102,"1":48},"size":{"0":315,"1":58},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"VAE","type":"VAE","links":[7],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAELoader"},"widgets_values":["ae.safetensors"]},{"id":21,"type":"PreviewImage","pos":{"0":1612,"1":128},"size":{"0":364.77178955078125,"1":527.6837158203125},"flags":{},"order":9,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":31,"slot_index":0}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":6,"type":"EmptyLatentImage","pos":{"0":626,"1":428},"size":{"0":315,"1":106},"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[41],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[1024,1024,2]},{"id":24,"type":"XlabsSampler","pos":{"0":1013,"1":169},"size":{"0":342.5999755859375,"1":282},"flags":{},"order":7,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":38},{"name":"conditioning","type":"CONDITIONING","link":39},{"name":"neg_conditioning","type":"CONDITIONING","link":40},{"name":"latent_image","type":"LATENT","link":41,"shape":7},{"name":"controlnet_condition","type":"ControlNetCondition","link":null,"shape":7}],"outputs":[{"name":"latent","type":"LATENT","links":[42]}],"properties":{"Node name for 
S&R":"XlabsSampler"},"widgets_values":[600258048956591,"randomize",20,20,3,0,1]},{"id":4,"type":"DualCLIPLoader","pos":{"0":121,"1":-111},"size":{"0":315,"1":106},"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[2,27],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"DualCLIPLoader"},"widgets_values":["clip_l.safetensors","t5xxl_fp8_e4m3fn.safetensors","flux"]}],"links":[[2,4,0,5,0,"CLIP"],[7,8,0,7,1,"VAE"],[27,4,0,19,0,"CLIP"],[31,7,0,21,0,"IMAGE"],[36,10,0,23,0,"MODEL"],[38,23,0,24,0,"MODEL"],[39,5,0,24,1,"CONDITIONING"],[40,19,0,24,2,"CONDITIONING"],[41,6,0,24,3,"LATENT"],[42,24,0,7,0,"LATENT"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.9646149645000006,"offset":[-113.06857937189307,125.66804176669243]}},"version":0.4}

### Other
I have a MacBook Pro M2 Max (Apple Silicon) with 64 GB of RAM. I installed ComfyUI and then Flux, but when I run the model I get this error. I am a beginner, and I see that many people have this problem. Please help me solve it.

Anant-Raj17 commented

I am facing the same problem when I try to upscale an image; it's the SUPIR sampler that shows the error. I have a MacBook Pro M3 Pro with 18 GB of RAM.

Djon253 commented Nov 11, 2024

After chatting with ChatGPT I got to this:

PYTORCH_ENABLE_MPS_FALLBACK=1 python main.py --force-fp32 --fp32-unet --fp32-vae --fp32-text-enc
PYTORCH_ENABLE_MPS_FALLBACK=1 python main.py --force-fp16 --fp16-unet --fp16-vae --fp32-text-enc --reserve-vram 2

The error disappeared, but instead of generating a picture, a black screen appeared. I haven't solved it yet and am still looking for a way out.
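
A note on the black output: on MPS this often means the fp16 VAE decode produced NaNs rather than the sampler failing outright, which is why keeping the VAE in fp32 (`--fp32-vae`, as in the first command) is worth trying even when forcing fp16 elsewhere. A hypothetical diagnostic, where `img` stands for the tensor a VAE decode returns:

```python
import torch

def diagnose_black_image(img: torch.Tensor) -> str:
    # NaNs/Infs in the decode output render as a black image and
    # typically point at an fp16 VAE overflowing on MPS.
    if torch.isnan(img).any() or torch.isinf(img).any():
        return "NaN/Inf in VAE output: keep the VAE in fp32 (--fp32-vae)"
    if img.abs().max() < 1e-3:
        return "all-zero output: the latent itself is likely empty"
    return "output tensor looks normal"
```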
