
Getting error when trying to load LLM #52

Open
Pojo267 opened this issue Apr 15, 2024 · 0 comments
Labels: bug (Something isn't working)

Comments


Pojo267 commented Apr 15, 2024

I've tried loading two different LLMs in the GPT Loader Simple node; both give me the same error, and I'm not sure what the issue is.

The two models I attempted:

- dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf
- phi-2-layla-v1-chatml-Q8_0.gguf

Error occurred when executing GPT Loader Simple [n-suite]:

File "...\GitHub\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "...\GitHub\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "...\GitHub\ComfyUI\custom_nodes\ComfyUI-0246\utils.py", line 381, in new_func
res_value = old_func(*final_args, **kwargs)
File "...\GitHub\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "...\GitHub\ComfyUI\custom_nodes\ComfyUI-N-Nodes\py\gptcpp_node.py", line 388, in load_gpt_checkpoint
llm = Llama(model_path=ckpt_path, n_gpu_layers=gpu_layers, verbose=False, n_threads=n_threads, n_ctx=max_ctx)
File "...\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 923, in __init__
self._n_vocab = self.n_vocab()
File "...\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 2184, in n_vocab
return self._model.n_vocab()
File "...\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 250, in n_vocab
assert self.model is not None
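For context, that final `assert self.model is not None` fires in llama-cpp-python when the underlying llama.cpp loader returns a null model handle, which usually means the GGUF file couldn't be loaded at all: a bad or incomplete download, a wrong path, a GGUF version the installed llama.cpp build doesn't support, or not enough memory. A minimal sketch to check the file outside ComfyUI, assuming llama-cpp-python is installed in the same Python environment (the model path below is a placeholder to adjust):

```python
# Minimal diagnostic sketch (not part of the node): load the same GGUF file
# directly with llama-cpp-python. verbose=True surfaces llama.cpp's own load
# log, which the node hides because it passes verbose=False.
from llama_cpp import Llama

MODEL_PATH = r"...\models\dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf"  # placeholder path

try:
    llm = Llama(
        model_path=MODEL_PATH,
        n_gpu_layers=0,   # CPU-only first, to rule out GPU/VRAM problems
        n_ctx=2048,
        verbose=True,     # print llama.cpp's load log and any loader error
    )
    print("Model loaded; vocab size:", llm.n_vocab())
except Exception as e:
    print("Load failed:", e)
```

If the file is corrupt or uses an unsupported GGUF version, the verbose log should show an explicit loader error instead of the bare assertion in the traceback above.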
Pojo267 added the bug label on Apr 15, 2024