'LlamaAWQForCausalLM' object has no attribute 'config' #26970
@OriginalGoku, instead of

```python
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)
```

write

```python
pipe = pipeline(
    "text-generation",
    model=model.model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)
```
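The fix works because `pipeline()` reads the `config` attribute of whatever model it is given, and the AWQ class wraps the real transformers model in a `.model` attribute without (in the autoawq version used here) exposing `config` itself. A minimal sketch of that wrapper pattern, using hypothetical stand-in classes rather than the real library classes:

```python
# Hypothetical stand-ins illustrating the wrapper pattern; the real classes
# are transformers' LlamaForCausalLM and autoawq's LlamaAWQForCausalLM.

class InnerModel:
    """Plays the role of the underlying transformers model."""
    def __init__(self):
        self.config = {"model_type": "llama"}  # pipeline() reads model.config


class AWQWrapper:
    """Plays the role of LlamaAWQForCausalLM: it holds the real model in
    `.model` but does not forward the `.config` attribute."""
    def __init__(self, inner):
        self.model = inner


wrapper = AWQWrapper(InnerModel())

print(hasattr(wrapper, "config"))        # False -> the AttributeError above
print(hasattr(wrapper.model, "config"))  # True  -> passing model.model works
```

Passing `model.model` therefore hands the pipeline the inner transformers model, which has the `config` the pipeline expects.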
Hi @ptanov, I have been working hard on making 0.1.7 ready, and it soon will be! After that, you will get the equivalent speedup straight from transformers - stay tuned!
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Closing as #27411 has been merged!
System Info
I am trying to run a CodeLlama model on Colab with a free GPU.
The code was copied from here:
https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-AWQ
Who can help?
@ArthurZucker
@younesbelkada
@Narsil
Information

Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
Here is the code:
and here is the error:
Expected behavior
When I do the inference with the following code, everything works: