
How to load the original model LM head in the new version of Adapters? #592

Closed
Guitaricet opened this issue Oct 12, 2023 · 2 comments
Labels: enhancement (New feature or request), question (Further information is requested)

@Guitaricet
Hello!

We are working with LLaMA, which is not supported in the old version of adapter-transformers, so we decided to use the beta (the Adapters branch). The issue is that it's unclear how to load the LLaMA language modeling head. By default, the model is loaded without any head, which is a problem. This kind of approach made a lot of sense in the BERT era, but it's unclear whether it's still an intuitive default now that people want to fine-tune language models with their original head attached.

Here's what we tried:

import adapters
from adapters import LlamaAdapterModel
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = LlamaAdapterModel.from_pretrained("meta-llama/Llama-2-7b-hf")

model.add_causal_lm_head("lm_head")
adapters.init(model)
out = model.generate(tokenizer("The cat is", return_tensors="pt")["input_ids"], max_length=20)
tokenizer.decode(out[0])
# output: '<s> The cat isРСРoulevalu str siècleTest materadejkradeavenradebosebose étudesvä'

So it seems that, this way, the added head is randomly initialized.

Could you help us get the original LLaMA head weights?

@Guitaricet added the question (Further information is requested) label on Oct 12, 2023
@hSterz
Member

hSterz commented Oct 13, 2023

Hello @Guitaricet, yes, you are correct: add_causal_lm_head adds a randomly initialized language modeling head. What you can do here is use the original transformers class and initialize it afterwards:

import adapters
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
adapters.init(model)

This should solve the problem (let me know if it does not).
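
For reference, here is a minimal end-to-end sketch of this suggestion (assuming the adapters beta package is installed and the LLaMA-2 checkpoint is available; the adapter name "my_adapter" is only an illustrative placeholder, not something defined by the library):

import adapters
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
# Loading through the plain transformers class keeps the pretrained lm_head weights.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Mix adapter support into the existing model instance.
adapters.init(model)

# Optional: add and activate a new adapter for fine-tuning
# ("my_adapter" is an arbitrary name chosen for this sketch).
model.add_adapter("my_adapter")
model.train_adapter("my_adapter")

# Generation should now go through the original, pretrained LM head.
out = model.generate(tokenizer("The cat is", return_tensors="pt")["input_ids"], max_length=20)
print(tokenizer.decode(out[0]))

Because adapters.init only adds adapter functionality to the existing model instance, the pretrained lm_head loaded by AutoModelForCausalLM should remain untouched, so generation uses the original head rather than a randomly initialized one.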

@calpt added the enhancement (New feature or request) label on Oct 16, 2023
@calpt self-assigned this on Oct 16, 2023
@calpt linked a pull request on Oct 17, 2023 that will close this issue
@calpt removed their assignment on Nov 11, 2023
@Guitaricet
Author

Thank you, everything works now!
