
AttributeError: 'tuple' object has no attribute 'to_legacy_cache' #28045

Closed
2 of 4 tasks
wuxb45 opened this issue Dec 14, 2023 · 20 comments

Comments

@wuxb45

wuxb45 commented Dec 14, 2023

System Info

transformers 4.36.1.

transformers/models/llama/modeling_llama.py", line 1093, in forward
    next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'tuple' object has no attribute 'to_legacy_cache'

This error pops up when running inference with a Llama 2 model on the new transformers 4.36.1. I didn't test 4.36.0. It was running correctly with 4.35.x.

This seems to be related to the changes from #26681 and commit 633215b.
Tagging @ArthurZucker and @younesbelkada per the suggestions in "Who can help?".

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

Sorry that I don't have an easy repro right now. Here is the relevant stack trace:

  File "###transformers/generation/utils.py", line 1764, in generate
    return self.sample(
           ^^^^^^^^^^^^
  File "###transformers/generation/utils.py", line 2861, in sample
    outputs = self(
              ^^^^^
  File "###torch/nn/modules/module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "###transformers/models/llama/modeling_llama.py", line 1181, in forward
    outputs = self.model(
              ^^^^^^^^^^^
  File "###torch/nn/modules/module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "###transformers/models/llama/modeling_llama.py", line 1093, in forward
    next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'tuple' object has no attribute 'to_legacy_cache'

Expected behavior

Crashes with the provided stack trace.

@younesbelkada
Contributor

Hi @wuxb45,
can you share a fully reproducible snippet?

@liziniu

liziniu commented Dec 17, 2023

I have the same issue with transformers 4.36.1. I am using the DeepSpeed framework to generate a response and hit the same error.

@ArthurZucker
Collaborator

We cannot help you if you don't share a reproducible snippet. The way this part of the code works should not trigger this error, because the past key values are cast to a DynamicCache when use_legacy_cache is set. There is probably a versioning issue.
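For reference, a minimal, self-contained sketch of the round trip being described, using the public DynamicCache helpers from transformers.cache_utils (the tensor shapes below are illustrative placeholders, not taken from the issue):

import torch
from transformers.cache_utils import DynamicCache

# Legacy format: a tuple of (key, value) tensor pairs, one per layer.
# Shapes are illustrative: (batch, num_heads, seq_len, head_dim).
legacy = tuple((torch.zeros(1, 2, 3, 4), torch.zeros(1, 2, 3, 4)) for _ in range(2))

# On entry, the model wraps the legacy tuple in a DynamicCache ...
cache = DynamicCache.from_legacy_cache(legacy)

# ... the decoder layers update the cache during the forward pass ...

# ... and on exit the failing line converts it back. This only works while the
# object is still a Cache; if something downstream replaces it with a plain
# tuple, this call raises the AttributeError reported above.
next_cache = cache.to_legacy_cache()
print(type(next_cache))  # <class 'tuple'>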

@junewgl

junewgl commented Dec 19, 2023

I have the same issue with transformers 4.36.1. I am using the DeepSpeed framework to generate a response and hit the same error.

Me too. I also struggled with this problem for a long time while using DeepSpeed-Chat to train reinforcement learning code. @liziniu

@wuxb45
Author

wuxb45 commented Dec 19, 2023

Hi @wuxb45, can you share a fully reproducible snippet?

I don't have the capacity to put together a repro at this time. The issue came from running a code base forked from DeepSpeed-Chat's step 3. I'm sorry that I cannot provide more information now.

@luohongyin

I solved this by removing tensor parallel. It seems that merging the per-device tensors converted the Cache to a tuple.

@stevie1023

I solved this by removing tensor parallel. It seems that merging the per-device tensors converted the Cache to a tuple.

Hi, I also faced the same issue. May I ask how you actually removed tensor parallel if you are also using the DeepSpeed-Chat code?

@taozhang9527

I had the same error with versions 4.36.0 and 4.36.2 when running Llama inference on multiple GPUs with the tensor_parallel package.

@taozhang9527

I tried version 4.36.0.dev0 and don't have the issue there. Other versions, including 4.37.0.dev0, give the AttributeError.

@younesbelkada
Contributor

Hi everyone, please let us know whenever you can share a small reproducible snippet, as we can't do anything to fix the bug without a repro.

@taozhang9527

taozhang9527 commented Jan 10, 2024

@younesbelkada

You can probably try the following code with different transformers versions to reproduce:

import torch
from tensor_parallel import TensorParallelPreTrainedModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

model_path = "meta-llama/Llama-2-7b-chat-hf"
model = LlamaForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)
model = TensorParallelPreTrainedModel(model, ["cuda:0", "cuda:1", "cuda:2", "cuda:3"])
tokenizer = LlamaTokenizer.from_pretrained(model_path)

inputs = tokenizer("Hi, how are you doing?", return_tensors="pt", add_special_tokens=False)
output_ids = model.generate(
    inputs["input_ids"].cuda(0),
    attention_mask=inputs["attention_mask"].cuda(0),
    max_length=256,
)
outputs = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(outputs)

For me, only 4.36.0.dev0 works. After updating to a newer version it stopped working, and I was not able to go back to the old 4.36.0.dev0 build.
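As a possible workaround (an assumption based on the comments above, not a confirmed fix): running the same generation without the TensorParallelPreTrainedModel wrapper, for example by sharding across GPUs with device_map="auto" (this needs accelerate installed), avoids the code path that appears to turn the Cache back into a tuple:

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"

# Shard the model across the available GPUs with accelerate instead of tensor_parallel.
model = LlamaForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, device_map="auto")
tokenizer = LlamaTokenizer.from_pretrained(model_path)

inputs = tokenizer("Hi, how are you doing?", return_tensors="pt", add_special_tokens=False).to("cuda:0")
output_ids = model.generate(**inputs, max_length=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))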

@ArthurZucker
Collaborator

Alright, this is pretty much a duplicate of #28003. We made a mistake by not advertising the change more so other repos could get ready; feel free to share it on the tensor_parallel repo.

@guojiapub

For me, it works with transformers==4.34.1.

@yiakwy-xpu-ml-framework-team

yiakwy-xpu-ml-framework-team commented Jan 18, 2024

Apparently this issue was introduced by PR #26681 from @tomaarsen and @patrickvonplaten.

next_decoder_cache should be a Cache object, which means here it was not properly initialized as one. Instead of a tuple, the new HF implementation passes a Cache object through the layers:

https://github.com/tomaarsen/transformers/blob/ee60b1cc13e2819ef31e69952c0b6f616bd724b8/src/transformers/models/llama/modeling_llama.py#L287C45-L287C76
layer_idx: Optional[int] = None

https://github.com/tomaarsen/transformers/blob/ee60b1cc13e2819ef31e69952c0b6f616bd724b8/src/transformers/models/llama/modeling_llama.py#L355
past_key_value: Optional[Cache] = None,

layer_idx is later used to index into past_key_value, and past_key_value is now expected to be a Cache object rather than a tuple.

Note that the diff introduces a cache class (for the attention KV cache) that implements to_legacy_cache.
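Roughly, each decoder layer now updates one shared Cache object under its own layer_idx, which is why a plain tuple arriving at this point breaks. A simplified sketch of that call pattern (not the exact modeling code; the tensor shapes are placeholders):

import torch
from transformers.cache_utils import DynamicCache

past_key_value = DynamicCache()
layer_idx = 0  # each attention module stores the index of its own layer

# New key/value states produced by this layer's attention, with illustrative
# shape (batch, num_heads, seq_len, head_dim).
key_states = torch.zeros(1, 2, 3, 4)
value_states = torch.zeros(1, 2, 3, 4)

# The layer appends its states under layer_idx and reads back the full cache.
# A plain tuple has neither .update() nor .to_legacy_cache(), hence the error.
key_states, value_states = past_key_value.update(key_states, value_states, layer_idx)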

I guess the DeepSpeed version does not instantiate the Llama attention correctly, or we should change the code as @fxmarty suggests:

if use_cache:
    use_legacy_cache = not isinstance(past_key_values, Cache) and past_key_values is not None
    if use_legacy_cache:
        past_key_values = DynamicCache.from_legacy_cache(past_key_values)
    elif past_key_values is None:
        past_key_values = DynamicCache()
    past_key_values_length = past_key_values.get_seq_length()

@huggingface huggingface deleted a comment from github-actions bot Feb 12, 2024
@wuxb45 wuxb45 closed this as completed Feb 13, 2024

@yadavpa1

I am facing a similar issue, AttributeError: 'tuple' object has no attribute 'to_legacy_cache', while training Llama 7B. What was the concluded solution?

@ArthurZucker
Collaborator

If you did not change your version of transformers, that is expected. Upgrading to the latest version / providing a repro should help!

@wangguojim

transformers==4.35.2 works

@AsteriaCao

I tried version 4.36.0.dev0 and don't have the issue there. Other versions, including 4.37.0.dev0, give the AttributeError.

My transformers version is 4.45.0.dev0 and it works.

@wangguojim

wangguojim commented Sep 9, 2024 via email
