
Batch Generation giving different output when using batch size > 1 or when using padding in MambaForCausalLM #31540

Closed
2 of 4 tasks
piyushdevlpr opened this issue Jun 21, 2024 · 2 comments · Fixed by #31549

Comments


piyushdevlpr commented Jun 21, 2024

System Info

  • transformers version: 4.41.2
  • Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
  • Python version: 3.10.8
  • Huggingface_hub version: 0.23.3
  • Safetensors version: 0.4.3
  • Accelerate version: 0.26.0
  • Accelerate config: not found
  • PyTorch version (GPU?): 2.0.0+cu117 (True)
  • Tensorflow version (GPU?): 2.9.0 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: No

Who can help?

@ArthurZucker @gante

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

I have trained a MambaForCausalLM model on a custom dataset.
I am using the following code to generate the next token in eval mode:

import torch
from transformers import MambaForCausalLM, PreTrainedTokenizerFast

model = MambaForCausalLM.from_pretrained("mamba_custom").to("cuda:0")
tokenizer = PreTrainedTokenizerFast.from_pretrained("mamba_tokenizer_custom")
tokenizer.padding_side = 'left'

# Tokenize with left padding up to max_length and run a single forward pass
inputs_rl = tokenizer(sentence, padding="max_length", truncation=True, max_length=100, return_tensors="pt").to("cuda:0")
with torch.no_grad():
    outputs = model(inputs_rl["input_ids"], attention_mask=inputs_rl["attention_mask"])

Expected behavior

The tokenizer pads the input on the left side. When I change the argument padding="max_length" to generate inputs without padding, I get different tokens as the prediction for the same sentence.

Using model.generate shows the same issue (comparison sketch below).
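
For completeness, a minimal sketch of the comparison, assuming the same model, tokenizer, and an example sentence as above (variable names are placeholders from my setup):

# Same sentence, once padded to max_length (left padding) and once unpadded
padded = tokenizer(sentence, padding="max_length", truncation=True, max_length=100,
                   return_tensors="pt").to("cuda:0")
unpadded = tokenizer(sentence, truncation=True, max_length=100,
                     return_tensors="pt").to("cuda:0")

with torch.no_grad():
    logits_padded = model(padded["input_ids"], attention_mask=padded["attention_mask"]).logits
    logits_unpadded = model(unpadded["input_ids"]).logits

# With left padding, the last position is the real last token in both cases,
# so the next-token predictions should match -- but they differ.
print(logits_padded[:, -1].argmax(-1), logits_unpadded[:, -1].argmax(-1))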

@amyeroberts

cc @gante


gante commented Jun 21, 2024

@piyushdevlpr 👋

Mamba, unlike transformer models, does not take an attention mask as input (see the signature here). As such, it does not support padding and will return different values for padded and unpadded inputs.

(I'm going to open a PR to try to prevent this issue from happening again)
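
In the meantime, a workaround sketch that avoids padding entirely by running the sequences one at a time (batch size 1), so no pad tokens ever enter the state; `sentences` stands in for the original batch and the generation kwargs are only an example:

# Generate for each sentence individually, unpadded, with batch size 1
generated = []
for sentence in sentences:
    ids = tokenizer(sentence, return_tensors="pt").input_ids.to("cuda:0")
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=20)
    generated.append(tokenizer.decode(out[0], skip_special_tokens=True))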
