
Fix: FA2 with packed training #32487

Merged · 5 commits into huggingface:main · Aug 12, 2024

Conversation

zucchini-nlp (Member)

What does this PR do?

Fixes the issue from #32241 (comment). FA2 was taking the packed-sequence path when we were continuing generation with a filled cache and had more than one new token. This can happen in assisted decoding, for example.

I think that instead of checking the position ids' length and last elements, we can check whether they are arranged in increasing order. We know for sure that packed sequences will not have all positions increasing, and we can be sure that in inference positions are always increasing when no attention mask is provided. If an attention mask is provided, we never reach this check, so it should not be a problem.
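For illustration, here is a minimal, hypothetical sketch of the increasing-order check described above (not the actual modeling code; `looks_packed` is a made-up helper name):

```python
import torch

def looks_packed(position_ids: torch.Tensor) -> bool:
    # position_ids has shape (batch, seq_len). A single (possibly continued) sequence
    # has non-decreasing positions, while packed examples restart their positions
    # mid-row and therefore contain a negative difference somewhere.
    return not (torch.diff(position_ids, dim=-1) >= 0).all()

# Continuing generation with a filled cache (e.g. assisted decoding proposes several
# new tokens): positions keep increasing, so this should not take the packed path.
print(looks_packed(torch.tensor([[10, 11, 12]])))        # False

# Two sequences packed into one row: positions reset to 0 mid-row.
print(looks_packed(torch.tensor([[0, 1, 2, 0, 1, 2]])))  # True
```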

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

ArthurZucker (Collaborator) left a comment

We probably need a patch for this, no? Can you add non-slow tests as well? 🤗

# If position_ids is provided, check whether each example contains only one sequence: if the positions in the
# tensor are all increasing, we probably have a single sequence; otherwise the batch is packed. Additionally,
# check that we are in the pre-fill/training stage. Use `flash_attn_varlen_func` to prevent cross-example
# attention and also to allow the padding-free approach.
elif position_ids is not None and not (torch.diff(position_ids) >= 0).all() and query_length != 1:
Collaborator

Can we make it explicit that we diff on the batch dim?

zucchini-nlp (Member Author) · Aug 7, 2024

Yes, will do so and add more tests!
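For reference, a hypothetical sketch of what making the dim explicit could look like (assuming `position_ids` has shape `(batch, seq_len)`; `torch.diff` already defaults to the last dim, so this only spells out that consecutive positions are compared within each example):

```python
# hypothetical: spell out which dim the positions are diffed over
all_increasing = (torch.diff(position_ids, dim=-1) >= 0).all()
```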

ArthurZucker (Collaborator) left a comment

Cool! Can you just run the slow test to make sure it's not skipped? 🤗

zucchini-nlp (Member Author)

Slow CI tests are failing for unrelated reasons: gated repos, OOM, or failures that happen even without this PR. I believe this can be merged, since many tests need to be fixed in general.

ArthurZucker (Collaborator)

Yep, we need to patch with this, let's merge! 🤗

zucchini-nlp (Member Author)

Yep, and #32527 is for the patch release, please. Can you approve it as well?

zucchini-nlp merged commit 8f2b6d5 into huggingface:main on Aug 12, 2024
22 checks passed
ArthurZucker pushed a commit that referenced this pull request on Aug 16, 2024:
* fix check

* add tests

* [run-slow] llama, gemma2

* oops, whisper actually runs but needed some special treatment
ArthurZucker pushed two further commits that referenced this pull request on Aug 20, 2024, and stevhliu pushed a commit to stevhliu/transformers that referenced this pull request on Aug 21, 2024, all with the same commit message as above.