[Demo][ExecuTorch] Lower and run native Gemma e2e in ExecuTorch #31706
Closed
Conversation
guangy10 added a commit to pytorch/executorch that referenced this pull request on Jul 12, 2024
…ecuTorch"

This PR is a prototype to showcase the minimal changes required to lower Gemma-2b to ExecuTorch with a static KV cache and run it directly in [llama runner](https://github.com/pytorch/executorch/tree/main/examples/models/llama2) without a single line of code change in the ExecuTorch runtime. By standardizing on the contract between HuggingFace modeling and the ExecuTorch runtime, any LLM in HuggingFace could use llama runner as a universal runtime for a given backend.

Instructions to run the demo: clone huggingface/transformers and patch [PR#31706](huggingface/transformers#31706) on top, which contains the minimal changes required on the modeling side. Patch this PR into your ExecuTorch repo; from there you can:

1. Run export_hf_model.py to lower gemma-2b to ExecuTorch:
```
python -m examples.models.export_hf_model -hfm "google/gemma-2b" --export # The model is exported with static dims and a static KV cache
```
2. Run tokenizer.py to generate the binary tokenizer format for the ExecuTorch runtime:
```
python -m examples.models.llama2.tokenizer.tokenizer -t <path_to_downloaded_gemma_checkpoint_dir>/tokenizer.model -o <your_out_dir>/tokenizer.bin
```
3. Build and run the lowered model with llama runner by following [step 4](https://github.com/pytorch/executorch/tree/main/examples/models/llama2#step-4-run-on-your-computer-to-validate) of this guide.

NOTE: This prototype demonstrates the feasibility of exporting and running a native HF model in ExecuTorch by reusing llama runner. It does NOT come with performance optimizations yet. Ongoing work along this path includes enabling 1) delegations, e.g. XNNPACK, 2) custom SDPA, and 3) the parallel prefill recently enabled in #4068.

[ghstack-poisoned]
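For context on what the export step above does, here is a minimal, hypothetical sketch of lowering a HuggingFace causal LM to an ExecuTorch `.pte` program with `torch.export` and `executorch.exir`. It is not the actual contents of export_hf_model.py: the example inputs, the output path, and the assumption that the model traces cleanly with static shapes (which is what the modeling changes in this PR are meant to provide) are illustrative only.

```
# Hypothetical sketch of an export flow in the spirit of export_hf_model.py.
# Assumes the modeling changes from this PR so the model traces with static shapes.
import torch
from transformers import AutoModelForCausalLM
from executorch.exir import to_edge

model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", torch_dtype=torch.float32)
model.eval()

# Static example inputs; the real script also wires up the static KV cache and
# cache positions expected by llama runner.
example_input_ids = torch.tensor([[1]], dtype=torch.long)

with torch.no_grad():
    # Capture the graph with fixed ("static") dims, matching the demo's export.
    exported = torch.export.export(model, (example_input_ids,))

# Lower to an ExecuTorch program and serialize it for llama runner to load.
et_program = to_edge(exported).to_executorch()
with open("gemma.pte", "wb") as f:
    f.write(et_program.buffer)
```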
guangy10 force-pushed the gemma_executorch branch from d3a336c to 1dfd20d on July 23, 2024 at 20:52
guangy10 changed the title from "[ExecuTorch] Lower and run native Gemma e2e in ExecuTorch" to "[Not To Merge][Demo][ExecuTorch] Lower and run native Gemma e2e in ExecuTorch" on Jul 24, 2024
guangy10 changed the title from "[Not To Merge][Demo][ExecuTorch] Lower and run native Gemma e2e in ExecuTorch" to "[Demo][ExecuTorch] Lower and run native Gemma e2e in ExecuTorch" on Jul 24, 2024
amyeroberts added the WIP label (Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress) on Aug 19, 2024
Gemma and Gemma2 have been enabled already. This demo PR can be closed.
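For reference, the upstream path that made this demo unnecessary looks roughly like the sketch below, which relies on the `transformers.integrations.executorch` helpers. The helper name, the GenerationConfig fields, and the cache sizes are assumptions about that integration, so treat this as an illustration rather than an exact API reference.

```
# Rough sketch of exporting Gemma via the upstream transformers/ExecuTorch integration.
# Helper names and arguments are assumptions for illustration; check the current docs.
from transformers import AutoModelForCausalLM, GenerationConfig
from transformers.integrations.executorch import convert_and_export_with_cache

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    generation_config=GenerationConfig(
        use_cache=True,
        cache_implementation="static",  # static KV cache, as in the demo
        cache_config={"batch_size": 1, "max_cache_len": 128},
    ),
)

# Wraps the model with a static cache and returns a torch.export ExportedProgram
# that can then be lowered with executorch.exir.to_edge(...).to_executorch().
exported = convert_and_export_with_cache(model)
```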
Labels: ExecuTorch, WIP
This PR is a prototype to showcase the minimal changes required to lower Gemma-2b to ExecuTorch with a static KV cache and run it directly in llama runner without a single line of code change in the ExecuTorch runtime.
By standardizing on the contract between HuggingFace modeling and the ExecuTorch runtime, any LLM in HuggingFace could use llama runner as a universal runtime for a given backend.
Instructions to run the demo:
To run the demo, you need to follow this guide to install ExecuTorch and patch PR#4088 on top; it adds the export_hf_model.py script used to export and lower the model to the XNNPACK backend. From there, you can:
NOTE: This prototype demonstrates the feasibility of exporting and running a native HF model in ExecuTorch by reusing llama runner. The demo shown in the screenshot uses XNNPACK delegation and runs the fp32 model on a Linux host.
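Since the note mentions XNNPACK delegation, here is a minimal, self-contained sketch of what the delegation step looks like, shown on a small made-up module rather than on Gemma; the demo applies the same `to_backend` call to the exported Gemma graph. The module, file name, and shapes are invented for illustration.

```
# Minimal sketch of XNNPACK delegation on a toy module (names and shapes are made up).
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge


class TinyMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(16, 16)

    def forward(self, x):
        return torch.nn.functional.relu(self.fc(x))


exported = torch.export.export(TinyMLP().eval(), (torch.randn(1, 16),))
edge = to_edge(exported)
edge = edge.to_backend(XnnpackPartitioner())  # delegate supported subgraphs to XNNPACK
et_program = edge.to_executorch()

with open("tiny_xnnpack.pte", "wb") as f:
    f.write(et_program.buffer)
```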