
[Demo][ExecuTorch] Lower and run native Gemma e2e in ExecuTorch #31706

Closed · wants to merge 2 commits

Conversation

guangy10 (Contributor) commented on Jun 29, 2024

This PR is a prototype showcasing the minimal changes required to lower Gemma-2b to ExecuTorch with a static KV cache and run it directly in llama runner without a single line of code change in the ExecuTorch runtime.

By standardizing the contract between Hugging Face modeling code and the ExecuTorch runtime, any LLM on Hugging Face could use llama runner as a universal runtime for a given backend.
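For concreteness, below is a minimal sketch of what that contract could look like on the modeling side. This is not the PR's actual code: the `ExportableGemma` wrapper and all shape choices are illustrative assumptions, and the only real API it leans on is `StaticCache` from a transformers version that ships it. The idea is that `forward` takes `(input_ids, cache_position)` and the KV cache is preallocated module state, so `torch.export` sees fully static shapes.

```
# Minimal sketch of the modeling-side contract (illustrative, not this
# PR's code). Assumes a transformers version that provides StaticCache.
import torch
from transformers import AutoModelForCausalLM
from transformers.cache_utils import StaticCache

class ExportableGemma(torch.nn.Module):  # hypothetical wrapper, not a real API
    def __init__(self, model, max_cache_len=1024):
        super().__init__()
        self.model = model
        # Preallocated KV cache held as module state, so the exported
        # graph carries no dynamic cache tensors in its signature.
        self.static_cache = StaticCache(
            config=model.config,
            max_batch_size=1,
            max_cache_len=max_cache_len,
            device="cpu",
            dtype=model.dtype,
        )

    def forward(self, input_ids, cache_position):
        out = self.model(
            input_ids=input_ids,
            cache_position=cache_position,
            past_key_values=self.static_cache,
            use_cache=True,
        )
        return out.logits

model = AutoModelForCausalLM.from_pretrained("google/gemma-2b").eval()
exported = torch.export.export(
    ExportableGemma(model),
    args=(
        torch.zeros((1, 1), dtype=torch.long),  # one decode token per step
        torch.tensor([0], dtype=torch.long),    # position to write in the cache
    ),
)
```

Because every input and output tensor then has a fixed shape, the same llama runner binary can drive any model exported this way.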

Instructions to run the demo:

To run the demo, you need to follow this guide to install ExecuTorch and apply PR #4088, which adds the script export_hf_model.py used to export and lower the model to the XNNPACK backend. From there, you can:

1. Run export_hf_model.py to lower gemma-2b to ExecuTorch (a sketch of this lowering pipeline follows the list):
```
cd executorch  # in the root dir of executorch
python -m examples.models.export_hf_model -hfm "google/gemma-2b" --export  # the model is exported with static dims and a static KV cache
```
2. Run tokenizer.py to generate the binary format for the ExecuTorch runtime:
```
python -m examples.models.llama2.tokenizer.tokenizer -t <path_to_downloaded_gemma_checkpoint_dir>/tokenizer.model -o <your_out_dir>/tokenizer.bin
```
3. Build and run the lowered model with llama runner by following [step 4 of this guide](https://github.com/pytorch/executorch/tree/main/examples/models/llama2#step-4-run-on-your-computer-to-validate).
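For reference, the lowering performed by step 1 is conceptually something like the sketch below. This is not export_hf_model.py itself: it assumes the `exported` program from the earlier sketch and an output file name of `gemma.pte`, and uses ExecuTorch's edge-dialect API and XNNPACK partitioner.

```
# Hedged sketch of the lowering pipeline behind step 1 (not the actual
# export_hf_model.py). Continues from the `exported` program above.
from executorch.backends.xnnpack.partition.xnnpack_partitioner import (
    XnnpackPartitioner,
)
from executorch.exir import to_edge

edge = to_edge(exported)                      # convert to Edge dialect
edge = edge.to_backend(XnnpackPartitioner())  # delegate subgraphs to XNNPACK
et_program = edge.to_executorch()             # final ExecuTorch program

with open("gemma.pte", "wb") as f:
    f.write(et_program.buffer)                # the artifact llama runner loads
```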

[Screenshot, 2024-07-23: demo output of the lowered Gemma-2b model running in llama runner]

NOTE: This prototype demonstrates the feasibility of exporting and running a native HF model in ExecuTorch by reusing llama runner. The demo in the screenshot uses XNNPACK delegation to run the fp32 model on a Linux host.
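Besides llama runner, one quick way to sanity-check the lowered program from Python is ExecuTorch's pybindings. A sketch, assuming ExecuTorch was installed with pybindings enabled and the exported file is named `gemma.pte` as in the sketches above; the token id is an illustrative assumption:

```
# Sketch: run one decode step on the lowered program via pybindings.
# The token id and file name are assumptions, not values from this PR.
import torch
from executorch.extension.pybindings.portable_lib import _load_for_executorch

et_module = _load_for_executorch("gemma.pte")
outputs = et_module.forward(
    (
        torch.tensor([[2]], dtype=torch.long),  # a single token id (e.g. BOS)
        torch.tensor([0], dtype=torch.long),    # cache position 0
    )
)
print(outputs[0].shape)  # expect (1, 1, vocab_size) logits for one step
```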

guangy10 added a commit to pytorch/executorch that referenced this pull request on Jul 12, 2024: "…ecuTorch"

This PR is a prototype showcasing the minimal changes required to lower Gemma-2b to ExecuTorch with a static KV cache and run it directly in [llama runner](https://github.com/pytorch/executorch/tree/main/examples/models/llama2) without a single line of code change in the ExecuTorch runtime.

By standardizing the contract between Hugging Face modeling code and the ExecuTorch runtime, any LLM on Hugging Face could use llama runner as a universal runtime for a given backend.

Instructions to run the demo:

To run the demo, you need to clone huggingface/transformers and apply [PR#31706](huggingface/transformers#31706) on top, which contains the minimal changes required on the modeling side. Apply this PR to your ExecuTorch repo; from there you can:

1. Run the export_hf_model.py to lower gemma-2b to ExecuTorch:
```
python -m examples.models.export_hf_model -hfm "google/gemma-2b" --export  # the model is exported with static dims and a static KV cache
```
2. Run tokenizer.py to generate the binary format for the ExecuTorch runtime:
```
python -m examples.models.llama2.tokenizer.tokenizer -t <path_to_downloaded_gemma_checkpoint_dir>/tokenizer.model -o <your_out_dir>/tokenizer.bin
```
3. Build and run the lowered model with llama runner by following this guide [step 4](https://github.com/pytorch/executorch/tree/main/examples/models/llama2#step-4-run-on-your-computer-to-validate)

NOTE: This prototype demonstrates the feasibility of exporting and running a native HF model in ExecuTorch by reusing llama runner. It is not yet tuned for performance. Ongoing work along this path includes enabling 1) delegations, e.g. XNNPACK, 2) custom SDPA, and 3) parallel prefill, recently enabled in #4068.

[ghstack-poisoned]
guangy10 added a commit to pytorch/executorch that referenced this pull request on Jul 12, 2024, with the same message as above.
guangy10 changed the title from "[ExecuTorch] Lower and run native Gemma e2e in ExecuTorch" to "[Not To Merge][Demo][ExecuTorch] Lower and run native Gemma e2e in ExecuTorch" on Jul 24, 2024
guangy10 changed the title from "[Not To Merge][Demo][ExecuTorch] Lower and run native Gemma e2e in ExecuTorch" to "[Demo][ExecuTorch] Lower and run native Gemma e2e in ExecuTorch" on Jul 24, 2024
guangy10 mentioned this pull request on Jul 26, 2024
huggingface deleted a comment from github-actions bot on Aug 19, 2024
amyeroberts added the WIP label on Aug 19, 2024
guangy10 (Contributor, Author) commented:

Gemma and Gemma2 have been enabled already. This demo PR can be closed.

guangy10 closed this on Oct 29, 2024
guangy10 deleted the gemma_executorch branch on Oct 29, 2024
Labels: ExecuTorch, WIP