
rotary embedding and layer norm #17

Open
qwopqwop200 opened this issue May 8, 2023 · 1 comment

@qwopqwop200

I've added two enhancements to the current GPTQ for LLaMA. Both bring a speed-up.
1. Triton rotary embedding, implemented by aljungberg
qwopqwop200/GPTQ-for-LLaMa#221
The rotary embedding is implemented as a Triton kernel. This gives a huge speed-up.
2. Triton RMS norm
https://github.com/qwopqwop200/GPTQ-for-LLaMa/blob/triton/quant/triton_norm.py
The RMS norm is implemented as a Triton kernel. This gives a slight extra speed boost; a rough sketch of the idea follows below.
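For context, the RMS norm kernel boils down to something like this minimal sketch (not the exact code in triton_norm.py; the one-row-per-program layout, the helper name `rmsnorm`, and the `eps` default are simplifications of this sketch):

```python
import torch
import triton
import triton.language as tl


@triton.jit
def rmsnorm_kernel(x_ptr, w_ptr, out_ptr, n_cols, eps, BLOCK_SIZE: tl.constexpr):
    # Each program normalizes one row; the whole row fits in one block.
    row = tl.program_id(0)
    cols = tl.arange(0, BLOCK_SIZE)
    mask = cols < n_cols
    x = tl.load(x_ptr + row * n_cols + cols, mask=mask, other=0.0).to(tl.float32)
    # RMSNorm: x / sqrt(mean(x^2) + eps), scaled by the learned weight.
    rms = tl.sqrt(tl.sum(x * x, axis=0) / n_cols + eps)
    w = tl.load(w_ptr + cols, mask=mask, other=0.0).to(tl.float32)
    y = (x / rms) * w
    tl.store(out_ptr + row * n_cols + cols, y.to(out_ptr.dtype.element_ty), mask=mask)


def rmsnorm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Normalize over the last dimension; all leading dims are treated as rows.
    x2d = x.contiguous().view(-1, x.shape[-1])
    out = torch.empty_like(x2d)
    n_rows, n_cols = x2d.shape
    BLOCK_SIZE = triton.next_power_of_2(n_cols)
    rmsnorm_kernel[(n_rows,)](x2d, weight, out, n_cols, eps, BLOCK_SIZE=BLOCK_SIZE)
    return out.view(x.shape)
```

The win comes from fusing the square, mean, scale, and weight multiply into a single kernel, so each row is read and written once instead of once per elementwise op.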

@Qubitium

Added PR https://github.com/fpgaminer/GPTQ-triton/pull/21/files?diff=split&w=1 to port the Triton rotary embedding over to this repo. Saw on average a 9% increase in new tokens/s on my 30B model.
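For anyone skimming, the idea is to apply precomputed per-position cos/sin rotations to each head inside one Triton kernel. A minimal sketch of that idea (not the code from the PR; the cos/sin-table approach, the `(n_tokens, n_heads, head_dim)` layout, the rotate-half pairing, and the names `rope_kernel`/`apply_rope` are assumptions of this sketch):

```python
import torch
import triton
import triton.language as tl


@triton.jit
def rope_kernel(x_ptr, cos_ptr, sin_ptr, out_ptr, n_heads, head_dim, BLOCK_DIM: tl.constexpr):
    # One program per (token, head); rotate pairs (d, d + head_dim // 2),
    # i.e. the "rotate-half" layout used by LLaMA.
    pid = tl.program_id(0)
    token = pid // n_heads
    half = head_dim // 2
    d = tl.arange(0, BLOCK_DIM)
    mask = d < half
    # cos/sin are precomputed on the host, one row of `half` angles per token.
    cos = tl.load(cos_ptr + token * half + d, mask=mask, other=0.0)
    sin = tl.load(sin_ptr + token * half + d, mask=mask, other=0.0)
    base = pid * head_dim
    x1 = tl.load(x_ptr + base + d, mask=mask, other=0.0).to(tl.float32)
    x2 = tl.load(x_ptr + base + half + d, mask=mask, other=0.0).to(tl.float32)
    tl.store(out_ptr + base + d, (x1 * cos - x2 * sin).to(out_ptr.dtype.element_ty), mask=mask)
    tl.store(out_ptr + base + half + d, (x1 * sin + x2 * cos).to(out_ptr.dtype.element_ty), mask=mask)


def apply_rope(x: torch.Tensor, positions: torch.Tensor, theta: float = 10000.0) -> torch.Tensor:
    # x: (n_tokens, n_heads, head_dim); positions: (n_tokens,) integer positions.
    n_tokens, n_heads, head_dim = x.shape
    half = head_dim // 2
    inv_freq = 1.0 / theta ** (torch.arange(half, device=x.device, dtype=torch.float32) / half)
    angles = positions.to(torch.float32)[:, None] * inv_freq[None, :]  # (n_tokens, half)
    cos, sin = angles.cos().contiguous(), angles.sin().contiguous()
    x = x.contiguous()
    out = torch.empty_like(x)
    BLOCK_DIM = triton.next_power_of_2(half)
    rope_kernel[(n_tokens * n_heads,)](x, cos, sin, out, n_heads, head_dim, BLOCK_DIM=BLOCK_DIM)
    return out
```

Doing the rotation in one pass over the Q/K tensors avoids the chain of slicing, concatenation, and broadcast multiplies that the eager implementation performs per layer, which is where the tokens/s gain comes from.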
