
Parallelize LayerNormalization operator #334

Open
robertknight opened this issue Aug 26, 2024 · 0 comments

LayerNormalization is an operation that appears in most transformer models, although some more recent ones use relatives such as RMSNorm. The normalization statistics (mean and variance) are computed independently for each slice along the normalized axes, so the operator can be parallelized by splitting the input over a non-normalized axis and normalizing each chunk separately.

From a quick experiment on a 4-core system, I can get a ~2x speedup quite easily.
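
A minimal sketch of the idea (not this crate's actual operator code), assuming the input is a contiguous `f32` buffer with the normalized axis last, and using rayon for the parallelism; the function name and signature here are illustrative:

```rust
use rayon::prelude::*;

/// Normalize each `row_len`-sized row of `data` in place, as in
/// LayerNormalization with the normalized axis last. Each row's mean
/// and variance depend only on that row, so rows can be processed in
/// parallel.
fn layer_norm_parallel(
    data: &mut [f32],
    row_len: usize,
    scale: &[f32],
    bias: &[f32],
    epsilon: f32,
) {
    assert_eq!(scale.len(), row_len);
    assert_eq!(bias.len(), row_len);

    // Split over the non-normalized (row) axis: rayon hands each
    // worker thread a disjoint set of rows.
    data.par_chunks_mut(row_len).for_each(|row| {
        let n = row.len() as f32;
        let mean = row.iter().sum::<f32>() / n;
        let var = row.iter().map(|x| (x - mean) * (x - mean)).sum::<f32>() / n;
        let inv_std = 1.0 / (var + epsilon).sqrt();
        for ((x, s), b) in row.iter_mut().zip(scale).zip(bias) {
            *x = (*x - mean) * inv_std * *s + *b;
        }
    });
}
```

For very short rows it may pay to group several rows per parallel task to amortize scheduling overhead, though rayon's work-stealing splitter already adapts the granularity reasonably well.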
