
Inference with Reference: Lossless Acceleration of Large Language Models by Nan Yang et al. #803

Open · 1 task · ShellLM opened this issue Apr 12, 2024 · 1 comment
Labels
llm: Large Language Models
llm-inference-engines: Software to run inference on large language models
llm-serving-optimisations: Tips, tricks and tools to speed up inference of large language models

Comments

ShellLM commented Apr 12, 2024

Inference with Reference: Lossless Acceleration of Large Language Models by Nan Yang et al.

Snippet

"Inference with Reference: Lossless Acceleration of Large Language Models

Nan Yang, Tao Ge, Liang Wang, Binxing Jiao, Daxin Jiang, Linjun Yang, Rangan Majumder, Furu Wei

We propose LLMA, an LLM accelerator to losslessly speed up Large Language Model (LLM) inference with references. LLMA is motivated by the observation that there are abundant identical text spans between the decoding result by an LLM and the reference that is available in many real-world scenarios (e.g., retrieved documents). LLMA first selects a text span from the reference and copies its tokens to the decoder and then efficiently checks the tokens' appropriateness as the decoding result in parallel within one decoding step. The improved computational parallelism allows LLMA to achieve over 2x speed-up for LLMs with identical generation results as greedy decoding in many practical generation scenarios where significant overlap between in-context reference and outputs exists (e.g., search engines and multi-turn conversations)."

Read the full paper here
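The copy-then-verify loop the abstract describes can be sketched in Python. This is a toy illustration, not the authors' implementation: `greedy_next` stands in for one greedy decoding step of an LLM (emulated here with a fixed greedy continuation so the output is known), and `match_len`/`copy_len` are hypothetical parameters. A real implementation verifies all copied tokens in a single parallel forward pass; here we only count decoding steps to show the speed-up while the output stays identical to plain greedy decoding.

```python
# Toy sketch of LLMA-style copy-then-verify decoding (assumptions noted above).

def greedy_next(prefix, target):
    """Stand-in for one greedy decoding step of an LLM."""
    return target[len(prefix)] if len(prefix) < len(target) else None

def find_copy_span(prefix, reference, match_len=2, copy_len=4):
    """Find a reference position whose preceding tokens match the current
    suffix of the output, and return up to copy_len candidate tokens."""
    if len(prefix) < match_len:
        return []
    suffix = prefix[-match_len:]
    for i in range(len(reference) - match_len):
        if reference[i:i + match_len] == suffix:
            start = i + match_len
            return reference[start:start + copy_len]
    return []

def llma_decode(reference, target, max_len=50):
    """Decode greedily, copying spans from the reference and verifying
    them; the output is identical to plain greedy decoding (lossless)."""
    out, steps = [], 0
    while len(out) < max_len and len(out) < len(target):
        steps += 1  # one (parallel) decoding step
        produced, check = [], list(out)
        for tok in find_copy_span(out, reference):
            if greedy_next(check, target) != tok:
                break  # first mismatch: reject the rest of the copied span
            produced.append(tok)
            check.append(tok)
        # The same step also yields the model's own token at the first
        # unverified position, so progress is always at least one token.
        nxt = greedy_next(out + produced, target)
        if nxt is not None:
            produced.append(nxt)
        if not produced:
            break
        out.extend(produced)
    return out, steps

reference = "the quick brown fox jumps over the lazy dog".split()
target = "he saw the quick brown fox jump away".split()
out, steps = llma_decode(reference, target)
print(out == target, steps, len(target))  # identical output in fewer steps
```

When the output overlaps the reference ("quick brown fox"), several tokens are accepted in one step; at the first divergence ("jump" vs "jumps") the span is rejected and decoding falls back to the model's own token, which is what makes the acceleration lossless.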

Suggested labels

None

@ShellLM added the llm, llm-inference-engines, and llm-serving-optimisations labels on Apr 12, 2024

ShellLM commented Apr 12, 2024

Related content

#174 similarity score: 0.89
#495 similarity score: 0.88
#494 similarity score: 0.88
#680 similarity score: 0.88
#332 similarity score: 0.88
#317 similarity score: 0.88
