
[Frontend][Core] Add guidance logits processor for guided decoding #10208

Closed
wants to merge 214 commits

Conversation


@JC1DA JC1DA commented Nov 11, 2024

Add Guidance backend for guided decoding

This pull request extends vLLM's guided decoding capabilities:

  • Add a guidance backend
  • Process logits in parallel with a thread pool (a minimal sketch follows below)

The guidance backend supports regex, choice, JSON, and grammar constraints.

relevant: #5245
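
To make the parallel logits processing concrete, here is a minimal, hypothetical sketch (illustrative names, not the code in this PR): the grammar yields a set of currently allowed token ids per sequence, and the per-sequence masking work is fanned out over a thread pool.

# Hypothetical sketch of thread-pooled logits masking; `apply_token_mask` and
# `process_batch` are illustrative names, not this PR's actual API.
from concurrent.futures import ThreadPoolExecutor

import torch

def apply_token_mask(logits: torch.Tensor, allowed_ids: list[int]) -> torch.Tensor:
    """Set every token the grammar does not currently allow to -inf."""
    mask = torch.full_like(logits, float("-inf"))
    mask[allowed_ids] = 0.0
    return logits + mask

def process_batch(logits_batch: torch.Tensor,
                  allowed_per_seq: list[list[int]]) -> None:
    # One masking task per sequence; torch releases the GIL inside its
    # kernels, so a thread pool can overlap the per-sequence work.
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [
            pool.submit(apply_token_mask, logits_batch[i], allowed)
            for i, allowed in enumerate(allowed_per_seq)
        ]
        for i, fut in enumerate(futures):
            logits_batch[i] = fut.result()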

Usage

  • JSON Generation
from pydantic import BaseModel, ConfigDict

from vllm import LLM, SamplingParams
from vllm.entrypoints.chat_utils import CustomChatCompletionMessageParam
from vllm.sampling_params import GuidedDecodingParams

model = "Qwen/Qwen2.5-0.5B-Instruct-GPTQ-Int4"
llm = LLM(model=model)

class UserProfile(BaseModel):
    name: str
    age: int
    email: str

    model_config = ConfigDict(extra="forbid")

sampling_params = SamplingParams(
    temperature=0.0,
    top_p=0.95,
    max_tokens=512,
    guided_decoding=GuidedDecodingParams(
        json=UserProfile,
        backend="guidance",
    ),
)

outputs = llm.chat(
    messages=[
        [
            CustomChatCompletionMessageParam(
                role="system", content="You are a helpful assistant."
            ),
            CustomChatCompletionMessageParam(
                role="user",
                content="Tell me something about yourself (name, age, email) in JSON format.\n",
            ),
        ],
    ],
    sampling_params=[sampling_params],
)
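
As a quick hedged follow-up (assuming vLLM's usual RequestOutput layout, where outputs[0].outputs[0].text holds the generated text), the constrained output should parse and validate cleanly against the same pydantic schema:

# Because the grammar forbids extra keys, the output should round-trip
# through the schema; `.outputs[0].text` is the generated completion text.
profile = UserProfile.model_validate_json(outputs[0].outputs[0].text)
print(profile.name, profile.age, profile.email)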
  • Choices Generation
sampling_params = SamplingParams(
    temperature=0.0,
    top_p=0.95,
    max_tokens=512,
    guided_decoding=GuidedDecodingParams(
        choice=["3","4","5","6"],
        backend="guidance",
    ),
)

outputs = llm.chat(
    messages=[
        [
            CustomChatCompletionMessageParam(
                role="system", content="You are a 5 years-old helpful assistant."
            ),
            CustomChatCompletionMessageParam(
                role="user",
                content="How old are you?",
            ),
        ],
    ],
    sampling_params=[sampling_params],
)
  • Regex Generation via OpenAI Client
model = "Qwen/Qwen2.5-0.5B-Instruct-GPTQ-Int4"
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="NOKEY",
)

completion = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "user",
            "content": "You are a 5 years-old helpful assistant. information.",
        },
        {
            "role": "user",
            "content": """How old are you?""",
        },
    ],
    extra_body={"guided_regex": "\\d+", "guided_decoding_backend": "guidance"}
)
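
This example assumes an OpenAI-compatible vLLM server is already listening on localhost:8000 (for instance, one launched with `vllm serve Qwen/Qwen2.5-0.5B-Instruct-GPTQ-Int4`; the launch command is an assumption here, not part of this PR). The guided regex then restricts the reply to digits:

# guided_regex "\d+" constrains the completion to one or more digits,
# so the message content should be a bare number (e.g. "5").
print(completion.choices[0].message.content)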

Benchmark

Model: Qwen2.5-7B-GPTQ-Int4
Dataset: GSM8K
Guided type: JSON

| Metric | Outlines | Guidance |
| --- | --- | --- |
| Accuracy | 1023/1318 (77.62%) | 1032/1318 (78.3%) |
| Average output tokens | 166 (+/- 83) | 195 (+/- 69) |
| Average latency per request, ms (1 concurrent req) | 2567 (+/- 976) | 1799 (+/- 466) |
| Average latency per request, ms (4 concurrent reqs) | 8697 (+/- 3866) | 3655 (+/- 1154) |
| Average latency per request, ms (8 concurrent reqs) | 17370 (+/- 8139) | 5997 (+/- 1991) |


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@mergify mergify bot added the ci/build label Nov 11, 2024
JC1DA and others added 28 commits November 10, 2024 23:02
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Loc Huynh <[email protected]>
…ing (vllm-project#8339)

Signed-off-by: Max de Bayser <[email protected]>
Co-authored-by: Max de Bayser <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Signed-off-by: Loc Huynh <[email protected]>

mergify bot commented Nov 11, 2024

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @JC1DA.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Nov 11, 2024
@JC1DA JC1DA closed this Nov 11, 2024
Labels
ci/build, documentation, frontend, needs-rebase