Enable ChatQnA with vllm Arc support #771

Closed
gavinlichn wants to merge 2 commits

Conversation

gavinlichn

Description

Enable ChatQnA with vLLM inference on Intel Arc GPU.
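
For context, vLLM exposes an OpenAI-compatible HTTP API, so a ChatQnA stack backed by vLLM on an Arc GPU can be smoke-tested with a plain chat-completions request. A minimal sketch is below; the endpoint, port, and model name are illustrative placeholders, not values taken from this PR:

```python
# Minimal sanity check against a vLLM OpenAI-compatible endpoint.
# Host, port, and model ID below are hypothetical; adjust them to match
# the actual deployment.
import requests

VLLM_ENDPOINT = "http://localhost:8008/v1/chat/completions"  # placeholder port
MODEL_ID = "Intel/neural-chat-7b-v3-3"  # example model, not from this PR

payload = {
    "model": MODEL_ID,
    "messages": [{"role": "user", "content": "What is OPEA?"}],
    "max_tokens": 64,
}

resp = requests.post(VLLM_ENDPOINT, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```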

Issues

n/a

Type of change

List the type of change below. Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds new functionality)
  • Breaking change (fix or feature that would break existing design and interface)
  • Others (enhancement, documentation, validation, etc.)

Dependencies

opea-project/GenAIComps#641

Tests

n/a

Support vllm inference with Intel ARC GPU

Signed-off-by: Li Gang <[email protected]>
Co-authored-by: Chen, Hu1 <[email protected]>
@chensuyue (Collaborator)

Seems this PR is out of date, shall we close it? @gavinlichn

@gavinlichn (Author)

> Seems this PR is out of date, shall we close it? @gavinlichn

Let's close this PR for now; we can add vLLM Arc support later if required.

@gavinlichn closed this on Nov 14, 2024