[HPU] [Serve] [experimental] Add vllm HPU support in vllm example #45893

Merged: 19 commits into ray-project:master, Aug 19, 2024

Conversation

KepingYan (Contributor)

Why are these changes needed?

This PR adds vLLM HPU support in the vLLM example (#45430). The added code checks whether an HPU device exists before allocating resources to the vLLM actors; if one exists, HPU resources are used, otherwise GPU resources are used as before (see the sketch below).
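
The merged changes live in doc/source/serve/doc_code/vllm_openai_example.py; the following is a minimal sketch of the selection logic described above, not the merged code itself. It assumes the Ray cluster advertises Intel Gaudi devices under the resource key "HPU", and the helper name get_accelerator_options is hypothetical.

```python
# Minimal sketch of the HPU-or-GPU selection this PR describes.
# Assumption: the cluster exposes Gaudi devices via Ray's "HPU" resource key.
import ray


def get_accelerator_options(num_devices: int = 1) -> dict:
    """Build Ray actor options: prefer HPUs when present, else fall back to GPUs."""
    if ray.available_resources().get("HPU", 0) > 0:
        # Custom resources are requested through the "resources" dict.
        return {"resources": {"HPU": num_devices}}
    # No HPUs in the cluster: keep the original GPU-based allocation.
    return {"num_gpus": num_devices}


if __name__ == "__main__":
    ray.init()
    # These options could then be passed to a deployment or actor,
    # e.g. SomeActor.options(**get_accelerator_options()).remote(...)
    print(get_accelerator_options())
```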

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: KepingYan <[email protected]>
@KepingYan changed the title from "[HPU] [Serve] Add vllm HPU support in vllm example" to "[HPU] [Serve] [experimental] Add vllm HPU support in vllm example" on Jun 21, 2024
@can-anyscale (Collaborator) left a comment:

It has been several weeks. @akshay-anyscale, @edoakes, do you mind having a look? Thanks.

Two review threads on doc/source/serve/doc_code/vllm_openai_example.py (outdated, resolved)
@anyscalesam added the serve (Ray Serve Related Issue) label on Jul 15, 2024
@edoakes added the go (add ONLY when ready to merge, run all tests) label on Jul 16, 2024
Signed-off-by: KepingYan <[email protected]>
@anyscalesam merged commit c46c2e5 into ray-project:master on Aug 19, 2024
4 of 5 checks passed
Labels
go (add ONLY when ready to merge, run all tests), serve (Ray Serve Related Issue)
5 participants