
feat(docs): Add IPEX-LLM section in LLM backend guide (for local model deployment on Intel CPU/GPU) #1968

Merged — 2 commits merged into zylon-ai:main on Jul 8, 2024

Conversation

shane-huang (Contributor) commented on Jun 12, 2024

This PR adds IPEX-LLM as a new local LLM backend to the LLM Backends page.

IPEX-LLM is a library for running LLMs locally on Intel CPUs and GPUs with very low latency. With IPEX-LLM, users can run LLMs smoothly even on the integrated GPUs of low-cost PCs.

PrivateGPT can connect to LLMs deployed with IPEX-LLM, and the IPEX-LLM docs website provides a detailed quickstart guide with a demo video (see Run PrivateGPT with IPEX-LLM Quickstart).
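For reference, the integration amounts to pointing PrivateGPT's Ollama mode at an Ollama server accelerated by IPEX-LLM. A minimal settings sketch, assuming the server listens on Ollama's default port 11434 and serves a `mistral` model (the model name and endpoint here are illustrative assumptions; the quickstart linked above is the authoritative walkthrough):

```yaml
# settings-ollama.yaml (sketch) — point PrivateGPT at an Ollama server
# backed by IPEX-LLM; the model name and endpoint below are assumptions.
llm:
  mode: ollama          # use PrivateGPT's Ollama backend

ollama:
  llm_model: mistral                  # any model pulled into the IPEX-LLM Ollama server
  api_base: http://localhost:11434    # default Ollama endpoint
```

With a profile like this in place, PrivateGPT would typically be started with `PGPT_PROFILES=ollama make run`; the IPEX-LLM-specific server setup is covered in the quickstart.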

If you have any suggestions on the document's style or content, please let me know.

@shane-huang changed the title from feat(docs): Add IPEX-LLM section (for local model deployment on Intel CPU/GPU) to feat(docs): Add IPEX-LLM LLM backend guide (for local model deployment on Intel CPU/GPU) on Jun 12, 2024
@shane-huang changed the title from feat(docs): Add IPEX-LLM LLM backend guide (for local model deployment on Intel CPU/GPU) to feat(docs): Add IPEX-LLM section in LLM backend guide (for local model deployment on Intel CPU/GPU) on Jun 12, 2024
@imartinez merged commit 19a7c06 into zylon-ai:main on Jul 8, 2024
7 of 8 checks passed
3 participants