GraphRAG #355

Closed

kevinintel opened this issue Jul 26, 2024 · 2 comments
Assignees
Labels: DEV features, feature (New feature or request)
Milestone: v1.1

Comments

@kevinintel (Collaborator)

No description provided.

@endomorphosis

I am interested in this as well. Please keep me in the loop on the tasks that need to be done. Right now I am trying to get Llama 3.1 405B (8-bit) running on Gaudi TGI, and Llama 3.1 405B GGUF with speculative decoding using Llama 3.1 8B working on Xeon.
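For context, speculative decoding pairs a small draft model with a large target model: the 8B model proposes a few tokens cheaply and the 405B model verifies them. Below is a minimal, greedy-only sketch of that idea; `draft_model`, `target_model`, and their `next_token` method are hypothetical stand-ins for illustration, not part of llama.cpp, TGI, or any OPEA API.

```python
# Minimal greedy sketch of speculative (draft-and-verify) decoding.
# `draft_model` / `target_model` are hypothetical objects exposing
# next_token(token_list) -> int; real runtimes (e.g. llama.cpp) batch the
# verification of all drafted positions into one forward pass of the target
# model, which is where the speedup comes from.

def speculative_decode(target_model, draft_model, tokens, k=4, max_new=64):
    out = list(tokens)
    while len(out) < len(tokens) + max_new:
        # 1. Draft k tokens cheaply with the small model.
        draft = []
        for _ in range(k):
            draft.append(draft_model.next_token(out + draft))
        # 2. Verify: accept the longest prefix of the draft that matches the
        #    target model's own greedy choices.
        for tok in draft:
            expected = target_model.next_token(out)
            if expected != tok:
                out.append(expected)  # mismatch: keep the target's token, drop the rest
                break
            out.append(tok)
        else:
            # Every drafted token was accepted; the verification pass yields
            # one extra target-model token "for free".
            out.append(target_model.next_token(out))
    return out
```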

lkk12014402 pushed a commit that referenced this issue Aug 8, 2024
@kevinintel added this to the v1.1 milestone Oct 14, 2024
@XuhuiRen (Collaborator)

Enabled two versions of GraphRAG: one based on LangChain and one based on LlamaIndex.
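For readers unfamiliar with what a LlamaIndex-based GraphRAG flow looks like, here is a bare-bones illustrative sketch. It assumes llama-index >= 0.10 with its default (OpenAI) LLM and embedding settings and a local `./data` folder of documents; it is not the OPEA component's actual code. A LangChain version would typically follow the same pattern using `LLMGraphTransformer` and a graph store such as Neo4j.

```python
# Illustrative only: a minimal GraphRAG flow with LlamaIndex's
# PropertyGraphIndex. Assumes `pip install llama-index`, an OPENAI_API_KEY for
# the default LLM/embeddings, and a ./data directory of text files.
from llama_index.core import SimpleDirectoryReader, PropertyGraphIndex

documents = SimpleDirectoryReader("./data").load_data()

# Build a property graph: an LLM extracts entities and relations from each chunk.
index = PropertyGraphIndex.from_documents(documents)

# Query the graph: retrieval walks graph neighbourhoods rather than relying
# only on flat vector similarity.
query_engine = index.as_query_engine()
print(query_engine.query("How are the main entities in these documents related?"))
```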

@github-project-automation bot moved this to Done in OPEA Oct 29, 2024
@joshuayao added the "feature" (New feature or request) label Nov 7, 2024
Projects: OPEA (Status: Done)
Development: No branches or pull requests

5 participants