Issues: SciSharp/LLamaSharp
Issues list
[BUG]: Slowdown when switching to the new version, LLamaSharp 0.16.0 (#921, opened Sep 4, 2024 by aropb)
[BUG]: KernelMemory - Simultaneous execution of AskDocument & ImportDocument (#918, opened Sep 3, 2024 by aropb)
How do I use RAG with Kernel Memory and the Semantic Kernel Handlebars Planner with llama3? (#899, opened Aug 13, 2024 by Barshan-Mandal)
Application Not Using GPU Despite Installing LlamaSharp.Backend.Cuda12 (labels: documentation, good first issue) (#896, opened Aug 10, 2024 by ZCOREP)
[BUG]: "The type or namespace 'Common' does not exist in the namespace 'LLama'" (#895, opened Aug 9, 2024 by crisdesivo)
[BUG]: Object reference error in ApplyPenalty when setting nl_logit (#866, opened Jul 24, 2024 by GalactixGod)
[BUG]: ChatSession unnecessarily prevents arbitrary conversation interleaving (#857, opened Jul 19, 2024 by lostmsu)
[BUG]: Tokenization in 0.14.0 adds spaces (labels: bug, Upstream) (#856, opened Jul 18, 2024 by newsletternewsletter)
Method not found: 'Double Microsoft.KernelMemory.AI.TextGenerationOptions.get_TopP()' (#832, opened Jul 6, 2024 by KanonRim)
How to handle "CUDA error: out of memory"? (label: Upstream) (#831, opened Jul 6, 2024 by yukozh)