Issues: intel-analytics/ipex-llm
- #12380: Vulnerability issues CVE-2024-31583 and CVE-2024-31580 on torch<2.2.0 (opened Nov 11, 2024 by Johere)
- #12376: Performance problem with InternVL image embedding using ggml.dll [user issue] (opened Nov 11, 2024 by cjsdurj)
- #12374: Several GPU models behave erratically compared to CPU execution [user issue] (opened Nov 10, 2024 by pepijndevos)
- #12363: Can't run Ollama in a Docker container with an iGPU on Linux [user issue] (opened Nov 7, 2024 by user7z)
- #12356: Could not use SFT Trainer in qlora_finetuning.py [user issue] (opened Nov 7, 2024 by shungyantham)
- #12334: After installing ipex-llm-ollama-installer-20240918.exe, invoking start.bat in the install folder from another exe reports missing DLLs and fails to run [user issue] (opened Nov 5, 2024 by dayskk)
- #12288: llava-hf/llava-1.5-7b-hf: error in multi-turn chat with multiple images [user issue] (opened Oct 29, 2024 by Johere)
- #12266: [NPU] Slow token generation with latest NPU driver 32.0.100.3053 on LNL 226V series [user issue] (opened Oct 25, 2024 by climh)
- #12258: llama.cpp generation incoherent (always <eos>); which driver version on Ubuntu 22.04.5? [user issue] (opened Oct 23, 2024 by ultoris)
- #12254: CodeGeeX Nano model runs the benchmark slower than qwen1.5-1.8B (0.5x) [user issue] (opened Oct 23, 2024 by johnysh)
- #12220: Slowdown / pod stuck after Orca init; resolves with barrier mode False [user issue] (opened Oct 17, 2024 by kahlun)
- #12216: Ollama error after a few requests: "ubatch must be set as the times of VS" [user issue] (opened Oct 17, 2024 by mateHD)
- #12208: LLM benchmark for chatglm3-6b unable to run properly [user issue] (opened Oct 15, 2024 by vincent-wsz)
- #12193: Ollama generates incorrect answers when running the glm-4-9b-chat model [user issue] (opened Oct 14, 2024 by junruizh2021)
ProTip! To show only issues updated in the last three days, use the search qualifier: updated:>2024-11-08.
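The `updated:>2024-11-08` qualifier is part of GitHub's issue search syntax and can also be used programmatically through the public REST search endpoint. A minimal sketch (the helper name `issue_search_url` is hypothetical; unauthenticated requests to the real endpoint are rate-limited):

```python
from urllib.parse import quote

def issue_search_url(repo: str, updated_after: str) -> str:
    """Build a GitHub search-API URL for open issues in `repo` updated
    after `updated_after`, mirroring the qualifier updated:>YYYY-MM-DD."""
    query = f"repo:{repo} is:issue is:open updated:>{updated_after}"
    return "https://api.github.com/search/issues?q=" + quote(query)

url = issue_search_url("intel-analytics/ipex-llm", "2024-11-08")
# Fetching this URL (e.g. with urllib.request or curl) returns a JSON
# payload whose "items" array holds the matching issues.
```

The same qualifier string works unchanged in the search box on github.com, so the programmatic and interactive filters stay in sync.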