Does Llama-3-Taiwan-8B-Instruct support LangChain Tools? #72

Open
Hsun1128 opened this issue Oct 3, 2024 · 0 comments
Hsun1128 commented Oct 3, 2024

I'm currently running the 8B model with vLLM:

export NUM_GPUS=1
export PORT=8000

docker run \
  -e HF_TOKEN=$HF_TOKEN \
  --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p "${PORT}:8000" \
  --ipc=host \
  vllm/vllm-openai:v0.4.0.post1 \
  --model "yentinglin/Llama-3-Taiwan-8B-Instruct" \
  -tp "${NUM_GPUS}"
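One thing I'm wondering about: the pinned image `vllm/vllm-openai:v0.4.0.post1` predates vLLM's server-side tool-call parsing, so the server may never return structured `tool_calls` regardless of what the model emits. A sketch of what I could try instead, assuming a newer vLLM image where the `--enable-auto-tool-choice` and `--tool-call-parser` flags are available, and assuming the `llama3_json` parser fits this Llama-3-based model:

```shell
# Sketch: newer vLLM images expose tool-calling flags that v0.4.0.post1 lacks.
# Assumption: the llama3_json parser matches this model's tool-call format.
docker run \
  -e HF_TOKEN=$HF_TOKEN \
  --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p "${PORT}:8000" \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model "yentinglin/Llama-3-Taiwan-8B-Instruct" \
  -tp "${NUM_GPUS}" \
  --enable-auto-tool-choice \
  --tool-call-parser llama3_json
```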

Then I connect to the LLM using the method LangChain provides for connecting to vLLM:

from langchain_openai import ChatOpenAI

model_id = "yentinglin/Llama-3-Taiwan-8B-Instruct"
inference_server_url = "http://localhost:8000/v1"

llm = ChatOpenAI(
    model=model_id,
    openai_api_key="EMPTY",
    openai_api_base=inference_server_url,
    temperature=0,
    streaming=True,
)

Next, I followed the official tutorial to try it out, but I never get any tool_calls. So I'd like to ask whether this model supports LangChain Tools, and if so, how to resolve this issue. Any advice would be much appreciated, thank you.

Reference link: https://langchain-ai.github.io/langgraph/how-tos/pass-run-time-values-to-tools/#define-the-nodes
