
LLM Inference Online

Inference Engine

The LLM inference engine runs an LLM locally, so the model can be used for offline inference without a server.
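
A minimal sketch of such an offline engine, assuming the Hugging Face transformers library and a locally stored model (the model path below is only a placeholder, not necessarily the model this repo uses):

```python
# Local, offline inference sketch: load a model from disk and generate text
# without contacting any server. The model path is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "gpt2"  # hypothetical: replace with the local model directory

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Run the model locally and return the generated continuation."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Hello, how are you?"))
```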

Inference Server

The inference server exposes the host's IP address and port so that remote clients can use the LLM online.
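
As a rough illustration, the server could look like the following sketch; FastAPI, uvicorn, the /generate route, and the port number are assumptions for the example, not this repo's documented API:

```python
# Inference server sketch: expose the host's IP and port over HTTP so that
# remote clients can send prompts. Framework and route names are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str

@app.post("/generate")
def generate_endpoint(req: GenerateRequest):
    # In the real server this would call the local inference engine above.
    reply = f"echo: {req.prompt}"  # placeholder response
    return {"response": reply}

if __name__ == "__main__":
    # Binding to 0.0.0.0 makes the host's IP and port reachable from other machines.
    uvicorn.run(app, host="0.0.0.0", port=8000)
```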

Client

Users on the internet can send their text to the model and wait for its response.
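
A matching client sketch, assuming the hypothetical /generate endpoint and JSON shape from the server example above:

```python
# Client sketch: send the user's text to the server and print the model's reply.
# The URL and JSON fields follow the hypothetical server above, not a documented API.
import requests

SERVER_URL = "http://127.0.0.1:8000/generate"  # replace with the server host's IP and port

def ask(prompt: str) -> str:
    """Post the prompt to the server and wait for the model's response."""
    resp = requests.post(SERVER_URL, json={"prompt": prompt}, timeout=60)
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Hello, model!"))
```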
