# TODO: https://docs.litellm.ai/docs/completion/token_usage
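
# Rough sketch of the token-usage helpers from the link above (assumes gpt-4o; not wired into the app yet):
from litellm import completion, completion_cost, token_counter

messages = [{"role": "user", "content": "Hey, how's it going?"}]

# Count prompt tokens locally before the call
prompt_tokens = token_counter(model="gpt-4o", messages=messages)

response = completion(model="gpt-4o", messages=messages)

# Token usage is reported on the response object
print(response.usage.prompt_tokens, response.usage.completion_tokens)

# Estimated USD cost of this single call
print(completion_cost(completion_response=response))
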
# TODO: update vision handling (local & remote) with gpt-4o https://platform.openai.com/docs/guides/vision
# TODO: handle vision capability for GPT-4o
# TODO: Hello GPT-4o https://openai.com/index/hello-gpt-4o/ and https://platform.openai.com/docs/models/gpt-4o
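
# Sketch for the gpt-4o vision TODOs above: the same chat-completion message shape covers a remote URL
# and a local file sent as a base64 data URL (litellm passes it through). Image path/URL are placeholders.
import base64
from litellm import completion

# Remote image: pass the URL directly
remote_messages = [{
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
    ],
}]

# Local image: inline it as a base64 data URL
with open("photo.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")
local_messages = [{
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
    ],
}]

response = completion(model="gpt-4o", messages=remote_messages)
print(response.choices[0].message.content)
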
# TODO: hugging face support https://docs.litellm.ai/docs/providers/huggingface
# TODO: embed llama.cpp Python binding for Local LLM https://github.com/abetlen/llama-cpp-python
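
# Minimal local-LLM sketch with abetlen/llama-cpp-python (the GGUF model path is a placeholder):
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
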
# TODO: HN API tools https://github.com/HackerNews/API
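
# The HN Firebase API is plain JSON over HTTPS; a small tool could start from something like:
import requests

HN_API = "https://hacker-news.firebaseio.com/v0"

top_ids = requests.get(f"{HN_API}/topstories.json", timeout=10).json()
story = requests.get(f"{HN_API}/item/{top_ids[0]}.json", timeout=10).json()
print(story["title"], story.get("url", "(no url)"))
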
# TODO: deploy 💰 Budgets, Rate Limits https://docs.litellm.ai/docs/proxy/users
# TODO: 🐳 Docker, Deploying LiteLLM Proxy https://docs.litellm.ai/docs/proxy/deploy
# TODO: model fallback https://docs.litellm.ai/docs/tutorials/model_fallbacks
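
# Fallback sketch based on the tutorial above; assumes litellm's `fallbacks` kwarg on completion()
# (double-check the current API, a Router-based setup may be preferred now):
from litellm import completion

response = completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
    # tried in order if the primary model errors out
    fallbacks=["gpt-3.5-turbo", "claude-3-haiku-20240307"],
)
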
# TODO: check Cohere docs, currently seems to throw an error https://docs.litellm.ai/docs/providers/cohere
# TODO: IMPORTANT: about tool use, note to self: tool-use streaming is not supported by most LLM providers (OpenAI, Anthropic), so to use tools the `streaming` param must be disabled
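
# Sketch of the non-streaming tool-use path described above (the `get_weather` tool is made up for illustration):
from litellm import completion

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, just to show the shape
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# stream=False because streamed tool-call deltas are not handled in the app
response = completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    stream=False,
)
print(response.choices[0].message.tool_calls)
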
# TODO: idea: integrate a token cost counter https://github.com/AgentOps-AI/tokencost?tab=readme-ov-file
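
# tokencost sketch (assumes its calculate_prompt_cost / calculate_completion_cost helpers; verify names against the repo):
from tokencost import calculate_completion_cost, calculate_prompt_cost

messages = [{"role": "user", "content": "Summarize this TODO list."}]
completion_text = "Here is a short summary..."

cost = calculate_prompt_cost(messages, "gpt-4o") + calculate_completion_cost(completion_text, "gpt-4o")
print(f"estimated cost: ${cost}")
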
# TODO: properly support local LLMs <https://github.com/Chainlit/cookbook/tree/main/local-llm>
# TODO: handle passing file to code-interpreter for Mino Assistant <https://platform.openai.com/docs/assistants/tools/code-interpreter/passing-files-to-code-interpreter>
# TODO: https://github.com/jakecyr/openai-function-calling
# TODO: https://github.com/Chainlit/cookbook/tree/main/openai-functions
# TODO: https://github.com/Chainlit/cookbook/tree/main/openai-functions-streaming
# TODO: https://github.com/Chainlit/cookbook/tree/main/openai-concurrent-streaming
# TODO: https://github.com/Chainlit/cookbook/tree/main/openai-concurrent-functions
# TODO: starting screen -> generate a list of conversation starter buttons.
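
# One way to do the starter buttons, assuming the cl.Starter / @cl.set_starters API from recent
# Chainlit releases (labels and messages below are placeholders):
import chainlit as cl

@cl.set_starters
async def set_starters():
    return [
        cl.Starter(label="Summarize a URL", message="Summarize this page for me: "),
        cl.Starter(label="Explain some code", message="Explain what this code does:\n"),
    ]
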
# TODO: token count, model name https://docs.litellm.ai/docs/completion/output
# TODO: TaskList https://docs.chainlit.io/api-reference/elements/tasklist
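
# TaskList sketch following the Chainlit element docs above:
import chainlit as cl

@cl.on_chat_start
async def start():
    task_list = cl.TaskList()
    task_list.status = "Running..."

    task = cl.Task(title="Fetching context", status=cl.TaskStatus.RUNNING)
    await task_list.add_task(task)
    await task_list.send()

    # ... long-running work happens here ...

    task.status = cl.TaskStatus.DONE
    task_list.status = "Done"
    await task_list.send()
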
# TODO: https://docs.chainlit.io/api-reference/data-persistence/custom-data-layer
# TODO: callback https://docs.chainlit.io/concepts/action#define-a-python-callback
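
# Action + callback sketch per the Chainlit docs above (action name/label are placeholders;
# note newer Chainlit versions use `payload` instead of `value` on cl.Action):
import chainlit as cl

@cl.on_chat_start
async def send_actions():
    actions = [cl.Action(name="regenerate", value="regenerate", label="Regenerate answer")]
    await cl.Message(content="Want another take?", actions=actions).send()

@cl.action_callback("regenerate")
async def on_regenerate(action: cl.Action):
    await cl.Message(content=f"Got action: {action.value}").send()
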
# TODO: customize https://docs.chainlit.io/customisation/overview
# TODO: config https://docs.chainlit.io/backend/config/overview
# TODO: sync/async https://docs.chainlit.io/guides/sync-async
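
# cl.make_async per the sync/async guide above: run blocking sync code in a thread so the event loop stays responsive.
import time
import chainlit as cl

def blocking_call(query: str) -> str:
    time.sleep(2)  # stands in for a blocking SDK / DB call
    return f"result for {query!r}"

@cl.on_message
async def on_message(message: cl.Message):
    result = await cl.make_async(blocking_call)(message.content)
    await cl.Message(content=result).send()
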
# TODO: function call https://docs.litellm.ai/docs/completion/function_call
# TODO: https://docs.litellm.ai/docs/completion/reliable_completions
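
# Retry sketch for the reliable-completions doc above (assumes the `num_retries` kwarg):
from litellm import completion

response = completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
    num_retries=3,  # retry transient provider errors before giving up
)
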
# TODO: Auth https://docs.chainlit.io/authentication/overview
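
# Password-auth sketch from the Chainlit auth docs above (hard-coded credentials are placeholders;
# requires CHAINLIT_AUTH_SECRET to be set):
from typing import Optional

import chainlit as cl

@cl.password_auth_callback
def auth_callback(username: str, password: str) -> Optional[cl.User]:
    # Replace with a real credential / DB check
    if (username, password) == ("admin", "admin"):
        return cl.User(identifier="admin", metadata={"role": "admin"})
    return None
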
# TODO: Data persistence https://docs.chainlit.io/data-persistence/overview
# TODO: custom endpoint https://docs.chainlit.io/backend/custom-endpoint
# TODO: deploy https://docs.chainlit.io/deployment/tutorials
# TODO: copilot chat widget https://docs.chainlit.io/deployment/copilot