update header #166

Merged (3 commits, Aug 20, 2024)
15 changes: 13 additions & 2 deletions docs/tutorials/litellm.md
@@ -7,6 +7,13 @@ LiteLLM currently supports requests in:
- [The OpenAI format](https://docs.litellm.ai/docs/completion/input) - `/chat/completion`, `/embedding`, `completion`, `/audio/transcription`, etc.
- [The Anthropic format](https://docs.litellm.ai/docs/anthropic_completion) - `/messages`


[**Detailed Docs**](https://docs.litellm.ai/docs/proxy/quick_start)

## Pre-Requisites
- Install litellm proxy - `pip install 'litellm[proxy]'`
- Set up [LLM Guard Docker](https://llm-guard.com/api/deployment/#from-docker)

## Quick Start

Let's add LLM Guard content moderation for Anthropic API calls.
@@ -18,7 +25,7 @@ export LLM_GUARD_API_BASE="http://0.0.0.0:8192" # deployed llm guard api
export ANTHROPIC_API_KEY="sk-..." # anthropic api key
```

- Add `llmguard_moderations` as a callback
+ Add `llmguard_moderations` as a callback in a config.yaml

```yaml
model_list:
@@ -32,7 +39,11 @@ litellm_settings:
callbacks: ["llmguard_moderations"]
```
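For context, the hunk above adds the callback to a config.yaml whose overall shape is roughly the following sketch; the model alias and the Anthropic model ID here are placeholder assumptions, not values taken from this PR:

```yaml
model_list:
  - model_name: claude-3                        # placeholder alias clients request
    litellm_params:
      model: anthropic/claude-3-haiku-20240307  # assumed Anthropic model ID

litellm_settings:
  callbacks: ["llmguard_moderations"]           # run LLM Guard moderation on requests
```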

- Now you can easily test it
+ Now you can easily test it:

```bash
litellm --config /path/to/config.yaml
```

- Make a regular /chat/completion call
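As a sketch of that call, the snippet below builds a standard OpenAI-format request against the proxy. The model name is a placeholder, and the address assumes LiteLLM's default proxy port of 4000; adjust both to match your deployment.

```python
import json
import urllib.request

# OpenAI-format chat completion payload.
# "claude-3-haiku" is a placeholder; use a model_name from your config.yaml.
payload = {
    "model": "claude-3-haiku",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
}

req = urllib.request.Request(
    "http://0.0.0.0:4000/chat/completions",  # assumed default LiteLLM proxy address
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the proxy is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

With the `llmguard_moderations` callback configured, this request is screened by the LLM Guard API before being forwarded to Anthropic.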

2 changes: 1 addition & 1 deletion mkdocs.yml
@@ -107,7 +107,7 @@ nav:
- Optimization: tutorials/optimization.md
- Local Models: tutorials/notebooks/local_models.ipynb
- OpenAI SDK: tutorials/openai.md
-      - Across 100+ LLMs: tutorials/litellm.md
+      - Across 100+ LLMs (LiteLLM): tutorials/litellm.md
- Langchain: tutorials/notebooks/langchain.ipynb
- Retrieval-augmented Generation: tutorials/rag.md
- Secure RAG with Langchain: tutorials/notebooks/langchain_rag.ipynb