chore: remove all mention of bedrock from litellm blog post (#127)
* chore: remove all mention of bedrock from litellm blog post

* chore: remove more mentions
danny-avila committed Sep 17, 2024
1 parent 612a0a7 commit d6dbaf7
Showing 1 changed file with 4 additions and 178 deletions.
182 changes: 4 additions & 178 deletions pages/blog/2023-11-30_litellm.mdx
```diff
@@ -17,7 +17,7 @@ import { BlogHeader } from '@/components/blog/BlogHeader'
 # Using LibreChat with LiteLLM Proxy
 Use **[LiteLLM Proxy](https://docs.litellm.ai/docs/simple_proxy)** for:
 
-* Calling 100+ LLMs Huggingface/Bedrock/TogetherAI/etc. in the OpenAI ChatCompletions & Completions format
+* Calling 100+ LLMs Huggingface/TogetherAI/etc. in the OpenAI ChatCompletions & Completions format
 * Load balancing - between Multiple Models + Deployments of the same model LiteLLM proxy can handle 1k+ requests/second during load tests
 * Authentication & Spend Tracking Virtual Keys
```
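The OpenAI-format contract mentioned above can be pictured with a request body; this is an illustrative sketch, with a placeholder proxy URL and model alias rather than values from the post:

```python
import json

# Hypothetical local proxy endpoint -- adjust to your deployment.
LITELLM_PROXY_URL = "http://localhost:4000/v1/chat/completions"

# The same OpenAI ChatCompletions payload shape works regardless of which
# backing provider (Huggingface, TogetherAI, etc.) serves the model alias.
payload = {
    "model": "gpt-4-turbo-preview",  # an alias defined in the proxy's model_list
    "messages": [{"role": "user", "content": "Hello from LibreChat!"}],
}

body = json.dumps(payload)
print(body)
```

A client would POST `body` to the proxy URL with an `Authorization: Bearer <virtual key>` header, exactly as it would against the OpenAI API.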
```diff
@@ -63,7 +63,7 @@ LiteLLM requires a configuration file in addition to the override file. Within L
 below has the options to enable llm proxy to various providers, load balancing, Redis caching, and Langfuse monitoring. Review documentation for other configuration options.
 More information on LiteLLM configurations here: **[docs.litellm.ai/docs/simple_proxy](https://docs.litellm.ai/docs/simple_proxy)**
 
-### Working Example of incorporating OpenAI, Azure OpenAI, AWS Bedrock, and GCP
+### Working Example of incorporating OpenAI, Azure OpenAI, and GCP
 
 Please note the `...` being a secret or a value you should not share (API key, custom tenant endpoint, etc)
 You can potentially use env variables for these too, ex: `api_key: "os.environ/AZURE_API_KEY" # does os.getenv("AZURE_API_KEY")`
```
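The `os.environ/...` convention above can be pictured with a small helper; this is an illustrative sketch of the idea, not LiteLLM's actual implementation:

```python
import os

def resolve_secret(value: str) -> str:
    """Resolve an 'os.environ/NAME' reference to the environment variable NAME,
    or return the value unchanged if it is a literal."""
    prefix = "os.environ/"
    if value.startswith(prefix):
        return os.environ[value[len(prefix):]]  # effectively os.getenv(NAME)
    return value

# A config entry like api_key: "os.environ/AZURE_API_KEY" resolves at runtime.
os.environ["AZURE_API_KEY"] = "sk-placeholder"  # placeholder, not a real key
print(resolve_secret("os.environ/AZURE_API_KEY"))
```

Keeping secrets in environment variables this way lets the config file be committed without exposing keys.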
```diff
@@ -74,180 +74,6 @@ You can potentially use env variables for these too, ex: `api_key: "os.environ/A
 model_list:
   # https://litellm.vercel.app/docs/proxy/quick_start
-  # AWS Bedrock - Anthropic
-  - model_name: claude-3-haiku
-    litellm_params:
-      model: bedrock/anthropic.claude-3-haiku-20240307-v1:0
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: claude-3-sonnet
-    litellm_params:
-      model: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: claude-3-opus
-    litellm_params:
-      model: bedrock/anthropic.claude-3-opus-20240229-v1:0
-      aws_region_name: us-west-2
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: claude-v2
-    litellm_params:
-      model: bedrock/anthropic.claude-v2:1
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: claude-instant
-    litellm_params:
-      model: bedrock/anthropic.claude-instant-v1
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  # AWS Bedrock - Meta Llama
-  - model_name: llama2-13b
-    litellm_params:
-      model: bedrock/meta.llama2-13b-chat-v1
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: llama2-70b
-    litellm_params:
-      model: bedrock/meta.llama2-70b-chat-v1
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: llama3-8b
-    litellm_params:
-      model: bedrock/meta.llama3-8b-instruct-v1:0
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: llama3-70b
-    litellm_params:
-      model: bedrock/meta.llama3-70b-instruct-v1:0
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  # AWS Bedrock - Mistral and Mixtral
-  - model_name: mistral-7b-instruct
-    litellm_params:
-      model: bedrock/mistral.mistral-7b-instruct-v0:2
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: mistral-large
-    litellm_params:
-      model: bedrock/mistral.mistral-large-2402-v1:0
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: mistral-small
-    litellm_params:
-      model: bedrock/mistral.mistral-small-2402-v1:0
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: mixtral-8x7b-instruct
-    litellm_params:
-      model: bedrock/mistral.mixtral-8x7b-instruct-v0:1
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: mixtral-large
-    litellm_params:
-      model: bedrock/mistral.mistral-large-2402-v1:0
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  # AWS Bedrock - Cohere
-  - model_name: cohere-command-v14
-    litellm_params:
-      model: bedrock/cohere.command-text-v14
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: cohere-command-light-v14
-    litellm_params:
-      model: bedrock/cohere.command-light-text-v14
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: cohere-command-r
-    litellm_params:
-      model: bedrock/cohere.command-r-v1:0
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: cohere-command-r-plus
-    litellm_params:
-      model: bedrock/cohere.command-r-plus-v1:0
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  # AWS Bedrock - AI21 Labs
-  - model_name: ai21-j2-mid
-    litellm_params:
-      model: bedrock/ai21.j2-mid-v1
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: ai21-j2-ultra
-    litellm_params:
-      model: bedrock/ai21.j2-ultra-v1
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  # Amazon
-  - model_name: amazon-titan-lite
-    litellm_params:
-      model: bedrock/amazon.titan-text-lite-v1
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: amazon-titan-express
-    litellm_params:
-      model: bedrock/amazon.titan-text-express-v1
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
-  - model_name: amazon-titan-premier
-    litellm_params:
-      model: bedrock/amazon.titan-text-premier-v1:0
-      aws_region_name: us-east-1
-      aws_access_key_id: A...
-      aws_secret_access_key: ...
   # MS Azure
   - model_name: azure-gpt-4-turbo-preview
     litellm_params:
```
```diff
@@ -448,7 +274,7 @@ custom:
 
 ## Why use LiteLLM?
 
-1. **Access to Multiple LLMs**: It allows calling over 100 LLMs from platforms like Huggingface, Bedrock, TogetherAI, etc., using OpenAI's ChatCompletions and Completions format.
+1. **Access to Multiple LLMs**: It allows calling over 100 LLMs from platforms like Huggingface, TogetherAI, etc., using OpenAI's ChatCompletions and Completions format.
 
 2. **Load Balancing**: Capable of handling over 1,000 requests per second during load tests, it balances load across various models and deployments.
```
```diff
@@ -459,7 +285,7 @@ Key components and features include:
 - **Installation**: Easy installation.
 - **Testing**: Testing features to route requests to specific models.
 - **Server Endpoints**: Offers multiple endpoints for chat completions, completions, embeddings, model lists, and key generation.
-- **Supported LLMs**: Supports a wide range of LLMs, including AWS Bedrock, Azure OpenAI, Huggingface, AWS Sagemaker, Anthropic, and more.
+- **Supported LLMs**: Supports a wide range of LLMs, including Azure OpenAI, Huggingface, AWS Sagemaker, Anthropic, and more.
 - **Proxy Configurations**: Allows setting various parameters like model list, server settings, environment variables, and more.
 - **Multiple Models Management**: Configurations can be set up for managing multiple models with fallbacks, cooldowns, retries, and timeouts.
 - **Embedding Models Support**: Special configurations for embedding models.
```