
Add Amazon Bedrock support #483

Merged · merged 5 commits into Codium-ai:main from add-bedrock-support on Nov 29, 2023

Conversation

@tmokmss (Contributor) commented Nov 28, 2023

closes #459

Added support for Anthropic Claude on Amazon Bedrock. I'll add some explanation in the review comments.

@@ -60,6 +62,12 @@ def __init__(self):
         litellm.vertex_location = get_settings().get(
             "VERTEXAI.VERTEX_LOCATION", None
         )
+        if get_settings().get("AWS.BEDROCK_REGION", None):
+            litellm.AmazonAnthropicConfig.max_tokens_to_sample = 2000
@tmokmss (Author) commented Nov 28, 2023

We need this because litellm's max_tokens_to_sample defaults to 256, which cuts the completion off in the middle of a review. We could let users set this parameter from a config file, but hardcoding a large value like this should be enough for most users.
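
For reference, a minimal sketch of how this could be made user-configurable instead of hardcoded — the AWS.MAX_TOKENS_TO_SAMPLE key is hypothetical and not part of this PR:

# Hedged sketch: read the limit from settings, defaulting to the PR's hardcoded 2000.
# The AWS.MAX_TOKENS_TO_SAMPLE key is an illustrative assumption.
import litellm
from pr_agent.config_loader import get_settings

if get_settings().get("AWS.BEDROCK_REGION", None):
    litellm.AmazonAnthropicConfig.max_tokens_to_sample = get_settings().get(
        "AWS.MAX_TOKENS_TO_SAMPLE", 2000
    )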

             }
+            if self.aws_bedrock_client:
+                kwargs["aws_bedrock_client"] = self.aws_bedrock_client
             response = await acompletion(**kwargs)
@tmokmss (Author)

Adding the aws_bedrock_client kwarg when calling other providers such as OpenAI fails with the error below, so I only add this argument when using Bedrock.

openai.error.InvalidRequestError: Unrecognized request argument supplied: aws_bedrock_client
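
A minimal sketch of that conditional wiring, assuming boto3's bedrock-runtime client and litellm 0.12.x's aws_bedrock_client kwarg; the wrapper function and the bedrock/ prefix check are illustrative:

# Attach the Bedrock client only for Bedrock models; OpenAI rejects unknown kwargs.
import boto3
from litellm import acompletion

async def chat_completion(model: str, messages: list, aws_region: str = None):
    kwargs = {"model": model, "messages": messages}
    if model.startswith("bedrock/") and aws_region:
        kwargs["aws_bedrock_client"] = boto3.client(
            service_name="bedrock-runtime", region_name=aws_region
        )
    return await acompletion(**kwargs)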

@@ -290,6 +290,7 @@ def _fix_key_value(key: str, value: str):

 def load_yaml(response_text: str) -> dict:
     response_text = response_text.removeprefix('```yaml').rstrip('`')
+    response_text = response_text.strip().rstrip().removeprefix('{').removesuffix('}')
@tmokmss (Author)

Claude tends to add a leading '{' and a trailing '}' to the generated YAML, so I'm removing these characters here. try_fix_yaml didn't work for some reason.

@mrT23 (Collaborator) commented Nov 28, 2023

This is problematic.

What if the message actually starts with '{'? A brace isn't something that should appear in YAML anyway, so removing it unconditionally is risky.

@mrT23 (Collaborator)

Move this line, as a separate fallback, to the 'try_fix_yaml' section.

But this is not good for Claude in terms of the Claude-vs-OpenAI comparison :-)

@tmokmss (Author)

Hi, thanks. I moved them to try_fix_yaml. Yeah, I think I'll have to optimize the prompts for Claude :)
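
For context, a minimal sketch of such a fallback, assuming PyYAML; pr-agent's actual try_fix_yaml differs, and this only illustrates stripping the braces after a plain parse fails:

# Illustrative fallback: strip the wrapping braces only if a plain parse fails.
import yaml

def try_fix_yaml(response_text: str) -> dict:
    try:
        return yaml.safe_load(response_text)
    except yaml.YAMLError:
        # Claude sometimes wraps the whole YAML block in '{' ... '}'; retry without them.
        stripped = response_text.strip().removeprefix('{').removesuffix('}')
        return yaml.safe_load(stripped)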

@@ -14,7 +14,7 @@ GitPython==3.1.32
 PyYAML==6.0
 starlette-context==0.3.6
 litellm==0.12.5
-boto3==1.28.25
+boto3==1.33.1
@tmokmss (Author) commented Nov 28, 2023

litellm requires boto3>=1.28.57, so I updated boto3 to the latest version. I don't see any breaking changes between 1.28.25 and 1.33.1 (doc).

@@ -18,4 +18,7 @@
     'vertex_ai/codechat-bison-32k': 32000,
     'codechat-bison': 6144,
     'codechat-bison-32k': 32000,
+    'anthropic.claude-v2': 100000,
+    'anthropic.claude-instant-v1': 100000,
+    'anthropic.claude-v1': 100000,
@tmokmss (Author)

I added support only for Claude for now, because it's difficult to tune additional parameters like max_tokens_to_sample for each specific model.
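
As an aside, a hedged sketch of how such a token-limit table is typically consulted — the get_max_tokens helper and its default are illustrative, not pr-agent's actual code:

# Illustrative lookup against the MAX_TOKENS table extended by this PR.
MAX_TOKENS = {
    'anthropic.claude-v2': 100000,
    'anthropic.claude-instant-v1': 100000,
    'anthropic.claude-v1': 100000,
}

def get_max_tokens(model: str, default: int = 4000) -> int:
    # Hypothetical helper: fall back to a conservative default for unknown models.
    return MAX_TOKENS.get(model, default)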

@mrT23 (Collaborator) commented Nov 29, 2023

thanks @tmokmss

@mrT23 mrT23 merged commit b90dde4 into Codium-ai:main Nov 29, 2023
@tmokmss tmokmss deleted the add-bedrock-support branch November 29, 2023 12:02
yochail pushed a commit to yochail/pr-agent that referenced this pull request Feb 11, 2024