
return model id without provider prefix #64250

Draft · wants to merge 1 commit into base: main
Conversation


@taras-yemets (Contributor) commented Aug 2, 2024

Currently, clients fail to parse the autocomplete model. Context can be found in this thread.

This issue is not reproducible in all cases. Clients (tested in VS Code only) may resolve the autocomplete model based on the client configuration and feature flags before parsing the provider and model data from the connected Sourcegraph instance. However, if we disable the client configuration and feature flags (make resolveDefaultModelFromVSCodeConfigOrFeatureFlags always return null), creating the provider configuration for autocomplete consistently fails (at least for me).

The current implementation of codyLLMConfigurationResolver returns:

  • the code completion provider ID as `provider`
  • the code completion model ID, prefixed with the provider ID, as `completionModel`

This PR proposes to return only the model ID as completionModel. This adjustment allows clients to correctly resolve the provider and model for completions.
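The intended split can be sketched as follows. This is a hypothetical, illustrative helper (not the actual resolver code, and `stripProviderPrefix` is an assumed name): it separates a `provider/model` reference so that the provider ID goes in the `provider` field and only the bare model ID goes in `completionModel`.

```go
package main

import (
	"fmt"
	"strings"
)

// stripProviderPrefix splits a model reference such as
// "anthropic/claude-instant-v1" into its provider ID ("anthropic")
// and bare model ID ("claude-instant-v1"). A reference with no "/"
// is treated as an already-bare model ID.
func stripProviderPrefix(modelRef string) (provider, modelID string) {
	if i := strings.Index(modelRef, "/"); i >= 0 {
		return modelRef[:i], modelRef[i+1:]
	}
	return "", modelRef
}

func main() {
	p, m := stripProviderPrefix("anthropic/claude-instant-v1")
	fmt.Println(p, m) // anthropic claude-instant-v1
}
```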

I don’t believe this is a comprehensive solution, or that it aligns with the new model configuration approach. With this change, `chatModel` and `fastChatModel` use model IDs prefixed with provider IDs, while code completions use the combination of `provider` and `completionModel` (without the provider prefix, since the provider is a separate field).

This PR is primarily for demonstration purposes. For more details, refer to the referenced Linear thread.

Test plan

Tested locally in VS Code.

Configuration used for testing

  • site config

```jsonc
"completions": {
    "provider": "sourcegraph",
    "accessToken": "TOKEN",
    "endpoint": "https://cody-gateway.sgdev.org",
    "completionModel": "anthropic/claude-instant-v1",
    // ...limits
}
```

@cla-bot cla-bot bot added the cla-signed label Aug 2, 2024
@taras-yemets taras-yemets requested review from chrsmith and a team August 2, 2024 15:47
@chrsmith (Contributor) left a comment


We need `codyLLMConfigurationResolver` to have the same behavior as it did before we recently updated the code to be aware of the modelconfig.

It should just be a matter of returning the "model ID" where a model ID is expected, or a "provider ID / model ID" where that is expected... but from the sounds of it, it isn't clear what the expected behavior is.

Is it the case that the completion model is either the "model ID" (claude-3-sonnet) OR the "provider-and-model-id" (anthropic/starcoder)?

In other words, we can modify this code any way we wish without breaking the internal Sourcegraph instance. But without having a crisp understanding of what the client is expecting from us, that's going to be problematic...

Thinking aloud, I'll just take a very close look at the previous behavior and try to infer how this was supposed to work. (Or if it is just "return whatever was specified in the site config", then just hard-code it based on the things we are expecting client side?)
