
Is there support for v1/completions or only v1/chat/completions? #30

Closed
potaslab opened this issue Apr 15, 2023 · 2 comments

Comments

@potaslab

I've tried using models like "text-ada-001", but I get this error:

{
  "error": {
    "message": "Invalid URL (POST /v1/chat/completions)",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

Is there a way to support these models? Might they need different input parameters?

@algunion
Contributor

It seems that you used create_chat instead of create_completion.

The /v1/chat/completions endpoint is used for chat (create_chat), and it only supports the gpt-4 and gpt-3.5-turbo models.

The correct endpoint for completion is /v1/completions (and you need to use create_completion).

Here is an example:

create_completion(
    ENV["OPENAI_API_KEY"],
    "text-ada-001";
    prompt="Oranges are..."
)
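
For comparison, a chat call would look roughly like this (a minimal sketch, assuming the create_chat signature shown in the OpenAI.jl README, where messages are passed as a vector of role/content pairs):

# hits /v1/chat/completions; only chat models such as gpt-3.5-turbo are accepted here
create_chat(
    ENV["OPENAI_API_KEY"],
    "gpt-3.5-turbo",
    [Dict("role" => "user", "content" => "Oranges are...")]
)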

@logankilpatrick
Member

See the docs; this package supports both completions and chat completions: https://juliaml.github.io/OpenAI.jl/dev/
