feat: GPT4Generator #5744

Merged: 7 commits merged from gpt4-llm-generator into main on Sep 13, 2023

Conversation

ZanSara (Contributor) commented on Sep 8, 2023

Related Issues

Proposed Changes:

  • Adds GPT4Generator, a small subclass of GPT35Generator that sets a different default model (see the sketch after this list).
  • Adds its unit tests.
  • Expands GPT35Generator's e2e tests to also cover the subclass.
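For illustration only, a subclass that just swaps the default model could look like the sketch below. This is not the code merged in this PR: the import path mirrors the gpt35.py module listed in the coverage report, and the constructor arguments (api_key, model_name) are assumptions.

```python
# Hypothetical sketch of the idea only, not the code merged in this PR.
# The import path mirrors the gpt35.py module listed in the coverage report;
# the constructor arguments (api_key, model_name) are assumptions.
from haystack.preview.components.generators.openai.gpt35 import GPT35Generator


class GPT4Generator(GPT35Generator):
    """Behaves like GPT35Generator, but defaults to a GPT-4 model."""

    def __init__(self, api_key: str, model_name: str = "gpt-4", **kwargs):
        # Only the default model changes; everything else is inherited.
        super().__init__(api_key=api_key, model_name=model_name, **kwargs)
```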

How did you test it?

Local tests run

Notes for the reviewer

Checklist

ZanSara requested a review from a team as a code owner on September 8, 2023 08:20
ZanSara requested review from anakin87 and removed the request for a team on September 8, 2023 08:20
github-actions bot added the topic:tests and type:documentation labels on Sep 8, 2023
ZanSara requested a review from a team as a code owner on September 8, 2023 08:21
ZanSara requested review from dfokina and removed the request for a team on September 8, 2023 08:21
anakin87 (Member) left a comment

In general, it looks good to me.

I am only a bit worried about the e2e test error:
openai.error.RateLimitError: Rate limit reached for default-gpt-4 ... on tokens per min. Limit: 40000 / min.

ZanSara (Contributor, Author) commented on Sep 8, 2023

@anakin87 I tried to lower the number of tokens sent, but I can't tell whether that will help or whether OpenAI is just being unreliable today. On my local machine the test fails once every 3-4 executions; on CI it seems to fail much more often. The error message also makes no sense, because the actual rate is always well below the threshold 😅
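Not part of this PR, but one common way to make e2e tests more tolerant of such transient rate limits is to retry the call with exponential backoff. A minimal sketch, assuming the pre-1.0 openai client (which raises openai.error.RateLimitError, as in the log above) and the tenacity library:

```python
# Hypothetical mitigation sketch, not code from this PR: retry the OpenAI call
# with exponential backoff when a rate limit is hit (pre-1.0 openai client).
import openai
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_random_exponential


@retry(
    retry=retry_if_exception_type(openai.error.RateLimitError),
    wait=wait_random_exponential(min=1, max=30),  # back off between ~1s and 30s
    stop=stop_after_attempt(5),                   # give up after 5 attempts
)
def ask_gpt4(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=128,  # a short completion also keeps tokens/min low
    )
    return response["choices"][0]["message"]["content"]
```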

ZanSara self-assigned this on Sep 11, 2023
coveralls (Collaborator) commented
Pull Request Test Coverage Report for Build 6162263615

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • 2 unchanged lines in 1 file lost coverage.
  • Overall coverage increased (+0.02%) to 48.942%

Files with coverage reduction:
  • preview/components/generators/openai/gpt35.py: 2 new missed lines (96.15%)

Totals:
  • Change from base Build 6162162895: +0.02%
  • Covered Lines: 11847
  • Relevant Lines: 24206

💛 - Coveralls

anakin87 (Member) left a comment

LGTM!

anakin87 merged commit 2c4d839 into main on Sep 13, 2023
57 checks passed
anakin87 deleted the gpt4-llm-generator branch on September 13, 2023 08:07
Labels
topic:tests, type:documentation (Improvements on the docs)
Projects
None yet
Development

Successfully merging this pull request may close these issues:

OpenAI LLM Generators for Haystack 2.0
3 participants