
feat: code-review-gpt ci #14

Merged: 1 commit into main, Jul 20, 2023

Conversation

mattzcarey (Contributor) commented Jul 20, 2023

Please add OPENAI_API_KEY to repo secrets

```diff
@@ -22,9 +22,10 @@ class RequestBody(BaseModel):

 @completions_router.post("/chat/completions", tags=["Chat Completions"])
 async def post_chat_completions(
-    body: RequestBody = Body(...), api_key=Depends(AuthHandler.check_auth_header)
+    body: RequestBody = Body(...),
+    api_key=Depends(AuthHandler.check_auth_header, use_cache=False),
```
mattzcarey (Contributor, Author) commented:

Shouldn't cache API keys :(
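
For context: when the same dependency appears more than once in a request's dependency tree, FastAPI reuses the first resolved value by default; passing use_cache=False to Depends disables that reuse, so check_auth_header runs each time instead of a cached API key being handed back. A minimal, self-contained sketch of the pattern, with a hypothetical AuthHandler stand-in (the real implementation is not part of this diff):

```python
# Minimal sketch (not the repo's actual code) of Depends(..., use_cache=False).
# AuthHandler here is a hypothetical stand-in for the class referenced in the diff.
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()


class AuthHandler:
    @staticmethod
    def check_auth_header(authorization: str = Header(...)) -> str:
        # Re-validate the Authorization header on every resolution of this dependency.
        if not authorization.startswith("Bearer "):
            raise HTTPException(status_code=401, detail="Invalid Authorization header")
        return authorization.removeprefix("Bearer ")


@app.post("/chat/completions")
async def post_chat_completions(
    # use_cache=False tells FastAPI not to reuse a value resolved earlier in the same
    # request for this dependency, so the API key is checked fresh rather than cached.
    api_key: str = Depends(AuthHandler.check_auth_header, use_cache=False),
) -> dict:
    return {"api_key_received": bool(api_key)}
```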

github-actions (bot) commented:

/home/runner/work/GenossGPT/GenossGPT/genoss/api/completions_routes.py - LOGAF Level 3

The code is generally good, but there are areas for potential improvement.

  1. The post_chat_completions function logs the last message content, which could potentially expose sensitive information. Consider removing or obfuscating this log.
  2. The RequestBody class has a temperature attribute that is not used anywhere. If it's not needed, consider removing it. If it's for future use, add a comment to clarify.
  3. The post_chat_completions function returns the result of model.generate_answer directly. It would be better to handle any exceptions this call might raise and return a meaningful error message (see the sketch below).
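
A hypothetical, self-contained sketch of what point 3 could look like; DummyModel, the request shape, and the 502 status code are illustrative assumptions, not the repo's actual code:

```python
# Illustrative only: wraps the generation call so failures surface as a clear HTTP error
# instead of an unhandled 500, and avoids logging the message content (point 1).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class RequestBody(BaseModel):
    model: str
    messages: list[dict]


class DummyModel:
    """Stand-in for whatever the model factory returns."""

    def generate_answer(self, prompt: str) -> dict:
        if not prompt:
            raise ValueError("empty prompt")
        return {"answer": f"echo: {prompt}"}


@app.post("/chat/completions")
async def post_chat_completions(body: RequestBody) -> dict:
    model = DummyModel()
    last_message = body.messages[-1]["content"] if body.messages else ""
    try:
        return model.generate_answer(last_message)
    except Exception as exc:
        # Return a meaningful error rather than letting the exception propagate.
        raise HTTPException(status_code=502, detail=f"Model generation failed: {exc}") from exc
```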

/home/runner/work/GenossGPT/GenossGPT/genoss/llm/openai/openai_llm.py - LOGAF Level 2

The code functions, but has significant issues needing attention.

  1. The generate_answer function prints the response text. This could potentially expose sensitive information. Consider removing this print statement.
  2. The generate_answer function does not handle any exceptions that might be raised by the LLMChain or OpenAIChat classes. Consider adding error handling to provide a meaningful error message.
  3. The generate_embedding function does not handle any exceptions that might be raised by the OpenAIEmbeddings class. Consider adding error handling to provide a meaningful error message (points 2 and 3 are sketched below).
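
A combined sketch for points 2 and 3, assuming the 2023-era LangChain classes the review names (OpenAIChat, LLMChain, OpenAIEmbeddings); the surrounding class and method shapes are guesses for illustration, not the repo's actual openai_llm.py:

```python
# Illustrative only: no print of the raw response, and LangChain/OpenAI failures are
# re-raised with a meaningful message instead of escaping as bare tracebacks.
from langchain.chains import LLMChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAIChat
from langchain.prompts import PromptTemplate


class OpenAILLM:
    def __init__(self, api_key: str, model_name: str = "gpt-3.5-turbo"):
        self.api_key = api_key
        self.model_name = model_name

    def generate_answer(self, prompt: str) -> dict:
        try:
            llm = OpenAIChat(openai_api_key=self.api_key, model_name=self.model_name)
            chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("{question}"))
            response = chain.run(question=prompt)
        except Exception as exc:
            raise RuntimeError(f"OpenAI completion failed: {exc}") from exc
        return {"answer": response}

    def generate_embedding(self, text: str) -> list[float]:
        try:
            embeddings = OpenAIEmbeddings(openai_api_key=self.api_key)
            return embeddings.embed_query(text)
        except Exception as exc:
            raise RuntimeError(f"OpenAI embedding failed: {exc}") from exc
```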

/home/runner/work/GenossGPT/GenossGPT/genoss/services/model_factory.py - LOGAF Level 4

/home/runner/work/GenossGPT/GenossGPT/tests/services/test_model_factory.py - LOGAF Level 4


🔒🗑️🔧


Powered by Code Review GPT

StanGirard merged commit 2ff8b5b into main on Jul 20, 2023
2 checks passed
StanGirard deleted the feat/code-review-gpt branch on July 20, 2023 at 16:39