/gpt slash command #1
base: development
Conversation
Unused dependencies (1)
Unused types (2)
What are we calling this, so I can update the references in QA?
https://platform.openai.com/docs/guides/reasoning
I'm not sure which model is best. I'm assuming
o1, in my opinion, is too slow compared to 4o; I'd prefer to use 4o, and honestly the reasoning models on the OpenAI website have not impressed me so far, idk about you guys.
i.e. it's faster and cheaper than o1-preview, but it drags compared to 4o.
I hope so. As soon as it gets merged, I will apply the finishing touches, and it should be mergeable following any other review comments.
Typically slash command type plugins have a
It would be nice to be able to configure the ChatGPT endpoint and model through the configuration (can be inside another issue).
I think it's fine. A comment responding ten seconds later isn't a problem.
I moved
@@ -79,5 +78,6 @@
   "extends": [
     "@commitlint/config-conventional"
   ]
-  }
+  },
+  "packageManager": "[email protected]"
Why don't you downgrade to 1.22.21 so you don't have this problem anymore?
It's a feature, not a problem. And wasn't it agreed we'd standardize it, since we are a yarn-only org with the exception of one or two bun repos? If we are no longer standardizing it, I'll change my system config.
No, it's not a problem for anybody except your yarn.
content: `You are a GitHub integrated chatbot tasked with assisting in research and discussion on GitHub issues and pull requests.
Using the provided context, address the question being asked providing a clear and concise answer with no follow-up statements.
The LAST comment in 'Issue Conversation' is the most recent one, focus on it as that is the question being asked.
Use GitHub flavoured markdown in your response making effective use of lists, code blocks and other supported GitHub md features.`,
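As a rough illustration of how a system prompt like this is typically wired into a chat-completion request, here is a minimal sketch. The helper name `buildChatMessages` and the message shape are assumptions for illustration, not code from this PR:

```typescript
// Hypothetical sketch: combining a system prompt with the issue conversation
// into a chat-completion `messages` array. `buildChatMessages` is illustrative.
type ChatMessage = { role: "system" | "user"; content: string };

const SYSTEM_PROMPT = `You are a GitHub integrated chatbot tasked with assisting in research and discussion on GitHub issues and pull requests.
The LAST comment in 'Issue Conversation' is the most recent one, focus on it as that is the question being asked.`;

function buildChatMessages(conversation: string[]): ChatMessage[] {
  // The whole conversation goes in one user message; the system prompt
  // tells the model the last comment is the question to answer.
  return [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: `Issue Conversation:\n${conversation.join("\n")}` },
  ];
}
```

The resulting array could then be passed as the `messages` field of a chat-completion call; keeping the instructions in the system message and the conversation in a single user message matches the structure the prompt above assumes.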
Did you test using the OpenAI playground for optimizing the prompt? If not, please do in a new task.
I did not; I'll extract a 50k+ token prompt and do some testing with it in another task.
export const pluginSettingsSchema = T.Object({
  model: T.String({ default: "o1-mini" }),
  openAiBaseUrl: T.String({ default: "" }),
Empty string always seems wrong.
I could replace it with a T.Optional(T.String()) and remove the default, but the empty string is falsy, so it's not used when instantiating OpenAI. It's not wrong in this context; would you prefer I remove it?
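The falsiness argument can be sketched in isolation. `resolveBaseUrl` below is a hypothetical helper, not code from the plugin; it shows why an empty-string default ends up behaving like an absent value when building client options:

```typescript
// Hypothetical sketch: an empty-string default for openAiBaseUrl is falsy,
// so `"" || undefined` collapses to undefined and a custom base URL is only
// forwarded when one was actually configured.
function resolveBaseUrl(openAiBaseUrl: string): string | undefined {
  return openAiBaseUrl || undefined;
}
```

Assuming the client constructor treats an `undefined` base URL the same as an omitted one (which SDKs generally do), the `default: ""` and the `T.Optional` variant are observably equivalent; the choice is about which schema reads more honestly.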
Resolves ubiquity-os/plugins-wishlist#29
I followed your prompt template and kept the system message short and sweet.
It seems it's able to lose track of the question being asked, so I think it might be better to prioritize the question. I think filling the chat history slightly would do the trick.
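One way to prioritize the question, sketched under the assumption that the conversation is a list of comment strings (the `emphasizeQuestion` helper is hypothetical, not from the PR), is to repeat the last comment at the end of the context so it sits closest to the model's attention:

```typescript
// Hypothetical sketch: restate the last comment after the full conversation
// so the question is the final thing in the prompt context.
function emphasizeQuestion(conversation: string[]): string {
  const question = conversation[conversation.length - 1] ?? "";
  return [
    "Issue Conversation:",
    ...conversation,
    "Question to answer:",
    question,
  ].join("\n");
}
```

This keeps the full history available for context while making it harder for the model to lose the question among earlier comments.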