
/gpt slash command #1

Open · wants to merge 57 commits into base: development

Conversation

@Keyrxng (Contributor) commented Jul 13, 2024

Resolves ubiquity-os/plugins-wishlist#29

I followed your prompt template and kept the system message short and sweet.

It seems it can lose track of the question being asked, so I think it might be better to prioritize the question.

I think padding the chat history slightly would do the trick (a sketch follows the list):

  1. system
  2. user - long prompt
  3. assistant - manually inserted short acknowledgement of the context received
  4. user - directly ask the question
  5. assistant - the real API response
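
A minimal sketch of that message layout, assuming the official openai SDK; `systemMessage`, `contextPrompt`, and `question` are hypothetical placeholders for the plugin's real values:

```typescript
import OpenAI from "openai";

const openAi = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Hypothetical placeholders; the plugin builds these from the repo context.
const systemMessage = "You are a GitHub integrated chatbot...";
const contextPrompt = "<formatted chat history and linked-issue context>";
const question = "<the question passed to the slash command>";

const response = await openAi.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: systemMessage },
    // 2. user - the long context prompt
    { role: "user", content: contextPrompt },
    // 3. assistant - manually inserted acknowledgement of the context
    { role: "assistant", content: "Understood, I have the full conversation context." },
    // 4. user - the question asked directly, so it stays prioritized
    { role: "user", content: question },
  ],
});
// 5. assistant - response.choices[0].message is the real API response
```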

  • Should this plugin be able to read its own comments or not?
  • Are we only going one level deep with the linked-issue context?
  • Are there to be any safeguards, formatting rules, or anything else (not including the spec prompt template) included in the system message, or is it free rein with little guidance like it is now?

Outdated review threads:

  • .github/workflows/compute.yml
  • src/handlers/ask-gpt.ts
  • src/plugin.ts
  • src/types/context.ts
  • src/utils/format-chat-history.ts (×2)
  • src/utils/issue.ts (×3)
github-actions bot commented Jul 15, 2024

Unused dependencies (1)

| Filename | Dependencies |
| --- | --- |
| package.json | dotenv |

Unused types (2)

| Filename | Types |
| --- | --- |
| src/types/github.ts | IssueComment, ReviewComment |

@Keyrxng (Contributor, Author) commented Sep 25, 2024

What are we calling this so I can update the references in package.json and the readme?

QA:

@0x4007 commented Sep 25, 2024

command-ask is fine for now. Your QA makes it look stable. Can we start using it? Also I want to mention that I have access to o1 from the API now.

https://platform.openai.com/docs/guides/reasoning

I'm not sure which model is best. I'm assuming o1-mini is pretty solid for our use case though.

The maximum output token limits are:
o1-preview: Up to 32,768 tokens
o1-mini: Up to 65,536 tokens


Hi there,I’m

Nikunj, PM for the OpenAI API. We’ve been working on expanding access to the OpenAI o1 beta and we’re excited to provide API access to you today. We’ve developed these models to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.As a trusted developer on usage tier 4, you’re invited to get started with the o1 beta today.

> Hi there,
>
> I’m Nikunj, PM for the OpenAI API. We’ve been working on expanding access to the OpenAI o1 beta and we’re excited to provide API access to you today. We’ve developed these models to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.
>
> As a trusted developer on usage tier 4, you’re invited to get started with the o1 beta today. Read the docs.
>
> You have access to two models:
>
> - Our larger model, o1-preview, which has strong reasoning capabilities and broad world knowledge.
> - Our smaller model, o1-mini, which is 80% cheaper than o1-preview.
>
> Try both models! You may find one better than the other for your specific use case. But keep in mind o1-mini is faster, cheaper, and competitive with o1-preview at coding tasks (you can see how it performs here). We’ve also written up more about these models in our blog post.
>
> These models currently have a rate limit of 100 requests per minute for developers on usage tier 4, but we’ll be increasing rate limits soon. To get immediately notified of updates, follow @OpenAIDevs. I can’t wait to see what you build with o1—please don’t hesitate to reply with any questions.

@Keyrxng (Contributor, Author) commented Sep 26, 2024

> command-ask is fine for now. Your QA makes it look stable. Can we start using it? Also I want to mention that I have access to o1 from the API now.
>
> https://platform.openai.com/docs/guides/reasoning
>
> I'm not sure which model is best. I'm assuming o1-mini is pretty solid for our use case though.

o1 in my opinion is too slow compared to 4o, so I'd prefer to use 4o. Honestly, the reasoning models on the OpenAI website have not impressed me so far; I don't know about you guys.

> But keep in mind o1-mini is faster, cheaper, and competitive with o1-preview at coding tasks

i.e. it's faster and cheaper than o1-preview, but it drags compared to 4o.

> Your QA makes it look stable. Can we start using it?

I hope so, as soon as it gets merged. I will apply the finishing touches and it should be mergeable once any remaining review comments are addressed.

@Keyrxng (Contributor, Author) commented Sep 26, 2024

Typically slash-command plugins have a commands entry in the manifest, but here I'm unsure what to do: if the command is configurable then an entry doesn't make sense, however if it's going to be a constant then I could add one.

  • Currently we pass in the bot name via the config. Should this be an env var, or hardcoded so partners can't change it? Should we use the app_slug, or the bot.user.id and fetch its username? (A sketch of both options follows this list.)
  • Since the slash command in this case is now @UbiquityOS, which may be subject to change (if it's not subject to change then it's easy), should I write a commands entry, or just have it forward the payload since the plugin does the processing anyway?
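
A hedged sketch of the two options in the first bullet; the env var name matches the one discussed later in this thread, while the octokit client and the "ubiquity-os" slug are illustrative assumptions, not the plugin's actual code:

```typescript
import { Octokit } from "@octokit/rest";

// Option A: bot name from an env var set at deploy time,
// so partners can't change it through their config.
const botName = process.env.UBIQUITY_OS_APP_SLUG;

// Option B: resolve the app's username dynamically via the API.
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
const { data: app } = await octokit.rest.apps.getBySlug({ app_slug: "ubiquity-os" });
const mention = `@${app?.slug}`;
```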

Outdated review thread: .env.example
@gentlementlegen (Member) left a comment

It would be nice to be able to configure the ChatGPT endpoint and model through the configuration (this can be done in another issue).

@0x4007 commented Sep 26, 2024

> o1 in my opinion is too slow compared to 4o

I think it's fine. A comment responding ten seconds later isn't a problem.

@Keyrxng (Contributor, Author) commented Sep 26, 2024

I moved UBIQUITY_OS_APP_SLUG into .env so that we set it when we deploy the worker. I did this to make it impossible for a partner to white-label it and alter the command, as I got the feeling that's what's intended with this plugin.

```diff
@@ -79,5 +78,6 @@
    "extends": [
      "@commitlint/config-conventional"
    ]
  }
},
"packageManager": "[email protected]"
```

Why don't you downgrade to 1.22.21 so you don't have this problem anymore?

@Keyrxng (Contributor, Author) replied:

It's a feature, not a problem, and wasn't it agreed we'd standardize on it since we're a yarn-only org, with the exception of one or two Bun repos? If we're no longer standardizing it, I'll change my system config.


No, it's not a problem for anybody except your yarn.

Comment on lines +147 to +150:

```typescript
content: `You are a GitHub integrated chatbot tasked with assisting in research and discussion on GitHub issues and pull requests.
  Using the provided context, address the question being asked providing a clear and concise answer with no follow-up statements.
  The LAST comment in 'Issue Conversation' is the most recent one, focus on it as that is the question being asked.
  Use GitHub flavoured markdown in your response making effective use of lists, code blocks and other supported GitHub md features.`,
```
@0x4007 commented Sep 28, 2024

Did you test using the OpenAI playground for optimizing the prompt? If not, please do in a new task.

@Keyrxng (Contributor, Author) replied:

I did not. I'll extract a 50k+ token prompt and do some testing with it in another task.


```typescript
export const pluginSettingsSchema = T.Object({
  model: T.String({ default: "o1-mini" }),
  openAiBaseUrl: T.String({ default: "" }),
```

Empty string always seems wrong.

@Keyrxng (Contributor, Author) replied:

#1 (comment)

I could replace it with a T.Optional(T.String()) and remove the default, but the empty string is falsy, so it's not used when instantiating openAi; it's not wrong in this context. Would you prefer I remove it?
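
For reference, a minimal sketch of the alternative under discussion, assuming TypeBox (@sinclair/typebox) and the openai SDK; the `createClient` helper and its settings shape are illustrative, not the plugin's actual code:

```typescript
import { Type as T } from "@sinclair/typebox";
import OpenAI from "openai";

// Optional field instead of an empty-string default.
export const pluginSettingsSchema = T.Object({
  model: T.String({ default: "o1-mini" }),
  openAiBaseUrl: T.Optional(T.String()),
});

// Since "" and undefined are both falsy, either schema behaves the same
// here: the SDK's default baseURL is used unless one is configured.
function createClient(settings: { openAiBaseUrl?: string }) {
  return new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    ...(settings.openAiBaseUrl ? { baseURL: settings.openAiBaseUrl } : {}),
  });
}
```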


Successfully merging this pull request may close these issues.

/gpt ask a context aware question
3 participants