
Initial implementation of AI quick fixes in the Monaco editor #127

Closed
wants to merge 66 commits

Conversation

cdamus
Collaborator

cdamus commented Jul 31, 2024

What it does

A very basic prototype of a Monaco CodeActionProvider for AI quick fixes.

Addresses parts of eclipsesource/osweek-2024#29

demo

How to test

Manifest some kind of problem in a source file of your choice, in a language of your choice, so long as that language is supported by a Monaco editor with a suitable language server that flags the problem with a marker.

Summon quick fixes on the problem using Ctrl+. (or Cmd+. on Mac).

Select the "AI Quick Fix" option.

Bask in the glory of corrected source code.
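The entry point for this kind of feature can be sketched as a code action provider that offers one "AI Quick Fix" per problem marker. The snippet below is a minimal, self-contained illustration, not the actual PR code: the types are reduced stand-ins for the Monaco interfaces, and all names are hypothetical.

```typescript
// Reduced stand-ins for Monaco's marker and code-action shapes.
interface MarkerData {
  message: string;
  startLineNumber: number;
  endLineNumber: number;
}

interface CodeAction {
  title: string;
  kind: string;
  diagnostics: MarkerData[];
  isAI: boolean;
}

// Offer one "AI Quick Fix" action per marker. A real provider would also
// forward the marker message and surrounding code to the language model
// and turn the response into a workspace edit.
function provideAiCodeActions(markers: MarkerData[]): CodeAction[] {
  return markers.map(marker => ({
    title: 'AI Quick Fix',
    kind: 'quickfix',
    diagnostics: [marker],
    isAI: true,
  }));
}
```

In the real integration, a provider like this would be registered for all languages so the action shows up in the Ctrl+. menu alongside the language server's own fixes.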

Follow-ups

  • Trigger chat from problem marker
  • Support AI Quick Fix in the Problems view
  • Handle rate-limiting of a remote AI service (do we do that somewhere in the framework already?)
  • Distinguish between no useful response and connection error
  • Cache requests to avoid sending the same request over and over again for markers
  • Format the code after applying an AI quick fix
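The caching follow-up above could look roughly like this: key cache entries by the marker's message and position, so an unchanged marker never triggers a second request to the language model. This is only a sketch; the class and field names are hypothetical.

```typescript
// Minimal marker shape for cache keying (illustrative only).
interface Marker {
  message: string;
  startLineNumber: number;
  startColumn: number;
}

class QuickFixCache {
  private readonly cache = new Map<string, string>();

  // Derive a stable key from the marker's position and message.
  private keyFor(marker: Marker): string {
    return `${marker.startLineNumber}:${marker.startColumn}:${marker.message}`;
  }

  // Return the cached fix for this marker, or invoke the request function
  // once and remember its result.
  async getFix(
    marker: Marker,
    request: (m: Marker) => Promise<string>
  ): Promise<string> {
    const key = this.keyFor(marker);
    const cached = this.cache.get(key);
    if (cached !== undefined) {
      return cached;
    }
    const fix = await request(marker);
    this.cache.set(key, fix);
    return fix;
  }
}
```

A production version would also need invalidation when the document changes, since an edit can shift or remove the marker the cached fix was computed for.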

Review checklist

Reminder for reviewers

eneufeld and others added 30 commits May 13, 2024 13:38
- add ui
- add openai integration
- introduce ChatResponseParts
- LanguageModelProvider can be used in both backend and frontend
  - frontend access is generically implemented independent of the
    actual LanguageModelProvider implementation
- split code into four packages:
  - ai-agent: containing the AgentDispatcher. At the moment just
    delegates to the LanguageModelProvider. Can run in both frontend
    and backend
  - ai-chat: only containing the UI part of the chat.
  - ai-model-provider: containing the infrastructure of the
    LanguageModelProvider and its frontend bridge
  - ai-openai: contains only the OpenAI LanguageModelProvider
Implements the LanguageModelProviderRegistry which is able to handle
an arbitrary number of LanguageModelProviders.

Refactors the LanguageModelProvider to only return a simple text or
stream of text. It's now the agent's responsibility to convert this
into response parts. Therefore the interfaces are also moved to the
agent package.

The LanguageModelProviderRegistry implementation for the frontend
handles all LanguageModelProviders registered in the frontend as well as
in the backend.

Fixes the StreamNode in the tree-widget to update itself correctly
when new tokens arrive.
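The registry described above can be sketched as a small keyed collection of providers; the interface and method names below are illustrative guesses, not the actual Theia API.

```typescript
// Reduced provider interface: each provider has an id and can answer
// a prompt with text.
interface LanguageModelProvider {
  id: string;
  request(prompt: string): Promise<string>;
}

// Handles an arbitrary number of LanguageModelProviders, looked up by id.
class LanguageModelProviderRegistry {
  private readonly providers = new Map<string, LanguageModelProvider>();

  register(provider: LanguageModelProvider): void {
    this.providers.set(provider.id, provider);
  }

  get(id: string): LanguageModelProvider | undefined {
    return this.providers.get(id);
  }

  all(): LanguageModelProvider[] {
    return [...this.providers.values()];
  }
}
```

In the frontend variant, `all()` would merge the locally registered providers with those proxied from the backend.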
Introduces ChatModel, including nested ChatRequestModel and
ChatResponseModel to represent chat sessions. The chat models make it
possible to inspect and track requests and their responses.

Also introduces the ChatService, which can be used to manage chat
sessions and send requests.

The architecture is inspired by the VS Code implementation but aims to
be more generic.
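The nesting described in the commit message can be sketched as follows: a session tracks requests, and each request owns its response, which grows as tokens stream in. The class names follow the commit message, but the shapes are simplified assumptions.

```typescript
// A response accumulates streamed tokens until it is complete.
class ChatResponseModel {
  private readonly parts: string[] = [];
  complete = false;

  appendToken(token: string): void {
    this.parts.push(token);
  }

  get text(): string {
    return this.parts.join('');
  }
}

// Each request keeps a reference to its own response.
class ChatRequestModel {
  readonly response = new ChatResponseModel();
  constructor(readonly text: string) {}
}

// A chat session: an ordered list of requests that can be inspected.
class ChatModel {
  private readonly requests: ChatRequestModel[] = [];

  addRequest(text: string): ChatRequestModel {
    const request = new ChatRequestModel(text);
    this.requests.push(request);
    return request;
  }

  getRequests(): readonly ChatRequestModel[] {
    return this.requests;
  }
}
```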
…e-theia#13936)

fixes eclipse-theia#13800

contributed on behalf of STMicroelectronics

Signed-off-by: Remi Schnekenburger <[email protected]>
Co-authored-by: Philip Langer <[email protected]>
…#13912)

fixes eclipse-theia#13886

contributed on behalf of STMicroelectronics

Signed-off-by: Remi Schnekenburger <[email protected]>
Change-Id: I179432698332ff52b33aba7b1f7e203f2bee9c77
fixes eclipse-theia#13848

contributed on behalf of STMicroelectronics

Signed-off-by: Remi Schnekenburger <[email protected]>
Change-Id: I80d33303ceadf940f17265b7d910a5c13b59ec89
eclipsesource/osweek-2024#47

Change-Id: Ib9dd82e3ba062990f5642883bc9439aca52931ad
Change-Id: I186190dede14d729992977c2805e2c07100c2d17
Change-Id: Ia257c9a65b5f2bb9aa3e9ccc506f6394d744ff8f
sdirix and others added 29 commits July 29, 2024 11:41
Logs LanguageModel requests and their results to a separate output
channel per LanguageModel.
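Per-model logging like the above can be implemented by lazily creating one output channel per model id on first use. `SimpleOutputChannel` below is a stand-in for Theia's output channel; all names are illustrative.

```typescript
interface OutputChannel {
  appendLine(line: string): void;
}

// In-memory stand-in for an IDE output channel.
class SimpleOutputChannel implements OutputChannel {
  readonly lines: string[] = [];
  appendLine(line: string): void {
    this.lines.push(line);
  }
}

class LanguageModelLogger {
  private readonly channels = new Map<string, SimpleOutputChannel>();

  // Create the channel for a model id on first use.
  getChannel(modelId: string): SimpleOutputChannel {
    let channel = this.channels.get(modelId);
    if (!channel) {
      channel = new SimpleOutputChannel();
      this.channels.set(modelId, channel);
    }
    return channel;
  }

  // Log a request/result pair to the model's own channel.
  logRequest(modelId: string, prompt: string, result: string): void {
    const channel = this.getChannel(modelId);
    channel.appendLine(`> ${prompt}`);
    channel.appendLine(`< ${result}`);
  }
}
```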
- rename the open button to 'Select Folder'
- set the default folder name to 'prompt-templates'
- check if a template with a given id was overridden
- adapt calls to return the overridden template if so
- add temporary test command
- implement initial customization service that reads the templates on
preference changes
- no file watching yet
Review and adapt prompt templates
Fixes an issue with circular injections when using the PromptService
in an agent.
Fixes a circular dependency by removing the prompt collection. Instead,
the PromptService is filled programmatically on start.
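Filling the service programmatically means agents call a store method at startup instead of injecting a collection that would pull them back into the container. Combined with the template-override behavior from the customization commit, a sketch might look like this (names and shapes are assumptions):

```typescript
interface PromptTemplate {
  id: string;
  template: string;
}

class PromptService {
  private readonly templates = new Map<string, PromptTemplate>();
  private readonly overrides = new Map<string, PromptTemplate>();

  // Called programmatically by each agent on start, breaking the
  // injection cycle.
  storePrompt(template: PromptTemplate): void {
    this.templates.set(template.id, template);
  }

  // A customized template with the same id overrides the built-in one.
  storeOverride(template: PromptTemplate): void {
    this.overrides.set(template.id, template);
  }

  getPrompt(id: string): PromptTemplate | undefined {
    return this.overrides.get(id) ?? this.templates.get(id);
  }
}
```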
Adds a new ai-code-completion Theia extension which provides the
CodeCompletionAgent. The agent is integrated via a
CompletionItemProvider into Monaco for all files.

The extension offers two preferences to enable/disable the feature and
to control its behavior.

Co-authored-by: Stefan Dirix <[email protected]>
Change-Id: Ie7f4cfc1923db5afbaef6455089ad5cb21107db7
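The integration point for the completion agent can be sketched as a completion provider that consults the agent only when the feature preference is enabled. The preference key and function shapes below are hypothetical, not the extension's actual names.

```typescript
// Hypothetical preference shape gating the feature.
interface Preferences {
  'ai-features.codeCompletion.enable': boolean;
}

interface CompletionItem {
  insertText: string;
}

// Ask the agent for a suggestion only when the preference is on;
// otherwise contribute nothing to the completion list.
async function provideCompletionItems(
  prefs: Preferences,
  agent: (prefix: string) => Promise<string>,
  prefix: string
): Promise<CompletionItem[]> {
  if (!prefs['ai-features.codeCompletion.enable']) {
    return [];
  }
  const suggestion = await agent(prefix);
  return [{ insertText: suggestion }];
}
```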
The language model selection value should be initialized with the value
from settings if it exists
Implements:
- Copy
- Insert at cursor
- Monaco Editor
- Navigating to the location of the file (if provided)


Co-authored-by: Lucas Koehler <[email protected]>
- Ensure we always create a variable part even for undefined variables
-- Prompt text will then default to user text (including '#')

- Allow adopters to register resolvers with priority
-- Given a particular variable name, argument and context

- Automatically resolve all variable parts in a chat request
-- Ensure parts always provide a matching prompt text

- Make sure variable service is part of core
-- Generic variable handling for all agents and UI layers
-- Chat-specific variable handling only in the chat layer
-- Provide example of 'today' variable

Fixes eclipsesource/osweek-2024#46
Co-authored-by: Christian W. Damus <[email protected]>
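The resolver mechanism above can be sketched as a registry where, for a given variable, the highest-priority resolver that can handle it wins, and undefined variables fall back to the user text including the '#'. The interface is an illustrative guess; 'today' mirrors the example variable from the commit message.

```typescript
interface VariableResolver {
  priority: number;
  canResolve(name: string): boolean;
  resolve(name: string): string;
}

class VariableService {
  private readonly resolvers: VariableResolver[] = [];

  registerResolver(resolver: VariableResolver): void {
    this.resolvers.push(resolver);
  }

  resolve(name: string): string {
    // Pick the highest-priority resolver that claims the variable.
    const candidates = this.resolvers
      .filter(r => r.canResolve(name))
      .sort((a, b) => b.priority - a.priority);
    // Undefined variables default to the user text, '#' included.
    return candidates.length > 0 ? candidates[0].resolve(name) : `#${name}`;
  }
}
```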
Added a view for displaying all the configured llamafiles.
Configured llamafiles can be started and killed.
One llamafile can be set as active and is then used in the chat.
The chat integration is currently hardcoded to use the active llamafile language model.
This should be changed as soon as the chat integration has a dropdown to select the language model (#42).
A follow up will be created to describe the next steps.
- Extend `ChatServiceImpl` to extract the selected agent from the parsed request
- Extend `ChatModel` to ensure that the request and response models keep a reference to their agent
- Update rendering of chat view to render icons and label of the chat agent if set
- Update agent definitions. Since we reuse the request parsing from VS Code, the following requirements must be met:
  - No whitespaces are allowed in the agent name.
  - The locations property needs to be set

- Extract type of `Agent.languageModelRequirements` into dedicated definition
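Extracting the selected agent from a parsed request can be sketched as matching an '@agent' mention at the start of the message (which is why whitespace is not allowed in agent names) and falling back to a default agent otherwise. The regex and fallback below are illustrative, not the VS Code parser actually reused.

```typescript
// Match a leading '@<name>' mention; the agent name may contain no
// whitespace, and the rest of the message becomes the request text.
function extractAgent(
  request: string,
  defaultAgent: string
): { agent: string; text: string } {
  const match = request.match(/^@(\S+)\s+(.*)$/s);
  if (match) {
    return { agent: match[1], text: match[2] };
  }
  // No mention: route the full text to the default agent.
  return { agent: defaultAgent, text: request };
}
```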
cdamus closed this Jul 31, 2024