Upon installation of this extension, you'll see the icon in the bottom-right corner. It responds to mouse hover.
Hover over the top of the head in the bottom-right corner and the head will come out:
The red cross near the ear allows temporarily hiding the head—only for the current tab and until it is reloaded.
Click the icon to open the main UI.
The ribbon at the top provides quick-access options. From left to right, these are:
Click to toggle the menu
The changes made here are temporary for the session (until reload) and apply to the current tab only.
- At the top is the API endpoint where AI prompts are sent.
- If Ollama is used as an endpoint, the next dropdown will be populated with the available model names. Selecting one is mandatory; otherwise an error will be thrown.
- See Web Hook for more information
- Create a new prompt and save it for future use.
- Show a list of predefined and saved prompts.
- Show a list of the available system commands.
- The last menu item will open the extension's options in a new tab.
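As background on where the model names come from: Ollama exposes its installed models via its `/api/tags` endpoint, which returns JSON. The following is a minimal illustrative Python sketch, not the extension's actual code, assuming the default Ollama address `http://localhost:11434`:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434"  # assumed default Ollama address

def model_names(tags_json: str) -> list[str]:
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

def list_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Fetch the names of the models installed in a running Ollama instance."""
    with request.urlopen(f"{base_url}/api/tags") as resp:
        return model_names(resp.read().decode())
```

If the endpoint is unreachable or returns no models, the dropdown stays empty, which is why selecting a model can fail with an error.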
Changing the Ollama model
You can switch models in two ways: through the Menu or by hovering the mouse over the current model name at the top of the Ribbon. The model in use will have a checkmark beside it.
When you hover over the Ribbon, you'll see the extension's version number pop up.
- Show the session list. Click a session to reload it.
Sessions can also be managed from the Options page.
- Edit system instructions for the current session only. Use the Options page to set permanent system instructions.
If the panel is not pinned (the default), clicking outside the panel will hide it. This behaviour can be changed from the Options page.
Note
Clicking the hide button will unpin the panel first, then close it. This means it won’t be pinned the next time you open it.
When empty, the field provides a brief overview of the available options.
A speech-to-text feature is available to dictate prompts in English. This can be activated by clicking the button located at the bottom right corner of the statusbar. Each click toggles the feature on and off. Once activated, the system will attempt to recognize spoken English words until it is deactivated. Transcriptions will appear in the prompt field.
Important
For speech recognition with the Firefox browser, look here
Additional information about which browsers support it, and how, is available here (Ctrl + click (Windows/Linux) or Cmd + click (macOS) to open the link in a new tab manually).
Just drag and drop a file.
Note
Only plain text files can be used currently.
Click on the file icon to delete it.
There are two types of commands: system and custom. System commands start with `@` and are enclosed within double brackets `{{}}`. Those are predefined commands and cannot be modified. To view the list of the available system commands, type `/help`.
Custom commands are user-defined prompts. Usually, these are frequently used prompts, saved to avoid repetitively typing the same prompt again and again.
To list all available commands, type `/list` and press `Enter`.
At the top of the list there are two buttons: Close () on the right and New () on the left. Custom commands can be imported () and exported () from here.
The following are a few predefined commands which cannot be changed, with their descriptions: `/add`, `/list` and `/error`.
The rest of the list contains the commands created by the user. Above each command there are a few buttons:
To use a predefined custom command, type its name after a slash `/` and press `Enter`, or use any of the available buttons:
Pressing `Enter` will execute it as if it had been typed as prompt text followed by the `Enter` key. The buttons above each command provide alternative actions related to the command:
- copy and paste the command's content into the prompt area.
Warning
No Undo is available.
To view available custom commands, type `/list` in the prompt.
Custom commands can include system commands. Example:
summarise @{{page}}
This will send the content of the page from the currently active tab to the AI with a request to generate a summary.
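To make the substitution concrete, here is a hypothetical sketch of how such a token could be expanded before the prompt is sent. The `expand` function and its regex are illustrative assumptions, not the extension's actual implementation:

```python
import re

def expand(command_text: str, context: dict[str, str]) -> str:
    """Replace @{{name}} system-command tokens with their captured content.

    Unknown tokens are left untouched. Illustrative sketch only; the
    extension's real expansion logic may differ.
    """
    return re.sub(
        r"@\{\{(\w+)\}\}",
        lambda m: context.get(m.group(1), m.group(0)),
        command_text,
    )
```

For instance, `expand("summarise @{{page}}", {"page": page_text})` would substitute the active tab's captured page text for `@{{page}}` before the prompt reaches the model.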
New () and edit () will open a simple editor:
Add the endpoint used to query the LLM. Use the buttons to add ( ), delete ( ), delete all ( ), sort ascending ( ) or descending ( ), and copy ( ).
The model list, determined by the endpoint, includes an additional reload button ( ). Deleting or adding models is not possible within this interface and depends on the associated tool.
If Ollama is defined as the endpoint, the model list will be populated automatically. Open the list and click the preferred model. You can temporarily change it from the Menu in the Ribbon.
Allows adding a list of predefined API endpoints to be called before sending the prompt to the model. The resource used must return plain text. Any other type will either be treated as text or throw an error, potentially misleading the model.
The purpose is to enrich the context by providing relevant information when needed, which will improve the quality of the generated response.
The user has complete freedom to choose the type of service they want to use, but the intention is to run the service locally. If needed, this hook can be easily extended to call external services.
Example Project: An example project is available on GitHub here. It provides a simple HTTP server and an option to extend it.
To embed a Web Hook, follow this structure:
- `!#` indicates the start of an external call construction.
- `/path/to/the/resource`: the endpoint API defined in Web Hooks.
- `?`: a separator used if any parameters will be passed.
- `key=value`: a sequence used to pass parameters as a `POST` body.
- `#!` indicates the end of the external call construction.
To add web resources as prompt contexts, consider this example project. A script that queries and returns text content is available on this GitHub repository. Once set up, you can pass it in the prompt like so:
!#/readweb?resource=https://github.com/ivostoykov/localAI#!
The result will be the text content added to the rest of the prompt. The purpose is for this retrieved content to be used as context by the model.
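A web hook can be any HTTP service that answers with plain text. As a hedged sketch of such a local service (a hypothetical minimal server, not the example project's actual code; the `/readweb` path and placeholder reply are assumptions), a Python handler receiving the `key=value` POST body might look like this:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

def handle_hook(path: str, params: dict[str, list[str]]) -> str:
    """Map a hook path and its parameters to a plain-text reply.

    Placeholder logic; a real hook would fetch the resource and
    extract its text content.
    """
    if path == "/readweb":
        resource = params.get("resource", [""])[0]
        return f"text content of {resource}"
    return "unknown hook"

class HookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parameters arrive as a key=value POST body, per the !#...#! format.
        length = int(self.headers.get("Content-Length", 0))
        params = parse_qs(self.rfile.read(length).decode())
        body = handle_hook(self.path, params).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")  # must be plain text
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run locally:
# HTTPServer(("localhost", 8000), HookHandler).serve_forever()
```

Returning `Content-Type: text/plain` matters: as noted above, any other type is either treated as text anyway or throws an error.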
Note
If another API is used, it must abide by two rules:
- Understand the content enclosed between `!#...#!`.
- Return plain text.
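The first rule can be sketched in Python as follows. This is an illustrative parser for the `!#...#!` construction, under the assumption that everything up to the first `?` is the path and the remainder is the parameter body; the extension's own parser may differ:

```python
import re
from urllib.parse import parse_qs

HOOK_RE = re.compile(r"!#(.*?)#!", re.DOTALL)

def extract_hooks(prompt: str) -> list[tuple[str, dict[str, list[str]]]]:
    """Return (path, params) for every !#...#! construction in a prompt."""
    hooks = []
    for call in HOOK_RE.findall(prompt):
        path, _, query = call.partition("?")
        hooks.append((path, parse_qs(query)))
    return hooks
```

For example, the prompt shown earlier would yield the path `/readweb` with a single `resource` parameter, ready to be sent as a `POST` body.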
After installing or updating Ollama, you are likely to hit a 403 Forbidden error. In this case, follow the instructions below.
- Edit Ollama service
sudo nano /etc/systemd/system/ollama.service
or with your preferred editor, e.g.:
sudo vim /etc/systemd/system/ollama.service
- Add this line to the `[Service]` section
[Service]
Environment="OLLAMA_ORIGINS=*"
- Save and exit
- Restart the service
sudo systemctl daemon-reload && sudo systemctl restart ollama