- Clone the repository to get started.
- Install LM Studio for Windows.
- Install Anaconda and create a new Conda environment via the Anaconda Prompt.
- Activate your Conda environment by entering `conda activate [YourEnvironmentName]` in the terminal, replacing `[YourEnvironmentName]` with the name of your Conda environment.
- Install the required packages from `requirements.txt` by executing `conda install --file [PathToYourFile]/requirements.txt`, substituting `[PathToYourFile]` with the path to your `requirements.txt` file.
- In LM Studio, download your preferred instruct model.
- Start the LM Studio Local Inference Server with your chosen model.
- Launch `chat.toe` to initiate the interface.
- Within the interface, find and select the CondaEnv component. You'll need to enter your Windows username and the name of your Conda environment here.
- Activate the environment by clicking the 'activate' button.
- Enter Perform Mode to start chatting with the model using the UI.
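Once the server is running, the request side of the setup above can be sketched in Python. This is a minimal illustration, not the project's actual code: port 1234 is LM Studio's default for its OpenAI-compatible Local Inference Server, and the prompt strings and `temperature` value are placeholder assumptions.

```python
import json
import urllib.request

# Default address of the LM Studio Local Inference Server
# (port 1234 is LM Studio's default; adjust if you changed it).
SERVER_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(user_message, system_prompt="You are a helpful assistant."):
    """Assemble an OpenAI-style chat completion payload.

    The system prompt and temperature here are placeholder choices.
    """
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }

def send_chat_request(payload):
    """POST the payload to the local server and return the parsed JSON reply."""
    request = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

# Build a request payload; sending it requires the server to be running.
payload = build_chat_request("Hello from TouchDesigner!")
```

With the server started in LM Studio, `send_chat_request(payload)` would return the model's JSON reply.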
- Created Conda environment component.
- Established LM Studio Local Server client access.
- Implemented JSON reply parsing functionality.
- Developed an interactive UI for dynamic conversations with the model.
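The JSON reply parsing mentioned above might look roughly like the following sketch. The reply shape follows the OpenAI-compatible format LM Studio's server returns; the sample content string is invented for illustration.

```python
import json

def parse_reply(raw_reply):
    """Extract the assistant's message text from a chat completion JSON reply."""
    data = json.loads(raw_reply)
    choices = data.get("choices", [])
    if not choices:
        raise ValueError("Reply contains no choices")
    return choices[0]["message"]["content"]

# Sample reply in the OpenAI-compatible shape; the content is invented.
sample = json.dumps({
    "choices": [
        {"message": {"role": "assistant", "content": "Hello from the model!"}}
    ]
})

reply_text = parse_reply(sample)
```

The extracted text can then be displayed in the conversation UI.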
- Add functionality for editing system prompts.
- Conduct comprehensive testing of all components.
- Enhance the UI for better user experience.
- Update documentation to include recent changes and enhancements.