# How to Set Up and Use

## Setup Instructions

1. Clone the repository.
2. Install LM Studio for Windows.
3. Install Anaconda and create a new Conda environment from the Anaconda Prompt.
4. Activate your Conda environment by running `conda activate [YourEnvironmentName]` in the terminal, replacing `[YourEnvironmentName]` with the name of your environment.
5. Install the required packages by running `conda install --file [PathToYourFile]/requirements.txt`, replacing `[PathToYourFile]` with the path to your `requirements.txt` file.
6. In LM Studio, download the instruct model you want to use.
7. Start the LM Studio Local Inference Server with your chosen model (see the sketch after this list for how the server is queried).
8. Launch `chat.toe` to open the interface.
9. In the interface, select the CondaEnv component and enter your Windows username and the name of your Conda environment.
10. Click the 'activate' button to activate the environment.
11. Enter Perform Mode to start chatting with the model through the UI.
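
The snippet below is a minimal sketch of how a chat request to the LM Studio Local Inference Server can be made and its JSON reply parsed. It assumes LM Studio's default server address (`http://localhost:1234`) and its OpenAI-compatible chat completions endpoint; it is illustrative only and may differ from the exact code used inside `chat.toe`.

```python
# Minimal sketch: query the LM Studio Local Inference Server and parse the JSON reply.
# Assumes the default port 1234 and the OpenAI-compatible /v1/chat/completions endpoint.
import requests

def ask_model(user_message, system_prompt="You are a helpful assistant."):
    payload = {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }
    response = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json=payload,
        timeout=120,
    )
    response.raise_for_status()
    # The JSON reply follows the OpenAI schema: the generated text is in
    # choices[0].message.content.
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_model("Hello! What can you do?"))
```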

## Progress Checklist

- [x] Created the Conda environment component.
- [x] Established client access to the LM Studio Local Server.
- [x] Implemented JSON reply parsing.
- [x] Developed an interactive UI for dynamic conversations with the model.
- [ ] Add functionality for editing system prompts.
- [ ] Conduct comprehensive testing of all components.
- [ ] Enhance the UI for a better user experience.
- [ ] Update the documentation with recent changes and enhancements.