Codey

A simple little coding buddy in a website, like ChatGPT but running locally.

This is my first attempt at using LLMs to write a coding buddy. It was written on an Apple Silicon laptop and requires that the model you choose fits within your system's memory constraints.

Getting Started

Choose a model. You'll need one that fits within your available memory; the default is llama3:latest, which is one of the smallest models I could find. Set your model in my_model on line 19.
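As a minimal sketch (assuming codey.py defines the model name as a simple module-level variable), the line to edit looks something like:

```python
# codey.py, around line 19: the model Codey will use.
# llama3:latest is the default; substitute any model that fits in your RAM.
my_model = "llama3:latest"
```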

Set up a virtual env

```sh
python3 -m venv .venv
source .venv/bin/activate
```

Install the required packages

```sh
pip install -r requirements.txt
pip install llama-index-readers-github
```

Configure the server

To read from the GitHub repo jeffwelling/giticket:

export CODEY_SOURCE="github"
export CODEY_GITHUB_OWNER="jeffwelling"
export CODEY_GITHUB_REPO="giticket"
export GITHUB_TOKEN="SomeSuperSecretTokenGoesHere"

To read from a local directory called codey_data:

export CODEY_SOURCE="dir"

Start the server

```sh
streamlit run codey.py
```
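Streamlit prints a local URL when the server starts (typically http://localhost:8501); open it in your browser to start chatting with Codey.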

Questions

Feel free to ask questions and file issues, but this is really nothing more than some glue holding streamlit and llama_index together. I'm happy to help, but I'm no expert, and you may need to ask around in those communities for assistance.

Further reading

License

This project is licensed under the BSD-3-Clause license.

Copyright (c) 2024, Jeff Welling
