Pet project / Capstone project for DataTalks.Club LLM ZoomCamp'24:
RAG application based on exam preparation questions for Azure and Google Cloud certification exams.
The project can be tested and deployed in a cloud virtual machine (AWS, Azure, GCP), in GitHub CodeSpaces (the easiest option, and free), or even locally with or without a GPU! It works with both Ollama and ChatGPT.
For the GitHub CodeSpaces option you don't need anything extra at all - your favorite web browser plus a GitHub account is totally enough.
At some point in an IT career many of us think about getting certified - to improve our chances of being hired, to get a better position or salary, or simply to confirm our expertise. And then we face those exam guides, certification preparation books, exam questions and mock tests. There is a lot to remember, and cramming doesn't work well - especially with hundreds of terms, services and tools, which cloud providers have created plenty of for us.
How can we increase the chances of remembering the material well enough to pass the exam? By understanding it better and discovering more connections. For this we need the opportunity to ask questions about the things that are not clear yet. But you cannot ask a book or an exam guide! And let's be honest, googling exam-related questions is not efficient (sorry, Google) and can be quite distracting (the Wikipedia effect - attention lost). Thanks to technology, we now have all those LLMs and "ChatGPTs" - we can ask chatbots. Still, they can hallucinate and are not yet trained well for specific topics.
And here is where RAG comes to help! RAG is Retrieval Augmented Generation - a process for optimizing the output of a large language model (LLM) by referencing an authoritative knowledge base outside of its training data before generating a response. So instead of asking an LLM about exam topics "from scratch", you first retrieve context from a prepared knowledge base (exam flashcards, a question bank) and then get better focused answers. This is what I decided to do in my project.
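As a minimal sketch of that retrieve-then-generate idea (the flashcards, scoring, and prompt template below are illustrative only, not the project's actual code):

```python
# Minimal retrieve-then-generate sketch: score flashcards by keyword
# overlap with the question, then prepend the best match as context.
# The flashcards and prompt template here are made up for illustration.

FLASHCARDS = [
    {"question": "What is Azure Blob Storage?",
     "answer": "Object storage for unstructured data such as text and binary files."},
    {"question": "What is Azure SQL Database?",
     "answer": "A managed relational database service based on SQL Server."},
]

def retrieve(user_question, flashcards):
    """Return the flashcard sharing the most words with the question."""
    words = set(user_question.lower().split())
    return max(flashcards,
               key=lambda c: len(words & set(c["question"].lower().split())))

def build_prompt(user_question, flashcards):
    """Augment the user's question with retrieved context for the LLM."""
    card = retrieve(user_question, flashcards)
    return (f"CONTEXT:\nQ: {card['question']}\nA: {card['answer']}\n\n"
            f"QUESTION: {user_question}")

print(build_prompt("What is Blob Storage used for?", FLASHCARDS))
```

A real RAG pipeline replaces the keyword overlap with a search engine (Elasticsearch here) and sends the augmented prompt to the LLM, but the shape of the flow is the same.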
Just imagine, you can 'talk to your data'!
This is my LLM project started during LLM ZoomCamp'24.
LLM Exam Assistant is a RAG application designed to assist users with their [data/cloud] exam preparation. It enables conversational interaction via a chatbot-like interface, so you can easily get information without digging through guides or websites.
Actually, I strive to make the inner logic universal enough that the knowledge base could cover any topic; data/cloud related exams are what I have been working on this year.
Thanks to LLM ZoomCamp for the reason to approach exams and learning with many new tools!
I assembled question banks for 2 exams: Azure DP-900 and Google Cloud Professional Data Engineer. The Azure flashcards I extracted from a shared Anki deck; the Google PDE flashcards I collected from the official study guide. Adding more data is a matter of time.
The CSV files are located in the data directory. Structure: id, question, answer, exam, section.
The section field helps to focus on specific parts of an exam.
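For illustration, a question bank with that structure can be loaded with the standard library alone (the sample rows below are made up, not the project's actual data):

```python
import csv
import io

# A tiny in-memory sample in the same structure as the CSV files
# in the data directory: id, question, answer, exam, section.
# The rows and exam codes are illustrative only.
SAMPLE_CSV = """id,question,answer,exam,section
1,What is BigQuery?,A serverless data warehouse.,gcp-pde,Storage
2,What does DP-900 cover?,Azure data fundamentals.,dp-900,Basics
"""

def load_question_bank(fileobj):
    """Read flashcards and check that the expected columns are present."""
    reader = csv.DictReader(fileobj)
    expected = {"id", "question", "answer", "exam", "section"}
    rows = list(reader)
    assert rows and expected <= set(rows[0]), "unexpected CSV structure"
    return rows

bank = load_question_bank(io.StringIO(SAMPLE_CSV))
print(len(bank), bank[0]["exam"])
```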
- Elasticsearch to index the question bank
- OpenAI-compatible API that supports working with Ollama locally, even without a GPU
- Ollama tested with Microsoft Phi 3/3.5 models, which performed better than Gemma
- You can pull and test any model from the Ollama library
- With your own OPENAI_API_KEY you can choose gpt-3.5/gpt-4
- Docker and docker-compose for containerization
- Streamlit web application for the conversational interface
- PostgreSQL to store asked questions, answers, evaluation (relevance) and user feedback
- Grafana to monitor performance
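Because Ollama exposes an OpenAI-compatible API, the same chat request shape works for both backends. A standard-library sketch of building such a request (the URL is Ollama's default; the model name, prompt wording and helper function are assumptions for illustration):

```python
import json
import urllib.request

# Ollama's OpenAI-compatible endpoint (default local address);
# for OpenAI itself you would swap the URL and add an API key header.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(question, context, model="phi3"):
    """Build the JSON payload for an OpenAI-compatible chat endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Answer the exam question using only this context:\n" + context},
            {"role": "user", "content": question},
        ],
    }

payload = build_chat_request("What is BigQuery?",
                             "BigQuery is a serverless data warehouse.")
# Actually sending the request needs a running Ollama server,
# so the call is left commented out here:
# req = urllib.request.Request(OLLAMA_URL, data=json.dumps(payload).encode(),
#                              headers={"Content-Type": "application/json"})
# answer = json.load(urllib.request.urlopen(req))
print(payload["model"])
```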
- Fork this repo on GitHub, or clone it locally with `git clone https://github.com/dmytrovoytko/llm-exam-assistant.git`, then `cd llm-exam-assistant`.
- Create a GitHub CodeSpace from the repo; use the 4-core - 16GB RAM machine type.
- Start the CodeSpace.
- As the app works in docker containers, the only package you need to install locally is `dotenv` (used for setting up the Grafana dashboard): run `pip install dotenv`.
- Go to the app directory: `cd exam_assistant`.
- If you want to play with/develop the project locally, you can run `pip install -r requirements.txt` (the project is tested on Python 3.11/3.12).
- If you want to use the gpt-3.5/gpt-4 API, you need to set your OPENAI_API_KEY in the `.env` file.
- Run `bash deploy.sh` to start all containers: elasticsearch, ollama, postgres, streamlit, grafana. It takes at least a couple of minutes to download/build the corresponding images and get the services ready to serve. When new log messages stop appearing, press Enter to return to the command line.
  If you want to use other models, you can modify this script accordingly, then update `sl-app.py` to add your model names.
- Finally, open the streamlit app: switch to the Ports tab and click on the link with port 8501 (🌐 icon).
- Set the query parameters: choose the exam and model, and enter your question.
- Press the 'Ask' button and wait for the response. For Ollama Phi3 in a CodeSpace the response time is around a minute.
- Give your feedback by pressing 👍 or 👎.
- You can switch to wide mode in the streamlit settings (upper right corner).
You can monitor app performance in the Grafana dashboard.

- As with streamlit, switch to the Ports tab and click on the link with port 3000 (🌐 icon).
- Login: "admin"
- Password: "admin"
- Click 'Dashboards' in the left pane and choose 'Exam Assistant'.
Run `docker compose down` in the command line to stop all services.
Don't forget to remove the downloaded images if you experimented with the project locally!
- Hybrid search: combining both text and vector search (Elasticsearch, encoding)
- User query rewriting (adding context)
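As a rough illustration of the hybrid search idea listed above, text and vector scores can be normalized and blended with a weight (the scores, document ids and weight below are made up; a real implementation would use Elasticsearch relevance scores and sentence embeddings):

```python
def normalize(scores):
    """Scale a list of scores to the 0..1 range (all-equal lists map to 0)."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def hybrid_rank(doc_ids, text_scores, vector_scores, alpha=0.5):
    """Blend normalized text (BM25-style) and vector similarity scores."""
    text_n, vec_n = normalize(text_scores), normalize(vector_scores)
    blended = [alpha * t + (1 - alpha) * v for t, v in zip(text_n, vec_n)]
    return sorted(zip(doc_ids, blended), key=lambda x: -x[1])

# Illustrative scores for three flashcards: d1 wins on text match,
# d2 on semantic similarity, d3 is decent on both.
ranking = hybrid_rank(["d1", "d2", "d3"],
                      text_scores=[12.0, 3.0, 8.0],
                      vector_scores=[0.2, 0.9, 0.6])
print(ranking[0][0])
```

With alpha=1.0 this degenerates to pure text ranking and with alpha=0.0 to pure vector ranking, which makes the weight easy to tune against evaluation data.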
I plan to add more questions to the knowledge base and test more models.
Stay tuned!
🙏 Thank you for your attention and time!
- If you experience any issues while following these instructions (or something is left unclear), please add them to Issues - I'll be glad to help/fix. Your feedback, questions and suggestions are welcome as well!
- Feel free to fork and submit pull requests.
If you find this project helpful, please ⭐️star⭐️ my repo https://github.com/dmytrovoytko/llm-exam-assistant to help other people discover it 🙏
Made with ❤️ in Ukraine 🇺🇦 Dmytro Voytko