Here is a sample step-by-step guide to setting up a simple OpenAI chat endpoint so that you can experiment. Note that this guide is NOT meant for any experimentation involving private data. Please use it only for testing with interactions that do not contain proprietary information or anything that should be kept secure.
- Install Jupyter Notebook (sample guide for macOS).
- Follow this Jupyter Notebook guide to get started with using it.
- Note: To close a notebook, close the browser and then press
ctrl+c
in the terminal where the notebook is running.
- Obtain an OpenAI API key by signing up on their developer portal. You will be charged for endpoint usage based on the model you invoke and your volume of usage.
- Set up your API key using this detailed walkthrough from OpenAI, or simply follow these steps:
- In your Mac terminal, run:
echo "export OPENAI_API_KEY='YOUR-API-KEY-HERE'" >> ~/.bash_profile
and hit Enter.
- In the terminal again, run:
source ~/.bash_profile
and hit Enter.
- After doing the above steps, restart the Jupyter notebook; otherwise Python won't register the change.
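Once the key is exported and Jupyter has been restarted, Python picks it up from the environment. A quick sanity check (a sketch; the placeholder value here only stands in for your real key, which the shell export above supplies) looks like:

```python
import os

# In a real session the shell export supplies the key; the placeholder here
# only keeps this snippet self-contained.
os.environ.setdefault("OPENAI_API_KEY", "YOUR-API-KEY-HERE")

api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is not set; restart Jupyter after exporting it")
print("API key found, starts with:", api_key[:4])
```

If this raises, the export did not take effect in the shell that launched the notebook.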
- Install the OpenAI library on Jupyter by opening a new cell in a notebook and running:
!pip install --upgrade openai
Leave this cell untouched once the installation is done.
- Open the OpenAI quickstart and play around with the user prompt:
"Compose a poem that explains the concept of recursion in programming."
- For multi-line prompts, enclose the text in triple double quotes. For example, the following code snippet lets you replace the user prompt with the PROMPT variable.
document = "JEE is an unnecessarily difficult engineering entrance exam that gives Indian kids nightmares"
PROMPT = """
Answer the following question using the given context:
What is JEE?
CONTEXT:
{context}""".format(context = document)
- Now you can add your custom prompt by modifying the quickstart code:
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment set up earlier.
client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant that provides answers to user questions based on a given source context document."},
        {"role": "user", "content": PROMPT}
    ]
)
# .message is the full message object; .content is the reply text.
print(completion.choices[0].message.content)
Points 7 & 8 show the basics of creating a Retrieval-Augmented Generation (RAG) chatbot. In a production system, the document variable can be programmed to store the content from a knowledge base that most accurately answers the user's question. The contents of document can be injected into a prompt (see the PROMPT variable). The prompt asks the chat endpoint to answer the user's question using the document variable's content as the source.
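The retrieval step described above can be sketched with a toy keyword-overlap ranker. The knowledge base, the tokens helper, and the scoring function below are hypothetical stand-ins for illustration; production RAG systems typically use embedding-based vector search instead.

```python
import re

# Toy knowledge base standing in for a real document store (hypothetical content).
knowledge_base = [
    "JEE is an engineering entrance exam taken by students in India.",
    "The GRE is a standardized test used for graduate school admissions.",
]

def tokens(text):
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, docs):
    """Naive retrieval: return the doc sharing the most words with the question."""
    q = tokens(question)
    return max(docs, key=lambda d: len(q & tokens(d)))

question = "What is JEE?"
document = retrieve(question, knowledge_base)

# Inject the retrieved content into the prompt, exactly as in the guide above.
PROMPT = """
Answer the following question using the given context:
{question}
CONTEXT:
{context}""".format(question=question, context=document)
```

Swapping this retrieve function for a real vector-search lookup is the main change needed to move from the quickstart toward a production RAG setup.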