
Commit

Clean-up template READMEs (#12403)
Normalize and update notebooks.
rlancemartin authored Oct 27, 2023
1 parent 4254028 commit d6acb3e
Showing 8 changed files with 88 additions and 161 deletions.
30 changes: 1 addition & 29 deletions templates/extraction-openai-functions/README.md
Original file line number Diff line number Diff line change
@@ -10,32 +10,4 @@ By default, it will extract the title and author of papers.

This template will use `OpenAI` by default.

Be sure that `OPENAI_API_KEY` is set in your environment.

## Adding the template

Install the template package
```
pip install -e packages/extraction_openai_functions
```

Edit app/server.py to add that package to the routes
```
from fastapi import FastAPI
from langserve import add_routes
from extraction_openai_functions.chain import chain
app = FastAPI()
add_routes(app, chain)
```

Run the app
```
python app/server.py
```

You can use this template in the Playground:

http://127.0.0.1:8000/extraction-openai-functions/playground/

Also, see Jupyter notebook `openai_functions` for various other ways to connect to the template.
Be sure that `OPENAI_API_KEY` is set in your environment.
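The README's Playground URL implies the standard LangServe HTTP surface. As a hedged, stdlib-only sketch of how such a template endpoint is typically called — the `/invoke` path and the `{"input": ...}` envelope are assumptions based on LangServe conventions, and `build_invoke_request` is a hypothetical helper, not part of the template:

```python
import json
from urllib import request

# Hypothetical helper: LangServe templates conventionally expose
# POST {base}/{path}/invoke with a JSON body {"input": ...}.
# This builds the request without sending it, so no server is required.
def build_invoke_request(base_url: str, path: str, payload) -> request.Request:
    url = f"{base_url.rstrip('/')}/{path.strip('/')}/invoke"
    body = json.dumps({"input": payload}).encode("utf-8")
    return request.Request(url, data=body, headers={"Content-Type": "application/json"})

req = build_invoke_request(
    "http://127.0.0.1:8000",
    "extraction-openai-functions",
    "Chain of Thought, Wei et al. 2022",
)
print(req.full_url)
```

Against a running server, `urllib.request.urlopen(req)` would send it; the notebooks in this commit use `langserve.client.RemoteRunnable` instead.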
@@ -29,22 +29,10 @@
"source": [
"## Run Template\n",
"\n",
"\n",
"As shown in the README, add template and start server:\n",
"```\n",
"langchain serve add openai-functions\n",
"langchain start\n",
"In `server.py`, set -\n",
"```\n",
"\n",
"We can now look at the endpoints:\n",
"\n",
"http://127.0.0.1:8000/docs#\n",
"\n",
"And specifically at our loaded template:\n",
"\n",
"http://127.0.0.1:8000/docs#/default/invoke_openai_functions_invoke_post\n",
" \n",
"We can also use remote runnable to call it."
"add_routes(app, chain_ext, path=\"/extraction_openai_functions\")\n",
"```"
]
},
{
@@ -55,40 +43,38 @@
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"oai_function = RemoteRunnable('http://localhost:8000/openai-functions')"
"oai_function = RemoteRunnable('http://0.0.0.0:8001/extraction_openai_functions')"
]
},
{
"cell_type": "markdown",
"id": "68046695",
"metadata": {},
"source": [
"The function call will perform tagging:\n",
"\n",
"* summarize\n",
"* provide keywords\n",
"* provide language"
"The function wille extract paper titles and authors from an input."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 8,
"id": "6dace748",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'function_call': {'name': 'Overview', 'arguments': '{\\n \"summary\": \"This article discusses the concept of building agents with LLM (large language model) as their core controller. It explores the potentiality of LLM as a general problem solver and describes the key components of an LLM-powered autonomous agent system, including planning, memory, and tool use. The article also presents case studies and challenges related to building LLM-powered agents.\",\\n \"language\": \"English\",\\n \"keywords\": \"LLM, autonomous agents, planning, memory, tool use, case studies, challenges\"\\n}'}})"
"[{'title': 'Chain of Thought', 'author': 'Wei et al. 2022'},\n",
" {'title': 'Tree of Thoughts', 'author': 'Yao et al. 2023'},\n",
" {'title': 'LLM+P', 'author': 'Liu et al. 2023'}]"
]
},
"execution_count": 3,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"oai_function.invoke(text[0].page_content[0:1500])"
"oai_function.invoke({\"input\":text[0].page_content[0:4000]})"
]
}
],
22 changes: 1 addition & 21 deletions templates/rag-chroma-private/README.md
@@ -26,24 +26,4 @@ This template will create and add documents to the vector database in `chain.py`

By default, this will load a popular blog post on agents.

However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).

## Adding the template

Create your LangServe app:
```
langchain serve new my-app
cd my-app
```

Add template:
```
langchain serve add rag-chroma-private
```

Start server:
```
langchain start
```

See Jupyter notebook `rag_chroma_private` for various ways to connect to the template.
However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).
61 changes: 61 additions & 0 deletions templates/rag-chroma-private/rag_chroma_private.ipynb
@@ -0,0 +1,61 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "232fd40d-cf6a-402d-bcb8-414184a8e924",
"metadata": {},
"source": [
"## Run Template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain_private, path=\"/rag_chroma_private\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "888494ca-0509-4070-b36f-600a042f352c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' Based on the given context, the answer to the question \"How does agent memory work?\" can be inferred as follows:\\n\\nAgent memory refers to the long-term memory module of an autonomous agent system, which records a comprehensive list of agents\\' experiences in natural language. Each element is an observation or event directly provided by the agent, and inter-agent communication can trigger new natural language statements. The retrieval model surfaces the context to inform the agent\\'s behavior according to relevance, recency, and importance.\\n\\nIn other words, the agent memory is a component of the autonomous agent system that stores and manages the agent\\'s experiences and observations in a long-term memory module, which is based on natural language processing and generation capabilities of a large language model (LLM). The memory is used to inform the agent\\'s behavior and decision-making, and it can be triggered by inter-agent communication.'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langserve.client import RemoteRunnable\n",
"rag_app = RemoteRunnable('http://0.0.0.0:8001/rag_chroma_private/')\n",
"rag_app.invoke(\"How does agent memory work?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
20 changes: 0 additions & 20 deletions templates/rag-chroma/README.md
@@ -13,23 +13,3 @@ These documents can be loaded from [many sources](https://python.langchain.com/d
## LLM

Be sure that `OPENAI_API_KEY` is set in order to use the OpenAI models.

## Adding the template

Create your LangServe app:
```
langchain serve new my-app
cd my-app
```

Add template:
```
langchain serve add rag-chroma
```

Start server:
```
langchain start
```

See Jupyter notebook `rag_chroma` for various ways to connect to the template.
40 changes: 14 additions & 26 deletions templates/rag-conversation/rag_conversation.ipynb
@@ -7,38 +7,26 @@
"source": [
"## Run Template\n",
"\n",
"\n",
"As shown in the README, add template and start server:\n",
"```\n",
"langchain serve add rag-conversation\n",
"langchain start\n",
"In `server.py`, set -\n",
"```\n",
"\n",
"We can now look at the endpoints:\n",
"\n",
"http://127.0.0.1:8000/docs#\n",
"\n",
"And specifically at our loaded template:\n",
"\n",
"http://127.0.0.1:8000/docs#/default/invoke_rag_conversation_invoke_post\n",
" \n",
"We can also use remote runnable to call it."
"add_routes(app, chain_rag_conv, path=\"/rag_conversation\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 2,
"id": "5f521923",
"metadata": {},
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"rag_app = RemoteRunnable('http://localhost:8000/rag-conversation')"
"rag_app = RemoteRunnable('http://0.0.0.0:8001/rag_conversation')"
]
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 5,
"id": "679bd83b",
"metadata": {},
"outputs": [],
@@ -52,17 +40,17 @@
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 8,
"id": "94a05616",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Agent memory works by utilizing both short-term memory and long-term memory mechanisms. \\n\\nShort-term memory allows the agent to learn and retain information within the current context or task. This in-context learning helps the agent handle complex tasks efficiently. \\n\\nOn the other hand, long-term memory enables the agent to retain and recall an unlimited amount of information over extended periods. This is achieved by leveraging an external vector store, such as a memory stream, which serves as a comprehensive database of the agent's past experiences in natural language. The memory stream records observations and events directly provided by the agent, and inter-agent communication can also trigger new natural language statements to be added to the memory.\\n\\nTo access and utilize the stored information, a retrieval model is employed. This model determines the context that is most relevant, recent, and important to inform the agent's behavior. By retrieving information from memory, the agent can reflect on past actions, learn from mistakes, and refine its behavior for future steps, ultimately improving the quality of its results.\")"
"'Based on the given context, it is mentioned that the design of generative agents combines LLM (which stands for language, learning, and memory) with memory mechanisms. However, the specific workings of agent memory are not explicitly described in the given context.'"
]
},
"execution_count": 27,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -73,12 +61,12 @@
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 9,
"id": "ce206c8a",
"metadata": {},
"outputs": [],
"source": [
"chat_history = [(question, answer.content)]\n",
"chat_history = [(question, answer)]\n",
"answer = rag_app.invoke({\n",
" \"question\": \"What are the different types?\",\n",
" \"chat_history\": chat_history,\n",
@@ -87,17 +75,17 @@
},
{
"cell_type": "code",
"execution_count": 30,
"execution_count": 10,
"id": "4626f167",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The different types of memory utilized by the agent are sensory memory, short-term memory, and long-term memory.')"
"\"Based on the given context, two types of memory are mentioned: short-term memory and long-term memory. \\n\\n1. Short-term memory: It refers to the ability of the agent to retain and recall information for a short period. In the context, short-term memory is described as the in-context learning that allows the model to learn.\\n\\n2. Long-term memory: It refers to the capability of the agent to retain and recall information over extended periods. In the context, long-term memory is described as the ability to retain and recall infinite information by leveraging an external vector store and fast retrieval.\\n\\nIt's important to note that these are just the types of memory mentioned in the given context. There may be other types of memory as well, depending on the specific design and implementation of the agent.\""
]
},
"execution_count": 30,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
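The rag-conversation notebook threads prior turns back in as `chat_history`, a list of `(question, answer)` tuples passed alongside each new question. A minimal stdlib sketch of that payload shape — `make_payload` is a hypothetical helper and the answer string is a placeholder; only the `question`/`chat_history` keys come from the notebook:

```python
# Hypothetical helper mirroring the notebook's input shape for the
# rag-conversation template: each prior turn is a (question, answer) tuple.
def make_payload(question: str, history: list) -> dict:
    return {"question": question, "chat_history": list(history)}

history = []
first_q = "How does agent memory work?"
first_a = "It combines short-term and long-term memory."  # placeholder answer
history.append((first_q, first_a))

# Follow-up question carries the accumulated history.
payload = make_payload("What are the different types?", history)
print(payload)
```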
20 changes: 0 additions & 20 deletions templates/rag-pinecone/README.md
@@ -15,23 +15,3 @@ Be sure that you have set a few env variables in `chain.py`:
## LLM

Be sure that `OPENAI_API_KEY` is set in order to use the OpenAI models.

## Installation

Create your LangServe app:
```
langchain serve new my-app
cd my-app
```

Add template:
```
langchain serve add rag-pinecone
```

Start server:
```
langchain start
```

See Jupyter notebook `rag_pinecone` for various ways to connect to the template.
20 changes: 0 additions & 20 deletions templates/summarize-anthropic/README.md
@@ -14,23 +14,3 @@ To do this, we can use various prompts from LangChain hub, such as:
This template will use `Claude2` by default.

Be sure that `ANTHROPIC_API_KEY` is set in your environment.

## Adding the template

Create your LangServe app:
```
langchain serve new my-app
cd my-app
```

Add template:
```
langchain serve add summarize-anthropic
```

Start server:
```
langchain start
```

See Jupyter notebook `summarize_anthropic` for various ways to connect to the template.
