Caching for Tools
#26676
-
Hey @logar16! I'm here to help you with any bugs, questions, or contributions. If you need assistance, feel free to ask. To implement persistent caching for a search API tool beyond using a simple @lru_cache, you can use one of LangChain's persistent cache implementations. Here are some examples:
Example with SQLiteCache

from langchain_community.cache import SQLiteCache
from langchain_core.outputs import Generation

# Initialize the SQLiteCache with the desired database path
cache = SQLiteCache(database_path="path_to_your_database.db")

# Example usage
prompt = "example prompt"
llm_string = "example_llm_string"
return_val = [Generation(text="example_return_value")]

# Update the cache
cache.update(prompt, llm_string, return_val)

# Lookup in the cache
cached_value = cache.lookup(prompt, llm_string)
print(cached_value)

Example with CassandraCache

from langchain_community.cache import CassandraCache
from langchain_core.globals import set_llm_cache
from langchain_core.outputs import Generation

cache = CassandraCache()
set_llm_cache(cache)

# Example usage
prompt = "example prompt"
llm_string = "example_llm_string"
return_val = [Generation(text="example_return_value")]

# Update the cache
cache.update(prompt, llm_string, return_val)

# Lookup in the cache
cached_value = cache.lookup(prompt, llm_string)
print(cached_value)

Example with AstraDBCache

from langchain_astradb import AstraDBCache
from langchain_core.globals import set_llm_cache
from langchain_core.outputs import Generation

cache = AstraDBCache(
    api_endpoint="your_astra_db_api_endpoint",
    token="your_astra_db_application_token",
)
set_llm_cache(cache)

# Example usage
prompt = "example prompt"
llm_string = "example_llm_string"
return_val = [Generation(text="example_return_value")]

# Update the cache
cache.update(prompt, llm_string, return_val)

# Lookup in the cache
cached_value = cache.lookup(prompt, llm_string)
print(cached_value)

Example with AzureCosmosDBSemanticCache

from langchain_community.cache import AzureCosmosDBSemanticCache
from langchain_core.globals import set_llm_cache
from langchain_core.outputs import Generation
from langchain_openai import OpenAIEmbeddings

cache = AzureCosmosDBSemanticCache(
    cosmosdb_connection_string="your_cosmosdb_connection_string",
    embedding=OpenAIEmbeddings(),
    database_name="your_database_name",
    collection_name="your_collection_name",
)
set_llm_cache(cache)

# Example usage
prompt = "example prompt"
llm_string = "example_llm_string"
return_val = [Generation(text="example_return_value")]

# Update the cache
cache.update(prompt, llm_string, return_val)

# Lookup in the cache
cached_value = cache.lookup(prompt, llm_string)
print(cached_value)

Example with ElasticsearchCache

from langchain_elasticsearch import ElasticsearchCache
from langchain_core.globals import set_llm_cache
from langchain_core.outputs import Generation

cache = ElasticsearchCache(
    es_url="http://localhost:9200",
    index_name="llm-chat-cache",
    metadata={"project": "my_chatgpt_project"},
)
set_llm_cache(cache)

# Example usage
prompt = "example prompt"
llm_string = "example_llm_string"
return_val = [Generation(text="example_return_value")]

# Update the cache
cache.update(prompt, llm_string, return_val)

# Lookup in the cache
cached_value = cache.lookup(prompt, llm_string)
print(cached_value)

Example with OpenSearchSemanticCache

from langchain_community.cache import OpenSearchSemanticCache
from langchain_core.globals import set_llm_cache
from langchain_core.outputs import Generation
from langchain_openai import OpenAIEmbeddings

cache = OpenSearchSemanticCache(
    opensearch_url="http://localhost:9200",
    embedding=OpenAIEmbeddings(),
)
set_llm_cache(cache)

# Example usage
prompt = "example prompt"
llm_string = "example_llm_string"
return_val = [Generation(text="example_return_value")]

# Update the cache
cache.update(prompt, llm_string, return_val)

# Lookup in the cache
cached_value = cache.lookup(prompt, llm_string)
print(cached_value)

Example with MongoDBCache

from langchain_mongodb.cache import MongoDBCache
from langchain_core.globals import set_llm_cache
from langchain_core.outputs import Generation

cache = MongoDBCache(
    connection_string="your_mongodb_connection_string",
    database_name="your_database_name",
    collection_name="your_collection_name",
)
set_llm_cache(cache)

# Example usage
prompt = "example prompt"
llm_string = "example_llm_string"
return_val = [Generation(text="example_return_value")]

# Update the cache
cache.update(prompt, llm_string, return_val)

# Lookup in the cache
cached_value = cache.lookup(prompt, llm_string)
print(cached_value)

Example with MongoDBAtlasSemanticCache

from langchain_mongodb.cache import MongoDBAtlasSemanticCache
from langchain_core.globals import set_llm_cache
from langchain_core.outputs import Generation
from langchain_openai import OpenAIEmbeddings

cache = MongoDBAtlasSemanticCache(
    connection_string="your_mongodb_atlas_connection_string",
    embedding=OpenAIEmbeddings(),
    database_name="your_database_name",
    collection_name="your_collection_name",
)
set_llm_cache(cache)

# Example usage
prompt = "example prompt"
llm_string = "example_llm_string"
return_val = [Generation(text="example_return_value")]

# Update the cache
cache.update(prompt, llm_string, return_val)

# Lookup in the cache
cached_value = cache.lookup(prompt, llm_string)
print(cached_value)

These solutions provide various options for persistent caching, helping you stay within the free tier by reducing the number of API calls. Choose the one that best fits your infrastructure and requirements [1][2][3].
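To apply one of these to a tool rather than an LLM, you can call the cache's lookup/update methods around the tool's own logic instead of registering it with set_llm_cache. A minimal sketch, reusing SQLiteCache and assuming a hypothetical search_api(query) function (the key layout is only an illustration, not a built-in tool-caching feature):

```python
from langchain_community.cache import SQLiteCache
from langchain_core.outputs import Generation

# Hypothetical stand-in for the real (rate-limited) search API call.
def search_api(query: str) -> str:
    return f"results for {query}"

cache = SQLiteCache(database_path="search_tool_cache.db")

def cached_search(query: str) -> str:
    # Reuse the BaseCache key pair: the query plays the role of the prompt,
    # and a fixed tag plays the role of the llm_string.
    hit = cache.lookup(query, "search_tool")
    if hit:
        return hit[0].text
    result = search_api(query)
    cache.update(query, "search_tool", [Generation(text=result)])
    return result
```

Repeated calls with the same query are then served from the SQLite file across restarts, so only new queries count against the monthly quota.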
-
I am using a search API tool and want to stay within the free tier (1k requests/month). As I do development, I will be reusing the same prompt a lot and was wondering if there is a convenient way to do caching for tools? It looks like BaseCache is very much model-focused. That's fine, but what if tools want persistent caching beyond adding a simple @lru_cache? Or is the assumption that tools should always go get the latest value (which could be expensive)?
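(For illustration, the kind of simple in-process approach I mean is roughly this, with search_api standing in for the real tool call; it only lasts for the lifetime of the process:)

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def search_api(query: str) -> str:
    # Placeholder for the real search API request; results are cached in memory only.
    return f"results for {query}"
```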