Replies: 2 comments
-
As for the error, this can be resolved by escaping the curly braces in the template (doubling them, e.g. {{ and }}). But the main question remains: what are best practices for structured input?
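To make the brace escaping concrete, here is a minimal sketch (the template text and variable names are invented for illustration, not taken from the original code). Literal JSON braces in an f-string template are parsed as input variables, which is exactly what produces a KeyError like '"text"'; doubling the braces makes them literal:

```python
from langchain_core.prompts import PromptTemplate

# Un-escaped JSON braces are parsed as a template variable named '"text"',
# so formatting this template raises KeyError: '"text"'.
broken = PromptTemplate.from_template('Example input: {"text": "hello"}\nTask: {task}')
# broken.format(task="annotate this")  # -> KeyError: '"text"'

# Doubling the braces turns them into literal characters in the rendered prompt.
fixed = PromptTemplate.from_template('Example input: {{"text": "hello"}}\nTask: {task}')
print(fixed.format(task="annotate this"))
# Example input: {"text": "hello"}
# Task: annotate this
```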
-
Hey, @robsyc! 👋 I'm here to help you with any bugs, questions, or contributions you need assistance with. Let's squash those bugs together!

I found a similar issue that might be relevant to your problem: Error happened in PromptTemplate + PydanticOutputParser. The issue was closed, and the suggested solution involves attaching the format instructions after formatting the prompt template. Here's the code snippet provided:

```python
def main():
    output_parser = PydanticOutputParser(pydantic_object=CommandResponse)
    instruction = output_parser.get_format_instructions()
    _prompt = """
    ## User Demand
    {user_input}
    ## Pending Control Task Queue
    {task_queue}
    """
    prompt = PromptTemplate(
        template=f"{_prompt}",
        input_variables=["user_input", "task_queue"],
    )
    _input = prompt.format_prompt(user_input="hello", task_queue="aaa")
    # Attach the format instructions only after the template has been formatted,
    # so their JSON-schema braces never go through the template parser.
    _input = f"{_input}\n{instruction}"
    print(_input)
```

However, this solution is not ideal, as it may still have issues determining whether a variable exists [1].

To resolve the KeyError: '"text"', you can make sure the schema dictionary contains a "text" key. Here's a snippet to illustrate how you can modify the schema dictionary to include the "text" key:

```python
if isinstance(schema, type) and is_basemodel_subclass(schema):
    schema_dict = cast(dict, schema.schema())
else:
    schema_dict = cast(dict, schema)

# Ensure the schema_dict includes the "text" key
if "text" not in schema_dict:
    schema_dict["text"] = "Default text value or appropriate value"

function = {
    "name": schema_dict["title"],
    "description": schema_dict["description"],
    "parameters": schema_dict,
}
```

This modification ensures that the "text" key is present in the schema dictionary, so the KeyError is not raised.
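As a related note, another way to keep the format instructions (which themselves contain a JSON schema full of curly braces) out of the template parser is to inject them as a partial variable; this is the pattern shown in the LangChain docs for PydanticOutputParser. The CommandResponse model below is a simplified stand-in for illustration, not the model from the original issue:

```python
from pydantic import BaseModel, Field

from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate

# Simplified stand-in model for illustration.
class CommandResponse(BaseModel):
    text: str = Field(description="Natural-language reply to the user")

parser = PydanticOutputParser(pydantic_object=CommandResponse)

# Supplying the format instructions as a partial variable means they are
# substituted as literal text at format time and never parsed as template syntax.
prompt = PromptTemplate(
    template="## User Demand\n{user_input}\n\n{format_instructions}\n",
    input_variables=["user_input"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

print(prompt.format(user_input="hello"))
```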
-
Checked other resources
Commit to Help
Example Code
Input for the example selector (i.e. task)
Line that gives the error
KeyError: '"text"'
Description
Goal:
Problem: When formatting the ANNOTATOR_EXAMPLES_PROMPT, the "input" field in the examples is not being treated as a regular string to be embedded and compared against the current input; instead, a KeyError is thrown. Based on what I found online, I think it's somehow related to the curly brackets in the JSON input. However, using an r-string or placing an escape \ before the brackets does not resolve the issue.
Larger context: Little documentation exists online on best practices for providing structured input to LLMs... Should I even be passing JSON as input to an LLM? Downstream, I add these examples to the system prompt message like this:
But I'm not sure if this is really the way to go.
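For reference, here is a simplified sketch of the kind of thing I mean (the example data, prompt text, and variable names are made up, not my actual code). Formatting the examples first and injecting them through an input variable means their JSON braces are substituted as literal text and never re-parsed as template variables:

```python
import json

from langchain_core.prompts import ChatPromptTemplate

# Made-up examples; in the real setup these would come from the example selector.
examples = [
    {"input": json.dumps({"task": "annotate gene X"}), "output": "LABEL_A"},
    {"input": json.dumps({"task": "annotate protein Y"}), "output": "LABEL_B"},
]

examples_block = "\n\n".join(
    f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples
)

# The {examples} placeholder is filled with pre-formatted text, so the braces
# inside the JSON are never interpreted as template variables.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an annotation assistant. Here are some examples:\n\n{examples}"),
        ("human", "{task}"),
    ]
)

messages = prompt.format_messages(
    examples=examples_block,
    task=json.dumps({"task": "a new task to annotate"}),
)
print(messages[0].content)
```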
System Info
langchain==0.2.16
langchain-chroma==0.1.2
langchain-community==0.2.16
langchain-core==0.2.38
langchain-openai==0.1.21
langchain-text-splitters==0.2.2
Platform: Linux
Python version: Python 3.10.12