Chains and steps

csansoon committed Nov 14, 2024
1 parent 9bae2bf commit 7e554a4
Showing 5 changed files with 239 additions and 8 deletions.
2 changes: 1 addition & 1 deletion docs/mint.json
@@ -126,7 +126,7 @@
"promptl/syntax/configuration",
"promptl/syntax/messages",
"promptl/syntax/variables",
"promptl/syntax/conditions",
"promptl/syntax/conditionals",
"promptl/syntax/loops"
]
}
159 changes: 159 additions & 0 deletions docs/promptl/advanced/chains.mdx
@@ -0,0 +1,159 @@
---
title: Chains and Steps
description: Chains and Steps are used to create multi-step prompts that can interact with the AI model in stages.
---

## Overview

LLMs have been observed to handle complex operations better when those operations are broken down into smaller steps.

Chains allow you to create multi-step prompts that can interact with the AI model in stages. You can pass the result of one step to the next one. This enables more complex workflows and dynamic conversations.

## Syntax

Use the `<step>` tag in your prompt to add a step. The engine will pause at that step, generate a response, and add it to the conversation as an assistant message, before continuing with the prompt.

```
<step>
Analyze the following sentence and identify the subject, verb, and object:
<user>
"The curious cat chased the playful mouse."
</user>
</step>
<step>
Now, using this information, create a new sentence by
replacing each part of speech with a different word, but keep the same structure.
</step>
<step>
Finally, translate the new sentence you created into French and return just the sentence.
</step>
```

## Configuration

All steps will use the configuration defined at the beginning of the prompt by default. However, you can override the configuration for each step by adding attributes to the `<step>` tag:

```
---
model: gpt-4o
---
<step model="gpt-4o-mini" temperature={{0.1}}>
/* This step will use a smaller model and lower temperature */
Analyze the following sentence and identify the subject, verb, and object:
<user>
"The curious cat chased the playful mouse."
</user>
</step>
```

## Store step responses

You can store the text of the response in a variable by adding an `as` attribute followed by the variable name. This lets you reuse the response later in your prompt, for example in conditionals or other logic.

```
<step as="result">
Is this statement correct?
{{ statement }}
Respond only with "correct" or "incorrect".
</step>
{{ if result == "correct" }}
Great, now respond with an explanation about the statement.
{{ else }}
Now, provide an explanation of why the statement is incorrect, and give the correct answer.
{{ endif }}
```

### Store the whole message

The `as` attribute stores the text of the generated response. However, the response often contains additional relevant information that can be useful in the prompt.
To store the entire message object, use the `raw` attribute. This attribute stores the whole message object in a variable, which can then be accessed later.

```
<step raw="generatedMessage">
...
</step>
```

The `generatedMessage` variable will contain attributes like `role` and `content`, as well as any additional data provided by your LLM provider. The `content` attribute is always an array of objects, each with a `type` such as `text`, `image`, or `tool-call`.

If you want to debug the contents of the message, you can interpolate it into the prompt and run the chain to see what it contains.
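
For example, here is a minimal sketch of this debugging technique; the commented output is illustrative, as the exact shape of the object depends on your provider:

```
<step raw="generatedMessage">
  Describe the weather in one sentence.
</step>
<step>
  /* The interpolated message will look something like:
     { "role": "assistant", "content": [{ "type": "text", "text": "..." }] } */
  {{ generatedMessage }}
</step>
```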

## Isolating steps

All steps automatically receive the messages from previous steps as context. In some cases, a step may not need that context, and including it would add unnecessary cost and could even confuse the model.

If you want to isolate a step from the general context, you can add the `isolated` attribute to the `<step>` tag. Isolated steps will not receive any context from previous steps, and future steps will not receive the context from isolated steps either.

```
<step isolated as="summary1">
Generate a summary of the following text:
{{ text1 }} /* Long text */
</step>
<step isolated as="summary2">
Generate a summary of the following text:
{{ text2 }} /* Long text */
</step>
<step>
Compare these two summaries and provide a conclusion.
{{ summary1 }}
{{ summary2 }}
</step>
```

## Implementation

To run chains, PromptL evaluates the prompt in steps, waiting for the response of each step before continuing to the next one. To do this, use the `Chain` class to create an instance with the prompt, parameters, and the rest of the configuration, and call the `.step()` method to generate the structure for each step.

The first time `step` is called, it must be called without any response, as there has not been any input yet. After that, you can call `step` with the response of the previous step. For each step, the method will return an object with both `messages` and `config`, as usual, but also with a `completed` boolean, which will be `true` when the chain is finished and no more responses are required to continue.

Let's see an example of how to use the `Chain` class with the `openai` provider:

```javascript
import { Chain } from '@latitude-data/promptl';
import OpenAI from 'openai';

// Create the OpenAI client
const client = new OpenAI();

// Create a function to generate a response based on the step messages and config
async function generateResponse({ config, messages }) {
  const response = await client.chat.completions.create({
    ...config,
    messages,
  })

  // Return the response message
  return response.choices[0].message;
}

// Create a new chain
const chain = new Chain({
  prompt: '...', // Your PromptL prompt as a string
  parameters: {...} // Your prompt parameters
})

// Compile the first step
let result = chain.step()
let last_response

// Iterate over the chain until it is completed
while (!result.completed) {
  // Generate the response
  last_response = await generateResponse(result)

  // Compile the next step
  result = chain.step(last_response)
}

console.log(last_response)
```
Empty file.
2 changes: 1 addition & 1 deletion docs/promptl/getting-started/introduction.mdx
@@ -5,7 +5,7 @@ description: Get started with PromptL

## What is PromptL?

PromptL is a common, easy-to-use syntax to define dynamic prompts for LLMs. It is a simple, yet powerful language that allows you to define prompts in a human-readable format, while still being able to leverage the full power of LLMs.
[PromptL](https://promptl.ai/) offers a common, easy-to-use syntax for defining dynamic prompts for LLMs. It is a simple, yet powerful language that allows you to define prompts in a human-readable format, while still being able to leverage the full power of LLMs.

## Why PromptL?

84 changes: 78 additions & 6 deletions docs/promptl/syntax/messages.mdx
@@ -7,11 +7,33 @@ description: Learn how to define messages in PromptL

Messages are the core of LLM prompting. They define the conversation between the user and the assistant. Messages can have different roles, such as `system`, `user`, `assistant`, or `tool`. Each message role has a different meaning and purpose in the conversation.

## Message Tags
- [Message Tags](#message-tags)
- [Message Content](#message-content)
  - [Image Content](#image-content)
  - [Tool Call Content](#tool-call-content)
- [Roles](#roles)
  - [System Messages](#system-messages)
  - [User Messages](#user-messages)
  - [Assistant Messages](#assistant-messages)
  - [Tool Messages](#tool-messages)

Plain text is automatically parsed by PromptL as a `system` message, although this can be changed in code.
# Message Tags

To define other types of messages, you can use the following tags: `<system>`, `<user>`, `<assistant>`, and `<tool>`. In addition to this, you can also define messages with custom or dynamic roles by using the `<message>` tag.
To define a message, you can use the `<message>` tag, followed by a `role` attribute to define the role:

```plaintext
<message role="system">
This is a system message.
</message>
```

By default, all text not wrapped into a message tag will be considered a `system` message, although this can be changed in the implementation.

```plaintext
This is a system message.
```

For convenience, there are tags for each specific role: `<system>`, `<user>`, `<assistant>`, and `<tool>`. These tags are equivalent to the `<message>` tag with the corresponding role attribute.

```plaintext
<system>
@@ -26,11 +26,61 @@
<assistant>
Here's a draft blog post about {{ topic }}...
</assistant>
```

## Message Content

Depending on the provider, some messages can contain more than just text. For example, user messages may contain images, and assistant messages may contain tool call requests. Check your LLM provider's documentation to see what kind of content you can include in your messages.

Similar to `<message>` tags, you can add `<content>` tags to define the content of a message, with a `type` attribute that defines the type of the content, which can be `text`, `image`, `tool-call`, or any other type supported by your LLM provider.

<message role='example'>
Here's an example of high-quality content in this style...
</message>
```
<user>
<content type="text">Take a look at this image:</content>
<content type="image">[image url]</content>
</user>
<assistant>
<content
type="tool-call"
id="123"
name="get-weather"
arguments={{ { location: "Barcelona" } }}
/>
</assistant>
```

All plain text inside a message that is not wrapped in a content tag will automatically be treated as text content.

You can also use `<content-text>`, `<content-image>`, and `<tool-call>` tags as shortcuts for the `<content>` tag with the corresponding type.

```
<user>
Take a look at this image:
<content-image>[image url]</content-image>
</user>
<assistant>
<tool-call
id="123"
name="get-weather"
arguments={{ { location: "Barcelona" } }}
/>
</assistant>
```

### Image Content

Images can be included using either `<content type="image">` or `<content-image>`. The content should be the image encoded as a base64 string, or a URL if the provider supports it.
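
For example, a short sketch; both values below are placeholders:

```
<user>
  What is in this picture?

  /* As a URL, if the provider supports it */
  <content-image>https://example.com/picture.png</content-image>

  /* Or as a base64-encoded string */
  <content-image>iVBORw0KGgoAAAANSUhEUgAA...</content-image>
</user>
```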

### Tool Call Content

Tool calls can only be included inside assistant messages, and they must contain the following attributes:

- `id`: A unique identifier for the tool call.
- `name`: The name of the tool to call.
- `arguments` (optional): An object containing the arguments to pass to the tool.
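
For example, since `arguments` is optional, a call to a hypothetical parameterless tool only needs `id` and `name`:

```
<assistant>
  <tool-call id="call_42" name="get-current-time" />
</assistant>
```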

## Roles

### System Messages

Expand Down
