Clean the repo by removing deprecated modules and merging files with the same functionality. #422

Merged: 5 commits, Aug 28, 2024. Showing changes from 4 commits.
63 changes: 0 additions & 63 deletions docs/sphinx_doc/en/source/tutorial/206-prompt.md
@@ -551,67 +551,4 @@ print(prompt)
]
```

## Prompt Engine (will be deprecated in a future release)

AgentScope provides the `PromptEngine` class to simplify the process of crafting
prompts for large language models (LLMs).

### About the `PromptEngine` Class

The `PromptEngine` class provides a structured way to combine different components of a prompt, such as instructions, hints, conversation history, and user inputs, into a format that is suitable for the underlying language model.

### Key Features of PromptEngine

- **Model Compatibility**: It works with any `ModelWrapperBase` subclass.
- **Prompt Type**: It supports both string and list-style prompts, aligning with the model's preferred input format.

### Initialization

When creating an instance of `PromptEngine`, you can specify the target model and, optionally, the shrinking policy, the maximum prompt length, the prompt type, and a summarization model (which may be the same as the target model).

```python
model = OpenAIChatWrapper(...)
engine = PromptEngine(model)
```

### Joining Prompt Components

The `join` method of `PromptEngine` provides a unified interface to handle an arbitrary number of components for constructing the final prompt.

#### Output String Type Prompt

If the model expects a string-type prompt, the components are joined with newline characters:

```python
system_prompt = "You're a helpful assistant."
memory = ...  # can be a dict, a list, or a string
hint_prompt = "Please respond in JSON format."

prompt = engine.join(system_prompt, memory, hint_prompt)
# the result is a single string in which the components are
# joined by newline characters
```
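The newline-joining behavior can be sketched in plain Python. This is a minimal illustration of the documented behavior, not AgentScope's actual implementation; `join_as_string` is a hypothetical helper, assuming dict and list components are serialized to JSON before joining:

```python
import json

# Hypothetical sketch: string components pass through unchanged, while
# dict/list components are serialized to JSON; everything is then
# joined with newline characters.
def join_as_string(*components):
    parts = []
    for comp in components:
        if isinstance(comp, (dict, list)):
            parts.append(json.dumps(comp, ensure_ascii=False))
        else:
            parts.append(str(comp))
    return "\n".join(parts)

prompt = join_as_string(
    "You're a helpful assistant.",
    {"name": "user", "content": "What's the weather like today?"},
    "Please respond in JSON format.",
)
print(prompt)
```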

#### Output List Type Prompt

For models that work with list-type prompts, e.g., OpenAI and Hugging Face chat models, the components can be converted to message objects, i.e., a list of dicts:

```python
system_prompt = "You're a helpful assistant."
user_messages = [{"name": "user", "content": "What's the weather like today?"}]

prompt = engine.join(system_prompt, user_messages)
# the result should be: [{"role": "assistant", "content": "You're a helpful assistant."}, {"name": "user", "content": "What's the weather like today?"}]
```

#### Formatting Prompts Dynamically

The `PromptEngine` supports dynamic prompts via the `format_map` parameter, letting you inject variables into the prompt components for different scenarios:

```python
variables = {"location": "London"}
hint_prompt = "Find the weather in {location}."

prompt = engine.join(system_prompt, user_input, hint_prompt, format_map=variables)
```
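The substitution itself is standard Python string formatting, which the engine presumably applies to each string component before joining. A plain-Python illustration:

```python
# format_map fills {placeholders} in a template from a mapping,
# exactly like str.format but taking a dict directly.
variables = {"location": "London"}
hint_prompt = "Find the weather in {location}."

formatted = hint_prompt.format_map(variables)
print(formatted)  # Find the weather in London.
```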

[[Return to the top]](#206-prompt-en)
58 changes: 0 additions & 58 deletions docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md
@@ -485,62 +485,4 @@ print(prompt)
]
```

## About the `PromptEngine` Class (will be deprecated in a future release)

The `PromptEngine` class provides a structured way to combine different prompt components, such as instructions, hints, dialogue history, and user input, into a format suited to the underlying language model.

### Key Features of the Prompt Engine

- **Model Compatibility**: works with any subclass of `ModelWrapperBase`.
- **Prompt Type**: supports both string- and list-style prompts, matching the model's preferred input format.

### Initialization

When creating an instance of `PromptEngine`, you can specify the target model and, optionally, the shrinking policy, the maximum prompt length, the prompt type, and a summarization model (which may be the same as the target model).

```python
model = OpenAIChatWrapper(...)
engine = PromptEngine(model)
```

### Joining Prompt Components

The `join` method of `PromptEngine` provides a unified interface for handling an arbitrary number of components when constructing the final prompt.

#### Output of String-Type Prompts

If the model expects a string-type prompt, the components are joined with newline characters:

```python
system_prompt = "You're a helpful assistant."
memory = ...  # can be a dict, a list, or a string
hint_prompt = "Please respond in JSON format."

prompt = engine.join(system_prompt, memory, hint_prompt)
# the result is a single string in which the components are
# joined by newline characters
```

#### Output of List-Type Prompts

For models that use list-type prompts, e.g., OpenAI and Hugging Face chat models, the components can be converted to message objects, i.e., a list of dicts:

```python
system_prompt = "You're a helpful assistant."
user_messages = [{"name": "user", "content": "What's the weather like today?"}]

prompt = engine.join(system_prompt, user_messages)
# the result will be: [{"role": "assistant", "content": "You're a helpful assistant."}, {"name": "user", "content": "What's the weather like today?"}]
```

#### Formatting Prompts Dynamically

The `PromptEngine` supports dynamic prompts via the `format_map` parameter, letting you inject variables into the prompt components for different scenarios:

```python
variables = {"location": "London"}
hint_prompt = "Find the weather in {location}."

prompt = engine.join(system_prompt, user_input, hint_prompt, format_map=variables)
```

[[Return to the top]](#206-prompt-zh)
2 changes: 0 additions & 2 deletions src/agentscope/agents/__init__.py
@@ -5,7 +5,6 @@
from .dialog_agent import DialogAgent
from .dict_dialog_agent import DictDialogAgent
from .user_agent import UserAgent
from .text_to_image_agent import TextToImageAgent
from .rpc_agent import RpcAgent
from .react_agent import ReActAgent
from .rag_agent import LlamaIndexAgent
@@ -16,7 +15,6 @@
    "Operator",
    "DialogAgent",
    "DictDialogAgent",
    "TextToImageAgent",
    "UserAgent",
    "ReActAgent",
    "DistConf",
75 changes: 0 additions & 75 deletions src/agentscope/agents/text_to_image_agent.py

This file was deleted.

Empty file removed src/agentscope/file_manager.py
Empty file.
3 changes: 2 additions & 1 deletion src/agentscope/logging.py
@@ -6,10 +6,11 @@

from loguru import logger

from .utils.tools import _guess_type_by_extension

from .message import Msg
from .serialize import serialize
from .studio._client import _studio_client
from .utils.common import _guess_type_by_extension
from .web.gradio.utils import (
    generate_image_from_name,
    send_msg,
18 changes: 12 additions & 6 deletions src/agentscope/manager/_file.py
@@ -8,11 +8,11 @@
import numpy as np
from PIL import Image

from agentscope.utils.tools import _download_file
from agentscope.utils.tools import _hash_string
from agentscope.utils.tools import _get_timestamp
from agentscope.utils.tools import _generate_random_code
from agentscope.constants import (
from ..utils.common import _download_file
from ..utils.common import _hash_string
from ..utils.common import _get_timestamp
from ..utils.common import _generate_random_code
from ..constants import (
    _DEFAULT_SUBDIR_CODE,
    _DEFAULT_SUBDIR_FILE,
    _DEFAULT_SUBDIR_INVOKE,
@@ -32,7 +32,13 @@ def _get_text_embedding_record_hash(
    if isinstance(embedding_model, dict):
        # Format the dict to avoid duplicate keys
        embedding_model = json.dumps(embedding_model, sort_keys=True)
        embedding_model_hash = _hash_string(embedding_model, hash_method)
    elif isinstance(embedding_model, str):
        embedding_model_hash = _hash_string(embedding_model, hash_method)
    else:
        raise RuntimeError(
            f"The embedding model must be a string or a dict, got "
            f"{type(embedding_model)}.",
        )

    # Calculate the embedding id by hashing the hash codes of the
    # original data and the embedding model
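The reason the new branch serializes dicts with `sort_keys=True` before hashing: two dicts that differ only in key order must map to the same record hash. The sketch below illustrates this with an illustrative `record_hash` stand-in, not the real private helpers in `_file.py`:

```python
import hashlib
import json

# Illustrative stand-in for the hashing logic: dicts are canonicalized
# via json.dumps(sort_keys=True) so that key order does not affect the
# resulting hash; plain strings are hashed directly.
def record_hash(embedding_model, hash_method="sha256"):
    if isinstance(embedding_model, dict):
        embedding_model = json.dumps(embedding_model, sort_keys=True)
    elif not isinstance(embedding_model, str):
        raise RuntimeError(
            f"The embedding model must be a string or a dict, got "
            f"{type(embedding_model)}.",
        )
    return hashlib.new(hash_method, embedding_model.encode("utf-8")).hexdigest()

a = record_hash({"model_type": "openai_embedding", "model_name": "some-model"})
b = record_hash({"model_name": "some-model", "model_type": "openai_embedding"})
assert a == b  # key order no longer matters
```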
2 changes: 1 addition & 1 deletion src/agentscope/manager/_manager.py
@@ -9,7 +9,7 @@
from ._file import FileManager
from ._model import ModelManager
from ..logging import LOG_LEVEL, setup_logger
from ..utils.tools import (
from ..utils.common import (
    _generate_random_code,
    _get_process_creation_time,
    _get_timestamp,
2 changes: 1 addition & 1 deletion src/agentscope/manager/_monitor.py
@@ -10,7 +10,7 @@
from sqlalchemy.orm import sessionmaker

from ._file import FileManager
from ..utils.tools import _is_windows
from ..utils.common import _is_windows
from ..constants import (
    _DEFAULT_SQLITE_DB_NAME,
    _DEFAULT_TABLE_NAME_FOR_CHAT_AND_EMBEDDING,
2 changes: 1 addition & 1 deletion src/agentscope/message/msg.py
@@ -13,7 +13,7 @@
from loguru import logger

from ..serialize import is_serializable
from ..utils.tools import (
from ..utils.common import (
    _map_string_to_color_mark,
    _get_timestamp,
)
2 changes: 1 addition & 1 deletion src/agentscope/message/placeholder.py
@@ -9,7 +9,7 @@
from .msg import Msg
from ..rpc import RpcAgentClient, ResponseStub, call_in_thread
from ..serialize import deserialize, is_serializable, serialize
from ..utils.tools import _is_web_url
from ..utils.common import _is_web_url


class PlaceholderMessage(Msg):
2 changes: 1 addition & 1 deletion src/agentscope/models/dashscope_model.py
@@ -10,7 +10,7 @@

from ..manager import FileManager
from ..message import Msg
from ..utils.tools import _convert_to_str, _guess_type_by_extension
from ..utils.common import _convert_to_str, _guess_type_by_extension

try:
import dashscope
6 changes: 3 additions & 3 deletions src/agentscope/models/gemini_model.py
@@ -7,9 +7,9 @@

from loguru import logger

from agentscope.message import Msg
from agentscope.models import ModelWrapperBase, ModelResponse
from agentscope.utils.tools import _convert_to_str
from ..message import Msg
from ..models import ModelWrapperBase, ModelResponse
from ..utils.common import _convert_to_str

try:
import google.generativeai as genai
2 changes: 1 addition & 1 deletion src/agentscope/models/model.py
@@ -68,7 +68,7 @@
from ..manager import FileManager
from ..manager import MonitorManager
from ..message import Msg
from ..utils.tools import _get_timestamp, _convert_to_str
from ..utils.common import _get_timestamp, _convert_to_str
from ..constants import _DEFAULT_MAX_RETRIES
from ..constants import _DEFAULT_RETRY_INTERVAL

6 changes: 3 additions & 3 deletions src/agentscope/models/ollama_model.py
@@ -3,9 +3,9 @@
from abc import ABC
from typing import Sequence, Any, Optional, List, Union, Generator

from agentscope.message import Msg
from agentscope.models import ModelWrapperBase, ModelResponse
from agentscope.utils.tools import _convert_to_str
from ..message import Msg
from ..models import ModelWrapperBase, ModelResponse
from ..utils.common import _convert_to_str

try:
import ollama
2 changes: 1 addition & 1 deletion src/agentscope/models/openai_model.py
@@ -21,7 +21,7 @@
from .model import ModelWrapperBase, ModelResponse
from ..manager import FileManager
from ..message import Msg
from ..utils.tools import _convert_to_str, _to_openai_image_url
from ..utils.common import _convert_to_str, _to_openai_image_url

from ..utils.token_utils import get_openai_max_length

7 changes: 6 additions & 1 deletion src/agentscope/models/response.py
@@ -3,7 +3,7 @@
import json
from typing import Optional, Sequence, Any, Generator, Union, Tuple

from agentscope.utils.tools import _is_json_serializable
from ..utils.common import _is_json_serializable


class ModelResponse:
@@ -56,6 +56,11 @@ def text(self) -> str:
            self._text += chunk
        return self._text

    @text.setter
    def text(self, value: str) -> None:
        """Set the text field."""
        self._text = value

    @property
    def stream(self) -> Union[None, Generator[Tuple[bool, str], None, None]]:
        """Return the stream generator if it exists."""
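The pattern the new `@text.setter` enables can be sketched with a minimal stand-in class (illustrative only, not the real `ModelResponse`): the `text` property lazily accumulates streamed chunks into `_text`, and the setter lets callers overwrite the cached value, e.g. after post-processing a streamed reply.

```python
from typing import Generator, Optional

# StubResponse mimics the property/setter pair: reading .text drains the
# stream into the _text cache on first access; assigning .text replaces
# the cached value directly.
class StubResponse:
    def __init__(
        self,
        text: Optional[str] = None,
        stream: Optional[Generator[str, None, None]] = None,
    ) -> None:
        self._text = text
        self._stream = stream

    @property
    def text(self) -> str:
        if self._text is None and self._stream is not None:
            self._text = "".join(self._stream)
        return self._text

    @text.setter
    def text(self, value: str) -> None:
        """Set the text field."""
        self._text = value

resp = StubResponse(stream=(chunk for chunk in ["Hel", "lo"]))
assert resp.text == "Hello"   # accumulated from the stream
resp.text = "Hello!"          # overwritten via the new setter
assert resp.text == "Hello!"
```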