Merge branch 'main' into patch-12
sonichi authored Oct 22, 2023
2 parents e099325 + 5c2cd99 commit e89f017
Showing 5 changed files with 29 additions and 29 deletions.
4 changes: 2 additions & 2 deletions notebook/agentchat_groupchat_research.ipynb
@@ -15,7 +15,7 @@
"source": [
"# Auto Generated Agent Chat: Performs Research with Multi-Agent Group Chat\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"## Requirements\n",
@@ -93,7 +93,7 @@
"]\n",
"```\n",
"\n",
"If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n",
"If you open this notebook in Colab, you can upload your files by clicking the file icon on the left panel and then choosing the \"upload file\" icon.\n",
"\n",
"You can set the value of config_list in other ways you prefer, e.g., loading from a YAML file."
]
8 changes: 4 additions & 4 deletions notebook/agentchat_web_info.ipynb
@@ -19,12 +19,12 @@
"source": [
"# Auto Generated Agent Chat: Solving Tasks Requiring Web Info\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to perform tasks which require acquiring info from the web:\n",
"* discuss a paper based on its URL.\n",
"* discuss about stock market.\n",
"* discuss the stock market.\n",
"\n",
"Here `AssistantAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for a user to execute the code written by `AssistantAgent`. By setting `human_input_mode` properly, the `UserProxyAgent` can also prompt the user for feedback to `AssistantAgent`. For example, when `human_input_mode` is set to \"TERMINATE\", the `UserProxyAgent` will execute the code written by `AssistantAgent` directly and return the execution results (success or failure and corresponding outputs) to `AssistantAgent`, and prompt the user for feedback when the task is finished. When user feedback is provided, the `UserProxyAgent` will directly pass the feedback to `AssistantAgent`.\n",
"\n",
@@ -116,7 +116,7 @@
"]\n",
"```\n",
"\n",
"If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n",
"If you open this notebook in Colab, you can upload your files by clicking the file icon on the left panel and then choosing the \"upload file\" icon.\n",
"\n",
"You can set the value of config_list in other ways you prefer, e.g., loading from a YAML file."
]
@@ -162,7 +162,7 @@
"source": [
"## Example Task: Paper Talk from URL\n",
"\n",
"We invoke the `initiate_chat()` method of the user proxy agent to start the conversation. When you run the cell below, you will be prompted to provide feedback after the assistant agent sends a \"TERMINATE\" signal in the end of the message. If you don't provide any feedback (by pressing Enter directly), the conversation will finish. Before the \"TERMINATE\" signal, the user proxy agent will try to execute the code suggested by the assistant agent on behalf of the user."
"We invoke the `initiate_chat()` method of the user proxy agent to start the conversation. When you run the cell below, you will be prompted to provide feedback after the assistant agent sends a \"TERMINATE\" signal at the end of the message. If you don't provide any feedback (by pressing Enter directly), the conversation will finish. Before the \"TERMINATE\" signal, the user proxy agent will try to execute the code suggested by the assistant agent on behalf of the user."
]
},
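The feedback loop just described can be mimicked with a toy, stdlib-only sketch. This is illustrative pseudologic only, not autogen's implementation; the function name and transcript strings are made up for demonstration:

```python
# Toy illustration (NOT autogen code) of the "TERMINATE" handshake described
# above: the proxy auto-executes code until the assistant signals TERMINATE,
# then asks the user for feedback; empty feedback ends the conversation.
def run_chat(assistant_replies, user_feedback=""):
    transcript = []
    for reply in assistant_replies:
        if reply.endswith("TERMINATE"):
            transcript.append("prompt user for feedback")
            if not user_feedback:
                transcript.append("conversation finished")
                break
            transcript.append(f"forward feedback: {user_feedback}")
        else:
            # Before the TERMINATE signal, code is executed on the user's behalf.
            transcript.append(f"execute code: {reply}")
    return transcript

print(run_chat(["print('hello')", "done TERMINATE"]))
```

Passing non-empty feedback instead would forward it to the assistant and keep the conversation going.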
{
20 changes: 10 additions & 10 deletions notebook/oai_completion.ipynb
@@ -63,9 +63,9 @@
" - OpenAI API key: os.environ[\"OPENAI_API_KEY\"] or `openai_api_key_file=\"key_openai.txt\"`.\n",
" - Azure OpenAI API key: os.environ[\"AZURE_OPENAI_API_KEY\"] or `aoai_api_key_file=\"key_aoai.txt\"`. Multiple keys can be stored, one per line.\n",
" - Azure OpenAI API base: os.environ[\"AZURE_OPENAI_API_BASE\"] or `aoai_api_base_file=\"base_aoai.txt\"`. Multiple bases can be stored, one per line.\n",
"* The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. It first looks for environment variable `env_or_file` which needs to be a valid json string. If that variable is not found, it then looks for a json file with the same name. It filters the configs by filter_dict.\n",
"* The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. It first looks for the environment variable `env_or_file`, which must be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by filter_dict.\n",
"\n",
"It's OK to have only the OpenAI API key, or only the Azure OpenAI API key + base. If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n"
"It's OK to have only the OpenAI API key, or only the Azure OpenAI API key + base. If you open this notebook in Colab, you can upload your files by clicking the file icon on the left panel and then choosing the \"upload file\" icon.\n"
]
},
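The lookup order described above (environment variable first, then a file of the same name, then `filter_dict`) can be approximated with a toy, stdlib-only sketch. The real `config_list_from_json` implementation differs, and `load_config_list` is a hypothetical name used here only for illustration:

```python
import json
import os

# Hypothetical sketch (NOT the real autogen code) of config_list_from_json's
# resolution order: try the environment variable, then fall back to a file
# of the same name, then filter the resulting configs.
def load_config_list(env_or_file, filter_dict=None):
    raw = os.environ.get(env_or_file)
    if raw is None and os.path.exists(env_or_file):
        with open(env_or_file) as f:
            raw = f.read()
    configs = json.loads(raw) if raw else []
    if filter_dict:
        # A config survives only if every filter key matches one allowed value.
        configs = [
            c for c in configs
            if all(c.get(k) in v for k, v in filter_dict.items())
        ]
    return configs

os.environ["OAI_CONFIG_LIST_DEMO"] = json.dumps(
    [{"model": "gpt-4", "api_key": "sk-..."},
     {"model": "gpt-3.5-turbo", "api_key": "sk-..."}]
)
print(load_config_list("OAI_CONFIG_LIST_DEMO", {"model": ["gpt-4"]}))
```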
{
@@ -253,10 +253,10 @@
" \"\"\"I think we all remember that feeling when the result of some long-awaited\n",
" event is finally known. The feelings and thoughts you have at that moment are\n",
" definitely worth noting down and comparing.\n",
" Your task is to determine if a person correctly guessed the results of a number of matches.\n",
" Your task is to determine if a person correctly guessed the results of several matches.\n",
" You are given two arrays of scores and guesses of equal length, where each index shows a match. \n",
" Return an array of the same length denoting how far off each guess was. If they have guessed correctly,\n",
" the value is 0, and if not, the value is the absolute difference between the guess and the score.\n",
" the value is 0; if not, the value is the absolute difference between the guess and the score.\n",
" \n",
" \n",
" example:\n",
@@ -344,7 +344,7 @@
" autogen.code_utils.eval_function_completions,\n",
" assertions=partial(autogen.code_utils.generate_assertions, config_list=config_list),\n",
" use_docker=False,\n",
" # Please set use_docker=True if you have docker available to run the generated code.\n",
" # Please set use_docker=True if docker is available to run the generated code.\n",
" # Using docker is safer than running the generated code directly.\n",
")\n"
]
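For reference, a minimal hand-written solution to the `compare` task in the docstring above; the notebook itself generates and evaluates model completions rather than hard-coding this:

```python
def compare(scores, guesses):
    """Return how far off each guess was: 0 if correct, otherwise the
    absolute difference between the guess and the score."""
    return [abs(s - g) for s, g in zip(scores, guesses)]

print(compare([1, 2, 3, 4, 5, 1], [1, 2, 3, 4, 2, -2]))  # → [0, 0, 0, 0, 3, 3]
```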
@@ -399,9 +399,9 @@
"\n",
"* `inference_budget` is the target average inference budget per instance in the benchmark. For example, 0.02 means the target inference budget is 0.02 dollars, which translates to 1000 tokens (input + output combined) if the text Davinci model is used.\n",
"* `optimization_budget` is the total budget allowed to perform the tuning. For example, 5 means 5 dollars are allowed in total, which translates to 250K tokens for the text Davinci model.\n",
"* `num_sumples` is the number of different hyperparameter configurations which is allowed to try. The tuning will stop after either num_samples trials or after optimization_budget dollars spent, whichever happens first. -1 means no hard restriction in the number of trials and the actual number is decided by `optimization_budget`.\n",
"* `num_samples` is the number of different hyperparameter configurations allowed to be tried. The tuning will stop after either num_samples trials or after optimization_budget dollars are spent, whichever happens first. -1 means no hard restriction on the number of trials; the actual number is decided by `optimization_budget`.\n",
"\n",
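The dollar-to-token conversions quoted above can be checked with a quick sketch, assuming the stated text-davinci price of $0.02 per 1K tokens (input + output combined):

```python
# Sketch of the budget arithmetic: dollars / price-per-1K-tokens * 1000.
PRICE_PER_1K_TOKENS = 0.02  # assumed text-davinci pricing, in dollars

def budget_to_tokens(dollars):
    return int(dollars / PRICE_PER_1K_TOKENS * 1000)

print(budget_to_tokens(0.02))  # per-instance inference budget → 1000 tokens
print(budget_to_tokens(5))     # total optimization budget → 250000 tokens (250K)
```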
"Users can specify tuning data, optimization metric, optimization mode, evaluation function, search spaces etc.. The default search space is:\n",
"Users can specify tuning data, optimization metric, optimization mode, evaluation function, search spaces, etc. The default search space is:\n",
"\n",
"```python\n",
"default_search_space = {\n",
Expand All @@ -424,8 +424,8 @@
"}\n",
"```\n",
"\n",
"The default search space can be overridden by users' input.\n",
"For example, the following code specifies three choices for the prompt and two choices of stop sequences. For hyperparameters which don't appear in users' input, the default search space will be used. If you don't have access to gpt-4 or would like to modify the choice of models, you can provide a different search space for model."
"Users' input can override the default search space.\n",
"For example, the following code specifies three choices for the prompt and two choices of stop sequences. The default search space will be used for hyperparameters that don't appear in users' input. If you don't have access to gpt-4 or would like to modify the choice of models, you can provide a different search space for the model."
]
},
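The override behavior can be pictured as a simple dictionary merge: keys the user supplies replace the defaults, everything else is kept. This is a toy sketch with made-up keys, not the actual default search space:

```python
# Toy illustration of overriding part of a default search space;
# the keys and values here are invented for demonstration only.
default_search_space = {"model": ["gpt-3.5-turbo"], "temperature": [0, 1]}
user_space = {"prompt": ["p1", "p2", "p3"], "stop": [None, ["\n"]]}

# User-provided keys win; untouched hyperparameters keep their defaults.
search_space = {**default_search_space, **user_space}
print(sorted(search_space))
```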
{
@@ -534,7 +534,7 @@
}
},
"source": [
"### Make a request with the tuned config\n",
"### Request with the tuned config\n",
"\n",
"We can apply the tuned config on the request for an example task:"
]
24 changes: 12 additions & 12 deletions notebook/oai_openai_utils.ipynb
@@ -6,7 +6,7 @@
"source": [
"# In-depth Guide to OpenAI Utility Functions\n",
"\n",
"Managing API configurations can be tricky, especially when dealing with multiple models and API versions. The provided utility functions assist users in managing these configurations effectively. Ensure your API keys and other sensitive data are stored securely. For local development, you might store keys in `.txt` or `.env` files or environment variables. Never expose your API keys publicly. If you insist on having your key files stored locally on your repo (you shouldn't), make sure the key file path is added to the `.gitignore` file.\n",
"Managing API configurations can be tricky, especially when dealing with multiple models and API versions. The provided utility functions assist users in managing these configurations effectively. Ensure your API keys and other sensitive data are stored securely. You might store keys in `.txt` or `.env` files or environment variables for local development. Never expose your API keys publicly. If you insist on storing your key files locally on your repo (you shouldn't), ensure the key file path is added to the `.gitignore` file.\n",
"\n",
"#### Steps:\n",
"1. Obtain API keys from OpenAI and optionally from Azure OpenAI (or other provider).\n",
Expand All @@ -32,7 +32,7 @@
"metadata": {},
"source": [
"#### What is a `config_list`?\n",
"When instantiating an assistant, such as the example below, you see that it is being passed a `config_list`. This is used to tell the `AssistantAgent` which models or configurations it has access to:\n",
"When instantiating an assistant, such as the example below, it is passed a `config_list`. This is used to tell the `AssistantAgent` which models or configurations it has access to:\n",
"```python\n",
"\n",
"assistant = AssistantAgent(\n",
Expand All @@ -51,7 +51,7 @@
"- Generate creative content (using gpt-4).\n",
"- Answer general queries (using gpt-3.5-turbo).\n",
"\n",
"Different tasks may require different models, and the `config_list` aids in dynamically selecting the appropriate model configuration, managing API keys, endpoints, and versions for efficient operation of the intelligent assistant. In summary, the `config_list` help the agents work efficiently, reliably, and optimally by managing various configurations and interactions with the OpenAI API - enhancing the adaptability and functionality of the agents."
"Different tasks may require different models, and the `config_list` aids in dynamically selecting the appropriate model configuration, managing API keys, endpoints, and versions for efficient operation of the intelligent assistant. In summary, the `config_list` helps the agents work efficiently, reliably, and optimally by managing various configurations and interactions with the OpenAI API, enhancing the adaptability and functionality of the agents."
]
},
{
@@ -108,7 +108,7 @@
"source": [
"## config_list_openai_aoai\n",
"\n",
"This method creates a list of configurations using Azure OpenAI endpoints and OpenAI endpoints. It tries to extract API keys and bases either from environment variables or from local text files.\n",
"This method creates a list of configurations using Azure OpenAI endpoints and OpenAI endpoints. It tries to extract API keys and bases from environment variables or local text files.\n",
"\n",
"Steps:\n",
"- Store OpenAI API key in:\n",
@@ -151,7 +151,7 @@
" 2. Alternatively, save configurations in a local JSON file named `OAI_CONFIG_LIST.json`\n",
" 3. Add `OAI_CONFIG_LIST` to your `.gitignore` file on your local repository.\n",
"\n",
"Your JSON struction should look something like this:\n",
"Your JSON structure should look something like this:\n",
"\n",
"```json\n",
"# OAI_CONFIG_LIST file example\n",
@@ -195,7 +195,7 @@
"\n",
"The `filter_dict` parameter in the `autogen.config_list_from_json` function is used to selectively filter the configurations loaded from the environment variable or JSON file. It allows you to select only those configurations that match the defined criteria.\n",
"\n",
"lets say you want to config an assistant agent to only LLM type. Take the below example: even though we have \"gpt-3.5-turbo\" and \"gpt-4\" in our `OAI_CONFIG_LIST`, this agent would only be configured to use"
"let's say you want to configure an assistant agent to use only a certain model type. Take the below example: even though we have \"gpt-3.5-turbo\" and \"gpt-4\" in our `OAI_CONFIG_LIST`, this agent would only be configured to use"
]
},
{
@@ -249,9 +249,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"With the `OAI_CONFIG_LIST` we set earlier, there isn't much to filter on. But when the complexity of a project grows and the and you're managing multiple models for various purposes, you can see how `filter_dict` can be useful. \n",
"With the `OAI_CONFIG_LIST` we set earlier, there isn't much to filter on. But when the complexity of a project grows and you're managing multiple models for various purposes, you can see how `filter_dict` can be useful. \n",
"\n",
"A more complex filtering criteria could be the following: Assuming we have a `OAI_CONFIG_LIST` several models used to create various agents - Lets say we want to load configurations for `gpt-4` using API version `\"2023-03-01-preview\"` and we want the `api_type` to be `aoai`, we can set up `filter_dict` as follows:"
"A more complex filtering criterion could be the following: assuming we have an `OAI_CONFIG_LIST` with several models used to create various agents, let's say we want to load configurations for `gpt-4` using API version \"2023-03-01-preview\" and we want the `api_type` to be `aoai`; we can set up `filter_dict` as follows:"
]
},
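A stdlib-only sketch of how such a multi-key `filter_dict` narrows a config list: a config survives only if every key in the filter matches one of the allowed values. The key names follow the notebook's example; the configs themselves are made up:

```python
# Made-up configs illustrating multi-key filtering; only entries matching
# ALL of model, api_version, and api_type survive the filter.
filter_dict = {
    "model": ["gpt-4"],
    "api_version": ["2023-03-01-preview"],
    "api_type": ["aoai"],
}

configs = [
    {"model": "gpt-4", "api_version": "2023-03-01-preview", "api_type": "aoai"},
    {"model": "gpt-4", "api_type": "open_ai"},
    {"model": "gpt-3.5-turbo", "api_version": "2023-03-01-preview", "api_type": "aoai"},
]

filtered = [
    c for c in configs
    if all(c.get(k) in v for k, v in filter_dict.items())
]
print(filtered)  # only the first config matches all three criteria
```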
{
@@ -310,9 +310,9 @@
"source": [
"## config_list_from_dotenv\n",
"\n",
"If you are interest in keeping all of your keys in a single location like a `.env` file rather than using a configuration specifically for OpenAI, you can use `config_list_from_dotenv`. The allows you to conveniently create a config list without creating a complex `OAI_CONFIG_LIST` file.\n",
"If you are interested in keeping all of your keys in a single location like a `.env` file rather than using a configuration specifically for OpenAI, you can use `config_list_from_dotenv`. This allows you to conveniently create a config list without creating a complex `OAI_CONFIG_LIST` file.\n",
"\n",
"The `model_api_key_map` parameter is a dictionary that maps model names to the environment variable names in the `.env` file where their respective API keys are stored. It allows the code to know which API key to use for each model. \n",
"The `model_api_key_map` parameter is a dictionary that maps model names to the environment variable names in the `.env` file where their respective API keys are stored. It lets the code know which API key to use for each model. \n",
"\n",
"If not provided, it defaults to using `OPENAI_API_KEY` for `gpt-4` and `OPENAI_API_KEY` for `gpt-3.5-turbo`.\n",
"\n",
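The mapping just described can be sketched with a toy, stdlib-only helper. `build_config_list` is a hypothetical name, not autogen's implementation; it only illustrates the fallback to `OPENAI_API_KEY`:

```python
import os

# Toy sketch (NOT the real autogen code): map each model to the env var
# holding its key, defaulting to OPENAI_API_KEY when no mapping is given.
def build_config_list(models, model_api_key_map=None):
    key_map = model_api_key_map or {}
    configs = []
    for m in models:
        env_var = key_map.get(m, "OPENAI_API_KEY")
        configs.append({"model": m, "api_key": os.environ.get(env_var, "")})
    return configs

os.environ["OPENAI_API_KEY"] = "sk-demo"
os.environ["ANOTHER_API_KEY"] = "sk-other"
print(build_config_list(["gpt-4", "gpt-3.5-turbo"],
                        {"gpt-4": "ANOTHER_API_KEY"}))
```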
@@ -354,7 +354,7 @@
"import autogen\n",
"\n",
"config_list = autogen.config_list_from_dotenv(\n",
" dotenv_file_path='.env', # If None the function will try find in the working directory\n",
" dotenv_file_path='.env', # If None, the function will try to find it in the working directory\n",
" filter_dict={\n",
" \"model\": {\n",
" \"gpt-4\",\n",
@@ -386,7 +386,7 @@
"source": [
"# gpt-3.5-turbo will default to OPENAI_API_KEY\n",
"config_list = autogen.config_list_from_dotenv(\n",
" dotenv_file_path='.env', # If None the function will try find in the working directory\n",
" dotenv_file_path='.env', # If None, the function will try to find it in the working directory\n",
" model_api_key_map={\n",
" \"gpt-4\": \"ANOTHER_API_KEY\", # String or dict accepted\n",
" },\n",
2 changes: 1 addition & 1 deletion website/docs/Examples/AutoGen-AgentChat.md
@@ -34,4 +34,4 @@ Links to notebook examples:

5. **Agent Teaching and Learning**
- Teach Agents New Skills & Reuse via Automated Chat - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teaching.ipynb)
- Teach Agents New Facts, User Preferences and Skills Beyond Coding - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb)
- Teach Agents New Facts, User Preferences and Skills Beyond Coding - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb)
