docs: enrich user documentation and add English content.
BIN: 6 images updated (Before/After sizes: 170→297 KiB, 181→306 KiB, 234→359 KiB, 341→652 KiB, 222→565 KiB, 121→600 KiB)
BIN docs/guidebook/_picture/prompt_directory.png (new file, 61 KiB)
BIN docs/guidebook/_picture/prompt_version.png (new file, 221 KiB)
@@ -50,92 +50,86 @@ The external key file template is located at the same level as the config path (
Tips: The external key file generally contains all of your access keys (AK), which are highly sensitive and must be strictly protected. This file must never be leaked or managed on code platforms such as Git. In real production projects we typically keep this file outside the project and place it under strong system-level permission controls; the steps in this framework's key configuration exist mainly for the sake of production security.

##### step3. In the external key file, configure your commonly used model AK

- The key file contains dozens of common model service AK formats. You can fill in your own keys according to your needs, and don't forget to uncomment them. In subsequent tutorials, we will use the Qwen and GPT models as the LLM for the tutorial agent, so here we will configure the corresponding AK for Qianwen and GPT as shown in the image below:
+ The key file contains dozens of common model service AK formats. You can fill in your own keys according to your needs. In subsequent tutorials, we will use the Qwen and GPT models as the LLMs for the tutorial agent, so here we configure the corresponding AKs for Qwen and GPT as shown in the image below:

![config_ak](../../_picture/config_ak.png)
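If you want to confirm that the keys were actually picked up, the minimal sketch below can help. It assumes that entries from `custom_key.toml` are exposed as environment variables once the framework starts, and that your Qwen key uses the `DASHSCOPE_API_KEY` name from the shipped template; both are assumptions, so adjust them to your own template.

```python
# Minimal key sanity check. Assumptions: custom_key.toml entries are exposed
# as environment variables after startup, and the Qwen key is named
# DASHSCOPE_API_KEY in your template.
import os

from agentuniverse.base.agentuniverse import AgentUniverse

AgentUniverse().start(config_path='../../config/config.toml')
print('Qwen key loaded:', bool(os.getenv('DASHSCOPE_API_KEY')))
```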
## 2. Run the first example

agentUniverse currently includes tutorial examples, located at agentUniverse/examples.

The `sample_standard_app` project already includes a basic agent instance, with its invocation entry point located at: agentUniverse/examples/sample_standard_app/intelligence/test/run_demo_agent.py

- In this section, we will run the first example, we use demo_agent (Path: agentUniverse/examples/sample_standard_app/intelligence/test/demo_agent.py) to test our first tutorial example.
+ In this section we run the first example, using demo_agent (Path: agentUniverse/examples/sample_standard_app/intelligence/test/run_demo_agent.py) as our first tutorial case.
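For orientation, here is a minimal sketch of what such a test entry point typically looks like; the exact helper names and config path in the shipped run_demo_agent.py may differ.

```python
# Sketch of a typical agentUniverse test entry point; names follow the common
# sample pattern and may differ from the shipped script.
from agentuniverse.agent.agent import Agent
from agentuniverse.agent.agent_manager import AgentManager
from agentuniverse.base.agentuniverse import AgentUniverse

AgentUniverse().start(config_path='../../config/config.toml')


def chat(question: str) -> None:
    # 'demo_agent' must match the name field in demo_agent.yaml (see 2.1).
    instance: Agent = AgentManager().get_instance_obj('demo_agent')
    output_object = instance.run(input=question)
    print(output_object.get_data('output'))


if __name__ == '__main__':
    chat("Analyze the reasons behind Buffett's reduction in his stake in BYD.")
```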
### 2.1 Determine the agent used behind the example and its configuration

- For instance, in the case of demo_agent (Path: agentUniverse/examples/sample_standard_app/intelligence/test/demo_agent.py), we first identify the corresponding agent_name in the script. For the demo, it is demo_agent:
+ Taking demo_agent as an example, we first identify the corresponding agent_name in the test script. For the demo, it is demo_agent:

![demo_agent](../../_picture/run_demo_agent.png)

- After determining the agent used in the example, we go to the project agent directory (the directory path is: agentUniverse/examples/sample_standard_app/intelligence/agentic/agent/agent_instance) and find the corresponding agent configuration file demo_agent.yaml. Note that the name field in the agent configuration corresponds to the agent name in demo_agent.
+ After determining the agent used in the example, we go to the project agent directory (agentUniverse/examples/sample_standard_app/intelligence/agentic/agent/agent_instance) and find the corresponding agent configuration file demo_agent.yaml. Note that the `name` item in the agent yaml configuration is the name of the agent invoked in the test script.

![demo_agent_yaml](../../_picture/demo_agent_yaml.png)
- Let's further examine the other configuration details in the demo_agent.yaml file, focusing on the llm_model configuration item. This item specifies the LLM used by the agent. By default, demo_agent uses the demo_llm as the model core. We further refer to the llm directory of the project (directory path: agentUniverse/examples/sample_standard_app/intelligence/agentic/llm) to find the corresponding llm configuration file demo_llm.yaml.
+ Let's further examine the other configuration details in demo_agent.yaml, focusing on the llm_model configuration item. This item specifies the LLM used by the agent. demo_agent uses qwen2.5-72b-instruct as its model core. We then look in the project's llm directory (agentUniverse/examples/sample_standard_app/intelligence/agentic/llm) to find the corresponding llm configuration file qwen_2_5_72b_instruct.yaml.

![demo_llm_yaml](../../_picture/demo_llm_yaml.png)

- The qwen2.5-72b-instruct model is used in demo_llm.
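To confirm which llm instance a given name resolves to, a minimal sketch is shown below. It assumes LLMManager follows the same get_instance_obj lookup pattern as AgentManager, and it must run after the framework has started (as in the test script above).

```python
# Resolve the llm instance referenced by llm_model.name in demo_agent.yaml;
# assumes LLMManager mirrors the common agentUniverse manager pattern.
from agentuniverse.llm.llm_manager import LLMManager

llm = LLMManager().get_instance_obj('qwen2.5-72b-instruct')
print(type(llm).__name__, getattr(llm, 'model_name', None))
```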
#### switch the llm

- If you want to use a different series of model types, you can create the corresponding model instances under llm. AU has many common models. You can copy and replace them in the llm_model configuration of demo_agent.yaml. If needed, you can replace the model_name according to the official model codes provided by the service provider.
+ If you configured other series of model types during the key configuration phase, you can find the corresponding llm instances under the `llm` directory. The `sample_standard_app` project already covers the configuration of commonly used llm instances. For example, you can copy an instance name and substitute it under the `llm_model` configuration in `demo_agent.yaml`.
Qwen Series
Qwen (qwen-max)
```text
llm_model:
-  name: 'default_qwen_llm'
-  model_name: 'qwen-max'
+  name: 'qwen-max'
```

GPT Series
GPT (gpt-4o)
```text
llm_model:
-  name: 'default_openai_llm'
-  model_name: 'gpt-4o'
+  name: 'gpt-4o'
```

WenXin Series
WenXin (ERNIE-4.0-Turbo-128K)
```text
llm_model:
-  name: 'default_wenxin_llm'
-  model_name: 'ERNIE-3.5-8K'
+  name: 'ERNIE-4.0-Turbo-128K'
```

Kimi Series
Kimi (moonshot-v1-128k)
```text
llm_model:
-  name: 'default_kimi_llm'
-  model_name: 'moonshot-v1-8k'
+  name: 'moonshot-v1-128k'
```

Baichuan Series
Baichuan (Baichuan4-Turbo)
```text
llm_model:
-  name: 'default_baichuan_llm'
-  model_name: 'Baichuan2-Turbo'
+  name: 'Baichuan4-Turbo'
```

DeepSeek Series
DeepSeek (deepseek-reasoner)
```text
llm_model:
-  name: 'default_deepseek_llm'
-  model_name: 'deepseek-chat'
+  name: 'deepseek-reasoner'
```
- Tips: To simplify the configuration process, we only list a selection of commonly used model services here. In addition to model service providers, local deployment models can also be configured. We will not demonstrate this in this chapter, but users with such needs can further refer to the chapters related to LLM configuration.
+ Tips: To simplify the configuration process, we only list a selection of commonly used llm services here. Besides model service providers, locally deployed models can also be configured. We do not demonstrate this in this chapter; users with such needs can refer to the chapters on LLM configuration.

- #### switch the llm
+ #### switch the tool

The demo_agent uses the mock_search_tool by default. The tool has been set up to simulate the search engine results needed for the sample question, "Analyze the reasons behind Buffett's reduction in his stake in BYD."
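To check which tool instance a given name resolves to, a minimal sketch is shown below; it assumes ToolManager follows the same get_instance_obj pattern as the other component managers, and it runs after the framework has started.

```python
# Resolve the search tool used by demo_agent; assumes ToolManager mirrors the
# common agentUniverse manager pattern.
from agentuniverse.agent.action.tool.tool_manager import ToolManager

tool = ToolManager().get_instance_obj('mock_search_tool')
print(type(tool).__name__)
```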
You can switch to the real retrieval tool demo_search_tool in demo_agent.yaml. This tool provides online retrieval capabilities. To improve your experience, we recommend applying for Serper in advance and adding its API key to the key file; Serper offers thousands of free retrieval requests to get you started.

You need to apply for a SERPER_API_KEY on the official Serper website and configure it. The official website address: https://serper.dev .

- After completing the application, find the corresponding key in the custom_key.toml mentioned in the key configuration step, and uncomment it as follows.
+ After the application is completed, locate the corresponding key in the `custom_key.toml` mentioned in the key configuration step, as shown below:

```toml
#Google search
#search.io api
# You could sign up for a free account at https://www.searchapi.io/
# And get the SEARCHAPI_API_KEY api key (100 free queries).
SERPER_API_KEY='xxxxxx'
```
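Independent of the framework, you can quickly verify that the key works against Serper's public search endpoint; the query string below is just an illustration, and the requests package is assumed to be installed.

```python
# Verify the Serper key outside agentUniverse.
import os

import requests

resp = requests.post(
    'https://google.serper.dev/search',
    headers={'X-API-KEY': os.environ['SERPER_API_KEY'],
             'Content-Type': 'application/json'},
    json={'q': "Buffett BYD stake reduction"},
)
resp.raise_for_status()
print(resp.json().get('organic', [{}])[0].get('title'))  # first result title
```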
### 2.2 Run the Example

Through the steps above, you have completed all the preparatory work. Run the script in your IDE or shell.

- (Path: agentUniverse/examples/sample_standard_app/intelligence/test/demo_agent.py)
+ (Path: agentUniverse/examples/sample_standard_app/intelligence/test/run_demo_agent.py)

![test_demo_agent](../../_picture/test_demo_agent.png)
@@ -1,9 +1,9 @@
# Docker Containerized Deployment

- AgentUniverse provides a standard working-environment image for containerized deployment of AgentUniverse projects. This document describes how to deploy your own project on top of the working-environment image. The list of image tags is available [here](https://cr.console.aliyun.com/repository/cn-hangzhou/agent_universe/agent_universe/images). If you want to build an image based on your own project, see [工程镜像打包](./工程镜像打包.md).
+ agentUniverse provides a standard working-environment image for containerized deployment of agentUniverse projects. This document describes how to deploy your own project on top of the working-environment image. The list of image tags is available [here](https://cr.console.aliyun.com/repository/cn-hangzhou/agent_universe/agent_universe/images). If you want to build an image based on your own project, see [工程镜像打包](./工程镜像打包.md).

## Preparation

- 1. Build your own project following AgentUniverse's standard directory structure; for details see [应用工程结构及说明](../../../开始使用/1.标准应用工程结构说明.md). For convenience, this document assumes the project name and directory are `sample_standard_app`.
+ 1. Build your own project following agentUniverse's standard directory structure; for details see [应用工程结构及说明](../../../开始使用/1.标准应用工程结构说明.md). For convenience, this document assumes the project name and directory are `sample_standard_app`.
2. Pull the image of the required version:
```shell
docker pull registry.cn-hangzhou.aliyuncs.com/agent_universe/agent_universe:0.0.14b1_centos8
```
@@ -1,5 +1,5 @@
# K8S Cluster Deployment

- AgentUniverse provides standard working-environment images and supports containerized deployment on Kubernetes (K8S) clusters. This guide walks you through using these images to deploy and set up a cluster on K8S. The list of image tags is available [here](https://cr.console.aliyun.com/repository/cn-hangzhou/agent_universe/agent_universe/images). If you want to build an image based on your own project, see [工程镜像打包](./工程镜像打包.md).
+ agentUniverse provides standard working-environment images and supports containerized deployment on Kubernetes (K8S) clusters. This guide walks you through using these images to deploy and set up a cluster on K8S. The list of image tags is available [here](https://cr.console.aliyun.com/repository/cn-hangzhou/agent_universe/agent_universe/images). If you want to build an image based on your own project, see [工程镜像打包](./工程镜像打包.md).

Official K8S documentation: [Kubernetes Setup Documentation](https://kubernetes.io/docs/setup/)

@@ -56,7 +56,7 @@ spec:
      targetPort: 8888
```

- ### 1.1 AgentUniverse Project Environment Variable Setup
+ ### 1.1 agentUniverse Project Environment Variable Setup

#### Option 1 (recommended)

@@ -82,9 +82,9 @@ kubectl apply -f agentuniverse.yaml
kubectl get all -n agent-namespace
```
![images](../../_picture/k8s_apply.png)

- ## 4. Accessing the AgentUniverse Service from Inside the Cluster
+ ## 4. Accessing the agentUniverse Service from Inside the Cluster

- To access the AgentUniverse service from inside the cluster, use the following command-line example:
+ To access the agentUniverse service from inside the cluster, use the following command-line example:

```
kubectl exec -it [pod name] -n agent-namespace -- curl http://agentuniverse-service:9999
```
@@ -1,6 +1,6 @@
# Project Image Build (工程镜像打包)

- We provide a [script](../../../../../../examples/sample_standard_app/image_build/start_build.sh) in the sample project sample_standard_app that packages an aU project into an image. It automates building a centos8-based image containing the Python runtime and the project dependencies; afterwards you can follow [Docker容器化部署](./Docker容器化部署.md) and [K8S部署](./K8S部署.md) to deploy your aU application.
+ We provide a [script](../../../../../examples/sample_standard_app/image_build/start_build.sh) in the sample project sample_standard_app that packages an aU project into an image. It automates building a centos8-based image containing the Python runtime and the project dependencies; afterwards you can follow [Docker容器化部署](./Docker容器化部署.md) and [K8S部署](./K8S部署.md) to deploy your aU application.

## Steps
```shell
@@ -50,73 +50,66 @@ pip install magent-ui ruamel.yaml
Tips: The external key file generally contains all of your AKs, which are highly sensitive and must be strictly protected; it must never be leaked or managed by code platforms such as Git. In real production projects we usually keep this file outside the project and add strong system-level permission controls. The steps in this framework's key configuration exist mainly for production security reasons.

##### step3. Configure your commonly used model AKs in the external key file

- The key file already contains dozens of common model service AK formats. You can fill in your own keys as needed, and don't forget to uncomment them.
+ The key file already contains dozens of common model service AK formats. You can fill in your own keys as needed.

In subsequent tutorials we will use the Qwen and GPT models as the LLMs for the tutorial agent, so here we take Qwen and GPT as examples and configure the corresponding AKs as shown below:
![config_ak](../../_picture/config_ak.png)

## 2. Run the first example

The sample_standard_app project already contains a basic agent instance; its invocation entry point is:
- agentUniverse/examples/sample_standard_app/intelligence/test/demo_agent.py
+ agentUniverse/examples/sample_standard_app/intelligence/test/run_demo_agent.py
In this section we take it as the first agent case to run, choosing demo_agent as the first tutorial example.

### 2.1 Determine the agent used behind the example and its configuration

Taking the demo_agent case as an example, we find the corresponding agent_name in the test script: it is demo_agent.
![demo_agent](../../_picture/run_demo_agent.png)

After determining the agent used by the example, we go to the project's agent directory (agentUniverse/examples/sample_standard_app/intelligence/agentic/agent/agent_instance) and find the corresponding agent configuration demo_agent.yaml. Note that the name item in the agent configuration is the name of the agent invoked in the test script.
![demo_agent_yaml](../../_picture/demo_agent_yaml.png)

- Let's look further at the other configuration details in demo_agent.yaml, paying particular attention to the llm_model configuration item, which selects the llm used by the agent. By default demo_agent uses the demo_llm model instance as its model core. We then go to the project's llm directory (agentUniverse/examples/sample_standard_app/intelligence/agentic/llm) and find the corresponding llm configuration demo_llm.yaml.
+ Let's look further at the other configuration details in demo_agent.yaml, paying particular attention to the llm_model configuration item, which selects the llm used by the agent. demo_agent uses the qwen2.5-72b-instruct model instance as its model core. We then go to the project's llm directory (agentUniverse/examples/sample_standard_app/intelligence/agentic/llm) and find the corresponding llm configuration qwen_2_5_72b_instruct.yaml.

- ![demo_llm_yaml](../../_picture/demo_llm_yaml.png)
+ ![qwen_llm_yaml](../../_picture/qwen_llm_yaml.png)

- We can see that the qwen2.5-72b-instruct model is used in demo_llm.
#### switch the llm

- If you configured other series of model types during the key configuration phase, you can create the corresponding model instances under llm. aU has built-in references for common model replacement, as below. You can copy one and substitute it under the llm_model configuration in demo_agent.yaml; if needed, replace model_name with the official model code from your service provider.
+ If you configured other series of model types during the key configuration phase, you can find the corresponding model instances under the llm directory. The aU sample_standard_app project already covers the configuration of commonly used model instances. As shown below, you can copy an instance name and substitute it under the llm_model configuration in demo_agent.yaml.
Qwen Series
Qwen (qwen-max)
```text
llm_model:
-  name: 'default_qwen_llm'
-  model_name: 'qwen-max'
+  name: 'qwen-max'
```

GPT Series
GPT (gpt-4o)
```text
llm_model:
-  name: 'default_openai_llm'
-  model_name: 'gpt-4o'
+  name: 'gpt-4o'
```

WenXin Series
WenXin (ERNIE-4.0-Turbo-128K)
```text
llm_model:
-  name: 'default_wenxin_llm'
-  model_name: 'ERNIE-3.5-8K'
+  name: 'ERNIE-4.0-Turbo-128K'
```

Kimi Series
Kimi (moonshot-v1-128k)
```text
llm_model:
-  name: 'default_kimi_llm'
-  model_name: 'moonshot-v1-8k'
+  name: 'moonshot-v1-128k'
```

Baichuan Series
Baichuan (Baichuan4-Turbo)
```text
llm_model:
-  name: 'default_baichuan_llm'
-  model_name: 'Baichuan2-Turbo'
+  name: 'Baichuan4-Turbo'
```

DeepSeek Series
DeepSeek (deepseek-reasoner)
```text
llm_model:
-  name: 'default_deepseek_llm'
-  model_name: 'deepseek-chat'
+  name: 'deepseek-reasoner'
```
Tips: To simplify the configuration process, we only list some commonly used model services here. Besides model service providers, locally deployed models can also be configured; we do not demonstrate that in this chapter, and users who need it can refer to the chapters on llm configuration.

@@ -128,16 +121,18 @@ Tips:为简化配置过程,这里我们只列举了部分常用模型服务
You need to apply for a SERPER_API_KEY on the official Serper website and configure it. Official website: https://serper.dev

- After the application is completed, find the corresponding key in the custom_key.toml mentioned in the key configuration step, fill it in, and uncomment it as follows:
+ After the application is completed, find the corresponding key in the custom_key.toml mentioned in the key configuration step and fill it in, as shown below:

```toml
#Google search
#search.io api
# You could sign up for a free account at https://www.searchapi.io/
# And get the SEARCHAPI_API_KEY api key (100 free queries).
SERPER_API_KEY='xxxxxx'
```
### 2.2 Run the Example

- Through the steps above you have completed all the preparation; let's run it and see the result. Find the file agentUniverse/examples/sample_standard_app/intelligence/test/demo_agent.py and run it in your IDE or shell.
+ Through the steps above you have completed all the preparation; let's run it and see the result. Find the file agentUniverse/examples/sample_standard_app/intelligence/test/run_demo_agent.py and run it in your IDE or shell.

![test_demo_agent](../../_picture/test_demo_agent.png)
@@ -2,9 +2,15 @@
In this example we further introduce how to use the prompt management module.

# Using the Prompt Management Module
- Sample address: [demo_startup_app_with_agent_templates](../../../../examples/startup_app/demo_startup_app_with_agent_templates)
+ Sample address: [sample_standard_app](../../../../examples/sample_standard_app)

- In the actual process of building multi-agent applications, we face a large number of prompt settings, and these prompts live in the various yaml files. As the application grows, more and more prompts become difficult to manage. We use the prompt management module to give each prompt a unique prompt_version for management and use.
- Taking the agent [insurance_consult_pro_agent.yaml](../../../../examples/startup_app/demo_startup_app_with_agent_templates/intelligence/agentic/agent/agent_instance/insurance_consult_pro_agent.yaml) in the demo_startup_app_with_agent_templates project as an example, we can see in its configuration that prompt_version is set to insurance_consult.cn, and we can find the actual prompt file [insurance_multi_agent_cn.yaml](../../../../examples/startup_app/demo_startup_app_with_agent_templates/intelligence/agentic/prompt/insurance_multi_agent_cn.yaml) in the [prompt directory](../../../../examples/startup_app/demo_startup_app_with_agent_templates/intelligence/agentic/prompt).
+ In the actual process of building multi-agent applications, a single agent often has several different prompt versions (for example, multiple languages, or one agent adapted to different models). If the prompt information is configured directly in each agent yaml, prompts become much harder for users to manage.

- In this way, we can manage and reuse a large number of prompts separately.
+ We recommend managing prompts as yaml files under the xxx/intelligence/agentic/prompt directory (`xxx` is the project name and directory), as in the sample project's [prompt directory](../../../../examples/sample_standard_app/intelligence/agentic/prompt).
![prompt_directory](../../_picture/prompt_directory.png)

+ Use the agent name as the subdirectory name (e.g. `demo_agent`), and store all prompt versions of the agent in that subdirectory (e.g. `cn_v1` & `cn_v2`). The agentUniverse prompt management module assigns each prompt file a unique prompt_version for management and use; the prompt_version is composed of the **subdirectory name + prompt file name** (e.g. `demo_agent.cn_v1` & `demo_agent.cn_v2`).
+ Taking the agent demo_agent.yaml in the sample_standard_app project as an example, we can see in its configuration that `prompt_version` is set to `demo_agent.cn_v2`, which maps to the actual prompt file [cn_v2.yaml](../../../../examples/sample_standard_app/intelligence/agentic/prompt/demo_agent/cn_v2.yaml).
![prompt_version](../../_picture/prompt_version.png)

+ In this way, by modifying the prompt_version (prompt version number) in the agent yaml, we can manage a large number of prompts in an orderly way and use them flexibly.
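For reference, a minimal sketch of resolving a versioned prompt programmatically; it assumes the prompt module exposes a PromptManager with the same get_instance_obj pattern as the other component managers, and it must run after the framework has started.

```python
# Resolve a prompt by its prompt_version (subdirectory name + file name);
# assumes PromptManager mirrors the common agentUniverse manager pattern.
from agentuniverse.prompt.prompt_manager import PromptManager

prompt = PromptManager().get_instance_obj('demo_agent.cn_v2')
print(type(prompt).__name__)
```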
||||