diff --git a/docs/guidebook/_picture/long_translation_au.png b/docs/guidebook/_picture/long_translation_au.png
new file mode 100644
index 00000000..a9cd0fe5
Binary files /dev/null and b/docs/guidebook/_picture/long_translation_au.png differ
diff --git a/docs/guidebook/_picture/long_translation_wu.png b/docs/guidebook/_picture/long_translation_wu.png
new file mode 100644
index 00000000..e223a3d7
Binary files /dev/null and b/docs/guidebook/_picture/long_translation_wu.png differ
diff --git a/docs/guidebook/_picture/translation_execute_flow.png b/docs/guidebook/_picture/translation_execute_flow.png
new file mode 100644
index 00000000..097c39ea
Binary files /dev/null and b/docs/guidebook/_picture/translation_execute_flow.png differ
diff --git a/docs/guidebook/_picture/translation_flow_graph.png b/docs/guidebook/_picture/translation_flow_graph.png
new file mode 100644
index 00000000..122de458
Binary files /dev/null and b/docs/guidebook/_picture/translation_flow_graph.png differ
diff --git a/docs/guidebook/zh/7_1_1_翻译案例.md b/docs/guidebook/zh/7_1_1_翻译案例.md
new file mode 100644
index 00000000..5958ad2f
--- /dev/null
+++ b/docs/guidebook/zh/7_1_1_翻译案例.md
@@ -0,0 +1,49 @@
+# Translation Case
+Recently, Stanford professor Andrew Ng open-sourced an AI machine translation agent that translates text with a reflection workflow; the project demonstrates machine translation as an example of a reflective agent workflow. The agent's main steps are:
+1. Initial translation: use an LLM to translate the text from source_language to target_language;
+2. Reflection: have the LLM point out shortcomings in the translation and give constructive suggestions for improvement;
+3. Improvement: have the LLM re-translate the text according to the suggestions produced in the reflection step.
+The reflection workflow used in the project, in which the model finds its own shortcomings and then makes targeted revisions based on them, is a typical multi-agent collaboration mechanism. This is also an area agentUniverse has long been working in, which is why I wanted to port Andrew Ng's translation agent to agentUniverse.
+This case is based on the Qwen LLM and the embedding capability of `DashScope`; before using it, configure `DASHSCOPE_API_KEY` in your environment variables.
+
+## Multi-Agent Collaboration Workflow
+During translation, we first check whether the text exceeds the maximum number of tokens the model can handle. If it does, the text is split into chunks and each chunk is translated separately,
+but the whole process still follows the initial translation -> reflection -> improvement flow.
+![Multi-agent collaboration workflow](../_picture/translation_flow_graph.png)
+
+In Andrew Ng's reference code, chunked translation carries the relevant context along with each chunk to keep the translation coherent. However, his code puts the entire text into the context of every chunk, which defeats the purpose of chunking and in some cases still pushes the request past the model's maximum token limit. I raised an issue about this in the repository: [issue](https://github.com/andrewyng/translation-agent/issues/28)
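+
+A minimal, framework-agnostic sketch of the chunked init -> reflect -> improve loop is shown below to make the control flow concrete; the actual aU implementation lives in the agent and prompt files linked in the next section. The injected `call_llm` callable, the rough 4-characters-per-token estimate, and the default chunk size are illustrative assumptions, not part of the sample code.
+```python
+from typing import Callable, List
+
+
+def rough_token_count(text: str) -> int:
+    # Crude estimate (~4 characters per token), only used to decide whether to chunk.
+    return len(text) // 4
+
+
+def split_into_chunks(text: str, max_tokens: int) -> List[str]:
+    # Naive fixed-size split by characters; a real implementation would split
+    # on sentence or paragraph boundaries instead of cutting mid-word.
+    max_chars = max_tokens * 4
+    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
+
+
+def translate_chunk(call_llm: Callable[[str], str], source_lang: str,
+                    target_lang: str, chunk: str, context: str) -> str:
+    # One pass of the reflection workflow for a single chunk:
+    # initial translation -> critique -> improved translation.
+    init = call_llm(
+        f"Translate the following {source_lang} text into {target_lang}.\n"
+        f"Neighbouring context (do not translate it):\n{context}\n\n"
+        f"Text to translate:\n{chunk}")
+    critique = call_llm(
+        f"List weaknesses of this {target_lang} translation and suggest "
+        f"concrete improvements.\nSource:\n{chunk}\n\nTranslation:\n{init}")
+    return call_llm(
+        f"Rewrite the translation according to the suggestions.\n"
+        f"Source:\n{chunk}\n\nTranslation:\n{init}\n\nSuggestions:\n{critique}")
+
+
+def translate(call_llm: Callable[[str], str], source_lang: str,
+              target_lang: str, text: str, max_tokens: int = 2000) -> str:
+    if rough_token_count(text) <= max_tokens:
+        # Short text: run the three-step flow on the whole input at once.
+        return translate_chunk(call_llm, source_lang, target_lang, text, context="")
+    # Long text: translate chunk by chunk, passing only the neighbouring
+    # chunks (not the whole document) as context for coherence.
+    chunks = split_into_chunks(text, max_tokens)
+    pieces = []
+    for i, chunk in enumerate(chunks):
+        neighbours = chunks[max(0, i - 1):i] + chunks[i + 1:i + 2]
+        pieces.append(translate_chunk(call_llm, source_lang, target_lang,
+                                      chunk, context="".join(neighbours)))
+    return "".join(pieces)
+```
+Passing only the neighbouring chunks as context, rather than the full document, keeps each request within the model's token limit, which is exactly the concern raised in the issue linked above.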
+
+## Implementation in aU
+Implementing this translation workflow in aU involves the following steps:
+1. Define the translation prompts: three for short-text translation and three for long-text translation. The files are:
+[short-text translation init prompt](../../../sample_standard_app/app/core/prompt/translation/translation_init_en.yaml)
+[short-text translation reflection prompt](../../../sample_standard_app/app/core/prompt/translation/translation_reflection_en.yaml)
+[short-text translation improve prompt](../../../sample_standard_app/app/core/prompt/translation/translation_improve_en.yaml)
+[long-text translation init prompt](../../../sample_standard_app/app/core/prompt/translation/multi_translation_init_en.yaml)
+[long-text translation reflection prompt](../../../sample_standard_app/app/core/prompt/translation/multi_translation_improve_en.yaml)
+[long-text translation improve prompt](../../../sample_standard_app/app/core/prompt/translation/multi_translation_improve_en.yaml)
+
+2. Define the three agents:
+[translation work agent](../../../sample_standard_app/app/core/agent/translation_agent_case/translation_work_agent.yaml)
+[translation reflection agent](../../../sample_standard_app/app/core/agent/translation_agent_case/translation_reflection_agent.yaml)
+[translation improve agent](../../../sample_standard_app/app/core/agent/translation_agent_case/translation_improve_agent.yaml)
+Inside these agents, the prompt is switched depending on whether a long-text or a short-text translation is being executed; see the [agent](../../../sample_standard_app/app/core/agent/translation_agent_case/translation_agent.py) for the concrete logic.
+3. Define how the three agents work together.
+The collaboration follows the multi-agent workflow diagram introduced above.
+![Collaboration workflow](../_picture/translation_execute_flow.png)
+A more detailed flow can be found in the [code file](../../../sample_standard_app/app/core/agent/translation_agent_case/translation_by_token_agent.py)
+and in the coordinating agent's [configuration file](../../../sample_standard_app/app/core/agent/translation_agent_case/translation_agent.yaml).
+
+### Demo Code
+[code link](../../../sample_standard_app/app/test/test_translation_agent.py)
+
+[long source text](../../../sample_standard_app/app/test/translation_data/long_text.txt)
+[short source text](../../../sample_standard_app/app/test/translation_data/short_text.txt)
+
+[short translation result](../../../sample_standard_app/app/test/translation_data/short_text_result.txt)
+[long translation result](../../../sample_standard_app/app/test/translation_data/long_text_result.txt)
+
+### Demo Results
+As the screenshots below show, with the same model, aU produces essentially the same translation as the original translation_agent.
+![aU long-text translation result](../_picture/long_translation_au.png)
+![translation_agent long-text translation result](../_picture/long_translation_wu.png)
\ No newline at end of file
diff --git a/sample_standard_app/app/core/agent/translation_agent_case/translation_agent.yaml b/sample_standard_app/app/core/agent/translation_agent_case/translation_agent.yaml
index 3c292391..b2a3c068 100644
--- a/sample_standard_app/app/core/agent/translation_agent_case/translation_agent.yaml
+++ b/sample_standard_app/app/core/agent/translation_agent_case/translation_agent.yaml
@@ -6,7 +6,7 @@ profile:
   input_keys: ['source_lang','target_lang','source_text']
   output_keys: ['output']
   llm_model:
-    name: 'default_deepseek_llm'
+    name: 'default_qwen_llm'
     max_tokens: 4000
 plan:
   action:
diff --git a/sample_standard_app/app/core/agent/translation_agent_case/translation_improve_agent.yaml b/sample_standard_app/app/core/agent/translation_agent_case/translation_improve_agent.yaml
index 90b9a336..308216be 100644
--- a/sample_standard_app/app/core/agent/translation_agent_case/translation_improve_agent.yaml
+++ b/sample_standard_app/app/core/agent/translation_agent_case/translation_improve_agent.yaml
@@ -6,7 +6,7 @@ profile:
   input_keys: ['source_lang','target_lang','source_text','init_agent_result','reflection_agent_result']
   output_keys: ['output']
   llm_model:
-    name: 'default_deepseek_llm'
+    name: 'default_qwen_llm'
     max_tokens: 4000
 plan:
   planner:
diff --git a/sample_standard_app/app/core/agent/translation_agent_case/translation_reflection_agent.yaml b/sample_standard_app/app/core/agent/translation_agent_case/translation_reflection_agent.yaml
index 85cbebb3..83aa9a78 100644
--- a/sample_standard_app/app/core/agent/translation_agent_case/translation_reflection_agent.yaml
+++ b/sample_standard_app/app/core/agent/translation_agent_case/translation_reflection_agent.yaml
@@ -6,7 +6,7 @@ profile:
   input_keys: ['source_lang','target_lang','source_text','init_agent_result']
   output_keys: ['output']
   llm_model:
-    name: 'default_deepseek_llm'
+    name: 'default_qwen_llm'
     max_tokens: 4000
 plan:
   planner:
diff --git a/sample_standard_app/app/core/agent/translation_agent_case/translation_work_agent.yaml b/sample_standard_app/app/core/agent/translation_agent_case/translation_work_agent.yaml
index e6704d67..ba2bd813 100644
--- a/sample_standard_app/app/core/agent/translation_agent_case/translation_work_agent.yaml
+++ b/sample_standard_app/app/core/agent/translation_agent_case/translation_work_agent.yaml
@@ -6,7 +6,7 @@ profile:
   input_keys: ['source_lang','target_lang','source_text']
   output_keys: ['output']
   llm_model:
-    name: 'default_deepseek_llm'
+    name: 'default_qwen_llm'
     max_tokens: 4000
 plan:
   planner:
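For reference, the demo page above is driven by `sample_standard_app/app/test/test_translation_agent.py`. The sketch below shows how such a test typically invokes the coordinating agent through agentUniverse; the config path, the `'translation_agent'` instance name, and the `get_data('output')` call follow the usual agentUniverse sample pattern and are assumptions to be checked against the actual test file.

```python
import os
import unittest

from agentuniverse.agent.agent import Agent
from agentuniverse.agent.agent_manager import AgentManager
from agentuniverse.base.agentuniverse import AgentUniverse


class TranslationAgentTest(unittest.TestCase):
    def setUp(self) -> None:
        # The qwen-backed configs above need a DashScope key in the environment.
        assert os.environ.get('DASHSCOPE_API_KEY'), 'set DASHSCOPE_API_KEY first'
        # Config path is an assumption; adjust it to your project layout.
        AgentUniverse().start(config_path='../../config/config.toml')

    def test_translation_agent(self):
        # 'translation_agent' is assumed to match the name declared in translation_agent.yaml;
        # the keyword arguments mirror the input_keys in that config.
        instance: Agent = AgentManager().get_instance_obj('translation_agent')
        output = instance.run(source_lang='English',
                              target_lang='Chinese',
                              source_text='The quick brown fox jumps over the lazy dog.')
        print(output.get_data('output'))


if __name__ == '__main__':
    unittest.main()
```
Because the agent configs now point at `default_qwen_llm`, the test fails fast without a valid `DASHSCOPE_API_KEY`.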