Merged
49 changes: 49 additions & 0 deletions docs/docs/agent/prompt.md
@@ -0,0 +1,49 @@
# Prompt Management

In production, prompts need version management and dynamic delivery. VeADK provides a prompt management module that lets you template prompts, hot-update them, and switch prompt versions at runtime.

## CozeLoop Prompt Management

Before using CozeLoop for prompt management, install the Python `cozeloop` SDK:

```bash
pip install cozeloop
```

You can connect your agent's system prompt to CozeLoop's cloud-based prompt management. For example:

```python title="agent.py" linenums="1" hl_lines="4 6-11 13"
import asyncio

from veadk import Agent, Runner
from veadk.prompts.prompt_manager import CozeloopPromptManager

prompt_manager = CozeloopPromptManager(
    cozeloop_workspace_id="",  # CozeLoop workspace ID
    cozeloop_token="",  # CozeLoop token
    prompt_key="",  # prompt key created in CozeLoop
    label="production",  # prompt label created in CozeLoop
)

agent = Agent(name="test_prompt_mgr", prompt_manager=prompt_manager)

runner = Runner(agent=agent)

response = asyncio.run(runner.run(messages="What is your task?"))
print(response)
```

!!! note "Note"
    CozeLoop caches prompts locally, and the cache **refreshes every 1 minute**. If fetching a prompt fails, the default VeADK agent system prompt is used instead.

You can see in the logs that, before each user request is handled, the `get_prompt` method of `CozeloopPromptManager` is executed to fetch the latest prompt template. The effect looks like this:

![img](../assets/images/agents/cozeloop_prompt_mgr.png)

For more details, see [CozeLoop prompt management](https://loop.coze.cn/open/docs/cozeloop/what-is-prompt).

## Implementing Your Own Prompt Manager

For more advanced prompt templates or variable composition, you can subclass `BasePromptManager` to implement custom prompt management logic.

When implementing a custom prompt manager, you must implement the `get_prompt` method, which is called before each user request to fetch the latest prompt template.
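For illustration, here is a minimal sketch of a custom manager that re-reads the system prompt from a local file on every request, giving file-based hot updates. The `FilePromptManager` name and the inlined base class are assumptions made so the sketch runs standalone; in a real project you would import and subclass `veadk.prompts.prompt_manager.BasePromptManager`, whose `get_prompt` receives a `ReadonlyContext`.

```python
from abc import ABC, abstractmethod


# Stand-in for veadk.prompts.prompt_manager.BasePromptManager, inlined so
# this sketch is self-contained; subclass the real class in your project.
class BasePromptManager(ABC):
    @abstractmethod
    def get_prompt(self, context, **kwargs) -> str: ...


class FilePromptManager(BasePromptManager):
    """Hypothetical manager: re-reads the prompt file before every request,
    so editing the file hot-updates the agent without a restart."""

    def __init__(self, path: str) -> None:
        self.path = path

    def get_prompt(self, context, **kwargs) -> str:
        # Called before each user request; returns the latest file contents.
        with open(self.path, encoding="utf-8") as f:
            return f.read().strip()
```

An instance would then be wired into the agent the same way as the CozeLoop manager above, e.g. `Agent(name="my_agent", prompt_manager=FilePromptManager("prompt.txt"))`.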
17 changes: 10 additions & 7 deletions docs/docs/optimization.md
@@ -15,6 +15,7 @@ veadk prompt
```

Options include:

```shell
--path: path to the Agent file to optimize; defaults to agent.py in the current directory. Note that the defined agent must be exported as a global variable
--feedback: feedback on the optimized prompt, used by the optimization model
@@ -45,20 +46,22 @@ VeADK integrates with the Ark platform's Agent RL. Using the scaffolding provided by VeADK, users

Run the following command in your terminal to initialize a reinforcement learning project:

```shell
```bash
veadk rl init --platform ark --workspace veadk_rl_ark_project
```

This command creates a folder named `veadk_rl_ark_project` in the current directory, containing a basic reinforcement learning project structure.
Then run the following command in the terminal to submit the job to the Ark platform:

```shell
```bash
cd veadk_rl_ark_project
veadk rl submit --platform ark
```

#### How It Works

The generated project structure is as follows; the core files include:

- Dataset: `data/*.jsonl`
- rollout and reward under the `/plugins` folder:
    - rollout: defines the agent's workflow; `raw_async_veadk_rollout.py` provides an example of using a VeADK agent in Ark RL,
@@ -83,6 +86,7 @@ veadk_rl_ark_project
```

#### Best Practices

1. In the scaffolding, run reinforcement-learning optimization on the VeADK-based weather-query agent
2. Submit the job (`veadk rl submit --platform ark`)

@@ -104,20 +108,21 @@ VeADK integrates with Agent Lightning. Using the scaffolding provided by VeADK, users can

Run the following command in your terminal to initialize an Agent Lightning project:

```shell
```bash
veadk rl init --platform lightning --workspace veadk_rl_lightning_project
```

This command creates a folder named `veadk_rl_lightning_project` in the current directory, containing a basic reinforcement learning project structure based on VeADK and Agent Lightning.
Then run the following command in terminal 1 to start the client:

```shell
```bash
cd veadk_rl_lightning_project
veadk rl run --platform lightning --client
```

Then run the following command in terminal 2 to start the server:

```shell
```bash
cd veadk_rl_lightning_project
veadk rl run --platform lightning --server
```
@@ -160,5 +165,3 @@ veadk_rl_lightning_project
![Start the client](./assets/images/optimization/lightning_client.png)

![Start the training server](./assets/images/optimization/lightning_training_server.png)


1 change: 1 addition & 0 deletions docs/mkdocs.yml
@@ -45,6 +45,7 @@ nav:
- Building Agents: agent/agent.md
- Multi-Agent Interaction: agent/agents.md
- A2A Protocol: agent/agent-to-agent.md
- Prompt Management: agent/prompt.md
- Execution Engine:
- Runner: runner.md
- Personalization Engine (Memory):
23 changes: 19 additions & 4 deletions veadk/agent.py
@@ -44,7 +44,11 @@
from veadk.memory.long_term_memory import LongTermMemory
from veadk.memory.short_term_memory import ShortTermMemory
from veadk.processors import BaseRunProcessor, NoOpRunProcessor
from veadk.prompts.agent_default_prompt import DEFAULT_DESCRIPTION, DEFAULT_INSTRUCTION
from veadk.prompts.agent_default_prompt import (
    DEFAULT_DESCRIPTION,
    DEFAULT_INSTRUCTION,
)
from veadk.prompts.prompt_manager import BasePromptManager
from veadk.tracing.base_tracer import BaseTracer
from veadk.utils.logger import get_logger
from veadk.utils.patches import patch_asyncio, patch_tracer
@@ -96,6 +100,8 @@ class Agent(LlmAgent):

    sub_agents: list[BaseAgent] = Field(default_factory=list, exclude=True)

    prompt_manager: Optional[BasePromptManager] = None

    knowledgebase: Optional[KnowledgeBase] = None

    short_term_memory: Optional[ShortTermMemory] = None
Expand Down Expand Up @@ -202,6 +208,9 @@ def model_post_init(self, __context: Any) -> None:
        else:
            self.before_agent_callback = check_agent_authorization

        if self.prompt_manager:
            self.instruction = self.prompt_manager.get_prompt

        logger.info(f"VeADK version: {VERSION}")

        logger.info(f"{self.__class__.__name__} `{self.name}` init done.")
@@ -274,14 +283,20 @@ def _prepare_tracers(self):
            return

        if not self.tracers:
            from veadk.tracing.telemetry.opentelemetry_tracer import (
                OpentelemetryTracer,
            )

            self.tracers.append(OpentelemetryTracer())

        exporters = self.tracers[0].exporters  # type: ignore

        from veadk.tracing.telemetry.exporters.apmplus_exporter import (
            APMPlusExporter,
        )
        from veadk.tracing.telemetry.exporters.cozeloop_exporter import (
            CozeloopExporter,
        )
        from veadk.tracing.telemetry.exporters.tls_exporter import TLSExporter

        if enable_apmplus_tracer and not any(
79 changes: 79 additions & 0 deletions veadk/prompts/prompt_manager.py
@@ -0,0 +1,79 @@
# Copyright (c) 2025 Beijing Volcano Engine Technology Co., Ltd. and/or its affiliates.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from abc import ABC, abstractmethod

from google.adk.agents.readonly_context import ReadonlyContext
from typing_extensions import override

from veadk.prompts.agent_default_prompt import DEFAULT_INSTRUCTION
from veadk.utils.logger import get_logger

logger = get_logger(__name__)


class BasePromptManager(ABC):
    def __init__(self) -> None: ...

    @abstractmethod
    def get_prompt(self, context: ReadonlyContext, **kwargs) -> str: ...


class CozeloopPromptManager(BasePromptManager):
    def __init__(
        self,
        cozeloop_workspace_id: str,
        cozeloop_token: str,
        prompt_key: str,
        version: str = "",
        label: str = "",
    ) -> None:
        import cozeloop

        self.cozeloop_workspace_id = cozeloop_workspace_id
        self.cozeloop_token = cozeloop_token

        self.prompt_key = prompt_key
        self.version = version
        self.label = label

        self.client = cozeloop.new_client(
            workspace_id=self.cozeloop_workspace_id,
            api_token=self.cozeloop_token,
        )

        super().__init__()

    @override
    def get_prompt(self, context: ReadonlyContext, **kwargs) -> str:
        logger.info(f"Get prompt for agent {context.agent_name} from CozeLoop.")

        prompt = self.client.get_prompt(
            prompt_key=self.prompt_key,
            version=self.version,
            label=self.label,
        )
        if (
            prompt
            and prompt.prompt_template
            and prompt.prompt_template.messages
            and prompt.prompt_template.messages[0].content
        ):
            return prompt.prompt_template.messages[0].content

        logger.warning(
            f"Prompt {self.prompt_key} version {self.version} label {self.label} not found, "
            f"get prompt result is {prompt}; returning default instruction"
        )
        return DEFAULT_INSTRUCTION
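The fallback chain in `get_prompt` above can be exercised without a CozeLoop account by stubbing the SDK's response objects. This is a self-contained sketch: `extract_prompt`, the `SimpleNamespace` stubs, and the inlined `DEFAULT_INSTRUCTION` value are assumptions standing in for the real manager and SDK types.

```python
from types import SimpleNamespace

DEFAULT_INSTRUCTION = "You are a helpful agent."  # stand-in for veadk's default


def extract_prompt(prompt) -> str:
    # Mirrors the defensive chain in CozeloopPromptManager.get_prompt:
    # if any link (prompt, template, messages, content) is missing or empty,
    # fall back to the default instruction instead of raising.
    if (
        prompt
        and prompt.prompt_template
        and prompt.prompt_template.messages
        and prompt.prompt_template.messages[0].content
    ):
        return prompt.prompt_template.messages[0].content
    return DEFAULT_INSTRUCTION


# A well-formed response yields its first message's content.
hit = SimpleNamespace(
    prompt_template=SimpleNamespace(
        messages=[SimpleNamespace(content="Answer in French.")]
    )
)
print(extract_prompt(hit))   # → Answer in French.

# A missing prompt (e.g. wrong key or label) falls back to the default.
print(extract_prompt(None))  # → You are a helpful agent.
```

Each attribute in the `and` chain short-circuits, which is why a `None` prompt or an empty `messages` list never triggers an `AttributeError` or `IndexError`.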