feat(sdk): AI SDK custom useChat transport & chat.task harness #3173
🦋 Changeset detected (latest commit: fdade69). The changes in this PR will be included in the next version bump. This PR includes changesets to release 30 packages.
Note: Reviews paused. This branch is under active development, so CodeRabbit has automatically paused its review to avoid an influx of comments on new commits.
Walkthrough: Adds a browser-safe chat transport and factory (…). Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~150 minutes. Pre-merge checks: ✅ 1 passed | ❌ 2 failed (warnings).
Force-pushed 6530655 to 97f967e
Force-pushed e16f19c to b61c052
Force-pushed b61c052 to b84db78
Force-pushed fbc4106 to 169dc4f
…ircuit

TriggerChatTransport.reconnectToStream previously returned null any time state.isStreaming was falsy, which included undefined. That meant a caller who dropped isStreaming from their ChatSession persistence (a reasonable simplification now that the server can tell the client when a session is settled via X-Session-Settled on the session.out SSE) would get null on every reconnect, and the UI would never resume streaming.

Tighten the check to state.isStreaming === false so only an explicit false triggers the fast-path skip. Undefined now falls through to open the SSE and let the server decide — on a settled session the server already closes the connection in ~1s via wait=0, so there is no 60s hang to worry about.

Backward compatible: callers who still persist and hydrate isStreaming (true/false) keep today's behavior exactly; callers who drop the flag now get the server-authoritative path.
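The tightened check can be sketched as a pure predicate. This is a minimal reconstruction of the logic described above; `ChatSessionState` is an assumed shape here, not the SDK's actual type:

```typescript
// Assumed shape of the persisted session state; real callers may persist more.
type ChatSessionState = { isStreaming?: boolean; lastEventId?: string };

// Only an explicit `false` short-circuits the reconnect. `undefined` falls
// through so the server (via X-Session-Settled) stays authoritative.
function shouldSkipReconnect(state: ChatSessionState): boolean {
  return state.isStreaming === false;
}
```

A truthiness check (`!state.isStreaming`) would have returned `true` for `undefined` as well, which is exactly the bug the commit fixes.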
Three dashboard-scoped stream routes were passing request.signal into realtimeStream.streamResponse. That signal is broken under Remix+Express (see apps/webapp/CLAUDE.md, nodejs/node#55428 — the chain is severed when Remix internally clones the Request), so when a user closes their dashboard tab the signal never fires. The underlying RedisRealtimeStreams.streamResponse loops while(!signal.aborted) over XREAD BLOCK and only exits on its 15s inactivity timeout; the S2 path keeps the upstream fetch open for up to its 60s wait window.

Thread getRequestAbortSignal() through:
- resources/orgs/.../runs/$runParam/realtime/v1/streams/$runId/$streamId
- resources/orgs/.../runs/$runParam/realtime/v1/streams/$runId/input/$streamId
- resources/orgs/.../playground/realtime/v1/streams/$runId/$streamId

Each picks up the Express res.on('close')-backed signal that fires reliably when the downstream client disconnects.
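The idea behind the helper can be sketched as follows. This is a minimal reconstruction, not the webapp's actual implementation: it derives an AbortSignal from the Express response's 'close' event rather than trusting request.signal:

```typescript
// Minimal sketch: build an AbortSignal backed by the Express response's
// 'close' event, which fires reliably on client disconnect — unlike
// request.signal under Remix+Express (nodejs/node#55428).
// The `res` parameter is typed structurally so this works with any
// emitter-like response object.
function getRequestAbortSignal(res: {
  on(event: string, cb: () => void): void;
}): AbortSignal {
  const controller = new AbortController();
  res.on("close", () => controller.abort());
  return controller.signal;
}
```

Handing this signal to a long-polling loop (the `while (!signal.aborted)` over XREAD BLOCK described above) lets the loop exit as soon as the client goes away instead of waiting out the inactivity timeout.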
Pulls PENDING_MESSAGE_INJECTED_TYPE, ChatTaskWirePayload, and the client-data inference helpers out of ai.ts (~7000 lines, statically imports node:* via the skills runtime) into a new ai-shared.ts that stays free of node-only imports. chat.ts and chat-react.ts now reach for these via ai-shared so browser bundlers don't trace ai.ts's entire module graph (Turbopack rejected the node: builtins outright).
The webapp's peek-tail-settled shortcut on /realtime/v1/sessions/:id/out previously fired on every io=out subscription. That race-tripped active send-a-message paths: the SSE peek would see the prior turn's trigger:turn-complete record before the newly-triggered run wrote its first chunk, return wait=0 + X-Session-Settled:true, and close the stream before any of the new turn's records landed.

Make the peek opt-in via an X-Peek-Settled: 1 request header. Only TriggerChatTransport.reconnectToStream sets it (true reload-resume case where settling early is fine); sendMessages and the rest leave it off and stay on the normal long-poll. On the server side, streamResponseFromSessionStream gates the peek on options.peekSettled and skips it otherwise.

- apps/webapp: read X-Peek-Settled from the request, thread to streamResponseFromSessionStream
- packages/trigger-sdk/chat.ts: peekSettled option on subscribeToSessionStream; reconnectToStream sets it; sendMessages does not
- docs/ai-chat/client-protocol.mdx + docs/sessions/reference.mdx: document the opt-in semantics
- .server-changes/session-out-settled-signal.md: record the change
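The server-side gate reduces to a header check. A minimal sketch, assuming the helper name (the real code reads the header in the route and threads a boolean into streamResponseFromSessionStream):

```typescript
// Sketch: the tail peek is opt-in — only subscriptions that explicitly send
// X-Peek-Settled: 1 (i.e. reconnectToStream's reload-resume path) get the
// settled-session fast path; everything else stays on the normal long-poll.
function shouldPeekSettled(headers: Headers): boolean {
  return headers.get("X-Peek-Settled") === "1";
}
```

Because the default is "no peek", a newly-triggered run can never lose the race against a stale trigger:turn-complete record from the previous turn.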
Companion to the SDK opt-in. Webapp routes read X-Peek-Settled from the request and skip the tail peek when it isn't set, so active send-a-message paths can't race a stale trigger:turn-complete. Docs note the opt-in semantics; .server-changes records the change for the deploy log.
chat.agent now runs on top of the Session-as-run-manager primitive.
Public surface (`chat.agent({...})`, `useTriggerChatTransport`,
`chat.store` / `chat.defer` / `chat.history`, `AgentChat`) is unchanged;
the wiring underneath moves from per-run streams to the durable Session
row that owns its own runs.
Transport (TriggerChatTransport):
- Drop `getStartToken`. Replace with
`startSession({chatId, taskId, clientData}) => {publicAccessToken}` —
wraps a server action that calls `chat.createStartSessionAction`.
Idempotent on `(env, externalId)`.
- `clientData` (typed via `withClientData`) is threaded through
`startSession`'s params, so the first run's `basePayload.metadata`
matches per-turn `metadata`. Live-updated via `setClientData` when
the hook's `clientData` option changes.
- Drop transport-level `triggerConfig` / `triggerOptions` /
`idleTimeoutInSeconds`. All trigger config lives server-side in the
customer's `chat.createStartSessionAction(taskId, options)`.
- `transport.preload(chatId)` and lazy first `sendMessage` both route
through `startSession`, deduped via the in-flight pendingStarts map.
- `ChatSession` persistable shape drops `runId`; just `{lastEventId}`.
chat.agent runtime:
- New `chat.createStartSessionAction(taskId, options?)` — server-side
wrapper that calls `sessions.start` with `basePayload.{messages:[],
trigger: "preload"}` defaults plus the customer's overrides. Returns
`{sessionId, runId, publicAccessToken}`.
- `chat.requestUpgrade` calls `apiClient.endAndContinueSession` before
emitting the `trigger:upgrade-required` chunk. Server orchestrates
the swap; browser keeps streaming across the run handoff.
Webapp dashboard:
- Playground: `startSession` + `accessToken` both wired through the
Remix action (idempotent server-side start path). Preload button
now works. New session proxy routes for HEAD/GET on `/out` and POST
on `/in/append`; old run-stream proxies deleted.
- Run inspector Agent tab: SSE proxy now uses the canonical addressing
key (externalId if set, else friendlyId), matching what the agent
writes via `session.out`. Fixes the case where the Agent tab read
from a different S2 stream than the agent wrote to.
References (ai-chat):
- `chat-view` useEffect dance gone (just hydrates `initialSession`).
- `chat-app` `transport.preload(id)` routes through `startSession`.
- New `upgrade-test` agent + sidebar option for exercising
`chat.requestUpgrade` end-to-end.
- `ChatSession` schema simplified: drop `runId` / `sessionId`, keep
`publicAccessToken` + `lastEventId`.
- `chat-client-test` fixed for the new transport shape.
- Hello-world smoke stubs gutted to TODO placeholders — sessions
are now task-bound, so standalone-session smokes need rewriting.
Persistent listeners registered via `session.in.on(...)` (e.g. chat.agent's `stopInput.on` for the stop signal) must not 'consume' chunks. They filter by `kind` and ignore non-matching chunks, so previously `#dispatch` was silently dropping any chunk that arrived before a once-waiter had registered. This race surfaced on test cloud (network round-trip > sync subscribe-time) but not locally (zero-latency).

Symptom: chat.agent's first user message landed in S2 before `messagesInput.waitWithIdleTimeout` registered its waiter, the tail received it, `#dispatch` saw the `stopInput` handler and returned without buffering, the message was gone, the waitWithIdleTimeout fell through to a durable waitpoint, and the race-check skipped seq 0 (since the tail's onPart had advanced `lastSeqNum` to 0).

Fix: when no once-waiter exists, invoke handlers AND buffer the chunk. Handlers observe; they don't consume.
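The dispatch rule can be illustrated with a small reconstruction. `InputDispatcher` and its method names are assumptions for illustration, not the SDK's real class; the point is the last three lines of `dispatch`:

```typescript
type Chunk = { kind: string };

// Illustrative reconstruction of the fixed dispatch rule.
class InputDispatcher {
  private buffer: Chunk[] = [];
  private onceWaiters: Array<(c: Chunk) => void> = [];
  private handlers: Array<(c: Chunk) => void> = [];

  // Persistent listener (the session.in.on(...) case): observes, never consumes.
  on(handler: (c: Chunk) => void): void {
    this.handlers.push(handler);
  }

  // One-shot waiter (the waitWithIdleTimeout registration): consumes the next chunk.
  once(waiter: (c: Chunk) => void): void {
    this.onceWaiters.push(waiter);
  }

  takeBuffered(): Chunk[] {
    const out = this.buffer;
    this.buffer = [];
    return out;
  }

  dispatch(chunk: Chunk): void {
    const waiter = this.onceWaiters.shift();
    if (waiter) {
      waiter(chunk); // once-waiters consume
      return;
    }
    for (const h of this.handlers) h(chunk); // persistent handlers observe...
    this.buffer.push(chunk); // ...and the chunk is STILL buffered for a later waiter
  }
}
```

The pre-fix bug was the absence of that final `buffer.push`: the mere presence of a persistent handler caused `dispatch` to return early, dropping the chunk.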
…omic persist in reference onTurnComplete
- chat.createStartSessionAction now adds 'chat:{chatId}' as the first tag on the triggered run, matching the browser-mediated transport.doStart path. Customer-provided tags merge after, capped at 5. Without this, runs created via server actions were untagged, breaking the dashboard chat-id filter.
- references/ai-chat onTurnComplete persists Chat.messages and ChatSession.lastEventId in a single prisma.$transaction. Two parallel reads on the next page load (Promise.all([getChatMessages, getSessionForChat])) can otherwise observe messages post-write but lastEventId pre-write. The transport then resumes from the stale cursor and replays this turn's chunks on top of the already-persisted assistant message, duplicating the render. Applies to both the main chat.agent and the hydrated variant.
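Why the single transaction matters can be shown with an in-memory stand-in (the real code uses prisma.$transaction over Chat.messages and ChatSession.lastEventId; this fake `withTransaction` and `Db` shape are purely illustrative):

```typescript
// Illustrative stand-in for the $transaction pattern: all writes inside the
// callback become visible in one step, so a parallel
// Promise.all([getChatMessages, getSessionForChat]) can never observe
// messages post-write but lastEventId pre-write (the stale-cursor replay bug).
type Db = {
  chat: { messages: string[] };
  session: { lastEventId: string | null };
};

function withTransaction(db: Db, fn: (tx: Db) => void): void {
  const tx = structuredClone(db); // work on a snapshot
  fn(tx); // both writes land on the snapshot
  db.chat = tx.chat; // commit everything at once
  db.session = tx.session;
}
```

With two separate awaited updates instead, a reader scheduled between them sees the new messages but the old cursor, and the transport replays the turn's chunks on top of the already-persisted assistant message.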
The reference's onTurnStart was using chat.defer for the messages write, which is fire-and-forget. If a user refreshed the page mid-stream, getChatMessages returned [] (the deferred write hadn't landed yet), useChat hydrated with empty initialMessages, and the resumed SSE stream pushed the assistant into an empty array — the user's message vanished from the rendered conversation forever. Switch to await prisma.chat.update(...) so the write is durable before chat.agent begins streaming. Verified end-to-end against test cloud: mid-stream refresh now yields [user, assistant] with no duplication. Aligns with the Warning added to docs/ai-chat/patterns/database-persistence.mdx in the docs branch.
…lates

The reference's Chat / ChatSession Postgres tables are shared between local and test cloud targets. A row created with one webapp's PAT and lastEventId is poison if you switch the .env to the other target and reuse the same chatId — the transport gets a 401 or resumes from a sequence that doesn't exist on the other backend.

Adds:
- prisma/reset-chats.sql: TRUNCATE Chat, ChatSession (User survives — it's upserted by onPreload/onChatStart anyway).
- package.json db:reset:chats script wrapping prisma db execute --file.

Run `pnpm run db:reset:chats` between target switches and at the top of every smoke test. Codified in the ai-chat-e2e skill as a required prereq.
… panel + sendAction bridge

UX cleanup discovered during the Sessions e2e sweep. Three changes, one commit because they all live in the chat input row / debug panel area:

- Explicit "Preload" button next to "Send" that only renders when the chat has no messages and no session yet. Clicking calls transport.preload(chatId), which mints the session and triggers the first run with trigger:"preload". Self-hides once session is truthy. Replaces the inert "Preload new chats" sidebar checkbox (the visible `+ New Chat` button only navigated and never called transport.preload — preloadEnabled was wired through the context but read by nobody, since ChatApp.tsx is no longer the mounted chat sidebar). Drops the dead preloadEnabled state + checkbox from chat-settings-context, chat-sidebar, chat-sidebar-wrapper, and the chat-app.tsx legacy code path.
- Debug panel "Runs → View in dashboard" row, gated on dashboardUrl + a new NEXT_PUBLIC_TRIGGER_PROJECT_DASHBOARD_PATH env var. Resolves to the runs-list page filtered by chat:<chatId> tag — so opening the link drops you straight into the run list for the active chat. Threads the new prop through chat-view → chat → DebugPanel.
- window.__chat.sendAction(action) bridge wrapper that delegates to transport.sendAction(chatId, action). Lets smoke tests drive aiChatHydrated's actionSchema (undo/rollback/remove/replace) without reaching into React internals.
CreateSessionRequestBody now requires `taskIdentifier` and `triggerConfig` because Sessions are task-bound (the server reuses the config for every run scheduled by the session — initial + continuations). The MCP `agentChat` tool was still passing only `{ type, externalId }` from the pre-Sessions-as-run-manager API. Add `taskIdentifier: input.agentId` and a minimal `triggerConfig` with `basePayload: { chatId, ...clientData }` and the `chat:{chatId}` auto-tag.
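The fixed request body can be sketched as a builder. This is a hedged reconstruction from the description above: the `input` shape and the `type` value are assumptions, while `taskIdentifier`, `triggerConfig.basePayload`, and the `chat:{chatId}` auto-tag come from the commit text:

```typescript
// Sketch of the MCP agentChat tool's session-create body after the fix.
function buildCreateSessionBody(input: {
  agentId: string;
  chatId: string;
  clientData?: Record<string, unknown>;
}) {
  return {
    type: "chat", // pre-existing field from the old body (value illustrative)
    externalId: input.chatId,
    taskIdentifier: input.agentId, // now required: Sessions are task-bound
    triggerConfig: {
      // reused by the server for every run the session schedules
      basePayload: { chatId: input.chatId, ...(input.clientData ?? {}) },
      tags: [`chat:${input.chatId}`], // matches the dashboard chat-id filter
    },
  };
}
```

Previously only `{ type, externalId }` was sent, which the task-bound Sessions API rejects.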
Unblocks typecheck on PR #3173 (and Windows CLI v3 e2e, which builds cli-v3 in pre-test).
Migration 029 added `task_kind` to `task_runs_v2`, and TASK_RUN_COLUMNS was updated, but the four test-data arrays in src/taskRuns.test.ts were not. ClickHouse rejects the inserts with "Cannot parse input: expected ',' before: ']'" because the array length is one short of the column count. All 7 internal/clickhouse unit-test shards on PR #3173 fail on this. Pre-existing bug (predates my Sessions work) but blocking CI; verified the fix locally — `vitest run src/taskRuns.test.ts` now passes 4/4.
…messages: []` in basePayload
Server-to-agent flows (`AgentChat` SDK class + cli-v3 MCP `start_agent_chat`) were building `triggerConfig.basePayload` without the `trigger: "preload"` and `messages: []` fields the agent runtime branches on. Result: the auto-triggered first run had `payload.trigger === undefined`, neither `onPreload` nor `onChatStart` fired, and `onTurnStart`'s DB-write blew up with PrismaClient "No record found" because no Chat row had been created.
Browser-mediated flows already had this right (`chat.createStartSessionAction` in `ai.ts:6951`); the server-side path now mirrors that shape.
- packages/trigger-sdk/src/v3/chat-client.ts — `AgentChat.ensureStarted` adds the two fields to `basePayload`. `chat-client-test`'s `pong` orchestrator now returns the assistant text instead of an empty string.
- packages/cli-v3/src/mcp/tools/agentChat.ts — same fix on `start_agent_chat`'s `createSession` call. Also drops the redundant separate `apiClient.triggerTask(...)` call: `POST /api/v1/sessions` now auto-triggers the first run and returns its runId, so a second trigger from the MCP would have produced a competing run on the same session. Use `session.runId` from the create response. The `preload` input flag becomes a no-op signal (response message wording only) since session-create always triggers a run now.
Verified end-to-end against local:
- `chat-client-test` orchestrator returns `{ text: "pong" }`
- MCP `start_agent_chat` → `send_agent_message` x2 → `close_agent_chat` succeeds, both turns reuse the same runId
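The basePayload shape the agent runtime branches on can be sketched like this (a minimal reconstruction; the helper name is illustrative, the `trigger: "preload"` and `messages: []` fields are the ones the commit adds):

```typescript
// Sketch: server-to-agent flows must mirror chat.createStartSessionAction's
// basePayload defaults so the agent runtime's onPreload/onChatStart branch fires.
function buildPreloadBasePayload(
  chatId: string,
  clientData: Record<string, unknown> = {}
) {
  return {
    chatId,
    ...clientData,
    // Without these two fields payload.trigger is undefined, neither
    // onPreload nor onChatStart runs, and onTurnStart's DB write fails
    // because no Chat row was ever created.
    trigger: "preload" as const,
    messages: [] as unknown[],
  };
}
```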
The realtime stream caps each record at ~1 MiB. Today the chat.agent path through StreamsWriterV2 surfaces a generic S2Error from deep in the batching layer when a chunk exceeds the cap, with no chunk-type context and no guidance for callers. Add a pre-write byte check in StreamsWriterV2.initializeServerStream that fires before the chunk hits the underlying batcher, and a typed ChatChunkTooLargeError carrying the chunk's discriminant (type/kind), serialized size, and cap. Also exports an isChatChunkTooLargeError guard from the SDK so callers can branch cleanly. Threshold is 1 MiB minus 1 KiB to leave headroom for the JSON record envelope. The error message links to the new docs pattern (Pattern: ID-reference for large tool outputs / out-of-band streams.writer for run-scoped data).
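The pre-write guard can be sketched as follows. The threshold (1 MiB minus 1 KiB of envelope headroom) and the error/guard names come from the commit; the error's constructor fields and the check function's name are assumed for illustration:

```typescript
// Sketch of the pre-write byte check: fail with a typed, contextful error
// before the chunk ever reaches the batching layer.
const MAX_CHUNK_BYTES = 1024 * 1024 - 1024; // 1 MiB cap minus 1 KiB envelope headroom

class ChatChunkTooLargeError extends Error {
  constructor(
    public readonly chunkType: string,
    public readonly sizeBytes: number,
    public readonly maxBytes: number = MAX_CHUNK_BYTES
  ) {
    super(
      `Chat chunk "${chunkType}" serializes to ${sizeBytes} bytes; the realtime ` +
        `stream caps records at ${maxBytes} bytes. Consider ID-referencing large ` +
        `tool outputs or an out-of-band streams.writer.`
    );
    this.name = "ChatChunkTooLargeError";
  }
}

// Guard exported so callers can branch cleanly without instanceof gymnastics.
function isChatChunkTooLargeError(err: unknown): err is ChatChunkTooLargeError {
  return err instanceof ChatChunkTooLargeError;
}

function assertChunkFits(chunk: {
  type?: string;
  kind?: string;
  [k: string]: unknown;
}): void {
  const size = new TextEncoder().encode(JSON.stringify(chunk)).length;
  if (size > MAX_CHUNK_BYTES) {
    throw new ChatChunkTooLargeError(chunk.type ?? chunk.kind ?? "unknown", size);
  }
}
```

Measuring the serialized UTF-8 byte length (rather than string length) matters because multi-byte characters would otherwise slip past the cap.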
Force-pushed b87084d to bc751bd
- typesVersions: add `ai/skills-runtime` mapping (was missing → check-exports
failed with NoResolution on `@trigger.dev/sdk/ai/skills-runtime`).
- chat.store JSON Patch: reject `__proto__`, `constructor`, `prototype`
segments at parseJsonPointer. Closes the two CodeQL prototype-pollution
alerts on chat-client.ts:108 / :120 — a malicious patch like
`{ op: "replace", path: "/__proto__/x", value: 1 }` would otherwise
walk into Object.prototype via `parent[lastToken] = value`. Throws a
clear error on the whole patch instead.
- typesVersions: add `v3/chat-client` mapping. The export was declared in `tshy.exports` and the conditional export block but missing from `typesVersions` — `attw --pack` flagged "@trigger.dev/core/v3/chat-client" as `node10: 💀 Resolution failed`.
- chat.store JSON Patch: add an `assertSafeKey` guard at the assignment sites in `removeAt` / `insertAt`. parseJsonPointer already rejects `__proto__` / `constructor` / `prototype`, but CodeQL's prototype-pollution analysis doesn't trace through the parser boundary — the local check at the assignment keeps the static analysis happy and is also a real defense-in-depth backstop against any future caller that bypasses parseJsonPointer.
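The assignment-site guard can be sketched like this (a minimal reconstruction: `assertSafeKey` is named in the commit, while `setAt` stands in for the real `removeAt` / `insertAt` assignment sites):

```typescript
// Defense-in-depth against JSON Patch prototype pollution: reject the three
// dangerous path segments at the assignment site itself, so the check is
// local enough for CodeQL to verify statically.
const FORBIDDEN_KEYS = new Set(["__proto__", "constructor", "prototype"]);

function assertSafeKey(key: string): void {
  if (FORBIDDEN_KEYS.has(key)) {
    throw new Error(`Unsafe JSON Patch path segment: "${key}"`);
  }
}

// Stand-in for the guarded assignment in removeAt / insertAt.
function setAt(
  parent: Record<string, unknown>,
  key: string,
  value: unknown
): void {
  assertSafeKey(key); // even if parseJsonPointer already rejected it upstream
  parent[key] = value;
}
```

Without the guard, a patch like `{ op: "replace", path: "/__proto__/x", value: 1 }` walks into Object.prototype via `parent[lastToken] = value`.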
…SessionTriggerConfig + sync playground transport clientData

Two fixes from Devin's review on PR #3173.

## SessionTriggerConfig is missing 3 fields the playground UI shows

The playground sidebar (`PlaygroundSidebar`) renders working controls for `maxDuration`, `version`, and `region`. The action received the form fields, but `SessionTriggerConfig` didn't accept them, so they were `void`-suppressed and silently dropped. Runs ignored the user's max-duration cap, the version pin didn't apply, and region selection had no effect.

- `packages/core/src/v3/schemas/api.ts` — add three optional fields to `SessionTriggerConfig`: `maxDuration` (positive int, seconds), `lockToVersion` (string), `region` (string). All three forward to the matching field on `TaskRunOptions`.
- `apps/webapp/app/services/realtime/sessionRunManager.server.ts` — extend `triggerSessionRun`'s `body.options` to thread the three fields through to `TriggerTaskService` when present.
- `apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.playground.action.tsx` — fold the three form fields into `triggerConfig`; remove the `void` suppressions.

## Playground transport's clientData becomes stale after edits

The route constructs `TriggerChatTransport` directly via `useRef` (to avoid the React-version mismatch the hook had). The hook normally calls `setClientData` whenever `clientData` changes, but this manual construction bypassed that — so `clientData` was captured at construction and never updated. Per-turn `metadata` merges (`this.defaultMetadata` in `packages/trigger-sdk/src/v3/chat.ts`) used the stale initial value for the whole conversation. `startSession` was already reading from the live ref, so session creation was unaffected; this fix only affects the per-turn path.

- `apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.playground.$agentParam/route.tsx` — add a `useEffect` that calls `transport.setClientData(...)` whenever `clientDataJson` changes.

Changeset (patch, @trigger.dev/core) for the schema additions; server-changes file for the webapp-only behaviour fix.
Roll up all the chat.agent feature work that's been accumulating on this branch into 8 user-facing CHANGELOG entries. No behavior change — just tidying up the .changeset/ directory before merge.

Final shape:
- chat-agent.md (sdk minor + core patch) — the headline; folds 13: ai-sdk-chat-transport, ai-chat-sandbox-and-ctx, chat-agent-*, chat-customagent-session-binding-and-stop-fixes, chat-reconnect-isstreaming-optional, chat-run-pat-renewal, chat-store-primitive, chat-transport-session-renew-plus-preload, drop-legacy-chat-stream-constants, dry-sloths-divide, trigger-chat-transport-watch-mode.
- sessions-primitive.md (core + sdk patch) — folds 3: session-primitive, session-sdk-toolkit, session-trigger-config-extra-fields.
- agent-skills.md (sdk + core + build + cli patch) — folds 2: chat-agent-skills-phase-1, skills-runtime-subpath.
- ai-tool-helpers.md (sdk patch) — folds 2: ai-tool-execute-helper, ai-tool-toolset-typing.
- mock-chat-agent-test-harness.md (sdk + core patch) — folds 3: mock-chat-agent-test-harness, mock-task-context-test-infra, mock-chat-agent-setup-locals.
- mcp-agent-chat-sessions.md (cli patch) — kept standalone.
- add-is-replay-context.md (core patch) — kept standalone (general task feature).
- truncate-error-stacks.md (core patch) — kept standalone (general infra).

Bumps preserved (chat-agent stays minor on sdk; everything else patch). Auto-named "dry-sloths-divide" got merged into chat-agent and dropped.
The previous pass rolled 26 changesets into 8 but the consolidated descriptions read like docs (full API surface dumps, multiple sections, docs-style headers). Rewrote each so they fit a release-notes bullet list — short, what-shipped framing, with one or two snippets where they help, no exhaustive type / option enumeration.
- Inline prototype-pollution guards at JSON Patch assignment sites in chat-client.ts so CodeQL can statically verify them (the Set.has() check upstream wasn't being traced).
- Wrap JSON.parse(payloadStr) in the playground action's start handler to return 400 on malformed JSON instead of 500.
No description provided.