Conversation
- Added new internal API endpoint for documentation tools, allowing actions such as listing available docs, searching, and fetching specific documentation by ID.
- Updated environment configuration to support optional internal secret for enhanced security.
- Refactored existing search functionality to use the new docs tools API instead of the previous MCP server.
- Improved error handling and response parsing for documentation-related requests.
- Expanded documentation to clarify the relationship between the new tools and existing API functionality.

This update streamlines documentation access and improves the overall developer experience.
- Introduced error capturing for failed HTTP requests in the docs tools API, improving debugging capabilities.
- Updated the API response for unsupported methods to include an `Allow` header, clarifying the expected request type.

These changes enhance the robustness of the documentation tools integration and improve the developer experience.
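The `Allow` header change above can be sketched as follows. This is a minimal illustration, not the repository's actual route code; the helper name `methodNotAllowed` is invented here:

```typescript
// Sketch only (helper name invented): respond to an unsupported HTTP method
// with 405 and an Allow header naming the accepted method(s). RFC 9110
// requires 405 responses to carry an Allow header.
function methodNotAllowed(allowed: string[]): Response {
  return new Response(JSON.stringify({ error: "Method not allowed" }), {
    status: 405,
    headers: {
      "Content-Type": "application/json",
      "Allow": allowed.join(", "),
    },
  });
}
```

A route would return `methodNotAllowed(["POST"])` from its GET handler, so clients learn the expected request type from the response itself.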
- Updated the key name in the capabilities section of the API documentation to follow a consistent naming convention, improving clarity and maintainability.
The .gitmodules was updated in d22593d to point at apps/backend/src/private/implementation, but the gitlink entry (mode 160000) was never added to the tree. This caused `git clone --recurse-submodules` to silently skip the private submodule. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
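For context, a gitlink is an index/tree entry with mode 160000 that records the submodule's commit. The following throwaway-repo sketch (illustrative path `vendor/sub`, not the repository's real path) shows how such an entry is registered and what it looks like; `--cacheinfo` does not require the object to exist locally, since submodule commits live in the subrepo:

```shell
# Illustrative only: register and inspect a gitlink (mode 160000) in a temp repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
# Record a gitlink entry pointing at an (arbitrary, for demo purposes) commit id.
git update-index --add --cacheinfo 160000,1111111111111111111111111111111111111111,vendor/sub
# Prints the entry; the first column shows mode 160000.
git ls-files -s vendor/sub
```

Without that 160000 entry in the tree, `.gitmodules` alone carries no checkout target, which is why `--recurse-submodules` silently skipped it.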
…on for docs tools

- Added `STACK_DOCS_INTERNAL_BASE_URL` to backend `.env` and `.env.development` files for AI tool bundle configuration.
- Removed references to `STACK_INTERNAL_DOCS_TOOLS_SECRET` from backend and docs environment files, and the validation logic from the docs tools API route.
- Introduced a new `.env` file for the docs app with essential configuration variables.
… into llm-mcp-flow
📝 Walkthrough

This PR refactors the AI query pipeline by extracting common authorization, model execution, and logging concerns into shared handler and utility modules. It extends SpacetimeDB with AI query logging tables and operator management, updates the internal tool with usage analytics views and enrollment flows, and adjusts environment configuration for the database connection.
Sequence Diagrams

```mermaid
sequenceDiagram
    participant Client
    participant Route as Query Route<br/>[mode]
    participant Handler as Mode Handler<br/>(Stream/Generate)
    participant Upstream as Upstream Model<br/>(OpenRouter)
    participant Logger as AI Query Logger
    participant DB as SpacetimeDB
    Client->>Route: POST /api/latest/ai/query/[mode]
    Note over Route: Validate auth & project access<br/>Prepare ModeContext & systemMessage<br/>Cache control for Anthropic
    Route->>Handler: handleStreamMode(ctx) or<br/>handleGenerateMode(ctx, ...)
    alt Streaming Mode
        Handler->>Upstream: streamText() with step limit
        Upstream-->>Handler: Stream tokens
        Handler->>Client: Return streaming response
        Handler->>Logger: logAiQuery() on completion
    else Generate Mode
        Handler->>Upstream: generateText() with 120s timeout
        Upstream-->>Handler: Complete response
        Handler->>Logger: logAiQuery(success/failure)
        Handler->>Logger: logMcpCallAndReview() if MCP metadata
        Handler->>Client: Return { content, finalText }
    end
    Logger->>DB: callReducer('log_ai_query', [...])
    DB-->>Logger: ✓ Logged
```
```mermaid
sequenceDiagram
    participant Client as Proxy Client
    participant Route as AI Proxy Route
    participant Sanitize as sanitizeBody()
    participant Upstream as OpenRouter/Production
    participant Observe as observeAndLog()
    participant Logger as Log Pipeline
    participant DB as SpacetimeDB
    Client->>Route: POST /api/latest/integrations/ai-proxy
    Route->>Sanitize: sanitizeBody(request body)
    Sanitize-->>Route: { parsed, bytes }
    Route->>Upstream: Forward request with bytes
    Upstream-->>Route: Response stream or body
    alt Streaming Response
        Route->>Observe: observeAndLog(response, sanitized)
        Observe->>Observe: Tee upstream stream
        Observe->>Observe: scanSseForUsage() async
        Observe->>Logger: Schedule log with usage
        Observe-->>Route: Client response (original stream)
        Route-->>Client: Stream response
    else Non-Streaming Response
        Route->>Observe: observeAndLog(response, sanitized)
        Observe->>Observe: Read full body
        Observe->>Observe: Extract usage from metadata
        Observe->>Logger: Schedule log with usage
        Observe-->>Route: Response bytes
        Route-->>Client: JSON response
    end
    Logger->>DB: callReducer('log_ai_query', [...])
```
```mermaid
sequenceDiagram
    participant User as User (Reviewer)
    participant Dashboard as Internal Dashboard
    participant Hook as useAiQueryLogs()
    participant Enroll as Enrollment Check
    participant DB as SpacetimeDB
    participant Query as Query: SELECT
    User->>Dashboard: Load usage analytics
    Dashboard->>Hook: useAiQueryLogs(ensureEnrolled)
    Hook->>Enroll: ensureEnrolled(identity)
    Enroll->>DB: callReducer('add_operator', [token, identity, ...])
    DB-->>Enroll: ✓ Enrolled
    Enroll-->>Hook: Enrollment complete
    Hook->>DB: Subscribe to ai_query_log
    Query->>DB: SELECT * FROM ai_query_log<br/>WHERE shard = %shardFor(identity)
    DB-->>Hook: Stream rows
    Hook-->>Dashboard: { rows, connectionState }
    Dashboard->>Dashboard: Filter, sort, paginate
    Dashboard-->>User: Render analytics table
```
**Estimated code review effort:** 🎯 4 (Complex) | ⏱️ ~60 minutes
**Greptile Summary**

This PR adds an AI analytics dashboard to the internal tool and centralises AI query observability. Every call through the
**Confidence Score: 4/5.** Safe to merge after fixing the `&middot;` rendering bug in the multi-call footer. One confirmed P1 display bug: the `&middot;` HTML entity inside a template literal on ConversationReplay.tsx:570 will render as literal text in multi-call conversations. All backend logging, auth, and proxy code is clean and well-structured. (apps/internal-tool/src/components/ConversationReplay.tsx, line 570)
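The flagged footer bug is the usual case of an HTML entity landing inside a JSX string child: React escapes string children, so the entity is rendered verbatim instead of being decoded. A hedged illustration (invented values, not the actual ConversationReplay code):

```typescript
// Sketch of the class of bug described above (not the real component code).
// React escapes string children, so an HTML entity in a template literal is
// shown literally; use the actual character (or \u00B7) instead.
const call = 2;
const total = 3;
const broken = `Call ${call} of ${total} &middot; streaming`; // renders "&middot;" verbatim
const fixed = `Call ${call} of ${total} \u00B7 streaming`;    // renders the middle dot
```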
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant AIProxy as AI Proxy Route
    participant AIQuery as AI Query Route
    participant OpenRouter
    participant SpacetimeDB
    Client->>AIProxy: POST with caller identity header
    AIProxy->>AIProxy: sanitizeBody (model allowlist)
    AIProxy->>OpenRouter: forward request
    OpenRouter-->>AIProxy: response (stream or JSON)
    AIProxy->>AIProxy: scanSseForUsage / extractOpenRouterUsage
    AIProxy-->>Client: proxied response
    AIProxy-)SpacetimeDB: logAiQuery (background task)
    Client->>AIQuery: POST /ai/query/stream or generate
    AIQuery->>OpenRouter: streamText / generateText
    OpenRouter-->>AIQuery: result + providerMetadata
    AIQuery-->>Client: response
    AIQuery-)SpacetimeDB: logAiQuery (background task)
    AIQuery-)SpacetimeDB: logMcpCall + reviewMcpCall (if MCP)
    Note over SpacetimeDB: Internal Tool reads my_visible_ai_query_log via WebSocket
```
```diff
       resolve(connInstance);
     })
-      .subscribe("SELECT * FROM mcp_call_log");
+      .subscribe(["SELECT * FROM mcp_call_log", "SELECT * FROM ai_query_log"]);
   })
```
**Unnecessary `ai_query_log` subscription loads entire table into backend memory**
The backend server subscribes to SELECT * FROM ai_query_log but never reads from this table's in-memory cache — it only calls logAiQuery reducers to write records. SpacetimeDB client subscriptions download the full table into the in-memory cache on connection. As every AI request appends a row to ai_query_log, this subscription will load an unboundedly growing dataset into the backend server's heap on startup, eventually causing memory pressure.
```diff
       resolve(connInstance);
     })
-      .subscribe(["SELECT * FROM mcp_call_log", "SELECT * FROM ai_query_log"]);
+      .subscribe(["SELECT * FROM mcp_call_log"]);
   })
```
The ai_query_log subscription serves the internal-tool frontend, not the backend.
(Path: apps/backend/src/lib/ai/mcp-logger.ts, lines 24-27)

apps/internal-tool/src/components/Usage.tsx (lines 54-70):

```tsx
      case "7d": {
        return now - 7 * 24 * 60 * 60 * 1000;
      }
      case "30d": {
        return now - 30 * 24 * 60 * 60 * 1000;
      }
      case "all": {
        return 0;
      }
    }
  }, [timeRange, now]);

  const filtered = useMemo(() => {
    const q = search.trim().toLowerCase();
    return rows.filter(r => {
      const ts = toDate(r.createdAt).getTime();
      if (ts < rangeStart) return false;
```
**`now` recalculated every render defeats memoization**

`const now = Date.now()` is declared inside the component body and listed as a dependency of `rangeStart` (and transitively `stats`). Because `now` is a new primitive on every render, both memos recompute on every render — including when new rows arrive from SpacetimeDB — defeating the purpose of memoization. Stabilize it with a state-based clock:
| case "7d": { | |
| return now - 7 * 24 * 60 * 60 * 1000; | |
| } | |
| case "30d": { | |
| return now - 30 * 24 * 60 * 60 * 1000; | |
| } | |
| case "all": { | |
| return 0; | |
| } | |
| } | |
| }, [timeRange, now]); | |
| const filtered = useMemo(() => { | |
| const q = search.trim().toLowerCase(); | |
| return rows.filter(r => { | |
| const ts = toDate(r.createdAt).getTime(); | |
| if (ts < rangeStart) return false; | |
| const [now, setNow] = useState(() => Date.now()); | |
| useEffect(() => { | |
| const id = setInterval(() => setNow(Date.now()), 60_000); | |
| return () => clearInterval(id); | |
| }, []); | |
| const rangeStart = useMemo(() => { |
apps/internal-tool/src/hooks/useSpacetimeDB.ts (lines 30-35):

```ts
function useTableSubscription<Row extends { id: bigint }>(
  binding: TableBinding<Row>,
) {
  const [rows, setRows] = useState<Row[]>([]);
  const [connectionState, setConnectionState] = useState<ConnectionState>("connecting");
  const connRef = useRef<DbConnection | null>(null);
```
**Two independent WebSocket connections for a single-user session**

`useMcpCallLogs` and `useAiQueryLogs` each call `useTableSubscription` with a separate `binding`, creating two independent `DbConnection` instances. A single shared connection with a combined subscription (as the backend already does in `mcp-logger.ts`) would halve round-trips. If the two-connection approach is intentional, a comment explaining the trade-off would help.
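One common way to share a single connection across hooks is to memoize the in-flight connect promise at module scope. A hedged sketch under the assumption that some `connect()` returning a promise exists; the `Conn` shape and names here are invented, not the SpacetimeDB SDK's API:

```typescript
// Hypothetical sketch: share one connection across multiple subscription hooks
// by caching the connect promise at module scope.
type Conn = { subscribe: (queries: string[]) => void };

let shared: Promise<Conn> | null = null;

function getSharedConnection(connect: () => Promise<Conn>): Promise<Conn> {
  // The first caller triggers the connect; later callers reuse the same
  // promise, so only one WebSocket is ever opened.
  if (!shared) shared = connect();
  return shared;
}
```

Each hook would then await `getSharedConnection(...)` and register its own query; resetting `shared` on disconnect is omitted here for brevity.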
Pull request overview
Adds internal monitoring/review capabilities for OpenRouter-backed AI usage by logging MCP calls + unified AI endpoint usage into SpacetimeDB, and exposing an internal Next.js “review tool” UI to inspect, QA-review, and curate human-verified Q&A.
Changes:
- Add SpacetimeDB-backed logging for AI queries, MCP calls, and automated QA reviews (plus endpoints to apply human review/corrections and publish verified Q&A).
- Introduce a new `apps/internal-tool` Next.js app for browsing logs, analytics, conversation replays, and maintaining a curated Q&A knowledge base.
- Extend the MCP ask_stack_auth tool + AI query schema to carry `userPrompt` and `conversationId`, enabling per-conversation grouping and analytics.
Reviewed changes
Copilot reviewed 93 out of 94 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| docs/src/lib/docs-tools-operations.ts | Mark OpenAPI-spec read failures as MCP tool errors (isError). |
| docs/src/app/api/internal/[transport]/route.ts | Add userPrompt + conversationId plumbing for the MCP ask tool and surface conversationId in responses. |
| docker/dependencies/docker.compose.yaml | Add a local SpacetimeDB service for dev. |
| apps/internal-tool/tsconfig.json | Add TS config for the new internal tool app. |
| apps/internal-tool/tailwind.config.js | Add Tailwind config for internal tool UI. |
| apps/internal-tool/src/utils.ts | Add helper to convert SpacetimeDB timestamps to JS Date. |
| apps/internal-tool/src/types.ts | Re-export generated SpacetimeDB row types. |
| apps/internal-tool/src/stack.ts | Configure Stack Auth client for internal-tool auth. |
| apps/internal-tool/src/module_bindings/update_mcp_qa_review_reducer.ts | Generated SpacetimeDB reducer bindings. |
| apps/internal-tool/src/module_bindings/update_human_correction_reducer.ts | Generated SpacetimeDB reducer bindings. |
| apps/internal-tool/src/module_bindings/types/reducers.ts | Generated types for reducer params. |
| apps/internal-tool/src/module_bindings/types/procedures.ts | Generated types for procedures (none defined). |
| apps/internal-tool/src/module_bindings/types.ts | Generated SpacetimeDB table row types. |
| apps/internal-tool/src/module_bindings/mcp_call_log_table.ts | Generated table schema binding for MCP call logs. |
| apps/internal-tool/src/module_bindings/mark_human_reviewed_reducer.ts | Generated SpacetimeDB reducer binding. |
| apps/internal-tool/src/module_bindings/log_mcp_call_reducer.ts | Generated SpacetimeDB reducer binding. |
| apps/internal-tool/src/module_bindings/log_ai_query_reducer.ts | Generated SpacetimeDB reducer binding. |
| apps/internal-tool/src/module_bindings/index.ts | Generated SpacetimeDB module entrypoint for client. |
| apps/internal-tool/src/module_bindings/delete_qa_entry_reducer.ts | Generated SpacetimeDB reducer binding. |
| apps/internal-tool/src/module_bindings/ai_query_log_table.ts | Generated table schema binding for AI query logs. |
| apps/internal-tool/src/module_bindings/add_manual_qa_reducer.ts | Generated SpacetimeDB reducer binding. |
| apps/internal-tool/src/lib/mcp-review-api.ts | Client wrapper for backend MCP review endpoints. |
| apps/internal-tool/src/hooks/useSpacetimeDB.ts | Subscribe to SpacetimeDB tables from the internal tool UI. |
| apps/internal-tool/src/components/markdown-components.tsx | UI markdown render components with copy affordances. |
| apps/internal-tool/src/components/UsageDetail.tsx | Detailed view for a single AI query log (messages, steps, tools, cost/tokens). |
| apps/internal-tool/src/components/KnowledgeBase.tsx | UI for drafts/published Q&A management. |
| apps/internal-tool/src/components/ConversationReplay.tsx | Animated replay UI for multi-call conversations and QA reviewer steps. |
| apps/internal-tool/src/components/CallLogList.tsx | Table UI for browsing/filtering/sorting MCP call logs. |
| apps/internal-tool/src/components/CallLogDetail.tsx | Detail UI for a single MCP call including QA review + human correction workflows. |
| apps/internal-tool/src/components/Analytics.tsx | Aggregated MCP QA metrics and charts. |
| apps/internal-tool/src/components/AddManualQa.tsx | Modal UI to add a manual Q&A entry. |
| apps/internal-tool/src/app/questions/page.tsx | Public-ish “published Q&A” page for curated knowledge base. |
| apps/internal-tool/src/app/page.tsx | Root page delegating to the client app. |
| apps/internal-tool/src/app/loading.tsx | Loading UI for the internal tool. |
| apps/internal-tool/src/app/layout.tsx | Internal tool layout with StackProvider + theme + suspense. |
| apps/internal-tool/src/app/handler/[...stack]/page.tsx | Stack Auth handler route for auth flows. |
| apps/internal-tool/src/app/globals.css | Tailwind base + minimal global styles. |
| apps/internal-tool/src/app/app-client.tsx | Main internal tool UI: tabs, auth gating, API wiring, detail panes. |
| apps/internal-tool/spacetimedb/tsconfig.json | TS config for the SpacetimeDB module project. |
| apps/internal-tool/spacetimedb/src/index.ts | SpacetimeDB schema + reducers for logs, QA review, human curation, publishing. |
| apps/internal-tool/spacetimedb/package.json | SpacetimeDB module package config. |
| apps/internal-tool/spacetime.json | Spacetime dev config pointing at module path. |
| apps/internal-tool/scripts/spacetime-token.mjs | Cross-platform log token injection/restore for module publish. |
| apps/internal-tool/scripts/spacetime-publish.mjs | Cross-platform publish script (inject token, publish, restore). |
| apps/internal-tool/scripts/pre-dev.mjs | Pre-dev hook to publish module locally when CLI is available. |
| apps/internal-tool/postcss.config.js | PostCSS config for Tailwind. |
| apps/internal-tool/package.json | Internal tool app dependencies/scripts. |
| apps/internal-tool/next.config.mjs | Next.js config for internal tool. |
| apps/internal-tool/.eslintrc.cjs | ESLint config for internal tool workspace app. |
| apps/internal-tool/.env.development | Dev env defaults for internal tool. |
| apps/internal-tool/.env | Placeholder env file for internal tool. |
| apps/dev-launchpad/public/index.html | Advertise SpacetimeDB + Internal Tool ports in the dev launchpad. |
| apps/dashboard/src/lib/ai-dashboard/shared-prompt.ts | Change dashboard prompt composition to always include all type defs (cached) and deterministically sort routes. |
| apps/backend/src/lib/ai/verified-qa.ts | Build a “human-verified KB” prompt section from published entries in SpacetimeDB. |
| apps/backend/src/lib/ai/tools/docs.ts | Update docs-tools dev base URL to docs app port and harden error handling. |
| apps/backend/src/lib/ai/spacetimedb-bindings/update_mcp_qa_review_reducer.ts | Generated SpacetimeDB bindings used by backend. |
| apps/backend/src/lib/ai/spacetimedb-bindings/update_human_correction_reducer.ts | Generated SpacetimeDB bindings used by backend. |
| apps/backend/src/lib/ai/spacetimedb-bindings/types/reducers.ts | Generated reducer param types (backend). |
| apps/backend/src/lib/ai/spacetimedb-bindings/types/procedures.ts | Generated procedure types (backend). |
| apps/backend/src/lib/ai/spacetimedb-bindings/types.ts | Generated row types (backend). |
| apps/backend/src/lib/ai/spacetimedb-bindings/mcp_call_log_table.ts | Generated table binding (backend). |
| apps/backend/src/lib/ai/spacetimedb-bindings/mark_human_reviewed_reducer.ts | Generated reducer binding (backend). |
| apps/backend/src/lib/ai/spacetimedb-bindings/log_mcp_call_reducer.ts | Generated reducer binding (backend). |
| apps/backend/src/lib/ai/spacetimedb-bindings/log_ai_query_reducer.ts | Generated reducer binding (backend). |
| apps/backend/src/lib/ai/spacetimedb-bindings/index.ts | Generated module entrypoint (backend). |
| apps/backend/src/lib/ai/spacetimedb-bindings/delete_qa_entry_reducer.ts | Generated reducer binding (backend). |
| apps/backend/src/lib/ai/spacetimedb-bindings/ai_query_log_table.ts | Generated table binding (backend). |
| apps/backend/src/lib/ai/spacetimedb-bindings/add_manual_qa_reducer.ts | Generated reducer binding (backend). |
| apps/backend/src/lib/ai/schema.ts | Add mcpCallMetadata to the AI query request schema. |
| apps/backend/src/lib/ai/qa-reviewer.ts | Add async QA reviewer that uses DeepWiki MCP tools + writes review results back to SpacetimeDB. |
| apps/backend/src/lib/ai/prompts.ts | Add “human-verified KB” to prompt priority instructions. |
| apps/backend/src/lib/ai/openrouter-usage.ts | Add usage extraction utilities for OpenRouter responses/SSE streams. |
| apps/backend/src/lib/ai/mcp-logger.ts | Add SpacetimeDB connection + reducer logging for MCP calls. |
| apps/backend/src/lib/ai/ai-query-logger.ts | Add SpacetimeDB logging for AI query usage. |
| apps/backend/src/lib/ai/ai-query-handlers.ts | Refactor query handling; add logging of usage, MCP call log+review pipeline, conversationId. |
| apps/backend/src/lib/ai/ai-proxy-handlers.ts | Add proxy-side usage observation + logging for OpenRouter proxy calls. |
| apps/backend/src/app/api/latest/internal/mcp-review/update-correction/route.ts | Add internal endpoint to update human correction + publish. |
| apps/backend/src/app/api/latest/internal/mcp-review/mark-reviewed/route.ts | Add internal endpoint to mark a call as human reviewed. |
| apps/backend/src/app/api/latest/internal/mcp-review/delete/route.ts | Add internal endpoint to delete a Q&A entry. |
| apps/backend/src/app/api/latest/internal/mcp-review/add-manual/route.ts | Add internal endpoint to create a manual Q&A entry. |
| apps/backend/src/app/api/latest/integrations/ai-proxy/[[...path]]/route.ts | Use shared sanitize/logging utilities and add async usage logging. |
| apps/backend/src/app/api/latest/ai/query/[mode]/route.ts | Integrate verified-QA context, logging, conversation grouping, and handler refactor. |
| apps/backend/package.json | Add SpacetimeDB dependency for backend logging/bindings. |
| apps/backend/.env.development | Add dev env vars for SpacetimeDB logging. |
| apps/backend/.env | Document SpacetimeDB logging env vars. |
| README.md | Add DeepWiki badge link. |
| .github/workflows/lint-and-build.yaml | Ensure internal-tool env file exists in CI builds. |
| .github/workflows/e2e-fallback-tests.yaml | Ensure internal-tool env file exists for tests. |
| .github/workflows/e2e-custom-base-port-api-tests.yaml | Ensure internal-tool env file exists for tests. |
| .github/workflows/e2e-api-tests.yaml | Ensure internal-tool env file exists for tests. |
| .github/workflows/e2e-api-tests-local-emulator.yaml | Ensure internal-tool env file exists for tests. |
| .github/workflows/db-migration-backwards-compatibility.yaml | Ensure internal-tool env file exists for tests/build. |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
```diff
   const responseConversationId = body.conversationId ?? conversationId ?? "";

   return {
-    content: [{ type: "text", text: text.length > 0 ? text : "(empty response)" }],
+    content: [{ type: "text", text: `${text.length > 0 ? text : "(empty response)"}\n\n[conversationId: ${responseConversationId} — pass this value as the conversationId parameter in your next ask_stack_auth call to continue this conversation]` }],
   };
```
The response always appends a conversationId instruction even when no conversationId is available (it falls back to an empty string). This yields responses like [conversationId: — ...] and makes it unclear what to pass next.
Consider only appending this footer when you have a non-empty conversationId (prefer the server-returned value), and omit it otherwise (or return the conversationId as a separate structured field if the MCP client supports it).
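A hedged sketch of that fix; the helper name is invented and the footer string follows the snippet above, but this is not the actual route code:

```typescript
// Sketch: only append the conversationId footer when a non-empty id exists,
// instead of emitting "[conversationId:  — ...]" with a blank value.
function withConversationFooter(text: string, conversationId: string | undefined): string {
  const body = text.length > 0 ? text : "(empty response)";
  if (!conversationId) return body; // omit the footer entirely
  return `${body}\n\n[conversationId: ${conversationId} — pass this value as the conversationId parameter in your next ask_stack_auth call to continue this conversation]`;
}
```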
```ts
  mcpCallMetadata: yupObject({
    toolName: yupString().defined(),
    reason: yupString().defined(),
    userPrompt: yupString().defined(),
    conversationId: yupString().optional().nullable(),
  }).optional().nullable(),
```
mcpCallMetadata.userPrompt is accepted verbatim and then logged to SpacetimeDB. Since this is intended for analytics, it would be safer to enforce a max length and/or server-side redaction/truncation to reduce the risk of accidentally persisting secrets or large payloads.
```ts
const logPromise = logMcpCall({
  correlationId,
  toolName: mcpCallMetadata.toolName,
  reason: mcpCallMetadata.reason,
  userPrompt: mcpCallMetadata.userPrompt,
  conversationId,
  question,
  response: finalText,
  stepCount,
  innerToolCallsJson,
  durationMs: BigInt(Date.now() - startedAt),
```
This logs mcpCallMetadata.userPrompt directly into the MCP call log without any truncation/redaction. Given the field is user-provided and the SpacetimeDB tables are configured as public in the module, this can unintentionally persist sensitive data.
Consider truncating to a safe maximum, stripping obvious secrets (tokens/keys), and/or storing a hashed/derived form instead of raw text.
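A sketch of that hardening; the length cap and the token-shaped regex are illustrative choices made here, not values from the codebase:

```typescript
// Sketch only: cap length and mask obvious token shapes before persisting
// user-provided text to a public log table.
const MAX_LOGGED_PROMPT_LENGTH = 2000; // illustrative limit

function sanitizeForLog(input: string): string {
  // Mask long token-like substrings (e.g. API keys, bearer tokens); the
  // 32-char threshold is a heuristic, not a guarantee of catching secrets.
  const masked = input.replace(/\b[A-Za-z0-9_\-]{32,}\b/g, "[REDACTED]");
  return masked.length > MAX_LOGGED_PROMPT_LENGTH
    ? masked.slice(0, MAX_LOGGED_PROMPT_LENGTH) + "…[truncated]"
    : masked;
}
```

Calling `sanitizeForLog(mcpCallMetadata.userPrompt)` before `logMcpCall` would bound both payload size and accidental secret persistence; storing a hash alongside is a stricter alternative.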
```ts
function envOrDevDefault(value: string | undefined, devDefault: string): string {
  if (!value || value === PLACEHOLDER) {
    return IS_DEV ? devDefault : "";
  }
  return value;
}

const PORT_PREFIX = process.env.NEXT_PUBLIC_STACK_PORT_PREFIX ?? "81";
const API_URL = envOrDevDefault(process.env.NEXT_PUBLIC_STACK_API_URL, `http://localhost:${PORT_PREFIX}02`);
const PROJECT_ID = envOrDevDefault(process.env.NEXT_PUBLIC_STACK_PROJECT_ID, "internal");
const PUBLISHABLE_CLIENT_KEY = envOrDevDefault(
```
In non-dev environments, missing/placeholder NEXT_PUBLIC_STACK_* env vars cause API_URL/keys to become empty strings, which will produce requests like fetch("/api/latest/internal/mcp-review/..." ...) and fail in a non-obvious way.
Prefer failing fast (throw) with a clear configuration error (similar to src/stack.ts), or at least guard post() with an explicit error when API_URL/credentials are empty.
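A hedged sketch of the fail-fast alternative; the helper name is invented and the `isDev` switch is made a parameter here so the behavior is explicit, which is not how the repo's code is structured:

```typescript
// Sketch only: surface a clear configuration error at startup instead of
// silently returning "" and failing later with an opaque fetch error.
const PLACEHOLDER = "REPLACE_ME"; // assumption: mirrors the placeholder used in .env

function envOrThrow(
  name: string,
  value: string | undefined,
  opts: { isDev: boolean, devDefault: string },
): string {
  if (!value || value === PLACEHOLDER) {
    if (opts.isDev) return opts.devDefault;
    throw new Error(`Missing required env var ${name}; set it in the deployment environment.`);
  }
  return value;
}
```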
```ts
connInstance.subscriptionBuilder()
  .onApplied(() => {
    resolve(connInstance);
  })
  .subscribe(["SELECT * FROM mcp_call_log"]);
```
The backend connection subscribes to all rows in mcp_call_log (SELECT *). As the log grows, this will increase memory usage and replication cost on every backend instance, even though most backend operations only need to write via reducers (or, for verified Q&A, only need published rows).
Consider narrowing the subscription (e.g., only published entries) or avoiding a subscription entirely unless a read is required.
```diff
-  connInstance.subscriptionBuilder()
-    .onApplied(() => {
-      resolve(connInstance);
-    })
-    .subscribe(["SELECT * FROM mcp_call_log"]);
+  resolve(connInstance);
```
```ts
const pairs: Array<{ question: string; answer: string }> = [];
for (const row of conn.db.mcpCallLog.iter()) {
  if (row.publishedToQa) {
    pairs.push({
      question: row.humanCorrectedQuestion ?? row.question,
      answer: row.humanCorrectedAnswer ?? row.response,
    });
  }
}

if (pairs.length === 0) return "";

const formatted = pairs.map((p, i) =>
  `${i + 1}. Q: ${p.question}\n A: ${p.answer}`
).join("\n\n");
```
getVerifiedQaContextInner() iterates the entire mcpCallLog and appends all published Q&A pairs into the system prompt. Over time this can become very large (token/cost explosion and prompt-length failures) and also adds per-request latency.
Consider limiting the number/size of entries (e.g., newest N, or total character/token budget), and caching the rendered context for a short TTL instead of rebuilding it on every request.
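A sketch of that mitigation; the character budget and TTL are illustrative values chosen here, and the function name is invented rather than taken from `verified-qa.ts`:

```typescript
// Sketch only: render the Q&A context under a character budget (assuming the
// caller passes newest-first) and cache the rendered string for a short TTL.
type QaPair = { question: string, answer: string };

const CHAR_BUDGET = 8000;   // illustrative
const TTL_MS = 60_000;      // illustrative
let cache: { value: string, expiresAt: number } | null = null;

function renderQaContext(pairs: QaPair[], now: number): string {
  if (cache && now < cache.expiresAt) return cache.value;
  const parts: string[] = [];
  let used = 0;
  for (const [i, p] of pairs.entries()) {
    const entry = `${i + 1}. Q: ${p.question}\n A: ${p.answer}`;
    if (used + entry.length > CHAR_BUDGET) break; // stop once the budget is spent
    parts.push(entry);
    used += entry.length;
  }
  const value = parts.join("\n\n");
  cache = { value, expiresAt: now + TTL_MS };
  return value;
}
```

A token-based budget (via the model's tokenizer) would be more precise, but a character cap already prevents unbounded prompt growth.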
Actionable comments posted: 1
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (22)
apps/backend/src/lib/ai/tools/docs.ts (lines 26-34): ⚠️ Potential issue | 🟠 Major

**Add a timeout to this outbound call.**

This fetch currently has no timeout; transient network stalls can hold backend execution indefinitely. Add an `AbortController` with a timeout (e.g., 120 seconds, consistent with `ai-query-handlers.ts`) and pass it as the `signal` option. The existing error handling at lines 51-54 will map timeout failures to the fallback message appropriately.
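A small helper in the spirit of that suggestion; this is a sketch rather than the repository's code, and the 120-second figure follows the comment above:

```typescript
// Sketch: build an AbortSignal that fires after `ms`, for use as fetch's
// `signal` option; `cancel` clears the timer once the request settles.
function timeoutSignal(ms: number): { signal: AbortSignal, cancel: () => void } {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  return { signal: controller.signal, cancel: () => clearTimeout(timer) };
}

// Usage sketch:
//   const { signal, cancel } = timeoutSignal(120_000);
//   try { await fetch(url, { method: "POST", body, signal }); } finally { cancel(); }
```

On Node 18+ the built-in `AbortSignal.timeout(ms)` covers the same common case without manual cleanup.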
apps/backend/src/lib/ai/tools/docs.ts (lines 53-56): ⚠️ Potential issue | 🟠 Major

**Replace the blanket catch block with explicit error handling.**

The `catch (err)` on line 53 swallows all exceptions indiscriminately, violating the coding guideline to NEVER try-catch-all. Handle expected transport errors (fetch failures, JSON parsing errors) explicitly and let other errors propagate for visibility into unexpected defects.
apps/internal-tool/.env.development (lines 5-6): ⚠️ Potential issue | 🟠 Major

**Use the `NEXT_PUBLIC_STACK_` prefix for these public env vars.**

`NEXT_PUBLIC_SPACETIMEDB_HOST` and `NEXT_PUBLIC_SPACETIMEDB_DB_NAME` break the repo's env naming rule. Rename to `NEXT_PUBLIC_STACK_...` and align all references.

Suggested rename:

```diff
-NEXT_PUBLIC_SPACETIMEDB_HOST=ws://localhost:${NEXT_PUBLIC_STACK_PORT_PREFIX:-81}39
-NEXT_PUBLIC_SPACETIMEDB_DB_NAME=stack-auth-llm
+NEXT_PUBLIC_STACK_SPACETIMEDB_HOST=ws://localhost:${NEXT_PUBLIC_STACK_PORT_PREFIX:-81}39
+NEXT_PUBLIC_STACK_SPACETIMEDB_DB_NAME=stack-auth-llm
```

As per coding guidelines (`{.env*,**/*.{ts,tsx}}`): all environment variables should be prefixed with STACK_ (or NEXT_PUBLIC_STACK_ if public) to ensure Turborepo picks up changes and improve readability.
8-9: ⚠️ Potential issue | 🟠 Major

Rename public SpacetimeDB env keys to follow the repo prefix convention.

`NEXT_PUBLIC_SPACETIMEDB_HOST` and `NEXT_PUBLIC_SPACETIMEDB_DB_NAME` violate the enforced env naming convention and can break Turborepo env change detection consistency. Please rename these to `NEXT_PUBLIC_STACK_...` and update consumers accordingly.

Suggested rename:

```diff
-NEXT_PUBLIC_SPACETIMEDB_HOST=REPLACE_ME
-NEXT_PUBLIC_SPACETIMEDB_DB_NAME=REPLACE_ME
+NEXT_PUBLIC_STACK_SPACETIMEDB_HOST=REPLACE_ME
+NEXT_PUBLIC_STACK_SPACETIMEDB_DB_NAME=REPLACE_ME
```

As per coding guidelines:

{.env*,**/*.{ts,tsx}}: All environment variables should be prefixed with STACK_ (or NEXT_PUBLIC_STACK_ if public) to ensure Turborepo picks up changes and improves readability.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/.env` around lines 8 - 9: rename the two public env keys to use the repo prefix and update all consumers: change `NEXT_PUBLIC_SPACETIMEDB_HOST` -> `NEXT_PUBLIC_STACK_SPACETIMEDB_HOST` and `NEXT_PUBLIC_SPACETIMEDB_DB_NAME` -> `NEXT_PUBLIC_STACK_SPACETIMEDB_DB_NAME` in the .env file, then update every code reference that reads the old names (e.g. `process.env.NEXT_PUBLIC_SPACETIMEDB_HOST`, `process.env.NEXT_PUBLIC_SPACETIMEDB_DB_NAME`, `import.meta.env`, or any Next.js runtime/config usages) to the new keys, and also update any examples/docs/CI vars that mention them.

.github/workflows/db-migration-backwards-compatibility.yaml-141-143 (1)
141-143: ⚠️ Potential issue | 🟠 Major

Guard the internal-tool `.env` copy in base-branch scenarios.

Line 142 and Line 339 can fail when the checked-out base branch does not yet contain `apps/internal-tool/.env.development`, which makes migration-compat CI fail for the wrong reason.

💡 Suggested hardening:

```diff
-      - name: Create .env.test.local file for apps/internal-tool
-        run: cp apps/internal-tool/.env.development apps/internal-tool/.env.test.local
+      - name: Create .env.test.local file for apps/internal-tool
+        run: |
+          if [ -f apps/internal-tool/.env.development ]; then
+            cp apps/internal-tool/.env.development apps/internal-tool/.env.test.local
+          else
+            echo "apps/internal-tool/.env.development not found; skipping."
+          fi
```

Also applies to: 338-340

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@.github/workflows/db-migration-backwards-compatibility.yaml` around lines 141 - 143: the workflow step named "Create .env.test.local file for apps/internal-tool" attempts to copy `apps/internal-tool/.env.development` unconditionally, which fails if the base branch lacks that file. Change the step to check for the source file's existence before copying (e.g. test for the file and only run `cp` when present) and apply the same guarded logic to the other identical step later in the file so both occurrences no longer cause a hard failure in base-branch scenarios.

apps/backend/src/lib/ai/prompts.ts-155-160 (1)
155-160: ⚠️ Potential issue | 🟠 Major

Conflicting prompt rules can cancel the verified-QA fast path.

Line 156 permits answering directly from verified QA, but Line 186 and Line 269 still require docs retrieval for every question. This contradiction can make behavior inconsistent and ignore the new priority order.

🧭 Suggested wording alignment:

```diff
-**CRITICAL**: Keep responses SHORT and concise. ALWAYS use the available tools to pull relevant documentation for every question. There should almost never be a question where you don't retrieve relevant docs.
+**CRITICAL**: Keep responses SHORT and concise. Use the available tools to pull relevant documentation unless there is an exact/near-exact Human-Verified Q&A match.
@@
-2. For every question, use the available tools to retrieve the most relevant documentation sections
+2. For every question without an exact/near-exact Human-Verified Q&A match, use the available tools to retrieve the most relevant documentation sections
@@
-This is not optional - retrieve relevant documentation for every question.
+This is not optional - retrieve relevant documentation for every question that is not covered by an exact/near-exact Human-Verified Q&A match.
```

apps/backend/src/lib/ai/verified-qa.ts-19-45 (1)
19-45: ⚠️ Potential issue | 🟠 Major

Bound the verified-QA payload size to avoid request-path latency/context blowups.

Line 19 iterates the full table and Line 34 injects every published pair into the prompt. As the table grows, this will increase per-request latency/cost and can exceed model context limits.

Please cap the number of entries (and/or total characters/tokens) included in `formatted`, and prefer a deterministic subset strategy (e.g. most recent N).

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/ai/verified-qa.ts` around lines 19 - 45: the current loop over `conn.db.mcpCallLog.iter()` collects all `publishedToQa` rows into `pairs` and injects every entry into `formatted`, which can grow unbounded. Change the logic to include only a deterministic, bounded subset (e.g. the most recent N entries, or stop when total characters/tokens exceed a limit). Specifically, when iterating `conn.db.mcpCallLog.iter()`, accumulate entries into `pairs` but break once you reach a configured max count (e.g. `MAX_VERIFIED_QA = N`) or a max-chars threshold, preferring deterministic selection like newest-by-timestamp or iterating in reverse, and then build `formatted` from that bounded `pairs` array; ensure any selection uses fields like `publishedToQa`, `humanCorrectedQuestion`/`humanCorrectedAnswer` and preserve the existing output ordering and rules.
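A minimal sketch of the bounding described above. The row shape (`question`, `answer`, `updatedAtMicros`) and both limits are hypothetical stand-ins for whatever the real table exposes:

```typescript
type QaPair = { question: string, answer: string, updatedAtMicros: bigint };

const MAX_VERIFIED_QA = 50;      // hypothetical cap on number of entries
const MAX_TOTAL_CHARS = 20_000;  // hypothetical budget on total prompt size

// Deterministically pick the most recent pairs that fit both limits.
function boundVerifiedQa(allPairs: QaPair[]): QaPair[] {
  // Sort newest-first so the subset is stable across requests.
  const sorted = [...allPairs].sort((a, b) =>
    a.updatedAtMicros < b.updatedAtMicros ? 1 : a.updatedAtMicros > b.updatedAtMicros ? -1 : 0,
  );
  const selected: QaPair[] = [];
  let totalChars = 0;
  for (const pair of sorted) {
    const size = pair.question.length + pair.answer.length;
    if (selected.length >= MAX_VERIFIED_QA || totalChars + size > MAX_TOTAL_CHARS) break;
    selected.push(pair);
    totalChars += size;
  }
  return selected;
}
```

Selecting newest-first keeps the subset deterministic, and the character budget bounds the prompt even when individual answers are long.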
apps/internal-tool/src/components/AddManualQa.tsx-92-92 (1)

92-92: ⚠️ Potential issue | 🟠 Major

Avoid voiding save promises in click handlers.

Line 92 and Line 102 currently `void` the async call. Replace this with the project-standard async utility (`runAsynchronously`/`runAsynchronouslyWithAlert`) so promise lifecycle/error flow is handled consistently.

As per coding guidelines, "NEVER try-catch-all, NEVER void a promise, and NEVER use `.catch(console.error)` or similar. Use `runAsynchronously` or `runAsynchronouslyWithAlert` instead for error handling with loading indicators."

Also applies to: 102-102

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/components/AddManualQa.tsx` at line 92: the click handlers in AddManualQa.tsx are voiding the async promise from `handleSave` (`onClick={() => void handleSave(false)}` and the similar call at line 102). Replace these with the project async helper to preserve promise lifecycle and error/loading handling: call `runAsynchronously` or `runAsynchronouslyWithAlert` (per context) and pass the `handleSave` invocation (e.g. `() => handleSave(false)` / `() => handleSave(true)`) so the utility manages errors and loading indicators; update both occurrences referencing the `handleSave` function and ensure you import the appropriate `runAsynchronously*` helper if not already imported.

apps/backend/src/app/api/latest/internal/mcp-review/update-correction/route.ts-42-49 (1)
42-49: ⚠️ Potential issue | 🟠 Major

Avoid storing the reviewer email in persisted correction metadata.

Line 48 falls back to `user.primary_email`, which adds unnecessary PII to the review log. Prefer a stable non-PII identifier.

🧹 Proposed fix:

```diff
 await conn.reducers.updateHumanCorrection({
   token,
   correlationId: body.correlationId,
   correctedQuestion: body.correctedQuestion,
   correctedAnswer: body.correctedAnswer,
   publish: body.publish,
-  reviewedBy: user.display_name ?? user.primary_email ?? user.id,
+  reviewedBy: user.id,
 });
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/app/api/latest/internal/mcp-review/update-correction/route.ts` around lines 42 - 49: the call to `conn.reducers.updateHumanCorrection` is currently populating `reviewedBy` with `user.display_name ?? user.primary_email ?? user.id`, which can persist PII (`user.primary_email`). Change it to use a stable non-PII identifier (for example `user.id` or an internal reviewer ID) and remove the email fallback. Update the payload sent to `updateHumanCorrection` to set `reviewedBy` (or `reviewerId` if you prefer) to `user.id` (or `user.display_name` only if confirmed non-PII) and, if necessary, adjust the `updateHumanCorrection` reducer/signature to accept and persist that non-PII identifier instead of any email.

apps/internal-tool/scripts/spacetime-token.mjs-15-15 (1)
15-15: ⚠️ Potential issue | 🟠 Major

Do not fall back to a static token during `inject`.

Line 15 defaults to `"change-me"`. That can accidentally publish a known token. Fail fast when `STACK_MCP_LOG_TOKEN` is missing.

🔒 Proposed fix:

```diff
 if (action === "inject") {
-  const token = process.env.STACK_MCP_LOG_TOKEN || "change-me";
+  const token = process.env.STACK_MCP_LOG_TOKEN;
+  if (token == null || token === "") {
+    console.error("STACK_MCP_LOG_TOKEN is required for inject.");
+    process.exit(1);
+  }
   if (existsSync(BACKUP)) {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/scripts/spacetime-token.mjs` at line 15: the constant `token` currently falls back to a static string ("change-me"), which is unsafe. Change the logic around the token declaration (the `token` constant that reads `process.env.STACK_MCP_LOG_TOKEN`) to fail fast when the env var is missing by throwing an Error or exiting with a non-zero status instead of defaulting to "change-me" (do this where the token is defined and used, e.g. the `token` constant in spacetime-token.mjs and any inject-related call sites).

apps/backend/src/app/api/latest/internal/mcp-review/delete/route.ts-7-8 (1)
7-8: ⚠️ Potential issue | 🟠 Major

Route path is outside the documented API resource hierarchy.

`/api/latest/internal/mcp-review/delete` does not match the required resource organization. Please move this under a documented resource namespace (or explicitly document an allowed internal exception).

As per coding guidelines:

apps/backend/src/app/api/latest/**/*.{ts,tsx}: API routes should follow RESTful design patterns organized by resource type: Auth endpoints at /api/latest/auth/*, user management at /api/latest/users/*, team management at /api/latest/teams/*, and OAuth providers at /api/latest/oauth-providers/*.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/app/api/latest/internal/mcp-review/delete/route.ts` around lines 7 - 8: the route exported as `POST` via `createSmartRouteHandler` in this file uses the path `/api/latest/internal/mcp-review/delete`, which violates the documented API hierarchy. Move this route into a documented resource namespace (for example under `/api/latest/mcp-reviews/` or another appropriate resource like `/api/latest/reviews/`) by relocating the file into that resource folder and updating any references/imports accordingly; ensure the exported symbol `POST` and the `createSmartRouteHandler` usage remain unchanged, and update any tests or callers that reference the old path to the new resource path.

apps/backend/src/app/api/latest/integrations/ai-proxy/[[...path]]/route.ts-22-23 (1)
22-23: ⚠️ Potential issue | 🟠 Major

Use a monotonic clock for the logging duration.

`startedAt = Date.now()` can jump with wall-clock changes, which skews the latency that `observeAndLog` records. Since this endpoint feeds analytics, switch the elapsed-time path to `performance.now()` consistently, both in route.ts line 23 and in the duration calculations within ai-proxy-handlers.ts.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/app/api/latest/integrations/ai-proxy/[[...path]]/route.ts` around lines 22 - 23: `startedAt` is using `Date.now()`, which is non-monotonic. Change the route.ts variable declaration (`startedAt`) to use `performance.now()` and update any duration calculations in ai-proxy-handlers.ts (e.g. inside `observeAndLog`) to compute elapsed time as `performance.now() - startedAt` so all latency logging uses the monotonic clock; keep `correlationId` and other logic unchanged and ensure types/imports for `performance` are available where you update `startedAt` and the duration math.
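The measurement pattern the finding asks for can be sketched with a hypothetical `timed` wrapper (not the actual handler code); `performance.now()` is unaffected by NTP adjustments or wall-clock changes, unlike `Date.now()`:

```typescript
// Hypothetical wrapper showing monotonic elapsed-time measurement.
// `doWork` stands in for the proxied request handling.
async function timed<T>(doWork: () => Promise<T>): Promise<{ result: T, durationMs: number }> {
  const startedAt = performance.now(); // monotonic, never jumps backwards
  const result = await doWork();
  const durationMs = performance.now() - startedAt;
  return { result, durationMs };
}
```

With `Date.now()`, a clock correction mid-request can produce negative or wildly inflated durations; `performance.now()` guarantees the difference is a true elapsed time.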
apps/internal-tool/src/app/app-client.tsx-67-69 (1)

67-69: ⚠️ Potential issue | 🟠 Major

Require `isAiChatReviewer === true` here.

`!metadata?.isAiChatReviewer` approves any truthy JSON value, so a value like the string `"false"` would grant access to internal logs without proper authorization. The backend already implements explicit validation in the `mcp-review` routes; mirror that pattern by removing the cast and checking the property's type.

Suggested fix:

```diff
-  const metadata = user.clientReadOnlyMetadata as Record<string, unknown> | null;
-  if (!metadata?.isAiChatReviewer) {
+  const metadata = user.clientReadOnlyMetadata;
+  const isAiChatReviewer =
+    metadata != null &&
+    typeof metadata === "object" &&
+    "isAiChatReviewer" in metadata &&
+    metadata.isAiChatReviewer === true;
+  if (!isAiChatReviewer) {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/app/app-client.tsx` around lines 67 - 69: the current guard inside the NODE_ENV check treats any truthy value as authorized because it uses a cast and truthiness. Replace that with an explicit boolean check against the actual metadata object: read `user.clientReadOnlyMetadata` into `metadata` (no cast) and require that `metadata` is a non-null object and `metadata.isAiChatReviewer === true` before allowing access. Update the conditional around the `metadata` variable (the block that currently uses `!metadata?.isAiChatReviewer`) to explicitly verify `typeof metadata === "object" && metadata !== null && metadata.isAiChatReviewer === true` so only a real boolean `true` permits access.

apps/internal-tool/src/app/app-client.tsx-192-200 (1)
192-200: ⚠️ Potential issue | 🟠 Major

Don't swallow these mutation failures.

Each callback converts the mutation into fire-and-forget with `.catch(() => {})`, preventing child components from handling errors. Return the promise from an async callback instead.

Suggested fix pattern:

```diff
-          onMarkReviewed={(correlationId) => {
-            getApi()
-              .then(api => api.markReviewed({ correlationId }))
-              .catch(() => { /* errors are surfaced by UI state */ });
-          }}
+          onMarkReviewed={async (correlationId) => {
+            const api = await getApi();
+            await api.markReviewed({ correlationId });
+          }}
```

Also applies to: 213-221

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/app/app-client.tsx` around lines 192 - 200: the callbacks `onSaveCorrection` and `onMarkReviewed` currently call `getApi().then(...).catch(() => {})`, which swallows mutation failures. Change each to return the promise so callers can handle errors: have `onSaveCorrection` return `getApi().then(api => api.updateCorrection({ correlationId, correctedQuestion, correctedAnswer, publish }))` and have `onMarkReviewed` return `getApi().then(api => api.markReviewed({ correlationId }))`; remove the empty `.catch` handlers (and apply the same change to the similar callbacks around the 213-221 block) so errors propagate to the UI.

apps/internal-tool/src/components/CallLogList.tsx-143-154 (1)
143-154: ⚠️ Potential issue | 🟠 Major

Make sorting and row selection keyboard-accessible.

Line 146 binds sorting to a plain `<th>`, and Line 259 binds selection to a plain `<tr>`. Neither element is focusable or keyboard-operable, so keyboard users cannot sort the table or open a call from this screen. Please move these interactions onto real buttons (or add equivalent focus/keyboard handling and ARIA semantics).

Also applies to: 257-263

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/components/CallLogList.tsx` around lines 143 - 154: `SortHeader` currently attaches `onClick` to a non-focusable `<th>`. Change `SortHeader` to render a native `<button>` inside the `<th>` (keep the visual layout) and move the click to that button so `handleSort(field)` is invoked from a focusable control; include an accessible label for screen readers and preserve the sort indicator. Likewise, for row selection (the clickable `<tr>` logic at lines ~257-263), move the interaction into a focusable element inside each row (e.g. a button or link that calls the existing row-open handler) or add `tabindex`, `role="button"`, and key handlers (Enter/Space) plus `aria-label` on the existing control so keyboard users can activate rows; ensure existing handler names (`handleSort`, `SortHeader`, and the row-open handler) are reused and no behavior is lost.
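If the fix goes the `tabindex`/`role="button"` route rather than native buttons, the keydown logic reduces to treating Enter and Space as activation keys; a small hypothetical helper:

```typescript
// Hypothetical helper mirroring native <button> activation keys for elements
// given role="button". KeyboardEvent.key reports the Space key as " ".
function isActivationKey(key: string): boolean {
  return key === "Enter" || key === " ";
}
```

The handler should also call `preventDefault()` for Space so the page does not scroll on activation. Native `<button>` elements provide all of this for free, which is why the finding prefers them.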
apps/backend/src/lib/ai/qa-reviewer.ts-133-158 (1)

133-158: ⚠️ Potential issue | 🟠 Major

Validate the full QA payload before building `update`.

After the partial guard, Line 142 trusts fields that were never checked. If the model omits `improvementSuggestions` or returns malformed `flags` entries, `qaImprovementSuggestions`/`qaFlagsJson` can be built from invalid data and the reducer call fails at runtime instead of falling back cleanly.

Proposed fix:

```diff
+  const isObject = (value: unknown): value is Record<string, unknown> =>
+    value != null && typeof value === "object";
+
+  const isFlag = (
+    value: unknown,
+  ): value is { type: string, severity: "low" | "medium" | "high" | "critical", explanation: string } => {
+    if (!isObject(value)) return false;
+    return (
+      typeof value.type === "string" &&
+      (value.severity === "low" ||
+        value.severity === "medium" ||
+        value.severity === "high" ||
+        value.severity === "critical") &&
+      typeof value.explanation === "string"
+    );
+  };
+
   if (
     typeof raw.needsHumanReview !== "boolean" ||
     typeof raw.answerCorrect !== "boolean" ||
     typeof raw.answerRelevant !== "boolean" ||
     !Array.isArray(raw.flags) ||
+    !raw.flags.every(isFlag) ||
+    typeof raw.improvementSuggestions !== "string" ||
     typeof raw.overallScore !== "number"
   ) {
     throw new Error(`Invalid QA review response shape: ${JSON.stringify(raw).slice(0, 200)}`);
   }
```

As per coding guidelines: "Validate all assumptions either through the type system (preferred), assertions, or tests. Ideally, use at least two out of three validation methods."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/ai/qa-reviewer.ts` around lines 133 - 158: the current guard validates only a subset of fields and then builds `update` using unchecked properties. Enhance validation for `parsed.flags` and `parsed.improvementSuggestions` before creating `update`: assert `parsed.improvementSuggestions` is a string (or default to ""), validate `parsed.flags` is an array of objects where each item has string properties `type`, `severity`, `explanation` (filter out or normalize invalid entries), then set `qaFlagsJson = JSON.stringify(sanitizedFlags)` and `qaImprovementSuggestions = parsed.improvementSuggestions || ""` when constructing `update` so malformed or missing fields cannot cause runtime errors (refer to `raw`, `parsed`, `parsed.flags`, `parsed.improvementSuggestions`, `update`, `qaFlagsJson`, `qaImprovementSuggestions`).

apps/backend/src/lib/ai/openrouter-usage.ts-72-94 (1)
72-94: ⚠️ Potential issue | 🟠 Major

Parse SSE events by block, not by individual `data:` lines.

This loop only recognizes `\n\n` boundaries and then tries to `JSON.parse` each `data:` line separately. Valid SSE streams can use CRLF separators, split one event across multiple `data:` lines, or end without a trailing blank line. In all of those cases the usage event is skipped, so the analytics added by this PR will undercount tokens/cost.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/ai/openrouter-usage.ts` around lines 72 - 94: the SSE parser must treat each SSE event as a block (joining multiple `data:` lines and handling CRLF) and also process a final event that may lack a trailing blank line. Normalize newlines (replace `\r\n` with `\n`) as you append `decoder.decode(value, {stream: true})` into `buffer`, detect event boundaries by `\n\n` on the normalized buffer, and for each event block collect all lines starting with `data:` and join their contents with `\n` into a single JSON string before `JSON.parse` and calling `mergeUsageFromEvent(acc, ...)`; skip empty/`[DONE]` events as before. After the reader loop finishes, also parse any remaining buffer as a final event block so partial/no-trailing-blank-line events are not lost (use `isUsageEmpty(acc)` unchanged).
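The block-oriented parsing the finding describes can be sketched as follows, assuming the stream has already been decoded to a string; CRLF normalization and multi-line `data:` joining follow the server-sent events spec:

```typescript
// Spec-conformant SSE data extraction: normalize CRLF, split on blank lines,
// join multi-line `data:` fields with "\n", and keep a trailing event that
// lacks a final blank line (split() still yields the last partial block).
function parseSseData(stream: string): string[] {
  const normalized = stream.replace(/\r\n/g, "\n").replace(/\r/g, "\n");
  const blocks = normalized.split("\n\n");
  const events: string[] = [];
  for (const block of blocks) {
    const dataLines = block
      .split("\n")
      .filter((line) => line.startsWith("data:"))
      // The spec strips at most one leading space after the colon.
      .map((line) => line.slice("data:".length).replace(/^ /, ""));
    if (dataLines.length === 0) continue;
    const data = dataLines.join("\n");
    if (data === "[DONE]") continue; // OpenRouter-style stream terminator
    events.push(data);
  }
  return events;
}
```

In the real streaming code the same logic applies incrementally: keep everything after the last `\n\n` in a carry-over buffer between reads, and flush that buffer as one final block when the reader finishes.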
apps/internal-tool/spacetimedb/src/index.ts-3-5 (1)

3-5: ⚠️ Potential issue | 🟠 Major

Fail fast if the publish-time token was not injected.

If this module is published with `EXPECTED_LOG_TOKEN` still set to the checked-in placeholder, every write reducer ends up protected by a predictable public string. That turns a deployment mistake into a real write-auth bypass.

Proposed fix:

```diff
-const EXPECTED_LOG_TOKEN = '__SPACETIMEDB_LOG_TOKEN__';
+const TOKEN_PLACEHOLDER = '__SPACETIMEDB_LOG_TOKEN__';
+const EXPECTED_LOG_TOKEN = TOKEN_PLACEHOLDER;
+
+if (EXPECTED_LOG_TOKEN === TOKEN_PLACEHOLDER) {
+  throw new Error("STACK_MCP_LOG_TOKEN was not injected before publishing the SpacetimeDB module");
+}
```

As per coding guidelines: "Validate all assumptions either through the type system (preferred), assertions, or tests. Ideally, use at least two out of three validation methods."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/spacetimedb/src/index.ts` around lines 3 - 5: the published placeholder `EXPECTED_LOG_TOKEN` ('__SPACETIMEDB_LOG_TOKEN__') must never remain in a release. Add a runtime assertion at module init to fail fast if `EXPECTED_LOG_TOKEN === '__SPACETIMEDB_LOG_TOKEN__'` (throw an Error with a clear message including `EXPECTED_LOG_TOKEN`) so any publish without injection aborts startup; place this check near the top of index.ts where `EXPECTED_LOG_TOKEN` is defined (use a plain throw or Node assert) and add/adjust unit/integration tests to cover the failure path to satisfy the "at least two validations" guideline.

apps/internal-tool/spacetimedb/src/index.ts-7-44 (1)
7-44: ⚠️ Potential issue | 🟠 Major

Add a `.unique()` constraint to the `correlationId` field and switch mutation reducers to keyed lookup.

The `mcpCallLog` table uses `correlationId` as the functional lookup key across all mutation reducers (lines 136, 169, 197, 260), but the schema has no uniqueness constraint. Each reducer iterates the entire table with `.iter()` and compares `row.correlationId === args.correlationId`, causing O(n) performance and risking duplicate-collision behavior.

Mark `correlationId: t.string().unique()` in the schema, then switch these reducers to use `ctx.db.mcpCallLog.correlationId.get(args.correlationId)` for efficient keyed access.

Also applies to: 118-267

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/spacetimedb/src/index.ts` around lines 7 - 44: the `mcpCallLog` schema must mark `correlationId` as unique and the reducers that currently scan the table should use keyed lookups. Update the `mcpCallLog` table definition to change `correlationId` to `t.string().unique()` (in the `table(...)` block), and in each mutation reducer that currently does `table.iter()` + `row.correlationId === args.correlationId` (the reducers referencing `correlationId`), replace that scan with `ctx.db.mcpCallLog.correlationId.get(args.correlationId)` to fetch the single row with O(1) access and remove the full-table iteration.

apps/backend/src/lib/ai/qa-reviewer.ts-173-183 (1)
173-183: ⚠️ Potential issue | 🟠 Major

Don't use `.catch()` to swallow promise rejections; let the error propagate to the caller.

The `.catch()` handler here violates the coding guideline: "NEVER use `.catch(console.error)` or similar. Use `runAsynchronously` or `runAsynchronouslyWithAlert` instead." This swallows the rejection, preventing proper error flow to the caller's async wrapper (`runAsynchronouslyAndWaitUntil`), which owns the retry/reporting path.

Use `try`/`catch` with a re-throw instead. The error will still be logged via `runAsynchronously()` in the caller, but the promise rejection will flow correctly through the async chain.

Proposed fix:

```diff
-  await conn.reducers.updateMcpQaReview({
-    token,
-    correlationId: entry.correlationId,
-    qaReviewModelId: REVIEW_MODEL_ID,
-    ...update,
-  }).catch((err: unknown) => {
-    captureError("qa-reviewer", err instanceof Error ? err : new Error(String(err)));
-  });
+  try {
+    await conn.reducers.updateMcpQaReview({
+      token,
+      correlationId: entry.correlationId,
+      qaReviewModelId: REVIEW_MODEL_ID,
+      ...update,
+    });
+  } catch (err) {
+    captureError("qa-reviewer", err instanceof Error ? err : new Error(String(err)));
+    throw err;
+  }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/ai/qa-reviewer.ts` around lines 173 - 183: the current use of `.catch()` on `conn.reducers.updateMcpQaReview` swallows promise rejections. Change this to an explicit try/catch around the await call in the function that calls `getConnection()`, i.e. wrap the `await conn.reducers.updateMcpQaReview({...})` in `try { ... } catch (err) { captureError("qa-reviewer", err instanceof Error ? err : new Error(String(err))); throw err; }` so the error is still logged but re-thrown to let the caller (`runAsynchronouslyAndWaitUntil` / `runAsynchronously`) handle retries/reporting; keep the existing `token`, `correlationId`, and `update` payload unchanged and preserve the early return when `getConnection()` is falsy.

apps/backend/src/lib/ai/ai-query-handlers.ts-170-173 (1)
170-173: ⚠️ Potential issue | 🟠 Major

Log the triggering prompt instead of the first user turn.

`messages.find(...)` always picks the oldest user message, so multi-turn MCP logs/reviews will be attached to stale context. Use the latest user turn here, or reuse `mcpCallMetadata.userPrompt`, which already represents the prompt that triggered this call.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/ai/ai-query-handlers.ts` around lines 170 - 173: the current code picks the oldest user message via `messages.find(...)`, resulting in stale context. Change it to use the latest user turn or directly reuse `mcpCallMetadata.userPrompt` as the triggering prompt. Replace the logic that computes `firstUserMessage` and `question` so that `question` is set from `mcpCallMetadata.userPrompt` (or from the last element in `messages` with `role === "user"`) and ensure you reference the existing variables `messages`, `firstUserMessage`, `question`, and `mcpCallMetadata.userPrompt` when updating the computation.

apps/backend/src/lib/ai/ai-query-handlers.ts-90-98 (1)
90-98: ⚠️ Potential issue | 🟠 Major

Validate `toolResult.output` before assigning to `ContentBlock.result`.

The `as Json` cast bypasses the type system without validating the actual value. Since `buildStepsJson` calls `JSON.stringify` directly on `toolResult.output` without validation, non-JSON-serializable values (BigInt, Symbol, undefined, functions, cyclic objects) will cause runtime failures in the logging path. Use `isJsonSerializable` to validate before storing:

```ts
result: (toolResult?.output && isJsonSerializable(toolResult.output) ? toolResult.output : null)
```

Aligns with coding guidelines: "Do NOT use `as`/`any`/type casts to bypass the type system" and "Validate all assumptions either through the type system (preferred), assertions, or tests."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/ai/ai-query-handlers.ts` around lines 90 - 98: the code is casting `toolResult.output` to `Json` and may store non-serializable values into `ContentBlock.result`; replace that cast with a runtime validation using `isJsonSerializable` before assigning `result`. In the `blocks.push` call (the object with type "tool-call", `toolName`/`toolCallId`/`args`/`argsText`), use `resultsByCallId.get(toolCall.toolCallId)` to obtain `toolResult` and set `result` to `toolResult.output` only if `toolResult?.output` exists and `isJsonSerializable(toolResult.output)`; otherwise set `result` to null, and remove the `as Json` cast. This mirrors the validation used by `buildStepsJson` and prevents runtime `JSON.stringify` failures.
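The repo's `isJsonSerializable` helper is referenced above but not shown here; a minimal stand-in illustrating the failure modes the finding lists could look like this:

```typescript
// Hypothetical stand-in for the repo's isJsonSerializable helper: a value is
// accepted only if JSON.stringify round-trips without throwing (cyclic
// structures, BigInt) and without silently dropping the whole value
// (undefined, functions, symbols all stringify to undefined at the top level).
function isJsonSerializableSketch(value: unknown): boolean {
  try {
    return JSON.stringify(value) !== undefined;
  } catch {
    return false;
  }
}
```

Gating `ContentBlock.result` on a check like this turns a crash deep in the logging path into a clean `null` fallback.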
🟡 Minor comments (10)
apps/dashboard/src/lib/ai-dashboard/shared-prompt.ts-10-13 (1)
10-13: ⚠️ Potential issue | 🟡 Minor

Narrow `DashboardMessage.role` to a literal union instead of `string`.

The current type `role: string` weakens compile-time validation. All actual usages in the codebase construct messages with only `"user"` and `"assistant"` roles. Update to `role: "user" | "assistant"` to enforce this through the type system and prevent invalid role values from being introduced. This also aligns with similar message types elsewhere in the codebase (e.g. `ai-conversations.ts`).

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/dashboard/src/lib/ai-dashboard/shared-prompt.ts` around lines 10 - 13: update the `DashboardMessage` type to narrow the `role` field from `string` to the literal union `"user" | "assistant"`; change the `role` property in the `DashboardMessage` type definition so it reads `role: "user" | "assistant"` to match usages and other message types (e.g. `ai-conversations.ts`) and ensure compile-time validation prevents invalid roles from being created.

apps/internal-tool/src/utils.ts-7-8 (1)
7-8: ⚠️ Potential issue | 🟡 Minor

Remove the `as` cast and use direct property access or `Reflect.get()`.

After the type narrowing with `typeof`, the null check, and the `in` operator on line 7, TypeScript should allow accessing the property without a cast. The suggested refactor using `Reflect.get(ts, "__timestamp_micros_since_unix_epoch__")` is a valid approach that avoids the cast while remaining fully type-safe. The return value will be `unknown` and is properly validated on lines 9 - 10.

Suggested refactor:

```diff
 if (typeof ts === "object" && ts !== null && "__timestamp_micros_since_unix_epoch__" in ts) {
-  const micros = (ts as Record<string, unknown>).__timestamp_micros_since_unix_epoch__;
+  const micros = Reflect.get(ts, "__timestamp_micros_since_unix_epoch__");
```

This aligns with the coding guideline: `**/*.ts{,x}`: Do NOT use `as`/`any`/type casts to bypass the type system unless explicitly asked by the user.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/utils.ts` around lines 7 - 8: after the type narrowing that checks `typeof ts === "object"`, `ts !== null`, and the `in` operator for `"__timestamp_micros_since_unix_epoch__"`, remove the unnecessary cast `(ts as Record<string, unknown>)` and instead access the property in a type-safe way, e.g. `const micros = Reflect.get(ts, "__timestamp_micros_since_unix_epoch__")`, which avoids casts; keep the subsequent validation of `micros` unchanged so it remains unknown-typed and validated on the lines that follow.

apps/internal-tool/src/utils.ts-17-19 (1)
17-19: ⚠️ Potential issue | 🟡 Minor

Guard against non-finite numbers before creating `Date`.

When `ts` is `NaN` or `Infinity`, `new Date(ts)` silently creates an Invalid Date object, which breaks downstream sorting and filtering operations. Since the function already validates the bigint type (lines 9-11) and throws for unsupported types (line 20), add a finite-number check for consistency.

Suggested guard:

```diff
   if (typeof ts === "number") {
+    if (!Number.isFinite(ts)) {
+      throw new TypeError(`Expected finite timestamp number, got ${ts}`);
+    }
     return new Date(ts);
   }
```

This aligns with the coding guideline: "Validate all assumptions either through the type system (preferred), assertions, or tests."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/utils.ts` around lines 17 - 19: in the `typeof ts === "number"` branch of the timestamp-to-Date conversion function, validate that `ts` is a finite number (e.g. via `Number.isFinite`) before calling `new Date(ts)`; if it's not finite (`NaN`, `Infinity`), throw the same kind of unsupported/invalid-timestamp error used elsewhere so callers get a consistent failure mode. This keeps behavior aligned with the existing bigint check in the function and prevents creating Invalid Date objects during sorting/filtering.

apps/internal-tool/src/app/app-client.tsx-141-144 (1)
141-144: ⚠️ Potential issue | 🟡 Minor

Reset the usage selection in this tab handler.

This button clears `selectedRow`, not `selectedUsageRow`, so reopening the Usage tab keeps the previous usage detail pane selected.

Suggested fix:

```diff
  onClick={() => {
    setTab("usage");
-   setSelectedRow(null);
+   setSelectedUsageRow(null);
  }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/app/app-client.tsx` around lines 141 - 144, The onClick handler that switches to the "usage" tab calls setSelectedRow(null) but does not clear the usage-specific selection, so the prior usage detail remains; update the handler used when setting tab to "usage" (the onClick block invoking setTab("usage")) to also call setSelectedUsageRow(null) to reset usage selection (in addition to or instead of setSelectedRow) so reopening the Usage tab shows no previously selected usage.

apps/backend/src/app/api/latest/internal/mcp-review/add-manual/route.ts (1)
15-19: ⚠️ Potential issue | 🟡 Minor

Reject blank questions and answers at the API boundary.

`defined()` only guarantees the fields are present; `""` and whitespace-only payloads still reach `addManualQa()`. That lets a bad client create empty knowledge entries or publish blank answers. Add a trim/non-empty check in the schema or a handler assertion before calling the reducer.

As per coding guidelines, "Validate all assumptions either through the type system (preferred), assertions, or tests. Ideally, use at least two out of three validation methods."
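A minimal sketch of such a boundary check in plain TypeScript (the helper names are hypothetical; the actual route would express this through its yup-style schema helpers):

```typescript
// Hypothetical guard rejecting blank question/answer payloads at the API boundary.
function assertNonBlank(value: string, field: string): string {
  const trimmed = value.trim();
  if (trimmed.length === 0) {
    throw new Error(`${field} must not be empty or whitespace-only`);
  }
  return trimmed;
}

// Illustrative handler-side validation before calling the reducer.
function validateManualQa(body: { question: string, answer: string }) {
  return {
    question: assertNonBlank(body.question, "question"),
    answer: assertNonBlank(body.answer, "answer"),
  };
}
```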
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/app/api/latest/internal/mcp-review/add-manual/route.ts` around lines 15 - 19, The request body schema currently uses defined() which allows empty or whitespace-only strings; update the yup schema for question and answer to enforce non-empty trimmed content (e.g., use yupString().trim().min(1).defined() or .required()) and additionally add a defensive assertion before calling addManualQa() that ensures question.trim().length > 0 and answer.trim().length > 0 so blank payloads are rejected at the API boundary; reference the body schema block (yupObject/yupString) and the call site addManualQa() when making these changes.

apps/backend/src/lib/ai/mcp-logger.ts (1)
48-59: ⚠️ Potential issue | 🟡 Minor

`STACK_MCP_LOG_TOKEN` throws if not configured.

Line 54 uses `getEnvVariable("STACK_MCP_LOG_TOKEN")` without a default, which will throw if the token is not set. This could cause unexpected failures when SpacetimeDB is configured but the log token isn't. Consider either providing a default or handling this case explicitly.

🛡️ Proposed fix: Early return if token not configured

```diff
 export async function logMcpCall(entry: McpLogEntry): Promise<void> {
   const conn = await getConnection();
   if (!conn) {
     return;
   }
-  const token = getEnvVariable("STACK_MCP_LOG_TOKEN");
+  const token = getEnvVariable("STACK_MCP_LOG_TOKEN", "");
+  if (!token) {
+    return;
+  }
   await conn.reducers.logMcpCall({
     token,
     ...entry,
   });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/ai/mcp-logger.ts` around lines 48 - 59, The call to getEnvVariable("STACK_MCP_LOG_TOKEN") inside logMcpCall can throw if the env var is missing; change logMcpCall to handle a missing token by calling getEnvVariable with a safe accessor or catching the absence and doing an early return (or no-op) before invoking conn.reducers.logMcpCall; specifically, in function logMcpCall use a non-throwing lookup for STACK_MCP_LOG_TOKEN (or wrap it in a try/catch) and if token is falsy return immediately so that getConnection and conn.reducers.logMcpCall are not invoked without a valid token.

apps/backend/src/lib/ai/ai-proxy-handlers.ts (1)
97-122: ⚠️ Potential issue | 🟡 Minor

Use `performance.now()` for duration measurement instead of `Date.now()`.

Lines 116 and 135 use `Date.now() - startedAt` for duration calculation. Per coding guidelines, `performance.now()` should be used for measuring elapsed real time as it's more precise and not subject to system clock adjustments. Additionally, the caller in `apps/backend/src/app/api/latest/integrations/ai-proxy/[[...path]]/route.ts` at line 23 also initializes `startedAt` with `Date.now()` and needs to be updated to use `performance.now()` as well.

🔧 Proposed fix

In `route.ts` line 23, change:

```diff
-const startedAt = Date.now();
+const startedAt = performance.now();
```

In `ai-proxy-handlers.ts` lines 116 and 135, change:

```diff
-durationMs: BigInt(Date.now() - startedAt),
+durationMs: BigInt(Math.round(performance.now() - startedAt)),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/ai/ai-proxy-handlers.ts` around lines 97 - 122, Change elapsed-time measurements to use high-resolution timers: replace Date.now() usage with performance.now() when computing duration in observeAndLog (the two places that compute BigInt(Date.now() - startedAt) for building the log row and scheduling the log in the streaming and non‑streaming branches), and also update the caller that sets startedAt in the ai-proxy route handler (the initialization in route.ts) to use performance.now() instead of Date.now(); keep all other variables and the BigInt conversion intact so buildProxyLogRow, scheduleLog, and scanSseForUsage behavior is unchanged.

apps/backend/src/app/api/latest/ai/query/[mode]/route.ts (1)
73-73: ⚠️ Potential issue | 🟡 Minor

Use `performance.now()` instead of `Date.now()` for elapsed time measurement.

The `startedAt` value is used to calculate duration for logging. As per coding guidelines, `Date.now()` should not be used for measuring elapsed real time since it can be affected by system clock adjustments. Use `performance.now()` instead for monotonic timing.

Proposed fix:

```diff
- const startedAt = Date.now();
+ const startedAt = performance.now();
```

Note: Ensure the downstream logging code that calculates duration also uses `performance.now()` for the end time.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/app/api/latest/ai/query/`[mode]/route.ts at line 73, The timing measurement uses Date.now() (const startedAt) which is susceptible to system clock changes; replace Date.now() with performance.now() when setting startedAt in the route handler and update any downstream duration/end-time calculations that currently use Date.now() (or compute Date-based differences) to use performance.now() so the logged duration is monotonic and accurate.

apps/internal-tool/src/hooks/useSpacetimeDB.ts (1)
7-8: ⚠️ Potential issue | 🟡 Minor

Environment variables should use the `NEXT_PUBLIC_STACK_` prefix.

As per coding guidelines, all environment variables should be prefixed with `STACK_` (or `NEXT_PUBLIC_STACK_` if public) to ensure Turborepo picks up changes and improves readability.

Suggested fix:

```diff
-const rawHost = process.env.NEXT_PUBLIC_SPACETIMEDB_HOST;
-const rawDbName = process.env.NEXT_PUBLIC_SPACETIMEDB_DB_NAME;
+const rawHost = process.env.NEXT_PUBLIC_STACK_SPACETIMEDB_HOST;
+const rawDbName = process.env.NEXT_PUBLIC_STACK_SPACETIMEDB_DB_NAME;
```

Update the corresponding `.env` files and the `resolveEnv` error messages accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/hooks/useSpacetimeDB.ts` around lines 7 - 8, The code is reading environment vars rawHost and rawDbName from NEXT_PUBLIC_SPACETIMEDB_* which must be renamed to the STACK-prefixed public names; update the references in useSpacetimeDB.ts to read NEXT_PUBLIC_STACK_SPACETIMEDB_HOST and NEXT_PUBLIC_STACK_SPACETIMEDB_DB_NAME (update any helper like resolveEnv or error messages that mention the old names), and also update your .env files to use those new keys so Turborepo picks up changes and the error messages/reporting reflect the new variable names.

apps/backend/src/lib/ai/ai-query-handlers.ts (1)
122-123: ⚠️ Potential issue | 🟡 Minor

Use a monotonic clock for these latency metrics.

These durations are part of the analytics path, so `Date.now()` can skew them on wall-clock adjustments. Measure elapsed time with `performance.now()` end-to-end and convert explicitly when persisting the value.

As per coding guidelines, "Don't use `Date.now()` for measuring elapsed real time. Instead use `performance.now()`".

Also applies to: 143-144, 186-186
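The pattern the review asks for can be sketched as follows (a generic helper, assuming only the global `performance` API available in Node 16+ and browsers; `Math.round` is needed because `BigInt()` throws on fractional values):

```typescript
// Sketch: measure elapsed time with the monotonic performance.now() clock
// and round before converting to BigInt for persistence.
function measure<T>(fn: () => T): { result: T, durationMs: bigint } {
  const startedAt = performance.now();
  const result = fn();
  // performance.now() returns fractional milliseconds; round before BigInt().
  const durationMs = BigInt(Math.round(performance.now() - startedAt));
  return { result, durationMs };
}
```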
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/ai/ai-query-handlers.ts` around lines 122 - 123, Replace wall-clock timing with a monotonic clock: capture startedAt using performance.now() (instead of Date.now()) and compute durationMs by taking the difference from performance.now(), converting to integer milliseconds (e.g. Math.round or Math.floor) and wrapping with BigInt for persistence; update the three places that set durationMs (the current occurrence where durationMs: BigInt(Date.now() - startedAt) and the other two occurrences referenced) to use BigInt(Math.round(performance.now() - startedAt)) and ensure any startedAt declarations are initialized from performance.now().
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ec17aeec-5320-4a28-b172-0049e8b2afe6
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (93)
- .github/workflows/db-migration-backwards-compatibility.yaml
- .github/workflows/e2e-api-tests-local-emulator.yaml
- .github/workflows/e2e-api-tests.yaml
- .github/workflows/e2e-custom-base-port-api-tests.yaml
- .github/workflows/e2e-fallback-tests.yaml
- .github/workflows/lint-and-build.yaml
- README.md
- apps/backend/.env
- apps/backend/.env.development
- apps/backend/package.json
- apps/backend/src/app/api/latest/ai/query/[mode]/route.ts
- apps/backend/src/app/api/latest/integrations/ai-proxy/[[...path]]/route.ts
- apps/backend/src/app/api/latest/internal/mcp-review/add-manual/route.ts
- apps/backend/src/app/api/latest/internal/mcp-review/delete/route.ts
- apps/backend/src/app/api/latest/internal/mcp-review/mark-reviewed/route.ts
- apps/backend/src/app/api/latest/internal/mcp-review/update-correction/route.ts
- apps/backend/src/lib/ai/ai-proxy-handlers.ts
- apps/backend/src/lib/ai/ai-query-handlers.ts
- apps/backend/src/lib/ai/ai-query-logger.ts
- apps/backend/src/lib/ai/mcp-logger.ts
- apps/backend/src/lib/ai/openrouter-usage.ts
- apps/backend/src/lib/ai/prompts.ts
- apps/backend/src/lib/ai/qa-reviewer.ts
- apps/backend/src/lib/ai/schema.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/add_manual_qa_reducer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/ai_query_log_table.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/delete_qa_entry_reducer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/index.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/log_ai_query_reducer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/log_mcp_call_reducer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/mark_human_reviewed_reducer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/mcp_call_log_table.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/types.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/types/procedures.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/types/reducers.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/update_human_correction_reducer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/update_mcp_qa_review_reducer.ts
- apps/backend/src/lib/ai/tools/docs.ts
- apps/backend/src/lib/ai/verified-qa.ts
- apps/dashboard/src/lib/ai-dashboard/shared-prompt.ts
- apps/dev-launchpad/public/index.html
- apps/internal-tool/.env
- apps/internal-tool/.env.development
- apps/internal-tool/.eslintrc.cjs
- apps/internal-tool/next.config.mjs
- apps/internal-tool/package.json
- apps/internal-tool/postcss.config.js
- apps/internal-tool/scripts/pre-dev.mjs
- apps/internal-tool/scripts/spacetime-publish.mjs
- apps/internal-tool/scripts/spacetime-token.mjs
- apps/internal-tool/spacetime.json
- apps/internal-tool/spacetimedb/package.json
- apps/internal-tool/spacetimedb/src/index.ts
- apps/internal-tool/spacetimedb/tsconfig.json
- apps/internal-tool/src/app/app-client.tsx
- apps/internal-tool/src/app/globals.css
- apps/internal-tool/src/app/handler/[...stack]/page.tsx
- apps/internal-tool/src/app/layout.tsx
- apps/internal-tool/src/app/loading.tsx
- apps/internal-tool/src/app/page.tsx
- apps/internal-tool/src/app/questions/page.tsx
- apps/internal-tool/src/components/AddManualQa.tsx
- apps/internal-tool/src/components/Analytics.tsx
- apps/internal-tool/src/components/CallLogDetail.tsx
- apps/internal-tool/src/components/CallLogList.tsx
- apps/internal-tool/src/components/ConversationReplay.tsx
- apps/internal-tool/src/components/KnowledgeBase.tsx
- apps/internal-tool/src/components/Usage.tsx
- apps/internal-tool/src/components/UsageDetail.tsx
- apps/internal-tool/src/components/markdown-components.tsx
- apps/internal-tool/src/hooks/useSpacetimeDB.ts
- apps/internal-tool/src/lib/mcp-review-api.ts
- apps/internal-tool/src/module_bindings/add_manual_qa_reducer.ts
- apps/internal-tool/src/module_bindings/ai_query_log_table.ts
- apps/internal-tool/src/module_bindings/delete_qa_entry_reducer.ts
- apps/internal-tool/src/module_bindings/index.ts
- apps/internal-tool/src/module_bindings/log_ai_query_reducer.ts
- apps/internal-tool/src/module_bindings/log_mcp_call_reducer.ts
- apps/internal-tool/src/module_bindings/mark_human_reviewed_reducer.ts
- apps/internal-tool/src/module_bindings/mcp_call_log_table.ts
- apps/internal-tool/src/module_bindings/types.ts
- apps/internal-tool/src/module_bindings/types/procedures.ts
- apps/internal-tool/src/module_bindings/types/reducers.ts
- apps/internal-tool/src/module_bindings/update_human_correction_reducer.ts
- apps/internal-tool/src/module_bindings/update_mcp_qa_review_reducer.ts
- apps/internal-tool/src/stack.ts
- apps/internal-tool/src/types.ts
- apps/internal-tool/src/utils.ts
- apps/internal-tool/tailwind.config.js
- apps/internal-tool/tsconfig.json
- docker/dependencies/docker.compose.yaml
- docs/src/app/api/internal/[transport]/route.ts
- docs/src/lib/docs-tools-operations.ts
```diff
@@ -0,0 +1,6 @@
NEXT_PUBLIC_STACK_API_URL=http://localhost:${NEXT_PUBLIC_STACK_PORT_PREFIX:-81}02
```
🧩 Analysis chain
🌐 Web query:
Does Next.js (current stable) support `${VAR:-default}` shell-style fallback expansion in `.env*` files, or only `$VAR`/`${VAR}` interpolation?
💡 Result:
No, the current stable version of Next.js (16.x as of 2026-04-15) does not support shell-style fallback expansion like ${VAR:-default} in .env* files. It only supports basic variable interpolation using @next/env, which internally relies on dotenv for parsing and dotenv-expand for interpolation. While dotenv-expand v6+ supports ${VAR:-default} (added via upstream PR in 2021), Next.js docs do not document or exemplify this syntax. Early adoption in 2022 (Next.js 12.2+) had bugs (e.g., resulting in literal ":-default}"), and official guidance remains limited to simple $VAR references. For defaults, use code-level checks (e.g., process.env.VAR || 'default') or multiple .env files with load order precedence.
Citations:
- 1: https://nextjs.org/docs/15/app/guides/environment-variables
- 2: https://nextjs.org/docs/pages/guides/environment-variables
- 3: https://nextjs.org/docs/15/pages/guides/environment-variables
- 4: https://nextjs.org/docs/app/guides/environment-variables
- 5: feat: allow dotenv default expansion vercel/next.js#34765
- 6: dotenv-expand' "default" expansion not working vercel/next.js#34718
- 7: https://nextjs.org/support-policy
- 8: https://www.npmjs.com/package/@next/env
Replace with explicit port values — Next.js does not support ${VAR:-default} shell-style fallback syntax.
Next.js only supports $VAR and ${VAR} interpolation in .env* files. The :-81 fallback syntax is unsupported and will not evaluate, resulting in malformed URLs and broken local connectivity.
Set explicit values: either define NEXT_PUBLIC_STACK_PORT_PREFIX in .env.development (e.g., NEXT_PUBLIC_STACK_PORT_PREFIX=81) or use literal URLs (e.g., NEXT_PUBLIC_STACK_API_URL=http://localhost:8102).
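A code-level default, as the query result suggests, might look like this (the helper name and URL shape are illustrative, mirroring the env entry under review):

```typescript
// Sketch: apply the port-prefix default in code rather than relying on
// unsupported ${VAR:-default} expansion in .env files.
function resolveApiUrl(env: Record<string, string | undefined>): string {
  const portPrefix = env["NEXT_PUBLIC_STACK_PORT_PREFIX"] ?? "81";
  return `http://localhost:${portPrefix}02`;
}
```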
🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 1-1: [SubstitutionKey] The NEXT_PUBLIC_STACK_API_URL key is not assigned properly
(SubstitutionKey)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/internal-tool/.env.development` at line 1, The NEXT_PUBLIC_STACK_API_URL
entry uses unsupported shell-style fallback
`${NEXT_PUBLIC_STACK_PORT_PREFIX:-81}` causing malformed URLs; update the .env
entry so NEXT_PUBLIC_STACK_API_URL is a concrete URL or define
NEXT_PUBLIC_STACK_PORT_PREFIX explicitly. Either add a line setting
NEXT_PUBLIC_STACK_PORT_PREFIX=81 and keep NEXT_PUBLIC_STACK_API_URL using
${NEXT_PUBLIC_STACK_PORT_PREFIX} or replace NEXT_PUBLIC_STACK_API_URL with the
literal URL (e.g., http://localhost:8102) to ensure Next.js can interpolate it
at build time.
Actionable comments posted: 3
🧹 Nitpick comments (1)
apps/backend/src/app/api/latest/ai/query/[mode]/route.ts (1)
99-101: Avoid casting request messages to `ModelMessage[]`.

The `messages as ModelMessage[]` assertion bypasses the last type boundary before we hand request data to the AI SDK. Prefer making `requestBodySchema` / the route input type produce `ModelMessage[]` directly, or add an explicit converter/assertion here so incompatible content parts fail loudly instead of slipping into `streamText`/`generateText`. As per coding guidelines, "Do NOT use `as`/`any`/type casts to bypass the type system unless explicitly asked by the user. Most places where you would use type casts don't actually need them."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/app/api/latest/ai/query/`[mode]/route.ts around lines 99 - 101, The code is unsafely casting incoming messages into ModelMessage[] when building cachedMessages; replace the cast with proper validation/conversion so invalid shapes fail early. Update the route's input schema (requestBodySchema) or the route handler to parse/validate the incoming messages into ModelMessage objects (or map/convert each message into ModelMessage) before using them, and then construct cachedMessages = [systemMessage, ...validatedMessages]; ensure ModeContext (ctx) receives the validated array so downstream functions like streamText/generateText only get well-typed ModelMessage instances rather than relying on an "as" cast.
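The validate-instead-of-cast pattern can be sketched like this (the `SimpleMessage` shape is a deliberately simplified stand-in; the real `ModelMessage` type from the AI SDK is richer):

```typescript
// Hypothetical runtime guard replacing `as ModelMessage[]`. SimpleMessage is a
// simplified stand-in for the AI SDK's ModelMessage; only the pattern matters.
type SimpleMessage = { role: "user" | "assistant" | "system", content: string };

function isSimpleMessage(value: unknown): value is SimpleMessage {
  if (typeof value !== "object" || value === null) return false;
  // Reflect.get avoids casting the narrowed object to an index signature.
  const role = Reflect.get(value, "role");
  const content = Reflect.get(value, "content");
  return (
    (role === "user" || role === "assistant" || role === "system") &&
    typeof content === "string"
  );
}

function parseMessages(input: unknown): SimpleMessage[] {
  if (!Array.isArray(input) || !input.every(isSimpleMessage)) {
    throw new TypeError("Invalid messages payload");
  }
  return input;
}
```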
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/backend/src/app/api/latest/ai/query/`[mode]/route.ts:
- Around line 74-87: The current common object includes messagesJson:
JSON.stringify(messages) which persists full prompt bodies; replace this with a
redacted summary or deterministic hash instead: build a redaction function that
maps each message to metadata only (role, name if present, char/word/token
length, and tool name) or compute a SHA-256 hash of the concatenated message
bodies, then set messagesJson to that redactedSummaryOrHash; update the code
that constructs common (the common object and messagesJson field) and ensure
downstream callers like logSuccess and logFailure receive only the
redactedSummaryOrHash (not raw messages) so no raw prompt content is logged.
- Around line 89-101: The timing uses Date.now() for startedAt which can be
affected by wall-clock changes; replace the assignment const startedAt =
Date.now(); with a monotonic clock call (e.g. performance.now()) so durationMs
calculations (Date.now() - startedAt) become monotonic; update the declaration
of startedAt in route.ts (the ModeContext payload where startedAt is set) to use
performance.now() and ensure any consumers that compute durationMs continue to
subtract the same clock values (no other changes to duration calculation logic
needed).
In `@apps/dashboard/src/lib/ai-dashboard/shared-prompt.ts`:
- Around line 48-50: getAllTypeDefinitionFiles currently returns every bundled
type file and, together with loadSelectedTypeDefinitions, causes the entire SDK
to be embedded into prompts; instead implement a deterministic relevance filter
(e.g., match by feature keywords, AST/type references, or filename heuristics)
or enforce a hard byte/token cap before building cachedText so only the most
relevant files are included. Update
getAllTypeDefinitionFiles/loadSelectedTypeDefinitions to accept a relevance
selector or maxTokens parameter, apply the filter/sorting to pick
highest‑relevance files, and truncate by cumulative byte/token size before
concatenation; ensure the logic is deterministic (same inputs => same selection)
and reference getAllTypeDefinitionFiles and loadSelectedTypeDefinitions in your
change.
---
Nitpick comments:
In `@apps/backend/src/app/api/latest/ai/query/`[mode]/route.ts:
- Around line 99-101: The code is unsafely casting incoming messages into
ModelMessage[] when building cachedMessages; replace the cast with proper
validation/conversion so invalid shapes fail early. Update the route's input
schema (requestBodySchema) or the route handler to parse/validate the incoming
messages into ModelMessage objects (or map/convert each message into
ModelMessage) before using them, and then construct cachedMessages =
[systemMessage, ...validatedMessages]; ensure ModeContext (ctx) receives the
validated array so downstream functions like streamText/generateText only get
well-typed ModelMessage instances rather than relying on an "as" cast.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b430826b-bfaa-4365-83f6-ad98c65b7fd7
📒 Files selected for processing (3)
apps/backend/src/app/api/latest/ai/query/[mode]/route.tsapps/dashboard/src/lib/ai-dashboard/shared-prompt.tsapps/internal-tool/src/module_bindings/index.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/internal-tool/src/module_bindings/index.ts
```typescript
function getAllTypeDefinitionFiles(): string[] {
  return BUNDLED_TYPE_DEFINITIONS.map((f: { path: string }) => f.path);
}
```
Loading every bundled type definition into every prompt will not scale.
`getAllTypeDefinitionFiles()` plus `loadSelectedTypeDefinitions(...)` makes each dashboard request ship the entire bundled SDK surface, regardless of the current task. Even with prompt caching on the first context message, cold requests and cache misses still pay that full token cost, and the prompt size now grows monotonically with every file added to `BUNDLED_TYPE_DEFINITIONS`. Please keep a deterministic relevance filter or enforce a hard byte/token cap before building `cachedText`.
Also applies to: 81-102
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard/src/lib/ai-dashboard/shared-prompt.ts` around lines 48 - 50,
getAllTypeDefinitionFiles currently returns every bundled type file and,
together with loadSelectedTypeDefinitions, causes the entire SDK to be embedded
into prompts; instead implement a deterministic relevance filter (e.g., match by
feature keywords, AST/type references, or filename heuristics) or enforce a hard
byte/token cap before building cachedText so only the most relevant files are
included. Update getAllTypeDefinitionFiles/loadSelectedTypeDefinitions to accept
a relevance selector or maxTokens parameter, apply the filter/sorting to pick
highest‑relevance files, and truncate by cumulative byte/token size before
concatenation; ensure the logic is deterministic (same inputs => same selection)
and reference getAllTypeDefinitionFiles and loadSelectedTypeDefinitions in your
change.
N2D4 left a comment:
lmk once this is ready for review
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
apps/internal-tool/src/lib/ai/spacetimedb-bindings/my_visible_mcp_call_log_table.ts (1)
13-42: ⚠️ Potential issue | 🟠 Major

Regenerate this binding from the updated SpacetimeDB schema.

The `apps/internal-tool/src/lib/ai/spacetimedb-bindings/my_visible_mcp_call_log_table.ts` binding is out of sync with the current schema. It is missing the `shard: __t.u8()` column and incorrectly models `publishedToQa` as optional when it is now required. Update the module source and regenerate the binding to match the schema used in `apps/internal-tool/src/module_bindings/my_visible_mcp_call_log_table.ts` and `apps/backend/src/lib/ai/spacetimedb-bindings/my_visible_mcp_call_log_table.ts`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/lib/ai/spacetimedb-bindings/my_visible_mcp_call_log_table.ts` around lines 13 - 42, The exported default __t.row binding is out of sync with the SpacetimeDB schema: add the missing column shard using shard: __t.u8() and change publishedToQa from an optional field to required by replacing __t.option(__t.bool()).name("published_to_qa") with __t.bool().name("published_to_qa"); update the file that defines the row (export default __t.row({...})) accordingly and regenerate the binding so it matches the schema used by the corresponding my_visible_mcp_call_log_table module_bindings/backend bindings.

apps/backend/src/lib/ai/qa-reviewer.ts (1)
132-157: ⚠️ Potential issue | 🟠 Major

Validate every reducer-bound QA field before casting.

Line 139 accepts responses where `improvementSuggestions` is missing or non-string, and line 149 allows `NaN` through because `typeof NaN === "number"`. Those values are later sent to required `string`/`u32` reducer parameters, so a malformed LLM response can make the QA update fail instead of recording the review result.

Proposed fix:

```diff
 if (
   typeof raw.needsHumanReview !== "boolean" ||
   typeof raw.answerCorrect !== "boolean" ||
   typeof raw.answerRelevant !== "boolean" ||
   !Array.isArray(raw.flags) ||
-  typeof raw.overallScore !== "number"
+  raw.flags.some(flag =>
+    typeof flag !== "object" ||
+    flag == null ||
+    Array.isArray(flag) ||
+    typeof flag.type !== "string" ||
+    typeof flag.severity !== "string" ||
+    typeof flag.explanation !== "string"
+  ) ||
+  typeof raw.improvementSuggestions !== "string" ||
+  typeof raw.overallScore !== "number" ||
+  !Number.isFinite(raw.overallScore)
 ) {
   throw new StackAssertionError(`Invalid QA review response shape: ${JSON.stringify(raw).slice(0, 200)}`);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/ai/qa-reviewer.ts` around lines 132 - 157, The QA-parsing block must validate all reducer-bound fields before casting: add an explicit check that raw.improvementSuggestions is a string (and that raw.flags is an array of objects with expected keys) and replace the naive typeof overallScore check with Number.isFinite(raw.overallScore) to avoid NaN passing through; then safely coerce/fallback (e.g., default empty string for improvementSuggestions and clamp/floor a finite overallScore to 0–100) before assigning to parsed and building the update object (references: the parsed shape, parsed.improvementSuggestions, parsed.overallScore, and update keys qaImprovementSuggestions and qaOverallScore).

apps/internal-tool/src/app/app-client.tsx (1)
64-93: ⚠️ Potential issue | 🟠 Major

Add explicit reviewer gating and move the metadata check before subscriptions.

Lines 66-67 initialize subscriptions before the `!user` check (line 69), allowing unauthenticated users to attempt connection with a stored SpacetimeDB token. Additionally, line 92 uses a type cast to bypass the type system, and line 93 treats any truthy `isAiChatReviewer` value as approved, while the backend requires strict `=== true`. Compute a strict reviewer flag early, move subscriptions into the authenticated block, and return early if access is denied.

Move the metadata check from lines 92-93 to immediately after the user check, compute an explicit `isAiChatReviewer === true` check (without type casts), and only call `useMcpCallLogs`/`useAiQueryLogs` after confirming the reviewer is authorized. This prevents unauthenticated connection attempts and aligns metadata validation with backend expectations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/app/app-client.tsx` around lines 64 - 93, Compute an explicit reviewer flag (const isReviewer = metadata?.isAiChatReviewer === true) immediately after confirming user exists, remove the cast to Record<string, unknown>, and return early if isReviewer is false; then only initialize subscriptions/hooks (useMcpCallLogs, useAiQueryLogs) and use the memoized ensureEnrolled after that authorization check so unauthenticated users cannot trigger SpacetimeDB connections via ensureEnrolled or the hooks.
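The strict-equality gating the comment asks for can be sketched as (the metadata key mirrors the review; everything else is illustrative):

```typescript
// Sketch: strict boolean gating. Truthy checks would accept strings like
// "yes" or "false"; the backend expects an exact boolean true.
function isApprovedReviewer(metadata: Record<string, unknown> | undefined): boolean {
  return metadata?.["isAiChatReviewer"] === true;
}
```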
🧹 Nitpick comments (6)
apps/internal-tool/src/components/KnowledgeBase.tsx (2)
126-127: Minor accessibility gaps in `ConfirmDialog`.

The modal has no `role="dialog"`/`aria-modal`, no Escape-key handler, and no focus management, so keyboard users can't easily dismiss it and screen readers won't announce it. For an internal tool this is low priority, but adding a keydown listener for Escape and an `aria-modal="true"` + `role="dialog"` would be a cheap win.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/components/KnowledgeBase.tsx` around lines 126 - 127, The modal wrapper currently lacks accessibility attributes and an Escape-key handler; update the ConfirmDialog (the outer div that uses onCancel) to include role="dialog" and aria-modal="true", add a ref to the dialog container and call focus() on mount (or focus the first focusable element) to set initial focus, and register a keydown listener (via useEffect) that calls the existing onCancel when Escape is pressed; ensure you also stopPropagation on inner content (existing onClick stop) and clean up the event listener on unmount.
274-285: Consider dropping the "Edit" confirmation.

Unlike publish/unpublish/delete, opening the inline editor is a non-destructive action (the user can still cancel without saving). Requiring a confirm dialog here adds friction without a safety benefit — consider wiring the Edit button directly to `onStartEdit()` and keeping `ConfirmDialog` only for publish/unpublish/delete.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/components/KnowledgeBase.tsx` around lines 274 - 285, Remove the confirmation step for edit actions by deleting the conditional block that renders ConfirmDialog when pending === "edit" and instead invoke onStartEdit() directly from the Edit button handler; update any handlers that previously setPending("edit") to call onStartEdit() (and clear pending state if needed via setPending(null) inside the Edit button flow), leaving ConfirmDialog usage intact only for "publish"/"unpublish"/"delete" flows (symbols to change: pending, ConfirmDialog, setPending, onStartEdit).

apps/internal-tool/src/lib/mcp-review-api.ts (1)
67-67: Use the project URL helper for this new endpoint.

This newly added `fetch` URL uses normal string interpolation. Please switch it to the repo's URL helper pattern, or otherwise encode dynamic URL pieces consistently. As per coding guidelines, use the `urlString` template tag or `encodeURIComponent()` instead of normal string interpolation for URLs, for consistency even if it's not strictly necessary.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/lib/mcp-review-api.ts` at line 67, The fetch call assigned to `res` currently uses a plain template literal for the endpoint; replace it with the repo URL helper so dynamic URL pieces are encoded consistently—use the `urlString` template tag (e.g., urlString`/api/latest/internal/spacetimedb-enroll-reviewer`) or, if you must manually build the path, wrap any dynamic segments with `encodeURIComponent()`; update the `fetch` invocation that references `API_URL`/`spacetimedb-enroll-reviewer` to use that helper so URL encoding and project conventions are preserved.

apps/backend/src/lib/ai/mcp-logger.ts (1)
55-57: Encode URL path components before constructing SpacetimeDB endpoints.

`dbName` comes from env and `reducer` is a dynamic helper argument; encode them before placing them in the path.

Suggested fix

```diff
-  const dbName = getEnvVariable("STACK_SPACETIMEDB_DB_NAME");
-  const res = await fetch(`${base}/v1/database/${dbName}/call/${reducer}`, {
+  const dbName = encodeURIComponent(getEnvVariable("STACK_SPACETIMEDB_DB_NAME"));
+  const reducerName = encodeURIComponent(reducer);
+  const res = await fetch(`${base}/v1/database/${dbName}/call/${reducerName}`, {
```

```diff
-  const dbName = getEnvVariable("STACK_SPACETIMEDB_DB_NAME");
-  const res = await fetch(`${base}/v1/database/${dbName}/sql`, {
+  const dbName = encodeURIComponent(getEnvVariable("STACK_SPACETIMEDB_DB_NAME"));
+  const res = await fetch(`${base}/v1/database/${dbName}/sql`, {
```

As per coding guidelines, "Use `urlString` or `encodeURIComponent()` instead of normal string interpolation for URLs, for consistency even if it's not strictly necessary".
Also applies to: 96-98
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/ai/mcp-logger.ts` around lines 55 - 57, The URL path is constructed with unescaped values dbName and reducer in the fetch call inside mcp-logger (e.g., the const dbName = getEnvVariable(...) and the POST to `${base}/v1/database/${dbName}/call/${reducer}`); encode both path components before interpolation (use encodeURIComponent() or your project's urlString`` helper) so the SpacetimeDB endpoint is safe; apply the same change to the other fetch usage around lines 96-98 that builds a similar path.

apps/backend/src/lib/ai/ai-query-handlers.ts (1)
91-98: Validate or normalize tool outputs instead of casting to `Json`.

`toolResult?.output` is not proven JSON-safe here; the cast can hide values that later fail response/log serialization. Normalize through an existing JSON utility or add a small serializer boundary before putting it into `ContentBlock`.

As per coding guidelines, "Do NOT use `as`/`any`/type casts or anything else like that to bypass the type system."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/ai/ai-query-handlers.ts` around lines 91 - 98, The code is unsafely casting toolResult?.output to Json when building the "tool-call" ContentBlock; replace the cast with a real serialization/validation boundary: locate the array push that creates the block (the object with type: "tool-call", toolName, toolCallId, argsText, result) and run toolResult?.output through an existing JSON sanitizer/serializer utility (or JSON.stringify then parse back via a safeDeserialize helper) to ensure the value is valid JSON or normalized (and handle/omit non-serializable values or fallback to null), then assign that sanitized value to result instead of using (toolResult?.output as Json).

apps/internal-tool/src/components/Usage.tsx (1)
226-229: Don't silently drop malformed tool-usage data.

If `requestedToolsJson` is invalid, the dashboard currently hides that row from tool analytics with no signal. Surface an "invalid tools JSON" bucket or route the parse through a shared safe parser with explicit error handling.

As per coding guidelines, "NEVER try-catch-all" and "NEVER just silently use fallback values when you don't know how to fix type errors."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/internal-tool/src/components/Usage.tsx` around lines 226 - 229, The current try/catch around JSON.parse(requestedToolsJson) swallows parse errors and hides rows; change it to use an explicit safe parse and surface malformed entries by catching the error into a named variable and incrementing a dedicated bucket (e.g. toolCounts.set("invalid_tools_json", ...)) or calling the shared safe parser (e.g. parseJsonSafe) that returns an error result; also log the parse error with context (requestedToolsJson and record id) instead of an empty catch. Update the block that references requestedToolsJson and toolCounts so malformed JSON increments the "invalid_tools_json" counter and the error is reported rather than silently ignored.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/backend/.env.development`:
- Around line 117-123: The env currently enables SpacetimeDB by setting
STACK_SPACETIMEDB_URL while leaving STACK_SPACETIMEDB_SERVICE_TOKEN empty
causing getServiceToken() to throw; fix by either leaving STACK_SPACETIMEDB_URL
blank in the default .env.development until a token is provisioned, or update
the dev bootstrap logic that provisions the token to write
STACK_SPACETIMEDB_SERVICE_TOKEN into the env (or equivalent config) before the
backend starts; alternatively, make getServiceToken() tolerant of an empty token
by returning null/undefined and gating SpacetimeDB initialization behind a token
check (referencing getServiceToken(), STACK_SPACETIMEDB_URL, and
STACK_SPACETIMEDB_SERVICE_TOKEN).
In `@apps/backend/prisma/seed.ts`:
- Around line 428-434: The update call to usersCrudHandlers.adminUpdate is
overwriting client_read_only_metadata instead of merging; fetch the existing
user's read-only metadata (e.g., load the default user by defaultUserId within
internalTenancy) and merge its keys with the new { isAiChatReviewer: true }
before calling adminUpdate so you preserve any preexisting keys on
client_read_only_metadata.
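The merge described above can be sketched as a pure helper; the handler call shown in the comments mirrors the seed script's API shape but is an assumption, not verified against the actual `usersCrudHandlers` signature.

```typescript
// Merge existing client_read_only_metadata with new keys instead of overwriting.
type Metadata = Record<string, unknown>;

function mergeReadOnlyMetadata(existing: Metadata | null, patch: Metadata): Metadata {
  // Spread the existing keys first so the patch wins on conflicts,
  // while preexisting keys survive the update.
  return { ...(existing ?? {}), ...patch };
}

// In seed.ts this would be used roughly as (hypothetical API shape):
// const existing = await usersCrudHandlers.adminRead({ tenancy: internalTenancy, user_id: defaultUserId });
// await usersCrudHandlers.adminUpdate({
//   tenancy: internalTenancy,
//   user_id: defaultUserId,
//   data: {
//     client_read_only_metadata: mergeReadOnlyMetadata(
//       existing.client_read_only_metadata, { isAiChatReviewer: true }),
//   },
// });
```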
In
`@apps/backend/src/app/api/latest/internal/mcp-review/update-correction/route.ts`:
- Around line 38-45: The reducer call for the required mutation
update_human_correction currently uses the best-effort callReducer which can
early-return when the SpacetimeDB token is missing, allowing the API to respond
{ success: true } without persisting the correction; modify the route handler to
ensure the reducer actually ran by either invoking a strict reducer variant
(e.g., callReducerStrict or the equivalent API that throws on missing token) or
by checking callReducer's return and throwing an error when it indicates it was
skipped, and only return { success: true } after a successful reducer result;
references: callReducer, update_human_correction, the route handler in route.ts.
In
`@apps/backend/src/app/api/latest/internal/spacetimedb-enroll-reviewer/route.ts`:
- Around line 38-43: The call to callReducer uses the reducer "add_operator" but
incorrectly passes the identity as an array ([`0x${body.identity}`]) which
shifts all subsequent args; change the second argument so the identity is a
scalar string `0x${body.identity}` (not wrapped in an array) when invoking
callReducer("add_operator", ...), keeping the other args (token, user.id,
user.display_name ?? user.primary_email ?? user.id) unchanged.
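A minimal illustration of why the array wrapper matters, assuming the reducer receives its arguments positionally; `buildAddOperatorArgs` is a hypothetical helper, not code from the PR.

```typescript
// SpacetimeDB reducers take positional arguments. If the first value is
// wrapped in an array ([`0x${identity}`]), the reducer sees an array where it
// expects a string and every subsequent argument shifts by one position.
function buildAddOperatorArgs(
  identityHex: string,
  token: string,
  userId: string,
  displayName: string,
): unknown[] {
  // Identity stays a scalar string, keeping token/userId/displayName aligned.
  return [`0x${identityHex}`, token, userId, displayName];
}
```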
In `@apps/backend/src/lib/ai/ai-query-handlers.ts`:
- Around line 170-174: The code currently uses messages.find(...) which returns
the first user message; change it to pick the latest user turn so `question`
reflects the most recent user input. Locate the symbols firstUserMessage and
question in ai-query-handlers.ts and replace the find logic with a
search-from-end (e.g., iterate messages from the last index or reverse and find)
to get the last message with role === "user", then derive question from that
message; keep innerToolCallsJson unchanged.
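The search-from-end logic can be sketched as below, assuming messages carry a simple `role`/`content` shape (the real message type in the codebase may differ).

```typescript
type Message = { role: "user" | "assistant" | "system", content: string };

function lastUserMessage(messages: Message[]): Message | undefined {
  // Iterate from the end so the most recent user turn wins,
  // instead of messages.find(...) which returns the first one.
  for (let i = messages.length - 1; i >= 0; i--) {
    const message = messages[i];
    if (message !== undefined && message.role === "user") return message;
  }
  return undefined;
}
```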
In `@apps/backend/src/lib/ai/mcp-logger.ts`:
- Around line 56-70: The fetch calls in rawCallReducer and callSql currently
have no timeout and can hang; update both fetch invocations to pass an abort
signal using AbortSignal.timeout(10_000) (or a shared FETCH_TIMEOUT_MS constant
set to 10_000) by adding signal: AbortSignal.timeout(10_000) to the fetch
options so the request is aborted after 10s; ensure the same pattern is applied
in both functions (rawCallReducer and callSql) and that any response/error
handling accounts for an aborted request.
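A sketch of the timeout wiring, assuming a runtime where `AbortSignal.timeout` exists (Node 17.3+); the `FetchInit` type is a reduced stand-in for the real fetch options.

```typescript
const FETCH_TIMEOUT_MS = 10_000;

type FetchInit = { method?: string, body?: string, signal?: AbortSignal };

function withTimeout(init: FetchInit = {}): FetchInit {
  // Attach an abort signal so the request is cancelled after FETCH_TIMEOUT_MS
  // instead of hanging indefinitely.
  return { ...init, signal: AbortSignal.timeout(FETCH_TIMEOUT_MS) };
}

// Inside rawCallReducer / callSql this would look roughly like:
// const res = await fetch(url, withTimeout({ method: "POST", body }));
// with error handling that treats an AbortError as a timeout.
```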
In `@apps/internal-tool/scripts/pre-dev.mjs`:
- Line 88: The URL used in the fetch call builds a path with dbName directly
(spacetimeHttpUrl and dbName), which can break for special characters; update
the fetch URL construction to encode the path segment by applying
encodeURIComponent(dbName) or using the urlString utility so the database name
is percent-encoded before interpolation into
`${spacetimeHttpUrl}/v1/database/${...}/sql`, keeping the rest of the fetch call
and variable names (spacetimeHttpUrl, dbName) unchanged.
- Around line 67-79: The minted token is not validated before being written, so
if res.json() yields malformed data we end up writing
STACK_SPACETIMEDB_SERVICE_TOKEN=undefined; after parsing the response (where
body is assigned from await res.json() and token = body.token) validate that
token is a non-empty string (e.g., typeof token === "string" && token.trim() !==
"") and if it fails, log a clear warning and return without calling
appendFileSync; ensure the validation happens before computing
prefix/existingContent and before writing to backendEnvLocal so only valid
tokens are persisted.
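The validation gate can be a small type guard, as sketched below; the surrounding pre-dev.mjs flow is shown only in comments and is an assumption.

```typescript
// Validate the minted token before persisting it, so a malformed identity
// response never writes STACK_SPACETIMEDB_SERVICE_TOKEN=undefined to the env file.
function isValidServiceToken(token: unknown): token is string {
  return typeof token === "string" && token.trim() !== "";
}

// In pre-dev.mjs, before computing prefix/existingContent and appendFileSync:
// const token = body.token;
// if (!isValidServiceToken(token)) {
//   console.warn("SpacetimeDB identity response had no token; skipping env write");
//   return;
// }
```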
In `@apps/internal-tool/spacetimedb/src/index.ts`:
- Around line 122-125: The anonymous public view publishedQa currently exposes
the full mcp_call_log row (mcpCallLog.rowType) via spacetimedb.anonymousView;
narrow the payload by replacing the rowType with an explicit public type or
projection that whitelists only the fields the public /questions page needs, and
map ctx.db.mcpCallLog.publishedToQa.filter(true) to return objects of that shape
(i.e., pick allowed fields and remove prompts, raw responses, model IDs, review
metadata, reviewer identity, and inner tool-call JSON). Update the view
signature (the t.array(...) argument) and the callback mapping in publishedQa so
only the sanitized fields are returned to anonymous/public consumers.
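The whitelisting projection can be sketched as below; the field names are illustrative, not the real `mcp_call_log` schema, and the view wiring itself is omitted.

```typescript
// A full log row (sensitive fields included) versus the narrow public shape.
type McpCallLogRow = {
  correlationId: string,
  question: string,
  answer: string,
  prompt: string,
  rawResponse: string,
  reviewerIdentity: string,
};

type PublicQaRow = { correlationId: string, question: string, answer: string };

function toPublicQa(row: McpCallLogRow): PublicQaRow {
  // Explicitly pick only the fields the public /questions page needs;
  // prompts, raw responses, and reviewer identity never leave the server.
  const { correlationId, question, answer } = row;
  return { correlationId, question, answer };
}
```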
In `@apps/internal-tool/src/app/app-client.tsx`:
- Around line 208-221: The callbacks onSaveCorrection, onMarkReviewed, and
onUnmarkReviewed currently fire-and-forget (return void) and swallow errors;
change them to return the Promise from getApi().then(...) so callers can await
results and handle failures. Specifically, in the handlers that call getApi()
and then api.updateCorrection / api.markReviewed / api.unmarkReviewed, remove
the terminal .catch that swallows errors and instead either return the promise
directly or call captureError(...) and rethrow the error so the rejection
propagates to the caller; keep getApi, updateCorrection, markReviewed, and
unmarkReviewed as the referenced symbols to locate changes. Apply the same
change to the other callbacks around lines 234–242.
In `@apps/internal-tool/src/components/CallLogDetail.tsx`:
- Around line 55-62: The optimistic-review handlers (handleMark and
handleUnmark) currently set setOptimisticReviewed(true/false) and call
onMarkReviewed/onUnmarkReviewed directly so a rejection leaves the optimistic
state in place; update both handlers to call the async operation via
runAsynchronouslyWithAlert (imported from
stackframe/stack-shared/dist/utils/promises), pass a function that invokes
onMarkReviewed?.(row.correlationId) / onUnmarkReviewed?.(row.correlationId), and
in the rejection/finally logic reset setOptimisticReviewed back to its previous
value (or clear it) and still call
captureError("call-log-mark-reviewed"/"call-log-unmark-reviewed", err) on error
so the UI reverts when the request fails.
In `@apps/internal-tool/src/components/Usage.tsx`:
- Around line 558-565: The table rows rendered in Usage.tsx are clickable but
not keyboard accessible; update the <tr> that uses onSelect(row) (and references
selectedId and row.id) to be focusable and operable by keyboard by adding
tabIndex={0}, role="button" (or role="row" with an inner button),
aria-selected={selectedId === row.id}, and an onKeyDown handler that calls
onSelect(row) when Enter or Space is pressed; ensure the handler prevents
default for Space so it doesn't scroll and keep the existing onClick behavior.
- Around line 185-193: The time-series bucket window uses rangeStart and now
(spanMs) which makes "all" (rangeStart === 0) produce a capped recent window;
update the logic in the time-bucket section (spanMs, bucketMs, bucketCount,
bucketStart) to compute spanMs from the actual filtered data bounds (use
seriesStart and seriesEnd) when timeRange === "all", then derive bucketMs,
bucketCount (still cap to 48) and bucketStart from those seriesStart/seriesEnd
values so the chart spans the same rows as the rest of the dashboard.
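The bucket-window derivation above can be sketched as a pure function over the data bounds; the names mirror the variables named in the comment (`spanMs`, `bucketMs`, `bucketCount`, `bucketStart`) but the real component logic is not reproduced here.

```typescript
type BucketWindow = { bucketMs: number, bucketCount: number, bucketStart: number };

function computeBucketWindow(seriesStart: number, seriesEnd: number, maxBuckets = 48): BucketWindow {
  // For the "all" range, span the actual filtered data bounds instead of a
  // capped recent window anchored at `now`.
  const spanMs = Math.max(seriesEnd - seriesStart, 1);
  const bucketMs = Math.ceil(spanMs / maxBuckets);
  const bucketCount = Math.min(maxBuckets, Math.ceil(spanMs / bucketMs));
  return { bucketMs, bucketCount, bucketStart: seriesStart };
}
```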
In `@apps/internal-tool/src/hooks/useSpacetimeDB.ts`:
- Around line 45-54: The effect that opens the SpacetimeDB subscription uses
ensureEnrolledRef.current but only lists binding in its dependency array, so the
effect runs before memoizedEnsureEnrolled is available and the onConnect
callback (onConnect) subscribes without enrollment; update the effect around the
query/connection logic (the useEffect that defines query = `SELECT * FROM
${binding.tableName}` and uses ensureEnrolledRef) to either include
ensureEnrolled in the dependency array or gate/defer the initial connection
until ensureEnrolled (or ensureEnrolledRef.current) is available so that
onConnect reads a valid enrollment function; ensure the same fix is applied to
the related onConnect subscription logic referenced by ensureEnrolledRef/current
to avoid unenrolled subscriptions (affects the code using ensureEnrolledRef and
onConnect).
In `@apps/internal-tool/src/lib/ai/spacetimedb-bindings/index.ts`:
- Around line 36-47: The generated bindings entrypoint is missing the new
exports for the published_qa view and the unmark_human_reviewed reducer;
regenerate or update apps/internal-tool/src/lib/ai/spacetimedb-bindings/index.ts
to register and export published_qa and unmark_human_reviewed alongside the
existing reducers (e.g., AddManualQaReducer, MarkHumanReviewedReducer,
UpdateHumanCorrectionReducer), and ensure the generated table/view schema
includes published_qa so typed clients can subscribe to it and see the public QA
view.
---
Outside diff comments:
In `@apps/backend/src/lib/ai/qa-reviewer.ts`:
- Around line 132-157: The QA-parsing block must validate all reducer-bound
fields before casting: add an explicit check that raw.improvementSuggestions is
a string (and that raw.flags is an array of objects with expected keys) and
replace the naive typeof overallScore check with
Number.isFinite(raw.overallScore) to avoid NaN passing through; then safely
coerce/fallback (e.g., default empty string for improvementSuggestions and
clamp/floor a finite overallScore to 0–100) before assigning to parsed and
building the update object (references: the parsed shape,
parsed.improvementSuggestions, parsed.overallScore, and update keys
qaImprovementSuggestions and qaOverallScore).
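A sketch of the validation boundary, assuming the parsed shape carries `improvementSuggestions` and `overallScore`; the fallbacks (empty string, clamp to 0-100) follow the suggestion above and are a design choice, not the existing implementation.

```typescript
type ParsedQa = { improvementSuggestions: string, overallScore: number };

function normalizeQa(raw: { improvementSuggestions?: unknown, overallScore?: unknown }): ParsedQa {
  const improvementSuggestions =
    typeof raw.improvementSuggestions === "string" ? raw.improvementSuggestions : "";
  // Number.isFinite rejects NaN and ±Infinity, unlike a bare typeof check.
  const score =
    typeof raw.overallScore === "number" && Number.isFinite(raw.overallScore)
      ? raw.overallScore
      : 0;
  // Clamp and floor into the 0–100 integer range before it reaches the reducer.
  const overallScore = Math.min(100, Math.max(0, Math.floor(score)));
  return { improvementSuggestions, overallScore };
}
```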
In `@apps/internal-tool/src/app/app-client.tsx`:
- Around line 64-93: Compute an explicit reviewer flag (const isReviewer =
metadata?.isAiChatReviewer === true) immediately after confirming user exists,
remove the cast to Record<string, unknown>, and return early if isReviewer is
false; then only initialize subscriptions/hooks (useMcpCallLogs, useAiQueryLogs)
and use the memoized ensureEnrolled after that authorization check so
unauthenticated users cannot trigger SpacetimeDB connections via ensureEnrolled
or the hooks.
In
`@apps/internal-tool/src/lib/ai/spacetimedb-bindings/my_visible_mcp_call_log_table.ts`:
- Around line 13-42: The exported default __t.row binding is out of sync with
the SpacetimeDB schema: add the missing column shard using shard: __t.u8() and
change publishedToQa from an optional field to required by replacing
__t.option(__t.bool()).name("published_to_qa") with
__t.bool().name("published_to_qa"); update the file that defines the row (export
default __t.row({...})) accordingly and regenerate the binding so it matches the
schema used by the corresponding my_visible_mcp_call_log_table
module_bindings/backend bindings.
---
Nitpick comments:
In `@apps/backend/src/lib/ai/ai-query-handlers.ts`:
- Around line 91-98: The code is unsafely casting toolResult?.output to Json
when building the "tool-call" ContentBlock; replace the cast with a real
serialization/validation boundary: locate the array push that creates the block
(the object with type: "tool-call", toolName, toolCallId, argsText, result) and
run toolResult?.output through an existing JSON sanitizer/serializer utility (or
JSON.stringify then parse back via a safeDeserialize helper) to ensure the value
is valid JSON or normalized (and handle/omit non-serializable values or fallback
to null), then assign that sanitized value to result instead of using
(toolResult?.output as Json).
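The serialization boundary suggested above can be a JSON round-trip, as in this sketch; note that `JSON.stringify` drops functions and symbols and throws on circular structures, which callers should treat as a bug rather than swallow.

```typescript
type Json = null | boolean | number | string | Json[] | { [k: string]: Json };

function toJsonSafe(value: unknown): Json | null {
  // JSON.stringify yields undefined for undefined, bare functions, and symbols;
  // round-tripping guarantees the result is plain JSON, with no `as Json` cast.
  const serialized: string | undefined = JSON.stringify(value);
  if (serialized === undefined) return null;
  return JSON.parse(serialized);
}
```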
In `@apps/backend/src/lib/ai/mcp-logger.ts`:
- Around line 55-57: The URL path is constructed with unescaped values dbName
and reducer in the fetch call inside mcp-logger (e.g., the const dbName =
getEnvVariable(...) and the POST to
`${base}/v1/database/${dbName}/call/${reducer}`); encode both path components
before interpolation (use encodeURIComponent() or your project's urlString``
helper) so the SpacetimeDB endpoint is safe; apply the same change to the other
fetch usage around lines 96-98 that builds a similar path.
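The encoding fix reduces to percent-encoding each dynamic path segment before interpolation, as in this sketch (`reducerUrl` is an illustrative helper, not the project's `urlString` tag):

```typescript
// Percent-encode caller-supplied path segments so a dbName or reducer name
// containing "/" or spaces cannot change the request path.
function reducerUrl(base: string, dbName: string, reducer: string): string {
  return `${base}/v1/database/${encodeURIComponent(dbName)}/call/${encodeURIComponent(reducer)}`;
}
```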
In `@apps/internal-tool/src/components/KnowledgeBase.tsx`:
- Around line 126-127: The modal wrapper currently lacks accessibility
attributes and an Escape-key handler; update the ConfirmDialog (the outer div
that uses onCancel) to include role="dialog" and aria-modal="true", add a ref to
the dialog container and call focus() on mount (or focus the first focusable
element) to set initial focus, and register a keydown listener (via useEffect)
that calls the existing onCancel when Escape is pressed; ensure you also
stopPropagation on inner content (existing onClick stop) and clean up the event
listener on unmount.
- Around line 274-285: Remove the confirmation step for edit actions by deleting
the conditional block that renders ConfirmDialog when pending === "edit" and
instead invoke onStartEdit() directly from the Edit button handler; update any
handlers that previously setPending("edit") to call onStartEdit() (and clear
pending state if needed via setPending(null) inside the Edit button flow),
leaving ConfirmDialog usage intact only for "publish"/"unpublish"/"delete" flows
(symbols to change: pending, ConfirmDialog, setPending, onStartEdit).
In `@apps/internal-tool/src/components/Usage.tsx`:
- Around line 226-229: The current try/catch around
JSON.parse(requestedToolsJson) swallows parse errors and hides rows; change it
to use an explicit safe parse and surface malformed entries by catching the
error into a named variable and incrementing a dedicated bucket (e.g.
toolCounts.set("invalid_tools_json", ...)) or calling the shared safe parser
(e.g. parseJsonSafe) that returns an error result; also log the parse error with
context (requestedToolsJson and record id) instead of an empty catch. Update the
block that references requestedToolsJson and toolCounts so malformed JSON
increments the "invalid_tools_json" counter and the error is reported rather
than silently ignored.
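A sketch of the explicit safe parse with an "invalid" bucket; `parseJsonSafe` and `countTools` here are illustrative stand-ins, not the project's shared helper or the component's real aggregation code.

```typescript
type ParseResult = { ok: true, value: unknown } | { ok: false, error: Error };

function parseJsonSafe(text: string): ParseResult {
  try {
    return { ok: true, value: JSON.parse(text) };
  } catch (error) {
    // Scoped catch: the parse error is returned, not swallowed.
    return { ok: false, error: error instanceof Error ? error : new Error(String(error)) };
  }
}

function countTools(requestedToolsJson: string, toolCounts: Map<string, number>): void {
  const parsed = parseJsonSafe(requestedToolsJson);
  if (!parsed.ok) {
    // Surface malformed rows as their own bucket so the dashboard shows them.
    toolCounts.set("invalid_tools_json", (toolCounts.get("invalid_tools_json") ?? 0) + 1);
    return;
  }
  if (Array.isArray(parsed.value)) {
    for (const tool of parsed.value) {
      const name = typeof tool === "string" ? tool : "unknown";
      toolCounts.set(name, (toolCounts.get(name) ?? 0) + 1);
    }
  }
}
```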
In `@apps/internal-tool/src/lib/mcp-review-api.ts`:
- Line 67: The fetch call assigned to `res` currently uses a plain template
literal for the endpoint; replace it with the repo URL helper so dynamic URL
pieces are encoded consistently—use the `urlString` template tag (e.g.,
urlString`/api/latest/internal/spacetimedb-enroll-reviewer`) or, if you must
manually build the path, wrap any dynamic segments with `encodeURIComponent()`;
update the `fetch` invocation that references
`API_URL`/`spacetimedb-enroll-reviewer` to use that helper so URL encoding and
project conventions are preserved.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0a0b1aca-18c3-4efd-9dbb-8b3cfda94ad2
📒 Files selected for processing (64)
- apps/backend/.env
- apps/backend/.env.development
- apps/backend/prisma/seed.ts
- apps/backend/src/app/api/latest/ai/query/[mode]/route.ts
- apps/backend/src/app/api/latest/internal/mcp-review/add-manual/route.ts
- apps/backend/src/app/api/latest/internal/mcp-review/delete/route.ts
- apps/backend/src/app/api/latest/internal/mcp-review/mark-reviewed/route.ts
- apps/backend/src/app/api/latest/internal/mcp-review/unmark-reviewed/route.ts
- apps/backend/src/app/api/latest/internal/mcp-review/update-correction/route.ts
- apps/backend/src/app/api/latest/internal/spacetimedb-enroll-reviewer/route.ts
- apps/backend/src/lib/ai/ai-query-handlers.ts
- apps/backend/src/lib/ai/ai-query-logger.ts
- apps/backend/src/lib/ai/mcp-logger.ts
- apps/backend/src/lib/ai/qa-reviewer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/add_operator_reducer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/enroll_service_reducer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/index.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/my_visible_ai_query_log_table.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/my_visible_mcp_call_log_table.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/operators_table.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/published_qa_table.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/remove_operator_reducer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/types.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/types/reducers.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/unmark_human_reviewed_reducer.ts
- apps/backend/src/lib/ai/verified-qa.ts
- apps/internal-tool/scripts/pre-dev.mjs
- apps/internal-tool/spacetimedb/src/index.ts
- apps/internal-tool/src/app/app-client.tsx
- apps/internal-tool/src/app/questions/page.tsx
- apps/internal-tool/src/components/CallLogDetail.tsx
- apps/internal-tool/src/components/CallLogList.tsx
- apps/internal-tool/src/components/KnowledgeBase.tsx
- apps/internal-tool/src/components/Usage.tsx
- apps/internal-tool/src/hooks/useSpacetimeDB.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/add_manual_qa_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/add_operator_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/delete_qa_entry_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/enroll_service_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/index.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/log_ai_query_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/log_mcp_call_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/mark_human_reviewed_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/my_visible_ai_query_log_table.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/my_visible_mcp_call_log_table.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/operators_table.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/remove_operator_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/types.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/types/procedures.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/types/reducers.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/update_human_correction_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/update_mcp_qa_review_reducer.ts
- apps/internal-tool/src/lib/mcp-review-api.ts
- apps/internal-tool/src/module_bindings/add_operator_reducer.ts
- apps/internal-tool/src/module_bindings/enroll_service_reducer.ts
- apps/internal-tool/src/module_bindings/index.ts
- apps/internal-tool/src/module_bindings/my_visible_ai_query_log_table.ts
- apps/internal-tool/src/module_bindings/my_visible_mcp_call_log_table.ts
- apps/internal-tool/src/module_bindings/operators_table.ts
- apps/internal-tool/src/module_bindings/published_qa_table.ts
- apps/internal-tool/src/module_bindings/remove_operator_reducer.ts
- apps/internal-tool/src/module_bindings/types.ts
- apps/internal-tool/src/module_bindings/types/reducers.ts
- apps/internal-tool/src/module_bindings/unmark_human_reviewed_reducer.ts
✅ Files skipped from review due to trivial changes (32)
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/types/procedures.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/enroll_service_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/delete_qa_entry_reducer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/remove_operator_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/remove_operator_reducer.ts
- apps/internal-tool/src/module_bindings/enroll_service_reducer.ts
- apps/internal-tool/src/module_bindings/unmark_human_reviewed_reducer.ts
- apps/internal-tool/src/module_bindings/remove_operator_reducer.ts
- apps/internal-tool/src/module_bindings/add_operator_reducer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/add_operator_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/add_operator_reducer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/operators_table.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/update_human_correction_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/add_manual_qa_reducer.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/types/reducers.ts
- apps/internal-tool/src/module_bindings/operators_table.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/my_visible_ai_query_log_table.ts
- apps/internal-tool/src/module_bindings/published_qa_table.ts
- apps/internal-tool/src/module_bindings/my_visible_ai_query_log_table.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/log_mcp_call_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/mark_human_reviewed_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/types/reducers.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/operators_table.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/unmark_human_reviewed_reducer.ts
- apps/internal-tool/src/module_bindings/types/reducers.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/log_ai_query_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/enroll_service_reducer.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/types.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/my_visible_ai_query_log_table.ts
- apps/internal-tool/src/lib/ai/spacetimedb-bindings/update_mcp_qa_review_reducer.ts
- apps/internal-tool/src/module_bindings/my_visible_mcp_call_log_table.ts
- apps/backend/src/lib/ai/spacetimedb-bindings/published_qa_table.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- apps/backend/src/lib/ai/ai-query-logger.ts
- apps/internal-tool/src/module_bindings/index.ts
STACK_SPACETIMEDB_URL=http://localhost:${NEXT_PUBLIC_STACK_PORT_PREFIX:-81}39
STACK_SPACETIMEDB_DB_NAME=stack-auth-llm
STACK_MCP_LOG_TOKEN=change-me

# To provision locally: `curl -X POST http://127.0.0.1:${NEXT_PUBLIC_STACK_PORT_PREFIX:-81}39/v1/identity`
# then copy the `token` field from the response.
STACK_SPACETIMEDB_SERVICE_TOKEN=
Avoid enabling SpacetimeDB with an empty service token by default.
With STACK_SPACETIMEDB_URL set and STACK_SPACETIMEDB_SERVICE_TOKEN empty, getServiceToken() throws as soon as the backend hits the logging/reducer path. Either keep the URL empty until the token is provisioned, or ensure the dev bootstrap always writes the token before backend startup.
Suggested safer default
-STACK_SPACETIMEDB_URL=http://localhost:${NEXT_PUBLIC_STACK_PORT_PREFIX:-81}39
+STACK_SPACETIMEDB_URL=
STACK_SPACETIMEDB_DB_NAME=stack-auth-llm
STACK_MCP_LOG_TOKEN=change-me
# To provision locally: `curl -X POST http://127.0.0.1:${NEXT_PUBLIC_STACK_PORT_PREFIX:-81}39/v1/identity`
# then copy the `token` field from the response.
STACK_SPACETIMEDB_SERVICE_TOKEN=

🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 117-117: [SubstitutionKey] The STACK_SPACETIMEDB_URL key is not assigned properly
(SubstitutionKey)
[warning] 118-118: [UnorderedKey] The STACK_SPACETIMEDB_DB_NAME key should go before the STACK_SPACETIMEDB_URL key
(UnorderedKey)
[warning] 119-119: [UnorderedKey] The STACK_MCP_LOG_TOKEN key should go before the STACK_SPACETIMEDB_DB_NAME key
(UnorderedKey)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/.env.development` around lines 117 - 123, The env currently
enables SpacetimeDB by setting STACK_SPACETIMEDB_URL while leaving
STACK_SPACETIMEDB_SERVICE_TOKEN empty causing getServiceToken() to throw; fix by
either leaving STACK_SPACETIMEDB_URL blank in the default .env.development until
a token is provisioned, or update the dev bootstrap logic that provisions the
token to write STACK_SPACETIMEDB_SERVICE_TOKEN into the env (or equivalent
config) before the backend starts; alternatively, make getServiceToken()
tolerant of an empty token by returning null/undefined and gating SpacetimeDB
initialization behind a token check (referencing getServiceToken(),
STACK_SPACETIMEDB_URL, and STACK_SPACETIMEDB_SERVICE_TOKEN).
await usersCrudHandlers.adminUpdate({
  tenancy: internalTenancy,
  user_id: defaultUserId,
  data: {
    client_read_only_metadata: { isAiChatReviewer: true },
  },
});
🧩 Analysis chain
🏁 Script executed:
# First, find the usersCrudHandlers implementation
fd -t f "usersCrudHandlers\|adminUpdate" apps/backend --type ts --type tsxRepository: stack-auth/stack-auth
Length of output: 234
🏁 Script executed:
# Search for adminUpdate function definition
rg "adminUpdate" apps/backend --type ts -A 5 -B 2 | head -100Repository: stack-auth/stack-auth
Length of output: 7486
🏁 Script executed:
# Look for the actual CRUD handlers file
fd -t f -e ts -e tsx "crud" apps/backend | grep -i userRepository: stack-auth/stack-auth
Length of output: 397
🏁 Script executed:
# Read the users CRUD file to understand adminUpdate implementation
wc -l apps/backend/src/app/api/latest/users/crud.tsxRepository: stack-auth/stack-auth
Length of output: 116
🏁 Script executed:
# Look for the structure of the users crud handlers and metadata handling
cat -n apps/backend/src/app/api/latest/users/crud.tsx | head -150Repository: stack-auth/stack-auth
Length of output: 8083
🏁 Script executed:
# Search for metadata handling in the users CRUD file
rg "client_read_only_metadata\|metadata" apps/backend/src/app/api/latest/users/crud.tsx -B 2 -A 2Repository: stack-auth/stack-auth
Length of output: 47
🏁 Script executed:
# Search for the update logic and metadata field handling
rg "adminUpdate|update.*data" apps/backend/src/app/api/latest/users/crud.tsx -B 3 -A 10 | head -100Repository: stack-auth/stack-auth
Length of output: 593
🏁 Script executed:
# Search for the createCrud function and adminUpdate implementation
rg "createCrud|function adminUpdate" apps/backend/src/app/api/latest/users/crud.tsx -B 2 -A 15 | head -150Repository: stack-auth/stack-auth
Length of output: 4485
🏁 Script executed:
# Look for where the user CRUD is defined with adminUpdateSchema
rg "adminUpdateSchema\|usersCrud\|createCrudHandlers" apps/backend/src/app/api/latest/users/crud.tsx -B 3 -A 3Repository: stack-auth/stack-auth
Length of output: 47
🏁 Script executed:
# Find where usersCrud is defined
find . -name "*.ts" -o -name "*.tsx" | xargs grep -l "export.*usersCrud\s*=" | grep -v node_modulesRepository: stack-auth/stack-auth
Length of output: 116
🏁 Script executed:
# Look at the createCrud function to understand how updates are handled
rg "function createCrud|export.*createCrud" -A 50 | head -200Repository: stack-auth/stack-auth
Length of output: 18605
🏁 Script executed:
# Find the createCrud function definition
fd -type f "crud" packages/stack-shared/src | head -20Repository: stack-auth/stack-auth
Length of output: 235
🏁 Script executed:
# Look for createCrud function
rg "export.*createCrud|function createCrud" packages/stack-shared/src -A 30 | head -150Repository: stack-auth/stack-auth
Length of output: 14595
🏁 Script executed:
# Find the createCrud function definition file
rg "^export.*function createCrud|^export const createCrud" packages/stack-shared/src -B 2 -A 50Repository: stack-auth/stack-auth
Length of output: 3353
🏁 Script executed:
# Search for the file containing createCrud function
fd "crud-base\|crud\.ts" packages/stack-shared/src
Length of output: 47
🏁 Script executed:
# Search for createCrudHandlers function which implements the actual update logic
rg "export.*createCrudHandlers|function createCrudHandlers" apps/backend/src -B 2 -A 100 | head -200
Length of output: 19232
🏁 Script executed:
# Look at the route-handlers crud-handler file
cat -n apps/backend/src/route-handlers/crud-handler.ts | head -200
Length of output: 144
🏁 Script executed:
# Search for the onUpdate handler in the users CRUD file
rg "onUpdate.*async|async.*onUpdate" apps/backend/src/app/api/latest/users/crud.tsx -A 100 | head -200
Length of output: 4897
🏁 Script executed:
# Look for where adminUpdate is called and how it actually updates metadata
cat -n apps/backend/src/app/api/latest/users/crud.tsx | grep -A 150 "onUpdate"
Length of output: 8422
🏁 Script executed:
# Search for where the projectUser is actually updated with metadata in the onUpdate handler
rg "projectUser.*update|tx\.projectUser\.update" apps/backend/src/app/api/latest/users/crud.tsx -B 2 -A 10 | head -150
Length of output: 644
🏁 Script executed:
# Search for how metadata is merged or replaced when updating
rg "client_read_only_metadata|metadata" apps/backend/src/app/api/latest/users/crud.tsx -B 2 -A 2
Length of output: 2762
🏁 Script executed:
# Let me check if the default user has any existing metadata already set in seed
rg "defaultUserId\|defaultUser" apps/backend/prisma/seed.ts -B 3 -A 3 | head -100
Length of output: 47
🏁 Script executed:
# Check what prisma.JsonNull does and confirm replacement semantics
rg "JsonNull" packages/stack-shared apps/backend -B 2 -A 2 | head -50
Length of output: 4230
🏁 Script executed:
# Look for where defaultUserId is created/initialized in seed
cat -n apps/backend/prisma/seed.ts | head -450 | tail -100
Length of output: 4002
🏁 Script executed:
# Search around line 428 in seed.ts to see context of defaultUserId and metadata
cat -n apps/backend/prisma/seed.ts | sed -n '380,440p'
Length of output: 2443
🏁 Script executed:
# Final verification: check if there's any metadata merging utility in the codebase
rg "merge.*metadata\|deepMerge\|spread.*metadata" apps/backend/src -B 2 -A 2 | head -50
Length of output: 47
Merge metadata instead of replacing.
The adminUpdate call replaces the entire client_read_only_metadata field rather than merging keys. If future code or seed runs set other read-only metadata on the default user before this line, it will be lost. Merge the new key with existing metadata to prevent accidental data loss:
```typescript
client_read_only_metadata: {
  ...oldUser.clientReadOnlyMetadata,
  isAiChatReviewer: true
}
```
Alternatively, fetch the current metadata first and merge it in the update call.
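The fetch-then-merge alternative can be sketched independently of the CRUD handlers. This is a minimal, hypothetical helper — the `clientReadOnlyMetadata` field name mirrors the example above and is not verified against the actual Prisma model:

```typescript
// Hypothetical merge helper: spread the existing read-only metadata first so a
// partial update only overrides the keys it explicitly sets.
type Metadata = Record<string, unknown>;

function mergeReadOnlyMetadata(
  existing: Metadata | null | undefined, // e.g. oldUser.clientReadOnlyMetadata
  patch: Metadata,
): Metadata {
  return { ...(existing ?? {}), ...patch };
}

// A preexisting key survives the update that sets isAiChatReviewer:
const merged = mergeReadOnlyMetadata({ someLegacyFlag: true }, { isAiChatReviewer: true });
```

Passing `merged` as `client_read_only_metadata` preserves whatever earlier seed runs stored, at the cost of one extra read before the update.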
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/prisma/seed.ts` around lines 428 - 434, The update call to
usersCrudHandlers.adminUpdate is overwriting client_read_only_metadata instead
of merging; fetch the existing user's read-only metadata (e.g., load the default
user by defaultUserId within internalTenancy) and merge its keys with the new {
isAiChatReviewer: true } before calling adminUpdate so you preserve any
preexisting keys on client_read_only_metadata.
```diff
 await callReducer("update_human_correction", [
   token,
-  correlationId: body.correlationId,
-  correctedQuestion: body.correctedQuestion,
-  correctedAnswer: body.correctedAnswer,
-  publish: body.publish,
-  reviewedBy: user.display_name ?? user.primary_email ?? user.id,
-});
+  body.correlationId,
+  body.correctedQuestion,
+  body.correctedAnswer,
+  body.publish,
+  user.display_name ?? user.primary_email ?? user.id,
+]);
```
Don’t return success if the reducer call is skipped.
callReducer is best-effort per the provided implementation: it returns early when the SpacetimeDB service token is unavailable. For this review mutation, that means the API can return { success: true } without persisting the correction. Please use a strict reducer path here, or make callReducer throw for required mutations.
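One way to make the skip observable, sketched with a stubbed `callReducer` — the real function lives in mcp-logger.ts and may signal the skip differently, so the boolean return used here is an assumption:

```typescript
// Hypothetical strict wrapper: treat a skipped reducer call as an error
// instead of letting the route answer { success: true }.
type CallReducer = (name: string, args: unknown[]) => Promise<boolean>; // true = reducer ran

async function callReducerStrict(call: CallReducer, name: string, args: unknown[]): Promise<void> {
  const ran = await call(name, args);
  if (!ran) {
    // Missing SpacetimeDB service token, or any other best-effort bailout.
    throw new Error(`Reducer ${name} was skipped; refusing to report success`);
  }
}
```

The route handler would then let this error propagate (or map it to a 5xx) rather than returning success for an unpersisted correction.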
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/backend/src/app/api/latest/internal/mcp-review/update-correction/route.ts`
around lines 38 - 45, The reducer call for the required mutation
update_human_correction currently uses the best-effort callReducer which can
early-return when the SpacetimeDB token is missing, allowing the API to respond
{ success: true } without persisting the correction; modify the route handler to
ensure the reducer actually ran by either invoking a strict reducer variant
(e.g., callReducerStrict or the equivalent API that throws on missing token) or
by checking callReducer's return and throwing an error when it indicates it was
skipped, and only return { success: true } after a successful reducer result;
references: callReducer, update_human_correction, the route handler in route.ts.
```typescript
await callReducer("add_operator", [
  token,
  [`0x${body.identity}`],
  user.id,
  user.display_name ?? user.primary_email ?? user.id,
]);
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify how generated bindings and existing reducer calls represent SpacetimeDB identity values.
rg -n -C4 'add_operator|addOperator|__t\.identity|identity:' \
apps/backend/src/lib/ai/spacetimedb-bindings \
apps/internal-tool/src/module_bindings \
apps/internal-tool/src/lib/ai/spacetimedb-bindings \
  apps/backend/src/app/api/latest/internal
Length of output: 20180
🏁 Script executed:
#!/bin/bash
# Search for all callReducer invocations to understand argument patterns
rg -n 'callReducer' \
apps/backend/src/app/api/latest/internal \
apps/internal-tool/src \
-A 5 -B 1 \
  | head -100
Length of output: 9745
🏁 Script executed:
#!/bin/bash
# Find callReducer implementation
rg -n 'export.*callReducer|function callReducer' apps/backend/src/lib/ai/mcp-logger -A 20
Length of output: 139
🏁 Script executed:
#!/bin/bash
# Also search for remove_operator calls to see if identity is wrapped
rg -n 'remove_operator|removeOperator' apps/backend/src -B 2 -A 5
Length of output: 2858
🏁 Script executed:
#!/bin/bash
# Find where callReducer is imported/defined
rg -n 'callReducer' apps/backend/src/app/api/latest/internal/spacetimedb-enroll-reviewer/route.ts -B 5
Length of output: 368
🏁 Script executed:
#!/bin/bash
# Search for the actual implementation of callReducer
find apps/backend/src -name '*.ts' -o -name '*.tsx' | xargs rg -l 'export.*callReducer|export function callReducer|export const callReducer'
Length of output: 102
🏁 Script executed:
#!/bin/bash
# Read the callReducer implementation
cat apps/backend/src/lib/ai/mcp-logger.ts
Length of output: 4720
Remove array wrapping from the identity argument.
The add_operator reducer schema expects identity as a scalar __t.identity() value, but the call passes it wrapped in an array. This misaligns all subsequent arguments. The identity should be passed as the scalar 0x... string directly.
Fix
```diff
 await callReducer("add_operator", [
   token,
-  [`0x${body.identity}`],
+  `0x${body.identity}`,
   user.id,
   user.display_name ?? user.primary_email ?? user.id,
 ]);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/backend/src/app/api/latest/internal/spacetimedb-enroll-reviewer/route.ts`
around lines 38 - 43, The call to callReducer uses the reducer "add_operator"
but incorrectly passes the identity as an array ([`0x${body.identity}`]) which
shifts all subsequent args; change the second argument so the identity is a
scalar string `0x${body.identity}` (not wrapped in an array) when invoking
callReducer("add_operator", ...), keeping the other args (token, user.id,
user.display_name ?? user.primary_email ?? user.id) unchanged.
```typescript
const firstUserMessage = messages.find(m => m.role === "user");
const question = typeof firstUserMessage?.content === "string"
  ? firstUserMessage.content
  : JSON.stringify(firstUserMessage?.content ?? "");
const innerToolCallsJson = JSON.stringify(contentBlocks.filter(b => b.type === "tool-call"));
```
Use the latest user message for MCP QA logging.
messages.find(...) picks the first user message in a conversation. For follow-up questions, the logged/reviewed question can be from an earlier turn while finalText is for the latest turn, corrupting QA review data.
Suggested fix
```diff
-const firstUserMessage = messages.find(m => m.role === "user");
-const question = typeof firstUserMessage?.content === "string"
-  ? firstUserMessage.content
-  : JSON.stringify(firstUserMessage?.content ?? "");
+const latestUserMessage = [...messages].reverse().find(m => m.role === "user");
+const question = typeof latestUserMessage?.content === "string"
+  ? latestUserMessage.content
+  : JSON.stringify(latestUserMessage?.content ?? "");
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const latestUserMessage = [...messages].reverse().find(m => m.role === "user");
const question = typeof latestUserMessage?.content === "string"
  ? latestUserMessage.content
  : JSON.stringify(latestUserMessage?.content ?? "");
const innerToolCallsJson = JSON.stringify(contentBlocks.filter(b => b.type === "tool-call"));
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/src/lib/ai/ai-query-handlers.ts` around lines 170 - 174, The
code currently uses messages.find(...) which returns the first user message;
change it to pick the latest user turn so `question` reflects the most recent
user input. Locate the symbols firstUserMessage and question in
ai-query-handlers.ts and replace the find logic with a search-from-end (e.g.,
iterate messages from the last index or reverse and find) to get the last
message with role === "user", then derive question from that message; keep
innerToolCallsJson unchanged.
```typescript
const handleMark = () => {
  setOptimisticReviewed(true);
  Promise.resolve(onMarkReviewed?.(row.correlationId)).catch(err => captureError("call-log-mark-reviewed", err));
};
const handleUnmark = () => {
  setOptimisticReviewed(false);
  Promise.resolve(onUnmarkReviewed?.(row.correlationId)).catch(err => captureError("call-log-unmark-reviewed", err));
};
```
Rollback optimistic review state when the request fails.
If onMarkReviewed or onUnmarkReviewed rejects, Lines 56/60 leave the optimistic override in place indefinitely, so the detail pane can show the opposite review state from the database. Route the promise through the shared async handler and reset the optimistic override on rejection.
Suggested shape
```diff
-import { captureError } from "@stackframe/stack-shared/dist/utils/errors";
+import { runAsynchronouslyWithAlert } from "@stackframe/stack-shared/dist/utils/promises";
 ...
 const handleMark = () => {
+  const previous = isReviewed;
   setOptimisticReviewed(true);
-  Promise.resolve(onMarkReviewed?.(row.correlationId)).catch(err => captureError("call-log-mark-reviewed", err));
+  runAsynchronouslyWithAlert(
+    (async () => {
+      await onMarkReviewed?.(row.correlationId);
+    })().then(undefined, (err) => {
+      setOptimisticReviewed(previous);
+      throw err;
+    })
+  );
 };
 const handleUnmark = () => {
+  const previous = isReviewed;
   setOptimisticReviewed(false);
-  Promise.resolve(onUnmarkReviewed?.(row.correlationId)).catch(err => captureError("call-log-unmark-reviewed", err));
+  runAsynchronouslyWithAlert(
+    (async () => {
+      await onUnmarkReviewed?.(row.correlationId);
+    })().then(undefined, (err) => {
+      setOptimisticReviewed(previous);
+      throw err;
+    })
+  );
 };
```
Based on learnings, use runAsynchronouslyWithAlert from @stackframe/stack-shared/dist/utils/promises for async button click handlers and form submissions instead of manual try/catch blocks.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/internal-tool/src/components/CallLogDetail.tsx` around lines 55 - 62,
The optimistic-review handlers (handleMark and handleUnmark) currently set
setOptimisticReviewed(true/false) and call onMarkReviewed/onUnmarkReviewed
directly so a rejection leaves the optimistic state in place; update both
handlers to call the async operation via runAsynchronouslyWithAlert (imported
from stackframe/stack-shared/dist/utils/promises), pass a function that invokes
onMarkReviewed?.(row.correlationId) / onUnmarkReviewed?.(row.correlationId), and
in the rejection/finally logic reset setOptimisticReviewed back to its previous
value (or clear it) and still call
captureError("call-log-mark-reviewed"/"call-log-unmark-reviewed", err) on error
so the UI reverts when the request fails.
```typescript
// Time-bucketed series
const spanMs = now - rangeStart;
const bucketMs = spanMs <= 24 * 60 * 60 * 1000 ? 60 * 60 * 1000 : 24 * 60 * 60 * 1000;
const bucketLabelFmt: Intl.DateTimeFormatOptions = bucketMs === 60 * 60 * 1000
  ? { hour: "numeric" }
  : { month: "short", day: "numeric" };
const bucketCount = Math.min(48, Math.max(1, Math.ceil(spanMs / bucketMs)));
const bucketStart = now - bucketCount * bucketMs;
const timeBuckets: Array<{ label: string, start: number, calls: number, inputTokens: number, outputTokens: number, cachedInputTokens: number }> = [];
```
Make the “all” time-series span the actual filtered data.
With timeRange === "all", rangeStart is 0, but bucketCount is capped at 48, so bucketStart becomes roughly “last 48 days”. Older rows still count in metric cards and distributions but disappear from the time charts.
Suggested fix direction
```diff
-const spanMs = now - rangeStart;
-const bucketMs = spanMs <= 24 * 60 * 60 * 1000 ? 60 * 60 * 1000 : 24 * 60 * 60 * 1000;
+const timestamps = filtered.map(r => toDate(r.createdAt).getTime());
+const seriesEnd = now;
+const seriesStart = timeRange === "all" && timestamps.length > 0
+  ? Math.min(...timestamps)
+  : rangeStart;
+const spanMs = Math.max(1, seriesEnd - seriesStart);
+const bucketMs = spanMs <= 24 * 60 * 60 * 1000
+  ? 60 * 60 * 1000
+  : Math.max(24 * 60 * 60 * 1000, Math.ceil(spanMs / 48));
```
Then derive bucketCount/bucketStart from seriesStart/seriesEnd so the chart covers the same rows as the rest of the dashboard.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/internal-tool/src/components/Usage.tsx` around lines 185 - 193, The
time-series bucket window uses rangeStart and now (spanMs) which makes "all"
(rangeStart === 0) produce a capped recent window; update the logic in the
time-bucket section (spanMs, bucketMs, bucketCount, bucketStart) to compute
spanMs from the actual filtered data bounds (use seriesStart and seriesEnd) when
timeRange === "all", then derive bucketMs, bucketCount (still cap to 48) and
bucketStart from those seriesStart/seriesEnd values so the chart spans the same
rows as the rest of the dashboard.
```tsx
<tr
  key={String(row.id)}
  onClick={() => onSelect(row)}
  className={clsx(
    "border-b border-gray-100 cursor-pointer hover:bg-blue-50",
    selectedId === row.id && "bg-blue-50"
  )}
>
```
Make selectable table rows keyboard accessible.
The row is clickable but cannot be focused or activated from the keyboard. Add tabIndex, role, and Enter/Space handling, or move the interaction into a button/link inside the row.
Suggested fix
```diff
 <tr
   key={String(row.id)}
+  role="button"
+  tabIndex={0}
   onClick={() => onSelect(row)}
+  onKeyDown={(e) => {
+    if (e.key === "Enter" || e.key === " ") {
+      e.preventDefault();
+      onSelect(row);
+    }
+  }}
   className={clsx(
-    "border-b border-gray-100 cursor-pointer hover:bg-blue-50",
+    "border-b border-gray-100 cursor-pointer hover:bg-blue-50 focus:outline-none focus:ring-2 focus:ring-blue-500",
     selectedId === row.id && "bg-blue-50"
   )}
 >
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
<tr
  key={String(row.id)}
  role="button"
  tabIndex={0}
  onClick={() => onSelect(row)}
  onKeyDown={(e) => {
    if (e.key === "Enter" || e.key === " ") {
      e.preventDefault();
      onSelect(row);
    }
  }}
  className={clsx(
    "border-b border-gray-100 cursor-pointer hover:bg-blue-50 focus:outline-none focus:ring-2 focus:ring-blue-500",
    selectedId === row.id && "bg-blue-50"
  )}
>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/internal-tool/src/components/Usage.tsx` around lines 558 - 565, The
table rows rendered in Usage.tsx are clickable but not keyboard accessible;
update the <tr> that uses onSelect(row) (and references selectedId and row.id)
to be focusable and operable by keyboard by adding tabIndex={0}, role="button"
(or role="row" with an inner button), aria-selected={selectedId === row.id}, and
an onKeyDown handler that calls onSelect(row) when Enter or Space is pressed;
ensure the handler prevents default for Space so it doesn't scroll and keep the
existing onClick behavior.
```typescript
const ensureEnrolledRef = useRef(ensureEnrolled);
useEffect(() => {
  ensureEnrolledRef.current = ensureEnrolled;
}, [ensureEnrolled]);

useEffect(() => {
  let cancelled = false;
  let retryCount = 0;
  let retryTimer: ReturnType<typeof setTimeout> | null = null;

  console.log("[SpacetimeDB] Connecting to", HOST, "db:", DB_NAME);
  const query = `SELECT * FROM ${binding.tableName}`;
```
🧩 Analysis chain
🏁 Script executed:
cat -n apps/internal-tool/src/hooks/useSpacetimeDB.ts | head -170
Length of output: 7341
🏁 Script executed:
rg "useTableSubscription|useSpacetimeDB" apps/internal-tool/src --type ts --type tsx -A 3 -B 2
Length of output: 92
🏁 Script executed:
rg "useTableSubscription|useSpacetimeDB" apps/internal-tool/src -A 3 -B 2
Length of output: 2751
🏁 Script executed:
cat -n apps/internal-tool/src/app/app-client.tsx | head -150
Length of output: 7464
🏁 Script executed:
tail -n +170 apps/internal-tool/src/hooks/useSpacetimeDB.ts
Length of output: 1649
Include enrollment state in the effect dependency array or defer subscription until enrollment is available.
The effect at line 50 depends only on binding and won't rerun when ensureEnrolled changes. Since memoizedEnsureEnrolled (line 64 in app-client.tsx) is explicitly undefined while auth is loading and becomes available later, the following sequence occurs:
- Component renders with user=null and passes undefined to useMcpCallLogs
- Effect runs and establishes the connection; ensureEnrolledRef.current is undefined
- The onConnect callback executes, reads undefined from the ref at line 104, and subscribes without enrollment
- Auth completes, and the ensureEnrolled function is now available in the ref
- The main effect doesn't rerun (binding is stable), so the subscription remains in the unenrolled state
Since myVisibleMcpCallLog and myVisibleAiQueryLog likely filter based on enrollment status, a subscription initialized without enrollment may miss or return different data than an enrolled subscription would. Add ensureEnrolled to the dependency array at line 162, or defer the initial connection until enrollment is available if it's required.
Also applies to: 104–115
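The deferral option can be reduced to a tiny gate, detached from React. The names `ensureEnrolled` and `connect` below are illustrative placeholders for the hook's real callbacks, not its actual API:

```typescript
// Sketch: skip connecting while the enrollment callback is still undefined;
// the caller reruns this once auth finishes and the callback exists.
type Enroll = () => Promise<void>;

async function connectIfEnrolled(
  ensureEnrolled: Enroll | undefined,
  connect: () => void,
): Promise<boolean> {
  if (!ensureEnrolled) return false; // auth still loading: defer
  await ensureEnrolled();            // enroll before subscribing
  connect();
  return true;
}
```

In the hook this corresponds to listing `ensureEnrolled` in the effect's dependency array, so the deferred path actually reruns once enrollment becomes available.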
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/internal-tool/src/hooks/useSpacetimeDB.ts` around lines 45 - 54, The
effect that opens the SpacetimeDB subscription uses ensureEnrolledRef.current
but only lists binding in its dependency array, so the effect runs before
memoizedEnsureEnrolled is available and the onConnect callback (onConnect)
subscribes without enrollment; update the effect around the query/connection
logic (the useEffect that defines query = `SELECT * FROM ${binding.tableName}`
and uses ensureEnrolledRef) to either include ensureEnrolled in the dependency
array or gate/defer the initial connection until ensureEnrolled (or
ensureEnrolledRef.current) is available so that onConnect reads a valid
enrollment function; ensure the same fix is applied to the related onConnect
subscription logic referenced by ensureEnrolledRef/current to avoid unenrolled
subscriptions (affects the code using ensureEnrolledRef and onConnect).
```typescript
// Import all reducer arg schemas
import AddManualQaReducer from "./add_manual_qa_reducer";
import AddOperatorReducer from "./add_operator_reducer";
import DeleteQaEntryReducer from "./delete_qa_entry_reducer";
import EnrollServiceReducer from "./enroll_service_reducer";
import LogAiQueryReducer from "./log_ai_query_reducer";
import LogMcpCallReducer from "./log_mcp_call_reducer";
import MarkHumanReviewedReducer from "./mark_human_reviewed_reducer";
import RemoveOperatorReducer from "./remove_operator_reducer";
import UpdateHumanCorrectionReducer from "./update_human_correction_reducer";
import UpdateMcpQaReviewReducer from "./update_mcp_qa_review_reducer";
```
Regenerate these bindings; they are missing new module exports.
The SpacetimeDB source defines published_qa and unmark_human_reviewed, but this generated entrypoint does not register either one. Typed clients using these bindings will not see the public QA view or the unmark reducer.
Expected generated shape
```diff
 import RemoveOperatorReducer from "./remove_operator_reducer";
+import UnmarkHumanReviewedReducer from "./unmark_human_reviewed_reducer";
 import UpdateHumanCorrectionReducer from "./update_human_correction_reducer";

   __reducerSchema("remove_operator", RemoveOperatorReducer),
+  __reducerSchema("unmark_human_reviewed", UnmarkHumanReviewedReducer),
   __reducerSchema("update_human_correction", UpdateHumanCorrectionReducer),
```
Also ensure the generated table schema includes the published_qa view/table if clients subscribe to it.
Also applies to: 50-84, 87-98
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/internal-tool/src/lib/ai/spacetimedb-bindings/index.ts` around lines 36
- 47, The generated bindings entrypoint is missing the new exports for the
published_qa view and the unmark_human_reviewed reducer; regenerate or update
apps/internal-tool/src/lib/ai/spacetimedb-bindings/index.ts to register and
export published_qa and unmark_human_reviewed alongside the existing reducers
(e.g., AddManualQaReducer, MarkHumanReviewedReducer,
UpdateHumanCorrectionReducer), and ensure the generated table/view schema
includes published_qa so typed clients can subscribe to it and see the public QA
view.
@greptile-ai review
This PR adds a section to the internal tool that allows us to monitor OpenRouter usage.
Summary by CodeRabbit
New Features
Improvements
Backend Infrastructure