Conversation
…ad endpoint
- Add SessionRecording and SessionRecordingChunk Prisma models
- Add migration for session_recordings_mvp
- Add seed data for session recordings
- Add S3 uploadBytes, downloadBytes, and getS3PublicUrl utilities
- Add POST /session-recordings/batch endpoint for client uploads
- Add StackAnalyticsInternal component with rrweb DOM capture and batched flush
- Accept AnalyticsOptions prop in StackProvider
- Conditionally render StackAnalyticsInternal in client provider
- Export AnalyticsOptions types from template package
- Add rrweb dependency to template, js, react, and stack packages

…ings
- Add SessionRecording and SessionRecordingChunk CRUD response types
- Add listSessionRecordings, listChunks, getEvents to admin interface
- Implement admin SDK methods for session recording access
- Add GET /internal/session-recordings endpoint (list sessions)
- Add GET /internal/session-recordings/[id]/chunks endpoint (list chunks)
- Add GET /internal/session-recordings/[id]/chunks/[id]/events endpoint (fetch from S3)
- Add E2E tests for batch upload and admin endpoints
- Add server component wrapper and main replay viewer (page + page-client)
- Add playback timing/state logic (session-replay-playback)
- Add multi-tab stream grouping utilities (session-replay-streams)
- Add unit tests for stream grouping
- Register "Replays" nav item in apps-frontend
- Disable session recording on dashboard itself
- Add rrweb dependency to dashboard package
> **Note: Reviews paused.** It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 **Walkthrough**

Adds session-recording support: DB schema and seed, S3-backed chunk upload/download, backend batch and admin/internal APIs, client-side rrweb recorder and batching, dashboard replay UI and playback engine with tests, private S3 bucket config, and removal of an E2E GitHub Actions workflow.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant Client as Web Client
    participant RRWeb as RRWeb Recorder
    participant SDK as Stack SDK
    participant Backend as Backend API
    participant S3 as S3 Storage
    participant DB as Database
    Client->>RRWeb: start recording
    RRWeb->>SDK: buffer events, build batch
    SDK->>Backend: POST /session-recordings/batch (gzipped)
    activate Backend
    Backend->>DB: find/upsert SessionRecording (idle-time reuse)
    Backend->>S3: PutObject gzipped chunk
    S3-->>Backend: s3_key
    Backend->>DB: create SessionRecordingChunk (metadata)
    Backend-->>SDK: { session_recording_id, batch_id, s3_key, deduped }
    deactivate Backend
```
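As a rough illustration of the client half of this flow, here is a minimal sketch of how the buffered events could be serialized and gzipped before the batch POST. The names (`buildBatchBody`, `BatchEvent`) and the payload shape are assumptions for illustration, not the actual Stack SDK API.

```typescript
import { gzipSync, gunzipSync } from "node:zlib";

type BatchEvent = { timestamp: number, [key: string]: unknown };

// Serialize the buffered rrweb events and gzip them for upload.
// (Hypothetical helper; payload shape { events: [...] } is assumed.)
function buildBatchBody(events: BatchEvent[]): Buffer {
  return gzipSync(Buffer.from(JSON.stringify({ events }), "utf8"));
}

// The upload itself would then look something like:
//   await fetch("/api/latest/session-recordings/batch", {
//     method: "POST",
//     headers: { "content-type": "application/json", "content-encoding": "gzip" },
//     body: buildBatchBody(bufferedEvents),
//   });
```

Gzipping on the client keeps chunk uploads small, at the cost of the server-side body-size concerns discussed in the review comments later in this thread.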
```mermaid
sequenceDiagram
    participant Dashboard as Dashboard UI
    participant Backend as Backend API
    participant DB as Database
    participant S3 as S3 Storage
    participant Replayer as RRWeb Replayer
    Dashboard->>Backend: GET /internal/session-recordings (cursor)
    Backend->>DB: query sessions (limit+1)
    Backend-->>Dashboard: sessions + next_cursor
    Dashboard->>Backend: GET /internal/session-recordings/{id}/chunks (cursor)
    Backend->>DB: query chunks
    Backend-->>Dashboard: chunks + next_cursor
    Dashboard->>Backend: GET /internal/session-recordings/{id}/chunks/{chunkId}/events
    activate Backend
    Backend->>S3: GetObject gzipped bytes
    S3-->>Backend: gzipped bytes
    Backend->>Backend: gunzip & validate -> events
    deactivate Backend
    Backend-->>Dashboard: events array
    Dashboard->>Replayer: init with events -> render replay
```
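The "gunzip & validate" step in the diagram could be sketched roughly as follows, assuming chunks are stored as gzipped JSON of shape `{ events: [...] }` (consistent with the `parsed.events` accesses discussed in the review comments). The function name and exact validation are illustrative, not the project's actual implementation.

```typescript
import { gzipSync, gunzipSync } from "node:zlib";

type RrwebEvent = { timestamp: number };

// Decompress the S3 object, then keep only entries that look like rrweb
// events (numeric timestamp). Fails loud if the payload is not the
// expected { events: [...] } shape. (Hypothetical sketch.)
function parseChunkBytes(gzipped: Buffer): RrwebEvent[] {
  const parsed: unknown = JSON.parse(gunzipSync(gzipped).toString("utf8"));
  if (typeof parsed !== "object" || parsed === null || !Array.isArray((parsed as { events?: unknown }).events)) {
    throw new Error("Chunk payload is not of shape { events: [...] }");
  }
  return (parsed as { events: unknown[] }).events.filter(
    (e): e is RrwebEvent =>
      typeof e === "object" && e !== null &&
      typeof (e as { timestamp?: unknown }).timestamp === "number",
  );
}
```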
**Estimated code review effort:** 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 **Pre-merge checks:** ❌ 4 failed checks (3 warnings, 1 inconclusive)
**Greptile Summary**

This PR adds a new Dashboard "Session Replays" viewer under Analytics that can play back multi-tab session recordings using rrweb. It introduces new client-side playback/state logic (global timeline, gap fast-forwarding between tabs, buffering while chunks download), stream-grouping utilities with unit tests, a nav item for Replays, and disables replay capture on the dashboard app itself via StackProvider analytics settings.

The main issue found is in the replay viewer's parallel chunk-event fetching: the worker queue uses a shared mutable index across async workers, which can skip or duplicate chunk fetch tasks and lead to missing or duplicated replay data. There is also a paging merge that can drop recordings if list requests overlap or return out of order.

**Confidence Score:** 3/5
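One way the paging-merge issue mentioned in the summary could be made robust is to merge pages keyed by id, so overlapping or out-of-order list responses can neither drop nor duplicate rows. A hypothetical sketch (field names assumed, not the dashboard's actual state shape):

```typescript
type Recording = { id: string, created_at_millis: number };

// Merge a newly fetched page into the existing list. Keying by id makes
// the merge idempotent: re-delivered or overlapping pages simply overwrite
// their own entries instead of dropping or duplicating recordings.
function mergePages(existing: Recording[], page: Recording[]): Recording[] {
  const byId = new Map(existing.map(r => [r.id, r]));
  for (const r of page) byId.set(r.id, r);
  // Keep a stable newest-first ordering regardless of arrival order.
  return [...byId.values()].sort((a, b) => b.created_at_millis - a.created_at_millis);
}
```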
**Sequence Diagram**

```mermaid
sequenceDiagram
    participant U as Admin user
    participant P as Replays page (PageClient)
    participant A as Stack AdminApp (session recordings API)
    participant R as rrweb Replayer
    U->>P: Open Analytics > Replays
    P->>A: listSessionRecordings(limit, cursor)
    A-->>P: recordings[] + nextCursor
    U->>P: Select a recording
    P->>A: listSessionRecordingChunks(recordingId, paginated)
    A-->>P: chunk metadata (tabId, first/last timestamps)
    P->>P: groupChunksIntoTabStreams(chunks)
    P->>P: computeGlobalTimeline(streams)
    par CHUNK_EVENTS_CONCURRENCY workers
        P->>A: getSessionRecordingChunkEvents(recordingId, chunkId)
        A-->>P: rrweb events[]
        P->>P: coerceRrwebEvents + append to eventsByTab
        P->>R: (ensureReplayerForTab) new Replayer(events, config)
        P->>R: addEvent(event) (incremental)
    end
    U->>P: Play / Pause / Seek
    P->>R: play(localOffset) / pause(localOffset)
    R-->>P: finish event
    P->>P: choose next tab / gap fast-forward / buffer / finish
```
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/page-client.tsx`:
- Around line 419-447: The loadPage callback closes over recordings causing
stale appends; change the logic so when appending pages you call
setRecordings(prev => [...prev, ...res.items]) instead of using the recordings
variable, keep the initial-set path as setRecordings(res.items) when cursor is
null, and then remove recordings from the useCallback dependency array (keep
adminApp, loadingMore and any refs like hasFetchedInitialRef/hasAutoSelectedRef
and PAGE_SIZE as needed) so loadPage no longer relies on a potentially stale
recordings snapshot.
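The functional-updater fix described above can be seen in isolation with a tiny store that mimics React's `setState` behavior. All names here are illustrative; the point is that `set(prev => ...)` always appends against the latest state, never a stale closure snapshot.

```typescript
type Recording = { id: string };

// Minimal setState-like store: accepts either a value or an updater function.
function makeStore<T>(initial: T) {
  let state = initial;
  return {
    get: () => state,
    set: (next: T | ((prev: T) => T)) => {
      // Cast mirrors React's own value-vs-updater overload handling.
      state = typeof next === "function" ? (next as (prev: T) => T)(state) : next;
    },
  };
}

const store = makeStore<Recording[]>([]);
// Initial page (cursor === null): replace the list outright.
store.set([{ id: "a" }]);
// Later pages: append via the functional form so the previous page list
// is always current, even if this callback captured an older snapshot.
store.set(prev => [...prev, { id: "b" }]);
```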
In `@apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/session-replay-machine.ts`:
- Around line 689-740: The TOGGLE_PLAY_PAUSE handler incorrectly treats
"finished" like a playable state: when state.playbackMode === "finished"
pausedAtGlobalMs equals globalTotalMs so resuming immediately re-triggers
REPLAYER_FINISH; modify the Play branch in the TOGGLE_PLAY_PAUSE case to detect
when state.playbackMode === "finished" (or pausedAtGlobalMs >= globalTotalMs)
and reset the resume target (pausedAtGlobalMs) to 0 (or another safe start)
before computing buffering and calling playEffectsForAllTabs so the replay
restarts from the beginning instead of instantly finishing; update any uses of
target/localTarget computation accordingly (symbols: TOGGLE_PLAY_PAUSE,
playbackMode, pausedAtGlobalMs, globalTotalMs, playEffectsForAllTabs,
globalOffsetToLocalOffset).
- Around line 1-12: Remove the unused TabStream import from the top import list
and replace the three non-null assertions that dereference ranges[mid] and
candidates[0] with defensive nullish coalescing using the project's throwErr
helper; specifically, change occurrences that read ranges[mid]! to ranges[mid]
?? throwErr("Expected valid range at binary search position") and change the
return that uses candidates[0]! to (candidates[0] ?? throwErr("Expected at least
one candidate tab")).tabKey so the binary-search/range lookups and candidate
selection in this file use null-safe checks.
🧹 Nitpick comments (12)
apps/backend/src/app/api/latest/session-recordings/batch/route.tsx (3)
**28-28: Unnecessary `as any` — TypeScript narrows sufficiently after the `in` check.**

After the guards on lines 26–27 (`typeof e !== "object"`, `!("timestamp" in e)`), TypeScript already narrows `e` to `Record<"timestamp", unknown>`, so you can access `e.timestamp` directly. Per coding guidelines, `any` casts require a justifying comment, but here the cast can simply be removed.

Suggested fix:

```diff
- const ts = (e as any).timestamp;
+ const ts = e.timestamp;
```
**82-84: Body-size gate runs after the buffer is already in memory.**

`fullReq.bodyBuffer` is populated by `createSmartRouteHandler` (via `req.arrayBuffer()`) before the handler is invoked, so a client can still force the server to allocate up to the framework's own limit before this check rejects it. This line is still useful as defense-in-depth validation, but consider also enforcing an upstream limit (e.g., reverse-proxy `client_max_body_size` or a streaming check) if you haven't already.
**126-140: Consider moving the dedup check before the upsert.**

The `existingChunk` lookup (line 143) is done after the upsert, so duplicate batches still perform an unnecessary DB write to update `startedAt`/`lastEventAt`. Since the timestamps are derived from the same events, the result is idempotent, but you can save a round-trip by checking for the existing chunk first and returning early before the upsert.

apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/page-client.tsx (5)
**83-93: `as unknown as RrwebEventWithTime[]` double-cast bypasses type safety entirely.**

After filtering, the objects are only validated to have a `number` `timestamp`. The `RrwebEventWithTime` type likely requires additional fields (`type`, `data`, etc.). This cast silently permits structurally invalid events into downstream rrweb code that expects the full shape.

Given that these events come from an untrusted external source (chunk storage), this is a reasonable tradeoff for a "best-effort coerce" function — but it should at minimum have a comment explaining why the cast is safe (e.g., "rrweb is tolerant of extra/missing fields at runtime") so future readers don't assume full structural validation has occurred.

As per coding guidelines: "Try to avoid the `any` type. Whenever you need to use `any`, leave a comment explaining why you're using it."
**441-442: `catch (e: any)` used in multiple places — prefer typed error access.**

Lines 441, 657, and 956 all use `catch (e: any)` to access `e?.message`. Per coding guidelines, `any` should be avoided. Use a helper or inline narrowing instead:

Suggested pattern:

```diff
- } catch (e: any) {
-   setListError(e?.message ?? "Failed to load session recordings.");
+ } catch (e: unknown) {
+   setListError(e instanceof Error ? e.message : "Failed to load session recordings.");
```

Apply the same pattern at lines 657 and 956.

As per coding guidelines: "Try to avoid the `any` type."
**1132-1136: `!idx` would incorrectly return `"Tab"` if a label index were `0`.**

Currently safe because `tabLabelIndex` values start at 1 (line 891: `i + 1`), but the guard `if (!idx)` is fragile — it conflates "not in map" (`undefined`) with a hypothetical `0` value. Using `idx === undefined` is more precise and resilient to future changes.

Suggested fix:

```diff
  const getTabLabel = useCallback((tabKey: TabKey) => {
    const idx = ms.tabLabelIndex.get(tabKey);
-   if (!idx) return "Tab";
+   if (idx === undefined) return "Tab";
    return `Tab ${idx}`;
  }, [ms.tabLabelIndex]);
```
**498-520: Pervasive silent `catch { // ignore }` blocks throughout the file.**

There are 15+ empty catch blocks swallowing errors from rrweb replayer operations (pause, play, addEvent, etc.). While individual rrweb calls can throw during teardown or invalid state, silently ignoring all errors makes debugging very difficult when something actually goes wrong.

Consider at minimum using `console.debug` or a structured logger at a low level so failures are visible during development. Alternatively, create a small helper like `safeReplayerCall(r, fn)` that catches and optionally logs, reducing the boilerplate.
**1304-1308: Remove unnecessary `as any` cast on `gridRow`.**

React's `CSSProperties["gridRow"]` type accepts both `string` and `number` values. The cast is not needed — the string `"1 / -1"` is a valid grid row value that TypeScript already allows without any cast.

Proposed fix:

```diff
- gridRow: isActive ? ("1 / -1" as any) : ((miniIndex ?? 0) + 1),
+ gridRow: isActive ? "1 / -1" : ((miniIndex ?? 0) + 1),
```

apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/session-replay-machine.test.ts (3)
**73-81: Unused helper `dispatchChain`.**

`dispatchChain` is defined but never called anywhere in this test file. Remove it or add tests that use it.
**340-344: Avoid `as any` — narrow the discriminated union instead.**

Lines 343, 1133-1134, and 1172-1173 all use `as any` to access effect properties without narrowing the `ReplayEffect` union. Per coding guidelines, `any` should be avoided and needs a comment when used. You can narrow first:

```ts
const play = playEffects[0];
expect(play.type).toBe("play_replayer");
if (play.type === "play_replayer") {
  expect(play.localOffsetMs).toBe(2000);
}
```

Or define a small test helper:

```ts
function expectEffect<T extends ReplayEffect["type"]>(
  effects: ReplayEffect[],
  type: T,
): Extract<ReplayEffect, { type: T }> {
  const e = effects.find(e => e.type === type);
  expect(e).toBeDefined();
  return e as Extract<ReplayEffect, { type: T }>;
}
```

This applies to the same pattern at lines 1133-1134 and 1172-1173.

As per coding guidelines: "Try to avoid the `any` type. Whenever you need to use `any`, leave a comment explaining why you're using it."
**1372-1385: Vacuous effect-tabKey validation in the fuzz test.**

The inner `if (tabKeys.size > 0)` block contains only comments — it never actually asserts anything. Either implement the check or remove the dead code to avoid giving a false sense of coverage.

Suggested implementation:

```diff
- if (tabKeys.size > 0) {
-   // Note: effects reference tabs from pre-transition state which
-   // might not be in post-transition state after SELECT_RECORDING.
-   // This is ok — the effect executor checks generation.
- }
+ // Effects may reference tabs from the pre-transition state (e.g.
+ // after SELECT_RECORDING clears streams). Skip the check in that
+ // case since the effect executor validates the generation.
```

apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/session-replay-machine.ts (1)
**206-265: Duplicated binary search across `findBestTabAtGlobalOffset` and `isTabInRangeAtGlobalOffset`.**

Lines 216-229 and 251-264 contain identical binary-search-over-chunk-ranges logic. Consider extracting a shared helper:

```ts
function isInChunkRanges(ranges: ChunkRange[], ts: number): boolean {
  let lo = 0;
  let hi = ranges.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    const r = ranges[mid] ?? throwErr("binary search OOB");
    if (ts < r.startTs) hi = mid - 1;
    else if (ts > r.endTs) lo = mid + 1;
    else return true;
  }
  return false;
}
```

This also addresses the non-null assertions (`ranges[mid]!`) on lines 220 and 255 — the helper can use `?? throwErr(...)` in one place. As per coding guidelines: "Prefer `?? throwErr(...)` over non-null assertions."
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/backend/src/app/api/latest/internal/session-recordings/[session_recording_id]/events/route.tsx`:
- Around line 139-155: The response currently returns a full chunks metadata
array and a paginated chunk_events slice; add a concise inline comment above the
return that explains this is intentional: chunks (the chunks array) contains
metadata for all chunks to let the frontend build a full chunk->tab mapping,
while chunk_events (chunkEvents) is paginated to avoid downloading all S3
objects and consumers must match events to chunks by chunk_id lookup rather than
array alignment. Place this comment immediately where the response object with
fields chunks and chunk_events is constructed in the events route.
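A consumer-side sketch of the `chunk_id` join that this comment asks to document: events are matched to chunks by id lookup, never by array alignment. The helper name and field shapes are hypothetical, trimmed down from the response fields named above.

```typescript
type ChunkMeta = { id: string, tab_id: string | null };
type ChunkEvents = { chunk_id: string, events: unknown[] };

// Join the paginated chunk_events slice back onto the full chunks metadata
// array by chunk_id. Fails loud if an events entry references an unknown
// chunk, instead of silently mis-aligning tabs.
function joinEventsToTabs(chunks: ChunkMeta[], chunkEvents: ChunkEvents[]) {
  const byId = new Map(chunks.map(c => [c.id, c]));
  return chunkEvents.map(ce => {
    const meta = byId.get(ce.chunk_id);
    if (!meta) throw new Error(`chunk_events references unknown chunk_id ${ce.chunk_id}`);
    return { tabId: meta.tab_id, events: ce.events };
  });
}
```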
🧹 Nitpick comments (8)
packages/stack-shared/src/interface/crud/session-recordings.ts (1)

**29-44: Chunk shape is duplicated and inconsistent between the two response types.**

`AdminListSessionRecordingChunksResponse` items include `browser_session_id` (line 34) but `AdminGetSessionRecordingAllEventsResponse.chunks` (lines 51-59) omits it. If this is intentional, consider extracting the common fields into a shared base type to reduce duplication and make the difference explicit. If unintentional, add `browser_session_id` to the all-events response as well.

♻️ Proposed refactor: extract shared chunk type

```diff
+type SessionRecordingChunkBase = {
+  id: string,
+  batch_id: string,
+  tab_id: string | null,
+  event_count: number,
+  byte_length: number,
+  first_event_at_millis: number,
+  last_event_at_millis: number,
+  created_at_millis: number,
+};
+
 export type AdminListSessionRecordingChunksResponse = {
-  items: Array<{
-    id: string,
-    batch_id: string,
-    tab_id: string | null,
-    browser_session_id: string | null,
-    event_count: number,
-    byte_length: number,
-    first_event_at_millis: number,
-    last_event_at_millis: number,
-    created_at_millis: number,
-  }>,
+  items: Array<SessionRecordingChunkBase & {
+    browser_session_id: string | null,
+  }>,
   pagination: {
     next_cursor: string | null,
   },
 };
```

Also applies to: 50-65
apps/backend/src/app/api/latest/internal/session-recordings/[session_recording_id]/events/route.tsx (3)

**81-84: Invalid `offset`/`limit` values silently fall back to defaults instead of failing.**

Per coding guidelines, the code should fail early and loud on invalid input. Currently, a non-numeric or negative `offset` silently becomes `0`, and an invalid `limit` silently becomes `chunks.length`. This can mask client bugs.

Proposed fix:

```diff
- const parsedOffset = query.offset != null ? Number.parseInt(query.offset, 10) : 0;
- const offset = Number.isFinite(parsedOffset) && parsedOffset >= 0 ? parsedOffset : 0;
- const parsedLimit = query.limit != null ? Number.parseInt(query.limit, 10) : chunks.length;
- const limit = Number.isFinite(parsedLimit) && parsedLimit >= 1 ? parsedLimit : chunks.length;
+ const offset = query.offset != null ? Number.parseInt(query.offset, 10) : 0;
+ if (!Number.isFinite(offset) || offset < 0) {
+   throw new StackAssertionError("Invalid offset parameter", { offset: query.offset });
+ }
+ const limit = query.limit != null ? Number.parseInt(query.limit, 10) : chunks.length;
+ if (!Number.isFinite(limit) || limit < 1) {
+   throw new StackAssertionError("Invalid limit parameter", { limit: query.limit });
+ }
```

As per coding guidelines, "Fail early, fail loud. Fail fast with an error instead of silently continuing."
**89-89: Multiple uses of `any` without explanatory comments.**

The coding guidelines require a comment explaining why `any` is used, why the type system fails, and how errors would still be caught. This applies to:

- Line 89: `any[]` in the `chunkEvents` type
- Line 100: `catch (e: any)`
- Line 109: `let parsed: any`
- Line 129: `parsed.events as any[]`

At minimum, add a brief comment at each site (e.g., "S3 payload shape is validated at runtime below" for the parsed data, and "AWS SDK error type is untyped" for the catch).

As per coding guidelines, "Try to avoid the `any` type. Whenever you need to use `any`, leave a comment explaining why you're using it, why the type system fails, and how you can be certain that errors would still be flagged."

Also applies to: 100-100, 109-109, 129-129
**90-94: Shared mutable index in concurrent workers is correct but fragile.**

The `nextIndex++` pattern works because JS is single-threaded and no `await` sits between the `while` condition check and the index capture. However, this is a subtle invariant — any future refactor inserting an `await` before `nextIndex++` would introduce a race.

Consider a safer pull-from-queue approach:

Suggested alternative:

```diff
- const chunkEvents: Array<{ chunk_id: string, events: any[] }> = new Array(chunksToDownload.length);
- let nextIndex = 0;
-
- async function worker() {
-   while (nextIndex < chunksToDownload.length) {
-     const idx = nextIndex++;
-     const chunk = chunksToDownload[idx];
+ const chunkEvents: Array<{ chunk_id: string, events: any[] }> = new Array(chunksToDownload.length);
+ const indexQueue = chunksToDownload.map((_, i) => i);
+
+ async function worker() {
+   while (indexQueue.length > 0) {
+     const idx = indexQueue.shift()!;
+     const chunk = chunksToDownload[idx];
```

apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/page-client.tsx (4)
**82-92: Add justification comments for the `as` casts, especially the double-cast on line 91.**

The runtime checks on lines 84–88 validate shape incrementally, so the intermediate casts are defensible. However, line 91's `as unknown as RrwebEventWithTime[]` is a full type-system bypass — per guidelines, every `as`/`any` should include a comment explaining why the cast is needed, why the type system fails, and how you're sure errors would still surface (e.g. "rrweb tolerates extra fields; we validated `timestamp`, which is the only field required for replay ordering").
**383-384: Use `catch (e: unknown)` instead of `catch (e: any)`.**

Per coding guidelines, avoid the `any` type. Use `unknown` and narrow:

Proposed fix:

```diff
- } catch (e: any) {
-   setListError(e?.message ?? "Failed to load session recordings.");
+ } catch (e: unknown) {
+   setListError(e instanceof Error ? e.message : "Failed to load session recordings.");
```

The same pattern appears at lines 599 and 929 — apply the same fix there.
**609-733: Pervasive silent `catch { // ignore }` blocks mask potential failures.**

There are 10+ silent catch blocks in `executeEffects` alone (and more elsewhere). While rrweb player calls can fail during teardown, silently swallowing all errors violates the "never silently swallow errors" guideline and makes debugging near-impossible. Consider at minimum a `console.debug` or structured log in development so failures are observable:

Example for one block (apply the pattern throughout):

```diff
  try {
    r.play(effect.localOffsetMs);
- } catch {
-   // ignore
+ } catch (e) {
+   console.debug("[session-replay] play failed for tab", effect.tabKey, e);
  }
```
**799-801: Use rrweb's `EventType` enum instead of magic number `2` with `as any`.**

rrweb exports `EventType.FullSnapshot` (value `2`). Importing it (or a type-only `typeof import("rrweb").EventType` alongside the dynamic `Replayer` import) avoids both the `as any` cast and the magic number:

Proposed fix:

```diff
  const hasFullSnapshot = !msRef.current.hasFullSnapshotByTab.has(tabKey)
-   // eslint-disable-next-line @typescript-eslint/no-unsafe-member-access
-   && events.some(e => (e as any).type === 2);
+   && events.some(e => "type" in e && e.type === 2); // EventType.FullSnapshot
```

At minimum, add a comment explaining the magic number. Based on learnings: "Do NOT use `as`/`any`/type casts to bypass the type system unless you specifically asked the user about it."
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/e2e/tests/backend/endpoints/api/v1/session-recordings.test.ts`:
- Around line 455-472: The test is using a chunk ID
(otherChunks.body?.items?.[0]?.id) as the pagination cursor, which may not equal
the API's cursor format; change the test to take otherCursor from
otherChunks.body?.pagination?.next_cursor (use pagination.next_cursor) and pass
that encoded value to the /chunks?cursor= call to verify cross-session cursor
rejection; if session2 currently only has one chunk (no next_cursor),
create/upload a second chunk for recording2 before fetching otherChunks so a
real pagination cursor exists and the test exercises rejection of a valid cursor
from another session when calling the endpoint via niceBackendFetch for
recording1.
🧹 Nitpick comments (2)
apps/e2e/tests/backend/endpoints/api/v1/session-recordings.test.ts (2)
**27-46: Consider using `.toMatchInlineSnapshot` for response assertions.**

The coding guidelines prefer `.toMatchInlineSnapshot` over other selectors where possible. Throughout this file, tests use broad range checks like `toBeGreaterThanOrEqual(400)`/`toBeLessThan(500)` and `toMatchObject`. Inline snapshots would capture the exact response shape and status, making regressions easier to detect. This applies to most tests in this file (e.g., lines 44–45, 73–79, 96–101, 121–122, etc.).

As per coding guidelines: "When writing tests, prefer `.toMatchInlineSnapshot` over other selectors, if possible."
**312-315: Use `encodeURIComponent()` for path parameters in URL template literals.**

Several URLs in this file interpolate `recordingId` and `chunkId` directly into paths without encoding (lines 312, 323, 434, 447, 456, 467, 503, 514). While UUIDs are URL-safe, the coding guidelines require consistent encoding. Some query-string usages already correctly use `encodeURIComponent` (e.g., line 372).

As per coding guidelines: "Use urlString`` or `encodeURIComponent()` instead of normal string interpolation for URLs, for consistency."
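For illustration, a hypothetical URL-building helper with plain template literals (the project's urlString tag would achieve the same result): encoding every interpolated segment makes the behavior consistent whether or not the value happens to be URL-safe.

```typescript
// Build the chunks-list URL for a recording, encoding both the path segment
// and the optional cursor query parameter. (Hypothetical helper; endpoint
// path shown for illustration only.)
function buildChunksUrl(recordingId: string, cursor?: string): string {
  const base = `/api/v1/session-recordings/${encodeURIComponent(recordingId)}/chunks`;
  return cursor ? `${base}?cursor=${encodeURIComponent(cursor)}` : base;
}
```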
https://www.loom.com/share/3b7c9288149e4f878693281778c9d7e0

## Todos (future PRs)

- Fix pre-login recording
- Better session search (filters, cmd-k, etc)

## Summary by CodeRabbit

- **New Features**
  - Analytics → Replays: session recording & multi-tab replay with timeline, speed, seek, and playback settings; dashboard UI for listing and viewing replays.
- **Admin APIs**
  - Admin endpoints to list recordings, list chunks, fetch chunk events, and retrieve all events (paginated).
- **Client**
  - Client-side rrweb recording with batching, deduplication, upload API and a send-batch client method.
- **Configuration**
  - New STACK_S3_PRIVATE_BUCKET for private session storage.
- **Tests**
  - Extensive unit and end-to-end tests for replay logic, streams, playback, and APIs.
- **Chores**
  - Removed an E2E API test GitHub Actions workflow.