
feat: migrate from Next.js to Vite 8 SSR #23

Open
rsbh wants to merge 28 commits into main from
feat_vite_migration

Conversation

@rsbh
Member

@rsbh rsbh commented Mar 17, 2026

Summary

  • Vite 8 replaces Next.js — no more binary dependency issues, Turbopack filesystem root problems, or monorepo resolution headaches
  • Custom content loader via import.meta.glob — eager frontmatter for sidebar, lazy MDX loading per page
  • SSR with renderToString — server resolves all data (config, page tree, MDX component, API specs) before rendering
  • PageContext provides config, tree, page data, API specs to all components — no fs/path imports on client
  • MiniSearch replaces fumadocs search — indexes docs + API operations, suggestions on empty query
  • @mdx-js/rollup with remark-gfm, remark-frontmatter, remark-mdx-frontmatter, remark-directive
  • @shikijs/rehype for syntax highlighting (github-light/dark dual themes)
  • Content dir served as static files — images, assets accessible in both dev and prod (.md/.mdx filtered)
  • Self-contained prod bundles: ssr.noExternal: true bundles all deps, no dual React instance issues
  • API specs fetched via /api/specs endpoint instead of embedded in HTML (avoids 2MB+ bloat)
  • Canary release workflow for testing from npm
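
The eager/lazy split described above can be sketched as follows. The module paths, `FrontmatterModule` shape, and `buildSidebar` name are illustrative assumptions, not the actual Chronicle API; in real code the record would come from `import.meta.glob('./content/**/*.mdx', { eager: true })` with the components behind a separate lazy glob.

```typescript
// Shape of one eager-glob entry: frontmatter only, no component.
// (Illustrative; the real frontmatter schema may carry more fields.)
type FrontmatterModule = { frontmatter: { title: string; order?: number } };

export interface SidebarEntry {
  slug: string;
  title: string;
  order: number;
}

// Turn "./content/guides/setup.mdx" into "guides/setup".
function pathToSlug(path: string): string {
  return path.replace(/^\.\/content\//, "").replace(/\.mdx?$/, "");
}

// Build a sorted sidebar from eagerly loaded frontmatter modules.
export function buildSidebar(
  modules: Record<string, FrontmatterModule>,
): SidebarEntry[] {
  return Object.entries(modules)
    .map(([path, mod]) => ({
      slug: pathToSlug(path),
      title: mod.frontmatter.title,
      order: mod.frontmatter.order ?? Number.MAX_SAFE_INTEGER,
    }))
    .sort((a, b) => a.order - b.order || a.slug.localeCompare(b.slug));
}
```

The sidebar never pays the cost of compiling MDX bodies; only the page actually navigated to triggers the lazy import.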

Commands

| Command | Description |
| --- | --- |
| `chronicle dev` | Vite dev server with HMR + SSR |
| `chronicle build` | Client + server bundles to dist/ |
| `chronicle start` | Production server from built dist/ |
| `chronicle serve` | Build + start combined |

Architecture

CLI (Node) → Vite dev server
  → vite.ssrLoadModule(router.ts) for API routes
  → vite.ssrLoadModule(source.ts) for page data (import.meta.glob)
  → renderToString with PageContext → full HTML response
  → Client hydrates, loads MDX on navigation via dynamic import
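
The router step in the flow above can be sketched as a minimal pattern matcher. This is a hypothetical shape, not the actual router.ts; it only illustrates dispatching a pathname against static and `:param` segments.

```typescript
// A handler receives extracted path params and returns a result
// (simplified to a string here; the real handlers return Response).
type Handler = (params: Record<string, string>) => string;

interface Route {
  pattern: string; // e.g. "/api/search" or "/api/og/:slug"
  handler: Handler;
}

// Return the handler result for the first matching route, or null.
export function matchRoute(routes: Route[], pathname: string): string | null {
  for (const route of routes) {
    const patternParts = route.pattern.split("/").filter(Boolean);
    const pathParts = pathname.split("/").filter(Boolean);
    if (patternParts.length !== pathParts.length) continue;

    const params: Record<string, string> = {};
    const matched = patternParts.every((part, i) => {
      if (part.startsWith(":")) {
        params[part.slice(1)] = pathParts[i];
        return true;
      }
      return part === pathParts[i];
    });
    if (matched) return route.handler(params);
  }
  return null;
}
```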

Test plan

  • chronicle dev — basic example, frontier example
  • chronicle build + chronicle start — production mode
  • Search (empty suggestions + query results)
  • API pages (landing, endpoint detail, sidebar)
  • Mermaid diagrams
  • Syntax highlighting
  • Images from content dir
  • Details/summary (accordion) styling
  • Client-side navigation (sidebar stays, content swaps)
  • Docker build
  • HMR for content changes

🤖 Generated with Claude Code

rsbh and others added 21 commits March 16, 2026 17:43
Introduce in-process Vite dev/prod server to replace Next.js child process spawning.
Adds HTML shell, SSR entry points, URL router with stub handlers, and programmatic Vite config with fumadocs-mdx/vite plugin.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Port Next.js page components to framework-agnostic pages (DocsPage, DocsLayout, ApiPage, ApiLayout, NotFound). Add Head component for SSR meta/OG/JSON-LD rendering. Wire App component to route URLs to correct page components. Add vitest config with mocks for fumadocs source, next/font, and next/navigation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add minimal client-side router (~100 lines) with RouterProvider, usePathname, useRouter, and Link component. Replace all next/navigation and next/link imports across themes (default, paper), search, and MDX link components. Replace next/font/google with static font declaration.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace custom ~100 line router with react-router-dom v7. Use StaticRouter for SSR, BrowserRouter for client hydration. Rename NextLink to RouterLink across all themes and components. Delete src/lib/router.tsx.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace all Apsara Link with react-router-dom Link. Remove url prop from App (useLocation from router context instead). Rename MdxLink export. Use plain <a> for external links.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Port all Next.js route handlers to framework-agnostic handlers: search, health, apis-proxy, OG image (satori), llms.txt, sitemap.xml, robots.txt. Wire real handlers into router replacing stubs. Add satori dependency for OG image generation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
dev: vite.createServer() in-process, no child process spawning
build: vite.build() for client + server bundles
start: node server with pre-built dist/
init: creates content/ + chronicle.yaml only, no .chronicle/ scaffold
serve: build + start in sequence

Delete process.ts (no child processes), strip scaffold.ts to just detectPackageManager + getChronicleVersion.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Delete src/app/ (all Next.js routes), next.config.mjs. Remove next from dependencies, add react-router-dom and satori. Move API layout CSS module to src/pages/. Add handler tests (health, robots, sitemap, apis-proxy, llms, og). Add CLI config and scaffold utility tests. 38 tests passing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- vite ^8.0.0, @vitejs/plugin-react ^6.0.1
- @mdx-js/rollup with remark-frontmatter, remark-mdx-frontmatter, remark-gfm
- Add gray-matter, minisearch, glob for custom content/search
- Remove fumadocs-core, fumadocs-mdx, zod, @types/unist
- rollupOptions → rolldownOptions for Vite 8
- ssr.noExternal for @raystack/apsara CSS
- build-cli reads externals from package.json automatically

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- source.ts: import.meta.glob for eager frontmatter + lazy MDX components
- search.ts: minisearch + gray-matter for server-side indexing
- llms.ts: custom index handler, no fumadocs llms helper
- search.tsx: custom useSearch hook with fetch-based client
- sitemap.ts: async, uses new source API
- Remove handleLlmsFull, get-llm-text.ts

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
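
The indexing shape described in this commit can be illustrated with a simplified stand-in. The real code uses MiniSearch (`new MiniSearch({ fields, storeFields })` plus `addAll`); this sketch only shows the doc shape and a naive exact-term lookup, not MiniSearch's fuzzy/prefix scoring.

```typescript
// A searchable document: id plus the text fields that get indexed.
interface Doc {
  id: string;
  title: string;
  content: string;
}

// Build a term -> doc-id inverted index (stand-in for MiniSearch.addAll).
export function buildIndex(docs: Doc[]): Map<string, Set<string>> {
  const index = new Map<string, Set<string>>();
  for (const doc of docs) {
    const terms =
      `${doc.title} ${doc.content}`.toLowerCase().match(/\w+/g) ?? [];
    for (const term of terms) {
      if (!index.has(term)) index.set(term, new Set());
      index.get(term)!.add(doc.id);
    }
  }
  return index;
}

// Exact single-term lookup (stand-in for MiniSearch.search).
export function search(
  index: Map<string, Set<string>>,
  query: string,
): string[] {
  return [...(index.get(query.toLowerCase()) ?? [])];
}
```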
- PageContext provides config, tree, page data to all components
- dev.ts resolves all data server-side before render
- entry-server uses renderToString (no streaming)
- entry-client hydrates with embedded __PAGE_DATA__
- Client navigation loads MDX via import.meta.glob on demand
- Remove loadConfig() from all client components
- API pages temporarily stubbed (need server-side spec loading)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
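
Embedding `__PAGE_DATA__` into the HTML shell, as this commit describes, usually needs one safeguard: a literal `</script>` inside the JSON would close the inline script early. A minimal sketch (the escaping shown is a common pattern, not necessarily Chronicle's exact code):

```typescript
// Serialize page data for an inline <script>, escaping "<" so that a
// "</script>" inside the payload cannot break out of the script block.
export function embedPageData(data: unknown): string {
  const json = JSON.stringify(data)
    .replace(/</g, "\\u003c")
    .replace(/\u2028/g, "\\u2028") // line/paragraph separators are
    .replace(/\u2029/g, "\\u2029"); // invalid in JS string literals
  return `<script>window.__PAGE_DATA__ = ${json}</script>`;
}
```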
- Delete source.config.ts, __mocks__/ (next-*, source-server)
- image.tsx: plain <img> instead of next/image
- init.ts: remove .source from gitignore entries
- vitest.config: remove @/.source and next/* aliases, add @content
- Fix async handler tests (sitemap, llms)
- Fix duplicate key warning in Layout.tsx navbar

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add apiSpecs to PageContext, SSRData, and embedded __PAGE_DATA__
- dev.ts loads specs via loadApiSpecs() server-side
- ApiLayout reads specs from context, builds API tree
- ApiPage reads specs from context, renders endpoint pages
- No fs/path imports in client components

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add /api/specs endpoint serving serialized OpenAPI specs
- Remove apiSpecs from __PAGE_DATA__ (was 2MB+ bloating HTML)
- Client fetches specs on demand when navigating to /apis
- SSR still loads specs server-side for full render

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
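
The on-demand client fetch described above can be sketched as a memoized loader. `createSpecsLoader` is a hypothetical name, not the real Chronicle export; the point is that repeat navigations to /apis reuse the same in-flight or resolved promise instead of re-fetching the multi-megabyte specs.

```typescript
type Fetcher = () => Promise<unknown[]>;

// Wrap a fetcher (e.g. () => fetch('/api/specs').then(r => r.json()))
// so only the first call hits the network.
export function createSpecsLoader(fetchSpecs: Fetcher): Fetcher {
  let cached: Promise<unknown[]> | null = null;
  return () => {
    if (!cached) {
      cached = fetchSpecs().catch((err) => {
        cached = null; // drop the cache so a failed fetch can be retried
        throw err;
      });
    }
    return cached;
  };
}
```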
- entry-prod.ts: full server with startServer(), render, router, config
- prod.ts: thin wrapper that loads built entry-prod.js
- Build server with ssr.noExternal: true (bundles all deps)
- rollupOptions → rolldownOptions in build/serve commands
- No dual React instance issues — everything in one bundle

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Raw HTML <details> in MDX bypasses component mapping, so style
them directly in .content selector using Apsara design tokens.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Serve content dir as static files, skip .md/.mdx for security
- Works in both dev (dev.ts) and prod (entry-prod.ts)
- Fixes image rendering in MDX (e.g. Frontier flow diagram)
- Add max-width: 100% for content images

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
MdxPre detects language-mermaid code blocks and renders
them with the Mermaid component instead of plain code.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
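
The detection this commit describes can be sketched as a className check. The `language-mermaid` convention follows standard MDX/rehype output for fenced code blocks; the function names are illustrative, not the actual MdxPre internals.

```typescript
// Extract the language from a code block className like
// "language-mermaid" or "hljs language-tsx extra".
export function codeBlockLanguage(className?: string): string | null {
  const match = className?.match(/(?:^|\s)language-([\w-]+)/);
  return match ? match[1] : null;
}

// Route language-mermaid blocks to the Mermaid renderer.
export function isMermaidBlock(className?: string): boolean {
  return codeBlockLanguage(className) === "mermaid";
}
```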
- Add @shikijs/rehype as rehype plugin in MDX config
- Dual theme: github-light / github-dark via CSS variables
- CSS applies --shiki-light/--shiki-dark per data-theme

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
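
A plausible shape for the dual-theme configuration this commit describes (the `themes` and `defaultColor` option names follow the published @shikijs/rehype API; the surrounding plugin wiring is illustrative):

```typescript
import rehypeShiki from "@shikijs/rehype";

// With defaultColor: false, Shiki emits --shiki-light/--shiki-dark CSS
// variables instead of inline colors, so CSS can pick the active theme
// per data-theme.
export const rehypePlugins = [
  [
    rehypeShiki,
    {
      themes: {
        light: "github-light",
        dark: "github-dark",
      },
      defaultColor: false,
    },
  ],
];
```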
Publishes canary npm release on every PR commit with version
format 0.1.0-canary.<short-sha> using --tag canary.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai

coderabbitai bot commented Mar 17, 2026


⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 10f5cc4a-17cb-451c-a378-e9126910d5ba

📥 Commits

Reviewing files that changed from the base of the PR and between c5d277e and d9f273b.

📒 Files selected for processing (7)
  • packages/chronicle/src/cli/commands/build.ts
  • packages/chronicle/src/server/adapters/vercel.ts
  • packages/chronicle/src/server/entry-prod.ts
  • packages/chronicle/src/server/entry-vercel.ts
  • packages/chronicle/src/server/request-handler.ts
  • packages/chronicle/src/server/vite-config.ts
  • packages/chronicle/src/themes/default/Page.module.css
"Almost every programming language is overrated by its practitioners." - Larry Wall
📝 Walkthrough

Walkthrough

Migrates Chronicle from Next.js to a Vite+SSR architecture: removes Next.js routes/config, adds Vite dev/prod servers, SSR client/server entrypoints, route handlers/router, import.meta.glob content source, React Router-driven client, and updated CLI build/dev/serve flows.

Changes

Cohort / File(s) Summary
Skill Config Files
.claude/skills/agent-browser, .claude/skills/dogfood, .claude/skills/electron, .claude/skills/slack, .claude/skills/vercel-sandbox
Added small manifest files referencing agent skill paths.
Repo & CI / Test config
.github/workflows/canary.yml, vitest.config.ts, package.json
Added canary publish workflow, Vitest config and top-level deps (vitest, react-router-dom, satori).
Chronicle package manifest & lock
packages/chronicle/package.json, packages/chronicle/skills-lock.json
Large dependency reshuffle toward Vite/MDX tooling and satori; added skills-lock.json listing agent skills.
Build tooling
packages/chronicle/build-cli.ts, packages/chronicle/tsconfig.json
Bun build externals from package.json; removed .source path mapping and includes in tsconfig.
Removed Next.js configs & sources
packages/chronicle/next.config.mjs, packages/chronicle/source.config.ts
Deleted Next.js/MDX fumadocs integration and source.config exports.
Removed Next.js app routes & APIs
packages/chronicle/src/app/... (layout, pages, api routes, og, sitemap, robots, llms, providers, etc.)
Removed file-based Next.js routes and their exported handlers/layouts/metadata.
Vite server infra & SSR entrypoints
packages/chronicle/src/server/dev.ts, .../entry-client.tsx, .../entry-server.tsx, .../entry-prod.ts, .../prod.ts, .../vite-config.ts, .../index.html
Added Vite dev/prod servers, SSR render entry, production bootstrap, Vite config generator, and server HTML template.
Server handlers & router
packages/chronicle/src/server/handlers/*, packages/chronicle/src/server/router.ts
New route handlers (health, search, specs, apis-proxy, og, sitemap, robots, llms) and a route matcher for dispatch.
Server app & exports
packages/chronicle/src/server/App.tsx, packages/chronicle/src/server/entry-prod.ts
Added main App component for client routing and startServer export for production.
Content/source & page context
packages/chronicle/src/lib/source.ts, packages/chronicle/src/lib/page-context.tsx, packages/chronicle/src/lib/head.tsx
Replaced fumadocs loader with import.meta.glob-driven source model, async page tree and component loader; added PageProvider and Head component.
Client pages & layouts
packages/chronicle/src/pages/DocsPage.tsx, DocsLayout.tsx, ApiPage.tsx, ApiLayout.tsx, NotFound.tsx
Added React Router-driven page and layout components (named exports) for docs and API.
MDX & UI changes
packages/chronicle/src/components/mdx/*, packages/chronicle/src/components/ui/*
MDX components adapted for React Router, Mermaid code-block rendering, simplified image component, link changes to react-router-dom, and new client-side search hook.
Theme updates
packages/chronicle/src/themes/.../*, packages/chronicle/src/themes/default/font.ts
Replaced next/font usage with plain CSS/class style, switched routing hooks to React Router, added CSS for details/images.
CLI commands & utils
packages/chronicle/src/cli/commands/*, packages/chronicle/src/cli/utils/*
Reworked build/dev/serve/start to use Vite servers and dynamic imports; added content/out options; removed scaffold/process lifecycle helpers; updated exported utilities.
Tests added
packages/chronicle/src/**/*__tests__/*
Added unit/integration tests for Head, CLI utils, server handlers, router, vite-config, SSR entry and server handlers.

Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Client
participant DevServer
participant ViteMiddleware
participant Router
participant Handler
Client->>DevServer: HTTP request (asset/page/API)
DevServer->>ViteMiddleware: handle asset/HMR or transform
DevServer->>Router: matchRoute(url)
Router->>Handler: invoke handler(req)
Handler-->>DevServer: Response (JSON/XML/HTML/image)
DevServer-->>Client: HTTP response
```

```mermaid
sequenceDiagram
participant Server
participant Source
participant Renderer
participant Client
Server->>Source: getPage / buildPageTree / loadPageComponent
Source-->>Server: page data + component
Server->>Renderer: render(url, SSRData)
Renderer-->>Server: HTML string
Server-->>Client: HTML + embedded PAGE_DATA
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • rohilsurana
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage: ⚠️ Warning. Docstring coverage is 1.79%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
Title check: ✅ Passed. The title "feat: migrate from Next.js to Vite 8 SSR" accurately summarizes the main change in the changeset: a complete architectural migration from Next.js to Vite 8 with SSR support.
Description check: ✅ Passed. The description is comprehensive and directly related to the changeset, detailing the migration rationale, new architecture, commands, test plan, and implementation specifics for Vite, content loading, and SSR.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/chronicle/src/cli/commands/init.ts (1)

77-110: ⚠️ Potential issue | 🟠 Major

Existing Chronicle installs will not actually get migrated here.

This path only fills missing keys. If a repo already has dev/build/start wired to Next.js or pins an older @raystack/chronicle, it will still be reported as initialized even though it does not match the new Vite toolchain. At minimum, detect Chronicle-managed entries that differ from the current template and rewrite or warn on them.
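
A minimal sketch of the suggested detection (names are illustrative, not the actual init.ts code): compare Chronicle-managed keys against the template and report the ones that drifted, instead of only filling missing keys.

```typescript
type Dict = Record<string, string>;

// Merge template-managed keys into an existing map, filling missing keys
// as before but also overwriting drifted ones and reporting them so the
// CLI can warn about a required migration.
export function mergeManagedKeys(
  existing: Dict,
  template: Dict,
): { merged: Dict; drifted: string[] } {
  const merged = { ...existing };
  const drifted: string[] = [];
  for (const [key, value] of Object.entries(template)) {
    if (!(key in merged)) {
      merged[key] = value; // missing key: current behaviour
    } else if (merged[key] !== value) {
      drifted.push(key); // managed key diverged from the template
      merged[key] = value;
    }
  }
  return { merged, drifted };
}
```

Keys not managed by the template (user-added scripts or deps) are left untouched.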

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/cli/commands/init.ts` around lines 77 - 110, The
current merge logic only adds missing keys and thus won't migrate repos that
have existing Chronicle-managed entries (like dev/build/start scripts or a
pinned `@raystack/chronicle`) that differ from the new template; update the code
that manipulates existing.scripts, existing.dependencies and
existing.devDependencies to detect keys that are managed by Chronicle (compare
against template.scripts/template.dependencies/template.devDependencies) and if
values differ either overwrite them with the template or emit a clear warning
and set updated=true; ensure the change is applied before writing
packageJsonPath and include references to the symbols existing.scripts,
template.scripts, existing.devDependencies, template.devDependencies,
existing.dependencies, template.dependencies and the updated flag so the logic
can both replace mismatched Chronicle entries or log a warning indicating a
manual migration is required.
packages/chronicle/src/lib/source.ts (1)

97-137: ⚠️ Potential issue | 🟠 Major

The current tree builder can't represent section index pages or nested folders.

Only page.slugs[0] is used for grouping, so guides/index.mdx is pushed into rootPages, guides/setup/linux.mdx is flattened directly under guides, and folders are appended after all root pages regardless of order. That yields duplicated sections and incorrect navigation once content is nested.
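
A sketch of a tree builder that handles arbitrary nesting (types and names are illustrative, not the actual source.ts API, and it assumes slugs keep the trailing "index" segment): recurse on the full slug path instead of grouping only on slugs[0], and treat an `index` leaf as its folder's index page.

```typescript
interface Page {
  slugs: string[];
  title: string;
}

export interface TreeNode {
  name: string;
  index?: Page; // e.g. guides/index.mdx for the "guides" node
  pages: Page[];
  children: TreeNode[];
}

// Build a nested tree from pages keyed by full slug paths.
export function buildTree(pages: Page[], name = ""): TreeNode {
  const node: TreeNode = { name, pages: [], children: [] };
  const byFolder = new Map<string, Page[]>();

  for (const page of pages) {
    if (page.slugs.length === 1) {
      if (page.slugs[0] === "index") node.index = page;
      else node.pages.push(page);
    } else {
      // Strip the first segment and bucket for a recursive call.
      const [head, ...rest] = page.slugs;
      const bucket = byFolder.get(head) ?? [];
      bucket.push({ ...page, slugs: rest });
      byFolder.set(head, bucket);
    }
  }

  for (const [folder, folderPages] of byFolder) {
    node.children.push(buildTree(folderPages, folder));
  }
  return node;
}
```

Ordering (the sortByOrder concern above) can then be applied per node over `pages` and `children` together, rather than appending all folders after all root pages.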

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/lib/source.ts` around lines 97 - 137, The tree builder
flattens nested content by only grouping on page.slugs[0], causing index pages
like guides/index.mdx to end up in rootPages and nested pages like
guides/setup/linux.mdx to be mis-placed; update the pages.forEach grouping logic
to: classify an index page when page.slugs.length === 1 (or url ===
`/${folder}`) and attach it as the folder's index, otherwise push multi-segment
pages into a per-folder list keyed by the first slug but keep their full slug
arrays so deeper nesting can be built; then in the folder assembly (symbols:
folders map, folderItems, sortByOrder, indexPage, folderOrder) build each
folder's children from the full-item list (preserving nested structure if
page.slugs.length > 2), set folder.order from the folder index page if present
or from the first child, and finally merge rootPages and folderItems and run
sortByOrder on the combined list so folders and root pages are ordered
correctly.
🟠 Major comments (22)
packages/chronicle/src/components/ui/search.tsx-24-39 (1)

24-39: ⚠️ Potential issue | 🟠 Major

Prevent stale/out-of-order search results from overwriting newer queries.

The current effect does not cancel in-flight fetches, so older responses can win races. Combined with rendering at Line 142 without an isLoading guard, users can see stale results during active loading.

Proposed fix
 function useSearch(query: string) {
   const [results, setResults] = useState<SearchResult[]>([]);
   const [isLoading, setIsLoading] = useState(false);
-  const timerRef = useRef<ReturnType<typeof setTimeout>>();
+  const timerRef = useRef<ReturnType<typeof setTimeout> | null>(null);
+  const abortRef = useRef<AbortController | null>(null);

   useEffect(() => {
-    clearTimeout(timerRef.current);
+    if (timerRef.current) clearTimeout(timerRef.current);
+    abortRef.current?.abort();

     timerRef.current = setTimeout(async () => {
       setIsLoading(true);
       try {
+        abortRef.current = new AbortController();
         const params = new URLSearchParams();
         if (query) params.set("query", query);
-        const res = await fetch(`/api/search?${params}`);
+        const res = await fetch(`/api/search?${params.toString()}`, {
+          signal: abortRef.current.signal,
+        });
+        if (!res.ok) throw new Error("Search request failed");
         setResults(await res.json());
-      } catch {
+      } catch (err) {
+        if ((err as DOMException).name === "AbortError") return;
         setResults([]);
+      } finally {
+        setIsLoading(false);
       }
-      setIsLoading(false);
     }, 100);
-    return () => clearTimeout(timerRef.current);
+    return () => {
+      if (timerRef.current) clearTimeout(timerRef.current);
+      abortRef.current?.abort();
+    };
   }, [query]);

   return { results, isLoading };
 }
...
-              {search.length > 0 &&
+              {search.length > 0 &&
+                !isLoading &&
                 results.map((result) => (

Also applies to: 142-157

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/components/ui/search.tsx` around lines 24 - 39, The
search effect can suffer from response races: update the fetch logic in the
useEffect that uses timerRef.current, setIsLoading, setResults and
URLSearchParams to create an AbortController for each scheduled request, pass
controller.signal to fetch, and call controller.abort() in the timeout/cleanup
so in-flight requests are canceled; only call setResults after confirming the
response was not aborted (handle AbortError separately) to avoid overwriting
newer results. Additionally, ensure the component's render path that displays
results (the block referenced around lines 142-157) respects isLoading (do not
render previous results while isLoading) so stale results aren’t shown during an
active query.
packages/chronicle/src/components/ui/search.tsx-166-171 (1)

166-171: 🛠️ Refactor suggestion | 🟠 Major

Add method field to SearchResult type returned from /api/search.

The getResultIcon function currently parses the HTTP method from result.content (line 168), which contains the operation description—not the method. This is unreliable. The method is already computed in the search handler (line 83 of handlers/search.ts) but only stored in the title. Add a dedicated method field to the SearchResult interface and return it from the search handler so getResultIcon can render directly from structured data instead of parsing text.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/components/ui/search.tsx` around lines 166 - 171,
Update the SearchResult shape to include a dedicated method:string (e.g., add
method to the SearchResult interface) and modify the search handler
(handlers/search.ts, where the result title is currently set) to populate and
return that method field from the API; then update getResultIcon in
packages/chronicle/src/components/ui/search.tsx to use result.method directly
(and remove the fragile content.split parsing) so MethodBadge is rendered from
the structured result.method value.
packages/chronicle/src/components/ui/footer.tsx-21-25 (1)

21-25: ⚠️ Potential issue | 🟠 Major

Fix external link handling in footer navigation.

Footer links can contain external URLs (e.g., https://github.com/raystack/chronicle), but the current implementation using react-router-dom's Link component with the to prop won't handle them correctly. React Router's Link is designed for client-side routing only and will treat external URLs as relative routes.

Conditionally render an <a> tag for external URLs and a Link component for internal routes:

🔧 Proposed fix for external link handling
 {config.links.map((link) => (
-  <Link key={link.href} to={link.href} className={styles.link}>
-    {link.label}
-  </Link>
+  link.href.startsWith('http') ? (
+    <a key={link.href} href={link.href} className={styles.link} target="_blank" rel="noopener noreferrer">
+      {link.label}
+    </a>
+  ) : (
+    <Link key={link.href} to={link.href} className={styles.link}>
+      {link.label}
+    </Link>
+  )
 ))}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/components/ui/footer.tsx` around lines 21 - 25, The
footer currently maps config.links and always renders the react-router-dom Link
component (config.links, Link, styles.link), which breaks external URLs; update
the mapping to detect external URLs (e.g., href startsWith 'http://' or
'https://', or new URL parse) and conditionally render a native anchor element
for externals (use href, target="_blank", rel="noopener noreferrer",
className=styles.link and the same key) while keeping Link with to={link.href}
for internal routes; ensure the existing key and styles.link are applied in both
branches.
packages/chronicle/src/server/handlers/specs.ts-4-8 (1)

4-8: ⚠️ Potential issue | 🟠 Major

Add defensive error handling in the specs handler.

A failure in loadConfig() or loadApiSpecs() currently bubbles out of the route. Returning a controlled 500 JSON response here will make /api/specs failures predictable and easier to handle client-side.

Proposed fix
 export function handleSpecs(): Response {
-  const config = loadConfig()
-  const specs = config.api?.length ? loadApiSpecs(config.api) : []
-
-  return Response.json(specs)
+  try {
+    const config = loadConfig()
+    const specs = config.api?.length ? loadApiSpecs(config.api) : []
+    return Response.json(specs)
+  } catch {
+    return Response.json(
+      { error: 'Failed to load API specs' },
+      { status: 500 },
+    )
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/handlers/specs.ts` around lines 4 - 8, The
handleSpecs function currently lets errors from loadConfig() or loadApiSpecs()
bubble up; wrap the body of handleSpecs in a try/catch, call loadConfig() and
loadApiSpecs() inside the try, and on error log the exception (e.g.,
console.error or the repo logger) and return a controlled JSON 500 response
(e.g., Response.json({ error: 'Internal Server Error' }, { status: 500 })); keep
the existing happy-path return Response.json(specs) inside the try so normal
behavior is unchanged.
packages/chronicle/src/themes/default/Layout.tsx-120-120 (1)

120-120: ⚠️ Potential issue | 🟠 Major

Remove conditionally-called useMemo hook in SidebarNode.

The useMemo call at line 120 violates React's Rules of Hooks. The component returns early for separators (line 97) and folders (line 101), meaning the hook is not executed on all render paths.

Proposed fix
-import { useMemo, useEffect, useRef } from "react";
+import { useEffect, useRef } from "react";
@@
-  const link = useMemo(() => <Link to={href} />, [href]);
@@
-      as={link}
+      as={<Link to={href} />}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/themes/default/Layout.tsx` at line 120, The useMemo
call creating the `link` element in `SidebarNode` (const link = useMemo(() =>
<Link to={href} />, [href])) is conditionally called because the component
returns early for separators and folders; move the hook out of conditional
render paths or remove useMemo entirely so Rules of Hooks aren’t violated:
either declare `const link = <Link to={href} />` inline where used, or place
`const link = useMemo(..., [href])` at the top of `SidebarNode` before any early
returns so `useMemo`, `Link`, and the `href` dependency are executed on every
render path.
packages/chronicle/src/server/handlers/apis-proxy.ts-9-13 (1)

9-13: ⚠️ Potential issue | 🟠 Major

Invalid JSON requests currently return 500 instead of 400.

await req.json() can throw on malformed request bodies, which bubbles to the server's outer error handler and returns a 500 status. Wrap the JSON parsing in a try-catch to return a proper 400 response.

Proposed fix
-  const { specName, method, path, headers, body } = await req.json()
+  let payload: unknown
+  try {
+    payload = await req.json()
+  } catch {
+    return Response.json({ error: 'Invalid JSON body' }, { status: 400 })
+  }
+
+  if (!payload || typeof payload !== 'object') {
+    return Response.json({ error: 'Invalid request payload' }, { status: 400 })
+  }
+
+  const { specName, method, path, headers, body } = payload as Record<string, unknown>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/handlers/apis-proxy.ts` around lines 9 - 13,
The JSON parsing step using await req.json() can throw and currently bubbles up
to a 500; wrap the destructuring (const { specName, method, path, headers, body
} = await req.json()) in a try-catch, and on any parsing error return
Response.json({ error: 'Invalid JSON body' }, { status: 400 }); then proceed to
validate specName/method/path as before—this ensures malformed JSON causes a 400
instead of a 500.
packages/chronicle/src/lib/head.tsx-37-41 (1)

37-41: ⚠️ Potential issue | 🟠 Major

Escape < in JSON-LD before injecting into <script> tag with dangerouslySetInnerHTML.

The HTML parser closes <script> tags as soon as it encounters </script> literally in the source, regardless of JSON context. An attacker-controlled string like </script><script>alert(1)</script> breaks out of the script block and executes injected JavaScript. JSON.stringify() alone does not prevent this; escape < (and other sensitive characters) before assignment.

Proposed fix
 export function Head({ title, description, config, jsonLd }: HeadProps) {
+  const safeJsonLd = jsonLd
+    ? JSON.stringify(jsonLd)
+        .replace(/</g, '\\u003c')
+        .replace(/\u2028/g, '\\u2028')
+        .replace(/\u2029/g, '\\u2029')
+    : undefined
@@
-      {jsonLd && (
+      {safeJsonLd && (
         <script
           type="application/ld+json"
-          dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd, null, 2) }}
+          dangerouslySetInnerHTML={{ __html: safeJsonLd }}
         />
       )}

Note: The fix removes pretty-printing (null, 2 parameters), but JSON-LD formatting is not required by spec—minified JSON is fully compatible.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/lib/head.tsx` around lines 37 - 41, The JSON-LD is
being injected via dangerouslySetInnerHTML using JSON.stringify(jsonLd, null, 2)
which can be broken out by attacker-controlled sequences like "</script>";
update the injection to first serialize jsonLd (remove pretty-printing) and
escape HTML-sensitive characters (at minimum replace all '<' with the safe
escape '\u003c') before passing to dangerouslySetInnerHTML so the script tag
cannot be prematurely closed—locate the usage of jsonLd and the <script> block
in head.tsx and replace the current JSON.stringify(jsonLd, null, 2) input with a
safely escaped serialized string (e.g., JSON.stringify(jsonLd) then
.replace(/</g, '\\u003c')).
packages/chronicle/src/server/handlers/search.ts-18-19 (1)

18-19: ⚠️ Potential issue | 🟠 Major

The cached index will drift from the filesystem in chronicle dev.

getIndex() memoizes once per process, but the empty-query path rescans content on every request. After editing docs, /api/search?query= can show a new page while /api/search?query=foo still searches the old index until restart. Rebuild or invalidate the cache when content/specs change, or skip memoization in dev.

Also applies to: 94-129
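
One way to structure the fix, sketched with an injected builder so the behavior is testable (names are illustrative; the real getIndex() constructs a MiniSearch instance):

```typescript
// Sketch: skip memoization in dev so the index always reflects the filesystem.
// buildIndex stands in for the real MiniSearch construction.
let cachedIndex: object | null = null

function getIndex(buildIndex: () => object, isDev: boolean): object {
  if (isDev) return buildIndex()               // always rebuild in dev
  if (!cachedIndex) cachedIndex = buildIndex() // memoize once in prod
  return cachedIndex
}

let builds = 0
const build = () => ({ builtAt: ++builds })

getIndex(build, true)
getIndex(build, true)
console.log(builds) // 2: dev rebuilds on every call

getIndex(build, false)
getIndex(build, false)
console.log(builds) // 3: prod builds once, then reuses the cache
```

A file-watch hook that sets cachedIndex back to null on content changes would achieve the same freshness without per-request rebuild cost.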

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/handlers/search.ts` around lines 18 - 19, The
cached MiniSearch instance (searchIndex) is memoized by getIndex(), causing
stale searches after content edits; update getIndex() (and any related functions
around the search index initialisation referenced in lines ~94-129) to either
disable memoization in development or automatically rebuild/invalidate the cache
when content/spec/specs change: detect dev mode (e.g., NODE_ENV or an isDev
helper) and return a freshly built MiniSearch on each call, or add a file-watch
hook that sets searchIndex = null and triggers a rebuild when source files
change so queries always use the current filesystem state.
packages/chronicle/src/lib/page-context.tsx-59-67 (1)

59-67: ⚠️ Potential issue | 🟠 Major

Clear stale page state when the route is not backed by docs content.

The /apis branch returns without touching page, and the docs branch also leaves page unchanged when getPage(slug) returns null. After client navigation, consumers can keep rendering the previous document's metadata/body on API or missing routes.

Suggested fix
     if (pathname.startsWith('/apis')) {
+      setPage(null)
       // Fetch API specs if not already loaded
       if (apiSpecs.length === 0) {
         fetch('/api/specs')
           .then((res) => res.json())
           .then((specs) => { if (!cancelled) setApiSpecs(specs) })
@@
-      const [sourcePage, newTree] = await Promise.all([getPage(slug), buildPageTree()])
-      if (cancelled || !sourcePage) return
+      const [sourcePage, newTree] = await Promise.all([getPage(slug), buildPageTree()])
+      if (cancelled) return
+      setTree(newTree)
+      if (!sourcePage) {
+        setPage(null)
+        return
+      }

       const component = await loadPageComponent(sourcePage)
       if (cancelled) return

-      setTree(newTree)
       setPage({

Also applies to: 73-75

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/lib/page-context.tsx` around lines 59 - 67, The effect
currently skips updating the page state for non-docs routes and when
getPage(slug) returns null, leaving stale page data visible; update the logic so
that in the /apis branch you explicitly clear the page state (call setPage(null)
or equivalent) before returning, and in the docs branch when pageResult is null
also call setPage(null) so consumers don't retain previous document
body/metadata; locate the logic around pathname, apiSpecs, fetch('/api/specs'),
getPage(slug), setApiSpecs and setPage to apply these fixes.
packages/chronicle/src/server/handlers/og.ts-6-17 (1)

6-17: ⚠️ Potential issue | 🟠 Major

Avoid a remote font download on the request path.

The first /og request blocks on an outbound font fetch with no timeout, and a transient failure is memoized forever as a zero-byte “font”. That makes this endpoint depend on external network health until the process restarts. Bundle a local font, or at least only cache successful downloads and fail without poisoning the cache.
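
A hedged sketch of the safer shape, if remote fetching is kept (the timeout value and names are illustrative):

```typescript
// Sketch: memoize only validated, successful font downloads.
let fontData: ArrayBuffer | null = null

async function loadFont(url: string): Promise<ArrayBuffer> {
  if (fontData) return fontData
  // Bound the request so a slow CDN cannot stall the /og path indefinitely.
  const res = await fetch(url, { signal: AbortSignal.timeout(5000) })
  if (!res.ok) throw new Error(`font fetch failed: ${res.status}`)
  const buf = await res.arrayBuffer()
  if (buf.byteLength === 0) throw new Error('empty font response')
  fontData = buf // cache only after validation; failures are never memoized
  return fontData
}
```

Throwing on failure lets the handler return a 5xx for that one request while the next request retries the download, instead of serving from a permanently poisoned cache. Bundling a local font file removes the network dependency entirely.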

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/handlers/og.ts` around lines 6 - 17, The
loadFont function currently fetches a remote font and memoizes any result
(including a zero-length ArrayBuffer on failure) which blocks the /og request
path and can permanently poison the cache via the fontData variable; change
loadFont to avoid synchronous remote fetch on request path by either using a
bundled local font asset (load from disk or import a font file into the build
and return its ArrayBuffer) or, if remote fetching is required, only set
fontData after a successful response (check response.ok and that the arrayBuffer
length > 0), add a short fetch timeout, and do not memoize the fallback empty
buffer so transient failures don't persist; update references to loadFont and
the shared fontData variable accordingly.
packages/chronicle/src/server/handlers/search.ts-132-139 (1)

132-139: ⚠️ Potential issue | 🟠 Major

Cap the result set before returning it.

index.search(query) returns the full match list. Common queries can serialize the entire corpus even though the UI only needs top hits. Slice the response server-side to a sane maximum.

Suggested fix
-  const results = index.search(query).map((r) => ({
+  const results = index.search(query).slice(0, 20).map((r) => ({
     id: r.id,
     url: r.url,
     type: r.type,
     content: r.title,
   }))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/handlers/search.ts` around lines 132 - 139,
index.search(query) returns the entire match list and the handler currently maps
and returns all matches (results) via Response.json, which can serialize the
whole corpus; update the handler to cap the result set before mapping/returning
by defining a sane MAX_RESULTS constant (or use an existing config), slice the
array returned by index.search(query) to the top N (e.g., resultsRaw.slice(0,
MAX_RESULTS)), then map that sliced array into the current shape (id, url, type,
content) and return only those entries via Response.json to avoid sending
excessive data.
packages/chronicle/src/components/mdx/link.tsx-13-35 (1)

13-35: ⚠️ Potential issue | 🟠 Major

Normalize URL classification and prop forwarding in MdxLink.

mailto:/tel: URIs currently fall through to the router branch, the internal branch drops every anchor prop except className, and the external branch lets ...props override the safe rel/target. That will break non-HTTP links and make accessibility/analytics attrs behave inconsistently.

Suggested fix
 export function MdxLink({ href, children, ...props }: LinkProps) {
   if (!href) {
     return <span {...props}>{children}</span>
   }

-  const isExternal = href.startsWith('http://') || href.startsWith('https://')
+  const isHttpUrl = /^(https?:)?\/\//i.test(href)
+  const isSpecialScheme = /^[a-z][a-z\d+\-.]*:/i.test(href)
   const isAnchor = href.startsWith('#')

-  if (isExternal) {
+  if (isHttpUrl || isSpecialScheme) {
     return (
-      <a href={href} target="_blank" rel="noopener noreferrer" {...props}>
+      <a
+        href={href}
+        {...props}
+        target={isHttpUrl ? props.target ?? '_blank' : props.target}
+        rel={isHttpUrl ? props.rel ?? 'noopener noreferrer' : props.rel}
+      >
         {children}
       </a>
     )
   }

   if (isAnchor) {
     return (
       <a href={href} {...props}>
         {children}
       </a>
     )
   }

   return (
-    <Link to={href} className={props.className}>
+    <Link to={href} {...props}>
       {children}
     </Link>
   )
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/components/mdx/link.tsx` around lines 13 - 35, Update
MdxLink's URL classification and prop forwarding: treat schemes like mailto: and
tel: as external (extend isExternal to include href.startsWith('mailto:') ||
href.startsWith('tel:')), ensure the external anchor sets target="_blank" and
rel="noopener noreferrer" after spreading props so callers cannot override them,
allow the anchor branch (isAnchor) to forward all props (not just
className), and for internal routing (the Link component) forward the full props
object (not only props.className) to <Link>; reference the isExternal/isAnchor
checks and the Link component in your changes.
packages/chronicle/src/server/handlers/og.ts-22-24 (1)

22-24: ⚠️ Potential issue | 🟠 Major

Clamp title and description before rendering.

Both values come straight from the query string and are handed to Satori without a size cap. Very long inputs can turn /og into an expensive CPU/memory endpoint.

Suggested fix
 export async function handleOg(req: Request): Promise<Response> {
   const url = new URL(req.url)
-  const title = url.searchParams.get('title') ?? loadConfig().title
-  const description = url.searchParams.get('description') ?? ''
-  const siteName = loadConfig().title
+  const config = loadConfig()
+  const title = (url.searchParams.get('title') ?? config.title).trim().slice(0, 120)
+  const description = (url.searchParams.get('description') ?? '').trim().slice(0, 240)
+  const siteName = config.title
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/handlers/og.ts` around lines 22 - 24, Clamp the
untrusted query inputs before they reach Satori: truncate the title and
description obtained via url.searchParams.get (the constants title and
description in og.ts) to reasonable max lengths (e.g., title ~100 chars,
description ~200 chars) falling back to loadConfig().title or empty string as
currently done; perform truncation right after obtaining the values (before any
calls to the Satori render/OG generation logic) so the handler never processes
arbitrarily large strings.
packages/chronicle/src/cli/commands/init.ts-151-153 (1)

151-153: ⚠️ Potential issue | 🟠 Major

Prefer the generated dev script over a package-manager exec shim.

The scaffold already writes a local dev script into package.json (line 25). Using npx/bunx/dlx here is problematic: pnpm dlx and yarn dlx fetch packages from the registry (they are not designed for running local scripts), while bunx only works by coincidence (it checks local first). The instruction should instead invoke the local script, which is part of the project contract and works consistently across all supported package managers.

Suggested fix
-    const runCmd = pm === 'npm' ? 'npx' : pm === 'bun' ? 'bunx' : `${pm} dlx`
+    const runCmd =
+      pm === 'npm' ? 'npm run dev' :
+      pm === 'bun' ? 'bun run dev' :
+      `${pm} dev`
     console.log(chalk.green('\n\u2713 Chronicle initialized!'))
-    console.log('\nRun', chalk.cyan(`${runCmd} chronicle dev`), 'to start development server')
+    console.log('\nRun', chalk.cyan(runCmd), 'to start development server')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/cli/commands/init.ts` around lines 151 - 153, Replace
the exec-shim usage with a direct invocation of the generated local script:
modify the runCmd definition (currently "const runCmd = pm === 'npm' ? 'npx' :
pm === 'bun' ? 'bunx' : `${pm} dlx`") to use the package manager's run form
(e.g. `${pm} run`) and leave the console.log lines as-is so the message prints
`${runCmd} dev`, ensuring it instructs users to run the local "dev" script
rather than an exec shim.
.github/workflows/canary.yml-31-33 (1)

31-33: ⚠️ Potential issue | 🟠 Major

Canary version is not unique across reruns of the same commit.

Re-running the workflow for the same PR SHA will attempt to publish the exact same npm version and fail.

🐛 Proposed fix
-          VERSION=$(jq -r .version package.json)-canary.${SHORT_SHA}
+          VERSION="$(jq -r .version package.json)-canary.${SHORT_SHA}.${GITHUB_RUN_NUMBER}.${GITHUB_RUN_ATTEMPT}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/canary.yml around lines 31 - 33, The canary VERSION
computed from SHORT_SHA is not unique across workflow reruns; update the VERSION
construction to append a per-run unique identifier (e.g. GITHUB_RUN_NUMBER or
GITHUB_RUN_ID) so repeated runs of the same PR SHA produce distinct versions;
modify the step that builds VERSION (the line referencing SHORT_SHA and VERSION
and the jq assignment) to include the chosen environment variable (or timestamp)
when forming the package.json .version so publishing won't fail on duplicate
versions.
packages/chronicle/src/pages/DocsPage.tsx-12-12 (1)

12-12: ⚠️ Potential issue | 🟠 Major

Render a not-found state instead of null.

If page is missing here, this renders an empty docs shell with no 404 signal. Since non-API paths are funneled into docs rendering, an unresolved slug becomes a blank page instead of a recoverable not-found UX.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/pages/DocsPage.tsx` at line 12, Replace the early
return that yields null when the docs page data is missing ("if (!page) return
null") with a proper not-found render: in DocsPage.tsx detect the missing page
and return a NotFound/404 UI component (or call the framework
notFound()/router.replace to the 404 route) so the user sees a clear 404 state;
update imports to include your NotFound component or next/notFound helper and
ensure the response/status is appropriate rather than rendering an empty shell.
packages/chronicle/src/server/handlers/sitemap.ts-18-20 (1)

18-20: ⚠️ Potential issue | 🟠 Major

Guard invalid lastModified values.

new Date(page.frontmatter.lastModified).toISOString() throws on malformed dates, so one bad frontmatter value will take down the entire sitemap response.

🛡️ Proposed fix
   const docPages = pages.map((page) => {
-    const lastmod = page.frontmatter.lastModified
-      ? `<lastmod>${new Date(page.frontmatter.lastModified).toISOString()}</lastmod>`
-      : ''
+    const parsedLastModified = page.frontmatter.lastModified
+      ? Date.parse(page.frontmatter.lastModified)
+      : Number.NaN
+    const lastmod = Number.isNaN(parsedLastModified)
+      ? ''
+      : `<lastmod>${new Date(parsedLastModified).toISOString()}</lastmod>`
     return `<url><loc>${baseUrl}/${page.slugs.join('/')}</loc>${lastmod}</url>`
   })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/handlers/sitemap.ts` around lines 18 - 20, The
sitemap code builds lastmod using new
Date(page.frontmatter.lastModified).toISOString(), which will throw for
malformed dates; update the sitemap handler to validate/parse
page.frontmatter.lastModified before using it: check that
page.frontmatter.lastModified exists and produces a valid Date (e.g., via const
d = new Date(value); if (!isNaN(d.getTime())) then use d.toISOString()),
otherwise fall back to an empty string; modify the logic around the lastmod
variable (where page.frontmatter.lastModified is referenced) to perform this
safe-parse and only include the <lastmod> element when the date is valid.
packages/chronicle/src/server/App.tsx-14-16 (1)

14-16: ⚠️ Potential issue | 🟠 Major

Match /apis as a path segment, not a string prefix.

startsWith('/apis') also catches docs URLs like /apisync and /apis-foo, so those pages get routed to ApiPage with a mangled slug.

🧭 Proposed fix
 function resolveRoute(pathname: string) {
-  if (pathname.startsWith('/apis')) {
+  if (pathname === '/apis' || pathname.startsWith('/apis/')) {
     const slug = pathname.replace(/^\/apis\/?/, '').split('/').filter(Boolean)
     return { type: 'api' as const, slug }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/App.tsx` around lines 14 - 16, The routing
check using pathname.startsWith('/apis') incorrectly matches prefixes like
'/apisync'; change the guard in App.tsx to match '/apis' as a full path segment
instead (for example test that the first path segment equals 'apis' or use a
regex like ^\/apis(\/|$)); keep the existing slug extraction logic (the slug
variable) but only run it when the segment-matching condition is true so ApiPage
routing and slug computation remain correct.
packages/chronicle/src/server/entry-prod.ts-33-38 (1)

33-38: ⚠️ Potential issue | 🟠 Major

Don't coerce route-handler bodies to text.

response.text() corrupts binary bodies and forces full buffering. That breaks endpoints like /og, which need to send image bytes unchanged.

Suggested fix
-        const body = await response.text()
-        res.end(body)
+        const body = Buffer.from(await response.arrayBuffer())
+        res.end(body)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/entry-prod.ts` around lines 33 - 38, The code
currently calls response.text(), which buffers and corrupts binary responses;
instead, stop using response.text() and stream the Web Response body to the Node
res: keep setting res.statusCode and response.headers as you already do, then if
response.body exists use a proper stream bridge (e.g., use Node's
stream.Readable.fromWeb(response.body) or otherwise pipe the ReadableStream from
response.body into res) so binary endpoints (like /og) are forwarded unchanged;
ensure you only fall back to ending the response when there's no body.
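
A sketch of the streaming bridge (the function name and res shape are illustrative; Readable.fromWeb is available in modern Node):

```typescript
import { Readable } from 'node:stream'
import type { ServerResponse } from 'node:http'

// Sketch: forward a Web Response to a Node response without buffering or
// coercing the body to text, so binary payloads (e.g. /og images) pass through.
function sendWebResponse(response: Response, res: ServerResponse): void {
  res.statusCode = response.status
  response.headers.forEach((value, key) => res.setHeader(key, value))
  if (response.body) {
    // Bridge the WHATWG ReadableStream into a Node Readable and pipe it out.
    Readable.fromWeb(response.body as any).pipe(res)
  } else {
    res.end()
  }
}
```

The `as any` cast papers over the DOM-vs-Node ReadableStream type mismatch; the runtime objects are compatible.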
packages/chronicle/src/pages/ApiPage.tsx-29-30 (1)

29-30: ⚠️ Potential issue | 🟠 Major

Return a visible not-found state for unknown API slugs.

return null turns /apis/... misses into a blank page, and the server layer currently still responds 200. Render an explicit not-found state here so bad slugs are user-visible and can be mapped to a 404 upstream.

Suggested fix
-  if (!match) return null
+  if (!match) {
+    return (
+      <>
+        <Head
+          title="API operation not found"
+          description="The requested API operation could not be found."
+          config={config}
+        />
+        <Text size={3}>API operation not found.</Text>
+      </>
+    )
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/pages/ApiPage.tsx` around lines 29 - 30, The code
currently returns null when findApiOperation(apiSpecs, slug) yields no match in
the ApiPage component; replace the early return with rendering an explicit
not-found UI and ensure the request is mapped to a 404 upstream (e.g., render a
NotFound/ApiNotFound component or call your framework’s notFound helper) so
unknown slugs show a visible error and produce a 404 response instead of a blank
200 page; update the logic around match and findApiOperation to invoke that
not-found path.
packages/chronicle/src/server/vite-config.ts-45-48 (1)

45-48: ⚠️ Potential issue | 🟠 Major

Don't inline runtime content paths into the SSR bundle.

Vite's define option statically replaces process.env.CHRONICLE_* references during the build for all processed modules, including the server bundle. When createViteConfig() is reused for the server build, these environment variables are inlined as build-time strings, making runtime --content overrides ineffective and pinning production bundles to CI-only paths. Split the client and server configs, or gate these replacements behind a client-only flag.
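
A sketch of the gating (parameter names are illustrative; the real createViteConfig signature may differ):

```typescript
// Sketch: inline CHRONICLE_* values only for the client build; the SSR bundle
// keeps reading process.env at runtime so --content overrides still work.
function buildDefine(
  contentDir: string,
  root: string,
  isServer: boolean,
): Record<string, string> {
  if (isServer) return {} // no compile-time replacement in the server bundle
  return {
    'process.env.CHRONICLE_CONTENT_DIR': JSON.stringify(contentDir),
    'process.env.CHRONICLE_PROJECT_ROOT': JSON.stringify(root),
  }
}

console.log(Object.keys(buildDefine('/docs', '/app', true)).length) // 0
console.log(buildDefine('/docs', '/app', false)['process.env.CHRONICLE_CONTENT_DIR']) // "/docs"
```

Note that each value is JSON.stringify'd so Vite substitutes a quoted string literal, not a bare identifier.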

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/vite-config.ts` around lines 45 - 48, The
define entries for 'process.env.CHRONICLE_CONTENT_DIR' and
'process.env.CHRONICLE_PROJECT_ROOT' are being inlined into the SSR bundle;
update createViteConfig to only add these defines for client builds (or when a
new clientOnly flag is true) and omit them for server/SSR builds so runtime
--content overrides work; locate the define block in createViteConfig and gate
adding JSON.stringify(contentDir) and JSON.stringify(root) behind a check like
isServer === false (or a passed-in clientOnly boolean), and instead ensure
server code reads process.env or a runtime config injected at startup rather
than relying on compile-time define.
packages/chronicle/src/server/dev.ts-122-123 (1)

122-123: ⚠️ Potential issue | 🟠 Major

Potential XSS via JSON injection in script tag.

If embeddedData contains a string like </script><script>alert(1)//, the resulting HTML will break out of the script tag. Use a safe serializer that escapes </script> sequences.

Proposed fix
+      // Safely serialize JSON for embedding in HTML script tag
+      const safeJson = JSON.stringify(embeddedData).replace(/</g, '\\u003c')
       // Embed page data for client hydration
-      const dataScript = `<script>window.__PAGE_DATA__ = ${JSON.stringify(embeddedData)}</script>`
+      const dataScript = `<script>window.__PAGE_DATA__ = ${safeJson}</script>`
       template = template.replace('<!--head-outlet-->', `<!--head-outlet-->${dataScript}`)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/dev.ts` around lines 122 - 123, The current
injection creates dataScript = `<script>window.__PAGE_DATA__ =
${JSON.stringify(embeddedData)}</script>` which allows embeddedData to break out
of the script tag (XSS); update the serialization used in dev.ts to a safe
serializer that escapes closing script sequences and other dangerous characters
(e.g., produce a JSON string where "</script>" is replaced with "<\\/script>"
and other control chars are escaped) before embedding into dataScript, and use
that safeSerialized string in the template.replace call so window.__PAGE_DATA__
is assigned from a safe, escaped JSON literal.
🟡 Minor comments (6)
packages/chronicle/src/cli/__tests__/config.test.ts-10-24 (1)

10-24: ⚠️ Potential issue | 🟡 Minor

Guard env var restoration with try/finally to prevent test pollution.

process.env.CHRONICLE_CONTENT_DIR is restored only on success paths. If a test fails early, global env state can leak to later tests.

Proposed fix
   it('returns env var when set', () => {
     const original = process.env.CHRONICLE_CONTENT_DIR
-    process.env.CHRONICLE_CONTENT_DIR = '/env/content'
-    const result = resolveContentDir()
-    expect(result).toContain('env/content')
-    process.env.CHRONICLE_CONTENT_DIR = original
+    try {
+      process.env.CHRONICLE_CONTENT_DIR = '/env/content'
+      const result = resolveContentDir()
+      expect(result).toContain('env/content')
+    } finally {
+      process.env.CHRONICLE_CONTENT_DIR = original
+    }
   })
 
   it('defaults to content directory', () => {
     const original = process.env.CHRONICLE_CONTENT_DIR
-    delete process.env.CHRONICLE_CONTENT_DIR
-    const result = resolveContentDir()
-    expect(result).toContain('content')
-    process.env.CHRONICLE_CONTENT_DIR = original
+    try {
+      delete process.env.CHRONICLE_CONTENT_DIR
+      const result = resolveContentDir()
+      expect(result).toContain('content')
+    } finally {
+      process.env.CHRONICLE_CONTENT_DIR = original
+    }
   })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/cli/__tests__/config.test.ts` around lines 10 - 24,
Tests that mutate process.env.CHRONICLE_CONTENT_DIR should restore the original
value in a finally block to avoid leaking state; update both tests ('returns env
var when set' and 'defaults to content directory') to save const original =
process.env.CHRONICLE_CONTENT_DIR, then perform the env mutation and call
resolveContentDir(), and in a finally restore the env by doing (original ===
undefined ? delete process.env.CHRONICLE_CONTENT_DIR :
process.env.CHRONICLE_CONTENT_DIR = original) so the original undefined vs set
value is handled correctly.
packages/chronicle/src/pages/DocsPage.tsx-15-15 (1)

15-15: ⚠️ Potential issue | 🟡 Minor

Normalize config.url before composing pageUrl.

A trailing slash in config.url produces // in the JSON-LD URL here, e.g. https://site//foo or https://site// on the index page.

🔧 Proposed fix
-  const pageUrl = config.url ? `${config.url}/${slug.join('/')}` : undefined
+  const baseUrl = config.url?.replace(/\/$/, '')
+  const pageUrl = baseUrl
+    ? slug.length > 0
+      ? `${baseUrl}/${slug.join('/')}`
+      : baseUrl
+    : undefined
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/pages/DocsPage.tsx` at line 15, The composed pageUrl
can end up with a double slash when config.url has a trailing slash; normalize
config.url before composing pageUrl by trimming any trailing slashes (e.g.,
normalize config.url via a regex or trim function) and then join with slug (use
slug.join('/')) so that pageUrl is always built from a clean base; update the
code that sets pageUrl (referencing config.url, pageUrl, and slug) to first
compute a normalized baseUrl and then build pageUrl from that base.
packages/chronicle/src/cli/commands/dev.ts-10-12 (1)

10-12: ⚠️ Potential issue | 🟡 Minor

Reject invalid --port values up front.

parseInt() accepts partial strings and can return NaN, so values like 3000abc or abc get passed into startDevServer(). Fail fast here with an integer/range check so the CLI returns a clear error instead of a downstream server failure.

Suggested fix
-    const port = parseInt(options.port, 10)
+    const port = Number(options.port)
+    if (!Number.isInteger(port) || port < 1 || port > 65535) {
+      throw new Error(`Invalid port: ${options.port}`)
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/cli/commands/dev.ts` around lines 10 - 12, The CLI
currently parses options.port with parseInt and may produce NaN or accept
partial strings; update the .action callback to validate options.port before
calling startDevServer by ensuring options.port is a whole integer within the
allowed port range (1–65535) and that parseInt(options.port, 10) yields a finite
integer; if validation fails, throw or exit with a clear error message
referencing options.port so resolveContentDir and startDevServer are never
called with an invalid port value.
packages/chronicle/src/server/handlers/llms.ts-30-31 (1)

30-31: ⚠️ Potential issue | 🟡 Minor

Missing error handling for file read operation.

If fs.readFile fails (e.g., permission denied, file deleted between readdir and read), the entire scan will throw and the handler will fail. Consider wrapping in try/catch to skip unreadable files gracefully, consistent with the readdir error handling on line 16-17.

Proposed fix
       if (!entry.name.endsWith('.mdx') && !entry.name.endsWith('.md')) continue

-      const raw = await fs.readFile(fullPath, 'utf-8')
-      const { data: fm } = matter(raw)
+      let raw: string
+      try {
+        raw = await fs.readFile(fullPath, 'utf-8')
+      } catch {
+        continue // Skip unreadable files
+      }
+      const { data: fm } = matter(raw)
       const baseName = entry.name.replace(/\.(mdx|md)$/, '')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/handlers/llms.ts` around lines 30 - 31, Wrap
the fs.readFile(fullPath, 'utf-8') and matter(raw) calls in a try/catch inside
the scan loop in llms.ts so a failed read or parse of a single file doesn't
abort the entire scan; on error, log or warn (e.g., using the existing logger or
console.warn) including fullPath and the error, then continue to the next file,
mirroring the graceful handling used for readdir.
packages/chronicle/src/cli/commands/start.ts-14-14 (1)

14-14: ⚠️ Potential issue | 🟡 Minor

Missing validation for parsed port number.

parseInt returns NaN for non-numeric input. Passing NaN to startProdServer will cause unexpected behavior.

Proposed fix
     const contentDir = resolveContentDir(options.content)
     const port = parseInt(options.port, 10)
+    if (isNaN(port) || port < 1 || port > 65535) {
+      console.error(chalk.red('Invalid port number'))
+      process.exit(1)
+    }
     const distDir = path.resolve(options.dist)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/cli/commands/start.ts` at line 14, The parsed port
value assigned to "port" using parseInt(options.port, 10) can be NaN for invalid
input; before calling startProdServer ensure the port is a valid integer in the
acceptable TCP port range (e.g., 1–65535). Update the start command to validate
Number.isInteger(port) && !Number.isNaN(port) (or use Number.parseInt then
check) and that port >= 1 && port <= 65535; if invalid, either throw a
descriptive error or fall back to a safe default and log the issue so
startProdServer is never invoked with NaN or out-of-range values. Ensure
references to "port" and the call to startProdServer use the validated value.
packages/chronicle/src/cli/commands/serve.ts-14-14 (1)

14-14: ⚠️ Potential issue | 🟡 Minor

Missing validation for parsed port number.

Same issue as in start.ts: parseInt returns NaN for non-numeric input.

Proposed fix
     const contentDir = resolveContentDir(options.content)
     const port = parseInt(options.port, 10)
+    if (isNaN(port) || port < 1 || port > 65535) {
+      console.error(chalk.red('Invalid port number'))
+      process.exit(1)
+    }
     const outDir = path.resolve(options.outDir)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/cli/commands/serve.ts` at line 14, The parsed port
assignment using parseInt(options.port, 10) in serve.ts can yield NaN for
invalid input; update the code that defines const port to validate the result
(e.g., check Number.isInteger(port) and !Number.isNaN(port) and that port is
within the valid TCP port range 1–65535) and throw or exit with a clear error
when validation fails; reference the const port variable and the
parseInt(options.port, 10) expression and mirror the same validation/exit
behavior used for start.ts so invalid/non-numeric port inputs are rejected
early.
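
Since the same validation applies to the dev, start, and serve commands, it could live in one shared helper; a minimal sketch (helper name assumed):

```typescript
// Sketch: shared port validation for the dev/start/serve CLI commands.
function parsePort(input: string): number {
  const port = Number(input)
  // Number() rejects partial strings like "3000abc" (unlike parseInt).
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    throw new Error(`Invalid port: ${input}`)
  }
  return port
}

console.log(parsePort('3000')) // 3000
// parsePort('3000abc'), parsePort('0'), and parsePort('70000') all throw
```

Throwing here surfaces a clear CLI error instead of a confusing downstream server failure.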
🧹 Nitpick comments (8)
packages/chronicle/src/server/__tests__/og.test.ts (1)

4-17: These assertions are effectively out of CI right now.

The only behavioral tests for handleOg are skipped unless TEST_OG is set, so regressions in headers or rendering will not be caught in the default test run. Mock the font fetch or inject the font buffer so this suite can run deterministically in CI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/__tests__/og.test.ts` around lines 4 - 17, The
tests for handleOg are skipped behind TEST_OG, so make the suite deterministic
by providing a mocked font fetch or injecting a font buffer: remove or stop
using describe.skipIf(!process.env.TEST_OG) for this suite and instead in
og.test.ts mock the global fetch (or the module method that loads fonts) in a
beforeAll to return a Response whose body is the expected ArrayBuffer/Uint8Array
of a simple valid font, and restore the original fetch in afterAll;
alternatively, refactor handleOg to accept an injected font buffer or
font-loader function and call that injected dependency from the tests with a
known buffer, then keep the assertions of content-type and cache-control
unchanged to run reliably in CI.
packages/chronicle/src/server/__tests__/handlers.test.ts (1)

18-35: Content-Type assertions are brittle.

Exact equality can fail with valid charset suffixes. Prefer media-type pattern checks.

♻️ Suggested update
-    expect(response.headers.get('content-type')).toBe('text/plain')
+    expect(response.headers.get('content-type') ?? '').toMatch(/^text\/plain\b/)
...
-    expect(response.headers.get('content-type')).toBe('application/xml')
+    expect(response.headers.get('content-type') ?? '').toMatch(/^application\/xml\b/)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/__tests__/handlers.test.ts` around lines 18 -
35, The content-type assertions are brittle because they expect exact equality
and will fail if a charset is appended; update the tests for handleRobots and
handleSitemap to assert the media type only (e.g., check that
response.headers.get('content-type') matches a media-type pattern such as
/^text\/plain\b/ for handleRobots and /^application\/xml\b/ for handleSitemap or
use startsWith('text/plain') / startsWith('application/xml')) so the tests
accept valid charset suffixes while still verifying the correct media type.
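The same intent can be expressed as a tiny helper that strips parameters before comparing; this is an illustrative sketch, not code from the PR:

```typescript
// Compare only the media type, ignoring parameters such as charset.
function mediaTypeOf(header: string | null): string {
  return (header ?? '').split(';')[0].trim().toLowerCase()
}
```

A test could then assert `mediaTypeOf(response.headers.get('content-type')) === 'text/plain'` and keep passing when a `; charset=utf-8` suffix appears.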
packages/chronicle/src/server/__tests__/entry-server.test.tsx (1)

26-34: Route tests are too weak to catch routing regressions.

Both tests only check truthiness, which duplicates prior coverage and won’t fail if / and /apis render the same markup.

♻️ Suggested strengthening
-  it('renders docs route for root URL', () => {
-    const html = render('http://localhost:3000/', mockData)
-    expect(html).toBeTruthy()
-  })
-
-  it('renders api route for /apis URL', () => {
-    const html = render('http://localhost:3000/apis', mockData)
-    expect(html).toBeTruthy()
-  })
+  it('renders different markup for docs and api routes', () => {
+    const docsHtml = render('http://localhost:3000/', mockData)
+    const apisHtml = render('http://localhost:3000/apis', mockData)
+    expect(docsHtml).not.toEqual(apisHtml)
+  })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/__tests__/entry-server.test.tsx` around lines 26-34: The tests 'renders docs route for root URL' and 'renders api route for
/apis URL' only assert truthiness; update them to assert route-specific output
so routing regressions fail. Use the existing render(url, mockData) call and
replace expect(html).toBeTruthy() with checks for distinct markers (e.g.,
route-specific headings, IDs, or text) that only appear on the docs page vs the
apis page; for example assert that the output from
render('http://localhost:3000/') contains the docs marker and that
render('http://localhost:3000/apis') contains the APIs marker. Keep using the
same test names and the render and mockData symbols so the change is minimal and
targeted.
packages/chronicle/src/server/__tests__/vite-config.test.ts (1)

16-24: isDev test currently doesn’t verify isDev behavior.

This case passes even if the option is ignored. Either assert a dev/prod-differing field or rename the test to reflect that it only checks root stability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/__tests__/vite-config.test.ts` around lines 16-24: The test "accepts isDev option" currently only checks root and doesn't
assert any dev/prod differences; update the test that calls createViteConfig to
verify a field that changes with isDev (e.g., config.mode should be
'development' when isDev: true and 'production' when isDev: false) by adding
assertions for config.mode for both true and false cases (or alternatively
rename the test to reflect it only checks root stability if you don't want to
assert mode); reference the createViteConfig call in the test to add the extra
expect(config.mode).toBe(...) assertions.
.github/workflows/canary.yml (1)

37-40: Guard publish step to avoid fork/secret-related failures.

This reduces noisy CI failures when NPM_TOKEN is unavailable (e.g., fork PRs).

♻️ Suggested hardening
       - name: Publish
+        if: ${{ github.event.pull_request.head.repo.full_name == github.repository && secrets.NPM_TOKEN != '' }}
         run: bun publish --tag canary --access public
         env:
           NPM_CONFIG_TOKEN: ${{ secrets.NPM_TOKEN }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/canary.yml around lines 37-40: The Publish step
currently runs unguarded and fails on fork PRs without secrets; modify the step
named "Publish" (the one running "bun publish --tag canary --access public") to
include an if condition that only runs when NPM_TOKEN is present and the
workflow is not from a fork PR — e.g. add an expression like if:
github.event.pull_request == null && secrets.NPM_TOKEN != '' (or alternatively
check github.repository == 'yourOrg/yourRepo' if you prefer repository-scoped
gating) so the publish only executes when the secret is available and the run is
from the main repository.
packages/chronicle/src/server/__tests__/router.test.ts (1)

62-71: Add coverage for the new /api/specs route.

This suite exercises the other built-in endpoints, but the API pages in this migration depend on /api/specs. A small route-match assertion here would keep that wiring from regressing silently.

🧪 Proposed addition
   it('matches /api/apis-proxy route', () => {
     const handler = matchRoute('http://localhost:3000/api/apis-proxy')
     expect(handler).not.toBeNull()
   })
+
+  it('matches /api/specs route', () => {
+    const handler = matchRoute('http://localhost:3000/api/specs')
+    expect(handler).not.toBeNull()
+  })
 
   it('returns 405 for non-POST to apis-proxy', async () => {
     const handler = matchRoute('http://localhost:3000/api/apis-proxy')
     const response = await handler!(new Request('http://localhost:3000/api/apis-proxy'))
     expect(response.status).toBe(405)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/__tests__/router.test.ts` around lines 62-71:
Add a test that asserts the router recognizes the new /api/specs route: call
matchRoute('http://localhost:3000/api/specs') and expect the returned handler to
be non-null, and also exercise the handler with a non-POST Request (using new
Request('http://localhost:3000/api/specs')) to assert it returns 405 (mirroring
the existing apis-proxy tests); update the test file by adding these assertions
alongside the existing matchRoute and handler invocation for apis-proxy so the
new /api/specs route wiring is covered.
packages/chronicle/src/server/prod.ts (1)

10-17: Use root when resolving the production entry.

The public API accepts root, but this function resolves distDir from the current working directory and then passes the unresolved value downstream. A relative distDir will point at the wrong bundle whenever the caller is not already in the project root.

📦 Proposed fix
 export async function startProdServer(options: ProdServerOptions) {
-  const { port, distDir } = options
+  const { port, root, distDir } = options
+  const resolvedDistDir = path.resolve(root, distDir)
 
-  const serverEntry = path.resolve(distDir, 'server/entry-prod.js')
+  const serverEntry = path.resolve(resolvedDistDir, 'server/entry-prod.js')
   const { startServer } = await import(serverEntry)
 
   console.log(chalk.cyan('Starting production server...'))
-  return startServer({ port, distDir })
+  return startServer({ port, distDir: resolvedDistDir })
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/prod.ts` around lines 10-17: The
startProdServer function currently resolves serverEntry using the raw distDir
which can be relative; instead resolve distDir against the provided root (from
ProdServerOptions) before building serverEntry so relative distDir works
regardless of CWD. In practice, inside startProdServer resolve a baseDir =
path.resolve(root, distDir) (or path.resolve(root, distDir || '') if distDir can
be undefined), use that baseDir when computing serverEntry =
path.resolve(baseDir, 'server/entry-prod.js'), and pass the resolved baseDir
(not the original relative distDir) to startServer({ port, distDir: baseDir })
so downstream code gets the absolute bundle path; update references to
serverEntry, distDir, root and the startServer call accordingly.
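The behavior the fix relies on is plain `path.resolve` semantics: a relative distDir gets anchored to root, while an absolute one is returned unchanged. A sketch with illustrative POSIX paths:

```typescript
import path from 'node:path'

// Illustrative values; root would come from ProdServerOptions.
const root = '/projects/docs-site'

const fromRelative = path.resolve(root, 'dist')        // anchored to root
const fromAbsolute = path.resolve(root, '/opt/bundle') // absolute input wins

console.log(fromRelative) // /projects/docs-site/dist
console.log(fromAbsolute) // /opt/bundle
```

This is why resolving once at the top of startProdServer and passing the absolute path downstream fixes both the entry import and the startServer call.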
packages/chronicle/src/server/entry-server.tsx (1)

20-24: Pass the full location into StaticRouter.

Only forwarding pathname drops query strings and hashes, so any component that reads useLocation().search or useLocation().hash will render different output on the server and client.

♻️ Proposed fix
 export function render(url: string, data: SSRData): string {
-  const pathname = new URL(url, 'http://localhost').pathname
+  const location = new URL(url, 'http://localhost')
 
   return renderToString(
-    <StaticRouter location={pathname}>
+    <StaticRouter location={`${location.pathname}${location.search}${location.hash}`}>
       <PageProvider
         initialConfig={data.config}
         initialTree={data.tree}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/entry-server.tsx` around lines 20-24: The
server render drops query and hash by passing only pathname to StaticRouter;
update the render function so StaticRouter receives the full location (including
search and hash) instead of just pathname. In the render function (symbol:
render) replace the pathname-only value passed to <StaticRouter location={...}>
with the full URL/location (e.g., the original url or a constructed string
including pathname+search+hash) so useLocation() on server matches the client.
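The proposed change can be factored into a small helper (name is illustrative) that preserves search and hash alongside the pathname:

```typescript
// Keep pathname, search, and hash so useLocation() matches on server and client.
function toRouterLocation(url: string): string {
  const loc = new URL(url, 'http://localhost')
  return `${loc.pathname}${loc.search}${loc.hash}`
}
```

Usage would then be `<StaticRouter location={toRouterLocation(url)}>`.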

function useSearch(query: string) {
  const [results, setResults] = useState<SearchResult[]>([]);
  const [isLoading, setIsLoading] = useState(false);
  const timerRef = useRef<ReturnType<typeof setTimeout>>();

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

In current React TypeScript typings, is useRef<T>() without an initial value valid, or must useRef be initialized (e.g., null)?

💡 Result:

In the current @types/react (React 19) TypeScript typings, useRef<T>() with no argument is not valid—you must pass an explicit initial value (e.g. undefined or null). The change explicitly forbids an implicit undefined initializer. [1], [2]

  • Use useRef<T | undefined>(undefined) if you want an “initially empty” ref that you’ll fill later. [1]
  • Use useRef<T | null>(null) for DOM refs (since they start out null before the element mounts). (Common pattern; aligns with React’s runtime behavior.) [2]

For comparison, older typings (React 18 era) included a no-arg overload like useRef<T>(): MutableRefObject<T | undefined>, which made useRef<T>() valid back then. [3]

Sources: [1] (github.com) [2] (github.com) [3] (use-form.netlify.app)


Add initial value to useRef to satisfy React TypeScript typings.

Line 22: useRef<ReturnType<typeof setTimeout>>() lacks a required initial value. Current React (v19) TypeScript typings require an explicit initializer (e.g., null or undefined). Initialize with null and guard clearTimeout calls.

Fix
-  const timerRef = useRef<ReturnType<typeof setTimeout>>();
+  const timerRef = useRef<ReturnType<typeof setTimeout> | null>(null);

   useEffect(() => {
-    clearTimeout(timerRef.current);
+    if (timerRef.current) clearTimeout(timerRef.current);
     timerRef.current = setTimeout(async () => {
       ...
     }, 100);
-    return () => clearTimeout(timerRef.current);
+    return () => {
+      if (timerRef.current) clearTimeout(timerRef.current);
+    };
   }, [query]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/components/ui/search.tsx` at line 22: The useRef
declaration for timerRef in the search component is missing an initial value;
change timerRef = useRef<ReturnType<typeof setTimeout> | null>(null) so it
satisfies React v19 TypeScript typings, then update any places that call
clearTimeout(timerRef.current) to first guard that timerRef.current is not null
(e.g., if (timerRef.current) { clearTimeout(timerRef.current); timerRef.current
= null; }) and reset timerRef.current appropriately when setting a new timeout.

Comment on lines +40 to +57
// Serve static files from content dir (skip .md/.mdx)
const contentFile = path.join(contentDir, decodeURIComponent(url.split('?')[0]))
if (!url.endsWith('.md') && !url.endsWith('.mdx')) {
  try {
    const stat = await fsPromises.stat(contentFile)
    if (stat.isFile()) {
      const ext = path.extname(contentFile).toLowerCase()
      const mimeTypes: Record<string, string> = {
        '.png': 'image/png', '.jpg': 'image/jpeg', '.jpeg': 'image/jpeg',
        '.gif': 'image/gif', '.svg': 'image/svg+xml', '.webp': 'image/webp',
        '.ico': 'image/x-icon', '.pdf': 'application/pdf', '.json': 'application/json',
        '.yaml': 'text/yaml', '.yml': 'text/yaml', '.txt': 'text/plain',
      }
      res.setHeader('Content-Type', mimeTypes[ext] || 'application/octet-stream')
      createReadStream(contentFile).pipe(res)
      return
    }
  } catch { /* fall through to SSR */ }

⚠️ Potential issue | 🔴 Critical

Path traversal vulnerability in static file serving.

decodeURIComponent(url) can decode %2e%2e%2f to ../, and path.join does not prevent directory traversal. An attacker could access files outside contentDir (e.g., /..%2f..%2f..%2fetc/passwd).

Proposed fix - validate resolved path is within contentDir
       // Serve static files from content dir (skip .md/.mdx)
-      const contentFile = path.join(contentDir, decodeURIComponent(url.split('?')[0]))
+      const requestedPath = decodeURIComponent(url.split('?')[0])
+      const contentFile = path.resolve(contentDir, requestedPath.replace(/^\/+/, ''))
+      
+      // Prevent path traversal
+      if (!contentFile.startsWith(contentDir + path.sep) && contentFile !== contentDir) {
+        res.statusCode = 403
+        res.end('Forbidden')
+        return
+      }
+
       if (!url.endsWith('.md') && !url.endsWith('.mdx')) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Serve static files from content dir (skip .md/.mdx)
const contentFile = path.join(contentDir, decodeURIComponent(url.split('?')[0]))
if (!url.endsWith('.md') && !url.endsWith('.mdx')) {
  try {
    const stat = await fsPromises.stat(contentFile)
    if (stat.isFile()) {
      const ext = path.extname(contentFile).toLowerCase()
      const mimeTypes: Record<string, string> = {
        '.png': 'image/png', '.jpg': 'image/jpeg', '.jpeg': 'image/jpeg',
        '.gif': 'image/gif', '.svg': 'image/svg+xml', '.webp': 'image/webp',
        '.ico': 'image/x-icon', '.pdf': 'application/pdf', '.json': 'application/json',
        '.yaml': 'text/yaml', '.yml': 'text/yaml', '.txt': 'text/plain',
      }
      res.setHeader('Content-Type', mimeTypes[ext] || 'application/octet-stream')
      createReadStream(contentFile).pipe(res)
      return
    }
  } catch { /* fall through to SSR */ }

// Serve static files from content dir (skip .md/.mdx)
const requestedPath = decodeURIComponent(url.split('?')[0])
const contentFile = path.resolve(contentDir, requestedPath.replace(/^\/+/, ''))
// Prevent path traversal
if (!contentFile.startsWith(contentDir + path.sep) && contentFile !== contentDir) {
  res.statusCode = 403
  res.end('Forbidden')
  return
}
if (!url.endsWith('.md') && !url.endsWith('.mdx')) {
  try {
    const stat = await fsPromises.stat(contentFile)
    if (stat.isFile()) {
      const ext = path.extname(contentFile).toLowerCase()
      const mimeTypes: Record<string, string> = {
        '.png': 'image/png', '.jpg': 'image/jpeg', '.jpeg': 'image/jpeg',
        '.gif': 'image/gif', '.svg': 'image/svg+xml', '.webp': 'image/webp',
        '.ico': 'image/x-icon', '.pdf': 'application/pdf', '.json': 'application/json',
        '.yaml': 'text/yaml', '.yml': 'text/yaml', '.txt': 'text/plain',
      }
      res.setHeader('Content-Type', mimeTypes[ext] || 'application/octet-stream')
      createReadStream(contentFile).pipe(res)
      return
    }
  } catch { /* fall through to SSR */ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/dev.ts` around lines 40-57: The static file
serving uses decodeURIComponent(url) + path.join(contentDir, ...) to compute
contentFile, which allows path traversal (e.g., %2e%2e). Fix: after computing
the candidate path, resolve it (using path.resolve) and verify the resolved path
starts with the resolved contentDir prefix before calling
fsPromises.stat/createReadStream; if the check fails, skip serving or return
403. Update the logic around contentFile, contentDir, decodeURIComponent(url),
path.join, fsPromises.stat, and createReadStream to perform this containment
check and deny requests that escape the contentDir.
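A self-contained sketch of the containment check both suggestions converge on; the helper name and paths are illustrative, not from the PR:

```typescript
import path from 'node:path'

// Resolve the decoded request path against contentDir and reject anything
// whose relative path escapes that root (returns null for traversal attempts).
function resolveContained(contentDir: string, rawUrl: string): string | null {
  const requested = decodeURIComponent(rawUrl.split('?')[0])
  const resolved = path.resolve(contentDir, '.' + requested)
  const relative = path.relative(contentDir, resolved)
  if (relative.startsWith('..') || path.isAbsolute(relative)) return null
  return resolved
}
```

The caller would answer 403 (or fall through to SSR) whenever this returns null, and use the returned absolute path for both the extension check and the file stream.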

Comment on lines +42 to +60
// Serve static files from content dir (skip .md/.mdx)
const contentDir = process.env.CHRONICLE_CONTENT_DIR || process.cwd()
const contentFile = path.join(contentDir, decodeURIComponent(url.split('?')[0]))
if (!url.endsWith('.md') && !url.endsWith('.mdx')) {
  try {
    const stat = await fsPromises.stat(contentFile)
    if (stat.isFile()) {
      const ext = path.extname(contentFile).toLowerCase()
      const mimeTypes: Record<string, string> = {
        '.png': 'image/png', '.jpg': 'image/jpeg', '.jpeg': 'image/jpeg',
        '.gif': 'image/gif', '.svg': 'image/svg+xml', '.webp': 'image/webp',
        '.ico': 'image/x-icon', '.pdf': 'application/pdf', '.json': 'application/json',
        '.yaml': 'text/yaml', '.yml': 'text/yaml', '.txt': 'text/plain',
      }
      res.setHeader('Content-Type', mimeTypes[ext] || 'application/octet-stream')
      createReadStream(contentFile).pipe(res)
      return
    }
  } catch { /* fall through */ }

⚠️ Potential issue | 🔴 Critical

Guard static content serving against path traversal.

The decoded request path is joined directly onto contentDir, so /%2e%2e/... can escape the content root and expose arbitrary files. Resolve against contentDir, reject anything whose relative path leaves that root, and run the markdown-extension check on that normalized pathname.

Suggested fix
-      const contentDir = process.env.CHRONICLE_CONTENT_DIR || process.cwd()
-      const contentFile = path.join(contentDir, decodeURIComponent(url.split('?')[0]))
-      if (!url.endsWith('.md') && !url.endsWith('.mdx')) {
+      const contentDir = process.env.CHRONICLE_CONTENT_DIR || process.cwd()
+      const requestPath = decodeURIComponent(new URL(url, `http://localhost:${port}`).pathname)
+      const contentFile = path.resolve(contentDir, `.${requestPath}`)
+      const relativePath = path.relative(contentDir, contentFile)
+      if (relativePath.startsWith('..') || path.isAbsolute(relativePath)) {
+        res.statusCode = 403
+        res.end('Forbidden')
+        return
+      }
+      if (!requestPath.endsWith('.md') && !requestPath.endsWith('.mdx')) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/entry-prod.ts` around lines 42-60: The static
file serving logic uses decodeURIComponent(url) and path.join(contentDir, ...)
directly (contentDir, contentFile, url), allowing path traversal; change to
resolve and normalize the requested path (e.g. const requestedPath =
decodeURIComponent(url.split('?')[0]); const resolved = path.resolve(contentDir,
'.' + requestedPath) or use path.join followed by path.normalize/path.resolve),
then verify the resolved path stays inside contentDir (e.g.
path.relative(contentDir, resolved) startsWith('..') check) and return a
404/ignore if it escapes; use the normalized/resolved pathname for the
markdown-extension check (the url.endsWith('.md'/.mdx') should be run against
the normalized pathname) and then call fsPromises.stat and stream from the
resolved path instead of the original contentFile.

Comment on lines +97 to +102
const html = render(url, { config, tree, page: pageData, apiSpecs })

const dataScript = `<script>window.__PAGE_DATA__ = ${JSON.stringify(embeddedData)}</script>`
const finalHtml = template
  .replace('<!--head-outlet-->', `<!--head-outlet-->${dataScript}`)
  .replace('<!--ssr-outlet-->', html)

⚠️ Potential issue | 🔴 Critical

Escape window.__PAGE_DATA__ before interpolating it into a <script>.

JSON.stringify() alone is not safe here; a title or description containing </script> or U+2028/U+2029 can terminate the script block and execute injected HTML/JS. Escape the serialized payload, or use a serializer that is safe for inline script contexts.

Suggested fix
-      const dataScript = `<script>window.__PAGE_DATA__ = ${JSON.stringify(embeddedData)}</script>`
+      const serializedPageData = JSON.stringify(embeddedData)
+        .replace(/</g, '\\u003c')
+        .replace(/\u2028/g, '\\u2028')
+        .replace(/\u2029/g, '\\u2029')
+      const dataScript = `<script>window.__PAGE_DATA__ = ${serializedPageData}</script>`
📝 Committable suggestion


Suggested change
const html = render(url, { config, tree, page: pageData, apiSpecs })
const dataScript = `<script>window.__PAGE_DATA__ = ${JSON.stringify(embeddedData)}</script>`
const finalHtml = template
  .replace('<!--head-outlet-->', `<!--head-outlet-->${dataScript}`)
  .replace('<!--ssr-outlet-->', html)

const html = render(url, { config, tree, page: pageData, apiSpecs })
const serializedPageData = JSON.stringify(embeddedData)
  .replace(/</g, '\\u003c')
  .replace(/\u2028/g, '\\u2028')
  .replace(/\u2029/g, '\\u2029')
const dataScript = `<script>window.__PAGE_DATA__ = ${serializedPageData}</script>`
const finalHtml = template
  .replace('<!--head-outlet-->', `<!--head-outlet-->${dataScript}`)
  .replace('<!--ssr-outlet-->', html)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/entry-prod.ts` around lines 97-102: The
inline script injection is unsafe: before interpolating embeddedData into
dataScript in entry-prod.ts (where dataScript is built and template.replace is
used), serialize and escape the JSON payload for an inline script context
instead of using JSON.stringify(...) raw; specifically ensure that
characters/sequences that can break a script tag (like "</script>"), U+2028 and
U+2029, and leading "<" are escaped (or use a known-safe serializer) so
window.__PAGE_DATA__ receives a safe escaped JSON string before inserting into
template; update the dataScript construction that references embeddedData to use
that escaping function/serializer.
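The escaping step can live in a small helper (name is illustrative); the key property is that the output never contains a literal `</script>` yet still parses back to the same value:

```typescript
// Escape "<" to block "</script>" breakout; U+2028/U+2029 are legal in JSON
// but were historically invalid inside JavaScript string literals.
function serializeForInlineScript(data: unknown): string {
  return JSON.stringify(data)
    .replace(/</g, '\\u003c')
    .replace(/\u2028/g, '\\u2028')
    .replace(/\u2029/g, '\\u2029')
}
```

Because `\u003c` is also a valid JSON escape, the escaped payload round-trips through JSON.parse unchanged, so hydration on the client needs no special handling.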

Comment on lines +23 to +30
const url = spec.server.url + path

try {
  const response = await fetch(url, {
    method,
    headers,
    body: body ? JSON.stringify(body) : undefined,
  })

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat -n packages/chronicle/src/server/handlers/apis-proxy.ts

Repository: raystack/chronicle

Length of output: 1997


🏁 Script executed:

cat package.json | jq '.engines'

Repository: raystack/chronicle

Length of output: 82


🏁 Script executed:

grep -r "AbortSignal\|fetch.*signal" packages/chronicle/src --include="*.ts" --include="*.js"

Repository: raystack/chronicle

Length of output: 44


Use URL resolution and a fetch timeout for upstream calls.

String concatenation can produce malformed targets (e.g., if the user-provided path lacks a leading slash or contains multiple slashes). Additionally, fetch without a timeout risks hanging request handling on slow or unresponsive upstreams, which can exhaust connection resources.

Proposed fix
-  const url = spec.server.url + path
+  let url: URL
+  try {
+    url = new URL(String(path), spec.server.url)
+  } catch {
+    return Response.json({ error: 'Invalid target path' }, { status: 400 })
+  }
@@
-    const response = await fetch(url, {
-      method,
-      headers,
+    const response = await fetch(url, {
+      method: String(method),
+      headers: headers as HeadersInit,
       body: body ? JSON.stringify(body) : undefined,
+      signal: AbortSignal.timeout(10_000),
     })
📝 Committable suggestion


Suggested change
const url = spec.server.url + path
try {
  const response = await fetch(url, {
    method,
    headers,
    body: body ? JSON.stringify(body) : undefined,
  })

let url: URL
try {
  url = new URL(String(path), spec.server.url)
} catch {
  return Response.json({ error: 'Invalid target path' }, { status: 400 })
}
try {
  const response = await fetch(url, {
    method: String(method),
    headers: headers as HeadersInit,
    body: body ? JSON.stringify(body) : undefined,
    signal: AbortSignal.timeout(10_000),
  })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/handlers/apis-proxy.ts` around lines 23-30:
The upstream call constructs the target via string concatenation (const url =
spec.server.url + path) and uses fetch without a timeout; replace that with a
robust URL resolution using the URL constructor (new URL(path, spec.server.url))
to avoid malformed URLs, then call fetch with an AbortController-based timeout
(create AbortController, pass signal to fetch, set a setTimeout to call
controller.abort() after a configured timeout, and clearTimeout on
success/failure) so slow/unresponsive upstreams won't hang request handling;
update the fetch invocation that currently uses method, headers, body to include
the controller.signal and ensure timeout cleanup.
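The URL-resolution half of the fix can be sketched as a standalone helper (name is illustrative); `new URL(path, base)` handles missing or duplicate slashes and throws on an invalid base:

```typescript
// Resolve the proxy target with the URL constructor instead of concatenation;
// returns null when base or path cannot form a valid URL.
function resolveTarget(baseUrl: string, targetPath: string): URL | null {
  try {
    return new URL(targetPath, baseUrl)
  } catch {
    return null
  }
}

// With a valid target, the upstream call would also carry a timeout, e.g.:
//   fetch(target, { signal: AbortSignal.timeout(10_000), ... })
```

Note one caveat of URL resolution: an absolute `targetPath` like `https://evil.example/x` re-targets the host entirely, so a proxy should still validate that the resolved origin matches `spec.server.url`.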

rsbh and others added 2 commits March 17, 2026 13:47
MDX files in user's content dir can't find react because node_modules
is in PACKAGE_ROOT. Add resolve.dedupe for react and server.fs.allow
for content dir access.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
react-device-detect causes ESM export issues when run from npx cache.
Replace with inline navigator.platform check for Mac detection.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
packages/chronicle/src/server/vite-config.ts (1)

24-27: Consider removing @content alias on Line 26 if unused.

Current server paths appear to use process.env.CHRONICLE_CONTENT_DIR instead; keeping an unused alias increases config surface.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/server/vite-config.ts` around lines 24-27: The
`@content` alias in the Vite config (the alias entry '@content') appears unused
and duplicates the runtime use of process.env.CHRONICLE_CONTENT_DIR; remove the
'@content' entry from the alias map in the vite-config code (where alias: { '@':
path.resolve(root, 'src'), '@content': contentDir } is defined), then run a
quick grep for any imports using '@content' and update them to import via
relative paths or use the environment-backed contentDir where needed to avoid
breaking imports.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@packages/chronicle/src/server/vite-config.ts`:
- Around line 24-27: The `@content` alias in the Vite config (the alias entry
'@content') appears unused and duplicates the runtime use of
process.env.CHRONICLE_CONTENT_DIR; remove the '@content' entry from the alias
map in the vite-config code (where alias: { '@': path.resolve(root, 'src'),
'@content': contentDir } is defined), then run a quick grep for any imports
using '@content' and update them to import via relative paths or use the
environment-backed contentDir where needed to avoid breaking imports.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3e50e3cc-a73a-4526-a774-b7d6d81b5abb

📥 Commits

Reviewing files that changed from the base of the PR and between 5a730d4 and 1f5227c.

📒 Files selected for processing (1)
  • packages/chronicle/src/server/vite-config.ts

rsbh and others added 2 commits March 17, 2026 14:08
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Use CSS variables only (no inline styles) for syntax highlighting.
Both light and dark themes controlled via CSS selectors.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/chronicle/src/components/ui/search.tsx (1)

141-156: ⚠️ Potential issue | 🟡 Minor

Results shown during loading state when query is active.

Lines 141-156 render results whenever search.length > 0, regardless of isLoading. This can briefly show stale results while a new search is in progress. Consider guarding with !isLoading for consistency with other conditional blocks.

🛠️ Suggested fix
-             {search.length > 0 &&
+             {!isLoading && search.length > 0 &&
                 results.map((result) => (
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/components/ui/search.tsx` around lines 141-156: The
results list is rendered whenever search.length > 0 even during an active fetch,
causing stale entries to briefly appear; update the conditional that renders
Command.Item mapping (the block using results.map, Command.Item, getResultIcon,
onSelect and Text) to only render when search.length > 0 && !isLoading so
results are hidden while a new query is loading.
🧹 Nitpick comments (1)
packages/chronicle/package.json (1)

38-66: Add an explicit engines.node constraint for the new Vite stack.

Given the new dependencies on Vite and related tooling, package installs can pass on unsupported Node versions and fail later at runtime. Please pin a Node engine range in this manifest.

The minimum should be >=20.19.0 to satisfy the requirements of vite@8.0.0 and @vitejs/plugin-react@6.0.1 (^20.19.0 || >=22.12.0).

Proposed manifest update
 {
   "name": "@raystack/chronicle",
   "version": "0.1.0",
   "description": "Config-driven documentation framework",
   "license": "Apache-2.0",
   "type": "module",
+  "engines": {
+    "node": ">=20.19.0"
+  },
   "files": [
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/package.json` around lines 38-66: Add an explicit Node
engine constraint to package.json by adding an "engines" field with a "node"
range pinned to at least ">=20.19.0" (to satisfy vite@8.0.0 and
`@vitejs/plugin-react`@6.0.1 which require ^20.19.0 || >=22.12.0); update the
manifest so the top-level package.json includes "engines": { "node": ">=20.19.0"
} to prevent installs on unsupported Node versions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 89c18a07-c343-414e-839b-cd41b6daba1d

📥 Commits

Reviewing files that changed from the base of the PR and between 1f5227c and c5d277e.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • packages/chronicle/package.json
  • packages/chronicle/src/components/ui/search.tsx

Comment on lines +23 to +38
useEffect(() => {
  clearTimeout(timerRef.current);
  timerRef.current = setTimeout(async () => {
    setIsLoading(true);
    try {
      const params = new URLSearchParams();
      if (query) params.set("query", query);
      const res = await fetch(`/api/search?${params}`);
      setResults(await res.json());
    } catch {
      setResults([]);
    }
    setIsLoading(false);
  }, 100);
  return () => clearTimeout(timerRef.current);
}, [query]);

⚠️ Potential issue | 🟠 Major

Race condition: stale fetch responses can overwrite current results.

When the query changes while a fetch is in-flight, the pending request continues and its response will overwrite results—potentially showing stale data briefly. Use an AbortController to cancel pending requests on cleanup or query change.

Additionally, res.ok is not checked before parsing the response.

🐛 Proposed fix using AbortController
   useEffect(() => {
+    const controller = new AbortController();
     clearTimeout(timerRef.current);
     timerRef.current = setTimeout(async () => {
       setIsLoading(true);
       try {
         const params = new URLSearchParams();
         if (query) params.set("query", query);
-        const res = await fetch(`/api/search?${params}`);
+        const res = await fetch(`/api/search?${params}`, { signal: controller.signal });
+        if (!res.ok) throw new Error('Search request failed');
         setResults(await res.json());
-      } catch {
-        setResults([]);
+      } catch (e) {
+        if ((e as Error).name !== 'AbortError') {
+          setResults([]);
+        }
       }
       setIsLoading(false);
     }, 100);
-    return () => clearTimeout(timerRef.current);
+    return () => {
+      clearTimeout(timerRef.current);
+      controller.abort();
+    };
   }, [query]);
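Outside of React, the same "latest request wins" discipline can be factored into a small helper. The sketch below is illustrative (names like `makeLatestWins` are hypothetical, not part of the PR): each new call aborts the previous in-flight request, and an `AbortError` from a superseded call is swallowed instead of clobbering state.

```typescript
// Sketch of the "latest request wins" pattern the review suggests.
// makeLatestWins is a hypothetical helper, not the PR's actual code.
type Fetcher<T> = (query: string, signal: AbortSignal) => Promise<T>;

function makeLatestWins<T>(fetcher: Fetcher<T>) {
  let controller: AbortController | null = null;
  return async (query: string): Promise<T | undefined> => {
    controller?.abort(); // cancel any in-flight request
    const own = (controller = new AbortController());
    try {
      return await fetcher(query, own.signal);
    } catch (e) {
      if ((e as Error).name === "AbortError") return undefined; // superseded
      throw e;
    }
  };
}

// A fake fetcher that honors the abort signal, for demonstration:
// "slow" resolves after 50 ms, anything else after 10 ms.
const demoFetcher: Fetcher<string> = (query, signal) =>
  new Promise((resolve, reject) => {
    const t = setTimeout(() => resolve(query), query === "slow" ? 50 : 10);
    signal.addEventListener("abort", () => {
      clearTimeout(t);
      reject(Object.assign(new Error("aborted"), { name: "AbortError" }));
    });
  });
```

Firing `search("slow")` immediately followed by `search("fast")` aborts the slow request, so only the latest query's results can ever reach `setResults`.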
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chronicle/src/components/ui/search.tsx` around lines 23-38: the
useEffect debounce fetch currently lets in-flight requests complete and
overwrite newer results; modify the effect in search.tsx to create an
AbortController for each fetch (and attach its signal to the fetch call), call
controller.abort() in the cleanup before starting a new fetch (and when the
component unmounts), and ignore/handle AbortError so it doesn't set state; also
check res.ok before calling res.json() and setResults([]) or surface an error
when res.ok is false. Update references to timerRef, the existing
fetch(`/api/search?...`), setIsLoading, and setResults inside the effect
accordingly.

Moves API routing + SSR rendering logic into request-handler.ts
so it can be reused by different deployment adapters.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
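The refactor described above can be pictured as one runtime-agnostic core that every deployment adapter wraps. A hypothetical sketch follows — the `handleRequest`, `RequestLike`, and `ResponseLike` names are illustrative, not the actual API of the PR's request-handler.ts:

```typescript
// Illustrative shapes; the real request-handler.ts may differ.
interface RequestLike {
  method: string;
  url: string; // path + query, e.g. "/api/search?query=ssr"
}

interface ResponseLike {
  status: number;
  headers: Record<string, string>;
  body: string;
}

// Shared core: /api/* goes to the API router, everything else to SSR.
async function handleRequest(req: RequestLike): Promise<ResponseLike> {
  const { pathname } = new URL(req.url, "http://placeholder");
  if (pathname.startsWith("/api/")) {
    return {
      status: 200,
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ route: pathname }), // stand-in for the API router
    };
  }
  // Stand-in for renderToString with PageContext.
  return {
    status: 200,
    headers: { "content-type": "text/html" },
    body: `<!doctype html><div id="root">rendered ${pathname}</div>`,
  };
}
```

A Node adapter would translate `http.IncomingMessage`/`ServerResponse` into these shapes, and a Vercel adapter would do the same for its function signature — the routing and SSR logic lives in one place.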
rsbh and others added 2 commits March 17, 2026 15:27
Exports a Node.js handler that delegates to the shared
request handler for SSR + API routes on Vercel.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds post-build step that generates .vercel/output/ with:
- Static assets from client build + content dir
- Bundled serverless function (catch-all)
- Build Output API config (v3)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
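For reference, a minimal Build Output API (v3) layout like the one this step generates looks roughly as follows; the routes shown are illustrative assumptions, not copied from the PR:

```
.vercel/output/
├── config.json
├── static/              # client build + content dir
└── functions/
    └── index.func/      # bundled catch-all serverless function
```

with a `config.json` along the lines of:

```json
{
  "version": 3,
  "routes": [
    { "handle": "filesystem" },
    { "src": "/(.*)", "dest": "/index" }
  ]
}
```

`"handle": "filesystem"` serves the static client assets first, and the catch-all route then falls through to the bundled serverless function for SSR and API requests.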