Conversation
Introduce in-process Vite dev/prod server to replace Next.js child process spawning. Adds HTML shell, SSR entry points, URL router with stub handlers, and programmatic Vite config with fumadocs-mdx/vite plugin. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Port Next.js page components to framework-agnostic pages (DocsPage, DocsLayout, ApiPage, ApiLayout, NotFound). Add Head component for SSR meta/OG/JSON-LD rendering. Wire App component to route URLs to correct page components. Add vitest config with mocks for fumadocs source, next/font, and next/navigation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add minimal client-side router (~100 lines) with RouterProvider, usePathname, useRouter, and Link component. Replace all next/navigation and next/link imports across themes (default, paper), search, and MDX link components. Replace next/font/google with static font declaration. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace custom ~100 line router with react-router-dom v7. Use StaticRouter for SSR, BrowserRouter for client hydration. Rename NextLink to RouterLink across all themes and components. Delete src/lib/router.tsx. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace all Apsara Link with react-router-dom Link. Remove url prop from App (useLocation from router context instead). Rename MdxLink export. Use plain <a> for external links. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Port all Next.js route handlers to framework-agnostic handlers: search, health, apis-proxy, OG image (satori), llms.txt, sitemap.xml, robots.txt. Wire real handlers into router replacing stubs. Add satori dependency for OG image generation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
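Framework-agnostic handlers of this kind conventionally accept a WHATWG `Request` and return a `Response`, so they run on any modern runtime without framework types. A minimal sketch of that shape, assuming a hypothetical health handler (the name and payload are illustrative, not Chronicle's actual code):

```typescript
// Hypothetical framework-agnostic route handler: standard Request in,
// standard Response out, so it works under Node 18+, Bun, or Deno with
// no framework-specific imports.
export function handleHealth(_req: Request): Response {
  // Response.json() is the WHATWG fetch API helper available in Node 18+;
  // it serializes the body and sets the JSON content type.
  return Response.json({ status: "ok", uptime: process.uptime() });
}
```

Handlers written this way are trivially testable: construct a `Request`, call the function, assert on the `Response`.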
- dev: vite.createServer() in-process, no child process spawning
- build: vite.build() for client + server bundles
- start: node server with pre-built dist/
- init: creates content/ + chronicle.yaml only, no .chronicle/ scaffold
- serve: build + start in sequence

Delete process.ts (no child processes), strip scaffold.ts to just detectPackageManager + getChronicleVersion. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Delete src/app/ (all Next.js routes), next.config.mjs. Remove next from dependencies, add react-router-dom and satori. Move API layout CSS module to src/pages/. Add handler tests (health, robots, sitemap, apis-proxy, llms, og). Add CLI config and scaffold utility tests. 38 tests passing. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- vite ^8.0.0, @vitejs/plugin-react ^6.0.1 - @mdx-js/rollup with remark-frontmatter, remark-mdx-frontmatter, remark-gfm - Add gray-matter, minisearch, glob for custom content/search - Remove fumadocs-core, fumadocs-mdx, zod, @types/unist - rollupOptions → rolldownOptions for Vite 8 - ssr.noExternal for @raystack/apsara CSS - build-cli reads externals from package.json automatically Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
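A hedged sketch of how these dependencies typically wire together in a programmatic Vite config; the plugin options come from the libraries' documented APIs, but the exact Chronicle config is not shown in this PR excerpt:

```typescript
// Illustrative Vite config wiring MDX with the remark plugins listed above.
// This is a sketch of the dependency list, not Chronicle's actual file.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import mdx from "@mdx-js/rollup";
import remarkFrontmatter from "remark-frontmatter";
import remarkMdxFrontmatter from "remark-mdx-frontmatter";
import remarkGfm from "remark-gfm";

export default defineConfig({
  plugins: [
    // MDX runs first so its JSX output is then handled by the React plugin.
    mdx({ remarkPlugins: [remarkFrontmatter, remarkMdxFrontmatter, remarkGfm] }),
    react(),
  ],
  ssr: {
    // Bundle Apsara during SSR so its CSS imports resolve (per the note above).
    noExternal: ["@raystack/apsara"],
  },
});
```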
- source.ts: import.meta.glob for eager frontmatter + lazy MDX components - search.ts: minisearch + gray-matter for server-side indexing - llms.ts: custom index handler, no fumadocs llms helper - search.tsx: custom useSearch hook with fetch-based client - sitemap.ts: async, uses new source API - Remove handleLlmsFull, get-llm-text.ts Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
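The server-side indexing idea can be illustrated without the actual minisearch API: tokenize each document's title and body into an inverted index, then answer a query by intersecting the posting lists of its terms. A pure sketch of that technique (minisearch adds scoring, prefix search, and field boosts on top):

```typescript
// Minimal inverted-index sketch of server-side search indexing.
// Not the minisearch API — just the underlying idea.
type Doc = { id: string; title: string; content: string };

function tokenize(text: string): string[] {
  return text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean);
}

function buildIndex(docs: Doc[]): Map<string, Set<string>> {
  const index = new Map<string, Set<string>>();
  for (const doc of docs) {
    for (const token of tokenize(`${doc.title} ${doc.content}`)) {
      if (!index.has(token)) index.set(token, new Set());
      index.get(token)!.add(doc.id);
    }
  }
  return index;
}

function search(index: Map<string, Set<string>>, query: string): string[] {
  const terms = tokenize(query);
  if (terms.length === 0) return [];
  // A document matches only if it appears in every term's posting list.
  let ids: Set<string> | null = null;
  for (const term of terms) {
    const postings = index.get(term) ?? new Set<string>();
    ids = ids === null ? new Set(postings) : new Set([...ids].filter((id) => postings.has(id)));
  }
  return [...(ids ?? [])];
}
```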
- PageContext provides config, tree, page data to all components - dev.ts resolves all data server-side before render - entry-server uses renderToString (no streaming) - entry-client hydrates with embedded __PAGE_DATA__ - Client navigation loads MDX via import.meta.glob on demand - Remove loadConfig() from all client components - API pages temporarily stubbed (need server-side spec loading) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
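The embed-and-hydrate handshake above can be sketched as a small helper: the server serializes the resolved page data into a script tag before sending HTML, and the client reads it back on hydration. The `__PAGE_DATA__` name follows the commit message; the helper itself is illustrative. Escaping `<` keeps a literal `</script>` inside the data from closing the tag early:

```typescript
// Sketch of embedding SSR page data for client hydration.
// Escaping "<" as \u003c prevents data containing "</script>" from
// terminating the script element prematurely.
function embedPageData(html: string, data: unknown): string {
  const json = JSON.stringify(data).replace(/</g, "\\u003c");
  const script = `<script>window.__PAGE_DATA__ = ${json}</script>`;
  return html.replace("</body>", `${script}</body>`);
}
```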
- Delete source.config.ts, __mocks__/ (next-*, source-server) - image.tsx: plain <img> instead of next/image - init.ts: remove .source from gitignore entries - vitest.config: remove @/.source and next/* aliases, add @content - Fix async handler tests (sitemap, llms) - Fix duplicate key warning in Layout.tsx navbar Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add apiSpecs to PageContext, SSRData, and embedded __PAGE_DATA__ - dev.ts loads specs via loadApiSpecs() server-side - ApiLayout reads specs from context, builds API tree - ApiPage reads specs from context, renders endpoint pages - No fs/path imports in client components Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add /api/specs endpoint serving serialized OpenAPI specs - Remove apiSpecs from __PAGE_DATA__ (was 2MB+ bloating HTML) - Client fetches specs on demand when navigating to /apis - SSR still loads specs server-side for full render Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
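The fetch-on-demand pattern implied here is usually a cache-the-promise memo: the first `/apis` navigation triggers the request, and concurrent navigations reuse the same in-flight promise instead of issuing duplicates. A self-contained sketch with the loader injected (the function name is hypothetical):

```typescript
// Sketch of lazy, deduplicated spec loading for client navigation.
// The loader is injected so the example stays self-contained; in practice
// it would be something like () => fetch('/api/specs').then(r => r.json()).
function createSpecLoader<T>(load: () => Promise<T>) {
  let cached: Promise<T> | null = null;
  return {
    get(): Promise<T> {
      // Caching the promise (not the value) also deduplicates
      // concurrent calls made while the first request is in flight.
      if (!cached) cached = load();
      return cached;
    },
  };
}
```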
- entry-prod.ts: full server with startServer(), render, router, config - prod.ts: thin wrapper that loads built entry-prod.js - Build server with ssr.noExternal: true (bundles all deps) - rollupOptions → rolldownOptions in build/serve commands - No dual React instance issues — everything in one bundle Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Raw HTML <details> in MDX bypasses component mapping, so style them directly in .content selector using Apsara design tokens. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Serve content dir as static files, skip .md/.mdx for security - Works in both dev (dev.ts) and prod (entry-prod.ts) - Fixes image rendering in MDX (e.g. Frontier flow diagram) - Add max-width: 100% for content images Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
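The "serve assets, never raw source" rule can be expressed as a small predicate. This is a sketch of the intent described above, not Chronicle's code; the exact extension list and traversal handling are assumptions:

```typescript
// Sketch: decide whether a path under content/ may be served statically.
// Blocks .md/.mdx source files and path traversal out of the content dir.
import path from "node:path";

function isServableContentFile(relPath: string): boolean {
  // Reject attempts to escape the content directory.
  const normalized = path.posix.normalize(relPath);
  if (normalized.startsWith("..")) return false;
  // Never serve raw MDX/Markdown (rendered, not downloaded); also skip
  // extensionless paths, which are routes rather than assets here.
  const ext = path.posix.extname(normalized).toLowerCase();
  return ext !== ".md" && ext !== ".mdx" && ext !== "";
}
```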
MdxPre detects language-mermaid code blocks and renders them with the Mermaid component instead of plain code. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
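MDX code fences reach the renderer as a `<pre><code className="language-xxx">` pair, so detection of this kind typically branches on the className. A sketch of that check (the helper names are illustrative):

```typescript
// Sketch of fence-language detection as described above: pull the language
// out of the "language-xxx" className that MDX puts on <code>.
function extractLanguage(className?: string): string | null {
  const match = className?.match(/language-([\w-]+)/);
  return match ? match[1] : null;
}

function isMermaidBlock(className?: string): boolean {
  return extractLanguage(className) === "mermaid";
}
```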
- Add @shikijs/rehype as rehype plugin in MDX config - Dual theme: github-light / github-dark via CSS variables - CSS applies --shiki-light/--shiki-dark per data-theme Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Publishes canary npm release on every PR commit with version format 0.1.0-canary.<short-sha> using --tag canary. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
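The version scheme described above reduces to base version plus a short commit SHA; a sketch of the construction (the function name is hypothetical):

```typescript
// Sketch of the canary version format: <package.json version>-canary.<short-sha>,
// published under the npm "canary" dist-tag.
function makeCanaryVersion(baseVersion: string, commitSha: string): string {
  const shortSha = commitSha.slice(0, 7);
  return `${baseVersion}-canary.${shortSha}`;
}
```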
📝 Walkthrough: Migrates Chronicle from Next.js to a Vite+SSR architecture: removes Next.js routes/config, adds Vite dev/prod servers, SSR client/server entrypoints, route handlers/router, import.meta.glob content source, React Router-driven client, and updated CLI build/dev/serve flows.
Sequence diagram(s): (mermaid diagrams omitted)

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 5
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/chronicle/src/cli/commands/init.ts (1)
77-110: ⚠️ Potential issue | 🟠 Major

Existing Chronicle installs will not actually get migrated here.
This path only fills missing keys. If a repo already has
`dev`/`build`/`start` wired to Next.js or pins an older `@raystack/chronicle`, it will still be reported as initialized even though it does not match the new Vite toolchain. At minimum, detect Chronicle-managed entries that differ from the current template and rewrite or warn on them.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/cli/commands/init.ts` around lines 77 - 110, The current merge logic only adds missing keys and thus won't migrate repos that have existing Chronicle-managed entries (like dev/build/start scripts or a pinned `@raystack/chronicle`) that differ from the new template; update the code that manipulates existing.scripts, existing.dependencies and existing.devDependencies to detect keys that are managed by Chronicle (compare against template.scripts/template.dependencies/template.devDependencies) and if values differ either overwrite them with the template or emit a clear warning and set updated=true; ensure the change is applied before writing packageJsonPath and include references to the symbols existing.scripts, template.scripts, existing.devDependencies, template.devDependencies, existing.dependencies, template.dependencies and the updated flag so the logic can both replace mismatched Chronicle entries or log a warning indicating a manual migration is required.

packages/chronicle/src/lib/source.ts (1)
97-137: ⚠️ Potential issue | 🟠 Major

The current tree builder can't represent section index pages or nested folders.
Only
`page.slugs[0]` is used for grouping, so `guides/index.mdx` is pushed into `rootPages`, `guides/setup/linux.mdx` is flattened directly under `guides`, and folders are appended after all root pages regardless of `order`. That yields duplicated sections and incorrect navigation once content is nested.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/lib/source.ts` around lines 97 - 137, The tree builder flattens nested content by only grouping on page.slugs[0], causing index pages like guides/index.mdx to end up in rootPages and nested pages like guides/setup/linux.mdx to be mis-placed; update the pages.forEach grouping logic to: classify an index page when page.slugs.length === 1 (or url === `/${folder}`) and attach it as the folder's index, otherwise push multi-segment pages into a per-folder list keyed by the first slug but keep their full slug arrays so deeper nesting can be built; then in the folder assembly (symbols: folders map, folderItems, sortByOrder, indexPage, folderOrder) build each folder's children from the full-item list (preserving nested structure if page.slugs.length > 2), set folder.order from the folder index page if present or from the first child, and finally merge rootPages and folderItems and run sortByOrder on the combined list so folders and root pages are ordered correctly.
🟠 Major comments (22)
packages/chronicle/src/components/ui/search.tsx-24-39 (1)
24-39: ⚠️ Potential issue | 🟠 Major

Prevent stale/out-of-order search results from overwriting newer queries.
The current effect does not cancel in-flight fetches, so older responses can win races. Combined with rendering at Line 142 without an
`isLoading` guard, users can see stale results during active loading.

Proposed fix
```diff
 function useSearch(query: string) {
   const [results, setResults] = useState<SearchResult[]>([]);
   const [isLoading, setIsLoading] = useState(false);
-  const timerRef = useRef<ReturnType<typeof setTimeout>>();
+  const timerRef = useRef<ReturnType<typeof setTimeout> | null>(null);
+  const abortRef = useRef<AbortController | null>(null);

   useEffect(() => {
-    clearTimeout(timerRef.current);
+    if (timerRef.current) clearTimeout(timerRef.current);
+    abortRef.current?.abort();
     timerRef.current = setTimeout(async () => {
       setIsLoading(true);
       try {
+        abortRef.current = new AbortController();
         const params = new URLSearchParams();
         if (query) params.set("query", query);
-        const res = await fetch(`/api/search?${params}`);
+        const res = await fetch(`/api/search?${params.toString()}`, {
+          signal: abortRef.current.signal,
+        });
+        if (!res.ok) throw new Error("Search request failed");
         setResults(await res.json());
-      } catch {
+      } catch (err) {
+        if ((err as DOMException).name === "AbortError") return;
         setResults([]);
+      } finally {
+        setIsLoading(false);
       }
-      setIsLoading(false);
     }, 100);
-    return () => clearTimeout(timerRef.current);
+    return () => {
+      if (timerRef.current) clearTimeout(timerRef.current);
+      abortRef.current?.abort();
+    };
   }, [query]);

   return { results, isLoading };
 }
 ...
-          {search.length > 0 &&
+          {search.length > 0 &&
+            !isLoading &&
             results.map((result) => (
```

Also applies to: 142-157
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/components/ui/search.tsx` around lines 24 - 39, The search effect can suffer from response races: update the fetch logic in the useEffect that uses timerRef.current, setIsLoading, setResults and URLSearchParams to create an AbortController for each scheduled request, pass controller.signal to fetch, and call controller.abort() in the timeout/cleanup so in-flight requests are canceled; only call setResults after confirming the response was not aborted (handle AbortError separately) to avoid overwriting newer results. Additionally, ensure the component's render path that displays results (the block referenced around lines 142-157) respects isLoading (do not render previous results while isLoading) so stale results aren't shown during an active query.

packages/chronicle/src/components/ui/search.tsx-166-171 (1)
166-171: 🛠️ Refactor suggestion | 🟠 Major

Add
`method` field to `SearchResult` type returned from `/api/search`. The
`getResultIcon` function currently parses the HTTP method from `result.content` (line 168), which contains the operation description, not the method. This is unreliable. The method is already computed in the search handler (line 83 of `handlers/search.ts`) but only stored in the `title`. Add a dedicated `method` field to the `SearchResult` interface and return it from the search handler so `getResultIcon` can render directly from structured data instead of parsing text.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/components/ui/search.tsx` around lines 166 - 171, Update the SearchResult shape to include a dedicated method:string (e.g., add method to the SearchResult interface) and modify the search handler (handlers/search.ts, where the result title is currently set) to populate and return that method field from the API; then update getResultIcon in packages/chronicle/src/components/ui/search.tsx to use result.method directly (and remove the fragile content.split parsing) so MethodBadge is rendered from the structured result.method value.

packages/chronicle/src/components/ui/footer.tsx-21-25 (1)
21-25: ⚠️ Potential issue | 🟠 Major

Fix external link handling in footer navigation.
Footer links can contain external URLs (e.g.,
https://github.com/raystack/chronicle), but the current implementation using react-router-dom's `Link` component with the `to` prop won't handle them correctly. React Router's `Link` is designed for client-side routing only and will treat external URLs as relative routes.

Conditionally render an
`<a>` tag for external URLs and a `Link` component for internal routes:

🔧 Proposed fix for external link handling
```diff
       {config.links.map((link) => (
-        <Link key={link.href} to={link.href} className={styles.link}>
-          {link.label}
-        </Link>
+        link.href.startsWith('http') ? (
+          <a key={link.href} href={link.href} className={styles.link} target="_blank" rel="noopener noreferrer">
+            {link.label}
+          </a>
+        ) : (
+          <Link key={link.href} to={link.href} className={styles.link}>
+            {link.label}
+          </Link>
+        )
       ))}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/components/ui/footer.tsx` around lines 21 - 25, The footer currently maps config.links and always renders the react-router-dom Link component (config.links, Link, styles.link), which breaks external URLs; update the mapping to detect external URLs (e.g., href startsWith 'http://' or 'https://', or new URL parse) and conditionally render a native anchor element for externals (use href, target="_blank", rel="noopener noreferrer", className=styles.link and the same key) while keeping Link with to={link.href} for internal routes; ensure the existing key and styles.link are applied in both branches.

packages/chronicle/src/server/handlers/specs.ts-4-8 (1)
4-8: ⚠️ Potential issue | 🟠 Major

Add defensive error handling in the specs handler.
A failure in
`loadConfig()` or `loadApiSpecs()` currently bubbles out of the route. Returning a controlled 500 JSON response here will make `/api/specs` failures predictable and easier to handle client-side.

Proposed fix
```diff
 export function handleSpecs(): Response {
-  const config = loadConfig()
-  const specs = config.api?.length ? loadApiSpecs(config.api) : []
-
-  return Response.json(specs)
+  try {
+    const config = loadConfig()
+    const specs = config.api?.length ? loadApiSpecs(config.api) : []
+    return Response.json(specs)
+  } catch {
+    return Response.json(
+      { error: 'Failed to load API specs' },
+      { status: 500 },
+    )
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/handlers/specs.ts` around lines 4 - 8, The handleSpecs function currently lets errors from loadConfig() or loadApiSpecs() bubble up; wrap the body of handleSpecs in a try/catch, call loadConfig() and loadApiSpecs() inside the try, and on error log the exception (e.g., console.error or the repo logger) and return a controlled JSON 500 response (e.g., Response.json({ error: 'Internal Server Error' }, { status: 500 })); keep the existing happy-path return Response.json(specs) inside the try so normal behavior is unchanged.

packages/chronicle/src/themes/default/Layout.tsx-120-120 (1)
120-120: ⚠️ Potential issue | 🟠 Major

Remove conditionally-called
`useMemo` hook in `SidebarNode`. The
`useMemo` call at line 120 violates React's Rules of Hooks. The component returns early for separators (line 97) and folders (line 101), meaning the hook is not executed on all render paths.

Proposed fix
```diff
-import { useMemo, useEffect, useRef } from "react";
+import { useEffect, useRef } from "react";
@@
-  const link = useMemo(() => <Link to={href} />, [href]);
@@
-  as={link}
+  as={<Link to={href} />}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/themes/default/Layout.tsx` at line 120, The useMemo call creating the `link` element in `SidebarNode` (const link = useMemo(() => <Link to={href} />, [href])) is conditionally called because the component returns early for separators and folders; move the hook out of conditional render paths or remove useMemo entirely so Rules of Hooks aren't violated: either declare `const link = <Link to={href} />` inline where used, or place `const link = useMemo(..., [href])` at the top of `SidebarNode` before any early returns so `useMemo`, `Link`, and the `href` dependency are executed on every render path.

packages/chronicle/src/server/handlers/apis-proxy.ts-9-13 (1)
9-13: ⚠️ Potential issue | 🟠 Major

Invalid JSON requests currently return 500 instead of 400.
`await req.json()` can throw on malformed request bodies, which bubbles to the server's outer error handler and returns a 500 status. Wrap the JSON parsing in a try-catch to return a proper 400 response.

Proposed fix
```diff
-  const { specName, method, path, headers, body } = await req.json()
+  let payload: unknown
+  try {
+    payload = await req.json()
+  } catch {
+    return Response.json({ error: 'Invalid JSON body' }, { status: 400 })
+  }
+
+  if (!payload || typeof payload !== 'object') {
+    return Response.json({ error: 'Invalid request payload' }, { status: 400 })
+  }
+
+  const { specName, method, path, headers, body } = payload as Record<string, unknown>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/handlers/apis-proxy.ts` around lines 9 - 13, The JSON parsing step using await req.json() can throw and currently bubbles up to a 500; wrap the destructuring (const { specName, method, path, headers, body } = await req.json()) in a try-catch, and on any parsing error return Response.json({ error: 'Invalid JSON body' }, { status: 400 }); then proceed to validate specName/method/path as before—this ensures malformed JSON causes a 400 instead of a 500.

packages/chronicle/src/lib/head.tsx-37-41 (1)
37-41: ⚠️ Potential issue | 🟠 Major

Escape
`<` in JSON-LD before injecting into `<script>` tag with `dangerouslySetInnerHTML`. The HTML parser closes
`<script>` tags as soon as it encounters `</script>` literally in the source, regardless of JSON context. An attacker-controlled string like `</script><script>alert(1)</script>` breaks out of the script block and executes injected JavaScript. `JSON.stringify()` alone does not prevent this; escape `<` (and other sensitive characters) before assignment.

Proposed fix
```diff
 export function Head({ title, description, config, jsonLd }: HeadProps) {
+  const safeJsonLd = jsonLd
+    ? JSON.stringify(jsonLd)
+        .replace(/</g, '\\u003c')
+        .replace(/\u2028/g, '\\u2028')
+        .replace(/\u2029/g, '\\u2029')
+    : undefined
@@
-      {jsonLd && (
+      {safeJsonLd && (
         <script
           type="application/ld+json"
-          dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd, null, 2) }}
+          dangerouslySetInnerHTML={{ __html: safeJsonLd }}
         />
       )}
```
null, 2parameters), but JSON-LD formatting is not required by spec—minified JSON is fully compatible.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/lib/head.tsx` around lines 37 - 41, The JSON-LD is being injected via dangerouslySetInnerHTML using JSON.stringify(jsonLd, null, 2) which can be broken out by attacker-controlled sequences like "</script>"; update the injection to first serialize jsonLd (remove pretty-printing) and escape HTML-sensitive characters (at minimum replace all '<' with the safe escape '\u003c') before passing to dangerouslySetInnerHTML so the script tag cannot be prematurely closed—locate the usage of jsonLd and the <script> block in head.tsx and replace the current JSON.stringify(jsonLd, null, 2) input with a safely escaped serialized string (e.g., JSON.stringify(jsonLd) then .replace(/</g, '\\u003c')).

packages/chronicle/src/server/handlers/search.ts-18-19 (1)
18-19: ⚠️ Potential issue | 🟠 Major

The cached index will drift from the filesystem in
`chronicle dev`.
`getIndex()` memoizes once per process, but the empty-query path rescans content on every request. After editing docs, `/api/search?query=` can show a new page while `/api/search?query=foo` still searches the old index until restart. Rebuild or invalidate the cache when content/specs change, or skip memoization in dev.

Also applies to: 94-129
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/handlers/search.ts` around lines 18 - 19, The cached MiniSearch instance (searchIndex) is memoized by getIndex(), causing stale searches after content edits; update getIndex() (and any related functions around the search index initialisation referenced in lines ~94-129) to either disable memoization in development or automatically rebuild/invalidate the cache when content/specs change: detect dev mode (e.g., NODE_ENV or an isDev helper) and return a freshly built MiniSearch on each call, or add a file-watch hook that sets searchIndex = null and triggers a rebuild when source files change so queries always use the current filesystem state.

packages/chronicle/src/lib/page-context.tsx-59-67 (1)
59-67: ⚠️ Potential issue | 🟠 Major

Clear stale
`page` state when the route is not backed by docs content. The
`/apis` branch returns without touching `page`, and the docs branch also leaves `page` unchanged when `getPage(slug)` returns `null`. After client navigation, consumers can keep rendering the previous document's metadata/body on API or missing routes.

Suggested fix
```diff
 if (pathname.startsWith('/apis')) {
+  setPage(null)
   // Fetch API specs if not already loaded
   if (apiSpecs.length === 0) {
     fetch('/api/specs')
       .then((res) => res.json())
       .then((specs) => { if (!cancelled) setApiSpecs(specs) })
@@
-  const [sourcePage, newTree] = await Promise.all([getPage(slug), buildPageTree()])
-  if (cancelled || !sourcePage) return
+  const [sourcePage, newTree] = await Promise.all([getPage(slug), buildPageTree()])
+  if (cancelled) return
+  setTree(newTree)
+  if (!sourcePage) {
+    setPage(null)
+    return
+  }
   const component = await loadPageComponent(sourcePage)
   if (cancelled) return
-  setTree(newTree)
   setPage({
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/lib/page-context.tsx` around lines 59 - 67, The effect currently skips updating the page state for non-docs routes and when getPage(slug) returns null, leaving stale page data visible; update the logic so that in the /apis branch you explicitly clear the page state (call setPage(null) or equivalent) before returning, and in the docs branch when pageResult is null also call setPage(null) so consumers don't retain previous document body/metadata; locate the logic around pathname, apiSpecs, fetch('/api/specs'), getPage(slug), setApiSpecs and setPage to apply these fixes.

packages/chronicle/src/server/handlers/og.ts-6-17 (1)
6-17: ⚠️ Potential issue | 🟠 Major

Avoid a remote font download on the request path.
The first
`/og` request blocks on an outbound font fetch with no timeout, and a transient failure is memoized forever as a zero-byte “font”. That makes this endpoint depend on external network health until the process restarts. Bundle a local font, or at least only cache successful downloads and fail without poisoning the cache.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/handlers/og.ts` around lines 6 - 17, The loadFont function currently fetches a remote font and memoizes any result (including a zero-length ArrayBuffer on failure) which blocks the /og request path and can permanently poison the cache via the fontData variable; change loadFont to avoid synchronous remote fetch on request path by either using a bundled local font asset (load from disk or import a font file into the build and return its ArrayBuffer) or, if remote fetching is required, only set fontData after a successful response (check response.ok and that the arrayBuffer length > 0), add a short fetch timeout, and do not memoize the fallback empty buffer so transient failures don't persist; update references to loadFont and the shared fontData variable accordingly.

packages/chronicle/src/server/handlers/search.ts-132-139 (1)
132-139: ⚠️ Potential issue | 🟠 Major

Cap the result set before returning it.
`index.search(query)` returns the full match list. Common queries can serialize the entire corpus even though the UI only needs top hits. Slice the response server-side to a sane maximum.

Suggested fix
```diff
-  const results = index.search(query).map((r) => ({
+  const results = index.search(query).slice(0, 20).map((r) => ({
     id: r.id,
     url: r.url,
     type: r.type,
     content: r.title,
   }))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/handlers/search.ts` around lines 132 - 139, index.search(query) returns the entire match list and the handler currently maps and returns all matches (results) via Response.json, which can serialize the whole corpus; update the handler to cap the result set before mapping/returning by defining a sane MAX_RESULTS constant (or use an existing config), slice the array returned by index.search(query) to the top N (e.g., resultsRaw.slice(0, MAX_RESULTS)), then map that sliced array into the current shape (id, url, type, content) and return only those entries via Response.json to avoid sending excessive data.

packages/chronicle/src/components/mdx/link.tsx-13-35 (1)
13-35: ⚠️ Potential issue | 🟠 Major

Normalize URL classification and prop forwarding in
`MdxLink`.
`mailto:`/`tel:` URIs currently fall through to the router branch, the internal branch drops every anchor prop except `className`, and the external branch lets `...props` override the safe `rel`/`target`. That will break non-HTTP links and make accessibility/analytics attrs behave inconsistently.

Suggested fix
```diff
 export function MdxLink({ href, children, ...props }: LinkProps) {
   if (!href) {
     return <span {...props}>{children}</span>
   }
-  const isExternal = href.startsWith('http://') || href.startsWith('https://')
+  const isHttpUrl = /^(https?:)?\/\//i.test(href)
+  const isSpecialScheme = /^[a-z][a-z\d+\-.]*:/i.test(href)
   const isAnchor = href.startsWith('#')
-  if (isExternal) {
+  if (isHttpUrl || isSpecialScheme) {
     return (
-      <a href={href} target="_blank" rel="noopener noreferrer" {...props}>
+      <a
+        href={href}
+        {...props}
+        target={isHttpUrl ? props.target ?? '_blank' : props.target}
+        rel={isHttpUrl ? props.rel ?? 'noopener noreferrer' : props.rel}
+      >
         {children}
       </a>
     )
   }
   if (isAnchor) {
     return (
       <a href={href} {...props}>
         {children}
       </a>
     )
   }
   return (
-    <Link to={href} className={props.className}>
+    <Link to={href} {...props}>
       {children}
     </Link>
   )
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/components/mdx/link.tsx` around lines 13 - 35, Update MdxLink's URL classification and prop forwarding: treat schemes like mailto: and tel: as external (extend isExternal to include href.startsWith('mailto:') || href.startsWith('tel:')), ensure the external anchor sets target="_blank" and rel="noopener noreferrer" after spreading props so callers cannot override them, allow the anchor branch (isAnchor) to forward all props (not just className), and for internal routing (the Link component) forward the full props object (not only props.className) to <Link>; reference the isExternal/isAnchor checks and the Link component in your changes.

packages/chronicle/src/server/handlers/og.ts-22-24 (1)
22-24: ⚠️ Potential issue | 🟠 Major

Clamp
`title` and `description` before rendering. Both values come straight from the query string and are handed to Satori without a size cap. Very long inputs can turn
`/og` into an expensive CPU/memory endpoint.

Suggested fix
```diff
 export async function handleOg(req: Request): Promise<Response> {
   const url = new URL(req.url)
-  const title = url.searchParams.get('title') ?? loadConfig().title
-  const description = url.searchParams.get('description') ?? ''
-  const siteName = loadConfig().title
+  const config = loadConfig()
+  const title = (url.searchParams.get('title') ?? config.title).trim().slice(0, 120)
+  const description = (url.searchParams.get('description') ?? '').trim().slice(0, 240)
+  const siteName = config.title
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/handlers/og.ts` around lines 22 - 24, Clamp the untrusted query inputs before they reach Satori: truncate the title and description obtained via url.searchParams.get (the constants title and description in og.ts) to reasonable max lengths (e.g., title ~100 chars, description ~200 chars) falling back to loadConfig().title or empty string as currently done; perform truncation right after obtaining the values (before any calls to the Satori render/OG generation logic) so the handler never processes arbitrarily large strings.

packages/chronicle/src/cli/commands/init.ts-151-153 (1)
151-153: ⚠️ Potential issue | 🟠 Major

Prefer the generated
`dev` script over a package-manager exec shim. The scaffold already writes a local
`dev` script into `package.json` (line 25). Using `npx`/`bunx`/`dlx` here is problematic: `pnpm dlx` and `yarn dlx` fetch packages from the registry (they are not designed for running local scripts), while `bunx` only works by coincidence (it checks local first). The instruction should instead invoke the local script, which is part of the project contract and works consistently across all supported package managers.

Suggested fix
```diff
-  const runCmd = pm === 'npm' ? 'npx' : pm === 'bun' ? 'bunx' : `${pm} dlx`
+  const runCmd =
+    pm === 'npm' ? 'npm run dev' :
+    pm === 'bun' ? 'bun run dev' :
+    `${pm} dev`
   console.log(chalk.green('\n\u2713 Chronicle initialized!'))
-  console.log('\nRun', chalk.cyan(`${runCmd} chronicle dev`), 'to start development server')
+  console.log('\nRun', chalk.cyan(runCmd), 'to start development server')
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/cli/commands/init.ts` around lines 151 - 153, Replace the exec-shim usage with a direct invocation of the generated local script: modify the runCmd definition (currently "const runCmd = pm === 'npm' ? 'npx' : pm === 'bun' ? 'bunx' : `${pm} dlx`") to use the package manager's run form (e.g. `${pm} run`) and leave the console.log lines as-is so the message prints `${runCmd} dev`, ensuring it instructs users to run the local "dev" script rather than an exec shim.

.github/workflows/canary.yml-31-33 (1)
31-33: ⚠️ Potential issue | 🟠 Major

Canary version is not unique across reruns of the same commit.
Re-running the workflow for the same PR SHA will attempt to publish the exact same npm version and fail.
🐛 Proposed fix
```diff
-VERSION=$(jq -r .version package.json)-canary.${SHORT_SHA}
+VERSION="$(jq -r .version package.json)-canary.${SHORT_SHA}.${GITHUB_RUN_NUMBER}.${GITHUB_RUN_ATTEMPT}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/canary.yml around lines 31 - 33, The canary VERSION computed from SHORT_SHA is not unique across workflow reruns; update the VERSION construction to append a per-run unique identifier (e.g. GITHUB_RUN_NUMBER or GITHUB_RUN_ID) so repeated runs of the same PR SHA produce distinct versions; modify the step that builds VERSION (the line referencing SHORT_SHA and VERSION and the jq assignment) to include the chosen environment variable (or timestamp) when forming the package.json .version so publishing won't fail on duplicate versions.

packages/chronicle/src/pages/DocsPage.tsx-12-12 (1)
12-12: ⚠️ Potential issue | 🟠 Major

Render a not-found state instead of `null`.

If `page` is missing here, this renders an empty docs shell with no 404 signal. Since non-API paths are funneled into docs rendering, an unresolved slug becomes a blank page instead of a recoverable not-found UX.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/pages/DocsPage.tsx` at line 12, Replace the early return that yields null when the docs page data is missing ("if (!page) return null") with a proper not-found render: in DocsPage.tsx detect the missing page and return a NotFound/404 UI component (or call the framework notFound()/router.replace to the 404 route) so the user sees a clear 404 state; update imports to include your NotFound component or next/notFound helper and ensure the response/status is appropriate rather than rendering an empty shell.

packages/chronicle/src/server/handlers/sitemap.ts-18-20 (1)
18-20: ⚠️ Potential issue | 🟠 Major

Guard invalid `lastModified` values.

`new Date(page.frontmatter.lastModified).toISOString()` throws on malformed dates, so one bad frontmatter value will take down the entire sitemap response.

🛡️ Proposed fix
```diff
 const docPages = pages.map((page) => {
-  const lastmod = page.frontmatter.lastModified
-    ? `<lastmod>${new Date(page.frontmatter.lastModified).toISOString()}</lastmod>`
-    : ''
+  const parsedLastModified = page.frontmatter.lastModified
+    ? Date.parse(page.frontmatter.lastModified)
+    : Number.NaN
+  const lastmod = Number.isNaN(parsedLastModified)
+    ? ''
+    : `<lastmod>${new Date(parsedLastModified).toISOString()}</lastmod>`
   return `<url><loc>${baseUrl}/${page.slugs.join('/')}</loc>${lastmod}</url>`
 })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/handlers/sitemap.ts` around lines 18 - 20, The sitemap code builds lastmod using new Date(page.frontmatter.lastModified).toISOString(), which will throw for malformed dates; update the sitemap handler to validate/parse page.frontmatter.lastModified before using it: check that page.frontmatter.lastModified exists and produces a valid Date (e.g., via const d = new Date(value); if (!isNaN(d.getTime())) then use d.toISOString()), otherwise fall back to an empty string; modify the logic around the lastmod variable (where page.frontmatter.lastModified is referenced) to perform this safe-parse and only include the <lastmod> element when the date is valid.

packages/chronicle/src/server/App.tsx-14-16 (1)
14-16: ⚠️ Potential issue | 🟠 Major

Match `/apis` as a path segment, not a string prefix.

`startsWith('/apis')` also catches docs URLs like `/apisync` and `/apis-foo`, so those pages get routed to `ApiPage` with a mangled slug.

🧭 Proposed fix
```diff
 function resolveRoute(pathname: string) {
-  if (pathname.startsWith('/apis')) {
+  if (pathname === '/apis' || pathname.startsWith('/apis/')) {
     const slug = pathname.replace(/^\/apis\/?/, '').split('/').filter(Boolean)
     return { type: 'api' as const, slug }
   }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/App.tsx` around lines 14 - 16, The routing check using pathname.startsWith('/apis') incorrectly matches prefixes like '/apisync'; change the guard in App.tsx to match '/apis' as a full path segment instead (for example test that the first path segment equals 'apis' or use a regex like ^\/apis(\/|$)); keep the existing slug extraction logic (the slug variable) but only run it when the segment-matching condition is true so ApiPage routing and slug computation remain correct.

packages/chronicle/src/server/entry-prod.ts-33-38 (1)
33-38: ⚠️ Potential issue | 🟠 Major

Don't coerce route-handler bodies to text.

`response.text()` corrupts binary bodies and forces full buffering. That breaks endpoints like `/og`, which need to send image bytes unchanged.

Suggested fix
```diff
-  const body = await response.text()
-  res.end(body)
+  const body = Buffer.from(await response.arrayBuffer())
+  res.end(body)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/entry-prod.ts` around lines 33 - 38, The code currently calls response.text(), which buffers and corrupts binary responses; instead, stop using response.text() and stream the Web Response body to the Node res: keep setting res.statusCode and response.headers as you already do, then if response.body exists use a proper stream bridge (e.g., use Node's stream.Readable.fromWeb(response.body) or otherwise pipe the ReadableStream from response.body into res) so binary endpoints (like /og) are forwarded unchanged; ensure you only fall back to ending the response when there's no body.

packages/chronicle/src/pages/ApiPage.tsx-29-30 (1)
29-30: ⚠️ Potential issue | 🟠 Major

Return a visible not-found state for unknown API slugs.

`return null` turns `/apis/...` misses into a blank page, and the server layer currently still responds 200. Render an explicit not-found state here so bad slugs are user-visible and can be mapped to a 404 upstream.

Suggested fix
```diff
-  if (!match) return null
+  if (!match) {
+    return (
+      <>
+        <Head
+          title="API operation not found"
+          description="The requested API operation could not be found."
+          config={config}
+        />
+        <Text size={3}>API operation not found.</Text>
+      </>
+    )
+  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/pages/ApiPage.tsx` around lines 29 - 30, The code currently returns null when findApiOperation(apiSpecs, slug) yields no match in the ApiPage component; replace the early return with rendering an explicit not-found UI and ensure the request is mapped to a 404 upstream (e.g., render a NotFound/ApiNotFound component or call your framework's notFound helper) so unknown slugs show a visible error and produce a 404 response instead of a blank 200 page; update the logic around match and findApiOperation to invoke that not-found path.

packages/chronicle/src/server/vite-config.ts-45-48 (1)
45-48: ⚠️ Potential issue | 🟠 Major

Don't inline runtime content paths into the SSR bundle.

Vite's `define` option statically replaces `process.env.CHRONICLE_*` references during the build for all processed modules, including the server bundle. When `createViteConfig()` is reused for the server build, these environment variables are inlined as build-time strings, making runtime `--content` overrides ineffective and pinning production bundles to CI-only paths. Split client/server config or gate these replacements behind a client-only flag.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/vite-config.ts` around lines 45 - 48, The define entries for 'process.env.CHRONICLE_CONTENT_DIR' and 'process.env.CHRONICLE_PROJECT_ROOT' are being inlined into the SSR bundle; update createViteConfig to only add these defines for client builds (or when a new clientOnly flag is true) and omit them for server/SSR builds so runtime --content overrides work; locate the define block in createViteConfig and gate adding JSON.stringify(contentDir) and JSON.stringify(root) behind a check like isServer === false (or a passed-in clientOnly boolean), and instead ensure server code reads process.env or a runtime config injected at startup rather than relying on compile-time define.

packages/chronicle/src/server/dev.ts-122-123 (1)
122-123: ⚠️ Potential issue | 🟠 Major

Potential XSS via JSON injection in script tag.

If `embeddedData` contains a string like `</script><script>alert(1)//`, the resulting HTML will break out of the script tag. Use a safe serializer that escapes `</script>` sequences.

Proposed fix
```diff
+  // Safely serialize JSON for embedding in HTML script tag
+  const safeJson = JSON.stringify(embeddedData).replace(/</g, '\\u003c')
   // Embed page data for client hydration
-  const dataScript = `<script>window.__PAGE_DATA__ = ${JSON.stringify(embeddedData)}</script>`
+  const dataScript = `<script>window.__PAGE_DATA__ = ${safeJson}</script>`
   template = template.replace('<!--head-outlet-->', `<!--head-outlet-->${dataScript}`)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/dev.ts` around lines 122 - 123, The current injection creates dataScript = `<script>window.__PAGE_DATA__ = ${JSON.stringify(embeddedData)}</script>` which allows embeddedData to break out of the script tag (XSS); update the serialization used in dev.ts to a safe serializer that escapes closing script sequences and other dangerous characters (e.g., produce a JSON string where "</script>" is replaced with "<\\/script>" and other control chars are escaped) before embedding into dataScript, and use that safeSerialized string in the template.replace call so window.__PAGE_DATA__ is assigned from a safe, escaped JSON literal.
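As a standalone illustration of the escaping trick in this fix (the helper name `safeJsonForScript` is hypothetical, not project code), note that `\u003c` is itself valid JSON, so the round trip stays lossless:

```typescript
// Sketch: escape '<' so embedded JSON can never close the surrounding
// <script> tag. \u003c is plain JSON, so JSON.parse round-trips losslessly.
function safeJsonForScript(value: unknown): string {
  return JSON.stringify(value).replace(/</g, '\\u003c')
}

const payload = { html: '</script><script>alert(1)//' }
const serialized = safeJsonForScript(payload)
console.log(serialized.includes('</script>')) // false
console.log(JSON.parse(serialized).html === payload.html) // true
```

Escaping only `<` is enough to neutralize `</script>` and `<!--` breakouts while leaving the payload readable in view-source.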
🟡 Minor comments (6)
packages/chronicle/src/cli/__tests__/config.test.ts-10-24 (1)
10-24: ⚠️ Potential issue | 🟡 Minor

Guard env var restoration with `try/finally` to prevent test pollution.

`process.env.CHRONICLE_CONTENT_DIR` is restored only on success paths. If a test fails early, global env state can leak to later tests.

Proposed fix
```diff
 it('returns env var when set', () => {
   const original = process.env.CHRONICLE_CONTENT_DIR
-  process.env.CHRONICLE_CONTENT_DIR = '/env/content'
-  const result = resolveContentDir()
-  expect(result).toContain('env/content')
-  process.env.CHRONICLE_CONTENT_DIR = original
+  try {
+    process.env.CHRONICLE_CONTENT_DIR = '/env/content'
+    const result = resolveContentDir()
+    expect(result).toContain('env/content')
+  } finally {
+    process.env.CHRONICLE_CONTENT_DIR = original
+  }
 })

 it('defaults to content directory', () => {
   const original = process.env.CHRONICLE_CONTENT_DIR
-  delete process.env.CHRONICLE_CONTENT_DIR
-  const result = resolveContentDir()
-  expect(result).toContain('content')
-  process.env.CHRONICLE_CONTENT_DIR = original
+  try {
+    delete process.env.CHRONICLE_CONTENT_DIR
+    const result = resolveContentDir()
+    expect(result).toContain('content')
+  } finally {
+    process.env.CHRONICLE_CONTENT_DIR = original
+  }
 })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/cli/__tests__/config.test.ts` around lines 10 - 24, Tests that mutate process.env.CHRONICLE_CONTENT_DIR should restore the original value in a finally block to avoid leaking state; update both tests ('returns env var when set' and 'defaults to content directory') to save const original = process.env.CHRONICLE_CONTENT_DIR, then perform the env mutation and call resolveContentDir(), and in a finally restore the env by doing (original === undefined ? delete process.env.CHRONICLE_CONTENT_DIR : process.env.CHRONICLE_CONTENT_DIR = original) so the original undefined vs set value is handled correctly.

packages/chronicle/src/pages/DocsPage.tsx-15-15 (1)
15-15: ⚠️ Potential issue | 🟡 Minor

Normalize `config.url` before composing `pageUrl`.

A trailing slash in `config.url` produces `//` in the JSON-LD URL here, e.g. `https://site//foo` or `https://site//` on the index page.

🔧 Proposed fix
```diff
-  const pageUrl = config.url ? `${config.url}/${slug.join('/')}` : undefined
+  const baseUrl = config.url?.replace(/\/$/, '')
+  const pageUrl = baseUrl
+    ? slug.length > 0
+      ? `${baseUrl}/${slug.join('/')}`
+      : baseUrl
+    : undefined
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/pages/DocsPage.tsx` at line 15, The composed pageUrl can end up with a double slash when config.url has a trailing slash; normalize config.url before composing pageUrl by trimming any trailing slashes (e.g., normalize config.url via a regex or trim function) and then join with slug (use slug.join('/')) so that pageUrl is always built from a clean base; update the code that sets pageUrl (referencing config.url, pageUrl, and slug) to first compute a normalized baseUrl and then build pageUrl from that base.

packages/chronicle/src/cli/commands/dev.ts-10-12 (1)
10-12: ⚠️ Potential issue | 🟡 Minor

Reject invalid `--port` values up front.

`parseInt()` accepts partial strings and can return `NaN`, so values like `3000abc` or `abc` get passed into `startDevServer()`. Fail fast here with an integer/range check so the CLI returns a clear error instead of a downstream server failure.

Suggested fix
```diff
-  const port = parseInt(options.port, 10)
+  const port = Number(options.port)
+  if (!Number.isInteger(port) || port < 1 || port > 65535) {
+    throw new Error(`Invalid port: ${options.port}`)
+  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/cli/commands/dev.ts` around lines 10 - 12, The CLI currently parses options.port with parseInt and may produce NaN or accept partial strings; update the .action callback to validate options.port before calling startDevServer by ensuring options.port is a whole integer within the allowed port range (1–65535) and that parseInt(options.port, 10) yields a finite integer; if validation fails, throw or exit with a clear error message referencing options.port so resolveContentDir and startDevServer are never called with an invalid port value.

packages/chronicle/src/server/handlers/llms.ts-30-31 (1)
30-31: ⚠️ Potential issue | 🟡 Minor

Missing error handling for file read operation.

If `fs.readFile` fails (e.g., permission denied, file deleted between readdir and read), the entire scan will throw and the handler will fail. Consider wrapping in try/catch to skip unreadable files gracefully, consistent with the `readdir` error handling on lines 16-17.

Proposed fix
```diff
   if (!entry.name.endsWith('.mdx') && !entry.name.endsWith('.md')) continue
-  const raw = await fs.readFile(fullPath, 'utf-8')
-  const { data: fm } = matter(raw)
+  let raw: string
+  try {
+    raw = await fs.readFile(fullPath, 'utf-8')
+  } catch {
+    continue // Skip unreadable files
+  }
+  const { data: fm } = matter(raw)
   const baseName = entry.name.replace(/\.(mdx|md)$/, '')
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/handlers/llms.ts` around lines 30 - 31, Wrap the fs.readFile(fullPath, 'utf-8') and matter(raw) calls in a try/catch inside the scan loop in llms.ts so a failed read or parse of a single file doesn't abort the entire scan; on error, log or warn (e.g., using the existing logger or console.warn) including fullPath and the error, then continue to the next file, mirroring the graceful handling used for readdir.

packages/chronicle/src/cli/commands/start.ts-14-14 (1)
14-14: ⚠️ Potential issue | 🟡 Minor

Missing validation for parsed port number.

`parseInt` returns `NaN` for non-numeric input. Passing `NaN` to `startProdServer` will cause unexpected behavior.

Proposed fix
```diff
 const contentDir = resolveContentDir(options.content)
 const port = parseInt(options.port, 10)
+if (isNaN(port) || port < 1 || port > 65535) {
+  console.error(chalk.red('Invalid port number'))
+  process.exit(1)
+}
 const distDir = path.resolve(options.dist)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/cli/commands/start.ts` at line 14, The parsed port value assigned to "port" using parseInt(options.port, 10) can be NaN for invalid input; before calling startProdServer ensure the port is a valid integer in the acceptable TCP port range (e.g., 1–65535). Update the start command to validate Number.isInteger(port) && !Number.isNaN(port) (or use Number.parseInt then check) and that port >= 1 && port <= 65535; if invalid, either throw a descriptive error or fall back to a safe default and log the issue so startProdServer is never invoked with NaN or out-of-range values. Ensure references to "port" and the call to startProdServer use the validated value.

packages/chronicle/src/cli/commands/serve.ts-14-14 (1)
14-14: ⚠️ Potential issue | 🟡 Minor

Missing validation for parsed port number.

Same issue as in `start.ts` - `parseInt` returns `NaN` for non-numeric input.

Proposed fix
```diff
 const contentDir = resolveContentDir(options.content)
 const port = parseInt(options.port, 10)
+if (isNaN(port) || port < 1 || port > 65535) {
+  console.error(chalk.red('Invalid port number'))
+  process.exit(1)
+}
 const outDir = path.resolve(options.outDir)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/cli/commands/serve.ts` at line 14, The parsed port assignment using parseInt(options.port, 10) in serve.ts can yield NaN for invalid input; update the code that defines const port to validate the result (e.g., check Number.isInteger(port) and !Number.isNaN(port) and that port is within the valid TCP port range 1–65535) and throw or exit with a clear error when validation fails; reference the const port variable and the parseInt(options.port, 10) expression and mirror the same validation/exit behavior used for start.ts so invalid/non-numeric port inputs are rejected early.
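The three port-parsing call sites flagged above (`dev.ts`, `start.ts`, `serve.ts`) could share one strict validator. A minimal sketch (the `parsePort` name is an assumption, not project code):

```typescript
// Sketch: strict port validation shared by dev/start/serve commands.
// Number() rejects partial strings like '3000abc' that parseInt accepts.
function parsePort(raw: string): number {
  const port = Number(raw)
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    throw new Error(`Invalid port: ${raw}`)
  }
  return port
}

console.log(parsePort('3000')) // 3000
// parsePort('3000abc'), parsePort('abc'), and parsePort('3.5') all throw
// instead of handing NaN (or a truncated value) to the server startup path.
```

Centralizing the check also keeps the error message consistent across the three commands.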
🧹 Nitpick comments (8)
packages/chronicle/src/server/__tests__/og.test.ts (1)
4-17: These assertions are effectively out of CI right now.

The only behavioral tests for `handleOg` are skipped unless `TEST_OG` is set, so regressions in headers or rendering will not be caught in the default test run. Mock the font fetch or inject the font buffer so this suite can run deterministically in CI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/__tests__/og.test.ts` around lines 4 - 17, The tests for handleOg are skipped behind TEST_OG, so make the suite deterministic by providing a mocked font fetch or injecting a font buffer: remove or stop using describe.skipIf(!process.env.TEST_OG) for this suite and instead in og.test.ts mock the global fetch (or the module method that loads fonts) in a beforeAll to return a Response whose body is the expected ArrayBuffer/Uint8Array of a simple valid font, and restore the original fetch in afterAll; alternatively, refactor handleOg to accept an injected font buffer or font-loader function and call that injected dependency from the tests with a known buffer, then keep the assertions of content-type and cache-control unchanged to run reliably in CI.

packages/chronicle/src/server/__tests__/handlers.test.ts (1)
18-35: Content-Type assertions are brittle.

Exact equality can fail with valid charset suffixes. Prefer media-type pattern checks.
♻️ Suggested update
```diff
-  expect(response.headers.get('content-type')).toBe('text/plain')
+  expect(response.headers.get('content-type') ?? '').toMatch(/^text\/plain\b/)
   ...
-  expect(response.headers.get('content-type')).toBe('application/xml')
+  expect(response.headers.get('content-type') ?? '').toMatch(/^application\/xml\b/)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/__tests__/handlers.test.ts` around lines 18 - 35, The content-type assertions are brittle because they expect exact equality and will fail if a charset is appended; update the tests for handleRobots and handleSitemap to assert the media type only (e.g., check that response.headers.get('content-type') matches a media-type pattern such as /^text\/plain\b/ for handleRobots and /^application\/xml\b/ for handleSitemap or use startsWith('text/plain') / startsWith('application/xml')) so the tests accept valid charset suffixes while still verifying the correct media type.

packages/chronicle/src/server/__tests__/entry-server.test.tsx (1)
26-34: Route tests are too weak to catch routing regressions.

Both tests only check truthiness, which duplicates prior coverage and won't fail if `/` and `/apis` render the same markup.

♻️ Suggested strengthening
```diff
-  it('renders docs route for root URL', () => {
-    const html = render('http://localhost:3000/', mockData)
-    expect(html).toBeTruthy()
-  })
-
-  it('renders api route for /apis URL', () => {
-    const html = render('http://localhost:3000/apis', mockData)
-    expect(html).toBeTruthy()
-  })
+  it('renders different markup for docs and api routes', () => {
+    const docsHtml = render('http://localhost:3000/', mockData)
+    const apisHtml = render('http://localhost:3000/apis', mockData)
+    expect(docsHtml).not.toEqual(apisHtml)
+  })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/__tests__/entry-server.test.tsx` around lines 26 - 34, The tests 'renders docs route for root URL' and 'renders api route for /apis URL' only assert truthiness; update them to assert route-specific output so routing regressions fail. Use the existing render(url, mockData) call and replace expect(html).toBeTruthy() with checks for distinct markers (e.g., route-specific headings, IDs, or text) that only appear on the docs page vs the apis page; for example assert that the output from render('http://localhost:3000/') contains the docs marker and that render('http://localhost:3000/apis') contains the APIs marker. Keep using the same test names and the render and mockData symbols so the change is minimal and targeted.

packages/chronicle/src/server/__tests__/vite-config.test.ts (1)
16-24: `isDev` test currently doesn't verify `isDev` behavior.

This case passes even if the option is ignored. Either assert a dev/prod-differing field or rename the test to reflect that it only checks root stability.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/__tests__/vite-config.test.ts` around lines 16 - 24, The test "accepts isDev option" currently only checks root and doesn't assert any dev/prod differences; update the test that calls createViteConfig to verify a field that changes with isDev (e.g., config.mode should be 'development' when isDev: true and 'production' when isDev: false) by adding assertions for config.mode for both true and false cases (or alternatively rename the test to reflect it only checks root stability if you don't want to assert mode); reference the createViteConfig call in the test to add the extra expect(config.mode).toBe(...) assertions.

.github/workflows/canary.yml (1)
37-40: Guard publish step to avoid fork/secret-related failures.

This reduces noisy CI failures when `NPM_TOKEN` is unavailable (e.g., fork PRs).

♻️ Suggested hardening
```diff
   - name: Publish
+    if: ${{ github.event.pull_request.head.repo.full_name == github.repository && secrets.NPM_TOKEN != '' }}
     run: bun publish --tag canary --access public
     env:
       NPM_CONFIG_TOKEN: ${{ secrets.NPM_TOKEN }}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/canary.yml around lines 37 - 40, The Publish step currently runs unguarded and fails on fork PRs without secrets; modify the step named "Publish" (the one running "bun publish --tag canary --access public") to include an if condition that only runs when NPM_TOKEN is present and the workflow is not from a fork PR — e.g. add an expression like if: github.event.pull_request == null && secrets.NPM_TOKEN != '' (or alternatively check github.repository == 'yourOrg/yourRepo' if you prefer repository-scoped gating) so the publish only executes when the secret is available and the run is from the main repository.

packages/chronicle/src/server/__tests__/router.test.ts (1)
62-71: Add coverage for the new `/api/specs` route.

This suite exercises the other built-in endpoints, but the API pages in this migration depend on `/api/specs`. A small route-match assertion here would keep that wiring from regressing silently.

🧪 Proposed addition
```diff
 it('matches /api/apis-proxy route', () => {
   const handler = matchRoute('http://localhost:3000/api/apis-proxy')
   expect(handler).not.toBeNull()
 })
+
+it('matches /api/specs route', () => {
+  const handler = matchRoute('http://localhost:3000/api/specs')
+  expect(handler).not.toBeNull()
+})

 it('returns 405 for non-POST to apis-proxy', async () => {
   const handler = matchRoute('http://localhost:3000/api/apis-proxy')
   const response = await handler!(new Request('http://localhost:3000/api/apis-proxy'))
   expect(response.status).toBe(405)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/__tests__/router.test.ts` around lines 62 - 71, Add a test that asserts the router recognizes the new /api/specs route: call matchRoute('http://localhost:3000/api/specs') and expect the returned handler to be non-null, and also exercise the handler with a non-POST Request (using new Request('http://localhost:3000/api/specs')) to assert it returns 405 (mirroring the existing apis-proxy tests); update the test file by adding these assertions alongside the existing matchRoute and handler invocation for apis-proxy so the new /api/specs route wiring is covered.

packages/chronicle/src/server/prod.ts (1)
10-17: Use `root` when resolving the production entry.

The public API accepts `root`, but this function resolves `distDir` from the current working directory and then passes the unresolved value downstream. A relative `distDir` will point at the wrong bundle whenever the caller is not already in the project root.

📦 Proposed fix
```diff
 export async function startProdServer(options: ProdServerOptions) {
-  const { port, distDir } = options
+  const { port, root, distDir } = options
+  const resolvedDistDir = path.resolve(root, distDir)
-  const serverEntry = path.resolve(distDir, 'server/entry-prod.js')
+  const serverEntry = path.resolve(resolvedDistDir, 'server/entry-prod.js')
   const { startServer } = await import(serverEntry)
   console.log(chalk.cyan('Starting production server...'))
-  return startServer({ port, distDir })
+  return startServer({ port, distDir: resolvedDistDir })
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/prod.ts` around lines 10 - 17, The startProdServer function currently resolves serverEntry using the raw distDir which can be relative; instead resolve distDir against the provided root (from ProdServerOptions) before building serverEntry so relative distDir works regardless of CWD. In practice, inside startProdServer resolve a baseDir = path.resolve(root, distDir) (or path.resolve(root, distDir || '') if distDir can be undefined), use that baseDir when computing serverEntry = path.resolve(baseDir, 'server/entry-prod.js'), and pass the resolved baseDir (not the original relative distDir) to startServer({ port, distDir: baseDir }) so downstream code gets the absolute bundle path; update references to serverEntry, distDir, root and the startServer call accordingly.

packages/chronicle/src/server/entry-server.tsx (1)
20-24: Pass the full location into `StaticRouter`.

Only forwarding `pathname` drops query strings and hashes, so any component that reads `useLocation().search` or `useLocation().hash` will render different output on the server and client.

♻️ Proposed fix
```diff
 export function render(url: string, data: SSRData): string {
-  const pathname = new URL(url, 'http://localhost').pathname
+  const location = new URL(url, 'http://localhost')
   return renderToString(
-    <StaticRouter location={pathname}>
+    <StaticRouter location={`${location.pathname}${location.search}${location.hash}`}>
       <PageProvider
         initialConfig={data.config}
         initialTree={data.tree}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/entry-server.tsx` around lines 20 - 24, The server render drops query and hash by passing only pathname to StaticRouter; update the render function so StaticRouter receives the full location (including search and hash) instead of just pathname. In the render function (symbol: render) replace the pathname-only value passed to <StaticRouter location={...}> with the full URL/location (e.g., the original url or a constructed string including pathname+search+hash) so useLocation() on server matches the client.
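A small sketch of the location composition the proposed fix relies on (`toRouterLocation` is a hypothetical helper name, not project code):

```typescript
// Sketch: derive the full client-visible location (path + query + hash)
// from an absolute request URL, so SSR and client routing see the same value.
function toRouterLocation(url: string): string {
  const u = new URL(url, 'http://localhost')
  return `${u.pathname}${u.search}${u.hash}`
}

console.log(toRouterLocation('http://localhost:3000/docs?tab=examples#setup'))
// → /docs?tab=examples#setup
```

The second argument to `new URL` is only a base for relative inputs; absolute URLs keep their own origin, and only the path/query/hash portion is forwarded to the router.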
```tsx
function useSearch(query: string) {
  const [results, setResults] = useState<SearchResult[]>([]);
  const [isLoading, setIsLoading] = useState(false);
  const timerRef = useRef<ReturnType<typeof setTimeout>>();
```
🧩 Analysis chain
🌐 Web query:
In current React TypeScript typings, is useRef<T>() without an initial value valid, or must useRef be initialized (e.g., null)?
💡 Result:
In the current @types/react (React 19) TypeScript typings, useRef<T>() with no argument is not valid—you must pass an explicit initial value (e.g. undefined or null). The change explicitly forbids an implicit undefined initializer. [1], [2]
- Use `useRef<T | undefined>(undefined)` if you want an "initially empty" ref that you'll fill later. [1]
- Use `useRef<T | null>(null)` for DOM refs (since they start out `null` before the element mounts). (Common pattern; aligns with React's runtime behavior.) [2]
For comparison, older typings (React 18 era) included a no-arg overload like useRef<T>(): MutableRefObject<T | undefined>, which made useRef<T>() valid back then. [3]
Sources: [1] (github.com) [2] (github.com) [3] (use-form.netlify.app)
Citations:
- 1: [react] Require initial value for `useRef` DefinitelyTyped/DefinitelyTyped#64920
- 2: https://github.com/eps1lon/types-react-codemod
- 3: https://use-form.netlify.app/modules/_node_modules__types_react_index_d_.react
Add initial value to `useRef` to satisfy React TypeScript typings.

Line 22: `useRef<ReturnType<typeof setTimeout>>()` lacks a required initial value. Current React (v19) TypeScript typings require an explicit initializer (e.g., `null` or `undefined`). Initialize with `null` and guard `clearTimeout` calls.
Fix
```diff
- const timerRef = useRef<ReturnType<typeof setTimeout>>();
+ const timerRef = useRef<ReturnType<typeof setTimeout> | null>(null);

  useEffect(() => {
-   clearTimeout(timerRef.current);
+   if (timerRef.current) clearTimeout(timerRef.current);
    timerRef.current = setTimeout(async () => {
      ...
    }, 100);
-   return () => clearTimeout(timerRef.current);
+   return () => {
+     if (timerRef.current) clearTimeout(timerRef.current);
+   };
  }, [query]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/chronicle/src/components/ui/search.tsx` at line 22, The useRef
declaration for timerRef in the search component is missing an initial value;
change timerRef = useRef<ReturnType<typeof setTimeout> | null>(null) so it
satisfies React v19 TypeScript typings, then update any places that call
clearTimeout(timerRef.current) to first guard that timerRef.current is not null
(e.g., if (timerRef.current) { clearTimeout(timerRef.current); timerRef.current
= null; }) and reset timerRef.current appropriately when setting a new timeout.
```ts
// Serve static files from content dir (skip .md/.mdx)
const contentFile = path.join(contentDir, decodeURIComponent(url.split('?')[0]))
if (!url.endsWith('.md') && !url.endsWith('.mdx')) {
  try {
    const stat = await fsPromises.stat(contentFile)
    if (stat.isFile()) {
      const ext = path.extname(contentFile).toLowerCase()
      const mimeTypes: Record<string, string> = {
        '.png': 'image/png', '.jpg': 'image/jpeg', '.jpeg': 'image/jpeg',
        '.gif': 'image/gif', '.svg': 'image/svg+xml', '.webp': 'image/webp',
        '.ico': 'image/x-icon', '.pdf': 'application/pdf', '.json': 'application/json',
        '.yaml': 'text/yaml', '.yml': 'text/yaml', '.txt': 'text/plain',
      }
      res.setHeader('Content-Type', mimeTypes[ext] || 'application/octet-stream')
      createReadStream(contentFile).pipe(res)
      return
    }
  } catch { /* fall through to SSR */ }
```
Path traversal vulnerability in static file serving.
decodeURIComponent(url) can decode %2e%2e%2f to ../, and path.join does not prevent directory traversal. An attacker could access files outside contentDir (e.g., /..%2f..%2f..%2fetc/passwd).
Proposed fix - validate resolved path is within contentDir
```diff
  // Serve static files from content dir (skip .md/.mdx)
- const contentFile = path.join(contentDir, decodeURIComponent(url.split('?')[0]))
+ const requestedPath = decodeURIComponent(url.split('?')[0])
+ const contentFile = path.resolve(contentDir, requestedPath.replace(/^\/+/, ''))
+
+ // Prevent path traversal
+ if (!contentFile.startsWith(contentDir + path.sep) && contentFile !== contentDir) {
+   res.statusCode = 403
+   res.end('Forbidden')
+   return
+ }
+
  if (!url.endsWith('.md') && !url.endsWith('.mdx')) {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/chronicle/src/server/dev.ts` around lines 40 - 57, The static file
serving uses decodeURIComponent(url) + path.join(contentDir, ...) to compute
contentFile, which allows path traversal (e.g., %2e%2e). Fix: after computing
the candidate path, resolve it (using path.resolve) and verify the resolved path
starts with the resolved contentDir prefix before calling
fsPromises.stat/createReadStream; if the check fails, skip serving or return
403. Update the logic around contentFile, contentDir, decodeURIComponent(url),
path.join, fsPromises.stat, and createReadStream to perform this containment
check and deny requests that escape the contentDir.
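The containment check described in these reviews can be distilled into a standalone helper. This is an illustrative sketch, not code from the repository: the function name is invented, and `path.posix` is used so the behavior is platform-independent.

```typescript
import path from 'node:path'

// Resolve a decoded request path against a content root; return null when
// the resolved result escapes the root (a path traversal attempt).
function resolveContentPath(contentDir: string, rawUrl: string): string | null {
  // Strip the query string before decoding, as in the reviewed code.
  const requested = decodeURIComponent(rawUrl.split('?')[0])
  // Prefixing '.' keeps an absolute request path relative to contentDir.
  const resolved = path.posix.resolve(contentDir, '.' + requested)
  // If the relative path climbs out of contentDir, reject the request.
  const relative = path.posix.relative(contentDir, resolved)
  if (relative.startsWith('..') || path.posix.isAbsolute(relative)) return null
  return resolved
}
```

An encoded traversal like `/%2e%2e/%2e%2e/etc/passwd` decodes to `/../../etc/passwd`, resolves outside the root, and is rejected; ordinary asset paths resolve as expected.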
```ts
// Serve static files from content dir (skip .md/.mdx)
const contentDir = process.env.CHRONICLE_CONTENT_DIR || process.cwd()
const contentFile = path.join(contentDir, decodeURIComponent(url.split('?')[0]))
if (!url.endsWith('.md') && !url.endsWith('.mdx')) {
  try {
    const stat = await fsPromises.stat(contentFile)
    if (stat.isFile()) {
      const ext = path.extname(contentFile).toLowerCase()
      const mimeTypes: Record<string, string> = {
        '.png': 'image/png', '.jpg': 'image/jpeg', '.jpeg': 'image/jpeg',
        '.gif': 'image/gif', '.svg': 'image/svg+xml', '.webp': 'image/webp',
        '.ico': 'image/x-icon', '.pdf': 'application/pdf', '.json': 'application/json',
        '.yaml': 'text/yaml', '.yml': 'text/yaml', '.txt': 'text/plain',
      }
      res.setHeader('Content-Type', mimeTypes[ext] || 'application/octet-stream')
      createReadStream(contentFile).pipe(res)
      return
    }
  } catch { /* fall through */ }
```
Guard static content serving against path traversal.
The decoded request path is joined directly onto contentDir, so /%2e%2e/... can escape the content root and expose arbitrary files. Resolve against contentDir, reject anything whose relative path leaves that root, and run the markdown-extension check on that normalized pathname.
Suggested fix
```diff
- const contentDir = process.env.CHRONICLE_CONTENT_DIR || process.cwd()
- const contentFile = path.join(contentDir, decodeURIComponent(url.split('?')[0]))
- if (!url.endsWith('.md') && !url.endsWith('.mdx')) {
+ const contentDir = process.env.CHRONICLE_CONTENT_DIR || process.cwd()
+ const requestPath = decodeURIComponent(new URL(url, `http://localhost:${port}`).pathname)
+ const contentFile = path.resolve(contentDir, `.${requestPath}`)
+ const relativePath = path.relative(contentDir, contentFile)
+ if (relativePath.startsWith('..') || path.isAbsolute(relativePath)) {
+   res.statusCode = 403
+   res.end('Forbidden')
+   return
+ }
+ if (!requestPath.endsWith('.md') && !requestPath.endsWith('.mdx')) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/chronicle/src/server/entry-prod.ts` around lines 42 - 60, The static
file serving logic uses decodeURIComponent(url) and path.join(contentDir, ...)
directly (contentDir, contentFile, url), allowing path traversal; change to
resolve and normalize the requested path (e.g. const requestedPath =
decodeURIComponent(url.split('?')[0]); const resolved = path.resolve(contentDir,
'.' + requestedPath) or use path.join followed by path.normalize/path.resolve),
then verify the resolved path stays inside contentDir (e.g.
path.relative(contentDir, resolved) startsWith('..') check) and return a
404/ignore if it escapes; use the normalized/resolved pathname for the
markdown-extension check (the url.endsWith('.md'/.mdx') should be run against
the normalized pathname) and then call fsPromises.stat and stream from the
resolved path instead of the original contentFile.
```ts
const html = render(url, { config, tree, page: pageData, apiSpecs })

const dataScript = `<script>window.__PAGE_DATA__ = ${JSON.stringify(embeddedData)}</script>`
const finalHtml = template
  .replace('<!--head-outlet-->', `<!--head-outlet-->${dataScript}`)
  .replace('<!--ssr-outlet-->', html)
```
Escape window.__PAGE_DATA__ before interpolating it into a <script>.
JSON.stringify() alone is not safe here; a title or description containing </script> or U+2028/U+2029 can terminate the script block and execute injected HTML/JS. Escape the serialized payload, or use a serializer that is safe for inline script contexts.
Suggested fix
```diff
- const dataScript = `<script>window.__PAGE_DATA__ = ${JSON.stringify(embeddedData)}</script>`
+ const serializedPageData = JSON.stringify(embeddedData)
+   .replace(/</g, '\\u003c')
+   .replace(/\u2028/g, '\\u2028')
+   .replace(/\u2029/g, '\\u2029')
+ const dataScript = `<script>window.__PAGE_DATA__ = ${serializedPageData}</script>`
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/chronicle/src/server/entry-prod.ts` around lines 97 - 102, The
inline script injection is unsafe: before interpolating embeddedData into
dataScript in entry-prod.ts (where dataScript is built and template.replace is
used), serialize and escape the JSON payload for an inline script context
instead of using JSON.stringify(...) raw; specifically ensure that
characters/sequences that can break a script tag (like "</script>"), U+2028 and
U+2029, and leading "<" are escaped (or use a known-safe serializer) so
window.__PAGE_DATA__ receives a safe escaped JSON string before inserting into
template; update the dataScript construction that references embeddedData to use
that escaping function/serializer.
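The escaping the comment asks for amounts to three substitutions on the serialized JSON. A minimal sketch, with an invented helper name:

```typescript
// Serialize data for embedding in an inline <script>: escape "<" so a
// "</script>" inside the payload cannot close the script element, and the
// line separators U+2028/U+2029, which are legal in JSON but were not
// allowed in JS string literals before ES2019.
function serializeForInlineScript(data: unknown): string {
  return JSON.stringify(data)
    .replace(/</g, '\\u003c')
    .replace(/\u2028/g, '\\u2028')
    .replace(/\u2029/g, '\\u2029')
}

// The serialized form contains no literal "</script>", yet parses back to
// the original value, because \u003c is a valid JSON escape for "<".
const payload = serializeForInlineScript({ title: '</script><img src=x onerror=alert(1)>' })
```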
```ts
const url = spec.server.url + path

try {
  const response = await fetch(url, {
    method,
    headers,
    body: body ? JSON.stringify(body) : undefined,
  })
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n packages/chronicle/src/server/handlers/apis-proxy.ts
```

Repository: raystack/chronicle
Length of output: 1997

🏁 Script executed:

```shell
cat package.json | jq '.engines'
```

Repository: raystack/chronicle
Length of output: 82

🏁 Script executed:

```shell
grep -r "AbortSignal\|fetch.*signal" packages/chronicle/src --include="*.ts" --include="*.js"
```

Repository: raystack/chronicle
Length of output: 44
Use URL resolution and a fetch timeout for upstream calls.
String concatenation can produce malformed targets (e.g., if the user-provided path lacks a leading slash or contains multiple slashes). Additionally, fetch without a timeout risks hanging request handling on slow or unresponsive upstreams, which can exhaust connection resources.
Proposed fix
```diff
- const url = spec.server.url + path
+ let url: URL
+ try {
+   url = new URL(String(path), spec.server.url)
+ } catch {
+   return Response.json({ error: 'Invalid target path' }, { status: 400 })
+ }
@@
- const response = await fetch(url, {
-   method,
-   headers,
+ const response = await fetch(url, {
+   method: String(method),
+   headers: headers as HeadersInit,
    body: body ? JSON.stringify(body) : undefined,
+   signal: AbortSignal.timeout(10_000),
  })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/chronicle/src/server/handlers/apis-proxy.ts` around lines 23 - 30,
The upstream call constructs the target via string concatenation (const url =
spec.server.url + path) and uses fetch without a timeout; replace that with a
robust URL resolution using the URL constructor (new URL(path, spec.server.url))
to avoid malformed URLs, then call fetch with an AbortController-based timeout
(create AbortController, pass signal to fetch, set a setTimeout to call
controller.abort() after a configured timeout, and clearTimeout on
success/failure) so slow/unresponsive upstreams won't hang request handling;
update the fetch invocation that currently uses method, headers, body to include
the controller.signal and ensure timeout cleanup.
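Both suggestions can be sketched together, assuming a runtime with the WHATWG `URL` class, global `fetch`, and `AbortSignal.timeout` (Node 18+); the helper names here are illustrative, not from the codebase:

```typescript
// Resolve a proxy target with the URL constructor instead of string
// concatenation, so a path without a leading slash still resolves
// correctly against the base; return null on a malformed base or path.
function resolveTarget(base: string, targetPath: string): URL | null {
  try {
    return new URL(targetPath, base)
  } catch {
    return null
  }
}

// Bound the upstream call so a hung server cannot pin the request handler.
async function proxyFetch(base: string, targetPath: string, init: RequestInit = {}): Promise<Response> {
  const target = resolveTarget(base, targetPath)
  if (!target) throw new Error('Invalid target path')
  return fetch(target, { ...init, signal: AbortSignal.timeout(10_000) })
}
```

`AbortSignal.timeout()` also avoids the manual `setTimeout`/`clearTimeout` bookkeeping an `AbortController` approach requires.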
MDX files in user's content dir can't find react because node_modules is in PACKAGE_ROOT. Add resolve.dedupe for react and server.fs.allow for content dir access. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
react-device-detect causes ESM export issues when run from npx cache. Replace with inline navigator.platform check for Mac detection. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🧹 Nitpick comments (1)
packages/chronicle/src/server/vite-config.ts (1)
24-27: Consider removing the `@content` alias on line 26 if unused. Current server paths appear to use `process.env.CHRONICLE_CONTENT_DIR` instead; keeping an unused alias increases config surface.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/server/vite-config.ts` around lines 24 - 27, The `@content` alias in the Vite config (the alias entry '@content') appears unused and duplicates the runtime use of process.env.CHRONICLE_CONTENT_DIR; remove the '@content' entry from the alias map in the vite-config code (where alias: { '@': path.resolve(root, 'src'), '@content': contentDir } is defined), then run a quick grep for any imports using '@content' and update them to import via relative paths or use the environment-backed contentDir where needed to avoid breaking imports.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@packages/chronicle/src/server/vite-config.ts`:
- Around line 24-27: The `@content` alias in the Vite config (the alias entry
'@content') appears unused and duplicates the runtime use of
process.env.CHRONICLE_CONTENT_DIR; remove the '@content' entry from the alias
map in the vite-config code (where alias: { '@': path.resolve(root, 'src'),
'@content': contentDir } is defined), then run a quick grep for any imports
using '@content' and update them to import via relative paths or use the
environment-backed contentDir where needed to avoid breaking imports.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3e50e3cc-a73a-4526-a774-b7d6d81b5abb
📒 Files selected for processing (1)
packages/chronicle/src/server/vite-config.ts
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Use CSS variables only (no inline styles) for syntax highlighting. Both light and dark themes controlled via CSS selectors. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/chronicle/src/components/ui/search.tsx (1)
141-156: ⚠️ Potential issue | 🟡 Minor

Results shown during loading state when query is active.

Lines 141-156 render results whenever `search.length > 0`, regardless of `isLoading`. This can briefly show stale results while a new search is in progress. Consider guarding with `!isLoading` for consistency with other conditional blocks.

🛠️ Suggested fix

```diff
- {search.length > 0 &&
+ {!isLoading && search.length > 0 &&
    results.map((result) => (
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/components/ui/search.tsx` around lines 141 - 156, The results list is rendered whenever search.length > 0 even during an active fetch, causing stale entries to briefly appear; update the conditional that renders Command.Item mapping (the block using results.map, Command.Item, getResultIcon, onSelect and Text) to only render when search.length > 0 && !isLoading so results are hidden while a new query is loading.
🧹 Nitpick comments (1)
packages/chronicle/package.json (1)
38-66: Add an explicit `engines.node` constraint for the new Vite stack.

Given the new dependencies on Vite and related tooling, package installs can pass on unsupported Node versions and fail later at runtime. Please pin a Node engine range in this manifest.

The minimum should be `>=20.19.0` to satisfy the requirements of vite@8.0.0 and @vitejs/plugin-react@6.0.1 (^20.19.0 || >=22.12.0).

Proposed manifest update

```diff
 {
   "name": "@raystack/chronicle",
   "version": "0.1.0",
   "description": "Config-driven documentation framework",
   "license": "Apache-2.0",
   "type": "module",
+  "engines": {
+    "node": ">=20.19.0"
+  },
   "files": [
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/package.json` around lines 38 - 66, Add an explicit Node engine constraint to package.json by adding an "engines" field with a "node" range pinned to at least ">=20.19.0" (to satisfy vite@8.0.0 and `@vitejs/plugin-react`@6.0.1 which require ^20.19.0 || >=22.12.0); update the manifest so the top-level package.json includes "engines": { "node": ">=20.19.0" } to prevent installs on unsupported Node versions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/chronicle/src/components/ui/search.tsx`:
- Around line 23-38: The useEffect debounce fetch currently lets in-flight
requests complete and overwrite newer results; modify the effect in search.tsx
to create an AbortController for each fetch (and attach its signal to the fetch
call), call controller.abort() in the cleanup before starting a new fetch (and
when the component unmounts), and ignore/handle AbortError so it doesn't set
state; also check res.ok before calling res.json() and setResults([]) or surface
an error when res.ok is false. Update references to timerRef, the existing
fetch(`/api/search?...`), setIsLoading, and setResults inside the effect
accordingly.
---
Outside diff comments:
In `@packages/chronicle/src/components/ui/search.tsx`:
- Around line 141-156: The results list is rendered whenever search.length > 0
even during an active fetch, causing stale entries to briefly appear; update the
conditional that renders Command.Item mapping (the block using results.map,
Command.Item, getResultIcon, onSelect and Text) to only render when
search.length > 0 && !isLoading so results are hidden while a new query is
loading.
---
Nitpick comments:
In `@packages/chronicle/package.json`:
- Around line 38-66: Add an explicit Node engine constraint to package.json by
adding an "engines" field with a "node" range pinned to at least ">=20.19.0" (to
satisfy vite@8.0.0 and `@vitejs/plugin-react`@6.0.1 which require ^20.19.0 ||
>=22.12.0); update the manifest so the top-level package.json includes
"engines": { "node": ">=20.19.0" } to prevent installs on unsupported Node
versions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 89c18a07-c343-414e-839b-cd41b6daba1d
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
packages/chronicle/package.json
packages/chronicle/src/components/ui/search.tsx
```ts
useEffect(() => {
  clearTimeout(timerRef.current);
  timerRef.current = setTimeout(async () => {
    setIsLoading(true);
    try {
      const params = new URLSearchParams();
      if (query) params.set("query", query);
      const res = await fetch(`/api/search?${params}`);
      setResults(await res.json());
    } catch {
      setResults([]);
    }
    setIsLoading(false);
  }, 100);
  return () => clearTimeout(timerRef.current);
}, [query]);
```
Race condition: stale fetch responses can overwrite current results.
When the query changes while a fetch is in-flight, the pending request continues and its response will overwrite results—potentially showing stale data briefly. Use an AbortController to cancel pending requests on cleanup or query change.
Additionally, res.ok is not checked before parsing the response.
🐛 Proposed fix using AbortController
```diff
  useEffect(() => {
+   const controller = new AbortController();
    clearTimeout(timerRef.current);
    timerRef.current = setTimeout(async () => {
      setIsLoading(true);
      try {
        const params = new URLSearchParams();
        if (query) params.set("query", query);
-       const res = await fetch(`/api/search?${params}`);
+       const res = await fetch(`/api/search?${params}`, { signal: controller.signal });
+       if (!res.ok) throw new Error('Search request failed');
        setResults(await res.json());
-     } catch {
-       setResults([]);
+     } catch (e) {
+       if ((e as Error).name !== 'AbortError') {
+         setResults([]);
+       }
      }
      setIsLoading(false);
    }, 100);
-   return () => clearTimeout(timerRef.current);
+   return () => {
+     clearTimeout(timerRef.current);
+     controller.abort();
+   };
  }, [query]);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/chronicle/src/components/ui/search.tsx` around lines 23 - 38, The
useEffect debounce fetch currently lets in-flight requests complete and
overwrite newer results; modify the effect in search.tsx to create an
AbortController for each fetch (and attach its signal to the fetch call), call
controller.abort() in the cleanup before starting a new fetch (and when the
component unmounts), and ignore/handle AbortError so it doesn't set state; also
check res.ok before calling res.json() and setResults([]) or surface an error
when res.ok is false. Update references to timerRef, the existing
fetch(`/api/search?...`), setIsLoading, and setResults inside the effect
accordingly.
Moves API routing + SSR rendering logic into request-handler.ts so it can be reused by different deployment adapters. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Exports a Node.js handler that delegates to the shared request handler for SSR + API routes on Vercel. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds post-build step that generates .vercel/output/ with: - Static assets from client build + content dir - Bundled serverless function (catch-all) - Build Output API config (v3) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
- `import.meta.glob` — eager frontmatter for sidebar, lazy MDX loading per page
- `renderToString` — server resolves all data (config, page tree, MDX component, API specs) before rendering
- No `fs`/`path` imports on client (`.md`/`.mdx` filtered)
- `ssr.noExternal: true` bundles all deps, no dual React instance issues
- API specs served from the `/api/specs` endpoint instead of embedded in HTML (avoids 2MB+ bloat)

Commands
- `chronicle dev`
- `chronicle build` — outputs `dist/`
- `chronicle start` — serves pre-built `dist/`
- `chronicle serve`

Architecture
Test plan
- `chronicle dev` — basic example, frontier example
- `chronicle build` + `chronicle start` — production mode

🤖 Generated with Claude Code