135 changes: 135 additions & 0 deletions CLAUDE.md
@@ -0,0 +1,135 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Common Development Commands

### Installation and Setup

```bash
# Install dependencies using pnpm (recommended)
pnpm install

# Or using npm
npm install
```

### Development

```bash
# Start development server on http://localhost:3000
pnpm dev

# Build the project
pnpm build

# Start production server
pnpm start

# Process MDX files (runs automatically after install)
pnpm postinstall
```

### Code Quality

```bash
# Run linting
pnpm lint

# Run type checking
pnpm typecheck

# Check image compliance with project rules
pnpm lint:images

# Migrate images to proper directory structure
pnpm migrate:images
```

### Git Commits

- The project uses Husky for git hooks and lint-staged for pre-commit formatting
- Prettier will automatically format files on commit
- On Windows + VSCode/Cursor, use command line (`git commit`) instead of GUI to avoid Husky bugs

## Project Architecture

### Tech Stack

- **Framework**: Next.js 15 with App Router
- **Documentation**: Fumadocs MDX (documentation system)
- **Styling**: Tailwind CSS v4
- **UI Components**: Fumadocs UI + custom components
- **Authentication**: NextAuth (beta)
- **AI Integration**: Vercel AI SDK with Assistant UI
- **Database**: Prisma with Neon (PostgreSQL)

### Directory Structure

```
app/
├── api/                  # API routes (auth, chat, docs-tree)
├── components/           # React components
│   ├── assistant-ui/     # AI assistant components
│   └── ui/               # Reusable UI components
├── docs/                 # MDX documentation content
│   ├── ai/               # AI-related documentation
│   ├── computer-science/ # CS topics
│   ├── frontend/         # Frontend development
│   └── [...slug]/        # Dynamic routing for docs
├── hooks/                # Custom React hooks
└── layout.tsx            # Root layout with providers
```

### Documentation Structure

- Uses "Folder as a Book" pattern - each folder can have an `index.mdx` for overview
- URLs are auto-generated from file structure (e.g., `docs/ai/llm-basics/index.mdx` → `/ai/llm-basics`)
- File naming: use `kebab-case` and numeric prefixes for ordering (e.g., `01-intro.mdx`)
- Numeric prefixes are stripped from final URLs
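The URL rules above can be sketched as a small helper. This is hypothetical — the actual mapping is performed by Fumadocs, not by project code — but it illustrates the intended path-to-URL behavior:

```typescript
// Hypothetical illustration of the documented URL rules:
// strip the docs root, drop numeric ordering prefixes, and map
// a folder's index.mdx to the folder URL itself.
function docPathToUrl(filePath: string): string {
  return (
    "/" +
    filePath
      .replace(/^docs\//, "") // remove the content root
      .replace(/\.mdx$/, "") // drop the extension
      .split("/")
      .map((seg) => seg.replace(/^\d+-/, "")) // strip numeric prefixes like "01-"
      .filter((seg) => seg !== "index") // index.mdx maps to the folder URL
      .join("/")
  );
}
```

For example, `docPathToUrl("docs/ai/llm-basics/index.mdx")` yields `/ai/llm-basics`, and `docPathToUrl("docs/frontend/01-intro.mdx")` yields `/frontend/intro`.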

### Image Management

- Images should be placed in `./<basename>.assets/` directory alongside the MDX file
- Example: `foo.mdx` → images go in `./foo.assets/`
- Auto-migration scripts handle image placement during commits
- Site-wide images: `/images/site/*`
- Component demos: `/images/components/<name>/*`
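The co-location convention can be expressed as a one-line path rule. The helper below is a hypothetical sketch of that rule, not code from the project's migration scripts:

```typescript
import * as path from "node:path";

// Hypothetical helper: derive the co-located assets directory for an
// MDX file, per the "<basename>.assets/" convention described above.
function assetsDirFor(mdxPath: string): string {
  const dir = path.dirname(mdxPath);
  const base = path.basename(mdxPath, ".mdx"); // "foo.mdx" -> "foo"
  return path.join(dir, `${base}.assets`);
}
```

So `assetsDirFor("docs/ai/foo.mdx")` resolves to `docs/ai/foo.assets`, matching the example above.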

### MDX Frontmatter

Required fields:

```yaml
---
title: Document Title
---
```

Optional fields:

```yaml
---
description: Brief description
date: "2025-01-01"
tags:
- tag1
- tag2
---
```
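Taken together, the fields above imply a frontmatter shape like the following. This interface and validator are a hypothetical sketch — the project may validate frontmatter differently (e.g. via Fumadocs' own schema support):

```typescript
// Hypothetical frontmatter shape implied by the required/optional
// fields listed above; not the project's actual schema.
interface DocFrontmatter {
  title: string; // required
  description?: string; // optional
  date?: string; // optional, e.g. "2025-01-01"
  tags?: string[]; // optional
}

// Minimal check: only `title` is required.
function isValidFrontmatter(fm: Partial<DocFrontmatter>): fm is DocFrontmatter {
  return typeof fm.title === "string" && fm.title.length > 0;
}
```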

### Key Features

1. **AI Assistant**: Integrated chat interface with support for multiple AI providers
2. **Internationalization**: Using next-intl for multi-language support
3. **Search**: Orama search integration for documentation
4. **Comments**: Giscus integration for discussion
5. **Math Support**: KaTeX for mathematical expressions
6. **Authentication**: GitHub OAuth integration

### Development Considerations

- The project uses Fumadocs for documentation; refer to the [Fumadocs docs](https://fumadocs.dev/docs) for UI components
- Math expressions use remark-math and rehype-katex plugins
- Authentication is handled via NextAuth with Neon database adapter
- The project includes pre-configured GitHub Actions for automated deployment
2 changes: 2 additions & 0 deletions CONTRIBUTING.md
@@ -101,6 +101,7 @@ git push origin doc_raven
```

---

## Q&A

> Windows + VSCode (Cursor) users: if Husky blocks commits in the VS Code integrated terminal, run `git commit` from an external command line instead.
@@ -154,6 +155,7 @@ pnpm lint:images # check that images follow the project conventions
pnpm migrate:images # auto-migrate images into their matching assets directories
pnpm postinstall # sync the required Husky/Fumadocs configuration
```

---

## 📚 Documentation Guidelines
213 changes: 213 additions & 0 deletions app/api/chat/route.test.ts
@@ -0,0 +1,213 @@
/* eslint-disable @typescript-eslint/no-explicit-any */
import { describe, it, expect, vi, beforeEach } from "vitest";
import { POST } from "./route";
import { streamText } from "ai";
import { getModel } from "@/lib/ai/models";

// Mock the dependencies
vi.mock("@/lib/ai/models", () => ({
  getModel: vi.fn(),
  requiresApiKey: vi.fn((provider) => provider !== "intern"),
}));

vi.mock("@/lib/ai/prompt", () => ({
  buildSystemMessage: vi.fn((system) => {
    return system || "You are a helpful AI assistant.";
  }),
}));

vi.mock("ai", () => ({
  streamText: vi.fn(),
  convertToModelMessages: vi.fn((messages) => messages),
  UIMessage: {},
}));

describe("chat API route", () => {
  const mockStreamText = vi.mocked(streamText);
  const mockGetModel = vi.mocked(getModel);

  beforeEach(() => {
    vi.clearAllMocks();
  });

  it("should return error when API key is missing for openai", async () => {
    const request = new Request("http://localhost:3000/api/chat", {
      method: "POST",
      body: JSON.stringify({
        messages: [{ role: "user", content: "Hello" }],
        provider: "openai",
      }),
    });

    const response = await POST(request);
    const data = await response.json();

    expect(response.status).toBe(400);
    expect(data.error).toContain("API key is required");
  });

  it("should return error when API key is empty string for gemini", async () => {
    const request = new Request("http://localhost:3000/api/chat", {
      method: "POST",
      body: JSON.stringify({
        messages: [{ role: "user", content: "Hello" }],
        provider: "gemini",
        apiKey: "",
      }),
    });

    const response = await POST(request);
    const data = await response.json();

    expect(response.status).toBe(400);
    expect(data.error).toContain("API key is required");
  });

  it("should use intern provider by default", async () => {
    const mockModel = { id: "intern-model" } as any;
    mockGetModel.mockReturnValue(mockModel);

    const mockStreamResponse = {
      toUIMessageStreamResponse: vi.fn(() => new Response()),
    } as any;
    mockStreamText.mockReturnValue(mockStreamResponse);

    const request = new Request("http://localhost:3000/api/chat", {
      method: "POST",
      body: JSON.stringify({
        messages: [{ role: "user", content: "Hello" }],
      }),
    });

    await POST(request);

    expect(mockGetModel).toHaveBeenCalledWith("intern", undefined);
    expect(mockStreamText).toHaveBeenCalledWith({
      model: mockModel,
      system: expect.stringContaining("You are a helpful AI assistant"),
      messages: [{ role: "user", content: "Hello" }],
    });
  });

  it("should use OpenAI provider when specified", async () => {
    const mockModel = { id: "openai-model" } as any;
    mockGetModel.mockReturnValue(mockModel);

    const mockStreamResponse = {
      toUIMessageStreamResponse: vi.fn(() => new Response()),
    } as any;
    mockStreamText.mockReturnValue(mockStreamResponse);

    const request = new Request("http://localhost:3000/api/chat", {
      method: "POST",
      body: JSON.stringify({
        messages: [{ role: "user", content: "Hello" }],
        provider: "openai",
        apiKey: "test-api-key",
      }),
    });

    await POST(request);

    expect(mockGetModel).toHaveBeenCalledWith("openai", "test-api-key");
  });

  it("should include page context in system message", async () => {
    const mockModel = { id: "test-model" } as any;
    mockGetModel.mockReturnValue(mockModel);

    const mockStreamResponse = {
      toUIMessageStreamResponse: vi.fn(() => new Response()),
    } as any;
    mockStreamText.mockReturnValue(mockStreamResponse);

    const pageContext = {
      title: "Test Page",
      description: "A test page",
      content: "Page content here",
      slug: "test-page",
    };

    const request = new Request("http://localhost:3000/api/chat", {
      method: "POST",
      body: JSON.stringify({
        messages: [{ role: "user", content: "Hello" }],
        pageContext,
      }),
    });

    await POST(request);

    const { buildSystemMessage } = await import("@/lib/ai/prompt");
    expect(buildSystemMessage).toHaveBeenCalledWith(undefined, pageContext);
  });

  it("should use custom system message when provided", async () => {
    const mockModel = { id: "test-model" } as any;
    mockGetModel.mockReturnValue(mockModel);

    const mockStreamResponse = {
      toUIMessageStreamResponse: vi.fn(() => new Response()),
    } as any;
    mockStreamText.mockReturnValue(mockStreamResponse);

    const customSystem = "You are a specialized AI assistant.";

    const request = new Request("http://localhost:3000/api/chat", {
      method: "POST",
      body: JSON.stringify({
        messages: [{ role: "user", content: "Hello" }],
        system: customSystem,
      }),
    });

    await POST(request);

    const { buildSystemMessage } = await import("@/lib/ai/prompt");
    expect(buildSystemMessage).toHaveBeenCalledWith(customSystem, undefined);
  });

  it("should handle API errors gracefully", async () => {
    const mockModel = { id: "test-model" } as any;
    mockGetModel.mockReturnValue(mockModel);

    mockStreamText.mockImplementation(() => {
      throw new Error("Stream failed");
    });

    const request = new Request("http://localhost:3000/api/chat", {
      method: "POST",
      body: JSON.stringify({
        messages: [{ role: "user", content: "Hello" }],
      }),
    });

    const response = await POST(request);
    const data = await response.json();

    expect(response.status).toBe(500);
    expect(data).toEqual({ error: "Failed to process chat request" });
  });

  it("should handle getModel API key errors", async () => {
    mockGetModel.mockImplementation(() => {
      throw new Error("OpenAI API key is required");
    });

    const request = new Request("http://localhost:3000/api/chat", {
      method: "POST",
      body: JSON.stringify({
        messages: [{ role: "user", content: "Hello" }],
        provider: "openai",
      }),
    });

    const response = await POST(request);
    const data = await response.json();

    expect(response.status).toBe(400);
    expect(data.error).toBe(
      "API key is required. Please configure your API key in the settings.",
    );
  });
});