Built entirely by AI — coded, tested, and debugged by AI agents. Human-directed.
An iOS reader for EPUB, PDF, TXT, and Markdown — built entirely by AI coding agents, with Swift 6, SwiftUI, and SwiftData.
VReader is a modern reading app designed for iPhone and iPad, built entirely by AI coding agents (Claude Code + Codex CLI) with human direction on requirements and testing. It provides a dual-mode reading experience (native UIKit + unified TextKit 2 reflow) across multiple document formats with annotations, full-text search, AI assistant, TTS, book source scraping, and WebDAV backup.
- Multi-format — EPUB, PDF, TXT, Markdown in a single app
- Dual-mode engine — Native (UIKit bridges) + Unified (TextKit 2 reflow) rendering
- Reading position — Auto-saves scroll position, survives app kills and relaunches
- CJK encoding — Auto-detect GBK, Big5, Shift-JIS, EUC-KR (8KB sample-based)
- Large file support — Chunked UITableView for TXT files >500K characters
- Paginated mode — CSS columns (EPUB), TextKit containers (TXT/MD), PDFKit pages
- Page turn animations — Slide, cover-flip, or instant
- Auto page turning — Timer-based advancement with configurable interval
- Configurable tap zones — Left/center/right zones mapped to customizable actions
- Bookmarks, highlights, notes — Full CRUD for all formats (TXT/MD/PDF/EPUB)
- EPUB highlights — CSS Highlight API with JS bridge + buffered delivery
- PDF highlights — PDFAnnotation-based with selection detection
- TXT/MD highlights — NSAttributedString with persistent rendering
- Export/import — Markdown + JSON export, VReader JSON round-trip import
- Full-text search — SQLite FTS5 with CJK tokenization, persistent index
- Reading progress bar — Draggable scrubber (continuous, page-based, chapter-based)
- Table of contents — EPUB nav/NCX, PDF outline, TXT auto-detection (25 Legado rules), MD headings
- Dictionary — System dictionary lookup + AI translation on text selection
- Summarization — Section and chapter summaries via OpenAI-compatible API
- Chat — Multi-turn conversation with book context
- Translation — Bilingual view (9 languages)
- General chat — AI chat without book context
- Grid/list view — Persistent sort order and view mode
- Collections — Tags, series, custom groups
- Custom covers — Set from photo library
- Context menu — Info, share, set cover, delete
- OPDS catalog — Browse and download from OPDS 1.2 feeds
- Book sources — Legado-compatible rule engine for web novel scraping
- TTS — System (AVSpeechSynthesizer) + cloud HTTP TTS with playback controls
- TTS sentence highlight — NLTokenizer-based sentence detection synced to speech position
- TTS auto-scroll — Text view follows speech position in real-time
- Simp/Trad Chinese — Toggle conversion via ICU (live re-apply without reloading)
- Content replacement — Regex rules for text cleanup (live re-apply via source text storage)
- Reading time tracking — Per-book session stats and speed calculations
- iCloud Sync (foundation) — CloudKit sync engine, record mapper (8 types), device identity, change tokens, durable tombstones, settings bridge (NSUbiquitousKeyValueStore)
- WebDAV backup — Archive to any WebDAV server (Nutstore compatible)
- Per-book settings — Font, theme, spacing overrides per book (JSON-persisted)
- Theme backgrounds — Custom background images via PhotosPicker with per-theme opacity
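The CJK encoding auto-detection listed above can be sketched as a sample-based heuristic. This is an illustration only, not VReader's actual implementation: the candidate ordering, the `detectEncoding` name, and the use of `CFStringEncodings` to reach GBK/Big5/EUC-KR are assumptions.

```swift
import Foundation

/// Sketch: guess a text file's encoding from its first 8 KB.
/// UTF-8 is tried first because its strict byte grammar makes a clean
/// decode a strong positive signal; legacy CJK encodings follow.
func detectEncoding(of data: Data, sampleSize: Int = 8 * 1024) -> String.Encoding? {
    // Note: a real implementation would trim the sample to a safe byte
    // boundary so a multi-byte character cut in half doesn't fail the decode.
    let sample = data.prefix(sampleSize)
    func cf(_ encoding: CFStringEncodings) -> String.Encoding {
        String.Encoding(rawValue: CFStringConvertEncodingToNSStringEncoding(
            CFStringEncoding(encoding.rawValue)))
    }
    let candidates: [String.Encoding] = [
        .utf8,
        cf(.GB_18030_2000),  // superset of GBK
        cf(.big5),
        .shiftJIS,
        cf(.EUC_KR),
    ]
    for encoding in candidates where String(data: sample, encoding: encoding) != nil {
        return encoding
    }
    return nil
}
```

In practice a first-decodable-wins scan like this mislabels some legacy-encoded files (many GBK byte streams also decode as Big5), so a production detector would also score character-frequency plausibility.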
| Component | Technology |
|---|---|
| UI | SwiftUI |
| Persistence | SwiftData (SchemaV4) |
| EPUB | WKWebView bridge with CSS theme injection + JS highlight API |
| PDF | PDFKit + PDFAnnotation for highlights |
| TXT | TextKit 1 (UITextView) + chunked UITableView |
| Markdown | NSAttributedString rendering via MDParser |
| Search | SQLite FTS5 with CJK tokenization |
| AI | OpenAI-compatible API (summarize, chat, translate) |
| TTS | AVSpeechSynthesizer + HTTP cloud TTS |
| Backup | WebDAV client + iCloud Sync foundation (CloudKit) |
| Encoding | ICU + heuristic detection (UTF-8/GBK/Big5/Shift-JIS) |
| Concurrency | Swift 6 strict concurrency |
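As a flavor of the FTS5 search layer, the sketch below builds an in-memory index with SQLite's built-in `trigram` tokenizer (SQLite 3.34+), which indexes every three-character window and so matches CJK text without word segmentation. The table name, column, and tokenizer choice are assumptions for illustration, not VReader's actual schema or tokenizer.

```swift
import Foundation
import SQLite3

var db: OpaquePointer?
precondition(sqlite3_open(":memory:", &db) == SQLITE_OK)
defer { sqlite3_close(db) }

// FTS5 virtual table with the trigram tokenizer for substring-style CJK matching.
sqlite3_exec(db, "CREATE VIRTUAL TABLE pages USING fts5(body, tokenize='trigram')", nil, nil, nil)
sqlite3_exec(db, "INSERT INTO pages(body) VALUES ('这是一个中文全文搜索的例子')", nil, nil, nil)

var results: [String] = []
var stmt: OpaquePointer?
// Trigram queries must be at least three characters long; '全文搜索' qualifies.
sqlite3_prepare_v2(db, "SELECT body FROM pages WHERE pages MATCH '全文搜索'", -1, &stmt, nil)
while sqlite3_step(stmt) == SQLITE_ROW {
    if let text = sqlite3_column_text(stmt, 0) {
        results.append(String(cString: text))
    }
}
sqlite3_finalize(stmt)
```

A persistent index (as the feature list describes) would open an on-disk database instead of `:memory:` and rebuild rows only when a book's extracted text changes.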
- iOS 17.0+
- Xcode 16+
- XcodeGen
```sh
# Generate the Xcode project
xcodegen generate

# Open in Xcode
open vreader.xcodeproj
```

Then select a simulator or device and run.
See docs/architecture.md for the full architecture document.
```
vreader/
├── App/              # App entry point, SwiftData schema init
├── Models/           # SwiftData models, DocumentFingerprint, Locator
├── ViewModels/       # Library and per-format reader view models
├── Views/
│   ├── Reader/       # Reader container, format bridges, chrome overlay
│   ├── Bookmarks/    # BookmarkListView, TOCListView
│   ├── Annotations/  # HighlightListView, AnnotationListView
│   └── Settings/     # ReaderSettingsPanel, AI/TTS/WebDAV settings
├── Services/
│   ├── TXT/, EPUB/   # Format-specific parsing and loading
│   ├── Search/       # FTS5 indexing, text extraction
│   ├── AI/, TTS/     # AI service, TTS providers
│   ├── Backup/       # WebDAV client, BackupProvider
│   ├── Sync/         # iCloud sync engine, CloudKit mapper, tombstones
│   ├── TextMapping/  # Simp/Trad, replacement rules
│   └── Locator/      # Reading position (Readium-inspired)
vreaderTests/         # Unit tests (3200+ test cases)
vreaderUITests/       # UI tests (XCUITest)
```
- TextKit 1 for TXT rendering — UITextView with `NSLayoutManager` for reliable offset-to-scroll mapping. TextKit 2 has better performance but lacks the `charOffset ↔ scrollOffset` APIs needed for position persistence.
- Chunked rendering for large files — Files over 500K UTF-16 code units use a UITableView where each cell renders one ~16K chunk. Only visible cells build attributed strings (LRU cache of 20 chunks).
- Two-phase scroll restore — Position restore uses a Phase 1 (t+0.15s) + Phase 2 (t+0.8s) pattern to handle TextKit 1 compatibility mode relayout storms that reset `contentOffset`.
- `@State` for one-shot values — Rapidly-mutating `@Observable` properties are never read in SwiftUI body to avoid observation feedback loops. Position restore uses `@State` captured once after `open()`.
- Background task protection — `UIApplication.beginBackgroundTask` wraps all critical saves (`close()`, `onBackground()`) to prevent data loss when iOS suspends the process.
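The "LRU cache of 20 chunks" in the chunked-rendering decision above can be sketched as a small generic cache. This is a minimal illustration, not VReader's actual code; the `LRUCache` name and array-based recency tracking are assumptions (a production version would use an O(1) linked-list or `NSCache`-style structure).

```swift
import Foundation

/// Minimal LRU cache sketch: evicts the least-recently-used entry once full.
final class LRUCache<Key: Hashable, Value> {
    private let capacity: Int
    private var storage: [Key: Value] = [:]
    private var order: [Key] = []   // least-recently-used first

    init(capacity: Int) { self.capacity = capacity }

    func value(for key: Key) -> Value? {
        guard let value = storage[key] else { return nil }
        // Reading an entry promotes it to most-recently-used.
        order.removeAll { $0 == key }
        order.append(key)
        return value
    }

    func insert(_ value: Value, for key: Key) {
        if storage[key] == nil, storage.count >= capacity {
            // Evict the least-recently-used entry to make room.
            let evicted = order.removeFirst()
            storage[evicted] = nil
        }
        order.removeAll { $0 == key }
        order.append(key)
        storage[key] = value
    }
}
```

In the chunked TXT path, `Key` would be a chunk index and `Value` a rendered `NSAttributedString`; visible cells query the cache before rebuilding, so scrolling back through recently seen chunks avoids re-parsing.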
All code, tests, bug fixes, and documentation are produced by AI coding agents. The human role is directing requirements, reporting bugs, and verifying on device.
| Tool | Role |
|---|---|
| Claude Code | Primary coding agent — implementation, editing, code review, fixes |
| Codex CLI | Architecture review, auditing, autonomous implementation in sandbox |
The development process follows a gated, multi-agent pipeline:
- Plan — Features are designed as detailed implementation plans with work items, acceptance criteria, and test requirements (`docs/codex-plans/`)
- Review — Plans go through multi-round architecture review via Codex (consistency, completeness, feasibility, ambiguity, risk)
- Implement — Work items are implemented by the implementer agent following TDD (RED-GREEN-REFACTOR)
- Audit — Code is audited across 9 dimensions (correctness, security, concurrency, performance, etc.)
- Fix — Audit findings are fixed and verified in iterative loops until clean
- Commit — Changes are committed only on explicit request after passing all gates
Shared rules for all AI agents live in AGENTS.md:
- Test-first is mandatory — Write a failing test before implementing any new behavior
- Research before building — Search for established patterns and proven solutions before inventing
- Edge cases are not optional — Brainstorm and test: empty input, null values, Unicode/CJK, concurrent access, network failures
- Keep files under ~300 lines — Split proactively to maintain readability
- Keep diffs focused — No drive-by refactors; only change what's needed
- `.claude/rules/` — Rule files for TDD, UI consistency, design tokens, keyboard shortcuts, version bumping
- `.claude/skills/` — Custom skill definitions (plan-audit, etc.)
- `CLAUDE.md` — Claude Code project instructions
- `AGENTS.md` — Shared instructions for all AI coding agents
Active development. See features (38 done) and bugs (87 fixed) for current state.
MIT