feat: route document uploads through ChittyStorage #85
| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| ✅ Deployment successful! View logs | chittycommand-ui | 66b8da6 | Apr 08 2026, 06:55 AM |
📝 Walkthrough

Updated the document upload flow to use the ChittyStorage service via MCP JSON-RPC instead of direct evidence integration.
**Sequence Diagram**

```mermaid
sequenceDiagram
    participant Client
    participant Worker as /upload Handler
    participant ChittyStorage as SVC_STORAGE<br/>(ChittyStorage)
    participant R2 as R2 Storage<br/>(Fallback)
    participant DB as Database<br/>(cc_documents)
    Client->>Worker: POST /upload (file, entity_slug, origin)
    activate Worker
    Worker->>Worker: Compute SHA-256 hash<br/>Derive chittyId = scan-{prefix}
    alt SVC_STORAGE Available
        Worker->>ChittyStorage: MCP storage_ingest call<br/>(chitty_id, filename, content,<br/>mime_type, origin, entity_slugs)
        activate ChittyStorage
        ChittyStorage-->>Worker: {r2_key, ...}
        deactivate ChittyStorage
        Worker->>DB: INSERT cc_documents<br/>(r2_key, processing_status='synced',<br/>metadata={content_hash, chitty_id})
        activate DB
        DB-->>Worker: Success
        deactivate DB
    else SVC_STORAGE Unavailable
        Worker->>R2: Store file<br/>(r2Key = sha256/{hash})
        activate R2
        R2-->>Worker: Success
        deactivate R2
        Worker->>DB: INSERT cc_documents<br/>(r2_key=sha256/{hash},<br/>processing_status='pending',<br/>source='manual')
        activate DB
        DB-->>Worker: Success
        deactivate DB
    end
    Worker-->>Client: Upload response<br/>(status, content_hash, chitty_id)
    deactivate Worker
```
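The ChittyStorage branch above can be sketched as a plain JSON-RPC call. Only the endpoint (`https://internal/mcp`), the tool name `storage_ingest`, and the parameter names come from this PR; the envelope shape (`tools/call`, `arguments` nesting) and the helper itself are assumptions:

```typescript
// Hypothetical helper illustrating the storage_ingest call from the diagram.
// The JSON-RPC envelope (method 'tools/call', 'arguments' nesting) is an
// assumption; only the endpoint and parameter names come from the PR.
interface IngestParams {
  chitty_id: string;      // derived locally as scan-{hash prefix}
  filename: string;
  content: string;        // base64-encoded file bytes
  mime_type: string;
  origin: string;
  entity_slugs: string[];
}

async function ingestViaStorage(
  svc: { fetch: (url: string, init?: any) => Promise<any> },
  params: IngestParams
): Promise<any> {
  const res = await svc.fetch('https://internal/mcp', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      jsonrpc: '2.0',
      id: 1,
      method: 'tools/call',
      params: { name: 'storage_ingest', arguments: params },
    }),
  });
  if (!res.ok) throw new Error(`storage_ingest failed: HTTP ${res.status}`);
  return res.json(); // diagram shows {r2_key, ...} on success
}
```

In the worker, `svc` would be the `SVC_STORAGE` service binding; anything matching the `fetch` shape works for testing.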
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed
- Add SVC_STORAGE service binding to chittystorage worker
- Upload route now calls storage_ingest MCP tool (content-addressed, entity-linked)
- Fallback to direct R2 if SVC_STORAGE unavailable (legacy path)
- Batch upload also routes through ChittyStorage
- Removes evidenceClient fire-and-forget (ChittyStorage handles pipeline)
- Adds entity_slug and origin params to upload form
- Content hash computed locally for chitty_id until ChittyIdentity integration

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from b765879 to 66b8da6.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b76587923d
```sql
INSERT INTO cc_documents (doc_type, source, filename, r2_key, processing_status)
VALUES ('upload', 'manual', ${safeName}, ${r2Key}, 'pending')
```
**Preserve `linked_dispute_id` when inserting uploads**
The `/upload` path no longer persists `linked_dispute_id`, so documents uploaded from dispute context are silently detached from their dispute. The UI still sends this field (`ui/src/lib/api.ts`), and dispute detail fetches documents with `WHERE linked_dispute_id = :id` (`src/routes/disputes.ts`), so these uploads stop appearing in dispute timelines after this change.
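One minimal way to address this is to thread the form's `linked_dispute_id` value through to the insert. This is a sketch only: `buildUploadInsert` is a hypothetical helper (the real handler uses a tagged-template SQL call), and the column names follow the review comment:

```typescript
// Sketch only: builds the parameterized INSERT so linked_dispute_id is
// persisted rather than dropped. Column and field names follow the review
// comment; buildUploadInsert itself is a hypothetical helper.
function buildUploadInsert(
  safeName: string,
  r2Key: string,
  linkedDisputeId: string | null
): { sql: string; params: (string | null)[] } {
  return {
    sql: `INSERT INTO cc_documents
            (doc_type, source, filename, r2_key, processing_status, linked_dispute_id)
          VALUES ('upload', 'manual', ?, ?, 'pending', ?)`,
    params: [safeName, r2Key, linkedDisputeId],
  };
}
```

Passing `null` when the form omits the field keeps non-dispute uploads unchanged.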
```ts
await c.env.SVC_STORAGE.fetch('https://internal/mcp', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
```
**Reject batch success when storage ingest returns HTTP error**
This batch branch awaits `fetch` but never checks `response.ok` (or the MCP error payload), so a 4xx/5xx from ChittyStorage still falls through and is reported as `status: 'ok'`. That creates silent data loss: clients believe uploads succeeded even though ingest was rejected.
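A minimal guard could look like the following, assuming the MCP endpoint reports failures either via HTTP status or an `error` object in the JSON body (that error shape is an assumption, not confirmed by the PR):

```typescript
// Sketch: surface HTTP errors and JSON-RPC error payloads as exceptions so
// the batch loop can record status 'error' instead of a false 'ok'.
// The { error: { message } } shape is an assumed MCP error convention.
async function checkIngestResponse(res: {
  ok: boolean;
  status: number;
  json: () => Promise<any>;
}): Promise<any> {
  if (!res.ok) {
    throw new Error(`storage_ingest failed: HTTP ${res.status}`);
  }
  const payload = await res.json();
  if (payload.error) {
    throw new Error(`storage_ingest error: ${payload.error.message ?? 'unknown'}`);
  }
  return payload.result;
}
```

Each file in the batch loop would then wrap the ingest in try/catch and push `{ filename, status: 'error' }` on failure.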
```ts
const contentHash = Array.from(new Uint8Array(hashBuf)).map(b => b.toString(16).padStart(2, '0')).join('');

// …

if (c.env.SVC_STORAGE) {
  const content_base64 = btoa(String.fromCharCode(...bytes));
```
**Avoid spreading file bytes into `fromCharCode`**
`String.fromCharCode(...bytes)` passes one argument per byte, which exceeds JS argument limits for ordinary document sizes and throws a `RangeError` before ingest. In this batch path that means otherwise valid files fail whenever `SVC_STORAGE` is enabled; use chunked base64 encoding that does not spread the whole array into function arguments.
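A chunked encoder is one common workaround; the chunk size below is arbitrary, anything comfortably under the engine's argument-count limit works:

```typescript
// Encode bytes to base64 without spreading the whole array into
// String.fromCharCode, which throws RangeError for large files.
function toBase64(bytes: Uint8Array): string {
  let binary = '';
  const CHUNK = 0x8000; // 32 KiB per call, well under argument-count limits
  for (let i = 0; i < bytes.length; i += CHUNK) {
    binary += String.fromCharCode(...bytes.subarray(i, i + CHUNK));
  }
  return btoa(binary);
}
```

`btoa` is available in the Workers runtime, so this drops into the existing `content_base64` line directly.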
```ts
      customMetadata: { filename: safeName, source: 'chittycommand' },
    });
  }
  results.push({ filename: safeName, status: 'ok', content_hash: contentHash });
```
**Insert batch-ingested files into `cc_documents`**
After a successful batch ingest, the handler only appends to `results` and never writes a `cc_documents` row. Since the document list and gap endpoints read from `cc_documents`, batch-uploaded files become invisible to dashboard/gap workflows despite returning success.
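Mirroring the single-upload path, the success branch could also write the row. The column set below follows the sequence diagram's ChittyStorage branch (`processing_status='synced'`, hash and chitty_id in metadata); `buildBatchDocumentInsert` is a hypothetical helper, not code from the PR:

```typescript
// Sketch: row to insert after a successful batch ingest so cc_documents
// readers (document list, gap endpoints) can see the file.
function buildBatchDocumentInsert(
  safeName: string,
  r2Key: string,
  contentHash: string,
  chittyId: string
): { sql: string; params: string[] } {
  return {
    sql: `INSERT INTO cc_documents
            (doc_type, source, filename, r2_key, processing_status, metadata)
          VALUES ('upload', 'manual', ?, ?, 'synced', ?)`,
    params: [safeName, r2Key, JSON.stringify({ content_hash: contentHash, chitty_id: chittyId })],
  };
}
```

The batch loop would execute this right before pushing the `status: 'ok'` entry, so a row exists for every file reported as successful.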
**Summary by CodeRabbit**

- **Breaking Changes**
- **Improvements**
  - Added `entity_slug` and `origin` parameters to upload requests