…cookie Secure flag, CSP headers

- Add Origin header validation on WebSocket endpoint to prevent cross-site hijacking (High)
- Add slowapi rate limiting on auth endpoints: login, register, me/token, resend-verification (Medium)
- Add Secure flag to frontend-set mfbt_session cookie on HTTPS (Low)
- Add Content-Security-Policy, X-Frame-Options, X-Content-Type-Options, Referrer-Policy, Permissions-Policy headers to frontend (Next.js) and backend (FastAPI) (Low)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
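The header set named in the last bullet can be centralized in one mapping and merged into each response. The sketch below is illustrative only: the directive values are placeholders, not the policy this PR actually ships.

```python
# Illustrative security-header set; directive values are placeholders,
# not this PR's actual policy.
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "strict-origin-when-cross-origin",
    "Permissions-Policy": "camera=(), microphone=(), geolocation=()",
}

def with_security_headers(response_headers: dict) -> dict:
    """Return a copy of the response headers with the security set applied.

    setdefault keeps any header a route has already set explicitly.
    """
    merged = dict(response_headers)
    for name, value in SECURITY_HEADERS.items():
        merged.setdefault(name, value)
    return merged
```

In FastAPI or Next.js this would live in a middleware so every response passes through it, which is what the commit describes.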
Greptile Summary

This PR addresses four VAPT-identified vulnerabilities: Cross-Site WebSocket Hijacking (CSWSH), missing rate limiting on auth endpoints, an unset Secure flag on the session cookie, and missing security headers.

Key observations:
Confidence Score: 3/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant Browser
    participant NextJS as Next.js Frontend
    participant Nginx as Reverse Proxy
    participant FastAPI as FastAPI Backend
    participant WS as WebSocket Handler
    Note over Browser,NextJS: CSP + security headers on every page response
    Browser->>Nginx: POST /api/v1/auth/login
    Nginx->>FastAPI: Forwards real client IP via proxy header
    FastAPI->>FastAPI: ProxyHeadersMiddleware sets real client IP
    FastAPI->>FastAPI: SlowAPI checks rate bucket (10 per min per IP)
    alt Limit exceeded
        FastAPI-->>Browser: 429 Too Many Requests
    else Within limit
        FastAPI-->>Browser: 200 + JWT token
        Browser->>Browser: Set mfbt_session cookie (Secure flag on HTTPS)
    end
    Browser->>Nginx: WebSocket upgrade to /api/v1/ws/orgs/id/jobs
    Nginx->>FastAPI: WebSocket upgrade request
    FastAPI->>WS: Dispatch to websocket_jobs_endpoint
    WS->>WS: Read Origin header
    alt Origin absent
        WS->>WS: Allow - non-browser client, JWT auth still required
    else Origin present but not in allowed set
        WS-->>Browser: Close 1008 - CSWSH blocked
    else Origin allowed
        WS->>WS: Validate JWT token
        WS-->>Browser: WebSocket connection accepted
    end
```
- Remove unused imports (JSONResponse, urlparse)
- Fix import ordering (rate_limit)
- Fix indentation in client.ts

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add ProxyHeadersMiddleware so rate limiting uses real client IP behind reverse proxy
- Restrict CSP connect-src WebSocket to specific backend host instead of wildcard ws:/wss:
- Only include 'unsafe-eval' in CSP script-src during development
- Document intentional absent-Origin allowance for MCP/CLI WebSocket clients

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…onfigurable

- Only allow localhost CORS/WebSocket origins in development environment
- Add TRUSTED_PROXY_IPS setting to lock down ProxyHeadersMiddleware in production

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```python
from slowapi import Limiter
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
```
In-memory rate limiting broken under multiple workers
The Limiter is constructed here without a shared backing store, so it defaults to an in-process memory counter. In a production deployment with multiple uvicorn workers (e.g. --workers 4), each process holds its own independent counter. A client can therefore make limit × worker_count attempts per window — e.g. 5 register calls × 4 workers = 20 effective attempts per minute — defeating the purpose of the limit entirely.
The app already has a Redis connection configured in Settings. SlowAPI's Limiter can be pointed at Redis so that all workers share a single rate-limit bucket per client. Without a shared backing store, the rate limiting added in this PR will not function correctly in any multi-worker production environment.
…cookie Secure flag, CSP headers
What
Why
How
Can this PR break any existing features? If yes, please list possible items. If no, please explain why. (PS: Admins do not merge the PR without this section filled.)
Database Migrations
Env Config
Relevant Docs
Related Issues or PRs
Dependencies Versions
Notes on Testing
Screenshots
Checklist
I have read and understood the Contribution Guidelines.