feat: Full-stack analystOS with Terminal Luxe design #3
Conversation
Backend (FastAPI):
- JWT auth with rotating refresh tokens and token family tracking
- Research router with document upload, URL scraping, report generation
- Crypto router with price data, trending, market overview, AI chat
- Automation router for Notion workflow integration
- Job manager with ARQ support (dev: asyncio, prod: Redis)
- TTL cache service with stale-while-error support
- Sliding window rate limiter (100/min auth, 30/min unauth; see sketch after this list)
- Upload middleware with 50MB streaming limit

Frontend (Next.js 14):
- Terminal Luxe "Bloomberg Terminal" design aesthetic
- Complete auth flow with access/refresh token handling
- Research page: document upload, URL analysis, model selector, chat
- Crypto page: AI chat interface, price charts, watchlist, trending
- Automation page: workflow status, queue, history
- Settings page: profile, security, notifications, appearance
- Reusable UI components: Button, Input, Card, Dialog, Tabs, etc.
- API proxy for development (localhost:3000/api/* -> localhost:8000/*)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
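For orientation, a sliding-window limiter like the one listed above can be built from a timestamped deque per key. This is a minimal sketch under that assumption, not the PR's actual middleware; the class name and the in-memory store are illustrative only.

```python
import time
from collections import defaultdict, deque

class SlidingWindowRateLimiter:
    """Minimal in-memory sliding-window limiter (illustrative, not the PR's code)."""

    def __init__(self, limit: int, window_seconds: float = 60.0):
        self.limit = limit
        self.window = window_seconds
        self._hits: dict[str, deque] = defaultdict(deque)

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        hits = self._hits[key]
        # Drop timestamps that have slid out of the window.
        while hits and now - hits[0] > self.window:
            hits.popleft()
        if len(hits) >= self.limit:
            return False
        hits.append(now)
        return True

# e.g. 100/min for authenticated users, 30/min for unauthenticated
auth_limiter = SlidingWindowRateLimiter(limit=100)
unauth_limiter = SlidingWindowRateLimiter(limit=30)
```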
- auth_router: catch bcrypt ValueError for malformed hashes
- auth_router: deny token refresh for disabled users
- upload_job: validate filenames/file_contents length match
- upload_job: use context manager for PyMuPDF (resource cleanup)
- upload_job: proper DOCX fallback using XML extraction
- scrape_job: validate FireCrawl markdown is string
- scrape_job: use asyncio.to_thread for sync FireCrawl call (see sketch after this list)
- job_models: add progress validation (0-100) to JobListItem
- api_main: fix aioredis.from_url (sync factory, not coroutine)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
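The `asyncio.to_thread` pattern mentioned above looks roughly like this. The `firecrawl_scrape` function is a hypothetical stand-in for the sync SDK call, not the PR's actual code:

```python
import asyncio

def firecrawl_scrape(url: str) -> object:
    """Hypothetical stand-in for the synchronous FireCrawl SDK call."""
    raise NotImplementedError

async def scrape_markdown(url: str) -> str:
    # Off-load the blocking SDK call to a worker thread so the event loop stays responsive.
    result = await asyncio.to_thread(firecrawl_scrape, url)
    # Mirror the "validate FireCrawl markdown is string" fix above.
    if not isinstance(result, str):
        raise ValueError(f"expected markdown string, got {type(result).__name__}")
    return result
```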
- generate_job: use proper markdown separator (\n\n---\n\n)
- generate_job: fail early when no sources found
- scrape_job: fallback to "unknown" for empty filenames
- auth_router: iterate snapshot of _users dict for thread safety (see sketch after this list)
- auth_router: use .get("role", "user") for safe default
- jobs/__init__: document missing extract_entities_job
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
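The two auth_router items above combine into something like the following. The `_users` store and the user-dict shape are assumptions inferred from the commit message, not code from the PR:

```python
_users: dict = {}  # hypothetical in-memory user store

def list_active_users() -> list:
    active = []
    # Iterate over a snapshot so concurrent inserts/deletes can't break iteration.
    for user_id, user in list(_users.items()):
        if user.get("disabled"):
            continue
        active.append({
            "id": user_id,
            # .get with a default keeps legacy records without a role field working.
            "role": user.get("role", "user"),
        })
    return active
```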
- jobs/__init__: remove extract_entities_job from docstring (not implemented)
- upload_job: return empty string for PDF when PyMuPDF missing (not garbage)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
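A guarded import of the kind the second item describes might look like this; the function name and the byte-stream handling are assumptions, not the PR's code:

```python
try:
    import fitz  # PyMuPDF
except ImportError:
    fitz = None

def extract_pdf_text(data: bytes) -> str:
    """Hypothetical extractor: returns empty text, not garbage, when PyMuPDF is absent."""
    if fitz is None:
        return ""
    # Context manager guarantees the document is closed (the resource-cleanup fix above).
    with fitz.open(stream=data, filetype="pdf") as doc:
        return "\n".join(page.get_text() for page in doc)
```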
- market-overview: clamp sentiment bar width to 0-100% (see sketch after this list)
- queue-list: use item_id (not id) for keys and triggers
- queue-list: add fallbacks for Badge variant/label
- price-chart: show fractional digits for values < 1
- api proxy: remove useless try-catch around body assignment
- settings: add type="button" and onClick to ColorSwatch
- types: add thumb property to TrendingCoin interface

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
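The clamp and fractional-digit fixes are small enough to restate. The actual components are TypeScript; this Python sketch with illustrative function names only shows the logic:

```python
def clamp_percent(value: float) -> float:
    # Keep the sentiment bar width inside the renderable 0-100% range.
    return max(0.0, min(100.0, value))

def format_price(value: float) -> str:
    # Sub-dollar prices need more fractional digits to stay meaningful (e.g. 0.000123).
    digits = 6 if value < 1 else 2
    return f"{value:,.{digits}f}"
```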
```python
async with httpx.AsyncClient(timeout=SCRAPE_TIMEOUT) as client:
    response = await client.get(url, follow_redirects=True)
    response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# Remove script and style elements
for script in soup(["script", "style", "nav", "footer", "header"]):
    script.decompose()

# Get text
text = soup.get_text(separator="\n", strip=True)

# Clean up whitespace
lines = [line.strip() for line in text.splitlines() if line.strip()]
return "\n\n".join(lines)
```
Suggestion: Process response content inside the async with httpx.AsyncClient block (e.g., read .text/.json) or cache it before exit; using the body after the client closes can fail due to lazy reads.
Suggested change:

```python
async with httpx.AsyncClient(timeout=SCRAPE_TIMEOUT) as client:
    response = await client.get(url, follow_redirects=True)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    # Remove script and style elements
    for script in soup(["script", "style", "nav", "footer", "header"]):
        script.decompose()

    # Get text
    text = soup.get_text(separator="\n", strip=True)

    # Clean up whitespace
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    return "\n\n".join(lines)
```
```python
try:
    import redis.asyncio as aioredis

    redis_url = os.getenv("REDIS_URL", "redis://localhost:6379")
    redis_client = aioredis.from_url(redis_url)  # Sync factory, returns client
    await redis_client.ping()
    await redis_client.aclose()
    services["redis"] = "healthy"
except Exception as e:
    services["redis"] = f"unhealthy: {e}"
```
redis_client isn’t closed if await redis_client.ping() throws. Consider a try/finally so the client is always closed.
Suggested change:

```python
# Check Redis
redis_client = None
try:
    import redis.asyncio as aioredis

    redis_url = os.getenv("REDIS_URL", "redis://localhost:6379")
    redis_client = aioredis.from_url(redis_url)  # Sync factory, returns client
    await redis_client.ping()
    services["redis"] = "healthy"
except Exception as e:
    services["redis"] = f"unhealthy: {e}"
finally:
    if redis_client is not None:
        await redis_client.aclose()
```
```python
for file_path in user_dir.glob(f"{source_id}*.txt"):
    content = file_path.read_text()
    sources.append({
        "source_id": source_id,
        "filename": file_path.name,
        "content": content,
    })
```
_load_sources only matches {source_id}*.txt, but generated reports are saved as {job_id}.md, so completed generate jobs aren’t found. Consider also matching .md files (or document that sources must reference non-generate jobs).
Suggested change:

```python
# Look for files matching this source ID
for pattern in (f"{source_id}*.txt", f"{source_id}*.md"):
    for file_path in user_dir.glob(pattern):
        content = file_path.read_text()
        sources.append({
            "source_id": source_id,
            "filename": file_path.name,
            "content": content,
        })
```
Summary
This PR adds the complete full-stack implementation of analystOS, an AI-powered research platform with a premium "Terminal Luxe" design inspired by the Bloomberg Terminal.
Backend (FastAPI)
Frontend (Next.js 14)
Test plan
🤖 Generated with Claude Code