Conversation

@sheeki03
Owner

Summary

This PR adds the complete full-stack implementation of analystOS, an AI-powered research platform with a premium "Terminal Luxe" design inspired by the Bloomberg Terminal.

Backend (FastAPI)

  • Authentication: JWT with rotating refresh tokens, token family tracking for reuse detection
  • Research API: Document upload, URL scraping, AI-powered report generation, entity extraction
  • Crypto API: Real-time price data, trending coins, market overview, AI chat assistant
  • Automation API: Notion workflow integration with job queue
  • Infrastructure:
    • Job manager (ARQ for prod, asyncio for dev)
    • TTL cache with stale-while-error support
    • Sliding window rate limiter (100/min auth, 30/min unauth; see the sketch after this list)
    • Upload middleware with 50MB streaming limit
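
For illustration, a minimal sketch of the sliding-window idea behind the rate limiter; the class and names here are hypothetical, not the PR's actual implementation:

import time
from collections import defaultdict, deque

class SlidingWindowRateLimiter:
    """Allow at most `limit` requests per `window` seconds for each key (sketch)."""

    def __init__(self, limit: int, window: float = 60.0) -> None:
        self.limit = limit
        self.window = window
        self._hits: dict[str, deque[float]] = defaultdict(deque)

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        hits = self._hits[key]
        # Evict timestamps that have slid out of the window
        while hits and now - hits[0] > self.window:
            hits.popleft()
        if len(hits) >= self.limit:
            return False
        hits.append(now)
        return True

# Hypothetical limiters mirroring the limits described above
auth_limiter = SlidingWindowRateLimiter(limit=100)   # authenticated: 100/min
unauth_limiter = SlidingWindowRateLimiter(limit=30)  # unauthenticated: 30/min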

Frontend (Next.js 14)

  • Design System: Terminal Luxe aesthetic with dark theme, electric teal accents, JetBrains Mono + Inter fonts
  • Auth Flow: Complete login/logout with access/refresh token handling
  • Pages:
    • Research: Document upload, URL analysis, model selector, chat panel
    • Crypto: AI chat interface, price charts (Recharts), watchlist, trending
    • Automation: Workflow status, queue management, history
    • Settings: Profile, security, notifications, appearance, API keys
  • Components: Reusable UI library (Button, Input, Card, Dialog, Tabs, Dropdown, etc.)
  • API Proxy: Development proxy route (localhost:3000/api/* → localhost:8000/*)

Test plan

  • Login with demo/demo123 credentials
  • Verify redirect to /research after login
  • Test Research page: document upload zone, URL input, model selector, chat panel
  • Test Crypto page: AI Chat tab, Market tab with price charts, Search tab
  • Test Automation page: status cards, queue/history tabs
  • Test Settings page: profile editing, navigation between sections
  • Test logout: redirects back to login page
  • Verify protected routes redirect unauthenticated users to login

🤖 Generated with Claude Code

Backend (FastAPI):
- JWT auth with rotating refresh tokens and token family tracking
- Research router with document upload, URL scraping, report generation
- Crypto router with price data, trending, market overview, AI chat
- Automation router for Notion workflow integration
- Job manager with ARQ support (dev: asyncio, prod: Redis)
- TTL cache service with stale-while-error support (see the sketch after this list)
- Sliding window rate limiter (100/min auth, 30/min unauth)
- Upload middleware with 50MB streaming limit
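
As a rough sketch of the stale-while-error behavior named above (names and signature are illustrative, not the PR's cache service): if refreshing an expired entry raises, the cache serves the stale value instead of propagating the error.

import time
from typing import Any, Awaitable, Callable

class TTLCache:
    """Tiny TTL cache that falls back to a stale entry when a refresh fails (sketch)."""

    def __init__(self, ttl: float) -> None:
        self.ttl = ttl
        self._store: dict[str, tuple[float, Any]] = {}

    async def get(self, key: str, refresh: Callable[[], Awaitable[Any]]) -> Any:
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[0] < self.ttl:
            return entry[1]  # fresh hit
        try:
            value = await refresh()
        except Exception:
            if entry is not None:
                return entry[1]  # stale-while-error: serve the expired value
            raise
        self._store[key] = (now, value)
        return value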

Frontend (Next.js 14):
- Terminal Luxe "Bloomberg Terminal" design aesthetic
- Complete auth flow with access/refresh token handling
- Research page: document upload, URL analysis, model selector, chat
- Crypto page: AI chat interface, price charts, watchlist, trending
- Automation page: workflow status, queue, history
- Settings page: profile, security, notifications, appearance
- Reusable UI components: Button, Input, Card, Dialog, Tabs, etc.
- API proxy for development (localhost:3000/api/* -> localhost:8000/*)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

macroscopeapp bot commented Jan 16, 2026

Launch a full-stack analystOS with Terminal Luxe UI by adding Next.js dashboards (Research, Crypto, Automation), FastAPI routers (/auth, /research, /crypto, /automation), ARQ worker jobs, and upload size enforcement at 50MB

Add authenticated dashboards with chat, uploads, scraping, report generation, crypto market views, and automation; implement FastAPI app wiring, routers, caching, jobs, rate limiting, token store, and a worker; introduce a UI component library and Tailwind theme; proxy frontend API requests to the backend.

📍Where to Start

Start with the FastAPI app setup and router registration in api_main.py, then follow the job execution flow beginning in WorkerSettings in worker.py.


Macroscope summarized 4445859.

- auth_router: catch bcrypt ValueError for malformed hashes
- auth_router: deny token refresh for disabled users
- upload_job: validate filenames/file_contents length match
- upload_job: use context manager for PyMuPDF (resource cleanup)
- upload_job: proper DOCX fallback using XML extraction
- scrape_job: validate FireCrawl markdown is string
- scrape_job: use asyncio.to_thread for sync FireCrawl call (see the sketch below)
- job_models: add progress validation (0-100) to JobListItem
- api_main: fix aioredis.from_url (sync factory, not coroutine)
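
The asyncio.to_thread change above follows the usual pattern for calling a blocking SDK from async code; the function names below are placeholders, not the PR's exact scrape_job code:

import asyncio

def firecrawl_scrape_sync(url: str) -> str:
    # Stand-in for a blocking FireCrawl SDK call (hypothetical)
    ...

async def scrape(url: str) -> str:
    # Run the blocking call in a worker thread so the event loop stays responsive
    return await asyncio.to_thread(firecrawl_scrape_sync, url)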

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- generate_job: use proper markdown separator (\n\n---\n\n)
- generate_job: fail early when no sources found
- scrape_job: fallback to "unknown" for empty filenames
- auth_router: iterate snapshot of _users dict for thread safety (see the sketch below)
- auth_router: use .get("role", "user") for safe default
- jobs/__init__: document missing extract_entities_job
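
The _users snapshot fix above refers to the standard pattern of iterating over a copy of a dict so concurrent mutation cannot raise "dictionary changed size during iteration"; a hedged sketch with placeholder data:

_users = {"demo": {"role": "user", "disabled": False}}

def list_active_roles() -> list[str]:
    # list(_users.items()) takes a snapshot, so other code may add or remove users mid-loop
    return [
        record.get("role", "user")  # safe default, as in the .get("role", "user") fix above
        for _username, record in list(_users.items())
        if not record.get("disabled", False)
    ]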

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- jobs/__init__: remove extract_entities_job from docstring (not implemented)
- upload_job: return empty string for PDF when PyMuPDF missing (not garbage)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- market-overview: clamp sentiment bar width to 0-100%
- queue-list: use item_id (not id) for keys and triggers
- queue-list: add fallbacks for Badge variant/label
- price-chart: show fractional digits for values < 1
- api proxy: remove useless try-catch around body assignment
- settings: add type="button" and onClick to ColorSwatch
- types: add thumb property to TrendingCoin interface

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Comment on lines +178 to +193
async with httpx.AsyncClient(timeout=SCRAPE_TIMEOUT) as client:
    response = await client.get(url, follow_redirects=True)
    response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# Remove script and style elements
for script in soup(["script", "style", "nav", "footer", "header"]):
    script.decompose()

# Get text
text = soup.get_text(separator="\n", strip=True)

# Clean up whitespace
lines = [line.strip() for line in text.splitlines() if line.strip()]
return "\n\n".join(lines)

Suggestion: Process response content inside the async with httpx.AsyncClient block (e.g., read .text/.json) or cache it before exit; using the body after the client closes can fail due to lazy reads.

Suggested change
async with httpx.AsyncClient(timeout=SCRAPE_TIMEOUT) as client:
    response = await client.get(url, follow_redirects=True)
    response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
# Remove script and style elements
for script in soup(["script", "style", "nav", "footer", "header"]):
    script.decompose()
# Get text
text = soup.get_text(separator="\n", strip=True)
# Clean up whitespace
lines = [line.strip() for line in text.splitlines() if line.strip()]
return "\n\n".join(lines)

async with httpx.AsyncClient(timeout=SCRAPE_TIMEOUT) as client:
    response = await client.get(url, follow_redirects=True)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Remove script and style elements
    for script in soup(["script", "style", "nav", "footer", "header"]):
        script.decompose()
    # Get text
    text = soup.get_text(separator="\n", strip=True)
    # Clean up whitespace
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    return "\n\n".join(lines)

Comment on lines +202 to +210
try:
    import redis.asyncio as aioredis
    redis_url = os.getenv("REDIS_URL", "redis://localhost:6379")
    redis_client = aioredis.from_url(redis_url)  # Sync factory, returns client
    await redis_client.ping()
    await redis_client.aclose()
    services["redis"] = "healthy"
except Exception as e:
    services["redis"] = f"unhealthy: {e}"

redis_client isn’t closed if await redis_client.ping() throws. Consider a try/finally so the client is always closed.

Suggested change
try:
    import redis.asyncio as aioredis
    redis_url = os.getenv("REDIS_URL", "redis://localhost:6379")
    redis_client = aioredis.from_url(redis_url)  # Sync factory, returns client
    await redis_client.ping()
    await redis_client.aclose()
    services["redis"] = "healthy"
except Exception as e:
    services["redis"] = f"unhealthy: {e}"

# Check Redis
redis_client = None
try:
    import redis.asyncio as aioredis
    redis_url = os.getenv("REDIS_URL", "redis://localhost:6379")
    redis_client = aioredis.from_url(redis_url)  # Sync factory, returns client
    await redis_client.ping()
    services["redis"] = "healthy"
except Exception as e:
    services["redis"] = f"unhealthy: {e}"
finally:
    if redis_client is not None:
        await redis_client.aclose()

Comment on lines +126 to +132
for file_path in user_dir.glob(f"{source_id}*.txt"):
    content = file_path.read_text()
    sources.append({
        "source_id": source_id,
        "filename": file_path.name,
        "content": content,
    })

_load_sources only matches {source_id}*.txt, but generated reports are saved as {job_id}.md, so completed generate jobs aren’t found. Consider also matching .md files (or document that sources must reference non-generate jobs).

Suggested change
for file_path in user_dir.glob(f"{source_id}*.txt"):
    content = file_path.read_text()
    sources.append({
        "source_id": source_id,
        "filename": file_path.name,
        "content": content,
    })

# Look for files matching this source ID
for pattern in (f"{source_id}*.txt", f"{source_id}*.md"):
    for file_path in user_dir.glob(pattern):
        content = file_path.read_text()
        sources.append({
            "source_id": source_id,
            "filename": file_path.name,
            "content": content,
        })
