This repository contains a deployable full-stack app for analyzing interview transcripts.
- FastAPI backend at `backend/`
- React + Vite frontend at `frontend/`
- Strict JSON prompt for topics, profile, and summary
- Provider support for Gemini, Groq, and OpenAI

Repository layout:

- `backend/` - FastAPI service, prompt logic, and LLM provider adapters
- `backend/api/index.py` - Vercel serverless entrypoint for the FastAPI app
- `backend/vercel.json` - backend Vercel routing/build config
- `frontend/` - Vite app with transcript upload, loading states, and polished output UI
- `frontend/vercel.json` - frontend SPA rewrite config for Vercel
- `summarizer.py` - standalone CLI from the earlier prompt iteration work
- `prompt_iterations.md` - prompt iteration log
Create provider keys in the appropriate .env files:
`backend/.env`:

Create `backend/.env` from the `backend/.env.example` template. Do NOT commit your real `.env` file; an actual `.env` was accidentally committed earlier and has since been removed from the repository.
Example (copy into `backend/.env` and fill in keys):

```env
LLM_PROVIDER=gemini
GEMINI_API_KEY=your_key_here
GEMINI_MODEL=gemini-2.0-flash
GROQ_API_KEY=
GROQ_MODEL=llama-3.1-8b-instant
OPENAI_API_KEY=
OPENAI_MODEL=gpt-4o-mini
ALLOWED_ORIGINS=http://localhost:5173
```

`frontend/.env`:

```env
VITE_API_BASE_URL=http://localhost:8000
VITE_LLM_PROVIDER=gemini
```

Backend:
```sh
cd backend
pip install -r requirements.txt
uvicorn app.main:app --reload --port 8000
```

Frontend:

```sh
cd frontend
npm install
npm run dev
```

Open http://localhost:5173 and submit a transcript.
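Before opening the UI, you can confirm the backend is reachable. Below is a minimal stdlib-only probe of the service's `/health` endpoint; the helper name is illustrative, not part of the repo:

```python
import urllib.request

def backend_is_up(base_url: str = "http://localhost:8000") -> bool:
    """Return True if the backend's /health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False
```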
Use one Vercel project from repo root. The root vercel.json builds both parts:
- Python serverless function from
backend/api/index.py - Static frontend build from
frontend/package.json
- In Vercel, create one project and set Root Directory to the repository root.
- Keep framework auto-detection/defaults; the root `vercel.json` controls build and routing.
- Add backend environment variables in Vercel Project Settings: `LLM_PROVIDER`, `GEMINI_API_KEY`, `GEMINI_MODEL`, `GROQ_API_KEY`, `GROQ_MODEL`, `OPENAI_API_KEY`, `OPENAI_MODEL`, and `ALLOWED_ORIGINS` (set to your Vercel app URL).
- Optional frontend env var: `VITE_API_BASE_URL` (leave unset for same-origin `/api` routing in monolithic mode).
- Deploy.
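`ALLOWED_ORIGINS` is presumably a comma-separated list of origins that the backend turns into a CORS allow-list. A stdlib-only sketch of that parsing follows; the exact format and the helper name are assumptions, and the code in `backend/` is authoritative:

```python
import os

def parse_allowed_origins(default: str = "http://localhost:5173") -> list[str]:
    """Split the ALLOWED_ORIGINS env var (assumed comma-separated) into a list."""
    raw = os.environ.get("ALLOWED_ORIGINS", default)
    return [origin.strip() for origin in raw.split(",") if origin.strip()]

# FastAPI would then consume this list, e.g.:
# app.add_middleware(CORSMiddleware, allow_origins=parse_allowed_origins(), ...)
```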
After deploy:
- App UI: `https://<your-app-domain>/`
- Health check: `https://<your-app-domain>/health`
- Analyze API: `https://<your-app-domain>/api/analyze`
`POST /api/analyze` (local: `http://localhost:8000/api/analyze`; deployed: backend domain + `/api/analyze`)
Request body:

```json
{
  "transcript": "...",
  "provider": "gemini",
  "model": "gemini-2.0-flash"
}
```

Response body:
```json
{
  "topics_covered": [],
  "profile": {
    "role": "",
    "level": "",
    "justification": ""
  },
  "candidate_summary": ""
}
```

The app is provider-agnostic. The recommended default is Gemini `gemini-2.0-flash`, with Groq `llama-3.1-8b-instant` and OpenAI `gpt-4o-mini` supported as alternatives.
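A quick way to exercise the endpoint from Python, using only the standard library. The field names follow the request/response bodies above; the helper names are illustrative, not part of the repo:

```python
import json
import urllib.request

def build_payload(transcript: str, provider: str = "gemini",
                  model: str = "gemini-2.0-flash") -> bytes:
    """Encode the /api/analyze request body as UTF-8 JSON."""
    return json.dumps({
        "transcript": transcript,
        "provider": provider,
        "model": model,
    }).encode("utf-8")

def analyze(transcript: str, base_url: str = "http://localhost:8000") -> dict:
    """POST a transcript to the backend and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/api/analyze",
        data=build_payload(transcript),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Requires the backend running with valid API keys, e.g.:
# result = analyze("Interviewer: Tell me about a recent project ...")
# print(result["profile"]["role"], "-", result["candidate_summary"])
```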
The biggest surprise was how much more reliable the output became once the prompt forced concrete topics, a dominant role decision, and an explicit fallback for missing evidence. The backend is intentionally simple and modular so it can be swapped to another model provider without touching the UI. A deterministic provider fallback and transient-retry logic have been implemented so the service will attempt Gemini → Groq → OpenAI (configurable via LLM_PROVIDER) and retry transient errors automatically.
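The fallback-and-retry behavior described above could look roughly like this. This is a hedged sketch with hypothetical adapter callables and names; the actual implementation in `backend/` may differ:

```python
import time

PROVIDER_ORDER = ["gemini", "groq", "openai"]

class TransientError(Exception):
    """Raised by an adapter for retryable failures (timeouts, 429s, 5xx)."""

def call_with_fallback(adapters: dict, prompt: str, preferred: str = "gemini",
                       retries: int = 2, backoff: float = 0.5) -> str:
    """Try the preferred provider first, then the rest in fixed order,
    retrying transient errors with exponential backoff."""
    order = [preferred] + [p for p in PROVIDER_ORDER if p != preferred]
    last_error = None
    for provider in order:
        adapter = adapters.get(provider)
        if adapter is None:
            continue  # provider not configured (e.g. missing API key)
        for attempt in range(retries + 1):
            try:
                return adapter(prompt)
            except TransientError as exc:  # non-transient errors propagate
                last_error = exc
                if attempt < retries:
                    time.sleep(backoff * (2 ** attempt))
    raise RuntimeError(f"all providers failed: {last_error}")
```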
The main limitation is that transcript quality still sets the ceiling of the result: vague interviews produce weaker profiles and more "Not enough evidence" fields. Another practical limitation is that the app requires valid LLM API keys in the backend environment before it can analyze anything; see `backend/.env.example` for the expected variables.
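One way to surface this limitation programmatically is to count fallback fields in a response. A small sketch, assuming the prompt's fallback string is exactly "Not enough evidence" (the prompt defines the real wording):

```python
def count_missing_evidence(payload: dict) -> int:
    """Count profile fields filled with the assumed 'Not enough evidence' fallback."""
    fallback = "Not enough evidence"
    return sum(1 for value in payload.get("profile", {}).values()
               if isinstance(value, str) and value.strip() == fallback)
```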
- During development an actual `.env` with API keys was accidentally committed and has been removed from the repository.
- If you used any real API keys while developing, PLEASE ROTATE/REVOKE those keys now; they may have been exposed.
- To remove secrets from Git history, use a tool such as `git filter-repo` or the BFG Repo-Cleaner. Example (run locally):

```sh
# Install git-filter-repo (if not already installed) and remove a path from history
pip install git-filter-repo
git filter-repo --path backend/.env --invert-paths
```

After scrubbing history, force-push the cleaned branch to your remote and rotate any keys that were committed.