Open-source AI-powered qualitative research interview platform. Conduct deep, nuanced interviews at scale with AI interviewers that adapt their style based on participant responses.
- AI-Powered Interviews: Configurable AI interviewer with structured, standard, or exploratory modes
- Profile Extraction: Automatically gather participant demographic information during natural conversation
- Multi-Question Support: Define core research questions that the AI weaves into conversation naturally
- Study Management: Save, edit, and manage multiple studies from the dashboard
- Real-time Analysis: Automatic synthesis of stated vs revealed preferences, themes, and contradictions
- Aggregate Synthesis: Cross-interview analysis to identify patterns across all participants
- Follow-up Studies: Generate new studies based on synthesis findings
- Secure Deployment: API keys stay server-side, never exposed to participants
- One-Click Deploy: Deploy your own instance to Vercel in minutes
- Click the "Deploy with Vercel" button above
- Connect your GitHub account (if not already)
- Enter the required environment variables:
  - `GEMINI_API_KEY`: Your Google Gemini API key (Get one here)
  - `ADMIN_PASSWORD`: Password to access the researcher dashboard
- Click "Deploy"
- Wait for deployment to complete (~2 minutes)
- Visit your app and configure your study!
| Variable | Required | Description |
|---|---|---|
| `GEMINI_API_KEY` | Yes | Google Gemini API key for AI interviews |
| `ADMIN_PASSWORD` | Yes | Password to protect researcher dashboard |
| `ANTHROPIC_API_KEY` | No | Optional: Use Claude instead of Gemini for interviews |
| `AI_PROVIDER` | No | `gemini` (default) or `claude` |
| `AI_MODEL` | No | Override default model (see Model Selection below) |
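Putting these variables together, a minimal `.env.local` (or Vercel environment configuration) might look like the following; the key values are placeholders you must replace with your own:

```bash
# Required
GEMINI_API_KEY=your-gemini-api-key
ADMIN_PASSWORD=choose-a-strong-password

# Optional: switch the interviewer to Claude
# AI_PROVIDER=claude
# ANTHROPIC_API_KEY=your-claude-api-key
```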
- Setup Study (`/setup`): Configure your research questions, profile fields, and AI behavior
- Save Study: Studies are saved to your dashboard for reuse and editing
- Generate Link: Create a shareable participant link with your study configuration embedded
- Share: Distribute the link to participants via email, survey tools, or social media
- View Results (`/dashboard`): Access individual transcripts and per-interview synthesis
- Aggregate Analysis: View cross-interview patterns, themes, and divergent views
- Generate Follow-ups: Create new studies based on synthesis findings to dig deeper
- Click Link: Participants visit the shared URL
- Consent: Read study information and consent to participate
- Interview: Chat naturally with the AI interviewer
- Complete: View summary and thank you message
```
Researcher                              Participant
   │                                        │
   ├── Setup Study                          │
   ├── Save to Dashboard                    │
   ├── Generate Link ─────────────────────► │
   │                                        ├── Consent
   │                                        ├── Interview
   │                                        │      ↓
   │                                        │  AI Interviewer (Gemini/Claude)
   │                                        │      ↓
   │                                        └── Complete
   │                                               ↓
   │◄──────────────────────────────── Vercel KV (Storage)
   │
   ├── View Individual Synthesis
   ├── Run Aggregate Analysis
   └── Generate Follow-up Studies
```
```bash
# Install dependencies
npm install

# Set environment variables
cp .env.example .env.local
# Edit .env.local with your API keys

# Run development server
npm run dev

# Build for production
npm run build
```

The app works without Vercel KV during development:
- Interview data is not persisted (warning shown)
- Dashboard shows empty state
- All other features work normally
To test with KV locally, install the Vercel CLI and run:
```bash
vercel link
vercel env pull .env.local
```

OpenInterviewer uses Vercel KV (powered by Upstash Redis) to persist studies and interview data. Without storage configured, studies won't be saved and interviews won't persist.
Step 1: Create Upstash Redis Database
- Go to your Vercel Dashboard
- Select your openinterviewer project
- Click the "Storage" tab
- Click "Upstash"
- Click "Create Database"
- Select "Redis" (not Kafka)
- Fill in:
  - Name: `openinterviewer` (or any name)
  - Primary Region: Choose closest to your users (e.g., `us-east-1`)
  - Leave other settings as default
- Click "Create"
Step 2: Connect Database to Project
- After creation, click "Connect Project" button
- Select your openinterviewer project from the dropdown
- Choose environments to connect (select all: Production, Preview, Development)
- Click "Connect"
Step 3: Redeploy
- Go to the "Deployments" tab
- Find your latest deployment
- Click "..." menu → "Redeploy"
- Wait for deployment to complete (~1-2 minutes)
Step 4: Verify
- Visit your app and log in
- Create and save a study
- Navigate to "My Studies" - the study should now appear!
- 10,000 commands/day
- 256MB storage
- Sufficient for testing and small-scale research
When you connect the database, these are automatically added:
- `KV_REST_API_URL`
- `KV_REST_API_TOKEN`
- `KV_REST_API_READ_ONLY_TOKEN`
- `KV_URL`
To explore the full platform workflow without running actual interviews, you can load demo data:
- Log in to your researcher dashboard
- Navigate to My Studies (`/studies`)
- Click the "Load Demo" button (purple button in header)
- Or, if no studies exist, click "Load Demo Data" in the empty state
The demo includes:
- 1 Demo Study: "The Adaptive Self: Professional Identity in the Age of AI"
  - 5 core research questions about AI impact on professional identity
  - Profile schema: role, AI usage frequency, comfort level, industry
  - AI reasoning enabled for synthesis
- 3 Complete Interviews:
  - Sarah (Product Manager) - Enthusiastic AI adopter, found new strategic role
  - Marcus (UX Designer) - Initial skeptic turned converted user
  - Priya (Content Manager) - Efficiency vs authenticity tension
- Full Analysis: Each interview includes synthesis with themes, contradictions, and insights
With demo data loaded, you can:
- View Individual Interviews: Click any interview to see full transcript
- Per-Interview Synthesis: See stated vs revealed preferences, themes
- Aggregate Analysis: Run cross-interview analysis to see patterns
- Follow-up Studies: Generate new studies based on findings
Click "Clear Demo" (amber button in header) to remove all demo data and start fresh.
```
/src
├── app/              # Next.js App Router pages
│   ├── api/          # API routes (server-side)
│   │   ├── interview/      # AI interview generation
│   │   ├── greeting/       # Interview greeting
│   │   ├── synthesis/      # Individual + aggregate analysis
│   │   ├── studies/        # Study CRUD operations
│   │   ├── generate-link/  # Participant URL generation
│   │   ├── interviews/     # Interview CRUD + export
│   │   ├── auth/           # Authentication
│   │   └── config/         # API key status check
│   ├── setup/        # Study configuration
│   ├── consent/      # Participant consent
│   ├── interview/    # Interview chat
│   ├── synthesis/    # Analysis view
│   ├── export/       # Data export
│   ├── dashboard/    # Researcher dashboard
│   ├── studies/      # Study list + detail views
│   ├── login/        # Researcher login
│   └── p/[token]/    # Participant entry point
├── components/       # React components
├── hooks/            # Custom React hooks
├── lib/              # Server-side utilities
│   ├── ai.ts         # AI provider abstraction
│   ├── providers/    # Gemini & Claude implementations
│   └── kv.ts         # Vercel KV client
├── utils/            # Client-side utilities
├── services/         # Client-side services
├── store.ts          # Zustand state management
├── types.ts          # TypeScript types
└── middleware.ts     # Auth protection
```
The app uses Gemini by default for all AI operations:
- Interview responses
- Greeting generation
- Interview synthesis
Models can be selected at two levels:
- Per-study (UI): Choose a model in the Study Setup page for each study
- Environment default: Set default models via environment variables
Priority: Study UI selection > Provider-specific env var > Legacy AI_MODEL > Default
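This resolution order can be sketched as a small pure function. The function name and signature below are illustrative, not the app's actual code:

```typescript
// Sketch of the model resolution order: study UI selection wins,
// then the provider-specific env var, then legacy AI_MODEL, then
// the provider's built-in default.
function resolveModel(
  provider: "gemini" | "claude",
  studyModel: string | undefined, // per-study UI selection
  env: Record<string, string | undefined>,
): string {
  if (studyModel) return studyModel; // 1. Study UI selection
  const providerVar = provider === "gemini" ? env.GEMINI_MODEL : env.CLAUDE_MODEL;
  if (providerVar) return providerVar; // 2. Provider-specific env var
  if (env.AI_MODEL) return env.AI_MODEL; // 3. Legacy AI_MODEL
  return provider === "gemini" ? "gemini-2.5-flash" : "claude-sonnet-4-5"; // 4. Default
}
```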
```bash
# Gemini default model
GEMINI_MODEL=gemini-2.5-flash

# Claude default model
CLAUDE_MODEL=claude-sonnet-4-5

# Legacy (deprecated - use provider-specific vars above)
AI_MODEL=gemini-2.5-flash
```

Gemini:
| Model | Description |
|---|---|
| `gemini-2.5-flash` | Fast, cost-effective (default) |
| `gemini-2.5-pro` | Higher quality |
| `gemini-3-pro-preview` | Most intelligent (preview, may require allowlisting) |
Claude:
| Model | Description | Pricing |
|---|---|---|
| `claude-haiku-4-5` | Fastest | $1/$5 per MTok |
| `claude-sonnet-4-5` | Balanced (default) | $3/$15 per MTok |
| `claude-opus-4-5` | Most capable | $15/$75 per MTok |
Note: Preview models may require API access approval. Check Google AI docs and Anthropic docs for the latest model availability.
To use Claude instead of Gemini:
```bash
AI_PROVIDER=claude
ANTHROPIC_API_KEY=your-claude-api-key
CLAUDE_MODEL=claude-sonnet-4-5
```

The app automatically uses enhanced reasoning (thinking mode) for analytical operations like synthesis, while keeping interviews fast and conversational.
Default Behavior:
| Operation | Reasoning | Model Used |
|---|---|---|
| Interview responses | OFF | User-selected model |
| Greeting generation | OFF | User-selected model |
| Per-interview synthesis | ON (high) | Auto-upgraded (Gemini 3 Pro / Claude Opus) |
| Aggregate synthesis | ON (high) | Auto-upgraded |
| Follow-up study generation | ON (high) | Auto-upgraded |
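The default policy in the table above could be modeled roughly as follows. This is a sketch under stated assumptions; the operation names and function are hypothetical, not the app's actual implementation:

```typescript
type Operation = "interview" | "greeting" | "synthesis" | "aggregate" | "followup";

interface AiCall {
  reasoning: "off" | "high";
  model: string; // user-selected model, or auto-upgraded premium model
}

// Analytical operations get high reasoning and an auto-upgraded premium
// model; conversational operations stay fast on the user-selected model.
function defaultPolicy(
  op: Operation,
  userModel: string,
  provider: "gemini" | "claude",
): AiCall {
  const analytical = op === "synthesis" || op === "aggregate" || op === "followup";
  if (!analytical) return { reasoning: "off", model: userModel };
  const premium = provider === "gemini" ? "gemini-3-pro-preview" : "claude-opus-4-5";
  return { reasoning: "high", model: premium };
}
```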
Per-Study Override:
In Study Setup, you can override the default behavior:
- Automatic (recommended): Use defaults above
- Always enabled: Force reasoning ON for all operations (slower interviews)
- Always disabled: Force reasoning OFF for all operations (faster but less thorough synthesis)
Cost Implications:
- Synthesis operations automatically use premium models for best quality
- Gemini: Uses `gemini-3-pro-preview` for synthesis
- Claude: Uses `claude-opus-4-5` ($15/$75 per MTok) for synthesis
- Reasoning tokens count toward billing
Troubleshooting:
- If synthesis fails silently, check API quotas for premium models
- `gemini-3-pro-preview` may require allowlisting in Google AI Studio
- Claude Opus ($15/$75 per MTok) is used for synthesis - monitor costs
- Set reasoning to "Always disabled" if you want to use your selected model without upgrades
API keys are managed through environment variables in your Vercel dashboard:
- Go to your Vercel project → Settings → Environment Variables
- Add or update the required variables
- Redeploy for changes to take effect (Production deployments pick up new values automatically)
| Variable | Purpose |
|---|---|
| `GEMINI_API_KEY` | Powers AI interviews (server-side) |
| `ADMIN_PASSWORD` | Protects researcher dashboard |
| Variable | Purpose | When Needed |
|---|---|---|
| `ANTHROPIC_API_KEY` | Use Claude for interviews | When AI Provider is set to "Claude" |
| `GEMINI_MODEL` | Override default Gemini model | To change from `gemini-2.5-flash` |
| `CLAUDE_MODEL` | Override default Claude model | To change from `claude-sonnet-4-5` |
| `SESSION_SECRET` | Separate session signing key | Advanced: separate from `ADMIN_PASSWORD` |
| `PARTICIPANT_TOKEN_SECRET` | Separate token signing key | Advanced: separate from `ADMIN_PASSWORD` |
- Gemini: Google AI Studio - Free tier available
- Claude: Anthropic Console - Requires account with credits
When creating a study, you can set participant links to expire after:
- 7 days
- 30 days
- 90 days
- Never (default)
Expired links show an error message directing participants to request a new link.
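Conceptually, the expiry check reduces to comparing elapsed time against the chosen window. The helper below is a hypothetical sketch, not the app's actual code:

```typescript
type LinkExpiry = "7d" | "30d" | "90d" | "never";

const DAY_MS = 24 * 60 * 60 * 1000;
const EXPIRY_MS: Record<Exclude<LinkExpiry, "never">, number> = {
  "7d": 7 * DAY_MS,
  "30d": 30 * DAY_MS,
  "90d": 90 * DAY_MS,
};

// Returns true when the participant link should no longer be accepted.
// "never" (the default) keeps links valid indefinitely.
function isLinkExpired(createdAt: number, expiry: LinkExpiry, now: number): boolean {
  if (expiry === "never") return false;
  return now - createdAt > EXPIRY_MS[expiry];
}
```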
From the Study Detail page, you can instantly revoke all participant links by toggling "Links Enabled" off. This is useful if:
- You've finished data collection
- You suspect the link has been shared inappropriately
- You need to pause the study temporarily
- Server-side keys (`GEMINI_API_KEY`, `ANTHROPIC_API_KEY`): Stored as environment variables, never exposed to the browser
- Participant URLs: Signed JWT tokens that cannot be tampered with
- Dashboard: Password-protected with HTTP-only cookie authentication
- Data: Stored in Vercel KV (Redis) with encrypted connections
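To illustrate why signed participant tokens cannot be tampered with, here is a minimal HMAC signing sketch using Node's built-in `crypto` module. This is a simplified illustration, not OpenInterviewer's actual token code (which uses JWTs):

```typescript
import { createHmac } from "node:crypto";

// Sketch: encode a payload and sign it with a server-side secret.
// Any change to the body invalidates the signature on verification.
function signToken(payload: object, secret: string): string {
  const body = Buffer.from(JSON.stringify(payload)).toString("base64url");
  const sig = createHmac("sha256", secret).update(body).digest("base64url");
  return `${body}.${sig}`;
}

// Returns the payload when the signature checks out, null otherwise.
function verifyToken(token: string, secret: string): object | null {
  const [body, sig] = token.split(".");
  const expected = createHmac("sha256", secret).update(body ?? "").digest("base64url");
  if (sig !== expected) return null; // tampered, truncated, or wrong secret
  return JSON.parse(Buffer.from(body, "base64url").toString("utf8"));
}
```

Because the secret never leaves the server, participants can hold and share the token but cannot forge or alter one that verifies.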
MIT
Contributions welcome! Please read the contributing guidelines before submitting PRs.