An Express.js microservice that acts as a secure proxy for OpenAI's API, with user authentication, conversation history, and token usage tracking.
✅ User Authentication
- User registration and login with JWT tokens
- Secure password hashing with bcrypt
- JWT middleware for protected routes (sketched below)
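A minimal sketch of what the JWT middleware could look like; the file path, error shapes, and `jsonwebtoken` usage here are assumptions, not the project's verified code:
```javascript
// src/middleware/auth.js (hypothetical sketch)
const jwt = require('jsonwebtoken');

// Verifies a "Bearer <token>" Authorization header, attaches the decoded
// payload to req.user, and rejects the request with 401 otherwise.
function authenticate(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;
  if (!token) {
    return res.status(401).json({ error: 'Missing authentication token' });
  }
  try {
    req.user = jwt.verify(token, process.env.JWT_SECRET);
    next();
  } catch (err) {
    res.status(401).json({ error: 'Invalid or expired token' });
  }
}

module.exports = authenticate;
```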
✅ OpenAI API Proxy
- Get available models endpoint
- Chat completions with conversation storage (route sketch below)
- Streaming chat support
- All conversations stored under user ID
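To illustrate the proxy flow, a rough sketch of the chat route; the endpoint path matches the API list below, but the `saveMessages` helper and everything else here are hypothetical:
```javascript
// src/routes/openai.js (hypothetical sketch)
const express = require('express');
const OpenAI = require('openai');

const router = express.Router();
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// POST /api/openai/chat - forward the messages to OpenAI, then persist
// the exchange under the authenticated user's ID.
router.post('/chat', async (req, res) => {
  try {
    const { messages, model = 'gpt-3.5-turbo' } = req.body;
    const completion = await client.chat.completions.create({ model, messages });
    // saveMessages() stands in for the conversation-storage layer:
    // await saveMessages(req.user.id, messages, completion);
    res.json(completion);
  } catch (err) {
    // Return a sanitized error rather than leaking upstream details.
    res.status(502).json({ error: 'Upstream OpenAI request failed' });
  }
});

module.exports = router;
```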
✅ Conversation Management
- SQLite database for conversation history
- Get conversations list with pagination (example query below)
- Get individual conversation with messages
- Message count tracking per conversation
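A possible shape for the paginated query behind the conversations endpoint, shown with the `better-sqlite3` driver for brevity; the actual driver, table, and column names are assumptions:
```javascript
// Hypothetical pagination for GET /api/history/conversations.
const Database = require('better-sqlite3');
const db = new Database(process.env.DB_PATH || './database.sqlite');

function listConversations(userId, page = 1, pageSize = 20) {
  // Parameterized query: scoped to the user (privacy) and immune to
  // SQL injection; LIMIT/OFFSET implements the pagination.
  return db
    .prepare(
      `SELECT id, title, model, created_at
         FROM conversations
        WHERE user_id = ?
        ORDER BY created_at DESC
        LIMIT ? OFFSET ?`
    )
    .all(userId, pageSize, (page - 1) * pageSize);
}
```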
✅ Token Usage Tracking
- Track prompt and completion tokens per user
- Daily token usage aggregation (upsert sketch below)
- Token usage history with date ranges
- Accurate token counting for both regular and streaming responses
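The daily aggregation could be implemented as a single SQLite upsert; this sketch assumes a `token_usage` table with a unique key on `(user_id, usage_date)`, as sketched further below:
```javascript
// Hypothetical daily-aggregation upsert for per-user token usage.
function recordUsage(db, userId, promptTokens, completionTokens) {
  db.prepare(
    `INSERT INTO token_usage (user_id, usage_date, prompt_tokens, completion_tokens, total_tokens)
     VALUES (?, date('now'), ?, ?, ?)
     ON CONFLICT(user_id, usage_date) DO UPDATE SET
       prompt_tokens     = prompt_tokens + excluded.prompt_tokens,
       completion_tokens = completion_tokens + excluded.completion_tokens,
       total_tokens      = total_tokens + excluded.total_tokens`
  ).run(userId, promptTokens, completionTokens, promptTokens + completionTokens);
}
```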
API Endpoints

- `POST /api/auth/register` - User registration
- `POST /api/auth/login` - User login
- `GET /api/openai/models` - Get available OpenAI models
- `POST /api/openai/chat` - Chat completions with conversation storage
- `GET /api/history/conversations` - Get user's conversations (paginated)
- `GET /api/history/conversations/:id` - Get specific conversation with messages
- `GET /api/history/token-usage` - Get user's token usage statistics
- `GET /health` - Health check endpoint
Security Features

- Helmet.js for security headers
- CORS configuration
- Rate limiting (100 requests per 15 minutes; wiring sketched after this list)
- Input validation with express-validator
- JWT authentication middleware
- Secure password hashing
- Database transactions for data consistency
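A sketch of how the listed middleware are typically wired in an Express app; the package choices match the bullets above, but the exact options are assumptions:
```javascript
// Hypothetical security wiring in src/app.js.
const express = require('express');
const helmet = require('helmet');
const cors = require('cors');
const rateLimit = require('express-rate-limit');

const app = express();
app.use(helmet());           // sets security-related HTTP headers
app.use(cors());             // permissive by default; restrict origins in production
app.use(express.json());     // body parsing before validation middleware
app.use(rateLimit({
  windowMs: 15 * 60 * 1000,  // 15-minute window
  max: 100,                  // 100 requests per window, per IP
}));
```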
Database Schema

Users:
- User authentication and profile information
- Secure password storage

Conversations:
- Conversation metadata (title, model, timestamps)
- User association for privacy

Messages:
- Individual messages within conversations
- Role-based message storage (user/assistant)
- Token usage tracking per message

Token usage:
- Daily token usage aggregation per user
- Prompt, completion, and total token tracking
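A hypothetical schema initialization consistent with the descriptions above; the actual column names in `src/models/database.js` may differ:
```javascript
const Database = require('better-sqlite3');
const db = new Database(process.env.DB_PATH || './database.sqlite');

db.exec(`
  CREATE TABLE IF NOT EXISTS users (
    id            INTEGER PRIMARY KEY AUTOINCREMENT,
    email         TEXT UNIQUE NOT NULL,
    username      TEXT NOT NULL,
    password_hash TEXT NOT NULL
  );
  CREATE TABLE IF NOT EXISTS conversations (
    id         INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id    INTEGER NOT NULL REFERENCES users(id),
    title      TEXT,
    model      TEXT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
  );
  CREATE TABLE IF NOT EXISTS messages (
    id              INTEGER PRIMARY KEY AUTOINCREMENT,
    conversation_id INTEGER NOT NULL REFERENCES conversations(id),
    role            TEXT NOT NULL,  -- 'user' or 'assistant'
    content         TEXT NOT NULL,
    tokens          INTEGER
  );
  CREATE TABLE IF NOT EXISTS token_usage (
    user_id           INTEGER NOT NULL REFERENCES users(id),
    usage_date        TEXT NOT NULL,
    prompt_tokens     INTEGER DEFAULT 0,
    completion_tokens INTEGER DEFAULT 0,
    total_tokens      INTEGER DEFAULT 0,
    PRIMARY KEY (user_id, usage_date)
  );
`);
```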
Setup

- Install dependencies:
```bash
npm install
```
- Set up environment variables in `.env`:
```
OPENAI_API_KEY=your_openai_api_key
JWT_SECRET=your_jwt_secret
PORT=3000
DB_PATH=./database.sqlite
RATE_LIMIT=100
JWT_EXPIRES_IN=24h
```
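These values are presumably centralized by the `src/config/config.js` shown in the project tree below; one plausible shape (the defaults here are assumptions):
```javascript
// src/config/config.js (hypothetical sketch).
module.exports = {
  openaiApiKey: process.env.OPENAI_API_KEY,
  jwtSecret: process.env.JWT_SECRET,                      // required, no default
  port: parseInt(process.env.PORT, 10) || 3000,
  dbPath: process.env.DB_PATH || './database.sqlite',
  rateLimit: parseInt(process.env.RATE_LIMIT, 10) || 100,
  jwtExpiresIn: process.env.JWT_EXPIRES_IN || '24h',
};
```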
- Start the server:
```bash
npm start
```
For development:
```bash
npm run dev
```

Usage Examples

Register a user:
```bash
curl -X POST http://localhost:3000/api/auth/register \
  -H "Content-Type: application/json" \
  -d '{"email": "user@example.com", "password": "password123", "username": "testuser"}'
```

Log in:
```bash
curl -X POST http://localhost:3000/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email": "user@example.com", "password": "password123"}'
```

Send a chat completion:
```bash
curl -X POST http://localhost:3000/api/openai/chat \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}], "model": "gpt-3.5-turbo"}'
```

List conversations:
```bash
curl -X GET http://localhost:3000/api/history/conversations \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"
```

Get token usage:
```bash
curl -X GET http://localhost:3000/api/history/token-usage \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"
```

Project Structure

```
src/
├── app.js                 # Main application file
├── config/
│   └── config.js          # Configuration settings
├── middleware/
│   └── auth.js            # JWT authentication middleware
├── models/
│   ├── database.js        # Database initialization and schema
│   ├── User.js            # User model with authentication methods
│   └── Conversation.js    # Conversation and message models
├── routes/
│   ├── auth.js            # Authentication routes
│   ├── openai.js          # OpenAI proxy routes
│   └── history.js         # Conversation history routes
├── services/
│   └── openai.js          # OpenAI API service layer
└── utils/
    └── logger.js          # Logging utilities
```
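A sketch of how `src/app.js` might wire these pieces together; the mounting paths are inferred from the endpoint list above, not confirmed:
```javascript
// src/app.js (hypothetical wiring).
const express = require('express');
const authenticate = require('./middleware/auth');
const authRoutes = require('./routes/auth');
const openaiRoutes = require('./routes/openai');
const historyRoutes = require('./routes/history');

const app = express();
app.use(express.json());

app.use('/api/auth', authRoutes);                     // public
app.use('/api/openai', authenticate, openaiRoutes);   // JWT-protected
app.use('/api/history', authenticate, historyRoutes); // JWT-protected
app.get('/health', (req, res) => res.json({ status: 'ok' }));

app.listen(process.env.PORT || 3000);
```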
✅ All phases completed and verified with Kluster MCP:
- Phase 1: Project setup and dependencies ✅
- Phase 2: Database schema and authentication ✅
- Phase 3: OpenAI proxy endpoints ✅
- Phase 4: Conversation history storage ✅
- Phase 5: Token usage tracking ✅
- Phase 6: Final testing and verification ✅
✅ No SQL injection vulnerabilities in history search
✅ Auth middleware properly applied to protected endpoints
✅ No hardcoded JWT secrets
✅ Error messages sanitized for client responses
✅ Database transactions used for token usage updates
✅ Proper async/await usage throughout
✅ Efficient batch database operations (no N+1 queries)
✅ Pagination implemented for history endpoints
✅ Proper token estimation for streaming responses (heuristic sketched below)
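Streamed completions do not return a usage object by default, so completion tokens must be estimated; one common rough heuristic (not necessarily what this service uses) is ~4 characters per token:
```javascript
// Hypothetical fallback estimator for streamed responses: accumulate the
// streamed delta text, then apply the ~4 chars/token heuristic once.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

async function countStreamedTokens(stream) {
  let full = '';
  for await (const chunk of stream) {
    // OpenAI stream chunks carry incremental text in choices[0].delta.content.
    full += chunk.choices[0]?.delta?.content || '';
  }
  return estimateTokens(full);
}
```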