A comprehensive Claude Desktop replacement with enhanced functionality, multi-provider LLM support, and advanced MCP server management.
An enterprise-grade MCP client that bypasses Claude Desktop's restrictions while providing richer functionality:
- 🔌 Multi-Provider LLM Support: Gemini, Ollama, OpenAI, Anthropic with dynamic model selection
- 🛠️ Advanced MCP Server Management: Visual configuration, templates, real-time monitoring
- 🖥️ Claude Desktop-inspired UI: Rich HTML rendering, centralized settings, intuitive navigation
- ⚡ Real-time Streaming: Live tool execution with progress monitoring and error recovery
- 🏢 Enterprise Ready: On-premises deployment, robust error handling, production architecture
- 🎨 Template-based Setup: Quick MCP server configuration with popular service templates
```
                 SyncPilot (Custom MCP Client)
┌─────────────────────────────────────────────────────────────┐
│                    Frontend (Next.js 14)                    │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐          │
│  │  Chat UI    │  │ Server Mgr  │  │  Settings   │          │
│  │ • HTML      │  │ • Add/Remove│  │ • Providers │          │
│  │ • Markdown  │  │ • Connect   │  │ • API Keys  │          │
│  │ • Tool Viz  │  │ • Monitor   │  │ • Models    │          │
│  └─────────────┘  └─────────────┘  └─────────────┘          │
└───────────────────────┬─────────────────────────────────────┘
                        │ HTTP/SSE
┌───────────────────────┴─────────────────────────────────────┐
│                  Backend (Python FastAPI)                   │
│  ┌─────────────────────────────────────────────────────┐    │
│  │                 LLM Provider Layer                  │    │
│  │  ┌────────┐  ┌─────────┐  ┌────────┐  ┌─────────┐   │    │
│  │  │ Gemini │  │ Ollama  │  │ OpenAI │  │Anthropic│   │    │
│  │  └────────┘  └─────────┘  └────────┘  └─────────┘   │    │
│  └─────────────────────────────────────────────────────┘    │
│  ┌─────────────────────────────────────────────────────┐    │
│  │                     MCP Manager                     │    │
│  │  • Multi-server connections                         │    │
│  │  • Tool discovery & caching                         │    │
│  │  • Parallel tool execution                          │    │
│  │  • Error handling & recovery                        │    │
│  └─────────────────────────────────────────────────────┘    │
└───────────────────────┬─────────────────────────────────────┘
                        │ MCP Protocol (stdio/HTTP/WS)
┌───────────────────────┴─────────────────────────────────────┐
│                         MCP Servers                         │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐          │
│  │ Filesystem  │  │  Database   │  │   Custom    │          │
│  │   Server    │  │   Server    │  │  Servers    │          │
│  └─────────────┘  └─────────────┘  └─────────────┘          │
└─────────────────────────────────────────────────────────────┘
```
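Inside the backend, the provider layer gives the chat pipeline a single interface regardless of vendor. Here is a minimal sketch of what such an abstraction might look like, with Ollama as the concrete example; the class and method names are illustrative (not SyncPilot's actual API), and it assumes the `httpx` package is available:

```python
import json
from abc import ABC, abstractmethod
from typing import AsyncIterator

import httpx


class LLMProvider(ABC):
    """One interface per vendor, so the chat pipeline stays provider-agnostic."""

    @abstractmethod
    async def list_models(self) -> list[str]:
        """Return the models this provider currently offers (dynamic discovery)."""

    @abstractmethod
    def stream_chat(self, messages: list[dict], model: str) -> AsyncIterator[str]:
        """Yield response text chunks as they arrive, for SSE streaming."""


class OllamaProvider(LLMProvider):
    """Local models via Ollama's HTTP API; no API key needed (on-prem friendly)."""

    def __init__(self, base_url: str = "http://localhost:11434"):
        self.base_url = base_url

    async def list_models(self) -> list[str]:
        async with httpx.AsyncClient() as client:
            resp = await client.get(f"{self.base_url}/api/tags")
            resp.raise_for_status()
            return [m["name"] for m in resp.json().get("models", [])]

    async def stream_chat(self, messages, model):
        payload = {"model": model, "messages": messages, "stream": True}
        async with httpx.AsyncClient(timeout=None) as client:
            async with client.stream("POST", f"{self.base_url}/api/chat", json=payload) as resp:
                async for line in resp.aiter_lines():
                    if not line:
                        continue
                    chunk = json.loads(line)  # Ollama streams one JSON object per line
                    yield chunk.get("message", {}).get("content", "")
```

Each remote provider (Gemini, OpenAI, Anthropic) would implement the same two methods against its own SDK, which is what makes dynamic model selection and streaming uniform across vendors.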
SyncPilot includes unified start scripts that handle everything automatically.

Linux/macOS:
```bash
./start.sh
```

Cross-platform (Python):
```bash
python3 start.py
```

Windows:
```bash
start.bat
```

Using npm:
```bash
npm start
```

The start scripts will:
- ✅ Check dependencies (Python, Node.js, npm)
- ✅ Create a Python virtual environment
- ✅ Install backend dependencies
- ✅ Install frontend dependencies
- ✅ Create a .env file from the template
- ✅ Start both backend and frontend
- ✅ Monitor both processes and restart them if they crash

Once running, access:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Docs: http://localhost:8000/api/docs
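A quick way to confirm both processes are up, using only the standard library and the URLs listed above:

```python
import urllib.request

# Probe the frontend and the backend's API docs page (URLs from the list above).
for url in ("http://localhost:3000", "http://localhost:8000/api/docs"):
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{url} -> HTTP {resp.status}")
    except OSError as exc:
        print(f"{url} -> not reachable ({exc})")
```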
If you prefer manual setup:

Backend:
```bash
cd backend
python3 -m venv venv
source venv/bin/activate   # On Windows: venv\Scripts\activate
pip install -r requirements.txt
cp .env.example .env       # Edit with your API keys
uvicorn app.main:app --reload --port 8000
```

Frontend (in a new terminal):
```bash
cd frontend
npm install
npm run dev
```

Configure your preferred AI providers in the Settings > LLM Providers panel:
- ✅ Dynamic Model Discovery: Auto-fetch available models from each provider
- ✅ Real-time Validation: Test API keys and connections
- ✅ Smart Defaults: Fallback models when API calls fail
- ✅ Temperature & Token Control: Fine-tune model behavior
Example provider configuration:

```json
{
  "gemini": {
    "enabled": true,
    "api_key": "your-google-api-key",
    "default_model": "gemini-2.5-pro",
    "temperature": 0.7,
    "max_tokens": 4096
  },
  "ollama": {
    "enabled": true,
    "base_url": "http://localhost:11434",
    "default_model": "llama3.1:latest",
    "temperature": 0.7
  }
}
```

Add MCP servers via Settings > MCP Servers with one-click templates:
- File System: Local file access with directory restrictions
- GitHub: Repository integration with personal access tokens
- PostgreSQL: Database connectivity with connection strings
- Custom: Manual configuration for specialized servers
Example MCP server configuration:

```json
{
  "mcpServers": {
    "ptp-operator": {
      "command": "node",
      "args": ["/path/to/your/mcp-server/index.js"],
      "env": {
        "KUBECONFIG": "/home/user/.kube/config",
        "PTP_AGENT_URL": "https://your-ptp-agent.example.com",
        "NODE_TLS_REJECT_UNAUTHORIZED": "0"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"],
      "env": {},
      "auto_connect": true,
      "timeout": 30
    }
  }
}
```

Supported transports:
- STDIO: Local process communication (most common; see the connection sketch below)
- HTTP SSE: Remote server via Server-Sent Events
- WebSocket: Real-time bidirectional communication
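For instance, the stdio transport boils down to spawning the server process and speaking MCP over its stdin/stdout. A minimal sketch using the official MCP Python SDK (the `mcp` package), pointed at the filesystem server from the config above; this illustrates the protocol generally rather than SyncPilot's internal MCP Manager:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Same command/args as the "filesystem" entry in the example config.
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # tool discovery
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```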
- Gemini AI: Latest Google models (gemini-2.5-pro, gemini-1.5-flash) with auto-discovery
- Ollama: Local LLM support for privacy/offline use with real-time model listing
- OpenAI: GPT-4o and other OpenAI models with dynamic model fetching
- Anthropic: Claude 3.5 Sonnet and other Claude models with API validation
- Dynamic Model Discovery: Auto-fetch and update available models for each provider
- Smart Fallbacks: Graceful degradation when APIs are unavailable (see the sketch below)
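Discovery plus fallback reduces to: ask the provider for its live model list, and degrade to known-good defaults when that call fails. A hedged sketch of the pattern (the fallback lists here are examples, not SyncPilot's shipped defaults; `provider` is any object exposing the async `list_models()` from the provider sketch earlier):

```python
from typing import Any

FALLBACK_MODELS: dict[str, list[str]] = {
    # Served to the UI when a provider's model-listing API is unreachable.
    "gemini": ["gemini-2.5-pro", "gemini-1.5-flash"],
    "openai": ["gpt-4o"],
    "anthropic": ["claude-3-5-sonnet-latest"],
}


async def available_models(name: str, provider: Any) -> list[str]:
    """Prefer live discovery; degrade gracefully instead of failing the settings UI."""
    try:
        models = await provider.list_models()
        return models or FALLBACK_MODELS.get(name, [])
    except Exception:
        # Bad key, network error, provider outage: show defaults, not a stack trace.
        return FALLBACK_MODELS.get(name, [])
```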
- Template-based Setup: One-click configuration for popular services
- Visual Configuration: Intuitive forms with real-time validation
- Multiple Transport Protocols: stdio, HTTP SSE, WebSocket with auto-detection
- Real-time Monitoring: Connection health, tool discovery, error tracking
- Edit & Update: Modify server configurations without restart
- Auto-discovery: Tools and resources automatically detected and cached
- Parallel Execution: Multiple tool calls with progress monitoring (see the sketch after this list)
- Error Recovery: Automatic reconnection and circuit breaker patterns
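At its core, parallel execution with recovery means running tool calls concurrently while isolating failures from one another. A simplified sketch under those assumptions (the function shape is illustrative; `session.call_tool` is the MCP SDK call):

```python
import asyncio
from typing import Any


async def execute_tools(calls: list[tuple[Any, str, dict]]) -> list[dict]:
    """Run several MCP tool calls concurrently; one failure doesn't abort the batch."""

    async def run_one(session: Any, tool: str, arguments: dict) -> dict:
        try:
            result = await session.call_tool(tool, arguments)
            return {"tool": tool, "ok": True, "result": result}
        except Exception as exc:
            # Report the error to the UI; the other calls keep running.
            return {"tool": tool, "ok": False, "error": str(exc)}

    return await asyncio.gather(*(run_one(s, t, a) for s, t, a in calls))
```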
- Centralized Settings: All configuration in one intuitive interface
- Rich HTML Rendering: Full Claude Desktop-style message rendering
- Real-time Updates: Live connection status and tool execution progress
- Split-screen Layout: Chat and management side-by-side
- Template Dropdowns: Quick server setup with popular configurations
- Progress Visualization: Tool execution with detailed status updates
- ✅ No Claude Desktop dependency: Deploy anywhere
- ✅ Corporate compliance: Keep data on-premises with Ollama
- ✅ Multi-provider flexibility: Not locked to any single AI provider
- ✅ Enhanced monitoring: Full visibility into tool execution
- ✅ Source code control: Customize and extend as needed
- ✅ Production ready: Async architecture, error handling, type safety
```
syncpilot/
├── backend/                  # Python FastAPI backend
│   ├── app/
│   │   ├── core/             # MCP manager, config
│   │   ├── providers/        # LLM provider implementations
│   │   ├── api/              # REST API endpoints
│   │   └── models/           # Pydantic data models
│   └── requirements.txt
├── frontend/                 # Next.js frontend
│   ├── src/
│   │   ├── app/              # Next.js app router
│   │   ├── components/       # React components
│   │   └── lib/              # Utilities and stores
│   └── package.json
├── README.md
└── IMPLEMENTATION_SUMMARY.md
```
- Setup Guide: See the Quick Start section above
- Detailed Implementation: See IMPLEMENTATION_SUMMARY.md
- API Documentation: Available at http://localhost:8000/api/docs when running
SyncPilot is designed for enterprise deployment:
- Docker: Container-ready architecture
- Cloud: Deploy on AWS, GCP, Azure
- On-premises: Full local deployment with Ollama
- Kubernetes: Scalable container orchestration
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
See the LICENSE file for details.
SyncPilot - Because your AI workflow shouldn't be limited by corporate restrictions. 🚀