- Best solution: option-seven (Daily WebRTC transport + OpenAI Realtime, speech-to-speech)
1. option-one
> source ~/.bashrc
> pipecat init
Let's create your Pipecat project!
✔ Project name: option-one
✔ Bot type: Web/Mobile
✔ Client framework: Vanilla JS
✔ Transport: Daily (WebRTC)
✔ Add another transport for local testing? No
✔ Pipeline architecture: Cascade (STT → LLM → TTS)
✔ Speech-to-Text: Deepgram
✔ Language model: OpenAI
✔ Text-to-Speech: ElevenLabs
Default feature settings:
• Audio recording: No
• Transcription logging: No
• Smart turn-taking: Yes (recommended)
• Video avatar service: No
• Video input: No
• Video output: No
• Observability: No
✔ Customize feature settings? No
✔ Deploy to Pipecat Cloud? Yes
✔ Enable Krisp noise cancellation? Yes
✨ Project created successfully!
/Users/sunilkhedar/workspace/pipecat-integrations/option-one
Next steps:
• Go to your project: cd option-one
Client setup:
• Go to client (in a separate terminal window or tab): cd client
• Install dependencies: npm install
• Run dev server: npm run dev
Server setup:
• Go to server: cd server
• Add torch: uv add torch
• Install dependencies: uv sync
• Create .env file: cp .env.example .env
• Edit .env and add your API keys
• Run your bot:
• Daily: uv run bot.py --transport daily
See https://docs.pipecat.ai/deployment/pipecat-cloud for deployment info.
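The "Edit .env and add your API keys" step is where a first run usually stalls. For option-one's stack (Daily transport, Deepgram STT, OpenAI LLM, ElevenLabs TTS) the server needs roughly the keys below; the exact variable names come from the generated .env.example, so treat these as the typical names rather than the authoritative ones:

```sh
# server/.env — illustrative only; copy the real names from .env.example
DAILY_API_KEY=...          # Daily (WebRTC transport / Pipecat Cloud)
DEEPGRAM_API_KEY=...       # Deepgram STT
OPENAI_API_KEY=...         # OpenAI LLM
ELEVENLABS_API_KEY=...     # ElevenLabs TTS
```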
2. option-two
> source ~/.bashrc
> pipecat init
Let's create your Pipecat project!
✔ Project name: option-two
✔ Bot type: Web/Mobile
✔ Client framework: Vanilla JS
✔ Transport: SmallWebRTC
✔ Add another transport for local testing? No
✔ Pipeline architecture: Cascade (STT → LLM → TTS)
✔ Speech-to-Text: Deepgram
✔ Language model: OpenAI
✔ Text-to-Speech: ElevenLabs
Default feature settings:
• Audio recording: No
• Transcription logging: No
• Smart turn-taking: Yes (recommended)
• Video avatar service: No
• Video input: No
• Video output: No
• Observability: No
✔ Customize feature settings? Yes
✔ Audio recording? No
✔ Transcription logging? Yes
✔ Smart turn-taking? Yes
✔ Use video avatar service? No
✔ Video input? No
✔ Video output? No
✔ Enable observability? No
✔ Deploy to Pipecat Cloud? No
✨ Project created successfully!
/Users/sunilkhedar/workspace/pipecat-integrations/option-two
Next steps:
• Go to your project: cd option-two
Client setup:
• Go to client (in a separate terminal window or tab): cd client
• Install dependencies: npm install
• Run dev server: npm run dev
Server setup:
• Go to server: cd server
• Add torch: uv add torch
• Install dependencies: uv sync
• Create .env file: cp .env.example .env
• Edit .env and add your API keys
• Run your bot:
• SmallWebRTC: uv run bot.py
See README.md for detailed setup instructions.
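Several of the generated templates ask for an explicit `uv add torch` before `uv sync`. A quick way to confirm the extra dependency actually resolved into the project environment (assuming uv is on your PATH and you are inside `server/`):

```sh
# prints the installed torch version if the dependency resolved correctly
uv run python -c "import torch; print(torch.__version__)"
```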
3. option-three
> source ~/.bashrc
> pipecat init
Let's create your Pipecat project!
✔ Project name: option-three
✔ Bot type: Web/Mobile
✔ Client framework: Vanilla JS
✔ Transport: SmallWebRTC
✔ Add another transport for local testing? No
✔ Pipeline architecture: Cascade (STT → LLM → TTS)
✔ Speech-to-Text: AWS Transcribe
✔ Language model: AWS Bedrock
✔ Text-to-Speech: AWS Polly
Default feature settings:
• Audio recording: No
• Transcription logging: No
• Smart turn-taking: Yes (recommended)
• Video avatar service: No
• Video input: No
• Video output: No
• Observability: No
✔ Customize feature settings? Yes
✔ Audio recording? No
✔ Transcription logging? Yes
✔ Smart turn-taking? Yes
✔ Use video avatar service? No
✔ Video input? No
✔ Video output? No
✔ Enable observability? No
✔ Deploy to Pipecat Cloud? No
✨ Project created successfully!
/Users/sunilkhedar/workspace/pipecat-integrations/option-three
Next steps:
• Go to your project: cd option-three
Client setup:
• Go to client (in a separate terminal window or tab): cd client
• Install dependencies: npm install
• Run dev server: npm run dev
Server setup:
• Go to server: cd server
• Add torch: uv add torch
• Install dependencies: uv sync
• Create .env file: cp .env.example .env
• Edit .env and add your API keys
• Run your bot:
• SmallWebRTC: uv run bot.py
See README.md for detailed setup instructions.
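Option-three keeps everything on AWS (Transcribe, Bedrock, Polly), so instead of per-vendor API keys the server needs AWS credentials with access to those services. The standard AWS SDK environment variables are sketched below; whether the generated .env.example uses these names or relies on an AWS profile instead is an assumption to verify:

```sh
# server/.env — illustrative; confirm the exact names in .env.example
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1       # a region where Transcribe, Bedrock, and Polly are enabled
```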
4. option-four: MCP server responses break the agent
> source ~/.bashrc
> pipecat init
Let's create your Pipecat project!
✔ Project name: option-four
✔ Bot type: Web/Mobile
✔ Client framework: Vanilla JS
✔ Transport: SmallWebRTC
✔ Add another transport for local testing? No
✔ Pipeline architecture: Realtime (speech-to-speech)
✔ Realtime service: AWS Nova Sonic
Default feature settings:
• Audio recording: No
• Transcription logging: No
• Video avatar service: No
• Video input: No
• Video output: No
• Observability: No
✔ Customize feature settings? Yes
✔ Audio recording? No
✔ Transcription logging? Yes
✔ Use video avatar service? No
✔ Video input? No
✔ Video output? No
✔ Enable observability? No
✔ Deploy to Pipecat Cloud? No
✨ Project created successfully!
/Users/sunilkhedar/workspace/pipecat-integrations/option-four
Next steps:
• Go to your project: cd option-four
Client setup:
• Go to client (in a separate terminal window or tab): cd client
• Install dependencies: npm install
• Run dev server: npm run dev
Server setup:
• Go to server: cd server
• Add torch: uv add torch
• Add aioboto3: uv add aioboto3
• Install dependencies: uv sync
• Create .env file: cp .env.example .env
• Edit .env and add your API keys
• Run your bot:
• SmallWebRTC: uv run bot.py
See README.md for detailed setup instructions.
5. option-five: Internal tools instead of MCP servers
> source ~/.bashrc
> pipecat init
Let's create your Pipecat project!
✔ Project name: option-five
✔ Bot type: Web/Mobile
✔ Client framework: Vanilla JS
✔ Transport: SmallWebRTC
✔ Add another transport for local testing? No
✔ Pipeline architecture: Realtime (speech-to-speech)
✔ Realtime service: AWS Nova Sonic
Default feature settings:
• Audio recording: No
• Transcription logging: No
• Video avatar service: No
• Video input: No
• Video output: No
• Observability: No
✔ Customize feature settings? Yes
✔ Audio recording? No
✔ Transcription logging? Yes
✔ Use video avatar service? No
✔ Video input? No
✔ Video output? No
✔ Enable observability? No
✔ Deploy to Pipecat Cloud? No
✨ Project created successfully!
/Users/sunilkhedar/workspace/pipecat-integrations/option-five
Next steps:
• Go to your project: cd option-five
Client setup:
• Go to client (in a separate terminal window or tab): cd client
• Install dependencies: npm install
• Run dev server: npm run dev
Server setup:
• Go to server: cd server
• Add torch: uv add torch
• Add aioboto3: uv add aioboto3
• Install dependencies: uv sync
• Create .env file: cp .env.example .env
• Edit .env and add your API keys
• Run your bot:
• SmallWebRTC: uv run bot.py
See README.md for detailed setup instructions.
6. option-six
> source ~/.bashrc
> pipecat init
Let's create your Pipecat project!
✔ Project name: option-six
✔ Bot type: Web/Mobile
✔ Client framework: Vanilla JS
✔ Transport: Daily (WebRTC)
✔ Add another transport for local testing? No
✔ Pipeline architecture: Realtime (speech-to-speech)
✔ Realtime service: AWS Nova Sonic
Default feature settings:
• Audio recording: No
• Transcription logging: No
• Video avatar service: No
• Video input: No
• Video output: No
• Observability: No
✔ Customize feature settings? Yes
✔ Audio recording? No
✔ Transcription logging? Yes
✔ Use video avatar service? No
✔ Video input? No
✔ Video output? No
✔ Enable observability? No
✔ Deploy to Pipecat Cloud? No
✨ Project created successfully!
/Users/sunilkhedar/workspace/pipecat-integrations/option-six
Next steps:
• Go to your project: cd option-six
Client setup:
• Go to client (in a separate terminal window or tab): cd client
• Install dependencies: npm install
• Run dev server: npm run dev
Server setup:
• Go to server: cd server
• Add torch: uv add torch
• Add aioboto3: uv add aioboto3
• Install dependencies: uv sync
• Create .env file: cp .env.example .env
• Edit .env and add your API keys
• Run your bot:
• Daily: uv run bot.py --transport daily
See README.md for detailed setup instructions.
7. option-seven
> source ~/.bashrc
> pipecat init
Let's create your Pipecat project!
✔ Project name: option-seven
✔ Bot type: Web/Mobile
✔ Client framework: Vanilla JS
✔ Transport: Daily (WebRTC)
✔ Add another transport for local testing? No
✔ Pipeline architecture: Realtime (speech-to-speech)
✔ Realtime service: OpenAI Realtime
Default feature settings:
• Audio recording: No
• Transcription logging: No
• Video avatar service: No
• Video input: No
• Video output: No
• Observability: No
✔ Customize feature settings? Yes
✔ Audio recording? No
✔ Transcription logging? Yes
✔ Use video avatar service? No
✔ Video input? No
✔ Video output? No
✔ Enable observability? No
✔ Deploy to Pipecat Cloud? No
✨ Project created successfully!
/Users/sunilkhedar/workspace/pipecat-integrations/option-seven
Next steps:
• Go to your project: cd option-seven
Client setup:
• Go to client (in a separate terminal window or tab): cd client
• Install dependencies: npm install
• Run dev server: npm run dev -- --port 4000
Server setup:
• Go to server: cd server
• Install dependencies: uv sync
• Create .env file: cp .env.example .env
• Edit .env and add your API keys
• Run your bot:
• Daily: uv run bot.py --transport daily
See README.md for detailed setup instructions.
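Since option-seven is the pick, note that its configuration is also the smallest: the realtime pipeline replaces the separate STT/LLM/TTS keys with a single OpenAI key, plus Daily for transport. Again, the variable names below are the conventional ones; the generated .env.example is authoritative:

```sh
# server/.env for option-seven — illustrative
OPENAI_API_KEY=...   # OpenAI Realtime (speech-to-speech)
DAILY_API_KEY=...    # Daily (WebRTC transport)
```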
8. option-eight
> source ~/.bashrc
> pipecat init
Let's create your Pipecat project!
✔ Project name: option-eight
✔ Bot type: Web/Mobile
✔ Client framework: Vanilla JS
✔ Transport: SmallWebRTC
✔ Add another transport for local testing? No
✔ Pipeline architecture: Realtime (speech-to-speech)
✔ Realtime service: Gemini Live
Default feature settings:
• Audio recording: No
• Transcription logging: No
• Video avatar service: No
• Video input: No
• Video output: No
• Observability: No
✔ Customize feature settings? Yes
✔ Audio recording? No
✔ Transcription logging? Yes
✔ Use video avatar service? No
✔ Video input? No
✔ Video output? No
✔ Enable observability? No
✔ Deploy to Pipecat Cloud? No
✨ Project created successfully!
/Users/sunilkhedar/workspace/pipecat-integrations/option-eight
Next steps:
• Go to your project: cd option-eight
Client setup:
• Go to client (in a separate terminal window or tab): cd client
• Install dependencies: npm install
• Run dev server: npm run dev -- --port 4000
Server setup:
• Go to server: cd server
• Install dependencies: uv sync
• Create .env file: cp .env.example .env
• Edit .env and add your API keys
• Run your bot:
• SmallWebRTC: uv run bot.py
See README.md for detailed setup instructions.
9. option-nine
> source ~/.bashrc
> pipecat init
Let's create your Pipecat project!
✔ Project name: option-nine
✔ Bot type: Web/Mobile
✔ Client framework: Vanilla JS
✔ Transport: SmallWebRTC
✔ Add another transport for local testing? No
✔ Pipeline architecture: Realtime (speech-to-speech)
✔ Realtime service: OpenAI Realtime
Default feature settings:
• Audio recording: No
• Transcription logging: No
• Video avatar service: No
• Video input: No
• Video output: No
• Observability: No
✔ Customize feature settings? Yes
✔ Audio recording? No
✔ Transcription logging? Yes
✔ Use video avatar service? No
✔ Video input? No
✔ Video output? No
✔ Enable observability? No
✔ Deploy to Pipecat Cloud? No
✨ Project created successfully!
/Users/sunilkhedar/workspace/pipecat-integrations/option-nine
Next steps:
• Go to your project: cd option-nine
Client setup:
• Go to client (in a separate terminal window or tab): cd client
• Install dependencies: npm install
• Run dev server: npm run dev -- --port 4000
Server setup:
• Go to server: cd server
• Install dependencies: uv sync
• Create .env file: cp .env.example .env
• Edit .env and add your API keys
• Run your bot:
• SmallWebRTC: uv run bot.py
See README.md for detailed setup instructions.
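For the SmallWebRTC options like option-nine, the whole loop runs from two terminals. A consolidated version of the steps above, assuming the client dev server serves on the port passed via --port:

```sh
# Terminal 1 — server
cd option-nine/server
uv sync && cp .env.example .env   # then add OPENAI_API_KEY to .env
uv run bot.py

# Terminal 2 — client
cd option-nine/client
npm install && npm run dev -- --port 4000
# open http://localhost:4000 in a browser and connect to the bot
```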