🦀 CrustAI — Private Local AI Assistant for Messaging Platforms

Run a multi-platform AI assistant on your own machine with full privacy (Telegram, Discord, WhatsApp, Slack).



🔒 100% Private · 🖥️ Runs Locally · 🧠 Powered by Open-Source LLMs · 🌐 Multi-Platform




⚡ 30-second preview

![30-second preview](demo/chat.gif)

🇺🇸 What is CrustAI?

CrustAI is a self-hosted AI assistant that runs 100% locally using Ollama. It integrates with Telegram, Discord, WhatsApp and Slack so you can chat with your assistant in tools you already use—without sending your conversation data to cloud LLM providers.

Built with Node.js and powered by Ollama (local LLM runtime), CrustAI is designed for developers, privacy enthusiasts, and anyone who wants an AI assistant that truly belongs to them.


✨ Key Features

| Feature | Description |
|---|---|
| 🔒 100% Local & Private | Conversations stay on your machine |
| 🧠 LLM via Ollama | Use tinyllama, llama3.2, phi3 and more |
| 📱 Multi-platform Adapters | Telegram, WhatsApp, Discord, Slack |
| 🧬 Long-term Memory | Store and retrieve user facts |
| REST API | Integrate CrustAI into external workflows |
| 🎭 Personality Config | Customize tone, style and identity |
| 🌐 Bilingual UX | English + Portuguese support |

🎬 Demo

🖥️ Step 1 — Starting CrustAI

Watch the system boot up and connect to the local AI model

![Terminal demo](demo/terminal.gif)


📱 Step 2 — Bot Connected on Telegram

The bot responds instantly — running 100% offline

![Ping demo](demo/ping.gif)


🧠 Step 3 — AI Responding in Real Time

Ask anything — the answer comes from your own machine

![Chat demo](demo/chat.gif)


Commands available:

```
/ping      → Check if the bot is alive
/help      → Show all commands
/model     → Show which AI model is running
/remember  → Store a fact in long-term memory
/forget    → Erase all stored facts
/clear     → Clear conversation history
```
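The memory commands above boil down to a small set of store operations. The sketch below is a hypothetical in-memory illustration of those semantics only; the actual implementation lives in `src/memory/store.js` and persists facts with sql.js, and all names here are invented for the example:

```javascript
// Hypothetical sketch of the long-term memory semantics behind
// /remember, /forget, and fact recall (NOT the real sql.js-backed store).
class MemoryStore {
  constructor() {
    this.facts = new Map(); // userId -> array of stored facts
  }

  // Backs /remember: append a fact for this user.
  remember(userId, fact) {
    const list = this.facts.get(userId) ?? [];
    list.push(fact);
    this.facts.set(userId, list);
  }

  // Used when building the prompt: retrieve everything known about the user.
  recall(userId) {
    return this.facts.get(userId) ?? [];
  }

  // Backs /forget: erase all stored facts for this user.
  forget(userId) {
    this.facts.delete(userId);
  }
}

const store = new MemoryStore();
store.remember('42', 'prefers answers in Portuguese');
console.log(store.recall('42')); // [ 'prefers answers in Portuguese' ]
store.forget('42');
console.log(store.recall('42')); // []
```

The real store swaps the `Map` for an embedded SQL table so facts survive restarts, but the command-to-operation mapping is the same idea.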

🏗️ Architecture

```
Adapters (Telegram / Discord / WhatsApp / Slack)
                │
                ▼
         Message Orchestrator
        ┌────────┴────────┐
        ▼                 ▼
   Ollama Client      Memory Store
        │                 │
        └────────┬────────┘
                 ▼
             REST API
```

Design note: adapter boundaries make it easy to add new channels without changing core conversation logic.
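One way to picture that boundary is a thin adapter that normalizes platform events into a common message shape before handing them to the orchestrator. This is an illustrative sketch only; the class and field names are hypothetical, not the actual `src/adapters` API:

```javascript
// Illustrative adapter contract (hypothetical names, not the real src/adapters code).
// Each platform adapter only translates between its SDK and a shared message shape,
// so the orchestrator never sees platform-specific types.
class TelegramAdapter {
  constructor(orchestrator) {
    this.orchestrator = orchestrator;
  }

  // Called with a raw platform event; normalizes it and forwards the reply.
  async onIncoming(update) {
    const reply = await this.orchestrator.handle({
      platform: 'telegram',
      userId: String(update.from),
      text: update.text,
    });
    return this.send(update.from, reply);
  }

  async send(chatId, text) {
    // A real adapter would call the Telegram Bot API here; stubbed for the sketch.
    return { chatId, text };
  }
}

const orchestrator = { handle: async (msg) => `echo: ${msg.text}` };
const adapter = new TelegramAdapter(orchestrator);
adapter.onIncoming({ from: 1, text: 'hi' }).then((out) => console.log(out));
// { chatId: 1, text: 'echo: hi' }
```

Adding a new channel under this shape means writing one more adapter class; the orchestrator, memory store, and LLM client stay untouched.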


🚀 Quick Start

Prerequisites

  • Node.js (with npm)
  • Ollama

Installation

```bash
# 1. Clone the repository
git clone https://github.com/DaveSimoes/CrustAI.git
cd CrustAI

# 2. Install dependencies
npm install

# 3. Start Ollama and pull a model
ollama serve
ollama pull tinyllama   # lightweight (600MB)
# or
ollama pull llama3.2    # more powerful (2GB, needs 8GB RAM)

# 4. Configure the project
cp config/config.example.yml config/config.yml
# Edit config/config.yml with your Telegram token and model

# 5. Run CrustAI
npm start
```

⚙️ Configuration

Edit config/config.yml:

```yaml
model: tinyllama          # or llama3.2, phi3, mistral...
ollama_url: http://localhost:11434
language: pt-BR

telegram:
  enabled: true
  token: YOUR_BOT_TOKEN_HERE
  allowed_user_ids: []    # leave empty to allow all users

discord:
  enabled: false
  token: ""

whatsapp:
  enabled: false

voice:
  enabled: false
  port: 8765
```
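The `allowed_user_ids` comment implies a simple access rule: an empty list opens the bot to everyone, otherwise only listed IDs get through. A minimal sketch of that check, assuming exactly the behavior described in the config comment (the function name is hypothetical):

```javascript
// Sketch of the allowed_user_ids gate described in config.example.yml:
// an empty (or missing) list means "allow all users" (assumption from the
// config comment); otherwise only whitelisted IDs may talk to the bot.
function isAllowed(userId, allowedUserIds) {
  if (!Array.isArray(allowedUserIds) || allowedUserIds.length === 0) {
    return true; // empty list: open to everyone
  }
  return allowedUserIds.includes(userId);
}

console.log(isAllowed(123, []));         // true  (open access)
console.log(isAllowed(123, [123, 456])); // true  (whitelisted)
console.log(isAllowed(789, [123, 456])); // false (blocked)
```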

🛠️ Tech Stack

| Technology | Purpose |
|---|---|
| Node.js | Runtime environment |
| Ollama | Local LLM inference engine |
| node-telegram-bot-api | Telegram integration |
| @whiskeysockets/baileys | WhatsApp integration |
| discord.js | Discord integration |
| @slack/bolt | Slack integration |
| Fastify | REST API server |
| sql.js | Embedded database for memory |
| yaml | Configuration management |

📁 Project Structure

```
crustai/
├── src/
│   ├── core/
│   │   ├── index.js        # Main orchestrator
│   │   ├── llm.js          # Ollama LLM client
│   │   └── commands.js     # Command handler
│   ├── adapters/
│   │   ├── telegram/       # Telegram bot
│   │   ├── discord/        # Discord bot
│   │   ├── whatsapp/       # WhatsApp bot
│   │   └── slack/          # Slack bot
│   ├── memory/
│   │   └── store.js        # Long-term memory
│   ├── personality/
│   │   └── prompt.js       # System prompt builder
│   ├── voice/
│   │   └── server.js       # Voice WebSocket server
│   └── api/
│       └── server.js       # REST API
├── config/
│   ├── config.yml          # Your configuration (git-ignored)
│   ├── config.example.yml  # Template
│   └── personality.yml     # Assistant personality
├── demo/
│   ├── terminal.gif        # Boot demo
│   ├── ping.gif            # Telegram connection demo
│   └── chat.gif            # AI conversation demo
└── data/                   # Local database (git-ignored)
```

🔐 Privacy First

CrustAI was built with privacy as its core principle:

  • ✅ All conversations stay on your machine
  • ✅ No API keys sent to external AI services
  • ✅ No telemetry or usage tracking
  • ✅ Open source — inspect every line of code
  • ✅ Your data, your rules

🗺️ Roadmap

  • Web UI dashboard
  • Image understanding (multimodal LLMs)
  • Plugin system for custom tools
  • Docker one-click deployment
  • Mobile app companion

👨‍💻 Author

Dave Simoes


📄 License

This project is licensed under the MIT License — see the LICENSE file for details.



🇧🇷 What is CrustAI?

CrustAI is a fully private, self-hosted AI assistant that runs entirely on your own machine: no data leaves your computer. It connects to popular messaging platforms such as Telegram, WhatsApp, Discord and Slack, giving you the power of conversational AI without giving up your privacy.

Built with Node.js and powered by Ollama (local LLM engine), CrustAI was designed for developers, privacy enthusiasts, and anyone who wants an AI assistant that truly belongs to them.


✨ Key Features

| Feature | Description |
|---|---|
| 🔒 100% Private | All data stays on your machine. No cloud |
| 🧠 Local LLM | Powered by Ollama; supports llama3.2, tinyllama and more |
| 📱 Multi-platform | Telegram, WhatsApp, Discord, Slack in a single bot |
| 🧬 Long-term Memory | Remembers facts about you across conversations |
| 🗣️ Offline Voice | Speaks and listens without internet (pt-BR) |
| REST API | Built-in API for custom integrations |
| 🎭 Personality | Configure the assistant's name, tone and behavior |

🎬 Demo

🖥️ Step 1 — Starting CrustAI

The system booting up and connecting to the local AI model

![Terminal demo](demo/terminal.gif)


📱 Step 2 — Bot Connected on Telegram

The bot responding instantly — 100% offline

![Ping demo](demo/ping.gif)


🧠 Step 3 — AI Responding in Real Time

Ask anything — the answer comes from your own machine

![Chat demo](demo/chat.gif)


🚀 Quick Start

```bash
# 1. Clone the repository
git clone https://github.com/DaveSimoes/CrustAI.git
cd CrustAI

# 2. Install dependencies
npm install

# 3. Start Ollama and pull a model
ollama serve
ollama pull tinyllama

# 4. Configure the project
cp config/config.example.yml config/config.yml
# Edit config/config.yml with your Telegram token

# 5. Run CrustAI
npm start
```

👨‍💻 Author

Dave Simoes — a developer passionate about AI, privacy, and open source.


⭐ If this project helped you, leave a star! ⭐

Made with 🦀 and ❤️ by Dave Simoes