A general-purpose AI assistant with Linux server administration capabilities.
Ask questions, diagnose problems, run commands on remote servers, search the web, and save files — all through a streaming chat interface backed by any LLM.
- SSH access — run commands on remote Linux servers; read-only by default, write mode opt-in per host
- Web search & fetch — DuckDuckGo search (no API key) + direct fetching from a whitelisted domain list
- File storage — read and write files in a local `./files/` sandbox
- Date/time awareness — always knows the current time and can build precise time-range queries
- Persistent memory — SQLite + FTS5 stores conversations and saved solutions; relevant past solutions are injected into context automatically
- Model agnostic — Ollama, Anthropic Claude, OpenAI, Gemini, or any OpenAI-compatible endpoint; switch per-conversation
- Web UI — dark/light mode, collapsible tool call panels, token usage, conversation history
- CLI client — rich terminal UI, connects to a remote server so every team member can use it
- OpenAI-compatible `/v1` API — connect opencode, Cursor, or any OpenAI-protocol client
```bash
git clone https://github.com/c0m4r/aurora.git
cd aurora
./install.sh
```

The installer creates a virtual environment, installs dependencies, and copies `config.example.yaml` to `config.yaml` if it doesn't exist.

Start the server with:

```bash
./start.sh
```

Open http://localhost:8000.
Everything lives in config.yaml. Environment variables override the corresponding keys.
Enable at least one provider. Models are referenced as `provider/model-id`.
Ollama provider is enabled by default.
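For illustration only, a provider section might look like the sketch below. The key names here are assumptions, not the actual schema — `config.example.yaml` in the repository is the authoritative reference.

```yaml
# Hypothetical sketch — consult config.example.yaml for real key names.
providers:
  ollama:
    enabled: true
    base_url: "http://localhost:11434"
  anthropic:
    enabled: false
    api_key: "sk-ant-..."   # or an environment variable override
```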
Connect to Linux servers and run shell commands.
```yaml
tools:
  ssh:
    enabled: true
    allow_writes: false   # true = allow state-changing commands when the user asks
    hosts:
      - name: "web-01"
        host: "10.0.0.10"
        port: 22
        user: "aurora"
        key_file: "~/.ssh/id_ed25519"
        # allow_writes: true   # per-host override
      - name: "db-01"
        host: "10.0.0.20"
        user: "root"
        key_file: "~/.ssh/id_ed25519"
```

Safety model:
| Mode | What's blocked |
|---|---|
| Read-only (default) | Any write/modify operation: redirects (`>`), package managers, `systemctl start/stop/restart`, `rm`, `chmod`, `useradd`, `mount`, `kill`, `reboot`, and ~40 more patterns |
| Write (`allow_writes: true`) | Only catastrophic/irreversible operations: `rm -rf /`, `mkfs`, `dd` to block devices, fork bombs |
The model is instructed to only use write commands when the user has explicitly asked for a change, and to announce what each command will do before running it.
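The real block-list lives in Aurora's source; as a rough sketch of how this kind of pattern-based command filtering can work (the patterns and function name below are illustrative, not Aurora's actual code):

```python
import re

# Illustrative subset — the real read-only block-list has ~40 patterns.
WRITE_PATTERNS = [
    r">\s*\S",                                 # shell redirects
    r"\brm\b",
    r"\bchmod\b",
    r"\bsystemctl\s+(start|stop|restart)\b",
    r"\breboot\b",
]

def is_write_command(command: str) -> bool:
    """Return True if the command matches any write/modify pattern."""
    return any(re.search(p, command) for p in WRITE_PATTERNS)

print(is_write_command("df -h"))                    # → False (read-only, allowed)
print(is_write_command("systemctl restart nginx"))  # → True  (blocked in read-only mode)
```

A real filter would also need to handle quoting, subshells, and command chaining, which is why an allow-by-default write mode still keeps a separate catastrophic-operation block-list.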
Searches DuckDuckGo (falls back to Bing). No API key required. Uses trafilatura for content extraction when available.
```yaml
tools:
  websearch:
    enabled: true
    max_results: 5
    fetch_content: true        # extract page text from top results
    max_content_length: 4000   # characters per page
    # Domains the model may visit directly with a URL (without a search query).
    # null = use built-in defaults (GitHub, PyPI, Arch wiki, NVD, Stack Exchange, …)
    # [] = disable direct URL fetching
    whitelist: null
    # whitelist:
    #   - github.com
    #   - wiki.archlinux.org
    #   - your-internal-docs.example.com
```

Always enabled. The model can read and write files inside `./files/` (relative to the server's working directory). Path traversal is blocked — nothing outside that directory is accessible.
```text
./files/
  report.md
  scripts/setup.sh
  data/output.json
```
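The file tool rejects paths that escape the sandbox. A common way to enforce this (a general sketch, not Aurora's actual implementation) is to resolve the requested path and verify it stays under the base directory:

```python
from pathlib import Path

FILES_DIR = Path("./files").resolve()

def safe_path(user_path: str) -> Path:
    """Resolve a user-supplied path; reject anything outside FILES_DIR."""
    candidate = (FILES_DIR / user_path).resolve()
    if not candidate.is_relative_to(FILES_DIR):  # Python 3.9+
        raise ValueError(f"path escapes sandbox: {user_path}")
    return candidate

print(safe_path("report.md").name)   # → report.md
# safe_path("../etc/passwd") raises ValueError
```

Resolving before checking is the important step — it collapses `..` segments so `scripts/../../secret` cannot slip through a naive prefix check.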
```yaml
memory:
  db_path: "~/.local/share/aurora/memory.db"
```

Conversations and saved solutions are stored in SQLite with FTS5 full-text search. Relevant past solutions are automatically injected into the system prompt for each new query. Save a solution via the web UI's Solutions panel.
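To illustrate the FTS5 mechanism, here is a standalone sketch using a throwaway in-memory database — the table name and columns are invented for this example, not Aurora's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table: every column is full-text indexed
conn.execute("CREATE VIRTUAL TABLE solutions USING fts5(title, body)")
conn.execute(
    "INSERT INTO solutions VALUES (?, ?)",
    ("nginx restart loop", "systemd unit had a bad ExecStart path"),
)
conn.commit()

# MATCH query — rows found this way can be injected into the system prompt
rows = conn.execute(
    "SELECT title FROM solutions WHERE solutions MATCH ?", ("nginx",)
).fetchall()
print(rows)  # → [('nginx restart loop',)]
```

FTS5 ships with most CPython builds of the `sqlite3` module, so no extra dependency is needed for the search itself.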
```yaml
server:
  host: "0.0.0.0"
  port: 8000
  api_key: "strong-random-secret"   # or: AURORA_API_KEY env var
```

If `api_key` is empty or `change-me-please`, authentication is disabled (fine for local use).
Open http://localhost:8000 after starting the server.
| Feature | Details |
|---|---|
| Streaming | Responses stream token-by-token via SSE |
| Stop | Red stop button (or Esc) aborts the current generation; partial response is kept |
| Continue | Button appears automatically when the agent hits max tool iterations |
| Thinking | Claude's extended reasoning in a collapsible block |
| Tool calls | Each tool invocation shows input + output, collapsible |
| Token usage | Per-message and session-total token counts |
| Dark / light | Toggle in the sidebar footer |
| Copy | Per-message copy button; copy entire conversation; copy code blocks |
| History | All conversations saved and listed in the sidebar |
| Solutions | Saved solutions panel — browse, re-ask, delete |
| Settings | Right-click the logo to set server URL and API key |
Thin client that connects to any running server — install it on any machine.

```bash
pip install -e .
# or run directly without installing:
python cli/main.py
```

Interactive:

```bash
aurora
aurora --server http://server:8000 --api-key my-secret
```

Single shot:

```bash
aurora -m "check disk and memory on all servers"
aurora -m "what is the latest kernel version?" --quiet
```

Environment variables:

```bash
export AURORA_SERVER=http://server:8000
export AURORA_API_KEY=my-secret
export AURORA_MODEL=anthropic/claude-sonnet-4-6
aurora
```

In-session commands:
| Command | Description |
|---|---|
| `/models` | List all available models from all providers |
| `/use anthropic/claude-opus-4-6` | Switch model for this session |
| `/new` | Start a new conversation |
| `/history` | List recent conversations |
| `/load <id-prefix>` | Resume a past conversation |
| `/quit` | Exit |
The server exposes /v1/chat/completions in the OpenAI format, so any tool that accepts a custom base URL works out of the box.
opencode:

```json
{
  "providers": {
    "agent": {
      "api": "openai",
      "base": "http://localhost:8000/v1",
      "key": "your-api-key"
    }
  }
}
```

Cursor / Continue / VS Code:
- Base URL: `http://localhost:8000/v1`
- API Key: value of `api_key` in `config.yaml`
Python:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="your-key")
stream = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Check nginx status on web-01"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```

```text
┌─────────────┐  SSE stream  ┌───────────────────────────────────────┐
│   Web UI    ├─────────────►│                                       │
├─────────────┤              │  FastAPI Server                       │
│ CLI client  ├─ SSE stream ►│  /api/chat/stream     (native SSE)    │
├─────────────┤              │  /v1/chat/completions (OpenAI compat) │
│  opencode   ├─ OpenAI API ►│                                       │
└─────────────┘              └──────────────┬────────────────────────┘
                                            │
                                  ┌─────────▼─────────┐
                                  │    Agent Loop     │  async generator
                                  │     (loop.py)     │  → SSE events
                                  └─────────┬─────────┘
                                            │
                      ┌─────────────────────┼──────────────────┐
                      │                     │                  │
               ┌──────▼─────┐     ┌─────────▼───────┐   ┌──────▼───────┐
               │  Provider  │     │  Tool Registry  │   │    Memory    │
               │  Registry  │     │                 │   │   (SQLite)   │
               └──────┬─────┘     │  ssh            │   └──────────────┘
                      │           │  web (search +  │
               ┌──────▼─────┐     │   whitelisted   │
               │ Anthropic  │     │   fetch)        │
               │ OpenAI     │     │  file_read      │
               │ Gemini     │     │  file_write     │
               │ Ollama     │     │  get_datetime   │
               │ Custom     │     └─────────────────┘
               └────────────┘
```
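The agent loop's contract — an async generator that yields typed events which the server forwards over SSE — can be sketched like this (a simplified stand-in, not the real `loop.py`):

```python
import asyncio
from typing import AsyncIterator

async def agent_loop(prompt: str) -> AsyncIterator[dict]:
    """Simplified stand-in: yield SSE-style events for one turn."""
    yield {"event": "conv_id", "data": "abc123"}          # first event on a new conversation
    yield {"event": "text", "data": f"Answering: {prompt}"}
    yield {"event": "done", "data": ""}                    # turn complete

async def main() -> None:
    async for event in agent_loop("check disk on web-01"):
        print(event["event"])

asyncio.run(main())  # prints: conv_id, text, done (one per line)
```

Because the loop is an async generator, each transport (native SSE endpoint, OpenAI-compatible endpoint) can consume the same event stream and serialize it in its own wire format.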
SSE event stream (all clients receive the same format):
| Event | Description |
|---|---|
| `conv_id` | Conversation ID (first event on new conversation) |
| `thinking` | Claude extended reasoning delta |
| `text` | Response text delta |
| `tool_call` | Tool name + input being invoked |
| `tool_result` | Tool output (or error flag) |
| `usage` | Input / output token counts |
| `done` | Turn complete |
| `error` | Unrecoverable error |
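On the wire these arrive as standard SSE frames (`event:` / `data:` lines separated by blank lines). A minimal parser — illustrative only, not the bundled CLI's code — looks like:

```python
def parse_sse(raw: str) -> list[tuple[str, str]]:
    """Parse 'event:'/'data:' frames from a raw SSE chunk into (event, data) pairs."""
    events = []
    current_event = "message"  # SSE spec default when no event: line is given
    for line in raw.splitlines():
        if line.startswith("event:"):
            current_event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            events.append((current_event, line[len("data:"):].strip()))
    return events

raw = "event: text\ndata: Hello\n\nevent: done\ndata: \n\n"
print(parse_sse(raw))  # → [('text', 'Hello'), ('done', '')]
```

A production client would additionally buffer partial frames across network chunks and join multiple `data:` lines per frame, which this sketch omits.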
- Create `aurora/tools/my_tool.py`:

```python
from .base import BaseTool, ToolDefinition

class MyTool(BaseTool):
    def definition(self) -> ToolDefinition:
        return ToolDefinition(
            name="my_tool",
            description="What this tool does and when to use it.",
            parameters={
                "type": "object",
                "properties": {
                    "input": {"type": "string", "description": "..."},
                },
                "required": ["input"],
            },
        )

    async def execute(self, input: str, **_) -> str:
        return "result"
```

- Register it in `aurora/tools/registry.py` → `build_registry()`.
Overview:
| Package | Purpose |
|---|---|
| `fastapi` + `uvicorn` | Async HTTP server |
| `anthropic` | Claude API (native streaming + extended thinking) |
| `openai` | OpenAI / Gemini / Ollama / vLLM-compatible APIs |
| `aiosqlite` | Async SQLite for conversation history and memory |
| `asyncssh` | SSH connections to remote servers |
| `httpx` | Async HTTP (web search, URL fetching) |
| `beautifulsoup4` + `lxml` | HTML parsing for web search and page extraction |
| `trafilatura` | Better main-content extraction from web pages |
| `rich` + `typer` + `prompt_toolkit` | CLI |
| `pyyaml` | Config file parsing |
Full list: `requirements.lock`
See LICENSE file.