A complete system for chatting with Large Language Models (LLMs) through DNS TXT record queries. It includes a TypeScript DNS server and a Python client for secure, authenticated AI conversations.
Designed for restricted network environments where only DNS queries are permitted (like unpaid airplane Wi-Fi 😉), enabling access to models such as OpenAI o1, GPT-4, Claude, and others. This DNS-based transport can be extended to other applications, such as chat rooms, file transfer, or any other data exchange over DNS infrastructure.
Predefined Endpoints (no authentication):
- `PING` → Returns `PONG`
- `LIST` → Returns available models
Chat Format: `[10-char API key][1-char model index][user prompt]`
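Given this format, a client-side payload builder might look like the following minimal Python sketch (the function name is illustrative, not part of the actual client):

```python
def build_chat_query(api_key: str, model_index: int, prompt: str) -> str:
    """Compose a chat query payload: [10-char API key][1-char model index][prompt]."""
    if len(api_key) != 10:
        raise ValueError("API key must be exactly 10 characters")
    if not 0 <= model_index <= 9:
        raise ValueError("model index must be a single digit (0-9)")
    return f"{api_key}{model_index}{prompt}"
```

For example, `build_chat_query("my-api-key", 0, "What is AI?")` produces the same string used in the `dig` examples below.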
# Test server connectivity
dig @127.0.0.1 -p5333 TXT "PING"
# Get available models
dig @127.0.0.1 -p5333 TXT "LIST"
# Chat examples (API key: my-api-key, using Model 0)
dig @127.0.0.1 -p5333 TXT "my-api-key0What is AI?"
dig @127.0.0.1 -p5333 TXT "my-api-key0Tell me a programming joke"
dig @127.0.0.1 -p5333 TXT "my-api-key0How do I center a div in CSS?"

# Call using the Python client
python dns_chat.py ping
# Or install it for direct use
pip install -e .
gpt53 ping
# Example usage
gpt53 --api-key my-api-key --host 127.0.0.1 --port 5333 interactive

Commands:
- `ping`: Test server connectivity
- `list`: List available AI models
- `interactive`: Start interactive chat mode
Options:
- `--host TEXT`: DNS server host (default: 127.0.0.1)
- `--port INTEGER`: DNS server port (default: 5333)
- `--api-key TEXT`: 10-character API key for authentication (required for interactive mode)
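Under the hood, a TXT query like the ones above can be issued with nothing but the Python standard library. Here is a hedged sketch of how a client might encode and send a raw DNS TXT query (function names are illustrative, not from the actual client; labels over 63 bytes would need splitting, which this sketch ignores):

```python
import secrets
import socket
import struct

def encode_txt_query(name: str) -> bytes:
    """Encode a minimal DNS query packet asking for a TXT record of `name`."""
    txn_id = secrets.randbits(16)
    # Header: id, flags (RD=1), 1 question, 0 answer/authority/additional records
    header = struct.pack(">HHHHHH", txn_id, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    # QTYPE 16 = TXT, QCLASS 1 = IN
    return header + qname + struct.pack(">HH", 16, 1)

def query_txt(server: str, port: int, name: str, timeout: float = 5.0) -> bytes:
    """Send the query over UDP and return the raw DNS response bytes."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(encode_txt_query(name), (server, port))
        data, _ = sock.recvfrom(4096)
        return data

# e.g. query_txt("127.0.0.1", 5333, "PING") mirrors: dig @127.0.0.1 -p5333 TXT "PING"
```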
Generate a secure 10-character API key:
# OpenSSL
openssl rand -base64 32 | tr -d "=+/" | cut -c1-10

💡 This key should be set in the server's API_KEY and supplied by the client when requesting a chat generation.
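An equivalent key can be generated in Python with the standard-library `secrets` module (a sketch matching the OpenSSL one-liner's output shape, not the project's own code):

```python
import secrets
import string

def generate_api_key(length: int = 10) -> str:
    """Generate a random alphanumeric API key of the given length."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```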
To deploy GPT-53 on a subdomain (e.g., gpt53.example.com), follow these steps:
Set up DNS records for your subdomain:
# A record pointing to your server IP
gpt53.example.com A YOUR_SERVER_IP
# NS record to delegate DNS queries to your server (optional, for direct DNS queries)
gpt53.example.com NS gpt53.example.com
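Once the records are published, you can sanity-check that the A record resolves before moving on (a quick sketch using only the standard library's system resolver; substitute your real hostname):

```python
import socket

def resolve_a(hostname: str) -> list[str]:
    """Resolve the IPv4 address(es) for hostname via the system resolver."""
    infos = socket.getaddrinfo(hostname, None, socket.AF_INET, socket.SOCK_DGRAM)
    return sorted({info[4][0] for info in infos})

# e.g. resolve_a("gpt53.example.com") should include YOUR_SERVER_IP
```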
Configure the server to bind to the appropriate interface:
export HOST=0.0.0.0 # Listen on all interfaces
export PORT=53 # Standard DNS port (requires root/sudo)
export API_KEY=your-secure-10-char-key
export OPENAI_API_KEY=your-openai-api-key
# Start the server
cd server
npm start

Ensure your firewall allows DNS traffic:
sudo ufw allow 53/udp
sudo ufw allow 53/tcp

Once deployed, clients can connect using your subdomain:
# Python client
gpt53 --host gpt53.example.com --port 53 --api-key your-api-key interactive
# Direct DNS queries
dig @gpt53.example.com TXT "PING"
dig @gpt53.example.com TXT "your-api-key0Hello world"

- Long Response Support: Bypass response length limits with chunked delivery (return `CONTINUE:[chunk_id]` when a response exceeds the limit; the client then queries `GET_CHUNK:[chunk_id]:[part_number]`)
- Long Message Support: Add `START_LONG_MESSAGE` and `END_LONG_MESSAGE` commands to bypass TXT record character limits (use the API key as the message identifier; a server-side message store is needed)
- Chat History & Threads: Add chat history persistence and multiple-thread support (commands like `LIST_THREADS`, `SELECT_THREAD:[thread_id]`, `NEW_THREAD`; a server-side store is required)
- Function Calling: Add support for function calling in AI responses (return `FUNC_CALL:[function_name]:[base64_encoded_args]`; the client executes the function and responds with `FUNC_RESULT:[base64_encoded_result]`)
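The chunked-delivery idea could be sketched as follows. Although the server itself is TypeScript, this is a language-agnostic Python sketch of the server-side logic; the helper names and the 255-byte chunk size are assumptions (a single TXT character-string tops out at 255 bytes):

```python
import uuid

CHUNK_SIZE = 255  # single TXT character-string limit (assumption)
_chunk_store: dict[str, list[str]] = {}  # chunk_id -> ordered response parts

def store_response(text: str) -> str:
    """Return short responses directly; otherwise stash chunks and return a CONTINUE marker."""
    if len(text) <= CHUNK_SIZE:
        return text
    chunk_id = uuid.uuid4().hex[:8]
    _chunk_store[chunk_id] = [
        text[i : i + CHUNK_SIZE] for i in range(0, len(text), CHUNK_SIZE)
    ]
    return f"CONTINUE:{chunk_id}"

def get_chunk(chunk_id: str, part_number: int) -> str:
    """Serve GET_CHUNK:[chunk_id]:[part_number] lookups (0-based part numbers)."""
    parts = _chunk_store.get(chunk_id)
    if parts is None or not 0 <= part_number < len(parts):
        return "ERROR:unknown_chunk"
    return parts[part_number]
```

The client would loop on `GET_CHUNK` until it has reassembled every part; an eviction policy for `_chunk_store` would be needed in practice.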
