The Deterministic Runtime for Safety-Critical AI Agents with Autonomous Native Capabilities
Æon is a comprehensive, production-ready framework for building Neuro-Symbolic AI agents. Unlike stochastic-only systems, Æon combines the intuitive reasoning of LLMs (System 1) with the deterministic safety and control of code-level axioms (System 2).
It establishes a standard "Trust Stack" that enables agents to be Safety-Native, Protocol-First, and Extensible by Design. With deep integration of the Agent-to-Agent (A2A) and Model Context Protocol (MCP), Æon allows you to build interoperable agent ecosystems that can collaborate safely in high-stakes environments.
- Autonomous Native Engine: Built-in support for browser automation (Playwright), persistent event-sourced memory (SQLite), and granular Trust Levels.
- Developer-First CLI: Transform from scripts to projects with the new `aeon` command. Scaffold, run, and serve agents in seconds.
- Declarative Runtime: Define agents via `aeon.yaml` and launch a full Gateway Server for production deployments.
- Enhanced Safety Executive: Improved SIL-4-compliant axioms with TMR (Triple Modular Redundancy) reasoning for mission-critical reliability.
- Deep Persistence: Event-sourced memory system that survives reboots and provides a complete audit trail of agent thoughts and actions.
- Temporal Capabilities: Native scheduling for cron jobs and delayed tasks, enabling agents to act autonomously over time.
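The TMR (Triple Modular Redundancy) idea mentioned above can be sketched framework-agnostically: run three independent reasoning channels and accept a result only when at least two agree. A minimal sketch; `tmr_vote` and the stand-in channels are illustrative, not Æon's actual API.

```python
from collections import Counter
from typing import Callable, TypeVar

T = TypeVar("T")

def tmr_vote(channels: list[Callable[[str], T]], prompt: str) -> T:
    """Run three independent reasoning channels and majority-vote the result.

    Raises RuntimeError when no 2-of-3 majority exists, so the caller can
    fall back to a safe default instead of trusting any single channel.
    """
    if len(channels) != 3:
        raise ValueError("TMR requires exactly three channels")
    results = [ch(prompt) for ch in channels]
    winner, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError(f"no majority among {results!r}")
    return winner

# Deterministic stand-in "channels" for illustration:
safe = tmr_vote(
    [lambda p: "SHUTDOWN", lambda p: "SHUTDOWN", lambda p: "CONTINUE"],
    "reactor over temperature",
)
```

The key property is that a single faulty channel (hallucination, transient error) cannot decide the outcome on its own.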
- Routing Layer: Intelligent pattern-based message routing with 5 distinct strategies (Priority, Weighted, etc.).
- Gateway Layer: Centralized communication hub with session management and TTL support.
- Security Layer: Policy-based access control, AES encryption, and multi-provider authentication.
- Health Layer: Real-time system monitoring, metrics collection (Counter, Gauge, etc.), and diagnostics.
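As an illustration of the pattern-based routing above, here is a minimal sketch of the Priority strategy (one of the five mentioned): lower-numbered routes are matched first against glob-style topic patterns. The `Route` shape and handler names are assumptions for the sketch, not Æon's real types.

```python
import fnmatch
from dataclasses import dataclass, field
from typing import Callable

@dataclass(order=True)
class Route:
    priority: int  # lower value wins (checked first); only field compared
    pattern: str = field(compare=False)  # glob-style topic pattern
    handler: Callable[[str, dict], None] = field(compare=False)

class PriorityRouter:
    def __init__(self) -> None:
        self._routes: list[Route] = []

    def add(self, pattern: str, handler, priority: int = 100) -> None:
        self._routes.append(Route(priority, pattern, handler))
        self._routes.sort()  # keep highest-priority (lowest number) first

    def dispatch(self, topic: str, message: dict) -> bool:
        for route in self._routes:
            if fnmatch.fnmatch(topic, route.pattern):
                route.handler(topic, message)
                return True
        return False  # no route matched

# Usage: the safety route outranks telemetry even though both patterns match.
seen = []
router = PriorityRouter()
router.add("sensor.*", lambda t, m: seen.append(("telemetry", t)), priority=50)
router.add("sensor.alarm*", lambda t, m: seen.append(("safety", t)), priority=1)
router.dispatch("sensor.alarm.thermal", {"level": "critical"})
```

A Weighted strategy would replace the sorted scan with a weighted random choice among matching routes; the pattern-matching core stays the same.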
- Deterministic Safety: Stop begging the model to be safe. Enforce safety at the runtime level with Axioms.
- Neuro-Symbolic Core: The perfect balance between LLM intuition and hard-coded rules.
- Protocol-First: Native support for A2A (Agent-to-Agent) and MCP (Model Context Protocol).
- Enterprise Ready: Built with observability, economics (cost tracking), and health monitoring from the ground up.
- Local-First & Private: Run entirely on your hardware with Ollama or connect to premium cloud providers.
- Stark Visual Feedback: Terminal-native UI components for monitoring agent execution in real time.
uv is the fastest way to manage Æon dependencies:
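"Enforce safety at the runtime level" means the check runs as ordinary code on every action, regardless of what the model proposes. A framework-agnostic sketch of that interception pattern, mirroring allow/REJECT/OVERRIDE semantics; the `axiom` decorator and all names here are illustrative, not Æon's API (Æon's own `@agent.axiom` is shown in the quickstart below).

```python
from functools import wraps
from typing import Callable

def axiom(check: Callable[[dict], "bool | dict"]):
    """Wrap an actuator so every command passes a deterministic check first.

    The check may return True (allow), False (reject), or a replacement
    command dict (override).
    """
    def decorate(actuate: Callable[[dict], str]):
        @wraps(actuate)
        def guarded(command: dict) -> str:
            verdict = check(command)
            if verdict is False:
                return "REJECTED"
            if isinstance(verdict, dict):
                command = verdict  # deterministic override of the model's plan
            return actuate(command)
        return guarded
    return decorate

def power_limit(command: dict):
    # Hard ceiling the model cannot talk its way past.
    if command.get("power", 0) > 100:
        return {**command, "power": 100}
    return True

@axiom(power_limit)
def actuate(command: dict) -> str:
    return f"power set to {command['power']}"

# The model may "ask" for 250%, but the runtime clamps it to 100.
result = actuate({"power": 250})
```

Because the guard is plain code, its behavior is testable and reproducible, unlike a safety instruction embedded in a prompt.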
```bash
# Clone the repository
git clone https://github.com/richardsonlima/aeon-core.git
cd aeon-core

# Create environment and install
uv sync
```

Or install with pip:

```bash
pip install aeon-core
```

From zero to agent in three commands:
```bash
# Initialize a new project
aeon init my-safety-agent

# Configure your model in aeon.yaml
# (Default: google/gemini-2.0-flash-001)

# Run a task interactively
aeon run "Check reactor thermal status"

# Start the production gateway
aeon serve --port 8000
```

```python
from aeon import Agent
from aeon.protocols import A2A, MCP

# Initialize the agent with the Trust Stack
agent = Agent(
    name="Sentinel",
    model="google/gemini-2.0-flash-001",
    protocols=[A2A(port=8000), MCP(servers=["industrial_tools.py"])]
)

# Define an Unbreakable Axiom (System 2)
@agent.axiom(on_violation="OVERRIDE")
def safety_limit(command: dict) -> bool | dict:
    """SAFETY RULE: Power output cannot exceed 100%."""
    if command.get("power", 0) > 100:
        return {"power": 100, "warning": "AXIOM_LIMIT_REACHED"}
    return True

if __name__ == "__main__":
    agent.start()
```

```python
from aeon import Agent
from aeon.core.config import TrustLevel

agent = Agent(name="Researcher", trust_level=TrustLevel.FULL)

async def main():
    # Agent can autonomously browse and remember
    response = await agent.run("Find the latest paper on SIL-4 safety and save the summary.")
    print(f"Agent Action: {response.last_thought}")

# Run via CLI: aeon run ...
```

Æon now features a completely redesigned MCP implementation that provides robust, production-ready integration with external tools:
- Synapse Layer: Unified tool discovery and invocation.
- Standard Support: Full compliance with the latest MCP specification.
- Multi-Server: Connect to multiple MCP servers simultaneously (Stdio, SSE).
- Type Safety: Automatic parameter validation for tool calls.
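The "automatic parameter validation" bullet can be illustrated with a small sketch: check a tool call's arguments against a JSON-Schema-style declaration (the shape MCP servers use to advertise tools) before invoking it. The `ADJUST_VALVE` declaration and `validate_call` helper are invented for illustration, not part of Æon or the MCP SDK.

```python
def validate_call(schema: dict, arguments: dict) -> list[str]:
    """Check arguments against a minimal JSON-Schema-like tool declaration."""
    errors = []
    type_map = {"string": str, "number": (int, float),
                "integer": int, "boolean": bool}
    props = schema.get("properties", {})
    # 1) Every required argument must be present.
    for name in schema.get("required", []):
        if name not in arguments:
            errors.append(f"missing required argument: {name}")
    # 2) Every supplied argument must be declared and well-typed.
    for name, value in arguments.items():
        spec = props.get(name)
        if spec is None:
            errors.append(f"unknown argument: {name}")
        elif not isinstance(value, type_map[spec["type"]]):
            errors.append(f"{name}: expected {spec['type']}")
    return errors

# Hypothetical tool declaration for an industrial valve.
ADJUST_VALVE = {
    "properties": {
        "valve_id": {"type": "string"},
        "position": {"type": "number"},
    },
    "required": ["valve_id", "position"],
}

# An LLM-produced call with the wrong type is caught before it reaches hardware.
errs = validate_call(ADJUST_VALVE, {"valve_id": "V-101", "position": "open"})
```

Rejecting malformed calls at this boundary keeps type errors out of the tool process entirely, rather than surfacing them as runtime failures downstream.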
Æon is organized into 4 core layers plus supporting modules, each providing critical functionality for advanced agents:
- Cortex: Neuro-reasoning via LLMs.
- Executive: Deterministic control via Axioms.
- Hive: Standardized communication (A2A).
- Synapse: Tool integration (MCP).
- Integrations: Multi-platform connectivity (Telegram, Discord, Slack).
- Extensions: Dynamic capability loading.
- Dialogue: Persistent, event-sourced conversation history.
- Dispatcher: Event-driven pub/sub architecture.
- Automation: Temporal task scheduling (Cron/Interval).
- Observability: Life-cycle hooks and audit trails.
- Economics: Real-time token tracking and cost calculation.
- CLI: Premium developer interface.
- Routing: High-performance message distribution.
- Gateway: Centralized session and transport management.
- Security: Authentication, authorization, and encryption.
- Health: System diagnostics and metrics.
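The Dialogue layer's event-sourced design can be sketched with stdlib `sqlite3`: events are append-only, and state is rebuilt by replaying them, which is what lets the history double as an audit trail and survive reboots. Table layout and function names are illustrative, not Æon's actual schema.

```python
import json
import sqlite3

def open_store(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events ("
        " id INTEGER PRIMARY KEY AUTOINCREMENT,"
        " kind TEXT NOT NULL,"
        " payload TEXT NOT NULL)"
    )
    return conn

def append(conn: sqlite3.Connection, kind: str, payload: dict) -> None:
    # Events are only ever appended, never updated: a complete audit trail.
    conn.execute("INSERT INTO events (kind, payload) VALUES (?, ?)",
                 (kind, json.dumps(payload)))
    conn.commit()

def replay(conn: sqlite3.Connection) -> list:
    # Current state is a pure function of the event log, so it can be
    # reconstructed after a restart from the same database file.
    rows = conn.execute("SELECT kind, payload FROM events ORDER BY id").fetchall()
    return [(kind, json.loads(payload)) for kind, payload in rows]

store = open_store()
append(store, "thought", {"text": "cooling margin low"})
append(store, "action", {"tool": "adjust_valve", "position": 0.8})
history = replay(store)
```

With a file-backed path instead of `:memory:`, the same replay reconstructs the agent's full thought-and-action history after a crash or reboot.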
```python
from aeon import Agent
from aeon.protocols import A2A, MCP

controller = Agent(
    name="Reactor_Overseer_01",
    role="Industrial Automation Monitor",
    model="gemini-1.5-flash",
    protocols=[
        A2A(port=8000),
        MCP(servers=["mcp-server-industrial"])
    ]
)

@controller.axiom(on_violation="REJECT")
def enforce_safety(command: dict):
    # Any command attempting to disable cooling is rejected
    if command.get("action") == "DISABLE_COOLING":
        return False
    return True

if __name__ == "__main__":
    controller.start()
```

```text
Æon Core v0.4.0-ULTRA initialized
├── A2A Server: Online at http://0.0.0.0:8000 (Unified Standard)
├── MCP Client: Connected (4 tools loaded: read_sensor, adjust_valve...)
├── Axioms: 2 Active (enforce_safety, thermal_limit)
└── Brain: Gemini-2.0-Flash (Ready)
```
- GitHub Issues: Report bugs or request features.
- Aeon Landing Page: Visit our landing page for deep dives.
- Contributing Guide: Learn how to join the mission.
If you use Æon in your research, please cite it as:
```bibtex
@software{richardsonlima-aeon-framework,
  author  = {LIMA, Richardson Edson de},
  title   = {Aeon Framework: The Neuro-Symbolic Runtime for Deterministic AI Agents},
  url     = {https://github.com/richardsonlima/aeon-core},
  version = {0.4.0-ULTRA},
  year    = {2026},
}
```

**Richardson Lima (Rick)**

- GitHub: richardsonlima
- LinkedIn: richardsonlima
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Made with ❤️ for AI Safety by Richardson Lima.