Agentic Programming

Python functions that think.
A programming paradigm where Python controls the flow and the LLM handles the reasoning.


Getting Started · API Reference · 中文


This is a paradigm proposal. Current LLM agent frameworks let the LLM control everything: what to do, when, and how. The result? Unpredictable execution, context explosion, and no output guarantees. We flip this: Python controls the flow, and the LLM only reasons when asked.

[Screenshot: Agentic Programming code example]

Quick Start

Prerequisites

Agentic Programming requires at least one LLM provider. Set up any one:

Provider         Setup
Claude Code CLI  npm i -g @anthropic-ai/claude-code && claude login
Codex CLI        npm i -g @openai/codex && codex auth
Gemini CLI       npm i -g @google/gemini-cli
Anthropic API    export ANTHROPIC_API_KEY=...
OpenAI API       export OPENAI_API_KEY=...
Gemini API       export GOOGLE_API_KEY=...

Then choose how you want to use it:

Option A: Python (write agentic code)

Install the package and start coding:

pip install agentic-programming          # core package
pip install "agentic-programming[openai]" # add API provider (or [anthropic], [gemini])
from agentic import agentic_function, create_runtime

runtime = create_runtime()

@agentic_function
def login_flow(username, password):
    """Complete login flow."""
    # observe / click / verify are other agentic functions, assumed defined elsewhere
    observe(task="find login form")       # Python decides what to do
    click(element="login button")         # Python decides the order
    return verify(expected="dashboard")   # Python decides when to stop

Option B: Skills (let your LLM agent use it)

pip install agentic-programming
agentic install-skills                    # auto-detects Claude Code / Gemini CLI

Or manually:

git clone https://github.com/Fzkuji/Agentic-Programming.git
cp -r Agentic-Programming/skills/* ~/.claude/skills/    # Claude Code
cp -r Agentic-Programming/skills/* ~/.gemini/skills/    # Gemini CLI

Then talk to your agent: "Create a function that extracts emails from text"

The agent picks up the skill, calls agentic create, and the generated function handles everything from there.

Option C: MCP (connect any MCP client)

Install with the MCP extra, then add to your client config:

pip install "agentic-programming[mcp]"
{
    "mcpServers": {
        "agentic": {
            "command": "python",
            "args": ["-m", "agentic.mcp"]
        }
    }
}

This starts a local MCP server that any compatible client (Claude Desktop, Cursor, VS Code, etc.) can connect to. Exposes: list_functions, run_function, create_function, create_application, fix_function.

Verify your setup with agentic providers.


Why Agentic Programming?

Python controls flow, LLM reasons

Principle           How
Deterministic flow  Python controls if/else/for/while. Execution is guaranteed, not suggested.
Minimal LLM calls   Call the LLM only when reasoning is needed: 2 calls, not 10.
Docstring = Prompt  Change the docstring, change the LLM's behavior. No separate prompt files.
Self-evolving       Functions generate, fix, and improve themselves at runtime.

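The "Docstring = Prompt" idea can be illustrated with a toy decorator. This is a sketch of the concept, not the library's actual implementation; `fake_llm` and `agentic` here are hypothetical stand-ins:

```python
import functools

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM provider call (hypothetical stub)."""
    return f"[reasoned about: {prompt}]"

def agentic(fn):
    """Toy decorator: the function's docstring becomes the LLM prompt."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        prompt = (fn.__doc__ or "").strip()
        # Python decides when the LLM is consulted; the docstring is the prompt.
        reasoning = fake_llm(prompt)
        return fn(*args, reasoning=reasoning, **kwargs)
    return wrapper

@agentic
def classify(text, reasoning=None):
    """Classify the sentiment of the given text."""
    return reasoning

print(classify("I love this!"))
# prints "[reasoned about: Classify the sentiment of the given text.]"
```

Editing the docstring changes the prompt; there is no separate prompt file to keep in sync.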
The problem with current frameworks

LLM as Scheduler

Current LLM agent frameworks place the LLM as the central scheduler. This creates three fundamental problems:

  • Unpredictable execution: the LLM may skip, repeat, or invent steps regardless of defined workflows
  • Context explosion: each tool-call round-trip accumulates history
  • No output guarantees: the LLM interprets instructions rather than executing them

The core issue: the LLM controls the flow, but nothing enforces it. Skills, prompts, and system messages are suggestions, not guarantees.

                   Tool-Calling / MCP      Agentic Programming
Who schedules?     LLM decides             Python decides
Functions contain  Code only               Code + LLM reasoning
Context            Flat conversation       Structured tree
Prompt             Hidden in agent config  Docstring = prompt
Self-improvement   Not built-in            create → fix → evolve

MCP is the transport. Agentic Programming is the execution model. They're orthogonal.
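The comparison above can be made concrete with a toy sketch: Python owns the control flow and guarantees every step runs, while a (stubbed, hypothetical) LLM is consulted only at the single step that needs judgment:

```python
def stub_llm(prompt: str) -> str:
    """Stand-in for a provider call; returns a canned classification."""
    return "spam" if "WIN A PRIZE" in prompt else "ham"

def triage(messages):
    """Python guarantees every message is processed exactly once, in order."""
    results = {}
    for msg in messages:                      # deterministic loop: no skipped or invented steps
        label = stub_llm(f"Classify: {msg}")  # the LLM reasons only here
        results[msg] = label
    return results

out = triage(["hello there", "WIN A PRIZE now"])
print(out)  # {'hello there': 'ham', 'WIN A PRIZE now': 'spam'}
```

With the loop in Python, execution order and termination are properties of the code, not of the model's behavior.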


Key Features

Automatic Context

Every @agentic_function call creates a Context node. Nodes form a tree that is automatically injected into LLM calls:

login_flow ✓ 8.8s
├── observe ✓ 3.1s → "found login form at (200, 300)"
├── click ✓ 2.5s → "clicked login button"
└── verify ✓ 3.2s → "dashboard confirmed"

When verify calls the LLM, it automatically sees what observe and click returned. No manual context management.

Deep Work: Autonomous Quality Loop

For complex tasks that demand sustained effort and high standards, deep_work runs an autonomous plan-execute-evaluate loop until the result meets the specified quality level:

from agentic.functions.deep_work import deep_work

result = deep_work(
    task="Write a survey on context management in LLM agents.",
    level="phd",        # high_school → bachelor → master → phd → professor
    runtime=runtime,
)

The agent clarifies requirements upfront, then works fully autonomously, executing, self-evaluating, and revising until the output passes quality review. State is persisted to disk, so interrupted work resumes where it left off.
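The loop behind this can be sketched as follows. This is a toy model with stubbed execute/evaluate steps and made-up numeric thresholds, not the library's implementation:

```python
# Hypothetical mapping of quality levels to score thresholds (0-100).
LEVELS = {"high_school": 50, "bachelor": 60, "master": 70, "phd": 85, "professor": 95}

def deep_work_sketch(task, level, execute, evaluate, max_rounds=10):
    """Plan-execute-evaluate loop: revise until the score meets the level's bar."""
    threshold = LEVELS[level]
    draft, feedback = None, None
    for round_no in range(1, max_rounds + 1):
        draft = execute(task, draft, feedback)   # produce or revise the draft
        score, feedback = evaluate(draft)        # self-evaluate against the bar
        if score >= threshold:
            return draft, round_no
    return draft, max_rounds

# Stub steps: each revision improves the "score" by 20 points.
def execute(task, draft, feedback):
    return (draft or 0) + 20

def evaluate(draft):
    return draft, "keep raising quality"

result, rounds = deep_work_sketch("survey on context management", "phd", execute, evaluate)
print(result, rounds)  # 100 5
```

A higher level only raises the threshold; the loop itself, and the guarantee that it terminates, stay in Python.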

Self-Evolving Code

Functions can generate new functions, fix broken ones, and scaffold complete apps, all at runtime:

from agentic.meta_functions import create, create_app, fix

# Generate a function from description
sentiment = create("Analyze text sentiment", runtime=runtime, name="sentiment")
sentiment(text="I love this!")  # → "positive"

# Generate a complete app (runtime + argparse + main)
create_app("Summarize articles from URLs", runtime=runtime, name="summarizer")
# → agentic/apps/summarizer.py

# Fix a broken function β€” auto-reads source & error history
fixed = fix(fn=broken_fn, runtime=runtime, instruction="return JSON, not plain text")

The create → run → fail → fix → run cycle means programs improve themselves through use.
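The cycle can be sketched with `exec`-compiled source and stubbed generators. In the real library, `create` and `fix` call an LLM to write and repair the source; `stub_generate`, `stub_fix`, and `shout` here are hypothetical:

```python
def stub_generate(description):
    """Stand-in for an LLM writing source; deliberately buggy first draft."""
    return "def shout(text):\n    return text.upper\n"   # bug: missing ()

def stub_fix(source, error):
    """Stand-in for an LLM repairing source from the error report."""
    return source.replace("text.upper", "text.upper()")

def compile_fn(source, name):
    """exec the generated source and pull the function out of its namespace."""
    namespace = {}
    exec(source, namespace)
    return namespace[name]

# create → run → fail → fix → run
source = stub_generate("Uppercase the input text")
shout = compile_fn(source, "shout")
result = shout("hello")
if not isinstance(result, str):                 # the run "failed": wrong return type
    source = stub_fix(source, "expected str, got builtin method")
    shout = compile_fn(source, "shout")
    result = shout("hello")
print(result)  # HELLO
```

The key point is that the failing run's evidence (here, the wrong return type) feeds the fix step, so the program converges on working code through use.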

Ecosystem

Project                 Description
GUI Agent Harness       Autonomous GUI agent that operates desktop apps via vision + agentic functions. Python controls observe → plan → act → verify loops; the LLM only reasons when asked.
Research Agent Harness  Autonomous research agent: literature survey → idea → experiments → paper writing → cross-model review. Full pipeline from topic to submission-ready paper.

API Reference

Core

Import                                What it does
from agentic import agentic_function  Decorator. Records execution into the Context tree
from agentic import Runtime           LLM runtime. exec() calls the LLM with auto-context
from agentic import Context           Execution tree. tree(), save(), traceback()
from agentic import create_runtime    Create a Runtime via auto-detection or an explicit provider

Meta Functions

Import                                           What it does
from agentic.meta_functions import create        Generate a new @agentic_function from a description
from agentic.meta_functions import create_app    Generate a complete runnable app with main()
from agentic.meta_functions import fix           Fix broken functions via LLM analysis
from agentic.meta_functions import create_skill  Generate a SKILL.md for agent discovery

Built-in Functions

Import                                                       What it does
from agentic.functions.deep_work import deep_work            Autonomous plan-execute-evaluate loop with quality levels
from agentic.functions.agent_loop import agent_loop          General-purpose autonomous agent loop
from agentic.functions.general_action import general_action  Give the LLM full freedom to complete a single task
from agentic.functions.wait import wait                      LLM decides how long to wait based on context

Providers

Six built-in providers: Anthropic, OpenAI, Gemini (API), Claude Code, Codex, Gemini (CLI). All CLI providers maintain session continuity across calls. See Provider docs for details.

Integration

Guide            Description
Getting Started  3-minute setup and runnable examples
Claude Code      Use without an API key via the Claude Code CLI
OpenClaw         Use as an OpenClaw skill
API Reference    Full API documentation
Project Structure
agentic/
├── __init__.py              # agentic_function, Runtime, Context, create_runtime
├── function.py              # @agentic_function decorator
├── runtime.py               # Runtime (exec + retry + context injection)
├── context.py               # Context tree
├── meta_functions/          # Self-evolving code generation
│   ├── create.py            #   create(): generate a function
│   ├── create_app.py        #   create_app(): generate a complete app
│   ├── fix.py               #   fix(): rewrite broken functions
│   └── create_skill.py      #   create_skill(): generate SKILL.md
├── providers/               # Anthropic, OpenAI, Gemini, Claude Code, Codex, Gemini CLI
├── mcp/                     # MCP server (python -m agentic.mcp)
├── functions/               # Built-in agentic functions
│   ├── deep_work.py         #   Autonomous quality loop
│   ├── agent_loop.py        #   General agent loop
│   ├── general_action.py    #   Single-task action
│   └── wait.py              #   Context-aware waiting
└── apps/                    # Generated apps (from create_app)
skills/                      # SKILL.md files for agent integration
examples/                    # Runnable demos
tests/                       # pytest suite

Contributing

This is a paradigm proposal with a reference implementation. We welcome discussions, alternative implementations in other languages, use cases that validate or challenge the approach, and bug reports.

See CONTRIBUTING.md for details.

License

MIT
