A small, hackable terminal coding harness.
Think Claude Code or Codex CLI, but intentionally compact: small enough to read in an afternoon, understand end to end, and extend without fighting a framework.
mylo is a local REPL that lets a tool-calling model work inside the directory where you start it.
It can:
- read files
- write files
- make surgical file edits
- list directories
- run shell commands with approval
The model can choose tools, inspect the results, and keep iterating until it has a final answer. Responses stream live in the terminal, tool activity is shown inline, and file edits display a colored unified diff before they are written.
In this project, a harness is the layer between a model and your machine.
It is responsible for things like:
- the terminal REPL
- the system prompt and conversation state
- tool schemas and tool dispatch
- approval gates for risky actions
- streaming output and tool activity UI
- workspace scoping and local execution
Tools like Claude Code and Cursor package this experience as a polished product. mylo gives you the same core shape in a small codebase you can read, change, and extend yourself.
This project is for people who want to understand and extend a real tool-calling coding harness without starting from a large framework.
It is useful if you want to:
- learn how a coding agent works end to end
- study tool calling and streaming in a small codebase
- experiment with terminal UX for AI agents
- add your own tools, prompts, and workflows quickly
- streaming model output
- Rich-based terminal UI
- startup welcome panel
- slash commands for session control
- inline tool activity display
- cumulative token usage tracking
`/help`, `/model`, `/model <name>`, `/clear`, `/history`, `/usage`, `/exit`
- `read_file`: Read an entire file or a 1-indexed line range.
- `write_file`: Create a file or overwrite one when explicitly allowed.
- `edit_file`: Replace one exact unique string in a file and show a colored diff before writing.
- `list_dir`: List directory contents with directories marked using `/`.
- `run_bash`: Run shell commands with an approval prompt unless auto-approve is enabled.
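For context, tool-calling harnesses in this style typically declare each tool as a schema in OpenAI's function-calling format. A hedged sketch of what the `read_file` schema might look like, reconstructed from the argument names shown in the sample interaction below (`path`, `start_line`, `end_line`); the real schema lives in `toolkit/schemas.py` and may differ:

```python
# Illustrative sketch of a read_file tool schema in OpenAI
# function-calling format; not mylo's actual definition.
READ_FILE_SCHEMA = {
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read an entire file or a 1-indexed line range.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "File path relative to the workspace.",
                },
                "start_line": {
                    "type": "integer",
                    "description": "1-indexed first line to read (optional).",
                },
                "end_line": {
                    "type": "integer",
                    "description": "1-indexed last line to read (optional).",
                },
            },
            "required": ["path"],
        },
    },
}
```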
- Python 3.11+
- uv
- an API key for the currently configured provider
```shell
git clone https://github.com/ritsource/mylo.git
cd mylo
uv sync
```

Today, this repository is wired to OpenAI's API, so you need an `OPENAI_API_KEY` to run it as-is.

Either export it in your shell:

```shell
export OPENAI_API_KEY=sk-...
```

Or store it in a local `.env` file:

```
OPENAI_API_KEY=sk-...
```

Then run:

```shell
uv run mylo
uv run mylo -m gpt-4o-mini
uv run mylo -y
```

If you prefer using the installed console script directly:

```shell
source .venv/bin/activate
mylo
```

| Flag | Default | Description |
|---|---|---|
| `-m, --model` | `gpt-5-nano` | Model name to use |
| `--api-key` | env | API key, falls back to `OPENAI_API_KEY` |
| `-y, --auto-approve` | off | Skip bash approval prompts |
If neither `--api-key` nor `OPENAI_API_KEY` is set, mylo exits early with a readable error message.
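The flag surface and the early exit can be sketched with `argparse` (a hypothetical re-creation for illustration, not mylo's actual `cli.py`):

```python
import argparse
import os
import sys


def parse_args(argv=None):
    # Hypothetical re-creation of the CLI surface described in the
    # flags table above; the real parser lives in mylo/cli.py.
    parser = argparse.ArgumentParser(prog="mylo")
    parser.add_argument("-m", "--model", default="gpt-5-nano",
                        help="Model name to use")
    parser.add_argument("--api-key", default=None,
                        help="API key, falls back to OPENAI_API_KEY")
    parser.add_argument("-y", "--auto-approve", action="store_true",
                        help="Skip bash approval prompts")
    args = parser.parse_args(argv)
    # Fail fast with a readable message when no key is available.
    if args.api_key is None and not os.environ.get("OPENAI_API_KEY"):
        sys.exit("error: no API key; pass --api-key or set OPENAI_API_KEY")
    return args
```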
Try a few prompts like:
```
[mylo]> list the files in fixtures
[mylo]> read fixtures/hello-world.py and tell me what it does
[mylo]> change fixtures/hello-world.py so it prints "Hello from mylo!"
[mylo]> run python3 fixtures/hello-world.py
[mylo]> /usage
```
A typical interaction looks like:
```
[mylo]> add type hints to all functions in utils.py
⎔ read_file {"path": "utils.py", "start_line": 1, "end_line": null} → import argparse
--- a/utils.py
+++ b/utils.py
@@ ...
-def parse_args():
+def parse_args() -> argparse.Namespace:
⎔ edit_file {"path": "utils.py", ...}
Edited utils.py successfully.
Done. Added return type hints to all 6 functions in utils.py.
```
| Command | Description |
|---|---|
| `/help` | Show available commands |
| `/model` | Open the interactive model picker |
| `/model <name>` | Switch models directly |
| `/clear` | Clear conversation history but keep the system prompt |
| `/history` | Show the number of messages currently in context |
| `/usage` | Show cumulative prompt, completion, and total token usage |
| `/exit` | Quit the session |
`/model` opens an interactive picker for the currently supported tool-calling models:

- `gpt-5-nano`
- `gpt-4.1`
- `gpt-4o`
- `gpt-4o-mini`
On each turn, mylo sends:
- the conversation history
- the system prompt
- the tool schemas
to the currently configured model backend with streaming enabled.
If the model emits tool calls, mylo:
- reconstructs the streamed tool-call arguments
- executes the requested tools locally
- appends the tool results back into the conversation
- calls the model again
This repeats until the model returns a normal assistant response with no tool calls.
In the current codebase, that backend is OpenAI Chat Completions. The harness itself is intentionally small, so the provider-specific part is easier to swap later than it would be in a larger framework.
- file tools are scoped to the workspace where `mylo` was started
- `edit_file` only succeeds when the target string matches exactly once
- `write_file` refuses to overwrite existing files unless explicitly told to
- `run_bash` requires confirmation by default
- shell commands time out after 60 seconds
`read_file`:
- reads full files or 1-indexed line ranges
- rejects paths outside the workspace
`write_file`:
- creates parent directories automatically
- blocks accidental overwrite unless `overwrite=true`
`edit_file`:
- requires one exact unique match
- rejects zero-match and multi-match replacements
- prints a colored unified diff before writing
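The exact-unique-match contract plus the diff preview can be sketched with nothing but the standard library (illustrative; mylo's real implementation is in `toolkit/files.py`):

```python
import difflib


def edit_text(original: str, old: str, new: str) -> str:
    """Apply the edit_file contract: the target string must appear
    exactly once, and a unified diff is shown before writing."""
    count = original.count(old)
    if count == 0:
        raise ValueError("target string not found")
    if count > 1:
        raise ValueError(f"target string matched {count} times; must be unique")
    updated = original.replace(old, new, 1)
    diff = "\n".join(difflib.unified_diff(
        original.splitlines(), updated.splitlines(),
        fromfile="a/file", tofile="b/file", lineterm=""))
    print(diff)  # mylo colorizes this output before asking to write
    return updated
```

The uniqueness requirement is what makes the edit surgical: with zero or multiple matches the model's intent is ambiguous, so the tool refuses rather than guesses.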
`list_dir`:
- sorts directories first, then files
- adds `/` to directory names
`run_bash`:
- prompts for `y/n` approval unless `--auto-approve` is set
- returns stdout, stderr, and exit code
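That contract maps closely onto `subprocess.run`; a simplified sketch (the prompt wording and return shape are assumptions, not mylo's exact code):

```python
import subprocess


def run_bash(command: str, auto_approve: bool = False, timeout: int = 60) -> dict:
    """Prompt for y/n approval unless auto-approve is on, run the
    command with a 60-second timeout, and return stdout, stderr,
    and the exit code."""
    if not auto_approve:
        answer = input(f"run `{command}`? [y/n] ").strip().lower()
        if answer != "y":
            return {"stdout": "", "stderr": "command rejected by user",
                    "exit_code": -1}
    proc = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return {"stdout": proc.stdout, "stderr": proc.stderr,
            "exit_code": proc.returncode}
```

Returning a rejection result instead of raising lets the model see that the user declined and propose an alternative on the next turn.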
The codebase is split by responsibility so it is easy to extend:
```
mylo/
  agent.py       streaming chat loop and tool orchestration
  cli.py         argument parsing and dotenv loading
  main.py        process bootstrap
  prompts.py     system prompt
  repl.py        interactive shell and slash commands
  tools.py       compatibility exports for the tool surface
  ui.py          Rich terminal rendering
  toolkit/
    common.py    workspace and path helpers
    files.py     file tool implementations
    registry.py  tool dispatch
    schemas.py   tool schema aggregation
    shell.py     shell tool implementation
```
Adding a tool follows a simple pattern:
- implement the function
- add its schema
- register it in the tool registry
The agent picks it up automatically on the next run.
Run from source:
uv run myloRun the entry point directly:
./.venv/bin/python mylo/main.pyQuick compile check:
./.venv/bin/python -m py_compile mylo/main.py mylo/agent.py mylo/repl.pyEither:
- export
OPENAI_API_KEY - or pass
--api-key
Usage only increases after successful model calls. Local slash commands and cancelled operations do not consume model tokens.
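That accounting rule can be sketched as a small tracker that only adds after a successful call (illustrative; the field names follow the usage object returned with Chat Completions responses):

```python
class UsageTracker:
    """Cumulative token accounting: callers add usage only after a
    model call succeeds, so local slash commands and cancelled
    operations cost nothing."""

    def __init__(self) -> None:
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def add(self, usage: dict) -> None:
        # `usage` mimics the usage payload of a completed response.
        self.prompt_tokens += usage["prompt_tokens"]
        self.completion_tokens += usage["completion_tokens"]

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens
```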
All file tools are scoped to the repository root where `mylo` was started.
`run_bash` requires confirmation unless `--auto-approve` is enabled.
- conversation history grows over time; use `/clear` when context gets too large
- shell commands run with your user permissions once approved
- direct `/model <name>` does not validate names up front; invalid models fail on the next API call
- this is a local single-user harness, not a multi-tenant sandbox
MIT. See LICENSE.