
Patch-Pal

Patch-Pal is a small Boot.dev project that wires Gemini tool-calling to a local sandbox. It exposes a handful of file and process utilities as tools and lets the model request them in a loop until it returns a final answer.

What’s included

  • A chat runner that manages the Gemini request/response loop, tool calls, and conversation history.
  • Tool functions for listing files, reading files (with truncation), writing files, and running Python files.
  • A simple CLI entry point.
  • Test scripts that exercise the tool functions directly.
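The request/response loop described above can be sketched in a few lines. This is a simplified, hypothetical version (the real one lives in chat_runner.py and talks to the Gemini API; `run_chat`, `MAX_ITERATIONS`, and the message shapes here are illustrative assumptions, not the project's actual names):

```python
# Simplified sketch of the tool-calling loop: ask the model, dispatch any
# requested tool, feed the result back, and stop on a final text answer.
MAX_ITERATIONS = 20  # assumed safety cap; the real limit is in chat_runner.py

def run_chat(model, tools, user_prompt):
    """Loop until the model returns a final text answer (or we hit the cap)."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(MAX_ITERATIONS):
        reply = model(messages)             # one model round-trip
        call = reply.get("function_call")
        if call is None:                    # no tool requested: final answer
            return reply["text"]
        fn = tools[call["name"]]
        result = fn(**call["args"])         # tools always return strings
        messages.append({"role": "tool", "content": result})
    return "Reached max iterations without a final answer."
```

A stub model standing in for Gemini shows the flow: the first reply requests a tool call, the second returns text, and the loop terminates.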

Project layout

  • main.py CLI entry point.
  • chat_runner.py Gemini request/response loop.
  • prompt.py system prompt used for the model.
  • tools.py tool registry and thin dispatcher (legacy).
  • functions/ LLM-callable functions and their schemas.
  • config.py configuration (e.g., MAX_CHARS for file reads).
  • calculator/ sample project the tools operate on.

Requirements

  • Python 3.11+ (uses a local .venv in this repo)
  • uv for running scripts
  • A Gemini API key in .env:
    • GEMINI_API_KEY=...

Setup

cp .env.example .env

Run

uv run main.py "run tests.py" --verbose

Tool tests

uv run test_get_files_info.py
uv run test_get_file_content.py
uv run test_write_file.py
uv run test_run_python_file.py

Notes

  • Tool functions always return strings and never raise errors to the model.
  • File reads are truncated at MAX_CHARS (see config.py).
  • The chat loop stops when the model returns a final response, or after a max iteration limit.

About

An AI coding assistant in the spirit of Cursor or Claude Code: it can list directories, read and modify file contents, and run Python files.
