This repository is a hands-on workshop that teaches how to run an agentic inner loop inside Visual Studio Code by connecting:
- GitHub Copilot Chat (Ask / Edit / Agent / Plan)
- AI Toolkit for VS Code (prompt/agent iteration, bulk run, evaluation, version comparison)
- GitHub Issues (task decomposition, progress tracking, feedback loop closure)
You will repeatedly run a deep loop:
Spec → Plan → Tasks → Implement → Run/Evaluate → Feedback → (back to Spec/Plan)
The deliverable is a tiny but realistic “Issue Triage Assistant” CLI and a set of reusable workflows (prompt files, templates, issue templates) that make the loop repeatable.
A small Python CLI that takes a GitHub Issue title and body and produces a schema-validated JSON response:
```json
{
  "type": "bug | feature | docs | question",
  "priority": "p0 | p1 | p2",
  "labels": ["..."],
  "rationale": "short explanation"
}
```

The CLI ships with:
- a rule-based adapter (offline baseline, deterministic)
- a GitHub Models adapter (hosted models via your GitHub token)
- a Microsoft Foundry adapter (hosted models via Azure AI inference endpoints)
- an optional OpenAI-compatible adapter (fallback for existing OpenAI-style endpoints)
- tests and CI so you can keep “deterministic correctness” while iterating on “probabilistic quality”.
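The schema shown above can be enforced deterministically before any probabilistic quality checks run. The following is a minimal hand-rolled sketch of that kind of check — the repository's actual schema module may use a JSON Schema library instead, and the function name here is an assumption:

```python
# Hypothetical sketch: validate the triage response shape shown above.
# Field names and allowed values mirror the example JSON in this README.
ALLOWED_TYPES = {"bug", "feature", "docs", "question"}
ALLOWED_PRIORITIES = {"p0", "p1", "p2"}

def validate_triage(payload: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if payload.get("type") not in ALLOWED_TYPES:
        errors.append(f"type must be one of {sorted(ALLOWED_TYPES)}")
    if payload.get("priority") not in ALLOWED_PRIORITIES:
        errors.append(f"priority must be one of {sorted(ALLOWED_PRIORITIES)}")
    labels = payload.get("labels")
    if not isinstance(labels, list) or not all(isinstance(x, str) for x in labels):
        errors.append("labels must be a list of strings")
    if not isinstance(payload.get("rationale"), str) or not payload["rationale"]:
        errors.append("rationale must be a non-empty string")
    return errors

ok = {"type": "bug", "priority": "p1", "labels": ["crash"], "rationale": "stack trace in body"}
print(validate_triage(ok))  # → []
```

A check like this is what makes the "deterministic correctness" gate cheap to run in tests and CI, independent of which adapter produced the JSON.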
- `docs/workshop/` — step-by-step workshop modules
- `docs/templates/` — templates for spec/plan/eval reports
- `docs/providers.md` — how to configure GitHub Models / Foundry for hosted-model runs
- `.github/prompts/` — Copilot prompt files invoked via `/` in the Chat view
- `.github/ISSUE_TEMPLATE/` — standardized issue templates for tasks, bugs, and evaluation regressions
- `src/triage_assistant/` — the CLI + schema + adapters
- `datasets/` — a small evaluation dataset for AI Toolkit and local evaluation
- `reports/eval/` — where you save evaluation notes so feedback becomes actionable work
- VS Code (latest stable recommended)
- Extensions:
- GitHub Copilot
- GitHub Copilot Chat
- AI Toolkit for VS Code
- GitHub Pull Requests and Issues
- Python (and Pylance)
- Python 3.11+ installed locally OR Docker for Dev Container support
Open this repository in VS Code and:
- Install the Dev Containers extension if not already installed
- When prompted, click Reopen in Container
- Or use the Command Palette: `Dev Containers: Reopen in Container`
- VS Code will build the container and install dependencies automatically
- The default container uses Python 3.12
To switch to Python 3.11:
- Edit `.devcontainer/devcontainer.json`
- Change `"service": "py312"` to `"service": "py311"`
- Rebuild the container: `Dev Containers: Rebuild Container`
See .devcontainer/README.md for more details.
Create and activate a virtual environment, then install:
```shell
python -m venv .venv
# Windows: .venv\Scripts\activate
source .venv/bin/activate
python -m pip install --upgrade pip
pip install -e ".[dev]"
```

Run the CLI:

```shell
triage-assistant triage --title "Crash on startup" --body "Steps to reproduce: ..."
```

Run tests:

```shell
pytest -q
```

By default, `triage-assistant` uses the deterministic dummy adapter so the workshop can run without any external credentials.
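The deterministic baseline can be pictured as a handful of keyword rules. This is a hypothetical illustration of the idea — the function name, keywords, and priority mapping are assumptions, not the repository's actual rule set:

```python
# Hypothetical sketch of a rule-based (deterministic) adapter: same input
# always yields the same schema-shaped output, with no network access.
def rule_based_triage(title: str, body: str) -> dict:
    text = f"{title} {body}".lower()
    if any(word in text for word in ("crash", "error", "traceback", "broken")):
        kind, priority = "bug", "p0" if "crash" in text else "p1"
    elif any(word in text for word in ("feature", "support", "add ")):
        kind, priority = "feature", "p2"
    elif any(word in text for word in ("docs", "documentation", "readme")):
        kind, priority = "docs", "p2"
    else:
        kind, priority = "question", "p2"
    return {
        "type": kind,
        "priority": priority,
        "labels": [kind],
        "rationale": f"matched deterministic rules for '{kind}'",
    }

print(rule_based_triage("Crash on startup", "Steps to reproduce: ..."))
```

Because the output is reproducible, this adapter doubles as the reference point for comparing hosted-model runs later in the workshop.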
Goal: Confirm a hosted provider can produce schema-valid JSON end-to-end.
Steps:
- Configure a provider (GitHub Models or Microsoft Foundry).
- Keep secrets in environment variables or a local `.env` file (start from `.env.example`).
- Do not commit `.env` or paste tokens/keys into docs/issues.
- Run a minimal smoke test:
  ```shell
  triage-assistant triage --adapter auto --title "Crash on startup" --body "Steps to reproduce: ..." --pretty
  ```

Expected output: a JSON response printed to stdout that validates against the schema (no stack trace).
If you want to call a hosted model from the CLI, configure a provider and then run with `--adapter ...` (or set `TRIAGE_PROVIDER`).
For full details (including environment variables), see:
docs/providers.md
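Conceptually, `--adapter auto` has to pick a provider from whatever the environment offers. The sketch below is a hypothetical illustration of that resolution — the actual precedence rules live in the CLI and `docs/providers.md`, so treat this ordering and the adapter names as assumptions:

```python
import os

# Hypothetical sketch: resolve an adapter name from CLI choice + environment.
# An explicit --adapter always wins; otherwise fall back through env vars
# to the offline deterministic baseline.
def resolve_adapter(cli_choice: str = "auto", env=os.environ) -> str:
    if cli_choice != "auto":
        return cli_choice                     # explicit --adapter wins
    if env.get("TRIAGE_PROVIDER"):
        return env["TRIAGE_PROVIDER"]         # explicit env override
    if env.get("TRIAGE_GITHUB_TOKEN"):
        return "github"
    if env.get("TRIAGE_FOUNDRY_ENDPOINT"):
        return "foundry"
    return "dummy"                            # offline deterministic baseline

print(resolve_adapter("auto", {}))                            # → dummy
print(resolve_adapter("auto", {"TRIAGE_GITHUB_TOKEN": "x"}))  # → github
```

Passing the environment as a parameter (defaulting to `os.environ`) keeps the resolution logic trivially unit-testable without mutating real process state.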
- Create a token that can access GitHub Models.
- Set environment variables:

  ```shell
  export TRIAGE_GITHUB_TOKEN="..."
  export TRIAGE_GITHUB_MODEL="openai/gpt-4.1"  # optional; default shown
  ```

- Run:

  ```shell
  triage-assistant triage --adapter github --title "Crash on startup" --body "Steps to reproduce: ..." --pretty
  ```
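Under the hood, a hosted-model adapter sends a chat-style request whose system prompt pins the model to the response schema. The sketch below only builds such a payload — it is a hypothetical illustration (the prompt wording, function name, and default model are assumptions, and no request is sent):

```python
# Hypothetical sketch of the chat payload a hosted-model adapter could send.
# Nothing here is the repository's actual prompt; no network call is made.
SYSTEM_PROMPT = (
    "You are an issue triage assistant. Reply with ONLY a JSON object with "
    'keys "type" (bug|feature|docs|question), "priority" (p0|p1|p2), '
    '"labels" (list of strings), and "rationale" (short string).'
)

def build_chat_payload(title: str, body: str, model: str = "openai/gpt-4.1") -> dict:
    return {
        "model": model,
        "temperature": 0,  # favor reproducible output while iterating
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Title: {title}\n\nBody: {body}"},
        ],
    }

payload = build_chat_payload("Crash on startup", "Steps to reproduce: ...")
print(payload["messages"][0]["role"])  # → system
```

Keeping the payload construction separate from transport is what lets the same prompt be reused across the GitHub Models, Foundry, and OpenAI-compatible adapters.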
- Deploy a model in Microsoft Foundry and copy the Azure AI inference endpoint and key.
- Set environment variables:

  ```shell
  export TRIAGE_FOUNDRY_ENDPOINT="https://<resource-name>.services.ai.azure.com/models"
  export TRIAGE_FOUNDRY_API_KEY="..."  # or AZURE_INFERENCE_CREDENTIAL
  export TRIAGE_FOUNDRY_MODEL="<deployment-name>"
  ```

- Run:

  ```shell
  triage-assistant triage --adapter foundry --title "Crash on startup" --body "Steps to reproduce: ..." --pretty
  ```

Start here:
docs/workshop/00_overview.md
Then follow modules in order:
- Setup (`01_setup.md`)
- Spec (`02_spec.md`)
- Plan (`03_plan.md`)
- Tasks (`04_issues.md`)
- Implement (`05_implement.md`)
- Run & Evaluate (`06_run_and_evaluate.md`)
- Feedback loop closure (`07_feedback.md`)
- Retro and mental model (`08_retro.md`)
- Keep changes small and issue-scoped.
- Every task has a Definition of Done (DoD) and a validation command.
- Deterministic gates: tests, schema validation, linting.
- Probabilistic gates: AI Toolkit evaluation runs and version comparisons.
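A deterministic gate can be as simple as a pytest-style test that the offline adapter is reproducible — same input, same output — so CI can fail fast while hosted-model quality is judged separately in AI Toolkit. This is a hypothetical sketch; the inline stand-in adapter and test name are assumptions, not the repository's actual tests:

```python
# Hypothetical sketch of a deterministic gate as a pytest-style test.
def dummy_adapter(title: str, body: str) -> dict:
    # stand-in for the repository's rule-based adapter
    kind = "bug" if "crash" in title.lower() else "question"
    return {"type": kind, "priority": "p1", "labels": [kind], "rationale": "rules"}

def test_dummy_adapter_is_deterministic():
    first = dummy_adapter("Crash on startup", "Steps to reproduce: ...")
    second = dummy_adapter("Crash on startup", "Steps to reproduce: ...")
    assert first == second          # reproducible: same input, same output
    assert first["type"] == "bug"   # and still schema-shaped

test_dummy_adapter_is_deterministic()
```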
MIT. See LICENSE.