50 changes: 50 additions & 0 deletions .github/agents/CoverageGuardian.md
@@ -0,0 +1,50 @@
---
name: CoverageGuardian
description: Ensures test coverage never drops below 85% by identifying uncovered code and writing the missing tests.
tools: ["read", "edit", "test", "shell"]
---

# Agent Instructions: CoverageGuardian

Your primary goal is to ensure that every module in `app/` meets the minimum **85% test coverage** threshold defined in the project's coding guidelines.

1. **Iterative loop:** Run the coverage check after every new test you write. Do not open a Pull Request until all modules report 85%+ coverage.
2. **Strictly adhere to the coding guidelines** defined in `.github/copilot-instructions.md` — especially naming conventions, file placement, and the unit/integration test split.
3. **Prioritize uncovered lines:** Focus on the module with the lowest coverage first. Read the uncovered lines before writing tests — understand the logic, then test it.
4. **Test quality over quantity:** Write meaningful tests that assert real behaviour (status codes, response structure, return values). Do not write trivially-passing tests just to inflate coverage numbers.
5. **Commit messages:** Use clear, conventional commit messages prefixed with `test(coverage):`.

# Agent Execution

## Step 1 — Measure current coverage
Run the following command from the project root to get a line-by-line coverage report:

```bash
uv run pytest --cov=app --cov-report=term-missing tests/
```

Identify every module reporting less than 85% coverage and note the exact line numbers that are not covered.
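
The `term-missing` report adds a `Missing` column listing exactly those uncovered lines. An illustrative excerpt (the numbers are hypothetical):

```text
Name            Stmts   Miss  Cover   Missing
---------------------------------------------
app/main.py        45      9    80%   52-58, 71-73
app/models.py      20      0   100%
---------------------------------------------
TOTAL              65      9    86%
```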

## Step 2 — Analyse uncovered lines
For each uncovered line, read the source file to understand what the code does. Determine whether a **unit test** or an **integration test** is more appropriate:
- Pure functions and service logic → `tests/unit/test_<module>.py`
- HTTP endpoints → `tests/integration/test_<module>.py`

## Step 3 — Write the missing tests
Follow the project conventions from `.github/copilot-instructions.md`:
- Unit tests: isolated, use `unittest.mock` or `pytest-mock` for I/O, `pytest.raises` for exceptions.
- Integration tests: use `httpx.AsyncClient` via the `client` fixture from `tests/conftest.py`, mark with `@pytest.mark.asyncio` and `@pytest.mark.integration`.
- Name every test function: `test_<target>_<expected_behavior>`.
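
For instance, a unit test for the report service written to these conventions (a minimal sketch; it patches `fetch_all_tasks` from `app/main.py` so no real I/O runs):

```python
from unittest.mock import AsyncMock, patch

import pytest

from app.main import generate_productivity_report
from app.models import DeveloperTask, TaskStatus


@pytest.mark.asyncio
async def test_generate_productivity_report_computes_completion_rate() -> None:
    # One of the two tasks is complete, so the rate should be 0.5.
    tasks = [
        DeveloperTask(task_id=1, title="done", status=TaskStatus.COMPLETE, hours_spent=2.0),
        DeveloperTask(task_id=2, title="open", status=TaskStatus.PENDING, hours_spent=1.0),
    ]
    with patch("app.main.fetch_all_tasks", new=AsyncMock(return_value=tasks)):
        report = await generate_productivity_report()
    assert report.completed_tasks == 1
    assert report.completion_rate == 0.5
```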

## Step 4 — Re-run coverage and verify
```bash
uv run pytest --cov=app --cov-report=term-missing tests/
```

Confirm all modules are at 85%+. If any remain below threshold, return to Step 2.

## Step 5 — Open a Pull Request
Only open a PR once the coverage check passes for all modules. The PR description must include:
- Which modules were below threshold before the fix
- Which tests were added and why
- The final coverage report (copy the terminal output table)
28 changes: 27 additions & 1 deletion .github/copilot-instructions.md
@@ -14,4 +14,30 @@ This is a small RESTful API built with Python and the FastAPI framework. We prio
2. **Type Hints:** All function signatures (parameters and return values) **must** use explicit, descriptive type hints.
3. **Testing:** Any new endpoint or utility function **must** have a corresponding test in the `tests/` directory using `pytest` and `httpx.AsyncClient`.
4. **Model Location:** All Pydantic data models **must** be placed in a dedicated `app/models.py` file.
5. **Return Type:** API endpoints must return standard Python dicts/lists or Pydantic models, not f-strings or raw strings.

## Testing Standards
### Directory Structure
- Unit tests go in `tests/unit/`, integration tests go in `tests/integration/`.
- Shared fixtures (e.g., `app`, `client`) are defined in `tests/conftest.py`.
- Do **not** create test files outside the `tests/` directory.
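
The resulting layout, matching the files added in this PR:

```text
tests/
├── conftest.py
├── unit/
│   ├── __init__.py
│   └── test_<module>.py
└── integration/
    ├── __init__.py
    └── test_<module>.py
```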

### Naming Conventions
- Test files: `test_<module>.py` (e.g., `app/main.py` → `tests/unit/test_main.py`).
- Test functions: `test_<target>_<expected_behavior>` (e.g., `test_get_tasks_returns_list`).

### Unit Tests
- Test one function or class in isolation.
- Mock all external/async I/O dependencies using `unittest.mock` or `pytest-mock`.
- Use `pytest.raises` to assert expected exceptions.
- Do **not** spin up the FastAPI app for unit tests.
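
A minimal sketch of these rules, using a hypothetical upstream failure (the `RuntimeError` stands in for any error the mocked I/O might raise):

```python
from unittest.mock import AsyncMock, patch

import pytest

from app.main import generate_productivity_report


@pytest.mark.asyncio
async def test_generate_productivity_report_propagates_fetch_error() -> None:
    # fetch_all_tasks is mocked, so no real I/O runs during the test.
    failing_fetch = AsyncMock(side_effect=RuntimeError("fetch failed"))
    with patch("app.main.fetch_all_tasks", new=failing_fetch):
        with pytest.raises(RuntimeError):
            await generate_productivity_report()
```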

### Integration Tests
- Use `httpx.AsyncClient` with the FastAPI `app` directly (no live server needed).
- Mark every integration test with both `@pytest.mark.asyncio` and `@pytest.mark.integration`.
- Cover: happy path, validation errors (422), not-found cases (404), and edge cases.
- Assert HTTP status codes, JSON response structure, and Pydantic model conformance.
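
For example (a minimal sketch; the same pattern appears in `tests/integration/test_main.py` below):

```python
import pytest
from httpx import AsyncClient


@pytest.mark.asyncio
@pytest.mark.integration
async def test_log_task_missing_title_returns_422(client: AsyncClient) -> None:
    # "title" is a required field on DeveloperTask, so validation should fail.
    response = await client.post("/log_task", json={"task_id": 0, "hours_spent": 1.0})
    assert response.status_code == 422
```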

### Coverage
- Run coverage with: `uv run pytest --cov=app tests/`
- Target **85%+** coverage for any changed module.
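
The `integration` marker registered in `pyproject.toml` (see below) lets you run either suite on its own, e.g.:

```bash
# Run only the integration suite
uv run pytest -m integration tests/

# Run only the unit suite
uv run pytest -m "not integration" tests/
```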
20 changes: 20 additions & 0 deletions .github/prompts/logs-audit.prompt.md
@@ -0,0 +1,20 @@
---
mode: 'ask'
description: 'Performs a production logging audit of selected code, producing an actionable TODO list.'
---
## Role: Production Logging Auditor

Analyze the selected code block (#selection) and perform a production-readiness audit focused on observability and logging practices.

Generate a structured report of issues found in the following format. Ensure the analysis is specific to the selected code, but consider the overall application context.

### 🔴 High Priority (Immediate Fix)
- List any missing critical log events (e.g., unhandled exceptions not logged, authentication failures not recorded, data mutations with no audit trail).

### 🟡 Medium Priority (Recommended Fix)
- List any issues with log quality (e.g., log messages that expose sensitive data like passwords or tokens, missing correlation IDs or request context, inappropriate log levels such as using DEBUG in a hot path or ERROR for expected conditions).

### 🟢 Low Priority (Best Practice)
- List any suggestions for improving production observability (e.g., missing structured/JSON logging, no log sampling strategy for high-volume endpoints, absence of performance/latency logging, lack of log retention or rotation configuration).

Return the report as a Markdown TODO list (using `- [ ]`) to facilitate tracking.
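
An illustrative excerpt of the expected output (the findings themselves are hypothetical):

```markdown
### 🔴 High Priority (Immediate Fix)
- [ ] `log_task` mutates task data but writes no audit log entry.

### 🟡 Medium Priority (Recommended Fix)
- [ ] `/report` requests carry no correlation ID in log context.

### 🟢 Low Priority (Best Practice)
- [ ] Adopt structured (JSON) logging across all endpoints.
```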
54 changes: 21 additions & 33 deletions app/main.py
@@ -1,30 +1,12 @@
-from typing import Dict
 import asyncio
-from enum import Enum
-from typing import List
-from fastapi import FastAPI
-from pydantic import BaseModel

-class TaskStatus(str, Enum):
-    """Available statuses for any task."""
-    PENDING = "pending"
-    IN_PROGRESS = "in_progress"
-    COMPLETE = "complete"
+from typing import Dict, List

-class DeveloperTask(BaseModel):
-    """Model for a single task logged by a developer."""
-    task_id: int
-    title: str
-    status: TaskStatus = TaskStatus.PENDING
-    hours_spent: float = 0.0
+from fastapi import FastAPI

-class ProductivityReport(BaseModel):
-    """The final calculated report."""
-    total_tasks: int
-    completed_tasks: int
-    total_hours_spent: float
-    completion_rate: float
+from app.models import DeveloperTask, ProductivityReport, TaskStatus

+# --- FastAPI Initialization ---
+app = FastAPI(title="Productivity Reporting System")

 # --- Mock Database / In-Memory Service Logic
 MOCK_TASKS: Dict[int, DeveloperTask] = {
@@ -44,7 +26,7 @@ async def generate_productivity_report() -> ProductivityReport:
     tasks = await fetch_all_tasks()

     total_tasks = len(tasks)
-    completed_tasks = sum(1 for task in tasks if task.status == TaskStatus.PENDING)
+    completed_tasks = sum(1 for task in tasks if task.status == TaskStatus.COMPLETE)

     total_hours_spent = sum(task.hours_spent for task in tasks)
     completion_rate = round(completed_tasks / total_tasks, 2) if total_tasks > 0 else 0.0
@@ -57,11 +39,9 @@ async def generate_productivity_report() -> ProductivityReport:
     )


-# --- FastAPI Initialization and Routes ---
-app = FastAPI(title="Productivity Reporting System")

+# --- Routes ---
 @app.get("/status")
-def get_status():
+async def get_status() -> dict:
     return {"status": "ok"}


@@ -77,10 +57,18 @@ async def get_productivity_report():
     return await generate_productivity_report()


-@app.post("/log_task")
-async def log_task(task: DeveloperTask):
+@app.get("/task/{task_id}/status")
+async def get_task_status(task_id: int) -> dict:
+    """Returns the status of a specific task by its ID."""
+    task = MOCK_TASKS.get(task_id)
+    if not task:
+        return {"error": "Task not found"}
+    return {"task_id": task_id, "status": task.status}


+@app.post("/log_task", response_model=DeveloperTask)
+async def log_task(task: DeveloperTask) -> DeveloperTask:
     new_id = max(MOCK_TASKS.keys()) + 1 if MOCK_TASKS else 1
-    task.task_id = new_id
+    task = task.model_copy(update={"task_id": new_id})
     MOCK_TASKS[new_id] = task

-    return f"Task ID {task.task_id} logged successfully."
+    return task
26 changes: 26 additions & 0 deletions app/models.py
@@ -0,0 +1,26 @@
from enum import Enum

from pydantic import BaseModel


class TaskStatus(str, Enum):
    """Available statuses for any task."""
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    COMPLETE = "complete"


class DeveloperTask(BaseModel):
    """Model for a single task logged by a developer."""
    task_id: int
    title: str
    status: TaskStatus = TaskStatus.PENDING
    hours_spent: float = 0.0


class ProductivityReport(BaseModel):
    """The final calculated report."""
    total_tasks: int
    completed_tasks: int
    total_hours_spent: float
    completion_rate: float
6 changes: 6 additions & 0 deletions pyproject.toml
@@ -16,6 +16,7 @@ dependencies = [
dev = [
    "pytest>=8.1.1",
    "pytest-asyncio>=0.21.0",
    "pytest-cov>=7.1.0",
]


@@ -25,3 +26,8 @@

[tool.uv]
default-groups = ["dev"] # Ensures dev group is installed by default during uv sync

[tool.pytest.ini_options]
markers = [
    "integration: marks tests as integration tests",
]
11 changes: 11 additions & 0 deletions tests/conftest.py
@@ -0,0 +1,11 @@
import pytest
import pytest_asyncio
from httpx import AsyncClient, ASGITransport

from app.main import app


@pytest_asyncio.fixture
async def client() -> AsyncClient:
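    # ASGITransport serves each request in-process against the FastAPI app; no live server is needed.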
    async with AsyncClient(transport=ASGITransport(app=app), base_url="http://test") as ac:
        yield ac
Empty file added tests/integration/__init__.py
126 changes: 126 additions & 0 deletions tests/integration/test_main.py
@@ -0,0 +1,126 @@
import pytest
from httpx import AsyncClient

from app.models import DeveloperTask, ProductivityReport, TaskStatus


@pytest.mark.asyncio
@pytest.mark.integration
async def test_status_returns_200(client: AsyncClient) -> None:
    response = await client.get("/status")
    assert response.status_code == 200


@pytest.mark.asyncio
@pytest.mark.integration
async def test_status_returns_ok(client: AsyncClient) -> None:
    response = await client.get("/status")
    assert response.json() == {"status": "ok"}


@pytest.mark.asyncio
@pytest.mark.integration
async def test_get_all_tasks_returns_200(client: AsyncClient) -> None:
    response = await client.get("/tasks")
    assert response.status_code == 200


@pytest.mark.asyncio
@pytest.mark.integration
async def test_get_all_tasks_returns_list(client: AsyncClient) -> None:
    response = await client.get("/tasks")
    data = response.json()
    assert isinstance(data, list)
    assert len(data) > 0


@pytest.mark.asyncio
@pytest.mark.integration
async def test_get_all_tasks_validates_against_model(client: AsyncClient) -> None:
    response = await client.get("/tasks")
    tasks = [DeveloperTask(**item) for item in response.json()]
    assert all(isinstance(t, DeveloperTask) for t in tasks)


@pytest.mark.asyncio
@pytest.mark.integration
async def test_get_report_returns_200(client: AsyncClient) -> None:
    response = await client.get("/report")
    assert response.status_code == 200


@pytest.mark.asyncio
@pytest.mark.integration
async def test_get_report_validates_against_model(client: AsyncClient) -> None:
    response = await client.get("/report")
    report = ProductivityReport(**response.json())
    assert isinstance(report, ProductivityReport)


@pytest.mark.asyncio
@pytest.mark.integration
async def test_log_task_returns_200(client: AsyncClient) -> None:
    payload = {"task_id": 0, "title": "New test task", "status": TaskStatus.PENDING, "hours_spent": 2.0}
    response = await client.post("/log_task", json=payload)
    assert response.status_code == 200


@pytest.mark.asyncio
@pytest.mark.integration
async def test_log_task_returns_developer_task(client: AsyncClient) -> None:
    payload = {"task_id": 0, "title": "New test task", "status": TaskStatus.PENDING, "hours_spent": 2.0}
    response = await client.post("/log_task", json=payload)
    task = DeveloperTask(**response.json())
    assert isinstance(task, DeveloperTask)
    assert task.title == "New test task"


@pytest.mark.asyncio
@pytest.mark.integration
async def test_log_task_validation_error_on_missing_title(client: AsyncClient) -> None:
    payload = {"task_id": 0, "hours_spent": 1.0}
    response = await client.post("/log_task", json=payload)
    assert response.status_code == 422


@pytest.mark.asyncio
@pytest.mark.integration
async def test_get_report_returns_correct_values(client: AsyncClient) -> None:
    response = await client.get("/report")
    report = ProductivityReport(**response.json())
    assert report.total_tasks > 0
    assert report.completed_tasks >= 1
    assert report.total_hours_spent > 0
    assert 0.0 <= report.completion_rate <= 1.0


@pytest.mark.asyncio
@pytest.mark.integration
async def test_get_report_completion_rate_is_float(client: AsyncClient) -> None:
    response = await client.get("/report")
    report = ProductivityReport(**response.json())
    assert isinstance(report.completion_rate, float)


@pytest.mark.asyncio
@pytest.mark.integration
async def test_get_task_status_returns_200(client: AsyncClient) -> None:
    response = await client.get("/task/1/status")
    assert response.status_code == 200


@pytest.mark.asyncio
@pytest.mark.integration
async def test_get_task_status_returns_correct_status(client: AsyncClient) -> None:
    response = await client.get("/task/1/status")
    data = response.json()
    assert data["task_id"] == 1
    assert data["status"] == TaskStatus.COMPLETE


@pytest.mark.asyncio
@pytest.mark.integration
async def test_get_task_status_not_found_returns_error(client: AsyncClient) -> None:
    response = await client.get("/task/9999/status")
    assert response.status_code == 200
    assert response.json() == {"error": "Task not found"}
Empty file added tests/unit/__init__.py