
Conversation

@danenania
Contributor

Summary

Add an AI-powered property management assistant that helps hosts manage their rental properties through natural language conversation.

Features

The assistant can help with:

  • Viewing properties and their details
  • Listing and filtering bookings by status
  • Approving or declining pending booking requests
  • Sending messages to guests
  • Updating property pricing and availability
  • Cancelling bookings when needed

New Endpoints

  • POST /authorized/:level/assistant/chat - Chat with the AI assistant
  • GET /authorized/:level/assistant/tools - List available tools
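For reference, a call to the chat endpoint might look like the sketch below. The { message, model } body shape follows the assistantQuerySchema shown later in this review; the base URL, the 'shark' level value, and the model name are placeholders, and the response shape assumes the route returns runAssistant's { response, toolsUsed } result directly.

// Hypothetical usage sketch: URL, level, and model are placeholders.
const res = await fetch('http://localhost:3000/authorized/shark/assistant/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'Show my pending bookings for next month.',
    model: 'gpt-4o-mini', // optional; assumed to fall back to a service default if omitted
  }),
});
const { response, toolsUsed } = await res.json();
console.log(response, toolsUsed);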

Files Added

  • src/routes/assistant.ts - Assistant chat endpoint
  • src/services/assistantTools.ts - Tool definitions and execution logic
  • src/types/assistant.ts - TypeScript type definitions
  • src/data/assistant-state.json - Sample state data (3 properties, 3 bookings)


@promptfoo-scanner (bot) left a comment


I've reviewed the AI property management assistant implementation and identified several critical LLM security vulnerabilities. The most severe issue is a prompt injection vulnerability where user input flows directly to an LLM that has privileged access to booking approvals, guest communications, and property modifications. Combined with missing authorization controls and unvalidated tool execution, this creates significant risk of unauthorized operations.

Minimum severity threshold: 🟡 Medium | To re-scan after changes, comment @promptfoo-scanner

Comment on lines +33 to +36
let messages: Array<{ role: string; content: string }> = [
  { role: 'system', content: systemPrompt },
  { role: 'user', content: userMessage },
];


🔴 Critical

User messages from the API flow directly into LLM prompts without sanitization, and the LLM has immediate access to privileged tools like approving bookings, sending guest emails, and modifying prices. An attacker can use prompt injection to manipulate the LLM into executing unauthorized actions, such as approving fraudulent bookings or sending malicious messages to guests.

💡 Suggested Fix

Implement multiple defense layers: (1) Add input validation to block known prompt injection patterns, (2) Filter available tools based on authorization level, and (3) Require human confirmation for high-risk operations.

// Add validation after L35
if (userMessage.toLowerCase().includes('role:') ||
    userMessage.toLowerCase().includes('ignore previous') ||
    userMessage.toLowerCase().includes('"role"')) {
  throw new Error('Invalid input: message contains prohibited patterns');
}

// Filter tools by authorization level (L20-22)
const allowedTools = level === 'minnow'
  ? availableTools.filter(t => ['list_properties', 'list_bookings', 'get_booking_details'].includes(t.name))
  : availableTools;

const systemPrompt = `You are a helpful AI property management assistant. You have access to the following tools:

${allowedTools.map((t) => `- ${t.name}: ${t.description}`).join('\n')}
...`;

Also update the function signature to accept and use the level parameter for tool filtering.
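To sketch the third layer (human confirmation), one option is to intercept write tools before execution. This is illustrative only: the write-tool names are assumed from the PR description, confirmWrites is a hypothetical flag the client would send, and the return shape matches runAssistant's { response, toolsUsed } result.

// Illustrative sketch, not existing code: gate write tools behind an explicit flag.
const WRITE_TOOLS = new Set([
  'approve_booking',
  'decline_booking',
  'send_message_to_guest',
  'update_property_price',
  'cancel_booking',
]);

function needsConfirmation(toolName: string, confirmWrites: boolean): boolean {
  return WRITE_TOOLS.has(toolName) && !confirmWrites;
}

// Inside the tool-call loop, before executeTool runs:
if (needsConfirmation(toolCall.tool, confirmWrites)) {
  return {
    response: `Confirmation required before running ${toolCall.tool}. Re-send the request with confirmWrites: true.`,
    toolsUsed: [],
  };
}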

🤖 AI Agent Prompt

The assistant route at src/routes/assistant.ts:33-36 has a critical prompt injection vulnerability. User input flows directly to an LLM that controls privileged operations (booking approvals, guest emails, price changes).

Investigate the authentication and authorization flow:

  1. Trace how the level parameter ('minnow' vs 'shark') should control tool access
  2. Check if there's a pattern in src/routes/chat.ts for handling authorization levels
  3. Determine appropriate tool sets for different permission levels

Implement defense-in-depth:

  1. Add input validation to detect and block common prompt injection patterns
  2. Filter availableTools based on the user's authorization level before constructing the system prompt
  3. Consider requiring explicit confirmation for write operations (approve_booking, send_message_to_guest, etc.)
  4. Validate that tool calls from the LLM match the user's allowed tool set before execution

The goal is to prevent unauthorized operations even if prompt injection successfully manipulates the LLM's output.



Comment on lines +90 to +95
export function executeTool(toolName: string, args: Record<string, any>): string {
  const state = loadState();

  switch (toolName) {
    case 'list_properties':
      return JSON.stringify(state.properties, null, 2);


🟠 High

Tool arguments provided by the LLM are passed directly to execution functions without validation. An attacker using prompt injection can cause the LLM to generate tool calls with malicious parameters—arbitrary email addresses, extreme price values, or invalid booking IDs. The only check is whether the tool name exists in the availableTools list.

💡 Suggested Fix

Add Zod schema validation for all tool arguments before execution:

import { z } from 'zod';

const toolSchemas: Record<string, z.ZodSchema> = {
  list_properties: z.object({}),
  list_bookings: z.object({
    status: z.enum(['pending', 'approved', 'declined', 'cancelled']).optional(),
  }),
  get_booking_details: z.object({
    bookingId: z.string().regex(/^booking-\d{3}$/),
  }),
  approve_booking: z.object({
    bookingId: z.string().regex(/^booking-\d{3}$/),
  }),
  send_message_to_guest: z.object({
    guestEmail: z.string().email(),
    subject: z.string().min(1).max(200),
    body: z.string().min(1).max(2000),
  }),
  update_property_price: z.object({
    propertyId: z.string().regex(/^prop-\d{3}$/),
    newPrice: z.number().positive().max(10000),
  }),
  // ... other tools
};

export function executeTool(toolName: string, args: Record<string, any>): string {
  if (!toolSchemas[toolName]) {
    return `Unknown tool: ${toolName}`;
  }

  try {
    const validatedArgs = toolSchemas[toolName].parse(args);
    return executeToolInternal(toolName, validatedArgs);
  } catch (error) {
    if (error instanceof z.ZodError) {
      return `Invalid arguments: ${error.errors.map(e => e.message).join(', ')}`;
    }
    throw error;
  }
}

Additionally, for send_message_to_guest, validate that the email address exists in your booking records before sending.
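A minimal sketch of that whitelist check, assuming bookings in the state file carry a guestEmail field (as the PII finding below indicates) and with sendGuestMessage standing in for the existing send logic:

case 'send_message_to_guest': {
  const { guestEmail, subject, body } = validatedArgs;
  // Only send to addresses that belong to a guest with a booking on file.
  const isKnownGuest = state.bookings.some((b) => b.guestEmail === guestEmail);
  if (!isKnownGuest) {
    return `Refusing to send: ${guestEmail} does not match any guest on record.`;
  }
  return sendGuestMessage(guestEmail, subject, body); // placeholder for the existing send logic
}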

🤖 AI Agent Prompt

At src/services/assistantTools.ts:90-95, the executeTool function executes tools based on LLM-provided arguments without validation. This compounds the prompt injection vulnerability.

Implement comprehensive input validation:

  1. Define Zod schemas for each tool's parameter structure (booking IDs, email formats, price ranges, etc.)
  2. Validate all arguments before the switch statement executes
  3. For send_message_to_guest (L130-141), add a whitelist check to ensure the email belongs to an actual guest in the system
  4. For update_property_price (L143-150), add reasonable bounds checking on price values
  5. Consider using TypeScript discriminated unions for type-safe tool definitions

Look at the tool parameter definitions (L15-87) and create corresponding runtime validation that matches those type signatures. Return clear error messages to the LLM when validation fails.



Comment on lines +82 to +87
router.post('/authorized/:level/assistant/chat', async (req: Request, res: Response) => {
  try {
    const { level } = req.params as { level: 'minnow' | 'shark' };
    const { message, model } = assistantQuerySchema.parse(req.body);

    const result = await runAssistant(message, model);


🟠 High

The endpoint extracts a level parameter from the URL path ('minnow' or 'shark'), suggesting tiered authorization, but this parameter is never used. All authenticated users get access to all 9 tools regardless of their authorization level, violating the principle of least privilege. This represents a complete bypass of the intended access control mechanism.

💡 Suggested Fix

Pass the authorization level to runAssistant and use it to filter available tools:

// Update function signature (L14-16)
async function runAssistant(
  userMessage: string,
  level: 'minnow' | 'shark',
  model?: string
): Promise<{ response: string; toolsUsed: string[] }> {

  // Filter tools based on level (L20-22)
  const readOnlyTools = ['list_properties', 'list_bookings', 'get_booking_details'];
  const allowedTools = level === 'minnow'
    ? availableTools.filter(t => readOnlyTools.includes(t.name))
    : availableTools;

  const systemPrompt = `You are a helpful AI property management assistant. You have access to the following tools:

${allowedTools.map((t) => `- ${t.name}: ${t.description}`).join('\n')}
...`;

  // Later, check tool calls against allowedTools (L62)
  if (toolCall.tool && allowedTools.some((t) => t.name === toolCall.tool)) {
    // ...
  }
}

// Pass level when calling (L87)
const result = await runAssistant(message, level, model);

🤖 AI Agent Prompt

The assistant endpoint at src/routes/assistant.ts:82-87 extracts an authorization level parameter but doesn't use it. The endpoint pattern /authorized/:level/assistant/chat suggests 'minnow' and 'shark' should have different permissions.

Investigate the intended authorization model:

  1. Compare with src/routes/chat.ts to understand how the main chat handler uses the level parameter
  2. Check if there's documentation or types defining what 'minnow' vs 'shark' permissions should be
  3. Determine appropriate tool sets: likely 'minnow' should be read-only (list_properties, list_bookings, get_booking_details) while 'shark' gets write operations

Implement the authorization control:

  1. Create a getToolsForLevel() function in src/services/assistantTools.ts that filters tools by level
  2. Update runAssistant to accept the level parameter
  3. Filter availableTools before constructing the system prompt
  4. Enforce the same filter when validating tool calls from the LLM

Ensure the filtering happens in the application layer, not just in the prompt, so it can't be bypassed by prompt injection.
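One possible shape for that helper, sketched with the read-only tool names from the suggested fix above (proposed code, not part of the current PR):

// Sketch of a getToolsForLevel helper for src/services/assistantTools.ts.
const READ_ONLY_TOOLS = ['list_properties', 'list_bookings', 'get_booking_details'];

export function getToolsForLevel(level: 'minnow' | 'shark') {
  return level === 'minnow'
    ? availableTools.filter((t) => READ_ONLY_TOOLS.includes(t.name))
    : availableTools;
}

runAssistant would then call getToolsForLevel(level) both when building the system prompt and when checking each toolCall.tool, so the restriction is enforced in code rather than only in the prompt.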



Comment on lines +97 to +107
case 'list_bookings': {
  let bookings = state.bookings;
  if (args.status) {
    bookings = bookings.filter((b) => b.status === args.status);
  }
  return JSON.stringify(bookings, null, 2);
}

case 'get_booking_details': {
  const booking = state.bookings.find((b) => b.id === args.bookingId);
  return booking ? JSON.stringify(booking, null, 2) : 'Booking not found';

🟡 Medium

Guest PII (names and email addresses) is returned by these tools and included in the LLM context, so it is sent to the external LLM provider. While this is common behavior for LLM applications, vacation rental guests likely don't expect their personal information to be processed by OpenAI or other AI providers without explicit consent.

💡 Suggested Fix

Minimize PII exposure by returning only the information the LLM needs. The assistant doesn't need guest names/emails to approve bookings—it just needs booking IDs and dates:

case 'list_bookings': {
  let bookings = state.bookings;
  if (args.status) {
    bookings = bookings.filter((b) => b.status === args.status);
  }
  // Return minimal info without PII
  const minimalInfo = bookings.map(b => ({
    id: b.id,
    propertyId: b.propertyId,
    checkIn: b.checkIn,
    checkOut: b.checkOut,
    status: b.status,
    totalPrice: b.totalPrice,
    // Omit guestName and guestEmail
  }));
  return JSON.stringify(minimalInfo, null, 2);
}

case 'get_booking_details': {
  const booking = state.bookings.find((b) => b.id === args.bookingId);
  if (!booking) return 'Booking not found';

  // Return booking details without PII
  const { guestName, guestEmail, ...bookingInfo } = booking;
  return JSON.stringify(bookingInfo, null, 2);
}

When the LLM calls send_message_to_guest, the tool execution can access the full PII from the state file without exposing it to the LLM context.
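For example, the tool could accept a bookingId instead of a raw address and resolve the email server-side. A sketch of that change follows; the bookingId parameter and the sendEmail helper are proposals, not the current tool definition:

case 'send_message_to_guest': {
  // Proposed variant: address the guest by bookingId so the email never enters the LLM context.
  const booking = state.bookings.find((b) => b.id === args.bookingId);
  if (!booking) return 'Booking not found';
  sendEmail(booking.guestEmail, args.subject, args.body); // sendEmail is a placeholder for the real send logic
  return `Message sent to the guest on booking ${booking.id}.`;
}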

🤖 AI Agent Prompt

The tools at src/services/assistantTools.ts:97-107 return booking data including guest names and email addresses, which flow through the LLM context to the external provider.

Evaluate what information the LLM actually needs:

  1. Review the assistant's tasks—can it approve/decline bookings using just booking IDs and dates?
  2. Check if guest PII is necessary for decision-making or if it's just being passed through
  3. Consider that when send_message_to_guest is called, the tool execution can look up the email from the state file without the LLM seeing it

Implement PII minimization:

  1. Modify list_bookings and get_booking_details to return booking information without guestName/guestEmail fields
  2. The LLM can still reference bookings by ID and make approval decisions
  3. When sending messages, the tool execution (which runs server-side) can retrieve the actual email address from state
  4. This way, PII never passes through the LLM context or gets sent to the AI provider

If PII exposure is necessary for functionality, add consent tracking and privacy documentation.


