feat: Add AI property management assistant #25
Conversation
Add an AI-powered assistant that can help property managers with various tasks through natural language. The assistant has access to tools for:
- Listing properties and bookings
- Approving/declining booking requests
- Sending messages to guests
- Updating property prices and availability
- Cancelling bookings

New files:
- src/routes/assistant.ts - Assistant chat endpoint
- src/services/assistantTools.ts - Tool definitions and execution
- src/types/assistant.ts - TypeScript types
- src/data/assistant-state.json - Sample state data
I've reviewed the AI property management assistant implementation and identified several critical LLM security vulnerabilities. The most severe issue is a prompt injection vulnerability where user input flows directly to an LLM that has privileged access to booking approvals, guest communications, and property modifications. Combined with missing authorization controls and unvalidated tool execution, this creates significant risk of unauthorized operations.
Minimum severity threshold: 🟡 Medium | To re-scan after changes, comment @promptfoo-scanner
let messages: Array<{ role: string; content: string }> = [
  { role: 'system', content: systemPrompt },
  { role: 'user', content: userMessage },
];
🔴 Critical
User messages from the API flow directly into LLM prompts without sanitization, and the LLM has immediate access to privileged tools like approving bookings, sending guest emails, and modifying prices. An attacker can use prompt injection to manipulate the LLM into executing unauthorized actions, such as approving fraudulent bookings or sending malicious messages to guests.
💡 Suggested Fix
Implement multiple defense layers: (1) Add input validation to block known prompt injection patterns, (2) Filter available tools based on authorization level, and (3) Require human confirmation for high-risk operations.
// Add validation after L35
if (userMessage.toLowerCase().includes('role:') ||
    userMessage.toLowerCase().includes('ignore previous') ||
    userMessage.toLowerCase().includes('"role"')) {
  throw new Error('Invalid input: message contains prohibited patterns');
}

// Filter tools by authorization level (L20-22)
const allowedTools = level === 'minnow'
  ? availableTools.filter(t => ['list_properties', 'list_bookings', 'get_booking_details'].includes(t.name))
  : availableTools;

const systemPrompt = `You are a helpful AI property management assistant. You have access to the following tools:
${allowedTools.map((t) => `- ${t.name}: ${t.description}`).join('\n')}
...`;

Also update the function signature to accept and use the level parameter for tool filtering.
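For point (3), a minimal sketch of what a human-confirmation gate could look like. The requestToolExecution wrapper, the in-memory pendingConfirmations store, and the import path are illustrative assumptions, not part of this PR:

import { randomUUID } from 'crypto';
import { executeTool } from '../services/assistantTools';

// Write tools confirmed in this PR; any others would be added here as well.
const WRITE_TOOLS = ['approve_booking', 'send_message_to_guest', 'update_property_price'];

// In-memory queue of actions awaiting human approval (a real implementation would persist this).
const pendingConfirmations = new Map<string, { tool: string; args: Record<string, any> }>();

function requestToolExecution(toolName: string, args: Record<string, any>): string {
  if (WRITE_TOOLS.includes(toolName)) {
    const confirmationId = randomUUID();
    pendingConfirmations.set(confirmationId, { tool: toolName, args });
    // Returned to the caller; a separate, human-only endpoint would approve and execute it.
    return `Action queued for manual approval (confirmation id: ${confirmationId}).`;
  }
  return executeTool(toolName, args);
}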
🤖 AI Agent Prompt
The assistant route at src/routes/assistant.ts:33-36 has a critical prompt injection vulnerability. User input flows directly to an LLM that controls privileged operations (booking approvals, guest emails, price changes).
Investigate the authentication and authorization flow:
- Trace how the level parameter ('minnow' vs 'shark') should control tool access
- Check if there's a pattern in src/routes/chat.ts for handling authorization levels
- Determine appropriate tool sets for different permission levels

Implement defense-in-depth:
- Add input validation to detect and block common prompt injection patterns
- Filter availableTools based on the user's authorization level before constructing the system prompt
- Consider requiring explicit confirmation for write operations (approve_booking, send_message_to_guest, etc.)
- Validate that tool calls from the LLM match the user's allowed tool set before execution (sketched below)

The goal is to prevent unauthorized operations even if prompt injection successfully manipulates the LLM's output.
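A small sketch of that last check, enforced in application code regardless of what the LLM emits. The toolCall shape mirrors the snippets in this review and is an assumption about the PR's internals:

// Reject any tool call whose name is not in the caller's allowed set.
function isToolAllowed(
  toolCall: { tool?: string },
  allowedTools: Array<{ name: string }>
): boolean {
  return Boolean(toolCall.tool) && allowedTools.some((t) => t.name === toolCall.tool);
}

// Usage in the tool-call loop (sketch):
// if (!isToolAllowed(toolCall, allowedTools)) {
//   return 'That tool is not available at your authorization level.';
// }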
export function executeTool(toolName: string, args: Record<string, any>): string {
  const state = loadState();

  switch (toolName) {
    case 'list_properties':
      return JSON.stringify(state.properties, null, 2);
🟠 High
Tool arguments provided by the LLM are passed directly to execution functions without validation. An attacker using prompt injection can cause the LLM to generate tool calls with malicious parameters—arbitrary email addresses, extreme price values, or invalid booking IDs. The only check is whether the tool name exists in the availableTools list.
💡 Suggested Fix
Add Zod schema validation for all tool arguments before execution:
import { z } from 'zod';

const toolSchemas: Record<string, z.ZodSchema> = {
  list_properties: z.object({}),
  list_bookings: z.object({
    status: z.enum(['pending', 'approved', 'declined', 'cancelled']).optional(),
  }),
  get_booking_details: z.object({
    bookingId: z.string().regex(/^booking-\d{3}$/),
  }),
  approve_booking: z.object({
    bookingId: z.string().regex(/^booking-\d{3}$/),
  }),
  send_message_to_guest: z.object({
    guestEmail: z.string().email(),
    subject: z.string().min(1).max(200),
    body: z.string().min(1).max(2000),
  }),
  update_property_price: z.object({
    propertyId: z.string().regex(/^prop-\d{3}$/),
    newPrice: z.number().positive().max(10000),
  }),
  // ... other tools
};

export function executeTool(toolName: string, args: Record<string, any>): string {
  if (!toolSchemas[toolName]) {
    return `Unknown tool: ${toolName}`;
  }
  try {
    const validatedArgs = toolSchemas[toolName].parse(args);
    return executeToolInternal(toolName, validatedArgs);
  } catch (error) {
    if (error instanceof z.ZodError) {
      return `Invalid arguments: ${error.errors.map(e => e.message).join(', ')}`;
    }
    throw error;
  }
}

Additionally, for send_message_to_guest, validate that the email address exists in your booking records before sending.
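A sketch of that check, assuming the state shape used elsewhere in this PR (bookings carrying a guestEmail field); the helper name is illustrative and it would be called inside the send_message_to_guest case after schema validation:

// Only allow outbound messages to addresses that belong to a booking on record.
function assertKnownGuestEmail(guestEmail: string): void {
  const state = loadState();
  const isKnown = state.bookings.some((b) => b.guestEmail === guestEmail);
  if (!isKnown) {
    throw new Error(`Refusing to send: ${guestEmail} does not match any guest on record.`);
  }
}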
🤖 AI Agent Prompt
At src/services/assistantTools.ts:90-95, the executeTool function executes tools based on LLM-provided arguments without validation. This compounds the prompt injection vulnerability.
Implement comprehensive input validation:
- Define Zod schemas for each tool's parameter structure (booking IDs, email formats, price ranges, etc.)
- Validate all arguments before the switch statement executes
- For send_message_to_guest (L130-141), add a whitelist check to ensure the email belongs to an actual guest in the system
- For update_property_price (L143-150), add reasonable bounds checking on price values
- Consider using TypeScript discriminated unions for type-safe tool definitions (see the sketch after this list)

Look at the tool parameter definitions (L15-87) and create corresponding runtime validation that matches those type signatures. Return clear error messages to the LLM when validation fails.
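For the discriminated-union suggestion, one possible shape. The variants cover only the tools whose parameters appear in this review; it is a sketch, not the PR's actual type definitions:

// Each variant pairs a tool name with its argument type, so the compiler narrows args by tool.
type ToolCall =
  | { tool: 'list_properties'; args: Record<string, never> }
  | { tool: 'list_bookings'; args: { status?: 'pending' | 'approved' | 'declined' | 'cancelled' } }
  | { tool: 'get_booking_details'; args: { bookingId: string } }
  | { tool: 'approve_booking'; args: { bookingId: string } }
  | { tool: 'send_message_to_guest'; args: { guestEmail: string; subject: string; body: string } }
  | { tool: 'update_property_price'; args: { propertyId: string; newPrice: number } };

function describeCall(call: ToolCall): string {
  switch (call.tool) {
    case 'get_booking_details':
      // call.args is narrowed to { bookingId: string } here.
      return `Looking up ${call.args.bookingId}`;
    default:
      return `Calling ${call.tool}`;
  }
}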
router.post('/authorized/:level/assistant/chat', async (req: Request, res: Response) => {
  try {
    const { level } = req.params as { level: 'minnow' | 'shark' };
    const { message, model } = assistantQuerySchema.parse(req.body);

    const result = await runAssistant(message, model);
🟠 High
The endpoint extracts a level parameter from the URL path ('minnow' or 'shark'), suggesting tiered authorization, but this parameter is never used. All authenticated users get access to all 9 tools regardless of their authorization level, violating the principle of least privilege. This represents a complete bypass of the intended access control mechanism.
💡 Suggested Fix
Pass the authorization level to runAssistant and use it to filter available tools:
// Update function signature (L14-16)
async function runAssistant(
  userMessage: string,
  level: 'minnow' | 'shark',
  model?: string
): Promise<{ response: string; toolsUsed: string[] }> {
  // Filter tools based on level (L20-22)
  const readOnlyTools = ['list_properties', 'list_bookings', 'get_booking_details'];
  const allowedTools = level === 'minnow'
    ? availableTools.filter(t => readOnlyTools.includes(t.name))
    : availableTools;

  const systemPrompt = `You are a helpful AI property management assistant. You have access to the following tools:
${allowedTools.map((t) => `- ${t.name}: ${t.description}`).join('\n')}
...`;

  // Later, check tool calls against allowedTools (L62)
  if (toolCall.tool && allowedTools.some((t) => t.name === toolCall.tool)) {
    // ...
  }
}

// Pass level when calling (L87)
const result = await runAssistant(message, level, model);

🤖 AI Agent Prompt
The assistant endpoint at src/routes/assistant.ts:82-87 extracts an authorization level parameter but doesn't use it. The endpoint pattern /authorized/:level/assistant/chat suggests 'minnow' and 'shark' should have different permissions.
Investigate the intended authorization model:
- Compare with src/routes/chat.ts to understand how the main chat handler uses the level parameter
- Check if there's documentation or types defining what 'minnow' vs 'shark' permissions should be
- Determine appropriate tool sets: likely 'minnow' should be read-only (list_properties, list_bookings, get_booking_details) while 'shark' gets write operations

Implement the authorization control:
- Create a getToolsForLevel() function in src/services/assistantTools.ts that filters tools by level (sketched after this list)
- Update runAssistant to accept the level parameter
- Filter availableTools before constructing the system prompt
- Enforce the same filter when validating tool calls from the LLM

Ensure the filtering happens in the application layer, not just in the prompt, so it can't be bypassed by prompt injection.
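One possible shape for that helper, assuming availableTools is the exported tool list and using the read-only set suggested above:

// 'minnow' callers get read-only tools; 'shark' callers get the full set.
const READ_ONLY_TOOLS = ['list_properties', 'list_bookings', 'get_booking_details'];

export function getToolsForLevel(level: 'minnow' | 'shark') {
  return level === 'minnow'
    ? availableTools.filter((t) => READ_ONLY_TOOLS.includes(t.name))
    : availableTools;
}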
case 'list_bookings': {
  let bookings = state.bookings;
  if (args.status) {
    bookings = bookings.filter((b) => b.status === args.status);
  }
  return JSON.stringify(bookings, null, 2);
}

case 'get_booking_details': {
  const booking = state.bookings.find((b) => b.id === args.bookingId);
  return booking ? JSON.stringify(booking, null, 2) : 'Booking not found';
🟡 Medium
Guest PII (names and email addresses) is returned by these tools and fed back into the LLM context, which sends it to the external LLM provider. While this is standard behavior for LLM applications, vacation rental guests likely don't expect their personal information to be processed by OpenAI or other AI providers without explicit consent.
💡 Suggested Fix
Minimize PII exposure by returning only the information the LLM needs. The assistant doesn't need guest names/emails to approve bookings—it just needs booking IDs and dates:
case 'list_bookings': {
  let bookings = state.bookings;
  if (args.status) {
    bookings = bookings.filter((b) => b.status === args.status);
  }
  // Return minimal info without PII
  const minimalInfo = bookings.map(b => ({
    id: b.id,
    propertyId: b.propertyId,
    checkIn: b.checkIn,
    checkOut: b.checkOut,
    status: b.status,
    totalPrice: b.totalPrice,
    // Omit guestName and guestEmail
  }));
  return JSON.stringify(minimalInfo, null, 2);
}

case 'get_booking_details': {
  const booking = state.bookings.find((b) => b.id === args.bookingId);
  if (!booking) return 'Booking not found';
  // Return booking details without PII
  const { guestName, guestEmail, ...bookingInfo } = booking;
  return JSON.stringify(bookingInfo, null, 2);
}

When the LLM calls send_message_to_guest, the tool execution can access the full PII from the state file without exposing it to the LLM context.
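One way to make that concrete is to key the message tool on a booking ID instead of a raw address and resolve the email server-side. This changes the tool's parameters, so it is only a sketch; sendEmail stands in for whatever mail helper the codebase actually uses:

case 'send_message_to_guest': {
  // The LLM supplies a bookingId; the guest's email never enters the LLM context.
  const booking = state.bookings.find((b) => b.id === args.bookingId);
  if (!booking) return 'Booking not found';
  sendEmail(booking.guestEmail, args.subject, args.body); // hypothetical mail helper
  return `Message sent to the guest on booking ${booking.id}.`;
}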
🤖 AI Agent Prompt
The tools at src/services/assistantTools.ts:97-107 return booking data including guest names and email addresses, which flow through the LLM context to the external provider.
Evaluate what information the LLM actually needs:
- Review the assistant's tasks: can it approve/decline bookings using just booking IDs and dates?
- Check if guest PII is necessary for decision-making or if it's just being passed through
- Consider that when send_message_to_guest is called, the tool execution can look up the email from the state file without the LLM seeing it

Implement PII minimization:
- Modify list_bookings and get_booking_details to return booking information without guestName/guestEmail fields
- The LLM can still reference bookings by ID and make approval decisions
- When sending messages, the tool execution (which runs server-side) can retrieve the actual email address from state
- This way, PII never passes through the LLM context or gets sent to the AI provider

If PII exposure is necessary for functionality, add consent tracking and privacy documentation.
Summary
Add an AI-powered property management assistant that helps hosts manage their rental properties through natural language conversation.
Features
The assistant can help with:
- Listing properties and bookings
- Approving/declining booking requests
- Sending messages to guests
- Updating property prices and availability
- Cancelling bookings
New Endpoints
- POST /authorized/:level/assistant/chat - Chat with the AI assistant
- GET /authorized/:level/assistant/tools - List available tools

Files Added
- src/routes/assistant.ts - Assistant chat endpoint
- src/services/assistantTools.ts - Tool definitions and execution logic
- src/types/assistant.ts - TypeScript type definitions
- src/data/assistant-state.json - Sample state data (3 properties, 3 bookings)