
Conversation

@danenania
Contributor

Summary

Add AI-powered reply suggestions to help hosts respond to guest inquiries quickly and professionally.

Features

  • Generate 3 professional reply options for any guest message
  • Context-aware suggestions based on property details and guest name
  • List pending conversations needing responses

New Endpoints

  • POST /authorized/:level/suggestions/generate - Generate reply suggestions for a conversation
  • GET /authorized/:level/suggestions/conversations - List all conversations

Files Added

  • src/routes/suggestions.ts - Reply suggestion endpoints
  • src/data/guest-messages.json - Sample guest conversations (4 conversations)
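To illustrate the "context-aware suggestions" feature, here is a minimal sketch of how the prompts might be assembled from a conversation record. The `Conversation` shape and the `buildPrompts` helper are illustrative assumptions, not the PR's actual identifiers.

```typescript
// Hypothetical sketch of context-aware prompt assembly. The Conversation
// interface and buildPrompts name are assumptions for illustration; the
// actual code lives in src/routes/suggestions.ts.
interface Conversation {
  guestName: string;
  propertyName: string;
  lastMessage: string;
}

function buildPrompts(c: Conversation): { system: string; user: string } {
  const system =
    'You are a helpful vacation-rental host assistant. ' +
    'Write 3 short, professional reply options to the guest message.';
  const user =
    `Property: ${c.propertyName}\n` +
    `Guest: ${c.guestName}\n` +
    `Message: ${c.lastMessage}`;
  return { system, user };
}
```

The `system`/`user` pair would then feed the `messages` array in the outbound LLM request shown in the review comment below.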

Comment on lines +77 to +83
body: JSON.stringify({
model: model || 'gpt-4o-mini',
messages: [
{ role: 'system', content: systemPrompt },
{ role: 'user', content: userPrompt },
],
}),

Check warning

Code scanning / CodeQL

File data in outbound network request (Medium)

Outbound network request depends on file data.
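A common mitigation for this class of warning is to bound and clean file-derived text before it enters the outbound request body. The sketch below is one illustrative approach; the character cap and helper name are assumptions, not values from the PR.

```typescript
// Hypothetical mitigation sketch: cap and strip file-derived text before
// embedding it in an outbound LLM request. The 2000-char limit is an
// illustrative choice, not a value from the PR.
const MAX_MESSAGE_CHARS = 2000;

function sanitizeForPrompt(text: string): string {
  return text
    .replace(/[\u0000-\u001F\u007F]/g, ' ') // replace control characters
    .slice(0, MAX_MESSAGE_CHARS);           // bound the payload size
}
```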

@promptfoo-scanner bot left a comment


I reviewed the new smart reply suggestions feature for LLM security vulnerabilities. The endpoint processes guest conversation data and sends it to an LLM for generating reply suggestions. I found one high-severity issue where the new endpoint is missing authentication middleware, allowing unauthorized access to guest PII.

Minimum severity threshold: 🟡 Medium | To re-scan after changes, comment @promptfoo-scanner

Comment on lines +35 to +36
// Smart reply suggestions endpoints
app.use(suggestionsRouter);


🟠 High

The suggestions router is mounted without authentication middleware, despite using the /authorized/ path pattern. This allows unauthenticated users to access guest conversation data (names and message content) and trigger LLM processing of guest PII by calling endpoints like /authorized/shark/suggestions/generate. Compare with line 33 where the chat endpoint properly applies the authenticateToken middleware.

💡 Suggested Fix

Apply authentication middleware when mounting the suggestions router, consistent with other protected endpoints:

// Smart reply suggestions endpoints (authentication required)
app.use('/authorized', authenticateToken, suggestionsRouter);

Then update the route paths in src/routes/suggestions.ts to remove the /authorized prefix since it's now part of the mount path:

  • Line 116: '/authorized/:level/suggestions/generate' → '/:level/suggestions/generate'
  • Line 152: '/authorized/:level/suggestions/conversations' → '/:level/suggestions/conversations'

Alternatively, apply authenticateToken middleware directly within the route definitions in src/routes/suggestions.ts for more explicit protection.
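For the per-route alternative, a minimal middleware sketch is shown below. The `Req`/`Res`/`Next` types are defined locally so the snippet stands alone without Express, and the Bearer-token check is illustrative; the project's actual `authenticateToken` may differ.

```typescript
// Hypothetical per-route auth sketch. Minimal Req/Res/Next types are
// defined locally so this compiles without Express; real middleware would
// use express.Request/Response. The Bearer check is illustrative only.
type Next = () => void;
interface Req { headers: Record<string, string | undefined>; }
interface Res { status(code: number): Res; json(body: unknown): Res; }

function authenticateToken(req: Req, res: Res, next: Next): void {
  const header = req.headers['authorization'];
  if (!header || !header.startsWith('Bearer ')) {
    res.status(401).json({ error: 'Unauthorized' });
    return;
  }
  next(); // token present; real code would verify it here
}
```

In `src/routes/suggestions.ts` this would be passed per route, e.g. `router.get('/:level/suggestions/conversations', authenticateToken, handler)`.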

🤖 AI Agent Prompt

The suggestions router mounted at src/server.ts:36 is missing authentication middleware, creating an authorization bypass vulnerability. The endpoint paths in src/routes/suggestions.ts (lines 116 and 152) use the /authorized/ prefix suggesting they should require authentication, but the router mounting doesn't apply the authenticateToken middleware.

Compare this with line 33 in src/server.ts where the chat endpoints correctly apply authentication middleware. The suggestions endpoints process guest PII (names and message content from conversations) and send it to an LLM provider.

Investigate the application's authentication architecture to determine the best approach:

  1. Should authentication be applied at router mount time (in server.ts)?
  2. Should it be applied per-route (in suggestions.ts)?
  3. Are there any non-authenticated endpoints in the suggestions router that should remain public?

Consider also whether conversation-level authorization is needed (ensuring users can only access conversations for properties they own), though that's a broader architectural decision beyond fixing this immediate authentication bypass.

Apply the authentication middleware consistently with the existing pattern used for chat endpoints.



@danenania closed this Jan 23, 2026