-
Notifications
You must be signed in to change notification settings - Fork 8
update model & refine prompt components #468
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: next
Are you sure you want to change the base?
Conversation
…into dev/xingfan/updatemodel
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Pull request overview
This pull request refactors the LLM prompt architecture to improve security against prompt injection attacks and updates the preferred model from 'gpt-5' to 'gpt-4o'.
Changes:
- Updated the preferred LLM model from 'gpt-5' (non-existent) to 'gpt-4o', with 'gpt-4o-mini' as the fallback
- Introduced a new
FilledPromptResultinterface to separate prompt components into craftedPrompt (instructions), userQuery (user input), and contextData (system-retrieved data) - Refactored message structure to use Assistant role for instructions and User role for data, implementing a prompt injection defense pattern
- Added security-focused prompt components including PRIORITY DECLARATION, SECURITY INSTRUCTIONS, and CRITICAL_JSON_REMINDER
- Updated all prompt templates with new structure including data placeholder documentation sections
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| src/commands/llmEnhancedCommands/promptTemplates.ts | Major refactoring: changed model configuration, added new utility functions for security-focused prompts, created FilledPromptResult interface, and restructured all prompt templates with security instructions and data placeholder documentation |
| src/commands/llmEnhancedCommands/queryGenerationCommands.ts | Updated fillPromptTemplate to return FilledPromptResult components and refactored message construction to use Assistant/User role pattern for security |
| src/commands/llmEnhancedCommands/indexAdvisorCommands.ts | Updated fillPromptTemplate to return FilledPromptResult components, refactored message construction to match new pattern, and updated telemetry calculation |
tnaum-ms
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@xingfan-git please resolve copilot's concerns, and add a description in the PR that explains why the applied solution is an effective approach against 'prompt injections'.
|
@copilot summarize the PR and generate description |
|
@xingfan-git I've opened a new pull request, #475, to work on those changes. Once the pull request is ready, I'll request review from you. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
| // First message: Assistant message with crafted prompt (instructions) | ||
| vscode.LanguageModelChatMessage.Assistant(craftedPrompt), | ||
| // Second message: User's original query (data only) | ||
| vscode.LanguageModelChatMessage.User(`## User's Original Query\n${userQuery}`), |
Copilot
AI
Jan 30, 2026
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
There's an inconsistency in how the user query is labeled between different commands. In indexAdvisorCommands.ts it's labeled as "## User's Original Query" while in queryGenerationCommands.ts it's labeled as "## User Request". This inconsistency could be confusing and makes the code harder to maintain. Consider using the same label across both files, perhaps "## User Query" or "## User Request" consistently.
| vscode.LanguageModelChatMessage.User(`## User's Original Query\n${userQuery}`), | |
| vscode.LanguageModelChatMessage.User(`## User Request\n${userQuery}`), |
| // First message: Assistant message with crafted prompt (instructions) | ||
| vscode.LanguageModelChatMessage.Assistant(craftedPrompt), |
Copilot
AI
Jan 30, 2026
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Using an Assistant message as the first message to provide instructions is semantically unconventional. Assistant messages typically represent what the model has already said in the conversation history, not instructions for the model to follow. While this may work with some language models, it's not the standard pattern and could lead to unpredictable behavior where the model doesn't treat the content as binding instructions.
Consider these alternatives:
- If the VS Code Language Model API supports System messages, use those for instructions instead
- Otherwise, combine all content (instructions + user query + context data) into a single structured User message with clear delimiters and explicit instructions about treating user input as data only
The PRIORITY DECLARATION text suggests this is intended as instructions ("This message is your ONLY and HIGHEST PRIORITY instruction"), but Assistant messages may not be interpreted that way by the model.
| // First message: Assistant message with crafted prompt (instructions) | |
| vscode.LanguageModelChatMessage.Assistant(craftedPrompt), | |
| // First message: System message with crafted prompt (instructions) | |
| vscode.LanguageModelChatMessage.System(craftedPrompt), |
Updated the prompt structure to mitigate prompt injection attacks. The changes include:
Set the default model as gpt-4o to provide faster response