
Conversation


Copilot AI commented Dec 10, 2025

  • Understand the current workspace context implementation
  • Modify PullRequestContextProvider to include current user login in workspace context
  • Ensure the current user is obtained for each repository manager
  • Run linting and hygiene checks
  • Add logging for error handling
  • Run security checks (CodeQL)

Summary

Successfully added the current user's GitHub login to the workspace context provided by PullRequestContextProvider. This enables the GHPR chat participant to know who the current user is when handling requests like "Summarize my work items for https://github.com/microsoft/vscode-copilot/issues/8836", allowing it to provide contextual responses based on the user's identity (e.g., lramos15).

Changes Made

  • Added current user retrieval in provideWorkspaceChatContext() method
  • Included error handling with debug logging for troubleshooting
  • Preserved existing code style and formatting
  • No breaking changes: the context addition is optional and failures are handled gracefully (see the sketch below)
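
For concreteness, here is a minimal sketch of the shape of the change. The `WorkspaceChatContext` fields, the `getCurrentUser()` method on the folder manager, and the `console.debug` stand-in for the extension's logger are all assumptions for illustration; the extension's actual types and names may differ.

```ts
import * as vscode from 'vscode';

// Hypothetical shapes for illustration only; the extension's real types differ.
interface WorkspaceChatContext {
	currentUser?: string;
}

interface FolderRepositoryManager {
	getCurrentUser(): Promise<{ login: string } | undefined>;
}

class PullRequestContextProvider {
	constructor(private readonly folderManagers: FolderRepositoryManager[]) { }

	async provideWorkspaceChatContext(_token: vscode.CancellationToken): Promise<WorkspaceChatContext> {
		const context: WorkspaceChatContext = {};
		try {
			// Resolve the current user from the repository managers so the chat
			// participant knows who "my" refers to in requests like
			// "Summarize my work items".
			for (const manager of this.folderManagers) {
				const user = await manager.getCurrentUser();
				if (user?.login) {
					context.currentUser = user.login; // e.g. "lramos15"
					break; // one login is enough for the chat context
				}
			}
		} catch (e) {
			// The addition is optional: log at debug level and fall through
			// rather than failing the whole context request.
			console.debug(`Failed to resolve current user for chat context: ${e}`);
		}
		return context;
	}
}
```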

Security Summary

  • CodeQL analysis completed with 0 alerts
  • No security vulnerabilities introduced

Original prompt

This section details the original issue you should resolve

<issue_title>GHPRI issue summary doesn't know who current user is</issue_title>
<issue_description>Testing #6321

@githubpr Summarize my work items for https://github.com/microsoft/vscode-copilot/issues/8836

Doesn't provide a better summary than just asking for a generic summary. I would like it to give me a contextual response and know that my GH tag is lramos15</issue_description>

<agent_instructions>Add the current user into the workspace context.</agent_instructions>

Comments on the Issue (you are @copilot in this section)

<comment_new>@aiday-mar I have started looking into how to fix that. To do that, I would like to do the following:
  • include the current user's login in the summarization tool prompt
  • include the user's actual request in the summarization tool prompt

The reason I'd like to include the user's actual request is so that the summarization tool can take into account how the summarization should be done; in Logan's case, the summary should cover only the current user's work items.

I added the user's request to our `invocationOptions` object of type `LanguageModelToolInvocationOptions<object>`, which is passed in when calling `vscode.lm.invokeTool`. That is, I did the following:

```ts
const invocationOptions: ToolInvocationOptions<any> = {
	parameters,
	// Extra field carrying the user's verbatim chat request.
	requestPrompt: request.prompt,
	toolInvocationToken: request.toolInvocationToken
};
toolCalls.push({
	call: part,
	result: vscode.lm.invokeTool(tool.name, invocationOptions, token),
	tool
});
```

where `ToolInvocationOptions` extends `vscode.LanguageModelToolInvocationOptions`. When testing, I noticed that by the time `invoke` is called, the options object no longer contains the `requestPrompt` field. I then looked at the definition of `invokeTool` on the VS Code side and noticed that in extHostLanguageModelTools.ts, on line 53, we essentially destructure the options object and call `$invokeTool` with only the `parameters`, the `tokenizationOptions`, and the `toolInvocationToken`.
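
The effect can be reproduced in isolation. The following is not the actual extHostLanguageModelTools.ts code, just a simplified sketch of the same pattern: extra fields on the options object are dropped because only the known fields are destructured and forwarded.

```ts
// Simplified illustration (not the actual VS Code source) of why extra fields
// are dropped before the call crosses to the main thread.
interface InvocationOptions<T> {
	parameters: T;
	tokenizationOptions?: unknown;
	toolInvocationToken?: unknown;
	// Callers may attach extra fields such as `requestPrompt`...
	[extra: string]: unknown;
}

declare function $invokeTool(name: string, dto: object): Promise<unknown>;

function invokeTool<T>(name: string, options: InvocationOptions<T>): Promise<unknown> {
	// ...but only these three fields are picked off and forwarded, so anything
	// else (like `requestPrompt`) never reaches the tool's invoke() handler.
	const { parameters, tokenizationOptions, toolInvocationToken } = options;
	return $invokeTool('someTool', { parameters, tokenizationOptions, toolInvocationToken });
}
```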

I see two ways to fix this:

  • Append the request prompt directly to the parameters object that is returned in the model's result stream, and use it in the tool invocation (sketched below)
  • Alternatively, add a new optional parameter, say `context`, defined by the developer, which could be used to pass additional data into the tool invocation
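
A sketch of the first option, mirroring the snippet above and using the older API shape described there (`parameters`, `toolInvocationToken`); the names are illustrative. Since extra fields on the options object itself are dropped, the request prompt is nested inside `parameters`, which does survive the trip.

```ts
const invocationOptions: vscode.LanguageModelToolInvocationOptions<object> = {
	parameters: {
		...parameters,
		// Piggybacked on the parameters object, so it is forwarded to invoke().
		requestPrompt: request.prompt
	},
	toolInvocationToken: request.toolInvocationToken
};
toolCalls.push({
	call: part,
	result: vscode.lm.invokeTool(tool.name, invocationOptions, token),
	tool
});
```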

Thoughts @roblourens?</comment_new>

<comment_new>@roblourens There's a spectrum of options. For a tool that you register with `registerTool`, it's best for the LLM to be able to call the tool without any special handling of that particular tool. So the "proper" thing to do, IMO, would be to figure out how to map that part of the request to a tool parameter that the LLM can fill in. I don't know what that would look like for your tool; maybe something like `"filter": "currentUser"`. If you need to do some other processing, you can also use `"query": "Summarize my work items"` and set the description of `"query"` to something that will convince the LLM to fill it out correctly.

Then, it's sort of cheating, but you can also have a "query" that you just set manually in your code when this tool is called.
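
For example, the summarization tool's parameter schema could declare `query` and `filter` fields whose descriptions steer the model. This is a hypothetical schema, not the extension's actual one:

```ts
// Hypothetical parameter schema for the summarization tool; the descriptions
// are what convince the LLM to fill the fields out correctly.
const parametersSchema = {
	type: 'object',
	properties: {
		query: {
			type: 'string',
			description: "The user's summarization request, copied verbatim, " +
				'e.g. "Summarize my work items".'
		},
		filter: {
			type: 'string',
			enum: ['currentUser', 'all'],
			description: 'Whose items to summarize. Use "currentUser" when the user says "my".'
		}
	},
	required: ['query']
};
```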

Besides that, I think it's more appropriate to have this kind of flexibility if you don't use `registerTool`/`invokeTool` but instead have a made-up tool that only exists in your code; then you can have this disconnect between the tool that the LLM calls and whatever code you run internally to get a result that you report to the LLM.

Happy to chat about it more if this is confusing.</comment_new>
<comment_new>@aiday-mar Thanks for the answer. I will think more about that.</comment_new>



Copilot AI and others added 3 commits December 10, 2025 11:41
Co-authored-by: alexr00 <38270282+alexr00@users.noreply.github.com>