
Add IterationCompleted delegate to FunctionInvokingChatClient to facilitate agentic patterns#7251

Open
PederHP wants to merge 2 commits into dotnet:main from PederHP:meai_function_invocation_hook

Conversation

@PederHP (Contributor) commented Jan 31, 2026

Add IterationCompleted hook to FunctionInvokingChatClient

Description

This PR adds an IterationCompleted callback to FunctionInvokingChatClient that is invoked after each iteration of the function invocation loop completes. This enables external code to monitor and control the agentic loop without modifying function implementations.

Motivation

When building agentic applications, there's often a need to:

  • Monitor token usage and trigger context compaction before hitting limits / at certain thresholds
  • Enforce cost limits by terminating loops that exceed budget thresholds
  • Apply content guardrails and moderation between iterations
  • Enforce time limits for long-running agent sessions
  • Implement custom logging or metrics at the iteration level

Currently, the only way to break out of the function-calling loop is from within a function via FunctionInvocationContext.Terminate. This requires coupling monitoring/control logic with function implementations (a rough sketch of this coupling follows the list below), which is problematic when:

  • Functions are reusable across different contexts with different limits
  • Monitoring logic needs access to aggregated state (like total usage across iterations)
  • The decision to terminate depends on factors external to any single function
  • Function results need to be returned and added to message history for context compaction purposes or similar
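
For illustration, here is a rough sketch of that status quo (not code from this PR; budgetExceeded stands in for whatever application state tracks cost, and FunctionInvokingChatClient.CurrentContext is the existing way for an executing function to reach the invocation context):

bool budgetExceeded = false; // placeholder for app-level cost/usage tracking

AIFunction getWeather = AIFunctionFactory.Create((string city) =>
{
    // The budget policy has to live inside the tool; every reusable tool would
    // need to duplicate (and agree on) this check.
    if (budgetExceeded)
    {
        var ctx = FunctionInvokingChatClient.CurrentContext;
        if (ctx is not null)
        {
            ctx.Terminate = true;
        }
    }

    return $"Sunny in {city}";
});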

Solution

Add an IterationCompleted property that accepts a callback invoked after each iteration:

[Experimental(DiagnosticIds.Experiments.AIIterationCompleted, UrlFormat = DiagnosticIds.UrlFormat)]
public Func<FunctionInvocationIterationContext, CancellationToken, ValueTask>? IterationCompleted { get; set; }

The callback receives a FunctionInvocationIterationContext containing:

  • Iteration - The current iteration number (0-based)
  • TotalUsage - Aggregated usage details across all iterations so far
  • Messages - All messages accumulated during the loop
  • Response - The response from the most recent inner client call
  • IsStreaming - Whether this is a streaming operation
  • Terminate - Set to true to stop the loop after this iteration

Example Usage

var client = new FunctionInvokingChatClient(innerClient)
{
    IterationCompleted = (ctx, ct) =>
    {
        if (ctx.TotalUsage?.TotalTokenCount > 50000)
        {
            logger.LogWarning("Token limit approaching, terminating loop");
            ctx.Terminate = true;
        }
        return default;
    }
};
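
Because the delegate returns ValueTask, the callback can also await work between iterations. A hedged sketch, where ModerateAsync is a hypothetical application-level guardrail check rather than an M.E.AI API:

var client = new FunctionInvokingChatClient(innerClient)
{
    IterationCompleted = async (ctx, ct) =>
    {
        // ctx.Messages includes the tool results produced so far, so guardrails
        // can inspect them between iterations.
        bool flagged = await ModerateAsync(ctx.Messages, ct);
        if (flagged)
        {
            logger.LogWarning("Content flagged in iteration {Iteration}, terminating loop", ctx.Iteration);
            ctx.Terminate = true;
        }
    }
};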

Design Decisions

  1. Hook timing: The callback is invoked after function invocations complete for each iteration, ensuring function calls are never lost mid-execution and results are available for inspection and/or compaction. This also lets function calls with important side effects resolve before a compaction, if one is triggered after the call.
  2. Single hook for streaming and non-streaming: Simplifies the API while the IsStreaming property allows differentiation when needed.
  3. Async delegate with ValueTask: Allows both synchronous and asynchronous implementations efficiently. Follows the pattern of the existing FunctionInvoker delegate.
  4. Defensive copy of TotalUsage: The callback receives a snapshot of usage details that won't be mutated by subsequent iterations (sketched after this list).
  5. Marked as experimental: Uses MEAI001 diagnostic ID consistent with other experimental AI features.
  6. FunctionInvocationIterationContext not sealed: This matches the existing FunctionInvocationContext class in the same namespace; the .NET coding guidelines say to prefer established patterns over the guidelines themselves.
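
For item 4, a rough sketch of what the defensive copy might look like (the PR's actual CloneUsageDetails helper may differ in its details):

static UsageDetails CloneUsage(UsageDetails source) => new()
{
    InputTokenCount = source.InputTokenCount,
    OutputTokenCount = source.OutputTokenCount,
    TotalTokenCount = source.TotalTokenCount,
    // Copy the dictionary so later mutations by the loop aren't observed by the callback.
    AdditionalCounts = source.AdditionalCounts is null
        ? null
        : new AdditionalPropertiesDictionary<long>(source.AdditionalCounts),
};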

Changes

  • Added FunctionInvocationIterationContext class
  • Added IterationCompleted property to FunctionInvokingChatClient
  • Integrated callback invocation in both GetResponseAsync and GetStreamingResponseAsync
  • Added usage tracking in streaming path when callback is configured
  • Added comprehensive unit tests

Testing

Added 11 new tests (one sketched below) covering:

  • Property getter/setter behavior
  • Callback invocation after each iteration (streaming and non-streaming)
  • Loop termination via Terminate property
  • Correct usage details in callback
  • Callback not invoked when no function calls occur
  • Message accumulation across iterations
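
As a rough illustration of the first item, a minimal property round-trip test (TestChatClient stands in for whatever inner-client stub the test project provides):

[Fact]
public void IterationCompleted_Roundtrips()
{
    using var innerClient = new TestChatClient();
    using var client = new FunctionInvokingChatClient(innerClient);

    // Unset by default.
    Assert.Null(client.IterationCompleted);

    Func<FunctionInvocationIterationContext, CancellationToken, ValueTask> callback = (ctx, ct) => default;
    client.IterationCompleted = callback;
    Assert.Same(callback, client.IterationCompleted);
}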

Co-authored with OpenCode / Claude Opus 4.5


@PederHP requested a review from a team as a code owner, January 31, 2026 17:33
Copilot AI review requested due to automatic review settings January 31, 2026 17:33
github-actions bot added the area-ai (Microsoft.Extensions.AI libraries) label, Jan 31, 2026

Copilot AI left a comment


Pull request overview

This PR adds an IterationCompleted callback hook to FunctionInvokingChatClient that enables external monitoring and control of the agentic function-calling loop. The callback receives context including iteration number, aggregated usage details, accumulated messages, and the ability to terminate the loop.

Changes:

  • Added FunctionInvocationIterationContext class to provide context about completed iterations
  • Added IterationCompleted property to FunctionInvokingChatClient for the callback
  • Integrated callback invocation in both streaming and non-streaming paths after function invocations complete
  • Added experimental diagnostic ID AIIterationCompleted
  • Added 11 comprehensive tests covering various callback scenarios

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated no comments.

Changed files:

  • src/Libraries/Microsoft.Extensions.AI/ChatCompletion/FunctionInvocationIterationContext.cs: New context class providing iteration details including iteration number, usage, messages, response, and termination control
  • src/Libraries/Microsoft.Extensions.AI/ChatCompletion/FunctionInvokingChatClient.cs: Added the IterationCompleted property, integrated callback invocation in both async paths, added a CloneUsageDetails helper, and added logging for callback-requested termination
  • src/Shared/DiagnosticIds/DiagnosticIds.cs: Added the AIIterationCompleted experimental diagnostic constant
  • test/Libraries/Microsoft.Extensions.AI.Tests/ChatCompletion/FunctionInvokingChatClientTests.cs: Added 11 tests covering property behavior, callback invocation, termination, usage details, and message accumulation

@stephentoub (Member) commented:

Currently, the only way to break out of the function-calling loop is from within a function via FunctionInvocationContext.Terminate

Using the FunctionInvoker delegate isn't sufficient?

@PederHP (Contributor, Author) commented Feb 3, 2026

Currently, the only way to break out of the function-calling loop is from within a function via FunctionInvocationContext.Terminate

Using the FunctionInvoker delegate isn't sufficient?

It only allows terminating between the tool-use request and the tool result, i.e. aborting the entire tool call, which isn't ideal. It is much better to be able to exit the loop after a tool result.

The main use case I have for this is long-running agentic loops where the agent application needs to self-curate context. The model will happily call tools until the context becomes larger than desired (many models work best at no more than 40%-60% of full context), which means either terminating inside the invoker (throwing away a tool call) or waiting for the context to reach max length, which is a really bad solution.

Terminating inside the FunctionInvoker is wasteful in terms of inference (as agentic tool calls can have quite large payloads) and it's also awkward to work with the state of context when the purpose is compaction and continuation.

It is much cleaner to break after the tool invocation. It would be even more ideal to be able to break after an arbitrary ChatMessage or AIContent, but that would require a much bigger change so I decided to see if this was something that'd be acceptable.

But the main thing is really to make control over the agentic loop more flexible, as improvements to models make longer loops viable, but only if context is actively managed during the loop.

@stephentoub (Member) commented:

It only allows terminating between tool use and tool result which isn't ideal. It is much better to be able to exit the loop after the tool result.

I'm not understanding. If I write:

ficc.FunctionInvoker = async (context, cancellationToken) =>
{
    var toolResult = await context.Function.InvokeAsync(context.Arguments, cancellationToken);

    // handling here
    Console.WriteLine("this is after the toolResult");

    return toolResult;
};

is that not what you're talking about?

@PederHP (Contributor, Author) commented Feb 3, 2026

It only allows terminating between tool use and tool result which isn't ideal. It is much better to be able to exit the loop after the tool result.

I'm not understanding. If I write:

ficc.FunctionInvoker = async (context, cancellationToken) =>
{
    var toolResult = await context.Function.InvokeAsync(context.Arguments, cancellationToken);

    // handling here
    Console.WriteLine("this is after the toolResult");

    return toolResult;
};

is that not what you're talking about?

But that doesn't terminate the tool calling loop. I want to push the tool results onto the stack, and then have the chance to inspect and modify context. Which I've found is best done by returning and letting the outside code reinvoke the IChatClient with a reassembled / modified context.

Perhaps a cleaner and more general abstraction would be to allow ending the loop from the outside, but I couldn't think of a clean way to enable that, as it is very function-calling specific and also doesn't work well with non-streaming.
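
For instance, a rough sketch of that outer loop, assuming this PR's IterationCompleted; CompactAsync is a placeholder for application-specific summarization/compaction, and innerClient/options are assumed to already exist:

bool compactionNeeded = false;

var client = new FunctionInvokingChatClient(innerClient)
{
    IterationCompleted = (ctx, ct) =>
    {
        if (ctx.TotalUsage?.TotalTokenCount > 50_000)
        {
            ctx.Terminate = true;     // stop the function-calling loop...
            compactionNeeded = true;  // ...so the caller can compact and reinvoke
        }
        return default;
    }
};

List<ChatMessage> history = [new(ChatRole.User, "Plan and execute the task.")];

do
{
    compactionNeeded = false;

    var response = await client.GetResponseAsync(history, options);
    history.AddRange(response.Messages); // tool calls and tool results are preserved

    if (compactionNeeded)
    {
        history = await CompactAsync(history);
    }
}
while (compactionNeeded);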

@PederHP (Contributor, Author) commented Feb 3, 2026

I suppose:

ficc.FunctionInvoker = async (context, cancellationToken) =>
{
    var toolResult = await context.Function.InvokeAsync(context.Arguments, cancellationToken);

    // save toolCall from context.Messages somewhere along with toolResult
    context.Terminate = true;    

    return toolResult;
};

Would work - and then adding the toolCall+toolResult back into message history manually. But I think it breaks for parallel tool calls.
