Python: Foundry Evals integration for Python #4750
Draft
alliscode wants to merge 11 commits into microsoft:main from
Conversation
Force-pushed a0edd5f to fe9e621
python/packages/azure-ai/agent_framework_azure_ai/_foundry_evals.py (5 outdated review threads, resolved)
Force-pushed 15d8640 to aad92ac
Force-pushed af0ccf6 to 45527ee
Merged and refactored eval module per Eduard's PR review:

- Merge _eval.py + _local_eval.py into single _evaluation.py
- Convert EvalItem from dataclass to regular class
- Rename to_dict() to to_eval_data()
- Convert _AgentEvalData to TypedDict
- Simplify check system: unified async pattern with isawaitable
- Parallelize checks and evaluators with asyncio.gather
- Add all/any mode to tool_called_check
- Fix bool(passed) truthy bug in _coerce_result
- Remove deprecated function_evaluator/async_function_evaluator aliases
- Remove _MinimalAgent, tighten evaluate_agent signature
- Set self.name in __init__ (LocalEvaluator, FoundryEvals)
- Limit FoundryEvals to AsyncOpenAI only
- Type project_client as AIProjectClient
- Remove NotImplementedError continuous eval code
- Add evaluation samples in 02-agents/ and 03-workflows/
- Update all imports and tests (167 passing)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
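The "unified async pattern with isawaitable", the asyncio.gather parallelization, and the bool(passed) truthy fix described above can be sketched as follows. This is a minimal illustration, not the PR's actual code; the names run_check, coerce_passed, and the sample checks are hypothetical.

```python
import asyncio
import inspect
from collections.abc import Callable
from typing import Any

async def run_check(check: Callable[..., Any], *args: Any) -> Any:
    # Unified sync/async pattern: call first, await only if the result
    # is awaitable, so sync and async checks share one code path.
    result = check(*args)
    if inspect.isawaitable(result):
        result = await result
    return result

def coerce_passed(value: Any) -> bool:
    # bool(value) alone is a truthy bug: bool("false") is True.
    # Interpret only explicit booleans and known strings as pass/fail.
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        return value.strip().lower() in {"true", "pass", "passed"}
    raise TypeError(f"cannot interpret {value!r} as a pass/fail result")

async def main() -> list[Any]:
    def sync_check(x: int) -> bool:
        return x > 0

    async def async_check(x: int) -> bool:
        return x % 2 == 0

    # Run checks concurrently, mirroring the asyncio.gather parallelization.
    return await asyncio.gather(run_check(sync_check, 3), run_check(async_check, 4))

check_results = asyncio.run(main())
```

The key point is that a single helper handles both plain functions and coroutine functions, so callers never need separate sync/async registration paths.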
Use cast(list[Any], x) with type: ignore[redundant-cast] comments to satisfy both mypy (which considers casting Any redundant) and pyright strict mode (which needs explicit casts to narrow Unknown types). Also fix evaluator decorator check_name type annotation to be explicitly str, resolving mypy str|Any|None mismatch. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
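The cast-plus-ignore pattern described above looks like this in practice. The function and payload shape are hypothetical; only the `cast(list[Any], ...)` with `# type: ignore[redundant-cast]` combination comes from the commit message.

```python
from typing import Any, cast

def as_item_list(payload: dict[str, Any]) -> list[Any]:
    # pyright strict mode sees the looked-up value as Unknown and requires
    # an explicit cast to narrow it; mypy considers casting from Any
    # redundant, hence the ignore comment satisfying both checkers at once.
    return cast(list[Any], payload["data"])  # type: ignore[redundant-cast]

items = as_item_list({"data": [1, 2, 3]})
```

At runtime `cast` is an identity function, so this is purely a static-typing accommodation.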
…attr

- Apply pyupgrade: Sequence from collections.abc, remove forward-ref quotes
- Add @overload signatures to evaluator() for proper @evaluator usage
- Fix evaluate_workflow sample to use WorkflowBuilder(start_executor=) API
- Fix _workflow.py executor.reset() to use getattr pattern for pyright
- Remove unused EvalResults forward-ref string in default_factory lambda

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Force-pushed 5c6ab9b to 5dccdc2
The test_configure_otel_providers_with_env_file_and_vs_code_port test triggers gRPC OTLP exporter creation, but the grpc dependency is optional and not installed by default. Add a skipif decorator matching the pattern used by all other gRPC exporter tests in the same file.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
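The optional-dependency skip pattern described above is typically expressed with `pytest.mark.skipif`. A sketch, assuming pytest is the test runner (which the repo uses); the test name here is hypothetical:

```python
import importlib.util

import pytest

# Skip the test entirely when the optional grpc extra is not installed,
# instead of failing at exporter-creation time with an ImportError.
requires_grpc = pytest.mark.skipif(
    importlib.util.find_spec("grpc") is None,
    reason="grpc optional dependency not installed",
)

@requires_grpc
def test_grpc_otlp_exporter_smoke() -> None:
    import grpc  # only executed when grpc is importable

    assert grpc is not None
```

Defining the marker once as `requires_grpc` keeps all gRPC-dependent tests in a file consistent, which is the pattern the commit aligns with.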
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Move module docstrings before imports (after copyright header)
- Add -> None return type to all main() and helper functions
- Fix line-too-long in multiturn sample conversation data
- Add Workflow import for typed return in all_patterns_sample

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…nings

- Simplify _ensure_async_result to direct await (async-only clients)
- Replace get_event_loop() with get_running_loop()
- Narrow _fetch_output_items exception handling to specific types
- Add warning log when _filter_tool_evaluators falls back to defaults
- Add DeprecationWarning to options alias in Agent.__init__
- Add DeprecationWarning to evaluate_response()
- Rename raw key to _raw_arguments in convert_message fallback
- Fix evaluate_agent_sample.py: replace evals.select() with FoundryEvals()
- Fix evaluate_multiturn_sample.py: use Message/Content/FunctionTool types
- Fix evaluate_workflow_sample.py: replace evals.select() with FoundryEvals()
- Update test mocks to use AsyncMock for awaited API calls

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
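The get_event_loop() → get_running_loop() change above matters because `asyncio.get_event_loop()` is deprecated when called from inside a coroutine. A minimal illustration (the coroutine name is hypothetical):

```python
import asyncio

async def schedule_result() -> str:
    # Inside running async code, get_running_loop() is the unambiguous
    # choice: it never creates a loop and raises if none is running,
    # whereas get_event_loop() is deprecated here and can mask bugs.
    loop = asyncio.get_running_loop()
    future: asyncio.Future[str] = loop.create_future()
    loop.call_soon(future.set_result, "done")
    return await future

loop_result = asyncio.run(schedule_result())
```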
- Add num_repetitions=2 positive test verifying 2×items and 4 agent calls
- Add _poll_eval_run tests: timeout, failed, and canceled paths
- Add evaluate_traces tests: validation error, response_ids path, trace_ids path
- Add evaluate_foundry_target happy-path test with target/query verification

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Wrap implicit string concatenation in parens in evaluate_multiturn_sample.py
- Apply ruff formatter to 6 other files with minor formatting drift

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…nch)

Reverts changes to _agents.py, _agent_executor.py, and _workflow.py back to upstream/main. These fixes are now in a separate PR.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Code fixes:

- Fix _normalize_queries inverted condition (single query now replicates to match expected_count)
- Fix substring match bug: 'end' in 'backend' matched; use exact set lookup for executor ID filtering
- Fix used_available_tools sample: tool_definitions→tools param, use FunctionTool attribute access instead of dict .get()
- Add None-check in _resolve_openai_client for misconfigured project
- Add Returns section to evaluate_workflow docstring
- Cache inspect.signature in @evaluator wrapper (avoid per-item reflection)

Architecture:

- Extract _evaluate_via_responses as module-level helper; evaluate_traces now calls it directly instead of creating a FoundryEvals instance
- Move Foundry-specific typed-content conversion out of core to_eval_data; core now returns plain role/content dicts, FoundryEvals applies AgentEvalConverter in _evaluate_via_dataset

Tests:

- evaluate_response() deprecation warning emission and delegation
- num_repetitions > 1 with expected_output and expected_tool_calls
- Mock output_items.list in test_evaluate_calls_evals_api
- Update to_eval_data assertions for plain-dict format
- Unknown param error now raised at @evaluator decoration time

Skipped (separate PR): executor reset loop, xfail removal, options alias

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
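The substring-match bug fixed above ('end' in 'backend') is a classic `in`-operator pitfall: on strings it tests containment, not identity. A sketch of the exact-set-lookup fix, with hypothetical event/field names:

```python
def filter_by_executor(
    events: list[dict[str, str]], executor_ids: set[str]
) -> list[dict[str, str]]:
    # Substring matching was the bug: 'end' in 'backend' is True.
    # Exact membership in a set is both correct and O(1) per event.
    return [event for event in events if event["executor_id"] in executor_ids]

events = [
    {"executor_id": "backend", "output": "a"},
    {"executor_id": "end", "output": "b"},
]
matched = filter_by_executor(events, {"end"})
```

With the buggy substring check, both events would match the filter "end"; exact set lookup keeps only the `end` executor's event.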
Add evaluation framework with local and Foundry-hosted evaluator support:
Contribution Checklist