Use `_passed` for result aggregation instead of `_result` (#46833)
Draft: Copilot wants to merge 2 commits.
Agent-Logs-Url: https://github.com/Azure/azure-sdk-for-python/sessions/87213dd7-6430-4e46-b77c-805e00df187f
Co-authored-by: m7md7sien <16615690+m7md7sien@users.noreply.github.com>
Copilot changed the title from "[WIP] Isolate changes from PR #46436 for evaluation module" to "[evaluation] Split _evaluate.py schema-alignment changes from #46436" on May 11, 2026.
m7md7sien approved these changes on May 11, 2026.
This PR extracts only the `_evaluate.py` portion of #46436 into a standalone change targeting `main`. Scope is intentionally limited to schema-alignment updates in `sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_evaluate/_evaluate.py`.

Scope

- `azure/ai/evaluation/_evaluate/_evaluate.py` only
- Binary aggregation alignment (`_result` → `_passed`)
- `EVALUATION_PASS_FAIL_MAPPING` import from this module
- `outputs.*_passed` columns
- Pass-count computation update
- Metric exclusion update during mean aggregation
- Rename of `*_result` to `*_passed` (`threshold_and_result_cols` → `threshold_and_passed_cols`)

Original prompt
This PR isolates the changes made to `sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_evaluate/_evaluate.py` in PR #46436 into a standalone PR targeting `main`.

Goal

Apply exactly the following changes (and only these changes) to the file `sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_evaluate/_evaluate.py` on a new branch created off `main`, then open a PR targeting `main`. Do not modify any other file in the repository. Do not run formatters, linters, or any tooling that may alter other lines. Do not change the changelog or version files.
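The branch-then-PR workflow this prompt asks for can be sketched in a throwaway repository. The commands below are an illustrative simulation, not the exact commands run for this PR: local branches `main` and `standardize_output_schema` stand in for `origin/main` and `mohessie/standardize_output_schema`, and the file contents are invented.

```shell
set -e
# Scratch repo simulating: copy one file from a reference branch onto a
# new branch off main, then verify that only that file changed.
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main            # -b requires git >= 2.28
git config user.email demo@example.com
git config user.name demo
echo "old aggregation" > _evaluate.py
echo "unrelated" > other.py
git add . && git commit -qm "baseline"

# Reference branch carries the schema-aligned version of the file.
git checkout -qb standardize_output_schema
echo "use _passed columns" > _evaluate.py
git commit -qam "standardize output schema"

# New branch off main; take only _evaluate.py from the reference branch.
git checkout -q main
git checkout -qb split-evaluate-py
git checkout standardize_output_schema -- _evaluate.py
git commit -qm "Use _passed for result aggregation instead of _result"

# Verification: the only file differing from main is _evaluate.py.
git diff --name-only main
```

The final `git diff --name-only` check mirrors the validation step described below: it prints exactly one path when the branch differs from `main` in exactly one file.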
Exact diff to apply

Apply this unified diff verbatim to `sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_evaluate/_evaluate.py` (the file currently on `main`):

Authoritative reference

The exact content the file should have AFTER applying the diff matches the version currently on the `mohessie/standardize_output_schema` branch. You can copy the file directly from there to ensure byte-for-byte equivalence: `sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_evaluate/_evaluate.py`

Recommended approach: fetch the file content from the `mohessie/standardize_output_schema` branch and commit it to your new branch off `main`. Verify with `git diff origin/main` that the only changed file is `_evaluate.py` and the diff matches the one above exactly.

Branch and PR requirements
- Create a new branch off `main` (suggested name: `m7md7sien/split-evaluate-py-from-46436`).
- Change only `sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_evaluate/_evaluate.py`.
- Open the PR against `main` with a clear title and description referencing PR Standardize Output Schema for Evalautors #46436.
- Limit the PR to the `_evaluate.py` changes.

Validation

- Verify that the only changed file is `sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_evaluate/_evaluate.py`.

This pull request was created from Copilot chat.
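As an illustration of the aggregation change in scope (pass counts computed from boolean `*_passed` columns, which are then excluded from mean aggregation), here is a minimal sketch. The row data and column names are invented for illustration; this is not the actual `_evaluate.py` code, only the `outputs.<evaluator>.<metric>` / `*_passed` column shape follows the schema described in this PR.

```python
# Hypothetical per-row evaluator output; data and names are invented.
rows = [
    {"outputs.relevance.relevance": 4.0, "outputs.relevance.relevance_passed": True},
    {"outputs.relevance.relevance": 2.0, "outputs.relevance.relevance_passed": False},
    {"outputs.relevance.relevance": 5.0, "outputs.relevance.relevance_passed": True},
]

columns = list(rows[0].keys())
passed_cols = [c for c in columns if c.endswith("_passed")]

# Pass rate: fraction of rows whose boolean *_passed flag is True.
pass_rates = {
    c: sum(1 for r in rows if r[c]) / len(rows) for c in passed_cols
}

# Mean aggregation excludes the binary *_passed columns, so only
# numeric metric scores are averaged.
metric_means = {
    c: sum(r[c] for r in rows) / len(rows)
    for c in columns if c not in passed_cols
}
```

With the sample rows above, the pass rate for `outputs.relevance.relevance_passed` is 2/3 and the mean of `outputs.relevance.relevance` is 11/3; the `*_passed` column never contributes to the metric means.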