
Conversation

@justinchuby
Collaborator

@justinchuby justinchuby commented Jan 15, 2026

This pull request refactors how operator schemas are handled throughout the autocast, converter, and evaluator modules. The main change is replacing direct usage of OpSchema with a new _schemas.OpSignature abstraction, leading to more consistent and modular code when dealing with operator signatures, especially for input casting and evaluation. Several related methods are renamed and refactored for clarity and encapsulation.

Important changes

  • The Evaluator interface now defines eval_op for ONNX ops. The old eval method was removed in favor of the more flexible eval_op; the exporter's eval continues to work via compatible logic in the Op class
  • The op_schema property is removed from Functions
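A minimal sketch of the described interface split, with hypothetical names and bodies (the real onnxscript classes differ in detail): the evaluator exposes eval_op, while Op keeps a compatible eval that delegates to it.

```python
from __future__ import annotations

import abc
from typing import Any, Mapping, Sequence


class Evaluator(abc.ABC):
    """Base evaluator: subclasses implement eval_op for ONNX ops."""

    @abc.abstractmethod
    def eval_op(self, op: "Op", args: Sequence[Any], kwargs: Mapping[str, Any]) -> Any:
        """Evaluate a single ONNX op given positional inputs and attributes."""


class Op:
    """An op keeps a compatible eval() that forwards to the evaluator."""

    def __init__(self, name: str, evaluator: Evaluator) -> None:
        self.name = name
        self._evaluator = evaluator

    def eval(self, *args: Any, **kwargs: Any) -> Any:
        # The exporter's eval continues to work by delegating to eval_op.
        return self._evaluator.eval_op(self, args, kwargs)


class AddEvaluator(Evaluator):
    """Toy evaluator that only knows how to evaluate Add."""

    def eval_op(self, op: Op, args: Sequence[Any], kwargs: Mapping[str, Any]) -> Any:
        assert op.name == "Add"
        return args[0] + args[1]
```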

Operator signature abstraction and autocast refactor:

  • Replaced usage of OpSchema with _schemas.OpSignature in onnxscript/_internal/autocast.py, updating all relevant function signatures and internal logic to use the new abstraction. This includes changing how input parameters are filtered and type constraints are accessed.
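The shape of the new abstraction can be sketched as follows; the class and helper below are illustrative stand-ins, not onnxscript's actual OpSignature API. The idea is that a signature object carries per-parameter type constraints, and autocast filters input parameters and casts arguments against them:

```python
from __future__ import annotations

import dataclasses
from typing import Any, Sequence


@dataclasses.dataclass
class Parameter:
    name: str
    type_constraint: type  # simplified: a single allowed Python type


@dataclasses.dataclass
class OpSignature:
    name: str
    params: Sequence[Parameter]

    def input_params(self) -> list[Parameter]:
        # In a real abstraction this would filter out attribute parameters.
        return list(self.params)


def cast_inputs(signature: OpSignature, args: Sequence[Any]) -> list[Any]:
    """Cast each positional input to its parameter's constraint type."""
    casted = []
    for param, arg in zip(signature.input_params(), args):
        if isinstance(arg, param.type_constraint):
            casted.append(arg)
        else:
            casted.append(param.type_constraint(arg))
    return casted
```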

AST Converter integration:

  • Updated the converter (onnxscript/_internal/converter.py) to pass op_signature instead of op_schema to autocast functions, ensuring compatibility with the new signature abstraction.

Evaluator refactor and encapsulation:

  • Refactored the evaluator (onnxscript/_internal/evaluator.py) to use _adapt_inputs, _adapt_attributes, and _adapt_outputs methods, encapsulating the logic for adapting inputs/outputs and removing unused or redundant methods. Now, operator signatures are consistently adapted from OpSchema using _schemas.OpSignature.
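The adapt-then-eval pattern described above can be sketched like this; the method names mirror the PR's private helpers, but the bodies are simplified illustrations, not onnxscript's actual logic:

```python
class SimpleEvaluator:
    """Illustrative evaluator showing the private adapter-method pattern."""

    def _adapt_inputs(self, signature, args):
        # e.g. wrap bare Python scalars into lists so every input is array-like
        return [a if isinstance(a, list) else [a] for a in args]

    def _adapt_attributes(self, signature, attributes):
        # split plain attributes from callables that need closure handling
        closure = {k: v for k, v in attributes.items() if callable(v)}
        plain = {k: v for k, v in attributes.items() if not callable(v)}
        return plain, closure

    def _adapt_outputs(self, signature, outputs):
        # unwrap a single-element output list for caller convenience
        return outputs[0] if len(outputs) == 1 else outputs
```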

Additionally:

  • Cleaned up typing annotation utilities
  • IR attributes are now supplied directly when creating attributes, avoiding proto serialization and loss of subgraph references
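The last point can be illustrated with a stand-in sketch (the Attr and Graph classes below are hypothetical, not the actual onnxscript IR API): keeping the live Python object on the attribute preserves identity, whereas a proto round-trip would yield a detached copy.

```python
import dataclasses
from typing import Any


@dataclasses.dataclass
class Graph:
    """Stand-in for an IR subgraph object."""
    name: str


@dataclasses.dataclass
class Attr:
    """Stand-in for an IR attribute that holds a live value."""
    name: str
    value: Any  # the object itself, e.g. a Graph, not its serialized form


def make_body_attr(subgraph: Graph) -> Attr:
    # Passing the graph object through keeps the reference intact; a
    # serialize/deserialize round-trip would have severed it.
    return Attr("body", subgraph)
```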

Signed-off-by: Justin Chu <justinchuby@users.noreply.github.com>
@codecov

codecov bot commented Jan 15, 2026

Codecov Report

❌ Patch coverage is 87.31343% with 17 lines in your changes missing coverage. Please review.
✅ Project coverage is 62.96%. Comparing base (1080d6a) to head (1eefa02).

Files with missing lines Patch % Lines
onnxscript/_internal/values.py 70.45% 11 Missing and 2 partials ⚠️
onnxscript/_internal/evaluator.py 93.10% 1 Missing and 1 partial ⚠️
onnxscript/_internal/autocast.py 92.30% 0 Missing and 1 partial ⚠️
onnxscript/_internal/converter.py 96.66% 0 Missing and 1 partial ⚠️

❗ There is a different number of reports uploaded between BASE (1080d6a) and HEAD (1eefa02). Click for more details.

HEAD has 18 fewer uploads than BASE
Flag BASE (1080d6a) HEAD (1eefa02)
19 1
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2771      +/-   ##
==========================================
- Coverage   70.45%   62.96%   -7.49%     
==========================================
  Files         228      228              
  Lines       27257    27140     -117     
  Branches     2760     2736      -24     
==========================================
- Hits        19203    17089    -2114     
- Misses       7102     9207    +2105     
+ Partials      952      844     -108     

☔ View full report in Codecov by Sentry.

Contributor

Copilot AI left a comment


Pull request overview

This PR refactors how operator schemas are handled by replacing direct usage of onnx.defs.OpSchema with a new _schemas.OpSignature abstraction throughout the autocast, converter, and evaluator modules. This change improves modularity and encapsulation when dealing with operator signatures.

Changes:

  • Replaced OpSchema with OpSignature in autocast functions for input type casting
  • Updated converter to pass op_signature instead of op_schema to autocast functions
  • Refactored evaluator methods to be private (_adapt_inputs, _adapt_attributes, _adapt_outputs) and removed the use_graph_attribute method along with its associated conditional logic

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

File Description
onnxscript/_internal/autocast.py Updated to use OpSignature instead of OpSchema, including parameter filtering and type constraint access changes
onnxscript/_internal/converter.py Changed to pass op_signature instead of op_schema to autocast functions
onnxscript/_internal/evaluator.py Made adapter methods private, removed use_graph_attribute method, and simplified _adapt_attributes to always use graph attributes
Comments suppressed due to low confidence (1)

onnxscript/_internal/evaluator.py:548

  • The use_graph_attribute method in ORTMixedEvaluator is now dead code since it's no longer called after removing the conditional logic in _adapt_attributes. This method should be removed from the ORTMixedEvaluator class.
    def use_graph_attribute(self, schema: onnx.defs.OpSchema) -> bool:
        return _schema_id(schema) not in self._python_ops

@justinchuby justinchuby marked this pull request as draft January 16, 2026 01:24
@justinchuby justinchuby marked this pull request as ready for review January 16, 2026 19:44
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.

Comments suppressed due to low confidence (1)

onnxscript/_internal/values.py:364

  • The docstring still mentions a 'functions' parameter that was removed from the method signature. This outdated documentation should be removed.
            functions: A list of functions to include in the model.
                By default, all functions called at least once are included.

@titaiwangms
Contributor

Is this ready?

@justinchuby
Collaborator Author

I thought it was but looks not. Will update

@justinchuby justinchuby marked this pull request as draft January 20, 2026 21:53
@justinchuby
Collaborator Author

Feel free to put in review comments though. Thanks!

…xpression output

@justinchuby justinchuby marked this pull request as ready for review January 23, 2026 02:49
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 13 out of 13 changed files in this pull request and generated 4 comments.

Comment on lines +256 to +260
assert op_signature is not None, f"Op {op.name} has no signature."
attributes = _unwrap_tensors_in_kwargs(kwargs)
attributes, closure = self._adapt_attributes(op_signature, attributes)
inputs = self._adapt_inputs(op_signature, args)
outputs = self._eval(op.op_schema, inputs, attributes, closure)

Copilot AI Jan 23, 2026


eval_op uses an assert to require op.op_signature and then calls _eval(op.op_schema, ...), but op.op_schema can be None (e.g., custom/unknown ops). This will raise an AssertionError (or pass None into _eval) instead of a user-facing exception. Please replace the assert with an explicit check that raises a clear RuntimeError/ValueError, and also validate op.op_schema is present (or change _eval to take the needed identifier/signature instead of an OpSchema).

Comment on lines +410 to 414
for func in ir_functions:
    domain = func.domain
    if domain is not None and domain not in opset_imports:
        opset_imports[domain] = func.meta.get("opset_version", 1)


Copilot AI Jan 23, 2026


_to_model_proto updates opset_imports using only func.domain (from func.meta['opset_version']) and partially handles the default domain, but it does not merge the full func.opset_imports map for other domains used inside nested functions. This can produce a model missing required opset imports. Consider merging all domains/versions from each func.opset_imports (and resolving conflicts), and avoid relying solely on func.meta['opset_version'] when the version is available in func.opset_imports.
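A sketch of the suggested merge, assuming each function exposes an opset_imports mapping of domain to version (simplified; the conflict policy here just takes the higher version, which may or may not be what the project wants):

```python
def merge_opset_imports(model_opset_imports, functions):
    """Merge every function's opset_imports into the model-level map."""
    merged = dict(model_opset_imports)
    for func in functions:
        for domain, version in func.opset_imports.items():
            if domain in merged and merged[domain] != version:
                # naive conflict resolution: prefer the higher version
                merged[domain] = max(merged[domain], version)
            else:
                merged[domain] = version
    return merged
```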

- fail(info.msg(str(e)))
- attr = self._make_onnx_attr("value", tensor)
+ attr = ir.AttrTensor("value", ir.tensor(pyvalue, name=ovar))

Copilot AI Jan 23, 2026


_emit_const no longer wraps tensor construction in a try/except that reports a source-location-aware error via self.fail(...). If ir.tensor(pyvalue, ...) rejects an unsupported constant, the exception will bubble up without source context. Please restore error handling so conversion failures are surfaced with info (consistent with the prior behavior).

Suggested change
- attr = ir.AttrTensor("value", ir.tensor(pyvalue, name=ovar))
+ try:
+     tensor = ir.tensor(pyvalue, name=ovar)
+ except Exception as exc:  # noqa: BLE001
+     self.fail(
+         info,
+         f"Failed to convert constant value {pyvalue!r} to ONNX tensor: {exc}",
+     )
+ attr = ir.AttrTensor("value", tensor)


@pytest.mark.xfail(
-     strict=True,
+     strict=False,

Copilot AI Jan 23, 2026


pytest.mark.xfail(strict=False, ...) will no longer fail the suite if this test unexpectedly starts passing (XPASS), which can hide progress/regressions. If the test is expected to fail until optional outputs are implemented, keep strict=True (or remove xfail once it passes).

Suggested change
-     strict=False,
+     strict=True,
