Generic numeric debugging #19317
Conversation
Differential Revision: D103956056
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19317
Note: Links to docs will display an error until the docs builds have been completed.
❌ 5 New Failures, 6 Pending. As of commit 7ad210a with merge base 0f9de6a.
NEW FAILURES: The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@metascroy has exported this pull request. If you are a Meta employee, you can view the originating Diff in D103956056.
@metascroy has imported this pull request. If you are a Meta employee, you can view this in D103956056.
    specs: list[TapSpec] = []
    new_tap_nodes: list[fx.Node] = []

    for node in candidate_nodes:
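For context, here is a minimal self-contained sketch of the idea behind this loop. It is an illustrative assumption, not the code in this PR: `TapSpec` below is a hypothetical stand-in, the candidate selection heuristic is a guess, and the real pass operates on the exported, delegate-lowered program rather than a plain fx-traced module.

```python
# Sketch (assumption, not this PR's implementation): surface selected intermediate
# fx nodes as extra graph outputs so they can be compared against a reference run.
from dataclasses import dataclass

import torch
from torch import fx


@dataclass
class TapSpec:  # hypothetical stand-in for the PR's TapSpec
    name: str
    target: str


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))


gm = fx.symbolic_trace(Model())
graph = gm.graph

# Candidate nodes: every compute node (assumed selection heuristic).
candidate_nodes = [n for n in graph.nodes if n.op in ("call_function", "call_module")]

specs: list[TapSpec] = []
new_tap_nodes: list[fx.Node] = []

for node in candidate_nodes:
    specs.append(TapSpec(name=node.name, target=str(node.target)))
    new_tap_nodes.append(node)

# Re-point the output node so the tapped intermediates come back as extra outputs.
output_node = next(n for n in graph.nodes if n.op == "output")
(orig_out,) = output_node.args
orig_out = orig_out if isinstance(orig_out, tuple) else (orig_out,)
output_node.args = ((*orig_out, *new_tap_nodes),)
graph.lint()
gm.recompile()

out, *taps = gm(torch.randn(2, 4))
print([s.name for s in specs], [t.shape for t in taps])
```

The point of the sketch is only the mechanism: tapped intermediates become extra outputs of the same graph, which is what the discussion below contrasts with forced single-op partitions.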
What happens if the delegate has to fuse some candidate nodes? Try lowering conv2d --> batch_norm, where XNNPACK doesn't support standalone batch_norm IIRC.
Also, how is this better than forced single-op partitions?
This is primarily to help RL with a numeric debugging investigation. I may clean it up into a generic utility if they find it useful, but if I do that, it will likely go through design review to get input from others.
To answer your question on fused candidates: this is tested with CoreML's quantized linear pattern [dequantize -> linear]. In that case, we tap the intermediate output after the linear node, which is actually a quantized linear in both eager (from the QDQ pattern) and CoreML (from its internal fusion). In the [conv2d --> batch_norm] case, I'd have to check. Tapping batch_norm should mean we want the output of batch_norm, i.e., the intermediate output after batch_norm, which can be the result of a fused [conv2d --> batch_norm] op. If we did forced single-op partitions on conv2d and batch_norm separately, we wouldn't get the fusion.
> how is it better than forced single-op partitions?
One reason is that single-op partitions destroy the fusions that backends do. Here we tap intermediates, so as long as we tap the final intermediate after a fusion pattern, we should be good.
A second reason is that this approach keeps the same big delegate blob, just with extra outputs. In CoreML's case that means the model will still be routed to the ANE, whereas if you break it up into single-op partitions, it will very likely be rerouted to the CPU because the partitions are small.
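As a usage sketch (assumed, not from this PR): once the eager model and the delegate-lowered program both return the same tapped intermediates as extra outputs, the numeric-debugging step reduces to a per-tensor comparison, for example:

```python
# Hypothetical comparison helper (not part of this PR): compare tapped
# intermediates from an eager reference run against a delegate run.
import torch


def compare_taps(
    eager_taps: dict[str, torch.Tensor],
    delegate_taps: dict[str, torch.Tensor],
    atol: float = 1e-3,
) -> None:
    for name, ref in eager_taps.items():
        test = delegate_taps[name]
        max_err = (ref.float() - test.float()).abs().max().item()
        status = "OK " if max_err <= atol else "BAD"
        print(f"{status} {name}: max abs error {max_err:.3e}")


# Dummy data for illustration; in practice both dicts would be built from the
# TapSpec names and the extra outputs returned by each run on the same inputs.
ref = {"linear": torch.randn(2, 4)}
test = {"linear": ref["linear"] + 1e-4 * torch.randn(2, 4)}
compare_taps(ref, test)
```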