Conversation
Pull request overview
Optimizes multipass JIT lowering for EVM SIGNEXTEND by replacing the previous “sign-bit-position + generic mask” construction with a more direct word/byte decomposition that better matches the interpreter’s structure and reduces intermediate arithmetic/live ranges.
Changes:
- Rewrote `EVMMirBuilder::handleSignextend()` lowering to compute `(index / 8, index % 8)` and operate on the selected 64-bit limb.
- Extracted the sign byte, sign-extended it to `i64`, rebuilt the selected limb, and filled higher limbs using the sign.
- Kept the existing `index >= 31` no-op behavior via the existing `NoExtension` select.
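To make the new lowering concrete, here is a hedged sketch of the same word/byte decomposition in plain C++ (not the actual MIR-builder calls), assuming the 256-bit EVM word is stored as four little-endian `uint64_t` limbs and `index` is the `SIGNEXTEND` operand selecting the sign byte. The function name `signextend_sketch` is illustrative, not from the source:

```cpp
#include <cstdint>

// Sketch of the new SIGNEXTEND decomposition over four little-endian
// 64-bit limbs. `index` is the byte position of the sign byte (0..30).
void signextend_sketch(uint64_t limbs[4], uint64_t index) {
  if (index >= 31)
    return;  // NoExtension fast path: nothing to extend

  uint64_t word = index / 8;   // which 64-bit limb holds the sign byte
  uint64_t byte = index % 8;   // byte offset inside that limb
  uint64_t shift = byte * 8;   // bit offset of the sign byte

  // Extract the sign byte and sign-extend it to i64.
  int64_t sign_ext = (int8_t)(limbs[word] >> shift);

  // Rebuild the selected limb: keep the bytes below the sign byte,
  // splice the sign-extended byte over everything above.
  uint64_t low = limbs[word] & ((1ULL << shift) - 1);
  limbs[word] = low | ((uint64_t)sign_ext << shift);

  // Fill all higher limbs with the sign (all-ones or all-zeros).
  uint64_t fill = (uint64_t)(sign_ext >> 63);
  for (uint64_t w = word + 1; w < 4; ++w)
    limbs[w] = fill;
}
```

Note that the sign-extended `i64` does double duty: shifted left it rebuilds the selected limb in one OR, and arithmetically shifted right it yields the fill value for the higher limbs, which is where the intermediate-count savings come from.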
```cpp
MInstruction *One = createIntConstInstruction(MirI64Type, 1);
MInstruction *Const3 = createIntConstInstruction(MirI64Type, 3);
MInstruction *Const8 = createIntConstInstruction(MirI64Type, 8);
MInstruction *Const63 = createIntConstInstruction(MirI64Type, 63);
```
`Const8` is created but never used in this rewritten lowering, which can trigger unused-variable warnings and slightly undermines the stated goal of reducing intermediates. Remove it (or use it if it was intended for computing `SignByteOffset`).
⚡ Performance Regression Check Results

✅ Performance Check Passed (interpreter)
Performance Benchmark Results (threshold: 25%)
Summary: 194 benchmarks, 0 regressions

✅ Performance Check Passed (multipass)
Performance Benchmark Results (threshold: 25%)
Summary: 194 benchmarks, 0 regressions
This change optimizes EVM `SIGNEXTEND` lowering in the multipass JIT.

The previous lowering computed the sign bit position with a wider arithmetic chain, derived the target 64-bit limb through `/ 64` and `% 64`, then built the extended result from a generic bit-mask formulation. That shape was correct, but it introduced extra intermediate values and a longer live range around the selected component, sign mask, and extension path.

This patch rewrites the lowering to follow the interpreter-style decomposition more directly:

- `sign_word_index = index / 8`
- `sign_byte_index = index % 8`
- sign-extend the selected sign byte to `i64`

What Changed

- `handleSignextend()` in `src/compiler/evm_frontend/evm_mir_compiler.cpp` now uses a smaller word/byte-index based lowering
- Higher limbs are filled directly via sign propagation
- Kept the `index >= 31` fast return behavior

Why This Is Better

- Easier to read and reason about than the `((index * 8 + 7) / 64, % 64)` shape
- Avoids the generic `((1 << (bit_offset + 1)) - 1)` mask path
- Fewer intermediates reduce unnecessary copies and register pressure
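For comparison, here is a hedged sketch of the earlier generic-mask shape under the same assumed four-limb layout (plain C++, not the MIR-builder API; the function name is illustrative). It shows the extra arithmetic the patch removes: an absolute bit position, a `/ 64` and `% 64` split, and a width-dependent mask:

```cpp
#include <cstdint>

// Sketch of the previous SIGNEXTEND formulation: compute the absolute
// sign-bit position, split it with / 64 and % 64, then mask and merge.
void signextend_old_shape(uint64_t limbs[4], uint64_t index) {
  if (index >= 31)
    return;  // same no-op fast path

  uint64_t sign_bit = index * 8 + 7;  // absolute sign-bit position
  uint64_t word = sign_bit / 64;      // selected limb
  uint64_t off = sign_bit % 64;       // bit offset inside the limb

  // Generic mask covering bits 0..off of the selected limb. The off == 63
  // case must be special-cased to avoid a 64-bit shift overflow.
  uint64_t mask = (off == 63) ? ~0ULL : ((1ULL << (off + 1)) - 1);
  uint64_t sign = (limbs[word] >> off) & 1;
  uint64_t fill = sign ? ~0ULL : 0ULL;

  // Merge kept bits with the sign fill, then flood the higher limbs.
  limbs[word] = (limbs[word] & mask) | (fill & ~mask);
  for (uint64_t w = word + 1; w < 4; ++w)
    limbs[w] = fill;
}
```

Both shapes compute the same result; the difference is that the byte-level decomposition above replaces the mask construction and sign-bit test with a single sign-extended byte, shortening the dependency chain the register allocator has to keep live.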
Dependency Note
This commit is intentionally independent from the earlier u64 fast-path /
constant-folding chain.
In particular, it does not depend on:
- perf(evm): add u64 constant fast paths and constant folding ...

The `SIGNEXTEND` optimization only changes `handleSignextend()` in `src/compiler/evm_frontend/evm_mir_compiler.cpp` and can be applied cleanly on top of the current `upstream/main` by itself.