
Experimental: Add SLJIT JIT compiler integration for QuickJS #1332

Draft
stevefan1999-personal wants to merge 1 commit into quickjs-ng:master from stevefan1999-personal:patch-sljit

Conversation


@stevefan1999-personal stevefan1999-personal commented Feb 8, 2026

This is an experimental, simple and primitive SLJIT-based JIT engine for QuickJS, built with the help of an LLM. Assume it is 64-bit only: some 32-bit code is in place, but there are no tests for it.

Why?

For a long time I have really, really wanted a JS engine that is both fast and tiny, as small as LuaJIT. I don't like JavaScriptCore because it lacks good universal support, and it is still on the order of thousands of lines, if not millions, though not as bad as the V8 monster.

I also did this out of curiosity. I could always implement the JIT myself; in fact, I tried to do one before, but there are so many opcodes that converting all the C code from QuickJS to SLJIT becomes very mechanical.

How did you do that?

I just dumped the source code of QuickJS and SLJIT, plus the experience of my previous manual attempts, and let Opus 4.6 figure out the rest. Indeed, Opus guessed most of the arithmetic right (without me revealing what I had done), but its suggestion of using icall to get a simpler JIT approach seems to have backfired.

Experimental result

Unsurprisingly, the JIT code started out slower than interpreter mode. As reported by Opus:

| Benchmark | Non-JIT (O3) | JIT before session | JIT after session | Δ from non-JIT |
| --- | --- | --- | --- | --- |
| Richards | 857 | 649 (-25%) | 733 (-14.5%) | improved 10pp |
| DeltaBlue | 831 | 727 (-8.6%) | 757 (-8.9%) | ~ same |
| Crypto | 445 | 785 (+76%) | 828 (+86%) | improved |
| RayTrace | 1797 | 1573 (-12.7%) | 1618 (-10%) | improved 3pp |
| EarleyBoyer | 1927 | 1638 (-17.1%) | 1735 (-10%) | improved 7pp |
| Score | 1004 | 1015 (+0.3%) | 1051 (+4.7%) | JIT now ahead 🟢 |

Aside from the huge boost on the Crypto benchmark, it is a total disaster for the rest. Clearly we need more optimization. On a related note, SLJIT is also the backend for PCRE's JIT; maybe we can port that approach to the regex engine here for a good score improvement.

I've also instructed Opus to find more optimizations for me, one of which is inline caching for variable/property access. It claims to have learned this from V8, and I'd like someone experienced in compilers to verify that claim, but it is indeed a valid optimization technique and it boosted performance quite a lot.

What's next?

Obviously, the LLM-generated code has a lot of repetitive parts and needs a lot of fixup. The main JIT is ~3000 lines and, in the usual LLM manner, a total mess. One of the most obvious hot-path improvements is to rewrite most, if not all, of the helper calls as inlined SLJIT code; that is where most of the integer performance improvement came from in the first place.

I did not intend for this to be merged, given the concerns around LLM involvement; after all, LLM-generated code is often nitpicked as "AI slop" (whether LLMs are really AI is another question). But I think it is absolutely right that a human still needs to fully verify the generated code first. I'm 99% sure there is some fishy stuff that is not
