
compile time caching #160

Open
SanderMuller wants to merge 6 commits into livewire:main from SanderMuller:autoresearch/compile-time-caching
Conversation

Contributor

@SanderMuller SanderMuller commented Mar 28, 2026

Caches objects and results that were being recreated on every compilation pass:

  • BladeService: Move 8 ReflectionClass/ReflectionMethod instantiations from per-call to constructor
  • Directives: Guard 3 preg_replace calls with str_contains checks. Most templates don't contain {{--, @verbatim, or @php
  • ComponentSource: Memoize file_get_contents so repeated content() calls don't re-read the file
  • Config: Cache realpath() results to avoid repeated syscalls for the same paths

Also adds a compilation benchmark (/benchmark compilation) that measures BlazeManager::compile() throughput directly.

Locally these changes yield roughly a 24–42% compilation speedup compared to main.

Note: For the best CI comparison, cherry-pick the benchmark commit (38c6d46) onto main first so the snapshot step can generate a baseline.

Move all ReflectionClass/ReflectionMethod instantiation from per-call to constructor-time. Eliminates ~8 reflection objects per compilation pass.
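The hoisting pattern described above can be sketched in plain PHP. This is an illustrative stand-in, not the actual BladeService code: `Template` and `Compiler` are hypothetical names, but the move of `new ReflectionMethod(...)` from the hot path into the constructor is the same technique.

```php
<?php

// Stand-in classes for illustration; not Blaze's real API.
class Template
{
    protected function render(string $name): string
    {
        return "Hello, {$name}!";
    }
}

class Compiler
{
    private ReflectionMethod $renderMethod;

    public function __construct()
    {
        // Built once per service instance. Before the change, the
        // equivalent `new ReflectionMethod(...)` ran on every call.
        $this->renderMethod = new ReflectionMethod(Template::class, 'render');
        $this->renderMethod->setAccessible(true); // no-op on PHP 8.1+
    }

    public function compile(string $name): string
    {
        // Reuses the cached reflection object on every compile pass.
        return $this->renderMethod->invoke(new Template(), $name);
    }
}

$compiler = new Compiler();
echo $compiler->compile('world'), "\n"; // Hello, world!
```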
Guard preg_replace calls with str_contains checks for the trigger patterns ({{--, @verbatim, @php). Most component templates don't contain these patterns.
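A minimal sketch of that guard pattern, using `{{--` as the trigger (the function name is illustrative, not the directive's real name): a cheap `str_contains` substring scan short-circuits before the comparatively expensive regex engine is invoked.

```php
<?php

// Illustrative sketch of the str_contains guard around preg_replace.
function stripBladeComments(string $template): string
{
    // Most component templates contain no {{-- --}} comments, so
    // this substring check skips the regex entirely in the common case.
    if (! str_contains($template, '{{--')) {
        return $template;
    }

    return preg_replace('/\{\{--.*?--\}\}/s', '', $template);
}

echo stripBladeComments('<div>{{-- note --}}ok</div>'), "\n"; // <div>ok</div>
```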
Memoize the file read so repeated content() calls on the same instance don't hit the filesystem.
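The memoization (later reverted in this PR, per the discussion below) amounts to caching the `file_get_contents` result on the instance. A sketch with a hypothetical `Source` class:

```php
<?php

// Illustrative sketch of per-instance memoization of a file read.
class Source
{
    private ?string $contents = null;

    public function __construct(private string $path) {}

    public function content(): string
    {
        // ??= runs file_get_contents only on the first call;
        // subsequent calls return the cached string.
        return $this->contents ??= file_get_contents($this->path);
    }
}

$path = tempnam(sys_get_temp_dir(), 'src');
file_put_contents($path, 'hello');
$source = new Source($path);
echo $source->content(), "\n"; // hello
file_put_contents($path, 'changed');
echo $source->content(), "\n"; // still "hello": the read is memoized
```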
Avoid repeated realpath() syscalls for the same file and path arguments in isEnabled().
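The realpath cache can be sketched as a static map keyed by the input path (function name illustrative; the PR applies this inside `Config::isEnabled()`). Note that `??=` caches a `false` result too, since null coalescing only re-runs on `null`.

```php
<?php

// Illustrative sketch: memoize realpath() per input path for the
// lifetime of the request, avoiding repeated filesystem syscalls.
function cachedRealpath(string $path): string|false
{
    static $cache = [];

    return $cache[$path] ??= realpath($path);
}
```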
@SanderMuller SanderMuller marked this pull request as draft March 28, 2026 23:02
Contributor

github-actions bot commented Mar 29, 2026

Benchmark Result: Default

| Attempt | Blade | Blaze | Improvement |
| --- | --- | --- | --- |
| #1 | 347.80ms | 16.09ms | 95.4% |
| #2 | 350.06ms | 15.68ms | 95.5% |
| #3 | 349.50ms | 15.93ms | 95.4% |
| #4 | 352.91ms | 15.82ms | 95.5% |
| #5 | 346.72ms | 15.71ms | 95.5% |
| #6 | 345.89ms | 15.72ms | 95.5% |
| #7 | 348.05ms | 16.31ms | 95.3% |
| #8 | 341.48ms | 15.68ms | 95.4% |
| #9 | 349.19ms | 16.14ms | 95.4% |
| #10 | 343.24ms | 15.74ms | 95.4% |
| Snapshot | 351.73ms | 15.83ms | 95.5% |
| Result | 347.93ms (~) | 15.78ms (~) | 95.5% (~) |

Median of 10 attempts, 5000 iterations × 10 rounds, 45.89s total

To run a specific benchmark, comment /benchmark <name> where name is one of: attributes, aware, class, default, forwarding, merge, named-slots, no-attributes, slot

@SanderMuller SanderMuller force-pushed the autoresearch/compile-time-caching branch from 6071e8c to 38c6d46 Compare March 29, 2026 10:36
@SanderMuller SanderMuller marked this pull request as ready for review March 29, 2026 10:38
@SanderMuller SanderMuller changed the title Autoresearch/compile time caching compile time caching Mar 29, 2026
Collaborator

ganyicz commented Mar 29, 2026

Aren't we going to blow up the memory by caching contents of every single component? The ComponentSource objects are being cached in a static property so they're going to remain in memory forever.

content() is only called once per component during normal compilation (in the constructor). The cache never gets a second hit and just holds file contents in memory unnecessarily.
Contributor Author

> Aren't we going to blow up the memory by caching contents of every single component? The ComponentSource objects are being cached in a static property so they're going to remain in memory forever.

Although I think the memory usage will be quite small for 99% of apps, it could grow larger for very big apps running Octane. I've removed the contentCache property and reverted content() to a direct file_get_contents() call.
