
[pull] canary from vercel:canary#914

Merged
pull[bot] merged 3 commits into code:canary from vercel:canary
Mar 26, 2026

Conversation


@pull pull bot commented Mar 26, 2026

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)

Can you help keep this open source service alive? 💖 Please sponsor : )

acdlite and others added 3 commits March 26, 2026 01:08
This is the final step in the segment bundling system. The first commit
ensured correctness when inlining hints are stale or unavailable. The
second taught the build-time hint computation to identify which segments
can be omitted and how parent data flows through disabled segments to
static descendants. This commit makes the server and client act on those
decisions: small static segments are combined into a single prefetch
response rather than fetched individually.

This step is inherently larger than the previous ones. We moved as much
complexity as possible into the earlier commits — the hint bits, the
pass-through logic, the metadata assignment, the safety invariants — but
the server output changes and the client scheduling changes need to land
together to keep behavior coherent.

### Unified response model

The brute-force inlining path that previously existed is removed. Both
the old per-segment behavior and the brute-force "inline everything"
mode are now modeled in terms of the size-based system:

- **Per-segment (flag off):** each segment's response contains only its
own data. This is the behavior when `prefetchInlining` is not enabled,
or when a segment exceeds the size threshold.
- **Size-based (flag on):** setting `prefetchInlining: true` uses
default thresholds (2KB per-segment, 10KB total budget). Segments below
the threshold are bundled into their children's responses. Segments
above it get standalone responses.
- **Brute-force (infinite thresholds):** setting the thresholds to
infinity means every segment is bundled into a single response for the
entire route. The size-based heuristics should make this unnecessary.

There is no separate code path for any of these modes. The same server
and client logic handles all three — the only difference is the hint
bits computed at build time.
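The three modes above collapse into one decision function. A minimal sketch of that size-based decision, assuming illustrative names (`SegmentHint`, `PER_SEGMENT_THRESHOLD`, `TOTAL_BUDGET`) and the default 2KB/10KB thresholds from the text — not the actual Next.js internals:

```typescript
// Hypothetical defaults matching the description: 2KB per segment,
// 10KB total budget per route. Setting both to Infinity recovers the
// brute-force "inline everything" mode; disabling the flag recovers
// the old per-segment behavior.
const PER_SEGMENT_THRESHOLD = 2 * 1024
const TOTAL_BUDGET = 10 * 1024

interface SegmentHint {
  size: number
  inliningEnabled: boolean // the `prefetchInlining` flag
}

type BundlingDecision = 'standalone' | 'bundle-into-children'

function decideBundling(hint: SegmentHint, budgetUsed: number): BundlingDecision {
  // Flag off, segment too large, or route budget exhausted:
  // the segment gets its own per-segment response.
  if (!hint.inliningEnabled) return 'standalone'
  if (hint.size > PER_SEGMENT_THRESHOLD) return 'standalone'
  if (budgetUsed + hint.size > TOTAL_BUDGET) return 'standalone'
  // Otherwise the segment is bundled into its children's responses.
  return 'bundle-into-children'
}
```

Note how per-segment, size-based, and brute-force modes differ only in the inputs, never in the code path — which is the point of the unified model.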

### Metadata bundling

The metadata (head/viewport) is bundled into the response of whichever
segment is responsible for it, avoiding a separate metadata fetch in
most cases.

On routes with runtime prefetch, the first runtime segment is
responsible — its response already includes the metadata. On purely
static routes, the first page terminal with budget room gets the
metadata. If no terminal has room, the metadata is fetched separately.
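The assignment rule can be sketched as a single pass over the route's segments. The `Segment` shape and `METADATA_SIZE` below are assumptions for illustration, not the real build-time data structures:

```typescript
interface Segment {
  isRuntime: boolean
  isPageTerminal: boolean
  remainingBudget: number
}

// Hypothetical size of the head/viewport payload.
const METADATA_SIZE = 512

// Returns the index of the segment whose response should carry the
// metadata, or -1 if it must be fetched separately.
function assignMetadata(segments: Segment[]): number {
  // Routes with runtime prefetch: the first runtime segment carries it.
  const runtime = segments.findIndex((s) => s.isRuntime)
  if (runtime !== -1) return runtime
  // Purely static routes: first page terminal with budget room.
  return segments.findIndex(
    (s) => s.isPageTerminal && s.remainingBudget >= METADATA_SIZE
  ) // -1 means a separate metadata fetch
}
```
<test>
console.assert(assignMetadata([
  { isRuntime: false, isPageTerminal: false, remainingBudget: 0 },
  { isRuntime: true, isPageTerminal: false, remainingBudget: 0 },
]) === 1)
console.assert(assignMetadata([
  { isRuntime: false, isPageTerminal: true, remainingBudget: 100 },
  { isRuntime: false, isPageTerminal: true, remainingBudget: 1024 },
]) === 1)
console.assert(assignMetadata([
  { isRuntime: false, isPageTerminal: true, remainingBudget: 100 },
]) === -1)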

### Independent metadata prefetching

When navigating between sibling routes under a shared runtime layout,
the layout is already cached and no runtime segment request is needed.
But the metadata (head) may differ between siblings. This commit ensures
the metadata is always prefetched independently via a runtime request
when `SubtreeHasRuntimePrefetch` is set on the route tree, even when no
runtime segments need fetching.
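One reading of this rule as code — a sketch under the assumption that when runtime segments *are* fetched, the metadata rides along with the first of them (per the previous section), so an independent request is only needed when the subtree flag is set and nothing else is being fetched:

```typescript
interface RouteTree {
  subtreeHasRuntimePrefetch: boolean // mirrors `SubtreeHasRuntimePrefetch`
}

function needsIndependentMetadataRequest(
  tree: RouteTree,
  runtimeSegmentsToFetch: number
): boolean {
  // Sibling navigations under a shared runtime layout can leave every
  // runtime segment cached, yet the head may still differ — so the
  // flag alone forces a metadata request.
  return tree.subtreeHasRuntimePrefetch && runtimeSegmentsToFetch === 0
}
```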

### Skipping unnecessary static output

Segments with `instant = false` or runtime prefetch enabled don't
benefit from static prefetch responses. Now the server skips generating
static output for these segments entirely, reducing build output size.
The client also skips creating cache entries for them. They still
participate in the bundle chain as null placeholders — parent data flows
through them to static descendants — but no actual data is produced or
fetched.

This skipping is NOT gated by the `prefetchInlining` feature flag. The
hints that drive it (`HasRuntimePrefetch`, `PrefetchDisabled`) are
computed at build time and embedded in the route tree regardless of
bundling.
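The null-placeholder behavior can be sketched as a walk down the segment tree. The `SegmentNode` shape is illustrative; only the skip conditions (`HasRuntimePrefetch`, `PrefetchDisabled`) come from the text:

```typescript
interface SegmentNode {
  hasRuntimePrefetch: boolean
  prefetchDisabled: boolean // i.e. `instant = false`
  data: string | null
  children: SegmentNode[]
}

// Collect the static data flowing through a subtree. Skipped segments
// contribute `null` (no static output is generated or cached for
// them), but their static descendants still appear in the chain —
// parent data passes through the placeholder.
function collectBundleChain(
  node: SegmentNode,
  chain: (string | null)[] = []
): (string | null)[] {
  const skipped = node.hasRuntimePrefetch || node.prefetchDisabled
  chain.push(skipped ? null : node.data)
  for (const child of node.children) collectBundleChain(child, chain)
  return chain
}
```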

### Test plan

The prefetch-inlining test suite covers:

- basic inlining chains
- large segments that break chains
- deep chains
- parallel routes
- dynamic routes with concrete params
- runtime prefetch boundaries (both leaf and pass-through)
- `instant = false` segments
- stale hint recovery
- metadata assignment
- the root-level `instant = false` fallback
- independent metadata prefetching for routes with runtime data in the head
- runtime parallel slot pass-through

Each test verifies both the hint computation (via snapshot assertions) and the actual navigation behavior (prefetch via link accordion, then navigate to confirm data was prefetched).
…ion (#91924)

### What?

Add a `debug_assert!` in `StorageWriteGuard::track_modification` to enforce that transient `TaskId`s are never inserted into the persistence-modified set.

### Why?

Concurrent server component HMR updates triggered a runtime panic in Turbopack:

```
internal error: entered unreachable code: transient task_ids should never be enqueued to be persisted
```

This `unreachable!` lives in `snapshot_and_persist` (`backend/mod.rs:1254`), which is called during the persist cycle. It fires when a transient `TaskId` is found in `Storage::modified` — the set that tracks tasks whose state needs to be written to disk.

**Transient task IDs (high bit set) are never serialized.** The invariant was enforced only at the *caller layer*:

- `TaskGuardImpl::track_modification` (`operation/mod.rs:1029`) guards with `if !self.task_id.is_transient()`
- `TaskGuardImpl::invalidate_serialization` (`operation/mod.rs:970`) guards with `if !self.task_id.is_transient()`

But `StorageWriteGuard::track_modification` — the public method on the storage struct itself — had no such guard. Under concurrent HMR invalidations, a transient task ID could reach this method via a code path that bypasses the caller-level checks, inserting the ID into `Storage::modified` and causing the panic downstream during the next persist cycle.

The flaky test is `test/development/app-dir/hmr-iframe/hmr-iframe.test.ts` — it triggers two simultaneous server component file changes (one in an iframe, one in the parent), which races the persist cycle against concurrent invalidations.

### How?

Add a `debug_assert!` in `StorageWriteGuard::track_modification` (the sole public entry point into the storage mutation path) that matches the invariant already checked at the caller layer:

```rust
debug_assert!(
    !self.inner.key().is_transient(),
    "transient task_ids should never be enqueued to be persisted"
);
```

This enforces the invariant at the storage boundary in debug builds, surfacing the violation at the point of insertion rather than much later during the persist cycle. The message deliberately matches the existing `unreachable!` downstream so both failure modes are easy to correlate.

**Verification:**
- `cargo test -p turbo-tasks-backend` — all unit tests pass
- `pnpm build-all` — build succeeds
- `NEXT_SKIP_ISOLATE=1 NEXT_TEST_MODE=dev pnpm testheadless test/development/app-dir/hmr-iframe/hmr-iframe.test.ts` — 3/3 consecutive passes (was flaky before)
@pull pull bot locked and limited conversation to collaborators Mar 26, 2026
@pull pull bot added the ⤵️ pull label Mar 26, 2026
@pull pull bot merged commit 1693574 into code:canary Mar 26, 2026

3 participants