Stacked on #34946. This should be a noop now that the legacy renderers are not being synced.
Adds a new `--format` option which can be `text` (default), `csv`, or `json`.
Just a light reorganization.
500 tests failed because they were not using async act. Will fix the tests and then re-land this.
…35003) IO tasks can execute more than once. E.g. a connection may fire each time a new message or chunk comes in, or a setInterval fires every time it executes. We used to treat these all as one I/O node and just updated the end time as we went. Most of the time this was fine, because typically you would have a Promise instance whose end time is really the one that gets used as the I/O anyway. However, in a pattern like this it could be problematic:

```js
setTimeout(() => {
  function App() {
    return Promise.resolve(123);
  }
  renderToReadableStream(<App />);
});
```

Because the I/O's end time is before the render started, it should be excluded from being considered I/O as part of the render. It happened outside of render. But because the `Promise.resolve()` is inside render, its end time is after the render start, so the promise is considered part of the render. This is usually not a problem, because the end time of the I/O is still before the start of the render, so even though the Promise is valid it has no I/O source and is properly excluded. However, if the I/O's end time updates before we observe this, then the I/O can be considered part of the render. E.g. if this was a setInterval it would be clearly wrong. But it turns out that even setTimeout can sometimes execute more than once in async_hooks, because each run of "process.nextTick" and of the microtasks is run in its own before/after. When a microtask executes after this main body, it'll update the end time, which can then make the whole I/O count as being inside the render.

To solve this properly, I create a new I/O node each time before() is invoked, so that each one gets to observe a different end time. This has a potential CPU and memory allocation cost when there's a lot of them, like in a quick stream.
… times along different paths (#35005) We avoid visiting the same async node twice, but if we saw it again we returned "null", indicating that there's no I/O there. This means that if you have two different Promises both resolving from the same I/O node, then we only show one of them. However, in general we treat that as two different I/O entries, to allow for things like batching to still show up separately. This fixes that by caching the return value across multiple visits. So if we found I/O (but no user space await) in one path and then we visit that path through a different Promise chain, we'll still emit it twice. However, if we visit the same exact Promise that we emitted an await on, then we skip it, because there's no need to emit two awaits on the same thing. It only matters when the path ends up informing whether it has I/O or not.
…t order (#35011) Right now it's possible for things like server environments to appear before other content in the timeline just because it's in a different document order. Of course, the order in production is not guaranteed, but we can at least use the timing information we have as a hint towards the actual order. Unfortunately, since the end time of the RSC stream itself is always after the content that resolved to produce it, the order becomes somewhat determined by the chunking. Similarly, for a clean refresh, the scripts and styles will typically load after the server content, so they appear later. Similarly, SSR typically finishes after the RSC parts. Therefore, a hack here is that I artificially delay everything with a non-null environment (RSC) so that RSC always comes after client-side (Suspense). This is also consistent with how we color things that have an environment even if the children are just Suspense. To ensure that we never show a child before a parent in the timeline, each child has a minimum time of its parent.
…Document (#34641) It's annoying to have to try to find where it lines up with no hints. This way when you hover over something it should be on screen. The strategy I went with is that it scrolls to a percentage along the scrollable axis but the two might not be exactly the same. Partially because they have different aspect ratios but also because suspended boundaries can shrink the document while the suspense tab needs to still be able to show the boundaries that are currently invisible.
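A rough sketch of the percentage-based mapping described above; this is illustrative only, with made-up names, and is not the actual DevTools implementation:

```js
// Map a position along the panel's scrollable axis to the same fraction of
// the inspected document's scrollable range, then scroll the document there.
// The two axes won't line up exactly (different aspect ratios, shrunken
// suspended content), but it keeps the hovered item roughly on screen.
function scrollDocumentToSameFraction(panelOffset, panelScrollRange, doc) {
  const fraction = panelScrollRange > 0 ? panelOffset / panelScrollRange : 0;
  const scrollingElement = doc.documentElement;
  const maxScroll = scrollingElement.scrollHeight - scrollingElement.clientHeight;
  doc.defaultView.scrollTo({top: fraction * maxScroll, behavior: 'smooth'});
}
```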
When rects are close together (or overlapping) the outline can end up being covered up by sibling rects or deeper rects. This renders the selected outline on top of everything so it's always visible.

<img width="275" height="730" alt="Screenshot 2025-10-29 at 8 43 28 PM" src="https://github.com/user-attachments/assets/69224883-f548-45ec-ada1-1a04ec17eaf5" /> <img width="266" height="737" alt="Screenshot 2025-10-29 at 8 58 53 PM" src="https://github.com/user-attachments/assets/946f7dde-450d-49fd-9fbd-57487f67f461" />

Additionally, this makes it so that it's not part of the translucent tree when things are hidden by the timeline. That way it's easier to see what is selected inside a hidden tree.

<img width="498" height="196" alt="Screenshot 2025-10-29 at 8 45 24 PM" src="https://github.com/user-attachments/assets/723107ab-a92c-42c2-8a7d-a548ac3332d0" /> <img width="571" height="735" alt="Screenshot 2025-10-29 at 8 59 06 PM" src="https://github.com/user-attachments/assets/d653f1a7-4096-45c3-b92a-ef155d4742e6" />
…penseList (#35018) We have warned about this for a while now, so we can make the switch. Often when you reach for SuspenseList, you mean "forwards". It doesn't make sense to have the default just be a noop. "together" is another useful mode, but it's more like a Group, so it isn't as associated with being the default as a List is. So we're switching it. However, tail=hidden isn't as obvious of a default, but it does allow for a convenient pattern for streaming in a list of items by default. This doesn't yet switch the rendering order of "backwards". That's coming in a follow up.
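A minimal sketch of the usage affected by the new default, assuming the experimental `unstable_SuspenseList` export and hypothetical `Spinner`/`Row` components; with no explicit `revealOrder`, the list now behaves as if `"forwards"` was set rather than being a noop:

```js
import {Suspense, unstable_SuspenseList as SuspenseList} from 'react';

// Hypothetical placeholders for illustration only.
const Spinner = () => <div>Loading…</div>;
const Row = ({item}) => <div>{item.title}</div>;

function Feed({items}) {
  // Previously, omitting revealOrder meant no coordination at all (a noop).
  // With the new default the rows reveal in order, as if "forwards" was set.
  return (
    <SuspenseList revealOrder="forwards">
      {items.map(item => (
        <Suspense key={item.id} fallback={<Spinner />}>
          <Row item={item} />
        </Suspense>
      ))}
    </SuspenseList>
  );
}
```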
…rder (#35021) Stacked on #35018. This mounts the children of SuspenseList backwards. Meaning the first child is mounted last in the DOM (and effect list). It's like calling reverse() on the children. This is meant to set us up for allowing AsyncIterable children where the unknown number of children streams in at the end (which is the beginning in a backwards SuspenseList). For consistency we do that with other children too. `unstable_legacy-backwards` still exists for the old mode but is meant to be deprecated. <img width="100" alt="image" src="https://github.com/user-attachments/assets/5c2a95d7-34c4-4a4e-b602-3646a834d779" />
In #35019, we excluded debug I/O info from being considered for enhancing the owner stack if it resolved after the defined `endTime` option that can be passed to the Flight client. However, we should include any I/O that was awaited before that end time, even if it resolved later.
This PR adds a `unstable_reactFragments?: Set<FragmentInstance>` property to DOM nodes that belong to a Fragment with a ref (top-level host components). This allows you to access a FragmentInstance from a DOM node. This is flagged behind `enableFragmentRefsInstanceHandles`.

The primary use case to unblock is reusing IntersectionObserver instances. A fairly common practice is to cache and reuse IntersectionObservers that share the same config, with a map of node->callbacks to run for each entry in the IO callback. Currently this is not possible with Fragment Ref `observeUsing`, because the key in the cache would have to be the `FragmentInstance` and you can't find it without a handle from the node. This works now by accessing `entry.target.fragments`.

This also opens up possibilities to use `FragmentInstance` operations in other places, such as events. We can do `event.target.unstable_reactFragments`, then access `fragmentInstance.getClientRects`, for example. In a future PR, we can assign an event's `currentTarget` as the Fragment Ref for a more direct handle when the event has been dispatched by the Fragment itself.

The first commit here implemented a handle only on observed elements. This is awkward because there isn't a good way to document or expose this temporary property. `element.fragments` is closer to what we would expect from a DOM API if a standard was implemented here. And by assigning it to all top-level nodes of a Fragment, it can be used beyond the cached IntersectionObserver callback.

One tradeoff here is adding extra work during the creation of FragmentInstances, as well as keeping track of adding/removing nodes. Previously we only tracked the Fiber on creation, but here we add a traversal which could apply to a large set of top-level host children. The `element.unstable_reactFragments` Set can also be randomly ordered.
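A minimal sketch of the cached-IntersectionObserver pattern this unblocks, assuming the flagged `unstable_reactFragments` handle; the registry and helper names here are hypothetical, not part of the API:

```js
// Shared observer; callbacks are looked up per FragmentInstance.
const callbacksByFragment = new Map();

const sharedObserver = new IntersectionObserver(entries => {
  for (const entry of entries) {
    // The flagged handle: a Set of FragmentInstances this node belongs to.
    const fragments = entry.target.unstable_reactFragments;
    if (fragments == null) {
      continue;
    }
    for (const fragmentInstance of fragments) {
      const callback = callbacksByFragment.get(fragmentInstance);
      if (callback != null) {
        callback(entry);
      }
    }
  }
});

// Hypothetical helper: register a callback and observe via the Fragment ref.
function observeFragment(fragmentInstance, callback) {
  callbacksByFragment.set(fragmentInstance, callback);
  fragmentInstance.observeUsing(sharedObserver);
}
```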
…5039) For Edge Flight servers, that use Web Streams, we're defining the `debugChannel` option as:

```
debugChannel?: {readable?: ReadableStream, writable?: WritableStream, ...}
```

Whereas for Node.js Flight servers, that use Node.js Streams, we're defining it as:

```
debugChannel?: Readable | Writable | Duplex | WebSocket
```

For the Edge Flight clients, there is currently only one direction of the debug channel supported, so we define the option as:

```
debugChannel?: {readable?: ReadableStream, ...}
```

Consequently, for the Node.js Flight clients, we define the option as:

```
debugChannel?: Readable
```

The presence of a readable debug channel is passed to the Flight client internally via the `hasReadable` flag on the internal `debugChannel` option. For the Node.js clients, that flag was accidentally derived from the public option `debugChannel.readable`, which is conceptually incorrect, because `debugChannel` is a `Readable` stream, not an options object with a `readable` property. However, a `Readable` also has a `readable` property, which is a boolean that indicates whether the stream is in a readable state. This meant that the `hasReadable` flag was incidentally still set correctly. Regardless, this was confusing and unintentional, so we're now fixing it to always set `hasReadable` to `true` when a `debugChannel` is provided to the Node.js clients. We'll revisit this in case we ever add support for writable debug channels in Node.js (and Edge) clients.
We were not recording uEE calls in component/hook syntax. Easy fix. Added tests for component syntax that match the existing function component syntax tests, and added one for hooks.
…35042) Normally, if you suspend in a SuspenseList row above a Suspense boundary in that row, it'll suspend the parent, which can itself delay the commit or resuspend a parent boundary. That's because SuspenseList mostly just coordinates the state of the inner boundaries and isn't a boundary itself. However, for tail "hidden" and "collapsed" this is not quite the case, because the rows themselves can avoid being rendered.

In the case of "collapsed" we require at least one Suspense boundary above to have successfully rendered before committing the list, because the idea of this mode is that you should at least always show some indicator that things are still loading. Since we'd never try the next one after that at all, this just works. Except there was an unrelated bug that meant that "suspend with delay" on a Retry didn't suspend the commit. This caused a scenario where it'd allow a commit to proceed when it shouldn't. So I fixed that too. The counterintuitive thing here is that we won't actually show a previously completed row if the loading state of the next row is still loading.

For tail "hidden" it's a little different, because we don't actually require any loading indicator at all to be shown while it's loading. If we attempt a row and it suspends, we can just hide it (and the rest) and move to commit. Therefore this implements a path where, if all the rest of the tail are new mounts (we wouldn't be required to unmount any existing boundaries), then we can treat the SuspenseList boundary itself as "catching" the suspense. This is more coherent semantics, since any future row that we didn't attempt also wouldn't resuspend the parent. This allows simple cases like `<SuspenseList>{list}</SuspenseList>` to stream in each row without any indicator and no need for Suspense boundaries.
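A minimal sketch of the simple case this enables, assuming the experimental `unstable_SuspenseList` export; `Row` is a hypothetical component that may suspend while its data loads:

```js
import {unstable_SuspenseList as SuspenseList} from 'react';

// Hypothetical row component that may suspend on its data.
const Row = ({item}) => <div>{item.title}</div>;

function Results({items}) {
  // With tail="hidden", a row that suspends is simply hidden (along with the
  // rest of the tail) and the commit proceeds; no Suspense boundaries or
  // loading indicators are required, and unattempted rows don't resuspend
  // the parent.
  return (
    <SuspenseList revealOrder="forwards" tail="hidden">
      {items.map(item => (
        <Row key={item.id} item={item} />
      ))}
    </SuspenseList>
  );
}
```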
I don't think we're ready to land this yet since we're using it to run other experiments and our tests. I'm opening this PR to indicate intent to disable and to ensure tests in other combinations still work, such as enableHalt without enablePostpone. I think we'll also need to rewrite some tests that depend on enablePostpone to preserve some coverage. The conclusion after this experiment is that try/catch around these is too likely to block these signals and consider them errors. Throwing works for Hooks and `use()` because the lint rule can ensure that they're not wrapped in try/catch. Throwing in arbitrary functions is not quite ecosystem compatible. It's also why there's `use()` and not just throwing a Promise. This might also affect the Catch proposal. The "prerender" for SSR that's supporting "Partial Prerendering" is still there. This just disables the `React.postpone()` API for creating the holes.
We're not shipping this and it's a lot of code to maintain that is blocking my refactor of Fizz for SuspenseList.
…mplement in SSR (#35022) We've long had the CPU suspense feature behind a flag under the terrible API `unstable_expectedLoadTime={arbitraryNumber}`. We've known for a long time that we want it to just be `defer={true}` (or just `<Suspense defer>` in the shorthand syntax). So this adds the new name and warns for the old name. For only the new name, I also implemented SSR semantics in Fizz. It has two effects here: 1) It renders the fallback before the content (similar to prerender), allowing siblings to complete quicker. 2) It always outlines the result. When streaming this should really happen naturally, but if you defer prerendered content it also implies that it's expensive and should be outlined. It gives you an opt-in to outlining similar to suspensey images and CSS, but lets you control it manually.
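A minimal sketch of the new opt-in, with hypothetical `Skeleton` and `ExpensiveChart` components standing in for real content:

```js
import {Suspense} from 'react';

// Hypothetical placeholders for illustration only.
const Skeleton = () => <div>Loading…</div>;
const ExpensiveChart = () => <div>chart</div>;

function Dashboard() {
  // defer replaces unstable_expectedLoadTime={arbitraryNumber}. During SSR,
  // Fizz renders the fallback before the deferred content and always
  // outlines the result.
  return (
    <Suspense defer fallback={<Skeleton />}>
      <ExpensiveChart />
    </Suspense>
  );
}
```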
Follow up to #35022. It's now replaced by the `defer` option. Sounds like nobody is actually using this option, including Meta, so we can just delete it.
This is an alternative to #35059. If the name needs escaping, then instead of escaping it, we just use a base64 name. This wouldn't allow you to match on an escaped name in your own CSS like you should be able to if browsers worked properly. But at least it would provide a matching name in current browsers, which is probably sufficient if you're using auto-generated names. This also covers some cases where `CSS.escape()` isn't sufficient anyway, like when the name ends in a dot.
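A rough sketch of the idea, not the actual implementation; the helper name and the prefix are made up for illustration:

```js
// If the generated name is already a safe CSS identifier, use it as-is;
// otherwise fall back to a base64-derived name instead of trying to escape it.
// Per the description above, CSS.escape() alone isn't sufficient for every
// case (e.g. a name ending in a dot), which is part of the motivation.
function toSafeCSSName(name) {
  if (/^[A-Za-z_][A-Za-z0-9_-]*$/.test(name)) {
    return name;
  }
  return (
    'b64_' +
    btoa(name).replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '')
  );
}
```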
…35063) Also, don't not skip hidden trees. Memoized state is null when an Offscreen boundary (Suspense or Activity) is visible. This logic was inverted in a couple of View Transition checks, which caused pairs to be discovered or not discovered incorrectly for insertions and deletions of Suspense or Activity boundaries.
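A minimal sketch of the check described above; the helper name is hypothetical:

```js
// For an Offscreen fiber (backing Suspense or Activity), memoizedState === null
// means the content is currently visible; non-null means it's hidden.
// The bug was using the opposite sense of this check in a couple of
// View Transition code paths.
function isOffscreenContentVisible(offscreenFiber) {
  return offscreenFiber.memoizedState === null;
}
```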
We already append `randomKey` to each handle name to prevent external libraries from accessing and relying on these internals. But more libraries recently have been getting around this by simply iterating over the element properties and using a `startsWith` check. This flag allows us to experiment with moving these handles to an internal map. This PR starts with the two most common internals, the props object and the fiber. We can consider moving additional properties such as the container root and others depending on perf results.
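A minimal sketch of the kind of change the flag enables, assuming a WeakMap-based internal registry; the function names are illustrative, not the actual implementation:

```js
// Instead of assigning `node.__reactFiber$<randomKey>` and
// `node.__reactProps$<randomKey>` as own properties (discoverable by
// iterating keys and checking startsWith), keep the mappings in
// module-level WeakMaps that external code can't enumerate.
const nodeToFiber = new WeakMap();
const nodeToProps = new WeakMap();

function setInstanceHandle(node, fiber) {
  nodeToFiber.set(node, fiber);
}

function getInstanceFromNode(node) {
  return nodeToFiber.get(node) ?? null;
}

function setFiberProps(node, props) {
  nodeToProps.set(node, props);
}

function getFiberPropsFromNode(node) {
  return nodeToProps.get(node) ?? null;
}
```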
Small optimization for useEffectEvent. Not sure we even need a flag for it, but it will be a nice killswitch. As an added benefit, it fixes a bug when `enableViewTransition` is on, where we were not updating the useEffectEvent callback when a tree went from hidden to visible.
…35702) There is an existing issue with the serialisation logic for the traces from the Profiler panel. I've discovered that the `TREE_OPERATION_UPDATE_TREE_BASE_DURATION` operation for some reason appears earlier in a sequence of operations, before the `TREE_OPERATION_ADD` that registers the new node. It ends up cloning a non-existent node, which just creates an empty object and adds it to the map of nodes. This change only adds an additional layer of validation to the cloning logic, so we don't swallow the error if we attempt to clone a non-existent node.
Accidentally enabled this
…eam (#35724)

## Summary

- Fixes the `createRequest` call in `renderToPipeableStream` to pass `debugChannelReadable !== undefined` instead of `debugChannel !== undefined` in the turbopack, esm, and parcel Node.js server implementations
- The webpack version already had the correct check; this brings the other bundler implementations in line

The bug was introduced in #33754. With `debugChannel !== undefined`, the server could signal that debug info should be emitted even when only a write-only debug channel is provided (no readable side), potentially causing the client to block forever waiting for debug data that never arrives.
`encodeReply` throws "React Element cannot be passed to Server Functions from the Client without a temporary reference set" when a React element is the root value of a `serializeModel` call (either passed directly or resolved from a promise), even when a temporary reference set is provided.

The cause is that `resolveToJSON` hits the `REACT_ELEMENT_TYPE` switch case before reaching the `existingReference`/`modelRoot` check that regular objects benefit from. The synthetic JSON root created by `JSON.stringify` is never tracked in `writtenObjects`, so `parentReference` is `undefined` and the code falls through to the throw. This adds a `modelRoot` check in the `REACT_ELEMENT_TYPE` case, following the same pattern used for promises and plain objects.

The added `JSX as root model` test also uncovered a pre-existing crash in the Flight Client: when the JSX element round-trips back, it arrives as a frozen object (client-created elements are frozen in DEV), and `Object.defineProperty` for `_debugInfo` fails because frozen objects are non-configurable. The same crash can occur with JSX exported as a client reference. For now, we're adding `!Object.isFrozen()` guards in `moveDebugInfoFromChunkToInnerValue` and `addAsyncInfo` to prevent the crash, which means debug info is silently dropped for frozen elements. The proper fix would likely be to clone the element so each rendering context gets its own mutable copy with correct debug info.

closes #34984
closes #35690
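A minimal sketch of the previously failing case, assuming the `react-server-dom-webpack/client` entry point (the exact bundler entry may differ) and a hypothetical `Item` component:

```js
import {
  createTemporaryReferenceSet,
  encodeReply,
} from 'react-server-dom-webpack/client';

// Hypothetical client component used purely for illustration.
const Item = ({id}) => <div>{id}</div>;

async function send() {
  const temporaryReferences = createTemporaryReferenceSet();
  // A React element as the *root* value previously threw even though a
  // temporary reference set was provided; with this fix it is tracked as
  // the model root like any other object.
  const body = await encodeReply(<Item id={1} />, {temporaryReferences});
  return body;
}
```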
…35731) When using a partial prerender stream, i.e. a prerender that is intentionally aborted before all I/O has resolved, consumers of `createFromReadableStream` would need to keep the stream unclosed to prevent React Flight from erroring on unresolved chunks. However, some browsers (e.g. Chrome, Firefox) keep unclosed ReadableStreams with pending reads as native GC roots, retaining the entire Flight response. With this PR we're adding an `unstable_allowPartialStream` option that allows consumers to close the stream normally. The Flight Client's `close()` function then transitions pending chunks to halted instead of erroring them. Halted chunks keep Suspense fallbacks showing (i.e. they never resolve), and their `.then()` is a no-op so no new listeners accumulate. Inner stream chunks (ReadableStream/AsyncIterable) are closed gracefully, and `getChunk()` returns halted chunks for new IDs that are accessed after closing the response. Blocked chunks are left alone because they may be waiting on client-side async operations like module loading, or on forward references to chunks that appeared later in the stream, both of which resolve independently of closing.
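A minimal usage sketch, assuming the `react-server-dom-webpack/client` entry point (the exact bundler entry may differ):

```js
import {createFromReadableStream} from 'react-server-dom-webpack/client';

// `stream` is a ReadableStream produced by an intentionally aborted prerender.
function createPartialResponse(stream) {
  return createFromReadableStream(stream, {
    // Close the stream normally; unresolved chunks become halted (Suspense
    // fallbacks keep showing) instead of erroring.
    unstable_allowPartialStream: true,
  });
}
```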
…35723) Fixes #33423, #35245, #19732. As demoed [here](#33423 (comment)), React DevTools incorrectly highlights re-renders for descendants of filtered-out nodes that didn't actually render. There were multiple fixes suggesting changes in the `didFiberRender()` function, but these don't seem right, because this function is used in the context of whether the Fiber actually rendered something (updated), not whether it re-rendered compared to the previous Fiber. Instead, this PR adds additional validation at the callsites that are used either for highlighting re-renders or for capturing tree base durations and that rely on `didFiberRender`. I've also added a few tests that reproduce the failure scenario. Without the changes, the tests fail.
…filing (#35718) After #34089, when updating (possibly mounting) inside a disconnected subtree, we don't record this as an operation. This only happens during reconnect. The issue is that `recordProfilingDurations()` can be called, which diffs the tree base duration and reports it to the Frontend: https://github.com/facebook/react/blob/65db1000b944c8a07b5947c06b38eb8364dce4f2/packages/react-devtools-shared/src/backend/fiber/renderer.js#L4506-L4521

This operation can be recorded before the "Add" operation, and it will not be resolved properly on the Frontend side. Before the fix:

```
commit tree › Suspense › should handle transitioning from fallback back to content during profiling

Could not clone the node: commit tree does not contain fiber "5". This is a bug in React DevTools.

  162 |     const existingNode = nodes.get(id);
  163 |     if (existingNode == null) {
> 164 |       throw new Error(
      |             ^
  165 |         `Could not clone the node: commit tree does not contain fiber "${id}". This is a bug in React DevTools.`,
  166 |       );
  167 |     }

  at getClonedNode (packages/react-devtools-shared/src/devtools/views/Profiler/CommitTreeBuilder.js:164:13)
  at updateTree (packages/react-devtools-shared/src/devtools/views/Profiler/CommitTreeBuilder.js:348:24)
  at getCommitTree (packages/react-devtools-shared/src/devtools/views/Profiler/CommitTreeBuilder.js:112:20)
  at ProfilingCache.getCommitTree (packages/react-devtools-shared/src/devtools/ProfilingCache.js:40:46)
  at Object.<anonymous> (packages/react-devtools-shared/src/__tests__/profilingCommitTreeBuilder-test.js:257:44)
```
…tate (#35740) Co-authored-by: Ruslan Lesiutin <28902667+hoxyq@users.noreply.github.com>
… shared suspenders removed (#35737)
Follow up to #35630 We don't currently have any operations that depend on the updating of text nodes added or removed after Fragment mount. But for the sake of completeness and extending the ability to any other host configs, this change calls `commitNewChildToFragmentInstance` and `deleteChildFromFragmentInstance` on HostText fibers. Both DOM and Fabric configs early return because we cannot attach event listeners or observers to text. In the future, there could be some stateful Fragment feature that uses text that could extend this.
Handles TODOs, small follow up refactors
## Overview

While building the RSC sandboxes I noticed error messages like:

> An error occurred in the `<Offscreen>` component

This is an internal component, so it should show either:

> An error occurred in the `<Suspense>` component.

> An error occurred in the `<Activity>` component.

It should only happen when there's a lazy in the direct child position of a `<Suspense>` or `<Activity>` component.
…ated code (#35755) This is unused.
…ren before removing (#35775) Currently, this silently removes the last child in the list, which doesn't contain the `id`.
## Summary

ESLint v10.0.0 was released on February 7, 2026. The current `peerDependencies` for `eslint-plugin-react-hooks` only allows up to `^9.0.0`, which causes peer dependency warnings when installing with ESLint v10.

This PR:
- Adds `^10.0.0` to the eslint peer dependency range
- Adds `eslint-v10` to devDependencies for testing
- Adds an `eslint-v10` e2e fixture (based on the existing `eslint-v9` fixture)

ESLint v10's main breaking changes (removal of legacy eslintrc config, deprecated context methods) don't affect this plugin - flat config is already supported since v7.0.0, and the deprecated APIs already have fallbacks in place.

## How did you test this change?

Ran the existing unit test suite:

```
cd packages/eslint-plugin-react-hooks && yarn test
```

All 5082 tests passed.
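For reference, a minimal flat-config sketch of the kind of setup that keeps working under ESLint v10; the `recommended-latest` config name is assumed from the plugin's flat-config support mentioned above:

```js
// eslint.config.js
import reactHooks from 'eslint-plugin-react-hooks';

export default [
  // Flat config has been supported since v7.0.0 of the plugin, so no
  // eslintrc-style setup is needed under ESLint v10.
  reactHooks.configs['recommended-latest'],
];
```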