Fix chunk load handling during prerendering #75132

Closed

Conversation

@unstubbable unstubbable (Contributor) commented Jan 21, 2025

It turns out that the following heuristic in warmFlightResponse is not sufficient for loading all necessary SSR chunks before handing off to prerendering:

// We'll wait at least one task, and then if no chunks have started to load
// we infer that there are no chunks to load from this flight response
trackChunkLoading(waitAtLeastOneReactRenderTask())
return new Promise((r) => {
  chunkListeners.push(r)
})

An example demonstrating this issue has been added as a test case in this PR. In the example, we have a layout with "use cache" and a page with "use cache" that renders a client component. With Turbopack in particular, it often happens that the client component’s chunk hasn’t started loading before warmFlightResponse is resolved. As a result, dynamic I/O errors are triggered during subsequent SSR prerendering because the chunk may not resolve within microtasks.
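The scenario looks roughly like this; a minimal sketch, with file names and contents assumed rather than taken from the actual test fixture in this PR:

// app/layout.tsx
import type { ReactNode } from 'react'
export default async function Layout({ children }: { children: ReactNode }) {
  'use cache'
  return (
    <html>
      <body>{children}</body>
    </html>
  )
}
// app/page.tsx
import { ClientComponent } from './client-component'
export default async function Page() {
  'use cache'
  return <ClientComponent />
}
// app/client-component.tsx
'use client'
export function ClientComponent() {
  return <p>hello</p>
}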

A more robust approach is to avoid a separate warmFlightResponse and instead rely on the cache signal during the initial prerender to track these chunk loads.
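Conceptually, every chunk load is then registered as a pending read on the prerender's cache signal, so the initial prerender only completes once those loads have settled. A minimal sketch, assuming a cache signal with beginRead/endRead methods and using a made-up helper name:

// Assumed shape of the prerender's cache signal.
interface CacheSignal {
  beginRead(): void
  endRead(): void
}
// Hypothetical helper: start a chunk load and keep the cache signal "busy"
// until the load has settled.
function trackChunkLoad<T>(cacheSignal: CacheSignal, load: () => Promise<T>): Promise<T> {
  cacheSignal.beginRead()
  const promise = load()
  const onSettled = () => cacheSignal.endRead()
  promise.then(onSettled, onSettled)
  return promise
}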

However, using the cache signal in the existing __next_chunk_load__ tracking is not sufficient because React caches loaded chunks in a module-scoped map and skips calling __next_chunk_load__ if a cached promise for that chunk already exists. This becomes problematic when a chunk starts loading during a normal dev request (i.e., not a prerender) while, in parallel, a prerender request (for dynamic validation) also requires the same chunk. In that scenario, we have no way to track the chunk load via the prerender cache signal, causing the initial prerender to complete prematurely and trigger a dynamic I/O error during the final prerender.
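In simplified terms, React's chunk loading behaves roughly like the sketch below (illustrative only, not React's actual code), which is why a hook on __next_chunk_load__ only ever observes the first load of each chunk:

// Module-scoped cache of chunk promises, keyed by chunk id.
const chunkCache = new Map<string, Promise<unknown>>()
function loadChunk(chunkId: string): Promise<unknown> {
  let promise = chunkCache.get(chunkId)
  if (promise === undefined) {
    // Only the first request for a chunk reaches __next_chunk_load__, so
    // only that request can track the load via its cache signal.
    promise = (globalThis as any).__next_chunk_load__(chunkId) as Promise<unknown>
    chunkCache.set(chunkId, promise)
  }
  // A parallel prerender that needs the same chunk gets the cached promise
  // and never hits the tracking hook, so its cache signal stays unaware of
  // the still-pending load.
  return promise
}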

To address this limitation, we need a way to hook into React’s preloadModule function so we can track any module loading regardless of React’s internal chunk caching. This would also allow us to cover the resolution phase of async modules.

Unfortunately, React does not yet offer this API. In the meantime, we are patching preloadModule when generating the vendored packages to include our tracking logic.
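The patch is conceptually a thin wrapper inside the vendored module; a rough sketch, where reactPreloadModule stands in for React's original implementation and trackPendingChunkLoad is a hypothetical Next.js helper that registers the load with the active prerender's cache signal:

import { trackPendingChunkLoad } from './chunk-tracking' // hypothetical helper module
// Stand-in for React's original preloadModule logic.
const reactPreloadModule: (metadata: unknown) => Promise<unknown> | null = () => null
export function preloadModule(metadata: unknown): Promise<unknown> | null {
  const promise = reactPreloadModule(metadata)
  if (promise !== null) {
    // Track every in-flight module load, regardless of React's internal
    // chunk caching, including the resolution phase of async modules.
    trackPendingChunkLoad(promise)
  }
  return promise
}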

An additional benefit of this approach is that it should be faster during next build: pages that are rendered concurrently no longer need to wait for each other's chunks to finish loading, even when they don't need those chunks themselves. With the trackChunkLoading approach, this is currently an accepted limitation.

Contributor Author

Warning

This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.

This stack of pull requests is managed by Graphite. Learn more about stacking.

@unstubbable unstubbable changed the title Add failing test for chunk loading with inner "use cache" Fix chunk load handling during prerender warmup Jan 21, 2025
@ijjk ijjk (Member) commented Jan 21, 2025

Tests Passed

@ijjk ijjk (Member) commented Jan 21, 2025

Stats from current PR

Default Build (Increase detected ⚠️)
General Overall increase ⚠️
vercel/next.js canary vercel/next.js hl/fix-inner-use-cache-chunk-loading-warmup Change
buildDuration 21.8s 19.2s N/A
buildDurationCached 18s 15.5s N/A
nodeModulesSize 419 MB 419 MB ⚠️ +10.6 kB
nextStartRea..uration (ms) 474ms 474ms
Client Bundles (main, webpack)
vercel/next.js canary vercel/next.js hl/fix-inner-use-cache-chunk-loading-warmup Change
5306-HASH.js gzip 54.1 kB 54 kB N/A
8276.HASH.js gzip 169 B 168 B N/A
8377-HASH.js gzip 5.46 kB 5.46 kB N/A
bccd1874-HASH.js gzip 52.9 kB 52.9 kB N/A
framework-HASH.js gzip 57.5 kB 57.5 kB N/A
main-app-HASH.js gzip 240 B 242 B N/A
main-HASH.js gzip 34.6 kB 34.6 kB N/A
webpack-HASH.js gzip 1.71 kB 1.71 kB N/A
Overall change 0 B 0 B
Legacy Client Bundles (polyfills)
vercel/next.js canary vercel/next.js hl/fix-inner-use-cache-chunk-loading-warmup Change
polyfills-HASH.js gzip 39.4 kB 39.4 kB
Overall change 39.4 kB 39.4 kB
Client Pages
vercel/next.js canary vercel/next.js hl/fix-inner-use-cache-chunk-loading-warmup Change
_app-HASH.js gzip 193 B 193 B
_error-HASH.js gzip 193 B 193 B
amp-HASH.js gzip 512 B 510 B N/A
css-HASH.js gzip 343 B 342 B N/A
dynamic-HASH.js gzip 1.84 kB 1.84 kB
edge-ssr-HASH.js gzip 265 B 265 B
head-HASH.js gzip 363 B 362 B N/A
hooks-HASH.js gzip 393 B 392 B N/A
image-HASH.js gzip 4.59 kB 4.58 kB N/A
index-HASH.js gzip 268 B 268 B
link-HASH.js gzip 2.35 kB 2.35 kB N/A
routerDirect..HASH.js gzip 328 B 328 B
script-HASH.js gzip 397 B 397 B
withRouter-HASH.js gzip 323 B 326 B N/A
1afbb74e6ecf..834.css gzip 106 B 106 B
Overall change 3.59 kB 3.59 kB
Client Build Manifests
vercel/next.js canary vercel/next.js hl/fix-inner-use-cache-chunk-loading-warmup Change
_buildManifest.js gzip 748 B 747 B N/A
Overall change 0 B 0 B
Rendered Page Sizes
vercel/next.js canary vercel/next.js hl/fix-inner-use-cache-chunk-loading-warmup Change
index.html gzip 523 B 522 B N/A
link.html gzip 538 B 537 B N/A
withRouter.html gzip 520 B 520 B
Overall change 520 B 520 B
Edge SSR bundle Size Overall increase ⚠️
vercel/next.js canary vercel/next.js hl/fix-inner-use-cache-chunk-loading-warmup Change
edge-ssr.js gzip 129 kB 129 kB N/A
page.js gzip 209 kB 209 kB ⚠️ +114 B
Overall change 209 kB 209 kB ⚠️ +114 B
Middleware size
vercel/next.js canary vercel/next.js hl/fix-inner-use-cache-chunk-loading-warmup Change
middleware-b..fest.js gzip 670 B 667 B N/A
middleware-r..fest.js gzip 155 B 156 B N/A
middleware.js gzip 31.3 kB 31.3 kB N/A
edge-runtime..pack.js gzip 844 B 844 B
Overall change 844 B 844 B
Next Runtimes Overall increase ⚠️
vercel/next.js canary vercel/next.js hl/fix-inner-use-cache-chunk-loading-warmup Change
274-experime...dev.js gzip 322 B 322 B
274.runtime.dev.js gzip 314 B 314 B
app-page-exp...dev.js gzip 376 kB 377 kB ⚠️ +536 B
app-page-exp..prod.js gzip 131 kB 131 kB N/A
app-page-tur..prod.js gzip 144 kB 144 kB ⚠️ +219 B
app-page-tur..prod.js gzip 140 kB 140 kB ⚠️ +343 B
app-page.run...dev.js gzip 364 kB 364 kB ⚠️ +577 B
app-page.run..prod.js gzip 127 kB 127 kB ⚠️ +196 B
app-route-ex...dev.js gzip 37.6 kB 37.6 kB
app-route-ex..prod.js gzip 25.6 kB 25.6 kB
app-route-tu..prod.js gzip 25.6 kB 25.6 kB
app-route-tu..prod.js gzip 25.4 kB 25.4 kB
app-route.ru...dev.js gzip 39.2 kB 39.2 kB
app-route.ru..prod.js gzip 25.4 kB 25.4 kB
pages-api-tu..prod.js gzip 9.69 kB 9.69 kB
pages-api.ru...dev.js gzip 11.6 kB 11.6 kB
pages-api.ru..prod.js gzip 9.68 kB 9.68 kB
pages-turbo...prod.js gzip 21.9 kB 21.9 kB
pages.runtim...dev.js gzip 27.7 kB 27.7 kB
pages.runtim..prod.js gzip 21.9 kB 21.9 kB
server.runti..prod.js gzip 916 kB 916 kB
Overall change 2.35 MB 2.35 MB ⚠️ +1.87 kB
build cache Overall increase ⚠️
vercel/next.js canary vercel/next.js hl/fix-inner-use-cache-chunk-loading-warmup Change
0.pack gzip 2.1 MB 2.11 MB ⚠️ +1.96 kB
index.pack gzip 75.8 kB 76.1 kB ⚠️ +286 B
Overall change 2.18 MB 2.18 MB ⚠️ +2.24 kB
Diff details

Diffs for edge-ssr.js, main-HASH.js, app-page-exp..time.prod.js, both app-page-tur..time.prod.js runtimes, and app-page.runtime.prod.js were too large to display. Diffs for app-page-exp..ntime.dev.js and app-page.runtime.dev.js failed to diff.

Commit: 679b325

@unstubbable unstubbable changed the title Fix chunk load handling during prerender warmup Fix chunk load handling during prerendering Jan 22, 2025
@unstubbable unstubbable marked this pull request as ready for review January 22, 2025 16:14
@unstubbable unstubbable marked this pull request as draft January 22, 2025 19:54
@unstubbable unstubbable force-pushed the hl/fix-use-cache-wrapper-source-mapping branch from fc2ebbb to e31ef97 January 22, 2025 21:40
@unstubbable unstubbable force-pushed the hl/fix-inner-use-cache-chunk-loading-warmup branch from 0a346e3 to 77d1924 January 22, 2025 21:41
@unstubbable unstubbable force-pushed the hl/fix-use-cache-wrapper-source-mapping branch from e31ef97 to 7a14af1 January 22, 2025 23:41
@unstubbable unstubbable force-pushed the hl/fix-inner-use-cache-chunk-loading-warmup branch from 77d1924 to c5998ce January 22, 2025 23:41
@unstubbable unstubbable changed the base branch from hl/fix-use-cache-wrapper-source-mapping to graphite-base/75132 January 23, 2025 00:08
@unstubbable unstubbable force-pushed the hl/fix-inner-use-cache-chunk-loading-warmup branch from c5998ce to 2325b78 January 23, 2025 00:09
@unstubbable unstubbable changed the base branch from graphite-base/75132 to canary January 23, 2025 00:10
@unstubbable unstubbable force-pushed the hl/fix-inner-use-cache-chunk-loading-warmup branch from 2325b78 to 679b325 January 23, 2025 00:10
@lubieowoce lubieowoce closed this in 9b45393 May 6, 2025
eps1lon pushed a commit that referenced this pull request May 7, 2025
This PR introduces some extra tracking which makes us treat
`import(...)` like a call to a cached function. Thanks to this,
`await import(...)` will no longer cause dynamicity errors in `dynamicIO`.
The motivation is the same as allowing `fs.readFileSync` -- if something
is available on the server at prerender time, we don't consider it IO.

Fixes #72589
Closes #75132

### Implementation notes

The tracking is implemented via an SWC transform
(`track_dynamic_imports.ts`) that turns `import(...)` into
`trackDynamicImport(import(...))`.
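For a dynamic import in application code, the rewrite conceptually looks like this (the imported module is just an example):

// Application code before the transform:
//   const mod = await import('./markdown')
// After the transform (conceptually; the trackDynamicImport import is added by the transform):
const mod = await trackDynamicImport(import('./markdown'))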

`trackDynamicImport(promise)` tracks the promise globally, without using
`workUnitStore.cacheSignal`. The prospective render subscribes to
pending modules using `trackPendingModules(cacheSignal)`, which causes
the prospective render to wait for all `import()`s to finish before
proceeding to the final render. The mechanism is analogous to
`'use cache'`, but the "result" of an `import()` is stored *in the module
cache* instead; when we invoke the `import()` again in the actual
prerender, it will resolve at microtask-speed, like we need it to.
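A minimal sketch of that global tracking (`trackDynamicImport` / `trackPendingModules`); the names come from the description above, while the bookkeeping and the cache-signal methods used here are assumptions:

// Assumed shape of the cache signal used by the prospective render.
interface CacheSignal {
  beginRead(): void
  endRead(): void
}
const pendingModules = new Set<Promise<unknown>>()
export function trackDynamicImport<T>(modulePromise: Promise<T>): Promise<T> {
  // Remember the in-flight import globally, without going through
  // workUnitStore.cacheSignal at the call site.
  pendingModules.add(modulePromise)
  const onSettled = () => pendingModules.delete(modulePromise)
  modulePromise.then(onSettled, onSettled)
  return modulePromise
}
export function trackPendingModules(cacheSignal: CacheSignal): void {
  // The prospective render subscribes to all currently pending imports so
  // its cache signal stays busy until they settle. (The real implementation
  // must also cover imports that start later; omitted in this sketch.)
  for (const modulePromise of pendingModules) {
    cacheSignal.beginRead()
    const onSettled = () => cacheSignal.endRead()
    modulePromise.then(onSettled, onSettled)
  }
}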

The transform is enabled for all modules that run server-side, both RSC
and SSR, because the prerender runs both. This also includes route
handlers, because we also use a `cacheSignal` when prerendering those.
Notably, we also instrument `import()` in `node_modules` to account for
libraries that do lazy initialization.

I've also had to adjust the prospective client render in PPR mode to
wait for `cacheSignal.cacheReady()` - otherwise, it wouldn't wait for
`import()`s to resolve before doing the final client render, which would
subsequently cause a dynamicity error.
(we didn't need to wait for `cacheSignal` before, because all cached
promises would've already been awaited in the server prerender)

Finally, we no longer need `warmFlightResponse`, because
`trackPendingModules` already makes the `cacheSignal` wait for all
loading chunks to finish, so we don't need to do it ahead of time.
@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 21, 2025