
Slow initial page render #23187

Closed
gwer opened this issue Mar 18, 2021 · 24 comments
Labels
bug Issue was opened via the bug report template. locked

Comments

@gwer
Contributor

gwer commented Mar 18, 2021

What version of Next.js are you using?

10.0.9

What version of Node.js are you using?

v15.11.0

What browser are you using?

Chrome

What operating system are you using?

macOS

How are you deploying your application?

next start

Describe the Bug

The first render of each page has a large TTFB. This can be seen in any example (e.g. api-routes-apollo-server-and-client-auth-app).

$ yarn create next-app --example api-routes-apollo-server-and-client-auth
$ yarn build
$ NODE_ENV=production yarn start

The first request to localhost:3000/about has a TTFB of about 1–1.5s. All subsequent requests have a TTFB of about 20ms. This is also reproducible with a simple custom server.

This overhead is due to the fact that dependencies are imported on demand when the user requests a page:

const components = await loadComponents(
  this.distDir,
  pagePath!,
  !this.renderOpts.dev && this._isLikeServerless
)

In real complex applications this overhead can increase TTFB to tens of seconds.

The second time the page loads faster, since the modules are cached (in ssr-module-cache.js).

Importing a module means reading the file and interpreting the code. These are two very slow operations, especially if there is a lot of code.
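
As a rough illustration (a sketch only; the page path below is an assumption, the real paths are listed in .next/server/pages-manifest.json), the difference between a cold and a cached import looks like this:

// cold-vs-warm.js: run from the project root after `next build`
const pagePath = "./.next/server/pages/about.js"; // assumed path, check pages-manifest.json

console.time("cold require (read file + compile)");
require(pagePath);
console.timeEnd("cold require (read file + compile)");

console.time("cached require (served from the module cache)");
require(pagePath);
console.timeEnd("cached require (served from the module cache)");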

Let's imagine an application with two pages: page1 and page2. It will be compiled into the following four modules: _app, _document, page1, page2. Using this application can be summarized in the following table.

| Cache | Action | Missing dependencies | Speed |
| --- | --- | --- | --- |
| [] | Server start | [] | fast |
| [] | Request page1 | [_app, _document, page1] | slow |
| [_app, _document, page1] | Request page1 | [] | fast |
| [_app, _document, page1] | Request page2 | [page2] | medium |
| [_app, _document, page1, page2] | Request page2 | [] | fast |

You can see the freeze even if a page is statically optimized (because _app and _document are imported even in the case of static optimization).

I think some issues and discussions about performance (e.g. #12447) are about this overhead.

But! I have a workaround. It amounts to warming up the cache.

const path = require("path");
const serverPath = path.join(__dirname, "./.next/server");

module.exports = () => {
  try {
    // Map of route -> compiled module path produced by `next build`
    const pagesManifest = require(path.join(serverPath, "pages-manifest.json"));

    Object.values(pagesManifest).forEach((dep) => {
      // Skip non-JS entries (e.g. prerendered .html pages)
      if (path.extname(dep) !== ".js") {
        return;
      }

      console.log("preimport ", dep);
      // Requiring the module now puts it into the module cache,
      // so the first request doesn't pay the import cost
      require(path.join(serverPath, dep));
    });
  } catch (e) {
    // Ignore errors (e.g. when the build output is missing)
  }
};

Full example:

If you run this function before starting the custom server, it will save you from this overhead (a minimal wiring sketch follows the table below).

| Cache | Action | Missing dependencies | Speed |
| --- | --- | --- | --- |
| [] | Server start | [_app, _document, page1, page2] | slow |
| [_app, _document, page1, page2] | Request page1 | [] | fast |
| [_app, _document, page1, page2] | Request page1 | [] | fast |
| [_app, _document, page1, page2] | Request page2 | [] | fast |
| [_app, _document, page1, page2] | Request page2 | [] | fast |
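
A minimal sketch of wiring the warm-up into a custom server (the file name ./preimport.js and the port are assumptions, this is not the linked full example):

// server.js: call the warm-up module before the server starts accepting requests
const { createServer } = require("http");
const next = require("next");
const preimportPages = require("./preimport"); // the warm-up function shown above (assumed filename)

const app = next({ dev: false });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  preimportPages(); // pay the module-import cost at boot instead of on the first request
  createServer((req, res) => handle(req, res)).listen(3000, () => {
    console.log("> Ready on http://localhost:3000");
  });
});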

This approach has two minor drawbacks:

  1. Server starts slower.
  2. Memory consumption grows faster. But the peak memory consumption is the same as without using the workaround.

How about doing the same, but inside the Next.js server?

Expected Behavior

There should be no overhead for the first page rendering.

To Reproduce

$ yarn create next-app --example api-routes-apollo-server-and-client-auth
$ yarn build
$ NODE_ENV=production yarn start

Open localhost:3000/about twice and look at TTFB.

Current state (2022-09)

_app and _document have been preimported by default since February 2022 (#23261).

For preimporting pages you can use the simple next-pages-preimport package.

@gwer gwer added the bug Issue was opened via the bug report template. label Mar 18, 2021
@timneutkens
Member

Not loading all pages up front is intentional: when you start scaling next start to multiple processes, each server does not need all pages loaded, as it won't hit all pages. _app / _document could potentially be pre-warmed, but pages definitely should not be, by design.

@gwer
Contributor Author

gwer commented Mar 19, 2021

What type of scaling do you mean?

So far, I can only imagine a few instances of a large application, with requests distributed between them by an external load balancer depending on the URL mask. But I'm not sure this is a common practice. In this case, it makes more sense to do something like a multi-zone application.

Other types of scaling are not strictly limited to a subset of the available pages, I think.

@mrguiman

mrguiman commented Jun 14, 2021

I've resorted to using @gwer's workaround, as our app has very little traffic, only 2 pages, and scales down to 0. Dependency import upon request means additional latency on top of the cold start, which isn't great.

Would love an option to handle pre-import without having to use a custom server! Or better yet, a way to control what gets "lazy loaded" or not.

In the meantime, thanks @gwer !

@gwer
Contributor Author

gwer commented Jul 21, 2021

Do you really care about such a performance drawdown? This is a really big problem that shows up even in an empty example.

And this problem has a fairly simple solution. It is enough to preload _app and _document, and also to have an additional option to preload all modules.

PR #23261 closes the first part of the problem. PR #23196 could be improved to make users' lives better. But both PRs were closed.

@ijjk
Member

ijjk commented Jul 21, 2021

Hi, we definitely do care about performance and have investigated how the loading of modules impacts request timings. We have actually seen the opposite: lazily loading only the needed modules creates a more optimal experience, especially when you're deploying on scalable infrastructure and need to be able to start up new instances very fast to handle increasing request counts.

The pre-warming seems to help in a very specific case: getServerSideProps/getInitialProps pages being the first requested page, deployed in an environment with a less responsive disk.

To reconfirm our previous testing, I tested some medium-sized pages on the latest version of Next.js: with no pre-warming, the TTFB for the first request was under 80ms and subsequent requests were under 20ms.

Can you provide additional information on how you are deploying your application? Is the disk access being restricted/slowed in your environment?

As mentioned on this PR, there are different types of pages and assets in Next.js where this doesn't provide a speed-up and instead slows down the boot-up time for serving those assets, such as automatically statically optimized pages, prerendered pages, and API routes.

@gwer
Contributor Author

gwer commented Jul 21, 2021

Thanks for the answer! And thanks for explaining the different types of pages and assets.

What about optional preloading? For example, adding a parameter to ServerConstructor.

Individual simple pages do not show any noticeable slowdown. But JS is famous for its immense code volumes and the memes about the size of node_modules. And as an application develops, the number of modules increases.

Obviously, the problem is most important for large and complex applications, not for simple pages.

The 1.5s TTFB with api-routes-apollo-server-and-client-auth was obtained on a stock 2020 MacBook Pro with an SSD and 16 GB of RAM, without any limitations or slowdowns in disk access.

@ijjk
Member

ijjk commented Jul 21, 2021

Yeah, it could potentially be an experimental option in next.config.js to allow opting in for specific applications/deployment set-ups that benefit from it and to allow further benchmarking, something like experimental: { prewarmRequiredPages: true }.
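
For reference, a hypothetical next.config.js with that flag might look as follows (a sketch only; prewarmRequiredPages is just the name floated above and never shipped):

// next.config.js (hypothetical: prewarmRequiredPages was only a proposal in this thread)
module.exports = {
  experimental: {
    prewarmRequiredPages: true,
  },
};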

@gwer
Contributor Author

gwer commented Jul 22, 2021

I added an experimental option to the gwer:pages_preimport branch (https://github.com/gwer/next.js/tree/pages_preimport). The branch is linked to PR #23196, but the PR hasn't been updated. I suppose that's because it's closed.

Should I create a new PR? Or do we need more discussion?

@ijjk
Member

ijjk commented Jul 22, 2021

I think we should start by adding an experimental config in the PR I re-opened, pre-warming specifically _app and _document. Then, after testing with that, we could look into a follow-up PR expanding the experimental config to pre-warm other pages.

@gwer
Contributor Author

gwer commented Jul 23, 2021

I have now added experimental.prewarmRequiredPages for _app and _document prewarming in #23261.

But I think that if we want to have separate options for [_document, _app] and for [_document, _app, ...pages] in the future, then the option covering only _app and _document needs a different name.

I also want to point out that we have been using a prewarming workaround for all pages in our projects for four months now, and it works well for us.

@chrskrchr

My team was facing a similar issue w/ long TTFB on the first page request after startup and we came across this thread. We can confirm that after hacking the experimental code from #23261 into our local Next.js server, we see a significant drop in first page load times after startup, from ~2s down to 0.1s.

@ijjk - could the Next.js team consider merging #23261?

@AMattRiley

Very similarly to @chrskrchr, my team saw the same thing: a whole order-of-magnitude reduction in TTFB when we implemented the workaround in our server for the initial page load after server spin-up. I would add my voice here in requesting that #23261 be merged.

@jgabriele

jgabriele commented Jan 15, 2022

Any news about this issue? I am deploying a dummy app on Vercel and the app takes 11 seconds to launch, which times out on the free plan.

I went with a paid plan, but 11 seconds to launch a page is not acceptable.

As I am using Vercel, the workaround with a custom server does not apply, so I would really hope for a solution from the Next.js team.

Please merge the above mentioned PR 🙏

@leerob
Member

leerob commented Jan 17, 2022

> I am deploying a dummy app on Vercel and the app takes 11 seconds to launch, which times out on the free plan.

This is unrelated to the original question - this issue is about the local dev server next dev and not the production build next build && next start.

Your issue could be from having a blocking request inside getServerSideProps that is connecting to a slow API.

@leerob
Member

leerob commented Jan 17, 2022

Could you share what you're importing in your _app and _document?

@gwer
Contributor Author

gwer commented Jan 19, 2022

> this issue is about the local dev server next dev and not the production build next build && next start.

Not really. This issue is about production too. But the workaround will only work with a custom server.

@jgabriele

jgabriele commented Jan 25, 2022

Hello @leerob, sorry for the late reply. Here are my imports. They are "flattened" here and are obviously spread among multiple components.

// _document.tsx

import * as React from "react";
import Document, { Html, Head, Main, NextScript } from "next/document";
import createEmotionServer from "@emotion/server/create-instance";
import createCache from "@emotion/cache";
import { createTheme, responsiveFontSizes } from '@mui/material/styles'

// _app.tsx
import type { AppProps } from "next/app";
import Head from "next/head";
import Link from "next/link";
import { useRouter } from "next/router";
import { ThemeProvider } from "@mui/material/styles";
import CssBaseline from "@mui/material/CssBaseline";
import { createTheme, responsiveFontSizes } from "@mui/material/styles";
import { Add, Person } from "@mui/icons-material";
import { Box, Button, Grid, Typography } from "@mui/material";
import { Avatar, Button, Menu, MenuItem } from "@mui/material";
import { TextField } from "@mui/material";
import { styled } from "@mui/system";
import { CacheProvider } from "@emotion/react";
import createCache from "@emotion/cache";
import { EmotionCache } from "@emotion/utils";
import { Session } from "next-auth";
import { SessionProvider } from "next-auth/react";
import { signIn, signOut, useSession } from "next-auth/react";

If you want to experience the slowness, here it is (or a more minimal version, which still takes 3 seconds for an empty page).

The imports for this minimal version are the following:

// _document.tsx

import * as React from "react";
import Document, { Html, Head, Main, NextScript } from "next/document";
import createEmotionServer from "@emotion/server/create-instance";
import createCache from "@emotion/cache";
import { createTheme, responsiveFontSizes } from '@mui/material/styles'

// _app.tsx
import type { AppProps } from "next/app";
import Head from "next/head";
import { ThemeProvider } from "@mui/material/styles";
import CssBaseline from "@mui/material/CssBaseline";
import { CacheProvider } from "@emotion/react";
import { EmotionCache } from "@emotion/utils";
import { SessionProvider } from "next-auth/react";
import createCache from "@emotion/cache";
import { createTheme, responsiveFontSizes } from "@mui/material/styles";

Remember that once the server is hot, it's quite fast. It's only the first hit that takes 10+ seconds.

If it helps, here are the chunks for the server build:

[screenshots: chunk sizes of the server build, 2022-01-25]

I have a ticket open with Vercel support, who told me that it seems linked to style imports that are not tree-shaken. They propose import xxx from '@mui/material/xxx' instead of import { xxx } from '@mui/material', but their proposed solution seems to be equally slow.

If it can help you to troubleshoot the issue, I am willing to give you access to my repo / vercel workflow, just ask 🙂.

@ijjk
Member

ijjk commented Jan 25, 2022

@jgabriele it looks like this is from importing directly from @mui/icons-material, which requires all 1900+ icon components and is very slow. The require below was done on an M1 Pro, so it could be even slower on other systems. If you import the exact icon directly instead, e.g. import Add from '@mui/icons-material/Add', you should see much better performance.

time node -e "require('@mui/icons-material')"                                               
node -e "require('@mui/icons-material')"  1.35s user 0.40s system 96% cpu 1.822 total
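
In other words, the application-side fix is to avoid the barrel import (a sketch using the Add and Person icons imported earlier in this thread):

// Slow: the barrel file pulls in 1900+ icon modules on the server
// import { Add, Person } from "@mui/icons-material";

// Fast: each icon resolves to its own module, so only two files are read
import Add from "@mui/icons-material/Add";
import Person from "@mui/icons-material/Person";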

@jgabriele

jgabriele commented Jan 28, 2022

@ijjk thanks! Indeed I get better performance now, though it's still not ideal. It went from 12s to 5s. I will try to do the same trick for the @mui/material imports if I can and see whether I can reduce it to a more reasonable amount.

Do we agree that this is a workaround, though? It can't be right that one person mistakenly using named imports makes the page load 2.5 times slower.

@jgabriele

Hello, any news about this issue?

I read this thread again and it looks like we have a solution ready to be merged which is opt-in, so it will have no impact on existing Next.js users. Is there something blocking us from merging this PR?

Even with the workaround you proposed, @ijjk, I still see some "cold start" issues.

// First request

START RequestId: ecea6064-da57-4fed-95a2-164715f4d4c6 Version: $LATEST
10.910	[_app getInitialProps] start
10.922	[_app getInitialProps] getSession
// This is calling a NextJS /api endpoint, which is also cold at first. This could also be optimized, but out of scope
14.713	[_app getInitialProps] end
14.713	[Home] start getServerSideProps
15.289	[Home] end getServerSideProps
15.291	[_document getInitialProps] start
15.564	[_document getInitialProps] end
END RequestId: ecea6064-da57-4fed-95a2-164715f4d4c6
REPORT RequestId: ecea6064-da57-4fed-95a2-164715f4d4c6	Duration: 5980.98 ms	Billed Duration: 5981 ms	Memory Size: 1024 MB	Max Memory Used: 112 MB	Init Duration: 386.76 ms

// Subsequent requests

START RequestId: 13c297b4-9e01-4f98-ae22-ca8bd4d4065c Version: $LATEST
16.564	[_app getInitialProps] start 
16.564	[_app getInitialProps] getSession
// Now the NextJS /api endpoint is much faster
16.646	[_app getInitialProps] end
16.646	[Home] start getServerSideProps
16.807	[Home] end getServerSideProps
16.808	[_document getInitialProps] start
17.003	[_document getInitialProps] end
END RequestId: 13c297b4-9e01-4f98-ae22-ca8bd4d4065c
REPORT RequestId: 13c297b4-9e01-4f98-ae22-ca8bd4d4065c	Duration: 502.30 ms	Billed Duration: 503 ms	Memory Size: 1024 MB	Max Memory Used: 121 MB

As you can see between the start of _app.getInitialProps() and end of _document.getInitialProps() we have:

  • 15.564 - 10.910 = 4.654s for the cold run, which is about 1.3s lower than the duration logged by Vercel (5980.98 ms)
  • 17.003 - 16.564 = 0.439s for the hot run, which is about 0.063s lower than the duration logged by Vercel

@gwer
Contributor Author

gwer commented Sep 8, 2022

The current state of this issue has been updated in its description:

> _app and _document have been preimported by default since February 2022 (#23261).
>
> For preimporting pages you can use the simple next-pages-preimport package.

@kopach

kopach commented Oct 10, 2023

The changes in PR #50900 should speed things up.
Those changes were released in Next.js v13.4.8. You can also add such config manually (in earlier versions of Next.js). I've tested it and things get a bit better.
In next.config.js:

module.exports = {
  modularizeImports: {
    '@mui/icons-material': {
      transform: '@mui/icons-material/{{member}}',
    },
  },
};

@feedthejim
Contributor

Hi, since this issue is pretty stale and generic, I'll be closing this. With regard to start-up performance, we're considering changing Next 15 to always warm up all the code on boot. You can try this out with config.experimental.preloadEntriesOnStart on the latest canary.
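
A minimal sketch of opting into that flag on a canary build (the flag name is taken from the comment above):

// next.config.js: enable the experimental warm-up mentioned above (recent canary builds)
module.exports = {
  experimental: {
    preloadEntriesOnStart: true,
  },
};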


github-actions bot commented May 6, 2024

This closed issue has been automatically locked because it had no new activity for 2 weeks. If you are running into a similar issue, please create a new issue with the steps to reproduce. Thank you.

@github-actions github-actions bot added the locked label May 6, 2024
@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 6, 2024