We build websites for a living. Ours had better be fast.
Let's be honest — if a software company's own website is slow, bloated, and shipping unnecessary data, that says something. It's like a mechanic whose own car won't start. Nobody is going to hire that person.
This work didn't start because someone complained the site felt slow. It started because the build output showed two clear signals:
- a `large-page-data` warning
- `First Load JS shared by all` sitting at 222 kB
Those two lines meant we were pushing more data and JavaScript to the browser than we needed to, before the user even got to interact with anything.
In a Next.js Pages Router app, these are two separate problems:
- Large page data means the browser has to receive and parse more serialized data before hydration
- Large shared JS means every route carries the same dependency bundle, even when some pages don't use any of it
If you try to fix both at once without separating them first, you'll likely optimize the wrong thing. So the first step was simple: open the build output, look at the numbers, and treat them as the source of truth.
Two questions, not one
Before touching any code, we split the investigation:
- Which routes are carrying more data than they actually use? That's the page-data problem.
- Which dependencies got promoted into the shared bundle for every route? That's the shared JS problem.
Without this split you just see one big number and guess. With it, you know whether to go after data scope, code-loading boundaries, or both.
Phase one — shrink the page data
Tracing the large-page-data warning back to its source, the pattern was familiar: some routes were receiving a much broader data payload than they needed to render.
This happens a lot in multilingual marketing sites:
- content from different contexts gets lumped into one wide namespace
- a route that only needs part of it still gets the whole thing
- locale fallback can duplicate data across languages
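The fallback duplication in particular has a cheap fix: ship only the fallback entries a locale is actually missing, not the whole fallback copy next to the active one. A minimal sketch of the idea — the `Messages` shape and the function names are illustrative, not taken from the project:

```typescript
type Messages = Record<string, string>;

// Keep only the fallback keys the active locale doesn't define, so a page
// ships one merged object instead of two full per-locale copies.
function fallbackDelta(active: Messages, fallback: Messages): Messages {
  return Object.fromEntries(
    Object.entries(fallback).filter(([key]) => !(key in active))
  );
}

// The page payload becomes the active locale plus the thin delta.
function mergedMessages(active: Messages, fallback: Messages): Messages {
  return { ...fallbackDelta(active, fallback), ...active };
}
```

The merge has to happen server-side, before serialization — doing it in the browser means the duplicate data already crossed the wire.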
The system still worked fine. It just shipped more than it needed to. So the fix wasn't random trimming. It was reshaping data boundaries so each route only gets what it actually uses.
What that looked like in practice:
- split broad content groupings into route-appropriate scopes
- load only the content each page needs
- cut cross-locale duplication that added weight without user value
- make the content catalog more locale-aware
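In code, a "route-appropriate scope" can be as simple as an explicit map from each route to the content namespaces it renders, applied before the data is serialized. A sketch under assumed names — `ROUTE_NAMESPACES`, `scopeForRoute`, and the catalog shape are hypothetical, not the project's real API:

```typescript
type ContentCatalog = Record<string, Record<string, string>>;

// Hypothetical map of which content namespaces each route actually renders.
const ROUTE_NAMESPACES: Record<string, string[]> = {
  "/": ["common", "home"],
  "/terms": ["common", "terms"],
};

// Return only the slices of the catalog a route needs; everything else
// stays out of that route's serialized page data.
function scopeForRoute(catalog: ContentCatalog, route: string): ContentCatalog {
  const wanted = ROUTE_NAMESPACES[route] ?? ["common"];
  return Object.fromEntries(
    Object.entries(catalog).filter(([namespace]) => wanted.includes(namespace))
  );
}
```

In a Pages Router app this kind of filter would typically run inside `getStaticProps` or `getServerSideProps`, so the narrowing happens before anything is written into the page-data payload.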
Results:
| Metric | Before | After |
|---|---|---|
| `large-page-data` warning | Present | Gone |
| Home page-data (TH) | Much higher | 87,194 bytes |
| Home page-data (EN) | Much higher | 42,936 bytes |
| Terms page-data (TH) | Much higher | 125,482 bytes |
| Terms page-data (EN) | Much higher | 59,720 bytes |
The warning was gone, but the real point was that routes stopped carrying content they didn't need.
With page data smaller, the next bottleneck became obvious
Once the page-data problem was handled, the shared bundle stood out more clearly. `First Load JS shared by all` was still 222 kB. `_app` was still 199 kB.
This is where a lot of teams stop. The warning is gone, so it feels like the job is done. But unless you actually inspect the chunks again, you don't know whether the heavy stuff has really left the critical path.
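One way to do that inspection — assuming the optional `@next/bundle-analyzer` package, which the post doesn't confirm this project uses — is to wrap the Next.js config and run the build with an env flag:

```typescript
// next.config.mjs — a sketch only; assumes @next/bundle-analyzer is
// installed as a dev dependency.
import bundleAnalyzer from "@next/bundle-analyzer";

const withBundleAnalyzer = bundleAnalyzer({
  // Only generate the report when explicitly asked for.
  enabled: process.env.ANALYZE === "true",
});

export default withBundleAnalyzer({
  // ...existing Next.js config
});
```

Running `ANALYZE=true next build` then produces an interactive treemap, which makes it hard to fool yourself about whether a dependency actually left the shared chunk.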
We started with a reasonable guess: if some globally rendered UI is pulling in heavy client-side dependencies, those dependencies end up shared across every route.
So the first round focused on reducing weight in the shared UI path.
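The typical move in a Pages Router app is to load heavy client-only pieces through `next/dynamic`, so they get their own chunk instead of riding in `_app`. A sketch — `HeavyWidget` and its path are hypothetical:

```typescript
import dynamic from "next/dynamic";

// Split the heavy client-side dependency into its own chunk; with
// `ssr: false` it is also skipped during server rendering entirely.
const HeavyWidget = dynamic(() => import("../components/HeavyWidget"), {
  ssr: false,
  loading: () => null,
});

// Render <HeavyWidget /> only on the routes that need it; other routes
// no longer pay for its dependencies up front.
```

One catch: if the same module is still statically imported anywhere reachable from `_app`, the bundler keeps it in the shared chunk and the dynamic import changes nothing.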
But here's the thing — after that first round, we checked the chunk again. The heavy dependency was still sitting in the _app bundle.
Meaning: the first round helped, but the root cause hadn't actually moved out of the shared bundle. The fix looked good on paper, but the artifact told a different story.
If you haven't checked the artifact after the fix, you don't know if the problem is solved.