Simon Obetko, Co-founder @ Stacktape
Updated August 9th, 2024
Vercel is the default option that comes to mind when deploying a Next.js app. It's fast, easy to use, and in general very nice to work with.
But Vercel is not the only way to host your Next.js app.
Hosting it on AWS has multiple advantages:
This blog post explores and compares 3 different setups:
In every case, we use the CloudFront CDN. A Content Delivery Network (CDN) caches static content in more than 300 globally distributed points of presence.
We will deploy an example startup landing page that leverages Next.js 14 and app router.
All setups are primarily deployed to the eu-west-1 (Ireland) region.
The first set of tests evaluates the response time under a light load. The requests are sent one at a time from dedicated EC2 instances in the eu-west-1 and us-east-1 regions.
In this test, we use static site generation with the 'force-static' option.
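In the app router, static generation can be forced per route via a route segment config export. A minimal sketch (the file path is illustrative):

```typescript
// app/page.tsx (illustrative path) — route segment config.
// 'force-static' tells Next.js to render this route at build time;
// the resulting HTML is cached by CloudFront and served without
// invoking any compute on subsequent requests.
export const dynamic = 'force-static';
```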
Since we use CDN, static content is cached globally. So the origin of the request doesn't matter. Neither does the compute engine, since the actual rendering is happening only once, and the cached version is returned for all the subsequent requests.
The results also tell us that we should use static site generation whenever possible. Nothing beats static generation in terms of latency or infrastructure costs.
Next, we test server-side rendering with the 'force-dynamic' option.
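Opting a route into server-side rendering is the mirror image of the previous config. A minimal sketch (again, the file path is illustrative):

```typescript
// app/page.tsx (illustrative path) — route segment config.
// 'force-dynamic' opts the route out of static generation, so every
// request is rendered on the server (Lambda, Lambda@Edge, or container).
export const dynamic = 'force-dynamic';
```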
As we can see, sending a request from a different region (us-east-1 to eu-west-1) adds around 200ms.
The actual rendering took around 100ms. All compute engines have roughly the same resources (1GB of memory and ~0.5 CPU), so this is expected.
The clear winner is Lambda@Edge, which can deliver results much faster, if your users are geographically distributed.
This test is similar to the previous one, but with one difference: we perform a fetch request during the render and use the Next.js cache to store the result. This simulates real-world scenarios where, in order to render the page, you need data from other systems.
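Such a cached fetch might look like the sketch below. The endpoint URL and the `getPosts` helper name are illustrative; the `next.revalidate` option is Next.js's extension of the global `fetch()` that controls its data cache (the file system or S3, depending on the setup):

```typescript
// Illustrative data loader called during server-side rendering.
// Next.js patches the global fetch(), so the `next.revalidate` option
// stores the response in the framework's data cache and refreshes it
// at most once per 60 seconds.
export async function getPosts(): Promise<unknown> {
  const res = await fetch('https://api.example.com/posts', {
    next: { revalidate: 60 },
  } as RequestInit);
  if (!res.ok) {
    throw new Error(`Upstream returned ${res.status}`);
  }
  return res.json();
}
```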
By default, Next.js stores the cached data on the file system. This is also the case for our container-based deployment.
The Lambda-based deployments use the default OpenNext architecture, which stores the Next.js cache in an S3 bucket. This is because Lambda functions have only limited file system storage (512MB), and this file system is lost every time the Lambda instance running the Next.js app is recycled.
Even when rendering the page at the edge, Next.js needs to access the cache stored in only one place - the S3 bucket in the region where the application is primarily deployed. This means that any performance gains from edge rendering are lost.
The container wins this test by a small margin, most likely because accessing files in S3 is slower than accessing files on the local file system.
We skipped the first response in each test to avoid measuring the cold starts of Lambda functions. Cold starts are longer due to the size of Next.js app packages, but they're not our focus here. They can be mitigated in real apps with techniques such as using a warmer.
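The warmer technique mentioned above boils down to a scheduled rule (e.g. EventBridge) periodically invoking the function with a marker payload that the handler short-circuits on. A hedged sketch of the pattern; the payload shape and handler name are assumptions, not the exact OpenNext implementation:

```typescript
// Illustrative warmer pattern for a Lambda-hosted Next.js server.
// A scheduled event invokes the function every few minutes with a
// marker payload; the handler returns immediately on it, which keeps
// instances warm without doing any Next.js rendering work.
export async function handler(event: { warmer?: boolean }) {
  if (event?.warmer) {
    // Warming invocation: skip all application logic.
    return { warmed: true };
  }
  // ...normal request handling (the Next.js server) would go here...
  return { statusCode: 200 };
}
```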
This pricing chart represents pricing for the following Next.js app:
Our exploration into Next.js hosting on AWS has given us valuable insights into how different setups perform. It's important to note, though, that your app might see different results based on a variety of factors.
Watch out for Lambda@Edge costs if your app serves users globally. It's fast, but can get expensive even faster.
Use Lambda when
Use containers when
Let Stacktape transform your AWS into a developer-friendly platform.
Learn more