
Next.js on AWS: comparing price and performance

Simon Obetko, Co-founder @ Stacktape

Updated August 9th, 2024

Table of contents

  • Motivation
  • Test configuration
  • Latency under light load
    • Statically generated site
    • Server-side rendered
    • Server-side rendered with cached fetch
  • Latency with increasing load
    • Container
    • Lambda
    • Lambda@Edge
  • Error rates with increasing load
    • Container
    • Lambda
    • Lambda@Edge
  • Pricing comparison
    • Static page
    • Moderately dynamic page
    • Highly dynamic page
  • Conclusion
  • Vercel might not be the best way to host Next.js

    Motivation

    Vercel is the default option that comes to mind when deploying a Next.js app. It's fast, easy to use, and in general very nice to work with.

    But Vercel is not the only way to host your Next.js app.

    Hosting it on AWS has multiple advantages:

    • It can be up to 95% less expensive
    • You can host your Next.js app in a container, which has multiple advantages that we'll explore later
    • If the rest of your infrastructure is on AWS, integrating with it is easier and has lower latency
    • You can use the free tier for a commercial site (which Vercel doesn't allow)
    • Private VPC networking without a costly enterprise plan
    • You can leverage AWS activate credits

    This blog post explores 3 different setups.

    Test configuration

    We will compare 3 different setups: a container running on AWS Fargate, an AWS Lambda function, and a Lambda@Edge function.

    In every case, we use the CloudFront CDN. A Content Delivery Network (CDN) caches static content in more than 300 globally distributed points of presence.

    We will deploy an example startup landing page that leverages Next.js 14 and the App Router.

    All setups are primarily deployed to the eu-west-1 (Ireland) region.

    Latency under light load

    The first set of tests evaluates response time under light load. Requests are sent one at a time from dedicated EC2 instances in the eu-west-1 and us-east-1 regions.

    Test 1: Statically generated site

    In this test, we use static site generation with the 'force-static' option.
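    In the App Router, this is a one-line route segment config (a minimal sketch; the benchmark app's actual pages aren't shown in the post):

    ```typescript
    // app/page.tsx — pin this route to static generation at build time
    export const dynamic = "force-static";

    export default function Page() {
      return <main>Statically generated landing page</main>;
    }
    ```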

    Average latency (ms)

    Since we use a CDN, static content is cached globally, so the region the request comes from doesn't matter. Neither does the compute engine: the actual rendering happens only once, and the cached version is returned for all subsequent requests.

    The results also tell us that we should use static site generation whenever possible. Nothing beats static generation in terms of latency or infrastructure costs.

    Test 2: Server-side rendering

    Next, we test server-side rendering with the 'force-dynamic' option.
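    Opting the route out of static generation is the mirror-image config (again a sketch, assuming the App Router):

    ```typescript
    // app/page.tsx — render this route on every request
    export const dynamic = "force-dynamic";

    export default function Page() {
      return <main>Rendered at {new Date().toISOString()}</main>;
    }
    ```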

    Average latency (ms)

    As we can see, sending a request from a different region (us-east-1 to eu-west-1) adds around 200ms.

    The actual rendering took around 100ms. All compute engines have roughly the same resources (1GB memory and ~0.5 CPU), so this is expected.

    The clear winner is Lambda@Edge, which can deliver results much faster if your users are geographically distributed.

    Test 3: Server-side rendered with cached fetch

    This test is similar to the previous one, with one difference: we perform a fetch request during the render and use the Next.js cache to store the result. This simulates real-world scenarios where rendering the page requires data from different systems.
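    In the App Router, such a cached fetch looks roughly like this (the URL and revalidation window are placeholders; the post doesn't name the data source):

    ```typescript
    // app/page.tsx — server component with a fetch cached by the Next.js Data Cache
    export default async function Page() {
      const res = await fetch("https://api.example.com/data", {
        next: { revalidate: 3600 }, // reuse the cached response for up to an hour
      });
      const data = await res.json();
      return <main>{JSON.stringify(data)}</main>;
    }
    ```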

    By default, Next.js stores cached data on the file system. This is also true for our container-based deployment.

    The Lambda-based deployments use the default OpenNext architecture, which stores the Next.js cache in an S3 bucket. This is because Lambda functions have only limited file system storage (512MB), and this file system is lost every time the Lambda instance running the Next.js app is removed.

    Even when rendering the page at the edge, Next.js needs to access the cache, which is stored in only one place: the S3 bucket in the region where the application is primarily deployed. This means that any performance gains from edge rendering are lost.

    The container wins this test by a small margin, most likely because accessing files in S3 is slower than accessing files on a local file system.

    Average latency (ms)

    We skipped the first response in each test to avoid measuring Lambda cold starts. Cold starts are longer due to the size of Next.js app packages, but they're not our focus here; in real apps, they can be mitigated with techniques such as using a warmer.
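    A warmer can be as simple as a scheduled rule that invokes the function with a marker payload, which the handler short-circuits before doing any rendering work (a hypothetical sketch; the event shape and names are ours, not from the post):

    ```typescript
    // Hypothetical Lambda handler with a warmer guard. A scheduled
    // EventBridge rule would invoke it with { warmer: true } every few
    // minutes to keep instances initialized.
    type WarmerEvent = { warmer?: boolean };

    export const handler = async (event: WarmerEvent): Promise<string> => {
      if (event.warmer) {
        return "warmed"; // short-circuit: skip the Next.js request path
      }
      // ...normal request handling (serving the Next.js app) goes here...
      return "handled";
    };
    ```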

    Latency with increasing load

    Test 4: Container

    Fargate container average latency

    Test 5: Lambda

    Lambda average latency

    Test 6: Lambda@Edge

    Lambda@Edge average latency

    Error rates with increasing load

    Test 7: Container

    Fargate container error rate (%)

    Test 8: Lambda

    Lambda error rate (%)

    Test 9: Lambda@Edge

    Lambda@Edge error rate (%)

    Pricing comparison

    Static page (high cache hit rate)

    This chart shows pricing for the following type of Next.js app:

    • Pre-rendered and heavily cached content
    • High CDN hit rate (~90%)
    • Most costs come from CDN
    • Use cases: portfolios, blogs, static news sites
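    The hit rate matters because it determines how many requests ever reach the billed compute. A back-of-the-envelope sketch (numbers are illustrative, not from the benchmark):

    ```typescript
    // Requests that miss the CDN cache and reach the origin
    // (container or Lambda), given a total request count and hit rate.
    function originRequests(totalRequests: number, cdnHitRate: number): number {
      return Math.round(totalRequests * (1 - cdnHitRate));
    }

    // ~90% hit rate: only 1 in 10 requests incurs compute cost
    originRequests(1_000_000, 0.9); // 100000
    // ~20% hit rate: the origin handles most of the traffic
    originRequests(1_000_000, 0.2); // 800000
    ```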

    Monthly costs

    Moderately dynamic page

    This chart shows pricing for the following type of Next.js app:

    • Combines static with some dynamic content
    • Moderate CDN hit rate (~50%)
    • Some requests bypass CDN to hit the server
    • Use cases: e-commerce, content platforms

    Monthly costs

    Highly dynamic page

    This chart shows pricing for the following type of Next.js app:

    • High personalization or real-time data
    • Low CDN hit rate (~20%)
    • Most requests go directly to the server
    • Use cases: real-time apps, personalized dashboards

    Monthly costs

    Conclusion

    Our exploration of Next.js hosting on AWS has given us valuable insights into how the different setups perform. It's important to note, though, that your app might see different results based on a variety of factors.

    Watch out for Lambda@Edge costs if your app serves users globally. It's fast, but can get expensive even faster.

    Use Lambda when

    • You're not consistently hitting 5-10 requests per second, which is common, since CDN caching reduces direct hits to your Lambda.
    • You need to scale up fast because of unpredictable traffic. Lambda can handle this automatically.

    Use containers when

    • You've got steady traffic over 5-10 requests per second.
    • You're looking to avoid the delay from Lambda's cold starts.
    • Your app sees consistent traffic without big jumps, fitting the container's scaling model.
