At DockerCon 2022, Kathleen Juell, a Full Stack Engineer at Sourcegraph, shared some tips for combining Next.js, Docker, and NGINX to serve static content. With nearly 400 million active websites today, efficient content delivery is key to attracting new web application users.
In some cases, using Next.js can boost deployment efficiency, accelerate time to market, and help attract web users. Follow along as we tackle building and running Next.js applications with Docker. We’ll also cover key processes and helpful practices for serving that static content.
Why serve static content with a web application?
According to Kathleen, the following are the benefits of serving static content:
- Fewer moving parts, such as databases or other microservices, are directly involved in page rendering. This backend simplicity minimizes attack surfaces.
- Static content stands up better (with fewer uncertainties) to higher traffic loads.
- Static websites are fast since they don’t require repeated rendering.
- Static website code is stable and relatively unchanging, improving scalability.
- Simpler content means more deployment options.
Now that we know why building a static web app is beneficial, let’s explore how.
Building our services stack
To serve static content efficiently, a three-pronged services approach composed of Next.js, NGINX, and Docker is useful. While it’s possible to serve everything from a Next.js server, offloading static file delivery to an NGINX server is preferable. NGINX is event-driven and excels at rapidly serving content thanks to its single-threaded architecture. This enables performance optimization even during periods of higher traffic.
Luckily, containerizing a cross-platform NGINX server instance is pretty straightforward. This setup is also resource friendly. Those are some of the reasons, whether stated explicitly in the talk or simply implied, that Kathleen leveraged these three technologies together.
Docker Desktop also gives us the tools needed to build and deploy our application. It’s important to install Docker Desktop before recreating Kathleen’s development process.
The following trio of services will serve our static content:
First, our `auth-backend` has a build context rooted in its own directory and a port mapping. It’s based on a slimmer `alpine` flavor of the Node.js Docker Official Image and uses named `Dockerfile` build stages to prevent reordered `COPY` instructions from breaking.
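Kathleen’s talk doesn’t reproduce the full file, but a minimal sketch of what that Dockerfile could look like follows. Only the `alpine` Node.js base and the named build stages come from the description above; the Node.js version, stage names, port, and `server.js` entrypoint are illustrative assumptions.

```dockerfile
# Hypothetical auth-backend/Dockerfile (names and versions are illustrative)
FROM node:18-alpine AS deps
WORKDIR /app
# Copy only the dependency manifests so this layer caches independently of code changes
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

FROM node:18-alpine AS runner
WORKDIR /app
# Reference the "deps" stage by name; reordering or adding stages later
# won't break this COPY the way a numeric stage index could
COPY --from=deps /app/node_modules ./node_modules
COPY . .
EXPOSE 3001
CMD ["node", "server.js"]
```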
Second, our `client` service has its own build context and a named volume, `staticbuild`, mounted at the `/app/out` directory. This lets us mount the same volume within our NGINX container. We’re not mapping any ports since NGINX will serve our content.
Third, we’ll containerize an NGINX server that’s based on the NGINX Docker Official Image.
As Kathleen mentions, ending this `client` service’s `Dockerfile` with a `RUN` command is key. We want the container to exit after completing the `yarn build` process. This process generates our static content and should only happen once for a static web application.
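A minimal sketch of that `client` Dockerfile, assuming a Yarn-based Next.js project whose build script writes a static export to `/app/out`, might look like this:

```dockerfile
# Hypothetical client/Dockerfile: generate the static export, then let the container exit
FROM node:18-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
# Deliberately the last instruction: the export is produced at image build time,
# and there is no long-running server process to keep the container alive
RUN yarn build
```

Because `RUN yarn build` executes at image build time, the static files already exist at `/app/out` inside the image. When Compose first mounts the empty `staticbuild` named volume at that path, Docker copies those files into the volume, which is what makes them visible to the NGINX container.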
Each component is accounted for within its own container. Now, how do we seamlessly spin up this multi-container deployment and start serving content? Let’s dive in!
Using Docker Compose and Docker volumes
The simplest way to orchestrate multi-container deployments is with Docker Compose. This lets us define multiple services within a unified configuration, without having to juggle multiple files or write complex code.
We use a `compose.yml` file to describe our services, their contexts, networks, ports, volumes, and more. These configurations influence app behavior.
Here’s what our complete Docker Compose file looks like:
```yaml
services:
  auth-backend:
    build:
      context: ./auth-backend
    ports:
      - "3001:3001"
    networks:
      - dev

  client:
    build:
      context: ./client
    volumes:
      - staticbuild:/app/out
    networks:
      - dev

  nginx:
    build:
      context: ./nginx
    volumes:
      - staticbuild:/app/public
    ports:
      - "8080:80"
    networks:
      - dev

networks:
  dev:
    driver: bridge

volumes:
  staticbuild:
```
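With this file saved at the project root, a single standard Compose command builds all three images and starts the services:

```bash
docker compose up --build
```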
You’ll also see that we’ve defined our networks and volumes in this file. These services all share the `dev` network, which lets them communicate with each other while remaining discoverable. There’s also a common volume, `staticbuild`, shared between the `client` and `nginx` services. We’ll now explain why that’s significant.
Using mounted volumes to share files
Specifically, this example leverages named volumes to share files between containers. By mapping the `staticbuild` volume to Next.js’ default `out` output directory, you can export your build and serve that content with your NGINX server. This output typically exists as one or more HTML files plus their assets. Note that the NGINX container mounts the same volume at `/app/public` by comparison.
While Next.js helps present your content on the frontend, NGINX delivers those important resources from the backend.
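The NGINX side of this setup can stay small. As a rough sketch, assuming a custom `nginx.conf` whose server block sets `root /app/public;`, the Dockerfile might be little more than:

```dockerfile
# Hypothetical nginx/Dockerfile based on the NGINX Docker Official Image
FROM nginx:1.23-alpine
# Replace the default site config with one that serves files from /app/public,
# where the shared staticbuild volume is mounted
COPY nginx.conf /etc/nginx/conf.d/default.conf
```

The version tag and config filename are assumptions; the key point is that NGINX reads the very files the `client` container exported into the shared volume.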
Leveraging A/B testing to create tailored user experiences
You can customize your client-side code to change your app’s appearance and, ultimately, the end-user experience. This code determines how page content is displayed once a server like NGINX is serving it. It may also determine which users see which content, something that’s commonly based on sign-in status, for example.
Testing helps us understand how application changes can impact these user experiences, both positively and negatively. A/B testing helps us uncover the “best” version of our application by comparing features and page designs. How does this look in practice?
Specifically, you can use cookies and hooks to track user login activity. When a user logs in, they’ll see something like user stories (from Kathleen’s example). Logged-out users won’t see this content. Alternatively, a web user might only have access to certain pages once they’re authenticated. It’s your job to monitor user activity, review any feedback, and determine if those changes bring clear value.
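As a rough illustration rather than Kathleen’s exact code, a client-side hook could read a session cookie after hydration and conditionally render the stories feed. The cookie name and the rendered content here are invented for the example:

```tsx
import { useEffect, useState } from "react";

// Hypothetical hook: returns true once a "session" cookie is found on the client
function useIsLoggedIn(): boolean {
  const [loggedIn, setLoggedIn] = useState(false);

  useEffect(() => {
    // Statically exported pages can still branch on client-side state after hydration
    setLoggedIn(
      document.cookie.split("; ").some((c) => c.startsWith("session="))
    );
  }, []);

  return loggedIn;
}

export default function Home() {
  const loggedIn = useIsLoggedIn();

  // Logged-in visitors see the stories feed; everyone else gets a sign-in prompt
  return loggedIn ? (
    <p>Your user stories feed goes here.</p>
  ) : (
    <p>Sign in to see user stories.</p>
  );
}
```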
These are just two use cases for A/B testing, and the possibilities are nearly endless when it comes to conditionally rendering static content with Next.js.
Containerize your Next.js static web app
There are many different ways to serve static content. However, Kathleen’s three-service method remains an excellent example. It’s useful both during exploratory testing and in production. To learn more, check out Kathleen’s complete talk.
By containerizing each service, your application remains flexible and deployable across any platform. Docker can help developers craft accessible, customizable user experiences within their web applications. Get started with Next.js and Docker today to begin serving your static web content!
Additional Resources
- Check out the NGINX Docker Official Image
- Read about the Node Docker Official Image
- Learn about getting started with Docker Compose
- View our awesome-compose sample GitHub projects