How to Use the Node Docker Official Image
https://www.docker.com/blog/how-to-use-the-node-docker-official-image/ (Wed, 26 Oct 2022)

Topping Stack Overflow’s 2022 list of most popular web frameworks and technologies, Node.js continues to grow as a critical MERN stack component. And since Node applications are written in JavaScript — the world’s leading programming language — many developers will feel right at home using it. We introduced the Node Docker Official Image (DOI) due to Node.js’ popularity and to solve some common development challenges.

The Node.js Foundation describes Node as “an open-source, cross-platform JavaScript runtime environment.” Developers use it to create performant, scalable server and networking applications. Despite Node’s advantages, building and deploying cross-platform services can be challenging with traditional workflows.

Conversely, the Node Docker Official Image accelerates and simplifies your development processes while allowing additional configuration. You can deploy containerized Node applications in minutes. Throughout this guide, we’ll discuss the Node Official Image, how to use it, and some valuable best practices. 

In this tutorial:

What is the Node Docker Official Image?


The Node Docker Official Image contains all source code, core dependencies, tools, and libraries your application needs to work correctly. 

This image supports multiple CPU architectures like amd64, arm32v6, arm32v7, arm64v8, ppc64le, and s390x. You can also choose between multiple tags (or image versions) for any project. Choosing a pinned version like node:19.0.0-slim locks you into a stable, streamlined version of Node.js. 

Node.js use cases

Node.js lets developers write server-side code in JavaScript. The runtime environment then transforms this JavaScript into hardware-friendly machine code. As a result, the CPU can process these low-level instructions. 

Node is event-driven (reacting to events like incoming requests and timers), non-blocking, and known for being lightweight while simultaneously handling numerous operations. As a result, you can use the Node DOI to create the following: 

  • Web server applications
  • Networking applications

Node works well here because it supports HTTP requests and socket connections. An asynchronous I/O library lets Node containers read and write various system files that support applications. 
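
To make this concrete, here’s a minimal Node web server of the kind you might later containerize (the port number is an arbitrary choice for this sketch):

// server.js: a minimal Node web server, for illustration only
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from a containerized Node app\n');
});

// Pick any port; match it to whatever you EXPOSE or publish later.
server.listen(8888, () => console.log('Listening on port 8888'));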

You could use the Node DOI to build streaming apps, single-page applications, chat apps, to-do list apps, and microservices. Or — if you’re like Community All-Hands’ Kathleen Juell — you could use Node.js to help serve static content. Containerized Node will shine in any scenario dictated by numerous client-server requests. 

Docker Captain Bret Fisher also offered his thoughts on Dockerized Node.js during DockerCon 2022. He discussed best practices for managing Node.js projects while diving into optimization. 

Lastly, we also maintain some Node sample applications within our GitHub Awesome Compose library. You can learn to use Node with different databases or even incorporate an NGINX proxy. 

About Docker Official Images

We’ve curated the Node Docker Official Image as one of many core container images on Docker Hub. The Node.js community maintains this image alongside members of the Docker community. 

Like other Docker Official Images, the Node DOI offers a common starting point for Node and JavaScript developers. We also maintain an evolving list of Node best practices while regularly pushing critical security updates. This distinguishes Docker Official Images from alternatives on Docker Hub. 

How to run Node in Docker

Before getting started, download the latest Docker Desktop release and install it. Docker Desktop includes the Docker CLI, Docker Compose, and additional core development tools. The Docker Dashboard (Docker Desktop’s UI component) will help you manage images and containers. 

You’re then ready to Dockerize Node!

Enter a quick pull command

Pulling the Node DOI is the quickest way to begin. Enter docker pull node in your terminal to grab the default latest Node version from Docker Hub. You can readily use this tag for testing or local development. But, a pinned version might be safer for production use. Here’s how the pull process works: 
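
For reference, a pull of the default tag produces output along these lines (abbreviated here; layer digests and sizes will vary):

docker pull node
Using default tag: latest
latest: Pulling from library/node
...
Status: Downloaded newer image for node:latest
docker.io/library/node:latest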

Your CLI will display a status message once it’s done. You can also double-check this within Docker Desktop! Click the Images tab on the left sidebar and scan through your listed images. Docker Desktop will display your node image:

Docker UI listing local images, including the Node Docker Official Image.

Your node:latest image is a hefty 942.33 MB. If you inspect your Node image’s contents using docker sbom node, you’ll see that it currently includes 623 packages. The Node image contains numerous dependencies and modules that support Node and various applications. 

However, your final Node image can be much slimmer! We’ll tackle optimization while discussing Dockerfiles. After all, the Node DOI has 24 supported tags spread amongst four major Node versions. Each has its own impact on image size.  

Confirm that Node is functional

Want to run your new image as a container? Hover over your listed node image and click the blue “Run” button. In this state, your Node container will produce some minimal log entries and run continuously in case requests come through. 

Exit this container before moving on by clicking the square “stop” button in Docker Desktop or by entering docker stop YourContainerName in the CLI. 

Create your Node image from a Dockerfile

Building from a Dockerfile gives you ultimate control over image composition, configuration, and your overall application. However, Node requires very little to function properly. Here’s a barebones Dockerfile to get you up and running (using a pinned, Debian-based image version): 

FROM node:19-bullseye

Docker will build your image from your chosen Node version. 

It’s safest to use node:19-bullseye because this image supports numerous use cases. This version is also stable and prevents you from pulling in new breaking changes, which sometimes happens with latest tags. 

To build your image from a Dockerfile, run the docker build -t my-nodejs-app . command. You can then run your new image by entering docker run -it --rm --name my-running-app my-nodejs-app.

Optimize your Node image

The complete version of Node often includes extra packages that weigh your application down. This leaves plenty of room for optimization. 

For example, removing unneeded development dependencies reduces image bloat. You can do this by adding a RUN instruction to our previous file: 

FROM node:19-bullseye

RUN npm prune --production

This approach is pretty granular. It also relies on you knowing exactly what you do and don’t need for your project. Alternatively, switching to a slim image build offers the quickest results. You’ll encounter similar caveats but spend less time writing individual Dockerfile instructions. The easiest approach is to replace node:19-bullseye with its node:19-bullseye-slim counterpart. This alone shrinks image size by 75%. 

You can even pull node:19-alpine to save more disk space. However, this tag contains even fewer dependencies and isn’t officially supported by the Node.js Foundation. Keep this in mind while developing. 

Finally, multi-stage builds lead to smaller image sizes. These let you copy only what you need between build stages to combat bloat. 
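
Here’s a minimal multi-stage sketch. The npm run build step and dist/index.js entry point are placeholders; adapt them to your project’s actual build output:

FROM node:19-bullseye AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
# Placeholder build step; swap in your project's real build command
RUN npm run build && npm prune --production

FROM node:19-bullseye-slim
WORKDIR /usr/src/app
# Copy only production dependencies and built output from the build stage
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist
CMD ["node", "dist/index.js"]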

Using Docker Compose

Say you have a start script, an existing package.json file, and (possibly) want to operate Node alongside other services. Spinning up Node containers with Docker Compose can be pretty handy in these situations.

Here’s a sample docker-compose.yml file: 

services:
  node:
    image: "node:19-bullseye"
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=production
    volumes:
      - ./:/home/node/app
    ports:
      - "8888:8888"
    command: "npm start"

You’ll see some parameters that we didn’t specify earlier in our Dockerfile. For example, the user parameter lets you run your container as an unprivileged user. This follows the principle of least privilege. 

To jumpstart your Node container, simply enter the docker compose up -d command. Like before, you can verify that Node is running within Docker Desktop. The docker container ls --all command also displays all existing containers within the CLI.  

Running a simple Node script

Your project doesn’t always need a Dockerfile. In these cases, you can directly leverage the Node DOI with the following command: 

docker run -it --rm --name my-running-script -v "$PWD":/usr/src/app -w /usr/src/app node:19-bullseye node your-daemon-or-script.js

This simplistic approach is ideal for single-file projects.

Docker Node best practices

It’s important to get the most out of Docker and the Node Official Image. We’ve briefly outlined the benefits of running as a non-root node user, but here are some useful tips for developing with Node: 

  • Easily pass secrets and other runtime configuration to your application by setting environment variables, such as NODE_ENV to production, as seen here: -e "NODE_ENV=production".
  • Place any installed, global Node dependencies into a non-root user directory.
  • Remember to manually install curl if using an alpine image tag, since it’s not included by default.
  • Wrap your Node process in an init system with the --init flag, so it can successfully run as PID 1 (see the combined example after this list). 
  • Set memory limitations for your containers that run on the same host. 
  • Include the package.json start command directly within your Dockerfile, to reduce active container processes and let Node properly receive exit signals. 
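
Several of these tips combine naturally at run time. Here’s an illustrative command using the my-nodejs-app image we built earlier (the 512 MB memory limit is an arbitrary value to tune for your host):

docker run -d --init --memory 512m -e "NODE_ENV=production" --name my-running-app my-nodejs-app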

This isn’t an exhaustive list. To view more details, check out our best practices documentation.

Get started with Node today

As you’ve seen, spinning up a Node container from the Node Docker Official Image is quick and requires just a few steps depending on your workflow. You’ll no longer need to worry about platform-specific builds or get bogged down with complex development processes. 

We’ve also covered many ways to help your Node builds perform better. Check out our top containerization tips article to learn even more about optimization and security. 

Ready to get started? Swing by Docker Hub and pull our Node image to start experimenting. In no time, you’ll have your server and networking applications up and running. You can also learn more on our GitHub README page.

9 Tips for Containerizing Your Node.js Application
https://www.docker.com/blog/9-tips-for-containerizing-your-node-js-application/ (Thu, 13 Oct 2022)

Over the last five years, Node.js has maintained its position as a top platform among professional developers. It’s an open source, cross-platform JavaScript runtime environment designed to maximize throughput. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient — perfect for data-intensive, real-time, and distributed applications. 

With over 90,500 stars and 24,400 forks, Node’s developer community is highly active. With more devs creating Node.js apps than ever before, finding efficient ways to build and deploy cross-platform is key. Let’s discuss how containerization can help before jumping into the meat of our guide. 

Why is containerizing a Node application important?

Containerizing your Node application has numerous benefits. First, Docker’s friendly, CLI-based workflow lets any developer build, share, and run containerized Node applications. Second, developers can install their app from a single package and get it up and running in minutes. Third, Node developers can code and test locally while ensuring consistency from development to production.

We’ll show you how to quickly package your Node.js app into a container. We’ll also tackle key concerns that are easy to forget — like image vulnerabilities, image bloat, missing image tags, and poor build performance. Let’s explore a simple todo list app and discuss how our nine tips might apply.

Analyzing a simple todo list application

Let’s first consider a simple todo list application. This is a basic React application with a Node.js backend and a MongoDB database. The source code of the complete project is available within our GitHub samples repo.

Building the application

Luckily, we can build our sample application in just a few steps. First, you’ll want to clone the appropriate awesome-compose sample to use it with your project:

git clone https://github.com/dockersamples/awesome-compose/
cd awesome-compose/react-express-mongodb
docker compose -f docker-compose.yaml up -d

Second, enter the docker compose ps command to list out your services in the terminal. This confirms that everything is accounted for and working properly:

docker compose ps
NAME                COMMAND                  SERVICE             STATUS              PORTS
backend             "docker-entrypoint.s…"   backend             running             3000/tcp
frontend            "docker-entrypoint.s…"   frontend            running             0.0.0.0:3000->3000/tcp
mongo               "docker-entrypoint.s…"   mongo               running             27017/tcp

Third, open your browser and navigate to http://localhost:3000 to view your application in action. You’ll see your todo list UI and be able to directly interact with your application:

List View

This is a great way to spin up a functional application in a short amount of time. However, remember that these samples are foundations you can build upon. They’re customizable to better suit your needs. And this can be important from a performance standpoint — since our above example isn’t fully optimized. Next, we’ll share some general optimization tips and more to help you build the best app possible. 

Our top nine tips for containerizing and optimizing Node applications

1) Use a specific base image tag instead of “version:latest”

When building Docker images, we always recommend specifying useful tags which codify version information, intended destination (prod or test, for instance), stability, or other useful information for deploying your application across environments.

Don’t rely on the latest tag that Docker automatically pulls, outside of local development. Using latest is unpredictable and may cause unexpected behavior. Each time you pull a latest image version, it could contain a new build or untested code that may break your application. 

Consider the following Dockerfile that uses the specific node:lts-buster Docker image as a base image instead of node:latest. This approach may be preferable since lts-buster is a stable image:

# Create image based on the official Node image from dockerhub
FROM node:lts-buster

# Create app directory
WORKDIR /usr/src/app

# Copy dependency definitions
COPY package.json ./package.json
COPY package-lock.json ./package-lock.json

# Install dependencies
RUN npm ci

# Get all the code needed to run the app
COPY . .

# Expose the port the app runs in
EXPOSE 3000

# Serve the app
CMD ["npm", "start"]

Overall, it’s often best to avoid using FROM node:latest in your Dockerfile.

2) Use a multi-stage build

With multi-stage builds, a Docker build can use one base image for compilation, packaging, and unit testing. A separate image holds the application’s runtime. This makes the final image more secure and shrinks its footprint (since it doesn’t contain development or debugging tools). Multi-stage Docker builds help ensure your builds are 100% reproducible and lean. You can create multiple stages within a Dockerfile to control how you build that image.

You can containerize your Node application using a multi-layer approach. Each layer may contain different app components like source code, resources, and even snapshot dependencies. What if we want to package our application into its own image like we mentioned earlier? Check out the following Dockerfile to see how it’s done:

FROM node:lts-buster-slim AS development

WORKDIR /usr/src/app

COPY package.json ./package.json
COPY package-lock.json ./package-lock.json
RUN npm ci

COPY . .

EXPOSE 3000

CMD [ "npm", "run", "dev" ]

FROM development as dev-envs
RUN <<EOF
apt-get update
apt-get install -y --no-install-recommends git
EOF


# install Docker tools (cli, buildx, compose)
COPY --from=gloursdocker/docker / /
CMD [ "npm", "run", "dev" ]

We first add an AS development label to the node:lts-buster-slim statement. This lets us refer to this build stage in other build stages. Next, we add a new development stage labeled dev-envs. We’ll use this stage to run our development environment.

Now, let’s rebuild our image and run our development environment. We’ll use the same docker build command as above — while adding the --target dev-envs flag to specifically run that build stage:

docker build -t node-docker --target dev-envs .

3) Fix security vulnerabilities in your Node image

Today’s developers rely on third-party code and apps while building their services. External software can introduce unwanted vulnerabilities into your code if you’re not careful. Leveraging trusted images and continually monitoring your containers helps protect you.

Whenever you build a node:lts-buster-slim Docker image, Docker Desktop prompts you to run security scans of the image to detect any known vulnerabilities.

Let’s use the Snyk Extension for Docker Desktop to inspect our Node.js application. To begin, install Docker Desktop 4.8.0+ on your Mac, Windows, or Linux machine. Next, check the box within Settings > Extensions to Enable Docker Extensions.

You can then browse the Extensions Marketplace by clicking the “Add Extensions” button in the left sidebar, then searching for Snyk.

Snyk Extensions Marketplace

Snyk’s extension lets you rapidly scan both local and remote Docker images to detect vulnerabilities.

Snyk Install

Install the Snyk extension and enter the node:lts-buster-slim Node Docker Official Image into the “Select image name” field. You’ll have to log into Docker Hub to start scanning. Don’t worry if you don’t have an account — it’s free and takes just a minute to create.

When running a scan, you’ll see this result within Docker Desktop:

Snyk Image Scan

Snyk uncovered 70 vulnerabilities of varying severity during this scan. Once you’re aware of these, you can begin remediation to fortify your image.

That’s not all. In order to perform a vulnerability check, you can use the docker scan command directly against your Dockerfile:

docker scan -f Dockerfile node:lts-buster-slim

4) Leverage HEALTHCHECK

The HEALTHCHECK instruction tells Docker how to test a container and confirm that it’s still working. For example, this can detect when a web server is stuck in an infinite loop and cannot handle new connections — even though the server process is still running.

When an application reaches production, an orchestrator like Kubernetes or a service fabric will most likely manage it. By using HEALTHCHECK, you’re sharing the status of your containers with the orchestrator to enable configuration-based management tasks. Here’s an example:

# syntax=docker/dockerfile:1.4

FROM node:lts-buster-slim AS development

# Create app directory
WORKDIR /usr/src/app

COPY package.json ./package.json
COPY package-lock.json ./package-lock.json
RUN npm ci

COPY . .

EXPOSE 3000

CMD [ "npm", "run", "dev" ]

FROM development as dev-envs
RUN <<EOF
apt-get update
apt-get install -y --no-install-recommends git
EOF

RUN <<EOF
useradd -s /bin/bash -m vscode
groupadd docker
usermod -aG docker vscode
EOF

HEALTHCHECK CMD curl --fail http://localhost:3000 || exit 1  


# install Docker tools (cli, buildx, compose)
COPY --from=gloursdocker/docker / /
CMD [ "npm", "run", "dev" ]

When HEALTHCHECK is present in a Dockerfile, you’ll see the container’s health in the STATUS column after running the docker ps command. A container that passes this check is healthy. The CLI will label unhealthy containers as unhealthy:

docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED          STATUS                             PORTS                    NAMES
1d0c5e3e7d6a   react-express-mongodb-frontend   "docker-entrypoint.s…"   23 seconds ago   Up 21 seconds (health: starting)   0.0.0.0:3000->3000/tcp   frontend
a89721d3c42d   react-express-mongodb-backend    "docker-entrypoint.s…"   23 seconds ago   Up 21 seconds (health: starting)   3000/tcp                 backend
194c953f5653   mongo:4.2.0                      "docker-entrypoint.s…"   3 minutes ago    Up 3 minutes                       27017/tcp                mongo

You can also define a healthcheck (note the case difference) within Docker Compose! This can be pretty useful when you’re not using a Dockerfile. Instead of writing a plain text instruction, you’ll write this configuration in YAML format. 

Here’s a sample configuration that lets you define healthcheck within your docker-compose.yml file:

backend:
    container_name: backend
    restart: always
    build: backend
    volumes:
      - ./backend:/usr/src/app
      - /usr/src/app/node_modules
    depends_on:
      - mongo
    networks:
      - express-mongo
      - react-express
    expose:
      - 3000
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000"]
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s

5) Use .dockerignore

To increase build performance, we recommend creating a .dockerignore file in the same directory as your Dockerfile. For this tutorial, your .dockerignore file should contain just one line:

node_modules

This line excludes the node_modules directory — which contains output from npm — from the Docker build context. There are many good reasons to carefully structure a .dockerignore file, but this simple file is good enough for now.

Let’s now explain the build context and why it’s essential. The docker build command builds Docker images from a Dockerfile and a “context.” This context is the set of files located in your specified PATH or URL. The build process can reference any of these files. 

Meanwhile, the build context is where the developer works. It could be a directory on Mac, Windows, or Linux. This directory contains all necessary application components like source code, configuration files, libraries, and plugins. With a .dockerignore file, we can specify which of these elements (source code, configuration files, libraries, plugins, and so on) to exclude while building your new image. 

Here’s how your .dockerignore file might look if you choose to exclude the node_modules directory from your build:

Backend:

Dockerignore Backend

Frontend:

Dockerignore Frontend
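
A slightly fuller .dockerignore for a Node service might look like the following. Entries beyond node_modules are common additions, not requirements of this sample project:

node_modules
npm-debug.log
.git
.env
Dockerfile
docker-compose*.yml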

6) Run as a non-root user for security purposes

Running applications with non-root privileges is safer since it helps mitigate risks. The same applies to Docker containers. By default, Docker containers and their running apps have root privileges. It’s therefore best to run Docker containers as non-root users. 

You can do this by adding USER instructions within your Dockerfile. The USER instruction sets the preferred user name (or UID) and optionally the user group (or GID) while running the image — and for any subsequent RUN, CMD, or ENTRYPOINT instructions:

# syntax=docker/dockerfile:1.4
FROM node:lts-buster AS development

WORKDIR /usr/src/app

COPY package.json ./package.json
COPY package-lock.json ./package-lock.json

RUN npm ci

COPY . .

EXPOSE 3000


CMD ["npm", "start"]

FROM development as dev-envs


RUN <<EOF
   apt-get update
   apt-get install -y --no-install-recommends git
EOF

RUN <<EOF
   useradd -s /bin/bash -m vscode
   groupadd docker
   usermod -aG docker vscode
EOF

USER vscode

# install Docker tools (cli, buildx, compose)
COPY --from=gloursdocker/docker / /
CMD [ "npm", "start" ]

7) Favor multi-architecture Docker images

Your CPU can only run binaries for its native architecture. For example, Docker images built for an x86 system can’t run on an Arm-based system. With Apple fully transitioning to their custom Arm-based silicon, it’s possible that your x86 (Intel or AMD) container image won’t work with Apple’s M-series chips. 

Consequently, we always recommend building multi-arch container images. Below, we use the mplatform/mquery Docker image, which lets you query the multi-platform status of any public image in any public registry:

docker run --rm mplatform/mquery node:lts-buster
Unable to find image 'mplatform/mquery:latest' locally
d0989420b6f0: Download complete
af74e063fc6e: Download complete
3441ed415baf: Download complete
a0c6ee298a93: Download complete
894bcacb16df: Downloading [=============================================>     ]  3.146MB/3.452MB
Image: node:lts-buster (digest: sha256:a5d9200d3b8c17f0f3d7717034a9c215015b7aae70cb2a9d5e5dae7ff8aa6ca8)
 * Manifest List: Yes (Image type: application/vnd.docker.distribution.manifest.list.v2+json)
 * Supported platforms:
   - linux/amd64
   - linux/arm/v7
   - linux/arm64/v8

We introduced the docker buildx command to help you build multi-architecture images. Buildx is a Docker component that enables many powerful build features with a familiar Docker user experience. All Buildx builds run using the Moby BuildKit engine.

BuildKit is designed to excel at multi-platform builds, or those not just targeting the user’s local platform. When you invoke a build, you can set the --platform flag to specify the build output’s target platform (like linux/amd64, linux/arm/v7, linux/arm64/v8, etc.):

docker buildx build --platform linux/amd64,linux/arm/v7 -t node-docker .
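
Note that multi-platform builds typically require a builder instance backed by the docker-container driver. If your default builder doesn’t support them, create and select one first:

docker buildx create --name mybuilder --use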

8) Explore graceful shutdown options for Node

Docker containers are ephemeral in nature. They can be stopped and destroyed, then either rebuilt or replaced with minimal effort. Docker terminates containers by sending a SIGTERM signal to the process, followed by a SIGKILL after a grace period. This grace period requires you to ensure that your app is handling ongoing requests and cleaning up resources in a timely fashion. 

On the other hand, Node.js accepts and forwards signals like SIGINT and SIGTERM from the OS, which is key to properly shutting down your app. Node.js lets your app decide how to handle those signals. If you don’t write code or use a module to handle them, your app won’t shut down gracefully. It’ll ignore those signals until Docker or Kubernetes kills it after a timeout period. 

Using certain init options like docker run --init or tini within your Dockerfile is viable when you can’t change your app code. However, we recommend writing code to handle proper signal handling for graceful shutdowns.
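
As a minimal sketch of that approach, assuming server is the http.Server returned by http.createServer().listen() or Express’s app.listen():

process.on('SIGTERM', () => {
  console.log('SIGTERM received, draining connections...');
  // Stop accepting new connections; the callback fires once
  // in-flight requests have finished.
  server.close(() => {
    // Close database connections and other resources here.
    process.exit(0);
  });
});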

Check out this video from Docker Captain Bret Fisher (12:57) where he covers all three available Node shutdown options in detail.

9) Use the OpenTelemetry API to measure Node.js performance

How do Node developers make their apps faster and more performant? Generally, developers rely on third-party observability tools to measure application performance. This performance monitoring is essential for creating multi-functional Node applications with top-notch user experiences.

Observability extends beyond application performance. Metrics, traces, and logs are now front and center. Metrics help developers to understand what’s wrong with the system, while traces help you discover how it’s wrong. Logs tell you why it’s wrong. Developers can dig into particular metrics and traces to holistically understand system behavior.

Observing Node applications means tracking your Node metrics, request rates, request error rate, and request durations. OpenTelemetry is one popular collection of tools and APIs that help you instrument your Node.js application.
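
As a hedged starting point, the standard OpenTelemetry Node packages can auto-instrument common libraries. Exporter configuration is omitted here because it depends on your chosen backend:

// tracing.js: a minimal sketch using the OpenTelemetry Node SDK
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');

const sdk = new NodeSDK({
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

// Load it before your app starts:
// node --require ./tracing.js server.js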

You can also use an open-source tool like SigNoz to analyze your app’s performance. Since SigNoz offers a full-stack observability tool, you don’t need to rely on multiple tools.

Conclusion

In this guide, we explored many ways to optimize your Docker images — from carefully crafting your Dockerfile to securing your image via Snyk scanning. Building better Node.js apps doesn’t have to be complex. By nailing some core fundamentals, you’ll be in great shape. 

If you’d like to dig deeper, check out these additional recommendations and best practices for building secure, production-grade Docker images:

Containerizing a Slack Clone App Built with the MERN Stack
https://www.docker.com/blog/containerizing-a-slack-clone-app-built-with-the-mern-stack/ (Tue, 13 Sep 2022)

The MERN Stack is a fast-growing, open source JavaScript stack that’s gained huge momentum among today’s web developers. MERN is a diverse collection of robust technologies (namely, Mongo, Express, React, and Node) for developing scalable web applications — supported by frontend, backend, and database components. Node, Express, and React even ranked highly among most-popular frameworks or technologies in Stack Overflow’s 2022 Developer Survey.

Stack Overflow Results

How does the MERN Stack work?

MERN has four components:

  1. MongoDB – a NoSQL database
  2. ExpressJS – a backend web-application framework for NodeJS
  3. ReactJS – a JavaScript library for developing UIs from UI components. 
  4. NodeJS – a JavaScript runtime environment that enables running JavaScript code outside the browser, among other things
User MERN Flow Chart

Here’s how those pieces interact within a typical application:

  • A user interacts with the frontend, via the web browser, which is built with ReactJS UI components.
  • The backend server delivers frontend content, via ExpressJS running atop NodeJS.
  • Data is fetched from the MongoDB database before it returns to the frontend. Here, your application displays it for the user.
  • Any interaction that causes a data-change request is sent to the Node-based Express server.

Why is the MERN stack so popular?

MERN Tech Visual

The MERN stack is popular for the following reasons:

  • Easy learning curve – If you’re familiar with JavaScript and JSON, then it’s easy to get started. MERN’s structure lets you easily build a three-tier architecture (frontend, backend, database) with just JavaScript and JSON.
  • Reduced context switching – Since MERN uses JavaScript for both frontend and backend development, developers don’t need to worry about switching languages. This boosts development efficiency.
  • Open source and active community support – The MERN stack is purely open source, so any developer can build robust web applications. Its frameworks improve coding efficiency and promote faster app development.
  • Model-view architecture – MERN supports the model-view-controller (MVC) architecture, enabling a smooth and seamless development process.

Running the Slack Clone app

Key Components

Deploying a Slack Clone app is a fast process. You’ll clone the repository, set up the client and backend, then bring up the application. Complete the following steps:

git clone https://github.com/dockersamples/slack-clone-docker
cd slack-clone-docker
yarn install
yarn start

You can then access Slack Clone App at http://localhost:3000 in your browser:

Slack Clone Login 1
Slack Clone UI

Why containerize the MERN stack?

The MERN stack gives developers the flexibility to build pages on their server as needed. However, developers can encounter issues as their projects grow. Challenges with compatibility, third-party integrations, and steep learning curves are common for non-JavaScript developers.

First, for the MERN stack to work, developers must run a Node version that’s compatible with each additional stack component. Second, React extensively uses third-party libraries that might lower developer productivity due to integration hurdles and unfamiliarity. React is merely a library and might not help prevent common coding errors during development. Completing a large project with many developers becomes difficult with MERN. 

How can you make things easier? Docker simplifies and accelerates your workflows by letting you freely innovate with your choice of tools, application stacks, and deployment environments for each project. You can set up a MERN stack with a single Docker Compose file. This lets you quickly create microservices. This guide will help you completely containerize your Slack clone app.

Containerizing your Slack clone app

Docker helps you containerize your MERN Stack — letting you bundle together your complete Slack clone application, runtime, configuration, and OS-level dependencies. This includes everything needed to ship a cross-platform, multi-architecture web application. 

We’ll explore how to run this app within a Docker container using Docker Official Images. First, you’ll need to download Docker Desktop and complete the installation process. This includes the Docker CLI, Docker Compose, and a user-friendly management UI. These components will each be useful later on.

Docker uses a Dockerfile to create each image’s layers. Each layer stores important changes stemming from your base image’s standard configuration. Let’s create an empty Dockerfile in the root of our project repository.

Containerizing your React frontend

We’ll build a Dockerfile to containerize our React.js frontend and Node.js backend.

A Dockerfile is a plain-text file that contains instructions for assembling a Docker container image. When Docker builds our image via the docker build command, it reads these instructions, executes them, and creates a final image.

Let’s walk through the process of creating a Dockerfile for our application. First create the following empty file with the name Dockerfile.reactUI in the root of your React app:

touch Dockerfile.reactUI


You’ll then need to define your base image in the Dockerfile.reactUI file. Here, we’ve chosen the stable LTS version of the Node Docker Official Image. This comes with every tool and package needed to run a Node.js application:

FROM node:16


Next, let’s quickly create a directory to house our image’s application code. This acts as the working directory for your application:

WORKDIR /app


The following COPY instructions copy the package.json file and public directory from the host machine to the container image. The COPY command takes two parameters. The first tells Docker what file(s) you’d like to copy into the image. The second tells Docker where you want those files to be copied. We’ll copy everything into our working directory called /app:

COPY ./package.json ./package.json
COPY ./public ./public


Next, we need to add our source code into the image. We’ll use the COPY command just like we previously did with our package.json file:

COPY ./src ./src


Then, use yarn install to install the package:

RUN yarn install


The EXPOSE instruction tells Docker which port the container listens on at runtime. You can specify whether the port listens on TCP or UDP. The default is TCP if the protocol isn’t specified:

EXPOSE 3000


Finally, we’ll start a project by using the yarn start command:

CMD ["yarn","start"]


Here’s our complete Dockerfile.reactUI file:

FROM node:16
WORKDIR /app
COPY ./package.json ./package.json
COPY ./public ./public
COPY ./src ./src
RUN yarn install
EXPOSE 3000
CMD ["yarn","start"]


Now, let’s build our image. We’ll run the docker build command as above, but with the -f Dockerfile.reactUI flag. The -f flag specifies your Dockerfile name. The “.” sets the build context to the current directory. The -t flag tags the resulting image:

docker build . -f Dockerfile.reactUI -t slackclone-fe:1

Containerizing your Node.js backend

Let’s walk through the process of creating a Dockerfile for our backend as the next step. First, create a file named Dockerfile.node in the root of your backend Node app (i.e., the server/ directory). Here’s your complete Dockerfile.node:

FROM node:16
WORKDIR /app
COPY ./package.json ./package.json
COPY ./server.js ./server.js
COPY ./messageModel.js ./messageModel.js 
COPY ./roomModel.js ./roomModel.js
COPY ./userModel.js ./userModel.js
RUN yarn install 
EXPOSE 9000
CMD ["node", "server.js"]


Now, let’s build our image. We’ll run the following docker build command:

docker build . -f Dockerfile.node  -t slackclone-be:1

Defining services using a Compose file

Here’s how our services appear within a Docker Compose file:

services:
  slackfrontend:
    build: 
      context: .
      dockerfile: Dockerfile.reactUI
    ports:
      - "3000:3000"    
    depends_on:
      - db
  nodebackend: 
    build: 
      context: ./server
      dockerfile: Dockerfile.node
    ports: 
      - "9000:9000"    
    depends_on:
      - db
  db:
    volumes:
      - slack_db:/data/db
    image: mongo:latest
    ports:
      - "27017:27017"  
volumes:
   slack_db:


Your sample application has the following parts:

  • Three services backed by Docker images: your React.js frontend, Node.js backend, and Mongo database
  • A frontend accessible via port 3000
  • The depends_on parameter, which ensures the db service starts before the frontend and backend services
  • One persistent named volume called slack_db, which is attached to the database service and ensures the Mongo data is persisted across container restarts

You can clone the repository or download the docker-compose.yml file directly from here.

Bringing up the container services

You can start the MERN application stack by running the following command:

docker compose up -d --build


Then, use the docker compose ps command to confirm that your stack is running properly. Your terminal will produce the following output:

docker compose ps
               Name                             Command               State            Ports
-----------------------------------------------------------------------------
slack-clone-docker_db_1              docker-entrypoint.sh mongod      Up      0.0.0.0:27017->27017/tcp
slack-clone-docker_nodebackend_1     docker-entrypoint.sh node  ...   Up      0.0.0.0:9000->9000/tcp  
slack-clone-docker_slackfrontend_1   docker-entrypoint.sh yarn  ...   Up      0.0.0.0:3000->3000/tcp

Viewing the containers via Docker Dashboard

You can also leverage the Docker Dashboard to view your container’s ID and easily access or manage your application:

Docker Desktop Container UI

Viewing the Messages

You can download and use MongoDB Compass — an intuitive GUI for querying, optimizing, and analyzing your MongoDB data. This tool provides detailed schema visualization, real-time performance metrics, and sophisticated query abilities. It lets you view key insights, drag and drop to build pipelines, and more.

Mongo Compass

Conclusion

Congratulations! You’ve successfully learned how to containerize a MERN-backed Slack application with Docker. With a single YAML file, we’ve demonstrated how Docker Compose helps you easily build and deploy your MERN stack in seconds. With just a few extra steps, you can apply this tutorial while building applications with even greater complexity. Happy developing. 

How to Set Up Your Local Node.js Development Environment Using Docker
https://www.docker.com/blog/how-to-setup-your-local-node-js-development-environment-using-docker/ (Tue, 30 Aug 2022)

Docker is the de facto toolset for building modern applications and setting up a CI/CD pipeline – helping you build, ship, and run your applications in containers on-prem and in the cloud. 

Whether you’re running on simple compute instances such as AWS EC2 or something fancier like a hosted Kubernetes service, Docker’s toolset is your new BFF. 

But what about your local Node.js development environment? Setting up local dev environments while also juggling the hurdles of onboarding can be frustrating, to say the least.

While there’s no silver bullet, with the help of Docker and its toolset, we can make things a whole lot easier.

How to set up a local Node.js dev environment — Part 1

Docker's Moby Dock whale pointing to whiteboard with Node.js logo.

In this tutorial, we’ll walk through setting up a local Node.js development environment for a relatively complex application that uses React for its front end, Node and Express for a couple of microservices, and MongoDB for our datastore. We’ll use Docker to build our images and Docker Compose to make everything a whole lot easier.

If you have any questions, comments, or just want to connect, you can reach me in our Community Slack or on Twitter at @rumpl.

Let’s get started.

Prerequisites

To complete this tutorial, you will need:

  • Docker installed on your development machine. You can download and install Docker Desktop.
  • Sign up for a Docker ID through Docker Hub.
  • Git installed on your development machine.
  • An IDE or text editor to use for editing files. I would recommend VSCode.

Step 1: Fork the Code Repository

The first thing we want to do is download the code to our local development machine. Let’s do this using the following git command:

git clone https://github.com/rumpl/memphis.git

Now that we have the code local, let’s take a look at the project structure. Open the code in your favorite IDE and expand the root level directories. You’ll see the following file structure.

├── docker-compose.yml
├── notes-service
├── reading-list-service
├── users-service
└── yoda-ui

The application is made up of a couple of simple microservices and a front-end written in React.js. It also uses MongoDB as its datastore.

Typically at this point, we would start a local version of MongoDB or look through the project to find where our applications will be looking for MongoDB. Then, we would start each of our microservices independently and start the UI in hopes that the default configuration works.

However, this can be very complicated and frustrating. Especially if our microservices are using different versions of Node.js and configured differently.

Instead, let’s walk through making this process easier by dockerizing our application and putting our database into a container. 

Step 2: Dockerize your applications

Docker is a great way to provide consistent development environments. It will allow us to run each of our services and UI in a container. We’ll also set up things so that we can develop locally and start our dependencies with one docker command.

The first thing we want to do is dockerize each of our applications. Let’s start with the microservices because they are all written in Node.js, and we’ll be able to use the same Dockerfile.

Creating Dockerfiles

Create a Dockerfile in the notes-service directory and add the following commands.

Dockerfile in the notes-service directory using Node.js.
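
The original post shows that Dockerfile only as a screenshot, so here’s a hedged reconstruction. The base image tag and start script are assumptions, with the exposed port matching the 8081 mapping used below:

FROM node:18
WORKDIR /code
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
EXPOSE 8081
CMD ["npm", "run", "start"]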

This is a very basic Dockerfile to use with Node.js. If you are not familiar with the commands, you can start with our getting started guide. Also, take a look at our reference documentation.

Building Docker Images

Now that we’ve created our Dockerfile, let’s build our image. Make sure you’re still located in the notes-service directory and run the following command:

cd notes-service
docker build -t notes-service .
Docker build terminal output located in the notes-service directory.

Now that we have our image built, let’s run it as a container and test that it’s working.

docker run --rm -p 8081:8081 --name notes notes-service

Docker run terminal output located in the notes-service directory.

From this error, we can see we’re having trouble connecting to MongoDB. Two things are broken at this point:

  1. We didn’t provide a connection string to the application.
  2. We don’t have MongoDB running locally.

To resolve this, we could provide a connection string to a shared instance of our database, but we want to be able to manage our database locally without worrying about messing up the data our colleagues might be using to develop. 

Step 3: Run MongoDB in a localized container

Instead of downloading MongoDB, installing, configuring, and then running the Mongo database service, we can use the Docker Official Image for MongoDB and run it in a container.

Before we run MongoDB in a container, we want to create a couple of volumes that Docker can manage to store our persistent data and configuration. I like to use the managed volumes that Docker provides instead of using bind mounts. You can read all about volumes in our documentation.

Creating volumes for Docker

To create our volumes, we’ll create one for the data and one for the configuration of MongoDB.

docker volume create mongodb

docker volume create mongodb_config

Creating a user-defined bridge network

Now we’ll create a network that our application and database will use to talk with each other. The network is called a user-defined bridge network and gives us a nice DNS lookup service that we can use when creating our connection string.

docker network create mongodb

Now, we can run MongoDB in a container and attach it to the volumes and network we created above. Docker will pull the image from Hub and run it for you locally.

docker run -it --rm -d -v mongodb:/data/db -v mongodb_config:/data/configdb -p 27017:27017 --network mongodb --name mongodb mongo

Step 4: Set your environment variables

Now that we have a running MongoDB, we also need to set a couple of environment variables so our application knows what port to listen on and what connection string to use to access the database. We’ll do this right in the docker run command.

docker run \
-it --rm -d \
--network mongodb \
--name notes \
-p 8081:8081 \
-e SERVER_PORT=8081 \
-e DATABASE_CONNECTIONSTRING=mongodb://mongodb:27017/yoda_notes \
notes-service

Step 5: Test your database connection

Let’s test that our application is connected to the database and is able to add a note.

curl --request POST \
--url http://localhost:8081/services/m/notes \
  --header 'content-type: application/json' \
  --data '{
"name": "this is a note",
"text": "this is a note that I wanted to take while I was working on writing a blog post.",
"owner": "peter"
}'

You should receive the following JSON back from our service.

{"code":"success","payload":{"_id":"5efd0a1552cd422b59d4f994","name":"this is a note","text":"this is a note that I wanted to take while I was working on writing a blog post.","owner":"peter","createDate":"2020-07-01T22:11:33.256Z"}}

Once we are done testing, run docker stop notes mongodb to stop the containers.

Awesome! We’ve completed the first steps in Dockerizing our local development environment for Node.js. In Part II, we’ll take a look at how we can use Docker Compose to simplify the process we just went through.

How to set up a local Node.js dev environment — Part 2

In Part I, we took a look at creating Docker images and running containers for Node.js applications. We also took a look at setting up a database in a container and how volumes and networks play a part in setting up your local development environment.

In Part II, we’ll take a look at creating and running a development image where we can compile, add modules, and debug our application all inside of a container. This helps speed up the developer setup time when moving to a new application or project. In this case, our image should have Node.js installed as well as npm or yarn. 

We’ll also take a quick look at using Docker Compose to help streamline the processes of setting up and running a full microservices application locally on your development machine.

Let’s create a development image we can use to run our Node.js application.

Step 1: Develop your Dockerfile

Create a local directory on your development machine that we can use as a working directory to save our Dockerfile and any other files that we’ll need for our development image.

$ mkdir -p ~/projects/dev-image

Create a Dockerfile in this folder and add the following commands.

FROM node:18.7.0
RUN apt-get update && apt-get install -y \
  nano \
  vim

We start off by using the node:18.7.0 official image. I’ve found that this image is fine for creating a development image. I like to add a couple of text editors to the image in case I want to quickly edit a file while inside the container.

We did not add an ENTRYPOINT or CMD to the Dockerfile because we will rely on the base image’s ENTRYPOINT, and we will override the CMD when we start the image.

Step 2: Build your Docker image

Let’s build our image.

$ docker build -t node-dev-image .

And now we can run it.

$ docker run -it --rm --name dev -v $(pwd):/code node-dev-image bash

You will be presented with a bash command prompt. Now, inside the container, we can create a JavaScript file and run it with Node.js.

Step 3: Test your image

Run the following commands to test our image.

$ node -e 'console.log("hello from inside our container")'
hello from inside our container

If all goes well, we have a working development image. We can now do everything that we would do in our normal bash terminal.

If you run the above Docker command inside of the notes-service directory, then you will have access to the code inside of the container. You can start the notes-service by simply navigating to the /code directory and running npm run start.

Step 4: Use Compose to Develop locally

The notes-service project uses MongoDB as its data store. If you remember from Part I, we had to start the Mongo container manually and connect it to the same network as our notes-service. We also had to create a couple of volumes so we could persist our data across restarts of our application and MongoDB.

Instead, we’ll create a Compose file to start our notes-service and the MongoDb with one command. We’ll also set up the Compose file to start the notes-service in debug mode. This way, we can connect a debugger to the running node process.

Open the notes-service in your favorite IDE or text editor and create a new file named docker-compose.dev.yml. Copy and paste the below commands into the file.

services:
 notes:
   build:
     context: .
   ports:
     - 8080:8080
     - 9229:9229
   environment:
     - SERVER_PORT=8080
     - DATABASE_CONNECTIONSTRING=mongodb://mongo:27017/notes
   volumes:
     - ./:/code
   command: npm run debug
 
 mongo:
   image: mongo:4.2.8
   ports:
     - 27017:27017
   volumes:
     - mongodb:/data/db
     - mongodb_config:/data/configdb
volumes:
  mongodb:
  mongodb_config:


This compose file is super convenient because now we don’t have to type all the parameters to pass to the `docker run` command. We can declaratively do that in the compose file.

We are exposing port 9229 so that we can attach a debugger. We are also mapping our local source code into the running container so that we can make changes in our text editor and have those changes picked up in the container.

One other really cool feature of using the compose file is that we have service resolution set up to use the service names. As a result, we are now able to use “mongo” in our connection string. We use “mongo” because that is what we have named our mongo service in the compose file.
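
For reference, the npm run debug command used in the Compose file typically maps to package.json scripts along these lines. These exact scripts are hypothetical (check the repository for the real ones), though the log output below confirms nodemon is in use:

"scripts": {
  "start": "node server.js",
  "debug": "nodemon --inspect=0.0.0.0:9229 server.js"
}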

Let’s start our application and confirm that it is running properly.

$ docker compose -f docker-compose.dev.yml up --build

We pass the --build flag so Docker will compile our image and then start it.

If all goes well, you should see the logs from the notes and mongo services:

Docker compose terminal output showing logs from the notes and mongo services.

Now let’s test our API endpoint. Run the following curl command:

$ curl --request GET --url http://localhost:8080/services/m/notes

You should receive the following response:

{"code":"success","meta":{"total":0,"count":0},"payload":[]}

Step 5: Connect to a Debugger

We’ll use the debugger that comes with the Chrome browser. Open Chrome on your machine, and then type the following into the address bar.

about:inspect

The following screen will open.

The DevTools function on the Chrome browser, showing a list of devices and remote targets.

Click the “Open dedicated DevTools for Node” link. This will open the DevTools that are connected to the running Node.js process inside our container.

Let’s change the source code and then set a breakpoint. 

Add the following code to the server.js file on line 19 and save the file.

server.use( '/foo', (req, res) => {
   return res.json({ "foo": "bar" })
 })

If you take a look at the terminal where our compose application is running, you’ll see that nodemon noticed the changes and reloaded our application.

Docker compose terminal output showcasing the nodemon-reloaded application.

Navigate back to the Chrome DevTools and set a breakpoint on line 20. Then, run the following curl command to trigger the breakpoint.

$ curl --request GET --url http://localhost:8080/foo

💥 BOOM 💥 You should have seen the code break on line 20, and now you are able to use the debugger just like you would normally. You can inspect and watch variables, set conditional breakpoints, view stack traces, and a bunch of other stuff.

Conclusion

In this article, we completed the first steps in Dockerizing our local development environment for Node.js. Then, we took things a step further and created a general development image that can be used like our normal command line. We also set up our compose file to map our source code into the running container and exposed the debugging port.

Resources to Use Javascript, Python, Java, and Go with Docker
https://www.docker.com/blog/resources-to-use-javascript-python-java-and-go-with-docker/ (Fri, 08 Jul 2022)

With so many programming and scripting languages out there, developers can tackle development projects any number of ways. However, some languages — like JavaScript, Python, and Java — have been perennial favorites. (We’ve previously touched on this while unpacking Stack Overflow’s 2022 Developer Survey results.)

Programming Language Syntax

Image courtesy of Joan Gamell, via Unsplash

Many developers use Docker in tandem with these languages. We’ve seen our users create some amazing applications! Here are some resources and recommendations to level up your container game with these languages.

Getting Started with Docker

If you’ve never used Docker, you may want to familiarize yourself with some basic concepts first. You can learn the technical fundamentals of Docker and containerization via our “Orientation and Setup” guide and our introductory page. You’ll learn how containers work, and even how to harness tools like the Docker CLI or Docker Desktop.

Our Orientation page also serves as a foundation for many of our own official walkthroughs. This is a great resource if you’re completely new to Docker!

If you prefer hands-on learning, look no further than Shy Ruparel’s “Getting Started with Docker” video guide. Shy will introduce you to Docker’s architecture, essential CLI commands, Docker Desktop tips, and sample applications.

If you’re feeling comfortable with Docker, feel free to jump to your language-specific section using the links below. We’ve created language-specific workflows for each top language within our documentation (AKA “Our Language Modules” in this blog). These steps are linked below alongside some extra exploratory resources. We’ll also include some awesome-compose code samples to accelerate similar development projects — or to serve as inspiration.

How to Use Docker with JavaScript

JavaScript has been the programming world’s leading language for 10 years running. Luckily, there are also many ways to use JavaScript and Docker together. Check out these resources to harness JavaScript, Node.js, and other runtimes or frameworks with Docker.

Docker Node.js Modules

Before exploring further, it’s worth completing our learning modules for Node. These take you through the basics and set you up for increasingly-complex projects later on. We recommend completing these in order:

  1. Overview for Node.js (covering learning objectives and containerization of your Node application)
  2. Build your Node image
  3. Run your image as a container
  4. Use containers for development
  5. Run your tests using Node.js and Mocha frameworks
  6. Configure CI/CD for your application
  7. Deploy your app

It’s also possible that you’ll want to explore more processes for building minimum viable products (MVPs) or pulling container images. You can read more by visiting the following links.

Other Essential Node Resources

How to Use Docker with Python

Python has consistently been one of our developer community’s favorite languages. From building simple sample apps to leveraging machine learning frameworks, the language supports a variety of workloads. You can learn more about the dynamic duo of Python and Docker via these links.

Docker Python Modules

Similar to Node.js, these pages from our documentation are a great starting point for harnessing Python and Docker:

  1. Overview for Python
  2. Build your Python image
  3. Run your image as a container
  4. Use containers for development (featuring Python and MySQL)
  5. Configure CI/CD for your application
  6. Deploy your app

Other Essential Python Resources

How to Use Docker with Java

Both its maturity and the popularity of Spring Boot have contributed to Java’s growth over the years. It’s easy to pair Java with Docker! Here are some resources to help you do it.

Docker Java Modules

Like with Python, these modules can help you hit the ground running with Java and Docker:

  1. Overview for Java
  2. Build your Java image
  3. Run your image as a container
  4. Use containers for development
  5. Run your tests
  6. Configure CI/CD for your application
  7. Deploy your app

Other Essential Java Resources

How to Use Docker with Go

Last, but not least, Go has become a popular language for Docker users. According to Stack Overflow’s 2022 Developer Survey, over 10,000 JavaScript users (of roughly 46,000) want to start or continue developing in Go or Rust. It’s often positioned as an alternative to C++, yet many Go users originally transition over from Python and Ruby.

There’s tremendous overlap there. Go’s ecosystem is growing, and it’s become increasingly useful for scaling workloads. Check out these links to jumpstart your Go and Docker development.

Docker Go Modules

  1. Overview for Go
  2. Build your Go image
  3. Run your image as a container
  4. Use containers for development
  5. Run your tests using Go test
  6. Configure CI/CD for your application
  7. Deploy your app

Other Essential Go Resources

Build in the Language You Want with Docker

Docker supports all of today’s leading languages. It’s easy to containerize your application and deploy cross-platform without having to make concessions. You can bring your workflows, your workloads, and, ultimately, your users along.

And that’s just the tip of the iceberg. We welcome developers who develop in other languages like Rust, TypeScript, C#, and many more. Docker images make it easy to create these applications from scratch.

We hope these resources have helped you discover and explore how Docker works with your preferred language. Visit our language-specific guides page to learn key best practices and image management tips for using these languages with Docker Desktop.

JumpStart Your Node.js Development https://www.docker.com/blog/jumpstart-your-node-js-development/ Fri, 06 May 2022 19:45:30 +0000 https://www.docker.com/?p=33369 With over 87,000 stars and 3,100 contributors, Node.js has become a leading choice for enterprise developers in 2022. It’s an open source, cross-platform runtime environment that helps developers build varied server-side tools and applications in JavaScript. 

Developers use Node.js to build fast, scalable, and real-time apps — thanks to its highly-robust event-driven runtime. It’s also asynchronous. Node.js can handle a huge number of concurrent connections, with high throughput, as traffic spikes. Accordingly, it’s ideal for building microservices architectures. 

[Diagram: the Node.js event queue]

Users have downloaded our Node.js Docker Official Image more than 1 billion times from Docker Hub. What’s driving this significant download rate? There’s an ever-increasing demand for Docker containers to streamline development workflows, while giving Node.js developers the freedom to innovate with their choice of project-tailored tools, application stacks, and deployment environments. 

We’ll show you how to rapidly and easily containerize your Node.js app — seamlessly circumventing common Node compatibility issues while accelerating deployment. This lets your application easily run cross-platform on different CPU architectures. 

Building the Application       


This walkthrough will show you how to easily build a Node.js to-do list app with Docker. 

First, we’ll create our simple to-do list application in Node.js without using Docker. You’ll see how the application lets you create and delete task lists using the Redis backend database. 

Next, we’ll build a Docker image for that application. You’ll also learn how Docker Compose can help you rapidly deploy your application within containers. Let’s get started. 

Prerequisites

  • NPM – a node package manager used for Node.js app development 
  • Node.js – our runtime for building web applications
  • Express – a backend web-application framework for Node.js 
  • Bootstrap – a toolkit for responsive, front-end web development
  • Redis – an in-memory, key-value, NoSQL database used for caching, data storage, and message brokering
  • Docker Desktop – a suite of software-development tools for creating, sharing, and running individual containers

Getting Started

Once you’ve installed your Node.js packages on your machine, follow these steps to build a simple to-do list app from scratch.

Starting with Node.js

  1. Create an empty directory:

mkdir todolist

  2. Run the npm init command to set up a new npm package:

npm init

This utility walks you through creating a package.json file that describes your app and its dependencies.

 
{
  "name": "todolist",
  "version": "1.0.0",
  "description": "A Sample Todo-List app",
  "main": "todolist.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" &amp;&amp; exit 1"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/ajeetraina/todolist.git"
  },
  "keywords": [
    "node.js"
  ],
  "author": "Ajeet Singh Raina",
  "license": "MIT",
  "bugs": {
    "url": "https://github.com/ajeetraina/todolist/issues"
  },
  "homepage": "https://github.com/ajeetraina/todolist#readme"
}

With your new package.json in place, install the app’s dependencies:

npm install --save express redis ejs dotenv

Here’s your result:

+ ejs@3.1.6
+ redis@4.0.6
+ express@4.17.3
+ dotenv@16.0.0
added 79 packages from 106 contributors and audited 79 packages in 5.112s

4 packages are looking for funding
run `npm fund` for details

found 0 vulnerabilities

Next, open your package.json file to see the following entries:

{
  "name": "todolist",
  "version": "1.0.0",
  "description": "A Sample Todo-List app",
  "main": "todolist.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/ajeetraina/todolist.git"
  },
  "keywords": [
    "node.js"
  ],
  "author": "Ajeet Singh Raina",
  "license": "MIT",
  "bugs": {
    "url": "https://github.com/ajeetraina/todolist/issues"
  },
  "homepage": "https://github.com/ajeetraina/todolist#readme",
  "dependencies": {
    "dotenv": "^16.0.0",
    "ejs": "^3.1.6",
    "express": "^4.17.3",
    "redis": "^4.0.6"
  }
}

Installing nodemon

Nodemon is a handy CLI utility that’s primarily used during development rather than in production. It monitors your source code for changes and automatically restarts the app server to apply them. You can add nodemon to your dev dependencies if you want to run it via npm scripts, or you can install it globally. Nodemon is open source and available on GitHub.

The recommended way to install nodemon is through the npm utility.

npm install --save-dev nodemon

Here’s your result:

+ nodemon@2.0.15
added 106 packages from 55 contributors and audited 185 packages in 3.514s

18 packages are looking for funding
run `npm fund` for details

found 0 vulnerabilities

You should now be able to see nodemon added to your package.json file:

{
  "name": "todo-list",
  "version": "1.0.0",
  "description": "A Sample Todo List app",
  "main": "app.js",
  "scripts": {
    "start": "nodemon app.js"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/ajeetraina/todolist.git"
  },
  "keywords": [
    "nodejs",
    "express"
  ],
  "author": "Ajeet S Raina",
  "license": "MIT",
  "bugs": {
    "url": "https://github.com/ajeetraina/todolist/issues"
  },
  "homepage": "https://github.com/ajeetraina/todolist#readme",
  "dependencies": {
    "dotenv": "^16.0.0",
    "ejs": "^3.1.6",
    "express": "^4.17.3",
    "redis": "^4.0.6"
  },
  "devDependencies": {
    "nodemon": "^2.0.15"
  }
}

Defining the To-Do List Web App

First, create an app.js file; it will define a web app using the Express.js framework. Next, you’ll have to ensure that all essential components are in place before initiating the build process.

The first portion of your JavaScript code imports all essential packages into the project: Express, redis, body-parser, and path. You then create an instance of an Express application by initializing the express variable. Express is a minimal — but flexible — Node.js web-application framework with robust features for web and mobile applications. Express requires a middleware module called body-parser to extract incoming data from a POST request.

'use strict';

const express = require('express');
const redis = require('redis');
const path = require('path');
const port = 3000;
const bodyParser = require('body-parser');


var app = express();

const client = redis.createClient();

client.on('connect', () => {
  console.log('Successfully connected to Redis...');
});
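One caveat worth flagging: the redis@4 package installed earlier uses a promise-based API, while this walkthrough uses the older callback style. If the callback-based commands below error out for you, a minimal sketch for restoring them, using node-redis v4’s legacy mode, looks like this:

// node-redis v4 is promise-based by default; legacyMode restores the
// callback-style commands (LRANGE, RPUSH, and so on) used below.
const client = redis.createClient({ legacyMode: true });

// v4 also requires an explicit connection before issuing commands.
client.connect().catch(console.error);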

Defining the View Engine

Next, you’ll need the view engine, which is useful for rendering web pages. You’ll be using a popular view engine called Embedded JavaScript (EJS). This is essentially a simple templating language that lets developers generate HTML using plain JavaScript. The code below defines the path to the view engine, and bodyParser helps parse each incoming request body.

The last line of this code snippet lets you insert an image under the /public directory — so that index.ejs can grab it and display it on the frontend:

app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'ejs');

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({extended: false}));

app.use(express.static(path.join(__dirname, 'public')))

GET

This code snippet uses Express router to handle your app’s routing. Whenever a user requests the app from a web browser, it’ll serve a response from the Redis database.

app.get('/', (req, res) => {
  var title = 'A Simple Todo-List App';
  var counter = 0;
  client.LRANGE('todo', 0, -1, (err, reply) => {
    if(err){
      res.send(err);
      return;
    }
    res.render('index', {
      title: title,
      todo: reply,
      counter: counter
    });
  });
});

Quick Tips: Redis lists can be defined as lists of strings that are stored in order of insertion:

  • RPUSH: adds a new element to the right of the list. (i.e. inserts the element at the tail of the list)
  • LRANGE: retrieves a subset of list elements based on the provided “start” and “stop” offsets
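For example, here’s how these two commands behave in redis-cli (sample values for illustration):

127.0.0.1:6379> RPUSH todo "Buy milk"
(integer) 1
127.0.0.1:6379> RPUSH todo "Walk the dog"
(integer) 2
127.0.0.1:6379> LRANGE todo 0 -1
1) "Buy milk"
2) "Walk the dog"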

POST

Similarly, whenever you want to push data to the Redis database, you’d use the Redis RPUSH command as shown:

app.post('/todo/add', (req, res, next) => {
  var todo = req.body.todos;
  client.RPUSH('todo', todo, (err, reply) => {
    if(err){
      res.send(err);
      return;
    }
    res.redirect('/');
  });
});

Should you want to delete the messages in the Redis database by index, you’d use the LREM command. This Redis command removes the first count occurrences of elements equal to the given value from the list.

app.post('/todo/delete', (req, res, next) => {
  var delTODO = req.body.todo;
  var deleted = '__DELETED__';
  client.LRANGE('todo', 0, -1, (err, todo) => {
    for(let i = 0; i < delTODO.length; i++){
      client.LSET('todo', delTODO[i], deleted);
    }
    client.LREM('todo', 0, deleted);
    res.redirect('/');
  });
});

Meanwhile, this entry tells your app to start a server and listen for connections on port 3000:

app.listen(port, () => {
  console.log('Server listening at port 3000...');
});

module.exports = app;

Here’s what your completed code will look like:

'use strict';

const express = require('express');
const redis = require('redis');
const path = require('path');
const port = 3000;
const bodyParser = require('body-parser');


var app = express();

const client = redis.createClient();

client.on('connect', () => {
  console.log('Successfully connected to Redis...');
});

app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'ejs');

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({extended: false}));

app.use(express.static(path.join(__dirname, 'public')));

app.get('/', (req, res) => {
  var title = 'A Simple Todo App List';
  var counter = 0;
  client.LRANGE('todo', 0, -1, (err, reply) => {
    if(err){
      res.send(err);
      return;
    }
    res.render('index', {
      title: title,
      todo: reply,
      counter: counter
    });
  });
});

app.post('/todo/add', (req, res, next) => {
  var todo = req.body.todos;
  client.RPUSH('todo', todo, (err, reply) => {
    if(err){
      res.send(err);
      return;
    }
    res.redirect('/');
  });
});

app.post('/todo/delete', (req, res, next) => {
  var delTODO = req.body.todo;
  var deleted = '__DELETED__';
  client.LRANGE('todo', 0, -1, (err, todo) => {
    for(let i = 0; i < delTODO.length; i++){
      client.LSET('todo', delTODO[i], deleted);
    }
    client.LREM('todo', 0, deleted);
    res.redirect('/');
  });
});

app.listen(port, () => {
  console.log('Server listening at port 3000...');
});

module.exports = app;

Building a View Engine

The process of building a view engine is relatively simple. To do so, create an empty folder called “views” and add your content to a file named index.ejs:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <link rel="stylesheet" href="https://bootswatch.com/4/slate/bootstrap.min.css">
  <title>A Simple Todo List App</title>
</head>
<body>
  <div class="container">
    <h1 class="text-center"> <%= title %> </h1>
    <img src="/images/todo.jpeg" class="center"  width="300" alt="">
    <form action="/todo/add" method="POST">
      <div class="form-group">
        <input type="text" class="form-control" name="todos" placeholder="Start typing and Press Enter...">
      </div>
    </form>
    <form action="/todo/delete" method="POST">
    
      <% todo.forEach( (list) => { %>
        <div class="alert alert-success">
          <input type="checkbox" class="form-check-input mt-2" name="todo" value="<%= counter %>">
          <h4 class="d-inline">> </h4> <strong><%= list %></strong>
        </div>
        <% counter++ %>
      <% }) %>
        <input type="submit" value="Remove" class="btn btn-primary">
    </form>
    
  </div>
</body>
</html>

Next, let’s run the Redis server on your system. If you’re using macOS, we recommend using Homebrew as shown below:

brew install redis

brew services start redis
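If you’re not on macOS, or you’d simply rather stay in containers, running Redis via Docker is an equivalent alternative; this maps the default Redis port to your host:

docker run -d -p 6379:6379 --name redis redis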

Once this is finished, verify that you’re able to connect to the Redis server via redis-cli:

redis-cli
127.0.0.1:6379> info
..

Finally, simply start nodemon with the following command. Your result is displayed soon after:

nodemon .

[nodemon] 2.0.12
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node .`
Server listening at port 3000...
Successfully connected to Redis...

You’ll then want to confirm that your app is working properly. Open your browser and access http://localhost:3000 in the address bar. Here’s what you’ll see:

[Screenshot: the to-do list app running in the browser]

Once you start typing out your to-do list, you’ll see the following activity in Redis database:

1650438955.239370 [0 172.17.0.1:59706] "info"
1650438999.498418 [0 172.17.0.1:59708] "info"
1650439059.072301 [0 172.17.0.1:59708] "lrange" "todo" "0" "-1"
1650439086.492042 [0 172.17.0.1:59708] "rpush" "todo" "Watch netflix"
1650439086.500834 [0 172.17.0.1:59708] "lrange" "todo" "0" "-1"
1650439094.054506 [0 172.17.0.1:59708] "rpush" "todo" "Attend Meeting"
1650439094.059366 [0 172.17.0.1:59708] "lrange" "todo" "0" "-1"
1650439099.726832 [0 172.17.0.1:59708] "rpush" "todo" "Walk for 30 min"
1650439099.731735 [0 172.17.0.1:59708] "lrange" "todo" "0" "-1"

Building a Multi-Container Node.js app with Docker Desktop

Let’s assess how you can run this app inside a Docker container using the official Docker image. First, you’ll need to install Docker Desktop — which lets you build a Docker image for your app.


Next, create an empty file called “Dockerfile”:

touch Dockerfile

Use your favorite text editor to open the Dockerfile. You’ll then need to define your base image. 

Accordingly, ensure that you’re using the Long Term Support (LTS) version of Node.js, and the minimal alpine image type. This helps minimize your image’s footprint, and therefore its attack surface:

FROM node:lts-alpine

Quick Tips: We recommend using explicit, deterministic Docker base image tags. Smaller Docker images also offer quicker rebuilds. For example, if you use the node:latest tag, every build may pull a newly-built Docker Node image. This can introduce non-deterministic behavior, which hampers deployment consistency.
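For example, the difference in a Dockerfile is just the tag you choose (the version numbers here are illustrative):

# Deterministic: an explicit, pinned version and variant
FROM node:16.14.2-alpine

# Non-deterministic: may resolve to a different image on every build
# FROM node:latest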

Want to learn more about how Node.js and Docker can work together? Join us at DockerCon 2022 — where you’ll learn best practices for managing Node.js and JavaScript projects while developing, testing, and operating containers.

Next, let’s quickly create a directory to house our image’s application code. This acts as the working directory for your application:

WORKDIR /var/www/app

Both Node.js and NPM come pre-installed within your image. However, you’ll need to install your app dependencies using the npm binary. 

To bundle your app’s source code within your Docker image, use the COPY instruction:

COPY . .

Your app binds to port 3000. Use the EXPOSE instruction to document that port. Note that EXPOSE doesn’t actually publish the port by itself; you’ll still map it to the host at runtime with -p or in your Compose file:

EXPOSE 3000

Your simple Dockerfile will look something like this. Note that the working directory comes first, the package files are copied and dependencies installed inside it, and only then is the rest of the source code copied in:

FROM node:lts-alpine

WORKDIR /var/www/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3000

Now you’ll need to build your Docker image. Enter the following command to kick off the build, which produces an output soon after:

docker build -t ajeetraina/todolist .

docker images
REPOSITORY                             TAG       IMAGE ID       CREATED         SIZE
ajeetraina/todolist                    latest    6aeeaac8ace3   2 minutes ago   131MB

Next, you’ll create a .dockerignore file. This file closely resembles .gitignore. It prevents files from being added to the initial build context, which the Docker daemon receives during docker build execution.

Create this .dockerignore file in the same directory as your Dockerfile with the following:

node_modules
.git

This prevents local modules and debug logs from being copied into your Docker image, where they could otherwise overwrite the modules installed during the image build.

If you rebuild the Docker image and verify it, you can save roughly 4MB of disk space:

docker images                        
REPOSITORY                             TAG       IMAGE ID       CREATED          SIZE
ajeetraina/todolist                    latest    453c5aeae5e0   3 seconds ago    127MB

Finally, it’s time to create a Docker Compose file. This single YAML file lets you specify your frontend app and your Redis database:

services:
  
  app:
    build: ./
    volumes:
       - ./:/var/www/app
    ports:
      - 3000:3000
    environment:
      - REDIS_URL=redis://db:6379
      - NODE_ENV=development
      - PORT=3000
    command:
       sh -c 'node app.js'
    depends_on:
      - db
 
  db:
    image: redis

Your example application has the following parts:

  • Two services backed by Docker images: your frontend web app and your backend database
  • The frontend, accessible via port 3000
  • The depends_on parameter, letting you create the backend service before the frontend service starts
  • One bind mount (./ mapped to /var/www/app), attached to the frontend

Next, you’ll specify your REDIS_URL within your app.js file — letting you pass Redis endpoints within your Docker Compose file:

const client = redis.createClient({ url: process.env.REDIS_URL });

 

You’ll then want to start your services using the docker-compose up command; for example, to build anything that’s missing and start both services in the background:
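docker-compose up -d

Just like that, you’ve created and deployed your Node.js to-do list app! It’s usable in your browser, like before: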

[Screenshot: the to-do list app running in the browser, served from containers]

Want to dig a little deeper? My complete project code is available on my GitHub page.

Conclusion

Docker helps accelerate the process of building, running, and sharing modern applications. Docker Official Images help you develop your own unique applications, no matter what tech stack you’re accustomed to. With a single YAML file, we demonstrated how Docker Compose helps you easily build a multi-container Node.js app with Redis.

We can even take Docker Compose and develop real-world microservices applications. With just a few extra steps, you can apply this tutorial while building applications with much greater complexity. Happy coding! 

Additional Resources

 

Docker for Node.js Developers: 5 Things You Need to Know Not to Fail Your Security https://www.docker.com/blog/docker-for-node-js-developers-5-things-you-need-to-know-not-to-fail-your-security/ Tue, 13 Jul 2021 17:30:56 +0000 https://www.docker.com/blog/?p=28493 Guest post by Liran Tal, Snyk Director of Developer Advocacy 

Docker has totaled more than 318 billion container image downloads. With millions of applications available on Docker Hub, container-based applications are popular and offer an easy way to consume and publish applications.

That being said, the naive way of building your own Docker Node.js web applications may come with many security risks. So, how do we make security an essential part of Docker for Node.js developers?

Many articles on this topic have been written, yet sadly, without thoughtful consideration of security and production best practices for building Node.js Docker images. This is the focus of my article here and the demos I shared on this recent Docker Build show with Peter McKee. 

Before we jump into the gist of Docker for Node.js and building Docker images, let’s have a look at some frequently asked questions on the topic.

How do I dockerize Node.js applications?

Running your Node.js application in a Docker container can be as simple as copying over the project’s directory and installing all the required npm packages, but there are many security and production related concerns that you might miss. These production-grade tips are laid out in the following guide on containerizing Node.js web applications with Docker, which covers everything from choosing the right Docker base image and using multi-stage builds, to managing secrets safely and properly enabling production-related framework configuration.

This article focuses on the information you need to better understand the impact of choosing the right Node.js Docker base image for your web application and will help you find the most secure Docker image available for your application.  

How is Docker helpful for Node.js developers?

Packaging your Node.js application in a container allows you to bundle your complete application, including the runtime, configuration, OS-level dependencies, and everything required for your web application to run across different platforms and CPU architectures, all as a deployable artifact called a container image. These Docker images are software-based bundles enabling easily reproducible builds, and give Node.js developers a way to run the same project or product in all environments.

Finally, Docker containers allow you to experiment more easily with new platform releases or other changes without requiring special permissions, or setting up a dedicated environment to run a project.

1. Choose the right Node.js Docker base image for your application

When creating a Docker image for a Node.js project, we build our own application image based on another Docker image, which we pull from Docker Hub. This is what we refer to as the base image. The base image is the building block of the new Docker image you are about to build for your Node.js application.

The selection of a base image is critical because it significantly impacts everything from Docker image build speed to the security and performance of your web application. This is so critical that Docker and Snyk co-wrote a practical guide focused on container image security for developer teams.

It’s quite possible that you are choosing a full-fledged operating system image based on Debian or Ubuntu, because it enables you to utilize all the tooling and libraries available in these images. However, this comes at a price. When a base image has a security vulnerability, you will inherit it in your newly created image. Why would you want to start off on bad terms by defaulting to a big base image that contains many vulnerabilities?

When we look at base images, many of the security vulnerabilities belong to the Operating System (OS) layer the base image uses. Snyk’s 2019 research, Shifting Docker security left, showed that the vulnerabilities brought in by the OS layer can vary widely depending on the flavor you choose.

[Chart: OS-layer vulnerability counts by base image flavor]

2. Scan your Node.js Docker image during development

Creating a Docker image based on other images, as well as rebuilding them can potentially introduce new vulnerabilities, but there’s a way for you to be on top of it.

Treat the Docker image build process just like any other development related activity. Just as you test the code you write, you should test the Docker images you build. 

These tests include static file checks—also known as linters—to ensure you’re avoiding security pitfalls and other bad patterns in your Dockerfile. We’ve outlined some of these in our Docker image security best practices. If you’re a Node.js application developer you’ll also want to read through this step-by-step 10 best practices to containerize Node.js web applications with Docker.

Connecting your git repositories to Snyk is also an excellent choice. Snyk supports native integrations with GitHub, GitLab, Bitbucket and Azure Repos. Having a git integration means that we can scan your pull requests and annotate them with security information if we find security vulnerabilities. This allows you to put gates in place and deny merging a pull request if it brings in new security vulnerabilities.

If you need more flexibility for your Continuous Integration (CI), or a closely integrated developer experience, meet the Snyk CLI.

The CLI allows you to easily test your Docker container image. Let’s say you’re building a Docker image locally and tagged it as nodejs:notification-v99.9—we test it as follows:

  1. Install the Snyk CLI:
    $ npm install -g snyk
  2. Then let the Snyk CLI automatically grab an API token for you with:
    $ snyk auth
  3. Scan the local base image:
    $ snyk container test nodejs:notification-v99.9
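If you also point the test at your Dockerfile, Snyk can link vulnerabilities back to your base image; this is where the Target file entry and the base image upgrade recommendations in the output below come from:

$ snyk container test nodejs:notification-v99.9 --file=Dockerfile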

Test results are then printed to the screen, along with information about each CVE and the dependency path that introduces the vulnerability, so you know which OS dependency is responsible for it.

Following is an example output for testing Docker base image node:15:

✗ High severity vulnerability found in binutils
  Description: Out-of-Bounds
  Info: https://snyk.io/vuln/SNYK-DEBIAN9-BINUTILS-404153
  Introduced through: dpkg/dpkg-dev@1.18.25, libtool@2.4.6-2
  From: dpkg/dpkg-dev@1.18.25 > binutils@2.28-5
  From: libtool@2.4.6-2 > gcc-defaults/gcc@4:6.3.0-4 > gcc-6@6.3.0-18+deb9u1 > binutils@2.28-5
  Introduced by your base image (node:15)

✗ High severity vulnerability found in binutils
  Description: Integer Overflow or Wraparound
  Info: https://snyk.io/vuln/SNYK-DEBIAN9-BINUTILS-404253
  Introduced through: dpkg/dpkg-dev@1.18.25, libtool@2.4.6-2
  From: dpkg/dpkg-dev@1.18.25 > binutils@2.28-5
  From: libtool@2.4.6-2 > gcc-defaults/gcc@4:6.3.0-4 > gcc-6@6.3.0-18+deb9u1 > binutils@2.28-5
  Introduced by your base image (node:15)



Organization:      snyk-demo-567
Package manager:   deb
Target file:       Dockerfile
Project name:      docker-image|node
Docker image:      node:15
Platform:          linux/amd64
Base image:        node:15
Licenses:          enabled

Tested 412 dependencies for known issues, found 554 issues.

Base Image  Vulnerabilities  Severity
node:15     554              56 high, 63 medium, 435 low

Recommendations for base image upgrade:

Alternative image types
Base Image                Vulnerabilities  Severity
node:current-buster-slim  53               10 high, 4 medium, 39 low
node:15.5-slim            72               18 high, 7 medium, 47 low
node:current-buster       304              33 high, 43 medium, 228 low

3. Fix your Node.js runtime vulnerabilities in your Docker images

An often overlooked detail, when managing the risk of Docker container images, is the application runtime itself. Whether you’re practicing Docker for Java, or you’re running Docker for Node.js web applications, the Node.js application runtime itself may be vulnerable.

You should be aware and follow Node.js security releases and the Node.js security policy. Instead of manually keeping up with these, take advantage of Snyk to also find Node.js security vulnerabilities.

To give you more context on security vulnerabilities across the different Node.js base image tags, I scanned some of them with the Snyk CLI and plotted the results in the following logarithmic scale chart:

[Chart: security vulnerabilities across Node.js base image tags, plotted on a logarithmic scale]

You can see that:

  1. The default node base image tag, also tagged as node:latest, bundles more than 500 security vulnerabilities, but also introduces 2 security vulnerabilities in the Node.js runtime itself. That should worry you if you’re currently running a Node.js 15 version in production and you didn’t patch or fix it.
  2. The node:alpine base image tag might not be bundling vulnerable OS dependencies in the base image—this is why it’s missing a blue bar—but it still has a vulnerable version of the latest Node.js runtime (version 15).
  3. If you’re running an unsupported version of Node.js—for example, Node.js 10—it is vulnerable and you can see that it is not receiving any security updates.

If you were to choose Node.js version 15, the latest version released at the time of writing, you would actually be exposing yourself not only to 561 security vulnerabilities within this container, but also to two security vulnerabilities in the Node.js runtime itself.

We can see the Docker scan test results found in this public image testing URL: https://snyk.io/test/docker/node:15.5.0. You’re welcome to test other Node.js base image tags that you’re using with this public and free Docker scanning service: https://snyk.io/test.

[Screenshot: Snyk’s public scan results for the node:15.5.0 image]

Security is now an integral part of the Docker workflow, with Snyk powering container scanning in Docker Hub and Docker Desktop. In fact, if you’re using Docker as a development platform, you should review our Snyk and Docker Vulnerability Cheatsheet.

If you already have a Docker user account, you can use it to connect to Snyk and quickly import your Docker Hub repositories with up to 200 free scans per month. 

4. Monitor your deployed Docker images for your Node.js applications

Once you have Docker images built, you’re probably pushing them to a Docker registry that keeps track of the images, so that these can be deployed and spun up as a functional container application.

Why should we monitor Docker base images?

If you’re practicing all of the security guidelines we covered so far with scanning and fixing base images, that’s great. However, keep in mind that new security vulnerabilities get discovered all the time. If you have 78 security vulnerabilities in your image now, that doesn’t mean you won’t have 100 tomorrow morning when new CVEs are reported and impact your running containers in production. That’s why monitoring your registry of container images—those that you’re using to deploy containers—is crucial to ensure you will find out about security issues soon and can remediate them.
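Alongside the registry integrations discussed below, the Snyk CLI can record a snapshot of an image for ongoing monitoring, so newly disclosed CVEs in already-deployed images surface automatically (image name illustrative):

$ snyk container monitor nodejs:notification-v99.9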

If you’re using a paid Docker Hub registry for your images, you might have already seen the integrated Docker security scanning by Snyk in Docker Hub. 

You can also integrate with many Docker image registries from the Snyk app directly. For example, you can import images from Docker Hub, ACR, ECR, GCR, or Artifactory and then Snyk will scan these regularly for you and alert you via Slack or email about any security issues found:

[Screenshot: container registry images imported into Snyk for regular scanning and alerts]

5. Follow security guidelines and production-grade recommendations for a secure and optimal Node.js Docker image

Congratulations on keeping up with all the security guidelines so far!

To wrap up, if you want to dive deep into security best practices for building optimal Docker images for Node.js and Java applications, check out these resources:

  1. 10 Docker Security Best Practices – detailed security practices that you should follow when building Docker base images and when pulling them too, as it also introduces the reader to Docker content trust.
  2. Are you a Java developer? You’ll find this resource valuable: Docker for Java developers: 5 things you need to know not to fail your security.
  3. 10 best practices to containerize Node.js web applications with Docker – If you’re a Node.js developer you are going to love this step by step walkthrough, showing you how to build secure and performant Docker base images for your Node.js applications.

Start testing and fixing your container images with Snyk and your Docker ID.

Getting Started with Docker Using Node – Part II https://www.docker.com/blog/getting-started-with-docker-using-node-part-ii/ Fri, 11 Sep 2020 00:27:08 +0000 https://www.docker.com/blog/?p=27013 In part I of this series, we learned about creating Docker images using a Dockerfile, tagging our images and managing images. Next we took a look at running containers, publishing ports, and running containers in detached mode. We then learned about managing containers by starting, stopping and restarting them. We also looked at naming our containers so they are more easily identifiable.


In this post, we’ll focus on setting up our local development environment. First, we’ll take a look at running a database in a container and how we use volumes and networking to persist our data and allow our application to talk with the database. Then we’ll pull everything together into a compose file, which will allow us to set up and run a local development environment with one command. Finally, we’ll take a look at connecting a debugger to our application running inside a container.

Local Database and Containers

Instead of downloading MongoDB, installing, configuring, and then running the Mongo database as a service, we can use the Docker Official Image for MongoDB and run it in a container.

Before we run MongoDB in a container, we want to create a couple of volumes that Docker can manage to store our persistent data and configuration. I like to use the managed volumes feature that Docker provides instead of using bind mounts. You can read all about volumes in our documentation.

Let’s create our volumes now. We’ll create one for the data and one for configuration of MongoDB.

$ docker volume create mongodb

$ docker volume create mongodb_config

Now we’ll create a network that our application and database will use to talk with each other. The network is called a user defined bridge network and gives us a nice DNS lookup service which we can use when creating our connection string.

$ docker network create mongodb

Now we can run MongoDB in a container and attach to the volumes and network we created above. Docker will pull the image from Hub and run it for you locally.

$ docker run -it --rm -d -v mongodb:/data/db \
  -v mongodb_config:/data/configdb -p 27017:27017 \
  --network mongodb \
  --name mongodb \
  mongo

Okay, now that we have a running MongoDB, let’s update server.js to use MongoDB instead of an in-memory data store.

const ronin     = require( 'ronin-server' )
const mocks     = require( 'ronin-mocks' )
const database  = require( 'ronin-database' )

const server = ronin.server()

database.connect( process.env.CONNECTIONSTRING )
server.use( '/', mocks.server( server.Router(), false, false ) )
server.start()

We’ve added the ronin-database module and updated the code to connect to the database and set the in-memory flag to false. We now need to rebuild our image so it contains our changes.

First let’s add the ronin-database module to our application using npm.

$ npm install ronin-database

Now we can build our image.

$ docker build --tag node-docker .

Now let’s run our container. But this time we’ll need to set the CONNECTIONSTRING environment variable so our application knows what connection string to use to access the database. We’ll do this right in the docker run command.

$ docker run \
  -it --rm -d \
  --network mongodb \
  --name rest-server \
  -p 8000:8000 \
  -e CONNECTIONSTRING=mongodb://mongodb:27017/yoda_notes \
  node-docker

Let’s test that our application is connected to the database and is able to add a note.

$ curl --request POST \
  --url http://localhost:8000/notes \
  --header 'content-type: application/json' \
  --data '{
  "name": "this is a note",
  "text": "this is a note that I wanted to take while I was working on writing a blog post.",
  "owner": "peter"
}'

You should receive the following JSON back from our service.

{"code":"success","payload":{"_id":"5efd0a1552cd422b59d4f994","name":"this is a note","text":"this is a note that I wanted to take while I was working on writing a blog post.","owner":"peter","createDate":"2020-07-01T22:11:33.256Z"}}

Using Compose to Develop locally

Awesome! We now have our MongoDB running inside a container and persisting its data to a Docker volume. We were also able to pass in the connection string using an environment variable.

But this can be a little time-consuming, and it’s also difficult to remember all the environment variables, networks, and volumes that need to be created and set up to run our application.

In this section, we’ll use a Compose file to configure everything we just did manually. We’ll also set up the Compose file to start the application in debug mode so that we can connect a debugger to the running node process.

Open your favorite IDE or text editor, create a new file named docker-compose.dev.yml, and copy and paste the following into it:

version: '3.8'

services:
  notes:
    build:
      context: .
    ports:
      - 8000:8000
      - 9229:9229
    environment:
      - CONNECTIONSTRING=mongodb://mongo:27017/notes
    volumes:
      - ./:/code
    command: npm run debug

  mongo:
    image: mongo:4.2.8
    ports:
      - 27017:27017
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb

volumes:
  mongodb:
  mongodb_config:

This compose file is super convenient because now we do not have to type all the parameters to pass to the docker run command; we can declare them in the compose file instead.

We are exposing port 9229 so that we can attach a debugger. We are also mapping our local source code into the running container so that we can make changes in our text editor and have those changes picked up in the container.

One other really cool feature of using a compose file is that we get service resolution automatically. We’re able to use “mongo” in our connection string because that’s the name we gave the service running our MongoDB container in the compose file.

To be able to start our application in debug mode, we need to add a line to our package.json file to tell npm how to start our application in debug mode.

Open the package.json file and add the following line to the scripts section.

"debug": "nodemon --inspect=0.0.0.0:9229 server.js"

As you can see we are going to use nodemon. Nodemon will start our server in debug mode and also watch for files that have changed and restart our server. Let’s add nodemon to our package.json file.

$ npm install nodemon

Let’s first stop our running application and the mongodb container. Then we can start our application using compose and confirm that it is running properly.

$ docker stop rest-server mongodb

$ docker-compose -f docker-compose.dev.yml up --build

If you get the following error: ‘Error response from daemon: No such container:’, don’t worry. That just means that you had already stopped the container, or it wasn’t running in the first place.

You’ll notice that we pass the --build flag to the docker-compose command. This tells Docker to first build our image and then start it.

If all goes well, you should see something similar:

[Screenshot: docker-compose building the image and starting the notes and mongo services]

Now let’s test our API endpoint. Run the following curl command:

$ curl --request GET --url http://localhost:8000/notes

You should receive the following response:

{"code":"success","meta":{"total":0,"count":0},"payload":[]}

Connecting a Debugger

We’ll use the debugger that comes with the Chrome browser. Open Chrome on your machine and then type the following into the address bar.

about:inspect

The following screen will open.

[Screenshot: the Chrome about:inspect page listing available debugging targets]

Click the “Open dedicated DevTools for Node” link. This will open the DevTools window that is connected to the running node.js process inside our container.

Let’s change the source code and then set a breakpoint. 

Add the following code to the server.js file on line 9 and save the file. 

 server.use( '/foo', (req, res) => {
   return res.json({ "foo": "bar" })
 })

If you take a look at the terminal where our compose application is running, you’ll see that nodemon noticed the changes and reloaded our application.

[Screenshot: nodemon detecting the change and restarting the application]

Navigate back to the Chrome DevTools and set a breakpoint on line 10 and then run the following curl command to trigger the breakpoint.

$ curl --request GET --url http://localhost:8000/foo

💥 BOOM 💥 You should have seen the code break on line 10, and now you are able to use the debugger just like you would normally. You can inspect and watch variables, set conditional breakpoints, view stack traces, and a bunch of other stuff.

Conclusion

In this post, we ran MongoDB in a container, connected it to a couple of volumes and created a network so our application could talk with the database. Then we used Docker Compose to pull all this together into one file. Finally, we took a quick look at configuring our application to start in debug mode and connected to it using the Chrome debugger.

If you have any questions, please feel free to reach out on Twitter @pmckee and join us in our community Slack.

Getting Started with Docker Using Node.js (Part I) https://www.docker.com/blog/getting-started-with-docker-using-node-jspart-i/ Thu, 03 Sep 2020 21:51:34 +0000 https://www.docker.com/blog/?p=26999 A step-by-step guide to help you get started using Docker containers with your Node.js apps.

Prerequisites

To complete this tutorial, you will need the following:


Docker Overview

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. 

With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

Sample Application

Let’s create a simple Node.js application that we’ll use as our example. Create a directory on your local machine named node-docker and follow the steps below to create a simple REST API.

$ cd [path to your node-docker directory]
$ npm init -y
$ npm install ronin-server ronin-mocks
$ touch server.js

Now let’s add some code to handle our REST requests. We’ll use a mocks server so we can focus on Dockerizing the application and not so much the actual code.

Open this working directory in your favorite IDE and enter the following code into the server.js file.

const ronin     = require( 'ronin-server' )
const mocks     = require( 'ronin-mocks' )
 
const server = ronin.server()
 
server.use( '/', mocks.server( server.Router(), false, true ) )
server.start()

The mocking server is called Ronin.js and will listen on port 8000 by default. You can make POST requests to the root (/) endpoint, and any JSON structure you send to the server will be saved in memory. You can also send GET requests to the same endpoint and receive an array of JSON objects that you have previously POSTed.

Testing Our Application

Let’s start our application and make sure it’s running properly. Open your terminal and navigate to your working directory you created. 

$ node server.js

To test that the application is working properly, we’ll first POST some json to the API and then make a GET request to see that the data has been saved. Open a new terminal and run the following curl commands:

$ curl --request POST \
  --url http://localhost:8000/test \
  --header 'content-type: application/json' \
  --data '{
	"msg": "testing"
}'
{"code":"success","payload":[{"msg":"testing","id":"31f23305-f5d0-4b4f-a16f-6f4c8ec93cf1","createDate":"2020-08-28T21:53:07.157Z"}]}

$ curl http://localhost:8000/test
{"code":"success","meta":{"total":1,"count":1},"payload":[{"msg":"testing","id":"31f23305-f5d0-4b4f-a16f-6f4c8ec93cf1","createDate":"2020-08-28T21:53:07.157Z"}]}

Switch back to the terminal where our server is running and you should see the following requests in the server logs.

2020-XX-31T16:35:08:4260  INFO: POST /test
2020-XX-31T16:35:21:3560  INFO: GET /test

Creating Dockerfiles for Node.js

Now that our application is running properly, let’s take a look at creating a Dockerfile. 

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. When we tell Docker to build our image by executing the docker build command, Docker will read these instructions and execute them one by one and create a Docker image as a result.

Let’s walk through creating a Dockerfile for our application. In the root of your working directory, create a file named Dockerfile and open this file in your text editor.

NOTE: The name of the Dockerfile is not important but the default filename for many commands is simply Dockerfile. So we’ll use that as our filename throughout this series.

The first thing we need to do is add a line in our Dockerfile that tells Docker what base image we would like to use for our application. 

Dockerfile:

FROM node:12.18.1

Docker images can be inherited from other images. So instead of creating our own base image, we’ll use the official Node.js image that already has all the tools and packages we need to run a Node.js application. You can think of this in the same way you would think about class inheritance in object-oriented programming. For example, if we were able to create Docker images in JavaScript, we might write something like the following.

class MyImage extends NodeBaseImage {}

This would create a class called MyImage that inherited functionality from the base class NodeBaseImage.

In the same way, when we use the FROM command, we tell Docker to include in our image all the functionality from the node:12.18.1 image.

NOTE: If you want to learn more about creating your own base images, please checkout our documentation on creating base images.

To make things easier when running the rest of our commands, let’s create a working directory. 

This instructs Docker to use this path as the default location for all subsequent commands. This way we do not have to type out full file paths but can use relative paths based on the working directory.

WORKDIR /app

Usually the very first thing you do once you’ve downloaded a project written in Node.js is to install npm packages. This will ensure that your application has all its dependencies installed into the node_modules directory where the node runtime will be able to find them.

Before we can run npm install, we need to get our package.json and package-lock.json files into our images. We’ll use the COPY command to do this. The COPY command takes two parameters. The first parameter tells Docker what file(s) you would like to copy into the image. The second parameter tells Docker where you want that file(s) to be copied to. We’ll copy the package.json and package-lock.json file into our working directory – /app.

COPY package.json package.json
COPY package-lock.json package-lock.json

Once we have our package.json files inside the image, we can use the RUN command to execute the command npm install. This works exactly the same as if we were running npm install locally on our machine but this time these node modules will be installed into the node_modules directory inside our image.

RUN npm install

At this point we have an image that is based on node version 12.18.1 and we have installed our dependencies. The next thing we need to do is to add our source code into the image. We’ll use the COPY command just like we did with our package.json files above.

COPY . .

This COPY command takes all the files located in the current directory and copies them into the image. Now all we have to do is tell Docker what command we want to run when our image is run inside of a container. We do this with the CMD command.

CMD [ "node", "server.js" ]

Below is the complete Dockerfile.

FROM node:12.18.1
 
WORKDIR /app
 
COPY package.json package.json
COPY package-lock.json package-lock.json
 
RUN npm install
 
COPY . .
 
CMD [ "node", "server.js" ]

Building Images

Now that we’ve created our Dockerfile, let’s build our image. To do this we use the docker build command. The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The Docker build process can access any of the files located in the context. 

The build command optionally takes a --tag flag. The tag is used to set the name of the image and an optional tag in the format ‘name:tag’. We’ll leave off the optional “tag” for now to help simplify things. If you do not pass a tag, Docker will use “latest” as its default tag. You’ll see this in the last line of the build output.

Let’s build our first Docker image.

$ docker build --tag node-docker .
Sending build context to Docker daemon  82.94kB
Step 1/7 : FROM node:12.18.1
---> f5be1883c8e0
Step 2/7 : WORKDIR /app
...
Successfully built e03018e56163
Successfully tagged node-docker:latest

Viewing Local Images

To see a list of images we have on our local machine, we have two options. One is to use the CLI and the other is to use Docker Desktop. Since we are currently working in the terminal let’s take a look at listing images with the CLI.

To list images, simply run the images command.

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
node-docker         latest              3809733582bc        About a minute ago   945MB
node                12.18.1             f5be1883c8e0        2 months ago         918MB

You should see at least two images listed: one for the base image node:12.18.1, and the other for the image we just built, node-docker:latest.

Tagging Images

As mentioned earlier, an image name is made up of slash-separated name components. Name components may contain lowercase letters, digits and separators. A separator is defined as a period, one or two underscores, or one or more dashes. A name component may not start or end with a separator.

An image is made up of a manifest and a list of layers. Do not worry too much about manifests and layers at this point; just know that a “tag” points to a combination of these artifacts. You can have multiple tags for an image. Let’s create a second tag for the image we built and take a look at its layers.

To create a new tag for the image we built above, run the following command.

$ docker tag node-docker:latest node-docker:v1.0.0

The docker tag command creates a new tag for an image. It does not create a new image. The tag points to the same image and is just another way to reference the image.
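If you’d like to peek at the layers a tag points to, the docker history command lists them:

$ docker history node-docker:latest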

Now run the docker images command to see a list of our local images.

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
node-docker         latest              3809733582bc        24 minutes ago      945MB
node-docker         v1.0.0              3809733582bc        24 minutes ago      945MB
node                12.18.1             f5be1883c8e0        2 months ago        918MB

You can see that we have two images that start with node-docker. We know they are the same image because if you look at the IMAGE ID column, you can see that the values are the same for the two images.

Let’s remove the tag that we just created. To do this, we’ll use the rmi command. The rmi command stands for “remove image”. 

$ docker rmi node-docker:v1.0.0
Untagged: node-docker:v1.0.0

Notice that the response from Docker tells us that the image has not been removed but only “untagged”. Double check this by running the images command.

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
node-docker         latest              3809733582bc        32 minutes ago      945MB
node                12.18.1             f5be1883c8e0        2 months ago        918MB

Our image that was tagged with :v1.0.0 has been removed but we still have the node-docker:latest tag available on our machine.

Running Containers

A container is a normal operating system process except that this process is isolated in that it has its own file system, its own networking, and its own isolated process tree separate from the host.

To run an image inside of a container, we use the docker run command. The docker run command requires one parameter and that is the image name. Let’s start our image and make sure it is running correctly. Execute the following command in your terminal.

$ docker run node-docker

After running this command, you’ll notice that you were not returned to the command prompt. This is because our application is a REST server and will run in a loop waiting for incoming requests, without returning control to the OS until we stop the container.

Let’s make a request to the server using the curl command.

$ curl --request POST \
  --url http://localhost:8000/test \
  --header 'content-type: application/json' \
  --data '{
	"msg": "testing"
}'
curl: (7) Failed to connect to localhost port 8000: Connection refused

As you can see, our curl command failed because the connection to our server was refused: we were not able to connect to localhost on port 8000. This is expected because our container runs in isolation, which includes networking. Let’s stop the container and restart it with port 8000 published on our local network.

To stop the container, press ctrl-c. This will return you to the terminal prompt.

To publish a port for our container, we’ll use the --publish flag (-p for short) on the docker run command. The format of the --publish flag is [host port]:[container port]. So if we wanted to expose port 8000 inside the container as port 3000 outside the container, we would pass 3000:8000 to the --publish flag.

Start the container and expose port 8000 to port 8000 on the host.

$ docker run --publish 8000:8000 node-docker

Now let’s rerun the curl command from above.

$ curl --request POST \
  --url http://localhost:8000/test \
  --header 'content-type: application/json' \
  --data '{
	"msg": "testing"
}'
{"code":"success","payload":[{"msg":"testing","id":"dc0e2c2b-793d-433c-8645-b3a553ea26de","createDate":"2020-09-01T17:36:09.897Z"}]}

Success! We were able to connect to the application running inside of our container on port 8000. Switch back to the terminal where your container is running and you should see the POST request logged to the console.

2020-09-01T17:36:09:8770  INFO: POST /test

Press ctrl-c to stop the container.

Run In Detached Mode

This is great so far, but our sample application is a web server, and we shouldn’t have to keep our terminal connected to the container. Docker can run your container in detached mode, that is, in the background. To do this, we can use the --detach flag, or -d for short. Docker will start your container as before, but this time it will “detach” from the container and return you to the terminal prompt.

$ docker run -d -p 8000:8000 node-docker
ce02b3179f0f10085db9edfccd731101868f58631bdf918ca490ff6fd223a93b

Docker started our container in the background and printed the container ID to the terminal.

Again, let’s make sure that our container is running properly. Run the same curl command from above.

$ curl --request POST \
  --url http://localhost:8000/test \
  --header 'content-type: application/json' \
  --data '{
	"msg": "testing"
}'
{"code":"success","payload":[{"msg":"testing","id":"dc0e2c2b-793d-433c-8645-b3a553ea26de","createDate":"2020-09-01T17:36:09.897Z"}]}

Listing Containers

Since we ran our container in the background, how do we know if it is running, or what other containers are running on our machine? We can run the docker ps command. Just like on Linux, where you run the ps command to see a list of processes on your machine, you can run the docker ps command to see a list of the containers running on yours.

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
ce02b3179f0f        node-docker         "docker-entrypoint.s…"   6 minutes ago       Up 6 minutes        0.0.0.0:8000->8000/tcp   wonderful_kalam

The docker ps command shows a lot of information about our running containers: the container ID, the image running inside the container, the command used to start the container, when it was created, its status, the ports that are exposed, and the name of the container.
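
As a side note, if you only want a few of those columns, docker ps also accepts a --format flag that takes a Go template. For example, to show just the names, statuses, and ports:

$ docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"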

You are probably wondering where the name of our container came from. Since we didn’t provide a name for the container when we started it, Docker generated a random one. We’ll fix this in a minute, but first we need to stop the container. To stop the container, run the docker stop command, which does just that: stops the container. You can pass it either the name of the container or the container ID.

$ docker stop wonderful_kalam
wonderful_kalam

Now rerun the docker ps command to see a list of running containers.

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Stopping, Starting and Naming Containers

Docker containers can be started, stopped and restarted. When we stop a container, it is not removed; its status changes to stopped and the process inside the container is halted. By default, the docker ps command only shows running containers. If we pass the --all flag (-a for short), we will see all containers on our system, whether they are stopped or started.

$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS               NAMES
ce02b3179f0f        node-docker         "docker-entrypoint.s…"   16 minutes ago      Exited (0) 5 minutes ago                        wonderful_kalam
ec45285c456d        node-docker         "docker-entrypoint.s…"   28 minutes ago      Exited (0) 20 minutes ago                       agitated_moser
fb7a41809e5d        node-docker         "docker-entrypoint.s…"   37 minutes ago      Exited (0) 36 minutes ago                       goofy_khayyam

If you’ve been following along, you should see several containers listed. These are containers that we started and stopped but have not been removed.

Let’s restart the container that we just stopped. Locate the name of the container we just stopped, and replace the name in the restart command below with the name of the container on your system.

$ docker restart wonderful_kalam

Now list all the containers again using the ps command.

$ docker ps --all
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS                    NAMES
ce02b3179f0f        node-docker         "docker-entrypoint.s…"   19 minutes ago      Up 8 seconds                0.0.0.0:8000->8000/tcp   wonderful_kalam
ec45285c456d        node-docker         "docker-entrypoint.s…"   31 minutes ago      Exited (0) 23 minutes ago                            agitated_moser
fb7a41809e5d        node-docker         "docker-entrypoint.s…"   40 minutes ago      Exited (0) 39 minutes ago                            goofy_khayyam

Notice that the container we just restarted has been started in detached mode and has port 8000 published. Also, observe that its status is “Up X seconds”. When you restart a container, it starts with the same flags and commands that it was originally started with.
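
Because the restarted container is running in the background again, you can still see what it is printing by using the docker logs command (add the --follow flag to stream new output as it arrives):

$ docker logs wonderful_kalam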

Let’s stop and remove all of our containers and take a look at fixing the random naming issue.

Stop the container we just started. Find the name of your running container and replace the name in the command below with the name of the container on your system.

$ docker stop wonderful_kalam
wonderful_kalam

Now that all of our containers are stopped, let’s remove them. When a container is removed, the process inside it is stopped and the container’s metadata is deleted; the container is no longer in the running or the stopped state, it is simply gone.

$ docker ps --all
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS                    NAMES
ce02b3179f0f        node-docker         "docker-entrypoint.s…"   19 minutes ago      Exited (0) 10 seconds ago                            wonderful_kalam
ec45285c456d        node-docker         "docker-entrypoint.s…"   31 minutes ago      Exited (0) 23 minutes ago                            agitated_moser
fb7a41809e5d        node-docker         "docker-entrypoint.s…"   40 minutes ago      Exited (0) 39 minutes ago                            goofy_khayyam

To remove a container, simply run the docker rm command, passing the container name. You can pass multiple container names in a single command. Again, replace the container names in the command below with the container names from your system.

$ docker rm wonderful_kalam agitated_moser goofy_khayyam
wonderful_kalam 
agitated_moser 
goofy_khayyam

Run the docker ps --all command again to see that all containers are gone.
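
As a convenient shortcut, Docker can also remove every stopped container on your system in one go. The container prune subcommand will ask for confirmation before deleting anything:

$ docker container prune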

Now let’s address the pesky random-name issue. Standard practice is to name your containers, for the simple reason that it makes it easier to identify what is running in the container and which application or service it is associated with. Just as good naming conventions for variables make your code easier to read, good names make your containers easier to manage.

To name a container, we just need to pass the --name flag to the run command.

$ docker run -d -p 8000:8000 --name rest-server node-docker
1aa5d46418a68705c81782a58456a4ccdb56a309cb5e6bd399478d01eaa5cdda
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
1aa5d46418a6        node-docker         "docker-entrypoint.s…"   3 seconds ago       Up 3 seconds        0.0.0.0:8000->8000/tcp   rest-server

There, that’s better. Now we can easily identify our container based on the name.
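
The name also works anywhere a container ID does, which makes the stop and remove steps from earlier much more predictable:

$ docker stop rest-server
$ docker rm rest-server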

Conclusion

In this post, we learned about creating Docker images using a Dockerfile, and about tagging and managing images. Next, we took a look at running containers, publishing ports, and running containers in detached mode. We then learned about managing containers by starting, stopping, and restarting them. Finally, we looked at naming our containers so they are more easily identifiable.

In part II, we’ll take a look at running a database in a container and connecting it to our application. We’ll also look at setting up your local development environment and sharing your images using Docker.

If you have any questions, please feel free to reach out on Twitter @pmckee and join us in our Community Slack.

How To Setup Your Local Node.js Development Environment Using Docker – Part 2 https://www.docker.com/blog/how-to-setup-your-local-node-js-development-environment-using-docker-part-2/ Wed, 05 Aug 2020 22:47:53 +0000

In part I of this series, we took a look at creating Docker images and running containers for Node.js applications. We also took a look at setting up a database in a container, and at how volumes and networks play a part in setting up your local development environment.

In this article we’ll take a look at creating and running a development image where we can compile, add modules and debug our application all inside of a container. This helps speed up the developer setup time when moving to a new application or project. 

We’ll also take a quick look at using Docker Compose to help streamline the processes of setting up and running a full microservices application locally on your development machine.

Prerequisites

Fork the Code Repository

The first thing we want to do is download the code to our local development machine. Let’s do this using the following git command:

$ git clone git@github.com:pmckeetx/memphis.git

Now that we have the code local, let’s take a look at the project structure. Open the code in your favorite IDE and expand the root level directories. You’ll see the following file structure.

├── docker-compose.yml
├── notes-service
│   ├── config
│   ├── node_modules
│   ├── nodemon.json
│   ├── package-lock.json
│   ├── package.json
│   └── server.js
├── reading-list-service
│   ├── config
│   ├── node_modules
│   ├── nodemon.json
│   ├── package-lock.json
│   ├── package.json
│   └── server.js
├── users-service
│   ├── Dockerfile
│   ├── config
│   ├── node_modules
│   ├── nodemon.json
│   ├── package-lock.json
│   ├── package.json
│   └── server.js
└── yoda-ui
    ├── README.md
    ├── node_modules
    ├── package.json
    ├── public
    ├── src
    └── yarn.lock

The application is made up of a couple of simple microservices and a front-end written in React.js. It uses MongoDB as its datastore.

In part I of this series, we created a couple of Dockerfiles for our services, and took a look at running them in containers and connecting them to an instance of MongoDB running in a container.

Local development in Containers

There are many ways to use Docker and containers for local development, and a lot depends on your application’s structure. We’ll start with the very basics and then progress to more complicated setups.

Using a Development Image

One of the easiest ways to start using containers in your development workflow is to use a development image: an image that has all the tools you need to develop and compile your application.

In this article we are using Node.js, so our image should have Node.js installed, along with npm or yarn. Let’s create a development image that we can use to run our Node.js application.

Development Dockerfile

Create a local directory on your development machine that we can use as a working directory to save our Dockerfile and any other files that we’ll need for our development image.

$ mkdir -p ~/projects/dev-image

Create a Dockerfile in this folder and add the following commands.

FROM node:12.18.3
RUN apt-get update && apt-get install -y \
  nano \
  vim

We start off by using the node:12.18.3 official image. I’ve found that this image works well as the basis for a development image. I also like to add a couple of text editors, in case I want to quickly edit a file while inside the container.

We did not add an ENTRYPOINT or CMD to the Dockerfile because we will rely on the base image’s ENTRYPOINT and we will override the CMD when we start the image.

Let’s build our image.

$ docker build -t node-dev-image .

And now we can run it.

$ docker run -it --rm --name dev -v $(pwd):/code node-dev-image bash

You will be presented with a bash command prompt. Now, inside the container we can create a JavaScript file and run it with Node.js.

Run the following commands to test our image.

$ cat <<EOF > index.js
console.log( 'Hello from inside our container' )
EOF
$ node index.js
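Hello from inside our container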

Nice. It appears that we have a working development image. We can now do everything that we would do in our normal bash terminal.

If you ran the above docker run command from inside the notes-service directory, then you will have access to that service’s code inside the container.

You can start the notes-service by simply navigating to the /code directory and running npm run start.
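
Putting that together, starting the development container from inside the notes-service directory might look like the following. This is just a sketch: the -w flag sets the working directory inside the container, and the 8080 port mapping assumes the service listens on port 8080, as configured later in this article.

$ docker run -it --rm -v $(pwd):/code -w /code -p 8080:8080 node-dev-image bash

Inside the container, npm install followed by npm run start will boot the service. Keep in mind that the service still expects a MongoDB instance to connect to, which is exactly what we’ll wire up with Compose next.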

Using Compose to Develop Locally

The notes-service project uses MongoDB as its data store. If you remember from part I of this series, we had to start the Mongo container manually and connect it to the same network our notes-service runs on. We also had to create a couple of volumes so we could persist our data across restarts of our application and of MongoDB.

In this section, we’ll create a Compose file to start our notes-service and MongoDB with one command. We’ll also set up the Compose file to start the notes-service in debug mode, so that we can connect a debugger to the running Node process.

Open the notes-service in your favorite IDE or text editor and create a new file named docker-compose.dev.yml. Copy and paste the following configuration into the file.

version: '3.8'
services:
 notes:
   build:
     context: .
   ports:
     - 8080:8080
     - 9229:9229
   environment:
     - SERVER_PORT=8080
     - DATABASE_CONNECTIONSTRING=mongodb://mongo:27017/notes
   volumes:
     - ./:/code
   command: npm run debug
 
 mongo:
   image: mongo:4.2.8
   ports:
     - 27017:27017
   volumes:
     - mongodb:/data/db
     - mongodb_config:/data/configdb
volumes:
 mongodb:
 mongodb_config:

This Compose file is super convenient, because now we do not have to type all the parameters to pass to the docker run command; we can declare them once in the Compose file.
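
To make that concrete, running just the notes service by hand with everything the Compose file declares would look roughly like this. This is a sketch reconstructed from the Compose file above; notes-service is a placeholder image name, and the mongo hostname would still need a shared network to resolve:

$ docker run -p 8080:8080 -p 9229:9229 \
  -e SERVER_PORT=8080 \
  -e DATABASE_CONNECTIONSTRING=mongodb://mongo:27017/notes \
  -v $(pwd):/code \
  notes-service npm run debug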

We are exposing port 9229 so that we can attach a debugger. We are also mapping our local source code into the running container so that we can make changes in our text editor and have those changes picked up in the container.
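
For reference, the debug script in package.json likely looks something like the snippet below. This is a sketch of the idea rather than the exact script from the repo; the key parts are nodemon for automatic reloads and --inspect bound to 0.0.0.0 so the debugger is reachable from outside the container:

"scripts": {
  "start": "node server.js",
  "debug": "nodemon --inspect=0.0.0.0:9229 server.js"
}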

One other really cool feature of using a Compose file is that we get service resolution set up automatically, using the service names. So we are now able to use “mongo” in our connection string: it works because that is the name we gave our MongoDB service in the Compose file.

Let’s start our application and confirm that it is running properly.

$ docker-compose -f docker-compose.dev.yml up --build

We pass the --build flag so Docker will build our image and then start the containers.

If all goes well, you should see something similar to the following:

[Screenshot: docker-compose up output showing the notes and mongo services starting]

Now let’s test our API endpoint. Run the following curl command:

$ curl --request GET --url http://localhost:8080/services/m/notes

You should receive the following response:

{"code":"success","meta":{"total":0,"count":0},"payload":[]}

Connecting a Debugger

We’ll use the debugger that comes with the Chrome browser. Open Chrome on your machine and then type the following into the address bar.

about:inspect

The following screen will open.

[Screenshot: Chrome’s inspect page, with the “Open dedicated DevTools for Node” link]

Click the “Open dedicated DevTools for Node” link. This will open a DevTools window connected to the running Node.js process inside our container.

Let’s change the source code and then set a breakpoint. 

Add the following code to the server.js file on line 19 and save the file. 

server.use( '/foo', (req, res) => {
  return res.json({ "foo": "bar" })
})

If you take a look at the terminal where our compose application is running, you’ll see that nodemon noticed the changes and reloaded our application.

[Screenshot: Compose log output showing nodemon restarting the application]

Navigate back to the Chrome DevTools and set a breakpoint on line 20 and then run the following curl command to trigger the breakpoint.

$ curl --request GET --url http://localhost:8080/foo

💥 BOOM 💥 You should have seen the code break on line 20 and now you are able to use the debugger just like you would normally. You can inspect and watch variables, set conditional breakpoints, view stack traces and a bunch of other stuff.

Conclusion

In this article, we took a look at creating a general-purpose development image that we can use much like our normal command line. We also set up our Compose file to map our source code into the running container, and exposed the debugging port.
