Docker Init: Initialize Dockerfiles and Compose files with a single CLI command https://www.docker.com/blog/docker-init-initialize-dockerfiles-and-compose-files-with-a-single-cli-command/ Thu, 11 May 2023

Docker has revolutionized the way developers build, package, and deploy their applications. Docker containers provide a lightweight, portable, and consistent runtime environment that can run on any infrastructure. And now, the Docker team has developed docker init, a new command-line interface (CLI) command introduced as a Beta feature that simplifies the process of adding Docker to a project (Figure 1).


Note: Docker Init should not be confused with the internally used docker-init executable, which Docker invokes when you use the --init flag with the docker run command.

Screenshot of Command Prompt showing the directory \users\Marc\containers\example.
Figure 1: With one command, all required Docker files are created and added to your project.

Create assets automatically

The new docker init command automates the creation of necessary Docker assets, such as Dockerfiles, Compose files, and .dockerignore files, based on the characteristics of the project. By executing the docker init command, developers can quickly containerize their projects. Docker init is a valuable tool for developers who want to experiment with Docker, learn about containerization, or integrate Docker into their existing projects.

To use docker init, upgrade to Docker Desktop 4.19.0 or later and run the command in the target project folder. Docker init detects the project definition and automatically generates the files needed to run the project in Docker.
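For a typical project, the end-to-end flow looks something like this (a rough sketch; the exact prompts and generated files depend on what docker init detects in your project):

cd path/to/your-project

# Generate .dockerignore, Dockerfile, and compose.yaml interactively
docker init

# Build and run the project using the generated files
docker compose up --build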

The current Beta release of docker init supports Go, Node, and Python, and our development team is actively working to extend support for additional languages and frameworks, including Java, Rust, and .NET. If there is a language or stack that you would like to see added or if you have other feedback about docker init, let us know through our Google form.

In conclusion, docker init is a valuable tool for developers who want to simplify the process of adding Docker support to their projects. It automates the creation of necessary Docker assets and can help standardize those assets across different projects. By enabling developers to focus on developing their applications and reducing the risk of errors and inconsistencies, docker init can help accelerate the adoption of Docker and containerization.

See Docker Init in action

To see docker init in action, check out the following overview video by Francesco Ciulla, which demonstrates how the command adds the required Docker assets to your project.


Check out the documentation to learn more.

Docker Desktop 4.19: Compose v2, the Moby project, and more https://www.docker.com/blog/docker-desktop-4-19/ Tue, 02 May 2023

Docker Desktop release 4.19 is now available. In this post, we highlight features added to Docker Desktop in the past month, including performance enhancements, new language support, and a Moby update.


5x faster container-to-host networking on macOS

In Docker Desktop 4.19, we’ve made container-to-host networking performance 5x faster on macOS by replacing vpnkit with the TCP/IP stack from the gVisor project.

Many users work on projects that have containers communicating with a server outside their local Docker network. One example of this would be workloads that download packages from the internet via npm install or apt-get. This performance improvement should help a lot in these cases.

Over the next month, we’ll keep track of the stability of this new networking stack. If you notice any issues, you can revert to using the legacy vpnkit networking stack by setting "networkType":"vpnkit" in Docker Desktop’s settings.json config file.
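If you do need to roll back, the change is a one-line edit followed by a Docker Desktop restart. A minimal sketch, assuming the default settings.json location on macOS (adjust the path for your platform):

# Assumed macOS location of Docker Desktop's settings.json
SETTINGS="$HOME/Library/Group Containers/group.com.docker/settings.json"

# Check the current value, then set "networkType": "vpnkit" in this file
# and restart Docker Desktop to return to the legacy stack
grep -o '"networkType"[^,]*' "$SETTINGS"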

Docker Init (Beta): Support for Node and Python

In our 4.18 release, we introduced docker init, a CLI command in Beta that helps you easily add Docker to any of your projects by creating the required assets for you. In the 4.19 release, we’re happy to add to this and share that the feature now includes support for Python and Node.js. 

You can try docker init with Python and Node.js by updating to the latest version of Docker Desktop (4.19) and typing docker init in the command line while inside a target project folder. 

The Docker team is working on adding more languages and frameworks for this command, including Java, Rust, and .NET. Let us know if you would like us to support a specific language or framework. We welcome any feedback you may have as we continue to develop and improve Docker Init (Beta).

Docker Init CLI welcome message that says this utility will walk you through creating the following files with sensible defaults for your project: .dockerignore, Dockerfile, and compose.yaml.

Docker Scout (Early Access)

With Docker Desktop release 4.19, we’ve made it easier to view Docker Scout data for all of your images directly in Docker Desktop. Whether you’re using an image stored locally in Docker Desktop or a remote image from Docker Hub, you can see all that data without leaving Docker Desktop.

Images view of Docker Desktop showing myorg in Hub with myorg/app, myorg/service, and myorg/auth
myorg/app:latest in Images view, showing image hierarchy, Layers (28), and 48 vulnerabilities in 746 packages
In Images view, recommended fixes for base image debian: Tag is preferred tag (stable-slim) and Major OS version update (10-slim).

A nudge toward Compose v2

Compose v1 has reached end-of-life and will no longer be bundled with Docker Desktop after June 2023.

In preparation, a new warning will be shown in the terminal when running Compose v1 commands. Set the COMPOSE_V1_EOL_SILENT=1 environment variable to suppress this message.
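For example, a CI job or script that still calls Compose v1 can silence the notice like this:

export COMPOSE_V1_EOL_SILENT=1
docker-compose up -d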

You can upgrade by enabling Use Compose v2 in the Docker Desktop settings. When active, Docker Desktop aliases docker-compose to Compose v2 and supports the recommended docker compose syntax.

Moby 23

We updated the Docker Engine and the CLI to Moby 23.0, where we are upstreaming open source internal developments such as the containerd integration and Wasm support, which will ship with Moby 24.0. Moby 23.0 includes additional enhancements such as the --format=json shorthand variant of --format='{{ json . }}' and support for relative source paths in the -v/--volume and --mount flags of the run command. You can read more about Moby 23.0 in the release notes.
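As a quick sketch of those two conveniences (assuming a local ./config directory exists for the bind-mount example):

# JSON output without spelling out the Go template
docker ps --format=json

# Relative source path in a bind mount, resolved against the current directory
docker run --rm -v ./config:/etc/myapp alpine ls /etc/myapp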

Conclusion

We love hearing your feedback. Please leave any feedback on our public GitHub roadmap and let us know what else you’d like to see. Check out the Docker Desktop 4.19 release notes for a full breakdown of what’s new in the latest release.

Docker Compose Experiment: Sync Files and Automatically Rebuild Services with Watch Mode https://www.docker.com/blog/docker-compose-experiment-sync-files-and-automatically-rebuild-services-with-watch-mode/ Thu, 20 Apr 2023

We often hear how indispensable Docker Compose is as a development tool. Running docker compose up offers a streamlined experience and scales from quickly standing up a PostgreSQL database to building 50+ services across multiple platforms.

And, although “building a Docker image” was previously considered a last step in release pipelines, it’s safe to say that containers have since become an essential part of our daily workflow. Still, concerns around slow builds and developer experience have often been a barrier to adopting containers for local development.

We’ve come a long way, though. For starters, Docker Compose v2 now has deep integration with BuildKit, so you can use features like RUN cache mounts, SSH agent forwarding, and efficient COPY with --link to speed up and simplify your builds. We’re also constantly making quality-of-life tweaks like enhanced progress reporting and improving consistency across the Docker CLI ecosystem.

As a result, more developers are running docker compose build && docker compose up to keep their running development environment up-to-date as they make code changes. In some cases, you can even use bind mounts combined with a framework that supports hot reload to avoid the need for an image rebuild, but this approach often comes with its own set of caveats and limitations.


An early look at the watch command

Starting with Compose v2.17, we’re excited to share an early look at the new development-specific configuration in Compose YAML as well as an experimental file watch command (Figure 1) that will automatically update your running Compose services as you edit and save your code.

This preview is brought to you in no small part by Compose developer Nicolas De Loof (in addition to more than 10 bugfixes in this release alone).

Screenshot of Docker compose command line showing the new "watch" command.
Figure 1: Preview of the new watch command.

An optional new section, x-develop, can be added to a Compose service to configure options specific to your project’s daily flow. In this release, the only available option is watch, which allows you to define file or directory paths to monitor on your computer and a corresponding action to take inside the service container.

Currently, there are two possible actions: 

  • sync — Copy changed files matching the pattern into the running service container(s).
  • rebuild — Trigger an image build and recreate the running service container(s).
services:
  web:
    build: .
    x-develop:
      watch:
        - action: sync
          path: ./web
          target: /src/web
        - action: rebuild
          path: package.json

In the preceding example, whenever a source file in the web/ directory is changed, Compose will copy the file to the corresponding location under /src/web inside the container. Because Webpack supports Hot Module Reload, the changes are automatically detected and applied.

Unlike source code files, adding a new dependency cannot be done on the fly, so whenever package.json is changed, Compose will rebuild the image and recreate the web service container.

Behind the scenes, the file watching code shares its core with Tilt. The intricacies and surprises of file watching have always been near and dear to the Tilt team’s hearts, and, as Dockhands, the geeking out has continued. 

We are going to continue to build out the experience while gated behind the new docker compose alpha command and x-develop Compose YAML section. This approach will allow us to respond to community feedback early in the development process while still providing a clear path to stabilization as part of the Compose Spec.

Docker Compose powers countless workflows today, and its lightweight approach to containerized development is not going anywhere — it’s just learning a few new tricks.

Try it out

Follow the instructions at dockersamples/avatars to quickly run a small demo app, as follows:

git clone https://github.com/dockersamples/avatars.git
cd avatars
docker compose up -d
docker compose alpha watch
# open http://localhost:5735 in your browser

If you try it out on your own project, you can comment on the proposed specification on GitHub issue #253 in the compose-spec repository.

Docker Compose: What’s New, What’s Changing, What’s Next https://www.docker.com/blog/new-docker-compose-v2-and-v1-deprecation/ Tue, 31 Jan 2023

Switch to Docker Compose V2.

We’ll walk through new Docker Compose features the team has built, share what we plan to work on next, and remind you to switch to Compose V2 as soon as possible.

Compose V1 support will no longer be provided after June 2023 and will be removed from all future Docker Desktop versions. If you’re still on Compose V1, we recommend you switch as soon as possible to leave time to address any issues with running your Compose applications. (Until the end of June 2023, we’ll monitor Compose issues to address challenges related to V1 migration to V2.)

Compose V1: So long and farewell, old friend!

In the Compose V2 GA announcement we proposed the following timeline:

Compose v1 end-of-life timeline:

  • April 2022 (Compose v2 GA): Only high-severity security issues and critical bug fixes will continue to be made on v1 until the next milestone. Users can alias docker-compose to docker compose, and can opt out of v2 via the Docker Desktop UI or the docker-compose disable-v2 command.
  • October 2022 (six months post GA): Support for critical bug fixes and severe security issues ends on Compose v1. The alias and the opt-out options above remain available.
  • April 2023 (one year post GA): Users can still alias docker-compose to docker compose, but can no longer opt out of v2 via the Docker Desktop UI or the docker-compose disable-v2 command in new versions.

We’ve extended the timeline, so support now ends after June 2023. 

Switching is easy. Type docker compose instead of docker-compose in your favorite terminal.

An even easier way is to enable Compose V2 by default in Docker Desktop settings. Activating this option creates a symlink so you can keep using the docker-compose command (preserving any existing scripts) while actually running the newest version of Compose.

Enable Compose V2 under Preferences > General in Docker Desktop.
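A quick way to confirm the switch took effect:

# Both commands should now report a v2.x release
docker compose version
docker-compose version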

For more on the differences between V1 and V2, see the Evolution of Compose in docs.

What’s new?

Build improvements

During the past few months, the team’s main focus has been improving the build experience within Compose. After reviewing the proposals opened against the Compose specification, we started shipping the following new features incrementally:

  • cache_to support to allow sharing layers from intermediary images in a multi-stage build. One of the best ways to use this option is sharing cache in your CI between your workflow steps.
  • no-cache to force a full rebuild of your service.
  • pull to trigger a registry sync for force-pulling your base images.
  • secrets to use at build time.
  • tags to define a list associated with your final build image.
  • ssh to use your local ssh configuration or pass keys to your build process. This allows you to clone a private repo or interact with protected resources; the ssh info won’t be stored in the final image.
  • platforms to define multiple platforms and let Compose produce multi-arch images of your services.

Let’s dive deeper into those last two improvements.

Using ssh resources

ssh was introduced in Compose V2.4.0 GA and lets you use ssh resources at build time. Now you’re able to use your local ssh configuration or public/private keys when you build your service image. For example, you can clone a private Git repository inside your container or connect to a remote server to use critical resources during the build process of your services.

The ssh resources are only used during the build process and won’t be available in your final image.

There are different possibilities for using ssh with Compose. The first one is the new ssh attribute of the build section in your Compose file:

services:
 myservice:
   image: build-test-ssh
   build:
     context: .
     ssh:
       - fake-ssh=./fixtures/build-test/ssh/fake_rsa

And you need to reference the ID of your ssh resource inside your Dockerfile:

FROM alpine
RUN apk add --no-cache openssh-client

WORKDIR /compose
COPY fake_rsa.pub /compose/

RUN --mount=type=ssh,id=fake-ssh,required=true diff <(ssh-add -L) <(cat /compose/fake_rsa.pub)

This example is a simple demonstration of using keys at build time. It copies a public ssh key, mounts the private key inside the container, and checks if it matches the public key previously added.

It’s also possible to use the new --ssh flag directly from the CLI. Let’s use it to clone a private Git repository.

The following Dockerfile adds GitHub as a known host in the ssh configuration of the image and then mounts the ssh local agent to clone the private repository:

# syntax=docker/dockerfile:1
FROM alpine:3.15

RUN apk add --no-cache openssh-client git
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@github.com:glours/secret-repo.git

CMD ls -lah secret-repo

And using the docker compose build --no-cache --progress=plain --ssh default command will pass your local ssh agent to Compose.

Build multi-arch images with Compose

In Compose version V2.11.0, we introduced the ability to add platforms in the build section and let Compose do a cross-platform build for you.

The following Dockerfile logs the name of the service, the targeted platform to build, and the platform used for doing this build:

FROM --platform=$BUILDPLATFORM golang:alpine AS build

ARG TARGETPLATFORM
ARG BUILDPLATFORM
ARG SERVICENAME
RUN echo "I am $SERVICENAME building for $TARGETPLATFORM, running on $BUILDPLATFORM" > /log

FROM alpine
COPY --from=build /log /log

This Compose file defines an application stack with two services (A and B) which are targeting different build platforms:

services:
 serviceA:
   image: build-test-platform-a:test
   build:
     context: .
     args:
       - SERVICENAME=serviceA
     platforms:
       - linux/amd64
       - linux/arm64
 serviceB:
   image: build-test-platform-b:test
   build:
     context: .
     args:
       - SERVICENAME=serviceB
     platforms:
       - linux/386
       - linux/arm64

Be sure to create and use a docker-container build driver that allows you to build multi-arch images: 

docker buildx create --driver docker-container --use

To use the multi-arch build feature:

> docker compose build --no-cache
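If the build doesn’t produce multi-arch images as expected, it’s worth confirming that the docker-container driver is actually the active builder:

# List builders and confirm the docker-container driver is selected,
# then start it and show the platforms it can target
docker buildx ls
docker buildx inspect --bootstrap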

Additional updates

We also fixed issues, managed corner cases, and added features. For example, you can define a secret from the environment variable value:

services:
 myservice:
   image: build-test-secret
   build:
     context: .
     secrets:
       - envsecret

secrets:
 envsecret:
   environment: SOME_SECRET
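As a hedged sketch of how this is consumed (the Dockerfile line shown in the comment is illustrative, not part of this Compose file):

# The secret value is read from the environment at build time
export SOME_SECRET="example-value"
docker compose build myservice

# Inside the Dockerfile, the secret can be mounted with BuildKit, e.g.:
#   RUN --mount=type=secret,id=envsecret cat /run/secrets/envsecret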

We’re now providing Compose binaries for windows/arm64 and linux/riscv64.

We overhauled the way Compose manages .env files, environment variables, and precedence interpolation. Read the environment variables precedence documentation to learn more. 

To see all the changes we’ve made since April 2022, check out the Compose release page or the comparing changes page.

What’s next?

The Compose team is focused on improving the developer inner loop using Compose. Ideas we’re working on include:

  • A development section in the Compose specification, including a watch mode so you will be able to use the one defined by your programming tooling or let Compose manage it for you 
  • Capabilities to add specific debugging ports, or use profiling tooling inside your service containers
  • Lifecycle hooks to interact with services at different moments of the container lifecycle (for example, letting you execute a command when a container is created but not started, or when it’s up and healthy)
  • A --dry-run flag to test a Compose command before executing it

If you’d like to see something in Compose to improve your development workflow, we invite your feedback in our Public Roadmap.

To take advantage of ongoing improvements to Compose and surface any issues before support ends June 2023, make sure you’re on Compose V2. Use the docker compose CLI or activate the option in Docker Desktop settings.

To learn more about the differences between V1 and V2, check out the Evolution of Compose in our documentation.

How Rapid7 Reduced Setup Time From Days to Minutes With Docker https://www.docker.com/blog/how-rapid7-reduced-setup-time-from-days-to-minutes-with-docker/ Fri, 18 Nov 2022

This post was co-written by Kris Rivera, Principal Software Engineer at Rapid7.


Rapid7 is a Boston-based provider of security analytics and automation solutions enabling organizations to implement an active approach to cybersecurity. Over 10,000 customers rely on Rapid7 technology, services, and research to improve security outcomes and securely advance their organizations.

The security space is constantly changing, with new threats arising every day. To meet their customers’ needs, Rapid7 focuses on increasing the reliability and velocity of software builds while also maintaining their quality.

That’s why Rapid7 turned to Docker. Their teams use Docker to help development, support the sales pipeline, provide testing environments, and deploy to production in an automated, reliable way. 

By using Docker, Rapid7 transformed their onboarding process by automating manual processes. Setting up a new development environment now takes minutes instead of days. Their developers can produce faster builds that enable regular code releases to support changing requirements.

Automating the onboarding process

When developers first joined Rapid7, they were met with a static, manual process that was time-consuming and error-prone. Configuring a development environment isn’t exciting for most developers. They want to spend most of their time creating! And setting up the environment is the least glamorous part of the process.

Docker helped automate this cumbersome process. Using Docker, Rapid7 could create containerized systems that were preconfigured with the right OS and developer tools. Docker Compose enabled multiple containers to communicate with each other, and it had the hooks needed to incorporate custom scripting and debugging tools.

Once the onboarding setup was configured through Docker, the process was simple for other developers to replicate. What once took multiple days now takes minutes.

Expanding containers into production

The Rapid7 team streamlined the setup of the development environment by using a Dockerfile. This helped them create an image with every required dependency and software package.

But they didn’t stop there. As this single Docker image evolved into a more complex system, they realized that they’d need more Docker images and container orchestration. That’s when they integrated Docker Compose into the setup.

Docker Compose simplified Docker image builds for each of Rapid7’s environments. It also encouraged a high level of service separation that split out different initialization steps into separate bounded contexts. Plus, they could leverage Docker Compose for inter-container communication, private networks, Docker volumes, defining environment variables with anchors, and linking containers for communication and aliasing.

This was a real game changer for Rapid7, because Docker Compose truly gave them unprecedented flexibility. Teams then added scripting to orchestrate communication between containers when a trigger event occurs (like when a service has completed).

Using Docker, Docker Compose, and scripting, Rapid7 was able to create a solution for the development team that could reliably replicate a complete development environment. To optimize the initialization, Rapid7 wanted to decrease the startup times beyond what Docker enables out of the box.

Optimizing build times even further

After creating Docker base images, the bottom layers rarely have to change. Essentially, that initial build is a one-time cost. Even if the images change, the cached layers make it a breeze to get through that process quickly. However, you do have to reinstall all software dependencies from scratch again, which is a one-time cost per Docker image update.

Committing the installed software dependencies back to the base image allows for a simple, incremental, and often skippable stage. The Docker image is always usable in development and production, all on the development computer.

All of these efficiencies together streamlined an already fast 15-minute process down to 5 minutes — making it easy for developers to get productive faster.

How to build it for yourself

Check out code examples and explanations about how to replicate this setup for yourself. We’ll now tackle the key steps you’ll need to follow to get started.

Downloading Docker

Download and install the latest version of Docker to be able to perform Docker-in-Docker. Docker-in-Docker lets your Docker environment have Docker installed within a container. This lets your container run other containers or pull images.

To enable Docker-in-Docker, you can apt install the docker.io package as one of the first commands in your Dockerfile. Once the container is configured, mount the Docker socket from the host installation:

# Dockerfile

FROM ubuntu:20.04

# Install dependencies

RUN apt update && \
    apt install -y docker.io

Next, build your Docker image by running the following command in your CLI or shell script file:

docker build -t <docker-image-name> .

Then, start your Docker container with the following command:

docker run -v /var/run/docker.sock:/var/run/docker.sock -ti <docker-image-name>

Using a Docker commit script

Committing layered changes to your base image is what drives the core of the Dev Environments in Docker. Docker fetches the container ID based on the service name, and the changes you make to the running container are committed to the desired image. 

Because the host Docker socket is mounted into the container when executing the docker commit command, the container will apply the change to the base image located in the host Docker installation.

#!/bin/bash

SERVICE=${1}
IMAGE=${2}

# Commit changes to image
CONTAINER_ID=$(docker ps -aqf "name=${SERVICE}")

if [ ! -z "$CONTAINER_ID" ]; then
	echo "--- Committing changes from $SERVICE to $IMAGE ---"
	docker commit $CONTAINER_ID $IMAGE
fi

Updating your environment

Mount the Docker socket from the host installation. Mounting the source code is insufficient without the :z property, which tells Docker that the content will be shared between containers.

You’ll have to mount the host machine’s Docker socket into the container. This lets any Docker operations performed inside the container actually modify the host Docker images and installation. Without this, changes made in the container are only going to persist in the container until it’s stopped and removed.

Add the following code into your Docker Compose file:

# docker-compose.yaml

services:
  service-name:
    image: image-with-docker:latest
    volumes:
        - /host/code/path:/container/code/path:z
        - /var/run/docker.sock:/var/run/docker.sock

Orchestrating components

Once Docker Compose has the appropriate services configured, you can start your environment in two different ways: run docker-compose up to bring up everything, or start an individual service (along with its linked services) with the following command:

docker compose start webserver

The main container references the linked service via the linked names. This makes it very easy to override any environment variables with the provided names. Check out the YAML file below:

services:
  webserver:
  mysql:
    ports:
      - '3306:3306'
    volumes:
      - dbdata:/var/lib/mysql
  redis:
    ports:
      - '6379:6379'
    volumes:
      - redisdata:/data

volumes:
  dbdata:
  redisdata:

Note: For each service, you’ll want to choose and specify your preferred Docker Official Image version. Additionally, the MySQL Docker Official Image comes with important environment variables defaulted in — though you can specify them here as needed.

Managing separate services

Starting a small part of the stack can also be useful if a developer only needs that specific piece. For example, if we just wanted to start the MySQL service, we’d run the following command:

docker compose start mysql

We can stop this service just as easily with the following command:

docker compose stop mysql

Configuring your environment

Mounting volumes into the database services lets your containers apply the change to their respective databases while letting those databases remain as ephemeral containers.

In the main entry point and script orchestrator, pass a -p flag to ./start.sh to set the PROD_BUILD environment variable. The entry point reads the variable and builds either a production or a development version of the environment.

First, here’s how that script looks:

# start.sh

while [ "$1" != "" ];
do
	case $1 in
		-p | --prod) PROD_BUILD="true";;
	esac
	shift
done

Second, here’s a sample shell script:

export PROD_BUILD=$PROD_BUILD

Third, here’s your sample Docker Compose file:

# docker-compose.yaml

services:
  build-frontend:
    entrypoint:
      - bash
      - -c
      - "[[ \"$PROD_BUILD\" == \"true\" ]] && make fe-prod || make fe-dev"

Note: Don’t forget to add your preferred image under build-frontend if you’re aiming to make a fully functional Docker Compose file.
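Putting these three pieces together, a hypothetical invocation from the project root might look like this (assuming start.sh is executable):

./start.sh -p    # sets PROD_BUILD=true, so the build-frontend entrypoint runs "make fe-prod"
./start.sh       # PROD_BUILD stays unset, so the build-frontend entrypoint runs "make fe-dev"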

What if we need to troubleshoot any issues that arise? Debugging inside a running container only requires the appropriate debugging library in the mounted source code and an open port to attach the debugger. Here’s our YAML file:

# docker-compose.yaml

services:
  webserver:
    ports:
      - '5678:5678'
    links:
      - mysql
      - redis
    entrypoint:
      - bash
      - -c
      - ./start-webserver.sh

Note: Like in our previous examples, don’t forget to specify an image underneath webserver when creating a functional Docker Compose file.

In your editor of choice, provide a launch configuration to attach the debugger using the specified port. Once the container is running, run the configuration and the debugger will be attached:

#launch-setting.json
{
	"configurations" : [
		{
			"name":  "Python: Remote Attach",
			"type": "python",
			"request": "attach",
			"port": 5678,
			"host": "localhost",
			"pathMappings": [
				{
					"localRoot": "${workspaceFolder}",
					"remoteRoot": "."
				}
			]
		}
	]
}

Confirming that everything works

Once the full stack is running, it’s easy to access the main entry point web server via a browser on the defined webserver port.

The docker ps command will show your running containers. Docker is managing communication between containers. 

The entire software service is now running completely in Docker. All the code lives on the host computer and is mounted into the Docker container. The development environment is now completely portable using only Docker.

Remembering important tradeoffs

This approach has some limitations. First, running your developer environment in Docker will incur additional resource overhead. Docker has to run and requires extra computing resources as a result. Also, including multiple stages will require scripting as a final orchestration layer to enable communication between containers.

Wrapping up

Rapid7’s development team uses Docker to quickly create their development environments. They use the Docker CLI, Docker Desktop, Docker Compose, and shell scripts to create an extremely unique and robust Docker-friendly environment. They can use this to spin up any part of their development environment.

The setup also helps Rapid7 compile frontend assets, start cache and database servers, run the backend service with different parameters, or start the entire application stack. Using a “Docker-in-Docker” approach of mounting the Docker socket within running containers makes this possible. Docker’s ability to commit layers to the base image after dependencies are either updated or installed is also key. 

The shell scripts will export the required environment variables and then run specific processes in a specific order. Finally, Docker Compose makes sure that the appropriate service containers and dependencies are running.

Achieving future development goals

Relying on the Docker tool chain has been truly beneficial for Rapid7, since this has helped them create a consistent environment compatible with any part of their application stack. This integration has helped Rapid7 do the following: 

  • Deploy extremely reliable software to advanced customer environments
  • Analyze code before merging it in development
  • Deliver much more stable code
  • Simplify onboarding 
  • Form an extremely flexible and configurable development environment

By using Docker, Rapid7 is continuously refining its processes to push past the boundaries of what’s possible. Their next goal is to deliver production-grade stable builds on a daily basis, and they’re confident that Docker can help them get there.

Build, Share, and Run WebAssembly Apps Using Docker https://www.docker.com/blog/build-share-run-webassembly-apps-docker/ Thu, 03 Nov 2022

There’s no doubt that WebAssembly (AKA Wasm) is having a moment on the development stage. And while it may seem like a flash in the pan to some, we believe Wasm has a key role in continued containerized development. Docker and Wasm can be complementary technologies.

In the past, we’ve explored how Docker could successfully run Wasm modules alongside Linux or Windows containers. Nearly five months later, we’ve taken another big step forward with the Docker+Wasm Technical Preview. Developers need exceptional performance, portability, and runtime isolation more than ever before. 

Chris Crone, a Director of Engineering at Docker, and Michael Yuan, CEO and founder of Second State, addressed these sticking points at the CNCF’s Wasm Day 2022. Here’s their full session, but feel free to stick around for our condensed breakdown:

You don’t need to learn new processes to develop successfully with Docker and Wasm. Popular Docker CLI commands can tackle this for you. Docker can even manage the WebAssembly runtime thanks to our collaboration with WasmEdge. We’ll dive into why we’re handling this new project and the technical mechanisms that make it possible. 

Why WebAssembly and Docker?

How workloads and code are isolated has a major impact on how quickly we can deliver software to users. Chris highlights this by explaining how developers value: 

  • Easy reuse of components and defined interfaces across projects that help build value quicker
  • Maximization of shared compute resources while maintaining safe, sturdy boundaries between workloads — lowering the cost of application delivery
  • Seamless application delivery to users, in seconds, through convenient packaging mechanisms like container images so users see value quicker

We know that workload isolation plays a role in these things, yet there are numerous ways to achieve it — like air gapping, hardware virtualization, stack virtualization (Wasm or JVM), containerization, and so on. Since each has unique advantages and disadvantages, choosing the best solution can be tricky. 

Finding the right tools can also be enormously difficult. The CNCF tooling landscape alone is saturated, and while we’re thankful these tools exist, the variety is overwhelming for many developers. 

Chris believes that specialized tooling can conquer the task at hand. It’s also our responsibility at Docker to guide these tooling decisions. This builds upon our continued mission to help developers build, share, and run their applications as quickly as possible.

That’s where WasmEdge — and Michael Yuan — come in. 

Exciting opportunities with Docker and WasmEdge

Michael showed there’s some overlap between container and WebAssembly use cases. For example, developers from both camps might want to ship microservice applications. Wasm can enable quicker startup times and code-level security, which are beneficial in many cases.

However, WebAssembly doesn’t fit every use case due to threading, garbage collection, and binary packaging limitations. Running applications with Wasm also requires extra tooling, currently. 

WasmEdge in action: TensorFlow interface

Michael then kicked off a TensorFlow ML application demo to show what WasmEdge can do. This application wouldn’t work with other WASI-compatible runtimes:

Code snippet showing TensorFlow ML application demo with WasmEdge.

A few things made this demo possible:

  • Rust: a safe and fast programming language with first-class support for the Wasm compiling target.
  • Tokio: a popular asynchronous runtime that can handle multiple, parallel HTTP requests without multithreading.
  • WasmEdge’s TensorFlow: a plug-in compatible with the WASI-NN spec. Besides TensorFlow, PyTorch and OpenVINO are also supported in WasmEdge. 

Note: Tokio and TensorFlow support are WasmEdge features that aren’t available on other WASI-compliant runtimes.

With Rust’s cargo build command, we can compile the program into a Wasm module using the wasm32-wasi target platform. The WasmEdge runtime can execute the resulting .wasm file. Once the application is running, we can perform HTTP queries to run some pretty cool image recognition tasks. 
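As a rough sketch of that build-and-run loop (assuming the Rust toolchain and the wasmedge CLI are installed, and app.wasm is a placeholder for your module’s name):

# Add the Wasm/WASI target and compile the Rust program to a .wasm module
rustup target add wasm32-wasi
cargo build --target wasm32-wasi --release

# Run the module with the WasmEdge runtime
wasmedge target/wasm32-wasi/release/app.wasm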

This example demonstrates the draw of WasmEdge as a WASI-compatible runtime. According to its maintainers, “WasmEdge is a lightweight, high-performance, and extensible WebAssembly runtime for cloud native, edge, and decentralized applications. It powers serverless apps, embedded functions, microservices, smart contracts, and IoT devices.” 

Making Wasm accessible with Docker

Docker has two magical features. First, Docker and containers work on any machine and anywhere in production. Second, Docker makes it easy to build, share, and reuse components from any project. Container images and other OCI artifacts are easy to consume (and share). Isolation is baked in. Millions of developers are also very familiar with many Docker workflows like docker compose up.

Chris described how standardization and open ecosystems make Docker and container tooling available everywhere. The OCI specifications are crucial here and let us form new solutions that’ll work for nearly anyone and any supported technology (like Wasm). 

On the other hand, setting up cross-platform Wasm developer environments is tricky. You also have to learn new tools and workflows — hampering productivity while creating frustration. We believe we can help developers overcome these challenges, and we’re excited to leverage our own platform to make Wasm more accessible. 

Demoing Docker+WasmEdge

How does Wasm support look in practice? Chris fired up a demo using a preview of Docker Desktop, complete with WASI support. He created a Docker Compose file with three services: 

docker compose file javascript rust mariadb

That Rust server runs as a Wasm module, while the NGINX and MariaDB servers run in Linux containers. Chris built this Rust server using a Dockerfile that compiled from his local platform to a wasm32-wasi target. He also ran WasmEdge’s own AOT compiler to optimize the built Wasm module. However, this step is optional, and optimized modules require the WasmEdge runtime.

We’ll leave the nitty gritty to Chris (see 19:43 for the demo) for now. However, know that you can run a Compose build and come away with a wasi/wasm32 platform image. Running docker compose up launches your application which you can then interact with through your Web browser. This is one way to seamlessly run containers and Wasm side by side.

From the Docker CLI, you’ll see the Wasm microservice is less than 2MB. It contains a high-performance HTTP server and a MySQL database client. The NGINX and MariaDB servers are 10MB and 120MB, respectively. Alternatively, your Rust microservice would be tens of megabytes after building it into a Linux binary and running it in a Linux container. This underscores how lightweight Wasm images are.

Since the output is an OCI image, you can store or share it using an OCI-compliant registry like Docker Hub. You don’t have to learn complex new workflows. And while Chris and Michael centered on WasmEdge, Docker should support any WASI runtime. 

The approach is interoperable with containers and has early support within Docker Desktop. Although Wasm might initially seem unfamiliar, integration with the Docker ecosystem immediately levels that learning curve.

The future of Docker and Wasm

As Chris mentioned, we’re invested in making Docker and Wasm work well together. Our recent Docker+Wasm Technical Preview is a major step towards boosting interoperability. However, we’re also thrilled to explore how Docker tooling can improve the lives of Wasm-hungry developers — no matter their goals. 

Docker wants to get involved with the Wasm community to better understand how developers like you are building your WebAssembly applications. Your use cases and obstacles matter. By sharing our experiences with the container ecosystem with the community, we hope to accelerate Wasm’s growth and help you tackle that next big project. 

Get started and learn more

Want to test run Docker and Wasm? Check out Chris’ GitHub page for links to special Wasm-compatible Docker Desktop builds, demo repos, and more. We’d also love to hear your feedback as we continue bolstering Docker+Wasm support!

Finally, don’t miss the chance to learn more about WebAssembly and microservices — alongside experts and fellow developers — at an upcoming meetup.

How to Fix and Debug Docker Containers Like a Superhero https://www.docker.com/blog/how-to-fix-and-debug-docker-containers-like-a-superhero/ Wed, 19 Oct 2022

While containers help developers rapidly build and run cross-platform applications, creating error-free apps remains a constant challenge. And while it’s not always obvious how container errors occur, this mystery is even harder for newer developers to unravel. Figuring out how to debug Docker containers can seem daunting.

In this Community All-Hands session, Ákos Takács demonstrated how to solve many of these pesky problems and gain the superpower of fixing containers.

Each issue can impact your image builds and final applications. Some bugs may not trigger clear error messages. To further complicate things, source-code inspection isn’t always helpful. 

But, common container issues don’t have to be your kryptonite! We’ll share Ákos’ favorite tips and show you how to conquer these development challenges.


Finding and fixing common container mistakes

Everyone is prone to the occasional silly mistake. You code when you’re tired, suffer from the occasional keyboard slip, or sometimes fail to copy text correctly between steps. These missteps can carry forward from one command to the next. And because easy-to-miss things like spelling errors or character omissions can fly under the radar, you’re left doing plenty of digging to solve basic problems. Nobody wants that, so what tools are at your disposal? 

Using the CLI for extra container visibility

Say we have an image downloaded from Docker Hub — any image at all — and use some variation of the docker run command to run it. The resulting container will be running the default command. If you want to surface that command, entering docker container ls --all will grab a list of containers with their respective commands. 

Users often copy these commands and reuse them within other longer CLI commands. As you’d expect, it’s incredibly easy to highlight incorrectly, copy an incomplete phrase, and run a faulty command that uses it.

While spinning up a new container, you’ll hit a snag. The runtime in this instance will fail since Docker cannot find the executable. It’s not located in the PATH, which indicates a problem:

Docker Run

Running the docker container ls --all command also offers some hints. Note the httpd-foregroun container command paired with its created (but not running) container. Conversely, the v0 container that’s running successfully leverages a valid, complete command:

Docker Container ls

How do we investigate further? Use the docker run --rm -it --name MYCONTAINER [IMAGE] bash command to open an interactive terminal within your container. Take the container’s default command and attempt to run it again. A “command not found” error message will appear.

This is much more succinct and shows that you’ve likely entered the wrong command — in this case by forgetting a character. While Ákos’ example uses httpd, it’s applicable to almost any container image. 

Change your CLI output formatting for visibility and readability

Container commands are clipped once they exceed a certain length in the terminal output. That prevents you from inspecting the command in its entirety. 

Luckily, Ákos showed how piping --format '{{ json . }}' output into jq -C can improve how your terminal displays results. Instead of cutting off portions of text, here’s how your docker container ls --all result will look:

JSON jQ C Format

You can read and compare any parameters in full. Nothing is hidden. If you don’t have jq installed, you could instead enter the following command to display outputs similarly minus syntax highlighting. This beats the default tabular layout for troubleshooting:

docker container ls --all --format '{{ json . }}' | python3 -m json.tool --json-lines

Lastly, why not just expand the original table view while only displaying relevant information? Run the following command with the --no-trunc flag to expand those table rows and completely reveal each cell’s contents:

docker container ls --all --format 'table {{ .Names }}\t{{ .Status }}\t{{ .Command }}' --no-trunc

These examples highlight the importance of visibility and transparency in troubleshooting. When you can uncover and easily digest the information you need, making corrections is much easier.      

Remember to leverage your logs

By following best practices, any active application running within a Docker container will produce log outputs. While you might view logging as a problem-catching mechanism, many running containers don’t experience issues.

Ákos believes it’s important to understand how normal log entries look. As a result, identifying abnormal log entries becomes that much easier. The docker logs command enables this:

Docker Logs

The process of tuning your logs differs between tools and languages. For example, Ákos drew from methods involving httpd — like trace for detailed trace-level messages or LogLevel for filtering error messages — but these practices are widely applicable. You’ll probably want to zero in on startup and runtime errors to diagnose most issues. 

Log handling is configurable. Here are some common commands to help you drill down into container issues (and reduce noise):

Grab your container’s last 100 logs:

docker logs --tail 100 [container ID]

Grab all logs for a specific container:

docker logs [container ID]

View all active processes within a running container, should its logs be inaccessible:

docker top [container ID]
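It can also help to stream logs while you reproduce an issue, for example:

# Follow new log lines as they arrive, starting from the last 10 minutes
docker logs --follow --since 10m [container ID]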

Log inspection enables easier remediation. Alongside Ákos, we agree that you should confirm any container changes or fixes after making them. This means you’ve taken the right steps and can move ahead.

Want to view all your logs together within Docker Desktop? Download our Logs Explorer extension, which lets you browse through your logs using filters and advanced search capabilities. You can even view new logs as they populate.

Logs Explorer

Tackle issues with ENTRYPOINT

When running applications, you’ll need to run executable files within your container. The ENTRYPOINT portion of your Dockerfile sets the main command within a container and basically assigns it a task. These ENTRYPOINT instructions rely on executable files being in the container. 

In Ákos’ example, he tackles a scenario where improper permissions can prevent Docker from successfully mounting and running an entrypoint.sh executable. You can copy his approach by doing the following: 

  1. Use the ls -l $PWD/examples/v6/entrypoint.sh command to view your file’s permissions, which may be inadequate.
  2. Confirm that permissions are incorrect. 
  3. Run a chmod 774 command so the file’s owner and group can read, write, and execute it (and others can read it).
  4. Use docker run to spin up a container v7 from the original entrypoint, which may work briefly but soon stop running. 
  5. Inspect the entrypoint.sh file to confirm our desired command exists. 

We can confirm this again by entering docker container inspect v7-exiting to view our container definition and parameters. While the Entrypoint is specified, its Cmd definition is null. That’s what’s causing the issue:

Config File

Why does this happen? Many developers don’t realize that setting --entrypoint automatically clears the default command of any image that has one. You’ll need to redefine the command for your container to work properly. Here’s how that CLI command might look:

docker run -d -v $PWD/examples/v7/entrypoint.sh:/entrypoint.sh --entrypoint /entrypoint.sh --name v7-running httpd:2.4 httpd-foreground

This works for any container image but we’re just drawing from an earlier example. If you run this and list your containers again, v7 will be active. Confirm within your logs that everything looks good. 

Access and inspect container content

Carefully managing files and system resources is critical during local development. That’s doubly true while working with multiple images, containers, or resource constraints. There are scenarios where your containers bloat as their contents accumulate over time. 

Keeping your files tidy is one thing. However, you may also want to copy your files from your container and move them into a temporary folder — using the docker cp command with a specified directory. Using a variation of ls -la ./var/v8, borrowing from Ákos’ example, then produces a list containing every file. 

This is great for visibility and confirming your container’s contents. And we can diagnose any issues one step further with docker container diff v8 to view which files have been changed, appended, or even deleted. If you’re experiencing strange container behavior, digging into these files might be useful. 
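Following the example above, a hedged sketch of that flow (the htdocs path assumes the stock httpd image layout):

# Copy the container's web root to a temporary folder and inspect it
docker cp v8:/usr/local/apache2/htdocs ./var/v8
ls -la ./var/v8

# Show files changed, added, or deleted inside the container
docker container diff v8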


Note: You can also leverage our Resource Usage extension to monitor disk space consumption, network activity, CPU usage, and memory usage in real time!

Dive deeply into files and folders

Close inspection is where hexdump comes in handy. The hexdump function converts your file into hexadecimal code, which is much more readable than binary. Ákos used the following commands:

docker cp v8:/usr/local/apache2/bin/httpd ./var/v8-httpd
hexdump -C -n 100 ./var/v8-httpd

You can adjust this -n number to read additional or fewer initial bytes. If your file contains text, this content will stand out and reveal the file’s main purpose. But, say you want to access a folder. While changing your directory and running docker container inspect … is standard, this method doesn’t work for Docker Desktop users. Since Desktop runs things in a VM, the host cannot access the folders within. 

Ákos showcased CTO Justin Cormack’s own nsenter1 image on GitHub, which lets us tap into those containers running with Docker Desktop environments. Docker Captain Bret Fisher has since expanded upon nsenter1’s documentation while adding useful commands. With these pieces in place, run the following command:

docker run --rm --privileged --pid=host alpine:3.16.2 nsenter -t 1 -m -u -i -n -p -- sh -c "cd \"$(docker container inspect v8 --format '{{ .GraphDriver.Data.UpperDir }}')\" && find ."

This command’s output mirrors that from our earlier docker container diff command. You can also run a hexdump using that same image above, which gives you the same troubleshooting abilities regardless of your environment. You can also inspect your entrypoint.sh to make important changes.  

Solve Docker Build errors 

While Docker BuildKit is quick and resilient, you can encounter errors that prevent image build completion. To learn why, run the following command to view each sequential build stage:

docker build $PWD/[MY SOURCE] --tag "MY TAG" --progress plain

BuildKit will provide readable context for each step and display any errors that occur:

Docker Build Progress

If you see a missing file or directory error like the one above, don’t worry! You can use the cat $PWD/[MY SOURCE]/[MY DOCKERFILE] command to view the contents of your Dockerfile. Not only can you see where you misstepped more clearly, but you can also add a new instruction before the failing command to list your folder’s contents. 

Maybe those contents need updating. Maybe your folder is empty! In that case, you need to update everything so docker build has something to leverage. 

Next, run the build command again with the --no-cache flag added. This flag tells Docker to cleanly build from scratch each time without relying on caching:
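Building on the earlier command, that looks like this:

docker build $PWD/[MY SOURCE] --tag "MY TAG" --progress plain --no-cache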

Docker Build No Cache

You can progressively build updated versions of your Dockerfile and test those changes, given the cascading nature of instructions. Writing new instructions after the last working instruction — or making changes earlier on in your file — can eliminate those pesky build issues. Mechanisms like unlink or cp are helpful. The first behaves like rm while accepting only one argument, while cp copies critical files and folders into your image from a source.  

Solve Docker Compose errors

We use Docker Compose to spin up multiple services simultaneously using the docker compose --project-directory $PWD/[MY SOURCE] up -d command. 

However, one or more of those containers might unexpectedly exit. By running docker compose --project-directory $PWD/[MY SOURCE] ps to list out our services, you can see which containers are running or exited.

To pinpoint the problem, you’d usually grab logs via the docker compose logs command. You won’t need to specify a project directory in most cases. However, your container produces no logs since it isn’t running. 

Next, run the cat $PWD/[MY SOURCE]/docker-compose.yml command to view your Docker Compose file’s contents. It’s likely that your services definitions need fixing, so digging line by line within the CLI is helpful. Enter the following command to make this output even clearer:

docker compose --project-directory $PWD/[MY SOURCE] config

Your container exits when the commands contained within are invalid — just like we saw earlier. You’ll be able to see if you’ve entered a command incorrectly or if that command is empty. From there, you can update your Compose file and re-run docker compose --project-directory $PWD/[MY SOURCE] up -d. You can now confirm that everything is working by listing your services again. Your terminal will also output logs! 

Optional: Make direct file edits within running containers

Finally, it’s possible (and tempting) to directly edit your files within your container. This is viable while testing new changes and inspecting your containers. However, it’s usually considered best practice to create a new image and container instead. 

If you want to make edits within running containers, an editor like VS Code allows this, while IntelliJ doesn’t by comparison. Install the Docker extension for VS Code. You can then browse through your containers in the left sidebar, expand your collection of resources, and directly access important files. For example, web developers can directly edit their index.html files to change how user content is structured. 

Investigate less and develop more

Overall, the process of fixing a container, on the surface, may seem daunting to newer Docker users. The methods we’ve highlighted above can dramatically reduce that troubleshooting complexity — saving you time and effort. You can spend less time investigating issues and more time creating the applications users love. And we think those skills are pretty heroic. 

For more information, you can view Ákos Takács’ full presentation on YouTube to carefully follow each step. Want to dive deeper? Check out these additional resources to become a Docker expert: 

How to Build and Run Next.js Applications with Docker, Compose, & NGINX https://www.docker.com/blog/how-to-build-and-run-next-js-applications-with-docker-compose-nginx/ Wed, 31 Aug 2022

At DockerCon 2022, Kathleen Juell, a Full Stack Engineer at Sourcegraph, shared some tips for combining Next.js, Docker, and NGINX to serve static content. With nearly 400 million active websites today, efficient content delivery is key to attracting new web application users.

In some cases, using Next.js can boost deployment efficiency, accelerate time to market, and help attract web users. Follow along as we tackle building and running Next.js applications with Docker. We’ll also cover key processes and helpful practices for serving that static content. 

Why serve static content with a web application?

According to Kathleen, the following are the benefits of serving static content: 

  • Fewer moving parts, like databases or other microservices, directly impact page rendering. This backend simplicity minimizes attack surfaces. 
  • Static content stands up better (with fewer uncertainties) to higher traffic loads.
  • Static websites are fast since they don’t require repeated rendering.
  • Static website code is stable and relatively unchanging, improving scalability.
  • Simpler content means more deployment options.

Since we know why building a static web app is beneficial, let’s explore how.

Building our services stack

To serve static content efficiently, a three-pronged services approach composed of Next.js, NGINX, and Docker is useful. While it’s possible to run a Next.js server, offloading those tasks to an NGINX server is preferable. NGINX is event-driven and excels at rapidly serving content thanks to its single-threaded architecture. This enables performance optimization even during periods of higher traffic.  

Luckily, containerizing a cross-platform NGINX server instance is pretty straightforward. This setup is also resource friendly. Below are some of the reasons, stated explicitly or implied in her talk, why Kathleen leveraged these three technologies. 

Docker Desktop also gives us the tools needed to build and deploy our application. It’s important to install Docker Desktop before recreating Kathleen’s development process. 

The following trio of services will serve our static content:

First, our auth-backend has a build context rooted in its own directory and a port mapping. It’s based on the slimmer alpine flavor of the Node.js Docker Official Image and uses named Dockerfile build stages, so COPY --from references don’t break if stages are later reordered. 

Second, our client service has its own build context and a named volume mapped to the staticbuild:/app/out directory. This lets us mount our volume within our NGINX container. We’re not mapping any ports since NGINX will serve our content.

Third, we’ll containerize an NGINX server that’s based on the NGINX Docker Official Image.

As Kathleen mentions, ending this client service’s Dockerfile with a RUN command is key. We want the container to exit after completing the yarn build process. This process generates our static content and should only happen once for a static web application.
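
As a minimal sketch, the client Dockerfile could look something like this (the base image, scripts, and paths are assumptions rather than Kathleen’s exact file):

FROM node:16-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
# Assumes the project’s build script produces a static export in /app/out,
# the directory the Compose file mounts as the staticbuild volume.
# Ending on RUN means the image build does the work; the container itself
# defines no long-running command and exits shortly after starting.
RUN yarn build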

Each component is accounted for within its own container. Now, how do we seamlessly spin up this multi-container deployment and start serving content? Let’s dive in!

Using Docker Compose and Docker volumes

The simplest way to orchestrate multi-container deployments is with Docker Compose. This lets us define multiple services within a unified configuration, without having to juggle multiple files or write complex code. 

We use a compose.yml file to describe our services, their contexts, networks, ports, volumes, and more. These configurations influence app behavior. 

Here’s what our complete Docker Compose file looks like: 

services:
  auth-backend:
    build:
      context: ./auth-backend
    ports:
      - "3001:3001"
    networks:
      - dev

  client:
    build:
      context: ./client
    volumes:
      - staticbuild:/app/out
    networks:
      - dev

  nginx:
    build:
      context: ./nginx
    volumes:
      - staticbuild:/app/public
    ports:
      - "8080:80"
    networks:
      - dev

networks:
  dev:
    driver: bridge

volumes:
  staticbuild:

You’ll also see that we’ve defined our networks and volumes in this file. These services all share the dev network, which lets them communicate with each other while remaining discoverable. They also share a common volume, staticbuild. We’ll now explain why that’s significant.

Using mounted volumes to share files

Specifically, this example leverages named volumes to share files between containers. By mapping the staticbuild volume to Next.js’ default out directory location, you can export your build and serve that content with your NGINX server. The export typically consists of one or more HTML files. Note that the NGINX container mounts the same volume at /app/public instead. 

While Next.js helps present your content on the frontend, NGINX delivers those important resources from the backend. 
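
As a sketch of how the NGINX side might pick those files up (this configuration is an assumption, not taken from Kathleen’s repository), a small server block pointed at the mounted directory is enough:

# nginx/default.conf, copied into the image by the nginx service’s Dockerfile
server {
    listen 80;
    # The staticbuild volume is mounted at /app/public (see the Compose file above)
    root /app/public;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}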

Leveraging A/B testing to create tailored user experiences

You can customize your client-side code to change your app’s appearance, and ultimately the end-user experience. This code determines how page content is displayed once a server like NGINX is serving it. It may also determine which users see which content, commonly based on sign-in status, for example. 

Testing helps us understand how application changes can impact these user experiences, both positively and negatively. A/B testing helps us uncover the “best” version of our application by comparing features and page designs. How does this look in practice? 

Specifically, you can use cookies and hooks to track user login activity. When a user logs in, they’ll see something like user stories (from Kathleen’s example). Logged-out users won’t see this content. Alternatively, a web user might only have access to certain pages once they’re authenticated. It’s your job to monitor user activity, review any feedback, and determine if those changes bring clear value. 

These are just two use cases for A/B testing, and the possibilities are nearly endless when it comes to conditionally rendering static content with Next.js. 

Containerize your Next.js static web app

There are many different ways to serve static content. However, Kathleen’s three-service method remains an excellent example. It’s useful both during exploratory testing and in production. To learn more, check out Kathleen’s complete talk. 

By containerizing each service, your application remains flexible and deployable across any platform. Docker can help developers craft accessible, customizable user experiences within their web applications. Get started with Next.js and Docker today to begin serving your static web content! 

Additional Resources

9 Tips for Containerizing Your .NET Application https://www.docker.com/blog/9-tips-for-containerizing-your-net-application/ Mon, 18 Jul 2022 14:00:35 +0000 https://www.docker.com/?p=34809 Over the last five years, .NET has maintained its position as a top framework among professional developers. In Stack Overflow’s 2022 Developer Survey, .NET ranked first in the “other framework and libraries” category. Stack Overflow reserves this for developers who’ve done extensive development work with key technologies in the past year, and want to continue using them the next.

 

[Chart: Stack Overflow 2022 Developer Survey results for frameworks and libraries]

Data courtesy of Stack Overflow.

 

Over 60,000 developers and 3,700 companies have contributed to the .NET platform. Since its 2002 debut, .NET has supported multiple languages (C#, F#, Visual Basic), platforms (.NET Core, .NET Framework, Mono), editors, and libraries for building diverse applications. .NET provides standard sets of base class libraries and APIs common to all .NET applications.

Why is containerizing a .NET application important?

.NET was originally designed for Windows. Meanwhile, we originally based Docker around Linux. .NET has an application virtual machine (called the Common Language Runtime) and other components aimed at solving build problems common to large enterprise applications from 10 to 20 years ago. The two weren’t inherently compatible on day one.

Both have since evolved to become cross-platform, open-source developer platforms. When building tiny containers with a single process running inside, using a directly compiled language is typically faster. That said, .NET has come a long way and is now container-friendly. Microsoft has made a concerted effort to enable container support since Windows Server 2016, with the goal of keeping up with this growing container ecosystem. Today, you can run containers on Windows hosts based not just on the Linux kernel, but also on the Windows kernel.

Running your .NET application in a Docker container has numerous benefits. First, Docker containers can act as isolated test environments. .NET developers can code and test locally while ensuring consistency between development and production. Second, it eliminates deployment issues caused by missing dependencies while moving to a production environment. Third, containers let developers of all skill levels build, share, and run containerized .NET applications. Containers are immutable infrastructure, provide portability, and help improve scalability. Likewise, the modularity and lightweight nature of .NET 6 make it perfect for containers. 

Containerizing a .NET application is easy. You can do this by copying source code files and building a Docker image. We’ll also cover common concerns like image bloat, missing image tags, and poor build performance with these nine tips for containerizing your .NET application code.

Containerizing a Student Record Management Application

To better understand those concerns, let’s look at a simple student record management application. In our last blog post, you saw how easy building and deploying a student database application is via a Dockerfile and Docker Compose.

Running your application is simple. You’ll clone the GitHub project repository and use the Docker Compose CLI to bring up the complete application with the following commands:

git clone https://github.com/dockersamples/student-record-management

 

Change your directory to student-record-management to see the following Docker Compose file:

services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - postgres-data:/var/lib/postgresql/data
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  app:
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - 5000:80
    depends_on:
      - db
volumes:
  postgres-data:

 

We’ve defined three services in this Compose file: db, adminer, and app. The Adminer (formerly phpMinAdmin) Docker image is a fully featured database management tool written in PHP. We’ve set up port forwarding via the ports attribute. The depends_on attribute lets us express dependencies between services. In this case, we’ll start Postgres before our core application.  

Run the following command to bring up our student record management application:

docker-compose up -d

 

Once it’s up and running, you can view the Docker Dashboard and click on the “arrow” key (shown in app-1) to quickly access the application:

 

[Screenshot: Docker Dashboard showing the running student-record-management application]

 

Typically, developers use the following Dockerfile template to build a Docker image. A Dockerfile is a list of sequential instructions that build your container image. This image is composed of a stack of layers, and each represents an instruction in our Dockerfile. Each layer contains changes to its underlying layer.

FROM mcr.microsoft.com/dotnet/sdk:6.0

WORKDIR /src
COPY . ./

RUN dotnet build -o /app
RUN dotnet publish -o /publish

WORKDIR /publish
ENV ASPNETCORE_URLS=http://+:80/
EXPOSE 80
CMD ["./myWebApp"]

 

The first line defines our base image, which is around 754 MB in size (or, alternatively, 994 MB for Nano Server and 6.34GB for Windows Server). The COPY instruction copies the project files from the host system into the image’s /src working directory. The EXPOSE instruction tells Docker that the container listens on network port 80 at runtime. Lastly, our CMD lets us configure a container that’ll run as an executable.

To build a Docker image, we’ll use the docker build command:

docker build -t student-app .

 

Let’s check the size of our new Docker image:

docker images
REPOSITORY                             TAG       IMAGE ID       CREATED         SIZE
student-app                            latest    d3caa8643c2c   4 minutes ago   827MB

 

One key drawback of this example is that our Docker image isn’t optimized. Crucially, optimization lets teams share smaller images, boost performance, and debug more easily. It’s essential at every CI/CD stage, including production. If you’re using Windows base images, you can expect your images to be much larger than Linux base images. There must be a better build approach that lets us discard unneeded files after compilation, since these aren’t required in our final image.

1) Choosing the Right .NET Docker Images

The official .NET Docker images are publicly available in the Microsoft repositories on Docker Hub. The process of identifying and picking the right container base image while building applications can be confusing. To simplify the selection process, most image repositories provide extensive tagging to help you select a specific framework version. They also let you choose the right operating system, like a specific Linux distribution or Windows version.

Microsoft offers two categories of images. The first encompasses images used to develop and build .NET apps, while the second houses those used to run .NET apps. For example, mcr.microsoft.com/dotnet/sdk:6.0 is used during the development and build process. This image includes the compiler and any other .NET dependencies. Meanwhile, mcr.microsoft.com/dotnet/aspnet:6.0 is ideal for production environments. This image includes only the ASP.NET Core runtime (not the SDK), along with ASP.NET Core optimizations, on Linux and Windows (multi-arch).

You can visit GitHub to browse available Docker images.

2) Optimize your Dockerfile for dotnet Restore

When building .NET Core apps with Docker, it’s important to consider how Docker caches layers while building your app.

A common way to leverage the build cache is to copy only the .csproj, .sln, and nuget.config files for your app before performing a dotnet restore, instead of copying the full source code. The NuGet package restore can be one of the slowest parts of the build, and it only depends on these files. By copying them first, Docker can cache the restore result. For example, it won’t need to run again if you only change a .cs file.

FROM mcr.microsoft.com/dotnet/sdk:6.0
WORKDIR /src

COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet build -o /app
RUN dotnet publish -o /publish
WORKDIR /publish
ENV ASPNETCORE_URLS=http://+:80/
EXPOSE 80
CMD ["./myWebApp"]

 

💁  The dotnet restore command uses NuGet to restore dependencies and project-specific tools that are specified in the project file.

3) Use a Multi-Stage Build

With multi-stage builds, Docker can use one base image for compilation, packaging, and unit tests. Another image then holds the application runtime. This makes the final image more secure and smaller in size (as it does not contain any development or debugging tools). Multi-stage Docker builds are a great way to ensure your builds are 100% reproducible and as lean as possible. You can create multiple stages within a Dockerfile and control how you build that image.

The .NET SDK includes the .NET runtimes and tooling to develop, build, and package .NET applications. One best practice while creating Docker images is keeping the image compact. You can containerize your .NET applications using a multi-layer approach. Each layer may contain different parts of the application, like dependencies, source code, resources, and even snapshot dependencies. Alternatively, you can build the application in a separate image from the final image that contains the runnable application. To better understand this, let’s analyze the following Dockerfile.

The build stage uses the SDK image to build the application and create the final artifacts in the publish folder. The final stage copies the artifacts from the build stage to the app folder, exposes port 80 to incoming requests, and specifies the command to run the application, myWebApp. In the first stage, we restore dependencies and build the application; in the second stage, we copy only the published output into the final image. Here’s a sample multi-stage Dockerfile for the student database example:

FROM mcr.microsoft.com/dotnet/sdk:6.0 as build

WORKDIR /src
COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet build -o /app
RUN dotnet publish -o /publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0 as base
COPY --from=build  /publish /app
WORKDIR /app
EXPOSE 80
CMD ["./myWebApp"]


The first stage is labeled build, where mcr.microsoft.com/dotnet/sdk is the base image.

docker images
REPOSITORY                                  TAG              IMAGE ID       CREATED         SIZE
mywebapp_app                                latest           1d4d9778ce14   3 hours ago     229MB

 

Our final image size shrinks dramatically to 229 MB, compared to the single-stage Dockerfile’s 827 MB!

4) Use Specific Base Image tags, Instead of “Latest”

While building Docker images, we always recommend tagging them with useful tags that codify version information, intended destination (prod or test, for instance), stability, or other useful information for deploying the application in different environments. Conversely, we don’t recommend relying on the :latest tag. This :latest tag is often updated frequently, and new versions can cause breaking changes. If you want to protect yourself against breaking changes, it’s best to pin to a specific version and then update to newer versions when you’re ready.

For example, we’d avoid using mcr.microsoft.com/dotnet/sdk:latest as a base image. Instead, you should use specific tags like mcr.microsoft.com/dotnet/sdk:6.0, mcr.microsoft.com/dotnet/sdk:6.0-windowsservercore-ltsc2019, or others.

5) Run as a Non-root User for Security Purposes

An application running within a Docker container has root access on Linux (or administrator privileges on Windows) by default. This can undermine application security. You can solve this problem by adding USER instructions within your Dockerfile. The USER instruction sets the preferred user name (or UID) and optionally the user group (or GID) while running the image, as well as for any subsequent RUN, CMD, or ENTRYPOINT instructions.
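
Here’s a minimal sketch of the Linux case (the appuser name and the switch to port 8080 are choices made for this example, not part of the original sample):

FROM mcr.microsoft.com/dotnet/aspnet:6.0 as base
COPY --from=build /publish /app
WORKDIR /app
# Create an unprivileged user and hand the app directory over to it
RUN adduser --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# Non-root processes can’t bind to ports below 1024, so listen on 8080 instead of 80
ENV ASPNETCORE_URLS=http://+:8080
EXPOSE 8080
CMD ["./myWebApp"]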

Windows networks commonly use Active Directory (AD) to enable authentication and authorization between users, computers, and other network resources. Windows application developers often use Integrated Windows Authentication. This makes it easy for users and other services to automatically, transparently sign into the application using their credentials. Although Windows containers cannot be domain joined, they can still use Active Directory domain identities to support various authentication scenarios.

To achieve this, you can configure a Windows container to run with a group Managed Service Account (gMSA), which is a special type of service account introduced in Windows Server 2012. It’s designed to let multiple computers share an identity without requiring a password.
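
At run time, that looks roughly like the following (the credential spec file name is hypothetical and must be generated on the host beforehand, for example with Microsoft’s CredentialSpec PowerShell module):

docker run --security-opt "credentialspec=file://webapp01.json" --hostname webapp01 mcr.microsoft.com/windows/servercore:ltsc2019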

6) Use .dockerignore

To increase the build performance (and as a general best practice) we recommend creating a .dockerignore file in the same directory as your Dockerfile. For this tutorial, your .dockerignore file should contain the following lines:

Dockerfile*
**/[b|B]in/
**/[O|o]bj/

 

These lines exclude the bin and obj directories from the Docker build context. There are many good reasons to carefully structure a .dockerignore file, but this simple version works for now. It’s also helpful to understand how the docker build command works and what the build context means.

The build context is the place or space where the developer works. It can be a folder in Windows or a directory in Linux. In this directory, you’ll find every necessary app component like source code, configuration files, libraries, and plugins. You’ll determine which of these components to include while constructing a new image.

With the .dockerignore file, we can determine which components are vital. They’ll ultimately belong to the new image that we’re building.

For example, if we don’t want to include the bin and obj directories in our image build, we just need to indicate that within our .dockerignore file.

7) Add Health Checks to Your Containers

The HEALTHCHECK instruction tells Docker how to test a container and confirm that it’s still working. This can detect (for example) when a web server is stuck in an infinite loop and unable to handle new connections — even though the server process is still running.

When an application is deployed in production, an orchestrator like Kubernetes or a service fabric will most likely manage it. By providing the health check, you’re sharing the status of your containers with the orchestrator to permit management tasks based on your configurations. Let’s look at the following example:

FROM mcr.microsoft.com/dotnet/sdk:6.0 as build

WORKDIR /src
COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet build -o /app
RUN dotnet publish -o /publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0 as base
COPY --from=build  /publish /app
WORKDIR /app
EXPOSE 80
#If you’re using the Linux Container
HEALTHCHECK CMD curl --fail http://localhost || exit 1
#If you’re using Windows Container with Powershell
#HEALTHCHECK CMD powershell -command `
#    try { `
#     $response = iwr http://localhost; `
#     if ($response.StatusCode -eq 200) { return 0} `
#     else {return 1}; `
#    } catch { return 1 }

CMD ["./myWebApp"]

 

When HEALTHCHECK is present in a Dockerfile, you’ll see the container’s health in the STATUS column while running docker ps. A container that passes this check displays as healthy. An unhealthy container displays as unhealthy.

docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS                           PORTS                  NAMES
7bee4d6a652a   student-app    "./myWebApp"             2 seconds ago   Up 1 second (health: starting)   0.0.0.0:5000->80/tcp   modest_murdock
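
If you only want the health state itself, docker inspect can read it straight from the container’s metadata. A quick sketch, reusing the container name from the output above:

docker inspect --format "{{.State.Health.Status}}" modest_murdock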

 

8) Optimize for Startup Performance

You can improve .NET app startup times and reduce latency by compiling your assemblies with Ready to Run (R2R) compilation. However, this will increase your build time as a compromise. You can do this by setting the PublishReadyToRun property, which takes effect when you publish an application.

You can add the PublishReadyToRun property in two ways:

1) Set it within your project file:

<PropertyGroup>
  <PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>

 

2) Set it using the command line:

/p:PublishReadyToRun=true

 

The default Dockerfile that comes with the sample doesn’t use R2R compilation, since the application is too small to warrant it. The bulk of the IL code executed in this sample application is within .NET’s libraries, which are already R2R-compiled. The example below enables R2R in the Dockerfile, where we pass /p:PublishReadyToRun=true to the dotnet build and dotnet publish commands.

FROM mcr.microsoft.com/dotnet/sdk:6.0 as build

WORKDIR /src
COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet build -o /app -r linux-x64 /p:PublishReadyToRun=true
RUN dotnet publish -o /publish -r linux-x64 --self-contained true --no-restore /p:PublishTrimmed=true /p:PublishReadyToRun=true /p:PublishSingleFile=true

FROM mcr.microsoft.com/dotnet/aspnet:6.0 as base
COPY --from=build  /publish /app
WORKDIR /app
EXPOSE 80
HEALTHCHECK CMD curl --fail http://localhost || exit 1

CMD ["./myWebApp"]

9) Choose the Appropriate Isolation Mode For Windows Containers

There are two distinct modes of runtime isolation for Windows containers:  

  • Process Isolation – In this mode, multiple container instances can run concurrently on the same host, with isolation of the file system, registry, network ports, process and thread ID space, and Object Manager namespace. It’s almost identical to how Linux containers run.
  • Hyper-V Isolation – In this mode, containers run inside a highly-optimized virtual machine, which provides hardware-level isolation between containers and hosts.

Most developers prefer process isolation when developing locally. It typically consumes fewer hardware resources than Hyper-V isolation. Hence, developers must account for the additional hardware needed while running the container in Hyper-V mode. However, your primary consideration when deciding to choose Hyper-V isolation is security — since it provides added hardware-level isolation. While Windows Server supports both options (default: Process Isolation), Windows 10+ only supports Hyper-V isolation.

To choose the isolation level, pass the --isolation flag: 

docker run -it --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd

Conclusion

You’ve now seen some of the many methods for optimizing your Docker images. In any case, carefully crafting your Dockerfile is essential. If you’d like to go further, check out these bonus resources that cover recommendations and best practices for building secure, production-grade Docker images:

At Docker, we’re incredibly proud of our vibrant, diverse and creative community. From time to time, we feature cool contributions from the community on our blog to highlight some of the great work our community does. Are you working on something awesome with Docker? Send your contributions to Ajeet Singh Raina (@ajeetraina) on our Docker Community Slack channel, and we might feature your work!

 


Quickly Spin Up New Development Projects with Awesome Compose https://www.docker.com/blog/quickly-spin-up-new-development-projects-with-awesome-compose/ Wed, 13 Jul 2022 14:00:30 +0000 https://www.docker.com/?p=34783 Containers optimize our daily development work. They’re standardized, so that we can easily switch between development environments — either migrating to testing or reusing container images for production workloads.

However, a challenge arises when you need more than one container. For example, you may develop a web frontend connected to a database backend with both running inside containers. While possible, this approach risks negating some (or all) of that container magic, since we must also consider storage interaction, network interaction, and port configurations. Those added complexities are tricky to navigate.

How Docker Compose Can Help

Docker Compose streamlines many development workloads based around multi-container implementations. One such example is a WordPress website that’s protected with an NGINX reverse proxy, and requires a MySQL database backend.

Alternatively, consider an eCommerce platform with a complex microservices architecture. Each service runs inside its own container: the product catalog, the shopping cart, payment processing, and, finally, product shipping. These services rely on the same database backend container, with a Redis container for caching and performance.

Maintaining a functional eCommerce platform means running several container instances. This doesn’t fully address the additional challenges of scalability or reliable performance.

While Docker Compose lets us create our own solutions, building the necessary Dockerfile scripts and YAML files can take some time. To simplify these processes, Docker introduced the open source Awesome Compose library in March 2020. Developers can now access pre-built samples to kickstart their Docker Compose projects.

What does that look like in practice? Let’s first take a more detailed look at Docker Compose. Next, we’ll explore step-by-step how to spin up a new development project using Awesome Compose.

Having some practical knowledge of Docker concepts and base commands is helpful while following along. However, this isn’t required! If you’d like to brush up or become familiarized with Docker, check out our orientation page and our CLI reference page.

How Docker Compose Works

Docker Compose is based on a compose.yaml file. This file specifies the platform’s building blocks — typically referencing active ports and the necessary, standalone Docker container images.

The diagram below represents snippets of a compose.yaml file for a WordPress site with a MySQL database, a WordPress frontend, and an NGINX reverse proxy:

 

[Image: compose.yaml snippets for the MySQL, WordPress, and NGINX services]

 

We’re using three separate Docker images in this example: MySQL, WordPress, and NGINX. Each of these three containers has its own characteristics, such as network ports and volumes.

mysql:
  image: mysql:8.0.28
  container_name: demomysql
  networks:
    - network
wordpress:
  depends_on:
    - mysql
  image: wordpress:5.9.1-fpm-alpine
  container_name: demowordpress
  networks:
    - network
nginx:
  depends_on:
    - wordpress
  image: nginx:1.21.4-alpine
  container_name: nginx
  ports:
    - 80:80
  volumes:
    - wordpress:/var/www/html

 

Originally, you’d have to use the docker run command to start each individual container. However, this introduces hiccups when managing network and storage interactions across containers. It’s much more efficient to consolidate all necessary objects into a Docker Compose scenario.
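
For a sense of what Compose replaces, here’s a rough docker run equivalent of the snippet above (flags trimmed for brevity, and you must create the network and start the containers in dependency order yourself):

docker network create network
docker run -d --name demomysql --network network mysql:8.0.28
docker run -d --name demowordpress --network network wordpress:5.9.1-fpm-alpine
docker run -d --name nginx --network network -p 80:80 -v wordpress:/var/www/html nginx:1.21.4-alpine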

To help developers deploy baseline scenarios faster, Docker provides a GitHub repository with several environments, available for you to reuse, called Docker Awesome Compose. Let’s explore how to run these on your own machine.

How to Use Docker Compose

Getting Started

First, you’ll need to download and install Docker Desktop (for macOS, Windows, or Linux). Note that all example outputs in this article, however, come from a Windows Docker host.

You can verify that Docker is installed by running a simple docker run hello-world command:

C:\>docker run hello-world

 

This should produce the following output, indicating that things are working correctly:

 

[Screenshot: "Hello from Docker!" output confirming a working installation]

 

You’ll also need to install Docker Compose on your machine. Similarly, you can verify this installation by running a basic docker compose command, which triggers a corresponding response:

 

C:\>docker compose

 

[Screenshot: docker compose command output listing the available subcommands]

 

Next, either locally download or clone the Awesome Compose GitHub repository. If you have Git running locally, simply enter the following command:

git clone https://github.com/docker/awesome-compose.git

 

[Screenshot: git clone output for the awesome-compose repository]

 

If you’re not running Git, you can download the Awesome Compose repository as a ZIP file. You’ll then extract it within its own folder.

Adjusting Your Awesome Compose Code

After downloading Awesome Compose, jump into the appropriate subfolder and spin up your sample environment. For this example, we’ll use WordPress with MariaDB. You’ll then want to access your wordpress-mysql subfolder.

Next, open your compose.yaml file within your favorite editor and inspect its contents. Make the following changes in your provided YAML file:

 

  • Update line 9: volumes: - mariadb:/var/lib/mysql
  • Provide a complex password for the following variables:
    • MYSQL_ROOT_PASSWORD (line 12)
    • MYSQL_PASSWORD (line 15)
    • WORDPRESS_DB_PASSWORD (line 27)
  • Update line 30: volumes: mariadb (to reflect the name used in line 9 for this volume)

 

While this example has mariadb enabled, you can switch to a mysql example by commenting out image: mariadb:10.7 and uncommenting #image: mysql:8.0.27.

Your updated file should look like this:

services:
  db:
    # We use a mariadb image which supports both amd64 & arm64 architecture
    image: mariadb:10.7
    # If you really want to use MySQL, uncomment the following line
    #image: mysql:8.0.27
    #command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - mariadb:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=P@55W.RD123
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=P@55W.RD123
    expose:
      - 3306
      - 33060
  wordpress:
    image: wordpress:latest
    ports:
      - 80:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=P@55W.RD123
      - WORDPRESS_DB_NAME=wordpress
volumes:
  mariadb:

 

Save these file changes and close your editor.

Running Docker Compose

Starting up Docker Compose is easy. To begin, ensure you’re in the wordpress-mysql folder and run the following from the Command Prompt:

docker compose up -d

 

This command kicks off the startup process. It downloads the necessary container images from Docker Hub and then runs them. Now, enter the following Docker command to confirm your containers are running as intended:

docker compose ps

 

This command should show all running containers and their active ports:

[Screenshot: docker compose ps output listing the running services and their ports]

 

Verify that your WordPress app is active by navigating to http://localhost:80 in your browser — which should display the WordPress welcome page.

If you complete the required fields, it’ll redirect you to the WordPress dashboard, where you can start using WordPress. This experience is identical to running on a server or hosting environment.

 

[Screenshot: WordPress welcome and setup page]

 

Once testing is complete (or you’ve finished your daily development work), you can shut down your environment by entering the docker compose down command.

 

[Screenshot: docker compose down output]

 

Reusing Your Environment

If you want to continue developing in this environment later, simply re-enter docker compose up -d. This brings the development setup back up, with all of the previous information still in the database. It takes just a few seconds.

 

[Screenshot: WordPress dashboard with the previously created content]

 

However, what if you want to reuse the same environment with a fresh database?

To bring down the environment and remove the volume — which we defined within compose.yaml — run the following command:

docker compose down -v

 

[Screenshot: docker compose down -v output removing the containers and volume]

 

Now, if you restart your environment with docker compose up, Docker Compose will summon a new WordPress instance. WordPress will have you configure your settings again, including the WordPress user, password, and website name:

 

[Screenshot: WordPress installation screen for the fresh instance]

 

While Awesome Compose sample projects work out of the box, always start with the README.md instructions file. You’ll typically need to update your sample YAML file with some environmental specifics — such as a password, username, or chosen database name. If you skip this step, the runtime won’t start correctly.

Awesome Compose Simplifies Multi-Container Management

Agile developers always need access to various application development-and-testing environments. Containers have been immensely helpful in providing this. However, more complex microservices architectures — which rely on containers running in tandem — are still quite challenging. Luckily, Docker Compose makes these management processes far more approachable.

Awesome Compose is Docker’s open-source library of sample workloads that empowers developers to quickly start using Docker Compose. The extensive library includes popular industry workloads such as ASP.NET, WordPress, and React web frontends. These can connect to MySQL, MariaDB, or MongoDB backends.

You can spin up samples from the Awesome Compose library in minutes. This lets you quickly deploy new environments locally or virtually. Our example also highlighted how easy it is to customize your Docker Compose YAML files and get started.

Now that you understand the basics of Awesome Compose, check out our other samples and explore how Docker Compose can streamline your next development project.
