Highlights from the BuildKit v0.11 Release
https://www.docker.com/blog/highlights-buildkit-v0-11-release/
Thu, 19 Jan 2023

BuildKit v0.11 is now available, along with Buildx v0.10 and v1.5 of the Dockerfile syntax. We’ve released new features, bug fixes, performance improvements, and improved documentation for all of the Docker Build tools.
Let’s dive into what’s new! We’ll cover the highlights, but you can get the whole story in the full changelogs.

1. SLSA Provenance

BuildKit can now create SLSA Provenance attestations to trace a build back to its source and make it easier to understand how it was created. Images built with new versions of Buildx and BuildKit include metadata like links to source code, build timestamps, and the materials used during the build. To attach the new provenance, BuildKit now defaults to creating OCI-compliant images.

Although docker buildx will add a provenance attestation to all new images by default, you can also opt into more detail. These additional details include your Dockerfile source, source maps, and the intermediate representations used by BuildKit. You can enable all of these new provenance records using the new --provenance flag in Buildx:

$ docker buildx build --provenance=true -t <myorg>/<myimage> --push .

Or manually set the provenance generation mode to either min or max (read more about the different modes):

$ docker buildx build --provenance=mode=max -t <myorg>/<myimage> --push .

You can inspect the provenance of an image using the imagetools subcommand. For example, here’s what it looks like on the moby/buildkit image itself:

$ docker buildx imagetools inspect moby/buildkit:latest --format '{{ json .Provenance }}'
{
  "linux/amd64": {
    "SLSA": {
      "buildConfig": {

You can use this provenance to find key information about the build environment, such as the git repository it was built from:

$ docker buildx imagetools inspect moby/buildkit:latest --format '{{ json (index .Provenance "linux/amd64").SLSA.invocation.configSource }}'
{
  "digest": {
	"sha1": "830288a71f447b46ad44ad5f7bd45148ec450d44"
  },
  "entryPoint": "Dockerfile",
  "uri": "https://github.com/moby/buildkit.git#refs/tags/v0.11.0"
}

Or even the CI job that built it in GitHub actions:

$ docker buildx imagetools inspect moby/buildkit:latest --format '{{ (index .Provenance "linux/amd64").SLSA.builder.id }}'
https://github.com/moby/buildkit/actions/runs/3878249653

Read the documentation to learn more about SLSA Provenance attestations or to explore BuildKit’s SLSA fields.

2. Software Bill of Materials

While provenance attestations help to record how a build was completed, Software Bill of Materials (SBOMs) record what components are used. This is similar to tools like docker sbom, but, instead of requiring you to perform your own scans, the author of the image can build the results into the image.

You can enable built-in SBOMs with the new --sbom flag in Buildx:

$ docker buildx build --sbom=true -t <myorg>/<myimage> --push .

By default, BuildKit uses docker/buildkit-syft-scanner (powered by Anchore’s Syft project) to build an SBOM from the resulting image. But any scanner that follows the BuildKit SBOM scanning protocol can be used here:

$ docker buildx build --sbom=generator=<custom-scanner> -t <myorg>/<myimage> --push .

Similar to SLSA provenance, you can use imagetools to query SBOMs attached to images. For example, if you list all of the discovered dependencies used in moby/buildkit, you get this:

$ docker buildx imagetools inspect moby/buildkit:latest --format '{{ range (index .SBOM "linux/amd64").SPDX.packages }}{{ println .name }}{{ end }}'
github.com/Azure/azure-sdk-for-go/sdk/azcore
github.com/Azure/azure-sdk-for-go/sdk/azidentity
github.com/Azure/azure-sdk-for-go/sdk/internal
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
...
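
If you want the raw SBOM document rather than a filtered list of package names, you can dump the whole attestation as JSON with the same imagetools pattern used above (a sketch reusing the .SBOM and .SPDX fields shown in the previous command):

$ docker buildx imagetools inspect moby/buildkit:latest --format '{{ json (index .SBOM "linux/amd64").SPDX }}'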

Read the SBOM attestations documentation to learn more.

3. SOURCE_DATE_EPOCH

Getting reproducible builds out of Dockerfiles has historically been quite tricky — a full reproducible build requires bit-for-bit accuracy that produces the exact same result each time. Even builds that are fully deterministic would get different timestamps between runs.

The new SOURCE_DATE_EPOCH build argument helps resolve this, following the standardized environment variable from the Reproducible Builds project. If the build argument is set or detected in the environment by Buildx, then BuildKit will set timestamps in the image config and layers to be the specified Unix timestamp. This helps you get perfect bit-for-bit reproducibility in your builds.

SOURCE_DATE_EPOCH is automatically detected by Buildx from the environment. To force all the timestamps in the image to the Unix epoch:

$ SOURCE_DATE_EPOCH=0 docker buildx build -t <myorg>/<myimage> .

Alternatively, to set it to the timestamp of the most recent commit:

$ SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct) docker buildx build -t <myorg>/<myimage> .
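
If you also need to normalize the timestamps of files you generate during the build itself, one option is to consume the build argument inside the Dockerfile. This is only a sketch under assumptions not taken from this post: it assumes the build argument is passed through to the Dockerfile, a GNU coreutils touch in the base image, and an example file path:

# syntax=docker/dockerfile:1
FROM debian:bullseye-slim
ARG SOURCE_DATE_EPOCH
# clamp the generated file's timestamp to the provided epoch (falls back to 0 if unset)
RUN touch --date="@${SOURCE_DATE_EPOCH:-0}" /etc/example-config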

Read the documentation to find out more about how BuildKit handles SOURCE_DATE_EPOCH.

4. OCI image layouts as named contexts

BuildKit has been able to export OCI image layouts for a while now. As of v0.11, BuildKit can import those results again using named contexts. This makes it easier to build contexts entirely locally — without needing to push intermediate results to a registry.

For example, suppose you want to build your own custom intermediate image based on Alpine that contains some development tools:

$ docker buildx build . -f intermediate.Dockerfile --output type=oci,dest=./intermediate,tar=false

This builds the contents of intermediate.Dockerfile and exports it into an OCI image layout into the intermediate/ directory (using the new tar=false option for OCI exports). To use this intermediate result in a Dockerfile, refer to it using any name you like in the FROM statement in your main Dockerfile:

FROM base
RUN ... # use the development tools in the intermediate image

You can then connect this Dockerfile to your OCI layout using the new oci-layout:// URI schema for the --build-context flag:

$ docker buildx build . -t <myorg>/<myimage> --build-context base=oci-layout://intermediate

Instead of resolving the base image from Docker Hub, BuildKit will read it from oci-layout://intermediate in the current directory, so you don't need to push the intermediate image to a remote registry to be able to use it.
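
For completeness, the intermediate.Dockerfile in this example could be as small as the following sketch; the exact packages are an assumption, and you would install whatever development tools you actually need on top of Alpine:

# intermediate.Dockerfile
FROM alpine:3.17
RUN apk add --no-cache build-base git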

Refer to the documentation to find out more about using oci-layout:// with the --build-context flag.

5. Cloud cache backends

To get good build performance when building in ephemeral environments, such as CI pipelines, you need to store the cache in a remote backend. The newest release of BuildKit supports two new storage backends: Amazon S3 and Azure Blob Storage.

When you build images, you can provide the details of your S3 bucket or Azure Blob store to automatically store your build cache to pull into future builds. This build cache means that even though your CI or local runners might be destroyed and recreated, you can still access your remote cache to get quick builds when nothing has changed.

To use the new backends, you can specify them using the --cache-to and --cache-from flags:

$ docker buildx build --push -t <user>/<image> \
  --cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image>[,parameters...] \
  --cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image> .

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=azblob,name=<cache-image>[,parameters...] \
  --cache-from type=azblob,name=<cache-image>[,parameters...] .

You also don’t have to choose between one cache backend or the other. BuildKit v0.11 supports multiple cache exports at a time so you can use as many as you’d like.
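
For example, a single build could export its cache to both backends at once simply by repeating the flags; this is a sketch, and you'd fill in your own bucket, region, and storage account details:

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image> \
  --cache-to type=azblob,name=<cache-image> \
  --cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image> \
  --cache-from type=azblob,name=<cache-image> .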

Find more information in the Amazon S3 cache backend and Azure Blob Storage cache backend documentation.

6. OCI Image annotations

OCI image annotations allow attaching metadata to container images at the manifest level. They're a more generic alternative to labels, and they can be attached to multi-platform images more easily.

All BuildKit image exporters now support setting annotations. To set the annotations of your choice, use the --output flag:

$ docker buildx build ... \
    --output "type=image,name=foo,annotation.org.opencontainers.image.title=Foo"

You can set annotations at any level of the output, for example, on the image index:

$ docker buildx build ... \
    --output "type=image,name=foo,annotation-index.org.opencontainers.image.title=Foo"

Or even set different annotations for each platform:

$ docker buildx build ... \
    --output "type=image,name=foo,annotation[linux/amd64].org.opencontainers.image.title=Foo,annotation[linux/arm64].org.opencontainers.image.title=Bar"

You can find out more about creating OCI annotations on BuildKit images in the documentation.

7. Build inspection with --print

If you're new to a codebase that uses Dockerfiles, understanding how to use them can be tricky. Buildx supports the new --print flag to print details about a build. This flag can be used to get quick and easy information about required build arguments and secrets, and the targets that you can build.

For example, here’s how you get an outline of BuildKit’s Dockerfile:

$ BUILDX_EXPERIMENTAL=1 docker buildx build --print=outline https://github.com/moby/buildkit.git
TARGET:      buildkit
DESCRIPTION: builds the buildkit container image

BUILD ARG                  VALUE    DESCRIPTION
RUNC_VERSION               v1.1.4
ALPINE_VERSION             3.17
BUILDKITD_TAGS                      defines additional Go build tags for compiling buildkitd
BUILDKIT_SBOM_SCAN_STAGE   true

We can also list all the different targets to build:

$ BUILDX_EXPERIMENTAL=1 docker buildx build --print=targets https://github.com/moby/buildkit.git
TARGET             	DESCRIPTION
alpine-amd64      	 
alpine-arm        	 
alpine-arm64      	 
alpine-s390x      	 

Any frontend that implements the BuildKit subrequests interface can be used with the buildx --print flag. They can even define their own print functions, and aren’t just limited to outline or targets.

The --print feature is still experimental, so the interface may change, and we may add new functionality over time. If you have feedback, please open an issue or discussion on the docker/buildx GitHub repository; we'd love to hear your thoughts!

8. Bake features

The Bake file format for orchestrating builds has also been improved.

Bake now supports more powerful variable interpolation, allowing you to use fields from the same or other blocks. This can reduce duplication and make your bake files easier to read:

target "foo" {
  dockerfile = target.foo.name + ".Dockerfile"
  tags       = [target.foo.name]
}

Bake also supports null values for build arguments and allows labels to use the defaults set in your Dockerfile so your bake definition doesn’t override those:

variable "GO_VERSION" {
  default = null
}
target "default" {
  args = {
    GO_VERSION = GO_VERSION
  }
}
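
With a definition like this, the GO_VERSION build argument is only set when you actually provide a value, which you can do by exporting an environment variable of the same name. A sketch, with <version> as a placeholder:

$ docker buildx bake                          # GO_VERSION stays null, so the Dockerfile default applies
$ GO_VERSION=<version> docker buildx bake     # overrides the variable for this build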

Read the Bake documentation to learn more. 

More improvements and bug fixes 

In this post, we've only scratched the surface of the new features in the latest release. Along with everything above, the latest releases include quality-of-life improvements and bug fixes. Read the full changelogs to learn more.

We welcome bug reports and contributions, so if you find an issue in the releases, let us know by opening a GitHub issue or pull request, or get in contact in the #buildkit channel in the Docker Community Slack.

9 Tips for Containerizing Your Node.js Application
https://www.docker.com/blog/9-tips-for-containerizing-your-node-js-application/
Thu, 13 Oct 2022

Over the last five years, Node.js has maintained its position as a top platform among professional developers. It's an open source, cross-platform JavaScript runtime environment designed to maximize throughput. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient — perfect for data intensive, real-time, and distributed applications.

With over 90,500 stars and 24,400 forks, Node's developer community is highly active. With more devs creating Node.js apps than ever before, finding efficient ways to build and deploy them across platforms is key. Let's discuss how containerization can help before jumping into the meat of our guide.

Why is containerizing a Node application important?

Containerizing your Node application has numerous benefits. First, Docker’s friendly, CLI-based workflow lets any developer build, share, and run containerized Node applications. Second, developers can install their app from a single package and get it up and running in minutes. Third, Node developers can code and test locally while ensuring consistency from development to production.

We’ll show you how to quickly package your Node.js app into a container. We’ll also tackle key concerns that are easy to forget — like image vulnerabilities, image bloat, missing image tags, and poor build performance. Let’s explore a simple todo list app and discuss how our nine tips might apply.

Analyzing a simple todo list application

Let’s first consider a simple todo list application. This is a basic React application with a Node.js backend and a MongoDB database. The source code of the complete project is available within our GitHub samples repo.

Building the application

Luckily, we can build our sample application in just a few steps. First, you’ll want to clone the appropriate awesome-compose sample to use it with your project:

git clone https://github.com/dockersamples/awesome-compose/
cd awesome-compose/react-express-mongodb
docker compose -f docker-compose.yaml up -d

Second, enter the docker compose ps command to list out your services in the terminal. This confirms that everything is accounted for and working properly:

docker compose ps
NAME                COMMAND                  SERVICE             STATUS              PORTS
backend             "docker-entrypoint.s…"   backend             running             3000/tcp
frontend            "docker-entrypoint.s…"   frontend            running             0.0.0.0:3000->3000/tcp
mongo               "docker-entrypoint.s…"   mongo               running             27017/tcp

Third, open your browser and navigate to http://localhost:3000 to view your application in action. You'll see your todo list UI and be able to directly interact with your application:

List View

This is a great way to spin up a functional application in a short amount of time. However, remember that these samples are foundations you can build upon. They’re customizable to better suit your needs. And this can be important from a performance standpoint — since our above example isn’t fully optimized. Next, we’ll share some general optimization tips and more to help you build the best app possible. 

Our top nine tips for containerizing and optimizing Node applications

1) Use a specific base image tag instead of “version:latest”

When building Docker images, we always recommend specifying useful tags which codify version information, intended destination (prod or test, for instance), stability, or other useful information for deploying your application across environments.

Don’t rely on the latest tag that Docker automatically pulls, outside of local development. Using latest is unpredictable and may cause unexpected behavior. Each time you pull a latest image version, it could contain a new build or untested code that may break your application. 

Consider the following Dockerfile that uses the specific node:lts-buster Docker image as a base image instead of node:latest. This approach may be preferable since lts-buster is a stable image:

# Create image based on the official Node image from dockerhub
FROM node:lts-buster

# Create app directory
WORKDIR /usr/src/app

# Copy dependency definitions
COPY package.json ./package.json
COPY package-lock.json ./package-lock.json

# Install dependencies
RUN npm ci

# Get all the code needed to run the app
COPY . .

# Expose the port the app runs in
EXPOSE 3000

# Serve the app
CMD ["npm", "start"]

Overall, it’s often best to avoid using FROM node:latest in your Dockerfile.
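
If you want to go one step further, you can pin the base image to an immutable digest so that every rebuild pulls exactly the same image. The digest below is a placeholder; you would substitute the one you resolve yourself, for example with docker buildx imagetools inspect node:lts-buster:

# Pin the base image to a specific digest (placeholder digest shown)
FROM node:lts-buster@sha256:<digest>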

2) Use a multi-stage build

With multi-stage builds, a Docker build can use one base image for compilation, packaging, and unit testing. A separate image holds the application's runtime. This makes the final image more secure and shrinks its footprint (since it doesn't contain development or debugging tools). Multi-stage Docker builds help keep your builds reproducible and lean. You can create multiple stages within a Dockerfile to control how you build that image.

You can containerize your Node application using a multi-stage approach. Each stage may contain different app components like source code, resources, and even snapshot dependencies. What if we want to package our application into its own image like we mentioned earlier? Check out the following Dockerfile to see how it's done:

FROM node:lts-buster-slim AS development

WORKDIR /usr/src/app

COPY package.json ./package.json
COPY package-lock.json ./package-lock.json
RUN npm ci

COPY . .

EXPOSE 3000

CMD [ "npm", "run", "dev" ]

FROM development as dev-envs
RUN <<EOF
apt-get update
apt-get install -y --no-install-recommends git
EOF


# install Docker tools (cli, buildx, compose)
COPY --from=gloursdocker/docker / /
CMD [ "npm", "run", "dev" ]

We first add an AS development label to the FROM node:lts-buster-slim statement. This lets us refer to this build stage from other build stages. Next, we add a new stage labeled dev-envs, which we'll use to run our development environment.

Now, let's rebuild our image and run our development environment. We'll use a standard docker build command while adding the --target dev-envs flag to build that specific stage:

docker build -t node-docker --target dev-envs .
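
The same pattern extends naturally to a production stage that carries only what your app needs at runtime. The sketch below is not part of the sample project; the stage name, npm flags, and scripts are assumptions you'd adapt to your own setup:

FROM node:lts-buster-slim AS production

WORKDIR /usr/src/app

# Install only production dependencies (use --only=production on older npm releases)
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Copy the application source; node_modules stays excluded via .dockerignore
COPY . .

EXPOSE 3000
CMD [ "npm", "start" ]

You could then build just that stage with docker build -t node-docker --target production .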

3) Fix security vulnerabilities in your Node image

Today’s developers rely on third-party code and apps while building their services. External software can introduce unwanted vulnerabilities into your code if you’re not careful. Leveraging trusted images and continually monitoring your containers helps protect you.

Whenever you build an image based on node:lts-buster-slim, Docker Desktop prompts you to run security scans of the image to detect any known vulnerabilities.

Let's use the Snyk Extension for Docker Desktop to inspect our Node.js application. To begin, install Docker Desktop 4.8.0+ on your Mac, Windows, or Linux machine. Next, check the box within Settings > Extensions to Enable Docker Extensions.

You can then browse the Extensions Marketplace by clicking the “Add Extensions” button in the left sidebar, then searching for Snyk.

Snyk Extensions Marketplace

Snyk’s extension lets you rapidly scan both local and remote Docker images to detect vulnerabilities.

Snyk Install

Install the Snyk extension and enter the node:lts-buster-slim Node Docker Official Image into the "Select image name" field. You'll have to log into Docker Hub to start scanning. Don't worry if you don't have an account — it's free and takes just a minute to create.

When running a scan, you’ll see this result within Docker Desktop:

Snyk Image Scan

Snyk uncovered 70 vulnerabilities of varying severity during this scan. Once you’re aware of these, you can begin remediation to fortify your image.

That's not all. You can also run a vulnerability check directly against your Dockerfile with the docker scan command:

docker scan -f Dockerfile node:lts-buster-slim

4) Leverage HEALTHCHECK

The HEALTHCHECK instruction tells Docker how to test a container and confirm that it’s still working. For example, this can detect when a web server is stuck in an infinite loop and cannot handle new connections — even though the server process is still running.

When an application reaches production, an orchestrator like Kubernetes or a service fabric will most likely manage it. By using HEALTHCHECK, you’re sharing the status of your containers with the orchestrator to enable configuration-based management tasks. Here’s an example:

# syntax=docker/dockerfile:1.4

FROM node:lts-buster-slim AS development

# Create app directory
WORKDIR /usr/src/app

COPY package.json ./package.json
COPY package-lock.json ./package-lock.json
RUN npm ci

COPY . .

EXPOSE 3000

CMD [ "npm", "run", "dev" ]

FROM development as dev-envs
RUN <<EOF
apt-get update
apt-get install -y --no-install-recommends git
EOF

RUN <<EOF
useradd -s /bin/bash -m vscode
groupadd docker
usermod -aG docker vscode
EOF

HEALTHCHECK CMD curl --fail http://localhost:3000 || exit 1  


# install Docker tools (cli, buildx, compose)
COPY --from=gloursdocker/docker / /
CMD [ "npm", "run", "dev" ]

When HEALTHCHECK is present in a Dockerfile, you'll see the container's health in the STATUS column after running the docker ps command. A container that passes this check shows as healthy, while one that fails the check is labeled unhealthy:

docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED          STATUS                             PORTS                    NAMES
1d0c5e3e7d6a   react-express-mongodb-frontend   "docker-entrypoint.s…"   23 seconds ago   Up 21 seconds (health: starting)   0.0.0.0:3000->3000/tcp   frontend
a89721d3c42d   react-express-mongodb-backend    "docker-entrypoint.s…"   23 seconds ago   Up 21 seconds (health: starting)   3000/tcp                 backend
194c953f5653   mongo:4.2.0                      "docker-entrypoint.s…"   3 minutes ago    Up 3 minutes                       27017/tcp                mongo

You can also define a healthcheck (note the case difference) within Docker Compose! This can be pretty useful when you’re not using a Dockerfile. Instead of writing a plain text instruction, you’ll write this configuration in YAML format. 

Here’s a sample configuration that lets you define healthcheck within your docker-compose.yml file:

backend:
    container_name: backend
    restart: always
    build: backend
    volumes:
      - ./backend:/usr/src/app
      - /usr/src/app/node_modules
    depends_on:
      - mongo
    networks:
      - express-mongo
      - react-express
    expose:
      - 3000
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000"]
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s
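
One assumption baked into the examples above is that curl is available inside the container; slim Node images often don't include it. Here's a sketch of an alternative check that uses Node itself instead of curl:

    healthcheck:
      test: ["CMD", "node", "-e", "require('http').get('http://localhost:3000', (res) => process.exit(res.statusCode < 400 ? 0 : 1)).on('error', () => process.exit(1))"]
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s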

5) Use .dockerignore

To increase build performance, we recommend creating a .dockerignore file in the same directory as your Dockerfile. For this tutorial, your .dockerignore file should contain just one line:

node_modules

This line excludes the node_modules directory, which contains your locally installed npm dependencies, from the Docker build context. There are many good reasons to carefully structure a .dockerignore file, but this simple file is good enough for now.

Let's now explain the build context and why it's essential. The docker build command builds Docker images from a Dockerfile and a "context." This context is the set of files located in your specified PATH or URL. The build process can reference any of these files.

Meanwhile, the build context is where the developer works. It could be a folder on Mac, Windows, or Linux. This directory contains all necessary application components like source code, configuration files, libraries, and plugins. With a .dockerignore file, we can choose which of those elements to exclude while building your new image.

Here’s how your .dockerignore file might look if you choose to exclude the node_modules directory from your build:

Backend:

node_modules

Frontend:

node_modules
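
A slightly fuller .dockerignore for a typical Node.js project might look like the sketch below; which entries make sense depends on what your build actually needs:

node_modules
npm-debug.log
.git
.gitignore
.dockerignore
Dockerfile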

6) Run as a non-root user for security purpose

Running applications with limited, non-root user privileges is safer since it helps mitigate risks. The same applies to Docker containers. By default, Docker containers and their running apps have root privileges. It's therefore best to run Docker containers as non-root users.

You can do this by adding USER instructions within your Dockerfile. The USER instruction sets the preferred user name (or UID) and optionally the user group (or GID) while running the image — and for any subsequent RUN, CMD, or ENTRYPOINT instructions:

# syntax=docker/dockerfile:1.4
FROM node:lts-buster AS development

WORKDIR /usr/src/app

COPY package.json ./package.json
COPY package-lock.json ./package-lock.json

RUN npm ci

COPY . .

EXPOSE 3000


CMD ["npm", "start"]

FROM development as dev-envs


RUN <<EOF
   apt-get update
   apt-get install -y --no-install-recommends git
EOF

RUN <<EOF
   useradd -s /bin/bash -m vscode
   groupadd docker
   usermod -aG docker vscode
EOF

USER vscode

# install Docker tools (cli, buildx, compose)
COPY --from=gloursdocker/docker / /
CMD [ "npm", "start" ]

7) Favor multi-architecture Docker images

Your CPU can only run binaries for its native architecture. For example, Docker images built for an x86 system can’t run on an Arm-based system. With Apple fully transitioning to their custom Arm-based silicon, it’s possible that your x86 (Intel or AMD) container image won’t work with Apple’s M-series chips. 

Consequently, we always recommend building multi-arch container images. Below, we use the mplatform/mquery Docker image, which lets you query the multi-platform status of any public image in any public registry:

docker run --rm mplatform/mquery node:lts-buster
Unable to find image 'mplatform/mquery:latest' locally
d0989420b6f0: Download complete
af74e063fc6e: Download complete
3441ed415baf: Download complete
a0c6ee298a93: Download complete
894bcacb16df: Downloading [=============================================>     ]  3.146MB/3.452MB
Image: node:lts-buster (digest: sha256:a5d9200d3b8c17f0f3d7717034a9c215015b7aae70cb2a9d5e5dae7ff8aa6ca8)
 * Manifest List: Yes (Image type: application/vnd.docker.distribution.manifest.list.v2+json)
 * Supported platforms:
   - linux/amd64
   - linux/arm/v7
   - linux/arm64/v8

We introduced the docker buildx command to help you build multi-architecture images. Buildx is a Docker component that enables many powerful build features with a familiar Docker user experience. All Buildx builds run using the Moby BuildKit engine.

BuildKit is designed to excel at multi-platform builds, or those not just targeting the user’s local platform. When you invoke a build, you can set the --platform flag to specify the build output’s target platform (like linux/amd64, linux/arm/v7, linux/arm64/v8, etc.):

docker buildx build --platform linux/amd64,linux/arm/v7 -t node-docker .
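
Note that multi-platform builds like this one generally need a builder backed by the docker-container driver rather than the default builder. If you haven't created one yet, you can do so first (the builder name here is arbitrary):

docker buildx create --name mybuilder --use --bootstrap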

8) Explore graceful shutdown options for Node

Docker containers are ephemeral in nature. They can be stopped and destroyed, then either rebuilt or replaced with minimal effort. When Docker terminates a container, it sends a SIGTERM signal to the process and allows a short grace period before forcibly killing it. That grace period requires you to ensure that your app handles ongoing requests and cleans up resources in a timely fashion.

On the other hand, Node.js accepts and forwards signals like SIGINT and SIGTERM from the OS, which is key to properly shutting down your app. Node.js lets your app decide how to handle those signals. If you don’t write code or use a module to handle them, your app won’t shut down gracefully. It’ll ignore those signals until Docker or Kubernetes kills it after a timeout period. 

Using certain init options like docker run --init or tini within your Dockerfile is viable when you can’t change your app code. However, we recommend writing code to handle proper signal handling for graceful shutdowns.
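
For reference, here's roughly what those two init options look like in practice. This is a sketch: the image name is a placeholder, and the tini path assumes the Debian package's default install location:

# Option 1: have Docker inject an init process at run time
docker run --init -p 3000:3000 <myorg>/<myimage>

# Option 2: install tini in your Dockerfile and use it as the entrypoint
RUN apt-get update && apt-get install -y --no-install-recommends tini
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD [ "npm", "start" ]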

Check out this video from Docker Captain Bret Fisher (12:57) where he covers all three available Node shutdown options in detail.

9) Use the OpenTelemetry API to measure NodeJS performance

How do Node developers make their apps faster and more performant? Generally, developers rely on third-party observability tools to measure application performance. This performance monitoring is essential for creating multi-functional Node applications with top notch user experiences.

Observability extends beyond application performance. Metrics, traces, and logs are now front and center. Metrics help developers to understand what’s wrong with the system, while traces help you discover how it’s wrong. Logs tell you why it’s wrong. Developers can dig into particular metrics and traces to holistically understand system behavior.

Observing Node applications means tracking your Node metrics, requests rates, request error rate, and request durations. OpenTelemetry is one popular collection of tools and APIs that help you instrument your Node.js application.

You can also use an open-source tool like SigNoz to analyze your app’s performance. Since SigNoz offers a full-stack observability tool, you don’t need to rely on multiple tools.

Conclusion

In this guide, we explored many ways to optimize your Docker images — from carefully crafting your Dockerfile to securing your image via Snyk scanning. Building better Node.js apps doesn’t have to be complex. By nailing some core fundamentals, you’ll be in great shape. 

If you'd like to dig deeper, check out these additional recommendations and best practices for building secure, production-grade Docker images.

How to Rapidly Build Multi-Architecture Images with Buildx
https://www.docker.com/blog/how-to-rapidly-build-multi-architecture-images-with-buildx/
Fri, 17 Jun 2022

Successfully running your container images on a variety of CPU architectures can be tricky. For example, you might want to build your IoT application — running on an arm64 device like the Raspberry Pi — from a specific base image. However, Docker images typically support amd64 architectures by default. This scenario calls for a container image that supports multiple architectures, which we've highlighted in the past.

Multi-architecture (multi-arch) images typically contain variants for different architectures and OSes. These images may also support CPU architectures like arm32v5+, arm64v8, s390x, and others. The magic of multi-arch images is that Docker automatically grabs the variant matching your OS and CPU pairing.

While a regular container image has a manifest, a multi-architecture image has a manifest list. The list combines the manifests that show information about each variant’s size, architecture, and operating system.

Multi-architecture images are beneficial when you want to run your container locally on your x86-64 Linux machine, and remotely atop AWS Elastic Compute Cloud (EC2) Graviton2 CPUs. Additionally, it’s possible to build language-specific, multi-arch images — as we’ve done with Rust.

Follow along as we learn about each component behind multi-arch image builds, then quickly create our image using Buildx and Docker Desktop.

Building Multi-Architecture Images with Buildx and Docker Desktop

You can build a multi-arch image by creating the individual images for each architecture, pushing them to Docker Hub, and entering docker manifest to combine them within a tagged manifest list. You can then push the manifest list to Docker Hub. This method is valid in some situations, but it can become tedious and relatively time consuming.

 

Note: you should only use the docker manifest command in testing — not production. This command is experimental. We're continually tweaking functionality and any associated UX while making docker manifest production ready.

 

However, two tools make it much easier to create multi-architectural builds: Docker Desktop and Docker Buildx. Docker Buildx enables you to complete every multi-architecture build step with one command via Docker Desktop.

Before diving into the nitty gritty, let’s briefly examine some core Docker technologies.

Dockerfiles

The Dockerfile is a text file containing all necessary instructions needed to assemble and deploy a container image with Docker. We’ll summarize the most common types of instructions, while our documentation contains information about others:

  • The FROM instruction headlines each Dockerfile, initializing the build stage and setting a base image which can receive subsequent instructions.
  • RUN defines important executables and forms additional image layers as a result. RUN also has a shell form for running commands.
  • WORKDIR sets a working directory for any following instructions. While you can explicitly set this, Docker will automatically assign a directory in its absence.
  • COPY, as it sounds, copies new files from a specified source and adds them into your container’s filesystem at a given relative path.
  • CMD comes in three forms, letting you define executables, parameters, or shell commands. If a Dockerfile contains multiple CMD instructions, only the last one takes effect.

 

Dockerfiles facilitate automated, multi-layer image builds based on your unique configurations. They’re relatively easy to create, and can grow to support images that require complex instructions. Dockerfiles are crucial inputs for image builds.

Buildx

Buildx leverages the docker build command to build images from a Dockerfile and sets of files located at a specified PATH or URL. Buildx comes packaged within Docker Desktop, and is a CLI plugin at its core. We consider it a plugin because it extends this base command with complete support for BuildKit’s feature set.

We offer Buildx as a CLI command called docker buildx, which you can use with Docker Desktop. In Linux environments, the buildx command also works with the build command on the terminal. Check out our Docker Buildx documentation to learn more.

BuildKit Engine

BuildKit is one core component within our Moby Project framework, which is also open source. It’s an efficient build system that improves upon the original Docker Engine. For example, BuildKit lets you connect with remote repositories like Docker Hub, and offers better performance via caching. You don’t have to rebuild every image layer after making changes.

While building a multi-arch image, BuildKit detects your specified architectures and triggers Docker Desktop to build and simulate those architectures. The docker buildx command helps you tap into BuildKit.

Docker Desktop

Docker Desktop is an application — built atop Docker Engine — that bundles together the Docker CLI, Docker Compose, Kubernetes, and related tools. You can use it to build, share, and manage containerized applications. Through the baked-in Docker Dashboard UI, Docker Desktop lets you tackle tasks with quick button clicks instead of manually entering intricate commands (though this is still possible).

Docker Desktop’s QEMU emulation support lets you build and simulate multiple architectures in a single environment. It also enables building and testing on your macOS, Windows, and Linux machines.

Now that you have working knowledge of each component, let’s hop into our walkthrough.

Prerequisites

Our tutorial requires the following:

  • The correct Go binary for your OS, which you can download here
  • The latest version of Docker Desktop
  • A basic understanding of how Docker works. You can follow our getting started guide to familiarize yourself with Docker Desktop.

 

Building a Sample Go Application

Let’s begin by building a basic Go application which prints text to your terminal. First, create a new folder called multi_arch_sample and move to it:

mkdir multi_arch_sample && cd multi_arch_sample

Second, run the following command to initialize a Go module that tracks your application's dependencies:

go mod init multi_arch_sample

Your terminal will output a similar response to the following:

go: creating new go.mod: module multi_arch_sample
go: to add module requirements and sums:
  	go mod tidy

 

Third, create a new main.go file and add the following code to it:

package main

import (
	"fmt"
	"net/http"
)

func readyToLearn(w http.ResponseWriter, req *http.Request) {
	w.Write([]byte("<h1>Ready to learn!</h1>"))
	fmt.Println("Server running...")
}

func main() {
	http.HandleFunc("/", readyToLearn)
	http.ListenAndServe(":8000", nil)
}

 

This code creates the function readyToLearn, which serves "Ready to learn!" at the 127.0.0.1:8000 web address. It also outputs the phrase Server running... to the terminal.

Next, enter the go run main.go command to run your application code in the terminal, then visit 127.0.0.1:8000 in your browser to see the Ready to learn! response.

Since your app is ready, you can prepare a Dockerfile to handle the multi-architecture deployment of your Go application.

Creating a Dockerfile for Multi-arch Deployments

Create a new file in the working directory and name it Dockerfile. Next, open that file and add in the following lines:

# syntax=docker/dockerfile:1
 
# specify the base image to  be used for the application
FROM golang:1.17-alpine
 
# create the working directory in the image
WORKDIR /app
 
# copy Go modules and dependencies to image
COPY go.mod ./
 
# download Go modules and dependencies
RUN go mod download
 
# copy all the Go files ending with .go extension
COPY *.go ./
 
# compile application
RUN go build -o /multi_arch_sample
 
# network port at runtime
EXPOSE 8000
 
# execute when the container starts
CMD [ "/multi_arch_sample" ]

 

Building with Buildx

Next, you’ll need to build your multi-arch image. This image is compatible with both the amd64 and arm32 server architectures. Since you’re using Buildx, BuildKit is also enabled by default. You won’t have to switch on this setting or enter any extra commands to leverage its functionality.

The builder builds and provisions a container. It also packages the container for reuse. Additionally, Buildx supports multiple builder instances — which is pretty handy for creating scoped, isolated, and switchable environments for your image builds.

Enter the following command to create a new builder, which we’ll call mybuilder:

docker buildx create --name mybuilder --use --bootstrap

You should get a terminal response that says mybuilder. You can also view a list of builders using the docker buildx ls command. You can even inspect a new builder by entering docker buildx inspect <name>.

Triggering the Build

Now, you’ll jumpstart your multi-architecture build with the single docker buildx command shown below:

docker buildx build --push \
--platform linux/amd64,linux/arm64 \
--tag your_docker_username/multi_arch_sample:buildx-latest .

 

This does several things:

  • Combines the build command to start a build
  • Shares the image with Docker Hub using the push operation
  • Uses the --platform flag to specify the target architectures you want to build for. BuildKit then assembles the image manifest for the architectures
  • Uses the --tag flag to set the image name as multi_arch_sample

 

Once your build is finished, your terminal will display the following:

[+] Building 123.0s (23/23) FINISHED

 

Next, open Docker Desktop and go to Images > REMOTE REPOSITORIES. You'll see your newly created image in the Dashboard!

 

Conclusion

Congratulations! You’ve successfully explored multi-architecture builds, step by step. You’ve seen how Docker Desktop, Buildx, BuildKit, and other tooling enable you to create and deploy multi-architecture images. While we’ve used a sample Go web application, you can apply these processes to other images and applications.

To tackle your own projects, learn how to get started with Docker to build more multi-architecture images with Docker Desktop and Buildx. We’ve also outlined how to create a custom registry configuration using Buildx.

Cross Compiling Rust Code for Multiple Architectures
https://www.docker.com/blog/cross-compiling-rust-code-for-multiple-architectures/
Tue, 14 Jun 2022

Getting an idea, planning it out, implementing your Rust code, and releasing the final build is a long process full of unexpected issues. Cross compilation of your code will allow you to reach more users, but it requires knowledge of building executables for different runtime environments. Luckily, this post will help in getting your Rust application running on multiple architectures, including x86 for Windows.

Overview

You want to vet your idea with as many users as possible, so you need to be able to compile your code for multiple architectures. Your users have their own preferences on what machines and OS to use, so we should do our best to meet them in their preferred set-up. This is why it’s critical to pick a language or framework that lends itself to support multiple ways to export your code for multiple target environments with minimal developer effort. Also, it’d be better to have tooling in place to help automate this export process.

If we invest some time in the beginning to pick the right coding language and automation tooling, then we’ll avoid the headaches of not being able to reach a wider audience without the use of cumbersome manual steps. Basically, we need to remove as many barriers as possible between our code and our audience.

This post will cover building a custom Docker image, instantiating a container from that image, and finally using that container to cross compile your Rust code. Your code will be compiled, and an executable will be created for your target environment within your working directory.

What You'll Need

  • The latest version of Docker Desktop
  • An existing Rust project (here, a Cargo.toml, Cargo.lock, and source files)

Getting Started

My Rust directory has the following structure:

.
├── Cargo.lock
├── Cargo.toml
└── src
    └── main.rs

 

The lock file and toml file both share the same format. The lock file lists packages and their properties. The Cargo program maintains the lock file, and this file should not be manually edited. The toml file is a manifest file that specifies the metadata of your project. Unlike the lock file, you can edit the toml file. The actual Rust code is in main.rs. In my example, the main.rs file contains a version of the game Snake that uses ASCII art graphics. These files run on Linux machines, and our goal is to cross compile them into a Windows executable.

The cross compilation of your Rust code will be done via Docker. Download and install the latest version of Docker Desktop. Choose the version matching your workstation OS — and remember to choose either the Intel or Apple (M-series) processor variant if you’re running macOS.

Creating Your Docker Image

Once you’ve installed Docker Desktop, navigate to your Rust directory. Then, create an empty file called Dockerfile within that directory. The Dockerfile will contain the instructions needed to create your Docker image. Paste the following code into your Dockerfile:

FROM rust:latest 

RUN apt update && apt upgrade -y 
RUN apt install -y g++-mingw-w64-x86-64 

RUN rustup target add x86_64-pc-windows-gnu 
RUN rustup toolchain install stable-x86_64-pc-windows-gnu 

WORKDIR /app 

CMD ["cargo", "build", "--target", "x86_64-pc-windows-gnu"]

 

Setting Up Your Image

The first line creates your image from the Rust base image. The next command upgrades the contents of your image’s packages to the latest version and installs mingw, an open source program that builds Windows applications.

Compiling for Windows

The next two lines are key to getting cross compilation working. The rustup program is a command line toolchain manager for Rust that allows Rust to support compilation for different target platforms. We need to specify which target platform to add for Rust (a target specifies an architecture which can be compiled into by Rust). We then install that toolchain into Rust. A toolchain is a set of programs needed to compile our application to our desired target architecture.

Building Your Code

Next, we’ll set the working directory of our image to the app folder. The final line utilizes the CMD instruction in our running container. Our command instructs Cargo, the Rust build system, to build our Rust code to the designated target architecture.

Building Your Image

Let’s save our Dockerfile, and then navigate to that directory in our terminal. In the terminal, run the following command:

docker build . -t rust_cross_compile/windows

 

Docker will build the image by using the current directory’s Dockerfile. The command will also tag this image as rust_cross_compile/windows.

Running Your Container

Once you’ve created the image, then you can run the container by executing the following command:

docker run --rm -v 'your-pwd':/app rust_cross_compile/windows

 

The --rm option will remove the container when the command completes. The -v option lets the build output persist after the container has exited by linking a directory in the container with your local machine. Replace 'your-pwd' with the absolute path to your Rust directory. Once you run the above command, you will see the following directory structure within your Rust directory:

.
├── Cargo.lock
├── Cargo.toml
├── src
│   └── main.rs
└── target
    ├── debug
    └── x86_64-pc-windows-gnu
        └── debug
            └── termsnake.exe

 

Running Your Rust Code

You should now see a newly created directory called target. This directory will contain a subdirectory that will be named after the architecture you are targeting. Inside this directory, you will see a debug directory that contains the executable file. Clicking the executable allows you to run the application on a Windows machine. In my case, I was able to start playing the game Snake:

 

Snake GIF

 

Running Rust on armv7

We have compiled our application into a Windows executable, but we can modify the Dockerfile like the below in order for our application to run on the armv7 architecture:

FROM rust:latest 

RUN apt update && apt upgrade -y 
RUN apt install -y g++-arm-linux-gnueabihf libc6-dev-armhf-cross

RUN rustup target add armv7-unknown-linux-gnueabihf 
RUN rustup toolchain install stable-armv7-unknown-linux-gnueabihf 

WORKDIR /app 

ENV CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER=arm-linux-gnueabihf-gcc CC_armv7_unknown_linux_gnueabihf=arm-linux-gnueabihf-gcc CXX_armv7_unknown_linux_gnueabihf=arm-linux-gnueabihf-g++

CMD ["cargo", "build", "--target", "armv7-unknown-linux-gnueabihf"]

 

Running Rust on aarch64

Alternatively, we could edit the Dockerfile with the below to support aarch64:

FROM rust:latest 

RUN apt update && apt upgrade -y 
RUN apt install -y g++-aarch64-linux-gnu libc6-dev-arm64-cross

RUN rustup target add aarch64-unknown-linux-gnu 
RUN rustup toolchain install stable-aarch64-unknown-linux-gnu

WORKDIR /app 

ENV CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc CC_aarch64_unknown_linux_gnu=aarch64-linux-gnu-gcc CXX_aarch64_unknown_linux_gnu=aarch64-linux-gnu-g++

CMD ["cargo", "build", "--target", "aarch64-unknown-linux-gnu"]

 

Another way to compile for different architectures without going through the creation of a Dockerfile would be to install the cross project, using the cargo install -f cross command. From there, simply run the following command to start the build:

cross build --target x86_64-pc-windows-gnu

Conclusion

Docker Desktop allows you to quickly build a development environment that can support different languages and frameworks. We can build and compile our code for many target architectures. In this post, we got Rust code written on Linux to run on Windows, but we don't have to limit ourselves to just that example. We can pick many other languages and architectures. Alternatively, Docker Buildx is a tool that was designed to help solve these same problems. Check out more Buildx documentation here.

Dockerfiles now Support Multiple Build Contexts
https://www.docker.com/blog/dockerfiles-now-support-multiple-build-contexts/
Mon, 02 May 2022

The new releases of Dockerfile 1.4 and Buildx v0.8+ come with the ability to define multiple build contexts. This means you can use files from different local directories as part of your build. Let's look at why it's useful and how you can leverage it in your build pipelines.

When you invoke the docker build command, it takes one positional argument which is a path or URL to the build context. Most commonly, you’ll see docker build . making the current working directory the build context.

Inside a Dockerfile you can use COPY and ADD commands to copy files from your build context and make them available to your build steps. In BuildKit, we also added build mounts with RUN --mount that allow accessing build context files directly — without copying them — for extra performance.

Conquering Complex Builds

But, as builds got more complicated, the ability to only access files from one location became quite limiting. That’s why we added multi-stage builds where you can copy files from other parts of the Dockerfile by adding the --from flag and pointing it to the name of another Dockerfile stage or a remote image.

The new named build context feature is an extension of this pattern. You can now define additional build contexts when running the build command, give them a name, and then access them inside a Dockerfile the same way you previously did with build stages.

Additional build contexts can be defined with a new --build-context [name]=[value] flag. The key component defines the name for your build context and the value can be:

  • Local directory – e.g. --build-context project2=../path/to/project2/src
  • Git repository – e.g. --build-context qemu-src=https://github.com/qemu/qemu.git
  • HTTP URL to a tarball – e.g. --build-context src=https://example.org/releases/src.tar
  • Docker image – Define with a docker-image:// prefix, e.g. --build-context alpine=docker-image://alpine:3.15

 

On the Dockerfile side, you can reference the build context on all commands that accept the “from” parameter. Here’s how that might look:

# syntax=docker/dockerfile:1.4
FROM [name]
COPY --from=[name] ...
RUN --mount=from=[name] …

 

The value of [name] is matched with the following priority order:

  • Named build context defined with --build-context [name]=..
  • Stage defined with AS [name] inside Dockerfile
  • Remote image [name] in a container registry

 

If no --from flag is set, files are loaded from the main build context.

Example #1: Pinning an Image

Let’s start with an example of how you can use build contexts to pin an image used by a Dockerfile to a specific version.

This is useful in many different cases. For example, you can use the new BuildInfo feature to capture all the build sources and run a build with the same dependencies as a previous build did, even if the image tags have been updated.

docker buildx imagetools inspect --format '{{json .BuildInfo}}' moby/buildkit

"sources": [
      {
        "type": "docker-image",
        "ref": "docker.io/library/alpine:3.15",
        "pin": "sha256:21a3deaa0d32a8057914f36584b5288d2e5ecc984380bc0118285c70fa8c9300"
      },

 

docker buildx build --build-context alpine:3.15=docker-image://alpine:3.15@sha256:21a3deaa0d32a8057914f36584b5288d2e5ecc984380bc0118285c70fa8c9300 .


When your Dockerfile uses alpine:3.15, even if it’s been updated with a newer version in the registry, your new build will still use the same exact image your previous build did.

As another example, you may just want to try a different image or different version for debugging or developing your image. A common pattern could be that you haven’t released your image yet, and it’s only in the test registry or staging environment. Let’s say you built your app and pushed it to a staging repository, but now want to use it in your other builds that would usually use the release image.

 

docker buildx build --build-context myorg/myapp=docker-image://staging.myorg.com/registry/myapp .


You can also think about the previous examples as a way to create an alias for an image.

Example #2: Multiple Projects

Probably the most requested use case for named contexts capability is the possibility to use multiple local source directories.

If your project contains multiple components that need to be built together, it's sometimes tricky to load them with a single build context where everything needs to be contained in one directory. There's a variety of issues: every component needs to be accessed by its full path, you can only have one .dockerignore file, or maybe you'd like each component to have its own Dockerfile.

If your project has the following layout:

project
├── app1
│   ├── .dockerignore
│   ├── src
├── app2
│   ├── .dockerignore
│   ├── src
├── Dockerfile

…with this Dockerfile:

# syntax=docker/dockerfile:1.4
FROM … AS build1
COPY --from=app1 . /src

FROM … AS build2
COPY --from=app2 . /src

FROM …
COPY --from=build1 /out/app1 /bin/
COPY --from=build2 /out/app2 /bin/

…you can invoke your build with docker buildx build --build-context app1=app1/src --build-context app2=app2/src . (the trailing dot is the main build context). Both of the source directories are exposed separately to the Dockerfile and can be accessed by their respective names.

This also allows you to access files that are outside of your main project’s source code. Normally when you’re inside the Dockerfile, you’re not allowed to access files outside of your build context by using the ../ parent selector for security reasons. But as all build contexts are passed directly from the client, you’re now able to use --build-context othersource=../../path/to/other/project to avoid this limitation.

Example #3: Override a Remote Dependency with a Local One

When exposing multiple source contexts to the builds there may be cases where your project always depends on multiple local directories, like in the previous example. Other times, however, you may want your dependencies to be loaded from a remote source by default, while still leaving you the option to replace it with a local source when you want to do some extra debugging.

As an example, let’s look at a common pattern where your app depends on another project that you build from source code using multi-stage builds.

Something like:

FROM golang:alpine AS helper
RUN apk add git
WORKDIR /src
ARG HELPERAPP_VERSION=1.0
RUN git clone https://github.com/someorg/helperapp.git && cd helperapp && git checkout $HELPERAPP_VERSION
WORKDIR /src/helperapp
RUN go build -o /out/helperapp .

FROM alpine
COPY --link --from=helper /out/helperapp /bin
COPY --link --from=build /out/myapp /bin

 

This works quite well. When you do a build, helperapp is built directly from its source repository and copied next to your app binary. Whenever you need to use a different version you can use the HELPERAPP_VERSION build argument to specify a different value.

But let’s say you’re developing your application and have found a bug. You’re not quite sure if the bug is in your application code or in the helper app. You’d want to make some local changes to the helperapp code to analyze what’s going on. The problem is that with your current code you’d need to push your changes to Github first so they can then be pulled down by the Dockerfile. Doing this for every code change would be very painful.

Instead, consider if we change the previous code to:

FROM alpine AS helper-clone
RUN apk add git
WORKDIR /src
ARG HELPERAPP_VERSION=1.0
RUN git clone https://github.com/someorg/helperapp.git && cd helperapp && git checkout $HELPERAPP_VERSION

FROM scratch AS helper-src
COPY --from=helper-clone /src/helperapp /

FROM golang:alpine AS helper
WORKDIR helperapp
RUN --mount=target=.,from=helper-src go build -o /out/helperapp .

FROM alpine
COPY --link --from=helper /out/helperapp /bin
COPY --link --from=build /out/myapp /bin

 

By default, this Dockerfile behaves exactly like the previous one, making a clone from GitHub to get the source code. But now, because we have added a separate stage helper-src that contains the source code for helperapp, we can use the new named contexts feature to override it with our local source directory when needed.

docker buildx build --build-context helper-src=../path/to/my/local/helper/checkout .


Now you can test all your local patches without a separate Dockerfile or without needing to move all your source code under the same directory.

Named Contexts in buildx bake

In addition to the `build` command, `docker buildx` also has a command called `bake`. Bake is a higher-level build command that allows you to define your build configurations in files instead of typing in a long list of flags for your build commands every time.

Additionally, it allows running many builds together, defining variables, and sharing definitions between your separate build configurations, etc. It accepts build configurations in JSON, HCL and Docker Compose YAML files. You can read more about it in the Buildx documentation.

We’ve also added named contexts support into bake. This is useful because if you write a Dockerfile that depends on multiple build contexts, you might forget that you need to pass these values with --build-context flag every time you invoke the build command.

With bake, you can define your target definition. For example:

target "binary" {
  contexts = {
    app1 = "app1/src"
    app2 = "app2/src"
  }
}

 

Now instead of remembering to use the --build-context flag with the correct paths every time, you can just call docker buildx bake binary and your build will run with the correct configuration. Of course, you can also use Bake variables, etc. in these fields for more complex cases.
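
For example, a hypothetical APP1_SRC variable could make one of the paths overridable from the environment while keeping the same default:

variable "APP1_SRC" {
  default = "app1/src"
}

target "binary" {
  contexts = {
    app1 = APP1_SRC
    app2 = "app2/src"
  }
}

Running APP1_SRC=../other/checkout docker buildx bake binary would then point the app1 context somewhere else without editing the file.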

You may also use this pattern to create special bake targets for the purpose of debugging or testing images in staging repositories.

target "myapp" {
  …
}

target "myapp-stage" {
  inherits = ["myapp"]
  contexts = {
    helperapp = "docker-image://staging.myorg.com/registry/myapp"
  }
}

With a Bake file like this, you can now call docker buildx bake myapp-stage to build your app with the exact configuration defined for your myapp target, except that when your build uses the helperapp image, it will now be loaded from the staging repository instead of the release one written into the Dockerfile.

Create Build Pipelines by Linking bake Targets

In addition to image, Git, URL, and local directories, Bake files also support another definition that you can use as a named context. You can set the source for the named context to point to another build target inside the Bake file. This way, you can chain together builds from multiple Dockerfiles that depend on each other and build them with a single command invocation.

Let’s say we have two Dockerfiles:

# base.Dockerfile
FROM alpine
…
# Dockerfile
FROM baseapp
...

 

Normally, you’d first build base.Dockerfile, then push it to a registry or leave it in the Docker image store. Then you’d build the second Dockerfile that loads the image by name.

An issue with this approach is that the Docker image store currently doesn’t support multi-platform local images. Using an external registry isn’t always very convenient either, and in both cases some external change could update the base image between the two builds and make the second build use the wrong image. You also need to run the build commands twice and synchronize them manually.

Instead, you can define a Bake file with a build context defined with a target: prefix:

target "base" {
  dockerfile = "base.Dockerfile"
  platforms = ["linux/amd64", "linux/arm64"]
}

target "myapp" {
  contexts = {
    baseapp = "target:base"
  }
  platforms = ["linux/amd64", "linux/arm64"]
}

 

Now you can build your app by just running docker buildx bake myapp to build both Dockerfiles and link them as required. If you want to build both the base image and your app together, you can use docker buildx bake myapp base. Both of these targets are defined as multi-platform and Buildx will take care of linking the corresponding single-platform subimages with each other.


Note that you should always first consider just using multi-stage builds with a --target parameter in these conditions. Having self-contained Dockerfiles is a simpler solution as it doesn’t require passing extra parameters with your build. This pattern should be used when you can’t combine the Dockerfiles and need to keep them separate.
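
As a rough sketch of that simpler alternative, assuming everything can live in a single Dockerfile, the stages are just selected with --target (stage names here are hypothetical):

FROM alpine AS base
# ... build the shared base layers

FROM base AS myapp
# ... build the application on top of the base stage

Building with docker buildx build --target myapp . then produces only the final stage, with no extra flags to remember.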

Please check out the new build context feature in Docker Buildx v0.8 release, included with the latest Docker Desktop.

]]>
Capturing Build Information with BuildKit https://www.docker.com/blog/capturing-build-information-buildkit/ Thu, 14 Apr 2022 20:52:50 +0000 https://www.docker.com/?p=33094 Although every Docker image has a manifest — a JSON collection of tags, digital signatures, and configuration details — Docker images can still lack some basic information at build time. Those missing details could be useful to developers. So, how do we fill in the blanks?

In this guide, we’ll highlight a tentpole feature of BuildKit v0.10: the new build information structure that BuildKit generates from build metadata. It lets you see all of the sources (images, Git repositories, and HTTP URLs) and configurations passed on to your build. This information can also be embedded within the image config.

We’ll discuss how to tackle this process, and share some best practices along the way.

Getting Started

While this feature is automatically activated upon updating to BuildKit v0.10, we also recommend using BuildKit’s Dockerfile v1.4 to reliably capture original image names. You can do so by adding the following syntax to your Dockerfile: # syntax=docker/dockerfile:1.4.

Additionally, we recommend creating a new docker-container builder with Buildx that uses the latest stable version of BuildKit. Enter the following CLI command:

$ docker buildx create --use --bootstrap --name mybuilder

Note: to return to the default builder, enter the $ docker buildx use default command.

 

Next, let’s create a basic Dockerfile:

 # syntax=docker/dockerfile:1.4

FROM busybox AS base
ARG foo=baz
RUN echo bar > /foo

FROM alpine:3.15 AS build
COPY --from=base /foo /
RUN echo baz > /bar

FROM scratch
COPY --from=build /bar /
ADD https://raw.githubusercontent.com/moby/moby/master/README.md /

 

We’ll build this image using Buildx v0.8.1 — which comes packaged within the latest version of Docker Desktop (v4.7). The latest Buildx version lets you inspect and use any build information that’s been generated:

$ docker buildx build --build-arg foo=bar --metadata-file metadata.json .

Storing Build Metadata as a File

We’re using the --metadata-file flag, which writes the build result metadata within the metadata.json file. This flag helps retrieve metadata information about your build result — including the digest of your resulting image, and the new containerimage.buildinfo key.

The --metadata-file flag improves upon the previous --iidfile flag, which would only capture the resulting image ID. The following output shows the containerimage.buildinfo key in practice:

{
  "containerimage.buildinfo": {
    "frontend": "dockerfile.v0",
    "attrs": {
      "build-arg:foo": "bar",
      "filename": "Dockerfile",
      "source": "docker/dockerfile:1.4"
    },
    "sources": [
      {
        "type": "docker-image",
        "ref": "docker.io/library/alpine:3.15",
        "pin": "sha256:d6d0a0eb4d40ef96f2310ead734848b9c819bb97c9d846385c4aca1767186cd4"
      },
      {
        "type": "docker-image",
        "ref": "docker.io/library/busybox:latest",
        "pin": "sha256:caa382c432891547782ce7140fb3b7304613d3b0438834dce1cad68896ab110a"
      },
      {
        "type": "http",
        "ref": "https://raw.githubusercontent.com/moby/moby/master/README.md",
        "pin": "sha256:419455202b0ef97e480d7f8199b26a721a417818bc0e2d106975f74323f25e6c"
      }
    ]
  },
  "containerimage.config.digest": "sha256:cd82085d327d4b41f86212fc372f75682f131b5ce5c0c918dabaa0fbf04ec53f",
  "containerimage.descriptor": {
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "digest": "sha256:6afb8217371adb4b75bd7767d711da74ba226ed868fa5560e01a8961ab150ccb",
    "size": 732
  },
  "containerimage.digest": "sha256:6afb8217371adb4b75bd7767d711da74ba226ed868fa5560e01a8961ab150ccb"
}

 

What’s noteworthy about the structure of this result? The new containerimage.buildinfo key now contains your build information. You’ll also see a host of important field names:

  • frontend defines the BuildKit frontend responsible for the build (we’re building from a Dockerfile above).
  • attrs encompasses the build configuration parameters (e.g. when typing --build-arg).
  • sources defines build sources.

Additionally, each sources entry describes an external source that your Dockerfile used while building the result. However, it’s worth highlighting some other JSON data:

  • type identifies the kind of source: docker-image for container images referenced with FROM, git for Git contexts, and http for HTTP URL contexts or remote URLs used by ADD commands.
  • ref is the reference as it was written in your Dockerfile.
  • pin records the exact digest of the dependency that was actually consumed during the build.

Remember that sources are captured for all of your build stages, and not just for the last stage’s base image that was exported.

Storing Build Metadata as Part of Your Image

Your metadata file isn’t the only transport available. BuildKit also embeds build information within the image config as your image is pushed. This makes your build information portable. Here’s what that push command looks like:

$ docker buildx build --build-arg foo=bar --tag crazymax/buildinfo:latest --push .

You can check the build information for any existing image — while on the latest Buildx version — using the imagetools inspect command:

$ docker buildx imagetools inspect crazymax/buildinfo:latest --format "{{json .BuildInfo}}"

{
  "frontend": "dockerfile.v0",
  "sources": [
    {
      "type": "docker-image",
      "ref": "docker.io/library/alpine:3.15",
      "pin": "sha256:d6d0a0eb4d40ef96f2310ead734848b9c819bb97c9d846385c4aca1767186cd4"
    },
    {
      "type": "docker-image",
      "ref": "docker.io/library/busybox:latest",
      "pin": "sha256:caa382c432891547782ce7140fb3b7304613d3b0438834dce1cad68896ab110a"
    },
    {
      "type": "http",
      "ref": "https://raw.githubusercontent.com/moby/moby/master/README.md",
      "pin": "sha256:419455202b0ef97e480d7f8199b26a721a417818bc0e2d106975f74323f25e6c"
    }
  ]
}

 

Unlike with your metadata-file results, build attributes aren’t automatically available within the image config. Attribute inclusion is currently an opt-in configuration to avoid troublesome leaks. Developers occasionally use these attributes as secrets, which isn’t ideal from a security standpoint. We recommend using --mount=type=secret instead.

To add build attributes to your embedded image config, use the image output attribute, buildinfo-attrs:

$ docker buildx build \
--build-arg foo=bar \
--output=type=image,name=crazymax/buildinfo:latest,push=true,buildinfo-attrs=true .

 

Alternatively, you can use the Buildx BUILDKIT_INLINE_BUILDINFO_ATTRS build argument:

$ docker buildx build \
--build-arg BUILDKIT_INLINE_BUILDINFO_ATTRS=1 \
--build-arg foo=bar \
--tag crazymax/buildinfo:latest \
--push .

That’s it! You may now review any newly-generated build dependencies stemming from your image builds.

What’s next?

Transparency is important. Always aim to make your Docker images more self-descriptive, decipherable, reproducible, and visible. In this case, we’ve made it easier to uncover any inputs used while building an image. Additionally, you might compare images updated with security patches to pinned versions of your build sources. That lets you know if your image is up-to-date or safe to use.

This is one important step in our Secure Software Supply Chain (SSSC) journey, and towards better build reproducibility. More information about reproducibility is also available within our BuildKit repo.

However, we want to go even further. Docker is bringing SBOMs to all container images via BuildKit. We want to get our development community involved in this effort to bolster BuildKit — and take that next major step towards higher image-level transparency.

Are you interested in learning more about BuildKit? Our latest BuildKit release has shipped with other useful features — like those showcased in Tonis Tiigi’s blog post. If you’ve been clamoring for improved remote cache support and rapid image rebase, it’s well worth a read!

]]>
Faster Multi-Platform Builds: Dockerfile Cross-Compilation Guide https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/ Thu, 02 Dec 2021 16:00:00 +0000 https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/ There are some important changes happening in the software industry. With Apple moving all of their machines to their custom ARM-based silicon and AWS offering the best performance-per-cost ratio with their Graviton2 instances, one can no longer expect that all software only needs to run on x86 processors. If you work with containers there is some good tooling available for building multi-platform images when your development teams are using different architectures or you want to deploy to a different architecture from the one that you develop on. In this post, I’ll show some patterns that you can use if you want to get the best performance out of such builds.

In order to build multi-platform container images, we will use the docker buildx command. Buildx is a Docker component that enables many powerful build features with a familiar Docker user experience. All builds executed via Buildx run with the Moby BuildKit builder engine. Buildx can also be used standalone or, for example, to run builds in a Kubernetes cluster. In the next version of the Docker CLI, the docker build command will also start to use Buildx by default.

By default, a build executed with Buildx will build an image for the architecture that matches your machine. This way, you get an image that runs on the same machine you are working on. In order to build for a different architecture, you can set the --platform flag, e.g. --platform=linux/arm64. To build for multiple platforms together, you can set multiple values with a comma separator.

# building an image for two platforms
docker buildx build --platform=linux/amd64,linux/arm64 .

In order to build multi-platform images, we also need to create a builder instance, as building multi-platform images is currently only supported when using BuildKit with the docker-container and kubernetes drivers. Setting a single target platform is allowed on all Buildx drivers.

docker buildx create --use
# building an image for two platforms
docker buildx build --platform=linux/amd64,linux/arm64 .

When building a multi-platform image from a Dockerfile, effectively your Dockerfile gets built once for each platform. At the end of the build, all of these images are merged together into a single multi-platform image.

FROM alpine
RUN echo "Hello" > /hello

For example, in the case of a simple Dockerfile like this that is built for two architectures, BuildKit will pull two different versions of the Alpine image, one containing x86 binaries and another containing arm64 binaries, and then run their respective shell binary on each of them.

Different methods of building

Generally, the CPU of your machine can only run binaries for its native architecture. An x86 CPU can’t run ARM binaries and vice versa. So when we are running the above example on an Intel machine, how can it run the shell binary for ARM? It does this by running the binary through a software emulator instead of doing so directly.

docker buildx ls shows what emulators are installed for each of the builders. If you don’t see them listed for your system you can install them with the tonistiigi/binfmt image.
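
If the emulators aren’t listed for your system, a typical way to install them (assuming a Linux host where privileged containers are allowed) is:

$ docker run --privileged --rm tonistiigi/binfmt --install all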

Using an emulator this way is very easy. We don’t need to modify our Dockerfile at all and can build for multiple platforms automatically. But it doesn’t come without downsides. The binaries running this way need to constantly convert their instructions between architectures and therefore don’t run with native speed. Occasionally you might also find a case that triggers a bug in the emulation layer.

One way to avoid this overhead is to modify your Dockerfile so that the longest-running commands don’t run through an emulator. Instead, we can use a cross-compilation stage.

The difference between emulation and cross-compilation is that in the former, we emulate the full system of another architecture in software, while in cross-compilation we only use binaries built for our native architecture with a special configuration option that makes them generate new binaries for our target architecture. As the name says, this technique can not be used for all processes but mostly only when you are running a compiler. Luckily the two techniques can be combined. For example, your Dockerfile can use emulation to install packages from the package manager and use cross-compilation to build your source code.

Emulation vs. cross-compilation build with --platform=linux/amd64,linux/arm64 as run on an Intel/AMD machine. Blue contains x86 binaries, yellow ARM binaries.

When deciding whether to use emulation or cross-compilation, the most important thing to consider is if your process is using a lot of CPU processing power or not. Emulation is usually a fine approach for installing packages or if you need to create some files or run a one-off script. But if using cross-compilation can make your builds (possibly tens of) minutes faster, it is probably worth updating your Dockerfile. If you want to run tests as part of the build then cross-compilation can not achieve that. For the best performance in that case, another option is to use a remote build cluster with multiple machines with different architectures.

Preparing Dockerfile

In order to add cross-compilation to our Dockerfile, we will use multi-stage builds. The most common pattern to use in multi-stage builds is to define a build stage(s) where we prepare our build artifacts and a runtime stage that we export as a final image. We will use the same method here with an extra condition that we want our build stage to always run binaries for our native architecture and our runtime stage to contain binaries for the target architecture.

When we start a build stage with a command like FROM debian, it instructs the builder to pull the Debian image that matches the value set with the --platform flag during your build. What we want to do instead is to make sure this Debian image is always native to our current machine. When we are on an x86 system, we could instead use a command like FROM --platform=linux/amd64 debian. Now, no matter what platform was set during the build, this stage will always be based on amd64. But what happens if we switch to an ARM machine like the new Apple Macs? Do we now need to change all our Dockerfiles? The answer is no: instead of writing a constant platform value into our Dockerfile, we should use a variable, FROM --platform=$BUILDPLATFORM debian.

BUILDPLATFORM is part of a set of automatically defined (global scope) build arguments that you can use. It will always match the platform of your current system, and the builder will fill in the correct value for us.

Here is a complete list of such variables:

  • BUILDPLATFORM: the platform of the current machine (e.g. linux/amd64)
  • BUILDOS: OS component of BUILDPLATFORM (e.g. linux)
  • BUILDARCH: architecture component of BUILDPLATFORM (e.g. amd64, arm64, riscv64)
  • BUILDVARIANT: used to set the ARM variant (e.g. v7)
  • TARGETPLATFORM: the value set with the --platform flag on build
  • TARGETOS: OS component of TARGETPLATFORM (e.g. linux)
  • TARGETARCH: architecture component of TARGETPLATFORM (e.g. arm64)
  • TARGETVARIANT: ARM variant component of TARGETPLATFORM (e.g. v7)
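
As a minimal sketch (not tied to any real project), a Dockerfile like this just prints some of these values during the build:

FROM --platform=$BUILDPLATFORM alpine
ARG BUILDPLATFORM TARGETPLATFORM TARGETOS TARGETARCH
RUN echo "building on $BUILDPLATFORM, targeting $TARGETPLATFORM ($TARGETOS/$TARGETARCH)"

Building it with --platform=linux/amd64,linux/arm64 runs the echo step once per target platform, while the base image is always pulled for the machine doing the build.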

Now in our build stage, we can pull in our source code, install the compiler package we want to use, etc. These commands should be identical to the ones you are already using in your single-platform or emulation-based Dockerfile.

The only additional change that needs to be done now is that when you are calling your compiler process you need to pass it a parameter that configures it to return artifacts for your actual target architecture. Remember that now that our build stage always contains binaries for the host’s native architecture, the compiler can’t determine the target’s architecture automatically from the environment anymore.

In order to pass the target architecture, we can use the same automatically defined build arguments shown before, this time with the TARGET* prefix. As we are using these build arguments inside the stage, they need to be in the local scope and declared with an ARG command before being used.

FROM --platform=$BUILDPLATFORM alpine AS build
# RUN <install build dependencies/compiler>
# COPY <source> .
ARG TARGETPLATFORM
RUN compile --target=$TARGETPLATFORM -o /out/mybinary

The only thing left to do now is to create a runtime stage that we will export as the result of our build. For this stage, we will not use --platform in the FROM definition. We could write FROM --platform=$TARGETPLATFORM, but that is the default value for all build stages anyway, so using the flag is redundant.

FROM alpine
# RUN <install runtime dependencies installed via emulation>
COPY --from=build /out/mybinary /bin

To confirm, let’s look at what happens if the above Dockerfile is built for two platforms with docker buildx build --platform=linux/amd64,linux/arm64 . invoked on an ARM64-based system like the new Apple M1 machines.

First, the builder will pull down the Alpine image for ARM64, install the build dependencies and copy over the source. Note that these steps execute only once, even though we are building for two different platforms. BuildKit is smart enough to understand that both of these platforms depend on the same compiler and source code and automatically deduplicates the steps.

Now two separate instances of containers running the compiler process will be invoked, with a different value passed to the --target flag.

For the export stage, BuildKit now pulls down both ARM64 and x86 versions of the Alpine image. If any runtime packages were used then the x86 versions are installed with the help of the emulation layer. All these steps already ran in parallel to the build stage as they did not share dependencies. As the last step, the binary created by the respective compiler process is copied to the stage.

Sample Dockerfile commands as run on an Apple M1 machine. Blue contains x86 binaries, yellow ARM.

Both of the runtime stages are then converted into an OCI image, and BuildKit will prepare an OCI Image Index structure (also called a manifest list) that contains both of these images.

Go example

For a functional example, let’s look at an example project written in the Go programming language.

A typical multi-stage Dockerfile building a simple Go application would look something like:

FROM golang:1.17-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /out/myapp .

FROM alpine
COPY --from=build /out/myapp /bin

Using cross-compilation in Go is very easy. The only thing you need to do is pass the target architecture with environment variables. go build understands the GOOS and GOARCH environment variables. There is also GOARM for specifying the ARM version for 32-bit systems.

The GOOS and GOARCH values are the same format as the TARGETOS and TARGETARCH values which we saw earlier that BuildKit makes available inside the Dockerfile.

When we apply all the steps we learned before: fixing the build stage to build platform, defining ARG TARGET* variables, and passing cross-compilation parameters to the compiler we will have:

FROM --platform=$BUILDPLATFORM golang:1.17-alpine AS build
WORKDIR /src
COPY . .
ARG TARGETOS TARGETARCH
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/myapp .

FROM alpine
COPY --from=build /out/myapp /bin

As you see, we only needed to make three small modifications and our Dockerfile is much more powerful. Note that there are no downsides to these changes: the Dockerfile is still portable and works on all architectures. It’s just that when we build for a non-native architecture, our builds are much faster.

Let’s look at some additional optimizations you might want to consider as well.

When Go applications depend on other Go modules, they usually do so either by including the sources of the dependencies inside a vendor directory, or, if the project does not include such a directory, by letting the Go compiler pull the dependencies listed in the go.mod file while the go build command is running.

In the latter case, it means that (although our own source code was copied only one time) because the go build process was invoked twice for our multi-platform build, these dependencies would also be pulled twice. It’s better to avoid that by telling Go to download these dependencies before we branch our build stage with the ARG TARGETARCH command.

FROM --platform=$BUILDPLATFORM golang:1.17-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
ARG TARGETOS TARGETARCH
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/myapp .

FROM alpine
COPY --from=build /out/myapp /bin

Now when the two go build processes run, they already have access to the pre-pulled dependencies. We also copied only the go.mod and go.sum files before downloading the packages, so that when our regular source code changes we don’t invalidate the cache for the module downloads.

For completeness, let’s also include cache mounts inside our Dockerfile. RUN --mount options allow exposing new mountpoints to the command that may be used for accessing your source code, build secrets, temporary and cache directories. Cache mounts create persistent directories where you can write your application-specific cache files that reappear the next time you invoke the builder again. This results in big performance gains when you are doing incremental builds after making changes to your source code.

In Go, the directories that you want to turn into cache mounts are /root/.cache/go-build and /go/pkg . The first is the default location of the Go build cache and the second is where go mod downloads modules. This assumes your user is root and GOPATH is /go .

RUN --mount=type=cache,target=/root/.cache/go-build \
    --mount=type=cache,target=/go/pkg \
    GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/myapp .

You can also use a type=bind mount (the default type) to mount in your source code. This helps to avoid the overhead of actually copying the files with COPY. When cross-compiling in a Dockerfile, this is sometimes especially important if you don’t want to copy your source before defining ARG TARGETPLATFORM, as changes in the source code would invalidate the cache for your target-specific dependencies. Note that type=bind mounts are mounted read-only by default. If the commands you are running need to write files to your source code, you might still want to use COPY or set the rw option for the mount.
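
For example, a hypothetical step that needs to write into the source tree during the build could declare the bind mount writable; go generate here is just a placeholder for any command that writes files, and data written to the mount is discarded once the step finishes:

RUN --mount=target=.,rw \
    go generate ./... && \
    GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/myapp .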

This leads to our complete, fully-optimized cross-compiling Go Dockerfile:

FROM --platform=$BUILDPLATFORM golang:1.17-alpine AS build
WORKDIR /src
ARG TARGETOS TARGETARCH
RUN --mount=target=. \
    --mount=type=cache,target=/root/.cache/go-build \
    --mount=type=cache,target=/go/pkg \
    GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/myapp .

FROM alpine
COPY --from=build /out/myapp /bin

As an example, I measured how much time it takes to build the Docker CLI binary with the multi-stage Dockerfile we started with and then the one with the optimizations applied. As you can see, the difference is quite drastic.

https://github.com/docker/cli build time with test Dockerfiles (seconds, lower is better)

For the initial build only for our native architecture, the difference is minimal — only a small change from not needing to run the COPY instruction. But when we build an image both for ARM and x86 CPUs the difference is huge. For our new Dockerfile, doubling architectures increases build time only by 70% (because some parts of the builds were shared), while when the second architecture builds with QEMU emulation, our build time is almost seven times longer.

With the additional help from the cache mounts that we added, our incremental rebuilds after a Go source code change reach a ridiculous 100x speed improvement compared to the old Dockerfile.

]]>
Speed up Building with Docker BuildX and Graviton2 EC2 https://www.docker.com/blog/speed-up-building-with-docker-buildx-and-graviton2-ec2/ Mon, 18 Oct 2021 20:18:41 +0000 https://www.docker.com/blog/speed-up-building-with-docker-buildx-and-graviton2-ec2/ As Arm usage continues to expand, building your images on Arm is crucial to making images available and performant across all architectures, which is why we’ve invested in making it super easy to build Arm and multi-arch images. In a previous blog we outlined how to build multi-arch images locally using the QEMU emulator that comes pre-packaged with Docker Desktop. In this blog we outline how to get started using a remote builder to accomplish the same goal; for our purposes we will be using an Amazon EC2 instance. In December of 2019, Amazon announced new Amazon EC2 instances powered by AWS Graviton2 processors that significantly improve performance over the first-generation AWS Graviton processors. Using a Graviton2 instance to build your Arm images remotely will speed up the process, making it even easier to develop containers on, and for, Arm servers and devices.

We will walk through the following:

  1. Using a Graviton2 EC2 instance as a remote host
  2. Registering the Graviton2 EC2 instance as a remote builder
  3. Building the Docker image on the Graviton2 EC2 instance

To learn more about Buildx and remote builders, you can visit our documentation.


Getting Started With a Remote Builder

Prerequisites

First, we’ll have to ensure that you are using a remote host instead of your local machine. Common ways to access remote Docker instances are via mTLS or SSH. Here we will use SSH for simplicity. In order to start using a remote builder with Buildx and BuildKit on Graviton2, the host needs to be accessible through the Docker CLI using an ssh://<USERNAME>@<HOST> URL, as follows:

$ docker -H ssh://me@graviton2-instance info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.6.1-65-gad9dddc3)
  compose: Docker Compose (Docker Inc., v2.0.0-rc.3)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.9
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 5b46e404f6b9f661a205e28d59c982d3634148f8
 runc version: v1.0.2-0-g52b36a2
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-1045-aws
 Operating System: Ubuntu 20.04.2 LTS
 OSType: linux
 Architecture: aarch64
 CPUs: 2
 Total Memory: 1.837GiB

Create the remote builder with Buildx

Now you can register this remote Graviton2 instance to Docker Buildx using the create command:

$ docker buildx create --name graviton2 \
  --driver docker-container \
  --platform linux/arm64 \
  ssh://me@graviton2-instance
graviton2

--platform is specified so that this node will be preferred for arm64 builds when we add other nodes to this build cluster.
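
For example, an x86 node could later be appended to the same builder so that each platform builds natively; the hostname below is hypothetical:

$ docker buildx create --append --name graviton2 \
  --platform linux/amd64 \
  ssh://me@x86-instance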

Bootstrap the builder to check and create the BuildKit container on the remote host:

$ docker buildx inspect --bootstrap --builder graviton2
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 2.2s done
#1 creating container buildx_buildkit_node1
#1 creating container buildx_buildkit_node1 1.4s done
#1 DONE 3.7s
Name:   graviton2
Driver: docker-container

Nodes:
Name:      node1
Endpoint:  ssh://me@graviton2-instance
Status:    running
Platforms: linux/arm64*, linux/arm/v7, linux/arm/v6

Build your image

Now we will create a simple Dockerfile:

FROM busybox as build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "I am running on \$BUILDPLATFORM, building for \$TARGETPLATFORM" > /log
FROM busybox
COPY --from=build /log /log
RUN cat /log
RUN uname -a

And let’s build the image against our builder:

docker buildx build --builder graviton2 \
  --push -t example.com/hello:latest \
  --platform linux/arm64 .
#2 [internal] load .dockerignore
#2 transferring context:
#2 transferring context: 2B 0.2s done
#2 DONE 0.2s

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 246B 0.3s done
#1 DONE 0.4s

#3 [internal] load metadata for docker.io/library/busybox:latest
#3 ...

#4 [auth] library/busybox:pull token for registry-1.docker.io
#4 DONE 0.0s

#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 1.5s

#5 [build 1/2] FROM docker.io/library/busybox@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57
#5 resolve docker.io/library/busybox@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57 0.0s done
#5 DONE 0.0s

#5 [build 1/2] FROM docker.io/library/busybox@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57
#5 sha256:7560ee4921c3fab4f1d34c83600f6f65841ec863e072374f4e8044ff01df156f 821.72kB / 821.72kB 0.1s done
#5 extracting sha256:7560ee4921c3fab4f1d34c83600f6f65841ec863e072374f4e8044ff01df156f 0.0s done
#5 DONE 0.2s

#6 [build 2/2] RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log
#6 DONE 0.1s

#7 [stage-1 2/4] COPY --from=build /log /log
#7 DONE 0.0s

#8 [stage-1 3/4] RUN cat /log
#8 0.078 I am running on $BUILDPLATFORM, building for $TARGETPLATFORM
#8 DONE 0.1s

#9 [stage-1 4/4] RUN uname -a
#9 0.081 Linux buildkitsandbox 5.4.0-1045-aws #47-Ubuntu SMP Tue Apr 13 07:04:23 UTC 2021 aarch64 GNU/Linux
#9 DONE 0.1s

#10 exporting to image
#10 exporting layers
#10 exporting layers 0.3s done
#10 exporting manifest sha256:7097ec1c09675617e2c44b5924b76f7863c4ff685c640b32dfaa1b1e8f2bc641 0.0s done
#10 exporting config sha256:d18ca92a45b563373606f0a06d0a1d2280d0c11976b1ca64dba0e567d540a3e2 0.0s done
#10 pushing layers
#10 pushing layers 0.7s done

Now you’ve built your image remotely using a Graviton2 instance! These are just some of the things you can do with Buildx. We’d love your feedback on how it went and what you’d like to see us do next; you can submit feedback to our public roadmap.

]]>
Engineering Update: BuildKit 0.9 and Docker Buildx 0.6 Releases https://www.docker.com/blog/engineering-update-buildkit-0-9-and-docker-buildx-0-6-releases/ Wed, 28 Jul 2021 21:28:38 +0000 https://www.docker.com/blog/?p=28529 On July 16th we released BuildKit 0.9.0, Docker Buildx 0.6.0, Dockerfile 1.3.0 and Dockerfile 1.3.0-labs. These releases come with bug fixes, feature-parity improvements, refactoring and also new features.

Dockerfile new features

Installation

There is no installation needed: BuildKit supports loading frontends dynamically from container images. Images for the Dockerfile frontends are available in the docker/dockerfile repository.

To use the external frontend, the first line of your Dockerfile needs to point to the specific image you want to use:

# syntax=docker/dockerfile:1.3

More info: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md

RUN command now supports --network flag

The RUN command now supports a --network flag for requesting a specific type of network for that command (host, none, default). This allows the user to disable all networking during a docker build step to ensure no network is used, or to opt in to explicitly using the host’s network, which requires a specific entitlement to be allowed before it works.

--network=host requires allowing the network.host entitlement. This feature was previously only available on the labs channel:

# syntax=docker/dockerfile:1.3
FROM python:3.6
ADD mypackage.tgz wheels/
RUN --network=none pip install --find-links wheels mypackage
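
To actually use --network=host, the entitlement has to be allowed on both the builder and the build. A minimal sketch, assuming a docker-container builder:

$ docker buildx create --use --name host-net-builder \
  --buildkitd-flags '--allow-insecure-entitlement network.host'
$ docker buildx build --allow network.host .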

RUN --mount flag variable expansion

Values for RUN --mount flag now support variable expansion, except for the from field:

# syntax=docker/dockerfile:1.3
FROM golang
...
ARG GO_CACHE_DIR=/root/.cache/go-build
RUN --mount=type=cache,target=$GO_CACHE_DIR go build ...

Here-document syntax

RUN and COPY commands now support Here-document syntax allowing writing multiline inline scripts and files (labs channel) without lots of && and \ characters:

Before:

# syntax=docker/dockerfile:1.3
FROM debian
RUN apt-get update \
  && apt-get install -y vim

With new Here-document syntax:

# syntax=docker/dockerfile:1.3-labs
FROM debian
RUN <<eot bash
  apt-get update
  apt-get install -y vim
eot

In COPY commands source parameters can be replaced with here-doc indicators. Regular here-doc variable expansion and tab stripping rules apply:

# syntax=docker/dockerfile:1.3-labs
FROM alpine
ARG FOO=bar
COPY <<-eot /app/foo
  hello ${FOO}
eot

More info on BuildKit repository.

OpenTracing providers replaced with support for OpenTelemetry

OpenTelemetry has superseded OpenTracing. The API is quite similar but additional collector endpoints should allow forwarding traces from client or container in the future.

The JAEGER_TRACE env variable is supported as before. An OTEL collector is also supported, both via gRPC and HTTP protocols.

This is also the first time cross-process tracing from Buildx is available:

# create jaeger container
$ docker run -d --name jaeger \
  -p 6831:6831/udp -p 16686:16686 \
  jaegertracing/all-in-one

# create builder with JAEGER_TRACE env
$ docker buildx create \
  --name builder \
  --driver docker-container \
  --driver-opt network=host \
  --driver-opt env.JAEGER_TRACE=localhost:6831 \
  --use

# open Jaeger UI at http://localhost:16686/ and see results as shown below

Resource limiting

Users can now limit the parallelism of the BuildKit solver: the buildkitd config now allows a max-parallelism setting, which is particularly useful for low-powered machines.

Here is an example if you want to do this with GitHub Actions:

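As a minimal sketch, the setting lives in the buildkitd configuration file (buildkitd.toml is an assumed filename) and can be passed to a docker-container builder; the same config file can be handed to the builder you create in a GitHub Actions workflow:

[worker.oci]
  max-parallelism = 2

$ docker buildx create --use --name limited --config ./buildkitd.toml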

We are also now limiting TCP connections to 4 per registry, with an additional connection that is not used for layer pulls and pushes. This per-host limit avoids your build getting stuck while pulling images, and the additional connection is used for metadata requests (image config retrieval) to improve the overall build time.

GitHub Actions cache backend (experimental)

We have released a new experimental GitHub cache backend to speed up Docker builds in GitHub actions.

This replaces a previously inefficient method of reusing the build cache: the official actions/cache action combined with the local cache exporter/importer, which is inefficient because the cache needs to be saved and loaded on every run and tracking does not happen per blob.

To start using this experimental cache, use our example on Build Push Action repository.
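
On the CLI side, the new backend is selected with the gha cache type. This is only a sketch and assumes the GitHub Actions cache service credentials (the ACTIONS_CACHE_URL and ACTIONS_RUNTIME_TOKEN environment variables) are available to the build:

$ docker buildx build \
  --cache-from type=gha \
  --cache-to type=gha,mode=max .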

Git improvements

The default branch name is now detected correctly from the remote when using a Git context.

Support for subdirectories when building from Git source is now released:

$ docker buildx build git://github.com/repo:path/to/subapp

SSH agent is automatically forwarded when building from SSH Git URL:

$ docker buildx build git@github.com:myrepo.git

New platforms and QEMU update

  • Risc-V (buildkitd, buildctl, buildx, frontends)
  • Windows ARM64 (buildctl, buildx)
  • MacOS ARM64 (buildctl)

Also embedded QEMU emulators have been updated to v6.0.0 and provide emulation for additional platforms.

--metadata-file for buildctl and buildx

New --metadata-file flag has been added to build and bake command for buildx that allows saving build result metadata in JSON format:

$ docker buildx build --output type=docker --metadata-file ./metadata.json .
{
  "containerimage.config.digest": "sha256:d8b8b4f781520aeafedb5a88ff50fbb625cfebad87e392794f1e26a724a2f22a",
  "containerimage.digest": "sha256:868f04872380274dcf8528222e84dc66702394de80889e51c87a14126ea9ff6a"
}

Buildctl also allows --metadata-file flag to output build metadata.

Docker buildx image

Buildx binaries can now be accessed through the buildx-bin Docker image, so that you can, for example, use buildx inside a Docker image in your CI. Here is how to use buildx inside a Dockerfile:

FROM docker
COPY --from=docker/buildx-bin:latest /buildx /usr/libexec/docker/cli-plugins/docker-buildx
RUN docker buildx version

Other changes

For the complete list of changes, see the official release notes:

]]>
Guest Post: Calling the Docker CLI from Python with Python-on-whales https://www.docker.com/blog/guest-post-calling-the-docker-cli-from-python-with-python-on-whales/ Thu, 11 Mar 2021 20:00:00 +0000 https://www.docker.com/blog/?p=27679
Image: Alice Lang, alicelang-creations@outlook.fr

At Docker, we are incredibly proud of our vibrant, diverse and creative community. From time to time, we feature cool contributions from the community on our blog to highlight some of the great work our community does. The following is a guest post by Docker community member Gabriel de Marmiesse. Are you working on something awesome with Docker? Send your contributions to William Quiviger (@william) on the Docker Community Slack and we might feature your work!   

The most common way to call and control Docker is by using the command line.

With the increased usage of Docker, users want to call Docker from programming languages other than shell. One popular way to use Docker from Python has been to use docker-py. This library has had so much success that even docker-compose is written in Python, and leverages docker-py.

The goal of docker-py, though, is not to replicate the Docker client (written in Golang), but to talk to the Docker Engine HTTP API. The Docker client is extremely complex and is hard to duplicate in another language. Because of this, a lot of features that exist in the Docker client are not available in docker-py. Users would sometimes get frustrated because docker-py did not behave exactly like the CLI.

Today, we’re presenting a new project built by Gabriel de Marmiesse from the Docker community: Python-on-whales. The goal of this project is to have a 1-to-1 mapping between the Docker CLI and the Python library. We do this by communicating with the Docker CLI instead of calling the Docker Engine HTTP API directly.


If you need to call the Docker command line, use Python-on-whales. And if you need to call the Docker engine directly, use docker-py.

In this post, we’ll take a look at some of the features that are not available in docker-py but are available in Python-on-whales:

  • Building with Docker buildx
  • Deploying to Swarm with docker stack
  • Deploying to the local Engine with Compose

Start by downloading Python-on-whales with 

pip install python-on-whales

and you’re ready to rock!

Docker Buildx

Here we build a Docker image. Python-on-whales uses buildx by default and gives you the output in real time.

>>> from python_on_whales import docker
>>> my_image = docker.build(".", tags="some_name")
[+] Building 1.6s (17/17) FINISHED
 => [internal] load build definition from Dockerfile                       0.0s
 => => transferring dockerfile: 32B                                        0.0s
 => [internal] load .dockerignore                                          0.0s
 => => transferring context: 2B                                            0.0s
 => [internal] load metadata for docker.io/library/python:3.6              1.4s
 => [python_dependencies 1/5] FROM docker.io/library/python:3.6@sha256:293 0.0s
 => [internal] load build context                                          0.1s
 => => transferring context: 72.86kB                                       0.0s
 => CACHED [python_dependencies 2/5] RUN pip install typeguard pydantic re 0.0s
 => CACHED [python_dependencies 3/5] COPY tests/test-requirements.txt /tmp 0.0s
 => CACHED [python_dependencies 4/5] COPY requirements.txt /tmp/           0.0s
 => CACHED [python_dependencies 5/5] RUN pip install -r /tmp/test-requirem 0.0s
 => CACHED [tests_ubuntu_install_without_buildx 1/7] RUN apt-get update && 0.0s
 => CACHED [tests_ubuntu_install_without_buildx 2/7] RUN curl -fsSL https: 0.0s
 => CACHED [tests_ubuntu_install_without_buildx 3/7] RUN add-apt-repositor 0.0s
 => CACHED [tests_ubuntu_install_without_buildx 4/7] RUN  apt-get update & 0.0s
 => CACHED [tests_ubuntu_install_without_buildx 5/7] WORKDIR /python-on-wh 0.0s
 => CACHED [tests_ubuntu_install_without_buildx 6/7] COPY . .              0.0s
 => CACHED [tests_ubuntu_install_without_buildx 7/7] RUN pip install -e .  0.0s
 => exporting to image                                                     0.1s
 => => exporting layers                                                    0.0s
 => => writing image sha256:e1c2382d515b097ebdac4ed189012ca3b34ab6be65ba0c 0.0s
 => => naming to docker.io/library/some_image_name

Docker Stacks

Here we deploy a simple Swarmpit stack on a local Swarm. You get a Stack object that has several methods: remove(), services(), ps().

>>> from python_on_whales import docker
>>> docker.swarm.init()
>>> swarmpit_stack = docker.stack.deploy("swarmpit", compose_files=["./docker-compose.yml"])
Creating network swarmpit_net
Creating service swarmpit_influxdb
Creating service swarmpit_agent
Creating service swarmpit_app
Creating service swarmpit_db
>>> swarmpit_stack.services()
[<python_on_whales.components.service.Service object at 0x7f9be5058d60>,
<python_on_whales.components.service.Service object at 0x7f9be506d0d0>,
<python_on_whales.components.service.Service object at 0x7f9be506d400>,
<python_on_whales.components.service.Service object at 0x7f9be506d730>]
>>> swarmpit_stack.remove()

Docker Compose

Here we show how we can run a Docker Compose application with Python-on-whales. Note that, behind the scenes, it uses the new version of Compose written in Golang. This version of Compose is still experimental. Take appropriate precautions.

$ git clone https://github.com/dockersamples/example-voting-app.git
$ cd example-voting-app
$ python
>>> from python_on_whales import docker
>>> docker.compose.up(detach=True)
Network "example-voting-app_back-tier"  Creating
Network "example-voting-app_back-tier"  Created
Network "example-voting-app_front-tier"  Creating
Network "example-voting-app_front-tier"  Created
example-voting-app_redis_1  Creating
example-voting-app_db_1  Creating
example-voting-app_db_1  Created
example-voting-app_result_1  Creating
example-voting-app_redis_1  Created
example-voting-app_worker_1  Creating
example-voting-app_vote_1  Creating
example-voting-app_worker_1  Created
example-voting-app_result_1  Created
example-voting-app_vote_1  Created
>>> for container in docker.compose.ps():
...     print(container.name, container.state.status)
example-voting-app_vote_1 running
example-voting-app_worker_1 running
example-voting-app_result_1 running
example-voting-app_redis_1 running
example-voting-app_db_1 running
>>> docker.compose.down()
>>> print(docker.compose.ps())
[]

Bonus section: Docker object attributes as Python attributes

All information that you can access with docker inspect is available as Python attributes:

>>> from python_on_whales import docker
>>> my_container = docker.run("ubuntu", ["sleep", "infinity"], detach=True)
>>> my_container.state.started_at
datetime.datetime(2021, 2, 18, 13, 55, 44, 358235, tzinfo=datetime.timezone.utc)
>>> my_container.state.running
True
>>> my_container.kill()
>>> my_container.remove()

>>> my_image = docker.image.inspect("ubuntu")
>>> print(my_image.config.cmd)
['/bin/bash']

What’s next for Python-on-whales?

We’re currently improving the integration of Python-on-whales with the new Compose in the Docker CLI (currently beta).

You can consider that Python-on-whales is in beta. Some small API changes are still possible. 

We encourage the community to try it out and give feedback in the issues!

To learn more about Python-on-whales:

]]>