Developing Go Apps With Docker
https://www.docker.com/blog/developing-go-apps-docker/ (Wed, 02 Nov 2022)

Go (or Golang) is one of the most loved and wanted programming languages, according to Stack Overflow’s 2022 Developer Survey. Thanks to its smaller binary sizes vs. many other languages, developers often use Go for containerized application development.

Mohammad Quanit explored the connection between Docker and Go during his Community All-Hands session. Mohammad shared how to Dockerize a basic Go application while exploring each core component involved in the process.

Follow along as we dive into these containerization steps. We’ll explore using a Go application with an HTTP web server — plus key best practices, optimization tips, and ways to bolster security. 

Go application components

Creating a full-fledged Go application requires you to create some Go-specific components. These are essential to many Go projects, and the containerization process relies heavily on them. Let’s take a closer look at those now.

Using main.go and go.mod

Mohammad mainly highlights the main.go file since you can’t run an app without executable code. In Mohammad’s case, he created a simple web server with two routes: one that prints a message using Go’s fmt package, and one that returns the current time.

A main.go file creating a web server with one route that prints a message via the fmt package and another that returns the current time.
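For reference, here's a minimal sketch of such a server. This is an approximation rather than Mohammad's exact code (the handler names are assumptions), and it listens on port 8081 to match the commands below:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// helloHandler responds with a simple greeting.
func helloHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello World")
}

// timeHandler responds with the current server time.
func timeHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, time.Now().Format(time.RFC1123))
}

func main() {
	http.HandleFunc("/", helloHandler)
	http.HandleFunc("/time", timeHandler)
	http.ListenAndServe(":8081", nil)
}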

What’s nice about Mohammad’s example is that it isn’t too lengthy or complex. You can emulate this while creating your own web server or use it as a stepping stone for more customization.

Note: Your file doesn’t have to be named main.go (you can name it anything you want), but it must declare package main and define a func main() within your code. Both exist in the sample above.

We always recommend confirming that your code works as expected. Enter the command go run main.go to spin up your application. You can alternatively replace main.go with your file’s specific name. Then, open your browser and visit http://localhost:8081 to view your “Hello World” message or equivalent. Since we have two routes, navigating to http://localhost:8081/time displays the current time thanks to Mohammad’s second function. 

Next, we have the go.mod file. This acts as the root file of your Go module: it defines the module path used for imports (shown above) and lists your dependency requirements. Go modules also free you to keep your project code in whichever directory you choose.
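For a dependency-free server like this one, a minimal go.mod only needs a module path and a Go version; the module name below is a placeholder:

module github.com/your-username/go-docker

go 1.19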

With these two pieces in place, you’re ready to create your Dockerfile.

Creating your Dockerfile

Building and deploying your Dockerized Go application means starting with a software image. While you can pull this directly from Docker Hub (using the CLI), beginning with a Dockerfile gives you more configuration flexibility. 

You can create this file within your favorite editor, like VS Code. We recommend VS Code since it supports the official Docker extension. This extension supports debugging, autocompletion, and easy project file navigation. 

Choosing a base image and including your application code is pretty straightforward. Since Mohammad is using Go, he kicked off his Dockerfile by specifying the golang Docker Official Image as a parent image. Docker will build your final container image from this. 

You can choose whatever version you’d like, but a pinned version like golang:1.19.2-bullseye is both stable and slim. Newer image versions like these are also safe from October 2022’s Text4Shell vulnerability.

You’ll also need to do the following within your Dockerfile:

  • Include an app directory for your source code
  • Copy everything from the root directory into your app directory
  • Copy your Go files into your app directory and install dependencies
  • Build your app with configuration
  • Tell your Docker container to listen on a certain port at runtime
  • Define an executable command that runs once your container starts

With these points in mind, here’s how Mohammad structured his basic Dockerfile:

# Specifies a parent image
FROM golang:1.19.2-bullseye

# Creates an app directory to hold your app’s source code
WORKDIR /app

# Copies everything from your root directory into /app
COPY . .

# Installs Go dependencies
RUN go mod download

# Builds your app with optional configuration
RUN go build -o /godocker

# Tells Docker which network port your container listens on
EXPOSE 8080

# Specifies the executable command that runs when the container starts
CMD [ "/godocker" ]

From here, you can run a quick CLI command to build your image from this file: 

docker build --rm -t [YOUR IMAGE NAME]:alpha .

This creates an image while removing any intermediate containers created with each image layer (or step) throughout the build process. You’re also tagging your image with a name for easier reference later on. 

Confirm that Docker built your image successfully by running the docker image ls command:

A terminal running the docker image ls command and showing that the image was built successfully.

If you’ve already pulled or built images in the past and kept them, they’ll also appear in your CLI output. However, you can see Mohammad’s go-docker image listed at the top since it’s the most recent. 

Making changes for production workloads

What if you want to account for code or dependency changes that’ll inevitably occur with a production Go application? You’ll need to tweak your original Dockerfile and add some instructions, according to Mohammad, so that changes are visible and the build process succeeds:

FROM golang:1.19.2-bullseye

WORKDIR /app

# Effectively tracks changes within your go.mod file
COPY go.mod .

RUN go mod download

# Copies your source code into the app directory
COPY main.go .

RUN go build -o /godocker

EXPOSE 8080

CMD [ "/godocker" ]

After making those changes, you’ll want to run the same docker build and docker image ls commands. Now, it’s time to run your new image! Enter the following command to start a container from your image: 

docker run -d -p 8080:8081 --name go-docker-app [YOUR IMAGE NAME]:alpha

Confirm that this worked by entering the docker ps command, which generates a list of your containers. If you have Docker Desktop installed, you can also visit the Containers tab from the Docker Dashboard and locate your new container in the list. The same applies to your image builds; just use the Images tab instead.

Congratulations! By tracing Mohammad’s steps, you’ve successfully containerized a functioning Go application. 

Best practices and optimizations

While our Go application gets the job done, Mohammad’s final image is pretty large at 913MB. The client (or end user) shouldn’t have to download such a hefty file. 

Mohammad recommends using a multi-stage build to only copy forward the components you need between image layers. Although we start with a golang image as the builder, defining a second build stage with a slim base like alpine for the final image helps reduce image size. You can watch his step-by-step approach to tackling this.
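Mohammad demonstrates his own version in the session, but a rough sketch of that multi-stage pattern could look like this (the stage name, alpine tag, and CGO_ENABLED setting are assumptions added so the statically built binary runs on a musl-based final image):

# Build stage: compile a static Go binary
FROM golang:1.19.2-bullseye AS builder
WORKDIR /app
COPY go.mod .
RUN go mod download
COPY main.go .
RUN CGO_ENABLED=0 go build -o /godocker

# Final stage: copy only the binary into a slim base image
FROM alpine:3.16
COPY --from=builder /godocker /godocker
EXPOSE 8080
CMD [ "/godocker" ]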

This is beneficial and common across numerous use cases. However, you can take things a step further by using FROM scratch in your multi-stage builds. This empty base image is the smallest we offer and runs statically compiled binaries, making it perfect for Go application development.

You can learn more about our scratch image on Docker Hub. Although scratch is listed on Hub, you can’t pull it; you reference it directly in your Dockerfile’s FROM instruction instead.

Develop your Go application today

Mohammad Quanit outlined some user-friendly development workflows that can benefit both newer and experienced Go users. By following his steps and best practices, it’s possible to create cross-platform Go apps that are slim and performant. Docker and Go inherently mesh well together, and we also encourage you to explore what’s possible through containerization. 

Want to learn more?

How to Rapidly Build Multi-Architecture Images with Buildx
https://www.docker.com/blog/how-to-rapidly-build-multi-architecture-images-with-buildx/ (Fri, 17 Jun 2022)

Successfully running your container images on a variety of CPU architectures can be tricky. For example, you might want to build your IoT application — running on an arm64 device like the Raspberry Pi — from a specific base image. However, Docker images typically support amd64 architectures by default. This scenario calls for a container image that supports multiple architectures, which we’ve highlighted in the past.

Multi-architecture (multi-arch) images typically contain variants for different architectures and OSes. These images may also support CPU architectures like arm32v5+, arm64v8, s390x, and others. The magic of multi-arch images is that Docker automatically grabs the variant matching your OS and CPU pairing.

While a regular container image has a manifest, a multi-architecture image has a manifest list. The list combines the manifests that show information about each variant’s size, architecture, and operating system.
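If you’d like to see a manifest list for yourself, Buildx includes an imagetools subcommand that prints the manifests behind any multi-arch image on a registry; the output lists each variant’s digest, OS, and architecture. The golang tag below is only an example:

docker buildx imagetools inspect golang:1.17-alpine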

Multi-architecture images are beneficial when you want to run your container locally on your x86-64 Linux machine, and remotely atop AWS Elastic Compute Cloud (EC2) Graviton2 CPUs. Additionally, it’s possible to build language-specific, multi-arch images — as we’ve done with Rust.

Follow along as we learn about each component behind multi-arch image builds, then quickly create our image using Buildx and Docker Desktop.

Building Multi-Architecture Images with Buildx and Docker Desktop

You can build a multi-arch image by creating the individual images for each architecture, pushing them to Docker Hub, and entering docker manifest to combine them within a tagged manifest list. You can then push the manifest list to Docker Hub. This method is valid in some situations, but it can become tedious and relatively time consuming.

 

Note: However, you should only use the docker manifest command in testing — not production. This command is experimental. We’re continually tweaking functionality and any associated UX while making docker manifest production ready.

 

However, two tools make it much easier to create multi-architectural builds: Docker Desktop and Docker Buildx. Docker Buildx enables you to complete every multi-architecture build step with one command via Docker Desktop.

Before diving into the nitty gritty, let’s briefly examine some core Docker technologies.

Dockerfiles

The Dockerfile is a text file containing all necessary instructions needed to assemble and deploy a container image with Docker. We’ll summarize the most common types of instructions, while our documentation contains information about others:

  • The FROM instruction headlines each Dockerfile, initializing the build stage and setting a base image which can receive subsequent instructions.
  • RUN defines important executables and forms additional image layers as a result. RUN also has a shell form for running commands.
  • WORKDIR sets a working directory for any following instructions. While you can explicitly set this, Docker will automatically assign a directory in its absence.
  • COPY, as it sounds, copies new files from a specified source and adds them into your container’s filesystem at a given relative path.
  • CMD comes in three forms, letting you define executables, parameters, or shell commands. A Dockerfile should contain only one CMD; if you list more than one, only the last is respected.

 

Dockerfiles facilitate automated, multi-layer image builds based on your unique configurations. They’re relatively easy to create, and can grow to support images that require complex instructions. Dockerfiles are crucial inputs for image builds.

Buildx

Buildx leverages the docker build command to build images from a Dockerfile and sets of files located at a specified PATH or URL. Buildx comes packaged within Docker Desktop, and is a CLI plugin at its core. We consider it a plugin because it extends this base command with complete support for BuildKit’s feature set.

We offer Buildx as a CLI command called docker buildx, which you can use with Docker Desktop. In Linux environments, the buildx command also works with the build command on the terminal. Check out our Docker Buildx documentation to learn more.

BuildKit Engine

BuildKit is one core component within our Moby Project framework, which is also open source. It’s an efficient build system that improves upon the original Docker Engine. For example, BuildKit lets you connect with remote repositories like Docker Hub, and offers better performance via caching. You don’t have to rebuild every image layer after making changes.

While building a multi-arch image, BuildKit detects your specified architectures and triggers Docker Desktop to build and simulate those architectures. The docker buildx command helps you tap into BuildKit.

Docker Desktop

Docker Desktop is an application — built atop Docker Engine — that bundles together the Docker CLI, Docker Compose, Kubernetes, and related tools. You can use it to build, share, and manage containerized applications. Through the baked-in Docker Dashboard UI, Docker Desktop lets you tackle tasks with quick button clicks instead of manually entering intricate commands (though this is still possible).

Docker Desktop’s QEMU emulation support lets you build and simulate multiple architectures in a single environment. It also enables building and testing on your macOS, Windows, and Linux machines.

Now that you have working knowledge of each component, let’s hop into our walkthrough.

Prerequisites

Our tutorial requires the following:

  • The correct Go binary for your OS, which you can download here
  • The latest version of Docker Desktop
  • A basic understanding of how Docker works. You can follow our getting started guide to familiarize yourself with Docker Desktop.

 

Building a Sample Go Application

Let’s begin by building a basic Go application which prints text to your terminal. First, create a new folder called multi_arch_sample and move to it:

mkdir multi_arch_sample && cd multi_arch_sample

Second, run the following command to track code changes in the application dependencies:

go mod init multi_arch_sample

Your terminal will output a similar response to the following:

go: creating new go.mod: module multi_arch_sample
go: to add module requirements and sums:
  	go mod tidy

 

Third, create a new main.go file and add the following code to it:

package main
 
import (
  	"fmt"
  	"net/http"
)
 
 
func readyToLearn(w http.ResponseWriter, req *http.Request) {
  	w.Write([]byte("<h1>Ready to learn!</h1>"))
	fmt.Println("Server running...")
}
 
func main() {
     
	http.HandleFunc("/", readyToLearn)
  	http.ListenAndServe(":8000", nil)
}

 

This code creates the function readyToLearn, which serves “Ready to learn!” at the 127.0.0.1:8000 web address. It also outputs the phrase Server running… to the terminal whenever a request comes in.

Next, enter the go run main.go command to run your application code in the terminal, then visit 127.0.0.1:8000 in your browser to see the Ready to learn! response.

Since your app is ready, you can prepare a Dockerfile to handle the multi-architecture deployment of your Go application.

Creating a Dockerfile for Multi-arch Deployments

Create a new file in the working directory and name it Dockerfile. Next, open that file and add in the following lines:

# syntax=docker/dockerfile:1
 
# specify the base image to  be used for the application
FROM golang:1.17-alpine
 
# create the working directory in the image
WORKDIR /app
 
# copy Go modules and dependencies to image
COPY go.mod ./
 
# download Go modules and dependencies
RUN go mod download
 
# copy all the Go files ending with .go extension
COPY *.go ./
 
# compile application
RUN go build -o /multi_arch_sample
 
# network port at runtime
EXPOSE 8000
 
# execute when the container starts
CMD [ "/multi_arch_sample" ]

 

Building with Buildx

Next, you’ll need to build your multi-arch image. This image is compatible with both the amd64 and arm64 server architectures. Since you’re using Buildx, BuildKit is also enabled by default. You won’t have to switch on this setting or enter any extra commands to leverage its functionality.

The builder builds and provisions a container. It also packages the container for reuse. Additionally, Buildx supports multiple builder instances — which is pretty handy for creating scoped, isolated, and switchable environments for your image builds.

Enter the following command to create a new builder, which we’ll call mybuilder:

docker buildx create --name mybuilder --use --bootstrap

You should get a terminal response that says mybuilder. You can also view a list of builders using the docker buildx ls command. You can even inspect a new builder by entering docker buildx inspect <name>.

Triggering the Build

Now, you’ll jumpstart your multi-architecture build with the single docker buildx command shown below:

docker buildx build --push \
--platform linux/amd64,linux/arm64 \
--tag your_docker_username/multi_arch_sample:buildx-latest .

 

This does several things:

  • Combines the build command to start a build
  • Shares the image with Docker Hub using the push operation
  • Uses the --platform flag to specify the target architectures you want to build for. BuildKit then assembles the image manifest for the architectures
  • Uses the --tag flag to set the image name as multi_arch_sample
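
As a side note, the --push flag is used here because a multi-platform result can’t be loaded into the classic local image store as a single manifest list. If you’d like to smoke test one variant locally before pushing, you can build a single platform with --load instead (reusing the same tag):

docker buildx build --load \
--platform linux/amd64 \
--tag your_docker_username/multi_arch_sample:buildx-latest .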

 

Once your build is finished, your terminal will display the following:

[+] Building 123.0s (23/23) FINISHED

 

Next, open Docker Desktop and go to Images > REMOTE REPOSITORIES. You’ll see your newly created image in the Dashboard!

 

Conclusion

Congratulations! You’ve successfully explored multi-architecture builds, step by step. You’ve seen how Docker Desktop, Buildx, BuildKit, and other tooling enable you to create and deploy multi-architecture images. While we’ve used a sample Go web application, you can apply these processes to other images and applications.

To tackle your own projects, learn how to get started with Docker to build more multi-architecture images with Docker Desktop and Buildx. We’ve also outlined how to create a custom registry configuration using Buildx.

Deploying Web Applications Quicker and Easier with Caddy 2
https://www.docker.com/blog/deploying-web-applications-quicker-and-easier-with-caddy-2/ (Tue, 31 May 2022)

Follow along as we dissect an awesome method for accelerating your web application server deployment. Huge thanks to Chris Maki for his help in putting this guide together!

 

Today’s web applications — and especially enterprise variants — have numerous moving parts that need continual upkeep. These tasks can easily overwhelm developers who already have enough on their plates. You might have to grapple with:

  • TLS certificate management and termination
  • Web server setup and maintenance
  • Application firewalls and proxies
  • API gateway integration
  • Caching
  • Logging

Consequently, the task of building, deploying, and maintaining a web application has become that much more complicated. Developers must worry about performance, access delegation, and security.

Lengthy development times reflect this. It takes 4.5 months on average to build a complete web app — while creating a viable backend can take two to three months. There’s massive room for improvement. Web developers need simpler solutions that dramatically reduce deployment time.

First, we’ll briefly highlight a common deployment pipeline. Second, we’ll explore an easier way to get things up and running with minimal maintenance. Let’s jump in.

Cons of Server Technologies Like NGINX

There are many ways to deploy a web application backend. Upwards of 60% of the web server market is collectively dominated by NGINX and Apache. While these options are tried and true, they’re not 100% ready to use out of the box.

Let’s consider one feature that’s becoming indispensable across web applications: HTTPS. If you’re accustomed to NGINX web servers, you might know that these servers are HTTP-enabled right away. However, if SSL is required for you and your users, you’ll have to manually configure your servers to support it. Here’s what that process looks like from NGINX’s documentation:

server {
    listen              443 ssl;
    server_name         www.example.com;
    ssl_certificate     www.example.com.crt;
    ssl_certificate_key www.example.com.key;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    ...
}

 

Additionally, NGINX shares that “the ssl parameter must be enabled on listening sockets in the server block, and the locations of the server certificate and private key files should be specified.”

You’re tasked with these core considerations and determining best practices through your configurations. Additionally, you must properly optimize your web server to handle these HTTPS connections — without introducing new bottlenecks based on user activity, resource consumption, and timeouts. Your configuration files slowly grow with each customization:

worker_processes auto;

http {
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    server {
        listen              443 ssl;
        server_name         www.example.com;
        keepalive_timeout   70;

        ssl_certificate     www.example.com.crt;
        ssl_certificate_key www.example.com.key;
        ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers         HIGH:!aNULL:!MD5;
        ...

 

It’s worth remembering this while creating your web server. Precise, fine-grained control over server behavior is highly useful but has some drawbacks. Configurations can be time consuming. They may also be susceptible to coding conflicts introduced elsewhere. On the certificate side, you’ll often have to create complex SSL certificate chains to ensure maximum cross-browser compatibility:

$ openssl s_client -connect www.godaddy.com:443
...
Certificate chain
0 s:/C=US/ST=Arizona/L=Scottsdale/1.3.6.1.4.1.311.60.2.1.3=US
/1.3.6.1.4.1.311.60.2.1.2=AZ/O=GoDaddy.com, Inc
/OU=MIS Department/CN=www.GoDaddy.com
/serialNumber=0796928-7/2.5.4.15=V1.0, Clause 5.(b)
i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc.
/OU=http://certificates.godaddy.com/repository
/CN=Go Daddy Secure Certification Authority
/serialNumber=07969287
1 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc.
/OU=http://certificates.godaddy.com/repository
/CN=Go Daddy Secure Certification Authority
/serialNumber=07969287
i:/C=US/O=The Go Daddy Group, Inc.
/OU=Go Daddy Class 2 Certification Authority
2 s:/C=US/O=The Go Daddy Group, Inc.
/OU=Go Daddy Class 2 Certification Authority
i:/L=ValiCert Validation Network/O=ValiCert, Inc.
/OU=ValiCert Class 2 Policy Validation Authority
/CN=http://www.valicert.com//emailAddress=info@valicert.com
...

 

Spinning up your server therefore takes multiple steps. You must also take these tasks and scale them to include proxying, caching, logging, and API gateway setup. The case is similar for module-based web servers like Apache.

We don’t say this to knock either web server (in fact, we maintain official images for both Apache and NGINX). However — as a backend developer — you’ll want to weigh those development-and-deployment efforts vs. their collective benefits. That’s where a plug-and-play solution comes in handy. This is perfect for those without deep backend experience or those strapped for time. 

Pros of Using Caddy 2

Written in Go, Caddy 2 (which we’ll simply call “Caddy” throughout this guide) acts as an enterprise-grade web server with automatic HTTPS. Other servers don’t share this same feature out of the box.

Additionally, Caddy 2 offers some pretty cool benefits:

  • Automatic TLS certificate renewals
  • OCSP stapling, to speed up SSL handshakes through request consolidation
  • Static file serving for scripts, CSS, and images that enrich web applications
  • Reverse proxying for scalability, load balancing, health checking, and circuit breaking
  • Kubernetes (K8s) ingress, which interfaces with K8s’ Go client and SharedInformers
  • Simplified logging, caching, API gateway, and firewall support
  • Configurable, shared Websocket and proxy support with HTTP/2

This is not an exhaustive list, by any means. However, these baked-in features alone can cut deployment time significantly. We’ll explore how Caddy 2 works, and how it makes server setup more enjoyable. Let’s hop in.

Here’s How Caddy Works

One Caddy advantage is its approachability for developers of all skill levels. It also plays nicely with external configurations.

Have an existing web server configuration that you want to migrate over to Caddy? No problem. Native adapters convert NGINX, TOML, Caddyfiles, and other formats into Caddy’s JSON format. You won’t have to reinvent the wheel nor start fresh to deploy Caddy in a containerized fashion.
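For instance, Caddy’s adapt subcommand converts a Caddyfile into that JSON format out of the box, and third-party adapters (such as the NGINX adapter module referenced later in this guide) plug into the same --adapter flag once they’re compiled into your Caddy binary:

$ caddy adapt --config ./Caddyfile --adapter caddyfile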

Here’s how your Caddyfile structure might appear:

example.com

# Templates give static sites some dynamic features
templates

# Compress responses according to Accept-Encoding headers
encode gzip zstd

# Make HTML file extension optional
try_files {path}.html {path}

# Send API requests to backend
reverse_proxy /api/* localhost:9005

# Serve everything else from the file system
file_server

 

Note that this Caddyfile is even considered complex by Caddy’s own standards (since you’re defining extra functionality). What if you want to run WordPress with fully-managed HTTPS? The following Caddyfile enables this:

example.com

root * /var/www/wordpress
php_fastcgi unix//run/php/php-version-fpm.sock
file_server

 

That’s it! Basic files can produce some impressive results. Each passes essential instructions to your server and tells it how to run. Caddy’s reference documentation is also extremely helpful should you want to explore further.

Quick Commands

Before tackling your image, Caddy has shared some rapid, one-line commands that perform some basic tasks. You’ll enter these through the Caddy CLI:

 

To create a quick, local file server:
$ caddy file-server

To create a public file server over HTTPS:
$ caddy file-server --domain yoursampleapp.com

To perform an HTTPS reverse proxy:
$ caddy reverse-proxy --from example.com --to localhost:9000

To run a Caddyfile-backed server in an existing working directory:
$ caddy run

 

Caddy notes that these commands are tested and approved for production deployments. They’re safe, easy, and reliable. Want to learn many more useful commands and subcommands? Check out Caddy’s CLI documentation to streamline your workflows.

Leveraging the Caddy Container Image

Caddy bills itself as a web-server solution with fewer moving parts, and therefore fewer management demands. Its static binary compiles for any platform. Since Caddy has no dependencies, it can run practically anywhere — and within containers. This makes it a perfect match for Docker, which is why developers have downloaded Caddy’s Docker Official Image over 100 million times. You can find our Caddy image on Docker Hub.

 


 

We’ll discuss how to use this image to deploy your web applications faster, and share some supporting Docker mechanisms that can help streamline those processes. You don’t even need libc, either.

Prerequisites

  1. A free Docker Hub account – for complete push/pull registry access to our Caddy Official Image
  2. The latest version of Docker Desktop, tailored to your OS and CPU (for macOS users)
  3. Your preferred IDE, though VSCode is popular

Docker Desktop unlocks greater productivity around container, image, and server management. Although the CLI remains available, we’re leveraging the GUI while running important tasks.

Pull Your Caddy Image

Getting started with Docker’s official Caddy image is easy. Enter the docker pull caddy:latest command to download the image locally on your machine. You can also specify a Caddy image version using a number of available tags.

Entering docker pull caddy:latest grabs Caddy’s current version, while docker pull caddy:2.5.1 requests an image running Caddy v2.5.1 (for example).

Using :latest is okay during testing but not always recommended in production. Ideally, you’ll aim to thoroughly test and vet any image from a security standpoint before using it. Pulling caddy:latest (or any image tagged similarly) may invite unwanted gremlins into your deployment.

Since Docker automatically grabs the newest release, it’s harder to pre-inspect that image or notice that changes have been made. However, :latest images can include the most up-to-date vulnerability and bug fixes. Image release notes are helpful here.

You can confirm your successful image pull within Docker Desktop:

 


 

After pulling caddy:latest, you can enter the docker image ls command to view its size. Caddy is even smaller than both NGINX and Apache’s httpd images:

 


 

Need to inspect the contents of your Caddy image? Docker Desktop lets you summon a Docker Software Bill of Materials (SBOM) — which lists all packages contained within your designated image. Simply enter the docker sbom caddy:latest command (or use any container image:tag):

 

You’ll see that your 43.1MB caddy:latest image contains 129 packages. These help Caddy run effectively cross-platform.

Run Your Caddy Image

Your image is ready to go! Next, run your image to confirm that Caddy is working properly with the following command:

docker run --rm -d -p 8080:80 --name web caddy

 

Each portion of this command serves a purpose:

  • --rm removes the container when it exits or when the daemon exits (whichever comes first)
  • -d runs your Caddy container in the background
  • -p designates port connections between host and container
  • --name lets you assign a non-random name to your container
  • web is an argument to the --name flag, and is the descriptive name of your running container
  • caddy is your image’s name, which was automatically assigned during your initial docker pull command

Since you’ve established a port connection, navigate to localhost:8080 in your browser. Caddy should display a webpage:

 


 

You’ll see some additional instructions for setting up Caddy 2. Scrolling further down the page also reveals more details, plus some handy troubleshooting tips.

Time to Customize

These next steps will help you customize your web application and map it into the container. The Caddy Docker Official Image serves your critical static website files from the /usr/share/caddy directory by default (the same path we’ll mount to shortly).

Here’s where you’ll store any CSS or HTML — or anything residing within your index.html file. Additionally, this becomes your root directory once your Caddyfile is complete.

Locate your Caddyfile at /etc/caddy/Caddyfile to make any important configuration changes. Caddy’s documentation provides a fantastic breakdown of what’s possible, while the Caddyfile Tutorial guides you step-by-step.

The steps mentioned above describe resources that’ll exist within your Caddy container, not those that exist on your local machine. Because of this, you’ll need to reload Caddy to apply any configuration changes. On a regular host installation, you’d do this by entering systemctl reload caddy in your CLI and visiting your site as confirmation. Running your Caddy server within a Docker container requires just one extra shell command:

$ docker exec -w /etc/caddy web caddy reload

 

This gracefully reloads your content without a complete restart. That process is otherwise time consuming and disruptive.

Next, you can serve an updated index.html file. Create a working directory and run the following commands from it:

$ mkdir docker-caddy
$ cd docker-caddy
$ echo "hello world" &gt; index.html
$ docker run -d -p 8080:80 \
-v $PWD/index.html:/usr/share/caddy/index.html \
-v caddy_data:/data \
caddy
...
$ curl http://localhost:8080/
hello world

 

Want to quickly push new configurations to your Caddy server? Caddy provides a RESTful JSON API that lets you POST sets of changes with minimal effort. This isn’t necessary for our tutorial, though you may find this config structure useful in the future:

POST /config/

{
"apps": {
"http": {
"servers": {
"example": {
"listen": ["127.0.0.1:2080"],
"routes": [{
"@id": "demo",
"handle": [{
"handler": "file_server",
"browse": {}
}]
}]
}
}
}
}
}
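
Caddy’s admin API listens on localhost:2019 by default, so pushing a configuration like the one above can be as simple as a curl call from inside the container (or from the host, if you publish port 2019). The caddy.json filename here is just a placeholder:

$ curl localhost:2019/config/ \
  -H "Content-Type: application/json" \
  -d @caddy.json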

 

No matter which route you take, any changes made through Caddy’s API are persisted on disk and continually usable after restarts. Caddy’s API docs explain this and more in greater detail.

Building Your Caddy-Based Dockerfile

Finally, half the magic of using Caddy and Docker rests with the Dockerfile. You don’t always want to mount your files within a container. Instead, it’s often better to COPY files or directories from a remote source and base your image on them:

FROM caddy:<version>

COPY Caddyfile /etc/caddy/Caddyfile
COPY site /srv

 

This adds your Caddyfile using an absolute path. You can even add modules to your Caddy Dockerfile to extend your server’s functionality. Caddy’s :builder image streamlines this process significantly. Just note that the builder image is much larger than the standard Caddy image. To keep the final image lean, a second FROM stage copies only the freshly built binary out of the builder:

FROM caddy:builder AS builder

RUN xcaddy build \
--with github.com/caddyserver/nginx-adapter \
--with github.com/hairyhenderson/caddy-teapot-module@v0.0.3-0

FROM caddy:<version>

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

 

You’re also welcome to add any of Caddy’s available modules, which are free to download. Congratulations! You now have all the ingredients needed to deploy a functional Caddy 2 web application.

Quick Notes on Docker Compose

Though not required, you can use docker compose to run your Caddy-based stack. As a side note, Docker Compose V2 is also written in Go. You’d store this Compose content within a docker-compose.yml file, which looks like this:

version: "3.7"

services:
caddy:
image: caddy:&lt;version&gt;
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- $PWD/site:/srv
- caddy_data:/data
- caddy_config:/config

volumes:
caddy_data:
caddy_config:

 

Stopping and Reviving Your Containers

Want to stop your container? Simply enter docker stop web to shut it down within 10 seconds. If this process stalls, Docker will kill your container immediately. You can customize and extend this timeout with the following command:

docker stop --time=30 web

 

However, doing this is easier using Docker Desktop — and specifically the Docker Dashboard. In the sidebar, navigate to the Containers pane. Next, locate your Caddy server container titled “web” in the list, hover over it, and click the square Stop icon. This performs the same task from our first command above:

 


 

You’ll also see options to Open in Browser, launch the container CLI, Restart, or Delete.

Stop your Caddy container by accident, or want to spin up another? Navigate to the Images pane, locate caddy in the list, hover over it, and click the Run button that appears. Once the modal appears, expand the Optional Settings section. Next, do the following:

  1. Name your new container something meaningful
  2. Designate your Local Host port as 8080 and map it to Container Port 80
  3. Mount any volumes you want, though this isn’t necessary for this demo
  4. Click the Run button to confirm

Finally, revisit the Containers pane, locate your new container, and click Open in Browser. You should see the very same webpage that your Caddy server rendered earlier, if things are working properly.

Conclusion

Despite the popularity of Apache and NGINX, developers have other server options at their disposal. Alternatives like Caddy 2 can save time and effort while making the deployment process much smoother. Fewer dependencies, fewer required configurations, and built-in Docker integration help simplify each management step. As a bonus, you’re also supporting an open-source project.

Are you currently using Caddy 1 and want to upgrade to Caddy 2? Follow along with Caddy’s official documentation to understand the process.

Containerize Your Go Developer Environment – Part 3
https://www.docker.com/blog/containerize-your-go-developer-environment-part-3/ (Tue, 23 Jun 2020)

In this series of blog posts, we show how to put in place an optimized containerized Go development environment. In part 1, we explained how to start a containerized development environment for local Go development, building an example CLI tool for different platforms. Part 2 covered how to add Go dependencies, caching for faster builds and unit tests. This third and final part is going to show you how to add a code linter, a GitHub Action CI, and some extra build optimizations.

Adding a linter

We’d like to automate checking for good programming practices as much as possible, so let’s add a linter to our setup. The first step is to modify the Dockerfile:

# syntax = docker/dockerfile:1-experimental

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS base
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .


FROM base AS build
ARG TARGETOS
ARG TARGETARCH
RUN --mount=type=cache,target=/root/.cache/go-build \
  GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .


FROM base AS unit-test
RUN --mount=type=cache,target=/root/.cache/go-build \
  go test -v .


FROM golangci/golangci-lint:v1.27-alpine AS lint-base

FROM base AS lint
COPY --from=lint-base /usr/bin/golangci-lint /usr/bin/golangci-lint
RUN --mount=type=cache,target=/root/.cache/go-build \
  --mount=type=cache,target=/root/.cache/golangci-lint \
  golangci-lint run --timeout 10m0s ./...


FROM scratch AS bin-unix
COPY --from=build /out/example /
...

We now have a lint-base stage that is an alias for the golangci-lint image, which contains the linter that we would like to use. We then have a lint stage that runs the linter, mounting a cache to the correct place.

As with the unit tests, we can add a lint rule to our Makefile. We can also alias the test rule to run the linter and unit tests:

all: bin/example
test: lint unit-test

PLATFORM=local

.PHONY: bin/example
bin/example:
    @docker build . --target bin \
    --output bin/ \
    --platform ${PLATFORM}

.PHONY: unit-test
unit-test:
    @docker build . --target unit-test

.PHONY: lint
lint:
    @docker build . --target lint

Adding a CI

Now that we’ve containerized our development platform, it’s really easy to add CI for our project. We only need to run our docker build or make commands from the CI script. To demonstrate this, we’ll use GitHub Actions. To set this up, we can use the following .github/workflows/ci.yaml file:

name: Continuous Integration

on: [push]

jobs:
  ci:
    name: CI
    runs-on: ubuntu-latest
    env:
       DOCKER_BUILDKIT: "1"
    steps:
     - name: Checkout code
       uses: actions/checkout@v2
     - name: Run linter
       run: make lint
     - name: Run unit tests
       run: make unit-test
     - name: Build Linux binary
       run: make PLATFORM=linux/amd64
     - name: Build Windows binary
       run: make PLATFORM=windows/amd64

Notice that the commands we run on the CI are identical to those that we use locally and that we don’t need to do any toolchain configuration as everything is already defined in the Dockerfile!

One last optimization

Performing a COPY will create an extra layer in the container image which slows things down and uses extra disk space. This can be avoided by using RUN --mount and bind mounting from the build context, from a stage, or an image. Adopting this pattern, the resulting Dockerfile is as follows:

# syntax = docker/dockerfile:1-experimental

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS base
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download

FROM base AS build
ARG TARGETOS
ARG TARGETARCH
RUN --mount=target=. \
  --mount=type=cache,target=/root/.cache/go-build \
  GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .


FROM base AS unit-test
RUN --mount=target=. \
  --mount=type=cache,target=/root/.cache/go-build \
  go test -v .


FROM golangci/golangci-lint:v1.27-alpine AS lint-base

FROM base AS lint
RUN --mount=target=. \
  --mount=from=lint-base,src=/usr/bin/golangci-lint,target=/usr/bin/golangci-lint \
  --mount=type=cache,target=/root/.cache/go-build \
  --mount=type=cache,target=/root/.cache/golangci-lint \
  golangci-lint run --timeout 10m0s ./...


FROM scratch AS bin-unix
COPY --from=build /out/example /

FROM bin-unix AS bin-linux
FROM bin-unix AS bin-darwin

FROM scratch AS bin-windows
COPY --from=build /out/example /example.exe

FROM bin-${TARGETOS} AS bin

The default mount type is a read only bind mount from the context that you pass with the docker build command. This means that you can replace the COPY . . with a RUN --mount=target=. wherever you need the files from your context to run a command but do not need them to persist in the final image.

Instead of separating the Go module download, we could remove this and just use a cache mount for /go/pkg/mod.
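
That variant isn’t shown above, but a sketch of the resulting build stage could look like the following; the cache target paths follow Go’s defaults for the module and build caches:

FROM base AS build
ARG TARGETOS
ARG TARGETARCH
RUN --mount=target=. \
  --mount=type=cache,target=/go/pkg/mod \
  --mount=type=cache,target=/root/.cache/go-build \
  GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .

With this approach, the base stage no longer needs the COPY go.* . and RUN go mod download lines, since modules are fetched into the cache mount during the build itself.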

Conclusion

This series of posts showed how to put in place an optimized containerized Go development environment and then how to use this same environment on the CI. The only dependencies for those who would like to develop on such a project are Docker and make, with the latter being easily replaced by another scripting tool.

You can find the source for this example on my GitHub: https://github.com/chris-crone/containerized-go-dev

You can read more about the experimental Dockerfile syntax here: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md

If you’re interested in build at Docker, take a look at the Buildx repository: https://github.com/docker/buildx

Containerize Your Go Developer Environment – Part 2
https://www.docker.com/blog/containerize-your-go-developer-environment-part-2/ (Wed, 17 Jun 2020)

This is the second part in a series of posts where we show how to use Docker to define your Go development environment in code. The goal of this is to make sure that you, your team, and the CI are all using the same environment. In part 1, we explained how to start a containerized development environment for local Go development, building an example CLI tool for different platforms and shrinking the build context to speed up builds. Now we are going to go one step further and learn how to add dependencies to make the project more realistic, caching to make the builds faster, and unit tests.

Adding dependencies

The Go program from part 1 is very simple and doesn’t have any Go dependencies. Let’s add a simple dependency: the commonly used github.com/pkg/errors package:

package main

import (
   "fmt"
   "os"
   "strings"
   "github.com/pkg/errors"

)

func echo(args []string) error {
   if len(args) < 2 {
       return errors.New("no message to echo")
   }
   _, err := fmt.Println(strings.Join(args[1:], " "))
   return err
}

func main() {
   if err := echo(os.Args); err != nil {
       fmt.Fprintf(os.Stderr, "%+v\n", err)
       os.Exit(1)
   }
}

Our example program is now a simple echo program that writes out the arguments the user passes in, or prints “no message to echo” and a stack trace if nothing is specified.

We will use Go modules to handle this dependency. Running the following commands will create the go.mod and go.sum files:

$ go mod init
$ go mod tidy

Now when we run the build, we will see that the dependencies are downloaded each time we build:

$ make
[+] Building 8.2s (7/9)
 => [internal] load build definition from Dockerfile
...
0.0s
 => [build 3/4] COPY . . 
0.1s
 => [build 4/4] RUN GOOS=darwin GOARCH=amd64 go build -o /out/example .
7.9s
 => => # go: downloading github.com/pkg/errors v0.9.1

This is clearly inefficient and slows things down. We can fix this by downloading our dependencies as a separate step in our Dockerfile:

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .


FROM scratch AS bin-unix
COPY --from=build /out/example /
...

Notice that we’ve added the go.* files and download the modules before adding the rest of the source. This allows Docker to cache the modules as it will only rerun these steps if the go.* files change.

Caching

Separating the downloading of our dependencies from our build is a great improvement but each time we run the build, we are starting the compile from scratch. For small projects this might not be a problem but as your project gets bigger you will want to leverage Go’s compiler cache.

To do this, you will need to use BuildKit’s Dockerfile frontend (https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md). Our updated Dockerfile is as follows:

# syntax = docker/dockerfile:1-experimental

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build \
GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .


FROM scratch AS bin-unix
COPY --from=build /out/example /
...

Notice the # syntax line at the top of the Dockerfile that selects the experimental Dockerfile frontend, and the --mount option attached to the RUN command. This mount option means that each time the go build command is run, the container will have the cache mounted to Go’s compiler cache folder.

Benchmarking this change for the example binary on a 2017 MacBook Pro 13”, I see that a small code change takes 11 seconds to build without the cache and less than 2 seconds with it. This is a huge improvement!

Adding unit tests

All projects need tests! We’ll add a simple test for our echo function in a main_test.go file:

package main

import (
    "testing"
    "github.com/stretchr/testify/require"

)

func TestEcho(t *testing.T) {
    // Test happy path
    err := echo([]string{"bin-name", "hello", "world!"})
    require.NoError(t, err)
}

func TestEchoErrorNoArgs(t *testing.T) {
    // Test empty arguments
    err := echo([]string{})
    require.Error(t, err)
}

This test ensures that we get an error if the echo function is passed an empty list of arguments.
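
Since the test imports github.com/stretchr/testify, remember to add that dependency to your go.mod before building the test stage; running the following picks it up automatically:

$ go mod tidy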

We will now want another build target for our Dockerfile so that we can run the tests and build the binary separately. This will require a refactor into a base stage and then unit-test and build stages:

# syntax = docker/dockerfile:1-experimental

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS base
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .


FROM base AS build
ARG TARGETOS
ARG TARGETARCH
RUN --mount=type=cache,target=/root/.cache/go-build \
GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .


FROM base AS unit-test
RUN --mount=type=cache,target=/root/.cache/go-build \
go test -v .


FROM scratch AS bin-unix
COPY --from=build /out/example /
...

Note that go test uses the same cache as the build, so we mount the cache for this stage too. This allows Go to only rerun tests when there have been code changes, which makes the test runs quicker.

We can also update our Makefile to add a test target:

all: bin/example
test: unit-test

PLATFORM=local

.PHONY: bin/example
bin/example:
    @docker build . --target bin \
    --output bin/ \
    --platform ${PLATFORM}

.PHONY: unit-test
unit-test:
    @docker build . --target unit-test

What’s next?

In this post we have seen how to add Go dependencies efficiently, caching to make the build faster and unit tests to our containerized Go development environment. In the next and final post of the series, we are going to complete our journey and learn how to add a linter, set up a GitHub Actions CI, and some extra build optimizations.

You can find the finalized source for this example on my GitHub: https://github.com/chris-crone/containerized-go-dev

You can read more about the experimental Dockerfile syntax here: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md

If you’re interested in build at Docker, take a look at the Buildx repository: https://github.com/docker/buildx

Read the whole blog post series here.

Containerize Your Go Developer Environment – Part 1
https://www.docker.com/blog/containerize-your-go-developer-environment-part-1/ (Thu, 11 Jun 2020)

When joining a development team, it takes some time to become productive. This is usually a combination of learning the code base and getting your environment set up. Often there will be an onboarding document of some sort for setting up your environment, but in my experience, this is never up to date and you always have to ask someone for help with what tools are needed.

This problem continues as you spend more time in the team. You’ll find issues because the version of the tool you’re using is different to that used by someone on your team, or, worse, the CI. I’ve been on more than one team where “works on my machine” has been exclaimed or written in all caps on Slack, and I’ve spent a lot of time debugging things on the CI, which is incredibly painful.

Many people use Docker as a way to run application dependencies, like databases, while they’re developing locally and for containerizing their production applications. Docker is also a great tool for defining your development environment in code to ensure that your team members and the CI are all using the same set of tools.

We do a lot of Go development at Docker. The Go toolchain is great, providing fast compile times, built-in dependency management, easy cross compilation, and strong opinions on things like code formatting. Even with this toolchain we often run into issues like mismatched versions of Go, missing dependencies, and slightly different configurations. A good example of this is that we use gRPC for many projects and so require a specific version of protoc that works with our code base.

This is the first of a series of blog posts that will show you how to use Docker for Go development. It will cover building, testing, CI, and optimization to make your builds quicker.

Start simple

Let’s start with a simple Go program:

package main

import "fmt"

func main() {
    fmt.Println("Hello world!")
}

You can easily build this into a binary using the following command:
$ go build -o bin/example .

The same can be achieved using the following Dockerfile:

FROM golang:1.14.3-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /out/example .
FROM scratch AS bin
COPY --from=build /out/example /

This Dockerfile is broken into two stages identified with the AS keyword. The first stage, build, starts from the Go Alpine image. Alpine uses the musl C library and is a minimalist alternative to the regular Debian based Golang image. Note that we can define which version of Go we want to use. It then sets the working directory in the container, copies the source from the host into the container, and runs the go build command from before. The second stage, bin, uses a scratch (i.e.: empty) base image. It then simply copies the resulting binary from the first stage to its filesystem. Note that if your binary needs other resources, like CA certificates, then these would also need to be included in the final image.

As we are leveraging BuildKit in this blog post, you will need to make sure that you enable it by using Docker 19.03 or later and setting DOCKER_BUILDKIT=1 in your environment. On Linux, macOS, or using WSL 2 you can do this using the following command:
$ export DOCKER_BUILDKIT=1

On Windows for PowerShell you can use:
$env:DOCKER_BUILDKIT=1

Or for command prompt:
set DOCKER_BUILDKIT=1

To run the build, we will use the docker build command with the output option to say that we want the result to be written to the host’s filesystem:
$ docker build --target bin --output bin/ .

You will then see that we have the example binary inside our bin directory:

$ ls bin
example

Simple cross compiling

As Docker supports building images for different platforms, the build command has a platform flag. This allows setting the target OS (i.e.: Linux or Windows) and architecture (i.e.: amd64, arm64, etc.). To build natively for other platforms, your builder will need to be set up for these platforms. As Golang’s toolchain includes cross compilation, we do not need to do any builder setup but can still leverage the platform flag.

We can cross compile the binary for the host operating system by adding arguments to our Dockerfile and filling them from the platform flag of the docker build command. The updated Dockerfile is as follows:

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
WORKDIR /src
ENV CGO_ENABLED=0
COPY . .
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .

FROM scratch AS bin
COPY --from=build /out/example /

Notice that we now have the BUILDPLATFORM variable set as the platform for our base image. This will pin the image to the platform that the builder is running on. In the compilation step, we consume the TARGETOS and TARGETARCH variables, both filled by the build platform flag, to tell Go which platform to build for. To simplify things, and because this is a simple application, we statically compile the binary by setting CGO_ENABLED=0. This means that the resulting binary will not be linked to any C libraries. If your application uses any system libraries (like the system’s cryptography library) then you will not be able to statically compile the binary like this.

To build for your host operating system, you can specify the local platform:
$ docker build --target bin --output bin/ --platform local .

As the docker build command is getting quite long, we can put it into a Makefile (or a scripting language of your choice):

all: bin/example
.PHONY: bin/example
bin/example:
   @docker build . --target bin \
   --output bin/ \
   --platform local

This will allow you to run your build as follows:

$ make bin/example
$ make

We can go a step further and add a cross compiling targets to the Dockerfile:

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
WORKDIR /src
ENV CGO_ENABLED=0
COPY . .
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .


FROM scratch AS bin-unix
COPY --from=build /out/example /

FROM bin-unix AS bin-linux
FROM bin-unix AS bin-darwin

FROM scratch AS bin-windows
COPY --from=build /out/example /example.exe

FROM bin-${TARGETOS} AS bin

Above we have two different stages for Unix-like OSes (bin-unix) and for Windows (bin-windows). We then add aliases for Linux (bin-linux) and macOS (bin-darwin). This allows us to make a dynamic target (bin) that depends on the TARGETOS variable, which is automatically set by the docker build platform flag.

This allows us to build for a specific platform:

$ docker build --target bin --platform windows/amd64 .
$ file bin/example.exe
bin/example.exe: PE32+ executable (console) x86-64 (stripped to external
PDB), for MS Windows

Our updated Makefile has a PLATFORM variable that you can set:

all: bin/example

PLATFORM=local

.PHONY: bin/example
bin/example:
   @docker build . --target bin \
   --output bin/ \
   --platform ${PLATFORM}

This means that you can build for a specific platform by setting PLATFORM:
$ make PLATFORM=windows/amd64

You can find the list of valid operating system and architecture combinations here: https://golang.org/doc/install/source#environment.
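
You can also print the full list locally with the Go toolchain, which is handy when deciding which platforms to target:

$ go tool dist list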

Shrinking our build context

By default, the docker build command will take everything in the path passed to it and send it to the builder. In our case, that includes the contents of the bin/ directory which we don’t use in our build. We can tell the builder not to do this, using a .dockerignore file:

bin/*

Since the bin/ directory contains several megabytes of data, adding the .dockerignore file reduces the time it takes to build by a little bit.

Similarly, if you are using Git for code management but do not use git commands as part of your build process, you can exclude the .git/ directory too.
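
Putting the two together, the .dockerignore for this project could be as simple as:

bin/*
.git/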

What’s next?

This post showed how to set up a containerized development environment for local Go development, how to build an example CLI tool for different platforms, and how to start speeding up builds by shrinking the build context. In the next post of this series, we will add dependencies to make our example project more realistic, look at caching to make builds faster, and add unit tests.

You can find the finalized source for this example on my GitHub: https://github.com/chris-crone/containerized-go-dev

If you’re interested in build tooling at Docker, take a look at the Buildx repository: https://github.com/docker/buildx

Read the whole blog post series here.

Write Maintainable Integration Tests with Docker https://www.docker.com/blog/maintainable-integration-tests-with-docker/ Wed, 31 Jul 2019 17:29:30 +0000 https://blog.docker.com/?p=23771 Testcontainers is an open source community focused on making integration tests easier across many languages. Gianluca Arbezzano is a Docker Captain, SRE at InfluxData and the maintainer of the Golang implementation of Testcontainers, which uses the Docker API to expose a test-friendly library that you can use in your test cases. 

Photo by Markus Spiske on Unsplash.
The popularity of microservices and the use of third-party services for non-business-critical features has drastically increased the number of integrations that make up the modern application. These days, it is commonplace to use MySQL, Redis as a key value store, MongoDB, Postgres, and InfluxDB – and that is all just for the databases – let alone the multiple services that make up other parts of the application.

All of these integration points require different layers of testing. Unit tests help you write code faster because you can mock all of your dependencies, set expectations for your function, and iterate until you get the desired behavior. But we need more. We need to make sure that the integration with Redis, MongoDB, or a microservice works as expected, not just that the mock works as we wrote it. Both are important, but the difference is huge.
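
For contrast, here is roughly what the unit-test side of that trade-off looks like in Go: a hand-rolled fake standing in for the real dependency. All the names below are made up for illustration; the point is that this test only ever proves the fake behaves the way we wrote it.

package cache

import "testing"

// store is the dependency we hide behind an interface so it can be faked.
type store interface {
    Get(key string) (string, bool)
}

// fakeStore is an in-memory stand-in for Redis, MongoDB, or any other backend.
type fakeStore map[string]string

func (f fakeStore) Get(key string) (string, bool) {
    v, ok := f[key]
    return v, ok
}

// lookup is the function under test; it only knows about the interface.
func lookup(s store, key string) string {
    if v, ok := s.Get(key); ok {
        return v
    }
    return "missing"
}

func TestLookup(t *testing.T) {
    s := fakeStore{"test": "example"}
    if got := lookup(s, "test"); got != "example" {
        t.Fatalf("expected %q, got %q", "example", got)
    }
}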

In this article, I will show you how to use testcontainers to write integration tests in Go with very low overhead. So, I am not telling you to stop writing unit tests, just to be clear!

Back in the day, when I was interested in becoming  a Java developer, I tried to write an integration between Zipkin, a popular open source tracer, and InfluxDB. I ultimately failed because I am not a Java developer, but I did understand how they wrote integration tests, and I became fascinated.

Getting Started: testcontainers-java

Zipkin provides a UI and an API to store and manipulate traces. It supports Cassandra, in-memory storage, Elasticsearch, MySQL, and many more platforms as storage backends. In order to validate that all the storage systems work, they use a library called testcontainers-java that is a wrapper around the Docker API designed to be “test-friendly.” Here is the Quick Start example:

public class RedisBackedCacheIntTestStep0 {
    private RedisBackedCache underTest;

    @Before
    public void setUp() {
        // Assume that we have Redis running locally?
        underTest = new RedisBackedCache("localhost", 6379);
    }

    @Test
    public void testSimplePutAndGet() {
        underTest.put("test", "example");

        String retrieved = underTest.get("test");
        assertEquals("example", retrieved);
    }
}
In setUp, you can create a container (Redis in this case) and expose a port. From there, you can interact with a live Redis instance.

Every time you start a new container, there is a “sidecar” called Ryuk that keeps your Docker environment clean by removing containers, volumes, and networks after a certain amount of time. You can also remove them from inside the test. The example below comes from Zipkin. They are testing the Elasticsearch integration and, as the example shows, you can programmatically configure your dependencies from inside the test case.

public class ElasticsearchStorageRule extends ExternalResource {
 static final Logger LOGGER = LoggerFactory.getLogger(ElasticsearchStorageRule.class);
 static final int ELASTICSEARCH_PORT = 9200;
 final String image;
 final String index;
 GenericContainer container;
 Closer closer = Closer.create();

 public ElasticsearchStorageRule(String image, String index) {
   this.image = image;
   this.index = index;
 }
 @Override
 protected void before() {
   try {
     LOGGER.info("Starting docker image " + image);
     container =
         new GenericContainer(image)
             .withExposedPorts(ELASTICSEARCH_PORT)
             .waitingFor(new HttpWaitStrategy().forPath("/"));
     container.start();
     if (Boolean.valueOf(System.getenv("ES_DEBUG"))) {
       container.followOutput(new Slf4jLogConsumer(LoggerFactory.getLogger(image)));
     }
     System.out.println("Starting docker image " + image);
   } catch (RuntimeException e) {
     LOGGER.warn("Couldn't start docker image " + image + ": " + e.getMessage(), e);
   }
That this happens programmatically is key, because you do not need to rely on something external such as docker-compose to spin up your integration test environment. By spinning it up from inside the test itself, you have a lot more control over the orchestration and provisioning, and the test is more stable. You can even check when a container is ready before you start a test.

Since I am not a Java developer, I ported the library to Go (we are still working on all the features) and now it’s in the main testcontainers/testcontainers-go organization.

func TestNginxLatestReturn(t *testing.T) {
    ctx := context.Background()
    req := testcontainers.ContainerRequest{
        Image:        "nginx",
        ExposedPorts: []string{"80/tcp"},
    }
    nginxC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: req,
        Started:          true,
    })
    if err != nil {
        t.Fatal(err)
    }
    defer nginxC.Terminate(ctx)
    ip, err := nginxC.Host(ctx)
    if err != nil {
        t.Fatal(err)
    }
    port, err := nginxC.MappedPort(ctx, "80")
    if err != nil {
        t.Fatal(err)
    }
    resp, err := http.Get(fmt.Sprintf("http://%s:%s", ip, port.Port()))
    if err != nil {
        t.Fatal(err)
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        t.Errorf("Expected status code %d. Got %d.", http.StatusOK, resp.StatusCode)
    }
}

Creating the Test

This is what it looks like:

ctx := context.Background()
req := testcontainers.ContainerRequest{
    Image:        "nginx",
    ExposedPorts: []string{"80/tcp"},
}
nginxC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
    ContainerRequest: req,
    Started:          true,
})
if err != nil {
    t.Fatal(err)
}
defer nginxC.Terminate(ctx)
You create the nginx container, and with the defer nginxC.Terminate(ctx) call you clean up the container when the test is over. Remember Ryuk? The Terminate call is not mandatory, because testcontainers-go uses Ryuk to remove the containers at some point anyway.
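
The Go library can also wait for the container to be ready before the test proceeds, via its wait package (github.com/testcontainers/testcontainers-go/wait). Here is a minimal sketch; the exact API and import path may differ slightly between versions:

package example

import (
    "context"
    "testing"

    "github.com/testcontainers/testcontainers-go"
    "github.com/testcontainers/testcontainers-go/wait"
)

func TestNginxIsReady(t *testing.T) {
    ctx := context.Background()
    req := testcontainers.ContainerRequest{
        Image:        "nginx",
        ExposedPorts: []string{"80/tcp"},
        // Block until nginx is actually listening before the test proceeds.
        WaitingFor: wait.ForListeningPort("80/tcp"),
    }
    nginxC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: req,
        Started:          true,
    })
    if err != nil {
        t.Fatal(err)
    }
    defer nginxC.Terminate(ctx)
}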

Modules

The Java library has a feature called modules where you get pre-canned containers such as databases (MySQL, Postgres, Cassandra, etc.) or applications like nginx. The Go version is working on something similar, but it is still an open PR.

This is a great feature if you’d like to build a microservice that your application relies on from upstream, or if you would like to test how your application behaves from inside a container (probably closer to how it will run in production). This is how it works in Java:

@Rule
public GenericContainer dslContainer = new GenericContainer(
    new ImageFromDockerfile()
            .withFileFromString("folder/someFile.txt", "hello")
            .withFileFromClasspath("test.txt", "mappable-resource/test-resource.txt")
            .withFileFromClasspath("Dockerfile", "mappable-dockerfile/Dockerfile"))

What I’m working on now

Something that I am currently working on is a new canned container that uses kind to spin up Kubernetes clusters inside a container. If your application uses the Kubernetes API, you can test the integration against a real cluster:

ctx := context.Background()
k := &KubeKindContainer{}
err := k.Start(ctx)
if err != nil {
  t.Fatal(err.Error())
}
defer k.Terminate(ctx)
clientset, err := k.GetClientset()
if err != nil {
  t.Fatal(err.Error())
}
ns, err := clientset.CoreV1().Namespaces().Get("default", metav1.GetOptions{})
if err != nil {
  t.Fatal(err.Error())
}
if ns.GetName() != "default" {
  t.Fatalf("Expected default namespace got %s", ns.GetName())
}

This feature is still a work in progress, as you can see from PR67.

Calling All Coders

The Java version of Testcontainers was the first one developed, and it has a lot of features that have not yet been ported to the Go version or to the other libraries, such as JavaScript, Rust, and .NET.

My suggestion is to try the one written in your language and to contribute to it. 

In Go we don’t yet have a way to programmatically build images. I am thinking of embedding BuildKit or img in order to get a daemonless builder that doesn’t depend on Docker. The great part about working with the Go version is that all the container-related libraries are already written in Go, so you can integrate with them very well.

This is a great chance to become part of this community! If you are passionate about testing frameworks, join us and send your pull requests, or come hang out on Slack.

Try It Out

I hope you are as excited as I am about the flavour and the power this library provides. Take a look at the testcontainers organization on GitHub to see if your language is covered and try it out! And, if your language is not covered, let’s write it! If you are a Go developer and you’d like to contribute, feel free to reach out to me @gianarb, or go check it out and open an issue or pull request!



Docker + Golang = https://www.docker.com/blog/docker-golang/ Wed, 14 Sep 2016 15:00:00 +0000 https://blog.docker.com/?p=14840 This is a short collection of tips and tricks showing how Docker can be useful when working with Go code. For instance, I’ll show you how to compile Go code with different versions of the Go toolchain, how to cross-compile to a different platform (and test the result!), or how to produce really small container images.

The following article assumes that you have Docker installed on your system. It doesn’t have to be a recent version (we’re not going to use any fancy feature here).

Go without go

… And by that, we mean “Go without installing go”.

If you write Go code, or if you have even the slightest interest in the Go language, you certainly have the Go compiler and toolchain installed, so you might be wondering “what’s the point?”; but there are a few scenarios where you want to compile Go without installing Go.

  • You still have this old Go 1.2 on your machine (that you can’t or won’t upgrade), and you have to work on this codebase that requires a newer version of the toolchain.
  • You want to play with cross compilation features of Go 1.5 (for instance, to make sure that you can create OS X binaries from a Linux system).
  • You want to have multiple versions of Go side-by-side, but don’t want to completely litter your system.
  • You want to be 100% sure that your project and all its dependencies download, build, and run fine on a clean system.

If any of this is relevant to you, then let’s call Docker to the rescue!

Compiling a program in a container

When you have installed Go, you can do go get -v github.com/user/repo to download, build, and install a library. (The -v flag is just here for verbosity, you can remove it if you prefer your toolchain to be swift and silent!)

You can also do go get github.com/user/repo/... (yes, that’s three dots) to download, build, and install all the things in that repo (including libraries and binaries).

We can do that in a container!

Try this:

docker run golang go get -v github.com/golang/example/hello/...

This will pull the golang image (unless you have it already; then it will start right away), and create a container based on that image. In that container, go will download a little “hello world” example, build it, and install it. But it will install it in the container … So how do we run that program now?

Running our program in a container

One solution is to commit the container that we just built, i.e. “freeze” it into a new image:

docker commit $(docker ps -lq) awesomeness

Note: docker ps -lq outputs the ID (and only the ID!) of the last container that was executed. If you are the only user on your machine, and if you haven’t created another container since the previous command, that container should be the one in which we just built the “hello world” example.

Now, we can run our program in a container based on the image that we just built:

docker run awesomeness hello

The output should be Hello, Go examples!.

Bonus points

When creating the image with docker commit, you can use the --change flag to specify arbitrary Dockerfile commands. For instance, you could use a CMD or ENTRYPOINT command so that docker run awesomeness automatically executes hello.

Running in a throwaway container

What if we don’t want to create an extra image just to run this Go program?

We got you covered:

docker run --rm golang sh -c \
"go get github.com/golang/example/hello/... && exec hello"

Wait a minute, what are all those bells and whistles?

  • --rm tells the Docker CLI to automatically issue a docker rm command once the container exits. That way, we don’t leave anything behind.
  • We chain together the build step (go get) and the execution step (exec hello) using the shell logical operator &&. If you’re not a shell aficionado, && means “and”. It will run the first part (go get ...), and if (and only if!) that part is successful, it will run the second part (exec hello). If you wonder why it works this way: && short-circuits, so the right-hand side is evaluated only if the left-hand side succeeds.
  • We pass our commands to sh -c, because if we were to simply do docker run golang "go get ... && hello", Docker would try to execute the program named go SPACE get SPACE etc., and that wouldn’t work. So instead, we start a shell and instruct the shell to execute the command sequence.
  • We use exec hello instead of hello: this will replace the current process (the shell that we started) with the hello program. This ensures that hello will be PID 1 in the container, instead of having the shell as PID 1 and hello as a child process. This is totally useless for this tiny example, but when we run more useful programs, this will allow them to receive external signals properly, since external signals are delivered to PID 1 of the container. What kind of signal, you might be wondering? A good example is docker stop, which sends SIGTERM to PID 1 in the container; a short Go sketch of such a signal handler follows this list.
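
To make that last point concrete, here is a minimal, illustrative Go sketch of a program that catches SIGTERM so that docker stop can shut it down cleanly (this is not part of the hello example):

package main

import (
    "fmt"
    "os"
    "os/signal"
    "syscall"
)

func main() {
    sigs := make(chan os.Signal, 1)
    // docker stop sends SIGTERM to PID 1; catch it so we can exit cleanly
    // instead of being killed after the grace period.
    signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
    fmt.Println("waiting for a signal...")
    s := <-sigs
    fmt.Println("received", s, "- shutting down")
}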

Using a different version of Go

When you use the golang image, Docker expands that to golang:latest, which (as you might guess) will map to the latest version available on the Docker Hub.

If you want to use a specific version of Go, that’s very easy: specify that version as a tag after the image name.

For instance, to use Go 1.5, change the example above to replace golang with golang:1.5:

docker run --rm golang:1.5 sh -c \
 "go get github.com/golang/example/hello/... && exec hello"

You can see all the versions (and variants) available on the Golang image page on the Docker Hub.

Installing on our system

OK, so what if we want to run the compiled program on our system, instead of in a container?

We could copy the compiled binary out of the container. Note, however, that this will work only if our container architecture matches our host architecture; in other words, if we run Docker on Linux. (I’m leaving out people who might be running Windows Containers!)

The easiest way to get the binary out of the container is to map the $GOPATH/bin directory to a local directory. In the golang container, $GOPATH is /go. So we can do the following:

docker run -v /tmp/bin:/go/bin \
 golang go get github.com/golang/example/hello/...
 /tmp/bin/hello

If you are on Linux, you should see the Hello, Go examples! message. But if you are, for instance, on a Mac, you will probably see:

-bash: /tmp/bin/hello: cannot execute binary file

What can we do about it?

Cross-compilation

Go 1.5 comes with outstanding out-of-the-box cross-compilation abilities, so if your container operating system and/or architecture doesn’t match your system’s, it’s no problem at all!

To enable cross-compilation, you need to set GOOS and/or GOARCH.

For instance, assuming that you are on a 64 bits Mac:

docker run -e GOOS=darwin -e GOARCH=amd64 -v /tmp/crosstest:/go/bin \
 golang go get github.com/golang/example/hello/...

The output of cross-compilation is not directly in $GOPATH/bin, but in $GOPATH/bin/$GOOS_$GOARCH. In other words, to run the program, you have to execute /tmp/crosstest/darwin_amd64/hello.

Installing straight to the $PATH

If you are on Linux, you can even install directly to your system bin directories:

docker run -v /usr/local/bin:/go/bin \
 golang go get github.com/golang/example/hello/...

However, on a Mac, trying to use /usr as a volume will not mount your Mac’s filesystem to the container. It will mount the /usr directory of the Moby VM (the small Linux VM hidden behind the Docker whale icon in your toolbar).

You can, however, use /tmp or something in your home directory, and then copy it from there.

Building lean images

The Go binaries that we produced with this technique are statically linked. This means that they embed all the code that they need to run, including all dependencies. This contrasts with dynamically linked programs, which don’t contain some basic libraries (like the “libc”) and use a system-wide copy which is resolved at run time.

This means that we can drop our Go compiled program in a container, without anything else, and it should work.

Let’s try this!

The scratch image

There is a special image in the Docker ecosystem: scratch. This is an empty image. It doesn’t need to be created or downloaded, since by definition, it is empty.

Let’s create a new, empty directory for our new Go lean image.

In this new directory, create the following Dockerfile:

FROM scratch
 COPY ./hello /hello
 ENTRYPOINT ["/hello"]

This means:

  • start from scratch (an empty image),
  • add the hello file to the root of the image,
  • define this hello program to be the default thing to execute when starting this container.

Then, produce our hello binary as follows:

docker run -v $(pwd):/go/bin --rm \
 golang go get github.com/golang/example/hello/...

Note: we don’t need to set GOOS and GOARCH here, because precisely, we want a binary that will run in a Docker container, not on our host system. So leave those variables alone!

Then, we can build the image:

docker build -t hello .

And test it:

docker run hello

(This should display Hello, Go examples!.)

Last but not least, check the image’s size:

docker images hello

If we did everything right, this image should be about 2 MB. Not bad!

Building something without pushing to GitHub

Of course, if we had to push to GitHub each time we wanted to compile, we would waste a lot of time.

When you want to work on a piece of code and build it within a container, you can mount a local directory to /go in the golang container, so that the $GOPATH is persisted across invocations: docker run -v $HOME/go:/go golang ....

But you can also mount local directories to specific paths, to “override” some packages (the ones that you have edited locally). Here is a complete example:

# Adapt the two following environment variables if you are not running on a Mac
 export GOOS=darwin GOARCH=amd64
 mkdir go-and-docker-is-love
 cd go-and-docker-is-love
 git clone git://github.com/golang/example
 cat example/hello/hello.go
 sed -i .bak s/olleH/eyB/ example/hello/hello.go
 docker run --rm \
 -v $(pwd)/example:/go/src/github.com/golang/example \
 -v $(pwd):/go/bin/${GOOS}_${GOARCH} \
 -e GOOS -e GOARCH \
 golang go get github.com/golang/example/hello/...
 ./hello
 # Should display "Bye, Go examples!"

 

The special case of the net package and CGo

Before diving into real-world Go code, we have to confess something: we lied a little bit about the static binaries. If you are using CGo, or if you are using the net package, the Go linker will generate a dynamic binary. In the case of the net package (which a lot of useful Go programs out there are using indeed!), the main culprit is the DNS resolver. Most systems out there have a fancy, modular name resolution system (like the Name Service Switch) which relies on plugins which are, technically, dynamic libraries. By default, Go will try to use that; and to do so, it will produce dynamically linked binaries.
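
To make this concrete, about the smallest program that trips over this is one that simply resolves a host name through the net package (the host name here is just an example):

package main

import (
    "fmt"
    "net"
    "os"
)

func main() {
    // By default this lookup goes through the system resolver via cgo,
    // which is what makes the resulting binary dynamically linked.
    addrs, err := net.LookupHost("docker.com")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println(addrs)
}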

How do we work around that?

Re-using another distro’s libc

One solution is to use a base image that has the essential libraries needed by those Go programs to function. Almost any “regular” Linux distro based on the GNU libc will do the trick. So instead of FROM scratch, you would use FROM debian or FROM fedora, for instance. The resulting image will be much bigger now; but at least, the bigger bits will be shared with other images on your system.

Note: you cannot use Alpine in that case, since Alpine is using the musl library instead of the GNU libc.

Bring your own libc

Another solution is to surgically extract the files needed, and place them in your container with COPY. The resulting container will be small. However, this extraction process leaves the author with the uneasy impression of a really dirty job, and they would rather not go into more details.

If you want to see for yourself, look around ldd and the Name Service Switch plugins mentioned earlier.

Producing static binaries with netgo

We can also instruct Go to not use the system’s libc, and substitute Go’s netgo library, which comes with a native DNS resolver.

To use it, just add -tags netgo -installsuffix netgo to the go get options.

  • -tags netgo instructs the toolchain to use netgo.
  • -installsuffix netgo will make sure that the resulting libraries (if any) are placed in a different, non-default directory. This will avoid conflicts between code built with and without netgo, if you do multiple go get (or go build) invocations. If you build in containers like we have shown so far, this is not strictly necessary, since there will be no other Go code compiled in this container, ever; but it’s a good idea to get used to it, or at least know that this flag exists.

The special case of SSL certificates

There is one more thing that you have to worry about if your code has to validate SSL certificates; for instance if it will connect to external APIs over HTTPS. In that case, you need to put the root certificates in your container too, because Go won’t bundle those into your binary.
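
For instance, a program like the following, run in a scratch container, will fail with an error along the lines of “x509: certificate signed by unknown authority” until root certificates are added to the image (the URL is only an example):

package main

import (
    "fmt"
    "net/http"
    "os"
)

func main() {
    // Validating the server certificate requires a root CA bundle,
    // which a scratch image does not contain.
    resp, err := http.Get("https://example.com")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
}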

Installing the SSL certificates

Here again, there are multiple options available, but the easiest one is to use a package from an existing distribution.

Alpine is a good candidate here because it’s so tiny. The following Dockerfile will give you a base image that is small, but has an up-to-date bundle of root certificates:

FROM alpine:3.4
RUN apk add --no-cache ca-certificates apache2-utils

 

Check it out; the resulting image is only 6 MB!

Note: the --no-cache option tells apk (the Alpine package manager) to get the list of available packages from Alpine’s distribution mirrors, without saving it to disk. You might have seen Dockerfiles doing something like apt-get update && apt-get install ... && rm -rf /var/cache/apt/*; this achieves something equivalent (i.e. not leave package caches in the final image) with a single flag.

As an added bonus, putting your application in a container based on the Alpine image gives you access to a ton of really useful tools: now you can drop a shell into your container and poke around while it’s running, if you need to!

Wrapping it up

We saw how Docker can help us to compile Go code in a clean, isolated environment; how to use different versions of the Go toolchain; and how to cross-compile between different operating systems and platforms.

We also saw how Go can help us to build small, lean container images for Docker, and described a number of associated subtleties linked (no pun intended) to static libraries and network dependencies.

Beyond the fact that Go is a really good fit for a project like Docker, we hope that we showed you how Go and Docker can benefit from each other and work really well together!

Acknowledgements

This was initially presented during the hack day at GopherCon 2016.

I would like to thank all the people who proofread this material and gave ideas and suggestions to make it better!

All mistakes and typos are my own; all the good stuff is theirs! ☺




Introducing runC: a lightweight universal container runtime https://www.docker.com/blog/runc/ Mon, 22 Jun 2015 17:07:00 +0000 https://www.docker.com/?p=37388 Spinning Out Docker’s Plumbing: Part 1: Introducing runC

On Infrastructure Plumbing

To build a platform like Docker you need a lot of infrastructure plumbing; in fact, even though our code base has grown to tens of thousands of lines of code over the past two years, roughly 50% of it is plumbing! Infrastructure plumbing is made of small software tools which perform basic fundamental tasks in the most reliable and simple way possible. It is invisible and under-appreciated, especially given that plumbing is what holds the world’s Internet infrastructure together.

To build Docker we have re-used large quantities of plumbing: Linux, Go, lxc, aufs, lvm, iptables, virtualbox, vxlan, mesos, etcd, consul, systemd… the list goes on. Docker wouldn’t be possible without the thousands of people who contributed to create this plumbing. When plumbing was not available or not sufficient, with the help of the Docker community, we built some of our own too. And as the volume and scope of contributions grew, so did the quality and quantity of the underlying plumbing. Of the tens of thousands of lines of code that constitute the Docker platform, roughly 50% is plumbing! Docker has plumbing for interacting with both Linux and Windows native capabilities; it has plumbing for networking; service discovery; master election; security; and more.

Infrastructure Plumbing Manifesto

How we create and reuse infrastructure plumbing is fundamental to the Docker project. Our approach boils down to 3 fundamental principles which we call “the Infrastructure Plumbing Manifesto”:

Whenever possible, re-use existing plumbing and contribute improvements back.

When you need to create new plumbing, make it easy to re-use and contribute improvements back. This grows the common pool of available components, and everyone benefits.

Follow the unix principles: several simple components are better than a single, complicated one.

Define standard interfaces which can be used to combine many simple components into a more sophisticated system.

There has been lots of demand for separating this plumbing from the Docker platform, so that it can be re-used by other infrastructure plumbers in accordance with infrastructure plumbing best practices. Today we are excited to announce that we are doing just that!

Spinning Out ALL Docker Infrastructure Plumbing

Starting today we will begin spinning out ALL INFRASTRUCTURE PLUMBING from the Docker platform. This is a big deal, and the most important architectural change for the Docker project since its introduction. This has many benefits for Docker:

  • If you are deploying Docker in production: this makes Docker more ops-friendly. Because its underlying plumbing will be more cleanly separated, the platform becomes more modular; which in turn makes it easier to scale, easier to troubleshoot, easier to secure and easier to customize.
  • If you want to integrate Docker with your favorite tool: because all plumbing exposes standard interfaces, each component becomes a potential integration point.

Today at DockerCon, we have started to announce the first plumbing components to be spun out. But there is still plenty of work to do. WE ARE CALLING ON ALL DOCKER CONTRIBUTORS, AND ALL INFRASTRUCTURE PLUMBERS EVERYWHERE, TO JOIN THE EFFORT AND HELP US CONTRIBUTE BACK DOCKER’S PLUMBING.

Today we are introducing our most famous piece of plumbing: our OS container runtime.

Introducing runC: The universal container runtime

Docker is a platform to build, ship and run distributed applications – meaning that it runs applications in a distributed fashion across many machines, often with a variety of hardware and OS configurations. For this to be possible, it needs a sandboxing environment capable of abstracting the specifics of the underlying host (for portability), without requiring a complete rewrite of the application (for ubiquity), and without introducing excessive performance overhead (for scale).

Over the last 5 years Linux has gradually gained a collection of features which make this kind of abstraction possible. Windows, with its upcoming version 10, is adding similar features as well. Those individual features have esoteric names like “control groups”, “namespaces”, “seccomp”, “capabilities”, “apparmor” and so on. But collectively, they are known as “OS containers” or sometimes “lightweight virtualization”.

Docker makes heavy use of these features and has become famous for it. Because “containers” are actually an array of complicated, sometimes arcane system features, we have integrated them into a unified low-level component which we simply call runC. And today we are spinning out runC as a standalone tool, to be used as plumbing by infrastructure plumbers everywhere.

runC is a lightweight, portable container runtime. It includes all of the plumbing code used by Docker to interact with system features related to containers. It is designed with the following principles in mind:

  • Designed for security.
  • Usable at large scale, in production, today.
  • No dependency on the rest of the Docker platform: just the container runtime and nothing else.

Popular runC features include:

  • Full support for Linux namespaces, including user namespaces
  • Native support for all security features available in Linux: SELinux, AppArmor, seccomp, control groups, capability drop, pivot_root, uid/gid dropping, etc. If Linux can do it, runC can do it.
  • Native support for live migration, with the help of the CRIU team at Parallels
  • Native support of Windows 10 containers is being contributed directly by Microsoft engineers
  • Planned native support for Arm, Power, Sparc with direct participation and support from Arm, Intel, Qualcomm, IBM, and the entire hardware manufacturers ecosystem.
  • Planned native support for bleeding edge hardware features – DPDK, sr-iov, tpm, secure enclave, etc.
  • Portable performance profiles, contributed by Google engineers based on their experience deploying containers in production.
  • A formally specified configuration format, governed by the Open Container Project under the auspices of the Linux Foundation. In other words: it’s a real standard.

The goal of runC is to make standard containers available everywhere

In fact, we have decided to donate the code of runC itself to the OCP foundation. Because OCP is designed to work just like the Linux Foundation, we expect that the maintainers – a blend of employees from various container-focused companies and hobbyists – will be largely left alone, and will continue to write the most awesome software possible.

runC is available today and is already under active development. Because it is based on the battle-tested plumbing used by Docker, you can use it in production today, either as part of a Docker deployment or in your own custom platform. We look forward to your contributions!

runC is available today at https://github.com/opencontainers/runc


Learn More about the Docker News from DockerCon 2015

Join our next Docker online meetup recapping all of the news from DockerCon including demos of the latest features of Docker 1.7. The meetup is on Monday, June 29 at 10:00 PDT / 19:00 CEST – click here to register!


