How to Use the Node Docker Official Image
https://www.docker.com/blog/how-to-use-the-node-docker-official-image/
Wed, 26 Oct 2022

Topping Stack Overflow’s 2022 list of most popular web frameworks and technologies, Node.js continues to grow as a critical MERN stack component. And since Node applications are written in JavaScript — the world’s leading programming language — many developers will feel right at home using it. We introduced the Node Docker Official Image (DOI) due to Node.js’ popularity and to solve some common development challenges.

The Node.js Foundation describes Node as “an open-source, cross-platform JavaScript runtime environment.” Developers use it to create performant, scalable server and networking applications. Despite Node’s advantages, building and deploying cross-platform services can be challenging with traditional workflows.

Conversely, the Node Docker Official Image accelerates and simplifies your development processes while allowing additional configuration. You can deploy containerized Node applications in minutes. Throughout this guide, we’ll discuss the Node Official Image, how to use it, and some valuable best practices. 

What is the Node Docker Official Image?

The Node Docker Official Image contains all source code, core dependencies, tools, and libraries your application needs to work correctly. 

This image supports multiple CPU architectures like amd64, arm32v6, arm32v7, arm64v8, ppc64le, and s390x. You can also choose between multiple tags (or image versions) for any project. Choosing a pinned version like node:19.0.0-slim locks you into a stable, streamlined version of Node.js. 

Node.js use cases

Node.js lets developers write server-side code in JavaScript. The runtime environment then transforms this JavaScript into hardware-friendly machine code. As a result, the CPU can process these low-level instructions. 

Node is event-driven (through user actions), non-blocking, and known for being lightweight while simultaneously handling numerous operations. As a result, you can use the Node DOI to create the following: 

  • Web server applications
  • Networking applications

Node works well here because it supports HTTP requests and socket connections. An asynchronous I/O library lets Node containers read and write various system files that support applications. 

You could use the Node DOI to build streaming apps, single-page applications, chat apps, to-do list apps, and microservices. Or — if you’re like Community All-Hands’ Kathleen Juell — you could use Node.js to help serve static content. Containerized Node will shine in any scenario dictated by numerous client-server requests. 

Docker Captain Bret Fisher also offered his thoughts on Dockerized Node.js during DockerCon 2022. He discussed best practices for managing Node.js projects while diving into optimization. 

Lastly, we also maintain some Node sample applications within our GitHub Awesome Compose library. You can learn to use Node with different databases or even incorporate an NGINX proxy. 

About Docker Official Images

We’ve curated the Node Docker Official Image as one of many core container images on Docker Hub. The Node.js community maintains this image alongside members of the Docker community. 

Like other Docker Official Images, the Node DOI offers a common starting point for Node and JavaScript developers. We also maintain an evolving list of Node best practices while regularly pushing critical security updates. This distinguishes Docker Official Images from alternatives on Docker Hub. 

How to run Node in Docker

Before getting started, download the latest Docker Desktop release and install it. Docker Desktop includes the Docker CLI, Docker Compose, and additional core development tools. The Docker Dashboard (Docker Desktop’s UI component) will help you manage images and containers. 

You’re then ready to Dockerize Node!

Enter a quick pull command

Pulling the Node DOI is the quickest way to begin. Enter docker pull node in your terminal to grab the default latest Node version from Docker Hub. You can readily use this tag for testing or local development. But, a pinned version might be safer for production use. Here’s how the pull process works: 

Your CLI will display a status message once it’s done. You can also double-check this within Docker Desktop! Click the Images tab on the left sidebar and scan through your listed images. Docker Desktop will display your node image:

Docker UI listing local images, including the Node Docker Official Image.

Your node:latest image is a hefty 942.33 MB. If you inspect your Node image’s contents using docker sbom node, you’ll see that it currently includes 623 packages. The Node image contains numerous dependencies and modules that support Node and various applications. 

However, your final Node image can be much slimmer! We’ll tackle optimization while discussing Dockerfiles. After all, the Node DOI has 24 supported tags spread amongst four major Node versions. Each has its own impact on image size.  

Confirm that Node is functional

Want to run your new image as a container? Hover over your listed node image and click the blue “Run” button. In this state, your Node container will produce some minimal log entries and run continuously in case requests come through. 

Exit this container before moving on by clicking the square “stop” button in Docker Desktop or by entering docker stop YourContainerName in the CLI. 

Create your Node image from a Dockerfile

Building from a Dockerfile gives you ultimate control over image composition, configuration, and your overall application. However, Node requires very little to function properly. Here’s a barebones Dockerfile to get you up and running (using a pinned, Debian-based image version): 

FROM node:19-bullseye

Docker will build your image from your chosen Node version. 

It’s safest to use node:19-bullseye because this image supports numerous use cases. This version is also stable and prevents you from pulling in new breaking changes, which sometimes happens with latest tags. 

To build your image from a Dockerfile, run the docker build -t my-nodejs-app . command. You can then run your new image by entering docker run -it --rm --name my-running-app my-nodejs-app.
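
For reference, a slightly fuller single-stage Dockerfile might look like the sketch below. The working directory, exposed port, and server.js entry point are illustrative assumptions rather than part of the official example:

FROM node:19-bullseye
WORKDIR /usr/src/app
# Install dependencies first so Docker can cache this layer between builds
COPY package*.json ./
RUN npm install
# Copy the rest of the application source
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]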

Optimize your Node image

The complete version of Node often includes extra packages that weigh your application down. This leaves plenty of room for optimization. 

For example, removing unneeded development dependencies reduces image bloat. You can do this by adding a RUN instruction to our previous file: 

FROM node:19-bullseye

RUN npm prune --production

This approach is pretty granular. It also relies on you knowing exactly what you do and don’t need for your project. Alternatively, switching to a slim image build offers the quickest results. You’ll encounter similar caveats but spend less time writing individual Dockerfile instructions. The easiest approach is to replace node:19-bullseye with its node:19-bullseye-slim counterpart. This alone shrinks image size by 75%. 

You can even pull node:19-alpine to save more disk space. However, this tag contains even fewer dependencies and isn’t officially supported by the Node.js Foundation. Keep this in mind while developing. 

Finally, multi-stage builds lead to smaller image sizes. These let you copy only what you need between build stages to combat bloat. 
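
As a rough sketch, a multi-stage Node build might look like the following. The stage names, build output path, and npm scripts are assumptions about a typical project, not prescriptions:

FROM node:19-bullseye AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
# Assumes a "build" script exists in package.json
RUN npm run build

FROM node:19-bullseye-slim
WORKDIR /usr/src/app
ENV NODE_ENV=production
COPY --from=build /usr/src/app/package*.json ./
# Install only production dependencies in the final image
RUN npm ci --omit=dev
COPY --from=build /usr/src/app/dist ./dist
USER node
CMD ["node", "dist/index.js"]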

Using Docker Compose

Say you have a start script, an existing package.json file, and (possibly) want to operate Node alongside other services. Spinning up Node containers with Docker Compose can be pretty handy in these situations.

Here’s a sample docker-compose.yml file: 

services:
  node:
    image: "node:19-bullseye"
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=production
    volumes:
      - ./:/home/node/app
    ports:
      - "8888:8888"
    command: "npm start"

You’ll see some parameters that we didn’t specify earlier in our Dockerfile. For example, the user parameter lets you run your container as an unprivileged user. This follows the principle of least privilege. 

To jumpstart your Node container, simply enter the docker compose up -d command. Like before, you can verify that Node is running within Docker Desktop. The docker container ls --all command also displays all existing containers within the CLI.  

Running a simple Node script

Your project doesn’t always need a  Dockerfile. In these cases, you can directly leverage the Node DOI with the following command: 

docker run -it --rm --name my-running-script -v "$PWD":/usr/src/app -w /usr/src/app node:19-bullseye node your-daemon-or-script.js

This simplistic approach is ideal for single-file projects.

Docker Node best practices

It’s important to get the most out of Docker and the Node Official Image. We’ve briefly outlined the benefits of running as a non-root node user, but here are some useful tips for developing with Node: 

  • Pass runtime configuration to your application with environment variables. For example, set NODE_ENV to production, as seen here: -e "NODE_ENV=production".
  • Place any installed, global Node dependencies into a non-root user directory.
  • Remember to manually install curl if using an alpine image tag, since it’s not included by default.
  • Wrap your Node process in an init system with the --init flag, so it can successfully run as PID 1. 
  • Set memory limitations for your containers that run on the same host. 
  • Include the package.json start command directly within your Dockerfile, to reduce active container processes and let Node properly receive exit signals. 

This isn’t an exhaustive list. To view more details, check out our best practices documentation.

Get started with Node today

As you’ve seen, spinning up a Node container from the Node Docker Official Image is quick and requires just a few steps depending on your workflow. You’ll no longer need to worry about platform-specific builds or get bogged down with complex development processes. 

We’ve also covered many ways to help your Node builds perform better. Check out our top containerization tips article to learn even more about optimization and security. 

Ready to get started? Swing by Docker Hub and pull our Node image to start experimenting. In no time, you’ll have your server and networking applications up and running. You can also learn more on our GitHub README.

How to Use the Alpine Docker Official Image
https://www.docker.com/blog/how-to-use-the-alpine-docker-official-image/
Thu, 08 Sep 2022

With its container-friendly design, the Alpine Docker Official Image (DOI) helps developers build and deploy lightweight, cross-platform applications. It’s based on Alpine Linux, which debuted in 2005, making it one of today’s newest major Linux distros. 

While some developers express security concerns when using relatively newer images, Alpine has earned a solid reputation. Developers favor Alpine for its small footprint, simplicity, and security-conscious design.  

In fact, the Alpine DOI is one of our most popular container images on Docker Hub. To help you get started, we’ll discuss this image in greater detail and how to use the Alpine Docker Official Image with your next project. Plus, we’ll explore using Alpine to grab the slimmest image possible. Let’s dive in!

What is the Alpine Docker Official Image?

The Alpine DOI is a building block for Alpine Linux Docker containers. It’s an executable software package that tells Docker and your application how to behave. The image includes source code, libraries, tools, and other core dependencies that your application needs. These components help Alpine Linux function while enabling developer-centric features. 

The Alpine Docker Official Image differs from other Linux-based images in a few ways. First, Alpine is based on the musl libc implementation of the C standard library — and uses BusyBox instead of GNU coreutils. While GNU packages many Linux-friendly programs together, BusyBox bundles a smaller number of core functions within one executable. 

While our Ubuntu and Debian images leverage glibc and GNU coreutils, Alpine’s musl and BusyBox alternatives are comparatively lightweight and resource-friendly, containing fewer extensions and less bloat.

As a result, Alpine appeals to developers who don’t need uncompromising compatibility or functionality from their image. Our Alpine DOI is also user-friendly and straightforward since there are fewer moving parts.

Alpine Linux performs well on resource-limited devices, which is fitting for developing simple applications or spinning up servers. Your containers will consume less RAM and less storage space. 

The Alpine Docker Official Image also offers the following features:

Multi-arch support lets you run Alpine on desktops, mobile devices, rack-mounted servers, Raspberry Pis, and even newer M-series Macs. Overall, Alpine pairs well with a wide variety of embedded systems. 

These are only some of the advantages to using the Alpine DOI. Next, we’ll cover how to harness the image for your application. 

When to use Alpine

You may be interested in using Alpine, but find yourself asking, “When should I use it?” Containerized Alpine shines in some key areas: 

  • Creating servers
  • Router-based networking
  • Development/testing environments

While there are some other uses for Alpine, most projects will fall under these categories. Overall, our Alpine container image excels in situations where space savings and security are critical. 

How to run Alpine in Docker

Before getting started, download Docker Desktop and then install it. Docker Desktop is built upon Docker Engine and bundles together the Docker CLI, Docker Compose, and other core components. Launching Docker Desktop also lets you use Docker CLI commands (which we’ll get into later). Finally, the included Docker Dashboard will help you visually manage your images and containers. 

After completing these steps, you’re ready to Dockerize Alpine!

Note: For Linux users, Docker will still work perfectly fine if you have it installed externally on a server, or through your distro’s package manager. However, Docker Desktop for Linux does save time and effort by bundling all necessary components together — while aiding productivity through its user-friendly GUI. 

Use a quick pull command

You’ll have to first pull the Alpine Docker Official Image before using it for your project. The fastest method involves running docker pull alpine from your terminal. This grabs the alpine:latest image (the most current available version) from Docker Hub and downloads it locally on your machine: 

Your terminal output should show when your pull is complete — and which alpine version you’ve downloaded. You can also confirm this within Docker Desktop. Navigate to the Images tab from the left sidebar. And a list of downloaded images will populate on the right. You’ll see your alpine image, tag, and its minuscule (yes, you saw that right) 5.29 MB size:

Docker Desktop UI with list of downloaded images including Alpine.
Other Linux distro images like Ubuntu, Debian, and Fedora are many, many times larger than Alpine.

That’s a quick introduction to using the Alpine Official Image alongside Docker Desktop. But it’s important to remember that every Alpine DOI version originates from a Dockerfile. This plain-text file contains instructions that tell Docker how to build an image layer by layer. Check out the Alpine Linux GitHub repository for more Dockerfile examples. 

Next up, we’ll cover the significance of these Dockerfiles to Alpine Linux, some CLI-based workflows, and other key information.

Build your Dockerfile

Because Alpine is a standard base for container images, we recommend building on top of it within a Dockerfile. Specify your preferred alpine image tag and add instructions to create this file. Our example takes alpine:3.14 and runs an executable mysql client with it: 

FROM alpine:3.14
RUN apk add --no-cache mysql-client
ENTRYPOINT ["mysql"]

In this case, we’re starting from a slim base image and adding our mysql-client using Alpine’s standard package manager. Overall, this lets us run commands against our MySQL database from within our application. 

This is just one of the many ways to get your Alpine DOI up and running. In particular, Alpine is well-suited to server builds. To see this in action, check out Kathleen Juell’s presentation on serving static content with Docker Compose, Next.js, and NGINX. Navigate to timestamp 7:07 within the embedded video. 

The Alpine Official Image has a close relationship with other technologies (something that other images lack). Many of our Docker Official Images support -alpine tags. For instance, our earlier example of serving static content leverages the node:16-alpine image as a builder. 

This relationship makes Alpine and multi-stage builds an ideal pairing. Since the primary goal of a multi-stage build is to reduce your final image size, we recommend starting with one of the slimmest Docker Official Images.
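
A minimal sketch of that pairing, assuming a Node-based static site whose build output lands in a build/ folder (your framework’s output directory may differ):

FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Serve the generated static files with a slim NGINX image
FROM nginx:1.23-alpine
COPY --from=builder /app/build /usr/share/nginx/html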

Grabbing the slimmest possible image

Pulling an -alpine version of a given image typically yields the slimmest result. You can do this using our earlier docker pull [image] command. Or you can create a Dockerfile and specify this image version — while leaving room for customization with added instructions. 

In either case, here are some results using a few of our most popular images. You can see how image sizes change with these tags:

Image tag       Image size    image:[version number]-alpine size
python:3.9.13   867.66 MB     46.71 MB
node:18.8.0     939.71 MB     164.38 MB
nginx:1.23.1    134.51 MB     22.13 MB

We’ve used the :latest tag since this is the default image tag Docker grabs from Docker Hub. As shown above with Python, pulling the -alpine image version reduces its footprint by nearly 95%! 

From here, the build process (when working from a Dockerfile) becomes much faster. Applications based on slimmer images spin up quicker. You’ll also notice that docker pull and various docker run commands execute faster with -alpine images. 

However, remember that you’ll likely have to use this tag with a specified version number for your parent image. Running docker pull python-alpine or docker pull python:latest-alpine won’t work. Docker will alert you that the image isn’t found, the repo doesn’t exist, the command is invalid, or login information is required. This applies to any image. 

Get up and running with Alpine today

The Alpine Docker Official Image shines thanks to its simplicity and small size. It’s a fantastic base image — perhaps the most popular amongst Docker users — and offers plenty of room for customization. Alpine is arguably the most user-friendly, containerized Linux distro. We’ve tackled how to use the Alpine Official Image, and showed you how to get the most from it. 

Want to use Alpine for your next application or server? Pull the Alpine Official Image today to jumpstart your build process. You can also learn more about supported tags on Docker Hub. 

Additional resources

How to Colorize Black & White Pictures With OpenVINO on Ubuntu Containers
https://www.docker.com/blog/how-to-colorize-black-white-pictures-ubuntu-containers/
Fri, 02 Sep 2022

If you’re looking to bring a stack of old family photos back to life, check out Ubuntu’s demo on how to use OpenVINO on Ubuntu containers to colorize monochrome pictures. This magical use of containers, neural networks, and Kubernetes is packed with helpful resources and a fun way to dive into deep learning!

A version of Part 1 and Part 2 of this article was first published on Ubuntu’s blog.

OpenVINO on Ubuntu containers: making developers’ lives easier

Suppose you’re curious about AI/ML and what you can do with OpenVINO on Ubuntu containers. In that case, this blog is an excellent read for you too.

Docker image security isn’t only about provenance and supply chains; it’s also about the user experience. More specifically, the developer experience.

Removing toil and friction from your app development, containerization, and deployment processes avoids encouraging developers to use untrusted sources or bad practices in the name of getting things done. As AI/ML development often requires complex dependencies, it’s the perfect proof point for secure and stable container images.

Why Ubuntu Docker images?

As the most popular container image in its category, the Ubuntu base image provides a seamless, easy-to-set-up experience. From public cloud hosts to IoT devices, the Ubuntu experience is consistent and loved by developers.

One of the main reasons for adopting Ubuntu-based container images is the software ecosystem. More than 30,000 packages are available in one “install” command, with the option to subscribe to enterprise support from Canonical. It just makes things easier.

In this blog, you’ll see that using Ubuntu Docker images greatly simplifies component containerization. We even used a prebuilt & preconfigured container image for the NGINX web server from the LTS images portfolio maintained by Canonical for up to 10 years.

Beyond providing a secure, stable, and consistent experience across container images, Ubuntu is a safe choice from bare metal servers to containers. Additionally, it comes with hardware optimization on clouds and on-premises, including Intel hardware.

Why OpenVINO?

When you’re ready to deploy deep learning inference in production, binary size and memory footprint are key considerations – especially when deploying at the edge. OpenVINO provides a lightweight Inference Engine with a binary size of just over 40MB for CPU-based inference. It also provides a Model Server for serving models at scale and managing deployments.

OpenVINO includes open-source developer tools to improve model inference performance. The first step is to convert a deep learning model (trained with TensorFlow, PyTorch,…) to an Intermediate Representation (IR) using the Model Optimizer. In fact, it cuts the model’s memory usage in half by converting it from FP32 to FP16 precision. You can unlock additional performance by using low-precision tools from OpenVINO. The Post-training Optimisation Tool (POT) and Neural Network Compression Framework (NNCF) provide quantization, binarisation, filter pruning, and sparsity algorithms. As a result, Intel devices’ throughput increases on CPUs, integrated GPUs, VPUs, and other accelerators.

Open Model Zoo provides pre-trained models that work for real-world use cases to get you started quickly. Additionally, Python and C++ sample codes demonstrate how to interact with the model. More than 280 pre-trained models are available to download, from speech recognition to natural language processing and computer vision.

For this blog series, we will use the pre-trained colorization models from Open Model Zoo and serve them with Model Server.

Colorization example: Albert Einstein sticking his tongue out

OpenVINO and Ubuntu container images

The Model Server – by default – ships with the latest Ubuntu LTS, providing a consistent development environment and an easy-to-layer base image. The OpenVINO tools are also available as prebuilt development and runtime container images.

To learn more about Canonical LTS Docker Images and OpenVINO™, read:

Neural networks to colorize a black & white image

Now, back to the matter at hand: how will we colorize grandma and grandpa’s old pictures? Thanks to Open Model Zoo, we won’t have to train a neural network ourselves and will only focus on the deployment. (You can still read about it.)

Architecture diagram of the colorizer demo app running on MicroK8s

Our architecture consists of three microservices: a backend, a frontend, and the OpenVINO Model Server (OVMS) to serve the neural network predictions. The Model Server component hosts two different demonstration neural networks to compare their results (V1 and V2). These components all use the Ubuntu base image for a consistent software ecosystem and containerized environment.

A few reads if you’re not familiar with this type of microservices architecture:

gRPC vs REST APIs

The OpenVINO Model Server provides inference as a service via HTTP/REST and gRPC endpoints for serving models in OpenVINO IR or ONNX format. It also offers centralized model management to serve multiple different models or different versions of the same model and model pipelines.

The server offers two sets of APIs to interface with it: REST and gRPC. Both APIs are compatible with TensorFlow Serving and expose endpoints for prediction, checking model metadata, and monitoring model status. For use cases where low latency and high throughput are needed, you’ll probably want to interact with the model server via the gRPC API. Indeed, it introduces a significantly smaller overhead than REST. (Read more about gRPC.)

OpenVINO Model Server is distributed as a Docker image with minimal dependencies. For this demo, we will use the Model Server container image deployed to a MicroK8s cluster. This combination of lightweight technologies is suitable for small deployments. It suits edge computing devices, performing inferences where the data is being produced – for increased privacy, low latency, and low network usage.

Ubuntu minimal container images

Since 2019, the Ubuntu base images have been minimal, with no “slim” flavors. While there’s room for improvement (keep posted), the Ubuntu Docker image is less than a 30 MB download, making it one of the tiniest Linux distributions available on containers.

In terms of Docker image security, size is one thing, and reducing the attack surface is a fair investment. However, as is often the case, size isn’t everything. In fact, maintenance is the most critical aspect. The Ubuntu base image, with its rich and active software ecosystem community, is usually a safer bet than smaller distributions.

A common trap is to start smaller and install loads of dependencies from many different sources. The end result will have poor performance, use non-optimized dependencies, and not be secure. You probably don’t want to end up effectively maintaining your own Linux distribution … So, let us do it for you.

“What are you looking at?” (original picture source)

Demo architecture

“As a user, I can drag and drop black and white pictures to the browser so that it displays their ready-to-download colorized version.” – said the PM (me).

For that – replied the one-time software engineer (still me) – we only need:

  • A fancy yet lightweight frontend component.
  • OpenVINO™ Model Server to serve the neural network colorization predictions.
  • A very light backend component.

Whilst we could target the Model Server directly with the frontend (it exposes a REST API), we need to apply transformations to the submitted image. The colorization models, in fact, each expect a specific input.

Finally, we’ll deploy these three services with Kubernetes because … well … because it’s groovy. And if you think otherwise (everyone is allowed to have a flaw or two), you’ll find a fully functional docker-compose.yaml in the source code repository.

Architecture diagram for the demo app (originally colored tomatoes)

In the upcoming sections, we will first look at each component and then show how to deploy them with Kubernetes using MicroK8s. Don’t worry; the full source code is freely available, and I’ll link you to the relevant parts.

Neural network – OpenVINO Model Server

The colorization neural network is published under the BSD 2-clause License, accessible from the Open Model Zoo. It’s pre-trained, so we don’t need to understand it in order to use it. However, let’s look closer to understand what input it expects. I also strongly encourage you to read the original work from Richard Zhang, Phillip Isola, and Alexei A. Efros. They made the approach super accessible and understandable on this website and in the original paper.

Neural network architecture (from arXiv:1603.08511 [cs.CV])

As you can see on the network architecture diagram, the neural network uses an unusual color space: LAB. There are many 3-dimensional spaces to code colors: RGB, HSL, HSV, etc. The LAB format is relevant here as it fully isolates the color information from the lightness information. Therefore, a grayscale image can be coded with only the L (for Lightness) axis. We will send only the L axis to the neural network’s input. It will generate predictions for the colors coded on the two remaining axes: A and B.

From the architecture diagram, we can also see that the model expects a 256×256 pixels input size. For these reasons, we cannot just send our RGB-coded grayscale picture in its original size to the network. We need to first transform it.
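
As a rough Python sketch of that preprocessing (the demo’s backend may implement it differently, and the exact scaling details depend on the model):

import cv2
import numpy as np

def prepare_input(bgr_image: np.ndarray) -> np.ndarray:
    # Resize to the 256x256 input size the model expects
    resized = cv2.resize(bgr_image, (256, 256))
    # Convert to LAB; OpenCV expects float images scaled to [0, 1] for this conversion
    lab = cv2.cvtColor(resized.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)
    # Keep only the L (lightness) channel and add batch/channel dimensions: (1, 1, 256, 256)
    l_channel = lab[:, :, 0]
    return l_channel[np.newaxis, np.newaxis, ...]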

We compare the results of two different model versions for the demo. Let them be called ‘V1’ (Siggraph) and ‘V2’. The models are served with the same instance of the OpenVINO™ Model Server as two different models. (We could also have done it with two different versions of the same model – read more in the documentation.)

Finally, to build the Docker image, we use the first stage from the Ubuntu-based development kit to download and convert the model. We then rebase on the more lightweight Model Server image.

# Dockerfile
FROM openvino/ubuntu20_dev:latest AS omz
# download and convert the model
…
FROM openvino/model_server:latest
# copy the model files and configure the Model Server
…

Backend – Ubuntu-based Flask app (Python)

For the backend microservice that interfaces between the user-facing frontend and the Model Server hosting the neural network, we chose to use Python. There are many valuable libraries to manipulate data, including images, specifically for machine learning applications. To provide web serving capabilities, Flask is an easy choice.

The backend takes an HTTP POST request with the to-be-colorized picture. It synchronously returns the colorized result using the neural network predictions. In between – as we’ve just seen – it needs to convert the input to match the model architecture and to prepare the output to show a displayable result.

Here’s what the transformation pipeline looks like on the input:

Input transformation pipeline (diagram)

And the output looks something like this:

Output transformation pipeline (diagram)

To containerize our Python Flask application, we use the first stage with all the development dependencies to prepare our execution environment. We copy it onto a fresh Ubuntu base image to run it, configuring the model server’s gRPC connection.
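
A hedged sketch of that two-stage backend image follows; the Ubuntu tag, file names, dependency layout, and the gRPC address variable are assumptions rather than the demo’s exact code:

# Build stage: install Python dependencies into a standalone folder
FROM ubuntu:22.04 AS builder
RUN apt-get update && apt-get install -y python3 python3-pip && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip3 install --no-cache-dir --target=/packages -r requirements.txt

# Runtime stage: copy the packages onto a fresh Ubuntu base
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3 && rm -rf /var/lib/apt/lists/*
COPY --from=builder /packages /packages
ENV PYTHONPATH=/packages
WORKDIR /app
COPY app.py .
# gRPC address of the OpenVINO Model Server (hypothetical variable name)
ENV OVMS_GRPC_ADDRESS=modelserver:9000
CMD ["python3", "app.py"]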

Frontend – Ubuntu-based NGINX container and Svelte app

Finally, I put together a fancy UI for you to try the solution out. It’s an effortless single-page application with a file input field. It can display side-by-side the results from the two different colorization models.

I used Svelte to build the demo as a dynamic frontend. Below each colorization result, there’s even a saturation slider (using a CSS transformation) so that you can emphasize the predicted colors and better compare the before and after.

To ship this frontend application, we again use a Docker image. We first build the application using the Node base image. We then rebase it on top of the preconfigured NGINX LTS image maintained by Canonical. A reverse proxy on the frontend side serves as a passthrough to the backend on the /API endpoint to simplify the deployment configuration. We do that directly in an nginx.conf configuration file copied to the NGINX templates directory. The container image is preconfigured to use these template files with environment variables.
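
A hedged sketch of that frontend image; the Canonical image tag, the Svelte output directory, and the template path are assumptions based on the description above:

# Build stage: compile the Svelte app with the Node base image
FROM node:16 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: serve the build with Canonical's preconfigured NGINX LTS image
FROM ubuntu/nginx:latest
COPY --from=build /app/public /var/www/html
# Reverse-proxy template rendered by the image using environment variables
COPY nginx.conf /etc/nginx/templates/default.conf.template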

Deployment with Kubernetes

I hope you had the time to scan some black and white pictures because things are about to get serious(ly colorized).

We’ll assume you already have a running Kubernetes installation from the next section. If not, I encourage you to run the following steps or go through this MicroK8s tutorial.

# https://microk8s.io/docs
sudo snap install microk8s --classic
 
# Add current user ($USER) to the microk8s group
sudo usermod -a -G microk8s $USER && sudo chown -f -R $USER ~/.kube
newgrp microk8s
 
# Enable the DNS, Storage, and Registry addons required later
microk8s enable dns storage registry
 
# Wait for the cluster to be in a Ready state
microk8s status --wait-ready
 
# Create an alias to enable the `kubectl` command
sudo snap alias microk8s.kubectl kubectl

Yes, you deployed a Kubernetes cluster in about two command lines.

Build the components’ Docker images

Every component comes with a Dockerfile to build itself in a standard environment and ship its deployment dependencies (read What are containers for more information). They all create an Ubuntu-based Docker image for a consistent developer experience.

Before deploying our colorizer app with Kubernetes, we need to build and push the components’ images. They need to be hosted in a registry accessible from our Kubernetes cluster. We will use the built-in local registry with MicroK8s. Depending on your network bandwidth, building and pushing the images will take a few minutes or more.

sudo snap install docker
cd ~ && git clone https://github.com/valentincanonical/colouriser-demo.git
 
# Backend
docker build backend -t localhost:32000/backend:latest
docker push localhost:32000/backend:latest
 
# Model Server
docker build modelserver -t localhost:32000/modelserver:latest
docker push localhost:32000/modelserver:latest
 
# Frontend
docker build frontend -t localhost:32000/frontend:latest
docker push localhost:32000/frontend:latest

Apply the Kubernetes configuration files

All the components are now ready for deployment. The Kubernetes configuration files are available as deployments and services YAML descriptors in the ./k8s folder of the demo repository. We can apply them all at once, in one command:

kubectl apply -f ./k8s

Give it a few minutes. You can watch the app being deployed with watch kubectl get pods. Of all the services, the frontend one has a specific NodePort configuration to make it publicly accessible by targeting the Node IP address.


Once ready, you can access the demo app at http://localhost:30000/ (or replace localhost with a cluster node IP address if you’re using a remote cluster). Pick an image from your computer, and get it colorized!

All in all, the project was pretty easy considering the task we accomplished. Thanks to Ubuntu containers, building each component’s image with multi-stage builds was a consistent and straightforward experience. And thanks to OpenVINO™ and the Open Model Zoo, serving a pre-trained model with excellent inference performance was a simple task accessible to all developers.

That’s a wrap!

You didn’t even have to share your pics over the Internet to get it done. Thanks for reading this article; I hope you enjoyed it. Feel free to reach out on socials. I’ll leave you with the last colorization example.

Christmassy colorization example (original picture source)

To learn more about Ubuntu, the magic of Docker images, or even how to make your own Dockerfiles, see below for related resources:

Extending Docker’s Integration with containerd
https://www.docker.com/blog/extending-docker-integration-with-containerd/
Thu, 01 Sep 2022

We’re extending Docker’s integration with containerd to include image management! To share this work early and get feedback, this integration is available as an opt-in experimental feature with the latest Docker Desktop 4.12.0 release.

The Docker Desktop Experimental Features settings with the option for using containerd for pulling and storing images enabled.

What is containerd?

In the simplest terms, containerd is a broadly-adopted open container runtime. It manages the complete container lifecycle of its host system! This includes pulling and pushing images as well as handling the starting and stopping of containers. Not to mention, containerd is a low-level brick in the container experience. Rather than being used directly by developers, it’s designed to be embedded into systems like Docker and Kubernetes.

Docker’s involvement in the containerd project can be traced all the way back to 2016. You could say it’s a bit of a passion project for us! While we had many reasons for starting the project, our goal was to move the container supervision out of the core Docker Engine and into a separate daemon. This way, it could be reused in projects like Kubernetes. It was donated to the Cloud Native Computing Foundation (CNCF) in 2017 and is now a graduated (stable) project.

What does containerd replace in the Docker Engine?

As we mentioned earlier, Docker has used containerd as part of Docker Engine for managing the container lifecycle (creating, starting, and stopping) for a while now! This new work is a step towards a deeper integration of containerd into the Docker Engine. It lets you use containerd to store images and then push and pull them. Containerd also uses snapshotters instead of graph drivers for mounting the root file system of a container. Due to containerd’s pluggable architecture, it can support multiple snapshotters as well. 

Want to learn more? Michael Crosby wrote a great explanation about snapshotters on the Moby Blog.

Why migrate to containerd for image management?

Containerd is the leading open container runtime and, better yet, it’s already a part of Docker Engine! By switching to containerd for image management, we’re better aligning ourselves with the broader industry tooling. 

This migration modifies two main things:

  • We’re replacing Docker’s graph drivers with containerd’s snapshotters.
  • We’ll be using containerd to push, pull, and store images.

What does this mean for Docker users?

We know developers love how Docker commands work today and that many tools rely on the existing Docker API. With this in mind, we’re fully vested in making sure that the integration is as transparent as possible and doesn’t break existing workflows. To do this, we’re first rolling it out as an experimental, opt-in feature so that we can get early feedback. When enabled in the latest Docker Desktop, this experimental feature lets you use the following Docker commands with containerd under the hood: run, commit, build, push, load, and save.

This integration has the following benefits:

  1. Containerd’s snapshotter implementation helps you quickly plug in new features. Some examples include using stargz to lazy-pull images on startup or nydus and dragonfly for peer-to-peer image distribution.
  2. The containerd content store can natively store multi-platform images and other OCI-compatible objects. This enables features like the ability to build and manipulate multi-platform images using Docker Engine (and possibly other content in the future!).

If you plan to build a multi-platform image, the graphic below shows what to expect when you run the build command with the containerd store enabled. 

A recording of a docker build terminal output with the containerd store enabled.
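
For reference, a command along these lines is what the recording shows; the image name and platform list are placeholders, and the exact flags may vary by Docker Desktop version:

docker build --platform linux/amd64,linux/arm64 -t myorg/myapp:latest .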

Without the experimental feature enabled, you will get an error message stating that this feature is not supported on the docker driver, as shown in the graphic below. 

A recording of an unsuccessful docker build output with the containerd store disabled.

If you decide not to enable the experimental feature, no big deal! Things will work like before. If you have additional questions, you can access details in our release notes.

Roadmap for the containerd integration

We want to be as transparent as possible with the Docker community when it comes to this containerd integration (no surprises here!). For this reason, we’ve laid out a roadmap. The integration will happen in two key steps:

  1. We’ll ship an initial version in Docker Desktop which enables common workflows but doesn’t touch existing images to prove that this approach works.
  2. Next, we’ll write the code to migrate user images to use containerd and activate the feature for all our users.

We work to make expanding integrations like this as seamless as possible so you, our end user, can reap the benefits! This way, you can create new, exciting things while leveraging existing features in the ecosystem such as namespaces, containerd plug-ins, and more.

We’ve released this experimental feature first in Docker Desktop so that we can get feedback quickly from the community. But, you can also expect this feature in a future Docker Engine release.  


The details on the ongoing integration work can be accessed here. 

Conclusion

In summary, Docker users can now look forward to full containerd integration. This brings many exciting features from native multi-platform support to encrypted images and lazy pulls. So make sure to download the latest version of Docker Desktop and enable the containerd experimental feature to take it for a spin! 

We love sharing things early and getting feedback from the Docker community — it helps us build products that work better for you. Please join us on our community Slack channel or drop us a line using our feedback form.

How to Use the Redis Docker Official Image
https://www.docker.com/blog/how-to-use-the-redis-docker-official-image/
Wed, 24 Aug 2022

Maintained in partnership with Redis, the Redis Docker Official Image (DOI) lets developers quickly and easily containerize a Redis instance. It streamlines the cross-platform deployment process — even letting you use Redis with edge devices if they support your workflows. 

Developers have pulled the Redis DOI over one billion times from Docker Hub. As the world’s most popular key-value store, Redis helps apps concurrently access critical bits of data while remaining resource friendly. It’s highly performant, in-memory, networked, and durable. It also stands apart from relational databases like MySQL and PostgreSQL that use tabular data structures. From day one, Redis has also been open source. 

Finally, Redis cluster nodes are horizontally scalable — making it a natural fit for containerization and multi-container operation. Read on as we explore how to use the Redis Docker Official Image to containerize and accelerate your Redis database deployment.

What is the Redis Docker Official Image?

The Redis DOI is a building block for Redis Docker containers. It’s an executable software package that tells Docker and your application how to behave. It bundles together source code, dependencies, libraries, tools, and other core components that support your application. In this case, these components determine how your app and Redis database interact.

Our Redis Docker Official Image supports multiple CPU architectures. An assortment of over 50 supported tags lets you choose the best Redis image for your project. They’re also multi-layered and run using a default configuration (if you’re simply using docker pull). Complexity and base images also vary between tags. 

That said, you can also configure your Redis Official Image’s Dockerfile as needed. We’ll touch on this while outlining how to use the Redis DOI. Let’s get started.

How to run Redis in Docker

Before proceeding, we recommend installing Docker Desktop. Desktop is built upon Docker Engine and packages together the Docker CLI, Docker Compose, and more. Running Docker Desktop lets you use Docker commands. It also helps you manage images and containers using the Docker Dashboard UI. 

Use a quick pull command

Next, you’ll need to pull the Redis DOI to use it with your project. The quickest method involves visiting the image page on Docker Hub, copying the docker pull command, and running it in your terminal:

Your output confirms that Docker has successfully pulled the :latest Redis image. You can also verify this by hopping into Docker Desktop and opening the Images interface from the left sidebar. Your redis image automatically appears in the list:

Docker Desktop list of local images on disk, including the Redis official Docker image.

We can also see that our new Redis image is 111.14 MB in size. This is pretty lightweight compared to many images. However, using an alpine variant like redis:alpine3.16 further slims your image.

Now that you’re acquainted with Docker Desktop, let’s jump into our CLI workflow to get Redis up and running. 

Start your Redis instance

Redis acts as a server, and related server processes power its functionality. We need to start a Redis instance, or software server process, before linking it with our application. Luckily, you can create a running instance with just one command: 

 docker run --name some-redis -d redis 

We recommend naming your container. This helps you reference it later on. It also makes it easier to run additional commands that involve it. Your container will run until you stop it. 

The -d flag in this command tells Docker to run your Redis service in “detached” mode. Redis, therefore, runs in the background. Your container will also automatically exit when its root process exits. You’ll see that we’re not explicitly telling the service to “start” within this command. By leaving this verbiage out, our Redis service will start and continue running — remaining usable to our application.

Set up Redis persistent storage

Persistent storage is crucial when you want your application to save data between runs. You can have Redis write its data to a destination like an SSD. Persistence is also useful for keeping log files across restarts. 

You can take point-in-time snapshots of your data using the Redis Database (RDB) persistence method. This lets you designate snapshot intervals and record data at certain points in time. However, the running container from our initial docker run command already holds the some-redis name. You should remove (or stop) this container before moving on, since it’s not critical for this example. 

Once that’s done, this command triggers persistent storage snapshots every 60 seconds: 

 docker run --name some-redis -d redis redis-server --save 60 1 --loglevel warning 

The RDB approach is valuable as it enables “set-and-forget” persistence. It also generates more logs. Logging can be useful for troubleshooting, yet it also requires you to monitor accumulation over time. 

However, you can also forego persistence entirely or choose another option. To learn more, check out Redis’ documentation. 

Redis stores your persisted data in the /data location, which the image declares as a VOLUME. These connected volumes are shareable between containers. This shareability becomes useful when Redis lives within one container and your application occupies another. 
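
For example, here’s a hedged way to mount a named volume at that /data location (stop and remove the earlier some-redis container first, since the name is reused; the volume name is arbitrary):

docker run --name some-redis -d \
  -v redis-data:/data \
  redis redis-server --save 60 1 --loglevel warning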

Connect with the Redis CLI

The Redis CLI lets you run commands directly within your running Redis container. However, this isn’t automatically possible via Docker. Enter the following commands to enable this functionality: 

 docker network create some-network 
 docker run -it --network some-network --rm redis redis-cli -h some-redis

Your Redis service understands Redis CLI commands. Numerous commands are supported, as are different CLI modes. Read through the Redis CLI documentation to learn more. 

Once you have CLI functionality up and running, you’re free to leverage Redis more directly!

Configurations and modules

Finally, we’ve arrived at customization. While you can run a Redis-powered app using defaults, you can tweak your Dockerfile to grab your pre-existing redis.conf file. This better supports production applications. While Redis can successfully start without these files, they’re central to configuring your services. 

You can see what a redis.conf file looks like on GitHub. Otherwise, here’s a sample Dockerfile: 

FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]

You can also use docker run to achieve this. However, you should first do two things for this method to work correctly. First, create the /myredis/conf directory on your host machine. This is where your configuration files will live. 

Second, open Docker Desktop and click the Settings gear in the upper right. Choose Resources > File Sharing to view your list of directories. You’ll see a grayed-out directory entry at the bottom, which is an input field for a named directory. Type in /myredis/conf there and hit the “+” button to locally verify your file path:

Docker Desktop resource file sharing settings with the `/myredis/conf` directory added.

You’re now ready to run your command! 

 docker run -v /myredis/conf:/usr/local/etc/redis --name myredis redis redis-server /usr/local/etc/redis/redis.conf 

The Dockerfile gives you more granular control over your image’s construction. Alternatively, the CLI option lets you run your Redis container without a Dockerfile. This may be more approachable if your needs are more basic. Just ensure that your mapped directory is writable and exists locally. 

Also, consider the following: 

  • If you edit your Redis configurations on the fly, you’ll have to use CONFIG REWRITE to automatically identify and apply any field changes on the next run.
  • You can also apply configuration changes manually.

Remember how we connected the Redis CLI earlier? You can now pass arguments directly through the Redis CLI (ideal for testing) and edit configs while your database server is running. 

Notes on using Redis modules

Redis modules let you extend your Redis service, build new services, and adapt your database without taking a performance hit. Redis also processes them in memory. These standard modules support querying, search, JSON processing, filtering, and more. As a result, Docker Hub’s redislabs/redismod image bundles seven of these official modules together: 

  1. RedisBloom
  2. RedisTimeSeries
  3. RedisJSON
  4. RedisAI
  5. RedisGraph
  6. RedisGears
  7. RediSearch

If you’d like to spin up this container and experiment, simply enter docker run -d -p 6379:6379 redislabs/redismod in your terminal. You can open Docker Desktop to view this container like we did earlier on. 

You can view Redis’ curated modules or visit the Redis Modules Hub to explore further.

Get up and running with Redis today

We’ve explored how to successfully Dockerize Redis. Going further, it’s easy to grab external configurations and change how Redis operates on the fly. This makes it much easier to control how Redis interacts with your application. Head on over to Docker Hub and pull your first Redis Docker Official Image to start experimenting. 

The Redis Stack also helps extend Redis within Docker. It adds modern, developer-friendly data models and processing engines. The Stack also grants easy access to full-text search, document store, graphs, time series, and probabilistic data structures. Redis has published related container images through the Docker Verified Publisher (DVP) program. Check them out!

Announcing Docker SBOM: A step towards more visibility into Docker images
https://www.docker.com/blog/announcing-docker-sbom-a-step-towards-more-visibility-into-docker-images/
Thu, 07 Apr 2022

Today, Docker takes its first step in making what is inside your container images more visible so that you can better secure your software supply chain. Included in Docker Desktop 4.7.0 is a new, experimental docker sbom CLI command that displays the SBOM (Software Bill Of Materials) of any Docker image. It will also be included in our Linux packages in an upcoming release. The functionality was developed as an open source collaboration with Anchore using their Syft project.

As I wrote in my blog post last week, at Docker our priorities are performance, trust and great experiences. This work is focused on improving trust in the supply chain by making it easier to see what is in images and providing SBOMs to consumers of software, and improving the developer experience by making container images more transparent, so you can easily see what is inside of them. This command is just a first step that Docker is taking to make container images more self descriptive. We believe that the best time to determine and record what is in a container image is when you are putting the image together with docker build. To enable this, we are working on making it easy for partners and the community to add SBOM functionality to docker build using BuildKit’s extensibility.

As this information is generated at build time, we believe that it should be included as part of the image artifact. This means that if you move images between registries (or even into air gapped environments), you should still be able to read the SBOM and other image build metadata off of the image.

We’re looking to collaborate with partners and those in the community on our SBOM work in BuildKit. Take a look at our PoC and leave feedback here.

What is an SBOM?

A Software Bill Of Materials (SBOM) is analogous to a packing list for a shipment; it’s all the components that make up the software, or were used to build it. For container images, this includes the operating system packages that are installed (e.g.: ca-certificates) along with language specific packages that the software depends on (e.g.: log4j). The SBOM could include only some of this information or even more details, like the versions of components and where they came from.

SBOMs are sometimes required by governments or other software consumers who are trying to improve their supply chain security. This is because knowing what is inside your software gives you confidence that it is safe to use and can be useful in understanding impact when a vulnerability is made public.

Using the container image SBOM to check for a vulnerability

Let’s take a quick look at what the docker sbom command can do to help when a vulnerability like log4shell is made public. When a vulnerability like this appears, it’s crucial that you can quickly determine if your software is impacted. We’ll use the neo4j:4.4.5 Docker Official Image. Just running docker sbom neo4j:4.4.5 outputs a tabulated form of the SBOM:

$ docker sbom neo4j:4.4.5

Syft v0.42.2
 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged packages      [385 packages]

NAME                      VERSION                        TYPE
...
bsdutils                  1:2.36.1-8+deb11u1             deb
ca-certificates           20210119                       deb
...
log4j-api                 2.17.1                         java-archive
log4j-core                2.17.1                         java-archive
...

Note that the output includes not only the Debian packages that have been installed inside the image but also the Java libraries used by the application. Getting this information reliably and with minimal effort allows you to promptly respond and reduce the chance that you will be breached. In the above example, we can see that Neo4j uses version 2.17.1 of the log4j-core library which means that it is not affected by log4shell.

Without docker sbom or another SBOM scanning tool, you would need to check your application’s source code to see which version of log4j-core you are using. When you have several applications or services deployed and multiple versions of them, this can be difficult.

In addition to outputting the SBOM in a table, the docker sbom command has options for outputting the SBOM in the standard SPDX and CycloneDX formats, along with the GitHub and native Syft formats.
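
For example, here’s a hedged invocation that writes an SPDX JSON document to a file; the accepted format names may vary between plugin versions:

docker sbom --format spdx-json neo4j:4.4.5 > neo4j-4.4.5.spdx.json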

We are sharing the docker sbom functionality early, as an experimental command, with the intention of getting feedback from the community on the direction that we’re going. We’d like to know about your use cases and any other feedback that you have. You can leave it on the command’s repo.

What’s next?

We’d love to collaborate with partners and the community on bringing SBOMs to all container images through BuildKit so please hack on our example and leave feedback on our RFC. Please also give the experimental docker sbom command a try and leave us any feedback that you have. You can also read more about the docker sbom collaboration with Anchore on their blog.

DockerCon 2022

Join us for DockerCon 2022 on Tuesday, May 10. DockerCon is a free, one-day virtual event that offers a unique experience for developers and development teams building the next generation of modern applications. If you want to learn how to go from code to cloud fast and how to solve your development challenges, DockerCon 2022 offers engaging live content to help you build, share, and run your applications. Register today at https://www.docker.com/dockercon/

]]>
How to Build and Test Your Docker Images in the Cloud with Docker Hub https://www.docker.com/blog/how-to-build-and-test-your-docker-images-in-the-cloud-with-docker-hub/ Tue, 05 May 2020 15:50:08 +0000 https://www.docker.com/blog/?p=26222 Part 2 in the series on Using Docker Desktop and Docker Hub Together

Introduction

In part 1 of this series, we took a look at installing Docker Desktop, building images, configuring our builds to use build arguments, running our application in containers, and finally, we took a look at how Docker Compose helps in this process. 

In this article, we’ll walk through deploying our code to the cloud, how to use Docker Hub to build our images when we push to GitHub and how to use Docker Hub to automate running tests.

Docker Hub

Docker Hub is the easiest way to create, manage, and ship your team’s images to your cloud environments, whether on-premises or in a public cloud.


The first thing you will want to do is create a Docker ID, if you do not already have one, and log in to Hub.

Creating Repositories

Once you’re logged in, let’s create a couple of repositories to push our images to.

Click on “Repositories” in the main navigation bar and then click the “Create Repository” button at the top of the screen.


You should now see the “Create Repository” screen.

You can create repositories for your account or for an organization. Choose your Docker ID from the dropdown. This will create the repository for your Docker ID.

Now let’s give our repository a name and description. Type projectz-ui in the name field and a short description such as: This is our super awesome UI for the Projectz application. 

We also have the ability to make the repository Public or Private. Let’s keep the repository Public for now.

You can also connect the repository to a source control system, either GitHub or Bitbucket, but we’ll be doing that later in the article. So, for now, do not connect to a source control system.

Go ahead and click the “Create” button to create a new repository.

Your repository will be created and you will be taken to the General tab of your new repository.


This is the repository screen where we can manage tags, builds, collaborators, webhooks, and visibility settings.

Click on the Tags tab. As expected, we do not have any tags at this time because we have not pushed an image to our repository yet.

We also need a repository for the services application. Follow the previous steps and create a new repository for the projectz-services application. Use the following settings to do so:

Repository name: projectz-services

Description: This is our super awesome services for the Projectz application

Visibility: Public

Build Settings: None

Excellent. We now have two Docker Hub repositories set up.

Structure Project

For simplicity, in part 1 of this series we only had one git repository. For this article, I refactored the project and split it into two git repositories to better align with today’s microservices world.

Pushing Images

Now let’s build our images and push them to the repos we created above.

Fork Repos

Open your favorite browser and navigate to the pmckeetx/projectz-ui repository.

Create a copy of the repo in your GitHub account by clicking the “Fork” button in the top right corner.

Repeat the processes for the pmckeetx/projectz-svc repository.

Clone the repositories

Open a terminal on your local development machine and navigate to wherever you work on your source code. Let’s create a directory where we will clone our repos and do all our work in.

$ cd ~/projects
$ mkdir projectz

Now let’s clone the two repositories you just forked above. Back in your browser click the green “Clone or download” button and copy the URL. Use these URLs to clone the repo to your local machine.

$ git clone https://github.com/[github-id]/projectz-ui.git ui
$ git clone https://github.com/[github-id]/projectz-svc.git services

(Remember to substitute your GitHub ID for [github-id] in the above commands)

If you have SSH keys set up for your github account, you can use the SSH URLs instead.

List local images

Let’s take a look at the list of Docker images we have locally on our machine. Run the following command to see a list of images.

$ docker images


You can see that I have the nginx, projectz-svc, projectz-ui, and node images on my machine. If you do not see the above images, that’s okay, we are going to recreate them now.

Remove local images

Let’s first remove projectz-svc and projectz-ui images. We’ll use the remove image (rmi) command. You can skip this step if you do not have the projectz-svc and projectz-ui on your local machine.

$ docker rmi projectz-svc projectz-ui


If you get the following or similar error: Error response from daemon: conflict: unable to remove repository reference "projectz-svc" (must force) - container 6b1b99cc899c is using its referenced image 6b9eadff19ae

This means that the image you are trying to remove is being used by a container and cannot be removed. You need to stop and remove (rm) the container before you can remove the image. To do so, run the following commands.

First, find the running container:

$ docker ps -a

Here we can see that the container named services is using the image projectz-svc which we are trying to remove. 

Let’s stop and remove this container. We can do this at the same time by using the --force option to the rm command. 

If we tried to remove the container by using docker rm services without first stopping it, we would get the following error: Error response from daemon: You cannot remove a running container 6b1b99cc899c. Stop the container before attempting removal or force remove

So we’ll use the --force option to tell Docker to send a SIGKILL to the container and then remove it.

$ docker rm --force services

Do the same for the UI container, if it is still running.

Now that we stopped and removed the containers, we can now remove the images.

$ docker rmi projectz-svc projectz-ui

Let’s list our images again.

$ docker images


Now you should see that the projectz-ui and projectz-services images are gone.

Building images

Let’s build our images for the UI and Services projects now. Run the following commands:

$ cd [working dir]/projectz/services
$ docker build --tag projectz-svc .
$ cd ../ui
$ docker build --tag projectz-ui .

If you would like a more in-depth discussion around building images and Dockerfiles, refer back to part 1 of this series.

Pushing images

Okay, now that we have our images built, let’s take a look at pushing them to Docker Hub.

Tagging images

If you look back at the beginning of the post where we set up our Docker Hub repositories, you’ll see that we created the repositories in our Docker ID namespace. Before we can push our images to Hub, we’ll need to tag them using this namespace.

Open your favorite browser and navigate to Docker Hub and let’s review real quick.

Login to Hub, if you’ve not already done so, and take a look at the dashboard. You should see a list of images. Choose your Docker ID from the dropdown to only show images associated with your Docker ID.

Click on the row for the projectz-ui repository. 

Towards the top right of the window, you should see a docker command highlighted in grey.

This is the Docker Push command followed by the image name. You’ll see that this command uses your Docker ID followed by a slash followed by the image name and tag, separated by a colon. You can read more about pushing to repositories and tagging images in our documentation.

Let’s tag our local images to match the Docker Hub Repository. Run the following commands anywhere in your terminal.

$ docker tag projectz-ui [dockerid]/projectz-ui:latest
$ docker tag projectz-svc [dockerid]/projectz-svc:latest

(Remember to substitute your Docker ID for [dockerid] in the above commands)

Now list your local images and see the newly tagged images.

$ docker images


Pushing

Okay, now that we have our images tagged correctly, let’s push our images to Hub.

The first thing we need to do is make sure we are logged into Docker Hub on the terminal. Although the repositories we created earlier are public, only the owner of the repository can push to them by default. If you would like to allow folks on your team to push images and manage repositories, take a look at Organizations and Teams in Hub.

$ docker login
Login with your Docker ID to push and pull images from Docker Hub...
Username:

Enter your username (Docker ID) and password.

Now we can push our images.

$ docker push [dockerid]/projectz-ui:latest
$ docker push [dockerid]/projectz-svc:latest

Open your favorite browser and navigate to Docker Hub, select one of the repositories we created earlier and then click the “Tags” tab. You will now see the images and tag we just pushed.

Automatically Build and Test Images

That was pretty straightforward but we had to run a lot of manual commands. What if we wanted to build an image, run tests and publish to a repository so we could deploy our latest changes?

We might be tempted to write a shell script and have everybody on the team run it after they completed a feature. But this wouldn’t be very efficient. 

What we want is a continuous integration (CI) pipeline. Docker Hub provides these features using AutoBuilds and AutoTests.

Connecting Source Control

Docker Hub can be connected to GitHub and Bitbucket to listen to push notifications so it can trigger AutoBuilds.

I’ve already connected my Hub account to my GitHub account. To connect your own Hub account to your version control system follow these simple steps in our documentation.

Setup AutoBuilds

Let’s set up AutoBuilds for our two repositories. The steps are the same for both repositories so I’ll only walk you through one of them.

Open Hub in your browser, and navigate to the detail page for the projectz-ui repository.

Click on the “Builds” tab and then click the “Link to GitHub” button in the middle of the page.


You will now see the Build Configuration screen. Select your organization and repository from the dropdowns. Once you select a repository, the screen will expand with more options.


Leave the AUTOTEST setting set to Off and REPOSITORY LINKS set to Off as well.

The next thing we can configure is Build Rules. Docker Hub automatically configures the first BUILD RULE using the master branch of our repo. But we can configure more.

There are several options we can set for each build rule:

  • Source Type: either a Branch or a Tag.
  • Source: the branch or tag name you want to watch. You can enter a string literal or a RegExp that will be used for matching.
  • Docker Tag: the tag that will be applied when the image is built and tagged.
  • Dockerfile and Build Context: which Dockerfile to use and where the build context is located.
  • A toggle that turns the build rule on or off.
  • An option to use the Build Cache.

Save and Build

We’ll leave the default Build Rule that Hub added for us. Click the “Save and Build” button.

Our Build options will be saved and an AutoBuild will be kicked off. You can watch this build run on the “Builds” tab of your image page.

To view the build logs, click on the build that is in progress and you will be taken to the build details page where you can view the logs.


Once the build is complete, you can view the newly created image by clicking on the “Tags” tab. There you will see that our image was built and tagged with “latest”.

Follow the same steps to set up the projectz-svc repository. 

Trigger a build from Git Push

Now that we see that our image is being built, let’s make a change to our project and trigger a new build with a git push.

Open the projectz-svc/src/routes.js file in your favorite editor and add the following code snippet anywhere before the module.exports = appRouter line at the bottom of the file.

...
 
appRouter.get( '/services/hello', function( req, res ) {
 res.json({ code: 'success', payload: 'World' })
})
 
...
 
module.exports = appRouter

Save the file and commit the changes locally.

$ git commit -am "add hello - world route"

Now, if we push the changes to GitHub, GitHub will trigger a webhook to Docker Hub which will in turn trigger a new build of our image. Let’s do that now.

$ git push

Navigate over to Hub in your browser and scroll down. You should see that a build was just triggered.

After the build finishes, navigate to the “Tags” tab and see that the image was updated.


Setup AutoTests

Excellent! We now have both our images building when we push to our source control repo. But this is just step one in our CI process. We should only push new images to the repository if all tests pass.

Docker Hub will automatically run tests if you have a docker-compose.test.yml file that defines a sut service. Let’s create this now and run our tests.

Open the projectz-svc project in your editor and create a new file named docker-compose.test.yml, then add the following YAML.

version: "3.6"

services:
  sut:
    build:
      context: .
      args:
        NODE_ENV: test
    ports:
      - "8080:80"
    command: npm run test
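
Before pushing, you can run the same test service locally to make sure it behaves as expected. This is a sketch that assumes you are in the projectz-svc directory and have Docker Compose installed:

$ docker-compose -f docker-compose.test.yml up --build sut

The sut service should build the image with NODE_ENV=test, run npm run test, and print the test output.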

Commit the changes and push to GitHub.

$ git add docker-compose.test.yml
$ git commit -m "add docker-compose.test.yml for hub autotests"
$ git push origin master

Now navigate back to Hub and the projectz-svc repo. Once the build finishes, click on the build link and scroll to the bottom of the build logs. There you can see that the tests were run and the image was pushed to the repo.

If the build fails, you will see that the status turns to FAILURE and you will be able to see the error in the build logs.

Conclusion

In part 2 of this series, we showed you how Docker Hub is one of the easiest ways to automatically build your images and run tests without having to use a separate CI system. If you’d like to go further, you can take a look at:

]]>
January Virtual Meetup Recap: Improve Image Builds Using the Features in BuildKit https://www.docker.com/blog/january-virtual-meetup-recap/ Tue, 28 Jan 2020 18:07:52 +0000 https://www.docker.com/blog/?p=25407 This is a guest post by Docker Captain Nicholas Dille, a blogger, speaker and author with 15 years of experience in virtualization and automation. He works as a DevOps Engineer at Haufe Group, a digital media company located in Freiburg, Germany. He is also a Microsoft Most Valuable Professional.

In this virtual meetup, I share how to improve image builds using the features in BuildKit. BuildKit is an alternative builder with great features like caching, concurrency and the ability to separate your image build into multiple stages – which is useful for separating the build environment from the runtime environment. 

The default builder in Docker is the legacy builder. This is recommended for use when you need support for Windows. However, in nearly every other case, using BuildKit is recommended because of the fast build time, ability to use custom BuildKit front-ends, building stages in parallel and other features.

Catch the full replay below and view the slides to learn:

  • Build cache in BuildKit – instead of relying on a locally present image, BuildKit will pull the appropriate layers of the previous image from a registry.
  • How BuildKit helps prevent disclosure of credentials by allowing files to be mounted into the build process. They are kept in memory and are not written to the image layers.
  • How BuildKit supports access to remote systems through SSH by mounting the SSH agent socket into the build without adding the SSH private key to the image (a minimal sketch of both mounts follows this list).
  • How to use the CLI plugin buildx to cross-build images for different platforms.
  • How, using the new docker context command, the CLI can manage connections to multiple instances of the Docker Engine. Note that it supports SSH remoting to the Docker Engine.
  • And finally, a tip that extends beyond image builds: When troubleshooting a running container, a debugging container can be started sharing the network and PID namespace. This allows debugging without changing the misbehaving container.
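
Here is a minimal sketch of the secret and SSH mounts mentioned above. The secret id, file names, and image tag are illustrative; the SSH mount assumes a running ssh-agent on the host, and older Docker versions may need the # syntax=docker/dockerfile:experimental frontend instead:

# syntax=docker/dockerfile:1
FROM alpine
# Secret mount: the file is only visible during this RUN step at
# /run/secrets/<id> and is never written to an image layer.
RUN --mount=type=secret,id=mytoken cat /run/secrets/mytoken
# SSH mount: forwards the host's SSH agent socket (exposed via
# SSH_AUTH_SOCK) so the step can reach private repositories
# without copying a private key into the image.
RUN --mount=type=ssh echo "agent socket: $SSH_AUTH_SOCK"

Build it with BuildKit enabled, passing in the secret file and the default SSH agent:

$ DOCKER_BUILDKIT=1 docker build --secret id=mytoken,src=./mytoken.txt --ssh default .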

I also covered a few tools that I use in my workflow, namely:

  • goss, which allows images to be tested to match a configuration expressed in YAML. It comes with a nice wrapper called `dgoss` to use it with Docker easily. And it even provides a health endpoint to integrate into your image
  • trivy, an open source tool from Aqua Security that scans images for known vulnerabilities in OS packages as well as packages from well-known package managers.

And finally, answered some of your questions:

Why not use BuildKit by default? 

If your workflow involves building images often, then we recommend that you do set BuildKit as the default builder. Here is how to enable BuildKit by default in the docker daemon config. 
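
For reference, enabling BuildKit by default is a small change to the daemon configuration (on Linux this file usually lives at /etc/docker/daemon.json; restart the daemon after editing it):

{
  "features": {
    "buildkit": true
  }
}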

Does docker-compose work with BuildKit? 

Support for BuildKit was added in docker-compose 1.25.0 which can be enabled by setting DOCKER_BUILDKIT=1 and COMPOSE_DOCKER_CLI_BUILD=1.
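
In practice that means prefixing your build invocation with both variables, for example:

$ DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build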

What are the benefits of using BuildKit? 

In addition to the features presented, BuildKit also improves build performance in many cases.

When would I use BuildKit Secrets? (A special thank you to Captain Brandon Mitchell for answering this question)

BuildKit secrets are a good way to use a secret at build time without saving the secret in the image. Think of it as pulling a private git repo without saving your SSH key to the image. For runtime secrets, you often end up with different compose files to support Compose vs. swarm mode, each mounting the secret a different way, i.e. a volume vs. a swarm secret.

How do I enable BuildKit for Jenkins Docker build plugin? 

The only reference to BuildKit I was able to find refers to adding support in the Docker Pipeline plugin.

Does BuildKit share the build cache with the legacy builder? 

No, the caches are separate.

What are your thoughts on having the testing step as a stage in a multi-stage build? 

The test step can be a separate stage in the build. If the test step requires a special tool to be installed, it can be a second final stage. If your multi-stage build increases in complexity, take a look at CI/CD tools.

How does pulling the previous image save time over just doing the build?

The download can be significantly faster than redoing all the work.

Is the created image still “identical” or is there any real difference in the final image artifact? 

The legacy builder, as well as BuildKit, produces identical (or rather equivalent) images.

Will Docker inspect show that the image was built using BuildKit? 

No.

Do you know of any good approach for debugging with Docker images/containers (I use the following technologies: Python, Django, and PyCharm)?

No. Anyone have any advice here? 

Is Docker BuildKit supported with maven Dockerfile plugin? 

If the question is referring to Spotify’s Dockerfile Maven plugin (which is unmaintained), the answer is no. Other plugins may be able to use BuildKit when providing the environment variable DOCKER_BUILDKIT=1. Instead of changing the way the client works, you could configure the daemon to use BuildKit by default (see first question above).

What do you think about CRI-O? 

I think that containerd has gained more visibility and has been adopted by many cloud providers as the runtime in Kubernetes offerings. But I have no experience myself with CRI-O.

To be notified of upcoming meetups, join the Docker Virtual Meetup Group using your Docker ID or on Meetup.com.

]]>
Docker + Golang = https://www.docker.com/blog/docker-golang/ Wed, 14 Sep 2016 15:00:00 +0000 https://blog.docker.com/?p=14840 This is a short collection of tips and tricks showing how Docker can be useful when working with Go code. For instance, I’ll show you how to compile Go code with different versions of the Go toolchain, how to cross-compile to a different platform (and test the result!), or how to produce really small container images.

The following article assumes that you have Docker installed on your system. It doesn’t have to be a recent version (we’re not going to use any fancy feature here).

Go without go

… And by that, we mean “Go without installing go”.

If you write Go code, or if you have even the slightest interest in the Go language, you certainly have the Go compiler and toolchain installed, so you might be wondering “what’s the point?”; but there are a few scenarios where you want to compile Go without installing Go.

  • You still have this old Go 1.2 on your machine (that you can’t or won’t upgrade), and you have to work on this codebase that requires a newer version of the toolchain.
  • You want to play with cross compilation features of Go 1.5 (for instance, to make sure that you can create OS X binaries from a Linux system).
  • You want to have multiple versions of Go side-by-side, but don’t want to completely litter your system.
  • You want to be 100% sure that your project and all its dependencies download, build, and run fine on a clean system.

If any of this is relevant to you, then let’s call Docker to the rescue!

Compiling a program in a container

When you have installed Go, you can do go get -v github.com/user/repo to download, build, and install a library. (The -v flag is just here for verbosity, you can remove it if you prefer your toolchain to be swift and silent!)

You can also do go get github.com/user/repo/... (yes, that’s three dots) to download, build, and install all the things in that repo (including libraries and binaries).

We can do that in a container!

Try this:

docker run golang go get -v github.com/golang/example/hello/...

This will pull the golang image (unless you have it already; then it will start right away), and create a container based on that image. In that container, go will download a little “hello world” example, build it, and install it. But it will install it in the container … So how do we run that program now?

Running our program in a container

One solution is to commit the container that we just built, i.e. “freeze” it into a new image:

docker commit $(docker ps -lq) awesomeness

Note: docker ps -lq outputs the ID (and only the ID!) of the last container that was executed. If you are the only user on your machine, and if you haven’t created another container since the previous command, that container should be the one in which we just built the “hello world” example.

Now, we can run our program in a container based on the image that we just built:

docker run awesomeness hello

The output should be Hello, Go examples!.

Bonus points

When creating the image with docker commit, you can use the --change flag to specify arbitrary Dockerfile commands. For instance, you could use a CMD or ENTRYPOINT command so that docker run awesomeness automatically executes hello.
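
A minimal sketch of that, reusing the container we just committed from (the path assumes go get installed the binary into $GOPATH/bin, which is /go/bin in the golang image):

docker commit --change 'ENTRYPOINT ["/go/bin/hello"]' $(docker ps -lq) awesomeness
docker run awesomeness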

Running in a throwaway container

What if we don’t want to create an extra image just to run this Go program?

We got you covered:

docker run --rm golang sh -c \
"go get github.com/golang/example/hello/... && exec hello"

Wait a minute, what are all those bells and whistles?

  • --rm tells the Docker CLI to automatically issue a docker rm command once the container exits. That way, we don’t leave anything behind.
  • We chain together the build step (go get) and the execution step (exec hello) using the shell logical operator &&. If you’re not a shell aficionado, && means “and”. It will run the first part (go get ...), and if (and only if!) that part is successful, it will run the second part (exec hello). If you wonder why it works like that: it behaves like a lazy AND evaluator, which needs to evaluate the right-hand side only if the left-hand side evaluates to true.
  • We pass our commands to sh -c, because if we were to simply do docker run golang "go get ... && hello", Docker would try to execute the program named go SPACE get SPACE etc. and that wouldn’t work. So instead, we start a shell and instruct the shell to execute the command sequence.
  • We use exec hello instead of hello: this will replace the current process (the shell that we started) with the hello program. This ensures that hello will be PID 1 in the container, instead of having the shell as PID 1 and hello as a child process. This is totally useless for this tiny example, but when we will run more useful programs, this will allow them to receive external signals properly, since external signals are delivered to PID 1 of the container. What kind of signal, you might be wondering? A good example is docker stop, which sends SIGTERM to PID 1 in the container.

Using a different version of Go

When you use the golang image, Docker expands that to golang:latest, which (as you might guess) will map to the latest version available on the Docker Hub.

If you want to use a specific version of Go, that’s very easy: specify that version as a tag after the image name.

For instance, to use Go 1.5, change the example above to replace golang with golang:1.5:

docker run --rm golang:1.5 sh -c \
 "go get github.com/golang/example/hello/... && exec hello"

You can see all the versions (and variants) available on the Golang image page on the Docker Hub.

Installing on our system

OK, so what if we want to run the compiled program on our system, instead of in a container?

We could copy the compiled binary out of the container. Note, however, that this will work only if our container architecture matches our host architecture; in other words, if we run Docker on Linux. (I’m leaving out people who might be running Windows Containers!)

The easiest way to get the binary out of the container is to map the $GOPATH/bin directory to a local directory. In the golang container, $GOPATH is /go. So we can do the following:

docker run -v /tmp/bin:/go/bin \
 golang go get github.com/golang/example/hello/...
/tmp/bin/hello

If you are on Linux, you should see the Hello, Go examples! message. But if you are, for instance, on a Mac, you will probably see:

-bash: /tmp/bin/hello: cannot execute binary file

What can we do about it?

Cross-compilation

Go 1.5 comes with outstanding out-of-the-box cross-compilation abilities, so if your container operating system and/or architecture doesn’t match your system’s, it’s no problem at all!

To enable cross-compilation, you need to set GOOS and/or GOARCH.

For instance, assuming that you are on a 64-bit Mac:

docker run -e GOOS=darwin -e GOARCH=amd64 -v /tmp/crosstest:/go/bin \
 golang go get github.com/golang/example/hello/...

The output of cross-compilation is not directly in $GOPATH/bin, but in $GOPATH/bin/$GOOS_$GOARCH. In other words, to run the program, you have to execute /tmp/crosstest/darwin_amd64/hello.
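
If you want to sanity-check the result from the host before copying it anywhere, the file utility (if you have it installed) will report the target platform of the binary; for the example above it should mention a Mach-O 64-bit executable:

file /tmp/crosstest/darwin_amd64/hello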

Installing straight to the $PATH

If you are on Linux, you can even install directly to your system bin directories:

docker run -v /usr/local/bin:/go/bin \
 golang go get github.com/golang/example/hello/...

However, on a Mac, trying to use /usr as a volume will not mount your Mac’s filesystem to the container. It will mount the /usr directory of the Moby VM (the small Linux VM hidden behind the Docker whale icon in your toolbar).

You can, however, use /tmp or something in your home directory, and then copy it from there.

Building lean images

The Go binaries that we produced with this technique are statically linked. This means that they embed all the code that they need to run, including all dependencies. This contrasts with dynamically linked programs, which don’t contain some basic libraries (like the “libc”) and use a system-wide copy that is resolved at run time.

This means that we can drop our Go compiled program in a container, without anything else, and it should work.
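
If you are curious, you can verify this on a Linux host with ldd (or file); for a statically linked binary, ldd should report “not a dynamic executable”. Using the binary from the earlier /tmp/bin example:

ldd /tmp/bin/hello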

Let’s try this!

The scratch image

There is a special image in the Docker ecosystem: scratch. This is an empty image. It doesn’t need to be created or downloaded, since by definition, it is empty.

Let’s create a new, empty directory for our new Go lean image.

In this new directory, create the following Dockerfile:

FROM scratch
 COPY ./hello /hello
 ENTRYPOINT ["/hello"]

This means:

  • start from scratch (an empty image),
  • add the hello file to the root of the image,
  • define this hello program to be the default thing to execute when starting this container.

Then, produce our hello binary as follows:

docker run -v $(pwd):/go/bin --rm \
 golang go get github.com/golang/example/hello/...

Note: we don’t need to set GOOS and GOARCH here, because precisely, we want a binary that will run in a Docker container, not on our host system. So leave those variables alone!

Then, we can build the image:

docker build -t hello .

And test it:

docker run hello

(This should display Hello, Go examples!.)

Last but not least, check the image’s size:

docker images hello

If we did everything right, this image should be about 2 MB. Not bad!

Building something without pushing to GitHub

Of course, if we had to push to GitHub each time we wanted to compile, we would waste a lot of time.

When you want to work on a piece of code and build it within a container, you can mount a local directory to /go in the golang container, so that the $GOPATH is persisted across invocations: docker run -v $HOME/go:/go golang ....

But you can also mount local directories to specific paths, to “override” some packages (the ones that you have edited locally). Here is a complete example:

# Adapt the two following environment variables if you are not running on a Mac
 export GOOS=darwin GOARCH=amd64
 mkdir go-and-docker-is-love
 cd go-and-docker-is-love
 git clone git://github.com/golang/example
 cat example/hello/hello.go
 sed -i .bak s/olleH/eyB/ example/hello/hello.go
 docker run --rm \
 -v $(pwd)/example:/go/src/github.com/golang/example \
 -v $(pwd):/go/bin/${GOOS}_${GOARCH} \
 -e GOOS -e GOARCH \
 golang go get github.com/golang/example/hello/...
 ./hello
 # Should display "Bye, Go examples!"

 

The special case of the net package and CGo

Before diving into real-world Go code, we have to confess something: we lied a little bit about the static binaries. If you are using CGo, or if you are using the net package, the Go linker will generate a dynamically linked binary. In the case of the net package (which a lot of useful Go programs out there do use!), the main culprit is the DNS resolver. Most systems out there have a fancy, modular name resolution system (like the Name Service Switch) which relies on plugins that are, technically, dynamic libraries. By default, Go will try to use that; and to do so, it will produce a dynamically linked binary.

How do we work around that?

Re-using another distro’s libc

One solution is to use a base image that has the essential libraries needed by those Go programs to function. Almost any “regular” Linux distro based on the GNU libc will do the trick. So instead of FROM scratch, you would use FROM debian or FROM fedora, for instance. The resulting image will be much bigger now; but at least, the bigger bits will be shared with other images on your system.

Note: you cannot use Alpine in that case, since Alpine is using the musl library instead of the GNU libc.

Bring your own libc

Another solution is to surgically extract the files needed, and place them in your container with COPY. The resulting container will be small. However, this extraction process leaves the author with the uneasy impression of a really dirty job, and they would rather not go into more details.

If you want to see for yourself, look around ldd and the Name Service Switch plugins mentioned earlier.

Producing static binaries with netgo

We can also instruct Go not to use the system’s libc and to substitute the pure Go “netgo” implementation, which comes with a native DNS resolver.

To use it, just add -tags netgo -installsuffix netgo to the go get options (a full command follows the list below).

  • -tags netgo instructs the toolchain to use netgo.
  • -installsuffix netgo will make sure that the resulting libraries (if any) are placed in a different, non-default directory. This will avoid conflicts between code built with and without netgo, if you do multiple go get (or go build) invocations. If you build in containers like we have shown so far, this is not strictly necessary, since there will be no other Go code compiled in this container, ever; but it’s a good idea to get used to it, or at least know that this flag exists.
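
Putting it together with the earlier throwaway-container example, the invocation looks something like this (just a sketch for illustration; the hello example itself does not actually need netgo):

docker run --rm golang sh -c \
 "go get -tags netgo -installsuffix netgo github.com/golang/example/hello/... && exec hello"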

The special case of SSL certificates

There is one more thing that you have to worry about if your code has to validate SSL certificates; for instance if it will connect to external APIs over HTTPS. In that case, you need to put the root certificates in your container too, because Go won’t bundle those into your binary.

Installing the SSL certificates

Here again, there are multiple options available, but the easiest one is to use a package from an existing distribution.

Alpine is a good candidate here because it’s so tiny. The following Dockerfile will give you a base image that is small, but has an up-to-date bundle of root certificates:

FROM alpine:3.4
RUN apk add --no-cache ca-certificates apache2-utils

 

Check it out; the resulting image is only 6 MB!

Note: the --no-cache option tells apk (the Alpine package manager) to get the list of available packages from Alpine’s distribution mirrors, without saving it to disk. You might have seen Dockerfiles doing something like apt-get update && apt-get install ... && rm -rf /var/cache/apt/*; this achieves something equivalent (i.e. not leave package caches in the final image) with a single flag.

As an added bonus, putting your application in a container based on the Alpine image gives you access to a ton of really useful tools: now you can drop a shell into your container and poke around while it’s running, if you need to!

Wrapping it up

We saw how Docker can help us to compile Go code in a clean, isolated environment; how to use different versions of the Go toolchain; and how to cross-compile between different operating systems and platforms.

We also saw how Go can help us to build small, lean container images for Docker, and described a number of associated subtleties linked (no pun intended) to static libraries and network dependencies.

Beyond the fact that Go is a really good fit for a project like Docker, we hope we showed you how Go and Docker can benefit from each other and work really well together!

Acknowledgements

This was initially presented during the hack day at GopherCon 2016.

I would like to thank all the people who proofread this material and gave ideas and suggestions to make it better; including but not limited to:

All mistakes and typos are my own; all the good stuff is theirs! ☺




]]>