Build, Share, and Run WebAssembly Apps Using Docker

There's no doubt that WebAssembly (AKA Wasm) is having a moment on the development stage. And while it may seem like a flash in the pan to some, we believe Wasm has a key role in continued containerized development. Docker and Wasm can be complementary technologies.

In the past, we’ve explored how Docker could successfully run Wasm modules alongside Linux or Windows containers. Nearly five months later, we’ve taken another big step forward with the Docker+Wasm Technical Preview. Developers need exceptional performance, portability, and runtime isolation more than ever before. 

Chris Crone, a Director of Engineering at Docker, and Michael Yuan, founder and CEO of Second State, addressed these sticking points at the CNCF's Wasm Day 2022. Here's their full session, but feel free to stick around for our condensed breakdown:

You don’t need to learn new processes to develop successfully with Docker and Wasm. Popular Docker CLI commands can tackle this for you. Docker can even manage the WebAssembly runtime thanks to our collaboration with WasmEdge. We’ll dive into why we’re handling this new project and the technical mechanisms that make it possible. 

Why WebAssembly and Docker?

How workloads and code are isolated has a major impact on how quickly we can deliver software to users. Chris highlights this by explaining how developers value: 

  • Easy reuse of components and defined interfaces across projects that help build value quicker
  • Maximization of shared compute resources while maintaining safe, sturdy boundaries between workloads — lowering the cost of application delivery
  • Seamless application delivery to users, in seconds, through convenient packaging mechanisms like container images so users see value quicker

We know that workload isolation plays a role in these things, yet there are numerous ways to achieve it — like air gapping, hardware virtualization, stack virtualization (Wasm or JVM), containerization, and so on. Since each has unique advantages and disadvantages, choosing the best solution can be tricky. 

Finding the right tools can also be enormously difficult. The CNCF tooling landscape alone is saturated, and while we’re thankful these tools exist, the variety is overwhelming for many developers. 

Chris believes that specialized tooling can conquer the task at hand. It’s also our responsibility at Docker to guide these tooling decisions. This builds upon our continued mission to help developers build, share, and run their applications as quickly as possible.

That’s where WasmEdge — and Michael Yuan — come in. 

Exciting opportunities with Docker and WasmEdge

Michael showed there’s some overlap between container and WebAssembly use cases. For example, developers from both camps might want to ship microservice applications. Wasm can enable quicker startup times and code-level security, which are beneficial in many cases.

However, WebAssembly doesn't fit every use case due to threading, garbage collection, and binary packaging limitations. Running applications with Wasm also currently requires extra tooling.

WasmEdge in action: TensorFlow inference

Michael then kicked off a TensorFlow ML application demo to show what WasmEdge can do. This application wouldn’t work with other WASI-compatible runtimes:

Code snippet showing TensorFlow ML application demo with WasmEdge.

A few things made this demo possible:

  • Rust: a safe and fast programming language with first-class support for the Wasm compiling target.
  • Tokio: a popular asynchronous runtime that can handle multiple, parallel HTTP requests without multithreading.
  • WasmEdge's TensorFlow plug-in: compatible with the WASI-NN spec. Besides TensorFlow, PyTorch and OpenVINO are also supported in WasmEdge. 

Note: Tokio and TensorFlow support are WasmEdge features that aren’t available on other WASI-compliant runtimes.

With Rust’s cargo build command, we can compile the program into a Wasm module using the wasm32-wasi target platform. The WasmEdge runtime can execute the resulting .wasm file. Once the application is running, we can perform HTTP queries to run some pretty cool image recognition tasks. 
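
For reference, here is roughly how those steps look from the command line. This is a minimal sketch; the crate name image-demo and the wasmedge invocation are illustrative rather than the exact demo setup:

# Add the WASI compilation target and build the Rust project as a Wasm module
rustup target add wasm32-wasi
cargo build --target wasm32-wasi --release

# Execute the resulting .wasm file with the WasmEdge runtime
wasmedge target/wasm32-wasi/release/image-demo.wasm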

This example highlights the appeal of WasmEdge as a WASI-compatible runtime. According to its maintainers, “WasmEdge is a lightweight, high-performance, and extensible WebAssembly runtime for cloud native, edge, and decentralized applications. It powers serverless apps, embedded functions, microservices, smart contracts, and IoT devices.” 

Making Wasm accessible with Docker

Docker has two magical features. First, Docker and containers work on any machine and anywhere in production. Second, Docker makes it easy to build, share, and reuse components from any project. Container images and other OCI artifacts are easy to consume (and share). Isolation is baked in. Millions of developers are also very familiar with many Docker workflows like docker compose up.

Chris described how standardization and open ecosystems make Docker and container tooling available everywhere. The OCI specifications are crucial here and let us form new solutions that’ll work for nearly anyone and any supported technology (like Wasm). 

On the other hand, setting up cross-platform Wasm developer environments is tricky. You also have to learn new tools and workflows — hampering productivity while creating frustration. We believe we can help developers overcome these challenges, and we’re excited to leverage our own platform to make Wasm more accessible. 

Demoing Docker+WasmEdge

How does Wasm support look in practice? Chris fired up a demo using a preview of Docker Desktop, complete with WASI support. He created a Docker Compose file with three services: 

A Docker Compose file defining JavaScript (NGINX), Rust, and MariaDB services.

That Rust server runs as a Wasm module, while the NGINX and MariaDB servers run in Linux containers. Chris built this Rust server using a Dockerfile that compiled from his local platform to a wasm32-wasi target. He also ran WasmEdge's own ahead-of-time (AOT) compiler to optimize the built Wasm module. However, this step is optional, and AOT-optimized modules require the WasmEdge runtime.
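
As a rough sketch, a Compose file for this kind of mixed setup might look like the following. The service names and images are assumptions, and the platform and runtime values reflect the Docker+Wasm technical preview rather than Chris' exact file:

services:
  server:
    image: demo/rust-wasm-server             # Rust HTTP server built as a Wasm module (illustrative name)
    platform: wasi/wasm32                    # marks this service as a Wasm workload
    runtime: io.containerd.wasmedge.v1       # containerd shim that runs the module with WasmEdge

  web:
    image: nginx:alpine                      # serves the JavaScript front end in a Linux container
    ports:
      - "8080:80"

  db:
    image: mariadb:10.9                      # standard MariaDB Linux container
    environment:
      - MARIADB_ROOT_PASSWORD=example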

We'll leave the nitty gritty to Chris (see 19:43 for the demo) for now. However, know that you can run a Compose build and come away with a wasi/wasm32 platform image. Running docker compose up launches your application, which you can then interact with through your web browser. This is one way to seamlessly run containers and Wasm side by side.

From the Docker CLI, you’ll see the Wasm microservice is less than 2MB. It contains a high-performance HTTP server and a MySQL database client. The NGINX and MariaDB servers are 10MB and 120MB, respectively. Alternatively, your Rust microservice would be tens of megabytes after building it into a Linux binary and running it in a Linux container. This underscores how lightweight Wasm images are.

Since the output is an OCI image, you can store or share it using an OCI-compliant registry like Docker Hub. You don’t have to learn complex new workflows. And while Chris and Michael centered on WasmEdge, Docker should support any WASI runtime. 

The approach is interoperable with containers and has early support within Docker Desktop. Although Wasm might initially seem unfamiliar, integration with the Docker ecosystem flattens that learning curve.

The future of Docker and Wasm

As Chris mentioned, we’re invested in making Docker and Wasm work well together. Our recent Docker+Wasm Technical Preview is a major step towards boosting interoperability. However, we’re also thrilled to explore how Docker tooling can improve the lives of Wasm-hungry developers — no matter their goals. 

Docker wants to get involved with the Wasm community to better understand how developers like you are building your WebAssembly applications. Your use cases and obstacles matter. By sharing our experience from the container ecosystem with the community, we hope to accelerate Wasm's growth and help you tackle that next big project. 

Get started and learn more

Want to test run Docker and Wasm? Check out Chris’ GitHub page for links to special Wasm-compatible Docker Desktop builds, demo repos, and more. We’d also love to hear your feedback as we continue bolstering Docker+Wasm support!

Finally, don’t miss the chance to learn more about WebAssembly and microservices — alongside experts and fellow developers — at an upcoming meetup.

Developing Go Apps With Docker

Go (or Golang) is one of the most loved and wanted programming languages, according to Stack Overflow's 2022 Developer Survey. Thanks to its smaller binary sizes vs. many other languages, developers often use Go for containerized application development.

Mohammad Quanit explored the connection between Docker and Go during his Community All-Hands session. Mohammad shared how to Dockerize a basic Go application while exploring each core component involved in the process: 

Follow along as we dive into these containerization steps. We’ll explore using a Go application with an HTTP web server — plus key best practices, optimization tips, and ways to bolster security. 

Go application components

Creating a full-fledged Go application requires you to create some Go-specific components. These are essential to many Go projects, and the containerization process relies heavily on them. Let's take a closer look at those now. 

Using main.go and go.mod

Mohammad mainly highlights the main.go file since you can't run an app without executable code. In Mohammad's case, he created a simple web server with two unique routes: one that prints a message using Go's fmt package, and another that returns the current time.

A main.go file creating a web server with an I/O format with print functionality and a route to return the current time.
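
As a minimal sketch, a web server like the one described could look something like this. The handler names, port, and response text are assumptions based on the description rather than Mohammad's exact code:

package main

import (
    "fmt"
    "net/http"
    "time"
)

// Prints a simple greeting for requests to /
func handleRoot(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "Hello World")
}

// Returns the current server time for requests to /time
func handleTime(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, time.Now().Format(time.RFC1123))
}

func main() {
    http.HandleFunc("/", handleRoot)
    http.HandleFunc("/time", handleTime)
    fmt.Println("Listening on :8081")
    http.ListenAndServe(":8081", nil)
}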

What’s nice about Mohammad’s example is that it isn’t too lengthy or complex. You can emulate this while creating your own web server or use it as a stepping stone for more customization.

Note: Your file doesn't have to be named main.go — you can name it anything you want — but it does need to declare package main and define a func main(). Both exist in our sample above.

We always recommend confirming that your code works as expected. Enter the command go run main.go to spin up your application. You can alternatively replace main.go with your file’s specific name. Then, open your browser and visit http://localhost:8081 to view your “Hello World” message or equivalent. Since we have two routes, navigating to http://localhost:8081/time displays the current time thanks to Mohammad’s second function. 

Next, we have the go.mod file. You’ll use this as a root file for your Go packages, module path for imports (shown above), and for dependency requirements. Go modules also help you choose a directory for your project code. 
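
For reference, a minimal go.mod for a project like this might contain little more than a module path and a Go version (the path below is illustrative):

module github.com/example/go-docker-app

go 1.19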

With these two pieces in place, you're ready to create your Dockerfile.

Creating your Dockerfile

Building and deploying your Dockerized Go application means starting with a software image. While you can pull this directly from Docker Hub (using the CLI), beginning with a Dockerfile gives you more configuration flexibility. 

You can create this file within your favorite editor, like VS Code. We recommend VS Code since it supports the official Docker extension. This extension supports debugging, autocompletion, and easy project file navigation. 

Choosing a base image and including your application code is pretty straightforward. Since Mohammad is using Go, he kicked off his Dockerfile by specifying the golang Docker Official Image as a parent image. Docker will build your final container image from this. 

You can choose whatever version you'd like, but a pinned version like golang:1.19.2-bullseye is both stable and slim. Newer image versions like these are also safe from October 2022's Text4Shell vulnerability.

You'll also need to do the following within your Dockerfile:

  • Include an app directory for your source code
  • Copy everything from the root directory into your app directory
  • Copy your Go files into your app directory and install dependencies
  • Build your app with configuration
  • Tell your Docker container to listen on a certain port at runtime
  • Define an executable command that runs once your container starts

With these points in mind, here’s how Mohammad structured his basic Dockerfile:

# Specifies a parent image
FROM golang:1.19.2-bullseye

# Creates an app directory to hold your app’s source code
WORKDIR /app

# Copies everything from your root directory into /app
COPY . .

# Installs Go dependencies
RUN go mod download

# Builds your app with optional configuration
RUN go build -o /godocker

# Tells Docker which network port your container listens on
EXPOSE 8080

# Specifies the executable command that runs when the container starts
CMD [ "/godocker" ]

From here, you can run a quick CLI command to build your image from this file: 

docker build --rm -t [YOUR IMAGE NAME]:alpha .

This creates an image while removing any intermediate containers created with each image layer (or step) throughout the build process. You’re also tagging your image with a name for easier reference later on. 

Confirm that Docker built your image successfully by running the docker image ls command:

A terminal running the docker image ls command and showing that the image was built successfully.

If you’ve already pulled or built images in the past and kept them, they’ll also appear in your CLI output. However, you can see Mohammad’s go-docker image listed at the top since it’s the most recent. 

Making changes for production workloads

What if you want to account for code or dependency changes that’ll inevitably occur with a production Go application? You’ll need to tweak your original Dockerfile and add some instructions, according to Mohammad, so that changes are visible and the build process succeeds:

FROM golang:1.19.2-bullseye

WORKDIR /app

# Effectively tracks changes within your go.mod file
COPY go.mod .

RUN go mod download

# Copies your source code into the app directory
COPY main.go .

RUN go build -o /godocker

EXPOSE 8080

CMD [ "/godocker" ]

After making those changes, you’ll want to run the same docker build and docker image ls commands. Now, it’s time to run your new image! Enter the following command to start a container from your image: 

docker run -d -p 8080:8081 --name go-docker-app [YOUR IMAGE NAME]:alpha

Confirm that this worked by entering the docker ps command, which generates a list of your containers. If you have Docker Desktop installed, you can also visit the Containers tab from the Docker Dashboard and locate your new container in the list. The same goes for your image builds — just use the Images tab instead. 

Congratulations! By tracing Mohammad’s steps, you’ve successfully containerized a functioning Go application. 

Best practices and optimizations

While our Go application gets the job done, Mohammad’s final image is pretty large at 913MB. The client (or end user) shouldn’t have to download such a hefty file. 

Mohammad recommends using a multi-stage build to only copy forward the components you need between image layers. Although we start with a golang:version as a builder image, defining a second build stage and choosing a slim alternative like alpine helps reduce image size. You can watch his step-by-step approach to tackling this. 

This is beneficial and common across numerous use cases. However, you can take things a step further by using FROM scratch in your multi-stage builds. This empty base image is the smallest we offer and accepts static binaries as executables — making it perfect for Go application development. 

You can learn more about our scratch image on Docker Hub. Although scratch is listed on Hub, you can't pull it directly — you can only reference it in your Dockerfile. 
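
Putting those ideas together, a multi-stage Dockerfile for this Go server might look roughly like the sketch below. Setting CGO_ENABLED=0 produces a fully static binary that can run on scratch; the file names and paths are assumptions:

# Build stage: compile a static Go binary
FROM golang:1.19.2-bullseye AS build
WORKDIR /app
COPY go.mod .
RUN go mod download
COPY main.go .
RUN CGO_ENABLED=0 go build -o /godocker

# Final stage: copy only the compiled binary into an empty image
FROM scratch
COPY --from=build /godocker /godocker
EXPOSE 8080
CMD ["/godocker"]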

Develop your Go application today

Mohammad Quanit outlined some user-friendly development workflows that can benefit both newer and experienced Go users. By following his steps and best practices, it’s possible to create cross-platform Go apps that are slim and performant. Docker and Go inherently mesh well together, and we also encourage you to explore what’s possible through containerization. 

Want to learn more?

How to Use the Node Docker Official Image

Topping Stack Overflow's 2022 list of most popular web frameworks and technologies, Node.js continues to grow as a critical MERN stack component. And since Node applications are written in JavaScript — the world's leading programming language — many developers will feel right at home using it. We introduced the Node Docker Official Image (DOI) due to Node.js' popularity and to solve some common development challenges. 

The Node.js Foundation describes Node as “an open-source, cross-platform JavaScript runtime environment.” Developers use it to create performant, scalable server and networking applications. Despite Node’s advantages, building and deploying cross-platform services can be challenging with traditional workflows.

Conversely, the Node Docker Official Image accelerates and simplifies your development processes while allowing additional configuration. You can deploy containerized Node applications in minutes. Throughout this guide, we’ll discuss the Node Official Image, how to use it, and some valuable best practices. 

In this tutorial:

What is the Node Docker Official Image?


The Node Docker Official Image contains all source code, core dependencies, tools, and libraries your application needs to work correctly. 

This image supports multiple CPU architectures like amd64, arm32v6, arm32v7, arm64v8, ppc64le, and s390x. You can also choose between multiple tags (or image versions) for any project. Choosing a pinned version like node:19.0.0-slim locks you into a stable, streamlined version of Node.js. 

Node.js use cases

Node.js lets developers write server-side code in JavaScript. The runtime environment then transforms this JavaScript into hardware-friendly machine code. As a result, the CPU can process these low-level instructions. 

Node is event-driven (through user actions), non-blocking, and known for being lightweight while simultaneously handling numerous operations. As a result, you can use the Node DOI to create the following: 

  • Web server applications
  • Networking applications

Node works well here because it supports HTTP requests and socket connections. An asynchronous I/O library lets Node containers read and write various system files that support applications. 

You could use the Node DOI to build streaming apps, single-page applications, chat apps, to-do list apps, and microservices. Or — if you’re like Community All-Hands’ Kathleen Juell — you could use Node.js to help serve static content. Containerized Node will shine in any scenario dictated by numerous client-server requests. 

Docker Captain Bret Fisher also offered his thoughts on Dockerized Node.js during DockerCon 2022. He discussed best practices for managing Node.js projects while diving into optimization. 

Lastly, we also maintain some Node sample applications within our GitHub Awesome Compose library. You can learn to use Node with different databases or even incorporate an NGINX proxy. 

About Docker Official Images

We’ve curated the Node Docker Official Image as one of many core container images on Docker Hub. The Node.js community maintains this image alongside members of the Docker community. 

Like other Docker Official Images, the Node DOI offers a common starting point for Node and JavaScript developers. We also maintain an evolving list of Node best practices while regularly pushing critical security updates. This distinguishes Docker Official Images from alternatives on Docker Hub. 

How to run Node in Docker

Before getting started, download the latest Docker Desktop release and install it. Docker Desktop includes the Docker CLI, Docker Compose, and additional core development tools. The Docker Dashboard (Docker Desktop’s UI component) will help you manage images and containers. 

You’re then ready to Dockerize Node!

Enter a quick pull command

Pulling the Node DOI is the quickest way to begin. Enter docker pull node in your terminal to grab the default latest Node version from Docker Hub. You can readily use this tag for testing or local development. But, a pinned version might be safer for production use. Here’s how the pull process works: 

Your CLI will display a status message once it’s done. You can also double-check this within Docker Desktop! Click the Images tab on the left sidebar and scan through your listed images. Docker Desktop will display your node image:

Docker UI listing local images, including the Node Docker Official Image.

Your node:latest image is a hefty 942.33 MB. If you inspect your Node image’s contents using docker sbom node, you’ll see that it currently includes 623 packages. The Node image contains numerous dependencies and modules that support Node and various applications. 

However, your final Node image can be much slimmer! We’ll tackle optimization while discussing Dockerfiles. After all, the Node DOI has 24 supported tags spread amongst four major Node versions. Each has its own impact on image size.  

Confirm that Node is functional

Want to run your new image as a container? Hover over your listed node image and click the blue “Run” button. In this state, your Node container will produce some minimal log entries and run continuously in case requests come through. 

Exit this container before moving on by clicking the square “stop” button in Docker Desktop or by entering docker stop YourContainerName in the CLI. 

Create your Node image from a Dockerfile

Building from a Dockerfile gives you ultimate control over image composition, configuration, and your overall application. However, Node requires very little to function properly. Here’s a barebones Dockerfile to get you up and running (using a pinned, Debian-based image version): 

FROM node:19-bullseye

Docker will build your image from your chosen Node version. 

It’s safest to use node:19-bullseye because this image supports numerous use cases. This version is also stable and prevents you from pulling in new breaking changes, which sometimes happens with latest tags. 

To build your image from a Dockerfile, run the docker build -t my-nodejs-app . command. You can then run your new image by entering docker run -it --rm --name my-running-app my-nodejs-app.

Optimize your Node image

The complete version of Node often includes extra packages that weigh your application down. This leaves plenty of room for optimization. 

For example, removing unneeded development dependencies reduces image bloat. You can do this by adding a RUN instruction to our previous file: 

FROM node:19-bullseye

RUN npm prune --production

This approach is pretty granular. It also relies on you knowing exactly what you do and don’t need for your project. Alternatively, switching to a slim image build offers the quickest results. You’ll encounter similar caveats but spend less time writing individual Dockerfile instructions. The easiest approach is to replace node:19-bullseye with its node:19-bullseye-slim counterpart. This alone shrinks image size by 75%. 

You can even pull node:19-alpine to save more disk space. However, this tag contains even fewer dependencies and isn’t officially supported by the Node.js Foundation. Keep this in mind while developing. 

Finally, multi-stage builds lead to smaller image sizes. These let you copy only what you need between build stages to combat bloat. 
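
As a rough sketch, a multi-stage Node Dockerfile might look like the following. The file names and start command are assumptions, so adjust them to match your project:

# Build stage: install only production dependencies
FROM node:19-bullseye AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Final stage: copy the prepared app into a slimmer base image
FROM node:19-bullseye-slim
WORKDIR /usr/src/app
COPY --from=build /usr/src/app ./
USER node
CMD ["node", "server.js"]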

Using Docker Compose

Say you have a start script, an existing package.json file, and (possibly) want to operate Node alongside other services. Spinning up Node containers with Docker Compose can be pretty handy in these situations.

Here’s a sample docker-compose.yml file: 

services:
  node:
    image: "node:19-bullseye"
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=production
    volumes:
      - ./:/home/node/app
    ports:
      - "8888:8888"
    command: "npm start"

You’ll see some parameters that we didn’t specify earlier in our Dockerfile. For example, the user parameter lets you run your container as an unprivileged user. This follows the principle of least privilege. 

To jumpstart your Node container, simply enter the docker compose up -d command. Like before, you can verify that Node is running within Docker Desktop. The docker container ls --all command also displays all existing containers within the CLI.  

Running a simple Node script

Your project doesn't always need a Dockerfile. In these cases, you can directly leverage the Node DOI with the following command: 

docker run -it --rm --name my-running-script -v "$PWD":/usr/src/app -w /usr/src/app node:19-bullseye node your-daemon-or-script.js

This simplistic approach is ideal for single-file projects.

Docker Node best practices

It’s important to get the most out of Docker and the Node Official Image. We’ve briefly outlined the benefits of running as a non-root node user, but here are some useful tips for developing with Node: 

  • Easily pass secrets and other runtime configuration to your application through environment variables, such as setting NODE_ENV to production: -e "NODE_ENV=production".
  • Place any installed, global Node dependencies into a non-root user directory.
  • Remember to manually install curl if using an alpine image tag, since it’s not included by default.
  • Wrap your Node process in an init system with the --init flag, so it can successfully run as PID 1. 
  • Set memory limitations for your containers that run on the same host. 
  • Include the package.json start command directly within your Dockerfile, to reduce active container processes and let Node properly receive exit signals. 

This isn’t an exhaustive list. To view more details, check out our best practices documentation.

Get started with Node today

As you’ve seen, spinning up a Node container from the Node Docker Official Image is quick and requires just a few steps depending on your workflow. You’ll no longer need to worry about platform-specific builds or get bogged down with complex development processes. 

We’ve also covered many ways to help your Node builds perform better. Check out our top containerization tips article to learn even more about optimization and security. 

Ready to get started? Swing by Docker Hub and pull our Node image to start experimenting. In no time, you'll have your server and networking applications up and running. You can also learn more on our GitHub README page.

How to Fix and Debug Docker Containers Like a Superhero

While containers help developers rapidly build and run cross-platform applications, creating error-free apps remains a constant challenge. And while it's not always obvious how container errors occur, this mystery is even harder for newer developers to unravel. Figuring out how to debug Docker containers can seem daunting.

In this Community All-Hands session, Ákos Takács demonstrated how to solve many of these pesky problems and gain the superpower of fixing containers.

Each issue can impact your image builds and final applications. Some bugs may not trigger clear error messages. To further complicate things, source-code inspection isn’t always helpful. 

But, common container issues don’t have to be your kryptonite! We’ll share Ákos’ favorite tips and show you how to conquer these development challenges.

In this tutorial:

Finding and fixing common container mistakes

Everyone is prone to the occasional silly mistake. You code when you’re tired, suffer from the occasional keyboard slip, or sometimes fail to copy text correctly between steps. These missteps can carry forward from one command to the next. And because easy-to-miss things like spelling errors or character omissions can fly under the radar, you’re left doing plenty of digging to solve basic problems. Nobody wants that, so what tools are at your disposal? 

Using the CLI for extra container visibility

Say we have an image downloaded from Docker Hub — any image at all — and use some variation of the docker run command to run it. The resulting container will be running the default command. If you want to surface that command, entering docker container ls --all will grab a list of containers with their respective commands. 

Users often copy these commands and reuse them within other longer CLI commands. As you’d expect, it’s incredibly easy to highlight incorrectly, copy an incomplete phrase, and run a faulty command that uses it.

While spinning up a new container, you’ll hit a snag. The runtime in this instance will fail since Docker cannot find the executable. It’s not located in the PATH, which indicates a problem:

Docker Run

Running the docker container ls --all command also offers some hints. Note the httpd-foregroun container command paired with its created (but not running) container. Conversely, the v0 container that’s running successfully leverages a valid, complete command:

Docker Container ls

How do we investigate further? Use the docker run --rm -it --name MYCONTAINER [IMAGE] bash command to open an interactive terminal within your container. Take the container’s default command and attempt to run it again. A “command not found” error message will appear.

This is much more succinct and shows that you’ve likely entered the wrong command — in this case by forgetting a character. While Ákos’ example uses httpd, it’s applicable to almost any container image. 

Change your CLI output formatting for visibility and readability

Container commands are clipped once they exceed a certain length in the terminal output. That prevents you from inspecting the command in its entirety. 

Luckily, Ákos showed how the --format '{{ json . }}' | jq -C flag can improve how your terminal displays outputs. Instead of cutting off portions of text, here's how your docker container ls --all result will look:

JSON jQ C Format

You can read and compare any parameters in full. Nothing is hidden. If you don’t have jq installed, you could instead enter the following command to display outputs similarly minus syntax highlighting. This beats the default tabular layout for troubleshooting:

docker container ls --all --format '{{ json . }}' | python3 -m json.tool --json-lines

Lastly, why not just expand the original table view while only displaying relevant information? Run the following command with the --no-trunc flag to expand those table rows and completely reveal each cell’s contents:

docker container ls --all --format 'table {{ .Names }}\t{{ .Status }}\t{{ .Command }}' --no-trunc

These examples highlight the importance of visibility and transparency in troubleshooting. When you can uncover and easily digest the information you need, making corrections is much easier.      

Remember to leverage your logs

By following best practices, any active application running within a Docker container will produce log outputs. While you might view logging as a problem-catching mechanism, many running containers don’t experience issues.

Ákos believes it’s important to understand how normal log entries look. As a result, identifying abnormal log entries becomes that much easier. The docker logs command enables this:

Docker Logs

The process of tuning your logs differs between tools and languages. For example, Ákos drew from methods involving httpd — like trace for detailed trace-level messages or LogLevel for filtering error messages — but these practices are widely applicable. You’ll probably want to zero in on startup and runtime errors to diagnose most issues. 

Log handling is configurable. Here are some common commands to help you drill down into container issues (and reduce noise):

Grab your container’s last 100 logs:

docker logs --tail 100 [container ID]

Grab all logs for a specific container:

docker logs [container ID]

View all active processes within a running container, should its logs be inaccessible:

docker top [container ID]

Log inspection enables easier remediation. Alongside Ákos, we agree that you should confirm any container changes or fixes after making them. This means you’ve taken the right steps and can move ahead.

Want to view all your logs together within Docker Desktop? Download our Logs Explorer extension, which lets you browse through your logs using filters and advanced search capabilities. You can even view new logs as they populate.

Logs Explorer

Tackle issues with ENTRYPOINT

When running applications, you’ll need to run executable files within your container. The ENTRYPOINT portion of your Dockerfile sets the main command within a container and basically assigns it a task. These ENTRYPOINT instructions rely on executable files being in the container. 
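
As a quick illustration, here's a minimal sketch of how ENTRYPOINT and CMD typically work together. The base image and script name mirror the httpd example from the session, but this Dockerfile itself is an assumption rather than Ákos' exact setup:

FROM httpd:2.4
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# ENTRYPOINT defines the executable; CMD supplies its default arguments
ENTRYPOINT ["/entrypoint.sh"]
CMD ["httpd-foreground"]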

In Ákos’ example, he tackles a scenario where improper permissions can prevent Docker from successfully mounting and running an entrypoint.sh executable. You can copy his approach by doing the following: 

  1. Use the ls -l $PWD/examples/v6/entrypoint.sh command to view your file’s permissions, which may be inadequate.
  2. Confirm that permissions are incorrect. 
  3. Run a chmod 774 command to give the file's owner and group full read, write, and execute permissions (and read-only access to other users).
  4. Use docker run to spin up a container v7 from the original entrypoint, which may work briefly but soon stop running. 
  5. Inspect the entrypoint.sh file to confirm our desired command exists. 

We can confirm this again by entering docker container inspect v7-exiting to view our container definition and parameters. While the Entrypoint is specified, its Cmd definition is null. That’s what’s causing the issue:

Config File

Why does this happen? Many don't realize that setting --entrypoint automatically clears the default command of any image that has one. You'll need to redefine your command for your container to work properly. Here's how that CLI command might look:

docker run -d -v $PWD/examples/v7/entrypoint.sh:/entrypoint.sh --entrypoint /entrypoint.sh --name v7-running httpd:2.4 httpd-foreground

This works for any container image but we’re just drawing from an earlier example. If you run this and list your containers again, v7 will be active. Confirm within your logs that everything looks good. 

Access and inspect container content

Carefully managing files and system resources is critical during local development. That’s doubly true while working with multiple images, containers, or resource constraints. There are scenarios where your containers bloat as their contents accumulate over time. 

Keeping your files tidy is one thing. However, you may also want to copy your files from your container and move them into a temporary folder — using the docker cp command with a specified directory. Using a variation of ls -la ./var/v8, borrowing from Ákos’ example, then produces a list containing every file. 

This is great for visibility and confirming your container’s contents. And we can diagnose any issues one step further with docker container diff v8 to view which files have been changed, appended, or even deleted. If you’re experiencing strange container behavior, digging into these files might be useful. 


Note: You can also leverage our Resource Usage extension to monitor disk space consumption, network activity, CPU usage, and memory usage in real time!

Dive deeply into files and folders

Close inspection is where hexdump comes in handy. The hexdump function converts your file into hexadecimal code, which is much more readable than binary. Ákos used the following commands:

docker cp v8:/usr/local/apache2/bin/httpd ./var/v8-httpd
hexdump -C -n 100 ./var/v8-httpd

You can adjust this -n number to read additional or fewer initial bytes. If your file contains text, this content will stand out and reveal the file’s main purpose. But, say you want to access a folder. While changing your directory and running docker container inspect … is standard, this method doesn’t work for Docker Desktop users. Since Desktop runs things in a VM, the host cannot access the folders within. 

Ákos showcased CTO Justin Cormack’s own nsenter1 image on GitHub, which lets us tap into those containers running with Docker Desktop environments. Docker Captain Bret Fisher has since expanded upon nsenter1’s documentation while adding useful commands. With these pieces in place, run the following command:

docker run --rm --privileged --pid=host alpine:3.16.2 nsenter -t 1 -m -u -i -n -p -- sh -c "cd \"$(docker container inspect v8 --format '{{ .GraphDriver.Data.UpperDir }}')\" && find ."

This command’s output mirrors that from our earlier docker container diff command. You can also run a hexdump using that same image above, which gives you the same troubleshooting abilities regardless of your environment. You can also inspect your entrypoint.sh to make important changes.  

Solve Docker Build errors 

While Docker BuildKit is quick and resilient, you can encounter errors that prevent image build completion. To learn why, run the following command to view each sequential build stage:

docker build $PWD/[MY SOURCE] --tag "MY TAG" --progress plain

BuildKit will provide readable context for each step and display any errors that occur:

Docker Build Progress

If you see a missing file or directory error like the one above, don’t worry! You can use the cat $PWD/[MY SOURCE]/[MY DOCKERFILE] command to view the contents of your Dockerfile. Not only can you see where you misstepped more clearly, but you can also add a new instruction before the failing command to list your folder’s contents. 

Maybe those contents need updating. Maybe your folder is empty! In that case, you need to update everything so docker build has something to leverage. 

Next, run the build command again with the --no-cache flag added. This flag tells Docker to cleanly build from scratch each time without relying on caching:

Docker Build No Cache

You can progressively build updated versions of your Dockerfile and test those changes, given the cascading nature of instructions. Writing new instructions after the last working instruction — or making changes earlier on in your file — can eliminate those pesky build issues. Mechanisms like unlink or cp are helpful. The first behaves like rm but accepts only one argument, while cp copies critical files and folders into your image from a source.  

Solve Docker Compose errors

We use Docker Compose to spin up multiple services simultaneously using the docker compose --project-directory $PWD/[MY SOURCE] up -d command. 

However, one or more of those containers might unexpectedly exit. By running docker compose --project-directory $PWD/[MY SOURCE] ps to list out our services, you can see which containers are running or exited.

To pinpoint the problem, you’d usually grab logs via the docker compose logs command. You won’t need to specify a project directory in most cases. However, your container produces no logs since it isn’t running. 

Next, run the cat $PWD/[MY SOURCE]/docker-compose.yml command to view your Docker Compose file’s contents. It’s likely that your services definitions need fixing, so digging line by line within the CLI is helpful. Enter the following command to make this output even clearer:

docker compose --project-directory $PWD/[MY SOURCE] config

Your container exits when the commands contained within are invalid — just like we saw earlier. You’ll be able to see if you’ve entered a command incorrectly or if that command is empty. From there, you can update your Compose file and re-run docker compose --project-directory $PWD/[MY SOURCE] up -d. You can now confirm that everything is working by listing your services again. Your terminal will also output logs! 

Optional: Make direct file edits within running containers

Finally, it’s possible (and tempting) to directly edit your files within your container. This is viable while testing new changes and inspecting your containers. However, it’s usually considered best practice to create a new image and container instead. 

If you want to make edits within running containers, an editor like VS Code allows this, while IntelliJ doesn’t by comparison. Install the Docker extension for VS Code. You can then browse through your containers in the left sidebar, expand your collection of resources, and directly access important files. For example, web developers can directly edit their index.html files to change how user content is structured. 

Investigate less and develop more

Overall, the process of fixing a container, on the surface, may seem daunting to newer Docker users. The methods we’ve highlighted above can dramatically reduce that troubleshooting complexity — saving you time and effort. You can spend less time investigating issues and more time creating the applications users love. And we think those skills are pretty heroic. 

For more information, you can view Ákos Takács’ full presentation on YouTube to carefully follow each step. Want to dive deeper? Check out these additional resources to become a Docker expert: 

How to Use the Postgres Docker Official Image

Postgres is one of the top relational, multi-model databases currently available. It's designed to power database applications — which either serve important data directly to end users or through another application via APIs. Your typical website might fit that first example well, while a finance app (like PayPal) typically uses APIs to process GET or POST database requests. 

Postgres’ object-relational structure and concurrency are advantages over alternatives like MySQL. And while no database technology is objectively the best, Postgres shines if you value extensibility, data integrity, and open-source software. It’s highly scalable and supports complex, standards-based SQL queries. 

The Postgres Docker Official Image (DOI) lets you create a Postgres container tailored specifically to your application. This image also handles many core setup tasks for you. We'll discuss containerization and the Postgres DOI, and show you how to get started.

In this tutorial:

Why should you containerize Postgres? 


Since your Postgres database application can run alongside your main application, containerization is often beneficial. This makes it much quicker to spin up and deploy Postgres anywhere you need it. Containerization also separates your data from your database application. Should your application fail, it’s easy to launch another container while shielding your data from harm. 

This is simpler than installing Postgres locally, performing additional configuration, and starting your own background processes. Such workflows take extra time, require deeper technical knowledge, and don’t adapt well to changing application requirements. That’s why Docker containers come in handy — they’re approachable and tuned for rapid development.

What’s the Postgres Docker Official Image?

Like any other Docker image, the Postgres Docker Official Image contains all source code, core dependencies, tools, and libraries your application needs. The Postgres DOI tells your database application how to behave and interact with data. Meanwhile, your Postgres container is a running instance of this standard image.

Specifically, Postgres is perfect for the following use cases:

  • Connecting Docker shared volumes to your application
  • Testing your storage solutions during development
  • Testing your database application against newer versions of your main application or Postgres itself

The PostgreSQL Docker Community maintains this image and added it to Docker Hub due to its widespread appeal.

Can you deploy Postgres containers in production?

Yes! Though this answer comes with some caveats and depends on how many containers you want to run simultaneously. 

While it’s possible to use the Postgres Official Image in production, Docker Postgres containers are best suited for local development. This lets you use tools like Docker Compose to collectively manage your services. You aren’t forced to juggle multiple database containers at scale, which can be challenging. 

Launching production Postgres containers means using an orchestration system like Kubernetes to stay up and running. You may also need third-party components to supplement Docker’s offerings. However, you can absolutely give this a try if you’re comfortable with Kubernetes! Arctype’s Shanika Wickramasinghe shares one method for doing so.

For these reasons, you can perform prod testing with just a few containers. But, it’s best to reconsider your deployment options for anything beyond that.

How to run Postgres in Docker

To begin, download the latest Docker Desktop release and install it. Docker Desktop includes the Docker CLI, Docker Compose, and supplemental development tools. Meanwhile, the Docker Dashboard (Docker Desktop’s UI component) will help you manage images and containers. 

Afterward, it’s time to Dockerize Postgres!

Enter a quick pull command

Pulling the Postgres Docker Official Image is the fastest way to get started. In your terminal, enter docker pull postgres to grab the latest Postgres version from Docker Hub. 

Alternatively, you can pin your preferred version with a specific tag. Though we usually associate pinning with Dockerfiles, the concept is similar to a basic pull request. 

For example, you’d enter the docker pull postgres:14.5 command if you prefer postgres v14.5. Generally, we recommend using a specific version of Postgres. The :latest version automatically changes with each new Postgres release — and it’s hard to know if those newer versions will introduce breaking changes or vulnerabilities. 

Either way, Docker will download your Postgres image locally onto your machine. Here’s how the process looks via the CLI:

Once the pull is finished, your terminal should notify you. You can also confirm this within Docker Desktop! From the left sidebar, click the Images tab and scan the list that appears in the main window. Docker Desktop will display your postgres image, which weighs in at 355.45 MB.

Docker Desktop user interface displaying the list of current local images, including Postgres.

Postgres is one of the slimmest major database images on Docker Hub. But alpine variants are also available to further reduce your image sizes and include basic packages (perfect for simpler projects). You can learn more about Alpine’s benefits in our recent Docker Official Image article.

Next up, what if you want to run your new image as a container? While many other images let you hover over them in the list and click the blue “Run” button that appears, Postgres needs a little extra attention. Being a database, it requires you to set environment variables before forming a successful connection. Let’s dive into that now.

Start a Postgres instance

Enter the following docker run command to start a new Postgres instance or container: 

docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres

This creates a container named some-postgres and assigns important environment variables before running everything in the background. Postgres requires a password to function properly, which is why that’s included. 

If you have this password already, you can spin up a Postgres container within Docker Desktop. Just click that aforementioned “Run” button beside your image, then manually enter this password within the “Optional Settings” pane before proceeding. 

However, you can also use the Postgres interactive terminal, or psql, to query Postgres directly:

docker run -it --rm --network some-network postgres psql -h some-postgres -U postgres
psql (14.3)
Type "help" for help.

postgres=# SELECT 1;
 ?column? 
----------
        1
(1 row)

Using Docker Compose

Since you’re likely using multiple services, or even a database management tool, Docker Compose can help you run instances more efficiently. With a single YAML file, you can define how your services work. Here’s an example for Postgres:

services:

  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

volumes:
  pgdata:

You’ll see that both services are set to restart: always. This makes our data accessible whenever our applications are running and keeps the Adminer management service active simultaneously. When a container fails, this ensures that a new one starts right up.

Say you’re running a web app that needs data immediately upon startup. Your Docker Compose file would reflect this. You’d add your web service and the depends_on parameter to specify startup and shutdown dependencies between services. Borrowing from our docs on controlling startup and shutdown order, your expanded Compose file might look like this:

services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      db:
        condition: service_healthy
    command: ["python", "app.py"]

  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 1s
      timeout: 5s
      retries: 10

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

To launch your Postgres database and supporting services, enter the docker compose -f [FILE NAME] up command. 

Using either docker run, psql, or Docker Compose, you can successfully start up Postgres using the Official Image! These are reliable ways to work with “default” Postgres. However, you can configure your database application even further.

Extending your Postgres image

There are many ways to customize or configure your Postgres image. Let’s tackle four important mechanisms that can help you.

1. Environment variables

We’ve touched briefly on the importance of POSTGRES_PASSWORD to Postgres. Without specifying this, Postgres can’t run effectively. But there are also other variables that influence container behavior: 

  • POSTGRES_USER – Specifies a user with superuser privileges and a database with the same name. Postgres uses the default user when this is empty.
  • POSTGRES_DB – Specifies a name for your database, or defaults to the POSTGRES_USER value when left blank.
  • POSTGRES_INITDB_ARGS – Sends arguments to postgres initdb and adds functionality.
  • POSTGRES_INITDB_WALDIR – Defines a specific directory for the Postgres transaction log. A transaction is an operation and usually describes a change to your database.
  • POSTGRES_HOST_AUTH_METHOD – Controls the auth-method for host connections to all databases, users, and addresses.
  • PGDATA – Defines another default location or subdirectory for database files.

These variables live within your plain text .env file. Ultimately, they determine how Postgres creates and connects databases. You can check out our GitHub Postgres Official Image documentation for more details on environment variables.

2. Docker secrets

While environment variables are useful, passing them between host and container doesn’t come without risk. Docker secrets let you access and load those values from files already present in your container. This prevents your environment variables from being intercepted in transit over a port connection. You can use the following command (and iterations of it) to leverage Docker secrets with Postgres: 

docker run --name some-postgres -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-passwd -d postgres

Note: Docker secrets are only compatible with certain environment variables. Reference our docs to learn more.

3. Initialization scripts

Also called init scripts, these are any executable shell scripts or command-based .sql files that Postgres runs when it first creates its data directory. This helps you perform any critical operations before your services are fully up and running. Conversely, Postgres will ignore these scripts if the data directory is already initialized from a previous run.
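
For example, you might mount a script into the image's /docker-entrypoint-initdb.d directory, which the entrypoint scans on first startup (the init.sql file name here is illustrative):

docker run --name some-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -v "$PWD/init.sql:/docker-entrypoint-initdb.d/init.sql" \
  -d postgres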

4. Database configuration

Your Postgres database application acts as a server, and it’s beneficial to control how it runs. Configuring your database not only determines how your Postgres container talks with other services, but also optimizes how Postgres runs and accesses data. 

There are two ways you can handle database configurations with Postgres. You can either apply these configurations locally within a dedicated file or use the command line. With the CLI approach, the image's entrypoint script passes any flags placed after the image name in your docker run command on to the postgres server process. 
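
As a sketch of that command-line approach, any flags placed after the image name are handed to the postgres server process (the values below are purely illustrative):

docker run --name some-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -d postgres -c shared_buffers=256MB -c max_connections=200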

Note: Available configurations differ between Postgres versions. The configuration file directory also changes slightly while using an alpine variant of the Postgres Docker Official Image.

Important caveats and data storage tips

While Postgres can be pretty user-friendly, it does have some quirks. Keep the following in mind while working with your Postgres container images: 

  • If no database exists when Postgres spins up in a container, it’ll create a default database for you. While this process unfolds, that database won’t accept any incoming connections.
  • Working with a pre-existing database is best when using Docker Compose to start up multiple services. Otherwise, automation tools may fail while Postgres creates a default.
  • Docker allots containers 64 MB of shared memory (/dev/shm) by default, and Postgres will throw errors if it needs more.
  • You can use either a docker run command (via --shm-size) or Docker Compose (via shm_size) to allocate more shared memory to your Postgres containers.

Storing your data in the right place

Data accessibility helps Postgres work correctly, so you'll also want to make sure you're storing your data in the right place. This location must be visible to both Postgres and Docker to prevent pesky issues. While there's no perfect storage solution, remember the following:

  • Writing files to the host disk (Docker-managed) and using internal volume management is transparent and user-friendly. However, these files may be inaccessible to tools or apps outside of your containers. 
  • Using bind mounts to connect external data to your Postgres container can solve data accessibility issues. However, you’re responsible for creating the directory and setting up permissions or security.

Lastly, if you decide to start your container via the docker run command, don’t forget to mount the appropriate directory from the host. The -v flag enables this. Browse our Docker run documentation to learn more.
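
As a minimal sketch, mounting a local pgdata directory into the container's default data location might look like this (the host path is an assumption):

docker run --name some-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -v "$PWD/pgdata:/var/lib/postgresql/data" \
  -d postgres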

Jumpstart your next Postgres project today

As we’ve discovered, harnessing the Postgres Docker Official Image is pretty straightforward in most cases. Since many customizations are available, you only need to explore the extensibility options you’re comfortable with. Postgres even supports extensions (like PostGIS) — which could add even deeper functionality. 

Overall, Dockerizing Postgres locally has many advantages. Swing by Docker Hub and pull your first Postgres Docker Official Image to start experimenting. You'll find even deeper instructions for enhancing your database setup on our Postgres GitHub page.

Need a springboard? Check out these Docker awesome-compose applications that leverage Postgres: 

What is the Best Container Security Workflow for Your Organization?

Since containers are a primary means for developing and deploying today's microservices, keeping them secure is highly important. But where should you start? A solid container security workflow often begins with assessing your images. These images can contain a wide spectrum of vulnerabilities. Per Sysdig's latest report, 75% of images have vulnerabilities considered either highly or critically severe. 

There’s good news though — you can patch these vulnerabilities! And with better coordination and transparency, it’s possible to catch these issues in development before they impact your users. This protects everyday users and enterprise customers who require strong security. 

Snyk’s Fani Bahar and Hadar Mutai dove into this container security discussion during their DockerCon session. By taking a shift-left approach and rallying teams around key security goals, stronger image security becomes much more attainable. 

Let’s hop into Fani and Hadar’s talk and digest their key takeaways for developers and organizations. You’ll learn how attitudes, structures, and tools massively impact container security.

Security requires the right mindset across organizations

Mindset is one of the most difficult hurdles to overcome when implementing stronger container security. While teams widely consider security to be important, many often find it annoying in practice. That’s because security has traditionally taken monumental effort to get right. Even today, container security has become “the topic that most developers tend to avoid,” according to Hadar. 

And while teams scramble to meet deadlines or launch dates, the discovery of higher-level vulnerabilities can cause delays. Security soon becomes an enemy rather than a friend. So how do we flip the script? Ideally, a sound container-security workflow should do the following:

  • Support the agile development principles we’ve come to appreciate with microservices development
  • Promote improved application security in production
  • Unify teams around shared security goals instead of creating conflicting priorities

Two main personas are invested in improving application security: developers and DevSecOps. These separate personas have very similar goals. Developers want to ship secure applications that run properly. Meanwhile, DevSecOps teams want everything that’s deployed to be secured. 

The trick to unifying these goals is creating an effective container-security workflow that benefits everyone. Plus, this workflow must overcome the top challenges impacting container security — today and in the future. Let’s analyze those challenges that Hadar highlighted. 

Organizations face common container security challenges

Unraveling the mystery behind security seems daunting, but understanding common challenges can help you form a strategy. Organizations grapple with the following: 

  • Vulnerability overload (a single container image can introduce upwards of 900 vulnerabilities)
  • Prioritizing security fixes over others
  • Understanding how container security fundamentally works (this impacts whether a team can fix issues)
  • Lengthier development pipelines stemming from security issues (and testing)
  • Integrating useful security tools that developers support into existing workflows and systems

From this, we can see that teams have to work together to align on security. This includes identifying security outcomes and defining roles and responsibilities, while causing minimal disruption. Container security should be as seamless as possible. 

DevSecOps maturity and organizational structures matter

DevSecOps stands for Development, Security, and Operations, but what does that mean? Security under a DevSecOps system becomes a shared responsibility and a priority quite early in the software development lifecycle. While some companies have this concept down pat, many others are new to it. Others lie somewhere in the middle. 

As Fani mentioned, a company’s development processes and security maturity determine how they’re categorized. We have two extremes. On one hand, a company might’ve fully “realized” DevSecOps, meaning they’ve successfully scaled their processes and bolstered security. Conversely, a company might be in the exploratory phase. They’ve heard about DevSecOps and know they want it (or need it). But, their development processes aren’t well-entrenched, and their security posture isn’t very strong. 

Those in the exploratory phase might find themselves asking the following questions:

  • Can we improve our security?
  • Which organizations can we learn from?
  • Which best practices should we follow?

Meanwhile, other companies are either DevOps mature (but security immature) or DevSecOps ready. Knowing where your company sits can help you take the correct next steps to either scale processes or security. 

The impact of autonomy vs. centralization on security

You’ll typically see two methodologies used to organize teams. One focuses on autonomy, while the other prioritizes centralization.

Autonomous approaches

Autonomous organizations might house multiple teams that are more or less siloed. Each works on its own application and oversees that application’s security. This involves building, testing, and validation. Security ownership falls on those developers and anyone else integrated within the team. 

But that’s not to say DevSecOps fades completely into the background! Instead, it fills a support and enablement role. This DevSecOps team could work directly with developers on a case-by-case basis or even build useful, internal tools to make life easier. 

Centralized approaches

Alternatively, your individual developers could rally around a centralized DevOps and AppSec (app security) team. This group is responsible for testing and setting standards across different development teams. For example, this DevOps/AppSec team would define approved base images and lay out a framework for container design that meets stringent security protocols. This plan must harmonize with each application team throughout the organization.

Why might you even use approved parent images? These images have undergone rigorous testing to ensure no show-stopping vulnerabilities exist. They also contain basic sets of functionality aimed at different projects. DevSecOps has to find an ideal compromise between functionality and security to support ongoing engineering efforts. 

Whichever camp you fall into will essentially determine how “piecemeal” your plan is. How your developers work best will also influence your security plan. For instance, your teams might be happiest using their own specialized toolsets. In this case, moving to centralization might cause friction or kick off a transition period. 

On the flip side, will autonomous teams have the knowledge to employ strong security after relying on centralized policies? 

It’s worth mentioning that plenty of companies will keep their existing structures. However, any structural changes like those above can affect container security in the short and long term. 

Diverse tools define the container security workflow

Next, Fani showed us just how robust the container security tooling market is. For each step in the development pipeline, and therefore workflow, there are multiple tools for the job. You have your pick between IDEs. You have repositories and version control. You also have integration tools, storage, and orchestration. 

These serve a purpose for the following facets of development: 

  • Local development
  • GitOps
  • CI/CD
  • Registry
  • Production container management

Thankfully, there’s no overarching best or “worst” tool for a given job. But, your organization should choose a tool that delivers exceptional container security with minimal disruption. You should even consider how platforms like Docker Desktop can contribute directly or indirectly to your security workflows, through tools like image management and our Software Bill of Materials (SBOM) feature.

You don’t want to redesign your processes to accommodate a tool. For example, it’s possible that Visual Studio Code suits your teams better than IntelliJ IDEA. The same goes for Jenkins vs. CircleCI, or GitHub vs. Bitbucket. Your chosen tool should fit within existing security processes and even enhance them. Not only that, but these tools should mesh well together to avoid productivity hurdles. 

Container security workflow examples

The theories behind security are important but so are concrete examples. Fani kicked off these examples by hopping into an autonomous team workflow. More and more organizations are embracing autonomy since it empowers individual teams. 

Examining an autonomous workflow

As with any modern workflow, development and security will lean on varying degrees of automation. This is the case with Fani’s example, which begins with a code push to a Git repository. That action initiates a Jenkins job, which is a set of sequential, user-defined tasks. Next, something like the Snyk plugin scans for build-breaking issues. 

If Snyk detects no issues, then the Jenkins job is deemed successful. Snyk monitors continuously from then on and alerts teams to any new issues: 

This flowchart details the security and development steps taken from an initial code push to a successful Jenkins job.

When issues are found, your container security tool might flag those build issues, notify developers, provide artifact access, and offer any appropriate remediation steps. From there, the cycle repeats itself. Or, it might be safer to replace vulnerable components or dependencies with alternatives. 
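As a minimal sketch of such a gate, a pipeline step might build the image and then fail the job whenever the scan reports issues. The registry name and the $GIT_COMMIT variable are placeholders, and docker scan (the Snyk-powered scanning command bundled with Docker Desktop at the time) exits non-zero when it finds vulnerabilities:

# Hypothetical CI step: build the image, then gate the job on the scan result
docker build -t registry.example.com/myapp:"$GIT_COMMIT" .
docker scan registry.example.com/myapp:"$GIT_COMMIT" || exit 1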

Examining a common base workflow

With DevSecOps at the security helm, processes can look a little different. Hadar walked us through these unique container security stages to highlight DevOps’ key role. This is adjacent to — but somewhat separate from — the developer’s workflows. However, they’re centrally linked by a common registry: 

This flow chart illustrates a common base workflow where DevOps vets base images, which are then selected by the developer team from a common registry.

DevOps begins by choosing an appropriate base image, customizing it, optimizing it, and putting it through its paces to ensure strong security. Approved images travel to the common development registry. If issues surface, DevOps will fix any vulnerabilities before making that image available internally.

Each developer then starts with a safe, vetted image that passes scanning without sacrificing important, custom software packages. Issues require fixing and bounce you back to square one, while success means pushing your container artifacts to a downstream registry. 
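A rough sketch of that flow, using a hypothetical internal registry name, might look like this:

# DevOps hardens, scans, and publishes an approved base image
docker build -t registry.internal.example.com/approved/node:18-alpine ./approved-base
docker scan registry.internal.example.com/approved/node:18-alpine
docker push registry.internal.example.com/approved/node:18-alpine

# Application teams then start their own Dockerfiles from the vetted image:
# FROM registry.internal.example.com/approved/node:18-alpine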

Creating safer containers for the future 

Overall, container security isn’t as complex as many think. By aligning on security and developing core processes alongside tooling, it’s possible to make rapid progress. Automation plays a huge role. And while there are many ways to tackle container security workflows, no single approach definitively takes the cake. 

Safer public base images and custom images are important ingredients while building secure applications. You can watch Fani and Hadar’s complete talk to learn more. You can also read more about the Snyk Extension for Docker Desktop on Docker Hub.

How to Use the Alpine Docker Official Image https://www.docker.com/blog/how-to-use-the-alpine-docker-official-image/ Thu, 08 Sep 2022 14:00:00 +0000 https://www.docker.com/?p=37364 With its container-friendly design, the Alpine Docker Official Image (DOI) helps developers build and deploy lightweight, cross-platform applications. It’s based on Alpine Linux, which debuted in 2005, making it one of today’s newest major Linux distros.

While some developers express security concerns when using relatively newer images, Alpine has earned a solid reputation. Developers favor Alpine for the following reasons:  

In fact, the Alpine DOI is one of our most popular container images on Docker Hub. To help you get started, we’ll discuss this image in greater detail and how to use the Alpine Docker Official Image with your next project. Plus, we’ll explore using Alpine to grab the slimmest image possible. Let’s dive in!

In this tutorial:

What is the Alpine Docker Official Image?


The Alpine DOI is a building block for Alpine Linux Docker containers. It’s an executable software package that tells Docker and your application how to behave. The image includes source code, libraries, tools, and other core dependencies that your application needs. These components help Alpine Linux function while enabling developer-centric features. 

The Alpine Docker Official Image differs from other Linux-based images in a few ways. First, Alpine is based on the musl libc implementation of the C standard library — and uses BusyBox instead of GNU coreutils. While GNU packages many Linux-friendly programs together, BusyBox bundles a smaller number of core functions within one executable. 

While our Ubuntu and Debian images leverage glibc and coreutils, these alternatives are comparatively lightweight and resource-friendly, containing fewer extensions and less bloat.

As a result, Alpine appeals to developers who don’t need uncompromising compatibility or functionality from their image. Our Alpine DOI is also user-friendly and straightforward since there are fewer moving parts.

Alpine Linux performs well on resource-limited devices, which is fitting for developing simple applications or spinning up servers. Your containers will consume less RAM and less storage space. 

The Alpine Docker Official Image also offers the following features:

Multi-arch support lets you run Alpine on desktops, mobile devices, rack-mounted servers, Raspberry Pis, and even newer M-series Macs. Overall, Alpine pairs well with a wide variety of embedded systems. 

These are only some of the advantages to using the Alpine DOI. Next, we’ll cover how to harness the image for your application. 

When to use Alpine

You may be interested in using Alpine, but find yourself asking, “When should I use it?” Containerized Alpine shines in some key areas: 

  • Creating servers
  • Router-based networking
  • Development/testing environments

While there are some other uses for Alpine, most projects will fall under these categories. Overall, our Alpine container image excels in situations where space savings and security are critical.

How to run Alpine in Docker

Before getting started, download Docker Desktop and then install it. Docker Desktop is built upon Docker Engine and bundles together the Docker CLI, Docker Compose, and other core components. Launching Docker Desktop also lets you use Docker CLI commands (which we’ll get into later). Finally, the included Docker Dashboard will help you visually manage your images and containers. 

After completing these steps, you’re ready to Dockerize Alpine!

Note: For Linux users, Docker will still work perfectly fine if you have it installed externally on a server, or through your distro’s package manager. However, Docker Desktop for Linux does save time and effort by bundling all necessary components together — while aiding productivity through its user-friendly GUI. 

Use a quick pull command

You’ll have to first pull the Alpine Docker Official Image before using it for your project. The fastest method involves running docker pull alpine from your terminal. This grabs the alpine:latest image (the most current available version) from Docker Hub and downloads it locally on your machine: 
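For reference, the pull and its output look something like this (digest and layer details trimmed):

$ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest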

Your terminal output should show when your pull is complete — and which alpine version you’ve downloaded. You can also confirm this within Docker Desktop. Navigate to the Images tab from the left sidebar. And a list of downloaded images will populate on the right. You’ll see your alpine image, tag, and its minuscule (yes, you saw that right) 5.29 MB size:

Docker Desktop UI with list of downloaded images including Alpine.
Other Linux distro images like Ubuntu, Debian, and Fedora are many, many times larger than Alpine.

That’s a quick introduction to using the Alpine Official Image alongside Docker Desktop. But it’s important to remember that every Alpine DOI version originates from a Dockerfile. This plain-text file contains instructions that tell Docker how to build an image layer by layer. Check out the Alpine Linux GitHub repository for more Dockerfile examples. 

Next up, we’ll cover the significance of these Dockerfiles to Alpine Linux, some CLI-based workflows, and other key information.

Build your Dockerfile

Because Alpine is a standard base for container images, we recommend building on top of it within a Dockerfile. Specify your preferred alpine image tag and add instructions to create this file. Our example takes alpine:3.14 and runs an executable mysql client with it: 

FROM alpine:3.14
RUN apk add --no-cache mysql-client
ENTRYPOINT ["mysql"]

In this case, we’re starting from a slim base image and adding our mysql-client using Alpine’s standard package manager. Overall, this lets us run commands against our MySQL database from within our application. 
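To try the image above, you might build and run it like so; the alpine-mysql-client tag, hostname, and username are placeholders:

docker build -t alpine-mysql-client .
# Arguments after the image name go straight to the mysql entrypoint
docker run -it --rm alpine-mysql-client -h db.example.com -u appuser -p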

This is just one of the many ways to get your Alpine DOI up and running. In particular, Alpine is well-suited to server builds. To see this in action, check out Kathleen Juell’s presentation on serving static content with Docker Compose, Next.js, and NGINX. Navigate to timestamp 7:07 within the embedded video. 

The Alpine Official Image has a close relationship with other technologies (something that other images lack). Many of our Docker Official Images support -alpine tags. For instance, our earlier example of serving static content leverages the node:16-alpine image as a builder.

This relationship makes Alpine and multi-stage builds an ideal pairing. Since the primary goal of a multi-stage build is to reduce your final image size, we recommend starting with one of the slimmest Docker Official Images.
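Here’s a minimal multi-stage sketch along those lines. It assumes a typical static front-end project whose build output lands in a build/ directory; the image versions and paths are examples, not requirements:

FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# The final stage only carries the built assets, keeping the image slim
FROM nginx:1.23-alpine
COPY --from=builder /app/build /usr/share/nginx/html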

Grabbing the slimmest possible image

Pulling an -alpine version of a given image typically yields the slimmest result. You can do this using our earlier docker pull [image] command. Or you can create a Dockerfile and specify this image version — while leaving room for customization with added instructions. 
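For example, appending -alpine to a pinned version tag is usually all it takes (the version below is just one of the tags available at the time of writing):

docker pull nginx:1.23.1-alpine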

In either case, here are some results using a few of our most popular images. You can see how image sizes change with these tags:

Image tag        Image size   image:[version number]-alpine size
python:3.9.13    867.66 MB    46.71 MB
node:18.8.0      939.71 MB    164.38 MB
nginx:1.23.1     134.51 MB    22.13 MB

We’ve listed the versions that the :latest tag resolved to at the time of writing, since :latest is the default image tag Docker grabs from Docker Hub. As shown above with Python, pulling the -alpine image version reduces its footprint by nearly 95%!

From here, the build process (when working from a Dockerfile) becomes much faster. Applications based on slimmer images spin up quicker. You’ll also notice that docker pull and various docker run commands execute more swiftly with -alpine images.

However, remember that you’ll likely have to use this tag with a specified version number for your parent image. Running docker pull python-alpine or docker pull python:latest-alpine won’t work. Docker will alert you that the image isn’t found, the repo doesn’t exist, the command is invalid, or login information is required. This applies to any image. 

Get up and running with Alpine today

The Alpine Docker Official Image shines thanks to its simplicity and small size. It’s a fantastic base image — perhaps the most popular amongst Docker users — and offers plenty of room for customization. Alpine is arguably the most user-friendly, containerized Linux distro. We’ve tackled how to use the Alpine Official Image, and showed you how to get the most from it. 

Want to use Alpine for your next application or server? Pull the Alpine Official Image today to jumpstart your build process. You can also learn more about supported tags on Docker Hub. 

Additional resources

In Case You Missed It: Docker Community All-Hands https://www.docker.com/blog/docker-community-all-hands-6-highlights/ Tue, 06 Sep 2022 14:00:00 +0000 https://www.docker.com/?p=37309

That’s a wrap! Community All-Hands has officially come to a close. Our sixth All-Hands featured over 35 talks across 10 channels — with topics ranging from “getting started with Docker” to running machine learning on AI hardware accelerators.

As always, every channel was buzzing with activity. Your willingness to jump in, ask questions, and help others is what the Docker community’s all about. And we loved having the chance to chat with everyone directly! 

Couldn’t attend our recent Community All-Hands event? We’ll cover some important announcements, interesting presentations, and more that you missed.

Docker CPO looks back at a year of developer obsession

Headlining Community All-Hands were some important announcements on the main stage, kicked off by Jake Levirne, our Head of Products. This past year, our engineers focused on improving developer experiences across every product. Integrated features like Dev Environments, Docker Extensions, SBOM, and Compose V2 have helped streamline workflows — along with numerous usability and OS-specific improvements. 

Over the last year, the Docker engineering team:

  • Released 24 new features
  • Made 37,000 internal commits
  • Curated 52 extensions and counting within Docker Desktop and Docker Hub
  • Hosted over eight million Docker Desktop downloads

We couldn’t have made these improvements without your feedback. Keep your votes, comments, and messages coming — they’re essential for helping us ship the features you need. Keep an eye out for continued updates about UX enhancements, Trusted Open Source, and user-centric partnerships.

How to use SBOMs to support multiple layers

Following Jake, our presenters dove deeper into the technical depths. Next up was a session on viewing images through layered software bills of materials (SBOMs), led by Docker Principal Software Engineer Jim Clark. 

SBOMs are extremely helpful for knowing what’s in your images and apps. But where it gets complex is that many images stem from base images. And even those base images can have their own base images, making full image transparency difficult. Multi-layer images have historically been harder to analyze. To get a full picture of a multi-layer image, you’ll need to know things like:

  • Which packages are included
  • How those packages are distributed between layers
  • How image rebuilds can impact packages
  • If security fixes are available for individual packages

Jim shared that it’s now possible to gather this information. While this feature is still under development, users will soon see layer sizes, total packages per layer, and be able to view complete Dockerfiles on GitHub. 

And as a next step, the team is also focused on understanding shared content and tracking public data. This is another step toward building developer trust, and knowing exactly what’s going into your projects.
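If you want to explore package-level data today, recent Docker Desktop releases already include an experimental docker sbom command; the per-layer views described above extend this kind of output. A quick example against a public image:

docker sbom alpine:latest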

Docker Desktop meets multi-platform image support via containerd

Rounding out our major announcements was Djordje Lukic, Staff Software Engineer, with a session on containerd image management. Containerd has been our container runtime since 2016. Since then, we’ve extended its integration within Docker Desktop and Docker Engine.

Containerd migration offers some key benefits: 

  • There’s less code to maintain
  • We can ship features more rapidly and shorten release cycles
  • It’s easier to improve our developer tooling
  • We can bring multi-platform support to Docker, while following the Open Container Initiative (OCI) more closely and supporting different snapshotters.

Leveraging containerd more heavily means we can consolidate portions of the Docker Daemon. Check out our containerd announcement blog to learn more. 

Showcasing attendees’ favorite talks

Every Community All-Hands channel hosted unique sets of topics, while each session highlighted relationships between Docker and today’s top technologies. Here are some popular talks from Community All-Hands and why they’re worth watching. 

Developing Go Apps with Docker

From the “Best Practices” channel.

Go (or Golang) is a language well-loved and highly sought after by professional developers. We support it as a core language and maintain a Go language-specific use guide within our docs. 

Follow along with Muhammad Quanit as he explores containerized Go applications. Muhammad covers best practices, the importance of multi-stage builds, and other tips for optimizing your Dockerfiles. By using a Go web server, he demonstrates the “Dockerization” process and the usefulness of IDE extensions.

Integration Testing Your Legacy Java Microservice with docker-maven-plugin

From the “Demos” channel.

Enterprises and development teams often maintain Java code bases upwards of 10 years old. While these services may still be functional, it’s been challenging to bind automated testing to each individual microservice repository. Docker Compose does enable batch testing, but finer, per-repository granularity is still needed.

Join Terry Brady as he shows you how to run JUnit microservices tests, automated maven testing, code coverage calculation, and even test-resource management. Don’t worry about rewriting your legacy code. Instead, learn how integration testing and dedicated test containers help make life easier. 

How Does Docker Run Machine Learning on Specialized AI Hardware Accelerators

From the “Cutting Edge” channel.

Currently, 35% of companies report using AI in some fashion, while another 42% of respondents say they’re considering it. Machine learning (ML) — a subset of AI — has been critical to creating predictive models, extracting value from big data, and automating many tedious processes. 

Shashank Prasanna outlines just how important specialized hardware is to powering these algorithms. And while ML gains steam, companies are unveiling numerous supporting chipsets and GPUs. How does Docker handle these accelerators? Follow along as Shashank highlights Docker’s capabilities within multi-processor systems, and how these differ from traditional, single-CPU systems from an AI standpoint.

But wait, there’s more! 

The above talks are just a small sample of our learning sessions. Swing by our Docker YouTube channel to browse through our entire content library. 

You can also check out playlists from each event channel: 

  • Mainstage – showcases of the community and Docker’s latest developments 
  • Best Practices – tips to get the most from your Docker applications
  • Demos – in-depth presentations that tackle unique use cases, step by step
  • Security – best practices for building stronger, attack-resistant containers and applications
  • Extensions – the basics of building extensions while demonstrating their usefulness in different scenarios
  • Cutting Edge – talks about how Docker and today’s leading technologies unite
  • International Waters – multilingual tech talks and panel discussions on trends
  • Open Source – panels on the Docker Sponsored Open-Source Program and the value of open source
  • Unconference – informal talks on getting started with Docker and Docker experiences

Thank you and see you next time!

From key Docker announcements, to technical talks, to our buzzworthy Community Awards ceremony, we had an absolute blast with you at Community All-Hands. Also, a huge special thanks to DJ Alessandro Vozza for keeping the music and excitement going!

And don’t forget to download the latest Docker Desktop to check out the releases and try out any new tricks you’ve learned.

See you at our next All-Hands event, and thank you for making this community stronger. Happy developing!

Learn about our recent releases

How to Build and Run Next.js Applications with Docker, Compose, & NGINX https://www.docker.com/blog/how-to-build-and-run-next-js-applications-with-docker-compose-nginx/ Wed, 31 Aug 2022 14:00:00 +0000 https://www.docker.com/?p=37047 At DockerCon 2022, Kathleen Juell, a Full Stack Engineer at Sourcegraph, shared some tips for combining Next.js, Docker, and NGINX to serve static content. With nearly 400 million active websites today, efficient content delivery is key to attracting new web application users.

In some cases, using Next.js can boost deployment efficiency, accelerate time to market, and help attract web users. Follow along as we tackle building and running Next.js applications with Docker. We’ll also cover key processes and helpful practices for serving that static content. 

Why serve static content with a web application?

According to Kathleen, the following are the benefits of serving static content: 

  • Fewer moving parts, like databases or other microservices, directly impact page rendering. This backend simplicity minimizes attack surfaces. 
  • Static content stands up better (with fewer uncertainties) to higher traffic loads.
  • Static websites are fast since they don’t require repeated rendering.
  • Static website code is stable and relatively unchanging, improving scalability.
  • Simpler content means more deployment options.

Since we know why building a static web app is beneficial, let’s explore how.

Building our services stack

To serve static content efficiently, a three-pronged services approach composed of Next.js, NGINX, and Docker is useful. While it’s possible to run a Next.js server, offloading those tasks to an NGINX server is preferable. NGINX is event-driven and excels at rapidly serving content thanks to its single-threaded architecture. This enables performance optimization even during periods of higher traffic.  

Luckily, containerizing a cross-platform NGINX server instance is pretty straightforward. This setup is also resource friendly. Below are some of the reasons why Kathleen — explicitly or perhaps implicitly — leveraged these three technologies.

Docker Desktop also gives us the tools needed to build and deploy our application. It’s important to install Docker Desktop before recreating Kathleen’s development process. 

The following trio of services will serve our static content:

First, our auth-backend has a build context rooted in its own directory and a port mapping. It’s based on a slimmer alpine flavor of the Node.js Docker Official Image and uses named Dockerfile build stages, so COPY --from instructions keep working even if the stages are later reordered.

Second, our client service has its own build context and a named volume, staticbuild, mapped to its /app/out directory. This lets us mount our volume within our NGINX container. We’re not mapping any ports since NGINX will serve our content.

Third, we’ll containerize an NGINX server that’s based on the NGINX Docker Official Image.

As Kathleen mentions, ending this client service’s Dockerfile with a RUN command is key. We want the container to exit after completing the yarn build process. This process generates our static content and should only happen once for a static web application.
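A rough sketch of what such a client Dockerfile might look like follows; the Node version, file names, and the contents of the yarn build script are assumptions rather than Kathleen’s exact file:

FROM node:16-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
# Ending with RUN means the static export (written to /app/out here) is produced
# as part of the image build; no long-running server process is started
RUN yarn build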

Each component is accounted for within its own container. Now, how do we seamlessly spin up this multi-container deployment and start serving content? Let’s dive in!

Using Docker Compose and Docker volumes

The simplest way to orchestrate multi-container deployments is with Docker Compose. This lets us define multiple services within a unified configuration, without having to juggle multiple files or write complex code. 

We use a compose.yml file to describe our services, their contexts, networks, ports, volumes, and more. These configurations influence app behavior. 

Here’s what our complete Docker Compose file looks like: 

services:
  auth-backend:
    build:
      context: ./auth-backend
    ports:
      - "3001:3001"
    networks:
      - dev
	
  client:
    build:
      context: ./client
    volumes:
      - staticbuild:/app/out
    networks:
      - dev

  nginx:
    build:
      context: ./nginx
    volumes:
      - staticbuild:/app/public
    ports:
      - "8080:80"
    networks:
      - dev

networks:
  dev:
    driver: bridge

volumes:
  staticbuild:

You’ll also see that we’ve defined our networks and volumes in this file. These services all share the dev network, which lets them communicate with each other while remaining discoverable. Notice, too, the common volume shared between these services. We’ll now explain why that’s significant.
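With this file saved as compose.yml in the project root (and each service’s build context in place), a single command builds the images and starts the whole stack:

docker compose up --build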

Using mounted volumes to share files

Specifically, this example leverages named volumes to share files between containers. By mapping the staticbuild volume to Next.js’ default out directory location, you can export your build and serve content with your NGINX server. This typically exists as one or more HTML files. Note that NGINX uses the /app/public directory by comparison.

While Next.js helps present your content on the frontend, NGINX delivers those important resources from the backend. 
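As a sketch, the NGINX service’s Dockerfile can stay tiny; the config file name and paths here are assumptions about a typical setup:

FROM nginx:1.23-alpine
# The copied config points the server's root at /app/public, where the shared
# staticbuild volume is mounted at runtime
COPY nginx.conf /etc/nginx/conf.d/default.conf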

Leveraging A/B testing to create tailored user experiences

You can customize your client-side code to change your app’s appearance, and ultimately the end-user experience. This code impacts how page content is displayed while something like an NGINX server is running. It may also determine which users see which content — something that’s common based on sign-in status, for example. 

Testing helps us understand how application changes can impact these user experiences, both positively and negatively. A/B testing helps us uncover the “best” version of our application by comparing features and page designs. How does this look in practice? 

Specifically, you can use cookies and hooks to track user login activity. When a user logs in, they’ll see something like user stories (from Kathleen’s example). Logged-out users won’t see this content. Alternatively, a web user might only have access to certain pages once they’re authenticated. It’s your job to monitor user activity, review any feedback, and determine if those changes bring clear value. 

These are just two use cases for A/B testing, and the possibilities are nearly endless when it comes to conditionally rendering static content with Next.js. 

Containerize your Next.js static web app

There are many different ways to serve static content. However, Kathleen’s three-service method remains an excellent example. It’s useful both during exploratory testing and in production. To learn more, check out Kathleen’s complete talk.

By containerizing each service, your application remains flexible and deployable across any platform. Docker can help developers craft accessible, customizable user experiences within their web applications. Get started with Next.js and Docker today to begin serving your static web content! 

Additional Resources

How to Use the Redis Docker Official Image https://www.docker.com/blog/how-to-use-the-redis-docker-official-image/ Wed, 24 Aug 2022 14:00:00 +0000 https://www.docker.com/?p=36720 Maintained in partnership with Redis, the Redis Docker Official Image (DOI) lets developers quickly and easily containerize a Redis instance. It streamlines the cross-platform deployment process — even letting you use Redis with edge devices if they support your workflows. 

Developers have pulled the Redis DOI over one billion times from Docker Hub. As the world’s most popular key-value store, Redis helps apps concurrently access critical bits of data while remaining resource friendly. It’s highly performant, in-memory, networked, and durable. It also stands apart from relational databases like MySQL and PostgreSQL that use tabular data structures. From day one, Redis has also been open source. 

Finally, Redis cluster nodes are horizontally scalable — making it a natural fit for containerization and multi-container operation. Read on as we explore how to use the Redis Docker Official Image to containerize and accelerate your Redis database deployment.

In this tutorial:

What is the Redis Docker Official Image?


The Redis DOI is a building block for Redis Docker containers. It’s an executable software package that tells Docker and your application how to behave. It bundles together source code, dependencies, libraries, tools, and other core components that support your application. In this case, these components determine how your app and Redis database interact.

Our Redis Docker Official Image supports multiple CPU architectures. An assortment of over 50 supported tags lets you choose the best Redis image for your project. They’re also multi-layered and run using a default configuration (if you’re simply using docker pull). Complexity and base images also vary between tags. 

That said, you can also configure your Redis Official Image’s Dockerfile as needed. We’ll touch on this while outlining how to use the Redis DOI. Let’s get started.

How to run Redis in Docker

Before proceeding, we recommend installing Docker Desktop. Desktop is built upon Docker Engine and packages together the Docker CLI, Docker Compose, and more. Running Docker Desktop lets you use Docker commands. It also helps you manage images and containers using the Docker Dashboard UI. 

Use a quick pull command

Next, you’ll need to pull the Redis DOI to use it with your project. The quickest method involves visiting the image page on Docker Hub, copying the docker pull command, and running it in your terminal:
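For reference, that command is simply:

docker pull redis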

Your output confirms that Docker has successfully pulled the :latest Redis image. You can also verify this by hopping into Docker Desktop and opening the Images interface from the left sidebar. Your redis image automatically appears in the list:

Docker Desktop list of local images on disk, including the Redis official Docker image.

We can also see that our new Redis image is 111.14 MB in size. This is pretty lightweight compared to many images. However, using an alpine variant like redis:alpine3.16 further slims your image.

Now that you’re acquainted with Docker Desktop, let’s jump into our CLI workflow to get Redis up and running. 

Start your Redis instance

Redis acts as a server, and related server processes power its functionality. We need to start a Redis instance, or software server process, before linking it with our application. Luckily, you can create a running instance with just one command: 

 docker run --name some-redis -d redis 

We recommend naming your container. This helps you reference it later on. It also makes it easier to run additional commands that involve it. Your container will run until you stop it.

The -d flag in this command tells Docker to run your Redis service in “detached” mode. Redis, therefore, runs in the background. Your container will also automatically exit when its root process exits. You’ll see that we’re not explicitly telling the service to “start” within this command. By leaving this verbiage out, our Redis service will start and continue running — remaining usable to our application.

Set up Redis persistent storage

Persistent storage is crucial when you want your application to save data between runs. You can have Redis write its data to a destination like an SSD. Persistence is also useful for keeping log files across restarts. 

You can capture point-in-time snapshots of your data using the Redis Database (RDB) method. This lets you designate snapshot intervals and record data at certain points in time. However, that running container from our initial docker run command is already occupying the some-redis name. You should remove (or stop) this container before moving on, since it’s not critical for this example.

Once that’s done, this command triggers a persistent storage snapshot every 60 seconds, provided at least one write occurred in that window:

 docker run --name some-redis -d redis redis-server --save 60 1 --loglevel warning 

The RDB approach is valuable as it enables “set-and-forget” persistence. It also generates more logs. Logging can be useful for troubleshooting, yet it also requires you to monitor accumulation over time. 

However, you can also forego persistence entirely or choose another option. To learn more, check out Redis’ documentation

Redis stores your persisted data in the VOLUME /data location. These connected volumes are shareable between containers. This shareability becomes useful when Redis lives within one container and your application occupies another. 
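A hedged example of wiring that up with a named volume (the redis-data name is arbitrary, and you should stop or remove any earlier some-redis container first):

docker volume create redis-data
docker run --name some-redis -d -v redis-data:/data redis redis-server --save 60 1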

Connect with the Redis CLI

The Redis CLI lets you run commands directly within your running Redis container. However, this isn’t automatically possible via Docker. Enter the following commands to enable this functionality: 

 docker network create some-network
 docker run -it --network some-network --rm redis redis-cli -h some-redis

Your Redis service understands Redis CLI commands. Numerous commands are supported, as are different CLI modes. Read through the Redis CLI documentation to learn more. 

Once you have CLI functionality up and running, you’re free to leverage Redis more directly!

Configurations and modules

Finally, we’ve arrived at customization. While you can run a Redis-powered app using defaults, you can tweak your Dockerfile to grab your pre-existing redis.conf file. This better supports production applications. While Redis can successfully start without these files, they’re central to configuring your services. 

You can see what a redis.conf file looks like on GitHub. Otherwise, here’s a sample Dockerfile:

FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]

You can also use docker run to achieve this. However, you should first do two things for this method to work correctly. First, create the /myredis/config directory on your host machine. This is where your configuration files will live. 

Second, open Docker Desktop and click the Settings gear in the upper right. Choose Resources > File Sharing to view your list of directories. You’ll see a grayed-out directory entry at the bottom, which is an input field for a named directory. Type in /myredis/config there and hit the “+” button to locally verify your file path:

Docker Desktop resource file sharing settings with the `/myredis/config` added.

You’re now ready to run your command! 

 docker run -v /myredis/config:/usr/local/etc/redis --name myredis redis redis-server /usr/local/etc/redis/redis.conf

The Dockerfile gives you more granular control over your image’s construction. Alternatively, the CLI option lets you run your Redis container without a Dockerfile. This may be more approachable if your needs are more basic. Just ensure that your mapped directory is writable and exists locally. 

Also, consider the following: 

  • If you edit your Redis configurations on the fly, you’ll have to use CONFIG REWRITE to automatically identify and apply any field changes on the next run.
  • You can also apply configuration changes manually.

Remember how we connected the Redis CLI earlier? You can now pass arguments directly through the Redis CLI (ideal for testing) and edit configs while your database server is running. 

Notes on using Redis modules

Redis modules let you extend your Redis service, build new services, and adapt your database without taking a performance hit. Redis also processes them in memory. These standard modules support querying, search, JSON processing, filtering, and more. As a result, Docker Hub’s redislabs/redismod image bundles seven of these official modules together:

  1. RedisBloom
  2. RedisTimeSeries
  3. RedisJSON
  4. RedisAI
  5. RedisGraph
  6. RedisGears
  7. RediSearch

If you’d like to spin up this container and experiment, simply enter docker run -d -p 6379:6379 redislabs/redismod in your terminal. You can open Docker Desktop to view this container like we did earlier on. 

You can view Redis’ curated modules or visit the Redis Modules Hub to explore further.

Get up and running with Redis today

We’ve explored how to successfully Dockerize Redis. Going further, it’s easy to grab external configurations and change how Redis operates on the fly. This makes it much easier to control how Redis interacts with your application. Head on over to Docker Hub and pull your first Redis Docker Official Image to start experimenting. 

The Redis Stack also helps extend Redis within Docker. It adds modern, developer-friendly data models and processing engines. The Stack also grants easy access to full-text search, document store, graphs, time series, and probabilistic data structures. Redis has published related container images through the Docker Verified Publisher (DVP) program. Check them out!
