160% Year-over-Year Growth in Pulls of Red Hat’s Universal Base Image on Docker Hub https://www.docker.com/blog/blog-red-hat-universal-base-image-hub-pulls-grow/ Thu, 25 May 2023 14:00:00 +0000 Red Hat’s Universal Base Image eliminates “works on my machine” headaches for developers.

It’s Red Hat Summit week, and we wanted to use this as an opportunity to highlight several aspects of Docker’s partnership with Red Hat. In this post, we highlight Docker Hub and Red Hat’s Universal Base Images (UBI). Also check out our new post on simplifying Kubernetes development with Docker Desktop + Red Hat OpenShift.

Docker Hub is the world’s largest public registry of artifacts for developers, providing the fundamental building blocks — web servers, runtimes, databases, and more — for any application. It offers more than 15 million images for containers, serverless functions, and Wasm with support for multiple operating systems (Linux, Windows) and architectures (x86, ARM). Altogether, these 15 million images are pulled more than 16 billion times per month by over 20 million IPs.

Red Hat logo with verified publisher button and text that shows 160% increase in pulls, with a docker logo at the bottom

While the majority of the 15 million images are community images, a subset is trusted content, both open source and commercial, curated and actively maintained by Docker, upstream open source communities, and Docker’s ISV partners.

Docker and Red Hat have been partners since 2013, and Red Hat started distributing Linux images through Docker Hub in 2014. To help developers reduce the “works on my machine” finger-pointing and ensure consistency between development and production environments, in 2019 Red Hat launched Universal Base Image (UBI). Based on Red Hat Enterprise Linux (RHEL), UBI provides the same reliability, security, and performance as RHEL. Furthermore, to meet the different use cases of developers and ISVs, UBI comes in four sizes — standard, minimal, multi-service, and micro — and offers channels so that additional packages can be added as needed.

Given Docker’s reach in the developer community and the breadth and depth of developer content on Docker Hub, Docker and Red Hat agreed that distributing Red Hat UBI on Docker Hub made a lot of sense. Thus, in May 2021 Red Hat became a Docker Verified Publisher (DVP) and launched Red Hat UBIs on Docker Hub. As a DVP, Red Hat’s UBIs are easier for developers to discover, and the verified publisher status gives developers an extra level of assurance that the images they’re using are accessible, safe, and maintained.
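
For example, pulling a UBI variant from Docker Hub takes a single command. Here’s a quick sketch (the repository names below follow Red Hat’s verified publisher namespace on Hub; check the redhat organization for the exact variant and tag you need):

docker pull redhat/ubi9
docker pull redhat/ubi9-minimal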

The results? Tens of thousands of developers are pulling Red Hat UBI millions of times every month. Furthermore, pulls of Red Hat’s Universal Base Image have grown 2.6x in the last 12 months alone. Such growth points to the value Docker and Red Hat together bring to the developer community.

… and we’re not finished! Having provided Red Hat UBIs directly to developers, now Docker and Red Hat are working together with Docker’s ISV partners and open source communities to bring the value of UBI to those software stacks as well. Stay tuned for more!

Learn More

Automate API Tests and Debug in Docker With Postman’s Newman Extension https://www.docker.com/blog/automate-api-tests-and-debug-in-docker-with-postmans-newman-extension/ Thu, 17 Nov 2022 15:00:00 +0000 https://www.docker.com/?p=38801 This post was co-written by Joyce Lin, Head of Developer Relations at Postman.


Over 20 million developers use the Postman API platform, and its Collections feature is a standout within the community. At its core, a collection is a group of API calls. 

While not all collections evolve into anything more complex, many are foundational building blocks for Postman’s more advanced features. For example, a collection can contain API tests and documentation, inform mock servers, or represent a sequence of API calls.

Postman Collection Tests
An example of a Postman collection containing API tests within the Postman app.

Storing API requests in a collection lets users explore, run, and share their work with others. We’ll explain why that matters and how you can start using Postman’s Newman Docker extension.

Why run a Postman collection in Docker Desktop?

Newman Run Dashboard
The Newman extension in Docker Desktop displays collection run results.

Since a collection is a sequence of API calls, it can represent any API workflow imaginable. For example, here are some use cases for running collections during development: 

  • Automation – Automate API testing to run tests locally
  • Status checks – Run collections to assess the current status and health of your API
  • Debugging – Log test results and filter by test failures to debug unexpected API behavior
  • Execution – Run collections to execute an API workflow against different environment configurations

For each use case, you may want to run collections in different scenarios. Here are some scenarios involving API test automation: 

  • Testing locally during development
  • Testing as part of a CI/CD pipeline
  • Testing based on an event trigger
  • Health checking on a predetermined schedule

And you can run collections in several ways. One method leverages Newman — Postman’s open-source library — with Docker. You can use Newman from your command line or with functions, scripts, and containerized applications. You can even run your collection from Docker Desktop!
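
For example, here’s a minimal sketch of running a collection with the Newman image from the command line (the collection URL is a placeholder for an export or share link from your own workspace):

docker run -t postman/newman run https://www.postman.com/collections/<your-collection-id>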

Getting started with Newman in Docker Desktop

The Postman Docker Extension uses Postman’s Newman image to run a collection and display the results. In this section, we’ll test drive the extension and run our first collection.

Setting up

  1. Install the latest version of Docker Desktop. Install the Newman extension for Docker Desktop.
  2. Sign up for a free Postman account and generate an API key. This will let you access your Postman data like collections and environments.
  3. Log into your Postman account and create a Postman collection. If you don’t have a Postman collection yet, you can fork this sample collection to your own workspace. Afterwards, this forked collection will appear as your own collection.

Running a Postman collection

  1. Enter your Postman API key and click “Get Postman Collections.”
Postman API Key

  2. Choose which collection you want to run.
  3. (Optional) Select an environment to run alongside your collection. In a Postman environment, you can define different server configurations and credentials corresponding to each server environment.
  4. Click “Run Collection” and review the results of your API calls. You can filter by failed tests and drill down into the details.
  5. Repeat this process with other collections and environments as needed.

Contribute to this extension or make your own

This extension is an open source, community project, so feel free to contribute your ideas. Or you can fork it and make it your own. Give Newman a try by visiting Docker Hub and opening the extension within Docker Desktop. You can also install Newman directly within Docker Desktop’s Extensions Marketplace. 

Want to experiment even further? You can bring your own ideas to life via our Extensions SDK GitHub page. Here you’ll find useful code samples to kickstart your next project. 

Special thanks from Joyce to Postman for supporting open-source projects like Newman and empowering the community to build integrations, and to Software Development Engineer in Test (SDET) Danny Dainton for his UI work around collection run results.

Developing Go Apps With Docker https://www.docker.com/blog/developing-go-apps-docker/ Wed, 02 Nov 2022 14:00:00 +0000 https://www.docker.com/?p=38582 Go (or Golang) is one of the most loved and wanted programming languages, according to Stack Overflow’s 2022 Developer Survey. Thanks to its smaller binary sizes vs. many other languages, developers often use Go for containerized application development. 

Mohammad Quanit explored the connection between Docker and Go during his Community All-Hands session. Mohammad shared how to Dockerize a basic Go application while exploring each core component involved in the process.

Follow along as we dive into these containerization steps. We’ll explore using a Go application with an HTTP web server — plus key best practices, optimization tips, and ways to bolster security. 

Go application components

Creating a full-fledged Go application requires you to create some Go-specific components. These are essential to many Go projects, and the containerization process relies equally heavily on them. Let’s take a closer look at those now. 

Using main.go and go.mod

Mohammad mainly highlights the main.go file since you can’t run an app without executable code. In Mohammad’s case, he created a simple web server with two routes: one that prints a “Hello World” message using Go’s fmt package, and one that returns the current time.

A main.go file creating a web server with an I/O format with print functionality and a route to return the current time.
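
Here’s a minimal sketch of what such a main.go might look like, assuming a root route that prints “Hello World” and a /time route (handler names are illustrative):

package main

import (
    "fmt"
    "log"
    "net/http"
    "time"
)

// helloHandler writes a fixed greeting to the response.
func helloHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "Hello World")
}

// timeHandler writes the current server time to the response.
func timeHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, time.Now().Format(time.RFC1123))
}

func main() {
    http.HandleFunc("/", helloHandler)
    http.HandleFunc("/time", timeHandler)

    // Listen on port 8081, matching the URLs used later in this post.
    log.Fatal(http.ListenAndServe(":8081", nil))
}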

What’s nice about Mohammad’s example is that it isn’t too lengthy or complex. You can emulate this while creating your own web server or use it as a stepping stone for more customization.

Note: You don’t explicitly need a file named main.go for a web server — since you can name the file anything you want — but you do need to declare package main and define a func main() within your code. Both exist in our sample above.

We always recommend confirming that your code works as expected. Enter the command go run main.go to spin up your application. You can alternatively replace main.go with your file’s specific name. Then, open your browser and visit http://localhost:8081 to view your “Hello World” message or equivalent. Since we have two routes, navigating to http://localhost:8081/time displays the current time thanks to Mohammad’s second function. 

Next, we have the go.mod file. You’ll use this as a root file for your Go packages, module path for imports (shown above), and for dependency requirements. Go modules also help you choose a directory for your project code. 
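
A minimal go.mod for a server like this only needs a module path and a Go version (the module path below is illustrative; this sample has no external dependencies to list):

module github.com/example/go-docker-demo

go 1.19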

With these two pieces in place, you’re ready to create your Dockerfile.

Creating your Dockerfile

Building and deploying your Dockerized Go application means starting with a software image. While you can pull this directly from Docker Hub (using the CLI), beginning with a Dockerfile gives you more configuration flexibility. 

You can create this file within your favorite editor, like VS Code. We recommend VS Code since it supports the official Docker extension. This extension supports debugging, autocompletion, and easy project file navigation. 

Choosing a base image and including your application code is pretty straightforward. Since Mohammad is using Go, he kicked off his Dockerfile by specifying the golang Docker Official Image as a parent image. Docker will build your final container image from this. 

You can choose whatever version you’d like, but a pinned version like golang:1.19.2-bullseye is both stable and slim. Newer image versions like these are also safe from October 2022’s Text4Shell vulnerability.

You’ll also need to do the following within your Dockerfile:

  • Include an app directory for your source code
  • Copy everything from the root directory into your app directory
  • Copy your Go files into your app directory and install dependencies
  • Build your app with configuration
  • Tell your Docker container to listen on a certain port at runtime
  • Define an executable command that runs once your container starts

With these points in mind, here’s how Mohammad structured his basic Dockerfile:

# Specifies a parent image
FROM golang:1.19.2-bullseye

# Creates an app directory to hold your app’s source code
WORKDIR /app

# Copies everything from your root directory into /app
COPY . .

# Installs Go dependencies
RUN go mod download

# Builds your app with optional configuration
RUN go build -o /godocker

# Tells Docker which network port your container listens on
EXPOSE 8080

# Specifies the executable command that runs when the container starts
CMD [ "/godocker" ]

From here, you can run a quick CLI command to build your image from this file: 

docker build --rm -t [YOUR IMAGE NAME]:alpha .

This creates an image while removing any intermediate containers created with each image layer (or step) throughout the build process. You’re also tagging your image with a name for easier reference later on. 

Confirm that Docker built your image successfully by running the docker image ls command:

A terminal running the docker image ls command and showing that the image was built successfully.

If you’ve already pulled or built images in the past and kept them, they’ll also appear in your CLI output. However, you can see Mohammad’s go-docker image listed at the top since it’s the most recent. 

Making changes for production workloads

What if you want to account for code or dependency changes that’ll inevitably occur with a production Go application? You’ll need to tweak your original Dockerfile and add some instructions, according to Mohammad, so that changes are visible and the build process succeeds:

FROM golang:1.19.2-bullseye

WORKDIR /app

# Effectively tracks changes within your go.mod file
COPY go.mod .

RUN go mod download

# Copies your source code into the app directory
COPY main.go .

RUN go build -o /godocker

EXPOSE 8080

CMD [ "/godocker" ]

After making those changes, you’ll want to run the same docker build and docker image ls commands. Now, it’s time to run your new image! Enter the following command to start a container from your image: 

docker run -d -p 8080:8081 --name go-docker-app [YOUR IMAGE NAME]:alpha

Confirm that this worked by entering the docker ps command, which generates a list of your containers. If you have Docker Desktop installed, you can also visit the Containers tab from the Docker Dashboard and locate your new container in the list. This also applies to your image builds — just use the Images tab instead.

Congratulations! By tracing Mohammad’s steps, you’ve successfully containerized a functioning Go application. 

Best practices and optimizations

While our Go application gets the job done, Mohammad’s final image is pretty large at 913MB. The client (or end user) shouldn’t have to download such a hefty file. 

Mohammad recommends using a multi-stage build to only copy forward the components you need between image layers. Although we start with a golang:version as a builder image, defining a second build stage and choosing a slim alternative like alpine helps reduce image size. You can watch his step-by-step approach to tackling this. 

This is beneficial and common across numerous use cases. However, you can take things a step further by using FROM scratch in your multi-stage builds. This empty image is the smallest we offer and accepts static binaries as executables — making it perfect for Go application development.

You can learn more about our scratch image on Docker Hub. Although scratch is listed on Hub, you can’t pull it; you can only reference it directly in your Dockerfile.
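
Here’s a minimal sketch of what such a multi-stage Dockerfile might look like for this app, assuming the same /godocker binary as before (CGO_ENABLED=0 produces a statically linked binary that can run on scratch):

# Build stage: compile a static Go binary
FROM golang:1.19.2-bullseye AS build

WORKDIR /app

COPY go.mod .

RUN go mod download

COPY main.go .

# CGO_ENABLED=0 makes the binary fully static so it can run on scratch
RUN CGO_ENABLED=0 go build -o /godocker

# Final stage: start from an empty image and copy in only the binary
FROM scratch

COPY --from=build /godocker /godocker

EXPOSE 8080

CMD [ "/godocker" ]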

Develop your Go application today

Mohammad Quanit outlined some user-friendly development workflows that can benefit both newer and experienced Go users. By following his steps and best practices, it’s possible to create cross-platform Go apps that are slim and performant. Docker and Go inherently mesh well together, and we also encourage you to explore what’s possible through containerization. 

Want to learn more?

Announcing Docker Hub OCI Artifacts Support https://www.docker.com/blog/announcing-docker-hub-oci-artifacts-support/ Mon, 31 Oct 2022 16:00:00 +0000 https://www.docker.com/?p=38556 We’re excited to announce that Docker Hub can now help you distribute any type of application artifact! You can now keep everything in one place without having to leverage multiple registries.

Before today, you could only use Docker Hub to store and distribute container images — or artifacts usable by container runtimes. This became a limitation of our platform, since container image distribution is just the tip of the application delivery iceberg. Nowadays, modern application delivery requires numerous types of artifacts beyond container images, such as Helm charts and volumes.

Developers often share these with clients that need them since they add immense value to each project. And while the OCI working groups are busy releasing the latest OCI Artifact Specification, we still have to package application artifacts as OCI images in the meantime. 

Docker Hub acts as an image registry and is perfectly suited for distributing application artifacts. That’s why we’ve added support for any software artifact — packaged as an OCI image — to Docker Hub.

What’s the Open Container Initiative (OCI)?

Back in 2015, we helped establish the Open Container Initiative as an open governance structure to standardize container image formats, container runtimes, and image distribution.

The OCI maintains a few core specifications. These govern the following:

  • How to package filesystem bundles
  • How to launch containerized, cross-platform apps
  • How to make packaged content accessible to remote clients

The Runtime Specification determines how OCI images and runtimes interact. Next, the Image Specification outlines how to create OCI images. Finally, the Distribution Specification defines how to make content distribution interoperable.

The OCI’s overall aim is to boost transparency, runtime predictability, software compatibility, and distribution. We’ve since donated our own container format and runC OCI-compliant runtime to the OCI, plus given the OCI-compliant distribution project to the CNCF.

Why are we adding OCI support? 

Container images are integral to supporting your containerized application builds. We know that images accumulate between projects, making centralized cloud storage essential to efficiently manage resources. Developers shouldn’t have to rely on local storage or wonder if these resources are readily accessible. However, we also know that developers want to store a variety of artifacts within Docker Hub. 

Storing your artifacts in Docker Hub unlocks “anywhere access” while also enabling improved collaboration through Docker Hub’s standard sharing capabilities. This aligns us more closely with the OCI’s content distribution mission by giving users greater control over key pieces of application delivery.

How do I manage different OCI artifacts?

We recommend using dedicated tools to help manage non-container OCI artifacts, like the Helm CLI for Helm charts or the OCI Registry-as-Storage (ORAS) CLI for arbitrary content types.

Let’s walk through a few use cases to showcase OCI support in Docker Hub.

Working with Helm charts

Helm chart support was your most-requested feature, and we’ve officially added it to Docker Hub! So, how do you take advantage? We’ll create a simple Helm chart and push it to Docker Hub. This process will follow Helm’s official guide for storing Helm charts as OCI images in registries.

First, we’ll create a demo Helm chart:

$ helm create demo

This’ll generate a familiar Helm chart boilerplate of files that you can edit:

demo
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 10 files

Once we’re done editing, we’ll need to package the Helm chart as an OCI image:

$ helm package demo

Successfully packaged chart and saved it to: /Users/martine/tmp/demo-0.1.0.tgz

Don’t forget to log into Docker Hub before pushing your Helm chart. We recommend creating a Personal Access Token (PAT) for this. You can export your PAT via an environment variable, and login, as follows:

$ echo $REG_PAT | helm registry login registry-1.docker.io -u martine --password-stdin

Pushing your Helm chart

You’re now ready to push your first Helm chart to Docker Hub! But first, make sure you have write access to your Helm chart’s destination namespace. In this example, let’s push to the docker namespace:

$ helm push demo-0.1.0.tgz oci://registry-1.docker.io/docker

Pushed: registry-1.docker.io/docker/demo:0.1.0
Digest: sha256:1e960ad1693c234b66ec1f9ddce80986cbf7159d2bb1e9a6d2c2cd6e89925e54

Viewing your Helm chart and using filters

Now, if you log in to Docker Hub and navigate to the demo repository detail, you’ll find your Helm chart in the list of repository tags:

Helm Type Docker Hub

You can navigate to the Helm chart page by clicking on the tag. The page displays useful Helm CLI commands:

Helm CLI Commands

Repository content management is now easier. We’ve improved content discoverability by adding a drop-down button to quickly filter the repository list by content type. Simply click the Content drop-down and select Helm from the list:

Helm Type Selection

Working with volumes

Developers use volumes throughout the Docker ecosystem to share arbitrary application data like database files. You can already back up your volumes using the Volumes Backup & Share extension that we recently launched. You can now also filter repositories to find those containing volumes using the same drop-down menu.

But until Volumes Backup & Share pushes volumes as OCI artifacts instead of images (coming soon!), you can use the ORAS CLI to push volumes.

Note: We recommend ORAS CLI versions 0.15 or later since these bring full OCI registry client functionality.

Let’s walk through a simple use case that mirrors the examples documented by the ORAS CLI. First, we’ll create a simple file we want to package as a volume:

$ echo "bar" > foo.txt

For Docker Hub to recognize this volume, we must attach a config file to the OCI image upon creation and mark it with a specific media type. The file can contain arbitrary content, so let’s create one:

$ echo "{\"name\":\"foo\",\"value\":\"bar\"}" > config.json

With this step completed, you’re now ready to push your volume.

Pushing your volume

Here’s where the magic happens. The media type Docker Hub needs to successfully recognize the OCI image as a volume is application/vnd.docker.volume.v1+tar.gz. You can attach the media type to the config file and push it to Docker Hub with the following command (plus its resulting output):

$ oras push registry-1.docker.io/docker/demo:0.0.1 --config config.json:application/vnd.docker.volume.v1+tar.gz foo.txt:text/plain

Uploading b5bb9d8014a0 foo.txt
Uploaded  b5bb9d8014a0 foo.txt
Pushed registry-1.docker.io/docker/demo:0.0.1
Digest: sha256:f36eddbab8459d0ad1436b7ca8af6bfc512ec74f45d8136b53c16db87562016e

We now have two types of content in the demo repository as shown in the following breakdown:

Volume Content Type List

If you navigate to the content page, you’ll see some basic information that we’ll expand upon in future iterations. This will boost visibility into a volume’s contents.

Volume Details

Handling generic content types

If you don’t use the application/vnd.docker.volume.v1+tar.gz media type when pushing the volume with the ORAS CLI, Docker Hub will mark the artifact as generic to distinguish it from recognized content.

Let’s push the same volume but use application/vnd.random.volume.v1+tar.gz media type instead of the one known to Docker Hub:

$ oras push registry-1.docker.io/docker/demo:0.1.1 --config config.json:application/vnd.random.volume.v1+tar.gz foo.txt:text/plain

Exists	7d865e959b24 foo.txt
Pushed registry-1.docker.io/docker/demo:0.1.1
Digest: sha256:d2fb2b176ee4e326f1f34ecdaede8db742f2c444cb2c9ceff0f5c8b743281c95

You can see the new content is assigned a generic Other type. We can still view the tagged content’s media type by hovering over the type label. In this case, that’s application/vnd.random.volume.v1+tar.gz:

Other Content Type List

If you’d like to filter the repositories that contain both Helm charts and volumes, use the same drop-down menu in the top-right corner:

Volume Type Selection

Working with container images

Finally, you can continue pushing your regular container images to the exact same repository as your other artifacts. Say we re-tag the Redis Docker Official Image and push it to Docker Hub:

$ docker tag redis:3.2-alpine docker/demo:v1.2.2

$ docker push docker/demo:v1.2.2

The push refers to repository [docker.io/docker/demo]
a1892d5d1a6d: Mounted from library/redis
e41876edb6d0: Mounted from library/redis
7119119b7542: Mounted from library/redis
169a281fff0f: Mounted from library/redis
04c8ef03e935: Mounted from library/redis
df64d3292fd6: Mounted from library/redis
v1.2.2: digest: sha256:359cfebb00bef01cda3bc1ca453e6455c770a246a06ad8df499a28118c144eda size: 1570

Viewing your container images

If you now visit the demo repository page on Docker Hub, you’ll see every artifact listed under Tags and scans:

All Artifacts Content List

We’ll also introduce more features soon to help you better organize your application content, so stay tuned for more announcements!

Follow along for more updates

All developers can now access and choose from more robust sets of artifacts while building and distributing applications with Docker Hub. Not only does this remove existing roadblocks, but it’ll hopefully encourage you to create and distribute even more exciting applications.

But, our mission doesn’t end here! We’re continually working to bolster our OCI support. While the OCI Artifact Specification is considered a release candidate, full Docker Hub support for OCI Reference Types and the accompanying Referrers API is on the horizon. Stay tuned for upcoming enhancements, improved repo organization, and more.

Note: OCI Artifacts have since been removed from the OCI image-spec. Refer to this update for more information.

Security Advisory: High Severity OpenSSL Vulnerabilities https://www.docker.com/blog/security-advisory-critical-openssl-vulnerability/ Thu, 27 Oct 2022 22:19:42 +0000 https://www.docker.com/?p=38506 Update: 01 November 2022 12:57 PM PDT

The OpenSSL Project has officially disclosed two high-severity vulnerabilities: CVE-2022-3602 and CVE-2022-3786. These CVEs impact OpenSSL versions 3.0.0 through 3.0.6; version 3.0.7 contains fixes for those latest vulnerabilities. Previously, these CVEs were thought to be “critical.”


Our title and original post below (written October 27th, 2022) have been updated:

What are they?

CVE-2022-3602 is an arbitrary 4-byte stack buffer overflow that could trigger crashes or allow remote code execution (RCE). Meanwhile, attackers can exploit CVE-2022-3786 via malicious email addresses to trigger a denial-of-service (DoS) state via buffer overflow.

The pre-announcement expected the vulnerability to be deemed “critical” per the OpenSSL Project’s security guidelines. Since then, the OpenSSL Project has downgraded those vulnerabilities to “high” severity in its updated advisory. Regardless, the Project recommends updating to OpenSSL 3.0.7 as quickly as possible.

Docker estimates about 1,000 image repositories could be impacted across various Docker Official Images and Docker Verified Publisher images. This includes images that are based on versions of Debian 12, Ubuntu 22.04, and Red Hat Enterprise Linux 9+, which install 3.x versions of OpenSSL. Images using Node.js 18 and 19 are also affected.

We’re updating our users to help them quickly remediate any impacted images.

Am I vulnerable?

With OpenSSL’s vulnerability details now live, it’s time to see if your public and private repositories are impacted. Docker created a placeholder that references both OpenSSL CVEs, which we’ll link to the official CVEs. 

Like with Heartbleed, OpenSSL’s maintainers carefully considered what information they publicized until fixes arrived. You can now better protect yourself. We’ve created a way to quickly and transparently analyze any image’s security flaws.

Visit Docker’s Image Vulnerability Database, navigate to the “Vulnerability search” tab, and search for the placeholder security advisory dubbed “DSA-2022-0001”. You can also use this tool to see other vulnerabilities as they’re discovered, receive updates to refresh outdated base images, and more:

Luckily, you can take targeted steps to determine how vulnerable you are. We suggest using the docker scan CLI command and Snyk’s Docker Hub Vulnerability Scanning tool. This’ll help detect the presence of vulnerable library versions and flag your image as vulnerable.
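
For example, scanning a locally built image is a single command (the image name below is a placeholder for one of your own images):

$ docker scan myapp:latest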

Alternatively, Docker is providing an experimental local tool to detect OpenSSL 3.x in Docker images. You can install this tool from its GitHub repository. Then, you can search your image for OpenSSL 3.x version with the following command:

$ docker-index cve --image gradle@sha256:1a6b42a0a86c9b62ee584f209a17d55a2c0c1eea14664829b2630f28d57f430d DSA-2022-0001 -r

If the image contains a vulnerable OpenSSL version, your terminal output will resemble the following:

Docker Index CVE Output

And if Docker doesn’t detect a vulnerable version of OpenSSL in your image, you’ll see the following:

DSA-2022-0001 not detected

Update and protect yourself today

While we’re happy to see these latest CVEs have been downgraded, it’s important to take every major vulnerability very seriously. Remember to update to OpenSSL version 3.0.7 to squash these bugs and harden your applications. 

We also encourage you to sign up for our Early Access Program to access the tools discussed in this blog, plus have the opportunity to provide invaluable product feedback to help us improve!

Bring Continuous Integration to Your Laptop With the Drone CI Docker Extension https://www.docker.com/blog/bring-continuous-integration-to-your-laptop-with-the-drone-ci-docker-extension/ Tue, 20 Sep 2022 14:00:00 +0000 https://www.docker.com/?p=37606 Continuous Integration (CI) is a key element of cloud native application development. With containers forming the foundation of cloud-native architectures, developers need to integrate their version control system with a CI tool. 

There’s a myth that continuous integration needs a cloud-based infrastructure. Even though CI makes sense for production releases, developers need to build and test the pipeline before they can share it with their team — or have the ability to perform the continuous integration (CI) on their laptop. Is that really possible today? 

Introducing the Drone CI pipeline

An open-source project called Drone CI makes that a reality. With over 25,700 GitHub stars and 300-plus contributors, Drone is a cloud-native, self-service CI platform. Drone CI offers a mature, container-based system that leverages the scaling and fault-tolerance characteristics of cloud-native architectures. It helps you build container-friendly pipelines that are simple, decoupled, and declarative. 

Drone is a container-based pipeline engine that lets you run any existing containers as part of your pipeline or package your build logic into reusable containers called Drone Plugins.

Drone plugins are configurable to fit your needs, which lets you distribute them within your organization or to the community in general.

Running Drone CI pipelines from Docker Desktop

For a developer working with decentralized tools, the task of building and deploying microservice applications can be monumental. It’s tricky to install, manage, and use these apps in those environments. That’s where Docker Extensions come in. With Docker Extensions, developer tools are integrated right into Docker Desktop — giving you streamlined management workflows. It’s easier to optimize and transform your development processes. 

The Drone CI extension for Docker Desktop brings CI to development machines. You can now import Drone CI pipelines into Docker Desktop and run them locally. You can also run specific steps of a pipeline, monitor execution results, and inspect logs.

Setting up a Drone CI pipeline

In this guide, you’ll learn how to set up a Drone CI pipeline from scratch on Docker Desktop. 

First, you’ll install the Drone CI Extension within Docker Desktop. Second, you’ll learn how to discover Drone pipelines. Third, you’ll see how to open a Drone pipeline on Visual Studio Code. Lastly, you’ll discover how to run CI pipelines in trusted mode, which grants them elevated privileges on the host machine. Let’s jump in.

Prerequisites

You’ll need to download Docker Desktop 4.8 or later before getting started. Make sure to choose the correct version for your OS and then install it. 

Next, hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Click the Settings gear > Extensions tab > check the “Enable Docker Extensions” box.

Enable Extensions Settings

Installing the Drone CI Docker extension

Drone CI isn’t currently available on the Extensions Marketplace, so you’ll have to download it via the CLI. Launch your terminal and run the following command to install the Drone CI Extension:

docker extension install drone/drone-ci-docker-extension:latest

The Drone CI extension will soon appear in the Docker Dashboard’s left sidebar, underneath the Extensions heading:

Drone CI Sidebar

Import Drone pipelines

You can click the “Import Pipelines” option to specify the host filesystem path where your Drone CI pipelines (drone.yml files) are. If this is your first time with Drone CI pipelines, you can use the examples from our GitHub repo.

In the example below, we use the long-run-demo sample to run a local pipeline that executes a long-running sleep command within a Docker container.

kind: pipeline
type: docker
name: sleep-demos
steps: 
  - name: sleep5
    image: busybox
    pull: if-not-exists
    commands:
    - x=0;while [ $x -lt 5 ]; do echo "hello"; sleep 1; x=$((x+1)); done
  - name: an error step
    image: busybox
    pull: if-not-exists
    commands:
    - yq --help

You can download this pipeline YAML file from the Drone CI GitHub page.

The file starts with a pipeline object that defines your CI pipeline. The type attribute defines your preferred runtime while executing that pipeline. 

Drone supports numerous runners like docker, kubernetes, and more, though the extension currently supports only docker pipelines. Each pipeline step spins up a Docker container with the corresponding image defined as part of the step image attribute.

Each step defines an attribute called commands. This is a list of shell commands that we want to execute as part of the build. The defined list of commands is converted into a shell script and set as the Docker container’s ENTRYPOINT. If any command (for example, the missing yq command in this case) returns a non-zero exit code, the pipeline fails and exits.

Drone Pipelines List
Drone Stages

Edit your pipeline faster in VS Code via Drone CI

Visual Studio Code (VS Code) is a lightweight, highly-popular IDE. It supports JavaScript, TypeScript, and Node.js. VS Code also has a rich extensions ecosystem for numerous other languages and runtimes. 

Opening your Drone pipeline project in VS Code takes just seconds from within Docker Desktop:

Open Pipeline Project

This feature helps you quickly view your pipeline and add, edit, or remove steps — then run them from Docker Desktop. It lets you iterate faster while testing new pipeline changes.

Running specific steps in the CI pipeline

The Drone CI Extension lets you run individual steps within the CI pipeline at any time. To better understand this functionality, let’s inspect the following Drone YAML file:

kind: pipeline
type: docker
name: sleep-demos
steps: 
  - name: sleep5
    image: busybox
    pull: if-not-exists
    commands:
    - x=0;while [ $x -lt 5 ]; do echo "hello"; sleep 1; x=$((x+1)); done
  - name: an error step
    image: busybox
    pull: if-not-exists
    commands:
    - yq --help

In this example, the first pipeline step, defined as sleep5, executes a shell script that echoes “hello” once per second for five seconds and then stops (ignoring the error step that follows). You can run the specific sleep-demos stage within the pipeline directly from Docker Desktop.

Running steps in trusted mode

Sometimes, you’re required to run a CI pipeline with elevated privileges. These privileges enable a user to systematically do more than a standard user. This is similar to how we pass the --privileged=true parameter within a docker run command. 

When you execute docker run --privileged, Docker will permit access to all host devices and set configurations in AppArmor or SELinux. These settings may grant the container nearly equal access to the host as processes running outside containers on the host.
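
As a point of reference, here’s what granting those elevated privileges looks like with plain Docker (a minimal example; the image and command are arbitrary):

docker run --privileged -it --rm ubuntu bash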

Drone’s trusted mode tells your container runtime to run the pipeline containers with elevated privileges on the host machine. Among other things, trusted mode can help you:

  • Mount the Docker host socket onto the pipeline container
  • Mount the host path to the Docker container

Run pipelines using environment variable files

The Drone CI Extension lets you define environment variables for individual build steps. You can set these within a pipeline step. Just as docker run provides a way to pass environment variables to running containers, Drone lets you pass environment variables to your build. Consider the following Drone YAML file:

kind: pipeline
type: docker
name: default
steps: 
  - name: display environment variables
    image: busybox
    pull: if-not-exists
    commands:
    - printenv

The file starts with a pipeline object that defines your CI pipeline. The type attribute defines your preferred runtime (Docker, in our case) while executing that pipeline. An optional platform section helps configure the target OS and architecture (like arm64) and routes the pipeline to the appropriate runner; if unspecified, the system defaults to Linux amd64.

The steps section defines a series of shell commands. These commands run within a busybox Docker container as the ENTRYPOINT. As shown, printenv prints the environment variables, including any you’ve declared in your my-env file:

DRONE_DESKTOP_FOO=foo
DRONE_DESKTOP_BAR=bar

You can choose your preferred environment file and run the CI pipeline (pictured below):

Pipeline ENV File

Once you import and run the CI pipeline with this file, the step prints every environment variable.

Run pipelines with secrets files

We use repository secrets to store and manage sensitive information like passwords, tokens, and ssh keys. Storing this information as a secret is considered safer than storing it within a plain text configuration file. 

Note: Drone masks all values used from secrets while printing them to standard output and error.

The Drone CI Extension lets you choose your preferred secrets file and use it within your CI pipeline as shown below:

Secrets File

Remove pipelines

You can remove a CI pipeline in just one step. Select one or more Drone pipelines and remove them by clicking the red minus (“-”) button on the right side of the Dashboard. This action will only remove the pipelines from Docker Desktop — without deleting them from your filesystem.

Bulk remove all pipelines

Bulk Remove

Remove a single pipeline

Remove Single Pipeline

Conclusion

Drone is a modern, powerful, container-friendly CI that empowers busy development teams to automate their workflows. This dramatically shortens building, testing, and release cycles. With a Drone server, development teams can build and deploy cloud apps. These harness the scaling and fault-tolerance characteristics of cloud-native architectures like Kubernetes. 

Check out Drone’s documentation to get started with CI on your machine. With the Drone CI extension, developers can now run their Drone CI pipelines locally as they would in their CI systems.

Want to dive deeper into Docker Extensions? Check out our intro documentation, or discover how to build your own extensions

Announcing Docker Hub Export Members https://www.docker.com/blog/announcing-docker-hub-export-members/ Mon, 19 Sep 2022 14:30:00 +0000 https://www.docker.com/?p=37523 Docker Hub’s Export Members functionality is now available, giving you the ability to export a full list of all your Docker users into a single CSV file. The file will contain their username, full name, and email address — as well as the user’s current status and if the user belongs to a given team. If you’re an administrator, that means you can quickly view your entire organization’s usage of Docker.

In the Members Tab, you can download a CSV file by pressing the Export members button. The file can be used to verify user status, confirm team structure, and quickly audit Docker usage.

export members docker hub

The Export Members feature is only available for Docker Business subscribers. This feature will help organizations better track their utilization of Docker, while also simplifying the steps needed for an administrator to review their users within Docker Hub. 

At Docker, we continually listen to our customers, and strive to build the tools needed to make them successful. Feel free to check out our public roadmap and leave feedback or requests for more features like this!

Learn more about exporting users on our docs page, or sign in to your Docker Hub account to try it for yourself.

How to Colorize Black & White Pictures With OpenVINO on Ubuntu Containers https://www.docker.com/blog/how-to-colorize-black-white-pictures-ubuntu-containers/ Fri, 02 Sep 2022 14:00:00 +0000 https://www.docker.com/?p=36935 If you’re looking to bring a stack of old family photos back to life, check out Ubuntu’s demo on how to use OpenVINO on Ubuntu containers to colorize monochrome pictures. This magical use of containers, neural networks, and Kubernetes is packed with helpful resources and a fun way to dive into deep learning!

A version of Part 1 and Part 2 of this article was first published on Ubuntu’s blog.



OpenVINO on Ubuntu containers: making developers’ lives easier

Suppose you’re curious about AI/ML and what you can do with OpenVINO on Ubuntu containers. In that case, this blog is an excellent read for you too.

Docker image security isn’t only about provenance and supply chains; it’s also about the user experience. More specifically, the developer experience.

Removing toil and friction from your app development, containerization, and deployment processes avoids encouraging developers to use untrusted sources or bad practices in the name of getting things done. As AI/ML development often requires complex dependencies, it’s the perfect proof point for secure and stable container images.

Why Ubuntu Docker images?

As the most popular container image in its category, the Ubuntu base image provides a seamless, easy-to-set-up experience. From public cloud hosts to IoT devices, the Ubuntu experience is consistent and loved by developers.

One of the main reasons for adopting Ubuntu-based container images is the software ecosystem. More than 30,000 packages are available in one “install” command, with the option to subscribe to enterprise support from Canonical. It just makes things easier.

In this blog, you’ll see that using Ubuntu Docker images greatly simplifies component containerization. We even used a prebuilt & preconfigured container image for the NGINX web server from the LTS images portfolio maintained by Canonical for up to 10 years.

Beyond providing a secure, stable, and consistent experience across container images, Ubuntu is a safe choice from bare metal servers to containers. Additionally, it comes with hardware optimization on clouds and on-premises, including Intel hardware.

Why OpenVINO?

When you’re ready to deploy deep learning inference in production, binary size and memory footprint are key considerations – especially when deploying at the edge. OpenVINO provides a lightweight Inference Engine with a binary size of just over 40MB for CPU-based inference. It also provides a Model Server for serving models at scale and managing deployments.

OpenVINO includes open-source developer tools to improve model inference performance. The first step is to convert a deep learning model (trained with TensorFlow, PyTorch,…) to an Intermediate Representation (IR) using the Model Optimizer. In fact, it cuts the model’s memory usage in half by converting it from FP32 to FP16 precision. You can unlock additional performance by using low-precision tools from OpenVINO. The Post-training Optimisation Tool (POT) and Neural Network Compression Framework (NNCF) provide quantization, binarisation, filter pruning, and sparsity algorithms. As a result, Intel devices’ throughput increases on CPUs, integrated GPUs, VPUs, and other accelerators.

Open Model Zoo provides pre-trained models that work for real-world use cases to get you started quickly. Additionally, Python and C++ sample codes demonstrate how to interact with the model. More than 280 pre-trained models are available to download, from speech recognition to natural language processing and computer vision.

For this blog series, we will use the pre-trained colorization models from Open Model Zoo and serve them with Model Server.

colorize example albert einstein sticks tongue out

OpenVINO and Ubuntu container images

The Model Server – by default – ships with the latest Ubuntu LTS, providing a consistent development environment and an easy-to-layer base image. The OpenVINO tools are also available as prebuilt development and runtime container images.

To learn more about Canonical LTS Docker Images and OpenVINO™, read:

Neural networks to colorize a black & white image

Now, back to the matter at hand: how will we colorize grandma and grandpa’s old pictures? Thanks to Open Model Zoo, we won’t have to train a neural network ourselves and will only focus on the deployment. (You can still read about it.)

architecture diagram colorizer demo app microk8s
Architecture diagram of the colorizer demo app running on MicroK8s

Our architecture consists of three microservices: a backend, a frontend, and the OpenVINO Model Server (OVMS) to serve the neural network predictions. The Model Server component hosts two different demonstration neural networks to compare their results (V1 and V2). These components all use the Ubuntu base image for a consistent software ecosystem and containerized environment.

A few reads if you’re not familiar with this type of microservices architecture:

gRPC vs REST APIs

The OpenVINO Model Server provides inference as a service via HTTP/REST and gRPC endpoints for serving models in OpenVINO IR or ONNX format. It also offers centralized model management to serve multiple different models or different versions of the same model and model pipelines.

The server offers two sets of APIs to interface with it: REST and gRPC. Both APIs are compatible with TensorFlow Serving and expose endpoints for prediction, checking model metadata, and monitoring model status. For use cases where low latency and high throughput are needed, you’ll probably want to interact with the model server via the gRPC API. Indeed, it introduces a significantly smaller overhead than REST. (Read more about gRPC.)

OpenVINO Model Server is distributed as a Docker image with minimal dependencies. For this demo, we will use the Model Server container image deployed to a MicroK8s cluster. This combination of lightweight technologies is suitable for small deployments. It suits edge computing devices, performing inferences where the data is being produced – for increased privacy, low latency, and low network usage.
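
Outside of Kubernetes, you can also try the Model Server with a single docker run. Here’s a sketch, assuming a model repository mounted from the host at ./models (the model name is illustrative; flag names follow the OpenVINO Model Server documentation):

docker run -d --rm -p 9000:9000 \
  -v "$(pwd)/models:/models" \
  openvino/model_server:latest \
  --model_name colorization --model_path /models/colorization --port 9000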

Ubuntu minimal container images

Since 2019, the Ubuntu base images have been minimal, with no “slim” flavors. While there’s room for improvement (keep posted), the Ubuntu Docker image is a less than 30MB download, making it one of the tiniest Linux distributions available on containers.

In terms of Docker image security, size is one thing, and reducing the attack surface is a fair investment. However, as is often the case, size isn’t everything. In fact, maintenance is the most critical aspect. The Ubuntu base image, with its rich and active software ecosystem community, is usually a safer bet than smaller distributions.

A common trap is to start smaller and install loads of dependencies from many different sources. The end result will have poor performance, use non-optimized dependencies, and not be secure. You probably don’t want to end up effectively maintaining your own Linux distribution … So, let us do it for you.

colorize example cat walking through grass
“What are you looking at?” (original picture source)

Demo architecture

“As a user, I can drag and drop black and white pictures to the browser so that it displays their ready-to-download colorized version.” – said the PM (me).

For that – replied the one-time software engineer (still me) – we only need:

  • A fancy yet lightweight frontend component.
  • OpenVINO™ Model Server to serve the neural network colorization predictions.
  • A very light backend component.

Whilst we could target the Model Server directly with the frontend (it exposes a REST API), we need to apply transformations to the submitted image. The colorization models, in fact, each expect a specific input.

Finally, we’ll deploy these three services with Kubernetes because … well … because it’s groovy. And if you think otherwise (everyone is allowed to have a flaw or two), you’ll find a fully functional docker-compose.yaml in the source code repository.

architecture diagram for demo app
Architecture diagram for the demo app (originally colored tomatoes)

In the upcoming sections, we will first look at each component and then show how to deploy them with Kubernetes using MicroK8s. Don’t worry; the full source code is freely available, and I’ll link you to the relevant parts.

Neural network – OpenVINO Model Server

The colorization neural network is published under the BSD 2-clause License, accessible from the Open Model Zoo. It’s pre-trained, so we don’t need to understand it in order to use it. However, let’s look closer to understand what input it expects. I also strongly encourage you to read the original work from Richard Zhang, Phillip Isola, and Alexei A. Efros. They made the approach super accessible and understandable on this website and in the original paper.

neural network architecture
Neural network architecture (from arXiv:1603.08511 [cs.CV])

As you can see on the network architecture diagram, the neural network uses an unusual color space: LAB. There are many 3-dimensional spaces to code colors: RGB, HSL, HSV, etc. The LAB format is relevant here as it fully isolates the color information from the lightness information. Therefore, a grayscale image can be coded with only the L (for Lightness) axis. We will send only the L axis to the neural network’s input. It will generate predictions for the colors coded on the two remaining axes: A and B.

From the architecture diagram, we can also see that the model expects a 256×256 pixels input size. For these reasons, we cannot just send our RGB-coded grayscale picture in its original size to the network. We need to first transform it.

We compare the results of two different model versions for the demo. Let them be called ‘V1’ (Siggraph) and ‘V2’. The models are served with the same instance of the OpenVINO™ Model Server as two different models. (We could also have done it with two different versions of the same model – read more in the documentation.)

Finally, to build the Docker image, we use the first stage from the Ubuntu-based development kit to download and convert the model. We then rebase on the more lightweight Model Server image.

# Dockerfile
FROM openvino/ubuntu20_dev:latest AS omz
# download and convert the model
…
FROM openvino/model_server:latest
# copy the model files and configure the Model Server
…

Backend – Ubuntu-based Flask app (Python)

For the backend microservice that interfaces between the user-facing frontend and the Model Server hosting the neural network, we chose to use Python. There are many valuable libraries to manipulate data, including images, specifically for machine learning applications. To provide web serving capabilities, Flask is an easy choice.

The backend takes an HTTP POST request with the to-be-colorized picture. It synchronously returns the colorized result using the neural network predictions. In between – as we’ve just seen – it needs to convert the input to match the model architecture and to prepare the output to show a displayable result.
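
Here’s a rough sketch of the kind of input transformation the backend performs, assuming OpenCV is available (function and variable names are illustrative, and the actual demo may resize and normalize slightly differently):

import cv2
import numpy as np

def preprocess(image_bgr: np.ndarray) -> np.ndarray:
    # Resize to the 256x256 input size the colorization model expects
    resized = cv2.resize(image_bgr, (256, 256))
    # Convert to the LAB color space and keep only the L (lightness) channel
    lab = cv2.cvtColor(resized, cv2.COLOR_BGR2LAB)
    l_channel = lab[:, :, 0].astype(np.float32)
    # Add batch and channel dimensions before sending it to the Model Server
    return l_channel.reshape(1, 1, 256, 256)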

Here’s what the transformation pipeline looks like on the input:

Transformation pipeline on the input

And the output looks something like that:

Transformation pipeline on the output

To containerize our Python Flask application, we use the first stage with all the development dependencies to prepare our execution environment. We copy it onto a fresh Ubuntu base image to run it, configuring the model server’s gRPC connection.

Frontend – Ubuntu-based NGINX container and Svelte app

Finally, I put together a fancy UI for you to try the solution out. It’s an effortless single-page application with a file input field. It can display side-by-side the results from the two different colorization models.

I used Svelte to build the demo as a dynamic frontend. Below each colorization result, there’s even a saturation slider (using a CSS transformation) so that you can emphasize the predicted colors and better compare the before and after.

To ship this frontend application, we again use a Docker image. We first build the application using the Node base image. We then rebase it on top of the preconfigured NGINX LTS image maintained by Canonical. A reverse proxy on the frontend side serves as a passthrough to the backend on the /API endpoint to simplify the deployment configuration. We do that directly in an NGINX.conf configuration file copied to the NGINX templates directory. The container image is preconfigured to use these template files with environment variables.

Deployment with Kubernetes

I hope you had the time to scan some black and white pictures because things are about to get serious(ly colorized).

From the next section onward, we’ll assume you already have a running Kubernetes installation. If not, I encourage you to run the following steps or go through this MicroK8s tutorial.

# https://microk8s.io/docs
sudo snap install microk8s --classic
 
# Add current user ($USER) to the microk8s group
sudo usermod -a -G microk8s $USER && sudo chown -f -R $USER ~/.kube
newgrp microk8s
 
# Enable the DNS, Storage, and Registry addons required later
microk8s enable dns storage registry
 
# Wait for the cluster to be in a Ready state
microk8s status --wait-ready
 
# Create an alias to enable the `kubectl` command
sudo snap alias microk8s.kubectl kubectl
ubuntu command line kubernetes cluster

Yes, you deployed a Kubernetes cluster in about two command lines.

Build the components’ Docker images

Every component comes with a Dockerfile to build itself in a standard environment and ship its deployment dependencies (read What are containers for more information). They all create an Ubuntu-based Docker image for a consistent developer experience.

Before deploying our colorizer app with Kubernetes, we need to build and push the components’ images. They need to be hosted in a registry accessible from our Kubernetes cluster. We will use the built-in local registry with MicroK8s. Depending on your network bandwidth, building and pushing the images will take a few minutes or more.

sudo snap install docker
cd ~ && git clone https://github.com/valentincanonical/colouriser-demo.git
 
# Backend
docker build backend -t localhost:32000/backend:latest
docker push localhost:32000/backend:latest
 
# Model Server
docker build modelserver -t localhost:32000/modelserver:latest
docker push localhost:32000/modelserver:latest
 
# Frontend
docker build frontend -t localhost:32000/frontend:latest
docker push localhost:32000/frontend:latest

Apply the Kubernetes configuration files

All the components are now ready for deployment. The Kubernetes configuration files are available as deployments and services YAML descriptors in the ./k8s folder of the demo repository. We can apply them all at once, in one command:

kubectl apply -f ./k8s

Give it a few minutes. You can watch the app being deployed with watch kubectl get pods. Of all the services, the frontend one has a specific NodePort configuration to make it publicly accessible by targeting the node IP address.
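
For example, you can follow the rollout and confirm which NodePort the frontend exposes with a couple of kubectl commands (the service name frontend is an assumption based on the component names used above):

# Watch the pods come up
watch kubectl get pods

# Check the NodePort exposed by the frontend service (30000 in this demo)
kubectl get service frontend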

[Screenshot: applying the Kubernetes configuration files from the Ubuntu command line]

Once ready, you can access the demo app at http://localhost:30000/ (or replace localhost with a cluster node IP address if you’re using a remote cluster). Pick an image from your computer, and get it colorized!

All in all, the project was pretty easy considering the task we accomplished. Thanks to Ubuntu containers, building each component’s image with multi-stage builds was a consistent and straightforward experience. And thanks to OpenVINO™ and the Open Model Zoo, serving a pre-trained model with excellent inference performance was a simple task accessible to all developers.

That’s a wrap!

You didn’t even have to share your pics over the Internet to get it done. Thanks for reading this article; I hope you enjoyed it. Feel free to reach out on socials. I’ll leave you with the last colorization example.

[Image: Christmassy colorization example (original picture source)]

To learn more about Ubuntu, the magic of Docker images, or even how to make your own Dockerfiles, see below for related resources:

]]>
The Docker-Sponsored Open Source Program has a new look! https://www.docker.com/blog/docker-sponsored-open-source-program-has-a-new-look/ Thu, 01 Sep 2022 17:00:00 +0000 https://www.docker.com/?p=37239 It’s no secret that developers love open source software. About 70–90% of a typical codebase is made up of open source components! Plus, using open source has a ton of benefits like cost savings and scalability. But most importantly, it promotes faster innovation.

That’s why Docker announced our community program, the Docker-Sponsored Open Source (DSOS) Program, in 2020. While our mission and vision for the program haven’t changed (yes, we’re still committed to building a platform for collaboration and innovation!), some of the criteria and benefits have received a bit of a facelift.

We recently discussed these updates at our quarterly Community All-Hands, so check out that video if you haven’t yet. But since you’re already here, let’s give you a rundown of what’s new.

New criteria & benefits

Over the past two years, we’ve been working to incorporate all of the amazing community feedback we’ve received about the DSOS program. And we heard you! Not only have we updated the application process, which will decrease the wait time for approval, but we’ve also added some major benefits that will help improve the reach and visibility of your projects.

  1. New application process — The new, streamlined application process lets you apply with a single click and provides status updates along the way
  2. Updated funding criteria — You can now apply for the program even if your project is commercially funded! However, you must not currently have a pathway to commercialization (this is reevaluated yearly). Adjusting our qualification criteria opens the door for even more projects to join the 300+ we already have!
  3. Insights & Analytics — Exclusive to DSOS members, you now have access to a plethora of data to help you better understand how your software is being used.
  4. DSOS badge on Docker Hub — This makes it easier for your project to be discovered and build brand awareness.

What hasn’t changed

Despite all of these updates, we made sure to keep the popular program features you love. Docker knows the importance of open source as developers create new technologies and turn their innovations into a reality. That’s why there are no changes to the following program benefits:

  • Free autobuilds — Docker will automatically build images from source code in your external repository and automatically push the built image to your Docker Hub repository.
  • Unlimited pulls and egress — This is for all users pulling public images from your project namespace.
  • Free 1-year Docker Team subscription — This feature is for core contributors of your project namespace. This includes Docker Desktop, 15 concurrent builds, unlimited Docker Hub image vulnerability scans, unlimited scoped tokens, role-based access control, and audit logs.

We’ve also kept the majority of our qualification criteria the same (aside from what was mentioned above). To qualify for the program, your project namespace must:

  • Be shared in public repos
  • Meet the Open Source Initiative’s definition of open source 
  • Be in active development (meaning image updates are pushed regularly within the past 6 months or dependencies are updated regularly, even if the project source code is stable)
  • Not have a pathway to commercialization. Your organization must not seek to make a profit through services or by charging for higher tiers. Accepting donations to sustain your efforts is allowed.

Want to learn more about the program? Reach out to OpenSource@Docker.com with your questions! We look forward to hearing from you.

]]>
How to Use the Apache httpd Docker Official Image https://www.docker.com/blog/how-to-use-the-apache-httpd-docker-official-image/ Wed, 10 Aug 2022 14:00:44 +0000 https://www.docker.com/?p=36480 Deploying and spinning up a functional server is key to distributing web-based applications to users. The Apache HTTP Server Project has long made this possible. However, despite Apache Server’s popularity, users can face some hurdles with configuration and deployment.

Thankfully, Apache and Docker containers can work together to streamline this process — saving you time while reducing complexity. You can package your application code and configurations together into one cross-platform unit. The Apache httpd Docker Official Image helps you containerize a web-server application that works across browsers, OSes, and CPU architectures.

In this guide, we’ll cover Apache HTTP Server (httpd), the httpd Docker Official Image, and how to use each. You’ll also learn some quick tips and best practices. Feel free to skip our Apache intro if you’re familiar, but we hope you’ll learn something new by following along. Let’s dive in.


What is Apache Server?

The Apache HTTP Server was created as a “commercial-grade, featureful, and freely available source code implementation of an HTTP (Web) server.” It’s equally suitable for basic applications and robust enterprise alternatives.

Like any server, Apache lets developers store and access backend resources — to ultimately serve user-facing content. HTTP web requests are central to this two-way communication. The “d” in “httpd” stands for “daemon.” This daemon handles and routes incoming connection requests to the server.

Developers also leverage Apache’s modularity, which lets them add authentication, caching, SSL, and much more. This extensibility sparked Apache HTTP Server’s continued growth. And since Apache HTTP Server began as a series of patches to the NCSA httpd server, its name playfully embraces its early existence as “a patchy web server.”

Some Apache HTTP Server fun facts:

  • Apache debuted in 1995 and is still widely used.
  • It’s modeled after NCSA httpd v1.3.
  • Apache currently serves roughly 47% of all sites with a known web server.

Httpd vs. Other Server Technologies

If you’re experienced with Apache HTTP Server and looking to containerize your application, the Apache httpd Docker Official Image is a good starting point. You may also want to look at NGINX Server, PHP, or Apache Tomcat depending on your use case.

As a note, Apache HTTP Server differs from Apache Tomcat — another Apache server technology. Apache HTTP Server is written in C, while Tomcat is Java based. Tomcat is a Java servlet container dedicated to running Java code. It also helps developers create application pages via JavaServer Pages.

What is the httpd Docker Official Image?

We maintain the httpd Docker Official Image in tandem with the Docker community. Developers can use httpd to quickly and easily spin up a containerized Apache web server application. Out of the box, httpd contains Apache HTTP Server’s default configuration.

Why use the Apache httpd Docker Official Image? Here are some core use cases:

  • Creating an HTML server, as mentioned, to serve static web pages to users
  • Forming secure server HTTPS connections, via SSL, using Apache’s modules
  • Using an existing complex configuration file
  • Leveraging advanced modules like mod_perl, which this GitHub project outlines

While these use cases aren’t specific to the httpd Official Image, it’s easy to include these external configurations within your image. We’ll explore this process and outline how to use your first Apache container image now.

For use cases such as mod_php, a dedicated image such as the PHP Docker Official Image is probably a better fit.

How to use the httpd Docker Official Image

Before proceeding, you’ll want to download and install Docker Desktop. While we’ll still use the CLI during this tutorial, the built-in Docker Dashboard gives you an easy-to-use UI for managing your images and containers. It’s easy to start, pause, remove, and inspect running containers with the click of a button. Have Desktop running and open before moving on.

The quickest way to leverage the httpd Official Image is to visit Docker Hub, copy the docker pull httpd command into your terminal, and run it. This downloads each package and dependency within your image before automatically adding it into Docker Desktop:

[Video: pulling the httpd image and running it as a container in Docker Desktop]

Some key things happened while we verified that httpd is working correctly in this video:

  1. We pulled our httpd image using the docker pull httpd command.
  2. We found our image in Docker Desktop in the Images pane, chose “Run,” and expanded the Optional settings pane. We named our container so it’s easy to find, and entered 8080 as the host port before clicking “Run” again.
  3. Desktop took us directly into the Containers pane, where our named container, TestApache, was running as expected.
  4. We visited `http://localhost:8080` in our browser to test our basic setup.

This example automatically grabs the :latest version of httpd. We recommend specifying a numbered version or a tag with greater specificity, since these :latest versions can introduce breaking changes. It can be challenging to monitor these changes and test them effectively before moving into production.
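
For example, pinning to a numbered tag from the CLI mirrors what we just did in Docker Desktop. The container name and the alternate host port are illustrative choices so they don’t clash with the earlier example:

$ docker pull httpd:2.4

$ docker run -d --name apache-pinned -p 8081:80 httpd:2.4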

That’s a great test case, but what if you want to build something a little more customized? This is where a Dockerfile comes in handy.

How to use a Dockerfile with your image

Though less common than other workflows, using a Dockerfile with the httpd Docker Official Image is helpful for defining custom configurations.

Your Dockerfile is a plain text file that tells Docker how to build your image. It lets you define configurations and useful image layers — beyond what the default httpd image includes.

Running an HTML server is a common workflow with the httpd Docker Official Image. You’ll want to add your Dockerfile in a directory which contains your project’s complete HTML. We’ll call it public-html in this example:


FROM httpd:2.4

COPY ./public-html/ /usr/local/apache2/htdocs/

 

The FROM instruction tells our builder to use httpd:2.4 as our base image. The COPY instruction copies new files or directories from our specified source, and adds them to the filesystem at a certain location. This setup is pretty bare bones, yet still lets you create a functional Apache HTTP Server image!

Next, you’ll need to both build and run this new image to see it in action. Run the following two commands in sequence:


$ docker build -t my-apache2 .

$ docker run -d --name my-running-app -p 8080:80 my-apache2

 

First, docker build will create your image from your earlier Dockerfile. The docker run command takes this image and starts a container from it. This container runs in detached mode, or in the background. If you wanted to take it a step further and open a shell within that running container, you’d enter a third command: docker exec -ti my-running-app sh. However, that’s not necessary for this example.

Finally, visit http://localhost:8080 in your browser to confirm that everything is running properly.
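
If you prefer to check from the terminal instead of the browser, a quick request against the mapped port should return the index page you copied into public-html (the -I flag fetches only the response headers, including Apache’s Server header):

$ curl -I http://localhost:8080/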

How to use your image without a Dockerfile

Sometimes, you don’t even need or want a Dockerfile for your image builds. In fact, this is the more common approach most developers take, compared to maintaining a Dockerfile. It also requires just a couple of commands.

Enter the following command for your operating system to run your Apache HTTP Server container:

Mac:


$ docker run -d --name my-apache-app -p 8080:80 -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4

 

Linux:


$ docker run -d --name my-apache-app -p 8080:80 -v $(pwd):/usr/local/apache2/htdocs/ httpd:2.4

 

Windows:


$ docker run -d --name my-apache-app -p 8080:80 -v "$pwd":/usr/local/apache2/htdocs/ httpd:2.4

 

Note: For most Linux users, the Mac version of this command works — but the Linux version is safest for those running compatible shells. While Windows users running Docker Desktop will have bash available, ”$pwd” is needed for PowerShell.

The -v flag bind mounts your project directory into the container, and $PWD (or its OS-specific variation) expands to your current working directory on macOS or Linux. This lets your container access your filesystem and grab what it needs to run. You’re still connecting host port 8080 to container port 80/tcp — just like we did earlier within Docker Desktop — and running your Apache container in the background.
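
As a quick sanity check, you could drop a file into the mounted directory and confirm that the running container serves it immediately; the file name and content here are just placeholders:

$ echo '<h1>Hello from a bind mount</h1>' > index.html

$ curl http://localhost:8080/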

Configuration and useful tips

Customizing your Apache HTTP Server configuration takes two quick steps. First, enter the following command to grab a copy of the default configuration file from the upstream image:

$ docker run --rm httpd:2.4 cat /usr/local/apache2/conf/httpd.conf > my-httpd.conf

Second, return to your Dockerfile and COPY in your custom configuration from the required directory:


FROM httpd:2.4

COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf

That’s it! You’ve now dropped your Apache HTTP Server configurations into place. This might include changes to any modules and any functional additions to help your server run.

How to unlock data encryption through SSL

Apache forms connections over HTTP by default. This is fine for smaller projects, test cases, and server setups where security isn’t important. However, larger applications and especially those that transport sensitive data — like enterprise apps — may require encryption. HTTPS is the standard that all web traffic should use given its default encryption.

This is possible natively through Apache using the mod_ssl encryption module. In a Docker context, running web traffic over SSL means using the COPY instruction to add your server.crt and server.key into your /usr/local/apache2/conf/ directory.

This is a condensed version of the process, and more steps are needed to get SSL up and running. Check out our Docker Hub documentation under the “SSL/HTTPS” section for a complete list of approachable steps. Crucially, SSL uses port 443 instead of port 80 — the latter of which is normally reserved for unencrypted traffic.
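
As a rough sketch of the Dockerfile side of that process, the pattern looks something like the following. The certificate file names are placeholders, and the sed lines assume you’re enabling SSL by uncommenting the stock entries in httpd.conf, as the Docker Hub documentation describes in full:

FROM httpd:2.4

# Copy your certificate and key into the image (file names are placeholders)
COPY ./server.crt /usr/local/apache2/conf/server.crt
COPY ./server.key /usr/local/apache2/conf/server.key

# Uncomment the SSL modules and the bundled SSL configuration in httpd.conf
RUN sed -i \
    -e 's/^#\(Include .*httpd-ssl.conf\)/\1/' \
    -e 's/^#\(LoadModule ssl_module .*\)/\1/' \
    -e 's/^#\(LoadModule socache_shmcb_module .*\)/\1/' \
    /usr/local/apache2/conf/httpd.conf

You’d then publish the encrypted port when starting the container, for example with -p 443:443.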

Pull your first httpd Docker Official Image

We’ve demoed how to successfully use the httpd Docker Official Image to containerize and run Apache HTTP Server. This is great for serving web pages and powering various web applications, secure or otherwise. Using this image lets you deploy cross-platform and cross-browser without encountering hiccups.

Combining Apache with Docker also preserves much of the customizability and functionality developers expect from Apache HTTP Server. To quickly start experimenting, head over to Docker Hub and pull your first httpd container image.

Further reading:

]]>