Docker Acquires Mutagen for Continued Investment in Performance and Flexibility of Docker Desktop https://www.docker.com/blog/mutagen-acquisition/ Tue, 27 Jun 2023 17:00:13 +0000 https://www.docker.com/?p=43663 I’m excited to announce that Docker, voted the most-used and most-desired tool in Stack Overflow’s 2023 Developer Survey, has acquired Mutagen IO, Inc., the company behind the open source Mutagen file synchronization and networking technologies that enable high-performance remote development. Mutagen’s synchronization and forwarding capabilities facilitate the seamless transfer of code, binary artifacts, and network requests between arbitrary locations, connecting local and remote development environments. When combined with Docker’s existing developer tools, Mutagen unlocks new possibilities for developers to innovate and accelerate development velocity with local and remote containerized development.

“Docker is more than a container tool. It comprises multiple developer tools that have become the industry standard for self-service developer platforms, empowering teams to be more efficient, secure, and collaborative,” says Docker CEO Scott Johnston. “Bringing Mutagen into the Docker family is another example of how we continuously evolve our offering to meet the needs of developers with a product that works seamlessly and improves the way developers work.”


The Mutagen acquisition introduces novel mechanisms for developers to extract the highest level of performance from their local hardware while simultaneously opening the gateway to the newest remote development solutions. We continue scaling the abilities of Docker Desktop to meet the needs of the growing number of developers, businesses, and enterprises relying on the platform.

 “Docker Desktop is focused on equipping every developer and dev team with blazing-fast tools to accelerate app creation and iteration by harnessing the combined might of local and cloud resources. By seamlessly integrating and magnifying Mutagen’s capabilities within our platform, we will provide our users and customers with unrivaled flexibility and an extraordinary opportunity to innovate rapidly,” says Webb Stevens, General Manager, Docker Desktop.

 “There are so many captivating integration and experimentation opportunities that were previously inaccessible as a third-party offering,” says Jacob Howard, the CEO at Mutagen. “As Mutagen’s lead developer and a Docker Captain, my ultimate goal has always been to enhance the development experience for Docker users. As an integral part of Docker’s technology landscape, Mutagen is now in a privileged position to achieve that goal.”

Jacob will join Docker’s engineering team, spearheading the integration of Mutagen’s technologies into Docker Desktop and other Docker products.

You can get started with Mutagen today by downloading the latest version of Docker Desktop and installing the Mutagen extension, available in the Docker Extensions Marketplace. Support for current Mutagen offerings, open source and paid, will continue as we develop new and better integration options.

FAQ | Docker Acquisition of Mutagen

With Docker’s acquisition of Mutagen, you’re sure to have questions. We’ve answered the most common ones in this FAQ.

As with all of our open source efforts, Docker strives to do right by the community. We want this acquisition to benefit everyone — community and customer — in keeping with our developer obsession.

What will happen to Mutagen Pro subscriptions and the Mutagen Extension for Docker Desktop?

Both will continue as we evaluate and develop new and better integration options. Existing Mutagen Pro subscribers will see an update to the supplier on their invoices, but no other billing changes will occur.

Will Mutagen become closed-source?

There are no plans to change the licensing structure of Mutagen’s open source components. Docker has always valued the contributions of open source communities.

Will Mutagen or its companion projects be discontinued?

There are no plans to discontinue any Mutagen projects. 

Will people still be able to contribute to Mutagen’s open source projects?

Yes! Mutagen has always benefited from outside collaboration in the form of feedback, discussion, and code contributions, and there’s no desire to change that relationship. For more information about how to participate in Mutagen’s development, see the contributing guidelines.

What about other downstream users, companies, and projects using Mutagen?

Mutagen’s open source licenses continue to allow the embedding and use of Mutagen by other projects, products, and tooling.

Who will provide support for Mutagen projects and products?

In the short term, support for Mutagen’s projects and products will continue to be provided through the existing support channels. We will work to merge support into Docker’s channels in the near future.

Is this replacing Virtiofs, gRPC-FUSE, or osxfs?

No, virtual filesystems will continue to be the default path for bind mounts in Docker Desktop. Docker is continuing to invest in the performance of these technologies.

How does Mutagen compare with other virtual or remote filesystems?

Mutagen is a synchronization engine rather than a virtual or remote filesystem. Mutagen can be used to synchronize files to native filesystems, such as ext4, trading typically imperceptible amounts of latency for full native filesystem performance.
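
For illustration, creating a Mutagen synchronization session between a local project directory and a running container might look like the following. This is a minimal sketch: the session name, paths, and container name are hypothetical, and the docker:// transport is described in Mutagen's documentation.

# Create a two-way sync session between a local folder and a container
mutagen sync create --name=my-project ./src docker://my-dev-container/app/src

# Check the status of active sessions
mutagen sync list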

How does Mutagen compare with other synchronization solutions?

Mutagen focuses primarily on configuration and functionality relevant to developers.

How can I get started with Mutagen?

To get started with Mutagen, download the latest version of Docker Desktop and install the Mutagen Extension from the Docker Desktop Extensions Marketplace.

Find and Fix Vulnerabilities Faster Now that Docker’s a CNA https://www.docker.com/blog/docker-becomes-mitre-cna/ Thu, 01 Dec 2022 15:00:00 +0000 https://www.docker.com/?p=39025

CNAs, or CVE Numbering Authorities, are an essential part of vulnerability reporting because they compose a cohort of bug bounty programs, organizations, and companies involved in the secure software supply chain. When millions of developers depend on your projects, like in Docker’s case, it’s important to be a CNA to reinforce your commitment to cybersecurity and good stewardship as part of the software supply chain.

Previously, Docker reported CVEs directly through MITRE and GitHub without CNA status (there are many other organizations that still do this today, and CVE reporting does not require CNA status).

But not anymore! Docker is now officially a CNA under MITRE, which means you should get better notifications and documentation when we publish a vulnerability.

What are CNAs? (And where does MITRE fit in?)

To understand how CNAs, CVEs, and MITRE fit together, let’s start with the reason behind all those acronyms. Namely, a vulnerability.

When a vulnerability pops up, it’s really important that it has a unique identifier so developers know they’re all talking about the same vulnerability. (Let’s be honest, calling it, “That Java bug” really isn’t going to cut it.)

So someone has to give it a CVE (Common Vulnerabilities and Exposures) designation. That’s where a CNA comes in. They submit a request to their root CNA, which is often MITRE (and no, MITRE isn’t an acronym). A new CVE number, or several, is then assigned depending on how the report is categorized, thus making it official. And to keep all the CNAs on the same page, there are companies that maintain the CVE system.

MITRE is a non-profit corporation that maintains the system with sponsorship from the US government’s CISA (Cybersecurity and Infrastructure Security Agency). Like CISA, MITRE helps lead the charge in protecting public interest when it comes to defense, cybersecurity, and a myriad of other industries.

The CVE system provides references and information about the scary-ickies or the ultra terrifying vulnerabilities found in the world of technology, making vulnerabilities for shared resources and technologies easy to publicize, notify folks about, and take action against.

If you feel like learning more about the CVE program check out MITRE’s suite of videos here or the program’s homepage.

Where does Docker fit in?

Docker has reported CVEs in the past directly through MITRE and has, for example, used the reporting functionality through GitHub on Docker Engine. By becoming a CNA, however, we can take a more direct and coordinated approach with our reporting.

And better reporting means better awareness for everyone using our tools!

Docker went through the process of becoming a CNA (including some training and homework) so we can more effectively report on vulnerabilities related to Docker Desktop and Docker Hub. The checklist for CNA status also includes having appropriate disclosure and advisory policies in place. Docker’s status as a CNA means we can centralize CVE reporting for our different offerings connected to Docker Desktop, as well as those connected to Docker Hub and the registry. 

By becoming a CNA, Docker can be more active in the community of companies that make up the software supply chain. MITRE, as the default CNA and one of the root CNAs (CISA is a root CNA too), acts as the unbiased reviewer of vulnerability reports. Other organizations, vendors, or bug bounty programs, like Microsoft, HashiCorp, Rapid7, VMware, Red Hat, and hundreds of others, also act as CNAs.

Keep in mind that Docker’s status as a CNA means we’ll only report for products and projects we maintain. Being a CNA also includes consideration of when certain products might be end-of-life and how that affects CVE assignment. 

Ch-ch-changes?

Will the experience of using Docker Hub and Docker Desktop change because of Docker’s new CNA status? Short answer: no. Long answer: the core experience of using Docker will not change. We’ve just leveled up in tackling vulnerabilities and providing better notifications about those vulnerabilities.

By better notifications, we mean a centralized repository for our security advisories. Because these reported vulnerabilities will link back to MITRE’s CVE program, it makes them far easier to search for, track, and tell your friends, your dog, or your cat about.

To see the latest vulnerabilities as Docker reports them and CVEs become assigned, check out our advisory location here: https://docs.docker.com/security/. For historic advisories also check https://docs.docker.com/desktop/release-notes/ and https://docs.docker.com/engine/release-notes/.

Keep in mind that CVEs that get reported are those that affect the consumers of Docker’s toolset and will require remediation from us and potential upgrade actions from the user, just like any other CVE announcement you might have seen in the news recently.

So keep your fins ready in case a CVE we announce applies to you.

Help Docker help you

We still encourage users and security researchers to report anything concerning they encounter with their use of Docker Hub and/or Docker Desktop to security@docker.com. (For reference, our security and privacy guidelines can be found here.)

We also still encourage proper configuration according to Docker documentation, and not to do anything Moby wouldn’t do. (That means you should be whale-intentioned in your builds and help your fin-ends and family using Docker configure it properly.)

And while we can’t promise to stop using whale puns any time soon, we can promise to continue to be good stewards for developers — and a big part of that includes proper security procedures.

How Rapid7 Reduced Setup Time From Days to Minutes With Docker https://www.docker.com/blog/how-rapid7-reduced-setup-time-from-days-to-minutes-with-docker/ Fri, 18 Nov 2022 15:00:00 +0000 https://www.docker.com/?p=38772 This post was co-written by Kris Rivera, Principal Software Engineer at Rapid7.


Rapid7 is a Boston-based provider of security analytics and automation solutions enabling organizations to implement an active approach to cybersecurity. Over 10,000 customers rely on Rapid7 technology, services, and research to improve security outcomes and securely advance their organizations.

The security space is constantly changing, with new threats arising every day. To meet their customers’ needs, Rapid7 focuses on increasing the reliability and velocity of software builds while also maintaining their quality.

That’s why Rapid7 turned to Docker. Their teams use Docker to help development, support the sales pipeline, provide testing environments, and deploy to production in an automated, reliable way. 

By using Docker, Rapid7 transformed their onboarding process by automating manual processes. Setting up a new development environment now takes minutes instead of days. Their developers can produce faster builds that enable regular code releases to support changing requirements.

Automating the onboarding process

When developers first joined Rapid7, they were met with a static, manual process that was time consuming and error-prone. Configuring a development environment isn’t exciting for most developers. They want to spend most of their time creating! And setting up the environment is the least glamorous part of the process.

Docker helped automate this cumbersome process. Using Docker, Rapid7 could create containerized systems that were preconfigured with the right OS and developer tools. Docker Compose enabled multiple containers to communicate with each other, and it had the hooks needed to incorporate custom scripting and debugging tools.

Once the onboarding setup was configured through Docker, the process was simple for other developers to replicate. What once took multiple days now takes minutes.

Expanding containers into production

The Rapid7 team streamlined the setup of the development environment by using a Dockerfile. This helped them create an image with every required dependency and software package.

But they didn’t stop there. As this single Docker image evolved into a more complex system, they realized that they’d need more Docker images and container orchestration. That’s when they integrated Docker Compose into the setup.

Docker Compose simplified Docker image builds for each of Rapid7’s environments. It also encouraged a high level of service separation that split out different initialization steps into separate bounded contexts. Plus, they could leverage Docker Compose for inter-container communication, private networks, Docker volumes, defining environment variables with anchors, and linking containers for communication and aliasing.
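
As a rough sketch of what those Compose features look like in practice (the service names, images, and values below are illustrative, not Rapid7's actual configuration), a file might combine an environment-variable anchor, a private network, and named volumes:

# docker-compose.yaml (illustrative sketch)

x-common-env: &common-env
  APP_ENV: development
  LOG_LEVEL: debug

services:
  api:
    image: example/api:latest
    environment: *common-env
    networks:
      - backend
    volumes:
      - appdata:/var/lib/app
  worker:
    image: example/worker:latest
    environment: *common-env
    networks:
      - backend

networks:
  backend:

volumes:
  appdata: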

This was a real game changer for Rapid7, because Docker Compose truly gave them unprecedented flexibility. Teams then added scripting to orchestrate communication between containers when a trigger event occurs (like when a service has completed).

Using Docker, Docker Compose, and scripting, Rapid7 was able to create a solution for the development team that could reliably replicate a complete development environment. To optimize the initialization, Rapid7 wanted to decrease the startup times beyond what Docker enables out of the box.

Optimizing build times even further

After creating Docker base images, the bottom layers rarely have to change. Essentially, that initial build is a one-time cost. Even if the images change, the cached layers make it a breeze to get through that process quickly. However, you do have to reinstall all software dependencies from scratch again, which is a one-time cost per Docker image update.

Committing the installed software dependencies back to the base image allows for a simple, incremental, and often skippable stage. The Docker image is always usable in development and production, all on the development computer.

All of these efficiencies together streamlined an already fast 15-minute process down to 5 minutes — making it easy for developers to get productive faster.

How to build it for yourself

Check out code examples and explanations about how to replicate this setup for yourself. We’ll now tackle the key steps you’ll need to follow to get started.

Downloading Docker

Download and install the latest version of Docker to be able to perform Docker-in-Docker. Docker-in-Docker lets your Docker environment have Docker installed within a container. This lets your container run other containers or pull images.

To enable Docker-in-Docker, you can apt install the docker.io distribution as one of your first commands in your Dockerfile. Once the container is configured, mount the Docker socket from the host installation:

# Dockerfile

FROM ubuntu:20.04

# Install dependencies

RUN apt update && \
    apt install -y docker.io

Next, build your Docker image by running the following command in your CLI or shell script file:

docker build -t <docker-image-name> .

Then, start your Docker container with the following command:

docker run -v /var/run/docker.sock:/var/run/docker.sock -ti <docker-image-name>

Using a Docker commit script

Committing layered changes to your base image is what drives the core of the Dev Environments in Docker. Docker fetches the container ID based on the service name, and the changes you make to the running container are committed to the desired image. 

Because the host Docker socket is mounted into the container when executing the docker commit command, the container will apply the change to the base image located in the host Docker installation.

#!/bin/bash

SERVICE=${1}
IMAGE=${2}

# Commit changes to image
CONTAINER_ID=$(docker ps -aqf "name=${SERVICE}")

if [ ! -z "$CONTAINER_ID" ]; then
	echo "--- Committing changes from $SERVICE to $IMAGE ---"
	docker commit $CONTAINER_ID $IMAGE
fi
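
A hypothetical invocation of this script (assuming you saved it as commit.sh) commits the running webserver container's changes back to its base image:

./commit.sh webserver image-with-docker:latest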

Updating your environment

Mount the Docker socket from the host installation. Mounting the source code is insufficient without the :z property, which tells Docker that the content will be shared between containers.

You’ll have to mount the host machine’s Docker socket into the container. This lets any Docker operations performed inside the container actually modify the host Docker images and installation. Without this, changes made in the container are only going to persist in the container until it’s stopped and removed.

Add the following code into your Docker Compose file:

# docker-compose.yaml

services:
  service-name:
    image: image-with-docker:latest
    volumes:
        - /host/code/path:/container/code/path:z
        - /var/run/docker.sock:/var/run/docker.sock

Orchestrating components

Once Docker Compose has the appropriate services configured, you can start your environment in two different ways: use the docker-compose up command to start everything, or start an individual service (along with the services linked to it) with the following command:

docker compose start webserver

The main container references the linked service via the linked names. This makes it very easy to override any environment variables with the provided names. Check out the YAML file below:

services:
  webserver:
  mysql:
    ports:
      - '3306:3306'
    volumes:
      - dbdata:/var/lib/mysql
  redis:
    ports:
      - '6379:6379'
    volumes:
      - redisdata:/data

volumes:
  dbdata:
  redisdata:

Notes: For each service, you’ll want to choose and specify your preferred Docker Official Image version. Additionally, the MySQL Docker Official Image comes with important environment variables defaulted in — though you can specify them here as needed.
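
For example, a minimal mysql service with a pinned Official Image tag and that image's standard environment variables could look like this (the values shown are placeholders):

services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example-root-password
      MYSQL_DATABASE: appdb
      MYSQL_USER: appuser
      MYSQL_PASSWORD: example-password
    ports:
      - '3306:3306'
    volumes:
      - dbdata:/var/lib/mysql

volumes:
  dbdata: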

Managing separate services

Starting a small part of the stack can also be useful if a developer only needs that specific piece. For example, if we just wanted to start the MySQL service, we’d run the following command:

docker compose start mysql

We can stop this service just as easily with the following command:

docker compose stop mysql

Configuring your environment

Mounting volumes into the database services lets your containers apply the change to their respective databases while letting those databases remain as ephemeral containers.

In the main entry point and script orchestrator, provide a -p attribute to ./start.sh to set the PROD_BUILD environment variable. The build reads the variable inside the entry point and optionally builds a production or development version of the development environment.  

First, here’s how that script looks:

# start.sh

while [ "$1" != ""];
do
	case $1 in
		-p |  --prod) PROD_BUILD="true";;

	esac
		shift
done

Second, here’s a sample shell script:

export PROD_BUILD=$PROD_BUILD

Third, here’s your sample Docker Compose file:

# docker-compose.yaml

services:
  build-frontend:
    entrypoint:
      - bash
      - -c
      - "[[ \"$PROD_BUILD\" == \"true\" ]] && make fe-prod || make fe-dev"

Note: Don’t forget to add your preferred image under build-frontend if you’re aiming to make a fully functional Docker Compose file.

What if we need to troubleshoot any issues that arise? Debugging inside a running container only requires the appropriate debugging library in the mounted source code and an open port to mount the debugger. Here’s our YAML file:

# docker-compose.yaml

services:
  webserver:
    ports:
      - '5678:5678'
    links:
      - mysql
      - redis
    entrypoint:
      - bash
      - -c
      - ./start-webserver.sh

Note: Like in our previous examples, don’t forget to specify an image underneath webserver when creating a functional Docker Compose file.

In your editor of choice, provide a launch configuration to attach the debugger using the specified port. Once the container is running, run the configuration and the debugger will be attached:

#launch-setting.json
{
	"configurations" : [
		{
			"name":  "Python: Remote Attach",
			"type": "python",
			"request": "attach",
			"port": 5678,
			"host": "localhost",
			"pathMappings": [
				{
					"localRoot": "${workspaceFolder}",
					"remoteRoot": "."
				}
			]
		}
	]
}
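
For this attach configuration to work, the process inside the webserver container must be listening for debugger connections on the exposed port. One way to do that, assuming a Python web service and the debugpy library (neither of which is prescribed by the setup above), is to start the app through debugpy in start-webserver.sh:

# start-webserver.sh (sketch)
# Listen for a debugger on port 5678, wait for it to attach, then start the app
python -m debugpy --listen 0.0.0.0:5678 --wait-for-client app.py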

Confirming that everything works

Once the full stack is running, it’s easy to access the main entry point web server via a browser on the defined webserver port.

The docker ps command will show your running containers. Docker is managing communication between containers. 

The entire software service is now running completely in Docker. All the code lives on the host computer and is mounted into the Docker containers. The development environment is now completely portable using only Docker.

Remembering important tradeoffs

This approach has some limitations. First, running your developer environment in Docker will incur additional resource overhead. Docker has to run and requires extra computing resources as a result. Also, including multiple stages will require scripting as a final orchestration layer to enable communication between containers.

Wrapping up

Rapid7’s development team uses Docker to quickly create their development environments. They use the Docker CLI, Docker Desktop, Docker Compose, and shell scripts to create an extremely unique and robust Docker-friendly environment. They can use this to spin up any part of their development environment.

The setup also helps Rapid7 compile frontend assets, start cache and database servers, run the backend service with different parameters, or start the entire application stack. Using a “Docker-in-Docker” approach of mounting the Docker socket within running containers makes this possible. Docker’s ability to commit layers to the base image after dependencies are either updated or installed is also key. 

The shell scripts will export the required environment variables and then run specific processes in a specific order. Finally, Docker Compose makes sure that the appropriate service containers and dependencies are running.

Achieving future development goals

Relying on the Docker tool chain has been truly beneficial for Rapid7, since this has helped them create a consistent environment compatible with any part of their application stack. This integration has helped Rapid7 do the following: 

  • Deploy extremely reliable software to advanced customer environments
  • Analyze code before merging it in development
  • Deliver much more stable code
  • Simplify onboarding 
  • Form an extremely flexible and configurable development environment

By using Docker, Rapid7 is continuously refining its processes to push past the boundaries of what’s possible. Their next goal is to deliver production-grade stable builds on a daily basis, and they’re confident that Docker can help them get there.

Announcing Docker Hub OCI Artifacts Support https://www.docker.com/blog/announcing-docker-hub-oci-artifacts-support/ Mon, 31 Oct 2022 16:00:00 +0000 https://www.docker.com/?p=38556 We’re excited to announce that Docker Hub can now help you distribute any type of application artifact! You can now keep everything in one place without having to leverage multiple registries.

Before today, you could only use Docker Hub to store and distribute container images — or artifacts usable by container runtimes. This became a limitation of our platform, since container image distribution is just the tip of the application delivery iceberg. Nowadays, modern application delivery requires numerous types of artifacts beyond container images, such as Helm charts and volumes.

Developers often share these with clients that need them since they add immense value to each project. And while the OCI working groups are busy releasing the latest OCI Artifact Specification, we still have to package application artifacts as OCI images in the meantime. 

Docker Hub acts as an image registry and is perfectly suited for distributing application artifacts. That’s why we’ve added support for any software artifact — packaged as an OCI image — to Docker Hub.

What’s the Open Container Initiative (OCI)?

Back in 2015, we helped establish the Open Container Initiative as an open governance structure to standardize container image formats, container runtimes, and image distribution.

The OCI maintains a few core specifications. These govern the following:

  • How to package filesystem bundles
  • How to launch containerized, cross-platform apps
  • How to make packaged content accessible to remote clients

The Runtime Specification determines how OCI images and runtimes interact. Next, the Image Specification outlines how to create OCI images. Finally, the Distribution Specification defines how to make content distribution interoperable.

The OCI’s overall aim is to boost transparency, runtime predictability, software compatibility, and distribution. We’ve since donated our own container format and runC OCI-compliant runtime to the OCI, plus given the OCI-compliant distribution project to the CNCF.

Why are we adding OCI support? 

Container images are integral to supporting your containerized application builds. We know that images accumulate between projects, making centralized cloud storage essential to efficiently manage resources. Developers shouldn’t have to rely on local storage or wonder if these resources are readily accessible. However, we also know that developers want to store a variety of artifacts within Docker Hub. 

Storing your artifacts in Docker Hub unlocks “anywhere access” while also enabling improved collaboration through Docker Hub’s standard sharing capabilities. This aligns us more closely with the OCI’s content distribution mission by giving users greater control over key pieces of application delivery.

How do I manage different OCI artifacts?

We recommend using dedicated tools to help manage non-container OCI artifacts, like the Helm CLI for Helm charts or the OCI Registry-as-Storage (ORAS) CLI for arbitrary content types.

Let’s walk through a few use cases to showcase OCI support in Docker Hub.

Working with Helm charts

Helm chart support was your most-requested feature, and we’ve officially added it to Docker Hub! So, how do you take advantage? We’ll create a simple Helm chart and push it to Docker Hub. This process will follow Helm’s official guide for storing Helm charts as OCI images in registries.

First, we’ll create a demo Helm chart:

$ helm create demo

This’ll generate a familiar Helm chart boilerplate of files that you can edit:

demo
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│   	└── test-connection.yaml
└── values.yaml

3 directories, 10 files

Once we’re done editing, we’ll need to package the Helm chart as an OCI image:

$ helm package demo

Successfully packaged chart and saved it to: /Users/martine/tmp/demo-0.1.0.tgz

Don’t forget to log into Docker Hub before pushing your Helm chart. We recommend creating a Personal Access Token (PAT) for this. You can export your PAT via an environment variable, and login, as follows:

$ echo $REG_PAT | helm registry login registry-1.docker.io -u martine --password-stdin

Pushing your Helm chart

You’re now ready to push your first Helm chart to Docker Hub! But first, make sure you have write access to your Helm chart’s destination namespace. In this example, let’s push to the docker namespace:

$ helm push demo-0.1.0.tgz oci://registry-1.docker.io/docker

Pushed: registry-1.docker.io/docker/demo:0.1.0
Digest: sha256:1e960ad1693c234b66ec1f9ddce80986cbf7159d2bb1e9a6d2c2cd6e89925e54

Viewing your Helm chart and using filters

Now, if you log in to Docker Hub and navigate to the demo repository detail, you’ll find your Helm chart in the list of repository tags:

Helm Type Docker Hub

You can navigate to the Helm chart page by clicking on the tag. The page displays useful Helm CLI commands:

Helm CLI Commands

Repository content management is now easier. We’ve improved content discoverability by adding a drop-down button to quickly filter the repository list by content type. Simply click the Content drop-down and select Helm from the list:

Helm Type Selection

Working with volumes

Developers use volumes throughout the Docker ecosystem to share arbitrary application data like database files. You can already back up your volumes using the Volume Backup & Share extension that we recently launched. You can now also filter repositories to find those containing volumes using the same drop-down menu.

But until Volumes Backup & Share pushes volumes as OCI artifacts instead of images (coming soon!), you can use the ORAS CLI to push volumes.

Note: We recommend ORAS CLI versions 0.15 or later since these bring full OCI registry client functionality.

Let’s walk through a simple use case that mirrors the examples documented by the ORAS CLI. First, we’ll create a simple file we want to package as a volume:

$ echo "bar" > foo.txt

For Docker Hub to recognize this volume, we must attach a config file to the OCI image upon creation and mark it with a specific media type. The file can contain arbitrary content, so let’s create one:

$ echo "{\"name\":\"foo\",\"value\":\"bar\"}" > config.json

With this step completed, you’re now ready to push your volume.

Pushing your volume

Here’s where the magic happens. The media type Docker Hub needs to successfully recognize the OCI image as a volume is application/vnd.docker.volume.v1+tar.gz. You can attach the media type to the config file and push it to Docker Hub with the following command (plus its resulting output):

$ oras push registry-1.docker.io/docker/demo:0.0.1 --config config.json:application/vnd.docker.volume.v1+tar.gz foo.txt:text/plain

Uploading b5bb9d8014a0 foo.txt
Uploaded  b5bb9d8014a0 foo.txt
Pushed registry-1.docker.io/docker/demo:0.0.1
Digest: sha256:f36eddbab8459d0ad1436b7ca8af6bfc512ec74f45d8136b53c16db87562016e

We now have two types of content in the demo repository as shown in the following breakdown:

Volume Content Type List

If you navigate to the content page, you’ll see some basic information that we’ll expand upon in future iterations. This will boost visibility into a volume’s contents.

Volume Details

Handling generic content types

If you don’t use the application/vnd.docker.volume.v1+tar.gz media type when pushing the volume with the ORAS CLI, Docker Hub will mark the artifact as generic to distinguish it from recognized content.

Let’s push the same volume but use application/vnd.random.volume.v1+tar.gz media type instead of the one known to Docker Hub:

$ oras push registry-1.docker.io/docker/demo:0.1.1 --config config.json:application/vnd.random.volume.v1+tar.gz foo.txt:text/plain

Exists	7d865e959b24 foo.txt
Pushed registry-1.docker.io/docker/demo:0.1.1
Digest: sha256:d2fb2b176ee4e326f1f34ecdaede8db742f2c444cb2c9ceff0f5c8b743281c95

You can see the new content is assigned a generic Other type. We can still view the tagged content’s media type by hovering over the type label. In this case, that’s application/vnd.random.volume.v1+tar.gz:

Other Content Type List

If you’d like to filter the repositories that contain both Helm charts and volumes, use the same drop-down menu in the top-right corner:

Volume Type Selection

Working with container images

Finally, you can continue pushing your regular container images to the exact same repository as your other artifacts. Say we re-tag the Redis Docker Official Image and push it to Docker Hub:

$ docker tag redis:3.2-alpine docker/demo:v1.2.2

$ docker push docker/demo:v1.2.2

The push refers to repository [docker.io/docker/demo]
a1892d5d1a6d: Mounted from library/redis
e41876edb6d0: Mounted from library/redis
7119119b7542: Mounted from library/redis
169a281fff0f: Mounted from library/redis
04c8ef03e935: Mounted from library/redis
df64d3292fd6: Mounted from library/redis
v1.2.2: digest: sha256:359cfebb00bef01cda3bc1ca453e6455c770a246a06ad8df499a28118c144eda size: 1570

Viewing your container images

If you now visit the demo repository page on Docker Hub, you’ll see every artifact listed under Tags and scans:

All Artifacts Content List

We’ll also introduce more features soon to help you better organize your application content, so stay tuned for more announcements!

Follow along for more updates

All developers can now access and choose from more robust sets of artifacts while building and distributing applications with Docker Hub. Not only does this remove existing roadblocks, but it’ll hopefully encourage you to create and distribute even more exciting applications.

But, our mission doesn’t end here! We’re continually working to bolster our OCI support. While the OCI Artifact Specification is considered a release candidate, full Docker Hub support for OCI Reference Types and the accompanying Referrers API is on the horizon. Stay tuned for upcoming enhancements, improved repo organization, and more.

Note: The OCI artifact has now been removed from OCI image-spec. Refer to this update for more information.

How to Implement Decentralized Storage Using Docker Extensions https://www.docker.com/blog/how-to-implement-decentralized-storage-using-docker-extensions/ Thu, 27 Oct 2022 14:00:00 +0000 https://www.docker.com/?p=38456 This is a guest post written by Marton Elek, Principal Software Engineer at Storj.

In part one of this two-part series, we discussed the intersection of Web3 and Docker at a conceptual level. In this post, it’s time to get our hands dirty and review practical examples involving decentralized storage.

We’d like to see how we can integrate Web3 projects with Docker. At the beginning we have to choose from two options:

  1. We can use Docker to containerize any Web3 application. We can also start an IPFS daemon or an Ethereum node inside a container. Docker resembles an infrastructure layer since we can run almost anything within containers.
  2. What’s most interesting is integrating Docker itself with Web3 projects. That includes using Web3 to help us when we start containers or run something inside containers. In this post, we’ll focus on this portion.

The two most obvious integration points for a container engine are execution and storage. We choose storage here since more mature decentralized storage options are currently available. There are a few interesting approaches for decentralized versions of cloud container runtimes (like ankr), but they’re more likely replacements for container orchestrators like Kubernetes — not the container engine itself.

Let’s use Docker with decentralized storage. Our example uses Storj, but all of our examples apply to almost any decentralized cloud storage solution.

Storj Components

Storj is a decentralized cloud storage service where node providers are compensated to host the data, but metadata servers (which manage the location of the encrypted pieces) are federated (many interoperable central servers can work together with storage providers).

It’s important to mention that decentralized storage almost always requires you to use a custom protocol. A traditional HTTP upload is a connection between one client and one server. Decentralization requires uploading data to multiple servers. 

Our goal is simple: we’d like to use docker push and docker pull commands with decentralized storage instead of a central Docker registry. In our latest DockerCon presentation, we identified multiple approaches:

  • We can change Docker and containerd to natively support different storage options
  • We can provide tools that magically download images from decentralized storage and persists them in the container engine’s storage location (in the right format, of course)
  • We can run a service which translates familiar Docker registry HTTP requests to a protocol specific to the decentralized cloud
    • Users can manage this themselves.
    • This can also be a managed service.

Leveraging native support

I believe the ideal solution would be to extend Docker (and/or the underlying containerd runtime) to support different storage options. But this is definitely a bigger challenge. Technically, it’s possible to modify every service, but massive adoption and a big user base mean that large changes require careful planning.

Currently, it’s not readily possible to extend the Docker daemon to use special push or pull targets. Check out our presentation on extending Docker if you’re interested in technical deep dives and integration challenges. The best solution might be a new container plugin type, which is being considered.

One benefit of this approach would be good usability. Users can leverage common push or pull commands. But based on the host, the container layers can be sent to a decentralized storage.

Using tool-based push and pull

Another option is to upload or download images with an external tool — which can directly use remote decentralized storage and save it to the container engine’s storage directory.

One example of this approach (but with centralized storage) is the AWS ECR container resolver project. It provides a CLI tool which can pull and push images using a custom source. It also saves them as container images of the containerd daemon.

Unfortunately, this approach also has some significant limitations:

  • It couldn’t work with a container orchestrator like Kubernetes, since they aren’t prepared to run custom CLI commands outside of pulling or pushing images.
  • It’s containerd specific. The Docker daemon – with different storage – couldn’t use it directly.
  • The usability is reduced since users need different CLI tools.

Using a user-managed gateway

If we can’t push or pull directly to decentralized storage, we can create a service which resembles a Docker registry and meshes with any client. But under the hood, it uploads the data using the decentralized storage’s native protocol.

This thankfully works well, and the standard Docker registry implementation is already compatible with different storage options. 
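
As an illustration of how little glue this requires, the standard open source registry can be pointed at an S3-compatible gateway purely through configuration. The following is a sketch, not a production setup: the bucket and credentials are placeholders, and the endpoint shown is Storj's hosted S3-compatible gateway.

# config.yml for the standard registry (distribution/distribution), sketch
version: 0.1
storage:
  s3:
    regionendpoint: https://gateway.storjshare.io
    region: us-east-1
    bucket: my-registry-bucket
    accesskey: <access-key>
    secretkey: <secret-key>
http:
  addr: :5000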

At Storj, we already have an implementation that we use internally for test images. However, the nerdctl ipfs subcommand is another good example of this approach (it starts a local registry to access containers from IPFS).

We have problems here as well:

  • Users should run the gateway on each host. This can be painful alongside Kubernetes or other orchestrators.
  • Implementation can be more complex and challenging compared to a native upload or download.

Using a hosted gateway

To make this slightly easier, one can provide a hosted version of the gateway. For example, Storj is fully S3 compatible via a hosted (or self-hosted) S3-compatible HTTP gateway. With this approach, users have three options:

  • Use the native protocol of the decentralized storage with full end-to-end encryption and every feature
  • Use the convenient gateway services and trust the operator of the hosted gateways.
  • Run the gateway on its own

While each option is acceptable, a perfect solution still doesn’t exist.

Using Docker Extensions

One of the biggest concerns with using local gateways was usability. Our local registry can help push images to decentralized storage, but it requires additional technical work (configuring and running containers, etc.)

This is where Docker Extensions can help us. Extensions are a new feature of Docker Desktop. You can install them via the Docker Dashboard, and they can provide additional functionality — including new screens, menu items, and options within Docker Desktop. These are discoverable within the Extensions Marketplace:

Extensions Marketplace

And this is exactly what we need! A good UI can make Web3 integration more accessible for all users.

Docker Extensions are easily discoverable within the Marketplace, and you can also add them manually (usually during development).

At Storj, we started experimenting with better user experiences by developing an extension for Docker Desktop. It’s still under development and not currently in the Marketplace, but feedback so far has convinced us that it can massively improve usability, which was our biggest concern with almost every available integration option.

Extensions themselves are Docker containers, which make the development experience very smooth and easy. Extensions can be as simple as a metadata file in a container and static HTML/JS files. There are special JavaScript APIs that manipulate the Docker daemon state without a backend.

You can also use a specialized backend. The JavaScript part of the extension can communicate with any containerized backend via a mounted socket.

The new docker extension command can help you quickly manage extensions (as an example: there’s a special docker extension dev debug subcommand that shows the Web Developer Toolbar for Docker Desktop itself.)

Storj Docker Registry Extension

Thanks to the provided developer tools, the challenge is not creating the Docker Desktop extension, but balancing the UI and UX.

Summary

As we discussed in our previous post, Web3 should be defined by user requirements, not by technologies (like blockchain or NFT). Web3 projects should address user concerns around privacy, data control, security, and so on. They should also be approachable and easy to use.

Usability is a core principle of containers, and one reason why Docker became so popular. We need more integration and extension points to make it easier for Web3 project users to provide what they need. Docker Extensions also provide a very powerful way to pair good integration with excellent usability.

We welcome you to try our Storj Extension for Docker (still under development). Please leave any comments and feedback via GitHub.

Bring Continuous Integration to Your Laptop With the Drone CI Docker Extension https://www.docker.com/blog/bring-continuous-integration-to-your-laptop-with-the-drone-ci-docker-extension/ Tue, 20 Sep 2022 14:00:00 +0000 https://www.docker.com/?p=37606 Continuous Integration (CI) is a key element of cloud native application development. With containers forming the foundation of cloud-native architectures, developers need to integrate their version control system with a CI tool. 

There’s a myth that continuous integration needs a cloud-based infrastructure. Even though CI makes sense for production releases, developers need to build and test the pipeline before they can share it with their team — or have the ability to perform the continuous integration (CI) on their laptop. Is that really possible today? 

Introducing the Drone CI pipeline

An open-source project called Drone CI makes that a reality. With over 25,700 GitHub stars and 300-plus contributors, Drone is a cloud-native, self-service CI platform. Drone CI offers a mature, container-based system that leverages the scaling and fault-tolerance characteristics of cloud-native architectures. It helps you build container-friendly pipelines that are simple, decoupled, and declarative. 

Drone is a container-based pipeline engine that lets you run any existing container as part of your pipeline, or package your build logic into reusable containers called Drone Plugins.

Drone plugins are configurable to suit your needs, which lets you distribute those containers within your organization or to the wider community.
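
For example, a build step can invoke a plugin simply by referencing its image and passing settings. Here's a sketch using the community plugins/docker image (the repository name and tag are placeholders):

steps:
  - name: publish image
    image: plugins/docker
    settings:
      repo: example/my-app
      tags: latest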

Running Drone CI pipelines from Docker Desktop

For a developer working with decentralized tools, the task of building and deploying microservice applications can be monumental. It’s tricky to install, manage, and use these apps in those environments. That’s where Docker Extensions come in. With Docker Extensions, developer tools are integrated right into Docker Desktop — giving you streamlined management workflows. It’s easier to optimize and transform your development processes. 

The Drone CI extension for Docker Desktop brings CI to development machines. You can now import Drone CI pipelines into Docker Desktop and run them locally. You can also run specific steps of a pipeline, monitor execution results, and inspect logs.

Setting up a Drone CI pipeline

In this guide, you’ll learn how to set up a Drone CI pipeline from scratch on Docker Desktop. 

First, you’ll install the Drone CI Extension within Docker Desktop. Second, you’ll learn how to discover Drone pipelines. Third, you’ll see how to open a Drone pipeline on Visual Studio Code. Lastly, you’ll discover how to run CI pipelines in trusted mode, which grants them elevated privileges on the host machine. Let’s jump in.

Prerequisites

You’ll need to download Docker Desktop 4.8 or later before getting started. Make sure to choose the correct version for your OS and then install it. 

Next, hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Click the Settings gear > Extensions tab > check the “Enable Docker Extensions” box.

Enable Extensions Settings

Installing the Drone CI Docker extension

Drone CI isn’t currently available on the Extensions Marketplace, so you’ll have to download it via the CLI. Launch your terminal and run the following command to install the Drone CI Extension:

docker extension install drone/drone-ci-docker-extension:latest

The Drone CI extension will soon appear in the Docker Dashboard’s left sidebar, underneath the Extensions heading:

Drone CI Sidebar

Import Drone pipelines

You can click the “Import Pipelines” option to specify the host filesystem path where your Drone CI pipelines (drone.yml files) are. If this is your first time with Drone CI pipelines, you can use the examples from our GitHub repo.

The long-run-demo sample, for example, runs a local pipeline that executes a long-running sleep command. This occurs within a Docker container.

kind: pipeline
type: docker
name: sleep-demos
steps: 
  - name: sleep5
    image: busybox
    pull: if-not-exists
    commands:
    - x=0;while [ $x -lt 5 ]; do echo "hello"; sleep 1; x=$((x+1)); done
  - name: an error step
    image: busybox
    pull: if-not-exists
    commands:
    - yq --help

You can download this pipeline YAML file from the Drone CI GitHub page.

The file starts with a pipeline object that defines your CI pipeline. The type attribute defines your preferred runtime while executing that pipeline. 

Drone supports numerous runners like docker, kubernetes, and more. The extension currently supports only docker pipelines.
Each pipeline step spins up a Docker container with the corresponding image defined as part of the step’s image attribute.

Each step defines an attribute called commands. This is a list of shell commands that we want to execute as part of the build. The defined list of commands is converted into a shell script and set as the Docker container’s ENTRYPOINT. If any command (for example, the missing yq command in this case) returns a non-zero exit code, the pipeline fails and exits.

Drone Pipelines List
Drone Stages

Edit your pipeline faster in VS Code via Drone CI

Visual Studio Code (VS Code) is a lightweight, highly-popular IDE. It supports JavaScript, TypeScript, and Node.js. VS Code also has a rich extensions ecosystem for numerous other languages and runtimes. 

Opening your Drone pipeline project in VS Code takes just seconds from within Docker Desktop:

Open Pipeline Project

This feature helps you quickly view your pipeline and add, edit, or remove steps — then run them from Docker Desktop. It lets you iterate faster while testing new pipeline changes.

Running specific steps in the CI pipeline

The Drone CI Extension lets you run individual steps within the CI pipeline at any time. To better understand this functionality, let’s inspect the following Drone YAML file:

kind: pipeline
type: docker
name: sleep-demos
steps: 
  - name: sleep5
    image: busybox
    pull: if-not-exists
    commands:
    - x=0;while [ $x -lt 5 ]; do echo "hello"; sleep 1; x=$((x+1)); done
  - name: an error step
    image: busybox
    pull: if-not-exists
    commands:
    - yq --help

In this example, the first pipeline step, defined as sleep5, runs a shell loop that prints “hello” once per second for five seconds and then stops (ignoring the error step). You can run the specific sleep-demos stage on its own from within the pipeline.

Running steps in trusted mode

Sometimes, you’re required to run a CI pipeline with elevated privileges. These privileges enable a user to systematically do more than a standard user. This is similar to how we pass the --privileged=true parameter within a docker run command. 

When you execute docker run --privileged, Docker will permit access to all host devices and set configurations in AppArmor or SELinux. These settings may grant the container nearly equal access to the host as processes running outside containers on the host.

Drone’s trusted mode tells your container runtime to run the pipeline containers with elevated privileges on the host machine. Among other things, trusted mode can help you do the following (see the sketch after this list):

  • Mount the Docker host socket onto the pipeline container
  • Mount the host path to the Docker container
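
A minimal sketch of a pipeline that relies on trusted mode to mount the host's Docker socket might look like the following (the image tag and step contents are illustrative):

kind: pipeline
type: docker
name: trusted-demo

volumes:
  - name: dockersock
    host:
      path: /var/run/docker.sock

steps:
  - name: use-host-docker
    image: docker:cli
    volumes:
      - name: dockersock
        path: /var/run/docker.sock
    commands:
      - docker version

If you run such a pipeline with the standalone Drone CLI instead of the extension, host volumes are typically only honored when you pass its --trusted flag (drone exec --trusted).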

Run pipelines using environment variable files

The Drone CI Extension lets you define environment variables for individual build steps. You can set these within a pipeline step. Like docker run provides a way to pass environment variables to running containers, Drone lets you pass usable environment variables to your build. Consider the following Drone YAML file:

kind: pipeline
type: docker
name: default
steps: 
  - name: display environment variables
    image: busybox
    pull: if-not-exists
    commands:
    - printenv

The file starts with a pipeline object that defines your CI pipeline. The type attribute defines your preferred runtime (Docker, in our case) while executing that pipeline. The platform section helps configure the target OS and architecture (like arm64) and routes the pipeline to the appropriate runner. If unspecified, the system defaults to Linux amd64.

The steps section defines a series of shell commands. These commands run within a busybox Docker container as the ENTRYPOINT. As shown, the command prints the environment variables if you’ve declared the following environment variables in your my-env file:

DRONE_DESKTOP_FOO=foo
DRONE_DESKTOP_BAR=bar

You can choose your preferred environment file and run the CI pipeline (pictured below):

Pipeline ENV File

If you try importing the CI pipeline, you can print every environment variable.
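
If you also use the standalone Drone CLI outside of Docker Desktop, the same environment file can typically be supplied on the command line (check your CLI version's documentation for the exact flag):

drone exec --env-file=my-env .drone.yml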

Run pipelines with secrets files

We use repository secrets to store and manage sensitive information like passwords, tokens, and ssh keys. Storing this information as a secret is considered safer than storing it within a plain text configuration file. 

Note: Drone masks all values used from secrets while printing them to standard output and error.

The Drone CI Extension lets you choose your preferred secrets file and use it within your CI pipeline as shown below:

Secrets File
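
Inside the pipeline definition itself, steps reference a secret by name using Drone's from_secret syntax. For example (the secret and variable names here are placeholders):

steps:
  - name: use a secret
    image: busybox
    pull: if-not-exists
    environment:
      API_TOKEN:
        from_secret: api_token
    commands:
      - echo "the secret value is masked in build logs"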

Remove pipelines

You can remove a CI pipeline in just one step. Select one or more Drone pipelines and remove them by clicking the red minus (“-”) button on the right side of the Dashboard. This action will only remove the pipelines from Docker Desktop — without deleting them from your filesystem.

Bulk remove all pipelines

Bulk Remove

Remove a single pipeline

Remove Single Pipeline

Conclusion

Drone is a modern, powerful, container-friendly CI that empowers busy development teams to automate their workflows. This dramatically shortens building, testing, and release cycles. With a Drone server, development teams can build and deploy cloud apps. These harness the scaling and fault-tolerance characteristics of cloud-native architectures like Kubernetes. 

Check out Drone’s documentation to get started with CI on your machine. With the Drone CI extension, developers can now run their Drone CI pipelines locally as they would in their CI systems.

Want to dive deeper into Docker Extensions? Check out our intro documentation, or discover how to build your own extensions

Clarifying Misconceptions About Web3 and Its Relevance With Docker https://www.docker.com/blog/clarifying-misconceptions-about-web3-and-its-relevance-with-docker/ Thu, 15 Sep 2022 14:30:00 +0000 https://www.docker.com/?p=37521

This is a guest post written by Marton Elek, Principal Software Engineer at Storj.

This blog is the first in a two-part series. We’ll talk about the challenges of defining Web3 plus some interesting connections between Web3 and Docker.

Part two will highlight technical solutions and demonstrate how to use Docker and Web3 together.

We’ll build upon the presentation, “Docker and Web 3.0 — Using Docker to Utilize Decentralized Infrastructure & Build Decentralized Apps,” by JT Olio, Krista Spriggs, and Marton Elek from DockerCon 2022. However, you don’t have to view that session before reading this post.

What’s Web3, after all?

If you ask a group what Web3 is, you’ll likely receive a different answer from each person. The definition of Web3 causes a lot of confusion, but this lack of clarity also offers an opportunity. Since there’s no consensus, we can offer our own vision.

One problem is that many definitions are based on specific technologies, as opposed to goals:

  • “Web3 is an idea […] which incorporates concepts such as decentralization, blockchain technologies, and token-based economics” (Wikipedia)
  • “Web3 refers to a decentralized online ecosystem based on the blockchain.” (Gavin Wood)

There are three problems with defining Web3 based on technologies and not high-level goals or visions (or in addition to them). In general, these definitions unfortunately confuse the “what” with the “how.” We’ll focus our Web3 definition on the “what” — and leave the “how” for a discussion on implementation with technologies. Let’s discuss each issue in more detail.

Problem #1: it should be about “what” problems to solve instead of “how”

To start, most people aren’t really interested in “token-based economics.” But, they can passionately critique the current internet (”Web2”) through many common questions:

  • Why’s it so hard to move between platforms and export or import our data? Why’s it so hard to own our data?
  • Why’s it so tricky to communicate with friends who use other social or messaging services?
  • Why can a service provider shut down my user without proper explanation or possibility of appeal? 
  • Most terms of service agreements offer little practical help. They’re long and hard to understand, and nobody reads them (just envision the lengthy new terms around websites and user-data treatment that followed GDPR regulations). In a dispute with service providers, we’re at a disadvantage and less likely to win.
  • Why can’t we have better privacy? Full encryption for our data? Or the freedom to choose who can read or use our personal data, posts, and activities?
  • Why couldn’t we sell our content in a more flexible way? Are we really forced to accept high margins from central marketplaces to be successful?
  • How can we avoid being dependent on any one person or organization?
  • How can we ensure that our data and sensitive information are secured?

These are well-known problems. They’re also key usability questions — and ultimately the “what” that we need to solve. We aren’t necessarily asking for new technologies like blockchains or NFTs. Instead, we want better services with improved security, privacy, control, sovereignty, economics, and so on. Blockchain technology, NFTs, federation, and the rest are only useful if they help us address these issues and enjoy better services. They’re potential tools for “how” to solve the “what.”

What if we had an easier, fairer system for connecting artists with patrons and donors, to help fund their work? That’s just one example of how Web3 could help.

As a result, I believe Web3 should be defined as “the movement to improve the internet’s UX, including for — but not limited to — security, privacy, control, sovereignty, and economics.”

Problem #2: Blockchain, but not Web3?

We can use technologies in so many different ways. Blockchains can create a currency system with more sovereignty, control, and economics, but they can also support fraudulent projects. Since we’ve seen so much of that, it’s not surprising that many people are highly skeptical.

However, those comments are usually critical towards unfair or fraudulent projects that use Web3’s core technologies (e.g. blockchain) to siphon money from people. They’re not usually directed at big problems related to usability.

Healthy skepticism can save us, but we at least need some cautious optimism. Always keep inventing and looking for better solutions. Maybe better technologies are required. Or, maybe using current technologies differently could best help us achieve the “how” of Web3.

Problem #3: Web3, but not blockchain?

We can also view the previous problem from the opposite perspective. It’s not just blockchain or NFTs that can help us solve the internet’s current challenges described in Problem #1. Some projects don’t use blockchain at all, yet qualify as Web3 due to the internet challenges they solve.

One good example is federation — one of the oldest ways of achieving decentralization. Our email system is still fairly decentralized, even if big players handle a significant proportion of email accounts. And this decentralization helped new players provide better privacy, security, or control.

Thankfully, there are newer, promising projects like Matrix, one of the very few chat apps designed for federation from the ground up. How easy would communication be if all chat apps allowed federated message exchanges between providers?

Docker and Web3

Since we’re here to talk about Docker, how can we connect everything to containers?

While there are multiple ways to build and deploy software, containers are usually involved on some level. Wherever we use technology, containers can probably help.

But I believe there’s a fundamental, hidden connection between Docker and Web3. The three similarities below are each small, but together they form a very interesting common link.

Usability as a motivation

We first defined the Web3 movement based on the need to improve user experiences (privacy, control, security, etc.). Docker containers can provide the same benefits.

Containers quickly became popular because they solved real user problems. They gave developers reproducible environments, easy distribution, and just enough isolation.

Since day one, Docker has been based on existing, proven technologies like namespace isolation and Linux kernel cgroups. By building upon leading technologies, Docker relieved many existing pain points.

Web3 is similar. We should pick the right technologies to achieve our goals. And luckily innovations like blockchains have become mature enough to support the projects where they’re needed.

Content-addressable world

One barrier to creating a fully decentralized system is creating globally unique, decentralized identifiers for all services and items. When somebody creates a new identifier, we must ensure it’s truly one of a kind.

There’s no easy fix, but blockchains can help. After all, chains are the central source of truth (agreed on by thousands of participants in a decentralized way). 

There’s another way to solve this problem. It’s very easy to choose a unique identifier if there’s only one option and the choice is obvious. For example, if any piece of content is identified by its hash, then that hash is the unique identifier. If the content is the same, the unique identifier (the hash itself) will always be the same.

One example is Git, which was made for distribution. Every commit is identified by its hash (over the metadata, pointers to parents, and pointers to the file trees). This made Git decentralization-friendly: while most repositories are hosted by big companies, it’s pretty easy to move content between providers, which is exactly one of the problems we set out to solve earlier.

IPFS — as a decentralized content routing protocol — also identifies pieces of content by their hashes to avoid any confusion between decentralized nodes. It has also created a full ecosystem of notation for different hashing types (multihash) and different data structures (IPLD).

We see exactly the same thing when we look at Docker containers! The digest acts as a content-based hash and can identify layers and manifests. This makes it easy to verify them and fetch them from different sources without confusion. Docker was designed to be decentralized from the get-go.
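
To make the idea concrete, here is a minimal Go sketch of content addressing. It is not Docker's actual hashing pipeline (Docker digests layer tarballs and manifest JSON), but the principle of deriving an identifier from the bytes themselves is the same, and the resulting "sha256:<hex>" format mirrors the digests you see when pulling images.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Any blob of bytes: a file, a layer tarball, an image manifest...
	content := []byte("hello, content-addressed world")

	// The identifier is computed from the content itself, so anyone holding
	// the same bytes derives the same ID without consulting a central authority.
	sum := sha256.Sum256(content)
	digest := fmt.Sprintf("sha256:%x", sum)

	fmt.Println(digest)
	// Change a single byte of the content and the digest changes completely,
	// which is how Docker, Git, and IPFS detect that two pieces of content differ.
}
```

Because the digest depends only on the content, the same layer can come from Docker Hub, a mirror, or a private registry and still be verified locally before use.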

Federation

Content-based digests of container layers and manifests help here, because they make Docker usable with any kind of registry.

This is a type of federation. Even if Docker Hub is available, it’s very easy to start new registries. There’s no vendor lock-in, and there’s no grueling process behind being listed on one single possible marketplace. Publishing and sharing new images is as painless as possible.

As we discussed above, I believe federation is one form of decentralization, and decentralization is one approach to getting what we need: better control and ownership. There are arguments against federation, but I believe it offers more benefits than its complexity costs. Many hard forks, soft forks, and blockchain restarts prove that control (especially democratic control) is possible with federation.

But whatever we call it, I believe the freedom to use different container registries and the ease of deploying containers are important factors in the success of Docker containers.

Summary

We’ve successfully defined Web3 based on end goals and user feedback — on “what” needs to be achieved. This definition seems to work very well. It leaves room for deciding “how” we achieve those goals. It also covers existing “Web2” technologies and many future projects, even ones that don’t use NFTs or blockchains. And it excludes the fraudulent projects that have drawn so much skepticism.

We’ve also found some interesting intersections between Web3 and Docker!

Our job is to keep working and keep innovating. We should focus on the goals ahead and find the right technologies based on those goals.

Next up, we’ll move into more technical territory. Join us as we explore using Docker with fully distributed storage options.

In Case You Missed It: Docker Community All-Hands https://www.docker.com/blog/docker-community-all-hands-6-highlights/ Tue, 06 Sep 2022 14:00:00 +0000 https://www.docker.com/?p=37309

That’s a wrap! Community All-Hands has officially come to a close. Our sixth All-Hands featured over 35 talks across 10 channels — with topics ranging from “getting started with Docker” to running machine learning on AI hardware accelerators.

As always, every channel was buzzing with activity. Your willingness to jump in, ask questions, and help others is what the Docker community’s all about. And we loved having the chance to chat with everyone directly! 

Couldn’t attend our recent Community All-Hands event? We’ll cover some important announcements, interesting presentations, and more that you missed.

Docker CPO looks back at a year of developer obsession

Headlining Community All-Hands were some important announcements on the main stage, kicked off by Jake Levirne, our Head of Products. This past year, our engineers focused on improving developer experiences across every product. Integrated features like Dev Environments, Docker Extensions, SBOM, and Compose V2 have helped streamline workflows — along with numerous usability and OS-specific improvements. 

Over the last year, the Docker engineering team:

  • Released 24 new features
  • Made 37,000 internal commits
  • Curated 52 extensions and counting within Docker Desktop and Docker Hub
  • Hosted over eight million Docker Desktop downloads

We couldn’t have made these improvements without your feedback. Keep your votes, comments, and messages coming — they’re essential for helping us ship the features you need. Keep an eye out for continued updates about UX enhancements, Trusted Open Source, and user-centric partnerships.

How to use SBOMs to support multiple layers

Following Jake, our presenters dove deeper into the technical depths. Next up was a session on viewing images through layered software bills of materials (SBOMs), led by Docker Principal Software Engineer Jim Clark. 

SBOMs are extremely helpful for knowing what’s in your images and apps. Things get complex, though, because many images stem from base images, and even those base images can have their own base images, making full image transparency difficult. Multi-layer images have historically been harder to analyze. To get a full picture of a multi-layer image, you’ll need to know things like:

  • Which packages are included
  • How those packages are distributed between layers
  • How image rebuilds can impact packages
  • If security fixes are available for individual packages

Jim shared that it’s now possible to gather this information. While this feature is still under development, users will soon be able to see layer sizes and total packages per layer, and view complete Dockerfiles on GitHub.

And as a next step, the team is also focused on understanding shared content and tracking public data. This is another step toward building developer trust, and knowing exactly what’s going into your projects.

Docker Desktop meets multi-platform image support via containerd

Rounding out our major announcements was Djordje Lukic, Staff Software Engineer, with a session on containerd image management. Containerd has been our container runtime since 2016. Since then, we’ve extended its integration within Docker Desktop and Docker Engine.

Containerd migration offers some key benefits: 

  • There’s less code to maintain
  • We can ship features more rapidly and shorten release cycles
  • It’s easier to improve our developer tooling
  • We can bring multi-platform support to Docker, while following the Open Container Initiative (OCI) more closely and supporting different snapshotters.

Leveraging containerd more heavily means we can consolidate portions of the Docker Daemon. Check out our containerd announcement blog to learn more. 

Showcasing attendees’ favorite talks

Every Community All-Hands channel hosted unique sets of topics, while each session highlighted relationships between Docker and today’s top technologies. Here are some popular talks from Community All-Hands and why they’re worth watching. 

Developing Go Apps with Docker

From the “Best Practices” channel.

Go (or Golang) is a language well-loved and highly sought after by professional developers. We support it as a core language and maintain a Go language-specific use guide within our docs. 

Follow along with Muhammad Quanit as he explores containerized Go applications. Muhammad covers best practices, the importance of multi-stage builds, and other tips for optimizing your Dockerfiles. By using a Go web server, he demonstrates the “Dockerization” process and the usefulness of IDE extensions.

Integration Testing Your Legacy Java Microservice with docker-maven-plugin

From the “Demos” channel.

Enterprises and development teams often maintain Java code bases upwards of 10 years old. While these services may still be functional, it’s been challenging to bind automated testing to each individual microservice repository. Docker Compose does enable batch testing, but that extra, per-repository granularity is still needed.

Join Terry Brady as he shows you how to run JUnit microservices tests, automated maven testing, code coverage calculation, and even test-resource management. Don’t worry about rewriting your legacy code. Instead, learn how integration testing and dedicated test containers help make life easier. 

How Does Docker Run Machine Learning on Specialized AI Hardware Accelerators

From the “Cutting Edge” channel.

Currently, 35% of companies report using AI in some fashion, while another 42% of respondents say they’re considering it. Machine learning (ML) — a subset of AI — has been critical to creating predictive models, extracting value from big data, and automating many tedious processes. 

Shashank Prasanna outlines just how important specialized hardware is to powering these algorithms. And while ML gains steam, companies are unveiling numerous supporting chipsets and GPUs. How does Docker handle these accelerators? Follow along as Shashank highlights Docker’s capabilities within multi-processor systems, and how these differ from traditional, single-CPU systems from an AI standpoint.

But wait, there’s more! 

The above talks are just a small sample of our learning sessions. Swing by our Docker YouTube channel to browse through our entire content library. 

You can also check out playlists from each event channel: 

  • Mainstage – showcases of the community and Docker’s latest developments 
  • Best Practices – tips to get the most from your Docker applications
  • Demos – in-depth presentations that tackle unique use cases, step by step
  • Security – best practices for building stronger, attack-resistant containers and applications
  • Extensions – the basics of building extensions while demonstrating their usefulness in different scenarios
  • Cutting Edge – talks about how Docker and today’s leading technologies unite
  • International Waters – multilingual tech talks and panel discussions on trends
  • Open Source – panels on the Docker Sponsored Open-Source Program and the value of open source
  • Unconference – informal talks on getting started with Docker and Docker experiences

Thank you and see you next time!

From key Docker announcements, to technical talks, to our buzzworthy Community Awards ceremony, we had an absolute blast with you at Community All-Hands. Also, a huge special thanks to DJ Alessandro Vozza for keeping the music and excitement going!

And don’t forget to download the latest Docker Desktop to check out the releases and try out any new tricks you’ve learned.

See you at our next All-Hands event, and thank you for making this community stronger. Happy developing!

Learn about our recent releases

Integrated Terminal for Running Containers, Extended Integration with Containerd, and More in Docker Desktop 4.12 https://www.docker.com/blog/integrated-terminal-for-running-containers-extended-integration-with-containerd-and-more-in-docker-desktop-4-12/ Thu, 01 Sep 2022 17:10:00 +0000 https://www.docker.com/?p=37252 Docker Desktop 4.12 is now live! This release brings some key quality-of-life improvements to the Docker Dashboard. We’ve also made some changes to our container image management and added it as an experimental feature. Finally, we’ve made it easier to find useful Extensions. Let’s dive in.

Execute commands in a running container straight from the Docker Dashboard

Developers often need to explore a running container’s contents to understand its current state or debug it when issues arise. With Docker Desktop 4.12, you can quickly start an interactive session in a running container directly through a Docker Dashboard terminal. This easy access lets you run commands without needing an external CLI. 

Opening this integrated terminal is equivalent to running docker exec -it <container-id> /bin/sh (or docker exec -it <container-id> cmd.exe if you’re using Windows containers) in your system terminal. Docker detects a running container’s default user from the image’s Dockerfile. If none is specified, it defaults to root. Placing this in the Docker Dashboard gives you real-time access to logs and other information about your running containers.

Your session is persisted if you navigate throughout the Dashboard and return — letting you easily pick up where you left off. The integrated terminal also supports copy, paste, search, and session clearing.
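
For readers who script against the Docker Engine rather than clicking through the Dashboard, here’s a rough sketch of the same kind of exec session using the Docker Engine Go SDK. This is an illustration only, not how Docker Desktop implements the integrated terminal; the container name is hypothetical, and exact SDK type names vary slightly between client versions.

```go
package main

import (
	"context"
	"io"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	// Connect to the local Docker Engine using the standard environment settings.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	containerID := "my-running-container" // hypothetical: replace with a real container name or ID

	// Create an exec instance, roughly what `docker exec -it <container-id> /bin/sh` sets up.
	execCreate, err := cli.ContainerExecCreate(ctx, containerID, types.ExecConfig{
		Cmd:          []string{"/bin/sh"},
		AttachStdin:  true,
		AttachStdout: true,
		AttachStderr: true,
		Tty:          true,
	})
	if err != nil {
		panic(err)
	}

	// Attach to the exec instance and wire it up to this process's stdin/stdout.
	attach, err := cli.ContainerExecAttach(ctx, execCreate.ID, types.ExecStartCheck{Tty: true})
	if err != nil {
		panic(err)
	}
	defer attach.Close()

	go io.Copy(attach.Conn, os.Stdin) // forward keystrokes to the shell
	io.Copy(os.Stdout, attach.Reader) // stream the shell's output until it exits
}
```

A fully interactive session would also put the local terminal into raw mode and handle window resizing, which is part of what the Dashboard’s integrated terminal takes care of for you.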

[Screenshots: container details view and the integrated terminal in the Docker Dashboard]

Still want to use your external terminal? No problem. We’ve added two easy ways to launch a session externally.

Option 1: Use the “Open in External Terminal” button straight from this tab. Even if you prefer an integrated terminal, this might help you run commands and watch logs simultaneously, for example.

[Screenshot: “Open in External Terminal” option]

Option 2: Change your default settings to always open your system default terminal. We’ve added the option to choose what fits your workflow. After applying this setting, the “Open in terminal” button from the Containers tab will always open your system terminal.

Extending Docker Desktop’s integration with containerd

We’re extending Docker Desktop’s integration with containerd to include image management. This integration is available as an opt-in, experimental feature within this latest release.

[Screenshot: containerd option under Experimental Features]

Docker’s involvement in the containerd project extends all the way back to 2016. Docker has used containerd within the Docker Engine to manage the container lifecycle (creating, starting, and stopping) for a while now! 

This new feature is a step towards deeper containerd integration with Docker Engine. It lets you use containerd to store images and then push and pull them. When enabled in the latest Docker Desktop version, this experimental feature lets you use the following Docker commands with containerd under the hood: run, commit, build, push, load, and save.

This integration has the following benefits:

  • Containerd’s snapshotter implementation helps you quickly plug in new features. One example is using stargz to lazy pull images on startup.
  • The containerd content store can natively store multi-platform images and other OCI-compatible objects. This lets you build and manipulate multi-platform images, for example, or leverage other related features.

You can learn more in our recent announcement, which fully explains containerd’s integration with Docker.

Easily discover extensions

We’ve added two new ways to interact with extensions in Docker Desktop 4.12.

Docker Extensions are now available directly within the Docker menu. From there, you can browse the Marketplace for new extensions, manage your installed extensions, or change extension settings. 

You can also search for extensions in the Extensions Marketplace! Narrow things down by name or keyword to find the tool you need.

[Screenshot: Extensions entry in the Docker menu]

Two new extensions have also joined the Extensions Marketplace:

Docker Volumes Backup & Share

Docker Volumes Backup & Share lets you effortlessly back up, clone, restore, and share Docker volumes. You can now easily create copies of your volumes and share them through SSH or by pushing them to a registry. Learn more about Volumes Backup & Share on Docker Hub.

[Screenshot: Volumes Backup & Share extension]

Mini Cluster

Mini Cluster enables developers who work with Apache Mesos to deploy and test their Mesos applications with ease. Learn more about Mini Cluster on Docker Hub.

Try out Dev Environments with Awesome Compose samples

We’ve updated our GitHub Awesome Compose samples to highlight projects that you can easily launch as Dev Environments in Docker Desktop. This helps you quickly understand how to add multi-service applications as Dev Environment projects. Look for the following green icon in the list of Docker Compose application samples:

[Icon: Dev Environment compatible]

Here’s our new Awesome Compose/Dev Environments feature in action:

[Screenshot: an Awesome Compose sample launched as a Dev Environment]

Get started with Docker Desktop 4.12 today

While we’ve explored some headlining features in this release, Docker Desktop 4.12 also adds important security enhancements under the hood. To learn about these fixes and more, browse our full release notes.

Have any feedback for us? Upvote, comment, or submit new ideas via our in-product links or our public roadmap.

Looking to become a new Docker Desktop user? Visit our Get Started page to jumpstart your development journey.

Resources to Use Javascript, Python, Java, and Go with Docker https://www.docker.com/blog/resources-to-use-javascript-python-java-and-go-with-docker/ Fri, 08 Jul 2022 14:00:05 +0000 https://www.docker.com/?p=34686 With so many programming and scripting languages out there, developers can tackle development projects any number of ways. However, some languages — like JavaScript, Python, and Java — have been perennial favorites. (We’ve previously touched on this while unpacking Stack Overflow’s 2022 Developer Survey results.)

[Image: programming language syntax. Courtesy of Joan Gamell, via Unsplash]

Many developers use Docker in tandem with these languages. We’ve seen our users create some amazing applications! Here are some resources and recommendations to level up your container game with these languages.

Getting Started with Docker

If you’ve never used Docker, you may want to familiarize yourself with some basic concepts first. You can learn the technical fundamentals of Docker and containerization via our “Orientation and Setup” guide and our introductory page. You’ll learn how containers work, and even how to harness tools like the Docker CLI or Docker Desktop.

Our Orientation page also serves as a foundation for many of our own official walkthroughs. This is a great resource if you’re completely new to Docker!

If you prefer hands-on learning, look no further than Shy Ruparel’s “Getting Started with Docker” video guide. Shy will introduce you to Docker’s architecture, essential CLI commands, Docker Desktop tips, and sample applications.

If you’re feeling comfortable with Docker, feel free to jump to your language-specific section using the links below. We’ve created language-specific workflows for each top language within our documentation (AKA “Our Language Modules” in this blog). These steps are linked below alongside some extra exploratory resources. We’ll also include some awesome-compose code samples to accelerate similar development projects — or to serve as inspiration.

Table of Contents

  • How to Use Docker with JavaScript
  • How to Use Docker with Python
  • How to Use Docker with Java
  • How to Use Docker with Go

How to Use Docker with JavaScript

JavaScript has been the programming world’s leading language for 10 years running. Luckily, there are also many ways to use JavaScript and Docker together. Check out these resources to harness JavaScript, Node.js, and other runtimes or frameworks with Docker.

Docker Node.js Modules

Before exploring further, it’s worth completing our learning modules for Node. These take you through the basics and set you up for increasingly complex projects later on. We recommend completing these in order:

  1. Overview for Node.js (covering learning objectives and containerization of your Node application)
  2. Build your Node image
  3. Run your image as a container
  4. Use containers for development
  5. Run your tests using Node.js and Mocha frameworks
  6. Configure CI/CD for your application
  7. Deploy your app

You may also want to explore additional workflows for building minimum viable products (MVPs) or pulling container images. You can read more by visiting the following links.

Other Essential Node Resources

How to Use Docker with Python

Python has consistently been one of our developer community’s favorite languages. From building simple sample apps to leveraging machine learning frameworks, the language supports a variety of workloads. You can learn more about the dynamic duo of Python and Docker via these links.

Docker Python Modules

Similar to Node.js, these pages from our documentation are a great starting point for harnessing Python and Docker:

  1. Overview for Python
  2. Build your Python image
  3. Run your image as a container
  4. Use containers for development (featuring Python and MySQL)
  5. Configure CI/CD for your application
  6. Deploy your app

Other Essential Python Resources

How to Use Docker with Java

Both its maturity and the popularity of Spring Boot have contributed to Java’s growth over the years. It’s easy to pair Java with Docker! Here are some resources to help you do it.

Docker Java Modules

Like with Python, these modules can help you hit the ground running with Java and Docker:

  1. Overview for Java
  2. Build your Java image
  3. Run your image as a container
  4. Use containers for development
  5. Run your tests
  6. Configure CI/CD for your application
  7. Deploy your app

Other Essential Java Resources

How to Use Docker with Go

Last, but not least, Go has become a popular language for Docker users. According to Stack Overflow’s 2022 Developer Survey, over 10,000 JavaScript users (of roughly 46,000) want to start or continue developing in Go or Rust. It’s often positioned as an alternative to C++, yet many Go users originally transition over from Python and Ruby.

There’s tremendous overlap there. Go’s ecosystem is growing, and it’s become increasingly useful for scaling workloads. Check out these links to jumpstart your Go and Docker development; a minimal sample server is sketched after the module list below.

Docker Go Modules

  1. Overview for Go
  2. Build your Go image
  3. Run your image as a container
  4. Use containers for development
  5. Run your tests using Go test
  6. Configure CI/CD for your application
  7. Deploy your app
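
As referenced above, if you’d like something concrete to containerize while working through those modules, the following is a minimal sketch of the kind of Go web server the guide builds around (a generic example, not the exact code from the documentation):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// A tiny HTTP handler: the sort of app you'd wrap in a Dockerfile
	// during the "Build your Go image" step.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from a containerized Go server!")
	})

	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Because Go compiles to a single static binary, pairing a small server like this with a multi-stage Dockerfile typically produces very small final images.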

Other Essential Go Resources

Build in the Language You Want with Docker

Docker supports all of today’s leading languages. It’s easy to containerize your application and deploy cross-platform without having to make concessions. You can bring your workflows, your workloads, and, ultimately, your users along.

And that’s just the tip of the iceberg. We welcome developers who work in other languages like Rust, TypeScript, C#, and many more. Docker images make it easy to create these applications from scratch.

We hope these resources have helped you discover and explore how Docker works with your preferred language. Visit our language-specific guides page to learn key best practices and image management tips for using these languages with Docker Desktop.
