Docker Desktop 4.20: Docker Engine and CLI Updated to Moby 24.0 https://www.docker.com/blog/docker-desktop-4-20-docker-engine-and-cli-updated-to-moby-24-0/ Thu, 01 Jun 2023 14:00:00 +0000 https://www.docker.com/?p=42974 We are happy to announce the major release of Moby 24.0 in Docker Desktop 4.20. We have dedicated significant effort to this release, marking a major milestone in the open source Moby project.

This effort began in September of last year when we announced we were extending the integration of the Docker Engine with containerd. Since then, we have continued working on the image store integration and contributing this work to the Moby project. 

In this release, we are adding support for image history, pulling images from private repositories, image importing, and support for the classic builder when using the containerd image store.

Patch Release Update: Experiencing startup issues mentioning “An unexpected error occurred”? Try updating to the newest version! We’ve addressed this in our 4.20.1 patch release.


To explore and test these new features, activate the option Use containerd for pulling and storing images in the Features in Development panel within the Docker Desktop Settings (Figure 1). 

Screenshot of "Features in Development" page with "Use containerd for pulling and storing images" selected.
Figure 1: Select “Use containerd for pulling and storing images” in the “Features in development” screen.

When enabling this feature, you will notice your existing images are no longer listed, but there’s no need to worry. The containerd image store functions differently from the classic store, and you can only view one of them at a time. Images in the classic store can be accessed again by disabling the feature.

On the other hand, if you want to transfer your existing images to the new containerd store, simply push your images to the Docker Hub using the command docker push <my image name>. Then, activate the containerd store in the Settings and pull the image using docker pull <my image name>.
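For example, a rough sketch of that migration (the image name is a placeholder for your own):

# With the classic image store still active, push the image you want to keep
docker push <my image name>

# Enable "Use containerd for pulling and storing images" in Settings, then pull it back
docker pull <my image name>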

You can read all bug fixes and enhancements in the Moby release notes.

SBOM and provenance attestations using BuildKit v0.11

BuildKit v0.11 helps you secure your software supply chain by allowing you to add an SBOM (Software Bill of Materials) and provenance attestations in the SLSA format to your container images. This can be done with the container driver as described in the blog post Generating SBOMs for Your Image with BuildKit or by enabling the containerd image store and following the steps in the SBOM attestations documentation.
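As a hedged sketch (the builder name is illustrative), a Buildx invocation that attaches both attestation types and pushes the result might look like this:

# Create and select a BuildKit container builder
docker buildx create --name attest-builder --driver docker-container --use

# Build with an SBOM and SLSA provenance attached, then push to a registry
docker buildx build --sbom=true --provenance=true -t <my image name> --push .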

New Dockerfile inspection features

In version 4.20, you can use the docker build command to preview the configuration of your upcoming build or view a list of available build targets in your Dockerfile. This is particularly beneficial when dealing with multi-stage Dockerfiles or when diving into new projects.

When you run the build command, it processes your Dockerfile, evaluates all environment variables, and determines reachable build stages, but stops before running the build steps. You can use --print=outline to get detailed information on all build arguments, secrets, and SSH forwarding keys, along with their current values. Alternatively, use --print=targets to list all the possible stages that can be built with the --target flag.

We also aim to present textual descriptions for these elements by parsing the comments in your Dockerfile. Currently, you need to set the BUILDX_EXPERIMENTAL environment variable to use the --print flag (Figure 2).
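For instance, a minimal sketch of both modes, building in the current directory:

# Show build arguments, secrets, and SSH forwarding keys without running the build
BUILDX_EXPERIMENTAL=1 docker build --print=outline .

# List every stage that can be selected with --target
BUILDX_EXPERIMENTAL=1 docker build --print=targets .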

Figure 2: Using the new `--print=outline` command outputs detailed information
on build arguments, secrets, SSH forwarding keys, and associated values.

Check out the Dockerfile 1.5 changelog for more updates, such as loading images from on-disk OCI layout and improved Git source and checksum validation for ADD commands in the labs channel. Read the Highlights from the BuildKit v0.11 Release blog post to learn more.

Docker Compose dry run

In our continued efforts to make Docker Compose better for developer workflows, we’re addressing a long-standing user request. You can now dry run any Compose command by adding a flag (--dry-run). This gives you insight on what exactly Compose will generate or execute so nothing unexpected happens.
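A quick sketch of the flag in action (the services involved are whatever your Compose file defines):

# Show what Compose would build, create, and start, without actually touching anything
docker compose --dry-run up --build -d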

We’re also excited to highlight contributions from the community, such as First implementation of viz subcommand #10376 by @BenjaminGuzman, which generates a graph of your stack, with details like networks and ports. Try docker compose alpha viz on your project to check out your own graph.

Conclusion

We love hearing your feedback. Please leave any feedback on our public GitHub roadmap and let us know what else you’d like to see. Check out the Docker Desktop 4.20 release notes for a full breakdown of what’s new in the latest release.

Extending Docker’s Integration with containerd https://www.docker.com/blog/extending-docker-integration-with-containerd/ Thu, 01 Sep 2022 16:44:09 +0000 https://www.docker.com/?p=37231 We’re extending Docker’s integration with containerd to include image management! To share this work early and get feedback, this integration is available as an opt-in experimental feature with the latest Docker Desktop 4.12.0 release.

The Docker Desktop Experimental Features settings with the option for using containerd for pulling and storing images enabled.

What is containerd?

In the simplest terms, containerd is a broadly adopted open container runtime. It manages the complete container lifecycle on its host system, including pulling and pushing images as well as starting and stopping containers. containerd is a low-level building block of the container experience: rather than being used directly by developers, it's designed to be embedded into systems like Docker and Kubernetes.

Docker's involvement in the containerd project can be traced all the way back to 2016. You could say it's a bit of a passion project for us! While we had many reasons for starting the project, our goal was to move container supervision out of the core Docker Engine and into a separate daemon, so that it could be reused in projects like Kubernetes. containerd was donated to the Cloud Native Computing Foundation (CNCF) in 2017, and it has been a graduated (stable) project since 2019.

What does containerd replace in the Docker Engine?

As we mentioned earlier, Docker has used containerd as part of Docker Engine for managing the container lifecycle (creating, starting, and stopping) for a while now! This new work is a step towards a deeper integration of containerd into the Docker Engine. It lets you use containerd to store images and then push and pull them. Containerd also uses snapshotters instead of graph drivers for mounting the root file system of a container. Due to containerd’s pluggable architecture, it can support multiple snapshotters as well. 

Want to learn more? Michael Crosby wrote a great explanation about snapshotters on the Moby Blog.

Why migrate to containerd for image management?

Containerd is the leading open container runtime and, better yet, it’s already a part of Docker Engine! By switching to containerd for image management, we’re better aligning ourselves with the broader industry tooling. 

This migration modifies two main things:

  • We’re replacing Docker’s graph drivers with containerd’s snapshotters.
  • We’ll be using containerd to push, pull, and store images.

What does this mean for Docker users?

We know developers love how Docker commands work today and that many tools rely on the existing Docker API. With this in mind, we're fully invested in making sure that the integration is as transparent as possible and doesn't break existing workflows. To do this, we're first rolling it out as an experimental, opt-in feature so that we can get early feedback. When enabled in the latest Docker Desktop, this experimental feature lets you use the following Docker commands with containerd under the hood: run, commit, build, push, load, and save.

This integration has the following benefits:

  1. Containerd’s snapshotter implementation helps you quickly plug in new features. Some examples include using stargz to lazy-pull images on startup or nydus and dragonfly for peer-to-peer image distribution.
  2. The containerd content store can natively store multi-platform images and other OCI-compatible objects. This enables features like the ability to build and manipulate multi-platform images using Docker Engine (and possibly other content in the future!).

If you plan to build a multi-platform image, the below graphic shows what to expect when you run the build command with the containerd store enabled.

A recording of a docker build terminal output with the containerd store enabled.
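As a sketch of the command itself (image name and platform list are illustrative), with the containerd store enabled a multi-platform build is a single invocation:

docker build --platform linux/amd64,linux/arm64 -t <my image name> .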

Without the experimental feature enabled, you will get an error message stating that this feature is not supported on the docker driver, as shown in the graphic below.

A recording of an unsuccessful docker build output with the containerd store disabled.

If you decide not to enable the experimental feature, no big deal! Things will work like before. If you have additional questions, you can access details in our release notes.

Roadmap for the containerd integration

We want to be as transparent as possible with the Docker community when it comes to this containerd integration (no surprises here!). For this reason, we’ve laid out a roadmap. The integration will happen in two key steps:

  1. We’ll ship an initial version in Docker Desktop which enables common workflows but doesn’t touch existing images to prove that this approach works.
  2. Next, we’ll write the code to migrate user images to use containerd and activate the feature for all our users.

We work to make expanding integrations like this as seamless as possible so you, our end user, can reap the benefits! This way, you can create new, exciting things while leveraging existing features in the ecosystem such as namespaces, containerd plug-ins, and more.

We’ve released this experimental feature first in Docker Desktop so that we can get feedback quickly from the community. But, you can also expect this feature in a future Docker Engine release.  


The details on the ongoing integration work can be accessed here

Conclusion

In summary, Docker users can now look forward to full containerd integration. This brings many exciting features from native multi-platform support to encrypted images and lazy pulls. So make sure to download the latest version of Docker Desktop and enable the containerd experimental feature to take it for a spin! 

We love sharing things early and getting feedback from the Docker community — it helps us build products that work better for you. Please join us on our community Slack channel or drop us a line using our feedback form.

Top 5 Docker Desktop Features Developers Must Know About https://www.docker.com/blog/top-5-docker-desktop-features-developers-must-know-about/ Tue, 05 Apr 2022 19:53:33 +0000 https://www.docker.com/?p=32946 Built atop Docker Engine—our open-source container runtime—Docker Desktop is ideal for developers needing greatly expanded Docker functionality, regardless of experience level. Docker Desktop is ready to use out of the box. It packages together Docker’s CLI, Docker Compose, Kubernetes, and other tools to streamline application development. Docker Desktop’s quick installation and OS agnosticism have also boosted its popularity amongst developers. 

Discovering and fully grasping every feature within Docker Desktop can take time, since there’s no clear starting point. There are plenty of powerful functions! However, we’ve chosen to highlight some key features you’ll find most useful within Desktop. Based on community and developer feedback, here are our top five Docker Desktop features that’ll improve your development experience. 

1) Simplified Permissions Management

Managing and understanding permissions can be challenging while working with containers. You have to worry about how ownership and user IDs impact the security of your environment—and how you interact with files generated locally. 

Unfortunately, it's possible for the container user and the host user to differ. Developer Juan Treminio highlights how running a Docker container with PHP Composer support and a PHP image, while technically effective, produces read-only files. This isn't great when you need complete file access. Files and directories may also end up root-owned, yet you might want to avoid working as root for various reasons, and relying on sudo is inefficient. Finally, you must consider docker groups and how user permissions are inherited. Untangling this web is challenging while mounting volumes.

Docker Desktop erases these file ownership concerns. It automatically applies your file mappings while mounting volumes. Accordingly, Desktop updates user and group ownership to match what your container needs—and what you’re using on your host machine. Doing this manually can be somewhat tedious. 

Desktop lets you own your files as your own machine user. This grants you full access, and empowers you to work without stumbling over permissions hurdles. Finally, Docker Desktop gives you privilege escalation protection while running in a VM. 
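To make the difference concrete, here is a hedged example of the classic pitfall on a native Linux engine, which Docker Desktop's ownership mapping avoids:

# On a plain Linux host, a container running as root writes root-owned files into the bind mount
docker run --rm -v "$(pwd)":/work alpine touch /work/report.txt
ls -l report.txt   # typically shows root:root on native Linux; Desktop maps it to your host user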

2) Powerful File Sharing Capabilities

File sharing remains a core feature within Docker Desktop—especially while communicating between host machines and containers. Making these file types accessible across your Docker environment is crucial, since they’ll power your applications and their runtime configurations: 

  • Source code
  • Smaller quantities of media assets, like images or videos
  • Build artifacts
  • Database backends
  • Logs

You might also reuse existing container data with newer containers. This is useful for streamlining new deployments and applying optimizations right out of the gate. 

Unfortunately, we often encounter compatibility issues with filesystems—which can differ between your container and your computer. Transfers between hosts and containers can also hurt performance at scale. 

Thankfully, Docker Desktop sidesteps these issues in a few ways. First, Docker Desktop supports named volumes. These exist within the VM filesystem itself and are fast on all workstations, regardless of OS. 
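For example, a minimal sketch of a named volume in use (names are placeholders):

# Create a named volume that lives inside the Docker Desktop VM filesystem
docker volume create app_data

# Mount it into a container; reads and writes stay fast on every OS
docker run --rm -v app_data:/var/lib/app alpine sh -c "echo hello > /var/lib/app/greeting"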

Second, using Windows Subsystem for Linux (WSL 2) is another great way to unlock extra file sharing performance. However, just make sure you’re storing your files within the Linux filesystem. 

Third, macOS users now have an experimental new implementation called virtiofs at their disposal. Depending on workload, this feature can boost execution speeds by up to 98%. This enhancement topped our list of GitHub requests, and we think it’ll positively impact numerous users. 

Finally, Docker Dashboard’s visualizations help you manage critical volumes. Just like with containers and images, Desktop will display all existing volumes in a list. Mounted volumes are labeled as “in use” (so you can tell when they’re active), while idle volumes remain untagged. 

Docker Dashboard Volumes List


You can see at a glance when each volume was created, and volume sizes are clearly listed. You can also create or delete volumes with just a few clicks, helping you free up disk space easily when volumes are no longer in use. Meanwhile, accomplishing this with Docker Engine would require multiple CLI commands, and, while not too challenging, would take more time and care. Docker Desktop provides that consolidated management functionality without requiring supplemental, open-source tooling.
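For comparison, a rough sketch of the equivalent CLI workflow:

# List volumes and see how much space they use
docker volume ls
docker system df -v

# Remove a specific volume, or clear every volume not used by at least one container
docker volume rm app_data
docker volume prune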

3) Rapid Deployment of Single Kubernetes Node Clusters

According to the Cloud Native Computing Foundation’s 2021 Survey, Docker and Kubernetes have grown into a popular pairing. After all, 37% of respondents use a Docker Kubernetes implementation.

Kubernetes also has a reputation for complexity. While the system is highly in-demand, the setup process isn’t always simple. 

Docker Desktop gives you a local Kubernetes cluster and handles all of the management legwork. You don’t have to worry about updating components, nor performing any sort of additional maintenance. After installing Desktop and clicking its app icon, choosing “Settings” will open the Dashboard, with Kubernetes listed in the sidebar. You can easily launch the Kubernetes pane from here:


Docker Desktop’s interface is uncluttered, making it easy to navigate Kubernetes configurations. 

Simply clicking the “Enable Kubernetes” checkbox will download all dependencies in the background. From there, you’re good to go! This saves you from choosing your own container runtime and needing other cluster-creation tools. 

For more on deploying Kubernetes via Docker, please view our documentation page.
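Once the cluster reports as running, a quick sanity check from any terminal looks like this (docker-desktop is the default context name):

# Point kubectl at the Docker Desktop cluster and confirm the single node is ready
kubectl config use-context docker-desktop
kubectl get nodes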

4) Development Environments (Preview)

Though it hasn’t yet graduated to general availability, Development Environments is a tentpole collaboration feature within Docker Desktop. You can access this preview from the sidebar. This lets you quickly join new projects. 

Dev Environments packages all development components into a single development environment, letting you contribute with a few clicks. Say goodbye to the frustration of "it worked on my machine," and easily manage and maintain a development environment that works across OSes. Dev Environments allows you to:

  • Easily start Dev Environments locally or from a GitHub URL
  • Set up a compose application as a Dev Environment 
  • Run environments cost-free by using local resources, instead of purchasing cloud resources

Make sure to install Docker Desktop 3.5.0 or higher before attempting to use Development Environments, though we always recommend being on the latest version. You’ll also need Git, Visual Studio Code (VSC), and the VSC Remote Containers Extension. 


Development Environments is still in beta, but you can contribute to its development!

Dev Environments also allows you to work seamlessly on multiple projects and branches without any local conflicts. You can:

  • Open running containers in a VS Code terminal—allowing you to make code edits, push or pull code to your repo, and switch between branches 
  • Create development environments from branches and tags, and run them simultaneously to easily compare versions of work-in-progress code
  • Edit code and commit any changes in real time

Synchronization and transparency are two chief benefits of Development Environments. Say goodbye to merge conflicts and moving repeatedly between branches to perform code review. 

5) Docker Extensions (Coming Soon)

Finally, Docker Desktop will soon welcome Docker Extensions into the fold. We’re inviting users to test drive new integrations from partners and community contributors after DockerCon 2022. Extensions automatically leverage Desktop’s Dashboard UI. They can display detailed information about your containers, local system, volumes, and more. We also encourage the Docker community to bolster Desktop’s functionality through Extensions. 

Development takes a lot of time under the best of circumstances. Repeatedly jumping between tools can waste valuable time. Extensions let you stay within Docker Desktop and tackle multiple tasks without switching windows. 

We can’t wait to release our Docker Extensions SDK alongside the May preview. This’ll allow organizations and customers to build their own functions into Docker Desktop. Join us for DockerCon to learn more about Extensions, which we’ll cover more deeply before launching the beta. 

Stay Tuned for More!

Docker Desktop is continually evolving. We’re investigating ways to add features based on your feedback and our product roadmap. Check out our current feature requests, open your own Desktop issues, and even vote on features you’d like to see via our public roadmap. Your feedback means everything, and we’d love to hear your thoughts. 

DockerCon2022

Join us for DockerCon2022 on Tuesday, May 10. DockerCon is a free, one-day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn how to go from code to cloud fast and how to solve your development challenges, DockerCon 2022 offers engaging live content to help you build, share, and run your applications. Register today at https://www.docker.com/dockercon/

Closing Out 2020 with More Innovation for Developers https://www.docker.com/blog/closing-out-2020-with-more-innovation-for-developers/ Tue, 22 Dec 2020 14:00:00 +0000 https://www.docker.com/blog/?p=27359 Recently our CEO Scott Johnston took a look back on all that Docker had achieved one year after selling the Enterprise business to Mirantis and refocusing solely on developers. We made significant investments to deliver value-enhancing features for developers, completed strategic collaborations with key ecosystem partners, and doubled down on engaging our user community, resulting in a 70% year-over-year increase in Docker usage.

Even though we are winding down the calendar year, you wouldn’t know it based on the pace at which our engineering and product teams have been cranking out new features and tools for cloud-native development. In this post, I’ll add some context around all the goodness that we’ve released recently.  

Recall that our strategy is to deliver simplicity, velocity and choice for dev teams going from code to cloud with Docker’s collaborative application development platform. Our latest releases, including Docker Desktop 3.0 and Docker Engine 20.10, accelerate the build, share, and run process for developers and teams. 

Higher Velocity Docker Desktop Releases 

With the release of Docker Desktop 3.0.0, we are totally changing the way we distribute Docker Desktop to developers. These changes allow for smaller, faster Docker Desktop releases for increased simplicity and velocity. Specifically:  

  • Docker Desktop will move to a single release stream for all users. Now developers don’t have to choose between stability versus getting fixes and features quickly.
  • All Docker Desktop updates will be provided as deltas from the previous version. This will reduce the size of a typical update, speeding up your workflow and removing distractions from your day.
  • Updates will be downloaded in the background for a simpler experience. All a developer needs to do is restart Docker Desktop. 
  • A preview of Docker Desktop on Apple M1 silicon, the most in-demand request on our public roadmap

These updates build upon what was a banner year for Docker Desktop. The team has been super hard at work, collaborating with Snyk on image scanning and with AWS and Microsoft so you can deploy from Desktop straight to AWS and Azure, respectively. We also invested in performance improvements to the local development experience, such as CPU improvements on Mac and the WSL 2 backend on Windows.

Docker Engine 20.10

Docker Engine is the industry's de facto container runtime. It powers millions of applications worldwide, providing a standardized packaging format for diverse applications. This major release of Docker Engine 20.10 continues our investment in the community Engine, adding multiple new features including support for cgroups v2 and moving multiple features out of experimental into GA, including `RUN --mount=type=(cache|secret|ssh|…)` and rootless mode, along with a ton of other improvements to the API, client, and build experience. These updates enhance security and increase trust and confidence so that developers can move faster than before.

More 2020 Milestones

In addition to these latest product innovations, we continued to execute on our developer strategy. Other highlights from the year that was 2020 include: 

  • 11.3 million monthly active users sharing apps from 7.9 million Docker Hub repositories at a rate of 13.6 billion pulls per month – up 70% year-over-year. 
  • Collaborated with Microsoft, AWS, Snyk and Canonical to enhance the developer experience and grow the ecosystem. 
  • Docker ranked as the #1 most wanted, the #2 most loved, and the #3 most popular platform according to the 2020 Stack Overflow Survey
  • Hosted DockerCon Live with more than 80,000 registrants. All of the DockerCon Live 2020 content is on-demand if you missed it.
  • Over 80 projects accepted to our open source program.  

Keep an eye out in the new year for the latest and greatest from Docker. You can expect to see us release features and tools that enhance the developer experience for app dev environments, container image management, pipeline automation, collaboration and content. Happy holidays and merry everything! We will “see” you in 2021, Docker fam. 

Introducing Docker Engine 20.10 https://www.docker.com/blog/introducing-docker-engine-20-10/ Thu, 10 Dec 2020 00:19:18 +0000 https://www.docker.com/blog/?p=27312 We are pleased to announce that we have completed the next major release of the Docker Engine 20.10. This release continues Docker’s investment in our community Engine adding multiple new features including support for cgroups V2, moving multiple features out of experimental including RUN --mount and rootless, along with a ton of other improvements to the API, client and build experience. The full list of changes can be found as part of our change log


Docker Engine is the underlying tooling/client that enables users to easily build, manage, share and run their container objects on Linux. The Docker Engine is made up of 3 core components:

  • A server with a long-running daemon process dockerd.
  • APIs which specify interfaces that programs can use to talk to and instruct the Docker daemon.
  • A command line interface (CLI) client docker.

For those who are curious about the recent questions about Docker Engine/K8s, please have a look at Dieu’s blog to learn more. 

Along with this I want to give a huge thank you to everyone in the community and all of our maintainers who have also contributed towards this Engine release. Without their contribution, hard work and support we would not be where we are nor have this Engine release. When I say ‘we’ throughout this article I don’t just mean the (awesome) engineers at Docker, I mean the (awesome) engineers outside of Docker and the wider community that have helped shape this release 🙂 

You can get started with the 20.10 version of Docker Engine by getting the packages from here, or via this week's community release of Docker Desktop; for those of you on Mac and Windows who can't wait, you can try out the RC of 20.10 using the latest Docker Desktop. Now let's jump in and have a closer look at some of the 20.10 changes. 

Initial support for cgroups V2

Just as a reminder of how Docker works: Docker uses several foundational Linux kernel features to provide isolation for your running processes and the files associated with them. One of these features is cgroups. Cgroups in Linux limit the resource usage (CPU, memory, disk, etc.) of a process. Docker combines these with Linux namespaces to isolate your processes in containers. 

V2 of cgroups was first introduced to the Linux kernel in 2016, bringing with it changes to how groups are managed and support for imposing resource limits on rootless containers. For a bit more background on where this came from and why these changes have taken a while to arrive, check out Akihiro's blog.

Now that support for this has been introduced in runc, we have added it to Docker. This change, in turn, has allowed Docker to graduate rootless mode from experimental to a fully supported feature.
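Two quick checks on a 20.10 host, offered as hedged sketches rather than required steps (the rootless setup script ships with the rootless-extras package):

# Report which cgroup version the daemon detected (1 or 2)
docker info --format '{{.CgroupVersion}}'

# Install and start a rootless daemon for the current user
dockerd-rootless-setuptool.sh install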

Support for reading docker logs with all logging drivers

Prior to Docker Engine 20.10, the jsonfile and journald log drivers supported reading container logs using docker logs. However, many third party logging drivers had no support for locally reading logs using docker logs, including:

  • syslog
  • gelf
  • fluentd
  • awslogs
  • splunk
  • etwlogs
  • gcplogs
  • logentries

This created multiple problems when attempting to gather log data in an automated and standard way. Log information could only be accessed and viewed through the third-party solution in the format specified by that third-party tool.

Starting with Docker Engine 20.10, you can use docker logs to read container logs regardless of the configured logging driver or plugin. This capability, sometimes referred to as dual logging, allows you to use docker logs to read container logs locally in a consistent format, regardless of the remote log driver used, because the Engine is configured to log information to the "local" logging driver. Refer to "Configure the default logging driver" in the Docker documentation for additional information.
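As a hedged illustration, if /etc/docker/daemon.json points at a remote driver such as fluentd (the address is a placeholder):

{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "fluentdhost:24224"
  }
}

you can still read a container's output locally, because the Engine keeps a local copy:

docker logs <container name>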

OS support changes

With the 20.10 release of Engine we are updating our OS support: we are adding support for both Ubuntu 20.10 and Fedora 33, along with continued support for CentOS 8, giving users on these OSes access to Docker's latest features.

CLI improvements

Along with all of this, we have made multiple changes to improve the CLI experience, providing you with access to the functionality you need and the configurability to automate it. We have been making the experience across the CLI more consistent, removing older and unused commands to make it simpler, and adding new options to make it easier to get started and to script with Docker. 

Taking a look at a few of these:

docker push – we have changed the default behavior to be in line with pull: now, if you push an image name without a tag, we will only push the :latest tag rather than all tags. To support this, we have also added a -a/--all-tags flag to push all the tags of an image. 

--pull=missing|always|never – these options have been added to the run and create commands, giving you more fine-grained control over when to pull images.

docker exec – we have added a new --env-file flag. This allows you to run docker exec with a file containing environment variables, and subsequently print or use any of the variables from the file in the command.
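Putting the three together, a hedged sketch (image, container, and file names are placeholders):

# Push every tag of an image, matching the old pre-20.10 default
docker push --all-tags <my image name>

# Always re-pull the image before creating the container
docker run --pull=always <my image name>

# Inject variables from a file into an exec session
docker exec --env-file ./exec.env <container name> env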

 To learn more about Docker Engine 20.10:

What developers need to know about Docker, Docker Engine, and Kubernetes v1.20 https://www.docker.com/blog/what-developers-need-to-know-about-docker-docker-engine-and-kubernetes-v1-20/ Fri, 04 Dec 2020 18:54:00 +0000 https://www.docker.com/blog/?p=27292 The latest version of Kubernetes, v1.20.0-rc.0, is now available. The Kubernetes project plans to deprecate Docker Engine support in the kubelet, and support for dockershim will be removed in a future release, probably late next year. The net-net is that support for your container images built with Docker tools is not being deprecated and will still work as before.

Even better news, however, is that Mirantis and Docker have agreed to partner to maintain the shim code standalone outside Kubernetes, as a conformant CRI interface for Docker Engine. We will start with the great initial prototype from Dims, at https://github.com/dims/cri-dockerd, and continue to make it available as an open source project at https://github.com/Mirantis/cri-dockerd. This means that you can continue to build Kubernetes based on Docker Engine as before, just switching from the built-in dockershim to the external one. Docker and Mirantis will work together on making sure it continues to work as well as before, that it passes all the conformance tests, and that it works just like the built-in version did. Docker will continue to ship this shim in Docker Desktop as this gives a great developer experience, and Mirantis will be using this in Mirantis Kubernetes Engine. 

What does this mean for you if you use Docker and Kubernetes?

First of all, don't panic 🙂 Developers can still use the Docker platform to build, share, and run containers on Kubernetes! This change primarily impacts operators and administrators for Kubernetes and doesn't impact developer workflows. The images Docker builds are compliant with OCI (Open Container Initiative), are fully supported on containerd, and will continue to run great on Kubernetes.

If you’re using Docker, you’re already using containerd. We build Docker’s runtime upon containerd while providing a great developer experience around it. For production environments that benefit from a minimal container runtime, such as Kubernetes, and may have no need for Docker’s great developer experience, it’s reasonable to directly use lightweight runtimes like containerd.

Docker set up in 2015 the Open Container Initiative (OCI) in order to support fully interoperable container standards, and make sure that every container can run in any environment. This has been a huge success in promoting innovation while maintaining interoperability.

Docker created the containerd project, along with Google and IBM, in 2016, with the goal of this transition in mind. The deprecation of docker-shim (and Docker Engine as runtime) marks the completion of a long-term commitment to provide a modern runtime for Kubernetes. Containerd was created as a core low-level, extensible runtime for both Docker and Kubernetes to each use in the most appropriate way.

Containerd was donated to the CNCF in 2017, and has grown to incorporate the containerd CRI project to interface with Kubernetes, as well as seeing a host of innovation and investment from across the industry, including from Amazon, Google, Microsoft and IBM.

In 2019 it became a graduated CNCF project, the highest project level, showing its maturity, and it remains the only container runtime with this status. Over the last few years, the leading Kubernetes service providers such as AWS and Google have migrated to containerd as their Kubernetes runtime. This deprecation now reflects the great success of this work, and of the thriving community around containerd.

Support for your container images built with Docker tools is not being deprecated.

Container images you build using Docker tools will continue to run on Kubernetes. Buildkit, our next generation build infrastructure, has a flexible architecture so that while it can be used as the builder with Docker, it can also talk directly to containerd or runc instead for use in infrastructure where Docker might not be available.

Docker is committed to containerd development: we will continue to further invest, along with the growing buildkit community, in helping you use Docker builds wherever and however your infrastructure is hosted.

You can continue to build and run Docker images locally and in your Kubernetes cluster as this deprecation will not impact that experience.

What is the Kubernetes project deprecating then?

Kubernetes is deprecating dockershim, which is a component in Kubernetes’ kubelet implementation, communicating with Docker Engine. Arnaud Porterie had some great thoughts on this that he shared here.

The Kubernetes project has also published this FAQ.  Kat Cosgrove did a great job explaining the changes very simply here.

What does this mean for Developers and Admins?

Today, and in Kubernetes v1.20, Kubernetes administrators can continue to use docker commands and kubectl commands to manage their Kubernetes clusters.

Kubernetes administrators will be able to use dockershim in the future. Keep an eye on the Mirantis and Docker blogs for updated information about the future of dockershim and how to install the standalone shim.

In a future release of Kubernetes, a few minor releases from now, when support for the internal dockershim is eventually removed from Kubernetes’ kubelet, Kubernetes administrators will need to make some changes to ensure docker commands to inspect Kubernetes clusters will continue to work. Developers can continue to use Docker tools to docker build, docker push and docker run containers and container images on Kubernetes. 

Further Background

KEP-1985: Kubernetes Enhancement Proposal to remove dockershim from Kubelet

Questions? Feedback?

Please reach out on Docker's Slack if you have questions or other feedback. 

Build and run your first Docker Windows Server container https://www.docker.com/blog/build-your-first-docker-windows-server-container/ Mon, 26 Sep 2016 18:00:00 +0000 https://blog.docker.com/?p=14989 Today, Microsoft announced the general availability of Windows Server 2016, and with it, Docker engine running containers natively on Windows. This blog post describes how to get setup to run Docker Windows Containers on Windows 10 or using a Windows Server 2016 VM. Check out the companion blog posts on the technical improvements that have made Docker containers on Windows possible and the post announcing the Docker Inc. and Microsoft partnership.

Before getting started, it's important to understand that Windows Containers run Windows executables compiled for the Windows Server kernel and userland (either windowsservercore or nanoserver). To build and run Windows containers, a Windows system with container support is required.

Windows 10 with Anniversary Update

For developers, Windows 10 is a great place to run Docker Windows containers, and containerization support was added to the Windows 10 kernel with the Anniversary Update (note that container images can only be based on Windows Server Core and Nanoserver, not Windows 10). All that's missing is the Windows-native Docker Engine and some image base layers.

The simplest way to get a Windows Docker Engine is by installing the Docker for Windows public beta (direct download link). Docker for Windows used to only set up a Linux-based Docker development environment (slightly confusing, we know), but the public beta version now sets up both Linux and Windows Docker development environments, and we're working on improving Windows container support and Linux/Windows container interoperability.

With the public beta installed, the Docker for Windows tray icon has an option to switch between Linux and Windows container development. For details on this new feature, check out Stefan Scherer's blog post.

Switch to Windows containers and skip the next section.

switching to windows containers

Windows Server 2016

Windows Server 2016 is where Docker Windows containers should be deployed for production. For developers planning to do lots of Docker Windows container development, it may also be worth setting up a Windows Server 2016 dev system (in a VM, for example), at least until Windows 10 and Docker for Windows support for Windows containers matures.

For Microsoft Ignite 2016 conference attendees, USB flash drives with Windows Server 2016 preloaded are available at the expo. Not at Ignite? Download a free evaluation version and install it on bare metal or in a VM running on Hyper-V, VirtualBox or similar. Running a VM with Windows Server 2016 is also a great way to do Docker Windows container development on macOS and older Windows versions.

Once Windows Server 2016 is running, log in, run Windows Update to ensure you have all the latest updates and install the Windows-native Docker Engine directly (that is, not using “Docker for Windows”). Run the following in an Administrative PowerShell prompt:

Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Install-Module -Name DockerMsftProvider -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force

Docker Engine is now running as a Windows service, listening on the default Docker named pipe. For development VMs running (for example) in a Hyper-V VM on Windows 10, it might be advantageous to make the Docker Engine running in the Windows Server 2016 VM available to the Windows 10 host:

# Open firewall port 2375
netsh advfirewall firewall add rule name="docker engine" dir=in action=allow protocol=TCP localport=2375

# Configure Docker daemon to listen on both pipe and TCP (replaces docker --register-service invocation above)
Stop-Service docker
dockerd --unregister-service
dockerd -H npipe:// -H 0.0.0.0:2375 --register-service
Start-Service docker

The Windows Server 2016 Docker engine can now be used from the VM host by setting DOCKER_HOST:

$env:DOCKER_HOST = "<ip-address-of-vm>:2375"

See the Microsoft documentation for more comprehensive instructions.

Running Windows containers

First, make sure the Docker installation is working:

> docker version
Client:
Version:      1.12.1
API version:  1.24
Go version:   go1.6.3
Git commit:   23cf638
Built:        Thu Aug 18 17:32:24 2016
OS/Arch:      windows/amd64
Experimental: true

Server:
Version:      1.12.2-cs2-ws-beta
API version:  1.25
Go version:   go1.7.1
Git commit:   62d9ff9
Built:        Fri Sep 23 20:50:29 2016
OS/Arch:      windows/amd64

Next, pull a base image that's compatible with the evaluation build, re-tag it, and do a test run:

docker pull microsoft/windowsservercore
docker run microsoft/windowsservercore hostname
69c7de26ea48

Building and pushing Windows container images

Pushing images to Docker Cloud requires a free Docker ID. Storing images on Docker Cloud is a great way to save build artifacts for later use, to share base images with co-workers, or to create build pipelines that move apps from development to production with Docker.

Docker images are typically built with docker build from a Dockerfile recipe, but for this example, we’re going to just create an image on the fly in PowerShell.

"FROM microsoft/windowsservercore `n CMD echo Hello World!" | docker build -t <docker-id>/windows-test-image -

Test the image:

docker run <docker-id>/windows-test-image
Hello World!

Login with docker login and then push the image:

docker push <docker-id>/windows-test-image

Images stored on Docker Cloud are available in the web interface, and public images can be pulled by other Docker users.

Using docker-compose on Windows

Docker Compose is a great way to develop complex multi-container apps consisting of databases, queues, and web frontends. Compose support for Windows is still a little patchy and only works on Windows Server 2016 at the time of writing (i.e., not on Windows 10).

To develop with Docker Compose on a Windows Server 2016 system, install Compose too (this is not required on Windows 10 with Docker for Windows installed):

Invoke-WebRequest https://dl.bintray.com/docker-compose/master/docker-compose-Windows-x86_64.exe -UseBasicParsing -OutFile $env:ProgramFiles\docker\docker-compose.exe

To try out Compose on Windows, clone a variant of the ASP.NET Core MVC MusicStore app, backed by a SQL Server Express 2016 database. A correctly tagged microsoft/windowsservercore image is required before starting.

git clone https://github.com/friism/Musicstore
...
cd Musicstore
docker-compose -f .\docker-compose.windows.yml build
...
docker-compose -f .\docker-compose.windows.yml up
...

To access the running app from the host running the containers (for example, when running on Windows 10, or when opening a browser on the Windows Server 2016 system running the Docker engine), use the container IP and port 5000; localhost will not work:

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" musicstore_web_1
172.21.124.54

If using Windows Server 2016 and accessing from outside the VM or host, simply use the VM or host IP and port 5000.

Summary

This post described how to get set up to build and run native Docker Windows containers on both Windows 10 and the recently published Windows Server 2016 evaluation release. To see more example Windows Dockerfiles, check out the Golang, MongoDB and Python Docker Library images.
Please share any Windows Dockerfiles or Docker Compose examples you build with @docker on Twitter using the tag #windows. And don't hesitate to reach out on the Docker Forums if you have questions.

More Resources:




Docker 1.11: The first runtime built on containerd and based on OCI technology https://www.docker.com/blog/docker-engine-1-11-runc/ Wed, 13 Apr 2016 10:02:00 +0000 https://blog.docker.com/?p=11413 We are excited to introduce Docker Engine 1.11, our first release built on runC ™ and containerd ™. With this release, Docker is the first to ship a runtime based on OCI technology, demonstrating the progress the team has made since donating our industry-standard container format and runtime under the Linux Foundation in June of 2015.

Over the last year, Docker has helped advance the work of the OCI to make it more readily available to more users. It started in December 2015, when we introduced containerd ™, a daemon to control runC. This was part of our effort to break out Docker into small reusable components. With this release, Docker Engine is now built on containerd, so everyone who is using Docker is now using OCI. We’re proud of the progress we’ve made on the OCI with the 40+ members to continue the work to standardize container technology.

Besides this mind-blowing piece of technological trivia that I’m sure will impress your friends at parties, what difference does it make to you? Well, short answer is: nothing… yet! Nevertheless, let me convince you that this is still something you should be excited about.

This is among the biggest technical refactorings the Engine has gone through, and our priority for 1.11 was to get the integration right, without changing the command-line interface or API. However, this lays the technical groundwork for significant user-facing improvements.


Stability and performance

With the containerd integration comes an impressive cleanup of the Docker codebase and a number of historical bugs being fixed. In general, splitting Docker up into focused independent tools means more focused maintainers, and ultimately better-quality software.

Performance-wise, we were extremely careful in making sure 1.11 would not be any slower despite the extra inter-process communication. We're pleased to say that it is faster at creating containers concurrently than its predecessor, and although we made a deliberate choice of favoring correctness first, you can expect more performance improvements in the future.

 

Creating an ecosystem for container execution backends

runC is the first implementation of the Open Container Runtime specification and the default executor bundled with Docker Engine. Thanks to the open specification, future versions of Engine will allow you to specify different executors, thus enabling an ecosystem of alternative execution backends without any changes to Docker itself. By separating out this piece, an ecosystem partner can build their own executor compliant with the specification and make it available to the user community at any time – without depending on the Engine release schedule or waiting to be reviewed and merged into the codebase.

What does this mean for you? This gives you choice: the runtime is now pluggable. Following  the Docker principle of batteries included but swappable, Docker Engine will ship with runC available as the default with the ability to choose from a variety of container executors that are for specific platforms or have different security and performance features. By separating out the thing that runs containers from the Engine, this opens up new possibilities. As an example, 1.11 is a huge step toward allowing Engine restarts/upgrades without restarting the containers, improving the availability of containers. This is probably one of the most requested features by Docker users. In fact, with this new architecture, you will even be able to restart containerd and your containers will keep running.


 

But wait, there’s more!

In addition to this huge architectural change, as usual we have also added a bunch of features in Engine, Compose, Swarm, Machine, and Registry.

 

Engine 1.11

  • DNS round robin load balancing: It’s now possible to load balance between containers with Docker’s networking. If you give multiple containers the same alias, Docker’s service discovery will return the addresses of all of the containers for round-robin DNS.
  • VLAN support (experimental): VLAN support has been added for Docker networks in the experimental channel, so you can integrate better with existing networking infrastructure.
  • IPv6 service discovery: Engine’s DNS-based service discovery system can now return AAAA records.
  • Yubikey hardware image signing: A few months ago we added the ability to sign images with hardware Yubikeys in the experimental channel of Docker. This is now available in the stable release. Read more about how it works in this blog post.
  • Labels on networks and volumes: You can now attach arbitrary key/value data to networks and volumes, in the same way you could with containers and images.
  • Better handling of low disk space with device mapper storage: The dm.min_free_space option has been added to make device mapper fail more gracefully when running out of disk space.
  • Consistent status field in docker inspect: This is a little thing, but really handy if you use the Docker API. docker inspect now has a Status field, a single consistent value to define a container's state (running, stopped, restarting, etc.). Read more in the pull request. A quick sketch follows this list.
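For instance, a hedged one-liner using that field (the container name is a placeholder):

docker inspect --format '{{ .State.Status }}' <container name>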

See the release notes for full details.

 

Compose 1.7

  • --build option for docker-compose up: This is a shorthand for the common pattern of running docker-compose build and then docker-compose up. Compose doesn’t build on every run by default in case your build is slow, but if you’ve got a quick build, running this every time will ensure your environment is always up to date.
  • docker-compose exec command: Mirroring the docker exec command.

See the release notes for full details.

 

Swarm 1.2

  • Container rescheduling no longer experimental: In the previous version of Swarm, we added support for rescheduling containers when a node dies. This is now considered stable so you can safely use it in production.

 

  • Better errors when containers can’t be scheduled: For example, when a constraint fails, the constraints will be printed out so you can easily see what went wrong.

See the release notes for full details.

 

Machine 0.7

In this version of Machine, the Microsoft Azure driver now uses the new Azure APIs and is easier to authenticate. See Azure’s blog post for more details. There are also a bunch of other improvements in this release – take a look at the full release notes for details.

 

Registry 2.4

  • Garbage collection: A tool has been added so system administrators can clean up the data from images that have been deleted by users. For more details, see the garbage collector docs.
  • Faster and more stable S3 driver: The S3 storage driver is now implemented on top of the official Amazon S3 SDK, which has major performance and stability goodness.

See the release notes for full details.

 

Download and try out Docker 1.11

The easiest way to try out all of this stuff in development is to download Docker Toolbox. For other platforms, check out the installation instructions.

All of this stuff is also available in Docker for Mac and Windows, the new way to use Docker in development, currently in private beta. Sign up to get a chance to try it out early.





 

Learn More about Docker

Docker containerd integration https://www.docker.com/blog/docker-containerd-integration/ Wed, 13 Apr 2016 10:00:00 +0000 https://blog.docker.com/?p=11401 In an effort to make Docker Engine smaller, better, faster, stronger, we looked for components of the current engine that we can break out into separate projects and improve along the way. One of those components is the Docker runtime for managing containers. With standalone runtimes like runc, we need a clean integration point for adding runc to the stack as well as managing 100s of containers.

So we started the containerd project to move the container supervision out of the core Docker Engine and into a separate daemon. containerd has full support for starting OCI bundles and managing their lifecycle. This allows users to replace the runc binary on their system with an alternate runtime and get the benefits of still using Docker’s API.

So why another project? Why are you all busy refactoring things instead of fixing real issues? Well…

containerd improves parallel container start times, so if you need to launch multiple containers as fast as possible you should see improvements with this release. On my laptop, I can launch 100 containers in 1.64 seconds – that is roughly 60 containers per second. When starting a container, most of the time is spent within syscalls and system-level operations. It does not make sense to launch all 100 containers concurrently, since the majority of the startup time is spent waiting on the hardware/kernel to complete the operations. containerd uses events to schedule container start requests and various other operations lock-free. It has a configurable limit on how many containers it will start concurrently; by default we have it set at 10 workers. This allows you to make as many API requests as you want, and containerd will start containers as fast as it can without totally overwhelming the system.

Because the containerd integration also brings in runc, the model for starting a container has changed a little in order to support the OCI runtime specification. When you make a request to Docker Engine, we will do all the image management work as normal, but after that the first thing is to generate an OCI bundle for the container. This will contain information from the image that you pulled and runtime information that you provided via the API. After the configuration has been generated, Docker will mount the container's root filesystem inside the bundle before making the call to containerd to start the container.
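Outside of Docker, you can experiment with the same bundle format directly using a recent runc build; a rough sketch (paths and the container ID are illustrative):

# A bundle is just a directory with a config.json plus a root filesystem
mkdir -p mybundle/rootfs
docker export $(docker create busybox) | tar -C mybundle/rootfs -xf -
cd mybundle
runc spec              # generate a default OCI config.json
runc run mycontainer   # start a container from the bundle in the current directory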

containerd’s API is very simple. We decided to build it with gRPC for performance as well as the ability to easily create client libraries. The basic request to start a container via the API is to only provide the path to the OCI bundle and the ID for the container. By keeping the API simple we minimize the changes in the API when new features are added.

We also have very minimal state in containerd. After the container has exited Docker will delete the generated bundle and any runtime state will be gone. This allows us to separate container runtime state on disk to only live as long as the container is running.

You should start to see more features like live migration and the ability to reattach to running containers so that Docker can be upgraded without affecting your containers in future releases of Docker as a result of this integration.

Give containerd a try with the latest Docker Engine 1.11 release. To install on your laptop, download Docker Toolbox. For other platforms, check out the installation instructions.





Learn More about Docker

Docker 1.10: New Compose file, improved security, networking and much more! https://www.docker.com/blog/docker-1-10/ Thu, 04 Feb 2016 23:33:21 +0000 https://blog.docker.com/?p=9942 We’re pleased to announce Docker 1.10, jam-packed with stuff you’ve been asking for.

It’s now much easier to define and run complex distributed apps with Docker Compose. The power that Compose brought to orchestrating containers is now available for setting up networks and volumes. On your development machine, you can set up your app with multiple network tiers and complex storage configurations, replicating how you might set it up in production. You can then take that same configuration from development, and use it to run your app on CI, on staging, and right through into production. Check out the blog post about the new Compose file to find out more.
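As a hedged sketch of what this looks like, here is a minimal version 2 Compose file with top-level networks and volumes (service names and images are illustrative):

version: "2"
services:
  web:
    build: .
    networks: [frontend]
  db:
    image: postgres:9.5
    networks: [backend]
    volumes:
      - db-data:/var/lib/postgresql/data
  proxy:
    image: nginx
    networks: [frontend, backend]
networks:
  frontend:
  backend:
volumes:
  db-data: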

As usual, we’ve got a load of security updates in this release. All the big features you’ve been asking for are now available to use: user namespacing for isolating system users, seccomp profiles for filtering syscalls, and an authorization plugin system for restricting access to Engine features.
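A rough sketch of how those features surface on the 1.10 command line (the profile path and plugin name are assumptions, not specific recommendations):

# Start the daemon with user namespace remapping and an authorization plugin
docker daemon --userns-remap=default --authorization-plugin=<plugin name>

# Run a container under a custom seccomp profile
docker run --security-opt seccomp:./my-profile.json <my image name>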

Another big security enhancement is that image IDs now represent the content that is inside an image, in a similar way to how Git commits represent the content inside commits. This means you can guarantee that the content you’re running is what you expect by just specifying that image’s ID. When upgrading to Engine 1.10, there is a migration process that could take a long time, so take a read of the documentation if you want to prevent downtime.
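
In practice, that means you can pin exactly what you run. A small sketch (the placeholder below stands in for the IDs printed on your machine):

    # Show an image's content-addressable ID (a hash of its contents):
    docker inspect --format '{{.Id}}' ubuntu

    # Registry digests for pulled images are also visible:
    docker images --digests

    # Run by image ID so you know exactly which content you are executing
    # (<image-id> is a placeholder for a value printed above):
    docker run --rm <image-id> echo "pinned to exact content"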

Networking gets even better

We added a new networking system in the previous version of Docker Engine. It allowed you to create virtual networks and attach containers to them so you could build the network topology that best fit your application. In addition to the support in Compose, we’ve added some other top requested features (a short example follows the list):

  • Use links in networks: Links work in the default bridge network as they have always done, but you couldn’t use them in networks that you created yourself. We’ve now added support for this so you can define the relationships between your containers and alias a hostname to a different name inside a specific container (e.g. --link db:production_postgres)
  • Network-wide container aliases: Links let you alias a hostname for a specific container, but you can now also make a container accessible by multiple hostnames across an entire network.
  • Internal networks: Pass the --internal flag to network create to restrict traffic in and out of the network.
  • Custom IP addresses: You can now give a container a custom IP address when running it or adding it to a network.
  • DNS server for name resolution: Hostname lookups are done with a DNS server rather than /etc/hosts, making it much more reliable and scalable. Read the feature proposal and discussion.
  • Multi-host networking on all supported Engine kernel versions: The multi-host overlay driver now works on older kernel versions (3.10 and greater).
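
Here’s a hedged sketch exercising a few of the options above; the network, container, and image names are made up for illustration, and exact flag availability depends on your Engine version:

    # A user-defined network, plus an internal network with no external connectivity:
    docker network create appnet
    docker network create --internal backend

    # A network with a known subnet, so a container can get a fixed IP and a network-wide alias:
    docker network create --subnet 10.0.9.0/24 fixednet
    docker run -d --net=fixednet --ip 10.0.9.10 --net-alias=db postgres

    # Links now also work inside user-defined networks:
    docker run -d --net=appnet --name db postgres
    docker run -d --net=appnet --link db:production_postgres my-web-app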

Engine 1.10

Apart from the new security and networking features, we’ve got a whole load of new stuff in Engine (a few of the items below are sketched in the example after the list):

  • Content addressable image IDs: Image IDs now represent the content that is inside an image, in a similar way to how Git commit hashes represent the content inside commits. This means you can guarantee that the content you’re running is what you expect by just specifying that image’s ID. This is an improvement upon the image digests in Engine 1.6. There is a migration process for your existing images which might take a long time, so take a read of the documentation if you want to prevent downtime.
  • Better event stream: The docker events command and the events API endpoint have been improved and cleaned up. Events are now consistently structured with a resource type and the action being performed against that resource, and events have been added for actions against volumes and networks. Full details are in the docs.
  • Improved push/pull performance and reliability: Layers are now pushed in parallel, resulting in much faster pushes (as much as 3x faster). Pulls are a bit faster and more reliable too – with a streamlined protocol and better retry and fallback mechanisms.
  • Live update container resource constraints: When setting limits on what resources containers can use (e.g. memory usage), you had to restart the container to change them. You can now update these resource constraints on the fly with the new docker update command.
  • Daemon configuration file: It’s now possible to configure daemon options in a file and reload some of them without restarting the daemon so, for example, you can set new daemon labels and enable debug logging without restarting anything.
  • Temporary filesystems: It’s now really easy to create temporary filesystems by passing the --tmpfs flag to docker run. This is particularly useful for running a container with a read-only root filesystem when the piece of software inside the container expects to be able to write to certain locations on disk.
  • Constraints on disk I/O: Various options for setting constraints on disk I/O have been added to docker run: --device-read-bps, --device-write-bps, --device-read-iops, --device-write-iops, and --blkio-weight-device.
  • Splunk logging driver: Ship container logs straight to the Splunk logging service.
  • Start linked containers in correct order when restarting daemon: This is a little thing, but if you’ve run into it you’ll know what a headache it is. If you restarted a daemon with linked containers, they sometimes failed to start up if the linked containers weren’t running yet. Engine will now attempt to start up containers in the correct order.
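
A few of the items above, sketched as commands; container names, values, and device paths are illustrative, and exact flag behaviour depends on your daemon version:

    # Live-update resource limits on a running container:
    docker update --memory 512m --cpu-shares 512 my-container

    # A read-only root filesystem plus a writable tmpfs for scratch space:
    docker run --read-only --tmpfs /tmp:rw,size=64m nginx

    # Throttle write bandwidth to a device for one container:
    docker run --device-write-bps /dev/sda:1mb ubuntu dd if=/dev/zero of=/test.out bs=1M count=64 oflag=direct

    # A daemon configuration file; keys such as debug and labels can be reloaded
    # by sending SIGHUP to the daemon instead of restarting it:
    #   /etc/docker/daemon.json
    #   {
    #     "debug": true,
    #     "labels": ["env=staging"]
    #   }
    kill -HUP $(cat /var/run/docker.pid)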

Check out the release notes for the full list. There are a few features being deprecated in this release, and we’re ending support for Fedora 21 and Ubuntu 15.04, so be sure to check the release notes in case you’re affected by this. If you have written a volume plugin, there’s also a change in the volume plugin API that you need to be aware of.

Big thanks to all of the people who made this release happen – in particular to Qiang Huang, Denis Gladkikh, Dima Stopel, and Liron Levin.

The easiest way to try out Docker in development is by installing Docker Toolbox. For other platforms, check out the installation instructions in the documentation.

Swarm 1.1

Docker Swarm is native clustering for Docker. It makes it really easy to manage and deploy to a cluster of Engines. Swarm is also the clustering and scheduling foundation for the Docker Universal Control Plane, an on-premises tool for deploying and managing Docker applications and clusters.

Back in November we announced the first production-ready version of Swarm, version 1.0. This release is an incremental improvement, especially adding a few things you’ve been asking us for:

  • Reschedule containers when a node fails: If a node fails, Swarm can now optionally attempt to reschedule its containers onto healthy nodes to keep them running (see the sketch after this list). This is an experimental feature, so don’t expect it to work perfectly, but please do give it a try!
  • Better node management: If Swarm fails to connect to a node, it will now retry instead of just giving up. It will also display the status of this and any error messages in docker info, making it much easier to debug. Take a look at the feature proposal for full details.
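
A hedged example of opting a single container into rescheduling; the environment-variable spelling below is an assumption to double-check against the Swarm documentation:

    # Ask Swarm to reschedule this container on a healthy node if its node fails
    # (experimental in Swarm 1.1; verify the exact spelling in the Swarm docs):
    docker run -d -e reschedule:on-node-failure redis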

Check out the release notes for the full list and the documentation for how to get started.

And save the date for Swarm Week – Coming Soon!

If you are new to Swarm, or are already familiar with it and want to know more, Swarm Week is the place to get ALL your Swarm information in one place. We will feature a different Swarm-related topic each day.

Sign up here to be notified of #SwarmWeek!

Machine 0.6

Machine is at the heart of Docker Toolbox, and a big focus of Machine 0.6 has been making it much more reliable when you’re using it with VirtualBox and running it on Windows. This should make the experience of using Toolbox much better.

There have also been a couple of new features for Machine power users (both sketched briefly after the list):

  • No need to type “default”: Commands will now perform actions against the “default” VM if you don’t specify a name.
  • New provision command: This is useful for re-running the provisioning on hosts where it failed or the configuration has drifted.
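
A quick sketch of both (the machine name shown is the Toolbox default):

    # Commands now fall back to the "default" machine when no name is given:
    docker-machine ssh        # same as: docker-machine ssh default
    docker-machine ip         # same as: docker-machine ip default

    # Re-run provisioning on a host where it failed or whose configuration has drifted:
    docker-machine provision default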

For full details, check out the release notes. The easiest way to install Machine is by installing Docker Toolbox. Other installation methods are detailed in the documentation.

Registry 2.3

In Registry 2.3, we’ve got a bunch of improvements to performance and security. It adds support for the new manifest format and makes it possible for layers to be shared between different images, which speeds up pushes for layers that already exist on your Registry.

Check out the full release notes here and see the documentation for how to install or upgrade.

Watch this video overview of the new features in Docker 1.10.
