Integrated Terminal for Running Containers, Extended Integration with Containerd, and More in Docker Desktop 4.12
https://www.docker.com/blog/integrated-terminal-for-running-containers-extended-integration-with-containerd-and-more-in-docker-desktop-4-12/ (September 1, 2022)

Docker Desktop 4.12 is now live! This release brings some key quality-of-life improvements to the Docker Dashboard. We’ve also added containerd-based container image management as an experimental feature. Finally, we’ve made it easier to find useful extensions. Let’s dive in.

Execute commands in a running container straight from the Docker Dashboard

Developers often need to explore a running container’s contents to understand its current state or debug it when issues arise. With Docker Desktop 4.12, you can quickly start an interactive session in a running container directly through a Docker Dashboard terminal. This easy access lets you run commands without needing an external CLI. 

Opening this integrated terminal is equivalent to running docker exec -it <container-id> /bin/sh (or docker exec -it <container-id> cmd.exe if you’re using Windows containers) in your system terminal. Docker detects a running container’s default user from the image’s Dockerfile; if none is specified, it defaults to root. Placing this in the Docker Dashboard gives you real-time access to logs and other information about your running containers.
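For example, you can reproduce what the integrated terminal does from any shell (the container ID below is a placeholder):

```
# Check the default user baked into the image (empty output means root):
docker inspect --format '{{.Config.User}}' <container-id>

# Start an interactive shell inside the running container:
docker exec -it <container-id> /bin/sh
```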

Your session is persisted if you navigate throughout the Dashboard and return — letting you easily pick up where you left off. The integrated terminal also supports copy, paste, search, and session clearing.

[Screenshots: container details view and the integrated terminal in the Docker Dashboard]

Still want to use your external terminal? No problem. We’ve added two easy ways to launch a session externally.

Option 1: Use the “Open in External Terminal” button straight from this tab. Even if you prefer the integrated terminal, this can help when you want to run commands and watch logs simultaneously, for example.

[Screenshot: the “Open in External Terminal” option]

Option 2: Change your default settings to always open your system default terminal. We’ve added the option to choose what fits your workflow. After applying this setting, the “Open in terminal” button from the Containers tab will always open your system terminal.

Extending Docker Desktop’s integration with containerd

We’re extending Docker Desktop’s integration with containerd to include image management. This integration is available as an opt-in, experimental feature within this latest release.

[Screenshot: the containerd option under Experimental Features in Settings]

Docker’s involvement in the containerd project extends all the way back to 2016. Docker has used containerd within the Docker Engine to manage the container lifecycle (creating, starting, and stopping) for a while now! 

This new feature is a step towards deeper containerd integration with Docker Engine. It lets you use containerd to store images and then push and pull them. When enabled in the latest Docker Desktop version, this experimental feature lets you use the following Docker commands with containerd under the hood: run, commit, build, push, load, and save.

This integration has the following benefits:

  • Containerd’s snapshotter implementation helps you quickly plug in new features. One example is using stargz to lazy pull images on startup.
  • The containerd content store can natively store multi-platform images and other OCI-compatible objects. This lets you build and manipulate multi-platform images, for example, or leverage other related features.
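As a rough sketch of the day-to-day flow once the containerd store is switched on in Docker Desktop’s settings (the image and container names below are placeholders), everyday commands behave as before:

```
# Ordinary image and container workflows, now backed by containerd:
docker pull alpine:latest
docker run --rm alpine:latest echo "hello from the containerd image store"

# Commit a running container and export it, same commands as always:
docker commit <container-id> myorg/snapshot:latest
docker save -o snapshot.tar myorg/snapshot:latest
```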

You can learn more in our recent announcement, which fully explains containerd’s integration with Docker.

Easily discover extensions

We’ve added two new ways to interact with extensions in Docker Desktop 4.12.

Docker Extensions are now available directly within the Docker menu. From there, you can browse the Marketplace for new extensions, manage your installed extensions, or change extension settings. 

You can also search for extensions in the Extensions Marketplace! Narrow things down by name or keyword to find the tool you need.

[Screenshot: Extensions in the Docker menu]

Two new extensions have also joined the Extensions Marketplace:

Docker Volumes Backup & Share

Docker Volumes Backup & Share lets you effortlessly back up, clone, restore, and share Docker volumes. You can now easily create copies of your volumes and share them through SSH or by pushing them to a registry. Learn more about Volumes Backup & Share on Docker Hub.

[Screenshot: the Volumes Backup & Share extension]

Mini Cluster

Mini Cluster enables developers who work with Apache Mesos to deploy and test their Mesos applications with ease. Learn more about Mini Cluster on Docker Hub.

Try out Dev Environments with Awesome Compose samples

We’ve updated our GitHub Awesome Compose samples to highlight projects that you can easily launch as Dev Environments in Docker Desktop. This helps you quickly understand how to add multi-service applications as Dev Environment projects. Look for the following green icon in the list of Docker Compose application samples:

[Image: the green “Dev Environment compatible” icon]

Here’s our new Awesome Compose/Dev Environments feature in action:

[Animation: launching an Awesome Compose sample as a Dev Environment]

Get started with Docker Desktop 4.12 today

While we’ve explored some headlining features in this release, Docker Desktop 4.12 also adds important security enhancements under the hood. To learn about these fixes and more, browse our full release notes.

Have any feedback for us? Upvote, comment, or submit new ideas via our in-product links or our public roadmap.

Looking to become a new Docker Desktop user? Visit our Get Started page to jumpstart your development journey.

Extending Docker’s Integration with containerd
https://www.docker.com/blog/extending-docker-integration-with-containerd/ (September 1, 2022)

We’re extending Docker’s integration with containerd to include image management! To share this work early and get feedback, this integration is available as an opt-in experimental feature with the latest Docker Desktop 4.12.0 release.

[Screenshot: the Experimental Features settings with the containerd image store enabled]

What is containerd?

In the simplest terms, containerd is a broadly adopted open container runtime. It manages the complete container lifecycle of its host system! This includes pulling and pushing images as well as starting and stopping containers. containerd is also a low-level building block in the container experience: rather than being used directly by developers, it’s designed to be embedded into systems like Docker and Kubernetes.

Docker’s involvement in the containerd project can be traced all the way back to 2016. You could say it’s a bit of a passion project for us! While we had many reasons for starting the project, our goal was to move container supervision out of the core Docker Engine and into a separate daemon. This way, it could be reused in projects like Kubernetes. It was donated to the Cloud Native Computing Foundation (CNCF) in 2017 and is now a graduated (stable) project.

What does containerd replace in the Docker Engine?

As we mentioned earlier, Docker has used containerd as part of Docker Engine for managing the container lifecycle (creating, starting, and stopping) for a while now! This new work is a step towards a deeper integration of containerd into the Docker Engine. It lets you use containerd to store images and then push and pull them. Containerd also uses snapshotters instead of graph drivers for mounting the root file system of a container. Due to containerd’s pluggable architecture, it can support multiple snapshotters as well. 

Want to learn more? Michael Crosby wrote a great explanation about snapshotters on the Moby Blog.
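If you’re curious which storage backend your own daemon is using, a quick check looks like this (output values are illustrative; classic setups typically report a graph driver such as overlay2, while the containerd store reports a snapshotter):

```
docker info --format '{{.Driver}}'
```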

Why migrate to containerd for image management?

Containerd is the leading open container runtime and, better yet, it’s already a part of Docker Engine! By switching to containerd for image management, we’re better aligning ourselves with the broader industry tooling. 

This migration modifies two main things:

  • We’re replacing Docker’s graph drivers with containerd’s snapshotters.
  • We’ll be using containerd to push, pull, and store images.

What does this mean for Docker users?

We know developers love how Docker commands work today and that many tools rely on the existing Docker API. With this in mind, we’re fully invested in making sure that the integration is as transparent as possible and doesn’t break existing workflows. To do this, we’re first rolling it out as an experimental, opt-in feature so that we can get early feedback. When enabled in the latest Docker Desktop, this experimental feature lets you use the following Docker commands with containerd under the hood: run, commit, build, push, load, and save.

This integration has the following benefits:

  1. Containerd’s snapshotter implementation helps you quickly plug in new features. Some examples include using stargz to lazy-pull images on startup or nydus and dragonfly for peer-to-peer image distribution.
  2. The containerd content store can natively store multi-platform images and other OCI-compatible objects. This enables features like the ability to build and manipulate multi-platform images using Docker Engine (and possibly other content in the future!).

If you build a multi-platform image, the graphic below shows what to expect when you run the build command with the containerd store enabled.

[Animation: docker build terminal output with the containerd store enabled]
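As a hedged sketch of such a build (the image name and platform list are placeholders):

```
# Build for two platforms at once; with the containerd store enabled,
# the engine can keep the resulting multi-platform image locally:
docker buildx build --platform linux/amd64,linux/arm64 -t myorg/demo:latest .
```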

Without the experimental feature enabled, you will get an error message stating that the feature is not supported by the docker driver, as shown in the graphic below.

[Animation: an unsuccessful docker build with the containerd store disabled]

If you decide not to enable the experimental feature, no big deal! Things will work like before. If you have additional questions, you can access details in our release notes.

Roadmap for the containerd integration

We want to be as transparent as possible with the Docker community when it comes to this containerd integration (no surprises here!). For this reason, we’ve laid out a roadmap. The integration will happen in two key steps:

  • We’ll ship an initial version in Docker Desktop that enables common workflows without touching existing images, to prove that this approach works.
  2. Next, we’ll write the code to migrate user images to use containerd and activate the feature for all our users.

We work to make expanding integrations like this as seamless as possible so you, our end user, can reap the benefits! This way, you can create new, exciting things while leveraging existing features in the ecosystem such as namespaces, containerd plug-ins, and more.

We’ve released this experimental feature first in Docker Desktop so that we can get feedback quickly from the community. But you can also expect this feature in a future Docker Engine release.


The details on the ongoing integration work can be accessed here.

Conclusion

In summary, Docker users can now look forward to full containerd integration. This brings many exciting features from native multi-platform support to encrypted images and lazy pulls. So make sure to download the latest version of Docker Desktop and enable the containerd experimental feature to take it for a spin! 

We love sharing things early and getting feedback from the Docker community — it helps us build products that work better for you. Please join us on our community Slack channel or drop us a line using our feedback form.

Welcome Tilt: Fixing the pains of microservice development for Kubernetes
https://www.docker.com/blog/welcome-tilt-fixing-the-pains-of-microservice-development-for-kubernetes/ (May 24, 2022)

Big news! The Tilt team is joining Docker. The Tilt project is joining too.

We think this is a great fit, and I’ll tell you why.

[Image: Docker and Tilt logos]

The Problem

Modern apps are made of so many services. They’re everywhere. Every team we talk to is trying to figure out how to set up environments to run their apps in dev.  Simple `start.sh` scripts inevitably grow into mini bespoke orchestrators. They need to start servers in the right order, update them in-place, and monitor when one is failing. We built Tilt, a dev environment as code for teams on Kubernetes, to help solve these problems. Whether your dev env is local processes or containers, in a local cluster or a remote cloud, Tilt keeps you in flow and your feedback loops fast.

So how does this make sense at Docker?

When we started building Tilt in 2018, we thought of Docker as the container company selling Swarm to enterprises. In 2019, the “Docker’s Next Chapter” blog post announced a change in focus: to invest more in great tools for developers and development teams, helping them spend more time on innovation and less time on everything else.

Tilt interoperates with Docker BuildKit, Docker Desktop, and Docker Compose. Improvements to these tools help Tilt users too! We always had a hunch that our product roadmaps might overlap. And in the years since Docker refocused on developers, we’ve been converging more and more.

Once we started talking more with Docker, we found we had more in common than just a problem space, including:

  • A product philosophy around deeply understanding devs’ existing workflows, so we can make dramatic improvements in user experience that feel like magic;
  • An engineering philosophy around patterns and flexibility so devs can adapt their tools to their needs;
  • A business philosophy around building a sustainable company so we can continue to make great free, open-source tools for every developer.

So you could say we got along. What’s next?

What Does a Combined Tilt + Docker Look Like?

Tilt will remain open-source. It’s great! You should try it! We’ll still be responding to issues and hanging out in the community Slack channel. But this has never been about Tilt the technology. Or even about Kubernetes. Our history is full of experiments. Dan Bentley and I started hacking on ideas in 2017. We knew we were unhappy about microservice dev. But we weren’t sure what the first stepping stone might be.

This was more of a research project than a company. Some examples:

  • Our first prototype was a more interactive, developer-focused CI.
  • We almost trolled ourselves into becoming a Bazel company.
  • We bought two lab coats, poster board, glue, and glitter so we could show off our prototypes at the GothamGo conference.
  • We built many weird demos: Mishell (an interactive multi-service shell), represented by a French-speaking hermit crab named Michel, and PETS (Process for Editing Tons of Services), represented by three cats overwhelmed by microservice dev. Our teammate Han Yu had a blast with mascot design (cheers to Docker for its animal mascots):

[Images: the Tilt cat and crab mascots]

The first version of Tilt was a bare-bones terminal app to update containers in a Kubernetes cluster. It resonated immediately. Tilt has grown a lot since then. Running all or just a few of your services is easier than ever. Why did we focus on Kubernetes? Kubernetes contains a few simple, brilliant ideas for how to operate apps. Tilt borrows a lot of ideas (and a lot of core libraries) from Kubernetes on how to be scriptable and adaptable. But more importantly, the Kubernetes community is lovely. They appreciate Goose-themed trolling. We found a worldwide community of people enthusiastic about building better tools.

We’re sad we missed everyone at Kubecon EU this year! We didn’t know if this deal would finish before, during, or after the conference.

That said, over the next couple of months, we’ll be swapping notes with the Docker team about what we’ve both learned and what we’ve both tried. We don’t know yet where this will take us. Maybe you’ll see Tilt & Kubernetes features in Docker Compose. Or maybe you’ll see Docker Desktop features in Tilt. There will be research and tinkering, where we’ll be in our lab coats and glitter, but do expect us to bring the power of Tilt to Docker.

What developers need to know about Docker, Docker Engine, and Kubernetes v1.20
https://www.docker.com/blog/what-developers-need-to-know-about-docker-docker-engine-and-kubernetes-v1-20/ (December 4, 2020)

The latest version of Kubernetes, v1.20.0-rc.0, is now available. The Kubernetes project plans to deprecate Docker Engine support in the kubelet, and support for dockershim will be removed in a future release, probably late next year. The net takeaway: support for container images built with Docker tools is not being deprecated, and they will still work as before.

Even better news, however, is that Mirantis and Docker have agreed to partner to maintain the shim code standalone outside Kubernetes, as a conformant CRI interface for Docker Engine. We will start with the great initial prototype from Dims, at https://github.com/dims/cri-dockerd, and continue to make it available as an open source project at https://github.com/Mirantis/cri-dockerd. This means that you can continue to build Kubernetes based on Docker Engine as before, just switching from the built-in dockershim to the external one. Docker and Mirantis will work together to make sure it continues to work as well as before, passes all the conformance tests, and works just like the built-in version did. Docker will continue to ship this shim in Docker Desktop, as this gives a great developer experience, and Mirantis will be using this in Mirantis Kubernetes Engine.
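If you’re unsure which runtime your cluster’s nodes report today, a quick check (the runtime strings in the output are illustrative):

```
# The CONTAINER-RUNTIME column shows e.g. docker://20.10.8
# or containerd://1.4.9 for each node:
kubectl get nodes -o wide
```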

What does this mean for you if you use Docker and Kubernetes?

First of all, don’t panic 🙂 Developers can still use the Docker platform to build, share, and run containers on Kubernetes! This change primarily impacts operators and administrators for Kubernetes and doesn’t impact developer workflows. The images Docker builds are compliant with OCI (Open Container Initiative), are fully supported on containerd, and will continue to run great on Kubernetes.

If you’re using Docker, you’re already using containerd. We build Docker’s runtime upon containerd while providing a great developer experience around it. For production environments that benefit from a minimal container runtime, such as Kubernetes, and may have no need for Docker’s great developer experience, it’s reasonable to directly use lightweight runtimes like containerd.

Docker set up the Open Container Initiative (OCI) in 2015 in order to support fully interoperable container standards and make sure that every container can run in any environment. This has been a huge success in promoting innovation while maintaining interoperability.

Docker created the containerd project, along with Google and IBM, in 2016, with the goal of this transition in mind. The deprecation of docker-shim (and Docker Engine as runtime) marks the completion of a long-term commitment to provide a modern runtime for Kubernetes. Containerd was created as a core low-level, extensible runtime for both Docker and Kubernetes to each use in the most appropriate way.

Containerd was donated to the CNCF in 2017, and has grown to incorporate the containerd CRI project to interface with Kubernetes, as well as seeing a host of innovation and investment from across the industry, including from Amazon, Google, Microsoft and IBM.

In 2019 it became a graduated CNCF project, the highest project level, showing its maturity, and it remains the only container runtime with this status. Over the last few years, leading Kubernetes service providers such as AWS and Google have migrated to containerd as their Kubernetes runtime. This deprecation now reflects the great success of this work, and of the thriving community around containerd.

Support for your container images built with Docker tools is not being deprecated.

Container images you build using Docker tools will continue to run on Kubernetes. BuildKit, our next-generation build infrastructure, has a flexible architecture: while it can be used as the builder with Docker, it can also talk directly to containerd or runc for use in infrastructure where Docker might not be available.
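For instance, BuildKit can run under the classic CLI or standalone (a sketch; the standalone invocation assumes a running buildkitd daemon, and the image tag is a placeholder):

```
# Enable BuildKit for a regular docker build:
DOCKER_BUILDKIT=1 docker build -t myapp .

# Or drive BuildKit directly, with no Docker Engine in the loop:
buildctl build --frontend dockerfile.v0 \
  --local context=. --local dockerfile=.
```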

Docker is committed to containerd development: we will continue to further invest, along with the growing BuildKit community, in helping you use Docker builds wherever and however your infrastructure is hosted.

You can continue to build and run Docker images locally and in your Kubernetes cluster as this deprecation will not impact that experience.

What is the Kubernetes project deprecating then?

Kubernetes is deprecating dockershim, a component of the kubelet that communicates with Docker Engine. Arnaud Porterie had some great thoughts on this that he shared here.

The Kubernetes project has also published this FAQ.  Kat Cosgrove did a great job explaining the changes very simply here.

What does this mean for Developers and Admins?

Today, and in Kubernetes v1.20, Kubernetes administrators can continue to use docker commands and kubectl commands to manage their Kubernetes clusters.

Kubernetes administrators will still be able to use dockershim in the future via the standalone shim. Keep an eye on the Mirantis and Docker blogs for updated information about the future of dockershim and how to install the standalone shim.

In a future release of Kubernetes, a few minor releases from now, when support for the internal dockershim is eventually removed from Kubernetes’ kubelet, Kubernetes administrators will need to make some changes to ensure docker commands to inspect Kubernetes clusters will continue to work. Developers can continue to use Docker tools to docker build, docker push and docker run containers and container images on Kubernetes. 

Further Background

KEP-1985: Kubernetes Enhancement Proposal to remove dockershim from Kubelet

Questions? Feedback?

Please reach out on Docker’s Slack if you have questions or other feedback.

What is containerd?
https://www.docker.com/blog/what-is-containerd-runtime/ (August 7, 2017)

[Image: containerd logo]

We have done a few talks in the past on different features of containerd, how it was designed, and some of the problems that we have fixed along the way. containerd is used by Docker, Kubernetes CRI, and a few other projects, but this is a post for people who may not know what containerd actually does within these platforms. I would like to do more posts on the feature set and design of containerd in the future, but for now we will start with the basics.

I think the container ecosystem can be confusing at times, especially with the terminology that we use. What’s this? A runtime. And this? A runtime… containerd, as the name implies (not “contain nerd,” as some would like to troll me with), is a container daemon. It was originally built as an integration point for OCI runtimes like runc, but over the past six months it has added a lot of functionality to bring it up to par with the needs of modern container platforms like Docker and Kubernetes.


There is no such thing as a Linux container in kernelspace; containers are just various kernel features tied together. When you are building a large platform or distributed system, you want an abstraction layer between your management code and the syscalls and duct tape of features required to run a container. That is where containerd lives. It provides a client layer of types that platforms can build on top of without ever having to drop down to the kernel level. It’s so much nicer to work with Container, Task, and Snapshot types than it is to manage calls to clone() or mount().

containerd was designed to be used by Docker and Kubernetes, as well as any other container platform that wants to abstract away syscalls or OS-specific functionality to run containers on Linux, Windows, Solaris, or other OSes. With these users in mind, we wanted to make sure that containerd has only what they need and nothing that they don’t. Realistically this is impossible, but at least that is what we try for. Things like networking are out of scope for containerd. The reason is that when you are building a distributed system, networking is a very central aspect. With SDN and service discovery today, networking is way more platform-specific than abstracting away netlink calls on Linux. Most of the new overlay networks are route-based and require routing tables to be updated each time a new container is created or deleted. Service discovery, DNS, and so on all have to be notified of these changes as well. It would be a large chunk of code to support all the different network interfaces, hooks, and integration points if we added networking to containerd. What we did instead was opt for a robust events system inside containerd, so that multiple consumers can subscribe to the events that they care about. We also expose a task API that lets users create a running task, add interfaces to the network namespace of the container, and then start the container’s process without the need for complex hooks in various points of a container’s lifecycle.

Another area that has been added to containerd over the past few months is a complete storage and distribution system that supports both OCI and Docker image formats. You have a complete content-addressed storage system across the containerd API that works not only for images but also for metadata, checkpoints, and arbitrary data attached to containers.

We also took the time to rethink how “graphdrivers” work. These are the overlay or block-level filesystems that allow images to have layers and let you perform efficient builds. Graphdrivers were initially written by Solomon and me when we added support for devicemapper. Docker only supported AUFS at the time, so we modeled the graphdrivers after the overlay filesystem. However, making a block-level filesystem such as devicemapper/LVM act like an overlay filesystem proved to be much harder to do in the long run. The interfaces had to expand over time to support different features than what we originally thought would be needed. With containerd, we took a different approach: make overlay filesystems act like a snapshotter instead of vice versa. This was much easier to do, as overlay filesystems provide much more flexibility than snapshotting filesystems like BTRFS, ZFS, and devicemapper, since they don’t have a strict parent/child relationship. This helped us build out a smaller interface for the snapshotters while still fulfilling the requirements needed by things like the builder, and it reduced the amount of code needed, making it much easier to maintain in the long run.

So what do you actually get using containerd?  You get push and pull functionality as well as image management.  You get container lifecycle APIs to create, execute, and manage containers and their tasks. An entire API dedicated to snapshot management.  Basically everything that you need to build a container platform without having to deal with the underlying OS details.  I think the most important part of containerd is having a versioned and stable API that will have bug fixes and security patches backported.
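To get a feel for those APIs, containerd ships with a low-level CLI, ctr, that exercises them directly. A quick tour (assumes containerd is running locally; the image reference and container ID are just examples):

```
# Pull an image into containerd's content store:
ctr images pull docker.io/library/alpine:latest

# Create a container and its task, attached to a TTY:
ctr run --rm -t docker.io/library/alpine:latest demo sh

# List snapshots managed by the default snapshotter:
ctr snapshots ls
```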





Demystifying the Open Container Initiative (OCI) Specifications
https://www.docker.com/blog/demystifying-open-container-initiative-oci-specifications/ (July 19, 2017)

The Open Container Initiative (OCI) announced the completion of the first versions of the container runtime and image specifications this week. The OCI is an effort under the auspices of the Linux Foundation to develop specifications and standards to support container solutions. A lot of effort has gone into building these specifications over the past two years. With that in mind, let’s take a look at some of the myths that have arisen along the way.

[Image: OCI logo]

Myth: The OCI is a replacement for Docker

Standards are important, but they are far from a complete production platform. Take, for example, the World Wide Web. It has evolved over the last 25 years and was built on core dependable standards like TCP/IP, HTTP, and HTML. Using TCP/IP as an example: when enterprises coalesced around TCP/IP as a common protocol, it fueled the growth of routers, and in particular Cisco. However, Cisco became a leader in its market by focusing on differentiated features on its routing platform. We believe the parallel exists with the OCI specifications and Docker.

Docker is a complete production platform for developing, distributing, securing and orchestrating container-based solutions. The OCI specification is used by Docker, but it represents only about five percent of our code and a small part of the Docker platform concerned with the runtime behavior of a container and the layout of a container image. 

Myth: Products and projects already are certified to the OCI specifications

The runtime and image specifications were just released as 1.0 this week. However, the OCI certification program is still in development, so companies cannot claim compliance, conformance, or compatibility until certification is formally rolled out later this year.

The OCI certification working group is currently defining the standard so that products and open source projects can demonstrate conformance to the specifications. Standards and specifications are important for engineers implementing solutions, but formal certification is the only way to reassure customers that the technology they are working with is truly conformant to the standard.

Myth: Docker doesn’t support the OCI specifications work

Docker has a long history with contributing to the OCI. We developed and donated a majority of the OCI code and have been instrumental in defining the OCI runtime and image specifications as maintainers of the project. When the Docker runtime and image format quickly became the de facto standards after being released as open source in 2013, we thought it would be beneficial to donate the code to a neutral governance body to avoid fragmentation and encourage innovation. The goal was to provide a dependable and standardized specification so Docker contributed runc, a simple container runtime, as the basis of the runtime specification work, and later contributed the Docker V2 image specification as the basis for the OCI image specification work.

Docker developers like Michael Crosby and Stephen Day have been key contributors from the beginning of this work, ensuring Docker’s experience hosting and running billions of container images carries through to the OCI. When the certification working group completes its work, Docker will bring its products through the OCI certification process to demonstrate OCI conformance.

Myth: The OCI specifications are about Linux containers 

There is a misperception that the OCI is only applicable to Linux container technologies because it is under the aegis of the Linux Foundation. The reality is that although Docker technology started in the Linux world, Docker has been collaborating with Microsoft to bring our container technology, platform and tooling to the world of Windows Server. Additionally, the underlying technology that Docker has donated to the OCI is broadly applicable to  multi-architecture environments including Linux, Windows and Solaris and covers x86, ARM and IBM zSeries. 

Myth: Docker was just one of many contributors to the OCI

The OCI as an organization has a lot of supporting members representing the breadth of the container industry. That said, it has been a small but dedicated group of individual technologists that have contributed the time and technology to the efforts that have produced the initial specifications. Docker was a founding member of the OCI, contributing the initial code base that would form the basis of the runtime specification and later the reference implementation itself. Likewise, Docker contributed the Docker V2 Image specification to act as the basis of the OCI image specification.

Myth: CRI-O is an OCI project

CRI-O is an open source project in the Kubernetes incubator in the Cloud Native Computing Foundation (CNCF) – it is not an OCI project. It is based on an earlier version of the Docker architecture, whereas containerd, a direct CNCF project, is a larger container runtime that includes the runc reference implementation. containerd is responsible for image transfer and storage, container execution and supervision, and low-level functions to support storage and network attachments. Docker donated containerd to the CNCF with the support of the five largest cloud providers: Alibaba Cloud, AWS, Google Cloud Platform, IBM Softlayer, and Microsoft Azure, with a charter of being a core container runtime for multiple container platforms and orchestration systems.

Myth: The OCI specifications are now complete 

While the release of the runtime and image format specifications is an important milestone, there’s still work to be done. The initial scope of the OCI was to define a narrow specification on which developers could depend for the runtime behavior of a container, preventing fragmentation in the industry, and still allowing innovation in the evolving container domain. This was later expanded to include a container image specification.

As the working groups complete the first stable specifications for runtime behavior and image format, new work is under consideration. Ideas for future work include distribution and signing. The next most important work for the OCI, however, is delivering on a certification process backed by a test suite now that the first specifications are stable.





Announcing LinuxKit: A Toolkit for building Secure, Lean and Portable Linux Subsystems
https://www.docker.com/blog/introducing-linuxkit-container-os-toolkit/ (April 18, 2017)

[Image: LinuxKit logo]

Last year, one of the most common requests we heard from our users was to bring a Docker-native experience to their platforms. These platforms were many and varied: from cloud platforms such as AWS, Azure, and Google Cloud, to server platforms such as Windows Server, to the desktop platforms their developers used such as OS X and Windows 10, to mainframes and IoT platforms. The list went on.

We started working on support for these platforms, and we initially shipped Docker for Mac and Docker for Windows, followed by Docker for AWS and Docker for Azure. Most recently, we announced the beta of Docker for GCP. The customizations we applied to make Docker native for each platform have furthered the adoption of the Docker editions.

One of the issues we encountered was that for many of these platforms, the users wanted Linux container support but the platform itself did not ship with Linux included. Mac OS and Windows are two obvious examples, but cloud platforms do not ship with a standard Linux either. So it made sense for us to bundle Linux into the Docker platform to run in these places.

What we needed to bundle was a secure, lean, and portable Linux subsystem that can provide Linux container functionality as a component of a container platform. As it turned out, this is what many other people working with containers wanted as well: a secure, lean, and portable Linux subsystem for the container movement. So we partnered with several companies and the Linux Foundation to build this component. These companies include HPE, Intel, ARM, IBM, and Microsoft – all of whom are interested in bringing Linux container functionality to new and varied platforms, from IoT to mainframes.

LinuxKit includes the tooling to allow building custom Linux subsystems that only include exactly the components the runtime platform requires. All system services are containers that can be replaced, and everything that is not required can be removed. All components can be substituted with ones that match specific needs. It is a kit, very much in the Docker philosophy of batteries included but swappable. Today, onstage at DockerCon 2017, we open-sourced LinuxKit at https://github.com/linuxkit/linuxkit.
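As a rough sketch of the workflow (assuming the linuxkit CLI from the project repo and a YAML definition named minimal.yml that lists a kernel, init, and the system-service containers to include):

```
# Build a bootable image from the YAML definition:
linuxkit build minimal.yml

# Boot the result locally for a quick test:
linuxkit run minimal
```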

To achieve our goals of a secure, lean, and portable OS, we built it from containers, for containers. Security is a top-level objective and aligns with NIST, which states in their draft Application Container Security Guide: “Use container-specific OSes instead of general-purpose ones to reduce attack surfaces. When using a container-specific OS, attack surfaces are typically much smaller than they would be with a general-purpose OS, so there are fewer opportunities to attack and compromise a container-specific OS.”

The leanness directly helps with security by removing parts not needed, since the OS is designed around the single use case of running containers. Because LinuxKit is container-native, it has a very minimal size (35MB) and a very minimal boot time. All system services are containers, which means that everything can be removed or replaced.

System services are sandboxed in containers, with only the privileges they need. The configuration is designed for the container use case. The whole system is built to be used as immutable infrastructure, so it can be built and tested in your CI pipeline, deployed, and new versions are redeployed when you wish to upgrade.

The kernel comes from our collaboration with the Linux kernel community: we participate in the process and work with groups such as the Kernel Self Protection Project (KSPP), while shipping recent kernels with only the minimal patches needed to fix issues with the platforms LinuxKit supports. The kernel security process is too big for a single company to develop on its own, so a broad industry collaboration is necessary.

In addition LinuxKit provides a space to incubate security projects that show promise for improving Linux security. We are working with external open source projects such as Wireguard, Landlock, Mirage, oKernel, Clear Containers and more to provide a testbed and focus for innovation in the container space, and a route to production.

LinuxKit is portable, as it was built for the many platforms Docker runs on now, and with a view to making it run on far more: machines large or small, bare metal or virtualized, mainframes or the kind of devices used in Internet of Things scenarios, as containers reach into every area of computing.

For the launch we invited John Gossman from Microsoft onto the stage. We have a long history of collaboration with Microsoft, on Docker for Windows Server, Docker for Windows, and Docker for Azure. Part of that collaboration has been work on the Linux subsystem in Docker for Windows and Docker for Azure, and on Hyper-V integration with LinuxKit on those platforms. The next step in that collaboration, announced today, is that all Windows Server and Windows 10 customers will get access to Linux containers, and we will be working together on how to integrate LinuxKit with Hyper-V isolation.

Today we open up LinuxKit to partners and open source enthusiasts to build new things with Linux and to expand the container platform. We look forward to seeing what you make from it and contribute back to the community.





containerd – a core container runtime project for the industry
https://www.docker.com/blog/introducing-containerd/ (December 14, 2016)

Today Docker is spinning out its core container runtime functionality into a standalone component, incorporating it into a separate project called containerd, and will be donating it to a neutral foundation early next year. This is the latest chapter in a multi-year effort to break up the Docker platform into a more modular architecture of loosely coupled components.

Over the past 3 years, as Docker adoption skyrocketed, it grew into a complete platform to build, ship, and run distributed applications, covering many functional areas from infrastructure to orchestration, with the core container runtime being just one piece of it. For millions of developers and IT pros, a complete platform is exactly what they need. But many platform builders and operators are looking for “boring infrastructure”: a basic component that provides the robust primitives for running containers on their system, bundled in a stable interface, and nothing else. containerd is built to provide exactly that: a component that they can customize, extend, and swap out as needed, without unnecessary abstraction getting in their way.


What Docker does best is provide developers and operators with great tools which make them more productive. Those tools come from integrating many different components into a cohesive whole. Most of those components are invented by others – but along the way we find ourselves developing some of those components from scratch. Over time we spin out these components as independent projects which anyone can reuse and contribute back to. containerd is the latest of those components.

[Image: Docker open source components]

containerd has already been deployed on millions of machines since April 2016, when it was included in Docker 1.11. Today we are announcing a roadmap to extend containerd, with input from the largest cloud providers, Alibaba Cloud, AWS, Google, IBM, Microsoft, and other active members of the container ecosystem. We will add more Docker Engine functionality to containerd so that containerd 1.0 will provide all the core primitives you need to manage containers, with parity on Linux and Windows hosts:

  • Container execution and supervision
  • Image distribution
  • Network Interfaces Management
  • Local storage
  • Native plumbing level API
  • Full OCI support, including the extended OCI image specification

When containerd 1.0 implements that scope, in Q2 2017, Docker and other leading container systems, from AWS ECS to Microsoft ACS, Kubernetes, Mesos, or Cloud Foundry, will be able to use it as their core container runtime. containerd will use the OCI standard and be fully OCI compliant.


Over the past 3 years, the adoption of containers with Docker has triggered an unprecedented wave of innovation in our industry. We think containerd will unlock a whole new phase of innovation and growth across the entire container ecosystem, which in turn will benefit every Docker developer and customer.

You can find the up-to-date roadmap, architecture, and API definitions in the GitHub repository, and more details about the project in our engineering team’s blog post. We plan to have a containerd summit at the end of February to bring in more contributors; stay tuned for more details in the next few weeks.

Thank you to Arnaud Porterie, Michael Crosby, Mickaël Laventure, Stephen Day, Patrick Chanezon and Mike Goelzer from the Docker team, and all the maintainers and contributors of the Docker project for making this project a reality.





Docker 1.11: The first runtime built on containerd and based on OCI technology
https://www.docker.com/blog/docker-engine-1-11-runc/ (April 13, 2016)

We are excited to introduce Docker Engine 1.11, our first release built on runC™ and containerd™. With this release, Docker is the first to ship a runtime based on OCI technology, demonstrating the progress the team has made since donating our industry-standard container format and runtime under the auspices of the Linux Foundation in June of 2015.

Over the last year, Docker has helped advance the work of the OCI to make it more readily available to more users. It started in December 2015, when we introduced containerd™, a daemon to control runC. This was part of our effort to break out Docker into small reusable components. With this release, Docker Engine is now built on containerd, so everyone who is using Docker is now using OCI. We’re proud of the progress we’ve made with the OCI’s 40+ members in continuing the work to standardize container technology.

Besides this mind-blowing piece of technological trivia that I’m sure will impress your friends at parties, what difference does it make to you? Well, the short answer is: nothing… yet! Nevertheless, let me convince you that this is still something you should be excited about.

This is among the biggest technical refactorings the Engine has gone through, and our priority for 1.11 was to get the integration right, without changing the command-line interface or API. However, this lays the technical groundwork for significant user-facing improvements.


Stability and performance

With the containerd integration comes an impressive cleanup of the Docker codebase and a number of historical bugs being fixed. In general, splitting Docker up into focused independent tools means more focused maintainers, and ultimately better quality software.

Performance-wise, we were extremely careful in making sure 1.11 would not be any slower despite the extra inter-process communication. We’re pleased to say that it is faster at creating containers concurrently than its predecessor, and although we made a deliberate choice of favoring correctness first, you can expect more performance improvements in the future.


Creating an ecosystem for container execution backends

runC is the first implementation of the Open Containers Runtime specification and the default executor bundled with Docker Engine. Thanks to the open specification, future versions of Engine will allow you to specify different executors, thus enabling an ecosystem of alternative execution backends without any changes to Docker itself. By separating out this piece, an ecosystem partner can build their own executor compliant with the specification and make it available to the user community at any time – without depending on the Engine release schedule or waiting to be reviewed and merged into the codebase.

What does this mean for you? This gives you choice: the runtime is now pluggable. Following the Docker principle of batteries included but swappable, Docker Engine will ship with runC as the default, with the ability to choose from a variety of container executors that target specific platforms or have different security and performance features. By separating out the thing that runs containers from the Engine, this opens up new possibilities. As an example, 1.11 is a huge step toward allowing Engine restarts/upgrades without restarting the containers, improving the availability of containers. This is probably one of the most requested features by Docker users. In fact, with this new architecture, you will even be able to restart containerd and your containers will keep running.
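Later Engine releases exposed this pluggability directly; here is a hedged sketch of registering and selecting an alternative OCI runtime (the runtime name and binary path are placeholders):

```
# Register an alternative OCI-compatible runtime with the daemon:
dockerd --add-runtime custom-runc=/usr/local/bin/custom-runc

# Then select it per container at run time:
docker run --rm --runtime=custom-runc alpine echo "hello"
```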



But wait, there’s more!

In addition to this huge architectural change, as usual we have also added a bunch of features in Engine, Compose, Swarm, Machine, and Registry.


Engine 1.11

  • DNS round robin load balancing: It’s now possible to load balance between containers with Docker’s networking. If you give multiple containers the same alias, Docker’s service discovery will return the addresses of all of the containers for round-robin DNS (see the sketch after this list).
  • VLAN support (experimental): VLAN support has been added for Docker networks in the experimental channel, so you can integrate better with existing networking infrastructure.
  • IPv6 service discovery: Engine’s DNS-based service discovery system can now return AAAA records.
  • Yubikey hardware image signing: A few months ago we added the ability to sign images with hardware Yubikeys in the experimental channel of Docker. This is now available in the stable release. Read more about how it works in this blog post.
  • Labels on networks and volumes: You can now attach arbitrary key/value data to networks and volumes, in the same way you could with containers and images.
  • Better handling of low disk space with device mapper storage: The dm.min_free_space option has been added to make device mapper fail more gracefully when running out of disk space.
  • Consistent status field in docker inspect: This is a little thing, but really handy if you use the Docker API. docker inspect now has a Status field, a single consistent value to define a container’s state (running, stopped, restarting, etc). Read more in the pull request.
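A minimal sketch of that round-robin setup (the network and alias names are placeholders):

```
# Two containers sharing one alias on a user-defined network:
docker network create demo-net
docker run -d --net demo-net --net-alias web nginx
docker run -d --net demo-net --net-alias web nginx

# Lookups for "web" now return both container addresses:
docker run --rm --net demo-net alpine nslookup web
```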

See the release notes for full details.


Compose 1.7

  • --build option for docker-compose up: This is a shorthand for the common pattern of running docker-compose build and then docker-compose up. Compose doesn’t build on every run by default in case your build is slow, but if you’ve got a quick build, running this every time will ensure your environment is always up to date.
  • docker-compose exec command: Mirroring the docker exec command.

See the release notes for full details.


Swarm 1.2

  • Container rescheduling no longer experimental: In the previous version of Swarm, we added support for rescheduling containers when a node dies. This is now considered stable, so you can safely use it in production.
  • Better errors when containers can’t be scheduled: For example, when a constraint fails, the constraints will be printed out so you can easily see what went wrong.

See the release notes for full details.


Machine 0.7

In this version of Machine, the Microsoft Azure driver now uses the new Azure APIs and is easier to authenticate. See Azure’s blog post for more details. There are also a bunch of other improvements in this release – take a look at the full release notes for details.


Registry 2.4

  • Garbage collection: A tool has been added so system administrators can clean up the data from images that have been deleted by users. For more details, see the garbage collector docs.
  • Faster and more stable S3 driver: The S3 storage driver is now implemented on top of the official Amazon S3 SDK, which has major performance and stability goodness.

See the release notes for full details.


Download and try out Docker 1.11

The easiest way to try out all of this stuff in development is to download Docker Toolbox. For other platforms, check out the installation instructions.

All of this stuff is also available in Docker for Mac and Windows, the new way to use Docker in development, currently in private beta. Sign up to get a chance to try it out early.







Docker containerd integration
https://www.docker.com/blog/docker-containerd-integration/ (April 13, 2016)

In an effort to make Docker Engine smaller, better, faster, stronger, we looked for components of the current engine that we could break out into separate projects and improve along the way. One of those components is the Docker runtime for managing containers. With standalone runtimes like runc, we needed a clean integration point for adding runc to the stack as well as for managing hundreds of containers.

So we started the containerd project to move the container supervision out of the core Docker Engine and into a separate daemon. containerd has full support for starting OCI bundles and managing their lifecycle. This allows users to replace the runc binary on their system with an alternate runtime and get the benefits of still using Docker’s API.

So why another project? Why are you all busy refactoring things instead of fixing real issues? Well…

containerd improves parallel container start times, so if you need to launch multiple containers as fast as possible, you should see improvements with this release. On my laptop, I can launch 100 containers in 1.64 seconds; that is about 60 containers per second. When starting a container, most of the time is spent within syscalls and system-level operations. It does not make sense to launch all 100 containers concurrently, since the majority of the startup time is spent waiting on the hardware/kernel to complete the operations. containerd uses events to schedule container start requests and various other operations lock-free. It has a configurable limit on how many containers it will start concurrently; by default we have it set at 10 workers. This allows you to make as many API requests as you want, and containerd will start containers as fast as it can without totally overwhelming the system.

Because the containerd integration also brings in runc, the model for starting a container has changed a little in order to support the OCI runtime specification. When you make a request to Docker Engine, we will do all the image management work as normal, but after that the first step is to generate an OCI bundle for the container. This contains information from the image that you pulled and runtime information that you provided via the API. After the configuration has been generated, Docker will mount the container’s root filesystem inside the bundle before making the call to containerd to start the container.
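To make the bundle concept concrete, here is a hedged sketch of what such a directory looks like and how runc consumes one directly (paths and IDs are illustrative):

```
# An OCI bundle is a directory holding a runtime config plus the
# container's root filesystem:
ls my-bundle/
# config.json  rootfs/

# runc can start a container straight from a bundle:
runc run --bundle my-bundle demo-container
```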

containerd’s API is very simple. We decided to build it with gRPC for performance as well as the ability to easily create client libraries. The basic request to start a container via the API is to only provide the path to the OCI bundle and the ID for the container. By keeping the API simple we minimize the changes in the API when new features are added.

We also keep very minimal state in containerd. After the container has exited, Docker will delete the generated bundle, and any runtime state will be gone. This means container runtime state lives on disk only as long as the container is running.

As a result of this integration, you should start to see more features in future releases of Docker, like live migration and the ability to reattach to running containers so that Docker can be upgraded without affecting your containers.

Give containerd a try with the latest Docker Engine 1.11 release. To install on your laptop, download Docker Toolbox. For other platforms, check out the installation instructions.





