Announcing Docker Hub OCI Artifacts Support
https://www.docker.com/blog/announcing-docker-hub-oci-artifacts-support/ (October 31, 2022)

We’re excited to announce that Docker Hub can now help you distribute any type of application artifact! You can now keep everything in one place without having to leverage multiple registries.

Before today, you could only use Docker Hub to store and distribute container images (that is, artifacts usable by container runtimes). This became a limitation of our platform, since container image distribution is just the tip of the application delivery iceberg. Modern application delivery requires many other types of artifacts, such as Helm charts and volumes.

Developers often share these artifacts with clients that need them, since they add immense value to each project. And while the OCI working groups are still working on the OCI Artifact Specification, application artifacts have to be packaged as OCI images in the meantime.

Docker Hub acts as an image registry and is perfectly suited for distributing application artifacts. That’s why we’ve added support for any software artifact — packaged as an OCI image — to Docker Hub.

What’s the Open Container Initiative (OCI)?

Back in 2015, we helped establish the Open Container Initiative as an open governance structure to standardize container image formats, container runtimes, and image distribution.

The OCI maintains a few core specifications. These govern the following:

  • How to package filesystem bundles
  • How to launch containerized, cross-platform apps
  • How to make packaged content accessible to remote clients

The Runtime Specification determines how OCI images and runtimes interact. Next, the Image Specification outlines how to create OCI images. Finally, the Distribution Specification defines how to make content distribution interoperable.

The OCI’s overall aim is to boost transparency, runtime predictability, software compatibility, and distribution. We’ve since donated our own container format and our OCI-compliant runtime, runC, to the OCI, and contributed the OCI-compliant distribution project to the CNCF.

Why are we adding OCI support? 

Container images are integral to supporting your containerized application builds. We know that images accumulate between projects, making centralized cloud storage essential to efficiently manage resources. Developers shouldn’t have to rely on local storage or wonder if these resources are readily accessible. However, we also know that developers want to store a variety of artifacts within Docker Hub. 

Storing your artifacts in Docker Hub unlocks “anywhere access” while also enabling improved collaboration through Docker Hub’s standard sharing capabilities. This aligns us more closely with the OCI’s content distribution mission by giving users greater control over key pieces of application delivery.

How do I manage different OCI artifacts?

We recommend using dedicated tools to help manage non-container OCI artifacts, like the Helm CLI for Helm charts or the OCI Registry-as-Storage (ORAS) CLI for arbitrary content types.
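If you don’t have these tools installed yet, both are available through common package managers. For example, on macOS with Homebrew (an assumption about your setup; see each project’s documentation for other platforms):

$ brew install helm oras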

Let’s walk through a few use cases to showcase OCI support in Docker Hub.

Working with Helm charts

Helm chart support was your most-requested feature, and we’ve officially added it to Docker Hub! So, how do you take advantage? We’ll create a simple Helm chart and push it to Docker Hub. This process will follow Helm’s official guide for storing Helm charts as OCI images in registries.

First, we’ll create a demo Helm chart:

$ helm create demo

This’ll generate the familiar Helm chart boilerplate files that you can edit:

demo
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 10 files

Once we’re done editing, we’ll need to package the Helm chart as an OCI image:

$ helm package demo

Successfully packaged chart and saved it to: /Users/martine/tmp/demo-0.1.0.tgz

Don’t forget to log in to Docker Hub before pushing your Helm chart. We recommend creating a Personal Access Token (PAT) for this. You can export your PAT via an environment variable and use it to log in, as shown in the next two commands.
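(The token value below is just a placeholder for the PAT you created in Docker Hub.)

$ export REG_PAT='<your-personal-access-token>'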

$ echo $REG_PAT | helm registry login registry-1.docker.io -u martine --password-stdin

Pushing your Helm chart

You’re now ready to push your first Helm chart to Docker Hub! But first, make sure you have write access to your Helm chart’s destination namespace. In this example, let’s push to the docker namespace:

$ helm push demo-0.1.0.tgz oci://registry-1.docker.io/docker

Pushed: registry-1.docker.io/docker/demo:0.1.0
Digest: sha256:1e960ad1693c234b66ec1f9ddce80986cbf7159d2bb1e9a6d2c2cd6e89925e54

Viewing your Helm chart and using filters

Now, if you log in to Docker Hub and navigate to the demo repository detail, you’ll find your Helm chart in the list of repository tags:

[Image: Helm chart shown with the Helm type in the Docker Hub tag list]

You can navigate to the Helm chart page by clicking on the tag. The page displays useful Helm CLI commands:

[Image: Helm CLI commands displayed on the chart page]
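For example, with Helm 3.8 or later you can pull or install the chart straight from Docker Hub. This is a sketch using the chart we just pushed; the release name my-demo is arbitrary:

$ helm pull oci://registry-1.docker.io/docker/demo --version 0.1.0

$ helm install my-demo oci://registry-1.docker.io/docker/demo --version 0.1.0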

Repository content management is now easier. We’ve improved content discoverability by adding a drop-down button to quickly filter the repository list by content type. Simply click the Content drop-down and select Helm from the list:

[Image: Selecting Helm from the Content drop-down filter]

Working with volumes

Developers use volumes throughout the Docker ecosystem to share arbitrary application data like database files. You can already back up your volumes using the Volumes Backup & Share extension that we recently launched. You can now also filter repositories to find those containing volumes using the same drop-down menu.

But until Volumes Backup & Share pushes volumes as OCI artifacts instead of images (coming soon!), you can use the ORAS CLI to push volumes.

Note: We recommend ORAS CLI versions 0.15 or later since these bring full OCI registry client functionality.

Let’s walk through a simple use case that mirrors the examples documented by the ORAS CLI. First, we’ll create a simple file we want to package as a volume:

$ echo "bar" > foo.txt

For Docker Hub to recognize this volume, we must attach a config file to the OCI image upon creation and mark it with a specific media type. The file can contain arbitrary content, so let’s create one:

$ echo "{\"name\":\"foo\",\"value\":\"bar\"}" > config.json

With this step completed, you’re now ready to push your volume.

Pushing your volume

Here’s where the magic happens. The media type Docker Hub needs to successfully recognize the OCI image as a volume is application/vnd.docker.volume.v1+tar.gz. You can attach the media type to the config file and push it to Docker Hub with the following command (plus its resulting output):

$ oras push registry-1.docker.io/docker/demo:0.0.1 --config config.json:application/vnd.docker.volume.v1+tar.gz foo.txt:text/plain

Uploading b5bb9d8014a0 foo.txt
Uploaded  b5bb9d8014a0 foo.txt
Pushed registry-1.docker.io/docker/demo:0.0.1
Digest: sha256:f36eddbab8459d0ad1436b7ca8af6bfc512ec74f45d8136b53c16db87562016e
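To verify the round trip, you can pull the volume contents back down with the ORAS CLI. This is a quick sketch; the output directory name is arbitrary:

$ oras pull registry-1.docker.io/docker/demo:0.0.1 --output ./restored

$ cat ./restored/foo.txt
bar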

We now have two types of content in the demo repository as shown in the following breakdown:

[Image: Repository tag list showing both Helm and Volume content types]

If you navigate to the content page, you’ll see some basic information that we’ll expand upon in future iterations. This will boost visibility into a volume’s contents.

[Image: Volume details page]

Handling generic content types

If you don’t use the application/vnd.docker.volume.v1+tar.gz media type when pushing the volume with the ORAS CLI, Docker Hub will mark the artifact as generic to distinguish it from recognized content.

Let’s push the same volume but use the application/vnd.random.volume.v1+tar.gz media type instead of the one known to Docker Hub:

$ oras push registry-1.docker.io/docker/demo:0.1.1 --config config.json:application/vnd.random.volume.v1+tar.gz foo.txt:text/plain

Exists	7d865e959b24 foo.txt
Pushed registry-1.docker.io/docker/demo:0.1.1
Digest: sha256:d2fb2b176ee4e326f1f34ecdaede8db742f2c444cb2c9ceff0f5c8b743281c95

You can see the new content is assigned a generic Other type. We can still view the tagged content’s media type by hovering over the type label. In this case, that’s application/vnd.random.volume.v1+tar.gz:

[Image: Tag list showing the generic Other content type]
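You can also check a tag’s media type from the command line. Newer ORAS CLI releases include a manifest fetch subcommand; as a sketch, the config’s mediaType field in the returned JSON shows the custom type we pushed:

$ oras manifest fetch registry-1.docker.io/docker/demo:0.1.1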

If you’d like to filter the repositories that contain both Helm charts and volumes, use the same drop-down menu in the top-right corner:

[Image: Selecting Volume from the Content drop-down filter]

Working with container images

Finally, you can continue pushing your regular container images to the exact same repository as your other artifacts. Say we re-tag the Redis Docker Official Image and push it to Docker Hub:

$ docker tag redis:3.2-alpine docker/demo:v1.2.2

$ docker push docker/demo:v1.2.2

The push refers to repository [docker.io/docker/demo]
a1892d5d1a6d: Mounted from library/redis
e41876edb6d0: Mounted from library/redis
7119119b7542: Mounted from library/redis
169a281fff0f: Mounted from library/redis
04c8ef03e935: Mounted from library/redis
df64d3292fd6: Mounted from library/redis
v1.2.2: digest: sha256:359cfebb00bef01cda3bc1ca453e6455c770a246a06ad8df499a28118c144eda size: 1570

Viewing your container images

If you now visit the demo repository page on Docker Hub, you’ll see every artifact listed under Tags and scans:

[Image: All artifact types listed under Tags and scans]

We’ll also introduce more features soon to help you better organize your application content, so stay tuned for more announcements!

Follow along for more updates

All developers can now access and choose from more robust sets of artifacts while building and distributing applications with Docker Hub. Not only does this remove existing roadblocks, but it’ll hopefully encourage you to create and distribute even more exciting applications.

But, our mission doesn’t end here! We’re continually working to bolster our OCI support. While the OCI Artifact Specification is considered a release candidate, full Docker Hub support for OCI Reference Types and the accompanying Referrers API is on the horizon. Stay tuned for upcoming enhancements, improved repo organization, and more.

Note: The OCI artifact manifest has since been removed from the OCI image-spec. Refer to this update for more information.

Docker Registry API to be standardized in OCI
https://www.docker.com/blog/docker-registry-api-standardized-oci/ (April 10, 2018)

We are excited to announce that the Docker Registry HTTP API V2 specification will be adopted in the Open Container Initiative (OCI), the organization under the Linux Foundation that provides the standards that fuel the containerization industry. The Docker team is proud to see another aspect of our technology stack become a de facto standard. As we’ve done with our image format, we are happy to formally share and collaborate with the container ecosystem as part of the OCI community. Our distribution protocol is the underpinning of all container registries on the market and is so robust that it is leveraged over a billion times every two weeks as container content is distributed across the globe.

What does this protocol do?

Putting the protocol into perspective, part of the core functionality of Docker is the ability to push and pull images. From the first “Hello, World” moment, this concept is introduced to every user and is a large part of the Docker experience. While we normally sit back in our armchairs and marvel at this magical occurrence, the amount of design and consideration that has gone into that simple capability can easily be overlooked.

When Docker was first released, the team quickly put together a protocol and implementation of the Image Registry and the magic truly began. An Image Registry provides a common service to store images across machines. It is what allows one to build an image on one machine, then pull down that same image and run it on others. One now had the power to pull down an entire software distribution and run it at the tap of a fingertip. This implementation powers the Docker Hub and eventually was open sourced as https://github.com/docker/docker-registry. This protocol and the implementation behind it eventually became known as the V1 protocol. Many an image was pushed and pulled and the developers rejoiced.

Evolution

Pushing and pulling images continued throughout the ages but, as users began to use Docker with other registries, issues with the V1 protocol arose. The central theme of the problems was around the concept of shared identity across registries and the tight coupling with the Docker implementation. The problem was that if a single Docker Engine pulls images from two separate registries, they may disagree on which image has which identifier. Something needed to change to ensure using multiple registries wouldn’t lead to problems for users.

Towards the end of 2014, Docker began addressing these problems with the introduction of  a proposal with the initial API structure. Key to this design was content addressable images, which allowed registries to have common identifiers for images, and the decoupling of internal details of the image format from the Docker Engine, allowing it to evolve on its own. The community came together and produced 140 comments on that proposal that were incorporated into the specification and implementation. The result of this effort was the release of Docker Registry 2.0 with GA support in Docker 1.6 in the spring of 2015. Since then, the Docker community has evolved to meet the growing needs of users.
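As a rough illustration of what content addressability buys you with today’s CLI (the image choice is arbitrary and the digest shown is a placeholder):

$ docker pull alpine:3.18

$ docker image inspect --format '{{index .RepoDigests 0}}' alpine:3.18
alpine@sha256:<digest>

$ docker pull alpine@sha256:<digest>

Because the identifier is derived from the content itself, any registry holding that content will agree on the same digest.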

Going Forward

As a result of the popularity of Docker, this protocol has become widely adopted across the industry. It is battle tested in a wide variety of environments. The protocol integrates well with complementary technologies such as signing and verification, as is available in Docker Enterprise Edition. By donating this specification to the OCI, we can ensure that this important part of the container experience becomes an official OCI standard. The Open Container Initiative previously introduced the runtime-spec and image-spec used by container runtimes. With the acceptance of the distribution-spec proposal, the protocol that has been a key part of using containers will flourish as part of OCI.

What is containerd?
https://www.docker.com/blog/what-is-containerd-runtime/ (August 7, 2017)

We have done a few talks in the past on different features of containerd, how it was designed, and some of the problems that we have fixed along the way. containerd is used by Docker, Kubernetes CRI, and a few other projects, but this is a post for people who may not know what containerd actually does within these platforms. I would like to do more posts on the feature set and design of containerd in the future, but for now we will start with the basics.

I think the container ecosystem can be confusing at times, especially with the terminology that we use. What’s this? A runtime. And this? A runtime… containerd, as the name implies (not “contain nerd,” as some would like to troll me with), is a container daemon. It was originally built as an integration point for OCI runtimes like runc, but over the past six months it has added a lot of functionality to bring it up to par with the needs of modern container platforms like Docker and Kubernetes.


There is no such thing as a Linux container in the kernel; a container is really various kernel features tied together. So when you are building a large platform or distributed system, you want an abstraction layer between your management code and the syscalls and duct tape of features needed to run a container. That is where containerd lives. It provides a client layer of types that platforms can build on top of without ever having to drop down to the kernel level. It’s so much nicer to work with Container, Task, and Snapshot types than it is to manage calls to clone() or mount().

containerd was designed to be used by Docker and Kubernetes, as well as any other container platform that wants to abstract away syscalls or OS-specific functionality to run containers on Linux, Windows, Solaris, or other OSes. With these users in mind, we wanted to make sure that containerd has only what they need and nothing that they don’t. Realistically this is impossible, but at least that is what we strive for. Things like networking are out of scope for containerd. The reason is that when you are building a distributed system, networking is a very central aspect, and with SDN and service discovery today, networking is far more platform-specific than abstracting away netlink calls on Linux. Most of the new overlay networks are route based and require routing tables to be updated each time a new container is created or deleted. Service discovery, DNS, and so on all have to be notified of these changes as well. Supporting all the different network interfaces, hooks, and integration points would have required a large chunk of code if we had added networking to containerd. What we did instead was opt for a robust events system inside containerd so that multiple consumers can subscribe to the events they care about. We also expose a task API that lets users create a running task, add interfaces to the network namespace of the container, and then start the container’s process without the need for complex hooks at various points of a container’s lifecycle.

Another area that has been added to containerd over the past few months is a complete storage and distribution system that supports both OCI and Docker image formats. You have a complete content-addressed storage system across the containerd API that works not only for images but also for metadata, checkpoints, and arbitrary data attached to containers.

We also took the time to rethink how “graphdrivers” work. These are the overlay or block-level filesystems that allow images to have layers and let you perform efficient builds. Graphdrivers were initially written by Solomon and me when we added support for devicemapper. Docker only supported AUFS at the time, so we modeled the graphdrivers after the overlay filesystem. However, making a block-level filesystem such as devicemapper/lvm act like an overlay filesystem proved to be much harder to do in the long run. The interfaces had to expand over time to support different features than what we originally thought would be needed. With containerd, we took a different approach: make overlay filesystems act like a snapshotter instead of vice versa. This was much easier to do, as overlay filesystems provide much more flexibility than snapshotting filesystems like BTRFS, ZFS, and devicemapper because they don’t have a strict parent/child relationship. This helped us build out a smaller interface for the snapshotters while still fulfilling the requirements needed by things like a builder, and it reduced the amount of code needed, making it much easier to maintain in the long run.

So what do you actually get using containerd?  You get push and pull functionality as well as image management.  You get container lifecycle APIs to create, execute, and manage containers and their tasks. An entire API dedicated to snapshot management.  Basically everything that you need to build a container platform without having to deal with the underlying OS details.  I think the most important part of containerd is having a versioned and stable API that will have bug fixes and security patches backported.
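If you’d like to poke at these APIs without writing any code, containerd ships with a small demo CLI called ctr. As a minimal sketch (assuming a running containerd daemon on its default socket):

$ ctr images pull docker.io/library/redis:alpine

$ ctr run -d docker.io/library/redis:alpine redis-demo

$ ctr tasks ls

ctr is intentionally bare-bones and meant for testing and debugging; platforms like Docker and Kubernetes consume the same functionality through the containerd client API.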





Demystifying the Open Container Initiative (OCI) Specifications
https://www.docker.com/blog/demystifying-open-container-initiative-oci-specifications/ (July 19, 2017)

The Open Container Initiative (OCI) announced the completion of the first versions of the container runtime and image specifications this week. The OCI is an effort under the auspices of the Linux Foundation to develop specifications and standards to support container solutions. A lot of effort has gone into building these specifications over the past two years. With that in mind, let’s take a look at some of the myths that have arisen along the way.


Myth: The OCI is a replacement for Docker

Standards are important, but they are far from a complete production platform. Take, for example, the World Wide Web. It has evolved over the last 25 years and was built on core dependable standards like TCP/IP, HTTP, and HTML. When enterprises coalesced around TCP/IP as a common protocol, it fueled the growth of routers, and of Cisco in particular. However, Cisco became a leader in its market by focusing on differentiated features on its routing platform. We believe the parallel exists with the OCI specifications and Docker.

Docker is a complete production platform for developing, distributing, securing and orchestrating container-based solutions. The OCI specification is used by Docker, but it represents only about five percent of our code and a small part of the Docker platform concerned with the runtime behavior of a container and the layout of a container image. 

Myth: Products and projects already are certified to the OCI specifications

The runtime and image specifications were just released as 1.0 this week. However, the OCI certification program is still in development, so companies cannot claim compliance, conformance, or compatibility until certification is formally rolled out later this year.

The OCI certification working group is currently defining the standard so that products and open source projects can demonstrate conformance to the specifications. Standards and specifications are important for engineers implementing solutions, but formal certification is the only way to reassure customers that the technology they are working with is truly conformant to the standard.

Myth: Docker doesn’t support the OCI specifications work

Docker has a long history with contributing to the OCI. We developed and donated a majority of the OCI code and have been instrumental in defining the OCI runtime and image specifications as maintainers of the project. When the Docker runtime and image format quickly became the de facto standards after being released as open source in 2013, we thought it would be beneficial to donate the code to a neutral governance body to avoid fragmentation and encourage innovation. The goal was to provide a dependable and standardized specification so Docker contributed runc, a simple container runtime, as the basis of the runtime specification work, and later contributed the Docker V2 image specification as the basis for the OCI image specification work.

Docker developers like Michael Crosby and Stephen Day have been key contributors from the beginning of this work, ensuring Docker’s experience hosting and running billions of container images carries through to the OCI. When the certification working group completes its work, Docker will bring its products through the OCI certification process to demonstrate OCI conformance.

Myth: The OCI specifications are about Linux containers 

There is a misperception that the OCI is only applicable to Linux container technologies because it is under the aegis of the Linux Foundation. The reality is that although Docker technology started in the Linux world, Docker has been collaborating with Microsoft to bring our container technology, platform, and tooling to the world of Windows Server. Additionally, the underlying technology that Docker has donated to the OCI is broadly applicable to multi-platform environments including Linux, Windows, and Solaris, and covers x86, ARM, and IBM zSeries.

Myth: Docker was just one of many contributors to the OCI

The OCI as an organization has a lot of supporting members representing the breadth of the container industry. That said, it has been a small but dedicated group of individual technologists that have contributed the time and technology to the efforts that have produced the initial specifications. Docker was a founding member of the OCI, contributing the initial code base that would form the basis of the runtime specification and later the reference implementation itself. Likewise, Docker contributed the Docker V2 Image specification to act as the basis of the OCI image specification.

Myth: CRI-O is an OCI project

CRI-O is an open source project in the Kubernetes incubator in the Cloud Native Computing Foundation (CNCF); it is not an OCI project. It is based on an earlier version of the Docker architecture, whereas containerd is a CNCF project in its own right: a larger container runtime that includes the runc reference implementation. containerd is responsible for image transfer and storage, container execution and supervision, and low-level functions to support storage and network attachments. Docker donated containerd to the CNCF with the support of the five largest cloud providers: Alibaba Cloud, AWS, Google Cloud Platform, IBM Softlayer, and Microsoft Azure, with a charter of being a core container runtime for multiple container platforms and orchestration systems.

Myth: The OCI specifications are now complete 

While the release of the runtime and image format specifications is an important milestone, there’s still work to be done. The initial scope of the OCI was to define a narrow specification on which developers could depend for the runtime behavior of a container, preventing fragmentation in the industry, and still allowing innovation in the evolving container domain. This was later expanded to include a container image specification.

As the working groups complete the first stable specifications for runtime behavior and image format, new work is under consideration. Ideas for future work include distribution and signing. The next most important work for the OCI, however, is delivering on a certification process backed by a test suite now that the first specifications are stable.





Docker 1.11: The first runtime built on containerd and based on OCI technology
https://www.docker.com/blog/docker-engine-1-11-runc/ (April 13, 2016)

We are excited to introduce Docker Engine 1.11, our first release built on runC™ and containerd™. With this release, Docker is the first to ship a runtime based on OCI technology, demonstrating the progress the team has made since donating our industry-standard container format and runtime to the Open Container Initiative, under the Linux Foundation, in June of 2015.

Over the last year, Docker has helped advance the work of the OCI to make it more readily available to more users. It started in December 2015, when we introduced containerd™, a daemon to control runC. This was part of our effort to break Docker out into small reusable components. With this release, Docker Engine is now built on containerd, so everyone who is using Docker is now using OCI. We’re proud of the progress the OCI’s 40+ members have made in continuing the work to standardize container technology.

Besides this mind-blowing piece of technological trivia that I’m sure will impress your friends at parties, what difference does it make to you? Well, the short answer is: nothing… yet! Nevertheless, let me convince you that this is still something you should be excited about.

This is among the biggest technical refactorings the Engine has gone through, and our priority for 1.11 was to get the integration right, without changing the command-line interface or API. However, this lays the technical groundwork for significant user-facing improvements.


Stability and performance

With the containerd integration comes an impressive cleanup of the Docker codebase and a number of historical bug fixes. In general, splitting Docker up into focused independent tools means more focused maintainers, and ultimately better-quality software.

Performance-wise, we were extremely careful to make sure 1.11 would not be any slower despite the extra inter-process communication. We’re pleased to say that it is faster at creating containers concurrently than its predecessor, and although we made a deliberate choice of favoring correctness first, you can expect more performance improvements in the future.

 

Creating an ecosystem for container execution backends

runC is the first implementation of the OCI runtime specification and the default executor bundled with Docker Engine. Thanks to the open specification, future versions of Engine will allow you to specify different executors, enabling an ecosystem of alternative execution backends without any changes to Docker itself. By separating out this piece, an ecosystem partner can build its own executor compliant with the specification and make it available to the user community at any time, without being dependent on the Engine release schedule or waiting to be reviewed and merged into the codebase.

What does this mean for you? This gives you choice: the runtime is now pluggable. Following the Docker principle of batteries included but swappable, Docker Engine will ship with runC as the default, with the ability to choose from a variety of container executors that target specific platforms or offer different security and performance features. By separating the thing that runs containers from the Engine, this opens up new possibilities. As an example, 1.11 is a huge step toward allowing Engine restarts/upgrades without restarting the containers, improving the availability of containers. This is probably one of the most requested features by Docker users. In fact, with this new architecture, you will even be able to restart containerd and your containers will keep running.
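For reference, this pluggability later became directly configurable. The sketch below uses flags from later Engine releases (not part of 1.11 itself), and the runtime name and binary path are hypothetical:

$ dockerd --add-runtime custom=/usr/local/bin/custom-runc

$ docker run --rm --runtime=custom alpine echo "hello from an alternative OCI runtime"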


 

But wait, there’s more!

In addition to this huge architectural change, as usual we have also added a bunch of features in Engine, Compose, Swarm, Machine, and Registry.

 

Engine 1.11

  • DNS round robin load balancing: It’s now possible to load balance between containers with Docker’s networking. If you give multiple containers the same alias, Docker’s service discovery will return the addresses of all of the containers for round-robin DNS. (See the sketch after this list.)
  • VLAN support (experimental): VLAN support has been added for Docker networks in the experimental channel, so you can integrate better with existing networking infrastructure.
  • IPv6 service discovery: Engine’s DNS-based service discovery system can now return AAAA records.
  • Yubikey hardware image signing: A few months ago we added the ability to sign images with hardware Yubikeys in the experimental channel of Docker. This is now available in the stable release. Read more about how it works in this blog post.
  • Labels on networks and volumes: You can now attach arbitrary key/value data to networks and volumes, in the same way you could with containers and images.
  • Better handling of low disk space with device mapper storage: The dm.min_free_space option has been added to make device mapper fail more gracefully when running out of disk space.
  • Consistent status field in docker inspect: This is a little thing, but really handy if you use the Docker API. docker inspect now has a Status field, a single consistent value to define a container’s state (running, stopped, restarting, etc). Read more in the pull request.
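Here’s a minimal sketch of the round-robin DNS behavior described in the list above (the network, alias, and image names are arbitrary):

$ docker network create app-net

$ docker run -d --net app-net --net-alias web nginx

$ docker run -d --net app-net --net-alias web nginx

$ docker run --rm --net app-net alpine nslookup web

The final lookup should return the addresses of both containers registered under the web alias.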

See the release notes for full details.

 

Compose 1.7

  • --build option for docker-compose up: This is a shorthand for the common pattern of running docker-compose build and then docker-compose up. Compose doesn’t build on every run by default in case your build is slow, but if you’ve got a quick build, running this every time will ensure your environment is always up to date.
  • docker-compose exec command: Mirroring the docker exec command.

See the release notes for full details.

 

Swarm 1.2

  • Container rescheduling no longer experimental: In the previous version of Swarm, we added support for rescheduling containers when a node dies. This is now considered stable, so you can safely use it in production. (See the example after this list.)
  • Better errors when containers can’t be scheduled: For example, when a constraint fails, the constraints will be printed out so you can easily see what went wrong.
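As a quick sketch of the rescheduling policy mentioned in the list above (Swarm standalone syntax; the image choice is arbitrary):

$ docker run -d -e "reschedule:on-node-failure" redis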

See the release notes for full details.

 

Machine 0.7

In this version of Machine, the Microsoft Azure driver now uses the new Azure APIs and is easier to authenticate. See Azure’s blog post for more details. There are also a bunch of other improvements in this release – take a look at the full release notes for details.

 

Registry 2.4

  • Garbage collection: A tool has been added so system administrators can clean up the data from images that have been deleted by users. For more details, see the garbage collector docs.
  • Faster and more stable S3 driver: The S3 storage driver is now implemented on top of the official Amazon S3 SDK, which has major performance and stability goodness.

See the release notes for full details.

 

Download and try out Docker 1.11

The easiest way to try out all of this stuff in development is to download Docker Toolbox. For other platforms, check out the installation instructions.

All of this stuff is also available in Docker for Mac and Windows, the new way to use Docker in development, currently in private beta. Sign up to get a chance to try it out early.





 
