How Docker Desktop Networking Works Under the Hood
https://www.docker.com/blog/how-docker-desktop-networking-works-under-the-hood/ (Tue, 25 Jan 2022)

Modern applications make extensive use of networks. At build time it’s common to apt-get/dnf/yum/apk install a package from a Linux distribution’s package repository. At runtime an application may wish to connect() to an internal postgres or mysql database to persist some state, while also calling listen() and accept() to expose APIs and UIs over TCP and UDP ports. Meanwhile, developers need to be able to work from anywhere: in an office, at home, on mobile networks or over a VPN. Docker Desktop is designed to ensure that networking “just works” for all of these use cases in all of these environments. This post describes the tools and techniques we use to make this happen, starting with everyone’s favorite protocol suite: TCP/IP.

TCP/IP

When containers want to connect to the outside world, they will use TCP/IP. Since Linux containers require a Linux kernel, Docker Desktop includes a helper Linux VM. Traffic from containers therefore originates from the Linux VM rather than the host, which causes a serious problem.

Many IT departments create VPN policies which say something like, “only forward traffic which originates from the host over the VPN”. The intention is to prevent the host accidentally acting as a router, forwarding insecure traffic from the Internet onto secure corporate networks. Therefore if the VPN software sees traffic from the Linux VM, it will not be routed via the VPN, preventing containers from accessing resources such as internal registries.

Docker Desktop avoids this problem by forwarding all traffic at user-level via vpnkit, a TCP/IP stack written in OCaml on top of the network protocol libraries of the MirageOS Unikernel project. The following diagram shows the flow of packets from the helper VM, through vpnkit and to the Internet:

[Diagram: packet flow from the helper VM, through vpnkit, to the Internet]

When the VM boots it requests an address using DHCP. The ethernet frame containing the request is transmitted from the VM to the host over shared memory, either through a virtio device on Mac or through a “hypervisor socket” (AF_VSOCK) on Windows. Vpnkit contains a virtual ethernet switch (mirage-vnetif) which forwards the request to the DHCP (mirage/charrua) server.

Once the VM receives the DHCP response containing the VM’s IP address and the IP of the gateway, it sends an ARP request to discover the ethernet address of the gateway (mirage/arp). Once it has received the ARP response it is ready to send a packet to the Internet.

When vpnkit sees an outgoing packet with a new destination IP address, it creates a virtual TCP/IP stack to represent the remote machine (mirage/mirage-tcpip). This stack acts as the peer of the one in Linux, accepting connections and exchanging packets. When a container calls connect() to establish a TCP connection, Linux sends a TCP packet with the SYNchronize flag set. Vpnkit observes the SYNchronize flag and calls connect() itself from the host. If the connect() succeeds, vpnkit replies to Linux with a TCP SYNchronize packet which completes the TCP handshake. In Linux the connect() succeeds and data is proxied in both directions (mirage/mirage-flow). If the connect() is rejected, vpnkit replies with a TCP RST (reset) packet which causes the connect() inside Linux to return an error. UDP and ICMP are handled similarly.
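The end-to-end effect is similar to what any user-level TCP relay does: accept the connection locally, then call connect() outward and copy bytes in both directions. Purely as an illustration of that accept-then-connect pattern (this is not how vpnkit is implemented, and the port and hostname below are arbitrary), you can reproduce the behaviour with socat:

# A user-level relay: accept connections on local port 8080, then open a
# fresh outbound connection to example.com:80 and copy bytes both ways
$ socat TCP-LISTEN:8080,fork,reuseaddr TCP:example.com:80 &
$ curl -H 'Host: example.com' http://127.0.0.1:8080/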

In addition to low-level TCP/IP, vpnkit has a number of built-in high-level network services, such as a DNS server (mirage/ocaml-dns) and HTTP proxy (mirage/cohttp). These services can be addressed directly via a virtual IP address / DNS name, or indirectly by matching on outgoing traffic and redirecting dynamically, depending on the configuration.

TCP/IP addresses are difficult to work with directly. The next section describes how Docker Desktop uses the Domain Name System (DNS) to give human-readable names to network services.

DNS

Inside Docker Desktop there are multiple DNS servers:

[Diagram: the DNS servers inside Docker Desktop]

DNS requests from containers are first processed by a server inside dockerd, which recognises the names of other containers on the same internal network. This allows containers to easily talk to each other without knowing their internal IP addresses. For example in the diagram there are 3 containers: “nginx”, “golang” and “postgres”, taken from the docker/awesome-compose example. Each time the application is started, the internal IP addresses might be different, but containers can still easily connect to each other by human-readable name thanks to the internal DNS server inside dockerd.
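A quick way to see this in action from the command line (the network and container names here are arbitrary):

$ docker network create appnet
$ docker run -d --name nginx --network appnet nginx
# "nginx" resolves via the embedded DNS server inside dockerd, not /etc/hosts
$ docker run --rm --network appnet alpine ping -c 1 nginx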

All other name lookups are sent to CoreDNS (from the CNCF). Requests are then forwarded to one of two different DNS servers on the host, depending on the domain name. The domain docker.internal is special and includes the DNS name host.docker.internal, which resolves to a valid IP address for the current host. Although we prefer everything to be fully containerized, sometimes it makes sense to run part of an application as a plain old host service. The special name host.docker.internal allows containers to contact these host services in a portable way, without worrying about hardcoded IP addresses.
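For example, if a service is running directly on the host, a container can reach it by name rather than by a hard-coded IP. The port, throwaway web server and curl image below are just an illustration:

# On the host: start a throwaway web server on port 8000
$ python3 -m http.server 8000
# From a container: reach the host service via the special DNS name
$ docker run --rm curlimages/curl http://host.docker.internal:8000/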

The second DNS server on the host handles all other requests by resolving them via standard OS system libraries. This ensures that, if a name resolves correctly in the developer’s web-browser, it will also resolve correctly in the developer’s containers. This is particularly important in sophisticated setups, such as pictured in the diagram where some requests are sent over a corporate VPN (e.g. internal.registry.mycompany) while other requests are sent to the regular Internet (e.g. docker.com).

Now that we’ve described DNS, let’s talk about HTTP.

HTTP(S) proxies

Some organizations block direct Internet access and require all traffic to be sent via HTTP proxies for filtering and logging. This affects pulling images during build as well as outgoing network traffic generated by containers.

The simplest method of using an HTTP proxy is to explicitly point the Docker engine at the proxy via environment variables. This has the disadvantage that if the proxy needs to be changed, the Docker engine process must be restarted to update the variables, causing a noticeable glitch. Docker Desktop avoids this by running a custom HTTP proxy inside vpnkit which forwards to the upstream proxy. When the upstream proxy changes, the internal proxy reconfigures itself dynamically, which avoids having to restart the Docker engine.
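For comparison, on a plain Linux host (not Docker Desktop) the environment-variable approach typically looks something like the following; the proxy address is just an example, and the daemon has to be restarted whenever it changes:

$ cat /etc/systemd/system/docker.service.d/http-proxy.conf
    [Service]
    Environment="HTTP_PROXY=http://proxy.example.com:3128"
    Environment="HTTPS_PROXY=http://proxy.example.com:3128"
    Environment="NO_PROXY=localhost,127.0.0.1"
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker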

On Mac, Docker Desktop monitors the proxy settings stored in System Preferences. When the computer switches network (e.g. between WiFi networks or onto cellular), Docker Desktop automatically updates the internal HTTP proxy so everything continues to work without the developer having to take any action.

This just about covers containers talking to each other and to the Internet. How do developers talk to the containers?

Port forwarding

When developing applications, it’s useful to be able to expose UIs and APIs on host ports, accessible by debug tools such as web-browsers. Since Docker Desktop runs Linux containers inside a Linux VM, there is a disconnect: the ports are open in the VM but the tools are running on the host. We need something to forward connections from the host into the VM.

[Diagram: forwarding connections from host ports into the VM]

Consider debugging a web-application: the developer types docker run -p 80:80 to request that the container’s port 80 is exposed on the host’s port 80 to make it accessible via http://localhost. The Docker API call is written to /var/run/docker.sock on the host as normal. When Docker Desktop is running Linux containers, the Docker engine (dockerd in the diagram above) is a Linux program running inside the helper Linux VM, not natively on the host. Therefore Docker Desktop includes a Docker API proxy which forwards requests from the host to the VM. For security and reliability, the requests are not forwarded directly over TCP over the network. Instead Docker Desktop forwards Unix domain socket connections over a secure low-level transport such as shared-memory hypervisor sockets via processes labeled vpnkit-bridge in the diagram above.

The Docker API proxy can do more than simply forward requests back and forth. It can also decode and transform requests and responses, to improve the developer’s experience. When a developer exposes a port with docker run -p 80:80, the Docker API proxy decodes the request and uses an internal API to request a port forward via the com.docker.backend process. If something on the host is already listening on that port, a human-readable error message is returned to the developer. If the port is free, the com.docker.backend process starts accepting connections and forwarding them to the container via the process vpnkit-forwarder, running on top of vpnkit-bridge.
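In practice this is the difference between a fast, clear failure and a silently broken port. A small example (nginx is just a convenient image to demonstrate with):

# If something on the host already owns port 80, the first command fails
# with a human-readable error instead of silently breaking
$ docker run -d -p 80:80 nginx
# Fall back to a free host port and check the page loads
$ docker run -d -p 8080:80 nginx
$ curl http://localhost:8080/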

Docker Desktop does not run as “root” or “Administrator” on the host. A developer can use docker run --privileged to become root inside the helper VM, but the hypervisor ensures the host remains completely protected at all times. This is great for security but it causes a usability problem on macOS: how can a developer expose port 80 (docker run -p 80:80) when this is considered a “privileged port” on Unix, i.e. a port number < 1024? The solution is that Docker Desktop includes a tiny privileged helper service which does run as root from launchd and which exposes a “please bind this port” API. This raises the question: “is it safe to allow a non-root user to bind privileged ports?”

The notion of a privileged port originally comes from a time when ports were used to authenticate services: it was safe to assume you were talking to the host’s HTTP daemon because it had bound to port 80, which requires root, so the admin must have arranged it. The modern way to authenticate a service is via TLS certificates and SSH fingerprints, so as long as system services have bound their ports before Docker Desktop has started – macOS arranges this by binding ports on boot with launchd – there can be no confusion or denial of service. Accordingly, modern macOS has made binding privileged ports on all IPs (0.0.0.0 or INADDR_ANY) an unprivileged operation. There is only one case where Docker Desktop still needs to use the privileged helper to bind ports: when a specific IP is requested (e.g. docker run -p 127.0.0.1:80:80), which still requires root on macOS.
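From the developer’s point of view both forms behave identically; the only difference is which code path Docker Desktop takes on macOS. A sketch, using nginx purely as an example image and different ports so the two commands don’t collide:

# Binding 0.0.0.0:80 is an unprivileged operation on modern macOS, so no helper is needed
$ docker run -d -p 80:80 nginx
# Binding a specific IP on a privileged port goes through the privileged helper service
$ docker run -d -p 127.0.0.1:443:80 nginx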

Summary

Applications need reliable network connections for lots of everyday activities: pulling Docker images, installing Linux packages, communicating with database backends, exposing APIs and UIs, and much more. Docker Desktop runs in many different network environments: in the office, at home and while traveling on unreliable Wi-Fi. Some machines have restrictive firewall policies installed. Other machines have sophisticated VPN configurations. For all these use cases in all these environments, Docker Desktop aims to “just work”, so the developer can focus on building and testing their application (rather than debugging ours!).

If building this kind of tooling sounds interesting, come and make Docker Desktop networking even better: we are hiring! See https://www.docker.com/career-openings

DockerCon 2022

Join us for DockerCon 2022 on Tuesday, May 10. DockerCon is a free, one-day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn how to go from code to cloud fast and how to solve your development challenges, DockerCon 2022 offers engaging live content to help you build, share and run your applications. Register today at https://www.docker.com/dockercon/

Docker Networking Design Philosophy
https://www.docker.com/blog/docker-networking-design-philosophy/ (Tue, 01 Mar 2016)

From the experimental networking in Docker 1.7 to the initial release in Docker 1.9, the reception from the community has been fantastic! First of all, we want to thank you for all the discussions, evaluations, PRs and filed issues. As the networking capabilities evolve with every release, we wanted to spend some time explaining the guiding principles behind the design.

Users First

Docker’s philosophy is to build tools that deliver a great user experience and seamless application portability across infrastructure. New features are continuously refined and iterated upon so that the end product delivers the best possible user experience. Networking follows the same philosophy, and we iterated several times to find the right abstractions for the user.

When it comes to networking, there are two kinds of users:

  • The application developer who wants to create and deploy a distributed application stack on the Docker platform
  • The network IT team who configures and manages the infrastructure

We wanted to give the right kind of tools to both these kinds of users so that they are empowered to easily accomplish their goals. You can read about some of the experiences from Docker community members like @arungupta, @allingeek and @Yoanis_Gil.

Docker’s primary focus is on the user, whether they are from the application team or IT operations. That also means supporting ecosystem partners that share the same architectural goals of user experience AND seamless application portability. With that in mind, it is our belief that all APIs and UIs must be exposed to end users; anything else would compromise those core values. Anyone in the ecosystem claiming to support or include Docker must maintain that user experience and portability, otherwise it simply isn’t Docker.

Users: Application developers

As much as application developers want their applications to communicate with each other, most don’t want to understand or get involved in the details of how exactly that is achieved. In fact, they don’t even want to know what IP addresses their applications are bound to. The application developer’s concern generally ends at the Layer 4 service ports that their applications are bound to.

One of the guiding principles in the Docker Networking design is to relieve the application developer from worrying about the network plumbing details. Our belief is that by hiding the gory details of how network connectivity and service discovery are achieved behind a simple API and UI, we enable application developers to develop their distributed application stack more freely. We should eliminate the connectivity/discoverability headaches and portability concerns that hinder an application developer from converting a monolithic application into a set of microservices. We want developers to be able to bind their microservices into a single distributed application using a few simple “network creation” and “network connection” commands.

The other guiding principle is to extend the same portable experience of Docker containers to networks. A Docker container created using an image works the same regardless of where it runs as long as the same image is used. Similarly, when the application developer defines their application stack as a set of distributed applications, it should work just the same whatever infrastructure it runs on. This heavily depends on what abstractions we expose to the application developer and more importantly what abstractions we do not expose to the application developer.

This is how the Docker “network” abstraction, the Container Network Model (CNM), was born. It gives the application developer the right reference point to think and reason about the connectivity and discoverability needs of their application services, without distracting them with the complexities of how exactly this is achieved. In some ways, the “network” abstraction is declarative: the user describes “what” kind of topology the application needs instead of “how” to physically build that topology.

[Diagram: the Container Network Model (CNM)]
For example, in a classic three-tier web application stack, the web server and the app server share one network, while the app server is also connected to a second network which contains the database server. The app developer should not have to bother with how that is implemented with physical networks, firewalls, etc. Decoupling the infrastructure from the application significantly increases the portability of the distributed application. It also gives developers more freedom in how exactly the application topology is defined.
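Expressed with those network-creation and connection commands, and using a placeholder image name for the app tier (my-app-image is not a real image), the three-tier topology above might be set up roughly like this:

$ docker network create frontend
$ docker network create backend
$ docker run -d --name web --net frontend nginx
$ docker run -d --name app --net frontend my-app-image
# The app tier joins the second network as well
$ docker network connect backend app
$ docker run -d --name db --net backend -e POSTGRES_PASSWORD=example postgres:9.4

The app container is now reachable from the web tier on the frontend network and can reach the database on the backend network, without either side knowing anything about subnets or firewalls.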

Users: Network IT

While the application developer wants to be free of infrastructure details and wants a portable application deployment experience, the network IT team wants to make sure all applications deployed in the infrastructure run smoothly and comply with the application’s SLAs and business rules. This means the ability to change network configurations and even providers without disrupting the application’s functionality and intent. Agile means speed for developers and a different kind of speed for network IT: the kind where they are able to respond quickly to changing needs and make adjustments without breaking something else.

The “how” part of the equation is accomplished with a “driver” abstraction. Given a definition of an abstract network topology, how that topology is achieved in concrete terms depends on the driver that is used. By defining the Docker Networking plugin API that every driver can easily conform to, one can simply deploy the exact same application-driven network topology in any infrastructure by replacing one driver with another.

The plugin API provides a hook to the driver when:

  • a network is created
  • a container is connected to a network
  • a container needs an IP address

These are the most essential hooks to achieve network connectivity for any application network topology. Docker provides the same network connectivity guarantees to the application regardless of the driver used. At the same time, the network IT team is free to choose any driver which facilitates the application topologies in their infrastructure.

There are some special kinds of drivers called “plugins”. All plugins are drivers. But plugins are not built into the Docker Engine binary. They are independent external programs (in most cases, they are docker containers themselves) that use the same driver API as the built-in drivers. So in essence one can swap out a built-in driver for an external plugin to achieve any network topology. This reflects Docker’s philosophy of “Batteries included but swappable”. Plugins are critical in supporting portability and choice for network IT.

When we first started thinking about enhancing Docker Networking, it was clear that plugins should be a first-class citizen. In any infrastructure, the connectivity and discoverability needs of applications are wide and varied. There is no single solution to that problem that will satisfy every user and application. Therefore plugins have been an important part of the Docker Networking design from the very beginning. We decided that the ability to “swap the battery” had to be available to users in the first version of the new Docker Networking, and that is exactly how we released it in Docker 1.9.

Plugin API Design

While the application network topology and the network abstraction are focused on the application developer, the driver/plugin configuration is focused on IT administrators. Network IT is more focused on the infrastructure and the related service levels on which the application is going to be deployed. However, they do want to ensure that they can meet the application’s network connectivity intent.

They want to ensure that:

  • the right solution to plumb the networking path is used
  • the right solution to manage networking resources is used
  • the right solution to discover application services is used
  • they can make separate and independent choices on all of the above

Providing the flexibility to network IT to pick and choose various solutions for the various elements of network configuration gives the best operations experience.

Instead of providing one all-encompassing plugin API/extension point, we segmented the plugin API into separate extension points corresponding to logical configuration groupings, such as the network driver (connectivity) and the IPAM driver (address management).

The design gets its inspiration from the golang interface philosophy, which advocates defining one “interface” per function to encourage composability. This is a powerful facility for network IT to compose different solutions for different needs.

Another aspect of the plugin API design is to make sure that Docker Networking remains the broker that resolves conflicts which can arise when a container joins multiple networks backed by different plugins. For example, two different drivers may want to plumb a static route with the same route prefix but a different next-hop IP. When this happens it is simply not possible for these drivers to independently decide whose route wins without sacrificing the user experience. Therefore, as part of the plugin API, libnetwork does not give drivers access to the container’s network namespace, since no individual driver can resolve these conflicts by itself. This is true for built-in drivers and plugins alike. Other plugin frameworks such as CNI do provide namespace access to their drivers, and hence they may have to deal with drivers stomping on each other inside the container namespace. When that happens, user experience and portability suffer.

Another reason for this plugin design is to provide granular network pluggability at various layers (such as IP address management, service discovery, load balancing, etc.), which lets the user choose the best driver for each feature instead of depending on an all-encompassing and opinionated network plugin. For example, consider a scenario where a network operator wants to use a specific IPAM solution (such as Infoblox) in combination with a different network plugin (such as Cisco’s Contiv). Because libnetwork manages the container’s network namespace, we can implement the necessary Docker UX and guarantee that such combinations of different plugins work together, giving the network operator the guarantees they need to take control of the network design.
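Concretely, this combination is chosen at network creation time. As a sketch, with made-up plugin names standing in for whichever network and IPAM plugins the operator has installed:

# "some-network-plugin" and "some-ipam-plugin" are placeholders for real plugins
$ docker network create -d some-network-plugin --ipam-driver some-ipam-plugin --subnet 10.10.0.0/16 prodnet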

Docker API and User Interaction

Docker Networking allows for a separation of concerns between these two kinds of users, so it was only natural to design two distinct sets of commands in the Docker UI. The UI and API are designed so that network IT can configure the infrastructure with as little coordination with the application developers as possible, avoiding a lock-step workflow between the application developers and the network IT team.

For example, if the application developer requests that network IT just create networks with certain names, then the network IT can go independently and start creating those networks applying configuration options that are appropriate for the given infrastructure. At the same time, the application developer can work on composing the application assuming that those networks with the referenced names will be available to achieve the network topology that the application needs.

With that in mind, the split in the UI and API roughly looks as follows:

  • Network IT can create, administer and precisely control which network driver and IPAM driver combination is used for a network. They can also specify network-specific configuration such as subnets, gateways and IP ranges, and pass on driver-specific configuration, if any.
  • The other configuration connects containers to the created networks. This side has an application-developer focus, since their concern is mainly connectivity and discoverability (a short example follows this list).
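A minimal sketch of that division of labour, with placeholder names throughout:

# Network IT: create the network with infrastructure-specific options
$ docker network create -d bridge --subnet 192.168.100.0/24 --gateway 192.168.100.1 appnet
# Application developer: simply refer to the network by name
$ docker run -d --name api --net appnet my-api-image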

A typical way for an application developer to compose the application is a “Docker Compose” file, where one can specify all the services that are part of the application and how they connect to each other in an application-defined topology, referencing networks which may be pre-provisioned by network IT.

In fact, the developer builds the application using a Docker Compose file, which inherently defines the application topology. The exact same Compose file can then be used to deploy the application in any infrastructure, and the network IT team can pre-provision the network referenced in the Compose file based on the infrastructure requirements. The key point is that the application developer does not need to revisit the Compose file every time the application is deployed to a different environment.

This separation of concern enables a workflow where developers and network IT can work independently in provisioning networks and deploying applications, using different plugins if needed.

As an example, let’s consider the following Compose v2 application:

$ cat docker-compose.yml
    version: "2"
    services:
      voting-app:
        image: docker/example-voting-app-voting-app
        ports:
          - "80"
        networks:
          - votenet
      result-app:
        image: docker/example-voting-app-result-app
        ports:
          - "80"
        networks:
          - votenet
      worker:
        image: docker/example-voting-app-worker
        networks:
          - votenet
      redis:
        image: redis
        networks:
          - votenet
      db:
        image: postgres:9.4
        volumes:
          - "db-data:/var/lib/postgresql/data"
        networks:
          - votenet
    volumes:
      db-data:
    networks:
      votenet:

By default, Compose (with the v2 file format) will create a Docker network for this application using the default driver. When run against Docker Engine, the default driver is the bridge driver. Hence, when the application is launched, we can see that the network is created using the default driver.

$ docker-compose up -d
    Creating network "voteapp_votenet" with the default driver
    Starting db
    Starting redis
    Starting voteapp_worker_1
    Starting voteapp_voting-app_1
    Starting voteapp_result-app_1

The application works just fine on a single host, and the application developer can get their work done without having to deal with any network-specific configuration.

Looking into the details:

$ docker network inspect voteapp_votenet
    [
        {
            "Name": "voteapp_votenet",
            "Id": "7be1879036b217c072c824157e82403081ec60edfc4f34599674444ba01f0c57",
            "Scope": "local",
            "Driver": "bridge",
            "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": [
                    {
                        "Subnet": "172.19.0.0/16",
                        "Gateway": "172.19.0.1/16"
                    }
                ]
            },
            ...
        }
    ]

Let’s bring down the application.

$ docker-compose down

Now, let us assume that the application is handed over to the operations team to be deployed in staging. Network IT can manage the networks by pre-provisioning them ahead of time using the docker network commands. For example:

$ docker network create -d overlay --subnet=70.28.0.0/16 --gateway=70.28.5.254 voteapp_votenet
    6d215748f300a0eda3878e76fe99e717c8ef85a87de0779e379c92af5d615b88

Alternatively, network IT can take control of the network configuration of the above docker-compose application by adding another Compose file, docker-compose.override.yml, using Compose’s override feature, without having to change anything in the application itself.

$ cat docker-compose.override.yml
    version: "2"
    networks:
      votenet:
        driver: overlay
        ipam:
          config:
            - subnet: 70.28.0.0/16
              gateway: 70.28.5.254

In this example, the network driver used in staging is “overlay”, which provides multi-host network connectivity and the network IT team can use preferred IPAM settings for this network.

$ docker-compose up -d
    Creating network "voteapp_votenet" with driver "overlay"
    Starting voteapp_worker_1
    Starting redis
    Starting db
    Starting voteapp_voting-app_1
    Starting voteapp_result-app_1

When the same application is run this time, the network is created with the “overlay” driver, which provides multi-host network connectivity. Digging deeper with the docker network inspect command, we can also see the configured IPAM settings being used, and all the containers in this network will get IP addresses from this subnet.

$ docker network inspect voteapp_votenet
    [
        {
            "Name": "voteapp_votenet",
            "Id": "b510c0affb2289548a07af7cc7e3f778987fc43812ac0603c5d01b7acf6c12be",
            "Scope": "global",
            "Driver": "overlay",
            "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": [
                    {
                        "Subnet": "70.28.0.0/16",
                        "Gateway": "70.28.5.254"
                    }
                ]
            },
            ...
        }
    ]

When this Compose application is running on Docker Swarm, the containers are scheduled across the hosts, while the overlay driver provides seamless connectivity between the containers. All of this is made possible by the Docker Networking design principles explained in this blog post.




Application is Still the King

Even with all these operations-focused configuration knobs, the application still remains the king. As stated in the beginning, we wanted to hide as many networking artifacts as possible, and the last thing that needed hiding is IP addresses themselves. IP addresses expose something about the underlying infrastructure, and this reduces the portability experience for the application. If applications can discover each other using names that they knew at compile time, the portability story is complete. This is exactly what we achieved by providing implicit container discovery using an embedded DNS server. The container “linking” and “aliasing” capabilities ensure that containers can discover their peers by the names they knew at compile time.
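For example, with a user-defined network, the embedded DNS server and a network-scoped alias (all names below are made up), a container can reach its peer by the name the application was written against:

$ docker network create votenet
$ docker run -d --name cache --net votenet --net-alias redis-primary redis
# Any other container on votenet can resolve both "cache" and "redis-primary"
$ docker run --rm --net votenet alpine ping -c 1 redis-primary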

If you want to give Docker Networking a spin for yourself, check out the Docker Networking documentation.



Docker 1.10: New Compose file, improved security, networking and much more!
https://www.docker.com/blog/docker-1-10/ (Thu, 04 Feb 2016)

We’re pleased to announce Docker 1.10, jam-packed with stuff you’ve been asking for.

It’s now much easier to define and run complex distributed apps with Docker Compose. The power that Compose brought to orchestrating containers is now available for setting up networks and volumes. On your development machine, you can set up your app with multiple network tiers and complex storage configurations, replicating how you might set it up in production. You can then take that same configuration from development, and use it to run your app on CI, on staging, and right through into production. Check out the blog post about the new Compose file to find out more.

As usual, we’ve got a load of security updates in this release. All the big features you’ve been asking for are now available to use: user namespacing for isolating system users, seccomp profiles for filtering syscalls, and an authorization plugin system for restricting access to Engine features.

Another big security enhancement is that image IDs now represent the content that is inside an image, in a similar way to how Git commits represent the content inside commits. This means you can guarantee that the content you’re running is what you expect by just specifying that image’s ID. When upgrading to Engine 1.10, there is a migration process that could take a long time, so take a read of the documentation if you want to prevent downtime.
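For example, once you know an image’s ID you can refer to exactly that content. The hash below is a placeholder rather than a real ID:

$ docker images --no-trunc --format '{{.Repository}} {{.ID}}'
    redis sha256:<64-hex-character-content-hash>
# Run exactly that content by ID (a unique prefix of the hash is enough)
$ docker run -d <hex-id-prefix>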

Networking gets even better

We added a new networking system in the previous version of Docker Engine. It allowed you to create virtual networks and attach containers to them so you could create the network topology that was best for your application. In addition to the support in Compose, we’ve added some other top requested features:

  • Use links in networks: Links work in the default bridge network as they have always done, but you couldn’t use them in networks that you created yourself. We’ve now added support for this so you can define the relationships between your containers and alias a hostname to a different name inside a specific container (e.g. --link db:production_postgres)
  • Network-wide container aliases: Links let you alias a hostname for a specific container, but you can now also make a container accessible by multiple hostnames across an entire network.
  • Internal networks: Pass the --internal flag to network create to restrict traffic in and out of the network.
  • Custom IP addresses: You can now give a container a custom IP address when running it or adding it to a network.
  • DNS server for name resolution: Hostname lookups are done with a DNS server rather than /etc/hosts, making it much more reliable and scalable. Read the feature proposal and discussion.
  • Multi-host networking on all supported Engine kernel versions: The multi-host overlay driver now works on older kernel versions (3.10 and greater).
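A few of these features in action; the network and container names below are made up:

# An internal network: traffic is not routed in or out of it
$ docker network create --internal secure_backend
# A user-defined network with a fixed subnet, so containers can be given custom IPs
$ docker network create --subnet 172.25.0.0/16 appnet
$ docker run -d --name db --net appnet --ip 172.25.0.10 redis
# Links (and aliases) now also work inside user-defined networks
$ docker run -d --name web --net appnet --link db:production_redis nginx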

Engine 1.10

Apart from the new security and networking features, we’ve got a whole load of new stuff in Engine:

  • Content addressable image IDs: Image IDs now represent the content that is inside an image, in a similar way to how Git commit hashes represent the content inside commits. This means you can guarantee that the content you’re running is what you expect by just specifying that image’s ID. This is an improvement upon the image digests in Engine 1.6. There is a migration process for your existing images which might take a long time, so take a read of the documentation if you want to prevent downtime.
  • Better event stream: The docker events command and events API endpoint have been improved and cleaned up. Events are now consistently structured with a resource type and the action being performed against that resource, and events have been added for actions against volumes and networks. Full details are in the docs.
  • Improved push/pull performance and reliability: Layers are now pushed in parallel, resulting in much faster pushes (as much as 3x faster). Pulls are a bit faster and more reliable too – with a streamlined protocol and better retry and fallback mechanisms.
  • Live update container resource constraints: When setting limits on what resources containers can use (e.g. memory usage), you had to restart the container to change them. You can now update these resource constraints on the fly with the new docker update command.
  • Daemon configuration file: It’s now possible to configure daemon options in a file and reload some of them without restarting the daemon so, for example, you can set new daemon labels and enable debug logging without restarting anything.
  • Temporary filesystems: It’s now really easy to create temporary filesystems by passing the --tmpfs flag to docker run. This is particularly useful for running a container with a read-only root filesystem when the piece of software inside the container expects to be able to write to certain locations on disk.
  • Constraints on disk I/O: Various options for setting constraints on disk I/O have been added to docker run: --device-read-bps, --device-write-bps, --device-read-iops, --device-write-iops, and --blkio-weight-device.
  • Splunk logging driver: Ship container logs straight to the Splunk logging service.
  • Start linked containers in correct order when restarting daemon: This is a little thing, but if you’ve run into it you’ll know what a headache it is. If you restarted a daemon with linked containers, they sometimes failed to start up if the linked containers weren’t running yet. Engine will now attempt to start up containers in the correct order.

Check out the release notes for the full list. There are a few features being deprecated in this release, and we’re ending support for Fedora 21 and Ubuntu 15.04, so be sure to check the release notes in case you’re affected by this. If you have written a volume plugin, there’s also a change in the volume plugin API that you need to be aware of.

Big thanks to all of the people who made this release happen – in particular to Qiang Huang, Denis Gladkikh, Dima Stopel, and Liron Levin.

The easiest way to try out Docker in development is by installing Docker Toolbox. For other platforms, check out the installation instructions in the documentation.

Swarm 1.1

Docker Swarm is native clustering for Docker. It makes it really easy to manage and deploy to a cluster of Engines. Swarm is also the clustering and scheduling foundation for the Docker Universal Control Plane, an on-premises tool for deploying and managing Docker applications and clusters.

Back in November we announced the first production-ready version of Swarm, version 1.0. This release is an incremental improvement, especially adding a few things you’ve been asking us for:

  • Reschedule containers when a node fails: If a node fails, Swarm can now optionally attempt to reschedule its containers on a healthy node to keep them running. This is an experimental feature, so don’t expect it to work perfectly, but please do give it a try!
  • Better node management: If Swarm fails to connect to a node, it will now retry instead of just giving up. It will also display the status of this and any error messages in docker info, making it much easier to debug. Take a look at the feature proposal for full details.

Check out the release notes for the full list and the documentation for how to get started.

And save the date for Swarm Week – Coming Soon!

If you are new to Swarm, or are already familiar with it and want to know more, Swarm Week is the place to get ALL your Swarm information. We will feature a different Swarm-related topic each day.

Sign up here to be notified of #SwarmWeek!

Machine 0.6

Machine is at the heart of Docker Toolbox, and a big focus of Machine 0.6 has been making it much more reliable when you’re using it with VirtualBox and running it on Windows. This should make the experience of using Toolbox much better.

There have also been a couple of new features for Machine power users:

  • No need to type “default”: Commands will now perform actions against the “default” VM if you don’t specify a name.
  • New provision command: This is useful for re-running the provisioning on hosts where it failed or the configuration has drifted.

For full details, check out the release notes. The easiest way to install Machine is by installing Docker Toolbox. Other installation methods are detailed in the documentation.

Registry 2.3

In Registry 2.3, we’ve got a bunch of improvements to performance and security. It has support for the new manifest format, and makes it possible for layers to be shared between different images, improving the performance of push for layers that already exist on your Registry.

Check out the full release notes here and see the documentation for how to install or upgrade.

Watch the video overview of the new features in Docker 1.10.