Ben De St Paer-Gotch – Docker https://www.docker.com

The Magic Behind the Scenes of Docker Desktop https://www.docker.com/blog/the-magic-behind-the-scenes-of-docker-desktop/ Thu, 09 Sep 2021 16:28:12 +0000 https://www.docker.com/blog/?p=28670 With all the changes recently, quite a few people have been talking about Docker Desktop and trying to understand what it actually does on your machine. A few people have asked, “Is it just a container UI?”

Great developer tools are magic for new developers and save experienced developers a ton of time. This is what we set out to do with Docker Desktop. Docker Desktop is designed to let you build, share and run containers as easily on Mac and Windows as you do on Linux. Docker handles the tedious and complex setup so you can focus on writing code. 

Some of the magic Docker Desktop takes care of for developers includes:

  • A secure, optimized Linux VM that runs Linux tools and containers 
  • Seamless plumbing into the host OS giving containers access to the filesystem and networking 
  • Bundled container tools including Kubernetes, Docker Compose, buildkit, scanning 
  • Docker Dashboard for visually managing all your container content 
  • A simple one click installer for Mac and Windows 
  • Preconfigured sane and secure defaults
  • Automatic incremental updates to keep your system running securely

Let’s dive into some of these in more detail!


Start with a single package 

Starting from the top, Docker Desktop comes as a single package for Mac or Windows. By this we mean a single installer which, in one click, sets up everything you need to use Docker in seconds.

But what is it that Docker Desktop is installing when it does this?

Built securely and maintained by Docker

At the heart of Docker Desktop we have a lightweight LinuxKit VM that Docker manages for you. 

This means we can take care of tricky issues with annoying customer impacts for you, as with previous work on Docker Desktop for Mac. As well as setting up this VM, Docker Desktop keeps the VM up to date over time by applying kernel patches and other security fixes as required. This gives you the peace of mind that you don’t have another machine image to manage in your estate; Docker looks after it for you. This VM is where all of the Linux tools we include run, and it is in turn where all of your Linux containers run when you are using Docker Engine.

On Windows we run this VM under WSL 2, and in doing so we are able to give all of your WSL 2 distros access to Docker simply by toggling them on in the UI. If you want to learn more about the WSL 2 backend, check out Introducing the Docker Desktop WSL 2 Backend. On Mac (on Intel and M1 machines) we are currently transitioning away from our previous HyperKit implementation to Apple’s new Virtualization framework to run this VM.

Docker Desktop also provides you with a graphical interface to manage the settings for this VM: on Mac we provide the tools to change the resources it has access to (CPU, RAM, etc.), and on Windows we provide the tools to choose which distros can access it. Being in a VM also means we can limit which areas of the file system on your host machine can be accessed by the containers running in the VM. This is great for security, as it means you know exactly which files anything you run in containers could possibly have access to, and you can keep this locked down.

Integrating with the host machine 

Just having a VM doesn’t make this magic. As most of you who have used Docker Desktop will have noticed, you don’t need to “go into a VM” to use Docker; it just works as if it were running natively on your local machine. This is achieved through networking and file system integrations into the VM that make it feel like a seamless piece of your local machine.

With networking, Docker Desktop maps your local host ports to those in the VM, meaning you can run a container on, say, port 80 in the VM and access it from the browser on your local host to see what you are running. Along with this it uses VPNKit to keep networking seamless, as if each container were running as a native app on the host, even when your IT department has configured a complicated VPN policy or requires the use of network proxies. Docker Desktop automates all of this and provides you a simple UI to make changes as you need.
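For example (a minimal sketch, not from the original post), publishing a container port makes it reachable from the host even though the container runs inside the VM:

# run nginx and publish port 80 in the VM to port 80 on the host
docker run -d --name web -p 80:80 nginx
# the container answers on localhost thanks to the port mapping
curl http://localhost:80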

Along with networking we also have the file system integration: Docker Desktop sets up bind mounts from your host to the VM, giving you access to your local files (as you choose!) inside the VM. Filesystem change notifications (fsnotify/inotify) work transparently, automatically triggering a page reload when source code changes. The integration also allows routing back from the container to the host, letting Docker containers access local services running on the host. If you want to learn more about the file sharing implementation on Mac, check out Dave’s deep dive blog post, Deep Dive Into the New Docker Desktop filesharing Implementation Using FUSE.
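As a quick illustration (again a sketch, not from the original post), bind mounting your working directory makes your local files visible inside a container running in the VM:

# mount the current host directory into the container at /src and list it
docker run --rm -v "$(pwd)":/src -w /src alpine ls -la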

The best container tools included 

All of this integration into the VM is great, but without the contents of the VM it wouldn’t get you very far. This is why we install, and keep up to date, the best Linux container tooling for you inside the VM.

What most people think of as the ‘Docker’ experience is now a lot more than just the Docker Engine; it is a setup of multiple tools that together produce a seamless environment for developers to work with their containers. The heart of this is still the Docker Engine, an OCI-compatible container runtime included as part of Docker Desktop. Docker Desktop also bundles the Docker CLI to provide access to it, and includes Docker Compose 2.0 as well, allowing developers to work with their favorite multi-container manifest format locally.

Docker Desktop also includes BuildKit and buildx as part of the Docker CLI, giving developers access to faster builds and the ability to build for x86 or Arm from any local machine. Along with this, Docker Desktop includes tools for scanning your images for vulnerabilities (docker scan), for working with your content and teams on Docker Hub (hub-tool), and the ability to connect and deploy to AWS ECS and Microsoft Azure ACI straight from the CLI (docker context).
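A hedged sketch of what this looks like in practice (image names are placeholders):

# create and use a builder capable of multi-architecture builds
docker buildx create --use
# build and push an image for both x86 and Arm
docker buildx build --platform linux/amd64,linux/arm64 -t <user>/<app>:latest --push .
# scan a local image for known vulnerabilities
docker scan <user>/<app>:latest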

These aren’t the only Linux container tools in Docker Desktop. We appreciate that there is a great community of tools out there, and we are continuing to review which of these we should also include as part of the developer experience. The first of these was support for Kubernetes (K8s) in Docker Desktop: in one click you can install and set up K8s, with a load balancer, ready to use with your local image store to run clusters.
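Once Kubernetes is enabled in the Docker Desktop settings, checking the bundled cluster looks roughly like this (assuming kubectl is on your PATH):

# Docker Desktop creates a context named docker-desktop for the bundled cluster
kubectl config use-context docker-desktop
kubectl get nodes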

Graphical controls 

All of these core components of Docker Desktop come with a simple graphical interface to help you control and manage their settings. Nestled in the menu bar on Mac and the system tray on Windows you will find the Docker Desktop whale icon, which lets you jump into settings, control core actions and open the Docker Dashboard.

The Docker Dashboard provides you with a simplified UI to manage your core Docker components on Docker Desktop. The Docker Dashboard now supports the management of Docker images locally and in Docker Hub, management of local running containers and the ability to manage and explore your Docker volumes. 

Portable developer tooling

Docker Desktop also includes new features like Dev Environments. With Dev Environments developers can now easily set up repeatable and reproducible development environments by keeping the environment details versioned in their SCM along with their code. Once a developer is working in a Dev Environment, they can share their work-in-progress code and dependencies in one click via Docker Hub. They can then switch between their developer environments or their teammates’ environments, moving between branches to look at work-in-progress changes without moving off their current Git branch. This makes reviewing PRs as simple as opening a new environment.

Multi-architecture support

Along with all of these tools, Docker Desktop also supports you in using them whatever system architecture you choose. With support for Apple M1 ARM Mac and QEMU included in Docker Desktop, you are able to build and use multi-architecture images (Linux x86, ARM, Windows) on whatever platform you are working on out of the box. 

As with all of these components, Docker’s updates keep them in sync, working together and secure, with the latest fixes applied automatically for you. This also keeps your team in sync, working with the same tools.

And with a Docker subscription, if you have issues getting any of these items to work successfully for your team, you get support to unblock you to keep all of your developers productive. 

Get started

To get started, download Docker Desktop for Mac or Windows. To learn more about using Docker for your developer workflows, check out our documentation on Orientation and setup | Docker Documentation. We are continuing to build new features for all Desktop users and are keen to hear what you need, so let us know on our roadmap.

Finally, we will be showing off some of the next generation of innovation across Docker, including some new features and sneak previews for Docker Desktop, at our September Community All Hands meeting. The free event takes place Thursday, September 16th from 8 AM – 11 AM Pacific time; register today here.

Level Up Security with Scoped Access Tokens https://www.docker.com/blog/level-up-security-with-scoped-access-tokens/ Tue, 20 Jul 2021 17:36:59 +0000 https://www.docker.com/blog/?p=28512 Scoped tokens are here 💪!

Scopes give you more fine grained control over what access your tokens have to your content and other public content on Docker Hub! 

It’s been a while since we first introduced tokens into Docker Hub (back in 2019!) and we are now excited to say that we have added the ability for accounts on a Pro or Team plan to apply scopes to their Personal Access Tokens (PATs) as a way to authenticate with Docker Hub. 


Access tokens can be used as a substitute for your password on Docker Hub, and adding scopes to these tokens gives you more fine-grained control over what access the machine logged in with them has. This is great for setting up things like service accounts in CI systems, registry mirrors, or even your local machine, to make sure you are not giving too much access away.

PATs are an alternative to using passwords for authentication to Docker Hub when using the Docker command line:

docker login --username <username>

When prompted for your password, you can simply provide a token instead. The other advantages of tokens are that you can create and manage multiple tokens at once, see when they were last used and, if things look wrong, revoke a token’s access. This, together with our API support, makes it easy to manage the rotation of your tokens to help improve the security of your supply chain.
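For example, in a CI job you can log in non-interactively by piping the token in (DOCKER_PAT is just an illustrative variable name):

# read the personal access token from stdin instead of typing a password
echo "$DOCKER_PAT" | docker login --username <username> --password-stdin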

Create and Manage Personal Access Tokens in Docker Hub 

Personal access tokens are created and managed in your Account Settings.


Then head to security: 


From here, you can:

  • Create new access tokens
  • Modify existing tokens
  • Delete access tokens

The other way you can manage your tokens is through the Hub APIs. We have Swagger docs for our APIs and the new docs for scoped tokens can be found here:

https://docs.docker.com/docker-hub/api/latest/#tag/access-tokens
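As a rough sketch of driving this from the API (check the Swagger docs above for the exact request and scope names; the calls below are an assumption based on the documented endpoints):

# obtain a JWT for the Hub API
curl -s -H "Content-Type: application/json" \
  -d '{"username": "<username>", "password": "<password>"}' \
  https://hub.docker.com/v2/users/login
# create a new read-only scoped token using that JWT
curl -s -X POST -H "Authorization: Bearer <jwt>" -H "Content-Type: application/json" \
  -d '{"token_label": "ci-read-only", "scopes": ["repo:read"]}' \
  https://hub.docker.com/v2/access-tokens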

Scopes available 

When you are creating a token, Pro and Team plan members now have access to 4 scopes:
Read, write, delete: This scope allows the token to read, write and delete all of the repos that you have access to. (It does not allow you to modify account settings, as password authentication would.)

Read, write: This scope is for read/write within repos you have access to (all the public content on Hub and your private content). This is the sort of scope to use within a CI system that is also pushing to a repo.

Read only: This scope is read-only for all repos you have access to. This is great in production, where the system only needs to pull content from your repos to run it.

Public repo read only: This scope is for reading public content only, so nothing from your or your team’s repos. This is great when you want to set up a system which is just pulling, say, Docker Official Images or trusted content from Docker Hub.

These scopes are for Pro accounts (which get 5 tokens) and Team accounts (which give each team member unlimited tokens). Free users can continue to use their single read, write, delete token and revoke/reissue this as they need. 

Scoped access tokens level up the security of Docker users’ supply chains by changing how you can authenticate with Docker Hub. Available for Pro and Team plans, we are excited for you to try scoped tokens out and start giving us feedback.

Want to learn more about Docker Scoped Tokens? Make sure to follow us on Twitter: @Docker. We’ll be hosting a live Twitter Spaces event on Thursday, Jul 22, 2021 from 8:30 – 9:00 am PST, where you’ll hear from Docker engineers, product managers and a Docker Captain!

If you have feedback or other ideas, remember to add them to our public roadmap. We are always interested in what you would like us to build next!

Tech Preview: Docker Dev Environments https://www.docker.com/blog/tech-preview-docker-dev-environments/ Wed, 23 Jun 2021 16:37:25 +0000 https://www.docker.com/blog/?p=28439 A couple of weeks ago at DockerCon we showed off a new feature that we are building – Docker Dev Environments. Today we are excited to announce the release of the Technical Preview of Dev Environments as part of Docker Desktop 3.5.

At Docker we have been looking at how teams collaborate on projects using Git. We know that Git is a powerful tool for version control of source code, but it doesn’t solve a lot of the challenges that exist when developers try to collaborate. Developers still suffer from ‘it works on my machine’ when they try to work together on changes, because dependencies can differ between machines. Developers may also need to move between Git branches to look at each other’s changes and often don’t bother, simply reading code in the browser rather than running it. This means they lack the context and the tools needed to really validate that the code is good, and this collaboration happens right at the end of the creation process.


To address this, we are excited to release our preview of Dev Environments. With Dev Environments developers can now easily set up repeatable and reproducible development environments by keeping the environment details versioned in their SCM along with their code. Once a developer is working in a Dev Environment, they can share their work-in-progress code and dependencies in one click via Docker Hub. They can then switch between their developer environments or their teammates’ environments, moving between branches to look at work-in-progress changes without moving off their current Git branch. This makes reviewing PRs as simple as opening a new environment. Dev Environments use tools built into code editors that allow Docker to access code mounted into a container rather than on the developer’s local host. This isolates the tools, files and running services on the developer’s machine, allowing multiple versions of them to exist side by side, and also improves file system performance! And we have built this experience on top of Compose rather than adding another manifest for developers to worry about or look after.

With this preview we provide you with the ability to get started with a Dev Environment locally, either by using our one-click creation process or by providing a Compose file as part of a .docker folder. This allows you to run a Dev Environment on your local machine and gives you access to your Git credentials inside it. With Compose you will be able to use the other services related to your Dev Environment, allowing you to develop in the full context of the application. We have also built the first part of the sharing experience for team members, allowing you to share a Dev Environment with your team so they can see your code changes and dependencies together, in just one click.
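As a very rough sketch of the idea (the file name and services are assumptions for illustration; check the Dev Environments preview docs for the exact layout expected), you keep a Compose file under a .docker folder next to your code and version it with the rest of the repo:

mkdir -p .docker
cat > .docker/docker-compose.yaml <<'EOF'
services:
  app:
    build: .
    ports:
      - "8080:8080"
  db:
    image: postgres:13
EOF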

There are some areas of this first release that we are going to be improving in the coming weeks as we build the experience out to make it even easier to use.

When it comes to working with your team, we will be improving this to make it easier to send someone your work-in-progress changes. Rather than having to create a unique name for your changes each time, we will let you share them with one click, keeping everything synced automatically via Docker Hub for your team. This means your team can see your shared Dev Environment in their UI as soon as you share it. They will also be able to swap out the existing services in their Compose stacks for the one you have shared, moving seamlessly between them.

We know that developers love Compose and that we can leverage Compose features to make it easier to set up your Dev Environments (things like profiles, setting a GOPATH, defining debug ports, supporting mounts, etc.). We will be extending what we have in Compose over the coming weeks; if there are particular features you think we should support, please let us know!

We will also be looking at other areas of the experience like support for other IDEs, new creation flows and better ways to set up new Dev Environments. 

Lastly, we will be looking at all the feedback you as a community give us on other areas we need to improve! If you have feedback on these items, or there are other areas you think we should focus on ready for our GA release, please let us know via our feedback repo.

We are really excited about the preview of Dev Environments! If you want to check them out simply download or upgrade Docker Desktop 3.5 and check out the new preview tab. To get started sharing Dev Environments with your team and moving your feedback process back into development rather than at the time of review, upgrade to one of Docker’s team plans today.

Advanced Image Management in Docker Hub https://www.docker.com/blog/advanced-image-management-in-docker-hub/ Tue, 23 Mar 2021 16:19:32 +0000 https://www.docker.com/blog/?p=27772 We are excited to announce the latest feature for Docker Pro and Team users, our new Advanced Image Management Dashboard, available on Docker Hub. The new dashboard gives developers a new level of access to all of the content stored in Docker Hub, with more fine-grained control over removing old content and exploring old versions of pushed images.


Historically, Docker Hub has given you visibility into the latest version of a tag that you have pushed, but what has been very hard to see, or even understand, is what happened to all of the old versions you pushed. When you push an image to Docker Hub you are pushing a manifest, a list of all of the layers of your image, and the layers themselves.

When you update an existing tag, only the new layers are pushed, along with the new manifest which references these layers. This new manifest is given the tag you specify when you push, such as bengotch/simplewhale:latest. But this does not mean that all of those old manifests, which point at the previous layers that made up your image, are removed from Hub. They are still there; there is just no easy way to see them or to manage that content. You can in fact still use and reference them using the digest of the manifest if you know it. You can think of this a bit like your commit history (the old digests) on a particular branch (your tag) of your repo (your image repo!).
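For example, you can still pull one of those old, now-untagged versions directly by digest (the digest itself is a placeholder here):

# show the digests you have locally for a repository
docker images --digests bengotch/simplewhale
# pull a specific old version by its manifest digest rather than by tag
docker pull bengotch/simplewhale@sha256:<digest>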


This means you can have hundreds of old versions of images which your systems may still be pulling by digest rather than by tag, and you may be unaware which old versions are still in use. On top of this, until now the only way to remove these old versions was to delete the entire repo and start again!

With the release of the image management dashboard, we have provided a new GUI that makes all of this information available to you, including whether those currently untagged old manifests are still ‘active’ (have been pulled in the last month) or whether they are inactive. Combined with the new bulk delete for these objects and for current tags, this gives you a more powerful tool for batch managing your content in Docker Hub.

To get started you will find a new banner on your repos page if you have inactive images:

This will tell you how many images you have, tagged or old, which have not been pushed or pulled in the last month. By clicking View you can go through to the new Advanced Image Management Dashboard to check out all your content; from here you can see what the tags of certain manifests used to be and use the multi-select option to bulk delete them.

For a full product tour check out our overview video of the feature below.

We hope that you are excited about this first step in providing greater insight into your content on Docker Hub. If you want to get started exploring your content, all users can see how many inactive images they have, and Pro and Team users can see which tags these images used to be associated with and what their hashes are, and can start removing them today. To find out more about becoming a Pro or Team user, check out this page.

WSL 2 GPU Support is Here https://www.docker.com/blog/wsl-2-gpu-support-is-here/ Mon, 21 Dec 2020 14:00:00 +0000 https://www.docker.com/blog/?p=27355 At Microsoft Build in the first half of the year, Microsoft demonstrated some awesome new capabilities and improvements coming to Windows Subsystem for Linux 2, including the ability to share the host machine’s GPU with WSL 2 processes. Then in June, Craig Loewen from Microsoft announced that developers working on Windows Insider ring machines could make use of the GPU for their Linux workloads. This support for NVIDIA CUDA enabled developers and data scientists to use their local Windows machines for inner-loop development and experimentation.

Last week, during the Docker Community All Hands, we announced the availability of a developer preview build of Docker Desktop for WSL 2 supporting GPUs as part of our Developer Preview Program. More than 1,000 people have already joined the program to help test preview builds of Docker Desktop for Windows (and Mac!). If you’re interested in joining the program for future releases, you should do it now!

Today we are excited to announce the general preview of Docker Desktop support for GPUs with Docker in WSL 2. There are over one and a half million users of Docker Desktop for Windows today, and we saw on our roadmap how excited you all were for us to provide this support.

Preview of Docker Desktop with GPU support in WSL2

To get started with Docker Desktop with Nvidia GPU support on WSL 2, you will need to download our technical preview build.

Once you have the preview build installed there are still a couple of steps you will need to do to get started using your GPU: 

  • You will need access to a PC with an Nvidia GPU (if you don’t have this we would still like feedback on this build, we have changed a fair bit in our VM to get this working!) 
  • The latest Windows Insider version from the Dev preview ring 
  • Beta drivers from Nvidia supporting WSL 2 GPU paravirtualization: https://developer.nvidia.com/cuda/wsl
  • WSL 2 backend enabled in Docker Desktop

Once you have all of this in place, you can have a go at the command below to check that GPU support is working.

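One way to exercise the GPU (a hedged example based on NVIDIA’s CUDA samples, not necessarily the exact command originally shown in the screenshot):

# run the n-body benchmark from NVIDIA's CUDA samples against all available GPUs
docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark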

Keep in mind that this is a technical preview release: it may break, it has not been tested as thoroughly as our normal releases and ‘here be dragons’. 

If you do find issues and want to give us feedback, please raise bugs on our public repo https://github.com/docker/for-win. We use this feedback to improve the product and your support in testing this will help us get this ready for GA sooner 🙂 

Enjoy the tech preview and happy GPU Hacking!

🧪 Docker Hub Experimental CLI tool https://www.docker.com/blog/docker-hub-experimental-cli-tool/ Fri, 18 Dec 2020 18:28:32 +0000 https://www.docker.com/blog/?p=27342 We are excited to let you know that we have released a new experimental tool, and we would love to get your feedback on it. Today we have released an experimental Docker Hub CLI tool, the hub-tool. The new Hub CLI tool lets you explore, inspect and manage your content on Docker Hub, as well as work with your teams and manage your account.

The new tool is available as of today for Docker Desktop for Mac and Windows users and we will be releasing this for Linux in early 2021.

The hub-tool is designed to map as closely as possible to the top-level features we know people are using in Docker Hub, and to provide a new way for people to start interacting with and managing their content. Let’s start by taking a look at the top-level options.

What you can do

We can see that we have the ability to jump into your account, your content, your orgs and your personal access tokens.


From here I can dive into one of my repos


And from here I can then decide to list the tags in one of those repos. This also now lets me see when these images were last pulled 🎉


Changing focus, I can go over and look at some of the teams I am a member of to see what permissions people have


Or I can have a look at my access tokens 

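A rough sketch of the commands behind the steps above (hedged; run hub-tool --help for the exact syntax):

hub-tool login <username>            # authenticate against Docker Hub
hub-tool repo ls                     # list your repositories
hub-tool tag ls <username>/<repo>    # list tags, including when they were last pulled
hub-tool org ls                      # list the organizations you belong to
hub-tool token ls                    # list your personal access tokens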

Why a standalone tool?

I also wanted to mention why we have decided to do this as a standalone tool rather than a Docker command such as docker registry. We know that Docker Hub has some unique features, and we wanted to bring these out as part of this tool and get feedback on whether this is something that would be valuable to add to the Docker CLI in the future (or which bits of it we should add!). Given that some of these features are unique to Hub, that we wanted feedback before adding more top-level commands into the Docker CLI, and that we wanted to do something quickly, we decided to go with a standalone tool. This does mean that this tool is an experiment, so we expect it to go away sometime in 2021. We plan to use the lessons we learn here to make something awesome as part of the Docker CLI.

Give us feedback!

If you have feedback or want to see this move into the existing Docker CLI, please let us know on the roadmap item. To get started trying out the tool, sign up for a Hub account and start using the tool in the Edge version of Docker Desktop.

Introducing Docker Engine 20.10 https://www.docker.com/blog/introducing-docker-engine-20-10/ Thu, 10 Dec 2020 00:19:18 +0000 https://www.docker.com/blog/?p=27312 We are pleased to announce that we have completed the next major release of the Docker Engine, 20.10. This release continues Docker’s investment in our community Engine, adding multiple new features including support for cgroups v2 and moving multiple features out of experimental (including RUN --mount and rootless mode), along with a ton of other improvements to the API, client and build experience. The full list of changes can be found in our change log.


Docker Engine is the underlying tooling/client that enables users to easily build, manage, share and run their container objects on Linux. The Docker Engine is made up of 3 core components:

  • A server with a long-running daemon process dockerd.
  • APIs which specify interfaces that programs can use to talk to and instruct the Docker daemon.
  • A command line interface (CLI) client docker.

For those who are curious about the recent questions about Docker Engine/K8s, please have a look at Dieu’s blog to learn more. 

Along with this I want to give a huge thank you to everyone in the community and all of our maintainers who have also contributed towards this Engine release. Without their contribution, hard work and support we would not be where we are nor have this Engine release. When I say ‘we’ throughout this article I don’t just mean the (awesome) engineers at Docker, I mean the (awesome) engineers outside of Docker and the wider community that have helped shape this release 🙂 

You can get started with the 20.10 version of Docker Engine either by getting the packages from here, or through this week’s community release of Docker Desktop; for those of you on Mac and Windows who can’t wait, you can try out the RC of 20.10 using the latest Docker Desktop. Now let’s jump in and have a closer look at some of the 20.10 changes.

Initial support for cgroups V2

Just as a reminder of how Docker works: Docker uses several foundational Linux kernel features to provide isolation for your running processes and the files associated with them. One of these features is cgroups. Cgroups in Linux limit the resource usage (CPU, memory, disk, etc.) of a process. Docker combines these with Linux namespaces to isolate your processes in containers.
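For example (a minimal sketch), the familiar resource flags are what end up enforced through cgroups, and on 20.10 you can check which cgroup version the daemon is using:

# which cgroup version is the daemon running against?
docker info --format '{{.CgroupVersion}}'
# memory and CPU limits applied to the container's cgroup
docker run --rm --memory=256m --cpus=0.5 alpine echo "hello from a constrained container"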

V2 of cgroups was first introduced to the Linux kernel in 2016, bringing with it changes to how groups are managed and support for imposing resource limitations on rootless containers. For a bit more background on where this came from, and on why these changes have taken a while to arrive, check out Akihiro’s blog.

Now that support for this has been introduced in runc, we have added it to Docker. This change in turn has allowed Docker to graduate rootless from experimental to a fully supported feature.

Support for reading docker logs with all logging drivers

Prior to Docker Engine 20.10, the jsonfile and journald log drivers supported reading container logs using docker logs. However, many third party logging drivers had no support for locally reading logs using docker logs, including:

  • syslog
  • gelf
  • fluentd
  • awslogs
  • splunk
  • etwlogs
  • gcplogs
  • logentries

This created multiple problems when attempting to gather log data in an automated and standard way. Log information could only be accessed and viewed through the third-party solution in the format specified by that third-party tool.

Starting with Docker Engine 20.10, you can use docker logs to read container logs regardless of the configured logging driver or plugin. This capability, sometimes referred to as dual logging, allows you to use docker logs to read container logs locally in a consistent format, regardless of the remote log driver used, because the Engine is also configured to log information to the “local” logging driver. Refer to “Configure the default logging driver” in the Docker documentation for additional information.
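For example (a sketch that assumes a syslog daemon is reachable on the host), a container using a remote logging driver can now still be read locally:

# send container logs to syslog...
docker run -d --name web --log-driver=syslog nginx
# ...and still read them locally thanks to dual logging
docker logs web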

OS support changes

With the 20.10 release of the Engine we are updating our OS support: we are adding support for both Ubuntu 20.10 and Fedora 33, along with continuing support for CentOS 8, giving users on these OSes access to Docker’s latest features.

CLI improvements

Along with all of this, we have made multiple changes to improve the CLI experience, giving you access to the functionality you need and the configurability to automate it. We have been looking at making the experience across the CLI more consistent, removing older and unused commands to make it simpler, and adding new options to make it easier to get started and easier to script with Docker.

Taking a look at a few of these:

docker push – we have changed the default behavior to be in line with pull: if you push an image name without a tag, we will now only push the :latest tag rather than all tags. To support this we have also added a -a/--all-tags flag to push all the tags of an image.

--pull=missing|always|never – these options have been added to the run and create commands, giving you more fine-grained control over when to pull images.

docker exec – we have added a new --env-file flag. This allows you to run docker exec with the --env-file flag and a file containing environment variables, and subsequently print/use any of the variables inside the file in the command.
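Putting those together (image, container and file names are placeholders):

# push every tag of an image, now that pushing defaults to :latest only
docker push --all-tags <user>/<app>
# always re-pull the image before creating the container
docker run --pull=always alpine echo hello
# pass extra environment variables into an exec session from a file
docker exec --env-file ./extra.env <container> env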

To learn more about Docker Engine 20.10, check out the full change log linked above.

Docker Compose for Amazon ECS Now Available https://www.docker.com/blog/docker-compose-for-amazon-ecs-now-available/ Thu, 19 Nov 2020 20:00:00 +0000 https://www.docker.com/blog/?p=27209 Docker is pleased to announce that as of today the integration with Docker Compose and Amazon ECS has reached V1 and is now GA! 🎉

We started this work way back at the beginning of the year with our first step: moving the Compose specification into a community-run project. Then in July we announced how we were working together with AWS to make it easier to deploy Compose applications to ECS using the Docker command line. As of today, all Docker Desktop users have the stable ECS experience available to them, allowing developers to use docker compose commands with an ECS context to run their containers against ECS.

As part of this we want to thank the AWS team who have helped us make this happen: Carmen Puccio, David Killmon, Sravan Rengarajan, Uttara Sridhar, Massimo Re Ferre, Jonah Jones and David Duffey.

Getting started with Docker Compose & ECS

As an existing ECS user or a new starter, all you will need to do is update to the latest Docker Desktop Community version (2.5.0.1 or greater), store your image on Docker Hub so you can deploy it (you can get started with Hub here), get yourself set up on AWS, and lastly create an ECS context using that account. You are then ready to use your Compose file to start running your applications in ECS.
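At a high level the flow looks like this (a sketch; the context name is just an example):

# create an ECS context backed by your AWS credentials
docker context create ecs myecscontext
# point the Docker CLI at that context
docker context use myecscontext
# deploy, inspect and tear down the Compose application on ECS
docker compose up
docker compose ps
docker compose down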

We have done a couple of blog posts and videos along with AWS to give you an idea of how to get started or use the ECS experience. 

If you have other questions about the experience or would like to give us feedback then drop us a message in the Compose CLI repo or in the #docker-ecs channel in our community Slack.

New in the Docker Compose ECS integration 

We have been adding new features to the ECS integration over the last few months, and we wanted to run you through some of the ones that we are most excited about:

GPU support 

As part of the more recent versions of the ECS integration we have provided the ability to deploy to EC2 (rather than the default Fargate) to allow developers to make use of unique instance types and features, like GPUs, within EC2.

To do this all you need to do is specify that you need a GPU instance type as part of your Compose file and the Compose CLI will take care of the rest! 

services:
   learn:
       image: itamarost/object-detection-app:latest-gpu
       command: python app.py
       ports:
           - target: 8000
             protocol: tcp
             x-aws-protocol: http
       deploy:
           resources:
               # devices:
               #   - capabilities: ["gpu"]
               reservations:
                  memory: 30Gb
                  generic_resources:
                    - discrete_resource_spec:
                        kind: gpus
                        value: 1

EFS support

We heard early feedback from developers that when you are moving to the cloud you may not be ready to move to a managed service to persist your data, and may still want to use volumes with your application. To solve this, we have added Elastic File System (EFS) volume support to the Compose CLI, allowing users to create volumes and use them as part of their Compose applications. Volumes are created with a Retain policy, so data won’t be deleted on application shut-down. If the same application (same project name) is deployed again, the file system will be re-attached, offering the same user experience developers are used to locally with docker-compose.

To do this I can either specify an existing file system that I have already created:

volumes:
   my-data:
     external: true
     name: fs-123abcd

Or I can create a new one from scratch by providing information about how I want it configured:

volumes:
   my-data:
     driver_opts:
       # Filesystem configuration
       backup_policy: ENABLED
       lifecycle_policy: AFTER_14_DAYS
       performance_mode: maxIO
       throughput_mode: provisioned
       provisioned_throughput: 1

I can also manage these through the docker volume command which lets me list and manage my resources allowing me to remove them when I no longer need them.

Context creation improvements 

We have also been looking at how we can improve the context creation flow to make it simpler and more interactive, while still allowing power users to specify things up front if they already know how they want to configure their context.

When you get started we now have 3 options for creating a new context: 

? Create a Docker context using:  [Use arrows to move, type to filter]
> An existing AWS profile
 A new AWS profile
 AWS environment variables

If you select an existing profile, we will list your available profiles to choose from and allow you to simply select the profile you want to have associated with this context. 

$ docker context create ecs test2
? Create a Docker context using: An existing AWS profile
? Select AWS Profile nondefault
Successfully created ecs context "test2"
 
$ docker context inspect test2
[
   {
       "Name": "test2",
       "Metadata": {
           "Description": "(eu-west-3)",
           "Type": "ecs"
       },
       "Endpoints": {
           "ecs": {
               "Profile": "nondefault",
           }
       ...
   }
]

If you want to create a new profile, we will ask you for the credentials needed to do this as part of the creation flow and will save this profile for you:

? Create a Docker context using: A new AWS profile
? AWS Access Key ID fiasdsdkngjgwka
? Enter AWS Secret Access Key *******************
? Region eu-west-3
Saving to profile "test3"
Successfully created ecs context "test3"
 
$ docker context inspect test3
[
   {
       "Name": "test3",
       "Metadata": {
           "Description": "(eu-west-3)",
           "Type": "ecs"
       },
       "Endpoints": {
           "ecs": {
               "Profile": "test3",
           }
       },
       ...
   }
]

If you want to do this using your existing AWS environment variables, you can choose this option and we will create the context with a reference to these env vars, so we continue to respect them as you work with the context:

$ docker context create ecs test1
? Create a Docker context using: AWS environment variables
Successfully created ecs context "test1"
$ docker context inspect test1
[
   {
       "Name": "test1",
       "Metadata": {
           "Description": "credentials read from environment",
           "Type": "ecs"
       },
       "Endpoints": {
           "ecs": {
               "CredentialsFromEnv": true
           }
       },
       ...
   }
]

We hope this new simplified way of getting started, along with the flags we have added to let you override parts of it, will help you get started with ECS even faster than before.

We are really excited about the new experience we have built with ECS, if you have any feedback on the experience or have ideas for other backends for the Compose CLI please let us know via our Public Roadmap.

Join our workshop “I Didn’t Know I Could Do That with Docker – AWS ECS Integration” with Docker’s Peter McKee and AWS’ Jonah Jones Tuesday, November 24, 2020 – 10:00am PT / 1:00pm ET. Register here

Apple Silicon M1 Chips and Docker https://www.docker.com/blog/apple-silicon-m1-chips-and-docker/ Mon, 16 Nov 2020 17:21:12 +0000 https://www.docker.com/blog/?p=27222

Revealed at Apple’s ‘One More Thing’ event on Nov 10th, Docker was excited to see the new Macs featuring Apple silicon and the M1 chip. At Docker we have been looking at the new hypervisor features and support that are required for Mac to continue to delight our millions of customers. We saw the first spotlight of these efforts at Apple WWDC in June, when Apple highlighted Docker Desktop on stage. Our goal at Docker is to provide the same great experience on the new Macs as we do today for our millions of users on Docker Desktop for Mac, and to make this transition as seamless as possible.

Building the right experience for our customers means getting quite a few things right before we push a release. Although Apple has released Rosetta 2 to help move applications over to the new M1 chips, this does not get us all the way with Docker Desktop. Under the hood of Docker Desktop, we run a virtual machine; to achieve this on Apple’s new hardware we need to move onto Apple’s new hypervisor framework. We also need to do all the plumbing that provides the core experience of Docker Desktop, allowing you to docker run from your terminal as you can today.

Along with this, we have technical dependencies upstream of us that need to make changes prior to making a new version of Docker Desktop GA. We rely on things like Go for the backend of Docker Desktop and Electron for the Docker Dashboard to view your Desktop content. We know these projects are hard at work getting ready for M1 chips, and we are watching them closely. 

We also want to make sure we get the quality of our release right, which means putting the right tooling in place for our team to support repeatable, reliable testing. To do this we need to complete work including setting up CI for M1 chips to supplement the 25 Mac Minis that we use for automated testing of Docker Desktop. Apple’s announcement means we can start to get these set up and put in place to start automating the testing of Desktop on M1 chips. 

Last but by no means least, we also need to review the experience in the product for docker build. We know that developers will look at doing more multi-architecture builds than before. We have support for multi-architecture builds today behind buildx, and we will need to work on how we are going to make this simpler as part of this release. We want developers to continue to work locally in Docker and have the same confidence that you can just build - share - run your content as easily as you do now regardless of the architecture.

If you are excited for the new Mac hardware and want to be kept up to date on the status of Docker on M1 chips, please sign up for a Docker ID to get our newsletter for the latest updates. We are also happy to let you know that the latest version of Docker Desktop runs on Big Sur. If you have any feedback, please let us know either by our issue tracker or our public roadmap!

Also a big thank you to all of you who have engaged on the public roadmap, Twitter and our issue trackers highlight how much you care about Docker for Mac. Your interest and energy is greatly appreciated! Keep providing feedback and check in with us as we work on this going forward. ❤️

Compose CLI ACI Integration Now Available https://www.docker.com/blog/compose-cli-aci-integration-now-available/ Thu, 05 Nov 2020 23:17:18 +0000 https://www.docker.com/blog/?p=27200 Today we are pleased to announce that we have reached a major milestone: GA and V1 of both the Compose CLI and the ACI integration. 🎉

In May we announced the partnership between Docker and Microsoft to make it easier to deploy containerized applications from the Desktop to the cloud with Azure Container Instances (ACI). We are happy to let you know that all users of Docker Desktop now have the ACI experience available to them by default, allowing them to easily use existing Docker commands to deploy and manage containers running in ACI. 

As part of this I want to also call out a thank you to the MSFT team who have worked with us to make this all happen! That is a big thank you to Mike Morton, Karol Zadora-Przylecki, Brandon Waterloo, MacKenzie Olson, and Paul Yuknewicz.

Getting started with Docker and ACI 

As a new starter, to get going all you will need to do is upgrade your existing Docker Desktop to the latest stable version (2.5.0.0 or later), store your image on Docker Hub so you can deploy it (you can get started with Hub here), and then lastly create an ACI context to deploy it to.
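A sketch of that flow (the context name is just an example):

# authenticate with Azure and create an ACI context
docker login azure
docker context create aci myacicontext
docker context use myacicontext
# run a container in Azure Container Instances with the usual commands
docker run --name web -p 80:80 nginx
docker ps
docker rm web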

We have done a few blog posts now on the different types of things you can achieve with the ACI integration. 

If you have other questions on the experience or would like some other guides then drop us a message in the Compose CLI repo so we can update our docs. 

What’s new in V1.0 

Since the last release of the CLI we have added a few new commands to make it easier to manage your working environment, and to make it simpler to understand what you can clean up so you can save money on resources you are not using.

To start, we have added a volume inspect command alongside volume create to allow you better management of your volumes:

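A hedged example (the storage account flag and names are assumptions based on the ACI volume documentation; check docker volume create --help in an ACI context for the exact flags):

# create an Azure file share backed volume for use with ACI containers
docker volume create my-data --storage-account <storageaccount>
# list and inspect the volumes available in this ACI context
docker volume ls
docker volume inspect <volume-id>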

We are also very excited by our new top level prune command to allow you to better clear up your ACI working environment and manage your costs. 

docker prune --help


We have also added a couple of interesting flags here. We have the --dry-run flag to let you see what would be cleaned up:

(OK I am not running a great deal here!) 

As you can see, this also lets you know the amount of compute resources you will be reclaiming. At the end of a development session, being able to do a force prune allows you to remove ‘all the things you have run’, giving you the confidence that you won’t have left something running and get an unexpected bill.

Lastly, based on your feedback we have started to add a few more flags. A couple of examples are the addition of --format json and --quiet in the ps, context ls, compose ls, compose ps and volume ls commands, to output JSON or single IDs.
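For example:

# machine-readable output for scripting
docker ps --format json
# just the IDs of the running Compose applications
docker compose ls --quiet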

We are really excited about the new experience we have built with ACI. If you have any feedback on the experience or have ideas for other backends for the Compose CLI, please let us know via our Public Roadmap.
