Desktop Support for iTerm2 – A Feature Request from the Docker Public Roadmap
https://www.docker.com/blog/desktop-support-for-iterm2-a-feature-request-from-the-docker-public-roadmap/
Tue, 09 Mar 2021 18:05:32 +0000

The latest Docker Desktop release, 3.2, includes support for iTerm2, a terminal emulator that is highly popular with macOS fans. From the Containers/Apps Dashboard, for a running container, you can click `CLI` to open a terminal and run commands on the container. With this latest release of Docker Desktop, if you have installed iTerm2 on your Mac, the CLI option opens an iTerm2 terminal. Otherwise, it opens the Terminal app on Mac or a Command Prompt on Windows.
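The CLI button is essentially a shortcut for opening a shell inside the running container. As a rough sketch, the equivalent command you could run yourself from any terminal looks like this (the container name my-container is a placeholder for your own container's name or ID):

$ docker exec -it my-container /bin/sh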

Notably, this feature request to support additional terminals started on the Docker public roadmap. Daniel Rodriguez, one of our community members, submitted the request, 180 people upvoted it, and we added it to the roadmap and prioritized it.

The public roadmap is our source of truth for community feedback on prioritizing product updates and feature enhancements. Not everything submitted to the public roadmap will end up as a delivered feature, but support for M1 chips, image vulnerability scanning, and audit logging – all delivered within the last year – started as issues submitted via the roadmap.

This is the easiest way for you to let us know about your pain points and what we can do to make Docker work better for your use cases. If you haven’t seen the public roadmap yet, check out the issues already submitted by others.  

If any of the submitted issues resonate with you, add your comments and/or your vote. If you do not see an issue that you want addressed, go ahead and submit your own. We regularly review the new entries and roadmap updates. If you have never done this before, please review the contributing guidelines; we created them to make sure we understand your needs and your use cases.

Check out the public roadmap, take a look at what’s already on there and please give us your feedback!

The Docker Dashboard Welcomes Hub and Local Images
https://www.docker.com/blog/the-docker-dashboard-welcomes-hub-and-local-images/
Thu, 01 Oct 2020 20:49:25 +0000

Last year we released the Docker Dashboard as part of Docker Desktop. Today we are excited to announce the next piece of the Dashboard for our community customers: a new Images UI. We have expanded the existing UI for your local machine with the ability to interact with your Docker images both locally and on Docker Hub. This lets you display your local images and manage them (run, inspect, delete) through an intuitive UI without using the CLI. And for your images in Hub, you can now view your repositories or your teams' repositories and pull images directly from the UI.
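For readers who are used to the command line, the new UI roughly mirrors a handful of familiar commands; a sketch of the equivalents (the image and repository names are placeholders):

$ docker images                          # list your local images
$ docker image inspect nginx             # inspect an image
$ docker rmi nginx                       # delete an image
$ docker pull my-team/my-app:latest      # pull an image from one of your Hub repositories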

To get started, download the latest Docker Desktop release and load up the Dashboard (we are also excited that we have given the Dashboard an icon 🎉).

[Screenshot: the new Images view in the Docker Dashboard]

You will also see that we have added a new sidebar to navigate between the two areas, and we are planning to add new sections here soon. To find out more about what's coming, or to give feedback on what you would like to see, check out our public roadmap.

Let’s jump in and have a look at what we can do…

From the whale menu, you can access the “Dashboard” and then “Images” where you’ll see a summary and each image with its details: tag, image id, creation date, and size. 

[Screenshot: local images listed with their tag, image ID, creation date, and size]

I can also see which of my images are being used by a running or stopped container.

Now that I have all these images, I can have a go at running one of them. I will go for my standard demo, my simple whale. When I hover over the image line, two new buttons appear; I am going to start off by clicking 'Run'.

[Screenshot: the buttons that appear when hovering over an image]

This pops up a dialog where you can either just run the image or add in some values first. In this instance I am just going to add a port mapping so I can see what is running in my container once I have started it.
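Under the hood, running with a port mapping corresponds to the familiar docker run flags; a rough CLI equivalent (the image name and ports are illustrative):

$ docker run -d -p 8080:80 my-simple-whale    # map host port 8080 to container port 80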

[Screenshot: the Run dialog with optional settings such as a port mapping]

Great, this now takes me to the running container UI, and I can see that my container is alive!

Next, let's look at cleaning up what we have not used so far. I am going to start by hitting the clean up button we saw in the top right corner of the Images UI.


The UI then changes, and I am going to remove all my unused images. I can see I will free up 4 GB, which isn't bad!
[Screenshot: the clean up view listing unused images]

I hit Remove and, after I accept the warning, the clean up happens: only my in-use images remain!
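If you prefer the command line, a similar clean up can be done with the prune commands (a sketch; note that the -a flag removes every image not used by a container, so check what it will delete first):

$ docker image prune        # remove dangling images
$ docker image prune -a     # remove all images not used by any container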

[Screenshot: only in-use images remaining after clean up]

Now let's have a look at our Hub images. As I am signed in, I can just click on the Remote repositories tab.

Great! I can now see my Hub repositories along with their tags. I can either pull an image and then run it, or switch between my teams to see their repositories. If I click Pull on one of my images, I can see that it has started, as it takes me back to my local images screen and shows me the progress of the download:

[Screenshot: pull progress shown in the local images view]

I’ve also created an “unboxing” walkthrough of the entire process. You can join me on a guided tour of the new features in this video.


We hope you are as excited as we are about this next piece of the Docker Dashboard. To get started, download the latest Docker Desktop release and explore your local images. You can try out the remote repositories functionality by signing into Docker Hub and pulling any of your existing images, or if you are new, why not try pulling a Docker Official Image like NGINX, retagging it, and then having a go at pushing it from the UI.
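For reference, the same pull, retag, and push flow looks like this from the command line (a sketch; replace your-dockerhub-username with your own Docker Hub namespace):

$ docker pull nginx
$ docker tag nginx your-dockerhub-username/my-nginx:test
$ docker push your-dockerhub-username/my-nginx:test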

Docker ❤️ WSL 2 – The Future of Docker Desktop for Windows
https://www.docker.com/blog/docker-hearts-wsl-2/
Sun, 16 Jun 2019 11:59:43 +0000

One of Docker's goals has always been to provide the best experience working with containers from a Desktop environment, with an experience as close to native as possible whether you are working on Windows, Mac or Linux. We spend a lot of time working with the software stacks provided by Microsoft and Apple to achieve this. As part of this work, we have been closely monitoring Windows Subsystem for Linux (WSL) since it was introduced in 2016, to see how we could leverage it for our products.

The original WSL was an impressive effort to emulate a Linux Kernel on top of Windows, but there are such foundational differences between Windows and Linux that some things were impossible to implement with the same behavior as on native Linux, and this meant that it was impossible to run the Docker Engine and Kubernetes directly inside WSL. Instead, Docker Desktop developed an alternative solution using Hyper-V VMs and LinuxKit to achieve the seamless integration our users expect and love today.

Microsoft has just announced WSL 2 with a major architecture change: instead of using emulation, they are actually providing a real Linux Kernel running inside a lightweight VM. This approach is architecturally very close to what we do with LinuxKit and Hyper-V today, with the additional benefit that it is more lightweight and more tightly integrated with Windows than Docker can provide alone. The Docker daemon runs well on it with great performance, and the time it takes from a cold boot to have dockerd running in WSL 2 is around 2 seconds on our developer machines. We are very excited about this technology, and we are happy to announce that we are working on a new version of Docker Desktop leveraging WSL 2, with a public preview in July. It will make the Docker experience for developing with containers even greater, unlock new capabilities, and because WSL 2 works on Windows 10 Home edition, so will Docker Desktop.
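For context, WSL 2 distributions are managed with the wsl.exe tool that ships with Windows; a quick sketch, assuming a Windows Insider build with WSL 2 available:

> wsl -l -v                      # list installed distros and their WSL versions
> wsl --set-version Ubuntu 2     # convert a distro named Ubuntu to WSL 2
> wsl --set-default-version 2    # make WSL 2 the default for new distros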

Collaborating with Microsoft

As part of our shared effort to make Docker Desktop the best way to use Docker on Windows, Microsoft gave us early builds of WSL 2 so that we could evaluate the technology, see how it fits with our product, and share feedback about what is missing or broken. We started prototyping different approaches and we are now ready to share a little bit about what is coming in the next few months.

Docker Desktop Future

We will replace the Hyper-V VM we currently use with a WSL 2 integration package. This package will provide the same features as the current Docker Desktop VM: Kubernetes 1-click setup, automatic updates, transparent HTTP proxy configuration, access to the daemon from Windows, transparent bind mounts of Windows files, and more.

[Diagram: Docker Desktop using a WSL 2 integration package in place of the Hyper-V VM]

This integration package will contain both the server-side components required to run Docker and Kubernetes and the CLI tools used to interact with those components from within WSL. We will then be able to introduce a new feature with Docker Desktop: Linux workspaces.

Linux Workspaces

When using Docker Desktop today, the VM running the daemon is completely opaque: you can interact with the Docker and Kubernetes API from Windows, but you can’t run anything within the VM except Docker containers or Kubernetes Pods.

With WSL 2 integration, you will still experience the same seamless integration with Windows, but Linux programs running inside WSL will also be able to do the same. This has a huge impact for developers working on projects targeting a Linux environment, or with a build process tailored for Linux. No need for maintaining both Linux and Windows build scripts anymore! As an example, a developer at Docker can now work on the Linux Docker daemon on Windows, using the same set of tools and scripts as a developer on a Linux machine:


A developer working on the Docker Daemon using Docker Desktop technical preview, WSL 2 and VS Code remote

Also, bind mounts from WSL will support inotify events and have nearly identical I/O performance to a native Linux machine, which will solve one of the major Docker Desktop pain points with I/O-heavy toolchains. Node.js, PHP and other web development tools will benefit greatly from this feature.
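As a sketch of what that looks like from inside a WSL 2 distro, with the project living on the Linux filesystem (the path and image are illustrative):

$ cd ~/projects/my-node-app
$ docker run --rm -it -v "$(pwd)":/app -w /app node:12 npm test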

Combined with Visual Studio Code “Remote to WSL”, Docker Desktop Linux workspaces will make it possible to run a full Linux toolchain for building containers on your local machine, from your IDE running on Windows.

Performance

With WSL 2, Microsoft has put a huge amount of effort into performance and resource allocation: the VM is set up to use dynamic memory allocation and can schedule work on all of the host CPUs, while consuming as little (or as much) memory as it requires – within the limits of what the host can provide, and cooperatively with the win32 processes running on the host.

Docker Desktop will leverage this to greatly improve its resource consumption. It will use as little or as much CPU and memory as it needs, and CPU- or memory-intensive tasks such as building a container will run much faster than today.

In addition, the time to start a WSL 2 distribution and the Docker daemon after a cold start is blazingly fast – within 2s on our development laptops, compared to tens of seconds in the current version of Docker Desktop. This opens the door to battery-life optimizations by deferring the daemon startup to the first API call, and automatically stopping the daemon when it is not running any containers.

Zero-configuration bind mount support

One of the major issues users have today with Docker Desktop – especially in an enterprise environment – is the reliability of Windows file bind mounts. The current implementation relies on the Samba Windows service, which may be deactivated, blocked by enterprise GPOs, blocked by third-party firewalls, and so on. Docker Desktop with WSL 2 will solve this whole category of issues by leveraging WSL features for implementing bind mounts of Windows files. It will provide an "it just works" experience, out of the box.

 

Technical Preview of Docker Desktop for WSL 2

Thanks to our collaboration with Microsoft, we are already hard at work on implementing our vision. We have written the core functionality to deploy the integration package, run the daemon, and expose it to Windows processes, with support for bind mounts and port forwarding.

A technical preview of Docker Desktop for WSL 2 will be available for download in July. It will run side by side with the current version of Docker Desktop, so you can continue to work safely on your existing projects. If you are running the latest Windows Insider build, you will be able to experience this first hand. In the coming months, we will add more features until the WSL 2 architecture is used in Docker Desktop for everyone running a compatible version of Windows.

Addressing Time Drift in Docker Desktop for Mac
https://www.docker.com/blog/addressing-time-drift-in-docker-desktop-for-mac/
Mon, 25 Feb 2019 10:00:59 +0000

Docker Desktop for Mac runs the Docker engine and Linux containers in a helper LinuxKit VM since macOS doesn't have native container support. The helper VM has its own internal clock, separate from the host's clock. When the two clocks drift apart, commands which rely on the time, or on file timestamps, may suddenly start to behave differently. For example "make" will stop working properly across shared volumes ("docker run -v") when the modification times on source files (typically written on the host) are older than the modification times on the binaries (typically written in the VM), even after the source files are changed. Time drift can be very frustrating, as you can see by reading issues such as https://github.com/docker/for-mac/issues/2076.
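A quick way to see how far the clocks have drifted on an affected install is to compare the host's clock with the clock inside a container (a rough check; both commands print seconds since the Unix epoch):

$ date -u +%s ; docker run --rm alpine date -u +%s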

Wait, doesn’t the VM have a (virtual) hardware Real Time Clock (RTC)?

When the helper VM boots the clocks are initially synchronised by an explicit invocation of “hwclock -s” which reads the virtual RTC in HyperKit. Unfortunately reading the RTC is a slow operation (both on physical hardware and virtual) so the Linux kernel builds its own internal clock on top of other sources of timing information, known as clocksources. The most reliable is usually the CPU Time Stamp Counter (“tsc”) clocksource which measures time by counting the number of CPU cycles since the last CPU reset. TSC counters are frequently used for benchmarking, where the current TSC value is read (via the rdtsc instruction) at the beginning and then again at the end of a test run. The two values can then be subtracted to yield the time the code took to run in CPU cycles. However there are problems when we try to use these counters long-term as a reliable source of absolute physical time, particularly when running in a VM:

  • There is no reliable way to discover the TSC frequency: without this we don’t know what to divide the counter values by to transform the result into seconds.
  • Some power management technology will change the TSC frequency dynamically.
  • The counter can jump back to 0 when the physical CPU is reset, for example over a host suspend / resume.
  • When a virtual CPU is stopped executing on one physical CPU core and later starts executing on another one, the TSC counter can suddenly jump forwards or backwards.

The unreliability of using TSC counters can be seen on this Docker Desktop for Mac install:

$ docker run --rm --privileged alpine /bin/dmesg | grep clocksource

[    3.486187] clocksource: Switched to clocksource tsc
[ 6963.789123] clocksource: timekeeping watchdog on CPU3: Marking clocksource 'tsc' as unstable because the skew is too large:
[ 6963.792264] clocksource:                       'hpet' wd_now: 388d8fc2 wd_last: 377f3b7c mask: ffffffff
[ 6963.794806] clocksource:                       'tsc' cs_now: 104a0911ec5a cs_last: 10492ccc2aec mask: ffffffffffffffff
[ 6963.797812] clocksource: Switched to clocksource hpet
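You can also check which clocksource the kernel is currently using via sysfs:

$ docker run --rm alpine cat /sys/devices/system/clocksource/clocksource0/current_clocksource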

Many hypervisors fix these problems by providing an explicit “paravirtualised clock” interface, providing enough additional information to the VM to allow it to correctly convert a TSC value to seconds. Unfortunately the Hypervisor.framework on the Mac does not provide enough information (particularly over suspend/resume) to allow the implementation of such a paravirtualised timesource so we reported the issue to Apple and searched for a workaround.

How bad is this in practice?

I wrote a simple tool to measure the time drift between a VM and the host – the source is here. I created a small LinuxKit test VM without any kind of time synchronisation software installed and measured the "natural" clock drift after the VM boots:

[Graph: clock drift between the VM and the host for several test runs after boot]

Each line on the graph shows the clock drift for a different test run. From the graph it appears that the time in the VM loses roughly 2ms for every 3s of host time which passes. Once the total drift gets to about 1s (after approximately 1500s or 25 minutes) it will start to get really annoying.

OK, can we turn on NTP and forget about it?

The Network Time Protocol (NTP) is designed to keep clocks in sync so it should be ideal. The question then becomes

  • which client?
  • which server?

How about using the “default” pool.ntp.org like everyone else?

Many machines and devices use the free pool.ntp.org as their NTP server. This would be a bad idea for us for several reasons:

  • it's against their guidelines (http://www.pool.ntp.org/en/vendors.html), although we could register as a vendor
  • there’s no guarantee clocks in the NTP server pool are themselves well-synchronised
  • people don't like their Mac sending unexpected UDP traffic; they fear it is a malware infestation
  • anyway… we don’t want the VM to synchronise with atomic clocks in some random physics lab, we want it to synchronise with the host (so the timestamps work). If the host itself has drifted 30 minutes away from “real” time, we want the VM to also be 30 minutes away from “real” time.

Therefore in Docker Desktop we should run our own NTP server on the host, serving the host’s clock.

Which server implementation should we use?

The NTP protocol is designed to be robust and globally scalable. Servers with accurate clock hardware (e.g. an atomic clock or a GPS feed containing a signal from an atomic clock) are relatively rare, so not all other hosts can connect directly to them. NTP servers are arranged in a hierarchy where lower "strata" synchronise with the stratum directly above, and end-users and devices synchronise with the servers at the bottom. Since our use-case only involves one server and one client, this is all unnecessary, so we use the Simple Network Time Protocol (SNTP) described in RFC 2030, which enables clients (in our case the VM) to synchronise immediately with a server (in our case the host).

Which NTP client should we use (and does it even matter)?

Early versions of Docker Desktop included openntpd from the upstream LinuxKit package. The following graph shows the time drift on one VM boot where openntpd runs for the first 10000s and then we switch to the busybox NTP client:

[Graph: clock drift with openntpd running for the first 10000s, then the busybox NTP client]

The diagram shows the clock still drifting significantly with openntpd running but it’s “fixed” by running busybox — why is this? To understand this it’s important to first understand how an NTP client adjusts the Linux kernel clock:

  • adjtime (3) – this accepts a delta (e.g. -10s) and tells the kernel to gradually adjust the system clock avoiding suddenly moving the clock forward (or backward, which can cause problems with timing loops which aren’t using monotonic clocks)
  • adjtimex (2) – this allows the kernel clock rate itself to be adjusted, to cope with systematic drift like we are suffering from (see the sketch after this list)
  • settimeofday (2) – this immediately bumps the clock to the given time
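The kernel state behind these APIs can be inspected with the adjtimex(8) utility referenced below; a sketch, assuming the busybox build in the image includes the adjtimex applet (with no arguments it simply prints the current mode, offset, frequency, and tick):

$ docker run --rm --privileged alpine busybox adjtimex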

If we look at line 433 of openntpd ntp.c (sorry no direct link in cvsweb) then we can see that openntpd is using adjtime to periodically add a delta to the clock, to try to correct the drift. This could also be seen in the openntpd logs. So why wasn’t this effective?

The following graph shows how the natural clock drift is affected by a call to adjtime(+10s) and adjtime(-10s):

[Graph: effect of adjtime(+10s) and adjtime(-10s) on the natural clock drift]

It seems the “natural” drift we’re experiencing is so large it can’t be compensated for solely by using “adjtime”. The reason busybox performs better for us is because it adjusts the clock rate itself using “adjtimex”.

The following graph shows the change in kernel clock frequency (timex.freq) viewed using adjtimex. For the first 10000s we use openntpd (and hence the adjustment is 0 since it doesn’t use the API) and for the rest of the graph we used busybox:

[Graph: kernel clock frequency adjustment (timex.freq) viewed with adjtimex – openntpd for the first 10000s, then busybox]

Note how the adjustment value remains flat and very negative after an initial spike upwards. I have to admit when I first saw this graph I was disappointed — I was hoping to see something zig-zag up and down, as the clock rate was constantly micro-managed to remain stable.

Is there anything special about the final value of the kernel frequency offset?

Unfortunately it is special. From the adjtimex (2) manpage:

ADJ_FREQUENCY
             Set frequency offset from buf.freq.  Since Linux 2.6.26, the
             supplied value is clamped to the range (-32768000, +32768000).

So it looks like busybox slowed the clock by the maximum amount (-32768000) to correct the systematic drift. According to the adjtimex(8) manpage a value of 65536 corresponds to 1ppm, so 32768000 corresponds to 500ppm. Recall that the original estimate of the systematic drift was 2ms every 3s, which is about 666ppm. This isn’t good: this means that we’re right at the limit of what adjtimex can do to compensate for it and are probably also relying on adjtime to provide additional adjustments. Unfortunately all our tests have been on one single machine and it’s easy to imagine a different system (perhaps with different powersaving behaviour) where even adjtimex + adjtime would be unable to cope with the drift.

So what should we do?

The main reasons NTP clients use APIs like adjtime and adjtimex are that they want:

  • monotonicity: i.e. to ensure time never goes backwards because this can cause bugs in programs which aren’t using monotonic clocks for timing, for example How and why the leap second affected Cloudflare DNS; and
  • smoothness: i.e. no sudden jumping forwards, triggering lots of timing loops, cron jobs etc at once.

Docker Desktop is used by developers to build and test their code on their laptops and desktops. Developers routinely edit files on their host with an IDE and then build them in a container using docker run -v. This requires the clock in the VM to be synchronised with the clock in the host, otherwise tools like make will fail to rebuild changed source files correctly.

Option 1: adjust the kernel “tick”

According to adjtimex(8) it’s possible to adjust the kernel “tick”:

Set the number of microseconds that should be added to the system time for each kernel tick interrupt. For a kernel with USER_HZ=100, there are supposed to be 100 ticks per second, so val should be close to 10000. Increasing val by 1 speeds up the system clock by about 100 ppm,

If we knew (or could measure) the systematic drift, we could make a coarse-grained adjustment with the "tick" and then let busybox NTP manage the remaining drift.
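As an illustration, compensating for a clock that runs roughly 700 ppm slow by adjusting the tick might look like this, using the adjtimex(8) utility as root inside the VM (the value is hypothetical, chosen only to match the drift measured above):

$ adjtimex --tick 10007    # add 10007 microseconds per tick instead of 10000, speeding the clock by ~700 ppm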

Option 2: regularly bump the clock forward with settimeofday (2)

If we assume that the clock in the VM is always running slower than the real physical clock (because it is virtualised, vCPUs are periodically descheduled etc) and if we don’t care about smoothness, we could use an NTP client which calls settimeofday (2) periodically to immediately resync the clock.

The choice

Although option 1 could potentially provide the best results, we decided to keep it simple and go with option 2: regularly bump the clock forward with settimeofday (2) rather than attempt to measure and adjust the kernel tick. We assume that the VM clock always runs slower than the host clock but we don’t have to measure exactly how much it runs slower, or assume that the slowness remains constant over time, or across different hardware setups. The solution is very simple and easy to understand. The VM clock should stay in close sync with the host and it should still be monotonic but it will not be very smooth.

We use an NTP client called SNTPC written by Natanael Copa, founder of Alpine Linux (quite a coincidence considering we use Alpine extensively in Docker Desktop). SNTPC can be configured to call settimeofday every n seconds with the following results:

[Graph: VM clock drift with sntpc calling settimeofday every 30s]

As you can see in the graph, every 30s the VM clock has fallen behind by 20ms and is then bumped forward. Note that since the VM clock is always running slower than the host, the VM clock always jumps forwards but never backwards, maintaining monotonicity.

Just a sec, couldn’t we just run hwclock -s every 30s?

Rather than running a simple NTP client and server communicating over UDP every 30s we could instead run hwclock -s to synchronise with the hardware RTC every 30s. Reading the RTC is inefficient because the memory reads trap to the hypervisor and block the vCPU, unlike UDP traffic which is efficiently queued in shared memory; however the code would be simple and an expensive operation once every 30s isn’t too bad. How well would it actually keep the clock in sync?
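The experiment boils down to a loop like this running inside the VM (a sketch of the test, not production code):

$ while true; do hwclock -s; sleep 30; done    # re-read the virtual RTC every 30 seconds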

[Graph: clock drift when running hwclock -s every 30s, staying only within about 1s]

 

Unfortunately running hwclock -s in a HyperKit Linux VM only manages to keep the clock in sync within about 1s, which would be quite noticeable when editing code and immediately recompiling it. So we’ll stick with NTP.

Final design

The final design looks like this:

 

[Diagram: the final design – sntpc in the VM synchronising with an SNTP server on the host via vpnkit]

In the VM the sntpc process sends UDP on port 123 (the NTP port) to the virtual NTP server running on the gateway, managed by the vpnkit process on the host. The NTP traffic is forwarded to a custom SNTP server running on localhost which executes gettimeofday and replies. The sntpc process receives the reply, calculates the local time by subtracting an estimate of the round-trip-time and calls settimeofday to bump the clock to the correct value.

In Summary

  • Timekeeping in computers is hard (especially so in virtual computers)
  • There's a serious source of systematic drift in the macOS Hypervisor.framework + HyperKit + Linux stack.
  • Minimising time drift is important for developer use-cases where file timestamps are used in build tools like make
  • For developer use-cases we don’t mind if the clock moves abruptly forwards
  • We’ve settled on a simple design using a standard protocol (SNTP) and a pre-existing client (sntpc)
  • The new design is very simple and should be robust: even if the clock drift rate is faster in future (e.g. because of a bug in the Hypervisor.framework or HyperKit) the clock will still be kept in sync.
  • The new code has shipped on both the stable and the edge channels (as of 18.05)
