Developing Docker-Powered Apps on Windows with WSL 2
https://www.docker.com/blog/developing-docker-windows-app-wsl2/ | Wed, 14 Aug 2019

This is a guest post from Docker Captain Antonis Kalipetis, a Senior Software Engineer at efood.gr — the leading online food delivery service in Greece. He is a Python lover and developer and helps teams embrace containers and improve their development workflow. He loves automating stuff and sharing knowledge around all things containers, DevOps and developer workflows. You can follow him on Twitter @akalipetis.

WSL 2 (or Windows Subsystem for Linux version 2) is Microsoft’s second take on shipping a Linux kernel with Windows. The first version was awesome, as it translated Linux system calls to the equivalent Windows NT calls in real time. The second version includes a full-fledged virtual machine.

It was only natural that Docker would embrace this change and ship a Docker Desktop for Windows version that runs on WSL 2 (WSL 1 had issues running the Docker daemon). This is still a Technical Preview, but after using it for a couple of days, I’ve completely switched my local development to take advantage of it and I’m pretty happy with it.

In this blog, I’ll show you an example of how to develop Docker-powered applications using the Docker Desktop WSL 2 Tech Preview.

Why use Docker Desktop WSL 2 Tech Preview over the “stable” Docker Desktop for Windows?

The main advantage of using the technical preview is that you don’t have to manage your Docker VM anymore.  More specifically:

  • The VM grows and shrinks with your needs in terms of RAM/CPU, so you don’t have to decide its size and preallocate resources. It can shrink to almost zero CPU/RAM if you don’t use it. It works so well that most of the time you forget there’s a VM involved.
  • Filesystem performance is great, with support for inotify, and the VM’s disk size can match the size of your machine’s disk.

Apart from the above, if you love Visual Studio Code like I do, you can use the VS Code Remote WSL plugin to develop Docker-powered applications locally (more on that in a bit). You also get the always awesome Docker developer experience while using the VM.

How does Docker Desktop for WSL 2 Tech Preview work?

When you install it, it automatically installs Docker in a managed directory in your default WSL 2 distribution. This installation includes the Docker daemon, the Docker CLI and the Docker Compose CLI. It is kept up to date with Docker Desktop and you can either access it from within WSL, or from PowerShell by switching contexts — see, Docker developer experience in action!
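For example, switching the CLI between the two engines is just a context switch. A minimal sketch, assuming the context names the Tech Preview creates by default (default and wsl, as mentioned in the tips further below):

# list the contexts Docker Desktop has created
docker context ls
# point the CLI at the WSL 2 engine
docker context use wsl
# and back to the classic Docker Desktop engine
docker context use default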

Developing applications with Docker Desktop for WSL 2 Tech Preview

For this example, we’ll develop a simple Python Flask application, with Redis as its data store. Every time you visit the page, the page counter will increase — say hello to Millennium!

Setting up VS Code Remote – WSL

Visual Studio Code recently announced a new set of tools for developing applications remotely — using SSH, Docker or WSL. This splits Visual Studio Code into a “client-server” architecture, with the client (that is the UI) running on your Windows machine and the server (that is your code, Git, plugins, etc) running remotely. In this example, we’re going to use the WSL version.

To start, open VS Code and select “Remote-WSL: New Window”. This will install the VS Code Remote server in your default WSL distribution (the one running Docker) and open a new VS Code workspace in your HOME directory.


Getting and Exploring the Code

Clone this GitHub repository by running git clone https://github.com/akalipetis/python-docker-example. Next, run code -r python-docker-example to open this directory in VS Code and let’s go on a quick tour!


Dockerfile and docker-compose.yml

These should look familiar. The Dockerfile is used for building your application container, while docker-compose.yml is the one you could use for deploying it. docker-compose.override.yml contains all the things that are needed for local development.
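As a rough sketch of how the files interact (this is standard Compose behavior, nothing specific to this repository): docker-compose automatically layers docker-compose.override.yml on top of docker-compose.yml, and you can inspect or bypass that merge from the WSL terminal:

# show the effective, merged configuration (base + override)
docker-compose config
# run with the development overrides applied (the default behavior)
docker-compose up
# run only the base file, ignoring the override
docker-compose -f docker-compose.yml up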


Pipfile and Pipfile.lock

These include the application dependencies. Pipenv is the tool used to manage them.

The app.py file contains the Flask application used in this example. Nothing special here!

Running the application and making changes

In order to run the application, open a WSL terminal (this is done using the integrated terminal feature of VS Code) and run docker-compose up. This will start all the containers (in this case, a Redis container and the one running the application). After doing so, visit http://localhost:5000 in your browser and voila — you’ve visited your new application. That’s not development though, so let’s make a change and see it in action. Open app.py in VS Code and change the following line:

[Screenshot: the message string returned by app.py being changed]

Refresh the web page and observe that:

  1. The message was immediately changed
  2. The visit counter continued counting from the latest value
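If you prefer the terminal to the browser, you can exercise the endpoint from the WSL shell as well. A minimal sketch (the exact response text depends on the application):

# each request should return the updated message and an ever-increasing counter
for i in 1 2 3; do curl -s http://localhost:5000; echo; done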

Under the hood

Let’s see what actually happened.

  • We changed a file in VS Code, which is running on Windows.
  • Since VS Code is running in a client-server mode with the server running in WSL 2, the change was actually made to the file living inside WSL.
  • Since you’re using the Technical Preview of Docker Desktop for WSL 2 and docker-compose.override.yml is using Linux workspaces to mount the code from WSL 2 directly into the running container, the change was propagated inside the container.
    • While this is possible with the “stable” Docker Desktop for Windows, it isn’t as easy. By using Linux workspaces, we don’t need to worry about file system permissions. It’s also super fast, as it’s a local Linux filesystem mount.
  • Flask is using an auto-reloading server by default, which — using inotify — is reloading the server on every file change and within milliseconds from saving your file, your server was reloaded.
  • Data is stored in Redis using a Docker volume, thus the visits counter was not affected by the restart of the server.

Other tips to help you with Docker Desktop for WSL 2

Here are a few additional tips on developing inside containers using the Technical Preview of Docker Desktop for WSL 2:

  • For maximum file system performance, use Docker volumes for your application’s data and Linux Workspaces for your code.
  • To avoid running an extra VM, switch to Windows containers for your “stable” Docker Desktop for Windows environment.
  • Use docker context default|wsl to switch contexts and develop both Windows and Linux Docker-powered applications easily.

Final Thoughts

I’ve switched to Windows and WSL 2 development for the past two months and I can’t describe how happy I am with my development workflow. Using Docker Desktop for WSL 2 for the past couple of days seems really promising, and most of the current issues of using Docker in WSL 2 seem to be resolved. I can’t wait for what comes next!

The only thing currently missing, in my opinion, is integration with VS Code Remote Containers (instead of Remote WSL, which was used for this blog post), which would allow you to run all your tooling within your Docker container.

Until VS Code Remote Containers support is ready, you can run pipenv install --dev to install the application dependencies on WSL 2, allowing VS Code to provide auto-complete and use all the nice tools included to help in development.

Get the Technical Preview and Learn More

If you’d like to get on board, read the instructions and install the technical preview from the Docker docs.

For more on WSL 2, check out these blog posts:





Docker Compose and Kubernetes with Docker for Desktop
https://www.docker.com/blog/docker-compose-kubernetes-docker-desktop/ | Thu, 15 Feb 2018

If you’re running an edge version of Docker on your desktop (Docker for Mac or Docker for Windows Desktop), you can now stand up a single-node Kubernetes cluster with the click of a button. While I’m not a developer, I think this is great news for the millions of developers who have already been using Docker on their MacBook or Windows laptop, because they now have a fully compliant Kubernetes cluster at their fingertips without installing any other tools.

Developers using Docker to build containerized applications often build Docker Compose files to deploy them. With the integration of Kubernetes into the Docker product line, some developers may want to leverage their existing Compose files but deploy these applications in Kubernetes.

With Docker on the desktop (as well as Docker Enterprise Edition) you can use Docker compose to directly deploy an application onto a Kubernetes cluster.

Here’s how it works:

Let’s assume I have a simple Docker compose file like the one below that describes a three tier app: a web front end, a worker process (words) and a database.

Notice that our web front end is set to route traffic from port 80 on the host to port 80 on the service (and subsequently the underlying containers). Also, our words service is going to launch with 5 replicas.

 

services:
  web:
    build: web
    image: dockerdemos/lab-web
    volumes:
     - "./web/static:/static"
    ports:
     - "80:80"

  words:
    build: words
    image: dockerdemos/lab-words
    deploy:
      replicas: 5
      endpoint_mode: dnsrr
      resources:
        limits:
          memory: 16M
        reservations:
          memory: 16M

  db:
    build: db
    image: dockerdemos/lab-db

I’m using Docker for Mac, and Kubernetes is set as my default orchestrator. To deploy this application I simply use docker stack deploy providing the name of our compose file (words.yaml) and the name of the stack (words). What’s really cool is that this would be the exact same command you would use with Docker Swarm:

$ docker stack deploy --compose-file words.yaml words
Stack words was created
Waiting for the stack to be stable and running...
 - Service db has one container running
 - Service words has one container running
 - Service web has one container running
Stack words is stable and running

 

Under the covers the compose file has created a set of deployments, pods, and services which can be viewed using kubectl.

 
$ kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
db        1         1         1            1           2m
web       1         1         1            1           2m
words     5         5         5            5           2m

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
db-5489494997-2krr2      1/1       Running   0          2m
web-dd5755876-dhnkh      1/1       Running   0          2m
words-86645d96b7-8whpw   1/1       Running   0          2m
words-86645d96b7-dqwxp   1/1       Running   0          2m
words-86645d96b7-nxgbb   1/1       Running   0          2m
words-86645d96b7-p5qxh   1/1       Running   0          2m
words-86645d96b7-vs8x5   1/1       Running   0          2m

$ kubectl get services
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
db              ClusterIP      None            <none>        55555/TCP      2m
web             ClusterIP      None            <none>        55555/TCP      2m
web-published   LoadBalancer   10.104.198.84   <pending>     80:32315/TCP   2m
words           ClusterIP      None            <none>        55555/TCP      2m

If you look at the list of services you might notice something that seems a bit odd at first glance. There are services for both web and web-published. The web service allows for intra-application communication, whereas the web-published service (which is a load balancer backed by vpnkit in Docker for Mac) exposes our web front end out to the rest of the world.

So if we visit http://localhost:80 we can see the application running. You can actually see the whole process in this video that Elton recorded.
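The same check from a terminal looks roughly like this (a sketch; the EXTERNAL-IP reported for the LoadBalancer depends on your Docker for Mac version):

# confirm the published service exists, then hit it on the host port
kubectl get service web-published
curl -s http://localhost:80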


Now, if we wanted to remove the application, you might think you would remove the deployments using kubectl (I know I did). But what you actually do is use docker stack rm, and that will remove all the components created when we brought the stack up.

$ docker stack rm words
Removing stack: words

$ kubectl get deployment
No resources found

And, to me, the cool thing is that this same process can be used with Docker EE – I simply take my Compose file and deploy it directly in the UI of Docker Enterprise Edition (EE) – but that’s another post.

Want to try it for yourself? Grab Docker for Mac or Docker for Windows, but be sure to check out the documentation (Mac and Windows) for more info.

Learn more:




Docker for Windows Desktop… Now With Kubernetes!
https://www.docker.com/blog/docker-windows-desktop-now-kubernetes/ | Tue, 30 Jan 2018

Today we are excited to announce that the beta of Docker for Windows Desktop with integrated Kubernetes is now available in the edge channel! This release includes Kubernetes 1.8, just like Docker for Mac and Docker Enterprise Edition, and will allow you to develop Linux containers.

The easiest way to get Kubernetes on your desktop is here.

Simply check the box and go


What Can You Do with Kubernetes on Your Desktop?

Docker for Mac and Docker for Windows are the most popular way to configure a Docker dev environment, and are each used every day by millions of developers to build, test, and debug containerized apps. The beauty of building with Docker for Mac or Windows is that you can deploy the exact same set of Docker container images on your desktop as you do on your production systems with Docker EE.

Docker for Mac and Docker for Windows are used for building, testing and preparing to ship applications, whereas Docker EE provides the ability to secure and manage your applications in production at scale. You eliminate the “it worked on my machine” problem because you run the same Docker containers on the same Docker engines in development, testing, and production environments, along with the same Docker Swarm and Kubernetes orchestrators.

Docker and Kubernetes

With beta support for Kubernetes, Docker provides users end-to-end container-management software and services spanning from developer workstations running Docker for Mac or Docker for Windows, through test and CI/CD using Docker CE or Docker Enterprise Edition (EE), our container platform, through to production systems on-premises or in the cloud running Docker EE.

How to Get Started

A few things to keep in mind:

  • Edge channel required
    Kubernetes support is still considered a beta with this release, so to enable the download and use of Kubernetes components you must be on the Edge channel. The Docker for Windows version should be 18.02 or later.
  • Already using other Kubernetes tools?
    If you are already running a version of kubectl pointed at another environment, for example minikube, you will want to follow the activation instructions to change contexts to docker-for-desktop.
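For reference, the switch itself is a one-liner. A minimal sketch, assuming the context is named docker-for-desktop as described above:

# see which contexts kubectl knows about, then switch to the Docker for Windows cluster
kubectl config get-contexts
kubectl config use-context docker-for-desktop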



Things To Try

If you are new to Kubernetes and looking for some introductory exercises to try, here are a few resources:

  • The Docker for Windows Desktop with Kubernetes page has instructions for getting an example app up and running
  • Follow along with Docker Developer Advocate Elton Stoneman during his short video, demonstrating activating Kubernetes and deploying an application using both Docker compose and a Kubernetes manifest. (Note: the video shows Docker for Mac but the application works exactly the same in this beta of Docker for Windows…the power of Docker containers in action!)

Send Us Your Feedback

Send us your feedback, ideas for improvement, bugs, complaints and more so we can make Docker better on the desktop. You can use the Docker community forums for general discussions, and you can also directly file technical issues on GitHub.




Learn More:

Exciting new things for Docker with Windows Server 1709
https://www.docker.com/blog/docker-windows-server-1709/ | Mon, 25 Sep 2017

What a difference a year makes… last September, Microsoft and Docker launched Docker Enterprise Edition (EE) for Windows Server 2016, a Containers-as-a-Service platform for IT that manages and secures diverse applications across disparate infrastructures. Since then we’ve continued to work together, and Windows Server 1709 contains several enhancements for Docker customers.

Docker Enterprise Edition Preview

To experiment with the new Docker and Windows features, a preview build of Docker is required. Here’s how to install it on Windows Server 1709 (this will also work on Insider builds):

Install-Module DockerProvider
Install-Package Docker -ProviderName DockerProvider -RequiredVersion preview

To run Docker Windows containers in production on any Windows Server version, please stick to Docker EE 17.06.

Docker Linux Containers on Windows

A key focus of Windows Server version 1709 is support for Linux containers on Windows. We’ve already blogged about how we’re supporting Linux containers on Windows with the LinuxKit project.

To try Linux Containers on Windows Server 1709, install the preview Docker package and enable the feature. The preview Docker EE package includes a full LinuxKit system (all 13MB of it) for use when running Docker Linux containers.

[Environment]::SetEnvironmentVariable("LCOW_SUPPORTED", "1", "Machine")
Restart-Service Docker

To disable, just remove the environment variable:

[Environment]::SetEnvironmentVariable("LCOW_SUPPORTED", $null, "Machine")
Restart-Service Docker

Docker Linux containers on Windows is in preview, with ongoing joint development by Microsoft and Docker. Linux containers on Windows are also available on Windows 10 version 1709 (the Fall Creators Update).

Docker ingress mode service publishing on Windows

Parity with Linux service publishing options has been highly requested by Windows customers. Adding support for service publishing using ingress mode in Windows Server 1709 enables use of Docker’s routing mesh, allowing external endpoints to access a service via any node in the swarm regardless of which nodes are running tasks for the service.

These networking improvements also unlock VIP-based service discovery when using overlay networks so that Windows users are not limited to DNS Round Robin.
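As a minimal sketch of what that enables (the image name is a placeholder and the long --publish syntax assumes Docker 17.06 or later), a service published in ingress mode with a VIP endpoint looks like this:

# publish port 80 of the service on port 8080 of every node in the swarm
docker service create --name web --replicas 2 --publish mode=ingress,published=8080,target=80 --endpoint-mode vip <your-windows-web-image>
# external clients can now reach port 8080 via any node, not just the ones running a task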

Check out the corresponding post on the Microsoft Virtualization blog for details on the improvements.

Named pipes in Windows containers

A common and powerful Docker pattern is to run Docker containers that use the Docker API of the host that the container is running on, for example to start more Docker containers or to visualize the containers, networks and volumes on the Docker host. This pattern lets you ship, in a container, software that manages or visualizes what’s going on with Docker. This is great for building software like Docker Universal Control Plane.

Running Docker on Linux, the Docker API is usually hosted on a Unix domain socket, and since these are in the filesystem namespace, sockets can be bind-mounted easily into containers. On Windows, the Docker API is available on a named pipe. Previously, named pipes were not bind-mountable into Docker Windows containers, but starting with Windows 10 and Windows Server 1709, named pipes can now be bind-mounted.

Jenkins CI is a neat way to demonstrate this. With Docker and Windows Server 1709, you can now:

  1. Run Jenkins in a Docker Windows container (no more hand-installing and maintaining Java, Git and Jenkins on CI machines)
  2. Have that Jenkins container build Docker images and run Docker CI/CD jobs on the same host

I’ve built a Jenkins sample image (Windows Server 1709 required) that uses the new named-pipe mounting feature. To run it, simply start a container, grab the initial password and visit port 8080. You don’t have to set up any Jenkins plugins or extra users:

> docker run -d -p 8080:8080 -v \\.\pipe\docker_engine:\\.\pipe\docker_engine friism/jenkins
3c90fdf4ff3f5b371de451862e02f2b7e16be4311903649b3fc8ec9e566774ed
> docker exec 3c cmd /c type c:\.jenkins\secrets\initialAdminPassword
<password>

Now create a simple freestyle project and use the “Windows Batch Command” build step. We’ll build my fork of the Jenkins Docker project itself:

git clone --depth 1 --single-branch --branch add-windows-dockerfile https://github.com/friism/docker-3 %BUILD_NUMBER%
cd %BUILD_NUMBER%
docker build -f Dockerfile-windows -t jenkins-%BUILD_NUMBER% .
cd ..
rd /s /q %BUILD_NUMBER%

Hit “Build Now” and see Jenkins (running in a container) start to build a CI job to build a container image on the very host it’s running on!

Smaller Windows base images

When Docker and Microsoft launched Windows containers last year, some people noticed that Windows container base images are not as small as typical Linux ones. Microsoft has worked very hard to winnow down the base images, and with 1709, the Nanoserver download is now about 70MB (200MB expanded on the filesystem).

One of the things that’s gone from the Nanoserver Docker image is PowerShell. This can present some challenges when authoring Dockerfiles, but multi-stage builds make it fairly easy to do all the build and component assembly in a Windows Server Core image, and then move just the results into a nanoserver image. Here’s an example showing how to build a minimal Docker image containing just the Docker CLI:

# escape=`
FROM microsoft/windowsservercore as builder
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN Invoke-WebRequest -Uri https://download.docker.com/win/static/test/x86_64/docker-17.09.0-ce-rc1.zip -OutFile 'docker.zip'
RUN Expand-Archive -Path docker.zip -DestinationPath .

FROM microsoft/nanoserver
COPY --from=builder ["docker\\docker.exe", "C:\\Program Files\\docker\\docker.exe"]
RUN setx PATH "%PATH%;C:\Program Files\docker"
ENTRYPOINT ["docker"]

You now get the best of both worlds: an easy-to-use, full-featured build environment and ultra-small, minimal runtime images that deploy and start quickly and have minimal exploit surface area. Another good example of this pattern in action is the set of .NET Core base images maintained by the Microsoft .NET team.
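To try the Dockerfile above, the build-and-run steps are the usual ones (the image tag is just an example):

docker build -t docker-cli-nano .
# the ENTRYPOINT is the Docker CLI, so any arguments become a docker command;
# "version" prints the client details (the server section fails, since no daemon is reachable inside the container)
docker run --rm docker-cli-nano version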

Summary

It’s hard to believe that Docker Windows containers GA’d on Windows Server 2016 and Windows 10 just one year ago. In those 12 months, we’ve seen lots of adoption by the Docker community and lots of uptake with customers and partners. The latest release only adds more functionality to smooth the user experience and brings Windows overlay networking up to par with Linux, with smaller container images and with support for bind-mounting named pipes into containers.

To learn more about Docker solutions for IT:




Preview: Linux Containers on Windows
https://www.docker.com/blog/preview-linux-containers-on-windows/ | Wed, 13 Sep 2017

Microsoft is getting ready for the next big update for Windows Server (check out today’s complementary Microsoft blog post) and some of the new features are very exciting for Docker users. One of the most important enhancements is that Docker can now run Linux containers on Windows (LCOW), using Hyper-V technology.

Running Docker Linux containers on Windows requires a minimal Linux kernel and userland to host the container processes. This is exactly what the LinuxKit toolkit was designed for: creating secure, lean and portable Linux subsystems that can provide Linux container functionality as a component of a container platform.

We’ve been busy prototyping LinuxKit support for Docker Linux containers on Windows and have a working preview for you to try. This is still a work in progress, and requires either the recently announced  “Windows Server Insider” or Windows 10 Insider builds.

UPDATE: LCOW support is available in Windows 10 Fall Creators Update and in Windows Server 1709. The simplest way to try it out on Windows 10 is to install the edge variant of Docker for Windows (details). On Windows Server 1709, install EE preview.

Running Docker Linux Containers on Windows with LinuxKit

UPDATE: The LinuxKit LCOW repo has a README with updated details for users interested in LinuxKit.

The instructions below have been tested on Windows 10 and Windows Server Insider builds 16278 and 16281.

Be sure to install Docker for Windows (Windows 10) or Docker Enterprise Edition (Windows Server Insider) before starting.

Set Up Docker and LinuxKit

A preview build of LinuxKit is available by simply running the following commands in PowerShell (with Administrator rights):

$progressPreference = 'silentlyContinue'
mkdir "$Env:ProgramFiles\Linux Containers"

Invoke-WebRequest -UseBasicParsing -OutFile linuxkit.zip https://github.com/friism/linuxkit/releases/download/preview-1/linuxkit.zip

Expand-Archive linuxkit.zip -DestinationPath "$Env:ProgramFiles\Linux Containers\."
rm linuxkit.zip

Now get a master branch build of the Docker daemon that contains preview support for Linux containers on Windows:

Invoke-WebRequest -UseBasicParsing -OutFile dockerd.exe https://master.dockerproject.org/windows/x86_64/dockerd.exe

Start a new Docker daemon listening on a separate pipe and using a separate storage location from the default install:

$Env:LCOW_SUPPORTED=1
$env:LCOW_API_PLATFORM_IF_OMITTED="linux"
.\dockerd.exe -D --experimental -H "npipe:////./pipe//docker_lcow" --data-root c:\lcow

Try it

Run a Linux container:

docker -H "npipe:////./pipe//docker_lcow" run -ti busybox sh

Docker just launched a minimal VM running a LinuxKit instance hosting the Linux container!

Since this is an early preview there are some limitations, but basic Docker operations like pull and run work.

Looking ahead

Both Windows Server Insider builds and Docker support for Linux containers on Windows are in early preview mode. When GA, Docker Linux containers on Windows will improve the Docker Linux container experience for both Windows developers and server administrators. Developers will be able to more easily build and test mixed Windows/Linux Docker applications by running containers for both platforms side-by-side on the same system.

And IT-admins that prefer Windows will soon be able to easily run (mostly) Linux-only software like HAProxy and Redis on Windows systems by way of Docker Linux containers on Windows. For example, Docker Linux containers on Windows will make setting up Docker Enterprise Edition and Universal Control Plane (which relies on some Linux-only components) on Windows Server much simpler.

We hope this LinuxKit-based walkthrough will set you up to start experimenting. Feedback is always welcome! For general help and getting started with Insider builds use the Windows Feedback Hub (Windows 10), or the Windows Server Insiders Tech Community. For issues with LinuxKit and Docker support for Linux containers on Windows use the Docker for Windows issue tracker on GitHub. And let us know on Twitter if you build something cool!

More Resources:




Docker for Windows Server and Image2Docker
https://www.docker.com/blog/docker-windows-server-image2docker/ | Tue, 17 Jan 2017

In December we had a live webinar focused on Windows Server Docker containers. We covered a lot of ground and we had some great feedback – thanks to all the folks who joined us. This is a brief recap of the session, which also gives answers to the questions we didn’t get round to.

Webinar Recording

You can view the webinar on YouTube:

The recording clocks in at just under an hour. Here’s what we covered:

  • 00:00 Introduction
  • 02:00 Docker on Windows Server 2016
  • 05:30 Windows Server 2016 technical details
  • 10:30 Hyper-V and Windows Server Containers
  • 13:00 Docker for Windows Demo – ASP.NET Core app with SQL Server
  • 25:30 Additional Partnerships between Docker, Inc. and Microsoft
  • 27:30 Introduction to Image2Docker
  • 30:00 Demo – Extracting ASP.NET Apps from a VM using Image2Docker
  • 52:00 Next steps and resources for learning Docker on Windows

Q&A

Can these [Windows] containers be hosted on a Linux host?

No. Docker containers use the underlying operating system kernel to run processes, so you can’t mix and match kernels. You can only run Windows Docker images on Windows, and Linux Docker images on Linux.

However, with an upcoming release to the Windows network stack, you will be able to run a hybrid Docker Swarm – a single cluster containing a mixture of Linux and Windows hosts. Then you can run distributed apps with Linux containers and Windows containers communicating in the same Docker Swarm, using Docker’s networking layer.
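Once that lands, the workflow should look something like this sketch (placeholders for the token, IP and image names; the node.platform.os constraint requires a recent Docker engine):

# on a Linux manager
docker swarm init
# on each Windows host, using the join token printed by the command above
docker swarm join --token <worker-token> <manager-ip>:2377
# pin each service to the right kernel with a placement constraint
docker service create --name web --constraint node.platform.os==windows <windows-web-image>
docker service create --name proxy --constraint node.platform.os==linux nginx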

Is this only for ASP.NET Core apps?

No. You can package pretty much any Windows application into a Docker image, provided it can be installed and run without a UI.

The first demo in the Webinar showed an ASP.NET Core app running in Docker. The advantage with .NET Core is that it’s cross-platform so the same app can run in Linux or Windows containers, and on Windows you can use the lightweight Nano Server option.

In the second demo we showed ASP.NET WebForms and ASP.NET MVC apps running in Docker. Full .NET Framework apps need to use the Windows Server Core base image, but that gives you access to the whole feature set of Windows Server 2016.

If you have existing ASP.NET applications running in VMs, you can use the Image2Docker tool to port them across to Docker images. Image2Docker works on any Windows Server VM, from Server 2003 to Server 2016.


How does licensing work?

For production, licensing is at the host level, i.e. each machine or VM which is running Docker. Your Windows licence on the host allows you to run any number of Windows Docker containers on that host. With Windows Server 2016 you get the commercially supported version of Docker included in the licence costs, with support from Microsoft and Docker, Inc.

For development, Docker for Windows runs on Windows 10 and is free, open-source software. Docker for Windows can also run a Linux VM on your machine, so you can use both Linux and Windows containers in development. Like the server version, your Windows 10 licence allows you to run any number of Windows Docker containers.

Windows admins will want a unified platform for managing images and containers. That’s Docker Datacenter which is separately licensed, and will be available for Windows soon.

What about Windows updates for the containers?

Docker containers have a different life cycle from full VMs or bare-metal servers. You wouldn’t deploy an app update or a Windows update inside a running container – instead you update the image that packages your app, then just kill the container and start a new container from the updated image.

Microsoft are supporting that workflow with the two Windows base images on Docker Hub – for Windows Server Core and Nano Server. They are following a monthly release cycle, and each release adds an incremental update with new patches and security updates.

For your own applications, you would aim to have the same deployment schedule – after a new release of the Windows base image, you would rebuild your application images and deploy new containers. All this can be automated, so it’s much faster and more reliable than manual patching. Docker Captain Stefan Scherer has a great blog post on keeping your Windows containers up to date.
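A minimal sketch of that monthly rebuild (image names and tags are illustrative only):

# pick up the newest Windows Server Core base layer and force the build to use it
docker pull microsoft/windowsservercore
docker build --pull -t <docker-id>/my-windows-app:2017-01 .
docker push <docker-id>/my-windows-app:2017-01
# then replace running containers with ones started from the new image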

Additional Resources




Image2Docker: A New Tool for Prototyping Windows VM Conversions
https://www.docker.com/blog/image2docker-prototyping-windows-vm-conversions/ | Wed, 28 Sep 2016

Docker is a great tool for building, shipping, and running your applications. Many companies are already moving their legacy applications to Docker containers, and with the introduction of Microsoft Windows Server 2016, Docker Engine can now run containers natively on Windows. To make it even easier, there’s a new prototyping tool for Windows VMs that shows you how to replicate a VM image to a container.

Docker Captain Trevor Sullivan recently released the Image2Docker tool, an open source project we’re hosting on GitHub. Still in its early stages, Image2Docker is a PowerShell module that you can point at a virtual hard disk image; it scans for common Windows components and suggests a Dockerfile. And we’re hosting it in the PowerShell Gallery to make it easy to install and use.

In PowerShell, just type:

Install-Module -Name Image2Docker

And you’ll have access to Get-WindowsArtifacts and ConvertTo-Dockerfile. You can even select which discovery artifacts to search for.
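A typical invocation looks roughly like the following. Treat the parameter names as a sketch from memory rather than gospel, and check Get-Help ConvertTo-Dockerfile for the authoritative syntax:

# scan a VHD for IIS artifacts and emit a Dockerfile plus supporting files (paths are hypothetical)
ConvertTo-Dockerfile -ImagePath C:\vms\webserver.vhd -Artifact IIS -OutputPath C:\docker\webserver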


Currently, Image2Docker supports VHD, VHDX, and WIM images. If you have a VMDK, Microsoft provides a great conversion tool to convert VMDK images to VHD images.

And as an open source project, led by a Docker Captain, it’s easy to contribute. We welcome contributions to add more discovery objects and functionality.

More Resources:




Build and run your first Docker Windows Server container
https://www.docker.com/blog/build-your-first-docker-windows-server-container/ | Mon, 26 Sep 2016

Today, Microsoft announced the general availability of Windows Server 2016, and with it, Docker Engine running containers natively on Windows. This blog post describes how to get set up to run Docker Windows containers on Windows 10 or using a Windows Server 2016 VM. Check out the companion blog posts on the technical improvements that have made Docker containers on Windows possible and the post announcing the Docker Inc. and Microsoft partnership.

Before getting started, it’s important to understand that Windows containers run Windows executables compiled for the Windows Server kernel and userland (either windowsservercore or nanoserver). To build and run Windows containers, a Windows system with container support is required.

Windows 10 with Anniversary Update

For developers, Windows 10 is a great place to run Docker Windows containers, and containerization support was added to the Windows 10 kernel with the Anniversary Update (note that container images can only be based on Windows Server Core and Nanoserver, not Windows 10). All that’s missing is the Windows-native Docker Engine and some image base layers.

The simplest way to get a Windows Docker Engine is by installing the Docker for Windows public beta (direct download link). Docker for Windows used to only set up a Linux-based Docker development environment (slightly confusing, we know), but the public beta version now sets up both Linux and Windows Docker development environments, and we’re working on improving Windows container support and Linux/Windows container interoperability.

With the public beta installed, the Docker for Windows tray icon has an option to switch between Linux and Windows container development. For details on this new feature, check out Stefan Scherer’s blog post.

Switch to Windows containers and skip the next section.


Windows Server 2016

Windows Server 2016 is where Docker Windows containers should be deployed for production. For developers planning to do lots of Docker Windows container development, it may also be worth setting up a Windows Server 2016 dev system (in a VM, for example), at least until Windows 10 and Docker for Windows support for Windows containers matures.

For Microsoft Ignite 2016 conference attendees, USB flash drives with Windows Server 2016 preloaded are available at the expo. Not at Ignite? Download a free evaluation version and install it on bare metal or in a VM running on Hyper-V, VirtualBox or similar. Running a VM with Windows Server 2016 is also a great way to do Docker Windows container development on macOS and older Windows versions.

Once Windows Server 2016 is running, log in, run Windows Update to ensure you have all the latest updates and install the Windows-native Docker Engine directly (that is, not using “Docker for Windows”). Run the following in an Administrative PowerShell prompt:

Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Install-Module -Name DockerMsftProvider -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force

Docker Engine is now running as a Windows service, listening on the default Docker named pipe. For development VMs running (for example) in a Hyper-V VM on Windows 10, it might be advantageous to make the Docker Engine running in the Windows Server 2016 VM available to the Windows 10 host:

# Open firewall port 2375
netsh advfirewall firewall add rule name="docker engine" dir=in action=allow protocol=TCP localport=2375

# Configure Docker daemon to listen on both pipe and TCP (replaces docker --register-service invocation above)
Stop-Service docker
dockerd --unregister-service
dockerd -H npipe:// -H 0.0.0.0:2375 --register-service
Start-Service docker

The Windows Server 2016 Docker engine can now be used from the VM host by setting DOCKER_HOST:

$env:DOCKER_HOST = "<ip-address-of-vm>:2375"

See the Microsoft documentation for more comprehensive instructions.

Running Windows containers

First, make sure the Docker installation is working:

> docker version
Client:
Version:      1.12.1
API version:  1.24
Go version:   go1.6.3
Git commit:   23cf638
Built:        Thu Aug 18 17:32:24 2016
OS/Arch:      windows/amd64
Experimental: true

Server:
Version:      1.12.2-cs2-ws-beta
API version:  1.25
Go version:   go1.7.1
Git commit:   62d9ff9
Built:        Fri Sep 23 20:50:29 2016
OS/Arch:      windows/amd64

Next, pull a base image that’s compatible with the evaluation build, re-tag it and do a test run:

docker pull microsoft/windowsservercore
docker run microsoft/windowsservercore hostname
69c7de26ea48

Building and pushing Windows container images

Pushing images to Docker Cloud requires a free Docker ID. Storing images on Docker Cloud is a great way to save build artifacts for later use, to share base images with co-workers or to create build pipelines that move apps from development to production with Docker.

Docker images are typically built with docker build from a Dockerfile recipe, but for this example, we’re going to just create an image on the fly in PowerShell.

"FROM microsoft/windowsservercore `n CMD echo Hello World!" | docker build -t <docker-id>/windows-test-image -

Test the image:

docker run <docker-id>/windows-test-image
Hello World!

Login with docker login and then push the image:

docker push <docker-id>/windows-test-image

Images stored on Docker Cloud are available in the web interface, and public images can be pulled by other Docker users.

Using docker-compose on Windows

Docker Compose is a great way to develop complex multi-container apps consisting of databases, queues and web frontends. Compose support for Windows is still a little patchy and only works on Windows Server 2016 at the time of writing (i.e. not on Windows 10).

To develop with Docker Compose on a Windows Server 2016 system, install compose too (this is not required on Windows 10 with Docker for Windows installed):

Invoke-WebRequest https://dl.bintray.com/docker-compose/master/docker-compose-Windows-x86_64.exe -UseBasicParsing -OutFile $env:ProgramFiles\docker\docker-compose.exe

To try out Compose on Windows, clone a variant of the ASP.NET Core MVC MusicStore app, backed by a SQL Server Express 2016 database. A correctly tagged microsoft/windowsservercore image is required before starting.

git clone https://github.com/friism/Musicstore
...
cd Musicstore
docker-compose -f .\docker-compose.windows.yml build
...
docker-compose -f .\docker-compose.windows.yml up
...

To access the running app from the host running the containers (for example when running on Windows 10, or when opening a browser on the Windows Server 2016 system running the Docker engine), use the container IP and port 5000. localhost will not work:

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" musicstore_web_1
172.21.124.54

If using Windows Server 2016 and accessing from outside the VM or host, simply use the VM or host IP and port 5000.

Summary

This post described how to get set up to build and run native Docker Windows containers on both Windows 10 and the recently published Windows Server 2016 evaluation release. To see more example Windows Dockerfiles, check out the Golang, MongoDB and Python Docker Library images.

Please share any Windows Dockerfiles or Docker Compose examples you build with @docker on Twitter using the tag #windows. And don’t hesitate to reach out on the Docker Forums if you have questions.

More Resources:




The 10 Most Common Questions IT Admins Ask About Docker
https://www.docker.com/blog/the-10-most-common-questions-it-admins-ask-about-docker/ | Wed, 27 Jul 2016

Over the past few months we have attended a string of industry tradeshow events, helping to teach the world about Docker for the enterprise. We were at HPE Discover, DockerCon, Red Hat Summit and Cisco Live, all within the past six weeks! I had the pleasure of helping to represent Docker at each event and spoke with attendees. Some folks worked in IT ops, while others worked in development. I also spoke with a lot of folks working as IT admins within their company’s infrastructure team, and over time, I began to notice some trends when it came to the types of questions they asked. This got me thinking. Why not put together a list of the most common questions from IT administrators? I mean, there’s a good chance there are other IT infrastructure folks out there who have the very same questions, right?

So here it is. The list IT admins have been waiting for. The ten most common questions (and their answers) from IT admins.

  1. So what exactly is Docker? Something about “container applications” right?

Docker is an open platform that both IT operations teams and developer teams use to build, ship and run their applications, giving them the agility, portability and control that each team requires across the software supply chain. We have created a standard Docker container that packages up an application with everything that the application requires to run. This standardization allows teams to containerize applications written in any language and run them in any environment, on any infrastructure.

  2. What is a Docker container and how is it different than a VM? Does containerization replace my virtualization infrastructure?

Containerization is very different from virtualization. It starts with the Docker Engine, the tool that creates and runs one or more containers, and is the Docker software installed on any physical, virtual or cloud host with a compatible OS. Containerization leverages the kernel within the host operating system to run multiple root file systems. We call these root file systems “containers.” Each container shares the kernel within the host OS, allowing you to run multiple Docker containers on the same host. Unlike VMs, containers do not have an OS within them. They simply share the underlying kernel with the other containers. Each container running on a host is completely isolated, so applications running on the same host are unaware of each other (you can use Docker Networking to create a multi-host overlay network that enables containers running on different hosts to speak to one another).
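One quick way to see the shared kernel in practice (assuming a Linux host with Docker installed; the alpine image is pulled automatically):

# the kernel release reported inside the container matches the host's,
# because containers share the host kernel rather than booting their own OS
uname -r
docker run --rm alpine uname -r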

The image below shows containerization on the left and virtualization on the right. Notice how containerization (left), unlike virtualization (right) does not require a hypervisor or multiple OSs.

[Image: containerization (left) vs. virtualization (right)]

Docker containers and traditional VMs are not mutually exclusive, so no, containers do not have to replace VMs. Docker containers can actually run within VMs. This allows teams to containerize each service and run multiple Docker containers per VM.


  3. What’s the benefit of “Dockerizing?”…

By Dockerizing their environment, enterprise teams can leverage the Docker Containers-as-a-Service (CaaS) platform. CaaS gives development teams and IT operations teams agility, portability and control within their environment.

Developers love Docker because it gives them the ability to quickly build and ship applications. Since Docker containers are portable and can run in any environment (with Docker Engine installed on physical, virtual or cloud hosts), developers can go from dev, test, staging and production seamlessly, without having to recode. This accelerates the application lifecycle and allows them to release applications 13x more often. Docker containers also make it super easy for developers to debug applications, create an updated image and quickly ship an updated version of the application.

IT Ops teams can manage and secure their environment while allowing developers to build and ship apps in a self-service manner. The Docker CaaS platform is supported by Docker, deploys on-premises and is chock full of enterprise security features like role-based access control, integration with LDAP/AD, image signing and many more.

In addition, IT ops teams have the ability to manage, deploy and scale their Dockerized applications across any environment. For example, the portability of Docker containers allows teams to migrate workloads running in AWS over to Azure, without having to recode and with no downtime. Teams can also migrate workloads from their cloud environment down to their physical datacenter, and back. This enables teams to utilize the best infrastructure for their business needs, rather than being locked into a particular infrastructure type.

The lightweight nature of Docker containers compared to traditional tools like virtualization, combined with the ability for Docker containers to run within VMs, allows teams to optimize their infrastructure by 20x and save money in the process.

  4. From an infrastructure standpoint, what do I need from Docker? Is Docker a piece of hardware running in my datacenter, and how taxing is it on my environment?

The Docker engine is the software that is installed on the host (bare metal server, VM or public cloud instance) and is the only “Docker infrastructure” you’ll need. The tool creates, runs and manages Docker containers. So actually, there is no hardware installation necessary at all.

The Docker Engine itself is very lightweight, weighing in around 80 MB total.

  5. What exactly do you mean by “Dockerized node”? Can this node be on-premises or in the cloud?

A Dockerized node is anything (i.e. a bare metal server, VM or public cloud instance) that has the Docker Engine installed and running on it.

Docker can manage nodes that exist on-premises as well as in the cloud. Docker Datacenter is an on-premises solution that enterprises use to create, manage, deploy and scale their applications and comes with support from the Docker team. It can manage hosts that exist in your datacenter as well as in your virtual private cloud or public cloud provider (AWS, Azure, Digital Ocean, SoftLayer etc.).

  6. Do Docker containers package up the entire OS and make it easier to deploy?

Docker containers do not package up the OS. They package up the application with everything that the application needs to run. The engine is installed on top of the OS running on a host. Containers share the OS kernel, allowing a single host to run multiple containers.

  7. What OS can the Docker Engine run on?

The Docker Engine runs on all modern Linux distributions. We also provide a commercially supported Docker Engine for Ubuntu, CentOS, OpenSUSE, RHEL. There is also a technical preview of Docker running on Windows Server 2016.

  8. How does Docker help manage my infrastructure? Do I containerize all my infrastructure or something?

Docker isn’t focused on managing your infrastructure. The platform, which is infrastructure agnostic, manages your applications and helps ensure that they can run smoothly, regardless of infrastructure type via solutions like Docker Datacenter. This gives your company the agility, portability and control you require. Your team is responsible for managing the actual infrastructure.

  9. How many containers can run per host?

As far as the number of containers that can be run, this really depends on your environment. The size of your applications as well as the amount of available resources (e.g. CPU) will affect the number of containers that can be run in your environment. Containers unfortunately are not magical. They can’t create new CPU from scratch. They do, however, provide a more efficient way of utilizing your resources. The containers themselves are super lightweight (remember, shared OS vs. individual OS per container) and only last as long as the process they are running. Immutable infrastructure, if you will.
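Resource limits are how you keep that density under control. A small sketch (flags as on current Docker engines; the values are purely illustrative):

# cap a container at half a CPU and 256 MB of RAM
docker run -d --name capped --cpus 0.5 --memory 256m nginx
# watch per-container CPU/memory usage to judge how many containers fit on the host
docker stats --no-stream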

  10. What do I have to do to begin the “Dockerization process”?

The best way for your team to get started is for your developers to download Docker for Mac or Docker for Windows. These are native installations of Docker on a Mac or Windows device. From there, developers will take their applications and create a Dockerfile. The Dockerfile is where all of the application configuration is specified. It is essentially the blueprint for the Docker image. The image is a snapshot of your application and is what the Docker Engine looks at so it knows what the container it is spinning up should look like.
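The end-to-end flow, as a rough sketch with placeholder names, is just a few commands once the Dockerfile exists:

# turn the Dockerfile into an image, run it as a container, then share it
docker build -t <docker-id>/myapp .
docker run -d -p 8080:80 <docker-id>/myapp
docker push <docker-id>/myapp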

If your developers aren’t using Docker quite yet, feel free to point them to our website, where they can learn more: www.docker.com.

Bonus Question: We have several monolithic applications in our environment. But Docker only works for microservices, right?

I added this in because it is one of the biggest misconceptions about Docker. Docker can absolutely be used to containerize monolithic apps as well as microservices-based apps. We find that most customers who are leveraging Docker containerize their legacy monolithic applications to benefit from the isolation that Docker containers provide, as well as the portability. Remember, Docker containers can package up any application (monolithic or distributed) and migrate workloads to any infrastructure. This portability is what enables our enterprise customers to embrace strategies like moving to the hybrid cloud.

In the case of microservices, customers typically containerize each service and use tools like Docker Compose to deploy these multi-container distributed applications into their production environment as a single running application.

We’ve even seen some companies have a hybrid environment where they are slowly restructuring their dockerized monolithic applications to become dockerized distributed applications over time. This is the case with ADP, a Docker Datacenter customer of ours.

So there it is. The list of the top ten questions that IT admins ask about Docker, plus a bonus question for good measure.

Now, I have a question for you. Has YOUR team started using Docker? If not, it may be time for you to try the new hotness.

If you are looking to learn more about containers vs. VMs, take a look at this webinar recording from a few weeks ago called “Containers for The Virtualization Admin”

Additional resources that may pique your interest:




Improving Docker with Unikernels: Introducing HyperKit, VPNKit and DataKit
https://www.docker.com/blog/docker-unikernels-open-source/ | Wed, 18 May 2016

We’ve been working hard to build native Docker for Mac and Windows apps to ensure that your Docker experience is as seamless as possible on the most popular developer operating systems. Docker for Mac and Windows include everything required to spin up a Linux Docker container that efficiently bridges storage and networking from the host into the Docker containers. They work transparently on both MacOS X and Windows, and require no other third party software.

Docker has always been built on open-source foundations: Solomon Hykes is presenting a keynote today at OSCON 2016 about the incremental revolution that the firehose of collaborative open source development has enabled throughout Docker’s history.  Today, we are adding to our existing open source contributions by open sourcing the core technology that powers the Docker for Mac and Windows desktop applications!

Building Docker for Mac and Windows has required integrating hardware virtualization, embedded operating systems and unikernel technology, all without exposing this magic to the end user. Let’s take a look under the hood of our applications to understand what some of this source code does, and give you a better idea of how to contribute to it or use it in your own projects.

When you run Docker for Mac, it spins up a lightweight hypervisor that exists solely to run a single, embedded Linux instance that includes the latest stable release of Docker Engine. Unlike most hypervisors, this requires no special admin privileges since it uses the included Hypervisor Framework (available since OSX 10.10). The Docker application also bundles libraries that supply the Docker VM with host networking and storage capabilities that map intelligently between Linux and OSX/Windows semantics.

Today, we are excited to announce the open-sourcing of these discrete components, the same source code we use in the release builds of Docker for Mac and Windows. The new components are:

  • HyperKit ™: A lightweight virtualization toolkit on OSX
  • DataKit ™: A modern pipeline framework for distributed components
  • VPNKit ™: A library toolkit for embedding virtual networking

Each of these kits can be used independently or together to form a complete product such as Docker for Mac or Windows.  This is just the beginning: we will open more components in the future as they mature (e.g. the filesystem framework).  They all have a set of curated Pioneer Projects for beginners to take on: HyperKit ™, DataKit ™, and VPNKit ™.


HyperKit

HyperKit is based around a lightweight approach to virtualization that is possible due to the Hypervisor framework being supplied with MacOS X 10.10 onwards. HyperKit applications can take advantage of hardware virtualization to run VMs, but without requiring elevated privileges or complex management tool stacks.

HyperKit is built on the xHyve and bHyve projects, with additional functionality to make it easier to interface with other components such as the VPNKit or DataKit. Since HyperKit is broadly structured as a library, linking it against unikernel libraries is straightforward. For example, we added persistent block device support that uses the MirageOS QCow libraries written in OCaml.

How can you contribute?

There are three great areas for contribution:

  • Support for booting more guest operating systems. Linux is the only “first class” operating system supported at the moment. FreeBSD does boot, but requires running the installer and so isn’t as seamless. Patches exist to add more BIOS support to boot Windows, OpenBSD, or NetBSD, but require more testing.
  • Support for more high-level language bindings. Because the HyperKit is structured as a library, it can be interfaced with high-level languages using their normal foreign function interfaces.
  • Hypervisor features. Several traditional hypervisor features such as suspend/resume, live relocation and support for hardware performance counters are not supported. These need to be added in the same library style as the rest of the codebase, in order to ensure that HyperKit remains lightweight and easy to embed.

We will ensure that any contributions are structured such that they can be submitted to their respective upstream projects.

How else can you use it?

Any applications that need to spin up specialised or short-lived virtual machines can benefit from linking against HyperKit. These could be conventional operating systems such as Linux, or some of the unikernel projects once they have been ported to HyperKit.

DataKit

DataKit is a toolkit to coordinate processes with a git-compatible filesystem interface. It revisits the UNIX pipeline concept and the Plan9 9P protocol, but with a modern twist: streams of tree-structured data instead of raw text. DataKit lets you define complex workflows between loosely coupled processes using something as simple as shell scripts interacting with a version controlled file-system.

DataKit is a rethinking of application architecture around data flows, bringing back the wisdom of Plan 9’s “everything is a file”, in the git era where “everything is a versioned file”. Since we are making use of DataKit and 9P heavily in Docker for Mac and Windows, we are also open sourcing go-p9p, a modern, performant 9P library for Go.

How else can you use it?

There is a sample project using DataKit to create a Continuous Integration system in 50 lines of shell scripts in this repository: github.com/docker/datakit/tree/master/ci

The README also covers DataKit integration with GitHub. DataKit can be used in any situation where you need to coordinate processes around data, and shines when it is around versioned data.

How can you contribute?

GitHub PR support in DataKit is still quite basic; this is an area that could use additional contributions. DataKit could be used for a very broad set of use cases: share how you use it in your projects.

VPNKit

The VPNKit is a networking library that translates between raw Ethernet network traffic and their equivalent socket calls in MacOS X or Windows. It is based on the MirageOS TCP/IP unikernel stack, and is a library written in OCaml. VPNKit is useful when you need fine-grained control over networking protocols in user-space, with the additional convenience of being extensible in a high-level language.

How can you contribute?

VPNKit provides an interception point for all container traffic going through Docker for Mac or Windows. It could be extended with support for packet capture and inspection, protocol proxying to filter for particular traffic patterns, or even HTTP protocol visualisation for debugging web applications.

How else can you use it?

If VPNKit had support for more endpoint types, it could also be used to test network traffic without the overhead of actually generating and transmitting it.  It could also be used to build lightweight overlay networks between application components.




Next Steps

While the VPNKit and DataKit started life as quite specialised components in Docker for Mac and Windows, we are excited by the possibilities enabled by open sourcing them. The ideas here are by no means exhaustive, and we are looking forward to hearing about your own projects. Please file issues in their respective bug trackers as you come across them, or if you wish to discuss a particular idea.

And if you are at OSCON please come meet and collaborate with the maintainers of these projects in our OSCON Contribute session on Thursday 3 to 6 PM in Meeting Room 6. You can find more details about the internals of Docker for Mac and Windows in the slides for the talk I gave yesterday at OSCON.

If you haven’t already, please sign up for the Docker for Mac and Windows beta and send us feedback to make it better as we head towards general availability.  Finally, we would once again like to thank all of the open source efforts that made this release possible. The Docker for Mac and Windows acknowledgements list the hundreds of contributions that we use directly in our product, and we hope that you will also be able to check out and benefit from today’s releases in your own creations.



Learn More about Docker
