Docker Desktop 4.19: Compose v2, the Moby project, and more
https://www.docker.com/blog/docker-desktop-4-19/ (Tue, 02 May 2023)

Docker Desktop release 4.19 is now available. In this post, we highlight features added to Docker Desktop in the past month, including performance enhancements, new language support, and a Moby update.

5x faster container-to-host networking on macOS

In Docker Desktop 4.19, we’ve made container-to-host networking performance 5x faster on macOS by replacing vpnkit with the TCP/IP stack from the gVisor project.

Many users work on projects that have containers communicating with a server outside their local Docker network. One example of this would be workloads that download packages from the internet via npm install or apt-get. This performance improvement should help a lot in these cases.

Over the next month, we’ll keep track of the stability of this new networking stack. If you notice any issues, you can revert to using the legacy vpnkit networking stack by setting "networkType":"vpnkit" in Docker Desktop’s settings.json config file.
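
For reference, a minimal sketch of the relevant entry (on macOS, settings.json typically lives at ~/Library/Group Containers/group.com.docker/settings.json, and the file contains many other keys that are omitted here):

{
  "networkType": "vpnkit"
}

You will likely need to restart Docker Desktop for the change to take effect.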

Docker Init (Beta): Support for Node and Python

In our 4.18 release, we introduced docker init, a CLI command in Beta that helps you easily add Docker to any of your projects by creating the required assets for you. In the 4.19 release, we’re happy to add to this and share that the feature now includes support for Python and Node.js. 

You can try docker init with Python and Node.js by updating to the latest version of Docker Desktop (4.19) and typing docker init in the command line while inside a target project folder. 
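
For example, from a terminal (the project path below is just a placeholder):

cd path/to/your-project
docker init

The command then walks you through creating the required files interactively.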

The Docker team is working on adding more languages and frameworks for this command, including Java, Rust, and .NET. Let us know if you would like us to support a specific language or framework. We welcome any feedback you may have as we continue to develop and improve Docker Init (Beta).

Figure: the Docker Init CLI welcome message, noting that the utility will walk you through creating the following files with sensible defaults for your project: .dockerignore, Dockerfile, and compose.yaml.

Docker Scout (Early Access)

With Docker Desktop release 4.19, we’ve made it easier to view Docker Scout data for all of your images directly in Docker Desktop. Whether you’re using an image stored locally in Docker Desktop or a remote image from Docker Hub, you can see all that data without leaving Docker Desktop.

Figure: the Images view in Docker Desktop showing a myorg organization in Hub with myorg/app, myorg/service, and myorg/auth; the myorg/app:latest detail view with its image hierarchy, 28 layers, and 48 vulnerabilities across 746 packages; and recommended fixes for the debian base image, including the preferred stable-slim tag and a major OS version update (10-slim).

A nudge toward Compose v2

Compose v1 has reached end-of-life and will no longer be bundled with Docker Desktop after June 2023.

In preparation, a new warning will be shown in the terminal when running Compose v1 commands. Set the COMPOSE_V1_EOL_SILENT=1 environment variable to suppress this message.
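
For example, in a shell session:

export COMPOSE_V1_EOL_SILENT=1
docker-compose up    # runs without printing the end-of-life warning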

You can upgrade by enabling Use Compose v2 in the Docker Desktop settings. When active, Docker Desktop aliases docker-compose to Compose v2 and supports the recommended docker compose syntax.

Moby 23

We updated the Docker Engine and the CLI to Moby 23.0, where we are upstreaming internal developments to open source, such as the containerd integration and Wasm support, which will ship with Moby 24.0. Moby 23.0 also includes enhancements such as the --format=json shorthand for --format='{{ json . }}' and support for relative source paths in the -v/--volume and --mount flags of the run command. You can read more about Moby 23.0 in the release notes.
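
For example, these two commands should now produce equivalent output, one JSON object per line:

docker ps --format=json
docker ps --format '{{ json . }}'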

Conclusion

We love hearing your feedback. Please leave any feedback on our public GitHub roadmap and let us know what else you’d like to see. Check out the Docker Desktop 4.19 release notes for a full breakdown of what’s new in the latest release.

Docker Compose Experiment: Sync Files and Automatically Rebuild Services with Watch Mode
https://www.docker.com/blog/docker-compose-experiment-sync-files-and-automatically-rebuild-services-with-watch-mode/ (Thu, 20 Apr 2023)

We often hear how indispensable Docker Compose is as a development tool. Running docker compose up offers a streamlined experience and scales from quickly standing up a PostgreSQL database to building 50+ services across multiple platforms.

And, although “building a Docker image” was previously considered a last step in release pipelines, it’s safe to say that containers have since become an essential part of our daily workflow. Still, concerns around slow builds and developer experience have often been a barrier towards the adoption of containers for local development.

We’ve come a long way, though. For starters, Docker Compose v2 now has deep integration with BuildKit, so you can use features like RUN cache mounts, SSH agent forwarding, and efficient COPY with --link to speed up and simplify your builds. We’re also constantly making quality-of-life tweaks like enhanced progress reporting and improving consistency across the Docker CLI ecosystem.
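
As an illustration, here is a hypothetical Node.js Dockerfile using two of these BuildKit features (the base image, paths, and commands are assumptions for the example):

# syntax=docker/dockerfile:1
FROM node:18
WORKDIR /app
COPY package.json package-lock.json ./
# Keep npm’s download cache in a BuildKit cache mount so rebuilds skip re-downloading packages
RUN --mount=type=cache,target=/root/.npm npm ci
# --link copies files into an independent layer, improving cache reuse across rebuilds
COPY --link . .
CMD ["npm", "start"]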

As a result, more developers are running docker compose build && docker compose up to keep their running development environment up-to-date as they make code changes. In some cases, you can even use bind mounts combined with a framework that supports hot reload to avoid the need for an image rebuild, but this approach often comes with its own set of caveats and limitations.

An early look at the watch command

Starting with Compose v2.17, we’re excited to share an early look at the new development-specific configuration in Compose YAML as well as an experimental file watch command (Figure 1) that will automatically update your running Compose services as you edit and save your code.

This preview is brought to you in no small part by Compose developer Nicolas De Loof (in addition to more than 10 bugfixes in this release alone).

Figure 1: Preview of the new watch command.

An optional new section, x-develop, can be added to a Compose service to configure options specific to your project’s daily flow. In this release, the only available option is watch, which allows you to define file or directory paths to monitor on your computer and a corresponding action to take inside the service container.

Currently, there are two possible actions: 

  • sync — Copy changed files matching the pattern into the running service container(s).
  • rebuild — Trigger an image build and recreate the running service container(s).
services:
  web:
    build: .
    x-develop:
      watch:
        - action: sync
          path: ./web
          target: /src/web
        - action: rebuild
          path: package.json

In the preceding example, whenever a source file in the web/ directory is changed, Compose will copy the file to the corresponding location under /src/web inside the container. Because Webpack supports Hot Module Reload, the changes are automatically detected and applied.

Unlike source code files, adding a new dependency cannot be done on the fly, so whenever package.json is changed, Compose will rebuild the image and recreate the web service container.

Behind the scenes, the file watching code shares its core with Tilt. The intricacies and surprises of file watching have always been near and dear to the Tilt team’s hearts, and, as Dockhands, the geeking out has continued. 

We are going to continue to build out the experience while gated behind the new docker compose alpha command and x-develop Compose YAML section. This approach will allow us to respond to community feedback early in the development process while still providing a clear path to stabilization as part of the Compose Spec.

Docker Compose powers countless workflows today, and its lightweight approach to containerized development is not going anywhere — it’s just learning a few new tricks.

Try it out

Follow the instructions at dockersamples/avatars to quickly run a small demo app, as follows:

git clone https://github.com/dockersamples/avatars.git
cd avatars
docker compose up -d
docker compose alpha watch
# open http://localhost:5735 in your browser

If you try it out on your own project, you can comment on the proposed specification on GitHub issue #253 in the compose-spec repository.

How to Set Up Your Local Node.js Development Environment Using Docker
https://www.docker.com/blog/how-to-setup-your-local-node-js-development-environment-using-docker/ (Tue, 30 Aug 2022)

Docker is the de facto toolset for building modern applications and setting up a CI/CD pipeline, helping you build, ship, and run your applications in containers on-prem and in the cloud.

Whether you’re running on simple compute instances such as AWS EC2 or something fancier like a hosted Kubernetes service, Docker’s toolset is your new BFF. 

But what about your local Node.js development environment? Setting up local dev environments while also juggling the hurdles of onboarding can be frustrating, to say the least.

While there’s no silver bullet, with the help of Docker and its toolset, we can make things a whole lot easier.

How to set up a local Node.js dev environment — Part 1

In this tutorial, we’ll walk through setting up a local Node.js development environment for a relatively complex application that uses React for its front end, Node and Express for a couple of microservices, and MongoDB for our datastore. We’ll use Docker to build our images and Docker Compose to make everything a whole lot easier.

If you have any questions or comments, or just want to connect, you can reach me in our Community Slack or on Twitter at @rumpl.

Let’s get started.

Prerequisites

To complete this tutorial, you will need:

  • Docker installed on your development machine. You can download and install Docker Desktop.
  • A Docker ID, which you can sign up for through Docker Hub.
  • Git installed on your development machine.
  • An IDE or text editor to use for editing files. I would recommend VSCode.

Step 1: Fork the Code Repository

The first thing we want to do is download the code to our local development machine. Let’s do this using the following git command:

git clone https://github.com/rumpl/memphis.git

Now that we have the code local, let’s take a look at the project structure. Open the code in your favorite IDE and expand the root level directories. You’ll see the following file structure.

├── docker-compose.yml
├── notes-service
├── reading-list-service
├── users-service
└── yoda-ui

The application is made up of a couple of simple microservices and a front-end written in React.js. It also uses MongoDB as its datastore.

Typically at this point, we would start a local version of MongoDB or look through the project to find where our applications will be looking for MongoDB. Then, we would start each of our microservices independently and start the UI in hopes that the default configuration works.

However, this can be very complicated and frustrating, especially if our microservices use different versions of Node.js or are configured differently.

Instead, let’s walk through making this process easier by dockerizing our application and putting our database into a container. 

Step 2: Dockerize your applications

Docker is a great way to provide consistent development environments. It will allow us to run each of our services and UI in a container. We’ll also set up things so that we can develop locally and start our dependencies with one docker command.

The first thing we want to do is dockerize each of our applications. Let’s start with the microservices because they are all written in Node.js, and we’ll be able to use the same Dockerfile.

Creating Dockerfiles

Create a Dockerfile in the notes-service directory and add the following commands.

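A minimal sketch of that Dockerfile follows (the Node version is an assumption; the exposed port and start command match how the service is used later in this tutorial):

FROM node:18
WORKDIR /code
# Install dependencies first so this layer is cached independently of source changes
COPY package.json package-lock.json ./
RUN npm install
# Copy in the rest of the application source
COPY . .
EXPOSE 8081
CMD ["npm", "run", "start"]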

This is a very basic Dockerfile to use with Node.js. If you are not familiar with the commands, you can start with our getting started guide. Also, take a look at our reference documentation.

Building Docker Images

Now that we’ve created our Dockerfile, let’s build our image. Navigate into the notes-service directory and run the following command:

cd notes-service
docker build -t notes-service .

Now that we have our image built, let’s run it as a container and test that it’s working.

docker run --rm -p 8081:8081 --name notes notes-service

From this error, we can see we’re having trouble connecting to MongoDB. Two things are broken at this point:

  1. We didn’t provide a connection string to the application.
  2. We don’t have MongoDB running locally.

To resolve this, we could provide a connection string to a shared instance of our database, but we want to be able to manage our database locally without worrying about corrupting data our colleagues might be using for development.

Step 3: Run MongoDB in a localized container

Instead of downloading MongoDB, installing it, configuring it, and then running the Mongo database service, we can use the Docker Official Image for MongoDB and run it in a container.

Before we run MongoDB in a container, we want to create a couple of volumes that Docker can manage to store our persistent data and configuration. I like to use the managed volumes that Docker provides instead of using bind mounts. You can read all about volumes in our documentation.

Creating volumes for Docker

To create our volumes, we’ll create one for the data and one for the configuration of MongoDB.

docker volume create mongodb

docker volume create mongodb_config

Creating a user-defined bridge network

Now we’ll create a network that our application and database will use to talk with each other. The network is called a user-defined bridge network and gives us a nice DNS lookup service that we can use when creating our connection string.

docker network create mongodb

Now, we can run MongoDB in a container and attach it to the volumes and network we created above. Docker will pull the image from Hub and run it for you locally.

docker run -it --rm -d -v mongodb:/data/db -v mongodb_config:/data/configdb -p 27017:27017 --network mongodb --name mongodb mongo

Step 4: Set your environment variables

Now that we have a running MongoDB, we also need to set a couple of environment variables so our application knows what port to listen on and what connection string to use to access the database. We’ll do this right in the docker run command.

docker run \
-it --rm -d \
--network mongodb \
--name notes \
-p 8081:8081 \
-e SERVER_PORT=8081 \
-e DATABASE_CONNECTIONSTRING=mongodb://mongodb:27017/yoda_notes \
notes-service

Step 5: Test your database connection

Let’s test that our application is connected to the database and is able to add a note.

curl --request POST \
  --url http://localhost:8081/services/m/notes \
  --header 'content-type: application/json' \
  --data '{
    "name": "this is a note",
    "text": "this is a note that I wanted to take while I was working on writing a blog post.",
    "owner": "peter"
  }'

You should receive the following JSON back from our service.

{"code":"success","payload":{"_id":"5efd0a1552cd422b59d4f994","name":"this is a note","text":"this is a note that I wanted to take while I was working on writing a blog post.","owner":"peter","createDate":"2020-07-01T22:11:33.256Z"}}

Once we are done testing, run docker stop notes mongodb to stop the containers.

Awesome! We’ve completed the first steps in Dockerizing our local development environment for Node.js. In Part II, we’ll take a look at how we can use Docker Compose to simplify the process we just went through.

How to set up a local Node.js dev environment — Part 2

In Part I, we took a look at creating Docker images and running containers for Node.js applications. We also took a look at setting up a database in a container and how volumes and networks play a part in setting up your local development environment.

In Part II, we’ll take a look at creating and running a development image where we can compile, add modules and debug our application all inside of a container. This helps speed up the developer setup time when moving to a new application or project. In this case, our image should have Node.js installed as well as NPM or YARN. 

We’ll also take a quick look at using Docker Compose to help streamline the processes of setting up and running a full microservices application locally on your development machine.

Let’s create a development image we can use to run our Node.js application.

Step 1: Develop your Dockerfile

Create a local directory on your development machine that we can use as a working directory to save our Dockerfile and any other files that we’ll need for our development image.

$ mkdir -p ~/projects/dev-image

Create a Dockerfile in this folder and add the following commands.

FROM node:18.7.0
RUN apt-get update && apt-get install -y \
  nano \
  vim

We start off by using the node:18.7.0 official image. I’ve found that this image is fine for creating a development image. I like to add a couple of text editors to the image in case I want to quickly edit a file while inside the container.

We did not add an ENTRYPOINT or CMD to the Dockerfile because we will rely on the base image’s ENTRYPOINT, and we will override the CMD when we start the image.

Step 2: Build your Docker image

Let’s build our image.

$ docker build -t node-dev-image .

And now we can run it.

$ docker run -it --rm --name dev -v $(pwd):/code node-dev-image bash

You will be presented with a bash command prompt. Now, inside the container, we can create a JavaScript file and run it with Node.js.
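
For example (the file name is arbitrary):

$ echo "console.log('hello from a file')" > hello.js
$ node hello.js
hello from a file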

Step 3: Test your image

Run the following commands to test our image.

$ node -e 'console.log("hello from inside our container")'
hello from inside our container

If all goes well, we have a working development image. We can now do everything that we would do in our normal bash terminal.

If you run the above Docker command inside of the notes-service directory, then you will have access to the code inside of the container. You can start the notes-service by simply navigating to the /code directory and running npm run start.
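
For example (assuming the service’s dependencies are not yet installed in the mounted source tree):

$ cd /code
$ npm install
$ npm run start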

Step 4: Use Compose to develop locally

The notes-service project uses MongoDB as its data store. If you remember from Part I, we had to start the Mongo container manually and connect it to the same network as our notes-service. We also had to create a couple of volumes so we could persist our data across restarts of our application and MongoDB.

Instead, we’ll create a Compose file to start our notes-service and MongoDB with one command. We’ll also set up the Compose file to start the notes-service in debug mode. This way, we can connect a debugger to the running node process.

Open the notes-service in your favorite IDE or text editor and create a new file named docker-compose.dev.yml. Copy and paste the below commands into the file.

services:
 notes:
   build:
     context: .
   ports:
     - 8080:8080
     - 9229:9229
   environment:
     - SERVER_PORT=8080
     - DATABASE_CONNECTIONSTRING=mongodb://mongo:27017/notes
   volumes:
     - ./:/code
   command: npm run debug
 
 mongo:
   image: mongo:4.2.8
   ports:
     - 27017:27017
   volumes:
     - mongodb:/data/db
     - mongodb_config:/data/configdb
volumes:
 mongodb:
 mongodb_config:


This compose file is super convenient because now we don’t have to type all the parameters to pass to the `docker run` command. We can declaratively do that in the compose file.

We are exposing port 9229 so that we can attach a debugger. We are also mapping our local source code into the running container so that we can make changes in our text editor and have those changes picked up in the container.

One other really cool feature of using the compose file is that we have service resolution set up to use the service names. As a result, we are now able to use “mongo” in our connection string. We use “mongo” because that is what we have named our mongo service in the compose file.

Let’s start our application and confirm that it is running properly.

$ docker compose -f docker-compose.dev.yml up --build

We pass the --build flag so Docker will compile our image and then start it.

If all goes well, you should see the logs from the notes and mongo services:


Now let’s test our API endpoint. Run the following curl command:

$ curl --request GET --url http://localhost:8080/services/m/notes

You should receive the following response:

{"code":"success","meta":{"total":0,"count":0},"payload":[]}

Step 5: Connect to a Debugger

We’ll use the debugger that comes with the Chrome browser. Open Chrome on your machine, and then type the following into the address bar.

about:inspect

The following screen will open.

Click the “Open dedicated DevTools for Node” link. This will open the DevTools that are connected to the running Node.js process inside our container.

Let’s change the source code and then set a breakpoint. 

Add the following code to the server.js file on line 19 and save the file.

server.use('/foo', (req, res) => {
  return res.json({ "foo": "bar" })
})

If you take a look at the terminal where our compose application is running, you’ll see that nodemon noticed the changes and reloaded our application.


Navigate back to the Chrome DevTools and set a breakpoint on line 20. Then, run the following curl command to trigger the breakpoint.

$ curl --request GET --url http://localhost:8080/foo

💥 BOOM 💥 You should have seen the code break on line 20, and now you are able to use the debugger just like you would normally. You can inspect and watch variables, set conditional breakpoints, view stack traces, and a bunch of other stuff.

Conclusion

In this article, we completed the first steps in Dockerizing our local development environment for Node.js. Then, we took things a step further and created a general development image that can be used like our normal command line. We also set up our compose file to map our source code into the running container and exposed the debugging port.

Start Dev Environments locally, Compose V2 RC 1, and more in Docker Desktop 3.6
https://www.docker.com/blog/start-dev-environments-locally-compose-v2-rc-1-and-more-in-docker-desktop-3-6/ (Mon, 23 Aug 2021)

Docker Desktop 3.6 has just been released and we’re looking forward to you trying it.

Start Dev Environments from your Local Machine

You can now share dev environments with your colleagues and start from code already on your local machine, in addition to the existing remote repository support.

It’s easy to use your local code! Just click Create in the top right corner of the dev environments page. 

Next select the Local tab and click Select directory to open the root of the code that you would like to work on.


Finally, click Create. This creates a Dev Environment using your local folder, and bind-mounts your local code in the Dev Environment. It opens VS Code inside the Dev Environment container.


We are excited that you are trying out our Dev Environments Preview and would love to hear from you! Let us know your feedback by creating an issue in the Dev Environments GitHub repository. Alternatively, get in touch with us on the #docker-dev-environments channel in the Docker Community Slack.

Enhanced Usability on Volume Management

We know that volumes can take up a lot of disk space, but when you’re dealing with a lot of volumes, it can be hard to find the ones you want to clean up. In 3.6 we’ve made it easier to find and sort your volumes. You can now sort volumes by name, date created, and size. You can also search for specific volumes using the search field.


We’re continuing to enhance volume management and would love your input. Have ideas on how we might make managing volumes easier? Interested in sharing your volumes with colleagues? Let us know here.

Docker Compose V2 Release Candidate 1 

A first Release Candidate for Compose V2 is now available! We’ve been working hard to address all your feedback so that you can seamlessly run the compose command in the Docker CLI. Let us know your feedback on the new ‘compose’ command by creating an issue in the Compose-CLI GitHub repository.

We have also introduced the following new features:

  • Docker compose command line completion, less typing is always better 🎉 
  • docker-compose logs --follow, which makes it easier to follow the logs of new containers. This reacts to containers added by scale and reports additional logs when more containers are added to a service.

You can test this new functionality by running the docker compose command, dropping the - in docker-compose. We are continuing to roll this out gradually; 54% of compose users are already using Compose V2. You’ll be notified if you are using the new docker compose. You can opt in to run Compose V2 with docker-compose by running the docker-compose enable-v2 command or by updating your Docker Desktop’s Experimental Features settings.
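
For example (assuming docker compose version is available to confirm which version you are running):

$ docker-compose enable-v2
$ docker compose version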

If you run into any issues using Compose V2, simply run the docker-compose disable-v2 command, or turn it off using Docker Desktop’s Experimental Features. 

To get started simply download or update to Docker Desktop 3.6. If you’d like to dig deeper into your volumes or take your collaboration to the next level with dev environments, upgrade to a Pro or Team subscription today!

Improved Volume Management, Docker Dev Environments and more in Desktop 3.5
https://www.docker.com/blog/improved-volume-management-docker-dev-environments-and-more-in-desktop-3-5/ (Tue, 29 Jun 2021)

Docker Desktop 3.5 is here and we can’t wait for you to try it!

We’ve introduced some exciting new features including improvements to the Volume Management interface, a tech preview of Docker Dev Environments, and enhancements to Compose V2.

Easily Manage Files in your Volumes

Volumes can quickly take up local disk storage and without an easy way to see which ones are being used or their contents, it can be hard to free up space. This is why in the release of Docker Desktop 3.5 we’ve made it even easier for Pro and Team users to explore the directories and files inside of a volume. We’ve added in the modified date, kind, and size of files so that you can quickly identify what is taking up all that space and decide if you can part with it.

Once you’ve identified a file or directory inside a volume you no longer need, you can remove them straight from the Dashboard to free up space. We’ve also introduced a way to download files locally using “Save As” so that you can easily back up files before removing them.

We’re continuing to add more to volume management, like the ability to share your volumes with your colleagues. Have ideas on how we might make managing volumes easier? We’d love you to help us prioritize by adding your use cases on our public roadmap.

Docker Dev Environments

In 3.5 we released a technical preview of Docker Dev Environments. Check out our blog to learn more about why we built this and how it works.

Docker Compose V2 Beta Rollout Continues

We’re continuing to roll out the beta of Docker Compose V2, which allows you to seamlessly run the compose command in the Docker CLI. We are working towards launching Compose v2 as a drop-in replacement for docker-compose, so that no changes are required in your code to use this new functionality. We have also introduced the following new features:

  • Added support for container links and external links to facilitate communication between containers 
  • Introduced the docker compose logs --since and --until options enabling you to search logs by date.
  • `docker compose config --profiles` now lists all defined profiles so you can see which additional services are defined in a single docker-compose.yml file. Profiles allow you to adjust the Compose application model for various usages and environments by selectively enabling services. 
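
For example (the relative-duration syntax for --since and --until is an assumption, mirroring docker logs):

$ docker compose logs --since 1h
$ docker compose config --profiles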

You can test this new functionality by running the docker compose command, dropping the - in docker-compose. We are continuing to roll this out gradually; 31% of compose users are already using this beta version. You’ll be notified if you are using the new docker compose. You can opt in to run Compose v2 with docker-compose by running the docker-compose enable-v2 command or by updating your Docker Desktop’s Experimental Features settings.

If you run into any issues using Compose V2, simply run the docker-compose disable-v2 command, or turn it off using Docker Desktop’s Experimental Features. Let us know your feedback on the new ‘compose’ command by creating an issue in the Compose-CLI GitHub repository.

Warning for Images incompatible with Apple Silicon Machines

Docker Dashboard will now warn you if an image you are using does not match your machine’s architecture on Apple Silicon. If you are using Desktop on Apple Silicon and an amd64 image is run under QEMU emulation, it may have poor performance or potentially crash. While we are promoting the usage of multi-architecture images, we want to make sure you are aware when an image you are using is running under emulation because it does not match your machine’s native architecture. If this is the case, a warning will appear on the Containers / Apps page.

Less Disruptive Requests for Feedback

And finally, we’ve heard your feedback on how we ask you for your feedback. We’ve changed the way that the feedback form works so that it won’t pop up while you’re in the middle of working. When it’s time, the feedback form will only show up if you click on the whale menu. We do appreciate the time you spend to rate Docker Desktop. Your input helps us make changes like this! 

See the full release notes for Docker Desktop for Mac and Docker Desktop for Windows for the complete set of changes in Docker Desktop 3.5. 

We can’t wait for you to try Volume Management and the preview of Dev Environments! To get started simply download or update to Docker Desktop 3.5. To start collaborating with your teammates on your dev environments and digging into the contents of your volumes, upgrade to a Pro or Team subscription today!

Guest Post: Calling the Docker CLI from Python with Python-on-whales
https://www.docker.com/blog/guest-post-calling-the-docker-cli-from-python-with-python-on-whales/ (Thu, 11 Mar 2021)
Image: Alice Lang, alicelang-creations@outlook.fr

At Docker, we are incredibly proud of our vibrant, diverse and creative community. From time to time, we feature cool contributions from the community on our blog to highlight some of the great work our community does. The following is a guest post by Docker community member Gabriel de Marmiesse. Are you working on something awesome with Docker? Send your contributions to William Quiviger (@william) on the Docker Community Slack and we might feature your work!   

The most common way to call and control Docker is by using the command line.

With the increased usage of Docker, users want to call Docker from programming languages other than shell. One popular way to use Docker from Python has been to use docker-py. This library has had so much success that even docker-compose is written in Python, and leverages docker-py.

The goal of docker-py, though, is not to replicate the Docker client (written in Golang), but to talk to the Docker Engine HTTP API. The Docker client is extremely complex and hard to duplicate in another language. Because of this, a lot of features that were in the Docker client could not be made available in docker-py. Users would sometimes get frustrated because docker-py did not behave exactly like the CLI.

Today, we’re presenting a new project built by Gabriel de Marmiesse from the Docker community: Python-on-whales. The goal of this project is to have a 1-to-1 mapping between the Docker CLI and the Python library. We do this by communicating with the Docker CLI instead of calling directly the Docker Engine HTTP API.

If you need to call the Docker command line, use Python-on-whales. And if you need to call the Docker engine directly, use docker-py.

In this post, we’ll take a look at some of the features that are not available in docker-py but are available in Python-on-whales:

  • Building with Docker buildx
  • Deploying to Swarm with docker stack
  • Deploying to the local Engine with Compose

Start by downloading Python-on-whales with 

pip install python-on-whales

and you’re ready to rock!

Docker Buildx

Here we build a Docker image. Python-on-whales uses buildx by default and gives you the output in real time.

>>> from python_on_whales import docker
>>> my_image = docker.build(".", tags="some_name")
[+] Building 1.6s (17/17) FINISHED
 => [internal] load build definition from Dockerfile                       0.0s
 => => transferring dockerfile: 32B                                        0.0s
 => [internal] load .dockerignore                                          0.0s
 => => transferring context: 2B                                            0.0s
 => [internal] load metadata for docker.io/library/python:3.6              1.4s
 => [python_dependencies 1/5] FROM docker.io/library/python:3.6@sha256:293 0.0s
 => [internal] load build context                                          0.1s
 => => transferring context: 72.86kB                                       0.0s
 => CACHED [python_dependencies 2/5] RUN pip install typeguard pydantic re 0.0s
 => CACHED [python_dependencies 3/5] COPY tests/test-requirements.txt /tmp 0.0s
 => CACHED [python_dependencies 4/5] COPY requirements.txt /tmp/           0.0s
 => CACHED [python_dependencies 5/5] RUN pip install -r /tmp/test-requirem 0.0s
 => CACHED [tests_ubuntu_install_without_buildx 1/7] RUN apt-get update && 0.0s
 => CACHED [tests_ubuntu_install_without_buildx 2/7] RUN curl -fsSL https: 0.0s
 => CACHED [tests_ubuntu_install_without_buildx 3/7] RUN add-apt-repositor 0.0s
 => CACHED [tests_ubuntu_install_without_buildx 4/7] RUN  apt-get update & 0.0s
 => CACHED [tests_ubuntu_install_without_buildx 5/7] WORKDIR /python-on-wh 0.0s
 => CACHED [tests_ubuntu_install_without_buildx 6/7] COPY . .              0.0s
 => CACHED [tests_ubuntu_install_without_buildx 7/7] RUN pip install -e .  0.0s
 => exporting to image                                                     0.1s
 => => exporting layers                                                    0.0s
 => => writing image sha256:e1c2382d515b097ebdac4ed189012ca3b34ab6be65ba0c 0.0s
 => => naming to docker.io/library/some_image_name

Docker Stacks

Here we deploy a simple Swarmpit stack on a local Swarm. You get a Stack object that has several methods: remove(), services(), ps().

>>> from python_on_whales import docker
>>> docker.swarm.init()
>>> swarmpit_stack = docker.stack.deploy("swarmpit", compose_files=["./docker-compose.yml"])
Creating network swarmpit_net
Creating service swarmpit_influxdb
Creating service swarmpit_agent
Creating service swarmpit_app
Creating service swarmpit_db
>>> swarmpit_stack.services()
[<python_on_whales.components.service.Service object at 0x7f9be5058d60>,
<python_on_whales.components.service.Service object at 0x7f9be506d0d0>,
<python_on_whales.components.service.Service object at 0x7f9be506d400>,
<python_on_whales.components.service.Service object at 0x7f9be506d730>]
>>> swarmpit_stack.remove()

Docker Compose

Here we show how we can run a Docker Compose application with Python-on-whales. Note that, behind the scenes, it uses the new version of Compose written in Golang. This version of Compose is still experimental. Take appropriate precautions.

$ git clone https://github.com/dockersamples/example-voting-app.git
$ cd example-voting-app
$ python
>>> from python_on_whales import docker
>>> docker.compose.up(detach=True)
Network "example-voting-app_back-tier"  Creating
Network "example-voting-app_back-tier"  Created
Network "example-voting-app_front-tier"  Creating
Network "example-voting-app_front-tier"  Created
example-voting-app_redis_1  Creating
example-voting-app_db_1  Creating
example-voting-app_db_1  Created
example-voting-app_result_1  Creating
example-voting-app_redis_1  Created
example-voting-app_worker_1  Creating
example-voting-app_vote_1  Creating
example-voting-app_worker_1  Created
example-voting-app_result_1  Created
example-voting-app_vote_1  Created
>>> for container in docker.compose.ps():
...     print(container.name, container.state.status)
example-voting-app_vote_1 running
example-voting-app_worker_1 running
example-voting-app_result_1 running
example-voting-app_redis_1 running
example-voting-app_db_1 running
>>> docker.compose.down()
>>> print(docker.compose.ps())
[]

Bonus section: Docker objects attributes as Python attributes

All information that you can access with docker inspect is available as Python attributes:

>>> from python_on_whales import docker
>>> my_container = docker.run("ubuntu", ["sleep", "infinity"], detach=True)
>>> my_container.state.started_at
datetime.datetime(2021, 2, 18, 13, 55, 44, 358235, tzinfo=datetime.timezone.utc)
>>> my_container.state.running
True
>>> my_container.kill()
>>> my_container.remove()

>>> my_image = docker.image.inspect("ubuntu")
>>> print(my_image.config.cmd)
['/bin/bash']

What’s next for Python-on-whales?

We’re currently improving the integration of Python-on-whales with the new Compose in the Docker CLI (currently beta).

You can consider that Python-on-whales is in beta. Some small API changes are still possible. 

We encourage the community to try it out and give feedback in the issues!

How to Deploy GPU-Accelerated Applications on Amazon ECS with Docker Compose
https://www.docker.com/blog/deploy-gpu-accelerated-applications-on-amazon-ecs-with-docker-compose/ (Tue, 16 Feb 2021)

Many applications can take advantage of GPU acceleration, in particular resource-intensive Machine Learning (ML) applications. The development time of such applications may vary based on the hardware of the machine we use for development. Containerization will facilitate development due to reproducibility and will make the setup easily transferable to other machines. Most importantly, a containerized application is easily deployable to platforms such as Amazon ECS, where it can take advantage of different hardware configurations.

In this tutorial, we discuss how to develop GPU-accelerated applications in containers locally and how to use Docker Compose to easily deploy them to the cloud (the Amazon ECS platform). We make the transition from the local environment to a cloud effortless, the GPU-accelerated application being packaged with all its dependencies in a Docker image, and deployed in the same way regardless of the target environment.

Requirements

In order to follow this tutorial, we need the following tools installed locally:

For deploying to a cloud platform, we rely on the new Docker Compose implementation embedded into the Docker CLI binary. Therefore, when targeting a cloud platform we are going to run docker compose commands instead of docker-compose. For local commands, both implementations of Docker Compose should work. If you find a missing feature that you use, report it on the issue tracker.

Sample application

Keep in mind that what we want to showcase is how to structure and manage a GPU accelerated application with Docker Compose, and how we can deploy it to the cloud. We do not focus on GPU programming or the AI/ML algorithms, but rather on how to structure and containerize such an application to facilitate portability, sharing and deployment.

For this tutorial, we rely on sample code provided in the Tensorflow documentation to simulate a GPU-accelerated translation service that we can orchestrate with Docker Compose. The original code is documented at https://www.tensorflow.org/tutorials/text/nmt_with_attention. For this exercise, we have reorganized the code such that we can easily manage it with Docker Compose.

This sample uses the Tensorflow platform which can automatically use GPU devices if available on the host. Next, we will discuss how to organize this sample in services to containerize them easily and what the challenges are when we locally run such a resource-intensive application.

Note: The sample code to use throughout this tutorial can be found here. It needs to be downloaded locally to exercise the commands we are going to discuss.

1. Local environment

Let’s assume we want to build and deploy a service that can translate simple sentences to a language of our choice. For such a service, we need to train an ML model to translate from one language to another and then use this model to translate new inputs. 

Application setup

We choose to separate the phases of the ML process in two different Compose services:

  • A training service that trains a model to translate between two languages (includes the data gathering, preprocessing and all the necessary steps before the actual training process).
  • A translation service that loads a model and uses it to `translate` a sentence.

This structure is defined in the docker-compose.dev.yaml from the downloaded sample application which has the following content:

docker-compose.dev.yaml

services:

 training:
   build: backend
   command: python model.py
   volumes:
     - models:/checkpoints

 translator:
   build: backend
   volumes:
     - models:/checkpoints
   ports:
     - 5000:5000

volumes:
 models:

We want the training service to train a model to translate from English to French and to save this model to a named volume models that is shared between the two services. The translator service has a published port to allow us to query it easily.

Deploy locally with Docker Compose

The reason for starting with the simplified compose file is that it can be deployed locally whether a GPU is present or not. We will see later how to add the GPU resource reservation to it.

Before deploying, rename the docker-compose.dev.yaml to docker-compose.yaml to avoid setting the file path with the flag -f for every compose command.

To deploy the Compose file, all we need to do is open a terminal, go to its base directory and run:

$ docker compose up
The new 'docker compose' command is currently experimental.
To provide feedback or request new features please open
issues at https://github.com/docker/compose-cli
[+] Running 4/0
 ⠿ Network "gpu_default"  Created                               0.0s
 ⠿ Volume "gpu_models"    Created                               0.0s
 ⠿ gpu_translator_1       Created                               0.0s
 ⠿ gpu_training_1         Created                               0.0s
Attaching to gpu_training_1, gpu_translator_1
...
translator_1  |  * Running on http://0.0.0.0:5000/ (Press CTRL+C
to quit)
...
HTTP/1.1" 200 -
training_1    | Epoch 1 Batch 0 Loss 3.3540
training_1    | Epoch 1 Batch 100 Loss 1.6044
training_1    | Epoch 1 Batch 200 Loss 1.3441
training_1    | Epoch 1 Batch 300 Loss 1.1679
training_1    | Epoch 1 Loss 1.4679
training_1    | Time taken for 1 epoch 218.06381964683533 sec
training_1    | 
training_1    | Epoch 2 Batch 0 Loss 0.9957
training_1    | Epoch 2 Batch 100 Loss 1.0288
training_1    | Epoch 2 Batch 200 Loss 0.8737
training_1    | Epoch 2 Batch 300 Loss 0.8971
training_1    | Epoch 2 Loss 0.9668
training_1    | Time taken for 1 epoch 211.0763041973114 sec
...
training_1    | Checkpoints saved in /checkpoints/eng-fra
training_1    | Requested translator service to reload its model,
response status: 200
translator_1  | 172.22.0.2 - - [18/Dec/2020 10:23:46] 
"GET /reload?lang=eng-fra 

Docker Compose deploys a container for each service and attaches us to their logs which allows us to follow the progress of the training service.

Every 10 cycles (epochs), the training service requests the translator to reload its model from the last checkpoint. If the translator is queried before the first training phase (10 cycles) is completed, we should get the following message. 

$ curl -d "text=hello" localhost:5000/
No trained model found / training may be in progress...

From the logs, we can see that each training cycle is resource-intensive and may take very long (depending on parameter setup in the ML algorithm).

The training service runs continuously and checkpoints the model periodically to a named volume shared between the two services. 

$ docker ps -a
CONTAINER ID   IMAGE            COMMAND                  CREATED          STATUS                     PORTS                    NAMES
f11fc947a90a   gpu_training     "python model.py"        14 minutes ago   Up 54 minutes                   gpu_training_1                           
baf147fbdf18   gpu_translator   "/bin/bash -c 'pytho..." 14 minutes ago   Up 54 minutes              0.0.0.0:5000->5000/tcp   gpu_translator_1

We can now query the translator service which uses the trained model:

$ curl -d "text=hello" localhost:5000/
salut !
$ curl -d "text=I want a vacation" localhost:5000/
je veux une autre . 
$ curl -d "text=I am a student" localhost:5000/
je suis etudiant .

Keep in mind that, for this exercise, we are not concerned about the accuracy of the translation but how to set up the entire process following a service approach that will make it easy to deploy with Docker Compose.

During development, we may have to re-run the training process and evaluate it each time we tweak the algorithm. This is a very time-consuming task if we do not use development machines built for high performance.

An alternative is to use on-demand cloud resources. For example, we could use cloud instances hosting GPU devices to run the resource-intensive components of our application. Running our sample application on a machine with access to a GPU will automatically switch to train the model on the GPU. This will speed up the process and significantly reduce the development time.

The first step to deploy this application to some faster cloud instances is to pack it as a Docker image and push it to Docker Hub, from where we can access it from cloud instances.

Build and Push images to Docker Hub

During the deployment with compose up, the application is packed as a Docker image which is then used to create the containers. We need to tag the built images and push them to Docker Hub.

A simple way to do this is by setting the image property for services in the Compose file. Previously, we had only set the build property for our services; however, we had no image defined. Docker Compose requires at least one of these properties to be defined in order to deploy the application.

We set the image property following the pattern <account>/<name>:<tag>, where the tag is optional (it defaults to latest). We take for example a Docker Hub account ID myhubuser and the application name gpudemo. Edit the compose file and set the image property for the two services as below:

docker-compose.yaml

services:

 training:
   image: myhubuser/gpudemo
   build: backend
   command: python model.py
   volumes:
     - models:/checkpoints

 translator:
   image: myhubuser/gpudemo
   build: backend
   volumes:
     - models:/checkpoints
   ports:
     - 5000:5000

volumes:
 models:

To build the images run:

$ docker compose build
The new 'docker compose' command is currently experimental. To
provide feedback or request new features please open issues
 at https://github.com/docker/compose-cli
[+] Building 1.0s (10/10) FINISHED 
 => [internal] load build definition from Dockerfile
0.0s 
=> => transferring dockerfile: 206B
...
 => exporting to image
0.8s 
 => => exporting layers    
0.8s  
 => => writing image sha256:b53b564ee0f1986f6a9108b2df0d810f28bfb209
4743d8564f2667066acf3d1f
0.0s
 => => naming to docker.io/myhubuser/gpudemo

$ docker images | grep gpudemo
myhubuser/gpudemo  latest   b53b564ee0f1   2 minutes ago 
  5.83GB   

Notice the image has been named according to what we set in the Compose file.

Before pushing this image to Docker Hub, we need to make sure we are logged in. For this we run:

$ docker login
...
Login Succeeded

Push the image we built:

$ docker compose push
Pushing training (myhubuser/gpudemo:latest)...
The push refers to repository [docker.io/myhubuser/gpudemo]
c765bf51c513: Pushed
9ccf81c8f6e0: Layer already exists
...
latest: digest: sha256:c40a3ca7388d5f322a23408e06bddf14b7242f9baf7fb
e7201944780a028df76 size: 4306

The image pushed is public unless we set it to private in Docker Hub’s repository settings. The Docker documentation covers this in more detail.

With the image stored in a public image registry, we will look now at how we can use it to deploy our application on Amazon ECS and how we can use GPUs to accelerate it.

2. Deploy to Amazon ECS for GPU-acceleration

To deploy the application to Amazon ECS, we need to have credentials for accessing an AWS account and to have Docker CLI set to target the platform.

Let’s assume we have a valid set of AWS credentials that we can use to connect to AWS services. We now need to create an ECS Docker context to redirect all Docker CLI commands to Amazon ECS.

Create an ECS context

To create an ECS context run the following command:

$ docker context create ecs cloud
? Create a Docker context using:  [Use arrows to move, type
to filter]
> AWS environment variables 
  An existing AWS profile
  A new AWS profile

This prompts users with 3 options, depending on their familiarity with the AWS credentials setup.

For this exercise, to skip the details of AWS credential setup, we choose the first option. This requires us to have the AWS_ACCESS_KEY and AWS_SECRET_KEY set in our environment when running Docker commands that target Amazon ECS.

We can now run Docker commands and set the context flag for all commands targeting the platform, or we can switch it to be the context in use to avoid setting the flag on each command.

Set Docker CLI to target ECS

Set the context we created previously as the context in use by running:

$ docker context use cloud

$ docker context ls
NAME                TYPE                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default             moby                Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm
cloud *             ecs                 credentials read from environment

Starting from here, all the subsequent Docker commands are going to target Amazon ECS. To switch back to the default context targeting the local environment, we can run the following:

$ docker context use default

For the following commands, we keep ECS context as the current context in use. We can now run a command to check we can successfully access ECS.

$ AWS_ACCESS_KEY="*****" AWS_SECRET_KEY="******" docker compose ls
NAME                                STATUS 

Before deploying the application to Amazon ECS, let’s have a look at how to update the Compose file to request GPU access for the training service. This blog post describes a way to define GPU reservations. In the next section, we cover the new format supported in the local compose and the legacy docker-compose.

Define GPU reservation in the Compose file

Tensorflow can make use of NVIDIA GPUs with CUDA compute capabilities to speed up computations. To reserve NVIDIA GPUs,  we edit the docker-compose.yaml  that we defined previously and add the deploy property under the training service as follows:

...
 training:
   image: myhubuser/gpudemo
   command: python model.py eng-fra
   volumes:
     - models:/checkpoints
   deploy:
     resources:
       reservations:
          memory: 32Gb
         devices:
         - driver: nvidia
           count: 2
           capabilities: [gpu]
...

For this example we defined a reservation of 2 NVIDIA GPUs and 32GB memory dedicated to the container. We can tweak these parameters according to the resources of the machine we target for deployment. If our local dev machine hosts an NVIDIA GPU, we can tweak the reservation accordingly and deploy the Compose file locally.  Ensure you have installed the NVIDIA container runtime and set up the Docker Engine to use it before deploying the Compose file.
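
A quick way to sanity-check that setup (the CUDA image tag here is only an example) is to run nvidia-smi in a container:

$ docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi

If the GPU table prints, the Docker Engine can see your GPU and the Compose deployment should be able to reserve it.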

We focus in the next part on how to make use of GPU cloud instances to run our sample application.

Note: We assume the image we pushed to Docker Hub is public. If so, there is no need to authenticate in order to pull it (unless we exceed the pull rate limit). For images that need to be kept private, we need to define the x-aws-pull_credentials property with a reference to the credentials to use for authentication. Details on how to set it can be found in the documentation.

Deploy to Amazon ECS

Export the AWS credentials to avoid setting them for every command.

$ export AWS_ACCESS_KEY="*****" 
$ export AWS_SECRET_KEY="******"

When deploying the Compose file, Docker Compose will also reserve an EC2 instance with GPU capabilities that satisfies the reservation parameters. In the example we provided, we ask to reserve an instance with 32GB of memory and 2 NVIDIA GPUs, and Docker Compose matches this reservation with an instance type that satisfies the requirement. Before setting the reservation property in the Compose file, we recommend checking the Amazon GPU instance types and setting your reservation accordingly. Ensure you are targeting an Amazon region that contains such instances.

WARNING: Aside from ECS containers, we will have a `g4dn.12xlarge` EC2 instance reserved. Before deploying to the cloud, check the Amazon documentation for the resource cost this will incur.

To deploy the application, we run the same command as in the local environment.

$ docker compose up     
[+] Running 29/29
 ⠿ gpu                 CreateComplete          423.0s  
 ⠿ LoadBalancer        CreateComplete          152.0s
 ⠿ ModelsAccessPoint   CreateComplete            6.0s
 ⠿ DefaultNetwork      CreateComplete            5.0s
...
 ⠿ TranslatorService   CreateComplete          205.0s
 ⠿ TrainingService     CreateComplete          161.0s

Check the status of the services:

$ docker compose ps
NAME                                        SERVICE             STATE               PORTS
task/gpu/3311e295b9954859b4c4576511776593   training            Running             
task/gpu/78e1d482a70e47549237ada1c20cc04d   translator          Running             gpu-LoadBal-6UL1B4L7OZB1-d2f05c385ceb31e2.elb.eu-west-3.amazonaws.com:5000->5000/tcp

Query the exposed translator endpoint. We notice the same behaviour as in the local deployment (the model reload has not been triggered yet by the training service).

$ curl -d "text=hello" gpu-LoadBal-6UL1B4L7OZB1-d2f05c385ceb31e2.elb.eu-west-3.amazonaws.com:5000/
No trained model found / training may be in progress...

Check the logs to see the GPU devices that Tensorflow detected. We can easily identify the 2 GPU devices we reserved and see that the training is almost 10X faster than our CPU-based local training.

$ docker compose logs
...
training    | 2021-01-08 20:50:51.595796: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
training    | pciBusID: 0000:00:1c.0 name: Tesla T4 computeCapability: 7.5
training    | coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
...
training    | 2021-01-08 20:50:51.596743: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 1 with properties: 
training    | pciBusID: 0000:00:1d.0 name: Tesla T4 computeCapability: 7.5
training    | coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
...

training      | Epoch 1 Batch 300 Loss 1.2269
training      | Epoch 1 Loss 1.4794
training      | Time taken for 1 epoch 42.98397183418274 sec
...
training      | Epoch 2 Loss 0.9750
training      | Time taken for 1 epoch 35.13995909690857 sec
...
training      | Epoch 9 Batch 0 Loss 0.1375
...
training      | Epoch 9 Loss 0.1558
training      | Time taken for 1 epoch 32.444278955459595 sec
...
training      | Epoch 10 Batch 300 Loss 0.1663
training      | Epoch 10 Loss 0.1383
training      | Time taken for 1 epoch 35.29659080505371 sec
training      | Checkpoints saved in /checkpoints/eng-fra
training      | Requested translator service to reload its model, response status: 200.

The training service runs continuously and triggers the model reload on the translation service every 10 cycles (epochs). Once the translation service has been notified at least once, we can stop and remove the training service and release the GPU instances at any time we choose. 

We can easily do this by removing the service from the Compose file:

services:
  translator:
    image: myhubuser/gpudemo
    build: backend
    volumes:
      - models:/checkpoints
    ports:
      - 5000:5000
volumes:
  models:

and then run docker compose up again to update the running application. This will apply the changes and remove the training service.

$ docker compose up      
[+] Running 0/0
 ⠋ gpu                  UpdateInProgress User Initiated    
 ⠋ LoadBalancer         CreateComplete      
 ⠋ ModelsAccessPoint    CreateComplete     
...
 ⠋ Cluster              CreateComplete     
 ⠋ TranslatorService    CreateComplete   

We can list the running services to confirm that the training service has been removed and only the translator remains:

$ docker compose ps
NAME                                        SERVICE             STATE               PORTS
task/gpu/78e1d482a70e47549237ada1c20cc04d   translator          Running             gpu-LoadBal-6UL1B4L7OZB1-d2f05c385ceb31e2.elb.eu-west-3.amazonaws.com:5000->5000/tcp

Query the translator:

$ curl -d "text=hello" gpu-LoadBal-6UL1B4L7OZB1-d2f05c385ceb31e2.elb.eu-west-3.amazonaws.com:5000/
salut ! 

To remove the application from Amazon ECS run:

$ docker compose down

Summary

We discussed how to set up a resource-intensive ML application so that it is easily deployable in different environments with Docker Compose. We showed how to define the use of GPUs in a Compose file and how to deploy it on Amazon ECS.

Compose CLI ACI Integration Now Available https://www.docker.com/blog/compose-cli-aci-integration-now-available/ Thu, 05 Nov 2020 23:17:18 +0000 https://www.docker.com/blog/?p=27200 Today we are pleased to announce that we have reached a major milestone: the GA V1 release of both the Compose CLI and the ACI integration. 🎉

In May we announced the partnership between Docker and Microsoft to make it easier to deploy containerized applications from the Desktop to the cloud with Azure Container Instances (ACI). We are happy to let you know that all users of Docker Desktop now have the ACI experience available to them by default, allowing them to easily use existing Docker commands to deploy and manage containers running in ACI. 

As part of this, I also want to thank the Microsoft team who worked with us to make this all happen: Mike Morton, Karol Zadora-Przylecki, Brandon Waterloo, MacKenzie Olson, and Paul Yuknewicz.

Getting started with Docker and ACI 

As a new starter, to get going all you need to do is upgrade your existing Docker Desktop to the latest stable version (2.5.0.0 or later), store your image on Docker Hub so you can deploy it (you can get started with Hub here), and then create an ACI context to deploy it to.
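
End to end, that first deployment looks something like this sketch (the context name and image are placeholders):

$ docker login azure                       # authenticate against your Azure account
$ docker context create aci myacicontext   # pick or create an Azure resource group
$ docker context use myacicontext
$ docker run -d -p 80:80 myorg/myimage     # runs as a container instance in ACI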

We have done a few blog posts now on the different types of things you can achieve with the ACI integration. 

If you have other questions on the experience or would like some other guides then drop us a message in the Compose CLI repo so we can update our docs. 

What’s new in V1.0 

Since the last release of the CLI, we have added a few new commands to make it easier to manage your working environment, and to make it simpler to see what you can clean up to save money on resources you are not using.

To start, we have added a volume inspect command alongside volume create to allow you better management of your volumes:

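Assuming an existing Azure storage account (the account and volume names below are placeholders), the pair of commands looks something like:

$ docker volume create --storage-account mystorageaccount mydata
$ docker volume inspect mystorageaccount/mydata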

We are also very excited by our new top-level prune command, which allows you to better clean up your ACI working environment and manage your costs.

docker prune --help


We have also added a couple of interesting flags here. The --dry-run flag lets you see what would be cleaned up:

[Screenshot: docker prune --dry-run output. OK, I am not running a great deal here!]

As you can see, this also lets you know the amount of compute resources you will be reclaiming. At the end of a development session, being able to do a force prune allows you to remove ‘all the things you have run’, giving you the confidence that you won’t have left something running and won’t get an unexpected bill.
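
A sketch of such an end-of-session cleanup, using the flags discussed above:

$ docker prune --dry-run   # preview which ACI resources would be reclaimed
$ docker prune --force     # remove ‘all the things you have run’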

Lastly, based on your feedback, we have started to add a few more flags. A couple of examples are the addition of --format json and --quiet to the ps, context ls, compose ls, compose ps, and volume ls commands to output JSON or single IDs.
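
For instance (a sketch showing the two flags on two of the commands listed above):

$ docker ps --format json
$ docker compose ps --quiet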

We are really excited about the new experience we have built with ACI. If you have any feedback on the experience or have ideas for other backends for the Compose CLI, please let us know via our Public Roadmap.

Docker Open Sources Compose for Amazon ECS and Microsoft ACI https://www.docker.com/blog/open-source-cloud-compose/ Thu, 24 Sep 2020 17:00:00 +0000 https://www.docker.com/blog/?p=27018 Today we are open sourcing the code for the Amazon ECS and Microsoft ACI Compose integrations. This is the first time that Docker has made Compose available for the cloud, allowing developers to take their Compose projects they were running locally and deploy them to the cloud by simply switching context.

With Docker focusing on developers, we’ve been doubling down on the parts of Docker that developers love, like Desktop, Hub, and of course Compose. Millions of developers all over the world use Compose to develop their applications and love its simplicity, but there was no simple way to get these applications running in the cloud.

Docker is working to make it easier to get code running in the cloud in two ways. First we moved the Compose specification into a community project. This will allow Compose to evolve with the community so that it may better solve more user needs and ensure that it is agnostic of runtime platform. Second, we’ve been working with Amazon and Microsoft on CLI integrations for Amazon ECS and Microsoft ACI that allow you to use docker compose up to deploy Compose applications directly to the cloud.

While implementing these integrations, we wanted to make sure that existing CLI commands were not impacted. We also wanted an architecture that would make it easy to add new backends and provide SDKs in popular languages. We achieved this with the following architecture:

[Diagram: CLI architecture]

The Node SDK and Compose CLI parts of this diagram are what we have open sourced today. This architecture is not final and we plan to merge the Compose CLI with the existing CLI at a later time.

Depending on the Docker Context that the user selects, the Compose CLI switches which backend is used for the command or API call. This allows us to pass commands which use existing contexts to the existing CLI transparently. The backend interface abstraction allows the implementation of a backend for any container runtime so that users can get the same Docker CLI UX they know and love for it along with the new APIs and SDK.
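
Concretely, the active Docker context alone decides which backend handles a command (a sketch; the context name is a placeholder):

$ docker context use myacicontext   # subsequent commands hit the ACI backend
$ docker ps                         # lists containers running in ACI
$ docker context use default        # back to the local Docker Engine
$ docker ps                         # lists local containers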

The Compose CLI can serve a gRPC API to provide similar functionality to that of the CLI commands. We chose to use gRPC as this allows us to generate high quality SDKs in popular languages like Node.js, Python, and Golang. While we currently only provide a Node SDK that supports single container management on ACI, there are plans to add Compose support, extend it to ECS and other backends, and add other language SDKs in the near future. The Node SDK is already used by VS Code to implement its Docker experience on ACI.

This work wouldn’t have been possible without help from our partners at Microsoft and AWS who helped us build the best possible experience for their respective platforms. Our team has enjoyed working with all of you! From Microsoft we’d specifically like to thank Mike Morton, Karol Zadora-Przylecki, Brandon Waterloo, MacKenzie Olson, and Paul Yuknewicz. From AWS we’d like to thank Carmen Puccio, David Killmon, Sravan Rengarajan, Uttara Sridhar, and David Duffey.

These tools are currently in beta so feedback and pull requests are welcome!

To get started working with Compose in the Cloud you can download Docker Desktop here and get a free Hub account to deploy your images from here. Once you have your image saved to Docker Hub, you will be able to deploy it to either ECS or ACI. To find out more about how to do this:

Deploying WordPress to the Cloud https://www.docker.com/blog/deploying-wordpress-to-the-cloud/ Tue, 11 Aug 2020 16:00:00 +0000 https://www.docker.com/blog/?p=26861 I was curious the other day how hard it would be to actually set up my own blog, or rather, I was more interested in how easy it is now to do this with containers. There are plenty of platforms that host blogs for you, but is it really now as easy to just run one yourself?

In order to get started, you can sign up for a Docker ID, or use your existing Docker ID, to download the latest version of Docker Desktop Edge, which includes the new Compose on ECS experience.

Start with the local experience

To start, I set up a local WordPress instance on my machine, grabbing a Compose file example from the awesome-compose repo.

Initially I had a go at running this locally with Docker Compose:

$ docker-compose up -d

Then I can get the list of running containers:

$ docker-compose ps
    Name                          Command               State          Ports
    --------------------------------------------------------------------------------------
    deploywptocloud_db_1          docker-entrypoint.sh --def ...   Up      3306/tcp, 33060/tcp
    deploywptocloud_wordpress_1   docker-entrypoint.sh apach ...   Up      0.0.0.0:80->80/tcp

And then lastly I had a look in the browser to see that this was running correctly.

Deploy to the Cloud

Great! Now I needed to look at the contents of the Compose file to understand what I would want to change when moving over to the cloud.

I am going to be running this in Elastic Container Service on AWS using the new Docker ECS integration in the Docker CLI. This means I will be using some of the new docker ecs compose commands to run things rather than the traditional docker-compose commands. (In the future we will be moving to just docker compose everywhere!)

version: '3.7'
services:
  db:
    image: mysql:8.0.19
    command: '--default-authentication-plugin=mysql_native_password'
    restart: always
    volumes:
      - db_data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=somewordpress
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
  wordpress:
    image: wordpress:latest
    ports:
      - 80:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress
volumes:
  db_data:

Normally I would move my DB passwords into a secret here, but secret support is still to come in the ECS integration, so for now we will keep our secret in our Compose file.

The next step is to consider how we are going to run this in AWS. To continue, you will need to have an AWS account set up.

Choosing a Database service

Currently the Compose support for ECS in Docker doesn’t support volumes (please vote on our roadmap here), so we probably want to choose a database service to use instead. In this instance, let’s pick RDS. 

To start let’s open up our AWS console and get our RDS instance provisioned.


Here I have gone into the RDS section and chosen the MySQL instance type to match what I was using locally, along with the lowest tier of DB, as that is all I think I need.

I now enter the details of my DB, making sure to note the password to include in my Compose file.

Great, now we need to update our Compose file to no longer use our local MySQL and instead use the RDS instance. For this I am going to make a ‘prod’ Compose file to use; I will also need to grab my DB host name from RDS.


Adapting our Compose File

I can now update my Compose file by removing the DB running in a container and adding my environment information.

version: '3.7'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - 80:80
    restart: always
    environment:
      WORDPRESS_DB_HOST: wordpressdbecs.c1unvilqlnyq.eu-west-3.rds.amazonaws.com:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress123
      WORDPRESS_DB_NAME: wordpressdbecs

What we can see is that the Compose file is much smaller now as I am taking a dependency on the manually created RDS instance. We only have a single service (“wordpress”) and there is no more “db” service needed. 

Creating an ecs context and Deploying

Now that we have all the parts ready to deploy, we will start by setting up our ECS context by following these steps:

  1. Create a new ECS context by running: docker ecs setup
  2. We will be asked to name our context; I am just going to hit enter to name my context ecs.
  3. We will then be asked to select an AWS profile. I don’t already have the AWS extension installed, so I will select new profile and name it ‘myecsprofile’.
  4. I will then need to select a region. I am based in Europe, so I will enter eu-west-3 (make sure you do this in the same region you deployed your DB earlier!).
  5. Now you will either need to enter an AWS access key here, or, if you are already using something like AWS okta or an AWS CLI, you can say N here to use your existing credentials.
  6. With all of this done you may still get the error message that you need to ‘migrate to the new ARN format’ (Amazon Resource Name). You can read more about this on the Amazon blog post here. To complete the change you will need to go to the console settings for your AWS account, move your opt-in over to an ‘enabled’ state, and then save the setting.
  7. Let’s now check that our ECS context has been successfully created by listing the available contexts using docker context ls.
  8. With this all in place we can now use our new ECS context to deploy! We will need to set our ECS context as our current focus: docker context use ecs
  9. Then we will be able to have our first go at deploying our Compose application to ECS using the compose up command: docker ecs compose up
  10. With this complete we can check the logs of our WordPress instance to see that everything is working correctly: docker ecs compose logs
  11. It looks like our WordPress instance cannot access our DB. If we jump back into the Amazon web console and have a look at our DB settings using the ‘modify’ button on the overview page, we can see in our security groups that our WordPress deployment is not included, as only the default group is.

You should be able to see your container project name (I have a couple here from prepping this blog post a couple of times!). You will want to add this group in with the same project name and then save your changes to apply immediately.


Now I run: docker ecs compose ps


From the command output I can grab the full URL, including my port, and navigate to my site newly deployed to the Cloud using my web browser.


Great! Now we have two Compose files: one that lets us work on this locally, and one that lets us run it in the cloud with our RDS instance.
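
Switching between the two might look like this (a sketch; docker-compose.prod.yml is a placeholder name for the ‘prod’ file, assuming the plugin’s --file flag):

$ docker-compose up -d                                   # local stack
$ docker ecs compose --file docker-compose.prod.yml up   # cloud stack on ECS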

Cleaning resources

Remember, when you are done, if you don’t want to keep your website running (and continuing to pay for it), use a docker compose down, and you may want to remove your RDS instance as well.
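
With the ECS integration used in this post, the tear-down looks something like this sketch:

$ docker ecs compose down   # stops the app and removes the AWS resources it created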

Conclusion

There you have it: we now have a WordPress instance we can deploy either locally with persistent state or in the cloud!

To get started, remember you will need the latest Edge version of Docker Desktop. If you want to do this from scratch, you can get started with the WordPress Official Image, or you could try this with one of the other Official Images on Hub. And remember, if you want to run anything you have created locally in your ECS instance, you will need to have saved it to Docker Hub first. To start sharing your content on Hub, check out our get started guide for Hub.
