Docker Acquires Mutagen for Continued Investment in Performance and Flexibility of Docker Desktop
https://www.docker.com/blog/mutagen-acquisition/ | Tue, 27 Jun 2023 17:00:13 +0000

I’m excited to announce that Docker, voted the most-used and most-desired tool in Stack Overflow’s 2023 Developer Survey, has acquired Mutagen IO, Inc., the company behind the open source Mutagen file synchronization and networking technologies that enable high-performance remote development. Mutagen’s synchronization and forwarding capabilities facilitate the seamless transfer of code, binary artifacts, and network requests between arbitrary locations, connecting local and remote development environments. When combined with Docker’s existing developer tools, Mutagen unlocks new possibilities for developers to innovate and accelerate development velocity with local and remote containerized development.

“Docker is more than a container tool. It comprises multiple developer tools that have become the industry standard for self-service developer platforms, empowering teams to be more efficient, secure, and collaborative,” says Docker CEO Scott Johnston. “Bringing Mutagen into the Docker family is another example of how we continuously evolve our offering to meet the needs of developers with a product that works seamlessly and improves the way developers work.”


The Mutagen acquisition introduces novel mechanisms for developers to extract the highest level of performance from their local hardware while simultaneously opening the gateway to the newest remote development solutions. We continue scaling the abilities of Docker Desktop to meet the needs of the growing number of developers, businesses, and enterprises relying on the platform.

 “Docker Desktop is focused on equipping every developer and dev team with blazing-fast tools to accelerate app creation and iteration by harnessing the combined might of local and cloud resources. By seamlessly integrating and magnifying Mutagen’s capabilities within our platform, we will provide our users and customers with unrivaled flexibility and an extraordinary opportunity to innovate rapidly,” says Webb Stevens, General Manager, Docker Desktop.

 “There are so many captivating integration and experimentation opportunities that were previously inaccessible as a third-party offering,” says Jacob Howard, the CEO at Mutagen. “As Mutagen’s lead developer and a Docker Captain, my ultimate goal has always been to enhance the development experience for Docker users. As an integral part of Docker’s technology landscape, Mutagen is now in a privileged position to achieve that goal.”

Jacob will join Docker’s engineering team, spearheading the integration of Mutagen’s technologies into Docker Desktop and other Docker products.

You can get started with Mutagen today by downloading the latest version of Docker Desktop and installing the Mutagen extension, available in the Docker Extensions Marketplace. Support for current Mutagen offerings, open source and paid, will continue as we develop new and better integration options.

FAQ | Docker Acquisition of Mutagen

With Docker’s acquisition of Mutagen, you’re sure to have questions. We’ve answered the most common ones in this FAQ.

As with all of our open source efforts, Docker strives to do right by the community. We want this acquisition to benefit everyone — community and customer — in keeping with our developer obsession.

What will happen to Mutagen Pro subscriptions and the Mutagen Extension for Docker Desktop?

Both will continue as we evaluate and develop new and better integration options. Existing Mutagen Pro subscribers will see an update to the supplier on their invoices, but no other billing changes will occur.

Will Mutagen become closed-source?

There are no plans to change the licensing structure of Mutagen’s open source components. Docker has always valued the contributions of open source communities.

Will Mutagen or its companion projects be discontinued?

There are no plans to discontinue any Mutagen projects. 

Will people still be able to contribute to Mutagen’s open source projects?

Yes! Mutagen has always benefited from outside collaboration in the form of feedback, discussion, and code contributions, and there’s no desire to change that relationship. For more information about how to participate in Mutagen’s development, see the contributing guidelines.

What about other downstream users, companies, and projects using Mutagen?

Mutagen’s open source licenses continue to allow the embedding and use of Mutagen by other projects, products, and tooling.

Who will provide support for Mutagen projects and products?

In the short term, support for Mutagen’s projects and products will continue to be provided through the existing support channels. We will work to merge support into Docker’s channels in the near future.

Is this replacing Virtiofs, gRPC-FUSE, or osxfs?

No, virtual filesystems will continue to be the default path for bind mounts in Docker Desktop. Docker is continuing to invest in the performance of these technologies.

How does Mutagen compare with other virtual or remote filesystems?

Mutagen is a synchronization engine rather than a virtual or remote filesystem. Mutagen can be used to synchronize files to native filesystems, such as ext4, trading typically imperceptible amounts of latency for full native filesystem performance.
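As a rough illustration of what that looks like in practice, here is a hypothetical Mutagen session that keeps a local project directory in sync with a directory inside a running container; the container name and paths are placeholders:

# Create a two-way sync session between a local directory and a container named "dev"
mutagen sync create --name=code-sync ./project docker://dev/app/project

# Check the status of all active sync sessions
mutagen sync list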

How does Mutagen compare with other synchronization solutions?

Mutagen focuses primarily on configuration and functionality relevant to developers.

How can I get started with Mutagen?

To get started with Mutagen, download the latest version of Docker Desktop and install the Mutagen Extension from the Docker Desktop Extensions Marketplace.

Develop Your Cloud App Locally with the LocalStack Extension
https://www.docker.com/blog/develop-your-cloud-app-locally-with-the-localstack-extension/ | Fri, 13 Jan 2023 15:00:00 +0000

Local deployment is a great way to improve your development speed, lower your cloud costs, and develop for the cloud when access is restricted due to regulations. But it can also mean one more tool to manage when you’re developing an application.

With the LocalStack Docker Extension, you get a fully functional local cloud stack integrated directly into Docker Desktop, so it’s easy to develop and test cloud-native applications in one place.

Let’s take a look at local deployment and how to use the LocalStack Docker Extension.

Why run cloud applications locally?

By running your cloud app locally, you have complete control over your environment. That control makes it easier to reproduce results consistently and test new features. This gives you faster deploy-test-redeploy cycles and makes it easier to debug and replicate bugs. And since you’re not using cloud resources, you can create and tear down resources at will without incurring cloud costs.

Local cloud development also allows you to work in regulated environments where access to the cloud is restricted. By running the app on your own machine, you can still work on projects without being constrained by external restrictions.

How LocalStack works

LocalStack is a cloud service emulator that provides a fully functional local cloud stack for developing and testing AWS cloud and serverless applications. With 45K+ GitHub stars and 450+ contributors, LocalStack is backed by a large, active open-source community with 100,000+ active users worldwide.

LocalStack acts as a local “mini-cloud” operating system with multiple components, such as process management, file system abstraction, event processing, schedulers, and more. These LocalStack components run in a Docker container and expose a set of external network ports for integrations, SDKs, or CLI interfaces to connect to LocalStack APIs.

The LocalStack architecture is designed to be lightweight and cross-platform compatible to make it easy to use a local cloud stack.

With LocalStack, you can simulate the functionality of many AWS cloud services, like Lambda and S3, without having to connect to the actual cloud environment. You can even apply your complex CDK applications or Terraform configurations and emulate everything locally.
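For a concrete sense of what that means, here is a small sketch: with LocalStack running and listening on its default edge port (4566), you can point the standard AWS CLI at the emulator instead of the real cloud. The bucket name is just an example:

# Create and list an S3 bucket entirely against the local emulator
aws --endpoint-url=http://localhost:4566 s3 mb s3://local-demo
aws --endpoint-url=http://localhost:4566 s3 ls

# The same pattern works for other emulated services, such as Lambda
aws --endpoint-url=http://localhost:4566 lambda list-functions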

The official LocalStack Docker image has been downloaded 100+ million times and provides a multi-arch build that’s compatible with AMD/x86 and ARM-based CPU architectures. LocalStack supports over 80 AWS APIs, including compute (Lambda, ECS), databases (RDS, DynamoDB), messaging (SQS, MSK), and other sophisticated services (Glue, Athena). It offers advanced collaboration features and integrations with Infrastructure-as-Code tooling, continuous integration (CI) systems, and much more, enabling an efficient development and testing loop for developers.

Why run LocalStack as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With LocalStack as a Docker Extension, you now have an easier, faster way to run LocalStack.

The extension creates a running LocalStack instance. This allows you to easily configure LocalStack to fit the needs of a local cloud sandbox for development, testing and experimentation. Currently, the LocalStack extension for Docker Desktop supports the following features:

  • Control LocalStack: Start, stop, and restart LocalStack from Docker Desktop. You can also see the current status of your LocalStack instance and navigate to the LocalStack Web Application.
  • LocalStack insights: You can see the log information of the LocalStack instance and all the available services and their status on the service page.
  • LocalStack configurations: You can manage and use your profiles via configurations and create new configurations for your LocalStack instance.

How to use the LocalStack Docker Extension 

In this section, we’ll emulate some simple AWS commands by running LocalStack through Docker Desktop. For this tutorial, you’ll need to have Docker Desktop (v4.8+) and the AWS CLI installed.

Step 1: Enable Docker Extensions

You’ll need to enable Docker Extensions under the Preferences tab in Docker Desktop.

Enable Docker Extensions on Docker Desktop.

Step 2: Install the LocalStack extension

The LocalStack extension is available on the Extensions Marketplace in Docker Desktop and on Docker Hub. To get started, search for LocalStack in the Extensions Marketplace, then select Install.

Search the Extensions Marketplace on Docker Desktop for LocalStack.

Alternatively, you can install the LocalStack Extension for Docker Desktop by pulling our public Docker image from Docker Hub:

docker extension install localstack/localstack-docker-desktop:0.3.1

Step 3: Initialize LocalStack

Once the extension is installed, you’re ready to use LocalStack! When you open the extension for the first time, you’ll be prompted to select where LocalStack will be mounted. Open the drop-down and choose your username.
You can also change this setting by navigating to the Configurations tab and selecting the mount point.

Select where LocalStack will be mounted.

Use the Start button to get started using LocalStack. If LocalStack’s Docker image isn’t present, the extension will pull it automatically (which may take some time).

Step 4: Run basic AWS commands

To demonstrate the functionality of LocalStack, you can run AWS commands against the local infrastructure using awslocal, our wrapper around the AWS CLI. You can install it using pip.

pip install awscli-local

After it’s installed, all the available services will be displayed in the LocalStack extension once LocalStack starts up.

List of available AWS actions on the LocalStack Docker Extension.

You can now run some basic AWS commands to check if the extension is working correctly. Try these commands to create a hello-world file on LocalStack’s S3, fully emulated locally:

awslocal s3 mb s3://test 
echo "hello world" > /tmp/hello-world 
awslocal s3 cp /tmp/hello-world s3://test/hello-world 
awslocal s3 ls s3://test/

You should see a hello-world file in your local S3 bucket. You can now navigate to Docker Desktop to see that S3 is running while the rest of the services are still marked as available.

Amazon S3 running on the LocalStack Docker Extension.

Navigate to the logs and you’ll see the API requests being made with 200 status codes. If you’re running LocalStack to emulate a local AWS infrastructure, you can check the logs to see if a particular API request has gone wrong and further debug it through the logs.

Since the resources are ephemeral, you can stop LocalStack anytime to start fresh. And unlike doing this on AWS, you can spin resources up or down at will without worrying about lingering resources incurring costs.

View the logs for the LocalStack Docker Extension.

Step 5: Use configuration profiles to quickly spin up different environments

Using LocalStack’s Docker Extension, you can create a variety of configuration profiles containing specific LocalStack configuration variables or API keys. When you select a configuration profile before starting the container, these variables are passed directly to the running LocalStack container.

This makes it easy for you to change the behavior of LocalStack so you can quickly spin up local cloud environments already configured to your needs.
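For a sense of the kind of settings a profile can hold, here is a sketch using standard LocalStack configuration variables; the same variables could also be passed by hand to a LocalStack container started outside the extension (the service list is only an example):

# Limit startup to the services you actually need and turn on verbose logging
docker run -d --name localstack \
  -p 4566:4566 \
  -e SERVICES=s3,lambda,dynamodb \
  -e DEBUG=1 \
  localstack/localstack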

What will you build with LocalStack?

The LocalStack Docker Extension makes it easy to control the LocalStack container via a user interface. By integrating directly with Docker Desktop, we hope to make your development process easier and faster.

And even more is on the way! In upcoming iterations, the extension will be further developed to support more AWS APIs, add integrations with the LocalStack Web Application, and include tooling like Cloud Pods, LocalStack’s state management and team collaboration feature.

Please let us know what you think! LocalStack is an open source, community-focused project. If you’d like to contribute, you can follow our contributing documentation to set up the project on your local machine and use developer tools to develop new features or fix old bugs. You can also create new issues on the LocalStack Docker Extension issue tracker or propose new features to LocalStack through our LocalStack Discuss forum.

Join Docker This Month at KubeCon and the Cloud Engineering Summit
https://www.docker.com/blog/join-docker-this-month-at-kubecon-and-the-cloud-engineering-summit/ | Fri, 08 Oct 2021 14:21:51 +0000

Two cloud-related conferences are coming up this month, and Docker will have speakers at both. First up, Docker CTO Justin Cormack will present at KubeCon next week. The week after that, Peter McKee, Docker’s head of Developer Relations, will speak at the Pulumi Cloud Engineering Summit.


At KubeCon, Justin and co-presenter Steve Lasker of Microsoft will speak on the topic of tooling for supply chain security with special reference to the Notary project. They’ll also look at the future roadmap and the supply chain landscape. KubeCon, the flagship conference of the Cloud Native Computing Foundation, is geared toward adopters and technologists from leading open source and cloud native communities. The conference runs Oct. 11 – 15 in Los Angeles and virtually. Justin’s presentation, titled Notary: State of the Container Supply Chain, takes place Thursday, Oct. 14 at 4:30 p.m. – 5:05 p.m. Pacific.


At the Cloud Engineering Summit, Peter will team up with Uffizzi’s Josh Thurman to speak about Continuous Previews — a cousin of Continuous Integration and Continuous Deployments that allows developers to easily share new features and changes to a wide audience within their organization, thereby speeding the delivery of features to users. The Wednesday, Oct. 20 summit is a virtual day of learning for cloud practitioners that focuses on best practices for building, deploying and managing modern cloud infrastructure. Peter’s presentation, titled Continuous Previews: Using Infrastructure as Code to Continuously Share and Preview Your Application, takes place at 3:00 p.m. – 3:30 p.m. Pacific.

Setting Up Cloud Deployments Using Docker, Azure and GitHub Actions
https://www.docker.com/blog/setting-up-cloud-deployments-using-docker-azure-and-github-actions/ | Thu, 29 Oct 2020 15:00:00 +0000

A few weeks ago I shared a blog post about how to use GitHub Actions with Docker; prior to that, Guillaume shared his blog post on using Docker and ACI. I thought I would bring these two together to look at a single flow to go from your code in GitHub all the way through to deploying on ACI using our new Docker to ACI experience!

To start, let’s remember where we were with our last GitHub Action. Last time, we got to a point where our builds to master would be rebuilt and pushed to Docker Hub (and we used some caching to speed these up).

name: CI to Docker Hub
 
on:
 push:
   tags:
     - "v*.*.*"
 
jobs:
 
 build:
   runs-on: ubuntu-latest
   steps:
     -
       name: Checkout
       uses: actions/checkout@v2
     -      
       name: Set up Docker Buildx
       id: buildx
       uses: docker/setup-buildx-action@v1
     -    
       name: Cache Docker layers
       uses: actions/cache@v2
       with:
         path: /tmp/.buildx-cache
         key: ${{ runner.os }}-buildx-${{ github.sha }}
         restore-keys: |
           ${{ runner.os }}-buildx-
     -
       uses: docker/login-action@v1
       with:
         username: ${{ secrets.DOCKER_USERNAME }}
         password: ${{ secrets.DOCKER_PASSWORD }}
     -
       name: Build and push
       id: docker_build
       uses: docker/build-push-action@v2
       with:
         context: ./
         file: ./Dockerfile
         builder: ${{ steps.buildx.outputs.name }}
         push: true
         tags: bengotch/simplewhale:latest
         cache-from: type=local,src=/tmp/.buildx-cache
         cache-to: type=local,dest=/tmp/.buildx-cache
     -
       name: Image digest
       run: echo ${{ steps.docker_build.outputs.digest }}

Now we want to find out how to take the image we have built and get it deployed onto ACI.

The first thing I will need to do is head over to my GitHub repository and add a few more secrets, which will be used to store my credentials for Azure. If you already have an Azure account and can grab your credentials, that is great. If not, you will need to create the Azure credentials we are going to use, but we cover that as well.

I will need to add my tenant ID as the secret AZURE_TENANT_ID. I will then need to create an app in Azure to get a client ID and a secret. The easiest way to do this is to use the Azure CLI with the command:

az ad sp create-for-rbac --name http://myappname --role contributor --sdk-auth

This will output your AZURE_CLIENT_ID and AZURE_CLIENT_SECRET.
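The command prints a JSON document; the fields map onto the GitHub secrets roughly as sketched below (values redacted here, and the real output also includes several endpoint URLs):

{
  "clientId": "<use as AZURE_CLIENT_ID>",
  "clientSecret": "<use as AZURE_CLIENT_SECRET>",
  "subscriptionId": "<use as AZURE_SUBSCRIPTION_ID>",
  "tenantId": "<use as AZURE_TENANT_ID>",
  ...
}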

Lastly, I will need to add my subscription ID, which I can find in the Azure portal, and add it as AZURE_SUBSCRIPTION_ID.


If this is the first time you have used Azure, you will also need to create a resource group, which is the Azure way to group a set of resources for a single solution. You can set up a new resource group in the Azure portal; for example, I created a new one called simplewhale in UK South.
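If you prefer the command line, a resource group can also be created with the Azure CLI; something like the following should do it, using the same name and region referenced later in the workflow:

az group create --name simplewhale --location uksouth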

Now we can start to build out our action. We will want to put in a condition for when we want this workflow to trigger. I would like to be quite continuous, so I will deploy the image each time it has been pushed to Docker Hub:

on:
   workflow_run:
     workflows: ["CI to Docker Hub"]
     branches: [main]
     types:
       - completed

With this in place, I will now set up my action to run on an Ubuntu box:

jobs:
 run-aci:
   runs-on: ubuntu-latest
   steps:
     - name: Checkout code
       uses: actions/checkout@v2

Next, I will need to install the Docker Compose CLI onto the Actions instance I am running on:

     - name: Install Docker Compose CLI
       run: >
         curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh

With this installed, I can then log into Azure using the Compose CLI, making use of the secrets we entered earlier:

     - name: "login azure"
       run: "docker login azure --client-id $AZURE_CLIENT_ID --client-secret $AZURE_CLIENT_SECRET --tenant-id $AZURE_TENANT_ID"
       env:
         AZURE_TENANT_ID: '${{ secrets.AZURE_TENANT_ID }}'
         AZURE_CLIENT_ID: '${{ secrets.AZURE_CLIENT_ID }}'
         AZURE_CLIENT_SECRET: '${{ secrets.AZURE_CLIENT_SECRET }}'

Having logged in, I need to create an ACI context to use for my deployments:

     - name: "Create an aci context"
       run: 'docker context create aci --subscription-id $AZURE_SUBSCRIPTION_ID --resource-group simplewhale --location uksouth acicontext'
       env:
         AZURE_SUBSCRIPTION_ID: '${{ secrets.AZURE_SUBSCRIPTION_ID }}'

Then I will want to deploy my container using my ACI context. I have added a curl step to make sure it’s up:

     - name: "Run my App"
       run: 'docker --context acicontext run -d --name simplewhale --domainname simplewhale -p 80:80 bengotch/simplewhale'
 
     - name: "Test deployed server"
       run: 'curl http://simplewhale.uksouth.azurecontainer.io/'

And then we can just double check the output of the curl step to be sure.

Great! Once again my Whale app has been successfully deployed! Now I have a CI pipeline that stores things in the GitHub registry for minor changes, ships my full numbered versions to Docker Hub, and then re-deploys these to ACI for me!

To run through a deeper example using Compose as well, why not check out Karol’s example of using the ACI experience with his Compose application, which also includes how to use mounts and connect to another registry.
You can get started with the ACI experience locally using Docker Desktop today. Remember, you will also need to have your images in a repo to use them in ACI, which can easily be done with Docker Hub.

Docker Open Sources Compose for Amazon ECS and Microsoft ACI
https://www.docker.com/blog/open-source-cloud-compose/ | Thu, 24 Sep 2020 17:00:00 +0000

Today we are open sourcing the code for the Amazon ECS and Microsoft ACI Compose integrations. This is the first time that Docker has made Compose available for the cloud, allowing developers to take the Compose projects they were running locally and deploy them to the cloud by simply switching context.

With Docker focusing on developers, we’ve been doubling down on the parts of Docker that developers love, like Desktop, Hub, and of course Compose. Millions of developers all over the world use Compose to develop their applications and love its simplicity, but there was no simple way to get these applications running in the cloud.

Docker is working to make it easier to get code running in the cloud in two ways. First we moved the Compose specification into a community project. This will allow Compose to evolve with the community so that it may better solve more user needs and ensure that it is agnostic of runtime platform. Second, we’ve been working with Amazon and Microsoft on CLI integrations for Amazon ECS and Microsoft ACI that allow you to use docker compose up to deploy Compose applications directly to the cloud.
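At a high level, the flow looks something like the sketch below: create a cloud-backed context once, switch to it, and run the same Compose command you use locally. The context names here are arbitrary, and the ECS/ACI context creation prompts for your cloud credentials:

# Create a context backed by Amazon ECS (or use "docker context create aci" for Azure)
docker context create ecs myecs

# Point the CLI at the cloud backend and deploy the Compose application
docker context use myecs
docker compose up

# Switch back to the local Docker engine when you are done
docker context use default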

While implementing these integrations, we wanted to make sure that existing CLI commands were not impacted. We also wanted an architecture that would make it easy to add new backends and provide SDKs in popular languages. We achieved this with the following architecture:

CLI Architecture

The Node SDK and Compose CLI parts of this diagram are what we have open sourced today. This architecture is not final and we plan to merge the Compose CLI with the existing CLI at a later time.

Depending on the Docker Context that the user selects, the Compose CLI switches which backend is used for the command or API call. This allows us to pass commands which use existing contexts to the existing CLI transparently. The backend interface abstraction allows the implementation of a backend for any container runtime so that users can get the same Docker CLI UX they know and love for it along with the new APIs and SDK.

The Compose CLI can serve a gRPC API to provide similar functionality to that of the CLI commands. We chose to use gRPC as this allows us to generate high quality SDKs in popular languages like Node.js, Python, and Golang. While we currently only provide a Node SDK that supports single container management on ACI, there are plans to add Compose support, extend it to ECS and other backends, and add other language SDKs in the near future. The Node SDK is already used by VS Code to implement its Docker experience on ACI.

This work wouldn’t have been possible without help from our partners at Microsoft and AWS who helped us build the best possible experience for their respective platforms. Our team has enjoyed working with all of you! From Microsoft we’d specifically like to thank Mike Morton, Karol Zadora-Przylecki, Brandon Waterloo, MacKenzie Olson, and Paul Yuknewicz. From AWS we’d like to thank Carmen Puccio, David Killmon, Sravan Rengarajan, Uttara Sridhar, and David Duffey.

These tools are currently in beta so feedback and pull requests are welcome!

To get started working with Compose in the cloud, you can download Docker Desktop and create a free Docker Hub account to deploy your images from. Once you have your image saved to Docker Hub, you will be able to deploy it to either ECS or ACI.

July Recap: Docker Talks Live Stream
https://www.docker.com/blog/docker-talks-live-stream-monthly-recap/ | Fri, 07 Aug 2020 19:10:11 +0000

Here at Docker, we have a deep love for developers and with more and more of the community working remotely, we thought it would be a great time to start live streaming and connecting with the community virtually. 

To that end, Chad Metcalf (@metcalfc) and I (@pmckee) have started to live stream every Wednesday at 10am Pacific Time on YouTube. You can find all of the past streams and subscribe to get notifications when we go live on our YouTube channel.

Every week we will cover a new topic focusing on developers and developer productivity using the Docker platform. We will have guest speakers, demo a bunch of code and answer any questions that you might have. 

Below I’ve compiled a list of past live streams that you can watch at your leisure and we look forward to seeing you on the next live stream.

Docker ♥️ AWS – A match made in heaven


Cloud container runtimes are complex and the learning curve can be steep for some developers. Not all development teams have DevOps teams to partner with, which shifts the burden of understanding runtime environments, CLIs, and configuration for the cloud to the development team. But one thing is for sure: developers love the simplicity of Docker and Compose.

In this live stream, follow along as Chad Metcalf (@metcalfc) uses the new ECS context along with Docker Compose commands to run an application locally and then without changes deploy directly to ECS.

Running Docker containers in Azure ACI


Back in June, we announced our partnership with Microsoft to help developers seamlessly move their code and applications from their desktops running Docker to the cloud running on Azure Container Instances (ACI).

Developers can now easily switch between their local Docker context to an ACI context and run a single Docker container or a service composed of a group of multiple containers defined with a Docker Compose file. All this is done without setting up infrastructure and takes advantage of features such as mounting Azure Storage and GitHub repositories as volumes.

In this live stream, Chad Metcalf (@metcalfc) walks us through the new Azure ACI integration using Docker Context and Compose. He starts out by running a single container locally and then switching context to ACI and deploying that same container in the cloud. Then he moves on to show the same developer workflow but now using Docker Compose and ACI to deploy a multi-container application.

Getting Started Q&A


At Docker, our mission is to help developers become more productive. The Docker product is essential to every developer who is using containers and deploying to the cloud, whether that’s on-prem or on a public cloud.

In this live stream, we answer the top questions developers have when getting started with Docker. We talk about running containers locally, setting up Docker Compose files, building images and a little bit of networking.

If you have a question that you would like us to answer, please feel free to fill out this form and we would be happy to answer these questions on the next Q&A live stream. Or feel free to join us live and drop your questions in the chat box.

VSCode Docker Extension


VSCode is a developer favorite and Microsoft has created a fantastic plug-in to help developers manage the development lifecycle using Docker. 
In this live stream, Chad Metcalf (@metcalfc) walks Peter McKee (@pmckee) through the major features of the VSCode Docker Extension and answers your questions. We cover the new context features and show how to start, stop and connect to containers using the VSCode Docker Extension.

Resources

For more on how to use Docker and to sign-up for a free account, check out the resources below: 

DockerCon 2020: Top Rated Sessions – The Fundamentals
https://www.docker.com/blog/dockercon-2020-top-rated-sessions-the-fundamentals/ | Fri, 12 Jun 2020 16:10:46 +0000

Of all the sessions from DockerCon LIVE 2020, the Best Practices + How To’s track sessions received the most live views and on-demand views. Not only were these sessions highly viewed, they were also highly rated. We thought this would be the case based on the fact that many developers are learning Docker for the first time as application containerization is experiencing broad adoption within IT shops. In the recently released 2020 Stack Overflow Developer Survey, Docker ranked as the #1 most wanted platform. The data is clear…developers love Docker!

This post begins our series of blog articles focusing on the key developer content that we are curating from DockerCon. What better place to start than with the fundamentals. Developers are looking for the best content by the top experts to get started with Docker. These are the top sessions from the Best Practices + How To’s track. 

How to Get Started with Docker
Peter McKee – Docker


Peter’s session was the top session based on views across all of the tracks. He does an excellent job focusing on the fundamentals of containers and how to go from code to cloud. This session covers getting Docker installed, writing your first Dockerfiles, building and managing images, and shipping your images to the cloud.

Build & Deploy Multi-Container Applications to AWS
Lukonde Mwila – Entelect


Lukonde’s excellent session was the second most-viewed DockerCon session. Developers are looking for more information on how to best deploy their apps to the cloud. You definitely want to watch this session as Lukonde provides not only a great overview but gets into the code and command line. This session covers Docker Compose as well as how to containerize an Nginx server, a React app, a Node.js app, and a MongoDB app. He also covers how to create a CI/CD pipeline and how to push images to Docker Hub.

Simplify All the Things with Docker Compose
Michael Irwin – Virginia Tech


Michael is a Docker Captain and a top expert on Docker. He focuses this session on where the magic happens with Docker: Docker Compose. It’s the magic that delivers the simplest dev onboarding experience imaginable. Michael starts with the basics but quickly moves into several advanced topics. The section on how to use Docker Compose in your CI/CD pipelines to perform automated tests of your container images is a real gem!

How to Build and Test Your Docker Images in the Cloud
Peter McKee – Docker


This is another awesome session by Peter. He focused this talk on how to automate your build pipeline and perform continuous testing. With a focus on the fundamentals, Peter explains continuous integration (CI) and how to setup a CI pipeline using Docker Hub’s Webhooks, AutoBuilds, AutoTests and GitHub Actions. This is a great overview and primer for developers looking to start using Docker Hub.
If you are ready to get started with Docker, we offer free plans for individual developers and teams just starting out. Get started with Docker today.

How to Build and Test Your Docker Images in the Cloud with Docker Hub
https://www.docker.com/blog/how-to-build-and-test-your-docker-images-in-the-cloud-with-docker-hub/ | Tue, 05 May 2020 15:50:08 +0000

Part 2 in the series on Using Docker Desktop and Docker Hub Together

Introduction

In part 1 of this series, we took a look at installing Docker Desktop, building images, configuring our builds to use build arguments, running our application in containers, and finally, we took a look at how Docker Compose helps in this process. 

In this article, we’ll walk through deploying our code to the cloud, how to use Docker Hub to build our images when we push to GitHub and how to use Docker Hub to automate running tests.

Docker Hub

Docker Hub is the easiest way to create, manage, and ship your team’s images to your cloud environments whether on-premises or into a public cloud.


The first thing you will want to do is create a Docker ID, if you do not already have one, and log in to Hub.

Creating Repositories

Once you’re logged in, let’s create a couple of repos where we will push our images to.

Click on “Repositories” in the main navigation bar and then click the “Create Repository” button at the top of the screen.


You should now see the “Create Repository” screen.

You can create repositories for your account or for an organization. Choose your Docker ID from the dropdown. This will create the repository for your Docker ID.

Now let’s give our repository a name and description. Type projectz-ui in the name field and a short description such as: This is our super awesome UI for the Projectz application. 

We also have the ability to make the repository Public or Private. Let’s keep the repository Public for now.

We can also connect your repository to a source control system. You have the option to choose GitHub or Bitbucket but we’ll be doing this later in the article. So, for now, do not connect to a source control system. 

Go ahead and click the “Create” button to create a new repository.

Your repository will be created and you will be taken to the General tab of your new repository.


This is the repository screen where we can manage tags, builds, collaborators, webhooks, and visibility settings.

Click on the Tags tab. As expected, we do not have any tags at this time because we have not pushed an image to our repository yet.

We also need a repository for your services application. Follow the previous steps and create a new repository for the projectz-services application. Use the following settings to do so:

Repository name: projectz-services

Description: This is our super awesome services for the Projectz application

Visibility: Public

Build Settings: None

Excellent. We now have two Docker Hub Repositories setup.

Structure Project

For simplicity, in part 1 of this series we only had one git repository. For this article, I refactored our project and broke it into two different git repositories to align more closely with today’s microservices world.

Pushing Images

Now let’s build our images and push them to the repos we created above.

Fork Repos

Open your favorite browser and navigate to the pmckeetx/projectz-ui repository.

Create a copy of the repo in your GitHub account by clicking the “Fork” button in the top right corner.

Repeat the processes for the pmckeetx/projectz-svc repository.

Clone the repositories

Open a terminal on your local development machine and navigate to wherever you work on your source code. Let’s create a directory where we will clone our repos and do all our work in.

$ cd ~/projects
$ mkdir projectz

Now let’s clone the two repositories you just forked above. Back in your browser click the green “Clone or download” button and copy the URL. Use these URLs to clone the repo to your local machine.

$ git clone https://github.com/[github-id]/projectz-ui.git ui
$ git clone https://github.com/[github-id]/projectz-svc.git services

(Remember to substitute your GitHub ID for [github-id] in the above commands)

If you have SSH keys set up for your github account, you can use the SSH URLs instead.
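For example, the SSH equivalents of the clone commands above would look like this:

$ git clone git@github.com:[github-id]/projectz-ui.git ui
$ git clone git@github.com:[github-id]/projectz-svc.git services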

List local images

Let’s take a look at the list of Docker images we have locally on our machine. Run the following command to see a list of images.

$ docker images


You can see that I have the nginx, projectz-svc, projectz-ui, and node images on my machine. If you do not see the above images, that’s okay, we are going to recreate them now.

Remove local images

Let’s first remove projectz-svc and projectz-ui images. We’ll use the remove image (rmi) command. You can skip this step if you do not have the projectz-svc and projectz-ui on your local machine.

$ docker rmi projectz-svc projectz-ui


If you get the following or similar error: Error response from daemon: conflict: unable to remove repository reference "projectz-svc" (must force) - container 6b1b99cc899c is using its referenced image 6b9eadff19ae

This means that the image you are trying to remove is being used by a container and can not be removed. You need to stop and rm (remove) the container before you can remove the image. To do so, run the following commands.

First, find the running container:

$ docker ps -a

Here we can see that the container named services is using the image projectz-svc which we are trying to remove. 

Let’s stop and remove this container. We can do this at the same time by using the --force option to the rm command. 

If we tried to remove the container by using docker rm services without first stopping it, we would get the following error: Error response from daemon: You cannot remove a running container 6b1b99cc899c. Stop the container before attempting removal or force remove

So we’ll use the --force option to tell Docker to send a SIGKILL to the container and then remove it.

$ docker rm --force services

Do the same for the UI container, if it is still running.

Now that we stopped and removed the containers, we can now remove the images.

$ docker rmi projectz-svc projectz-ui

Let’s list our images again.

$ docker images


Now you should see that the projectz-ui and projectz-services images are gone.

Building images

Let’s build our images for the UI and Services projects now. Run the following commands:

$ cd [working dir]/projectz/services
$ docker build --tag projectz-svc .
$ cd ../ui
$ docker build --tag projectz-ui .

If you would like a more in-depth discussion around building images and Dockerfiles, refer back to part 1 of this series.

Pushing images

Okay, now that we have our images built, let’s take a look at pushing them to Docker Hub.

Tagging images

If you look back at the beginning of the post where we set up our Docker Hub repositories, you’ll see that we created the repositories in our Docker ID namespace. Before we can push our images to Hub, we’ll need to tag them using this namespace.

Open your favorite browser and navigate to Docker Hub and let’s review real quick.

Login to Hub, if you’ve not already done so, and take a look at the dashboard. You should see a list of images. Choose your Docker ID from the dropdown to only show images associated with your Docker ID.

Click on the row for the projectz-ui repository. 

Towards the top right of the window, you should see a docker command highlighted in grey.

This is the Docker Push command followed by the image name. You’ll see that this command uses your Docker ID followed by a slash followed by the image name and tag, separated by a colon. You can read more about pushing to repositories and tagging images in our documentation.

Let’s tag our local images to match the Docker Hub Repository. Run the following commands anywhere in your terminal.

$ docker tag projectz-ui [dockerid]/projectz-ui:latest
$ docker tag projectz-svc [dockerid]/projectz-svc:latest

(Remember to substitute your Docker ID for [dockerid] in the above commands)

Now list your local images and see the newly tagged images.

$ docker images


Pushing

Okay, now that we have our images tagged correctly, let’s push our images to Hub.

The first thing we need to do is make sure we are logged into Docker Hub on the terminal. Although the repositories we created earlier are “public”, only the owner of the repository can push by default. If you would like to allow folks on your team to push images and manage repositories, take a look at Organizations and Teams in Hub.

$ docker login
Login with your Docker ID to push and pull images from Docker Hub...
Username:

Enter your username (Docker ID) and password.

Now we can push our images.

$ docker push [dockerid]/projectz-ui:latest
$ docker push [dockerid]/projectz-svc:latest

Open your favorite browser and navigate to Docker Hub, select one of the repositories we created earlier and then click the “Tags” tab. You will now see the images and tag we just pushed.

Automatically Build and Test Images

That was pretty straightforward but we had to run a lot of manual commands. What if we wanted to build an image, run tests and publish to a repository so we could deploy our latest changes?

We might be tempted to write a shell script and have everybody on the team run it after they completed a feature. But this wouldn’t be very efficient. 
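For illustration only, such a script might look roughly like the sketch below, assuming the image names we set up earlier and the project’s npm test script:

#!/bin/sh
# Naive manual pipeline: build, test, then push by hand after each feature
docker build --tag [dockerid]/projectz-svc:latest .
docker run --rm [dockerid]/projectz-svc:latest npm run test
docker push [dockerid]/projectz-svc:latest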

What we want is a continuous integration (CI) pipeline. Docker Hub provides these features using AutoBuilds and AutoTests.

Connecting Source Control

Docker Hub can be connected to GitHub and Bitbucket to listen to push notifications so it can trigger AutoBuilds.

I’ve already connected my Hub account to my GitHub account. To connect your own Hub account to your version control system follow these simple steps in our documentation.

Setup AutoBuilds

Let’s set up AutoBuilds for our two repositories. The steps are the same for both repositories so I’ll only walk you through one of them.

Open Hub in your browser, and navigate to the detail page for the projectz-ui repository.

Click on the “Builds” tab and then click the “Link to GitHub” button in the middle of the page.


Now in the Build Configuration screen. Select your organization and repository from the dropdowns. Once you select a repository, the screen will expand with more options.


Leave the AUTOTEST setting to Off and the REPOSITORY LINKS to Off also.

The next thing we can configure is Build Rules. Docker Hub automatically configures the first BUILD RULE using the master branch of our repo. But we can configure more.

We have a couple of options we can set for build rules. 

The first is Source Type which can either be a Branch or a Tag. 

Then we can set the Source, this is referring to either the Branch you want to watch or the Tag name you would like to watch. You can enter a string literal or a RegExp that will be used for matching.

Next, we’ll set the Docker Tag that we want to use when the image is built and tagged.

We can also tell Hub what Dockerfile to use and where the Build Context is located.

The next option turns off or on the Build Rule.

We also have the option to use the Build Cache.

Save and Build

We’ll leave the default Build Rule that Hub added for us. Click the “Save and Build” button.

Our Build options will be saved and an AutoBuild will be kicked off. You can watch this build run on the “Builds” tab of your image page.

To view the build logs, click on the build that is in progress and you will be taken to the build details page where you can view the logs.


Once the build is complete, you can view the newly created image by clicking on the “Tags” tab. There you will see that our image was built and tagged with “latest”.

Follow the same steps to set up the projectz-svc repository. 

Trigger a build from Git Push

Now that we see that our image is being built, let’s make a change to our project and trigger a build with the git push command.

Open the projectz-svc/src/routes.js file in your favorite editor and add the following code snippet anywhere before the module.exports = appRouter line at the bottom of the file.

...
 
appRouter.get( '/services/hello', function( req, res ) {
 res.json({ code: 'success', payload: 'World' })
})
 
...
 
module.exports = appRouter

Save the file and commit the changes locally.

$ git commit -am "add hello - world route"

Now, if we push the changes to GitHub, GitHub will trigger a webhook to Docker Hub which will in turn trigger a new build of our image. Let’s do that now.

$ git push

Navigate over to Hub in your browser and scroll down. You should see that a build was just triggered.

After the build finishes, navigate to the “Tags” tab and see that the image was updated.


Setup AutoTests

Excellent! We now have both our images building when we push to our source control repo. But this is just step one in our CI process. We should only push new images to the repository if all tests pass.

Docker Hub will automatically run tests if you have a docker-compose.test.yml file that defines a sut service. Let’s create this now and run our tests.

Open the projectz-svc project in your editor and create a new file named docker-compose.test.yml and add the following YAML.

version: "3.6"
 
services:
 sut:
   build:
     context: .
     args:
       NODE_ENV: test
   ports:
     - "8080:80"
   command: npm run test

Commit the changes and push to GitHub.

$ git add docker-compose.test.yml
$ git commit -m "add docker-compose.test.yml for hub autotests"
$ git push origin master

Now navigate back to Hub and the projectz-svc repo. Once the build finishes, click on the build link and scroll to the bottom of the build logs. There you can see that the tests were run and the image was pushed to the repo.

If the build fails, you will see that the status turns to FAILURE and you will be able to see the error in the build logs.

Conclusion

In part 2 of this series, we showed you how Docker Hub is one of the easiest ways to automatically build your images and run tests without having to use a separate CI system. If you’d like to go further you can take a look at: 
