How Kinsta Improved the End-to-End Development Experience by Dockerizing Every Step of the Production Cycle
https://www.docker.com/blog/how-kinsta-improved-the-end-to-end-development-experience-by-dockerizing-every-step-of-the-production-cycle/ (Tue, 11 Jul 2023)

Guest author Amin Choroomi is an experienced software developer at Kinsta. Passionate about Docker and Kubernetes, he specializes in application development and DevOps practices. His expertise lies in leveraging these transformative technologies to streamline deployment processes and enhance software scalability.

One of the biggest challenges of developing and maintaining cloud-native applications at the enterprise level is having a consistent experience through the entire development lifecycle. This process is even harder for remote companies with distributed teams working on different platforms, with different setups, and communicating asynchronously. 

At Kinsta, we have projects of all sizes for application hosting, database hosting, and managed WordPress hosting. We need to provide a consistent, reliable, and scalable solution that allows:

  • Developers and quality assurance teams, regardless of their operating systems, to create a straightforward and minimal setup for developing and testing features.
  • DevOps, SysOps, and Infrastructure teams to configure and maintain staging and production environments.

Overcoming the challenge of developing cloud-native applications on a distributed team

At Kinsta, we rely heavily on Docker for this consistent experience at every step, from development to production. In this article, we’ll walk you through:

  • How to leverage Docker Desktop to increase developers’ productivity.
  • How we build Docker images and push them to Google Container Registry via CI pipelines with CircleCI and GitHub Actions.
  • How we use CD pipelines to promote incremental changes to production using Docker images, Google Kubernetes Engine, and Cloud Deploy.
  • How the QA team seamlessly uses prebuilt Docker images in different environments.

Using Docker Desktop to improve the developer experience

Running an application locally requires developers to meticulously prepare the environment, install all the dependencies, set up servers and services, and make sure they are properly configured. When you run multiple applications, this approach can be cumbersome, especially for complex projects with multiple dependencies. And, when you introduce multiple contributors with multiple operating systems, chaos ensues. To prevent this, we use Docker.

With Docker, you can declare the environment configurations, install the dependencies, and build images with everything where it should be. Anyone, anywhere, with any OS can use the same images and have exactly the same experience as anyone else.

Declare your configuration with Docker Compose

To get started, you need to create a Docker Compose file, docker-compose.yml. This is a declarative configuration file written in YAML format that tells Docker your application’s desired state. Docker uses this information to set up the environment for your application.

Docker Compose files come in handy when you have more than one container running and there are dependencies between containers.

To create your docker-compose.yml file:

  1. Start by choosing an image as the base for your application. Search Docker Hub to find an image that already contains your app’s dependencies. Make sure to use a specific image tag to avoid errors; using the latest tag can cause unforeseen errors in your application. You can use multiple base images for multiple dependencies — for example, one for PostgreSQL and one for Redis.
  2. Use volumes to persist data on your host if you need to. Persisting data on the host machine helps you avoid losing data if Docker containers are deleted or if you have to recreate them.
  3. Use networks to isolate your setup to avoid network conflicts with the host and other containers. It also helps your containers to find and communicate with each other easily.

Bringing it all together, we have a docker-compose.yml that looks like this:

version: '3.8'

services:
  db:
    image: postgres:14.7-alpine3.17
    hostname: mk_db
    restart: on-failure
    ports:
      - ${DB_PORT:-5432}:5432
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${DB_USER:-user}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-password}
      POSTGRES_DB: ${DB_NAME:-main}
    networks:
      - mk_network
  redis:
    image: redis:6.2.11-alpine3.17
    hostname: mk_redis
    restart: on-failure
    ports:
      - ${REDIS_PORT:-6379}:6379
    networks:
      - mk_network
      
volumes:
  db_data:

networks:
  mk_network:
    name: mk_network

Containerize the application

Build a Docker image for your application

To begin, we need to build a Docker image using a Dockerfile, and then call that from docker-compose.yml.

Follow these five steps to create your Dockerfile:

1. Start by choosing an image as a base. Use the smallest base image that works for the app. Usually, alpine images are minimal with nearly zero extra packages installed. You can start with an alpine image and build on top of that:

   FROM node:18.15.0-alpine3.17

2. Sometimes you need to use a specific CPU architecture to avoid conflicts. For example, suppose that you use an arm64-based processor but you need to build an amd64 image. You can do that by specifying --platform in the Dockerfile:

   FROM --platform=amd64 node:18.15.0-alpine3.17

3. Define the application directory, install the dependencies, and copy the rest of the code into it:

    WORKDIR /opt/app 
    COPY package.json yarn.lock ./ 
    RUN yarn install 
    COPY . .

4. Call the Dockerfile from docker-compose.yml:

     services:
      ...redis
      ...db
      
      app:
        build:
          context: .
          dockerfile: Dockerfile
        platforms:
          - "linux/amd64"
        command: yarn dev
        restart: on-failure
        ports:
          - ${PORT:-4000}:${PORT:-4000}
        networks:
          - mk_network
        depends_on:
          - redis
          - db

5. Implement auto-reload so that when you change something in the source code, you can preview your changes immediately without having to rebuild the application manually. To do that, build the image first, then run it in a separate service:

     services:
      ... redis
      ... db
      
      build-docker:
        image: myapp
        build:
          context: .
          dockerfile: Dockerfile
      app:
        image: myapp
        platforms:
          - "linux/amd64"
        command: yarn dev
        restart: on-failure
        ports:
          - ${PORT:-4000}:${PORT:-4000}
        volumes:
          - .:/opt/app
          - node_modules:/opt/app/node_modules
        networks:
          - mk_network
        depends_on:
          - redis
          - db
          - build-docker

Pro tip: Note that node_modules is also mounted explicitly to avoid platform-specific issues with packages. This means that, instead of using the node_modules from the host, the Docker container uses its own, keeping it in a separate named volume.
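
For the named node_modules volume used above to resolve, it must also be declared under the top-level volumes key of the Compose file, next to db_data. A minimal sketch:

volumes:
  db_data:
  node_modules:

This keeps the container-owned node_modules in its own volume, separate from the host’s copy.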

Incrementally build the production images with continuous integration 

The majority of our apps and services use CI/CD for deployment, and Docker plays an important role in the process. Every change in the main branch immediately triggers a build pipeline through either GitHub Actions or CircleCI. The general workflow is simple: It installs the dependencies, runs the tests, builds the Docker image, and pushes it to Google Container Registry (or Artifact Registry). In this article, we’ll describe the build step.

Building the Docker images

We use multi-stage builds for security and performance reasons.

Stage 1: Builder

In this stage, we copy the entire code base with all source and configuration, install all dependencies, including dev dependencies, and build the app. It creates a dist/ folder and copies the built version of the code there. However, this image is far too large, with far too big a footprint, to be used for production. Also, as we use private NPM registries, we use our private NPM_TOKEN in this stage as well. So, we definitely don’t want this stage to be exposed to the outside world. The only thing we need from this stage is the dist/ folder.

Stage 2: Production

Most people use this stage for runtime because it is close to what we need to run the app. However, we still need to install production dependencies, and that means we leave footprints and need the NPM_TOKEN. So, this stage is still not ready to be exposed. Here, you should also note the yarn cache clean appended to the install command. That tiny command cuts our image size by up to 60 percent.

Stage 3: Runtime

The last stage needs to be as slim as possible, with a minimal footprint. So, we just copy the fully baked app from the production stage and move on. We leave all that yarn and NPM_TOKEN business behind and only run the app.

This is the final Dockerfile.production:

# Stage 1: build the source code 
FROM node:18.15.0-alpine3.17 as builder 
WORKDIR /opt/app 
COPY package.json yarn.lock ./ 
RUN yarn install 
COPY . . 
RUN yarn build 

# Stage 2: copy the built version and build the production dependencies 
FROM node:18.15.0-alpine3.17 as production 
WORKDIR /opt/app 
COPY package.json yarn.lock ./ 
RUN yarn install --production && yarn cache clean 
COPY --from=builder /opt/app/dist/ ./dist/ 

# Stage 3: copy the production ready app to runtime 
FROM node:18.15.0-alpine3.17 as runtime 
WORKDIR /opt/app 
COPY --from=production /opt/app/ . 
CMD ["yarn", "start"]

Note that, for all the stages, we start by copying the package.json and yarn.lock files, installing the dependencies, and then copying the rest of the code base. The reason is that Docker builds each command as a layer on top of the previous one, and each build can reuse the existing layers where available, building only the new layers. 

Let’s say you have changed something in src/services/service1.ts without touching the packages. That means the first four layers of the builder stage are untouched and can be reused. This approach makes the build process significantly faster.
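
Keeping the build context small also helps the cache stay effective. A minimal .dockerignore sketch (the entries are illustrative and should be adapted to your project):

# .dockerignore: keep host-only artifacts out of the build context
node_modules
dist
.git
*.log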

Pushing the app to Google Container Registry through CircleCI pipelines

There are several ways to build a Docker image in CircleCI pipelines. In our case, we chose to use the circleci/gcp-gcr orb.

Thanks to Docker, only minimal configuration is needed to build and push our app:

executors:
  docker-executor:
    docker:
      - image: cimg/base:2023.03
orbs:
  gcp-gcr: circleci/gcp-gcr@0.15.1
jobs:
  ...
  deploy:
    description: Build & push image to Google Artifact Registry
    executor: docker-executor
    steps:
      ...
      - gcp-gcr/build-image:
          image: my-app
          dockerfile: Dockerfile.production
          tag: ${CIRCLE_SHA1:0:7},latest
      - gcp-gcr/push-image:
          image: my-app
          tag: ${CIRCLE_SHA1:0:7},latest
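
The deploy job above still needs to be wired into a workflow. A hedged sketch of how it might be restricted to the main branch (the workflow name is illustrative):

workflows:
  build-and-deploy:
    jobs:
      - deploy:
          filters:
            branches:
              only: main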

Pushing the app to Google Container Registry through GitHub Actions

As an alternative to CircleCI, we can use GitHub Actions to deploy the application continuously.

We set up gcloud and build and push the Docker image to gcr.io:

jobs:
  setup-build:
    name: Setup, Build
    runs-on: ubuntu-latest

    steps:
    - name: Checkout
      uses: actions/checkout@v3

    - name: Get Image Tag
      run: |
        echo "TAG=$(git rev-parse --short HEAD)" >> $GITHUB_ENV

    - uses: google-github-actions/setup-gcloud@master
      with:
        service_account_key: ${{ secrets.GCP_SA_KEY }}
        project_id: ${{ secrets.GCP_PROJECT_ID }}

    - run: |-
        gcloud --quiet auth configure-docker

    - name: Build
      run: |-
        docker build \
          --tag "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:$TAG" \
          --tag "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:latest" \
          .

    - name: Push
      run: |-
        docker push "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:$TAG"
        docker push "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:latest"

With every small change pushed to the main branch, we build and push a new Docker image to the registry.
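
The snippet above omits the workflow trigger. A hedged sketch of a trigger block that scopes builds to the main branch (the workflow name is illustrative):

name: build-and-push-image
on:
  push:
    branches:
      - main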

Deploying changes to Google Kubernetes Engine using Google Delivery Pipelines

Having ready-to-use Docker images for each and every change also makes it easier to deploy to production or roll back in case something goes wrong. We use Google Kubernetes Engine to manage and serve our apps, and we use Google Cloud Deploy and Delivery Pipelines for our continuous deployment process.

When the Docker image is built after each small change (with the CI pipeline shown previously), we go one step further and deploy the change to our dev cluster using gcloud. Let’s look at that step in the CircleCI pipeline:

- run:
    name: Create new release
    command: gcloud deploy releases create release-${CIRCLE_SHA1:0:7} --delivery-pipeline my-del-pipeline --region $REGION --annotations commitId=$CIRCLE_SHA1 --images my-app=gcr.io/${PROJECT_ID}/my-app:${CIRCLE_SHA1:0:7}

This step triggers a release process to roll out the changes in our dev Kubernetes cluster. After testing and getting the approvals, we promote the change to staging and then production. This action is all possible because we have a slim isolated Docker image for each change that has almost everything it needs. We only need to tell the deployment which tag to use.
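
Promotion itself can also be scripted with gcloud. A hedged example, reusing the release and pipeline names from the command above (the target name is illustrative):

gcloud deploy releases promote \
  --release=release-${CIRCLE_SHA1:0:7} \
  --delivery-pipeline=my-del-pipeline \
  --region=$REGION \
  --to-target=staging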

How the Quality Assurance team benefits from this process

The QA team mostly needs a pre-production, cloud-hosted version of the apps for testing. However, sometimes they need to run a prebuilt app locally (with all the dependencies) to test a certain feature. In these cases, they don’t want or need to go through all the pain of cloning the entire project, installing npm packages, building the app, facing developer errors, and going over the entire development process to get the app up and running.

Now that everything is already available as a Docker image on Google Container Registry, all the QA team needs is a service in the Docker Compose file:

services:
  ...redis
  ...db
  
  app:
    image: gcr.io/${PROJECT_ID}/my-app:latest
    restart: on-failure
    ports:
      - ${PORT:-4000}:${PORT:-4000}
    environment:
      - NODE_ENV=production
      - REDIS_URL=redis://redis:6379
      - DATABASE_URL=postgresql://${DB_USER:-user}:${DB_PASSWORD:-password}@db:5432/main
    networks:
      - mk_network
    depends_on:
      - redis
      - db

With this service, the team can spin up the application on their local machines using Docker containers by running:

docker compose up

This is a huge step toward simplifying testing processes. Even if QA decides to test a specific tag of the app, they can easily change the image tag in the app service and re-run the Docker Compose command. Even if they decide to compare different versions of the app simultaneously, they can easily achieve that with a few tweaks, as shown in the sketch below. The biggest benefit is keeping our QA team away from development-environment challenges.
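
For example, adding a second service that points at another tag lets QA run two versions side by side. A hedged sketch (the tag and host port are illustrative):

  app-previous:
    image: gcr.io/${PROJECT_ID}/my-app:abc1234
    restart: on-failure
    ports:
      - 4001:${PORT:-4000}
    networks:
      - mk_network
    depends_on:
      - redis
      - db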

Advantages of using Docker

  • Almost zero footprint for dependencies: If you ever decide to upgrade the version of Redis or PostgreSQL, you can just change one line and re-run the app. There’s no need to change anything on your system. Additionally, if you have two apps that both need Redis (maybe even with different versions), you can have both running in their own isolated environments, without any conflicts with each other.
  • Multiple instances of the app: There are many cases where we need to run the same app with a different command, such as initializing the DB, running tests, watching DB changes, or listening to messages. In each of these cases, because we already have the built image ready, we just add another service to the Docker Compose file with a different command, and we’re done (see the sketch after this list).
  • Easier testing environment: More often than not, you just need to run the app. You don’t need the code, the packages, or any local database connections. You only want to make sure the app works properly, or you need a running instance as a backend service while you’re working on your own project. That could also be the case for QA, pull request reviewers, or even UX folks who want to make sure their design has been implemented properly. Our Docker setup makes it easy for all of them to get things going without having to deal with too many technical issues.
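
As mentioned in the second point above, reusing the prebuilt image with a different command only takes one more service entry in the Compose file. A hedged sketch (the service name is illustrative and yarn migrate is a hypothetical script):

  db-migrate:
    image: gcr.io/${PROJECT_ID}/my-app:latest
    command: yarn migrate
    networks:
      - mk_network
    depends_on:
      - db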

Learn more

Optimizing Deep Learning Workflows: Leveraging Stable Diffusion and Docker on WSL 2
https://www.docker.com/blog/stable-diffusion-and-docker-on-wsl2/ (Tue, 11 Jul 2023)

Deep learning has revolutionized the field of artificial intelligence (AI) by enabling machines to learn and generate content that mimics human-like creativity. One advancement in this domain is Stable Diffusion, a text-to-image model released in 2022. 

Stable Diffusion has gained significant attention for its ability to generate highly detailed images conditioned on text descriptions, thereby opening up new possibilities in areas such as creative design, visual storytelling, and content generation. With its open source nature and accessibility, Stable Diffusion has become a go-to tool for many researchers and developers seeking to harness the power of deep learning. 

In this article, we will explore how to optimize deep learning workflows by leveraging Stable Diffusion alongside Docker on WSL 2, enabling seamless and efficient experimentation with this cutting-edge technology.


In this comprehensive guide, we will walk through the process of setting up the Stable Diffusion WebUI Docker, which includes enabling WSL 2 and installing Docker Desktop. You will learn how to download the required code from GitHub and initialize it using Docker Compose.

The guide provides instructions on adding additional models and managing the system, covering essential tasks such as reloading the UI and determining the ideal location for saving image output. Troubleshooting steps and tips for monitoring hardware and GPU usage are also included, ensuring a smooth and efficient experience with Stable Diffusion WebUI (Figure 1).

Screenshot of Stable Diffusion WebUI showing five different cat images.
Figure 1: Stable Diffusion WebUI.

Why use Docker Desktop for Stable Diffusion?

In the realm of image-based generative AI, setting up an effective execution and development environment on a Windows PC can present particular challenges. These challenges arise due to differences in software dependencies, compatibility issues, and the need for specialized tools and frameworks. Docker Desktop emerges as a powerful solution to tackle these challenges by providing a containerization platform that ensures consistency and reproducibility across different systems.

By leveraging Docker Desktop, we can create an isolated environment that encapsulates all the necessary components and dependencies required for image-based generative AI workflows. This approach eliminates the complexities associated with manual software installations, conflicting library versions, and system-specific configurations.

Using Stable Diffusion WebUI

The Stable Diffusion WebUI is a browser interface that is built upon the Gradio library, offering a convenient way to interact with and explore the capabilities of Stable Diffusion. Gradio is a powerful Python library that simplifies the process of creating interactive interfaces for machine learning models.

Setting up the Stable Diffusion WebUI environment can be a tedious and time-consuming process, requiring multiple steps for environment construction. However, a convenient solution is available in the form of the Stable Diffusion WebUI Docker project. This Docker image eliminates the need for manual setup by providing a preconfigured environment.

If you’re using Windows and have Docker Desktop installed, you can effortlessly build and run the environment using the docker-compose command. You don’t have to worry about preparing libraries or dependencies beforehand because everything is encapsulated within the container.

You might wonder whether there are any problems because it’s a container. I was anxious before I started using it, but I haven’t had any particular problems so far. The images, models, variational autoencoders (VAEs), and other data that are generated are shared (bind mounted) with my Windows machine, so I can exchange files simply by dragging them in Explorer or in the Files view of the target container in Docker Desktop. 

The most trouble I had was when I disabled the extension without backing it up, and in a moment blew away about 50GB of data that I had spent half a day training. (This is a joke!)

Architecture

I’ve compiled a relatively simple procedure to start with Stable Diffusion using Docker Desktop on Windows. 

Prerequisites:

  • Windows 10 Pro, 21H2 Build 19044.2846
  • 16GB RAM
  • NVIDIA GeForce RTX 2060 SUPER
  • WSL 2 (Ubuntu)
  • Docker Desktop 4.18.0 (104112)

Setup with Docker Compose

We will use the AUTOMATIC1111 WebUI to work with Stable Diffusion this time. The environment will be constructed using Docker Compose. The main components are shown in Figure 2.

Illustration of execution environment using Automatic1111 showing Host, portmapping, Docker image, bind mount information, etc.
Figure 2: Configuration built using Docker Compose.

The configuration of Docker Compose is defined in docker-compose.yml. We are using a Compose extension called x-base_service to describe the major components common to each service.

To start, there are settings for bind mounts between the host and the container, including /data, which loads models, and /output, where generated images are written. Then, we make the container recognize the GPU by loading the NVIDIA driver.

Furthermore, the sd-auto:58 service runs AUTOMATIC1111, the WebUI for Stable Diffusion, within the container. Because there is a port mapping (TCP:7860) between the host and the container in the aforementioned common service settings, the WebUI inside the container can be reached from a browser on the host.
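
A simplified, hypothetical sketch of how such a shared x-base_service block might look (the actual file in the repository differs in its details):

x-base_service: &base_service
  ports:
    - "7860:7860"
  volumes:
    - ./data:/data
    - ./output:/output
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]

services:
  auto:
    <<: *base_service
    image: sd-auto:58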

Getting Started

Prerequisite

WSL 2 must be activated and Docker Desktop installed.

On the first execution, it downloads 12GB of Stable Diffusion 1.5 models, etc. The Web UI cannot be used until this download is complete. Depending on your connection, it may take a long time until the first startup.

Downloading the code

First, download the Stable Diffusion WebUI Docker code from GitHub. If you download it as a ZIP, click Code > Download ZIP and the stable-diffusion-webui-docker-master.zip file will be downloaded (Figure 3). 

Unzip the file in a convenient location. When you expand it, you will find a folder named stable-diffusion-webui-docker-master. Open the command line or similar and run the docker compose command inside it.

 Screenshot showing Stable Diffusion WebUI Docker being downloaded as ZIP file from GitHub.
Figure 3: Downloading the configuration for Docker Compose from the repository.

Or, if you have an environment where you can use Git, such as Git for Windows, it’s quicker to download it as follows:

git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git

In this case, the folder name is stable-diffusion-webui-docker. Move it with cd stable-diffusion-webui-docker.

Supplementary information for those familiar with Docker

If you just want to get started, you can skip this section.

By default, the timezone is UTC. To adjust the time displayed in the log and the date of the directory generated under output/txt2img to Japan time, add TZ=Asia/Tokyo to the environment variables of the auto service. Specifically, add the following line under environment:

auto: &automatic
    <<: *base_service
    profiles: ["auto"]
    build: ./services/AUTOMATIC1111
    image: sd-auto:51
    environment:
      - CLI_ARGS=--allow-code --medvram --xformers --enable-insecure-extension-access --api
      - TZ=Asia/Tokyo

Tasks at first startup

The rest of the process is as described in the GitHub documentation. Inside the folder where the code is expanded, run the following command:

docker compose --profile download up --build

After the command runs, the log of a container named webui-docker-download-1 will be displayed on the screen. For a while, the download will run as follows, so wait until it is complete:

webui-docker-download-1  | [DL:256KiB][#4561e1 1.4GiB/3.9GiB(36%)][#42c377 1.4GiB/3.9GiB(37%)]

If the process ends successfully, it will be displayed as exited with code 0 and returned to the original prompt:

…(snip)
webui-docker-download-1  | https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE 
webui-docker-download-1  | https://github.com/xinntao/ESRGAN/blob/master/LICENSE 
webui-docker-download-1  | https://github.com/cszn/SCUNet/blob/main/LICENSE 
webui-docker-download-1 exited with code 0

If a code other than 0 comes out like the following, the download process has failed:

webui-docker-download-1  | 42c377|OK  |   426KiB/s|/data/StableDiffusion/sd-v1-5-inpainting.ckpt 
webui-docker-download-1  | 
webui-docker-download-1  | Status Legend: 
webui-docker-download-1  | (OK):download completed.(ERR):error occurred. 
webui-docker-download-1  | 
webui-docker-download-1  | aria2 will resume download if the transfer is restarted. 
webui-docker-download-1  | If there are any errors, then see the log file. See '-l' option in help/m 
an page for details. 
webui-docker-download-1 exited with code 24

In this case, run the command again and check whether it ends successfully. Once it finishes successfully, run the command to start the WebUI. 

Note: The following command starts AUTOMATIC1111’s UI with GPU support:

docker compose --profile auto up --build

When you run the command, loading the model at the first startup may take a few minutes. It may look frozen, as in the following output, but that’s okay:

webui-docker-auto-1  | LatentDiffusion: Running in eps-prediction mode
webui-docker-auto-1  | DiffusionWrapper has 859.52 M params.

If you wait for a while, the log will flow, and the following URL will be displayed:

webui-docker-auto-1  | Running on local URL:  http://0.0.0.0:7860

The WebUI is now ready to use. If you open http://127.0.0.1:7860 in your browser, you can see the WebUI. Once open, select an appropriate model from the top left of the screen, write some text in the text field, and select the Generate button to start generating images (Figure 4).

Screenshot showing text input and large orange "generate" button for creating images.
Figure 4: After selecting the model, input the prompt and generate the image.

When you click, the button switches to Interrupt and Skip controls. Wait until the process is finished (Figure 5).

Screenshot of gray "interrupt" and "skip" buttons.
Figure 5: Waiting until the image is generated.

At this time, the log of image generation appears on the terminal you are operating, and you can see a similar display by looking at the container’s log in Docker Desktop (Figure 6).

Screenshot of progress log, showing 100% completion.
Figure 6: 100% indicates that the image generation is complete.

When the status reaches 100%, the generation of the image is finished, and you can check it on the screen (Figure 7).

Screenshot of Stable Diffusion WebUI showing "space cat" as text input with image of gray and white cat on glowing purple background.
Figure 7: After inputting “Space Cat” in the prompt, a cat image was generated at the bottom right of the screen.

The created images are automatically saved in the output/txt2img/date folder directly under the directory where you ran the docker compose command.

To stop the launched WebUI, enter Ctrl+C on the terminal that is still running the docker compose command.

Gracefully stopping... (press Ctrl+C again to force)
Aborting on container exit...
[+] Running 1/1
 ✔ Container webui-docker-auto-1  Stopped                                                     11.4s
canceled

When the process ends successfully, you will be able to run the command again. To use the WebUI again after restarting, re-run the docker compose command:

docker compose --profile auto up --build

To see the operating hardware status, use the task manager to look at the GPU status (Figure 8).

Screenshot of Windows Task Manager showing GPU status.
Figure 8: From the Performance tab of the Windows Task Manager, you can monitor the processing of CUDA and similar tasks on the GPU.

To check whether the GPU is visible from inside the container, run the nvidia-smi command via docker exec or from the container’s terminal in Docker Desktop.
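
For example, assuming the container name shown in the logs above:

docker exec webui-docker-auto-1 nvidia-smi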

root@e37fcc5a5810:/stable-diffusion-webui# nvidia-smi 
Mon Apr 17 07:42:27 2023 
+---------------------------------------------------------------------------------------+ 
| NVIDIA-SMI 530.41.03              Driver Version: 531.41       CUDA Version: 12.1     | 
|-----------------------------------------+----------------------+----------------------+ 
| GPU  Name                  Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC | 
| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. | 
|                                         |                      |               MIG M. | 
|=========================================+======================+======================| 
|   0  NVIDIA GeForce RTX 2060 S...    On | 00000000:01:00.0  On |                  N/A | 
| 42%   40C    P8                6W / 175W|   2558MiB /  8192MiB |      2%      Default | 
|                                         |                      |                  N/A | 
+-----------------------------------------+----------------------+----------------------+ 
+---------------------------------------------------------------------------------------+ 
| Processes:                                                                            | 
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory | 
|        ID   ID                                                             Usage      | 
|=======================================================================================| 
|    0   N/A  N/A       149      C   /python3.10                               N/A      | 
+---------------------------------------------------------------------------------------+

Adding models and VAEs

If you download a model that is not included from the beginning, place files with extensions such as .safetensors in stable-diffusion-webui-docker\data\StableDiffusion. In the case of VAEs, place .ckpt files in stable-diffusion-webui-docker\data\VAE.

If you’re using Docker Desktop, you can view and manage the files inside the webui-docker-auto-1 container from its Files tab, so you can also drag files in through Docker Desktop. 

Figure 9 shows the Docker Desktop screen. The Note column says MOUNT, which means the folder is shared with the container from the Windows host side.

Screenshot of Docker Desktop showing blue MOUNT indicator under Note column.
Figure 9: From the Note column, you can see whether the folder is mounted or has been modified.

Now, after placing the file, a Reload UI link is shown in the footer of the WebUI, so select it (Figure 10).

Screenshot of Stable Diffusion WebUI showing Reload UI option.
Figure 10: By clicking Reload UI, the WebUI settings are reloaded.

When you select Reload UI, the system will show a loading screen, and the browser connection will be cut off. When you reload the browser, the model and VAE files are automatically loaded. To remove a model, delete the model file from data\StableDiffusion.

Conclusion

With Docker Desktop, image generation using the latest generative AI environment is easier than ever. Typically, a lot of time and effort is required just to set up the environment, but Docker Desktop removes this complexity. If you’re interested, why not take on the challenge of the world of generative AI? Enjoy!

Learn more

Docker Desktop 4.21: Support for new Wasm runtimes, Docker Init support for Rust, Docker Scout Dashboard enhancements, Builds view (Beta), and more
https://www.docker.com/blog/docker-desktop-4-21/ (Thu, 06 Jul 2023)

Docker Desktop 4.21 is now available and includes Docker Init support for Rust, new Wasm runtimes support, enhancements to Docker Scout dashboards, Builds view (Beta), performance and filesystem enhancements to Docker Desktop on macOS, and more. Docker Desktop 4.21 also uses substantially less memory, allowing developers to run more applications simultaneously on their machines without relying on swap. 


Added support for new Wasm runtimes

Docker Desktop 4.21 adds support for the following Wasm runtimes: Slight, Spin, and Wasmtime. These runtimes can be downloaded on demand when the containerd image store is enabled. The following steps outline the process:

  1. In Docker Desktop, navigate to the settings by clicking the gear icon.
  2. Select the Features in development tab.
  3. Check the boxes for Use containerd for pulling and storing images and Enable Wasm.
  4. Select Apply & restart.
  5. When prompted for Wasm Runtimes Installation, select Install.
  6. After installation, these runtimes can be used to run Wasm workloads locally with the corresponding flags, for example:
    --runtime=io.containerd.spin.v1 --platform=wasi/wasm32
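
For instance, a hypothetical invocation combining these flags might look like the following (the image name is illustrative):

docker run --rm --runtime=io.containerd.spin.v1 --platform=wasi/wasm32 myorg/hello-spin:latest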

Docker Init (Beta) added support for Rust 

In the 4.21 release, we’ve added Rust server support to Docker Init. Docker Init is a CLI command in beta that simplifies the process of adding Docker to a project. (Learn more about Docker Init in our blog post: Docker Init: Initialize Dockerfiles and Compose files with a single CLI command.)

You can try Docker Init with Rust by updating to the latest version of Docker Desktop and typing docker init in the command line while inside a target project folder. 

The Docker team is working on adding more languages and frameworks for this command, including Java and .NET. Let us know if you want us to support a specific language or framework. We welcome feedback as we continue to develop and improve Docker Init (Beta).

Docker Scout dashboard enhancements 

The Docker Scout Dashboard helps you share the analysis of images in an organization with your team. Developers can now see an overview of their security status across all their images from both Docker Hub and Artifactory (more registry integrations coming soon) and get remediation advice at their fingertips. Docker Scout analysis helps team members in roles such as security, compliance, and operations to know what vulnerabilities and issues they need to focus on.

screenshot of Docker Scout vulnerabilities dashboard showing 2412 vulnerabilities that are critical severity with a red line, a lighter red showing 13106 high severity vulnerabilities, yellow with 11108 medium severity, and light yellow with 3138 low severity. A chart below shows the number of vulnerabilities in the last 30 days (May 29-June 29), with an increase starting on June 13
Figure 1: A screenshot of the Docker Scout vulnerabilities overview

Visit the Docker Scout vulnerability dashboard to get end-to-end observability into your supply chain. 

Docker Buildx v0.11

The Docker Buildx component has been updated to a new version, enabling many new features. For example, you can now load multi-platform images into the Docker image store when the containerd image store is enabled.

The buildx bake command now supports matrix builds, letting you define multiple configurations of the same build target that can all be built together.

There are also multiple new experimental commands for better debugging support for your builds. Read more in the release changelog.

Builds (Beta)

Docker Desktop 4.21 includes our Builds view beta release. Builds view gives you visibility into the active builds currently running on your system and enables analysis and debugging of your completed builds.

All builds started with docker build or docker buildx build commands will automatically appear in the Builds view. From there, you can inspect all the properties of a build invocation, including timing information, build cache usage, Dockerfile source, etc. Builds view also provides you full access to all of the logs and properties of individual build steps.

If you are working with multiple Buildx builder instances (for example, running builds inside a Docker container or Kubernetes cluster), Builds view includes a new Builders settings view to make it even easier to manage additional builders or set default builder instances.

Builds view is currently in beta as we continue to improve it. To enable it, go to Settings > Features in development > Turn on Builds view.

Builds view — List of active and completed builds, including an active builds progress bar and timer
Figure 2: Builds view — List of active and completed builds
Builds view — Build details with logs visible
Figure 3: Builds view — Build details with logs visible
Builds view — Builder settings with default builder expanded
Figure 4: Builds view — Builder settings with default builder expanded

Faster startup and file sharing for macOS 

Launching Docker Desktop on Apple Silicon Macs is at least 25% quicker in 4.21 compared to previous Docker Desktop versions. Previously the start time would scale linearly with the amount of memory allocated to Docker, which meant that users with higher-spec Macs would experience slower startup. This bug has been fixed and now Docker starts in four seconds on Apple Silicon. 

Docker Desktop 4.21 uses VirtioFS by default on macOS 12.5+, which provides substantial performance gains when sharing host files with containers (for example, via docker run -v). The time taken to build the Redis engine drops from seven minutes on Docker Desktop 4.20 to only two minutes on Docker Desktop 4.21, for example.
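
As a rough illustration of the kind of host-file sharing that benefits from this change (the image, paths, and command are illustrative):

docker run --rm -v "$(pwd):/src" -w /src node:18-alpine yarn install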

Conclusion

Upgrade now to explore what’s new in the 4.21 release of Docker Desktop. Do you have feedback? Leave feedback on our public GitHub roadmap and let us know what else you’d like to see.

Learn more

Docker Acquires Mutagen for Continued Investment in Performance and Flexibility of Docker Desktop
https://www.docker.com/blog/mutagen-acquisition/ (Tue, 27 Jun 2023)

I’m excited to announce that Docker, voted the most-used and most-desired tool in Stack Overflow’s 2023 Developer Survey, has acquired Mutagen IO, Inc., the company behind the open source Mutagen file synchronization and networking technologies that enable high-performance remote development. Mutagen’s synchronization and forwarding capabilities facilitate the seamless transfer of code, binary artifacts, and network requests between arbitrary locations, connecting local and remote development environments. When combined with Docker’s existing developer tools, Mutagen unlocks new possibilities for developers to innovate and accelerate development velocity with local and remote containerized development.

“Docker is more than a container tool. It comprises multiple developer tools that have become the industry standard for self-service developer platforms, empowering teams to be more efficient, secure, and collaborative,” says Docker CEO Scott Johnston. “Bringing Mutagen into the Docker family is another example of how we continuously evolve our offering to meet the needs of developers with a product that works seamlessly and improves the way developers work.”


The Mutagen acquisition introduces novel mechanisms for developers to extract the highest level of performance from their local hardware while simultaneously opening the gateway to the newest remote development solutions. We continue scaling the abilities of Docker Desktop to meet the needs of the growing number of developers, businesses, and enterprises relying on the platform.

 “Docker Desktop is focused on equipping every developer and dev team with blazing-fast tools to accelerate app creation and iteration by harnessing the combined might of local and cloud resources. By seamlessly integrating and magnifying Mutagen’s capabilities within our platform, we will provide our users and customers with unrivaled flexibility and an extraordinary opportunity to innovate rapidly,” says Webb Stevens, General Manager, Docker Desktop.

 “There are so many captivating integration and experimentation opportunities that were previously inaccessible as a third-party offering,” says Jacob Howard, the CEO at Mutagen. “As Mutagen’s lead developer and a Docker Captain, my ultimate goal has always been to enhance the development experience for Docker users. As an integral part of Docker’s technology landscape, Mutagen is now in a privileged position to achieve that goal.”

Jacob will join Docker’s engineering team, spearheading the integration of Mutagen’s technologies into Docker Desktop and other Docker products.

You can get started with Mutagen today by downloading the latest version of Docker Desktop and installing the Mutagen extension, available in the Docker Extensions Marketplace. Support for current Mutagen offerings, open source and paid, will continue as we develop new and better integration options.

FAQ | Docker Acquisition of Mutagen

With Docker’s acquisition of Mutagen, you’re sure to have questions. We’ve answered the most common ones in this FAQ.

As with all of our open source efforts, Docker strives to do right by the community. We want this acquisition to benefit everyone — community and customer — in keeping with our developer obsession.

What will happen to Mutagen Pro subscriptions and the Mutagen Extension for Docker Desktop?

Both will continue as we evaluate and develop new and better integration options. Existing Mutagen Pro subscribers will see an update to the supplier on their invoices, but no other billing changes will occur.

Will Mutagen become closed-source?

There are no plans to change the licensing structure of Mutagen’s open source components. Docker has always valued the contributions of open source communities.

Will Mutagen or its companion projects be discontinued?

There are no plans to discontinue any Mutagen projects. 

Will people still be able to contribute to Mutagen’s open source projects?

Yes! Mutagen has always benefited from outside collaboration in the form of feedback, discussion, and code contributions, and there’s no desire to change that relationship. For more information about how to participate in Mutagen’s development, see the contributing guidelines.

What about other downstream users, companies, and projects using Mutagen?

Mutagen’s open source licenses continue to allow the embedding and use of Mutagen by other projects, products, and tooling.

Who will provide support for Mutagen projects and products?

In the short term, support for Mutagen’s projects and products will continue to be provided through the existing support channels. We will work to merge support into Docker’s channels in the near future.

Is this replacing Virtiofs, gRPC-FUSE, or osxfs?

No, virtual filesystems will continue to be the default path for bind mounts in Docker Desktop. Docker is continuing to invest in the performance of these technologies.

How does Mutagen compare with other virtual or remote filesystems?

Mutagen is a synchronization engine rather than a virtual or remote filesystem. Mutagen can be used to synchronize files to native filesystems, such as ext4, trading typically imperceptible amounts of latency for full native filesystem performance.

How does Mutagen compare with other synchronization solutions?

Mutagen focuses primarily on configuration and functionality relevant to developers.

How can I get started with Mutagen?

To get started with Mutagen, download the latest version of Docker Desktop and install the Mutagen Extension from the Docker Desktop Extensions Marketplace.

Shorter Feedback Loops Developing Java Apps with Digma’s Free Docker Extension
https://www.docker.com/blog/shorter-feedback-loops-developing-java-apps-with-digmas-free-docker-extension/ (Tue, 20 Jun 2023)

Many engineering teams follow a similar process when developing Docker images. This process encompasses activities such as development, testing, and building, and the images are then released as the subsequent version of the application. During each of these distinct stages, a significant quantity of data can be gathered, offering valuable insights into the impact of code modifications and providing a clearer understanding of the stability and maturity of the product features.

Observability has made a huge leap in recent years, and advanced technologies such as OpenTelemetry (OTEL) and eBPF have simplified the process of collecting runtime data for applications. Yet, for all that progress, developers may still be on the fence about whether and how to use this new resource. The effort required to transform the raw data into something meaningful and beneficial for the development process may seem to outweigh the advantages offered by these technologies.

The practice of continuous feedback envisions a development flow in which this particular problem has been solved. By making the evaluation of code runtime data continuous, developers can benefit from shorter feedback loops and tighter control of their codebase. Instead of waiting for issues to develop in production systems, or even surface in testing, various developer tools can watch the code observability data for you and provide early warning and rigorous linting for regressions, code smells, or issues. 

At the same time, gathering information back into the code from multiple deployment environments, such as the test or production environment, can help developers understand how a specific function, query, or event is performing in the real world at a single glance.


Meet Digma, the first linter for your runtime data 

Digma is a free developer tool that was created to bridge the continuous feedback gap in the DevOps loop. It aims to make sense of the gazillion metrics, traces, and logs the code is spewing out — so that developers won’t have to. And, it does this continuously and automatically. 

To make the tool even more practical, Digma processes the observability data and uses it to lint the source code itself, right in the IDE. From a developer’s perspective, this means a whole new level of information about the code is revealed — allowing for better code design based on real-world feedback, as well as quick turnaround for fixing regression or avoiding common pitfalls and anti-patterns.

How Digma is deployed

Digma is packaged as a self-contained Docker extension to make it easy for developers to evaluate their code without registering with an external service or sharing any data that might not be allowed by corporate policy (Figure 1). As such, Digma acts as your own intelligent agent for monitoring code execution, especially in development and testing. The backend component is able to track execution over time and highlight any inadvertent changes to behavior, performance, or error patterns that should be addressed.  

To collect data about the code, behind the scenes Digma leverages OpenTelemetry, a widely used open standard for modern observability. To make getting started easier, Digma takes care of the configuration work for you, so from the developer’s point of view, getting started with continuous feedback is equivalent to flipping a switch.

 Illustration of Digma process showing feedback providers and observability sources.
Figure 1: Overview of Digma.

Once the information is collected using OTEL, Digma runs it through what could be described as a reverse pipeline, aggregating the data, analyzing it and providing it via an API to multiple outlets including the IDE plugin, Jaeger, and the Docker extension dashboard that we’ll describe in this example.

Note: Currently, Digma supports Java applications running on IntelliJ IDEA; however, support for additional languages and platforms will be rolling out in the coming months.

Installing Digma

Prerequisites: Docker Desktop 4.8 or later.

Note: You must ensure that the Docker Extension is enabled (Figure 2).

Screenshot of Docker Desktop showing "Enable Docker Extensions" selected with blue checkmark.
Figure 2: Enabling Docker Extensions.

Step 1. Installing the Digma Docker Extension

In the Extensions Marketplace, search for Digma and select Install (Figure 3).

Screenshot of Extensions Marketplace showing results of search for Digma.
Figure 3: Install Digma.

Step 2. Collecting feedback about your code

After installing the Digma extension from the marketplace, you’ll be directed to a quickstart page that will guide you through the next steps to collect feedback about your code. Digma can collect information from two different sources:

  • Your tests/debug sessions within the IDE
  • Any Docker containers you may have running in Docker desktop

Getting feedback while debugging/running code in the IDE

Digma’s IDE extension makes the process of collecting data about your code in the IDE trivial. The entire setup is reduced to a single toggle button click (Figure 4). Behind the scenes, Digma adds variables to the runtime configuration so that it uses OpenTelemetry for observability. No prior knowledge of observability is needed, and no code changes are necessary to get this to work. 

With the observability toggle enabled, you’ll start getting immediate feedback as you run your code locally, or even when you execute your tests. Digma will be on the lookout for any regressions and will automatically spot code smells and issues.

Screenshot of Digma dialog box with Observability toggle switch enabled (in blue).
Figure 4: Observability toggle switch.

Getting feedback from running containers

In addition to the IDE, you can also choose to collect data from any running container on your machine running Java code. The process to do that is extremely simple and also documented in the Digma extension Getting Started page. It does not involve changing any Dockerfile or code. Basically, you download the OTEL agent, mount it to a volume on the container, and set some environment variables that point the Java process to use it.

curl --create-dirs -O -L --output-dir ./otel \
  https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar

curl --create-dirs -O -L --output-dir ./otel \
  https://github.com/digma-ai/otel-java-instrumentation/releases/latest/download/digma-otel-agent-extension.jar

export JAVA_TOOL_OPTIONS="-javaagent:/otel/javaagent.jar -Dotel.exporter.otlp.endpoint=http://localhost:5050 -Dotel.javaagent.extensions=/otel/digma-otel-agent-extension.jar"
export OTEL_SERVICE_NAME={--ENTER YOUR SERVICE NAME HERE--}
export DEPLOYMENT_ENV=LOCAL_DOCKER

docker run -d -v "/$(pwd)/otel:/otel" --env JAVA_TOOL_OPTIONS --env OTEL_SERVICE_NAME --env DEPLOYMENT_ENV {-- APPEND PARAMS AND REPO/IMAGE --}

Uncovering how the code behaves in runtime

Whether you collect data from running containers or in the IDE, Digma will continuously analyze the code behavior and make the analysis results available both as code annotations in the IDE and as dashboards in the Digma extension itself. To demonstrate a live example, we’ve created a sample application based on the Java Spring Boot PetClinic app. 

In this scenario, we’ll clone the repo and run the code from the IDE, triggering a few actions to see what they reveal about the code. For convenience, we’ve created run configurations to simulate common API calls and create interesting data:

  1. Open the PetClinic project here in your IntelliJ IDE.
  2. Run the petclinic-service configuration.
  3. Access the API and play around with it on http://localhost:9753/ or simply run the ClientTested run config included in the workspace.

Almost immediately, the result of the dynamic linting analysis will start appearing over the code in the IDE (Figure 5):

Screenshot showing Digma Insights after analysis.
Figure 5: Results of Digma analysis.

At the same time, the Digma extension itself will present a dashboard cataloging all of the assets that have been identified, including code locations, API endpoints, database queries, and more (Figure 6). For each asset, you’ll be able to access basic statistics about its performance as well as a more detailed list of issues of concern about its runtime behavior that may need to be tracked.

 Screenshot of Digma dashboard listing assets that have been identified.
Figure 6: Digma dashboard.

Using the observability data

One of the main problems Digma tries to solve is not how to collect observability data but how to turn it into a useful and practical asset that can speed up development processes and improve the code. Digma’s insights can be directly applied during design time, based on existing data, as well as to validate changes as they are being run in dev/test/prod and to get early feedback and shorter loops into the development process.

Examples of design time planning code insights:

  • See runtime usage for the modified functions and understand who will be affected by the change
  • Concurrency to plan for, bottlenecks and criticality
  • Existing errors and exceptions that need to be handled by the new component
  • See complete visualization of the flow of control using the modified code 

Examples of runtime code validations for shorter feedback loops:

  • Code and query modeling smells, such as N+1 selects, are detected and highlighted
  • Identify new performance bottlenecks and regressions as a result of the changes
  • Spot scaling issues earlier and address them as a part of the dev cycle

As Digma evolves, it continues to track and provide clear visibility into specific areas that are important to developers with the goal of having complete clarity about how each piece of code is behaving in real-world scenarios and catching issues and regressions much earlier in the process.

Who should use Digma?

Unlike many other observability solutions, Digma is code first and developer first. Digma is completely free for developers, does not require any code changes or data sharing, and will get you from zero to impactful data about your code within minutes. If you are working on Java code, use a JetBrains IDE, and want to improve your code with actual execution data, you can get started by picking up the Digma extension from the marketplace. 

You can provide feedback in our Slack channel and tell us where Digma improved your dev cycle.

Unlock Docker Desktop Real-Time Insights with the Grafana Docker Extension
https://www.docker.com/blog/unlock-docker-desktop-real-time-insights-with-the-grafana-docker-extension/ (Fri, 09 Jun 2023)

More than one million, that’s the number of active Grafana installations worldwide. In the realm of data visualization and monitoring, Grafana has emerged as a go-to solution for organizations and individuals alike. With its powerful features and intuitive interface, Grafana enables users to gain valuable insights from their data. 

Picture this scenario: You have a locally hosted web application that you need to test thoroughly, but accessing it remotely for testing purposes seems like an insurmountable task. This is where the Grafana Cloud Docker Extension comes to the rescue. This extension offers a seamless solution to the problem by establishing a secure connection between your local environment and the Grafana Cloud platform.

In this article, we’ll explore the benefits of using the Grafana Cloud Docker Extension and describe how it can bridge the gap between your local machine and the remote Grafana Cloud platform.


Overview of Grafana Cloud

Grafana Cloud is a fully managed, cloud-hosted observability platform ideal for cloud-native environments. With its robust features, wide user adoption, and comprehensive documentation, Grafana Cloud is a leading observability platform that empowers organizations to gain insights, make data-driven decisions, and ensure the reliability and performance of their applications and infrastructure.

Grafana Cloud is an open, composable observability stack that supports various popular data sources, such as Prometheus, Elasticsearch, Amazon CloudWatch, and more (Figure 1). By configuring these data sources, users can create custom dashboards, set up alerts, and visualize their data in real time. The platform also offers performance and load testing with Grafana Cloud k6 and incident response and management with Grafana Incident and Grafana OnCall to enhance the observability experience.

Graphic showing Hosted Grafana architecture with connections to elements including Node.js, Prometheus, Amazon Cloudwatch, and Google Cloud monitoring.
Figure 1: Hosted Grafana architecture.

Why run Grafana Cloud as a Docker Extension?

The Grafana Cloud Docker Extension is your gateway to seamless testing and monitoring, empowering you to make informed decisions based on accurate data and insights. The Docker Desktop integration in Grafana Cloud brings seamless monitoring capabilities to your local development environment. 

Docker Desktop, a user-friendly application for Mac, Windows, and Linux, enables you to containerize your applications and services effortlessly. With the Grafana Cloud extension integrated into Docker Desktop, you can now monitor your local Docker Desktop instance and gain valuable insights.

The following quick-start video shows how to monitor your local Docker Desktop instance using the Grafana Cloud Extension in Docker Desktop:

The Docker Desktop integration in Grafana Cloud provides a set of prebuilt Grafana dashboards specifically designed for monitoring Docker metrics and logs. These dashboards offer visual representations of essential Docker-related metrics, allowing you to track container resource usage, network activity, and overall performance. The integration also includes monitoring for Linux host metrics and logs, providing a comprehensive view of your development environment.

Using the Grafana Cloud extension, you can visualize and analyze the metrics and logs generated by your Docker Desktop instance. This enables you to identify potential issues, optimize resource allocation, and ensure the smooth operation of your containerized applications and microservices.

Getting started

Prerequisites: Docker Desktop 4.8 or later and a Grafana Cloud account.

Note: You must ensure that Docker Extensions are enabled (Figure 2).

Screenshot showing search for Grafana in Extensions Marketplace.
Figure 2: Enabling Docker Extensions.

Step 1. Install the Grafana Cloud Docker Extension

In the Extensions Marketplace, search for Grafana and select Install (Figure 3).

Screenshot of Docker Desktop showing installation of extension.
Figure 3: Installing the extension.

Step 2. Create your Grafana Cloud account

A Grafana Cloud account is required to use the Docker Desktop integration. If you don’t have a Grafana Cloud account, you can sign up for a free account today (Figure 4).

 Screenshot of Grafana Cloud sign-up page.
Figure 4: Signing up for Grafana Cloud.

Step 3. Find the Connections console

In your Grafana instance on Grafana Cloud, use the left-hand navigation bar to find the Connections Console (Home > Connections > Connect data) as shown in Figure 5.

Screenshot of Grafana Cloud Connections console.
Figure 5: Connecting data.

Step 4. Install the Docker Desktop integration

To start sending metrics and logs to the Grafana Cloud, install the Docker Desktop integration (Figure 6). This integration lets you fetch the values of connection variables required to connect to your account.

 Screenshot of Connections console showing installation of Docker Desktop integration.
Figure 6: Installing the Docker Desktop Integration.

Step 5. Connect your Docker Desktop instance to Grafana Cloud

It’s time to open and connect the Docker Desktop extension to Grafana Cloud (Figure 7). Enter the connection variables you found while installing the Docker Desktop integration on Grafana Cloud.

Screenshot showing connection of Docker Desktop extension to Grafana Cloud.
Figure 7: Connecting the Docker Desktop extension to Grafana Cloud.

Step 6. Check if Grafana Cloud is receiving data from Docker Desktop

Test the connection to ensure that the agent is collecting data (Figure 8).

Screenshot showing test of connection to ensure data collection.
Figure 8: Checking the connection.

Step 7. View the Grafana dashboard

The Grafana dashboard shows the integration with Docker Desktop (Figure 9).

 Screenshot of Grafana Dashboards page.
Figure 9: Grafana dashboard.

Step 8. Start monitoring your Docker Desktop instance

After the integration is installed, the Docker Desktop extension will start sending metrics and logs to Grafana Cloud.

You will see three prebuilt dashboards installed in Grafana Cloud for Docker Desktop.

Docker Overview dashboard

This Grafana dashboard gives a general overview of the Docker Desktop instance based on the metrics exposed by the cAdvisor Prometheus exporter (Figure 10).

Screenshot of Grafana Docker overview dashboard showing metrics such as CPU and memory usage.
Figure 10: Docker Overview dashboard.

The key metrics monitored are:

  • Number of containers/images
  • CPU metrics
  • Memory metrics
  • Network metrics

This dashboard also contains a shortcut at the top for the logs dashboard so you can correlate logs and metrics for troubleshooting.

Docker Logs dashboard

This Grafana dashboard provides logs and metrics related to logs of the running Docker containers on the Docker Desktop engine (Figure 11).

Screenshot of Grafana Docker Logs dashboard showing statistics related to the running Docker containers.
Figure 11: Docker Logs dashboard.

Logs and metrics can be filtered based on the Docker Desktop instance and the container using the drop-down for template variables on the top of the dashboard.

Docker Desktop Node Exporter/Nodes dashboard

This Grafana dashboard provides the metrics of the Linux virtual machine used to host the Docker engine for Docker Desktop (Figure 12).

Screenshot of Docker Nodes dashboard showing metrics such as disk space and memory usage.
Figure 12: Docker Nodes dashboard.

How to monitor Redis in a Docker container with Grafana Cloud

Because the Grafana Agent is embedded inside the Grafana Cloud extension for Docker Desktop, it can easily be configured to monitor other systems running on Docker Desktop that are supported by the Grafana Agent.

For example, we can monitor a Redis instance running inside a container in Docker Desktop using the Redis integration for Grafana Cloud and the Docker Desktop extension.
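
If you don’t already have a Redis container running in Docker Desktop, a quick way to start one for this walkthrough might look like the following (a minimal sketch; the container name and image tag are arbitrary choices, not requirements of the integration):

docker run -d --name my-redis -p 6379:6379 redis:7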

If we have a Redis database running inside our Docker Desktop instance, we can install the Redis integration on Grafana Cloud by navigating to the Connections Console (Home > Connections > Connect data) and clicking on the Redis tile (Figure 13).

Screenshot of Connections Console showing adding Redis as data source.
Figure 13: Installing the Redis integration on Grafana Cloud.

To start collecting metrics from the Redis server, we can copy the corresponding agent snippet into our agent configuration in the Docker Desktop extension. Click on Configuration in the Docker Desktop extension and add the following snippet under the integrations key. Then press Save configuration (Figure 14).

Screenshot of Connections console showing configuration of Redis integration.
Figure 14: Configuring Redis integration.

integrations:
  redis_exporter:
    enabled: true
    redis_addr: 'localhost:6379'

In its default settings, the Grafana Agent container is not connected to the default bridge network of Docker Desktop. To connect the agent to this network, run the following command:

docker network connect bridge grafana-docker-desktop-extension-agent

This step allows the agent to connect to and scrape metrics from applications running in other containers. Now you can see Redis metrics on the dashboard installed as part of the Redis solution for Grafana Cloud (Figure 15).

 Screenshot showing Redis metrics on the dashboard.
Figure 15: Viewing Redis metrics.
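
If metrics don’t show up right away, one quick sanity check is to confirm that the agent container actually joined the bridge network after running the command above (a sketch; the agent container name is the one used in that command):

docker network inspect bridge --format '{{range .Containers}}{{.Name}} {{end}}'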

Conclusion

With the Docker Desktop integration in Grafana Cloud and its prebuilt Grafana dashboards, monitoring your Docker Desktop environment becomes a streamlined process. The Grafana Cloud Docker Extension allows you to gain valuable insights into your local Docker instance, make data-driven decisions, and optimize your development workflow with the power of Grafana Cloud. Explore a new realm of testing possibilities and elevate your monitoring game to new heights. 

Check out the Grafana Cloud Docker Extension on Docker Hub.

Docker Desktop 4.20: Docker Engine and CLI Updated to Moby 24.0 https://www.docker.com/blog/docker-desktop-4-20-docker-engine-and-cli-updated-to-moby-24-0/ Thu, 01 Jun 2023 14:00:00 +0000 https://www.docker.com/?p=42974 We are happy to announce the major release of Moby 24.0 in Docker Desktop 4.20. We have dedicated significant effort to this release, marking a major milestone in the open source Moby project.

This effort began in September of last year when we announced we were extending the integration of the Docker Engine with containerd. Since then, we have continued working on the image store integration and contributing this work to the Moby project. 

In this release, we are adding support for image history, pulling images from private repositories, image importing, and support for the classic builder when using the containerd image store.

Patch Release Update: Experiencing startup issues mentioning “An unexpected error occurred”? Try updating to the newest version! We’ve addressed this in our 4.20.1 patch release.

Feature graphic with white 4.20 text on purple background

To explore and test these new features, activate the option Use containerd for pulling and storing images in the Features in Development panel within the Docker Desktop Settings (Figure 1). 

Screenshot of "Features in Development" page with "Use containerd for pulling and storing images" selected.
Figure 1: Select “Use containerd for pulling and storing images” in the “Features in development” screen.

When enabling this feature, you will notice your existing images are no longer listed, but there’s no need to worry. The containerd image store functions differently from the classic store, and you can only view one of them at a time. Images in the classic store can be accessed again by disabling the feature.

On the other hand, if you want to transfer your existing images to the new containerd store, simply push your images to the Docker Hub using the command docker push <my image name>. Then, activate the containerd store in the Settings and pull the image using docker pull <my image name>.
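
With a hypothetical image name, that round trip might look like this (the repository name is illustrative, and the push must happen while the classic store is still active):

# Push while the classic image store is still enabled
docker push myorg/myapp:latest
# Enable "Use containerd for pulling and storing images" in Settings, then pull it back
docker pull myorg/myapp:latest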

You can read all bug fixes and enhancements in the Moby release notes.

SBOM and provenance attestations using BuildKit v0.11

BuildKit v0.11 helps you secure your software supply chain by allowing you to add an SBOM (Software Bill of Materials) and provenance attestations in the SLSA format to your container images. This can be done with the container driver as described in the blog post Generating SBOMs for Your Image with BuildKit or by enabling the containerd image store and following the steps in the SBOM attestations documentation.

New Dockerfile inspection features

In version 4.20, you can use the docker build command to preview the configuration of your upcoming build or view a list of available build targets in your Dockerfile. This is particularly beneficial when dealing with multi-stage Dockerfiles or when diving into new projects.

When you run the build command, it processes your Dockerfile, evaluates all environment variables, and determines reachable build stages, but stops before running the build steps. You can use --print=outline to get detailed information on all build arguments, secrets, and SSH forwarding keys, along with their current values. Alternatively, use --print=targets to list all the possible stages that can be built with the --target flag.

We also aim to present textual descriptions for these elements by parsing the comments in your Dockerfile. Currently, you need to set the BUILDX_EXPERIMENTAL environment variable to use the --print flag (Figure 2).

Screenshot showing Buildx_experimental environmental variable.
Figure 2: Using the new `--print=outline` command outputs detailed information on build arguments, secrets, SSH forwarding keys, and associated values.
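
As a rough sketch of how that looks from a shell, assuming a Dockerfile in the current directory, you can set the variable inline for a single invocation:

# Print build arguments, secrets, and SSH forwarding keys with their current values
BUILDX_EXPERIMENTAL=1 docker build --print=outline .
# List the stages that can be selected with --target
BUILDX_EXPERIMENTAL=1 docker build --print=targets .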

Check out the Dockerfile 1.5 changelog for more updates, such as loading images from on-disk OCI layout and improved Git source and checksum validation for ADD commands in the labs channel. Read the Highlights from the BuildKit v0.11 Release blog post to learn more.

Docker Compose dry run

In our continued efforts to make Docker Compose better for developer workflows, we’re addressing a long-standing user request. You can now dry run any Compose command by adding a flag (--dry-run). This gives you insight on what exactly Compose will generate or execute so nothing unexpected happens.
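
For example, assuming a Compose file in the current project, the following previews everything an up would create or start without actually touching your containers:

docker compose --dry-run up --build -d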

We’re also excited to highlight contributions from the community, such as First implementation of viz subcommand #10376 by @BenjaminGuzman, which generates a graph of your stack, with details like networks and ports. Try docker compose alpha viz on your project to check out your own graph.

Conclusion

We love hearing your feedback. Please leave any feedback on our public GitHub roadmap and let us know what else you’d like to see. Check out the Docker Desktop 4.20 release notes for a full breakdown of what’s new in the latest release.

Simplifying Kubernetes Development: Docker Desktop + Red Hat OpenShift https://www.docker.com/blog/blog-docker-desktop-red-hat-openshift/ Thu, 25 May 2023 14:01:00 +0000 https://www.docker.com/?p=42878 It’s Red Hat Summit week, and we wanted to use this as an opportunity to highlight several aspects of the Docker and Red Hat partnership. In this post, we highlight Docker Desktop and Red Hat OpenShift. In another post, we talk about 160% year-over-year growth in pulls of Red Hat’s Universal Base Image on Docker Hub.

Why Docker Desktop + Red Hat OpenShift?

Docker Desktop + Red Hat OpenShift allows the thousands of enterprises that depend on Red Hat OpenShift to leverage the Docker Desktop platform that more than 20 million active developers already know and trust to eliminate daily friction and empower them to deliver results.

Red Hat logo with the word OpenSHift on a purple background

What problem it solves

Sometimes, it feels like coding is easy compared to the sprint demo and getting everybody’s approval to move forward. Docker Desktop does the yak shaving to make developing, using, and testing containerized applications on Mac and Windows local environments easy, and the Red Hat OpenShift extension for Docker Desktop extends that with one-click pushes to Red Hat’s cloud container platform.

One especially convenient use of the Red Hat OpenShift extension for Docker Desktop is for quickly previewing or sharing work, where the ease of access and friction-free deploys without waiting for CI can reduce cycle times early in the dev process and enable rapid iteration leading up to full CI and production deployment.

Getting started

If you don’t already have Docker Desktop installed, refer to our guide on Getting Started with Docker.

Installing the extension

The Red Hat OpenShift extension is one of many in the Docker Extensions Marketplace. Select Install to install the extension.

Screenshot of Docker Extensions Marketplace showing search for OpenShift

👋 Pro tip: Have an idea for an extension that isn’t in the marketplace? Build your own and let us know about it! Or don’t tell us about it and keep it for internal use only — private plugins for company or team-specific workflows are also a thing.

Signing into the OpenShift cluster in the extension

From within your Red Hat OpenShift cluster web console, select the copy login command from the user menu:

Screenshot of Red Hat OpenShift cluster web console, with “copy login command” selected in the user menu

👋 Don’t have an OpenShift cluster? Red Hat supports OpenShift exploration and development with a developer sandbox program that offers immediate access to a cluster, guided tutorials, and more. To sign up, go to their Developer Sandbox portal.

That leads to a page with the details you can use to fetch the Kubernetes context you can use in the Red Hat OpenShift extension and elsewhere:

Screenshot of the issued API token

From there, you can come back to the Red Hat OpenShift extension in Docker Desktop. In the Deploy to OpenShift screen, you can change or log in to a Kubernetes context. In the login screen, paste the whole oc login line, including the token and server URL, from the previous step:

Screenshot of "Login to OpenShift" dialog box

Your first deploy

Using the extension is even easier than installing it; you just need a containerized app. A sample racing-game-app is ready to go with a Dockerfile (docs) and OpenShift manifest (docs), and it’s a joy to play.

To get started, clone https://github.com/container-demo/HexGL.
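
From a terminal, that looks like the following (cloning into a HexGL directory and changing into it, which the next step assumes):

git clone https://github.com/container-demo/HexGL
cd HexGL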

👋 Pro tip: Have an app you want to containerize? Docker’s newly introduced docker init command automates the creation of Dockerfiles, Compose manifests, and .dockerignore files.

Build your image

Using your local clone of sample racing-game-app, cd into the repo on your command line, where we’ll build the image:

docker build --platform linux/amd64 -t sample-racing-game .

The --platform linux/amd64 flag forces an x86-compatible image, even if you’re building from a Mac with an Apple Silicon/ARM CPU. The -t sample-racing-game flag names and tags your image with something more recognizable than a SHA256. Finally, the trailing dot (“.”) tells Docker to build from the current directory.

Starting with Docker version 23, docker build leverages BuildKit with more performance, better caching, and support for things like build secrets that make our lives as developers easier and more secure.
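
As an illustration of the build secrets support mentioned above (a hedged sketch that is not part of the racing-game sample; the my-app tag, the npmrc secret id, and the Dockerfile line in the comment are hypothetical):

# Expose a local file to the build as a secret without baking it into image layers
docker build --secret id=npmrc,src=$HOME/.npmrc -t my-app .
# A Dockerfile step would consume it with something like:
#   RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci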

Test the image locally

Now that you’ve built your image, you can test it out locally. Just run the image using:

docker run -d -p 8080:8080 sample-racing-game

And open your browser to http://localhost:8080/ to take a look at the result.
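
If you prefer the terminal over the browser for a quick smoke test, curl works too:

curl -I http://localhost:8080/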

Deploy to OpenShift

With the image built, it’s time to test it out on the cluster. We don’t even have to push it to a registry first.

Visit the Red Hat OpenShift extension in Docker Desktop, select our new sample-racing-game image, and select Push to OpenShift and Deploy from the action button pulldown:

Screenshot of Deploy to OpenShift page with sample-racing-game selected for deployment

The Push to OpenShift and Deploy action will push the image to the OpenShift cluster’s internal private registry without any need to push to another registry first. It’s not a substitute for a private registry, but it’s convenient for iterative development where you don’t plan on keeping the iterative builds.

Once you click the button, the extension will do exactly as you intend and keep you updated about progress:

Screenshot of Deploy to OpenShift page showing deployment progress for sample-racing-game

Finally, the extension will display the URL for the app and attempt to open your default browser to show it off:

Screenshot of racing game demo app start page with blue and orange racing graphic

Cleanup

Once you’re done with your app, or before attempting to redeploy it, use the web terminal to remove all traces of your deployment with this one-liner:

oc get all -oname | grep sample-racing-game | xargs oc delete

Screenshot of web terminal commands showing deleted files

That will avoid unnecessarily using resources and any errors if you try to redeploy to the same namespace later.

So…what did we learn?

The Red Hat OpenShift extension gives you a one-click deploy from Docker Desktop to an OpenShift cluster. It’s especially convenient for previewing iterative work in a shared cluster for sprint demos and approval as a way to accelerate iteration.

The above steps showed you how to install and configure the extension, build an image, and deploy it to the cluster with a minimum of clicks. Now show us what you’ll build next with Docker and Red Hat, tag @docker on Twitter or me @misterbisson@techub.social to share!

Boost Your Local Testing Game with the LambdaTest Tunnel Docker Extension https://www.docker.com/blog/boost-your-local-testing-game-with-lambdatest-tunnel-docker-extension/ Tue, 16 May 2023 14:48:07 +0000 https://www.docker.com/?p=42430 As the demand for web applications continues to rise, so does the importance of testing them thoroughly. One challenge that testers face is how to test applications that are hosted locally on their machines. This is where the LambdaTest Tunnel Docker Extension comes in handy. This extension allows you to establish a secure connection between your local environment and the LambdaTest platform, making it possible to test your locally hosted pages and applications on a remote browser. 

In this article, we’ll explore the benefits of using the LambdaTest Tunnel Docker Extension and describe how it can streamline your testing workflow.

White Docker logo on black background with LambdaTest logo in blue

Overview of LambdaTest Tunnel

LambdaTest Tunnel is a secure and encrypted tunneling feature that allows devs and QAs to test their locally hosted web applications or websites on cloud-based real machines. It establishes a secure connection between the user’s local machine and the real machine in the cloud (Figure 1).

By downloading the LambdaTest Tunnel binary, you can securely connect your local machine to LambdaTest cloud servers even when behind corporate firewalls. This allows you to test locally hosted websites or web applications across various browsers, devices, and operating systems available on the LambdaTest platform. Whether your web files are written in HTML, CSS, PHP, Python, or similar languages, you can use LambdaTest Tunnel to test them.

Diagram of LambdaTest Tunnel network setup, showing connection from the tunnel client to the API gateway to the LambdaTest's private network with proxy server and browser VMs.
Figure 1: Overview of LambdaTest Tunnel.

Why use LambdaTest Tunnel?

LambdaTest Tunnel offers numerous benefits for web developers, testers, and QA professionals. These include a secure and encrypted connection, cross-browser compatibility testing, localhost testing, and more.

Let’s see the LambdaTest Tunnel benefits one by one:

  • It provides a secure and encrypted connection between your local machine and the virtual machines in the cloud, thereby ensuring the privacy of your test data and online communications.
  • With LambdaTest Tunnel, you can test your web applications or websites, local folder, and files across a wide range of browsers and operating systems without setting up complex and expensive local testing environments.
  • It lets you test your locally hosted web applications or websites on cloud-based real OS machines.
  • You can even run accessibility tests on desktop browsers while testing locally hosted web applications and pages.

Why run LambdaTest Tunnel as a Docker Extension?

With Docker Extensions, you can build and integrate software applications into your daily workflow. Using LambdaTest Tunnel as a Docker extension provides a seamless and hassle-free experience for establishing a secure connection and performing cross-browser testing of locally hosted websites and web applications on the LambdaTest platform without manually launching the tunnel through the command line interface (CLI).

The LambdaTest Tunnel Docker Extension opens up a world of options for your testing workflows by adding a variety of features. Docker Desktop has an easy-to-use one-click installation feature that allows you to use the LambdaTest Tunnel Docker Extension directly from Docker Desktop.

Getting started

Prerequisites: Docker Desktop 4.8 or later and a LambdaTest account. Note: You must ensure that Docker Extensions are enabled (Figure 2).

Screenshot of Docker Desktop with Docker Extensions enabled.
Figure 2: Enable Docker Extensions.

Step 1: Install the LambdaTest Docker Extension

In the Extensions Marketplace, search for LambdaTest Tunnel extension and select Install (Figure 3).

Screen shot of Extensions Marketplace, showing blue Install button for LambdaTest Tunnel.
Figure 3: Install LambdaTest Tunnel.

Step 2: Set up the Docker LambdaTest Tunnel

Open the LambdaTest Tunnel and select the Setup Tunnel to configure the tunnel (Figure 4).

Screenshot of LambdaTest Tunnel setup page.
Figure 4: Configure the tunnel.

Step 3: Enter your LambdaTest credentials

Provide your LambdaTest Username, Access Token, and preferred Tunnel Name. You can get your Username and Access Token from your LambdaTest Profile under Password & Security. Once these details have been entered, click Launch Tunnel (Figure 5).

Screenshot of LambdaTest Docker Tunnel page with black "Launch Tunnel" button.
Figure 5: Launch LambdaTest Tunnel.

The LambdaTest Tunnel will be launched, and you can see the running tunnel logs (Figure 6).

Screenshot of LambdaTest Docker Tunnel page with list of running tunnels.
Figure 6: Running logs.

Once you have configured the LambdaTest Tunnel via Docker Extension, it should appear on the LambdaTest Dashboard (Figure 7).

Screenshot of new active tunnel in LambdaTest dashboard.
Figure 7: New active tunnel.

Local testing using LambdaTest Tunnel Docker Extension

Let’s walk through a scenario using LambdaTest Tunnel. Suppose a web developer has created a new web application that allows users to upload and view images. The developer needs to ensure that the application can handle a variety of different image formats and sizes and that it can render display images across different browsers and devices.

To do this, the developer first sets up a local development environment and installs the LambdaTest Tunnel Docker Extension. They then use the web application to open and manipulate local image files.

Next, the developer uses the LambdaTest Tunnel to securely expose their local development environment to the internet. This step allows them to test the application in real-time on different browsers and devices using LambdaTest’s cloud-based digital experience testing platform.
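
For instance, a locally hosted page to exercise through the tunnel can be as simple as a static site served from the current directory (a sketch; the nginx image and port are arbitrary choices):

# Serve the current directory at http://localhost:8080, then enter that URL in LambdaTest
docker run --rm -d -p 8080:80 -v "$PWD":/usr/share/nginx/html:ro nginx:alpine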

Now let’s see the steps to perform local testing using the LambdaTest Tunnel Docker Extension.

1. Go to the LambdaTest Dashboard and navigate to Real Time Testing > Browser Testing (Figure 8).

Screenshot of LambdaTest Tunnel showing Browser Testing selected.
Figure 8: Navigate to Browser Testing.

2. In the console, enter the localhost URL, select browser, browser version, operating system, etc. and select START (Figure 9).

Screenshot of LambdaTest Tunnel page showing browser options to choose from.
Figure 9: Configure testing.

3. A cloud-based real operating system will fire up where you can perform testing of local files or folders (Figure 10).

Screenshot of LambdaTest showing local test example.
Figure 10: Perform local testing.

Learn more about how to set up the LambdaTest Tunnel Docker Extension in the documentation

Conclusion

The LambdaTest Tunnel Docker Extension makes it easy to perform local testing without launching the tunnel from the CLI. You can run localhost tests over an online cloud grid of 3000+ real browser and operating system combinations. You don’t have to worry about the challenges of local infrastructure because LambdaTest provides a zero-downtime cloud grid.

Check out the LambdaTest Tunnel Docker Extension on Docker Hub. The LambdaTest Tunnel Docker Extension source code is available on GitHub, and contributions are welcome.

Building a Local Application Development Environment for Kubernetes with the Gefyra Docker Extension  https://www.docker.com/blog/building-a-local-application-development-environment-for-kubernetes-with-the-gefyra-docker-extension/ Wed, 03 May 2023 14:00:00 +0000 https://www.docker.com/?p=42061 If you’re using a Docker-based development approach, you’re already well on your way toward creating cloud-native software. Containerizing your software ensures that you have all the system-level dependencies, language-specific requirements, and application configurations managed in a containerized way, bringing you closer to the environment in which your code will eventually run. 

In complex systems, however, you may need to connect your code with several auxiliary services, such as databases, storage volumes, APIs, caching layers, message brokers, and others. In modern Kubernetes-based architectures, you also have to deal with service meshes and cloud-native deployment patterns, such as probes, configuration, and structural and behavioral patterns. 

Kubernetes offers a uniform interface for orchestrating scalable, resilient, and services-based applications. However, its complexity can be overwhelming, especially for developers without extensive experience setting up Kubernetes clusters. That’s where Gefyra comes in, making it easier for developers to work with Kubernetes and improve the process of creating secure, reliable, and scalable software.

Gefyra and Docker logos on a dark background with a lighter purple outline of two puzzle pieces

What is Gefyra? 

Gefyra, named after the Greek word for “bridge,” is a comprehensive toolkit that facilitates Docker-based development with Kubernetes. If you plan to use Kubernetes as your production platform, it’s essential to work with the same environment during development. This approach ensures that you have the highest possible “dev/prod-parity,” minimizing friction when transitioning from development to production. 

Gefyra is an open source project that provides docker run on steroids. It allows you to connect your local Docker with any Kubernetes cluster and run a container locally that behaves as if it would run in the cluster. You can write code locally in your favorite code editor using the tools you love. 

Additionally, Gefyra does not require you to build a container image from your code changes, push the image to a registry, or trigger a restart in the cluster. Instead, it saves you from this tedious cycle by connecting your local code right into the cluster without any changes to your existing Dockerfile. This approach is useful not only for new code but also when introspecting existing code with a debugger that you can attach to a running container. That makes Gefyra a productivity superstar for any Kubernetes-based development work.

How does Gefyra work?

Gefyra installs several cluster-side components that enable it to control the local development machine and the development cluster. These components include a tunnel between the local development machine and the Kubernetes cluster, a local DNS resolver that behaves like the cluster DNS, and sophisticated IP routing mechanisms. Gefyra uses popular open source technologies, such as Docker, WireGuard, CoreDNS, Nginx, and Rsync, to build on top of these components.

The local development setup involves running a container instance of the application on the developer machine, with a sidecar container called Cargo that acts as a network gateway and provides a CoreDNS server that forwards all requests to the cluster (Figure 1). Cargo encrypts all the passing traffic with WireGuard using ad hoc connection secrets. Developers can use their existing tooling, including their favorite code editor and debuggers, to develop their applications.

Yellow graphic with white text boxes showing development setup, including: IDE, Volumes, Shell, Logs, Debugger, and connection to Gefyra, including App Container and Cargo sidecar container.
Figure 1: Local development setup.

Gefyra manages two ends of a WireGuard connection and automatically establishes a VPN tunnel between the developer and the cluster, making the connection robust and fast without stressing the Kubernetes API server (Figure 2). Additionally, the client side of Gefyra manages a local Docker network with a VPN endpoint, allowing the container to join the VPN that directs all traffic into the cluster.

Yellow graphic with boxes and arrows showing connection between Developer Machine and Developer Cluster.
Figure 2: Connecting developer machine and cluster.

Gefyra also allows bridging existing traffic from the cluster to the local container, enabling developers to test their code with real-world requests from the cluster and collaborate on changes in a team. The local container instance remains connected to auxiliary services and resources in the cluster while receiving requests from other Pods, Services, or the Ingress. This setup eliminates the need for building container images in a continuous integration pipeline and rolling out a cluster update for simple changes.

Why run Gefyra as a Docker Extension?

Gefyra’s core functionality is contained in a Python library available in its repository. The CLI that comes with the project has a long list of arguments that may be overwhelming for some users. To make it more accessible, Gefyra developed the Docker Desktop extension, which is easy for developers to use without having to delve into the intricacies of Gefyra.

The Gefyra extension for Docker Desktop enables developers to work with a variety of Kubernetes clusters, including the built-in Kubernetes cluster, local providers such as Minikube, K3d, or Kind, Getdeck Beiboot, or any remote clusters. Let’s get started.

Installing the Gefyra Docker Desktop

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Initial setup

In Docker Desktop, confirm that the Docker Extensions feature is enabled. (Docker Extensions should be enabled by default.) In Settings | Extensions select the Enable Docker Extensions box (Figure 3).

 Screenshot showing Docker Desktop interface with "Enable Docker Extensions" selected.
Figure 3: Enable Docker Extensions.

You must also enable Kubernetes under Settings (Figure 4).

Screenshot of Docker Desktop with "Enable Kubernetes" and "Show system containers (advanced)" selected.
Figure 4: Enable Kubernetes.

Gefyra is in the Docker Extensions Marketplace. In the following instructions, we’ll install Gefyra in Docker Desktop. 

Step 2: Add the Gefyra extension

Open Docker Desktop and select Add Extensions to find the Gefyra extension in the Extensions Marketplace (Figure 5).

Screenshot showing search for "Gefyra" in Docker Extensions Marketplace.
Figure 5: Locate Gefyra in the Docker Extensions Marketplace.

Once Gefyra is installed, you can open the extension and find the start screen of Gefyra that lists all containers that are connected to a Kubernetes cluster. Of course, this section is empty on a fresh install.

To launch a local container with Gefyra, just like with Docker, you need to click on the Run Container button at the top right (Figure 6).

 Screenshot showing Gefyra start screen.
Figure 6: Gefyra start screen.

The next steps will vary based on whether you’re working with a local or remote Kubernetes cluster. If you’re using a local cluster, simply select the matching kubeconfig file and optionally set the context (Figure 7). 

For remote clusters, you may need to manually specify additional parameters. Don’t worry if you’re unsure how to do this, as the next section will provide a detailed example for you to follow along with.

Screenshot of Gefyra interface showing blue "Choose Kubeconfig" button.
Figure 7: Selecting Kubeconfig.

The Kubernetes demo workloads

The following example showcases how Gefyra leverages the Kubernetes functionality included in Docker Desktop to create a development environment for a simple application that consists of two services — a backend and a frontend (Figure 8). 

Both services are implemented as Python processes, and the frontend service uses a color property obtained from the backend to generate an HTML document. Communication between the two services is established via HTTP, with the backend address being passed to the frontend as an environment variable.

 Yellow graphic showing connection of frontend and backend services.
Figure 8: Frontend and backend services.

The Gefyra team has created a repository for the Kubernetes demo workloads, which can be found on GitHub

If you prefer to watch a video explaining what’s covered in this tutorial, check out this video on YouTube

Prerequisite

Ensure that the current Kubernetes context is switched to Docker Desktop. This step allows the user to interact with the Kubernetes cluster and deploy applications to it using kubectl.

kubectl config current-context
docker-desktop
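
If the command prints a different context, switch to the Docker Desktop cluster first:

kubectl config use-context docker-desktop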

Clone the repository

The next step is to clone the repository:

git clone https://github.com/gefyrahq/gefyra-demos

Applying the workload

The following YAML file sets up a simple two-tier app consisting of a backend service and a frontend service with communication between the two services established via the SVC_URL environment variable passed to the frontend container. 

It defines two pods, named backend and frontend, and two services, named backend and frontend, respectively. The backend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-backend image on port 5002. The frontend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-frontend image on port 5003. The frontend container also includes an environment variable named SVC_URL, which is set to the value backend.default.svc.cluster.local:5002.

The backend service is defined to select the backend pod using the app: backend label, and expose port 5002. The frontend service is defined to select the frontend pod using the app: frontend label, and expose port 80 as a load balancer, which routes traffic to port 5003 of the frontend container.

/gefyra-demos/kcd-munich> kubectl apply -f manifests/demo.yaml
pod/backend created
pod/frontend created
service/backend created
service/frontend created

Let’s watch the workload getting ready:

kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
backend    1/1     Running   0          2m6s
frontend   1/1     Running   0          2m6s

After ensuring that the backend and frontend pods have finished initializing (check for the READY column in the output), you can access the application by navigating to http://localhost in your web browser. This URL is served from the Kubernetes environment of Docker Desktop. 

Upon loading the page, you will see the application’s output displayed in your browser. Although the output may not be visually stunning, it is functional and should provide the necessary functionality for your needs.

Blue bar displaying "Hello World" in black text.
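
The same check works from a terminal (the exact markup may differ, but the greeting should appear):

curl -s http://localhost | grep -i hello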

Now, let’s explore how we can correct or adjust the color of the output generated by the frontend component.

Using Gefyra “Run Container” with the frontend process

In the first part of this section, you will see how to execute a frontend process on your local machine that is associated with a resource based on the Kubernetes cluster: the backend API. This can be anything ranging from a database to a message broker or any other service utilized in the architecture.

Kick off a local container with Run Container from the Gefyra start screen (Figure 9).

Screenshot of Gefyra interface showing blue "Run container" button.
Figure 9: Run a local container.

Once you’ve entered the first step of this process, you will find the kubeconfig and context already set automatically. That’s a lifesaver if you don’t know where to find the default kubeconfig on your host.

Just hit the Next button and proceed with the container settings (Figure 10).

Screenshot of Gefyra interface showing the "Set Kubernetes Settings" step.
Figure 10: Container settings.

In the Container Settings step, you can configure the Kubernetes-related parameters for your local container. In this example, everything happens in the default Kubernetes namespace. Select it in the first drop-down input (Figure 11). 

In the drop-down input below Image, you can specify the image to run locally. Note that it lists all images that are being used in the selected namespace (from the Namespace selector). Isn’t that convenient? You don’t need to worry about the images being used in the cluster or find them yourself. Instead, you get a suggestion to work with the image at hand, as we want to do in this example (Figure 12). You could still specify any arbitrary images if you like, for example, a completely new image you just built on your machine.

Screenshot of Gefyra interface showing "Select a Workload" drop-down menu under Container Settings.
Figure 11: Select namespace and workload.
Screenshot of Gefyra interface showing drop-down menu of images.
Figure 12: Select image to run.

To copy the environment of the frontend container running in the cluster, you will need to select pod/frontend from the Copy Environment From selector (Figure 13). This step is important because you need the backend service address, which is passed to the pod in the cluster using an environment variable.

Finally, for the upper part of the container settings, you need to overwrite the following run command of the container image to enable code reloading:

poetry run flask --app app --debug run --port 5002 --host 0.0.0.0

Screenshot of Gefyra interface showing selection of  “pod/frontend” under “Copy Environment From."
Figure 13: Copy environment of frontend container.

Let’s start the container process on port 5002 and expose this port on the local machine. In addition, let’s mount the code directory (/gefyra-demos/kcd-munich/frontend) to make code changes immediately visible. That’s it for now. A click on the Run button starts the process.

Screenshot of Gefyra interface showing installation progress bar.
Figure 14: Installing Gefyra components.

It takes a few seconds to install Gefyra’s cluster-side components, prepare the local networking part, and pull the container image to start locally (Figure 14). Once this is ready, you will get redirected to the native container view of Docker Desktop from this container (Figure 15).

Screenshot showing native container view of Docker Desktop.
Figure 15: Log view.

You can look around in the container using the Terminal tab (Figure 16). Type in the env command in the shell, and you will see all the environment variables coming with Kubernetes.

Screenshot showing Terminal view of running container.
Figure 16: Terminal view.
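
For example, to narrow the output to the variable discussed next, run this inside the container’s Terminal tab (the value shown in the comment is the one defined in the demo manifest):

env | grep SVC_URL
# SVC_URL=backend.default.svc.cluster.local:5002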

We’re particularly interested in the SVC_URL variable that points the frontend to the backend process, which is, of course, still running in the cluster. Now, when browsing to the URL http://localhost:5002, you will get a slightly different output:

Blue bar displaying "Hello KCD" in black text

Why is that? Let’s look at the code that we already mounted into the local container, specifically the app.py that runs a Flask server (Figure 17).

Screenshot of colorful app.py code on black background.
Figure 17: App.py code.

The last line of the code in the Gefyra example displays the text Hello KCD!, and any changes made to this code are immediately updated in the local container. This feature is noteworthy because developers can freely modify the code and see the changes reflected in real-time without having to rebuild or redeploy the container.

Line 12 of the code in the Gefyra example sends a request to a service URL, which is stored in the variable SVC. The value of SVC is read from an environment variable named SVC_URL, which is copied from the pod in the Kubernetes cluster. The URL, backend.default.svc.cluster.local:5002, is a fully qualified domain name (FQDN) that points to a Kubernetes service object and a port. 

These URLs are commonly used by applications in Kubernetes to communicate with each other. The local container process is capable of sending requests to services running in Kubernetes using the native connection parameters, without the need for developers to make any changes, which may seem like magic at times.

In most development scenarios, the capabilities of Gefyra we just discussed are sufficient. In other words, you can use Gefyra to run a local container that can communicate with resources in the Kubernetes cluster, and you can access the app on a local port. However, what if you need to modify the backend while the frontend is still running in Kubernetes? This is where the “bridge” feature of Gefyra comes in, which we will explore next.

Gefyra “bridge” with the backend process

We could choose to run the frontend process locally and connect it to the backend process running in Kubernetes through a bridge. However, this approach may not always be necessary or desirable, especially for backend developers who may not be interested in the frontend. In this case, it may be more convenient to leave the frontend running in the cluster and stop the local instance by selecting the stop button in Docker Desktop’s container view.

First of all, we have to run a local instance of the backend service. It’s the same as with the frontend, but this time with the backend container image (Figure 18).

Screenshot of Gefyra interface showing "pod/backend" setup.
Figure 18: Running a backend container image.

Compared to the frontend example from above, you can run the backend container image (quay.io/gefyra/gefyra-demo-backend:latest), which is suggested by the drop-down selector. This time we need to copy the environment from the backend pod running in Kubernetes. Note that the volume mount is now set to the code of the backend service to make it work.

After starting the container, you can check http://localhost:5002/color, which serves the backend API response. Looking at the app.py of the backend service shows the source of this response. In line 8, this app returns a JSON response with the color property set to green (Figure 19).

Screenshot showing app.py code with "color" set to "green".
Figure 19: Checking the color.
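
You can also confirm the local backend response from a terminal (the exact JSON shape shown in the comment is an assumption based on the color property described above):

curl -s http://localhost:5002/color
# {"color": "green"}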

At this point, keep in mind that we’re only running a local instance of the backend service. This time, a connection to a Kubernetes-based resource is not needed as this container runs without any external dependency.

The idea is to make the frontend process that serves from the Kubernetes cluster on http://localhost (still blue) pick up our backend information to render its output. That’s done using Gefyra’s bridge feature. In the next step, we will overlay the backend process running in the cluster with our local container instance so that the local code becomes effective in the cluster.

Getting back to the Gefyra container list on the start screen, you can find the Bridge column on each locally running container (Figure 20). Once you click this button, you can create a bridge of your local container into the cluster.

Screenshot of Gefyra interface showing "Bridge" column on far right.
Figure 20: The Bridge column is visible on the far right.

In the next dialog, we need to enter the bridge configuration (Figure 21).

Screenshot of Gefyra interface showing Bridge Settings.
Figure 21: Enter the bridge configuration.

Let’s set the “Target” for the bridge to the backend pod, which is currently serving the frontend process in the cluster, and set a timeout for the bridge to 60 seconds. We also need to map the port of the proxy running in the cluster with the local instance. 

If your local container is configured to listen on a different port from the cluster, you can specify the mapping here (Figure 22). In this example, the backend service is running on port 5002 in both the cluster and on the local machine, so we need to map that port. After clicking the Bridge button, it takes a few seconds to return to the container list on Gefyra’s start view.

 Screenshot of Gefyra interface showing port mapping configuration.
Figure 22: Specify port mapping.

Observe the change in the icon of the Bridge button, which now depicts a stop symbol (Figure 23). This means the bridge function is now operational and can be terminated by simply clicking this button again.

Screenshot of Gefyra showing closeup view of Bridge column and blue stop button.
Figure 23: The Bridge column showing a stop symbol.

At this point, the local code is able to handle requests from the frontend process in the cluster by using the URL stored in the SVC_URL variable, without making any changes to the frontend process itself. To confirm this, you can open http://localhost in your browser (which is served from the Kubernetes of Docker Desktop) and check that the output is now green. This is because the local code is returning the value green for the color property. You can change this value to any valid one in your IDE, and it will be immediately reflected in the cluster. This is the amazing power of this tool.
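
The same verification works from the command line; since the frontend simply embeds the color value it receives, grepping for it is a rough but sufficient check:

curl -s http://localhost | grep -i green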

Remember to release the bridge of your container once you are finished making changes to your backend. This resets the cluster to its original state, and the frontend will display the original “beautiful” blue H1 again. With this approach, we intercepted containers running in Kubernetes with our local code, without modifying the Kubernetes cluster itself, and released the intercept once we were done.

Conclusion

Gefyra is an easy-to-use Docker Desktop extension that connects with Kubernetes to improve development workflows and team collaboration. It lets you run containers as usual while being connected with Kubernetes, thereby saving time and ensuring high dev/prod parity. 

The Blueshoe development team would appreciate a star on GitHub and welcomes you to join their Discord community for more information.

About the Author

Michael Schilonka is a strong believer that Kubernetes can be a software development platform, too. He is the co-founder and managing director of the Munich-based agency Blueshoe and the technical lead of Gefyra and Getdeck. He talks about Kubernetes in general and how they are using Kubernetes for development. Follow him on LinkedIn to stay connected.
