Simplifying Kubernetes Development: Docker Desktop + Red Hat OpenShift https://www.docker.com/blog/blog-docker-desktop-red-hat-openshift/ Thu, 25 May 2023 14:01:00 +0000 https://www.docker.com/?p=42878 It's Red Hat Summit week, and we wanted to use this as an opportunity to highlight several aspects of the Docker and Red Hat partnership. In this post, we highlight Docker Desktop and Red Hat OpenShift. In another post, we talk about 160% year-over-year growth in pulls of Red Hat's Universal Base Image on Docker Hub.

Why Docker Desktop + Red Hat OpenShift?

Docker Desktop + Red Hat OpenShift lets the thousands of enterprises that depend on Red Hat OpenShift pair it with the Docker Desktop platform that more than 20 million active developers already know and trust, eliminating daily friction and empowering them to deliver results.

Red Hat logo with the word OpenShift on a purple background

What problem it solves

Sometimes, it feels like coding is easy compared to the sprint demo and getting everybody's approval to move forward. Docker Desktop does the yak shaving to make developing, using, and testing containerized applications in local Mac and Windows environments easy, and the Red Hat OpenShift extension for Docker Desktop extends that with one-click pushes to Red Hat's cloud container platform.

One especially convenient use of the Red Hat OpenShift extension for Docker Desktop is for quickly previewing or sharing work, where the ease of access and friction-free deploys without waiting for CI can reduce cycle times early in the dev process and enable rapid iteration leading up to full CI and production deployment.

Getting started

If you don’t already have Docker Desktop installed, refer to our guide on Getting Started with Docker.

Installing the extension

The Red Hat OpenShift extension is one of many in the Docker Extensions Marketplace. Select Install to install the extension.

Screenshot of Docker Extensions Marketplace showing search for OpenShift

👋 Pro tip: Have an idea for an extension that isn’t in the marketplace? Build your own and let us know about it! Or don’t tell us about it and keep it for internal use only — private plugins for company or team-specific workflows are also a thing.

Signing into the OpenShift cluster in the extension

From within your Red Hat OpenShift cluster web console, select the copy login command from the user menu:

Screenshot of Red Hat OpenShift cluster web console, with “copy login command” selected in the user menu

👋 Don’t have an OpenShift cluster? Red Hat supports OpenShift exploration and development with a developer sandbox program that offers immediate access to a cluster, guided tutorials, and more. To sign up, go to their Developer Sandbox portal.

That leads to a page with the details you need to fetch the Kubernetes context for use in the Red Hat OpenShift extension and elsewhere:

Screenshot of the issued API token

From there, you can come back to the Red Hat OpenShift extension in Docker Desktop. In the Deploy to OpenShift screen, you can change or log in to a Kubernetes context. In the login screen, paste the whole oc login line, including the token and server URL from the previous step:

Screenshot of "Login to OpenShift" dialog box
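
For reference, the copied command has this general shape (the token and server values below are placeholders, not real credentials):

oc login --token=sha256~<your-api-token> --server=https://api.your-cluster.example.com:6443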

Your first deploy

Using the extension is even easier than installing it; you just need a containerized app. A sample racing-game-app is ready to go with a Dockerfile (docs) and OpenShift manifest (docs), and it’s a joy to play.

To get started, clone https://github.com/container-demo/HexGL.

👋 Pro tip: Have an app you want to containerize? Docker’s newly introduced docker init command automates the creation of Dockerfiles, Compose manifests, and .dockerignore files.
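
If you want to try it on your own project, it's a single interactive command that walks you through a few prompts (such as your application platform and the port it listens on) and then writes the files for you. The directory name below is a placeholder:

cd your-app/
docker init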

Build your image

Using your local clone of sample racing-game-app, cd into the repo on your command line, where we’ll build the image:

docker build --platform linux/amd64 -t sample-racing-game .

The --platform linux/amd64 flag forces an x86-compatible image, even if you're building from a Mac with an Apple Silicon/ARM CPU. The -t sample-racing-game names and tags your image with something more recognizable than a SHA256. Finally, the trailing dot (“.”) tells Docker to build from the current directory.

Starting with Docker version 23, docker build leverages BuildKit with more performance, better caching, and support for things like build secrets that make our lives as developers easier and more secure.
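
As a quick illustration of build secrets (not needed for this sample app; the secret id and source file here are hypothetical), BuildKit can mount a secret into a single build step so it never ends up in an image layer:

# Dockerfile snippet: the secret is only available to this RUN step, at /run/secrets/npm_token
RUN --mount=type=secret,id=npm_token cat /run/secrets/npm_token

# Build command: supply the secret from a file on the host
docker build --secret id=npm_token,src=$HOME/.npmrc -t sample-racing-game .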

Test the image locally

Now that you’ve built your image, you can test it out locally. Just run the image using:

docker run -d -p 8080:8080 sample-racing-game

And open your browser to http://localhost:8080/ to take a look at the result.

Deploy to OpenShift

With the image built, it’s time to test it out on the cluster. We don’t even have to push it to a registry first.

Visit the Red Hat OpenShift extension in Docker Desktop, select our new sample-racing-game image, and select Push to OpenShift and Deploy from the action button pulldown:

Screenshot of Deploy to OpenShift page with sample-racing-game selected for deployment

The Push to OpenShift and Deploy action will push the image to the OpenShift cluster’s internal private registry without any need to push to another registry first. It’s not a substitute for a private registry, but it’s convenient for iterative development where you don’t plan on keeping the iterative builds.

Once you click the button, the extension will do exactly as you intend and keep you updated about progress:

Screenshot of Deploy to OpenShift page showing deployment progress for sample-racing-game

Finally, the extension will display the URL for the app and attempt to open your default browser to show it off:

Screenshot of racing game demo app start page with blue and orange racing graphic

Cleanup

Once you're done with your app, or before attempting to redeploy it, use the web terminal to remove all traces of your deployment with this one-liner:

oc get all -oname | grep sample-racing-game | xargs oc delete
Screenshot of web terminal commands showing deleted files

That avoids unnecessary resource usage and prevents errors if you try to redeploy to the same namespace later.

So…what did we learn?

The Red Hat OpenShift extension gives you a one-click deploy from Docker Desktop to an OpenShift cluster. It’s especially convenient for previewing iterative work in a shared cluster for sprint demos and approval as a way to accelerate iteration.

The above steps showed you how to install and configure the extension, build an image, and deploy it to the cluster with a minimum of clicks. Now show us what you’ll build next with Docker and Red Hat, tag @docker on Twitter or me @misterbisson@techub.social to share!

Learn More

Connect with Docker

Boost Your Local Testing Game with the LambdaTest Tunnel Docker Extension https://www.docker.com/blog/boost-your-local-testing-game-with-lambdatest-tunnel-docker-extension/ Tue, 16 May 2023 14:48:07 +0000 https://www.docker.com/?p=42430 As the demand for web applications continues to rise, so does the importance of testing them thoroughly. One challenge that testers face is how to test applications that are hosted locally on their machines. This is where the LambdaTest Tunnel Docker Extension comes in handy. This extension allows you to establish a secure connection between your local environment and the LambdaTest platform, making it possible to test your locally hosted pages and applications on a remote browser. 

In this article, we’ll explore the benefits of using the LambdaTest Tunnel Docker Extension and describe how it can streamline your testing workflow.

White Docker logo on black background with LambdaTest logo in blue

Overview of LambdaTest Tunnel

LambdaTest Tunnel is a secure and encrypted tunneling feature that allows devs and QAs to test their locally hosted web applications or websites on cloud-based real machines. It establishes a secure connection between the user's local machine and the real machine in the cloud (Figure 1).

By downloading the LambdaTest Tunnel binary, you can securely connect your local machine to LambdaTest cloud servers even when behind corporate firewalls. This allows you to test locally hosted websites or web applications across various browsers, devices, and operating systems available on the LambdaTest platform. Whether your web files are written in HTML, CSS, PHP, Python, or similar languages, you can use LambdaTest Tunnel to test them.
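
For comparison, launching the tunnel manually with the downloaded binary looks roughly like this (the username, access key, and tunnel name are placeholders; check the LambdaTest docs for the exact flags on your platform):

./LT --user your-email@example.com --key your-access-key --tunnelName my-local-tunnel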

Diagram of LambdaTest Tunnel network setup, showing connection from the tunnel client to the API gateway to the LambdaTest's private network with proxy server and browser VMs.
Figure 1: Overview of LambdaTest Tunnel.

Why use LambdaTest Tunnel?

LambdaTest Tunnel offers numerous benefits for web developers, testers, and QA professionals, including a secure and encrypted connection, cross-browser compatibility testing, and localhost testing.

Let’s see the LambdaTest Tunnel benefits one by one:

  • It provides a secure and encrypted connection between your local machine and the virtual machines in the cloud, thereby ensuring the privacy of your test data and online communications.
  • With LambdaTest Tunnel, you can test your web applications, websites, and local folders and files across a wide range of browsers and operating systems without setting up complex and expensive local testing environments.
  • It lets you test your locally hosted web applications or websites on cloud-based real OS machines.
  • You can even run accessibility tests on desktop browsers while testing locally hosted web applications and pages.

Why run LambdaTest Tunnel as a Docker Extension?

With Docker Extensions, you can build and integrate software applications into your daily workflow. Using LambdaTest Tunnel as a Docker extension provides a seamless and hassle-free experience for establishing a secure connection and performing cross-browser testing of locally hosted websites and web applications on the LambdaTest platform without manually launching the tunnel through the command line interface (CLI).

The LambdaTest Tunnel Docker Extension opens up a world of options for your testing workflows by adding a variety of features. Docker Desktop has an easy-to-use one-click installation feature that allows you to use the LambdaTest Tunnel Docker Extension directly from Docker Desktop.

Getting started

Prerequisites: Docker Desktop 4.8 or later and a LambdaTest account. Note: You must ensure the Docker Extensions feature is enabled (Figure 2).

Screenshot of Docker Desktop with Docker Extensions enabled.
Figure 2: Enable Docker Extensions.

Step 1: Install the LambdaTest Docker Extension

In the Extensions Marketplace, search for LambdaTest Tunnel extension and select Install (Figure 3).

Screen shot of Extensions Marketplace, showing blue Install button for LambdaTest Tunnel.
Figure 3: Install LambdaTest Tunnel.

Step 2: Set up the Docker LambdaTest Tunnel

Open the LambdaTest Tunnel extension and select Setup Tunnel to configure the tunnel (Figure 4).

Screenshot of LambdaTest Tunnel setup page.
Figure 4: Configure the tunnel.

Step 3: Enter your LambdaTest credentials

Provide your LambdaTest Username, Access Token, and preferred Tunnel Name. You can get your Username and Access Token from your LambdaTest Profile under Password & Security.
Once you've entered these details, select Launch Tunnel (Figure 5).

Screenshot of LambdaTest Docker Tunnel page with black "Launch Tunnel" button.
Figure 5: Launch LambdaTest Tunnel.

The LambdaTest Tunnel will be launched, and you can see the running tunnel logs (Figure 6).

Screenshot of LambdaTest Docker Tunnel page with list of running tunnels.
Figure 6: Running logs.

Once you have configured the LambdaTest Tunnel via Docker Extension, it should appear on the LambdaTest Dashboard (Figure 7).

Screenshot of new active tunnel in LambdaTest dashboard.
Figure 7: New active tunnel.

Local testing using LambdaTest Tunnel Docker Extension

Let’s walk through a scenario using LambdaTest Tunnel. Suppose a web developer has created a new web application that allows users to upload and view images. The developer needs to ensure that the application can handle a variety of image formats and sizes and that it renders and displays images correctly across different browsers and devices.

To do this, the developer first sets up a local development environment and installs the LambdaTest Tunnel Docker Extension. They then use the web application to open and manipulate local image files.

Next, the developer uses the LambdaTest Tunnel to securely expose their local development environment to the internet. This step allows them to test the application in real-time on different browsers and devices using LambdaTest’s cloud-based digital experience testing platform.

Now let’s see the steps to perform local testing using the LambdaTest Tunnel Docker Extension.

1. Go to the LambdaTest Dashboard and navigate to Real Time Testing > Browser Testing (Figure 8).

Screenshot of LambdaTest Tunnel showing Browser Testing selected.
Figure 8: Navigate to Browser Testing.

2. In the console, enter the localhost URL, select the browser, browser version, operating system, and so on, and select START (Figure 9).

Screenshot of LambdaTest Tunnel page showing browser options to choose from.
Figure 9: Configure testing.

3. A cloud-based real operating system will fire up where you can perform testing of local files or folders (Figure 10).

Screenshot of LambdaTest showing local test example.
Figure 10: Perform local testing.

Learn more about how to set up the LambdaTest Tunnel Docker Extension in the documentation.

Conclusion

The LambdaTest Tunnel Docker Extension makes it easy to perform local testing without launching the tunnel from the CLI. You can run localhost tests over an online cloud grid of 3,000+ real browser and operating system combinations. You don’t have to worry about the challenges of local infrastructure because LambdaTest provides a zero-downtime cloud grid.

Check out the LambdaTest Tunnel Docker Extension on Docker Hub. The LambdaTest Tunnel Docker Extension source code is available on GitHub, and contributions are welcome.

Building a Local Application Development Environment for Kubernetes with the Gefyra Docker Extension  https://www.docker.com/blog/building-a-local-application-development-environment-for-kubernetes-with-the-gefyra-docker-extension/ Wed, 03 May 2023 14:00:00 +0000 https://www.docker.com/?p=42061 If you’re using a Docker-based development approach, you’re already well on your way toward creating cloud-native software. Containerizing your software ensures that you have all the system-level dependencies, language-specific requirements, and application configurations managed in a containerized way, bringing you closer to the environment in which your code will eventually run. 

In complex systems, however, you may need to connect your code with several auxiliary services, such as databases, storage volumes, APIs, caching layers, message brokers, and others. In modern Kubernetes-based architectures, you also have to deal with service meshes and cloud-native deployment patterns, such as probes, configuration, and structural and behavioral patterns. 

Kubernetes offers a uniform interface for orchestrating scalable, resilient, and services-based applications. However, its complexity can be overwhelming, especially for developers without extensive experience setting up Kubernetes clusters. That’s where Gefyra comes in, making it easier for developers to work with Kubernetes and improve the process of creating secure, reliable, and scalable software.

Gefyra and Docker logos on a dark background with a lighter purple outline of two puzzle pieces

What is Gefyra? 

Gefyra, named after the Greek word for “bridge,” is a comprehensive toolkit that facilitates Docker-based development with Kubernetes. If you plan to use Kubernetes as your production platform, it’s essential to work with the same environment during development. This approach ensures that you have the highest possible “dev/prod-parity,” minimizing friction when transitioning from development to production. 

Gefyra is an open source project that provides docker run on steroids. It allows you to connect your local Docker with any Kubernetes cluster and run a container locally that behaves as if it would run in the cluster. You can write code locally in your favorite code editor using the tools you love. 

Additionally, Gefyra does not require you to build a container image from your code changes, push the image to a registry, or trigger a restart in the cluster. Instead, it saves you from this tedious cycle by connecting your local code right into the cluster without any changes to your existing Dockerfile. This approach is useful not only for new code but also when introspecting existing code with a debugger that you can attach to a running container. That makes Gefyra a productivity superstar for any Kubernetes-based development work.

How does Gefyra work?

Gefyra installs several cluster-side components that enable it to control the local development machine and the development cluster. These components include a tunnel between the local development machine and the Kubernetes cluster, a local DNS resolver that behaves like the cluster DNS, and sophisticated IP routing mechanisms. Gefyra uses popular open source technologies, such as Docker, WireGuard, CoreDNS, Nginx, and Rsync, to build on top of these components.

The local development setup involves running a container instance of the application on the developer machine, with a sidecar container called Cargo that acts as a network gateway and provides a CoreDNS server that forwards all requests to the cluster (Figure 1). Cargo encrypts all the passing traffic with WireGuard using ad hoc connection secrets. Developers can use their existing tooling, including their favorite code editor and debuggers, to develop their applications.

Yellow graphic with white text boxes showing development setup, including: IDE, Volumes, Shell, Logs, Debugger, and connection to Gefyra, including App Container and Cargo sidecar container.
Figure 1: Local development setup.

Gefyra manages two ends of a WireGuard connection and automatically establishes a VPN tunnel between the developer and the cluster, making the connection robust and fast without stressing the Kubernetes API server (Figure 2). Additionally, the client side of Gefyra manages a local Docker network with a VPN endpoint, allowing the container to join the VPN that directs all traffic into the cluster.

Yellow graphic with boxes and arrows showing connection between Developer Machine and Developer Cluster.
Figure 2: Connecting developer machine and cluster.

Gefyra also allows bridging existing traffic from the cluster to the local container, enabling developers to test their code with real-world requests from the cluster and collaborate on changes in a team. The local container instance remains connected to auxiliary services and resources in the cluster while receiving requests from other Pods, Services, or the Ingress. This setup eliminates the need for building container images in a continuous integration pipeline and rolling out a cluster update for simple changes.

Why run Gefyra as a Docker Extension?

Gefyra’s core functionality is contained in a Python library available in its repository. The CLI that comes with the project has a long list of arguments that may be overwhelming for some users. To make it more accessible, the Gefyra team developed the Docker Desktop extension, which is easy for developers to use without having to delve into the intricacies of Gefyra.

The Gefyra extension for Docker Desktop enables developers to work with a variety of Kubernetes clusters, including the built-in Kubernetes cluster, local providers such as Minikube, K3d, or Kind, Getdeck Beiboot, or any remote clusters. Let’s get started.

Installing the Gefyra Docker Desktop

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Initial setup

In Docker Desktop, confirm that the Docker Extensions feature is enabled. (Docker Extensions should be enabled by default.) In Settings | Extensions select the Enable Docker Extensions box (Figure 3).

 Screenshot showing Docker Desktop interface with "Enable Docker Extensions" selected.
Figure 3: Enable Docker Extensions.

You must also enable Kubernetes under Settings (Figure 4).

Screenshot of Docker Desktop with "Enable Kubernetes" and "Show system containers (advanced)" selected.
Figure 4: Enable Kubernetes.

Gefyra is in the Docker Extensions Marketplace. In the following instructions, we’ll install Gefyra in Docker Desktop. 

Step 2: Add the Gefyra extension

Open Docker Desktop and select Add Extensions to find the Gefyra extension in the Extensions Marketplace (Figure 5).

Screenshot showing search for "Gefyra" in Docker Extensions Marketplace.
Figure 5: Locate Gefyra in the Docker Extensions Marketplace.

Once Gefyra is installed, you can open the extension and find the start screen of Gefyra that lists all containers that are connected to a Kubernetes cluster. Of course, this section is empty on a fresh install.
To launch a local container with Gefyra, just like with Docker, you need to click on the Run Container button at the top right (Figure 6).

 Screenshot showing Gefyra start screen.
Figure 6: Gefyra start screen.

The next steps will vary based on whether you’re working with a local or remote Kubernetes cluster. If you’re using a local cluster, simply select the matching kubeconfig file and optionally set the context (Figure 7). 

For remote clusters, you may need to manually specify additional parameters. Don’t worry if you’re unsure how to do this, as the next section will provide a detailed example for you to follow along with.

Screenshot of Gefyra interface showing blue "Choose Kubeconfig" button.
Figure 7: Selecting Kubeconfig.

The Kubernetes demo workloads

The following example showcases how Gefyra leverages the Kubernetes functionality included in Docker Desktop to create a development environment for a simple application that consists of two services — a backend and a frontend (Figure 8). 

Both services are implemented as Python processes, and the frontend service uses a color property obtained from the backend to generate an HTML document. Communication between the two services is established via HTTP, with the backend address being passed to the frontend as an environment variable.

 Yellow graphic showing connection of frontend and backend services.
Figure 8: Frontend and backend services.

The Gefyra team has created a repository for the Kubernetes demo workloads, which can be found on GitHub.

If you prefer to watch a video explaining what’s covered in this tutorial, check out this video on YouTube.

Prerequisite

Ensure that the current Kubernetes context is switched to Docker Desktop. This step allows the user to interact with the Kubernetes cluster and deploy applications to it using kubectl.

kubectl config current-context
docker-desktop

Clone the repository

The next step is to clone the repository:

git clone https://github.com/gefyrahq/gefyra-demos

Applying the workload

The following YAML file sets up a simple two-tier app consisting of a backend service and a frontend service with communication between the two services established via the SVC_URL environment variable passed to the frontend container. 

It defines two pods, named backend and frontend, and two services, named backend and frontend, respectively. The backend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-backend image on port 5002. The frontend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-frontend image on port 5003. The frontend container also includes an environment variable named SVC_URL, which is set to the value backend.default.svc.cluster.local:5002.

The backend service is defined to select the backend pod using the app: backend label, and expose port 5002. The frontend service is defined to select the frontend pod using the app: frontend label, and expose port 80 as a load balancer, which routes traffic to port 5003 of the frontend container.
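
A condensed sketch of such a manifest, reconstructed from the description above (the repository's manifests/demo.yaml is the authoritative version and may differ in details):

apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    app: backend
spec:
  containers:
    - name: backend
      image: quay.io/gefyra/gefyra-demo-backend
      ports:
        - containerPort: 5002
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: quay.io/gefyra/gefyra-demo-frontend
      ports:
        - containerPort: 5003
      env:
        - name: SVC_URL
          value: backend.default.svc.cluster.local:5002
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 5002
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 5003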

/gefyra-demos/kcd-munich> kubectl apply -f manifests/demo.yaml
pod/backend created
pod/frontend created
service/backend created
service/frontend created

Let’s watch the workload getting ready:

kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
backend    1/1     Running   0          2m6s
frontend   1/1     Running   0          2m6s

After ensuring that the backend and frontend pods have finished initializing (check for the READY column in the output), you can access the application by navigating to http://localhost in your web browser. This URL is served from the Kubernetes environment of Docker Desktop. 

Upon loading the page, you will see the application’s output displayed in your browser. Although the output may not be visually stunning, it is functional and provides everything we need for this walkthrough.

Blue bar displaying "Hello World" in black text.

Now, let’s explore how we can correct or adjust the color of the output generated by the frontend component.

Using Gefyra “Run Container” with the frontend process

In the first part of this section, you will see how to execute a frontend process on your local machine that is associated with a resource based on the Kubernetes cluster: the backend API. This can be anything ranging from a database to a message broker or any other service utilized in the architecture.

Kick off a local container with Run Container from the Gefyra start screen (Figure 9).

Screenshot of Gefyra interface showing blue "Run container" button.
Figure 9: Run a local container.

Once you’ve entered the first step of this process, you will find the kubeconfig and context to be set automatically. That’s a lifesaver if you don’t know where to find the default kubeconfig on your host.

Just hit the Next button and proceed with the container settings (Figure 10).

Screenshot of Gefyra interface showing the "Set Kubernetes Settings" step.
Figure 10: Container settings.

In the Container Settings step, you can configure the Kubernetes-related parameters for your local container. In this example, everything happens in the default Kubernetes namespace. Select it in the first drop-down input (Figure 11). 

In the drop-down input below Image, you can specify the image to run locally. Note that it lists all images that are being used in the selected namespace (from the Namespace selector). Isn’t that convenient? You don’t need to worry about the images being used in the cluster or find them yourself. Instead, you get a suggestion to work with the image at hand, as we want to do in this example (Figure 12). You could still specify any arbitrary images if you like, for example, a completely new image you just built on your machine.

Screenshot of Gefyra interface showing "Select a Workload" drop-down menu under Container Settings.
Figure 11: Select namespace and workload.
Screenshot of Gefyra interface showing drop-down menu of images.
Figure 12: Select image to run.

To copy the environment of the frontend container running in the cluster, you will need to select pod/frontend from the Copy Environment From selector (Figure 13). This step is important because you need the backend service address, which is passed to the pod in the cluster using an environment variable.

Finally, for the upper part of the container settings, you need to overwrite the container image’s run command with the following to enable code reloading:

poetry run flask --app app --debug run --port 5002 --host 0.0.0.0
Screenshot of Gefyra interface showing selection of  “pod/frontend” under “Copy Environment From."
Figure 13: Copy environment of frontend container.

Let’s start the container process on port 5002 and expose this port on the local machine. In addition, let’s mount the code directory (/gefyra-demos/kcd-munich/frontend) to make code changes immediately visible. That’s it for now. A click on the Run button starts the process.

Screenshot of Gefyra interface showing installation progress bar.
Figure 14: Installing Gefyra components.

It takes a few seconds to install Gefyra’s cluster-side components, prepare the local networking part, and pull the container image to start locally (Figure 14). Once this is ready, you will get redirected to the native container view of Docker Desktop from this container (Figure 15).

Screenshot showing native container view of Docker Desktop.
Figure 15: Log view.

You can look around in the container using the Terminal tab (Figure 16). Type in the env command in the shell, and you will see all the environment variables coming with Kubernetes.

Screenshot showing Terminal view of running container.
Figure 16: Terminal view.

We’re particularly interested in the SVC_URL variable that points the frontend to the backend process, which is, of course, still running in the cluster. Now, when browsing to the URL http://localhost:5002, you will get a slightly different output:

Blue bar displaying "Hello KCD" in black text

Why is that? Let’s look at the code that we already mounted into the local container, specifically the app.py that runs a Flask server (Figure 17).

Screenshot of colorful app.py code on black background.
Figure 17: App.py code.

The last line of the code in the Gefyra example displays the text Hello KCD!, and any changes made to this code are immediately updated in the local container. This feature is noteworthy because developers can freely modify the code and see the changes reflected in real-time without having to rebuild or redeploy the container.

Line 12 of the code in the Gefyra example sends a request to a service URL, which is stored in the variable SVC. The value of SVC is read from an environment variable named SVC_URL, which is copied from the pod in the Kubernetes cluster. The URL, backend.default.svc.cluster.local:5002, is a fully qualified domain name (FQDN) that points to a Kubernetes service object and a port. 

These URLs are commonly used by applications in Kubernetes to communicate with each other. The local container process is capable of sending requests to services running in Kubernetes using the native connection parameters, without the need for developers to make any changes, which may seem like magic at times.
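
To make that concrete, here is a minimal sketch of what such a frontend looks like in Flask (illustrative only; the names below are not necessarily the repo's exact code):

import os

import requests
from flask import Flask

app = Flask(__name__)
# Copied from the pod in the cluster, e.g. backend.default.svc.cluster.local:5002
SVC = os.environ["SVC_URL"]


@app.route("/")
def index():
    # Ask the backend (still running in the cluster) which color to render
    color = requests.get(f"http://{SVC}/color").json()["color"]
    return f"<h1 style='background-color: {color}'>Hello KCD!</h1>"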

In most development scenarios, the capabilities of Gefyra we just discussed are sufficient. In other words, you can use Gefyra to run a local container that can communicate with resources in the Kubernetes cluster, and you can access the app on a local port. However, what if you need to modify the backend while the frontend is still running in Kubernetes? This is where the “bridge” feature of Gefyra comes in, which we will explore next.

Gefyra “bridge” with the backend process

We could choose to run the frontend process locally and connect it to the backend process running in Kubernetes through a bridge. However, this approach may not always be necessary or desirable, especially for backend developers who may not be interested in the frontend. In this case, it may be more convenient to leave the frontend running in the cluster and stop the local instance by selecting the stop button in Docker Desktop’s container view.

First of all, we have to run a local instance of the backend service. It’s the same as with the frontend, but this time with the backend container image (Figure 18).

Screenshot of Gefyra interface showing "pod/backend" setup.
Figure 18: Running a backend container image.

Compared to the frontend example from above, you can run the backend container image (quay.io/gefyra/gefyra-demo-backend:latest), which is suggested by the drop-down selector. This time we need to copy the environment from the backend pod running in Kubernetes. Note that the volume mount is now set to the code of the backend service to make it work.

After starting the container, you can check http://localhost:5002/color, which serves the backend API response. Looking at the app.py of the backend service shows the source of this response. In line 8, this app returns a JSON response with the color property set to green (Figure 19).

Screenshot showing app.py code with "color" set to "green".
Figure 19: Checking the color.
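
For orientation, a backend endpoint like the one described would look roughly like this in Flask (illustrative, not the repo's exact code):

from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/color")
def color():
    # Change "green" to any valid CSS color and the bridged frontend picks it up immediately
    return jsonify(color="green")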

At this point, keep in mind that we’re only running a local instance of the backend service. This time, a connection to a Kubernetes-based resource is not needed as this container runs without any external dependency.

The idea is to make the frontend process that serves from the Kubernetes cluster on http://localhost (still blue) pick up our backend information to render its output. That’s done using Gefyra’s bridge feature. In the next step, we will overlay the backend process running in the cluster with our local container instance so that the local code becomes effective in the cluster.

Getting back to the Gefyra container list on the start screen, you can find the Bridge column on each locally running container (Figure 20). Once you click this button, you can create a bridge of your local container into the cluster.

Screenshot of Gefyra interface showing "Bridge" column on far right.
Figure 20: The Bridge column is visible on the far right.

In the next dialog, we need to enter the bridge configuration (Figure 21).

Screenshot of Gefyra interface showing Bridge Settings.
Figure 21: Enter the bridge configuration.

Let’s set the “Target” for the bridge to the backend pod, which is currently serving the frontend process in the cluster, and set a timeout for the bridge to 60 seconds. We also need to map the port of the proxy running in the cluster with the local instance. 

If your local container is configured to listen on a different port from the cluster, you can specify the mapping here (Figure 22). In this example, the service is running on port 5003 in both the cluster and on the local machine, so we need to map that port. After clicking the Bridge button, it takes a few seconds to return to the container list on Gefyra’s start view.

 Screenshot of Gefyra interface showing port mapping configuration.
Figure 22: Specify port mapping.

Observe the change in the icon of the Bridge button, which now depicts a stop symbol (Figure 23). This means the bridge function is now operational and can be terminated by simply clicking this button again.

Screenshot of Gefyra showing closeup view of Bridge column and blue stop button.
Figure 23: The Bridge column showing a stop symbol.

At this point, the local code is able to handle requests from the frontend process in the cluster by using the URL stored in the SVC_URL variable, without making any changes to the frontend process itself. To confirm this, you can open http://localhost in your browser (which is served from the Kubernetes of Docker Desktop) and check that the output is now green. This is because the local code is returning the value green for the color property. You can change this value to any valid one in your IDE, and it will be immediately reflected in the cluster. This is the amazing power of this tool.

Remember to release the bridge of your container once you are finished making changes to your backend. This resets the cluster to its original state, and the frontend will display the original “beautiful” blue H1 again. With this approach, we intercepted containers running in Kubernetes with our local code and released that intercept afterwards, without making any changes to the Kubernetes cluster itself.

Conclusion

Gefyra is an easy-to-use Docker Desktop extension that connects with Kubernetes to improve development workflows and team collaboration. It lets you run containers as usual while being connected with Kubernetes, thereby saving time and ensuring high dev/prod parity. 

The Blueshoe development team would appreciate a star on GitHub and welcomes you to join their Discord community for more information.

About the Author

Michael Schilonka is a strong believer that Kubernetes can be a software development platform, too. He is the co-founder and managing director of the Munich-based agency Blueshoe and the technical lead of Gefyra and Getdeck. He speaks about Kubernetes in general and about how his team uses Kubernetes for development. Follow him on LinkedIn to stay connected.

Docker and Ambassador Labs Announce Telepresence for Docker, Improving the Kubernetes Development Experience https://www.docker.com/blog/telepresence-for-docker/ Thu, 23 Mar 2023 16:01:46 +0000 https://www.docker.com/?p=41437

I’ve been a long-time user and avid fan of both Docker and Kubernetes, and have many happy memories of attending the Docker Meetups in London in the early 2010s. I closely watched as Docker revolutionized the developers’ container-building toolchain and Kubernetes became the natural target to deploy and run these containers at scale.

Today we’re happy to announce Telepresence for Docker, simplifying how teams develop and test on Kubernetes for faster app delivery. Docker and Ambassador Labs both help cloud-native developers to be super-productive, and we’re excited about this partnership to accelerate the developer experience on Kubernetes.

telepresence new2

What exactly does this mean?

  • When building with Kubernetes, you can now use Telepresence alongside the Docker toolchain you know and love.
  • You can buy Telepresence directly from Docker, and log in to Ambassador Cloud using your Docker ID and credentials.
  • You can get installation and product support from your current Docker support and services team.

Kubernetes development: Flexibility, scale, complexity

Kubernetes revolutionized the platform world, providing operational flexibility and scale for most organizations that have adopted it. But Kubernetes also introduces complexity when configuring local development environments.

We know you like building applications using your own local tools, where the feedback is instant, you can iterate quickly, and the environment you’re working in mirrors production. This combination increases velocity and reduces the time to successful deployment. But you can face slow and painful development and troubleshooting obstacles when trying to integrate and test code in a real-world application running on Kubernetes. You end up having to replicate all of the services locally or remotely to test changes, which requires you to know about Kubernetes and the services built by others. The result, which we’ve seen at many organizations, is siloed teams, deferred deployments, and delayed organizational time to value.

Bridging remote environments with local development toolchains

Telepresence for Docker seamlessly bridges local dev machines to remote dev and staging Kubernetes clusters, so you don’t have to manage the complexity of Kubernetes, be a Kubernetes expert, or worry about consuming laptop resources when deploying large services locally.

The remote-to-local approach helps your teams to quickly collaborate and iterate on code locally while testing the effects of those code changes interactively within the full context of your distributed application. This way, you can work locally on services using the tools you know and love while also being connected to a remote Kubernetes cluster.

How does Telepresence for Docker work?

Telepresence for Docker works by running a traffic manager pod in Kubernetes and Telepresence client daemons on developer workstations. As shown in the following diagram, the traffic manager acts as a two-way network proxy that can intercept connections and route traffic between the cluster and containers running on developer machines.

Illustration showing the traffic manager acting as a two-way network proxy that can intercept connections and route traffic between the cluster and containers running on developer machine.

Once you have connected your development machine to a remote Kubernetes cluster, you have several options for how the local containers can integrate with the cluster. These options are based on the concepts of intercepts, where Telepresence for Docker can re-route — or intercept — traffic destined to and from a remote service to your local machine. Intercepts enable you to interact with an application in a remote cluster and see the results from the local changes you made on an intercepted service.

Here’s how you can use intercepts:

  • No intercepts: The most basic integration involves no intercepts at all, simply establishing a connection between the container and the cluster. This enables the container to access cluster resources, such as APIs and databases.
  • Global intercepts: You can set up global intercepts for a service. This means all traffic for a service will be re-routed from Kubernetes to your local container.
  • Personal intercepts: The more advanced alternative to global intercepts is personal intercepts. Personal intercepts let you define conditions for when a request should be routed to your local container. The conditions could be anything from only routing requests that include a specific HTTP header, to requests targeting a specific route of an API.

Benefits for platform teams: Reduce maintenance and cloud costs

On top of increasing the velocity of individual developers and development teams, Telepresence for Docker also enables platform engineers to maintain a separation of concerns (and provide appropriate guardrails). Platform engineers can define, configure, and manage shared remote clusters that multiple Telepresence for Docker users can interact within during their day-to-day development and testing workflows. Developers can easily intercept or selectively reroute remote traffic to the service on their local machine, and test (and share with stakeholders) how their current changes look and interact with remote dependencies.

Compared to static staging environments, this offers a simple way to connect local code into a shared dev environment and fuels easy, secure collaboration with your team or other stakeholders. Instead of provisioning cloud virtual machines for every developer, this approach offers a more cost-effective way to have a shared cloud development environment.

Get started with Telepresence for Docker today

We’re excited that the Docker and Ambassador Labs partnership brings Telepresence for Docker to the 12-million-strong (and growing) community of registered Docker developers. Telepresence for Docker is available now. Keep using the local tools and development workflow you know and love, but with faster feedback, easier collaboration, and reduced cloud environment costs.

You can quickly get started with your Docker ID, or contact us to learn more.

Announcing Docker+Wasm Technical Preview 2 https://www.docker.com/blog/announcing-dockerwasm-technical-preview-2/ Wed, 22 Mar 2023 16:12:16 +0000 https://www.docker.com/?p=41468 We recently announced the first Technical Preview of Docker+Wasm, a special build that makes it possible to run Wasm containers with Docker using the WasmEdge runtime. Starting from version 4.15, everyone can try out the features by activating the containerd image store experimental feature.

We didn’t want to stop there, however. Since October, we’ve been working with our partners to make running Wasm workloads with Docker easier and to support more runtimes.

Now we are excited to announce a new Technical Preview of Docker+Wasm with the following three new runtimes:

  • wasmtime
  • slight
  • spin

All of these runtimes, including WasmEdge, use the runwasi library.

What is runwasi?

Runwasi is a multi-company effort to build a Rust library that makes it easier to write containerd shims for Wasm workloads. Last December, the runwasi project was donated and moved to the Cloud Native Computing Foundation’s containerd organization on GitHub.

With a lot of work from people at Microsoft, Second State, Docker, and others, we now have enough features in runwasi to run Wasm containers with Docker or in a Kubernetes cluster. There is still a lot of work to do, but it’s ready for people to start testing.

If you would like to chat with us or other runwasi maintainers, join us on the CNCF’s #runwasi channel.

Get the update

Ready to dive in and try it for yourself? Great! Before you do, understand that this is a technical preview build of Docker Desktop, so things might not work as expected. Be sure to back up your containers and images before proceeding.

Download and install the appropriate version for your system, then activate the containerd image store (Settings > Features in development > Use containerd for pulling and storing images), and you’ll be ready to go.

Features in development screen with "Use containerd" option selected.
Figure 1: Docker Desktop beta features in development.

Let’s take Wasm for a spin 

The WasmEdge runtime is still present in Docker Desktop, so you can run: 

$ docker run --rm --runtime=io.containerd.wasmedge.v1 \
  --platform=wasi/wasm secondstate/rust-example-hello:latest
Hello WasmEdge!

You can even run the same image with the wasmtime runtime:

$ docker run --rm --runtime=io.containerd.wasmtime.v1 \
  --platform=wasi/wasm secondstate/rust-example-hello:latest
Hello WasmEdge!

In the next example, we will deploy a Wasm workload to Docker Desktop’s Kubernetes cluster using the slight runtime. To begin, make sure to activate Kubernetes in Docker Desktop’s settings, then create an example.yaml file:

cat > example.yaml <<EOT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-slight
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasm-slight
  template:
    metadata:
      labels:
        app: wasm-slight
    spec:
      runtimeClassName: wasmtime-slight-v1
      containers:
        - name: hello-slight
          image: dockersamples/slight-rust-hello:latest
          command: ["/"]
          resources:
            requests:
              cpu: 10m
              memory: 10Mi
            limits:
              cpu: 500m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: wasm-slight
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: wasm-slight
EOT

Note the runtimeClassName; Kubernetes will use this to select the right runtime for your application.
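
For context, runtimeClassName refers to a RuntimeClass object registered in the cluster. The Docker+Wasm technical preview sets these up for you, but a sketch of what such an object looks like (the handler value here is an assumption) is:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-slight-v1
handler: slight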

You can now run:

$ kubectl apply -f example.yaml

Once Kubernetes has downloaded the image and started the container, you should be able to curl it:

$ curl localhost/hello
hello

You now have a Wasm container running locally in Kubernetes. How exciting! 🎉

Note: You can take this same yaml file and run it in AKS.

Now let’s see how we can use this to run Bartholomew. Bartholomew is a micro-CMS made by Fermyon that works with the spin runtime. You’ll need to clone this repository; it’s a slightly modified Bartholomew template.

The repository already contains a Dockerfile that you can use to build the Wasm container:

FROM scratch
COPY . .
ENTRYPOINT [ "/modules/bartholomew.wasm" ]

The Dockerfile copies all the contents of the repository into the image and defines the built bartholomew.wasm module as the entry point of the image.

$ cd docker-wasm-bartholomew
$ docker build -t my-cms .
[+] Building 0.0s (5/5) FINISHED
 => [internal] load build definition from Dockerfile          	0.0s
 => => transferring dockerfile: 147B                          	0.0s
 => [internal] load .dockerignore                             	0.0s
 => => transferring context: 2B                               	0.0s
 => [internal] load build context                             	0.0s
 => => transferring context: 2.84kB                           	0.0s
 => CACHED [1/1] COPY . .                                     	0.0s
 => exporting to image                                        	0.0s
 => => exporting layers                                       	0.0s
 => => exporting manifest sha256:cf85929e5a30bea9d436d447e6f2f2e  0.0s
 => => exporting config sha256:0ce059f2fe907a91a671f37641f4c5d73  0.0s
 => => naming to docker.io/library/my-cms:latest              	0.0s
 => => unpacking to docker.io/library/my-cms:latest           	0.0s

You are now ready to run your first WebAssembly micro-CMS:

$ docker run --runtime=io.containerd.spin.v1 -p 3000:80 my-cms

If you go to http://localhost:3000, you should be able to see the Bartholomew landing page (Figure 2).

The Bartholomew landing page showing an example homepage written in Markdown.
Figure 2: Bartholomew landing page.

We’d love your feedback

All of this work is fresh from the oven and relies on the containerd image store in Docker, which is an experimental feature we’ve been working on for almost a year now. The good news is that we already see how this hard work can benefit everyone by adding more features to Docker. We’re still working on it, so let us know what you need. 

If you want to help us shape the future of WebAssembly with Docker, try it out, let us know what you think, and leave feedback on our public roadmap.

February Extensions: Easily Connect Local Containers to a Kubernetes Cluster and More https://www.docker.com/blog/new-docker-extensions-february-2023/ Tue, 28 Feb 2023 15:00:00 +0000 https://www.docker.com/?p=40846

Although February is the shortest month of the year, we’ve been busy at Docker and we have new Docker Extensions to share with you. Docker extensions build new functionality into Docker Desktop, extend its existing capabilities, and allow you to discover and integrate additional tools that you’re already using with Docker. Let’s look at the exciting new extensions from February.

And, if you’d like to see everything that’s available, check out our full Extensions Marketplace.

banner 2023 feb extensions roundup

Get visibility on your Kubernetes cluster 

Do you need to harden your Kubernetes cluster but lack the visibility to do so? With the Kubescape extension for Docker Desktop, you can secure your Kubernetes cluster and gain insight into your cluster’s security posture via an easy-to-use online dashboard.

The Kubescape extension works by installing the Kubescape in-cluster components, connecting them to the ARMO platform and providing insights into the Kubernetes cluster deployed by Docker Desktop via the dashboard on the ARMO platform.

With the Kubescape extension, you can:

  • Regularly scan your configurations and images
  • Visualize your RBAC rules
  • Receive automatic fix suggestions where applicable

Read Secure Your Kubernetes Clusters with the Kubescape Docker Extension to learn more.

kubescape extension docker setup

Connect your local containers to any Kubernetes cluster

Do you need a fast and dependable way to connect your local containers to any Kubernetes cluster? With Gefyra for Docker Desktop, you can easily bridge running containers into Kubernetes clusters. Gefyra aims to ease the burdens of Kubernetes-based development for developers with a seamless integration as an extension in Docker Desktop. 

The Gefyra extension lets you run a container locally and connect it to a Kubernetes cluster so you can:

  • Talk to other services
  • Let other services talk to your local container
  • Debug
  • Achieve faster iterations — no build/push/deploy/repeat
The Gefyra extension setup process in Docker Desktop, including kubernetes settings, container settings, and container start.

Deploy Alfresco using Docker containers

The Alfresco Docker extension simplifies deploying the Alfresco Digital Business Platform for testing purposes. This extension provides a single Run button in the UI to run all the containers required behind the scenes, so you can spend less time configuring and more time building and testing your product.

With the Alfresco extension on Docker Desktop, you can:

  • Pull latest Alfresco Docker images
  • Run Alfresco Docker containers
  • Use Alfresco deployment locally in your browser
  • Stop deployment and recover your system to initial status
alfresco deployment tool docker desktop

Easily deploy and test NebulaGraph

NebulaGraph is a popular open source graph database that can handle large volumes of data with millisecond latency, scale up quickly, and perform fast graph analytics. With the NebulaGraph extension on Docker Desktop, you can test, learn, and develop on top of the distributed version of NebulaGraph core, in one click.

nebulargraph extension docker setup

Check out the latest Docker Extensions with Docker Desktop

Docker is always looking for ways to improve the developer experience. Check out these resources for more information on Docker Extensions:

Secure Your Kubernetes Clusters with the Kubescape Docker Extension https://www.docker.com/blog/secure-kubernetes-with-kubescape-extension/ Tue, 21 Feb 2023 15:00:00 +0000 https://www.docker.com/?p=40587 Container adoption in enterprises continues to grow, and Kubernetes has become the de facto standard for deploying and operating containerized applications. At the same time, security is shifting left and should be addressed earlier in the software development lifecycle (SDLC). Security has morphed from being a static gateway at the end of the development process to something that (ideally) is embedded every step of the way. This can potentially increase the effort for engineering and DevOps teams.

kubescape extension banner

Kubescape, a CNCF project initially created by ARMO, is intended to solve this problem. Kubescape provides a self-service, simple, and easily actionable security solution that meets developers where they are: Docker Desktop.

What is Kubescape?

Kubescape is an open source Kubernetes security platform for your IDE, CI/CD pipelines, and clusters.

Kubescape includes risk analysis, security compliance, and misconfiguration scanning. Targeting all security stakeholders, Kubescape offers an easy-to-use CLI interface, flexible output formats, and automated scanning capabilities. Kubescape saves Kubernetes users and admins time, effort, and resources.

How does Kubescape work?

Security researchers and professionals codify best practices in controls: preventative, detective, or corrective measures that can be taken to avoid — or contain — a security breach. These are grouped in frameworks by government and non-profit organizations such as the US Cybersecurity and Infrastructure Security Agency, MITRE, and the Center for Internet Security.

Kubescape contains a library of security controls that codify Kubernetes best practices derived from the most prevalent security frameworks in the industry. These controls can be run against a running cluster or manifest files under development. They’re written in Rego, the purpose-built declarative policy language that supports Open Policy Agent (OPA).

Kubescape is commonly used as a command-line tool. It can be used to scan code manually or can be triggered by an IDE integration or a CI tool. By default, the CLI results are displayed in a console-friendly manner, but they can be exported to JSON or JUnit XML, rendered to HTML or PDF, or submitted to ARMO Platform (a hosted backend for Kubescape).
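
A couple of typical invocations look like this (the output path is a placeholder):

# Scan the current cluster against the NSA framework and print results to the console
kubescape scan framework nsa

# Scan local manifest files and export the results as JSON
kubescape scan *.yaml --format json --output results.json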

kubescape command line interface diagram

Regular scans can be run using an in-cluster operator, which also enables the scanning of container images for known vulnerabilities.

kubescape scan in cluster operator

Why run Kubescape as a Docker extension?

Docker extensions are fundamental for building and integrating software applications into daily workflows. With the Kubescape Docker Desktop extension, engineers can easily shift security left without changing work habits.

The Kubescape Docker Desktop extension helps developers adopt security hygiene as early as the first lines of code. As shown in the following diagram, Kubescape enables engineers to adopt security as they write code during every step of the SDLC.

Specifically, the Kubescape in-cluster component triggers periodic scans of the cluster and shows results in ARMO Platform. Findings shown in the dashboard can be further explored, and the extension provides users with remediation advice and other actionable insights.

kubescape scan in cluster operator

Installing the Kubescape Docker extension

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Initial setup

In Docker Desktop, confirm that the Docker Extensions feature is enabled. (Docker Extensions should be enabled by default.)  In Settings | Extensions select the Enable Docker Extensions box.

step one enable kubescape extension

You must also enable Kubernetes under Preferences.

step one enable kubernetes

Kubescape is in the Docker Extensions Marketplace. 

In the following instructions, we’ll install Kubescape in Docker Desktop. After the extension runs its automatic scan, the results will be shown in ARMO Platform.

Step 2: Add the Kubescape extension

Open Docker Desktop and select Add Extensions to find the Kubescape extension in the Extensions Marketplace.

step two add kubescape extension

Step 3: Installation

Install the Kubescape Docker Extension.

step three install kubescape extension

Step 4: Register and deploy

Once the Kubescape Docker Extension is installed, you’re ready to deploy Kubescape.

step four register deploy kubescape select provider

Currently, the only hosting provider available is ARMO Platform. We’re looking forward to adding more soon.

step four register deploy kubescape sign up

To link up your cluster, the hosting provider requires an ARMO account.

step four register deploy kubescape connect armo account

After you’ve linked your account, you can deploy Kubescape.

step four register deploy kubescape deploy

Accessing the dashboard

Once your cluster is deployed, you can view the scan output on your host (ARMO Platform) and start improving your cluster’s security posture immediately.

armo scan dashboard

Security compliance

One step to improve your cluster’s security posture is to protect against the threats posed by misconfigurations.

ARMO Platform will display any misconfigurations in your YAML, offer information about severity, and provide remediation advice. These scans can be run against one or more of the frameworks offered and can run manually or be scheduled to run periodically.

armo scan misconfigurations

Vulnerability scanning

Another step to improve your cluster’s security posture is protecting against threats posed by vulnerabilities in images.

The Kubescape vulnerability scanner scans the container images in the cluster right after the first installation and uploads the results to ARMO Platform. Kubescape’s vulnerability scanner supports the ability to scan new images as they are deployed to the cluster. Scans can be carried out manually or periodically, based on configurable cron jobs.

armo kubescape vulnerability scanner

RBAC Visualization

With ARMO Platform, you can also visualize Kubernetes RBAC (role-based access control), which allows you to dive deep into account access controls. The visualization makes pinpointing over-privileged accounts easy, and you can reduce your threat landscape with well-defined privileges. The following example shows a subject with all privileges granted on a resource.
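
If you want to cross-check what the visualization shows, plain kubectl can answer the same question for a single identity; for instance (example-sa is a hypothetical service account name):

# List the actions a given service account is allowed to perform
kubectl auth can-i --list --as=system:serviceaccount:default:example-sa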

armo kubescape rbac visualizer

Kubescape, using ARMO Platform as a portal for additional inquiry and investigation, helps you strengthen and maintain your security posture.

Next steps

The Kubescape Docker extension brings security to where you’re working. Kubescape enables you to shift security to the beginning of the development process by enabling you to implement security best practices from the first line of code. You can use the Kubescape CLI to get insights, or export them to ARMO Platform for easy review and remediation advice.

Give the Kubescape Docker extension a try, and let us know what you think at cncf-kubescape-users@lists.cncf.io.

Enable No-Code Kubernetes with the harpoon Docker Extension https://www.docker.com/blog/no-code-kubernetes-harpoon-docker-extension/ Wed, 01 Feb 2023 15:00:00 +0000 https://www.docker.com/?p=40167 (This post is co-written by Dominic Holt, Founder & CEO of harpoon.)

No-code deploy Kubernetes with the harpoon Docker Extension.

Kubernetes has been a game-changer for ensuring scalable, high availability container orchestration in the Software, DevOps, and Cloud Native ecosystems. While the value is great, it doesn’t come for free. Significant effort goes into learning Kubernetes and all the underlying infrastructure and configuration necessary to power it. Still more effort goes into getting a cluster up and running that’s configured for production with automated scalability, security, and cluster maintenance.

All told, Kubernetes can take an incredible amount of effort, and you may end up wondering if there’s an easier way to get all the value without all the work.

Meet harpoon

With harpoon, anyone can provision a Kubernetes cluster and deploy their software to the cloud without writing code or configuration. Get your software up and running in seconds with a drag and drop interface. When it comes to monitoring and updating your software, harpoon handles that in real-time to make sure everything runs flawlessly. You’ll be notified if there’s a problem, and harpoon can re-deploy or roll back your software to ensure a seamless experience for your end users. harpoon does this dynamically for any software — not just a small, curated list.

To run your software on Kubernetes in the cloud, just enter your credentials and click the start button. In a few minutes, your production environment will be fully running with security baked in. Adding any software is as simple as searching for it and dragging it onto the screen. Want to add your own software? Connect your GitHub account with only a couple clicks and choose which repository to build and deploy in seconds with no code or complicated configurations.

harpoon enables you to do everything you need, like logging and monitoring, scaling clusters, creating services and ingress, and caching data in seconds with no code. harpoon makes DevOps attainable for anyone, leveling the playing field by delivering your software to your customers at the same speed as the largest and most technologically advanced companies at a fraction of the cost.

The architecture of harpoon

harpoon works in a hybrid SaaS model and runs on top of Kubernetes itself, which hosts the various microservices and components that form the harpoon enterprise platform. This is what you interface with when you’re dragging and dropping your way to nirvana. By providing cloud service provider credentials to an account owned by you or your organization, harpoon uses Terraform to provision all of the underlying virtual infrastructure in your account, including your own Kubernetes cluster. In this way, you have complete control over all of your infrastructure and clusters.

The architecture for harpoon to no-code deploy Kubernetes to AWS.

Once fully provisioned, harpoon’s UI can send commands to various harpoon microservices in order to communicate with your cluster and create Kubernetes deployments, services, configmaps, ingress, and other key constructs.

If the cloud’s not for you, we also offer a fully on-prem, air-gapped version of harpoon that can be deployed essentially anywhere.

Why harpoon?

Building production software environments is hard, time-consuming, and costly, with average costs to maintain often starting at $200K for an experienced DevOps engineer and going up into the tens of millions for larger clusters and teams. Using harpoon instead of writing custom scripts can save hundreds of thousands of dollars per year in labor costs for small companies and millions per year for mid to large size businesses.

Using harpoon will enable your team to have one of the highest quality production environments available in mere minutes. Without writing any code, harpoon automatically sets up your production environment in a secure environment and enables you to dynamically maintain your cluster without any YAML or Kubernetes expertise. Better yet, harpoon is fun to use. You shouldn’t have to worry about what underlying technologies are deploying your software to the cloud. It should just work. And making it work should be simple. 

Why run harpoon as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With the harpoon Docker Extension, you can simplify the deployment process with drag and drop, visually deploying and configuring your applications directly into your Kubernetes environment. Currently, the harpoon extension for Docker Desktop supports the following features:

  • Link harpoon to a cloud service provider like AWS and deploy a Kubernetes cluster and the underlying virtual infrastructure.
  • Easily accomplish simple or complex enterprise-grade cloud deployments without writing any code or configuration scripts.
  • Connect your source code repository and set up an automated deployment pipeline without any code in seconds.
  • Supercharge your DevOps team with real-time visual cues to check the health and status of your software as it runs in the cloud.
  • Drag and drop container images from Docker Hub, source, or private container registries.
  • Manage your K8s cluster with visual pods, ingress, volumes, configmaps, secrets, and nodes.
  • Dynamically manipulate routing in a service mesh with only simple clicks and port numbers.

How to use the harpoon Docker Extension

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Enable Docker Extensions

You’ll need to enable Docker Extensions under the Settings tab in Docker Desktop.

Hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Go to Settings > Extensions and check the “Enable Docker Extensions” box.

Enable Docker Extensions under Settings on Docker Desktop.

Step 2: Install the harpoon Docker Extension

The harpoon extension is available on the Extensions Marketplace in Docker Desktop and on Docker Hub. To get started, search for harpoon in the Extensions Marketplace, then select Install.

The harpoon Docker Extension on the Extension Marketplace.

This will download and install the latest version of the harpoon Docker Extension from Docker Hub.

Installation process for the harpoon Docker Extension.

Step 3: Register with harpoon

If you’re new to harpoon, you’ll need to register by clicking the Register button. Otherwise, you can use your credentials to log in.

Register with harpoon or log into your account.

Step 4: Link your cloud service provider

While you can drag out any software or Kubernetes components you like, if you want to do actual deployments, you will first need to link your cloud service provider account. At the moment, harpoon supports Amazon Web Services (AWS). Over time, we’ll be supporting all of the major cloud service providers.

If you want to deploy software on top of AWS, you will need to provide harpoon with an access key ID and a secret access key. Since harpoon is deploying all of the necessary infrastructure in AWS in addition to the Kubernetes cluster, we require fairly extensive access to the account in order to successfully provision the environment. Your keys are only used for provisioning the necessary infrastructure to stand up Kubernetes in your account and to scale up/down your cluster as you designate. We take security very seriously at harpoon, and aside from using an extensive and layered security approach for harpoon itself, we use both disk and field level encryption for any sensitive data.

Link your AWS account to deploy Kubernetes with harpoon through Docker Desktop.

The following are the specific permissions harpoon needs to successfully deploy a cluster; a sketch of attaching them with the AWS CLI follows the list:

  • AmazonRDSFullAccess
  • IAMFullAccess
  • AmazonEC2FullAccess
  • AmazonVPCFullAccess
  • AmazonS3FullAccess
  • AWSKeyManagementServicePowerUser
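
As a rough sketch, an IAM user with these policies can be created with the AWS CLI as follows (the user name harpoon-provisioner is a placeholder; your organization may prefer a dedicated role or more tightly scoped policies):

# Create a dedicated IAM user for harpoon (placeholder name)
aws iam create-user --user-name harpoon-provisioner

# Attach the managed policies listed above
for policy in AmazonRDSFullAccess IAMFullAccess AmazonEC2FullAccess \
              AmazonVPCFullAccess AmazonS3FullAccess AWSKeyManagementServicePowerUser; do
  aws iam attach-user-policy \
    --user-name harpoon-provisioner \
    --policy-arn arn:aws:iam::aws:policy/$policy
done

# Generate the access key ID and secret access key to paste into harpoon
aws iam create-access-key --user-name harpoon-provisioner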

Step 5: Start the cluster

Once you’ve linked your cloud service provider account, just click the “Start” button on the cloud/node element in the workspace. That’s it. No, really! The cloud/node element will turn yellow and provide a countdown. While your experience may vary a bit, we find you can usually get a cluster up in under six minutes. When the cluster is running, the cloud icon will return and the element will glow a happy blue color.

Start the Kubernetes cluster through the harpoon Docker Extension.

Step 6: Deployment

You can search for any container image you’d like from Docker Hub, or link your GitHub account to search any GitHub repository (public or private) to deploy with harpoon. You can drag any search result over to the workspace for a visual representation of the software.

Deploying containers is as easy as hitting the “Deploy” button. GitHub repositories will require you to build them first. For harpoon to successfully build a GitHub repository, we currently require the repository to have a top-level Dockerfile, which is industry best practice. If the Dockerfile is there, once you click the “Build” button, harpoon will automatically find it and build a container image. After a successful build, the “Deploy” button becomes enabled and you can deploy the software directly.

Deploy software to Kubernetes through the harpoon Docker Extension.

Once you have a deployment, you can attach any Kubernetes element to it, including ingress, configmaps, secrets, and persistent volume claims.

If you need help, you can find more information in the harpoon documentation: https://docs.harpoon.io/en/latest/usage.html

Next steps

The harpoon Docker Extension makes it easy to provision and manage your Kubernetes clusters. You can visually deploy your software to Kubernetes and configure it without writing code or configuration. By integrating directly with Docker Desktop, we hope to make it easy for DevOps teams to dynamically start and maintain their cluster without any YAML, helm chart, or Kubernetes expertise.

Check out the harpoon Docker Extension for yourself!

Testing with Telepresence and Docker https://www.docker.com/blog/testing-with-telepresence-and-docker/ Fri, 12 Aug 2022 14:00:18 +0000 https://www.docker.com/?p=36359 Ever found yourself wishing for a way to synchronize local changes with a remote Kubernetes environment? There’s a Docker extension for that! Read on to learn more about how Telepresence partners with Docker Desktop to help you run integration tests quickly and where to get started.

A version of this article was first published on Ambassador’s blog.

Telepresence and docker 900x600 1

Run integration tests locally with the Telepresence Extension on Docker Desktop

Testing your microservices-based application becomes difficult when it can no longer be run locally due to resource requirements. Moving to the cloud for testing is a no-brainer, but how do you synchronize your local changes against your remote Kubernetes environment?

Run integration tests locally instead of waiting on remote deployments with Telepresence, now available as an Extension on Docker Desktop. By using Telepresence with Docker, you get flexible remote development environments that work with your Docker toolchain so you can code and test faster for Kubernetes-based apps.

Install Telepresence for Docker through these quick steps:

  1. Download Docker Desktop.
  2. Open Docker Desktop.
  3. In the Docker Dashboard, click “Add Extensions” in the left navigation bar.
  4. In the Extensions Marketplace, search for the Ambassador Telepresence extension.
  5. Click install.

Connect to Ambassador Cloud through the Telepresence extension:

After you install the Telepresence extension in Docker Desktop, you need to generate an API key to connect the Telepresence extension to Ambassador Cloud.

  1. Click the Telepresence extension in Docker Desktop, then click Get Started.
  2. Click the Get API Key button to open Ambassador Cloud in a browser window.
  3. Sign in with your Google, GitHub, or GitLab account. Ambassador Cloud opens to your profile and displays the API key.
  4. Copy the API key and paste it into the API key field in the Docker Dashboard.

Connect to your cluster in Docker Desktop:

  1. Select the desired cluster from the dropdown menu and click Next. This cluster is now set to kubectl’s current context.
  2. Click Connect to [your cluster]. Your cluster is connected, and you can now create intercepts.

To get hands-on with the example, please follow these instructions:

1. Enable and start your Docker Desktop Kubernetes cluster locally, and install the Telepresence extension to Docker Desktop if you haven’t already.

2. Install the emojivoto application in your local Docker Desktop cluster (we will use this to simulate a remote K8s cluster).

Use the following command to apply the Emojivoto application to your cluster.

kubectl apply -k github.com/BuoyantIO/emojivoto/kustomize/deployment
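
Optionally, you can confirm the application pods come up before moving on:

kubectl get pods -n emojivoto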

3. Start the web service in a single container with Docker.

Create a file docker-compose.yml and paste the following into that file:

version: '3'

services:
  web:
    image: buoyantio/emojivoto-web:v11
    environment:
      - WEB_PORT=8080
      - EMOJISVC_HOST=emoji-svc.emojivoto:8080
      - VOTINGSVC_HOST=voting-svc.emojivoto:8080
      - INDEX_BUNDLE=dist/index_bundle.js
    ports:
      - "8080:8080"
    network_mode: host

In your terminal run docker compose up to start running the web service locally.

4. Using a test container, curl the “list” API endpoint in Emojivoto and watch it fail (because it can’t access the backend cluster).

In a new terminal, we will test the Emojivoto app with another container. Run the following command: docker run -it --rm --network=host alpine. Then we’ll install curl: apk --no-cache add curl.

Finally, curl localhost:8080/api/list, and you should get an rpc error message because we are not connected to the backend cluster and cannot resolve the emoji or voting services:

> docker run -it --rm --network=host alpine
apk --no-cache add curl
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz
(1/5) Installing ca-certificates (20211220-r0)
(2/5) Installing brotli-libs (1.0.9-r5)
(3/5) Installing nghttp2-libs (1.46.0-r0)
(4/5) Installing libcurl (7.80.0-r0)
(5/5) Installing curl (7.80.0-r0)
Executing busybox-1.34.1-r3.trigger
Executing ca-certificates-20211220-r0.trigger
OK: 8 MiB in 19 packages

curl localhost:8080/api/list
{"error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup emoji-svc on 192.168.65.5:53: no such host\""}

5. Run Telepresence connect via Docker Desktop.

Open Docker Desktop and click the Telepresence extension on the left-hand side. Click the blue “Connect” button. Copy and paste an API key from Ambassador Cloud if you haven’t done so already (https://app.getambassador.io/cloud/settings/licenses-api-keys). Select the cluster you deployed the Emojivoto application to by selecting the appropriate Kubernetes Context from the menu. Click next and the extension will connect Telepresence to your local cluster.
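
The extension drives the same workflow as the Telepresence CLI; if you have the CLI installed, a roughly equivalent check from the terminal is (a sketch):

telepresence connect
telepresence status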

6. Re-run the curl and watch this succeed.

Now let’s re-run the curl command. Instead of an error, the list of emojis should be returned indicating that we are connected to the remote cluster:

curl localhost:8080/api/list
[{"shortcode":":joy:","unicode":"😂"},{"shortcode":":sunglasses:","unicode":"😎"},{"shortcode":":doughnut:","unicode":"🍩"},{"shortcode":":stuck_out_tongue_winking_eye:","unicode":"😜"},{"shortcode":":money_mouth_face:","unicode":"🤑"},{"shortcode":":flushed:","unicode":"😳"},{"shortcode":":mask:","unicode":"😷"},{"shortcode":":nerd_face:","unicode":"🤓"},{"shortcode":":gh

7. Now, let’s “intercept” traffic being sent to the web service running in our K8s cluster and reroute it to the “local” Docker Compose instance by creating a Telepresence intercept.

Select the emojivoto namespace and click the intercept button next to “web”. Set the target port to 8080 and service port to 80, then create the intercept.
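
For reference, the same intercept can be created from the Telepresence CLI, roughly like this sketch (flag syntax can vary slightly between Telepresence 2.x releases):

# Intercept the web workload in the emojivoto namespace,
# routing traffic for service port 80 to local port 8080
telepresence intercept web --namespace emojivoto --port 8080:80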

After the intercept is created you will see it listed in the UI. Click the nearby blue button with three dots to access your preview URL. Open this URL in your browser, and you can interact with your local instance of the web service with its dependencies running in your Kubernetes cluster.

Congratulations, you’ve successfully created a Telepresence intercept and a sharable preview URL! Send this to your teammates and they will be able to see the results of your local service interacting with the Docker Desktop cluster.

 

Want to learn more about Telepresence or find other life-improving Docker Extensions? Check out the Docker Extensions Marketplace.

 

Welcome Tilt: Fixing the pains of microservice development for Kubernetes https://www.docker.com/blog/welcome-tilt-fixing-the-pains-of-microservice-development-for-kubernetes/ Tue, 24 May 2022 15:00:59 +0000 https://www.docker.com/?p=33668 Big news! The Tilt team is joining Docker. The Tilt project is joining too.

We think this is a great fit and I will tell you why.

logos

The Problem

Modern apps are made of so many services. They’re everywhere. Every team we talk to is trying to figure out how to set up environments to run their apps in dev.  Simple `start.sh` scripts inevitably grow into mini bespoke orchestrators. They need to start servers in the right order, update them in-place, and monitor when one is failing. We built Tilt, a dev environment as code for teams on Kubernetes, to help solve these problems. Whether your dev env is local processes or containers, in a local cluster or a remote cloud, Tilt keeps you in flow and your feedback loops fast.

So how does this make sense at Docker?

When we started building Tilt in 2018, we thought of Docker as the container company selling Swarm to enterprises. In 2019, the “Docker’s Next Chapter” blog post announced a change in focus: invest more in great tools for developers and development teams, helping them spend more time on innovation and less time on everything else.

Tilt interoperates with Docker BuildKit, Docker Desktop, and Docker Compose. Improvements to these tools help Tilt users too! We always had a hunch that our product roadmaps might overlap. And in the years since Docker focused on developers, we’ve been converging more and more.

Once we started talking more with Docker, we found more in common than just a problem space including:

  • A product philosophy around deeply understanding devs’ existing workflows, so we can make dramatic improvements in user experience that feel magic;
  • An engineering philosophy around patterns and flexibility so devs can adapt their tools to their needs;
  • A business philosophy around building a sustainable company so we can continue to make great free, open-source tools for every developer.

So you could say we got along. What’s next?

What Does a Combined Tilt + Docker Look Like?

Tilt will remain open-source. It’s great! You should try it! We’ll still be responding to issues and hanging out in the community Slack channel. But this has never been about Tilt the technology. Or even about Kubernetes. Our history is full of experiments. Dan Bentley and I started hacking on ideas in 2017. We knew we were unhappy about microservice dev. But we weren’t sure what the first stepping stone might be.

This was more of a research project than a company. Some examples:

  • Our first prototype was a more interactive, developer-focused CI.
  • We almost trolled ourselves into becoming a Bazel company.
  • We bought two lab coats, poster board, glue, and glitter so we could show off our prototypes at the GothamGo conference.
  • We built many weird demos: (1) Mishell (an interactive multi-service shell), represented by a French-speaking hermit crab named Michel, and (2) PETS (Process for Editing Tons of Services), represented by three cats overwhelmed by microservice dev. Our teammate Han Yu had a blast with mascot design (yay to Docker for animal mascots):

Tilt Cats, Tilt Crab

The first version of Tilt was a bare-bones terminal app to update containers in a Kubernetes cluster. It resonated immediately. Tilt has grown a lot since then. Running all or just a few of your services is easier than ever. Why did we focus on Kubernetes? Kubernetes contains a few simple, brilliant ideas for how to operate apps. Tilt borrows a lot of ideas (and a lot of core libraries) from Kubernetes on how to be scriptable and adaptable. But more importantly, the Kubernetes community is lovely. They appreciate Goose-themed trolling. We found a worldwide community of people enthusiastic about building better tools.

We’re sad we missed everyone at KubeCon EU this year! We didn’t know if this deal would finish before, during, or after the conference.

That said, over the next couple of months, we’ll be swapping notes with the Docker team about what we’ve both learned and what we’ve both tried. We don’t know yet where this will take us. Maybe you’ll see Tilt & Kubernetes features in Docker Compose. Or maybe you’ll see Docker Desktop features in Tilt. There will be research and tinkering, where we’ll be in our lab coats and glitter, but do expect us to bring the power of Tilt to Docker.
