We Thank the Stack Overflow Community for Ranking Docker the #1 Most-Used Developer Tool https://www.docker.com/blog/docker-stack-overflow-survey-thank-you-2023/ Wed, 21 Jun 2023

Stack Overflow’s annual 2023 Developer Survey engaged more than 90,000 developers to learn about their work, the technologies they use, their likes and dislikes, and much, much more. As a company obsessed with serving developers, we’re honored that Stack Overflow’s community ranked Docker the #1 most-desired and #1 most-used developer tool. Since our inclusion in the survey four years ago, the Stack Overflow community has consistently ranked Docker highly, and we deeply appreciate this ongoing recognition and support.


Giving developers speed, security, and choice

While we’re pleased with this recognition, for us it means we cannot slow down: We need to go even faster in our effort to serve developers. In what ways? Well, our developer community tells us they value speed, security, and choice:

  • Speed: Developers want to maximize their time writing code for their app — and minimize set-up and overhead — so they can ship early and often.
  • Security: Specifically, non-intrusive, informative, and actionable security. Developers want to catch and fix vulnerabilities right now when coding in their “inner loop,” not 30 minutes later in CI or seven days later in production.
  • Choice: Developers want the freedom to explore new technologies and select the right tool for the right job and not be constrained to use lowest-common-denominator technologies in “everything-but-the-kitchen-sink” monolithic tools.

And indeed, these are the “North Stars” that inform our roadmap and prioritize our product development efforts. Recent examples include:

Speed

Security

  • Docker Scout: Automatically detects vulnerabilities and recommends fixes while devs are coding in their “inner loop.”
  • Attestations: Docker Build automatically generates SBOMs and SLSA Provenance and attaches them to the image.
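
As a rough illustration of how these capabilities are invoked from the CLI, and assuming a locally built image with the hypothetical name myapp:latest, the commands might look like the following (flag availability depends on your Docker and Buildx versions):

# Scan a local image for known vulnerabilities with Docker Scout
docker scout cves myapp:latest

# Build an image with an SBOM and SLSA provenance attestations attached
docker buildx build --sbom=true --provenance=true -t myapp:latest .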

Choice

  • Docker Extensions: Launched just over a year ago, and since then, partners and community members have created and published to Docker Hub more than 700 Docker Extensions for a wide range of developer tools covering Kubernetes app development, security, observability, and more.
  • Docker-Sponsored Open Source Projects: Available 100% for free on Docker Hub, this sponsorship program supports more than 600 open source community projects.
  • Multiple architectures: A single docker build command can produce an image that runs on multiple architectures, including x86, ARM, RISC-V, and even IBM mainframes.
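
For example, a multi-platform build can be produced with Buildx; in this sketch, myorg/myapp:latest is a hypothetical image name and the platform list is only an example:

# Build one image for several CPU architectures and push it to a registry
docker buildx build --platform linux/amd64,linux/arm64,linux/riscv64 -t myorg/myapp:latest --push .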

What’s next?

While we’re pleased that our efforts have been well-received by our developer community, we’re not slowing down. So many exciting changes in our industry today present us with new opportunities to serve developers.

For example, the lines between the local developer laptop and the cloud are becoming increasingly blurred. This offers opportunities to combine the power of the cloud with the convenience and low latency of local development. Another example is AI/ML. Specifically, LLMs in feedback loops with users offer opportunities to automate more tasks to further reduce the toil on developers.

Watch these spaces — we’re looking forward to sharing more with you soon.

Thank you!

Docker only exists because of our community of developers, Docker Captains and Community Leaders, customers, and partners, and we’re grateful for your ongoing support as reflected in this year’s Stack Overflow survey results. On behalf of everyone here at Team Docker: THANK YOU. And we look forward to continuing to build the future together with you.

Learn more

Shorter Feedback Loops Developing Java Apps with Digma’s Free Docker Extension https://www.docker.com/blog/shorter-feedback-loops-developing-java-apps-with-digmas-free-docker-extension/ Tue, 20 Jun 2023

Many engineering teams follow a similar process when developing Docker images. This process encompasses activities such as development, testing, and building, and the images are then released as the subsequent version of the application. During each of these distinct stages, a significant quantity of data can be gathered, offering valuable insights into the impact of code modifications and providing a clearer understanding of the stability and maturity of the product features.

Observability has made a huge leap in recent years, and advanced technologies such as OpenTelemetry (OTEL) and eBPF have simplified the process of collecting runtime data for applications. Yet, for all that progress, developers may still be on the fence about whether and how to use this new resource. The effort required to transform the raw data into something meaningful and beneficial for the development process may seem to outweigh the advantages offered by these technologies.

The practice of continuous feedback envisions a development flow in which this particular problem has been solved. By making the evaluation of code runtime data continuous, developers can benefit from shorter feedback loops and tighter control of their codebase. Instead of waiting for issues to develop in production systems, or even surface in testing, various developer tools can watch the code observability data for you and provide early warning and rigorous linting for regressions, code smells, or issues. 

At the same time, gathering information back into the code from multiple deployment environments, such as the test or production environment, can help developers understand how a specific function, query, or event is performing in the real world at a single glance.


Meet Digma, the first linter for your runtime data 

Digma is a free developer tool that was created to bridge the continuous feedback gap in the DevOps loop. It aims to make sense of the gazillion metrics, traces, and logs the code is spewing out — so that developers won’t have to. And it does this continuously and automatically.

To make the tool even more practical, Digma processes the observability data and uses it to lint the source code itself, right in the IDE. From a developer’s perspective, this means a whole new level of information about the code is revealed — allowing for better code design based on real-world feedback, as well as quick turnaround for fixing regression or avoiding common pitfalls and anti-patterns.

How Digma is deployed

Digma is packaged as a self-contained Docker extension to make it easy for developers to evaluate their code without registration to an external service or sharing any data that might not be allowed by corporate policy (Figure 1). As such, Digma acts as your own intelligent agent for monitoring code execution, especially in development and testing. The backend component is able to track execution over time and highlight any inadvertent changes to behavior, performance, or error patterns that should be addressed.  

To collect data about the code, behind the scenes Digma leverages OpenTelemetry, a widely used open standard for modern observability. To make getting started easier, Digma takes care of the configuration work for you, so from the developer’s point of view, getting started with continuous feedback is equivalent to flipping a switch.

 Illustration of Digma process showing feedback providers and observability sources.
Figure 1: Overview of Digma.

Once the information is collected using OTEL, Digma runs it through what could be described as a reverse pipeline, aggregating the data, analyzing it and providing it via an API to multiple outlets including the IDE plugin, Jaeger, and the Docker extension dashboard that we’ll describe in this example.

Note: Currently, Digma supports Java applications running on IntelliJ IDEA; however, support for additional languages and platforms will be rolling out in the coming months.

Installing Digma

Prerequisites: Docker Desktop 4.8 or later.

Note: You must ensure that Docker Extensions are enabled (Figure 2).

Screenshot of Docker Desktop showing "Enable Docker Extensions" selected with blue checkmark.
Figure 2: Enabling Docker Extensions.

Step 1. Installing the Digma Docker Extension

In the Extensions Marketplace, search for Digma and select Install (Figure 3).

Screenshot of Extensions Marketplace showing results of search for Digma.
Figure 3: Install Digma.

Step 2. Collecting feedback about your code

After installing the Digma extension from the marketplace, you’ll be directed to a quickstart page that will guide you through the next steps to collect feedback about your code. Digma can collect information from two different sources:

  • Your tests/debug sessions within the IDE
  • Any Docker containers you may have running in Docker Desktop

Getting feedback while debugging/running code in the IDE

Digma’s IDE extension makes the process of collecting data about your code in the IDE trivial. The entire setup is reduced to a single toggle button click (Figure 4). Behind the scenes, Digma adds variables to the runtime configuration so that it uses OpenTelemetry for observability. No prior knowledge of observability is needed, and no code changes are necessary to get this to work.

With the observability toggle enabled, you’ll start getting immediate feedback as you run your code locally, or even when you execute your tests. Digma will be on the lookout for any regressions and will automatically spot code smells and issues.

Screenshot of Digma dialog box with Observability toggle switch enabled (in blue).
Figure 4: Observability toggle switch.

Getting feedback from running containers

In addition to the IDE, you can also collect data from any container running Java code on your machine. The process is extremely simple and is documented on the Digma extension’s Getting Started page. It does not involve changing any Dockerfile or code. Basically, you download the OTEL agent, mount it to a volume on the container, and set some environment variables that point the Java process to use it.

curl --create-dirs -O -L --output-dir ./otel https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar

curl --create-dirs -O -L --output-dir ./otel https://github.com/digma-ai/otel-java-instrumentation/releases/latest/download/digma-otel-agent-extension.jar

export JAVA_TOOL_OPTIONS="-javaagent:/otel/javaagent.jar -Dotel.exporter.otlp.endpoint=http://localhost:5050 -Dotel.javaagent.extensions=/otel/digma-otel-agent-extension.jar"
export OTEL_SERVICE_NAME={--ENTER YOUR SERVICE NAME HERE--}
export DEPLOYMENT_ENV=LOCAL_DOCKER

docker run -d -v "/$(pwd)/otel:/otel" --env JAVA_TOOL_OPTIONS --env OTEL_SERVICE_NAME --env DEPLOYMENT_ENV {-- APPEND PARAMS AND REPO/IMAGE --}
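
For illustration only, a filled-in version of that last command might look like the following; the service name, published port, and image are hypothetical placeholders rather than values taken from the Digma documentation:

export OTEL_SERVICE_NAME=my-java-service
export DEPLOYMENT_ENV=LOCAL_DOCKER
# Hypothetical example: replace the port and image with your own application's values.
docker run -d -p 8080:8080 -v "/$(pwd)/otel:/otel" --env JAVA_TOOL_OPTIONS --env OTEL_SERVICE_NAME --env DEPLOYMENT_ENV my-registry/my-java-app:latest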

Uncovering how the code behaves in runtime

Whether you collect data from running containers or in the IDE, Digma will continuously analyze the code behavior and make the analysis results available both as code annotations in the IDE and as dashboards in the Digma extension itself. To demonstrate a live example, we’ve created a sample application based on the Java Spring Boot PetClinic app. 

In this scenario, we’ll clone the repo and run the code from the IDE, triggering a few actions to see what they reveal about the code. For convenience, we’ve created run configurations to simulate common API calls and create interesting data:

  1. Open the PetClinic project here in your IntelliJ IDE.
  2. Run the petclinic-service configuration.
  3. Access the API and play around with it on http://localhost:9753/ or simply run the ClientTested run config included in the workspace.

Almost immediately, the result of the dynamic linting analysis will start appearing over the code in the IDE (Figure 5):

Screenshot showing Digma Insights after analysis.
Figure 5: Results of Digma analysis.

At the same time, the Digma extension itself will present a dashboard cataloging all of the assets that have been identified, including code locations, API endpoints, database queries, and more (Figure 6). For each asset, you’ll be able to access basic statistics about its performance as well as a more detailed list of concerns about its runtime behavior.

 Screenshot of Digma dashboard listing assets that have been identified.
Figure 6: Digma dashboard.

Using the observability data

One of the main problems Digma tries to solve is not how to collect observability data but how to turn it into a useful and practical asset that can speed up development processes and improve the code. Digma’s insights can be applied directly at design time, based on existing data, and used to validate changes as they run in dev/test/prod, providing early feedback and shorter loops in the development process.

Examples of design time planning code insights:

  • See runtime usage for the modified functions and understand who will be affected by the change
  • Understand the concurrency to plan for, existing bottlenecks, and criticality
  • Identify existing errors and exceptions that need to be handled by the new component
  • See a complete visualization of the flow of control through the modified code

Examples of runtime code validations for shorter feedback loops:

  • Catch code and query modeling smells, such as N+1 selects, which are automatically detected and highlighted
  • Identify new performance bottlenecks and regressions as a result of the changes
  • Spot scaling issues earlier and address them as a part of the dev cycle

As Digma evolves, it continues to track and provide clear visibility into areas that matter to developers, with the goal of complete clarity about how each piece of code behaves in real-world scenarios and of catching issues and regressions much earlier in the process.

Who should use Digma?

Unlike many other observability solutions, Digma is code first and developer first. Digma is completely free for developers, does not require any code changes or data sharing, and will get you from zero to impactful data about your code within minutes. If you are working on Java code, using the JetBrains IDE, and want to improve your code with actual execution data, you can get started by picking up the Digma extension from the marketplace.

You can provide feedback in our Slack channel and tell us where Digma improved your dev cycle.

Unlock Docker Desktop Real-Time Insights with the Grafana Docker Extension https://www.docker.com/blog/unlock-docker-desktop-real-time-insights-with-the-grafana-docker-extension/ Fri, 09 Jun 2023

More than one million: that’s the number of active Grafana installations worldwide. In the realm of data visualization and monitoring, Grafana has emerged as a go-to solution for organizations and individuals alike. With its powerful features and intuitive interface, Grafana enables users to gain valuable insights from their data.

Picture this scenario: You have a locally hosted application whose behavior you need to understand in depth, but wiring it up to a full observability stack seems like an insurmountable task. This is where the Grafana Cloud Docker Extension comes to the rescue. The extension offers a seamless solution by establishing a secure connection between your local environment and the Grafana Cloud platform.

In this article, we’ll explore the benefits of using the Grafana Cloud Docker Extension and describe how it can bridge the gap between your local machine and the remote Grafana Cloud platform.


Overview of Grafana Cloud

Grafana Cloud is a fully managed, cloud-hosted observability platform ideal for cloud-native environments. With its robust features, wide user adoption, and comprehensive documentation, it empowers organizations to gain insights, make data-driven decisions, and ensure the reliability and performance of their applications and infrastructure.

Grafana Cloud is an open, composable observability stack that supports various popular data sources, such as Prometheus, Elasticsearch, Amazon CloudWatch, and more (Figure 1). By configuring these data sources, users can create custom dashboards, set up alerts, and visualize their data in real-time. The platform also offers performance and load testing with Grafana Cloud k6 and incident response and management with Grafana Incident and Grafana OnCall to enhance the observability experience.

Graphic showing Hosted Grafana architecture with connections to elements including Node.js, Prometheus, Amazon Cloudwatch, and Google Cloud monitoring.
Figure 1: Hosted Grafana architecture.

Why run Grafana Cloud as a Docker Extension?

The Grafana Cloud Docker Extension is your gateway to seamless testing and monitoring, empowering you to make informed decisions based on accurate data and insights. The Docker Desktop integration in Grafana Cloud brings seamless monitoring capabilities to your local development environment. 

Docker Desktop, a user-friendly application for Mac, Windows, and Linux, enables you to containerize your applications and services effortlessly. With the Grafana Cloud extension integrated into Docker Desktop, you can now monitor your local Docker Desktop instance and gain valuable insights.

The following quick-start video shows how to monitor your local Docker Desktop instance using the Grafana Cloud Extension in Docker Desktop:

The Docker Desktop integration in Grafana Cloud provides a set of prebuilt Grafana dashboards specifically designed for monitoring Docker metrics and logs. These dashboards offer visual representations of essential Docker-related metrics, allowing you to track container resource usage, network activity, and overall performance. Additionally, the integration also includes monitoring for Linux host metrics and logs, providing a comprehensive view of your development environment.

Using the Grafana Cloud extension, you can visualize and analyze the metrics and logs generated by your Docker Desktop instance. This enables you to identify potential issues, optimize resource allocation, and ensure the smooth operation of your containerized applications and microservices.

Getting started

Prerequisites: Docker Desktop 4.8 or later and a Grafana Cloud account.

Note: You must ensure that Docker Extensions are enabled (Figure 2).

Screenshot showing search for Grafana in Extensions Marketplace.
Figure 2: Enabling Docker Extensions.

Step 1. Install the Grafana Cloud Docker Extension

In the Extensions Marketplace, search for Grafana and select Install (Figure 3).

Screenshot of Docker Desktop showing installation of extension.
Figure 3: Installing the extension.

Step 2. Create your Grafana Cloud account

A Grafana Cloud account is required to use the Docker Desktop integration. If you don’t have a Grafana Cloud account, you can sign up for a free account today (Figure 4).

 Screenshot of Grafana Cloud sign-up page.
Figure 4: Signing up for Grafana Cloud.

Step 3. Find the Connections console

In your Grafana instance on Grafana Cloud, use the left-hand navigation bar to find the Connections Console (Home > Connections > Connect data) as shown in Figure 5.

Screenshot of Grafana Cloud Connections console.
Figure 5: Connecting data.

Step 4. Install the Docker Desktop integration

To start sending metrics and logs to the Grafana Cloud, install the Docker Desktop integration (Figure 6). This integration lets you fetch the values of connection variables required to connect to your account.

 Screenshot of Connections console showing installation of Docker Desktop integration.
Figure 6: Installing the Docker Desktop Integration.

Step 5. Connect your Docker Desktop instance to Grafana Cloud

It’s time to open the Docker Desktop extension and connect it to Grafana Cloud (Figure 7). Enter the connection variables you found while installing the Docker Desktop integration on Grafana Cloud.

Screenshot showing connection of Docker Desktop extension to Grafana Cloud.
Figure 7: Connecting the Docker Desktop extension to Grafana Cloud.

Step 6. Check if Grafana Cloud is receiving data from Docker Desktop

Test the connection to ensure that the agent is collecting data (Figure 8).

Screenshot showing test of connection to ensure data collection.
Figure 8: Checking the connection.

Step 7. View the Grafana dashboard

The Grafana dashboard shows the integration with Docker Desktop (Figure 9).

 Screenshot of Grafana Dashboards page.
Figure 9: Grafana dashboard.

Step 8. Start monitoring your Docker Desktop instance

After the integration is installed, the Docker Desktop extension will start sending metrics and logs to Grafana Cloud.

You will see three prebuilt dashboards installed in Grafana Cloud for Docker Desktop.

Docker Overview dashboard

This Grafana dashboard gives a general overview of the Docker Desktop instance based on the metrics exposed by the cAdvisor Prometheus exporter (Figure 10).

Screenshot of Grafana Docker overview dashboard showing metrics such as CPU and memory usage.
Figure 10: Docker Overview dashboard.

The key metrics monitored are:

  • Number of containers/images
  • CPU metrics
  • Memory metrics
  • Network metrics

This dashboard also contains a shortcut at the top for the logs dashboard so you can correlate logs and metrics for troubleshooting.

Docker Logs dashboard

This Grafana dashboard provides logs and metrics related to logs of the running Docker containers on the Docker Desktop engine (Figure 11).

Screenshot of Grafana Docker Logs dashboard showing statistics related to the running Docker containers.
Figure 11: Docker Logs dashboard.

Logs and metrics can be filtered based on the Docker Desktop instance and the container using the drop-down for template variables on the top of the dashboard.

Docker Desktop Node Exporter/Nodes dashboard

This Grafana dashboard provides the metrics of the Linux virtual machine used to host the Docker engine for Docker Desktop (Figure 12).

Screenshot of Docker Nodes dashboard showing metrics such as disk space and memory usage.
Figure 12: Docker Nodes dashboard.

How to monitor Redis in a Docker container with Grafana Cloud

Because the Grafana Agent is embedded inside the Grafana Cloud extension for Docker Desktop, it can easily be configured to monitor other systems running on Docker Desktop that are supported by the Grafana Agent.

For example, we can monitor a Redis instance running inside a container in Docker Desktop using the Redis integration for Grafana Cloud and the Docker Desktop extension.

If we have a Redis database running inside our Docker Desktop instance, we can install the Redis integration on Grafana Cloud by navigating to the Connections Console (Home > Connections > Connect data) and clicking on the Redis tile (Figure 13).
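
If you don’t already have a Redis container running, a minimal way to start one for this walkthrough is shown below; the container name and image tag are arbitrary choices for the example, not requirements of the integration:

docker run -d --name my-redis -p 6379:6379 redis:7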

Screenshot of Connections Console showing adding Redis as data source.
Figure 13: Installing the Redis integration on Grafana Cloud.

To start collecting metrics from the Redis server, we can copy the corresponding agent snippet into our agent configuration in the Docker Desktop extension. Click on Configuration in the Docker Desktop extension and add the following snippet under the integrations key. Then press Save configuration (Figure 14).

integrations:
  redis_exporter:
    enabled: true
    redis_addr: 'localhost:6379'

Screenshot of Connections console showing configuration of Redis integration.
Figure 14: Configuring Redis integration.

In its default settings, the Grafana Agent container is not connected to the default bridge network of Docker Desktop. To connect the agent container to this network, run the following command:

docker network connect bridge grafana-docker-desktop-extension-agent

This step allows the agent to connect and scrape metrics from applications running on other containers. Now you can see Redis metrics on the dashboard installed as part of the Redis solution for Grafana Cloud (Figure 15).

 Screenshot showing Redis metrics on the dashboard.
Figure 15: Viewing Redis metrics.

Conclusion

With the Docker Desktop integration in Grafana Cloud and its prebuilt Grafana dashboards, monitoring your Docker Desktop environment becomes a streamlined process. The Grafana Cloud Docker Extension allows you to gain valuable insights into your local Docker instance, make data-driven decisions, and optimize your development workflow with the power of Grafana Cloud. Explore a new realm of testing possibilities and elevate your monitoring game to new heights. 

Check out the Grafana Cloud Docker Extension on Docker Hub.

Boost Your Local Testing Game with the LambdaTest Tunnel Docker Extension https://www.docker.com/blog/boost-your-local-testing-game-with-lambdatest-tunnel-docker-extension/ Tue, 16 May 2023

As the demand for web applications continues to rise, so does the importance of testing them thoroughly. One challenge that testers face is how to test applications that are hosted locally on their machines. This is where the LambdaTest Tunnel Docker Extension comes in handy. This extension allows you to establish a secure connection between your local environment and the LambdaTest platform, making it possible to test your locally hosted pages and applications on a remote browser.

In this article, we’ll explore the benefits of using the LambdaTest Tunnel Docker Extension and describe how it can streamline your testing workflow.


Overview of LambdaTest Tunnel

LambdaTest Tunnel is a secure and encrypted tunneling feature that allows devs and QAs to test their locally hosted web applications or websites on cloud-based real machines. It establishes a secure connection between the user’s local machine and the real machine in the cloud (Figure 1).

By downloading the LambdaTest Tunnel binary, you can securely connect your local machine to LambdaTest cloud servers even when behind corporate firewalls. This allows you to test locally hosted websites or web applications across various browsers, devices, and operating systems available on the LambdaTest platform. Whether your web files are written in HTML, CSS, PHP, Python, or similar languages, you can use LambdaTest Tunnel to test them.

Diagram of LambdaTest Tunnel network setup, showing connection from the tunnel client to the API gateway to the LambdaTest's private network with proxy server and browser VMs.
Figure 1: Overview of LambdaTest Tunnel.

Why use LambdaTest Tunnel?

LambdaTest Tunnel offers numerous benefits for web developers, testers, and QA professionals, including a secure and encrypted connection, cross-browser compatibility testing, and localhost testing.

Let’s see the LambdaTest Tunnel benefits one by one:

  • It provides a secure and encrypted connection between your local machine and the virtual machines in the cloud, thereby ensuring the privacy of your test data and online communications.
  • With LambdaTest Tunnel, you can test your web applications or websites, local folder, and files across a wide range of browsers and operating systems without setting up complex and expensive local testing environments.
  • It lets you test your locally hosted web applications or websites on cloud-based real OS machines.
  • You can even run accessibility tests on desktop browsers while testing locally hosted web applications and pages.

Why run LambdaTest Tunnel as a Docker Extension?

With Docker Extensions, you can build and integrate software applications into your daily workflow. Using LambdaTest Tunnel as a Docker extension provides a seamless and hassle-free experience for establishing a secure connection and performing cross-browser testing of locally hosted websites and web applications on the LambdaTest platform without manually launching the tunnel through the command line interface (CLI).

The LambdaTest Tunnel Docker Extension opens up a world of options for your testing workflows by adding a variety of features. Docker Desktop has an easy-to-use one-click installation feature that allows you to use the LambdaTest Tunnel Docker Extension directly from Docker Desktop.

Getting started

Prerequisites: Docker Desktop 4.8 or later and a LambdaTest account. Note: You must ensure that Docker Extensions are enabled (Figure 2).

Screenshot of Docker Desktop with Docker Extensions enabled.
Figure 2: Enable Docker Extensions.

Step 1: Install the LambdaTest Docker Extension

In the Extensions Marketplace, search for LambdaTest Tunnel extension and select Install (Figure 3).

Screen shot of Extensions Marketplace, showing blue Install button for LambdaTest Tunnel.
Figure 3: Install LambdaTest Tunnel.

Step 2: Set up the Docker LambdaTest Tunnel

Open the LambdaTest Tunnel and select the Setup Tunnel to configure the tunnel (Figure 4).

Screenshot of LambdaTest Tunnel setup page.
Figure 4: Configure the tunnel.

Step 3: Enter your LambdaTest credentials

Provide your LambdaTest Username, Access Token, and preferred Tunnel Name. You can get your Username and Access Token from your LambdaTest Profile under Password & Security.
Once these details have been entered, click Launch Tunnel (Figure 5).

Screenshot of LambdaTest Docker Tunnel page with black "Launch Tunnel" button.
Figure 5: Launch LambdaTest Tunnel.

The LambdaTest Tunnel will be launched, and you can see the running tunnel logs (Figure 6).

Screenshot of LambdaTest Docker Tunnel page with list of running tunnels.
Figure 6: Running logs.

Once you have configured the LambdaTest Tunnel via Docker Extension, it should appear on the LambdaTest Dashboard (Figure 7).

Screenshot of new active tunnel in LambdaTest dashboard.
Figure 7: New active tunnel.

Local testing using LambdaTest Tunnel Docker Extension

Let’s walk through a scenario using LambdaTest Tunnel. Suppose a web developer has created a new web application that allows users to upload and view images. The developer needs to ensure that the application can handle a variety of different image formats and sizes and that it can render and display images across different browsers and devices.

To do this, the developer first sets up a local development environment and installs the LambdaTest Tunnel Docker Extension. They then use the web application to open and manipulate local image files.

Next, the developer uses the LambdaTest Tunnel to securely expose their local development environment to the internet. This step allows them to test the application in real-time on different browsers and devices using LambdaTest’s cloud-based digital experience testing platform.

Now let’s see the steps to perform local testing using the LambdaTest Tunnel Docker Extension.

1. Go to the LambdaTest Dashboard and navigate to Real Time Testing > Browser Testing (Figure 8).

Screenshot of LambdaTest Tunnel showing Browser Testing selected.
Figure 8: Navigate to Browser Testing.

2. In the console, enter the localhost URL, select browser, browser version, operating system, etc. and select START (Figure 9).

Screenshot of LambdaTest Tunnel page showing browser options to choose from.
Figure 9: Configure testing.

3. A cloud-based real operating system will fire up where you can perform testing of local files or folders (Figure 10).

Screenshot of LambdaTest showing local test example.
Figure 10: Perform local testing.

Learn more about how to set up the LambdaTest Tunnel Docker Extension in the documentation.

Conclusion

The LambdaTest Tunnel Docker Extension makes it easy to perform local testing without launching the tunnel from the CLI. You can run localhost tests over an online cloud grid of 3,000+ real browser and operating system combinations. You don’t have to worry about the challenges of local infrastructure because LambdaTest provides a cloud grid with zero downtime.

Check out the LambdaTest Tunnel Docker Extension on Docker Hub. The LambdaTest Tunnel Docker Extension source code is available on GitHub, and contributions are welcome.

Building a Local Application Development Environment for Kubernetes with the Gefyra Docker Extension https://www.docker.com/blog/building-a-local-application-development-environment-for-kubernetes-with-the-gefyra-docker-extension/ Wed, 03 May 2023

If you’re using a Docker-based development approach, you’re already well on your way toward creating cloud-native software. Containerizing your software ensures that you have all the system-level dependencies, language-specific requirements, and application configurations managed in a containerized way, bringing you closer to the environment in which your code will eventually run.

In complex systems, however, you may need to connect your code with several auxiliary services, such as databases, storage volumes, APIs, caching layers, message brokers, and others. In modern Kubernetes-based architectures, you also have to deal with service meshes and cloud-native deployment patterns, such as probes, configuration, and structural and behavioral patterns. 

Kubernetes offers a uniform interface for orchestrating scalable, resilient, and services-based applications. However, its complexity can be overwhelming, especially for developers without extensive experience setting up Kubernetes clusters. That’s where Gefyra comes in, making it easier for developers to work with Kubernetes and improve the process of creating secure, reliable, and scalable software.


What is Gefyra? 

Gefyra, named after the Greek word for “bridge,” is a comprehensive toolkit that facilitates Docker-based development with Kubernetes. If you plan to use Kubernetes as your production platform, it’s essential to work with the same environment during development. This approach ensures that you have the highest possible “dev/prod-parity,” minimizing friction when transitioning from development to production. 

Gefyra is an open source project that provides docker run on steroids. It allows you to connect your local Docker with any Kubernetes cluster and run a container locally that behaves as if it would run in the cluster. You can write code locally in your favorite code editor using the tools you love. 

Additionally, Gefyra does not require you to build a container image from your code changes, push the image to a registry, or trigger a restart in the cluster. Instead, it saves you from this tedious cycle by connecting your local code right into the cluster without any changes to your existing Dockerfile. This approach is useful not only for new code but also when introspecting existing code with a debugger that you can attach to a running container. That makes Gefyra a productivity superstar for any Kubernetes-based development work.

How does Gefyra work?

Gefyra installs several cluster-side components that enable it to control the local development machine and the development cluster. These components include a tunnel between the local development machine and the Kubernetes cluster, a local DNS resolver that behaves like the cluster DNS, and sophisticated IP routing mechanisms. Gefyra uses popular open source technologies, such as Docker, WireGuard, CoreDNS, Nginx, and Rsync, to build on top of these components.

The local development setup involves running a container instance of the application on the developer machine, with a sidecar container called Cargo that acts as a network gateway and provides a CoreDNS server that forwards all requests to the cluster (Figure 1). Cargo encrypts all the passing traffic with WireGuard using ad hoc connection secrets. Developers can use their existing tooling, including their favorite code editor and debuggers, to develop their applications.

Yellow graphic with white text boxes showing development setup, including: IDE, Volumes, Shell, Logs, Debugger, and connection to Gefyra, including App Container and Cargo sidecar container.
Figure 1: Local development setup.

Gefyra manages two ends of a WireGuard connection and automatically establishes a VPN tunnel between the developer and the cluster, making the connection robust and fast without stressing the Kubernetes API server (Figure 2). Additionally, the client side of Gefyra manages a local Docker network with a VPN endpoint, allowing the container to join the VPN that directs all traffic into the cluster.

Yellow graphic with boxes and arrows showing connection between Developer Machine and Developer Cluster.
Figure 2: Connecting developer machine and cluster.

Gefyra also allows bridging existing traffic from the cluster to the local container, enabling developers to test their code with real-world requests from the cluster and collaborate on changes in a team. The local container instance remains connected to auxiliary services and resources in the cluster while receiving requests from other Pods, Services, or the Ingress. This setup eliminates the need for building container images in a continuous integration pipeline and rolling out a cluster update for simple changes.

Why run Gefyra as a Docker Extension?

Gefyra’s core functionality is contained in a Python library available in its repository. The CLI that comes with the project has a long list of arguments that may be overwhelming for some users. To make it more accessible, Gefyra developed the Docker Desktop extension, which is easy for developers to use without having to delve into the intricacies of Gefyra.

The Gefyra extension for Docker Desktop enables developers to work with a variety of Kubernetes clusters, including the built-in Kubernetes cluster, local providers such as Minikube, K3d, or Kind, Getdeck Beiboot, or any remote clusters. Let’s get started.

Installing the Gefyra Docker Desktop

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Initial setup

In Docker Desktop, confirm that the Docker Extensions feature is enabled. (Docker Extensions should be enabled by default.) In Settings > Extensions, select the Enable Docker Extensions box (Figure 3).

 Screenshot showing Docker Desktop interface with "Enable Docker Extensions" selected.
Figure 3: Enable Docker Extensions.

You must also enable Kubernetes under Settings (Figure 4).

Screenshot of Docker Desktop with "Enable Kubernetes" and "Show system containers (advanced)" selected.
Figure 4: Enable Kubernetes.

Gefyra is in the Docker Extensions Marketplace. In the following instructions, we’ll install Gefyra in Docker Desktop. 

Step 2: Add the Gefyra extension

Open Docker Desktop and select Add Extensions to find the Gefyra extension in the Extensions Marketplace (Figure 5).

Screenshot showing search for "Gefyra" in Docker Extensions Marketplace.
Figure 5: Locate Gefyra in the Docker Extensions Marketplace.

Once Gefyra is installed, you can open the extension and find the start screen of Gefyra that lists all containers that are connected to a Kubernetes cluster. Of course, this section is empty on a fresh install.
To launch a local container with Gefyra, just like with Docker, you need to click on the Run Container button at the top right (Figure 6).

 Screenshot showing Gefyra start screen.
Figure 6: Gefyra start screen.

The next steps will vary based on whether you’re working with a local or remote Kubernetes cluster. If you’re using a local cluster, simply select the matching kubeconfig file and optionally set the context (Figure 7). 

For remote clusters, you may need to manually specify additional parameters. Don’t worry if you’re unsure how to do this, as the next section will provide a detailed example for you to follow along with.

Screenshot of Gefyra interface showing blue "Choose Kubeconfig" button.
Figure 7: Selecting Kubeconfig.

The Kubernetes demo workloads

The following example showcases how Gefyra leverages the Kubernetes functionality included in Docker Desktop to create a development environment for a simple application that consists of two services — a backend and a frontend (Figure 8). 

Both services are implemented as Python processes, and the frontend service uses a color property obtained from the backend to generate an HTML document. Communication between the two services is established via HTTP, with the backend address being passed to the frontend as an environment variable.

 Yellow graphic showing connection of frontend and backend services.
Figure 8: Frontend and backend services.

The Gefyra team has created a repository for the Kubernetes demo workloads, which can be found on GitHub.

If you prefer to watch a video explaining what’s covered in this tutorial, check out this video on YouTube.

Prerequisite

Ensure that the current Kubernetes context is switched to Docker Desktop. This step allows the user to interact with the Kubernetes cluster and deploy applications to it using kubectl.

kubectl config current-context
docker-desktop
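
If the command returns a different context, you can switch to the Docker Desktop cluster before continuing:

kubectl config use-context docker-desktop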

Clone the repository

The next step is to clone the repository:

git clone https://github.com/gefyrahq/gefyra-demos

Applying the workload

The following YAML file sets up a simple two-tier app consisting of a backend service and a frontend service with communication between the two services established via the SVC_URL environment variable passed to the frontend container. 

It defines two pods, named backend and frontend, and two services, named backend and frontend, respectively. The backend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-backend image on port 5002. The frontend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-frontend image on port 5003. The frontend container also includes an environment variable named SVC_URL, which is set to the value backend.default.svc.cluster.local:5002.

The backend service is defined to select the backend pod using the app: backend label, and expose port 5002. The frontend service is defined to select the frontend pod using the app: frontend label, and expose port 80 as a load balancer, which routes traffic to port 5003 of the frontend container.
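
The actual manifest lives in the gefyra-demos repository; the sketch below is reconstructed purely from the description above, so field names and labels may differ slightly from the real demo.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    app: backend
spec:
  containers:
    - name: backend
      image: quay.io/gefyra/gefyra-demo-backend
      ports:
        - containerPort: 5002
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: quay.io/gefyra/gefyra-demo-frontend
      ports:
        - containerPort: 5003
      env:
        - name: SVC_URL
          value: "backend.default.svc.cluster.local:5002"
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 5002
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 5003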

/gefyra-demos/kcd-munich> kubectl apply -f manifests/demo.yaml
pod/backend created
pod/frontend created
service/backend created
service/frontend created

Let’s watch the workload getting ready:

kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
backend    1/1     Running   0          2m6s
frontend   1/1     Running   0          2m6s

After ensuring that the backend and frontend pods have finished initializing (check for the READY column in the output), you can access the application by navigating to http://localhost in your web browser. This URL is served from the Kubernetes environment of Docker Desktop. 

Upon loading the page, you will see the application’s output displayed in your browser. Although the output may not be visually stunning, it is functional and should provide the necessary functionality for your needs.

Blue bar displaying "Hello World" in black text.

Now, let’s explore how we can correct or adjust the color of the output generated by the frontend component.

Using Gefyra “Run Container” with the frontend process

In the first part of this section, you will see how to execute a frontend process on your local machine that is associated with a resource based on the Kubernetes cluster: the backend API. This can be anything ranging from a database to a message broker or any other service utilized in the architecture.

Kick off a local container with Run Container from the Gefyra start screen (Figure 9).

Screenshot of Gefyra interface showing blue "Run container" button.
Figure 9: Run a local container.

Once you’ve entered the first step of this process, you will find the kubeconfig and context to be set automatically. That’s a lifesaver if you don’t know where to find the default kubeconfig on your host.

Just hit the Next button and proceed with the container settings (Figure 10).

Screenshot of Gefyra interface showing the "Set Kubernetes Settings" step.
Figure 10: Container settings.

In the Container Settings step, you can configure the Kubernetes-related parameters for your local container. In this example, everything happens in the default Kubernetes namespace. Select it in the first drop-down input (Figure 11). 

In the drop-down input below Image, you can specify the image to run locally. Note that it lists all images that are being used in the selected namespace (from the Namespace selector). Isn’t that convenient? You don’t need to worry about the images being used in the cluster or find them yourself. Instead, you get a suggestion to work with the image at hand, as we want to do in this example (Figure 12). You could still specify any arbitrary images if you like, for example, a completely new image you just built on your machine.

Screenshot of Gefyra interface showing "Select a Workload" drop-down menu under Container Settings.
Figure 11: Select namespace and workload.
Screenshot of Gefyra interface showing drop-down menu of images.
Figure 12: Select image to run.

To copy the environment of the frontend container running in the cluster, you will need to select pod/frontend from the Copy Environment From selector (Figure 13). This step is important because you need the backend service address, which is passed to the pod in the cluster using an environment variable.

Finally, for the upper part of the container settings, you need to overwrite the following run command of the container image to enable code reloading:

poetry run flask --app app --debug run --port 5002 --host 0.0.0.0
Screenshot of Gefyra interface showing selection of  “pod/frontend” under “Copy Environment From."
Figure 13: Copy environment of frontend container.

Let’s start the container process on port 5002 and expose this port on the local machine. In addition, let’s mount the code directory (/gefyra-demos/kcd-munich/frontend) to make code changes immediately visible. That’s it for now. A click on the Run button starts the process.

Screenshot of Gefyra interface showing installation progress bar.
Figure 14: Installing Gefyra components.

It takes a few seconds to install Gefyra’s cluster-side components, prepare the local networking part, and pull the container image to start locally (Figure 14). Once this is ready, you will get redirected to the native container view of Docker Desktop from this container (Figure 15).

Screenshot showing native container view of Docker Desktop.
Figure 15: Log view.

You can look around in the container using the Terminal tab (Figure 16). Type the env command in the shell, and you will see all the environment variables coming from Kubernetes.

Screenshot showing Terminal view of running container.
Figure 16: Terminal view.

We’re particularly interested in the SVC_URL variable that points the frontend to the backend process, which is, of course, still running in the cluster. Now, when browsing to the URL http://localhost:5002, you will get a slightly different output:

Blue bar displaying "Hello KCD" in black text

Why is that? Let’s look at the code that we already mounted into the local container, specifically the app.py that runs a Flask server (Figure 17).

Screenshot of colorful app.py code on black background.
Figure 17: App.py code.

The last line of the code in the Gefyra example displays the text Hello KCD!, and any changes made to this code are immediately updated in the local container. This feature is noteworthy because developers can freely modify the code and see the changes reflected in real-time without having to rebuild or redeploy the container.

Line 12 of the code in the Gefyra example sends a request to a service URL, which is stored in the variable SVC. The value of SVC is read from an environment variable named SVC_URL, which is copied from the pod in the Kubernetes cluster. The URL, backend.default.svc.cluster.local:5002, is a fully qualified domain name (FQDN) that points to a Kubernetes service object and a port. 

These URLs are commonly used by applications in Kubernetes to communicate with each other. The local container process is capable of sending requests to services running in Kubernetes using the native connection parameters, without the need for developers to make any changes, which may seem like magic at times.

In most development scenarios, the capabilities of Gefyra we just discussed are sufficient. In other words, you can use Gefyra to run a local container that can communicate with resources in the Kubernetes cluster, and you can access the app on a local port. However, what if you need to modify the backend while the frontend is still running in Kubernetes? This is where the “bridge” feature of Gefyra comes in, which we will explore next.

Gefyra “bridge” with the backend process

We could choose to run the frontend process locally and connect it to the backend process running in Kubernetes through a bridge. However, this approach may not always be necessary or desirable, especially for backend developers who may not be interested in the frontend. In this case, it may be more convenient to leave the frontend running in the cluster and stop the local instance by selecting the stop button in Docker Desktop’s container view.

First of all, we have to run a local instance of the backend service. It’s the same as with the frontend, but this time with the backend container image (Figure 18).

Screenshot of Gefyra interface showing "pod/backend" setup.
Figure 18: Running a backend container image.

Compared to the frontend example from above, you can run the backend container image (quay.io/gefyra/gefyra-demo-backend:latest), which is suggested by the drop-down selector. This time we need to copy the environment from the backend pod running in Kubernetes. Note that the volume mount is now set to the code of the backend service to make it work.

After starting the container, you can check http://localhost:5002/color, which serves the backend API response. Looking at the app.py of the backend service shows the source of this response. In line 8, this app returns a JSON response with the color property set to green (Figure 19).

Screenshot showing app.py code with "color" set to "green".
Figure 19: Checking the color.

At this point, keep in mind that we’re only running a local instance of the backend service. This time, a connection to a Kubernetes-based resource is not needed as this container runs without any external dependency.

The idea is to make the frontend process that serves from the Kubernetes cluster on http://localhost (still blue) pick up our backend information to render its output. That’s done using Gefyra’s bridge feature. In the next step, we will overlay the backend process running in the cluster with our local container instance so that the local code becomes effective in the cluster.

Getting back to the Gefyra container list on the start screen, you can find the Bridge column on each locally running container (Figure 20). Once you click this button, you can create a bridge of your local container into the cluster.

Screenshot of Gefyra interface showing "Bridge" column on far right.
Figure 20: The Bridge column is visible on the far right.

In the next dialog, we need to enter the bridge configuration (Figure 21).

Screenshot of Gefyra interface showing Bridge Settings.
Figure 21: Enter the bridge configuration.

Let’s set the “Target” for the bridge to the backend pod, which is currently serving the frontend process in the cluster, and set a timeout for the bridge to 60 seconds. We also need to map the port of the proxy running in the cluster with the local instance. 

If your local container is configured to listen on a different port from the cluster, you can specify the mapping here (Figure 22). In this example, the service is running on port 5003 in both the cluster and on the local machine, so we need to map that port. After clicking the Bridge button, it takes a few seconds to return to the container list on Gefyra’s start view.

 Screenshot of Gefyra interface showing port mapping configuration.
Figure 22: Specify port mapping.

Observe the change in the icon of the Bridge button, which now depicts a stop symbol (Figure 23). This means the bridge function is now operational and can be terminated by simply clicking this button again.

Screenshot of Gefyra showing closeup view of Bridge column and blue stop button.
Figure 23: The Bridge column showing a stop symbol.

At this point, the local code is able to handle requests from the frontend process in the cluster by using the URL stored in the SVC_URL variable, without making any changes to the frontend process itself. To confirm this, you can open http://localhost in your browser (which is served from the Kubernetes of Docker Desktop) and check that the output is now green. This is because the local code is returning the value green for the color property. You can change this value to any valid one in your IDE, and it will be immediately reflected in the cluster. This is the amazing power of this tool.

Remember to release the bridge of your container once you are finished making changes to your backend. This will reset the cluster to its original state, and the frontend will display the original “beautiful” blue H1 again. This approach lets us intercept containers running in Kubernetes with our local code without modifying the Kubernetes cluster itself: nothing in the cluster changed; we simply intercepted requests to a container with our local code and released that intercept afterward.

Conclusion

Gefyra is an easy-to-use Docker Desktop extension that connects with Kubernetes to improve development workflows and team collaboration. It lets you run containers as usual while being connected with Kubernetes, thereby saving time and ensuring high dev/prod parity. 

The Blueshoe development team would appreciate a star on GitHub and welcomes you to join their Discord community for more information.

About the Author

Michael Schilonka is a strong believer that Kubernetes can be a software development platform, too. He is the co-founder and managing director of the Munich-based agency Blueshoe and the technical lead of Gefyra and Getdeck. He talks about Kubernetes in general and about how his team uses Kubernetes for development. Follow him on LinkedIn to stay connected.

Enabling a No-Code Performance Testing Platform Using the Ddosify Docker Extension https://www.docker.com/blog/no-code-performance-testing-using-ddosify-extension/ Tue, 28 Mar 2023

Performance testing is a critical component of software testing and evaluation. It involves simulating a large number of users accessing a system simultaneously to determine the system’s behavior under high user loads. This process helps organizations understand how their systems will perform in real-world scenarios and identify potential performance bottlenecks. Testing your application under different load conditions also helps you identify those bottlenecks and improve performance.

In this article, we provide an introduction to the Ddosify Docker Extension and show how to get started using it for performance testing.  


The importance of performance testing

Performance testing should be regularly performed to ensure that your application is performing well under different load conditions so that your customers can have a great experience. Kissmetrics found that a 1-second delay in page response time can lead to a seven percent decrease in conversions and that half of the customers expect a website to load in less than 2 seconds. A 1-second delay in page response could result in a potential loss of several million dollars in annual sales for an e-commerce site.

Meet Ddosify

Ddosify is a high-performance, open-core performance testing platform that focuses on load and latency testing. Ddosify offers a suite of three products:

1. Ddosify Engine: An open source, single-node, load-testing tool (6K+ stars) that can be used to test your application from your terminal using a simple JSON file. Ddosify is written in Golang and can be deployed on Linux, macOS, and Windows. Developers and small companies are using Ddosify Engine to test their applications. The tool is available on GitHub.

2. Ddosify Cloud: An open core SaaS platform that allows you to test your application without any programming expertise. Ddosify Cloud uses the Ddosify Engine in a distributed manner and provides a web interface for generating load test scenarios without code. Users can test their applications from different locations around the world and generate advanced reports. The platform is built with technologies including Docker, Kubernetes, InfluxDB, RabbitMQ, React.js, Golang, AWS, and PostgreSQL, all working together transparently for the user. This tool is available on the Ddosify website.

3. Ddosify Docker Extension: This tool is similar to the Ddosify Engine but adds an easy-to-use interface thanks to the extension capability of Docker Desktop, so you can load test your application from within Docker Desktop. The Ddosify Docker Extension is available free of charge from the Docker Extensions Marketplace, and its repository is open source and available on GitHub.

In this article, we will focus on the Ddosify Docker Extension.

The architecture of Ddosify

Ddosify Docker Extension uses the Ddosify Engine as a base image under the hood. We collect settings, including request count, duration, and headers, from the extension UI and send them to the Ddosify Engine. 

The Ddosify Engine performs the load testing and returns the results to the extension. The extension then displays the results to the user (Figure 1). 

Illustration showing that the Ddosify Engine performs the load testing and returns the results to the extension. The extension then displays the results to the user.
Figure 1: Overview of Ddosify.

Why Ddosify?

Ddosify is easy to use and offers many features, including dynamic variables, CSV data import, various load types, correlation, and assertion. Ddosify also has different options for different use cases. If you are an individual developer, you can use the Ddosify Engine or Ddosify Docker Extension free of charge. If you need code-free load testing, advanced reporting, multi-geolocation, and more requests per second (RPS), you can use the Ddosify Cloud. 

With Ddosify, you can: 

  • Identify performance issues of your application by simulating high user traffic.
  • Optimize your infrastructure and ensure that you are only paying for the resources that you need.
  • Identify bugs before your customers do. Some bugs are only triggered under high load.
  • Measure your system capacity and identify its limitations.

Why run Ddosify as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With Ddosify Docker Extension, you can easily perform load testing on your application from within Docker Desktop. You don’t need to install anything on your machine except Docker Desktop. Features of Ddosify Docker Extension include:

  • Strong community with 6K+ GitHub stars and a total of 1M+ downloads on all platforms. Community members contribute by proposing/adding features and fixing bugs.
  • Currently supports HTTP and HTTPS protocols. Other protocols are on the way.
  • Supports various load types. Test your system’s limits across different load types, including:
    • Linear
    • Incremental
    • Waved
  • Dynamic variables (parameterization) support. Just like Postman, Ddosify supports dynamic variables.
  • Save load testing results as PDF.

Getting started

As a prerequisite, you need Docker Desktop 4.10.0 or higher installed on your machine. You can download Docker Desktop from our website.

Step 1: Install Ddosify Docker Extension

Because Ddosify is an extension partner of Docker, you can easily install Ddosify Docker Extension from the Docker Extensions Marketplace (Figure 2). Start Docker Desktop and select Add Extensions. Next, filter by Testing Tools and select Ddosify. Click on the Install button to install the Ddosify Docker Extension. After a few seconds, Ddosify Docker Extension will be installed on your machine.

Screenshot showing how to install Ddosify Docker. Start Docker Desktop and select Add Extensions. Next, filter by Testing Tools and select Ddosify. Click on the Install button to install the Ddosify Docker Extension. After a few seconds, Ddosify Docker Extension will be installed on your machine.
Figure 2: Installing Ddosify.

Step 2: Start load testing

You can start load testing your application from the Docker Desktop (Figure 3). Start Docker Desktop and click on the Ddosify icon in the Extensions section. The UI of the Ddosify Docker Extension will be opened.

Screenshot showing how to start load testing your application from the Docker Desktop. Start Docker Desktop and click on the Ddosify icon in the Extensions section. The UI of the Ddosify Docker Extension will be opened.
Figure 3: Starting load testing.

You can start load testing by entering the target URL of your application. You can choose HTTP Methods (GET, POST, PUT, DELETE, etc.), protocol (HTTP, HTTPS), request count, duration, load type (linear, incremental, waved), timeout, body, headers, basic auth, and proxy settings. We chose the following values: 

  • URL: https://testserver.ddosify.com/account/register/
  • Method: POST
  • Protocol: HTTPS
  • Request Count: 100
  • Duration: 5
  • Load Type: Linear
  • Timeout: 10
  • Body: {"username": "{{_randomUserName}}", "email": "{{_randomEmail}}", "password": "{{_randomPassword}}"}
  • Headers: User-Agent: DdosifyDockerExtension/0.1.2, Content-Type: application/json

In this configuration, we are sending 100 requests to the target URL for 5 seconds (Figure 4). The RPS is 20. The target URL is a test server that is used to register new users with body parameters. We are using dynamic variables (random) for username, email, and password in the body. You can learn more about dynamic variables from the Ddosify documentation.

UI screenshot showing 100 requests sending to the target URL for 5 seconds.
Figure 4: Parameters for sample load test.
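
For reference, the same settings correspond roughly to a Ddosify Engine configuration file. The sketch below is an approximation (field names follow the Engine’s JSON format at the time of writing and may differ between versions) and could be run from a terminal instead of the extension UI:

# Sketch: approximate Engine config equivalent of the extension settings above
cat > config.json <<'EOF'
{
    "request_count": 100,
    "load_type": "linear",
    "duration": 5,
    "steps": [
        {
            "id": 1,
            "url": "https://testserver.ddosify.com/account/register/",
            "method": "POST",
            "timeout": 10,
            "headers": {
                "User-Agent": "DdosifyDockerExtension/0.1.2",
                "Content-Type": "application/json"
            },
            "payload": "{\"username\": \"{{_randomUserName}}\", \"email\": \"{{_randomEmail}}\", \"password\": \"{{_randomPassword}}\"}"
        }
    ]
}
EOF
ddosify -config config.json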

Then click on the Start Load Test button to begin load testing. The results will be displayed in the UI (Figure 5).

Click on the Start Load Test button to begin load testing. The results will be displayed in the UI. In this screenshot, we see a list of failed runs and successful runs, 48 users created, and a server that did not respond to 32 requests.
Figure 5: Ddosify test results.

The test results include the following information:

  • 48 requests successfully created users (Response Code: 201).
  • 20 requests failed to create users because the username or email already existed on the server (Response Code: 400).
  • 32 requests failed to create users because of the timeout (the server could not respond within 10 seconds), so we should increase the timeout value or optimize the server.

You can also save the load test results. Click on the Report button to save the results as a PDF file (Figure 6).

Screenshot showing the Report button to click to save the results as a PDF file.
Figure 6: Save results as PDF.

Conclusion

In this article, we showed how to install Ddosify Docker Extension and quickly start load testing your application from Docker Desktop. We created random users on a test server with 100 requests for 5 seconds, and we saw that the server could not handle all the requests because of the timeout. 

If you need help with Ddosify, you can create an issue on our GitHub repository or join our Discord server.

Resources

]]>
Distributed Cloud-Native Graph Database with NebulaGraph Docker Extension https://www.docker.com/blog/distributed-cloud-native-graph-database-nebulagraph-docker-extension/ Thu, 09 Mar 2023 15:00:00 +0000 https://www.docker.com/?p=40887 Graph databases have become a popular solution for storing and querying complex relationships between data. As the amount of graph data grows and the need for high concurrency increases, a distributed graph database is essential to handle the scale.

Finding a distributed graph database that automatically shards the data, while allowing businesses to scale from small to trillion-edge-level without changing the underlying storage, architecture of the service, or application code, however, can be a challenge. 

In this article, we’ll look at NebulaGraph, a modern, open source database to help organizations meet these challenges.

banner nebulagraph extension

Meet NebulaGraph

NebulaGraph is a modern, open source, cloud-native graph database, designed to address the limitations of traditional graph databases, such as poor scalability, high latency, and low throughput. NebulaGraph is also highly scalable and flexible, with the ability to handle large-scale graph data ranging from small to trillion-edge-level.

NebulaGraph has built a thriving community of more than 1000 enterprise users since 2018, along with a rich ecosystem of tools and support. These benefits make it a cost-effective solution for organizations looking to build graph-based applications, as well as a great learning resource for developers and data scientists.

The NebulaGraph cloud-native database also offers Kubernetes Operators for easy deployment and management in cloud environments. This feature makes it a great choice for organizations looking to take advantage of the scalability and flexibility of cloud infrastructure.

Architecture of the NebulaGraph database

NebulaGraph consists of three services: the Graph Service, the Storage Service, and the Meta Service (Figure 1). The Graph Service, which consists of stateless processes (nebula-graphd), is responsible for graph queries. The Storage Service (nebula-storaged) is a distributed (Raft-based) storage layer that persistently stores the graph data. The Meta Service manages user accounts, schema information, and jobs. With this design, NebulaGraph offers great scalability, high availability, cost-effectiveness, and extensibility.

nebulagraph database architecture infographic
Figure 1: Overview of NebulaGraph services.

Why NebulaGraph?

NebulaGraph is ideal for graph database needs because of its architecture and design, which allow for high performance, scalability, and cost-effectiveness. The architecture follows a separation of storage and computing architecture, which provides the following benefits:

  • Automatic sharding: NebulaGraph automatically shards graph data, allowing businesses to scale from small to trillion-edge-level data volumes without having to change the underlying storage, architecture, or application code.
  • High performance: With its optimized architecture and design, NebulaGraph provides high performance for complex graph queries and traversal operations.
  • High availability: If part of the Graph Service fails, the data stored by the Storage Service remains intact.
  • Flexibility: NebulaGraph supports property graphs and provides a powerful query language, called Nebula Graph Query Language (nGQL), which supports complex graph queries and traversal operations. 
  • Support for APIs: It provides a range of APIs and connectors that allow it to integrate with other tools and services in a distributed system.

Why run NebulaGraph as a Docker Extension?

In production environments, NebulaGraph can be deployed on Kubernetes or in the cloud, hiding the complexity of cluster management and maintenance from the user. However, for development, testing, and learning purposes, setting up a NebulaGraph cluster on a desktop or local environment can still be a challenging and costly process, especially for users who are not familiar with containers or command-line tools.

This is where the NebulaGraph Docker Extension comes in. It provides an elegant and easy-to-use solution for setting up a fully functional NebulaGraph cluster in just a few clicks, making it the perfect choice for developers, data scientists, and anyone looking to learn and experiment with NebulaGraph.

Getting started with NebulaGraph in Docker Desktop

Setting up

Prerequisites: Docker Desktop 4.10 or later.

Step 1: Enable Docker Extensions

Within Docker Desktop, confirm that the Docker Extensions feature is enabled (Figure 2): go to Settings > Extensions and select Enable Docker Extensions.

docker desktop extensions settings
Figure 2: Enabling Docker Extensions within the Docker Desktop.

All Docker Extensions system containers are hidden by default; to make them visible, go to Settings > Extensions and check Show Docker Extensions system containers.

Step 2: Install the NebulaGraph Docker Extension

The NebulaGraph extension is available from the Extensions Marketplace in Docker Desktop and on Docker Hub. To get started, search for NebulaGraph in the Extensions Marketplace, then select Install (Figure 3).

nebulagraph extensions marketplace
Figure 3: Installing NebulaGraph from the Extensions Marketplace.

This step will download and install the latest version of the NebulaGraph Docker Extension from Docker Hub. You can see the installation process by clicking Details (Figure 4).

nebulagraph installation progress docker desktop
Figure 4: Installation progress.

Step 3: Waiting for the cluster to be up and running

After the extension is installed, the first run normally takes fewer than 5 minutes for the cluster to become fully functional. While waiting, we can quickly go through the Home tab and the Get Started tab to see details of NebulaGraph and NebulaGraph Studio, the web-based GUI.

We can also confirm whether it’s ready by observing the containers’ status from the Resources tab of the Extension as shown in Figure 5.

nebulagraph resources docker desktop
Figure 5: Checking the status of containers.

Step 4: Get started with NebulaGraph

After the cluster is healthy, we can follow the Get Started steps to log in to the NebulaGraph Studio, then load the initial dataset, and query the graph (Figure 6).

nebulagraph get started docker desktop
Figure 6: Logging in to NebulaGraph Studio.
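
Once you are logged in, the Studio console can also confirm that the Graph, Meta, and Storage services described earlier are registered and online. This is a minimal sketch, assuming the default cluster deployed by the extension; the exact output depends on your setup.

// List the hosts registered for each service in the cluster
SHOW HOSTS;
SHOW HOSTS GRAPH;
SHOW HOSTS META;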

Step 5: Learn more from the starter datasets 

In a graph database, the focus is on the relationships between the data. With the starter datasets available in NebulaGraph Studio, you can get a better understanding of these relationships. All you need to do is click the Download button on each dataset card on the welcome page (Figure 7).

nebulagraph starter datasets
Figure 7: Starter datasets.

For example, in the demo_sns (social network) dataset, you can find new friend recommendations by identifying second-degree friends who share the most mutual friends with a given user, such as Einstein. A query along the lines of the following sketch does this (the person tag and friend_of edge type are assumptions here; check the schema that ships with the dataset in Studio):
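
// Sketch only: the person tag and friend_of edge type are assumptions;
// adjust them to the demo_sns schema shown in Studio. Existing friends
// are not filtered out here, to keep the example short.
MATCH (me:`person`)-[:`friend_of`]-(friend:`person`)-[:`friend_of`]-(candidate:`person`)
WHERE id(me) == "Einstein" AND id(candidate) != "Einstein"
RETURN id(candidate) AS new_friend, count(friend) AS mutual_friends
ORDER BY mutual_friends DESC
LIMIT 10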

nebula console demo sns
Figure 8: Query results shown in the Nebula console.

Instead of just displaying the query results, you can also return the entire pattern and easily gain insights. For example, in Figure 9, we can see LeBron James is on two mutual friend paths with Tim:

nebula console demo sns results graph
Figure 9: Graphing the query results.

Another example can be found in the demo_fraud_detection (graph of loan) dataset, where you can perform a 10-degree check for risky applicants, as shown in the following query:

MATCH p_=(p:`applicant`)-[*1..10]-(p2:`applicant`)
WHERE id(p) == "p_200" AND p2.`applicant`.is_risky == "True"
RETURN p_ LIMIT 100

The results shown in Figure 10 indicate that this applicant is suspected to be risky because of their connection to p_190.

nebula console demo fraud detection graph
Figure 10: Results of query showing fraud detection risk.

By exploring the relationships between data points, we can gain deeper insights into our data and make more informed decisions. Whether you are interested in finding new friends, detecting fraudulent activity, or any other use case, the starter datasets provide a valuable starting point.

We encourage you to download the datasets, experiment with different queries, and see what new insights you can uncover, then share with us in the NebulaGraph community.

Try NebulaGraph for yourself

To learn more about NebulaGraph, visit our website, documentation site, star our GitHub repo, or join our community chat.

]]>
February Extensions: Easily Connect Local Containers to a Kubernetes Cluster and More https://www.docker.com/blog/new-docker-extensions-february-2023/ Tue, 28 Feb 2023 15:00:00 +0000 https://www.docker.com/?p=40846

Although February is the shortest month of the year, we’ve been busy at Docker and we have new Docker Extensions to share with you. Docker extensions build new functionality into Docker Desktop, extend its existing capabilities, and allow you to discover and integrate additional tools that you’re already using with Docker. Let’s look at the exciting new extensions from February.

And, if you’d like to see everything that’s available, check out our full Extensions Marketplace.

banner 2023 feb extensions roundup

Get visibility on your Kubernetes cluster 

Do you need to harden your Kubernetes cluster but lack the visibility to do so? With the Kubescape extension for Docker Desktop, you can secure your Kubernetes cluster and gain insight into your cluster’s security posture via an easy-to-use online dashboard.

The Kubescape extension works by installing the Kubescape in-cluster components, connecting them to ARMO Platform, and surfacing insights into the Kubernetes cluster deployed by Docker Desktop via the platform’s dashboard.

With the Kubescape extension, you can:

  • Regularly scan your configurations and images
  • Visualize your RBAC rules
  • Receive automatic fix suggestions where applicable

Read Secure Your Kubernetes Clusters with the Kubescape Docker Extension to learn more.

kubescape extension docker setup

Connect your local containers to any Kubernetes cluster

Do you need a fast and dependable way to connect your local containers to any Kubernetes cluster? With Gefyra for Docker Desktop, you can easily bridge running containers into Kubernetes clusters. Gefyra aims to ease the burdens of Kubernetes-based development for developers with a seamless integration as an extension in Docker Desktop. 

The Gefyra extension lets you run a container locally and connect it to a Kubernetes cluster so you can:

  • Talk to other services
  • Let other services talk to your local container
  • Debug
  • Achieve faster iterations — no build/push/deploy/repeat
The Gefyra extension setup process in Docker Desktop, including kubernetes settings, container settings, and container start.

Deploy Alfresco using Docker containers

The Alfresco Docker extension simplifies deploying the Alfresco Digital Business Platform for testing purposes. This extension provides a single Run button in the UI to run all the containers required behind the scenes, so you can spend less time configuring and more time building and testing your product.

With the Alfresco extension on Docker Desktop, you can:

  • Pull latest Alfresco Docker images
  • Run Alfresco Docker containers
  • Use Alfresco deployment locally in your browser
  • Stop deployment and recover your system to initial status
alfresco deployment tool docker desktop

Easily deploy and test NebulaGraph

NebulaGraph is a popular open source graph database that can handle large volumes of data with millisecond latency, scale up quickly, and perform fast graph analytics. With the NebulaGraph extension on Docker Desktop, you can test, learn, and develop on top of the distributed version of the NebulaGraph core, in one click. 

nebulargraph extension docker setup

Check out the latest Docker Extensions with Docker Desktop

Docker is always looking for ways to improve the developer experience. Check out these resources for more information on Docker Extensions:

]]>
Docker Desktop 4.17: New Functionality for a Better Development Experience https://www.docker.com/blog/docker-desktop-4-17-new-development-functionality/ Mon, 27 Feb 2023 16:22:03 +0000 https://www.docker.com/?p=40830 We’re excited to announce the Docker Desktop 4.17 release, which introduces new functionality into Docker Desktop to improve your developer experience. With Docker Desktop 4.17, you’ll have easier access to vulnerability data and recommendations on how to act on that information. Also, we’re making it easier than ever to bring the tools you already love into Docker Desktop with self-published Docker Extensions.

Read on to check out the highlights from this release.

banner 4.17 docker desktop

Improved local image analysis

Container image security presents challenges such as dependency awareness, vulnerability awareness, and practical remediation in day-to-day reality. Since Docker Desktop 4.14, we’ve consistently added features to help you understand your images and their vulnerabilities. Improvements in 4.17 were designed with developers in mind to address software supply chain security. 

We’re pleased to announce Early Access to the new Docker Scout service. Docker Scout provides visibility into vulnerabilities and recommendations for quick remediation. Now you can use Docker Scout to analyze and remediate vulnerabilities on local images in Docker Desktop and the Docker CLI. 

Check out the Docker Scout documentation to learn more about how to get started.

What can you do with Docker Scout?

  • Image analysis results: Filter images based on vulnerability information, look for specific vulnerabilities, or confirm when vulnerabilities have been remediated. You’ll see results based on the layer in which a vulnerability is introduced, so you know exactly where the alert is coming from.
  • Remediation advice: Get guidance on available remediation options. Docker Scout shows you the recommended remediation path depending on the layer of the vulnerability. Docker Scout also shows a preview before you resolve anything, so you know how many vulnerabilities will be resolved by a specific update.
docker scout fixes for base image
  • Remote registries: You can use Docker Desktop to view and pull images from Artifactory repositories to analyze them.
  • Command-line interface: As of Docker Desktop 4.17, the docker scan command is deprecated and replaced with a command for Docker Scout – docker scout. Read the release notes for more detail. 
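
As a rough sketch of the new CLI (the image name below is a placeholder; run docker scout --help for the full list of subcommands in your version), you can get an overview of an image, list its CVEs, and ask for base-image recommendations:

# Placeholder image name; substitute one of your own local images
docker scout quickview myorg/myimage:latest
docker scout cves myorg/myimage:latest
docker scout recommendations myorg/myimage:latest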

Update to Docker Desktop 4.17 to access these new features and take them for a test run. You can also provide feedback directly in Docker Desktop by navigating to the images tab and selecting Give feedback. We look forward to hearing from you!  

A new way to publish Docker Extensions

We are excited to introduce a new way to publish a Docker Extension. When submitting an extension to the Marketplace, you now have two publishing options:

  • Docker Reviewed
  • Self-Published – New!

Self-Published extensions are automatically validated. If all validation checks pass, the extension is published on the Extensions Marketplace and accessible to all users within a few hours. Self-Published is the fastest way to get developers the tools they need and to get feedback from them as you work to evolve and polish your extension. 

Developers can identify self-published extensions in the Extensions Marketplace by the not reviewed label. Extensions that are manually reviewed by the Docker Extensions team have a reviewed label, as shown in the following screenshot. 

self published docker extension

We are excited about the increased reach and accessibility the new self-publishing workflow brings to both Docker Extension publishers and users. 

If you have an idea for an extension that isn’t already in the Extensions Marketplace, you can submit it to our ideas discussion board.

Let us know what you think

Thanks for using Docker Desktop! Learn more about what’s in store with our public roadmap on GitHub, and let us know what other features you’d like to see.

Check out the release notes for a full breakdown of what’s new in Docker Desktop 4.17.

]]>
Secure Your Kubernetes Clusters with the Kubescape Docker Extension https://www.docker.com/blog/secure-kubernetes-with-kubescape-extension/ Tue, 21 Feb 2023 15:00:00 +0000 https://www.docker.com/?p=40587 Container adoption in enterprises continues to grow, and Kubernetes has become the de facto standard for deploying and operating containerized applications. At the same time, security is shifting left and should be addressed earlier in the software development lifecycle (SDLC). Security has morphed from being a static gateway at the end of the development process to something that (ideally) is embedded every step of the way. This can potentially increase the effort for engineering and DevOps teams.

kubescape extension banner

Kubescape, a CNCF project initially created by ARMO, is intended to solve this problem. Kubescape provides a self-service, simple, and easily actionable security solution that meets developers where they are: Docker Desktop.

What is Kubescape?

Kubescape is an open source Kubernetes security platform for your IDE, CI/CD pipelines, and clusters.

Kubescape includes risk analysis, security compliance, and misconfiguration scanning. Targeting all security stakeholders, Kubescape offers an easy-to-use CLI interface, flexible output formats, and automated scanning capabilities. Kubescape saves Kubernetes users and admins time, effort, and resources.

How does Kubescape work?

Security researchers and professionals codify best practices in controls: preventative, detective, or corrective measures that can be taken to avoid — or contain — a security breach. These are grouped in frameworks by government and non-profit organizations such as the US Cybersecurity and Infrastructure Security Agency, MITRE, and the Center for Internet Security.

Kubescape contains a library of security controls that codify Kubernetes best practices derived from the most prevalent security frameworks in the industry. These controls can be run against a running cluster or manifest files under development. They’re written in Rego, the purpose-built declarative policy language used by Open Policy Agent (OPA).

Kubescape is commonly used as a command-line tool. It can be used to scan code manually or can be triggered by an IDE integration or a CI tool. By default, the CLI results are displayed in a console-friendly manner, but they can be exported to JSON or JUnit XML, rendered to HTML or PDF, or submitted to ARMO Platform (a hosted backend for Kubescape).
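
For example, here is a minimal sketch of that CLI workflow against the cluster in your current kubeconfig context. Flags and framework names can vary between Kubescape releases, so check kubescape --help for your version.

# Scan the current cluster and print console-friendly results
kubescape scan
# Scan against a specific framework and export the results as JSON (flags may vary by version)
kubescape scan framework nsa --format json --output results.json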

kubescape command line interface diagram

Regular scans can be run using an in-cluster operator, which also enables the scanning of container images for known vulnerabilities.

kubescape scan in cluster operator

Why run Kubescape as a Docker extension?

Docker extensions are fundamental for building and integrating software applications into daily workflows. With the Kubescape Docker Desktop extension, engineers can easily shift security left without changing work habits.

The Kubescape Docker Desktop extension helps developers adopt security hygiene as early as the first lines of code. As shown in the following diagram, Kubescape enables engineers to adopt security as they write code during every step of the SDLC.

Specifically, the Kubescape in-cluster component triggers periodic scans of the cluster and shows results in ARMO Platform. Findings shown in the dashboard can be further explored, and the extension provides users with remediation advice and other actionable insights.

kubescape scan in cluster operator

Installing the Kubescape Docker extension

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Initial setup

In Docker Desktop, confirm that the Docker Extensions feature is enabled. (Docker Extensions should be enabled by default.) In Settings > Extensions, select the Enable Docker Extensions box.

step one enable kubescape extension

You must also enable Kubernetes under Preferences.

step one enable kubernetes

Kubescape is in the Docker Extensions Marketplace. 

The following instructions walk through installing Kubescape in Docker Desktop. After the extension scans automatically, the results will be shown in ARMO Platform.

Step 2: Add the Kubescape extension

Open Docker Desktop and select Add Extensions to find the Kubescape extension in the Extensions Marketplace.

step two add kubescape extension

Step 3: Installation

Install the Kubescape Docker Extension.

step three install kubescape extension

Step 4: Register and deploy

Once the Kubescape Docker Extension is installed, you’re ready to deploy Kubescape.

step four register deploy kubescape select provider

Currently, the only hosting provider available is ARMO Platform. We’re looking forward to adding more soon.

step four register deploy kubescape sign up

To link up your cluster, the host requires an ARMO account.

step four register deploy kubescape connect armo account

After you’ve linked your account, you can deploy Kubescape.

step four register deploy kubescape deploy

Accessing the dashboard

Once your cluster is deployed, you can view the scan output on your host (ARMO Platform) and start improving your cluster’s security posture immediately.

armo scan dashboard

Security compliance

One step to improve your cluster’s security posture is to protect against the threats posed by misconfigurations.

ARMO Platform will display any misconfigurations in your YAML, offer information about severity, and provide remediation advice. These scans can be run against one or more of the frameworks offered and can run manually or be scheduled to run periodically.

armo scan misconfigurations

Vulnerability scanning

Another step to improve your cluster’s security posture is protecting against threats posed by vulnerabilities in images.

The Kubescape vulnerability scanner scans the container images in the cluster right after the first installation and uploads the results to ARMO Platform. It can also scan new images as they are deployed to the cluster. Scans can be carried out manually or periodically, based on configurable cron jobs.

armo kubescape vulnerability scanner

RBAC Visualization

With ARMO Platform, you can also visualize Kubernetes RBAC (role-based access control), which allows you to dive deep into account access controls. The visualization makes pinpointing over-privileged accounts easy, and you can reduce your threat landscape with well-defined privileges. The following example shows a subject with all privileges granted on a resource.

armo kubescape rbac visualizer

Kubescape, using ARMO Platform as a portal for additional inquiry and investigation, helps you strengthen and maintain your security posture.

Next steps

The Kubescape Docker extension brings security to where you’re working. Kubescape lets you shift security to the beginning of the development process by helping you implement security best practices from the first line of code. You can use the Kubescape CLI to get insights, or export them to ARMO Platform for easy review and remediation advice.

Give the Kubescape Docker extension a try, and let us know what you think at cncf-kubescape-users@lists.cncf.io.

]]>