Shorter Feedback Loops Developing Java Apps with Digma’s Free Docker Extension
https://www.docker.com/blog/shorter-feedback-loops-developing-java-apps-with-digmas-free-docker-extension/

Many engineering teams follow a similar process when developing Docker images: the code is developed, tested, and built, and the resulting images are released as the next version of the application. Each of these stages can yield a significant amount of data, offering valuable insight into the impact of code changes and a clearer understanding of the stability and maturity of the product’s features.

Observability has made a huge leap in recent years, and advanced technologies such as OpenTelemetry (OTEL) and eBPF have simplified the process of collecting runtime data for applications. Yet, for all that progress, developers may still be on the fence about whether and how to use this new resource. The effort required to transform the raw data into something meaningful and beneficial for the development process may seem to outweigh the advantages offered by these technologies.

The practice of continuous feedback envisions a development flow in which this particular problem has been solved. By making the evaluation of code runtime data continuous, developers can benefit from shorter feedback loops and tighter control of their codebase. Instead of waiting for issues to develop in production systems, or even surface in testing, various developer tools can watch the code observability data for you and provide early warning and rigorous linting for regressions, code smells, or issues. 

At the same time, gathering information back into the code from multiple deployment environments, such as the test or production environment, can help developers understand how a specific function, query, or event is performing in the real world at a single glance.


Meet Digma, the first linter for your runtime data 

Digma is a free developer tool created to bridge the continuous feedback gap in the DevOps loop. It aims to make sense of the gazillion metrics, traces, and logs the code is spewing out — so that developers won’t have to. And it does this continuously and automatically.

To make the tool even more practical, Digma processes the observability data and uses it to lint the source code itself, right in the IDE. From a developer’s perspective, this reveals a whole new level of information about the code — allowing for better code design based on real-world feedback, as well as a quick turnaround for fixing regressions and avoiding common pitfalls and anti-patterns.

How Digma is deployed

Digma is packaged as a self-contained Docker extension so that developers can evaluate their code without registering with an external service or sharing any data that might not be allowed by corporate policy (Figure 1). As such, Digma acts as your own intelligent agent for monitoring code execution, especially in development and testing. The backend component tracks execution over time and highlights any inadvertent changes to behavior, performance, or error patterns that should be addressed.

To collect data about the code, behind the scenes Digma leverages OpenTelemetry, a widely used open standard for modern observability. To make getting started easier, Digma takes care of the configuration work for you, so from the developer’s point of view, getting started with continuous feedback is equivalent to flipping a switch.

Figure 1: Overview of Digma, showing feedback providers and observability sources.

Once the information is collected using OTEL, Digma runs it through what could be described as a reverse pipeline: aggregating the data, analyzing it, and providing it via an API to multiple outlets, including the IDE plugin, Jaeger, and the Docker extension dashboard that we’ll describe in this example.

Note: Currently, Digma supports Java applications running on IntelliJ IDEA; however, support for additional languages and platforms will be rolling out in the coming months.

Installing Digma

Prerequisites: Docker Desktop 4.8 or later.

Note: You must ensure that Docker Extensions are enabled (Figure 2).

Figure 2: Enabling Docker Extensions in Docker Desktop settings.

Step 1. Installing the Digma Docker Extension

In the Extensions Marketplace, search for Digma and select Install (Figure 3).

Figure 3: Searching the Extensions Marketplace for Digma and selecting Install.

Step 2. Collecting feedback about your code

After installing the Digma extension from the marketplace, you’ll be directed to a quickstart page that will guide you through the next steps to collect feedback about your code. Digma can collect information from two different sources:

  • Your tests/debug sessions within the IDE
  • Any Docker containers you may have running in Docker Desktop

Getting feedback while debugging/running code in the IDE

Digma’s IDE extension makes the process of collecting data about your code in the IDE trivial. The entire setup is reduced to a single toggle button click (Figure 4). Behind the scenes, Digma adds variables to the runtime configuration so that it uses OpenTelemetry for observability. No prior knowledge of observability is needed, and no code changes are necessary to get this to work.

With the observability toggle enabled, you’ll start getting immediate feedback as you run your code locally, or even when you execute your tests. Digma will be on the lookout for any regressions and will automatically spot code smells and issues.

Figure 4: The observability toggle switch enabled in the Digma plugin.

Getting feedback from running containers

In addition to the IDE, you can also choose to collect data from any container running Java code on your machine. The process is extremely simple and is also documented on the Digma extension’s Getting Started page. It does not involve changing any Dockerfile or code: you download the OTEL agent, mount it to a volume on the container, and set a few environment variables that point the Java process to it.

# Download the OpenTelemetry Java agent and the Digma agent extension into ./otel
curl --create-dirs -O -L --output-dir ./otel \
  https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar

curl --create-dirs -O -L --output-dir ./otel \
  https://github.com/digma-ai/otel-java-instrumentation/releases/latest/download/digma-otel-agent-extension.jar

# Attach the agent (curl -O keeps the remote file name, opentelemetry-javaagent.jar) and the Digma
# extension, and point the OTLP exporter at the Digma collector. Depending on your Docker network
# setup, a container may need host.docker.internal instead of localhost to reach the collector.
export JAVA_TOOL_OPTIONS="-javaagent:/otel/opentelemetry-javaagent.jar -Dotel.exporter.otlp.endpoint=http://localhost:5050 -Dotel.javaagent.extensions=/otel/digma-otel-agent-extension.jar"
export OTEL_SERVICE_NAME={--ENTER YOUR SERVICE NAME HERE--}
export DEPLOYMENT_ENV=LOCAL_DOCKER

# Mount the downloaded agents into the container and pass the environment variables through
docker run -d -v "/$(pwd)/otel:/otel" --env JAVA_TOOL_OPTIONS --env OTEL_SERVICE_NAME --env DEPLOYMENT_ENV {-- APPEND PARAMS AND REPO/IMAGE --}
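
For illustration only, here is roughly what a complete invocation could look like once the placeholders are filled in. The service name, port mapping, and image name below are hypothetical examples for this sketch, not values taken from the Digma instructions:

# Hypothetical values, shown only to illustrate the shape of the final command
export OTEL_SERVICE_NAME=petclinic-service
export DEPLOYMENT_ENV=LOCAL_DOCKER

docker run -d \
  -v "/$(pwd)/otel:/otel" \
  --env JAVA_TOOL_OPTIONS --env OTEL_SERVICE_NAME --env DEPLOYMENT_ENV \
  -p 9753:9753 \
  my-org/spring-petclinic:latest   # hypothetical image name

From that point on, the traffic handled by the containerized service flows through the OpenTelemetry agent and shows up in Digma alongside the data collected from the IDE.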

Uncovering how the code behaves at runtime

Whether you collect data from running containers or in the IDE, Digma will continuously analyze the code behavior and make the analysis results available both as code annotations in the IDE and as dashboards in the Digma extension itself. To demonstrate a live example, we’ve created a sample application based on the Java Spring Boot PetClinic app. 

In this scenario, we’ll clone the repo and run the code from the IDE, triggering a few actions to see what they reveal about the code. For convenience, we’ve created run configurations to simulate common API calls and create interesting data:

  1. Open the PetClinic sample project in your IntelliJ IDE.
  2. Run the petclinic-service configuration.
  3. Access the API and play around with it at http://localhost:9753/ or simply run the ClientTested run config included in the workspace; a few command-line requests (shown below) also work.
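
If you prefer the command line, a handful of requests is enough to generate traces for Digma to analyze. The endpoint paths below assume the standard PetClinic routes; adjust them to whatever your copy of the sample actually exposes:

# Generate some traffic for Digma to observe (paths assume the standard PetClinic controllers)
curl -s http://localhost:9753/
curl -s http://localhost:9753/vets
curl -s "http://localhost:9753/owners?lastName="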

Almost immediately, the results of the dynamic linting analysis will start appearing over the code in the IDE (Figure 5):

Figure 5: Digma insights shown after analysis.

At the same time, the Digma extension itself presents a dashboard cataloging all of the assets that have been identified, including code locations, API endpoints, database queries, and more (Figure 6). For each asset, you can access basic statistics about its performance as well as a more detailed list of runtime issues that may need attention.

Figure 6: The Digma dashboard listing the assets that have been identified.

Using the observability data

One of the main problems Digma tries to solve is not how to collect observability data but how to turn it into a useful, practical asset that can speed up development and improve the code. Digma’s insights can be applied directly at design time, based on existing data, and used to validate changes as they run in dev, test, and production, bringing early feedback and shorter loops into the development process.

Examples of code insights for design-time planning:

  • See runtime usage for the modified functions and understand who will be affected by the change
  • Understand the concurrency, bottlenecks, and criticality you need to plan for
  • Review existing errors and exceptions that the new component will need to handle
  • See a complete visualization of the flow of control through the modified code

Examples of runtime code validations for shorter feedback loops:

  • Catch code and query modeling smells, such as N+1 selects, which are detected and highlighted automatically
  • Identify new performance bottlenecks and regressions caused by the changes
  • Spot scaling issues earlier and address them as part of the dev cycle

As Digma evolves, it continues to track and provide clear visibility into the areas that matter to developers, with the goal of complete clarity about how each piece of code behaves in real-world scenarios and of catching issues and regressions much earlier in the process.

Who should use Digma?

Unlike many other observability solutions, Digma is code-first and developer-first. Digma is completely free for developers, does not require any code changes or data sharing, and will get you from zero to impactful data about your code within minutes. If you work on Java code in a JetBrains IDE and want to improve your code with actual execution data, you can get started by picking up the Digma extension from the marketplace.

You can provide feedback in our Slack channel and tell us where Digma improved your dev cycle.

Configure, Manage, and Simplify Your Observability Data Pipelines with the Calyptia Core Docker Extension
https://www.docker.com/blog/manage-observability-data-pipelines-with-calyptia-core-docker-extension/

This post was co-written with Eduardo Silva, Founder and CEO of Calyptia.



Applications produce a lot of observability data. And it can be a constant struggle to source, ingest, filter, and output that data to different systems. Managing these observability data pipelines is essential for being able to leverage your data and quickly gain actionable insights.

In cloud and containerized environments, Fluent Bit is a popular choice for marshaling data across cloud-native environments. A super fast, lightweight, and highly scalable logging and metrics processor and forwarder, it recently reached three billion downloads.
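
If you haven’t used Fluent Bit before, it’s easy to try on its own with Docker. The minimal sketch below uses the public fluent/fluent-bit image to read CPU metrics and print them to stdout; it isn’t part of the Calyptia Core setup, just a quick way to see the engine at work:

# Run Fluent Bit standalone: collect CPU metrics and flush them to stdout every second
docker run --rm -it fluent/fluent-bit \
  -i cpu -o stdout -f 1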

Calyptia Core, from the creators of Fluent Bit, further simplifies the data collection process with a powerful processing engine. Calyptia Core lets you create custom observability data pipelines and take control of your data.

And with the new Calyptia Core Docker Extension, you can build and manage observability pipelines within Docker Desktop. Let’s take a look at how it works!

Diagram for Calyptia Core observability pipelines.

What is Calyptia Core?

Calyptia Core plugs into your existing observability and security infrastructure to help you process large amounts of logs, metrics, security, and event data. With Calyptia Core, you can:

  • Connect common sources to the major destinations (e.g., Splunk, Datadog, Elasticsearch, etc.)
  • Process 100k events per second per replica with efficient routing
  • Automatically collect data from Kubernetes and its various flavors (GKE, EKS, AKS, OpenShift, Tanzu, etc.)
  • Build reliability into your data pipeline at scale to debug data issues

Why Calyptia Core?

Observability as a concept is common in the day-to-day life of engineers. But the different data standards, data schemas, storage backends, and dev stacks contribute to tool fatigue, resulting in lower developer productivity and increased total cost of ownership.  

Calyptia Core aims to simplify the process of building an observability pipeline. You can also augment the streaming observability data to add custom markers and discard or mask unneeded fields.  

Why run Calyptia Core as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With Calyptia Core as a Docker Extension, you now have an easier, faster way to deploy Calyptia Core.

Once the extension is installed and started, you’ll have a running Calyptia Core instance. This allows you to easily define and manage your observability pipelines and concentrate on what matters most — discovering actionable insights from the data.

Getting started with Calyptia Core

Calyptia Core is available in the Docker Extensions Marketplace. In the tutorial below, we’ll install Calyptia Core in Docker Desktop, build a data pipeline with mock data, and visualize it with Vivo.

Initial setup

Make sure you’ve installed the latest version of Docker Desktop (v4.8 or later). You’ll also need to enable Kubernetes under the Preferences tab, which starts a single-node Kubernetes cluster when Docker Desktop starts.

Enable Kubernetes in Docker Desktop.
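
Once Kubernetes is enabled, you can optionally confirm the cluster is running from a terminal before installing the extension. This assumes kubectl is installed and uses the docker-desktop context that Docker Desktop creates:

# Switch to the context Docker Desktop creates and verify the single-node cluster
kubectl config use-context docker-desktop
kubectl get nodes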

Installing the Calyptia Core Docker Extension

Step 1

Open Docker Desktop and click “Add Extensions” under Extensions to go to the Docker Extension Marketplace.

Select Add Extensions to add extensions to Docker Desktop.

Step 2

Install the Calyptia Core Docker Extension.

The Extensions Marketplace in Docker Desktop.

By clicking on the details, you can see what containers or binaries are pulled during installation.

Installing the Calyptia Core Docker Extension.

Step 3

Once the extension is installed, you’re ready to deploy Calyptia Core! Select "Deploy Core" and you’ll be asked to log in and authenticate the token for the Docker Extension.

Calyptia Core Docker Extension welcome page.

In your browser, you’ll see a message from https://core.calyptia.com/ asking to confirm the device.

Calyptia Core Docker Extension device confirmation.
Calyptia Core device confirmed.

Step 4

After confirming, Calyptia Core will be deployed. You can now select “Manage Core” to build, configure, and manage your data pipelines.

Managing observability pipelines in the Calyptia Core Docker Extension.

You’ll be taken to core.calyptia.com, where you can build your custom observability data pipelines from a host of source and destination connectors.

Calyptia Core manage core instances and pipelines.

Step 5

In this tutorial, let’s create a new pipeline and set docker-extension as the name.

Set the observability pipelines name in Calyptia Core.

Add “Mock Data” as a source and “Vivo” as the destination.

NOTE: Vivo is a real-time data viewer embedded in the Calyptia Core Docker Extension. You can make changes to the data pipelines, such as adding new fields or connectors, and view the streaming observability data from Vivo in the Docker Extension.

Select Calyptia Core source.
Select Calyptia Core destination.

Step 6

Hit “Save & Deploy” to create the pipeline in the Docker Desktop environment.

Calyptia Core deploy pipeline.

With the Vivo Live Data Viewer, you can view the data without leaving Docker Desktop.

Live Data Viewer in the Calyptia Core Docker Extension.

Conclusion

The Calyptia Core Docker Extension makes it simple to manage and deploy observability pipelines without leaving the Docker Desktop developer environment. And that’s just the beginning. You can also use Calyptia Core’s automated logging to collect data from your Kubernetes pods and use metadata to apply processing rules before the data is delivered to its chosen destination.

Give the Calyptia Core Docker Extension a try, and let us know what you think at hello@calyptia.com.
