Configure, Manage, and Simplify Your Observability Data Pipelines with the Calyptia Core Docker Extension
https://www.docker.com/blog/manage-observability-data-pipelines-with-calyptia-core-docker-extension/

This post was co-written with Eduardo Silva, Founder and CEO of Calyptia.


Manage observability pipelines with the Calyptia Core Docker Extension.

Applications produce a lot of observability data. And it can be a constant struggle to source, ingest, filter, and output that data to different systems. Managing these observability data pipelines is essential for being able to leverage your data and quickly gain actionable insights.

Fluent Bit is a popular choice for marshaling data across cloud and containerized environments. A super-fast, lightweight, and highly scalable logging and metrics processor and forwarder, it recently reached three billion downloads.
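If you haven’t used Fluent Bit before, a quick way to get a feel for it is to run the official container image with one of its built-in input and output plugins. This is a minimal sketch using the standard cpu and stdout plugins, not anything specific to Calyptia Core:

# Collect CPU metrics every second and print them to stdout
docker run --rm fluent/fluent-bit:latest -i cpu -o stdout -f 1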

Calyptia Core, from the creators of Fluent Bit, further simplifies the data collection process with a powerful processing engine. Calyptia Core lets you create custom observability data pipelines and take control of your data.

And with the new Calyptia Core Docker Extension, you can build and manage observability pipelines within Docker Desktop. Let’s take a look at how it works!

Diagram for Calyptia Core observability pipelines.

What is Calyptia Core?

Calyptia Core plugs into your existing observability and security infrastructure to help you process large amounts of logs, metrics, security, and event data. With Calyptia Core, you can:

  • Connect common sources to major destinations (Splunk, Datadog, Elasticsearch, and more).
  • Process 100k events per second per replica with efficient routing.
  • Automatically collect data from Kubernetes and its various flavors (GKE, EKS, AKS, OpenShift, Tanzu, etc.).
  • Build reliability into your data pipeline at scale to debug data issues.

Why Calyptia Core?

Observability as a concept is common in the day-to-day life of engineers. But the different data standards, data schemas, storage backends, and dev stacks contribute to tool fatigue, resulting in lower developer productivity and increased total cost of ownership.  

Calyptia Core aims to simplify the process of building an observability pipeline. You can also augment the streaming observability data to add custom markers and discard or mask unneeded fields.  

Why run Calyptia Core as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With Calyptia Core as a Docker Extension, you now have an easier, faster way to deploy Calyptia Core.

Once the extension is installed and started, you’ll have a running Calyptia Core instance. This allows you to easily define and manage your observability pipelines and concentrate on what matters most: discovering actionable insights from the data.

Getting started with Calyptia Core

Calyptia Core is available in the Docker Extensions Marketplace. In the tutorial below, we’ll install Calyptia Core in Docker Desktop, build a data pipeline with mock data, and visualize it with Vivo.

Initial setup

Make sure you’ve installed the latest version of Docker Desktop (v4.8 or later). You’ll also need to enable Kubernetes under the Preferences tab, which starts a single-node Kubernetes cluster when Docker Desktop launches.

Enable Kubernetes in Docker Desktop.
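Once Docker Desktop restarts, you can confirm the cluster is up from a terminal. The node name docker-desktop below is what Docker Desktop’s built-in cluster typically reports:

# Verify the single-node Kubernetes cluster is ready
kubectl get nodes
# NAME             STATUS   ROLES           ...
# docker-desktop   Ready    control-plane   ...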

Installing the Calyptia Core Docker Extension

Step 1

Open Docker Desktop and click “Add Extensions” under Extensions to go to the Docker Extensions Marketplace.

Select Add Extensions to add extensions to Docker Desktop.

Step 2

Install the Calyptia Core Docker Extension.

The Extensions Marketplace in Docker Desktop.

By clicking on the details, you can see what containers or binaries are pulled during installation.

Installing the Calyptia Core Docker Extension.

Step 3

Once the extension is installed, you’re ready to deploy Calyptia Core! Select “Deploy Core” and you’ll be asked to log in and authenticate the token for the Docker Extension.

Calyptia Core Docker Extension welcome page.

In your browser, you’ll see a message from https://core.calyptia.com/ asking you to confirm the device.

Calyptia Core Docker Extension device confirmation.
Calyptia Core device confirmed.

Step 4

After confirming, Calyptia Core will be deployed. You can now select “Manage Core” to build, configure, and manage your data pipelines.

Managing observability pipelines in the Calyptia Core Docker Extension.

You’ll be taken to core.calyptia.com, where you can build your custom observability data pipelines from a host of source and destination connectors.

Calyptia Core manage core instances and pipelines.

Step 5

In this tutorial, let’s create a new pipeline and set docker-extension as the name.

Set the observability pipelines name in Calyptia Core.

Add “Mock Data” as a source and “Vivo” as the destination.

NOTE: Vivo is a real-time data viewer embedded in the Calyptia Core Docker Extension. You can make changes to the data pipeline, such as adding new fields or connectors, and view the streaming observability data in Vivo without leaving the Docker Extension.

Select Calyptia Core source.
Select Calyptia Core destination.

Step 6

Hit “Save & Deploy” to create the pipeline in the Docker Desktop environment.

Calyptia Core deploy pipeline.

With the Vivo Live Data Viewer, you can view the data without leaving Docker Desktop.

Live Data Viewer in the Calyptia Core Docker Extension.

Conclusion

The Calyptia Core Docker Extension makes it simple to manage and deploy observability pipelines without leaving the Docker Desktop developer environment. And that’s just the beginning. You can also use Calyptia Core’s automated logging to collect data from your Kubernetes pods and use metadata to apply processing rules before the data is delivered to its chosen destination.

Give the Calyptia Core Docker Extension a try, and let us know what you think at hello@calyptia.com.

Enable Cloud-Native Log Observability With Parseable
https://www.docker.com/blog/enable-cloud-native-log-observability-with-parseable/

Observability is the practice of understanding the internal state of a system from its output. It’s based on a trio of key indicators: logs, metrics, and traces. Because metrics and traces are numerical, it’s easy to visualize that data graphically. Logs, unfortunately, are text-heavy and relatively difficult to visualize or observe.

No matter the data type and its underlying nature, actionable log data helps you solve problems and make smarter business decisions. And that’s where Parseable comes in.


Introducing Parseable

The SaaS observability ecosystem is thriving, but there’s little to no movement in open source, developer-friendly observability platforms. That’s what we’re looking to address with Parseable. 

Parseable is an open source, developer-centric platform created to ingest and query log data. It’s designed to be efficient, easy to use, and highly flexible. To achieve this, Parseable uses a cloud-native, containerized architectural approach to create a simple and dependency-free platform. 

Specifically, Parseable uses Apache Arrow and Parquet under the hood to store log data efficiently and query it at blazing speed. It uses S3 or other compatible storage platforms for seamless storage while remaining stateless.

Graph displaying Parseable server architecture.

What’s unique about Parseable?

Here are some exciting features that set Parseable apart from other observability platforms:

  • It maintains a SQL-compatible API for querying log data.
  • The Parquet open data format enables complete data ownership and wide-ranging possibilities for data analysis.
  • The single binary and container-based deployment model (including UI) helps you deploy in minutes — if not seconds. 
  • Its indexing-free design rivals the performance of indexed systems while offering lower CPU usage and less storage overhead. 
  • It’s written in Rust for low latency and high throughput.

How does Parseable work?

Parseable exposes HTTP REST API endpoints. This lets you ingest, query, and manage your log streams on the Parseable server. There are three major API categories:

  • Log stream creation, ingestion, and management
  • Log stream query and search
  • Overall health status

API reference information and examples are available in the Parseable public workspace on Postman.
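As a rough sketch of what those APIs look like in practice, the calls below create a stream, send it a JSON event, and query it back with SQL. The endpoint paths, payload shapes, and the stream name demo are assumptions based on Parseable’s API documentation; treat the Postman workspace as the authoritative reference:

# Create a log stream named "demo" (default credentials: parseable/parseable)
curl -u parseable:parseable -X PUT http://localhost:8000/api/v1/logstream/demo

# Ingest a batch of JSON log events into the stream
curl -u parseable:parseable -X POST http://localhost:8000/api/v1/logstream/demo \
  -H 'Content-Type: application/json' \
  -d '[{"level": "info", "message": "container started"}]'

# Query the stream with SQL over a time range
curl -u parseable:parseable -X POST http://localhost:8000/api/v1/query \
  -H 'Content-Type: application/json' \
  -d '{"query": "select * from demo", "startTime": "2022-11-22T00:00:00Z", "endTime": "2022-11-22T23:59:59Z"}'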

Parseable is compatible with standard logging agents like Fluent Bit, Logstash, Vector, syslog, and others via their HTTP outputs. It also offers a built-in, intuitive GUI for log query and analysis.
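For example, here’s a minimal sketch of a Fluent Bit invocation that forwards generated test records to a Parseable stream over Fluent Bit’s HTTP output. The uri path and stream name follow the ingestion endpoint assumed above; adjust them for your setup:

# Generate dummy test events and POST them to Parseable's ingestion endpoint
fluent-bit -i dummy \
  -o http -p host=localhost -p port=8000 \
  -p uri=/api/v1/logstream/demo \
  -p format=json \
  -p http_user=parseable -p http_passwd=parseable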

Why use the Parseable Docker extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With the Parseable extension, we aim to provide a simple, one-click approach for deploying Parseable. 

Once the extension is installed and running, you’ll have a running Parseable server that can ingest logs from any logging agents or directly from your application. You’ll also have access to the Parseable UI.

Overall, the Parseable extension brings richer log observability to development platforms.

Getting started with Parseable

Prerequisites

Hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Click the Settings gear > Extensions tab > check the “Enable Docker Extensions” box:

Docker Desktop extensions settings with Docker Extensions and Show Docker Extensions system containers enabled.

Installing the Parseable Docker extension

While we’re working to bring Parseable to the Extensions Marketplace, you’ll currently need to install it via the CLI. Launch your terminal and run the following commands to clone the GitHub repository and install the Parseable extension:

git clone https://github.com/parseablehq/parseable-docker-extension
cd parseable-docker-extension
make install-extension

The Parseable extension will appear in the Docker Dashboard’s left sidebar, under the Extensions heading.

Using Parseable

Docker Desktop Parseable extension page detailing login credentials and how to use the extension.

Parseable requires you to enter the following configuration settings and environment variables during the initial setup:

  • Local Port Number (the port number you want Parseable listening on)
  • Local Storage Path (the path within the container where Parseable stages data)
  • Local Volume Path (the path where your local storage path is mounted)
  • S3/MinIO URL
  • S3/MinIO Bucket Name
  • S3/MinIO Access Key
  • S3/MinIO Secret Key
  • S3/MinIO Region

Click “Deploy” after you’ve entered all required configuration details.

The Docker Desktop Parseable user interface displaying the required environment variables.
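For reference, the extension essentially maps these settings onto the Parseable container. A hand-rolled equivalent might look roughly like the following; the P_* environment variable names and image tag are assumptions based on Parseable’s configuration docs from this period, so verify them against the current documentation:

# Run Parseable manually with S3/MinIO-compatible storage settings
docker run -p 8000:8000 \
  -v /tmp/parseable:/data \
  -e P_S3_URL=http://minio.example.com:9000 \
  -e P_S3_BUCKET=parseable \
  -e P_S3_ACCESS_KEY=minioadmin \
  -e P_S3_SECRET_KEY=minioadmin \
  -e P_S3_REGION=us-east-1 \
  -e P_LOCAL_STORAGE=/data \
  parseable/parseable:latest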

You should see the URL http://localhost:8000 within the extension window:

Docker Desktop displaying the successful deployment of the Parseable container.
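Before logging in, you can sanity-check that the server is up from the terminal. The liveness path below corresponds to the health-status API category mentioned earlier and is an assumption to verify against the API reference:

# Quick health check against the running Parseable server
curl -i http://localhost:8000/api/v1/liveness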

Next, Docker Desktop will open the Parseable login page in your browser. Your credentials are identical to what you provided in the Login Credentials section (default user/password: parseable, parseable):

The Parseable login page prompting the user to add their credentials.

After logging in, you’ll see the logs page with the option to select a log stream. If you used the default MinIO bucket embedded in the extension UI, some demo data is already present. Alternatively, if you’re using your own S3-compatible bucket, use the Parseable API to create a log stream and send logs to it.

Once you’re done, you can choose a log stream and the time range for which you want the logs. You can even add filters and search fields:

The Parseable log stream offering filter, time range, and stream options.

Parseable currently supports data filtering by label, metadata, and specific column values. For example, you can choose a column and specify an operator or value for the column. Only the log data rows matching this filter will be shown. We’re working on improving this with support for multiple-column data types.

The Parseable stream log with detailed column filters.
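The same column filters can also be expressed through the SQL query API. Here’s a hypothetical example (the stream name demo and the column status are placeholders):

# Fetch only rows where the "status" column equals 500
curl -u parseable:parseable -X POST http://localhost:8000/api/v1/query \
  -H 'Content-Type: application/json' \
  -d '{"query": "select * from demo where status = 500", "startTime": "2022-11-22T00:00:00Z", "endTime": "2022-11-22T23:59:59Z"}'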

This entire process takes about a minute. To see it in action, check out this quick walkthrough video:

Try Parseable today!

In this post, we quickly showcased Parseable and its key features. You also learned how to locally run it with a single click using the extension. Finally, we explored how to ingest logs to your running Parseable instance and query those logs via the Parseable UI. 

But you can test-drive Parseable for yourself today! Follow our CLI workflow to install this extension directly. And keep an eye out for Parseable’s launch on the Extensions Marketplace; it’s coming soon!

To learn more, join the Parseable community on Slack and help us spread the word by adding a star to the repo.

We really hope you enjoyed this article and this new approach to log data ingestion and query. Docker Extensions makes this single-click approach possible.

Contribute to the Parseable Docker extension

We’re committed to making Parseable more powerful for our developers and users — and we need help! We’re actively looking for contributors to the Parseable Docker extension project. 

The current code is simple and easy to get started with, and we’re always around to give potential contributors a hand. This can be a great first project, so please feel free to share your ideas.
