We Thank the Stack Overflow Community for Ranking Docker the #1 Most-Used Developer Tool

Stack Overflow’s annual 2023 Developer Survey engaged more than 90,000 developers to learn about their work, the technologies they use, their likes and dislikes, and much, much more. As a company obsessed with serving developers, we’re honored that Stack Overflow’s community ranked Docker the #1 most-desired and #1 most-used developer tool. Since our inclusion in the survey four years ago, the Stack Overflow community has consistently ranked Docker highly, and we deeply appreciate this ongoing recognition and support.

docker logo and stack overflow logo with heart emojis in chat windows

Giving developers speed, security, and choice

While we’re pleased with this recognition, for us it means we cannot slow down: We need to go even faster in our effort to serve developers. In what ways? Well, our developer community tells us they value speed, security, and choice:

  • Speed: Developers want to maximize their time writing code for their app — and minimize set-up and overhead — so they can ship early and often.
  • Security: Specifically, non-intrusive, informative, and actionable security. Developers want to catch and fix vulnerabilities right now when coding in their “inner loop,” not 30 minutes later in CI or seven days later in production.
  • Choice: Developers want the freedom to explore new technologies and select the right tool for the right job and not be constrained to use lowest-common-denominator technologies in “everything-but-the-kitchen-sink” monolithic tools.

And indeed, these are the “North Stars” that inform our roadmap and help prioritize our product development efforts. Recent examples include:

Speed

Security

  • Docker Scout: Automatically detects vulnerabilities and recommends fixes while devs are coding in their “inner loop.”
  • Attestations: Docker Build automatically generates SBOMs and SLSA Provenance and attaches them to the image.
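For example, with a recent version of Buildx, generating and attaching these attestations is a pair of flags (the image name below is a placeholder):

# Build and push an image with an SBOM and SLSA provenance attached
docker buildx build --sbom=true --provenance=true -t myorg/myapp:latest --push .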

Choice

  • Docker Extensions: Launched just over a year ago, and since then, partners and community members have created and published to Docker Hub more than 700 Docker Extensions for a wide range of developer tools covering Kubernetes app development, security, observability, and more.
  • Docker-Sponsored Open Source Projects: Available 100% for free on Docker Hub, this sponsorship program supports more than 600 open source community projects.
  • Multiple architectures: A single docker build command can produce an image that runs on multiple architectures, including x86, ARM, RISC-V, and even IBM mainframes.
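For example, via Buildx (the image name is again a placeholder):

# One command, one tag, multiple target platforms
docker buildx build --platform linux/amd64,linux/arm64 -t myorg/myapp:latest --push .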

What’s next?

While we’re pleased that our efforts have been well-received by our developer community, we’re not slowing down. So many exciting changes in our industry today present us with new opportunities to serve developers.

For example, the lines between the local developer laptop and the cloud are becoming increasingly blurred. This offers opportunities to combine the power of the cloud with the convenience and low latency of local development. Another example is AI/ML. Specifically, LLMs in feedback loops with users offer opportunities to automate more tasks to further reduce the toil on developers.

Watch these spaces — we’re looking forward to sharing more with you soon.

Thank you!

Docker only exists because of our community of developers, Docker Captains and Community Leaders, customers, and partners, and we’re grateful for your ongoing support as reflected in this year’s Stack Overflow survey results. On behalf of everyone here at Team Docker: THANK YOU. And we look forward to continuing to build the future together with you.

Learn more

Effortlessly Build Machine Learning Apps with Hugging Face’s Docker Spaces

The Hugging Face Hub is a platform that enables collaborative open source machine learning (ML). The hub works as a central place where users can explore, experiment, collaborate, and build technology with machine learning. On the hub, you can find more than 140,000 models, 50,000 ML apps (called Spaces), and 20,000 datasets shared by the community.

Using Spaces makes it easy to create and deploy ML-powered applications and demos in minutes. Recently, the Hugging Face team added support for Docker Spaces, enabling users to create any custom app they want by simply writing a Dockerfile.

Another great thing about Spaces is that once you have your app running, you can easily share it with anyone around the world. 🌍

This guide will step through the basics of creating a Docker Space, configuring it, and deploying code to it. We’ll show how to build a basic FastAPI app for text generation that will be used to demo the google/flan-t5-small model, which can generate text given input text. Models like this are used to power text completion in all sorts of apps. (You can check out a completed version of the app at Hugging Face.)

banner hugging face docker

Prerequisites

To follow along with the steps presented in this article, you’ll need to be signed in to the Hugging Face Hub — you can sign up for free if you don’t have an account already.

Create a new Docker Space 🐳

To get started, create a new Space as shown in Figure 1.

Screenshot of Hugging Face Spaces, showing "Create new Space" button in upper right.
Figure 1: Create a new Space.

Next, you can choose any name you prefer for your project, select a license, and use Docker as the software development kit (SDK) as shown in Figure 2. 

Spaces provides pre-built Docker templates like Argilla and Livebook that let you quickly start your ML projects using open source tools. If you choose the “Blank” option, that means you want to create your Dockerfile manually. Don’t worry, though; we’ll provide a Dockerfile to copy and paste later. 😅

Screenshot of Spaces interface where you can add name, license, and select an SDK.
Figure 2: Adding details for the new Space.

When you finish filling out the form and click on the Create Space button, a new repository will be created in your Spaces account. This repository will be associated with the new space that you have created.

Note: If you’re new to the Hugging Face Hub 🤗, check out Getting Started with Repositories for a nice primer on repositories on the hub.

Writing the app

Ok, now that you have an empty space repository, it’s time to write some code. 😎

The sample app will consist of the following three files:

  • requirements.txt — Lists the dependencies of a Python project or application
  • app.py — A Python script where we will write our FastAPI app
  • Dockerfile — Sets up our environment, installs requirements.txt, then launches app.py

To follow along, create each file shown below via the web interface. To do that, navigate to your Space’s Files and versions tab, then choose Add file → Create a new file (Figure 3). Note that, if you prefer, you can also use Git.

Screenshot showing selection of "Create a new file" under "Add file" dropdown menu.
Figure 3: Creating new files.

Make sure that you name each file exactly as we have done here. Then, copy the contents of each file from the listings below and paste them into the corresponding file in the editor. After you have created and populated all the necessary files, commit each new file to your repository by clicking on the Commit new file to main button.

Listing the Python dependencies 

It’s time to list all the Python packages and their specific versions that are required for the project to function properly. The contents of the requirements.txt file typically include the name of the package and its version number, which can be specified in a variety of formats such as exact version numbers, version ranges, or compatible versions. The file lists FastAPI, requests, and uvicorn for the API along with sentencepiece, torch, and transformers for the text-generation model.

fastapi==0.74.*
requests==2.27.*
uvicorn[standard]==0.17.*
sentencepiece==0.1.*
torch==1.11.*
transformers==4.*

Defining the FastAPI web application

The following code defines a FastAPI web application that uses the transformers library to generate text based on user input. The app itself is a simple single-endpoint API. The /generate endpoint takes in text and uses a transformers pipeline to generate a completion, which it then returns as a response.

To give folks something to see, we reroute FastAPI’s interactive Swagger docs from the default /docs endpoint to the root of the app. This way, when someone visits your Space, they can play with it without having to write any code.

from fastapi import FastAPI
from transformers import pipeline

# Create a new FastAPI app instance, serving the interactive Swagger
# docs at the root path ("/") instead of the default /docs
app = FastAPI(docs_url="/")

# Initialize the text generation pipeline
# This function will be able to generate text
# given an input.
pipe = pipeline("text2text-generation",
                model="google/flan-t5-small")

# Define a function to handle the GET request at `/generate`
# The generate() function is defined as a FastAPI route that takes a
# string parameter called text. The function generates text based on the
# input using the pipeline() object, and returns a JSON response
# containing the generated text under the key "output"
@app.get("/generate")
def generate(text: str):
    """
    Using the text2text-generation pipeline from `transformers`, generate text
    from the given input text. The model used is `google/flan-t5-small`, which
    can be found [here](<https://huggingface.co/google/flan-t5-small>).
    """
    # Use the pipeline to generate text from the given input text
    output = pipe(text)
    
    # Return the generated text in a JSON response
    return {"output": output[0]["generated_text"]}
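If you’d like to sanity-check the script before containerizing it, you can also run it locally (assuming the dependencies above are installed and the file is saved as app.py):

uvicorn app:app --reload --port 7860
curl "http://localhost:7860/generate?text=Translate%20to%20German:%20My%20name%20is%20Docker"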

Writing the Dockerfile

In this section, we will write a Dockerfile that sets up a Python 3.9 environment, installs the packages listed in requirements.txt, and starts a FastAPI app on port 7860.

Let’s go through this process step by step:

FROM python:3.9

The preceding line specifies that we’re going to use the official Python 3.9 Docker image as the base image for our container. This image is provided by Docker Hub, and it contains all the necessary files to run Python 3.9.

WORKDIR /code

This line sets the working directory inside the container to /code. This is where we’ll copy our application code and dependencies later on.

COPY ./requirements.txt /code/requirements.txt

The preceding line copies the requirements.txt file from our local directory to the /code directory inside the container. This file lists the Python packages that our application depends on.

RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

This line uses pip to install the packages listed in requirements.txt. The --no-cache-dir flag tells pip to not use any cached packages, the --upgrade flag tells pip to upgrade any already-installed packages if newer versions are available, and the -r flag specifies the requirements file to use.

RUN useradd -m -u 1000 user
USER user
ENV HOME=/home/user \
	PATH=/home/user/.local/bin:$PATH

These lines create a new user named user with a user ID of 1000, switch to that user, and then set the home directory to /home/user. The ENV command sets the HOME and PATH environment variables. PATH is modified to include the .local/bin directory in the user’s home directory so that any binaries installed by pip will be available on the command line. Refer to the documentation to learn more about user permissions.

WORKDIR $HOME/app

This line sets the working directory inside the container to $HOME/app, which is /home/user/app.

COPY --chown=user . $HOME/app

The preceding line copies the contents of our local directory into the /home/user/app directory inside the container, setting the owner of the files to the user that we created earlier.

CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]

This line specifies the command to run when the container starts. It starts the FastAPI app using uvicorn and listens on port 7860. The --host flag specifies that the app should listen on all available network interfaces, and the app:app argument tells uvicorn to look for the app object in the app module in our code.

Here’s the complete Dockerfile:

# Use the official Python 3.9 image
FROM python:3.9

# Set the working directory to /code
WORKDIR /code

# Copy the current directory contents into the container at /code
COPY ./requirements.txt /code/requirements.txt

# Install requirements.txt 
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

# Set up a new user named "user" with user ID 1000
RUN useradd -m -u 1000 user
# Switch to the "user" user
USER user
# Set home to the user's home directory
ENV HOME=/home/user \
	PATH=/home/user/.local/bin:$PATH

# Set the working directory to the user's home directory
WORKDIR $HOME/app

# Copy the current directory contents into the container at $HOME/app setting the owner to the user
COPY --chown=user . $HOME/app

# Start the FastAPI app on port 7860, the default port expected by Spaces
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
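Optionally, before committing, you can build and run the image locally to confirm everything works (the tag name is arbitrary):

docker build -t flan-t5-api .
docker run --rm -p 7860:7860 flan-t5-api

Then open http://localhost:7860 to see the interactive docs.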

Once you commit this file, your space will switch to Building, and you should see the container’s build logs pop up so you can monitor its status. 👀

If you want to double-check the files, you can find all the files at our app Space.

Note: For a more basic introduction on using Docker with FastAPI, you can refer to the official guide from the FastAPI docs.

Using the app 🚀

If all goes well, your space should switch to Running once it’s done building, and the Swagger docs generated by FastAPI should appear in the App tab. Because these docs are interactive, you can try out the endpoint by expanding the details of the /generate endpoint and clicking Try it out! (Figure 4).

Screenshot of FastAPI showing "Try it out!" option on the right-hand side.
Figure 4: Trying out the app.
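You can also query the endpoint directly, for example with curl (the hostname below is illustrative; copy the real URL from your Space):

curl "https://your-username-your-space.hf.space/generate?text=How%20are%20you%3F"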

Conclusion

This article covered the basics of creating a Docker Space, building and configuring a basic FastAPI app for text generation that uses the google/flan-t5-small model. You can use this guide as a starting point to build more complex and exciting applications that leverage the power of machine learning.

If you’re interested in learning more about Docker templates and seeing curated examples, check out the Docker Examples page. There you’ll find a variety of templates to use as a starting point for your own projects, as well as tips and tricks for getting the most out of Docker templates. Happy coding!

Docker and Hugging Face Partner to Democratize AI

Today, Hugging Face and Docker are announcing a new partnership to democratize AI and make it accessible to all software engineers. Hugging Face is the most used open platform for AI, where the machine learning (ML) community has shared more than 150,000 models; 25,000 datasets; and 30,000 ML apps, including Stable Diffusion, Bloom, GPT-J, and open source ChatGPT alternatives. These apps enable the community to explore models, replicate results, and lower the barrier of entry for ML — anyone with a browser can interact with the models.

Docker is the leading toolset for easy software deployment, from infrastructure to applications. Docker is also the leading platform for software teams’ collaboration.

Docker and Hugging Face partner so you can launch and deploy complex ML apps in minutes. With the recent support for Docker on Hugging Face Spaces, folks can create any custom app they want by simply writing a Dockerfile. What’s great about Spaces is that once you’ve got your app running, you can easily share it with anyone worldwide! 🌍 Spaces provides an unparalleled level of flexibility and enables users to build ML demos with their preferred tools — from MLOps tools and FastAPI to Go endpoints and Phoenix apps.

Spaces also come with pre-defined templates of popular open source projects for members that want to get their end-to-end project in production in a matter of seconds with just a few clicks.

Screen showing options to select the Space SDK, with Docker and 3 templates selected.

Spaces enable easy deployment of ML apps in all environments, not just on Hugging Face. With “Run with Docker,” millions of software engineers can access more than 30,000 machine learning apps and run them locally or in their preferred environment.

Screen showing app options, with Run with Docker selected.

“At Hugging Face, we’ve worked on making AI more accessible and more reproducible for the past six years,” says Clem Delangue, CEO of Hugging Face. “Step 1 was to let people share models and datasets, which are the basic building blocks of AI. Step 2 was to let people build online demos for new ML techniques. Through our partnership with Docker Inc., we make great progress towards Step 3, which is to let anyone run those state-of-the-art AI models locally in a matter of minutes.”

You can also discover popular Spaces in the Docker Hub and run them locally with just a couple of commands.
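Schematically, running a Space locally looks something like this (the exact command, including the registry path, is shown in each Space’s “Run with Docker” dialog; the path below is illustrative):

docker run -it -p 7860:7860 --platform=linux/amd64 \
  registry.hf.space/your-username-your-space:latest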

To get started, read Effortlessly Build Machine Learning Apps with Hugging Face’s Docker Spaces. Or try Hugging Face Spaces now.

Secure Your Kubernetes Clusters with the Kubescape Docker Extension

Container adoption in enterprises continues to grow, and Kubernetes has become the de facto standard for deploying and operating containerized applications. At the same time, security is shifting left and should be addressed earlier in the software development lifecycle (SDLC). Security has morphed from being a static gateway at the end of the development process to something that (ideally) is embedded every step of the way. This can potentially increase the effort for engineering and DevOps teams.

kubescape extension banner

Kubescape, a CNCF project initially created by ARMO, is intended to solve this problem. Kubescape provides a self-service, simple, and easily actionable security solution that meets developers where they are: Docker Desktop.

What is Kubescape?

Kubescape is an open source Kubernetes security platform for your IDE, CI/CD pipelines, and clusters.

Kubescape includes risk analysis, security compliance, and misconfiguration scanning. Targeting all security stakeholders, Kubescape offers an easy-to-use CLI interface, flexible output formats, and automated scanning capabilities. Kubescape saves Kubernetes users and admins time, effort, and resources.

How does Kubescape work?

Security researchers and professionals codify best practices in controls: preventative, detective, or corrective measures that can be taken to avoid — or contain — a security breach. These are grouped in frameworks by government and non-profit organizations such as the US Cybersecurity and Infrastructure Security Agency, MITRE, and the Center for Internet Security.

Kubescape contains a library of security controls that codify Kubernetes best practices derived from the most prevalent security frameworks in the industry. These controls can be run against a running cluster or manifest files under development. They’re written in Rego, the purpose-built declarative policy language used by Open Policy Agent (OPA).

Kubescape is commonly used as a command-line tool. It can be used to scan code manually or can be triggered by an IDE integration or a CI tool. By default, the CLI results are displayed in a console-friendly manner, but they can be exported to JSON or JUnit XML, rendered to HTML or PDF, or submitted to ARMO Platform (a hosted backend for Kubescape).
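For example, two typical invocations look like this (flag names can differ slightly between Kubescape versions; kubescape --help is the authoritative reference):

# Scan a running cluster against the NSA hardening framework
kubescape scan framework nsa

# Scan manifest files under development and export the results as JSON
kubescape scan ./manifests --format json --output results.json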

kubescape command line interface diagram

Regular scans can be run using an in-cluster operator, which also enables the scanning of container images for known vulnerabilities.

kubescape scan in cluster operator

Why run Kubescape as a Docker extension?

Docker extensions are fundamental for building and integrating software applications into daily workflows. With the Kubescape Docker Desktop extension, engineers can easily shift security left without changing work habits.

The Kubescape Docker Desktop extension helps developers adopt security hygiene as early as the first lines of code. As shown in the following diagram, Kubescape enables engineers to adopt security as they write code during every step of the SDLC.

Specifically, the Kubescape in-cluster component triggers periodic scans of the cluster and shows results in ARMO Platform. Findings shown in the dashboard can be further explored, and the extension provides users with remediation advice and other actionable insights.

kubescape scan in cluster operator

Installing the Kubescape Docker extension

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Initial setup

In Docker Desktop, confirm that the Docker Extensions feature is enabled. (Docker Extensions should be enabled by default.) In Settings > Extensions, select the Enable Docker Extensions box.

step one enable kubescape extension

You must also enable Kubernetes under Preferences.

step one enable kubernetes

Kubescape is in the Docker Extensions Marketplace. 

In the following instructions, we’ll install Kubescape in Docker Desktop. After the extension scans automatically, the results will be shown in ARMO Platform.

Step 2: Add the Kubescape extension

Open Docker Desktop and select Add Extensions to find the Kubescape extension in the Extensions Marketplace.

step two add kubescape extension

Step 3: Installation

Install the Kubescape Docker Extension.

step three install kubescape extension

Step 4: Register and deploy

Once the Kubescape Docker Extension is installed, you’re ready to deploy Kubescape.

step four register deploy kubescape select provider

Currently, the only hosting provider available is ARMO Platform. We’re looking forward to adding more soon.

step four register deploy kubescape sign up

To link up your cluster, the host requires an ARMO account.

step four register deploy kubescape connect armo account

After you’ve linked your account, you can deploy Kubescape.

step four register deploy kubescape deploy

Accessing the dashboard

Once your cluster is deployed, you can view the scan output on your host (ARMO Platform) and start improving your cluster’s security posture immediately.

armo scan dashboard

Security compliance

One step to improve your cluster’s security posture is to protect against the threats posed by misconfigurations.

ARMO Platform will display any misconfigurations in your YAML, offer information about severity, and provide remediation advice. These scans can be run against one or more of the frameworks offered and can run manually or be scheduled to run periodically.

armo scan misconfigurations

Vulnerability scanning

Another step to improve your cluster’s security posture is protecting against threats posed by vulnerabilities in images.

The Kubescape vulnerability scanner scans the container images in the cluster right after the first installation and uploads the results to ARMO Platform. It also supports scanning new images as they are deployed to the cluster. Scans can be carried out manually or periodically, based on configurable cron jobs.

armo kubescape vulnerability scanner

RBAC Visualization

With ARMO Platform, you can also visualize Kubernetes RBAC (role-based access control), which allows you to dive deep into account access controls. The visualization makes pinpointing over-privileged accounts easy, and you can reduce your threat landscape with well-defined privileges. The following example shows a subject with all privileges granted on a resource.

armo kubescape rbac visualizer

Kubescape, using ARMO Platform as a portal for additional inquiry and investigation, helps you strengthen and maintain your security posture.

Next steps

The Kubescape Docker extension brings security to where you’re working. Kubescape enables you to shift security to the beginning of the development process by enabling you to implement security best practices from the first line of code. You can use the Kubernetes CLI tool to get insights, or export them to ARMO Platform for easy review and remediation advice.

Give the Kubescape Docker extension a try, and let us know what you think at cncf-kubescape-users@lists.cncf.io.

Enable No-Code Kubernetes with the harpoon Docker Extension

(This post is co-written by Dominic Holt, Founder & CEO of harpoon.)

No-code deploy Kubernetes with the harpoon Docker Extension.

Kubernetes has been a game-changer for ensuring scalable, high availability container orchestration in the Software, DevOps, and Cloud Native ecosystems. While the value is great, it doesn’t come for free. Significant effort goes into learning Kubernetes and all the underlying infrastructure and configuration necessary to power it. Still more effort goes into getting a cluster up and running that’s configured for production with automated scalability, security, and cluster maintenance.

All told, Kubernetes can take an incredible amount of effort, and you may end up wondering if there’s an easier way to get all the value without all the work.

Meet harpoon

With harpoon, anyone can provision a Kubernetes cluster and deploy their software to the cloud without writing code or configuration. Get your software up and running in seconds with a drag and drop interface. When it comes to monitoring and updating your software, harpoon handles that in real-time to make sure everything runs flawlessly. You’ll be notified if there’s a problem, and harpoon can re-deploy or roll back your software to ensure a seamless experience for your end users. harpoon does this dynamically for any software — not just a small, curated list.

To run your software on Kubernetes in the cloud, just enter your credentials and click the start button. In a few minutes, your production environment will be fully running with security baked in. Adding any software is as simple as searching for it and dragging it onto the screen. Want to add your own software? Connect your GitHub account with only a couple clicks and choose which repository to build and deploy in seconds with no code or complicated configurations.

harpoon enables you to do everything you need, like logging and monitoring, scaling clusters, creating services and ingress, and caching data in seconds with no code. harpoon makes DevOps attainable for anyone, leveling the playing field by delivering your software to your customers at the same speed as the largest and most technologically advanced companies at a fraction of the cost.

The architecture of harpoon

harpoon works in a hybrid SaaS model and runs on top of Kubernetes itself, which hosts the various microservices and components that form the harpoon enterprise platform. This is what you interface with when you’re dragging and dropping your way to nirvana. When you provide cloud service provider credentials for an account owned by you or your organization, harpoon uses Terraform to provision all of the underlying virtual infrastructure in your account, including your own Kubernetes cluster. In this way, you have complete control over all of your infrastructure and clusters.

The architecture for harpoon to no-code deploy Kubernetes to AWS.

Once fully provisioned, harpoon’s UI can send commands to various harpoon microservices in order to communicate with your cluster and create Kubernetes deployments, services, configmaps, ingress, and other key constructs.

If the cloud’s not for you, we also offer a fully on-prem, air-gapped version of harpoon that can be deployed essentially anywhere.

Why harpoon?

Building production software environments is hard, time-consuming, and costly, with average costs to maintain often starting at $200K for an experienced DevOps engineer and going up into the tens of millions for larger clusters and teams. Using harpoon instead of writing custom scripts can save hundreds of thousands of dollars per year in labor costs for small companies and millions per year for mid to large size businesses.

Using harpoon will enable your team to have one of the highest quality production environments available in mere minutes. Without writing any code, harpoon automatically sets up your production environment in a secure environment and enables you to dynamically maintain your cluster without any YAML or Kubernetes expertise. Better yet, harpoon is fun to use. You shouldn’t have to worry about what underlying technologies are deploying your software to the cloud. It should just work. And making it work should be simple. 

Why run harpoon as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With the harpoon Docker Extension, you can simplify the deployment process with drag and drop, visually deploying and configuring your applications directly into your Kubernetes environment. Currently, the harpoon extension for Docker Desktop supports the following features:

  • Link harpoon to a cloud service provider like AWS and deploy a Kubernetes cluster and the underlying virtual infrastructure.
  • Easily accomplish simple or complex enterprise-grade cloud deployments without writing any code or configuration scripts.
  • Connect your source code repository and set up an automated deployment pipeline without any code in seconds.
  • Supercharge your DevOps team with real-time visual cues to check the health and status of your software as it runs in the cloud.
  • Drag and drop container images from Docker Hub, source, or private container registries
  • Manage your K8s cluster with visual pods, ingress, volumes, configmaps, secrets, and nodes.
  • Dynamically manipulate routing in a service mesh with only simple clicks and port numbers.

How to use the harpoon Docker Extension

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Enable Docker Extensions

You’ll need to enable Docker Extensions under the Settings tab in Docker Desktop.

Hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Go to Settings > Extensions and check the “Enable Docker Extensions” box.

Enable Docker Extensions under Settings on Docker Desktop.

Step 2: Install the harpoon Docker Extension

The harpoon extension is available on the Extensions Marketplace in Docker Desktop and on Docker Hub. To get started, search for harpoon in the Extensions Marketplace, then select Install.

The harpoon Docker Extension on the Extension Marketplace.

This will download and install the latest version of the harpoon Docker Extension from Docker Hub.

Installation process for the harpoon Docker Extension.

Step 3: Register with harpoon

If you’re new to harpoon, then you might need to register by clicking the Register button. Otherwise, you can use your credentials to log in.

Register with harpoon or log into your account.

Step 4: Link your cloud service provider

While you can drag out any software or Kubernetes components you like, if you want to do actual deployments, you will first need to link your cloud service provider account. At the moment, harpoon supports Amazon Web Services (AWS). Over time, we’ll be supporting all of the major cloud service providers.

If you want to deploy software on top of AWS, you will need to provide harpoon with an access key ID and a secret access key. Since harpoon is deploying all of the necessary infrastructure in AWS in addition to the Kubernetes cluster, we require fairly extensive access to the account in order to successfully provision the environment. Your keys are only used for provisioning the necessary infrastructure to stand up Kubernetes in your account and to scale up/down your cluster as you designate. We take security very seriously at harpoon, and aside from using an extensive and layered security approach for harpoon itself, we use both disk and field level encryption for any sensitive data.

Link your AWS account to deploy Kubernetes with harpoon through Docker Desktop.

The following are the specific permissions harpoon needs to successfully deploy a cluster:

  • AmazonRDSFullAccess
  • IAMFullAccess
  • AmazonEC2FullAccess
  • AmazonVPCFullAccess
  • AmazonS3FullAccess
  • AWSKeyManagementServicePowerUser
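If you’d rather script this setup than click through the IAM console, here’s a hedged sketch using the AWS CLI (the user name harpoon-deployer is just an example):

# Create a dedicated IAM user for harpoon
aws iam create-user --user-name harpoon-deployer

# Attach each of the managed policies listed above
for policy in AmazonRDSFullAccess IAMFullAccess AmazonEC2FullAccess \
    AmazonVPCFullAccess AmazonS3FullAccess AWSKeyManagementServicePowerUser; do
  aws iam attach-user-policy --user-name harpoon-deployer \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done

# Generate the access key ID and secret access key to paste into harpoon
aws iam create-access-key --user-name harpoon-deployer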

Step 5: Start the cluster

Once you’ve linked your cloud service provider account, you just click the “Start” button on the cloud/node element in the workspace. That’s it. No, really! The cloud/node element will turn yellow and provide a countdown. While your experience may vary a bit, we tend to find that you can get a cluster up in under 6 minutes. When the cluster is running, the cloud will return and the element will glow a happy blue color.

Start the Kubernetes cluster through the harpoon Docker Extension.

Step 6: Deployment

You can search for any container image you’d like from Docker Hub, or link your GitHub account to search any GitHub repository (public or private) to deploy with harpoon. You can drag any search result over to the workspace for a visual representation of the software.

Deploying containers is as easy as hitting the “Deploy” button. GitHub-based deployments will require you to build the repository first. In order for harpoon to successfully build a GitHub repository, we currently require the repository to have a top-level Dockerfile, which is industry best practice. If the Dockerfile is there, once you click the “Build” button, harpoon will automatically find it and build a container image. After a successful build, the “Deploy” button will become enabled and you can deploy the software directly.

Deploy software to Kubernetes through the harpoon Docker Extension.

Once you have a deployment, you can attach any Kubernetes element to it, including ingress, configmaps, secrets, and persistent volume claims.

You can find more info here if you need help: https://docs.harpoon.io/en/latest/usage.html 

Next steps

The harpoon Docker Extension makes it easy to provision and manage your Kubernetes clusters. You can visually deploy your software to Kubernetes and configure it without writing code or configuration. By integrating directly with Docker Desktop, we hope to make it easy for DevOps teams to dynamically start and maintain their cluster without any YAML, helm chart, or Kubernetes expertise.

Check out the harpoon Docker Extension for yourself!

Develop Your Cloud App Locally with the LocalStack Extension

LocalStack Docker Extension

Local deployment is a great way to improve your development speed, lower your cloud costs, and develop for the cloud when access is restricted due to regulations. But it can also mean one more tool to manage when you’re developing an application.

With the LocalStack Docker Extension, you get a fully functional local cloud stack integrated directly into Docker Desktop, so it’s easy to develop and test cloud-native applications in one place.

Let’s take a look at local deployment and how to use the LocalStack Docker Extension.

Why run cloud applications locally?

By running your cloud app locally, you have complete control over your environment. That control makes it easier to reproduce results consistently and test new features. This gives you faster deploy-test-redeploy cycles and makes it easier to debug and replicate bugs. And since you’re not using cloud resources, you can create and tear down resources at will without incurring cloud costs.

Local cloud development also allows you to work in regulated environments where access to the cloud is restricted. By running the app on their machines, you can still work on projects without being constrained by external restrictions. 

How LocalStack works

LocalStack is a cloud service emulator that provides a fully functional local cloud stack for developing and testing AWS cloud and serverless applications. With 45K+ GitHub stars and 450+ contributors, LocalStack is backed by a large, active open-source community with 100,000+ active users worldwide.

LocalStack acts as a local “mini-cloud” operating system with multiple components, such as process management, file system abstraction, event processing, schedulers, and more. These LocalStack components run in a Docker container and expose a set of external network ports for integrations, SDKs, or CLI interfaces to connect to LocalStack APIs.

Diagram of the LocalStack architecture using Docker containers.
The LocalStack architecture is designed to be lightweight and cross-platform compatible to make it easy to use a local cloud stack.

With LocalStack, you can simulate the functionality of many AWS cloud services, like Lambda and S3, without having to connect to the actual cloud environment. You can even apply your complex CDK applications or Terraform configurations and emulate everything locally.

The official LocalStack Docker image has been downloaded 100+ million times and provides a multi-arch build that’s compatible with AMD/x86 and ARM-based CPU architectures. LocalStack supports over 80 AWS APIs, including compute (Lambda, ECS), databases (RDS, DynamoDB), messaging (SQS, MSK), and other sophisticated services (Glue, Athena). It offers advanced collaboration features and integrations with Infrastructure-as-Code tooling, continuous integration (CI) systems, and much more, thus enabling an efficient development and testing loop for developers.

Why run LocalStack as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With LocalStack as a Docker Extension, you now have an easier, faster way to run LocalStack.

The extension creates a running LocalStack instance. This allows you to easily configure LocalStack to fit the needs of a local cloud sandbox for development, testing and experimentation. Currently, the LocalStack extension for Docker Desktop supports the following features:

  • Control LocalStack: Start, stop, and restart LocalStack from Docker Desktop. You can also see the current status of your LocalStack instance and navigate to the LocalStack Web Application.
  • LocalStack insights: You can see the log information of the LocalStack instance and all the available services and their status on the service page.
  • LocalStack configurations: You can manage and use your profiles via configurations and create new configurations for your LocalStack instance.

How to use the LocalStack Docker Extension 

In this section, we’ll emulate some simple AWS commands by running LocalStack through Docker Desktop. For this tutorial, you’ll need to have Docker Desktop (v4.8+) and the AWS CLI installed.

Step 1: Enable Docker Extensions

You’ll need to enable Docker Extensions under the Preferences tab in Docker Desktop.

Enable Docker Extensions on Docker Desktop.

Step 2: Install the LocalStack extension

The LocalStack extension is available on the Extensions Marketplace in Docker Desktop and on Docker Hub. To get started, search for LocalStack in the Extensions Marketplace, then select Install.

Search the Extensions Marketplace on Docker Desktop for LocalStack.

Alternatively, you can install the LocalStack Extension for Docker Desktop by pulling our public Docker image from Docker Hub:

docker extension install localstack/localstack-docker-desktop:0.3.1

Step 3: Initialize LocalStack

Once the extension is installed, you’re ready to use LocalStack! When you open the extension for the first time, you’ll be prompted to select where LocalStack will be mounted. Open the drop-down and choose the username.
You can also change this setting by navigating to the Configurations tab and selecting the mount point.

Select where LocalStack will be mounted.

Use the Start button to get started using LocalStack. If LocalStack’s Docker image isn’t present, the extension will pull it automatically (which may take some time).

Step 4: Run basic AWS commands

To demonstrate the functionality of LocalStack, you can run AWS commands against the local infrastructure using awslocal, our wrapper around the AWS CLI. You can install it using pip.

pip install awscli-local

After it’s installed, all the available services will be displayed in the extension when the LocalStack Docker image starts up.

List of available AWS actions on the LocalStack Docker Extension.

You can now run some basic AWS commands to check if the extension is working correctly. Try these commands to create a hello-world file on LocalStack’s S3, fully emulated locally:

awslocal s3 mb s3://test 
echo "hello world" > /tmp/hello-world 
awslocal s3 cp /tmp/hello-world s3://test/hello-world 
awslocal s3 ls s3://test/

You should see a hello-world file in your local S3 bucket. You can now navigate to the Docker Desktop to see that S3 is running while the rest of the services are still marked available.
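By the way, awslocal is only a convenience wrapper; the stock AWS CLI works just as well if you point it at LocalStack’s edge port (4566 by default):

aws --endpoint-url=http://localhost:4566 s3 ls s3://test/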

Amazon S3 running on the LocalStack Docker Extension.

Navigate to the logs and you’ll see the API requests being made with 200 status codes. If you’re running LocalStack to emulate a local AWS infrastructure, you can check the logs to see if a particular API request has gone wrong and further debug it through the logs.

Since the resources are ephemeral, you can stop LocalStack anytime to start fresh. And unlike doing this on AWS, you can spin up or tear down any resources you want without worrying about lingering resources incurring costs.

View the logs for the LocalStack Docker Extension.

Step 5: Use configuration profiles to quickly spin up different environments

Using LocalStack’s Docker Extension, you can create a variety of pre-made configuration profiles, specific LocalStack Configuration variables, or API keys. When you select a configuration profile before starting the container, you directly pass these variables to the running LocalStack container.

This makes it easy for you to change the behavior of LocalStack so you can quickly spin up local cloud environments already configured to your needs.
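As a sketch, a profile typically maps to LocalStack configuration variables like the following (variable names come from the LocalStack configuration docs; check there for the authoritative list):

# Hypothetical profile contents: environment variables passed to the container
DEBUG=1                 # verbose logging, handy when debugging API calls
SERVICES=s3,sqs,lambda  # start only the services you need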

What will you build with LocalStack?

The LocalStack Docker Extension makes it easy to control the LocalStack container via a user interface. By integrating directly with Docker Desktop, we hope to make your development process easier and faster.

And even more is on the way! In upcoming iterations, the extension will be further developed to support more AWS APIs, integrations with the LocalStack Web Application, and tooling like Cloud Pods, LocalStack’s state management and team collaboration feature.

Please let us know what you think! LocalStack is an open source, community-focused project. If you’d like to contribute, you can follow our contributing documentation to set up the project on your local machine and use developer tools to develop new features or fix old bugs. You can also use the LocalStack Docker Extension issue tracker to create new issues or propose new features to LocalStack through our LocalStack Discuss forum.

Reduce Your Image Size with the Dive-In Docker Extension

This guest post is written by Prakhar Srivastav, Senior Software Engineer at Google.


Check out the Dive-In Docker Extension

Anyone who’s built their own containers, either for local development or for cloud deployment, knows the advantages of keeping container sizes small. In most cases, keeping the container image size small translates to real dollars saved by reducing bandwidth and storage costs on the cloud. In addition, smaller images ensure faster transfer and deployments when using them in a CI/CD server.

However, even for experienced Docker users, it can be hard to understand how to reduce the sizes of their containers. The Docker CLI can be very helpful for this, but it can be intimidating to figure out where to start. That’s where Dive comes in.

What is Dive?

Dive is an open-source tool for exploring a Docker image and its layer contents, then discovering ways to shrink the size of your Docker/OCI image.

At a high level, it works by analyzing the layers of a Docker image. With every layer you add, more space is taken up by the image; put differently, each instruction in your Dockerfile (such as a separate RUN line) adds a new layer to your image.

Dive takes this information and does the following:

  • Breaks down the image contents in the Docker image layer by layer.
  • Shows the contents of each layer in detail.
  • Shows the total size of the image.
  • Shows how much space was potentially wasted.
  • Shows the efficiency score of the image.
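To make that concrete, here’s roughly what driving Dive directly from the command line looks like (the image name is just an example; see the Dive README for the current flags):

# Explore an image's layers and contents in the terminal UI
dive nginx:latest

# CI mode: skip the TUI and fail if efficiency drops below a threshold
CI=true dive nginx:latest --lowestEfficiency=0.90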

While Dive is awesome and extremely helpful, it’s a command line tool and uses a TUI (terminal UI) to display all the analysis. This can sometimes seem limiting and hard to use for some users. 

Wouldn’t it be cool to show all this useful data from Dive in an easy-to-use UI? Enter Dive-In, a new Docker Extension that integrates Dive into Docker Desktop!

Prerequisites

You’ll need to download Docker Desktop 4.8 or later before getting started. Make sure to choose the correct version for your OS and then install it.

Next, hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Click the Settings gear > Extensions tab > check the “Enable Docker Extensions” box.

Enable extensions in Docker Desktop.

Dive-In: A Docker Extension for Dive

Dive-In is a Docker extension that’s built on top of Dive so Docker users can explore their containers directly from Docker Desktop.

Install the Dive-In Docker Extension.

To get started, search for Dive-In in the Extensions Marketplace, then install it.

Alternatively, you can also run:

docker extension install prakhar1989/dive-in

When you first access Dive-In, it’ll take a few seconds to pull the Dive image from Docker Hub. Once it does, it should show a grid of all the images that you can analyze.

Welcome to the Dive-In Docker Extension.

Note: Currently, Dive-In doesn’t show dangling images (images whose repo tag is “none”). This keeps the grid uncluttered and as actionable as possible.

To analyze an image, click the Analyze button, which calls Dive behind the scenes to gather the data. Depending on the size of the image, this can take some time. When it’s done, Dive-In presents the results.

At the top, Dive-In shows three key metrics for the image, which are useful for getting a high-level view of how inefficient the image is. The lower the efficiency score, the more room for improvement.

See how to reduce image size with the Dive-In Docker Extension.

Below the key metrics, it shows a table of the largest files in the image, which can be a good starting point for reducing the size.

Finally, as you scroll down, it shows information about all the layers along with the size of each, which is extremely helpful for seeing which layers contribute the most to the final size.

View image layers with the Dive-In Docker Extension.

And that’s it! 

Conclusion

The Dive-In Docker Extension helps you explore a Docker image and discover ways to shrink the size. It’s built on top of Dive, a popular open-source tool. Use Dive-In to gain insights into your container right from Docker Desktop!

Try it out for yourself and let me know what you think. Pull requests are also welcome!

About the Author

Prakhar Srivastav is a senior software engineer at Google where he works on Firebase to make app development easier for developers. When he’s not staring at Vim, he can be found playing guitar or exploring the outdoors.

Configure, Manage, and Simplify Your Observability Data Pipelines with the Calyptia Core Docker Extension

This post was co-written with Eduardo Silva, Founder and CEO of Calyptia.


Manage observability pipelines with the Calyptia Core Docker Extension.

Applications produce a lot of observability data. And it can be a constant struggle to source, ingest, filter, and output that data to different systems. Managing these observability data pipelines is essential for being able to leverage your data and quickly gain actionable insights.

In cloud and containerized environments, Fluent Bit is a popular choice for marshaling data across cloud-native environments. A super fast, lightweight, and highly scalable logging and metrics processor and forwarder, it recently reached three billion downloads.

Calyptia Core, from the creators of Fluent Bit, further simplifies the data collection process with a powerful processing engine. Calyptia Core lets you create custom observability data pipelines and take control of your data.

And with the new Calyptia Core Docker Extension, you can build and manage observability pipelines within Docker Desktop. Let’s take a look at how it works!

Diagram for Calyptia Core observability pipelines.

What is Calyptia Core?

Calyptia Core plugs into your existing observability and security infrastructure to help you process large amounts of logs, metrics, security, and event data. With Calyptia Core, you can:

  • Connect common sources to the major destinations (e.g. Splunk, Datadog, Elasticsearch, etc.)
  • Process 100k events per second per replica with efficient routing.
  • Automatically collect data from Kubernetes and its various flavors (GKE, EKS, AKS, OpenShift, Tanzu, etc).
  • Build reliability into your data pipeline at scale to debug data issues.

Why Calyptia Core?

Observability as a concept is common in the day-to-day life of engineers. But the different data standards, data schemas, storage backends, and dev stacks contribute to tool fatigue, resulting in lower developer productivity and increased total cost of ownership.  

Calyptia Core aims to simplify the process of building an observability pipeline. You can also augment the streaming observability data to add custom markers and discard or mask unneeded fields.  

Why run Calyptia Core as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With Calyptia Core as a Docker Extension, you now have an easier, faster way to deploy Calyptia Core.

Once the extension is installed and started, you’ll have a running Calyptia Core instance. This allows you to easily define and manage your observability pipelines and concentrate on what matters most: discovering actionable insights from the data.

Getting started with Calyptia Core

Calyptia Core is available in the Docker Extensions Marketplace. In the tutorial below, we’ll install Calyptia Core in Docker Desktop, build a data pipeline with mock data, and visualize it with Vivo.

Initial setup

Make sure you’ve installed the latest version of Docker Desktop (or at least v4.8+). You’ll also need to enable Kubernetes under the Preferences tab. This will start a Kubernetes single-node cluster when starting Docker Desktop.

Enable Kubernetes in Docker Desktop.

Installing the Calyptia Core Docker Extension

Step 1

Open Docker Desktop and click “Add Extensions” under Extensions to go to the Docker Extension Marketplace.

Select Add Extensions to add extensions to Docker Desktop.

Step 2

Install the Calyptia Core Docker Extension.

The Extensions Marketplace in Docker Desktop.

By clicking on the details, you can see what containers or binaries are pulled during installation.

Installing the Calyptia Core Docker Extension.

Step 3

Once the extension is installed, you’re ready to deploy Calyptia Core! Select “Deploy Core” and you’ll be asked to login and authenticate the token for the Docker Extension.

Calyptia Core Docker Extension welcome page.

In your browser, you’ll see a message from https://core.calyptia.com/ asking to confirm the device.

Calyptia Core Docker Extension device confirmation.
Calyptia Core device confirmed.

Step 4

After confirming, Calyptia Core will be deployed. You can now select “Manage Core” to build, configure, and manage your data pipelines.

Managing observability pipelines in the Calyptia Core Docker Extension.

You’ll be taken to core.calyptia.com, where you can build your custom observability data pipelines from a host of source and destination connectors.

Calyptia Core manage core instances and pipelines.

Step 5

In this tutorial, let’s create a new pipeline and set docker-extension as the name.

Set the observability pipelines name in Calyptia Core.

Add “Mock Data” as a source and “Vivo” as the destination.

NOTE: Vivo is a real-time data viewer embedded in the Calyptia Core Docker Extension. You can make changes to the data pipelines, like adding new fields or connectors, and view the streaming observability data from Vivo in the Docker Extension.

Select Calyptia Core source.
Select Calyptia Core destination.

Step 6

Hit “Save & Deploy” to create the pipeline in the Docker Desktop environment.

Calyptia Core deploy pipeline.

With the Vivo Live Data Viewer, you can view the data without leaving Docker Desktop.

Live Data Viewer in the Calyptia Core Docker Extension.

Conclusion

The Calyptia Core Docker Extension makes it simple to manage and deploy observability pipelines without leaving the Docker Desktop developer environment. And that’s just the beginning. You can also use automated logging in Calyptia Core for automated data collection from your Kubernetes pods and use metadata to apply processing rules before data is delivered to the chosen destination.

Give the Calyptia Core Docker Extension a try, and let us know what you think at hello@calyptia.com.

Implement User Authentication Into Your Web Application Using SuperTokens

This article was co-authored by Advait Ruia, CEO at SuperTokens.


Integrate open source user authentication with the SuperTokens Docker Extension.

Authentication directly affects the UX, dev experience, and security of any app. Authentication solutions ensure that sensitive user data is protected and only owners of this data have access to it. Although authentication is a vital part of web services, building it correctly can be time-consuming and expensive. For a personal project, a simple email/password solution can be built in a day, but the security and reliability requirements of production-ready applications add additional complexities. 

While there are a lot of resources available online, it takes time to go through all the content for every aspect of authentication (and even if you do, you may miss important information). And it takes even more effort to make sure your application is up to date with security best practices. If you’re going to move quickly while still meeting high standards, you need a solution that has the right level of abstraction, gives you maximum control, is secure, and is simple to use — just like if you build it from scratch, but without spending the time to learn, build, and maintain it. 

Meet SuperTokens

SuperTokens is an open-source authentication solution. It provides an end-to-end solution to easily implement the following features:

  • Support for popular login methods:
    • Email/password
    • Passwordless (OTP or magic link based)
    • Social login through OAuth 2.0
  • Role-based access control
  • Session management
  • User management
  • Option to self-host the SuperTokens core or use the managed service

SDKs are available for all popular languages and front-end frameworks such as Node.js, React.js, React Native, Vanilla JS, and more.

The architecture of SuperTokens

SuperTokens’ architecture is optimized to add secure authentication for your users without compromising on user and developer experience. It consists of three building blocks:

  1. Frontend SDK: The frontend SDK is responsible for rendering the login UI and managing authentication flows and user sessions. There are SDKs for Vanilla JS (Vue / Angular / JS), ReactJS, and React Native.
  2. Backend SDK: The backend SDK provides APIs for sign-up, sign-in, sign-out, session refreshing, etc. Your frontend will talk to these APIs, which are exposed on the same domain as your application’s APIs. Available SDKs: Node.js, Python, and Golang.
  3. SuperTokens Core: The HTTP service that handles the core authentication logic and database operations. It’s responsible for interfacing with the database and is queried by the backend SDK for any operation that requires the database.
Architecture diagram of a self-hosted SuperTokens core.

To learn more about the SuperTokens architecture, watch this video.

What’s unique about SuperTokens?

Here are some features that set SuperTokens apart from other user-authentication solutions:

  1. SuperTokens is easy to set up and offers quick start guides specific to your use case. 
  2. It’s open source, which means you can self-host the SuperTokens core and have control over user data. When you self-host the SuperTokens core, there are no usage limits — it can be used for free, forever.
  3. It has low vendor lock-in since users have complete control over how SuperTokens works and where their data is stored.
  4. The frontend of SuperTokens is highly customizable. The authentication UI and authentication flows can be customized to your use case. The SuperTokens frontend SDK also offers helper functions for users who are looking to build their own custom UI.
  5. SuperTokens integrates natively into your frontend and API layer. This means you have complete control over authentication flows. Through overrides, you can add analytics, add custom logic, or completely change authentication flows to fit your use case.

Why run SuperTokens in Docker Desktop?

Docker Extensions help you build and integrate software applications into your daily workflows. With the SuperTokens extension, you get a simple way to quickly deploy SuperTokens.

Once the extension is installed and started, you’ll have a running SuperTokens core application. The extension allows you to connect to your preferred database, set environment variables, and get your core connected to your backend.

The SuperTokens extension speeds up the process of getting started with SuperTokens and, over time, we hope to make it the best place to manage the SuperTokens core.

Getting started with SuperTokens 

Step 1: Pick your authentication method

Your first step is picking the authentication strategy, or recipe, you want to implement in your application.

You can find user guides for all supported recipes here.

Step 2: Integrate with the SuperTokens frontend and backend SDKs

After picking your recipe, you can start integrating the SuperTokens frontend and backend SDKs into your tech stack.

For example, if you want both email password and social authentication methods in your application, you can use this guide to initialize SuperTokens in your frontend and backend.
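
As a concrete illustration, here’s a minimal sketch of what the backend initialization might look like with the supertokens-node SDK (the app name, domains, and recipe choices below are placeholders, not values from the guide; follow the guide for the exact setup):

import supertokens from "supertokens-node";
import Session from "supertokens-node/recipe/session";
import ThirdPartyEmailPassword from "supertokens-node/recipe/thirdpartyemailpassword";

supertokens.init({
  framework: "express",
  supertokens: {
    // Points at a core for development; you'll swap this for your own core in Step 3.
    connectionURI: "https://try.supertokens.com",
  },
  appInfo: {
    appName: "MyApp",                       // placeholder
    apiDomain: "http://localhost:3001",     // where your API runs
    websiteDomain: "http://localhost:3000", // where your frontend runs
  },
  recipeList: [
    ThirdPartyEmailPassword.init(), // email/password + social login
    Session.init(),                 // session management
  ],
});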

Step 3: Connect to the SuperTokens Core

The final step is setting up the SuperTokens core. SuperTokens offers a managed service to get started quickly, but today we’re going to take a look at how you can self-host and manage the SuperTokens core using the SuperTokens Docker extension.

Running the SuperTokens core from Docker Desktop

Prerequisites: Docker Desktop 4.8 or later

Hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Go to Settings > Extensions and check the “Enable Docker Extensions” box.

Enable Docker Extensions in Docker Desktop settings.

Setting up the extension

Step 1: Clone the SuperTokens extension

Run this command to clone the extension:

git clone git@github.com:supertokens/supertokens-docker-extension.git

Step 2: Follow the instructions in the README.md to set up the SuperTokens Extension

Build the extension:

make build-extension

Add the extension to Docker Desktop:

docker extension install supertokens/supertokens-docker-extension:latest

Once the extension is added to Docker Desktop, you can run the SuperTokens core.

Step 3: Select which database you want to use to persist user data

SuperTokens currently supports MySQL and PostgreSQL. Choose which Docker image to load.

Choose MySQL or PostgreSQL database for SuperTokens.

Step 4: Add your database connection URI

You’ll need to create a database SuperTokens can write to. Follow this guide to see how to do this. The connection URI generally follows your database’s standard format; for example, a PostgreSQL connection URI looks like postgresql://user:password@host:5432/dbname. If you don’t provide a connection URI, SuperTokens will run with an in-memory database.

In addition to the connection URI, you can add environment variables to the Docker container to customize the core.

Set up your SuperTokens core with the Docker Extension.

Step 5: Run the Docker container

Select “Start docker container” to start the SuperTokens core on port 3567. You can ping “http://localhost:3567” to check whether the core is running successfully.

Ping SuperTokens core on port 3567.
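
If you’d rather check from code, here’s a minimal sketch for Node.js 18+ (which ships a built-in fetch; run it as an ES module). It assumes the core’s standard /hello test route:

// Ping the self-hosted SuperTokens core started by the extension.
const res = await fetch("http://localhost:3567/hello");
console.log(res.status, (await res.text()).trim()); // expect 200 and "Hello"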

Step 6: Update the connection URI in your backend to “http://localhost:3567”

(Note: This example code snippet is for Node.js, but if you’re using Python or Golang, a similar change should be made. You can find the guide on how to do that here.)

Update the connectionURI in Node.js.
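
In the Node.js snippet above, only the connectionURI changes. A minimal sketch of the updated init call (appInfo values are placeholders):

import supertokens from "supertokens-node";
import Session from "supertokens-node/recipe/session";

supertokens.init({
  framework: "express",
  supertokens: {
    // Point the backend SDK at the self-hosted core started by the extension.
    connectionURI: "http://localhost:3567",
  },
  appInfo: {
    appName: "MyApp",
    apiDomain: "http://localhost:3001",
    websiteDomain: "http://localhost:3000",
  },
  recipeList: [Session.init()],
});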

Now that you’ve set up your core and connected it to your backend, your application should be up and ready to authenticate users!

Try SuperTokens for yourself!

To learn more about SuperTokens, you can visit our website or join our Discord community.

We’re committed to making SuperTokens a more powerful user-authentication solution for our developers and users — and we need help! We’re actively looking for contributors to the SuperTokens Docker extension project. The current code is simple and easy to get started with, and we’re always around to give potential contributors a hand.

If you like SuperTokens, you can help us spread the word by adding a star to the repo.

Enable Cloud-Native Log Observability With Parseable https://www.docker.com/blog/enable-cloud-native-log-observability-with-parseable/ Tue, 22 Nov 2022 15:00:00 +0000 https://www.docker.com/?p=38908

Observability is the practice of understanding the internal state of a system from its output. It’s based on a trio of key indicators: logs, metrics, and traces. Because metrics and traces are numerical, it’s easy to visualize that data through graphics. Logs, unfortunately, are text-heavy and relatively difficult to visualize or observe. 

No matter the data type and its underlying nature, actionable log data helps you solve problems and make smarter business decisions. And that’s where Parseable comes in.


Introducing Parseable

The SaaS observability ecosystem is thriving, but there’s little to no movement in open source, developer-friendly observability platforms. That’s what we’re looking to address with Parseable. 

Parseable is an open source, developer-centric platform created to ingest and query log data. It’s designed to be efficient, easy to use, and highly flexible. To achieve this, Parseable uses a cloud-native, containerized architectural approach to create a simple and dependency-free platform. 

Specifically, Parseable uses Apache Arrow and Parquet under the hood to efficiently store log data and query at blazingly fast speeds. It uses S3 or other compatible storage platforms to support seamless storage while remaining stateless.

Graph displaying Parseable server architecture.

What’s unique about Parseable?

Here are some exciting features that set Parseable apart from other observability platforms:

  • It maintains a SQL-compatible API for querying log data.
  • The Parquet open data format enables complete data ownership and wide-ranging possibilities for data analysis.
  • The single binary and container-based deployment model (including UI) helps you deploy in minutes — if not seconds. 
  • Its indexing-free design rivals the performance of indexed systems while offering lower CPU usage and less storage overhead. 
  • It’s written in Rust and delivers low latency and high throughput.

How does Parseable work?

Parseable exposes HTTP REST API endpoints. This lets you ingest, query, and manage your log streams on the Parseable server. There are three major API categories:

  • Log stream creation, ingestion, and management
  • Log stream query and search
  • Overall health status

API reference information and examples are available in the Parseable public workspace on Postman.

Parseable is compatible with standard logging agents like Fluent Bit, Logstash, Vector, syslog, and others via their HTTP output plugins. It also offers a built-in, intuitive GUI for log query and analysis.
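
You can also post JSON to the server directly from application code. Below is a hedged TypeScript sketch (Node.js 18+ for the built-in fetch, run as an ES module); the stream name demo and the default parseable/parseable credentials are assumptions, and the endpoint paths should be verified against the Postman workspace mentioned above:

// Create a log stream, then ingest a batch of events into it.
const base = "http://localhost:8000";
const auth = "Basic " + Buffer.from("parseable:parseable").toString("base64");

// PUT creates the stream (assumed path, per the Parseable API reference).
await fetch(`${base}/api/v1/logstream/demo`, {
  method: "PUT",
  headers: { Authorization: auth },
});

// POST sends a JSON array of log events to the same stream path.
await fetch(`${base}/api/v1/logstream/demo`, {
  method: "POST",
  headers: { Authorization: auth, "Content-Type": "application/json" },
  body: JSON.stringify([{ level: "info", message: "hello from my app" }]),
});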

Why use the Parseable Docker extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With the Parseable extension, we aim to provide a simple, one-click approach for deploying Parseable. 

Once the extension is installed and running, you’ll have a running Parseable server that can ingest logs from any logging agent or directly from your application. You’ll also have access to the Parseable UI.

Overall, the Parseable extension brings richer log observability to development platforms.

Getting started with Parseable

Prerequisites

Hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Click the Settings gear, open the Extensions tab, and check the “Enable Docker Extensions” box:

Docker Desktop extensions settings with Docker Extensions and Show Docker Extensions system containers enabled.

Installing the Parseable Docker extension

While we’re working to bring Parseable to the Extensions Marketplace, you’ll currently need to download it via the CLI. Launch your terminal and run the following command to clone the GitHub repository and install the Parseable Extension:

git clone https://github.com/parseablehq/parseable-docker-extension

cd parseable-docker-extension
make install-extension

The Parseable extension will appear in the Docker Dashboard’s left sidebar, under the Extensions heading.

Using Parseable

Docker Desktop Parseable extension page detailing login credentials and how to use the extension.

Parseable requires you to enter the following configuration settings and environment variables during the initial setup:

  • Local Port Number (the port number you want Parseable listening on)
  • Local Storage Path (the path within the container where Parseable stages data)
  • Local Volume Path (the path where your local storage path is mounted)
  • S3/MinIO URL
  • S3/MinIO Bucket Name
  • S3/MinIO Access Key
  • S3/MinIO Secret Key
  • S3/MinIO Region

Click “Deploy” after you’ve entered all required configuration details.

The Docker Desktop Parseable user interface displaying the required environment variables.

You should see the URL http://localhost:8000 within the extension window:

Docker Desktop displaying the successful deployment of the Parseable container.

Next, Docker Desktop will redirect you to the Parseable login page in your browser. Your credentials are identical to what you provided in the Login Credentials section (default user/password: parseable, parseable):

The Parseable login page prompting the user to add their credentials.

After logging in, you’ll see the logs page with the option to select a log stream. If you used the default MinIO bucket embedded in the Extensions UI, some demo data is already present. Alternatively, if you’re using your own S3-compatible bucket, use the Parseable API to create a log stream and send logs to the log stream.
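
Stream creation and ingestion follow the same pattern as the TypeScript sketch earlier in this article, and you can query a stream with SQL over the same API. Here’s a hedged sketch; the /api/v1/query endpoint and request body shape are assumptions to verify against the Postman workspace:

// Query the "demo" stream with SQL over the assumed query endpoint.
const base = "http://localhost:8000";
const auth = "Basic " + Buffer.from("parseable:parseable").toString("base64");

const res = await fetch(`${base}/api/v1/query`, {
  method: "POST",
  headers: { Authorization: auth, "Content-Type": "application/json" },
  body: JSON.stringify({
    query: "select * from demo limit 10",
    startTime: "2022-11-22T00:00:00+00:00", // example time range
    endTime: "2022-11-22T23:59:59+00:00",
  }),
});
console.log(await res.json());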

Once you’re done, you can choose a log stream and the time range for which you want the logs. You can even add filters and search fields:

The Parseable log stream offering filter, time range, and stream options.

Parseable currently supports data filtering by label, metadata, and specific column values. For example, you can choose a column and specify an operator or value for the column. Only the log data rows matching this filter will be shown. We’re working on improving this with support for multiple-column data types.

The Parseable stream log with detailed column filters.

This entire process takes about a minute. To see it in action, check out this quick walkthrough video:

Try Parseable today!

In this post, we quickly showcased Parseable and its key features. You also learned how to run it locally with a single click using the extension. Finally, we explored how to ingest logs into your running Parseable instance and query those logs via the Parseable UI. 

But you can test-drive Parseable for yourself today! Follow our CLI workflow to install this extension directly. Plus, keep an eye out for Parseable’s launch on the Extensions Marketplace — it’s coming soon!

To learn more, join the Parseable community on Slack and help us spread the word by adding a star to the repo.

We really hope you enjoyed this article and this new approach to log data ingestion and query. Docker Extensions make this single-click approach possible.

Contribute to the Parseable Docker extension

We’re committed to making Parseable more powerful for our developers and users — and we need help! We’re actively looking for contributors to the Parseable Docker extension project. 

The current code is simple and easy to get started with, and we’re always around to give potential contributors a hand. This can be a great first project, so please feel free to share your ideas.
