We Thank the Stack Overflow Community for Ranking Docker the #1 Most-Used Developer Tool
https://www.docker.com/blog/docker-stack-overflow-survey-thank-you-2023/ (21 Jun 2023)

Stack Overflow’s annual 2023 Developer Survey engaged more than 90,000 developers to learn about their work, the technologies they use, their likes and dislikes, and much, much more. As a company obsessed with serving developers, we’re honored that Stack Overflow’s community ranked Docker the #1 most-desired and #1 most-used developer tool. Since our inclusion in the survey four years ago, the Stack Overflow community has consistently ranked Docker highly, and we deeply appreciate this ongoing recognition and support.

Giving developers speed, security, and choice

While we’re pleased with this recognition, for us it means we cannot slow down: We need to go even faster in our effort to serve developers. In what ways? Well, our developer community tells us they value speed, security, and choice:

  • Speed: Developers want to maximize their time writing code for their app — and minimize set-up and overhead — so they can ship early and often.
  • Security: Specifically, non-intrusive, informative, and actionable security. Developers want to catch and fix vulnerabilities right now when coding in their “inner loop,” not 30 minutes later in CI or seven days later in production.
  • Choice: Developers want the freedom to explore new technologies and select the right tool for the right job and not be constrained to use lowest-common-denominator technologies in “everything-but-the-kitchen-sink” monolithic tools.

And indeed, these are the “North Stars” that inform our roadmap and help us prioritize our product development efforts. Recent examples include:

Speed

Security

  • Docker Scout: Automatically detects vulnerabilities and recommends fixes while devs are coding in their “inner loop.”
  • Attestations: Docker Build automatically generates SBOMs and SLSA Provenance and attaches them to the image.
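
As a rough illustration of what this looks like from the CLI (not an excerpt from the original post; the image tag is a placeholder and a recent Buildx/BuildKit with a registry push is assumed), both attestation types can be requested at build time:

# Build and push an image with an SBOM and SLSA provenance attached
docker buildx build --sbom=true --provenance=true -t myorg/myapp:latest --push .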

Choice

  • Docker Extensions: Launched just over a year ago, and since then, partners and community members have created and published to Docker Hub more than 700 Docker Extensions for a wide range of developer tools covering Kubernetes app development, security, observability, and more.
  • Docker-Sponsored Open Source Projects: Available 100% for free on Docker Hub, this sponsorship program supports more than 600 open source community projects.
  • Multiple architectures: A single docker build command can produce an image that runs on multiple architectures, including x86, ARM, RISC-V, and even IBM mainframes.
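
As a hedged sketch (the image name is a placeholder, and the base images must support each target platform), a multi-platform build with Buildx looks like this:

# Build one image for several CPU architectures and push the multi-arch manifest
docker buildx build --platform linux/amd64,linux/arm64 -t myorg/myapp:latest --push .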

What’s next?

While we’re pleased that our efforts have been well-received by our developer community, we’re not slowing down. So many exciting changes in our industry today present us with new opportunities to serve developers.

For example, the lines between the local developer laptop and the cloud are becoming increasingly blurred. This offers opportunities to combine the power of the cloud with the convenience and low latency of local development. Another example is AI/ML. Specifically, LLMs in feedback loops with users offer opportunities to automate more tasks to further reduce the toil on developers.

Watch these spaces — we’re looking forward to sharing more with you soon.

Thank you!

Docker only exists because of our community of developers, Docker Captains and Community Leaders, customers, and partners, and we’re grateful for your ongoing support as reflected in this year’s Stack Overflow survey results. On behalf of everyone here at Team Docker: THANK YOU. And we look forward to continuing to build the future together with you.

Learn more

Full-Stack Reproducibility for AI/ML with Docker and Kaskada
https://www.docker.com/blog/full-stack-reproducibility-for-ai-ml-with-docker-kaskada/ (20 Jun 2023)

Docker is used by millions of developers to optimize the setup and deployment of development environments and application stacks. As artificial intelligence (AI) and machine learning (ML) are becoming key components of many applications, the core benefits of Docker are now doing more heavy lifting and accelerating the development cycle.

Gartner predicts that “by 2027, over 90% of new software applications that are developed in the business will contain ML models or services as enterprises utilize the massive amounts of data available to the business.”

This article, written by our partner DataStax, outlines how Kaskada, open source, and Docker are helping developers optimize their AI/ML efforts.

Introduction

As a data scientist or machine learning practitioner, your work is all about experimentation. You start with a hunch about the story your data will tell, but often you’ll only find an answer after false starts and failed experiments. The faster you can iterate and try things, the faster you’ll get to answers. In many cases, the insights gained from solving one problem are applicable to other related problems. Experimentation can lead to results much faster when you’re able to build on the prior work of your colleagues.

But there are roadblocks to this kind of collaboration. Without the right tools, data scientists waste time managing code dependencies, resolving version conflicts, and repeatedly going through complex installation processes. Building on the work of colleagues can be hard due to incompatible environments — the dreaded “it works for me” syndrome.

Enter Docker and Kaskada, which offer similar solutions to these different problems: a declarative language designed specifically for the problem at hand and an ecosystem of supporting tools (Figure 1).

Illustration showing representation of Dockerfile defining steps to build a reproducible dev environment.
Figure 1: Dockerfile defines the build steps.

For Docker, the Dockerfile format describes the exact steps needed to build a reproducible development environment, and the surrounding ecosystem provides tools for working with containers (Docker Hub, Docker Desktop, Kubernetes, etc.). With Docker, data scientists can package their code and dependencies into an image that can run as a container on any machine, eliminating the need for complex installation processes and ensuring that colleagues can work with the exact same development environment.

With Kaskada, data scientists can compute and share features as code and use those throughout the ML lifecycle — from training models locally to maintaining real-time features in production. The computations required to produce these datasets are often complex and expensive because standard tools like Spark have difficulty reconstructing the temporal context required for training real-time ML models.

Kaskada solves this problem by providing a way to compute features — especially those that require reasoning about time — and sharing feature definitions as code. This approach allows data scientists to collaborate with each other and with machine learning engineers on feature engineering and reuse code across projects. Increased reproducibility dramatically speeds cycle times to get models into production, increases model accuracy, and ultimately improves machine learning results.

Example walkthrough

Let’s see how Docker and Kaskada improve the machine learning lifecycle by walking through a simplified example. Imagine you’re trying to build a real-time model for a mobile game and want to predict an outcome, for example, whether a user will pay for an upgrade.

Setting up your experimentation environment

To begin, start a Docker container that comes preinstalled with Jupyter and Kaskada:

docker run --rm -p 8888:8888 kaskadaio/jupyter
open <jupyter URL from logs> 

This step instantly gives you a reproducible development environment to work in, but you might want to customize this environment. Additional development tools can be added by creating a new Dockerfile using this image as the “base image”:

# Dockerfile
FROM kaskadaio/jupyter

COPY requirements.txt .
RUN pip install -r requirements.txt

In this example, you started with Jupyter and Kaskada, copied over a requirements file and installed all the dependencies in it. You now have a new Docker image that you can use as a data science workbench and share across your organization: Anyone in your organization with this Dockerfile can reproduce the same environment you’re using by building and running your Dockerfile.

docker build -t experimentation_env .
docker run --rm -p 8888:8888 experimentation_env

The power of Docker comes from the fact that you’ve created a file that describes your environment and now you can share this file with others.

Training your model

Inside a new Jupyter notebook, you can begin the process of exploring solutions to the problem — predicting purchase behavior. To begin, you’ll create tables to organize the different types of events produced by the imaginary game.

%pip install kaskada
%load_ext fenlmagic

# Imports per the Kaskada Python client docs (adjust if your installed version differs)
from kaskada.api.session import LocalBuilder
import kaskada.table as table

session = LocalBuilder().build()

table.create_table(
    table_name = "GamePlay",
    time_column_name = "timestamp",
    entity_key_column_name = "user_id",
)
table.create_table(
    table_name = "Purchase",
    time_column_name = "timestamp",
    entity_key_column_name = "user_id",
)

table.load(
    table_name = "GamePlay",
    file = "historical_game_play_events.parquet",
)
table.load(
    table_name = "Purchase",
    file = "historical_purchase_events.parquet",
)

Kaskada is easy to install and use locally. After installing, you’re ready to start creating tables and loading event data into them. Kaskada’s vectorized engine is built for high-performance local execution, and, in many cases, you can start experimenting on your data locally, without the complexity of managing distributed compute clusters.

Kaskada’s query language was designed to make it easy for data scientists to build features and training examples directly from raw event data. A single query can replace complex ETL and pre-aggregation pipelines, and Kaskada’s unique temporal operations unlock native time travel for building training examples “as of” important times in the past.

%%fenl --var training

# Create views derived from the source tables
let GameVictory = GamePlay | when(GamePlay.won)
let GameDefeat = GamePlay | when(not GamePlay.won)

# Compute some features as inputs to our model
let features = {
  loss_duration: sum(GameDefeat.duration),
  purchase_count: count(Purchase),
}

# Observe our features when a player has lost twice since their last victory
let example = features
  | when(count(GameDefeat, window=since(GameVictory)) == 2)
  | shift_by(hours(1))

# Compute a target value
# In this case comparing purchase count at prediction and label time
let target = count(Purchase) > example.purchase_count

# Combine feature and target values computed at the different times
in extend(example, {target})

In this example, you first apply filtering to the events, build simple features, observe them at the points in time when your model will be used to make predictions, and then combine the features with the value you want to predict, computed an hour later. Kaskada lets you describe all these operations “from scratch,” starting with raw events and ending with an ML training dataset.

from sklearn.linear_model import LogisticRegression
from sklearn import preprocessing

X = training.dataframe[['loss_duration']]
y = training.dataframe['target']

scaler = preprocessing.StandardScaler().fit(X)
X_scaled = scaler.transform(X)

model = LogisticRegression(max_iter=1000)
model.fit(X_scaled, y)

Kaskada’s query language makes it easy to write an end-to-end transformation from raw events to a training dataset.

Conclusion

Docker and Kaskada enable data scientists and ML engineers to solve real-time ML problems quickly and reproducibly. With Docker, you can manage your development environment with ease, ensuring that your code runs the same way on every machine. With Kaskada, you can collaborate with colleagues on feature engineering and reuse queries as code across projects. Whether you’re working independently or as part of a team, these tools can help you get answers faster and more efficiently than ever before.

Get started with Kaskada’s official images on Docker Hub.

Unlock Docker Desktop Real-Time Insights with the Grafana Docker Extension
https://www.docker.com/blog/unlock-docker-desktop-real-time-insights-with-the-grafana-docker-extension/ (9 Jun 2023)

More than one million: that’s the number of active Grafana installations worldwide. In the realm of data visualization and monitoring, Grafana has emerged as a go-to solution for organizations and individuals alike. With its powerful features and intuitive interface, Grafana enables users to gain valuable insights from their data.

Picture this scenario: You have a locally hosted web application that you need to test thoroughly, but accessing it remotely for testing purposes seems like an insurmountable task. This is where the Grafana Cloud Docker Extension comes to the rescue. This extension offers a seamless solution to the problem by establishing a secure connection between your local environment and the Grafana Cloud platform.

In this article, we’ll explore the benefits of using the Grafana Cloud Docker Extension and describe how it can bridge the gap between your local machine and the remote Grafana Cloud platform.

Overview of Grafana Cloud

Grafana Cloud is a fully managed, cloud-hosted observability platform ideal for cloud-native environments. With its robust features, wide user adoption, and comprehensive documentation, Grafana Cloud is a leading observability platform that empowers organizations to gain insights, make data-driven decisions, and ensure the reliability and performance of their applications and infrastructure.

Grafana Cloud is an open, composable observability stack that supports various popular data sources, such as Prometheus, Elasticsearch, Amazon CloudWatch, and more (Figure 1). By configuring these data sources, users can create custom dashboards, set up alerts, and visualize their data in real-time. The platform also offers performance and load testing with Grafana Cloud k6 and incident response and management with Grafana Incident and Grafana OnCall to enhance the observability experience.

Graphic showing Hosted Grafana architecture with connections to elements including Node.js, Prometheus, Amazon Cloudwatch, and Google Cloud monitoring.
Figure 1: Hosted Grafana architecture.

Why run Grafana Cloud as a Docker Extension?

The Grafana Cloud Docker Extension is your gateway to seamless testing and monitoring, empowering you to make informed decisions based on accurate data and insights. The Docker Desktop integration in Grafana Cloud brings seamless monitoring capabilities to your local development environment. 

Docker Desktop, a user-friendly application for Mac, Windows, and Linux, enables you to containerize your applications and services effortlessly. With the Grafana Cloud extension integrated into Docker Desktop, you can now monitor your local Docker Desktop instance and gain valuable insights.

The following quick-start video shows how to monitor your local Docker Desktop instance using the Grafana Cloud Extension in Docker Desktop:

The Docker Desktop integration in Grafana Cloud provides a set of prebuilt Grafana dashboards specifically designed for monitoring Docker metrics and logs. These dashboards offer visual representations of essential Docker-related metrics, allowing you to track container resource usage, network activity, and overall performance. Additionally, the integration also includes monitoring for Linux host metrics and logs, providing a comprehensive view of your development environment.

Using the Grafana Cloud extension, you can visualize and analyze the metrics and logs generated by your Docker Desktop instance. This enables you to identify potential issues, optimize resource allocation, and ensure the smooth operation of your containerized applications and microservices.

Getting started

Prerequisites: Docker Desktop 4.8 or later and a Grafana Cloud account.

Note: You must ensure that the Docker Extensions feature is enabled (Figure 2).

Screenshot showing search for Grafana in Extensions Marketplace.
Figure 2: Enabling Docker Extensions.

Step 1. Install the Grafana Cloud Docker Extension

In the Extensions Marketplace, search for Grafana and select Install (Figure 3).

Screenshot of Docker Desktop showing installation of extension.
Figure 3: Installing the extension.

Step 2. Create your Grafana Cloud account

A Grafana Cloud account is required to use the Docker Desktop integration. If you don’t have a Grafana Cloud account, you can sign up for a free account today (Figure 4).

 Screenshot of Grafana Cloud sign-up page.
Figure 4: Signing up for Grafana Cloud.

Step 3. Find the Connections console

In your Grafana instance on Grafana Cloud, use the left-hand navigation bar to find the Connections Console (Home > Connections > Connect data) as shown in Figure 5.

Screenshot of Grafana Cloud Connections console.
Figure 5: Connecting data.

Step 4. Install the Docker Desktop integration

To start sending metrics and logs to the Grafana Cloud, install the Docker Desktop integration (Figure 6). This integration lets you fetch the values of connection variables required to connect to your account.

 Screenshot of Connections console showing installation of Docker Desktop integration.
Figure 6: Installing the Docker Desktop Integration.

Step 5. Connect your Docker Desktop instance to Grafana Cloud

It’s time to open and connect the Docker Desktop extension to Grafana Cloud (Figure 7). Enter the connection variables you found while installing the Docker Desktop integration on Grafana Cloud.

Screenshot showing connection of Docker Desktop extension to Grafana Cloud.
Figure 7: Connecting the Docker Desktop extension to Grafana Cloud.

Step 6. Check if Grafana Cloud is receiving data from Docker Desktop

Test the connection to ensure that the agent is collecting data (Figure 8).

Screenshot showing test of connection to ensure data collection.
Figure 8: Checking the connection.

Step 7. View the Grafana dashboard

The Grafana dashboard shows the integration with Docker Desktop (Figure 9).

 Screenshot of Grafana Dashboards page.
Figure 9: Grafana dashboard.

Step 8. Start monitoring your Docker Desktop instance

After the integration is installed, the Docker Desktop extension will start sending metrics and logs to Grafana Cloud.

You will see three prebuilt dashboards installed in Grafana Cloud for Docker Desktop.

Docker Overview dashboard

This Grafana dashboard gives a general overview of the Docker Desktop instance based on the metrics exposed by the cAdvisor Prometheus exporter (Figure 10).

Screenshot of Grafana Docker overview dashboard showing metrics such as CPU and memory usage.
Figure 10: Docker Overview dashboard.

The key metrics monitored are:

  • Number of containers/images
  • CPU metrics
  • Memory metrics
  • Network metrics

This dashboard also contains a shortcut at the top for the logs dashboard so you can correlate logs and metrics for troubleshooting.

Docker Logs dashboard

This Grafana dashboard provides logs and log-related metrics for the Docker containers running on the Docker Desktop engine (Figure 11).

Screenshot of Grafana Docker Logs dashboard showing statistics related to the running Docker containers.
Figure 11: Docker Logs dashboard.

Logs and metrics can be filtered based on the Docker Desktop instance and the container using the drop-down for template variables on the top of the dashboard.

Docker Desktop Node Exporter/Nodes dashboard

This Grafana dashboard provides the metrics of the Linux virtual machine used to host the Docker engine for Docker Desktop (Figure 12).

Screenshot of Docker Nodes dashboard showing metrics such as disk space and memory usage.
Figure 12: Docker Nodes dashboard.

How to monitor Redis in a Docker container with Grafana Cloud

Because the Grafana Agent is embedded inside the Grafana Cloud extension for Docker Desktop, it can easily be configured to monitor other systems running on Docker Desktop that are supported by the Grafana Agent.

For example, we can monitor a Redis instance running inside a container in Docker Desktop using the Redis integration for Grafana Cloud and the Docker Desktop extension.
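
If you don’t already have Redis running locally, one quick way to start a throwaway instance for this walkthrough is shown below (the container name and published port are arbitrary choices, not part of the original setup):

# Start a local Redis container for the agent to scrape
docker run -d --name my-redis -p 6379:6379 redis:7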

If we have a Redis database running inside our Docker Desktop instance, we can install the Redis integration on Grafana Cloud by navigating to the Connections Console (Home > Connections > Connect data) and clicking on the Redis tile (Figure 13).

Screenshot of Connections Console showing adding Redis as data source.
Figure 13: Installing the Redis integration on Grafana Cloud.

To start collecting metrics from the Redis server, we can copy the corresponding agent snippet into our agent configuration in the Docker Desktop extension. Click on Configuration in the Docker Desktop extension and add the following snippet under the integrations key. Then press Save configuration (Figure 14).

Screenshot of Connections console showing configuration of Redis integration.
Figure 14: Configuring Redis integration.

integrations:
  redis_exporter:
    enabled: true
    redis_addr: 'localhost:6379'

In its default settings, the Grafana Agent container is not connected to the default bridge network of Docker Desktop. To connect the agent to this network, run the following command:

docker network connect bridge grafana-docker-desktop-extension-agent

This step allows the agent to connect and scrape metrics from applications running on other containers. Now you can see Redis metrics on the dashboard installed as part of the Redis solution for Grafana Cloud (Figure 15).
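
To double-check that the agent container actually joined the bridge network after running the command above, you can inspect the network and look for the container name used earlier:

docker network inspect bridge | grep grafana-docker-desktop-extension-agent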

 Screenshot showing Redis metrics on the dashboard.
Figure 15: Viewing Redis metrics.

Conclusion

With the Docker Desktop integration in Grafana Cloud and its prebuilt Grafana dashboards, monitoring your Docker Desktop environment becomes a streamlined process. The Grafana Cloud Docker Extension allows you to gain valuable insights into your local Docker instance, make data-driven decisions, and optimize your development workflow with the power of Grafana Cloud. Explore a new realm of testing possibilities and elevate your monitoring game to new heights. 

Check out the Grafana Cloud Docker Extension on Docker Hub.

Enabling a No-Code Performance Testing Platform Using the Ddosify Docker Extension
https://www.docker.com/blog/no-code-performance-testing-using-ddosify-extension/ (28 Mar 2023)

Performance testing is a critical component of software testing and performance evaluation. It involves simulating a large number of users accessing a system simultaneously to determine the system’s behavior under high user loads. This process helps organizations understand how their systems will perform in real-world scenarios and identify potential performance bottlenecks. Testing the performance of your application under different load conditions also helps identify bottlenecks and improve your application’s performance.

In this article, we provide an introduction to the Ddosify Docker Extension and show how to get started using it for performance testing.  

The importance of performance testing

Performance testing should be regularly performed to ensure that your application is performing well under different load conditions so that your customers can have a great experience. Kissmetrics found that a 1-second delay in page response time can lead to a seven percent decrease in conversions and that half of customers expect a website to load in less than 2 seconds. A 1-second delay in page response could result in a potential loss of several million dollars in annual sales for an e-commerce site.

Meet Ddosify

Ddosify is a high-performance, open-core performance testing platform that focuses on load and latency testing. Ddosify offers a suite of three products:

1. Ddosify Engine: An open source, single-node, load-testing tool (6K+ stars) that can be used to test your application from your terminal using a simple JSON file. Ddosify is written in Golang and can be deployed on Linux, macOS, and Windows. Developers and small companies are using Ddosify Engine to test their applications. The tool is available on GitHub.

2. Ddosify Cloud: An open core SaaS platform that allows you to test your application without any programming expertise. Ddosify Cloud uses Ddosify Engine in a distributed manner and provides a web interface to generate load test scenarios without code. Users can test their applications from different locations around the world and can generate advanced reports. The platform uses technologies including Docker, Kubernetes, InfluxDB, RabbitMQ, React.js, Golang, AWS, and PostgreSQL, all working together transparently for the user. This tool is available on the Ddosify website.

3. Ddosify Docker Extension: This tool has similarities to Ddosify Engine, but has an easy-to-use user interface thanks to the extension capability of Docker Desktop. This feature allows you to test your application within Docker Desktop. The Ddosify Docker Extension is available free of charge from the Docker Extensions Marketplace, and its repository is open source and available on GitHub.

In this article, we will focus on the Ddosify Docker Extension.

The architecture of Ddosify

Ddosify Docker Extension uses the Ddosify Engine as a base image under the hood. We collect settings, including request count, duration, and headers, from the extension UI and send them to the Ddosify Engine. 

The Ddosify Engine performs the load testing and returns the results to the extension. The extension then displays the results to the user (Figure 1). 

Illustration showing that the Ddosify Engine performs the load testing and returns the results to the extension. The extension then displays the results to the user.
Figure 1: Overview of Ddosify.

Why Ddosify?

Ddosify is easy to use and offers many features, including dynamic variables, CSV data import, various load types, correlation, and assertion. Ddosify also has different options for different use cases. If you are an individual developer, you can use the Ddosify Engine or Ddosify Docker Extension free of charge. If you need code-free load testing, advanced reporting, multi-geolocation, and more requests per second (RPS), you can use the Ddosify Cloud. 

With Ddosify, you can: 

  • Identify performance issues of your application by simulating high user traffic.
  • Optimize your infrastructure and ensure that you are only paying for the resources that you need.
  • Identify bugs before your customers do. Some bugs are only triggered under high load.
  • Measure your system capacity and identify its limitations.

Why run Ddosify as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With Ddosify Docker Extension, you can easily perform load testing on your application from within Docker Desktop. You don’t need to install anything on your machine except Docker Desktop. Features of Ddosify Docker Extension include:

  • Strong community with 6K+ GitHub stars and a total of 1M+ downloads on all platforms. Community members contribute by proposing/adding features and fixing bugs.
  • Currently supports HTTP and HTTPS protocols. Other protocols are on the way.
  • Supports various load types. Test your system’s limits across different load types, including:
    • Linear
    • Incremental
    • Waved
  • Dynamic variables (parameterization) support. Just like Postman, Ddosify supports dynamic variables.
  • Save load testing results as PDF.

Getting started

As a prerequisite, you need Docker Desktop 4.10.0 or higher installed on your machine. You can download Docker Desktop from our website.

Step 1: Install Ddosify Docker Extension

Because Ddosify is an extension partner of Docker, you can easily install Ddosify Docker Extension from the Docker Extensions Marketplace (Figure 2). Start Docker Desktop and select Add Extensions. Next, filter by Testing Tools and select Ddosify. Click on the Install button to install the Ddosify Docker Extension. After a few seconds, Ddosify Docker Extension will be installed on your machine.

Screenshot showing installation of the Ddosify Docker Extension from the Extensions Marketplace.
Figure 2: Installing Ddosify.

Step 2: Start load testing

You can start load testing your application from the Docker Desktop (Figure 3). Start Docker Desktop and click on the Ddosify icon in the Extensions section. The UI of the Ddosify Docker Extension will be opened.

Screenshot showing the Ddosify Docker Extension UI opened from the Extensions section of Docker Desktop.
Figure 3: Starting load testing.

You can start load testing by entering the target URL of your application. You can choose HTTP Methods (GET, POST, PUT, DELETE, etc.), protocol (HTTP, HTTPS), request count, duration, load type (linear, incremental, waved), timeout, body, headers, basic auth, and proxy settings. We chose the following values: 

  • URL: https://testserver.ddosify.com/account/register/
  • Method: POST
  • Protocol: HTTPS
  • Request Count: 100
  • Duration: 5
  • Load Type: Linear
  • Timeout: 10
  • Body: {"username": "{{_randomUserName}}", "email": "{{_randomEmail}}", "password": "{{_randomPassword}}"}
  • Headers:
    • User-Agent: DdosifyDockerExtension/0.1.2
    • Content-Type: application/json

In this configuration, we are sending 100 requests to the target URL for 5 seconds (Figure 4). The RPS is 20. The target URL is a test server that is used to register new users with body parameters. We are using dynamic variables (random) for username, email, and password in the body. You can learn more about dynamic variables from the Ddosify documentation.
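
For comparison, roughly the same test can be run from a terminal with the open source Ddosify Engine container. Treat this as a sketch: the flag names follow the Ddosify Engine documentation, and the body and header options are omitted for brevity.

# About 20 RPS: 100 POST requests spread linearly over 5 seconds
docker run -it --rm ddosify/ddosify ddosify \
  -t https://testserver.ddosify.com/account/register/ \
  -m POST -n 100 -d 5 -l linear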

UI screenshot showing 100 requests being sent to the target URL over 5 seconds.
Figure 4: Parameters for sample load test.

Then click on the Start Load Test button to begin load testing. The results will be displayed in the UI (Figure 5).

Screenshot of Ddosify test results showing failed and successful runs: 48 users created, and the server did not respond to 32 requests.
Figure 5: Ddosify test results.

The test results include the following information:

  • 48 requests successfully created users. Response Code: 201.
  • 20 requests failed to create users because the username or email already existed on the server. Response Code: 400.
  • 32 requests failed to create users because of the timeout. The server could not respond within 10 seconds, so we should increase the timeout value or optimize the server.

You can also save the load test results. Click on the Report button to save the results as a PDF file (Figure 6).

Screenshot showing the Report button to click to save the results as a PDF file.
Figure 6: Save results as PDF.

Conclusion

In this article, we showed how to install Ddosify Docker Extension and quickly start load testing your application from Docker Desktop. We created random users on a test server with 100 requests for 5 seconds, and we saw that the server could not handle all the requests because of the timeout. 

If you need help with Ddosify, you can create an issue on our GitHub repository or join our Discord server.

Resources

Docker and Ambassador Labs Announce Telepresence for Docker, Improving the Kubernetes Development Experience
https://www.docker.com/blog/telepresence-for-docker/ (23 Mar 2023)

I’ve been a long-time user and avid fan of both Docker and Kubernetes, and have many happy memories of attending the Docker Meetups in London in the early 2010s. I closely watched as Docker revolutionized the developers’ container-building toolchain and Kubernetes became the natural target to deploy and run these containers at scale.

Today we’re happy to announce Telepresence for Docker, simplifying how teams develop and test on Kubernetes for faster app delivery. Docker and Ambassador Labs both help cloud-native developers to be super-productive, and we’re excited about this partnership to accelerate the developer experience on Kubernetes.

What exactly does this mean?

  • When building with Kubernetes, you can now use Telepresence alongside the Docker toolchain you know and love.
  • You can buy Telepresence directly from Docker, and log in to Ambassador Cloud using your Docker ID and credentials.
  • You can get installation and product support from your current Docker support and services team.

Kubernetes development: Flexibility, scale, complexity

Kubernetes revolutionized the platform world, providing operational flexibility and scale for most organizations that have adopted it. But Kubernetes also introduces complexity when configuring local development environments.

We know you like building applications using your own local tools, where the feedback is instant, you can iterate quickly, and the environment you’re working in mirrors production. This combination increases velocity and reduces the time to successful deployment. But you can face slow and painful development and troubleshooting obstacles when trying to integrate and test code in a real-world application running on Kubernetes. You end up having to replicate all of the services locally or remotely to test changes, which requires you to know about Kubernetes and the services built by others. The result, which we’ve seen at many organizations, is siloed teams, deferred deployments, and delayed organizational time to value.

Bridging remote environments with local development toolchains

Telepresence for Docker seamlessly bridges local dev machines to remote dev and staging Kubernetes clusters, so you don’t have to manage the complexity of Kubernetes, be a Kubernetes expert, or worry about consuming laptop resources when deploying large services locally.

The remote-to-local approach helps your teams to quickly collaborate and iterate on code locally while testing the effects of those code changes interactively within the full context of your distributed application. This way, you can work locally on services using the tools you know and love while also being connected to a remote Kubernetes cluster.

How does Telepresence for Docker work?

Telepresence for Docker works by running a traffic manager pod in Kubernetes and Telepresence client daemons on developer workstations. As shown in the following diagram, the traffic manager acts as a two-way network proxy that can intercept connections and route traffic between the cluster and containers running on developer machines.

Illustration showing the traffic manager acting as a two-way network proxy that can intercept connections and route traffic between the cluster and containers running on developer machine.

Once you have connected your development machine to a remote Kubernetes cluster, you have several options for how the local containers can integrate with the cluster. These options are based on the concept of intercepts, where Telepresence for Docker can re-route — or intercept — traffic destined to and from a remote service to your local machine. Intercepts enable you to interact with an application in a remote cluster and see the results from the local changes you made on an intercepted service.

Here’s how you can use intercepts:

  • No intercepts: The most basic integration involves no intercepts at all, simply establishing a connection between the container and the cluster. This enables the container to access cluster resources, such as APIs and databases.
  • Global intercepts: You can set up global intercepts for a service. This means all traffic for a service will be re-routed from Kubernetes to your local container.
  • Personal intercepts: The more advanced alternative to global intercepts is personal intercepts. Personal intercepts let you define conditions for when a request should be routed to your local container. The conditions could be anything from only routing requests that include a specific HTTP header, to requests targeting a specific route of an API.
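
As a rough sketch of what these workflows look like from a terminal (the service name and ports are placeholders, and the commands follow the standard Telepresence CLI that the Docker-integrated version builds on):

telepresence connect                                 # connect your machine to the remote cluster
telepresence list                                    # see which workloads can be intercepted
telepresence intercept my-service --port 8080:80     # route the service’s traffic to localhost:8080
telepresence leave my-service                        # stop the intercept when you’re done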

Benefits for platform teams: Reduce maintenance and cloud costs

On top of increasing the velocity of individual developers and development teams, Telepresence for Docker also enables platform engineers to maintain a separation of concerns (and provide appropriate guardrails). Platform engineers can define, configure, and manage shared remote clusters that multiple Telepresence for Docker users can interact within during their day-to-day development and testing workflows. Developers can easily intercept or selectively reroute remote traffic to the service on their local machine, and test (and share with stakeholders) how their current changes look and interact with remote dependencies.

Compared to static staging environments, this offers a simple way to connect local code into a shared dev environment and fuels easy, secure collaboration with your team or other stakeholders. Instead of provisioning cloud virtual machines for every developer, this approach offers a more cost-effective way to have a shared cloud development environment.

Get started with Telepresence for Docker today

We’re excited that the Docker and Ambassador Labs partnership brings Telepresence for Docker to the 12-million-strong (and growing) community of registered Docker developers. Telepresence for Docker is available now. Keep using the local tools and development workflow you know and love, but with faster feedback, easier collaboration, and reduced cloud environment costs.

You can quickly get started with your Docker ID, or contact us to learn more.

Enable No-Code Kubernetes with the harpoon Docker Extension
https://www.docker.com/blog/no-code-kubernetes-harpoon-docker-extension/ (1 Feb 2023)

(This post is co-written by Dominic Holt, Founder & CEO of harpoon.)

Kubernetes has been a game-changer for ensuring scalable, highly available container orchestration in the software, DevOps, and cloud-native ecosystems. While the value is great, it doesn’t come for free. Significant effort goes into learning Kubernetes and all the underlying infrastructure and configuration necessary to power it. Still more effort goes into getting a cluster up and running that’s configured for production with automated scalability, security, and cluster maintenance.

All told, Kubernetes can take an incredible amount of effort, and you may end up wondering if there’s an easier way to get all the value without all the work.

Meet harpoon

With harpoon, anyone can provision a Kubernetes cluster and deploy their software to the cloud without writing code or configuration. Get your software up and running in seconds with a drag and drop interface. When it comes to monitoring and updating your software, harpoon handles that in real-time to make sure everything runs flawlessly. You’ll be notified if there’s a problem, and harpoon can re-deploy or roll back your software to ensure a seamless experience for your end users. harpoon does this dynamically for any software — not just a small, curated list.

To run your software on Kubernetes in the cloud, just enter your credentials and click the start button. In a few minutes, your production environment will be fully running with security baked in. Adding any software is as simple as searching for it and dragging it onto the screen. Want to add your own software? Connect your GitHub account with only a couple clicks and choose which repository to build and deploy in seconds with no code or complicated configurations.

harpoon enables you to do everything you need, like logging and monitoring, scaling clusters, creating services and ingress, and caching data in seconds with no code. harpoon makes DevOps attainable for anyone, leveling the playing field by delivering your software to your customers at the same speed as the largest and most technologically advanced companies at a fraction of the cost.

The architecture of harpoon

harpoon works in a hybrid SaaS model and runs on top of Kubernetes itself, which hosts the various microservices and components that form the harpoon enterprise platform. This is what you interface with when you’re dragging and dropping your way to nirvana. By providing cloud service provider credentials to an account owned by you or your organization, harpoon uses Terraform to provision all of the underlying virtual infrastructure in your account, including your own Kubernetes cluster. In this way, you have complete control over all of your infrastructure and clusters.

The architecture for harpoon to no-code deploy Kubernetes to AWS.

Once fully provisioned, harpoon’s UI can send commands to various harpoon microservices in order to communicate with your cluster and create Kubernetes deployments, services, configmaps, ingress, and other key constructs.

If the cloud’s not for you, we also offer a fully on-prem, air-gapped version of harpoon that can be deployed essentially anywhere.

Why harpoon?

Building production software environments is hard, time-consuming, and costly, with average costs to maintain often starting at $200K for an experienced DevOps engineer and going up into the tens of millions for larger clusters and teams. Using harpoon instead of writing custom scripts can save hundreds of thousands of dollars per year in labor costs for small companies and millions per year for mid to large size businesses.

Using harpoon will enable your team to have one of the highest quality production environments available in mere minutes. Without writing any code, harpoon automatically sets up your production environment in a secure environment and enables you to dynamically maintain your cluster without any YAML or Kubernetes expertise. Better yet, harpoon is fun to use. You shouldn’t have to worry about what underlying technologies are deploying your software to the cloud. It should just work. And making it work should be simple. 

Why run harpoon as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With the harpoon Docker Extension, you can simplify the deployment process with drag and drop, visually deploying and configuring your applications directly into your Kubernetes environment. Currently, the harpoon extension for Docker Desktop supports the following features:

  • Link harpoon to a cloud service provider like AWS and deploy a Kubernetes cluster and the underlying virtual infrastructure.
  • Easily accomplish simple or complex enterprise-grade cloud deployments without writing any code or configuration scripts.
  • Connect your source code repository and set up an automated deployment pipeline without any code in seconds.
  • Supercharge your DevOps team with real-time visual cues to check the health and status of your software as it runs in the cloud.
  • Drag and drop container images from Docker Hub, source, or private container registries.
  • Manage your K8s cluster with visual pods, ingress, volumes, configmaps, secrets, and nodes.
  • Dynamically manipulate routing in a service mesh with only simple clicks and port numbers.

How to use the harpoon Docker Extension

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Enable Docker Extensions

You’ll need to enable Docker Extensions under the Settings tab in Docker Desktop.

Hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Go to Settings > Extensions and check the “Enable Docker Extensions” box.

Enable Docker Extensions under Settings on Docker Desktop.

Step 2: Install the harpoon Docker Extension

The harpoon extension is available on the Extensions Marketplace in Docker Desktop and on Docker Hub. To get started, search for harpoon in the Extensions Marketplace, then select Install.

The harpoon Docker Extension on the Extension Marketplace.

This will download and install the latest version of the harpoon Docker Extension from Docker Hub.

Installation process for the harpoon Docker Extension.

Step 3: Register with harpoon

If you’re new to harpoon, then you might need to register by clicking the Register button. Otherwise, you can use your credentials to log in.

Register with harpoon or log into your account.

Step 4: Link your cloud service provider account

While you can drag out any software or Kubernetes components you like, if you want to do actual deployments, you will first need to link your cloud service provider account. At the moment, harpoon supports Amazon Web Services (AWS). Over time, we’ll be supporting all of the major cloud service providers.

If you want to deploy software on top of AWS, you will need to provide harpoon with an access key ID and a secret access key. Since harpoon is deploying all of the necessary infrastructure in AWS in addition to the Kubernetes cluster, we require fairly extensive access to the account in order to successfully provision the environment. Your keys are only used for provisioning the necessary infrastructure to stand up Kubernetes in your account and to scale up/down your cluster as you designate. We take security very seriously at harpoon, and aside from using an extensive and layered security approach for harpoon itself, we use both disk and field level encryption for any sensitive data.

Link your AWS account to deploy Kubernetes with harpoon through Docker Desktop.

The following are the specific permissions harpoon needs to successfully deploy a cluster (a sketch of granting them with the AWS CLI follows this list):

  • AmazonRDSFullAccess
  • IAMFullAccess
  • AmazonEC2FullAccess
  • AmazonVPCFullAccess
  • AmazonS3FullAccess
  • AWSKeyManagementServicePowerUser
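
One way to create a dedicated IAM user and grant exactly this set of managed policies with the AWS CLI is sketched below (the user name is a placeholder; the resulting access key ID and secret are what you paste into harpoon):

aws iam create-user --user-name harpoon-deployer
for policy in AmazonRDSFullAccess IAMFullAccess AmazonEC2FullAccess \
              AmazonVPCFullAccess AmazonS3FullAccess AWSKeyManagementServicePowerUser; do
  # Attach each AWS-managed policy listed above to the new user
  aws iam attach-user-policy --user-name harpoon-deployer \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done
aws iam create-access-key --user-name harpoon-deployer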

Step 5: Start the cluster

Once you’ve linked your cloud service provider account, you just click the “Start” button on the cloud/node element in the workspace. That’s it. No, really! The cloud/node element will turn yellow and provide a countdown. While your experience may vary a bit, we tend to find that you can get a cluster up in under 6 minutes. When the cluster is running, the cloud will return and the element will glow a happy blue color.

Start the Kubernetes cluster through the harpoon Docker Extension.

Step 6: Deployment

You can search for any container image you’d like from Docker Hub, or link your GitHub account to search any GitHub repository (public or private) to deploy with harpoon. You can drag any search result over to the workspace for a visual representation of the software.

Deploying containers is as easy as hitting the “Deploy” button. Containers built from GitHub repositories require you to build the repository first. In order for harpoon to successfully build a GitHub repository, we currently require the repository to have a top-level Dockerfile, which is industry best practice. If the Dockerfile is there, once you click the “Build” button, harpoon will automatically find it and build a container image. After a successful build, the “Deploy” button will become enabled and you can deploy the software directly.

Deploy software to Kubernetes through the harpoon Docker Extension.

Once you have a deployment, you can attach any Kubernetes element to it, including ingress, configmaps, secrets, and persistent volume claims.

You can find more info here if you need help: https://docs.harpoon.io/en/latest/usage.html 

Next steps

The harpoon Docker Extension makes it easy to provision and manage your Kubernetes clusters. You can visually deploy your software to Kubernetes and configure it without writing code or configuration. By integrating directly with Docker Desktop, we hope to make it easy for DevOps teams to dynamically start and maintain their cluster without any YAML, helm chart, or Kubernetes expertise.

Check out the harpoon Docker Extension for yourself!

Reduce Your Image Size with the Dive-In Docker Extension
https://www.docker.com/blog/reduce-your-image-size-with-the-dive-in-docker-extension/ (20 Dec 2022)

This guest post is written by Prakhar Srivastav, Senior Software Engineer at Google.


Anyone who’s built their own containers, either for local development or for cloud deployment, knows the advantages of keeping container sizes small. In most cases, keeping the container image size small translates to real dollars saved by reducing bandwidth and storage costs on the cloud. In addition, smaller images ensure faster transfer and deployments when using them in a CI/CD server.

However, even for experienced Docker users, it can be hard to understand how to reduce the sizes of their containers. The Docker CLI can be very helpful for this, but it can be intimidating to figure out where to start. That’s where Dive comes in.

What is Dive?

Dive is an open-source tool for exploring a Docker image and its layer contents, then discovering ways to shrink the size of your Docker/OCI image.

At a high level, it works by analyzing the layers of a Docker image. Every layer you add takes up more space in the image; each instruction in the Dockerfile (such as a separate RUN) adds a new layer to your image.

Dive takes this information and does the following:

  • Breaks down the image contents in the Docker image layer by layer.
  • Shows the contents of each layer in detail.
  • Shows the total size of the image.
  • Shows how much space was potentially wasted.
  • Shows the efficiency score of the image.

While Dive is awesome and extremely helpful, it’s a command line tool and uses a TUI (terminal UI) to display all the analysis. This can sometimes seem limiting and hard to use for some users. 
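
For reference, the upstream Dive CLI can also be run as a container against a local image, following the Dive project’s documented usage (the image name at the end is a placeholder):

# Mount the Docker socket so Dive can read local images, then open the TUI
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock wagoodman/dive:latest myorg/myapp:latest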

Wouldn’t it be cool to show all this useful data from Dive in an easy-to-use UI? Enter Dive-In, a new Docker Extension that integrates Dive into Docker Desktop!

Prerequisites

You’ll need to download Docker Desktop 4.8 or later before getting started. Make sure to choose the correct version for your OS and then install it.

Next, hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Click the Settings gear > Extensions tab > check the “Enable Docker Extensions” box.

Enable extensions in Docker Desktop.

Dive-In: A Docker Extension for Dive

Dive-In is a Docker extension that’s built on top of Dive so Docker users can explore their containers directly from Docker Desktop.

Install the Dive-In Docker Extension.

To get started, search for Dive-In in the Extensions Marketplace, then install it.

Alternatively, you can also run:

docker extension install prakhar1989/dive-in

When you first access Dive-In, it’ll take a few seconds to pull the Dive image from Docker Hub. Once it does, it should show a grid of all the images that you can analyze.

Welcome to the Dive-In Docker Extension.

Note: Currently Dive-In does not show the dangling images (or the images that have the repo tag of “none”). This is to keep this grid uncluttered and as actionable as possible.

To analyze an image, click on the analyze button, which calls Dive behind the scenes to gather the data. Depending on the size of the image, this can take some time. When it’s done, it’ll present the results.

At the top, Dive-In shows three key metrics for the image, which give a high-level view of how efficient the image is. The lower the efficiency score, the more room for improvement.

See how to reduce image size with the Dive-In Docker Extension.

Below the key metrics, it shows a table of the largest files in the image, which can be a good starting point for reducing the size.

Finally, as you scroll down, it shows the information of all the layers along with the size of each of them, which is extremely helpful in seeing which layer is contributing the most to the final size.

View image layers with the Dive-In Docker Extension.

And that’s it! 

Conclusion

The Dive-In Docker Extension helps you explore a Docker image and discover ways to shrink the size. It’s built on top of Dive, a popular open-source tool. Use Dive-In to gain insights into your container right from Docker Desktop!

Try it out for yourself and let me know what you think. Pull requests are also welcome!

About the Author

Prakhar Srivastav is a senior software engineer at Google where he works on Firebase to make app development easier for developers. When he’s not staring at Vim, he can be found playing guitar or exploring the outdoors.

Developer Engagement in the Remote Work Era with RedMonk and Miva
https://www.docker.com/blog/developer-engagement-in-the-remote-work-era/ (21 Oct 2022)

With the rise of remote-first and hybrid work models in the tech world, promoting developer engagement has become more important than ever. Maintaining a culture of engagement, productivity, and collaboration can be a hurdle for businesses making this new shift to remote work. But it’s far from impossible.

As a fully-remote, developer-focused company, Docker was thrilled to join in a like-minded conversation with RedMonk and Miva. Jake Levirne (Head of Product at Docker) was joined by Jon Burchmore (CTO at Miva) for a talk led by RedMonk’s Sr. Analyst Rachel Stephens. In this webinar on developer engagement in the remote work era, these industry specialists discuss navigating developer engagement with a focus on productivity, collaboration, and much more.

Navigating developer engagement

Companies with newly-distributed work environments often struggle to create an engaging culture for their employees. This remains especially true for the developer workforce. Because of this, developer engagement has become a priority for more organizations than ever, including Miva.

“We actually brought [developer engagement] up as a part of our developer roadmap. As we were talking about ‘this is our product roadmap for 2022 — what’s the biggest challenge?’, my answer was ‘keeping people engaged so that we can keep productivity high.’”

Jon Burchmore

Like Miva, other organizations are starting to incorporate developer engagement into their broader business decisions. Teams are intentionally choosing tools and processes that support not only development requirements but also involvement and preference. By taking a look at productivity and collaboration, we can see the impact of these decisions. 

Measuring developer productivity and collaboration

As both an art and a science, measuring developer productivity and collaboration can be difficult. While metrics can be informative, Jon is most interested in seeing the qualitative impact.

“How much is the team engaging with itself […] and is that engagement bottom up […] or from peer-to-peer? And a healthy team to me feels like a team where the peers are engaging as well. It’s not everyone just going upstream to get their problems solved.” – Jon Burchmore

As Jake adds, it’s more than just tracking lines of code. It’s about focusing on the outcomes. While developer engagement can be difficult to measure, the message is clear. Engaged developers are non-negotiable for high-performing teams.

Approaching developer collaboration

Developer collaboration is another linchpin for building developer engagement. Teams are now challenging themselves to find more opportunities for pair programming or similar types of coworking. Healthy collaboration should also not be limited to single teams.

“When you unlock collaboration both within teams and across teams, I think that’s what allows you to build what effectively are the increasingly complex real-world applications that are needed to keep creating business value.” – Jake Levirne

Organizations are taking a more holistic, inter-team perspective to avoid the dreaded, siloed productivity approach.

Watch the full, on-demand webinar

These points are just a snapshot of our talk with RedMonk and Miva on the challenges of developer engagement in the remote work era. Hear the rest of the discussion and more detail by watching the full conversation on demand.

]]>
How to Colorize Black & White Pictures With OpenVINO on Ubuntu Containers https://www.docker.com/blog/how-to-colorize-black-white-pictures-ubuntu-containers/ Fri, 02 Sep 2022 14:00:00 +0000 https://www.docker.com/?p=36935 If you’re looking to bring a stack of old family photos back to life, check out Ubuntu’s demo on how to use OpenVINO on Ubuntu containers to colorize monochrome pictures. This magical use of containers, neural networks, and Kubernetes is packed with helpful resources and a fun way to dive into deep learning!

A version of Part 1 and Part 2 of this article was first published on Ubuntu’s blog.



OpenVINO on Ubuntu containers: making developers’ lives easier

If you’re curious about AI/ML and what you can do with OpenVINO on Ubuntu containers, this blog is an excellent read for you too.

Docker image security isn’t only about provenance and supply chains; it’s also about the user experience. More specifically, the developer experience.

Removing toil and friction from your app development, containerization, and deployment processes avoids encouraging developers to use untrusted sources or bad practices in the name of getting things done. As AI/ML development often requires complex dependencies, it’s the perfect proof point for secure and stable container images.

Why Ubuntu Docker images?

As the most popular container image in its category, the Ubuntu base image provides a seamless, easy-to-set-up experience. From public cloud hosts to IoT devices, the Ubuntu experience is consistent and loved by developers.

One of the main reasons for adopting Ubuntu-based container images is the software ecosystem. More than 30,000 packages are available in one “install” command, with the option to subscribe to enterprise support from Canonical. It just makes things easier.

In this blog, you’ll see that using Ubuntu Docker images greatly simplifies component containerization. We even used a prebuilt & preconfigured container image for the NGINX web server from the LTS images portfolio maintained by Canonical for up to 10 years.

Beyond providing a secure, stable, and consistent experience across container images, Ubuntu is a safe choice from bare metal servers to containers. Additionally, it comes with hardware optimization on clouds and on-premises, including Intel hardware.

Why OpenVINO?

When you’re ready to deploy deep learning inference in production, binary size and memory footprint are key considerations – especially when deploying at the edge. OpenVINO provides a lightweight Inference Engine with a binary size of just over 40MB for CPU-based inference. It also provides a Model Server for serving models at scale and managing deployments.

OpenVINO includes open-source developer tools to improve model inference performance. The first step is to convert a deep learning model (trained with TensorFlow, PyTorch,…) to an Intermediate Representation (IR) using the Model Optimizer. In fact, it cuts the model’s memory usage in half by converting it from FP32 to FP16 precision. You can unlock additional performance by using low-precision tools from OpenVINO. The Post-training Optimisation Tool (POT) and Neural Network Compression Framework (NNCF) provide quantization, binarisation, filter pruning, and sparsity algorithms. As a result, Intel devices’ throughput increases on CPUs, integrated GPUs, VPUs, and other accelerators.

Open Model Zoo provides pre-trained models that work for real-world use cases to get you started quickly. Additionally, Python and C++ sample codes demonstrate how to interact with the model. More than 280 pre-trained models are available to download, from speech recognition to natural language processing and computer vision.

For this blog series, we will use the pre-trained colorization models from Open Model Zoo and serve them with Model Server.

Colorization example: Albert Einstein sticking out his tongue

OpenVINO and Ubuntu container images

The Model Server – by default – ships with the latest Ubuntu LTS, providing a consistent development environment and an easy-to-layer base image. The OpenVINO tools are also available as prebuilt development and runtime container images.

To learn more about Canonical LTS Docker Images and OpenVINO™, read:

Neural networks to colorize a black & white image

Now, back to the matter at hand: how will we colorize grandma and grandpa’s old pictures? Thanks to Open Model Zoo, we won’t have to train a neural network ourselves and will only focus on the deployment. (You can still read about it.)

Architecture diagram of the colorizer demo app running on MicroK8s

Our architecture consists of three microservices: a backend, a frontend, and the OpenVINO Model Server (OVMS) to serve the neural network predictions. The Model Server component hosts two different demonstration neural networks to compare their results (V1 and V2). These components all use the Ubuntu base image for a consistent software ecosystem and containerized environment.

A few reads if you’re not familiar with this type of microservices architecture:

gRPC vs REST APIs

The OpenVINO Model Server provides inference as a service via HTTP/REST and gRPC endpoints for serving models in OpenVINO IR or ONNX format. It also offers centralized model management to serve multiple different models or different versions of the same model and model pipelines.

The server offers two sets of APIs to interface with it: REST and gRPC. Both APIs are compatible with TensorFlow Serving and expose endpoints for prediction, checking model metadata, and monitoring model status. For use cases where low latency and high throughput are needed, you’ll probably want to interact with the model server via the gRPC API. Indeed, it introduces a significantly smaller overhead than REST. (Read more about gRPC.)
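To make the REST option concrete, here is a rough sketch of how you could query the Model Server’s TensorFlow-Serving-compatible HTTP API with curl. The port and model name are assumptions for illustration, not the demo’s actual configuration:

# Model metadata: input/output names and shapes
curl http://localhost:8000/v1/models/colorization-v1/metadata

# Prediction request; the "instances" payload must match the model's input shape
curl -X POST http://localhost:8000/v1/models/colorization-v1:predict \
  -H 'Content-Type: application/json' \
  -d '{"instances": [ ... ]}'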

OpenVINO Model Server is distributed as a Docker image with minimal dependencies. For this demo, we will use the Model Server container image deployed to a MicroK8s cluster. This combination of lightweight technologies is suitable for small deployments. It suits edge computing devices, performing inferences where the data is being produced – for increased privacy, low latency, and low network usage.

Ubuntu minimal container images

Since 2019, the Ubuntu base images have been minimal, with no “slim” flavors. While there’s room for improvement (keep posted), the Ubuntu Docker image is a less than 30MB download, making it one of the tiniest Linux distributions available on containers.

In terms of Docker image security, size is one thing, and reducing the attack surface is a fair investment. However, as is often the case, size isn’t everything. In fact, maintenance is the most critical aspect. The Ubuntu base image, with its rich and active software ecosystem community, is usually a safer bet than smaller distributions.

A common trap is to start smaller and install loads of dependencies from many different sources. The end result will have poor performance, use non-optimized dependencies, and not be secure. You probably don’t want to end up effectively maintaining your own Linux distribution … So, let us do it for you.

Colorization example: a cat walking through grass. “What are you looking at?” (original picture source)

Demo architecture

“As a user, I can drag and drop black and white pictures to the browser so that it displays their ready-to-download colorized version.” – said the PM (me).

For that – replied the one-time software engineer (still me) – we only need:

  • A fancy yet lightweight frontend component.
  • OpenVINO™ Model Server to serve the neural network colorization predictions.
  • A very light backend component.

Whilst we could target the Model Server directly with the frontend (it exposes a REST API), we need to apply transformations to the submitted image. The colorization models, in fact, each expect a specific input.

Finally, we’ll deploy these three services with Kubernetes because … well … because it’s groovy. And if you think otherwise (everyone is allowed to have a flaw or two), you’ll find a fully functional docker-compose.yaml in the source code repository.

Architecture diagram for the demo app (originally colored tomatoes)

In the upcoming sections, we will first look at each component and then show how to deploy them with Kubernetes using MicroK8s. Don’t worry; the full source code is freely available, and I’ll link you to the relevant parts.

Neural network – OpenVINO Model Server

The colorization neural network is published under the BSD 2-clause License, accessible from the Open Model Zoo. It’s pre-trained, so we don’t need to understand it in order to use it. However, let’s look closer to understand what input it expects. I also strongly encourage you to read the original work from Richard Zhang, Phillip Isola, and Alexei A. Efros. They made the approach super accessible and understandable on this website and in the original paper.

Neural network architecture (from arXiv:1603.08511 [cs.CV])

As you can see on the network architecture diagram, the neural network uses an unusual color space: LAB. There are many 3-dimensional spaces to code colors: RGB, HSL, HSV, etc. The LAB format is relevant here as it fully isolates the color information from the lightness information. Therefore, a grayscale image can be coded with only the L (for Lightness) axis. We will send only the L axis to the neural network’s input. It will generate predictions for the colors coded on the two remaining axes: A and B.

From the architecture diagram, we can also see that the model expects a 256×256 pixels input size. For these reasons, we cannot just send our RGB-coded grayscale picture in its original size to the network. We need to first transform it.
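To make that transformation concrete, here is a minimal Python sketch of the input preparation. It uses OpenCV for the resize and the color-space conversion; the library choice, tensor layout, and scaling are assumptions for illustration rather than the demo’s exact code:

import cv2
import numpy as np

def prepare_input(bgr_image, size=256):
    """Extract the 256x256 L (lightness) channel the colorization model expects."""
    resized = cv2.resize(bgr_image, (size, size))     # the model expects a 256x256 input
    lab = cv2.cvtColor(resized, cv2.COLOR_BGR2LAB)    # LAB isolates lightness from color
    l_channel = lab[:, :, 0].astype(np.float32)       # keep only L; A and B are what the model predicts
    return l_channel.reshape(1, 1, size, size)        # NCHW layout for the Model Server (assumption)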

We compare the results of two different model versions for the demo. Let them be called ‘V1’ (Siggraph) and ‘V2’. The models are served with the same instance of the OpenVINO™ Model Server as two different models. (We could also have done it with two different versions of the same model – read more in the documentation.)
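Concretely, serving the two networks from one instance comes down to a small Model Server configuration file that lists both models. The snippet below is only a sketch with assumed names and paths; the real values live in the demo repository:

{
  "model_config_list": [
    { "config": { "name": "colorization-v1", "base_path": "/models/colorization-v1" } },
    { "config": { "name": "colorization-v2", "base_path": "/models/colorization-v2" } }
  ]
}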

Finally, to build the Docker image, we use the first stage from the Ubuntu-based development kit to download and convert the model. We then rebase on the more lightweight Model Server image.

# Dockerfile
FROM openvino/ubuntu20_dev:latest AS omz
# download and convert the model
…
FROM openvino/model_server:latest
# copy the model files and configure the Model Server
…

Backend – Ubuntu-based Flask app (Python)

For the backend microservice that interfaces between the user-facing frontend and the Model Server hosting the neural network, we chose to use Python. There are many valuable libraries to manipulate data, including images, specifically for machine learning applications. To provide web serving capabilities, Flask is an easy choice.

The backend takes an HTTP POST request with the to-be-colorized picture. It synchronously returns the colorized result using the neural network predictions. In between – as we’ve just seen – it needs to convert the input to match the model architecture and to prepare the output to show a displayable result.

Here’s what the transformation pipeline looks like on the input:

Transformation pipeline on the input

And the output looks something like that:

Transformation pipeline on the output
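On the output side, the predicted A and B channels are merged back with the original lightness and converted to a displayable RGB image. Continuing the sketch above (same imports, and the same caveats about value ranges and scaling):

def prepare_output(original_bgr, predicted_ab):
    """Merge the original lightness with the predicted A/B channels into a displayable picture."""
    h, w = original_bgr.shape[:2]
    lab = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2LAB)
    ab = cv2.resize(predicted_ab.astype(np.float32), (w, h))   # upscale the 256x256 predictions
    lab[:, :, 1:] = ab.astype(np.uint8)                        # keep L, replace A and B
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)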

To containerize our Python Flask application, we use the first stage with all the development dependencies to prepare our execution environment. We copy it onto a fresh Ubuntu base image to run it, configuring the model server’s gRPC connection.

Frontend – Ubuntu-based NGINX container and Svelte app

Finally, I put together a fancy UI for you to try the solution out. It’s an effortless single-page application with a file input field. It can display side-by-side the results from the two different colorization models.

I used Svelte to build the demo as a dynamic frontend. Below each colorization result, there’s even a saturation slider (using a CSS transformation) so that you can emphasize the predicted colors and better compare the before and after.

To ship this frontend application, we again use a Docker image. We first build the application using the Node base image, then rebase it on top of the preconfigured NGINX LTS image maintained by Canonical. A reverse proxy on the frontend side serves as a passthrough to the backend on the /api endpoint, which simplifies the deployment configuration. We do that directly in an nginx.conf configuration file copied to the NGINX templates directory. The container image is preconfigured to use these template files with environment variables.
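For illustration, the reverse-proxy part of that template can be sketched as follows. The variable names are assumptions; the real template ships with the demo’s frontend component, and the ${...} placeholders are substituted from environment variables when the container starts.

# default.conf.template: a sketch, not the demo's exact file
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;   # the built Svelte bundle
        try_files $uri /index.html;
    }

    location /api {
        # passthrough to the Flask backend service
        proxy_pass http://${BACKEND_HOST}:${BACKEND_PORT};
    }
}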

Deployment with Kubernetes

I hope you had the time to scan some black and white pictures because things are about to get serious(ly colorized).

From the next section onwards, we’ll assume you already have a running Kubernetes installation. If not, I encourage you to run the following steps or go through this MicroK8s tutorial.

# https://microk8s.io/docs
sudo snap install microk8s --classic
 
# Add current user ($USER) to the microk8s group
sudo usermod -a -G microk8s $USER && sudo chown -f -R $USER ~/.kube
newgrp microk8s
 
# Enable the DNS, Storage, and Registry addons required later
microk8s enable dns storage registry
 
# Wait for the cluster to be in a Ready state
microk8s status --wait-ready
 
# Create an alias to enable the `kubectl` command
sudo snap alias microk8s.kubectl kubectl

Yes, you deployed a Kubernetes cluster in about two command lines.

Build the components’ Docker images

Every component comes with a Dockerfile to build itself in a standard environment and ship its deployment dependencies (read What are containers for more information). They all create an Ubuntu-based Docker image for a consistent developer experience.

Before deploying our colorizer app with Kubernetes, we need to build and push the components’ images. They need to be hosted in a registry accessible from our Kubernetes cluster. We will use the built-in local registry with MicroK8s. Depending on your network bandwidth, building and pushing the images will take a few minutes or more.

sudo snap install docker
cd ~ && git clone https://github.com/valentincanonical/colouriser-demo.git
 
# Backend
docker build backend -t localhost:32000/backend:latest
docker push localhost:32000/backend:latest
 
# Model Server
docker build modelserver -t localhost:32000/modelserver:latest
docker push localhost:32000/modelserver:latest
 
# Frontend
docker build frontend -t localhost:32000/frontend:latest
docker push localhost:32000/frontend:latest

Apply the Kubernetes configuration files

All the components are now ready for deployment. The Kubernetes configuration files are available as Deployment and Service YAML descriptors in the ./k8s folder of the demo repository. We can apply them all at once, in one command:

kubectl apply -f ./k8s

Give it a few minutes. You can watch the app being deployed with watch kubectl get pods. Of all the services, the frontend one has a specific NodePort configuration to make it publicly accessible by targeting the Node IP address.

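For reference, the NodePort part of the frontend Service descriptor looks roughly like the following sketch. The names and ports are illustrative; the actual YAML lives in the ./k8s folder of the repository.

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 80          # Service port inside the cluster
      targetPort: 80    # container port served by NGINX
      nodePort: 30000   # published on every node, which is why port 30000 works below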

Once ready, you can access the demo app at http://localhost:30000/ (or replace localhost with a cluster node IP address if you’re using a remote cluster). Pick an image from your computer, and get it colorized!

All in all, the project was pretty easy considering the task we accomplished. Thanks to Ubuntu containers, building each component’s image with multi-stage builds was a consistent and straightforward experience. And thanks to OpenVINO™ and the Open Model Zoo, serving a pre-trained model with excellent inference performance was a simple task accessible to all developers.

That’s a wrap!

You didn’t even have to share your pics over the Internet to get it done. Thanks for reading this article; I hope you enjoyed it. Feel free to reach out on socials. I’ll leave you with the last colorization example.

Christmassy colorization example (original picture source)

To learn more about Ubuntu, the magic of Docker images, or even how to make your own Dockerfiles, see below for related resources:

]]>
How to Set Up Your Local Node.js Development Environment Using Docker https://www.docker.com/blog/how-to-setup-your-local-node-js-development-environment-using-docker/ Tue, 30 Aug 2022 21:16:50 +0000 https://www.docker.com/blog/?p=26568 Docker is the de facto toolset for building modern applications and setting up a CI/CD pipeline – helping you build, ship, and run your applications in containers on-prem and in the cloud. 

Whether you’re running on simple compute instances such as AWS EC2 or something fancier like a hosted Kubernetes service, Docker’s toolset is your new BFF. 

But what about your local Node.js development environment? Setting up local dev environments while also juggling the hurdles of onboarding can be frustrating, to say the least.

While there’s no silver bullet, with the help of Docker and its toolset, we can make things a whole lot easier.


How to set up a local Node.js dev environment — Part 1


In this tutorial, we’ll walk through setting up a local Node.js development environment for a relatively complex application that uses React for its front end, Node and Express for a couple of microservices, and MongoDB for our datastore. We’ll use Docker to build our images and Docker Compose to make everything a whole lot easier.

If you have any questions or comments, or just want to connect, you can reach me in our Community Slack or on Twitter at @rumpl.

Let’s get started.

Prerequisites

To complete this tutorial, you will need:

  • Docker installed on your development machine. You can download and install Docker Desktop.
  • A Docker ID, which you can sign up for on Docker Hub.
  • Git installed on your development machine.
  • An IDE or text editor to use for editing files. I would recommend VSCode.

Step 1: Fork the Code Repository

The first thing we want to do is download the code to our local development machine. Let’s do this using the following git command:

git clone https://github.com/rumpl/memphis.git

Now that we have the code local, let’s take a look at the project structure. Open the code in your favorite IDE and expand the root level directories. You’ll see the following file structure.

├── docker-compose.yml
├── notes-service
├── reading-list-service
├── users-service
└── yoda-ui

The application is made up of a couple of simple microservices and a front-end written in React.js. It also uses MongoDB as its datastore.

Typically at this point, we would start a local version of MongoDB or look through the project to find where our applications will be looking for MongoDB. Then, we would start each of our microservices independently and start the UI in hopes that the default configuration works.

However, this can be very complicated and frustrating. Especially if our microservices are using different versions of Node.js and configured differently.

Instead, let’s walk through making this process easier by dockerizing our application and putting our database into a container. 

Step 2: Dockerize your applications

Docker is a great way to provide consistent development environments. It will allow us to run each of our services and UI in a container. We’ll also set up things so that we can develop locally and start our dependencies with one docker command.

The first thing we want to do is dockerize each of our applications. Let’s start with the microservices because they are all written in Node.js, and we’ll be able to use the same Dockerfile.

Creating Dockerfiles

Create a Dockerfile in the notes-service directory and add the following commands.

Dockerfile in the notes-service directory using Node.js.
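As a rough approximation of what the screenshot shows (the exact file is in the repository), the Dockerfile follows the familiar Node.js pattern:

FROM node:18.7.0
WORKDIR /code
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8081
CMD ["npm", "run", "start"]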

This is a very basic Dockerfile to use with Node.js. If you are not familiar with the commands, you can start with our getting started guide. Also, take a look at our reference documentation.

Building Docker Images

Now that we’ve created our Dockerfile, let’s build our image. Navigate into the notes-service directory and run the following command:

cd notes-service
docker build -t notes-service .
Docker build terminal output located in the notes-service directory.

Now that we have our image built, let’s run it as a container and test that it’s working.

docker run --rm -p 8081:8081 --name notes notes-service

Docker run terminal output located in the notes-service directory.

From this error, we can see we’re having trouble connecting to MongoDB. Two things are broken at this point:

  1. We didn’t provide a connection string to the application.
  2. We don’t have MongoDB running locally.

To resolve this, we could provide a connection string to a shared instance of our database, but we want to be able to manage our database locally without worrying about messing up data our colleagues might be using for development.

Step 3: Run MongoDB in a localized container

Instead of downloading, installing, and configuring MongoDB ourselves, we can use the Docker Official Image for MongoDB and run it in a container.

Before we run MongoDB in a container, we want to create a couple of volumes that Docker can manage to store our persistent data and configuration. I like to use the managed volumes that Docker provides instead of using bind mounts. You can read all about volumes in our documentation.

Creating volumes for Docker

To create our volumes, we’ll create one for the data and one for the configuration of MongoDB.

docker volume create mongodb

docker volume create mongodb_config

Creating a user-defined bridge network

Now we’ll create a network that our application and database will use to talk with each other. The network is called a user-defined bridge network and gives us a nice DNS lookup service that we can use when creating our connection string.

docker network create mongodb

Now, we can run MongoDB in a container and attach it to the volumes and network we created above. Docker will pull the image from Hub and run it for you locally.

docker run -it --rm -d -v mongodb:/data/db -v mongodb_config:/data/configdb -p 27017:27017 --network mongodb --name mongodb mongo

Step 4: Set your environment variables

Now that we have a running MongoDB, we also need to set a couple of environment variables so our application knows what port to listen on and what connection string to use to access the database. We’ll do this right in the docker run command.

docker run \
-it --rm -d \
--network mongodb \
--name notes \
-p 8081:8081 \
-e SERVER_PORT=8081 \
-e DATABASE_CONNECTIONSTRING=mongodb://mongodb:27017/yoda_notes \
notes-service

Step 5: Test your database connection

Let’s test that our application is connected to the database and is able to add a note.

curl --request POST \
--url http://localhost:8081/services/m/notes \
  --header 'content-type: application/json' \
  --data '{
"name": "this is a note",
"text": "this is a note that I wanted to take while I was working on writing a blog post.",
"owner": "peter"
}'

You should receive the following JSON back from our service.

{"code":"success","payload":{"_id":"5efd0a1552cd422b59d4f994","name":"this is a note","text":"this is a note that I wanted to take while I was working on writing a blog post.","owner":"peter","createDate":"2020-07-01T22:11:33.256Z"}}

Once we are done testing, run ‘docker stop notes mongodb’ to stop the containers.

Awesome! We’ve completed the first steps in Dockerizing our local development environment for Node.js. In Part II, we’ll take a look at how we can use Docker Compose to simplify the process we just went through.

How to set up a local Node.js dev environment — Part 2

In Part I, we took a look at creating Docker images and running containers for Node.js applications. We also took a look at setting up a database in a container and how volumes and networks play a part in setting up your local development environment.

In Part II, we’ll take a look at creating and running a development image where we can compile, add modules and debug our application all inside of a container. This helps speed up the developer setup time when moving to a new application or project. In this case, our image should have Node.js installed as well as NPM or YARN. 

We’ll also take a quick look at using Docker Compose to help streamline the processes of setting up and running a full microservices application locally on your development machine.

Let’s create a development image we can use to run our Node.js application.

Step 1: Develop your Dockerfile

Create a local directory on your development machine that we can use as a working directory to save our Dockerfile and any other files that we’ll need for our development image.

$ mkdir -p ~/projects/dev-image

Create a Dockerfile in this folder and add the following commands.

FROM node:18.7.0
RUN apt-get update && apt-get install -y \
  nano \
  vim

We start off by using the node:18.7.0 official image. I’ve found that this image is fine for creating a development image. I like to add a couple of text editors to the image in case I want to quickly edit a file while inside the container.

We did not add an ENTRYPOINT or CMD to the Dockerfile because we will rely on the base image’s ENTRYPOINT, and we will override the CMD when we start the image.

Step 2: Build your Docker image

Let’s build our image.

$ docker build -t node-dev-image .

And now we can run it.

$ docker run -it --rm --name dev -v $(pwd):/code node-dev-image bash

You will be presented with a bash command prompt. Now, inside the container, we can create a JavaScript file and run it with Node.js.

Step 3: Test your image

Run the following commands to test our image.

$ node -e 'console.log("hello from inside our container")'
hello from inside our container

If all goes well, we have a working development image. We can now do everything that we would do in our normal bash terminal.

If you run the above Docker command inside of the notes-service directory, then you will have access to the code inside of the container. You can start the notes-service by simply navigating to the /code directory and running npm run start.

Step 4: Use Compose to Develop locally

The notes-service project uses MongoDB as its data store. If you remember from Part I, we had to start the Mongo container manually and connect it to the same network as our notes-service. We also had to create a couple of volumes so we could persist our data across restarts of our application and MongoDB.

Instead, we’ll create a Compose file to start our notes-service and the MongoDb with one command. We’ll also set up the Compose file to start the notes-service in debug mode. This way, we can connect a debugger to the running node process.

Open the notes-service in your favorite IDE or text editor and create a new file named docker-compose.dev.yml. Copy and paste the below commands into the file.

services:
  notes:
    build:
      context: .
    ports:
      - 8080:8080
      - 9229:9229
    environment:
      - SERVER_PORT=8080
      - DATABASE_CONNECTIONSTRING=mongodb://mongo:27017/notes
    volumes:
      - ./:/code
    command: npm run debug

  mongo:
    image: mongo:4.2.8
    ports:
      - 27017:27017
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb

volumes:
  mongodb:
  mongodb_config:


This compose file is super convenient because now we don’t have to type all the parameters to pass to the `docker run` command. We can declaratively do that in the compose file.

We are exposing port 9229 so that we can attach a debugger. We are also mapping our local source code into the running container so that we can make changes in our text editor and have those changes picked up in the container.
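For npm run debug to give the debugger something to attach to, the project’s package.json needs a script that starts Node.js with the inspector listening on that port. The exact script lives in the repository; a typical wiring looks roughly like this (the nodemon flags below are an assumption, not copied from the project):

"scripts": {
  "start": "node server.js",
  "debug": "nodemon --inspect=0.0.0.0:9229 server.js"
}

Binding the inspector to 0.0.0.0 matters here: it makes port 9229 reachable from outside the container, which is what lets Chrome’s DevTools connect to it later.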

One other really cool feature of using the compose file is that we have service resolution setup to use the service names. As a result, we are now able to use “mongo” in our connection string. We use “mongo” because that is what we have named our mongo service in the compose file.

Let’s start our application and confirm that it is running properly.

$ docker compose -f docker-compose.dev.yml up --build

We pass the “--build” flag so Docker will build our image and then start it.

If all goes well, you should see the logs from the notes and mongo services:

Docker compose terminal ouput showing logs from the notes and mongo services.

Now let’s test our API endpoint. Run the following curl command:

$ curl --request GET --url http://localhost:8080/services/m/notes

You should receive the following response:

{"code":"success","meta":{"total":0,"count":0},"payload":[]}

Step 5: Connect to a Debugger

We’ll use the debugger that comes with the Chrome browser. Open Chrome on your machine, and then type the following into the address bar.

about:inspect

The following screen will open.

The DevTools function on the Chrome browser, showing a list of devices and remote targets.

Click the “Open dedicated DevTools for Node” link. This will open the DevTools that are connected to the running Node.js process inside our container.

Let’s change the source code and then set a breakpoint. 

Add the following code to the server.js file on line 19 and save the file.

server.use('/foo', (req, res) => {
  return res.json({ "foo": "bar" })
})

If you take a look at the terminal where our compose application is running, you’ll see that nodemon noticed the changes and reloaded our application.

Docker compose terminal output showcasing the nodemon-reloaded application.

Navigate back to the Chrome DevTools and set a breakpoint on line 20. Then, run the following curl command to trigger the breakpoint.

$ curl --request GET --url http://localhost:8080/foo

💥 BOOM 💥 You should have seen the code break on line 20, and now you are able to use the debugger just like you would normally. You can inspect and watch variables, set conditional breakpoints, view stack traces, and a bunch of other stuff.

Conclusion

In this article, we completed the first steps in Dockerizing our local development environment for Node.js. Then, we took things a step further and created a general development image that can be used like our normal command line. We also set up our compose file to map our source code into the running container and exposed the debugging port.

For further reading and additional resources:

]]>