Integrated Terminal for Running Containers, Extended Integration with Containerd, and More in Docker Desktop 4.12

Published September 1, 2022 | https://www.docker.com/blog/integrated-terminal-for-running-containers-extended-integration-with-containerd-and-more-in-docker-desktop-4-12/

Docker Desktop 4.12 is now live! This release brings some key quality-of-life improvements to the Docker Dashboard. We've also made some changes to our container image management and added it as an experimental feature. Finally, we've made it easier to find useful Extensions. Let's dive in.

Execute commands in a running container straight from the Docker Dashboard

Developers often need to explore a running container’s contents to understand its current state or debug it when issues arise. With Docker Desktop 4.12, you can quickly start an interactive session in a running container directly through a Docker Dashboard terminal. This easy access lets you run commands without needing an external CLI. 

Opening this integrated terminal is equivalent to running docker exec -it <container-id> /bin/sh (or docker exec -it <container-id> cmd.exe if you're using Windows containers) in your system terminal. Docker detects a running container's default user from the image's Dockerfile; if none is specified, it defaults to root. Placing this in the Docker Dashboard gives you real-time access to logs and other information about your running containers.
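For example, the equivalent manual workflow in an external terminal looks like this (the container ID below is hypothetical):

docker ps                          # find the running container's ID
docker exec -it 4f1b2c3d /bin/sh   # open an interactive shell inside it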

Your session is persisted if you navigate throughout the Dashboard and return — letting you easily pick up where you left off. The integrated terminal also supports copy, paste, search, and session clearing.

[Screenshots: container details view and integrated terminal in the Docker Dashboard]

Still want to use your external terminal? No problem. We’ve added two easy ways to launch a session externally.

Option 1: Use the “Open in External Terminal” button straight from this tab. Even if you prefer an integrated terminal, this might help you run commands and watch logs simultaneously, for example.

[Screenshot: "Open in External Terminal" button]

Option 2: Change your default settings to always open your system default terminal. We’ve added the option to choose what fits your workflow. After applying this setting, the “Open in terminal” button from the Containers tab will always open your system terminal.

Extending Docker Desktop’s integration with containerd

We’re extending Docker Desktop’s integration with containerd to include image management. This integration is available as an opt-in, experimental feature within this latest release.

[Screenshot: containerd option under Experimental Features settings]

Docker’s involvement in the containerd project extends all the way back to 2016. Docker has used containerd within the Docker Engine to manage the container lifecycle (creating, starting, and stopping) for a while now! 

This new feature is a step towards deeper containerd integration with Docker Engine. It lets you use containerd to store images and then push and pull them. When enabled in the latest Docker Desktop version, this experimental feature lets you use the following Docker commands with containerd under the hood: run, commit, build, push, load, and save.

This integration has the following benefits:

  • Containerd’s snapshotter implementation helps you quickly plug in new features. One example is using stargz to lazy pull images on startup.
  • The containerd content store can natively store multi-platform images and other OCI-compatible objects. This lets you build and manipulate multi-platform images, for example, or leverage other related features.
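As a sketch of that second point, once the containerd store is enabled you can build a multi-platform image and keep it in the local store (the image name is an example, and depending on your Docker version you may need docker buildx build):

docker build --platform linux/amd64,linux/arm64 -t myapp:latest .
docker image ls   # the multi-platform image now lives in the local containerd store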

You can learn more in our recent announcement, which fully explains containerd’s integration with Docker.

Easily discover extensions

We’ve added two new ways to interact with extensions in Docker Desktop 4.12.

Docker Extensions are now available directly within the Docker menu. From there, you can browse the Marketplace for new extensions, manage your installed extensions, or change extension settings. 

You can also search for extensions in the Extensions Marketplace! Narrow things down by name or keyword to find the tool you need.

[Screenshot: Extensions in the Docker menu]

Two new extensions have also joined the Extensions Marketplace:

Docker Volumes Backup & Share

Docker Volumes Backup & Share lets you effortlessly back up, clone, restore, and share Docker volumes. You can now easily create copies of your volumes and share them through SSH or by pushing them to a registry. Learn more about Volumes Backup & Share on Docker Hub.

[Screenshot: Volumes Backup & Share extension]

Mini Cluster

Mini Cluster enables developers who work with Apache Mesos to deploy and test their Mesos applications with ease. Learn more about Mini Cluster on Docker Hub.

Try out Dev Environments with Awesome Compose samples

We’ve updated our GitHub Awesome Compose samples to highlight projects that you can easily launch as Dev Environments in Docker Desktop. This helps you quickly understand how to add multi-service applications as Dev Environment projects. Look for the following green icon in the list of Docker Compose application samples:

[Image: green Dev Environments compatibility icon]

Here’s our new Awesome Compose/Dev Environments feature in action:

[Video: launching an Awesome Compose sample as a Dev Environment]

Get started with Docker Desktop 4.12 today

While we've explored some headlining features in this release, Docker Desktop 4.12 also adds important security enhancements under the hood. To learn about these fixes and more, browse our full release notes.

Have any feedback for us? Upvote, comment, or submit new ideas via our in-product links or our public roadmap.

Looking to become a new Docker Desktop user? Visit our Get Started page to jumpstart your development journey.

Virtual Desktop Support, Mac Permission Changes, & New Extensions in Docker Desktop 4.11

Published August 2, 2022 | https://www.docker.com/blog/docker-desktop-4-11-virtual-environments-mac-permissions-new-extensions/

Docker Desktop 4.11 is now live! With this release, we added some highly requested features designed to make developers' lives easier and help security-minded organizations breathe easier.

Run Docker Desktop for Windows in Virtual Desktop Environments

Docker Desktop for Windows is officially supported on VMware ESXi and Azure Windows VMs for our Docker Business subscribers. Now you can use Docker Desktop on your virtual environments and get the same experience as running it natively on Windows, Mac, or Linux machines.

Currently, we support virtual environments where the host hypervisors are VMware ESXi or Windows Hyper-V — both on-premises and in the cloud. Citrix Hypervisor support is also coming soon. As a Docker Business subscriber, you’ll receive dedicated support for running Docker Desktop in your virtual environments.

To learn more about running Docker Desktop for Windows in a virtual environment, please visit our documentation.

Changes to permission requirements on Docker Desktop for Mac

The first time you run Docker Desktop for Mac, you have to authenticate as root in order to install a privileged helper process. This process is needed to perform a limited set of privileged operations and runs in the background on the host machine while Docker Desktop is running.

In Docker Desktop 4.11, you no longer have to run this privileged helper service at all. Use the --user flag with the install command to set everything up in advance. Docker Desktop will then run without needing root on the Mac.
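For reference, here's a sketch of the command-line installation with that flag; the username is a placeholder, and the exact path and flag syntax may vary by release, so check the documentation:

sudo hdiutil attach Docker.dmg
sudo /Volumes/Docker/Docker.app/Contents/MacOS/install --user=yourusername
sudo hdiutil detach /Volumes/Docker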

For more details on Docker Desktop for Mac’s permission requirements, check out our documentation.

New Extensions in the Marketplace

We’re excited to announce the addition of two new extensions to the Extensions Marketplace:

  • vcluster – Create and manage virtual Kubernetes clusters using vcluster. Learn more about vcluster here.
  • pgAdmin 4 – Quickly administer and monitor PostgreSQL databases with pgAdmin 4 tools. Learn more about pgAdmin here.

Customize your Docker Desktop Theme

Prefer dark mode on the Docker Dashboard? With 4.11, you can now set your preference or have it respect your system settings. Go to Settings in the upper right corner to try it for yourself.

[GIF: switching between light and dark mode in Docker Desktop]

Fixing the Frequency of Docker Desktop Feedback Prompts

Thanks to all your feedback, we identified a bug that caused some users to be asked for feedback too frequently. Docker Desktop should now only request your feedback twice a year.

As we outlined here, you’ll be asked for feedback 30 days after the product is installed. You can choose to give feedback or decline. You then won’t be asked for a rating again for 180 days.

These scores help us understand user experience trends so we can keep improving Docker Desktop — and your comments have helped us make changes like this. Thanks for helping us fix this for everyone!

Have any feedback for us?

Upvote, comment, or submit new ideas via either our in-product links or our public roadmap. Check out our release notes to learn more about Docker Desktop 4.11.

Looking to become a new Docker Desktop user? Visit our Get Started page to jumpstart your development journey.

Quickly Spin Up New Development Projects with Awesome Compose

Published July 13, 2022 | https://www.docker.com/blog/quickly-spin-up-new-development-projects-with-awesome-compose/

Containers optimize our daily development work. They're standardized, so that we can easily switch between development environments — either migrating to testing or reusing container images for production workloads.

However, a challenge arises when you need more than one container. For example, you may develop a web frontend connected to a database backend with both running inside containers. While possible, this approach risks negating some (or all) of that container magic, since we must also consider storage interaction, network interaction, and port configurations. Those added complexities are tricky to navigate.

How Docker Compose Can Help

Docker Compose streamlines many development workloads based around multi-container implementations. One such example is a WordPress website that’s protected with an NGINX reverse proxy, and requires a MySQL database backend.

Alternatively, consider an eCommerce platform with a complex microservices architecture. Each service runs inside its own container — from the product catalog, to the shopping cart, to payment processing, and, finally, product shipping. These services share a database backend container and use a Redis container for caching and performance.

Maintaining a functional eCommerce platform means running several container instances, and that alone doesn't address the additional challenges of scalability or reliable performance.

While Docker Compose lets us create our own solutions, building the necessary Dockerfile scripts and YAML files can take some time. To simplify these processes, Docker introduced the open source Awesome Compose library in March 2020. Developers can now access pre-built samples to kickstart their Docker Compose projects.

What does that look like in practice? Let’s first take a more detailed look at Docker Compose. Next, we’ll explore step-by-step how to spin up a new development project using Awesome Compose.

Having some practical knowledge of Docker concepts and base commands is helpful while following along. However, this isn’t required! If you’d like to brush up or become familiarized with Docker, check out our orientation page and our CLI reference page.

How Docker Compose Works

Docker Compose is based on a compose.yaml file. This file specifies the platform’s building blocks — typically referencing active ports and the necessary, standalone Docker container images.

The diagram below represents snippets of a compose.yaml file for a WordPress site with a MySQL database, a WordPress frontend, and an NGINX reverse proxy:

 

[Figure: compose.yaml snippets for a WordPress site with MySQL, WordPress, and NGINX]

 

We’re using three separate Docker images in this example: MySQL, WordPress, and NGINX. Each of these three containers has its own characteristics, such as network ports and volumes.

mysql:
  image: mysql:8.0.28
  container_name: demomysql
  networks:
    - network
wordpress:
  depends_on:
    - mysql
  image: wordpress:5.9.1-fpm-alpine
  container_name: demowordpress
  networks:
    - network
nginx:
  depends_on:
    - wordpress
  image: nginx:1.21.4-alpine
  container_name: nginx
  ports:
    - 80:80
  volumes:
    - wordpress:/var/www/html

 

Originally, you'd have to use the docker run command to start each individual container. However, this introduces hiccups while managing network and storage interactions across each container. It's much more efficient to consolidate all necessary objects into a Docker Compose file.
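For comparison, wiring up the same three containers by hand would look roughly like this (abridged; environment variables and some volume details omitted):

docker network create network
docker run -d --name demomysql --network network mysql:8.0.28
docker run -d --name demowordpress --network network wordpress:5.9.1-fpm-alpine
docker run -d --name nginx --network network -p 80:80 -v wordpress:/var/www/html nginx:1.21.4-alpine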

To help developers deploy baseline scenarios faster, Docker provides a GitHub repository with several environments, available for you to reuse, called Docker Awesome Compose. Let’s explore how to run these on your own machine.

How to Use Docker Compose

Getting Started

First, you'll need to download and install Docker Desktop (for macOS, Windows, or Linux). Note that all example outputs in this article come from a Windows Docker host.

You can verify that Docker is installed by running a simple docker run hello-world command:

C:\>docker run hello-world

 

This should produce the following output, indicating that things are working correctly:

 

[Screenshot: hello-world container output]

 

You'll also need Docker Compose on your machine (Docker Desktop already includes it). Similarly, you can verify this installation by running a basic docker compose command, which triggers a corresponding response:

 

C:\>docker compose

 

[Screenshot: docker compose command output]

 

Next, either locally download or clone the Awesome Compose GitHub repository. If you have Git running locally, simply enter the following command:

git clone https://github.com/docker/awesome-compose.git

 

[Screenshot: git clone output]

 

If you’re not running Git, you can download the Awesome Compose repository as a ZIP file. You’ll then extract it within its own folder.

Adjusting Your Awesome Compose Code

After downloading Awesome Compose, jump into the appropriate subfolder and spin up your sample environment. For this example, we’ll use WordPress with MariaDB. You’ll then want to access your wordpress-mysql subfolder.

Next, open your compose.yaml file within your favorite editor and inspect its contents. Make the following changes in your provided YAML file:

 

  • Update line 9: volumes: - mariadb:/var/lib/mysql
  • Provide a complex password for the following variables:
    • MYSQL_ROOT_PASSWORD (line 12)
    • MYSQL_PASSWORD (line 15)
    • WORDPRESS_DB_PASSWORD (line 27)
  • Update line 30: volumes: mariadb (to reflect the name used in line 9 for this volume)

 

While this example has mariadb enabled, you can switch to a mysql example by commenting out image: mariadb:10.7 and uncommenting #image: mysql:8.0.27.

Your updated file should look like this:

services:
  db:
    # We use a mariadb image which supports both amd64 & arm64 architecture
    image: mariadb:10.7
    # If you really want to use MySQL, uncomment the following line
    #image: mysql:8.0.27
    #command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - mariadb:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=P@55W.RD123
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=P@55W.RD123
    expose:
      - 3306
      - 33060
  wordpress:
    image: wordpress:latest
    ports:
      - 80:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=P@55W.RD123
      - WORDPRESS_DB_NAME=wordpress
volumes:
  mariadb:

 

Save these file changes and close your editor.

Running Docker Compose

Starting up Docker Compose is easy. To begin, ensure you’re in the wordpress-mysql folder and run the following from the Command Prompt:

docker compose up -d

 

This command kicks off the startup process. It downloads and soon runs your various container images from Docker Hub. Now, enter the following Docker command to confirm your containers are running as intended:

docker compose ps

 

This command should show all running containers and their active ports:

[Screenshot: docker compose ps output]

 

Verify that your WordPress app is active by navigating to http://localhost:80 in your browser — which should display the WordPress welcome page.

If you complete the required fields, it’ll redirect you to the WordPress dashboard, where you can start using WordPress. This experience is identical to running on a server or hosting environment.

 

[Screenshot: WordPress welcome page]

 

Once testing is complete (or you’ve finished your daily development work), you can shut down your environment by entering the docker compose down command.

 

[Screenshot: docker compose down output]

 

Reusing Your Environment

If you want to continue developing in this environment later, simply re-enter docker compose up -d. This brings back the development setup, with all of the previous information still in the database, in just a few seconds.

 

[Screenshot: WordPress dashboard]

 

However, what if you want to reuse the same environment with a fresh database?

To bring down the environment and remove the volume — which we defined within compose.yaml — run the following command:

docker compose down -v

 

[Screenshot: docker compose down -v output]

 

Now, if you restart your environment with docker compose up, Docker Compose will summon a new WordPress instance. WordPress will have you configure your settings again, including the WordPress user, password, and website name:

 

[Screenshot: fresh WordPress setup screen]

 

While Awesome Compose sample projects work out of the box, always start with the README.md instructions file. You’ll typically need to update your sample YAML file with some environmental specifics — such as a password, username, or chosen database name. If you skip this step, the runtime won’t start correctly.

Awesome Compose Simplifies Multi-Container Management

Agile developers always need access to various application development-and-testing environments. Containers have been immensely helpful in providing this. However, more complex microservices architectures — which rely on containers running in tandem — are still quite challenging. Luckily, Docker Compose makes these management processes far more approachable.

Awesome Compose is Docker’s open-source library of sample workloads that empowers developers to quickly start using Docker Compose. The extensive library includes popular industry workloads such as ASP.NET, WordPress, and React web frontends. These can connect to MySQL, MariaDB, or MongoDB backends.

You can spin up samples from the Awesome Compose library in minutes. This lets you quickly deploy new environments locally or virtually. Our example also highlighted how easy it is to customize your Docker Compose YAML files and get started.

Now that you understand the basics of Awesome Compose, check out our other samples and explore how Docker Compose can streamline your next development project.

New Extensions, Improved Logs, and More in Docker Desktop 4.10

Published June 30, 2022 | https://www.docker.com/blog/new-extensions-improved-logs-and-more-in-docker-desktop-4-10/

We're excited to announce the launch of Docker Desktop 4.10. We've listened to your feedback, and here's what you can expect from this release.

Easily find what you need in container logs

If you’re going through logs to find specific error codes and the requests that triggered them — or gathering all logs in a given timeframe — the process should feel frictionless. To make logs more usable, we’ve made a host of improvements to this functionality within the Docker Dashboard. 

First, we’ve improved the search functionality in a few ways:

  • You can begin searching simply by typing Cmd + F / Ctrl + F (for Mac and Windows).
  • Matches in log search results are now highlighted. You can use the right/left arrows or Enter / Shift + Enter to jump between matches, while still keeping previous and subsequent logs in view.
  • We've added regular expression search, in case you want to do things like find all error codes in a range.

Second, we’ve also made some usability enhancements:

  • Smart scroll, so that you don't have to manually disable "stick to bottom" of logs. When you're at the bottom of the logs, we'll automatically stick to the bottom, but the second you scroll up, we'll stop following. If you want to restore the sticky behavior, simply click the arrow in the bottom right corner.
  • You can now select any external links present within your logs.
  • Selecting something in the terminal automatically copies that selection to the clipboard.

Third, we’ve added a new feature:

  • You can now clear a running container’s logs, making it easy to start fresh after you’ve made a change.

Take a tour of the functionality:

[Video: Easily Find What You Need in Container Logs Using Docker Desktop 4.10]

Adding Environment Variables on Image Run 

Previously, you could easily add environment variables while starting a container from the CLI, but the Docker Dashboard offered no way to add them while running an image. Now, when running a new container from an image, you can add environment variables that immediately take effect at runtime.
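For reference, the CLI equivalent passes variables with the -e (or --env) flag; here with a hypothetical image and variables:

docker run -d -e LOG_LEVEL=debug -e PORT=8080 my-image:latest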

[Screenshot: adding environment variables when running an image]

We’re also looking into adding some more features that let you quickly edit environment variables in running containers. Please share your feedback or other ideas on this roadmap item.

Containers Overview: bringing back ports, container name, and status

We want to give a big thanks to everyone who left feedback on the new Containers tab. It helped highlight where our changes missed the mark, and helped us quickly address them. In 4.10, we’ve:

  • Made container names and image names more legible, so you can quickly identify which container you need to manage
  • Brought back ports on the Overview page
  • Restored the “container status” icon so you can easily see which ones are running.

Easy Management with Bulk Container Actions

Many people loved the addition of bulk container deletion, which lets users delete everything at once. You can now simultaneously start, stop, pause, and resume multiple containers or apps rather than going one by one, so you can start your day, and every app you need, in a few clicks. You also have more flexibility while pausing and resuming, since you may want to pause all containers at once while still keeping the Docker Engine running. This lets you tackle tasks in other parts of the Dashboard.

[GIF: bulk container actions]

What’s up with the Homepage?

We’ve heard your feedback! When we first rolled out the new Homepage, we wanted to make it easier and faster to run your first container. Based on community feedback, we’re updating how we deliver that Homepage content. In this release, we’ve removed the Homepage so your default starting page is once again the Containers tab. 

But, don’t worry! While we rework this functionality, you can still access some of our most popular Docker Official Images while no containers are running. If you’d like to share any feedback, please leave it here.

[Screenshot: running a sample container from the Containers tab]

New Extensions are Joining the Lineup

We’re happy to announce the addition of two new extensions to the Extensions Marketplace:

  • Ddosify – A simple, high-performance, open-source load testing tool written in Golang. Learn more about Ddosify here.
  • Lacework Scanner – Enables developers to leverage the Lacework Inline Scanner directly within Docker Desktop. Learn more about Lacework here.

Please help us keep improving

Your feedback and suggestions are essential to keeping us on the right track! You can upvote, comment, or submit new ideas via either our in-product links or our public roadmap. Check out our release notes to learn even more about Docker Desktop 4.10. 

Looking to become a new Docker Desktop user? Visit our Get Started page to jumpstart your development journey.

How to Train and Deploy a Linear Regression Model Using PyTorch – Part 1

Published June 16, 2022 | https://www.docker.com/blog/how-to-train-and-deploy-a-linear-regression-model-using-pytorch-part-1/

Python is one of today's most popular programming languages and is used in many different applications. The 2021 StackOverflow Developer Survey showed that Python remains the third most popular programming language among developers. In GitHub's 2021 State of the Octoverse report, Python took the silver medal behind JavaScript.

Thanks to its longstanding popularity, developers have built many popular Python frameworks and libraries like Flask, Django, and FastAPI for web development.

However, Python isn’t just for web development. It powers libraries and frameworks like NumPy (Numerical Python), Matplotlib, scikit-learn, PyTorch, and others which are pivotal in engineering and machine learning. Python is arguably the top language for AI, machine learning, and data science development. For deep learning (DL), leading frameworks like TensorFlow, PyTorch, and Keras are Python-friendly.

We’ll introduce PyTorch and how to use it for a simple problem like linear regression. We’ll also provide a simple way to containerize your application. Also, keep an eye out for Part 2 — where we’ll dive deeply into a real-world problem and deployment via containers. Let’s get started.

What is PyTorch?

A Brief History and Evolution of PyTorch

Torch debuted in 2002 as a deep-learning library developed in the Lua language. Building on that work, Soumith Chintala and Adam Paszke (both from Meta) developed PyTorch in 2016 and based it on the Torch library. Since then, developers have flocked to it. Per the 2021 StackOverflow Developer Survey, PyTorch ranks third in popularity among frameworks and is the most loved DL library among developers. PyTorch is also the DL framework of choice for Tesla, Uber, Microsoft, and over 7,300 others.

PyTorch enables tensor computation with GPU acceleration, plus deep neural networks built on a tape-based autograd system. We’ll briefly break these terms down, in case you’ve just started learning about these technologies.

  • A tensor, in a machine learning context, refers to an n-dimensional array.
  • A tape-based autograd means that PyTorch uses reverse-mode automatic differentiation, which is a mathematical technique to compute derivatives (or gradients) efficiently using a computer.

Since diving into these mathematics might take too much time, check out these links for more information:

PyTorch is a vast library and contains plenty of features for various deep learning applications. To get started, let’s evaluate a use case like linear regression.

What is Linear Regression?

Linear Regression is one of the most commonly used mathematical modeling techniques. It models a linear relationship between two variables. This technique helps determine correlations between two variables — or determine the value of the dependent variable based on a particular value of the independent variable.

In machine learning, linear regression often applies to prediction and forecasting applications. You can solve it analytically, typically without needing any DL framework. However, this is a good way to understand the PyTorch framework and kick off some analytical problem-solving.

Numerous books and web resources address the theory of linear regression. We’ll cover just enough theory to help you implement the model. We’ll also explain some key terms. If you want to explore further, check out the useful resources at the end of this section.

Linear Regression Model

You can represent a basic linear regression model with the following equation:

Y = mX + bias

What does each portion represent?

  • Y is the dependent variable, also called a target or a label.
  • X is the independent variable, also called a feature(s) or co-variate(s).
  • bias is also called offset.
  • m refers to the weight or “slope.”

These terms are often interchangeable. The dependent and independent variables can be scalars or tensors.

The goal of linear regression is to choose weights and biases so that any prediction for a new data point — based on the existing dataset — yields the lowest error rate. In simpler terms, linear regression is finding the best possible curve (line, in this case) to match your data distribution.

Loss Function

A loss function is an error function that expresses the error (or loss) between real and predicted values. A very popular way to measure loss is the mean squared error, which we'll also use.
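Concretely, for n data points, the mean squared error averages the squared differences between predicted and actual values:

MSE = (1/n) * Σ (y_predicted − y_actual)²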

Gradient Descent Algorithms

Gradient descent is a class of optimization algorithms that tries to solve the problem (either analytically or using deep learning models) by starting from an initial guess of weights and bias. It then iteratively reduces errors by updating weights and bias values with successively better guesses.

A simplified approach uses the derivative of the loss function and minimizes the loss. The derivative is the slope of the mathematical curve, and we're attempting to reach the bottom of it — hence the name gradient descent. The stochastic gradient method samples smaller batches of data to compute updates, which is computationally cheaper than passing the entire dataset at each iteration.
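To make this concrete, here's a minimal, framework-free sketch of gradient descent for a y = mx + b model (the data, learning rate, and iteration count are illustrative, not taken from the tutorial):

import numpy as np

# Tiny illustrative dataset with known parameters m=2.0, b=4.2
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 4.2

m, b = 0.0, 0.0   # initial guesses
lr = 0.1          # learning rate: size of each update step

for _ in range(200):
    y_pred = m * x + b
    # Gradients of MSE = mean((y_pred - y)^2) with respect to m and b
    grad_m = 2 * np.mean((y_pred - y) * x)
    grad_b = 2 * np.mean(y_pred - y)
    # Step "downhill" along the negative gradient
    m -= lr * grad_m
    b -= lr * grad_b

print(m, b)  # approaches 2.0 and 4.2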

To learn more about this theory, the following resources are helpful:

Linear Regression with Pytorch

Now, let's talk about implementing a linear regression model using PyTorch. The script shown in the steps below is main.py — which resides in the GitHub repository and is forked from the "Dive Into Deep Learning" example repository. You can find code samples within the pytorch directory.

For our regression example, you’ll need the following:

  • Python 3
  • PyTorch module (pip install torch) installed on your system
  • NumPy module (pip install numpy) installed
  • Optionally, an editor (VS Code is used in our example)

Problem Statement

As mentioned previously, linear regression is analytically solvable. We're using deep learning to solve this problem because it helps you get started quickly and lets you easily check the validity of your training by comparing the learned parameters against the known ground truth.

We’ll attempt the following using Python and PyTorch:

  • Creating synthetic data where we’re aware of weights and bias
  • Using the PyTorch framework and built-in functions for tensor operations, dataset loading, model definition, and training

We don’t need a validation set for this example since we already have the ground truth. We’d assess our results by measuring the error against the weights and bias values used while creating our synthetic data.

Step 1: Import Libraries and Namespaces

For our simple linear regression, we’ll import the torch library in Python. We’ll also add some specific namespaces from our torch import. This helps create cleaner code:


# Step 1: Import libraries and namespaces
import torch
from torch.utils import data

# `nn` is an abbreviation for neural networks
from torch import nn

Step 2: Create a Dataset

For simplicity’s sake, this example creates a synthetic dataset that aims to form a linear relationship between two variables with some bias.

i.e. y = mx + bias + noise


# Step 2: Create Dataset
# Define a function to generate noisy data
def synthetic_data(m, c, num_examples):
    """Generate y = mX + bias(c) + noise"""
    X = torch.normal(0, 1, (num_examples, len(m)))
    y = torch.matmul(X, m) + c
    y += torch.normal(0, 0.01, y.shape)
    return X, y.reshape((-1, 1))

true_m = torch.tensor([2, -3.4])
true_c = 4.2
features, labels = synthetic_data(true_m, true_c, 1000)

Here, we use the built-in PyTorch function torch.normal to return a tensor of normally distributed random numbers. We also use the torch.matmul function to multiply tensor X with tensor m, then add normally distributed noise to y.

The dataset looks like this when visualized using a simple scatter plot:

[Figure: scatter plot of the synthetic dataset]

The code to create the visualization can be found in this GitHub repository.

Step 3: Read the Dataset and Define Small Batches of Data

# Step 3: Read dataset and create small batches
# Define a function to create a data iterator. Input is the features and labels from synthetic data.
# Output is iterable batched data using torch.utils.data.DataLoader
def load_array(data_arrays, batch_size, is_train=True):
    """Construct a PyTorch data iterator."""
    dataset = data.TensorDataset(*data_arrays)
    return data.DataLoader(dataset, batch_size, shuffle=is_train)

batch_size = 10
data_iter = load_array((features, labels), batch_size)

next(iter(data_iter))

Here, we use the PyTorch functions to read and sample the dataset. TensorDataset stores the samples and their corresponding labels, while DataLoader wraps an iterable around the TensorDataset for easier access.

The iter function creates a Python iterator, while next obtains the first item from that iterator.
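If you're curious, you can unpack that first batch to check its shape. With batch_size = 10 and two features, a quick sanity check (the values themselves are random) looks like this:

X_batch, y_batch = next(iter(data_iter))
print(X_batch.shape)  # torch.Size([10, 2])
print(y_batch.shape)  # torch.Size([10, 1])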

Step 4: Define the Model

PyTorch offers pre-built models for different cases. For our case, a single-layer feed-forward network with two inputs and one output is sufficient. The PyTorch documentation provides details about the nn.Linear implementation.

The model also requires the initialization of weights and biases. In the code, we initialize the weights using a Gaussian (normal) distribution with a mean value of 0, and a standard deviation value of 0.01. The bias is simply zero.


# Step 4: Define model & initialization
# Create a single-layer feed-forward network with 2 inputs and 1 output.
net = nn.Linear(2, 1)

# Initialize model params
net.weight.data.normal_(0, 0.01)
net.bias.data.fill_(0)

Step 5: Define the Loss Function

The loss function is defined as the mean squared error. The loss function tells you how far from the regression line the data points are:


#Step 5: Define loss function
# mean squared error loss function
loss = nn.MSELoss()

Step 6: Define an Optimization Algorithm

For optimization, we’ll implement a stochastic gradient descent method.
The lr stands for learning rate and determines the update step during training.


#Step 6: Define optimization algorithm
# implements a stochastic gradient descent optimization method
trainer = torch.optim.SGD(net.parameters(), lr=0.03)

Step 7: Training

For training, we’ll use specialized training data for n epochs (five in our case), iteratively using minibatch features and corresponding labels. For each minibatch, we’ll do the following:

  • Compute predictions and calculate the loss
  • Calculate gradients by running the backpropagation
  • Update the model parameters
  • Compute the loss after each epoch

# Step 7: Training
# Use complete training data for n epochs, iteratively using minibatch features and corresponding labels
# For each minibatch:
#   Compute predictions by calling net(X) and calculate the loss l
#   Calculate gradients by running the backpropagation
#   Update the model parameters using the optimizer
#   Compute the loss after each epoch and print it to monitor progress

num_epochs = 5
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = loss(net(X), y)
        trainer.zero_grad()  # sets gradients to zero
        l.backward()         # back propagation
        trainer.step()       # parameter update
    l = loss(net(features), labels)
    print(f'epoch {epoch + 1}, loss {l:f}')

Results

Finally, compute errors by comparing the true value with the trained model parameters. A low error value is desirable. You can compute the results with the following code snippet:


#Results
m = net.weight.data
print('error in estimating m:', true_m - m.reshape(true_m.shape))
c = net.bias.data
print('error in estimating c:', true_c - c)

When you run your code, the terminal window outputs the following:

python3 main.py 
features: tensor([1.4539, 1.1952]) 
label: tensor([3.0446])
epoch 1, loss 0.000298
epoch 2, loss 0.000102
epoch 3, loss 0.000101
epoch 4, loss 0.000101
epoch 5, loss 0.000101
error in estimating m: tensor([0.0004, 0.0005])
error in estimating c: tensor([0.0002])

As you can see, the loss shrinks with each epoch, and the final errors in estimating m and c are small.

Containerizing the Script

In the previous example, we had to install multiple Python packages just to run a simple script. Containers, meanwhile, let us easily package all dependencies into an image and run an application.

We’ll show you how to quickly and easily Dockerize your script. Part 2 of the blog will discuss containerized deployment in greater detail.

Containerize the Script

Containers help you bundle together your code, dependencies, and libraries needed to run applications in an isolated environment. Let’s tackle a simple workflow for our linear regression script.

We'll achieve this using Docker Desktop, which builds images from Dockerfiles that specify an image's overall contents.

Make sure to pull a Python base image (version 3.10) for our example:

FROM python:3.10

Next, we’ll install the numpy and torch dependencies needed to run our code:

RUN apt update && apt install -y python3-pip
RUN pip3 install numpy torch

Afterwards, we’ll need to place our main.py script into a directory:

COPY main.py app/

Finally, the CMD instruction defines important executables. In our case, we’ll run our main.py script:

CMD ["python3", "app/main.py" ]

Our complete Dockerfile is shown below, and exists within this GitHub repo:

FROM python:3.10
RUN apt update && apt install -y python3-pip
RUN pip3 install numpy torch
COPY main.py app/
CMD ["python3", "app/main.py" ]

Build the Docker Image

Now that we have every instruction that Docker Desktop needs to build our image, we’ll follow these steps to create it:

  1. In the GitHub repository, our sample script and Dockerfile are located in a directory called pytorch. From the repo’s home folder, we can enter cd deeplearning-docker/pytorch to access the correct directory.
  2. Our Docker image is named linear_regression. To build your image, run the docker build -t linear_regression . command (note the trailing dot, which sets the build context to the current directory).

Run the Docker Image

Now that we have our image, we can run it as a container with the following command:

docker run linear_regression

This command will create a container and execute the main.py script. Once we run the container, it’ll re-print the loss and estimates. The container will automatically exit after executing these commands. You can view your container’s status via Docker Desktop’s Container interface:

[Screenshot: Docker Desktop Containers view]

Desktop shows us that linear_regression executed the commands and exited successfully.

We can view our error estimates via the terminal or directly within Docker Desktop. I used a Docker Extension called Logs Explorer to view my container’s output (shown below):

Alternatively, you may also experiment using the Docker image that we created in this blog.

[Screenshot: container logs in the Logs Explorer extension]

As we can see, the results from running the script on my system and inside the container are comparable.

To learn more about using containers with Python, visit these helpful links:

Want to learn more about PyTorch theories and examples?

We took a very tiny peek into the world of Python, PyTorch, and deep learning. However, many resources are available if you’re interested in learning more. Here are some great starting points:

Additionally, endless free and paid courses exist on websites like YouTube, Udemy, Coursera, and others.

Stay tuned for more!

In this blog, we’ve introduced PyTorch and linear regression, and we’ve used the PyTorch framework to solve a very simple linear regression problem. We’ve also shown a very simple way to containerize your PyTorch application.

But, we have much, much more to discuss on deployment. Stay tuned for our follow-up blog — where we’ll tackle the ins and outs of deep-learning deployments! You won’t want to miss this one.

Using Awesome-Compose to Build and Deploy Your Multi-Container Application

Published April 21, 2022 | https://www.docker.com/blog/using-awesome-compose-to-build-and-deploy-your-multi-container-application/

In my last blog, I showed you how to get up and running with Docker Desktop. While we briefly introduced you to every major component bundled within Docker Desktop, it's now time to drill a little deeper. In this blog, you'll learn how to use one of our most popular tools: Docker Compose. We'll discuss what Docker Compose does, dive into Docker's awesome-compose GitHub repository, and show you how to deploy a sample React application with a Spring backend and MariaDB database.

What is Docker Compose?

Docker Compose is a tool that helps you run multi-container applications. With Compose, you use a YAML file to configure your services. Here are some key benefits of Docker Compose:

  • Simplified management of multiple environments through project names, which provide isolation
  • Data-loss prevention, by automatically copying data from previous container runs to new ones
  • Faster creation times, by only updating containers that have changed since the last run
  • Easy app portability between environments using defined environment variables

You can read more deeply about these features on our Compose documentation page.

Use Cases

Docker Compose has use cases in development, testing, and single-host production environments:

  • Development environments: Compose helps developers easily test their development environment by orchestrating the application in an isolated environment. This is achieved by specifying all application dependencies within the Compose file.
  • Automated testing environments: As with development environments, Compose lets developers easily create isolated testing environments for their applications. This helps with shift-left testing, which is common in CI/CD. These isolated test environments are easily destroyed once testing is finished.
  • Single host deployments: Developers can re-use their Compose file — with some changes — in specific cases such as single-server deployments. This helps developers partially re-use their local development workflows in production.

We’ll focus on using Docker Compose to locally orchestrate and run your multi-container applications, to simplify testing. Docker Compose relies on Compose files to work effectively. So, how do you use them?

Using a Compose File

There are three main steps to using a Compose file:

  1. Define your app environment with a Dockerfile so that it’s reproducible anywhere.
  2. Define your app’s services within docker-compose.yml, so they can run together in an isolated environment.
  3. Enter the docker compose up command. This starts and runs your entire app.
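As an illustration, a minimal docker-compose.yml for a two-service app might look like this (the service and image names are examples, not from a specific sample):

services:
  web:
    build: .
    ports:
      - "8000:8000"
  cache:
    image: redis:alpine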

What is Awesome-Compose?

If you view a sample Compose file, it appears pretty simple for basic applications. However, it can quickly grow complex as you add more services to your application. To boost your productivity and get you started with Docker Compose, we maintain a GitHub repository called awesome-compose.

Awesome-compose is a curated collection of Docker Compose samples. These samples offer a starting point for services integration using a Compose file — and for managing their deployment with Docker Compose. They’re geared towards your local development environment and aren’t recommended for production workflows. However, awesome-compose provides you with a great starting blueprint, and is designed to save you development time and effort.

What does that look like in practice? Let's take a sample application and build a simple React app with a Spring backend and a MariaDB database. MariaDB is preferable over MySQL since its image supports both AMD64 and ARM64 architectures.

Our Sample Application

Creating a Simple, Non-Container-Based Application

Let's first create a sample "Hello from Docker" React application. To scaffold it, type the following in your terminal:

npx create-react-app my-app

This creates a basic React application. You can now hop into your application folder and edit App.js to your liking. For the purposes of this guide, here’s a quick example:

// App.js (imports and export shown for completeness; adapted from create-react-app's template)
import logo from './logo.svg';
import './App.css';

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        Hello from Docker
        <a
          className="App-link"
          href="https://reactjs.org"
          target="_blank"
          rel="noopener noreferrer"
        >
          Learn React
        </a>
      </header>
    </div>
  );
}

export default App;

After making any changes (in our case, choosing to print “Hello from Docker”), use the npm start command to view the React application. Here’s how that looks:

[Screenshot: React app running at localhost:3000]

As you can see in the address bar, this application runs on port 3000.

Creating a Similar Multi-Container Application

Let’s say that you’re developing a similar application with a React frontend, a Spring (Java) backend, and a MariaDB database. This requires some additional steps. You’d want to use containers in this situation, because those containers package code and all dependencies into a single unit. This lets an application run quickly and reliably from one computing environment to another.

For simplicity's sake, let's say that this app needs one container for the frontend, one for the backend, and one for the database. This is a perfect opportunity for us to use the react-java-mysql sample application from our awesome-compose GitHub project. As mentioned earlier in the blog, MariaDB is preferable over MySQL since its image supports both AMD64 and ARM64 architectures. However, you can use either by changing your DB image source in your Compose file. The process is as simple as changing a single line, as sketched below.
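For instance, the swap is a one-line change within the Compose file's db service (image tags here are illustrative):

db:
  # image: mariadb:10.7
  image: mysql:8.0.27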

To use awesome-compose, you’ll need to install Docker and Docker Compose on your machine. We also recommend installing Docker Desktop. This provides some added productivity benefits that you’ll soon learn more about.

The project structure of this sample application can be divided into the backend, frontend, database, and the Docker Compose file. The folder directory looks like this:

[Figure: react-java-mysql project structure]

Similarly, the Docker Compose file (docker-compose.yaml) consists of three services:

  • Backend
  • Frontend
  • DB

While examining the Compose file, you’ll see the following code:

frontend:
  build: frontend
  ports:
    - 3000:3000

Docker Compose is mapping port 3000 of the frontend-service container to host port 3000.

Deploying with Docker Compose

To deploy your sample application, navigate to the react-java-mysql directory by using these two commands:

git clone https://github.com/docker/awesome-compose
cd react-java-mysql

If you want to download only react-java-mysql, enter the following commands:

git clone \
  --depth 1 \
  --filter=blob:none \
  --sparse \
  https://github.com/docker/awesome-compose

cd awesome-compose
git sparse-checkout set react-java-mysql
cd react-java-mysql

The Docker Compose file is located within this folder. Next, enter the following into your terminal:

docker compose up -d


This jumpstarts the multi-container deployment on your local machine. Since it's the first time you're running the containers, the process may take a few moments. Thankfully, only changed containers will update the next time you run this application. The rest is cached, dramatically shortening the testing phase between changes. Once you've finished, you should see terminal output similar to this:

[Screenshot: docker compose up output]

It’s important to enter docker compose instead of docker-compose. We’re using Docker Compose V2 to build the application, which has introduced some basic syntax changes. You can read more about this transition in our documentation.

Want to see what’s currently active? You can summon a list of running containers with the docker ps command. This outputs the following:

[Screenshot: docker ps output]

Visualizing your containers is even easier within Docker Desktop, via the Dashboard:

[Screenshot: Compose application containers in the Docker Dashboard]

The Docker Dashboard doesn’t only display running containers belonging to your multi-container Compose application. It also lets you easily start, pause, stop, or delete any container. Additionally, Docker Desktop lets you manage a particular container via CLI commands.

From the Docker Dashboard, you can click the “Open in Browser” button to open the localhost. You can alternatively navigate to it directly within your browser.

[Screenshot: "Open in Browser" button in the Docker Dashboard]

Since your application is running on port 3000, you’ll see the following:

[Screenshot: React app served at localhost:3000]

To stop or delete the application, you can use the Docker Desktop interface shown below:

[Screenshot: stopping or deleting the application in Docker Desktop]


Finally, you can always use the docker compose down command to achieve the same result.

Key Takeaways

Congratulations! You now know the basics of using Docker Compose and the awesome-compose Github repository. Both tools can also accommodate deployments that are more complex — as shown while running a multi-container application from the awesome-compose repository.

The awesome-compose repository was created to give you an easy starting point for building your multi-container applications. You also learned how to visualize and control your multi-container applications in a local development environment using Docker Desktop.

You can also contribute to the awesome-compose library! If you’re interested in helping our community better leverage Docker Compose, please follow our Contribution Guide. You’ll receive assistance with making your pull requests and getting those sample applications live.

Join us for DockerCon 2022!

Want to learn more about cloud-native development, and how Docker's tools can help? Join us at DockerCon 2022 on Tuesday, May 10.

DockerCon is a free, one-day virtual event geared towards developers and development teams who are building the next generation of modern applications. We invite you to come discover more, learn more, excel within your role, and connect with thousands of your fellow tech community members. DockerCon is right around the corner, so register today!

Getting Started with Docker Desktop

Published March 29, 2022 | https://www.docker.com/blog/getting-started-with-docker-desktop/
If you’re curious about Docker but haven’t used it, you’re at the right place. While Docker is technical at its core, our goal is to make our tools approachable for all users regardless of their familiarity with containers. This blog introduces Docker technology, Docker Desktop, and why you should care about both.

What is Docker?

Before talking about Docker, let’s take a moment to highlight containers. A container packages code and all its dependencies into a single unit, thus letting an application run quickly and reliably from one computing environment to another. This makes such applications easily portable between machines and solves the “it works on my machine” problem. Though the technology behind containers has been around for a while, Docker made it easier to work with containers. Since its debut in 2013, Docker has become an industry standard. Currently, the core technology exists as a popular, open-source container runtime called Docker Engine.  

To create Docker containers, you'll first need a Docker image. If you're familiar with object-oriented programming concepts, think of images as classes and containers as objects. Images include everything needed to run an application: code, runtime, system tools, system libraries, and settings.

What can I use Docker for?  

Docker simplifies application development and removes complexities for you and many other developers around the world. This allows you to be more productive and devote more time to your actual development process. You can deploy both simple and complex applications more easily. Docker is ideal for the following use cases, and many more:

  • Software prototyping and packaging 
  • Microservice architecture implementation  
  • Network modeling
  • Continuous integration and delivery
  • Reducing debugging overhead
  • Running more workloads on the same hardware

What is Docker Desktop? 

One of the best ways to get started with Docker is by installing Docker Desktop, especially if you're a developer using Mac or Windows. That said, you might be wondering, "What's Docker Desktop, and how's it different from the open-source Docker Engine?"

While some developers envision Docker Desktop as just a GUI on top of Docker Engine, that characterization barely scratches the surface. Docker Desktop is an easy-to-install application and includes Docker Engine, Docker CLI client, Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper. Docker Desktop still uses Docker Engine at its core. However, the seamless integration and interoperability of these tools makes Docker Desktop user-friendly—regardless of your experience with Docker. 

By installing and using Docker Desktop, you’ll enjoy the following features: 

  • Simple and easy-to-install environment to build, ship, and run your containers
  • Easy way to create and manage volumes
  • Local and remote management of Docker images
  • Better collaboration by sharing repeatable and reproducible development from your local machine to the container
  • Simple, one-click Kubernetes setup for your local machine
  • A dashboard for a quick overview of running containers, images, and volumes
  • Support for building and using multi-architecture images

Docker Desktop adds these additional features atop existing open-source tooling, allowing you to easily maintain, monitor, and update Docker tooling. It also provides you with a consistent experience across different OSes. Docker Desktop makes collaboration easy using Docker Dev Environments, allowing teams to share their work with one click via Git or Docker Hub. It also has an easy-to-use UI for many common actions:

  • Starting a container 
  • Pausing and resuming a container 
  • Stopping a container
  • Setting up a single-node local Kubernetes cluster
  • Creating or deleting volumes 

Additionally, both the GUI and CLI are always available to you based on your preferences. 

How do I get started? 

Check out our official documentation to learn more about best practices. The documentation has helpful quickstart resources and language-specific guides. The Docker Desktop documentation also provides an overview of key features with usage instructions. 

Additionally, Docker users can learn, connect, and collaborate with each other via our Docker Community Slack channel. You can chat with Docker community leaders, Docker Captains, and your fellow local developers in the channel. You’ll also get up-to-date information about Docker-related events, conferences around the world, and Docker community all-hands events. Docker is also the most-loved tool according to Stack Overflow’s 2021 Developer Survey. Other users are always willing to lend a helping hand. 

Hoping to learn at your own pace? A list of comprehensive hosted labs, self-guided tutorials, books, and self-guided online courses are summarized in the documentation under the education resources section.

Exploring Docker Desktop with a quick example

If you’ve installed Docker Desktop and want to explore more, here’s a quick example to get you started:

  1. Open Docker Desktop. 
  2. Type the following command in your terminal:
    docker run -d -p 80:80 docker/getting-started
  3. Open your browser to http://localhost
  4. Follow the instructions for either Mac or Windows to access your dashboard

You should see something like the screenshot below, where a container called objective_merkle is visibly running. Container names are randomly generated — with the first word being an adjective, and the second referencing either a notable scientist or hacker (more information in this GitHub repo).

[Screenshot: running container in the Docker Desktop Dashboard]

If you look at the command, there are a few flags after the command “docker run” to get the container running. A simple explanation for them is:

  • -d runs the application in the background
  • -p 80:80 provides the mapping from the host port to the container port. You can learn more about port mapping here.
  • docker/getting-started is the container image being used

Once you type the command, Docker recognizes the flags, executes the command, and looks for the image locally. If you don’t have an image by this name on your system, Docker will automatically find and retrieve it from Docker Hub. If you’re new to Docker, just think that Docker Hub is to Docker what GitHub is to Git.

The image that you pulled is on Docker Hub. Another way to pull it is by running the following command in your terminal:

docker pull docker/getting-started

The image is a simple to-do list manager running on Node.js. This tutorial doesn't require any JavaScript experience. Detailed information can be found on the tutorial page or by clicking on http://localhost after running the container. This tutorial dives much deeper into various aspects of Docker and Docker Desktop than this blog.

One More Thing

This blog centers primarily on Docker Desktop for Mac and Windows, but we’re thrilled to announce that Docker Desktop for Linux is coming soon. Docker Desktop for Linux (DD4L) is the second-most popular feature request in our public roadmap. If you want to become an early adopter, check out our guide for installing the Docker Desktop for Linux Tech Preview. You can play a key role in helping improve Docker Desktop for Linux prior to launch. 

Check out the hands-on demo of Docker Desktop for Linux at our Community All Hands event on March 31, 2022. You’re also invited to join other developers—and boost your development skills—at DockerCon 2022. Pre-event training kicks off on May 9th, while our virtual event begins on May 10th. See you there!

Vulnerability Alert: Avoiding "Dirty Pipe" CVE-2022-0847 on Docker Engine and Docker Desktop

Published March 16, 2022 | https://www.docker.com/blog/vulnerability-alert-avoiding-dirty-pipe-cve-2022-0847-on-docker-engine-and-docker-desktop/

You might have heard about a new Linux vulnerability that was disclosed last week: CVE-2022-0847, aka "Dirty Pipe". This vulnerability lets attackers overwrite supposedly read-only files on the Linux host, which could enable them to modify files on the host from inside a container instance.

If you use Docker Engine natively, we recommend updating your Linux OS to a version that has patched the vulnerability, e.g. Linux 5.16.11, 5.15.25, or 5.10.102.

For those of you using Docker Desktop, we recently released a patch of our own for Mac and for Windows.

To read more about the vulnerability itself, the blog by Max Kellermann provides the details, and the blog by Rory McCune shows how this vulnerability could be exploited on containers.
