Docker Acquires Mutagen for Continued Investment in Performance and Flexibility of Docker Desktop
https://www.docker.com/blog/mutagen-acquisition/ | Tue, 27 Jun 2023

I’m excited to announce that Docker, voted the most-used and most-desired tool in Stack Overflow’s 2023 Developer Survey, has acquired Mutagen IO, Inc., the company behind the open source Mutagen file synchronization and networking technologies that enable high-performance remote development. Mutagen’s synchronization and forwarding capabilities facilitate the seamless transfer of code, binary artifacts, and network requests between arbitrary locations, connecting local and remote development environments. When combined with Docker’s existing developer tools, Mutagen unlocks new possibilities for developers to innovate and accelerate development velocity with local and remote containerized development.

“Docker is more than a container tool. It comprises multiple developer tools that have become the industry standard for self-service developer platforms, empowering teams to be more efficient, secure, and collaborative,” says Docker CEO Scott Johnston. “Bringing Mutagen into the Docker family is another example of how we continuously evolve our offering to meet the needs of developers with a product that works seamlessly and improves the way developers work.”

Docker and Mutagen logos on a red background

The Mutagen acquisition introduces novel mechanisms for developers to extract the highest level of performance from their local hardware while simultaneously opening the gateway to the newest remote development solutions. We continue scaling the abilities of Docker Desktop to meet the needs of the growing number of developers, businesses, and enterprises relying on the platform.

 “Docker Desktop is focused on equipping every developer and dev team with blazing-fast tools to accelerate app creation and iteration by harnessing the combined might of local and cloud resources. By seamlessly integrating and magnifying Mutagen’s capabilities within our platform, we will provide our users and customers with unrivaled flexibility and an extraordinary opportunity to innovate rapidly,” says Webb Stevens, General Manager, Docker Desktop.

 “There are so many captivating integration and experimentation opportunities that were previously inaccessible as a third-party offering,” says Jacob Howard, the CEO at Mutagen. “As Mutagen’s lead developer and a Docker Captain, my ultimate goal has always been to enhance the development experience for Docker users. As an integral part of Docker’s technology landscape, Mutagen is now in a privileged position to achieve that goal.”

Jacob will join Docker’s engineering team, spearheading the integration of Mutagen’s technologies into Docker Desktop and other Docker products.

You can get started with Mutagen today by downloading the latest version of Docker Desktop and installing the Mutagen extension, available in the Docker Extensions Marketplace. Support for current Mutagen offerings, open source and paid, will continue as we develop new and better integration options.

FAQ | Docker Acquisition of Mutagen

With Docker’s acquisition of Mutagen, you’re sure to have questions. We’ve answered the most common ones in this FAQ.

As with all of our open source efforts, Docker strives to do right by the community. We want this acquisition to benefit everyone — community and customer — in keeping with our developer obsession.

What will happen to Mutagen Pro subscriptions and the Mutagen Extension for Docker Desktop?

Both will continue as we evaluate and develop new and better integration options. Existing Mutagen Pro subscribers will see an update to the supplier on their invoices, but no other billing changes will occur.

Will Mutagen become closed-source?

There are no plans to change the licensing structure of Mutagen’s open source components. Docker has always valued the contributions of open source communities.

Will Mutagen or its companion projects be discontinued?

There are no plans to discontinue any Mutagen projects. 

Will people still be able to contribute to Mutagen’s open source projects?

Yes! Mutagen has always benefited from outside collaboration in the form of feedback, discussion, and code contributions, and there’s no desire to change that relationship. For more information about how to participate in Mutagen’s development, see the contributing guidelines.

What about other downstream users, companies, and projects using Mutagen?

Mutagen’s open source licenses continue to allow the embedding and use of Mutagen by other projects, products, and tooling.

Who will provide support for Mutagen projects and products?

In the short term, support for Mutagen’s projects and products will continue to be provided through the existing support channels. We will work to merge support into Docker’s channels in the near future.

Is this replacing VirtioFS, gRPC FUSE, or osxfs?

No, virtual filesystems will continue to be the default path for bind mounts in Docker Desktop. Docker is continuing to invest in the performance of these technologies.

How does Mutagen compare with other virtual or remote filesystems?

Mutagen is a synchronization engine rather than a virtual or remote filesystem. Mutagen can be used to synchronize files to native filesystems, such as ext4, trading typically imperceptible amounts of latency for full native filesystem performance.
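For a concrete sense of what that looks like, here is a minimal sketch using the standalone Mutagen CLI (the host name and paths are placeholders):

# Create a two-way sync session between a local source tree and a remote host
mutagen sync create --name=web-src ~/projects/web/src dev-box:~/web/src

# Check session status and watch changes propagate
mutagen sync list
mutagen sync monitor web-src

Because changes land on a native filesystem at both ends, builds and file watchers on either side run at full native speed.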

How does Mutagen compare with other synchronization solutions?

Mutagen focuses primarily on configuration and functionality relevant to developers.

How can I get started with Mutagen?

To get started with Mutagen, download the latest version of Docker Desktop and install the Mutagen Extension from the Docker Desktop Extensions Marketplace.
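If you prefer the command line, Docker Desktop’s extension CLI can install Marketplace extensions as well. A sketch (the extension image name below is illustrative; confirm the exact name in the Extensions Marketplace before running):

# Install the extension and confirm it is active
docker extension install mutagen/docker-desktop-extension
docker extension ls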

Full-Stack Reproducibility for AI/ML with Docker and Kaskada
https://www.docker.com/blog/full-stack-reproducibility-for-ai-ml-with-docker-kaskada/ | Tue, 20 Jun 2023

Docker is used by millions of developers to optimize the setup and deployment of development environments and application stacks. As artificial intelligence (AI) and machine learning (ML) are becoming key components of many applications, the core benefits of Docker are now doing more heavy lifting and accelerating the development cycle.

Gartner predicts that “by 2027, over 90% of new software applications that are developed in the business will contain ML models or services as enterprises utilize the massive amounts of data available to the business.”

This article, written by our partner DataStax, outlines how Kaskada, open source, and Docker are helping developers optimize their AI/ML efforts.

Stylized brain in hexagon on light blue background with Docker and Kaskada logos

Introduction

As a data scientist or machine learning practitioner, your work is all about experimentation. You start with a hunch about the story your data will tell, but often you’ll only find an answer after false starts and failed experiments. The faster you can iterate and try things, the faster you’ll get to answers. In many cases, the insights gained from solving one problem are applicable to other related problems. Experimentation can lead to results much faster when you’re able to build on the prior work of your colleagues.

But there are roadblocks to this kind of collaboration. Without the right tools, data scientists waste time managing code dependencies, resolving version conflicts, and repeatedly going through complex installation processes. Building on the work of colleagues can be hard due to incompatible environments — the dreaded “it works for me” syndrome.

Enter Docker and Kaskada, which each offer a similar solution to their respective problems: a declarative language designed specifically for the problem at hand and an ecosystem of supporting tools (Figure 1).

Figure 1: A Dockerfile defines the exact steps needed to build a reproducible development environment.

For Docker, that means the Dockerfile format, which describes the exact steps needed to build a reproducible development environment, plus an ecosystem of tools for working with containers (Docker Hub, Docker Desktop, Kubernetes, etc.). With Docker, data scientists can package their code and dependencies into an image that can run as a container on any machine, eliminating the need for complex installation processes and ensuring that colleagues can work with the exact same development environment.

With Kaskada, data scientists can compute and share features as code and use those throughout the ML lifecycle — from training models locally to maintaining real-time features in production. The computations required to produce these datasets are often complex and expensive because standard tools like Spark have difficulty reconstructing the temporal context required for training real-time ML models.

Kaskada solves this problem by providing a way to compute features — especially those that require reasoning about time — and sharing feature definitions as code. This approach allows data scientists to collaborate with each other and with machine learning engineers on feature engineering and reuse code across projects. Increased reproducibility dramatically speeds cycle times to get models into production, increases model accuracy, and ultimately improves machine learning results.

Example walkthrough

Let’s see how Docker and Kaskada improve the machine learning lifecycle by walking through a simplified example. Imagine you’re trying to build a real-time model for a mobile game and want to predict an outcome, for example, whether a user will pay for an upgrade.

Setting up your experimentation environment

To begin, start a Docker container that comes preinstalled with Jupyter and Kaskada:

docker run --rm -p 8888:8888 kaskadaio/jupyter
open <jupyter URL from logs> 

This step instantly gives you a reproducible development environment to work in, but you might want to customize this environment. Additional development tools can be added by creating a new Dockerfile using this image as the “base image”:

# Dockerfile
FROM kaskadaio/jupyter

# Copy in the dependency list and install it
COPY requirements.txt .
RUN pip install -r requirements.txt

In this example, you started with Jupyter and Kaskada, copied over a requirements file and installed all the dependencies in it. You now have a new Docker image that you can use as a data science workbench and share across your organization: Anyone in your organization with this Dockerfile can reproduce the same environment you’re using by building and running your Dockerfile.

docker build -t experimentation_env .
docker run --rm -p 8888:8888 experimentation_env

The power of Docker comes from the fact that you’ve created a file that describes your environment, and you can now share this file with others.
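Beyond sharing the Dockerfile itself, you can also push the built image to a registry so teammates pull a ready-made environment instead of rebuilding it. A minimal sketch, where “myorg” stands in for your registry namespace:

docker tag experimentation_env myorg/experimentation_env:latest
docker push myorg/experimentation_env:latest

# A colleague reproduces the same workbench with:
docker run --rm -p 8888:8888 myorg/experimentation_env:latest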

Training your model

Inside a new Jupyter notebook, you can begin the process of exploring solutions to the problem — predicting purchase behavior. To begin, you’ll create tables to organize the different types of events produced by the imaginary game.

%pip install kaskada
%load_ext fenlmagic

# Session and table helpers from the Kaskada Python client
# (import paths as of the client version current at the time of writing)
from kaskada.api.session import LocalBuilder
from kaskada import table

session = LocalBuilder().build()

table.create_table(
    table_name = "GamePlay",
    time_column_name = "timestamp",
    entity_key_column_name = "user_id",
)
table.create_table(
    table_name = "Purchase",
    time_column_name = "timestamp",
    entity_key_column_name = "user_id",
)

table.load(
    table_name = "GamePlay",
    file = "historical_game_play_events.parquet",
)
table.load(
    table_name = "Purchase",
    file = "historical_purchase_events.parquet",
)

Kaskada is easy to install and use locally. After installing, you’re ready to start creating tables and loading event data into them. Kaskada’s vectorized engine is built for high-performance local execution, and, in many cases, you can start experimenting on your data locally, without the complexity of managing distributed compute clusters.

Kaskada’s query language was designed to make it easy for data scientists to build features and training examples directly from raw event data. A single query can replace complex ETL and pre-aggregation pipelines, and Kaskada’s unique temporal operations unlock native time travel for building training examples “as of” important times in the past.

%% fenl --var training

# Create views derived from the source tables
let GameVictory = GamePlay | when(GamePlay.won)
let GameDefeat = GamePlay | when(not GamePlay.won)

# Compute some features as inputs to our model
let features = {
  loss_duration: sum(GameDefeat.duration),
  purchase_count: count(Purchase),
}

# Observe our features at the time of a player's second defeat since their last victory
let example = features
  | when(count(GameDefeat, window=since(GameVictory)) == 2)
  | shift_by(hours(1))

# Compute a target value
# In this case comparing purchase count at prediction and label time
let target = count(Purchase) > example.purchase_count

# Combine feature and target values computed at the different times
in extend(example, {target})

In the example above, you first apply filtering to the events, build simple features, observe them at the points in time when your model will be used to make predictions, and then combine the features with the value you want to predict, computed an hour later. Kaskada lets you describe all these operations “from scratch,” starting with raw events and ending with an ML training dataset.

from sklearn.linear_model import LogisticRegression
from sklearn import preprocessing

X = training.dataframe[['loss_duration']]
y = training.dataframe['target']

scaler = preprocessing.StandardScaler().fit(X)
X_scaled = scaler.transform(X)

model = LogisticRegression(max_iter=1000)
model.fit(X_scaled, y)

Kaskada’s query language makes it easy to write an end-to-end transformation from raw events to a training dataset.

Conclusion

Docker and Kaskada enable data scientists and ML engineers to solve real-time ML problems quickly and reproducibly. With Docker, you can manage your development environment with ease, ensuring that your code runs the same way on every machine. With Kaskada, you can collaborate with colleagues on feature engineering and reuse queries as code across projects. Whether you’re working independently or as part of a team, these tools can help you get answers faster and more efficiently than ever before.

Get started with Kaskada’s official images on Docker Hub.

What is the Best Container Security Workflow for Your Organization?
https://www.docker.com/blog/what-is-the-best-container-security-workflow/ | Wed, 14 Sep 2022

Since containers are a primary means for developing and deploying today’s microservices, keeping them secure is highly important. But where should you start? A solid container security workflow often begins with assessing your images. These images can contain a wide spectrum of vulnerabilities. Per Sysdig’s latest report, 75% of images have vulnerabilities considered either highly or critically severe.

There’s good news though — you can patch these vulnerabilities! And with better coordination and transparency, it’s possible to catch these issues in development before they impact your users. This protects everyday users and enterprise customers who require strong security. 

Snyk’s Fani Bahar and Hadar Mutai dove into this container security discussion during their DockerCon session. By taking a shift-left approach and rallying teams around key security goals, stronger image security becomes much more attainable. 

Let’s hop into Fani and Hadar’s talk and digest their key takeaways for developers and organizations. You’ll learn how attitudes, structures, and tools massively impact container security.

Security requires the right mindset across organizations

Mindset is one of the most difficult hurdles to overcome when implementing stronger container security. While teams widely consider security to be important, many often find it annoying in practice. That’s because security has traditionally taken monumental effort to get right. Even today, container security has become “the topic that most developers tend to avoid,” according to Hadar. 

And while teams scramble to meet deadlines or launch dates, the discovery of higher-level vulnerabilities can cause delays. Security soon becomes an enemy rather than a friend. So how do we flip the script? Ideally, a sound container-security workflow should do the following:

  • Support the agile development principles we’ve come to appreciate with microservices development
  • Promote improved application security in production
  • Unify teams around shared security goals instead of creating conflicting priorities

Two main personas are invested in improving application security: developers and DevSecOps. These separate personas have very similar goals. Developers want to ship secure applications that run properly. Meanwhile, DevSecOps teams want everything that’s deployed to be secured. 

The trick to unifying these goals is creating an effective container-security workflow that benefits everyone. Plus, this workflow must overcome the top challenges impacting container security — today and in the future. Let’s analyze those challenges that Hadar highlighted. 

Organizations face common container security challenges

Unraveling the mystery behind security seems daunting, but understanding common challenges can help you form a strategy. Organizations grapple with the following: 

  • Vulnerability overload (a single container image can introduce upwards of 900 vulnerabilities)
  • Prioritizing security fixes over others
  • Understanding how container security fundamentally works (this impacts whether a team can fix issues)
  • Lengthier development pipelines stemming from security issues (and testing)
  • Integrating useful security tools that developers support into existing workflows and systems

From this, we can see that teams have to work together to align on security. This includes identifying security outcomes and defining roles and responsibilities, while causing minimal disruption. Container security should be as seamless as possible. 

DevSecOps maturity and organizational structures matter

DevSecOps stands for Development, Security, and Operations, but what does that mean? Security under a DevSecOps system becomes a shared responsibility and a priority quite early in the software development lifecycle. While some companies have this concept down pat, many others are new to it. Others lie somewhere in the middle. 

As Fani mentioned, a company’s development processes and security maturity determine how they’re categorized. We have two extremes. On one hand, a company might’ve fully “realized” DevSecOps, meaning they’ve successfully scaled their processes and bolstered security. Conversely, a company might be in the exploratory phase. They’ve heard about DevSecOps and know they want it (or need it). But, their development processes aren’t well-entrenched, and their security posture isn’t very strong. 

Those in the exploratory phase might find themselves asking the following questions:

  • Can we improve our security?
  • Which organizations can we learn from?
  • Which best practices should we follow?

Meanwhile, other companies are either DevOps mature (but security immature) or DevSecOps ready. Knowing where your company sits can help you take the correct next steps to either scale processes or security. 

The impact of autonomy vs. centralization on security

You’ll typically see two methodologies used to organize teams. One focuses on autonomy, while the other prioritizes centralization.

Autonomous approaches

Autonomous organizations might house multiple teams that are more or less siloed. Each works on its own application and oversees that application’s security. This involves building, testing, and validation. Security ownership falls on those developers and anyone else integrated within the team. 

But that’s not to say DevSecOps fades completely into the background! Instead, it fills a support and enablement role. This DevSecOps team could work directly with developers on a case-by-case basis or even build useful, internal tools to make life easier. 

Centralized approaches

Otherwise, your individual developers could rally around a centralized DevOps and AppSec (app security) team. This group is responsible for testing and setting standards across different development teams. For example, this central DevOps/AppSec team would define approved base images and lay out a framework for container design that meets stringent security protocols. This plan must harmonize with each application team throughout the organization. 

Why might you even use approved parent images? These images have undergone rigorous testing to ensure no show-stopping vulnerabilities exist. They also contain basic sets of functionality aimed at different projects. DevSecOps has to find an ideal compromise between functionality and security to support ongoing engineering efforts. 

Whichever camp you fall into will essentially determine how “piecemeal” your plan is. How your developers work best will also influence your security plan. For instance, your teams might be happiest using their own specialized toolsets. In this case, moving to centralization might cause friction or kick off a transition period. 

On the flip side, will autonomous teams have the knowledge to employ strong security after relying on centralized policies? 

It’s worth mentioning that plenty of companies will keep their existing structures. However, any structural changes like those above can affect container security in the short and long term. 

Diverse tools define the container security workflow

Next, Fani showed us just how robust the container security tooling market is. For each step in the development pipeline, and therefore workflow, there are multiple tools for the job. You have your pick between IDEs. You have repositories and version control. You also have integration tools, storage, and orchestration. 

These serve a purpose for the following facets of development: 

  • Local development
  • GitOps
  • CI/CD
  • Registry
  • Production container management

Thankfully, there’s no overarching best or “worst” tool for a given job. But, your organization should choose a tool that delivers exceptional container security with minimal disruption. You should even consider how platforms like Docker Desktop can contribute directly or indirectly to your security workflows, through tools like image management and our Software Bill of Materials (SBOM) feature.
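As a rough sketch of what that looks like from the command line, Docker Desktop at the time of this talk shipped an experimental docker sbom command and the Snyk-powered docker scan command (both have since evolved, so treat the exact flags as illustrative):

# List the packages inside an image as a Software Bill of Materials
docker sbom nginx:latest

# Run a vulnerability scan, reporting only high-severity issues
docker scan --severity high nginx:latest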

You don’t want to redesign your processes to accommodate a tool. For example, it’s possible that Visual Studio Code suits your teams better than IntelliJ IDEA. The same goes for Jenkins vs. CircleCI, or GitHub vs. Bitbucket. Your chosen tool should fit within existing security processes and even enhance them. Not only that, but these tools should mesh well together to avoid productivity hurdles. 

Container security workflow examples

The theories behind security are important but so are concrete examples. Fani kicked off these examples by hopping into an autonomous team workflow. More and more organizations are embracing autonomy since it empowers individual teams. 

Examining an autonomous workflow

As with any modern workflow, development and security will lean on varying degrees of automation. This is the case with Fani’s example, which begins with a code push to a Git repository. That action initiates a Jenkins job, which is a set of sequential, user-defined tasks. Next, something like the Snyk plugin scans for build-breaking issues. 

If Snyk detects no issues, then the Jenkins job is deemed successful. Snyk monitors continuously from then on and alerts teams to any new issues: 

This flowchart details the security and development steps taken from an initial code push to a successful Jenkins job.

When issues are found, your container security tool might flag those build issues, notify developers, provide artifact access, and offer any appropriate remediation steps. From there, the cycle repeats itself. Or, it might be safer to replace vulnerable components or dependencies with alternatives. 
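One possible shape of that scanning step, assuming the Snyk CLI is available on the build agent (the image name is a placeholder and BUILD_NUMBER is the usual Jenkins variable):

# Fail the build if the freshly built image has high-severity issues
snyk container test registry.example.com/shop-api:${BUILD_NUMBER} \
  --file=Dockerfile --severity-threshold=high

# Keep monitoring the shipped image so new vulnerabilities raise alerts later
snyk container monitor registry.example.com/shop-api:${BUILD_NUMBER}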

Examining a common base workflow

With DevSecOps at the security helm, processes can look a little different. Hadar walked us through these unique container security stages to highlight DevOps’ key role. This is adjacent to — but somewhat separate from — the developer’s workflows. However, they’re centrally linked by a common registry: 

This flow chart illustrates a common base workflow where DevOps vets base images, which are then selected by the developer team from a common registry.

DevOps begins by choosing an appropriate base image, customizing it, optimizing it, and putting it through its paces to ensure strong security. Approved images travel to the common development registry. Conversely, DevOps will fix any vulnerabilities before making that image available internally. 

Each developer then starts with a safe, vetted image that passes scanning without sacrificing important, custom software packages. Issues require fixing and bounce you back to square one, while success means pushing your container artifacts to a downstream registry. 
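A rough sketch of that vet-and-publish loop, with “registry.internal” standing in for the organization’s common registry:

# Pull and scan a candidate base image
docker pull python:3.10-slim
docker scan python:3.10-slim

# Once it passes review, publish it as the approved base for application teams
docker tag python:3.10-slim registry.internal/approved/python:3.10-slim
docker push registry.internal/approved/python:3.10-slim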

Creating safer containers for the future 

Overall, container security isn’t as complex as many think. By aligning on security and developing core processes alongside tooling, it’s possible to make rapid progress. Automation plays a huge role. And while there are many ways to tackle container security workflows, no single approach definitively takes the cake. 

Safer public base images and custom images are important ingredients while building secure applications. You can watch Fani and Hadar’s complete talk to learn more. You can also read more about the Snyk Extension for Docker Desktop on Docker Hub.

Four Ways Docker Boosts Enterprise Software Development
https://www.docker.com/blog/four-ways-docker-boosts-enterprise-software-development/ | Tue, 13 Sep 2022

In this guest post, David Balakirev, Regional CTO at Adnovum, describes how Adnovum benefits from container technology based on Docker. Adnovum is a Swiss software company which offers comprehensive support in the fast and secure digitalization of business processes, from consulting and design to implementation and operation.

1. Containers provide standardized development

Everybody wins when solution providers focus on providing value and not on the intricacies of the target environment. This is where containers shine.

With the wide-scale adoption of container technology products (like Docker) and the continued spread of standard container runtime platforms (like Kubernetes), developers have fewer compatibility aspects to consider. While it’s still important to be familiar with the target environment, the specific operating system, installed utilities, and services are less of a concern as long as we can work with the same platform during development. We believe this is one of the reasons for the growing number of new container runtime options.

For workloads targeting on-premises environments, the runtime platform can be selected based on the level of orchestration needed. Some teams decide on running their handful of services via Docker Compose; this is typical for development and testing environments, and not unheard of for production installations. For use cases that warrant a full-blown container orchestrator, Kubernetes (and derivatives like OpenShift) are still dominant.

Those developing for the cloud can choose from a plethora of options. Kubernetes is present in all major cloud platforms, but there are also options for those with monolithic workloads, from semi to fully managed services to get those simple web applications out there (like Azure App Services or App Engine from the Google Cloud Platform).

For those venturing into serverless, the deployment unit is typically either a container image or source code which then a platform turns into a container.

With all of these options, it’s been interesting to follow how our customers adopted container technology. Smaller firms’ IT strategies seemed to adapt faster to working with solution providers like us.

But larger companies are also catching up. We welcome the trend where enterprise customers recognize the benefits of building and shipping software using containers — and other cloud-native technologies.

Overall, we can say that shipping solutions as containers is becoming the norm. We use Docker at Adnovum, and we’ve seen specific benefits for our developers. Let’s look at those benefits more.

2. Limited exposure means more security

Targeting container platforms (as opposed to traditional OS packages) also comes with security consequences. For example, say we’re given a completely managed Kubernetes platform. This means the client’s IT team is responsible for configuring and operating the cluster in a secure fashion. In these cases, our developers can focus their attention on the application we deliver. Thanks to container technology, we can further limit exposure to various attacks and vulnerabilities.

This ties into the basic idea of containers: by only packaging what is strictly necessary for your application, you may also reduce the possible attack surface. This can be achieved by building images from scratch or by choosing secure base images to enclose your deliverables. When choosing secure base images on Docker Hub, we recommend filtering for container images produced by verified parties:

The official NGINX image on Docker Hub from a verified publisher

There are also cases when the complete packaging process is handled by your development tool(s). We use Spring Boot in many of our web application projects. Spring Boot incorporates buildpacks, which can build Docker OCI images from your web applications in an efficient and reliable way. This relieves developers from hunting for base images and reduces (but does not completely eliminate) the need to do various optimizations.

Source: https://buildpacks.io/docs/concepts/

Developers using Docker Desktop can also try local security scanning to spot vulnerabilities before they enter your code and artifact repositories: https://docs.docker.com/engine/scan/

3. Containers support diverse developer environments

While Adnovum specializes in web and mobile application development, within those boundaries we utilize a wide range of technologies. Supporting such heterogeneous environments can be tricky.

Imagine we have one Spring Boot developer who works on Linux, and another who develops the Angular frontend on a Mac. They both rely on a set of tools and dependencies to develop the project on their machine:

  • A local database instance
  • Test-doubles (mocks, etc.) for 3rd party services
  • Browsers — sometimes multiple versions
  • Developer tooling, including runtimes and build tools

In our experience, it can be difficult to support these tools across multiple operating systems if they’re installed natively. Instead, we try to push as many of these into containers as possible. This helps us to align the developer experience and reduce maintenance costs across platforms.
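For example, instead of installing a database natively on every laptop, a developer can run a disposable instance in a container. A minimal sketch (the credentials are throwaway values for illustration):

# Run a local PostgreSQL instance for development
docker run -d --name dev-postgres \
  -e POSTGRES_PASSWORD=devonly \
  -p 5432:5432 \
  postgres:14

# Tear it down when switching projects
docker rm -f dev-postgres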

Our developers working on Windows or Mac can use Docker Desktop, which not only allows them to run containers but brings along some additional functionality (Docker Desktop is also available on Linux; alternatively, you may opt to use Docker Engine directly). For example, we can use docker-compose out of the box, which means we don’t need to worry about ensuring people can install it on various operating systems. Doing this over many such tools can add up to a significant cognitive and cost relief for your support team.

Outsourcing your dependencies this way is also useful if your developers need to work across multiple projects at once. After all, nobody enjoys installing multiple versions of databases, browsers, and tools.

We can typically apply this technique to our more recent projects, whereas for older projects with technology predating the mass adoption of Docker, we still have homework to do.

4. Containers aid reproducibility

As professional software makers, we want to ensure that not only do we provide excellent solutions for our clients, but if there are any concerns (functionality or security), we can trace back the issue to the exact code change which produced the artifact — typically a container image for web applications. Eventually, we may also need to rebuild a fixed version of said artifact, which can prove to be challenging. This is because build environments also evolve over time, continuously shifting the compatibility window of what they offer.

In our experience, automation (specifically Infrastructure-as-code) is key for providing developers with a reliable and scalable build infrastructure. We want to be able to re-create environments swiftly in case of software or hardware failure, or provision infrastructure components according to older configuration parameters for investigations. Our strategy is to manage all infrastructure via tools like Ansible or Terraform, and we strongly encourage engineers to avoid managing services by hand. This is true for our data-center and cloud environments as well.

Whenever possible, we also prefer running services as containers, instead of installing them as traditional packages. You’ll find many of the trending infrastructure services like NGINX and PostgreSQL on Docker Hub.

We try to push hermetic builds because they can bootstrap their own dependencies, which significantly decreases their reliance on what is installed in the build context that your specific CI/CD platform offers. Historically, we had challenges with supporting automated UI tests which relied on browsers installed on the machine. As the number of our projects grew, their expectations for browser versions diverged. This quickly became difficult to support even with our dedication to automation. Later, we faced similar challenges with tools like Node.js and the Java JDK where it was almost impossible to keep up with demand.

Eventually, we decided to adopt bootstrapping and containers in our automated builds, allowing teams to define what version of Chrome or Java their project needed. During the CI/CD pipeline, the required version dependency will be downloaded before the build, in case it’s not already cached.
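In practice, a pipeline step like the following runs the build inside a container that pins the exact toolchain version rather than relying on whatever the CI agent has installed (the project layout and image tag are illustrative):

# Build with a pinned Maven + JDK toolchain, independent of the host
docker run --rm \
  -v "$PWD":/workspace -w /workspace \
  maven:3.8-eclipse-temurin-17 \
  mvn --batch-mode package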

Immutability means our dependencies, and our products for that matter, never change after they’re built. Unfortunately, this isn’t exactly how Docker tags work. In fact, Docker tags are mutable by design, and this can be confusing at first if you are accustomed to SemVer.

Let’s say your Dockerfile starts like this:

FROM acme:1.2.3

It would be only logical to assume that whenever you (re-)build your own image, the same base image would be used. In reality, the tag could point to a different image if somebody decides to publish a new image under the same tag. They may do this for a number of reasons: sometimes out of necessity, but it could also be for malicious reasons.

In case you want to make sure you’ll be using the exact same image as before, you can start to refer to images via their digest. This is a trade-off between usability and security. While using digests brings you closer to truly reproducible builds, it also means that if the authors of a base image issue a new image version under the same tag, then your builds won’t be using the latest version. Whichever side you’re leaning towards, you should use base images from trusted sources and introduce vulnerability scanning into your pipelines.
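If you do decide to pin, here is a sketch of how to look up the digest behind a tag, reusing the hypothetical acme:1.2.3 image from the example above:

# Resolve the digest the tag currently points to
docker buildx imagetools inspect acme:1.2.3

# Or, after pulling, read the digest that was actually fetched
docker pull acme:1.2.3
docker inspect --format '{{index .RepoDigests 0}}' acme:1.2.3

# The FROM line can then reference acme@sha256:<digest> instead of the tag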

Combining immutability (with all its challenges), automation, and hermetic builds, we’ll be able to rebuild older versions of our code. You may need to do this to reproduce a bug — or to address vulnerabilities before you ship a fixed artifact.

While we still see opportunities for ourselves to improve in our journey towards reproducibility, employing containers along the way was a decision we would make again.

Conclusion

Containers, and specifically Docker, can be a significant boost for all groups of developers from small shops to enterprises. As with most topics, getting to know the best practices comes through experience and using the right sources for learning.

To get the most out of Docker’s wide range of features make sure to consult the documentation.

To learn more about how Adnovum helps companies and organizations to reach their digital potential, please visit our website.

How to Colorize Black & White Pictures With OpenVINO on Ubuntu Containers
https://www.docker.com/blog/how-to-colorize-black-white-pictures-ubuntu-containers/ | Fri, 02 Sep 2022

If you’re looking to bring a stack of old family photos back to life, check out Ubuntu’s demo on how to use OpenVINO on Ubuntu containers to colorize monochrome pictures. This magical use of containers, neural networks, and Kubernetes is packed with helpful resources and a fun way to dive into deep learning!

A version of Part 1 and Part 2 of this article was first published on Ubuntu’s blog.



OpenVINO on Ubuntu containers: making developers’ lives easier

If you’re curious about AI/ML and what you can do with OpenVINO on Ubuntu containers, this blog is an excellent read for you too.

Docker image security isn’t only about provenance and supply chains; it’s also about the user experience. More specifically, the developer experience.

Removing toil and friction from your app development, containerization, and deployment processes avoids encouraging developers to use untrusted sources or bad practices in the name of getting things done. As AI/ML development often requires complex dependencies, it’s the perfect proof point for secure and stable container images.

Why Ubuntu Docker images?

As the most popular container image in its category, the Ubuntu base image provides a seamless, easy-to-set-up experience. From public cloud hosts to IoT devices, the Ubuntu experience is consistent and loved by developers.

One of the main reasons for adopting Ubuntu-based container images is the software ecosystem. More than 30,000 packages are available in one “install” command, with the option to subscribe to enterprise support from Canonical. It just makes things easier.

In this blog, you’ll see that using Ubuntu Docker images greatly simplifies component containerization. We even used a prebuilt & preconfigured container image for the NGINX web server from the LTS images portfolio maintained by Canonical for up to 10 years.

Beyond providing a secure, stable, and consistent experience across container images, Ubuntu is a safe choice from bare metal servers to containers. Additionally, it comes with hardware optimization on clouds and on-premises, including Intel hardware.

Why OpenVINO?

When you’re ready to deploy deep learning inference in production, binary size and memory footprint are key considerations – especially when deploying at the edge. OpenVINO provides a lightweight Inference Engine with a binary size of just over 40MB for CPU-based inference. It also provides a Model Server for serving models at scale and managing deployments.

OpenVINO includes open-source developer tools to improve model inference performance. The first step is to convert a deep learning model (trained with TensorFlow, PyTorch,…) to an Intermediate Representation (IR) using the Model Optimizer. In fact, it cuts the model’s memory usage in half by converting it from FP32 to FP16 precision. You can unlock additional performance by using low-precision tools from OpenVINO. The Post-training Optimisation Tool (POT) and Neural Network Compression Framework (NNCF) provide quantization, binarisation, filter pruning, and sparsity algorithms. As a result, Intel devices’ throughput increases on CPUs, integrated GPUs, VPUs, and other accelerators.

Open Model Zoo provides pre-trained models that work for real-world use cases to get you started quickly. Additionally, Python and C++ sample codes demonstrate how to interact with the model. More than 280 pre-trained models are available to download, from speech recognition to natural language processing and computer vision.

For this blog series, we will use the pre-trained colorization models from Open Model Zoo and serve them with Model Server.
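As a sketch of how that download-and-convert step can look with the Ubuntu-based OpenVINO development image (the model names come from the Open Model Zoo; double-check them against the current catalog):

# Download the two colorization models
docker run --rm -v "$PWD/models":/models openvino/ubuntu20_dev:latest \
  omz_downloader --name colorization-v2,colorization-siggraph --output_dir /models

# Convert them to OpenVINO IR at FP16 precision
docker run --rm -v "$PWD/models":/models openvino/ubuntu20_dev:latest \
  omz_converter --name colorization-v2,colorization-siggraph \
    --download_dir /models --output_dir /models --precisions FP16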

Colorization example: Albert Einstein sticking out his tongue

OpenVINO and Ubuntu container images

The Model Server – by default – ships with the latest Ubuntu LTS, providing a consistent development environment and an easy-to-layer base image. The OpenVINO tools are also available as prebuilt development and runtime container images.

To learn more about Canonical LTS Docker Images and OpenVINO™, read:

Neural networks to colorize a black & white image

Now, back to the matter at hand: how will we colorize grandma and grandpa’s old pictures? Thanks to Open Model Zoo, we won’t have to train a neural network ourselves and will only focus on the deployment. (You can still read about it.)

Architecture diagram of the colorizer demo app running on MicroK8s

Our architecture consists of three microservices: a backend, a frontend, and the OpenVINO Model Server (OVMS) to serve the neural network predictions. The Model Server component hosts two different demonstration neural networks to compare their results (V1 and V2). These components all use the Ubuntu base image for a consistent software ecosystem and containerized environment.

A few reads if you’re not familiar with this type of microservices architecture:

gRPC vs REST APIs

The OpenVINO Model Server provides inference as a service via HTTP/REST and gRPC endpoints for serving models in OpenVINO IR or ONNX format. It also offers centralized model management to serve multiple different models or different versions of the same model and model pipelines.

The server offers two sets of APIs to interface with it: REST and gRPC. Both APIs are compatible with TensorFlow Serving and expose endpoints for prediction, checking model metadata, and monitoring model status. For use cases where low latency and high throughput are needed, you’ll probably want to interact with the model server via the gRPC API. Indeed, it introduces a significantly smaller overhead than REST. (Read more about gRPC.)

OpenVINO Model Server is distributed as a Docker image with minimal dependencies. For this demo, we will use the Model Server container image deployed to a MicroK8s cluster. This combination of lightweight technologies is suitable for small deployments. It suits edge computing devices, performing inferences where the data is being produced – for increased privacy, low latency, and low network usage.
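To make that concrete, here is a sketch of running the Model Server image on its own with one of the converted models (paths and names are placeholders matching the layout above; the model directory must contain numbered version subfolders such as 1/):

docker run -d --rm -p 9000:9000 -p 8000:8000 \
  -v "$PWD/models/colorization-v2":/models/colorization-v2 \
  openvino/model_server:latest \
  --model_name colorization-v2 --model_path /models/colorization-v2 \
  --port 9000 --rest_port 8000

Port 9000 then serves gRPC requests and port 8000 serves the REST API.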

Ubuntu minimal container images

Since 2019, the Ubuntu base images have been minimal, with no “slim” flavors. While there’s room for improvement (keep posted), the Ubuntu Docker image is a less than 30MB download, making it one of the tiniest Linux distributions available on containers.

In terms of Docker image security, size is one thing, and reducing the attack surface is a fair investment. However, as is often the case, size isn’t everything. In fact, maintenance is the most critical aspect. The Ubuntu base image, with its rich and active software ecosystem community, is usually a safer bet than smaller distributions.

A common trap is to start smaller and install loads of dependencies from many different sources. The end result will have poor performance, use non-optimized dependencies, and not be secure. You probably don’t want to end up effectively maintaining your own Linux distribution … So, let us do it for you.

“What are you looking at?” (original picture source)

Demo architecture

“As a user, I can drag and drop black and white pictures to the browser so that it displays their ready-to-download colorized version.” – said the PM (me).

For that – replied the one-time software engineer (still me) – we only need:

  • A fancy yet lightweight frontend component.
  • OpenVINO™ Model Server to serve the neural network colorization predictions.
  • A very light backend component.

Whilst we could target the Model Server directly with the frontend (it exposes a REST API), we need to apply transformations to the submitted image. The colorization models, in fact, each expect a specific input.

Finally, we’ll deploy these three services with Kubernetes because … well … because it’s groovy. And if you think otherwise (everyone is allowed to have a flaw or two), you’ll find a fully functional docker-compose.yaml in the source code repository.

Architecture diagram for the demo app (originally colored tomatoes)

In the upcoming sections, we will first look at each component and then show how to deploy them with Kubernetes using MicroK8s. Don’t worry; the full source code is freely available, and I’ll link you to the relevant parts.

Neural network – OpenVINO Model Server

The colorization neural network is published under the BSD 2-clause License, accessible from the Open Model Zoo. It’s pre-trained, so we don’t need to understand it in order to use it. However, let’s look closer to understand what input it expects. I also strongly encourage you to read the original work from Richard Zhang, Phillip Isola, and Alexei A. Efros. They made the approach super accessible and understandable on this website and in the original paper.

Neural network architecture (from arXiv:1603.08511 [cs.CV])

As you can see on the network architecture diagram, the neural network uses an unusual color space: LAB. There are many 3-dimensional spaces to code colors: RGB, HSL, HSV, etc. The LAB format is relevant here as it fully isolates the color information from the lightness information. Therefore, a grayscale image can be coded with only the L (for Lightness) axis. We will send only the L axis to the neural network’s input. It will generate predictions for the colors coded on the two remaining axes: A and B.

From the architecture diagram, we can also see that the model expects a 256×256 pixels input size. For these reasons, we cannot just send our RGB-coded grayscale picture in its original size to the network. We need to first transform it.

We compare the results of two different model versions for the demo. Let them be called ‘V1’ (Siggraph) and ‘V2’. The models are served with the same instance of the OpenVINO™ Model Server as two different models. (We could also have done it with two different versions of the same model – read more in the documentation.)

Finally, to build the Docker image, we use the first stage from the Ubuntu-based development kit to download and convert the model. We then rebase on the more lightweight Model Server image.

# Dockerfile
FROM openvino/ubuntu20_dev:latest AS omz
# download and convert the model
…
FROM openvino/model_server:latest
# copy the model files and configure the Model Server
…

Backend – Ubuntu-based Flask app (Python)

For the backend microservice that interfaces between the user-facing frontend and the Model Server hosting the neural network, we chose to use Python. There are many valuable libraries to manipulate data, including images, specifically for machine learning applications. To provide web serving capabilities, Flask is an easy choice.

The backend takes an HTTP POST request with the to-be-colorized picture. It synchronously returns the colorized result using the neural network predictions. In between – as we’ve just seen – it needs to convert the input to match the model architecture and to prepare the output to show a displayable result.

Here’s what the transformation pipeline looks like on the input:


And the output looks something like that:


To containerize our Python Flask application, we use the first stage with all the development dependencies to prepare our execution environment. We copy it onto a fresh Ubuntu base image to run it, configuring the model server’s gRPC connection.

Frontend – Ubuntu-based NGINX container and Svelte app

Finally, I put together a fancy UI for you to try the solution out. It’s an effortless single-page application with a file input field. It can display side-by-side the results from the two different colorization models.

I used Svelte to build the demo as a dynamic frontend. Below each colorization result, there’s even a saturation slider (using a CSS transformation) so that you can emphasize the predicted colors and better compare the before and after.

To ship this frontend application, we again use a Docker image. We first build the application using the Node base image. We then rebase it on top of the preconfigured NGINX LTS image maintained by Canonical. A reverse proxy on the frontend side serves as a passthrough to the backend on the /api endpoint to simplify the deployment configuration. We do that directly in an nginx.conf configuration file copied to the NGINX templates directory. The container image is preconfigured to use these template files with environment variables.

Deployment with Kubernetes

I hope you had the time to scan some black and white pictures because things are about to get serious(ly colorized).

For the next section, we’ll assume you already have a running Kubernetes installation. If not, I encourage you to run the following steps or go through this MicroK8s tutorial.

# https://microk8s.io/docs
sudo snap install microk8s --classic
 
# Add current user ($USER) to the microk8s group
sudo usermod -a -G microk8s $USER && sudo chown -f -R $USER ~/.kube
newgrp microk8s
 
# Enable the DNS, Storage, and Registry addons required later
microk8s enable dns storage registry
 
# Wait for the cluster to be in a Ready state
microk8s status --wait-ready
 
# Create an alias to enable the `kubectl` command
sudo snap alias microk8s.kubectl kubectl

Terminal output: the MicroK8s cluster is up and ready

Yes, you deployed a Kubernetes cluster in about two command lines.

Build the components’ Docker images

Every component comes with a Dockerfile to build itself in a standard environment and ship its deployment dependencies (read What are containers for more information). They all create an Ubuntu-based Docker image for a consistent developer experience.

Before deploying our colorizer app with Kubernetes, we need to build and push the components’ images. They need to be hosted in a registry accessible from our Kubernetes cluster. We will use the built-in local registry with MicroK8s. Depending on your network bandwidth, building and pushing the images will take a few minutes or more.

sudo snap install docker
cd ~ && git clone https://github.com/valentincanonical/colouriser-demo.git
 
# Backend
docker build backend -t localhost:32000/backend:latest
docker push localhost:32000/backend:latest
 
# Model Server
docker build modelserver -t localhost:32000/modelserver:latest
docker push localhost:32000/modelserver:latest
 
# Frontend
docker build frontend -t localhost:32000/frontend:latest
docker push localhost:32000/frontend:latest

Apply the Kubernetes configuration files

All the components are now ready for deployment. The Kubernetes configuration files are available as deployments and services YAML descriptors in the ./K8s folder of the demo repository. We can apply them all at once, in one command:

kubectl apply -f ./k8s

Give it a few minutes. You can watch the app being deployed with watch kubectl get pods. Of all the services, the frontend one has a specific NodePort configuration to make it publicly accessible by targeting the Node IP address.

Terminal output: applying the Kubernetes configuration files

Once ready, you can access the demo app at http://localhost:30000/ (or replace localhost with a cluster node IP address if you’re using a remote cluster). Pick an image from your computer, and get it colorized!

All in all, the project was pretty easy considering the task we accomplished. Thanks to Ubuntu containers, building each component’s image with multi-stage builds was a consistent and straightforward experience. And thanks to OpenVINO™ and the Open Model Zoo, serving a pre-trained model with excellent inference performance was a simple task accessible to all developers.

That’s a wrap!

You didn’t even have to share your pics over the Internet to get it done. Thanks for reading this article; I hope you enjoyed it. Feel free to reach out on socials. I’ll leave you with the last colorization example.

Christmassy colorization example (original picture source)

To learn more about Ubuntu, the magic of Docker images, or even how to make your own Dockerfiles, see below for related resources:

The Docker-Sponsored Open Source Program has a new look!
https://www.docker.com/blog/docker-sponsored-open-source-program-has-a-new-look/ | Thu, 01 Sep 2022

It’s no secret that developers love open source software. It makes up roughly 70–90% of a typical codebase! Plus, using open source has a ton of benefits like cost savings and scalability. But most importantly, it promotes faster innovation.

That’s why Docker announced our community program, the Docker-Sponsored Open Source (DSOS) Program, in 2020. While our mission and vision for the program haven’t changed (yes, we’re still committed to building a platform for collaboration and innovation!), some of the criteria and benefits have received a bit of a facelift.

We recently discussed these updates at our quarterly Community All-Hands, so check out that video if you haven’t yet. But since you’re already here, let’s give you a rundown of what’s new.

New criteria & benefits

Over the past two years, we’ve been working to incorporate all of the amazing community feedback we’ve received about the DSOS program. And we heard you! Not only have we updated the application process which will decrease the wait time for approval, but we’ve also added on some major benefits that will help improve the reach and visibility of your projects.

  1. New application process — The new, streamlined application process lets you apply with a single click and provides status updates along the way
  2. Updated funding criteria — You can now apply for the program even if your project is commercially funded! However, you must not currently have a pathway to commercialization (this is reevaluated yearly). Adjusting our qualification criteria opens the door for even more projects to join the 300+ we already have!
  3. Insights & Analytics — Exclusive to DSOS members, you now have access to a plethora of data to help you better understand how your software is being used.
  4. DSOS badge on Docker Hub — This makes it easier for your project to be discovered and build brand awareness.

What hasn’t changed

Despite all of these updates, we made sure to keep the popular program features you love. Docker knows the importance of open source as developers create new technologies and turn their innovations into a reality. That’s why there are no changes to the following program benefits:

  • Free autobuilds — Docker will automatically build images from source code in your external repository and automatically push the built image to your Docker Hub repository.
  • Unlimited pulls and egress — This is for all users pulling public images from your project namespace.
  • Free 1-year Docker Team subscription — This feature is for core contributors of your project namespace. This includes Docker Desktop, 15 concurrent builds, unlimited Docker Hub image vulnerability scans, unlimited scoped tokens, role-based access control, and audit logs.

We’ve also kept the majority of our qualification criteria the same (aside from what was mentioned above). To qualify for the program, your project namespace must:

  • Be shared in public repos
  • Meet the Open Source Initiative’s definition of open source 
  • Be in active development (meaning image updates are pushed regularly within the past 6 months or dependencies are updated regularly, even if the project source code is stable)
  • Not have a pathway to commercialization. Your organization must not seek to make a profit through services or by charging for higher tiers. Accepting donations to sustain your efforts is allowed.

Want to learn more about the program? Reach out to OpenSource@Docker.com with your questions! We look forward to hearing from you.

Extending Docker’s Integration with containerd https://www.docker.com/blog/extending-docker-integration-with-containerd/ Thu, 01 Sep 2022 16:44:09 +0000 https://www.docker.com/?p=37231 We’re extending Docker’s integration with containerd to include image management! To share this work early and get feedback, this integration is available as an opt-in experimental feature with the latest Docker Desktop 4.12.0 release.

The Docker Desktop Experimental Features settings with the option for using containerd for pulling and storing images enabled.

What is containerd?

In the simplest terms, containerd is a broadly adopted open container runtime. It manages the complete container lifecycle of its host system! This includes pulling and pushing images as well as handling the starting and stopping of containers. containerd is also a low-level building block in the container experience: rather than being used directly by developers, it’s designed to be embedded into systems like Docker and Kubernetes.
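To make that concrete, on a Linux host running Docker Engine, containerd is already there underneath the daemon, and you can peek at it with its ctr client. A minimal sketch, assuming ctr is installed alongside containerd and that Docker is using its usual "moby" containerd namespace:

# Hedged sketch: inspect the containerd instance Docker Engine is using (Linux host, root required).
sudo ctr version
# List the containers Docker has handed off to containerd (Docker keeps them in the "moby" namespace):
sudo ctr --namespace moby containers ls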

Docker’s involvement in the containerd project can be traced all the way back to 2016. You could say it’s a bit of a passion project for us! While we had many reasons for starting the project, our goal was to move container supervision out of the core Docker Engine and into a separate daemon so that it could be reused in projects like Kubernetes. containerd was donated to the Cloud Native Computing Foundation (CNCF) in 2017 and has since become a graduated (stable) project.

What does containerd replace in the Docker Engine?

As we mentioned earlier, Docker has used containerd as part of Docker Engine for managing the container lifecycle (creating, starting, and stopping) for a while now! This new work is a step towards a deeper integration of containerd into the Docker Engine. It lets you use containerd to store images and then push and pull them. Containerd also uses snapshotters instead of graph drivers for mounting the root file system of a container. Due to containerd’s pluggable architecture, it can support multiple snapshotters as well. 

Want to learn more? Michael Crosby wrote a great explanation about snapshotters on the Moby Blog.

Why migrate to containerd for image management?

Containerd is the leading open container runtime and, better yet, it’s already a part of Docker Engine! By switching to containerd for image management, we’re better aligning ourselves with the broader industry tooling. 

This migration modifies two main things:

  • We’re replacing Docker’s graph drivers with containerd’s snapshotters.
  • We’ll be using containerd to push, pull, and store images.

What does this mean for Docker users?

We know developers love how Docker commands work today and that many tools rely on the existing Docker API. With this in mind, we’re fully invested in making sure that the integration is as transparent as possible and doesn’t break existing workflows. To do this, we’re first rolling it out as an experimental, opt-in feature so that we can get early feedback. When enabled in the latest Docker Desktop, this experimental feature lets you use the following Docker commands with containerd under the hood: run, commit, build, push, load, and save.
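As a quick sanity check once the experimental setting is switched on, a few of those commands can be exercised end to end. A hedged sketch, with alpine:3.16 standing in as an arbitrary illustrative image:

# Pull, save, and reload a small image with the containerd store handling storage under the hood.
docker pull alpine:3.16
docker save -o alpine.tar alpine:3.16
docker load -i alpine.tar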

This integration has the following benefits:

  1. Containerd’s snapshotter implementation helps you quickly plug in new features. Some examples include using stargz to lazy-pull images on startup or nydus and dragonfly for peer-to-peer image distribution.
  2. The containerd content store can natively store multi-platform images and other OCI-compatible objects. This enables features like the ability to build and manipulate multi-platform images using Docker Engine (and possibly other content in the future!).

If you plan to build a multi-platform image, the graphic below shows what to expect when you run the build command with the containerd store enabled.

A recording of a docker build terminal output with the containerd store enabled.
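The recording isn’t reproduced here, but the command it exercises looks roughly like the following. A hedged sketch with an illustrative image name; the Dockerfile in the current directory is assumed to build for both platforms:

# Build a multi-platform image locally; with the containerd store enabled, both variants are kept and can be pushed together.
docker build --platform linux/amd64,linux/arm64 -t myorg/hello-multiarch:latest .
docker push myorg/hello-multiarch:latest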

Without the experimental feature enabled, you will get an error message stating that this feature is not supported on the docker driver, as shown in the graphic below.

A recording of an unsuccessful docker build output with the containerd store disabled.

If you decide not to enable the experimental feature, no big deal! Things will work like before. If you have additional questions, you can access details in our release notes.

Roadmap for the containerd integration

We want to be as transparent as possible with the Docker community when it comes to this containerd integration (no surprises here!). For this reason, we’ve laid out a roadmap. The integration will happen in two key steps:

  1. We’ll ship an initial version in Docker Desktop which enables common workflows but doesn’t touch existing images to prove that this approach works.
  2. Next, we’ll write the code to migrate user images to use containerd and activate the feature for all our users.

We work to make expanding integrations like this as seamless as possible so you, our end user, can reap the benefits! This way, you can create new, exciting things while leveraging existing features in the ecosystem such as namespaces, containerd plug-ins, and more.

We’ve released this experimental feature first in Docker Desktop so that we can get feedback quickly from the community. But, you can also expect this feature in a future Docker Engine release.  
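When that Docker Engine release lands, enabling the containerd image store there is expected to look something like the following daemon configuration. A hedged sketch based on the feature flag later documented for the Engine; verify the exact name against your version’s release notes:

# /etc/docker/daemon.json (hedged sketch):
#
#   {
#     "features": {
#       "containerd-snapshotter": true
#     }
#   }
#
# Restart the daemon and confirm which storage backend is active:
sudo systemctl restart docker
docker info --format '{{ .Driver }}'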


Details on the ongoing integration work can be found here.

Conclusion

In summary, Docker users can now look forward to full containerd integration. This brings many exciting features, from native multi-platform support to encrypted images and lazy pulls. So make sure to download the latest version of Docker Desktop and enable the containerd experimental feature to take it for a spin!

We love sharing things early and getting feedback from the Docker community — it helps us build products that work better for you. Please join us on our community Slack channel or drop us a line using our feedback form.

Testing with Telepresence and Docker https://www.docker.com/blog/testing-with-telepresence-and-docker/ Fri, 12 Aug 2022 14:00:18 +0000 https://www.docker.com/?p=36359 Ever found yourself wishing for a way to synchronize local changes with a remote Kubernetes environment? There’s a Docker extension for that! Read on to learn more about how Telepresence partners with Docker Desktop to help you run integration tests quickly and where to get started.

A version of this article was first published on Ambassador’s blog.


Run integration tests locally with the Telepresence Extension on Docker Desktop

Testing your microservices-based application becomes difficult when it can no longer be run locally due to resource requirements. Moving to the cloud for testing is a no-brainer, but how do you synchronize your local changes against your remote Kubernetes environment?

Run integration tests locally instead of waiting on remote deployments with Telepresence, now available as an Extension on Docker Desktop. By using Telepresence with Docker, you get flexible remote development environments that work with your Docker toolchain so you can code and test faster for Kubernetes-based apps.

Install Telepresence for Docker through these quick steps:

  1. Download Docker Desktop.
  2. Open Docker Desktop.
  3. In the Docker Dashboard, click “Add Extensions” in the left navigation bar.
  4. In the Extensions Marketplace, search for the Ambassador Telepresence extension.
  5. Click install.

Connect to Ambassador Cloud through the Telepresence extension:

After you install the Telepresence extension in Docker Desktop, you need to generate an API key to connect the Telepresence extension to Ambassador Cloud.

  1. Click the Telepresence extension in Docker Desktop, then click Get Started.
  2. Click the Get API Key button to open Ambassador Cloud in a browser window.
  3. Sign in with your Google, GitHub, or GitLab account. Ambassador Cloud opens to your profile and displays the API key.
  4. Copy the API key and paste it into the API key field in the Docker Dashboard.

Connect to your cluster in Docker Desktop:

  1. Select the desired cluster from the dropdown menu and click Next. This sets the selected cluster as kubectl’s current context.
  2. Click Connect to [your cluster]. Your cluster is connected, and you can now create intercepts.
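If you want to confirm what the extension is pointed at, kubectl can show the active context. An optional check that isn’t part of the original steps:

# Should print the context for your Docker Desktop (or chosen) cluster.
kubectl config current-context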

To get hands-on with the example shown in the above recording, please follow these instructions:

1. Enable and start your Docker Desktop Kubernetes cluster locally.

Install the Telepresence extension to Docker Desktop if you haven’t already.

2. Install the emojivoto application in your local Docker Desktop cluster (we will use this to simulate a remote K8s cluster).

Use the following command to apply the Emojivoto application to your cluster.

kubectl apply -k github.com/BuoyantIO/emojivoto/kustomize/deployment
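Optionally, confirm the application came up before moving on (not part of the original steps):

# All emojivoto pods should reach the Running state after a short wait.
kubectl get pods -n emojivoto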

3. Start the web service in a single container with Docker.

Create a file docker-compose.yml and paste the following into that file:

version: '3'

services:
  web:
    image: buoyantio/emojivoto-web:v11
    environment:
      - WEB_PORT=8080
      - EMOJISVC_HOST=emoji-svc.emojivoto:8080
      - VOTINGSVC_HOST=voting-svc.emojivoto:8080
      - INDEX_BUNDLE=dist/index_bundle.js
    ports:
      - "8080:8080"
    network_mode: host

In your terminal run docker compose up to start running the web service locally.

4. Using a test container, curl the “list” API endpoint in Emojivoto and watch it fail (because it can’t access the backend cluster).

In a new terminal, we will test the Emojivoto app with another container. Run the following command: docker run -it --rm --network=host alpine. Then we’ll install curl with apk --no-cache add curl.

Finally, curl localhost:8080/api/list, and you should get an rpc error message because we are not connected to the backend cluster and cannot resolve the emoji or voting services:

> docker run -it --rm --network=host alpine
apk --no-cache add curl
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz
(1/5) Installing ca-certificates (20211220-r0)
(2/5) Installing brotli-libs (1.0.9-r5)
(3/5) Installing nghttp2-libs (1.46.0-r0)
(4/5) Installing libcurl (7.80.0-r0)
(5/5) Installing curl (7.80.0-r0)
Executing busybox-1.34.1-r3.trigger
Executing ca-certificates-20211220-r0.trigger
OK: 8 MiB in 19 packages

curl localhost:8080/api/list
{"error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup emoji-svc on 192.168.65.5:53: no such host\""}

5. Run Telepresence connect via Docker Desktop.

Open Docker Desktop and click the Telepresence extension on the left-hand side. Click the blue “Connect” button. Copy and paste an API key from Ambassador Cloud if you haven’t done so already (https://app.getambassador.io/cloud/settings/licenses-api-keys). Choose the cluster you deployed the Emojivoto application to by selecting the appropriate Kubernetes context from the menu. Click Next, and the extension will connect Telepresence to your local cluster.
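If you prefer a terminal over the extension UI, the Telepresence CLI offers a roughly equivalent flow. A hedged sketch, assuming the telepresence binary is installed and your kubeconfig already points at the cluster you selected above:

# Establish the connection to the cluster, then confirm the daemon is healthy.
telepresence connect
telepresence status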

6. Re-run the curl and watch this succeed.

Now let’s re-run the curl command. Instead of an error, the list of emojis should be returned indicating that we are connected to the remote cluster:

curl localhost:8080/api/list
[{"shortcode":":joy:","unicode":"😂"},{"shortcode":":sunglasses:","unicode":"😎"},{"shortcode":":doughnut:","unicode":"🍩"},{"shortcode":":stuck_out_tongue_winking_eye:","unicode":"😜"},{"shortcode":":money_mouth_face:","unicode":"🤑"},{"shortcode":":flushed:","unicode":"😳"},{"shortcode":":mask:","unicode":"😷"},{"shortcode":":nerd_face:","unicode":"🤓"},{"shortcode":":gh

7. Now, let’s “intercept” traffic being sent to the web service running in our K8s cluster and reroute it to the “local” Docker Compose instance by creating a Telepresence intercept.

Select the emojivoto namespace and click the intercept button next to “web”. Set the target port to 8080 and service port to 80, then create the intercept.

After the intercept is created you will see it listed in the UI. Click the nearby blue button with three dots to access your preview URL. Open this URL in your browser, and you can interact with your local instance of the web service with its dependencies running in your Kubernetes cluster.

Congratulations, you’ve successfully created a Telepresence intercept and a sharable preview URL! Send this to your teammates and they will be able to see the results of your local service interacting with the Docker Desktop cluster.
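The same intercept can also be created from the Telepresence CLI. A hedged sketch mirroring the ports chosen above; exact flags can vary between Telepresence versions:

# List interceptable workloads in the emojivoto namespace, then intercept the web service,
# forwarding its service port 80 to the local Compose instance on port 8080.
telepresence list --namespace emojivoto
telepresence intercept web --namespace emojivoto --port 8080:80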

 

Want to learn more about Telepresence or find other life-improving Docker Extensions? Check out the following related resources:

 

Slim.AI Docker Extension for Docker Desktop https://www.docker.com/blog/slim-ai-docker-extension-for-docker-desktop/ Fri, 05 Aug 2022 14:00:21 +0000 https://www.docker.com/?p=36295 Extensions are great for expanding the capability of Docker Desktop. We’re proud to feature this extension from Slim.AI which promises deep container insight and optimization features. Follow along as Slim.AI walks through how to install, use, and connect with these handy new features!

A version of this article was first published on Slim.AI’s blog.

This image showcases the two logos for Docker and Slim.AI with the title "Slim.AI New Feature Docker Extensions" above them.

We’re excited to announce that we’ve been working closely with the team at Docker to develop our own Slim.AI Docker Extension to help developers build secure, optimized containers faster. You can find the Slim Extension in the Docker Marketplace or on Docker Hub.

Docker Extensions give developers a simple way to install and run helpful container development tools directly within Docker Desktop. For more information about Docker Extensions, check out https://docs.docker.com/desktop/extensions/.

The Slim.AI Docker Extension brings some of the Slim Platform’s capabilities directly to your local environment. Our initial release, available to everyone, is focused on being the easiest way for developers to get visibility into the composition and construction of their images and help reduce friction when selecting, troubleshooting, optimizing, and securing images.
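If you’d rather poke at an image from a terminal, Slim.AI’s open source docker-slim CLI offers similar X-ray-style analysis. A minimal sketch, assuming docker-slim is installed locally; nginx:latest is just an illustrative target:

# Produce a report covering the image's metadata, layers, and a reverse-engineered Dockerfile.
docker-slim xray --target nginx:latest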

This image is a screenshot of the Docker Desktop app using the Slim.AI extension.

Why should I install the Slim.AI Extension?

At Slim, we believe that knowing your software is a key building block to creating secure, small, production-ready containers and reducing software supply chain risk. One big challenge many of us face when attempting to optimize and secure container images is that images often lack important documentation. This leaves us in a pinch when trying to figure out even basic details about whether or not an image is usable, well constructed, and secure.

This is where, we believe, the Slim Docker Extension can help.

Currently, the Slim Docker extension is free to developers and includes the following capabilities:

Available Free to All Developers without Authentication to the Slim Platform

  • Easy-to-access deep analyses of your local container images by tag, with quick access to information like the local arch, exposed ports, shells, volumes, and certs
  • Security insights, including whether the container runs as the root user and a complete list of files that have special permissions
  • Optimization opportunities, including counts of deleted and duplicate files
  • Fully searchable File Explorer filterable by layer, instruction, and file type with the ability to view the contents of any text-based file
  • The ability to compare any two local images or image tags with deep analyses and File Explorer capabilities
  • Reverse engineered dockerfiles for each image when the originals are not available

This image is a screenshot of Docker Desktop using Slim.AI showcasing the Docker image details found under File Explorer.

Features available to developers with Slim.AI accounts (https://portal.slim.dev):

  • Search across multiple public and authenticated registries for quality images, including support for Docker Hub, GitHub, DigitalOcean, ECR, MCR, and GCR, with more coming soon.
  • View deep analysis, insights, and File Explorer for images across available registries prior to pulling them down to your local machine.

This image is a screenshot of the Slim.AI sign-in page, offering sign in options such as Github, Gitlab, and Bitbucket.

How do I install the extension?

  • Make sure you’re running Docker Desktop version 4.10 or greater. You can get the latest version of Docker Desktop at docker.com/desktop.
  • Go to Slim.AI Extension on Docker Hub. (https://hub.docker.com/extensions/slimdotai/dd-ext)
  • Click “Open in Docker Desktop”.
  • This will open the Slim.AI extension in the Extensions Marketplace in Docker Desktop. Click “Install.” The installation should take just a few seconds.

This screenshot showcases the Docker Desktop Extensions Marketplace with a list of extensions to choose from.

This Docker Desktop screenshot is of the Slim.AI extension download page.

How do I use the Slim.AI Docker Desktop Extension?

  • Once installed, click on the Slim.AI extension in the left nav of Docker Desktop.
  • You should see a “Welcome” screen. Go ahead and click to view our Terms and Privacy Policy. Then, click “Start analyzing local containers”.
  • The Slim.AI Docker Desktop Extension will list the images on your local machine.
  • You can use the search bar to find specific images by name.
  • Click the “Explore” button or caret icon to view details about your images:
    • The File Explorer is a complete searchable view into your image’s file system. You can filter by layer, file type, and whether or not it is an Addition, Modification, or Deletion in a given layer. You can also view the content of non-binary files by clicking on the file name then clicking File Contents in the new window.
    • The Overview displays important metadata and insights about your image including the user, certs, exposed ports, volumes, environment variables, and more.
    • The Docker File shows a reverse-engineered dockerfile that we generate when the original dockerfile is not available.
  • Click the “Compare” button to compare two images or image tags.
    • Select the tag via the dropdown under the image name. Then, click the “Compare” button in its card.
    • Select a second image or tag, and click the “Compare” button in its card.
    • You will be taken to a comparison view where you can explore the differences in the files, metadata, and reverse engineered dockerfiles.

How do I connect the Slim.AI Docker Desktop Extension to my Slim.AI Account?

  • Once installed, click on the “Login” button at the top of the extension.
  • Sign in using your GitHub, GitLab, or BitBucket account. (Accounts are free for individual developers.)
  • Navigate back to the Slim Docker Desktop Extension.
  • Once successfully connected, you can use the search bar to search over all of your connected registries and explore remote images before pulling them down to your local machine.

This image is a screenshot of the Docker Desktop top navigation using Slim.AI, including a search bar, login button, and hamburger menu.

What if I don’t have a Slim.AI account?

The Slim platform is currently free to use. You can create an account from the Docker Desktop Extension by clicking the Login button in the top right of the extension. You will be taken to a sign-in page where you can authenticate using GitHub, GitLab, or Bitbucket.

What’s on the roadmap?

We have a number of features and refinements planned for the extension, but we need your feedback to help us improve. Please provide your feedback here.

Planned capabilities include:

  • Improvements to the Overview to provide more useful insights
  • Design and UX updates to make the experience even easier to use
  • More capabilities that connect your local experience to the Slim Portal

Interested in learning more about how extensions can expand your experience in Docker Desktop? Check out our Docker Hub extensions library or see below for further reading: 

Virtual Desktop Support, Mac Permission Changes, & New Extensions in Docker Desktop 4.11 https://www.docker.com/blog/docker-desktop-4-11-virtual-environments-mac-permissions-new-extensions/ Tue, 02 Aug 2022 14:00:07 +0000 https://www.docker.com/?p=36276 Docker Desktop 4.11 is now live! With this release, we added some highly-requested features designed to help make developers’ lives easier and help security-minded organizations breathe easier.

Run Docker Desktop for Windows in Virtual Desktop Environments

Docker Desktop for Windows is officially supported on VMware ESXi and Azure Windows VMs for our Docker Business subscribers. Now you can use Docker Desktop in your virtual environments and get the same experience as running it natively on Windows, Mac, or Linux machines.

Currently, we support virtual environments where the host hypervisors are VMware ESXi or Windows Hyper-V — both on-premises and in the cloud. Citrix Hypervisor support is also coming soon. As a Docker Business subscriber, you’ll receive dedicated support for running Docker Desktop in your virtual environments.

To learn more about running Docker Desktop for Windows in a virtual environment, please visit our documentation.

Changes to permission requirements on Docker Desktop for Mac

The first time you run Docker Desktop for Mac, you have to authenticate as root in order to install a privileged helper process. This process is needed to perform a limited set of privileged operations and runs in the background on the host machine while Docker Desktop is running.

In Docker Desktop v4.11, you don’t have to run this privileged helper service at all. Use the --user flag with the install command to set everything up in advance, and Docker Desktop will then run without needing root on the Mac.
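A hedged sketch of what that looks like when installing from the command line on macOS; the paths follow the standard Docker.dmg layout, and <username> is a placeholder for the account that should run Docker Desktop:

# Mount the installer, run the install for a designated user, then unmount.
sudo hdiutil attach Docker.dmg
sudo /Volumes/Docker/Docker.app/Contents/MacOS/install --user=<username>
sudo hdiutil detach /Volumes/Docker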

For more details on Docker Desktop for Mac’s permission requirements, check out our documentation.

New Extensions in the Marketplace

We’re excited to announce the addition of two new extensions to the Extensions Marketplace:

  • vcluster  – Create and manage virtual Kubernetes clusters using vcluster. Learn more about vcluster here.
  • PGAdmin4 – Quickly administer and monitor PostgreSQL databases with PGAdmin4 tools. Learn more about PGAdmin here.

Customize your Docker Desktop Theme

Prefer dark mode on the Docker Dashboard? With 4.11 you can now customize your preference or have it respect your system settings. Go to settings in the upper right corner to try it for yourself.

This gif shows the user how to switch from light to dark mode in Docker Desktop

Fixing the Frequency of Docker Desktop Feedback Prompts

Thanks to all your feedback, we identified a bug that was prompting some users for feedback too frequently. Docker Desktop should now only request your feedback twice a year.

As we outlined here, you’ll be asked for feedback 30 days after the product is installed. You can choose to give feedback or decline. You then won’t be asked for a rating again for 180 days.

These scores help us understand user experience trends so we can keep improving Docker Desktop — and your comments have helped us make changes like this. Thanks for helping us fix this for everyone!

Have any feedback for us?

Upvote, comment, or submit new ideas via either our in-product links or our public roadmap. Check out our release notes to learn more about Docker Desktop 4.11.

Looking to become a new Docker Desktop user? Visit our Get Started page to jumpstart your development journey.
