5 Developer Workstation Security Best Practices
https://www.docker.com/blog/developer-workstation-security-best-practices/
Wed, 15 Feb 2023

Supply chain attacks increased by 300% between 2020 and 2021, making clear that security breaches are happening earlier in the software development lifecycle. Research also shows that in 2021, 80% of cyber security breaches were due to human error, and 20% involved attacks on desktops and laptops.

Developer workstations are being targeted for several reasons. Workstations have access to critical code and infrastructure, and the earlier a vulnerability is introduced, the more difficult it can be to identify the breach. Developers need to trust not only the dependencies they use directly but also the dependencies of those dependencies, called transitive dependencies. As we see more incidents stemming from developer workstations, developer workstation security should be a top priority for security-conscious organizations. 

Poor security practices in software development translate to trust-breaking breaches and expensive losses, with the average cost of a security breach reaching $9.44 million in the United States. Developers are increasingly responsible for not only the development of products but also for secure development. 

Organizations, regardless of industry, must secure developer workstations to be prepared for the evolving and growing number of attacks.

Docker’s white paper, Securing Developer Workstations with Docker, covers the top security risks when developing with containers — and how to best mitigate those risks with Docker. By understanding the potential attack vectors, your teams can mitigate evolving security threats.

Let’s take a look at five actions you can take to secure your developer workstations.

1. Prevent malware attacks

Malware refers to malicious software meant to attack software, hardware, or networks. In container development, malware can be particularly damaging not only because of the potentially harmful activities to be run within the container but also because of potential access to external systems like the host’s file system and network.

Containers should be secured by using only trusted images and dependencies, isolating and restricting permissions where possible, and running up-to-date software in up-to-date environments.
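As an illustration (the image, command, and flags here are examples, not prescriptions from the white paper), a container can be started with a deliberately reduced attack surface:

# Run as an unprivileged user, with all Linux capabilities dropped, a read-only
# root filesystem, and privilege escalation disabled. "alpine:3" and "sleep 30"
# stand in for your own image and command.
docker run --rm \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --user 1000:1000 \
  alpine:3 sleep 30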

2. Build secure software supply chains

Supply chain attacks exploit direct and transitive dependencies. You may be familiar with Log4Shell, a vulnerability estimated to affect hundreds of millions of devices. The vulnerability behind the infamous SolarWinds security incident was also a supply chain attack. Supply chain attacks increased by 300% in 2021, and security experts don’t expect them to slow down any time soon.

Supply chain attacks can be mitigated through secure supply chain best practices. These include making sure every step of the supply chain is trustworthy, adding key automation, and making sure build environments are clearly defined.
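As one illustrative practice (the commands and image are examples, not taken from the white paper), pinning a base image to its content digest keeps builds from silently picking up a different upstream image:

# Resolve the digest of the tag you reviewed...
docker pull python:3.11-slim
docker images --digests python:3.11-slim
# ...then reference that immutable digest in builds and FROM lines.
# (The value below is a placeholder for the digest printed above.)
docker pull python@sha256:<digest-from-above>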

3. Account for local admin rights in policies

Individual developers may have different needs for their workstations. Many developers prefer to have local admin rights. Organizations are responsible for creating and enforcing policies that help developers work securely. How your team handles local admin rights is a team decision, and although the outcomes may differ per team, the conversation around local admin rights is a necessity to keep teams secure.

Finding the balance between security and autonomy is an ongoing process. No balance can be achieved once and then forgotten. Instead, organizations must regularly review tools and configurations so developers can do their jobs without being unnecessarily blocked or accidentally jeopardizing their team, product, and customers.

4. Prevent hazardous misconfigurations

Configurations are necessary at almost every step of the software development lifecycle and connect development tools with production resources, such as environments and sensitive data. While more permissive configurations make anything seem possible, unfortunately, that flexibility can accidentally provide malicious actors access to sensitive resources. Configurations that are too strict frustrate developers and limit productivity.

Misconfiguration does not happen on purpose, but it can be mitigated. There’s no one-size-fits-all solution for configurations, given every team and organization has its own tooling, process, and network considerations. Regardless of your organization’s needs, make sure you’re considering the developer workstations and how your IT admins manage local configurations.
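As a hypothetical before-and-after (the tool image name is a placeholder), compare an overly permissive configuration with a least-privilege one:

# Risky: mounting the Docker socket hands the container control of the host's daemon.
#   docker run -v /var/run/docker.sock:/var/run/docker.sock example/build-tool
# Safer: mount only the directory the workload needs, and make it read-only.
docker run --rm -v "$(pwd)/config:/app/config:ro" example/build-tool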

5. Protect against insider threats

Although most breaches come from outside of an organization, 20% of breaches in 2021 were caused by internal actors. For the same reasons that external attackers target the early stages of the software development lifecycle, internal bad actors have used similar strategies to bypass internal security safeguards.

Security measures that limit opportunities for external attackers also limit opportunities for internal bad actors. When considering settings, configurations, permissions, and scanning, remember that regardless of where the attack comes from, the trend of attacks is moving earlier in the development cycle, making securing developer workstations a critical step in your security strategy.

Hardened Docker Desktop: Stronger security for enterprises

With capabilities like Hardened Docker Desktop, we want every developer using Docker to be able to work securely and create secure products without being slowed down or needlessly distracted. In Securing Developer Workstations with Docker, we share container security best practices developed and tested by industry experts.

Read the white paper: Securing Developer Workstations with Docker.

New in Docker Desktop 4.15: Improving Usability and Performance for Easier Builds
https://www.docker.com/blog/docker-desktop-4-15-improved-usability-and-performance/
Thu, 01 Dec 2022

Docker Desktop 4.15 is here, packed with usability upgrades to make it simpler to find the images you want, manage your containers, discover vulnerabilities, and work with dev environments. Not enough for you? Well, it’s also easier to build and share extensions to add functionality to Docker Desktop. And Wasm+Docker has moved from technical preview to beta!

Let’s dig into what’s new and great in Docker Desktop 4.15.

Improvements for macOS

Move Faster with VirtioFS — now GA

Back in March, we introduced VirtioFS to improve sharing performance for macOS users. With Docker Desktop 4.15, it’s now generally available and you can enable it on the Preferences page. 

Using VirtioFS and Apple’s Virtualization Framework can significantly reduce the time it takes to complete common tasks like package installs, database imports, and running unit tests. For developers, these gains in speed mean less time waiting for common operations to complete and more time focusing on innovation. 

This option is available for macOS 12.5 and above. If you encounter any issues, you can turn this off in your settings tab.

Turn on VirtioFS in Docker Desktop preferences.

Adminless install during first run on macOS

Now you don’t need to grant admin privileges to install and run Docker Desktop. Previously, Docker Desktop for Mac had to install the privileged helper process com.docker.vmnetd on install or on the first run.

There are some actions that may still require admin privileges (like binding to a privileged port), but when it’s needed, Docker Desktop will proactively inform you that you need to grant permission.

For more information see permission requirements for Mac.

Jump in faster with quick search

When you work with Docker Desktop, you probably know exactly which container you want to start with or image you want to run. But there might be times when you don’t remember if it’s already running — or if you pulled it locally at all.

So you might check a few of the current tabs in the Docker Dashboard, or maybe do a docker ps in the CLI. By the time you find what you need, you’ve checked a few different places, spent some time searching Docker Hub to find the right image, and probably got a little annoyed.

With quick search, you get to skip all of this (especially the annoyance!) and find exactly what you’re looking for in one simple search — along with relevant actions like the option to start/stop a container or run a new image. It even searches the Docker Hub API to help you run any public and private images you’ve hosted there!

To get started, click the search bar in the header (or use the shortcut: command+K on Mac / ctrl+K on Windows) and start typing.

The first tab shows results for any local containers or compose apps. You can perform quick actions like start, stop, delete, view logs, or start an interactive terminal session with a running container. You can also get a quick overview of the environment variables.

Do a quick search of all local containers or compose apps.

If you flip over to the Images tab, you’ll see results for Docker Hub images, local images, and images from remote repositories. (To see remote repository images, make sure you’re signed into your Docker Hub account.) Use the filter to easily narrow down the result types you want to see.

Do a quick search of Hub images, local images, and remote repository images.

When you filter for local images, you’ll see some quick actions like run and an overview of which containers are using the image.

Perform quick actions on images from search.

With Docker Hub images, you can pull the image by tag, run it (running will also pull the image as the first step), view documentation, or go to Docker Hub for more details.

Perform quick actions on Docker Hub images from search.

Finally, with images in remote repositories, you can pull by tag or get quick info, like last updated date, size, or vulnerabilities.

Get quick info about images in remote repositories.

Be sure to check out the tips in the footer of the search modal for more shortcuts and ways to use it. We’d love to hear your feedback on the experience and if there’s anything else you’d like to see added!

Flexible image management

Based on user feedback, Docker Desktop 4.15 includes user experience enhancements for the Images tab. Cleaning up multiple images is now easier with multi-select checkboxes (this functionality used to be behind the “bulk clean up” button).

You can also manage your columns to only show exactly what you want. Want to view your complete container and image names? Drag the dividers in the table header to resize columns. You can also sort columns by header attributes or hide columns to create more space and reduce clutter.

And if you navigate away from your tab, don’t worry! State persistence will keep everything in place so your sorting and search results will be right where you left them.

Know your image vulnerabilities automatically

Docker Desktop now automatically analyzes images for vulnerabilities. When you explore an image, Docker Desktop will automatically provide you with vulnerability information at a base image and image layer level. The base image overview provides a high level view of any dependencies in packages that introduce Common Vulnerabilities and Exposures (CVEs). And it’ll let you know if there’s a newer base image version available.

Automatically analyze images for vulnerabilities.

If you’d prefer images were only analyzed on viewing them, you can turn off auto-analysis in Settings > Features in development > Experimental features > Enable background SBOM indexing.

Thanks to everyone who provided feedback in our 4.14 release. And let us know what you think of the new image overview!

Use Dev Environments with any IDE

When you create a new Dev Environment via Docker Desktop, you can now use any editor or IDE you’ve installed locally. Docker Desktop bind mounts your source code directory to the container. You can interact with the files locally as you normally do, and all your changes will be reflected in the Dev Environment.

Create a dev environment in Docker Desktop you can use with any IDE.

Dev Environments help you manage and run your apps in Docker, while isolated from your local environment. For example, if you’re in the middle of a complicated refactor, Dev Environments makes it easier to review a PR without having to stash WIP. (Pro tip: you can install our extension for Chrome or Firefox to quickly open any PR as a Dev Environment!)

We’ve been making lots of little fixes to make Dev Environments better, including:

  • Custom names for projects
  • Better private repo support
  • Better port handling
  • CLI fixes (like interactive docker dev open)

Let us know what other improvements you’d like to see!

Building and sharing Docker Extensions just got easier

Did you know that you can build your own Docker Extension? Whether you’re just sharing it with your team or adding it to the Extensions Marketplace, Docker Desktop 4.15 makes the process easier and faster.

Meet the Build tab

In the Extensions Marketplace, you’ve got your Browse tab, your Manage tab, and, now, your Build tab. The Build tab brings all the resources you need to get started into one centralized view. You’ll find links to videos, documentation, community resources, and more! To start building, click “+ Add Extensions” in Docker Desktop and navigate to the new Build tab.

The Build tab in the Docker Desktop Extensions Marketplace.

Share a link to your extension with others

So now you’ve made an extension to share with your teammates or the community. You could submit it to the Extensions Marketplace, but what if you aren’t quite ready to? (Or what if it’s just for your team?)

Prior to Docker Desktop 4.15, the extension developer had to share a CLI command that looked something like this: docker extension install IMAGE[:TAG]. Then anyone who wanted to install the extension had to paste that command into their CLI. 

In Docker Desktop 4.15, we’ve simplified the process for both you and the people you want to run your extension. When your extension’s ready to share, use the share button in the Manage tab to create a link. When the person you share it with opens the link, they’ll be able to select the “Install” button from Docker Desktop.

Have an idea for a new Docker Extension?

If you have an idea for a new Docker Extension, we’ve got a new way for you to share it with Docker and the community. Inside the Extensions Marketplace, there’s a link to request an extension. This link will take you to our new GitHub repo that allows you to add your idea to our discussions and upvote existing ones. If you’re an extension developer, but aren’t sure what to build, be sure to check out the repo for some inspiration!

Docker Extensions GitHub repo for extension ideas.

Docker+Wasm is now beta

We’ve integrated WasmEdge’s runwasi containerd shim into Docker Desktop (previously provided in a Technical Preview build).

This allows developers to run Wasm applications and compose Wasm applications with other containerized workloads, such as a database, in a single application stack. Learn more about it in the documentation and be on the lookout for more soon!

What else would make your life easier?

Take a test drive of the new usability upgrades and let us know what you think! Is there something you think we missed? Be sure to check out our public roadmap to see upcoming features — and to suggest any other ones you’d like to see.

And don’t forget to check out the release notes for a full breakdown of what’s new in Docker Desktop 4.15!

Find and Fix Vulnerabilities Faster Now that Docker’s a CNA
https://www.docker.com/blog/docker-becomes-mitre-cna/
Thu, 01 Dec 2022

CNAs, or CVE Numbering Authorities, are an essential part of vulnerability reporting because they compose a cohort of bug bounty programs, organizations, and companies involved in the secure software supply chain. When millions of developers depend on your projects, like in Docker’s case, it’s important to be a CNA to reinforce your commitment to cybersecurity and good stewardship as part of the software supply chain.

Previously, Docker reported CVEs directly through MITRE and GitHub without CNA status (there are many other organizations that still do this today, and CVE reporting does not require CNA status).

But not anymore! Docker is now officially a CNA under MITRE, which means you should get better notifications and documentation when we publish a vulnerability.

What are CNAs? (And where does MITRE fit in?)

To understand how CNAs, CVEs, and MITRE fit together, let’s start with the reason behind all those acronyms. Namely, a vulnerability.

When a vulnerability pops up, it’s really important that it has a unique identifier so developers know they’re all talking about the same vulnerability. (Let’s be honest, calling it, “That Java bug” really isn’t going to cut it.)

So someone has to give it a CVE (Common Vulnerabilities and Exposures) designation. That’s where a CNA comes in. They submit a request to their root CNA, which is often MITRE (and no, MITRE isn’t an acronym). A new CVE number, or several, is then assigned depending on how the report is categorized, thus making it official. And to keep all the CNAs on the same page, there are companies that maintain the CVE system.

MITRE is a non-profit corporation that maintains the system with sponsorship from the US government’s CISA (Cybersecurity and Infrastructure Security Agency). Like CISA, MITRE helps lead the charge in protecting public interest when it comes to defense, cybersecurity, and a myriad of other industries.

The CVE system provides references and information about the scary-ickies or the ultra terrifying vulnerabilities found in the world of technology, making vulnerabilities for shared resources and technologies easy to publicize, notify folks about, and take action against.

If you feel like learning more about the CVE program check out MITRE’s suite of videos here or the program’s homepage.

Where does Docker fit in?

Docker has reported CVEs in the past directly through MITRE and has, for example, used the reporting functionality through GitHub on Docker Engine. By becoming a CNA, however, we can take a more direct and coordinated approach with our reporting.

And better reporting means better awareness for everyone using our tools!

Docker went through the process of becoming a CNA (including some training and homework) so we can more effectively report on vulnerabilities related to Docker Desktop and Docker Hub. The checklist for CNA status also includes having appropriate disclosure and advisory policies in place. Docker’s status as a CNA means we can centralize CVE reporting for our different offerings connected to Docker Desktop, as well as those connected to Docker Hub and the registry. 

By becoming a CNA, Docker can be more active in the community of companies that make up the software supply chain. MITRE, as the default CNA and one of the root CNAs (CISA is a root CNA too), acts as the unbiased reviewer of vulnerability reports. Other organizations, vendors, or bug bounty programs, like Microsoft, HashiCorp, Rapid7, VMware, Red Hat, and hundreds of others, also act as CNAs.

Keep in mind that Docker’s status as a CNA means we’ll only report for products and projects we maintain. Being a CNA also includes consideration of when certain products might be end-of-life and how that affects CVE assignment. 

Ch-ch-changes?

Will the experience of using Docker Hub and Docker Desktop change because of Docker’s new CNA status? Short answer: no. Long answer: the core experience of using Docker will not change. We’ve just leveled up in tackling vulnerabilities and providing better notifications about those vulnerabilities.

By better notifications, we mean a centralized repository for our security advisories. Because these reported vulnerabilities will link back to MITRE’s CVE program, it makes them far easier to search for, track, and tell your friends, your dog, or your cat about.

To see the latest vulnerabilities as Docker reports them and CVEs become assigned, check out our advisory location here: https://docs.docker.com/security/. For historic advisories also check https://docs.docker.com/desktop/release-notes/ and https://docs.docker.com/engine/release-notes/.

Keep in mind that CVEs that get reported are those that affect the consumers of Docker’s toolset and will require remediation from us and potential upgrade actions from the user, just like any other CVE announcement you might have seen in the news recently.

So keep your fins ready for any CVEs we announce that might apply to you.

Help Docker help you

We still encourage users and security researchers to report anything concerning they encounter with their use of Docker Hub and/or Docker Desktop to security@docker.com. (For reference, our security and privacy guidelines can be found here.)

We also still encourage proper configuration according to Docker documentation and not to do anything Moby wouldn’t do. (That means you should be whale-intentioned in your builds and help your fin-ends and family using Docker configure it properly.)

And while we can’t promise to stop using whale puns any time soon, we can promise to continue to be good stewards for developers — and a big part of that includes proper security procedures.

Apache Log4j 2 CVE-2021-44228
https://www.docker.com/blog/apache-log4j-2-cve-2021-44228/
Sat, 11 Dec 2021

Update: 13 December 2021

As an update to CVE-2021-44228, the fix made in version 2.15.0 was incomplete in certain non-default configurations. An additional issue was identified and is tracked with CVE-2021-45046. For a more complete fix to this vulnerability, it’s recommended to update to Log4j2 2.16.0

————————————————————————————-

Original post below has now been updated:

15 December 2021 12:49 PM PT

We know that many of you are working hard on fixing the new and serious Log4j 2 vulnerability CVE-2021-44228, which has a 10.0 CVSS score. We send our #hugops and best wishes to all of you working on this vulnerability, now going by the name Log4Shell. This vulnerability in Log4j 2, a very common Java logging library, allows remote code execution, often from a context that is easily available to an attacker. For example, it was found in Minecraft servers which allowed the commands to be typed into chat logs as these were then sent to the logger. This makes it a very serious vulnerability, as the logging library is used so widely and it may be simple to exploit. Many open source maintainers are working hard with fixes and updates to the software ecosystem.

We want to help you as much as we can in this challenging time, and we have collected as much information as possible for you here, including how to detect the CVE and potential mitigations. 

We will update this post as more information becomes available.


Am I vulnerable?

The vulnerable versions of Log4j 2 are versions 2.0 through 2.14.1 inclusive. The first fixed version is 2.15.0, but that fix was incomplete, and 2.16.0 is now the recommended version to upgrade to. We strongly encourage you to update to the latest version if you can. If you are using a version before 2.0, you are not vulnerable to this issue.

You may not be vulnerable even if you are using these versions, as your configuration may already mitigate this (see the Mitigations section below), or the things you log may not include any user input. However, this is difficult to validate without understanding in detail all the code paths that may log and where they get their input, so you will probably want to upgrade all code using vulnerable versions.
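If you also want a quick manual check, one illustrative approach (assuming the image includes a shell and the find utility; the image name is a placeholder) is to look for bundled log4j-core jars inside an image:

# List any log4j-core jars and their versions inside the image filesystem.
docker run --rm --entrypoint sh your-image:tag \
  -c 'find / -name "log4j-core-*.jar" 2>/dev/null'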

The configuration for the docker scan command previously shipped in Docker Desktop versions 4.3.0 and earlier unfortunately does not pick up this vulnerability on scans. Please update to Docker Desktop 4.3.1+ with docker scan 0.11.0+, which we released today, 11 December 2021.

If you are using docker scan from Linux you can download binaries from GitHub and install in the plugins directory as explained in the instructions here. We will soon update the Linux CLI version to include the updated docker scan.

If you use the updated version, you should see a message in the output log like this:

Upgrade org.apache.logging.log4j:log4j-core@2.14.0 to org.apache.logging.log4j:log4j-core@2.15.0 to fix
  ✗ Arbitrary Code Execution (new) [Critical Severity][https://snyk.io/vuln/SNYK-JAVA-ORGAPACHELOGGINGLOG4J-2314720] in org.apache.logging.log4j:log4j-core@2.14.0
    introduced by org.apache.logging.log4j:log4j-core@2.14.0

To test this, you can check a vulnerable image, for example this image contains a vulnerable version.

docker scan elastic/logstash:7.13.3

or to cut out all the other vulnerabilities

docker scan elastic/logstash:7.13.3 | grep 'Arbitrary Code Execution'

For more information about docker scan, see the documentation.

Docker Hub Scans

Updated: Docker Hub security scans after 1700 UTC 13 December 2021 are now correctly identifying the Log4j 2 vulnerability. Scans before this date do not currently reflect this vulnerability. We are looking into how to remediate this and will update this post when we do. Please use docker scan from the updated version above for images that were pushed ahead of 1700 UTC 13 December 2021.

Mitigations

You may well want to use a web application firewall (WAF) as an initial part of your mitigation and fix process.

This issue can be mitigated in prior releases of Log4j 2 (<2.16.0) by removing the JndiLookup class from the classpath.

example:

zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
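To confirm the class was actually removed (an illustrative check, not part of the original guidance):

# This should print nothing once JndiLookup.class has been stripped from the jar.
unzip -l log4j-core-*.jar | grep JndiLookup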

Docker Official Images

A number of the Docker Official images do contain the vulnerable versions of Log4j 2. For information on the current status updates for Docker Official Images, please see https://docs.docker.com/security/.

Other images on Docker Hub

We are working with the Docker Verified Publishers to identify and update their affected images. We are looking at ways to show you images that are affected and we will continue to update this post as we have more information.

Is Docker’s infrastructure affected?

Docker Desktop and Docker Hub are not affected by the log4j 2 vulnerability. Docker largely uses Go code to build our applications, not Java. Although we do use some Java applications internally, we have confirmed we are not vulnerable to CVE-2021-44228 and CVE-2021-45046.

Docker Security Roundup: News, Articles, Sessions
https://www.docker.com/blog/docker-security-roundup-news-articles-sessions/
Thu, 29 Jul 2021

With the eyes of the security world converging on Black Hat USA next week, now is a good time to remember that building secure applications is paramount.

In the latest chapter in Docker’s security story, Docker CTO Justin Cormack last month provided an important update on software supply chain security. He blogged about the publication of a white paper, “Software Supply Chain Best Practices,” by the security technical advisory group of the Cloud Native Computing Foundation (CNCF).


The long-awaited document is important because the software supply chain — that stage of the software development journey in which software is written, assembled, built or tested before production — has become a favored target of cyber criminals. Justin was one of the prime movers of the project and one of the CNCF reviewers who helped steer the 50-page document through multiple iterations to completion.

The paper aims to make secure supply chains easier and more widely adopted through four key principles, which Justin summarizes:

“In simpler language, this means that you need to be able to securely trace all the code you are using, which exact versions you are using, where they came from, and in an automated way so that there are no errors. Your build environments should be minimal, secure and well defined, i.e. containerized. And you should be making sure everything is authenticated securely.”

Contributing writer Robert Lemos quoted Justin’s blog in a Dark Reading article last week. The article, titled “What Does It Take to Secure Containers,” quotes Justin on why creating a trusted pipeline is so critical:

“Every time you use software that you didn’t write yourself, often open source software that you use in your applications, you are trusting both that the software you added is what you thought it is, and that it is trustworthy not hostile. Usually both these things are true, but when they go wrong, like when hundreds of people installed updates from SolarWinds that turned out to contain code to attack their infrastructure, the consequences are serious.”

Security at DockerCon

Several other facets of our security story were on the menu at DockerCon in May.

Alvaro Muro, an integrations engineer at Sysdig, led a webinar on Top Dockerfile Security Best Practices showing how these practices for image builds help you prevent security issues and optimize containerized applications. And he shared ways to avoid unnecessary privileges, reduce the attack surface with multistage builds, prevent confidential data leaks, detect bad practices and more.

In their talk, An Ounce of Prevention: Curing Insecure Container Images, Red Ventures’ Seyfat Khamidov and Eric Smalling of Snyk shared keys to catching vulnerabilities in your Docker containers before they go to production, such as scanning individual containers and incorporating container security scanning into your continuous integration build jobs. They also covered what Red Ventures is doing to scan container images at scale, and the new integration between Docker and Snyk for scanning container images for security vulnerabilities.

You know that feeling of panic when you scan a container and find a long list of vulnerabilities? Yeah, that one. In his DockerCon presentation, My Container Image Has 500 Vulnerabilities, Now What?, Snyk’s Matt Jarvis talks you off the ledge. How do you assess and prioritize security risk? How do you start to remediate? He lays out what you need to consider and how to get started.

Speaking of the SolarWinds breach, GitLab’s Brendan O’Leary dissected that and a number of other supply chain attacks in his talk, As Strong as the Weakest Link: Securing the Software Supply Chain. He delved into the simple, practical security measures that were missed, allowing the attacks to get a foothold in the first place.

Finally, in a session titled Secure Container Workloads for K8s in Containerd, Om Moolchandani, CISO and CTO at Accurics, spells out how security can be easily embedded into Docker development workflows and Kubernetes deployments to increase resiliency while practically eliminating the effort required to “be secure.” He also highlights open source tools that enable you to establish security guardrails, ensuring you build in security from the start, with programmatic enforcement in development pipelines, and stay secure with automated enforcement in the K8s runtime.

At Docker, security is more than a watchword — it’s an obsession. To learn more, read Justin’s blog and watch the recorded sessions listed above. They’re still available and still free.

Join the Next Community All Hands on September 16th!

We’re excited to announce that our next Community All Hands will be in exactly 2 months,  on Thursday September 16th 2021 @ 8am PST/5pm CET. The event is a unique opportunity for Docker staff, Captains, Community Leaders and the broader Docker community to come together for live company updates, product updates, demos, community updates, contributor shout-outs and of course, a live Q&A. Make sure to register for the event here!

Level Up Security with Scoped Access Tokens
https://www.docker.com/blog/level-up-security-with-scoped-access-tokens/
Tue, 20 Jul 2021

Scoped tokens are here 💪!

Scopes give you more fine grained control over what access your tokens have to your content and other public content on Docker Hub! 

It’s been a while since we first introduced tokens into Docker Hub (back in 2019!) and we are now excited to say that we have added the ability for accounts on a Pro or Team plan to apply scopes to their Personal Access Tokens (PATs) as a way to authenticate with Docker Hub. 


Access tokens can be used as a substitute for your password in Docker Hub, and adding scopes to these tokens gives you more fine-grained control over what access the logged-in machine has. This is great for setting up things like service accounts in CI systems, registry mirrors, or even your local machine, to make sure you are not giving too much access away.

PATs are an alternative to using passwords for authentication to Docker Hub when using the Docker command line:

docker login --username <username>

When prompted for your password, you can simply provide a token instead. The other advantages of tokens are that you can create and manage multiple tokens at once, see when they were last used, and, if things look wrong, revoke the token’s access. This and our API support make it easy to manage the rotation of your tokens to help improve the security of your supply chain.
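For example (the environment variable name here is just an illustration), a CI job or service account can log in non-interactively by piping a scoped token to the CLI:

# DOCKER_PAT holds a personal access token created in Docker Hub.
echo "$DOCKER_PAT" | docker login --username <username> --password-stdin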

Create and Manage Personal Access Tokens in Docker Hub 

Personal access tokens are created and managed in your Account Settings.


Then head to security: 


From here, you can:

  • Create new access tokens
  • Modify existing tokens
  • Delete access tokens

The other way you can manage your tokens is through the Hub APIs. We have Swagger docs for our APIs and the new docs for scoped tokens can be found here:

https://docs.docker.com/docker-hub/api/latest/#tag/access-tokens
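For example, here is a rough sketch of listing your tokens via the API (check the Swagger docs above for the authoritative paths and payloads; jq is used only to extract the JWT from the login response):

# 1. Exchange your credentials for a JWT.
JWT=$(curl -s -H "Content-Type: application/json" \
  -d '{"username": "<username>", "password": "<password-or-token>"}' \
  https://hub.docker.com/v2/users/login | jq -r .token)
# 2. List your personal access tokens.
curl -s -H "Authorization: Bearer $JWT" https://hub.docker.com/v2/access-tokens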

Scopes available 

When you are creating a token, Pro and Team plan members will now have access to 4 scopes:

  • Read, write, delete: This token can read, write, and delete all of the repos that you have access to. (It does not allow you to modify account settings as password authentication would.)
  • Read, write: This scope is for read/write within repos you have access to (all the public content on Hub and your private content). This is the sort of scope to use within a CI that is also pushing to a repo.
  • Read only: This scope is read only for all repos you have access to. This is great for production use, where a system only needs to pull content from your repos to run it.
  • Public repo read only: This scope is for reading public content only, so nothing from your or your team’s repos. This is great when you want to set up a system that just pulls, say, Docker Official Images or trusted content from Docker Hub.

These scopes are for Pro accounts (which get 5 tokens) and Team accounts (which give each team member unlimited tokens). Free users can continue to use their single read, write, delete token and revoke/reissue this as they need. 

Scoped access tokens level up the security of Docker users’ supply chains by improving how you authenticate into Docker Hub. Available for Pro and Team plans, we’re excited for you to try scoped tokens out and start giving us feedback.

Want to learn more about Docker Scoped Tokens? Make sure to follow us on Twitter: @Docker. We’ll be hosting a live Twitter Spaces event on Thursday, Jul 22, 2021 from 8:30 – 9:00 am PST, where you’ll hear from Docker engineers, product managers and a Docker Captain!

If you have feedback or other ideas, remember to add them to our public roadmap. We are always interested in what you would like us to build next!

Bringing “docker scan” to Linux
https://www.docker.com/blog/bringing-docker-scan-to-linux/
Wed, 09 Jun 2021

At the end of last year we launched vulnerability scanning options as part of the Docker platform. We worked together with our partner Snyk to include security testing options along multiple points of your inner loop. We incorporated scanning options into the Hub, so that you can configure your repositories to automatically scan all the pushed images. We also added a scanning command to the Docker CLI on Docker Desktop for Mac and Windows, so that you can run vulnerability scans for images on your local machine. The earlier in your development that you find these vulnerabilities, the easier and cheaper it is to fix them. Vulnerability scan results also provide remediation guidance on things that you can do to remove the reported vulnerabilities. Some of the examples of remediation include recommendations for alternative base images with lower vulnerability counts, or package upgrades that have already resolved the specified vulnerabilities.

We are now making another update in our security journey by bringing docker scan to the Docker CLI on Linux. The experience of scanning on Linux is identical to what we have already launched for the Desktop CLI, with scanning support for linux/amd64 (x86-64) Docker images. The CLI command is the same docker scan, supporting all of the same flags. These flags include the options to add Dockerfiles with images submitted for scanning and to specify the minimum severity level for the reported vulnerabilities.

Information about the docker scan command, with all the details about the supported flags, is provided in the Vulnerability Scanning for Docker Local Images section of the Docker documentation. Vulnerability reports are also the same, listing for each vulnerability information about severity levels, the image layers where the vulnerability is manifested, the exploit maturity, and remediation suggestions.
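For instance (the image name is an example), you can pass the Dockerfile along with the image and filter out lower-severity findings:

# Scan a local image, associate it with its Dockerfile for better remediation
# advice, and only report high-severity vulnerabilities.
docker scan --file Dockerfile --severity high myorg/myapp:latest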

The major difference with scanning on Linux is that instead of upgrading your Docker Desktop, you will need to install or upgrade your Docker Engine. Directions for installing the Engine are provided in the Install Docker Engine section of the Docker documentation, including instructions for several different distros such as CentOS, Debian, Fedora, and Ubuntu. And because this is Linux, we have open sourced the scanning CLI plugin… Go ahead, give it a try, or take a look at this page for other Docker open source projects that may help you to build, share, and run your applications.

If you want to learn more about application vulnerabilities and you missed DockerCon 21, you can go here for a recording of the DockerCon LIVE panel on security, or watch a great session called ‘My Container Image Has 500 Vulnerabilities. Now What?’. Or look for any other DockerCon recording… there were all sorts of great sessions on things that you can do to build, share, and run your applications. For more information about the Docker partnership with Snyk, and plans for future partnership collaborations, please check out this blog post by Snyk’s Sarah Conway.

New in Docker Hub: Personal Access Tokens
https://www.docker.com/blog/docker-hub-new-personal-access-tokens/
Thu, 19 Sep 2019

The Docker Hub access token list view.

On the heels of our recent update on image tag details, the Docker Hub team is excited to share the availability of personal access tokens (PATs) as an alternative way to authenticate into Docker Hub.

Already available as part of Docker Trusted Registry, personal access tokens can now be used as a substitute for your password in Docker Hub, especially for integrating your Hub account with other tools. You’ll be able to leverage these tokens for authenticating your Hub account from the Docker CLI – either from Docker Desktop or Docker Engine:

docker login --username <username>

When you’re prompted for a password, enter your token instead.

The advantage of using tokens is the ability to create and manage multiple tokens at once so you can generate different tokens for each integration – and revoke them independently at any time.

Create and Manage Personal Access Tokens in Docker Hub 

Personal access tokens are created and managed in your Account Settings.

From here, you can:

  • Create new access tokens
  • Modify existing tokens
  • Delete access tokens
Creating an access token in Docker Hub.

Note that the actual token is only shown once, at the time of creation. You will need to copy the token and either save it in a credential manager or use it immediately. If you lose a token, you will need to delete the lost token and create a new one.

The Next Step for Tokens

Personal access tokens open a new set of ways to authenticate into your Docker Hub account. Their introduction also serves as a foundational building block for more advanced access control capabilities, including multi-factor authentication and team-based access controls – both areas that we’re working on at the moment. We’re excited to share this and many other updates that are coming to Docker Hub over the next few months. Give access tokens a try and let us know what you think!

To learn more about personal access tokens for Docker Hub, check out the Docker Hub documentation.


What is Notary and why is it important to CNCF?
https://www.docker.com/blog/notary-important-cncf/
Tue, 24 Oct 2017

As you may have heard, the Notary project has been invited to join the Cloud Native Computing Foundation (CNCF). Much like its real world namesake, Notary is a platform for establishing trust over pieces of content.

In life, certain important events such as buying a house are facilitated by a trusted third party called a “notary.” When buying a house, this person is typically employed by the lender to verify your identity and serve as a witness to your signatures on the mortgage agreement. The notary carries a special stamp and will also sign the documents as an affirmation that a notary was present and verified all the required information relating to the borrowers.

In a similar manner, the Notary project, initially sponsored by Docker, is designed to provide high levels of trust over digital content using strong cryptographic signatures. In addition to ensuring the provenance of the software, it also provides guarantees that the content is not modified without approval of the author anywhere in the supply chain. This then allows higher-level systems like Docker Enterprise Edition (EE) with Docker Content Trust (which uses Notary) to establish clear policy on the usage of content. For instance, a policy can be set that only signed content can be used at runtime and deployed by the orchestrators in the Docker platform. Overall, Notary is a core piece of plumbing in Docker’s approach to the secure supply chain, whereby security is seamlessly and uniformly embedded into a workflow from development all the way through to operations.
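As a small, current-day illustration of that kind of policy (the image is just an example), enabling Docker Content Trust in the client makes pulls fail unless the tag has valid signature data:

# With content trust enabled, docker pull/run refuses tags without trust data.
export DOCKER_CONTENT_TRUST=1
docker pull alpine:latest
# Inspect the signature data held for a tag (recent Docker CLI versions):
docker trust inspect --pretty alpine:latest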

Notary is an implementation of The Update Framework (TUF) written in Go. TUF was developed at the NYU Tandon School of Engineering. TUF was submitted to join CNCF in partnership with Notary. The combined nature of these two projects makes for a particularly compelling donation: both the specification and the most widely deployed implementation are coming in together under the auspices of the CNCF.

With technologies such as containerd and Kubernetes already members of CNCF, Notary and TUF are the first security-related projects to be added to the CNCF. This year has seen a significant uptick in data security compromises and we believe the CNCF is positioning itself ahead of the curve by inviting Notary and TUF to join. We hope that more security-focused projects are added to the CNCF over time.

Notary is already used in production environments beyond container distribution with Cloudflare integrating it into their PAL tool for container identity bootstrapping and Kolide using it to secure their autoupdater for the osquery tool. If current trends continue, there will be many more users in search of tools to secure their software distribution channels in the near future and Notary, TUF, and the CNCF will be well positioned to meet that need.

 


Docker Engine 1.10 Security Improvements
https://www.docker.com/blog/docker-engine-1-10-security/
Thu, 04 Feb 2016

It’s been a crazy past few months with DockerCon and the holidays, but we are still hacking away on the Docker Engine and have some really awesome security features I would like to highlight with the release of Docker Engine 1.10.

Security is very important to us and our approach is two-fold: one is to provide a secure foundation on which to build applications, and the second is to provide capabilities to secure the applications themselves. Docker Engine is the foundation on which you pull, build, and run containers, and all the features listed below are about giving you more granular controls for access, resources, and other kickass stuff…

OK, enough with the introduction – let’s get to the good stuff!

Seccomp Profiles

Seccomp filtering allows a process to specify a Berkeley Packet Filter program to apply to its syscalls. In layman’s terms, this allows a user to catch a syscall and “allow”, “deny”, “trap”, “kill”, or “trace” it via the syscall number and arguments passed. It adds an extra level of granularity in locking down the processes in your containers to only do what they need.

This was first added into Runc and is in Docker Engine 1.10, where you can pass a profile defining the syscalls and the filters for them. There is also a default profile, used when none is passed and the container is not run as privileged. This was hashed out in docker/docker/#18780. We aimed to provide a usable default without being overly restrictive. However, you can also run containers with --security-opt seccomp:unconfined if you need to run without any seccomp profile.

If you would like to know more about Seccomp, see the documentation here; the original pull request is docker/docker/#17989.
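For example (the profile path is a placeholder; the JSON profile itself is not shown here), you can point a container at a custom profile or opt out entirely:

# Run with a custom seccomp profile...
docker run --security-opt seccomp:/path/to/profile.json alpine sh
# ...or disable seccomp filtering for a container that really needs that.
docker run --security-opt seccomp:unconfined alpine sh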

And here is the demo:

A Sneak Peek at Security Profiles

The seccomp profiles mentioned above are just the start of something even better. What started as a side project for a better way to write custom apparmor profiles, https://github.com/jfrazelle/bane, has turned into a proposal for native security profiles in Docker Engine. Now this is still being worked on but I wanted to give a teaser of what is to come – and of course give a plug to my awesome tool 🙂 You can read up and follow the conversation on this here.

User Namespaces

Phil Estes has worked hard to get User Namespaces into the stable release for Docker Engine 1.10. User Namespaces expands on the idea of granular access policies by allowing multiple namespaces to reside on the same Docker host.
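On a current Docker Engine, an illustrative way to enable this (the “default” remapping is an example; see the daemon documentation for details) looks like:

# Start the daemon with user namespace remapping enabled...
dockerd --userns-remap=default
# ...or set it in /etc/docker/daemon.json:
#   { "userns-remap": "default" }
# Root (uid 0) inside a container is then mapped to a subordinate uid on the host.
docker run --rm alpine id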

Check out the demo:

Authorization Plugins

Using an authorization plugin, a Docker administrator can configure granular access policies for managing access to the Docker daemon. System administrators can use these plugins to configure user access policies for their infrastructure. The plugins act as interceptors that can allow or deny a Docker API request based on the rules created by you! Once installed and configured, these plugins work the same way as the current plugins for volumes and networking via the Docker plugin API.
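For example (the plugin name here is hypothetical), on a current Docker Engine the daemon is told at startup which authorization plugin to consult:

# Every Docker API request is passed to the named plugin for an allow/deny decision.
# The flag can be repeated to chain multiple plugins.
dockerd --authorization-plugin=my-authz-plugin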

Many thanks to Dima Stoppel, Liron Levin, and the Twistlock team in contributing this feature to Docker Engine.

Learn more about creating or using an Authorization plugin here.

You can view the pull request here docker/docker/#15365.

There is a Go package for easily making an Auth plugin, available here.

Coming Soon — PIDs Control Group

This next feature will be in 1.11 but I wanted to give a teaser now. This is a new cgroup to limit the number of processes that can be forked inside a cgroup. It shipped in the 4.3 kernel. We decided to make this feature secure by default, meaning we are setting the PIDs Limit for the docker cgroup parent to 512 (actual number may change but something along these lines), more than enough for the average user, but not enough to do great harm. Of course if you need more you can override the default, or even set it as unlimited.

Awesome right? But the coolest part about this is the feature came to Linux, then Runc, and then Docker all by the same person, Aleksa Sarai!  View the commit to the kernel here. 

View the pull request to runc and the pull request to Docker.

Also this has been added to the docker stats command and API you know and love.

See how easy it is to prevent a fork bomb with --pids-limit for a container now:
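For reference, a command along the lines of what the demo shows (the image and limit value are illustrative):

# Cap the container at 100 processes; a fork bomb inside it hits the limit
# instead of exhausting the host.
docker run --rm -it --pids-limit 100 alpine sh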

The first part of the video shows a fork bomb in a container with bash, the second part shows docker stats. The container with the large number of pids is chrome 😉

As you can tell there are a lot of awesome things coming in Docker Engine 1.10. As always catch you on #docker-dev IRC or the github repo.

Watch this video on best practices for building secure Docker images

For more information, visit our Docker Security Center.

