Justin Cormack – Docker (https://www.docker.com)

Docker Acquires Mutagen for Continued Investment in Performance and Flexibility of Docker Desktop
https://www.docker.com/blog/mutagen-acquisition/ | 27 June 2023

I’m excited to announce that Docker, voted the most-used and most-desired tool in Stack Overflow’s 2023 Developer Survey, has acquired Mutagen IO, Inc., the company behind the open source Mutagen file synchronization and networking technologies that enable high-performance remote development. Mutagen’s synchronization and forwarding capabilities facilitate the seamless transfer of code, binary artifacts, and network requests between arbitrary locations, connecting local and remote development environments. When combined with Docker’s existing developer tools, Mutagen unlocks new possibilities for developers to innovate and accelerate development velocity with local and remote containerized development.

“Docker is more than a container tool. It comprises multiple developer tools that have become the industry standard for self-service developer platforms, empowering teams to be more efficient, secure, and collaborative,” says Docker CEO Scott Johnston. “Bringing Mutagen into the Docker family is another example of how we continuously evolve our offering to meet the needs of developers with a product that works seamlessly and improves the way developers work.”


The Mutagen acquisition introduces novel mechanisms for developers to extract the highest level of performance from their local hardware while simultaneously opening the gateway to the newest remote development solutions. We continue scaling the capabilities of Docker Desktop to meet the needs of the growing number of developers, businesses, and enterprises relying on the platform.

“Docker Desktop is focused on equipping every developer and dev team with blazing-fast tools to accelerate app creation and iteration by harnessing the combined might of local and cloud resources. By seamlessly integrating and magnifying Mutagen’s capabilities within our platform, we will provide our users and customers with unrivaled flexibility and an extraordinary opportunity to innovate rapidly,” says Webb Stevens, General Manager, Docker Desktop.

“There are so many captivating integration and experimentation opportunities that were previously inaccessible as a third-party offering,” says Jacob Howard, the CEO at Mutagen. “As Mutagen’s lead developer and a Docker Captain, my ultimate goal has always been to enhance the development experience for Docker users. As an integral part of Docker’s technology landscape, Mutagen is now in a privileged position to achieve that goal.”

Jacob will join Docker’s engineering team, spearheading the integration of Mutagen’s technologies into Docker Desktop and other Docker products.

You can get started with Mutagen today by downloading the latest version of Docker Desktop and installing the Mutagen extension, available in the Docker Extensions Marketplace. Support for current Mutagen offerings, open source and paid, will continue as we develop new and better integration options.

FAQ | Docker Acquisition of Mutagen

With Docker’s acquisition of Mutagen, you’re sure to have questions. We’ve answered the most common ones in this FAQ.

As with all of our open source efforts, Docker strives to do right by the community. We want this acquisition to benefit everyone — community and customer — in keeping with our developer obsession.

What will happen to Mutagen Pro subscriptions and the Mutagen Extension for Docker Desktop?

Both will continue as we evaluate and develop new and better integration options. Existing Mutagen Pro subscribers will see an update to the supplier on their invoices, but no other billing changes will occur.

Will Mutagen become closed-source?

There are no plans to change the licensing structure of Mutagen’s open source components. Docker has always valued the contributions of open source communities.

Will Mutagen or its companion projects be discontinued?

There are no plans to discontinue any Mutagen projects. 

Will people still be able to contribute to Mutagen’s open source projects?

Yes! Mutagen has always benefited from outside collaboration in the form of feedback, discussion, and code contributions, and there’s no desire to change that relationship. For more information about how to participate in Mutagen’s development, see the contributing guidelines.

What about other downstream users, companies, and projects using Mutagen?

Mutagen’s open source licenses continue to allow the embedding and use of Mutagen by other projects, products, and tooling.

Who will provide support for Mutagen projects and products?

In the short term, support for Mutagen’s projects and products will continue to be provided through the existing support channels. We will work to merge support into Docker’s channels in the near future.

Is this replacing Virtiofs, gRPC-FUSE, or osxfs?

No, virtual filesystems will continue to be the default path for bind mounts in Docker Desktop. Docker is continuing to invest in the performance of these technologies.

How does Mutagen compare with other virtual or remote filesystems?

Mutagen is a synchronization engine rather than a virtual or remote filesystem. Mutagen can be used to synchronize files to native filesystems, such as ext4, trading typically imperceptible amounts of latency for full native filesystem performance.

How does Mutagen compare with other synchronization solutions?

Mutagen focuses primarily on configuration and functionality relevant to developers.

How can I get started with Mutagen?

To get started with Mutagen, download the latest version of Docker Desktop and install the Mutagen Extension from the Docker Desktop Extensions Marketplace.

Announcing Docker SBOM: A step towards more visibility into Docker images
https://www.docker.com/blog/announcing-docker-sbom-a-step-towards-more-visibility-into-docker-images/ | 7 April 2022

Today, Docker takes its first step in making what is inside your container images more visible so that you can better secure your software supply chain. Included in Docker Desktop 4.7.0 is a new, experimental docker sbom CLI command that displays the SBOM (Software Bill Of Materials) of any Docker image. It will also be included in our Linux packages in an upcoming release. The functionality was developed as an open source collaboration with Anchore using their Syft project.

As I wrote in my blog post last week, at Docker our priorities are performance, trust and great experiences. This work improves trust in the supply chain by providing SBOMs to consumers of software, and it improves the developer experience by making container images more transparent, so you can easily see what is inside of them. This command is just a first step toward making container images more self-descriptive. We believe that the best time to determine and record what is in a container image is when you are putting the image together with docker build. To enable this, we are working on making it easy for partners and the community to add SBOM functionality to docker build using BuildKit’s extensibility.

As this information is generated at build time, we believe that it should be included as part of the image artifact. This means that if you move images between registries (or even into air gapped environments), you should still be able to read the SBOM and other image build metadata off of the image.

We’re looking to collaborate with partners and those in the community on our SBOM work in BuildKit. Take a look at our PoC and leave feedback here.

What is an SBOM?

A Software Bill Of Materials (SBOM) is analogous to a packing list for a shipment; it’s all the components that make up the software, or were used to build it. For container images, this includes the operating system packages that are installed (e.g.: ca-certificates) along with language specific packages that the software depends on (e.g.: log4j). The SBOM could include only some of this information or even more details, like the versions of components and where they came from.

SBOMs are sometimes required by governments or other software consumers who are trying to improve their supply chain security. This is because knowing what is inside your software gives you confidence that it is safe to use and can be useful in understanding impact when a vulnerability is made public.

Using the container image SBOM to check for a vulnerability

Let’s take a quick look at what the docker sbom command can do to help when a vulnerability like log4shell is made public. When a vulnerability like this appears, it’s crucial that you can quickly determine if your software is impacted. We’ll use the neo4j:4.4.5 Docker Official Image. Just running docker sbom neo4j:4.4.5 outputs a tabulated form of the SBOM:

$ docker sbom neo4j:4.4.5
Syft v0.42.2
 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged packages      [385 packages]

NAME                      VERSION                        TYPE
...
bsdutils                  1:2.36.1-8+deb11u1             deb
ca-certificates           20210119                       deb
...
log4j-api                 2.17.1                         java-archive
log4j-core                2.17.1                         java-archive
...

Note that the output includes not only the Debian packages that have been installed inside the image but also the Java libraries used by the application. Getting this information reliably and with minimal effort allows you to promptly respond and reduce the chance that you will be breached. In the above example, we can see that Neo4j uses version 2.17.1 of the log4j-core library which means that it is not affected by log4shell.

Without docker sbom or another SBOM scanning tool, you would need to check your application’s source code to see which version of log4j-core you are using. When you have several applications or services deployed and multiple versions of them, this can be difficult.

In addition to outputting the SBOM as a table, the docker sbom command has options for outputting the SBOM in the standard SPDX and CycloneDX formats, along with the GitHub and native Syft formats.
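As a sketch of how the machine-readable output can be used (the exact flag name comes from the experimental CLI and may change), you can emit SPDX JSON and filter it with standard tools. To keep the snippet self-contained, the extraction below runs against a tiny inline SPDX-style fragment standing in for a real scan:

```shell
# Hypothetical usage of the format option:
#   docker sbom --format spdx-json neo4j:4.4.5 > sbom.spdx.json
# A minimal SPDX-style fragment stands in for the real scan output here.
cat > sbom.spdx.json <<'EOF'
{"packages":[
  {"name":"log4j-core","versionInfo":"2.17.1"},
  {"name":"bsdutils","versionInfo":"1:2.36.1-8+deb11u1"}
]}
EOF
# Pull out log4j entries without any JSON tooling installed
grep -o '"name":"log4j[^"]*"' sbom.spdx.json   # prints "name":"log4j-core"
```

Once the SBOM is a file in a standard format, the same filter works in CI for every image you build, rather than only when you remember to run an interactive scan.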

We are sharing the docker sbom functionality early, as an experimental command, with the intention of getting feedback from the community on the direction that we’re going. We’d like to know about your use cases and any other feedback that you have. You can leave it on the command’s repo.

What’s next?

We’d love to collaborate with partners and the community on bringing SBOMs to all container images through BuildKit so please hack on our example and leave feedback on our RFC. Please also give the experimental docker sbom command a try and leave us any feedback that you have. You can also read more about the docker sbom collaboration with Anchore on their blog.

DockerCon 2022

Join us for DockerCon 2022 on Tuesday, May 10. DockerCon is a free, one day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn about how to go from code to cloud fast and how to solve your development challenges, DockerCon 2022 offers engaging live content to help you build, share and run your applications. Register today at https://www.docker.com/dockercon/

Investing In Performance, Trust and Great Experiences for Developers
https://www.docker.com/blog/investing-in-performance-trust-and-great-experiences-for-developers/ | 31 March 2022

Docker is nine years old? It seems like both yesterday and a long time ago! The technology world has changed a lot since then, and Docker has played a key role in making it easy for developers to build and ship applications wherever they’re needed.

What were the key changes that Docker introduced? Well, when Docker came out, it was compared to what was popular at the time, so we had years of “Docker versus virtual machines.” We also had lots of “Docker is just cgroups and namespaces wrapped up.” With the perspective that nine years brings, we realize these are not the important pieces at all.

First and foremost, Docker succeeded because it started with a great developer experience. You can bring up new environments, applications and experiments with a single command: docker run -it ubuntu replaces so much that you might have had to do before. The time-to-magic is so short, and there is so much you can build on from there.

Similar to the standardization of shipping containers decades ago, the value is not the container boxes themselves but the global supply chains powered by those containers, supported by the off-the-shelf infrastructure available to build new supply chains fast.

Fast forward to today, and the software supply chain has rallied around Docker container images as the fundamental deployment unit of modern applications. Hundreds of new companies have been created, and huge communities have sprung up, in particular around Docker, Kubernetes and the broader cloud native movement. The pace of innovation across these communities is impressive, and we see many more opportunities as the Docker revolution spreads to many more developers, each with their own needs and dreams for better tooling.

The fundamental aim of the cloud native movement is to build software supply chains that can operate continuously while being repeatable, scalable, and able to manage the software needs of whole teams and organizations. 

We believe Docker lies at the heart of the modern software supply chain, and there are three big themes guiding our efforts in this area: Performance, Trust and Experience. 

Performance. When you build developer tooling, you always need to think about getting out of the way of the developer, being there to support them in the background, not making them context switch to wait. This is why we have invested so much time in areas such as BuildKit and Docker Desktop filesystem performance. This work is often invisible, but it requires deep and complex engineering underneath.


Trust. Organizations now depend on the open source software from which they build their applications. At the same time, some of that software may be buggy, hostile, or worse: a Trojan horse brought into your organization. We need to strengthen trust across our whole supply chains, and for all the activities that we do with software. Zero trust is often misunderstood, but it is slowly remaking how we build infrastructure. Security used to be largely firewalls at the perimeter of an organization, hardened on the outside and soft on the inside: the armadillo model. The zero trust model reflects the experience that any individual barrier may be breached, so the rest of the system needs to stay resilient. In this area, we are working on making Docker Desktop as secure as possible, with full organizational controls available in Docker Business. We are also working on projects across supply chain security, building on the widely trusted Docker Official Images that are the basis of container supply chains, bringing zero trust to your supply chain.

Experience. This is all about the magic of things just working easily and simply. To reach a huge audience of developers, we need to make everything even easier to use and to learn. Of course, this does not mean hiding the power that more experienced users rely on; it means finding simple ways to get started and get work done, while providing powerful tools underneath to grow into. Docker has always been admired for its simplicity, power and ease of use, but we recognize that there is much more to do to bring that experience to ever larger numbers of developers. Our preview of Docker Extensions provides just one example of how we’re trying to make it easier for developers to integrate their preferred tools, do their jobs and be productive.

We believe the work of building a sustainable, long-term software ecosystem on these principles of performance, trust and experience is just beginning, and we are excited to work with our community to accelerate this future. With that in mind, I invite you to visit our public roadmap and let us know what you want to see from Docker.

Apache Log4j 2 CVE-2021-44228
https://www.docker.com/blog/apache-log4j-2-cve-2021-44228/ | 11 December 2021

Update: 13 December 2021

As an update to CVE-2021-44228, the fix made in version 2.15.0 was incomplete in certain non-default configurations. An additional issue was identified and is tracked as CVE-2021-45046. For a more complete fix to this vulnerability, it is recommended to update to Log4j 2 version 2.16.0.

————————————————————————————-

Original post below has now been updated:

15 December 2021 12:49 PM PT

We know that many of you are working hard on fixing the new and serious Log4j 2 vulnerability CVE-2021-44228, which has a 10.0 CVSS score. We send our #hugops and best wishes to all of you working on this vulnerability, now going by the name Log4Shell. This vulnerability in Log4j 2, a very common Java logging library, allows remote code execution, often from a context that is easily available to an attacker. For example, it was found in Minecraft servers which allowed the commands to be typed into chat logs as these were then sent to the logger. This makes it a very serious vulnerability, as the logging library is used so widely and it may be simple to exploit. Many open source maintainers are working hard with fixes and updates to the software ecosystem.

We want to help you as much as we can in this challenging time, and we have collected as much information as possible for you here, including how to detect the CVE and potential mitigations. 

We will update this post as more information becomes available.


Am I vulnerable?

The vulnerable versions of Log4j 2 are versions 2.0 to version 2.14.1 inclusive. The first fixed version is 2.15.0. The fix in 2.15.0 was incomplete and 2.16.0 is now the recommended version to upgrade to. We strongly encourage you to update to the latest version if you can. If you are using a version before 2.0, you are also not vulnerable.

Even if you are using a vulnerable version, you may not be exploitable: your configuration may already mitigate the issue (see the Mitigations section below), or the things you log may not include any user input. However, this is difficult to validate without understanding in detail all the code paths that may log and where they may get input from, so you will probably want to upgrade all code using vulnerable versions.
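As a rough first pass, you can search a host or an unpacked image filesystem for log4j-core jars and read the version out of the filename. This is only a sketch: filename-based detection misses shaded or repackaged jars, so treat a clean result with caution.

```shell
# Sketch: list log4j-core jars under a directory and print their versions.
# Filename matching is approximate; shaded/uber jars will not be found.
find_log4j() {
  find "$1" -name 'log4j-core-*.jar' 2>/dev/null | while read -r jar; do
    ver=$(basename "$jar" .jar | sed 's/^log4j-core-//')
    printf '%s version %s\n' "$jar" "$ver"
  done
}
# Example: scan the current directory (use / or a mount point in practice)
find_log4j .
```

Any version it reports between 2.0 and 2.14.1 inclusive should be treated as vulnerable and upgraded.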

The configuration for the docker scan command previously shipped in Docker Desktop versions 4.3.0 and earlier unfortunately does not pick up this vulnerability on scans. Please update to Docker Desktop 4.3.1+ with docker scan 0.11.0+, which we released today, 11 December 2021.

If you are using docker scan from Linux you can download binaries from GitHub and install in the plugins directory as explained in the instructions here. We will soon update the Linux CLI version to include the updated docker scan.

If you use the updated version, you should see a message in the output log like this:

Upgrade org.apache.logging.log4j:log4j-core@2.14.0 to org.apache.logging.log4j:log4j-core@2.15.0 to fix
  ✗ Arbitrary Code Execution (new) [Critical Severity][https://snyk.io/vuln/SNYK-JAVA-ORGAPACHELOGGINGLOG4J-2314720] in org.apache.logging.log4j:log4j-core@2.14.0
    introduced by org.apache.logging.log4j:log4j-core@2.14.0

To test this, you can check an image known to contain a vulnerable version. For example:

docker scan elastic/logstash:7.13.3

Or, to cut out all the other vulnerabilities:

docker scan elastic/logstash:7.13.3 | grep 'Arbitrary Code Execution'

For more information about docker scan, see the documentation.

Docker Hub Scans

Updated: Docker Hub security scans after 1700 UTC 13 December 2021 are now correctly identifying the Log4j 2 vulnerability. Scans before this date do not currently reflect this vulnerability. We are looking into how to remediate this and will update this post when we do. Please use docker scan from the updated version above for images that were pushed before 1700 UTC 13 December 2021.

Mitigations

You may well want to use a web application firewall (WAF) as an initial part of your mitigation and fix process.

This issue can be mitigated in prior releases of Log4j 2 (<2.16.0) by removing the JndiLookup class from the classpath.

For example:

zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class

Docker Official Images

A number of the Docker Official Images do contain vulnerable versions of Log4j 2. For current status updates for Docker Official Images, please see https://docs.docker.com/security/.

Other images on Docker Hub

We are working with the Docker Verified Publishers to identify and update their affected images. We are looking at ways to show you images that are affected and we will continue to update this post as we have more information.

Is Docker’s infrastructure affected?

Docker Desktop and Docker Hub are not affected by the Log4j 2 vulnerability. Docker largely uses Go, not Java, to build our applications. Although we do use some Java applications internally, we have confirmed that we are not vulnerable to CVE-2021-44228 and CVE-2021-45046.

Docker Verified Publisher: Trusted Sources, Trusted Content
https://www.docker.com/blog/docker-verified-publisher-trusted-sources-trusted-content/ | 7 December 2021

Six months since its launch at DockerCon, the Docker Verified Publisher program delivers on its promise to developers and partners alike.

The Docker Verified Publisher program means trusted content and trusted sources for the millions of Docker users. At the May 2021 DockerCon, Docker announced its Secure Software Supply Chain initiative, highlighting Docker Verified Publisher as a key component of that trusted content. 

The trusted images in Docker Hub help development teams build secure software supply chains, minimizing exposure to malicious content early in the process to save time and money later. Docker allows developers to quickly and confidently discover and use images in their applications from known, trusted sources. 


Docker Verified Publisher partners join the trusted content Docker provides, along with Docker Official Images and the Docker Open Source program. In short, the Docker Verified Publisher program promises developers that the images they use come from a trusted software publisher. And a Docker Hub search shows trusted sources first.

Trusted images and software security are at the forefront of what the new Docker Business subscription tier offers, too. These trusted images can be allowed into large organizations – while preventing unverified, untrusted community images via the Docker Business Image Management features in the Docker Hub organization control plane. And of course, those trusted images include Docker Verified Publisher partners.

Dozens of software publishers have joined the Docker Verified Publisher program already, and more are poised to join before Docker’s new Docker Desktop license policies take effect (31 January 2022).

Docker Verified Publisher partners enjoy benefits such as:

  • Removal of rate limiting on all repos in the DVP partners’ namespace, providing a premium user experience: all Docker users, whether they have a Docker subscription or not, are able to pull the partner’s images as much as they want
  • DVP badging on partner namespace and repos, indicating the trusted content and verified source (part of Docker’s Secure Software Supply Chain initiative)
  • Priority search ranking in Docker Hub 
  • Co-marketing opportunities including social shares, posts on the popular Docker blog, the exclusive right to sponsor DockerCon 2022, etc.
  • Inclusion as one of two trusted sources in the image access controls included in the Docker Business subscription tier, bringing essential security and management capabilities to larger Docker customers
  • Regular reporting to track key partner repo metrics such as pull requests, unique IP addresses, and more
  • And more benefits added regularly

To learn more and join the Docker Verified Publisher program, just email partners@docker.com or visit this page to contact us.

News from AWS re:Invent – Docker Official Images on Amazon ECR Public
https://www.docker.com/blog/news-from-aws-reinvent-docker-official-images-on-amazon-ecr-public/ | 30 November 2021

We are happy to announce today that, in partnership with Amazon, Docker Official Images are now available on AWS ECR Public. This is especially exciting because Docker Official Images are some of the most popular images on Docker Hub, acting as a key and trusted starting point for base images for the entire container ecosystem. Having them available on ECR Public, in addition to Docker Hub, makes it easier for Amazon customers to use these images conveniently and securely, and gives developers the flexibility to download Docker Official Images from their choice of registry.


The images are available to browse in the ECR Public gallery at https://gallery.ecr.aws/docker right now. You can pull the images by simply switching from using docker pull ubuntu:16.04 to docker pull public.ecr.aws/docker/library/ubuntu:16.04. We automatically push images to ECR Public when they are updated on Docker Hub so you will get all the latest releases wherever you pull from.
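The switch is purely a change of image reference: Docker Official Images live under the docker/library namespace on ECR Public. A small helper (the function name is made up for illustration; the prefix is the one described above) makes the mapping explicit:

```shell
# Map a Docker Official Image reference on Docker Hub to its ECR Public
# equivalent under the docker/library namespace.
to_ecr_public() {
  printf 'public.ecr.aws/docker/library/%s\n' "$1"
}
to_ecr_public ubuntu:16.04   # prints public.ecr.aws/docker/library/ubuntu:16.04
# In practice: docker pull "$(to_ecr_public ubuntu:16.04)"
```

Because the image content is pushed automatically from Docker Hub, either reference resolves to the same release.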

Note that while pulls from ECR Public do work from outside AWS, they are rate limited if not authenticated with an Amazon account, and you should generally use the Docker Hub addresses if you are pulling from outside AWS. Please see the ECR Public quotas documentation for more about how limits work with ECR Public.

If you are an AWS customer, pulling Docker Official Images from ECR Public offers several advantages. ECR Public is replicated across all AWS regions, so pulls are local to the region you pull from. This lowers request latency and keeps all your resources in the same failure zone, which is the recommended architectural pattern.

In addition, Amazon announced today a pull-through cache from ECR Public into your private registry that can be used even in a VPC that is connected with AWS PrivateLink and does not have external network connectivity to the public internet. This means that a security isolated infrastructure can still easily access the secure Docker Official Images that you need, without having to enable general internet access.

Docker also now has the AWS Graviton Ready designation from Amazon, which reflects how much work has gone into making Docker’s trusted content and Docker Official Images work across the Arm64 architecture that Graviton uses. We know that many of you use Graviton in production, and many also use these same images on Apple Silicon laptops, or on your Raspberry Pi. We are happy to continue to support this growing ecosystem.

We will be continuing to work together with Amazon to roll out more features to make it easier for you to work with Docker and AWS together, so please give us feedback in our public roadmap if there are things we can do to make your experience easier.


Notary v2 Project Update
https://www.docker.com/blog/notary-v2-project-update/ | 27 October 2021

Supply chain security has become increasingly important to all of us in the last few years. Almost as important as the global supply chains that are having problems distributing goods around the world! There have been many attacks via the supply chain, where some piece of software that you use turns out to be compromised, or to contain vulnerabilities that in turn compromise your production environment.

We have written about secure supply chain best practices. Docker is committed to helping you build security into your supply chain, and we are working on more tools to help you with this. We provide Docker Trusted Content, including Docker Official Images and Docker Verified Publisher images, for you to use as a trusted starting point for building your applications.


We have also been heavily involved with many community projects around supply chain security. In particular we are heavily involved in the Notary v2 project in the Cloud Native Computing Foundation (CNCF). We last wrote about this in January. This project is the next generation of the original Notary project that Docker started in 2015 and then donated to the CNCF. Notary (to simplify!) is a project for adding cryptographic signatures to container images so that you can make sure that the image someone produced is the same one that you are using, and that it has not been tampered with on the way.

Over the years we have learned a lot about how it is used, and about the problems that have hindered wider adoption, and these lessons are part of the community feedback into the design of Notary v2. We are looking to build a signing framework that can be used in every registry, and where signatures can be pushed and pulled with images, so that you can verify that an image you pull from your private on-premises registry is the same as the Docker Official Image on Docker Hub, for example. This is one of the many use cases that are important to the community and which Notary v1 did not adequately address. We also want to make it much simpler to use, so we can have signature checks on by default for all users, rather than having opt-in signatures.

Today the project has released an early alpha prototype for further experimentation and for your feedback. Steve Lasker has written a blog post with the details. Check out the demos and please give feedback on whether these workflows fit your use cases, or how we can improve them.


Remember you can give us feedback about any aspect of our products on the Docker public roadmap. We are especially interested in your feedback around supply chain security and what you would like to see; we have had lots of really helpful feedback recently that is helping us work out where to take our products and tools.

Secure Software Supply Chain Best Practices
https://www.docker.com/blog/secure-software-supply-chain-best-practices/ | 24 June 2021

Last month, the Cloud Native Computing Foundation (CNCF) Security Technical Advisory Group published a detailed document about Software Supply Chain Best Practices. You can get the full document from their GitHub repo. This was the result of months of work from a large team, with special thanks to Jonathan Meadows and Emily Fox. As one of the CNCF reviewers, I had the pleasure of reading several iterations and seeing it take shape and improve over time.


Supply chain security has gone from a niche concern to something that makes headlines, particularly after the SolarWinds “Sunburst” attack last year. Last week it was an important part of United States President Joe Biden’s Executive Order on Cybersecurity. So what is it? Every time you use software that you didn’t write yourself, often open source software in your applications, you are trusting both that the software is what you think it is, and that it is trustworthy, not hostile. Usually both are true, but when they are not, as when hundreds of organisations installed updates from SolarWinds that turned out to contain code to attack their infrastructure, the consequences are serious. As people have hardened their production environments, attacking software earlier, as it is written, assembled, built, or tested, has become an easier route.

The CNCF Security paper started after discussions I had with Jonathan about what work needs to be done to make secure supply chains easier and more widely adopted. The paper does a really good job in explaining the four key principles:

  • First, every step in a supply chain should be “trustworthy” as a result of a combination of cryptographic attestation and verification.
  • Second, automation is critical to supply chain security. Automating as much of the software supply chain as possible significantly reduces the possibility of human error and configuration drift.
  • Third, the build environments used in a supply chain should be clearly defined, with limited scope.
  • Fourth, all entities operating in the supply chain environment must mutually authenticate using hardened authentication mechanisms with regular key rotation.

In simpler language, this means that you need to be able to securely trace all the code you are using, including exactly which versions you are using and where they came from, in an automated way so that there are no errors. Your build environments should be minimal, secure, and well defined, i.e. containerised. And you should be making sure everything is authenticated securely.
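A minimal sketch of the “containerised, well-defined build environment” idea is a multi-stage Dockerfile with a pinned base image and a runtime stage containing nothing but the built binary. The version tag below is illustrative; in practice you would pin the base image by digest (`FROM golang@sha256:...`) for exact traceability:

```shell
# Write an example Dockerfile illustrating a minimal, pinned build environment
cat > Dockerfile.example <<'EOF'
# Build stage: a pinned, minimal toolchain image
FROM golang:1.16-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /app ./...

# Runtime stage: only the binary, no toolchain left to attack
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF
echo "wrote Dockerfile.example"
```

Because the build stage is discarded, compilers and package managers never reach production, which narrows the scope of the build environment exactly as the third principle asks.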

The majority of people do not yet meet all these criteria, which makes exact traceability difficult. The report has stronger recommendations for more sensitive environments, such as those dealing with payments. Over time these requirements will become much more widely adopted, because the risks are serious for everyone.

At Docker we believe in the importance of a secure software supply chain, and we are going to bring you simple tools that improve your security. We already set the standard with Docker Official Images, the most widely trusted images that developers and development teams use as a secure basis for their application builds. Additionally, we offer CVE scanning in conjunction with Snyk, which helps identify the many risks in the software supply chain. We are currently working with the CNCF, Amazon, and Microsoft on the Notary v2 project, a revamp of Notary v1 and Docker Content Trust that makes signatures portable between registries, improves usability, and has broad industry consensus; we will ship it in a few months. We have more plans to improve security for developers and would love your feedback and ideas in our roadmap repository.
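As a rough illustration of the scanning workflow, the Snyk-powered `docker scan` command bundled with Docker Desktop checks a local image against known CVEs. The image name below is a placeholder, and the command needs Docker plus a one-time Snyk authorisation, so the sketch is guarded:

```shell
# Scan a local image for known vulnerabilities (image name is an example)
IMAGE="myorg/myapp:latest"

if command -v docker >/dev/null 2>&1; then
  docker scan "$IMAGE"   # reports CVEs found in the image's packages
else
  echo "docker not installed; command shown for illustration"
fi
```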

]]>
Donating Docker Distribution to the CNCF https://www.docker.com/blog/donating-docker-distribution-to-the-cncf/ Thu, 04 Feb 2021 15:00:00 +0000 https://www.docker.com/blog/?p=27437

We are happy to announce that Docker has contributed Docker Distribution to the Cloud Native Computing Foundation (CNCF). Docker is committed to the open source community and to open standards across many of our projects, and this move will ensure Docker Distribution has a broad group of maintainers for what is the foundation of many registries.

What is Docker Distribution?

Distribution is the open source code at the core of the container registry in Docker Hub, as well as many other container registries. It is the reference implementation of a container registry and is extremely widely used, so it is a foundational part of the container ecosystem, which makes its new home in the CNCF highly appropriate.

Docker Distribution was a major rewrite of the original Registry code, which was written in Python to a much earlier design that did not use content-addressed storage. The new version, written in Go, was designed as an extensible library, so that different backends and subsystems could be implemented. Docker formed the Open Container Initiative (OCI) in the Linux Foundation in 2015 to standardise the specifications for the container ecosystem, including the registry and image formats.
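To get a feel for what Distribution is, you can run the reference registry locally and push an image to it. The port, names, and tags below are illustrative, and the commands need a running Docker daemon, so they are guarded:

```shell
# Run the Distribution reference registry and push an image to it
REGISTRY="localhost:5000"

if command -v docker >/dev/null 2>&1; then
  docker run -d --name registry -p 5000:5000 registry:2   # the Distribution image
  docker pull alpine:3.13
  docker tag alpine:3.13 "$REGISTRY/alpine:3.13"
  docker push "$REGISTRY/alpine:3.13"        # push via the registry API
  curl -s "http://$REGISTRY/v2/_catalog"     # list repositories over HTTP
else
  echo "docker not installed; commands shown for illustration"
fi
```

The same `/v2/` HTTP API served here is what Docker Hub and the other registries built on this code expose, which is why Distribution is described as the reference implementation.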

Why are we donating Docker Distribution to the CNCF?

There are now many registries, with a lot of companies and organizations providing registries internally or as a service. Many of these are based on the code in Docker Distribution, but we found that many people had small forks and changes that they were not contributing back upstream, and the project needed a broader group of maintainers. To make the project clearly an industry-wide collaboration, the CNCF was the obvious home, as it hosts many successful collaborative projects, such as Kubernetes and containerd.

We approached the major users of the Docker Distribution code at scale to become maintainers of the project. This includes maintainers from Docker, GitHub, GitLab, DigitalOcean, Mirantis, and the Harbor project, which is itself a graduated CNCF project that extends the core registry with other services. In addition, we have invited a maintainer from the OCI, and we are open to more participation in the future. The project is now simply called “Distribution” and can be found at github.com/distribution/distribution.

The Distribution project has been accepted into the CNCF Sandbox, but as it is a mature project we will be proposing that it moves to incubation shortly. We welcome the new maintainers and look forward to the new contributions and future for the project in the CNCF.

]]>
Docker’s sessions at KubeCon 2020 https://www.docker.com/blog/dockers-sessions-at-kubecon-2020/ Mon, 10 Aug 2020 09:11:06 +0000 https://www.docker.com/blog/?p=26831 In a few weeks, August 17-20, many of us at Docker in Europe had been looking forward to hopping on the train down to Amsterdam for KubeCon + CloudNativeCon Europe. But like every other event since March, this one is virtual, so we will all be joining remotely from home. Most of the sessions are pre-recorded with live Q&A, the format we used at DockerCon 2020. As a speaker I really enjoyed this format at DockerCon; it gave us an opportunity to clarify and answer extra questions during the talk. It will be rather different from the normal KubeCon experience with thousands of people at the venue, though!


Our talks

Chris Crone has been closely involved with the CNAB (Cloud Native Application Bundle) project since its launch in late 2018. He will be talking about how to Simplify Your Cloud Native Application Packaging and Deployments, and will explain why CNAB is a great tool for developers. Packaging entire applications into self-contained artifacts, an extension of packaging up a single container, is really useful. The tooling, especially Porter, has been making a lot of progress recently, so this talk is for you whether you heard about CNAB before and are wondering what has been happening, or are completely new to it.

On the subject of putting new things in registries, Silvin Lubecki and Djordje Lukic from our Paris team will be giving a talk about storing absolutely anything in a container registry: Sharing is Caring! Push Your Cloud Application to a Container Registry. The movement to put everything into container registries is taking off: once they were just for containers, but now we are seeing Helm charts and many more cloud native artifacts stored in registries. There are some difficulties along the way, though, which Silvin and Djordje will help you navigate.

I am giving a talk about working in security: How to Work in Cloud Native Security, Demystifying the Security Role. Have you ever wanted to work in security? It is a fascinating field with a real shortage of people, so if you are working in tech or about to start, I will talk about how to get into it. It is actually surprisingly accessible.

Since the last KubeCon, Docker, Microsoft, Amazon, and many others have been working on a new version of Notary, the CNCF project for signing containers. With Steve Lasker from Microsoft and Omar Paul from Amazon, we will cover the current progress and the roadmap in the Intro and Update session.

Finally I will be in the open CNCF meeting and public Q&A, which will be held live, along with Chris Aniszczyk, Liz Rice, Saad Ali, Michelle Noorali, Sheng Liang and Katie Gamanji. Come along and ask questions about the CNCF!

What about Docker Captains?

In addition, don’t miss the talks from the Docker Captains. Lee Calcote is talking about the intricacies of service mesh performance and giving the introduction to the CNCF SIG Network. Adrian Mouat will be speaking at the Cloud Native Security Day on day 0 of the conference, on Image Provenance and Security in Kubernetes.

]]>