sbom – Docker (https://www.docker.com)

Generating SBOMs for Your Image with BuildKit
https://www.docker.com/blog/generate-sboms-with-buildkit/
Tue, 24 Jan 2023
Learn how to use BuildKit to generate SBOMs for your images and packages.

The latest release series of BuildKit, v0.11, introduces support for build-time attestations and SBOMs, allowing publishers to create images with records of how the image was built. This makes it easier for you to answer common questions, like which packages are in the image, where the image was built from, and whether you can reproduce the same results locally.

This new data helps you make informed decisions about the security of the images you consume — without needing to do all the manual work yourself.

In this blog post, we’ll discuss what attestations and SBOMs are, how to build images that contain SBOMs, and how to start analyzing the resulting data!

What are attestations?

An attestation is a declaration that a statement is true. With software, an attestation is a record that specifies a statement about a software artifact. For example, it could include who built it and when, what inputs it was built with, what outputs it produced, etc.

By writing these attestations and distributing them alongside the artifacts themselves, you can see details that might otherwise be tricky to find. To get this kind of information without attestations, you’d have to reverse-engineer how the image was built: locate the source code and perhaps even attempt to reproduce the build yourself.

To provide this valuable information to the end-users of your images, BuildKit v0.11 lets you build these attestations as part of your normal build process. All it takes is adding a few options to your build step.

BuildKit supports attestations in the in-toto format (from the in-toto framework). Currently, the Dockerfile frontend produces two types of attestations that answer two different questions:

  • SBOM (Software Bill of Materials) – An SBOM contains a list of software components inside an image. This will include the names of various packages installed, their version numbers, and any other associated metadata. You can use this to see, at a glance, if an image contains a specific package or determine if an image is vulnerable to specific CVEs.
  • SLSA Provenance – The Provenance of the image describes details of the build process, such as what materials (like images, URLs, and files) were consumed, what build parameters were set, as well as source maps that allow mapping the resulting image back to the Dockerfile that created it. You can use this to analyze how an image was built, determine whether the sources consumed all appear legitimate, and even attempt to rebuild the image yourself.
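
Both kinds of attestations are wrapped in in-toto Statements. As a rough sketch (the field values here are illustrative, not taken from a real image), an SBOM attestation looks something like this, with the SPDX document itself carried in the predicate field:

```json
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "predicateType": "https://spdx.dev/Document",
  "subject": [
    {
      "name": "pkg:docker/myorg/myimage",
      "digest": { "sha256": "<image-digest>" }
    }
  ],
  "predicate": {
    "spdxVersion": "SPDX-2.3",
    "SPDXID": "SPDXRef-DOCUMENT"
  }
}
```

This is why, later in this post, the SPDX content is read out of the `.predicate` path of the exported attestation file.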

Users can also define their own custom attestation types via a custom BuildKit frontend. In this post, we’ll focus on SBOMs and how to use them with Dockerfiles.

Getting the latest release

Building attestations into your images requires the latest releases of both Buildx and BuildKit – you can get the latest versions by updating Docker Desktop to the most recent version.

You can check your version number, and ensure it matches the buildx v0.10 release series:

$ docker buildx version
github.com/docker/buildx 0.10.0 ...

To use the latest release of BuildKit, create a docker-container builder using buildx:

$ docker buildx create --use --name=buildkit-container --driver=docker-container

You can check that the new builder is configured correctly, and ensure it matches the buildkit v0.11 release series:

$ docker buildx inspect | grep -i buildkit
Buildkit:  v0.11.1

If you’re using the docker/setup-buildx-action in GitHub Actions, then you’ll get this all automatically without needing to update.
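
For reference, recent versions of docker/build-push-action expose an sbom input that flips on the same BuildKit feature. A minimal workflow step might look like the following sketch (the image name is a placeholder):

```yaml
- uses: docker/setup-buildx-action@v2
- uses: docker/build-push-action@v4
  with:
    push: true
    tags: myorg/myimage:latest   # placeholder image name
    sbom: true                   # generate and attach an SBOM attestation
```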

With that out of the way, you can move on to building an image containing SBOMs!

Adding SBOMs to your images

You’re now ready to generate an SBOM for your image!

Let’s start with the following Dockerfile to create an nginx web server:

# syntax=docker/dockerfile:1.5

FROM nginx:latest
COPY ./static /usr/share/nginx/html

You can build and push this image, along with its SBOM, in one step:

$ docker buildx build --sbom=true -t <myorg>/<myimage> --push .

That’s all you need! In your build output, you should spot a message about generating the SBOM:

...
=> [linux/amd64] generating sbom using docker.io/docker/buildkit-syft-scanner:stable-1                           	0.2s
...

BuildKit generates SBOMs using scanner plugins. By default, it uses buildkit-syft-scanner, a scanner built on top of Anchore’s Syft open-source project, to do the heavy lifting. If you like, you can use another scanner by specifying the generator= option. 

Here’s how you view the generated SBOM using buildx imagetools:

$ docker buildx imagetools inspect <myorg>/<myimage> --format "{{ json .SBOM.SPDX }}"
{
	"spdxVersion": "SPDX-2.3",
	"dataLicense": "CC0-1.0",
	"SPDXID": "SPDXRef-DOCUMENT",
	"name": "/run/src/core/sbom",
	"documentNamespace": "https://anchore.com/syft/dir/run/src/core/sbom-a589a536-b5fb-49e8-9120-6a12ce988b67",
	"creationInfo": {
	"licenseListVersion": "3.18",
	"creators": [
	"Organization: Anchore, Inc",
	"Tool: syft-v0.65.0",
	"Tool: buildkit-v0.11.0"
	],
	"created": "2023-01-05T16:13:17.47415867Z"
	},
	...

SBOMs also work with the local and tar exporters. When you export with these exporters, instead of attaching the attestations directly to the output image, the attestations are exported as separate files into the output filesystem:

$ docker buildx build --sbom=true -o ./image .
$ ls -lh ./image
-rw-------  1 user user 6.5M Jan 17 14:36 sbom.spdx.json
...

Viewing the SBOM in this case is as simple as cat-ing the result:

$ cat ./image/sbom.spdx.json | jq .predicate
{
	"spdxVersion": "SPDX-2.3",
	"dataLicense": "CC0-1.0",
	"SPDXID": "SPDXRef-DOCUMENT",
	…
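
Once you have the SPDX document as a file, it’s also easy to work with programmatically. Here’s a small Python sketch that lists package names and versions from a parsed SPDX predicate; the sample document is inline for illustration, and in practice you’d load it with `json.load` from `./image/sbom.spdx.json` and take its `predicate` key:

```python
def list_packages(spdx_doc):
    """Return (name, version) pairs from an SPDX document's package list."""
    return [
        (pkg.get("name"), pkg.get("versionInfo"))
        for pkg in spdx_doc.get("packages", [])
    ]

# Inline sample standing in for json.load(open("./image/sbom.spdx.json"))["predicate"]
sample = {
    "spdxVersion": "SPDX-2.3",
    "SPDXID": "SPDXRef-DOCUMENT",
    "packages": [
        {"name": "nginx", "versionInfo": "1.23.3-1~bullseye"},
        {"name": "curl", "versionInfo": "7.74.0-1.3+deb11u3"},
    ],
}

for name, version in sorted(list_packages(sample)):
    print(name, version)
```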

Supplementing SBOMs

Generating SBOMs using a scanner is a good start! But some packages won’t be correctly detected because they’ve been installed in a slightly unconventional way.

If that’s the case, you can still get this information into your SBOMs with a bit of manual interaction.

Let’s suppose you’ve installed foo v1.2.3 into your image by downloading it using curl:

RUN curl https://example.com/releases/foo-v1.2.3-amd64.tar.gz | tar xzf - && \
    mv foo /usr/local/bin/

Software installed this way likely won’t appear in your SBOM unless the SBOM generator you’re using has special support for this binary (for example, Syft has support for detecting certain known binaries).

You can manually generate an SBOM for this piece of software by writing an SPDX snippet to a location of your choice on the image filesystem using a Dockerfile heredoc:

COPY <<"EOT" /usr/local/share/sbom/foo.spdx.json
{
	"spdxVersion": "SPDX-2.3",
	"SPDXID": "SPDXRef-DOCUMENT",
	"name": "foo-v1.2.3",
	...
}
EOT

This SBOM should then be picked up by your SBOM generator and included in the final SBOM for the whole image. This behavior is included out-of-the-box in buildkit-syft-scanner, but may not be part of every generator’s toolkit.

Even more SBOMs!

While the above section is good for scanning a basic image, it might struggle to provide more detailed package and file information. BuildKit can help you scan additional components of your build, including intermediate stages and your build context using the BUILDKIT_SBOM_SCAN_STAGE and BUILDKIT_SBOM_SCAN_CONTEXT arguments respectively.

In the case of multi-stage builds, this allows you to track dependencies from previous stages, even though that software might not appear in your final image.

For example, for a demo C/C++ program, you might have the following Dockerfile:

# syntax=docker/dockerfile:1.5

FROM ubuntu:22.04 AS build
ARG BUILDKIT_SBOM_SCAN_STAGE=true
RUN apt-get update && apt-get install -y git build-essential
WORKDIR /src
RUN git clone https://example.com/myorg/myrepo.git .
RUN make build

FROM scratch
COPY --from=build /src/build/ /

If you just scanned the resulting image, it wouldn’t reveal that the build tools, like Git or GCC (included in the build-essential package), were ever used in the build process! By integrating SBOM scanning into your build using the BUILDKIT_SBOM_SCAN_STAGE build argument, you can get much richer information that would otherwise have been completely lost.
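
The BUILDKIT_SBOM_SCAN_CONTEXT argument mentioned above works the same way for the build context. As a sketch (the image name is a placeholder), it can be passed straight on the command line as a build argument, so no Dockerfile changes are needed:

```shell
docker buildx build --sbom=true \
  --build-arg BUILDKIT_SBOM_SCAN_CONTEXT=true \
  -t myorg/myimage --push .
```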

You can access these additionally generated SBOM documents in imagetools as well:

$ docker buildx imagetools inspect <myorg>/<myimage> --format "{{ range .SBOM.AdditionalSPDXs }}{{ json . }}{{ end }}"
{
	"spdxVersion": "SPDX-2.3",
	"SPDXID": "SPDXRef-DOCUMENT",
	...
}
{
	"spdxVersion": "SPDX-2.3",
	"SPDXID": "SPDXRef-DOCUMENT",
	...
}
...

For the local and tar exporters, these will appear as separate files in your output directory:

$ docker buildx build --sbom=true -o ./image .
$ ls -lh ./image
-rw------- 1 user user 4.3M Jan 17 14:40 sbom-build.spdx.json
-rw------- 1 user user  877 Jan 17 14:40 sbom.spdx.json
...

Analyzing images

Now that you’re publishing images containing SBOMs, it’s important to find a way to analyze them to take advantage of this additional data.

As mentioned above, you can extract the attached SBOM attestation using the imagetools subcommand:

$ docker buildx imagetools inspect <myorg>/<myimage> --format "{{json .SBOM.SPDX}}"
{
	"spdxVersion": "SPDX-2.3",
	"dataLicense": "CC0-1.0",
	"SPDXID": "SPDXRef-DOCUMENT",
	...

If your target image is built for multiple architectures using the --platform flag, then you’ll need a slightly different syntax to extract the SBOM attestation:

$ docker buildx imagetools inspect <myorg>/<myimage> --format '{{ json (index .SBOM "linux/amd64").SPDX }}'
{
	"spdxVersion": "SPDX-2.3",
	"dataLicense": "CC0-1.0",
	"SPDXID": "SPDXRef-DOCUMENT",
	...

Now suppose you want to list all of the packages, and their versions, inside an image. You can modify the value passed to the --format flag to be a Go template that lists the packages:

$ docker buildx imagetools inspect <myorg>/<myimage> --format '{{ range .SBOM.SPDX.packages }}{{ println .name .versionInfo }}{{ end }}' | sort
adduser 3.118
apt 2.2.4
base-files 11.1+deb11u6
base-passwd 3.5.51
bash 5.1-2+deb11u1
bsdutils 1:2.36.1-8+deb11u1
ca-certificates 20210119
coreutils 8.32-4+b1
curl 7.74.0-1.3+deb11u3
...

Alternatively, you might want to get the version information for a piece of software that you know is installed:

$ docker buildx imagetools inspect <myorg>/<myimage> --format '{{ range .SBOM.SPDX.packages }}{{ if eq .name "nginx" }}{{ println .versionInfo }}{{ end }}{{ end }}'
1.23.3-1~bullseye

You can even take the whole SBOM and scan it for CVEs using an SBOM-aware tool like Anchore’s Grype:

$ docker buildx imagetools inspect <myorg>/<myimage> --format '{{ json .SBOM.SPDX }}' | grype
NAME          	INSTALLED            	FIXED-IN 	TYPE  VULNERABILITY 	SEVERITY   
apt           	2.2.4                             	deb   CVE-2011-3374 	Negligible  
bash          	5.1-2+deb11u1        	(won't fix) deb   CVE-2022-3715 	 
...
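
If you ask Grype for JSON output instead (grype -o json), you can post-process the results yourself. The sketch below counts matches by severity, assuming Grype’s matches[].vulnerability.severity report shape; the sample data is inline and illustrative:

```python
from collections import Counter

def count_by_severity(report):
    """Tally vulnerability matches by severity from a grype-style JSON report."""
    return Counter(
        match["vulnerability"].get("severity", "Unknown")
        for match in report.get("matches", [])
    )

# Inline sample mimicking the assumed `grype -o json` output structure
sample = {
    "matches": [
        {"vulnerability": {"id": "CVE-2011-3374", "severity": "Negligible"}},
        {"vulnerability": {"id": "CVE-2022-3715", "severity": "High"}},
        {"vulnerability": {"id": "CVE-2023-0001", "severity": "High"}},
    ],
}

for severity, n in sorted(count_by_severity(sample).items()):
    print(severity, n)
```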

These operations should complete quickly! Because the SBOM was already generated at build time, you’re just querying existing data from Docker Hub instead of generating it from scratch every time.

Going further

In this post, we’ve only covered the basics of getting started with BuildKit and SBOMs. You can find out more about the topics we’ve covered on docs.docker.com.

You can also read about the other features that shipped in the BuildKit v0.11 release.

In Case You Missed It: Docker Community All-Hands
https://www.docker.com/blog/docker-community-all-hands-6-highlights/
Tue, 06 Sep 2022

That’s a wrap! Community All-Hands has officially come to a close. Our sixth All-Hands featured over 35 talks across 10 channels — with topics ranging from “getting started with Docker” to running machine learning on AI hardware accelerators.

As always, every channel was buzzing with activity. Your willingness to jump in, ask questions, and help others is what the Docker community’s all about. And we loved having the chance to chat with everyone directly! 

Couldn’t attend our recent Community All-Hands event? We’ll cover some important announcements, interesting presentations, and more that you missed.

Docker CPO looks back at a year of developer obsession

Headlining Community All-Hands were some important announcements on the main stage, kicked off by Jake Levirne, our Head of Products. This past year, our engineers focused on improving developer experiences across every product. Integrated features like Dev Environments, Docker Extensions, SBOM, and Compose V2 have helped streamline workflows — along with numerous usability and OS-specific improvements. 

Over the last year, the Docker engineering team:

  • Released 24 new features
  • Made 37,000 internal commits
  • Curated 52 extensions and counting within Docker Desktop and Docker Hub
  • Hosted over eight million Docker Desktop downloads

We couldn’t have made these improvements without your feedback. Keep your votes, comments, and messages coming — they’re essential for helping us ship the features you need. Keep an eye out for continued updates about UX enhancements, Trusted Open Source, and user-centric partnerships.

How to use SBOMs to support multiple layers

Following Jake, our presenters dove deeper into the technical depths. Next up was a session on viewing images through layered software bills of materials (SBOMs), led by Docker Principal Software Engineer Jim Clark. 

SBOMs are extremely helpful for knowing what’s in your images and apps. Where it gets complex is that many images stem from base images, and even those base images can have their own base images, which makes full image transparency difficult. Multi-layer images have historically been harder to analyze. To get a full picture of a multi-layer image, you’ll need to know things like:

  • Which packages are included
  • How those packages are distributed between layers
  • How image rebuilds can impact packages
  • If security fixes are available for individual packages

Jim shared that it’s now possible to gather this information. While this feature is still under development, users will soon see layer sizes, total packages per layer, and be able to view complete Dockerfiles on GitHub. 

And as a next step, the team is also focused on understanding shared content and tracking public data. This is another step toward building developer trust, and knowing exactly what’s going into your projects.

Docker Desktop meets multi-platform image support via containerd

Rounding out our major announcements was Djordje Lukic, Staff Software Engineer, with a session on containerd image management. Containerd has been our container runtime since 2016. Since then, we’ve extended its integration within Docker Desktop and Docker Engine.

Containerd migration offers some key benefits: 

  • There’s less code to maintain
  • We can ship features more rapidly and shorten release cycles
  • It’s easier to improve our developer tooling
  • We can bring multi-platform support to Docker, while following the Open Container Initiative (OCI) specification more closely and supporting different snapshotters

Leveraging containerd more heavily means we can consolidate portions of the Docker Daemon. Check out our containerd announcement blog to learn more. 

Showcasing attendees’ favorite talks

Every Community All-Hands channel hosted unique sets of topics, while each session highlighted relationships between Docker and today’s top technologies. Here are some popular talks from Community All-Hands and why they’re worth watching. 

Developing Go Apps with Docker

From the “Best Practices” channel.

Go (or Golang) is a language well-loved and highly sought after by professional developers. We support it as a core language and maintain a Go language-specific use guide within our docs. 

Follow along with Muhammad Quanit as he explores containerized Go applications. Muhammad covers best practices, the importance of multi-stage builds, and other tips for optimizing your Dockerfiles. By using a Go web server, he demonstrates the “Dockerization” process and the usefulness of IDE extensions.

Integration Testing Your Legacy Java Microservice with docker-maven-plugin

From the “Demos” channel.

Enterprises and development teams often maintain Java code bases upwards of 10 years old. While these services may still be functional, it’s been challenging to bind automated testing to each individual microservice repository. Docker Compose does enable batch testing, but that finer, per-repository granularity is still needed.

Join Terry Brady as he shows you how to run JUnit microservices tests, automated maven testing, code coverage calculation, and even test-resource management. Don’t worry about rewriting your legacy code. Instead, learn how integration testing and dedicated test containers help make life easier. 

How Does Docker Run Machine Learning on Specialized AI Hardware Accelerators

From the “Cutting Edge” channel.

Currently, 35% of companies report using AI in some fashion, while another 42% of respondents say they’re considering it. Machine learning (ML) — a subset of AI — has been critical to creating predictive models, extracting value from big data, and automating many tedious processes. 

Shashank Prasanna outlines just how important specialized hardware is to powering these algorithms. And while ML gains steam, companies are unveiling numerous supporting chipsets and GPUs. How does Docker handle these accelerators? Follow along as Shashank highlights Docker’s capabilities within multi-processor systems, and how these differ from traditional, single-CPU systems from an AI standpoint.

But wait, there’s more! 

The above talks are just a small sample of our learning sessions. Swing by our Docker YouTube channel to browse through our entire content library. 

You can also check out playlists from each event channel: 

  • Mainstage – showcases of the community and Docker’s latest developments 
  • Best Practices – tips to get the most from your Docker applications
  • Demos – in-depth presentations that tackle unique use cases, step by step
  • Security – best practices for building stronger, attack-resistant containers and applications
  • Extensions – the basics of building extensions while demonstrating their usefulness in different scenarios
  • Cutting Edge – talks about how Docker and today’s leading technologies unite
  • International Waters – multilingual tech talks and panel discussions on trends
  • Open Source – panels on the Docker Sponsored Open-Source Program and the value of open source
  • Unconference – informal talks on getting started with Docker and Docker experiences

Thank you and see you next time!

From key Docker announcements, to technical talks, to our buzzworthy Community Awards ceremony, we had an absolute blast with you at Community All-Hands. Also, a huge special thanks to DJ Alessandro Vozza for keeping the music and excitement going!

And don’t forget to download the latest Docker Desktop to check out the releases and try out any new tricks you’ve learned.

See you at our next All-Hands event, and thank you for making this community stronger. Happy developing!

Learn about our recent releases

Capturing Build Information with BuildKit
https://www.docker.com/blog/capturing-build-information-buildkit/
Thu, 14 Apr 2022
Although every Docker image has a manifest — a JSON collection of tags, digital signatures, and configuration details — Docker images can still lack some basic information at build time. Those missing details could be useful to developers. So, how do we fill in the blanks?

In this guide, we’ll highlight a tentpole feature of BuildKit v0.10: the generation of structured build information from build metadata. This lets you see all sources (images, Git repositories, and HTTP URLs) and configurations passed to your build. This information is also embeddable within the image config.

We’ll discuss how to tackle this process, and share some best practices along the way.

Getting Started

While this feature is automatically activated upon updating to BuildKit v0.10, we also recommend using BuildKit’s Dockerfile v1.4 to reliably capture original image names. You can do so by adding the following syntax to your Dockerfile: # syntax=docker/dockerfile:1.4.

Additionally, we recommend creating a new docker-container builder with Buildx that uses the latest stable version of BuildKit. Enter the following CLI command:

$ docker buildx create --use --bootstrap --name mybuilder

Note: to return to the default builder, run the docker buildx use default command.


Next, let’s create a basic Dockerfile:

# syntax=docker/dockerfile:1.4

FROM busybox AS base
ARG foo=baz
RUN echo bar > /foo

FROM alpine:3.15 AS build
COPY --from=base /foo /
RUN echo baz > /bar

FROM scratch
COPY --from=build /bar /
ADD https://raw.githubusercontent.com/moby/moby/master/README.md /


We’ll build this image using Buildx v0.8.1 — which comes packaged within the latest version of Docker Desktop (v4.7). The latest Buildx version lets you inspect and use any build information that’s been generated:

$ docker buildx build --build-arg foo=bar --metadata-file metadata.json .

Storing Build Metadata as a File

We’re using the --metadata-file flag, which writes the build result metadata to the metadata.json file, including the digest of your resulting image and the new containerimage.buildinfo key.

The --metadata-file flag improves upon the previous --iidfile flag, which would only capture the resulting image ID. The following metadata output shows the containerimage.buildinfo key in practice:

{
  "containerimage.buildinfo": {
    "frontend": "dockerfile.v0",
    "attrs": {
      "build-arg:foo": "bar",
      "filename": "Dockerfile",
      "source": "docker/dockerfile:1.4"
    },
    "sources": [
      {
        "type": "docker-image",
        "ref": "docker.io/library/alpine:3.15",
        "pin": "sha256:d6d0a0eb4d40ef96f2310ead734848b9c819bb97c9d846385c4aca1767186cd4"
      },
      {
        "type": "docker-image",
        "ref": "docker.io/library/busybox:latest",
        "pin": "sha256:caa382c432891547782ce7140fb3b7304613d3b0438834dce1cad68896ab110a"
      },
      {
        "type": "http",
        "ref": "https://raw.githubusercontent.com/moby/moby/master/README.md",
        "pin": "sha256:419455202b0ef97e480d7f8199b26a721a417818bc0e2d106975f74323f25e6c"
      }
    ]
  },
  "containerimage.config.digest": "sha256:cd82085d327d4b41f86212fc372f75682f131b5ce5c0c918dabaa0fbf04ec53f",
  "containerimage.descriptor": {
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "digest": "sha256:6afb8217371adb4b75bd7767d711da74ba226ed868fa5560e01a8961ab150ccb",
    "size": 732
  },
  "containerimage.digest": "sha256:6afb8217371adb4b75bd7767d711da74ba226ed868fa5560e01a8961ab150ccb"
}


What’s noteworthy about the structure of this result? The new containerimage.buildinfo key now contains your build information. You’ll also see a host of important field names:

  • frontend defines the BuildKit frontend responsible for the build (we’re building from a Dockerfile above).
  • attrs encompasses the build configuration parameters (e.g. when typing --build-arg).
  • sources defines build sources.

Additionally, each sources entry describes an external source that your Dockerfile used while building the result. Within each entry, a few fields are worth highlighting:

  • type can be docker-image for container images referenced with FROM, git for Git contexts, or http for HTTP URL contexts and remote URLs used by ADD commands.
  • ref is the reference defined in your Dockerfile.
  • pin records the digest of the dependency version that was actually consumed.

Remember that sources are captured for all of your build stages, and not just for the last stage’s base image that was exported.
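
One practical use of this data is a quick supply-chain sanity check: confirming that every build source was resolved to a digest. Here’s a minimal Python sketch; the sample mirrors the containerimage.buildinfo structure shown above, and in practice you’d json.load the metadata.json file and read that key:

```python
def unpinned_sources(buildinfo):
    """Return refs of build sources that were not resolved to a sha256 digest."""
    return [
        src["ref"]
        for src in buildinfo.get("sources", [])
        if not src.get("pin", "").startswith("sha256:")
    ]

# Inline sample mirroring the `containerimage.buildinfo` structure above
sample = {
    "frontend": "dockerfile.v0",
    "sources": [
        {"type": "docker-image",
         "ref": "docker.io/library/alpine:3.15",
         "pin": "sha256:d6d0a0eb4d40ef96f2310ead734848b9c819bb97c9d846385c4aca1767186cd4"},
        {"type": "http",
         "ref": "https://raw.githubusercontent.com/moby/moby/master/README.md",
         "pin": ""},
    ],
}

print(unpinned_sources(sample))
```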

Storing Build Metadata as Part of Your Image

Your metadata file isn’t the only transport available. BuildKit also embeds build information within the image config as your image is pushed. This makes your build information portable. Here’s what that push command looks like:

$ docker buildx build --build-arg foo=bar --tag crazymax/buildinfo:latest --push .

You can check the build information for any existing image — while on the latest Buildx version — using the imagetools inspect command:

$ docker buildx imagetools inspect crazymax/buildinfo:latest --format "{{json .BuildInfo}}"

{
  "frontend": "dockerfile.v0",
  "sources": [
    {
      "type": "docker-image",
      "ref": "docker.io/library/alpine:3.15",
      "pin": "sha256:d6d0a0eb4d40ef96f2310ead734848b9c819bb97c9d846385c4aca1767186cd4"
    },
    {
      "type": "docker-image",
      "ref": "docker.io/library/busybox:latest",
      "pin": "sha256:caa382c432891547782ce7140fb3b7304613d3b0438834dce1cad68896ab110a"
    },
    {
      "type": "http",
      "ref": "https://raw.githubusercontent.com/moby/moby/master/README.md",
      "pin": "sha256:419455202b0ef97e480d7f8199b26a721a417818bc0e2d106975f74323f25e6c"
    }
  ]
}


Unlike with your metadata-file results, build attributes aren’t automatically available within the image config. Attribute inclusion is currently an opt-in configuration to avoid troublesome leaks. Developers occasionally use these attributes as secrets, which isn’t ideal from a security standpoint. We recommend using --mount=type=secret instead.

To add build attributes to your embedded image config, use the image output attribute, buildinfo-attrs:

$ docker buildx build \
--build-arg foo=bar \
--output=type=image,name=crazymax/buildinfo:latest,push=true,buildinfo-attrs=true .


Alternatively, you can use the Buildx BUILDKIT_INLINE_BUILDINFO_ATTRS build argument:

$ docker buildx build \
--build-arg BUILDKIT_INLINE_BUILDINFO_ATTRS=1 \
--build-arg foo=bar \
--tag crazymax/buildinfo:latest \
--push .

That’s it! You may now review any newly-generated build dependencies stemming from your image builds.

What’s next?

Transparency is important. Always aim to make your Docker images more self-descriptive, decipherable, reproducible, and visible. In this case, we’ve made it easier to uncover any inputs used while building an image. Additionally, you might compare images updated with security patches to pinned versions of your build sources. That lets you know if your image is up-to-date or safe to use.

This is one important step in our Secure Software Supply Chain (SSSC) journey, and towards better build reproducibility. More information about reproducibility is also available within our BuildKit repo.

However, we want to go even further. Docker is bringing SBOMs to all container images via BuildKit. We want to get our development community involved in this effort to bolster BuildKit — and take that next major step towards higher image-level transparency.

Are you interested in learning more about BuildKit? Our latest BuildKit release has shipped with other useful features — like those showcased in Tonis Tiigi’s blog post. If you’ve been clamoring for improved remote cache support and rapid image rebase, it’s well worth a read!

Topic Spotlight: Here’s What You Can Expect at DockerCon 2022
https://www.docker.com/blog/what-you-can-expect-at-dockercon-2022/
Wed, 13 Apr 2022

With less than one month to go before DockerCon 2022, we’re excited to unveil one of our most immersive agendas yet. We’re also planning to offer multiple tracks throughout the day — allowing you to jump between topics that grab your attention.

DockerCon 2022 is hosted virtually and will be streamed live. If you haven’t registered, we encourage you to join us May 9th and 10th for two free days of concentrated learning, skill sharpening, collaboration, and engagement with fellow developers.


DockerCon brings together developers, Docker Community Leaders, Docker Captains, and our partners to boost your understanding of cloud-native development. But, we’re also excited to see how you’ve incorporated Docker into your projects. We want to help every developer discover more about Docker, learn how to conquer common development challenges, and excel within their respective roles.

That said, what’s in store? Follow along as we highlight what’s new this year and showcase a few can’t-miss topics.

What’s new at DockerCon 2022

We’re keeping things fresh and interesting by having our presenters connect with the audience and participate in live chats throughout the event. Accordingly, you’ll have the opportunity to chat with your favorite presenters. Here’s what you can look forward to (including some cool announcements):

Day-Zero Pre-Event Workshop

May 9th — from 7:00 a.m. to 10:00 a.m. PDT, and later from 4:00 p.m. to 7:00 p.m. PDT.

We want developers of all experience levels to get up and running with Docker as quickly as possible. Our instructor-led course will outline how to build, share, and run your applications using Docker containers. However, even developers with some Docker experience may learn some useful tips. Through hands-on instruction, you’ll discover that harnessing Docker is simple, approachable, and enjoyable.

We’re also introducing early learner demo content. Stay tuned for useful code samples, repo access, and even information on useful extensions.

Engaging Sessions

DockerCon 2022 opens with a fun, pre-show countdown filled with games and challenges — plus a live keynote. You’ll then be free to explore each session:

  • Mainstage – Live stream with engaging, center-stage talks on industry trends, new features, and team-building, plus panel sessions — with live hosts to guide you through your DockerCon experience.
  • Discover – Development tips and ways to incorporate tech stacks with Docker
  • Learn – Walkthroughs for deploying production environments, discussing best practices, and harnessing different programming languages
  • Excel – Detailed guidance on using containers, Docker, and applying workflows to emerging use cases
  • Blackbelt – Code-heavy, demo-driven walkthroughs of advanced technologies, Docker containerization, and building integrated application environments

DockerCon 2022 will also include a number of demos and chats about industry trends. You’ll get plenty of news and keep current on today’s exciting tech developments.

Each session spans 15 to 60 minutes. Feel free to move between virtual rooms as often as you’d like! You can view our complete agenda here, and all sessions are tagged for language and topic.

Exciting Announcements and Highlights

While DockerCon 2022 will feature a diverse topic lineup — all of which are immensely valuable to the community — we’d like to briefly showcase some topics that we find particularly noteworthy.

Introducing Docker SBOM

While we’re quick to implement containers and software tools to accelerate application development, we often don’t know each package’s contents in great detail. That can be a problem when using unverified, third-party images — or in any instance where component transparency is desirable.

Accordingly, we’ll be presenting our Software Bill of Materials (SBOM): a new way to generate lists of all container image contents. This is possible with a simple CLI command. The SBOM is useful from a security standpoint, yet it also allows you to better understand how your applications come together. Follow along as we show you how to summon your specific Docker SBOM, and why that information is so useful.

Using Python and Docker for Data Science and Scientific Computing

If you’re someone with a strong interest in machine learning, data science, and data-centric computing, you won’t want to miss this one. Researchers and professionals who work with big data daily have to perform a number of resource-intensive tasks. Data analysis is taxing, and restricting those processes to hardware that can only be scaled vertically is a massive challenge.

Scalable, containerized applications can solve this problem. If you want to learn more about optimizing your image builds, bolstering security, and improving Docker-related workflows, we encourage you to swing by this session.

From Legacy to Kubernetes

For both newer and experienced developers, setting up Kubernetes effectively can be a challenge. Configuration takes time and can feel like a chore. You have to memorize plenty of components and understand how each impacts your implementation over time.

This presentation will show you how to spin up Kubernetes using Docker Desktop — our GUI-based platform for building, shipping, and deploying containerized applications. Built atop Docker Engine, one standout of Docker Desktop’s feature set is the ability to create a single, local Kubernetes cluster in one click.

Follow along as we dive into the basics of Kubernetes, moving legacy apps into containers, and implementing security best practices.

Register Today!

We couldn’t be more thrilled to kick off DockerCon 2022. It’s our largest event of the year! More importantly, however, DockerCon allows you to meet and interact with tens of thousands of other developers. There’s plenty of conversation to be had, and we guarantee that you’ll learn a thing or two along the way. We’ll feature informative sessions and casual chats, and even unveil a few surprises!

Registering for DockerCon is quick, easy, and free. Please visit our registration page to sign up and learn more about the awesome things coming at DockerCon 2022!

]]>
3 Software Developer Trends You’ll Hear About at DockerCon https://www.docker.com/blog/3-software-developer-trends-youll-hear-about-at-dockercon/ Thu, 07 Apr 2022 18:36:45 +0000 https://www.docker.com/?p=33008

Join us for DockerCon 2022 on Tuesday, May 10. DockerCon is a free, one-day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn how to go from code to cloud fast and how to solve your development challenges, DockerCon 2022 offers engaging live content to help you build, share, and run your applications. Register today at https://docker.events.cube365.net/dockercon/2022

Register for DockerCon 2022

The world of software development moves fast and is always changing. It’s why continuous learning is so important — and why it’s essential to invest the time to keep up with the latest techniques, technologies, and, sadly, threats.

As a developer, it’s not enough to come up with new, innovative ways to accomplish your goals. Those methods also need to be strong and reliable enough that users can trust them.

It’s one of the many reasons why we hold DockerCon every year. We want everyone in the developer community to have the tools needed to take on any new challenge easily, so you can keep innovating and redefining what’s possible.

This year is no exception. With DockerCon right around the corner, let’s take a look at three big trends in the application development world — and get a sneak peek into how DockerCon will help you get the tools you need to tackle them!

More complexity, more quality, and less time

Developers have always been under pressure to deliver better-quality software, faster. That’s not new. But today’s application development layers that drive for quality and speed with complexity and continuous delivery. The result? Expectations that have never been higher.

Every day brings increasing demand for feature-rich, consumer-grade experiences that are, at the same time, secure and resilient by design. And developers are rising to meet this demand with a vast array of tools, services, and technologies — many of them open source and easy to consume right off the shelf.

All that complexity ends up creating a double-edged sword. You have more freedom and more tools at hand than ever before, enabling you to find just the right cog for your application machine. But at the same time, you have to spend time sourcing, comparing, evaluating, implementing, and managing these tools, which takes time away from innovation. (RedMonk calls it the developer experience gap. They recently sat down to chat about it with Docker CTO Justin Cormack.)

It’s a tough problem, but one you can address by finding ways to simplify workflows and, as a result, increase productivity. If you’re running into this, then be sure to check out our deep dive into the upcoming Docker Extensions feature at DockerCon. You might also want to check out the session about quickly creating production-ready APIs or the session about reducing the complexity of integration tests. There are also sessions around best practices and lightning talks with cool hacks for using Docker. Make sure you register so you can see the full agenda when it goes live!

The evolution of cloud-native development

It’s an incredibly exciting time for cloud-native application development. And if you’re reading this blog, then you probably know how container technology has fundamentally changed the way we build applications.

But there are also new, exciting advancements on the horizon, ones that can leverage containers and other technology. Trends like low-code and no-code development are enabling lines of business to collaborate with developers much more directly in the design of cloud-native solutions, while other advancements like WebAssembly, composable applications, and FaaS/serverless are making applications more complex, fluid, and ephemeral. Even infrastructure changes like edge computing are decreasing response times and increasing flexibility in applications.

Trends like these are empowering developers to do great things, but you need development environments and workflows that can give you the flexibility you need to take advantage of these advancements.

At DockerCon, you’ll find talks on using Docker with WebAssembly, AWS Lambda, microservices, IoT, and more to help give you flexibility and choice in your application architecture.

Software supply chains are getting more complex

Open source is everywhere these days. It’s become a de facto standard for development, unlocking new levels of application capabilities — and the potential for introducing vulnerabilities into your software supply chain.

These can become big problems with the potential to affect entire ecosystems. You need to be able to know exactly what’s in your software and be able to trust the components that you’re building into it. This also includes the many dependencies that make up your application.

It’s why we recently released a new SBOM command in the Docker CLI as part of an open source collaboration with Anchore using their Syft project. It’s also why we’re dedicated to helping you secure your software supply chain.

Make sure to join us at DockerCon for sessions about these and other software supply chain advancements. In addition to a session specifically about the SBOM CLI command, there are also sessions about protecting your systems from vulnerabilities, using trusted content from Docker Official Images or Docker Verified Publishers, and more.

Join the conversation at DockerCon 2022

All this is just a taste of what’s to come at DockerCon 2022 on Tuesday, May 10th. The full agenda will be released next week, so make sure you register to get notified when it goes live!

Also, if you’re new to Docker or containerized applications (or just want a refresher), be sure to sign up for the 3-hour Getting Started with Docker workshop on Monday, May 9th led by Docker Sr. Developer Advocate Shy Ruparel.

Come hang out with the Docker community and learn how to navigate these trends so you can spend more time innovating and less time on everything else.

]]>
Announcing Docker SBOM: A step towards more visibility into Docker images https://www.docker.com/blog/announcing-docker-sbom-a-step-towards-more-visibility-into-docker-images/ Thu, 07 Apr 2022 15:00:26 +0000 https://www.docker.com/?p=33004 Today, Docker takes its first step in making what is inside your container images more visible so that you can better secure your software supply chain. Included in Docker Desktop 4.7.0 is a new, experimental docker sbom CLI command that displays the SBOM (Software Bill Of Materials) of any Docker image. It will also be included in our Linux packages in an upcoming release. The functionality was developed as an open source collaboration with Anchore using their Syft project.

As I wrote in my blog post last week, at Docker our priorities are performance, trust, and great experiences. This work improves trust in the supply chain by providing SBOMs to consumers of software, and it improves the developer experience by making container images more transparent, so you can easily see what is inside of them. This command is just a first step that Docker is taking to make container images more self-descriptive. We believe that the best time to determine and record what is in a container image is when you are putting the image together with docker build. To enable this, we are working on making it easy for partners and the community to add SBOM functionality to docker build using BuildKit’s extensibility.

As this information is generated at build time, we believe that it should be included as part of the image artifact. This means that if you move images between registries (or even into air-gapped environments), you should still be able to read the SBOM and other image build metadata off the image.

We’re looking to collaborate with partners and those in the community on our SBOM work in BuildKit. Take a look at our PoC and leave feedback here.

What is an SBOM?

A Software Bill Of Materials (SBOM) is analogous to a packing list for a shipment; it’s all the components that make up the software, or were used to build it. For container images, this includes the operating system packages that are installed (e.g.: ca-certificates) along with language specific packages that the software depends on (e.g.: log4j). The SBOM could include only some of this information or even more details, like the versions of components and where they came from.

SBOMs are sometimes required by governments or other software consumers who are trying to improve their supply chain security. This is because knowing what is inside your software gives you confidence that it is safe to use and can be useful in understanding impact when a vulnerability is made public.
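To make the idea concrete, here is a small, purely illustrative fragment in the SPDX tag-value style; the package names, versions, and suppliers are examples for this sketch, not output from scanning a real image:

```shell
# Illustrative SPDX-style tag-value records: one name/version/supplier
# triple per component. Values are examples, not from a real image scan.
cat <<'EOF' > sbom-fragment.spdx
PackageName: ca-certificates
PackageVersion: 20210119
PackageSupplier: Organization: Debian
PackageName: log4j-core
PackageVersion: 2.17.1
PackageSupplier: Organization: Apache Software Foundation
EOF
cat sbom-fragment.spdx
```

Each component gets at least a name and a version; richer SBOMs also record download locations, licenses, and checksums for every entry.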

Using the container image SBOM to check for a vulnerability

Let’s take a quick look at what the docker sbom command can do to help when a vulnerability like log4shell is made public. When a vulnerability like this appears, it’s crucial that you can quickly determine if your software is impacted. We’ll use the neo4j:4.4.5 Docker Official Image. Just running docker sbom neo4j:4.4.5 outputs a tabulated form of the SBOM:

$ docker sbom neo4j:4.4.5
Syft v0.42.2
 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged packages      [385 packages]

NAME                      VERSION                        TYPE
...
bsdutils                  1:2.36.1-8+deb11u1             deb
ca-certificates           20210119                       deb
...
log4j-api                 2.17.1                         java-archive
log4j-core                2.17.1                         java-archive
...

Note that the output includes not only the Debian packages that have been installed inside the image but also the Java libraries used by the application. Getting this information reliably and with minimal effort allows you to respond promptly and reduces the chance that you will be breached. In the above example, we can see that Neo4j uses version 2.17.1 of the log4j-core library, which means that it is not affected by log4shell.

Without docker sbom or another SBOM scanning tool, you would need to check your application’s source code to see which version of log4j-core you are using. When you have several applications or services deployed and multiple versions of them, this can be difficult.
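Because the table output is plain text, even simple shell tools can answer "which log4j do I ship?". Below is a sketch; the SBOM table is pasted in as a literal string so the pipeline runs without Docker, but in practice you would pipe `docker sbom <image>` straight into the same grep/awk:

```shell
# Stand-in for real output so this example runs without Docker installed;
# in practice: docker sbom neo4j:4.4.5 | grep '^log4j-core' | awk '{print $2}'
sbom_table='bsdutils                  1:2.36.1-8+deb11u1             deb
ca-certificates           20210119                       deb
log4j-core                2.17.1                         java-archive'

# Extract the installed log4j-core version from the NAME/VERSION/TYPE table
echo "$sbom_table" | grep '^log4j-core' | awk '{print $2}'
# prints: 2.17.1
```

Run the same pipeline across every image you deploy and you have a quick, scriptable answer to "are we exposed?" when the next disclosure lands.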

In addition to outputting the SBOM as a table, the docker sbom command has options for emitting it in the standard SPDX and CycloneDX formats, as well as the GitHub and native Syft formats.
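A machine-readable format is what makes the SBOM scriptable. The commented commands below show hypothetical usage of the format option (flag values are from the experimental release and may change; check `docker sbom --help`), and the runnable part uses a tiny hand-written CycloneDX-style document as a stand-in so it works without Docker:

```shell
# With Docker Desktop 4.7+ you would produce the real thing, e.g.:
#   docker sbom --format spdx-json neo4j:4.4.5 > neo4j.spdx.json
#   docker sbom --format cyclonedx-json neo4j:4.4.5 > neo4j.cdx.json
# (flag values per the experimental release; see `docker sbom --help`)

# Stand-in CycloneDX-style document so this example runs without Docker:
cat <<'EOF' > sbom.cdx.json
{"components": [
  {"name": "log4j-core", "version": "2.17.1"},
  {"name": "ca-certificates", "version": "20210119"}
]}
EOF

# Count the components recorded in the SBOM
grep -c '"name"' sbom.cdx.json
# prints: 2
```

Once the SBOM is in a standard JSON format, the same file can feed vulnerability scanners, license auditors, or your own scripts.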

We are sharing the docker sbom functionality early, as an experimental command, with the intention of getting feedback from the community on the direction that we’re going. We’d like to know about your use cases and any other feedback that you have. You can leave it on the command’s repo.

What’s next?

We’d love to collaborate with partners and the community on bringing SBOMs to all container images through BuildKit so please hack on our example and leave feedback on our RFC. Please also give the experimental docker sbom command a try and leave us any feedback that you have. You can also read more about the docker sbom collaboration with Anchore on their blog.

DockerCon 2022

Join us for DockerCon 2022 on Tuesday, May 10. DockerCon is a free, one-day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn how to go from code to cloud fast and how to solve your development challenges, DockerCon 2022 offers engaging live content to help you build, share, and run your applications. Register today at https://www.docker.com/dockercon/

]]>