How to Use the Node Docker Official Image

Topping Stack Overflow’s 2022 list of most popular web frameworks and technologies, Node.js continues to grow as a critical MERN stack component. And since Node applications are written in JavaScript — the world’s leading programming language — many developers will feel right at home using it. We introduced the Node Docker Official Image (DOI) due to Node.js’ popularity and to solve some common development challenges.

The Node.js Foundation describes Node as “an open-source, cross-platform JavaScript runtime environment.” Developers use it to create performant, scalable server and networking applications. Despite Node’s advantages, building and deploying cross-platform services can be challenging with traditional workflows.

Conversely, the Node Docker Official Image accelerates and simplifies your development processes while allowing additional configuration. You can deploy containerized Node applications in minutes. Throughout this guide, we’ll discuss the Node Official Image, how to use it, and some valuable best practices. 

What is the Node Docker Official Image?

The Node Docker Official Image contains all source code, core dependencies, tools, and libraries your application needs to work correctly. 

This image supports multiple CPU architectures like amd64, arm32v6, arm32v7, arm64v8, ppc64le, and s390x. You can also choose between multiple tags (or image versions) for any project. Choosing a pinned version like node:19.0.0-slim locks you into a stable, streamlined version of Node.js.

Node.js use cases

Node.js lets developers write server-side code in JavaScript. The runtime environment then transforms this JavaScript into hardware-friendly machine code. As a result, the CPU can process these low-level instructions. 

Node is event-driven (responding to events like user actions), non-blocking, and known for being lightweight while simultaneously handling numerous operations. As a result, you can use the Node DOI to create the following:

  • Web server applications
  • Networking applications

Node works well here because it supports HTTP requests and socket connections. An asynchronous I/O library lets Node containers read and write various system files that support applications. 
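
For instance, here’s a minimal, dependency-free sketch of the kind of web server Node handles well (the file name is illustrative, and port 8888 matches the Compose example later in this guide):

// server.js — a minimal Node web server with no external dependencies (illustrative)
const http = require("http");

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from containerized Node!\n");
});

// Port 8888 mirrors the port mapping used in the Compose file below
server.listen(8888, () => console.log("Listening on port 8888"));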

You could use the Node DOI to build streaming apps, single-page applications, chat apps, to-do list apps, and microservices. Or — if you’re like Community All-Hands’ Kathleen Juell — you could use Node.js to help serve static content. Containerized Node will shine in any scenario dictated by numerous client-server requests. 

Docker Captain Bret Fisher also offered his thoughts on Dockerized Node.js during DockerCon 2022. He discussed best practices for managing Node.js projects while diving into optimization. 

Lastly, we also maintain some Node sample applications within our GitHub Awesome Compose library. You can learn to use Node with different databases or even incorporate an NGINX proxy. 

About Docker Official Images

We’ve curated the Node Docker Official Image as one of many core container images on Docker Hub. The Node.js community maintains this image alongside members of the Docker community. 

Like other Docker Official Images, the Node DOI offers a common starting point for Node and JavaScript developers. We also maintain an evolving list of Node best practices while regularly pushing critical security updates. This distinguishes Docker Official Images from alternatives on Docker Hub. 

How to run Node in Docker

Before getting started, download the latest Docker Desktop release and install it. Docker Desktop includes the Docker CLI, Docker Compose, and additional core development tools. The Docker Dashboard (Docker Desktop’s UI component) will help you manage images and containers. 

You’re then ready to Dockerize Node!

Enter a quick pull command

Pulling the Node DOI is the quickest way to begin. Enter docker pull node in your terminal to grab the default latest Node version from Docker Hub. You can readily use this tag for testing or local development. But a pinned version might be safer for production use. Here’s how the pull process works:
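
docker pull node                # grabs the default latest tag
docker pull node:19.0.0-slim    # pins the slim variant mentioned earlier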

Your CLI will display a status message once it’s done. You can also double-check this within Docker Desktop! Click the Images tab on the left sidebar and scan through your listed images. Docker Desktop will display your node image:

Docker UI listing local images, including the Node Docker Official Image.

Your node:latest image is a hefty 942.33 MB. If you inspect your Node image’s contents using docker sbom node, you’ll see that it currently includes 623 packages. The Node image contains numerous dependencies and modules that support Node and various applications. 

However, your final Node image can be much slimmer! We’ll tackle optimization while discussing Dockerfiles. After all, the Node DOI has 24 supported tags spread amongst four major Node versions. Each has its own impact on image size.  

Confirm that Node is functional

Want to run your new image as a container? Hover over your listed node image and click the blue “Run” button. In this state, your Node container will produce some minimal log entries and run continuously in case requests come through. 

Exit this container before moving on by clicking the square “stop” button in Docker Desktop or by entering docker stop YourContainerName in the CLI. 
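
If you prefer the CLI, a rough equivalent looks like this (the container name is arbitrary; -it keeps Node’s interactive process alive while -d detaches it):

docker run -dit --name my-node-container node   # start a detached Node container
docker stop my-node-container                   # stop it before moving on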

Create your Node image from a Dockerfile

Building from a Dockerfile gives you ultimate control over image composition, configuration, and your overall application. However, Node requires very little to function properly. Here’s a barebones Dockerfile to get you up and running (using a pinned, Debian-based image version): 

FROM node:19-bullseye

Docker will build your image from your chosen Node version. 

It’s safest to use node:19-bullseye because this image supports numerous use cases. This version is also stable and prevents you from pulling in new breaking changes, which sometimes happens with latest tags. 

To build your image from a Dockerfile, run the docker build -t my-nodejs-app . command. You can then run your new image by entering docker run -it --rm --name my-running-app my-nodejs-app.

Optimize your Node image

The complete version of Node often includes extra packages that weigh your application down. This leaves plenty of room for optimization. 

For example, removing unneeded development dependencies reduces image bloat. You can do this by adding a RUN instruction to our previous file: 

FROM node:19-bullseye

RUN npm prune --production

This approach is pretty granular. It also relies on you knowing exactly what you do and don’t need for your project. Alternatively, switching to a slim image build offers the quickest results. You’ll encounter similar caveats but spend less time writing individual Dockerfile instructions. The easiest approach is to replace node:19-bullseye with its node:19-bullseye-slim counterpart. This alone shrinks image size by 75%. 

You can even pull node:19-alpine to save more disk space. However, this tag contains even fewer dependencies and isn’t officially supported by the Node.js Foundation. Keep this in mind while developing. 

Finally, multi-stage builds lead to smaller image sizes. These let you copy only what you need between build stages to combat bloat. 
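
As a hedged sketch — assuming a typical npm project with a package-lock.json, plus build and start scripts that emit output to a dist folder — a multi-stage Dockerfile might look like this:

# Build stage: install everything needed to compile the app
FROM node:19-bullseye AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: copy only production dependencies and the build output
FROM node:19-bullseye-slim
WORKDIR /usr/src/app
ENV NODE_ENV=production
COPY --from=build /usr/src/app/package*.json ./
RUN npm ci --omit=dev
COPY --from=build /usr/src/app/dist ./dist
CMD ["npm", "start"]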

Using Docker Compose

Say you have a start script, an existing package.json file, and (possibly) want to operate Node alongside other services. Spinning up Node containers with Docker Compose can be pretty handy in these situations.

Here’s a sample docker-compose.yml file: 

services:
  node:
    image: "node:19-bullseye"
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=production
    volumes:
      - ./:/home/node/app
    ports:
      - "8888:8888"
    command: "npm start"

You’ll see some parameters that we didn’t specify earlier in our Dockerfile. For example, the user parameter lets you run your container as an unprivileged user. This follows the principle of least privilege. 

To jumpstart your Node container, simply enter the docker compose up -d command. Like before, you can verify that Node is running within Docker Desktop. The docker container ls --all command also displays all existing containers within the CLI.  

Running a simple Node script

Your project doesn’t always need a Dockerfile. In these cases, you can directly leverage the Node DOI with the following command: 

docker run -it --rm --name my-running-script -v "$PWD":/usr/src/app -w /usr/src/app node:19-bullseye node your-daemon-or-script.js

This simple approach is ideal for single-file projects.

Docker Node best practices

It’s important to get the most out of Docker and the Node Official Image. We’ve briefly outlined the benefits of running as a non-root node user, but here are some useful tips for developing with Node: 

  • Easily pass secrets and other runtime configurations to your application by setting NODE_ENV to production, as seen here: -e "NODE_ENV=production".
  • Place any installed, global Node dependencies into a non-root user directory.
  • Remember to manually install curl if using an alpine image tag, since it’s not included by default.
  • Wrap your Node process in an init system with the --init flag, so it can successfully run as PID 1. 
  • Set memory limitations for your containers that run on the same host (several of these flags appear in the sketch after this list). 
  • Include the package.json start command directly within your Dockerfile, to reduce active container processes and let Node properly receive exit signals. 
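
Putting several of these tips together, here’s a rough docker run sketch — it reuses the my-nodejs-app image built earlier, and the memory limit is illustrative:

# --init wraps Node in a minimal init process (PID 1); --memory caps RAM usage
docker run -d --init -e "NODE_ENV=production" --memory="256m" --name my-node-app my-nodejs-app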

This isn’t an exhaustive list. To view more details, check out our best practices documentation.

Get started with Node today

As you’ve seen, spinning up a Node container from the Node Docker Official Image is quick and requires just a few steps depending on your workflow. You’ll no longer need to worry about platform-specific builds or get bogged down with complex development processes. 

We’ve also covered many ways to help your Node builds perform better. Check out our top containerization tips article to learn even more about optimization and security. 

Ready to get started? Swing by Docker Hub and pull our Node image to start experimenting. In no time, you’ll have your server and networking applications up and running. You can also learn more on our GitHub README page.

How to Use the Postgres Docker Official Image

Postgres is one of the top relational, multi-model databases currently available. It’s designed to power database applications — which serve data either directly to end users or to another application via APIs. Your typical website might fit that first example well, while a finance app (like PayPal) typically uses APIs to process GET or POST database requests. 

Postgres’ object-relational structure and concurrency are advantages over alternatives like MySQL. And while no database technology is objectively the best, Postgres shines if you value extensibility, data integrity, and open-source software. It’s highly scalable and supports complex, standards-based SQL queries. 

The Postgres Docker Official Image (DOI) lets you create a Postgres container tailored specifically to your application. This image also handles many core setup tasks for you. We’ll discuss containerization and the Postgres DOI, then show you how to get started.

Why should you containerize Postgres? 

Containerization is often beneficial, since your Postgres database application can run alongside your main application. This makes it much quicker to spin up and deploy Postgres anywhere you need it. Containerization also separates your data from your database application. Should your application fail, it’s easy to launch another container while shielding your data from harm. 

This is simpler than installing Postgres locally, performing additional configuration, and starting your own background processes. Such workflows take extra time, require deeper technical knowledge, and don’t adapt well to changing application requirements. That’s why Docker containers come in handy — they’re approachable and tuned for rapid development.

What’s the Postgres Docker Official Image?

Like any other Docker image, the Postgres Docker Official Image contains all source code, core dependencies, tools, and libraries your application needs. The Postgres DOI tells your database application how to behave and interact with data. Meanwhile, your Postgres container is a running instance of this standard image.

Specifically, Postgres is perfect for the following use cases:

  • Connecting Docker shared volumes to your application
  • Testing your storage solutions during development
  • Testing your database application against newer versions of your main application or Postgres itself

The PostgreSQL Docker Community maintains this image and added it to Docker Hub due to its widespread appeal.

Can you deploy Postgres containers in production?

Yes! Though this answer comes with some caveats and depends on how many containers you want to run simultaneously. 

While it’s possible to use the Postgres Official Image in production, Docker Postgres containers are best suited for local development. This lets you use tools like Docker Compose to collectively manage your services. You aren’t forced to juggle multiple database containers at scale, which can be challenging. 

Launching production Postgres containers means using an orchestration system like Kubernetes to stay up and running. You may also need third-party components to supplement Docker’s offerings. However, you can absolutely give this a try if you’re comfortable with Kubernetes! Arctype’s Shanika Wickramasinghe shares one method for doing so.

For these reasons, you can perform prod testing with just a few containers. But it’s best to reconsider your deployment options for anything beyond that.

How to run Postgres in Docker

To begin, download the latest Docker Desktop release and install it. Docker Desktop includes the Docker CLI, Docker Compose, and supplemental development tools. Meanwhile, the Docker Dashboard (Docker Desktop’s UI component) will help you manage images and containers. 

Afterward, it’s time to Dockerize Postgres!

Enter a quick pull command

Pulling the Postgres Docker Official Image is the fastest way to get started. In your terminal, enter docker pull postgres to grab the latest Postgres version from Docker Hub. 

Alternatively, you can pin your preferred version with a specific tag. Though we usually associate pinning with Dockerfiles, the concept carries over to a basic pull command. 

For example, you’d enter the docker pull postgres:14.5 command if you prefer postgres v14.5. Generally, we recommend using a specific version of Postgres. The :latest version automatically changes with each new Postgres release — and it’s hard to know if those newer versions will introduce breaking changes or vulnerabilities. 

Either way, Docker will download your Postgres image locally onto your machine.

Once the pull is finished, your terminal should notify you. You can also confirm this within Docker Desktop! From the left sidebar, click the Images tab and scan the list that appears in the main window. Docker Desktop will display your postgres image, which weighs in at 355.45 MB.

Docker Desktop user interface displaying the list of current local images, including Postgres.

Postgres is one of the slimmest major database images on Docker Hub. But alpine variants are also available to further reduce your image sizes, since they include only basic packages (perfect for simpler projects). You can learn more about Alpine’s benefits in our recent Docker Official Image article.

Next up, what if you want to run your new image as a container? While many other images let you hover over them in the list and click the blue “Run” button that appears, Postgres needs a little extra attention. Being a database, it requires you to set environment variables before forming a successful connection. Let’s dive into that now.

Start a Postgres instance

Enter the following docker run command to start a new Postgres instance or container: 

docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres

This creates a container named some-postgres and assigns important environment variables before running everything in the background. Postgres requires a password to function properly, which is why that’s included. 

If you have this password already, you can spin up a Postgres container within Docker Desktop. Just click that aforementioned “Run” button beside your image, then manually enter this password within the “Optional Settings” pane before proceeding. 

However, you can also use the Postgres interactive terminal, or psql, to query Postgres directly:

docker run -it --rm --network some-network postgres psql -h some-postgres -U postgres
psql (14.3)
Type "help" for help.

postgres=# SELECT 1;
 ?column? 
----------
        1
(1 row)

Using Docker Compose

Since you’re likely using multiple services, or even a database management tool, Docker Compose can help you run instances more efficiently. With a single YAML file, you can define how your services work. Here’s an example for Postgres:

services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

volumes:
  pgdata:

You’ll see that both services are set to restart: always. This makes our data accessible whenever our applications are running and keeps the Adminer management service active simultaneously. When a container fails, this ensures that a new one starts right up.

Say you’re running a web app that needs data immediately upon startup. Your Docker Compose file would reflect this. You’d add your web service and the depends_on parameter to specify startup and shutdown dependencies between services. Borrowing from our docs on controlling startup and shutdown order, your expanded Compose file might look like this:

services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      db:
        condition: service_healthy
    command: ["python", "app.py"]

  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 1s
      timeout: 5s
      retries: 10

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

To launch your Postgres database and supporting services, enter the docker compose -f [FILE NAME] up command. 

Using either docker run, psql, or Docker Compose, you can successfully start up Postgres using the Official Image! These are reliable ways to work with “default” Postgres. However, you can configure your database application even further.

Extending your Postgres image

There are many ways to customize or configure your Postgres image. Let’s tackle four important mechanisms that can help you.

1. Environment variables

We’ve touched briefly on the importance of POSTGRES_PASSWORD to Postgres. Without specifying this, Postgres can’t run effectively. But there are also other variables that influence container behavior: 

  • POSTGRES_USER – Specifies a user with superuser privileges and a database with the same name. Postgres uses the default user when this is empty.
  • POSTGRES_DB – Specifies a name for your database, or defaults to the POSTGRES_USER value when left blank. 
  • POSTGRES_INITDB_ARGS – Sends arguments to postgres initdb and adds functionality.
  • POSTGRES_INITDB_WALDIR – Defines a specific directory for the Postgres transaction log. A transaction is an operation and usually describes a change to your database. 
  • POSTGRES_HOST_AUTH_METHOD – Controls the auth-method for host connections to all databases, users, and addresses.
  • PGDATA – Defines another default location or subdirectory for database files.

These variables live within your plain text .env file. Ultimately, they determine how Postgres creates and connects databases. You can check out our GitHub Postgres Official Image documentation for more details on environment variables.
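
As a hedged sketch — the values here are entirely hypothetical — a minimal .env file and a docker run command that loads it might look like this:

# .env (hypothetical values)
POSTGRES_USER=appuser
POSTGRES_PASSWORD=mysecretpassword
POSTGRES_DB=appdb

# Load the whole file at startup
docker run -d --name some-postgres --env-file .env postgres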

2. Docker secrets

While environment variables are useful, passing them between host and container doesn’t come without risk. Docker secrets let you access and load those values from files already present in your container. This prevents your environment variables from being intercepted in transit over a port connection. You can use the following command (and iterations of it) to leverage Docker secrets with Postgres: 

docker run --name some-postgres -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-passwd -d postgres

Note: Docker secrets are only compatible with certain environment variables. Reference our docs to learn more.
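
As a sketch of the Compose equivalent — assuming a local postgres-passwd.txt file holding your password — you could declare the secret like this:

services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres-passwd
    secrets:
      - postgres-passwd

secrets:
  postgres-passwd:
    file: ./postgres-passwd.txt   # hypothetical local file holding the password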

3. Initialization scripts

Also called init scripts, these run any executable shell scripts or command-based .sql files once Postgres creates a postgres-data folder. This helps you perform any critical operations before your services are fully up and running. Conversely, Postgres will skip these scripts if its postgres-data folder has already been initialized.
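
For example — init.sql being a hypothetical script of your own — you could mount it into the image’s init directory like this:

docker run -d --name some-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -v "$PWD/init.sql":/docker-entrypoint-initdb.d/init.sql \
  postgres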

4. Database configuration

Your Postgres database application acts as a server, and it’s beneficial to control how it runs. Configuring your database not only determines how your Postgres container talks with other services, but also optimizes how Postgres runs and accesses data. 

There are two ways you can handle database configurations with Postgres. You can either apply these configurations locally within a dedicated file or use the command line. The CLI uses an entrypoint script to pass any extra arguments from your docker run command to the Postgres server daemon for processing. 
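
As a rough CLI sketch (the settings and values are illustrative, not recommendations), any arguments after the image name are handed to the Postgres server:

docker run -d --name some-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  postgres -c shared_buffers=256MB -c max_connections=200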

Note: Available configurations differ between Postgres versions. The configuration file directory also changes slightly while using an alpine variant of the Postgres Docker Official Image.

Important caveats and data storage tips

While Postgres can be pretty user-friendly, it does have some quirks. Keep the following in mind while working with your Postgres container images: 

  • If no database exists when Postgres spins up in a container, it’ll create a default database for you. While this process unfolds, that database won’t accept any incoming connections.
  • Working with a pre-existing database is best when using Docker Compose to start up multiple services. Otherwise, automation tools may fail while Postgres creates a default.
  • Docker will throw an error if a Postgres container exceeds its shared memory allotment, which defaults to 64 MB.
  • You can use either a docker run command or Docker Compose to allocate more memory to your Postgres containers (see the sketch after this list).
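
As a rough sketch, these docker run flags raise both limits (the sizes are illustrative):

docker run -d --name some-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  --memory="512m" --shm-size="256mb" \
  postgres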

Storing your data in the right place

Data accessibility helps Postgres work correctly, so you’ll also want to make sure you’re storing your data in the right place. This location must be visible to both Postgres and Docker to prevent pesky issues. While there’s no perfect storage solution, remember the following:

  • Letting Docker manage your storage by writing files to the host disk via its internal volume management is transparent and user-friendly. However, these files may be inaccessible to tools or apps outside of your containers. 
  • Using bind mounts to connect external data to your Postgres container can solve data accessibility issues. However, you’re responsible for creating the directory and setting up permissions or security.

Lastly, if you decide to start your container via the docker run command, don’t forget to mount the appropriate directory from the host. The -v flag enables this. Browse our Docker run documentation to learn more.
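
A minimal sketch, assuming you want your data in a pgdata folder beneath your current directory (/var/lib/postgresql/data is the image’s default data location):

docker run -d --name some-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -v "$PWD/pgdata":/var/lib/postgresql/data \
  postgres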

Jumpstart your next Postgres project today

As we’ve discovered, harnessing the Postgres Docker Official Image is pretty straightforward in most cases. Since many customizations are available, you only need to explore the extensibility options you’re comfortable with. Postgres even supports extensions (like PostGIS) — which could add even deeper functionality. 

Overall, Dockerizing Postgres locally has many advantages. Swing by Docker Hub and pull your first Postgres Docker Official Image to start experimenting. You’ll find even deeper instructions for enhancing your database setup on our Postgres GitHub page.

Need a springboard? Check out the Docker awesome-compose applications that leverage Postgres.

Creating Kubernetes Extensions in Docker Desktop

This guest post is courtesy of one of our Docker Captains! James Spurin, a DevOps Consultant and Course/Content Creator at DiveInto, recalls his experience creating the Kubernetes Extension for Docker Desktop. Of course, every journey has its challenges. But being able to leverage the powerful open source benefits of the loft.sh vcluster Extension was well worth the effort!

Ever wondered what it would take to create your own Kubernetes Extensions in Docker Desktop? In this blog, we’ll walk through the steps and lessons I learned while creating the k9s Docker Extension, and how it leverages the incredible open source efforts of the loft.sh vcluster Extension as crucial infrastructure components.

Docker Desktop running the k9s Extension.

Why build a Kubernetes Docker Extension?

When I initially encountered Docker Extensions, I wondered:

“Can we use Docker Extensions to communicate with the inbuilt Docker-managed Kubernetes server provided in Docker Desktop?”

Docker Extensions open many opportunities with the convenient full-stack interface within the Extensions pane.

Traditionally when using Docker, we’d run a container through the UI or CLI. We’d then expose the container’s service port (for example, 8080) to our host system. Next, we’d access the user interface via our web browser with a URL such as http://localhost:8080.

Diagram directing the user to run a Docker container via the UI/CLI and expose the service port. Then, the user should access the service via their web browser.

While the UI/CLI makes this relatively simple, this would still involve multiple steps between different components, namely Docker Desktop and a web browser. We may also need to repeat these steps each time we restart the service or close our browser.

Docker Extensions solve this problem by helping us visualize our backend services through the Docker Dashboard.

Diagram showing the user directly accessing the Docker Extension, instead of repeating the original steps.

Combining Docker Desktop, Docker Extensions, and Kubernetes opens up even more opportunities. This toolset lets us productively leverage Docker Desktop from the beginning stages of development to container creation, execution, and testing, leading up to container orchestration with Kubernetes.

Flow diagram beginning with code development and container build then ending with container testing and orchestration with Kubernetes.

Challenges creating the k9s Extension

Wanting to see this in action, I experimented with different ways to leverage Docker Desktop with the inbuilt Kubernetes server. Eventually, I was able to bridge the gap and provide Kubernetes access to a Docker Extension.

At the time, this required a privileged container — a security risk. As a result, this approach was less than ideal and wasn’t something I was comfortable sharing…

Let’s dive deeper into this.

Docker Desktop uses a hidden virtual machine to run Docker. The Docker-managed Kubernetes instance also lives within this virtual machine, deployed via kubeadm:

Diagram demonstrating a Docker-managed Kubernetes instance within a virtual machine, deployed via kubeadm.

Docker Desktop conveniently provides the user with a local preconfigured kubeconfig file and kubectl command within the user’s home area. This makes accessing Kubernetes less of a hassle. It works and is a fantastic way to fast-track access for those looking to leverage Kubernetes from the convenience of Docker.

However, this simplicity poses some challenges from an extension’s viewpoint. Specifically, we’d need to find a way to provide our Docker Extension with an appropriate kubeconfig file for accessing the in-built Kubernetes service.

Finding a solution with loft.sh and vcluster

Fortunately, the team at loft.sh and vcluster were able to address this challenge! Their efforts provide a solid foundation for those looking to create their Kubernetes-based Extensions in Docker Desktop.

Loft website homepage advertising virtual Kubernetes clusters that run inside regular namespaces.

When launching the vcluster Docker Extension, you’ll see that it uses a control loop that verifies Docker Desktop is running Kubernetes.

From an open source viewpoint, this has tremendous reusability for those creating their own Docker Extensions with Kubernetes. The progress indicator shows vcluster checking for a running Kubernetes service, as we can see in the following:

Docker Desktop vcluster Extension pane displaying loading screen as it searches for a running Kubernetes service.

If the service is running, the UI loads accordingly:

Docker Desktop vcluster Extension pane displaying a list of any running Kubernetes services.

If not, an error is displayed as follows:

Docker Desktop vcluster Extension pane displaying an error message.

While internally verifying that the Kubernetes server is running, loft.sh’s vcluster Extension cleverly captures the Docker Desktop Kubernetes kubeconfig. The vcluster Extension does this using a JavaScript hostCli call with kubectl binaries included in the extension (to provide compatibility across Windows, Mac, and Linux).

Then, it posts the captured output to a service running within the extension. The service in turn writes a local kubeconfig file for use by the vcluster Extension. 🚀

// Gets docker-desktop kubeconfig file from local and save it in container's /root/.kube/config file-system.
// We have to use the vm.service to call the post api to store the kubeconfig retrieved. Without post api in vm.service
// all the combinations of commands fail
export const updateDockerDesktopK8sKubeConfig = async (ddClient: v1.DockerDesktopClient) => {
    // kubectl config view --raw
    let kubeConfig = await hostCli(ddClient, "kubectl", ["config", "view", "--raw", "--minify", "--context", DockerDesktop]);
    if (kubeConfig?.stderr) {
        console.log("error", kubeConfig?.stderr);
        return false;
    }

    // call backend to store the kubeconfig retrieved
    try {
        await ddClient.extension.vm?.service?.post("/store-kube-config", {data: kubeConfig?.stdout})
    } catch (err) {
        console.log("error", JSON.stringify(err));
    }
}

How the k9s Extension for Docker Desktop works

With loft.sh’s ‘Docker Desktop Kubernetes Service is Running’ control loop and the kubeconfig capture logic, we have the key ingredients to create our Kubernetes-based Docker Extensions.

The k9s Extension that I released for Docker Desktop is essentially these components, with a splash of k9s and ttyd (for the web terminal). It’s the loft.sh vcluster codebase, reduced to a minimum set of components with k9s added.

The source code is available at https://github.com/spurin/k9s-dd-extension

The README.md file, found on GitHub, explains the details of the k9s extension for Docker Desktop.

While loft.sh’s vcluster stores the kubeconfig file in a particular directory, the k9s Extension expands this further by combining this service with a Docker Volume. When the service receives the post request with the kubeconfig, it’s saved as expected.

The kubeconfig file is now in a shared volume that other containers can access, such as the k9s as shown in the following example:

A snippet of code demonstrating a kubeconfig file stored in a shared volume to be accessed by other containers.
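
That snippet isn’t reproduced here, but a hedged Compose-style sketch of the idea — the image names are hypothetical, while the volume, KUBECONFIG path, and port come from the details above — might look like this:

services:
  backend:
    image: my-extension-backend   # hypothetical: receives the POST and writes the kubeconfig
    volumes:
      - kube-config:/root/.kube
  k9s:
    image: my-k9s-image           # hypothetical: runs k9s behind the web terminal
    environment:
      KUBECONFIG: /root/.kube/config
    volumes:
      - kube-config:/root/.kube
    ports:
      - "35781:35781"

volumes:
  kube-config: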

When the k9s container starts, it reads the environment variable KUBECONFIG (defined in the container image). Then, it exposes a web-based terminal service on port 35781 with k9s running.

If Kubernetes is running as expected in Docker Desktop, we’ll reuse loft.sh’s Kubernetes control loop to render an iframe pointing to the service on port 35781.

if (isDDK8sEnabled) {
    const myHTML = '<style>:root { --dd-spacing-unit: 0px; }</style><iframe src="http://localhost:35781" frameborder="0" style="overflow:hidden;height:99vh;width:100%" height="100%" width="100%"></iframe>';
    component = <React.Fragment>
        <div dangerouslySetInnerHTML={{ __html: myHTML }} />
    </React.Fragment>
} else {
    component = <Box>
        <Alert iconMapping={{
            error: <ErrorIcon fontSize="inherit"/>,
        }} severity="error" color="error">
            Seems like Kubernetes is not enabled in your Docker Desktop. Please take a look at the <a
            href="https://docs.docker.com/desktop/kubernetes/">docker
            documentation</a> on how to enable the Kubernetes server.
        </Alert>
    </Box>
}

This renders k9s within the Extension pane when accessing the k9s Docker Extension.

The k9s Extension running in Docker Desktop.

Conclusion

With that, I hope that sharing my experiences creating the k9s Docker Extension inspires you. By leveraging the source code for the Kubernetes k9s Docker Extension (standing on the shoulders of loft.sh), we open the gate to countless opportunities.

You’ll be able to fast-track the creation of a Kubernetes Extension in Docker Desktop, through changes to just two files: the docker-compose.yaml (for your own container services) and the UI rendering in the control loop.

Of course, all of this wouldn’t be possible without the minds behind vcluster. I’d like to give special thanks to loft.sh’s Lian Li, whom I met at KubeCon and who introduced me to loft.sh/vcluster. And I’d also like to thank the development team, who are referenced both in the vcluster Extension source code and the forked version of k9s!

Thanks for reading – James Spurin

Not sure how to get started or want to learn more about Docker Extensions like this one? Check out our Docker Extensions documentation for additional resources.

You can also learn more about James, his top tips for working with Docker, and more in his feature on our Docker Captain Take 5 series. 

How to Use the Alpine Docker Official Image

With its container-friendly design, the Alpine Docker Official Image (DOI) helps developers build and deploy lightweight, cross-platform applications. It’s based on Alpine Linux, which debuted in 2005, making it one of today’s newest major Linux distros. 

While some developers express security concerns when using relatively newer images, Alpine has earned a solid reputation. Developers favor Alpine for its small size, simplicity, and resource efficiency.

In fact, the Alpine DOI is one of our most popular container images on Docker Hub. To help you get started, we’ll discuss this image in greater detail and how to use the Alpine Docker Official Image with your next project. Plus, we’ll explore using Alpine to grab the slimmest image possible. Let’s dive in!

What is the Alpine Docker Official Image?

The Alpine DOI is a building block for Alpine Linux Docker containers. It’s an executable software package that tells Docker and your application how to behave. The image includes source code, libraries, tools, and other core dependencies that your application needs. These components help Alpine Linux function while enabling developer-centric features. 

The Alpine Docker Official Image differs from other Linux-based images in a few ways. First, Alpine is based on the musl libc implementation of the C standard library — and uses BusyBox instead of GNU coreutils. While GNU packages many Linux-friendly programs together, BusyBox bundles a smaller number of core functions within one executable. 

While our Ubuntu and Debian images leverage glibc and coreutils, Alpine’s musl and BusyBox alternatives are comparatively lightweight and resource-friendly, containing fewer extensions and less bloat.

As a result, Alpine appeals to developers who don’t need uncompromising compatibility or functionality from their image. Our Alpine DOI is also user-friendly and straightforward since there are fewer moving parts.

Alpine Linux performs well on resource-limited devices, which is fitting for developing simple applications or spinning up servers. Your containers will consume less RAM and less storage space. 

The Alpine Docker Official Image also offers multi-arch support. This lets you run Alpine on desktops, mobile devices, rack-mounted servers, Raspberry Pis, and even newer M-series Macs. Overall, Alpine pairs well with a wide variety of embedded systems. 

These are only some of the advantages to using the Alpine DOI. Next, we’ll cover how to harness the image for your application. 

When to use Alpine

You may be interested in using Alpine, but find yourself asking, “When should I use it?” Containerized Alpine shines in some key areas: 

  • Creating servers
  • Router-based networking
  • Development/testing environments

While there are some other uses for Alpine, most projects will fall under these categories. Overall, our Alpine container image excels in situations where space savings and security are critical. 

How to run Alpine in Docker

Before getting started, download Docker Desktop and then install it. Docker Desktop is built upon Docker Engine and bundles together the Docker CLI, Docker Compose, and other core components. Launching Docker Desktop also lets you use Docker CLI commands (which we’ll get into later). Finally, the included Docker Dashboard will help you visually manage your images and containers. 

After completing these steps, you’re ready to Dockerize Alpine!

Note: For Linux users, Docker will still work perfectly fine if you have it installed externally on a server, or through your distro’s package manager. However, Docker Desktop for Linux does save time and effort by bundling all necessary components together — while aiding productivity through its user-friendly GUI. 

Use a quick pull command

You’ll have to first pull the Alpine Docker Official Image before using it for your project. The fastest method involves running docker pull alpine from your terminal. This grabs the alpine:latest image (the most current available version) from Docker Hub and downloads it locally on your machine: 

Your terminal output should show when your pull is complete — and which alpine version you’ve downloaded. You can also confirm this within Docker Desktop. Navigate to the Images tab from the left sidebar. And a list of downloaded images will populate on the right. You’ll see your alpine image, tag, and its minuscule (yes, you saw that right) 5.29 MB size:

Docker Desktop UI with list of downloaded images including Alpine.
Other Linux distro images like Ubuntu, Debian, and Fedora are many, many times larger than Alpine.

That’s a quick introduction to using the Alpine Official Image alongside Docker Desktop. But it’s important to remember that every Alpine DOI version originates from a Dockerfile. This plain-text file contains instructions that tell Docker how to build an image layer by layer. Check out the Alpine Linux GitHub repository for more Dockerfile examples. 

Next up, we’ll cover the significance of these Dockerfiles to Alpine Linux, some CLI-based workflows, and other key information.

Build your Dockerfile

Because Alpine is a standard base for container images, we recommend building on top of it within a Dockerfile. Specify your preferred alpine image tag and add instructions to create this file. Our example takes alpine:3.14 and runs an executable mysql client with it: 

FROM alpine:3.14
RUN apk add --no-cache mysql-client
ENTRYPOINT ["mysql"]

In this case, we’re starting from a slim base image and adding our mysql-client using Alpine’s standard package manager. Overall, this lets us run commands against our MySQL database from within our application. 

This is just one of the many ways to get your Alpine DOI up and running. In particular, Alpine is well-suited to server builds. To see this in action, check out Kathleen Juell’s presentation on serving static content with Docker Compose, Next.js, and NGINX. Navigate to timestamp 7:07 within the embedded video. 

The Alpine Official Image has a close relationship with other technologies (something that other images lack). Many of our Docker Official Images support -alpine tags. For instance, our earlier example of serving static content leverages the node:16-alpine image as a builder.

This relationship makes Alpine and multi-stage builds an ideal pairing. Since the primary goal of a multi-stage build is to reduce your final image size, we recommend starting with one of the slimmest Docker Official Images.
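
As a hedged sketch of that pairing — assuming a Node project whose npm run build emits static files into a build folder — you could build with node:16-alpine and serve the output from an Alpine-based NGINX image:

# Build stage: node:16-alpine as the builder, as in the static-content example above
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Final stage: only the static output ships in the slim NGINX image
FROM nginx:1.23.1-alpine
COPY --from=builder /app/build /usr/share/nginx/html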

Grabbing the slimmest possible image

Pulling an -alpine version of a given image typically yields the slimmest result. You can do this using our earlier docker pull [image] command. Or you can create a Dockerfile and specify this image version — while leaving room for customization with added instructions. 

In either case, here are some results using a few of our most popular images. You can see how image sizes change with these tags:

Image tag        Image size    image:[version number]-alpine size
python:3.9.13    867.66 MB     46.71 MB
node:18.8.0      939.71 MB     164.38 MB
nginx:1.23.1     134.51 MB     22.13 MB

We’ve used the :latest tag since this is the default image tag Docker grabs from Docker Hub. As shown above with Python, pulling the -alpine image version reduces its footprint by nearly 95%! 

From here, the build process (when working from a Dockerfile) becomes much faster. Applications based on slimmer images spin up quicker. You’ll also notice that docker pull and various docker run commands execute faster with -alpine images. 

However, remember that you’ll likely have to use this tag with a specified version number for your parent image. Running docker pull python-alpine or docker pull python:latest-alpine won’t work. Docker will alert you that the image isn’t found, the repo doesn’t exist, the command is invalid, or login information is required. This applies to any image. 
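
For example, combining a pinned version number with the -alpine suffix works as expected:

docker pull python:3.9.13-alpine   # valid: pinned version plus the -alpine suffix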

Get up and running with Alpine today

The Alpine Docker Official Image shines thanks to its simplicity and small size. It’s a fantastic base image — perhaps the most popular amongst Docker users — and offers plenty of room for customization. Alpine is arguably the most user-friendly, containerized Linux distro. We’ve tackled how to use the Alpine Official Image, and showed you how to get the most from it. 

Want to use Alpine for your next application or server? Pull the Alpine Official Image today to jumpstart your build process. You can also learn more about supported tags on Docker Hub. 

How to Use the Redis Docker Official Image

Maintained in partnership with Redis, the Redis Docker Official Image (DOI) lets developers quickly and easily containerize a Redis instance. It streamlines the cross-platform deployment process — even letting you use Redis with edge devices if they support your workflows. 

Developers have pulled the Redis DOI over one billion times from Docker Hub. As the world’s most popular key-value store, Redis helps apps concurrently access critical bits of data while remaining resource-friendly. It’s highly performant, in-memory, networked, and durable. It also stands apart from relational databases like MySQL and PostgreSQL that use tabular data structures. From day one, Redis has also been open source. 

Finally, Redis cluster nodes are horizontally scalable — making it a natural fit for containerization and multi-container operation. Read on as we explore how to use the Redis Docker Official Image to containerize and accelerate your Redis database deployment.

What is the Redis Docker Official Image?

The Redis DOI is a building block for Redis Docker containers. It’s an executable software package that tells Docker and your application how to behave. It bundles together source code, dependencies, libraries, tools, and other core components that support your application. In this case, these components determine how your app and Redis database interact.

Our Redis Docker Official Image supports multiple CPU architectures. An assortment of over 50 supported tags lets you choose the best Redis image for your project. They’re also multi-layered and run using a default configuration (if you’re simply using docker pull). Complexity and base images also vary between tags. 

That said, you can also configure your Redis Official Image’s Dockerfile as needed. We’ll touch on this while outlining how to use the Redis DOI. Let’s get started.

How to run Redis in Docker

Before proceeding, we recommend installing Docker Desktop. Desktop is built upon Docker Engine and packages together the Docker CLI, Docker Compose, and more. Running Docker Desktop lets you use Docker commands. It also helps you manage images and containers using the Docker Dashboard UI. 

Use a quick pull command

Next, you’ll need to pull the Redis DOI to use it with your project. The quickest method involves visiting the image page on Docker Hub, copying the docker pull command, and running it in your terminal:
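
docker pull redis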

Your output confirms that Docker has successfully pulled the :latest Redis image. You can also verify this by hopping into Docker Desktop and opening the Images interface from the left sidebar. Your redis image automatically appears in the list:

Docker Desktop list of local images on disk, including the Redis official Docker image.

We can also see that our new Redis image is 111.14 MB in size. This is pretty lightweight compared to many images. However, using an alpine variant like redis:alpine3.16 further slims your image.

Now that you’re acquainted with Docker Desktop, let’s jump into our CLI workflow to get Redis up and running. 

Start your Redis instance

Redis acts as a server, and related server processes power its functionality. We need to start a Redis instance, or software server process, before linking it with our application. Luckily, you can create a running instance with just one command: 

 docker run --name some-redis -d redis 

We recommend naming your container. This helps you reference it later on. It also makes it easier to run additional commands that involve it. Your container will run until you stop it. 

The -d flag in this command runs your Redis service in “detached” mode. Redis, therefore, runs in the background. Your container will also automatically exit when its root process exits. You’ll see that we’re not explicitly telling the service to “start” within this command. By leaving this verbiage out, our Redis service will start and continue running — remaining usable to our application.

Set up Redis persistent storage

Persistent storage is crucial when you want your application to save data between runs. You can have Redis write its data to a destination like an SSD. Persistence is also useful for keeping log files across restarts. 

You can capture every Redis operation using the Redis Database (RDB) method. This lets you designate snapshot intervals and record data at certain points in time. However, that running container from our initial docker run command still holds the some-redis name. You should remove (or stop) this container before moving on, since it’s not critical for this example. 

Once that’s done, this command triggers persistent storage snapshots every 60 seconds: 

 docker run --name some-redis -d redis redis-server --save 60 1 --loglevel warning 

The RDB approach is valuable as it enables “set-and-forget” persistence. It also generates more logs. Logging can be useful for troubleshooting, yet it also requires you to monitor accumulation over time. 

However, you can also forego persistence entirely or choose another option. To learn more, check out Redis’ documentation.

Redis stores your persisted data in the VOLUME /data location. These connected volumes are shareable between containers. This shareability becomes useful when Redis lives within one container and your application occupies another. 
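
For example — a hedged tweak to the earlier persistence command — you can mount a named volume at that location (first remove the earlier some-redis container, since container names must be unique):

docker run --name some-redis -d \
  -v redis-data:/data \
  redis redis-server --save 60 1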

Connect with the Redis CLI

The Redis CLI lets you run commands directly within your running Redis container. However, this isn’t automatically possible via Docker. Enter the following commands to enable this functionality: 

docker network create some-network
docker run -it --network some-network --rm redis redis-cli -h some-redis

Your Redis service understands Redis CLI commands. Numerous commands are supported, as are different CLI modes. Read through the Redis CLI documentation to learn more. 

Once you have CLI functionality up and running, you’re free to leverage Redis more directly!
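
Once connected, a quick sanity check might look like this (the prompt reflects the some-redis host from the command above; SET and GET are standard Redis commands):

some-redis:6379> SET greeting "hello"
OK
some-redis:6379> GET greeting
"hello"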

Configurations and modules

Finally, we’ve arrived at customization. While you can run a Redis-powered app using defaults, you can tweak your Dockerfile to grab your pre-existing redis.conf file. This better supports production applications. While Redis can successfully start without these files, they’re central to configuring your services. 

You can see what a redis.conf file looks like on GitHub. Otherwise, here’s a sample Dockerfile:

FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]

You can also use docker run to achieve this. However, you should first do two things for this method to work correctly. First, create the /myredis/config directory on your host machine. This is where your configuration files will live. 

Second, open Docker Desktop and click the Settings gear in the upper right. Choose Resources > File Sharing to view your list of directories. You’ll see a grayed-out directory entry at the bottom, which is an input field for a named directory. Type in /myredis/config there and hit the “+” button to locally verify your file path:

Docker Desktop resource file sharing settings with the `/myredis/config` added.

You’re now ready to run your command! 

docker run -v /myredis/config:/usr/local/etc/redis --name myredis redis redis-server /usr/local/etc/redis/redis.conf

The Dockerfile gives you more granular control over your image’s construction. Alternatively, the CLI option lets you run your Redis container without a Dockerfile. This may be more approachable if your needs are more basic. Just ensure that your mapped directory is writable and exists locally. 

Also, consider the following: 

  • If you edit your Redis configurations on the fly, you’ll have to use CONFIG REWRITE to automatically identify and apply any field changes on the next run.
  • You can also apply configuration changes manually.

Remember how we connected the Redis CLI earlier? You can now pass arguments directly through the Redis CLI (ideal for testing) and edit configs while your database server is running. 
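
For instance, assuming your server was started with a config file as shown above, a hedged session might update a setting and persist it back to redis.conf:

127.0.0.1:6379> CONFIG SET maxmemory 100mb
OK
127.0.0.1:6379> CONFIG REWRITE
OK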

Notes on using Redis modules

Redis modules let you extend your Redis service, build new services, and adapt your database without taking a performance hit. Redis also processes them in memory. These standard modules support querying, search, JSON processing, filtering, and more. As a result, Docker Hub’s redislabs/redismod image bundles seven of these official modules together: 

  1. RedisBloom
  2. RedisTimeSeries
  3. RedisJSON
  4. RedisAI
  5. RedisGraph
  6. RedisGears
  7. RediSearch

If you’d like to spin up this container and experiment, simply enter docker run -d -p 6379:6379 redislabs/redismod in your terminal. You can open Docker Desktop to view this container like we did earlier on. 

You can view Redis’ curated modules or visit the Redis Modules Hub to explore further.

Get up and running with Redis today

We’ve explored how to successfully Dockerize Redis. Going further, it’s easy to grab external configurations and change how Redis operates on the fly. This makes it much easier to control how Redis interacts with your application. Head on over to Docker Hub and pull your first Redis Docker Official Image to start experimenting. 

The Redis Stack also helps extend Redis within Docker. It adds modern, developer-friendly data models and processing engines. The Stack also grants easy access to full-text search, document store, graphs, time series, and probabilistic data structures. Redis has published related container images through the Docker Verified Publisher (DVP) program. Check them out!

Community All-Hands Q3: Call for Papers

Mark your calendars! Join us for our next Community All-Hands event on September 1st. This quarterly event is a unique opportunity for the Docker community and staff to come together and present topics we’re passionate about.

Don’t miss out on this special event with community news, project demos, programming workshops, company and product updates, and more! In this edition, we will be introducing an unconference track, in which you will be able to propose any topic and drive an interactive discussion with the community (the world is your oyster!).

We are looking for speakers who want to share their experience with the community in a 30 to 45-minute presentation. We welcome all topics related to software development and the tech ecosystem. Additionally, we’re looking for unconference hosts who are passionate about a particular topic and want to lead a discussion on it.

Have a topic you’re especially excited about and would like to host? Follow these top five tips to get your topic suggestion accepted, and submit your proposal before August 14th!

Top five tips to get your proposal accepted

To increase the chances of your topic being selected, there’s a few qualities our community especially looks forward to. When brainstorming your proposal, we recommend keeping the following in mind.

1. Keep it short and sweet

Brevity is key when it comes to proposals. The review committee is looking for clear and concise proposals that get to the point.

2. Make it relevant

Your proposal should be relevant to the Docker community. Think about what would be of interest to other members of the community and what would contribute to the overall goal of the All-Hands event, which is to bring the community together.

During this event, developers can look forward to an altruistic platform for learning about new technologies and driving group conversation. For this reason, please don’t submit commercial and promotional content.

3. Don’t be afraid to repeat topics

Everyone has a different perspective, so don’t be afraid to repeat a topic that has been presented before. Your unique perspective is what makes your proposal valuable!

4. Know your audience

Keep your audience in mind when crafting your proposal. The All-Hands event is open to developers of all levels, so make sure your proposal is accessible to a broad range of people.

5. Follow the submission guidelines

Be sure to follow the submission guidelines when submitting your proposal. This includes providing all the required information and using the correct format.

Still not sure what to submit? Here’s a list of ideas:

Main track:

  • Converting your application from a monolith into a collection of microservices
  • Getting started with Carbon, WASM, or any other technology. Provide a template in DockerHub for people to use!
  • A workshop on how to build a modern web development architecture based on Jamstack
  • A workshop on how to use some exciting new technologies (Web3, AI, Blockchain)
  • Case study: How you increased the productivity of your team with modern software tooling
  • Best practices for developing and deploying cloud-native applications
  • Showcase your new IoT project and information on how to contribute
  • How you made your tech meetup more inclusive
  • Your new Docker extension and how others can benefit from installing it

Unconference track:

  • Discussion on security best practices
  • Lightning talk about your open source project
  • Birds of a Feather session – Reactive Programming
  • Discussion panel on Artificial Intelligence trends
  • Guided meditation session
  • Docker sessions in French, Spanish or Klingon
  • An interpretative dance session of marine creatures

We hope these tips help you in submitting a successful proposal for our next Community All-Hands event. Make sure to submit your ideas before the August 14th deadline, and we’ll see you there!

Last Call: Register for our next Community All Hands on March 31st https://www.docker.com/blog/last-call-register-for-our-next-community-all-hands-on-march-31st/ Tue, 29 Mar 2022 14:00:21 +0000 https://www.docker.com/?p=32820
Our next Community All-Hands event is this Thursday, March 31st, at 8am PST/5pm CET, and we will be celebrating Docker’s 9th birthday. It’s been an extraordinary year, so we’re excited to celebrate with you and reflect on the highlights of the past few years. Grab some cake and join us for this fun-filled event.

[Image: Community All-Hands 5]

In this edition, we will have three language tracks in Java, JavaScript, and Python. Tune in to your favorite language track for inspiring talks and discussions. Additionally, we’ll have our traditional Open Source track, where members of the community will be able to share their projects and discoveries to improve the overall development experience. 

This is a unique opportunity for Docker staff, Captains, and the broader Docker community to come together for live company updates, product updates, demos, community shout-outs and Q&A. You’ll also get the opportunity to win Docker swag in some exciting games that we have prepared for you.

We have lined up some exciting talks from the Engineering team at Docker and the community, including:

  • A message from our CEO, Scott Johnston
  • Product updates and demos covering Docker Desktop for Linux, improvements in file sharing performance, and Docker Extensions
  • Community updates and ways to get involved
  • Workshops to start containerizing your applications in Java, JavaScript, and Python
  • Live Q&A with the Engineering team
  • And more!

Below is a complete look at our exciting agenda. We hope to see you there!

[Image: Community All-Hands agenda]

Join Us for Our Next Docker Community All-Hands! https://www.docker.com/blog/join-us-for-our-next-docker-community-all-hands/ Fri, 10 Sep 2021 13:00:00 +0000 https://www.docker.com/blog/?p=28677 Next week, on Thursday, September 16th, 2021 (8am PST/5pm CET), we’ll be hosting our next quarterly Docker Community All-Hands. This virtual event, free and open to everyone, is a unique opportunity for Docker staff and the broader Docker community to come together for company and product updates, live demos, community presentations, and a live Q&A.

We’ve packed as much Docker goodness as we can into the three-hour program, and we look forward to welcoming the 3,000+ attendees who will be tuning in.

What we’ll cover

  • Company vision and product roadmap for 2021 and beyond
  • High-level overview of Docker’s technology strategy 
  • Product updates and live demos of new features and integrations
  • Community news and updates
  • Hands-on workshops and lightning talks presented by Docker Captains
  • Regional workshops in French, Spanish and Portuguese by the community

Speakers

  • We’ll kick off the event with a live panel and Q&A featuring members of Docker’s executive and senior staff, including Scott Johnston (CEO), Justin Cormack (CTO), Jean-Laurent de Morlhon (VP of Engineering), and Dieu Cao (Sr. Director of Product Management)
  • Next up, we’ll have a couple of awesome demos from our engineering team
  • We’ll close out the first hour with our traditional community shout-outs
  • The following two hours will be 100% community-driven, packed with lightning talks, workshops, demos, and panels in five different languages, including a live developer panel hosted by Francesco Ciulla

Click here to register for the event and to view the detailed agenda.

Join Us Next Week for Docker’s Community All-Hands https://www.docker.com/blog/join-us-next-week-for-dockers-community-all-hands/ Thu, 04 Mar 2021 17:48:08 +0000 https://www.docker.com/blog/?p=27580

Next week, on Thursday, March 11th, 2021 (8am PST/5pm CET), we’ll be hosting our next quarterly Docker Community All-Hands. This free virtual event, open to everyone, is a unique opportunity for Docker staff and the broader Docker community to come together for company and product updates, live demos, community presentations, and a live Q&A.

As luck would have it, this All-Hands will coincide almost to the day with Docker’s 8th birthday (yay!). To mark the occasion, we’re going to make this event extra special by introducing:

  • a longer format (three hours instead of one)
  • lots more content (demos, community lightning talks, and workshops)
  • regional content in French, Spanish, and Portuguese!

We’re also really excited about the new video platform we’ll be using, which makes it much easier for attendees to engage, connect, and share with each other.

Who will be presenting

  • Members of Docker’s leadership team including Scott Johnston (CEO), Justin Cormack (CTO), Donnie Berkholz (VP of Products) and Jean-Laurent de Morlhon (VP of Engineering)
  • Members of Docker’s product, engineering and community team
  • Docker Captains and Community Leaders

What we’ll cover

  • Company vision and product roadmap for 2021 and beyond
  • High-level overview of Docker’s technology strategy 
  • Product updates and live demos of new features and integrations
  • Community news and updates
  • Hands-on workshops and lightning talks presented by Docker Captains
  • Regional workshops in French, Spanish and Portuguese by the community

Click here to register for the event and to view the detailed agenda.

Save the Date! Next Docker Community All Hands https://www.docker.com/blog/save-the-date-next-docker-community-all-hands/ Thu, 11 Feb 2021 18:39:10 +0000 https://www.docker.com/blog/?p=27524

We’re excited to announce that our next Community All Hands will be on March 11th, 2021 @ 8am PST/5pm CET. This quarterly event is a unique opportunity for Docker staff and the broader Docker community to come together for live company updates, product updates, demos, community shout-outs and Q&A. We had more than 1,500 attendees for our last all-hands and we hope to double that this time.  

This all-hands will be particularly special because it will coincide with none other than… you guessed it… Docker’s 8th birthday! For this “birthday edition,” we’re going to make the event extra special.

We’ll start by extending the format from one hour to two hours to pack in more Docker goodness. The main piece of feedback we got from our last all-hands was that it was way too short; we had too much content to squeeze into 60 minutes. This longer format will give us plenty of time to cover everything we need to cover and let presenters catch their breath 🙂

Another new feature of this all-hands will be integrated chat and multi-casting, made possible by the innovative new video conferencing platform we’ll be using. This will let us present content simultaneously in different virtual rooms, so attendees can tune in to the content that’s most relevant to them and engage more with the presenters and other attendees.

Agenda

We’re still building out the agenda but you can expect the event to unfold as follows: 

  • The first hour will focus on company and product updates with presentations from Docker executive staff, including Scott Johnston (CEO), Justin Cormack (CTO), Donnie Berkholz (VP of Products) and Jean-Laurent de Morlhon (VP of Engineering). 
  • The second hour will focus on breakout sessions with live demos and workshops so that attendees can deep dive into particular areas of interest. 
  • The final hour will focus on community, with a Captain-led live show where we’ll share fresh updates on programs and initiatives, as well as shout-outs to outstanding Docker contributors.

Openness and participation are key pillars of a healthy open source community. We’re constantly exploring ways to better engage the Docker community, and we hope this new format will make this Community All Hands even more awesome. Join us!

Click here to register for the event. 
