Announcing Docker+Wasm Technical Preview 2
https://www.docker.com/blog/announcing-dockerwasm-technical-preview-2/
Wed, 22 Mar 2023 16:12:16 +0000

We recently announced the first Technical Preview of Docker+Wasm, a special build that makes it possible to run Wasm containers with Docker using the WasmEdge runtime. Starting from version 4.15, everyone can try out these features by activating the containerd image store experimental feature.

We didn’t want to stop there, however. Since October, we’ve been working with our partners to make running Wasm workloads with Docker easier and to support more runtimes.

Now we are excited to announce a new Technical Preview of Docker+Wasm with the following three new runtimes:

  • spin from Fermyon
  • slight from Deislabs
  • wasmtime from Bytecode Alliance

All of these runtimes, including WasmEdge, use the runwasi library.

What is runwasi?

Runwasi is a multi-company effort to build a Rust library that makes it easier to write containerd shims for Wasm workloads. Last December, the runwasi project was donated and moved to the Cloud Native Computing Foundation’s containerd organization on GitHub.

With a lot of work from people at Microsoft, Second State, Docker, and others, we now have enough features in runwasi to run Wasm containers with Docker or in a Kubernetes cluster. There is still a lot of work to do, but the project is ready for people to start testing.

If you would like to chat with us or other runwasi maintainers, join us on the CNCF’s #runwasi channel.

Get the update

Ready to dive in and try it for yourself? Great! Before you do, understand that this is a technical preview build of Docker Desktop, so things might not work as expected. Be sure to back up your containers and images before proceeding.

Download and install the appropriate version for your system, then activate the containerd image store (Settings > Features in development > Use containerd for pulling and storing images), and you’ll be ready to go.

Figure 1: Docker Desktop’s “Features in development” settings with the “Use containerd” option selected.

Let’s take Wasm for a spin 

The WasmEdge runtime is still present in Docker Desktop, so you can run: 

$ docker run --rm --runtime=io.containerd.wasmedge.v1 \
  --platform=wasi/wasm secondstate/rust-example-hello:latest
Hello WasmEdge!

You can even run the same image with the wasmtime runtime:

$ docker run --rm --runtime=io.containerd.wasmtime.v1 \
  --platform=wasi/wasm secondstate/rust-example-hello:latest
Hello WasmEdge!

In the next example, we will deploy a Wasm workload to Docker Desktop’s Kubernetes cluster using the slight runtime. To begin, make sure to activate Kubernetes in Docker Desktop’s settings, then create an example.yaml file:

cat > example.yaml <<EOT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-slight
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasm-slight
  template:
    metadata:
      labels:
        app: wasm-slight
    spec:
      runtimeClassName: wasmtime-slight-v1
      containers:
        - name: hello-slight
          image: dockersamples/slight-rust-hello:latest
          command: ["/"]
          resources:
            requests:
              cpu: 10m
              memory: 10Mi
            limits:
              cpu: 500m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: wasm-slight
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: wasm-slight
EOT

Note the runtimeClassName: Kubernetes uses it to select the right runtime for your application.
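
Docker Desktop’s Kubernetes ships with these Wasm runtime classes preconfigured, so you don’t need to create them yourself. For reference, a RuntimeClass maps a class name to a containerd handler and looks roughly like the sketch below (the handler name is an assumption for illustration):

# Illustrative only: Docker Desktop already registers this mapping for you.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-slight-v1
handler: slight # assumed handler name; the actual binding may differ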

You can now run:

$ kubectl apply -f example.yaml

Once Kubernetes has downloaded the image and started the container, you should be able to curl it:

$ curl localhost/hello
hello

You now have a Wasm container running locally in Kubernetes. How exciting! 🎉

Note: You can take this same YAML file and run it in AKS.

Now let’s see how we can use this to run Bartholomew. Bartholomew is a micro-CMS made by Fermyon that works with the spin runtime. You’ll need to clone this repository; it’s a slightly modified Bartholomew template.

The repository already contains a Dockerfile that you can use to build the Wasm container:

FROM scratch
COPY . .
ENTRYPOINT [ "/modules/bartholomew.wasm" ]

The Dockerfile copies all the contents of the repository into the image and sets the prebuilt bartholomew Wasm module as the image’s entry point.

$ cd docker-wasm-bartholomew
$ docker build -t my-cms .
[+] Building 0.0s (5/5) FINISHED
 => [internal] load build definition from Dockerfile          	0.0s
 => => transferring dockerfile: 147B                          	0.0s
 => [internal] load .dockerignore                             	0.0s
 => => transferring context: 2B                               	0.0s
 => [internal] load build context                             	0.0s
 => => transferring context: 2.84kB                           	0.0s
 => CACHED [1/1] COPY . .                                     	0.0s
 => exporting to image                                        	0.0s
 => => exporting layers                                       	0.0s
 => => exporting manifest sha256:cf85929e5a30bea9d436d447e6f2f2e  0.0s
 => => exporting config sha256:0ce059f2fe907a91a671f37641f4c5d73  0.0s
 => => naming to docker.io/library/my-cms:latest              	0.0s
 => => unpacking to docker.io/library/my-cms:latest           	0.0s

You are now ready to run your first WebAssembly micro-CMS:

$ docker run --runtime=io.containerd.spin.v1 -p 3000:80 my-cms

If you go to http://localhost:3000, you should be able to see the Bartholomew landing page (Figure 2).

Figure 2: The Bartholomew landing page, showing an example homepage written in Markdown.

We’d love your feedback

All of this work is fresh from the oven and relies on the containerd image store in Docker, which is an experimental feature we’ve been working on for almost a year now. The good news is that we already see how this hard work can benefit everyone by adding more features to Docker. We’re still working on it, so let us know what you need. 

If you want to help us shape the future of WebAssembly with Docker, try it out, let us know what you think, and leave feedback on our public roadmap.

Extending Docker’s Integration with containerd
https://www.docker.com/blog/extending-docker-integration-with-containerd/
Thu, 01 Sep 2022 16:44:09 +0000

We’re extending Docker’s integration with containerd to include image management! To share this work early and get feedback, this integration is available as an opt-in experimental feature in the latest Docker Desktop 4.12.0 release.

[The Docker Desktop Experimental Features settings with the option for using containerd for pulling and storing images enabled.]

What is containerd?

In the simplest terms, containerd is a broadly-adopted open container runtime. It manages the complete container lifecycle of its host system! This includes pulling and pushing images as well as handling the starting and stopping of containers. Not to mention, containerd is a low-level brick in the container experience. Rather than being used directly by developers, it’s designed to be embedded into systems like Docker and Kubernetes.

Docker’s involvement in the containerd project can be traced all the way back to 2016. You could say it’s a bit of a passion project for us! While we had many reasons for starting the project, our goal was to move container supervision out of the core Docker Engine and into a separate daemon. This way, it could be reused in projects like Kubernetes. Containerd was donated to the Cloud Native Computing Foundation (CNCF) in 2017 and became a graduated (stable) project in 2019.

What does containerd replace in the Docker Engine?

As we mentioned earlier, Docker has used containerd as part of Docker Engine for managing the container lifecycle (creating, starting, and stopping) for a while now! This new work is a step towards a deeper integration of containerd into the Docker Engine. It lets you use containerd to store images and then push and pull them. Containerd also uses snapshotters instead of graph drivers for mounting the root file system of a container. Due to containerd’s pluggable architecture, it can support multiple snapshotters as well. 
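
If you’re curious which storage backend your own daemon is using, docker info will tell you. A quick check looks like this (the output is an example and will vary; with the containerd store enabled you’ll see a snapshotter name rather than a classic graph driver):

$ docker info --format '{{ .Driver }}'
overlayfs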

Want to learn more? Michael Crosby wrote a great explanation about snapshotters on the Moby Blog.

Why migrate to containerd for image management?

Containerd is the leading open container runtime and, better yet, it’s already a part of Docker Engine! By switching to containerd for image management, we’re better aligning ourselves with the broader industry tooling. 

This migration modifies two main things:

  • We’re replacing Docker’s graph drivers with containerd’s snapshotters.
  • We’ll be using containerd to push, pull, and store images.

What does this mean for Docker users?

We know developers love how Docker commands work today and that many tools rely on the existing Docker API. With this in mind, we’re fully vested in making sure that the integration is as transparent as possible and doesn’t break existing workflows. To do this, we’re first rolling it out as an experimental, opt-in feature so that we can get early feedback. When enabled in the latest Docker Desktop, this experimental feature lets you use the following Docker commands with containerd under the hood: run, commit, build, push, load, and save.

This integration has the following benefits:

  1. Containerd’s snapshotter implementation helps you quickly plug in new features. Some examples include using stargz to lazy-pull images on startup or nydus and dragonfly for peer-to-peer image distribution.
  2. The containerd content store can natively store multi-platform images and other OCI-compatible objects. This enables features like the ability to build and manipulate multi-platform images using Docker Engine (and possibly other content in the future!).

If you plan to build a multi-platform image, the graphic below shows what to expect when you run the build command with the containerd store enabled.

[A recording of docker build terminal output with the containerd store enabled.]
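
If the recording above doesn’t render where you’re reading this, the command itself is an ordinary build with several platforms listed. A sketch, with an illustrative image name and platform list:

$ docker build --platform linux/amd64,linux/arm64 -t acme/app:latest .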

Without the experimental feature enabled, you will get an error message stating that this feature is not supported on the docker driver, as shown in the graphic below.

[A recording of an unsuccessful docker build output with the containerd store disabled.]

If you decide not to enable the experimental feature, no big deal! Things will work like before. If you have additional questions, you can access details in our release notes.

Roadmap for the containerd integration

We want to be as transparent as possible with the Docker community when it comes to this containerd integration (no surprises here!). For this reason, we’ve laid out a roadmap. The integration will happen in two key steps:

  1. We’ll ship an initial version in Docker Desktop which enables common workflows but doesn’t touch existing images to prove that this approach works.
  2. Next, we’ll write the code to migrate user images to use containerd and activate the feature for all our users.

We work to make expanding integrations like this as seamless as possible so you, our end user, can reap the benefits! This way, you can create new, exciting things while leveraging existing features in the ecosystem such as namespaces, containerd plug-ins, and more.

We’ve released this experimental feature first in Docker Desktop so that we can get feedback quickly from the community. But you can also expect this feature in a future Docker Engine release.


The details on the ongoing integration work can be accessed here.

Conclusion

In summary, Docker users can now look forward to full containerd integration. This brings many exciting features from native multi-platform support to encrypted images and lazy pulls. So make sure to download the latest version of Docker Desktop and enable the containerd experimental feature to take it for a spin! 

We love sharing things early and getting feedback from the Docker community — it helps us build products that work better for you. Please join us on our community Slack channel or drop us a line using our feedback form.

How to Set Up Your Local Node.js Development Environment Using Docker
https://www.docker.com/blog/how-to-setup-your-local-node-js-development-environment-using-docker/
Tue, 30 Aug 2022 21:16:50 +0000

Docker is the de facto toolset for building modern applications and setting up a CI/CD pipeline, helping you build, ship, and run your applications in containers on-prem and in the cloud.

Whether you’re running on simple compute instances such as AWS EC2 or something fancier like a hosted Kubernetes service, Docker’s toolset is your new BFF. 

But what about your local Node.js development environment? Setting up local dev environments while also juggling the hurdles of onboarding can be frustrating, to say the least.

While there’s no silver bullet, with the help of Docker and its toolset, we can make things a whole lot easier.

Table of contents:

  • How to set up a local Node.js dev environment — Part 1
  • How to set up a local Node.js dev environment — Part 2

How to set up a local Node.js dev environment — Part 1


In this tutorial, we’ll walk through setting up a local Node.js development environment for a relatively complex application that uses React for its front end, Node and Express for a couple of microservices, and MongoDB for our datastore. We’ll use Docker to build our images and Docker Compose to make everything a whole lot easier.

If you have any questions or comments, or if you just want to connect, you can reach me in our Community Slack or on Twitter at @rumpl.

Let’s get started.

Prerequisites

To complete this tutorial, you will need:

  • Docker installed on your development machine. You can download and install Docker Desktop.
  • A Docker ID, which you can sign up for through Docker Hub.
  • Git installed on your development machine.
  • An IDE or text editor to use for editing files. I would recommend VSCode.

Step 1: Fork the Code Repository

The first thing we want to do is download the code to our local development machine. Let’s do this using the following git command:

git clone https://github.com/rumpl/memphis.git

Now that we have the code local, let’s take a look at the project structure. Open the code in your favorite IDE and expand the root level directories. You’ll see the following file structure.

├── docker-compose.yml
├── notes-service
├── reading-list-service
├── users-service
└── yoda-ui

The application is made up of a couple of simple microservices and a front-end written in React.js. It also uses MongoDB as its datastore.

Typically at this point, we would start a local version of MongoDB or look through the project to find where our applications will be looking for MongoDB. Then, we would start each of our microservices independently and start the UI in hopes that the default configuration works.

However, this can be very complicated and frustrating. Especially if our microservices are using different versions of Node.js and configured differently.

Instead, let’s walk through making this process easier by dockerizing our application and putting our database into a container. 

Step 2: Dockerize your applications

Docker is a great way to provide consistent development environments. It will allow us to run each of our services and UI in a container. We’ll also set up things so that we can develop locally and start our dependencies with one docker command.

The first thing we want to do is dockerize each of our applications. Let’s start with the microservices because they are all written in Node.js, and we’ll be able to use the same Dockerfile.

Creating Dockerfiles

Create a Dockerfile in the notes-service directory and add the following commands.

[Dockerfile in the notes-service directory using Node.js.]
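
The Dockerfile only appears as a screenshot in the original post. A minimal sketch along the same lines, assuming the service starts with npm run start and listens on port 8081 (both are used later in this tutorial), would be:

# A sketch of a basic Node.js Dockerfile; the file in the repository may differ.
FROM node:18.7.0
WORKDIR /code
# Copy the dependency manifests first so installs are cached between builds
COPY package.json package-lock.json ./
RUN npm install
# Copy in the application source
COPY . .
EXPOSE 8081
CMD ["npm", "run", "start"]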

This is a very basic Dockerfile to use with Node.js. If you are not familiar with the commands, you can start with our getting started guide. Also, take a look at our reference documentation.

Building Docker Images

Now that we’ve created our Dockerfile, let’s build our image. From the root of the project, run the following commands:

cd notes-service
docker build -t notes-service .

[Docker build terminal output in the notes-service directory.]

Now that we have our image built, let’s run it as a container and test that it’s working.

docker run --rm -p 8081:8081 --name notes notes-service

[Docker run terminal output in the notes-service directory.]

From this error, we can see we’re having trouble connecting to MongoDB. Two things are broken at this point:

  1. We didn’t provide a connection string to the application.
  2. We don’t have MongoDB running locally.

To resolve this, we could provide a connection string to a shared instance of our database, but we want to be able to manage our database locally without worrying about messing up data our colleagues might be relying on for development.

Step 3: Run MongoDB in a localized container

Instead of downloading MongoDB, installing it, configuring it, and then running the Mongo database service, we can use the Docker Official Image for MongoDB and run it in a container.

Before we run MongoDB in a container, we want to create a couple of volumes that Docker can manage to store our persistent data and configuration. I like to use the managed volumes that Docker provides instead of using bind mounts. You can read all about volumes in our documentation.

Creating volumes for Docker

To create our volumes, we’ll create one for the data and one for the configuration of MongoDB.

docker volume create mongodb

docker volume create mongodb_config
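
You can confirm that both volumes exist; the name filter matches any volume whose name contains “mongodb”:

$ docker volume ls --filter name=mongodb
DRIVER    VOLUME NAME
local     mongodb
local     mongodb_config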

Creating a user-defined bridge network

Now we’ll create a network that our application and database will use to talk with each other. The network is called a user-defined bridge network and gives us a nice DNS lookup service that we can use when creating our connection string.

docker network create mongodb

Now, we can run MongoDB in a container and attach it to the volumes and network we created above. Docker will pull the image from Hub and run it for you locally.

docker run -it --rm -d -v mongodb:/data/db -v mongodb_config:/data/configdb -p 27017:27017 --network mongodb --name mongodb mongo
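
Before moving on, it’s worth checking that the database container is up and listening (the output below is an example):

$ docker ps --filter name=mongodb --format 'table {{.Names}}\t{{.Ports}}'
NAMES     PORTS
mongodb   0.0.0.0:27017->27017/tcp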

Step 4: Set your environment variables

Now that we have a running MongoDB, we also need to set a couple of environment variables so our application knows what port to listen on and what connection string to use to access the database. We’ll do this right in the docker run command.

docker run \
-it --rm -d \
--network mongodb \
--name notes \
-p 8081:8081 \
-e SERVER_PORT=8081 \
-e DATABASE_CONNECTIONSTRING=mongodb://mongodb:27017/yoda_notes \
notes-service

Step 5: Test your database connection

Let’s test that our application is connected to the database and is able to add a note.

curl --request POST \
  --url http://localhost:8081/services/m/notes \
  --header 'content-type: application/json' \
  --data '{
    "name": "this is a note",
    "text": "this is a note that I wanted to take while I was working on writing a blog post.",
    "owner": "peter"
}'

You should receive the following JSON back from our service.

{"code":"success","payload":{"_id":"5efd0a1552cd422b59d4f994","name":"this is a note","text":"this is a note that I wanted to take while I was working on writing a blog post.","owner":"peter","createDate":"2020-07-01T22:11:33.256Z"}}

Once we are done testing, run `docker stop notes mongodb` to stop the containers.
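
If you’re finished with the example entirely and don’t need to keep the data, you can also clean up the network and volumes we created earlier. Note that removing the volumes deletes the stored notes:

$ docker network rm mongodb
$ docker volume rm mongodb mongodb_config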

Awesome! We’ve completed the first steps in Dockerizing our local development environment for Node.js. In Part II, we’ll take a look at how we can use Docker Compose to simplify the process we just went through.

How to set up a local Node.js dev environment — Part 2

In Part I, we took a look at creating Docker images and running containers for Node.js applications. We also took a look at setting up a database in a container and how volumes and networks play a part in setting up your local development environment.

In Part II, we’ll take a look at creating and running a development image where we can compile, add modules, and debug our application all inside of a container. This helps speed up the developer setup time when moving to a new application or project. In this case, our image should have Node.js installed as well as npm or yarn.

We’ll also take a quick look at using Docker Compose to help streamline the processes of setting up and running a full microservices application locally on your development machine.

Let’s create a development image we can use to run our Node.js application.

Step 1: Develop your Dockerfile

Create a local directory on your development machine that we can use as a working directory to save our Dockerfile and any other files that we’ll need for our development image.

$ mkdir -p ~/projects/dev-image

Create a Dockerfile in this folder and add the following commands.

FROM node:18.7.0
RUN apt-get update && apt-get install -y \
  nano \
  vim

We start off by using the node:18.7.0 official image. I’ve found that this image is fine for creating a development image. I like to add a couple of text editors to the image in case I want to quickly edit a file while inside the container.

We did not add an ENTRYPOINT or CMD to the Dockerfile because we will rely on the base image’s ENTRYPOINT, and we will override the CMD when we start the image.

Step 2: Build your Docker image

Let’s build our image.

$ docker build -t node-dev-image .

And now we can run it.

$ docker run -it --rm --name dev -v $(pwd):/code node-dev-image bash

You will be presented with a bash command prompt. Now, inside the container, we can create a JavaScript file and run it with Node.js.

Step 3: Test your image

Run the following commands to test our image.

$ node -e 'console.log("hello from inside our container")'
hello from inside our container

If all goes well, we have a working development image. We can now do everything that we would do in our normal bash terminal.

If you run the above Docker command inside of the notes-service directory, then you will have access to the code inside of the container. You can start the notes-service by simply navigating to the /code directory and running npm run start.
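
For example, from the container’s bash prompt (assuming the project’s dependencies install cleanly inside the container):

$ cd /code
$ npm install    # install dependencies inside the container first
$ npm run start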

Step 4: Use Compose to Develop locally

The notes-service project uses MongoDB as its data store. If you remember from Part I, we had to start the Mongo container manually and connect it to the same network as our notes-service. We also had to create a couple of volumes so we could persist our data across restarts of our application and MongoDB.

Instead, we’ll create a Compose file to start our notes-service and MongoDB with one command. We’ll also set up the Compose file to start the notes-service in debug mode. This way, we can connect a debugger to the running node process.

Open the notes-service in your favorite IDE or text editor and create a new file named docker-compose.dev.yml. Copy and paste the below commands into the file.

services:
  notes:
    build:
      context: .
    ports:
      - 8080:8080
      - 9229:9229
    environment:
      - SERVER_PORT=8080
      - DATABASE_CONNECTIONSTRING=mongodb://mongo:27017/notes
    volumes:
      - ./:/code
    command: npm run debug

  mongo:
    image: mongo:4.2.8
    ports:
      - 27017:27017
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb

volumes:
  mongodb:
  mongodb_config:


This compose file is super convenient because now we don’t have to type all the parameters to pass to the `docker run` command. We can declaratively do that in the compose file.

We are exposing port 9229 so that we can attach a debugger. We are also mapping our local source code into the running container so that we can make changes in our text editor and have those changes picked up in the container.
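
For the debug command to work, the notes-service’s package.json needs a debug script that starts Node with the inspector listening on all interfaces. The sample project provides one; it looks something like this (the exact script may differ):

{
  "scripts": {
    "debug": "nodemon --inspect=0.0.0.0:9229 server.js"
  }
}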

One other really cool feature of using the compose file is that we have service resolution set up to use the service names. As a result, we can now use “mongo” in our connection string. We use “mongo” because that is what we named our mongo service in the compose file.
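
You can see this name resolution in action from inside the running notes container, assuming the stack is up (the IP address will vary):

$ docker compose -f docker-compose.dev.yml exec notes getent hosts mongo
172.19.0.2      mongo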

Let’s start our application and confirm that it is running properly.

$ docker compose -f docker-compose.dev.yml up --build

We pass the --build flag so Docker Compose builds our image and then starts it.

If all goes well, you should see the logs from the notes and mongo services:

[Docker compose terminal output showing logs from the notes and mongo services.]

Now let’s test our API endpoint. Run the following curl command:

$ curl --request GET --url http://localhost:8080/services/m/notes

You should receive the following response:

{"code":"success","meta":{"total":0,"count":0},"payload":[]}

Step 5: Connect to a Debugger

We’ll use the debugger that comes with the Chrome browser. Open Chrome on your machine, and then type the following into the address bar.

about:inspect

The following screen will open.

[Chrome DevTools showing a list of devices and remote targets.]

Click the “Open dedicated DevTools for Node” link. This will open the DevTools that are connected to the running Node.js process inside our container.

Let’s change the source code and then set a breakpoint. 

Add the following code to the server.js file on line 19 and save the file.

server.use('/foo', (req, res) => {
  return res.json({ "foo": "bar" })
})

If you take a look at the terminal where our compose application is running, you’ll see that nodemon noticed the changes and reloaded our application.

[Docker compose terminal output showing the nodemon-reloaded application.]

Navigate back to the Chrome DevTools and set a breakpoint on line 20. Then, run the following curl command to trigger the breakpoint.

$ curl --request GET --url http://localhost:8080/foo

💥 BOOM 💥 You should have seen the code break on line 20, and now you are able to use the debugger just like you would normally. You can inspect and watch variables, set conditional breakpoints, view stack traces, and a bunch of other stuff.

Conclusion

In this article, we completed the first steps in Dockerizing our local development environment for Node.js. Then, we took things a step further and created a general development image that can be used like our normal command line. We also set up our compose file to map our source code into the running container and exposed the debugging port.

