How Rapid7 Reduced Setup Time From Days to Minutes With Docker

This post was co-written by Kris Rivera, Principal Software Engineer at Rapid7.


Rapid7 is a Boston-based provider of security analytics and automation solutions enabling organizations to implement an active approach to cybersecurity. Over 10,000 customers rely on Rapid7 technology, services, and research to improve security outcomes and securely advance their organizations.

The security space is constantly changing, with new threats arising every day. To meet their customers’ needs, Rapid7 focuses on increasing the reliability and velocity of software builds while also maintaining their quality.

That’s why Rapid7 turned to Docker. Their teams use Docker to help development, support the sales pipeline, provide testing environments, and deploy to production in an automated, reliable way. 

By using Docker, Rapid7 transformed their onboarding by automating previously manual steps. Setting up a new development environment now takes minutes instead of days. Their developers can produce faster builds that enable regular code releases to support changing requirements.

Automating the onboarding process

When developers first joined Rapid7, they were met with a static, manual process that was time-consuming and error-prone. Configuring a development environment isn’t exciting for most developers. They want to spend most of their time creating! And setting up the environment is the least glamorous part of the process.

Docker helped automate this cumbersome process. Using Docker, Rapid7 could create containerized systems that were preconfigured with the right OS and developer tools. Docker Compose enabled multiple containers to communicate with each other, and it had the hooks needed to incorporate custom scripting and debugging tools.

Once the onboarding setup was configured through Docker, the process was simple for other developers to replicate. What once took multiple days now takes minutes.

Expanding containers into production

The Rapid7 team streamlined the setup of the development environment by using a Dockerfile. This helped them create an image with every required dependency and software package.

But they didn’t stop there. As this single Docker image evolved into a more complex system, they realized that they’d need more Docker images and container orchestration. That’s when they integrated Docker Compose into the setup.

Docker Compose simplified Docker image builds for each of Rapid7’s environments. It also encouraged a high level of service separation that split out different initialization steps into separate bounded contexts. Plus, they could leverage Docker Compose for inter-container communication, private networks, Docker volumes, defining environment variables with anchors, and linking containers for communication and aliasing.
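As a minimal sketch of the anchors technique mentioned above (the service names and variables here are illustrative, not taken from Rapid7’s actual setup), a Compose file can define a shared environment block once and reuse it across services:

# docker-compose.yaml

x-common-env: &common-env
  APP_ENV: development
  LOG_LEVEL: debug

services:
  webserver:
    environment: *common-env
  worker:
    environment: *common-env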

This was a real game changer for Rapid7, because Docker Compose truly gave them unprecedented flexibility. Teams then added scripting to orchestrate communication between containers when a trigger event occurs (like when a service has completed).

Using Docker, Docker Compose, and scripting, Rapid7 was able to create a solution for the development team that could reliably replicate a complete development environment. To optimize the initialization, Rapid7 wanted to decrease the startup times beyond what Docker enables out of the box.

Optimizing build times even further

Once a Docker base image is created, its bottom layers rarely have to change, so that initial build is essentially a one-time cost. Even when an image does change, the cached layers keep rebuilds fast. However, software dependencies still have to be reinstalled from scratch on every Docker image update.

Committing the installed software dependencies back to the base image turns that reinstall into a simple, incremental, and often skippable step. The resulting image remains usable in both development and production, entirely from the development computer.
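As a minimal sketch of this commit-back pattern (the container name, image placeholder, and dependency are hypothetical), the flow looks like this:

# Install a new dependency in a named container, then commit
# the result back to the base image and clean up
docker run --name dev-env <docker-image-name> \
    bash -c "apt update && apt install -y <new-dependency>"
docker commit dev-env <docker-image-name>
docker rm dev-env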

All of these efficiencies together streamlined an already fast 15-minute process down to 5 minutes — making it easy for developers to get productive faster.

How to build it for yourself

Below are code examples and explanations showing how to replicate this setup for yourself, along with the key steps you’ll need to follow to get started.

Downloading Docker

Download and install the latest version of Docker so you can use Docker-in-Docker. Docker-in-Docker means installing Docker inside a container, which lets that container run other containers or pull images.

To enable Docker-in-Docker, you can apt install the docker.io package as one of your first commands in your Dockerfile. Once the container is configured, mount the Docker socket from the host installation:

# Dockerfile

FROM ubuntu:20.04

# Install dependencies
RUN apt update && \
    apt install -y docker.io

Next, build your Docker image by running the following command in your CLI or shell script file:

docker build -t <docker-image-name> .

Then, start your Docker container with the following command:

docker run -v /var/run/docker.sock:/var/run/docker.sock -ti <docker-image-name>

Using a Docker commit script

Committing layered changes to your base image is what drives the core of the Dev Environments in Docker. Docker fetches the container ID based on the service name, and the changes you make to the running container are committed to the desired image. 

Because the host Docker socket is mounted into the container when executing the docker commit command, the container will apply the change to the base image located in the host Docker installation.

#!/bin/bash

SERVICE=${1}
IMAGE=${2}

# Look up the container ID by service name
CONTAINER_ID=$(docker ps -aqf "name=${SERVICE}")

# Commit changes to the image
if [ ! -z "$CONTAINER_ID" ]; then
    echo "--- Committing changes from $SERVICE to $IMAGE ---"
    docker commit $CONTAINER_ID $IMAGE
fi

Updating your environment

Mounting the source code alone is insufficient without the :z option, which tells Docker that the content will be shared between containers.

You’ll have to mount the host machine’s Docker socket into the container. This lets any Docker operations performed inside the container actually modify the host Docker images and installation. Without this, changes made in the container are only going to persist in the container until it’s stopped and removed.

Add the following code into your Docker Compose file:

# docker-compose.yaml

services:
  service-name:
    image: image-with-docker:latest
    volumes:
      - /host/code/path:/container/code/path:z
      - /var/run/docker.sock:/var/run/docker.sock

Orchestrating components

Once Docker Compose has the appropriate services configured, you can start your environment in two ways. Either use the docker compose up command, or start an individual service (along with its linked services) with the following command:

docker compose start webserver
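Alternatively, bringing up the entire stack in detached (background) mode looks like this:

docker compose up -d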

The main container references each linked service by its service name, which also makes it easy to override environment variables using those names. Check out the YAML file below:

services:
  webserver:
  mysql:
    ports:
      - '3306:3306'
    volumes:
      - dbdata:/var/lib/mysql
  redis:
    ports:
      - '6379:6379'
    volumes:
      - redisdata:/data

volumes:
  dbdata:
  redisdata:

Note: For each service, you’ll want to choose and specify your preferred Docker Official Image version. Additionally, the MySQL Docker Official Image supports several important environment variables, which you can specify here as needed.
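As a minimal sketch of both notes (the values, and the DB_HOST/REDIS_HOST variable names on the webserver, are illustrative assumptions rather than part of the original setup):

# docker-compose.yaml (excerpt)

services:
  webserver:
    environment:
      DB_HOST: mysql   # Compose service names double as hostnames
      REDIS_HOST: redis
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # documented MySQL image variables
      MYSQL_DATABASE: appdb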

Managing separate services

Starting a small part of the stack can also be useful if a developer only needs that specific piece. For example, if we just wanted to start the MySQL service, we’d run the following command:

docker compose start mysql

We can stop this service just as easily with the following command:

docker compose stop mysql

Configuring your environment

Mounting named volumes into the database services persists the data those containers write, while the database containers themselves remain ephemeral.

In the main entry point and script orchestrator, pass a -p flag to ./start.sh to set the PROD_BUILD environment variable. The entry point reads this variable and builds either a production or a development version of the environment.

First, here’s how that script looks:

# start.sh

while [ "$1" != "" ]; do
    case $1 in
        -p | --prod) PROD_BUILD="true";;
    esac
    shift
done

Second, here’s a sample shell script:

export PROD_BUILD=$PROD_BUILD

Third, here’s your sample Docker Compose file:

# docker-compose.yaml

services:
  build-frontend:
    entrypoint:
      - bash
      - -c
      - "[[ \"$PROD_BUILD\" == \"true\" ]] && make fe-prod || make fe-dev"

Note: Don’t forget to add your preferred image under build-frontend if you’re aiming to make a fully functional Docker Compose file.

What if we need to troubleshoot any issues that arise? Debugging inside a running container only requires the appropriate debugging library in the mounted source code and an open port to attach the debugger. Here’s our YAML file:

# docker-compose.yaml

services:
  webserver:
    ports:
      - '5678:5678'
    links:
      - mysql
      - redis
    entrypoint:
      - bash
      - -c
      - ./start-webserver.sh

Note: Like in our previous examples, don’t forget to specify an image underneath webserver when creating a functional Docker Compose file.
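The editor configuration that follows assumes a Python web server. As a minimal sketch (assuming the debugpy library, which pairs with the Python remote-attach configuration shown below), the server process can open the debug port like this:

# Inside the web server's startup code (placement is hypothetical)
import debugpy

# Listen on all interfaces so the editor can attach via the published port
debugpy.listen(("0.0.0.0", 5678))

# Optionally block until the editor attaches:
# debugpy.wait_for_client()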

In your editor of choice, provide a launch configuration to attach the debugger using the specified port. Once the container is running, run the configuration and the debugger will be attached:

#launch-setting.json
{
  "configurations": [
    {
      "name": "Python: Remote Attach",
      "type": "python",
      "request": "attach",
      "port": 5678,
      "host": "localhost",
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}",
          "remoteRoot": "."
        }
      ]
    }
  ]
}

Confirming that everything works

Once the full stack is running, it’s easy to access the main entry point web server via a browser on the defined webserver port.

The docker ps command will show your running containers, and Docker manages the communication between them.

The entire software service now runs completely in Docker. All the code lives on the host computer and is mounted into the Docker container, making the development environment completely portable using only Docker.

Remembering important tradeoffs

This approach has some limitations. First, running your developer environment in Docker incurs additional resource overhead, since Docker itself needs computing resources to run. Also, a setup with multiple stages requires scripting as a final orchestration layer to enable communication between containers.

Wrapping up

Rapid7’s development team uses Docker to quickly create their development environments. They use the Docker CLI, Docker Desktop, Docker Compose, and shell scripts to create a uniquely robust, Docker-friendly environment. They can use this to spin up any part of their development environment.

The setup also helps Rapid7 compile frontend assets, start cache and database servers, run the backend service with different parameters, or start the entire application stack. Using a “Docker-in-Docker” approach of mounting the Docker socket within running containers makes this possible. Docker’s ability to commit layers to the base image after dependencies are either updated or installed is also key. 

The shell scripts export the required environment variables and then run processes in the proper order. Finally, Docker Compose makes sure that the appropriate service containers and dependencies are running.

Achieving future development goals

Relying on the Docker tool chain has been truly beneficial for Rapid7, since this has helped them create a consistent environment compatible with any part of their application stack. This integration has helped Rapid7 do the following: 

  • Deploy extremely reliable software to advanced customer environments
  • Analyze code before merging it in development
  • Deliver much more stable code
  • Simplify onboarding 
  • Form an extremely flexible and configurable development environment

By using Docker, Rapid7 is continuously refining its processes to push past the boundaries of what’s possible. Their next goal is to deliver production-grade stable builds on a daily basis, and they’re confident that Docker can help them get there.

Developer Engagement in the Remote Work Era with RedMonk and Miva

With the rise of remote-first and hybrid work models in the tech world, promoting developer engagement has become more important than ever. Maintaining a culture of engagement, productivity, and collaboration can be a hurdle for businesses making this new shift to remote work. But it’s far from impossible.


As a fully-remote, developer-focused company, Docker was thrilled to join in a like-minded conversation with RedMonk and Miva. Jake Levirne (Head of Product at Docker) was joined by Jon Burchmore (CTO at Miva) for a talk led by RedMonk’s Sr. Analyst Rachel Stephens. In this webinar on developer engagement in the remote work era, these industry specialists discuss navigating developer engagement with a focus on productivity, collaboration, and much more.

Navigating developer engagement

Companies with newly-distributed work environments often struggle to create an engaging culture for their employees. This remains especially true for the developer workforce. Because of this, developer engagement has become a priority for more organizations than ever, including Miva.

“We actually brought [developer engagement] up as a part of our developer roadmap. As we were talking about ‘this is our product roadmap for 2022 — what’s the biggest challenge?’, my answer was ‘keeping people engaged so that we can keep productivity high.’” – Jon Burchmore

Like Miva, other organizations are starting to incorporate developer engagement into their broader business decisions. Teams are intentionally choosing tools and processes that support not only development requirements but also developer involvement and preferences. By taking a look at productivity and collaboration, we can see the impact of these decisions.

Measuring developer productivity and collaboration

As both an art and a science, measuring developer productivity and collaboration can be difficult. While metrics can be informative, Jon is most interested in seeing the qualitative impact.

“How much is the team engaging with itself […] and is that engagement bottom up […] or from peer-to-peer? And a healthy team to me feels like a team where the peers are engaging as well. It’s not everyone just going upstream to get their problems solved.” – Jon Burchmore

As Jake adds, it’s more than just tracking lines of code. It’s about focusing on the outcomes. While developer engagement can be difficult to measure, the message is clear. Engaged developers are non-negotiable for high-performing teams.

Approaching developer collaboration

Developer collaboration is another linchpin for building developer engagement. Teams are now challenging themselves to find more opportunities for pair programming or similar types of coworking. Healthy collaboration should also not be limited to single teams.

“When you unlock collaboration both within teams and across teams, I think that’s what allows you to build what effectively are the increasingly complex real-world applications that are needed to keep creating business value.” – Jake Levirne

Organizations are taking a more holistic, inter-team perspective to avoid the dreaded, siloed productivity approach.

Watch the full, on-demand webinar

These points are just a snapshot of our talk with RedMonk and Miva on the challenges of developer engagement in the remote work era. Hear the rest of the discussion and more detail by watching the full conversation on demand.
