This post was co-written by Kris Rivera, Principal Software Engineer at Rapid7.
Rapid7 is a Boston-based provider of security analytics and automation solutions enabling organizations to implement an active approach to cybersecurity. Over 10,000 customers rely on Rapid7 technology, services, and research to improve security outcomes and securely advance their organizations.
The security space is constantly changing, with new threats arising every day. To meet their customers’ needs, Rapid7 focuses on increasing the reliability and velocity of software builds while also maintaining their quality.
That’s why Rapid7 turned to Docker. Their teams use Docker to help development, support the sales pipeline, provide testing environments, and deploy to production in an automated, reliable way.
By using Docker, Rapid7 transformed its onboarding by automating formerly manual processes. Setting up a new development environment now takes minutes instead of days, and developers can produce faster builds that enable regular code releases to support changing requirements.
Automating the onboarding process
When developers first joined Rapid7, they were met with a static, manual process that was time-consuming and error-prone. Configuring a development environment isn't exciting for most developers. They want to spend most of their time creating! And setting up the environment is the least glamorous part of the process.
Docker helped automate this cumbersome process. Using Docker, Rapid7 could create containerized systems that were preconfigured with the right OS and developer tools. Docker Compose enabled multiple containers to communicate with each other, and it had the hooks needed to incorporate custom scripting and debugging tools.
Once the onboarding setup was configured through Docker, the process was simple for other developers to replicate. What once took multiple days now takes minutes.
Expanding containers into production
The Rapid7 team streamlined the setup of the development environment by using a Dockerfile. This helped them create an image with every required dependency and software package.
But they didn’t stop there. As this single Docker image evolved into a more complex system, they realized that they’d need more Docker images and container orchestration. That’s when they integrated Docker Compose into the setup.
Docker Compose simplified Docker image builds for each of Rapid7’s environments. It also encouraged a high level of service separation that split out different initialization steps into separate bounded contexts. Plus, they could leverage Docker Compose for inter-container communication, private networks, Docker volumes, defining environment variables with anchors, and linking containers for communication and aliasing.
This was a real game changer for Rapid7, because Docker Compose truly gave them unprecedented flexibility. Teams then added scripting to orchestrate communication between containers when a trigger event occurs (like when a service has completed).
Using Docker, Docker Compose, and scripting, Rapid7 was able to create a solution for the development team that could reliably replicate a complete development environment. To optimize the initialization, Rapid7 wanted to decrease the startup times beyond what Docker enables out of the box.
Optimizing build times even further
Once a Docker base image is created, its bottom layers rarely have to change, so that initial build is essentially a one-time cost. Even when the images do change, the cached layers make it a breeze to get through that process quickly. However, software dependencies still have to be reinstalled from scratch, a cost paid on each Docker image update.
Committing the installed software dependencies back to the base image turns that step into a simple, incremental, and often skippable stage. The Docker image stays usable in both development and production, all from the development computer.
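In sketch form, that commit step looks like the snippet below, where the container ID and image name are placeholders; a fuller script for this appears later in this post:

# Bake newly installed dependencies into the base image so the
# next environment startup can skip reinstalling them
docker commit <container-id> <docker-image-name>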
All of these efficiencies together streamlined an already fast 15-minute process down to 5 minutes, making it easy for developers to get productive faster.
How to build it for yourself
Check out the code examples and explanations below to see how to replicate this setup for yourself. We'll now tackle the key steps you'll need to follow to get started.
Downloading Docker
Download and install the latest version of Docker to be able to run Docker-in-Docker. Docker-in-Docker lets a container have its own Docker installation, so that container can run other containers or pull images.
To enable Docker-in-Docker, you can apt install the docker.io package as one of the first commands in your Dockerfile. Once the container is configured, mount the Docker socket from the host installation:
# Dockerfile
FROM ubuntu:20.04

# Install dependencies
RUN apt update && \
    apt install -y docker.io
Next, build your Docker image by running the following command in your CLI or shell script file:
docker build -t <docker-image-name> .
Then, start your Docker container with the following command:
docker run -v /var/run/docker.sock:/var/run/docker.sock -ti <docker-image-name>
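To confirm the socket mount works, try a Docker command from inside the container. If it lists the containers running on the host, Docker-in-Docker is wired up correctly (this check is our suggestion, not part of the original setup):

# Run inside the container: should list the host's running containers
docker ps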
Using a Docker commit script
Committing layered changes to your base image is what drives the core of these Dockerized development environments. Docker fetches the container ID based on the service name, and the changes you make to the running container are committed to the desired image.
Because the host's Docker socket is mounted into the container when you execute the docker commit command, the change is applied to the base image located in the host's Docker installation.
#!/bin/bash
SERVICE=${1}
IMAGE=${2}

# Commit changes to image
CONTAINER_ID=$(docker ps -aqf "name=${SERVICE}")

if [ -n "$CONTAINER_ID" ]; then
    echo "--- Committing changes from $SERVICE to $IMAGE ---"
    docker commit "$CONTAINER_ID" "$IMAGE"
fi
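If you save the script above as, say, commit-image.sh (the name is our placeholder), committing the running webserver service back to its image looks like this:

# Commit the running "webserver" container's changes back to its image
./commit-image.sh webserver image-with-docker:latest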
Updating your environment
Next, mount the Docker socket from the host installation into the container. This lets any Docker operations performed inside the container actually modify the host's Docker images and installation; without it, changes made in the container only persist until the container is stopped and removed. When mounting the source code, also add the :z option, which tells Docker that the content will be shared between containers.
Add the following code into your Docker Compose file:
# docker-compose.yaml
services:
  service-name:
    image: image-with-docker:latest
    volumes:
      - /host/code/path:/container/code/path:z
      - /var/run/docker.sock:/var/run/docker.sock
Orchestrating components
Once Docker Compose has the appropriate services configured, you can start your environment in two different ways. Use either the docker compose up command, or start an individual service (along with its linked services) with the following command:
docker compose start webserver
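One caveat worth noting, as a general Docker Compose behavior rather than anything specific to this setup: docker compose start only restarts containers that have already been created. On a fresh checkout, create and start them with up instead:

# Create and start the webserver service (plus linked services) in the background
docker compose up -d webserver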
The main container references each linked service by its service name. This makes it easy to override any environment variables using those names. Check out the YAML file below:
services:
  webserver:
  mysql:
    ports:
      - '3306:3306'
    volumes:
      - dbdata:/var/lib/mysql
  redis:
    ports:
      - '6379:6379'
    volumes:
      - redisdata:/data

volumes:
  dbdata:
  redisdata:
Note: For each service, you'll want to choose and specify your preferred Docker Official Image version. Additionally, the MySQL Docker Official Image comes with important environment variables defaulted in, though you can specify them here as needed.
Managing separate services
Starting a small part of the stack can also be useful if a developer only needs that specific piece. For example, if we just wanted to start the MySQL service, we’d run the following command:
docker compose start mysql
We can stop this service just as easily with the following command:
docker compose stop mysql
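To see the current state of each service at a glance, you can use docker compose ps:

# List services from the Compose file, including stopped ones
docker compose ps -a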
Configuring your environment
Mounting volumes into the database services lets each container persist changes to its respective database, while the database containers themselves remain ephemeral.
In the main entry point and script orchestrator, provide a -p flag to ./start.sh to set the PROD_BUILD environment variable. The entry point reads this variable and builds either a production or a development version of the development environment.
First, here’s how that script looks:
# start.sh
while [ "$1" != "" ]; do
    case $1 in
        -p | --prod) PROD_BUILD="true";;
    esac
    shift
done
Second, export the variable so that Docker Compose and other child processes can read it:
export PROD_BUILD=$PROD_BUILD
Third, here’s your sample Docker Compose file:
# docker-compose.yaml
services:
  build-frontend:
    entrypoint:
      - bash
      - -c
      - "[[ \"$PROD_BUILD\" == \"true\" ]] && make fe-prod || make fe-dev"
Note: Don't forget to add your preferred image under build-frontend if you're aiming to make a fully functional Docker Compose file.
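Putting these pieces together, a typical invocation might look like the sketch below. It assumes start.sh parses the flag, exports the variable, and then hands off to Docker Compose, per the snippets above:

# Request a production build of the frontend
./start.sh --prod

# Inside start.sh, after argument parsing:
export PROD_BUILD=$PROD_BUILD
docker compose up build-frontend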
What if we need to troubleshoot any issues that arise? Debugging inside a running container only requires the appropriate debugging library in the mounted source code and an open port for attaching the debugger. Here's our YAML file:
# docker-compose.yaml
services:
  webserver:
    ports:
      - '5678:5678'
    links:
      - mysql
      - redis
    entrypoint:
      - bash
      - -c
      - ./start-webserver.sh
Note: Like in our previous examples, don't forget to specify an image underneath webserver when creating a functional Docker Compose file.
In your editor of choice, provide a launch configuration to attach the debugger using the specified port. Once the container is running, run the configuration and the debugger will be attached:
// launch-setting.json
{
    "configurations": [
        {
            "name": "Python: Remote Attach",
            "type": "python",
            "request": "attach",
            "port": 5678,
            "host": "localhost",
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}",
                    "remoteRoot": "."
                }
            ]
        }
    ]
}
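On the container side, this launch configuration assumes a Python debug server is listening on port 5678. One way to do that, using the debugpy library (our assumption; app.py stands in for the real entry point), is to start the app under debugpy inside start-webserver.sh:

# start-webserver.sh (sketch)
# Run the app under debugpy so an editor can attach on port 5678;
# --wait-for-client pauses startup until the debugger attaches
python -m debugpy --listen 0.0.0.0:5678 --wait-for-client app.py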
Confirming that everything works
Once the full stack is running, it's easy to access the main entry point web server via a browser on the defined webserver port.
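A quick smoke test from the host works just as well as a browser; substitute whatever port your webserver service publishes:

# <webserver-port> is a placeholder for your published web server port
curl http://localhost:<webserver-port>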
The docker ps command will show your running containers, with Docker managing the communication between them.
The entire software service is now running completely in Docker. All the code lives on the host computer and is mounted into the Docker container, so the development environment is completely portable using only Docker.
Remembering important tradeoffs
This approach has some limitations. First, running your developer environment in Docker will incur additional resource overhead. Docker has to run and requires extra computing resources as a result. Also, including multiple stages will require scripting as a final orchestration layer to enable communication between containers.
Wrapping up
Rapid7's development team uses Docker to quickly create their development environments. They use the Docker CLI, Docker Desktop, Docker Compose, and shell scripts to create a highly customized and robust Docker-friendly environment. They can use this to spin up any part of their development stack.
The setup also helps Rapid7 compile frontend assets, start cache and database servers, run the backend service with different parameters, or start the entire application stack. Using a “Docker-in-Docker” approach of mounting the Docker socket within running containers makes this possible. Docker’s ability to commit layers to the base image after dependencies are either updated or installed is also key.
The shell scripts will export the required environment variables and then run specific processes in a specific order. Finally, Docker Compose makes sure that the appropriate service containers and dependencies are running.
Achieving future development goals
Relying on the Docker tool chain has been truly beneficial for Rapid7, since this has helped them create a consistent environment compatible with any part of their application stack. This integration has helped Rapid7 do the following:
- Deploy extremely reliable software to advanced customer environments
- Analyze code before merging it in development
- Deliver much more stable code
- Simplify onboarding
- Form an extremely flexible and configurable development environment
By using Docker, Rapid7 is continuously refining its processes to push past the boundaries of what’s possible. Their next goal is to deliver production-grade stable builds on a daily basis, and they’re confident that Docker can help them get there.