How Kinsta Improved the End-to-End Development Experience by Dockerizing Every Step of the Production Cycle
https://www.docker.com/blog/how-kinsta-improved-the-end-to-end-development-experience-by-dockerizing-every-step-of-the-production-cycle/

Guest author Amin Choroomi is an experienced software developer at Kinsta. Passionate about Docker and Kubernetes, he specializes in application development and DevOps practices. His expertise lies in leveraging these transformative technologies to streamline deployment processes and enhance software scalability.

One of the biggest challenges of developing and maintaining cloud-native applications at the enterprise level is having a consistent experience through the entire development lifecycle. This process is even harder for remote companies with distributed teams working on different platforms, with different setups, and communicating asynchronously.

At Kinsta, we have projects of all sizes for application hosting, database hosting, and managed WordPress hosting. We need to provide a consistent, reliable, and scalable solution that allows:

  • Developers and quality assurance teams, regardless of their operating systems, to create a straightforward and minimal setup for developing and testing features.
  • DevOps, SysOps, and Infrastructure teams to configure and maintain staging and production environments.

Overcoming the challenge of developing cloud-native applications on a distributed team

At Kinsta, we rely heavily on Docker for this consistent experience at every step, from development to production. In this article, we’ll walk you through:

  • How to leverage Docker Desktop to increase developers’ productivity.
  • How we build Docker images and push them to Google Container Registry via CI pipelines with CircleCI and GitHub Actions.
  • How we use CD pipelines to promote incremental changes to production using Docker images, Google Kubernetes Engine, and Cloud Deploy.
  • How the QA team seamlessly uses prebuilt Docker images in different environments.

Using Docker Desktop to improve the developer experience

Running an application locally requires developers to meticulously prepare the environment, install all the dependencies, set up servers and services, and make sure they are properly configured. When you run multiple applications, this approach can be cumbersome, especially for complex projects with multiple dependencies. And when you introduce multiple contributors with multiple operating systems, chaos ensues. To prevent this, we use Docker.

With Docker, you can declare the environment configurations, install the dependencies, and build images with everything where it should be. Anyone, anywhere, with any OS can use the same images and have exactly the same experience as anyone else.

Declare your configuration with Docker Compose

To get started, you need to create a Docker Compose file, docker-compose.yml. This is a declarative configuration file written in YAML format that tells Docker your application’s desired state. Docker uses this information to set up the environment for your application.

Docker Compose files come in handy when you have more than one container running and there are dependencies between containers.

To create your docker-compose.yml file:

  1. Start by choosing an image as the base for your application. Search Docker Hub to find a Docker image that already contains your app’s dependencies. Make sure to use a specific image tag to avoid errors; using the latest tag can cause unforeseen errors in your application. You can use multiple base images for multiple dependencies — for example, one for PostgreSQL and one for Redis.
  2. Use volumes to persist data on your host if you need to. Persisting data on the host machine helps you avoid losing data if Docker containers are deleted or if you have to recreate them.
  3. Use networks to isolate your setup to avoid network conflicts with the host and other containers. It also helps your containers to find and communicate with each other easily.

Bringing it all together, we have a docker-compose.yml that looks like this:

version: '3.8'

services:
  db:
    image: postgres:14.7-alpine3.17
    hostname: mk_db
    restart: on-failure
    ports:
      - ${DB_PORT:-5432}:5432
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${DB_USER:-user}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-password}
      POSTGRES_DB: ${DB_NAME:-main}
    networks:
      - mk_network
  redis:
    image: redis:6.2.11-alpine3.17
    hostname: mk_redis
    restart: on-failure
    ports:
      - ${REDIS_PORT:-6379}:6379
    networks:
      - mk_network
      
volumes:
  db_data:

networks:
  mk_network:
    name: mk_network

Containerize the application

Build a Docker image for your application

To begin, we need to build a Docker image using a Dockerfile, and then call that from docker-compose.yml.

Follow these five steps to create your Dockerfile:

1. Start by choosing an image as a base. Use the smallest base image that works for the app. Usually, alpine images are minimal with nearly zero extra packages installed. You can start with an alpine image and build on top of that:

   FROM node:18.15.0-alpine3.17

2. Sometimes you need to use a specific CPU architecture to avoid conflicts. For example, suppose that you use an arm64-based processor but you need to build an amd64 image. You can do that by specifying the --platform flag in the Dockerfile:

   FROM --platform=amd64 node:18.15.0-alpine3.17

3. Define the application directory, install the dependencies, and copy the source code into it:

    WORKDIR /opt/app 
    COPY package.json yarn.lock ./ 
    RUN yarn install 
    COPY . .

4. Call the Dockerfile from docker-compose.yml:

     services:
      ...redis
      ...db
      
      app:
        build:
          context: .
          dockerfile: Dockerfile
        platform: linux/amd64
        command: yarn dev
        restart: on-failure
        ports:
          - ${PORT:-4000}:${PORT:-4000}
        networks:
          - mk_network
        depends_on:
          - redis
          - db

5. Implement auto-reload so that when you change something in the source code, you can preview your changes immediately without having to rebuild the application manually. To do that, build the image first, then run it in a separate service:

     services:
      ... redis
      ... db
      
      build-docker:
        image: myapp
        build:
          context: .
          dockerfile: Dockerfile
      app:
        image: myapp
        platform: linux/amd64
        command: yarn dev
        restart: on-failure
        ports:
          - ${PORT:-4000}:${PORT:-4000}
        volumes:
          - .:/opt/app
          - node_modules:/opt/app/node_modules
        networks:
          - mk_network
        depends_on:
          - redis
          - db
          - build-docker

Pro tip: Note that node_modules is also mounted explicitly to avoid platform-specific issues with packages. Instead of using the node_modules directory from the host, the container uses its own copy, kept in a separate named volume.
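Since node_modules is referenced as a named volume, Docker Compose also expects a matching top-level declaration, which the snippet above omits. A minimal sketch:

volumes:
  node_modules: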

Incrementally build the production images with continuous integration 

The majority of our apps and services use CI/CD for deployment, and Docker plays an important role in the process. Every change in the main branch immediately triggers a build pipeline through either GitHub Actions or CircleCI. The general workflow is simple: It installs the dependencies, runs the tests, builds the Docker image, and pushes it to Google Container Registry (or Artifact Registry). In this article, we’ll describe the build step.

Building the Docker images

We use multi-stage builds for security and performance reasons.

Stage 1: Builder

In this stage, we copy the entire code base with all source and configuration, install all dependencies, including dev dependencies, and build the app. It creates a dist/ folder and copies the built version of the code there. This image is far too large, however, and carries too big a footprint to be used in production. Also, as we use private NPM registries, we use our private NPM_TOKEN in this stage as well. So, we definitely don’t want this stage to be exposed to the outside world. The only thing we need from this stage is the dist/ folder.

Stage 2: Production

Most people use this stage for runtime because it is close to what we need to run the app. However, we still need to install production dependencies, and that means we leave footprints and still need the NPM_TOKEN. So, this stage is not ready to be exposed either. Here, you should also note the yarn cache clean command; that tiny command cuts our image size by up to 60 percent.

Stage 3: Runtime

The last stage needs to be as slim as possible with a minimal footprint. So, we just copy the fully baked app from the production stage and move on. We leave all the yarn and NPM_TOKEN artifacts behind and only run the app.

This is the final Dockerfile.production:

# Stage 1: build the source code 
FROM node:18.15.0-alpine3.17 as builder 
WORKDIR /opt/app 
COPY package.json yarn.lock ./ 
RUN yarn install 
COPY . . 
RUN yarn build 

# Stage 2: copy the built version and build the production dependencies 
FROM node:18.15.0-alpine3.17 as production 
WORKDIR /opt/app 
COPY package.json yarn.lock ./ 
RUN yarn install --production && yarn cache clean 
COPY --from=builder /opt/app/dist/ ./dist/ 

# Stage 3: copy the production ready app to runtime 
FROM node:18.15.0-alpine3.17 as runtime 
WORKDIR /opt/app 
COPY --from=production /opt/app/ . 
CMD ["yarn", "start"]

Note that, for all the stages, we start by copying the package.json and yarn.lock files, then install the dependencies, and only then copy the rest of the code base. The reason is that Docker builds each command as a layer on top of the previous one, and each new build reuses existing layers where possible, only rebuilding the layers that have changed.

Let’s say you have changed something in src/services/service1.ts without touching the packages. That means the first four layers of the builder stage are untouched and can be reused. This approach makes the build process significantly faster.
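As a rough local illustration (the image name here is a placeholder, not the actual one), rebuilding after a source-only change reuses the cached dependency layers:

docker build -f Dockerfile.production -t my-app:dev .
# ...edit src/services/service1.ts, then rebuild:
docker build -f Dockerfile.production -t my-app:dev .

In the second build, everything up to the COPY . . step comes from the cache, and only the later layers are rebuilt.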

Pushing the app to Google Container Registry through CircleCI pipelines

There are several ways to build a Docker image in CircleCI pipelines. In our case, we chose to use the circleci/gcp-gcr orb. Thanks to Docker, minimal configuration is needed to build and push our app:

executors:
  docker-executor:
    docker:
      - image: cimg/base:2023.03
orbs:
  gcp-gcr: circleci/gcp-gcr@0.15.1
jobs:
  ...
  deploy:
    description: Build & push image to Google Artifact Registry
    executor: docker-executor
    steps:
      ...
      - gcp-gcr/build-image:
          image: my-app
          dockerfile: Dockerfile.production
          tag: ${CIRCLE_SHA1:0:7},latest
      - gcp-gcr/push-image:
          image: my-app
          tag: ${CIRCLE_SHA1:0:7},latest

Pushing the app to Google Container Registry through GitHub Actions

As an alternative to CircleCI, we can use GitHub Actions to deploy the application continuously.

We set up gcloud, then build and push the Docker image to gcr.io:

jobs:
  setup-build:
    name: Setup, Build
    runs-on: ubuntu-latest

    steps:
    - name: Checkout
      uses: actions/checkout@v3

    - name: Get Image Tag
      run: |
        echo "TAG=$(git rev-parse --short HEAD)" >> $GITHUB_ENV

    - uses: google-github-actions/setup-gcloud@master
      with:
        service_account_key: ${{ secrets.GCP_SA_KEY }}
        project_id: ${{ secrets.GCP_PROJECT_ID }}

    - run: |-
        gcloud --quiet auth configure-docker

    - name: Build
      run: |-
        docker build \
          --tag "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:$TAG" \
          --tag "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:latest" \
          .

    - name: Push
      run: |-
        docker push "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:$TAG"
        docker push "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:latest"

With every small change pushed to the main branch, we build and push a new Docker image to the registry.

Deploying changes to Google Kubernetes Engine using Google Delivery Pipelines

Having ready-to-use Docker images for each and every change also makes it easier to deploy to production or roll back in case something goes wrong. We use Google Kubernetes Engine to manage and serve our apps, and we use Google Cloud Deploy and Delivery Pipelines for our continuous deployment process.
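For context, the delivery pipeline referenced in the next step is itself defined declaratively in Cloud Deploy. The following is a hypothetical, minimal pipeline definition with dev, staging, and production targets (the target names are assumptions, not our actual configuration):

apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: my-del-pipeline
serialPipeline:
  stages:
    - targetId: dev
    - targetId: staging
    - targetId: production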

When the Docker image is built after each small change (with the CI pipeline shown previously), we take it one step further and deploy the change to our dev cluster using gcloud. Let’s look at that step in the CircleCI pipeline:

- run:
    name: Create new release
    command: gcloud deploy releases create release-${CIRCLE_SHA1:0:7} --delivery-pipeline my-del-pipeline --region $REGION --annotations commitId=$CIRCLE_SHA1 --images my-app=gcr.io/${PROJECT_ID}/my-app:${CIRCLE_SHA1:0:7}

This step triggers a release process to roll out the changes in our dev Kubernetes cluster. After testing and getting the approvals, we promote the change to staging and then production. This is all possible because we have a slim, isolated Docker image for each change that contains almost everything it needs. We only need to tell the deployment which tag to use.
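Promoting a release to the next target can also be done from the command line. A hedged sketch, reusing the release and pipeline names from the step above:

gcloud deploy releases promote \
    --release=release-${CIRCLE_SHA1:0:7} \
    --delivery-pipeline=my-del-pipeline \
    --region=$REGION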

How the Quality Assurance team benefits from this process

The QA team mostly needs a pre-production, cloud-hosted version of the apps for testing. However, sometimes they need to run a prebuilt app locally (with all the dependencies) to test a certain feature. In these cases, they don’t want or need to go through all the pain of cloning the entire project, installing npm packages, building the app, facing developer errors, and going over the entire development process to get the app up and running.

Now that everything is already available as a Docker image on Google Container Registry, all the QA team needs is a service in a Docker Compose file:

services:
  ...redis
  ...db
  
  app:
    image: gcr.io/${PROJECT_ID}/my-app:latest
    restart: on-failure
    ports:
      - ${PORT:-4000}:${PORT:-4000}
    environment:
      - NODE_ENV=production
      - REDIS_URL=redis://redis:6379
      - DATABASE_URL=postgresql://${DB_USER:-user}:${DB_PASSWORD:-password}@db:5432/main
    networks:
      - mk_network
    depends_on:
      - redis
      - db

With this service, the team can spin up the application on their local machines using Docker containers by running:

docker compose up

This is a huge step toward simplifying testing processes. Even if QA decides to test a specific tag of the app, they can simply change the image tag in the Compose file and re-run the Docker Compose command. And if they decide to compare different versions of the app simultaneously, they can achieve that with a few tweaks. The biggest benefit is keeping our QA team away from developer challenges.
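One hypothetical tweak that makes this even easier is to parametrize the tag with an environment variable, so QA can pin any build without editing the file:

  app:
    image: gcr.io/${PROJECT_ID}/my-app:${APP_TAG:-latest}

Running APP_TAG=<short-sha> docker compose up then brings up that exact build.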

Advantages of using Docker

  • Almost zero footprint for dependencies: If you ever decide to upgrade the version of Redis or PostgreSQL, you can just change one line and re-run the app. There’s no need to change anything on your system. Additionally, if you have two apps that both need Redis (maybe even with different versions), you can have both running in their own isolated environments without any conflicts with each other.
  • Multiple instances of the app: There are many cases where we need to run the same app with a different command, such as initializing the DB, running tests, watching DB changes, or listening to messages. In each of these cases, because we already have the built image ready, we just add another service to the Docker Compose file with a different command, and we’re done (see the sketch after this list).
  • Easier testing environment: More often than not, you just need to run the app. You don’t need the code, the packages, or any local database connections. You only want to make sure the app works properly, or you need a running instance as a backend service while you’re working on your own project. That could also be the case for QA, pull request reviewers, or even UX folks who want to make sure their design has been implemented properly. Our Docker setup makes it easy for all of them to get things going without having to deal with too many technical issues.
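As an example of running the same image with a different command, here is a hypothetical one-off migration service added to the QA Compose file above (the yarn migrate script is an assumption, not something from our setup):

  migrate:
    image: gcr.io/${PROJECT_ID}/my-app:latest
    command: yarn migrate
    networks:
      - mk_network
    depends_on:
      - db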

Learn more

How to Develop Inside a Container Using Visual Studio Code Remote Containers
https://www.docker.com/blog/how-to-develop-inside-a-container-using-visual-studio-code-remote-containers/

This is a guest post from Jochen Zehnder. Jochen is a Docker Community Leader and works as a Site Reliability Engineer for 56K.Cloud. He started his career as a Software Developer, where he learned the ins and outs of creating software. He is not only focused on development but also on automation, bridging the gap to the operations side. At 56K.Cloud he helps companies adopt technologies and concepts like Cloud, Containers, and DevOps. 56K.Cloud is a Technology company from Switzerland focusing on Automation, IoT, Containerization, and DevOps.

Jochen Zehnder joined 56K.Cloud in February, after working as a software developer for several years. He always tries to make life easier for everybody involved in the development process. One VS Code feature that excels at this is the Visual Studio Code Remote – Containers extension. It is one of many extensions of the Visual Studio Remote Development feature.

This post is based on the work Jochen did for the 56K.Cloud internal handbook. It uses Jekyll to generate a static website out of markdown files. This is a perfect example of how to make life easier for everybody. Nobody should need to know how to install and configure Jekyll just to make changes to the handbook. With the Remote Development feature, you add all the configurations and customizations to the version control system of your project. This means a small group implements it, and the whole team benefits.

One thing I need to mention is that as of now, this feature is still in preview. However, I never ran into any issues while using it, and I hope that it will get out of preview soon.

Prerequisites

You need to fulfill the following prerequisites to use this feature:

  • Install Docker and Docker Compose
  • Install Visual Studio Code

Enable it for an existing folder

The Remote – Containers extension provides several ways to develop in a container. You can find more information in the documentation, which has several Quick start sections. In this post, I will focus on how to enable this feature for an existing local folder.

As with all the other VS Code extensions, you manage this one with the Command Palette. You can either use the shortcut or the green button in the bottom left corner to open it. In the popup, search for Remote-Containers and select Open Folder in Container.

[Screenshot: VS Code Command Palette]

In the next popup, you have to select the folder which you want to open in the container. For this folder, you then need to Add the Development Container Configuration Files. VS Code shows you a list with predefined container configurations. In my case, I selected the Jekyll configuration. After that, VS Code starts building the container image and opens the folder in the container.

[Screenshot: Add Development Container Configuration Files]

If you now have a look at the Explorer, you can see that there is a new folder called `.devcontainer`. In my case, it added two files. The `Dockerfile` contains all the instructions to build the container image. The `devcontainer.json` contains all the needed runtime configurations. Some of the predefined containers will add more files, for example a `.vscode` folder with useful Tasks. You can have a look at the GitHub Repo to find out more about the existing configurations. There you can also find information about how to use the provided template to write your own.

Customizations

The predefined container definitions provide a basic configuration, but you can customize them. Making these adjustments is easy, and I explain the two changes I had to make below. The first was to install extra packages in the operating system. To do so, I added the instructions to the `Dockerfile`. The second change was to configure the port mappings. In the `devcontainer.json`, I uncommented the `forwardPorts` attribute and added the needed ports. Be aware that for some changes you just need to restart the container, whereas for others you need to rebuild the container image.
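As a hypothetical illustration of the port-forwarding change, the relevant part of `devcontainer.json` could look like this (Jekyll's dev server defaults to port 4000; your ports may differ):

{
  // Forward the Jekyll dev server port to the host
  "forwardPorts": [4000]
}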

Using and sharing

After you open the folder in the container, you can keep working as you are used to. Even the terminal connects to the shell in the container. Whenever you open a new terminal, it will set the working directory to the folder you opened in the container. In my case, it allows me to type in the Jekyll commands to build and serve the site.
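For example, the usual Jekyll commands work directly in that terminal (assuming Bundler is available in the container; your exact commands may differ):

bundle exec jekyll build    # build the static site
bundle exec jekyll serve    # build and serve it with live rebuilds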

After I made all the configurations and customizations, I committed and pushed the new files to the git repository. This made them available to my colleagues, and they can benefit from my work.

Summary

Visual Studio Code supports multiple ways to do remote development. The Visual Studio Code Remote – Containers extension allows you to develop inside a container. The configuration and customizations are all part of your code. You can add them to the version control system and share them with everybody working on the project.

More Information

For more information about the topic you can head over to the following links:

The Remote – Containers extension uses Docker as the container runtime. There is also a Docker extension called Docker for Visual Studio Code. Brian gave a very good introduction at DockerCon LIVE 2020; the recording of his talk, Become a Docker Power User With Microsoft Visual Studio Code, is available online.

Find out more about 56K.Cloud

We love Cloud, IoT, Containers, DevOps, and Infrastructure as Code. If you are interested in chatting, connect with us on Twitter or drop us an email: info@56K.Cloud. We hope you found this article helpful. If there is anything you would like to contribute or you have questions, please let us know!

This post originally appeared here.

Developing Docker-Powered Apps on Windows with WSL 2
https://www.docker.com/blog/developing-docker-windows-app-wsl2/

This is a guest post from Docker Captain Antonis Kalipetis, a Senior Software Engineer at efood.gr — the leading online food delivery service in Greece. He is a Python lover and developer and helps teams embrace containers and improve their development workflow. He loves automating stuff and sharing knowledge around all things containers, DevOps and developer workflows. You can follow him on Twitter @akalipetis.

WSL 2 (or Windows Subsystem for Linux version 2) is Microsoft’s second take on shipping a Linux kernel with Windows. The first version was awesome, as it translated Linux system calls to the equivalent Windows NT calls in real time. The second version includes a full-fledged virtual machine.

It was only natural that Docker would embrace this change and ship a Docker Desktop for Windows version that runs on WSL 2 (WSL 1 had issues running the Docker daemon). This is still a Technical Preview, but after using it for a couple of days, I’ve completely switched my local development to take advantage of it and I’m pretty happy with it.

In this blog, I’ll show you an example of how to develop Docker-powered applications using the Docker Desktop WSL 2 Tech Preview.

Why use Docker Desktop WSL 2 Tech Preview over the “stable” Docker Desktop for Windows?

The main advantage of using the technical preview is that you don’t have to manage your Docker VM anymore.  More specifically:

  • The VM grows and shrinks with your needs in terms of RAM/CPU, so you don’t have to decide its size and preallocate resources. It can shrink to almost zero CPU/RAM if you don’t use it. It works so well that most of the time you forget there’s a VM involved.
  • Filesystem performance is great, with support for inotify and the VM’s disk size can match the size of your machine’s disk.

Apart from the above, if you love Visual Studio Code like I do, you can use the VS Code Remote WSL plugin to develop Docker-powered applications locally (more on that in a bit). You also get the always awesome Docker developer experience while using the VM.

How does Docker Desktop for WSL 2 Tech Preview work?

When you install it, it automatically installs Docker in a managed directory in your default WSL 2 distribution. This installation includes the Docker daemon, the Docker CLI and the Docker Compose CLI. It is kept up to date with Docker Desktop and you can either access it from within WSL, or from PowerShell by switching contexts — see, Docker developer experience in action!
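For example, switching between contexts looks roughly like this (the exact context names depend on your installation):

docker context ls            # list the available contexts
docker context use wsl       # talk to the daemon running inside WSL 2
docker context use default   # switch back to the default context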

Developing applications with Docker Desktop for WSL 2 Tech Preview

For this example, we’ll develop a simple Python Flask application, with Redis as its data store. Every time you visit the page, the page counter will increase — say hello to Millennium!

Setting up VS Code Remote – WSL

Visual Studio Code recently announced a new set of tools for developing applications remotely — using SSH, Docker or WSL. This splits Visual Studio Code into a “client-server” architecture, with the client (that is the UI) running on your Windows machine and the server (that is your code, Git, plugins, etc) running remotely. In this example, we’re going to use the WSL version.

To start, open VS Code and select “Remote-WSL: New Window”. This will install the VS Code Remote server in your default WSL distribution (the one running Docker) and open a new VS Code workspace in your HOME directory.


Getting and Exploring the Code

Clone this GitHub repository by running git clone https://github.com/akalipetis/python-docker-example. Next, run code -r python-docker-example to open this directory in VS Code, and let’s go on a quick tour!


Dockerfile and docker-compose.yml

These should look familiar. The Dockerfile is used for building your application container, while docker-compose.yml is the one you could use for deploying it. docker-compose.override.yml contains all the things that are needed for local development.
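The override file in the repository may differ, but a minimal sketch of what such a local-development override typically contains is a bind mount of the code plus a published port (the service name and container path here are assumptions):

services:
  app:
    volumes:
      - .:/app
    ports:
      - "5000:5000"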


Pipfile and Pipfile.lock

These include the application dependencies. Pipenv is the tool used to manage them.

The app.py file contains the Flask application that we’re using in this example. Nothing special here!

Running the application and making changes

In order to run the application, open a WSL terminal (this is done using the integrated terminal feature of VS Code) and run docker-compose up. This will start all the containers (in this case, a Redis container and the one running the application). After doing so, visit http://localhost:5000 in your browser and voila — you’ve visited your new application. That’s not development though, so let’s make a change and see it in action. Open app.py in VS Code and change the following line:

[Screenshot: changing the message string returned by the Flask app in app.py]

Refresh the web page and observe that:

  1. The message was immediately changed
  2. The visit counter continued counting from the latest value

Under the hood

Let’s see what actually happened.

  • We changed a file in VS Code, which is running on Windows.
  • Since VS Code is running in client-server mode with the server running in WSL 2, the change was actually made to the file living inside WSL.
  • Since you’re using the Technical Preview of Docker Desktop for WSL 2 and docker-compose.override.yml is using Linux workspaces to mount the code from WSL 2 directly into the running container, the change was propagated inside the container.
    • While this is possible with the “stable” Docker Desktop for Windows, it isn’t as easy. By using Linux workspaces, we don’t need to worry about file system permissions. It’s also super fast, as it’s a local Linux filesystem mount.
  • Flask uses an auto-reloading server by default which, using inotify, reloads the server on every file change. Within milliseconds of saving your file, your server was reloaded.
  • Data is stored in Redis using a Docker volume, thus the visits counter was not affected by the restart of the server.

Other tips to help you with Docker Desktop for WSL 2

Here are a few additional tips on developing inside containers using the Technical Preview of Docker Desktop for WSL 2:

  • For maximum file system performance, use Docker volumes for your application’s data and Linux Workspaces for your code.
  • To avoid running an extra VM, switch to Windows containers for your “stable” Docker Desktop for Windows environment.
  • Use docker context default|wsl to switch contexts and develop both Windows and Linux Docker-powered applications easily.

Final Thoughts

I’ve switched to Windows and WSL 2 development for the past two months and I can’t describe how happy I am with my development workflow. Using Docker Desktop for WSL 2 for the past couple of days seems really promising, and most of the current issues of using Docker in WSL 2 seem to be resolved. I can’t wait for what comes next!

The only thing currently missing, in my opinion, is integration with VS Code Remote – Containers (instead of Remote – WSL, which was used for this blog post), which would allow you to run all your tooling within your Docker container.

Until VS Code Remote Containers support is ready, you can run pipenv install --dev to install the application dependencies on WSL 2, allowing VS Code to provide auto-complete and use all the nice tools included to help in development.

Get the Technical Preview and Learn More

If you’d like to get on board, read the instructions and install the technical preview from the Docker docs.

For more on WSL 2, check out these blog posts:

