When building microservices with containers, one has to consider modularity and reusability while designing the system.

When using Kubernetes as a distributed system for container deployments, modularity and reusability can be achieved by parameterizing containers to deploy microservices.

Parameterized containers

If we think of a container as a function in a program, how many parameters does it have? Each parameter represents an input that can customize a generic container to a specific situation.

Let’s assume we have a Rails application split into services such as puma, sidekiq/delayed-job and websocket. Each service runs as a separate deployment, in a separate container, for the same application. When deploying a change, we build the same image for all three containers, but each container runs a different function/process. In our case, we will run 3 pods from the same image. This can be achieved by building a generic container image that accepts parameters to run different services.

We need to expose parameters and consume them inside the container. There are two ways to pass parameters to our container.

  1. Using environment variables.
  2. Using command line arguments.
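As a sketch of how a script inside the container could consume either source, the snippet below resolves a parameter from a command-line argument first and falls back to an environment variable. This is a hypothetical illustration, not code from the wheel repository; the name `resolve_param` and the `none` fallback are assumptions.

```shell
#!/bin/bash
# Hypothetical sketch: a generic container entrypoint can take its parameter
# from a command-line argument or from an environment variable.

resolve_param() {
  # A CLI argument ($1), when present, overrides the POD_TYPE env variable;
  # "none" is the fallback when neither is set.
  echo "${1:-${POD_TYPE:-none}}"
}

resolve_param "$@"
```

Either way, the container image stays generic and the caller decides which role it plays.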

In this article, we will use environment variables to run parameterized containers for services like puma, sidekiq/delayed-job and websocket in a Rails application on Kubernetes.

We will deploy wheel on Kubernetes using the parameterized container approach.

Prerequisite

Building a generic container image.

The Dockerfile in wheel uses the bash script setup_while_container_init.sh as the command to start the container. The script is self-explanatory: it consists of two functions, web and background. The web function starts the puma service and background starts the delayed_job service.

We create two different Kubernetes deployments for the web and background services. The deployment templates are identical for both; the value of the environment variable POD_TYPE tells the init script which service to run in a pod.
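The dispatch logic of such an init script can be sketched roughly as follows. This is a simplified, hypothetical version, not the actual script from wheel; the echo calls stand in for exec-ing the real processes.

```shell
#!/bin/bash
# Simplified, hypothetical sketch of a container init script:
# one function per service, POD_TYPE selecting which one runs.

web() {
  # In a real script this would exec the server so it receives
  # signals directly, e.g. exec bundle exec puma -C config/puma.rb
  echo "starting puma"
}

background() {
  # e.g. exec bundle exec rake jobs:work
  echo "starting delayed_job"
}

run_service() {
  case "$1" in
    WEB|web)               web ;;
    BACKGROUND|background) background ;;
    *) echo "Unknown POD_TYPE: '$1'" >&2; return 1 ;;
  esac
}

run_service "${POD_TYPE:-web}"
```

Because the image is generic, adding a new service is a matter of adding one function and one case branch, then setting POD_TYPE in a new deployment.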

Once we have the Docker image built, let’s deploy the application.

Creating kubernetes deployment manifests for wheel application

Wheel uses a PostgreSQL database, so we need a postgres service to run the application. We will use the postgres image from Docker Hub and deploy it as a Deployment.

Note: For production deployments, the database should be deployed as a StatefulSet, or a managed database service should be used.

K8s manifests for deploying PostgreSQL and its service:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: db
  name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - image: postgres:9.4
        name: db
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: welcome

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
spec:
  ports:
  - name: headless
    port: 5432
    targetPort: 5432
  selector:
    app: db

Create the Postgres deployment and service.

$ kubectl create -f db-deployment.yml -f db-service.yml
deployment db created
service db created

Now that the database is available, the application needs to connect to it using database.yml.

We will create a ConfigMap to store the database credentials and mount it at config/database.yml in our application deployments.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: database-config
data:
  database.yml: |
    development:
      adapter: postgresql
      database: wheel_development
      host: db
      username: postgres
      password: welcome
      pool: 5

    test:
      adapter: postgresql
      database: wheel_test
      host: db
      username: postgres
      password: welcome
      pool: 5

    staging:
      adapter: postgresql
      database: postgres
      host: db
      username: postgres
      password: welcome
      pool: 5

Create the ConfigMap for database.yml.

$ kubectl create -f database-configmap.yml
configmap database-config created

We have the database ready for our application. Now let’s proceed to deploy our Rails services.

Deploying Rails microservices using the same Docker image

In this blog, we will limit our services to web and background, each with its own Kubernetes deployment.

Let’s create a deployment and service for our web application.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wheel-web
  labels:
    app: wheel-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wheel-web
  template:
    metadata:
      labels:
        app: wheel-web
    spec:
      containers:
      - image: bigbinary/wheel:generic
        name: web
        imagePullPolicy: Always
        env:
        - name: DEPLOY_TIME
          value: $date
        - name: RAILS_ENV
          value: staging
        - name: POD_TYPE
          value: WEB
        ports:
        - containerPort: 80
        volumeMounts:
          - name: database-config
            mountPath: /wheel/config/database.yml
            subPath: database.yml
      volumes:
        - name: database-config
          configMap:
            name: database-config

---

apiVersion: v1
kind: Service
metadata:
  labels:
    app: wheel-web
  name: web
spec:
  ports:
  - name: puma
    port: 80
    targetPort: 80
  selector:
    app: wheel-web
  type: LoadBalancer

Note that we set POD_TYPE to WEB, which makes the container startup script start the puma process.
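The manifest also sets a DEPLOY_TIME variable holding a literal $date placeholder, which kubectl does not interpolate. One common pattern, an assumption here rather than something shown in the wheel repository, is to substitute it with the current timestamp before applying the manifest, so that the pod template changes on every deploy and Kubernetes rolls the pods even when the image tag stays the same. The helper name `stamp_manifest` is hypothetical.

```shell
#!/bin/bash
# Hypothetical deploy helper: replace the literal "$date" placeholder in a
# manifest with the current Unix timestamp, changing the pod template.

stamp_manifest() {
  # $1 - path to a manifest containing the placeholder "$date"
  sed "s/\$date/$(date +%s)/" "$1"
}

# Usage (with the web manifest from this article):
# stamp_manifest web-deployment.yml | kubectl apply -f -
```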

Let’s create a web/puma deployment and service.

kubectl create -f web-deployment.yml -f web-service.yml
deployment wheel-web created
service web created

Next, the deployment and service manifests for the background service:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wheel-background
  labels:
    app: wheel-background
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wheel-background
  template:
    metadata:
      labels:
        app: wheel-background
    spec:
      containers:
      - image: bigbinary/wheel:generic
        name: background
        imagePullPolicy: Always
        env:
        - name: DEPLOY_TIME
          value: $date
        - name: POD_TYPE
          value: background
        ports:
        - containerPort: 80
        volumeMounts:
          - name: database-config
            mountPath: /wheel/config/database.yml
            subPath: database.yml
      volumes:
        - name: database-config
          configMap:
            name: database-config

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wheel-background
  name: background
spec:
  ports:
  - name: background
    port: 80
    targetPort: 80
  selector:
    app: wheel-background

For background/delayed-job, we set POD_TYPE to background, which starts the delayed_job process.

Let’s create the background deployment and service.

kubectl create -f background-deployment.yml -f background-service.yml
deployment wheel-background created
service background created

Get the application endpoint.

$ kubectl get svc web -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
a55714dd1a22d11e88d4b0a87a399dcf-2144329260.us-east-1.elb.amazonaws.com

We can access the application using the endpoint.

Now let’s look at the pods.

$ kubectl get pods
NAME                                READY     STATUS    RESTARTS   AGE
db-5f7d5c96f7-x9fll                 1/1       Running   0          1h
wheel-background-6c7cbb4c75-sd9sd   1/1       Running   0          30m
wheel-web-f5cbf47bd-7hzp8           1/1       Running   0          10m

We see that the db pod is running postgres, the wheel-web pod is running puma, and the wheel-background pod is running delayed_job.

If we check the logs, all web traffic to puma is handled by the web pod, and all background jobs are handled by the background pod.

Similarly, if we add websocket or separate API services, traffic will be routed to the respective pods.

This is how we can deploy Rails microservices using parameterized containers built from a single generic image.