BigBinary has been working with Gumroad for a while. The following blog post is published with permission from Gumroad, and we are grateful to Sahil for allowing us to discuss this work so openly.

A staging environment lets us test code before pushing it to production. However, a single staging environment becomes hard to manage when many people are working on different parts of the application. This can be solved by giving each feature branch its own individual staging environment.

Heroku offers a Review Apps feature that can deploy each branch separately. Gumroad doesn't use Heroku, so we built a custom in-house solution.

The first step was to build the infrastructure. We created a new Auto Scaling Group, an Application Load Balancer and a route in AWS for the review apps. The load balancer and route are shared by all review apps, but a new EC2 instance is added to the ASG whenever a new review app is commissioned.
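The commands below sketch how such infrastructure could be provisioned with the AWS CLI. All resource names, subnets and zone IDs here are illustrative placeholders, not Gumroad's actual setup.

```shell
# Illustrative AWS CLI sketch -- every name and ID below is a placeholder.

# Application Load Balancer shared by all review apps
# aws elbv2 create-load-balancer --name review-apps-alb \
#   --subnets subnet-aaaa subnet-bbbb --security-groups sg-cccc

# Auto Scaling Group that holds one EC2 instance per review app
# aws autoscaling create-auto-scaling-group \
#   --auto-scaling-group-name review-apps \
#   --launch-configuration-name review-app-lc \
#   --min-size 0 --max-size 20 --desired-capacity 0 \
#   --vpc-zone-identifier "subnet-aaaa,subnet-bbbb"

# Wildcard DNS record so <branch>.review.example.com resolves to the ALB
# aws route53 change-resource-record-sets --hosted-zone-id Z0000000 \
#   --change-batch file://wildcard-review-record.json
```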

(Figure: review app architecture)

The main challenge was to forward each incoming request to the correct server running the review app. This was made possible using Lua in nginx together with Consul. When a review app is deployed, it writes its IP and port to Consul, keyed by its hostname. Each review app server runs an instance of OpenResty (nginx + Lua modules) with the following configuration.

server {
  listen                   80;
  server_name              _;
  server_name_in_redirect  off;
  port_in_redirect         off;

  try_files $uri/index.html $uri $uri.html @app;

  location @app {
    set $upstream "";
    rewrite_by_lua '
      local http   = require "socket.http"
      local json   = require "json"
      local base64 = require "base64"

      -- look up the upstream for this hostname in the Consul KV store
      local host          = ngx.var.http_host
      local body, c, l, h = http.request("http://172.17.0.1:8500/v1/kv/" .. host)
      local data          = json.decode(body)

      -- Consul returns KV values base64-encoded
      local upstream = base64.decode(data[1].Value)

      ngx.var.upstream = upstream
    ';

    proxy_buffering   off;
    proxy_set_header  Host $host;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect    off;
    proxy_pass        http://$upstream;
  }
}

This configuration looks up the request's hostname in Consul and forwards the request to the corresponding IP:PORT.
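The registration side can be sketched with Consul's KV HTTP API. The hostname and address here are illustrative, not Gumroad's actual values; the point is that Consul returns KV values base64-encoded, which is why the Lua snippet above decodes `data[1].Value`.

```shell
# Register a review app's upstream address in Consul's KV store
# (hostname and address are illustrative values).
curl -X PUT -d '10.0.1.5:3000' \
  http://127.0.0.1:8500/v1/kv/feature-x.review.example.com

# Reading the key back returns the value base64-encoded:
curl http://127.0.0.1:8500/v1/kv/feature-x.review.example.com
# => [{"Key":"feature-x.review.example.com","Value":"MTAuMC4xLjU6MzAwMA==",...}]

# Decoding recovers the IP:PORT the nginx config proxies to
printf '%s' 'MTAuMC4xLjU6MzAwMA==' | base64 -d   # -> 10.0.1.5:3000
```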

The next task was to build a system for deploying review apps onto this infrastructure. We were already using Docker in both the production and staging environments, so we extended it to review apps: a Docker image is built for every branch whose name carries the deploy- prefix. When such a branch is pushed to GitHub, a CircleCI job builds a Docker image with the code and all the necessary packages. This can be configured using a template like this.

jobs:
  build_image:
    <<: *defaults
    parallelism: 2
    steps:
      - checkout
      - setup_remote_docker:
          version: 17.09.0-ce
      - run:
          command: |
            ci_scripts/2.0/build_docker_image.sh
          no_output_timeout: 20m

workflows:
  version: 2

  web_app:
    jobs:
      - build_image:
          filters:
            branches:
              only:
                - /deploy-.*/
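The build script referenced above is not shown in this post; the sketch below is a hypothetical outline of what such a script typically does, with the image name and registry as assumptions.

```shell
#!/bin/sh
# Hypothetical outline of a CI build script; registry and image
# names are assumptions, not the actual build_docker_image.sh.
BRANCH="${CIRCLE_BRANCH:-deploy-new-checkout}"
SHA="${CIRCLE_SHA1:-0123456789abcdef}"

# Tag the image with the branch name plus a short commit SHA
TAG="${BRANCH}-$(printf '%.7s' "$SHA")"

# docker build -t registry.example.com/web:"$TAG" .
# docker push registry.example.com/web:"$TAG"
echo "$TAG"
```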

The build also pushes static assets such as JavaScript, CSS and images to an S3 bucket, from where they are served directly through a CDN. After the Docker image is built, another CircleCI job runs to perform the following tasks.

  • Create a new database in RDS and configure the required credentials.
  • Scale up the review apps' Auto Scaling Group by increasing the desired instance count by 1.
  • Run Redis, the database migration, seed-data population, and the Unicorn and Resque instances using Nomad.

The ease of deploying a review app helped increase our productivity.