
Rails 6 requires Ruby 2.5 or newer

This blog is part of our Rails 6 series. Rails 6.0 was recently released.

As per rails/rails#34754, a Rails 6 app requires Ruby version 2.5 or newer.

Let’s discuss what we need to know if we are dealing with Rails 6.

Ensuring a valid Ruby version is set while creating a new Rails 6 app

While creating a new Rails 6 app, we need to ensure that the current Ruby version in the shell is set to 2.5 or newer.

If it is set to an older version, the rails new command will use that same version when setting the Ruby version in the .ruby-version file and the Gemfile of the generated app.

$ ruby -v
ruby 2.3.1p112 (2016-04-26 revision 54768) [x86_64-darwin15]

$ rails new meme-wizard
      create  Rakefile
      create  .ruby-version
      create  .gitignore
      create  Gemfile
      [...] omitted the rest of the output

$ cd meme-wizard && grep -C 2 -Rn -a "2.3.1" .
./Gemfile-2-git_source(:github) { |repo| "https://github.com/#{repo}.git" }
./Gemfile:4:ruby '2.3.1'
./Gemfile-6-# Bundle edge Rails instead: gem 'rails', github: 'rails/rails'

An easy fix for this is to install a Ruby version 2.5 or newer and use that version prior to running the rails new command.

$ ruby -v
ruby 2.3.1p112 (2016-04-26 revision 54768) [x86_64-darwin15]

$ rbenv local 2.6.0

$ ruby -v
ruby 2.6.0p0 (2018-12-25 revision 66547) [x86_64-darwin18]

$ rails new meme-wizard

$ cd meme-wizard && grep -C 2 -Rn -a "2.6.0" .
./Gemfile-2-git_source(:github) { |repo| "https://github.com/#{repo}.git" }
./Gemfile:4:ruby '2.6.0'
./Gemfile-6-# Bundle edge Rails instead: gem 'rails', github: 'rails/rails'

Upgrading an older Rails app to Rails 6

While upgrading an older Rails app to Rails 6, we need to update the Ruby version to 2.5 or newer in .ruby-version and Gemfile files respectively.

What else do we need to know?

Since Ruby 2.5 added the Hash#slice method, the extension method of the same name defined by activesupport/lib/active_support/core_ext/hash/slice.rb has been removed in Rails 6.

Similarly, Rails 6 has also removed the extension methods Hash#transform_values and Hash#transform_values! from Active Support in favor of the native methods of the same names, which Ruby introduced natively in 2.4.
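As a quick refresher, here is what the native methods do (the sample hash is made up for illustration):

```ruby
h = { title: "Rails 6", views: 100 }

# Native since Ruby 2.5; no Active Support require needed
h.slice(:title)            # => {:title=>"Rails 6"}

# Native since Ruby 2.4; returns a new hash with transformed values
h.transform_values(&:to_s) # => {:title=>"Rails 6", :views=>"100"}
```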

If we try to explicitly require active_support/core_ext/hash/transform_values then it would print a deprecation warning.

>> require "active_support/core_ext/hash/transform_values"
# DEPRECATION WARNING: Ruby 2.5+ (required by Rails 6) provides Hash#transform_values natively, so requiring active_support/core_ext/hash/transform_values is no longer necessary. Requiring it will raise LoadError in Rails 6.1. (called from irb_binding at (irb):1)
=> true

Rails 6 database seed uses inline Active Job adapter

This blog is part of our Rails 6 series. Rails 6.0 was recently released.

We use the db:seed task to seed the database in Rails apps. Recently an issue was reported on the Rails issue tracker where the db:seed task was not finishing.

In the development environment, Rails uses the async adapter as the default Active Job adapter. The async adapter runs jobs with an in-process thread pool.

This specific issue was happening because the seed task was trying to attach a file using Active Storage. Active Storage enqueues a background job during the attachment process. This job was not getting executed properly by the async adapter, causing the seed task to hang without exiting.

It was found that the issue goes away when the inline adapter is used in the development environment. But wholesale changing the development default to the inline adapter defeats the purpose of having the async adapter as the default in the first place.

Instead, a change was made to execute all the seeding code using the inline adapter. The inline adapter makes sure that all jobs are executed immediately.

As the inline adapter does not allow scheduling jobs in the future, this can result in an error if the seeding code triggers such jobs. This issue has already been reported on GitHub.


Active Job is an optional framework and can be skipped completely. Since seeding now depended on the presence of Active Job, it threw an error when Active Job was not part of the application. Also, automatically executing jobs inline when users had set the Active Job queue adapter to an adapter of their choice was surprising to users. So a change has been made to load the seeds inline only when Active Job is included in the application and the queue adapter is async. This makes the behavior backward compatible, and it does not override the user’s choice of queue adapter.
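For context, switching an environment to the inline adapter looks like the following sketch; with the change described above, this is not needed for seeding:

```ruby
# config/environments/development.rb (illustrative only)
Rails.application.configure do
  # Run every Active Job job immediately in-process instead of
  # queuing it on the async thread pool
  config.active_job.queue_adapter = :inline
end
```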

Rails 6 adds ActiveRecord::Relation#reselect

This blog is part of our Rails 6 series. Rails 6.0 was recently released.

Rails has the rewhere and reorder methods to replace previously set where conditions or order clauses with the new attributes given as arguments.

Before Rails 6, if we wanted to replace the attributes of a previously set select clause with new attributes, it was done as follows.

>> Post.select(:title, :body).unscope(:select).select(:views)

   SELECT "posts"."views" FROM "posts" LIMIT ?  [["LIMIT", 11]]

In Rails 6, ActiveRecord::Relation#reselect method is added.

The reselect method is similar to rewhere and reorder. reselect is a short-hand for unscope(:select).select(fields).

Here is how reselect method can be used.

>> Post.select(:title, :body).reselect(:views)

   SELECT "posts"."views" FROM "posts" LIMIT ?  [["LIMIT", 11]]

Check out the pull request for more details on this.

Rails 6 adds ActiveModel::Errors#of_kind?

This blog is part of our Rails 6 series. Rails 6.0 was recently released.

Rails 6 added of_kind? on ActiveModel::Errors. It returns true if the ActiveModel::Errors object contains the given key with the given message. The default message is :invalid.

of_kind? is the same as ActiveModel::Errors#added?, except that it doesn’t take extra options as parameters.

Let’s check out how it works.

Rails 6.0.0.beta2

>> class User < ApplicationRecord
>>   validates :name, presence: true
>> end

>> user = User.new

=> #<User id: nil, name: nil, password: nil, created_at: nil, updated_at: nil>

>> user.valid?

=> false

>> user.errors

=> #<ActiveModel::Errors:0x00007fc462a1d140 @base=#<User id: nil, name: nil, password: nil, created_at: nil, updated_at: nil>, @messages={:name=>["can't be blank"]}, @details={:name=>[{:error=>:blank}]}>

>> user.errors.of_kind?(:name)

=> false

>> user.errors.of_kind?(:name, :blank)

=> true

>> user.errors.of_kind?(:name, "can't be blank")

=> true

>> user.errors.of_kind?(:name, "is blank")

=> false

Here is the relevant pull request.

Rails 6 shows routes in expanded format

The output of rails routes is in a table format.

$ rails routes
   Prefix Verb   URI Pattern               Controller#Action
    users GET    /users(.:format)          users#index
          POST   /users(.:format)          users#create
 new_user GET    /users/new(.:format)      users#new
edit_user GET    /users/:id/edit(.:format) users#edit
     user GET    /users/:id(.:format)      users#show
          PATCH  /users/:id(.:format)      users#update
          PUT    /users/:id(.:format)      users#update
          DELETE /users/:id(.:format)      users#destroy

If we have long route names, they don’t fit in the terminal window and the output lines wrap into each other.

Example of overlapping routes

Rails 6 has added a way to display the routes in an expanded format.

We can pass --expanded switch to the rails routes command to see this in action.

$ rails routes --expanded

--[ Route 1 ]--------------------------------------------------------------
Prefix            | users
Verb              | GET
URI               | /users(.:format)
Controller#Action | users#index
--[ Route 2 ]--------------------------------------------------------------
Prefix            |
Verb              | POST
URI               | /users(.:format)
Controller#Action | users#create
--[ Route 3 ]--------------------------------------------------------------
Prefix            | new_user
Verb              | GET
URI               | /users/new(.:format)
Controller#Action | users#new
--[ Route 4 ]--------------------------------------------------------------
Prefix            | edit_user
Verb              | GET
URI               | /users/:id/edit(.:format)
Controller#Action | users#edit

This shows the output of the routes command in a much more user-friendly manner.

The --expanded switch can be used in conjunction with other switches for searching specific routes.

Rails 6 adds ActiveModel::Errors#slice!

This blog is part of our Rails 6 series. Rails 6.0 was recently released.

Rails 6 added slice! on ActiveModel::Errors. With this addition, it becomes quite easy to select just a few keys from errors and show or return them. Before Rails 6, we needed to convert the ActiveModel::Errors object to a hash before slicing the keys.

Let’s check out how it works.

Rails 5.2

>> user = User.new

=> #<User id: nil, email: nil, password: nil, created_at: nil, updated_at: nil>

>> user.valid?

=> false

>> user.errors

=> #<ActiveModel::Errors:0x00007fc46700df10 @base=#<User id: nil, email: nil, password: nil, created_at: nil, updated_at: nil>, @messages={:email=>["can't be blank"], :password=>["can't be blank"]}, @details={:email=>[{:error=>:blank}], :password=>[{:error=>:blank}]}>

>> user.errors.slice!

=> Traceback (most recent call last):
        1: from (irb):16
NoMethodError (undefined method 'slice!' for #<ActiveModel::Errors:0x00007fa1f0e46eb8>)
Did you mean?  slice_when

>> errors = user.errors.to_h
>> errors.slice!(:email)

=> {:password=>["can't be blank"]}

>> errors

=> {:email=>["can't be blank"]}

Rails 6.0.0.beta2

>> user = User.new

=> #<User id: nil, email: nil, password: nil, created_at: nil, updated_at: nil>

>> user.valid?

=> false

>> user.errors

=> #<ActiveModel::Errors:0x00007fc46700df10 @base=#<User id: nil, email: nil, password: nil, created_at: nil, updated_at: nil>, @messages={:email=>["can't be blank"], :password=>["can't be blank"]}, @details={:email=>[{:error=>:blank}], :password=>[{:error=>:blank}]}>

>> user.errors.slice!(:email)

=> {:password=>["can't be blank"]}

>> user.errors

=> #<ActiveModel::Errors:0x00007fc46700df10 @base=#<User id: nil, email: nil, password: nil, created_at: nil, updated_at: nil>, @messages={:email=>["can't be blank"]}, @details={:email=>[{:error=>:blank}]}>

Here is the relevant pull request.

Rails 6 adds create_or_find_by and create_or_find_by!

This blog is part of our Rails 6 series. Rails 6.0 was recently released.

Rails 6 added create_or_find_by and create_or_find_by!. Both of these methods rely on unique constraints at the database level. If creation fails, it is because of a unique constraint violation on one or more of the given columns, and the method will then find the existing record using find_by!.

create_or_find_by is an improvement over find_or_create_by, because find_or_create_by first queries for the record and then inserts it if none is found. This could lead to a race condition.
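Here is a plain-Ruby sketch of that race; a hash stands in for a table with a unique index on name, and the interleaving of two requests is written out sequentially:

```ruby
store = {}  # stands in for a users table with a unique index on :name

# Two concurrent requests can interleave like this:
a_found = store.key?("amit")  # request A: SELECT, finds nothing
b_found = store.key?("amit")  # request B: SELECT, finds nothing
store["amit"] = true          # request A: INSERT succeeds

# Request B now also runs INSERT, which violates the unique index and
# raises ActiveRecord::RecordNotUnique; this is the window that
# create_or_find_by closes by attempting the insert first.
[a_found, b_found]  # => [false, false]
```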

As mentioned by DHH in the pull request, create_or_find_by has a few cons too:

  • The table must have unique constraints on the relevant columns.
  • This method relies on exception handling, which is generally slower.

create_or_find_by! raises an exception when creation fails because of the validations.

Let’s see how both methods work in Rails 6.0.0.beta2.

Rails 6.0.0.beta2

>> class CreateUsers < ActiveRecord::Migration[6.0]
>>   def change
>>     create_table :users do |t|
>>       t.string :name, index: { unique: true }
>>       t.timestamps
>>     end
>>   end
>> end

>> class User < ApplicationRecord
>>   validates :name, presence: true
>> end

>> User.create_or_find_by(name: 'Amit')
INSERT INTO "users" ("name", "created_at", "updated_at") VALUES ($1, $2, $3) RETURNING "id"  [["name", "Amit"], ["created_at", "2019-03-07 09:33:23.391719"], ["updated_at", "2019-03-07 09:33:23.391719"]]

=> #<User id: 1, name: "Amit", created_at: "2019-03-07 09:33:23", updated_at: "2019-03-07 09:33:23">

>> User.create_or_find_by(name: 'Amit')
INSERT INTO "users" ("name", "created_at", "updated_at") VALUES ($1, $2, $3) RETURNING "id"  [["name", "Amit"], ["created_at", "2019-03-07 09:46:37.189068"], ["updated_at", "2019-03-07 09:46:37.189068"]]

=> #<User id: 1, name: "Amit", created_at: "2019-03-07 09:33:23", updated_at: "2019-03-07 09:33:23">

>> User.create_or_find_by(name: nil)

=> #<User id: nil, name: nil, created_at: nil, updated_at: nil>

>> User.create_or_find_by!(name: nil)

=> Traceback (most recent call last):
        1: from (irb):2
ActiveRecord::RecordInvalid (Validation failed: Name can't be blank)

Here is the relevant pull request.

Also note that create_or_find_by can lead to primary keys running out if the primary key type is int. This happens because each time create_or_find_by hits ActiveRecord::RecordNotUnique, the auto-increment of the primary key is not rolled back. The problem is discussed in this pull request.

Rails 6 raises ActiveModel::MissingAttributeError

This blog is part of our Rails 6 series. Rails 6.0 was recently released.

Rails 6 raises ActiveModel::MissingAttributeError when update_columns is used with a non-existing attribute. Before Rails 6, update_columns raised an ActiveRecord::StatementInvalid error.

Rails 5.2

>> User.first.update_columns(email: '')
SELECT  "users".* FROM "users" ORDER BY "users"."id" ASC LIMIT $1  [["LIMIT", 1]]
UPDATE "users" SET "email" = $1 WHERE "users"."id" = $2  [["email", ""], ["id", 1]]

=> Traceback (most recent call last):
        1: from (irb):8
ActiveRecord::StatementInvalid (PG::UndefinedColumn: ERROR:  column "email" of relation "users" does not exist)
LINE 1: UPDATE "users" SET "email" = $1 WHERE "users"."id" = $2
: UPDATE "users" SET "email" = $1 WHERE "users"."id" = $2

Rails 6.0.0.beta2

>> User.first.update_columns(email: '')
SELECT "users".* FROM "users" ORDER BY "users"."id" ASC LIMIT ?  [["LIMIT", 1]]

Traceback (most recent call last):
        1: from (irb):1
ActiveModel::MissingAttributeError (can't write unknown attribute `email`)

Here is the relevant commit.

Rails 6 ActiveRecord::Base.configurations

This blog is part of our Rails 6 series. Rails 6.0 was recently released.

Rails 6 changed the return value of ActiveRecord::Base.configurations to an object of ActiveRecord::DatabaseConfigurations. Before Rails 6, ActiveRecord::Base.configurations returned a hash with all the database configurations. We can call to_h on the object of ActiveRecord::DatabaseConfigurations to get a hash.

A configs_for method has also been added to fetch the configurations for a particular environment.

Rails 5.2

>> ActiveRecord::Base.configurations

=> {"development"=>{"adapter"=>"sqlite3", "pool"=>5, "timeout"=>5000, "database"=>"db/development.sqlite3"}, "test"=>{"adapter"=>"sqlite3", "pool"=>5, "timeout"=>5000, "database"=>"db/test.sqlite3"}, "production"=>{"adapter"=>"sqlite3", "pool"=>5, "timeout"=>5000, "database"=>"db/production.sqlite3"}}

Rails 6.0.0.beta2

>> ActiveRecord::Base.configurations

=> #<ActiveRecord::DatabaseConfigurations:0x00007fc18274f9f0 @configurations=[#<ActiveRecord::DatabaseConfigurations::HashConfig:0x00007fc18274f680 @env_name="development", @spec_name="primary", @config={"adapter"=>"sqlite3", "pool"=>5, "timeout"=>5000, "database"=>"db/development.sqlite3"}>, #<ActiveRecord::DatabaseConfigurations::HashConfig:0x00007fc18274f608 @env_name="test", @spec_name="primary", @config={"adapter"=>"sqlite3", "pool"=>5, "timeout"=>5000, "database"=>"db/test.sqlite3"}>, #<ActiveRecord::DatabaseConfigurations::HashConfig:0x00007fc18274f590 @env_name="production", @spec_name="primary", @config={"adapter"=>"sqlite3", "pool"=>5, "timeout"=>5000, "database"=>"db/production.sqlite3"}>]>

>> ActiveRecord::Base.configurations.to_h

=> {"development"=>{"adapter"=>"sqlite3", "pool"=>5, "timeout"=>5000, "database"=>"db/development.sqlite3"}, "test"=>{"adapter"=>"sqlite3", "pool"=>5, "timeout"=>5000, "database"=>"db/test.sqlite3"}, "production"=>{"adapter"=>"sqlite3", "pool"=>5, "timeout"=>5000, "database"=>"db/production.sqlite3"}}

>> ActiveRecord::Base.configurations['development']

=> {"adapter"=>"sqlite3", "pool"=>5, "timeout"=>5000, "database"=>"db/development.sqlite3"}

>> ActiveRecord::Base.configurations.configs_for(env_name: "development")

=> [#<ActiveRecord::DatabaseConfigurations::HashConfig:0x00007fc18274f680 @env_name="development", @spec_name="primary", @config={"adapter"=>"sqlite3", "pool"=>5, "timeout"=>5000, "database"=>"db/development.sqlite3"}>]

Here is the relevant pull request.

Rails 6 shows unpermitted params in logs in color

Strong parameters allow us to control the user input in our Rails app. In the development environment, the unpermitted parameters are shown in the log as follows.

Unpermitted params before Rails 6

It is easy to miss this message in the flurry of other messages.

Rails 6 has added a change to show these params in red for better visibility.

Unpermitted params after Rails 6

Marketing strategy at BigBinary

BigBinary started in 2011. Here are our revenue numbers for the last 7 years.

BigBinary revenue

We achieved this to date without having any outbound marketing and sales strategy.

  • We have never sent a cold email.
  • We have never sent a cold LinkedIn message.
  • The only time we advertised was a two-month period when we tried Google advertisements, with no results.
  • We do not sponsor any podcast.
  • We have not had a sales person.
  • We have not had a marketing person.

We have kept our heads down and focused on what we do best: designing, developing, debugging, devops, and blogging.

This is what has worked out for us so far:

  • We contribute to the community through blog posts and open source.
  • We sponsor community events like Rails Girls and Ruby Conf India.
  • We sponsor many React and Ruby meetups.
  • We focus on keeping our existing clients happy.

Over the years I have come across many people who aspire to be freelancers. While it is not for everyone, I encourage them to give freelancing a try.

The greatest hindrance I have seen is that they stress over sales and marketing, and rightly so: being a freelancer means a constant need to find your next client.

I’m not here to say what others ought to do. I’m here to say what has worked out for BigBinary over the last 7 years.

While we plan to experiment with new forms of marketing, networking, and sales channels as we grow, these are not the be-all and end-all for freelancers. While marketing, networking, and sales may be effective for some, it was not how we started BigBinary and may not be how you want to start either.

For us at BigBinary, it has been writing blogs. When we come across a potentially intriguing blog topic, we save the topic by creating a Github issue. When we have downtime, we pick up a topic from our issues list. It’s as simple as that and has been our primary driver of growth thus far.

While you should experiment to find out what works best for you, you need to find out what suits your personality. If you are good at teaching through videos, consider creating your own YouTube channel. If you contribute to open source, try creating a blog about your efforts and learnings. If you are good at concentrating on a niche technology, build your marketing and business around that.

I can confidently say that the majority of people I have met who want to be freelancers would do fine if they simply shared what they are learning. Most of these people do technical work. Some of them already blog and others could. Nearly everybody will say a blog is a decent start. I’m saying that it is a good end too.

If you do not want to do any other form of marketing, that’s fine too. Just blogging will work out for you, like it has worked out for us at BigBinary.

Just because you are going to be a freelancer you don’t have to change who you are. If you don’t like sending cold emails then don’t. If you do not like networking then that’s alright as well. Write personal emails, dump corporate talk, show compassion and be genuine.

So go on and do some freelancing. It will teach you a lot about software development, business, life, managing money, creating value, and capturing value. It will be rough at times. And it will be hard at times. But it will also be a ton of fun.

Rails 6 delete_by, destroy_by ActiveRecord::Relation

This blog is part of our Rails 6 series. Rails 6.0 was recently released.

As described by DHH in the issue, Rails has find_or_create_by, find_by and similar methods to create and find records matching the specified conditions. Rails was missing a similar feature for deleting/destroying records.

Before Rails 6, deleting/destroying the record(s) which are matching the given condition was done as shown below.

  # Example to destroy all authors matching the given condition
  Author.find_by(email: "").destroy
  Author.where(email: "", rating: 4).destroy_all

  # Example to delete all authors matching the given condition
  Author.find_by(email: "").delete
  Author.where(email: "", rating: 4).delete_all

The above examples lack the symmetry of the find_or_create_by and find_by methods.

In Rails 6, the new delete_by and destroy_by methods have been added as ActiveRecord::Relation methods. ActiveRecord::Relation#delete_by is short-hand for relation.where(conditions).delete_all. Similarly, ActiveRecord::Relation#destroy_by is short-hand for relation.where(conditions).destroy_all.

Here is how it can be used.

  # Example to destroy all authors matching the given condition using destroy_by
  Author.destroy_by(email: "")
  Author.destroy_by(email: "", rating: 4)

  # Example to destroy all authors matching the given condition using delete_by
  Author.delete_by(email: "")
  Author.delete_by(email: "", rating: 4)

Check out the pull request for more details on this.

Rails 6 adds ActiveRecord::Relation#touch_all

This blog is part of our Rails 6 series. Rails 6.0 was recently released.

Before moving forward, we need to understand what the touch method does. touch updates the updated_at timestamp, defaulting to the current time. It also accepts a custom time or additional columns as parameters.

Rails 6 has added touch_all on ActiveRecord::Relation to touch multiple records in one go. Before Rails 6, we needed to iterate over all the records to achieve this result.

Let’s take an example in which we call touch_all on all user records.

Rails 5.2

>> User.count

=> 3

>> User.all.touch_all

=> Traceback (most recent call last):
        1: from (irb):2
NoMethodError (undefined method 'touch_all' for #<User::ActiveRecord_Relation:0x00007fe6261f9c58>)

>> User.all.each(&:touch)
SELECT "users".* FROM "users"
begin transaction
  UPDATE "users" SET "updated_at" = ? WHERE "users"."id" = ?  [["updated_at", "2019-03-05 17:45:51.495203"], ["id", 1]]
commit transaction
begin transaction
  UPDATE "users" SET "updated_at" = ? WHERE "users"."id" = ?  [["updated_at", "2019-03-05 17:45:51.503415"], ["id", 2]]
commit transaction
begin transaction
  UPDATE "users" SET "updated_at" = ? WHERE "users"."id" = ?  [["updated_at", "2019-03-05 17:45:51.509058"], ["id", 3]]
commit transaction

=> [#<User id: 1, name: "Sam", created_at: "2019-03-05 16:09:29", updated_at: "2019-03-05 17:45:51">, #<User id: 2, name: "John", created_at: "2019-03-05 16:09:43", updated_at: "2019-03-05 17:45:51">, #<User id: 3, name: "Mark", created_at: "2019-03-05 16:09:45", updated_at: "2019-03-05 17:45:51">]

Rails 6.0.0.beta2

>> User.count

=> 3

>> User.all.touch_all
UPDATE "users" SET "updated_at" = ?  [["updated_at", "2019-03-05 16:08:47.490507"]]

=> 3

touch_all returns the number of records on which it is called.

touch_all also takes a custom time or different columns as parameters.

Rails 6.0.0.beta2

>> User.count

=> 3

>> User.all.touch_all(time: Time.new(2019, 3, 2, 1, 0, 0))
UPDATE "users" SET "updated_at" = ?  [["updated_at", "2019-03-02 00:00:00"]]

=> 3

>> User.all.touch_all(:created_at)
UPDATE "users" SET "updated_at" = ?, "created_at" = ?  [["updated_at", "2019-03-05 17:55:41.828347"], ["created_at", "2019-03-05 17:55:41.828347"]]

=> 3

Here is the relevant pull request.

Rails 6 adds negative scopes on enum

This blog is part of our Rails 6 series. Rails 6.0 was recently released.

When an enum attribute is defined on a model, Rails adds default scopes to filter records based on the enum values.

Here is how enum scope can be used.

class Post < ActiveRecord::Base
  enum status: %i[drafted active trashed]
end

Post.drafted # => where(status: :drafted)
Post.active  # => where(status: :active)

In Rails 6, negative scopes are added on the enum values.

As mentioned by DHH in the pull request,

these negative scopes are convenient when you want to disallow access in controllers

Here is how they can be used.

class Post < ActiveRecord::Base
  enum status: %i[drafted active trashed]
end

Post.not_drafted # => where.not(status: :drafted)
Post.not_active  # => where.not(status: :active)

Check out the pull request for more details on this.

MJIT Support in Ruby 2.6

This blog is part of our Ruby 2.6 series. Ruby 2.6.0 was released on Dec 25, 2018.

What is JIT?

JIT stands for Just-In-Time compiler. A JIT compiles frequently executed code into machine code at runtime, which the processor can then run directly, saving time by not interpreting the same piece of code over and over.

Ruby 2.6

MJIT is introduced in Ruby 2.6. It is most commonly known as MRI JIT or Method Based JIT.

It is a part of the Ruby 3x3 project started by Matz. The name “Ruby 3x3” signifies that Ruby 3.0 will be 3 times faster than Ruby 2.0, with the main focus on performance. In addition to performance, it also aims for the following things:

  1. Portability
  2. Stability
  3. Security

MJIT is still in development; therefore, it is optional in Ruby 2.6. If you are running Ruby 2.6, you can execute the following command.

ruby --help

You will see following options.

--jit-wait # Wait until JIT compilation finishes before running the compiled code.
--jit-verbose=num # Control how much information the MJIT compiler prints about a Ruby program.
--jit-min-calls=num # Minimum number of calls after which MJIT should compile a method.
--jit-save-temps # Save the compiled libraries to files.

Vladimir Makarov proposed improving performance by replacing VM instructions with RTL (Register Transfer Language) and introducing the method-based JIT compiler.

Vladimir explained MJIT architecture in his RubyKaigi 2017 conference keynote.

Ruby’s compiler converts the code to YARV (Yet Another Ruby VM) instructions, and then these instructions are run by the Ruby Virtual Machine. Code that is executed often is converted to RTL instructions, which run faster.

Let’s take a look at how MJIT works.

# mjit.rb

require 'benchmark'

puts Benchmark.measure {
  def test_while
    start_time = Time.now
    i = 0

    while i < 4
      i += 1
    end

    puts "Time taken is #{Time.now - start_time}"
  end

  4.times { test_while }
}

Let’s run this code with MJIT options and check what we got.

ruby --jit --jit-verbose=1 --jit-wait --disable-gems mjit.rb
Time taken is 4.0e-06
Time taken is 0.0
Time taken is 0.0
Time taken is 0.0
  0.000082   0.000032   0.000114 (  0.000105)
Successful MJIT finish

Nothing interesting, right? And why is that? Because we call the method only 4 times, and the default number of calls before MJIT kicks in is 5. We can decide after how many calls MJIT should compile by providing the --jit-min-calls=num option.

Let’s tweak the program a bit so MJIT gets to work.

require 'benchmark'

puts Benchmark.measure {
  def test_while
    start_time = Time.now
    i = 0

    while i < 4_00_00_000
      i += 1
    end

    puts "Time taken is #{Time.now - start_time}"
  end

  10.times { test_while }
}
After running the above code we can see some work done by MJIT.

Time taken is 0.457916
Time taken is 0.455921
Time taken is 0.454672
Time taken is 0.452823
JIT success (72.5ms): block (2 levels) in <main>@mjit.rb:15 -> /var/folders/v6/_6sh53vn5gl3lct18w533gr80000gn/T//_ruby_mjit_p66220u0.c
JIT success (140.9ms): test_while@mjit.rb:4 -> /var/folders/v6/_6sh53vn5gl3lct18w533gr80000gn/T//_ruby_mjit_p66220u1.c
JIT compaction (23.0ms): Compacted 2 methods -> /var/folders/v6/_6sh53vn5gl3lct18w533gr80000gn/T//_ruby_mjit_p66220u2.bundle
Time taken is 0.463703
Time taken is 0.102852
Time taken is 0.103335
Time taken is 0.103299
Time taken is 0.103252
Time taken is 0.103261
  2.797843   0.005357   3.141944 (  2.801391)
Successful MJIT finish

Here’s what’s happening. The method ran 4 times, and on the 5th call MJIT found it was running the same code again. So MJIT started a separate thread to convert the code into RTL instructions, which created a shared object library. Subsequent calls then executed that compiled code directly. As we passed the option --jit-verbose=1, we can see what MJIT did.

What we are seeing in output is the following:

  1. Time taken to compile.
  2. What block of code is compiled by JIT.
  3. Location of compiled code.

We can open those files and see how MJIT converted the piece of code, but for that we need to pass another option, --jit-save-temps, and then inspect the saved files.

After compiling the code to RTL instructions, take a look at the execution time. It dropped to 0.10 seconds from 0.46 seconds. That’s a neat speed bump.

Here is a comparison of some basic operations across some of the Ruby versions.

Ruby time comparison in different versions

Rails comparison on Ruby 2.5, Ruby 2.6 and Ruby 2.6 with JIT

Create a Rails application with each of the Ruby versions and start a server. We can start the Rails server with the JIT option as shown below.

RUBYOPT="--jit" bundle exec rails s

Now, we can start testing the performance on servers. We found that Ruby 2.6 is faster than Ruby 2.5, but enabling JIT in Ruby 2.6 does not add more value to the Rails application.

MJIT status and future directions

  • It is in an early development stage.
  • Does not work on Windows.
  • Needs more time to mature.
  • Needs more optimisations.
  • MJIT can use GCC or LLVM; support for other C compilers may come in the future.

Further reading

  1. Ruby 3x3 Performance Goal
  2. The method JIT compiler for Ruby2.6
  3. Vladimir Makarov’s Ruby Edition

Resolve foreign key constraint conflict

We have a client that uses a multi-tenant database setup where each database holds the data of one of their customers. Whenever a new customer is added, a service dynamically creates a new database. In order to seed this new database, we were tasked with implementing a feature to copy data from an existing “demo” database.

The “demo” database is actually a live client database where the sales team does demos. This ensures that the copied data is fresh and not stale.

We implemented a solution where we simply listed all the tables in the namespace and used the activerecord-import gem to copy the table data. We used activerecord-import to keep the code agnostic of the underlying database, because we used different databases in development and production: production is “SQL Server” and the development database is “PostgreSQL”. Why this project ended up having different databases in development and in production is worthy of a separate blog.

When we started using the above mentioned strategy, we quickly ran into a problem: inserts for some tables were failing.

insert or update on table "dependent_table" violates foreign key constraint "fk_rails"
Detail: Key (column)=(1) is not present in table "main_table".

The issue was that we had foreign key constraints on some tables, and the “dependent” table was being processed before the “main” table.

Initially we thought of simply hard-coding the sequence in which to process the tables. But that means that whenever a new table is added, we would have to update the service to include the newly added table. So we needed a way to identify the foreign key dependencies and determine the sequence in which to copy the tables at runtime. To resolve this issue, we thought of using topological sorting.

Topological Sorting

To get started, we need the list of dependencies between “main” and “dependent” tables. In PostgreSQL, the following SQL query fetches the table dependencies.

SELECT
    tc.table_name AS dependent_table,
    ccu.table_name AS main_table
FROM
    information_schema.table_constraints AS tc
    JOIN information_schema.key_column_usage AS kcu
      ON tc.constraint_name = kcu.constraint_name
      AND tc.table_schema = kcu.table_schema
    JOIN information_schema.constraint_column_usage AS ccu
      ON ccu.constraint_name = tc.constraint_name
      AND ccu.table_schema = tc.table_schema
WHERE constraint_type = 'FOREIGN KEY'
  AND (tc.table_name LIKE 'namespace_%' OR ccu.table_name LIKE 'namespace_%');

=> dependent_table  | main_table
   dependent_table1 | main_table1
   dependent_table2 | main_table2

The above query fetches dependencies only for the tables with the namespace prefix, i.e. the tables we are interested in. The output of the above query was [["dependent_table1", "main_table1"], ["dependent_table2", "main_table2"]].

Ruby's standard library has a TSort module for implementing topological sorts. So we needed to run a topological sort on the dependencies. We inserted the dependencies into a hash and mixed the TSort functionality into it. Following is the way to include the TSort module by subclassing Hash.

require "tsort"

# Borrowed from the TSort module documentation
class TsortableHash < Hash
  include TSort

  alias tsort_each_node each_key

  def tsort_each_child(node, &block)
    fetch(node).each(&block)
  end
end

Then we simply added all the tables to the dependency hash, as below.

tables_to_sort = ["dependent_table1", "dependent_table2", "main_table1"]
dependency_graph = tables_to_sort.inject(TsortableHash.new) { |hash, table| hash[table] = []; hash }

table_dependency_map = fetch_table_dependencies_from_database
=> [["dependent_table1", "main_table1"], ["dependent_table2", "main_table2"]]

# Add missing tables to dependency graph
table_dependency_map.flatten.each { |table| dependency_graph[table] ||= [] }

# Add the dependencies to the graph
table_dependency_map.each { |constraint| dependency_graph[constraint[0]] << constraint[1] }

dependency_graph.tsort
=> ["main_table1", "dependent_table1", "main_table2", "dependent_table2"]

The output above is the dependency-resolved sequence of tables.

Topological sorting is pretty useful in situations where we need to resolve dependencies, and Ruby provides a really helpful tool, TSort, to implement it without going into the implementation details. Although I did spend time understanding the underlying algorithm, just for fun.
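One caveat with this approach: if two tables reference each other through foreign keys, no valid copy order exists, and TSort raises TSort::Cyclic. A minimal, self-contained sketch of that failure mode (the table names here are made up):

```ruby
require "tsort"

# Same TsortableHash pattern as above
class TsortableHash < Hash
  include TSort

  alias tsort_each_node each_key

  def tsort_each_child(node, &block)
    fetch(node).each(&block)
  end
end

# Two tables with circular foreign keys cannot be ordered
graph = TsortableHash.new
graph["table_a"] = ["table_b"]
graph["table_b"] = ["table_a"]

begin
  graph.tsort
rescue TSort::Cyclic => e
  puts "Cannot order tables: #{e.message}"
end
```

If this ever happens in practice, one of the two foreign keys has to be deferred or dropped before copying.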

Cache all files with Cloudflare worker and HMAC auth

Cloudflare is a Content Delivery Network (CDN) company that provides various network and security services. In March 2018, they released the “Cloudflare Workers” feature to the public. Cloudflare Workers allow us to write JavaScript code that runs on Cloudflare's edge servers. This is helpful when we want to pre-process requests before forwarding them to the origin. In this post, we will explain how we implemented HMAC authentication while caching all files on Cloudflare edges.

We have a bunch of files hosted in S3 which are served through CloudFront. To reduce the CloudFront bandwidth cost and to make use of a global CDN (we use Price Class 100 in CloudFront), we decided to use Cloudflare for file downloads. This would help us cache files on Cloudflare edges and eventually reduce the bandwidth costs at the origin (CloudFront). But to do this, we had to solve a few problems.

We had been signing CloudFront download URLs to restrict their usage after a period of time. This means the file download URLs are always unique. Since Cloudflare caches files based on URLs, caching will not work when the URLs are signed. We had to remove the URL signing to get it working with Cloudflare, but we can’t allow people to continuously use the same download URL. Cloudflare Workers helped us with this.

We negotiated a deal with Cloudflare and upgraded our subscription to the Enterprise plan. The Enterprise plan lets us define a Custom Cache Key, with which we can configure Cloudflare to cache based on a user-defined key. It also raises the cache file size limits. We wrote the following Worker code, which configures a custom cache key and authenticates URLs using HMAC.

A Cloudflare Worker starts by attaching a handler to the "fetch" event.

addEventListener("fetch", event => {
  event.respondWith(verifyAndCache(event.request));
});

The verifyAndCache function can be defined as follows.

async function verifyAndCache(request) {

  // Convert the string to an array of its ASCII values
  function str2ab(str) {
    let uintArray = new Uint8Array(
      str.split("").map(function(char) {
        return char.charCodeAt(0);
      })
    );
    return uintArray;
  }

  // Retrieve the token from the query string which is in the format "<time>-<auth_code>"
  function getFullToken(url, query_string_key) {
    let full_token = url.split(query_string_key)[1];
    return full_token;
  }

  // Fetch the authentication code from token
  function getAuthCode(full_token) {
    let token = full_token.split("-");
    return token[1].split("/")[0];
  }

  // Fetch timestamp from token
  function getExpiryTimestamp(full_token) {
    let timestamp = full_token.split("-");
    return timestamp[0];
  }

  // Fetch file path from URL
  function getFilePath(url) {
    let url_obj = new URL(url);
    return decodeURI(url_obj.pathname);
  }

  const full_token = getFullToken(request.url, "&verify=");
  const token      = getAuthCode(full_token);
  const str        = getFilePath(encodeURI(request.url)) + "/" + getExpiryTimestamp(full_token);
  const secret     = "< HMAC KEY >";

  // Import the secret string as an HMAC SHA-256 signing key
  let key = await crypto.subtle.importKey(
    "raw",
    str2ab(secret),
    { name: "HMAC", hash: { name: "SHA-256" } },
    false,
    ["sign", "verify"]
  );

  // Sign the "str" with the key generated previously
  let sig = await crypto.subtle.sign({ name: "HMAC" }, key, str2ab(str));

  // Convert the ArrayBuffer "sig" to a string, then to a Base64 digest, and then URL-encode it
  let verif = encodeURIComponent(
    btoa(String.fromCharCode.apply(null, new Uint8Array(sig)))
  );

  // Get time in Unix epoch
  let time = Math.floor(Date.now() / 1000);

  if (time > getExpiryTimestamp(full_token) || verif != token) {
    // Render error response
    const init = {
      status: 403
    };
    const modifiedResponse = new Response(
      `Invalid token`,
      init
    );
    return modifiedResponse;
  } else {
    let url = new URL(request.url);

    // Generate a cache key from the URL, excluding the unique query string
    let cache_key = url.hostname + url.pathname;

    let headers = new Headers(request.headers);

    // Set an optional header/auth token for additional security in origin.
    // For example, using AWS Web Application Firewall (WAF), it is possible to create a filter
    // that allows requests only with a custom header to pass through the CloudFront distribution.
    headers.set("X-Auth-token", "< Optional Auth Token >");

    // Fetch the file using cache_key. The file will be served from cache if it's already there,
    // or the request will be sent to the origin. Please note 'cacheKey' is available only in the
    // Enterprise plan.
    const response = await fetch(request, { cf: { cacheKey: cache_key }, headers: headers });
    return response;
  }
}

Once the worker is added, configure an associated route in "Workers -> Routes -> Add Route" in Cloudflare.

Add Cloudflare Worker route

Now, all requests will go through the configured Cloudflare worker. Each request will be verified using HMAC authentication and all files will be cached in Cloudflare edges. This would reduce bandwidth costs at the origin.
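For completeness, the origin application has to issue download URLs carrying a token in the <time>-<auth_code> format that the Worker expects: the auth code is the URL-encoded Base64 HMAC-SHA256 of "<file path>/<expiry>". Below is a minimal Ruby sketch of such a signer; the helper name, file path, and secret are illustrative, not from our actual codebase:

```ruby
require "openssl"
require "base64"
require "cgi"

# Build the "&verify=<expiry>-<auth_code>" query parameter the Worker checks.
# auth_code = URL-encoded Base64 of HMAC-SHA256("<file path>/<expiry>", secret)
def signed_query_param(file_path, secret, validity = 3600)
  expiry    = Time.now.to_i + validity
  digest    = OpenSSL::HMAC.digest("SHA256", secret, "#{file_path}/#{expiry}")
  auth_code = CGI.escape(Base64.strict_encode64(digest))
  "&verify=#{expiry}-#{auth_code}"
end

puts signed_query_param("/files/report.pdf", "my-hmac-key")
```

Note that CGI.escape and JavaScript's encodeURIComponent agree on the Base64 alphabet (A-Z, a-z, 0-9, +, /, =), so the Ruby-generated code matches the Worker's comparison byte for byte.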

Replacing PhantomJS with headless Chrome

We recently replaced PhantomJS with ChromeDriver for system tests in a project, since PhantomJS is no longer maintained. Many modern browser features required workarounds and hacks to work on PhantomJS. For example, the Element.trigger('click') method does not actually click an element but simulates a DOM click event. These workarounds meant that the code was not being tested as it would behave in a real production environment.

ChromeDriver Installation & Configuration

ChromeDriver is needed to use Chrome as the browser for system tests. It can be installed on macOS using homebrew.

brew cask install chromedriver

Remove poltergeist from Gemfile and add selenium-webdriver.


- gem "poltergeist"
+ gem "selenium-webdriver"

Configure Capybara to use ChromeDriver by adding the following snippet.

require 'selenium-webdriver'

Capybara.register_driver(:chrome_headless) do |app|
  args = []
  args << 'headless' unless ENV['CHROME_HEADLESS']

  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    chromeOptions: { args: args }
  )

  Capybara::Selenium::Driver.new(
    app,
    browser: :chrome,
    desired_capabilities: capabilities
  )
end

Capybara.default_driver = :chrome_headless

The above code runs tests in headless mode by default. For debugging purposes we would like to see the actual browser, which can be done by executing the following command.

CHROME_HEADLESS=false bin/rails test:system

After switching from PhantomJS to headless Chrome, we ran into many test failures due to differences in the implementation of the Capybara API when using ChromeDriver. Here are solutions to some of the issues we faced.

1. Element.trigger(‘click’) does not exist

Element.trigger('click') simulates a DOM click event instead of actually clicking the element. This is a bad practice because the element might be obscured behind another element and still receive the click. Selenium does not support this method. We can replace Element.trigger('click') with Element.click or Element.send_keys(:return), or by executing JavaScript to trigger the click event.



# solutions
find(".foo-link").click

# or
find(".foo-link").send_keys(:return)

# or
# if the link is not visible or is overlapped by another element
page.execute_script("$('.foo-link').trigger('click')")

2. Element is not visible to click

When we switched to ChromeDriver, some tests were failing because the element was not visible, as it was behind another element. The easiest fix for these failing tests was to use Element.send_keys(:return), but the purpose of the test is to simulate a real user clicking the element. So we had to make sure the element is visible, and we fixed the UI issues which prevented the element from being visible.

3. Setting the value of hidden fields does not work

When we try to set the value of a hidden input field using the set method of an element, Capybara throws an element not interactable error. We can set the value with JavaScript instead.

find(".foo-field", visible: false).set("some text")
# Error: element not interactable

# solution
page.execute_script('$(".foo-field").val("some text")')

4. Element.visible? returns false if the element is empty

Capybara’s ignore_hidden_elements option is true by default. When ignore_hidden_elements is true, Capybara finds only those elements which are visible on the page. Let’s say we have <div class="empty-element"></div> on our page. find(".empty-element").visible? returns false because Selenium considers empty elements invisible. This issue can be resolved by using visible: :any.


# ignore hidden elements
Capybara.ignore_hidden_elements = true

# returns false
find(".empty-element").visible?

# finds the empty element
find(".empty-element", visible: :any)

# or
find(".empty-element", visible: :all)

# or
find(".empty-element", visible: false)

Rails 6 adds ActiveRecord::Relation#pick

Before Rails 6, selecting only the first value for a column from a set of records was cumbersome. Let’s say we want only the name of the first post with category “Rails 6”.

>> Post.where(category: "Rails 6").limit(1).pluck(:name).first
   SELECT "posts"."name"
   FROM "posts"
   WHERE "posts"."category" = ?
   LIMIT ?  [["category", "Rails 6"], ["LIMIT", 1]]
=> "Rails 6 introduces awesome shiny features!"

In Rails 6, the new ActiveRecord::Relation#pick method has been added, which provides a shortcut for selecting the first value.

>> Post.where(category: "Rails 6").pick(:name)
   SELECT "posts"."name"
   FROM "posts"
   WHERE "posts"."category" = ?
   LIMIT ?  [["category", "Rails 6"], ["LIMIT", 1]]
=> "Rails 6 introduces awesome shiny features!"

This method internally applies limit(1) on the relation before picking the first value. So it is useful when the relation is already reduced to a single row.

It can also select values for multiple columns.

>> Post.where(category: "Rails 6").pick(:name, :author)
   SELECT "posts"."name", "posts"."author"
   FROM "posts"
   WHERE "posts"."category" = ?
   LIMIT ?  [["category", "Rails 6"], ["LIMIT", 1]]
=> ["Rails 6.0 new features", "prathamesh"]
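As noted above, pick is essentially limit(1) plus pluck plus first. The following plain-Ruby sketch simulates that behavior; the Post struct, lambdas, and sample data are made up for illustration and are not ActiveRecord:

```ruby
# Simulate pick as limit(1).pluck(*cols).first with plain Ruby objects.
Post = Struct.new(:name, :author, :category)

posts = [
  Post.new("Rails 6.0 new features", "prathamesh", "Rails 6"),
  Post.new("Another post", "someone", "Rails 5"),
]

# pluck-like: project the given attributes from each record
pluck = ->(records, *cols) { records.map { |r| cols.map { |c| r[c] } } }

# pick-like: limit to one record, project, and unwrap a single-column row
pick = lambda do |records, *cols|
  row = pluck.call(records.first(1), *cols).first
  cols.length == 1 ? row&.first : row
end

rails6 = posts.select { |p| p.category == "Rails 6" }
pick.call(rails6, :name)           # => "Rails 6.0 new features"
pick.call(rails6, :name, :author)  # => ["Rails 6.0 new features", "prathamesh"]
```

Like the real pick, this returns nil rather than raising when the relation is empty.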

Target Tracking Policy for Auto Scaling

In July 2017, AWS introduced the Target Tracking Policy for Auto Scaling in EC2. It helps to autoscale based on metrics like Average CPU Utilization, Load balancer requests per target, and so on. Simply stated, it scales resources up and down to keep the metric at a fixed value. For example, if the configured metric is Average CPU Utilization and the target value is 60%, the Target Tracking Policy will launch more instances when the Average CPU Utilization goes beyond 60%, and will automatically scale down when the usage decreases. The Target Tracking Policy works using a set of CloudWatch alarms which are automatically set when the policy is configured.

It can be configured in EC2 -> Auto Scaling Groups -> Scaling Policies.

EC2 Target Tracking Policy

We can also configure a warm-up period, so that the policy waits before launching more instances to keep the metric at the configured value.

Internally, we use Terraform to manage AWS resources. The Target Tracking Policy can be configured using Terraform as follows.

resource "aws_launch_configuration" "web_cluster" {
  name_prefix     = "staging-web-cluster"
  image_id        = "<image ID>"
  instance_type   = "<instance type>"
  key_name        = "<ssh key name>"
  security_groups = ["<security group>"]
  user_data       = "<user_data script>"

  root_block_device {
    volume_size = "<volume size>"
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web_cluster" {
  name                      = "staging-web-cluster-asg"
  min_size                  = "<min ASG size>"
  max_size                  = "<max ASG size>"
  default_cooldown          = "300"
  launch_configuration      = "${aws_launch_configuration.web_cluster.name}"
  vpc_zone_identifier       = ["<subnet ID>"]
  health_check_type         = "EC2"
  health_check_grace_period = 300

  target_group_arns = ["<target group arn>"]
}

resource "aws_autoscaling_policy" "web_cluster_target_tracking_policy" {
  name                      = "staging-web-cluster-target-tracking-policy"
  policy_type               = "TargetTrackingScaling"
  autoscaling_group_name    = "${aws_autoscaling_group.web_cluster.name}"
  estimated_instance_warmup = 200

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }

    target_value = "60"
  }
}
Target Tracking Policy allows us to easily configure and manage autoscaling in EC2. It’s particularly helpful while running services like web servers.