The output of rails routes is in a table format.
$ rails routes
Prefix Verb URI Pattern Controller#Action
users GET /users(.:format) users#index
POST /users(.:format) users#create
new_user GET /users/new(.:format) users#new
edit_user GET /users/:id/edit(.:format) users#edit
user GET /users/:id(.:format) users#show
PATCH /users/:id(.:format) users#update
PUT /users/:id(.:format) users#update
DELETE /users/:id(.:format) users#destroy
If we have long route names, they don't fit in the terminal window
and the output lines wrap into each other.
Rails 6 added slice! on ActiveModel::Errors. With this addition, it becomes quite
easy to select just a few keys from errors and show or return them. Before Rails 6,
we needed to convert the ActiveModel::Errors object to a hash before slicing the keys.
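Since slicing keys follows plain Hash semantics, the idea can be illustrated with an ordinary Hash standing in for the converted errors object (a sketch for illustration; the real slice! operates directly on ActiveModel::Errors):

```ruby
# A plain-Hash stand-in for the result of converting errors to a hash.
errors = {
  name:  ["can't be blank"],
  email: ["is invalid"],
  age:   ["must be a number"]
}

# Before Rails 6: convert the errors object to a hash, then slice
# the keys you want to show or return.
selected = errors.slice(:name, :email)
puts selected.inspect

# With Rails 6, errors.slice!(:name, :email) performs the same
# selection directly on the ActiveModel::Errors object.
```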
Rails 6 added create_or_find_by and
create_or_find_by!. Both of these methods rely on unique constraints
at the database level. If creation fails because of a unique constraint on one or more of the given columns,
the method tries to find the existing record using find_by!.
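The create-then-find fallback can be sketched in plain Ruby against a hypothetical in-memory store with a uniqueness check (RecordNotUnique and Store below are illustrative stand-ins, not Rails internals):

```ruby
# Minimal sketch of the pattern create_or_find_by relies on:
# attempt the insert first, and fall back to a lookup only when
# the unique constraint is violated.
class RecordNotUnique < StandardError; end

class Store
  def initialize
    @rows = {}
  end

  # Simulates a database-level unique constraint on :email.
  def create!(email:)
    raise RecordNotUnique if @rows.key?(email)
    @rows[email] = { email: email }
  end

  def find_by!(email:)
    @rows.fetch(email)
  end

  def create_or_find_by(email:)
    create!(email: email)
  rescue RecordNotUnique
    find_by!(email: email)
  end
end

store = Store.new
first  = store.create_or_find_by(email: "a@example.com")
second = store.create_or_find_by(email: "a@example.com")
puts first == second  # => true, both calls return the same record
```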
Also note that create_or_find_by can lead to
primary keys running out if the primary key type is int. This happens because each time
create_or_find_by hits ActiveRecord::RecordNotUnique,
it does not roll back the auto-increment of the primary key. The problem is discussed in this
BigBinary started in 2011. Here are our revenue numbers for the last 7 years.
We achieved this to date without having any outbound marketing and sales strategy.
We have never sent a cold email.
We have never sent a cold LinkedIn message.
The only time we advertised was a two-month period when we tried Google advertisements, with no results.
We do not sponsor any podcast.
We have not had a sales person.
We have not had a marketing person.
We have kept our heads down and focused on what we do best: designing, developing, debugging, devops, and blogging.
This is what has worked out for us so far:
We contribute to the community through blog posts and open source.
We sponsor community events like Rails Girls and Ruby Conf India.
We sponsor many React and Ruby meetups.
We focus on keeping our existing clients happy.
Over the years I have come across many people who
aspire to be freelancers.
While it is not for everyone, I encourage them to give freelancing a try.
The greatest hindrance I have seen is that
they stress over sales and marketing, and rightly so.
Being a freelancer means a constant need to find your next client.
I’m not here to say what others ought to do.
I’m here to say what has worked out for BigBinary over the last 7 years.
While we plan to experiment with new forms of marketing, networking, and sales channels as we grow, these are not the be-all and end-all for freelancers.
While marketing, networking, and sales may be effective for some,
it was not how we started BigBinary and may not be how you want to start either.
For us at BigBinary, it has been writing blogs. When we come across a potentially intriguing blog topic, we save it by creating a GitHub issue.
When we have downtime, we pick up a topic from our issues list.
It’s as simple as that and has been our primary driver of growth thus far.
While you should experiment to find out what works best for you,
make sure it also suits your personality.
If you are good at teaching through videos, consider creating your own YouTube channel.
If you contribute to open source,
try creating a blog about your efforts and learnings.
If you are good at concentrating on a niche technology,
build your marketing and business around that.
I can confidently say that the majority of people I have met
who want to be freelancers would
do fine if they simply shared what they are learning.
Most of these people do technical work.
Some of them already blog and others can blog.
Nearly everybody will say that a blog is a decent start.
I’m saying that it is a good end too.
If you do not want to do any other form of marketing, that's fine too.
Just blogging can work out for you, like it has worked out for us at BigBinary.
Becoming a freelancer doesn't mean you have to change who you are.
If you don’t like sending cold emails then don’t.
If you do not like networking then that’s alright as well.
Write personal emails, dump corporate talk, show compassion and be genuine.
So go on and do some freelancing.
It will teach you a lot about creating
and capturing value.
It will be rough at times.
And it will be hard at times.
But it will also be a ton of fun.
As described by DHH in
Rails has find_or_create_by, find_by
and similar methods
to create and find the records
matching the specified conditions.
Rails was missing a similar feature for
deleting or destroying the record(s).
Before Rails 6,
deleting or destroying the record(s)
matching a given condition
was done as shown below.
The above examples lack
the symmetry of
find_or_create_by and find_by.
In Rails 6, the new delete_by
and destroy_by methods have been added.
ActiveRecord::Relation#delete_by is shorthand for where(...).delete_all.
Similarly, ActiveRecord::Relation#destroy_by is shorthand for where(...).destroy_all.
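The shorthand relationship can be sketched with a tiny in-memory relation (TinyRelation below is an illustrative stand-in, not ActiveRecord internals):

```ruby
# A tiny in-memory stand-in for ActiveRecord::Relation, showing that
# delete_by(cond) behaves like where(cond).delete_all.
class TinyRelation
  attr_reader :rows

  def initialize(rows, parent = nil)
    @rows = rows
    @parent = parent
  end

  # Returns a scoped relation containing only the matching rows.
  def where(conditions)
    matched = @rows.select { |row| conditions.all? { |k, v| row[k] == v } }
    TinyRelation.new(matched, self)
  end

  # Removes this relation's rows from the parent scope and returns
  # the number of deleted rows.
  def delete_all
    deleted = @rows.size
    if @parent
      @parent.rows.reject! { |row| @rows.include?(row) }
    else
      @rows.clear
    end
    deleted
  end

  # The Rails 6 style shorthand.
  def delete_by(conditions)
    where(conditions).delete_all
  end
end

users = TinyRelation.new([
  { id: 1, name: "Sam" },
  { id: 2, name: "Dana" }
])
deleted = users.delete_by(name: "Sam")
puts deleted  # => 1
```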
Before moving forward, we need to understand what the touch method does. touch updates the
updated_at timestamp, defaulting to the current time. It can also take a custom time or different columns as parameters.
Rails 6 has added touch_all on ActiveRecord::Relation
to touch multiple records in one go. Before Rails 6, we needed to iterate over all the records to achieve this result.
Let’s take an example in which we call touch_all on all user records.
touch_all returns the count of the records on which it is called.
touch_all also takes a custom time or different columns as parameters.
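The behaviour can be sketched in plain Ruby (Record and the touch_all helper below are hypothetical stand-ins, not ActiveRecord):

```ruby
# Hypothetical record with an updated_at column.
Record = Struct.new(:name, :updated_at)

records = [Record.new("a", nil), Record.new("b", nil)]

# The touch_all idea: update updated_at on every matching record in
# one go (defaulting to the current time) and return the count.
def touch_all(records, time: Time.now)
  records.each { |r| r.updated_at = time }
  records.size
end

count = touch_all(records)
puts count  # => 2
```

Before Rails 6 the equivalent was an explicit loop calling touch on each record, which issued one UPDATE per record instead of one for the whole relation.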
JIT stands for Just-In-Time compiler.
A JIT compiles frequently executed code into native code
which the processor can then run directly,
saving time by not interpreting the same piece of code over and over.
MJIT was introduced in Ruby 2.6.
It is most commonly known as MRI JIT or
Method-Based JIT.
It is a part of the Ruby 3x3 project started by Matz.
The name “Ruby 3x3” signifies
Ruby 3.0 will be 3 times faster than Ruby 2.0 and it will focus mainly on performance.
In addition to performance, it also aims for the following things:
MJIT is still in development and is therefore optional in Ruby 2.6.
If you are running Ruby 2.6, you can execute the following command.
You will see the following options.
Vladimir Makarov proposed improving performance
by replacing VM instructions with RTL (Register Transfer Language)
and introducing a method-based JIT compiler.
Ruby's compiler converts the code to YARV (Yet Another Ruby VM) instructions,
which are then run by the Ruby Virtual Machine.
Code that is executed often enough
is converted to RTL instructions, which run faster.
Let’s take a look at how MJIT works.
Let’s run this code with MJIT options and check what we got.
Nothing interesting, right? And why is that?
Because we iterate the loop only 4 times,
and the default number of calls before MJIT kicks in is 5.
We can decide after how many calls MJIT should kick in by providing the --jit-min-calls=#number option.
Let’s tweak the program a bit so MJIT gets to work.
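A representative sketch of such a program (the method and numbers here are illustrative; any method called at least 5 times becomes a JIT candidate at the default threshold):

```ruby
# A method simple enough for MJIT to compile once it crosses the
# default call threshold of 5.
def sum_upto(n)
  total = 0
  (1..n).each { |i| total += i }
  total
end

# Calling the method 6 times makes it a JIT candidate when the
# script is run with: ruby --jit --jit-verbose=1 this_file.rb
results = (1..6).map { sum_upto(100) }
puts results.last  # => 5050
```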
After running the above code we can see some work done by MJIT.
Here's what's happening. The method ran 4 times,
and on the 5th call MJIT found that the same code was running again.
So MJIT started a separate thread to convert the code into RTL instructions,
which produced a shared object library.
Subsequent calls then executed that shared code directly.
Since we passed the option,
we can see what MJIT did.
What we are seeing in the output is the following:
Time taken to compile.
What block of code is compiled by JIT.
Location of compiled code.
We can open the file and
see how MJIT converted the piece of code to binary instructions, but for that
we need to pass another option, --jit-save-temps,
and then just inspect those files.
After compiling the code to RTL instructions,
take a look at the execution time.
It dropped down to 0.10 ms from 0.46 ms.
That’s a neat speed bump.
Here is a comparison across some of the Ruby versions for some basic operations.
Rails comparison on Ruby 2.5, Ruby 2.6 and Ruby 2.6 with JIT
Create a rails application with different Ruby versions and start a server.
We can start the rails server with the JIT option, as shown below.
Now, we can start testing the performance on servers.
We found that Ruby 2.6 is faster than Ruby 2.5,
but enabling JIT in Ruby 2.6 does not add more value to the Rails application.
MJIT status and future directions
It is in an early development stage.
Does not work on Windows.
Needs more time to mature.
Needs more optimisations.
MJIT may use either GCC or LLVM as its C compiler in the future.
We have a client that uses a multi-tenant database setup
where each database holds data for one of their customers.
Whenever a new customer is added, a service dynamically creates a new database.
In order to seed this new database, we were tasked
with implementing a feature to copy data from an existing "demo" database.
The "demo" database is actually live; the sales team uses it for demos.
This ensures that the copied data is fresh and not stale.
We implemented a solution where we simply listed all the tables in the namespace and used the activerecord-import gem
to copy the table data.
We used activerecord-import to keep the code agnostic of the underlying database, as we used different databases in development and production:
"SQL Server" in production and "PostgreSQL" in development.
Why this project ended up with different databases in development and production
is worthy of a separate blog post.
When we started using the above-mentioned strategy,
we quickly ran into a problem.
Inserts for some tables were failing.
The issue was that we had foreign key constraints on some tables, and a "dependent" table was being processed before its "main" table.
Initially we thought of simply hard-coding the sequence in which to process the tables. But that would mean updating the service to include every newly added table. So we needed a way to identify the foreign key dependencies and determine the sequence in which to copy the tables at runtime. To resolve this issue, we thought of using
To get started we need the list of dependencies of “main” and “dependent” tables.
In PostgreSQL, this SQL query fetches the table dependencies.
The above query fetches the dependencies for only the tables in the namespace we are interested in.
The output of the above query was [[dependent_table1, main_table1], [dependent_table2, main_table2]].
Ruby has a TSort module for implementing topological sorts.
So we needed to run a topological sort on the dependencies. We inserted the dependencies into a hash and included the TSort functionality into it. Following is the way to include the TSort module by subclassing Hash.
Then we simply added all the tables to the dependency hash, as below.
The output above is the dependency-resolved sequence of tables.
Topological sorting is pretty useful in situations where we need to resolve dependencies, and Ruby provides a really helpful tool, TSort, to implement it without going into the implementation details. Although I did spend time understanding the underlying algorithm, just for fun.
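The whole approach can be shown in a self-contained sketch (the table names below are made up for illustration):

```ruby
require "tsort"

# A Hash subclass wired up for TSort: each key is a table, each value
# is the list of tables it depends on.
class DependencyHash < Hash
  include TSort

  alias tsort_each_node each_key

  def tsort_each_child(node, &block)
    fetch(node, []).each(&block)
  end
end

# Built from pairs like [dependent_table, main_table]: a dependent
# table must be copied after the table it references.
deps = DependencyHash.new
deps["orders"]   = ["users"]   # orders has an FK to users
deps["payments"] = ["orders"]  # payments has an FK to orders
deps["users"]    = []

# tsort emits each table after all of its dependencies.
puts deps.tsort.inspect  # => ["users", "orders", "payments"]
```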
Cloudflare is a Content Delivery Network (CDN) company that provides various network and security services.
In March 2018, they
while caching all files in Cloudflare edges.
We have a bunch of files hosted in S3 which are served through CloudFront.
To reduce the CloudFront bandwidth cost
and to make use of a global CDN (we use Price Class 100 in CloudFront), we decided to use Cloudflare for file downloads. This would help us cache files in Cloudflare edges and would eventually reduce the bandwidth costs at the origin (CloudFront). But to do this, we had to solve a few problems.
We had been signing CloudFront download URLs to restrict their usage after a period of time. This means the file download URLs are always unique. Since Cloudflare caches files based on URLs, caching will not work when the URLs are signed. We had to remove the URL signing to get it working with Cloudflare, but we can’t allow people to continuously use the same download URL. Cloudflare Workers helped us with this.
We negotiated a deal with Cloudflare and upgraded our subscription to the Enterprise plan. The Enterprise plan lets us define a
Custom Cache Key,
using which we can configure Cloudflare to cache based on a user-defined key.
The Enterprise plan also increased the cache file size limits.
We wrote the following Worker code, which configures a custom cache key and authenticates URLs using HMAC.
A Cloudflare Worker starts by attaching a method to the "fetch" event.
The verifyAndCache function can be defined as follows.
Once the worker is added, configure an associated route in "Workers -> Routes -> Add Route" in Cloudflare.
Now, all requests will go through the configured Cloudflare worker. Each request will be verified using HMAC authentication and all files will be cached in Cloudflare edges. This would reduce bandwidth costs at the origin.
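The Worker itself is JavaScript, but the HMAC check it performs can be sketched in Ruby. The token scheme below (path plus expiry, signed with a shared secret) is an assumption for illustration, not the exact scheme from the Worker:

```ruby
require "openssl"

SECRET = "shared-secret-key" # assumption: the same key the Worker is configured with

# Sign a path together with an expiry timestamp. The URL carries the
# expiry and the HMAC, while the cache key uses only the path, so
# Cloudflare can cache the file despite the changing signature.
def sign(path, expires_at)
  OpenSSL::HMAC.hexdigest("SHA256", SECRET, "#{path}:#{expires_at}")
end

# Verify: reject expired links, then recompute the HMAC and compare.
def verify(path, expires_at, token, now: Time.now.to_i)
  return false if now > expires_at
  # Note: production code should use a constant-time comparison here.
  sign(path, expires_at) == token
end

expires = Time.now.to_i + 3600
token = sign("/files/report.pdf", expires)
puts verify("/files/report.pdf", expires, token)  # => true
puts verify("/files/other.pdf", expires, token)   # => false
```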
We recently replaced PhantomJS with ChromeDriver for system tests in a project since
PhantomJS is no longer maintained.
Many modern browser features required workarounds and hacks to work on PhantomJS.
For example, the Element.trigger('click') method does not actually click an
element but simulates a DOM click event.
These workarounds meant that the code was not being tested as it would behave
in a real production environment.
ChromeDriver Installation & Configuration
ChromeDriver is needed to use Chrome as the browser for system tests.
It can be installed on macOS using
Remove poltergeist from Gemfile and add selenium-webdriver.
Configure Capybara to use ChromeDriver by adding the following snippet.
The above code runs tests in headless mode by default.
For debugging purposes, we would like to see the actual
browser. That can easily be done by executing the following command.
CHROME_HEADLESS=false bin/rails test:system
After switching from PhantomJS to headless Chrome,
we ran into many test failures
due to the differences in implementation of Capybara API
when using ChromeDriver.
Here are solutions to some of the issues we faced.
1. Element.trigger(‘click’) does not exist
2. Element is not visible to click
When we switched to Element.click,
some tests were failing because the element was not visible as it was behind another element.
The easiest way to fix these failing tests was to use Element.send_keys(:return),
but the purpose of the tests is to simulate a real user clicking the element.
So we had to make sure the element is visible.
We fixed the UI issues which prevented the element from being visible.
3. Setting the value of hidden fields does not work
When we try to set the value of a hidden input field using the set method of an element,
Capybara throws an "element not interactable" error.
4. Element.visible? returns false if the element is empty
The ignore_hidden_elements option of Capybara is false by default.
If ignore_hidden_elements is true, Capybara will find only those elements
which are visible on the page.
Let’s say we have <div class="empty-element"></div> on our page. find(".empty-element").visible? returns false because selenium considers empty elements as invisible. This issue can be resolved by using visible: :any.
In July 2017, AWS introduced the
Target Tracking Policy for Auto Scaling in EC2.
It helps autoscale based on metrics like Average CPU Utilization, load balancer requests per target, and so on.
Simply stated, it scales resources up and down to keep the metric at a fixed value.
For example, if the configured metric is Average CPU Utilization and the value is 60%, the Target Tracking Policy will launch more instances if the Average CPU Utilization goes beyond 60%.
It will automatically scale down when the usage decreases.
Target Tracking Policy works using a set of CloudWatch alarms which are automatically set when the policy is configured.
It can be configured in EC2 -> Auto Scaling Groups -> Scaling Policies.
We can also configure a warm-up period so that it waits before launching more instances to keep the metric at the configured value.
Internally, we use Terraform to manage AWS resources. We can configure the Target Tracking Policy using Terraform as follows.
Target Tracking Policy allows us to easily configure and manage autoscaling in EC2. It’s particularly helpful while running services like web servers.
BigBinary has been working with Gumroad for a while.
The following blog post is published with permission from Gumroad, and we
are very grateful to Sahil for allowing us to discuss
the work in such an open manner.
applications are these days.
We’d like to talk about how we went about doing this.
Gumroad’s web application is built using Ruby on Rails.
The project was started way back in 2011 as
this Hacker News post
From what we could tell, all the code which was using a (then) new
frontend framework was processed by RequireJS first
and then Sprockets, whereas the rest of the code
was not being processed by RequireJS.
Also, there were some libraries which were sourced using Bower.
We were tasked with migrating the build system from RequireJS
to webpack
and replacing Bower with NPM. The reason behind this was that we wanted to use newer
tools with wider community support.
Another reason was that we wanted to be able to take advantage of all the
goodies that webpack comes with, though that was not a strong motivation at that point.
We decided to break down the task into small pieces which could be worked on and,
more importantly, shipped in iterations. This would enable
us to work on other tasks in the application in parallel
and not be blocked on a big chunk of work.
Keeping that in mind, we split the task into three different steps.
Step 1: Migrate from RequireJS to webpack with the minimal amount of changes in
the actual code.
Step 2: Use NPM packages in place of Bower components.
Step 3: Use NPM packages in place of libraries present under
Step 1: Migrate from RequireJS to webpack with the minimal amount of changes in the actual code
The first thing we did here was create a new webpack.config.js configuration
file which would be used by webpack. We did our best to accurately translate
the configuration from the RequireJS configuration file using multiple resources
As you can see, the code did not use the newer import syntax.
As we've mentioned earlier, our goal was to have minimal code changes, so we did
not want to change to import just yet.
Luckily for us, webpack supports the AMD format
for specifying dependencies.
This meant that we would not need to change how dependencies were specified in
In this step we also changed the build system configuration
(The webpack.config.js file in this case) to use NPM packages where possible
instead of using libraries from the vendor/ directory. This meant that we
would need to have aliases in place for instances where the package name was
different from the names we had aliased the libraries to.
For example, this is how the ‘braintree’ alias was set earlier in order to
refer to the Braintree SDK. Now all the code had to do was to mention that
braintree was a dependency.
However, dependency sourcing did not work as expected, because the NPM package name was
'braintree-web' while the source code was trying to load 'braintree', which was not
known to the build system (webpack). In order to avoid making changes to the source
code, we used the resolve.alias option
provided by webpack, as shown below.
We did this for all the dependencies which had been given an alias in the
RequireJS configuration and we got dependency resolution to work as expected.
As a part of this step, we also created a new common chunk and used it to
improve caching. You can read more about this feature
Note that we would tweak this iteratively later but we thought it would be good
to get started with the basic configuration right away.
Step 2: Use NPM packages in place of Bower components
Another goal of the migration was to remove Bower so as to make the build system
simpler. The first reason was that all the Bower packages we were
using were available as NPM packages. The second reason was that Bower itself has been
recommending that users migrate to Yarn/webpack for a while now.
What we did here was simple. We removed Bower and the Bower configuration file.
Then, we sourced the required Bower components as NPM packages instead by adding
them to package.json.
We also removed the aliases added to source them from the webpack configuration.
For example, here’s the change required to the configuration file after sourcing
clipboard as an NPM package instead of a Bower component.
from the project and sourced them as NPM packages instead. This way we could
have better visibility and control over the versions of these packages.
As part of this migration we also did some asset-related cleanups.
where required instead of sourcing them into the global scope, etc.
We were continuously measuring the performance of the application before and
after applying changes to make sure that we were not worsening the performance
during the migration. In the end, we found that we had improved the page load
speeds by an average of 2%. Note that this task was not undertaken to improve
the performance of the application. We are now planning to leverage webpack
features and try to improve on this metric further.
Rails 5 was a major release
with a lot of new features
like Action Cable, API applications, etc.
The Active Record attributes API was
also one of the features of
the Rails 5 release, though it did
not receive much attention.
The Active Record attributes API has been
used by Rails internally for a long time.
In the Rails 5 release, the attributes API was made
public, adding support for custom types.
What is the attributes API?
The attributes API converts
the attribute value to an appropriate Ruby type.
Here is how the syntax looks.
The first argument is the name of the attribute
and the second argument is the cast type.
Cast type can be string, integer
or custom type object.
Before using the attributes API,
the movie ticket price was a float value, but
after applying attribute on price,
the price value is typecast as an integer.
The database still stores the price as a float; this
conversion happens only in Ruby land.
Now, we will typecast movie release_date
from datetime to date type.
We can also add a default value for an attribute.
Let's say we want people to rate
a movie as a percentage. Traditionally, we would
do something like this.
With the attributes API, we can create a custom
type which will be responsible
for casting the percentage rating to a
number of stars.
We have to define the cast method in the custom type
class, which casts the given value to the expected output.
The attributes API also supports
the where clause. The query
will be converted to SQL by calling
the serialize method on the type object.
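In Rails the custom type would inherit from ActiveRecord::Type::Value and be wired in with attribute; the cast/serialize pair itself can be sketched standalone (StarRating is a made-up name for illustration):

```ruby
# Standalone sketch of a custom attribute type converting a 0-100
# percentage rating to a 1-5 star count and back.
class StarRating
  # cast: raw value -> Ruby value (percentage -> stars).
  def cast(value)
    (value.to_f / 20).ceil
  end

  # serialize: Ruby value -> database value (stars -> percentage),
  # used when the attribute appears in a where clause.
  def serialize(stars)
    stars * 20
  end
end

type = StarRating.new
puts type.cast(90)      # => 5
puts type.serialize(4)  # => 80
```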