Rails 6 has also deprecated the extension methods
from Active Support
in favor of the native methods with the same names
which now exist in Ruby.
These methods were
introduced natively in Ruby 2.4.
If we try to require them explicitly,
a deprecation warning is printed.
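For instance, Hash#transform_values and String#match? shipped natively in Ruby 2.4, so the corresponding Active Support requires are no longer needed (the method choice below is illustrative):

```ruby
# Hash#transform_values and String#match? were Active Support extensions
# before Ruby 2.4 shipped them natively (examples are illustrative).
prices = { apple: 100, banana: 50 }
doubled = prices.transform_values { |v| v * 2 } # values doubled, keys kept

# String#match? returns true/false without allocating a MatchData object
matched = "rails".match?(/ai/)
```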
We use the db:seed task to seed the database in Rails apps.
Recently an issue was reported on the Rails issue tracker
where the db:seed task was not finishing.
In the development environment, Rails uses the Async adapter
as the default Active Job adapter.
The Async adapter runs jobs with an in-process thread pool.
This specific issue was happening because the seed task was trying to attach a file
using Active Storage. Active Storage enqueues a background job
during the attachment process. This job was not getting executed properly by
the Async adapter, causing the seed task to hang without exiting.
It was found that using the inline adapter in the development environment
makes this issue go away. But wholesale changing the default adapter in the development
environment to the inline adapter defeats the purpose of having the Async adapter as the default in the first place.
As the inline adapter does not allow queuing jobs to run in the future, this can result
in an error if the seeding code somehow triggers such jobs.
This issue is already reported on GitHub.
Active Job is an optional framework and we can skip it completely.
Since seeding now depended on the presence of Active Job, it was throwing
an error when Active Job was not part of the application. Also, executing
the jobs inline automatically, when users had set the Active Job queue adapter to something
of their choice, was surprising for those users.
So a change has been made to load the seeds inline only when Active Job is
included in the application and the queue adapter is async. This makes
it backward compatible, and it does not change the user’s choice of queue adapter.
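The guard can be sketched roughly as follows (the method and parameter names are hypothetical, not Rails' actual internals):

```ruby
# Hypothetical sketch of the decision Rails 6 makes before loading seeds with
# inline job execution: only when Active Job is part of the application AND
# the queue adapter is still the default :async.
def load_seeds_inline?(active_job_loaded:, queue_adapter:)
  active_job_loaded && queue_adapter == :async
end

load_seeds_inline?(active_job_loaded: true,  queue_adapter: :async)   # => true
load_seeds_inline?(active_job_loaded: true,  queue_adapter: :sidekiq) # => false
load_seeds_inline?(active_job_loaded: false, queue_adapter: :async)   # => false
```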
The output of rails routes is in a table format.
$ rails routes
Prefix Verb URI Pattern Controller#Action
users GET /users(.:format) users#index
POST /users(.:format) users#create
new_user GET /users/new(.:format) users#new
edit_user GET /users/:id/edit(.:format) users#edit
user GET /users/:id(.:format) users#show
PATCH /users/:id(.:format) users#update
PUT /users/:id(.:format) users#update
DELETE /users/:id(.:format) users#destroy
If we have long route names, they don’t fit in the terminal window
and the output lines wrap into each other.
Rails 6 added slice! on ActiveModel::Errors. With this addition, it becomes quite
easy to select just a few keys from errors and show or return them. Before Rails 6,
we needed to convert the ActiveModel::Errors object to a hash before slicing the keys.
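To illustrate with a plain hash (the error messages below are made up): before Rails 6 we had to convert to a hash and then call Hash#slice, whereas errors.slice!(:email) now mutates the errors object to keep only the given keys.

```ruby
# Made-up errors hash standing in for an ActiveModel::Errors object
errors = { name: ["can't be blank"], email: ["is invalid"], age: ["is not a number"] }

# Pre-Rails 6 style: convert to a hash, then slice the keys we want
selected = errors.slice(:email)

# Rails 6's slice! mutates the errors object, keeping only the given keys;
# the plain-hash equivalent of that mutation:
errors.keep_if { |key, _| key == :email }
```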
Rails 6 added create_or_find_by and
create_or_find_by!. Both of these methods rely on unique constraints
at the database level. If creation fails, it is because of a unique constraint on one or all of the given columns, and
it will try to find the record using find_by!.
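The underlying rescue-then-find pattern can be sketched with an in-memory stand-in (the class and method names below are hypothetical, not ActiveRecord internals):

```ruby
# Hypothetical in-memory "table" with a unique constraint on :email
class Users
  RecordNotUnique = Class.new(StandardError)

  def initialize
    @rows = {}
  end

  def create!(email)
    # Simulates the database-level unique constraint
    raise RecordNotUnique if @rows.key?(email)
    @rows[email] = { email: email }
  end

  def find_by!(email)
    @rows.fetch(email)
  end

  # Mirrors create_or_find_by: try the INSERT first; on a uniqueness
  # violation, fall back to finding the existing record
  def create_or_find_by(email)
    create!(email)
  rescue RecordNotUnique
    find_by!(email)
  end
end

users = Users.new
first  = users.create_or_find_by("a@example.com") # creates the record
second = users.create_or_find_by("a@example.com") # finds the existing one
# first and second are the same record
```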
Also note that create_or_find_by can lead to
primary keys running out, if the primary key type is int. This happens because each time
create_or_find_by hits ActiveRecord::RecordNotUnique,
the auto-increment of the primary key is not rolled back. The problem is discussed in this
BigBinary started in 2011. Here are our revenue numbers for the last 7 years.
We achieved this to date without having any outbound marketing and sales strategy.
We have never sent a cold email.
We have never sent a cold LinkedIn message.
The only time we advertised was a two-month period when we tried Google advertisements, with no results.
We do not sponsor any podcast.
We have not had a sales person.
We have not had a marketing person.
We have kept our heads down and focused on what we do best: designing, developing, debugging, devops, and blogging.
This is what has worked out for us so far:
We contribute to the community through blog posts and open source.
We sponsor community events like Rails Girls and Ruby Conf India.
We sponsor many React and Ruby meetups.
We focus on keeping our existing clients happy.
Over the years I have come across many people who
aspire to be freelancers.
While it is not for everyone, I encourage them to give freelancing a try.
The greatest hindrance I have seen is that
they stress over sales and marketing, and rightly so.
Being a freelancer means a constant need to find your next client.
I’m not here to say what others ought to do.
I’m here to say what has worked out for BigBinary over the last 7 years.
While we plan to experiment with new forms of marketing, networking, and sales channels as we grow, they are not the be-all and end-all for freelancers.
While marketing, networking, and sales may be effective for some,
it was not how we started BigBinary and may not be how you want to start either.
For us at BigBinary, it has been writing blogs. When we come across a potentially intriguing blog topic, we save the topic by creating a GitHub issue.
When we have downtime, we pick up a topic from our issues list.
It’s as simple as that and has been our primary driver of growth thus far.
While you should experiment to find what works best for you,
you also need to find what suits your personality.
If you are good at teaching through videos, consider creating your own YouTube channel.
If you contribute to open source,
try creating a blog about your efforts and learnings.
If you are good at concentrating on a niche technology,
build your marketing and business around that.
I can confidently say that the majority of people I have met
who want to be freelancers would
do fine if they simply shared what they are learning.
Most of these people do technical work.
Some of them already blog and others can blog.
Nearly everybody will say that a blog is a decent start.
I’m saying that it is a good end too.
If you do not want to do any other form of marketing then that’s fine too.
Just blogging will work out fine for you, just like it has for us at BigBinary.
Just because you are going to be a freelancer doesn’t mean you have to change who you are.
If you don’t like sending cold emails then don’t.
If you do not like networking then that’s alright as well.
Write personal emails, dump corporate talk, show compassion and be genuine.
So go on and do some freelancing.
It will teach you a lot about creating
and capturing value.
It will be rough at times.
And it will be hard at times.
But it will also be a ton of fun.
As described by DHH in
Rails has find_or_create_by, find_by
and similar methods
to create and find records
matching the specified conditions.
Rails was missing a similar feature for
deleting/destroying record(s).
Before Rails 6,
deleting/destroying the record(s)
matching a
given condition was done as shown below.
The above examples were missing
the symmetry of
find_or_create_by and find_by.
In Rails 6, the new delete_by and destroy_by
methods have been added.
ActiveRecord::Relation#delete_by is short-hand for relation.where(conditions).delete_all.
Similarly, ActiveRecord::Relation#destroy_by is short-hand for relation.where(conditions).destroy_all.
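A toy in-memory relation (hypothetical, not ActiveRecord internals) illustrates the equivalence:

```ruby
# Toy stand-in for ActiveRecord::Relation to illustrate the short-hand
class Relation
  def initialize(rows)
    @rows = rows
  end

  def where(conditions)
    Relation.new(@rows.select { |row| conditions.all? { |k, v| row[k] == v } })
  end

  def delete_all
    @rows.size # a real relation would issue one DELETE and return the count
  end

  # Rails 6's delete_by(conditions) is simply where(conditions).delete_all
  def delete_by(conditions)
    where(conditions).delete_all
  end
end

users = Relation.new([
  { name: "a", active: false },
  { name: "b", active: true },
  { name: "c", active: false }
])
users.delete_by(active: false) # => 2
```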
Before moving forward, we need to understand what the touch method does. touch is used to
update the updated_at timestamp, defaulting to the current time. It also accepts a custom time or different columns as parameters.
Rails 6 has added touch_all on ActiveRecord::Relation
to touch multiple records in one go. Before Rails 6, we needed to iterate over all the records to achieve this result.
Let’s take an example in which we call touch_all on all user records.
touch_all returns the count of the records on which it is called.
touch_all also takes a custom time or different columns as parameters.
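In plain Ruby terms (the records below are made-up hashes, and the helper is a sketch, not Rails' implementation), touch_all collapses the per-record loop into one bulk update and returns the number of rows touched:

```ruby
# Sketch of touch_all semantics over made-up in-memory records
def touch_all(records, *columns, time: Time.now)
  columns = [:updated_at] if columns.empty?
  records.each { |record| columns.each { |column| record[column] = time } }
  records.size # like touch_all, return the count of touched records
end

users = [{ updated_at: nil }, { updated_at: nil }]
touch_all(users)                                       # => 2, sets :updated_at
touch_all(users, :created_at, time: Time.now - 86_400) # custom column and time
```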
JIT stands for Just-In-Time compiler.
A JIT converts repeatedly executed code into a compiled form
which can then be run by the processor directly,
thus saving time by not interpreting the same piece of code over and over.
MJIT was introduced in Ruby 2.6.
It is most commonly known as MRI JIT or
Method-based JIT.
It is a part of the Ruby 3x3 project started by Matz.
The name “Ruby 3x3” signifies that
Ruby 3.0 will be 3 times faster than Ruby 2.0, with a main focus on performance.
In addition to performance, it also aims for the following things:
MJIT is still in development; therefore, MJIT is optional in Ruby 2.6.
If you are running Ruby 2.6, you can execute the following command.
You will see the following options.
Vladimir Makarov proposed improving performance
by replacing VM instructions with RTL (Register Transfer Language)
and introducing a method-based JIT compiler.
Ruby’s compiler converts the code to YARV (Yet Another Ruby VM) instructions,
and then these instructions are run by the Ruby Virtual Machine.
Code that is executed frequently
is converted to RTL instructions, which run faster.
Let’s take a look at how MJIT works.
Let’s run this code with MJIT options and check what we got.
Nothing interesting, right? And why is that?
Because we iterate the loop only 4 times,
and the default number of calls for MJIT to kick in is 5.
We can decide after how many calls MJIT should kick in by providing the --jit-min-calls=#number option.
Let’s tweak the program a bit so MJIT gets to work.
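A program along these lines (our guess at the elided snippet; the method itself is arbitrary) crosses the default threshold of 5 calls:

```ruby
# Calling a method 5 times crosses MJIT's default --jit-min-calls threshold,
# so the 5th call triggers JIT compilation when run with:
#   ruby --jit --jit-verbose=1 square.rb
def square(x)
  x * x
end

5.times { square(4) }
```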
After running the above code we can see some work done by MJIT.
Here’s what’s happening. The method ran 4 times,
and on the 5th call MJIT found that it was running the same code again.
So MJIT started a separate thread to convert the code into RTL instructions,
which created a shared object library.
Next, threads took that shared code and executed it directly.
As we passed the option,
we can see what MJIT did.
What we are seeing in the output is the following:
The time taken to compile.
Which block of code was compiled by JIT.
The location of the compiled code.
We can open the file and
see how MJIT converted the piece of code to binary instructions, but for that
we need to pass another option, --jit-save-temps, and
then just inspect those files.
After compiling the code to RTL instructions,
take a look at the execution time.
It dropped down to 0.10 ms from 0.46 ms.
That’s a neat speed bump.
Here is a comparison across some of the Ruby versions for some basic operations.
Rails comparison on Ruby 2.5, Ruby 2.6 and Ruby 2.6 with JIT
Create a Rails application with each of the different Ruby versions and start a server.
We can start the rails server with the JIT option, as shown below.
Now, we can start testing the performance on servers.
We found that Ruby 2.6 is faster than Ruby 2.5,
but enabling JIT in Ruby 2.6 does not add more value to the Rails application.
MJIT status and future directions
It is in an early development stage.
Does not work on Windows.
Needs more time to mature.
Needs more optimisations.
MJIT can use GCC or LLVM C compilers in the future.
We have a client that uses a multi-tenant database
where each database holds the data of one of their customers.
Whenever a new customer is added, a service dynamically creates a new database.
In order to seed this new database, we were tasked
with implementing a feature to copy data from an existing “demo” database.
The “demo” database is actually a live client database on which the sales team does demos.
This ensures that the data that is copied is fresh and not stale.
We implemented a solution where we simply listed all the tables in the namespace and used activerecord-import
to copy the table data.
We used the activerecord-import gem to keep the code agnostic of the underlying database, as we used a different database in development than in production.
Production runs “SQL Server” and the development database is “PostgreSQL”.
Why this project ended up having different databases in development and in production
is worthy of a separate blog.
When we started using the above-mentioned strategy,
we quickly ran into a problem.
Inserts for some tables were failing.
The issue was that we had foreign key constraints on some tables, and a “dependent” table was being processed before its “main” table.
Initially we thought of simply hard-coding the sequence in which to process the tables. But that would mean that whenever a new table is added, we would have to update the service to include the newly added table. So we needed a way to identify the foreign key dependencies and determine the sequence in which to copy the tables at runtime. To resolve this issue, we thought of using topological sorting.
To get started, we need the list of dependencies between the “main” and “dependent” tables.
In PostgreSQL, this SQL query fetches the table dependencies.
The above query fetches the dependencies for only the tables in the namespace, i.e. the tables we are interested in.
The output of the above query was [[dependent_table1, main_table1], [dependent_table2, main_table2]].
Ruby has a TSort module for implementing topological sorts.
So we needed to run a topological sort on the dependencies. We inserted the dependencies into a hash and mixed the TSort functionality into it. Following is the way to include the TSort module by subclassing Hash.
Then we simply added all the tables to the dependency hash, as below.
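A sketch of the TSort-enabled hash and its usage (the table names below are placeholders):

```ruby
require "tsort"

# Subclass Hash and include TSort; keys are tables, values are the
# tables they depend on (their "main" tables).
class DependencyHash < Hash
  include TSort

  alias tsort_each_node each_key

  def tsort_each_child(node, &block)
    fetch(node, []).each(&block)
  end
end

deps = DependencyHash.new
deps["dependent_table1"] = ["main_table1"]
deps["main_table1"]      = []
deps["dependent_table2"] = ["main_table2"]
deps["main_table2"]      = []

# Dependencies come first, so "main" tables are copied before "dependent" ones
deps.tsort
# => ["main_table1", "dependent_table1", "main_table2", "dependent_table2"]
```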
The output above is the dependency-resolved sequence of tables.
Topological sorting is pretty useful in situations where we need to resolve dependencies, and Ruby provides a really helpful tool, TSort, to implement it without going into the implementation details. Although I did spend time understanding the underlying algorithm for fun.
Cloudflare is a Content Delivery Network (CDN) company that provides various network and security services.
In March 2018, they
while caching all files in Cloudflare edges.
We have a bunch of files hosted in S3 which are served through CloudFront.
To reduce the CloudFront bandwidth cost and
to make use of a global CDN (we use Price Class 100 in CloudFront), we decided to use Cloudflare for file downloads. This would help us cache files in Cloudflare edges and would eventually reduce the bandwidth costs at the origin (CloudFront). But to do this, we had to solve a few problems.
We had been signing CloudFront download URLs to restrict their usage after a period of time. This means the file download URLs are always unique. Since Cloudflare caches files based on URLs, caching will not work when the URLs are signed. We had to remove the URL signing to get it working with Cloudflare, but we can’t allow people to continuously use the same download URL. Cloudflare Workers helped us with this.
We negotiated a deal with Cloudflare and upgraded the subscription to the Enterprise plan. The Enterprise plan lets us define a
Custom Cache Key,
using which we can configure Cloudflare to cache based on a user-defined key.
The Enterprise plan also increased the cache file size limits.
We wrote the following Worker code, which configures a custom cache key and authenticates URLs using HMAC.
A Cloudflare Worker starts by attaching a method to the "fetch" event.
The verifyAndCache function can be defined as follows.
Once the worker is added, configure an associated route in "Workers -> Routes -> Add Route" in Cloudflare.
Now, all requests will go through the configured Cloudflare Worker. Each request will be verified using HMAC authentication, and all files will be cached in Cloudflare edges. This reduces bandwidth costs at the origin.
We recently replaced PhantomJS with ChromeDriver for system tests in a project since
PhantomJS is no longer maintained.
Many modern browser features required workarounds and hacks to work on PhantomJS.
For example, the Element.trigger('click') method does not actually click an
element but simulates a DOM click event.
These workarounds meant that code was not being tested as it would behave
in a real production environment.
ChromeDriver Installation & Configuration
ChromeDriver is needed to use Chrome as the browser for system tests.
It can be installed on macOS using
Remove poltergeist from Gemfile and add selenium-webdriver.
Configure Capybara to use ChromeDriver by adding the following snippet.
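A sketch of such a snippet (the driver name, options, and the CHROME_HEADLESS toggle are assumptions; adapt to your setup):

```ruby
# Sketch: register a ChromeDriver-backed driver for Capybara system tests.
# Runs headless unless CHROME_HEADLESS=false is set in the environment.
Capybara.register_driver :chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument("--headless") unless ENV["CHROME_HEADLESS"] == "false"
  options.add_argument("--window-size=1400,1400")
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

Capybara.default_driver    = :chrome
Capybara.javascript_driver = :chrome
```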
The above code runs tests in headless mode by default.
For debugging purposes, we would like to see the actual
browser. That can be easily done by executing the following command.
CHROME_HEADLESS=false bin/rails test:system
After switching from PhantomJS to headless Chrome,
we ran into many test failures
due to the differences in implementation of Capybara API
when using ChromeDriver.
Here are solutions to some of the issues we faced.
1. Element.trigger(‘click’) does not exist
2. Element is not visible to click
When we switched to Element.click,
some tests were failing because the element was not visible as it was behind another element.
The easiest solution to fix these failing tests was using Element.send_keys(:return),
but the purpose of the test is to simulate a real user clicking the element.
So we had to make sure the element was visible.
We fixed the UI issues which prevented the element from being visible.
3. Setting the value of hidden fields does not work
When we try to set the value of a hidden input field using the set method of an element,
Capybara throws an element not interactable error.
4. Element.visible? returns false if the element is empty
ignore_hidden_elements option of Capybara is false by default.
If ignore_hidden_elements is true, Capybara will find only those elements
which are visible on the page.
Let’s say we have <div class="empty-element"></div> on our page. find(".empty-element").visible? returns false because selenium considers empty elements as invisible. This issue can be resolved by using visible: :any.
In July 2017, AWS announced the
Target Tracking Policy for Auto Scaling in EC2.
It helps to autoscale based on metrics like Average CPU Utilization, load balancer requests per target, and so on.
Simply stated, it scales resources up and down to keep the metric at a fixed value.
For example, if the configured metric is Average CPU Utilization and the value is 60%, the Target Tracking Policy will launch more instances if the Average CPU Utilization goes beyond 60%.
It will automatically scale down when the usage decreases.
Target Tracking Policy works using a set of CloudWatch alarms which are automatically set when the policy is configured.
It can be configured in EC2 -> Auto Scaling Groups -> Scaling Policies.
We can also configure a warm-up period so that it would wait before it launches more instances to keep the metric at the configured value.
Internally, we use terraform to manage AWS resources. We can configure Target Tracking Policy using terraform as follows.
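A sketch of such a terraform resource (the resource names and the 60% CPU target are illustrative, following the example above):

```hcl
# Target Tracking policy keeping Average CPU Utilization at 60%
# (resource names and values are illustrative)
resource "aws_autoscaling_policy" "cpu_target_tracking" {
  name                      = "cpu-target-tracking"
  policy_type               = "TargetTrackingScaling"
  autoscaling_group_name    = aws_autoscaling_group.web.name
  estimated_instance_warmup = 300 # warm-up period in seconds

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60.0
  }
}
```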
Target Tracking Policy allows us to easily configure and manage autoscaling in EC2. It’s particularly helpful while running services like web servers.