Recently, we faced a DDoS attack in one of the clients’ projects. There were many requests from different IPs to root and login paths, and we were running thrice the usual number of servers to keep the system alive.
We were using Cloudflare’s HTTP proxy and it was doing a great job preventing malicious requests, but we wanted to check whether we could avoid the loading/captcha pages which Cloudflare uses to filter requests. We came to the conclusion that we would be able to mitigate the ongoing attack if we could throttle requests by IP.
Cloudflare has an inbuilt Rate Limiting feature to throttle requests, but it would have been a little expensive in our case, since Cloudflare charges by the number of good requests and this was a high-traffic website. On further analysis, we found that throttling at the application level would be enough in that situation, and the Rack::Attack gem helped us with that.
Rack::Attack is a Rack middleware from Kickstarter. It can be configured to throttle requests based on IP or any other parameter.
To use Rack::Attack, include the gem in the Gemfile.
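A minimal entry looks like this:

```ruby
# Gemfile
gem "rack-attack"
```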
After bundle install, configure the middleware in config/application.rb:
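A sketch of that configuration (newer rack-attack versions register the middleware automatically, so this explicit line is only needed on older setups):

```ruby
# config/application.rb (inside the Application class)
config.middleware.use Rack::Attack
```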
Now we can create the initializer config/initializers/rack_attack.rb to configure Rack::Attack.
By default, Rack::Attack uses Rails.cache to store requests information.
In our case, we wanted a separate cache for Rack::Attack and it was configured as follows.
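Here is a sketch of such a configuration; the dedicated Redis URL and the use of ActiveSupport::Cache::RedisCacheStore (available since Rails 5.2) are assumptions:

```ruby
# config/initializers/rack_attack.rb
# Point Rack::Attack at its own cache instead of Rails.cache.
Rack::Attack.cache.store = ActiveSupport::Cache::RedisCacheStore.new(
  url: ENV["RACK_ATTACK_REDIS_URL"] # hypothetical env var for a dedicated Redis
)
```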
If the web server is behind a proxy like Cloudflare, we have to configure a method to fetch the correct remote_ip address. Otherwise, throttling would be based on the proxy’s IP address, which would result in blocking a lot of legitimate requests.
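One way to do this, assuming Cloudflare’s CF-Connecting-IP header carries the client’s real IP, is to extend Rack::Attack’s request object:

```ruby
# config/initializers/rack_attack.rb
class Rack::Attack
  class Request < ::Rack::Request
    # Prefer the IP Cloudflare reports for the client over the proxy's own IP.
    def remote_ip
      @remote_ip ||= (env["HTTP_CF_CONNECTING_IP"] || ip).to_s
    end
  end
end
```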
Requests can be throttled based on IP address or any other parameter.
In the following example, we set a limit of 40 requests per minute per IP for the “/” path.
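A sketch of such a throttle (remote_ip assumes the helper defined above):

```ruby
# Allow at most 40 requests per minute per IP for the "/" path.
Rack::Attack.throttle("root/ip", limit: 40, period: 1.minute) do |req|
  req.remote_ip if req.path == "/"
end
```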
The downside of this configuration is that after the 1-minute period, the attacker can launch another 40 requests per IP simultaneously, which would exert pressure on the servers. This can be solved using exponential backoff, as sketched below.
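The rack-attack README suggests levelled throttles for this; a sketch along those lines, with the exact limits and periods as assumptions:

```ruby
# Escalating limits over exponentially growing periods, e.g.
# 40 requests in 8 seconds, 80 requests in 64 seconds, and so on.
(1..5).each do |level|
  Rack::Attack.throttle("root/ip/#{level}", limit: (40 * level), period: (8**level).seconds) do |req|
    req.remote_ip if req.path == "/"
  end
end
```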
If we want to turn off throttling for some IPs (e.g. health check services), those IPs can be safelisted.
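A sketch with hypothetical health check IPs:

```ruby
# Never throttle requests from these IPs.
Rack::Attack.safelist("allow health checks") do |req|
  ["127.0.0.1", "10.0.0.5"].include?(req.remote_ip)
end
```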
We can log blocked requests separately, which is helpful for analyzing the attack.
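One way to do this, assuming a rack-attack version that instruments matched requests under the "rack.attack" notification with the request as payload:

```ruby
ActiveSupport::Notifications.subscribe("rack.attack") do |_name, _start, _finish, _id, req|
  if req.env["rack.attack.match_type"] == :throttle
    Rails.logger.warn "[Rack::Attack] Throttled #{req.remote_ip} to #{req.path}"
  end
end
```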
A sample initializer with these configurations can be downloaded from here.
The application will now throttle requests and respond with HTTP 429 Too Many Requests for the throttled requests.
We now block a lot of malicious requests using Rack::Attack. Here’s a graph with % of blocked requests over a week.
EDIT: Updated the post to add more context to the situation.
Sidekiq is a background job processing library for Ruby.
Sidekiq offers three versions: OSS, Pro and Enterprise.
OSS is free and open source and has basic features.
Pro and Enterprise versions are closed source and paid, and thus come with more advanced features.
To compare the list of features offered by each of these versions,
please visit Sidekiq website.
Sidekiq Pro 3.4.0 introduced the super_fetch strategy to reliably fetch jobs from the queue in Redis.
In this post, we will discuss the benefits of using super_fetch strategy.
The open source version of Sidekiq comes with the basic_fetch strategy.
Let’s see an example to understand how it works.
Let’s add Sidekiq to our Gemfile and run bundle install to install it.
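```ruby
# Gemfile
gem "sidekiq"
```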
Add the following Sidekiq worker in app/workers/sleep_worker.rb.
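Here is a sketch of such a worker; the name argument matches the jobs scheduled later in the post:

```ruby
# app/workers/sleep_worker.rb
class SleepWorker
  include Sidekiq::Worker

  # Sleeps for 30 seconds to simulate long-running work.
  def perform(name)
    puts "Started #{name}"
    sleep 30
    puts "Finished #{name}"
  end
end
```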
This worker does nothing great but sleeps for 30 seconds.
Let’s open Rails console
and schedule this worker to run as a background job asynchronously.
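A sketch from the console:

```ruby
require "sidekiq/api"

SleepWorker.perform_async("A")  # => returns the job ID
Sidekiq::Queue.new.size         # => 1
```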
As we can see, queue now has 1 job scheduled to be processed.
Let’s start Sidekiq in another terminal tab.
As we can see, the job with ID 5d8bf898c36a60a1096cf4d3
was picked up by Sidekiq
and it started processing the job.
If we check the Sidekiq queue size in the Rails console, it will be zero now.
Let’s shut down the Sidekiq process gracefully while it is still in the middle of processing our scheduled job. Press Ctrl-C or run the kill -SIGINT <PID> command.
As we can see, Sidekiq pushed the unfinished job back to the Redis queue when it received the SIGINT signal.
Let’s verify it.
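Back in the Rails console:

```ruby
require "sidekiq/api"

Sidekiq::Queue.new.size  # => 1, the unfinished job is back on the queue
```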
Before we move on, let’s learn some basics about signals such as SIGINT.
A crash course on POSIX signals
SIGINT is an interrupt signal.
It is an alternative to hitting
Ctrl-C from the keyboard.
When a process is running in foreground,
we can hit Ctrl-C to signal the process to shut down.
When the process is running in background,
we can use kill command to send a SIGINT signal to the process’ PID.
A process can optionally catch this signal and shutdown itself gracefully.
If the process does not respect this signal and ignores it,
then nothing really happens and the process keeps running.
Both INT and SIGINT are identical signals.
Another useful signal is SIGTERM.
It is called a termination signal.
A process can either catch it
and perform necessary cleanup or just ignore it.
Similar to a SIGINT signal,
if a process ignores this signal, then the process keeps running.
Note that, if no signal is supplied to the kill command,
SIGTERM is used by default.
Both TERM and SIGTERM are identical signals.
SIGTSTP or TSTP is called terminal stop signal.
It is an alternative to hitting Ctrl-Z on the keyboard.
This signal causes a process to suspend further execution.
SIGKILL is known as kill signal.
This signal is intended to kill the process immediately and forcefully.
A process cannot catch this signal,
therefore the process cannot perform cleanup or graceful shutdown.
This signal is used
when a process does not respect and respond
to both SIGINT and SIGTERM signals.
KILL, SIGKILL and 9 are identical signals.
There are a lot of other signals besides these,
but they are not relevant for this post.
Please check them out here.
A Sidekiq process respects all of these signals and behaves as we expect.
When Sidekiq receives a TERM or SIGTERM signal,
Sidekiq terminates itself gracefully.
Back to our example
Coming back to our example from above,
we had sent a SIGINT signal to the Sidekiq process.
On receiving this SIGINT signal,
Sidekiq process having PID 40510 terminated quiet workers,
paused the queue and waited for a while
to let busy workers finish their jobs.
Since our busy SleepWorker did not finish quickly,
Sidekiq terminated that busy worker
and pushed it back to the queue in Redis.
After that, Sidekiq gracefully terminated itself with an exit code 0.
Note that the default timeout is 8 seconds, for which Sidekiq waits to let the busy workers finish; after that, it pushes the unfinished jobs back to the queue in Redis. This timeout can be changed with the -t option given at the startup of the Sidekiq process.
Sidekiq recommends sending a TSTP and a TERM together to ensure that the Sidekiq process shuts down safely and gracefully.
On receiving a TSTP signal, Sidekiq stops pulling new work and finishes the work which is in progress.
The idea is to first send a TSTP signal,
wait as much as possible (by default for 8 seconds as discussed above)
to ensure that busy workers finish their jobs
and then send a TERM signal
to shutdown the process.
Sidekiq pushes the unprocessed jobs back to Redis when terminated gracefully. It means that Sidekiq pulls the unfinished job and starts processing it again when we restart the Sidekiq process.
We can see that Sidekiq pulled the previously terminated job
with ID 5d8bf898c36a60a1096cf4d3 and processed that job again.
So far so good.
This behavior is implemented using the basic_fetch strategy, which is present in the open source version of Sidekiq.
Sidekiq uses the BRPOP Redis command to fetch a scheduled job from the queue.
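A sketch of the underlying Redis operation using the redis gem:

```ruby
require "redis"

redis = Redis.new
# Blocks until a job is available (here, for up to 2 seconds); the job is
# removed from the queue as soon as it is returned.
queue, job_json = redis.brpop("queue:default", timeout: 2)
```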
When a job is fetched, that job gets removed from the queue and no longer exists in Redis.
If this fetched job is processed, then all is good.
Also, if the Sidekiq process is terminated gracefully on receiving either a SIGINT or a SIGTERM signal, Sidekiq pushes the unfinished jobs back to the queue in Redis.
But what if the Sidekiq process crashes in the middle
while processing that fetched job?
A process is considered crashed if it does not shut down gracefully.
As we discussed before,
when we send a SIGKILL signal to a process,
the process cannot receive or catch this signal.
Because the process cannot shut down gracefully, it crashes. When a Sidekiq process crashes, the jobs fetched by that process which are not yet finished get lost.
Let’s try to reproduce this scenario.
We will schedule another job.
Now, let’s start the Sidekiq process and kill it using the SIGKILL (9) signal.
Let’s check whether Sidekiq pushed the busy (unprocessed) job back to the queue in Redis before terminating. No, it did not. The Sidekiq process did not get a chance to shut down gracefully when it received the SIGKILL signal.
If we restart the Sidekiq process, it cannot fetch that unprocessed job, since the job was never pushed back to the queue in Redis. The job with name argument B and ID 37a5ab4139796c4b9dc1ea6d is completely lost. There is no way to get that job back.
Losing jobs like this may not be a problem for some applications, but for critical applications it could be a huge issue. We faced a similar problem.
One of our clients’ applications is deployed on a Kubernetes cluster. Our Sidekiq process runs in a Docker container inside Kubernetes pods, which we call background pods.
Here is a stripped-down version of our manifest which creates a Kubernetes deployment resource.
Our Sidekiq process runs in the pods spawned by that deployment resource.
When we apply an updated version of this manifest, for, say, changing the Docker image, the running pods are terminated and new pods are created.
Before terminating the only container in the pod, Kubernetes executes the sidekiqctl stop $pid 60 command, which we have defined using the preStop event handler.
Note that Kubernetes also sends a SIGTERM signal to the container being terminated inside the pod once the preStop event handler completes.
The default termination grace period is 30 seconds and it is configurable.
If the container doesn’t terminate within the termination grace period,
a SIGKILL signal will be sent to forcefully terminate the container.
The sidekiqctl stop $pid 60 command executed in the preStop handler does the following:
1. Sends a SIGTERM signal to the Sidekiq process running in the container.
2. Waits for 60 seconds.
3. Sends a SIGKILL signal to kill the Sidekiq process forcefully if the process has not terminated gracefully yet.
This worked for us while the count of busy jobs was relatively small. When the number of jobs being processed is higher, Sidekiq does not get enough time to quiet the busy workers and fails to push some of them back onto the Redis queue.
We found that some of the jobs were getting lost
when our background pod restarted.
We had to restart our background pod for reasons such as updating the Kubernetes deployment manifest, or the pod being automatically evicted by Kubernetes due to the host node encountering an OOM (out of memory) issue.
We tried increasing both terminationGracePeriodSeconds in the deployment manifest and the sidekiqctl stop command’s timeout. However, we still kept facing the same issue of losing jobs whenever a pod restarted.
We even tried sending TSTP and then TERM after a timeout
relatively longer than 60 seconds.
But the pod was getting terminated harshly, without gracefully terminating the Sidekiq process running inside it. Therefore we kept losing the busy jobs which were running during the pod termination.
Sidekiq Pro’s super_fetch
We were looking for a way to stop losing our Sidekiq jobs
or a way to recover them reliably when our background Kubernetes pod restarts.
We realized that the commercial version of Sidekiq, Sidekiq Pro, offers an additional fetch strategy, super_fetch, which seemed more efficient and reliable compared to the basic_fetch strategy.
Let’s see what difference super_fetch strategy
makes over basic_fetch.
We will need to use the sidekiq-pro gem, which needs to be purchased. Since the Sidekiq Pro gem is closed source, we cannot fetch it from the default public gem registry. Instead, we have to fetch it from a private gem registry, access to which we get after purchasing it.
We add the following code to our Gemfile and run bundle install.
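A sketch; the actual registry URL and credentials are provided with the Sidekiq Pro purchase:

```ruby
# Gemfile
source "https://gems.contribsys.com/" do
  gem "sidekiq-pro"
end
```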
To enable super_fetch, we need to add the following code in an initializer, config/initializers/sidekiq.rb.
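A sketch of that initializer:

```ruby
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.super_fetch!
end
```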
Well, that’s it.
Sidekiq will use super_fetch instead of basic_fetch now.
When super_fetch is activated, Sidekiq process’ graceful shutdown behavior
is similar to that of basic_fetch.
That looks good.
As we can see, Sidekiq moved the busy job back from a private queue to the queue in Redis when it received a SIGTERM signal.
Now, let’s try to kill Sidekiq process forcefully
without allowing a graceful shutdown
by sending a SIGKILL signal.
Since Sidekiq was gracefully shut down before, if we restart Sidekiq again, it will re-process the pushed-back job having ID f002a41393f9a79a4366d2b5.
It appears that Sidekiq didn’t get any chance
to push the busy job back to the queue in Redis
on receiving a SIGKILL signal.
So, where is the magic of super_fetch?
Did we lose our job again?
Let’s restart Sidekiq and see for ourselves.
Whoa, isn’t that cool?
See that line where it says SuperFetch: recovered 1 jobs.
Although the job wasn’t pushed back to the queue in Redis, Sidekiq somehow recovered our lost job having ID f002a41393f9a79a4366d2b5 and processed it again!
Interested to learn about how Sidekiq did that? Keep on reading.
Note that, since Sidekiq Pro is closed source commercial software, we cannot explain super_fetch’s exact implementation details.
As we discussed in depth before, Sidekiq’s basic_fetch strategy uses the BRPOP Redis command to fetch a job from the queue in Redis. It works great to some extent, but it is prone to losing jobs if Sidekiq crashes or does not shut down gracefully.
On the other hand, Sidekiq Pro offers the super_fetch strategy, which uses the RPOPLPUSH Redis command to fetch a job. RPOPLPUSH provides a unique approach towards implementing a reliable queue.
The RPOPLPUSH command accepts two lists, namely a source list and a destination list. It atomically returns and removes the last element from the source list, and pushes that element as the first element of the destination list. Atomically means that both the pop and the push operations are performed as a single operation at the same time; either both succeed, or both are treated as failed.
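A sketch of the same operation with the redis gem; the private queue name here is hypothetical:

```ruby
require "redis"

redis = Redis.new
# Atomically pop a job from the public queue and push it onto a
# per-process private queue.
job_json = redis.rpoplpush("queue:default", "queue:default_private_abc123")
```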
super_fetch registers a private queue in Redis
for each Sidekiq process on start-up.
super_fetch atomically fetches a scheduled job
from the public queue in Redis
and pushes that job into the private queue (or working queue)
using RPOPLPUSH Redis command.
Once the job is finished processing,
Sidekiq removes that job from the private queue.
During a graceful shutdown,
Sidekiq moves back the unfinished jobs
from the private queue to the public queue.
If the shutdown of a Sidekiq process is not graceful, the unfinished jobs of that process remain in its private queue; these are called orphaned jobs. On restarting, or when starting another Sidekiq process, super_fetch looks for such orphaned jobs in the private queues. If Sidekiq finds orphaned jobs, it re-enqueues them and processes them again.
It may happen that we have multiple Sidekiq processes running at the same time. If one process among them dies, its unfinished jobs become orphans.
This Sidekiq wiki
describes in detail the criteria which super_fetch relies upon
for identifying which jobs are orphaned and which jobs are not orphaned.
If we don’t restart or start another process,
super_fetch may take 5 minutes or 3 hours to recover such orphaned jobs.
The recommended approach is to restart or start another Sidekiq process
to signal super_fetch to look for orphans.
Interestingly, in older versions of Sidekiq Pro, super_fetch performed the check for orphaned jobs and queues every 24 hours, starting at Sidekiq process startup. Due to this, when a Sidekiq process crashed, the orphaned jobs of that process remained unpicked for up to 24 hours, until the next restart. This orphan check delay was later lowered to 1 hour in Sidekiq Pro 3.4.1.
Another fun thing to know is that there existed two other fetch strategies, namely reliable_fetch and timed_fetch, in the older versions of Sidekiq Pro. Apparently, reliable_fetch did not work with Docker, and timed_fetch had asymptotic computational complexity O(log N), compared to super_fetch, which has asymptotic computational complexity O(1). Both of these strategies were deprecated in Sidekiq Pro 3.4.0 in favor of super_fetch. Later, both were removed in Sidekiq Pro 4.0 and are no longer documented anywhere.
We have enabled super_fetch in our application and it seems to be working without any major issues so far. Our Kubernetes background pods do not seem to be losing any jobs when the pods are restarted.
Update: Mike Perham, the author of Sidekiq, posted the following comment.
Faktory provides all of the beanstalkd functionality, including the same reliability, with a nicer Web UI. It’s free and OSS.
For Ruby developers,
it’s common to switch between
multiple Ruby versions for multiple projects
as per the needs of the project.
Sometimes, the process of going back and forth between multiple Ruby versions can be frustrating for the developer. To avoid this, we add a .ruby-version file to our projects so that version manager tools such as rvm, rbenv etc. can easily determine which Ruby version should be used for that particular project.
One other thing that Rails developers have to take care of is ensuring that the Ruby version used to run Rails by the deployment tools is the desired one. To ensure that, we add the Ruby version to the Gemfile. This helps bundler install dependencies scoped to the specified Ruby version.
Good News! Rails 5.2 makes our work easy.
In Rails 5.2, changes have been made to introduce the .ruby-version file and to add the Ruby version to the Gemfile by default when creating an app. Let’s create a new project with Ruby 2.5.
In our new project, we should be able to see .ruby-version in its root directory, and it will contain the value 2.5. Also, we should see the following line in the Gemfile.
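Assuming patch version 2.5.0, the generated line looks like:

```ruby
ruby "2.5.0"
```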
In today’s era of containerization, no matter which container runtime we use, we need an image to run a container. Docker images are stored on container registries like Docker Hub (cloud), Google Container Registry (GCR), AWS ECR, quay.io etc. We can also self-host a Docker registry on any Docker platform. In this blog post, we will see how to deploy a Docker registry on Kubernetes using the S3 storage driver.
We write scripts to automate setup and deployment of Rails applications.
In those scripts, at many places, we need to run system commands like
bundle install, rake db:create, rake db:migrate and many more.
Let’s suppose we need to run migrations using rake db:migrate in a Rails project setup script.
We can use the Kernel#system method.
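A sketch:

```ruby
# Returns true if the command exits successfully, false if it exits with a
# non-zero status, and nil if the command cannot be executed at all.
system("rake db:migrate")
```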
Executing system returns true or false. Another feature of system is that it eats up exceptions: if the command fails, system does not raise an error.
Let’s suppose our migrations run successfully. In this case, the system command for running migrations will return true.
Now let’s suppose we have a migration which tries to add a column to a table which does not exist. In this case, the system command for running migrations will return false. As we can see, even when there is a failure in executing system commands, the return value is just false; Ruby does not raise an exception in those cases.
However, we can use raise explicitly to raise an exception and halt the setup script.
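For example:

```ruby
unless system("rake db:migrate")
  raise "rake db:migrate failed, halting setup"
end
```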
Ruby 2.6 makes our lives easier by providing an exception: true option so that we do not need to use raise explicitly to halt script execution.
Ruby 2.6 works the same way as previous Ruby versions when system is used without the exception option, or with exception set to false.
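A sketch of the Ruby 2.6 behavior:

```ruby
# Raises RuntimeError mentioning the failed command and its exit status,
# instead of silently returning false.
system("rake db:migrate", exception: true)
```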
Let’s see what happens when an exception is raised inside a thread.
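Here is a sketch consistent with the behavior described below (the post’s original snippet isn’t reproduced here):

```ruby
thread = Thread.new do
  puts "In the thread"
  raise "Exception in the thread"
  puts "This line will not be printed"
  puts "Neither will this one"
end

sleep 1
puts "In the main thread"
```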
When we execute it, we can see that the last two lines from the block were not printed. Also notice that, after failing in the thread, the program continued to run in the main thread; that’s why we got the message “In the main thread”.
This is because the default behavior of Ruby is to silently ignore exceptions in threads and to continue executing the main thread.
Enabling abort_on_exception to stop on failure
If we want an exception in a thread to stop further processing, both in the thread and in the main thread, then we can enable Thread[.#]abort_on_exception on that thread.
Notice that in the below code we are using Thread.current.abort_on_exception = true.
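```ruby
# A sketch of such a snippet.
thread = Thread.new do
  Thread.current.abort_on_exception = true
  puts "In the thread"
  raise "Exception in the thread"
end

sleep 1
puts "This line in the main thread will not be printed"
```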
As we can see, once an exception was encountered in the thread, processing stopped both in the thread and in the main thread. Thread.current.abort_on_exception = true activates this behavior for the current thread.
If we want this behavior globally, for all threads, then we need to use Thread.abort_on_exception = true.
Running program with debug flag to stop on failure
Let’s run the original code with the --debug flag.
In this case, the exception is printed in detail and the code in the main thread is not executed.
Usually, when we execute a program with the --debug option, the behavior of the program does not change; we expect the program to print more information, not to behave differently. However, in this case the --debug option changes the behavior of the program.
Running program with join on thread to stop on failure
If a thread raises an exception and abort_on_exception and $DEBUG are not set, that exception will be processed at the time of joining the thread.
Both Thread#join and Thread#value
will stop processing in the thread and in the main thread
once an exception is encountered.
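A sketch:

```ruby
thread = Thread.new do
  raise "Exception in the thread"
end

begin
  thread.join  # the thread's exception is re-raised here, in the main thread
rescue => e
  puts "Caught: #{e.message}"
end
```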
Introduction of report_on_exception in Ruby 2.4
Almost 6 years ago, Charles Nutter (headius) proposed that exceptions raised in threads should be automatically logged and reported by default.
To make his point,
he explained issues similar to what we discussed above
about the Ruby’s behavior
of silently ignoring exceptions in threads.
Here is the relevant discussion on his proposal.
Following are some of the notable points discussed.
Enabling Thread[.#]abort_on_exception by default is not always a good idea. There should be a flag which, if enabled, would print the thread-killing exception.
In many cases, people spawn one-off threads which are not hard-referenced using Thread#join or Thread#value. Such threads get garbage collected.
Should it report the thread-killing exception
at the time of garbage collection if such a flag is enabled?
Should it warn, or redirect to the STDERR device, while reporting?
Charles Nutter suggested that a configurable global flag Thread.report_on_exception and an instance-level flag Thread#report_on_exception should be implemented, with the default value being true. When set to true, it should report the exception which killed the thread.
Matz and other core members approved that Thread[.#]report_on_exception could be implemented with its default value set to false.
Charles Nutter, Benoit Daloze and other people argued that it should be true by default, so that programmers can be aware of threads silently disappearing because of exceptions.
Shyouhei Urabe advised that, due to some technical challenges, the default value should be set to false so that the feature could land in Ruby at all; once the feature was in, the default could be changed in a later release.
Let’s try enabling report_on_exception globally.
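A sketch consistent with the output described below (object IDs will differ between runs):

```ruby
Thread.report_on_exception = true  # the default is false in Ruby 2.4

division_thread = Thread.new do
  1 / 0
end

addition_thread = Thread.new do
  nil + 1
end

sleep 1
```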
It now reports the exceptions in all threads.
It prints that the Thread:0x007fb10f018200
was terminated with exception: divided by 0 (ZeroDivisionError).
Similarly, another thread Thread:0x007fb10f01aca8
was terminated with exception: undefined method '+' for nil:NilClass (NoMethodError).
Instead of enabling it globally for all threads,
we can enable it for a particular thread
using instance-level Thread#report_on_exception.
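A sketch, enabling the flag only inside addition_thread:

```ruby
division_thread = Thread.new do
  1 / 0
end

addition_thread = Thread.new do
  Thread.current.report_on_exception = true
  nil + 1
end

sleep 1
```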
In the above case, we have enabled the report_on_exception flag just for addition_thread. Let’s execute it.
Notice how it didn’t report the exception
which killed thread division_thread.
As expected, it reported the exception
that killed thread addition_thread.
With the above changes, Ruby reports the exception as soon as it is encountered. However, if these threads are joined, then they will still raise the exception.
See how we were still able to handle the exception raised in division_thread above after joining it, despite it having been reported before due to the Thread#report_on_exception flag.
report_on_exception defaults to true in Ruby 2.5
Benoit Daloze (eregon) strongly advocated that both Thread.report_on_exception and Thread#report_on_exception should have a default value of true.
Here is the relevant feature request.
We can disable the thread exception reporting
globally using Thread.report_on_exception = false
or for a
particular thread using Thread.current.report_on_exception = false.
In addition to this feature, Charles Nutter also suggested that it would be good to have a callback handler which can accept a block to be executed when a thread dies due to an exception.
The callback handler can be at global level
or it can be for a specific thread.
In the absence of such a handler, libraries need to resort to custom code to handle exceptions. Here is how Sidekiq handles exceptions raised in threads.
An important thing to note is that report_on_exception does not change the behavior of the code. It only adds reporting when a thread dies, and when it comes to dying threads, more reporting is a good thing.
Ruby has come with Coverage, a simple standard library for test coverage measurement, for a long time.
Before Ruby 2.5
Before Ruby 2.5,
we could measure just the line coverage using Coverage.
Line coverage tells us whether a line is executed or not.
If executed, then how many times that line was executed.
We have a file called score.rb.
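The post’s file isn’t reproduced here; the following sketch is consistent with the coverage output discussed below:

```ruby
# score.rb
score = 55

if score >= 60
  puts "Passed"
else
  puts "Failed"
end
```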
Now create another file score_coverage.rb.
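A sketch of that file:

```ruby
# score_coverage.rb
require "coverage"

Coverage.start             # start measuring line coverage
require_relative "score"   # load and execute the file under measurement
p Coverage.result          # => {".../score.rb"=>[1, nil, 1, 0, nil, 1, nil]}
```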
We used the Coverage.start method to start measuring the coverage of the score.rb file. Coverage.result returns the coverage result.
Let’s run it with Ruby 2.4.
Let’s look at the output.
Each value in the array [1, nil, 1, 0, nil, 1, nil]
denotes the count of line executions by the interpreter
for each line in score.rb file.
This array is also called the “line coverage”
of score.rb file.
A nil value in the line coverage array means that coverage is disabled for that particular line number, or that it is not a relevant line.
Lines like else, end and blank lines
have line coverage disabled.
Here’s how we can read above line coverage result.
Line number 1 (i.e. 0th index in the above result array) was executed once.
Coverage was disabled for line number 2 (i.e. index 1) as it is blank.
Line number 3 (i.e. index 2) was executed once.
Line number 4 was not executed.
Coverage was disabled for line number 5 as it contains only else clause.
Line number 6 was executed once.
Coverage was disabled for line number 7 as it contains just end keyword.
After Ruby 2.5
There was a pull request opened in 2014 to add method coverage and decision coverage metrics to Ruby. It was rejected by Yusuke Endoh, as he saw some issues with it and mentioned that he was also working on a similar implementation.
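With Ruby 2.5, Coverage can also measure branch coverage via the branches option. A sketch, reusing the score.rb from earlier (the result shape matches the tuples discussed next):

```ruby
# score_branch_coverage.rb
require "coverage"

Coverage.start(branches: true)
require_relative "score"
p Coverage.result
# Prints something shaped like:
# {".../score.rb" => {:branches => {[:if, 0, 3, 0, 7, 3] =>
#   {[:then, 1, 4, 2, 4, 15] => 0, [:else, 2, 6, 2, 6, 15] => 1}}}}
```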
Please note that column numbers start from 0, whereas line numbers start from 1. Let’s try to read the branch coverage result printed above.
[:if, 0, 3, 0, 7, 3] reads that the if clause starts at line 3 & column 0 and ends at line 7 & column 3. [:then, 1, 4, 2, 4, 15] reads that the then clause starts at line 4 & column 2 and ends at line 4 & column 15. [:else, 2, 6, 2, 6, 15] reads that the else clause starts at line 6 & column 2 and ends at line 6 & column 15.
As per the branch coverage format, we can see that the branch from if to then was never executed, since its COUNTER is 0. The other branch, from if to else, was executed once, since its COUNTER is 1.
Measuring method coverage helps us
identify which methods were invoked and which were not.
We have a file grade_calculator.rb.
To measure the method coverage of the above file, let’s create grade_calculator_coverage.rb, enabling the methods option on Coverage.start.
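A sketch; grade_calculator.rb itself isn’t shown here, and the three calls below are assumptions to match the invocation count in the result:

```ruby
# grade_calculator_coverage.rb
require "coverage"

Coverage.start(methods: true)
require_relative "grade_calculator"

grade(95)  # hypothetical calls so the grade method gets invoked 3 times
grade(72)
grade(40)

p Coverage.result
```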
Let’s run it using Ruby 2.5.
The format of method coverage result is defined as shown below.
[Object, :grade, 9, 0, 17, 3] => 3 reads that the Object#grade method, which starts at line 9 & column 0 and ends at line 17 & column 3, was invoked 3 times.
We can also measure all types of coverage at once.
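A sketch combining all three options:

```ruby
require "coverage"

Coverage.start(lines: true, branches: true, methods: true)
require_relative "score"
p Coverage.result  # => a hash with :lines, :branches and :methods keys per file
```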
What’s the use of these different types of coverage anyway? Well, one use case is to integrate this into a test suite and determine which lines, branches and methods are executed by the tests and which are not. Also, we can sum up these measurements and evaluate the total coverage of a test suite.
The author of this feature, Yusuke Endoh (mame), has released the coverage-helpers gem, which allows further advanced manipulation and processing of coverage results obtained using Coverage.result.
A stack trace, or backtrace, is a sequential representation of the stack of method calls in a program, which gets printed when an exception is raised. It is often used to find out the exact location in a program from where the exception was raised.
Before Ruby 2.5
Before Ruby 2.5, the printed backtrace contained the exception class and the error message at the top. The next line showed where in the program the exception was raised. After that came more lines containing the cascaded method calls.
Consider a simple Ruby program.
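A sketch of such a program:

```ruby
# backtrace_example.rb — a chain of method calls ending in an exception.
def method_1
  method_2
end

def method_2
  method_3
end

def method_3
  1 / 0  # raises ZeroDivisionError
end

method_1
```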
Let’s execute it using Ruby 2.4.
In the printed backtrace above, the first line shows the location, the error message and the exception class name, whereas the subsequent lines show the caller method names and their locations.
Each line in the backtrace above is often considered a stack frame placed on the call stack.
Most of the time, a backtrace has so many lines that it becomes very difficult to fit the whole backtrace in the visible viewport of the terminal.
Since the backtrace is printed in top-to-bottom order, the meaningful information, like the error message and the exact location where the exception was raised, is displayed at the top of the backtrace. It means developers often need to scroll to the top of the terminal window to find out what went wrong.
After Ruby 2.5
Over 4 years ago, an issue was created to make printing the backtrace in reverse order configurable.
After much discussion, a commit was made to print the backtrace and error message in reverse order, but only when the error output device (STDERR) is a TTY (i.e. a terminal).
The message will not be printed in reverse order if the original STDERR is attached to something like a File object.
Typically our Rails app has services like unicorn/puma, sidekiq/delayed-job/resque, WebSockets and some dedicated API services. We had one web service exposed to the world using a load balancer, and it was working well. But as the traffic increased, it became necessary to route traffic based on URLs/paths. However, Kubernetes does not support this type of load balancing out of the box. There is work in progress on alb-ingress-controller to support this, but we could not rely on it for production usage as it is still in alpha.
To have consistent and symmetrical behaviour across all the attribute extensions, it was decided to support specifying a default value using a default option for all the module and class attribute macros. The mattr_accessor, mattr_reader and mattr_writer macros generate getter and setter methods at the module level.
cattr_accessor, cattr_reader, and cattr_writer
macros generate getter and setter methods
at the class level.
Before Rails 5.2
Before Rails 5.2,
this is how we would set the default values
for the module
and class attribute accessor macros.
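A sketch, using a hypothetical Notifier module with an emails_enabled attribute; before Rails 5.2, the default was supplied via a block:

```ruby
require "active_support/core_ext/module/attribute_accessors"

module Notifier
  mattr_accessor :emails_enabled do
    true  # the block's return value becomes the default
  end
end

Notifier.emails_enabled  # => true
```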
After Rails 5.2
We can still set
a default value
of a module or class attribute accessor
by providing a block.
In this pull request,
support for specifying a default value
using a new default option
has been introduced.
So instead of the block form, we can now easily write the default option, as shown below.
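With the new default option (same hypothetical names as above):

```ruby
require "active_support/core_ext/module/attribute_accessors"

module Notifier
  mattr_accessor :emails_enabled, default: true
end

Notifier.emails_enabled  # => true
```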
Similar support for the default option has been added to the other attribute accessor macros like mattr_accessor, mattr_reader and mattr_writer. The old way of specifying a default value using the block syntax still works, but it is not documented anywhere.
Also, note that if we try to set the default value both ways, i.e. by providing a block as well as by specifying a default option, the value provided by the default option will always take precedence.
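A sketch of this precedence, again with the hypothetical names from above:

```ruby
require "active_support/core_ext/module/attribute_accessors"

module Notifier
  # Both a block and a default option are given; the default option wins.
  mattr_accessor :emails_enabled, default: true do
    false
  end
end
```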
Here, emails_enabled would be set to true, as per the above precedence rule.
A database index is used to speed up the performance of queries on a database. Rails allows us to create an index on a database column by means of a migration.
By default, the sort order for the index is ascending.
But consider the case where we are fetching reports from the database, and while querying the database we always want to get the latest report. In this case, it is efficient to specify the sort order for the index to be descending.
We can specify the sort order by adding an index to the required column through a migration.
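A sketch, assuming a reports table with name and user_id columns:

```ruby
class AddIndexToReports < ActiveRecord::Migration[5.2]
  def change
    # Index name descending; user_id keeps the default ascending order.
    add_index :reports, [:name, :user_id], order: { name: :desc }
  end
end
```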
If our Rails application is using a Postgres database, after running the above migration we can verify that the sort order was added in schema.rb.
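The relevant line would look roughly like this:

```ruby
t.index ["name", "user_id"], name: "index_reports_on_name_and_user_id", order: { name: :desc }
```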
As we can see, the index for name has a descending sort order. Since the default is ascending, the sort order for user_id is not specified.
MySQL < 8.0.1
For MySQL < 8.0.1, running the above migration produces the following schema.rb.
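Roughly:

```ruby
t.index ["name", "user_id"], name: "index_reports_on_name_and_user_id"
```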
As we can see, although the migration runs successfully, it ignores the sort order, and the default ascending order is used.