Rails 4.1 introduced JSON serialization for cookies. Earlier, all cookies were serialized using Ruby's Marshal library. Marshalling cookies can be dangerous because of the possibility of a remote code execution vulnerability, so the change to :json is welcome.
New applications created with Rails 4.1 or 4.2 have :json as the default serializer. rake rails:update, which is used for upgrading existing Rails apps to new versions, rightly changes the serializer to :json.
However, that change can introduce an issue in the application. Consider a scenario where cookies are being used for session storage. Like many typical Rails apps, the application stores current_user_id in the session. Before Rails 4.1, the cookie was handled by the Marshal serializer. After the upgrade, the application will try to deserialize cookies using JSON that were serialized using Marshal. The deserialization of the existing cookies will fail, and users will start getting errors.
Hybrid serializer to the rescue
To prevent this, Rails provides a hybrid serializer. The hybrid serializer deserializes marshalled cookies and stores them in JSON format for the next use. All new cookies will be serialized in JSON format. This gives a happy path for migrating existing marshalled cookies to newer Rails versions like 4.1 and 4.2.
To use the hybrid serializer, set the cookies_serializer config to :hybrid as follows:
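A minimal sketch of that setting (the initializer path is an assumption):

```ruby
# config/initializers/cookies_serializer.rb
Rails.application.config.action_dispatch.cookies_serializer = :hybrid
```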
After this, all the existing marshalled cookies will be migrated to :json format properly, and in a future upgrade of Rails you can safely change the config from :hybrid to :json, which is the default and safe value for this config.
The second edition of the RailsGirls Pune event was an amazing day spent with some equally amazing folks. The event took place on 13th December and saw a huge turnout of 150+ women from various colleges and companies. It was a free event for beginners interested in learning about coding and building applications using Ruby on Rails.
BigBinary was happy to be one of the sponsors of the event.
The event was organized by Rajashree Malvade, Shifa Khan, Pooja Salpekar, Dominika Stempniewicz, and Magdalena Sitarek.
The BigBinary team reached the venue, the ThoughtWorks office in Pune, at about 8.30 AM.
Rajashree did the introductions, and Gautam Rege and I did the kick-off. Gautam introduced Ruby, talked about how magical Ruby is, and stressed the importance of the event. I spent some time explaining how RailsGirls began, as well as RailsBridge and other similar events.
Next, all instructors were grouped together. Grouping was done in such a way that advanced instructors were paired with intermediate and beginner ones.
The talented folks from ThoughtWorks had created a fun movie explaining the three different tracks (beginner, intermediate, and advanced) into which the students were divided.
Prathamesh, Richa Trivedi, and I took one of the advanced track groups. We started off by pairing people to work with a partner and did a health check of everyone's system.
Many of the participants in our group had 1-2 years of professional experience in Java, .Net, and so forth. This meant they were quite familiar with setting up various things on their machines, which was a great help.
We started with the basics of Ruby: variables, loops, blocks, each, methods, classes, etc. This took about 2 hours, and then we started with Rails and MVC.
Santosh paired with Dinesh and took the intermediate track with a group of four students. They started with the basics of Ruby, later built a simple blog app using Rails, and deployed the apps to Heroku by the end of the day.
At about 11.30, Siddhant Chothe from TechVision gave an inspiring talk about web accessibility, the Wiable gem, and his journey in the Ruby and Rails world.
Then we did the Bentobox activity. Participants were handed a page listing various aspects of software development, like infrastructure, frontend, application, and storage, in boxes. We read out technologies like XML, JSON, AJAX, MongoDB, etc., and asked everyone to write these on stickies and place them in the appropriate boxes on the "Bentobox". This helped the participants understand which technologies are related to web development and where they are used.
Then everyone broke out for lunch. Our enthusiastic lot stayed back to avoid the rush, and began
with Rails development. We started by explaining basic MVC concepts and how Rails helps as a framework.
We started with a simple app, and created a "pages/home" static home page. This helped our group understand Rails generators, routes, controllers, and views. With our first page up and running, we
went for lunch.
After lunch, a session on origami was conducted by Nima Mankar. It was a good stress buster after the information bombardment of the first session.
Our next objective was to build an app and deploy it to Heroku. Our group started out to build "The Cat App"! We began by explaining controllers, CRUD operations, parts of a URL, REST, etc.
We created a Cat model, and everyone loved the beauty and simplicity of migrations and performing
create, update, delete, find using ActiveRecord. We quickly moved on to building CatsController and
CRUD operations on the same. We made sure we did not use scaffold, so as to explain the underlying magic instead of having scaffold hide it away.
Soon everyone had a functional app, and it was fun to introduce GorbyPuff as the star of our app, whose images were displayed as cat records, each storing the name of an image and its URL.
We then set up the apps on Heroku and were ready for the next part: the showcase. It was amazing to see so many groups complete their apps and come up with fun, interesting, and quirky ideas.
One student created a Boyfriend Expense (Kharcha) Management app.
The day ended on a high note amid high enthusiasm from all the participants. We finished the workshop
with a huge cake for everyone.
Overall, it was a well-organized, fun, and enthusiastic day well spent.
Thanks to Rajashree, Shifa, Pooja, Dominika, and Magdalena for organizing such an awesome event.
Both Pune editions have generated great interest and left us all looking forward to the next one!
We wanted to start with the simplest solution, making use of plain Ruby objects that take business logic away from the views.
A complex view
Consider this user model
Here we have a view that displays an appropriate profile link and changes the CSS class based on the role of the user.
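A sketch of such a view (the class names, paths, and link text here are illustrative, not from the original post):

```erb
<div class="<%= current_user.admin? ? "admin-user" : "regular-user" %>">
  <% if current_user.admin? %>
    <%= link_to "Admin Profile", admin_profile_path(current_user) %>
  <% else %>
    <%= link_to "Profile", profile_path(current_user) %>
  <% end %>
</div>
```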
Extracting logic to a Rails helper
In the above case, we can extract the logic from the view into a helper. After the extraction, the code might look like this:
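Here is a sketch of such a helper; the method names are illustrative:

```ruby
# app/helpers/users_helper.rb: a sketch with illustrative method names
module UsersHelper
  # Returns the CSS class for a user, based on the role
  def class_for_user(user)
    user.admin? ? "admin-user" : "regular-user"
  end

  # Returns the text for the profile link, based on the role
  def profile_link_text(user)
    user.admin? ? "Admin Profile" : "Profile"
  end
end
```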
Now the view is much simpler.
Why not use Rails helpers?
The above solution worked. However, in a large Rails application it will start creating problems. UsersHelper is a module, and it is mixed into ApplicationHelper. So if the Rails project has a large number of helpers, all of them are mixed into ApplicationHelper, and sometimes there is a name collision.
For example, let's say there is another helper called ShowingHelper, and this helper also has a method class_for_user. Now ApplicationHelper mixes in both modules, UsersHelper and ShowingHelper. One of those methods will be overridden, and we would not even know about it.
Another issue is that all the helpers are modules, not classes. Because they are not classes, it becomes difficult to refactor helpers later. If a module has 5 methods and we refactor two of them into two separate methods, we end up with seven methods. Out of those seven methods, only five should be public and the other two should be private. However, since all the helpers are modules, it is very hard to see which of them are public and which are private.
And lastly, writing tests for helpers is possible, but testing a module directly feels weird, since most of the time we test a class.
Let's take a look at how we can extract the view logic using carriers. We build the carrier in our controller, and the view then uses it.
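A sketch of a carrier, with illustrative names (not the original post's code):

```ruby
# app/carriers/user_carrier.rb: a plain Ruby object wrapping the model
class UserCarrier
  attr_reader :user

  def initialize(user)
    @user = user
  end

  # CSS class the view previously computed inline
  def class_for_user
    user.admin? ? "admin-user" : "regular-user"
  end

  # Text for the profile link
  def profile_link_text
    user.admin? ? "Admin Profile" : "Profile"
  end
end
```

In the controller we would build it with something like `@user_carrier = UserCarrier.new(current_user)`, and the view then calls `@user_carrier.class_for_user` with no conditionals.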
No HTML markup in the carriers
Even though carriers are used for presentation, we stay away from having any HTML markup in them. Once we open the door to HTML markup in our carriers, they quickly get complicated and become harder to test.
No link_to in the carriers
Since carriers are plain Ruby objects, link_to and other helper methods are usually not available, and we keep carriers that way. We do not include ActionView::Helpers::UrlHelper, because the job of the carrier is to present data that can be used in link_to and to complement the usage of link_to. We believe that link_to belongs in the ERB file. However, if we really need an abstraction over it, we can create a regular Rails helper method. We minimize the usage of Rails helpers; we do not avoid them altogether.
Overcoming Double Dots
Many times in our views we end up doing @article.publisher.full_name. This is a violation of the Law of Demeter. We call it "don't use double dots", meaning don't do @article.publisher.full_name. It's just a matter of time before view code ends up chaining even further.
Since carriers encapsulate objects into classes, we can overcome this "double dots" issue by delegating behavior to the appropriate object. After that refactoring, we end up with cleaner views. Note that "double dots" are allowed in other parts of the code; we just do not allow them in views.
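A sketch of that delegation (the names are illustrative):

```ruby
# The carrier exposes a single-dot method, so the view never chains
# @article.publisher.full_name directly.
class ArticleCarrier
  def initialize(article)
    @article = article
  end

  # Delegates through to the publisher on the view's behalf
  def publisher_full_name
    @article.publisher.full_name
  end
end
```

The view then calls `@article_carrier.publisher_full_name`: one dot.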
Since carriers are simple Ruby objects, it's easy to test them. Carriers allow us to encapsulate complex business logic in simple Ruby objects. This helps us achieve a clearer separation of concerns, clean up our views, and avoid skewed and complex views. Our views are free of "double dots", and we end up with simple tests which are easy to maintain.
We decided to call it a "carrier" and not a "presenter" because the word "presenter" is overloaded and has many meanings.
We at BigBinary take a similar approach to extracting code from a fat controller or a fat model. You can find out more about it here.
In this blog post we will see how to make outbound phone calls from the browser to a phone using Twilio. We will make use of the twilio-js library and the twilio-ruby gem. The Rails app we will be creating is based on the Twilio Client Quickstart tutorial. That Twilio tutorial makes use of Sinatra; we will see how we can achieve the same in a Rails application.
Step 1 - Setup Twilio Credentials and TwiML App
We need to set up Twilio credentials. We can find the account SID and auth token in our account information. When a call is made from the browser, the phone receiving the call has to see a number from which the call is coming, so we need to set up a Twilio verified number. This number will be used to place the outgoing calls. How to set up a verified number can be found here.
When our app makes a call from the browser using the twilio-js client, Twilio first creates a new call connection from our browser to Twilio. It then sends a request back to our server to get information about what to do next. We can respond by asking Twilio to call a number, say something to the person after the call is connected, record the call, etc. Sending these instructions is controlled by setting up a TwiML application. This application provides information about the endpoint on our server where Twilio should send the request to fetch instructions. TwiML is a set of instructions that we can use to tell Twilio what to do in different cases, like when an outbound phone call is made or when an inbound SMS message is received.
Given below is an example that will say the short message "How are you today?" in a call.
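A minimal TwiML document for that message might look like:

```xml
<Response>
  <Say>How are you today?</Say>
</Response>
```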
The TwiML app can be created here. Once the app is configured, we will get an AppSid. We need to configure the following information in our Rails application:
Step 2 - Generate a capability token to be used by twilio-js
After we have the config set up, we will proceed to create the capability token. This token will be generated using the twilio-ruby gem. It helps the twilio-js client determine what permissions the application has, like making calls, accepting calls, sending SMS, etc.
We define a TwilioTokenGeneratorService for this purpose.
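A sketch of such a service, assuming the twilio-ruby gem; the credential handling here is illustrative:

```ruby
# A sketch of the token generator service. Credentials and the TwiML
# app SID are assumed to come from the application's config.
class TwilioTokenGeneratorService
  def initialize(account_sid, auth_token, twiml_app_sid)
    @account_sid   = account_sid
    @auth_token    = auth_token
    @twiml_app_sid = twiml_app_sid
  end

  def process
    capability = Twilio::Util::Capability.new(@account_sid, @auth_token)
    # Permit outgoing calls through our TwiML application
    capability.allow_client_outgoing(@twiml_app_sid)
    capability.generate
  end
end
```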
As you can see, we first create a new Twilio::Util::Capability instance and pass our credentials to it. We then call the allow_client_outgoing method and pass the app SID to it; this is the identifier of the TwiML application we previously created on Twilio. Calling allow_client_outgoing gives the client permission to make outbound calls from Twilio. Finally, we call the generate method to create a token from the capability object.
Step 3 - Define view elements and pass the token to them
The generated token will now be passed to the twilio-js client for connecting with Twilio. In our app we define a CallsController with an index action. This action takes care of setting the capability token. Our index view consists of two buttons (to place and hang up a call), a number input field, call logs, and a data field to pass the capability token to the JavaScript.
Step 4 - Define CoffeeScript bindings to handle the TwilioDevice connection to Twilio
Next we set up CoffeeScript bindings to handle the initialization of TwilioDevice and to use the entered number to place calls via Twilio. We take care of various events like connect, disconnect, ready, etc. on the TwilioDevice instance. More information about TwilioDevice usage can be found here. If we now load this page, we should be able to see our app saying it's ready to take calls.
Step 5 - Define TwiML Response Generator Service
The final step before we place calls from our app is to handle callbacks from Twilio and return a TwiML response. For this we are going to define TwilioCallTwiMLGeneratorService, which takes care of generating this response. More information about how we need to define the response and its individual fields can be found in Twilio's docs. What we need to define is a response as below:
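The response might look like the following TwiML (the phone numbers are placeholders):

```xml
<Response>
  <Dial callerId="+15550001111">
    <Number>+15552223333</Number>
  </Dial>
</Response>
```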
We are making use of two elements here: Dial, which makes Twilio place a call using the defined callerId value as the number from which the call is made (this is displayed on the callee's phone; note that it is the same verified number we specified before), and Number, which is the number to which we want to place the call. The twilio-ruby gem provides Twilio::TwiML::Response, which we use to generate the response as above.
We define our TwilioCallTwiMLGeneratorService to take a phone number as a parameter. It creates an instance of Twilio::TwiML::Response, and tapping on this instance we provide the Dial element with a :callerId value and the Number to place the call to. We validate the number before passing it back, and return an error if the number is invalid.
Step 6 - Send the TwiML response on the Twilio callback
We are now set to define Twilio's callback handler. This will be handled by the create_call action in CallsController. Twilio will send this endpoint a POST request along with some information specified here. We make use of the phone_number passed to us by Twilio and hand it to TwilioCallTwiMLGeneratorService, which returns a valid TwiML response. Since TwiML is a flavor of XML, we use render xml to return the response.
As the create_call endpoint will be used by the Twilio API, we need to skip the authenticity token check for this action.
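A sketch of the controller action (the service and param names follow the post; the exact code is an assumption):

```ruby
class CallsController < ApplicationController
  # Twilio's POST callback cannot carry a CSRF token
  skip_before_action :verify_authenticity_token, only: :create_call

  def create_call
    twiml = TwilioCallTwiMLGeneratorService.new(params[:phone_number]).process
    # TwiML is XML, so render it as such
    render xml: twiml
  end
end
```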
Finally, we need to specify the callback URL in our TwiML app on Twilio. For testing this locally, we can make use of a service like https://ngrok.com/ to expose this endpoint. Our service is now ready to place calls. The complete Rails application code that we have created can be found here.
I would recommend not specifying the id. Rails generates the id automatically if we don't specify it explicitly. Moreover, there are a few more advantages to not specifying the id.
Stable ids for every fixture
Rails will generate the id based on the key name, and it will ensure that the id is unique for every fixture. It can also generate ids for UUID primary keys.
Labeled references for associations like belongs_to and has_many
Let's say we have a users table, and a user has many cars. The car ferrari belongs to the user john, so we have mentioned user_id as 1. When I'm looking at cars.yml, I see user_id as 1, but now I have to look up which user has id 1.
Here is another implementation. Notice that I no longer specify user_id; I mention the fixture label instead, and now I can reference that label in cars.yml to say that ferrari belongs to john.
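A sketch of what this looks like (the attribute values are illustrative):

```yaml
# users.yml
john:
  name: John

# cars.yml - reference the owner by fixture label instead of id
ferrari:
  name: Ferrari
  user: john
```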
How to set a value to nil from a fixture
Let's say I have a boolean column which is false by default, but for an edge case I want it to be nil. I can obviously mutate the data generated by the fixture before testing. However, I can also achieve this in the fixtures themselves.
Specify null to make the value nil: if the value is null, YAML will treat it as nil. Alternatively, leave the value blank, and YAML will also treat it as nil.
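Both forms can be sketched in a fixture like this (the column names are illustrative):

```yaml
sam:
  active: null   # explicit null becomes nil

tim:
  active:        # a blank value also becomes nil
```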
When the model name and table name do not match
Generally in Rails, the model name and table name follow a strict convention: the Post model will have the table name posts. Using this convention, the fixture file for the Post model is obviously fixtures/posts.yml. But sometimes models do not match the table name directly. This could be for legacy reasons or because of namespacing of models. In such cases, automatic detection of fixture files becomes difficult.
Rails provides the set_fixture_class method for this purpose. This is a class method which accepts a hash where the key should be the name of the fixture or the relative path to the fixture file, and the value should be the model class. I can use this method inside test_helper.rb or in any class inheriting from ActiveSupport::TestCase.
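A sketch, assuming a namespaced Admin::Post model whose fixtures live in admin/posts.yml:

```ruby
# test/test_helper.rb
class ActiveSupport::TestCase
  # Map the fixture file to the model class explicitly
  set_fixture_class "admin/posts" => Admin::Post
end
```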
Value interpolation using $LABEL
Rails provides many ways to keep our fixtures DRY. Label interpolation is one of them. It allows the use of the key of a fixture as a value in the fixture.
$LABEL is not a global variable here; it's just a placeholder. $LABEL is replaced by the key of the fixture, and as discussed earlier the key of the fixture in this case is john. So $LABEL has the value john.
Before this PR, this feature could be used only if the value was exactly $LABEL. So if the email is email@example.com, I could not use $LABEL@example.com. But after this PR, I can use $LABEL anywhere in the string, and Rails will replace it with the key, so the earlier example can use $LABEL in the email value as well.
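For example (the fixture values are illustrative):

```yaml
john:
  name: $LABEL                # becomes "john"
  email: $LABEL@example.com   # becomes "john@example.com" after the PR
```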
I use YAML defaults in database.yml for DRYing it up and keeping common configuration in one place. I can use the same technique for DRYing up fixtures by extracting the common parts.
Note the usage of the key DEFAULTS for defining the default fixture. Rails will automatically ignore any fixture with the key DEFAULTS. If we use any other key, then a record with that key will also get inserted into the database.
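A sketch using a YAML anchor with the DEFAULTS key (the column names are illustrative):

```yaml
DEFAULTS: &defaults
  created_at: <%= Time.current.to_s(:db) %>

john:
  <<: *defaults
  name: John

sam:
  <<: *defaults
  name: Sam
```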
Database specific tricks
Fixtures bypass the normal Active Record object creation process. After being read from the YAML file, they are inserted into the database directly using insert queries, so they skip callbacks and validation checks. This also has an interesting side effect which can be used for DRYing up fixtures.
Suppose we have a fixture with a timestamp column. If I am using PostgreSQL, I can replace the last_active_at value with just now. now is not a keyword here; it is just a string, and the value for last_active_at is still just now when the query is sent to the database. The magic starts as PostgreSQL reads the values: now is a shorthand for the current timestamp, so as soon as Postgres reads it, it replaces now with the current timestamp, and the column last_active_at gets populated with the current timestamp.
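A sketch of such a fixture:

```yaml
john:
  name: John
  last_active_at: now   # PostgreSQL reads the string "now" as the current timestamp
```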
I can also use the now() function instead of just now. This function is available in PostgreSQL as well as MySQL, so now() works in both of these databases.
I wrote a bunch of Selenium tests using Selenium IDE for a project. The Selenium tests have proven to be very useful; however, they took around 58 minutes to complete a full run. Here are the specific steps I took which brought the running time to under 15 minutes.
Set it to run at maximum speed
The setSpeed command takes a Target value in milliseconds. By setting the value to zero, I set the speed to maximum, and the tests indeed ran fast. However, now I had lots of failing tests which were passing earlier.
In our tests a real Firefox browser is fired up and real elements are clicked. The application makes round trips to the Rails server hosted on Heroku. By setting the Selenium tests to maximum speed, they started asserting for elements on the page even before the pages were fully loaded by the browser. I needed a set of instructions with which I could tell Selenium how long to wait before asserting for elements.
Selenium provides a wonderful suite of commands which helped me fine-tune the test run. Here I'm discussing some of those commands.
waitForVisible
This command tells Selenium to wait until the specified element is visible on the page. In the case mentioned below, Selenium IDE will wait until the element css=#text-a is visible on the page.
waitForText
This command tells Selenium to wait until a particular text is visible in the specified element. In the case mentioned below, Selenium IDE will wait until the text violet is displayed in the element css=#text-a.
The difference between waitForVisible and waitForText is that waitForVisible waits until the
specified element is visible on the page while waitForText waits until a particular text is visible in the
specified element on the page.
waitForElementPresent
This command tells Selenium to wait until the specified element is present on the page. In the case mentioned below, Selenium IDE will wait until the element css=a.button is present on the page.
waitForVisible and waitForElementPresent seem very similar. It seems both of these commands do the same thing. There is a subtle difference though.
waitForVisible waits until the specified element is visible. The visibility of an element is manipulated by CSS properties; for example, using display: none; one can make an element not visible at all. In contrast, the waitForElementPresent command waits until the specified element is present on the page in the HTML markup. This command does not take CSS settings into consideration.
This command tells Selenium to wait until the page is refreshed and the targeted element is displayed on the page. In the example mentioned below, Selenium IDE will wait until the page is refreshed and the targeted element css=span.button is displayed.
clickAndWait
This command tells Selenium to click an element (such as a button that submits a form) and wait for the page to reload. The subsequent commands are paused until the page has reloaded after the element is clicked. In the case mentioned below, Selenium IDE will wait until the page is reloaded after the specified element css=input#edit is clicked.
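Put together, the commands above look like this in Selenium IDE's command | target | value table (the selectors are illustrative):

```
setSpeed              | 0              |
waitForVisible        | css=#text-a    |
waitForText           | css=#text-a    | violet
waitForElementPresent | css=a.button   |
clickAndWait          | css=input#edit |
```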
I attended, enjoyed, and volunteered at Pune's first RubyConf, DeccanRubyConf 2014. True to its name, Hou De (let it be), the event went along in that easygoing spirit.
The day before the conference, Vipul (one of the event organizers) and I picked up our guest speaker Koichi Sasada from Pune Airport in the morning. Koichi is a Ruby core member and works for Heroku. We welcomed him and went to the Hyatt Regency hotel, where the event was taking place. Our guest checked into the hotel, and then we decided to go for lunch at the Malaka Spice restaurant.
When we got there, Koichi told us that he wanted non-spicy food (safe food). We ordered non-spicy food, but it was still too spicy for Koichi. Nevertheless, we enjoyed the food and had a very good discussion about Ruby internals, concurrency and parallelism, debugging in Ruby, and Japanese and Indian culture. After lunch, we dropped Koichi off at the hotel and left for home.
The next morning was the event day. I woke up early and went to the venue. As part of the volunteering team, the other volunteers and I had tasks like handing out pens, badges, stickers, T-shirts, and night-party coupons to the attendees.
Attendees started to come in slowly. Some attendees asked me about T-shirt sizes; since I was wearing one of the conference T-shirts, they decided their size from mine. It was a great experience meeting different kinds of people from around India.
Koichi's keynote kicked the event off, and he talked about Ruby 2.1 features like:
required keyword parameters,
rational number literals,
def returning the symbol of the method name,
new runtime features (String#scrub, Binding#local_variable_get, etc.)
Then he talked about performance improvements, Ruby 2.2, and how to speed up the Ruby interpreter. Click here for more details about his talk.
In between the talks, some new attendees came to the conference who had not registered. They told me they thought it was Pune's regular local Ruby meetup. There was some misunderstanding, but they seemed interested in attending the event. I contacted Gautam, as he was one of the organizers, and told him about the issue. Attendees kept coming till the afternoon.
After Koichi's talk, two sections opened up: one for talks and the other for workshops. A TDD workshop was conducted by Sidu Ponnappa. I saw lots of attendees in this workshop and heard that it went very well.
The next talk was Requiem for a dream by Arnab Deka. He talked about various tips and tricks, including using higher-order functions and concurrency in Ruby and other programming languages like Clojure and Elixir.
After that, Rishi Jain talked about Game Development - The Ruby Way. He discussed how to build a game in Ruby using the Gosu library. It was a very useful session for game developers. You can find out more about it here.
The next talk was Programming Ruby in Marathi by Ratnadeep Deshmane and his friend Aniket Awati. This was one of the best talks of the event. The way they used similar-sounding Marathi words for Ruby's keywords, along with their examples, made this talk remarkable. Their presentation style was nice too. Almost all attendees enjoyed this talk and laughed a lot.
After this talk there was a 15-minute tea break. The staff from the Hyatt hotel were very helpful. They served tea and coffee to the attendees and overall did a good job of ensuring the event cruised along smoothly. This is in sharp contrast to the service RubyConfIndia received from Lalit Resort. After the tea break, I didn't get a chance to attend other talks, as attendees were still coming in and I was assisting them. But I heard almost all the talks went very well.
In the meantime, while passing through the main passage, I saw the lightning talks board and decided to give a lightning talk on my Ruby gem. A lightning talk is a short presentation you can give about your achievements; you can also share your ideas and promote your library or any other project.
Then we all had our lunch; it was good, with lots of variety and dessert. After lunch I went to the workshop Deliver projects 30% faster, know your CSS by Aakash Dharmadhikari. I wanted to attend it fully, but some of the attendees had difficulties with the internet connection, so I left the room to look into it. The lightning talks were about to start, so I took some time to prepare my presentation.
In the lightning talks, the girls from Rails Girls Summer of Code talked about their project and their progress on it. After that, Prathamesh talked about RubyIndia.org and asked people to subscribe to the newsletter. Then I gave a talk on my Ruby gem RubySimpleSearch; you can find more about it here. The next speaker, Rahul Mahale from Nashik, asked people to help him grow the Ruby community in Nashik. All the other lightning talks went very well.
After the lightning talks, there was the closing keynote, On Solving Problems, by Baishampayan Ghose. This talk made us think about how we write applications in our daily routine. He talked about architecture and explained that the future is a function of the past: future = f(past). He also suggested that we should first understand the problem thoroughly and only then build the software. The talk was very informative and went very well.
After that, Gautam came on stage and thanked all the sponsors, organizers, and volunteers. He also mentioned that this event had more women attendees than he had ever seen at any other conference. After the event, there was a party at the Irish Village hotel. My friends and I all went, and the party was superb; we all enjoyed it.
Thanks to all sponsors and organizers who made this event fun and enjoyable.
You can check out more pictures of the conference here.
Note: Photos are copyrighted by respective photo owners.
Prior to upgrading to Rails 4.1, we had a helper to display flash messages and to add a CSS class to the message based on the flash type. Here is the code.
After upgrading to Rails 4.1, we started using the new cookies serializer. The following code was added to an initializer.
Soon after this our flash helper started misbehaving and all flash messages disappeared from the application.
JSON Cookies Serializer
Before we move ahead, a word on the new JSON cookies serializer. Applications created before Rails 4.1 use Marshal to serialize cookie values into the signed and encrypted cookie jars. Commits like this and this made the cookies serializer configurable and changed the default from the Marshal serializer to a more secure serializer using JSON. The JSON serializer works on JSON objects, so objects like Date and Time will be stored as strings, and hash keys will be stored as strings. The JSON serializer makes the application much safer, since it is safer to pass around strings compared to arbitrary values, which is what happens when values are marshalled and passed around.
Coming back to our problem: the change Stringify the incoming hash in FlashHash, coupled with the above serialization changes, meant that even if we put a symbol as a key in the flash, we have to retrieve it as a "string", since the keys are internally converted into strings. The difference is clearly illustrated below.
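A plain Ruby sketch of the round trip shows the key stringification:

```ruby
require "json"

# The JSON cookies serializer round-trips the flash through JSON,
# so symbol keys come back as strings.
flash = { notice: "Profile updated" }
restored = JSON.parse(flash.to_json)

restored[:notice]   # => nil
restored["notice"]  # => "Profile updated"
```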
Now that we knew the root cause of the problem, the fix was simple: instead of relying on symbols, use "string" keys to access values from the flash.
Recently, in one of our projects, we experienced some strange errors from Delayed::Job. Workers started successfully, but when they began to lock the jobs, the workers failed with PG::Error: no connection to server or PG::Error: FATAL: invalid frontend message type 60 errors. After some searching, we found there had been similar issues reported before.
We started isolating the problem and digging through the recent changes we had made to the project. Since the last release, the only significant modification had been to internationalization: we had started using i18n-active_record. For Delayed Job we had an extra check as follows:
After some serious searching and digging through both the Delayed::Job source code and the way we were setting up its config, we started noticing some issues. The first thing we found was that the problem did not turn up when delayed job workers were started using the rake jobs:work task. After looking at DelayedJob internals, we found that the main difference between the rake task and the binstub was the fork method that was invoked in the binstub version. The binstub version was executed using the Daemons#run_process method and had a slightly different execution lifecycle.
Let's take a look at DelayedJob internals before proceeding. DelayedJob has a system of hooks that can be used by plugin writers and in our applications. All this event functionality lives in the Delayed::Lifecycle class; each worker has its own instance of that class. So, which events exactly do we have here? You can set up callbacks to run on before, after, or around events simply by using the Delayed::Worker.lifecycle.before, Delayed::Worker.lifecycle.after, and Delayed::Worker.lifecycle.around methods.
Let’s move on to our problem. It turned out that
delayed job active record gem was closing all
database connections in before_fork hook and reestablishing them in after_fork hook.
It was clear that I18n-active-record did not play well with this, causing the issue at hand.
We looked into the DelayedJob lifecycle and chose the before :execute hook, which is executed after all of DelayedJob’s ActiveRecord backend connection manipulations.
Finally, the locales initializer for delayed_job workers was changed as below:
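The initializer ended up along these lines. This is a sketch: the exact code is not shown in this post, and the backend setup below is an assumption based on the description above.

```ruby
# config/initializers/delayed_job_i18n.rb
# Re-initialize the I18n ActiveRecord backend only once the worker's
# database connections have been re-established, i.e. before :execute.
Delayed::Worker.lifecycle.before(:execute) do |worker|
  I18n.backend = I18n::Backend::ActiveRecord.new
end
```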
This helped us mitigate the connection errors, and connections stopped dying abruptly.
Vipul and I recently gave a talk at
RedDotRubyConf on ActiveRecord can’t
do it? Arel can!. It was our first trip to Singapore and we enjoyed
the conference as well as Singapore a lot.
RDRC2014 was awesome.
We reached the venue in time for Koichi’s
keynote on Ruby.Inspect.
He talked about various things related
to the development of Ruby, including the Ruby team at Heroku, recent releases
of Ruby, and new syntax introduced in Ruby 2.1. He also talked about
performance improvements including Generational GC - RGenGC and
upcoming features in Ruby 2.2.
After that, in the second part of the talk, he talked about the inspection
tools available in Ruby. It was a deeply technical part for me and
something to learn from. The message he gave in the talk was to
become a low-level engineer.
The second talk of the conf was from T.J. Schuck, about solving one of the
hardest problems: storing and retrieving passwords in a secure way. He
talked about how ongoing improvements in hardware pose a challenge,
as even a proper algorithm can be cracked with enough
computing power. It was interesting to learn about the internals of
storing passwords. I had never cared too much about it :)
After the coffee break, Brandon Keepers from GitHub gave a talk on
Tending Your Open Source Garden. GitHub is still on Rails 2.3
and Brandon is working on bringing it up to a newer version. His talk
offered great advice for those who want to contribute to open source and
the community. I think this talk resonated well with the audience, as much
of the crowd was new and interested in open source contributions.
Gautam Rege from Josh Software gave a
talk on Dark Side of Ruby.
We had attended this talk at GCRC, so we left the hall after some time
and did one last practice of our own talk. But I heard the feedback
for this talk was very good.
After lunch, Keith Pitt talked about a
Guide to Continuous Deployment with Rails. He talked about keeping everything related
to deployment, from CI to migrations, in sync. One of the interesting
things that I came to know from this talk was how to enable zero
downtime deployments on Heroku.
Benjamin Tan gave a talk on
Ruby + Elixir: Polyglotting FTW! after
that. He talked about Elixir. This talk was about looking beyond Ruby
and adding another tool to our skill set. Benjamin also gave some demos,
including a final one in which he used Sidekiq with Elixir, where the
actual work was done by Elixir workers. I will definitely give Elixir a
shot in the coming days.
After that we gave our talk on
ActiveRecord can’t do it? Arel can!. I was a bit nervous as it was my
first talk, but it went well. We finished a bit earlier than expected,
but there was a tea break after our talk :). We got some good feedback
from the attendees, especially beginners who had not used Arel
before. Our slides are available online.
After these awesome lightning talks, the last session of Day 1
started. There were talks on Fluentd and domain-driven design. Both
were good exposure to something outside the daily routine. Konstantin
Haase’s talk, the last of the day, was a **Meta** talk. He talked about
abstraction and how it happens in our mind. Our mind affects what we
see, like when we see the color magenta. Similarly, abstraction happens in the mind.
I had to concentrate a lot in this talk to understand it, but it was
worth the effort.
Andddd that ended the first day of the conf. It was exciting and we
were looking forward to the second day.
Day 2 started with Bryan Helmkamp’s
talk on Docker. We missed
the initial part of the talk. He talked about the basics of Docker and how to
deploy in a container environment. He also discussed deploying
a Rails app using Docker and how Docker makes it very easy to deploy
different parts of the system.
Zachary Scott gave the next talk, introducing the
Ruby Core team:
how it works, how it collaborates, developer meetings, and how anyone can
contribute to MRI. We also had a Friday hug during
this talk :) This talk, combined with Hiroshi’s lightning talk on the
first day, was a great insight into CRuby development.
After the break, Piotr Solnica gave an excellent talk on
Convenience vs Simplicity.
He talked about how the convenience offered
by ActiveRecord may not be simple to understand. Things such as
input conversion and validation are convenient to use as a developer but
not necessarily simple to understand. He also discussed presenters,
immutable data structures, and
Adamantium for creating
immutable objects in Ruby. In the second part of the talk, he talked
about relations and how they can be used in composing queries. He
explained this idea using
Ruby Object Mapper. It uses
Axiom as the underlying relational
algebra instead of Arel. It’s an interesting project to check out.
After that our very own Anil Wadghule talked on
SOLID Design Principles in Ruby.
His emphasis was on following design principles rather than patterns. He also showed code examples and refactored them by
applying the principles. His talk was a good introduction to what
the SOLID principles are and how they can be applied in real life.
We skipped the session after lunch and roamed around talking with
people. We had an interesting discussion about hiring Ruby on Rails
developers, interview processes, etc.
Then the lightning talks started. Sheng-Loong Su talked first on using
algorithms for trading. He talked about collecting data using a feeder,
preparing trading signals using a strategy, and making decisions based on
those trading signals. One of the best talks of day 2 was by
Grzegorz Witek, on how he is traveling the world without getting
burned out while still happily programming. He talked about his
experiences living and working in different countries.
To me it was one of the most inspirational talks. The last
lightning talk, by Shuwei and Arathi, was about using Vagrant to set up
a dev environment.
Then the chocolate man from Belgium, Christophe Philemotte, gave a talk on
Safety Nets: Learn to code with confidence.
His talk was about how we can keep code healthy in the long term using testing and static analysis,
with tools such as flog, flay, and rubocop for removing duplication,
reducing complexity, and fixing warnings. He also talked about the importance
of code review. His code is available
here. He also gave
us excellent chocolates from Belgium.
And then the last keynote, by Aaron Patterson. As always, it was full of
everything: tech stuff, jokes, and puns.
He talked about how he is making performance improvements in Active
Record and link generation. He showed some graphs with the performance of
various database adapters tested on Rails versions ranging from 2.3
to 4 to master. He urged everyone to report performance issues to the
core team so that they are addressed quickly. This is the
app he used for doing the
performance testing.
And that ended the talks at RDRC. We had an awesome after-party where we
discussed Ruby, Rails, and other
stuff with lots of people. We would like to thank Winston
for inviting us to RedDotRubyConf.
Check out some of the pictures from the conference.
I recently conducted a workshop about contributing to open source at the first-ever RubyConf Philippines. In the introductory talk,
I spoke about how Aaron Patterson fixed a 6-year-old bug
about optional arguments that existed in Rails.
Bug in Ruby
Let’s try a small program.
What do you think will be printed on your terminal when you run the above program?
If you are using Ruby 2.1 or below, you will see nothing. Why is that? That’s because of a bug in Ruby.
Jekyll is an excellent tool for creating static pages and blogs. Our BigBinary blog is based on Jekyll. Deploying our blog to Heroku took longer than I had expected, so I am outlining what I did to deploy the BigBinary blog to Heroku.
Add exclude vendor to _config.yml
Open _config.yml and add the following line at the very bottom.
Create a new file called Procfile at the root of the project with the following content.
Add a Gemfile at the root of the project.
Add a config.ru at the root of the project with the following content.
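For reference, a config.ru along these lines serves the generated _site directory with Rack. This is a sketch; the exact options and the fallback behavior are assumptions, so adjust to your setup.

```ruby
# config.ru -- serve the static files Jekyll generates into _site
require "rack"

use Rack::Static,
  urls: [""],          # match every path
  root: "_site",       # Jekyll's output directory
  index: "index.html"  # serve index.html for directory URLs

# Fallback for paths with no matching static file
run lambda { |env| [404, { "Content-Type" => "text/plain" }, ["Not found"]] }
```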
Test on local machine first
Test locally by executing bundle exec jekyll serve.
Push code to Heroku
Now run bundle install, add the Gemfile.lock to the repository, and push the repository to Heroku.
In a project we needed to write different parsers for different services. Rather than putting all those parsers in app/models or in lib, we created a new directory and put all the parsers in app/parsers.
We put all the tests for these parsers in the test/parsers directory.
We can run parser tests individually by executing rake test test/parsers/email_parser_test.rb. However, when we run rake, the tests in test/parsers are not picked up.
We added the following code to the Rakefile to make rake pick up the tests in test/parsers.
Now when we run rake or rake test, the tests under test/parsers are also picked up.
The above code adds a rake task, rake test:parsers, which runs all tests under the test/parsers directory.
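The Rakefile addition could be sketched like this. The exact code we used is not shown here; this version assumes Rake's built-in TestTask and a conventional test layout.

```ruby
# Rakefile -- define rake test:parsers and hook it into the test run
require "rake/testtask"

namespace :test do
  Rake::TestTask.new(:parsers) do |t|
    t.libs << "test"
    t.pattern = "test/parsers/**/*_test.rb"
  end
end

# Make plain `rake test` (and thus `rake`) also run the parser tests
task :test => "test:parsers"
```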
Ideally we should be logging an exception in Rails like this.
The above code would produce a one-line log message, as shown below.
In order to get the backtrace and other information about the exception, we
need to handle logging like this.
The above code would produce the following log message.
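A sketch of building such a detailed message by hand. A plain Logger writing to a StringIO is used here so the example runs standalone; the exact formatting is an assumption.

```ruby
require "logger"
require "stringio"

out = StringIO.new
logger = Logger.new(out)

begin
  raise ArgumentError, "bad input"
rescue => e
  # Include class, message, and the full backtrace in one log entry
  message = "#{e.class} (#{e.message}):\n  #{(e.backtrace || []).join("\n  ")}"
  logger.info(message)
end

puts out.string
```

In a Rails app you would send the same message to Rails.logger instead of a standalone Logger.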
Now let’s look at why Rails logger does not produce detailed logging and what can be done about it.
A closer look at Formatters
When we use Rails.logger.info(exception), the output is formatted
by ActiveSupport::Logger::SimpleFormatter. It is a custom formatter defined by Rails that looks like this.
As we can see, it inherits from Logger::Formatter, defined by Ruby’s Logger.
It then overrides the call method, which is originally defined as
When an exception object is passed to SimpleFormatter, msg.inspect is called, and that’s why we see the exception message without any backtrace.
The problem is that Rails’ SimpleFormatter’s call method is a bit naive
compared to Ruby logger’s call method.
Ruby’s logger has a special check for exception messages: if the message it is going to print is an instance of Exception, then it prints the backtrace as well. In comparison, SimpleFormatter just prints msg.inspect for objects of the Exception class.
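We can watch Ruby's default formatter doing this with a quick standalone example. The backtrace below is fabricated for illustration.

```ruby
require "logger"
require "stringio"

out = StringIO.new
logger = Logger.new(out)  # uses Ruby's Logger::Formatter by default

error = RuntimeError.new("boom")
error.set_backtrace(["app.rb:10:in `run'", "app.rb:3:in `<main>'"])

# Passing the exception object itself: the default formatter expands it
# into "message (Class)" followed by the backtrace lines.
logger.error(error)
puts out.string
```

The output contains "boom (RuntimeError)" followed by both backtrace lines, which is exactly the detail SimpleFormatter throws away.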
This problem can be solved by using config.logger.
config.logger accepts a logger conforming to the interface of Log4r or the default Ruby Logger class.
Defaults to an instance of ActiveSupport::Logger, with auto flushing off in production mode.
So now we can configure the Rails logger not to use SimpleFormatter and go back to Ruby’s logger.
Let’s set config.logger = ::Logger.new(STDOUT) in config/application.rb and then try the following code.
Now the above code produces the following log message.
Sending log to STDOUT is also a good practice
As per http://12factor.net/logs, an
application should not concern itself much with the kind of logging
framework being used. The application should write its logs to STDOUT and
logging frameworks should operate on log streams.
For one of our clients we need to display random records from the database. That’s easy enough. We can use the random() function.
Here we are using a PostgreSQL database. MySQL offers a similar rand() function.
The problem here is that if the user clicks on the next page, then we will try to get the next set of 20 random records. And since these records are truly random, the user might sometimes see records which were already seen on the first page.
The fix is to make it random, but not truly random. It needs to be random with a seed.
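The idea can be demonstrated in plain Ruby: with the same seed, two separate requests see the same shuffled order, so the pages never overlap.

```ruby
seed = 1234

# Two independent "requests" shuffle with the same seed...
first_request  = (1..100).to_a.shuffle(random: Random.new(seed))
second_request = (1..100).to_a.shuffle(random: Random.new(seed))

# ...so they agree on the order, and paging through it never repeats records
page1 = first_request.first(20)
page2 = second_request[20, 20]

puts first_request == second_request  # true
puts (page1 & page2).empty?           # true
```

The database versions below apply the same principle, with the seed shared across a user's requests instead of hard-coded.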
Fix in MySQL
In MySQL we can pass the seed directly to the rand() function.
Fix in PostgreSQL
In PostgreSQL it is a little more cumbersome. We first need to set the seed, and then subsequent queries’ usage of random() will make use of the seed value.
Set seed value in before_action
For different users we should use different seed values, and these values should be random. So we set the seed value in a before_action.
Now change the query to use the seed value and we are all set.
In the previous blog we discussed Ruby code where we used ps -ocommand. In this blog let’s discuss how to get the arguments passed to a command.
What is the issue
In the referred blog we are trying to find whether the --force or -f argument was passed to the git push command.
The kernel knows the arguments that were passed to the command. So the only way to find the answer is to ask the kernel what the full command was. The tool for dealing with such questions is ps.
In order to play with the ps command, let’s write a simple Ruby program first.
In one terminal execute ruby s1.rb. In another terminal execute ps.
So here I have two bash shells open in two different tabs of my terminal. The first terminal tab is running s1.rb. The second terminal tab is running ps. In the second terminal we can see the arguments that were passed to the program s1.rb.
By default ps lists all the processes belonging to the user executing the command and the processes started from the current terminal.
ps -p 87070 would show the result only for the given process id.
We can pass more than one process id.
ps -o can be used to select the attributes that we want shown. For example, I want only pids to be shown.
Now I want pid and command.
I want the result only for a certain process id.
Now we have the arguments that were passed to the command. This is the code the earlier article was talking about.
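From Ruby, the same lookup can be sketched by shelling out to ps. Here we ask about our own process; the output format varies slightly across platforms.

```ruby
# Ask ps for the full command line of a given pid.
# "command=" prints just the command column, with no header row.
pid = Process.pid
command = `ps -o command= -p #{pid}`.strip
puts command
```

Running this as, say, ruby s1.rb --force prints the interpreter plus the script arguments, which is exactly what the force-push check needs to inspect.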
For the sake of completeness let’s see a few more options.
ps -e would list all processes.
ps -f would list a lot more attributes including ppid.
In the previous blog we discussed Ruby code where we used two things:
ppid and ps -ocommand. In this blog let’s discuss ppid. ps -ocommand is discussed in the next blog.
Parent process id is ppid
We know that every process has a process id, usually referred to as pid. In the *nix world every process has a parent process. And in Ruby the way to get the process id of the parent process is through ppid.
Let’s see it in action. Time to fire up irb.
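Outside irb, the same relationship can be verified with a short standalone script (fork is available on *nix systems):

```ruby
parent_pid = Process.pid
puts "parent pid: #{parent_pid}"

# In a forked child, Process.ppid points back at the process that forked it
child = fork do
  exit!(Process.ppid == parent_pid ? 0 : 1)
end
Process.wait(child)
puts "child saw our pid as its ppid: #{$?.exitstatus == 0}"
```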
Now keep the irb session open and go to another terminal tab. In this new tab execute pstree -p 83132.
If pstree is not available then you can easily install it using brew install pstree.
As you can see from the output, the process id 83132 is at the very bottom of the tree. The parent process id is 82455, which belongs to the bash shell.
In the irb session, when we ran Process.ppid, we got the same value: 82455.
At BigBinary we create a branch for every issue. We deploy that branch, and only when it is approved is that branch merged into master.
From time to time we rebase the branch, and after rebasing we need to force push to send the changes to GitHub. And once in a while someone force pushes to master by mistake. We recommend setting push.default to current to avoid such issues, but still, sometimes a force push to master does happen.
In order to prevent such mistakes in the future we are using a pre-push hook. This is a small Ruby program which runs before any git push command. If you are force pushing to master, then it will reject the push like this.
The pre-push hook was added to git in version 1.8.2, so you need git 1.8.2 or higher. You can easily upgrade git by executing brew upgrade git.
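The heart of such a hook can be sketched as a small predicate. The logic below is assumed for illustration; the real script differs, and the way it discovers the push command (via ppid and ps -ocommand) is covered in the earlier posts.

```ruby
# Would this push be a force push to master? (assumed logic, for illustration)
def force_push_to_master?(branch, push_command)
  forced = push_command.include?("--force") || push_command.match?(/\s-f(\s|\z)/)
  branch == "master" && forced
end

puts force_push_to_master?("master", "git push origin master --force")  # true
puts force_push_to_master?("master", "git push -f origin master")       # true
puts force_push_to_master?("feature", "git push -f origin feature")     # false
```

The hook itself would call a predicate like this and exit with a non-zero status to make git abort the push.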
Setting up hooks
In order for these hooks to kick in they need to be setup.
The first step is to clone the repo to your local machine. Now open ~/.gitconfig and add the following line.
Change the value /Users/neeraj/code/tiny_scripts/git-hooks to match with the directory of your machine.
Making existing repositories aware of this hook
Now the pre-push hook is set up. Any new repository that you clone will not allow force pushing to master.
But existing repositories do not know about this git hook. To make existing repositories aware of this hook, execute the following command in each repository.
Now if you look into the .git/hooks directory of your project, you should see a file called pre-push.
It means this project is all set with the pre-push hook.
When you clone a repository, git init is invoked automatically and pre-push is copied for you. So you are all set for all future repositories too.
Let’s say that I’m forking the repo rails/rails.
After the repo has been forked to my account, I will clone it on my local machine.
Now cd rails and execute git remote -v. This is what I see.
Now I will add the upstream remote by executing the following command.
After having done that, when I execute git remote -v, I see
Now I want to make some changes to the code.
After all, this is why I forked the repo.
Let’s say that I want to add exception handling to the forked code I have locally.
Then I create a branch called exception-handling
and make all my changes in this branch.
The key here is not to make any changes to the master branch.
I try to keep the master of my forked repository in sync with the master of the original repository it was forked from.
So now let’s create a branch, and I will put all my changes there.
In the Gemfile I will use code like this:
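The Gemfile entry would look something like this; the GitHub username is a placeholder for wherever your fork lives.

```ruby
# Gemfile -- point the rails gem at the fork's branch
gem "rails", github: "your-username/rails", branch: "exception-handling"
```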
A month has passed. In the meantime rails master has tons of changes, and I want those changes in my exception-handling branch. In order to achieve that, first I need to bring my local master up-to-date with rails master.
I need to switch to the master branch and then execute the following commands.
Now the master of the forked repository is in sync with the master of rails/rails. Now that master is up-to-date, I need to pull the changes from master into my exception-handling branch.
Now my exception-handling branch has my fix on top of rails master.