Rails makes it easy to adopt a RESTful architecture. All you have to do is follow a few conventions.
I started by putting all picture-related functionality in pictures_controller.rb.
In the beginning it was simple.
Slowly the application evolved.
The application started handling two different types of pictures.
There would be pictures for events and then there would be pictures of users using the system.
One can add comments to the event pictures, but one can’t add comments to user pictures.
Slowly the requirement for event pictures grew vastly different from user pictures.
Sounds familiar, right?
Initially a controller takes on a few responsibilities,
but slowly it accumulates more and more until it becomes huge.
The pictures controller was really huge and fast becoming a mess,
and writing tests in particular was getting very difficult.
Time had come to create two different controllers: one for event pictures and one for user pictures.
But wait. Lots of people would say that if we want to be RESTful then
there has to be a one-to-one mapping between models and controllers.
Model != resource
Being RESTful does not mean that there has to be a one-to-one mapping between models and controllers.
I am going to create a new controller called user_pictures_controller.rb
which will take on all the functionality related to users dealing with pictures.
And this is going to be RESTful.
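In config/routes.rb, a single line (Rails 2.x routing syntax) declares the new resource:

```ruby
# config/routes.rb (Rails 2.x syntax)
ActionController::Routing::Routes.draw do |map|
  map.resources :user_pictures
end
```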
Above I have defined a resource called user_pictures.
To keep it simple this controller would do only three things:
display all the pictures of the user (index)
allow the user to upload pictures (create)
allow the user to delete a picture (destroy)
That’s the general idea. In my application I have only three actions.
However, in the interest of general discussion,
I am going to show all seven actions here.
Also, for simplicity, create in this case means adding a record (I am not showing multipart upload).
Here is the code for the controller.
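A sketch of such a controller follows. The current_user helper, the Picture model, and the user–pictures association are my assumptions about the application; only index, create and destroy exist in my app, but all seven actions are shown as promised:

```ruby
class UserPicturesController < ApplicationController
  # GET /user_pictures
  def index
    @pictures = current_user.pictures
  end

  # GET /user_pictures/1
  def show
    @picture = current_user.pictures.find(params[:id])
  end

  # GET /user_pictures/new
  def new
    @picture = current_user.pictures.build
  end

  # GET /user_pictures/1/edit
  def edit
    @picture = current_user.pictures.find(params[:id])
  end

  # POST /user_pictures
  def create
    @picture = current_user.pictures.build(params[:picture])
    if @picture.save
      redirect_to user_pictures_path
    else
      render :action => 'new'
    end
  end

  # PUT /user_pictures/1
  def update
    @picture = current_user.pictures.find(params[:id])
    if @picture.update_attributes(params[:picture])
      redirect_to user_pictures_path
    else
      render :action => 'edit'
    end
  end

  # DELETE /user_pictures/1
  def destroy
    current_user.pictures.find(params[:id]).destroy
    redirect_to user_pictures_path
  end
end
```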
Another use case
Let’s talk about another example.
Let’s say that we have a model called Project and,
besides the regular functionality of creating, deleting, updating and listing projects,
we need two more actions to enable and disable a project.
Well the projects controller can easily handle two more actions called “enable” and “disable”.
However it is a good idea to create another controller called project_status_controller.
This controller should have only two actions - create and destroy.
destroy in this case would mean disabling the project,
and create would mean enabling it.
I know it looks counterintuitive. Actions “enable” and “disable”
seem simpler than “create” and “destroy”.
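Here is a sketch of what that could look like in Rails 2.x. The enabled column, params[:project_id] and the explicit :controller mapping are assumptions of mine:

```ruby
# config/routes.rb -- a singleton resource backed by the new controller
map.resource :project_status, :controller => 'project_status'

# app/controllers/project_status_controller.rb
class ProjectStatusController < ApplicationController
  # POST /project_status -- enable the project
  def create
    project.update_attribute(:enabled, true)
    redirect_to project
  end

  # DELETE /project_status -- disable the project
  def destroy
    project.update_attribute(:enabled, false)
    redirect_to project
  end

  private

  def project
    @project ||= Project.find(params[:project_id])
  end
end
```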
I agree that in the beginning adding more actions to the projects controller looks easy.
However if we go down that path then it is a slippery slope and we will not know when to stop.
Compare that with the RESTful design of having only seven actions: index, show, new, edit, create, update, destroy.
This limits what a controller can do and that’s a good thing.
This ensures that a controller does not take up too many responsibilities.
Creating another controller moves all the business logic that is not related to one of those
seven actions somewhere else.
One last example
Now that we have the ability to “enable” and “disable” projects, how about showing “only active”,
“only inactive” and “all” projects?
In order to accomplish this, once again we could add more actions to the projects controller.
However it is much better to have two new controllers.
Some of you must be wondering: what is the point of creating a controller for the sake
of having only one action?
Well, the point is having code that can be changed easily and with confidence.
In this blog I tried to show that it is not necessary to have a one-to-one mapping between models and controllers in order to be RESTful.
It is always a good idea to create a separate controller when the existing controller is burdened with too much work.
Rails exception handling depends on two factors, and we are going to discuss both of them here.
When exceptions are handled by rescue_action_locally we get to see the page with the stack trace. When exceptions are handled by rescue_action_in_public, we get to see public/500.html or an error page matching the error code.
As you can see, Rails uses two different methods, consider_all_requests_local and local_request?, to decide how an exception should be handled.
consider_all_requests_local is a class-level setting on ActionController::Base. We hardly pay attention to it, but it is configured through the files residing in config/environments.
As you can see, in the development environment all requests are considered local.
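The relevant lines from those environment files look like this:

```ruby
# config/environments/development.rb
config.action_controller.consider_all_requests_local = true

# config/environments/production.rb
config.action_controller.consider_all_requests_local = false
```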
I have overridden the method local_request? but I am still not able to see the public error page when an exception is raised.
That is a common question I see on the mailing list. As you can see, the condition that decides how an exception is handled is
In the development environment consider_all_requests_local is always true, as I showed before. Since one of the two conditions is true, Rails always handles the exception using rescue_action_locally.
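Slightly simplified from the Rails 2.x source (ActionController::Rescue), the decision boils down to:

```ruby
# simplified from ActionController::Rescue (Rails 2.x)
def rescue_action(exception)
  if consider_all_requests_local || local_request?
    rescue_action_locally(exception)    # developer page with stack trace
  else
    rescue_action_in_public(exception)  # public/500.html and friends
  end
end
```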
I am running in production mode but I am still not able to see the public/500.html page when an exception is raised at http://localhost:3000.
Same issue. In this case you are running in production mode so consider_all_requests_local is false, but local_request? is still true because the request comes from localhost.
I want local_request? to be environment dependent
Recently I started using Hoptoad and I needed to test how Hoptoad would handle exceptions in production mode. However, without any change, local_request? was always returning true for http://localhost:3000.
Then I put the following file under config/initializers.
Now all requests in production or staging mode are treated as NOT local.
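A sketch of such an initializer (the file name is my choice):

```ruby
# config/initializers/local_request_override.rb
class ActionController::Base
  protected

  # In production or staging, never treat a request as local, so the
  # public error pages are shown even for http://localhost:3000
  def local_request?
    !%w(production staging).include?(RAILS_ENV)
  end
end
```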
Now in both staging and production mode I get to see the 500.html page even if I am accessing the application from http://localhost:3000.
Rails provides some nice conveniences like automatically updating the created_at and updated_at columns. Developers do not need to worry about these columns; Rails updates them automatically, which is great. Click here to find out how Rails does auto timestamping.
However I have a unique business need where I need to update a column but I do not want updated_at to be changed. Or we can see the problem this way: I want to set updated_at to a particular value.
Look at the SQL that is generated. Rails discarded the updated_at value that I had supplied and replaced it with the current time. Rails works fine if you supply a created_at value; it is the updated_at value that is discarded.
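The first attempt is to switch off ActiveRecord’s class-level record_timestamps flag around the update. A sketch, using the User model from the example:

```ruby
User.record_timestamps = false
u = User.first
u.update_attribute(:updated_at, Time.utc(1909, 1, 1))
User.record_timestamps = true
```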
It worked. I have successfully set updated_at to year 1909. However there is a problem.
For a brief duration User.record_timestamps was set to false. That is a class-level variable, which means that for that brief duration, if any other User record is updated then that record will not get the correct updated_at value. That is not right. I want just one record (User.first) to skip the updated_at change, without changing the behavior of the whole application.
In order to isolate the behavior to only the record we are interested in, I can do this.
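Here is a sketch of the per-object version:

```ruby
u = User.first

# shadow record_timestamps for this one object via its metaclass
class << u
  def record_timestamps
    false
  end
end

u.update_attribute(:updated_at, Time.utc(1909, 1, 1))  # value is kept
```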
In order to restrict the change to a single model instance, I am opening up the metaclass of u (the user object) and adding a method called record_timestamps there. The idea is to insert a record_timestamps method into the metaclass which will return false; this way the change is restricted to a single object rather than being made at the class level.
At this point the metaclass of the user object has the method record_timestamps, and it returns false. Now I update the record with updated_at set to 100 years ago. And I succeed.
Now I need to put the object's behavior back to normal. I open up the metaclass and make the method call super so that the method call goes up the chain. And that is exactly what happens when I test updated_at again: this time the updated_at value that I set is ignored and Rails changes the updated_at value.
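The restore step looks like this:

```ruby
# send record_timestamps back up the chain to the default behavior
class << u
  def record_timestamps
    super
  end
end
```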
This strategy of opening up an individual object works, but it is messy. I would like to have a method that is much easier to use, and this is what I came up with. Stick this piece of code in an initializer.
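Something along these lines could live in config/initializers (the file name and the helper name without_timestamping are my own choices):

```ruby
# config/initializers/without_timestamping.rb
class ActiveRecord::Base
  # Temporarily disable auto timestamping for this one record
  def without_timestamping
    class << self
      def record_timestamps; false; end
    end
    yield self
  ensure
    class << self
      remove_method :record_timestamps
    end
  end
end
```

Usage would then read naturally, e.g. `User.first.without_timestamping { |u| u.update_attribute(:updated_at, Time.utc(1909, 1, 1)) }`.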
This is how you can use it.
Good usage of remove_method
In the above solution I used super when I wanted to bring back the default auto timestamping behavior. Instead of super I can also use remove_method. More about what remove_method does is here.
Using the above technique, I can fully control updated_at values without Rails overriding anything.
The above code was tested with Ruby 1.8.7 and Rails 2.x.
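Outside Rails, the same pattern can be seen in plain Ruby with a stand-in class: remove_method on the metaclass un-shadows the class-level implementation, so lookup falls back to the class.

```ruby
class Widget
  def record_timestamps
    true  # class-level default, standing in for ActiveRecord's behavior
  end
end

w = Widget.new

# shadow the method for this one object via its metaclass
class << w
  def record_timestamps
    false
  end
end
puts w.record_timestamps  # => false

# remove the singleton method; lookup falls back to the class
class << w
  remove_method :record_timestamps
end
puts w.record_timestamps  # => true
```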
Rails recently added the named_scope feature
and it is a wonderful thing.
If you don’t know what named_scope is, you can find out more about it elsewhere.
This article is not about how to use named_scope.
This article is about how named_scope does what it does so well.
ActiveRecord has something called with_scope which is not associated with named_scope.
The two are entirely separate things.
However, named_scope relies on the workings of with_scope to do its magic.
So in order to understand how named_scope works first let’s try to understand what with_scope is.
with_scope lets you add scope to a model in a very extensible manner.
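For example, a sketch against a hypothetical User model with sex and status columns:

```ruby
class User < ActiveRecord::Base
  def self.all_male
    with_scope(:find => {:conditions => "sex = 'male'"}) do
      all_active
    end
  end

  def self.all_active
    with_scope(:find => {:conditions => "status = 'active'"}) do
      find(:all)
    end
  end
end
```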
We can see that when User.all_male is called, it internally calls all_active method
and the final sql has both the conditions.
with_scope allows nesting, and all the conditions nested together are used to form one single query.
And named_scope uses this feature of with_scope to form one single query from a chain of named scopes.
Writing our own named_scope called mynamed_scope
The best way to learn named_scope is by implementing the functionality of named_scope ourselves.
We will build this functionality incrementally.
To avoid any confusion we will call our implementation mynamed_scope.
To keep it simple, in the first iteration we will not support lambdas.
We will support only the simple conditions feature. Here is a usage of mynamed_scope.
We expect following queries to provide right result.
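The intended usage, with column names that are my assumptions:

```ruby
class User < ActiveRecord::Base
  mynamed_scope :active, :conditions => "status = 'active'"
  mynamed_scope :male,   :conditions => "sex = 'male'"
end

User.active.find(:all)        # only active users
User.male.find(:all)          # only male users
User.male.active.find(:all)   # both conditions in one single query
```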
Let’s implement mynamed_scope
At the top of user.rb add the following lines of code
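A no-op stub is enough to make the class load:

```ruby
# app/models/user.rb -- mynamed_scope does nothing yet
class User < ActiveRecord::Base
  def self.mynamed_scope(name, options = {})
  end

  mynamed_scope :active, :conditions => "status = 'active'"
  mynamed_scope :male,   :conditions => "sex = 'male'"
end
```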
Now if we do User in script/console, the code will not blow up.
Next we need to implement the machinery so that mynamed_scope creates class methods like active and male.
What we need is a place where each mynamed_scope can be stored.
If 7 mynamed_scopes are defined on User then we should have a way to get references to all of them.
We are going to add a class-level attribute myscopes which will store all the mynamed_scopes defined for that class.
This discussion is going to be tricky.
We are storing all mynamed_scope information in a variable called myscopes.
This will contain all the mynamed_scopes defined on User.
However we need one more way to track the scoping. When we execute User.active, the mynamed_scope active should be invoked on User. However when we perform User.male.active, the mynamed_scope active should be performed in the scope of User.male and not directly on User.
This is really crucial. Let’s try one more time. In the case of User.active, the condition that was supplied while defining the mynamed_scope active should act on User directly. However in the case of User.male.active, the condition that was supplied while defining the mynamed_scope active should be applied to the scope that was returned by User.male.
So we need a class which will store proxy_scope and proxy_options.
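A minimal version of that class, mirroring the shape of Rails’ own ActiveRecord::NamedScope::Scope:

```ruby
class Scope
  attr_reader :proxy_scope, :proxy_options

  def initialize(proxy_scope, proxy_options)
    @proxy_scope   = proxy_scope
    @proxy_options = proxy_options
  end
end
```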
Now the question is when to create an instance of the Scope class. The instance must be created at run time: when we execute User.male.active, until run time we don’t know the scope that active has to work upon. It means that User.male should return a scope, and on that scope active will operate.
So for User.male the proxy_scope is the User class. But for User.male.active, mynamed_scope ‘active’ gets (User.male) as the proxy_scope.
Also notice that proxy_scope happens to be the value of self.
Based on all that information we can now write the implementation of mynamed_scope like this.
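A sketch of the implementation, assuming the myscopes attribute introduced above:

```ruby
class ActiveRecord::Base
  def self.mynamed_scope(name, options = {})
    myscopes[name] = options
    # define a class method that builds a Scope at call time;
    # self is whatever the current scope happens to be
    (class << self; self; end).send(:define_method, name) do
      Scope.new(self, options)
    end
  end
end
```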
At this point the overall code looks like this.
What we get back is an instance of Scope. What we need now is a way to execute the SQL.
But executing the SQL can be tricky. Remember, each scope has a reference to the proxy_scope before it; this is the way all the scopes are chained together.
What we need to do is walk through the scope chain: if the previous proxy_scope is an instance of Scope, add the conditions from that scope via with_scope and move on to the previous proxy_scope. Keep walking and keep nesting the with_scope conditions until we find the end of the chain, where proxy_scope will NOT be an instance of Scope but a subclass of ActiveRecord::Base.
One way of finding out whether it is a scope or not is to see if it responds to find(:all). If the proxy_scope does not respond to find(:all) then keep going back, because in the end User will respond to the find(:all) method.
Now in script/console you will get an undefined method error for find. That is because find is not implemented by Scope.
Let’s implement method_missing.
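A sketch of it, using the proxy_scope and proxy_options readers from the Scope class:

```ruby
class Scope
  def method_missing(method, *args, &block)
    if proxy_scope.myscopes.include?(method)
      # the missing method is itself a mynamed_scope: keep chaining
      Scope.new(self, proxy_scope.myscopes[method])
    else
      # end of the chain: nest our conditions and delegate, e.g. to find
      with_scope(:find => proxy_options) do
        proxy_scope.send(method, *args, &block)
      end
    end
  end
end
```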
The statement User.active.male invokes the method male, and since male is not implemented by Scope we don’t want to call proxy_scope yet, because male might itself be a mynamed_scope. Hence in the above code a check is done to see whether the missing method is a declared mynamed_scope. If it is not, the call is sent to proxy_scope for execution. Pay attention to with_scope: because of it, all calls to proxy_scope are nested.
However the Scope class doesn’t implement the with_scope method, while the first proxy_scope, which will be User in our case, does. So we can delegate the with_scope method to proxy_scope like this.
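With ActiveSupport’s delegate macro this is a one-liner:

```ruby
class Scope
  delegate :with_scope, :to => :proxy_scope
end
```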
At this point the code looks like this.
Let’s check out the result in script/console.
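Putting the pieces together, here is a condensed, dependency-free sketch of the whole mechanism that runs outside Rails. FakeModel stands in for ActiveRecord::Base: its with_scope simply merges condition hashes (instead of AR's :find => {...} options) and find(:all) filters an in-memory array, so the chaining can be watched end to end.

```ruby
class Scope
  attr_reader :proxy_scope, :proxy_options

  def initialize(proxy_scope, proxy_options)
    @proxy_scope, @proxy_options = proxy_scope, proxy_options
  end

  def myscopes
    proxy_scope.myscopes
  end

  def with_scope(conditions, &block)
    proxy_scope.with_scope(conditions, &block)  # walk back toward the model
  end

  def method_missing(method, *args, &block)
    if myscopes.key?(method)
      Scope.new(self, myscopes[method])          # keep chaining scopes
    else
      with_scope(proxy_options) do               # nest our conditions
        proxy_scope.send(method, *args, &block)  # unwind the chain
      end
    end
  end
end

class FakeModel
  RECORDS = [
    {:name => 'Neil',   :sex => 'male',   :status => 'active'},
    {:name => 'Trisha', :sex => 'female', :status => 'active'},
    {:name => 'John',   :sex => 'male',   :status => 'inactive'}
  ]

  @@myscopes      = {}
  @@current_scope = {}

  def self.myscopes
    @@myscopes
  end

  def self.mynamed_scope(name, conditions)
    @@myscopes[name] = conditions
    (class << self; self; end).send(:define_method, name) do
      Scope.new(self, conditions)
    end
  end

  # merge conditions for the duration of the block, like AR's with_scope
  def self.with_scope(conditions)
    previous = @@current_scope
    @@current_scope = previous.merge(conditions)
    yield
  ensure
    @@current_scope = previous
  end

  def self.find(_all)
    RECORDS.select { |r| @@current_scope.all? { |k, v| r[k] == v } }
  end
end

FakeModel.mynamed_scope :male,   :sex => 'male'
FakeModel.mynamed_scope :active, :status => 'active'

names = FakeModel.male.active.find(:all).map { |r| r[:name] }
puts names.inspect  # => ["Neil"]
```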
named_scope supports a lot more than what we have shown: it supports passing a lambda instead of conditions, and it also supports joins and extensions.
However, in the process of building mynamed_scope we got to see the inner workings of the named_scope implementation.
The above code was tested with Ruby 1.8.7 and Rails 2.3.
While developing a Rails application you must have seen this.
We all know that this message is added by Rails; it is called whiny nil.
If you open your config/development.rb file you will see
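The line in question is a one-line configuration flag:

```ruby
# config/environments/development.rb
config.whiny_nils = true
```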
Simply stated, it means that if the application happens to invoke id on a nil object then an error should be thrown.
Rails assumes that under no circumstance does a developer want to find the id of
a nil object, so this must be an error case and Rails throws an error.
The question I have is: why 4?
Why did Matz choose the id of nil to be 4?
This awesome presentation
on ‘Ruby Internals’ has the answer.
In short, Matz decided to reserve all the odd object ids for numerical values.
Check this out.
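The original post demonstrated this with nil.id and friends on Ruby 1.8. The same pattern is still visible through object_id on MRI, where an integer n has object id 2n + 1. One caveat: nil.object_id is 4 only on Ruby 1.8; on 1.9 and later it is 8, though the odd integer ids remain.

```ruby
# integer n gets object id 2n + 1 on MRI
puts 0.object_id  # => 1
puts 1.object_id  # => 3
puts 2.object_id  # => 5
puts 3.object_id  # => 7
```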
The ids 1, 3, 5 and 7 are taken by 0, 1, 2 and 3.
That leaves the even ids 0, 2, 4 and higher.
FALSE has the id 0 and TRUE has the id 2.
The next available id left is 4, and that is taken by NIL.
We won’t even be discussing this issue once Ruby 1.9 comes out,
because there we will have to use object_id and this won’t be an issue.
You can follow more discussion about this article at
Hacker News.