Ruby 2.5 has removed top level constant lookup

This blog is part of our Ruby 2.5 series.

Ruby 2.5.0-preview1 was recently released.

Ruby 2.4

irb> class Project
irb> end
=> nil

irb> class Category
irb> end
=> nil

irb> Project::Category
(irb):5: warning: toplevel constant Category referenced by Project::Category
 => Category

Ruby 2.4 returns the top level constant, with a warning, if it is unable to find the constant in the specified scope.

This does not work well in cases where we need constants with the same name to be defined both at the top level and within another scope.

Ruby 2.5.0-preview1

irb> class Project
irb> end
=> nil

irb> class Category
irb> end
=> nil

irb> Project::Category
NameError: uninitialized constant Project::Category
Did you mean?  Category
	from (irb):5

Ruby 2.5 raises an error if it is unable to find the constant in the specified scope.
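Under Ruby 2.5's stricter lookup, the fix is to define the nested constant explicitly. A minimal sketch:

```ruby
class Project
end

class Category
end

# Define the constant explicitly inside the Project namespace
# instead of relying on top level constant lookup.
class Project
  class Category
  end
end

Project::Category # => Project::Category
Category          # => Category (the top level constant is separate)
```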

Here are the relevant commit and discussion.

Scheduling pods on nodes in Kubernetes using labels

This post assumes that you have basic understanding of Kubernetes terms like pods, deployments and nodes.

A Kubernetes cluster can have many nodes. Each node in turn can run multiple pods. By default Kubernetes manages which pod will run on which node, and this is something we usually do not need to worry about.

However, sometimes we want to ensure that certain pods do not run on the same node. For example, say we have an application called wheel. We have both staging and production versions of this app, and we want to ensure that a production pod and a staging pod never end up on the same host.

To ensure that certain pods do not run on the same host, we can use the nodeSelector constraint in the PodSpec to schedule pods on specific nodes.

Kubernetes cluster

We will use kops to provision the cluster. We can check the health of the cluster using kops validate cluster.

$ kops validate cluster
Using cluster from kubectl context: test-k8s.nodes-staging.com

Validating cluster test-k8s.nodes-staging.com

INSTANCE GROUPS
NAME              ROLE   MACHINETYPE MIN MAX SUBNETS
master-us-east-1a Master m4.large    1   1 us-east-1a
master-us-east-1b Master m4.large    1   1 us-east-1b
master-us-east-1c Master m4.large    1   1 us-east-1c
nodes-wheel-stg   Node   m4.large    2   5 us-east-1a,us-east-1b
nodes-wheel-prd   Node   m4.large    2   5 us-east-1a,us-east-1b

NODE STATUS
           NAME                ROLE   READY
ip-192-10-110-59.ec2.internal  master True
ip-192-10-120-103.ec2.internal node   True
ip-192-10-42-9.ec2.internal    master True
ip-192-10-73-191.ec2.internal  master True
ip-192-10-82-66.ec2.internal   node   True
ip-192-10-72-68.ec2.internal   node   True
ip-192-10-182-70.ec2.internal  node   True

Your cluster test-k8s.nodes-staging.com is ready

Here we can see that there are two instance groups for nodes: nodes-wheel-stg and nodes-wheel-prd.

nodes-wheel-stg might have application pods like pod-wheel-stg-sidekiq, pod-wheel-stg-unicorn and pod-wheel-stg-redis. Similarly, nodes-wheel-prd might have application pods like pod-wheel-prd-sidekiq, pod-wheel-prd-unicorn and pod-wheel-prd-redis.

As we can see, the Max number of nodes for the instance groups nodes-wheel-stg and nodes-wheel-prd is 5. This means that if new nodes are created in the future, they will automatically be labelled based on their instance group, and no manual work is required.

Labelling a Node

We will use Kubernetes labels to label a node. To add a label, we need to edit the instance group using kops.

$ kops edit ig nodes-wheel-stg

This will open the instance group configuration file. We will add the following label in the instance group spec.

nodeLabels:
  type: wheel-stg

The complete instance group configuration looks like this.

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-10-12T06:24:53Z
  labels:
    kops.k8s.io/cluster: k8s.nodes-staging.com
  name: nodes-wheel-stg
spec:
  image: kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-07-28
  machineType: m4.large
  maxSize: 5
  minSize: 2
  nodeLabels:
    type: wheel-stg
  role: Node
  subnets:
  - us-east-1a
  - us-east-1b
  - us-east-1c

Similarly, we can label the instance group nodes-wheel-prd with the label type: wheel-prd.

After making the changes, update the cluster using kops rolling-update cluster --yes --force. This will update the cluster with the specified labels.

New nodes added in future will have labels based on respective instance groups.

Once the nodes are labelled, we can verify the labels using kubectl describe node.

$ kubectl describe node ip-192-10-82-66.ec2.internal
Name:               ip-192-10-82-66.ec2.internal
Roles:              node
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=m4.large
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=us-east-1
                    failure-domain.beta.kubernetes.io/zone=us-east-1a
                    kubernetes.io/hostname=ip-192-10-82-66.ec2.internal
                    kubernetes.io/role=node
                    type=wheel-stg

In this way we have our nodes labelled using kops.

Labelling nodes using kubectl

We can also label a node using kubectl.

$ kubectl label node ip-192-20-44-136.ec2.internal type=wheel-stg

After labelling a node, we add the nodeSelector field to the PodSpec in the deployment template.

We will add the following block in deployment manifest.

nodeSelector:
  type: wheel-stg

Here is the original deployment manifest with this configuration added.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: test-staging-node
  labels:
    app: test-staging
  namespace: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-staging
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-staging
        imagePullPolicy: Always
        env:
        - name: REDIS_HOST
          value: test-staging-redis
        - name: APP_ENV
          value: staging
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      nodeSelector:
        type: wheel-stg
      imagePullSecrets:
        - name: registrykey

Let’s launch this deployment and check where the pod is scheduled.

$ kubectl apply -f test-deployment.yml
deployment "test-staging-node" created

We can verify that our pod is running on a node labelled type=wheel-stg.

$ kubectl describe pod test-staging-2751555626-9sd4m
Name:           test-staging-2751555626-9sd4m
Namespace:      default
Node:           ip-192-10-82-66.ec2.internal/192.10.82.66
...
...
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
QoS Class:       Burstable
Node-Selectors:  type=wheel-stg
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

Similarly, we can run production pods on nodes labelled with type: wheel-prd.

Please note that when we specify a nodeSelector and no node matches the label, the pods remain in the Pending state, as the scheduler cannot find a node with a matching label.

In this way we schedule our pods to run on specific nodes for certain use-cases.

SAML integration with multiple IDPs using Devise & OmniAuth

Recently, we integrated our SAML service provider (SP) with multiple identity providers (IDPs) to facilitate single sign-on (SSO) using Devise with OmniAuth.

Before we jump into the specifics, here is the SAML definition from Wikipedia.

Security Assertion Markup Language (SAML, pronounced sam-el) is an open standard for exchanging authentication and authorization data between parties, in particular, between an identity provider (IDP) and a service provider (SP).

The choice of Devise with OmniAuth-SAML to build SAML SSO capabilities was natural to us, as we already had a dependency on Devise, and OmniAuth integrates nicely with Devise.

Here is the official overview on how to integrate OmniAuth with Devise.

After following the overview, this is how our config and user.rb looked.

# config file
Devise.setup do |config|
  config.omniauth :saml,
    idp_cert_fingerprint: 'fingerprint',
    idp_sso_target_url: 'target_url'
end

# user.rb file
devise :omniauthable, :omniauth_providers => [:saml]

The problem with the above configuration is that it supports only one SAML IDP.

To have support for multiple IDPs, we re-defined files as below.

# config file
Devise.setup do |config|
  config.omniauth :saml_idp1,
    idp_cert_fingerprint: 'fingerprint-1',
    idp_sso_target_url: 'target_url-1',
    strategy_class: ::OmniAuth::Strategies::SAML,
    name: :saml_idp1

  config.omniauth :saml_idp2,
    idp_cert_fingerprint: 'fingerprint-2',
    idp_sso_target_url: 'target_url-2',
    strategy_class: ::OmniAuth::Strategies::SAML,
    name: :saml_idp2
end

# user.rb file
devise :omniauthable, :omniauth_providers => [:saml_idp1, :saml_idp2]

Let’s go through the changes one by one.

1. Custom Providers: Instead of using the standard provider saml, we configured custom providers (saml_idp1, saml_idp2) in the first line of the configuration as well as in user.rb.

2. Strategy Class: In case of the standard provider(saml), Devise can figure out strategy_class on its own. For custom providers, we need to explicitly specify it.

3. OmniAuth Unique Identifier: After making the above two changes, everything worked fine except the OmniAuth URLs. For some reason, OmniAuth was still listening on the saml scoped path instead of the new provider names saml_idp1 and saml_idp2.

# Actual metadata path used by OmniAuth
/users/auth/saml/metadata

# Expected metadata path
/users/auth/saml_idp1/metadata
/users/auth/saml_idp2/metadata

After digging into the Devise and OmniAuth code bases, we discovered the provider name configuration. In the absence of this configuration, OmniAuth falls back to the strategy class name to build the path. We could not find any code in Devise which defined name for OmniAuth, which explained the saml scoped path (we were expecting Devise to pass name, assigning it the same value as the provider).

After adding the name configuration, OmniAuth started listening on the correct URLs.

4. Callback Actions: Lastly, we added both actions in OmniauthCallbacksController:

class Users::OmniauthCallbacksController < Devise::OmniauthCallbacksController

  def saml_idp1
    # Implementation
  end

  def saml_idp2
    # Implementation
  end

  # ...
  # Rest of the actions
end

With these changes along with the official guide mentioned above, our SP was able to authenticate users from multiple IDPs.

Rails 5.2 adds expiry option for signed and encrypted cookies and adds relative expiry time

In Rails 5.1, we have the option to set an expiry for cookies.

cookies[:username] = { value: "sam_smith", expires: Time.now + 4.hours }

The above code sets a cookie which expires in 4 hours.

The expires option, however, is not supported for signed and encrypted cookies. In other words, we are not able to decide on the server side when an encrypted or signed cookie should expire.

From Rails 5.2, we’ll be able to set an expiry for encrypted and signed cookies as well.

cookies.encrypted[:firstname] = { value: "Sam", expires: Time.now + 1.day }
# sets the string `Sam` in an encrypted `firstname` cookie for 1 day.

cookies.signed[:lastname] = { value: "Smith", expires: Time.now + 1.hour }
# sets the string `Smith` in a signed `lastname` cookie for 1 hour.

Apart from this, in Rails 5.1, we needed to provide an absolute date/time value for the expires option.

# setting a cookie for 90 minutes from the current time.
cookies[:username] = { value: "Sam", expires: Time.now + 90.minutes }

Starting with Rails 5.2, we’ll be able to set the expires option by giving a relative duration as the value.

# setting a cookie for 90 minutes from the current time.
cookies[:username] = { value: "Sam", expires: 90.minutes }

# After 1 hour
> cookies[:username]
#=> "Sam"

# After 2 hours
> cookies[:username]
#=> nil

Optimize JavaScript code for composability with Ramda.js

In this blog R stands for Ramda.js. More on this later.

Here is code without R.

function isUnique(element, selector) {
  const parent = element.parentNode;
  const elements = parent.querySelectorAll(selector);
  return (elements.length === 1) && (elements[0] === element);
}

Code with R.

function isUnique(element, selector) {
  const querySelectorAll = R.invoker(1, 'querySelectorAll')(selector);

  return R.pipe(
    R.prop('parentNode'),
    querySelectorAll,
    elements => R.equals(R.length(elements), 1) &&
                R.equals(elements[0], element)
  )(element);
}

Is the refactored code better?

What is R? What’s invoker? What’s pipe?

The “code without R” reads fine, and even a person who has just started learning JavaScript can understand it. Then why take on all this extra complexity? Shouldn’t we be writing code that is easier to understand?

Good questions. Who could be against writing code that is easier to understand?

If all I’m writing is a function called isUnique then of course the “before version” is simpler. However, this function is part of a much bigger piece of software with thousands of lines of code.

A big piece of software is nothing but a collection of smaller pieces of code. We compose those pieces together to make the whole thing work.

We need to optimize for composability, and as we write more composable code, we are finding that composable code is also easier to read.

At BigBinary we have been experimenting with composability. We previously wrote a blog on how using Recompose is making our React components more composable.

Now we are trying same techniques at pure JavaScript level using Ramda.js.

Let’s take a look at another example.

Example 2

We have a list of users with name and status.

var users = [ { name: "John", status: "Active"},
              { name: "Mike", status: "Inactive"},
              { name: "Rachel", status: "Active" }
             ]

We need to find all active users. Here is a version without R.

jsfiddle

var activeUsers = function(users) {
  return users.filter(function(user) {
    var status = user.status;
    return status === 'Active';
  });
};

Here is code with R.

jsfiddle

var isStatusActive = R.propSatisfies(R.equals("Active"), 'status');
var active = R.filter(isStatusActive);
var result = active(users);

Now let’s say that the user data changes and we have a user with an empty name. We don’t want to include such users. The data now looks like this.

var users = [ { name: "John", status: "Active"},
              { name: "Mike", status: "Inactive"},
              { name: "Rachel", status: "Active" },
              { name: "",       status: "Active" },
             ]

Here is modified code without R.

jsfiddle

var activeUsers = function(users) {
  return users.filter(function(user) {
    var status = user.status;
    var name = user.name;
    return name !== null &&
           name !== undefined &&
           name.length !== 0 &&
           status === 'Active';
  });
};

Here is modified code with R.

jsfiddle

var isStatusActive = R.propSatisfies(R.equals("Active"), 'status');
var active = R.filter(isStatusActive);
var isNameEmpty = R.propSatisfies(R.isEmpty, 'name');
var rejectEmptyNames = R.reject(isNameEmpty);
var result = R.pipe(active, rejectEmptyNames)(users);
log(result);

Notice the change we needed to make to accommodate this request.

In the non-R version, we had to get into the guts of the function and add logic. In the R version, we added a new function and simply composed it with the old function using pipe. We did not change the existing function.

Now let’s say that we don’t want all the users but just the first two users. We know what needs to change in the non-R version. In the R version, all we need to do is add R.take(2), and no existing function changes at all.

Here is the final code.

jsfiddle

var isStatusActive = R.propSatisfies(R.equals("Active"), 'status');
var active = R.filter(isStatusActive);
var isNameEmpty = R.propSatisfies(R.isEmpty, 'name');
var rejectEmptyNames = R.reject(isNameEmpty);

var result = R.pipe(active, rejectEmptyNames, R.take(2))(users);
log(result);

Data comes at the end

Another thing to notice is that nowhere in the R version have we said that we are acting on users. None of the functions mention users. In fact, the functions do not take any argument explicitly, since they are curried. When we want the result, we pass users as the argument, but it could just as well be articles and our code would still hold.

This is pointfree programming. We do not need to know about “pointfree” since this style comes naturally when we write with R.
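The same data-last, curried style can be sketched in plain JavaScript without Ramda (filterBy and isActive here are hypothetical helpers, not Ramda functions):

```javascript
// Data-last currying: the function is built first, the data comes at the end.
const filterBy = (pred) => (list) => list.filter(pred);
const isActive = (user) => user.status === 'Active';

// `active` mentions no data; it is just a function waiting for a list.
const active = filterBy(isActive);

const users = [
  { name: 'John', status: 'Active' },
  { name: 'Mike', status: 'Inactive' }
];

active(users); // => [{ name: 'John', status: 'Active' }]
```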

I’m still not convinced that Ramda.js is solving any real problem

No problem.

Please watch the Hey Underscore, You’re Doing It Wrong! video by Brian Lonsdorf. Hopefully that will convince you to give Ramda.js a try.

If you are still not convinced, the author of Ramda.js has written a series of blogs called Thinking in Ramda. Please read the blogs. Slowly.

Ramda brings functional concepts to JavaScript

Functional programming is another way of thinking about code. When we move to Elm, Haskell or Elixir to get functional concepts, we are wrestling with two things at once: a new language and functional concepts.

Ramda.js brings functional concepts to JavaScript. In this way we can slowly start using functional concepts in our day to day JavaScript code.

The best part is that if you write any JavaScript code then you can start using Ramda.js today. Whether you are using React.js or Angular.js, it’s all JavaScript and you can use Ramda.js.

Elm Conf 2017 Summary

I attended Elm Conf 2017 US last week alongside the Strangeloop conference. I was looking forward to the conference to learn what the Elm community is working on, what problems people are facing, and what they are doing to overcome them.

After attending the conference, I can say that the Elm community is growing strong. The conference was attended by around 350 people, and many were using Elm in production. Many more wanted to try Elm in production.

There was a lot of enthusiasm about starting new Elm meetups. As a Ruby on Rails and React meetup organizer myself, I was genuinely interested in hearing the experiences of seasoned meetup organizers. In general, Evan and Richard prefer meetups to be places where people form small groups and hack on something rather than one person teaching the whole group.

I liked all the talks. There was variety in the topics and the speakers were all seasoned. Kudos to the organizers for putting up a great program. Below is a quick summary of my thoughts from the conference.

Keynote by Evan

Evan talked about the work he has been doing for the upcoming release of Elm. He discussed the optimization work related to code splitting, code generation and minification for speeding up building and delivering single page apps using Elm. He made another interesting point: he has changed the codegen, which generates the JS code from Elm code, twice, and nobody noticed. Things like this give him a huge opportunity to change and improve existing designs, which he has been doing for the upcoming release.

In the end he mentioned that his philosophy is not to rush things. It’s better to do things right than to do them now.

After the keynote, he encouraged people to talk to him about what they are working on which was really nice.

Accessibility with Elm

Tessa talked about her work around adding accessibility support for Elm apps. She talked about design decisions, prior art and some of the challenges she faced while working on the library like working with tabs, interactive elements and images. There was a question at the end about whether this will be incorporated into Elm core but Evan mentioned that it might take some time.

Putting the Elm Platform in the Browser

Luke, the creator of Ellie (a way to easily share your Elm code with others online), talked about how he started with Ellie. He talked about the problems he had to face in implementing and sustaining Ellie through ads. During the talk, he also open sourced the code, so we can now see it on GitHub.

Luke mentioned how he changed the architecture of Ellie from mostly running on the server to running in the browser using service workers. He discussed future plans about sustaining Ellie, building an Elm editor instead of using Codemirror, getting rid of ads and making Ellie better for everyone.

The Importance of Ports

In other compile-to-JavaScript languages like PureScript and BuckleScript, invoking native JavaScript functions is easy. In Elm, one has to use “Ports”. Using Ports requires some extra work, but in return we get more safety.

Murphy Randle presented a case where he was using too many ports, which was resulting in fragmented code. He discussed how ports are based on the Actor Model, and how once we understand that, using ports becomes much easier. He also showed the refactored code.

Murphy also runs Elm Town Podcast. Listen to episode 13 to know more about Ports.

Keynote by Richard Feldman

Richard talked about his experiences in teaching beginners about Elm. He has taught Elm a lot. He has done an extensive Elm course on Frontend Masters. He is currently writing the Elm in Action book.

He talked about finding the motivation to teach using the SWBAT technique. It helped him in deciding the agenda and finding a direct path for teaching. He mentioned that in the beginning, being precise and detailed is not important. This resonated with me, as the most important thing for anyone getting started is to begin with the most basic things and then iterate again and again.

Parting thoughts

The Elm community is small, tight-knit, very friendly and warm. Lots of people are trying a lot of cool things. Elm Slack came up in discussions again and again as a good place for beginners to seek help.

When I first heard about Elm, it was about good compiler errors and runtime safety. However, after attending the conference, I am mighty impressed with the Elm community as well.

Big props to Brian and Luke for organizing the conference!

All the videos from the conference are already getting uploaded here.

Ruby 2.4 has optimized enumerable min max methods

This blog is part of our Ruby 2.4 series.

Enumerables in Ruby have min, max and minmax comparison methods which are quite convenient to use.

(1..99).min         #=> 1
(1..99).max         #=> 99
(1..99).minmax      #=> [1, 99]

In Ruby 2.4, the Enumerable#min, Enumerable#max and Enumerable#minmax methods are more optimized.

We ran the following benchmark snippet for both Ruby 2.3 and Ruby 2.4 and observed the results.

require 'benchmark/ips'

Benchmark.ips do |bench|
  NUM1 = 1_000_000.times.map { rand }

  ENUM_MIN = Enumerable.instance_method(:min).bind(NUM1)
  ENUM_MAX = Enumerable.instance_method(:max).bind(NUM1)
  ENUM_MINMAX = Enumerable.instance_method(:minmax).bind(NUM1)

  bench.report('Enumerable#min') do
    ENUM_MIN.call
  end

  bench.report('Enumerable#max') do
    ENUM_MAX.call
  end

  bench.report('Enumerable#minmax') do
    ENUM_MINMAX.call
  end
end

Results for Ruby 2.3

Warming up --------------------------------------
      Enumerable#min     1.000  i/100ms
      Enumerable#max     1.000  i/100ms
   Enumerable#minmax     1.000  i/100ms
Calculating -------------------------------------
      Enumerable#min     14.810  (±13.5%) i/s -     73.000  in   5.072666s
      Enumerable#max     16.131  (± 6.2%) i/s -     81.000  in   5.052324s
   Enumerable#minmax     11.758  (± 0.0%) i/s -     59.000  in   5.026007s

Results for Ruby 2.4

Warming up --------------------------------------
      Enumerable#min     1.000  i/100ms
      Enumerable#max     1.000  i/100ms
   Enumerable#minmax     1.000  i/100ms
Calculating -------------------------------------
      Enumerable#min     18.091  (± 5.5%) i/s -     91.000  in   5.042064s
      Enumerable#max     17.539  (± 5.7%) i/s -     88.000  in   5.030514s
   Enumerable#minmax     13.086  (± 7.6%) i/s -     66.000  in   5.052537s

From the above benchmark results, it can be seen that there has been an improvement in the run times for the methods.

Internally, Ruby has changed the logic by which objects are compared, which results in these methods being optimized. You can have a look at the commits here and here.

CSV::Row#each etc. return enumerator when no block given

This blog is part of our Ruby 2.4 series.

In Ruby 2.3, these methods do not return an enumerator when no block is given.

Ruby 2.3

CSV::Row.new(%w(banana mango), [1,2]).each #=> #<CSV::Row "banana":1 "mango":2>

CSV::Row.new(%w(banana mango), [1,2]).delete_if #=> #<CSV::Row "banana":1 "mango":2>

Some methods raise an exception because of this behavior.

> ruby -rcsv -e 'CSV::Table.new([CSV::Row.new(%w{banana mango}, [1, 2])]).by_col.each'
 #=> /Users/sushant/.rbenv/versions/2.3.0/lib/ruby/2.3.0/csv.rb:850:in `block in each': undefined method `[]' for nil:NilClass (NoMethodError)
  from /Users/sushant/.rbenv/versions/2.3.0/lib/ruby/2.3.0/csv.rb:850:in `each'
  from /Users/sushant/.rbenv/versions/2.3.0/lib/ruby/2.3.0/csv.rb:850:in `each'
  from -e:1:in `<main>'

Ruby 2.4 fixed this issue.

Ruby 2.4

CSV::Row.new(%w(banana mango), [1,2]).each #=> #<Enumerator: #<CSV::Row "banana":1 "mango":2>:each>

CSV::Row.new(%w(banana mango), [1,2]).delete_if #=> #<Enumerator: #<CSV::Row "banana":1 "mango":2>:delete_if>

As we can see, these methods now return an enumerator when no block is given.
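Because an enumerator is returned, it can be chained with other Enumerable methods. A small illustration:

```ruby
require 'csv'

row = CSV::Row.new(%w(banana mango), [1, 2])

# Without a block, each returns an enumerator over [header, field]
# pairs, which can be chained like any other enumerator.
doubled = row.each.map { |_header, field| field * 2 }
doubled # => [2, 4]
```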

In Ruby 2.4, the following code does not raise any exception.

> ruby -rcsv -e 'CSV::Table.new([CSV::Row.new(%w{banana mango}, [1, 2])]).by_col.each'

DateTime#to_time and Time#to_time preserve the receiver's timezone offset info in Ruby 2.4

This blog is part of our Ruby 2.4 series.

In Ruby, DateTime#to_time and Time#to_time methods can be used to return a Time object.

In Ruby 2.3, these methods convert the time to the system timezone offset instead of preserving the timezone offset of the receiver.

Ruby 2.3

> datetime = DateTime.strptime('2017-05-16 10:15:30 +09:00', '%Y-%m-%d %H:%M:%S %Z')
 #=> #<DateTime: 2017-05-16T10:15:30+09:00 ((2457890j,4530s,0n),+32400s,2299161j)>
> datetime.to_time
 #=> 2017-05-16 06:45:30 +0530

> time = Time.new(2017, 5, 16, 10, 15, 30, '+09:00')
 #=> 2017-05-16 10:15:30 +0900
> time.to_time
 #=> 2017-05-16 06:45:30 +0530

As you can see, the DateTime#to_time and Time#to_time methods return the time in the system timezone offset +0530.

Ruby 2.4 fixed DateTime#to_time and Time#to_time.

Now, DateTime#to_time and Time#to_time preserve receiver’s timezone offset info.

Ruby 2.4

> datetime = DateTime.strptime('2017-05-16 10:15:30 +09:00', '%Y-%m-%d %H:%M:%S %Z')
 #=> #<DateTime: 2017-05-16T10:15:30+09:00 ((2457890j,4530s,0n),+32400s,2299161j)>
> datetime.to_time
 #=> 2017-05-16 10:15:30 +0900

> time = Time.new(2017, 5, 16, 10, 15, 30, '+09:00')
 #=> 2017-05-16 10:15:30 +0900
> time.to_time
 #=> 2017-05-16 10:15:30 +0900

Since this is a breaking change for Rails applications upgrading to Ruby 2.4, Rails 4.2.8 built a compatibility layer by adding a config option: ActiveSupport.to_time_preserves_timezone was added to control how to_time handles timezone offsets.

Here is an example of how an application behaves when to_time_preserves_timezone is set to false.

> ActiveSupport.to_time_preserves_timezone = false

> datetime = DateTime.strptime('2017-05-16 10:15:30 +09:00', '%Y-%m-%d %H:%M:%S %Z')
 #=> Tue, 16 May 2017 10:15:30 +0900
> datetime.to_time
 #=> 2017-05-16 06:45:30 +0530

> time = Time.new(2017, 5, 16, 10, 15, 30, '+09:00')
 #=> 2017-05-16 10:15:30 +0900
> time.to_time
 #=> 2017-05-16 06:45:30 +0530

Here is an example of how an application behaves when to_time_preserves_timezone is set to true.

> ActiveSupport.to_time_preserves_timezone = true

> datetime = DateTime.strptime('2017-05-16 10:15:30 +09:00', '%Y-%m-%d %H:%M:%S %Z')
 #=> Tue, 16 May 2017 10:15:30 +0900
> datetime.to_time
 #=> 2017-05-16 10:15:30 +0900

> time = Time.new(2017, 5, 16, 10, 15, 30, '+09:00')
 #=> 2017-05-16 10:15:30 +0900
> time.to_time
 #=> 2017-05-16 10:15:30 +0900

Using Recompose to build higher-order components

Recompose is a toolkit for writing React components using higher-order components. Recompose allows us to write many smaller higher-order components and then we compose all those components together to get the desired component. It improves both readability and the maintainability of the code.

HigherOrderComponents are also written as HOC. Going forward we will use HOC to refer to higher-order components.
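A HOC is just a function that takes a component and returns a new component. Here is a minimal, React-free sketch of the idea (components are modeled as plain functions from props to output; withDefaultProps is a hypothetical example, not part of Recompose):

```javascript
// A HOC: a function from component to component.
const withDefaultProps = (defaults) => (Component) => (props) =>
  Component({ ...defaults, ...props });

const Greeting = ({ name }) => `Hello, ${name}!`;
const GreetingWithDefault = withDefaultProps({ name: 'Guest' })(Greeting);

GreetingWithDefault({});              // => 'Hello, Guest!'
GreetingWithDefault({ name: 'Sam' }); // => 'Hello, Sam!'
```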

Using Recompose in an e-commerce application

We are working on an e-commerce application and we need to build a payment page. Here are the modes of payment:

  • Online
  • Cash on delivery
  • Swipe on delivery

We need to render our React components depending upon the payment mode selected by the user. Typically we render components based on some state.

Here is the traditional way of writing code.

state = {
  showPayOnlineScreen: true,
  showCashOnDeliveryScreen: false,
  showSwipeOnDeliveryScreen: false,
}

renderMainScreen = () => {
  const { showCashOnDeliveryScreen, showSwipeOnDeliveryScreen } = this.state;

  if (showCashOnDeliveryScreen) {
    return <CashOnDeliveryScreen />;
  } else if (showSwipeOnDeliveryScreen) {
    return <SwipeOnDeliveryScreen />;
  }
  return <PayOnlineScreen />;
}

render() {
  return this.renderMainScreen();
}

We will try to refactor the code using the tools provided by Recompose.

In general, the guiding principle of functional programming is composition. So here we will assume that the default payment mechanism is online. If the payment mode happens to be something else then we will take care of it by enhancing the existing component.

So to start with our code would look like this.

state = {
  paymentType: 'ONLINE',
}

render() {
  return (
    <PayOnline {...this.state} />
  );
}

First let’s handle the case of payment mode CashOnDelivery.

import { branch, renderComponent, renderNothing } from 'recompose';
import CashScreen from 'components/payments/cashScreen';

const cashOnDelivery = 'CASH_ON_DELIVERY';

const enhance = branch(
  (props) => (props.paymentType === cashOnDelivery),
  renderComponent(CashScreen),
  renderNothing
)

Recompose has a branch function which acts like a ternary operator.

The branch function accepts three arguments and returns a HOC. The first argument is a predicate which accepts props as its argument and returns a Boolean value. The second and third arguments are higher-order components. If the predicate evaluates to true, then the left HOC is applied; otherwise the right HOC is applied. Here is the signature of branch.

branch(
  test: (props: Object) => boolean,
  left: HigherOrderComponent,
  right: ?HigherOrderComponent
): HigherOrderComponent

Notice the question mark in ?HigherOrderComponent. It means that the third argument is optional.

If you are familiar with Ramda.js then this is similar to ifElse in Ramda.js.

renderComponent takes a component and returns an HOC version of it.

renderNothing is an HOC which will always render null.

Since the third argument to branch is optional, we do not need to supply it. If we don’t supply the third argument then that means the original component will be rendered.
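To build intuition, here is a simplified, React-free sketch of how a branch-like combinator behaves. This is illustrative only, not Recompose's actual implementation; components are modeled as plain functions from props to output, and branchLike and renderComponentLike are made-up names:

```javascript
// branchLike: if test(props) is true, apply the left HOC; otherwise
// apply the right HOC, which defaults to identity (render the original).
const identity = (Component) => Component;

const branchLike = (test, left, right = identity) => (BaseComponent) =>
  (props) => (test(props) ? left : right)(BaseComponent)(props);

// renderComponentLike: an HOC that always renders the given component.
const renderComponentLike = (Component) => () => Component;

const CashScreen = () => 'cash screen';
const PayOnlineScreen = () => 'online screen';

const enhance = branchLike(
  (props) => props.paymentType === 'CASH_ON_DELIVERY',
  renderComponentLike(CashScreen)
);
const MainScreen = enhance(PayOnlineScreen);

MainScreen({ paymentType: 'CASH_ON_DELIVERY' }); // => 'cash screen'
MainScreen({ paymentType: 'ONLINE' });           // => 'online screen'
```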

So now we can make our code shorter by removing usage of renderNothing.

const enhance = branch(
  (props) => (props.paymentType === cashOnDelivery),
  renderComponent(CashScreen)
)

const MainScreen = enhance(PayOnlineScreen);

The next condition is handling SwipeOnDelivery.

SwipeOnDelivery means that upon delivery the customer pays by credit card using Square or a similar tool.

We will follow the same pattern and the code might look like this.

import { branch, renderComponent } from 'recompose';
import CashScreen from 'components/payments/CashScreen';
import PayOnlineScreen from 'components/payments/PayOnlineScreen';
import CardScreen from 'components/payments/CardScreen';

const cashOnDelivery = 'CASH_ON_DELIVERY';
const swipeOnDelivery = 'SWIPE_ON_DELIVERY';

let enhance = branch(
  (props) => (props.paymentType === cashOnDelivery)
  renderComponent(CashScreen),
)

enhance = branch(
  (props) => (props.paymentType === swipeOnDelivery),
  renderComponent(CardScreen),
)(enhance)

const MainScreen = enhance(PayOnlineScreen);

Extracting out predicates

Let’s extract predicates into their own functions.

import { branch, renderComponent } from 'recompose';
import CashScreen from 'components/payments/CashScreen';
import PayOnlineScreen from 'components/payments/PayOnlineScreen';
import CardScreen from 'components/payments/CardScreen';

const cashOnDelivery = 'CASH_ON_DELIVERY';
const swipeOnDelivery = 'SWIPE_ON_DELIVERY';

// predicates
const isCashOnDelivery = ({ paymentType }) =>
  (paymentType === cashOnDelivery);

const isSwipeOnDelivery = ({ paymentType }) =>
  (paymentType === swipeOnDelivery);

let enhance = branch(
  isCashOnDelivery,
  renderComponent(CashScreen),
)

enhance = branch(
  isSwipeOnDelivery,
  renderComponent(CardScreen),
)(enhance)

const MainScreen = enhance(PayOnlineScreen);

Adding one more payment method

Let’s say that next we need to add support for Bitcoin.

We can use the same process.

const cashOnDelivery = 'CASH_ON_DELIVERY';
const swipeOnDelivery = 'SWIPE_ON_DELIVERY';
const bitcoinOnDelivery = 'BITCOIN_ON_DELIVERY';

const isCashOnDelivery = ({ paymentType }) =>
  (paymentType === cashOnDelivery);

const isSwipeOnDelivery = ({ paymentType }) =>
  (paymentType === swipeOnDelivery);

const isBitcoinOnDelivery = ({ paymentType }) =>
  (paymentType === bitcoinOnDelivery);

let enhance = branch(
  isCashOnDelivery,
  renderComponent(CashScreen),
)

enhance = branch(
  isSwipeOnDelivery,
  renderComponent(CardScreen),
)(enhance)

enhance = branch(
  isBitcoinOnDelivery,
  renderComponent(BitcoinScreen),
)(enhance)

const MainScreen = enhance(PayOnlineScreen);

You can see the pattern, and it is getting repetitive. We can chain these conditions together to make the code less repetitive.

Let’s use the compose function and chain them.

const isCashOnDelivery = ({ paymentType }) =>
  (paymentType === cashOnDelivery);

const isSwipeOnDelivery = ({ paymentType }) =>
  (paymentType === swipeOnDelivery);

const cashOnDeliveryCondition = branch(
  isCashOnDelivery,
  renderComponent(CashScreen),
)

const swipeOnDeliveryCondition = branch(
  isSwipeOnDelivery,
  renderComponent(CardScreen),
)

const enhance = compose(
  cashOnDeliveryCondition,
  swipeOnDeliveryCondition,
)

const MainScreen = enhance(PayOnlineScreen);
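The compose helper used above can be modeled as ordinary right-to-left function composition. This is a sketch under that assumption (not Recompose’s actual source); addOne and double are illustrative functions.

```javascript
// Simplified sketch of compose: right-to-left function composition,
// so the last function in the argument list is applied first.
const compose = (...fns) => (x) => fns.reduceRight((acc, fn) => fn(acc), x);

const addOne = (n) => n + 1;
const double = (n) => n * 2;

console.log(compose(addOne, double)(3)); // double first, then addOne: 7
```

Because each branch call returns a HOC (a function from component to component), composing them yields a single HOC that applies every condition in turn.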

Refactoring code to remove repetition

At this time we are building a condition (like cashOnDeliveryCondition) for each payment type and then using that condition in compose. We can put all such conditions in an array and then we can use that array in compose. Let’s see it in action.

const cashOnDelivery = 'CASH_ON_DELIVERY';
const swipeOnDelivery = 'SWIPE_ON_DELIVERY';

const isCashOnDelivery = ({ paymentType }) =>
  (paymentType === cashOnDelivery);

const isSwipeOnDelivery = ({ paymentType }) =>
  (paymentType === swipeOnDelivery);

const states = [{
  when: isCashOnDelivery, then: CashScreen
},{
  when: isSwipeOnDelivery, then: CardScreen
}]

const componentsArray = states.map(({ when, then }) =>
  branch(when, renderComponent(then))
);

const enhance = compose(
  ...componentsArray
)

const MainScreen = enhance(PayOnlineScreen);

Extract function for reusability

We are going to extract some code into utils for better reusability.

// utils/composeStates.js

import { branch, renderComponent, compose } from 'recompose';

export default function composeStates(states) {
  const componentsArray = states.map(({ when, then }) =>
    branch(when, renderComponent(then))
  );

  return compose(...componentsArray);
}

Now our main code looks like this.

import composeStates from 'utils/composeStates.js';

const cashOnDelivery = 'CASH_ON_DELIVERY';
const swipeOnDelivery = 'SWIPE_ON_DELIVERY';

const isCashOnDelivery = ({ paymentType }) =>
  (paymentType === cashOnDelivery);

const isSwipeOnDelivery = ({ paymentType }) =>
  (paymentType === swipeOnDelivery);

const states = [{
  when: isCashOnDelivery, then: CashScreen
},{
  when: isSwipeOnDelivery, then: CardScreen
}]

const enhance = composeStates(states);

const MainScreen = enhance(PayOnlineScreen);

Full before and after comparison

Here is the code before the refactoring.

import React, { Component } from 'react';
import PropTypes from 'prop-types';
import { connect } from 'react-redux';
import { browserHistory } from 'react-router';
import { Modal } from 'react-bootstrap';
import * as authActions from 'redux/modules/auth';
import PaymentsModalBase from '../../components/PaymentsModal/PaymentsModalBase';
import PayOnlineScreen from '../../components/PaymentsModal/PayOnlineScreen';
import CashScreen from '../../components/PaymentsModal/CashScreen';
import CardScreen from '../../components/PaymentsModal/CardScreen';

@connect(
  () => ({}),
  { ...authActions })
export default class PaymentsModal extends Component {

  static propTypes = {
    show: PropTypes.bool.isRequired,
    hideModal: PropTypes.func.isRequired,
    orderDetails: PropTypes.object.isRequired
  };

  static defaultProps = {
    show: true,
    hideModal: () => { browserHistory.push('/'); },
    orderDetails: {}
  }

  state = {
    showOnlineScreen: true,
    showCashScreen: false,
    showCardScreen: false,
  }

  renderScreens = () => {
    const { showCashScreen, showCardScreen } = this.state;

    if (showCashScreen) {
      return <CashScreen />;
    } else if (showCardScreen) {
      return <CardScreen />;
    }
    return <PayOnlineScreen />;
  }

  render() {
    const { show, hideModal, orderDetails } = this.props;
    return (
      <Modal show={show} onHide={hideModal} dialogClassName="modal-payments">
        <PaymentsModalBase orderDetails={orderDetails} onHide={hideModal}>
          { this.renderScreens() }
        </PaymentsModalBase>
      </Modal>
    );
  }
}

Here is the code after applying recompose.

import React, { Component } from 'react';
import PropTypes from 'prop-types';
import { connect } from 'react-redux';
import { Modal } from 'react-bootstrap';
import { compose, branch, renderComponent } from 'recompose';
import * as authActions from 'redux/modules/auth';
import PaymentsModalBase from 'components/PaymentsModal/PaymentsModalBase';
import PayOnlineScreen from 'components/PaymentsModal/PayOnlineScreen';
import CashOnDeliveryScreen from 'components/PaymentsModal/CashScreen';
import SwipeOnDeliveryScreen from 'components/PaymentsModal/CardScreen';

const cashOnDelivery = 'CASH_ON_DELIVERY';
const swipeOnDelivery = 'SWIPE_ON_DELIVERY';
const online = 'ONLINE';

const isCashOnDelivery = ({ paymentType }) => (paymentType === cashOnDelivery);
const isSwipeOnDelivery = ({ paymentType }) => (paymentType === swipeOnDelivery);

const conditionalRender = (states) =>
  compose(...states.map(state =>
    branch(state.when, renderComponent(state.then))
  ));

const enhance = compose(
  conditionalRender([
    { when: isCashOnDelivery, then: CashOnDeliveryScreen },
    { when: isSwipeOnDelivery, then: SwipeOnDeliveryScreen }
  ])
);

const PayOnline = enhance(PayOnlineScreen);

@connect(
  () => ({}),
  { ...authActions })
export default class PaymentsModal extends Component {

  static propTypes = {
    isModalVisible: PropTypes.bool.isRequired,
    hidePaymentModal: PropTypes.func.isRequired,
    orderDetails: PropTypes.object.isRequired
  };

  state = {
    paymentType: online,
  }

  render() {
    const { isModalVisible, hidePaymentModal, orderDetails } = this.props;
    return (
      <Modal show={isModalVisible} onHide={hidePaymentModal} dialogClassName="modal-payments">
        <PaymentsModalBase orderDetails={orderDetails} hidePaymentModal={hidePaymentModal}>
          <PayOnline {...this.state} />
        </PaymentsModalBase>
      </Modal>
    );
  }
}

Functional code is a win

Functional code is all about composing smaller functions together like lego pieces. It results in better code because functions are usually smaller in size and do only one thing.

In the coming weeks we will see more applications of recompose in the real world.

Uploading file in an isomorphic ReactJS app

Design of an isomorphic App

In a typical single-page application (SPA), the server sends JSON data. The browser receives that JSON and builds the HTML.

In an isomorphic app, the server sends fully-formed HTML to the browser. This is typically done for SEO, performance, and code maintainability.

In an isomorphic app the browser does not directly deal with the API server, because the API server renders JSON data while the browser needs fully-formed HTML. To solve this problem a “proxy server” is introduced between the browser and the API server.

Architecture

In this case the proxy server is powered by Node.js.

Uploading a file in an isomorphic app

Recently, while working on an isomorphic app, we needed to upload a file to the API server. We couldn’t upload directly from the browser because we ran into a CORS issue.

One way to solve the CORS issue is to add CORS support to the API server. Since we did not have access to the API server, this was not an option. It means the file must go through the proxy server.

The problem can be seen as two separate issues.

  1. Uploading the file from the browser to the proxy server.
  2. Uploading the file from the proxy server to the API server.

Implementation

Before we start writing any code, we need to accept the file on the proxy server, and that can be done using Multer.

Multer is a node.js middleware for handling multipart/form-data.

We need to initialize multer with a path where it will store the uploaded files.

We can do that by adding the following code before initializing the node.js server app.

const multer = require('multer');

app.set('config', config)
  // ...other middleware
  .use(multer({ dest: 'uploads/' }).any()); // add this line

Now any file uploaded to the proxy server will be stored in the uploads/ directory.

Next we need a function which uploads a file from the browser to the Node.js server.

// code on client

import superagent from 'superagent';
import { map } from 'lodash';

function uploadImagesToNodeServer(files) {
  const formData = new FormData();
  map(files, (file, fileName) => {
    if (file && file instanceof File) {
      formData.append(fileName, file);
    }
  });

  superagent
    .post('/node_server/upload_path')
    .set(headers)
    .send(formData) // superagent sends FormData as multipart/form-data
    .then(response => {
      // handle response
    });
}

Next, let’s upload the same file from the node.js server to the API server.

To do that, we need to add a callback function to our node.js server where we are accepting the POST request for step 1.

// code on node.js server
app.post('/node_server/upload_path', function(req, res) {
  uploadImagesToApiServer(req);
  // handle response
});

function uploadImagesToApiServer(req) {
  superagent
    .post('/api_server/upload_path')
    .set(headers)
    .attach('image', req.files[0].path)
    .then(response => {
      // handle response
    });
}

Graceful shutdown of Sidekiq processes on Kubernetes

In our last blog, we explained how to handle rolling deployments of Rails applications with no downtime.

In this article we will walk you through how to handle graceful shutdown of processes in Kubernetes.

This post assumes that you have basic understanding of Kubernetes terms like pods and deployments.

Problem

When we deploy Rails applications on Kubernetes, it stops the existing pods and spins up new ones. When the old pod is terminated by the ReplicaSet, any active Sidekiq processes on it are terminated as well. We run our background jobs using Sidekiq, and it is possible that jobs are still running when a deployment is performed. Terminating the old pod during deployment can kill those already running jobs.

Solution #1

As per the default pod termination policy, Kubernetes deletes a pod with a default grace period of 30 seconds. At the start of this period Kubernetes sends the TERM signal. When the grace period expires, any processes still running in the pod are killed with SIGKILL.

We can adjust the terminationGracePeriodSeconds timeout as per our need, for example changing it from 30 seconds to 2 minutes.
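A sketch of where this field sits in the pod spec (the values and image name are illustrative):

```yaml
spec:
  terminationGracePeriodSeconds: 120  # up to 2 minutes between TERM and KILL
  containers:
  - name: test-staging
    image: <your-repo>/<your-image-name>:latest
```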

However there might be cases where we are not sure how much time a process takes to shut down gracefully. In such cases we should consider using a PreStop hook, which is our next solution.

Solution #2

Kubernetes provides many Container lifecycle hooks.

The PreStop hook is called immediately before a container is terminated. It is a blocking call, which means it is synchronous: the hook must complete before the container is terminated.

Note that the PreStop hook does not replace the grace period: the hook still runs within terminationGracePeriodSeconds, so the grace period must be raised if the hook can take long. It is rarely a good idea to have a process which takes more than a minute to shut down, but in the real world there are cases where more time is needed. Use PreStop for such cases.

We decided to use preStop hook to stop Sidekiq because we had some really long running processes.

Using PreStop hooks in Sidekiq deployment

This is a simple deployment template in which the Sidekiq process is terminated abruptly when the pod is terminated during a deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-staging-sidekiq
  labels:
    app: test-staging
  namespace: test
spec:
  selector:
    matchLabels:
      app: test-staging
  template:
    metadata:
      labels:
        app: test-staging
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-staging
        imagePullPolicy: Always
        env:
        - name: REDIS_HOST
          value: test-staging-redis
        - name: APP_ENV
          value: staging
        - name: CLIENT
          value: test
        volumeMounts:
            - mountPath: /etc/sidekiq/config
              name: test-staging-sidekiq
        ports:
        - containerPort: 80
      volumes:
        - name: test-staging-sidekiq
          configMap:
             name: test-staging-sidekiq
             items:
              - key: config
                path: sidekiq.yml
      imagePullSecrets:
        - name: registrykey

Next we will use PreStop lifecycle hook to stop Sidekiq safely before pod termination.

We will add the following block in deployment manifest.

lifecycle:
  preStop:
    exec:
      command: ["/bin/bash", "-l", "-c", "cd /opt/myapp/current; for f in tmp/pids/sidekiq*.pid; do bundle exec sidekiqctl stop $f; done"]

The PreStop hook stops all the Sidekiq processes, performing a graceful shutdown of Sidekiq before the pod is terminated.

We can add this configuration in original deployment manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-staging-sidekiq
  labels:
    app: test-staging
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-staging
  template:
    metadata:
      labels:
        app: test-staging
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-staging
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command: ["/bin/bash", "-l", "-c", "cd /opt/myapp/current; for f in tmp/pids/sidekiq*.pid; do bundle exec sidekiqctl stop $f; done"]
        env:
        - name: REDIS_HOST
          value: test-staging-redis
        - name: APP_ENV
          value: staging
        - name: CLIENT
          value: test
        volumeMounts:
            - mountPath: /etc/sidekiq/config
              name: test-staging-sidekiq
        ports:
        - containerPort: 80
      volumes:
        - name: test-staging-sidekiq
          configMap:
             name: test-staging-sidekiq
             items:
              - key: config
                path: sidekiq.yml

      imagePullSecrets:
        - name: registrykey

Let’s launch this deployment and monitor the rolling deployment.

$ kubectl apply -f test-deployment.yml
deployment "test-staging-sidekiq" configured

We can confirm that existing Sidekiq jobs are completed before the old pod is terminated during the deployment process. In this way we handle graceful shutdown of the Sidekiq process. We can apply this technique to other processes as well.

New Syntax for HTML Tag helpers in Rails 5.1

Rails is great at generating HTML using helpers such as content_tag and tag.

content_tag(:div, nil, class: "home")

<div class="home">
</div>

Rails 5.1 has introduced a new syntax for this in the form of an enhanced tag helper.

Now that same HTML div tag can be generated as follows.

tag.div class: 'home'

<div class="home">
</div>

Earlier, the tag type was decided by the positional argument to the content_tag and tag methods but now we can just call the required tag type on the tag method itself.

We can pass the tag body and attributes in the block format as well.

<%= tag.div class: 'home' do %>
  Welcome to Home!
<% end %>


<div class="home">
  Welcome to Home!
</div>

HTML5 compliant by default

The new tag helper is HTML5 compliant by default: it respects HTML5 features such as void elements.

Backward compatibility

The old syntax of content_tag and tag methods is still supported but might be deprecated and removed in future versions of Rails.

Avoid exception for dup on Integer

This blog is part of our Ruby 2.4 series.

Prior to Ruby 2.4, if we were to dup an Integer, it would fail with a TypeError.

> 1.dup
TypeError: can't dup Fixnum
	from (irb):1:in `dup'
	from (irb):1

This was confusing because Integer#dup is actually implemented.

> 1.respond_to? :dup
=> true

However, if we were to freeze an Integer, it would silently succeed.

> 1.freeze
=> 1

Ruby 2.4 has now included dup-ability for Integer as well.

> 1.dup
=> 1

In Ruby, some object types are immediate values and therefore cannot be duped/cloned. Yet, there was no graceful way of averting the error thrown by the sanity check when we attempted to dup/clone them.

So now Integer#dup functions the way freeze does – it silently returns the object itself. This makes sense because nothing about these objects can be changed in the first place.
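The behavior can be verified directly in a Ruby 2.4+ session; a quick sketch:

```ruby
# Sketch of Ruby 2.4+ behavior: dup on an immediate value is a silent no-op.
copy = 1.dup
puts copy            # => 1
puts copy.equal?(1)  # => true; dup returns the very same immediate object
puts 1.frozen?       # => true in recent Ruby versions; immediates cannot be mutated
```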

Deploying Rails applications on Kubernetes cluster with Zero downtime

This post assumes that you have basic understanding of Kubernetes terms like pods and deployments.

Problem

We deploy Rails applications on Kubernetes frequently and we need to ensure that deployments do not cause any downtime. When we used Capistrano to manage deployments, this was much easier since it has a provision to restart services in a rolling fashion.

Kubernetes restarts pods directly, and any process already running on a pod is terminated. So on rolling deployments we would face downtime until the new pod is up and running.

Solution

In Kubernetes we have readiness probes and liveness probes. A liveness probe determines whether a pod is alive and should keep running, while a readiness probe determines whether a pod is ready to receive traffic.

This is what Kubernetes documentation has to say about when to use readiness probes.

Sometimes, applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup. In such cases, you don’t want to kill the application, but you don’t want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations. A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.

It means new traffic should not be routed to pods which are running but not yet ready.

Using readiness probes in deployment flow

Here is what we are going to do.

  • We will use readiness probes to deploy our Rails app.
  • The readiness probe definition has to be specified in the pod spec of the deployment.
  • The readiness probe uses a health check to detect pod readiness.
  • We will create a simple file on our pod named health_check which returns status 200.
  • This health check runs on an arbitrary port, 81.
  • We will expose this port in the nginx config running on the pod.
  • When our application is up on nginx, this health_check returns 200.
  • We will use the above fields to configure the health check in the pod spec of the deployment.
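Assuming nginx serves the application’s public directory, the server block exposing the health check might look like this (the paths and directory layout are illustrative):

```nginx
server {
  listen 81;

  location /health_check {
    # health_check is a plain file in the app's public directory;
    # serving it returns status 200 once the app is deployed.
    root /var/www/app/public;
  }
}
```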

Now let’s build test deployment manifest.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-staging
  labels:
    app: test-staging
  namespace: test
spec:
  selector:
    matchLabels:
      app: test-staging
  template:
    metadata:
      labels:
        app: test-staging
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-staging
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-staging-postgres
        - name: APP_ENV
          value: staging
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey

This is a simple deployment template which terminates the pod on a rolling deployment. The application may suffer downtime until the new pod is in the running state.

Next we will use a readiness probe to signal that a pod is ready to accept application traffic. We will add the following block in the deployment manifest.

  readinessProbe:
    httpGet:
      path: /health_check
      port: 81
    periodSeconds: 5
    successThreshold: 3
    failureThreshold: 2

In the above readiness probe definition, httpGet performs the health check.

The health check requests the health_check file, which returns status 200 when accessed over port 81. We poll it every 5 seconds using the periodSeconds field.

We mark the pod as ready only after the health check succeeds 3 times in a row (successThreshold). Similarly, we mark it as failed after 2 consecutive failures (failureThreshold). These values can be adjusted as per application need. This helps the deployment determine whether the pod is ready. Along with readiness probes, for rolling updates we will use maxUnavailable and maxSurge in the deployment strategy.

As per Kubernetes documentation.

maxUnavailable is a field that specifies the maximum number of Pods that can be unavailable during the update process. The value can be an absolute number (e.g. 5) or a percentage of desired Pods (e.g. 10%). The absolute number is calculated from the percentage by rounding down. This cannot be 0 if maxSurge is 0.

and

maxSurge is a field that specifies the maximum number of Pods that can be created above the desired number of Pods. The value can be an absolute number (e.g. 5) or a percentage of desired Pods (e.g. 10%). This cannot be 0 if maxUnavailable is 0. The absolute number is calculated from the percentage by rounding up. By default, a value of 25% is used.

Now we will update our deployment manifests with two replicas and the rolling update strategy by specifying the following parameters.

  replicas: 2
  minReadySeconds: 50
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 1

This makes sure that during a deployment at least one of our pods is always running, and at most 1 extra pod can be created.

We can read more about rolling-deployments here.

We can add this configuration in original deployment manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-staging
  labels:
    app: test-staging
  namespace: test
spec:
  replicas: 2
  minReadySeconds: 50
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 1
  selector:
    matchLabels:
      app: test-staging
  template:
    metadata:
      labels:
        app: test-staging
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-staging
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-staging-postgres
        - name: APP_ENV
          value: staging
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /health_check
            port: 81
          periodSeconds: 5
          successThreshold: 3
          failureThreshold: 2
      imagePullSecrets:
        - name: registrykey

Let’s launch this deployment using the command given below and monitor the rolling deployment.

$ kubectl apply -f test-deployment.yml
deployment "test-staging-web" configured

After the deployment is configured we can check the pods and how they are restarted.

We can also access the application to check if we face any down time.

$ kubectl get pods
NAME                               READY     STATUS    RESTARTS   AGE
test-staging-web-372228001-t85d4   1/1       Running   0          1d
test-staging-web-372424609-1fpqg   0/1       Running   0          50s

We can see above that only one pod is re-created at a time while one of the old pods keeps serving the application traffic. Also, the new pod is running but not ready, as it has not yet passed the readiness probe.

After some time, when the new pod is in the ready state, the old pod is terminated and traffic is served by the new pod. In this way our application does not suffer any downtime and we can confidently do deployments even at peak hours.

Rails 5.1 returns unmapped timezones from ActiveSupport::TimeZone.country_zones

This blog is part of our Rails 5.1 series.

The ActiveSupport::TimeZone class serves as a wrapper around the TZInfo::TimeZone class. It limits the set of zones provided by TZInfo to a smaller, meaningful subset and returns zones with friendly names. For example, the TZInfo gem returns “America/New_York” whereas Active Support returns “Eastern Time (US & Canada)”.

The ActiveSupport::TimeZone.country_zones method returns a set of TimeZone objects for the timezones of a country, specified by its two-letter country code.

# Rails 5.0
>> ActiveSupport::TimeZone.country_zones('US')

=> [#<ActiveSupport::TimeZone:0x007fcc2b9b3198 @name="Hawaii", @utc_offset=nil, @tzinfo=#<TZInfo::DataTimezone: Pacific/Honolulu>>, #<ActiveSupport::TimeZone:0x007fcc2b9d9ac8 @name="Alaska", @utc_offset=nil, @tzinfo=#<TZInfo::DataTimezone: America/Juneau>>, #<ActiveSupport::TimeZone:0x007fcc2ba03a08 @name="Pacific Time (US & Canada)", @utc_offset=nil, @tzinfo=#<TZInfo::DataTimezone: America/Los_Angeles>>,...]

In Rails 5.0, the country_zones method returns empty for some countries. This is because ActiveSupport::TimeZone::MAPPING supports only a limited number of timezone names.

>> ActiveSupport::TimeZone.country_zones('SV') # El Salvador

=> []

Rails 5.1 fixed this issue. Now if the country is not found in the MAPPING hash, a new ActiveSupport::TimeZone instance for the country is returned.

>> ActiveSupport::TimeZone.country_zones('SV') # El Salvador

=> [#<ActiveSupport::TimeZone:0x007ff0dab83080 @name="America/El_Salvador", @utc_offset=nil, @tzinfo=#<TZInfo::DataTimezone: America/El_Salvador>>]

Difference between type and type alias in Elm

This blog is part of our Road to Elm series.

What is the difference between type and type alias?

The Elm FAQ has an answer to this question. However, I could not fully understand the answer.

This is my attempt at explaining it.

What is type

In Elm everything has a type. Fire up elm-repl and you will see that 4 is a number and “hello” is a String.

> 4
4 : number

> "hello"
"hello" : String

Let’s assume that we are working with user records and that those users have the following attributes.

  • Name
  • Age
  • Status (Active or Inactive)

It’s pretty clear that “Name” should be of type “String” and “Age” should be of type “number”.

Let’s think for a moment about the type of “Status”. What are “Active” and “Inactive” in terms of type?

Active and Inactive are two valid values of Status. In other programming languages we might represent Status as an enum.

In Elm we need to create a new type. That can be done as shown here.

type Status = Active | Inactive

Here we are doing two things: we are creating a new type called Status, and we are stating that the valid values for this new type are Active and Inactive.

When I discussed this code with my team members, they asked me where Active and Inactive are defined. Good question.

The simple answer is that they are not defined anywhere. They do not need to be defined. What needs definition is the new type that is being created.

What makes this a bit hard to understand for people coming from a Ruby or Java background is that they (including me) look at Active and Inactive as a class or a constant, which is not the right way to look at it.

Active and Inactive are the valid values for type Status.

> Active
-- NAMING ERROR ----------

Cannot find variable `Active`

3|   Active
     ^^^^^^

As you can see, the repl is not sure what Active is.

We can solve this by pasting the following code in the repl.

type Status = Active | Inactive

Now we can run the same code again. This time no error.

> Active
Active : Repl.Status

What is type alias

Let’s look at a simple application which just prints the name and age of a single user.

Here is the code. I’m posting a screenshot of it below with certain parts highlighted.

code without type alias

As you can see, { name : String, age : Int } is repeated at four different places. In a bigger application it would get repeated even more often.

This is what type alias does. It removes repetition. It removes verbosity.

As the name suggests, this is just an alias. Note that type creates a new type, whereas type alias literally just saves keystrokes; it does not create a new type.

Now if you read the FAQ answer again, hopefully it will make more sense.

Here is the modified code using type alias.

Why use type alias Username = String

While browsing Elm code in general, I came across the following code.

type alias Username = String

The question is: what does code like this buy us? All it does is let us type Username instead of String.

First let’s see how it might be used.

Let’s assume that we have a function which returns the Status of a user for a given username.

The function might have implementation as shown below.

getUserStatus username =
  make_db_call_and_return_user_status

Now let’s think about what the type annotation (Rubyists: think of it as a method signature) of the function getUserStatus might look like.

It takes a username as input and returns the user’s status.

So the type annotation might look like

getUserStatus : String -> Status

This works. However, the issue is that String is not expressive enough. It would be more expressive if the signature were

getUserStatus : Username -> Status

Now that we know about type alias all we need to do is

type alias Username = String

This makes code more expressive.

No recursion with type alias

An example of where we might need recursion is when designing a commenting system: a comment can have sub-comments. However, since type alias is just a substitution, recursion does not work with it.

> type alias Comment = { message : String, responses : List Comment }

This type alias is recursive, forming an infinite type!

2| type alias Comment = { message : String, responses : List Comment }
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When I expand a recursive type alias, it just keeps getting bigger and bigger.
So dealiasing results in an infinitely large type! Try this instead:

    type Comment
        = Comment { message : String, responses : List Comment }

This is kind of a subtle distinction. I suggested the naive fix, but you can
often do something a bit nicer. So I would recommend reading more at:
<https://github.com/elm-lang/elm-compiler/blob/0.18.0/hints/recursive-alias.md>

Hint for Recursive Type Aliases discusses this issue in greater detail and it also has solution to the problem of recursion.

Dual role of type alias as constructor and type

Let’s say that we have following code.

type alias UserInfo =
    { name : String, age : Int }

Now we can use UserInfo as a constructor to create records.

> type alias UserInfo = { name : String, age : Int }
> sam = UserInfo "Sam" 24
{ name = "Sam", age = 24 } : Repl.UserInfo

In the above case we used UserInfo as a constructor to create a new user record. We did not use UserInfo as a type.

Now let’s see another function.

type alias UserInfo =
    { name : String, age : Int }


getUserAge : UserInfo -> Int
getUserAge userinfo =
    userinfo.age

In this case UserInfo is being used in the type annotation as a type and not as a constructor.

Which one to use: type or type alias?

Both of them serve different purposes. Let's see an example.

Let's say that we have the following code.

type alias UserInfo =
    { name : String, age : Int }

type alias Coach =
    { name : String, age : Int, sports : String }

Now let's write a function that returns the age of a given user.

getUserAge : UserInfo -> Int
getUserAge userInfo =
    userInfo.age

Now let’s create two types of users.

sam = UserInfo "Sam" 24
charlie = Coach "Charlie" 52 "Basketball"

Now let’s try to get age of both of these people.

getUserAge sam
getUserAge charlie

Here is the complete version if you want to run it.

Please note that elm-repl does not support type annotations, so you can't test this code in elm-repl.

The main point here is that since we used a type alias, the function getUserAge works for both UserInfo and Coach. It would be a stretch to say that this is “duck typing in Elm”, but it comes pretty close.

Yes, Elm is a statically typed language and it enforces types. However, the point here is that a type alias is not exactly a type.

So why did this code work?

It worked because of Elm’s support for pattern matching for records.

As mentioned earlier, a type alias is just a shortcut for the verbose version. So let's expand the type annotation of getUserAge.

If we were not using the type alias UserInfo, it would look like this:

getUserAge : { name : String, age : Int } -> Int

Here the argument is a record (see the official guide on Records). When dealing with records, Elm looks at the argument, and if that argument is a record with all the matching attributes, Elm will not complain, because of its support for pattern matching on records.

Since Coach has both the name and age attributes, getUserAge charlie works.

You can test this by removing the age attribute from Coach; then you will see that the compiler complains.

In summary, if we want strict type enforcement, we should go for type. If we want a shorthand so that we do not need to type out all the attributes every time, and we want pattern matching, we should go for type alias.

In Ruby 2.4, IPAddr#== and IPAddr#<=> do not throw exception for objects that can't be converted to IPAddr

This blog is part of our Ruby 2.4 series.

In Ruby, the IPAddr#== method is used to check whether two IP addresses are equal. Ruby also has the IPAddr#<=> method, which is used to compare two IP addresses.

In Ruby 2.3, the behavior of these methods was inconsistent. Let's see an example.

# Ruby 2.3

>> IPAddr.new("1.2.1.3") == "Some ip address"
IPAddr::InvalidAddressError: invalid address

But if the first operand is not a valid IP address and the second one is, then == would return false.

# Ruby 2.3

>> "Some ip address" == IPAddr.new("1.2.1.3")
=> false

The <=> method would raise an exception in both cases.

# Ruby 2.3

>> "Some ip address" <=> IPAddr.new("1.2.1.3")
IPAddr::InvalidAddressError: invalid address

>> IPAddr.new("1.2.1.3") <=> "Some ip address"
IPAddr::InvalidAddressError: invalid address

In Ruby 2.4, this issue is fixed for both methods: they now return a result without raising an exception if the objects being compared can't be converted to an IPAddr object.

# Ruby 2.4

>> IPAddr.new("1.2.1.3") == "Some ip address"
=> false

>> "Some ip address" == IPAddr.new("1.2.1.3")
=> false

>> IPAddr.new("1.2.1.3") <=> "Some ip address"
=> nil

>> "Some ip address" <=> IPAddr.new("1.2.1.3")
=> nil

This might cause backward compatibility issues if our code expects the exception which is no longer raised in Ruby 2.4.
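For code that previously rescued IPAddr::InvalidAddressError around comparisons, the return value can now be checked instead. Here is a minimal sketch of the new behavior (the same_address? helper is hypothetical):

```ruby
require "ipaddr"

addr = IPAddr.new("1.2.1.3")

# Ruby 2.4+: comparing against something that can't be converted
# to an IPAddr no longer raises.
addr == "Some ip address"     # => false
addr <=> "Some ip address"    # => nil

# Hypothetical helper: treat "not comparable" as "not equal"
# instead of rescuing IPAddr::InvalidAddressError.
def same_address?(address, other)
  (address <=> other) == 0
end
```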

Rails 5.1 has dropped dependency on jQuery from the default stack

This blog is part of our Rails 5.1 series.

Rails has been dependent on jQuery for providing unobtrusive JavaScript helpers such as data-remote and data-url, and for the Ajax interactions. Every Rails application before Rails 5.1 would have the jquery-rails gem included by default.

The jquery-rails gem contains the jquery-ujs driver which provides all the nice unobtrusive features.

But JavaScript has now progressed enough that the unobtrusive driver Rails needs can be written in plain vanilla JavaScript.

That’s what has happened for the 5.1 release. The jquery-ujs driver has been rewritten using just plain JavaScript as part of a GSoC project by Dangyui Liu.

Now that the unobtrusive JavaScript driver does not depend on jQuery, new Rails applications also need not depend on jQuery.

So, Rails 5.1 has dropped jQuery as a dependency from the default stack.

The current jQuery-based approach is still available; it's just not part of the default stack. You will need to manually add the jquery-rails gem to a newly created 5.1 application and update application.js to include the jquery-ujs driver.
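Concretely, the two changes would look roughly like this (a sketch; the asset path follows standard Rails conventions):

```
# Gemfile
gem "jquery-rails"

# app/assets/javascripts/application.js
//= require jquery
//= require jquery_ujs
```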

It’s worth noting that rails-ujs only supports IE 11+. Visit the Desktop Browser Support section of Basecamp to see the full list of all the supported browsers.

Browsers support without jQuery

We saw some discussion about which browsers are supported without jQuery. We decided to test it ourselves on a plain vanilla CRUD Rails app. We tested “adding”, “editing” and “deleting” of a resource.

We found all three operations (adding, editing and deleting) to be working in the following cases.

  • Win 7 - IE 9
  • Win 7 - IE 10
  • Win 7 - IE 11
  • Win 8 - IE 10
  • Win 8.1 - IE 11
  • Win 10 - Edge 14
  • Win 10 - Edge 15
  • Win 10 - Firefox 53
  • Win 10 - Chrome 58
  • Win 10 - Safari 5.1
  • Mac Sierra - Safari 10.1
  • Mac Sierra - Firefox 53
  • Mac Sierra - Chrome 58

API change for the event handlers

The rails-ujs driver has changed the signature of the event handler functions: handlers now receive a single event object instead of the event, data, status and xhr arguments passed by the jquery-ujs driver.

Check the documentation for the rails-ujs event handlers for more details.
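As a sketch of the difference (the handler names here are hypothetical; with rails-ujs the extra values arrive bundled on event.detail):

```javascript
// jquery-ujs (before Rails 5.1): handlers received four arguments.
function onAjaxSuccessOld(event, data, status, xhr) {
  return data;
}

// rails-ujs (Rails 5.1): handlers receive a single event object;
// data, status and xhr are available on event.detail.
function onAjaxSuccessNew(event) {
  var data = event.detail[0];
  var status = event.detail[1];
  var xhr = event.detail[2];
  return data;
}
```

With rails-ujs, such a handler would typically be attached with element.addEventListener("ajax:success", onAjaxSuccessNew).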

Ruby 2.4 has deprecated toplevel constants TRUE, FALSE and NIL

This blog is part of our Ruby 2.4 series.

Ruby has top level constants like TRUE, FALSE and NIL. These constants are just synonyms for true, false and nil respectively.

In Ruby 2.4, these constants are deprecated and will be removed in a future version.

# Ruby 2.3

2.3.1 :001 > TRUE
 => true
2.3.1 :002 > FALSE
 => false
2.3.1 :003 > NIL
 => nil

# Ruby 2.4

2.4.0 :001 > TRUE
(irb):1: warning: constant ::TRUE is deprecated
 => true
2.4.0 :002 > FALSE
(irb):2: warning: constant ::FALSE is deprecated
 => false
2.4.0 :003 > NIL
(irb):3: warning: constant ::NIL is deprecated
 => nil