While working on a client project, we started facing an issue where JWPlayer stopped playing videos after we switched to the HLS versions of the videos. We found a CORS error in the JS console as shown below. After researching, we found that JWPlayer makes an AJAX request to load the m3u8 file. To fix the issue, we needed to enable CORS, and for that we needed to make changes to the S3 and CloudFront configurations.
S3 configuration changes
We can configure CORS for the S3 bucket by allowing requests originating from specified hosts. As shown in the image below, the CORS configuration option can be found in the Permissions tab of the S3 bucket. Here is the official documentation on configuring CORS for S3. The S3 bucket will now allow requests originating from the specified hosts.
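For reference, a minimal CORS configuration of the kind we applied might look like the following; the allowed origin is a placeholder for the host that embeds the player:

<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>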
CloudFront configuration changes
CloudFront is a CDN service provided by AWS that uses edge locations to speed up the delivery of static content. CloudFront takes content from S3 buckets, caches it at edge locations, and delivers it to the end user.
To enable CORS, we need to configure CloudFront to forward the required headers. We can configure the behavior of CloudFront by clicking on the CloudFront distribution’s “Distribution Settings”. Then, from the “Behaviors” tab, click on “Edit”. Here we need to whitelist the headers that need to be forwarded. Select the “Origin” header for whitelisting, which is required for CORS, as shown in the image below.
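If the distribution is managed as code rather than through the console, the same change corresponds to the forwarded-headers list on the cache behavior. Here is a hedged CloudFormation-style fragment showing only the relevant keys, not a complete distribution:

DefaultCacheBehavior:
  ForwardedValues:
    QueryString: false
    Headers:
      - Origin   # forward Origin so CloudFront caches CORS responses per origin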
A Kubernetes cluster can have many nodes. Each node in turn can run multiple pods. By default, Kubernetes manages which pod will run on which node, and this is something we do not need to worry about. However, sometimes we want to ensure that certain pods do not run on the same node. For example, we have an application called wheel. We have both staging and production versions of this app, and we want to ensure that the production pod and the staging pod are not on the same host. To ensure that certain pods do not run on the same host, we can use the nodeAffinity constraint in PodSpec to schedule pods on nodes.
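Once nodes carry distinguishing labels (we set these up below), a minimal sketch of such a constraint in a PodSpec could look like this; the label key and value (type: wheel-prd) are assumptions:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: type          # assumed label key, added via kops below
              operator: In
              values:
                - wheel-prd      # production pods land only on prd nodes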
We will use kops to provision our cluster. We can check the health of the cluster using:
$ kops validate cluster
Using cluster from kubectl context: test-k8s.nodes-staging.com
Validating cluster test-k8s.nodes-staging.com
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east-1a Master m4.large 1 1 us-east-1a
master-us-east-1b Master m4.large 1 1 us-east-1b
master-us-east-1c Master m4.large 1 1 us-east-1c
nodes-wheel-stg Node m4.large 2 5 us-east-1a,us-east-1b
nodes-wheel-prd Node m4.large 2 5 us-east-1a,us-east-1b
NAME ROLE READY
ip-192-10-110-59.ec2.internal master True
ip-192-10-120-103.ec2.internal node True
ip-192-10-42-9.ec2.internal master True
ip-192-10-73-191.ec2.internal master True
ip-192-10-82-66.ec2.internal node True
ip-192-10-72-68.ec2.internal node True
ip-192-10-182-70.ec2.internal node True
Your cluster test-k8s.nodes-staging.com is ready
Here we can see that there are two instance groups for nodes: nodes-wheel-stg and nodes-wheel-prd.
nodes-wheel-stg might have application pods like pod-wheel-stg-sidekiq, pod-wheel-stg-unicorn and pod-wheel-stg-redis.
nodes-wheel-prd might have application pods like pod-wheel-prd-sidekiq, pod-wheel-prd-unicorn and pod-wheel-prd-redis.
As we can see, the Max number of nodes for the instance groups nodes-wheel-stg and nodes-wheel-prd is 5. This means that if new nodes are created in the future, then based on the instance group, the newly created nodes will automatically be labelled, and no manual work is required.
Labelling a Node
We will use Kubernetes labels to label a node. To add a label, we need to edit the instance group using kops:
$ kops edit ig nodes-wheel-stg
This will open up the instance group configuration file, and we will add the following label in the instance group spec.
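A sketch of the relevant fragment of the instance group spec; the label key and value are assumptions consistent with the example above:

spec:
  nodeLabels:
    type: wheel-stg   # every node in this instance group gets this label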
Recently, we integrated our SAML service provider (SP) with multiple identity providers (IDPs) to facilitate single sign-on (SSO) using Devise with OmniAuth. Before we jump into the specifics, here is the SAML definition from Wikipedia:
Security Assertion Markup Language (SAML, pronounced sam-el) is an open standard for exchanging authentication and authorization data between parties, in particular, between an identity provider (IDP) and a service provider (SP).
The choice of Devise with OmniAuth-SAML to build SAML SSO capabilities was natural for us, as we already had a dependency on Devise, and OmniAuth integrates nicely with Devise. Here is the official overview of how to integrate OmniAuth with Devise. After following the overview, this is how our config and user.rb looked:
# config/initializers/devise.rb
Devise.setup do |config|
  config.omniauth :saml, {} # SAML settings elided
end
# app/models/user.rb
devise :omniauthable, :omniauth_providers => [:saml]
The problem with the above configuration is that it supports only one SAML IDP. To support multiple IDPs, we re-defined the files as below.
1. Custom Providers: Instead of using the standard provider saml, we configured custom providers (saml_idp1, saml_idp2) in the first line of the configuration as well as in user.rb.
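A minimal sketch of that change; the IDP-specific SAML settings are elided:

# config/initializers/devise.rb, inside Devise.setup
config.omniauth :saml_idp1, {} # settings for the first IDP
config.omniauth :saml_idp2, {} # settings for the second IDP

# app/models/user.rb
devise :omniauthable, :omniauth_providers => [:saml_idp1, :saml_idp2]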
2. Strategy Class: In the case of the standard provider (saml), Devise can figure out the strategy_class on its own. For custom providers, we need to specify it explicitly.
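With omniauth-saml, the strategy class is OmniAuth::Strategies::SAML; a sketch of specifying it explicitly:

config.omniauth :saml_idp1, strategy_class: OmniAuth::Strategies::SAML
config.omniauth :saml_idp2, strategy_class: OmniAuth::Strategies::SAML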
3. OmniAuth Unique Identifier: After making the above two changes, everything worked fine except the OmniAuth URLs. For some reason, OmniAuth was still listening on the saml scoped path instead of the new provider names saml_idp1 and saml_idp2.
# Actual metadata path used by OmniAuth
# Expected metadata path
After digging into the Devise and OmniAuth code bases, we discovered the provider name configuration. In the absence of this configuration, OmniAuth falls back to the strategy class name to build the path. We could not find any code in Devise which defined name for OmniAuth (we were expecting Devise to pass name, assigning it the same value as the provider), which explained the saml scoped path. After adding the name configuration, OmniAuth started listening on the correct URLs.
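A sketch of the fix, passing name for each custom provider:

config.omniauth :saml_idp1, name: "saml_idp1", strategy_class: OmniAuth::Strategies::SAML
config.omniauth :saml_idp2, name: "saml_idp2", strategy_class: OmniAuth::Strategies::SAML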
4. Callback Actions: Lastly, we added both actions in OmniauthCallbacksController:
class Users::OmniauthCallbacksController < Devise::OmniauthCallbacksController
  # Rest of the actions
end
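For completeness, a minimal sketch of what those two actions might look like inside that controller; the shared handler name is hypothetical, not from the original post:

def saml_idp1
  handle_saml_callback # hypothetical shared handler for the first IDP
end

def saml_idp2
  handle_saml_callback # hypothetical shared handler for the second IDP
end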
With these changes, along with the official guide mentioned above, our SP was able to authenticate users from multiple IDPs.
In this blog R stands for Ramda.js. More on this later.
Here is the code without R.
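The original snippet is not reproduced here; a sketch of what such a “before” version might look like, based on the isUnique example mentioned below:

function isUnique(list, value) {
  // value is unique if it appears exactly once in the list
  var matches = list.filter(function (item) {
    return item === value;
  });
  return matches.length === 1;
}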
And here is the code with R.
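Again a sketch, not the original code:

const R = require('ramda');

// Curried: isUnique(value) returns a function awaiting the list
const isUnique = (value) =>
  R.pipe(
    R.filter(R.equals(value)), // keep items equal to value
    R.length,                  // count them
    R.equals(1)                // unique means exactly one match
  );

isUnique(3)([1, 2, 3]); // => true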
Is the refactored code better? What is R? Then why take on all this extra complexity? Shouldn’t we be writing code that is easier to understand?
Good questions. Who could be against writing code that is easier to understand? If all I’m writing is a function called isUnique, then of course the “before” version is simpler. But in the real world, a function is part of a bigger piece of software with thousands of lines of code. A big piece of software is nothing but a collection of smaller pieces of code. We compose code together to make it work. We need to optimize for composability, and as we write code that is more composable, we are finding that composable code is also easier to read. We have been experimenting with composability.
We wrote a blog on how using Recompose is making our React components more composable.
Let’s take a look at another example. We have a list of users with name and status. We need to find all active users. Here is a version without R.
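The original snippets are not shown here; a sketch of what the two versions might plausibly look like, assuming the existing function listed user names and the new request restricts it to active users:

// Without R: the new requirement forces us inside the existing function
function userNames(users) {
  var names = [];
  users.forEach(function (user) {
    if (user.status === 'active') { // newly added logic
      names.push(user.name);
    }
  });
  return names;
}

// With R: the old function stays untouched; we compose a new one with pipe
const names = R.map(R.prop('name'));
const isActive = R.propEq('status', 'active'); // Ramda <= 0.28 argument order
const activeUsers = R.filter(isActive);
const activeUserNames = R.pipe(activeUsers, names);

activeUserNames([{ name: 'Sam', status: 'active' }]); // => ['Sam']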
Notice the change we needed to make to accommodate this request. In the non-R version, we had to get into the guts of the function and add logic. In the with-R version, we added a new function and simply composed this new function with the old function using pipe. We did not change the existing function.
Now let’s say that we don’t want all the users but just the first two. We know what needs to change in the without-R version. In the with-R version, all we need to do is add R.take(2), and no existing function changes at all.
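Continuing the sketch:

// Just the first two active users; existing functions stay unchanged
const firstTwoActiveUserNames = R.pipe(activeUsers, R.take(2), names);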
Another thing to notice is that in the R version, nowhere have we said that we are acting on the users. None of the functions mention users. In fact, the functions do not take any argument explicitly, since they are curried. When we want the result, we pass users as the argument, but it could be articles and our code would still hold. This is point-free programming. We do not need to know about “point-free”, since this style comes naturally when writing with R.
I’m still not convinced that Ramda.js is solving any real problem.
If you are still not convinced, then note that the author of Ramda.js has written a series of blogs called Thinking in Ramda. Please read the blogs. Slowly. Functional programming is another way of thinking about code. When we move to Elm, Haskell or Elixir to get functional concepts, we are wrestling with two things at once: a new language and functional concepts. With Ramda.js we can slowly start using functional concepts in our day-to-day work today. Whether you are using React.js or Angular.js, it’s all JavaScript.
I attended Elm Conf 2017 US last week, alongside the Strangeloop conference. I was looking forward to the conference to learn what the Elm community is working on, what problems people are facing, and what they are doing to overcome them. After attending the conference, I can say that the Elm community is growing strong. The conference was attended by around 350 people, and many were using Elm in production. Many more wanted to try Elm in production.
There was a lot of enthusiasm about starting new Elm meetups. As a Ruby on Rails and React meetup organizer myself, I was genuinely interested in hearing the experiences of seasoned meetup organizers. In general, Evan and Richard prefer a meetup to be a place where people form small groups and hack on something, rather than one person teaching the whole group something.
I liked all the talks. There was variety in the topics, and the speakers were all seasoned. Kudos to the organizers for putting together a great program. Below is a quick summary of my thoughts from the conference.
Keynote by Evan
Evan talked about the work he has been doing for the upcoming release of Elm. He discussed the optimization work related to code splitting, code generation and minification for speeding up the building and delivery of single-page apps using Elm. He made another interesting point: he has changed the codegen which generates JS code from Elm code twice, and nobody noticed. Things like this give a huge opportunity to change and improve existing designs, which is what he has been doing for the upcoming release. In the end, he mentioned that his philosophy is not to rush things; it’s better to do things right than to do them now. After the keynote, he encouraged people to talk to him about what they are working on, which was really nice.
Accessibility with Elm
Tessa talked about her work on adding accessibility support for Elm apps. She talked about design decisions, prior art, and some of the challenges she faced while working on the library, such as handling tabs, interactive elements and images. There was a question at the end about whether this will be incorporated into Elm core, but Evan mentioned that it might take some time.
Putting the Elm Platform in the Browser
Luke, the creator of Ellie (a way to easily share your Elm code with others online), talked about how he started with Ellie, and about the problems he had to face in implementing and sustaining Ellie through ads. During the talk, he also open sourced the code, so we can see it on GitHub now. Luke mentioned how he changed the architecture of Ellie from mostly running on the server to running in the browser using service workers. He discussed future plans for sustaining Ellie, building an Elm editor instead of using CodeMirror, getting rid of ads, and making Ellie better for everyone.
The Importance of Ports
In other languages like PureScript, one can talk to JavaScript directly through a foreign function interface. In Elm, one has to use “Ports”. Using ports requires some extra work; in return, we get more safety. Murphy Randle presented a case where he was using too many ports, which was resulting in fragmented code. He discussed how ports are based on the Actor Model, and that once we understand this, using ports becomes much easier. He also showed the refactored code. Murphy also runs the Elm Town podcast; listen to episode 13 to learn more about ports.
He talked about finding motivation to teach using the SWBAT (Students Will Be Able To) technique. It helped him in deciding the agenda and finding a direct path for teaching. He mentioned that in the beginning, being precise and detailed is not important. This resonated with me, as the most important thing for anyone who is getting started is to start with the most basic things and then iterate over them again and again.
The Elm community is small, tight-knit, very friendly and warm. Lots of people are trying a lot of cool things. Elm Slack came up in discussions again and again as a good place for beginners to seek help. When I first heard about Elm, it was about good compiler errors and run-time safety. However, after attending the conference, I am mighty impressed with the Elm community. Big props to Brian and Luke for organizing the conference! All the videos from the conference are already being uploaded here.
Now, DateTime#to_time and Time#to_time preserve the receiver’s timezone offset info. Since this is a breaking change for Rails applications upgrading to Ruby 2.4, Rails 4.2.8 built a compatibility layer by adding a config option: ActiveSupport.to_time_preserves_timezone was added to control how to_time handles timezone offsets.
Here is an example of how the application behaves when to_time_preserves_timezone is set to false.
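A sketch of the legacy behavior; the timestamp and the system zone (assumed UTC here) are illustrative:

# config/initializers/new_framework_defaults.rb
ActiveSupport.to_time_preserves_timezone = false

datetime = DateTime.parse("2017-08-01 10:00:00 +0530")
datetime.to_time
# => 2017-08-01 04:30:00 +0000  (offset converted to the system zone)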
And here is how the application behaves when to_time_preserves_timezone is set to true.
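With the flag flipped, the same call keeps the receiver’s offset (same illustrative values):

ActiveSupport.to_time_preserves_timezone = true

datetime = DateTime.parse("2017-08-01 10:00:00 +0530")
datetime.to_time
# => 2017-08-01 10:00:00 +0530  (receiver's offset preserved)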
Recompose is a toolkit for writing React components using higher-order components. Recompose allows us to write many smaller higher-order components, and then we compose all those components together to get the desired component. It improves both the readability and the maintainability of the code. The helpers provided by Recompose are also written as HOCs. Going forward, we will use HOC to refer to higher-order components.
Using Recompose in an e-commerce application
We are working on an e-commerce application in which we need to build the payment page. Here are the modes of payment:
Cash on delivery
Swipe on delivery
Online payment
We need to render our
React components depending upon
the payment mode selected by the user.
Typically we render components
based on some state.
We will try to refactor the code using the tools
provided by Recompose.
In general, the guiding principle of functional programming
is composition. So here we will assume that the
default payment mechanism is online.
If the payment mode happens to be something else
then we will take care of it by enhancing the existing component.
Recompose provides the branch function, which acts like a ternary operator. The branch function accepts three arguments and returns a HOC. The first argument is a predicate which accepts props as the argument and returns a Boolean value. The second and third arguments are higher-order components. If the predicate evaluates to true, then the left HOC is rendered; otherwise the right HOC is rendered.
Here is how branch is implemented.
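The actual Recompose source is not reproduced here; a simplified sketch of the idea:

const React = require('react');

// Simplified sketch of branch, not the actual Recompose implementation
const branch = (predicate, leftHoc, rightHoc) => (BaseComponent) => {
  const Left = leftHoc(BaseComponent);
  const Right = rightHoc(BaseComponent);
  return (props) =>
    predicate(props)
      ? React.createElement(Left, props)
      : React.createElement(Right, props);
};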
At this point, we are building a condition (like cashOnDeliveryCondition) for each payment type and then using that condition in compose. We can put all such conditions in an array and then use that array in compose. Let’s see it in action.
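A sketch of that idea; the predicate checks and component names are assumptions:

const { branch, compose, renderComponent } = require('recompose');

// Hypothetical predicates and components for each payment mode
const conditions = [
  branch((props) => props.paymentMode === 'cod', renderComponent(CashOnDelivery)),
  branch((props) => props.paymentMode === 'swipe', renderComponent(SwipeOnDelivery)),
];

// Spread the array into compose to build the final enhancer
const enhance = compose(...conditions);
const PaymentPage = enhance(OnlinePayment);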
Extract function for reusability
We are going to extract some code into utils for better reusability.
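For instance, the repeated predicate could live in a small util; a hypothetical sketch:

// utils.js - hypothetical helper shared by all the branches
const matchesPaymentMode = (mode) => (props) => props.paymentMode === mode;

module.exports = { matchesPaymentMode };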
In a typical single-page application (SPA), the server sends JSON data. The browser receives that JSON data and builds the HTML. In an isomorphic app, the server sends fully-formed HTML to the browser. This is typically done for SEO and code maintainability.
In an isomorphic app, the browser does not directly deal with the API server. This is because the API server renders JSON data, while the browser needs fully-formed HTML. To solve this problem, a “proxy server” is introduced between the browser and the API server. In this case, the proxy server is powered by Node.js.
Uploading a file in an isomorphic app
While working on an isomorphic app, we needed to upload a file to the API server. We couldn’t upload directly from the browser because we ran into a CORS issue. One way to solve a CORS issue is to add CORS support to the API server. Since we did not have access to the API server, this was not an option. This means the file must now go through the proxy server.
The problem can be seen as two separate issues:
1. Uploading the file from the browser to the proxy server.
2. Uploading the file from the proxy server to the API server.
Before we start writing any code, we need to accept the file on the proxy server. This can be done by using multer. Multer is a Node.js middleware for handling multipart/form-data. We need to initialize multer with a path where it will store the uploaded files. We can do that by adding the following code before initializing the Node.js server app.
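A minimal sketch of that initialization; the uploads path matches the directory mentioned below:

const express = require('express');
const multer = require('multer');

const app = express();
// Store incoming multipart uploads under ./uploads
const upload = multer({ dest: 'uploads/' });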
Now any file uploaded to the proxy server will be stored in the /uploads directory.
Next, we need a function which uploads the file from the browser to the Node.js server (step 1), and then we need to upload the same file from the Node.js server to the API server (step 2). To do that, we need to add a callback function to our Node.js server where we accept the POST request from step 1.
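A sketch of that route; the route path, form field name, forwarding packages (form-data, axios) and API URL are all assumptions:

const fs = require('fs');
const FormData = require('form-data');
const axios = require('axios');

// Step 1: multer accepts the browser upload and stores it under uploads/
app.post('/upload', upload.single('file'), (req, res) => {
  // Step 2: forward the stored file to the API server
  const form = new FormData();
  form.append('file', fs.createReadStream(req.file.path));

  axios
    .post('https://api.example.com/uploads', form, { headers: form.getHeaders() })
    .then((response) => res.json(response.data))
    .catch(() => res.status(502).send('Upload to API server failed'));
});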
When we deploy Rails applications on Kubernetes, it stops the existing pods and spins up new ones. When the old pod is terminated by the ReplicaSet, active Sidekiq processes are also terminated. We run our batch jobs using Sidekiq, and it is possible that Sidekiq jobs are running while a deployment is being performed. Terminating the old pod during deployment can kill the already running jobs.
As per the default policy of Kubernetes, Kubernetes sends a command to delete the pod with a default grace period of 30 seconds. At this time, Kubernetes sends the TERM signal. When the grace period expires, any processes still running in the pod are killed with SIGKILL. We can adjust the terminationGracePeriodSeconds timeout as per our need, for example changing it from 30 seconds to 2 minutes.
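A sketch of that change in the pod spec:

spec:
  # Give processes up to 2 minutes to shut down after TERM
  terminationGracePeriodSeconds: 120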
However, there might be cases where we are not sure how much time a process takes to gracefully shut down. In such cases we should consider using the PreStop hook, which is our next solution.
The PreStop hook is called immediately before a container is terminated. It is a blocking call, which means it is synchronous: the hook must complete before the container receives the TERM signal. Note that unlike solution 1, this solution is not tied to a fixed signal-to-kill window of its own: Kubernetes waits for the PreStop process to finish before signalling the container, although the overall termination grace period still applies, so it must be set high enough to cover the hook. It is never a good idea to have a process which takes more than a minute to shut down, but in the real world there are cases where more time is needed. Use PreStop for such cases. We decided to use the PreStop hook to stop Sidekiq because we had some really long-running processes.
Using PreStop hooks in Sidekiq deployment
This is a simple deployment template in which Sidekiq is terminated along with the pod during deployment. Next, we will use the PreStop lifecycle hook to stop Sidekiq safely before pod termination. We will add the following block to the deployment manifest.
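A sketch of such a block; the sidekiqctl invocation (available up to Sidekiq 5), the pidfile path and the timeout are assumptions:

lifecycle:
  preStop:
    exec:
      # Quiet and stop Sidekiq, waiting up to 60s for running jobs
      command: ["/bin/bash", "-lc", "bundle exec sidekiqctl stop tmp/pids/sidekiq.pid 60"]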
The PreStop hook stops all the Sidekiq processes and performs a graceful shutdown of Sidekiq before the pod is terminated. We can add this configuration to the original deployment manifest. Let’s launch this deployment and monitor the rolling deployment. We can confirm that existing Sidekiq jobs are completed before the old pod is terminated during the deployment process. In this way, we handle a graceful shutdown of the Sidekiq process. We can apply this technique to other processes as well.