Learnings from Client-side and Server-side rendering in Backbone.js – Mr. Joel Kemp


In this article, I’ll talk about the high-level technologies, and the pros and cons, of going from a fully client-side rendered application to a server-side rendered one with a supplementary Backbone.js app on top. This article draws on about a year of experience using both techniques on production apps at YouNow.

Client-side rendered apps

If you’ve ever built a fully client-side rendered app (I only have experience with Backbone.js), you are probably aware of its downfalls: slow loading/render times, and no useful search engine indexing of your pages.

Despite the downfalls, these apps are pretty straightforward to build:

1. The backend serves a super-lightweight html page: pretty much an empty ‘body’ tag with one or so ‘div’ elements that your JS app dynamically fills in. All of the data rendering happens via client-side templates that are attached as script tags (with a funky type so the browser ignores it) to the html page.

  • Pro: This is great in that your app can get delivered to any JS-capable device and more or less run the same way.
  • Con: Of course, download times of JS bundles, mobile browser fetch order, and the speed of JS processing for your app will vary. In the long term, you lose the ability to guarantee a baseline, performant user experience. For example, our profile app on a desktop browser took 3 seconds to be interactive and near 10 seconds on mobile web. Ouch. (Yes, it should be stated that image sizes and the number of requests also play into this.)
  • Con: Google’s web crawler only sees your bare html page with no content in it. The crawler doesn’t execute your JS app and wait for the content to render, it just leaves with barely anything to index. This hurts. For example, our profile app had a user’s activities on the site, their broadcasts, and their connections to other people. All of that content enriched a user’s identity, but search engines knew none of it.
  • The web-crawler’s view: [screenshot]
  • The fully-loaded view (not what the crawler saw): [screenshot]
  • Con: Even Google’s PageSpeed Insights tool will tell you that above-the-fold content should be immediately available. It’s like a slap in the face.

In retrospect, it was a bad move to make this a fully client-side app. The wrong tool for the job, as a former colleague Amjad Masad puts it. Chalk that up to my ignorance, believing the client-side rendered app-of-the-future hype, and not caring about SEO until the organic traffic numbers came in.

2. The backend primarily serves as a json-serving API service.

  • This is super-great for the backend as it does less work. With a caching layer in front of it (memcached for example), eventually, the backend kicks back with a cold beer and barely has to get up.
  • Backend engineers and DevOps engineers love this approach – infinite scale since the content delivery network (CDN) can bear the load! Though, managing/purging the cache (and CDN) is still a hard problem – but that’s irrespective of the client/server rendering choice.
  • That initial, lightweight dummy page that the server sends the client – consisting of the templates and the bare body tag – could potentially be cached on the CDN alongside the JSON api responses.

I say potentially because the problem is that you often need to “bootstrap” user (i.e., model-specific) data into the page (more on this later) – making it a dynamic, “uncacheable” page; technically still cacheable (one cache entry per user-profile), but not in the sense that we originally wanted: the same static page to be served to every user (one cache entry, period). In addition to the bootstrapped model data, we also injected metadata (for SEO purposes, go figure) into the page based on parameters attached to the url.

For example, if on a request for that initial html page, your backend checks a user’s session to see if they’re logged in, then you’d likely want to bake the user’s data into the page. The JS app would then have immediate access to that data and avoid the delay of an additional HTTP request. One way to use this bootstrapped data is to initialize a User model with it – potentially even outside of your app’s initialization (somewhere in an isolated ‘script’ tag, for example) if you need that model alive near-immediately.

Backbone.js’s documentation also touches on this, though you’d want the data placed within your app’s global namespace (like YouNow.Accounts or YouNow.Projects):

Loading Bootstrapped Models

When your app first loads, it’s common to have a set of initial models that you know you’re going to need, in order to render the page. Instead of firing an extra AJAX request to fetch them, a nicer pattern is to have their data already bootstrapped into the page.

```html
<script>
  var accounts = new Backbone.Collection;
  accounts.reset(<%= @accounts.to_json %>);
  var projects = new Backbone.Collection;
  projects.reset(<%= @projects.to_json(:collaborators => true) %>);
</script>
```

3. Modern JS Libraries/Frameworks are built for client-side rendering.

  • I literally haven’t seen a single Backbone.js tutorial that doesn’t do client-side rendering.
  • Models and Collections are designed to fetch JSON from your backend api service.
  • Views render that fetched data with those script-tag templates (only usable on the client side) that were baked into the initial html page.
  • Slap some client-side routing in there and the backend doesn’t need to know what’s going on.
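That last piece can be sketched with a toy router (a hypothetical helper for illustration, not Backbone.Router’s actual implementation): map URL fragments to handler functions so that, after the first page load, the backend never hears about navigation.

```javascript
// Toy client-side router: ':param' segments become capture groups, and a
// matching fragment dispatches to its handler without any server round-trip.
function createRouter(routes) {
  return function navigate(fragment) {
    for (var pattern in routes) {
      // e.g. 'user/:id' becomes the regex ^user/([^/]+)$
      var regex = new RegExp('^' + pattern.replace(/:\w+/g, '([^/]+)') + '$');
      var match = fragment.match(regex);
      if (match) return routes[pattern].apply(null, match.slice(1));
    }
    return null; // unmatched routes fall through
  };
}

// Usage: the handler would normally instantiate views/models client-side.
var navigate = createRouter({
  'user/:id': function (id) { return 'render profile view for user ' + id; }
});
```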

After Google’s PageSpeed Insights stated that bit about above-the-fold content – and after realizing that the concept of a single (CDN’d) static page was impractical – I started rendering the above-the-fold parts of that initial html page on the backend.

The fact is that we were already fetching information about the user whose profile you were viewing and bootstrapping that in. So why not use that readily available information on the backend to just spit out a bit more html? This realization really got the ball rolling on moving toward the server-side approach.

Server-side Rendered Apps

First, a little rant: I went to the inaugural BrooklynJS meetup this past Thursday and saw Pat Nakajima talk about how Github builds their app. That link takes you to a slide that I found most disturbing – stating that “we build websites like it’s 2005” – dissing the server rendered nature of their application. Disclaimer: I’m not picking on Pat nor his talk, he gave a really entertaining presentation that we all appreciated.

It occurred to me that the hype around client-side rendered apps is still alive and kicking. It’s not the first time I’ve heard server-side rendering negatively referred to as an “old school” technique. It also doesn’t help that there’s a lack of information on modern-day methods of server-rendering an app with a supplementary JS app on the client-side. We really do need to take a step back, examine the pros and cons of our tools more carefully, and choose the best tech and rendering approach for the job. </rant>

My first thought on moving from a client-side to a server-side rendered app was: “so, uhhh, where does the JS fit in?” Ya know, it used to do everything…

Here’s an overview of the necessary pieces for approaching a server-rendered app:

1. As expected, the server actually returns a heavier page to the client. It will consist of html representations of data that would have normally been rendered on the client-side via script-tag templates.

  • Pro: Search engine crawlers will actually see the content when they arrive! Instant SEO boosts and a guaranteed increase in organic traffic to your pages.
  • Pro: The client won’t have to do the following steps just to see a piece of content: download your entire JS app, parse and execute it, set up the router, instantiate the proper views and models, fetch the JSON data, and then finally render that data. The content is all there when the client arrives – increasing the perceived speed of your page.
  • Pro: The backend further utilizes the bootstrappable data that it has available to render more of the page.
  • Pro: You now have more of a baseline guarantee as to when users will see your page’s content.
  • Con: Depending on how you distribute the responsibilities of your api and html rendering services, your backend has to think about both JSON and HTML-fragment caching. If portions of the data won’t change, then why should the backend have to re-render its html representation?
  • Con: Along with fragment caching, you now have to apply the hard problem of cache invalidation to both your JSON and HTML at the cache (memcached or redis) and CDN layers.
  • Con: Can the server also understand the Underscore templates that I spent so much time on? Nope! Unless you’re using Node on the backend. At YouNow, we’re using PHP, so we were forced to use another templating engine (more on that later).
  • Con: There’s added complexity on how the app and backend deal with page navigation. The best solution that I know of is to have an application-level router that, on page navigation, calls particular backend endpoints for html or json, and re-renders only certain parts of the page (like a central ‘content’ div). This works well when you’re navigating within an already loaded JS app. However, if you do a page refresh or a direct navigation to a particular url (that’s a subpage of the application), the backend needs to know how to respond (render the entire page containing the content pertinent to that url/route). Lastly, on that page refresh, the JS app needs to load and put itself back in that state. Yeah, that’s a lot of work if you go for that smooth/seamless of an experience.
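As a sketch of that last point, here’s a hypothetical application-level navigation handler (the names and endpoint shape are assumptions for illustration, not our production code): within an already-loaded app, it fetches only the content for a route and swaps it into the central ‘content’ div, leaving the rest of the page alone.

```javascript
// Hypothetical in-app navigation: fetchContent hits a backend endpoint
// that returns just the HTML for this route's content region, not a whole
// page. On a hard refresh, the backend renders the full page instead and
// this code never runs.
function navigateInApp(route, fetchContent, contentEl) {
  var html = fetchContent(route);
  contentEl.innerHTML = html; // re-render only the central 'content' div
  return html;
}
```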

The backend won’t be rendering everything, however. If a user wants to scroll to see the next page of results (without a page refresh), how do you handle that? Do you fetch JSON from the backend for that page and render the html on the client? Do you ask the backend to render and serve only the html (fragment) for that next page of results?

Both possibilities are feasible and sites like Justin.tv, Facebook, and Twitter (to name a few) opt for the html-served option. At YouNow, we’re feeling out the former, JSON-served flavor. That leads us to the next point.

2. Sharing templates on the server and client

  • Pro: Reuse of the same markup (html representations of data) in different parts of the stack
  • Con: Potentially requires a build process (or integration into an existing one) for delivering the templates to the client.
  • Con: I’ve found it difficult to get the rendering of partials right on the UI side. Beware of using backend paths on your render calls. In other words, if using Mustache, be careful when you refer to a partial as {{> mypath/subpath/comment}} since mypath/subpath/ won’t exist on the client.

Let’s say that we want to render the html representation for a comment (the schema isn’t important). Both the backend and client have to render this html representation. Do you choose a PHP-specific templating engine and a separate JS-specific engine? If the markup syntax differs, then you’re potentially duplicating your efforts with separate templates for each part of the stack.

At YouNow, we chose Mustache as our shared templating engine. On the backend, we use the PHP implementation, and on the front-end we use the JS implementation.

On the backend, we define our mustache templates (comment.mustache, for example) in their own files in a templates/shared/ folder and we transport that to the client using JSTtoJS. We use Grunt to watch this particular folder and run JSTtoJS to compile the shared .mustache files into a single JS file to be bundled into our main JS bundle (or lazily loaded if you prefer).
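To make the sharing concrete, here’s a toy stand-in for Mustache’s {{name}} interpolation (the real engine also handles sections, partials, and escaping – use the actual PHP and JS Mustache implementations in practice). The point is that the same comment.mustache string renders identically on both ends of the stack.

```javascript
// Toy renderer: replace each {{key}} with the matching field from the
// data object. A real Mustache engine does much more, but the template
// string itself is engine-portable.
function renderMustacheLike(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (_, key) {
    return data[key] != null ? String(data[key]) : '';
  });
}

// Hypothetical contents of templates/shared/comment.mustache:
var commentTemplate = '<li class="comment"><b>{{author}}</b>: {{text}}</li>';
var html = renderMustacheLike(commentTemplate, { author: 'joel', text: 'hi' });
```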

Other companies like LinkedIn have gone with other templating engines and methods. If you use require.js, you could also use the text plugin to bake the templates into the served bundle. We’re going to experiment with that in the near future.

3. Bootstrapping data

  • Pro: Easy to do since the Backend already fetched that data (either from the DB or the cache). Be sure to put it at the end of the ‘body’ tag (same as you’d do for script tags) to avoid blocking the DOM render with the parsing of that JSON object.
  • Pro: Insanely easier for building your app’s models than trying to extract a model’s representation from the DOM (mentioned below).
  • Con: You’re definitely eliminating the possibility of caching the page as a single page for all users (mentioned previously), but that’s a pipe dream anyway.
  • Con: Initial page size is much bigger – resulting in a larger download. This has implications on mobile, but I think it’s worth it since they’ll see the content rendered immediately. There are also plenty of optimizations (image sprites, minimizing the number of http requests, lazy loading other bundles, and more) and app-specific bottlenecks (maybe you can prefetch images that get dynamically loaded, maybe you have layout/reflow thrashing that you didn’t know about) that you can overcome to lessen the load.

Once the client has the rendered page, you need to layer a JS app on top of it. The first thought that entered my mind was to parse the DOM to build the models and then attach views to those DOM elements. This is bad for performance reasons (DOM interaction is slow) and a bit of a hard problem: you need to write all this logic to dig into the respective DOM elements. You can litter the DOM with ‘data-’ attributes to embed this data, but you slowly realize that you’re just bootstrapping the data the hard way.

The better solution was to bake the model-data into the page (in a script tag) for use in building the models/collections of the Backbone app. By model-data, I mean the JSON data that the backend used to render the html for those models (comments, in our previous example). It should go without saying that this data should be namespaced to avoid introducing global variables. For example: YouNow.Bootstrap will contain the bootstrapped comments which would be accessible via YouNow.Bootstrap.comments.
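As an illustration, here’s a hypothetical shape for that payload (the field values are made up for the example); on the real page, the backend writes this assignment inside a script tag at the end of the body.

```javascript
// What the backend bakes into the page: the same JSON it used to render
// the comments' html, namespaced to avoid stray globals.
var YouNow = YouNow || {};
YouNow.Bootstrap = {
  comments: [
    { id: 1, author: 'alice', text: 'First!' },
    { id: 2, author: 'bob',   text: 'Great broadcast' }
  ]
};

// The Backbone app can then build its collection with no extra HTTP
// request and no DOM scraping:
//   var comments = new Backbone.Collection(YouNow.Bootstrap.comments);
```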

4. Building your Backbone app from the bootstrapped data

  • Pro: Surprisingly straightforward. In our example, we bootstrapped a JSON array of comment objects – lending itself very easily to a collection of comment models. Pass that array into a collection constructor and you’re a good ways there.
  • Con: There’s some complexity in building views – as they need to know whether they’re latching onto existing DOM elements or generating those elements dynamically (for client-rendered data). You need to know which views are likely to support client-side rendering and build both render methods and initialization logic for manipulating existing $el’s (i.e., a view’s primary DOM element).
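The dual-mode view idea can be sketched with a hypothetical helper (not Backbone’s actual API): a view either latches onto a server-rendered element, or creates and renders its own element for client-rendered data like a freshly posted comment.

```javascript
// If the server already rendered the markup, claim the existing element
// and skip the client-side render; otherwise render from the template.
function attachOrRender(existingEl, renderFn) {
  if (existingEl) {
    return { el: existingEl, clientRendered: false }; // reuse server html
  }
  return { el: renderFn(), clientRendered: true };    // render client-side
}
```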

At this point, we’re layering interactivity on top of an already-rendered page. Hand in hand with this is the question of “what interactivity do we need right this moment on page load?” This brings me to my next point.

5. Figure out how to decrease the time to interactivity

As was previously stated, the big boon for server-side rendering is that the content is already on the page. Now it’s just a matter of decreasing how long it takes for the JS app to initialize and add that necessary layer of interactivity.

This is a hard problem that I haven’t solved properly yet. I’ve written about lazily injecting scripts when you need them, and it’s even more important when you’re trying to decrease the time-to-scroll (i.e., time to interactivity).

I’m currently experimenting with deferring the initialization of certain views (like sidebars and menus). However, the real key is to open up a timeline view of your app (or use Google’s PageSpeed Insights tool) and see what’s really taking up the most time on the page and delaying the window.onload event. This is where front-end engineering gets fun.

Summary

I didn’t talk about progressive enhancement (the idea that if JS isn’t enabled, your page should still work properly), but that’s definitely possible with a server-rendered app. For example, an anchor ‘a’ tag could have an href to a real backend endpoint (like comments/view/1 for viewing the details of the first comment), but if the JS app is active, it should prevent the a-tag’s navigation (using jQuery’s preventDefault) and do an ajax-oriented navigation to avoid a hard refresh of the page. Ugh, more work, but it results in a really robust webpage.
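That enhancement can be sketched with a hypothetical click handler (the jQuery wiring is omitted; in practice this would live in a delegated ‘click’ listener on the comment links):

```javascript
// With JS disabled (or the app not yet booted), the browser simply
// follows the anchor's real href; with the app active, we cancel the
// hard navigation and re-render in place via ajax.
function onCommentLinkClick(event, appIsActive, ajaxNavigate) {
  if (!appIsActive) return 'hard-navigation'; // browser follows the href
  event.preventDefault();                     // stop the full page load
  return ajaxNavigate(event.target.href);     // e.g. 'comments/view/1'
}
```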

I also didn’t talk about hybrid alternatives to server-rendering. For example, using PhantomJS to serve full html to crawlers, or having your backend detect Google’s escaped fragment and serving full pages. These approaches work well if you’re neck-deep in a fully client-side rendered app that will be around for a while and want some of the aforementioned server-side benefits (like SEO, most notably).

Think about whether or not your page could benefit from server-side rendering. Full client-side rendered apps are definitely easier to build, but you should be aware of the consequences and the server-side alternative.

That’s it, I’ve told you everything that I know :). Thanks for reading and happy coding!

UPDATE: Jeremy Ashkenas (creator of Backbone.js) was kind to re-emphasize the point in response to this article (on Hacker News):

Client-side-only-rendered applications should only be for private pages, user’s workspaces, and web applications — pages that a search engine will never see.

I don’t think newcomers to client-side rendered applications are fully aware of the consequences until they’re deep in it. That’s not the fault of anyone or any particular technology; hopefully, this article serves as a disclaimer of sorts.

For more information:

More about the techniques/ideas discussed here can be found in a few resources on the web. We’ll definitely see more articles from big companies in the near future.

Discussions about this article: 
http://www.reddit.com/r/programming/comments/1qs6ql/learnings_from_clientside_and_serverside/

https://news.ycombinator.com/item?id=6746508