16

We use Backbone heavily for rendering our pages. All the data is passed as JSON from the server and the HTML is created on the client with Backbone and Mustache. This poses a big problem for SEO. One way I was planning to get around this was to detect whether the request is from a bot and, if so, use something like HtmlUnit to render the page on the server and return that instead. I'd love some alternate ideas, and I'd also like to know if there's a flaw in what I'm planning to do.
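For concreteness, here's a rough sketch of the kind of detection I have in mind (assuming an Express-style Node server purely for illustration; the render function is just a stub standing in for HtmlUnit or whatever actually produces the snapshot):

    // Hypothetical middleware: send known crawlers a pre-rendered snapshot
    // instead of the JSON-driven Backbone/Mustache client app.
    var express = require('express');
    var app = express();

    var BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider/i;

    function renderSnapshot(url, callback) {
      // Stub: in practice this would drive HtmlUnit (or another headless
      // browser) against the URL and capture the resulting DOM.
      callback(null, '<html><body>rendered snapshot of ' + url + '</body></html>');
    }

    app.use(function (req, res, next) {
      var userAgent = req.headers['user-agent'] || '';
      if (BOT_PATTERN.test(userAgent)) {
        renderSnapshot(req.url, function (err, html) {
          if (err) { return next(err); }
          res.send(html);
        });
      } else {
        next(); // normal users get the client-rendered app
      }
    });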

Saurav Shah

5 Answers

5

Build your site using Progressive Enhancement and Unobtrusive JavaScript.

When you do significant Ajax stuff, use the history API.

Then you have real URLs for everything and Google won't be a problem.
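As a rough illustration (the routes and root here are made up for the example), a Backbone router using pushState looks something like this:

    // With pushState enabled, Backbone routes map to real URLs
    // (/products/42) instead of fragment URLs (#/products/42).
    var AppRouter = Backbone.Router.extend({
      routes: {
        '':             'home',
        'products/:id': 'showProduct'
      },
      home:        function ()   { /* render the home view */ },
      showProduct: function (id) { /* render the product view for id */ }
    });

    new AppRouter();
    Backbone.history.start({ pushState: true, root: '/' });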

Quentin
4

I don't necessarily like that the only answer offered is to redo everything to meet a broad best practice. There's good reason to consider an unobtrusive JavaScript approach, but maybe there's a good reason you're doing this as a JS-required site. Let's pretend there is.

If you're doing a Backbone.js application with dynamically filled-in client templates, the best way I can think of to do this is in the link below. Basically, it amounts to telling a headless browser to run through a set of navigation commands to visit all your users/products/pages and save a static HTML file at every step for SEO purposes.

What's the least redundant way to make a site with JavaScript-generated HTML crawlable?
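As a rough sketch of that idea (the URLs, port, and output paths are placeholders), a PhantomJS script that walks a list of pages and saves the rendered DOM might look like:

    // PhantomJS controller script: open each URL, give the client-side
    // templates time to render, then save the DOM as a static HTML file.
    var fs   = require('fs');
    var urls = ['/', '/products', '/about'];  // placeholder list of pages

    function snapshot(index) {
      if (index >= urls.length) { phantom.exit(); return; }
      var page = require('webpage').create();
      page.open('http://localhost:3000' + urls[index], function () {
        // crude wait so Backbone/Mustache has finished rendering
        setTimeout(function () {
          var name = urls[index] === '/' ? 'index' : urls[index].slice(1).replace(/\//g, '_');
          fs.write('snapshots/' + name + '.html', page.content, 'w');
          page.close();
          snapshot(index + 1);
        }, 1000);
      });
    }

    snapshot(0);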

SimplGy
  • I think besides the SEO complications (who cares about people without JS), there is something very appealing about RESTful services where the server-side language isn't responsible for spitting out a load of spaghetti HTML, or, even if it is MVC'd, tying the data down to a fixed HTML view. – Dominic Apr 06 '13 at 19:07
3

In a project I'm working on at the moment I'm attempting to cover all the bases: a Backbone-driven client, pushState URLs, bookmarkable pages, and HTML fallback where possible. The approach I've taken is to use Mustache for the templates, break them up into nice little components for my Backbone views, and make them available in a raw format to the client. When a page is requested, the templates can be processed on the server to produce a full page, and Backbone attaches to the elements it wants to control.

It's not a simple setup, but so far I haven't hit any roadblocks and I haven't duplicated any templates. I've had to create a page wrapper template for each available URL, as Mustache doesn't do "wrappers", but I think I should be able to eliminate these with some extra coding on the server.

The plan is to have some components as pure JS where required by the interface, and others rendered by the server and enhanced with JS where desired.
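As a rough illustration of the client-side half (the template id, container id, and model are invented for the example), a view attaching to server-rendered markup and reusing the same raw Mustache template looks something like:

    // The server has already rendered #product from the same Mustache
    // template; the view attaches to it and only re-renders on changes.
    var rawTemplate = $('#product-template').html(); // raw Mustache shipped to the client

    var ProductView = Backbone.View.extend({
      el: '#product',

      initialize: function () {
        this.listenTo(this.model, 'change', this.render);
      },

      render: function () {
        // same template the server used, so client and server markup match
        this.$el.html(Mustache.render(rawTemplate, this.model.toJSON()));
        return this;
      }
    });

    // Bootstrapped attributes would normally come from the page or an API call.
    var product = new Backbone.Model({ name: 'Example', price: 10 });
    new ProductView({ model: product });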

lecstor
  • Are you sharing templates or template processing code? The idea has been interesting to me for a while. If one used node.js, seems like both templates and logic could be shared. If one used RoR, seems like templates could be shared, because the ERB syntax is mostly compatible with JS templating – SimplGy Feb 24 '13 at 17:15
  • We're sharing templates between server and client. Running Python on the server we use Pystache to process the templates. We use partials for things like items in a list and make the partial template available to the client code so it can add items to the list when needed. I think the logic required is usually going to be different depending on whether you're in the client or on the server but we share server-side logic between rpc handlers and page handlers by having them very light-weight and calling methods on common controller/model objects. – lecstor Mar 12 '13 at 10:35
1

If you are using Node.js, use Rendr:

Render your Backbone.js apps on the client and the server, using Node.js.

hereandnow78
0

There are pros and cons to using the Google AJAX crawling scheme - I used it for a social networking site (http://beta.playup.com) with mixed results...

I wrote a gem to handle this transparently as Rack middleware for Ruby users (gem install google_ajax_crawler) (https://github.com/benkitzelman/google-ajax-crawler).

Read about it at http://thecodeabode.blogspot.com.au/2013/03/backbonejs-and-seo-google-ajax-crawling.html
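The gem itself is Rack middleware, but the scheme it implements boils down to intercepting the crawler's _escaped_fragment_ request and returning a rendered snapshot. Roughly, in Express terms (the snapshot function below is just a stub, not what the gem actually runs):

    // A page opts in with <meta name="fragment" content="!"> (or uses #! URLs).
    //   user URL:     http://example.com/#!/profile/42
    //   crawler URL:  http://example.com/?_escaped_fragment_=/profile/42
    var express = require('express');
    var app = express();

    app.use(function (req, res, next) {
      var fragment = req.query._escaped_fragment_;
      if (fragment === undefined) { return next(); } // normal request
      takeSnapshot('/#!' + fragment, function (err, html) {
        if (err) { return next(err); }
        res.send(html); // crawler gets the rendered DOM
      });
    });

    function takeSnapshot(url, callback) {
      // Stub: in practice a headless browser renders the page and the
      // resulting DOM is captured.
      callback(null, '<html><body>rendered snapshot of ' + url + '</body></html>');
    }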

The summary is that even though I successfully served rendered DOM snapshots to the requesting search engine, and I could see in Webmaster Tools that Google was crawling something like 11,000 pages of the site, I found that Google was more prone to classify the app's various states (URLs) as versions of the same page rather than as separate index entries. Try doing a search for beta.playup.com - only one result is listed even though the rendered content changes radically between URLs...

Ben