I would suggest a generic solution that should work in any language; I used Python's pycassa to work this out.
First approach:
-Say column_count = 10 (how many results to display on the front end).
-Fetch the first 11 columns, sort the first 10, and send them to the user (front end) as a JSON object, along with a parameter last=11th_column.
-The user then requests page 2 with prev = 1st_column_id, column_start = 11th_column and column_count = 10. Based on this, I query Cassandra like cf.get('cf', key, column_start=11th_column, column_count=10) (see the sketch after this list).
-This way I can traverse to the next page and the previous page.
-The only issue with this approach is that the columns inside the super column are not sorted the way I need, so it did not work for me.
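For reference, here is a minimal sketch of that column_start-based paging with pycassa. The keyspace name, PAGE_SIZE, row key and the get_page helper are illustrative assumptions, not part of my actual setup:

```python
from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('my_keyspace')   # assumed keyspace name
cf = ColumnFamily(pool, 'cf')

PAGE_SIZE = 10  # column_count shown on the front end

def get_page(row_key, start_column=''):
    # Fetch one extra column; its name becomes column_start for the next page.
    result = cf.get(row_key, column_start=start_column,
                    column_count=PAGE_SIZE + 1)
    columns = list(result.items())
    page = columns[:PAGE_SIZE]
    last = columns[PAGE_SIZE][0] if len(columns) > PAGE_SIZE else None
    return page, last

# First page, then the next page starting from the returned marker:
page1, last = get_page('some_row_key')
if last is not None:
    page2, last = get_page('some_row_key', start_column=last)
```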
Second approach (the one I used in production):
-Fetch all super columns and their columns for a row key, e.g. in Python pycassa, cf.get('cf', key).
-Sort the result in Python using sorted() with a lambda key based on the column values.
-Once sorted, split the results into buckets, each the size of a page (the column count). Also filter out any rogue data before bucketing if needed.
-Store the results page by page in Redis with keys such as 'row_key|page_1|super_column', and refresh Redis periodically (sketch below).
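A hedged sketch of that fetch, sort, bucket and cache flow, assuming a pycassa super column family and the redis-py client; the keyspace name, rebuild_pages helper, sort direction and cache TTL are illustrative assumptions:

```python
import json
import redis
from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('my_keyspace')     # assumed keyspace name
cf = ColumnFamily(pool, 'cf')            # super column family
r = redis.StrictRedis()                  # assumed local Redis instance

PAGE_SIZE = 10
CACHE_TTL = 300                          # refresh period in seconds (assumed)

def rebuild_pages(row_key):
    # 1. Fetch every super column and its columns for the row key.
    #    pycassa defaults to 100 columns per call, so raise column_count as needed.
    row = cf.get(row_key, column_count=10000)   # {super_column: {column: value}}

    for super_column, columns in row.items():
        # 2. Filter out rogue data, then sort by column value (descending here).
        items = [(c, v) for c, v in columns.items() if v is not None]
        items = sorted(items, key=lambda cv: cv[1], reverse=True)

        # 3. Split into page-sized buckets and store each page in Redis
        #    under 'row_key|page_N|super_column' (values assumed JSON-serializable).
        for offset in range(0, len(items), PAGE_SIZE):
            bucket = items[offset:offset + PAGE_SIZE]
            redis_key = '%s|page_%d|%s' % (
                row_key, offset // PAGE_SIZE + 1, super_column)
            r.setex(redis_key, CACHE_TTL, json.dumps(bucket))

rebuild_pages('some_row_key')
```

In practice I would run this rebuild from a periodic job so the cached pages in Redis stay fresh.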
My second approach worked pretty well with small/medium amounts of data.