
I have an Angular app pulling data from a REST server. Each item we pull has some "core" data - what's needed to display its basic representation - and then what I call "secondary" data: comments and other things the user might or might not want to see.

I'm trying to optimize our request pattern to minimize the overall time the user spends looking at a loading spinner: pulling all the (core and secondary) data at once makes the initial request return far too slowly, but pulling only the bare essentials and waiting until the user asks for something we haven't requested yet also creates unnecessary load time, at least inasmuch as I could have anticipated what they'd want to see and loaded it while they were busy reading the core content.

So, right now I'm doing a "core content" pull first and then initiating a "secondary" pull at the end of the success callback from the first. This is going to be an experimental process, but I'm wondering what (if any) best practices have been established for this situation. (I'm sure a good answer is a Google search away, but in this instance I'm not quite sure what to search for - thus the quotation marks in this question's title.)
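To make the pattern concrete, here's roughly what I'm doing now (the service name, endpoint paths, and response shapes below are placeholders for illustration, not our actual API):

```javascript
// Hypothetical AngularJS service illustrating the chained core/secondary pull.
app.factory('ItemService', ['$http', function ($http) {
  return {
    load: function (itemId, scope) {
      // 1. Core pull: just enough to render the item's basic representation.
      return $http.get('/api/items/' + itemId + '/core')
        .then(function (response) {
          scope.item = response.data;
          // 2. Secondary pull, kicked off as soon as the core data is in,
          //    so it (hopefully) arrives while the user is still reading.
          return $http.get('/api/items/' + itemId + '/secondary');
        })
        .then(function (response) {
          angular.extend(scope.item, response.data);
        });
    }
  };
}]);
```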

A more concrete question: Am I better off initiating many small HTTP transactions or a few large ones? My instinct is to do many small ones, particularly if I can anticipate a few things the user is likeliest to want to see first and get those loaded as soon as possible. But surely there's an asymptote here? Or am I off-base in this line of thinking entirely?

drew moore
  • More, smaller requests will get you your core data faster, but altogether you will have more overhead and probably more total loading time. – alesc Mar 07 '15 at 14:10
  • It always depends on the size of the requests and responses. You already said that pulling all the data at once would make the user wait too long. So your approach seems pretty sensible: load only the absolute essentials, show them quickly, and then add more data on the fly, showing the user what he/she can expect next. A better keyword to search for would be "progressive loading", where you can find a lot of patterns for images and content as well. – Rias Mar 07 '15 at 15:13

2 Answers


I use the same approach as you, and it works pretty well for a many-keyed collection of 10,000+ items.

The collection is paginated with ui.bootstrap.pagination, so a maximum of 10 items is displayed at once. It can be searched by title.

So my approach is to retrieve only the id and title for the whole collection, so the search can be used straight away.

Then, since the items displayed on screen are held in an array, I place a $watch on that array. The job of the $watch is to fetch the full details of the items in the array (the secondary pull), but of course only when the array changes. So, in the worst-case scenario, you are pulling the full details of only 10 items.

Results are cached for efficiency. The page displays results instantly, as the $watch acts as a pre-loader (see the sketch below).
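Here is a rough sketch of that setup (the controller, endpoint paths, and cache variable are illustrative, not my exact code; $watchCollection is one way to watch the paged array):

```javascript
// Illustrative AngularJS controller: id/title pull up front, details on demand.
app.controller('ListCtrl', ['$scope', '$http', function ($scope, $http) {
  $scope.pageItems = [];   // the (at most 10) items currently on screen
  var detailsCache = {};   // id -> full item details, so we never re-fetch

  // Core pull: id and title for the whole collection, so search works immediately.
  $http.get('/api/items?fields=id,title').then(function (response) {
    $scope.allItems = response.data;
  });

  // Secondary pull: whenever the visible page changes, pre-load full details
  // for just those items.
  $scope.$watchCollection('pageItems', function (items) {
    angular.forEach(items, function (item) {
      if (detailsCache[item.id]) {
        item.details = detailsCache[item.id];
        return;
      }
      $http.get('/api/items/' + item.id).then(function (response) {
        detailsCache[item.id] = response.data;
        item.details = response.data;
      });
    });
  });
}]);
```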

Am I better off initiating many small HTTP transactions or a few large ones?

I believe large transactions, for just a few items (the ones that are clickable on screen), are very efficient.

Regarding the best practice bit: I suppose there are many ways to achieve your goals; however, the technique you are using works extremely well, as it retrieves only what is needed, and only just before it is needed. Besides, it is simple enough to implement.

Also, like you, I would have thought many smaller pulls were surely better than a few large ones. However, I was advised to go for a large pull in a comment on this question: Fetching subdocuments with angular $http

Manube

To answer your question about which keywords to search for, I suggest:

progressive loading

An alternative could be to use WebSockets or to stream the response as it loads; Oboe.js does this quite well: http://oboejs.com/examples
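For example, a minimal Oboe.js sketch (the /api/items endpoint, the {"items": [...]} response shape, and the addItemToView helper are assumptions for illustration):

```javascript
// Render each item as soon as its JSON node arrives, instead of waiting for
// the whole response to finish downloading.
oboe('/api/items')
  .node('items.*', function (item) {
    // Called once per item while the response is still streaming in.
    addItemToView(item);   // hypothetical rendering helper
  })
  .done(function (fullResponse) {
    // Called once the whole document has arrived.
    console.log('Loaded', fullResponse.items.length, 'items');
  });
```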

Rias