
I'm trying to implement paging on Android using a DataSource implementation, plus a PagedListAdapter.

Initially, the requirements were to have a full list in memory (not using RoomDB) and I wanted to take a moving "view" over that data as the user scrolls - i.e. feeding it to the adapter in pages. I've accomplished that with the use of PositionalDataSource.

However, now I have a new requirement. Some of the items in the original full list are actually "loading" items (i.e. spinners) and I need to fetch the data that these cells represent in chunks. These chunks have undetermined sizes. When a chunk loads in, the "loading" item should move down the list, and the loaded chunk be inserted where the "loading" item used to be. This should continue until all chunks that the "loading" item represents have been loaded in, at which point the "loading" item at the end of the list should be removed.

This means my underlying data source actually grows dynamically as the user scrolls through the list, which I think means PositionalDataSource is not the right type of data source to use, as its source docs state:

 * Position-based data loader for a fixed-size, countable data set, supporting fixed-size loads at
 * arbitrary page positions.

Emphasis on fixed-size and countable - obviously my data set is not fixed-size (and its total count isn't known up front).

I've looked at other implementations of DataSource and think I've found the right one: ItemKeyedDataSource. Each of my items does indeed have a unique key, and the source docs of this class state that:

 * Incremental data loader for paging keyed content, where loaded content uses previously loaded
 * items as input to future loads.

This indicates to me that I can use it for my purposes: when it needs to load in a range for an item with a given key that happens to be a "loading" item, it can use that loading item's data to determine what to load.

However, I'm struggling a bit with the actual implementation, as the official docs don't give any real example usage, and the linked example code assumes the use of Room or Retrofit, neither of which matches my approach.

Could anyone help with giving me an overview of how this DataSource is supposed to function conceptually and/or in code examples using an in memory data set that needs to grow dynamically?

I realize this is pretty vague, I've only started working with this class this morning and I'm struggling.

Thomas Cook

1 Answer


Paging already loads paginated data for you - the point of implementing a DataSource is to give Paging a way to incrementally load more data as the user scrolls near the end. The one caveat is that in Paging 2.x, load state is not built into the library, so you need to track it yourself and show the spinner using some mechanism such as ConcatAdapter.
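For reference, a minimal ItemKeyedDataSource over an in-memory list might look roughly like this - a sketch, not a definitive implementation; the Item class and the InMemoryRepo with its itemsAfter() helper are hypothetical stand-ins for your own repo, not APIs from the library:

```kotlin
import androidx.paging.ItemKeyedDataSource

data class Item(val id: String, val text: String)

// Hypothetical in-memory repo that the DataSource defers to.
class InMemoryRepo(private val items: MutableList<Item>) {
    // Return up to `count` items strictly after the item with `key`
    // (or from the start of the list when key == null).
    fun itemsAfter(key: String?, count: Int): List<Item> {
        val start = if (key == null) 0 else items.indexOfFirst { it.id == key } + 1
        return items.drop(start).take(count)
    }
}

class RepoDataSource(private val repo: InMemoryRepo) : ItemKeyedDataSource<String, Item>() {

    // The key for an item is derived from the item itself.
    override fun getKey(item: Item): String = item.id

    override fun loadInitial(
        params: LoadInitialParams<String>,
        callback: LoadInitialCallback<Item>
    ) {
        callback.onResult(repo.itemsAfter(params.requestedInitialKey, params.requestedLoadSize))
    }

    override fun loadAfter(params: LoadParams<String>, callback: LoadCallback<Item>) {
        // params.key is the key of the last loaded item; load what comes after it.
        callback.onResult(repo.itemsAfter(params.key, params.requestedLoadSize))
    }

    override fun loadBefore(params: LoadParams<String>, callback: LoadCallback<Item>) {
        // This sketch only grows forward; return nothing for backwards loads.
        callback.onResult(emptyList())
    }
}
```

The important point is that the DataSource itself stays stateless over the repo: it just answers "give me N items after key K", and the repo owns the actual list.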

If you want to try the Paging 3 APIs (still in beta), LoadState is a built-in concept and you can simply use the .withLoadStateFooter() transform to turn a PagingDataAdapter into a ConcatAdapter which automatically shows the loading spinner when Paging fetches a new page.
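Wiring up the footer looks roughly like this - MyPagingDataAdapter and MyLoadStateAdapter are hypothetical subclasses of the real PagingDataAdapter and LoadStateAdapter classes:

```kotlin
// Sketch: attach a load-state footer to a Paging 3 adapter.
val pagingAdapter = MyPagingDataAdapter()          // extends PagingDataAdapter<Item, VH>
recyclerView.adapter = pagingAdapter.withLoadStateFooter(
    footer = MyLoadStateAdapter { pagingAdapter.retry() }  // extends LoadStateAdapter<VH>
)
```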

To clarify the bit in the docs about counted snapshots - Paging operates with a single source of truth (DataSource / PagingSource), which is supposed to represent a static list (once fully loaded). This doesn't mean you have to have the whole list in memory, but the items each instance of DataSource fetches should generally match the mental model of a static list. For example, if you are paging in data from a DB, a single instance of DataSource / PagingSource is only valid while there are no changes in the DB. Once you insert / modify / delete a row, that instance is no longer valid, which is where DataSource.Factory comes into play, giving you a new PagedList / DataSource pair.
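The factory/invalidation cycle for an in-memory source could be sketched like this - DataSource.Factory, create(), and invalidate() are real Paging 2 APIs, while RepoDataSource, InMemoryRepo, and onRepoChanged() are hypothetical names for illustration:

```kotlin
import androidx.paging.DataSource

class RepoDataSourceFactory(private val repo: InMemoryRepo) :
    DataSource.Factory<String, Item>() {

    private var current: DataSource<String, Item>? = null

    // Paging calls create() whenever it needs a fresh snapshot.
    override fun create(): DataSource<String, Item> =
        RepoDataSource(repo).also { current = it }

    // Call this whenever the repo mutates (e.g. a loaded chunk replaces a
    // "loading" item): the current PagedList is now stale, and Paging will
    // rebuild it via create(). The repo itself survives the invalidation.
    fun onRepoChanged() {
        current?.invalidate()
    }
}
```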

Now, if you also need to incrementally update the backing dataset (the DB in this example) via a layered-source approach, Paging 2 offers a BoundaryCallback you can register to fire off a network fetch when Paging runs out of data to load; alternatively, in Paging 3 the new API for this is RemoteMediator (still experimental).
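A BoundaryCallback sketch in Paging 2, assuming a hypothetical fetchChunkAfter() on the repo that appends the next chunk and then invalidates the current DataSource:

```kotlin
import androidx.paging.PagedList

class ChunkBoundaryCallback(private val repo: InMemoryRepo) :
    PagedList.BoundaryCallback<Item>() {

    override fun onZeroItemsLoaded() {
        // Nothing in the repo yet; fetch the first chunk.
        repo.fetchChunkAfter(null)
    }

    override fun onItemAtEndLoaded(itemAtEnd: Item) {
        // Paging ran out of loaded data; fetch the next chunk after the last item.
        // The repo appends it and invalidates the DataSource, so a new snapshot
        // (with the "loading" item shuffled down) is built.
        repo.fetchChunkAfter(itemAtEnd)
    }
}
```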

dlam
  • Perfect - I've accepted this answer. I'd ended up coming to the same conclusion on my own yesterday, doing some jiggery pokery in my "in memory" repo that the data source defers to, in order to "shuffle" loading items down when things get loaded in and eventually remove them when "exhausted". Whenever I add/remove something from the underlying in-memory repo, I invalidate the data source via callbacks, causing my data source factory to recreate the data source (but the underlying in-memory repo survives this). – Thomas Cook Feb 26 '21 at 09:16