Example use case: we have a list of messages that can change status (read/unread). New messages can appear anywhere in the list, and messages can also be deleted locally or from the backend by some other means.
Our current implementation uses SQLBrite as a wrapper around a local cache of the data we know about:
1. Actions (deleting a message, changing its status) are performed by sending remote API calls over the network; we also poll the backend periodically to pick up changes made elsewhere.
2. The SQLBrite cache is updated as a side effect of remote API call results, both from user-initiated actions and from periodic polling. At this point we know exactly what changed, and we execute the corresponding INSERT, UPDATE, or DELETE against the cache.
3. The UI observer re-executes its SQLBrite query against the messages table and updates itself in reaction to the change in the local cache, for example by animating away a message with a pending delete.
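For context, the observation side of this flow looks roughly like the sketch below. This is a minimal stdlib stand-in for SQLBrite's `BriteDatabase.createQuery` / table-change notification mechanism, not the real API; the `Message` class, the in-memory table, and the observer wiring are all hypothetical simplifications.

```java
import java.util.*;
import java.util.function.Consumer;

// Hypothetical simplified message row.
class Message {
    final long id;
    final boolean read;
    Message(long id, boolean read) { this.id = id; this.read = read; }
}

// Stand-in for the SQLBrite-wrapped cache: any mutation notifies observers,
// which then re-run their query (mirroring how createQuery re-emits on
// table changes).
class MessageCache {
    private final Map<Long, Message> table = new LinkedHashMap<>();
    private final List<Consumer<List<Message>>> observers = new ArrayList<>();

    void observeAll(Consumer<List<Message>> observer) {
        observers.add(observer);
        observer.accept(queryAll()); // initial emission, as SQLBrite does
    }

    // Step 2: the cache is updated as a side effect of an API result.
    void upsert(Message m) { table.put(m.id, m); notifyObservers(); }
    void delete(long id)  { table.remove(id);  notifyObservers(); }

    // Step 3: observers only ever see the full query result, not the delta.
    private void notifyObservers() {
        List<Message> rows = queryAll();
        for (Consumer<List<Message>> o : observers) o.accept(rows);
    }

    private List<Message> queryAll() { return new ArrayList<>(table.values()); }
}
```

Note that the observer receives the whole result set on every change; the delta that was known in step 2 has been discarded by the time step 3 runs.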
The question is: what is the best way to handle step 3? We end up performing what feels like an expensive reconciliation for new, deleted, and updated messages. We have to re-run the whole query, build a set of message IDs from the result, and compare it against the message IDs currently held in the UI model. We are reconstructing information we already had in step 2, when we knew exactly which rows needed updating in the UI widgets, because that information is not passed through to our observer in step 3.
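Concretely, the reconciliation in step 3 amounts to a set-difference pass like the following. This is a sketch of the diff described above, not our actual code; the `Msg` class and the added/removed/changed grouping are illustrative.

```java
import java.util.*;

// Hypothetical message row: an ID plus a status field (e.g. read/unread).
class Msg {
    final long id;
    final String status;
    Msg(long id, String status) { this.id = id; this.status = status; }
}

class Reconciler {
    // Classify the fresh query result against what the UI already shows,
    // returning three groups of message IDs: added, removed, and changed.
    static Map<String, Set<Long>> diff(List<Msg> uiModel, List<Msg> queryResult) {
        Map<Long, Msg> before = new HashMap<>();
        for (Msg m : uiModel) before.put(m.id, m);
        Map<Long, Msg> after = new HashMap<>();
        for (Msg m : queryResult) after.put(m.id, m);

        Set<Long> added = new TreeSet<>(after.keySet());
        added.removeAll(before.keySet());            // in result, not in UI
        Set<Long> removed = new TreeSet<>(before.keySet());
        removed.removeAll(after.keySet());           // in UI, gone from result

        Set<Long> changed = new TreeSet<>();
        for (Msg m : queryResult) {
            Msg old = before.get(m.id);
            if (old != null && !old.status.equals(m.status)) changed.add(m.id);
        }

        Map<String, Set<Long>> out = new HashMap<>();
        out.put("added", added);
        out.put("removed", removed);
        out.put("changed", changed);
        return out;
    }
}
```

Every emission forces this full pass over both collections, even when step 2 touched a single row.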
Is there a better way to structure this so we can avoid the costly reconciliation step?