
I have a bit of a theoretical question for which there is no code yet, as I am still just in the thinking stage. I want to update an app to allow users to share their data with others through the Dropbox Datastore or something like that. However, when a user creates data it gets populated into multiple SQLite tables on the device, and each table has an auto-incrementing integer primary key that is used as a foreign key in other tables to link the data.

If there is more than one user actually creating and sharing the data, then the primary key columns are obviously going to be an issue. If I download the data and store it locally, I can't insert user 1's key value into user 2's data table, firstly because of the auto-increment and secondly because user 2 might already have unshared data saved under that key value.

I have thought about a few options, but nothing is particularly appealing or robust. I was thinking about creating a UUID to identify the device; that value would have to be stored in each of the tables, and the primary key would become a combination of that column and the current primary key integer, which would obviously have to have the auto-increment removed. So to pick up all related data from each table, both the id column and the UUID column would have to be used.
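
To make the idea concrete, here is a rough sketch of what I mean, using a hypothetical note table with a related attachment table (none of these names are from the actual app):

```python
import sqlite3
import uuid

# Rough sketch of the composite-key idea: every row is keyed by
# (device_uuid, local_id), and local_id is maintained per device instead of
# relying on AUTOINCREMENT. Table and column names are placeholders.
DEVICE_UUID = str(uuid.uuid4())  # generated once per install and persisted

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE note (
    device_uuid TEXT NOT NULL,
    local_id    INTEGER NOT NULL,
    body        TEXT,
    PRIMARY KEY (device_uuid, local_id)
);
CREATE TABLE attachment (
    device_uuid      TEXT NOT NULL,
    local_id         INTEGER NOT NULL,
    note_device_uuid TEXT NOT NULL,
    note_local_id    INTEGER NOT NULL,
    path             TEXT,
    PRIMARY KEY (device_uuid, local_id),
    FOREIGN KEY (note_device_uuid, note_local_id)
        REFERENCES note (device_uuid, local_id)
);
""")

def next_local_id(conn, table, device_uuid):
    # Stand-in for AUTOINCREMENT: highest existing id for this device + 1.
    return conn.execute(
        "SELECT COALESCE(MAX(local_id), 0) + 1 FROM %s WHERE device_uuid = ?" % table,
        (device_uuid,),
    ).fetchone()[0]

note_id = next_local_id(conn, "note", DEVICE_UUID)
conn.execute("INSERT INTO note VALUES (?, ?, ?)", (DEVICE_UUID, note_id, "shared note"))
```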

I feel like there must be a more robust method of achieving this, though. Does anyone have any better suggestions?

collusionbdbh
  • I think the idea with the UUID is good. I would use a combination of UUID and timestamp for a primary key field, perhaps the time in milliseconds. This could work as long as a user can't create more than one record per millisecond. – JackTools.Net Oct 31 '13 at 15:50
  • Because you mentioned them, I should point out that Dropbox datastores do not (yet) support sharing between users. – user94559 Oct 31 '13 at 16:14
  • I know; the workaround if I use them is going to be for users to connect to the same Dropbox account, with the intention of later setting up an actual SQL server to host the data. – collusionbdbh Oct 31 '13 at 16:20

1 Answer


If I'm understanding you correctly, you need some sort of centralised database in the cloud that your local app communicates with, is that right? A client should never create the ids for such a system. The usual practice in these cases is to always have a remote id which is created by your DB in the cloud, and whenever you don't have that value yet, you can fall back on a local id created on the device, which is different from the remote one.

So, to illustrate my words, we could set up the following example. Your app stores messages in a database. Say you create messages with local ids 1, 2, 3. Those ids are never meant to be unique in your central database in the cloud; instead, you'd just use them as a local fallback. As soon as you can send those 3 messages to your centralised database, it will give them 3 new remote ids (e.g. 35, 46, 54) that you use wherever uniqueness matters.
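
For illustration, a minimal sketch of that local-id / remote-id split, assuming a plain SQLite message table; assign_remote_ids() here is just a stand-in for the round trip to your server, not a real API:

```python
import sqlite3

# Sketch of the local-id / remote-id split: local_id is the device-only
# fallback, remote_id stays NULL until the central database assigns one.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE message (
    local_id  INTEGER PRIMARY KEY AUTOINCREMENT,  -- fallback id, device-only
    remote_id INTEGER,                            -- NULL until the server assigns one
    body      TEXT
)""")

for body in ("hello", "how are you", "bye"):
    conn.execute("INSERT INTO message (body) VALUES (?)", (body,))

_server_sequence = iter(range(35, 1000))  # pretend server-side sequence

def assign_remote_ids(local_ids):
    # The real call would upload the rows and get back server-generated ids.
    return {local_id: next(_server_sequence) for local_id in local_ids}

pending = [row[0] for row in conn.execute(
    "SELECT local_id FROM message WHERE remote_id IS NULL")]
for local_id, remote_id in assign_remote_ids(pending).items():
    conn.execute("UPDATE message SET remote_id = ? WHERE local_id = ?",
                 (remote_id, local_id))
```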

Note that when you have multiple requesters/users accessing the same database, there is no way to guarantee uniqueness unless you follow the approach explained above, or you request a certain number of unique ids from your database in the cloud in advance and on demand. Keep in mind that the actual truth can only come from the databases on your servers.
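
And a small sketch of that second option, reserving a batch of unique ids from the cloud database ahead of time; reserve_block() is again a hypothetical server call:

```python
# Sketch of reserving a batch of unique ids up front and handing them out
# locally on demand. reserve_block() is a hypothetical server call.
class IdPool:
    def __init__(self, reserve_block, batch_size=100):
        self._reserve_block = reserve_block
        self._batch_size = batch_size
        self._pool = []

    def next_id(self):
        if not self._pool:
            # One round trip reserves a whole contiguous range.
            start = self._reserve_block(self._batch_size)
            self._pool = list(range(start, start + self._batch_size))
        return self._pool.pop(0)

# Fake server that just advances a counter, to show the usage:
_counter = {"next": 1000}
def fake_reserve_block(n):
    start = _counter["next"]
    _counter["next"] += n
    return start

pool = IdPool(fake_reserve_block)
print(pool.next_id(), pool.next_id())  # -> 1000 1001
```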

Jose L Ugia
  • I think that will work if I create a remote table to hold the next id for each table. Before uploading, that value will be retrieved and incremented, then that remote id will be saved to the local version, and then the data can be uploaded containing that remote id. When another user downloads it they will get their own auto-incremented local id but will also receive that remote id when downloading. That way I will always have all data linked locally and remotely. I think that is going to be the best option. – collusionbdbh Oct 31 '13 at 16:18
  • That's it. This is a widely used approach. – Jose L Ugia Oct 31 '13 at 16:37
  • Would it make sense to leave my existing tables in place as they are and instead create one or more mapping tables that map the remote ids to the local ids? – collusionbdbh Nov 01 '13 at 03:37
  • As per my comment above I have begun development using this strategy. It means that both the local table and the remote table only have one column each, and each user's app keeps track of the mapping between a local item's id and the remote item's id in a mapping table (sketched below, after these comments). The mapping table has columns for the table name, column name, local id, and remote id. Seems to be working well so far without making things too complicated. – collusionbdbh Nov 01 '13 at 19:20
  • The mapping table is a nice idea for clarity, but you don't really need it. The remote table (the source of truth) just has its own id. On the other hand, the local tables (the ones in the clients) have a field for the remote id and one for the local id. The local one is assigned right away, and the remote one is returned by your server whenever your record is created successfully there. That way you always know that remoteId == null in your local database means the record has not been sent to the remote server yet, which is all you need. – Jose L Ugia Nov 02 '13 at 10:20
  • Because the table structures already exist and because most rows will not need to store the remote id I will stick with the mapping table. Thanks a lot for your help. – collusionbdbh Nov 08 '13 at 18:07
  • Just to add on to Jose's point, using different IDs for locally created items is very helpful. Personally I have all local IDs be negative numbers, which makes it very easy to check whether an ID is a remote ID or a local one. – Cassie Jan 31 '14 at 06:00
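
For reference, a loose sketch of the mapping-table approach discussed in the comments above; the existing tables stay untouched and a separate table records which local id corresponds to which remote id (all names are illustrative only):

```python
import sqlite3

# Mapping-table sketch: one row per (table, column, local id) pairing it
# with the id the central database handed out.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE id_map (
    table_name  TEXT NOT NULL,
    column_name TEXT NOT NULL,
    local_id    INTEGER NOT NULL,
    remote_id   INTEGER NOT NULL,
    PRIMARY KEY (table_name, column_name, local_id)
)""")

def remote_id_for(conn, table, column, local_id):
    # Returns None when the row has not been given a remote id yet.
    row = conn.execute(
        "SELECT remote_id FROM id_map"
        " WHERE table_name = ? AND column_name = ? AND local_id = ?",
        (table, column, local_id),
    ).fetchone()
    return row[0] if row else None

conn.execute("INSERT INTO id_map VALUES ('note', '_id', 3, 54)")
print(remote_id_for(conn, "note", "_id", 3))  # -> 54
```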