
I am working on an Android application that uses greenDAO as a data persistence layer. The application downloads data from various sources across multiple threads (managed by a thread pool), and each piece of data is inserted into the database in a transaction using insertOrReplaceInTx. This works fine.

My question is whether it is technically possible, using greenDAO, to encapsulate these different transactions (which occur on different threads) into an overall transaction, using nested transactions. I know that in theory this is possible if all the transactions take place on a single thread, but I am unsure whether it is possible with the insertOrReplaceInTx calls occurring on different threads.

The reason I wish to encapsulate these into a single overall transaction is because they represent a synchronisation process within an app. In the event of any single part of the import failing, I wish to abort and rollback all of the modifications within the overall transaction.

If I begin a transaction with db.beginTransaction on the main thread, where I initiate the import process, this creates a deadlock when another thread tries to call insertOrReplaceInTx.

Is the correct way to counter this to ensure that all greenDAO transactions are taking place on the same thread?

jdmunro

1 Answer


AFAIK, you cannot, because each thread manages its own database connection.

If you have such a dependency between these operations, you probably want to synchronize them anyway.

E.g., what if Job A finishes well before Job B and then Job B's database connection fails? Your data will go out of sync again, so you still need some logic for the other job. Also, SQLite writers are mutually exclusive.

I would suggest creating a utility class that can run a list of Runnables in a transaction. Each job, when finished, enqueues a Runnable to this utility; these Runnables contain the actual database commands.

When the last one arrives (this depends on your dependency logic), the utility will run all runnables in a transaction.

A sample implementation may look like this (I used a simple counter, but you may need more complex logic):

class DbBundle {
    private final SQLiteDatabase db; // the database behind your greenDAO session
    private final AtomicInteger mLatch;
    private final List<Runnable> mRunnables =
            Collections.synchronizedList(new ArrayList<Runnable>());

    DbBundle(SQLiteDatabase db, int numberOfTx) {
        this.db = db;
        mLatch = new AtomicInteger(numberOfTx);
    }

    void cancel() {
        mLatch.set(-1); // so decrement can never reach 0 in submit
    }

    boolean isCanceled() {
        return mLatch.get() < 0;
    }

    void submit(Runnable runnable) {
        mRunnables.add(runnable);
        if (mLatch.decrementAndGet() == 0) {
            db.beginTransaction();
            try {
                for (Runnable r : mRunnables) r.run();
                db.setTransactionSuccessful();
            } finally {
                db.endTransaction();
            }
        }
    }
}
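Since SQLiteDatabase is Android-only, the counter logic above can be sanity-checked on a plain JVM by hiding the transaction boundary behind a small interface. This is a simplified sketch under that assumption; `TxRunner` and `LatchBundle` are hypothetical stand-ins, not part of greenDAO:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for db.beginTransaction()/setTransactionSuccessful()/
// endTransaction(), so the latch logic can run without Android.
interface TxRunner {
    void runInTx(Runnable work);
}

class LatchBundle {
    private final AtomicInteger mLatch;
    private final List<Runnable> mRunnables =
            Collections.synchronizedList(new ArrayList<Runnable>());
    private final TxRunner mTx;

    LatchBundle(int numberOfTx, TxRunner tx) {
        mLatch = new AtomicInteger(numberOfTx);
        mTx = tx;
    }

    void cancel() {
        mLatch.set(-1); // decrement can never reach 0 again
    }

    boolean isCanceled() {
        return mLatch.get() < 0;
    }

    void submit(Runnable runnable) {
        mRunnables.add(runnable);
        if (mLatch.decrementAndGet() == 0) {
            // last job arrived: run all buffered work inside one "transaction"
            mTx.runInTx(new Runnable() {
                public void run() {
                    for (Runnable r : mRunnables) r.run();
                }
            });
        }
    }
}
```

Nothing runs until the last submit arrives, and a cancel before that point means the buffered work never executes.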

When you create each job, you pass in this shared DbBundle, and the last one will execute them all. So a job would look like:

....
try {
    if (!dbBundle.isCanceled()) { // avoid an extra request if already canceled
        final List<User> users = webservice.getUsers();
        dbBundle.submit(new Runnable() {
            @Override
            public void run() {
                saveUsers(users); // calls the DAO; no transaction code here
            }
        });
    }
} catch (Throwable t) {
    dbBundle.cancel();
}
yigit
  • Thanks, I think this is probably the way to go. I didn't quite understand your comment about the jobs becoming out of sync, though. I perhaps did not mention that each job is essentially writing data to a different table; there is no dependency between the jobs other than the fact that all of them should complete successfully for the sync to complete successfully. If Job B fails, I would expect to abort the process and end the transaction unsuccessfully. BTW I am using the Android Priority Job Queue, which I think you developed :) – jdmunro Oct 07 '14 at 08:34
  • hah, yes, this looks like a problem caused by decoupling things :). You mentioned that JobA and JobB should both complete or both fail. Even with a db transaction (if that worked across threads), you would need to block JobA until JobB finishes, which is not a good use of Job threads (and would deadlock if the # of dependent jobs is > the # of max consumers). Maybe we can integrate this as an extension to JobManager. I've been thinking about such useful extension jobs for a while. Feel free to send a pull request if you happen to write a generic one. – yigit Oct 08 '14 at 04:08
  • I am still not quite sure of the need to block JobA (I think I am misunderstanding something!). Let's say JobA downloads some entities and inserts them into the DB. There is no relationship with the data that is downloaded in JobB. Meanwhile, JobB downloads its entities and inserts them into the DB. AFAIK, greenDAO ensures serial access to the DB. So let's say we did have some convenient way to encapsulate all of this into an overall transaction. If JobA failed, we would abort this overall transaction and cancel JobB. Would this not ensure atomicity of the DB data without needing to sync jobs? – jdmunro Oct 08 '14 at 11:09
  • Yes, I think we are on the same page. There, I wanted to clarify that you still need a third utility class to merge these two jobs' db transactions (or at least pass a transaction reference between jobs, if there were such a thing). – yigit Oct 11 '14 at 00:10