
As I understand it, there are three somewhat distinct reasons to put multiple IndexedDB operations in a single transaction rather than using a unique transaction for each operation:

  1. Performance. If you’re doing a lot of writes to an object store, it’s much faster if they happen in one transaction.
  2. Ensuring data is written before proceeding. Waiting for the “oncomplete” event is the only way to be sure that a subsequent IndexedDB query won’t return stale data.
  3. Performing an atomic set of DB operations. Basically, “do all of these things, but if one of them fails, roll it all back”.

#1 is fine; most databases share that characteristic.
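For example, a batch of writes sharing one transaction gets both the speed benefit and all-or-nothing semantics. A sketch (db is assumed to be an open IDBDatabase, and the "whatever" store matches the later examples):

```javascript
// Write several objects in one readwrite transaction: faster than one
// transaction per put, and the whole batch commits or aborts together.
function putAll(db, objs, cb) {
    var tx = db.transaction("whatever", "readwrite");
    var store = tx.objectStore("whatever");
    objs.forEach(function (obj) { store.put(obj); });
    tx.oncomplete = function () { cb(null); };         // all writes durable
    tx.onerror = function (e) { cb(e.target.error); }; // any failure aborts all
}
```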

#2 is more unusual, and it causes issues when combined with #3. Let’s say I have a simple function that writes something to the database and runs a callback when it’s done:

function putWhatever(obj, cb) {
    var tx = db.transaction("whatever", "readwrite");
    tx.objectStore("whatever").put(obj);
    tx.oncomplete = function () { cb(); };
}

That works fine. But if you want to call that function as part of a group of operations that should atomically commit or fail together, it's impossible. You'd have to do something like this:

function putWhatever(tx, obj, cb) {
    tx.objectStore("whatever").put(obj).onsuccess = function () { cb(); };
}

This second version of the function is very different from the first, because the callback runs before the data is guaranteed to be written to the database. If you try to read back the object you just wrote, you might get a stale value.

Basically, the problem is that you can only take advantage of #2 or #3, not both. Sometimes the choice is clear, but sometimes not. This has led me to write horrible code like:

function putWhatever(tx, obj, cb) {
    if (tx === undefined) {
        tx = db.transaction("whatever", "readwrite");
        tx.objectStore("whatever").put(obj);
        tx.oncomplete = function () { cb(); };
    } else {
        tx.objectStore("whatever").put(obj).onsuccess = function () { cb(); };
    }
}

However, even that is not a general solution, and it could fail in some scenarios.
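One direction I've considered is returning a promise for the request's success and letting the caller separately decide whether to also wait for the transaction's oncomplete before reading anything back. A sketch (requestToPromise is my own helper name, not part of the IndexedDB API):

```javascript
// Wrap anything request-shaped (an object with onsuccess/onerror)
// in a Promise, so callers can chain on the request's success.
function requestToPromise(request) {
    return new Promise(function (resolve, reject) {
        request.onsuccess = function (e) { resolve(e.target.result); };
        request.onerror = function (e) { reject(e.target.error); };
    });
}

// putWhatever always takes a transaction and resolves on the request's
// success; the *caller* decides whether to additionally wait for the
// transaction's oncomplete before trusting reads.
function putWhatever(tx, obj) {
    return requestToPromise(tx.objectStore("whatever").put(obj));
}
```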

Has anyone else run into this problem? How do you deal with it? Or am I simply misunderstanding things somehow?

dumbmatter
  • you might look into a promise library to help chain up inter-dependent async callbacks. they add a bit of overhead up front, but make later code recycling easier. you can also at least name your functions and define them outside the callback so that sub-pieces don't need to be defined over and over again. – dandavis Jan 06 '15 at 04:16
  • I'm already doing that, but I don't see how it helps with the fundamental transaction issues. Whether it's in a callback or a promise or whatever, the concept is the same. – dumbmatter Jan 06 '15 at 05:16

1 Answer


The following is just opinion as this doesn't seem like a 'one right answer' question.

First, performance is an irrelevant consideration. Set it aside entirely unless later profiling reveals a material problem; the chances of a performance issue here are vanishingly low.

Second, I prefer to organize requests into transactions solely to maintain integrity. Integrity is paramount. Integrity as I define it here simply means that the database at any one point in time does not contain conflicting or erratic data. Essentially the database is never able to enter into a 'bad' state. For example, to impose a rule that cross-store object references point to valid and existing objects in other stores (a.k.a. referential integrity), or to prevent duplicated requests such as a double add/put/delete. Obviously, if the app were something like a bank app that credits/debits accounts, or a heart-attack monitor app, things could go horribly wrong.
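To sketch the referential-integrity case concretely: delete a parent object and every object referencing it inside one transaction, so a reader can never observe a dangling reference. (The store and index names here are hypothetical.)

```javascript
// Remove a parent and all children referencing it atomically: if any
// step fails, the transaction aborts and nothing is deleted.
function deleteParentAndChildren(db, parentId, cb) {
    var tx = db.transaction(["parents", "children"], "readwrite");
    tx.objectStore("parents").delete(parentId);
    var index = tx.objectStore("children").index("parentId");
    index.openCursor(parentId).onsuccess = function (e) {
        var cursor = e.target.result;
        if (cursor) {
            cursor.delete();
            cursor.continue();
        }
    };
    tx.oncomplete = function () { cb(null); };
    tx.onerror = function (e) { cb(e.target.error); };
}
```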

My own experience has led me to believe that code involving indexedDB does not lend itself to the traditional facade pattern. What worked best for me, in terms of organizing requests into different wrapping functions, was to design functions around transactions. Quite often there are very few DRY violations, because every request is nearly always unique to its transactional context. In other words, while a similar 'put object' request might appear in more than one transaction, it is so distinct in its behavior given its separate context that it merits violating DRY.

If you go the function-per-request route, I am not sure why you are checking whether the transaction parameter is undefined. Have the caller create the transaction and then pass it to the request functions in turn. Expect the tx to always be defined and do not over-zealously guard against it. If it is ever not defined, there is a serious bug either in indexedDB or in your calling function.

Explicitly, something like:

function doTransaction1(db, onComplete) {
  var tx = db.transaction(...);
  tx.oncomplete = onComplete;
  doRequest1(tx);
  doRequest2(tx);
  doRequest3(tx);
}
function doRequest1(tx) {
  var store = tx.objectStore(...);
  // ...
}
// ...

If the requests should not execute in parallel, and must run in a series, then this indicates a larger and more difficult design issue.
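If you do need a series, the usual approach is to issue the next request from the previous request's onsuccess handler, still within the same transaction. A sketch (the helper names are mine):

```javascript
function doSerialTransaction(db, onComplete) {
    var tx = db.transaction("whatever", "readwrite");
    tx.oncomplete = onComplete;
    // Issue request B only after request A succeeds; both share the tx.
    doRequestA(tx).onsuccess = function () {
        doRequestB(tx);
    };
}
function doRequestA(tx) {
    return tx.objectStore("whatever").put({ id: 1 });
}
function doRequestB(tx) {
    return tx.objectStore("whatever").put({ id: 2 });
}
```

Note that scheduling the next request from inside a request callback is also what keeps the transaction alive: an IndexedDB transaction auto-commits once it has no pending requests, so a series queued any other way (e.g. from a setTimeout) would find the transaction already finished.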

Josh
  • Thanks for the post! I'm checking if tx is undefined to switch between modes: "start a new tx and wait for it to complete, so we're absolutely sure the data is written" or "use an existing transaction and don't wait for it to complete because I want to do more stuff with it later". DRY is a concern if you need both of those behaviors from a function. – dumbmatter Jan 06 '15 at 03:31
  • And performance is a HUGE concern for me due to my gratuitous abuse of IndexedDB at http://basketball-gm.com/. Your performance advice is probably correct for most people, though. – dumbmatter Jan 06 '15 at 03:32
  • Can you clarify why you are waiting or not waiting? – Josh Jan 06 '15 at 03:48