
I'm using node-mysql2 with a connection pool and a connection limit of 10. When I restart the application, the results of my SELECT queries are correct: they match what's in the database. But once I start inserting new records and rerun the same SELECT queries, I get intermittent results that are missing the latest record I just added.

If I check the database directly, I can see the records I just added through my application. It's only the application that somehow cannot see them.

I think this is a bug, but here's how my code is set up:

const mysql = require('mysql2/promise'); // promise API, since the queries below use await
const dbs = {};                          // one pool per database name, created lazily
// dbConfigs holds the per-database connection settings (loading omitted here)

module.exports.getDB = function (dbName) {
    if (!(dbName in dbs)) {
        console.log(`Initiating ${dbName}`);
        let config = dbConfigs[dbName];
        dbs[dbName] = mysql.createPool({
            host: config.host,
            port: config.port || 3306,
            user: config.user,
            password: config.password,
            connectionLimit: 10,
            database: config.database,
            debug: config.debug
        });
    }
    return dbs[dbName]; // I just initialize each database once
};

This is my select query:

let db = dbs.getDB('myDb');
const [rows] = await db.query(`my query`);
console.log(rows[0]); // this one starts to show my results inconsistently once I insert records

And this is my insert query:

module.exports = {
    addNote: async function(action, note, userID, expID) {
        let db = dbs.getDB('myDb');

        await db.query(`INSERT INTO experiment_notes (experiment_id, action, created_by, note)
                            VALUES (?, ?, ?, ?)`, [expID, action, userID, note]);
    }
};

If I set the connectionLimit to 1, I cannot reproduce the problem... at least not yet.

Any idea what I'm doing wrong?

  • When you examine your table after a few minutes, do you find all the records you inserted? Or do some records never make it into the table? Please edit your question to clarify. – O. Jones Feb 27 '20 at 12:57
  • Just edited it. And yes, I can see the records in the table. – Mário Silva Mar 02 '20 at 10:24

1 Answer


Setting your connectionLimit to 1 has an interesting side effect: it serializes all access from your node program to your database. Each operation, be it INSERT or SELECT, must run to completion before the next one starts, because it has to wait for the single connection in the pool to free up.

It's likely that your intermittently missing rows are due to concurrent access to your DBMS from different connections in your pool. If you do a SELECT from one connection while MySQL is handling the INSERT from another connection, the SELECT won't always find the row being inserted. This is a feature. It's part of ACID (atomicity, consistency, isolation, durability). ACID is vital to making DBMSs scale up.

In more complex applications than the one you showed us, the same thing can happen when you use DBMS transactions and forget to COMMIT them.
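
For reference, here's a minimal sketch of a correctly committed transaction with mysql2's promise API. I'm assuming pool is the pool returned by your getDB, and I'm reusing the table from your question:

const conn = await pool.getConnection(); // dedicate one connection to the whole transaction
try {
    await conn.beginTransaction();
    await conn.query(`INSERT INTO experiment_notes (experiment_id, action, created_by, note)
                          VALUES (?, ?, ?, ?)`, [expID, action, userID, note]);
    await conn.commit();   // until this runs, other connections can't see the row
} catch (err) {
    await conn.rollback(); // undo the partial transaction on error
    throw err;
} finally {
    conn.release();        // always return the connection to the pool
}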

Edit: Multiple database connections, even connections from the same pool in the same program, work independently of each other. So, if you're performing a not-yet-committed transaction on one connection and a query on another connection, the query will (usually) reflect the database's state from before the transaction started. The query cannot force the transaction to roll back unless it somehow causes a deadlock. But deadlocks generate error messages, and you probably aren't seeing any.
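
If that's what's going on, the usual fix is to hand the transaction's connection to any helper you call, rather than letting the helper fetch a fresh handle from the pool. A sketch of your addNote reworked that way (the conn parameter is my addition, not your code):

module.exports = {
    // conn is the connection that owns the caller's transaction, so this
    // INSERT joins that transaction instead of running on a separate handle.
    addNote: async function(conn, action, note, userID, expID) {
        await conn.query(`INSERT INTO experiment_notes (experiment_id, action, created_by, note)
                              VALUES (?, ?, ?, ?)`, [expID, action, userID, note]);
    }
};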

You can sometimes control what a query sees by preceding it, on the same connection, with SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;. On a busy DBMS that can improve query performance a little and prevent some deadlocks, as long as you're willing to have your query see only part of a transaction. I use it for historical queries (what happened yesterday). It's documented in the MySQL manual. The default, and the one that explains what you see, is SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
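
If you do use it, it has to go on the same connection as the query, immediately before it. A sketch, assuming conn is a connection checked out from the pool:

// Applies only to the next transaction started on this connection;
// use SET SESSION ... to change the default for the whole connection.
await conn.query('SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED');
const [rows] = await conn.query(
    'SELECT * FROM experiment_notes WHERE experiment_id = ?', [expID]);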

But, avoid that kind of isolation-level stuff until you need it. (That advice comes under the general heading of "too smart is dumb.")

O. Jones
  • This might well be the problem indeed. I was calling a function within a transaction, and this function would get a new db handle instead of using the db handle from the transaction. It was something like: `transaction start → some sql instructions → function call with a different db handle → some other sql → commit with original db handle`. What I would expect is that if this had a problem for some reason, the whole transaction would be rolled back, except for whatever was done in that function; otherwise, everything would go through normally. Is this assumption wrong? – Mário Silva Mar 03 '20 at 08:45