All of the other answers (except the synchronized method) are subject to a race condition: If two such INSERT statements are running concurrently, the subqueries checking the number of rows will both find the count not exceeding the maximum, and both INSERT statements will insert their row, potentially pushing the user count beyond the maximum.
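For illustration, here is a rough sketch of that pattern (the table name, column, and SQL dialect are assumptions; the SELECT-without-FROM form works in PostgreSQL, for example). At READ COMMITTED, the default isolation level in many databases, two concurrent calls can both see a count below the limit and both insert:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch of the racy pattern: the count subquery and the INSERT are not
// atomic with respect to other concurrent transactions.
static void addUserUnsafe(Connection conn, String name, int maxUsers) throws SQLException {
    String sql = "INSERT INTO users_table (name) "
               + "SELECT ? WHERE (SELECT count(*) FROM users_table) < ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, name);
        ps.setInt(2, maxUsers);
        ps.executeUpdate();  // two concurrent executions can both pass the check
    }
}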
A solution using an AFTER INSERT ON user_table trigger would have the same problem, since the effects of a transaction are not visible to concurrent transactions before the transaction is committed.
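To make that concrete, here is a small, hypothetical demonstration (the connection URL, credentials, and table layout are assumptions): a row inserted in an open transaction on one connection is not counted by another connection until the first connection commits, so a count check in a trigger can miss it just the same:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class VisibilityDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:postgresql://localhost/test";  // assumed connection details
        try (Connection conn1 = DriverManager.getConnection(url, "user", "pass");
             Connection conn2 = DriverManager.getConnection(url, "user", "pass")) {
            conn1.setAutoCommit(false);
            try (Statement s1 = conn1.createStatement()) {
                // conn1 inserts a row but does not commit yet
                s1.executeUpdate("INSERT INTO users_table (name) VALUES ('alice')");
            }
            try (Statement s2 = conn2.createStatement();
                 ResultSet rs = s2.executeQuery("SELECT count(*) FROM users_table")) {
                rs.next();
                // the uncommitted row from conn1 is not included in this count
                System.out.println("conn2 sees " + rs.getInt(1) + " rows");
            }
            conn1.rollback();  // undo the demo insert
        }
    }
}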
The only bullet-proof solution is to use SERIALIZABLE transactions in the database:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

Connection conn;
...
conn.setAutoCommit(false);
boolean transactionDone = false;
while (!transactionDone) {
    try {
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        // count the existing users inside the serializable transaction
        int count;
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT count(*) FROM users_table")) {
            rs.next();
            count = rs.getInt(1);
        }
        if (count < MAX_USERS) {
            // add the new user to the DB, INSERT INTO users_table
        }
        conn.commit();
        transactionDone = true;
    } catch (SQLException e) {
        // roll back and retry only if it is a serialization failure (SQLSTATE 40001)
        if (!"40001".equals(e.getSQLState()))
            throw e;
        conn.rollback();
    }
}
This will only work if all transactions that insert users into the table use this piece of code (or at least use serializable transactions).