
I'm performing batch operations with MySQL Cluster NDB: inserting 10000 or 5000 rows and then removing them, each in a single transaction. The insert works fine, but the removal fails with an exception:

java.sql.SQLException: Got temporary error 221 'Too many concurrently fired triggers (increase MaxNoOfFiredTriggers)' from NDBCLUSTER
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1086) ~[mysql-connector-java-5.1.29-bin.jar:na]
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4237) ~[mysql-connector-java-5.1.29-bin.jar:na]
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4169) ~[mysql-connector-java-5.1.29-bin.jar:na]
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2617) ~[mysql-connector-java-5.1.29-bin.jar:na]
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2778) ~[mysql-connector-java-5.1.29-bin.jar:na]
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2828) ~[mysql-connector-java-5.1.29-bin.jar:na]
at com.mysql.jdbc.ConnectionImpl.commit(ConnectionImpl.java:1748) ~[mysql-connector-java-5.1.29-bin.jar:na]
at com.mchange.v2.c3p0.impl.NewProxyConnection.commit(NewProxyConnection.java:981) ~[c3p0-0.9.5.1.jar:0.9.5.1]
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doCommit(JdbcTransaction.java:112) [hibernate-core-4.1.0.Final.jar:4.1.0.Final]
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.commit(AbstractTransactionImpl.java:178) [hibernate-core-4.1.0.Final.jar:4.1.0.Final]

Attempting to increase MaxNoOfFiredTriggers didn't change the behaviour.

Sure, I could split the transaction into smaller batches, but 5000 rows doesn't sound like that huge a number, and it's especially strange that the insert actually works fine.
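If it comes to that, the split I have in mind would look roughly like this (the table name and chunk size are just placeholders):

```sql
-- delete in chunks, committing between them, so fewer triggers
-- fire within any single transaction (1000 is an arbitrary size)
delete from mytable limit 1000;
commit;
-- repeat until 0 rows are affected
```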

divanov

1 Answer


I think you are not taking the triggers fired to enforce foreign key constraints into account when setting MaxNoOfFiredTriggers.

Since the insertions are fine and the deletion is not, my guess is that at least one column of the table you are running DMLs on is referenced by another column (in the same or a different table) as part of an FK relationship.

For example consider the following tables,

create table parent(id int, unique uk1(id) using hash) engine ndb;

create table child(id_ref int, foreign key fk1(id_ref) references parent(id)) engine ndb;

Now, when you insert into the parent table, triggers fire in the backend to update the unique index uk1. No FK triggers are needed here, as there is nothing to check.
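For instance, with the schema above, an insert like this (values made up) only fires the unique-index maintenance trigger for each row:

```sql
-- no child rows reference these values yet, so no FK check is needed
insert into parent values (1), (2), (3);
```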

But when you delete from the parent table, the FK constraint has to be enforced: the deletion should be allowed iff the child table doesn't contain the value being deleted from the parent. This changes if the FK constraint has an on delete cascade clause - in that case, the child tuples with the same value also have to be deleted.
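A cascading variant of the child table above would look like this; with it, deleting a parent row removes the matching child rows instead of the delete being rejected:

```sql
create table child(id_ref int,
  foreign key fk1(id_ref) references parent(id)
  on delete cascade) engine ndb;
```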

All of this is enforced through FK triggers. So when deleting from parent, foreign key triggers fire in addition to the triggers that update the hash index - in this case, two triggers per row. If you delete 5000 rows in a batch with this setup, you need an allowance of at least 10K in MaxNoOfFiredTriggers. With multiple foreign keys, or multiple levels of foreign key dependencies, this calculation changes. So you have to check your schema for all foreign key dependencies and size MaxNoOfFiredTriggers accordingly.
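With that estimate in hand, the parameter would be raised in the cluster configuration file on the management node, typically followed by a rolling restart of the data nodes for it to take effect. The value below is only illustrative (5000 rows x 2 triggers, plus headroom):

```ini
# config.ini on the management node
[ndbd default]
MaxNoOfFiredTriggers=20000
```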

  • Honestly, I don't remember any more what the MaxNoOfFiredTriggers value was, and I didn't mention it in the question... – divanov Apr 03 '17 at 19:12