
Currently we are using MySQL Cluster with MyBatis. When we do a bulk insert or update into a particular table, it takes more than 120 seconds, but our expectation is below 30 seconds.

For example, with 10k records: first we tried to update all 10k rows at once, which took 180 to 240 seconds. So we moved to splitting the work into batches like 4k, 4k, 2k; this still took 120 to 180 seconds. Finally we split the records into batches of 2k each, which took 90 to 120 seconds, but CPU usage went very high.
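
The splitting itself is done roughly like this (a simplified sketch; Record, RecordMapper, and updateBatch are placeholder names, not our real code):

import java.util.List;

// Update the table chunk by chunk; each chunk becomes one mapper call.
void updateInBatches(RecordMapper mapper, List<Record> records, int batchSize) {
    for (int i = 0; i < records.size(); i += batchSize) {
        // subList is a view; Math.min guards the last, possibly smaller chunk
        List<Record> chunk = records.subList(i, Math.min(i + batchSize, records.size()));
        mapper.updateBatch(chunk); // e.g. batchSize = 2000 for the 2k case
    }
}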

There are no relationships (no foreign keys) on that table.

Are there any solutions for these cases? Should we move to NoSQL, or is there an optimization we can do at the DB level?


1 Answer


MySQL Cluster is very efficient when batching, since network round trips are avoided. But your inserts sound terribly slow; even serial inserts without batching should be much faster.

When I insert 20k batched records into a cluster table, it takes about 0.18 seconds on my laptop. This obviously depends on the schema and the amount of data.

Make sure you are not using e.g. auto-commit after each record. Also use multi-row batched inserts of the form

INSERT ... VALUES (), (), () ...

rather than one statement per row:

INSERT ... VALUES ()
INSERT ... VALUES ()
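
As a rough sketch of both points together in MyBatis (table, column, and class names below are made up; adjust to your schema), open the session with auto-commit off, send one multi-row statement per batch, and commit once:

import java.util.List;
import org.apache.ibatis.annotations.Insert;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

// Placeholder row type; replace with your actual model class.
class Row {
    public long id;
    public String name;
}

// Mapper that builds one multi-row statement via dynamic SQL:
// INSERT INTO my_table (id, name) VALUES (...), (...), ...
interface MyTableMapper {
    @Insert("<script>"
          + "INSERT INTO my_table (id, name) VALUES "
          + "<foreach collection='rows' item='r' separator=','>"
          + "(#{r.id}, #{r.name})"
          + "</foreach>"
          + "</script>")
    int insertMultiRow(@Param("rows") List<Row> rows);
}

class BatchInsertExample {
    static void insertBatch(SqlSessionFactory factory, List<Row> rows) {
        // autoCommit = false: commit once per batch, not once per record
        try (SqlSession session = factory.openSession(false)) {
            MyTableMapper mapper = session.getMapper(MyTableMapper.class);
            mapper.insertMultiRow(rows); // one statement, one network round trip
            session.commit();
        }
    }
}

Alternatively, MyBatis' ExecutorType.BATCH lets you queue many single-row statements and flush them together, but the multi-row VALUES form keeps the statement count down.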

You can also increase ndb-batch-size, depending on the amount of data to insert in one transaction.
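
For example, in my.cnf on the SQL (mysqld) nodes (the value below is illustrative; the default is 32768 bytes, and the right size depends on your row and transaction sizes):

# my.cnf -- ndb-batch-size is in bytes; default 32768
[mysqld]
ndb-batch-size=65536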

Details about your setup, how you insert, whether there are blobs, and what the schema and data look like would help to answer more specifically.