
I recently had to run a large batch of insert statements. It took a long time, and I found that MySQL supports a batch mode, so I ran a few tests with it.

Test1:

    @Test
    public void testSingleService() {
        try {
            for (int i = 0; i < 10000; i++) {
                iLostProductService.insertProduct(new Product("testSingleService" + i));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

mybatis.xml

    <sql id="baseSql">
        product_name
    </sql>

    <insert id="insertProduct" parameterType="com.team.lof.model.Product">
        insert into product (<include refid="baseSql"></include>) values (#{name})
    </insert>

It takes a very long time, so I gave up on this approach.

Test2:

    @Test
    public void testBatchService() {
        List<Product> productList = new ArrayList<Product>();
        for (int i = 0; i < 10000; i++) {
            productList.add(new Product("2227testBatchService" + i));
        }
        try {
            iLostProductService.insertProductList(productList);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

mybatis.xml

    <insert id="insertProductList">
        insert into product (<include refid="baseSql"></include>) values 
        <foreach collection="productList" item="product" separator=",">
            (#{product.name})
        </foreach>
    </insert>

It takes 1s 329ms, pretty fast.
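
Under the hood, the foreach above produces one long prepared statement with a placeholder group per row. A minimal JDBC-level sketch of what that amounts to (assuming a plain Connection named connection and a Product.getName() getter, neither of which appears in the original code):

    // One long statement, prepared and executed once:
    // insert into product (product_name) values (?),(?),(?),...
    String sql = "insert into product (product_name) values "
            + String.join(",", Collections.nCopies(productList.size(), "(?)"));
    try (PreparedStatement ps = connection.prepareStatement(sql)) {
        int idx = 1;
        for (Product product : productList) {
            ps.setString(idx++, product.getName()); // one parameter per row
        }
        ps.executeUpdate();
    }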

Test3:

To turn on batch mode, spring.datasource.url needs rewriteBatchedStatements=true:

    spring.datasource.url=jdbc:mysql://127.0.0.1:3306/lof?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&useSSL=false&serverTimezone=GMT&rewriteBatchedStatements=true

    @Test
    public void testBatchExcutorService() {
        SqlSession sqlSession = null;
        try {
            sqlSession = sqlSessionFactory.openSession(ExecutorType.BATCH);
            ProductDao batchProductDao = sqlSession.getMapper(ProductDao.class);
            for (int i = 0; i < 10000; i++) {
                batchProductDao.insertProduct(new Product("2047testBatchExcutorService" + i));
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (sqlSession != null) {
                sqlSession.commit();
                sqlSession.close();
            }
        }
    }

mybatis.xml

    <insert id="insertProduct" parameterType="com.team.lof.model.Product">
        insert into product (<include refid="baseSql"></include>) values (#{name})
    </insert>

It takes 1s 119ms, the fastest of the three.
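
For comparison, at the JDBC level ExecutorType.BATCH boils down to reusing one short PreparedStatement and queuing rows with addBatch(); with rewriteBatchedStatements=true, MySQL Connector/J rewrites the queued inserts into multi-value INSERT statements before sending them. A rough sketch (assuming a plain Connection named connection, which is not part of the original test):

    connection.setAutoCommit(false);
    try (PreparedStatement ps = connection.prepareStatement(
            "insert into product (product_name) values (?)")) {
        for (int i = 0; i < 10000; i++) {
            ps.setString(1, "2047testBatchExcutorService" + i);
            ps.addBatch();      // rows queue up on the client side
        }
        // With rewriteBatchedStatements=true, Connector/J rewrites the queued
        // rows into multi-value INSERT statements before sending them.
        ps.executeBatch();
        connection.commit();
    }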

I captured and analyzed the packets for each test. Each packet in test 1 is 151 bytes, and each packet in test 2 and test 3 is 1466 bytes. Test 2 sent a total of 634 packets with a total size of 575,925 bytes; test 3 sent a total of 452 packets with a total size of 392,882 bytes. I inspected the 1466-byte packets from test 2 and test 3 and found that the test 3 packets were compressed. Is this the reason batch mode is so fast?

  • Do you really want to know if sending 392Kb to the server, instead of 576Kb, is making the process much quicker? – Luuk Dec 27 '20 at 15:32
  • Yes, because I can’t find any other difference – learn in Java Dec 28 '20 at 01:44
  • I found another difference, PreparedStatement will not be performed in batch mode – learn in Java Dec 28 '20 at 13:55
  • What about the number of [transactions](https://dev.mysql.com/doc/refman/8.0/en/set-transaction.html) that are done in your `Test1` compared to `Test3` ? – Luuk Dec 28 '20 at 16:37
  • Test 2 uses Spring's transaction management, like this: ``` @Transactional(rollbackFor = Exception.class) public int insertProductList(List productList) throws Exception { int result = productDao.insertProductList(productList); return result; } ``` Test 3 manually opens the transaction and commits it. I think both tests run in a single transaction. – learn in Java Dec 29 '20 at 01:21
  • [Why does TRANSACTION / COMMIT improve performance so much with PHP/MySQL (InnoDB)?](https://stackoverflow.com/questions/14675147/why-does-transaction-commit-improve-performance-so-much-with-php-mysql-innodb) – Luuk Dec 29 '20 at 07:29
  • The comparison in that question is: with automatic commit, every SQL statement is committed in its own transaction, while with a manually opened transaction there is only one transaction – learn in Java Dec 30 '20 at 05:09
  • And there is only one transaction in each of my test 2 and test 3 – learn in Java Dec 30 '20 at 05:10
  • Multi-row insert (=Test2) builds a single `PreparedStatement` with many placeholders `?`. Its performance gets worse when there are many columns and/or items in the list. JDBC batch (=Test3) builds a simple `PreparedStatement` (with one placeholder in your case) and reuses it, which gives the driver a chance to optimize the internal process (like `rewriteBatchedStatements` in MySQL). The actual performance depends on various factors, but using the batch executor is the recommended way when dealing with a large data set. – ave Jan 01 '21 at 16:40
  • Yes. Although I don't know how MySQL optimizes it internally, judging from the amount of data transmitted on the network, the execution time has nothing to do with packet compression. – learn in Java Jan 05 '21 at 09:21
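
A minimal sketch of the auto-commit point raised in the comments above (assuming a plain JDBC Connection named connection; whether test 1 actually runs with auto-commit is exactly what the comments ask):

    // If auto-commit is on, each executeUpdate() is committed as its own transaction,
    // so 10,000 single-row inserts mean 10,000 commits on the server.
    connection.setAutoCommit(true);
    // ... 10,000 single-row inserts ...

    // Tests 2 and 3 each wrap all inserts in one transaction, committed once.
    connection.setAutoCommit(false);
    // ... multi-row insert or JDBC batch ...
    connection.commit();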
