
We are running this code from our ERP application, Dynamics AX 2012 R3 CU10, as part of the upgrade process from AX 4.

    ttsbegin;

    while select forupdate salesLine
        where salesLine.RemainSalesFinancial  != 0
           && salesLine.RemainInventFinancial == 0
    {
        salesLine.RemainInventFinancial = 0;
        select sum(Qty) from inventTrans
            where inventTrans.del_InventTransId == salesLine.InventTransId
               &&(inventTrans.StatusIssue       == StatusIssue::Deducted
               || inventTrans.StatusReceipt     == StatusReceipt::Received);

        salesLine.RemainInventFinancial = -inventTrans.Qty;
        salesLine.doUpdate();
    }
    ttscommit;

We ran this process on two installations. On the first installation the entire procedure took 45 minutes to execute; however, on the second one it takes 24 hours.

I'm trying to find out where the performance issue is. These are my observations:

  • SalesLine number of rows is practically the same (8 million rows)
  • InventTrans is 4 times larger in the second installation (40 million rows vs 10 million). I don't think this should account for such a difference in execution time. I've created an index on this table, so the sum(Qty) is fast: duration 0 in SQL Profiler
  • On the SQL Server side, the first installation has SQL Server 2008 Enterprise vs SQL Server 2014 Standard on the second. Both are 64-bit editions. Does Standard edition have any kind of limitation that could cause this?
  • CPU is the same: 2 CPUs with 4 cores each, 8 cores total at 2.4 GHz
  • Memory is 64 GB RAM in the second installation vs 32 GB in the first one, so the problem cannot be here
  • Configuration in SQL Server is the same: tempdb split into 8 files of 10 GB each, max degree of parallelism set to 4 on both servers
  • Max memory usage is set to 56 GB on the second server, 24 GB on the first one
  • Another difference I see is that the application server (AOS) has only one CPU core in the second installation, while the first one has 4, but I understand the AOS doesn't do much processing anyway
  • OS is Windows 2012 R2 Standard 64-bit in the second installation, Windows 2008 R2 Datacenter 64-bit in the first one

I'm not sure whether the OS or the SQL Server edition can have such an impact on execution time. I need to make sure this is the issue before ordering a software or OS change.

I don't know what else to check. Any ideas?

In case anyone wants to have a look, the code is in class.method: ReleaseUpdateDB401_Cust.updateSalesLineRemainInventFinancial

Jonathan Bravetti
Jack Casas
  • This might be an obvious question, but how many records are being updated in the two environments? I.e., where `RemainSalesFinancial != 0 && RemainInventFinancial == 0`? Since this is a `.doUpdate()`, which doesn't call any business logic, you could do this update directly in SQL. – Alex Kwitny Aug 24 '16 at 14:29
  • Wow! I don't know how I missed that! It's so obvious. Indeed, in the second environment almost all transactions have RemainSalesFinancial != 0. In the first environment they are mostly = 0, so they get filtered out. Thanks! (was going crazy) – Jack Casas Aug 24 '16 at 15:34
  • All of the bullet points in your question make this comment all the more funny. Well, hopefully you can solve it. Again, this can be converted to a direct SQL update if you need to, provided it's not a part of an `atomic` process. – Alex Kwitny Aug 24 '16 at 15:47
  • On second look, this is likely a part of a couple of "steps", where one step populates `inventTrans.del_InventTransId`. The `.del_` prefix indicates a field marked for deletion after the upgrade. If this field has data populated, you can probably safely do this in SQL if needed. `InventTransId` is normalized and moved from `InventTrans` to a new table, `InventTransOrigin`, and is referenced by `RecId` in AX 2012. Or `del_InventTransId` could be a renamed `InventTransId`, or it could just be a new column populated from another column on the same table. – Alex Kwitny Aug 24 '16 at 15:53
  • You can also use `update_recordset`. Remember to call `skipDataMethods(true)` first. – Jan B. Kjeldsen Aug 25 '16 at 07:40
  • Consider accepting the answer. – Jan B. Kjeldsen Nov 11 '16 at 15:19
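
As an illustrative aside (not part of the original thread): the record-count check Alex suggests can be sketched as a small X++ job. The job name here is hypothetical; the query simply mirrors the while-select filter from the question.

    // Hypothetical X++ job: count how many SalesLine records the upgrade
    // loop will actually touch, to compare the two environments.
    static void countRemainingSalesLines(Args _args)
    {
        SalesLine salesLine;

        select count(RecId) from salesLine
            where salesLine.RemainSalesFinancial  != 0
               && salesLine.RemainInventFinancial == 0;

        // With count(RecId), the aggregate result is returned in RecId.
        info(strFmt("Records to update: %1", salesLine.RecId));
    }

Running this on both environments shows immediately whether the qualifying row counts, rather than the servers, explain the 45-minute vs 24-hour gap.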

1 Answer


As the question comments show, the time difference can be explained by the difference in the data in the remain fields.

To speed up the process, use only one round trip to the database:

salesLine.skipDataMethods(true);
update_recordset salesLine
    setting RemainInventFinancial = -inventTrans.Qty
    where salesLine.RemainSalesFinancial  != 0
       && salesLine.RemainInventFinancial == 0
    join sum(Qty) from inventTrans
    where inventTrans.del_InventTransId == salesLine.InventTransId
       &&(inventTrans.StatusIssue       == StatusIssue::Deducted
       || inventTrans.StatusReceipt     == StatusReceipt::Received);
info(int642str(salesLine.rowCount())); // Number of records updated

It will be two orders of magnitude (10 to 100 times) faster.

Jan B. Kjeldsen