3

My internal table contains a lot of data.

I have the following code:

LOOP AT lt_tab INTO ls_tab
  WHERE value1 EQ lv_id1
    AND value2 LE lv_id2
    AND value3 GE lv_id3.

  IF ls_tab-value4 IS NOT INITIAL.
    IF ls_tab-value4 NE lv_var.
      lv_flag = lc_var.
      EXIT.
    ENDIF.
  ELSE.
    lv_flag = lc_var.
    EXIT.
  ENDIF.
ENDLOOP.

The database table contains 7 fields, and the internal table has the same type as the database table.

In the where clause there are no primary key fields.

There is a composite primary key in the table consisting of two fields. The table fields are transid (primary key), item1 (primary key), value1, value2, value3 and value4.

I have to search the table based on those conditions only, but this is taking too much time. How can I optimize it?

Sandra Rossi
Twity 56876
  • This certainly isn't the complete code. Please post a complete example, along with the structure of the table in question. – vwegert May 01 '16 at 10:48
  • How can I optimize that loop statement? – Twity 56876 May 01 '16 at 16:51
  • You could simplify the IF statement and use `ASSIGNING` instead of `INTO`, but that wouldn't give you a very great performance benefit. The actual target and more code would help. – May 01 '16 at 17:01
  • Well, you are using a loop with a condition. If those value fields are not part of the table key (either primary or secondary) and the table is big, then this could take a while (because of the full scan). Second of all, why do you use `EQ`, `LE` or `GE`? Cobol and Fortran times have rather passed by. – Jagger May 01 '16 at 19:48
  • @Jagger I actually prefer using those operators because they make the coding prettier (my opinion). In the end there is no difference between `=` and `EQ`, so what is the problem with that? – May 01 '16 at 20:04
  • @LPK Well, my opinion is that they make one's coding uglier. But this is just my opinion. As I said, those operators are taken from Fortran, which is quite an old programming language. Moreover, I am a fan of brevity, and `=`, `>`, `<` are just one sign and also correspond to the real math operators. – Jagger May 01 '16 at 20:59
  • In order to answer your question properly, I need to know how table lt_tab is declared and how often the values lv_id1, lv_id2 and lv_id3 change. Optimising that loop will likely involve declaring a secondary sorted key for table lt_tab (assuming it is not sorted at the moment). As it is, your table is looped through sequentially every time, as you are not using key fields. For large tables that will be very inefficient. But adding a secondary key negatively impacts the performance of changing the content of the table. – Esti May 03 '16 at 03:19
  • Hi there, you can use field symbols to increase efficiency. If you find this correct from your perspective, comment and I will answer with the code using field symbols. – Achuth hadnoor Jul 14 '16 at 10:18

6 Answers

2

Although you have not provided enough information to be entirely sure what the real problem is, one could assume that the performance issue you are having is caused by the use of non-key fields in the loop's condition.

LOOP AT lt_tab INTO ls_tab
  WHERE value1 EQ lv_id1
    AND value2 LE lv_id2 
    AND value3 GE lv_id3.

You could define a secondary sorted key for the table type of the variable lt_tab that would contain the fields value1, value2 and value3.

Have a look at the following example.

REPORT zzy.

CLASS lcl_main DEFINITION FINAL CREATE PRIVATE.
  PUBLIC SECTION.
    CLASS-METHODS:
      class_constructor,
      main.
  PRIVATE SECTION.
    TYPES: BEGIN OF t_record,
      transid TYPE sy-index,
      item1   TYPE char20,
      value1  TYPE p LENGTH 7 DECIMALS 2,
      value2  TYPE p LENGTH 7 DECIMALS 2,
      value3  TYPE p LENGTH 7 DECIMALS 2,
      value4  TYPE p LENGTH 7 DECIMALS 2,
    END OF t_record,
    tt_record TYPE STANDARD TABLE OF t_record WITH NON-UNIQUE KEY transid item1.
*    tt_record TYPE STANDARD TABLE OF t_record WITH NON-UNIQUE KEY transid item1
*          WITH UNIQUE SORTED KEY sec_key COMPONENTS value1 value2 value3.
    CONSTANTS:
      mc_value1 TYPE p LENGTH 7 DECIMALS 2 VALUE '100.00',
      mc_value2 TYPE p LENGTH 7 DECIMALS 2 VALUE '150.00',
      mc_value3 TYPE p LENGTH 7 DECIMALS 2 VALUE '10.0'.
    CLASS-DATA:
      mt_record TYPE tt_record.
ENDCLASS.

CLASS lcl_main IMPLEMENTATION.
  METHOD class_constructor.
    DO 2000000 TIMES.
      INSERT VALUE t_record( transid = sy-index item1 = |Item{ sy-index }| 
            value1 = sy-index value2 = sy-index / 2 value3 = sy-index / 4 value4 = 0 )
        INTO TABLE mt_record.
    ENDDO.
  ENDMETHOD.

  METHOD main.
    DATA:
      l_start TYPE timestampl,
      l_end   TYPE timestampl,
      l_diff  LIKE l_start.
    GET TIME STAMP FIELD l_start.
    LOOP AT mt_record INTO DATA(ls_record) "USING KEY sec_key
      WHERE value1 = mc_value1 AND value2 >= mc_value2 AND value3 <= mc_value3.

      ASSERT 1 = 1.

    ENDLOOP.
    GET TIME STAMP FIELD l_end.
    l_diff = l_end - l_start.

    WRITE: / l_diff.
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  lcl_main=>main( ).

If the table type tt_record is defined in the following way

tt_record TYPE STANDARD TABLE OF t_record WITH NON-UNIQUE KEY transid item1.

then the run time of the loop on my SAP system varies from 0.156 to 0.266 seconds.

If you define it however as follows

tt_record TYPE STANDARD TABLE OF t_record WITH NON-UNIQUE KEY transid item1
      WITH UNIQUE SORTED KEY sec_key COMPONENTS value1 value2 value3.

and adjust the loop by adding `USING KEY sec_key`, then the run time I get each time is 0.00 seconds.

Sandra Rossi
Jagger
    According to the [documentation "Optimization of the WHERE Condition"](https://help.sap.com/doc/abapdocu_753_index_htm/7.53/en-US/index.htm?file=abenitab_where_optimization.htm), only `=` or `is initial` are considered for the optimized access so it should give the same result if `value2` and `value3` are removed from the sorted key. Note that the creation of the index is not measured, it would be interesting to measure the total time, which I guess would be very close, but if the `LOOP AT mt_record` is done several times, the sorted key would win by far. – Sandra Rossi Apr 17 '19 at 06:57
1

In this case we need a SORTED internal table instead of a STANDARD internal table (the default) to improve performance with mass data.

An example of the definition of such an internal table:

DATA: lt_sorted_data TYPE SORTED TABLE OF TABLENAME WITH NON-UNIQUE KEY MTART.

Well, in your case, since TABLENAME is already a database table that contains a primary key, we need to create another (local) table type with the same column list and load the data via

SELECT * FROM tablename INTO CORRESPONDING FIELDS OF TABLE lt_sorted_data.

Then key access will be faster, on an O(log n) basis.
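Putting those pieces together, a minimal sketch (TABLENAME, the value fields and the lv_id* variables are placeholders for your actual DDIC table, fields and selection values):

```abap
" Sorted internal table keyed on the fields used in the WHERE clause.
" tablename and the value* fields are placeholders, not real objects.
DATA lt_sorted_data TYPE SORTED TABLE OF tablename
     WITH NON-UNIQUE KEY value1 value2 value3.
DATA: lv_id1 TYPE i, lv_id2 TYPE i, lv_id3 TYPE i.

SELECT * FROM tablename
  INTO CORRESPONDING FIELDS OF TABLE lt_sorted_data.

" The leading key field compared with = can be located via binary
" search; the loop then touches only the matching block of rows.
LOOP AT lt_sorted_data INTO DATA(ls_data)
  WHERE value1 = lv_id1 AND value2 <= lv_id2 AND value3 >= lv_id3.
  " process ls_data here
ENDLOOP.
```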

Klaus Hopp
0

You can use LOOP AT ... ASSIGNING <fieldsymbol>. ASSIGNING is more performant than LOOP AT ... INTO a structure, because no copy of the line is made. Here is some more info.
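A minimal sketch with the question's loop (the inline FIELD-SYMBOL declaration needs ABAP 7.40+; on older releases declare the field symbol with FIELD-SYMBOLS first):

```abap
" ASSIGNING makes <ls_tab> point at the table line instead of
" copying it into a work area, saving one copy per iteration.
LOOP AT lt_tab ASSIGNING FIELD-SYMBOL(<ls_tab>)
  WHERE value1 = lv_id1
    AND value2 <= lv_id2
    AND value3 >= lv_id3.
  " Note: changes to <ls_tab> modify the table line directly.
ENDLOOP.
```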

Jagger
  • This is true if the structure used for `INTO` is a complex, and often nested, structure; otherwise there is not much of a gain. – Jagger May 01 '16 at 19:49
0

If you have a lot of data, it doesn't help to make a line of code a little bit faster.

The problem is probably that you are doing a full table scan: you are processing each line of the table (until you find what you're searching for).

For this type of problem, there are sorted tables and hashed tables:

http://help.sap.com/saphelp_nw70/helpdata/en/fc/eb366d358411d1829f0000e829fbfe/content.htm

If you use them wisely, the lookup has to check only a fraction of the lines in the table, which makes the selection many times faster, depending on the distribution of the data in your table.
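For illustration, the two table categories could be declared like this (line type assembled from the question's fields; note that a hashed key supports only fully specified `=` access, while a sorted key also supports the `<=`/`>=` ranges used here):

```abap
TYPES: BEGIN OF t_line,
         transid TYPE i,
         item1   TYPE char20,
         value1  TYPE i,
         value2  TYPE i,
         value3  TYPE i,
         value4  TYPE i,
       END OF t_line.

" Sorted table: kept sorted by the key, read via binary search.
DATA lt_sorted TYPE SORTED TABLE OF t_line
     WITH NON-UNIQUE KEY value1 value2 value3.

" Hashed table: constant-time reads, but only with the full key.
DATA lt_hashed TYPE HASHED TABLE OF t_line
     WITH UNIQUE KEY transid item1.
```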

Gerd Castan
0

For the internal table lt_tab here, I would use an ABAP sorted table keyed on the fields you've used in the WHERE condition of that LOOP statement.

Also, if this loop is used inside another loop, then I strongly recommend you look up the term "partially sequential set access"; it makes a big difference to loop performance. It works when you use sorted tables.
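A sketch of what that looks like (lt_headers, lt_items and their line types are made-up names): because the inner table is sorted by transid and the WHERE condition covers the leading key field, the inner loop positions via binary search and stops after the matching block instead of scanning the whole table on every outer iteration.

```abap
" Made-up types for illustration only.
TYPES: BEGIN OF t_header, transid TYPE i, END OF t_header.
TYPES: BEGIN OF t_item, transid TYPE i, item1 TYPE char20, END OF t_item.

DATA lt_headers TYPE STANDARD TABLE OF t_header WITH EMPTY KEY.
DATA lt_items   TYPE SORTED TABLE OF t_item WITH NON-UNIQUE KEY transid.

LOOP AT lt_headers INTO DATA(ls_header).
  " Partially sequential access: only the rows with this transid
  " are visited, located via binary search on the sorted key.
  LOOP AT lt_items INTO DATA(ls_item)
    WHERE transid = ls_header-transid.
    " process ls_item
  ENDLOOP.
ENDLOOP.
```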

Cody Gray
Bulent Balci
0

You can use a binary-search loop for this:

  • Sort your internal table by the key.
  • READ with BINARY SEARCH using the key you need, and store the index (sy-tabix).
  • LOOP AT the internal table FROM that index.
  • The moment the key changes (check it inside the loop), EXIT the loop.

You can run this inside a loop over a key table containing only your unique keys. This optimizes performance a LOT. I hope this helps.
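The steps above could be sketched like this, assuming lt_tab is a standard table and reusing the question's variable names:

```abap
DATA lv_index TYPE sy-tabix.

" 1. Sort by the field(s) you will search on.
SORT lt_tab BY value1.

" 2. Find the first matching row via binary search, keep its index.
READ TABLE lt_tab INTO ls_tab
     WITH KEY value1 = lv_id1 BINARY SEARCH.

IF sy-subrc = 0.
  lv_index = sy-tabix.
  " 3. Loop from that index onwards ...
  LOOP AT lt_tab INTO ls_tab FROM lv_index.
    " 4. ... and leave as soon as the key value changes.
    IF ls_tab-value1 <> lv_id1.
      EXIT.
    ENDIF.
    " process the matching row here
  ENDLOOP.
ENDIF.
```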

Ezequiel