
Can you please help me solve this? I am using DataStage 11.5, and in the CFF stage of one of my jobs I am getting an allocation failed error, due to which my job aborts whenever a large CFF file comes in.

My job simply converts a CFF file into a text file.

Errors in the job log show:

    Message: main_program: Current heap size: 2,072,104,336 bytes in 4,525,666 blocks
    Message: main_program: Fatal Error: Throwing exception: APT_BadAlloc: Heap allocation failed. [error_handling/exception.C:132]

trincot

1 Answer


From https://www.ibm.com/support/pages/datastage-cff-stage-job-fails-message-aptbadalloc-heap-allocation-failed: the Complex Flat File (CFF) stage is a composite operator and inserts a Promote Sub-Record operator for every subrecord. Too many of these operators can exhaust the available heap. To further diagnose the problem, set the environment variable APT_DUMP_SCORE=True and check the score dump in the log to see whether the job is creating too many Promote Sub-Record operators, since that is what exhausts the available heap. To improve performance and reduce memory usage, the table definition should be optimized further.
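If it helps, here is a rough way to count the Promote Sub-Record operators once you have the score dump. This is only a sketch: it assumes you have copied the score dump event from the Director log into a plain text file (score.txt is a made-up name), and the substring it searches for ("promote_subrec") is an assumption about how the operator appears in the score text, so adjust the pattern to whatever your dump actually shows.

    # Rough count of Promote Sub-Record operator references in an exported
    # score dump. Assumptions: the score text was pasted into score.txt, and
    # the operator shows up as something like "promote_subrec" (adjust the
    # pattern if your dump names it differently).
    import re
    import sys

    def count_promote_subrec(path, pattern=r"promote[_ ]?sub[_ ]?rec"):
        with open(path, encoding="utf-8", errors="replace") as fh:
            text = fh.read()
        return len(re.findall(pattern, text, flags=re.IGNORECASE))

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "score.txt"
        n = count_promote_subrec(path)
        print(f"{n} Promote Sub-Record operator reference(s) found in {path}")

A high count relative to the number of subrecords in your copybook is the sign that the table definition should be flattened as described below.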

Resolving the problem

Here is what you can do to reduce the number of Promote Sub-Record operators:

  1. Save the table definition from the CFF stage in the job.
  2. Clear all the columns in the CFF stage.
  3. Reload the table definition saved in step 1, checking the "Remove group columns" check box. This step removes the additional group columns.
  4. Check the layout; it should have the same record length as in the original job. After reloading, the table structure will be flat (no more hierarchy).

After the above steps, the OSH script generated from the Complex Flat File stage will no longer contain the Promote Sub-Record operator, performance will improve, and memory usage will be reduced to a minimum.
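For what it is worth, the effect of reloading with "Remove group columns" checked can be pictured as flattening a nested (group) record layout into a single flat list of elementary columns with the same total record length and no hierarchy. The Python sketch below only illustrates that idea; the nested layout and column names are invented, and DataStage does the actual flattening for you when you reload the saved table definition.

    # Conceptual illustration of "Remove group columns": a nested (group)
    # record layout is flattened into elementary columns only, so no
    # Promote Sub-Record step is needed at runtime. The layout is invented
    # for the example and does not come from the job in question.
    def flatten(columns, prefix=""):
        flat = []
        for col in columns:
            if isinstance(col, tuple):          # (group_name, child_columns)
                group_name, children = col
                flat.extend(flatten(children, prefix + group_name + "."))
            else:                               # elementary column name
                flat.append(prefix + col)
        return flat

    # A made-up CFF table definition with one group column (CUST-ADDR).
    nested_layout = [
        "CUST-ID",
        ("CUST-ADDR", ["STREET", "CITY", "ZIP"]),
        "BALANCE",
    ]

    print(flatten(nested_layout))
    # ['CUST-ID', 'CUST-ADDR.STREET', 'CUST-ADDR.CITY', 'CUST-ADDR.ZIP', 'BALANCE']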

svalarezo