I am running a batch job on PCF that loads 10 million records, and when I run the batch I get the error below. I've already assigned 2 GB of memory to the app through manifest.yml!

```
2020-03-06T13:48:24.282-05:00 [APP/PROC/WEB/0] [OUT] tenured generation total 253952K, used 18859K [0x00000000f0800000, 0x0000000100000000, 0x0000000100000000)
2020-03-06T13:48:24.282-05:00 [APP/PROC/WEB/0] [OUT] to space 12672K, 0% used [0x00000000efba0000, 0x00000000efba0000, 0x00000000f0800000)
2020-03-06T13:48:24.282-05:00 [APP/PROC/WEB/0] [OUT] from space 12672K, 0% used [0x00000000eef40000, 0x00000000eef40000, 0x00000000efba0000)
2020-03-06T13:48:24.282-05:00 [APP/PROC/WEB/0] [OUT] eden space 101632K, 5% used [0x00000000e8c00000, 0x00000000e9156680, 0x00000000eef40000)
2020-03-06T13:48:24.282-05:00 [APP/PROC/WEB/0] [OUT] def new generation total 114304K, used 5465K [0x00000000e8c00000, 0x00000000f0800000, 0x00000000f0800000)
2020-03-06T13:48:24.282-05:00 [APP/PROC/WEB/0] [OUT] Heap
2020-03-06T13:48:29.255-05:00 [APP/PROC/WEB/0] [ERR] jvmkill killing current process
2020-03-06T13:53:01.261-05:00 [APP/PROC/WEB/0] [ERR] Resource exhaustion event: the JVM was unable to allocate memory from the heap.
2020-03-06T13:53:01.261-05:00 [APP/PROC/WEB/0] [ERR] ResourceExhausted! (1/0
[OUT] JVM Memory Configuration: -Xmx379303K -Xss1M -XX:ReservedCodeCacheSize=240M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=157272K
```
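One clue worth checking: the buildpack's memory calculator derived `-Xmx379303K` by subtracting the non-heap regions from the container size. With the printed settings (240M code cache, 157272K metaspace, 10M direct memory, and the calculator's usual budget of 250 thread stacks at `-Xss1M`), the arithmetic works out almost exactly from a **1G** container (1048576K − 245760K − 157272K − 10240K − 256000K = 379304K), which suggests the 2G setting in manifest.yml may not actually be taking effect. A sketch of a manifest that both sets the memory and trims the non-heap regions, assuming the Java buildpack's `JAVA_OPTS` and `JBP_CONFIG_OPEN_JDK_JRE` environment variables (app name and specific values are illustrative, not taken from the question):

```yaml
applications:
- name: batch-app              # hypothetical app name
  memory: 2G                   # verify with `cf app batch-app` that this is applied
  env:
    # Smaller code cache and stacks leave more of the container for the heap
    # (values are illustrative; tune for your workload).
    JAVA_OPTS: '-XX:ReservedCodeCacheSize=64M -Xss512k'
    # Alternatively, tell the memory calculator to budget for fewer thread stacks:
    JBP_CONFIG_OPEN_JDK_JRE: '{ memory_calculator: { stack_threads: 100 } }'
```

After pushing, the `JVM Memory Configuration` line in the logs shows the recalculated `-Xmx`, so you can confirm the change took effect.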
Jeff Cook
    Well, don't load 10 million records into memory; that is a bad idea anyway. You should read part of them, process, and write, not read all of them at once. – M. Deinum Jul 21 '20 at 07:00
  • It's an `n+1` query issue, where we need to look up data from one datasource in another datasource. – Jeff Cook Jul 21 '20 at 08:14
  • Not sure what your comment means (it doesn't really make sense imho). Still, don't load that many records, or tune the queries you are using. – M. Deinum Jul 21 '20 at 09:18
  • @M.Deinum - I am not loading the records in a single go. It's chunk-based processing with chunkSize = 500. I am not sure why this error is occurring. – Jeff Cook Jul 23 '20 at 05:30
  • Regardless of the chunk size, if you keep references to the records read, they will accumulate in memory and eventually lead to an out-of-memory error (and/or very poor performance). So without seeing your job, configuration, and what you are doing with those records, this question is simply impossible to answer. – M. Deinum Jul 25 '20 at 11:26
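The two points raised in the comments (avoid the `n+1` lookup, and don't retain references across chunks) can be sketched in plain Java. This is a minimal illustration, not the asker's actual job: `fetchNamesByIds` is a hypothetical stand-in for the secondary datasource, and it is called once per chunk (a single `WHERE id IN (...)` query in a real job) rather than once per record:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ChunkedLookup {

    // Hypothetical stand-in for the second datasource: one lookup per *chunk*,
    // not per record, which avoids the n+1 query pattern.
    static Map<Long, String> fetchNamesByIds(Collection<Long> ids) {
        return ids.stream().collect(Collectors.toMap(id -> id, id -> "customer-" + id));
    }

    // Process one chunk and return the rows to write; nothing is retained afterwards.
    static List<String> processChunk(List<Long> chunk) {
        Map<Long, String> names = fetchNamesByIds(chunk); // 1 lookup per 500 records
        return chunk.stream()
                    .map(id -> id + "," + names.get(id))
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        int total = 10_000;   // small stand-in for the real 10 million
        int chunkSize = 500;
        long written = 0;
        List<Long> chunk = new ArrayList<>(chunkSize);
        for (long id = 0; id < total; id++) {
            chunk.add(id);
            if (chunk.size() == chunkSize) {
                written += processChunk(chunk).size(); // write the chunk out, then...
                chunk.clear();                         // ...drop all references to it
            }
        }
        System.out.println(written); // prints 10000
    }
}
```

The key property is that heap usage is bounded by one chunk plus its lookup map; if any per-record object escapes into a collection that outlives the chunk, memory grows with all 10 million records regardless of chunk size, which matches the failure mode described in the last comment.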

0 Answers