I'm running a performance test against Vespa, and the container seems too slow to keep up with the incoming requests. Looking at vespa.log, I see lots of GC allocation-failure entries (excerpt below). However, system resource usage is pretty low (CPU < 30%, memory < 35%). Is there any configuration I can optimize?
Btw, it looks like docprocservice runs on the content nodes by default. How do I tune jvmargs for docprocservice? I've put a sketch of what I'm imagining after the log excerpt.
1523361302.261056 24298 container stdout info [GC (Allocation Failure) 3681916K->319796K(7969216K), 0.0521448 secs]
1523361302.772183 24301 docprocservice stdout info [GC (Allocation Failure) 729622K->100400K(1494272K), 0.0058702 secs]
1523361306.478681 24301 docprocservice stdout info [GC (Allocation Failure) 729648K->99337K(1494272K), 0.0071413 secs]
1523361308.275909 24298 container stdout info [GC (Allocation Failure) 3675316K->325043K(7969216K), 0.0669859 secs]
1523361309.798619 24301 docprocservice stdout info [GC (Allocation Failure) 728585K->100538K(1494272K), 0.0060528 secs]
1523361313.530767 24301 docprocservice stdout info [GC (Allocation Failure) 729786K->100561K(1494272K), 0.0088941 secs]
1523361314.549254 24298 container stdout info [GC (Allocation Failure) 3680563K->330211K(7969216K), 0.0531680 secs]
1523361317.571889 24301 docprocservice stdout info [GC (Allocation Failure) 729809K->100551K(1494272K), 0.0062653 secs]
1523361320.736348 24298 container stdout info [GC (Allocation Failure) 3685729K->316908K(7969216K), 0.0595787 secs]
1523361320.839502 24301 docprocservice stdout info [GC (Allocation Failure) 729799K->99311K(1494272K), 0.0069882 secs]
1523361324.948995 24301 docprocservice stdout info [GC (Allocation Failure) 728559K->99139K(1494272K), 0.0127939 secs]
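To make the docprocservice question concrete, here is a sketch of what I'm imagining: a dedicated document-processing container cluster whose nodes carry the JVM args, with indexing routed through it from the content cluster. The dpcluster id and the heap sizes are placeholders of mine, and I'm assuming the jvmargs attribute on <nodes> applies here:

<container id="dpcluster" version="1.0">
    <document-processing />
    <!-- placeholder heap settings, to be adjusted to the machine -->
    <nodes jvmargs="-Xms4g -Xmx4g">
        <node hostalias="node2" />
    </nodes>
</container>

<!-- in the content cluster, route indexing through the cluster above -->
<documents>
    <document type="music" mode="index" />
    <document-processing cluster="dpcluster" />
</documents>

Is that the intended way, or is there a knob for the implicit docprocservice on the content nodes?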
services.xml:
<container id="container" version="1.0">
    <config name="container.handler.threadpool">
        <maxthreads>10000</maxthreads>
    </config>
    <config name="config.docproc.docproc">
        <numthreads>500</numthreads>
    </config>
    <config name="search.config.qr-start">
        <jvm>
            <heapSizeAsPercentageOfPhysicalMemory>60</heapSizeAsPercentageOfPhysicalMemory>
        </jvm>
    </config>
    <document-api />
    <search>
        <provider id="music" cluster="music" cachesize="64M" type="local" />
    </search>
    <nodes>
        <node hostalias="admin0" />
        <node hostalias="node2" />
    </nodes>
</container>
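For the container cluster itself, I was also wondering whether JVM flags can go directly on its nodes, along the lines below. This is just a sketch, the flag values are made up, and again I'm assuming the jvmargs attribute on <nodes> is supported:

<nodes jvmargs="-verbose:gc -Xms8g -Xmx8g">
    <node hostalias="admin0" />
    <node hostalias="node2" />
</nodes>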
# free -lh
              total        used        free      shared  buff/cache   available
Mem:           125G         43G         18G        177M         63G         80G
Low:           125G        106G         18G
High:            0B          0B          0B
Swap:            0B          0B          0B
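For reference, if I'm reading the GC lines right, the container heap totals 7969216K ≈ 7.6G and the docprocservice heap 1494272K ≈ 1.4G, while 60% of the 125G physical memory would be ≈ 75G, so the heapSizeAsPercentageOfPhysicalMemory setting doesn't seem to be reflected in the container heap.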