
I read about BULK COLLECT and wrote some code using it (not deployed yet). The total number of rows returned is around 80,000. I limited the number of rows returned in one batch to 10,000, but there is no basis for that number; I simply improvised.

What would be a good method for determining how to limit the Bulk Collect?
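For reference, the pattern in question looks roughly like this (a minimal sketch; `my_table` and its `id` column are placeholders for whatever the real query returns):

```sql
DECLARE
  TYPE t_ids IS TABLE OF my_table.id%TYPE;  -- my_table is a placeholder name
  l_ids t_ids;
  CURSOR c IS SELECT id FROM my_table;
BEGIN
  OPEN c;
  LOOP
    -- Fetch at most 10,000 rows per round trip into the collection
    FETCH c BULK COLLECT INTO l_ids LIMIT 10000;
    EXIT WHEN l_ids.COUNT = 0;  -- check COUNT, not c%NOTFOUND, so the last partial batch is processed
    FOR i IN 1 .. l_ids.COUNT LOOP
      NULL;  -- process each row here
    END LOOP;
  END LOOP;
  CLOSE c;
END;
/
```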

Robotronx
Here's an article by Steven Feuerstein in Oracle Magazine which looks at [BULK COLLECT and LIMIT](http://www.oracle.com/technetwork/issue-archive/2008/08-mar/o28plsql-095155.html). Oracle internally converts cursor FOR loops to BULK COLLECTs using a limit of 100. – DrabJay Aug 01 '14 at 08:36

1 Answer


As with anything, the best approach would be to benchmark the different options.

Realistically, though, in the vast majority of cases, there isn't any appreciable benefit to a limit much higher than 100. With a limit of 100, you're eliminating 99% of the context shifts. It's relatively unlikely that the remaining 1% of the context shifts account for a meaningful fraction of the execution time of your code. Reducing the context shifts further probably does nothing for performance and just causes you to use more valuable PGA memory.
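A simple single-session benchmark of the kind suggested above could be sketched like this (an illustrative example, not a definitive harness: `my_table` is a placeholder, and `DBMS_UTILITY.GET_TIME` reports elapsed time in hundredths of a second):

```sql
DECLARE
  TYPE t_rows   IS TABLE OF my_table%ROWTYPE;  -- my_table is a placeholder name
  TYPE t_limits IS TABLE OF PLS_INTEGER;
  l_rows   t_rows;
  l_limits t_limits := t_limits(10, 100, 1000, 10000);
  l_start  PLS_INTEGER;
BEGIN
  FOR i IN 1 .. l_limits.COUNT LOOP
    l_start := DBMS_UTILITY.GET_TIME;
    DECLARE
      CURSOR c IS SELECT * FROM my_table;
    BEGIN
      OPEN c;
      LOOP
        FETCH c BULK COLLECT INTO l_rows LIMIT l_limits(i);
        EXIT WHEN l_rows.COUNT = 0;
        -- real per-row processing would go here
      END LOOP;
      CLOSE c;
    END;
    DBMS_OUTPUT.PUT_LINE('LIMIT ' || l_limits(i) || ': '
      || (DBMS_UTILITY.GET_TIME - l_start) || ' centiseconds');
  END LOOP;
END;
/
```

Run it a few times to warm the buffer cache and average out noise; if the timings flatten out past a limit of around 100, that confirms there is little to gain from larger batches.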

Justin Cave
  • To tell you the truth, I had no idea about the magnitude of LIMIT. 10, 100, 1000, 10000? I have some perspective now. Would you be so kind as to point me in the right direction regarding benchmarking this thing? – Robotronx Aug 01 '14 at 08:22
  • @Robotron - The same way you'd benchmark anything. Depending on how sophisticated you want to get, it could be anywhere between doing a full blown load test with a bunch of threads to a simple single-user SQL*Plus session where you measure the elapsed time of the two calls. – Justin Cave Aug 01 '14 at 08:50