
I've always wondered what happens behind the scenes when I query my DB through SQL Developer.

Regardless of the size of the table, I always get at most 50 rows by default. I can scroll down in the results grid, and apparently it somehow lazy-loads the rest of the results.

The time it takes to show the first 50 results doesn't seem to vary with the size of the table (at least for a simple SELECT * FROM t), which leads me to believe that SQL Developer is wrapping my SELECT statement s in a SELECT * FROM s WHERE rownum <= 50. If that's the case, as I imagine it is, it isn't shown on the Explain Plan tab.

How does SQL Developer then lazily get the rest of the results later, if needed? Does it use some sort of offset? Does it have to rerun the query each time, but instead of taking only rows with rownum <= 50, take rows with rownum <= 100, 150, etc.? That would surely be inefficient. But unless it does that, it risks fetching the wrong set of rows (since the table's rows may have changed in between!).

I'm also assuming that by default SQL Developer only fetches the minimum amount of data from the server, i.e., it isn't actually fetching everything while showing only some of the results. Is this the case?

Could anyone shed a bit of light on this issue?

devoured elysium

2 Answers


so that leads me to believe that SQL Developer is enclosing my SELECT statement s with a SELECT * FROM s WHERE rownum <= 50

No, that's not how it works; no predicate is added to the original query to limit the number of rows.

Any client's underlying database driver, whether it's JDBC or ODP, fetches data from the open cursor in chunks. In SQL Developer, the SQL Array Fetch Size setting (Tools\Preferences\Database\Advanced) controls how many rows are fetched from the cursor on each round trip to the database server. So if, for example, you have a table with 100 rows and the array fetch size is set to 50, you'll fetch all rows from that table in 2 round trips to the database, plus 1 extra trip (a final fetch that returns no rows, telling the client the result set is exhausted).

That doesn't mean, however, that you'd see only 50 rows. As many rows as are needed to populate SQL Developer's grid are fetched, in batches of 50. So you can see 100 rows displayed after you press F9 or Ctrl+Enter, and it takes two (+1) round trips to the server.
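To see the same batching outside SQL Developer, here is a minimal JDBC sketch; the connect string and credentials are placeholders, and the query is the one used in the answer below. It sets the driver fetch size to 50, so the result set is pulled from the open cursor in 50-row round trips:

 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.Statement;

 public class FetchSizeDemo {
     public static void main(String[] args) throws Exception {
         // Placeholder connection details -- adjust for your environment.
         try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
              Statement stmt = conn.createStatement()) {

             // Mirror SQL Developer's default "SQL Array Fetch Size" of 50:
             // the driver pulls up to 50 rows per round trip to the server.
             stmt.setFetchSize(50);

             try (ResultSet rs = stmt.executeQuery(
                     "select owner, object_name from dba_objects")) {
                 int rows = 0;
                 while (rs.next()) {
                     // next() reads from the current 50-row batch; a new
                     // round trip happens only when the batch is exhausted.
                     rows++;
                 }
                 System.out.println("Fetched " + rows + " rows");
             }
         }
     }
 }

Closing the ResultSet and Statement (implicitly, via try-with-resources) releases the server-side cursor.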

Nick Krasnov
  • So that means that the DB will keep state associated with this query? When will it discard it? As soon as I try to run the next query? Or is it up to SQL Developer to inform the DB when it's OK to release this query's results? – devoured elysium Oct 24 '16 at 11:34
  • @devouredelysium When you submit a query you open a cursor, which upon execution produces a result set (a set of rows) that can be fetched. Rows in the result set are numbered. When you submit a new query, a new cursor is opened and a new result set produced. The number of open cursors per session is limited by the `open_cursors` initialization parameter. – Nick Krasnov Oct 24 '16 at 14:20

As proposed by others, here are simple steps to trace the database queries.

In SQL Developer, run

 ALTER SESSION SET tracefile_identifier = mytest;
 alter session set events '10046 trace name context forever, level 1';

The first statement helps you find the trace file on the DB server; the second one activates the trace.

For more details see the docs.

Then run a sample query, e.g.

 select owner, object_name from dba_objects

and scroll a few pages down, e.g. to row 130. Close the connection and examine the trace file:

 =====================
 PARSING IN CURSOR #210335064 len=42 dep=0 uid=48 oct=3 lid=48 tim=20962225002 hv=3579237936 ad='b7ef8180' sqlid='114vazvapdpjh'
 select owner, object_name from dba_objects
 END OF STMT
 PARSE #210335064:c=15600,e=31487,p=0,cr=4,cu=0,mis=1,r=0,dep=0,og=1,plh=2354722397,tim=20962225000
 EXEC #210335064:c=0,e=46,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=2354722397,tim=20962225198
 FETCH #210335064:c=0,e=1650,p=0,cr=39,cu=0,mis=0,r=50,dep=0,og=1,plh=2354722397,tim=20962226907

 *** 2016-10-24 17:36:49.734

You can see the cursor is parsed, executed, and 50 rows are fetched (r=50). Note the cursor number (#210335064) and look further in the trace file:

 FETCH #210335064:c=0,e=515,p=0,cr=16,cu=0,mis=0,r=50,dep=0,og=1,plh=2354722397,tim=20970608847

 *** 2016-10-24 17:37:03.376

The next 50 rows were fetched ...

 FETCH #210335064:c=0,e=304,p=0,cr=6,cu=0,mis=0,r=50,dep=0,og=1,plh=2354722397,tim=20976035474

 *** 2016-10-24 17:37:19.866

The next 50 rows were fetched, and finally the cursor was closed:

 CLOSE #210335064:c=0,e=300,dep=0,type=0,tim=20992663336

You scrolled down to row 130, so the client performed 3 fetches with the default fetch size of 50 rows. You could increase the fetch size up to 200; in that case a single fetch would be enough.

Marmite Bomber
  • One more question, and I know that this is probably highly DB-dependent. If I do a SELECT * FROM x ORDER BY k, with k not being indexed, does the server actually go and order everything eagerly, or will it try to do it in 50-row batches, too? – devoured elysium Oct 25 '16 at 09:11
  • 1
    No RDBMS dependence here - ist SQL. If you say give me data from `x` order by `k` you must read and sort the whole table - the smallest key could be in the last record. – Marmite Bomber Oct 25 '16 at 16:11
  • You don't need to sort the whole table. You can sort only the first 50 elements first, and then the 51st-100th, etc., without ever doing a full sort, simply by using a binary heap. This has the advantage of not having to keep the full query results in memory or on disk, and of possibly returning a lot faster, which is particularly useful if in the typical use case the user only wants to see the first 50 rows anyway. To recap: while you definitely need to read the whole table, you don't really need to sort the whole table (see the sketch below). – devoured elysium Oct 25 '16 at 20:32
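To make the bounded-heap idea from the last comment concrete, here is a small standalone Java sketch. It is purely illustrative and says nothing about how Oracle actually executes ORDER BY; it only shows that the 50 smallest keys can be found while keeping just 50 values in memory, by reading the whole input once:

 import java.util.Comparator;
 import java.util.List;
 import java.util.PriorityQueue;
 import java.util.Random;
 import java.util.stream.Collectors;

 public class TopNDemo {
     // Return the n smallest values without sorting the whole input:
     // keep a max-heap of size n; any value larger than the heap's top
     // cannot be among the n smallest and is discarded immediately.
     static List<Integer> smallestN(Iterable<Integer> rows, int n) {
         PriorityQueue<Integer> heap =
                 new PriorityQueue<>(n, Comparator.reverseOrder());
         for (int k : rows) {
             if (heap.size() < n) {
                 heap.add(k);
             } else if (k < heap.peek()) {
                 heap.poll();
                 heap.add(k);
             }
         }
         return heap.stream().sorted().collect(Collectors.toList());
     }

     public static void main(String[] args) {
         // A million random "keys" standing in for the unindexed column k.
         List<Integer> table = new Random(0).ints(1_000_000, 0, 1_000_000_000)
                 .boxed()
                 .collect(Collectors.toList());
         // The whole "table" is read once, but only 50 keys are ever kept.
         System.out.println(smallestN(table, 50));
     }
 }

Getting rows 51-100 afterwards is less straightforward (you would have to re-scan or keep more state), which is in line with the point above that the whole table must at least be read.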