I am currently implementing a PostgreSQL adapter using libpq, and I was wondering whether the single-row mode set through PQsetSingleRowMode (https://www.postgresql.org/docs/9.6/static/libpq-single-row-mode.html) has any known performance issues.
I was under the impression that in this mode, libpq would use more intelligent/efficient buffering than can be achieved with a cursor and explicit FETCH.
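For reference, the single-row-mode loop I am benchmarking looks roughly like this (a minimal sketch; the connection string is a placeholder for my actual setup):

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* placeholder connection string */
        PGconn *conn = PQconnectdb("dbname=mydb");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* send the query asynchronously; single-row mode must be requested
           right after PQsendQueryParams and before the first PQgetResult */
        if (!PQsendQueryParams(conn, "select id from mytable",
                               0, NULL, NULL, NULL, NULL, 0) ||
            !PQsetSingleRowMode(conn)) {
            fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* each PQgetResult returns one PGRES_SINGLE_TUPLE result per row,
           followed by a final zero-row PGRES_TUPLES_OK result */
        PGresult *res;
        while ((res = PQgetResult(conn)) != NULL) {
            if (PQresultStatus(res) == PGRES_SINGLE_TUPLE) {
                const char *id = PQgetvalue(res, 0, 0);  /* process the row */
                (void)id;
            }
            PQclear(res);
        }

        PQfinish(conn);
        return 0;
    }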
However, when running a simple query 10 times ("select id from mytable", returning 5000 records) against PostgreSQL 9.6.3 (on both client and server side), I am observing the following timings:
- PQexecParams (all at once): 78 msec
- PQsetSingleRowMode: 3047 msec (most of it spent in PQgetResult)
- PQexecParams + FETCH 1: 3313 msec (most of it spent in PQgetResult)
- PQexecParams + FETCH 10: 219 msec
So single-row mode appears to be barely more efficient than creating a cursor and fetching row by row... Is that the expected behaviour, or is there some other option to control single-row mode?
(In my use cases, fetching everything at once would be risky in terms of memory usage for some queries, so the choice is really between single-row mode and fetching a few records at a time.)
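For comparison, the cursor path in the benchmark is essentially the following (sketch only; the cursor name "mycur" and the batch size are arbitrary, and an open PGconn *conn is assumed as in the sketch above):

    PGresult *res = PQexec(conn, "BEGIN");
    PQclear(res);
    res = PQexec(conn, "DECLARE mycur NO SCROLL CURSOR FOR select id from mytable");
    PQclear(res);

    for (;;) {
        res = PQexec(conn, "FETCH 10 FROM mycur");   /* or FETCH 1 */
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "fetch failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        int n = PQntuples(res);
        for (int i = 0; i < n; i++) {
            const char *id = PQgetvalue(res, i, 0);  /* process the row */
            (void)id;
        }
        PQclear(res);
        if (n == 0)          /* cursor exhausted */
            break;
    }

    res = PQexec(conn, "CLOSE mycur");
    PQclear(res);
    res = PQexec(conn, "COMMIT");
    PQclear(res);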
Extra question: is there an easy way to abort a single-row-mode fetch mid-way? With a cursor + FETCH this is trivial, but in single-row mode it appears you must either consume all results or use PQcancel.
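The only approach I have found so far is roughly the following (a sketch of what I have in mind, assuming the query was sent in single-row mode as above):

    /* cancel the running query, then still consume results until NULL --
       libpq requires draining before the connection can be reused */
    PGcancel *cancel = PQgetCancel(conn);
    if (cancel != NULL) {
        char errbuf[256];
        if (!PQcancel(cancel, errbuf, sizeof errbuf))
            fprintf(stderr, "cancel request failed: %s\n", errbuf);
        PQfreeCancel(cancel);
    }

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);   /* discard rows already buffered or in flight */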
Addendum: I ran the benchmark again under a profiler, and it turns out the calls PQgetResult makes to malloc and free are the bottleneck (about 60% and 30% of CPU time respectively); both functions come from MSVCR120 as far as I can tell (this is under Windows 10, with the server on localhost). I am using the libpq.dll from the "official" PostgreSQL zip. Interestingly enough, running other queries before the benchmark can sometimes make the issue disappear. It looks like PQgetResult is hitting a weak spot of malloc/free.