
When I run the query for the first time in psql, it is noticeably slow. The second time it is much faster, since the Planning Time drops substantially.

> EXPLAIN ANALYSE SELECT * FROM public.my_custom_function(10, 10, 'Personal');

The first time:

                                                                QUERY PLAN                                                                 
--------------------------------------------------------------------------------------------------------------------------------------------
 Function Scan on my_custom_function  (cost=0.25..10.25 rows=1000 width=32) (actual time=4.900..4.901 rows=1 loops=1)
 Planning Time: 30.870 ms
 Execution Time: 3.410 ms
(3 rows)

All subsequent queries:

                                                                QUERY PLAN                                                                 
--------------------------------------------------------------------------------------------------------------------------------------------
 Function Scan on my_custom_function  (cost=0.25..10.25 rows=1000 width=32) (actual time=4.900..4.901 rows=1 loops=1)
 Planning Time: 0.620 ms
 Execution Time: 4.920 ms
(3 rows)

This happens every time I make a new connection to the DB: the first call has considerable Planning Time, and all subsequent calls are fine.

Additional Context

Deployment: Docker

Postgres version: 12

SQL logic: the function does indexed JOINs and WHERE lookups. I know the logic there is fast and solid; it's not the query itself that needs to be optimised.

Whether I run the query by itself or via the function, the same Planning Time issue remains.

Problem:

I have an HTTP API that makes a new connection per request, calls the function once, and then returns. Hence every API request pays the cost of an unplanned query.

Question:

How can I make this query be planned once and never again? Maybe using a PREPARE statement?
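The kind of PREPARE approach I have in mind (a sketch only; the parameter types are assumed from the call above):

```sql
-- Prepare once per connection; planning happens at PREPARE time
-- (or on the first EXECUTE, for generic plans), and the plan is
-- then reused within that session.
PREPARE my_fn_call(integer, integer, text) AS
  SELECT * FROM public.my_custom_function($1, $2, $3);

-- Later calls in the same session skip most of the planning work:
EXECUTE my_fn_call(10, 10, 'Personal');
```

Note that prepared statements only live for the duration of a session, so I'm not sure whether this can help across fresh connections.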

Werner Raath
  • Just ignore the first time. Your server process is probably swapped out, or living in a VM. BTW: 4msec is not slow. Oh, and what does `my_custom_function()` do? – wildplasser Jan 20 '21 at 18:56
  • If this is just the first time, it could also be caused by meta data cache that needs to be loaded. Does your database have a lot of tables and/or schemas? –  Jan 20 '21 at 19:13
  • If the problem is particular to your custom function, then you need to show it to us (in some simplified form which still has the problem). And if it is not, then you should show us a less peculiar example. – jjanes Jan 21 '21 at 03:38
  • Also, what is the OS and your PostgreSQL version? – jjanes Jan 21 '21 at 03:48
  • Hi all, I've updated my question with the info that you asked for – Werner Raath Jan 21 '21 at 06:38

1 Answer


While there might be ways to speed this up (if we could see your function), fundamentally, if you are very sensitive to performance, you need to choose some technology that doesn't make one connection per request, such as mod_perl, FastCGI, or maybe pgbouncer.
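For example, a minimal pgbouncer configuration (a sketch; the database name, ports, and paths are placeholder assumptions) that keeps a pool of warm server connections your short-lived API connections can reuse:

```ini
; pgbouncer.ini -- example values, adjust to your deployment
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; transaction pooling hands each client an already-open server
; connection, so per-request connects stop paying cold-start costs
pool_mode = transaction
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```

The API then connects to port 6432 instead of 5432; the underlying PostgreSQL connections stay alive between requests, so caches stay warm even though each HTTP request opens its own client connection.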

jjanes
  • Okay so I tested your theory and you are right! I'm going to give it another week just to wait and see if there is an actual way to do it in Postgres. If nothing pops up, I'll mark this as the answer – Werner Raath Jan 22 '21 at 06:38