
I want to add a timeout to pg-promise queries, so they fail after some amount of time if the database has not yet responded. Is there a recommended way to do that, or should I write a custom wrapper that handles a timer and rejects the promise if it takes too long?
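
For reference, the kind of wrapper I mean would be something like this (a minimal sketch; the withTimeout helper is purely hypothetical, and it only rejects the promise on the client, without stopping the query on the server):

// Hypothetical wrapper: races the query promise against a timer.
function withTimeout(queryPromise, ms) {
    let timer;
    const timeout = new Promise((resolve, reject) => {
        timer = setTimeout(() => reject(new Error('Query timed out')), ms);
    });
    return Promise.race([queryPromise, timeout])
        .finally(() => clearTimeout(timer)); // clean up the timer either way
}

withTimeout(db.any('SELECT ...'), 500)
    .then(data => { /* resolved in time */ })
    .catch(error => { /* query error, or the timeout above */ });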

Resure

1 Answer


From the author of pg-promise...


pg-promise doesn't support query cancellation, because it is a hack to work around incorrect database design or bad query execution.

PostgreSQL supports notification events (LISTEN/NOTIFY) that should be used when executing time-consuming queries; instead of waiting, one can set up a listener that is triggered when the specific data/view becomes available. See the sketch below.
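
For illustration, a minimal listener sketch with pg-promise could look like this (the channel name 'data_ready' is an assumption; a direct, non-pooled connection is used so the listener stays attached):

// Attach a notification listener on a direct (non-pooled) connection;
// the channel name 'data_ready' is hypothetical:
db.connect({direct: true})
    .then(sco => {
        sco.client.on('notification', msg => {
            console.log('Notification received:', msg.payload);
        });
        return sco.none('LISTEN $1~', 'data_ready');
    })
    .catch(error => {
        console.log('Error:', error);
    });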

You can extend pg-promise with your own custom query method that times out with a reject (see the example below), but that is again a workaround on top of a design problem.

Example using Bluebird:

const Promise = require('bluebird');

// enable cancellation, so that .timeout() can cancel the chain:
Promise.config({
    cancellation: true
});

const initOptions = {
    promiseLib: Promise, // make pg-promise use Bluebird
    extend(obj) {
        // add a custom query method that rejects after `delay` ms:
        obj.queryTimeout = (query, values, delay) => {
            return obj.any(query, values).timeout(delay);
        };
    }
};

const pgp = require('pg-promise')(initOptions);
const db = pgp(/* connection details */);

Then you can use db.queryTimeout(query, values, delay) at every level, i.e. on the database object as well as inside tasks and transactions.
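
For example (assuming the extension above; the users table, the parameter and the 500ms delay are arbitrary):

// Rejects with Bluebird's TimeoutError if no result arrives within 500ms:
db.queryTimeout('SELECT * FROM users WHERE id = $1', [123], 500)
    .then(data => { /* rows arrived in time */ })
    .catch(error => { /* query error, or TimeoutError */ });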

Alternatively, if you are using Bluebird, you can chain .timeout(delay) to any of the existing methods:

db.any(query, values)
    .timeout(500)
    .then(data => {})
    .catch(error => {})
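
Bluebird also supports filtered catch handlers, so a timeout can be told apart from an ordinary query failure, for example:

db.any(query, values)
    .timeout(500)
    .catch(Promise.TimeoutError, error => {
        // the 500ms timeout kicked in
    })
    .catch(error => {
        // the query failed for another reason
    });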

UPDATE

From version 8.5.3, pg-promise supports query timeouts via the property query_timeout within the connection object.

You can either override the defaults:

pgp.pg.defaults.query_timeout = 3000; // time out every query after 3 seconds

Or specify it within the connection object:

const db = pgp({
    /* all connection details */

    query_timeout: 3000
});
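
A quick way to observe the effect (pg_sleep here just simulates a slow response; the exact error text comes from the underlying driver):

// With query_timeout: 3000, this rejects after ~3 seconds,
// instead of waiting the full 10 seconds:
db.one('SELECT pg_sleep(10)')
    .then(data => {
        // never reached
    })
    .catch(error => {
        console.log(error.message); // timeout error from the driver
    });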
vitaly-t
  • I'm mostly concerned about network problems: the queries themselves in the app in question are pretty simple, but network connectivity to the database can sometimes degrade significantly for a while, and I want the request promises to fail within a predictable period of time, so the client gets a proper error and not something like an nginx timeout. – Resure Sep 11 '17 at 18:27
  • @Resure if a query is taking too long, that's either a database design problem or a query-planning problem. Trying to force a timeout is a bad hack, just as I explained above. – vitaly-t Sep 11 '17 at 18:42
  • Absolutely agree with that, but maybe you've misread the comment: what if it's a network problem, not a database design problem? Or do you mean that connectivity problems are design problems too? – Resure Sep 11 '17 at 19:45
  • @Resure What I meant was that having long-running queries is a design problem; there are ways to avoid it. And then you would not need to worry about the network at all, as the library is built on a connection pool that automatically restores connectivity, so any query can be successfully re-run. – vitaly-t Sep 12 '17 at 03:54
  • Using Bluebird's `.timeout` would reject the promise, but pg-promise wraps async code that runs a query on a database, and neither has any concept of a rejected/cancelled promise. – Matt Nov 17 '17 at 05:22
  • @Resure The answer has been updated to reflect what's in the latest version of `pg-promise` ;) which now supports the option `query_timeout`. – vitaly-t Nov 30 '18 at 17:34
  • @vitaly-t The problem with `query_timeout` is that it leaves active queries hanging. In my scenario I have a chart editor where users can apply different data aggregations, calculations, etc. by themselves, so a query can become really time-consuming. That's why I want to make sure that my server won't spend too much time on a user-made query and will remain responsive. – Mike Yermolayev Apr 15 '20 at 20:49