
I have two databases (PostgreSQL 14): let's call them master and slave.

Let's create a table in the slave db:

create table sample_slave(
    id int, 
    name varchar,
    location varchar
);
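
For illustration, the slave table could hold a row of its own (placeholder data, not part of the original setup):

insert into sample_slave values (1, 'remote_row', 'slave');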

Then, in the master db:

CREATE TABLE sample(
    id serial primary key, 
    name varchar, 
    location varchar
);

CREATE TABLE sample_master(check(location='master')) inherits(sample);
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER slave_server FOREIGN DATA WRAPPER postgres_fdw options (dbname '...', host '...', port '5432', sslcert '...', sslkey '...', sslrootcert '...');
CREATE USER mapping for test_user SERVER slave_server options (user '...', password '...');
CREATE FOREIGN TABLE sample_slave (check(location='slave')) inherits(sample) SERVER slave_server;

create or replace function sample_trigger_fn() returns trigger as
$$
begin

    if new.location = 'master' then
        insert into public."sample_master" values(new.*);
    elsif new.location = 'slave' then
        insert into public."sample_slave" values(new.*);
    end if;

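    -- return null so the row is not also stored in the parent table itself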
    return null;
end
$$
language plpgsql;

create trigger sample_trigger before insert on public."sample" for each row execute procedure public.sample_trigger_fn();
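
With the trigger in place, an insert into the parent is routed to the matching partition. For example (placeholder values):

INSERT INTO sample(name, location) VALUES ('some_name', 'slave');
-- the trigger redirects this row into sample_slave, i.e. onto the slave db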

So we have a simple partitioned table, sample. If we execute this query in the 'master' db:

SELECT * FROM sample;

we get all the data from both the 'master' and 'slave' dbs. It all works fine while the connections are stable, but if the connection to the 'slave' db is lost, the SELECT returns an error.

What I want is that when the connection is lost, the 'master' db does not throw an error, but instead returns all available records (skipping the part that is unreachable).
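
For illustration, a rough sketch of the kind of fallback behaviour I have in mind (the wrapper function sample_all is hypothetical, just to show the intent):

create or replace function sample_all() returns setof sample as
$$
begin
    -- local partitions are always reachable
    return query select * from only sample;
    return query select * from sample_master;
    begin
        -- foreign partition: skip it silently if the slave is unreachable
        return query select * from sample_slave;
    exception when others then
        null;
    end;
end
$$
language plpgsql;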

Is it possible to do that in Postgres?

Thanks

Vikl007
  • `postgres_fdw` is really meant for accessing remote data, stored on other instances, which you wish to keep stored there. Your case sounds like a good match for [asynchronous streaming](https://www.postgresql.org/docs/14/warm-standby.html#STREAMING-REPLICATION) [logical replication](https://www.postgresql.org/docs/14/logical-replication.html). – Zegarek Dec 06 '21 at 15:53
  • Thanks. I don’t want to duplicate data. I want to view data from tables (sample1, sample2, ...) on different instances (db1, db2, ...) through one abstract table 'sample' in the master db. With the help of postgres_fdw it is enough to execute a simple SELECT command; otherwise I would need to connect to each instance and check the contents of each table separately. The question can be formulated as follows: how to make a FOREIGN TABLE return an empty table instead of an error if the connection to that environment is lost? – Vikl007 Dec 07 '21 at 07:59
  • The extension does not offer this type of control, unfortunately. You could tweak its settings to try and change how the connection behaves (fetch_size, batch_size, keep_connections) to maybe decrease the failure rate, but I don't see much else. You could resort to tricks like hiding the slave server behind a load balancer that switches between it and an empty standby (with the slave's unpopulated schema) whenever the slave is unreachable. I think PostgreSQL replication can be used for that - after all, that would be just a "backup" server that doesn't even have to make backups – Zegarek Dec 07 '21 at 08:31
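
For reference, the settings tweak mentioned in the last comment would look roughly like this (the values are placeholders, not recommendations):

ALTER SERVER slave_server OPTIONS (ADD fetch_size '1000', ADD batch_size '100', ADD keep_connections 'on');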
