I have a master database with a publication and a slave database subscribing to that publication via logical replication. It is on the slave that I occasionally get the following error:
ERROR: error reading result of streaming command:
LOG: logical replication table synchronization worker for subscription ABC, table XYZ
How do I stop the above error from happening?
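For context, the publication and subscription were set up in the usual way. The statements below are only a sketch of that setup; the publication name and connection string are placeholders (the subscription name is the real one from the logs):

-- On the master (publisher); publication name is a placeholder
CREATE PUBLICATION my_pub FOR ALL TABLES;

-- On the slave (subscriber); connection string is a placeholder
CREATE SUBSCRIPTION snf5_cba_isp_db_staging_app1_srv_sub
    CONNECTION 'host=master.example.com dbname=mydb user=replicator password=secret'
    PUBLICATION my_pub;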
Here is the relevant portion of the slave's log demonstrating the error:
2020-11-25 06:50:51.736 UTC [91572] LOG: background worker "logical replication worker" (PID 96504) exited with exit code 1
2020-11-25 06:50:51.740 UTC [96505] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "contractservices_devicekioskrating" has started
2020-11-25 06:50:52.197 UTC [96505] ERROR: error reading result of streaming command:
2020-11-25 06:50:52.200 UTC [91572] LOG: background worker "logical replication worker" (PID 96505) exited with exit code 1
2020-11-25 06:50:52.203 UTC [96506] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "workorders_sectorbranchinformation" has started
2020-11-25 06:50:52.286 UTC [96506] ERROR: error reading result of streaming command:
2020-11-25 06:50:52.288 UTC [91572] LOG: background worker "logical replication worker" (PID 96506) exited with exit code 1
2020-11-25 06:50:52.292 UTC [96507] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "contractservices_kioskstatetransitions" has started
2020-11-25 06:52:14.887 UTC [96339] ERROR: error reading result of streaming command:
2020-11-25 06:52:14.896 UTC [91572] LOG: background worker "logical replication worker" (PID 96339) exited with exit code 1
2020-11-25 06:52:14.900 UTC [96543] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "contractservices_sensordatafeed" has started
2020-11-25 06:52:21.385 UTC [96507] ERROR: error reading result of streaming command:
2020-11-25 06:52:21.393 UTC [91572] LOG: background worker "logical replication worker" (PID 96507) exited with exit code 1
2020-11-25 06:52:21.397 UTC [96547] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "contractservices_sitemappoint" has started
2020-11-25 06:52:21.523 UTC [96547] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "contractservices_sitemappoint" has finished
2020-11-25 06:52:21.528 UTC [96548] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "core_event" has started
2020-11-25 06:55:35.401 UTC [96543] ERROR: error reading result of streaming command:
2020-11-25 06:55:35.408 UTC [91572] LOG: background worker "logical replication worker" (PID 96543) exited with exit code 1
2020-11-25 06:55:35.412 UTC [96642] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "contractservices_doorevents" has started
2020-11-25 06:56:43.633 UTC [96642] ERROR: error reading result of streaming command:
2020-11-25 06:56:43.641 UTC [91572] LOG: background worker "logical replication worker" (PID 96642) exited with exit code 1
2020-11-25 06:56:43.644 UTC [96678] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "workorders_sectorbranchinformation" has started
2020-11-25 06:56:43.776 UTC [96678] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "workorders_sectorbranchinformation" has finished
2020-11-25 06:56:43.782 UTC [96679] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "core_batteryhistory" has started
2020-11-25 06:57:04.166 UTC [96679] ERROR: error reading result of streaming command:
2020-11-25 06:57:04.174 UTC [91572] LOG: background worker "logical replication worker" (PID 96679) exited with exit code 1
2020-11-25 06:57:04.178 UTC [96685] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "contractservices_attendantvisittime" has started
2020-11-25 06:57:06.100 UTC [96685] ERROR: error reading result of streaming command:
2020-11-25 06:57:06.160 UTC [91572] LOG: background worker "logical replication worker" (PID 96685) exited with exit code 1
2020-11-25 06:57:06.164 UTC [96693] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "contractservices_kioskstatetransitions" has started
2020-11-25 06:59:50.375 UTC [96548] ERROR: error reading result of streaming command:
2020-11-25 06:59:50.382 UTC [91572] LOG: background worker "logical replication worker" (PID 96548) exited with exit code 1
2020-11-25 06:59:50.389 UTC [96755] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "contractservices_sensordatafeed" has started
2020-11-25 07:00:56.844 UTC [96693] ERROR: error reading result of streaming command:
2020-11-25 07:00:56.852 UTC [91572] LOG: background worker "logical replication worker" (PID 96693) exited with exit code 1
2020-11-25 07:00:56.856 UTC [96779] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "workorders_wastestream" has started
2020-11-25 07:00:57.391 UTC [96779] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "workorders_wastestream" has finished
2020-11-25 07:00:57.397 UTC [96780] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "core_event" has started
2020-11-25 07:02:39.650 UTC [96755] ERROR: error reading result of streaming command:
2020-11-25 07:02:39.658 UTC [91572] LOG: background worker "logical replication worker" (PID 96755) exited with exit code 1
2020-11-25 07:02:39.662 UTC [96824] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "contractservices_devicekioskrating" has started
2020-11-25 07:02:40.276 UTC [96824] ERROR: error reading result of streaming command:
2020-11-25 07:02:40.279 UTC [91572] LOG: background worker "logical replication worker" (PID 96824) exited with exit code 1
2020-11-25 07:02:40.283 UTC [96825] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "contractservices_kioskstatetransitions" has started
2020-11-25 07:04:07.222 UTC [96825] ERROR: error reading result of streaming command:
2020-11-25 07:04:07.230 UTC [91572] LOG: background worker "logical replication worker" (PID 96825) exited with exit code 1
2020-11-25 07:04:07.234 UTC [96862] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "contractservices_attendantvisit" has started
2020-11-25 07:04:49.971 UTC [96862] ERROR: error reading result of streaming command:
2020-11-25 07:04:49.978 UTC [91572] LOG: background worker "logical replication worker" (PID 96862) exited with exit code 1
2020-11-25 07:04:50.432 UTC [97013] LOG: logical replication table synchronization worker for subscription "snf5_cba_isp_db_staging_app1_srv_sub", table "core_batteryhistory" has started
Despite this error, on PostgreSQL v13.0 the tables on the slave database seem to be replicating okay; I would still like to resolve the error. I also tried upgrading to PostgreSQL v13.1 and noticed that I still get this error, and on that version the tables do not replicate okay.
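When I say the tables seem to be replicating okay, I mean I spot-check the subscription and per-table sync state on the slave with the standard catalog views, roughly like this:

-- Apply worker / subscription status
SELECT subname, pid, received_lsn, latest_end_lsn
FROM pg_stat_subscription;

-- Per-table sync state: r = ready, s = synchronized, d = copying data, i = initializing
SELECT srrelid::regclass AS table_name, srsubstate
FROM pg_subscription_rel;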
I found this post:
https://www.postgresql-archive.org/BUG-16643-PG13-Logical-replication-initial-startup-never-finishes-and-gets-stuck-in-startup-loop-td6156051.html
The reporter there (Henry Hinze) said it was a bug and that it was fixed by installing version 13 RC1. My experience was the reverse: on PostgreSQL v13.0 the subscription was not getting stuck in the startup loop, but after installing PostgreSQL v13.1 it was.
I can confirm that I am running PostgreSQL 13.1, as /usr/lib/postgresql/13/bin/postgres -V
gives the following output:
postgres (PostgreSQL) 13.1 (Ubuntu 13.1-1.pgdg18.04+1)
I am using Ubuntu v18.04.
I have completely uninstalled and reinstalled PostgreSQL, but that has not resolved the issue.
The postgresql.conf settings on the slave are the defaults.
The relevant postgresql.conf settings on the master are as follows:
wal_level = logical
checkpoint_timeout = 5min
max_wal_size = 1GB
min_wal_size = 80MB
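For completeness, I verify those values on the master with plain SHOW / pg_settings queries:

SHOW wal_level;

SELECT name, setting
FROM pg_settings
WHERE name IN ('wal_level', 'checkpoint_timeout', 'max_wal_size', 'min_wal_size');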