
We have binary (bytea) data stored in pg_largeobject (the large objects table).

The data is about 675 MB. We are using the large object client interface API provided by PostgreSQL (lo_open, lo_read, etc.) to retrieve it.

When I fetch the data from my local PostgreSQL, it takes 30-35 seconds to retrieve the content from the DB. Against Azure PostgreSQL, the same fetch keeps executing for more than an hour without finishing.

I'm using Java code to retrieve the data (a sketch of the retrieval pattern is shown below). The code and network speed are the same in both cases, but when I point to Azure PostgreSQL the latency is very high.
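The retrieval logic is essentially the standard JDBC large object read loop; a minimal sketch of that pattern follows (the host, database, credentials, OID, buffer size, and output file name are placeholders, not the real values):

import java.io.FileOutputStream;
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class FetchLargeObject {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details and OID; not the actual values used.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://<host>:5432/<database>", "<user>", "<password>")) {

            // The large object API must be used inside a transaction.
            conn.setAutoCommit(false);

            LargeObjectManager lom = conn.unwrap(PGConnection.class).getLargeObjectAPI();
            long oid = 12345L; // OID of the stored large object (placeholder)

            LargeObject lo = lom.open(oid, LargeObjectManager.READ);
            try (OutputStream out = new FileOutputStream("content.bin")) {
                byte[] buf = new byte[1024 * 1024]; // read in 1 MB chunks
                int n;
                while ((n = lo.read(buf, 0, buf.length)) > 0) {
                    out.write(buf, 0, n);
                }
            } finally {
                lo.close();
            }
            conn.commit();
        }
    }
}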

If anyone knows what the root cause could be, or how I can retrieve the file as fast as I do locally, it would help a lot and be much appreciated.

Azure PostgreSQL version: 14.8, hosted in Germany West
Local PostgreSQL version: 15.1, hosted in India

The user logged in to Azure PostgreSQL is not a superuser, while the user I use on the local DB is a superuser. Version, location, and superuser status are the only differences I can see; everything else looks the same.

Even when the script is run from Germany (I thought network latency/region was the issue), the latency is still high.

Please suggest any ideas or solutions to resolve this issue.

Dev-eloper
    As I said on the Postgres mailing list, this is an Azure tech support question. Contact them. – Adrian Klaver Sep 01 '23 at 14:58
  • Avoid large objects by all means. They will probably make you suffer some day. Lots of things about them are weird, and there are lots of known problems. I consider it a legacy technique. – Laurenz Albe Sep 02 '23 at 16:40
