
We have a PostgreSQL database with a table of about 20K rows.

The software is written in Java/Spring using JPA, and the developers used large objects (@Lob) to store some long strings without worrying about their length. Now, especially with selects that return many rows, the queries take a long time to execute.
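
The JPA mapping looks roughly like this (a sketch with placeholder names such as LongTextEntity and longText, not the real entity); with Hibernate on PostgreSQL a @Lob String is stored as an oid reference, so the text itself ends up in pg_largeobject rather than in the row:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Lob;
import javax.persistence.Table;

// Sketch of the current mapping: the @Lob String becomes an oid column,
// so every read of this field goes through the large object facility.
@Entity
@Table(name = "the_table")
public class LongTextEntity {

    @Id
    private Long id;

    // stored as an oid pointing into pg_largeobject, not inline in the row
    @Lob
    @Column(name = "the_oid_column")
    private String longText;

    // getters and setters omitted
}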

First question: why are queries so slow with large objects? Is it because every large object requires a sort of join against PostgreSQL's large object table (pg_largeobject)?

Second question: how can we migrate away from large objects and move to standard columns? In the end we need at most about 2000-3000 characters.

I wrote a small Java program to do this, but the query is so slow that it takes a very long time to execute.
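
The program did something roughly along these lines (a sketch, not the actual code; setContent and the 3000-character cut are assumptions): it loads every entity through JPA and copies the @Lob value into a plain column, which means one large object read per row:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Sketch of the slow per-row approach: each getLongText() call fetches the
// content from pg_largeobject one large object at a time.
public class LobMigration {

    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit"); // placeholder unit name
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();

        List<LongTextEntity> rows = em
                .createQuery("select e from LongTextEntity e", LongTextEntity.class)
                .getResultList();

        for (LongTextEntity row : rows) {
            String text = row.getLongText();
            // keep at most ~3000 characters in the new plain column
            row.setContent(text == null ? null : text.substring(0, Math.min(text.length(), 3000)));
        }

        em.getTransaction().commit();
        em.close();
        emf.close();
    }
}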

Davide C

1 Answer


I solved it by doing this:

-- 1. Add a plain text column to hold the content
alter table the_table
   add content text;

-- 2. Copy each large object's content into the new column
update the_table
  set content = convert_from(lo_get(the_oid_column), 'UTF-8');

-- 3. Delete the large objects themselves (they are not removed automatically)
select lo_unlink(the_oid_column)
from the_table;

-- 4. Drop the old oid column
alter table the_table
   drop column the_oid_column;
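
After that, the entity can be mapped to the new text column instead of the large object (a sketch with the same placeholder names as in the question):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Sketch of the mapping after the migration: the text is now stored inline
// in the row as a plain text column, with no pg_largeobject lookup.
@Entity
@Table(name = "the_table")
public class LongTextEntity {

    @Id
    private Long id;

    // plain text column created by the ALTER TABLE above; no @Lob anymore
    @Column(name = "content", columnDefinition = "text")
    private String content;

    // getters and setters omitted
}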