
I have an AWS RDS PostgreSQL 12.3 instance (t3.small, 2 vCPUs, 2 GB RAM) with this table:

CREATE TABLE public.phones_infos
(
    phone_id integer NOT NULL DEFAULT nextval('phones_infos_phone_id_seq'::regclass),
    phone character varying(50) COLLATE pg_catalog."default" NOT NULL,
    company_id integer,
    phone_tested boolean DEFAULT false,
    imported_at timestamp with time zone NOT NULL,
    CONSTRAINT phones_infos_pkey PRIMARY KEY (phone_id),
    CONSTRAINT fk_phones_infos FOREIGN KEY (company_id)
        REFERENCES public.companies_infos (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE CASCADE
)

There are exactly 137468 records in this table, as confirmed by:

SELECT count(1) FROM phones_infos;

This simple query fails with `ERROR: out of memory for query result` when I run it in pgAdmin 4.6:

SELECT * FROM phones_infos;

I have tables with 5M+ records and never had this problem before.

EXPLAIN SELECT * FROM phones_infos;
Seq Scan on phones_infos  (cost=0.00..2546.68 rows=137468 width=33)

I read this article to see if I could find answers, but unfortunately, as the metrics show ([RDS metrics screenshot]), there are no old pending connections that could be eating memory.

As suggested there, shared_buffers seems to be correctly sized:

SHOW shared_buffers;
449920kB
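
(449920 kB is roughly 440 MB, about a fifth of the instance's 2 GB of RAM, which is in line with the usual ~25%-of-RAM guideline.)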

What should I try?

  • That OOM must be occurring on the client side. What are you using there? – Laurenz Albe Sep 30 '20 at 09:53
  • pgAdmin 4.6. Now that you say that, I admit I've had issues with pgAdmin constantly losing connections since the last upgrade. I asked a mate on the same network and he doesn't have any problem with this query. I'll try reinstalling pgAdmin – IndiaSke Sep 30 '20 at 09:59
  • pgAdmin is known to have a hard time with large result sets. Use `psql`. – Laurenz Albe Sep 30 '20 at 10:02
  • It's working again. I never had problems before, even when querying millions of rows. Do you have an alternative to pgAdmin? (a better one) – IndiaSke Sep 30 '20 at 10:07
  • Yes, like I said, `psql`. I never use a different client. – Laurenz Albe Sep 30 '20 at 10:21
  • @LaurenzAlbe . . . I think you should combine your comments into an answer. – Gordon Linoff Sep 30 '20 at 10:44
  • As @LaurenzAlbe indicates, psql will always work. I would suggest you become familiar with it. But on the IDE side you might want to look into DBeaver, or even (surprisingly) Oracle SQL Developer works. If you have deep pockets then there is Toad Edge. – Belayer Sep 30 '20 at 19:16

1 Answer


The problem must be on the client side. A sequential scan does not require much memory in PostgreSQL.
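
In fact, the EXPLAIN output above estimates an average row width of 33 bytes, so the entire result set amounts to only about 137468 × 33 ≈ 4.5 MB of raw data, far too little to exhaust memory on the server.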

pgAdmin will cache the complete result set in RAM, which probably explains the out-of-memory condition.

I see two options:

  • Limit the number of result rows in pgAdmin:

    SELECT * FROM phones_infos LIMIT 1000;
    
  • Use a different client, for example psql. There you can avoid the problem by setting

    \set FETCH_COUNT 1000
    

    so that the result set is fetched in batches.
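
For example, a minimal psql session could look like this (the host, database, and user names are placeholders for your own connection parameters):

$ psql -h mydb.example.com -U myuser -d mydb
mydb=> \set FETCH_COUNT 1000
mydb=> SELECT * FROM phones_infos;

With FETCH_COUNT set, psql executes the query through a cursor behind the scenes and issues repeated FETCH FORWARD 1000 commands, so it never holds more than 1000 rows in client memory at a time. To make the setting permanent, you can put the \set line into your ~/.psqlrc.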

Laurenz Albe