
I am ingesting a large amount of data from PostgreSQL and am trying to get jdbc_fetch_size working properly. Before this I used Redshift and it worked very well: Logstash memory would stay around 2 to 3 GB. Since switching to PostgreSQL, memory maxes out at 4 GB every time. I know I can switch to jdbc_page_size, but that is much slower.

Does anybody have this working properly so they can ingest 20 million records or more? What are you setting jdbc_fetch_size to? I tried 10 and 100.

My Logstash version is 7.17.2 and my PostgreSQL JDBC driver is version 42.3.5.

asked by Casey, edited by Mark Rotteveel
Comment: As you didn't show any code, I have to guess what you are doing. "Ingesting" seems to suggest you are doing INSERTs. However, the JDBC fetch size (`Statement.setFetchSize()`) [is only used](https://jdbc.postgresql.org/documentation/head/query.html#query-with-cursor) when _retrieving_ data, e.g. through a SELECT statement, and only if you turned off auto-commit on the connection. – May 19 '22 at 20:17
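
To illustrate the comment's point, here is a minimal plain-JDBC sketch of cursor-based fetching against PostgreSQL. The connection URL, credentials, table name, and the fetch size of 10,000 are placeholders for illustration only, not anything taken from the question:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CursorFetchSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- not from the question.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/sourcedb", "loader", "secret")) {

            // The PostgreSQL driver only streams rows through a cursor (and so
            // only honours the fetch size) when auto-commit is off; with
            // auto-commit on it buffers the entire result set in memory.
            conn.setAutoCommit(false);

            try (Statement stmt = conn.createStatement()) {
                stmt.setFetchSize(10_000); // rows pulled per server round trip

                try (ResultSet rs = stmt.executeQuery(
                        "SELECT id, payload FROM big_table")) {
                    while (rs.next()) {
                        // process one row at a time; only roughly one fetch
                        // batch of rows is resident in the client at once
                    }
                }
            }
            conn.commit();
        }
    }
}
```

If the Logstash jdbc input leaves auto-commit on for its connection, the driver will still buffer the whole result set regardless of jdbc_fetch_size, which would match the memory behaviour described in the question.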

0 Answers