
I'm using MonetDB version 11.39.11 (Oct2020 SP2) as part of a BI software stack. MonetDB loads the fact table from a large flat file with 40 columns and 225 million rows. Everything works like a charm: 16 GB of RAM is enough to load that amount of data, and COPY INTO itself uses very little memory.
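
The load is a single bulk COPY INTO, roughly of this shape (a sketch only; the table name, file path, and delimiters are placeholders, not the actual BI schema):

    COPY INTO fact_table                      -- hypothetical fact table
    FROM '/data/fact_load.csv' ON SERVER      -- ~225 million rows, 40 columns
    USING DELIMITERS ',', E'\n', '"'
    NULL AS '';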

I switched MonetDB to release 11.41.5 (Jul2021) and everything still works, but when I load the fact table from that same big flat file, MonetDB just eats all available memory!

What changed in the COPY INTO statement between releases 11.39.11 and 11.41.5?

  • How many COPY INTO queries do you run simultaneously? Do you also run other queries concurrently? ==> i.e. does this happen if you _only_ run the COPY INTO? Do you have a single CSV file of 10s of GBs? Does this problem still happen if you split it into several smaller CSV files? – Jennie Aug 23 '21 at 07:21
  • Hi, Jennie! I run only one COPY INTO; nothing else runs simultaneously. I tested the COPY INTO with an SQL client (DBeaver) and the problem happens with the new version, but not with 11.39.11. I can try splitting this large file into smaller ones, but in that case it's more cost-effective to stay on version 11.39.11, which works perfectly without any modifications. – Llorieb Aug 23 '21 at 12:14
  • Thanks for the info. We're pretty sure you've hit a performance problem with COPY INTO in Jul2021 (due to the complete rewrite of the transaction management and SQL storage layer of MonetDB). We're working on a solution for this, which will probably come in an SP for Jul2021. – Jennie Aug 24 '21 at 11:48
  • Thank you for your prompt response, Jennie! I'm very interested in that SP. Best regards. – Llorieb Aug 24 '21 at 12:42
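
As a concrete form of Jennie's suggestion: MonetDB's COPY n OFFSET m RECORDS variant lets the same CSV be loaded in fixed-size batches instead of one statement, which is one way to bound memory use per load. This is a sketch only; the table name, file path, and batch size are placeholders:

    -- Batch 1: first 50 million records
    COPY 50000000 RECORDS
    INTO fact_table FROM '/data/fact_load.csv' ON SERVER
    USING DELIMITERS ',', E'\n', '"';

    -- Batch 2: next 50 million records (OFFSET counts records from 1,
    -- so OFFSET 50000001 skips the first 50 million)
    COPY 50000000 OFFSET 50000001 RECORDS
    INTO fact_table FROM '/data/fact_load.csv' ON SERVER
    USING DELIMITERS ',', E'\n', '"';

    -- ...and so on until all 225 million records are loaded.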

0 Answers