
I have a table with around 2 billion rows, and I am trying to query max(id) from it. `id` is not the sort key of the table, and the table uses the MergeTree table engine.

No matter what I try, I get memory errors, and not only with this one query. As soon as I try to scan any table fully (reading a whole column) to find data, my 12 GB of RAM is not enough. I know I could just add more, but that is not the point. Is it by design that ClickHouse simply throws an error when it doesn't have enough memory? Is there a setting that tells ClickHouse to use disk instead?

SQL Error [241]: ClickHouse exception, code: 241, host: XXXXXX, port: 8123; Code: 241, e.displayText() = DB::Exception: Memory limit (for query) exceeded: would use 9.32 GiB (attempt to allocate chunk of 9440624 bytes), maximum: 9.31 GiB (version 21.4.6.55 (official build))
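For reference, the failing query is essentially just this (the table name `events` is a placeholder for my real schema):

```sql
-- Hypothetical repro: `events` stands in for the real 2-billion-row table.
-- Since id is not part of the sort key, ClickHouse cannot answer this from
-- the primary index and has to read the whole id column.
SELECT max(id) FROM events;
```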

Thomas Miller
  • Could you provide the schema of the table and the SQL query? It looks like you use *GROUP BY*, so look at [GROUP BY in External Memory](https://clickhouse.tech/docs/en/sql-reference/statements/select/group-by/#select-group-by-in-external-memory). – vladimir Jul 30 '21 at 17:31

2 Answers


Alexey Milovidov disagrees with putting minimum RAM requirements into the CH documentation. But I would say that 32 GB is the minimum for production CH.

At least (see the sketch after this list):

  • You need to lower the mark cache, because it is 5 GB(!) by default: set mark_cache_size to 500 MB.
  • You need to lower max_block_size to 16384.
  • You need to lower max_threads to 2.
  • You need to set max_bytes_before_external_group_by to 3 GB.
  • You need to set aggregation_memory_efficient_merge_threads to 1.
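A rough sketch of where these settings go, assuming a stock installation (the byte values approximate 500 MB and 3 GB; tune them to your hardware). The last four are query-level settings you can apply per session or in a users.xml profile:

```sql
-- Query-level settings (per session, or in a users.xml profile):
SET max_block_size = 16384;                          -- smaller blocks, lower peak memory per stream
SET max_threads = 2;                                 -- fewer parallel reading/processing streams
SET max_bytes_before_external_group_by = 3221225472; -- spill GROUP BY state to disk past ~3 GB
SET aggregation_memory_efficient_merge_threads = 1;  -- single-threaded merge of aggregation states
```

The mark cache is a server setting in config.xml (changing it typically requires a restart):

```xml
<!-- config.xml (or an override in config.d/): shrink the 5 GB default mark cache -->
<mark_cache_size>536870912</mark_cache_size> <!-- ~500 MB -->
```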
Denny Crane
  • Would you elaborate on each point, and why? Thanks! – simPod Jul 31 '21 at 19:32
  • So without 32 GB, CH is not able to properly manage its RAM consumption? Under this assumption, would upgrading to 32 GB of RAM resolve all my issues? – Thomas Miller Aug 06 '21 at 12:52
  • I would say it's a waste of time struggling with CH on less than 32 GB of RAM. – Denny Crane Aug 06 '21 at 13:03
  • @DennyCrane SQL Error [241]: ClickHouse exception, code: 241, host: XXXXXX, port: 8123; Code: 241, e.displayText() = DB::Exception: Memory limit (total) exceeded: would use 55.84 GiB (attempt to allocate chunk of 4231632 bytes), maximum: 55.83 GiB: (while reading column _version): (while reading from part /var/lib/clickhouse/store/21e/21ede8eb-ba4e-4f11-814c-a216f452cd57/0_919_3015_9/ from mark 0 with max_rows_to_read = 16384): While executing MergeTree (version 21.4.6.55 (official build)). I have added 4x the RAM, and the table is now only double the size of the RAM. – Thomas Miller Aug 10 '21 at 09:40
  • @ThomasMiller You will be shocked, but it's kind of expected behavior for CH in your use case. Let's continue on GitHub. – Denny Crane Aug 10 '21 at 13:17
  • "You need to lower the mark cache, because it is 5 GB(!) by default: set mark_cache_size to 500 MB." => this solved my problem – Benoit F Jul 05 '22 at 09:28

For me, what worked was to change the maximum server memory usage to RAM ratio from 0.9 to 1.2:

In config.xml: `<max_server_memory_usage_to_ram_ratio>1.2</max_server_memory_usage_to_ram_ratio>`
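A sketch of the full override, assuming the usual config.d pattern (the filename memory.xml is just an example; in 21.x the root element is <yandex>). Note that a ratio above 1.0 lets the server commit more than physical RAM, which only helps if swap is available:

```xml
<!-- /etc/clickhouse-server/config.d/memory.xml (hypothetical filename) -->
<yandex>
    <!-- Allow up to 1.2x physical RAM instead of the 0.9 default. -->
    <max_server_memory_usage_to_ram_ratio>1.2</max_server_memory_usage_to_ram_ratio>
</yandex>
```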

Thanks for the reply, as it ultimately led me to this.

Thomas Miller