I'm hosting ClickHouse (v20.4.3.16) with 2 replicas on Kubernetes, and it uses ZooKeeper (v3.5.5) with 3 replicas, also hosted on the same Kubernetes cluster.
I need to migrate the ZooKeeper ensemble used by ClickHouse to a different installation, still 3 replicas but running v3.6.2.
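For context, both ClickHouse replicas get the ZooKeeper endpoints from the <zookeeper> section of the server configuration, which I override with a config.d drop-in roughly like this (the service names below are placeholders, not my real Kubernetes services):

    # drop-in config pointing ClickHouse at the new ZooKeeper ensemble
    # (hostnames are placeholders for my actual headless services)
    cat > /etc/clickhouse-server/config.d/zookeeper.xml <<'EOF'
    <yandex>
      <zookeeper>
        <node index="1"><host>zk-new-0.zk-new-headless</host><port>2181</port></node>
        <node index="2"><host>zk-new-1.zk-new-headless</host><port>2181</port></node>
        <node index="3"><host>zk-new-2.zk-new-headless</host><port>2181</port></node>
      </zookeeper>
    </yandex>
    EOF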
What I tried to do was the following:
- I stopped all ClickHouse instances so the znodes would stop changing. Using zk-shell, I mirrored all znodes under /clickhouse from the old ZooKeeper cluster to the new one (it took some time, but it completed without problems; a sketch of the command is after this list).
- I restarted the ClickHouse instances one at a time, now pointed at the new ZooKeeper ensemble via the <zookeeper> config section shown above.
- Both ClickHouse instances started correctly, without any errors, but every time I (or anyone else) try to add rows to a table with an INSERT, ClickHouse logs something like the following:
    2021.01.13 13:03:36.454415 [ 135 ] {885576c1-832e-4ac6-82d8-45fbf33b7790} <Warning> default.check_in_availability: Tried to add obsolete part 202101_0_0_0 covered by 202101_0_1159_290 (state Committed)
and the new data is never inserted.
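This is roughly the zk-shell session I used for the copy in the first step (hostnames are again placeholders, and the exact command syntax may differ between zk-shell versions, so check the built-in help):

    # connect to one node of the old ensemble
    zk-shell zk-old-0.zk-old-headless:2181

    # inside the shell: mirror the whole /clickhouse subtree to the new ensemble
    # (syntax from memory -- see `help mirror` / `help cp` in your zk-shell version)
    (CONNECTED) /> mirror /clickhouse zk://zk-new-0.zk-new-headless:2181/clickhouse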
I've read all the documentation about Data Replication and insert deduplication, and I'm sure I'm inserting genuinely new data; all tables also include temporal fields (event_time, update_timestamp and so on), so the inserted blocks differ from earlier ones, yet it simply doesn't work.
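In case it helps, this is the kind of query I run to see which parts ClickHouse already has for the partition named in the warning (database, table and partition are taken from the log line above; as far as I understand, part names are partition_minBlock_maxBlock_level, so 202101_0_1159_290 covers block numbers 0 through 1159):

    clickhouse-client -q "
      SELECT name, min_block_number, max_block_number, active
      FROM system.parts
      WHERE database = 'default'
        AND table = 'check_in_availability'
        AND partition = '202101'
      ORDER BY min_block_number"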
If I attach ClickHouse back to the old ZooKeeper, the problem does not happen with the same data being inserted.
Is there something that needs to be done before changing the ZooKeeper endpoints? Am I missing something obvious?