
I have a cluster with 2 machines (CentOS 7 and Cassandra 3.4), 192.168.0.175 and 192.168.0.174. The seed is 192.168.0.175.

I simply want to change the cluster name. It should be a piece of cake.

Here is what I did on each node:

  • ran update system.local set cluster_name = 'America2' where key='local';

  • ran nodetool flush

  • updated cassandra.yaml with the new name

  • restarted Cassandra.

When I cqlsh into either node, it describes me as connected to the new cluster_name America2.

When I run nodetool describecluster, it shows the old cluster name America.
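Side by side, the mismatch looks like this:

```
cqlsh -e "SELECT cluster_name FROM system.local;"   # returns the new name, America2
nodetool describecluster                            # still reports the old name, America
```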

If I stop Cassandra on both machines and try to restart them, I find the good old error in the logs:

org.apache.cassandra.exceptions.ConfigurationException: Saved cluster name America != configured name America2

So... what am I doing wrong?!

Mr'Black
  • I think you might need to decommission the nodes, stop them, then change the cluster name in the yaml, then restart. – Whitefret Apr 20 '16 at 08:16
  • I think you were not far from the solution: http://stackoverflow.com/questions/22006887/cassandra-saved-cluster-name-test-cluster-configured-name – Whitefret Apr 20 '16 at 08:25
  • My feeling is that the `update` of system.local is not doing the job. From what I understand, when Cassandra fires up it checks both cassandra.yaml and system.local. If the name matches in both places, everything should come up without any problems. So, about cassandra.yaml I am sure... not sure if I made the change to system.local persistent. – Mr'Black Apr 20 '16 at 08:39
  • Did you stop Cassandra before making the change in the yaml? – Whitefret Apr 20 '16 at 08:40
  • For a one-node cluster, all the steps above work just fine. Testing more to figure out how to do it in a multi-node cluster. – Mr'Black Apr 20 '16 at 09:20
  • Did you restart the nodes separately? – Whitefret Apr 20 '16 at 09:28
  • After some tests: I took both down and brought both up, and also one down, one up and vice versa, and repaired after all of it (I don't really know if the repair step is necessary) because I figured they might miss some writes while one is down. – Mr'Black Apr 20 '16 at 10:50

3 Answers


Before changing the cluster name:

  1. Delete the node from the cluster ring:

    nodetool decommission

  2. Stop the node and change the cluster name in cassandra.yaml.

  3. Clean the node:

    sudo rm -rf /var/lib/cassandra/* /var/log/cassandra/*

  4. Start the Cassandra node.

You can find more information at academy.datastax.com.
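Put together, the four steps look roughly like this per node (a sketch only; the yaml path and service name assume a typical package install and may differ on your machines):

```
# Run on one node at a time.
nodetool decommission                         # 1. remove the node from the ring
sudo systemctl stop cassandra                 # 2. stop the node...
sudo vi /etc/cassandra/conf/cassandra.yaml    #    ...and set cluster_name to the new value
sudo rm -rf /var/lib/cassandra/* /var/log/cassandra/*   # 3. clean the node
sudo systemctl start cassandra                # 4. start the node with the new name
```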

Oleksandr Petrenko

OK guys, here is what I did:

cqlsh into each machine and run:

update system.local set cluster_name = 'canada' where key='local';

then:

$ nodetool flush -- system

Then I stopped the service on both machines.

Modified cassandra.yaml with the new cluster name canada.

Started the machines back up; they were working with the new cluster name.

It is possible to do those steps without stopping all the machines in the cluster, by taking them out one by one (I think a repair on each node might be necessary afterwards). Consider changing the seeds first.
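As a rough per-node sketch of the above (again assuming a package install; adjust the yaml path and service name to your setup):

```
# 1. Rewrite the saved cluster name, then flush the system keyspace to disk.
cqlsh -e "UPDATE system.local SET cluster_name = 'canada' WHERE key = 'local';"
nodetool flush -- system

# 2. Stop the service, set cluster_name: 'canada' in cassandra.yaml, start again.
sudo systemctl stop cassandra
sudo vi /etc/cassandra/conf/cassandra.yaml
sudo systemctl start cassandra
```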

Mr'Black

It's not really possible. I had the same problem and solved it in a really dirty way. I wrote a script that exported all my column family data; simply, a backup. Then I did this on each node: I stopped Cassandra and dropped all of Cassandra's data, caches, etc. (you can also reinstall Cassandra). Then I created a new cluster and imported my backup. I did this for each node.
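For the backup part, cqlsh's COPY command is one simple option; mykeyspace.mytable below is a placeholder for each of your tables:

```
# Export the table to CSV before wiping the node.
cqlsh -e "COPY mykeyspace.mytable TO '/tmp/mytable.csv' WITH HEADER = true;"

# ... stop Cassandra, clear its data directories (or reinstall), set up the new cluster ...

# Recreate the schema on the new cluster, then import the data.
cqlsh -e "COPY mykeyspace.mytable FROM '/tmp/mytable.csv' WITH HEADER = true;"
```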

Citrullin
  • OP uses flush; this means he doesn't care about losing data, no? – Whitefret Apr 20 '16 at 08:21
  • OK, not the same, my bad. But I saw this: http://stackoverflow.com/questions/22006887/cassandra-saved-cluster-name-test-cluster-configured-name – Whitefret Apr 20 '16 at 08:24