
I had a cluster with 2 nodes (node 1 and node 2).

After decommissioning node 2, I wanted to reuse the server as a fresh Cassandra database for other purposes, but as soon as I restart it, this message appears:

org.apache.cassandra.exceptions.ConfigurationException: This node was decommissioned and will not rejoin the ring unless cassandra.override_decommission=true has been set, or all existing data is removed and the node is bootstrapped again

So I removed all existing data.

But I don't want the node to be bootstrapped again (nor to rejoin the previous ring); I want it to be a fresh, clean Cassandra database.

The old node is not on the seed list.

Cassandra version: 3.9

EDIT: I think I was misunderstood, sorry for that. After the decommission I want to have:

  • Db1: node 1
  • Db2: node 2

Two different databases with no correlation, totally separated. That's because we want to reuse the machine where node 2 is hosted to deploy a Cassandra DB in another environment.

Shelen
    Make sure Cassandra is not running. Then delete the contents of `/var/lib/cassandra/data/*`, `/var/lib/cassandra/commitlog/*`, and `/var/lib/cassandra/saved_caches/*`. Then get a fresh copy of cassandra.yaml, cassandra-env.sh, and whatever other properties files you may have altered. Give your new node a different cluster_name. – LHWizard Oct 30 '17 at 20:50

3 Answers


Don't use override_decommission. That flag is only for rejoining the same cluster.

You should remove all data files on the node (Cassandra will recreate the system tables on start). Most importantly, you need to change the seed in cassandra.yaml. I suspect it is still the IP of node 1, so you need to change it to node 2 (itself).
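For illustration, a minimal cassandra.yaml sketch of the two settings in question. This is a hedged example: the cluster name and IP address are placeholders, and `SimpleSeedProvider` is the default seed provider shipped with Cassandra 3.x.

```yaml
# cassandra.yaml on the reused machine (node 2)
cluster_name: 'NewCluster'     # must differ from the old cluster's name
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # node 2's own IP (placeholder), no longer node 1's
      - seeds: "10.0.0.2"
```

With the node seeding itself and a fresh data directory, it bootstraps as a standalone single-node cluster rather than trying to contact the old ring.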

Simon Fontana Oscarsson
  • Hi, thanks! I was missing the "remove all data files" step, marked your answer as accepted. – Shelen Oct 31 '17 at 15:02

Use the option `cassandra.override_decommission=true`.

user6238251
  • Hi, thanks for the answer. After running Cassandra with the -D option I received this: `org.apache.cassandra.exceptions.ConfigurationException: Saved cluster name **Old_Name** != configured name **New_Name**`. I don't wish to rejoin the ring under any circumstance; what should I edit in order to stop Cassandra from trying? Thanks. – Shelen Oct 27 '17 at 11:03

Use that option, `cassandra.override_decommission=true`. Also, be aware of the definition of cluster_name in cassandra.yaml:

The name of the cluster. This setting prevents nodes in one logical cluster from joining another. All nodes in a cluster must have the same value.

So, to be sure, also use another value for cluster_name option in cassandra.yaml.

Try these steps:

  • run in cqlsh: `UPDATE system.local SET cluster_name = 'new_name' WHERE key = 'local';`
  • `nodetool flush` to persist the change
  • `nodetool decommission`
  • stop the node
  • change the name in cassandra.yaml
  • clean the node: `sudo rm -rf /var/lib/cassandra/* /var/log/cassandra/*` (though I would just move those files somewhere else until you get the state that you want)
  • start the node
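The "move aside rather than delete" cleanup from the steps above can be sketched as a script. This is a hedged sketch: `/var/lib/cassandra` and its `data`/`commitlog`/`saved_caches` layout are the package defaults, and `ROOT` defaults to a scratch directory with the same layout so the script can be dry-run safely; on a real node you would stop Cassandra first and set `ROOT=/var/lib/cassandra`.

```shell
# ROOT would normally be /var/lib/cassandra; by default a scratch directory
# with the same layout is created so the script can be exercised safely.
ROOT="${ROOT:-$(mktemp -d)/cassandra}"
mkdir -p "$ROOT/data" "$ROOT/commitlog" "$ROOT/saved_caches"

# Park the old node's state instead of rm -rf, so it stays recoverable
# until the new, separate single-node cluster is confirmed working.
BACKUP="${ROOT}.old"
mkdir -p "$BACKUP"
mv "$ROOT/data" "$ROOT/commitlog" "$ROOT/saved_caches" "$BACKUP/"

echo "old node state parked in $BACKUP"
```

Once the new cluster is verified, the parked directory can be deleted for good.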

Please check 1, 2

Horia
  • Hi, thanks for the answer. After running Cassandra with the -D option I received this: `org.apache.cassandra.exceptions.ConfigurationException: Saved cluster name **Old_Name** != configured name **New_Name**`. I don't wish to rejoin the ring under any circumstance; what should I edit in order to stop Cassandra from trying? PS: I edited the OP to clarify my intentions; I mention you @user6238251 to bring light to the subject. Thanks to both in advance. – Shelen Oct 27 '17 at 10:58