
I restarted my Cassandra cluster, and after the restart each node reports that the other nodes are unavailable. But when I log in to those servers, Cassandra is running on them. Your help is highly appreciated.

nodetool repair output:

Repair session {session-id} for range (id] failed with error java.io.IOException: Cannot proceed on repair because a neighbor (/{ip}) is dead: session failed

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load     Tokens  Owns   Host ID    Rack
UN  {ip1}    2.06 GB  256     22.6%  {token 1}  1b
DN  {ip2}    ?        256     24.5%  {token 2}  1c
DN  {ip3}    ?        256     28.9%  {token 3}  1c
DN  {ip4}    ?        256     24.0%  {token 4}  1d

1 Answer


One thing to note: you should always restart one node at a time and wait for it to rejoin the cluster (shown as UN in nodetool status) before restarting the others.

I am assuming all the nodes had previously joined the cluster and went out of sync after the restart. Do a rolling restart of all the nodes (one at a time) and wait for each node to rejoin the cluster before moving to the next.
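A rolling restart can be scripted along these lines. This is a minimal sketch, not a production script: the systemd unit name `cassandra`, the 4-node cluster size, and running it locally on each node in turn are all assumptions.

```python
import subprocess
import time

def count_up_normal(status_output: str) -> int:
    """Count nodes that `nodetool status` reports as Up/Normal
    (rows beginning with "UN")."""
    return sum(1 for line in status_output.splitlines()
               if line.startswith("UN"))

def wait_until_all_up(expected: int, poll_seconds: int = 10) -> None:
    """Poll `nodetool status` until `expected` nodes show UN."""
    while True:
        out = subprocess.run(["nodetool", "status"],
                             capture_output=True, text=True).stdout
        if count_up_normal(out) >= expected:
            return
        time.sleep(poll_seconds)

def restart_local_node() -> None:
    """Drain this node, restart the service, then block until the
    whole cluster is UN again before you move to the next node.
    The systemd unit name `cassandra` is an assumption."""
    subprocess.run(["nodetool", "drain"], check=True)
    subprocess.run(["sudo", "systemctl", "restart", "cassandra"], check=True)
    wait_until_all_up(expected=4)  # the cluster in the question has 4 nodes
```

Run `restart_local_node()` on one server, wait for it to return, then repeat on the next server.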

Cassandra stores cluster topology and peer information in the system.peers and system.local tables, and these can go out of sync if you restart a node while another node is still in the joining state.
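If you want to inspect what a node has recorded, you can query system.peers directly. A small sketch, assuming the DataStax Python driver (`pip install cassandra-driver`) and a reachable contact point; the helper that compares recorded peers against the members you expect is illustrative.

```python
def stale_entries(recorded_peers, live_members):
    """Peer addresses present in system.peers but not in the set of
    members you expect -- candidates for stale metadata after a
    restart went wrong. Purely illustrative helper."""
    return sorted(set(recorded_peers) - set(live_members))

def fetch_recorded_peers(contact_point: str = "127.0.0.1"):
    """Read the peer list one node has recorded in system.peers.
    Requires the DataStax driver and a live node, so only call this
    against a running cluster."""
    from cassandra.cluster import Cluster  # third-party import kept local
    cluster = Cluster([contact_point])
    session = cluster.connect()
    try:
        return [str(row.peer) for row in
                session.execute("SELECT peer FROM system.peers")]
    finally:
        cluster.shutdown()
```

Comparing `fetch_recorded_peers()` from each node against your actual node list shows whether a node's view of the cluster has drifted.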