
I have a SolrCloud cluster with 3 shards, 3 replicas, and a Zookeeper ensemble with 5 members.

The instance hosting replica 2 is scheduled for retirement; its root device is EBS-backed and it has an additional EBS volume attached. I'm assuming that on restart it will migrate to new hardware with new public and private IPs.

I'm also assuming I'll have to restart all the shards and replicas. What's the best way to do this so that the new instance takes the same slot as the old replica? Aren't the shard/replica roles assigned to each host on the very first SolrCloud startup, and aren't those assignments stored in Zookeeper?
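(For context, the assignments I'm referring to are what shows up under /clusterstate.json and /live_nodes when poking at the ensemble with Zookeeper's stock zkCli.sh; the host and port below are placeholders for one of my ensemble members.)

  # connect to any ensemble member with Zookeeper's own CLI
  zkCli.sh -server zk1.example.com:2181
  # shard -> replica -> node mapping for the collection
  get /clusterstate.json
  # currently registered SOLR nodes (these carry the host IPs/ports)
  ls /live_nodes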


1 Answer


As expected, replica2 restarted with new public and private IPs. I stopped Tomcat on all SOLR hosts and restarted them in the normal order:

shard1 shard2 shard3 replica1 replica2 replica3

This did not work: replica2 kept assigning itself to shard1 on repeated SolrCloud restarts. The shard and replica assignments are (as I thought) maintained in the binary files under the version-2 directory on every Zookeeper host. The following was successful (a rough shell sketch follows the list):

  1. stop Tomcat on all SOLR hosts
  2. stop all Zookeeper hosts
  3. delete the version-2 directory on all Zookeeper hosts
  4. start all Zookeeper hosts
  5. re-upload the SOLR conf directory using the CLI tool
  6. start all SOLR hosts in the above order
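
As a rough sketch of the same steps (service names, filesystem paths, Zookeeper hosts, and the config name below are placeholders from my setup; the location of Solr's zkcli.sh varies by install):

  # 1-2. stop the application layer, then the ensemble
  sudo service tomcat7 stop          # run on each SOLR host
  sudo service zookeeper stop        # run on each Zookeeper host

  # 3. wipe the stored state on each Zookeeper host (dataDir/version-2)
  sudo rm -rf /var/lib/zookeeper/version-2

  # 4. bring the ensemble back up (now empty)
  sudo service zookeeper start       # run on each Zookeeper host

  # 5. re-upload the collection config with Solr's zkcli.sh
  /opt/solr/example/scripts/cloud-scripts/zkcli.sh \
    -zkhost zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181 \
    -cmd upconfig -confdir /opt/solr/collection1/conf -confname myconf

  # 6. start Tomcat on the SOLR hosts in the order above
  sudo service tomcat7 start         # shard1, shard2, shard3, then replica1-3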

This produced the correct assignments.
