
I have Elasticsearch running as a single-node cluster.
One of the indices is yellow, with the allocation explanation below.
I have read all the material here and, in general, did not find a solution to this problem. Here is the index info:
health status index        uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   research-pdl 8_TrwZieRM6oBes8sGBUWg   1   1  416656058            0     77.9gb         77.9gb

The command POST _cluster/reroute?retry_failed does not seem to be doing anything.

The setup is running on Docker, and I have 650GB of free space.
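
The allocation explanation below is from the cluster allocation explain API, roughly this request (the index/shard/primary values match the output shown):

GET _cluster/allocation/explain
{
  "index": "research-pdl",
  "shard": 0,
  "primary": false
}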

{
  "index" : "research-pdl",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2020-12-16T05:21:19.977Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "5zzXP2kCQ9eDI0U6WY4j9Q",
      "node_name" : "37f65704d9bb",
      "transport_address" : "172.19.0.2:9300",
      "node_attributes" : {
        "ml.machine_memory" : "67555622912",
        "xpack.installed" : "true",
        "transform.node" : "true",
        "ml.max_open_jobs" : "20"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "a copy of this shard is already allocated to this node [[research-pdl][0], node[5zzXP2kCQ9eDI0U6WY4j9Q], [P], s[STARTED], a[id=J7IX30jBSP2jXl5-IGp0BQ]]"
        }
      ]
    }
  ]
}

Thanks

SexyMF

1 Answer


The exception message is very clear: for high-availability reasons, Elasticsearch never assigns a replica shard to the same node that already holds its primary.

a copy of this shard is already allocated to this node [[research-pdl][0], node[5zzXP2kCQ9eDI0U6WY4j9Q], [P], s[STARTED], a[id=J7IX30jBSP2jXl5-IGp0BQ]]

And as you have a single-node cluster, there is no other node where your replicas can be assigned.
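
You can confirm that only the replica is unassigned with the cat shards API (a quick check; the index name is taken from the question):

GET _cat/shards/research-pdl?v

The replica copy of shard 0 should show up as UNASSIGNED while the primary is STARTED.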

Solutions

  1. Add more nodes to your cluster, so that replicas can be assigned to other nodes. (preferred way)
  2. Reduce the replica count to 0. This increases the risk of data loss and can affect performance, so use it only if you cannot add data nodes and you want a green state for your cluster.

You can update the replica count using the update index settings API.
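
For example, dropping the replicas of the index from the question to 0 would look like this (a minimal sketch; adjust the index name as needed):

PUT /research-pdl/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}

Once the replica count is 0, the unassigned shard goes away and the index (and the cluster) should turn green.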

Amit
  • Thanks. Is it possible that I had other indexes and they suddenly disappeared because I am using a single-node database? I am facing an issue of total index loss. thanks – SexyMF Dec 16 '20 at 06:41
  • @SexyMF, your cluster state is yellow, which means only replica shards are missing and you don't have data loss in your cluster; read more about the health API here: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html – Amit Dec 16 '20 at 06:43
  • By adding another node, do you mean docker-machine on the same server, or a new virtual server with Elastic on it? thanks – SexyMF Dec 16 '20 at 06:43
  • new virtual server with elastic – Amit Dec 16 '20 at 06:44
  • You did not understand my first comment. Yesterday I had 2 more indices; suddenly they have disappeared. They are not in _cat/indices. Can it be that the fact that I'm using a single-node cluster caused them to disappear? thanks – SexyMF Dec 16 '20 at 06:49
  • @SexyMF, this is interesting. If they are not in _cat/indices, then maybe they were deleted and you do have data loss, but the ES cluster should normally turn RED. Best is to ask a separate question with the logs from that time to keep it focused :) – Amit Dec 16 '20 at 06:52
  • This is a Docker setup, I don't have logs yet... I don't know. – SexyMF Dec 16 '20 at 07:29
  • @SexyMF, did you mount the volume of the Docker container? If not, you can still go inside the container and see the logs – Amit Dec 16 '20 at 07:29
  • only this is bind-mounted: `/usr/share/elasticsearch/data` does it have logs? – SexyMF Dec 16 '20 at 07:34
  • https://stackoverflow.com/questions/65319202/elasticsearch-indexes-disappeared-unexpectedly thanks! – SexyMF Dec 16 '20 at 07:52
  • Reducing the replica count to 0 is not a solution. I have many data nodes in my Elasticsearch cluster and the cluster is very large. Yesterday a new index got created with 10 shards (5 primary and 5 replica) with a size of 2.06 TB, and the data node size is only 2 TB. What is the other solution? – Jayesh May 02 '23 at 12:44