
I have the following picture of my cluster (I am using Cerebro). It seems all the shards are on the 3rd node, and when data comes in I see load > 4 on the 1st node while the other nodes are fine.

Logstash -> LB -> ES-nodes (1,2,3). What i am doing wrong?

Thank you in advance.

[screenshot of the Cerebro cluster view]

petrolis
  • All your PRIMARY shards are on your first node, and you have replica shards on the other nodes. Nothing about that seems odd. Which node is your master node? That could have something to do with the load imbalance. – Chro Mar 28 '18 at 14:08
  • By the way, the number of shards and replicas is probably too high. It's dependent on your situation of course, but I'd probably set primary shards to 3 and replicas to 1 for each index. That will put 2 shards per index per node. – Chro Mar 28 '18 at 14:10
  • Chro, I get 9 million documents a day and I am not sure how many replicas I should use. Currently: "number_of_shards": 8, "number_of_replicas": 2. – petrolis Mar 28 '18 at 14:16
  • In elasticsearch.yml I did not configure which node should be master; I only set this: discovery.zen.ping.unicast.hosts: ["a..1", "a..2", "a..3"] – petrolis Mar 28 '18 at 14:19
  • My reply was getting long so I'm writing a proper answer. – Chro Mar 28 '18 at 15:14

1 Answer


The high load on that one particular node could be due to a couple of reasons. The ones that initially spring to mind:

  • If it is the Master Node, then the large number of shards could be having an adverse effect.
  • You could be sending numerous large read requests to that one particular node so it has to deal with all the aggregations. E.g. if you have Kibana connected to that node.
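Both possibilities are easy to check with the cat APIs. A quick sketch (Kibana Dev Tools console syntax; these are standard ES endpoints, though the exact column names available to `h=` vary slightly between versions):

```
GET _cat/master?v
# shows which node is currently the elected master

GET _cat/nodes?v&h=name,node.role,load_1m
# per-node roles and 1-minute load average, to see which node is hot
```

If the loaded node is also the master, that points to the first explanation; if it's the node your clients (e.g. Kibana) connect to, the second.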

Some general notes:

  • The shards with the solid box are the primary shards. The shards with the dotted box are replica shards. You currently have primaries = 8 and replicas = 2. This means there are 8 primary shards per index, and each of those has 2 replica shards. There is much more info about shards in the ES guide. It's for an old version of ES but is still valid.

  • The fact that all your primary shards are on the same node is a coincidence. This will often happen if you have one node start up before the others. All the primary shards will be allocated to it, then the replicas will go onto other nodes once they start up. If you take down your first node you should see the primaries move to other nodes.

  • To the left of the node name will be a star. The one with the filled-in star is the currently elected Master. Due to your number of shards, the master will have a large overhead, relatively speaking. This is because it has to manage so many shards. Try setting "number_of_shards": 3, "number_of_replicas": 1. Note that those numbers are only applied to new indexes, so recreate your indexes to see this take effect.

  • Your unicast settings are correct.
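As a sketch of the suggested change (console syntax; `logstash-2018.03.28` is a hypothetical index name — adjust it to your own naming pattern):

```
PUT logstash-2018.03.28
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}

GET _cat/shards/logstash-*?v
# verify that primaries (p) and replicas (r) are spread across the nodes
```

Note that `number_of_replicas` can be changed on an existing index via the update index settings API, but `number_of_shards` is fixed at index creation time, which is why the indexes need to be recreated for the new shard count to take effect.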
Chro