17

I open Kibana and run a search, and I get an error saying that shards failed. I looked in the elasticsearch.log file and saw this error:

org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [622775500/593.9mb]

Is there any way to increase that limit of 593.9mb?

lezzago
  • 271
  • 1
  • 6
  • 15
  • 2
    You can also see this error directly in Chrome by opening the Developer Tools, selecting the Network tab, then re-running the search query. This error message will be available in the `_msearch?` event's `Response` field. This helps if you do not have direct access to the server logs. – anothermh Sep 18 '15 at 23:31

6 Answers

26

You can try to increase the fielddata circuit breaker limit to 75% (default is 60%) in your elasticsearch.yml config file and restart your cluster:

indices.breaker.fielddata.limit: 75%

Or, if you prefer not to restart your cluster, you can change the setting dynamically using:

curl -XPUT localhost:9200/_cluster/settings -d '{
  "persistent" : {
    "indices.breaker.fielddata.limit" : "40%" 
  }
}'
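
For the Elasticsearch 5.x question raised in the comments below: the same dynamic update should still work, but 6.0 and later also require an explicit Content-Type header, so a hedged sketch that should work on recent versions is:

curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "persistent" : {
    "indices.breaker.fielddata.limit" : "75%"
  }
}'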

Give it a try.

Val
  • 207,596
  • 13
  • 358
  • 360
  • This immediately resolved the problem that I was having and will work until I can increase the available memory. – anothermh Sep 18 '15 at 23:31
  • How do you do this for elasticsearch 5.x? – Amar Apr 11 '17 at 17:31
  • I've tried this for _cluster and my cluster name but when I do a get on _cluster/settings it doesn't seem to have applied. – Tomos Williams Feb 07 '18 at 14:31
  • 1
    @TomosWilliams how do you infer that? – Val Feb 07 '18 at 14:37
  • @Val performing a GET on _cluster/settings returns `{"persistent":{},"transient":{}}`; my assumption would be that if this had applied correctly, `"indices.breaker.fielddata.limit": "40%"` would appear in this output. I could be completely wrong though – Tomos Williams Feb 07 '18 at 15:59
  • 1
    @TomosWilliams Yes indeed, it should return you a non-empty response. Where is your cluster deployed? Locally or somewhere else? Which version are you running? – Val Feb 07 '18 at 16:02
  • @Val it's deployed remotely however I've got access to each box running elastic, I've tried adding the values to elastic.yml but no luck there, I'm on version 1.2.1. Thank you for your help. – Tomos Williams Feb 07 '18 at 16:10
  • Oh Geez, 1.2.1 is so oooooold, it is not even in the official documentation anymore. Any way to upgrade that to something more recent? – Val Feb 07 '18 at 16:17
  • @val That's the plan eventually but to upgrade we'd need to make a new cluster, reindex all the existing data and upgrade the queries so that everything is still functional, then change over and once again index to make sure everything is up to date – Tomos Williams Feb 07 '18 at 16:21
  • As far as I can remember, there was no field data circuit breaker in 1.2.1 (only as of 2.0), so that's why the command does nothing for you – Val Feb 07 '18 at 16:24
3

I met this problem, too. So I checked the fielddata memory.

Use the request below:

GET /_stats/fielddata?fields=*

The output shows:

"logstash-2016.04.02": {
  "primaries": {
    "fielddata": {
      "memory_size_in_bytes": 53009116,
      "evictions": 0,
      "fields": {

      }
    }
  },
  "total": {
    "fielddata": {
      "memory_size_in_bytes": 53009116,
      "evictions": 0,
      "fields": {

      }
    }
  }
},
"logstash-2016.04.29": {
  "primaries": {
    "fielddata": {
      "memory_size_in_bytes":0,
      "evictions": 0,
      "fields": {

      }
    }
  },
  "total": {
    "fielddata": {
      "memory_size_in_bytes":0,
      "evictions": 0,
      "fields": {

      }
    }
  }
},

You can see that my indices are named by date and that evictions are all 0. In addition, the 2016.04.02 index holds 53009116 bytes of fielddata memory while 2016.04.29 holds 0.

So I can conclude that the old data has occupied all of the fielddata memory, leaving none for the new data; when I then run an aggregation query on the new data, it raises the CircuitBreakingException.

You can set this in config/elasticsearch.yml:

indices.fielddata.cache.size:  20%

This lets ES evict old fielddata entries when the cache reaches the memory limit.

But the real solution may be to add more memory in the future; monitoring fielddata memory usage is also a good habit.
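
For the monitoring part, two quick checks I find useful (a sketch, assuming the default port; both endpoints exist on reasonably recent releases):

# per-node, per-field fielddata memory
curl -XGET 'localhost:9200/_cat/fielddata?v&fields=*'

# circuit breaker status (limit, estimated size, trip count) per node
curl -XGET 'localhost:9200/_nodes/stats/breaker?pretty'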

More detail: https://www.elastic.co/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html

Sirko
  • 72,589
  • 19
  • 149
  • 183
shangliuyan
  • 124
  • 1
  • 2
2

An alternative solution for the CircuitBreakingException: [FIELDDATA] Data too large error is to clean up the old/unused fielddata cache.

I found out that the fielddata limit is shared across indices, so clearing the cache of an unused index/field can solve the problem.

curl -X POST "localhost:9200/MY_INDICE/_cache/clear?fields=foo,bar"

For more info, see https://www.elastic.co/guide/en/elasticsearch/reference/7.x/indices-clearcache.html

eyllanesc
  • 235,170
  • 19
  • 170
  • 241
giltsl
  • 1,371
  • 11
  • 16
1

I think it is important to understand why this is happening in the first place.

In my case, I had this error because I was running aggregations on "analyzed" fields. If you really need your string field to be analyzed, you should consider using multi-fields: keep it analyzed for searches and add a not_analyzed sub-field for aggregations.
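
A minimal sketch of such a mapping, using the pre-5.x string syntax this answer refers to (my_index, my_type and my_field are made-up names):

curl -XPUT 'localhost:9200/my_index/_mapping/my_type' -d '{
  "properties": {
    "my_field": {
      "type": "string",
      "fields": {
        "raw": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'

You would then keep searching on my_field but point your aggregations at my_field.raw.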

elachell
  • 2,527
  • 1
  • 26
  • 25
1

I ran into this issue the other day. In addition to checking the fielddata memory, I'd also check the JVM and OS memory. In my case, the admin had forgotten to modify ES_HEAP_SIZE and left it at 1 GB.
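
A minimal sketch of that fix for the 1.x/2.x ES_HEAP_SIZE variable (4g is just a placeholder; the usual guidance is roughly half the machine's RAM, and not more than about 32 GB):

export ES_HEAP_SIZE=4g
./bin/elasticsearch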

Glenak1911
  • 235
  • 1
  • 8
0

Just use:

ES_JAVA_OPTS="-Xms10g -Xmx10g" ./bin/elasticsearch

Since the default heap is 1 GB, you should set it higher if your data is big.

eyllanesc
  • 235,170
  • 19
  • 170
  • 241