
I am trying to verify the root cause for this error in our environment.

ES version: elasticsearch-1.7.1

RemoteTransportException[host.com][inet[/internalIP:9300]][indices:data/read/search[phase/query]]];
nested: QueryPhaseExecutionException[[logstash-v2-2016.01.30][3]: query[filtered(+firstname:Steve +lastname:Harvey)->BooleanFilter(+cache(event_time_utc:[1423086205000 TO 1454622205000]))],from[0],size[5],sort[<custom:\"event_time_utc\": org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource@24f4898c>!]: Query Failed [Failed to execute main query]];
nested: ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [event_time_utc] would be larger than limit of [17997024460/16.7gb]];
nested: UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [event_time_utc] would be larger than limit of [17997024460/16.7gb]];
nested: CircuitBreakingException[[FIELDDATA] Data too large, data for [event_time_utc] would be larger than limit of [17997024460/16.7gb]];

You can see above that the fielddata for the field event_time_utc is too large. I am executing a curl command to investigate the issue, but the data it returns isn't helping.

Curl Command

 curl -k -XGET 'https://USERNAME:PASSWORD@host.com:9200/logstash-v2-2016.01.30/_stats/fielddata/?fields=event_time_utc&pretty'
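
For reference, the stats above are per index, while the breaker limit in the error is enforced per node, so the node stats API may be the more relevant thing to query. A rough sketch (assuming the `breaker` and `indices` metrics exposed by the 1.x node stats API, using the same host and credentials as above):

 # per-node circuit breaker stats (estimated size, limit, tripped count)
 curl -k -XGET 'https://USERNAME:PASSWORD@host.com:9200/_nodes/stats/breaker?pretty'
 # per-node fielddata memory for the field named in the error
 curl -k -XGET 'https://USERNAME:PASSWORD@host.com:9200/_nodes/stats/indices/fielddata?fields=event_time_utc&pretty'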

My Questions are:

  1. What stat would help me figure out when this breaker is about to be tripped?
  2. How am I to determine which fields are going to trip this breaker?
  3. How are others monitoring this threshold?

Output

{
  "_shards" : {
    "total" : 10,
    "successful" : 10,
    "failed" : 0
  },
  "_all" : {
    "primaries" : {
      "fielddata" : {
        "memory_size_in_bytes" : 80336,
        "evictions" : 0,
        "fields" : {
          "event_time_utc" : {
            "memory_size_in_bytes" : 40168
          }
        }
      }
    },
    "total" : {
      "fielddata" : {
        "memory_size_in_bytes" : 92592,
        "evictions" : 0,
        "fields" : {
          "event_time_utc" : {
            "memory_size_in_bytes" : 46296
          }
        }
      }
    }
  },
  "indices" : {
    "badge_v2-2016.01.30" : {
      "primaries" : {
        "fielddata" : {
          "memory_size_in_bytes" : 80336,
          "evictions" : 0,
          "fields" : {
            "event_time_utc" : {
              "memory_size_in_bytes" : 40168
            }
          }
        }
      },
      "total" : {
        "fielddata" : {
          "memory_size_in_bytes" : 92592,
          "evictions" : 0,
          "fields" : {
            "event_time_utc" : {
              "memory_size_in_bytes" : 46296
            }
          }
        }
      }
    }
  }
}
    This answer might help: http://stackoverflow.com/questions/30811046/fielddata-data-is-too-large/30814856#30814856 – Val Feb 04 '16 at 21:59
    Val, Thank you very much. I know how to fix it now. But how are people monitoring this? How close am I to my capacity? – winn j Feb 05 '16 at 00:53
