
I'm new to Elasticsearch and started working with Elasticsearch 1.7.3 as part of a Logstash-Elasticsearch-Kibana deployment.

I've defined a mapping template for my log messages; this is the relevant part:

{   
  "template" : "logstash-*",
  "settings" : { "index.refresh_interval" : "5s" },
  "mappings" : {
    "_default_" : {
      "_all" : {"enabled" : true, "omit_norms" : true},
      "dynamic_templates" : [ {
        "date_fields" : {
          "match" : "*",
          "match_mapping_type" : "date",
          "mapping" : { "type" : "date", "doc_values" : true }
        }
      }],
      "properties" : {
        "@version" : { "type" : "string", "index" : "not_analyzed" },
        "@timestamp" : { "type" : "date", "format" : "dateOptionalTime" },
        "message" : { "type" : "string" }
      }
    },
    "my_log" : {
      "_all" : { "enabled" : true, "omit_norms" : true },
      "dynamic_templates" : [ {
        "date_fields" : {
          "match" : "*",
          "match_mapping_type" : "date",
          "mapping" : { "type" : "date", "doc_values" : true }
        }
      }],
      "properties" : {
        "@timestamp" : { "type" : "date", "format" : "dateOptionalTime" },
        "file" : { "type" : "string" },
        "message" : { "type" : "string" }
        "geolocation" : { "type" : "string" },
      }
    }
  }
}
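
For completeness, this is how I register the template (a minimal sketch; the node address, the file name logstash-template.json and the template name are assumptions from my setup):

# Save the template above as logstash-template.json, then register it
# (assuming a node listening on localhost:9200):
curl -XPUT 'http://localhost:9200/_template/logstash' -d @logstash-template.json

# Verify that Elasticsearch stored it:
curl -XGET 'http://localhost:9200/_template/logstash?pretty'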

Although the @timestamp field is defined with "doc_values" : true, I hit the fielddata circuit breaker, meaning the field is still being loaded into memory as fielddata:

[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [633785548/604.4 mb]

NOTE:

I know I can increase the memory or add more nodes to the cluster, but from my point of view this is a design problem: this field should not be loaded into memory as fielddata at all.
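
This is how I checked both claims (a diagnostic sketch; the index name logstash-2015.11.10 and the node address are examples from my setup):

# Check the live @timestamp mapping on a concrete index
# (substitute one of your own logstash-* indices):
curl -XGET 'http://localhost:9200/logstash-2015.11.10/_mapping/field/@timestamp?pretty'

# See how much fielddata memory @timestamp occupies on each node:
curl -XGET 'http://localhost:9200/_nodes/stats/indices/fielddata?fields=@timestamp&pretty'

Note that an index created before the template was registered may still show @timestamp without doc_values, since (as far as I understand) templates are only applied at index creation time.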

Eyal H
  • Can you check this out and see if it helps: http://stackoverflow.com/questions/30811046/fielddata-data-is-too-large/30814856#30814856 – Val Nov 10 '15 at 15:03
  • I saw this; it is not a solution for me, as that limit would be exceeded just the same – Eyal H Nov 10 '15 at 15:09
  • Have you tried it out? The point is that by default, there is no limit – Val Nov 10 '15 at 15:18
  • @Val, you wrote yourself that the default is 60%, so what am I missing? I have tried it, but it didn't work; the limit is still reached – Eyal H Nov 10 '15 at 15:26
  • Oh I'm so sorry, for whatever reason I had `indices.fielddata.cache.size` in mind, which is unbounded by default. You can play with that value instead. – Val Nov 10 '15 at 15:28
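
For reference, the `indices.fielddata.cache.size` setting mentioned in the last comment is a static setting in elasticsearch.yml (e.g. indices.fielddata.cache.size: 40%, where 40% is only a placeholder), while the breaker limit from the error above can be changed dynamically (again, 75% is a placeholder, not a recommendation):

# Raise the fielddata circuit breaker limit (60% of heap by default)
# through the cluster settings API; 75% here is just a placeholder:
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent" : { "indices.breaker.fielddata.limit" : "75%" }
}'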

0 Answers