
I have connected Kibana to my ES instance.

`_cat/indices` returns:

yellow open .kibana 1 1      1 0 3.1kb 3.1kb 
yellow open tests   5 1 413042 0 3.4gb 3.4gb 

However I get the following on the kibana configuration screen. What am I missing?

[Kibana screenshot]

Update:

[Screenshot: the Time-field name dropdown is empty]

My sample document looks like this:

    "_index": "tests",
    "_type": "test7",
    "_id": "AVGlIKIM1CQ8BZRgLZVg",
    "_score": 1.7840601,
    "_source": {
       "severity": "ERROR",
       "code": "CODE,
       "message": "MESSAGE",
       "environment": "TEST",
       "error_uuid": "cbe99080-0bf3-495c-a417-77384ba0fd39",
       "correlation_id": "cf5a1fd5-4fd2-40bb-9cdf-405b91dcbd6f",
       "timestamp": "2015-11-20 15:24:39.831"
– freefall

1 Answer


Disable the option "Use event times to create index names" and put the index name (tests) instead of a pattern.

The option you are trying to use is meant for index names based on a timestamp (imagine creating a new index per day: tests-2015.12.01, tests-2015.12.02, ...). This is quite clear if you read the message shown when you enable that option:

Patterns allow you to define dynamic index names. Static text in an index name is denoted using brackets. Example: [logstash-]YYYY.MM.DD. Please note that weeks are setup to use ISO weeks which start on Monday

EDIT: The dropdown for the time-field name is empty because you don't have any field of type date in the mapping of your index. You can check this with GET /<index-name>/_mapping?pretty: the timestamp field is mapped as "string" and not "date". This happens because its format does not match the default date-detection formats (yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z). To solve this:

  • You can change the format of the timestamp you are inserting to match one of the default date-detection formats.
  • You can modify the dynamic_date_formats property and put a format that matches the current format of your timestamp.
  • You can set an index template that maps the "timestamp" field explicitly as type "date" (see the sketch below).

In any of these cases, you will need to delete the index and create a new one, or reindex the data.
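For example, here is a rough sketch of the third approach applied directly to the index from the question (tests, type test7). The date format yyyy-MM-dd HH:mm:ss.SSS is only an assumption based on the sample document; adjust it to whatever you actually insert:

    # Check the current mapping: "timestamp" will show up as "string"
    curl -XGET 'localhost:9200/tests/_mapping?pretty'

    # Recreate the index with an explicit "date" mapping for "timestamp"
    # (the format is assumed from the sample document), then reindex the data
    curl -XDELETE 'localhost:9200/tests'
    curl -XPUT 'localhost:9200/tests' -d '{
      "mappings": {
        "test7": {
          "properties": {
            "timestamp": {
              "type": "date",
              "format": "yyyy-MM-dd HH:mm:ss.SSS"
            }
          }
        }
      }
    }'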

– Pigueiras
  • When I do that I am supposed to pick a Time-field name, but the dropdown is empty. I've uploaded another image. – freefall Dec 16 '15 at 12:30
  • @freefall The problem is a different one then, I updated my answer. – Pigueiras Dec 16 '15 at 12:49
  • It worked. Thanks a lot! One more question: some of my data is written in word1-word2-word3 form. Does Elasticsearch by default interpret - as a "splitter" for string fields? I see that some of my values have been categorized into multiple "buckets". – freefall Dec 16 '15 at 14:31
  • If you do `curl -XGET localhost:9200/_analyze?text="word1-word2-word3"` you can see how ES processes your data. By default, it uses the [standard tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/2.1/analysis-standard-tokenizer.html), which will split anything separated by dashes into separate tokens. @freefall – Pigueiras Dec 16 '15 at 14:42
  • I am not sure if I should ask another question, hence I am writing here. Can I configure the standard tokenizer to ignore dashes? A little bit of googling shows that to employ another tokenizer I have to change the analyzer. This seems to be overkill for the purpose. – freefall Dec 17 '15 at 10:01
  • @freefall AFAIK no, but changing the analyzer is not overkill in my opinion; it's something that needs to be done depending on your needs (for example, I use a custom analyzer with the whitespace tokenizer). – Pigueiras Dec 17 '15 at 10:08
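For reference, a minimal sketch of that last suggestion (a custom analyzer using the whitespace tokenizer). The analyzer name whitespace_only and the choice of the message field are illustrative assumptions, not taken from the question:

    # Compare how the two tokenizers split the value
    curl -XGET 'localhost:9200/_analyze?tokenizer=standard&text=word1-word2-word3'
    curl -XGET 'localhost:9200/_analyze?tokenizer=whitespace&text=word1-word2-word3'

    # Recreate the index with a custom analyzer that only splits on whitespace
    curl -XPUT 'localhost:9200/tests' -d '{
      "settings": {
        "analysis": {
          "analyzer": {
            "whitespace_only": {
              "type": "custom",
              "tokenizer": "whitespace"
            }
          }
        }
      },
      "mappings": {
        "test7": {
          "properties": {
            "message": {
              "type": "string",
              "analyzer": "whitespace_only"
            }
          }
        }
      }
    }'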