
I have just installed Kibana 7.3 on RHEL 8. The Kibana service is active (running).
I receive a "Kibana server is not ready yet" message when I curl http://localhost:5601. My Elasticsearch instance is on another server and it responds successfully to my requests. I have updated kibana.yml with:

elasticsearch.hosts: ["http://EXTERNAL-IP-ADDRESS-OF-ES:9200"]

I can reach Elasticsearch from the internet and get this response:

{
  "name" : "ip-172-31-21-240.ec2.internal",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "y4UjlddiQimGRh29TVZoeA",
  "version" : {
    "number" : "7.3.1",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "4749ba6",
    "build_date" : "2019-08-19T20:19:25.651794Z",
    "build_snapshot" : false,
    "lucene_version" : "8.1.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

The result of sudo systemctl status kibana:

● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-09-19 12:22:34 UTC; 24min ago
 Main PID: 4912 (node)
    Tasks: 21 (limit: 4998)
   Memory: 368.8M
   CGroup: /system.slice/kibana.service
           └─4912 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size>

Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:44 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0

The result of sudo journalctl --unit kibana:

Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive >
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect>
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","task_manager"],"pid":1356,"message":"PollError No Living connec>
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive >
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect>

Do you have any idea where the problem is?

MoonHorse
  • Can you show the kibana log file in `/var/log/kibana`? – Val Sep 19 '19 at 12:30
  • I had checked it, but there is no Kibana log; no file, nothing. – MoonHorse Sep 19 '19 at 12:39
  • How did you start the kibana service and what output did you see at that time? – Val Sep 19 '19 at 12:42
  • Updated the question with it. I have just enabled and started the Kibana service. – MoonHorse Sep 19 '19 at 12:50
  • Can you check the `/etc/systemd/system/kibana.service` file and remove `--quiet` on the `ExecStart` command line? Then restart your service and run `sudo journalctl --unit kibana` to see what the kibana service logs. – Val Sep 19 '19 at 12:52
  • there isn't --quiet on that line. ExecStart=/usr/share/kibana/bin/kibana "-c /etc/kibana/kibana.yml" – MoonHorse Sep 19 '19 at 12:57
  • Then you should be able to run `sudo journalctl --unit kibana` directly. What do you see? – Val Sep 19 '19 at 12:59
  • Updated the question with it. – MoonHorse Sep 19 '19 at 13:03
  • can you curl ES from the Kibana host? – Val Sep 19 '19 at 13:05
  • Yes, I can curl; it is successful. – MoonHorse Sep 19 '19 at 13:13
  • I'd like to see the first messages of the kibana service when it started. can you run `sudo journalctl --unit kibana --since "2019-09-19 12:00:00"` (and adjust the time when you started the service)? – Val Sep 19 '19 at 13:16
  • I was facing the same problem. I uninstalled Kibana and downloaded the version compatible with Elasticsearch, uncommented #http.port: 9200 in elasticsearch.yml and restarted Elasticsearch, configured the same port in kibana.yml, and restarted Kibana; it worked after that. – daemonThread May 05 '20 at 07:47
  • For me it was changing the setting elasticsearch.ssl.verificationMode to none. – Chris Parker Jun 07 '23 at 17:42

17 Answers

27

I faced the same issue once when I upgraded Elasticsearch from v6 to v7.

Deleting .kibana* indexes fixed the problem:

curl --request DELETE 'http://elastic-search-host:9200/.kibana*'
karthikdivi
  • One more thing I had to do was to add a new node, because for whatever reason Kibana was not getting restarted... – Daniel Hajduk Apr 08 '20 at 11:38
  • Where are these .kibana* indexes located? – Shabari nath k Jun 21 '20 at 15:19
  • @Anti_cse51 delete it from Kibana's 'Dev Tools' – karthikdivi Jun 22 '20 at 09:31
  • This did not work for me. More importantly, this answer is not descriptive enough. Here is an example command I used; notice that you need to include the Elasticsearch port as well: `curl -X DELETE "http://10.10.10.20:9200/.kibana*"` – Dave Sep 01 '21 at 17:22
  • It worked great for me. I just needed to replace https with http. – jefferson.macedo Oct 31 '21 at 04:30
  • What exactly does deleting `.kibana*` do? What's lost? – duct_tape_coder Dec 28 '21 at 16:04
  • There seems to be some valuable user data in `.kibana`: https://www.elastic.co/blog/kibana-under-the-hood-object-persistence Is there another way to resolve this without losing all of that? – duct_tape_coder Dec 28 '21 at 17:15
  • Thank you! This did the trick. We saw this statement in our log: ```{"type":"log","@timestamp":"2023-01-03T14:16:23Z","tags":["warning","savedobjects-service"],"pid":21480,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_1 and restarting Kibana."} ``` – asgs Jan 03 '23 at 14:32
  • Setting up Elastic locally is like going through 7 layers of hell. – jodoro Jan 05 '23 at 22:48
19

The error might be related to the elasticsearch.hosts setting. The following steps worked for me:

  1. Open the /etc/elasticsearch/elasticsearch.yml file and check this setting:

#network.host: localhost

  2. Open the /etc/kibana/kibana.yml file and check this setting:

#elasticsearch.hosts: ["http://localhost:9200"]

  3. Check whether both lines point to the same host. If you are using an IP address for the Elasticsearch network host, you need to use the same one for Kibana.

The issue was that Kibana was unable to access Elasticsearch locally.
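For instance, if Elasticsearch is bound to a specific address (the IP below is a placeholder), the two files should agree along these lines:

```yaml
# /etc/elasticsearch/elasticsearch.yml  (placeholder address)
network.host: 172.31.21.240
http.port: 9200

# /etc/kibana/kibana.yml -- must point at the same host and port
elasticsearch.hosts: ["http://172.31.21.240:9200"]
```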

user8832381
  • This helped me figure out my problem. In my case, I don't have direct access to edit those files and the value [on the Kibana side] must be set when the Docker container spins up. – azarc3 Dec 14 '20 at 16:07
  • Was a typo on my part, had the wrong address in /etc/kibana/kibana.yml file for elasticsearch's ip. – Dave Sep 01 '21 at 17:24
  • This helped me after I changed 9200 to be a master-only node and used 9201 as a data node on v7.13. Kibana needs a node with the correct role to function, master wasn't enough. – arberg Sep 06 '21 at 16:41
  • there is a comment on top of that line saying: # By default Elasticsearch is only accessible on localhost. Set a different # address here to expose this node on the network: If localhost is the default one why I need to change it? – jodoro Jan 05 '23 at 22:51
9

Probably not the solution for this question.

In my case the versions of Kibana and Elasticsearch were not compatible. Since I was using Docker, I just recreated both containers using the same version (7.5.1):

https://www.elastic.co/support/matrix#matrix_compatibility

  • Actually, this was the solution for my case today. The error message from `sudo journalctl --unit kibana | tail -1` was `... This version of Kibana (v7.6.1) is incompatible with the following Elasticsearch nodes in your cluster: v6.8.1 @ :9200 ()` – mgaert Apr 20 '20 at 16:57
9

The issue was that Kibana was unable to access Elasticsearch locally. I think you have enabled the xpack.security plugin in elasticsearch.yml by adding a new line:

xpack.security.enabled: true

If so, you need to uncomment these two lines in kibana.yml:

elasticsearch.username: "kibana"
elasticsearch.password: "your-password"

After that, save the changes and restart the Kibana service: sudo systemctl restart kibana.service

5

Execute this:

curl -XDELETE http://localhost:9200/*kibana*

and then restart the Kibana service:

service kibana restart
dılo sürücü
  • Previously I solved this problem by increasing the memory size in the ".wslconfig" file, but this time that method didn't work, so I used your answer and it solved my problem. Thanks, my friend. – Mohsen Saniee May 16 '21 at 05:21
4

In my case, the changes below fixed the problem:

/etc/elasticsearch/elasticsearch.yml

uncomment:

#network.host: localhost

And in

/etc/kibana/kibana.yml

uncomment

#elasticsearch.hosts: ["http://localhost:9200"]

vsharma
  • My issue as well. However the problem for me was I needed to expose elastic for filebeat, but never updated kibana.yml to use the external address. Elastic was still trying to bind to localhost when it needed to be configured to the new address in this file. – Dave Aug 31 '21 at 17:08
3

There can be multiple reasons for this. A few things to try:

  • verify that the Kibana and Elasticsearch versions are compatible according to https://www.elastic.co/support/matrix#matrix_compatibility
  • verify that Kibana is not trying to load plugins that are not installed on the master node
  • delete the .kibana* indices as Karthik pointed out above

If those don't work, turn on verbose logging in kibana.yml and restart Kibana to get more insight into the cause.
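For the verbose-logging step, here is a minimal sketch of the legacy (pre-8.x) settings in kibana.yml; verify the exact keys against your Kibana version's documentation:

```yaml
# /etc/kibana/kibana.yml -- legacy 7.x logging settings (sketch)
logging.verbose: true                      # log everything, including debug events
logging.dest: /var/log/kibana/kibana.log   # write to a file instead of stdout
```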

avp
  • how to delete .kibana* indices? – Shabari nath k Jun 21 '20 at 15:23
  • @Anti_cse51 I googled "how to delete .kibana* indices" and found this https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html, it's free and very easy to do a google search. – avp Jun 22 '20 at 07:04
3

Refer to the discussion on Kibana unable to connect to elasticsearch on windows.

Deleting the .kibana_task_manager_1 index on Elasticsearch solved the issue for me!

2

For me, the root cause was that I didn't have enough disk space. The Kibana logs had this error:

Action failed with '[index_not_green_timeout] Timeout waiting for the status of the [.kibana_task_manager_8.5.1_001] index to become 'green' Refer to https://www.elastic.co/guide/en/kibana/8.5/resolve-migrations-failures.html#_repeated_time_out_requests_that_eventually_fail for information on how to resolve the issue.

I went to the link mentioned in the error, https://www.elastic.co/guide/en/kibana/8.5/resolve-migrations-failures.html#_repeated_time_out_requests_that_eventually_fail,

and ran the following request: https://localhost:9200/_cluster/allocation/explain

The response contained this:

      "deciders": [
        {
          "decider": "disk_threshold",
          "decision": "NO",
          "explanation": "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], having less than the minimum required [21.1gb] free space, actual free: [17.1gb], actual used: [87.8%]"
        }
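The better fix is to free disk space, but the watermark thresholds themselves can also be raised; here is a sketch with example values in elasticsearch.yml, adjust them to your disks:

```yaml
# /etc/elasticsearch/elasticsearch.yml -- example values, not a recommendation
cluster.routing.allocation.disk.watermark.low: "90%"
cluster.routing.allocation.disk.watermark.high: "95%"
cluster.routing.allocation.disk.watermark.flood_stage: "97%"
```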
Mustapha-Belkacim
1

The reason may be this (for Linux Docker hosts only): by default the virtual memory limit is not high enough, so run the following command as root:

sysctl -w vm.max_map_count=262144

To keep it in effect even after VM reloads, please check this comment: https://stackoverflow.com/a/50371108/1151741
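To make the setting survive reboots, the usual approach is a sysctl drop-in file (the file name below is a common convention, not mandatory):

```
# /etc/sysctl.d/99-elasticsearch.conf
vm.max_map_count=262144
```

Then apply it with sysctl --system (or reboot).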

Nigrimmist
0

To overcome this incident, I deleted and recreated both servers. I installed ES and Kibana 7.4, and I also increased the VM size of the ES server from t1.micro to t2.small. All worked well. The previous ES instance sometimes stopped by itself; the VM RAM was 1 GB, so I had to limit the JVM heap size, and maybe that's the reason the whole problem occurred.

MoonHorse
  • I don't think t2.small has enough capacity to run the ELK stack nowadays. Did you alter any of its specifications? You cannot change the RAM for a given instance type afaik, but you can change the disk volume. – Sandun Jan 04 '21 at 00:13
0

My scenario ended up with the same issue, but it resulted from using the official Docker containers for both Elasticsearch and Kibana. In particular, the documentation for the Kibana image incorrectly assumes you already have one piece of critical knowledge.

In my case, the solution was to be sure that:

  • The network tags matched
  • The link to the Elasticsearch Docker container uses the :elasticsearch tag, not the version tag.

I had made the mistake of using the Elasticsearch container version tag. Here is the corrected format of the docker run command I needed:

docker run -d --name {Kibana container name to set} --net {network name known to Elasticsearch container} --link {name of Elasticsearch container}:elasticsearch -p 5601:5601 kibana:7.10.1

Considering the command above, if we substitute...

  • lookyHere as the Kibana container name
  • myNet as the network name
  • myPersistence as the Elasticsearch container name

Then we get the following:

docker run -d --name lookyHere --net myNet --link myPersistence:elasticsearch -p 5601:5601 kibana:7.10.1

That :elasticsearch right there is critical to getting this working, as it sets the elasticsearch.hosts value in the /etc/kibana/kibana.yml file... which you will not be able to easily modify if you are using the official Docker images. @user8832381's answer above gave me the direction I needed to figure this out.

Hopefully, this will save someone a few hours.
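The same wiring can also be expressed with Docker Compose, where the service name doubles as the hostname Kibana connects to. A sketch with placeholder names, single node, security left off for local testing:

```yaml
# docker-compose.yml (sketch; names and versions are examples)
version: "3"
services:
  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - discovery.type=single-node   # skip cluster bootstrapping for local use
    ports:
      - "9200:9200"
  kibana:
    image: kibana:7.10.1
    environment:
      # the service name "elasticsearch" resolves on the Compose network
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```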

azarc3
0

One possible issue is that you are running a Kibana version that is not compatible with Elasticsearch.

Check the bottom of the log file using sudo tail /var/log/kibana/kibana.log

I am using Ubuntu, and I can see the message below in the log file:

{"type":"log","@timestamp":"2021-11-02T15:46:07+04:00","tags":["error","savedobjects-service"],"pid":3801445,"message":"This version of Kibana (v7.15.1) is incompatible with the following Elasticsearch nodes in your cluster: v7.9.3 @ localhost/127.0.0.1:9200 (127.0.0.1)"}
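As a quick sanity check, you can pull the version number out of the Elasticsearch root response without jq. This sketch parses a captured sample with grep and cut; in practice, set es_response from `curl -s http://localhost:9200` instead:

```shell
# Captured sample of the JSON the Elasticsearch root endpoint returns;
# in practice: es_response=$(curl -s http://localhost:9200)
es_response='{"name":"node-1","version":{"number":"7.9.3"}}'

# Match the "number" key and take the quoted value (4th "-delimited field)
es_version=$(echo "$es_response" | grep -o '"number" *: *"[^"]*"' | head -1 | cut -d'"' -f4)
echo "Elasticsearch version: $es_version"
```

Compare the result against the installed Kibana package version before deciding which Kibana to install.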

Now you need to install the same version of Kibana as Elasticsearch. For example, on my system Elasticsearch 7.9.3 was installed but Kibana 7.15.1 was installed.

How did I resolve this?

  1. Removed kibana using sudo apt-get remove kibana
  2. Installed kibana 7.9.3 using below commands:

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.9.3-amd64.deb
shasum -a 512 kibana-7.9.3-amd64.deb
sudo dpkg -i kibana-7.9.3-amd64.deb
sudo service kibana start
curl --request DELETE 'http://localhost:9200/.kibana*'

Then modify the /etc/kibana/kibana.yml file and un-comment the lines below:

server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]

And open the URL below in your browser: http://localhost:5601/app/home

Similarly, you can check your Elasticsearch version and install the same version of Kibana.

amitshree
0
  • In my case, the IP address was the cause. I used Docker to start them and a bridge network to connect them. Finally I changed my IP address and restarted the Docker containers, and it worked for me.
scott
0

In my case the server was updated and SELinux was blocking the localhost:9200 connection with a connection refused message.

You can check if it's enabled in /etc/selinux/config.
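The relevant line in /etc/selinux/config looks like this; switching it to permissive (or running setenforce 0) is a quick, temporary way to test whether SELinux is the culprit:

```
# /etc/selinux/config
SELINUX=enforcing   # one of: enforcing | permissive | disabled
```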

Luuk
0

Go to the Kibana directory and find the kibana.yml file in the config folder. Change the property to elasticsearch.hosts: ['https://localhost:9200']. Some IP address is written there, so we are changing it to localhost.

BetterCallMe
0

In my case there was an explicit error in /etc/kibana/kibana.log about an incompatibility between Elasticsearch and Kibana:

{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.6.0"},"@timestamp":"2023-04-14T01:21:55.398+02:00","message":"This version of Kibana (v8.7.0) is incompatible with the following Elasticsearch nodes in your cluster: v7.17.9 @ 192.168.0.28:9200 (192.168.0.28)","log":{"level":"ERROR","logger":"elasticsearch-service"},"process":{"pid":118729},"trace":{"id":"66aaf063bef7d7a991c27883f4ad7e4a"},"transaction":{"id":"8f10d4e6d10975d0"}}

https://www.elastic.co/support/matrix#matrix_compatibility

27P