
I have set bootstrap.memory_lock: true and updated /etc/security/limits.conf, adding memlock unlimited for the elasticsearch user.

My Elasticsearch was running fine for many months. Suddenly it failed a day ago. In the logs I can see the error below, and the process never starts:

ERROR: bootstrap checks failed memory locking requested for elasticsearch process but memory is not locked

I ran ulimit -as and I can see max locked memory set to unlimited. What is going wrong here? I have been trying for hours, but all in vain. Please help.

OS is RHEL 7.2, Elasticsearch 5.1.2.

ulimit -as output

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 83552
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
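
Note that `ulimit -as` only reflects the limits of the shell you run it in; every process inherits its own limits from its parent. As a sanity check (shown here for the current user; in practice run it with `sudo -u elasticsearch`, assuming that is the service user), confirm what a freshly started child shell would actually get:

```shell
# A fresh child shell shows the locked-memory limit a spawned process
# would inherit; compare this to what the service itself reports.
bash -c 'ulimit -l'
```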
Shades88
  • Add the output of ulimit -as to the question – angelcervera Jul 10 '17 at 10:01
  • added ulimit -as output, please check – Shades88 Jul 10 '17 at 10:30
  • there is still some component in your RHEL install that holds Elasticsearch back from locking memory. Did you do an OS upgrade recently? Or was systemd updated? You might want to check the systemd files as well. There is a `LimitMEMLOCK` option in the `elasticsearch.service` definition that needs to be unlocked. – alr Jul 10 '17 at 16:32
  • I had edited the elasticsearch.service file and added LimitMEMLOCK=infinity in there. That too didn't take any effect – Shades88 Jul 11 '17 at 05:55
  • Can you confirm that the user running Elasticsearch is the one to which you applied the settings in `/etc/security/limits.conf`? In addition, does Elasticsearch start when you run it directly from your shell? – Adonis Jul 11 '17 at 14:11

9 Answers


Here is what I have done to lock the memory on my ES nodes on RedHat/CentOS 7 (it will also work on other distributions that use systemd).

You must make the change in 4 different places:

1) /etc/sysconfig/elasticsearch

On sysconfig: /etc/sysconfig/elasticsearch you should have:

ES_JAVA_OPTS="-Xms4g -Xmx4g" 
MAX_LOCKED_MEMORY=unlimited

(replace 4g with half your available RAM, as the Elasticsearch heap sizing docs recommend)
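
As a sketch of that sizing rule (my own, not from the answer): take half of physical RAM, capped below 32 GB so compressed object pointers stay enabled, and emit the matching line for /etc/sysconfig/elasticsearch:

```shell
# Compute half of physical RAM in GB, capped at 31g (compressed-oops limit),
# and print a matching ES_JAVA_OPTS line:
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
half_g=$(( total_kb / 1024 / 1024 / 2 ))
if [ "$half_g" -gt 31 ]; then half_g=31; fi
if [ "$half_g" -lt 1 ]; then half_g=1; fi
echo "ES_JAVA_OPTS=\"-Xms${half_g}g -Xmx${half_g}g\""
```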

2) /etc/security/limits.conf

On security limits config: /etc/security/limits.conf you should have

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

3) /usr/lib/systemd/system/elasticsearch.service

On the service script: /usr/lib/systemd/system/elasticsearch.service you should uncomment:

LimitMEMLOCK=infinity

Run systemctl daemon-reload after changing the service script.

4) /etc/elasticsearch/elasticsearch.yml

On elasticsearch config finally: /etc/elasticsearch/elasticsearch.yml you should add:

bootstrap.memory_lock: true

That's it. Restart your node and the RAM will be locked; you should notice a major performance improvement.
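
To confirm the lock actually took effect after the restart, the `_nodes` API exposes a per-node `mlockall` flag. A minimal sketch (the sample JSON response is inlined so the parsing is visible without a live cluster; in practice fetch it with `curl -s 'localhost:9200/_nodes?filter_path=**.mlockall'`):

```shell
# Each node should report "mlockall":true once memory locking succeeded.
response='{"nodes":{"abc123":{"process":{"mlockall":true}}}}'
echo "$response" | grep -o '"mlockall":[a-z]*'
# prints: "mlockall":true
```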

ugosan
    your answer is very helpful. can you please recommend other necessary settings required to setup an elastic cluster for production? – yousuf iqbal Jun 01 '18 at 20:03
  • Thank you yousuf. I've blogged about it here: https://www.ugosan.org/Locking-Memory-for-production/ the blog has other posts you might find helpful – ugosan Jun 01 '18 at 20:13
OS = Ubuntu 16
Elasticsearch = 5.6.3

I also used to have the same problem.

I set in elasticsearch.yml

bootstrap.memory_lock: true

and I got this in my logs:

memory locking requested for elasticsearch process but memory is not locked

I tried several things, but actually you need to do only one thing (per https://www.elastic.co/guide/en/elasticsearch/reference/master/setting-system-settings.html):

file:

/etc/systemd/system/elasticsearch.service.d/override.conf

add

[Service]
LimitMEMLOCK=infinity
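
Creating that drop-in can be scripted; here is a sketch (DROPIN_DIR would normally be /etc/systemd/system/elasticsearch.service.d, which requires root, so a temp directory stands in when trying it out):

```shell
# Write the override file, then apply it with:
#   systemctl daemon-reload && systemctl restart elasticsearch
DROPIN_DIR=${DROPIN_DIR:-$(mktemp -d)}
mkdir -p "$DROPIN_DIR"
printf '[Service]\nLimitMEMLOCK=infinity\n' > "$DROPIN_DIR/override.conf"
cat "$DROPIN_DIR/override.conf"
```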

A bit of explanation.

The really funny thing is that systemd does not care about ulimit settings at all (https://fredrikaverpil.github.io/2016/04/27/systemd-and-resource-limits/). You can easily verify this fact:

  1. Set in /etc/security/limits.conf

    elasticsearch - memlock unlimited

  2. check that for elasticsearch max locked memory is unlimited

    $ sudo su elasticsearch -s /bin/bash
    $ ulimit -l

  3. disable bootstrap.memory_lock: true in /etc/elasticsearch/elasticsearch.yml

    # bootstrap.memory_lock: true

  4. start service elasticsearch via systemd

    # service elasticsearch start

  5. check what max memory lock setting the elasticsearch service has once it is started

    # systemctl show elasticsearch | grep -i limitmemlock

Surprise! Even though we have set an unlimited max memlock size via ulimit, systemd completely ignores it:

LimitMEMLOCK=65536

So we come to a conclusion: to start Elasticsearch via systemd with

bootstrap.memory_lock: true

enabled, we don't need to care about the ulimit settings, but we do need to set the limit explicitly in the systemd config file.

End of story.

Alex

Try setting MAX_LOCKED_MEMORY=unlimited in /etc/sysconfig/elasticsearch,

and LimitMEMLOCK=infinity in /usr/lib/systemd/system/elasticsearch.service.


Make sure the process that starts Elasticsearch is configured with an unlimited memlock. If, for example, you start Elasticsearch as a different user than the one configured in /etc/security/limits.conf, or as root while limits.conf only defines a wildcard entry (which does not apply to root), it won't work.

Test it to be sure: you could, for example, put ulimit -a ; exit just after the "# Start Daemon" line in /etc/init.d/elasticsearch and start it with bash /etc/init.d/elasticsearch start (adapt accordingly to your start mechanism).

dr0i

Check the actual limits while the process is running (however briefly) with:

cat /proc/<pid>/limits

You will find lines similar to this:

Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        0                    unlimited            bytes 
<truncated>    
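
The interesting line for this question is `Max locked memory`. A small sketch for pulling out its soft limit (the current shell's PID `$$` stands in here for the Elasticsearch PID):

```shell
# Columns in /proc/<pid>/limits are separated by runs of spaces;
# field 2 is the soft limit (a number in bytes, or "unlimited"):
pid=$$
awk -F'  +' '/Max locked memory/ {print $2}' "/proc/$pid/limits"
```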

Then, depending on the runner or container (in my case it was supervisord's minfds value), you can lift the actual limit in its configuration.

I hope this gives a little hint for more general cases.

Ardhi

Following the answers above: on Ubuntu 18.04 with Elasticsearch 6.x, there was no LimitMEMLOCK=infinity entry in /usr/lib/systemd/system/elasticsearch.service.

Adding it there and setting MAX_LOCKED_MEMORY=unlimited in /etc/default/elasticsearch did the trick.

The jvm options can be added in /etc/elasticsearch/jvm.options file.

millisami

If you use the tar distribution and monitor Elasticsearch with monit, you have to tell monit to use unlimited; all the other places for this configuration are ignored.

Add ulimit -s unlimited at the beginning of /etc/init.d/monit, then run systemctl daemon-reload, then service monit restart and monit start $yourMonitLabel.

dr0i

One thing it "can" be is that your /tmp is mounted with noexec (see https://discuss.elastic.co/t/not-able-to-start-elasticsearch-due-to-failed-memory-lock/158009/6). Check your logs and see if they complain about .UnsatisfiedLinkError: Native library. This seems to hit CentOS/RedHat especially, but maybe others. Might be fixed in ES 7?
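
A quick way to check for this condition (a sketch; `findmnt -T` looks up the filesystem that actually holds /tmp). A commonly cited workaround is pointing JNA at an exec-capable directory via `-Djna.tmpdir=...` in the JVM options:

```shell
# Report whether the filesystem containing /tmp carries the noexec option:
opts=$(findmnt -T /tmp -no OPTIONS)
case ",$opts," in
  *,noexec,*) echo "/tmp is noexec" ;;
  *)          echo "/tmp allows exec" ;;
esac
```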

rogerdpack

If you have swap active on your system, Elasticsearch cannot let the memory it has allocated be paged out to disk, since that would cause a sharp slowdown. So it makes a special system call (mlockall) that prevents its memory from being swapped out. For Elasticsearch to do this, you need to provide the setting:

bootstrap.memory_lock=true

Also, for the system to allow Elasticsearch to lock as much memory as it needs, the user Elasticsearch runs as must be permitted to lock a large amount of memory. To do this, write the file:

/etc/security/limits.d/es.conf

Content:

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited 

If you use Kubernetes to run Elasticsearch, swap is disabled there out of the box, so memory will never be paged out to disk and Elasticsearch does not need to lock it. Therefore, in Kubernetes you can specify:

bootstrap.memory_lock=false

That is, you can tell Elasticsearch not to lock memory, since it is unnecessary: there is no swap in Kubernetes.
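
Whether locking matters at all can be checked from inside the node or container: memory can only be paged out if swap is configured. A minimal sketch:

```shell
# /proc/swaps lists active swap areas after a header line; none means
# the kernel cannot page this memory out, so mlockall buys nothing:
swaps=$(awk 'NR>1' /proc/swaps | wc -l)
if [ "$swaps" -eq 0 ]; then
  echo "no swap: memory cannot be paged out"
else
  echo "swap active: consider bootstrap.memory_lock"
fi
```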

pompei