
How do I move Elasticsearch data from one server to another?

I have server A running Elasticsearch 1.4.2 on one local node with multiple indices. I would like to copy that data to server B, which runs the same Elasticsearch version. The lucene_version is also the same on both servers. But when I copy all the files to server B, the data is not migrated; it only shows the mappings of all the indices. I tried the same procedure on my local computer and it worked perfectly. Am I missing something on the server end?

Ironman

1 Answer


This can be achieved in multiple ways. The easiest and safest way is to create a replica on the new node. A replica can be created by starting a new node on the new server with the same cluster name (if you have changed other network configurations, you may need to adjust those as well). If you initialized your index with no replicas, you can change the number of replicas online using the update settings API, as shown below.
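A minimal sketch of that update-settings call using Python's requests library; the host name server-a:9200 is a placeholder for your own node address:

```python
import requests

# Raise the replica count so the new node can receive a copy of every shard.
# This call targets all indices; use /<index>/_settings for a single index.
resp = requests.put(
    "http://server-a:9200/_settings",
    json={"index": {"number_of_replicas": 1}},
)
print(resp.json())  # expect {"acknowledged": true}
```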

Your cluster will be in a yellow state until your data is in sync; normal operations won't be affected. Once your cluster state is green, you can shut down the server you no longer wish to keep. At that point your cluster state will go back to yellow. You can use the update settings API to change the replica count to 0, or add other nodes, to bring the cluster back to green, as in the sketch below.
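A sketch of that sequence, again with a placeholder host name:

```python
import requests

# Poll cluster health: "yellow" while replicas are still copying,
# "green" once every shard has an allocated replica.
health = requests.get("http://server-b:9200/_cluster/health").json()
print(health["status"])

# After shutting down the old node, drop the replica count to return to green.
requests.put(
    "http://server-b:9200/_settings",
    json={"index": {"number_of_replicas": 0}},
)
```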

This approach is recommended only if both servers are on the same network; otherwise the data sync will take a long time.

Another way is to use snapshots. Create a snapshot on your old server, copy the snapshot files from the old server to the new server in the same location, and register the same snapshot repository at that location on the new server. You will then see the snapshot you copied and can restore from it. Doing this from the command line can be a bit cumbersome; a plugin like kopf makes taking and restoring snapshots as easy as a button click.
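If you would rather script it than use a plugin, the flow looks roughly like the sketch below; the repository name my_backup, snapshot name snapshot_1, path /var/backups/es, and host names are all placeholders:

```python
import requests

OLD = "http://server-a:9200"  # old server (placeholder address)
NEW = "http://server-b:9200"  # new server (placeholder address)

# 1. Register a filesystem snapshot repository on the old server.
requests.put(f"{OLD}/_snapshot/my_backup",
             json={"type": "fs", "settings": {"location": "/var/backups/es"}})

# 2. Take the snapshot and block until it finishes.
requests.put(f"{OLD}/_snapshot/my_backup/snapshot_1",
             params={"wait_for_completion": "true"})

# 3. Copy /var/backups/es to the new server (scp, rsync, ...), register the
#    same repository there, then restore the snapshot.
requests.put(f"{NEW}/_snapshot/my_backup",
             json={"type": "fs", "settings": {"location": "/var/backups/es"}})
requests.post(f"{NEW}/_snapshot/my_backup/snapshot_1/_restore")
```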

Prabin Meitei
  • Is there a way I can directly dump my 0 folder onto the new server and get all the data? And why did this work on my local machine but not on the new server? – Ironman Apr 10 '15 at 09:46
  • Copying the entire data folder should also work. For that you need to stop both servers. Make sure the new server has the same cluster name in its configuration. Check the cluster logs on the new server for possible restoration issues. – Prabin Meitei Apr 10 '15 at 11:00
  • I have followed all the steps correctly; the cluster name, lucene_version, and version are the same as on the other server. After putting the data in place I can see the mappings but not the data. – Ironman Apr 10 '15 at 11:16
  • Can you post the logs you see in cluster.log while starting the new server? – Prabin Meitei Apr 10 '15 at 11:31
  • Where do I find cluster.log? – Ironman Apr 10 '15 at 11:38
  • The default location is /logs/.log. Otherwise it is defined in /config/logging.yml. – Prabin Meitei Apr 10 '15 at 11:50
  • `failed to start shard org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [country_files][4] failed recovery at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:185) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: java.nio.file.AccessDeniedException: /var/lib/elasticsearch/elasticsearch/nodes/0/indices/country_image/4/index/write.lock` – Ironman Apr 10 '15 at 12:54
  • From the logs it looks like a file permission issue, which is why it was working on your local server and not the new one. Try changing the data directory permissions to allow all. You can also try removing the write.lock file (see the sketch after these comments). – Prabin Meitei Apr 12 '15 at 03:58
  • It worked. I changed the directory permissions to 777 and it worked like magic. Thanks for helping. – Ironman Apr 12 '15 at 10:49
  • As of today, kopf is deprecated in favor of [cerebro](https://github.com/lmenezes/cerebro) – Shadi Mar 29 '18 at 15:56
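For anyone hitting the same AccessDeniedException: opening the data directory with 777 works, but a tidier fix is to give the Elasticsearch service user ownership of it. A minimal sketch (run as root), assuming the /var/lib/elasticsearch path from the error log above and the stock elasticsearch service user, which is an assumption here:

```python
import os
import pwd
import grp

DATA_DIR = "/var/lib/elasticsearch"  # path taken from the error log above

# Look up the service user's uid/gid; "elasticsearch" is the stock package
# user name and is assumed, not confirmed by the thread.
uid = pwd.getpwnam("elasticsearch").pw_uid
gid = grp.getgrnam("elasticsearch").gr_gid

# Recursively hand ownership of the data directory to that user.
os.chown(DATA_DIR, uid, gid)
for root, dirs, files in os.walk(DATA_DIR):
    for name in dirs + files:
        os.chown(os.path.join(root, name), uid, gid)
```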