
My incremental Snapshots in Elasticsearch are now failing. I didn't touch anything and nothing seems to have changed, so I can't figure out what is wrong.

I checked my Snapshots by running GET _cat/snapshots/cs-automated?v&s=id and then fetching the details of a failed one:

GET _snapshot/cs-automated/adssd....

Which showed this stacktrace:

java.nio.file.NoSuchFileException: Blob object [YI-....] not found: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 21...; S3 Extended Request ID: zh1C6C0eRy....)
    at org.elasticsearch.repositories.s3.S3RetryingInputStream.openStream(S3RetryingInputStream.java:92)
    at org.elasticsearch.repositories.s3.S3RetryingInputStream.<init>(S3RetryingInputStream.java:72)
    at org.elasticsearch.repositories.s3.S3BlobContainer.readBlob(S3BlobContainer.java:100)
    at org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.readBlob(ChecksumBlobStoreFormat.java:147)
    at org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.read(ChecksumBlobStoreFormat.java:133)
    at org.elasticsearch.repositories.blobstore.BlobStoreRepository.buildBlobStoreIndexShardSnapshots(BlobStoreRepository.java:2381)
    at org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:1851)
    at org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:505)
    at org.elasticsearch.snapshots.SnapshotShardsService.access$600(SnapshotShardsService.java:114)
    at org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:386)
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractPrioritizedRunnable.doRun(ThreadContext.java:763)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)

I don't know how to resolve this and I can no longer upgrade my index. I checked this page: Resolve snapshot error in .. but am still struggling. I've tried deleting a whole bunch of indices. I may try restoring an old Snapshot. I also deleted some .opendis.. indices used for tracking ILM, and a .lock index as well, but nothing is helping. Very annoying.

As requested in the comments:

GET /_cat/repositories?v
id           type
cs-automated   s3

GET /_cat/snapshots/cs-automated produces heaps of Snapshots, all of which have PARTIAL status:

id                                                       status  start_epoch start_time end_epoch  end_time duration indices successful_shards failed_shards total_shards
2020-09-08t01-12-44.ea93d140-7dba-4dcc-98b5-180e7b9efbfa PARTIAL 1599527564 01:12:44 1599527577 01:12:57 13.4s  84 177 52 229
2021-02-04t08-55-22.8691e3aa-4127-483d-8400-ce89bbbc7ea4 PARTIAL 1612428922 08:55:22 1612428957 08:55:57   35s 208 793 31 824
2021-02-04t09-55-16.53444082-a47b-4739-8ff9-f51ec038cda9 PARTIAL 1612432516 09:55:16 1612432552 09:55:52 35.6s 208 793 31 824
2021-02-04t10-55-30.6bf0472f-5a6c-4ecf-94ba-a1cf345ee5b9 PARTIAL 1612436130 10:55:30 1612436167 10:56:07 37.6s 208 793 31 824
2021-02-04t11-......
Derrops
  • Can you check if the specified bucket key exists in your S3 repository? Probably not, but worth checking anyway – Val Feb 18 '21 at 07:40
  • I thought the automated incremental snapshots are not in a bucket you own, that is abstracted away from you? This isn't a repo I set up; this is the default 14 days of incremental snapshots which you get out of the box. – Derrops Feb 18 '21 at 10:06
  • What do you get when running `GET /_cat/repositories?v` and eventually `GET /_cat/snapshots/?v` – Val Feb 18 '21 at 10:09
  • I have updated ( ; – Derrops Feb 18 '21 at 10:30
  • PARTIAL means that some shard(s) of some index(es) could not be snapshotted for a given reason. You can still restore them by [setting partial:true](https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshots-restore-snapshot.html#_partial_restore) during the restore operation (see the restore sketch after these comments) – Val Feb 18 '21 at 10:46
  • Not sure what you mean for me to do? Should I restore my cluster from a partial snapshot and then apply different index settings? – Derrops Feb 19 '21 at 00:34
  • You might be hitting this issue: https://github.com/opendistro-for-elasticsearch/community/issues/141. Did you update your cluster recently? – Val Feb 19 '21 at 04:58
  • It just started happening out of the blue really. Very painful; I pretty much will have to export all my Kibana stuff and make a new cluster if I can't fix it. I also have another cluster broken on me, I think that one has too many shards though. This one has almost no data in it and still has problems. – Derrops Feb 19 '21 at 05:07
  • Can you find anything useful in the Cloudwatch logs when the snapshot process kicks in? – Val Feb 19 '21 at 05:09
  • I'll have to enable it and get back to you, I only have CLI access to this env so it's a pain to debug. – Derrops Feb 19 '21 at 05:20
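
For reference, a minimal sketch of the partial restore Val describes, assuming you pick one of the snapshots listed above. The snapshot name is copied from the listing; the index name and rename settings are hypothetical placeholders. "partial": true lets the restore proceed even though some shards were never snapshotted, and the rename settings avoid clashing with the live index:

POST _snapshot/cs-automated/2021-02-04t08-55-22.8691e3aa-4127-483d-8400-ce89bbbc7ea4/_restore
{
  "indices": "my-index",
  "partial": true,
  "rename_pattern": "(.+)",
  "rename_replacement": "restored_$1"
}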

1 Answer


The reason the snapshot ends in the PARTIAL state is that, due to some issue in the S3 repository, the blob YI-.... is missing. That is a clear case of repository corruption.

java.nio.file.NoSuchFileException: Blob object [YI-....] not found: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 21...; S3 Extended Request ID: zh1C6C0eRy....)

This kind of repository corruption is typically observed when the cluster is heavily loaded (JVM memory pressure > 80% or CPU utilization > 80%) and a few nodes drop out of the cluster.

One way to fix the issue is to delete all the snapshots that refer to the index backed by "YI-....". This cleans up the S3 snapshot files for that index, and the next snapshot you take starts afresh.
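
A minimal sketch of that cleanup, assuming the affected snapshots can be identified from their "indices" lists. The snapshot name below is copied from the question's listing; note that on an AWS-managed domain the cs-automated repository may reject manual deletes, in which case AWS support has to do it:

# List every snapshot together with the indices it contains
GET _snapshot/cs-automated/_all

# Delete each snapshot that references the index backed by blob YI-....
DELETE _snapshot/cs-automated/2020-09-08t01-12-44.ea93d140-7dba-4dcc-98b5-180e7b9efbfa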

To be on the safer side, I would recommend contacting AWS support to fix this type of repository corruption.

Elasticsearch tracked a similar issue, fixed in version 7.8 and above: https://github.com/elastic/elasticsearch/issues/57198

piyush daftary
  • Thanks for the suggestion, it is ok for me to delete all snapshots so I might try this first before contacting them. – Derrops Feb 21 '21 at 22:56
  • Anyway, don't rely entirely on automated backups from AWS ES. With AWS ES, it's advisable to have your own manual snapshots. I followed these steps to achieve it: https://medium.com/swlh/elasticsearch7-backup-snapshot-restore-aws-s3-54a581c75589 (a minimal repository sketch follows these comments) – Juan Carlos Alafita Mar 10 '21 at 16:23
  • I disagree with Juan. AWS ES automated snapshots are pretty reliable. – piyush daftary Mar 14 '21 at 00:33
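
A minimal sketch of registering the kind of manual repository Juan Carlos suggests, assuming a bucket and IAM role you have already created (my-manual-repo, my-snapshot-bucket, and TheSnapshotRole are placeholders; on an AWS-managed domain the registration request must be signed by a principal allowed to pass the role):

PUT _snapshot/my-manual-repo
{
  "type": "s3",
  "settings": {
    "bucket": "my-snapshot-bucket",
    "region": "us-east-1",
    "role_arn": "arn:aws:iam::123456789012:role/TheSnapshotRole"
  }
}

# Then take a manual snapshot whenever needed
PUT _snapshot/my-manual-repo/snapshot-1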