
I recently updated the Spring Kinesis binder from 2.0.1.RELEASE to 2.1.0 and the number of DynamoDB writes against the SpringIntegrationLockRegistry table has tripled. Does anyone know what changed in this library that would cause this?

Thanks.

1 Answer


I think this commit introduced the relevant change: https://github.com/spring-projects/spring-integration-aws/commit/ac74dfd2368c5c4b74793c259312313ad21ed5f8.

So, we now renew the lock every time we are ready to consume. If the lock is not held at that point, we are not the lock holder and therefore don't consume.
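As a rough illustration of that pattern (not the library's actual code), here is a minimal sketch built on Spring Integration's `LockRegistry`/`RenewableLockRegistry` abstractions; the `ShardPollingSketch` class, the `consumeShard` method, and the `shardKey` value are hypothetical:

```java
import java.util.concurrent.locks.Lock;

import org.springframework.integration.support.locks.LockRegistry;
import org.springframework.integration.support.locks.RenewableLockRegistry;

public class ShardPollingSketch {

    private final LockRegistry lockRegistry; // e.g. a DynamoDB-backed lock registry bean

    public ShardPollingSketch(LockRegistry lockRegistry) {
        this.lockRegistry = lockRegistry;
    }

    // Called once per poll cycle for a shard. Renewing the lease on every cycle is
    // what turns each consume attempt into an extra write against the lock table.
    public void pollOnce(String shardKey) {
        Lock lock = this.lockRegistry.obtain(shardKey);
        if (lock.tryLock()) {
            try {
                // Refresh the lease so other instances keep seeing this one as the owner.
                if (this.lockRegistry instanceof RenewableLockRegistry) {
                    ((RenewableLockRegistry) this.lockRegistry).renewLock(shardKey);
                }
                consumeShard(shardKey); // hypothetical: read and process the shard's records
            }
            finally {
                lock.unlock();
            }
        }
        // If the lock cannot be acquired, this instance is not the owner: skip consuming.
    }

    private void consumeShard(String shardKey) {
        // application-specific record processing
    }
}
```

If every shard consumer runs something like this on each poll cycle, the write volume against the lock table scales with the polling rate, which would explain the jump you observed.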

Artem Bilan
  • Hi, thanks for that. We moved our tables from provisioned to on-demand capacity to accommodate the higher number of reads/writes, as we saw throttling on DynamoDB. But now, after a couple of hours of running, our app gets: "The lock for key xxxxxxxx was not renewed in time", and it stops consuming any records from the stream. We have 2 shards (the binder is greedy, so most of the time only one instance of the app consumes both shards) and one consumer (with >= 2 instances). Once we roll back to 2.0.1, everything works like a charm. Any advice on that? – user2519543 May 05 '22 at 09:40
  • It would be great to see what exception is thrown in that "not renewed in time" case, please. There must also be more info in the logs. It doesn't look like the logic is broken in a way that would stop consuming just because that exception happens, unless your client really is blocked by AWS for making so many requests... – Artem Bilan May 05 '22 at 14:14
  • (1/3) I managed to replicate the issue. The setup: 1 stream with 2 shards and one consumer with 3 running instances. As the binder is greedy, only one instance consumes both shards. When the active consumer instance restarts (or dies) and the 2 shards get assigned to 2 instances (e.g. one each), the owners are updated in DynamoDB and everything looks good. But both instances then start throwing exceptions like the one below and, of course, stop consuming events. – user2519543 May 09 '22 at 10:09
  • (2/3) Restarting one of the affected instances helps: after that both shards are assigned to one owner and consumption of the stream is restored. The exception from one of the instances is below (see also the renewal sketch after these comments); the second instance's exception is exactly the same, just for a different shard. – user2519543 May 09 '22 at 10:10
  • (3/3-1) `a.i.k.KinesisMessageDrivenChannelAdapter : The lock for key 'xxxxxxxx:shardId-000000000002' was not renewed in time java.util.concurrent.TimeoutException: null at java.base/java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1960) ~[na:na]` – user2519543 May 09 '22 at 10:12
  • (3/3-2) `org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer.renewLockIfAny(KinesisMessageDrivenChannelAdapter.java:1030) ~[spring-integration-aws-2.4.0.jar:na] org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer.execute(KinesisMessageDrivenChannelAdapter.java:946) ~[spring-integration-aws-2.4.0.jar:na] org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ConsumerDispatcher.run(KinesisMessageDrivenChannelAdapter.java:856) ~[spring-integration-aws-2.4.0.jar:na]` – user2519543 May 09 '22 at 10:13
  • To track the issue mentioned in these comments, see [this GitHub issue](https://github.com/spring-cloud/spring-cloud-stream/issues/2392). – user2519543 May 16 '22 at 16:21
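For context on where that `TimeoutException` comes from: below is a minimal, hedged sketch of the renewal-with-timeout pattern the stack trace points at (a renewal submitted to another thread and awaited via a bounded `CompletableFuture.get`). It is an illustration only, not the adapter's actual code; the class name, executor, timeout parameter, and the `RenewableLockRegistry` target are assumptions.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.springframework.integration.support.locks.RenewableLockRegistry;

public class RenewalTimeoutSketch {

    private final RenewableLockRegistry lockRegistry; // e.g. the DynamoDB-backed registry
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public RenewalTimeoutSketch(RenewableLockRegistry lockRegistry) {
        this.lockRegistry = lockRegistry;
    }

    // Renew the shard lock on a separate thread and wait a bounded amount of time.
    // If the renewal call is slow or throttled, get(...) ends in CompletableFuture.timedGet
    // throwing TimeoutException, which is what a "was not renewed in time" log reports.
    public boolean renewWithTimeout(String lockKey, long timeoutSeconds) {
        CompletableFuture<Void> renewal =
                CompletableFuture.runAsync(() -> this.lockRegistry.renewLock(lockKey), this.executor);
        try {
            renewal.get(timeoutSeconds, TimeUnit.SECONDS);
            return true;
        }
        catch (TimeoutException ex) {
            // Renewal did not complete in time: the caller has to treat the lock as lost.
            return false;
        }
        catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
            return false;
        }
        catch (ExecutionException ex) {
            // The renewal itself failed (e.g. the lock is no longer held by this instance).
            return false;
        }
    }
}
```

Under a pattern like this, a slow or throttled DynamoDB call is enough to trigger the timeout even though the lock itself may still be valid, which matches the symptom described in the comments above.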