
I have a use case where I consume certain logs from a queue and hit some third-party APIs with information from those logs. If the third-party system is not responding properly, I want to implement retry logic for that particular log.

I could add a time field and re-push the message to the same queue; the message would then only be consumed again if its time field is valid, i.e. less than the current time, and otherwise pushed back into the queue.

But this logic would add the same log again and again until the retry time is reached, and the queue would grow unnecessarily.

Is there a better way to implement retry logic in Kafka?

TECH007
  • If the third-party API is not responding when you want to send it message N, does it make sense to go on with messages N+1, N+2, etc. and come back to message N later? If not, there does not seem to be much point in letting Kafka help with the retry. Just let your consumer back off for seconds, minutes, hours, and push message N again. – Harald May 06 '16 at 19:12
  • Yes, I first thought of a similar approach where I would seek the consumer back to the previous offset in case of failure. But is there a way to restrict a consumer from consuming messages for N units of time? – TECH007 May 09 '16 at 07:17
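Kafka's Java consumer does provide this: pause() stops fetching from the given partitions while poll() keeps the consumer alive in the group, and resume() picks them back up. Below is a minimal sketch of the back-off approach Harald describes, combined with seek() to rewind to the failed record. The broker address, topic name, group id and the callThirdPartyApi helper are placeholders, not part of the original post.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class BackoffConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "log-forwarder");             // assumed group id
        props.put("enable.auto.commit", "false");
        props.put("max.poll.records", "1");                  // one record in flight keeps the rewind simple
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("logs"));  // assumed topic name
            long backoffUntil = 0L;                                 // epoch millis until which we stay paused

            while (true) {
                // poll() must keep being called even while paused so the consumer
                // stays in the group; paused partitions simply return no records.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));

                if (System.currentTimeMillis() < backoffUntil) {
                    continue;
                }
                consumer.resume(consumer.paused());

                for (ConsumerRecord<String, String> record : records) {
                    TopicPartition tp = new TopicPartition(record.topic(), record.partition());
                    try {
                        callThirdPartyApi(record.value());           // hypothetical API call
                        consumer.commitSync(Collections.singletonMap(
                                tp, new OffsetAndMetadata(record.offset() + 1)));
                    } catch (Exception apiDown) {
                        // Rewind to the failed record, pause everything and back off for 30 s.
                        consumer.seek(tp, record.offset());
                        consumer.pause(consumer.assignment());
                        backoffUntil = System.currentTimeMillis() + 30_000;
                        break;
                    }
                }
            }
        }
    }

    private static void callThirdPartyApi(String payload) {
        // placeholder for the real third-party call
    }
}
```

The fixed 30-second pause could of course be made to grow on each consecutive failure to get the seconds/minutes/hours back-off described in the comment.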

3 Answers


You can create several retry topics and push failed tasks there. For instance, you can create 3 topics with different delays in minutes and rotate a single failed task through them until the maximum attempt limit is reached.

‘retry_5m_topic’ — for retry in 5 minutes

‘retry_30m_topic’ — for retry in 30 minutes

‘retry_1h_topic’ — for retry in 1 hour

See this article for more details: https://blog.pragmatists.com/retrying-consumer-architecture-in-the-apache-kafka-939ac4cb851a
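As an illustration of the routing step, here is a minimal sketch assuming the standard Java client; the dead-letter topic name and the "retry-stage" header are arbitrary choices for this sketch, not from the linked article.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.nio.charset.StandardCharsets;

public class RetryRouter {

    // Retry chain from the answer above, plus an assumed dead-letter topic.
    private static final String[] RETRY_TOPICS = {"retry_5m_topic", "retry_30m_topic", "retry_1h_topic"};
    private static final String DEAD_LETTER_TOPIC = "failed_logs_dlq";

    private final KafkaProducer<String, String> producer;

    public RetryRouter(KafkaProducer<String, String> producer) {
        this.producer = producer;
    }

    // Sends a failed record to the next retry topic in the chain, or to the
    // dead-letter topic once all delays have been used up.
    public void routeFailure(String key, String value, int stagesAlreadyTried) {
        String target = stagesAlreadyTried < RETRY_TOPICS.length
                ? RETRY_TOPICS[stagesAlreadyTried]
                : DEAD_LETTER_TOPIC;

        ProducerRecord<String, String> record = new ProducerRecord<>(target, key, value);
        record.headers().add("retry-stage",
                Integer.toString(stagesAlreadyTried + 1).getBytes(StandardCharsets.UTF_8));
        producer.send(record);
    }
}
```

The consumer of each retry topic then only has to wait until record.timestamp() plus that topic's delay has elapsed (for example by pausing its partitions) before attempting the third-party call again; on another failure it calls routeFailure with the next stage.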

Pave

In the consumer, if processing throws an exception, produce the message again with an attempt count of 1, so the next time it is consumed it carries that attempt count. On the producing side, check the count: if it exceeds your retry limit, stop re-producing the message.
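A minimal sketch of that bookkeeping, assuming a Kafka version with record headers (0.11 or later); the header name, the retry limit, and the choice to give up silently are placeholders for this sketch.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;

import java.nio.charset.StandardCharsets;

public class AttemptCounter {

    private static final int MAX_ATTEMPTS = 3;        // assumed retry budget
    private static final String HEADER = "attempt";   // arbitrary header name

    // Called when processing a record has failed: re-produces it with the
    // attempt count incremented, or gives up once MAX_ATTEMPTS is reached.
    static void retryOrDrop(KafkaProducer<String, String> producer,
                            ConsumerRecord<String, String> failed) {
        Header h = failed.headers().lastHeader(HEADER);
        int attempts = (h == null) ? 0
                : Integer.parseInt(new String(h.value(), StandardCharsets.UTF_8));

        if (attempts >= MAX_ATTEMPTS) {
            return;                                    // stop re-producing; log or dead-letter instead
        }

        ProducerRecord<String, String> retry =
                new ProducerRecord<>(failed.topic(), failed.key(), failed.value());
        retry.headers().add(HEADER,
                Integer.toString(attempts + 1).getBytes(StandardCharsets.UTF_8));
        producer.send(retry);
    }
}
```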

ksv
  • But do you know of a way to delay the retry? E.g. produce again but consume only after 5 minutes? Because if the consuming rate is high, then 10 retries can be exhausted in a few milliseconds. – Uziel Sulkies Nov 26 '17 at 09:13
  • I came across this post; hope it helps: https://stackoverflow.com/questions/46665941/apache-kafka-consumer-delay-option. You can also send a property with the current timestamp and compare it with the current time: if more than 5 minutes have passed, process it, otherwise produce the same event again. This adds unnecessary checks, though. Thread.sleep is an acceptable option, but I am not sure about the thread mechanics; it may leave many threads in a sleeping state. Research it before using. – ksv Nov 28 '17 at 16:30
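A small sketch of the timestamp check described in the comment above. Note that re-producing the event when it is not yet due is exactly the busy re-queue loop the question wants to avoid, so treat this as an illustration only; the 5-minute delay and method names are placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;

public class DelayByTimestamp {

    private static final long DELAY_MS = Duration.ofMinutes(5).toMillis();

    // Returns true if the record was old enough to process, false if it was
    // put back on the topic because its delay has not elapsed yet.
    static boolean processIfDue(KafkaProducer<String, String> producer,
                                ConsumerRecord<String, String> record) {
        if (System.currentTimeMillis() >= record.timestamp() + DELAY_MS) {
            // callThirdPartyApi(record.value());  // hypothetical processing step
            return true;
        }
        producer.send(new ProducerRecord<>(record.topic(), record.key(), record.value()));
        return false;
    }
}
```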

Yes, this could be one straightforward solution that I also thought of. But with this, we will end up creating many topics, since it is possible that message processing will fail again.

I solved this problem by mapping this use case to RabbitMQ. In RabbitMQ there is the concept of a retry exchange: if processing a message from an exchange fails, you can send it to a retry exchange with a TTL. Once the TTL expires, the message moves back to the main exchange and is ready to be processed again.

I can post some examples explaining how to implement exponential-backoff message processing using RabbitMQ.
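For reference, a minimal sketch of that RabbitMQ topology using the standard Java amqp-client; the exchange and queue names are placeholders, and exponential backoff would simply add more retry queues with increasing TTLs.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.util.HashMap;
import java.util.Map;

public class RetryTopology {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                       // assumed broker address

        try (Connection conn = factory.newConnection()) {
            Channel ch = conn.createChannel();

            ch.exchangeDeclare("logs.main", "direct", true);
            ch.exchangeDeclare("logs.retry", "direct", true);

            // Work queue: rejected messages are dead-lettered to the retry exchange.
            Map<String, Object> workArgs = new HashMap<>();
            workArgs.put("x-dead-letter-exchange", "logs.retry");
            ch.queueDeclare("logs.work", true, false, false, workArgs);
            ch.queueBind("logs.work", "logs.main", "logs");

            // Retry queue: messages sit here for 5 minutes (TTL), then are
            // dead-lettered back to the main exchange for another attempt.
            Map<String, Object> retryArgs = new HashMap<>();
            retryArgs.put("x-message-ttl", 5 * 60 * 1000);
            retryArgs.put("x-dead-letter-exchange", "logs.main");
            ch.queueDeclare("logs.retry.5m", true, false, false, retryArgs);
            ch.queueBind("logs.retry.5m", "logs.retry", "logs");

            // A consumer on "logs.work" that rejects a failed delivery with
            // basicNack(deliveryTag, false, false) sends it through this retry
            // loop automatically.
        }
    }
}
```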

TECH007