
My application has a .NET Core microservice for handling notifications, and it is deployed on Kubernetes. Its NotificationRequestConsumer looks as follows (please note this is just a code snippet to illustrate my question):

public class NotificationRequestConsumer : IConsumer<INotificationRequest>
{
    public NotificationRequestConsumer()
    {
        
    }
    public Task Consume(ConsumeContext<INotificationRequest> context)
    {
        // notification request logic goes here
        return Task.CompletedTask;
    }
}

This is how MassTransit is configured in the startup:

public static IServiceCollection AddMassTransitConnection(this IServiceCollection services, IConfiguration configuration)
{
    services.AddMassTransit(x =>
    {
        x.AddBus(context => Bus.Factory.CreateUsingRabbitMq(c =>
        {
            c.Host(configuration["RabbitMQ:HostUrl"]);
            c.ConfigureEndpoints(context);
        }));
        
        x.AddConsumer<NotificationRequestConsumer>(c => c.UseMessageRetry(r => r.Interval(1,500)));
    });

    services.AddMassTransitHostedService();

    return services;
}

As per the above code, I have configured a single retry after a short interval (500 ms) in case an error occurs while processing a notification. If there is still a problem, I use a fault consumer to store the data of the relevant request in the DB for future use (so the relevant notification can be sent manually later).

public class NotificationRequestFaultConsumer : IConsumer<Fault<INotificationRequest>>
{
    public Task Consume(ConsumeContext<Fault<INotificationRequest>> context)
    {
        //For future use, I store the relevant data here 
        return Task.CompletedTask;
    }
}
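
To show what that stored data could include, here is a rough sketch that fleshes out the stub above (the INotificationFailureStore interface is just a placeholder abstraction, not my real persistence code):

using System.Linq;
using System.Threading.Tasks;
using MassTransit;

public interface INotificationFailureStore
{
    Task SaveAsync(INotificationRequest request, string error);
}

public class NotificationRequestFaultConsumer : IConsumer<Fault<INotificationRequest>>
{
    private readonly INotificationFailureStore _store;

    public NotificationRequestFaultConsumer(INotificationFailureStore store)
    {
        _store = store;
    }

    public async Task Consume(ConsumeContext<Fault<INotificationRequest>> context)
    {
        // Fault<T> carries the original message plus the exception details
        // captured by MassTransit when the consumer faulted.
        INotificationRequest original = context.Message.Message;
        ExceptionInfo firstError = context.Message.Exceptions.FirstOrDefault();

        await _store.SaveAsync(original, firstError?.Message);
    }
}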

Even with this in place, the faulted message is still moved to the RabbitMQ error queue. As far as I know, that is part of how the transport works.

My concerns are as follows:

  1. Does the continuous growth of the error queue cause the cluster to crash?
  2. Is it a good approach to log failures only to the ELK stack, without throwing exceptions and without adding them to the RabbitMQ error queue?
  3. Is it possible to set specific expiration criteria so the error queue is cleaned up automatically, and is that a good idea?
  • What is the purpose of throwing the error into MQ? Do you have another consumer for the alert? And I believe that if you retry within a few milliseconds there will be a lot of duplicate errors; would you want to retry with exponential backoff and discard the process if it exceeds a certain number of retries? – Ice Jul 20 '22 at 08:55
  • No, this is the only consumer for the alert. As far as I know, if the same request fails twice in the consumer, two errors will not be logged into the RabbitMQ error queue, will they? Yes, I want to discard the process if it exceeds a one-time retry. – Sachith Wickramaarachchi Jul 20 '22 at 09:02
  • First question: you should be able to allocate resources in Kubernetes, and errors will be logged before the queue becomes too large. Second: personally, if you don't have another consumer consuming the queue, I wouldn't create an MQ channel for storing errors; storing them in the DB is good enough, at least to me. Last question: I'm not sure how to add expiration criteria, but yes, it is normal to delete expired error logs, at least at my company, depending on how important the log is and how often you will review it. This is all my own experience and suggestion; check further before taking it all. – Ice Jul 20 '22 at 09:34
  • If you don't want messages in the error queue, you can [discard](https://stackoverflow.com/a/62221407/1882) them. – Chris Patterson Jul 20 '22 at 11:17
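
A minimal sketch of what that discard option might look like, assuming the consumer is wired to an explicitly configured receive endpoint (the endpoint name "notification-request" and this layout are assumptions, not necessarily the setup in the question):

x.AddBus(context => Bus.Factory.CreateUsingRabbitMq(c =>
{
    c.Host(configuration["RabbitMQ:HostUrl"]);

    c.ReceiveEndpoint("notification-request", e =>
    {
        e.UseMessageRetry(r => r.Interval(1, 500)); // one retry after 500 ms, as in the question
        e.DiscardFaultedMessages();                 // faulted messages are not moved to the _error queue
        e.ConfigureConsumer<NotificationRequestConsumer>(context);
    });
}));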

1 Answer


You can use dead letter queues, a built-in RabbitMQ mechanism for handling messages in the following cases (per the official documentation):

  1. The message is negatively acknowledged by a consumer using basic.reject or basic.nack with the requeue parameter set to false;
  2. The message expires due to per-message TTL; or
  3. The message is dropped because its queue exceeded a length limit.
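
For illustration, a rough sketch of declaring a queue with a dead-letter exchange plus optional TTL and length limits, using the RabbitMQ .NET client directly (all exchange/queue names and limits below are made up, not taken from the question):

using System.Collections.Generic;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Exchange and queue that will receive dead-lettered messages.
channel.ExchangeDeclare("dead-letter-exchange", ExchangeType.Fanout, durable: true);
channel.QueueDeclare("dead-letter-queue", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind("dead-letter-queue", "dead-letter-exchange", routingKey: "");

// Primary queue: rejected (requeue=false), expired, or length-limited messages
// are routed to the dead-letter exchange declared above.
var arguments = new Dictionary<string, object>
{
    ["x-dead-letter-exchange"] = "dead-letter-exchange",
    ["x-message-ttl"] = 86400000, // optional: messages expire after 24 hours
    ["x-max-length"] = 10000      // optional: cap the queue length
};
channel.QueueDeclare("notification-request", durable: true, exclusive: false, autoDelete: false, arguments: arguments);

The same arguments can also be applied through a RabbitMQ policy instead of being hard-coded at declaration time.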