In some cases, I use Kafka Streams to model a small in-memory (HashMap) projection of a topic. The K/V cache requires some manipulation of the records, so it is not a good fit for a GlobalKTable. In such a “caching” scenario, I want all my sibling instances to hold the same cache, so I need to bypass the consumer-group mechanism.
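Roughly, the topology looks like the sketch below (String serdes, the topic/store names, and the mapValues step are just placeholders for the real manipulation):

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

public class CacheTopology {
    static KafkaStreams build(Properties props) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("source-topic", Consumed.with(Serdes.String(), Serdes.String()))
               // the per-record manipulation that rules out a plain GlobalKTable
               .mapValues(value -> value.toUpperCase())
               // keep the projection in an in-memory key/value store
               .toTable(Materialized.<String, String>as(Stores.inMemoryKeyValueStore("cache-store"))
                        .withKeySerde(Serdes.String())
                        .withValueSerde(Serdes.String()));
        return new KafkaStreams(builder.build(), props);
    }
}
```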
To achieve this, I normally just start my apps with a randomly generated application ID, so each instance reloads the topic from scratch every time it restarts. The only caveat is that I end up with orphaned consumer groups on the Kafka brokers until offsets.retention.minutes expires, which is not ideal for our operational monitoring tools. Any idea how to work around this?
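For reference, my current configuration looks more or less like this (the bootstrap address and the ID prefix are placeholders):

```java
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class EphemeralAppIdConfig {
    static Properties streamsConfig() {
        Properties props = new Properties();
        // Fresh application.id (i.e. consumer group) per instance and per restart,
        // so every sibling instance rebuilds its own full copy of the cache.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-cache-" + UUID.randomUUID());
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Start from the beginning of the topic, since the new group has no committed offsets.
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG), "earliest");
        return props;
    }
}
```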
- Can we configure the applicationId to be ephemeral, so that it disappears once the app dies?
- Or could we force the consumer to manage its offsets only locally?
- Or is there some Java admin API I could use to clean up my consumer group ID when gracefully shutting down the app (see the sketch below)?
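For that last option, something along these lines is what I have in mind, assuming AdminClient.deleteConsumerGroups is the right call (the bootstrap address is hardcoded as a placeholder):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.streams.KafkaStreams;

public class GroupCleanup {
    // Graceful shutdown: stop the Streams app so the group becomes empty,
    // then delete the now-orphaned consumer group (group id == application.id).
    static void shutdown(KafkaStreams streams, String applicationId) {
        streams.close();
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            admin.deleteConsumerGroups(Collections.singleton(applicationId)).all().get();
        } catch (Exception e) {
            // best-effort: if deletion fails, the group still expires after offsets.retention.minutes
        }
    }
}
```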
Thanks