
I have a couple of containers running in Azure Container Apps. Each of them processes messages arriving at an Event Hub on its own partition. However, once a certain number of unread messages accumulates, the container app scales out and deploys another replica of each container, and then two consumers from the same consumer group end up reading the same partition.

The error message I got is:

An error occurred while receiving. The exception is ConnectionLostError("New receiver
'nil' with higher epoch of '0' is created hence current receiver 'nil' with epoch '0'
is getting disconnected. If you are recreating the receiver, make sure a higher epoch is used.")

I have tried experimenting with load_balancing_interval and load_balancing_strategy so that the consumers can swap ownership of a partition once they have finished reading its messages, but it didn't work. Is there a way to make the clients scalable while keeping them in the same consumer group?

Here is how I create consumers:

import os

from azure.eventhub.aio import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblobaio import BlobCheckpointStore


async def receive_batch():
    checkpoint_store = BlobCheckpointStore.from_connection_string(
        conn_str=os.getenv("ST_BLOB_CONN_STR"),
        container_name="event-hub-checkpoint",
    )
    consumer_client = EventHubConsumerClient.from_connection_string(
        conn_str=os.getenv("CONN_STR"),
        eventhub_name="finance_data",
        consumer_group="$Default",
        checkpoint_store=checkpoint_store,
        logging_enable=True,
        partition_ownership_expiration_interval=3,  # default is 60 seconds
        load_balancing_strategy="balanced",
    )

    async with consumer_client:
        await consumer_client.receive_batch(
            on_event_batch=on_event_batch,
            on_partition_initialize=on_partition_initialize,
            max_batch_size=100,
            starting_position="-1",  # "-1" is from the beginning of the partition.
        )
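
The on_event_batch and on_partition_initialize callbacks are not shown in the question, so the following is only a minimal sketch of what they might look like, not the asker's actual handlers. The update_checkpoint call is what records progress in the same blob container that the load balancer uses to track partition ownership.

async def on_partition_initialize(partition_context):
    # Called when this client instance claims ownership of a partition.
    print(f"Partition {partition_context.partition_id} has been initialized.")


async def on_event_batch(partition_context, event_batch):
    # Process the batch, then record progress in the blob checkpoint store so
    # that whichever instance owns this partition next resumes from here.
    print(f"Received {len(event_batch)} events from partition "
          f"{partition_context.partition_id}.")
    await partition_context.update_checkpoint()
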
  • Can you please share details about how you're creating your consumers? If configured to use the same consumer group and storage location, the instances will coordinate to share work between them, ensuring each partition has a single owner. For more context: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventhub/azure-eventhub#consume-events-and-save-checkpoints-using-a-checkpoint-store – Jesse Squire Oct 05 '22 at 14:22
  • I edited the post so now you can see. My main problem is that when another container of the same image gets deployed, it will create a consumer that reads the same partition as the original. So I am looking for some way to make 2 clients read the same partition from the same consumer group, if that makes sense. – duxi514 Oct 06 '22 at 06:02
  • If the consumers are using the same consumer group and same blob storage container, they will coordinate to ensure that each partition has a single reader. The behavior that you're seeing indicates that you're likely using two different blob storage containers. If that isn't the case, please open an issue for the Azure SDK team to investigate: https://github.com/Azure/azure-sdk-for-python/issues – Jesse Squire Oct 06 '22 at 13:28
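
If it is unclear whether both replicas really share the same checkpoint store, the ownership records in the blob container can be inspected directly. This is a minimal diagnostic sketch, assuming the same connection string and container name as in the question; the fully qualified namespace value is a placeholder to fill in.

import asyncio
import os

from azure.eventhub.extensions.checkpointstoreblobaio import BlobCheckpointStore


async def show_partition_owners():
    # List which client instance currently owns each partition, according to
    # the shared blob checkpoint store.
    checkpoint_store = BlobCheckpointStore.from_connection_string(
        conn_str=os.getenv("ST_BLOB_CONN_STR"),
        container_name="event-hub-checkpoint",
    )
    ownership = await checkpoint_store.list_ownership(
        fully_qualified_namespace="<namespace>.servicebus.windows.net",  # placeholder
        eventhub_name="finance_data",
        consumer_group="$Default",
    )
    for record in ownership:
        print(record["partition_id"], record["owner_id"])


# Example usage: asyncio.run(show_partition_owners())

If every partition shows a distinct owner_id drawn from the running instances, the coordination described in the comment above is working; if each replica only ever sees itself, the replicas are most likely pointing at different storage containers.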

0 Answers