I want to copy events including their header data unchanged to another Event Hub.

What I tried so far:

  • an Azure Function with an Event Hubs Trigger and an Event Hubs Output. The function was implemented in C#, because that's the only runtime I found where I get access to the headers. The problem I saw here is that when headers are of type byte[], the function fails on the output side with a message that it cannot serialize them. The messages are written to the source Event Hub with Kafka, which means all headers will be of type byte[].
  • A simple Spring Cloud Stream application deployed to our OpenShift cluster. This works, but means an extra deployment to operate when we would have liked to have a serverless solution.
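Since Kafka writes every header value as `byte[]`, one workaround for the failing output binding is to decode the byte-array property values before re-publishing. The following is a minimal sketch only, assuming the `Azure.Messaging.EventHubs` SDK and UTF-8 encoded header values (adjust the decoding if your headers carry binary data); `CopyWithDecodedHeaders` is a hypothetical helper name:

```csharp
using System.Text;
using Azure.Messaging.EventHubs;

static EventData CopyWithDecodedHeaders(EventData source)
{
    var copy = new EventData(source.EventBody);

    foreach (var (key, value) in source.Properties)
    {
        // Kafka headers arrive as byte[]; the Functions output binding
        // cannot serialize them, so decode each one to a string first.
        copy.Properties[key] = value is byte[] bytes
            ? Encoding.UTF8.GetString(bytes)
            : value;
    }

    return copy;
}
```

Note that this changes the header type on the destination hub, so a Kafka consumer reading the copy would see string-encoded values rather than the original bytes.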

Are there simpler ways to do this?

lbilger
  • Can you define "Headers" in this context? Are you referring to the AMQP Message header or another piece of data? It would also be helpful to know what the intended usage scenario for the replicated copy is. – Jesse Squire Oct 01 '21 at 14:08
  • Same as in [this](https://stackoverflow.com/questions/69405102/is-there-a-way-to-output-events-with-header-data-to-azure-event-hubs-using-azure?noredirect=1#comment122677453_69405102) question, which actually resulted from another try to get this working with Azure Functions, by _headers_ I mean the `Properties` metadata. I forgot to mention that I also need to copy the `PartitionKey`. – lbilger Oct 02 '21 at 17:23
  • My use case is that I want to work around the 20-consumer-groups-per-event-hub limitation by mirroring the events in a second event hub that can have another 20 consumer groups. But I can think of several other cases where this could be useful, e.g. when migrating from one Namespace to another and you want messages to be present in both Namespaces until all consumers have switched over. – lbilger Oct 02 '21 at 17:27

1 Answer

There is a set of Event replication tasks for Azure Functions which are intended to do the translation work and make forwarding events to a second Event Hub easy.

That said, I do not know if it supports maintaining the partition key when doing so - you'd want to test that out to be sure. If not, you would need to manipulate the underlying AMQP Message to attach it.

To do so, you'd call `GetRawAmqpMessage` on your destination `EventData` instance. On the `AmqpAnnotatedMessage` that gets returned, you'd inject the partition key into the Message Annotations section manually by adding an item with the key `x-opt-partition-key` and the value of the partition key that you'd like it to reflect.
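As a rough sketch of that annotation step, assuming `Azure.Messaging.EventHubs` v5.4.0 or later (where `GetRawAmqpMessage` is available) and a hypothetical helper name `StampPartitionKey`:

```csharp
using Azure.Messaging.EventHubs;

static void StampPartitionKey(EventData destination, string partitionKey)
{
    // The service reads the partition key from this AMQP message annotation,
    // so setting it here should have the same effect as publishing with a
    // partition key.
    var amqpMessage = destination.GetRawAmqpMessage();
    amqpMessage.MessageAnnotations["x-opt-partition-key"] = partitionKey;
}
```

You'd call this on each copied event before handing it to the output side; do verify the behavior against your namespace, as this manipulates the message at the AMQP level rather than through the higher-level API.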

If the replication tasks don't meet your needs for some reason, the best approach would likely be manually publishing events using the method that is discussed in this answer.

Jesse Squire
  • Thanks for the answer! Good to know the problem was already solved by someone and this looks like exactly what I want. I tried it out and learned a lot in the process, but in the end I get the same error message I got with my own C# script-based function: `2021-10-06T08:19:47.923 [Error] Executed 'Functions.copy-all' (Failed, Id=4c649116-0fff-4013-b104-d6f79885b280, Duration=2ms)Serialization operation failed due to unsupported type System.Byte[].` Should I raise this as an issue in the GitHub project? – lbilger Oct 06 '21 at 08:28
  • I would, yes. I'm not familiar with the form for how Kafka is encoding the data; you may need to do some manipulation as part of the process. If you'd be so kind, when you open the issue, add a mention for @jsquire. – Jesse Squire Oct 06 '21 at 14:51
  • 1
    Thanks, Jesse! I have raised the issue and mentioned you and this question. However, I just learned that I don't need all of this. I learned that Kafka consumer groups are not the same as Event Hubs consumer groups and the 20-consumer-groups limit does not apply to Kafka, so my use case is void. I will accept your answer because I think it's the best way to do it when you're not using Kafka and maybe even the Kafka header issue will get fixed some day. – lbilger Oct 07 '21 at 18:52
  • Thanks for the update and I'm glad to hear that things worked out. I'll follow up on the issue that you opened; even if not helpful for you, I'm guessing this is a scenario that wasn't considered for the replicator, and it would be nice to raise awareness of it. – Jesse Squire Oct 07 '21 at 19:41