Assuming the topic definition in the new cluster is exactly the same (i.e., number of partitions, retention, etc.) and the producer's hashing function on the message key will deliver your message to the same partition (it will be a bummer if you have null keys, because those messages end up in random partitions), you can simply consume from earliest on your old Kafka cluster's topic and produce to the new topic in the new cluster, using a custom consumer/producer pair or a tool like Logstash.
If you want to be extra sure to get the same ordering, use only one consumer per topic, and if your consumer supports single-threaded operation, even better (it helps avoid race conditions). A sketch of such a copier follows below.
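As a rough illustration, here is a minimal single-threaded copier along those lines. It is a sketch, not a hardened tool: the bootstrap addresses, topic name, and group id are all placeholders, and it assumes you treat keys and values as opaque bytes so nothing is re-serialized on the way through:

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class TopicCopier {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "old-cluster:9092"); // placeholder
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "topic-migration");
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // start from the beginning
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "new-cluster:9092"); // placeholder
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        producerProps.put(ProducerConfig.ACKS_CONFIG, "all"); // don't lose messages in transit

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic name
            while (true) { // run until the topic is fully copied, then stop it manually
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    // Forward the original key unchanged so the default partitioner
                    // hashes it to the same partition number on the new cluster.
                    producer.send(new ProducerRecord<>("my-topic", record.key(), record.value()));
                }
            }
        }
    }
}
```

Because one consumer reads partitions in order and one single-threaded producer forwards them, per-partition ordering is preserved end to end.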
You might also try more common solutions like MirrorMaker, but be advised that MirrorMaker's ordering guarantees amount to:
> The MirrorMaker process will, however, retain and use the message key for partitioning so order is preserved on a per-key basis.
Note: As stated in the first solution and as cricket_007 said, this only works if you were using the default partitioner and intend to keep using it in the new cluster.
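For reference, a typical legacy MirrorMaker invocation looks something like this (the two `.properties` files and the topic name are placeholders; the consumer config points at the old cluster and the producer config at the new one):

```sh
bin/kafka-mirror-maker.sh \
  --consumer.config old-cluster-consumer.properties \
  --producer.config new-cluster-producer.properties \
  --whitelist "my-topic"
```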
In the end, if everything goes OK, you can manually copy your consumer offsets from the old Kafka cluster and set them on the corresponding consumer groups in your new cluster.
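One way to do that programmatically is the AdminClient's alterConsumerGroupOffsets call (available since Kafka 2.5). This sketch assumes you have already read the offsets from the old cluster (e.g. with `kafka-consumer-groups.sh --describe`), that the new topic received an identical 1:1 copy starting at offset 0 (otherwise the old offsets won't line up), and that the group, topic, and offset values, all placeholders here, are replaced with yours:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;
import java.util.Properties;

public class OffsetCopier {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "new-cluster:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // Offsets read beforehand from the old cluster; values are placeholders.
            Map<TopicPartition, OffsetAndMetadata> offsets = Map.of(
                    new TopicPartition("my-topic", 0), new OffsetAndMetadata(1234L),
                    new TopicPartition("my-topic", 1), new OffsetAndMetadata(5678L));

            // The consumer group must have no active members for this call to succeed.
            admin.alterConsumerGroupOffsets("my-consumer-group", offsets).all().get();
        }
    }
}
```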
Disclaimer: This is purely theoretical. I've never tried a migration with hard requirements like these.