I am looking for the best design approach to a logging problem. I am using Akka actors in clusters for my back-end services and Play in the front-end to accept HTTP requests. My question extends the well-known problem of making the whole application log identifiable per HTTP request, which is usually solved with the MDC that exists in most current logging frameworks: generate a UUID at the beginning of the request and put it in the context.
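To illustrate what the MDC approach gives me within a single system, here is a toy, dependency-free stand-in for what a real MDC (e.g. Logback/SLF4J's) does: a per-thread key/value context that the log pattern can reference, populated with a fresh UUID at the start of each request. The names here are hypothetical; the real thing would be `org.slf4j.MDC`.

```scala
import java.util.UUID

// Toy stand-in for a logging framework's MDC: a per-thread context map.
object ToyMdc {
  private val ctx = new ThreadLocal[Map[String, String]] {
    override def initialValue(): Map[String, String] = Map.empty
  }
  def put(key: String, value: String): Unit = ctx.set(ctx.get + (key -> value))
  def get(key: String): Option[String]      = ctx.get.get(key)
  def remove(key: String): Unit             = ctx.set(ctx.get - key)
}

// At the start of an HTTP request in System A:
val requestId = UUID.randomUUID().toString
ToyMdc.put("requestId", requestId)
// Every log call on this thread can now read the id (a real framework
// does this via a pattern token such as %X{requestId}).
```

This works fine inside one JVM, but the context is thread-local, so it is lost as soon as the work crosses an actor or network boundary, which is exactly the problem below.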
An example of our data flow might look like:
"Http Request/System A" -> "Actor1/Cluster B" -> "Actor2/Cluster C" -> "Reply to System A and complete request"
This means there are at least 3 separate systems involved in the process. All my logs go to Logstash. I can generate a UUID at the start of the request in System A. However, I would like that UUID to be carried over/piggy-backed to all the sub-systems (which communicate with each other via Protobuf serialisation) as they process jobs belonging to the same HTTP request.
I know I can always add an id field to all my messages, but this is very ugly.
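For illustration, the two obvious shapes of that idea look roughly like this in plain Scala (all names are hypothetical; the real messages are Protobuf-generated):

```scala
import java.util.UUID

// Shape 1: every inter-system message mixes in a trait, so each message
// carries its own correlation-id field alongside its business fields.
trait Correlated { def correlationId: String }
final case class ProcessJob(correlationId: String, jobId: Long) extends Correlated

// Shape 2: a generic envelope wraps any business payload once, keeping
// the business message itself free of logging concerns.
final case class Envelope[A](correlationId: String, payload: A)
final case class JobPayload(jobId: Long)

val requestId = UUID.randomUUID().toString
val direct    = ProcessJob(requestId, 42L)     // id pollutes every message type
val wrapped   = Envelope(requestId, JobPayload(42L)) // id lives only in the wrapper
```

Either way, the correlation id still has to be threaded through every send and unwrapped at every receive, which is the noise I am trying to avoid.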
Is there a better mechanism to carry this information across all the participating Akka systems without introducing too much noise into my business logic?