The way I have seen this done in the past is to add a custom header containing the current timestamp (including milliseconds) to each request as you reverse proxy it. In your case this would be done in your nginx config, something like this:
proxy_set_header X-Request-Start "t=${msec}";
Then on the Apache side you can do the same thing before it processes the request.
For Apache (with mod_headers; %t expands to the time the request was received, in microseconds since the epoch, prefixed with "t="):
RequestHeader set X-Request-Start-2 "%t"
You could even record when the response is finished and have three points in time to compare.
Then, in your logs, in Django, or in your metrics-gathering system, you can compare the timestamps to see how long a request takes to go from nginx to Apache; that is your request queue time. It should be very small, but if Apache isn't tuned correctly, requests can pile up in the queue while waiting for others to be processed.
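As a rough sketch of the comparison step, here is what that could look like in Python (e.g. in a Django middleware). Note the two header formats differ: nginx's ${msec} is seconds with millisecond precision, while Apache's %t is microseconds since the epoch. The helper names here are made up for illustration:

```python
def parse_request_start(header_value):
    """Parse an X-Request-Start style header into seconds since the epoch.

    nginx ${msec} looks like "t=1504021308.123" (seconds.milliseconds);
    Apache %t looks like "t=1504021308123456" (microseconds).
    """
    raw = header_value.partition("t=")[2] or header_value
    value = float(raw)
    if value > 1e12:      # magnitude this large means microseconds (Apache %t)
        return value / 1e6
    return value          # already in seconds (nginx ${msec})

def queue_time_ms(nginx_header, apache_header):
    """Milliseconds the request spent between nginx and Apache."""
    return (parse_request_start(apache_header)
            - parse_request_start(nginx_header)) * 1000

# Example: nginx stamped the request, Apache saw it 25 ms later.
print(round(queue_time_ms("t=1504021308.000", "t=1504021308025000")))  # 25
```

You would read these values from `request.META["HTTP_X_REQUEST_START"]` and `HTTP_X_REQUEST_START_2` in Django and ship the delta to your logging or metrics backend.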
Alternatively, you can use something like New Relic, which can break down all the details of a request and show you the results in a nice graph. They even have a free tier that does what you are looking for, and they support Docker.