
I have a Django application running inside a Docker container. For some reason the application has become very slow, and I want to profile it.

To start, I checked the Apache and nginx logs, but I want a fuller picture. How can I find the exact time the Docker host receives a request and the exact time the Docker container receives it?

Any help will be awesome!!!

danny
  • Can you explain how you have everything set up? Do you have a load balancer handling traffic before it reaches the Docker host? How are your containers set up? What are you using as a WSGI server for Django? Where do Apache and nginx come into play, and why both? – Ken Cochrane May 02 '16 at 10:54
  • @KenCochrane There is a lot I would need to explain to answer all of that. Instead, what I really need is how to get the exact time the Docker host and the Docker container receive a request; that would be very helpful. Sorry for not giving full details. If you have any idea, please share! Thanks – danny May 02 '16 at 10:59
  • In order to give you an answer we need the details, because the answer depends on your setup. There is no generic way to do what you are asking. – Ken Cochrane May 02 '16 at 11:06
  • I have apache2 installed inside Docker, and nginx as a load balancer. I am a newbie to all of this, so I don't have much of an idea about it. I need to profile all of these components: apache2, nginx, the Docker host, and the Docker container. – danny May 02 '16 at 11:51
  • It might be good to add those details to the question, so it is easier for others to find. Does nginx run in a container as well, or directly on the host? – Ken Cochrane May 02 '16 at 11:59

1 Answer


The way I have seen this done in the past is to add a custom header with the current timestamp, including milliseconds, to each request as you reverse proxy it. In your case this would be done in your nginx config, with something like this:

proxy_set_header X-Request-Start "t=${msec}";

Then on the Apache side, before it processes the request, you can do the same thing (the RequestHeader directive comes from mod_headers, which needs to be enabled).

For Apache

RequestHeader set X-Request-Start-2 "%t"

You could even record when the response is finished and have 3 points in time to compare.

Then in your logs, in Django, or in your metrics-gathering system, you can compare the timestamps to find out how long it takes to go from nginx to Apache; that is the request queue time. It should be very short, but if Apache isn't tuned correctly, requests can end up queued while waiting for others to be processed.
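If you want to do the comparison inside Django itself, one option (not part of the original setup described above, just a rough sketch) is a small middleware that pulls both headers out of the request and logs the deltas. The header names follow the nginx and Apache directives above; note that nginx's $msec is in seconds with millisecond precision, while Apache's %t is in microseconds since the epoch.

import logging
import time

logger = logging.getLogger(__name__)

class RequestQueueTimingMiddleware(object):
    """Hypothetical middleware: logs time spent between nginx, Apache, and Django."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        django_start = time.time()

        # Set by nginx: "t=<seconds.milliseconds>" from $msec
        nginx_start = self._parse(request.META.get('HTTP_X_REQUEST_START'))
        # Set by Apache: "t=<microseconds since the epoch>" from %t
        apache_start = self._parse(request.META.get('HTTP_X_REQUEST_START_2'))

        if nginx_start and apache_start:
            logger.info('nginx -> Apache: %.1f ms', (apache_start - nginx_start) * 1000)
        if apache_start:
            logger.info('Apache -> Django: %.1f ms', (django_start - apache_start) * 1000)

        return self.get_response(request)

    @staticmethod
    def _parse(raw):
        # Turns "t=1462185000.123" or "t=1462185000123456" into epoch seconds.
        if not raw:
            return None
        value = float(raw.replace('t=', ''))
        if value > 1e11:  # Apache's %t is in microseconds, so scale it down.
            value /= 1e6
        return value

This uses the Django 1.10+ middleware style; add it near the top of the MIDDLEWARE setting so the timing is captured before your views run.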

You can also use something like New Relic, which can capture all the details of the request and show you the results in a nice graph. They even have a free tier that does what you are looking for, and they support Docker.

Ken Cochrane