
I am trying to profile Python code run by a uWSGI server. A normal request takes 1-2 seconds, makes various database calls, accesses data in Redis and Memcached, does some I/O and, at the end, returns a JSON response to the user.

When using New Relic for monitoring, what would be the impact on the server? By how much can response times be slowed down? Is there any direct method to measure this overhead, or some non-trivial way to estimate it?
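One direct (if crude) way to answer the "how much slower" question yourself is to benchmark the same endpoint twice, once with the New Relic agent enabled and once without, and compare the latency distributions. A minimal sketch, assuming a locally running server and a hypothetical `/api/report` endpoint:

```python
# Sketch: collect a latency baseline for one endpoint. Run it once with the
# New Relic agent enabled and once with it disabled, then compare the numbers.
# The URL and request count below are hypothetical placeholders.
import statistics
import time

import requests

URL = "http://localhost:8000/api/report"  # hypothetical endpoint
N = 200

latencies = []
for _ in range(N):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    latencies.append(time.perf_counter() - start)

print(f"median: {statistics.median(latencies) * 1000:.1f} ms")
print(f"p95:    {sorted(latencies)[int(N * 0.95)] * 1000:.1f} ms")
```

Compare medians and high percentiles rather than means: on a 1-2 second request dominated by I/O, a few milliseconds of instrumentation CPU will barely move the average but can show up in the tail.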

  • I can't speak to New Relic, but I was an early, happy customer of Tracelytics -- and part of the point of the sampling-based approach is that you don't instrument *every* request, but only a percentage of them, building up a statistical profile while keeping the performance impact negligible. – Charles Duffy Feb 19 '14 at 16:46
  • Anyhow -- in most cases, you won't be CPU-bound on your application servers, and the measurement overhead is almost all CPU, so the measurement cost doesn't tend to conflict with where your bottlenecks are. – Charles Duffy Feb 19 '14 at 16:49
  • That said -- it's possible for your application to hit a corner case where the impact is measurable even with a small sampling ratio, so I don't know that anyone here could give you guarantees that could be trusted to apply to an application we've never seen. – Charles Duffy Feb 19 '14 at 16:50
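To illustrate the sampling idea from the comments above: profiling only a small fraction of requests keeps the average per-request cost near zero. The following is a generic WSGI middleware sketch using `cProfile`, not New Relic's actual mechanism; the sample rate and dump path are arbitrary choices:

```python
# Sketch of sampling-based profiling: profile ~1% of requests so the
# per-request overhead is negligible on average. Generic WSGI middleware,
# not New Relic's implementation.
import cProfile
import random
import time

SAMPLE_RATE = 0.01  # fraction of requests to profile

class SamplingProfilerMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        if random.random() >= SAMPLE_RATE:
            # Fast path: 99% of requests pay essentially nothing.
            return self.app(environ, start_response)
        profiler = cProfile.Profile()
        try:
            return profiler.runcall(self.app, environ, start_response)
        finally:
            profiler.dump_stats(f"/tmp/req-{time.time():.0f}.prof")
```

Note that for a streaming response the body iteration happens outside the profiled call, which is acceptable for a rough sketch like this.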

1 Answer


The impact depends somewhat on your specific framework, but ultimately on how many instrumented nodes are encountered per request. One of the New Relic agent developers addressed this topic here: Looking to quantify the performance overhead of NewRelic monitoring in python django app
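To see why the number of instrumented nodes is the driver: each wrapped call adds a small fixed bookkeeping cost, so the total overhead scales with how many instrumented calls a request passes through. A rough micro-benchmark sketch (the decorator below only mimics instrumentation bookkeeping; it is not the New Relic wrapper):

```python
# Sketch: simulate per-node instrumentation cost with a timing wrapper and
# compare against the unwrapped call. The wrapper is a stand-in, not New
# Relic's actual instrumentation.
import functools
import time
import timeit

def instrument(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            _ = time.perf_counter() - start  # stand-in for recording a metric
    return wrapper

def work():
    return sum(range(100))

instrumented_work = instrument(work)

print("plain:       ", timeit.timeit(work, number=100_000))
print("instrumented:", timeit.timeit(instrumented_work, number=100_000))
```

The absolute gap per call is tiny, but it multiplies by the number of instrumented calls in a request, which is why a deeply instrumented request costs more than a shallow one.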
