Our C# web app, running on Azure, uses System.Diagnostics.Trace to write trace statements for debugging/troubleshooting. Once we enable blob storage for these logs (using the "Application Logging (blob)" option in the Azure portal), the response time for our application slows down considerably. If I turn this option off, the web app speeds up again (though obviously we don't get logs in blob storage anymore).

Does anyone know if this is expected? We certainly write a lot of trace statements on every request (100 or so per request), but I would not think this is unusual for a web application. Is there some way to diagnose why enabling blob storage for the logs dramatically slows down the execution of these trace statements? Is writing the trace statement synchronous with the logs being updated in blob storage, for instance?

Auth Infant
  • Is your web application hosted on a VM or an App Service? In either case, are both the web application and the storage account part of the same resource group and location? – Baskar Rao Nov 07 '17 at 04:27
  • @Baskar Resource Groups have nothing to do with performance or service location. – David Makogon Nov 07 '17 at 12:07
  • It is an App Service. Both the App Service and the storage account are in West US 2. How much of a difference would geolocation make? I would have thought that pushing logs to blob storage would be asynchronous with respect to the actual call to Trace (you wouldn't want every call to Trace to make a network call, would you? Although that might explain why it's going so slowly? I don't see how logging could ever practically work with a model like that, though). – Auth Infant Nov 07 '17 at 15:08
  • @DavidMakogon I wanted to make sure the network latency is taken into consideration as per the below docs. Client Network Capability section. https://learn.microsoft.com/en-us/azure/storage/common/storage-performance-checklist – Baskar Rao Nov 07 '17 at 19:47
  • @Baskar I get that. And asking if resources are colocated in the same region is a great question. But Resource Groups have nothing to do with resource location and play no part in performance or latency. – David Makogon Nov 07 '17 at 20:50
  • @BouillabaisseCurdlesnoot is this still a factor for you, or did you find a solution? I'm running into the same issue, and am starting to wonder if blob storage is even worth it for logs... – akaioi Nov 17 '17 at 21:36
  • 1
    @akaioi I was able to figure out a workaround/solution and I just posted it as an answer to my question here. Let me know if that helps. – Auth Infant Dec 05 '17 at 17:22

1 Answer

I was unable to find any information about how logging to blob storage in Azure was implemented. However, this is what I was able to deduce:

I confirmed that disabling the global lock had no effect. Therefore, the performance problem was not directly related to lock contention.

I also confirmed that the performance problem did not occur if I turned AutoFlush off.

From cross-referencing the source code for the .NET trace API, my conclusion is that when you enable blob storage for logs, Azure injects some kind of trace listener into your application (the same way you might add a listener in web.config), and that listener synchronously writes every trace statement it receives to blob storage.
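For comparison, this is how a listener is normally wired up in web.config; the blob-storage listener appears to be injected the same way, just programmatically at startup rather than through configuration:

```xml
<system.diagnostics>
  <trace autoflush="true">
    <listeners>
      <!-- A standard file listener, shown for comparison; the Azure
           blob-storage listener is not visible here because it is
           injected at runtime -->
      <add name="fileListener"
           type="System.Diagnostics.TextWriterTraceListener"
           initializeData="app.log" />
    </listeners>
  </trace>
</system.diagnostics>
```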

As such, it seems there are a few ways to work around this behavior:

  1. Don't turn on AutoFlush; instead, flush manually on a periodic basis. This prevents the synchronous blob writes from interrupting every log statement.
  2. Write your own daemon that periodically copies local log files to blob storage, or something similar.
  3. Don't use the blob storage feature at all; instead, leverage the tracing functionality in Application Insights.
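A minimal sketch of option 1, assuming you can run a background timer in your app (the `TraceFlusher` class, its interval, and the `Start` entry point are all illustrative, not part of any Azure API):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class TraceFlusher
{
    private static Timer _timer;

    public static void Start()
    {
        // Leave AutoFlush off so individual Trace.WriteLine calls
        // don't block on the (slow) blob-storage listener.
        Trace.AutoFlush = false;

        // Flush buffered trace output every 10 seconds (interval is
        // illustrative; tune to taste). The blob writes now happen on
        // this timer thread instead of on every request thread.
        _timer = new Timer(
            _ => Trace.Flush(),
            null,
            TimeSpan.FromSeconds(10),
            TimeSpan.FromSeconds(10));
    }
}
```

You could call `TraceFlusher.Start()` from `Application_Start` in Global.asax; note that any buffered traces not yet flushed may be lost if the process recycles.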

I ended up doing #3 because, as it turns out, we already had Application Insights configured and enabled; we just didn't realize it could handle trace logging and querying. After disabling sampling for trace events, we now have a way to easily query for any log statement remotely and get the full set of traces matching any criteria (keyword match, all traces for a particular request, all traces in a particular time period, etc.). Moreover, there is no noticeable synchronous overhead to writing log statements with the Application Insights trace listener, so nothing in our application has to change (we can continue using the .NET Trace class). As a bonus, since Application Insights tracing is pretty flexible about the tracing source, we can even switch to another higher-performance logging API (e.g. ETW or log4net) if needed and Application Insights will still work.
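For anyone wanting to replicate this: the Microsoft.ApplicationInsights.TraceListener NuGet package is what wires System.Diagnostics.Trace into Application Insights. Installing it adds a listener registration along these lines to web.config (the exact type/assembly details may vary by package version):

```xml
<system.diagnostics>
  <trace>
    <listeners>
      <!-- Forwards Trace.WriteLine etc. to Application Insights;
           added by the Microsoft.ApplicationInsights.TraceListener package -->
      <add name="appInsightsListener"
           type="Microsoft.ApplicationInsights.TraceListener.ApplicationInsightsTraceListener, Microsoft.ApplicationInsights.TraceListener" />
    </listeners>
  </trace>
</system.diagnostics>
```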

Ultimately, you should consider using Application Insights for storing and querying your traces. Depending on why you wanted your logs in blob storage in the first place, it may or may not meet your needs, but it worked for us.

Auth Infant