10

I have a Web Application running in Azure and I've had a few outages where the server is unresponsive. When I look in the IIS log, I see HTTP 500 errors with a sc-substatus of 121 and a sc-win32-status of 0.

Omitting all other fields, the logs look like this, in this order:

sc-status sc-substatus sc-win32-status
500 121 0

I can't find reference to a 500.121 error anywhere online.

Chad Gilbert

2 Answers

14

I just got this from one of the Azure Software Engineers:

121 is a timeout event which basically means that the request spent 230 seconds on the worker VM without initiating any read/write IO on the connection. It is highly likely for this to be an application issue, but doesn’t necessarily have to be so.

The IIS logs all have a time-taken value hovering around 230 seconds. Mystery solved.
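
If you really do have work that legitimately needs more than 230 seconds, the usual workaround is to get it off the request thread and answer the client immediately, so the request Azure is timing never blocks that long. A minimal sketch, assuming an ASP.NET Web API 2 controller on .NET 4.5.2+ (the controller, route and placeholder work are illustrative, not from my app):

```csharp
using System;
using System.Net;
using System.Threading.Tasks;
using System.Web.Hosting;
using System.Web.Http;

public class ReportsController : ApiController
{
    // POST api/reports - start the long-running work and answer immediately,
    // so the request itself never approaches the ~230 second front-end timeout.
    [HttpPost]
    public IHttpActionResult StartReport()
    {
        var jobId = Guid.NewGuid();

        // QueueBackgroundWorkItem (.NET 4.5.2+) keeps the app domain alive while
        // the work runs, but the HTTP response has already gone back to the client.
        HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
        {
            // Placeholder for the real long-running work (report generation, export, etc.).
            await Task.Delay(TimeSpan.FromMinutes(10), cancellationToken);
        });

        // 202 Accepted plus an id the client can use to poll a status endpoint (not shown).
        return Content(HttpStatusCode.Accepted, new { jobId });
    }
}
```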

Chad Gilbert
  • Did you find that this was in fact caused by an application issue? I'm seeing the same thing right now, about every two hours, but (apart from the light your post sheds) it's still a complete mystery. – harpo Dec 30 '15 at 21:18
  • I had a memory leak in some code where I was trying to be smarter than the garbage collector (I wasn't), and it eventually manifested in super-long-running requests. At 230 seconds, Azure aborted the HTTP request and responded with the 500.121 error. My app didn't complete in time, so Azure cut it off. – Chad Gilbert Dec 30 '15 at 21:51
4

We saw the same issue.

TL;DR: a deadlock bug in the Always Encrypted Azure Key Vault provider, fixed in its 2.1.0 release (details below).

Symptoms

  • We began getting 500.121 errors in production.
  • All response times were just over 230 seconds.
  • The occurrences were always about 2 hours apart... sometimes slightly longer.

Logging

  • Our own logging showed nothing.
  • ELMAH showed nothing.
  • Failed Request Tracing showed nothing.
  • The only indication was in the IIS logs (see the sketch after this list).
  • We also collected .NET Profiler Traces (with Thread Stacks included) from the Diagnose and solve problems tab of the App Service in the Azure portal. A number of the thread stacks were doing work related to Always Encrypted. That was only mildly curious at first, but it was what finally got us looking at Always Encrypted as the culprit.
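
Since the IIS logs were the only place the failures showed up, the quickest confirmation was to pull just the 500.121 lines and their time-taken values out of them. A minimal sketch in C#, assuming W3C-format logs with a #Fields directive; the folder path is a placeholder, not our actual layout:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class IisLogScan
{
    static void Main()
    {
        // Placeholder path - on an App Service, raw HTTP logs typically land under D:\home\LogFiles\http\RawLogs.
        var logFolder = @"D:\home\LogFiles\http\RawLogs";

        foreach (var file in Directory.EnumerateFiles(logFolder, "*.log"))
        {
            string[] fields = null;

            foreach (var line in File.ReadLines(file))
            {
                // The #Fields directive names the columns for the data lines that follow it.
                if (line.StartsWith("#Fields:"))
                {
                    fields = line.Substring("#Fields:".Length)
                                 .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                    continue;
                }
                if (line.StartsWith("#") || fields == null)
                    continue;

                var values = line.Split(' ');
                if (values.Length != fields.Length)
                    continue;

                var row = fields.Zip(values, (name, value) => new { name, value })
                                .ToDictionary(x => x.name, x => x.value);

                // Only interested in the 500.121 responses; time-taken is logged in milliseconds.
                string status, substatus, timeTaken;
                if (row.TryGetValue("sc-status", out status) && status == "500" &&
                    row.TryGetValue("sc-substatus", out substatus) && substatus == "121")
                {
                    row.TryGetValue("time-taken", out timeTaken);
                    Console.WriteLine("{0}: time-taken={1}ms", Path.GetFileName(file), timeTaken);
                }
            }
        }
    }
}
```

Anything it prints with a time-taken around 230,000 ms is the front-end timeout described in the accepted answer rather than a normal application error.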

Reproduction

  • The same code was deployed in 4 different environments.
  • It was only happening in 2 of those environments.
  • We had no reliable way to trigger it on demand.

MS Support Issue

Consider the following scenario:

You have Microsoft .NET Framework applications that use Always Encrypted in SQL Server 2016 or Azure SQL Database.
The column master keys for these applications are stored in the Azure Key Vault. 

In this scenario, the applications experience deadlocks. Therefore, the applications become unresponsive (hang) or time out.

The deadlocks may occur during attempts to acquire or refresh an authentication token for the Azure Key Vault.

Azure Key Vault Provider v2.1.0 Release Notes

This release addresses a bug existing in all previous releases which can cause deadlocks in multi-threaded applications.
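
For context, the code path involved is the Azure Key Vault column master key store provider that gets registered with SqlClient at application startup, and the fix is to pick up the 2.1.0 (or later) release of that provider package (Microsoft.SqlServer.Management.AlwaysEncrypted.AzureKeyVaultProvider on NuGet). Registration typically looks something like the sketch below; the ADAL client-credential callback and the placeholder IDs are assumptions about a typical setup, not code from our app:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Microsoft.SqlServer.Management.AlwaysEncrypted.AzureKeyVaultProvider;

static class AlwaysEncryptedSetup
{
    // Placeholder Azure AD app credentials used to authenticate to Key Vault.
    private const string ClientId = "<aad-application-id>";
    private const string ClientSecret = "<aad-application-secret>";

    // Call once at application startup (e.g. from Application_Start).
    public static void RegisterKeyVaultProvider()
    {
        var provider = new SqlColumnEncryptionAzureKeyVaultProvider(GetTokenAsync);

        SqlConnection.RegisterColumnEncryptionKeyStoreProviders(
            new Dictionary<string, SqlColumnEncryptionKeyStoreProvider>
            {
                { SqlColumnEncryptionAzureKeyVaultProvider.ProviderName, provider }
            });
    }

    // Token callback the provider invokes whenever it needs to talk to Key Vault.
    // Provider versions before 2.1.0 could deadlock around this acquire/refresh path.
    private static async Task<string> GetTokenAsync(string authority, string resource, string scope)
    {
        var context = new AuthenticationContext(authority);
        var credential = new ClientCredential(ClientId, ClientSecret);
        AuthenticationResult result = await context.AcquireTokenAsync(resource, credential);
        return result.AccessToken;
    }
}
```

The deadlock described above lives in that token acquisition/refresh path, which is why none of our own logging ever surfaced it.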

Seth Flowers