
So we're building a web-based audio streaming platform where the audio files are stored in blob storage. We're creating a SAS URL to the blob and then feeding that into the JavaScript player (Aurora).
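For context, a service SAS token is just a signed query string appended to the blob URL. Below is a minimal stdlib-only sketch of roughly how one is constructed (the real implementation lives in the Azure SDK; the account name, key, blob names, and the exact string-to-sign field order here are illustrative approximations of the 2015-04-05 service version, not authoritative):

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def make_blob_sas(account, key_b64, container, blob,
                  permissions="r", start="", expiry="2017-01-01T00:00:00Z",
                  identifier="", ip="", protocol="", version="2015-04-05"):
    """Rough sketch of service-SAS signing. The string-to-sign is a
    newline-joined list of the SAS fields, followed by the (here empty)
    response-header overrides; field order is an approximation."""
    resource = f"/blob/{account}/{container}/{blob}"
    to_sign = "\n".join([permissions, start, expiry, resource, identifier,
                         ip, protocol, version, "", "", "", "", ""])
    sig = base64.b64encode(
        hmac.new(base64.b64decode(key_b64),
                 to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode()
    params = {"sv": version, "sr": "b", "sp": permissions,
              "se": expiry, "sig": sig}
    url = f"https://{account}.blob.core.windows.net/{container}/{blob}"
    return url + "?" + urlencode(params)

# Hypothetical account and key, for illustration only
sas_url = make_blob_sas("myaccount",
                        base64.b64encode(b"fake-account-key").decode(),
                        "files", "track.flac")
print(sas_url)
```

A 403 with "signature did not match" semantics means Azure recomputed this HMAC on its side and got a different value, or rejected one of the signed fields (e.g. the validity window).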

This is working fine for the most part; however, when I switch tracks a lot, at a certain point I start getting 403 responses to the HEAD request for the file.

If I click RESEND in Firebug, which simply resends the exact same request, sometimes I still get the same 403 error, but after a while the request succeeds again, which suggests the URL itself is formed correctly.

Below is the full response I'm getting:

403 Server failed to authenticate the request. Make sure the value of
Authorization header is formed correctly including the signature.
Transfer-Encoding: chunked
Server: Microsoft-HTTPAPI/2.0
x-ms-request-id: <removed>
access-control-expose-headers: Content-Type,Accept-Ranges,Content-Encoding,Content-Length,Content-Range
Access-Control-Allow-Origin: *
Date: Fri, 12 Aug 2016 06:31:58 GMT

I'm starting to think that some kind of restriction on blob storage is being triggered, such as a maximum number of connections or a bandwidth limit, possibly even a DoS defense mechanism. Does anyone have any suggestions?

I've read some articles about diagnostics in storage, but they all refer to the old Azure portal, and my storage account is only visible in the new portal. So my sub-question would be: can anyone point me to a way to diagnose why the requests are being denied, using the new Azure portal?

Edit: I've used Azure Management Studio to look at the storage account's logs. I found this log line, which specifies a 'SASNetworkError':

1.0;2016-08-12T10:26:27.7337647Z;GetBlob;SASNetworkError;206;19002;6;sas;;[xxx];blob;"https://[xxx].blob.core.windows.net:443/files/[xxx].flac?sv=2015-04-05&sr=b&si=flacpolicy636065943797947863&sig=XXXXX&sip=[xxx]";"/[xxx]/files/[xxx].flac";8f7d48a3-0001-0017-5983-f499a7000000;0;[xxx]:40690;2015-04-05;637;0;499;0;0;;;""0x8D35C8CDDD72689"";Monday, 04-Apr-16 13:27:27 GMT;;"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0";"http://[xxx].azurewebsites.net/";

It looks like this is the cause of the error, but I can't figure out what exactly is failing.

Bart van der Drift
  • Azure Storage wouldn't be running out of connection capacity - it's a fairly massive multi-tenant storage system. Have you experimented with setting your SAS start time to be slightly in the past, maybe a minute or two (and possibly end time a bit into the future, if it's a short validity window)? Clock drift could result in your SAS being inadvertently invalid for a short time. – David Makogon Aug 12 '16 at 12:18
  • Thanks for the suggestion, David. I'm sure Azure can handle it as a system, but I was more thinking about specific limitations to our account. I was leaving the start time blank but I've now tried setting it to yesterday. I also set the expiration time to a few years ahead, but no dice... the error is still occurring. – Bart van der Drift Aug 12 '16 at 12:31
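David's clock-drift suggestion amounts to backdating the SAS start time so that a few minutes of skew between the web server's clock and Azure's can't make the token "not yet valid". A small stdlib sketch (the skew and lifetime values are illustrative, not a recommendation from the SDK):

```python
from datetime import datetime, timedelta

def sas_validity_window(skew=timedelta(minutes=5),
                        lifetime=timedelta(hours=1)):
    """Backdate the SAS start time to tolerate clock drift between the
    issuing server and Azure Storage; return (start, expiry) strings."""
    now = datetime.utcnow()
    start = now - skew       # slightly in the past
    expiry = now + lifetime  # comfortably in the future
    # SAS timestamps use ISO-8601 UTC with a trailing 'Z'
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return start.strftime(fmt), expiry.strftime(fmt)

start, expiry = sas_validity_window()
print(start, expiry)
```

As the comment exchange above shows, this alone didn't fix the asker's issue, but it is cheap insurance against a different class of intermittent 403s.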

2 Answers


I discovered what caused my errors: there's a maximum of 5 named access policies on a container. I was creating a new access policy for each play and registering it under a new name. I solved this by creating a new policy and passing it directly to the GetSharedAccessSignature call instead.
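One quick way to spot this situation in the storage logs: a SAS issued against a stored (named) access policy carries an `si=` parameter, while an ad-hoc SAS carries its permissions and expiry inline (`sp=`, `se=`). A small stdlib check, using placeholder URLs shaped like the one in the question's log:

```python
from urllib.parse import urlparse, parse_qs

def uses_stored_policy(sas_url: str) -> bool:
    """True if the SAS token references a named (stored) access policy,
    of which a container may hold at most five."""
    query = parse_qs(urlparse(sas_url).query)
    return "si" in query

# Placeholder URLs for illustration only
logged = ("https://myaccount.blob.core.windows.net/files/track.flac"
          "?sv=2015-04-05&sr=b&si=flacpolicy636065943797947863&sig=XXXXX")
adhoc = ("https://myaccount.blob.core.windows.net/files/track.flac"
         "?sv=2015-04-05&sr=b&sp=r&se=2016-08-13T00:00:00Z&sig=XXXXX")

print(uses_stored_policy(logged))  # True: named policy, counts toward the 5
print(uses_stored_policy(adhoc))   # False: ad-hoc SAS, uses no policy slot
```

The `si=flacpolicy...` parameter in the logged request above is the telltale sign that every play was consuming one of the five policy slots.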

Bart van der Drift

According to this article,

SAS request that failed due to network errors. Most commonly occurs when a client prematurely closes a connection before timeout expiration.

Could there be a network intermediary, like a proxy, that is closing the connection?

Don Lockhart
  • Thanks for the link to that article, Don, I hadn't found that yet. All the requests are done from the browser, directly to blob storage. This happens on multiple networks. I don't see how this could be due to a proxy. Could it be a failure of the load balancer in Azure? – Bart van der Drift Aug 12 '16 at 12:07