
We use a couple of Azure Functions and Azure Storage queues to communicate between them. We know about the 64 KB limit per queue message, so we compress messages, but sometimes we still exceed that limit. According to the documentation (https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted#capacity-and-quotas):

Azure supports large messages by combining queues and blobs – at which point you can enqueue up to 200 GB for a single item.

It looks like we can put large messages into a Storage queue. Unfortunately, there is no additional documentation about this, so our question is: how is it supposed to work? Does it work out of the box, or should we use a pattern like writing the message to a blob, putting a message with the blob ID into the queue, and then reading the blob by ID in a queue-triggered function (sketched below)?

We use the Microsoft.Azure.Storage.Queue v9.4.2 NuGet package to push messages into queues.
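For concreteness, here is a minimal sketch of the pattern we have in mind, assuming the Microsoft.Azure.Storage.Blob package alongside Microsoft.Azure.Storage.Queue v9.4.2; the container name, queue name, and class name are illustrative, not part of our real setup:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;
using Microsoft.Azure.Storage.Queue;

public static class LargeMessageSketch
{
    // Write the oversized payload to a blob, then enqueue only the
    // blob name, which stays well under the 64 KB message limit.
    public static async Task EnqueueAsync(CloudStorageAccount account, string payload)
    {
        CloudBlobContainer container = account.CreateCloudBlobClient()
            .GetContainerReference("large-messages"); // illustrative name
        await container.CreateIfNotExistsAsync();

        string blobName = Guid.NewGuid().ToString("N");
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
        await blob.UploadTextAsync(payload);

        CloudQueue queue = account.CreateCloudQueueClient()
            .GetQueueReference("work-items"); // illustrative name
        await queue.CreateIfNotExistsAsync();
        await queue.AddMessageAsync(new CloudQueueMessage(blobName));
    }

    // The queue-triggered side: resolve the blob name from the queue
    // message and download the full payload.
    public static async Task<string> ReadPayloadAsync(CloudStorageAccount account, CloudQueueMessage message)
    {
        CloudBlockBlob blob = account.CreateCloudBlobClient()
            .GetContainerReference("large-messages")
            .GetBlockBlobReference(message.AsString);
        return await blob.DownloadTextAsync();
    }
}

With this, the queue message carries only a reference, so the 64 KB limit applies to the blob name rather than the payload.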

  • The link talks about putting the queue payload in a blob, which is how you get 200 GB (and that is outdated, since you can have blobs of up to 4.7 TB now). You cannot increase the message size of Storage queues. This has been addressed in a near-identical question, [here](https://stackoverflow.com/q/45692731/272109), so I'm marking this one as a duplicate (the answers to the other question explain this in more detail). – David Makogon Feb 18 '19 at 17:10
  • @Taras Trofymchuk, I share your frustration. The article, which is still worded the same, suggests that there would be a mechanism for combining blobs with queues to achieve the 200 GB, which doesn't appear to be the case. Therefore I feel that the article is a little misleading on that point. Looks like I will have to hand roll a solution. – Kaine Jul 27 '23 at 15:28

1 Answer


An Azure Queue requires a CloudStorageAccount to back it. Based on the documentation, that CloudStorageAccount can be Azure Blob Storage.

/* Include these "using" directives...
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue; // CloudQueueClient lives here
*/

// Build the connection string for the storage account.
string storageConnectionString = "DefaultEndpointsProtocol=https;"
    + "AccountName=[Storage Account Name]"
    + ";AccountKey=[Storage Account Key]"
    + ";EndpointSuffix=core.windows.net";

// Parse it into a storage account reference.
CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);

// Create the queue client.
CloudQueueClient queueClient = account.CreateCloudQueueClient();
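
A hypothetical continuation, showing how the queue client might then be used (the queue name is illustrative):

// Get a reference to a queue, create it if it doesn't exist, and enqueue a message.
CloudQueue queue = queueClient.GetQueueReference("myqueue");
queue.CreateIfNotExists();
queue.AddMessage(new CloudQueueMessage("Hello, World"));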

I cobbled this together using the following references:

https://learn.microsoft.com/en-us/azure/storage/queues/storage-dotnet-how-to-use-queues

https://learn.microsoft.com/en-us/dotnet/api/overview/azure/storage

Mike Hofer
  • The storage in use for a queue really has no bearing on storage used for blobs. The notion is that the message payload itself, when larger than maximum queue message, can be stored *anywhere* (including, but not limited to, blobs) - it's completely up to the app author to choose where to store larger payloads. And the blob storage account has no need to be the same as a storage account where the queue exists. – David Makogon Feb 18 '19 at 17:15
  • In that case, @DavidMakogon, you are publicly exposing your message payload. – Alex Gordon Aug 18 '19 at 21:37
  • @l--''''''---------'''''''''''' - this is not true. Nothing about this pattern (or what I commented on) suggests publicly exposing payloads. The URL placed in a queue message can point to a private blob, which would then require the storage acct key to access. The queue itself needs a storage account key as well. The point of my comment is that the location of a larger payload is irrelevant - whether you store it in Azure blob storage or anywhere else (even another cloud provider's storage). The app itself would need appropriate access (e.g. keys), but no, nothing needs to be publicly exposed. – David Makogon Aug 19 '19 at 00:41
  • Got it! Great point, didn't consider this. – Alex Gordon Aug 19 '19 at 13:59