
One of the most difficult things with DocumentDB is figuring out how many Request Units per second (RU/s) you need to run your application day to day, but also during usage spikes. When you get this wrong, requests are throttled and the DocumentDB client throws exceptions, which is a terrible usage model.

If my application has particular times of day when it needs a higher number of Request Units per second (RU/s), how do I handle this in DocumentDB? I don't want to set a really high RU/s all day because I would be charged accordingly, and I don't want to have to log into the Azure portal each time.

Muhammad Rehan Saeed

1 Answer


You could create a job on Azure that scales up the throughput of your collections only at the times of day you need it and scales it back down afterward.

If you are targeting DocumentDB from .NET, this Azure article has example code that shows how to change the throughput using the .NET SDK.

The specific (C# .NET) code referenced in the article looks like this:

// Fetch the offer (throughput resource) associated with the collection
Offer offer = client.CreateOfferQuery()
              .Where(r => r.ResourceLink == collection.SelfLink)
              .AsEnumerable()
              .SingleOrDefault();

// Set the throughput to 5000 request units per second
offer = new OfferV2(offer, 5000);

// Persist the change by replacing the original offer resource
await client.ReplaceOfferAsync(offer);

// Alternatively, for the legacy pre-defined performance tiers,
// set the offer type to S2
offer = new Offer(offer);
offer.OfferType = "S2";

// Persist the change by replacing the original offer resource
await client.ReplaceOfferAsync(offer);
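To run this on a schedule rather than by hand, the throughput change above can be wrapped in a timer-triggered job. Here is a minimal sketch using a timer-triggered Azure Function; the account URI, key placeholder, database and collection names, schedule, and throughput values are all hypothetical, not taken from the article:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.WebJobs;

public static class ScaleDocumentDb
{
    // Hypothetical account endpoint and key; in practice load these from configuration.
    private static readonly DocumentClient client = new DocumentClient(
        new Uri("https://myaccount.documents.azure.com:443/"),
        "<auth-key>");

    // CRON schedule (sec min hour day month day-of-week): fires at 07:00 and 19:00 UTC.
    public static async Task Run([TimerTrigger("0 0 7,19 * * *")] TimerInfo timer)
    {
        // Look up the collection so we can find its offer by SelfLink.
        DocumentCollection collection = await client.ReadDocumentCollectionAsync(
            UriFactory.CreateDocumentCollectionUri("mydb", "mycollection"));

        Offer offer = client.CreateOfferQuery()
            .Where(o => o.ResourceLink == collection.SelfLink)
            .AsEnumerable()
            .SingleOrDefault();

        // Scale up for the daytime spike, back down for the night.
        int newThroughput = DateTime.UtcNow.Hour < 12 ? 5000 : 1000;
        await client.ReplaceOfferAsync(new OfferV2(offer, newThroughput));
    }
}
```

The same pattern works from an Azure WebJob or Scheduler job; the only essential parts are the offer query and `ReplaceOfferAsync` call shown in the article's snippet.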

I assume that the DocumentDB SDKs for other languages have the same feature.

Additionally, an Azure article found here shows how to use PowerShell to change the service level. (The cmdlet below targets an Azure SQL database rather than DocumentDB, but the same scheduled approach applies.)

$ResourceGroupName = "resourceGroupName"
$ServerName = "serverName"
$DatabaseName = "databaseName"

$NewEdition = "Standard"
$NewPricingTier = "S2"

$ScaleRequest = Set-AzureRmSqlDatabase -DatabaseName $DatabaseName `
    -ServerName $ServerName -ResourceGroupName $ResourceGroupName `
    -Edition $NewEdition -RequestedServiceObjectiveName $NewPricingTier
cnaegle
  • The broken DocumentDB pricing model is forcing you to write extra code just to work around it. I really feel this should not need to happen. We already have to add retry logic just to get around the rate limiting. We should be able to say: here is some money, don't rate limit me until it runs out, and only charge me for what I use. That is the Azure way! – Muhammad Rehan Saeed Apr 19 '16 at 13:58
    @MuhammadRehanSaeed yeah, I'm not thrilled with it either. I would like to see some sort of auto scale where you could set thresholds and let it scale when needed. – cnaegle Apr 19 '16 at 15:42