I'm attempting to dump and restore from my local MongoDB to Azure Cosmos DB, and I get the error "Request rate is large". My database is 9.3 MB with 116 collections. I'm guessing that restoring collection by collection will work. Is this the only way? Or do I need to move to the next pricing tier?
1 Answer
In Cosmos DB there are no pricing tiers; instead, throughput is provisioned at the collection level or, less commonly, at the database level.
The reason you are getting a 429 "Request rate is large" error is that you are hitting Cosmos DB with more RU/s than you have provisioned. This has nothing to do with the volume of your database, but rather with the request rate you are hitting Cosmos DB with.
You can prevent this from happening by increasing the provisioned throughput from the Scale settings in the Azure portal, or by increasing the number of retries for throttled requests at the SDK level. The throughput increase can be done temporarily, just long enough to import the data, and then scaled back down.
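If you would rather keep the provisioned throughput low and have the import back off when it gets throttled, something along these lines could work. This is a minimal sketch, assuming pymongo; the connection strings and names are placeholders, and it treats error code 16500 (the code the Cosmos DB API for MongoDB returns for "Request rate is large") as the signal to wait and retry:

```python
# Minimal sketch: copy one collection from a local MongoDB into Cosmos DB,
# backing off and retrying whenever Cosmos DB throttles a write.
# Assumptions: pymongo is installed, the connection strings and names below
# are placeholders, and 16500 is the error code the Cosmos DB API for MongoDB
# returns when a request is throttled ("Request rate is large").
import time
from pymongo import MongoClient
from pymongo.errors import OperationFailure

source = MongoClient("mongodb://localhost:27017")["mydb"]["mycollection"]
target = MongoClient("<cosmos-connection-string>")["mydb"]["mycollection"]

for doc in source.find():
    while True:
        try:
            target.insert_one(doc)
            break                      # document written, move on to the next one
        except OperationFailure as exc:
            if exc.code == 16500:      # throttled by Cosmos DB: wait, then retry
                time.sleep(1)
            else:
                raise                  # any other failure is a real error
```

For a one-off import, temporarily scaling the throughput up as described above is usually the simpler option; a retry loop like this matters more for ongoing workloads.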
However, having 116 collections for 9.3 MB of data is not a good idea in Cosmos DB, as you will be charged for a minimum of 400 RU/s per collection (116 × 400 = 46,400 RU/s of minimum provisioned throughput). I would recommend you read more about Cosmos DB pricing and Cosmos DB error codes.
Error codes: https://learn.microsoft.com/en-us/rest/api/cosmos-db/http-status-codes-for-cosmosdb
Pricing: https://azure.microsoft.com/en-gb/pricing/details/cosmos-db/
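One way to avoid paying that per-collection minimum is to provision throughput at the database level, so that all collections share a single RU pool (the comments below make the same point). Here is a minimal sketch, assuming pymongo and the Cosmos DB API for MongoDB extension commands (customAction "CreateDatabase" / "CreateCollection"); the connection string, names and the 400 RU/s figure are placeholders:

```python
# Minimal sketch: create a database with shared (database-level) throughput,
# then create collections that draw from that shared pool instead of each
# carrying its own 400 RU/s minimum. Assumes pymongo and the Cosmos DB
# API for MongoDB extension commands; names and values are placeholders.
from pymongo import MongoClient

client = MongoClient("<cosmos-connection-string>")
db = client["mydb"]

# Create the database with 400 RU/s shared across its collections.
db.command({"customAction": "CreateDatabase", "offerThroughput": 400})

# Collections created without their own offerThroughput share the database pool.
db.command({"customAction": "CreateCollection", "collection": "products"})
```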

- Thanks for the very thorough explanation. I am testing an open source ecommerce solution, https://grandnode.com/, that uses MongoDB and has 116 collections. It sounds like the pricing structure makes hosting this website in Azure even more expensive than a similar solution that uses SQL? I was under the impression that these NoSQL solutions were cheaper. – Primico Nov 14 '18 at 17:58
- Like Nick said, for the scale of your solution this is a very expensive platform. You could instead use a managed MongoDB from another supplier such as MongoDB Atlas, or host a MongoDB database in an Azure VM; that will be much cheaper, but it adds extra work to manage backups, replication and so on. – Diego Mendes Nov 14 '18 at 21:58
- @Primico - For something with so many collections, consider database-level Request Units, where all collections in the database share from an RU pool, so that you don't have to pay the minimum cost for each collection. – David Makogon Nov 15 '18 at 00:32
- @DiegoMendes - This is what database-level RU is for, and the minimum cost is a fraction of having 116 collection-level RU minimums. – David Makogon Nov 15 '18 at 00:34