I am using Azure Cosmos DB for MongoDB API.
I have a scenario with various Locations, each of which contains Modules, and each Module contains Data. Each Location-Module pairing is unique. To make it clearer, the hierarchy looks like this:
Locations -> Modules -> Data
Currently the structure I am using is something like this: we have a ModuleDataDB, and within it we have collections whose names are generated dynamically and made unique by combining Location and Module:
ModuleDataDB -> Location1-Module1 (collection)
             -> Location1-Module2 (collection)
             -> Location1-Module3 (collection)
             -> Location2-Module1 (collection)
             -> Location2-Module2 (collection)
             -> Location2-Module3 (collection)
             ...
The issue is that we were previously using shared RUs for all the collections under this DB (to reduce costs), but Azure has now put a limit of 25 collections per database when using shared throughput.
I am thinking of moving to a more traditional approach, i.e. putting all the modules from a single location inside a single collection:
ModuleDataDB -> Location1 (collection)
             -> Location2 (collection)
             -> Location3 (collection)
             ...
I need suggestions on how to make this cost-effective while also limiting the number of collections I create. Is there a better way to do it? Also, would it make sense to switch to a VM running MongoDB rather than Azure Cosmos DB?
EDIT1 (Solution): As suggested by @Cedric below, I am now using a single collection for all the location-module mappings, with a synthetic partition key that takes unique values like "Location-1_Module-1".
So the structure looks like this:
ModuleDataDB -> Client-1 (collection) -> (Data) {
                    LocationID : ""
                    ModuleID : ""
                    PartitionKey : <LocationID + ModuleID>
                    ...
                }
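For illustration, here is a minimal sketch of how such documents and their synthetic partition key could be built. The helper names (`make_partition_key`, `make_document`) and the `"_"` separator are my own choices, not part of any Cosmos DB API; the key is simply a concatenation of LocationID and ModuleID as described above.

```python
def make_partition_key(location_id: str, module_id: str) -> str:
    # Synthetic partition key: concatenation of the two IDs,
    # e.g. "Location-1" + "Module-1" -> "Location-1_Module-1"
    return f"{location_id}_{module_id}"

def make_document(location_id: str, module_id: str, data: dict) -> dict:
    # One collection holds every Location-Module combination;
    # the PartitionKey field keeps each pair in its own logical partition.
    return {
        "LocationID": location_id,
        "ModuleID": module_id,
        "PartitionKey": make_partition_key(location_id, module_id),
        **data,
    }

doc = make_document("Location-1", "Module-1", {"reading": 42})

# Queries for a single Location-Module pair filter on the partition key,
# so they stay scoped to one logical partition instead of fanning out:
query = {"PartitionKey": make_partition_key("Location-1", "Module-1")}
```

With this shape, writes and point reads for one Location-Module pair always carry the same partition key value, which is what makes the single-collection layout efficient.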