Basic has a hard limit of 1M documents, and you cannot add more partitions to increase that, so you have to go for one of the Standard tiers (S1, S2, or S3) if you want to index all 100M entries. Each individual entry (row in your database) counts as a document. The maximum size of a document you index is 16 MB, though the effective limit may be lower depending on how you push data into the index (https://learn.microsoft.com/en-us/azure/search/search-limits-quotas-capacity#document-size-limits).
The number of documents you need to store affects which tier you need, but so do the maximum storage size and the throughput you want. You can do a quick estimate of how much storage you need: e.g. if your 4 metadata points are all strings averaging 30 UTF-8 characters each, then you need roughly (100M x 4 x 30 B) ~ 11 GB, so storage size is likely not the reason you would need extra partitions (both S1 and S2 can fit that within a single partition).
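The back-of-envelope estimate above can be sketched as a few lines of Python. The document count, field count, and average field length are the assumptions from this example; real index size will be larger once you add the inverted-index and analyzer overhead, so treat this as a lower bound:

```python
# Rough lower-bound storage estimate for the index.
# Assumptions: 100M documents, 4 string fields, ~30 UTF-8 bytes per field.
docs = 100_000_000
fields = 4
avg_bytes_per_field = 30  # assumed average UTF-8 length of each metadata string

raw_bytes = docs * fields * avg_bytes_per_field
print(f"{raw_bytes / 1024**3:.1f} GiB of raw field data")  # ≈ 11.2 GiB
```

The actual on-disk index adds tokenization and posting-list overhead on top of this, but even with a generous multiplier you stay well under a single S1 or S2 partition's storage limit.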
To fit 100M documents you could go with 7 S1 partitions (15M x 7 = 105M) at £1,304.21/mo, or 2 S2 partitions (100M x 2) at £1,490.52/mo. The S2s will likely give you better throughput, and they also give you more indexes to work with (even if you don't need them at the moment, since you only have 4 metadata points).
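The partition count is just a ceiling division against the per-partition document limit quoted above. A minimal sketch, with `partitions_needed` as a hypothetical helper name (not an Azure API):

```python
import math

def partitions_needed(total_docs: int, docs_per_partition: int) -> int:
    """Minimum number of partitions to hold total_docs documents."""
    return math.ceil(total_docs / docs_per_partition)

total = 100_000_000
print(partitions_needed(total, 15_000_000))  # S1 at 15M docs/partition -> 7
```

If the per-partition limit or your document count changes, re-running this against the current figures on the Azure service-limits page tells you the new partition count, and from there the monthly cost comparison between tiers.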
As noted before, the full capabilities of a search engine may be a lot more than what you need right now, but if adopting one is a strategic decision, then at least you know why you are paying for it.