
The Azure Search service maxes out at 300 GB of data, and as of today we've exceeded that. Our database table consists mainly of unstructured text from news articles on websites around the world.

Do we have any options at all here? We like Azure Search and have built our entire back-end infrastructure around it, but now we're dead in the water, unable to add any more documents. Does Azure Search support compressing the documents?

Stpete111

1 Answer


Azure Search offers a variety of SKUs; the largest allows you to index up to 2.4 TB per service. You can find more details here. Note that changing SKUs requires re-indexing the data.
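
If you want to check how close a service is to its quota before committing to a SKU change, the service statistics REST endpoint reports per-service storage usage and quota. Here's a minimal sketch of that call; the service name, admin key, and api-version are placeholders, so substitute your own:

```python
import requests

# Placeholders: substitute your own service name and admin API key.
SERVICE_NAME = "my-search-service"
ADMIN_KEY = "<admin-api-key>"

# Get Service Statistics returns counters including storage usage/quota.
resp = requests.get(
    "https://{}.search.windows.net/servicestats".format(SERVICE_NAME),
    params={"api-version": "2017-11-11"},
    headers={"api-key": ADMIN_KEY},
)
resp.raise_for_status()

storage = resp.json()["counters"]["storageSize"]
used, quota = storage["usage"], storage["quota"]  # both in bytes
print("Storage: {:.1f} GB used of {:.1f} GB quota ({:.0%})".format(
    used / 1e9, quota / 1e9, used / quota))
```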

We don't provide data compression. If you'd like to talk to Azure Search program managers about your capacity requirements, feel free to reach out to @liamca.

Yahnoosh
  • Thank you for your prompt reply. I have looked at the additional options, and unfortunately it appears our bill would quadruple just by stepping up from S1 to S2 (based on the estimate shown when choosing the service tier). We'd be shutting our doors within a month if we incurred those kinds of costs. I will look into other solutions for our search capabilities. It is unfortunate that Azure Search is so expensive - we really like the service a lot. – Stpete111 Jul 27 '17 at 16:34
  • A few options to consider: if you want to make a bit more room, you can look at which fields are marked as searchable/filterable/facetable in the index and minimize those (a sketch of a trimmed index definition follows below). You can also see whether you could push less data into the index (e.g. summaries of long text instead of the whole thing). Note that going from S1 to S2 wouldn't necessarily quadruple your cost on day one: S1 and S2 cost the same "per byte", so you could reduce the number of partitions you use such that the cost is only incrementally higher. S2 does have a chunkier unit cost. – Pablo Castro Jul 27 '17 at 17:14
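
Building on Pablo Castro's comment, a trimmed index definition might look roughly like the sketch below. The index and field names are hypothetical; the idea is that only a summary field is full-text searchable, the full article body stays in the source database, and metadata fields carry only the attributes actually used. Changing attributes on existing fields generally means rebuilding the index, consistent with the re-indexing caveat in the answer.

```python
import requests

# Placeholders: substitute your own service name and admin API key.
SERVICE_NAME = "my-search-service"
ADMIN_KEY = "<admin-api-key>"

# Hypothetical slimmed-down schema for a news-article index: index a
# summary instead of the full body, and disable unused attributes.
index_definition = {
    "name": "news-articles",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        # Only the summary is searchable; the full text lives elsewhere.
        {"name": "summary", "type": "Edm.String",
         "searchable": True, "filterable": False,
         "sortable": False, "facetable": False},
        # Metadata used only for filtering and sorting.
        {"name": "publishedDate", "type": "Edm.DateTimeOffset",
         "filterable": True, "sortable": True, "facetable": False},
    ],
}

# Create (or update) the index via the REST API.
resp = requests.put(
    "https://{}.search.windows.net/indexes/news-articles".format(SERVICE_NAME),
    params={"api-version": "2017-11-11"},
    headers={"api-key": ADMIN_KEY, "Content-Type": "application/json"},
    json=index_definition,
)
resp.raise_for_status()
```

On the partition point: since an S2 partition holds roughly four times the data of an S1 partition (about 100 GB vs. 25 GB at the time) at the same per-byte price, replacing, say, eight S1 partitions with two S2 partitions would keep the total cost broadly flat.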