I have a little problem that needs some suggestions:
- Let's say we have a few hundred data tables, each with a few dozen million rows.
- Each data table maps timestamp (key) → value.
- Each data table is written to once every second.
The latest entry of each table should be quickly obtainable, and it will most likely be queried the most (sort of a "follow the data in real time" workload). Since there is no 'Last()' operation or similar, I was thinking of creating another table, "LatestValues", where the latest entry of each data table is updated for faster retrieval. This, however, would add an extra update for every write operation. Also, most of the traffic would be concentrated on this one table (good/bad?). Is there a better solution, or am I missing something?
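To make the idea concrete, here is a minimal in-memory sketch of the dual-write pattern I have in mind. The names (`TimeSeriesStore`, the `latest_values` dict standing in for the "LatestValues" table) are purely illustrative and not any actual DynamoDB or Azure Table Storage API:

```python
class TimeSeriesStore:
    """Toy in-memory model of the dual-write pattern: every write
    appends to the per-series data table AND overwrites that series'
    row in a single shared LatestValues table."""

    def __init__(self):
        self.tables = {}         # series_id -> {timestamp: value}
        self.latest_values = {}  # series_id -> (timestamp, value)

    def write(self, series_id, timestamp, value):
        # Write #1: the regular data table, keyed by timestamp.
        self.tables.setdefault(series_id, {})[timestamp] = value
        # Write #2: update LatestValues, but only if this entry is
        # actually newer (guards against out-of-order arrivals).
        prev = self.latest_values.get(series_id)
        if prev is None or timestamp >= prev[0]:
            self.latest_values[series_id] = (timestamp, value)

    def last(self, series_id):
        # O(1) point lookup instead of finding the max key in the
        # per-series table; returns (timestamp, value) or None.
        return self.latest_values.get(series_id)
```

The trade-off the sketch makes explicit: every incoming data point costs two writes, and all `last()` reads hit the same LatestValues structure, which is exactly the hot-spot concern above.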
Also, let's say we want to query the data tables by value. Since scanning is obviously out of the question, is the only option left to create a secondary index by duplicating the data, effectively doubling the storage requirements and the number of write operations? Are there other solutions?
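For clarity, here is a toy sketch of what I mean by "secondary index by duplicating the data": each row is stored twice, once keyed by timestamp and once keyed by (value, timestamp), so value-range queries become index lookups instead of scans. Again, the names are illustrative, not a real store's API:

```python
import bisect

class IndexedStore:
    """Toy model of a value-keyed secondary index maintained by
    duplicating each row: one copy keyed by timestamp (primary),
    one copy keyed by (value, timestamp) kept in sorted order.
    Both storage and write operations are doubled."""

    def __init__(self):
        self.by_time = {}   # timestamp -> value (primary copy)
        self.by_value = []  # sorted list of (value, timestamp)

    def write(self, timestamp, value):
        self.by_time[timestamp] = value                    # write #1
        bisect.insort(self.by_value, (value, timestamp))   # write #2

    def query_value_range(self, lo, hi):
        # Binary-search the index for lo <= value <= hi instead of
        # scanning the whole primary table.
        i = bisect.bisect_left(self.by_value, (lo,))
        j = bisect.bisect_right(self.by_value, (hi, float("inf")))
        return self.by_value[i:j]
```

This is roughly what DynamoDB's global secondary indexes or a manually maintained Azure index table would do under the hood, with the same doubled write and storage cost.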
I'm primarily looking at DynamoDB and Azure Table Storage, but I'm also curious how BigTable handles this.