Would there be any reason to split an HBase table into smaller entities, or can it grow indefinitely (assuming disk space is available)?
Background:
We have realtime data (measurements), up to, let's say, 500,000 entries/s, each consisting essentially of a timestamp, a value, and flags. If we distributed the values across different tables, it would also mean inserting each entry individually, which is a performance killer; inserting in bulk into a single table is much faster. The question is: are there any downsides to having an HBase table of extreme size?
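For reference, by "insert in bulk" I mean batching many Puts into a single call rather than one RPC per measurement. A minimal sketch with the HBase Java client, assuming a single table named `measurements` with a column family `d` (the table name, column family, and row-key scheme here are just illustrative):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BulkMeasurementWriter {
    // Hypothetical column family and qualifiers for value and flags.
    private static final byte[] CF = Bytes.toBytes("d");
    private static final byte[] VALUE_QUAL = Bytes.toBytes("v");
    private static final byte[] FLAGS_QUAL = Bytes.toBytes("f");

    /** Writes a batch of (timestamp, value, flags) triples in a single batched put. */
    public static void writeBatch(Table table, List<long[]> measurements) throws IOException {
        List<Put> puts = new ArrayList<>(measurements.size());
        for (long[] m : measurements) {
            long timestamp = m[0];
            long value = m[1];
            long flags = m[2];
            // Row key = timestamp (illustrative only; a real schema would likely
            // salt or prefix the key to avoid region hotspotting).
            Put put = new Put(Bytes.toBytes(timestamp));
            put.addColumn(CF, VALUE_QUAL, Bytes.toBytes(value));
            put.addColumn(CF, FLAGS_QUAL, Bytes.toBytes(flags));
            puts.add(put);
        }
        // One batched call instead of one RPC per entry.
        table.put(puts);
    }

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("measurements"))) {
            List<long[]> batch = new ArrayList<>();
            batch.add(new long[]{System.currentTimeMillis(), 42L, 0L});
            writeBatch(table, batch);
        }
    }
}
```

Splitting the data across several tables would break this batching per table, which is why we would prefer to keep everything in one (potentially huge) table.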