I am trying to find the best solution to replace the SQL database in a solution that is currently under development but working. The data store needs to support table partitioning and file groups that can be backed up and/or rolled in/out (similar to the IBM DB2 partitioning described here: http://www.ibm.com/developerworks/data/library/techarticle/dm-0605ahuja2/index.html), while still allowing mass row inserts/updates.
The tables stored in the SQL database currently have a simple structure:

- ID: long (part of the composite primary key)
- TimeStamp: DateTime (part of the composite primary key)
- Value: float
- Attribute: int
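In SQL Server terms the table looks roughly like this (the table and constraint names here are placeholders, not the real ones):

```sql
-- Rough shape of the current table; names are illustrative only
CREATE TABLE dbo.Samples
(
    ID          bigint   NOT NULL,  -- supplied by an external system, not generated here
    [TimeStamp] datetime NOT NULL,
    Value       float    NOT NULL,
    Attribute   int      NOT NULL,
    CONSTRAINT PK_Samples PRIMARY KEY CLUSTERED (ID, [TimeStamp])
);
```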
The ID is not related to any other ID in the current database; it comes from elsewhere. The ID and TimeStamp together form the primary key used when inserting/updating data in the table. At the moment, data is merged into the table 50K rows at a time from .NET via a stored procedure with a table-valued parameter (a trimmed-down sketch is below), and I have measured around 10K entries/s on the development machine, which I am happy with. However, because the solution is cost sensitive, and because of performance requirements as the table grows, I need the ability to partition the table (ranged on the TimeStamp field) and divide the partitions among multiple files for simpler maintenance and backup. In addition, a customer needs to be able to execute read-only queries on the data, at minimum via ODBC.
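For context, this is roughly what the current upsert path looks like; the type and procedure names are placeholders, and it assumes the Samples table from the sketch above:

```sql
-- Table-valued parameter type matching the row structure
CREATE TYPE dbo.SampleRowType AS TABLE
(
    ID          bigint   NOT NULL,
    [TimeStamp] datetime NOT NULL,
    Value       float    NOT NULL,
    Attribute   int      NOT NULL,
    PRIMARY KEY (ID, [TimeStamp])
);
GO

-- Called from .NET with a SqlDbType.Structured parameter, ~50K rows per call
CREATE PROCEDURE dbo.MergeSamples
    @Rows dbo.SampleRowType READONLY
AS
BEGIN
    SET NOCOUNT ON;

    MERGE dbo.Samples AS target
    USING @Rows AS source
        ON target.ID = source.ID
       AND target.[TimeStamp] = source.[TimeStamp]
    WHEN MATCHED THEN
        UPDATE SET Value = source.Value,
                   Attribute = source.Attribute
    WHEN NOT MATCHED THEN
        INSERT (ID, [TimeStamp], Value, Attribute)
        VALUES (source.ID, source.[TimeStamp], source.Value, source.Attribute);
END
```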
I have looked at a couple of NoSQL options, MySQL, and DB2 (which seems to be the closest fit right now). I would appreciate any insight into which option might be the best fit. There is of course always the option of moving the "partitioning" logic into the application code, but I'd like to avoid that if possible.