Trying to formulate an approach to SQL Server 2014 memory-optimized tables.
One very simple table is the most active table in the application:
- Data is never mutated
- Loads are all via a bulk process
- Approx 200 million records
- Currently all reads are with (nolock)
Table:
- PK1 int
- PK2 int
- composite clustered PK on (PK1, PK2)
- non-clustered index on PK2
The PK column order was chosen because it matches the order of the loads.
During a load the non-clustered index is disabled and then rebuilt at the end of the load.
That index killed load speed, and it was so fragmented at the end of a load that it needed to be rebuilt anyway.
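The load pattern described above can be sketched as follows (a minimal sketch; the table and index names are hypothetical, and the actual bulk mechanism may differ):

```sql
-- Disable the non-clustered index before the bulk load so inserts
-- only have to maintain the clustered PK (PK1, PK2).
ALTER INDEX IX_PK2 ON dbo.ActiveTable DISABLE;

-- ... bulk load runs here, in (PK1, PK2) order ...
-- e.g. BULK INSERT dbo.ActiveTable FROM '...' WITH (TABLOCK);

-- Rebuild the index once the load completes; this replaces the
-- fragmented index the load would otherwise have produced.
ALTER INDEX IX_PK2 ON dbo.ActiveTable REBUILD;
```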
- All searches are equality (never <, >, <>)
- Most searches are on PK2
- Some simple searches use PK1, and PK1 is also used in joins.
Finally, to the question(s):
- It is my understanding that memory-optimized indexes do not fragment.
- For an in-memory table, would I reverse the PK to (PK2, PK1) and add a second index on PK1?
- Would there then be no reason to disable and rebuild the index on PK1 during loads?
- Does index fragmentation truly go away in a memory-optimized table?
I think the answer is yes, but it just seems too good to be true.
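For reference, here is a minimal sketch of how the memory-optimized version might be declared in SQL Server 2014 (names and bucket counts are assumptions, not tested values). Since all searches are equality, hash indexes fit; note that column order in a hash index does not matter the way it does for a B-tree, because a hash seek requires equality on the full key. That means a composite hash PK cannot serve single-column lookups, so each equality search pattern gets its own index:

```sql
CREATE TABLE dbo.ActiveTable
(
    PK1 int NOT NULL,
    PK2 int NOT NULL,
    -- A hash seek needs the whole key, so the composite PK cannot
    -- serve PK1-only or PK2-only lookups on its own.
    CONSTRAINT PK_ActiveTable PRIMARY KEY NONCLUSTERED HASH (PK1, PK2)
        WITH (BUCKET_COUNT = 268435456),  -- ~2^28, sized near the 200M row count
    INDEX IX_PK2 NONCLUSTERED HASH (PK2)
        WITH (BUCKET_COUNT = 268435456),  -- assumption: PK2 is highly selective
    INDEX IX_PK1 NONCLUSTERED HASH (PK1)
        WITH (BUCKET_COUNT = 1048576)     -- assumption: fewer distinct PK1 values
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Bucket counts are fixed at CREATE TABLE time in 2014, which is why the limitations below matter for a table this size.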
Guidelines for Using Indexes on Memory-Optimized Tables
On further examination, there are limitations:
- ALTER TABLE, sp_rename, altering the bucket_count, and adding or removing an index outside of the CREATE TABLE statement are not supported for memory-optimized tables.
- UNIQUE, CHECK, and FOREIGN KEY constraints are not supported.
Transact-SQL Support for In-Memory OLTP
I did not open this question to criticize the product; it is a cool feature. But if a table does not support declarative referential integrity (DRI), can you call it a relational database?