Short background
Hello everybody. I am currently facing a situation where I need to increase performance on a very large table on SQL Server Standard Edition. The table is transaction-heavy and expected to receive quite a bit more traffic.
Solution Part 1
I have decided to split the table into several partitions using a strategy similar to the one proposed here by Barry King. So I have a bunch of tables looking something like the one below.
CREATE TABLE [NewTable001](
[DayId] [int] NOT NULL,
[OriginalTableId] [bigint] NOT NULL,
[CustomerId] [int] NULL,
CONSTRAINT [PK_OriginalTableID001] PRIMARY KEY CLUSTERED
(
[OriginalTableId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [NewTable001] WITH CHECK ADD CONSTRAINT [CK_Day_001] CHECK ([DayId]=(1))
GO
ALTER TABLE [NewTable001] CHECK CONSTRAINT [CK_Day_001]
GO
I have a corresponding view as well that unions all the underlying tables. Now the problem, as Barry King stated in his blog post, is the handling of unique IDs in the view. Since the view can't generate the IDs itself, they need to be stored in the member tables, and those tables can't use the identity functionality.
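For reference, the view over the day tables would look something like the sketch below. The view name and the number of member tables are assumptions based on the naming above:

```sql
-- Sketch of a partitioned view over the per-day tables.
-- [TransactionsView] is a hypothetical name.
CREATE VIEW [TransactionsView]
AS
SELECT [DayId], [OriginalTableId], [CustomerId] FROM [NewTable001]
UNION ALL
SELECT [DayId], [OriginalTableId], [CustomerId] FROM [NewTable002]
UNION ALL
SELECT [DayId], [OriginalTableId], [CustomerId] FROM [NewTable003]
-- ...one SELECT per member table
GO
```

With the CHECK constraints on [DayId] in place, SQL Server can eliminate the non-matching member tables from the plan when queries filter on [DayId].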
Solution Part 2
By creating a separate table like this:
create table PartitionIDHelper
(
ID bigint identity(10000001,1), --Higher (with margin) than the highest value in the current table
Value int --Something that is quick to write
);
and then in my procedures using this code:
declare @ID bigint;
insert into PartitionIDHelper (Value)
values (1); --Just a value so that I can get the unique ID from the table
select @ID = SCOPE_IDENTITY(); --My shiny new ID value to use when writing to the view
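Put together, a procedure writing through the view would look roughly like this sketch. The view name [TransactionsView] and the @DayId/@CustomerId parameters are assumptions for illustration:

```sql
-- Hypothetical insert procedure body; parameter names are assumptions.
declare @ID bigint;

-- Burn one identity value in the helper table to get a new unique ID
insert into PartitionIDHelper (Value) values (1);
select @ID = SCOPE_IDENTITY();

-- Use the ID when writing through the partitioned view
insert into [TransactionsView] ([DayId], [OriginalTableId], [CustomerId])
values (@DayId, @ID, @CustomerId);
```

SCOPE_IDENTITY() is used rather than @@IDENTITY so that a trigger on the helper table could not return an identity value from a different scope.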
Potential problem
The potential problem with this solution is that I might write the values into [OriginalTableID] in the wrong order.
Assume that process A starts before process B, and that process A also acquires its ID from the PartitionIDHelper table before process B does. Process B, however, is quicker and writes before process A, causing a write to my clustered index ahead of process A's.
Process A then does not write to the "best" position in the index, as its row arrives out of order.
Potential solutions
So how do I solve this best? I can think of two strategies.
- Accept that my clustered index will be slightly out of order
- Use a local identity column in each table for the purpose of the clustered index.
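The second option would look roughly like the sketch below: a local identity column carries the clustered index, and a nonclustered unique index is kept on [OriginalTableId] for the singleton lookups. The [LocalId] column and constraint names are assumptions:

```sql
-- Sketch of option 2; [LocalId] and the constraint names are hypothetical.
CREATE TABLE [NewTable001](
[LocalId] [bigint] IDENTITY(1,1) NOT NULL, --exists only to keep inserts in order
[DayId] [int] NOT NULL,
[OriginalTableId] [bigint] NOT NULL,
[CustomerId] [int] NULL,
CONSTRAINT [PK_NewTable001_LocalId] PRIMARY KEY CLUSTERED ([LocalId] ASC),
CONSTRAINT [UQ_NewTable001_OriginalTableId] UNIQUE NONCLUSTERED ([OriginalTableId] ASC)
)
```

One caveat: SQL Server does not allow inserts through a partitioned view when a member table has an identity column, so this variant would mean inserting into the member tables directly instead of through the view.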
Potential solution 2 seems to me like a bad path, as I will never use that column for anything other than storing insert order: it will never be ordered by, used in WHERE clauses, or even selected.
After reading (most of) this blog post by Michelle Ufford, I am leaning towards sticking with the slightly fragmented clustered index. The quote below poses a difficult question for me, as the data is accessed mainly by singleton queries, but it is an OLTP system where insert speed is important.
I am not suggesting that you only create clustered indexes on identity integer columns. Fragmentation, although generally undesirable, primarily impacts range-scan queries; singleton queries would not notice much impact. Even range-scan queries can benefit from routine defragmentation efforts. However, the ever-increasing attribute of a clustered key is something to consider, and is especially important in OLTP systems where INSERT speed is important.
Any and all comments and suggestions will be greatly appreciated!
/Taher