Ahh... performance tuning SQL Server et al. - my favourite thing!
Can anyone suggest any paths we could try to accomplish this?
From the info you've given, I would partition the system by workload: maintain one database (Server A) for the actual OLTP (the CRUD transactions) and one for the KPIs (Server B).
For the replication I would use transactional replication - when performing correctly, latency will be sub-second. I can't think of a practical scenario where this is inappropriate; indeed, most reports are run to the end of the prior day, and "real-time" usually means the last 5 minutes.
To manage the replication process I would start with a simple console app, expecting to extend it in due course to match requirements. The console app should use the following namespaces (on second thought, there are possibly later versions available for SQL 2012):
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Replication;
Using the console app you can then manage the publication, subscription and any tracer tokens in a single interface. It'll be a PAIN to configure (all those permissions, passwords and paths) but once it's up and running you'll be able to optimise the transactional database for writes and the reporting server for... reports.
I would use a replication topology with, effectively, one publication/subscription per large table and a single publication/subscription for the rest (lookup tables, views, stored procedures). I would replicate primary keys but not other constraints, foreign-key references or triggers (relying on the source db for integrity). You should also not replicate indexes - they can be configured/optimised by hand on the reporting server.
You'll also be able to select which articles (and columns) are appropriate for the KPIs, i.e. there's no need to replicate text, varchar(max) etc.
Some helper functions are posted below to get you going.
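To give a flavour of how they hang together, a minimal Main might look like the sketch below. LoadPublicationFromConfig is a hypothetical stand-in for however you describe your topology (app.config, a table, hard-coded while testing); the domain types are the Happy.MI.Replication ones used throughout the helpers:

using System;
using Happy.MI.Replication;

class Program
{
    static void Main()
    {
        // Hypothetical loader: build the Publication (Src/Dist/Dest servers,
        // articles, subscriptions) from your own configuration.
        Publication p = LoadPublicationFromConfig();

        // Create the publication, its articles and its subscriptions in one pass.
        RMOHelper.Create(p);

        // Post a tracer token per subscription to verify end-to-end latency.
        foreach (Subscription s in p.Subscriptions)
        {
            int tokenId = RMOHelper.TraceTokenPost(s);
            Console.WriteLine("{0}: token {1} arrived={2}, estimatedTimeBehind={3}",
                s.Dest.ServerName,
                tokenId,
                RMOHelper.TraceTokenReceive(s, tokenId),
                RMOHelper.PublicationEstimatedTimeBehind(s));
        }
    }

    static Publication LoadPublicationFromConfig()
    {
        // Hypothetical - replace with your own config plumbing.
        throw new NotImplementedException();
    }
}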
Or is there another method I should be looking at to get us performance wins?
In my humble experience there are ALWAYS things that can be done to improve performance. It comes down to time->cost->benefit. Sometimes a little compromise on functionality will get you a lot of performance benefit.
The devil is in the details but with that caveat...
Further random thoughts
You have identified one of your infrastructure issues - mixing OLTP and BI/reporting. I'm unclear on your experience and how bad your performance issues are, so while replication is definitely the right way to go, if you're in "fire-fighting" mode you could try the following:
- Server-side caching of KPI results (5 min, 1 hr, 1 day?) in the db or in RAM - see the caching sketch after this list
- Using schema-bound views you can create indexed views (on Standard and Enterprise editions). Depending on the type of KPIs, this might even be all you need to do! See http://msdn.microsoft.com/en-us/library/ms191432.aspx to find out more. In essence, if your KPIs are summations/GROUP BYs you should take a good look - there's a sketch after this list
- Pre-calculating summations overnight for daily KPIs. You can then optionally add just the current day's data (again, a sketch follows the list)
- Sorting - are the KPIs expensive due to ORDER BY clauses? Ensure your clustered indexes are correct (remember: they do not need to be on the primary key). Try undertaking sorts on the client once you have the data
- Clustered index size. The smaller the better. Start here if you use GUIDs
- Vertically partition the data - for example, if you have a table of 200 columns but the KPIs use only 10, put those 10 into a different table - you'll get more rows per i/o page read (this will help if your disk is the bottleneck). The layout sketch after this list shows this together with the clustered-index points above
- Offer "send reports by email" functionality - take away the real-time nature. You might be able to deliver a percentage of reports overnight when things are quieter, and keep a lower volume of "real-time" reports during the day. Some customers might actually prefer this feature
- Make your customers pay for reports! "Just enter your credit-card details here..." - a sure way to reduce the number of reports :)
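On the caching point, the in-RAM flavour can be a few lines with System.Runtime.Caching (.NET 4 onwards). A rough sketch, assuming an invented dbo.Orders table and an arbitrary 5-minute window:

using System;
using System.Data.SqlClient;
using System.Runtime.Caching;

public static class KpiCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Returns today's sales total, recomputing at most once per 5 minutes.
    public static decimal DailySalesTotal(string connectionString)
    {
        const string key = "kpi:daily-sales-total";
        object hit = Cache.Get(key);
        if (hit != null)
            return (decimal)hit;

        decimal total;
        using (var cn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT SUM(Amount) FROM dbo.Orders WHERE OrderDate >= CAST(GETDATE() AS date)", cn))
        {
            cn.Open();
            object result = cmd.ExecuteScalar();
            total = result == DBNull.Value ? 0m : (decimal)result;
        }

        // Stale-but-fast beats fresh-but-slow for most KPIs.
        Cache.Set(key, total, DateTimeOffset.Now.AddMinutes(5));
        return total;
    }
}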
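For the indexed-view point, the T-SQL is the interesting part; I've wrapped it in the same SMO pattern as the Cmd() helper below since ServerConnection.ExecuteNonQuery copes with the GO separator. Object names are invented, and Amount is assumed NOT NULL (SUM over a nullable expression isn't allowed in an indexed view):

using System.Data.SqlClient;
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

public static class KpiIndexedView
{
    private const string Script = @"
-- Schema binding and two-part names are mandatory for indexed views.
-- (The usual SET options - ANSI_NULLS, QUOTED_IDENTIFIER etc. - must be ON.)
CREATE VIEW dbo.vKpiDailySales
WITH SCHEMABINDING
AS
SELECT OrderDate,
       SUM(Amount)  AS TotalAmount,
       COUNT_BIG(*) AS RowCnt     -- COUNT_BIG(*) is required when using GROUP BY
FROM dbo.Orders
GROUP BY OrderDate;
GO
-- The unique clustered index materialises the aggregate; SQL Server maintains
-- it automatically. On Standard edition, query it WITH (NOEXPAND).
CREATE UNIQUE CLUSTERED INDEX IX_vKpiDailySales ON dbo.vKpiDailySales (OrderDate);";

    public static void Create(string connectionString)
    {
        var server = new Server(new ServerConnection(new SqlConnection(connectionString)));
        server.ConnectionContext.ExecuteNonQuery(Script);
        server.ConnectionContext.Disconnect();
    }
}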
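The overnight pre-calculation point is the same idea maintained by hand: a nightly SQL Agent job refreshes a summary table for closed days, and the live query unions in just today's rows. Again the names are invented:

using System.Data.SqlClient;

public static class KpiNightlySummary
{
    // Run by a nightly SQL Agent job (or whatever scheduler you already have).
    private const string RefreshSql = @"
TRUNCATE TABLE dbo.KpiDailySummary;
INSERT dbo.KpiDailySummary (OrderDate, TotalAmount)
SELECT OrderDate, SUM(Amount)
FROM dbo.Orders
WHERE OrderDate < CAST(GETDATE() AS date)   -- fully closed days only
GROUP BY OrderDate;";

    // At query time: cheap pre-computed history plus just today's rows.
    public const string QuerySql = @"
SELECT OrderDate, TotalAmount FROM dbo.KpiDailySummary
UNION ALL
SELECT OrderDate, SUM(Amount)
FROM dbo.Orders
WHERE OrderDate >= CAST(GETDATE() AS date)
GROUP BY OrderDate;";

    public static void Refresh(string connectionString)
    {
        using (var cn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(RefreshSql, cn))
        {
            cn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}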
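And for the sorting/clustered-index/vertical-partitioning points together, a sketch of what the narrow reporting copy of a wide table might look like - clustered on the reporting access path (the date) rather than the primary key, with a small int key rather than a GUID (schema invented):

using System.Data.SqlClient;

public static class KpiNarrowTable
{
    private const string Ddl = @"
-- 10 KPI columns instead of 200: many more rows per 8KB page.
CREATE TABLE dbo.OrdersKpi
(
    OrderId    int           NOT NULL,  -- small surrogate key, not a GUID
    OrderDate  date          NOT NULL,
    CustomerId int           NOT NULL,
    Amount     decimal(18,2) NOT NULL
    -- ...the handful of other KPI columns only
);
-- Cluster on how reports read the data (date ranges), not on the primary key.
CREATE CLUSTERED INDEX IX_OrdersKpi_OrderDate ON dbo.OrdersKpi (OrderDate);
-- The primary key sits happily on a nonclustered index.
ALTER TABLE dbo.OrdersKpi ADD CONSTRAINT PK_OrdersKpi PRIMARY KEY NONCLUSTERED (OrderId);";

    public static void Create(string connectionString)
    {
        using (var cn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(Ddl, cn))
        {
            cn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}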
Some more info about your config would be useful - when you say huge, how huge? How big/what type of disks, what's the RAM spec, etc.? The reason I ask is this: you might spend the next 40 man-days (at $500+/day?) tuning - that would buy you quite a lot of hardware! More RAM, more disks, faster disks - SSD for tempdb or index partitions. Put another way: you might be asking too much of the hardware (and your boss is asking too much of you).
Next, you describe an enterprise application: do you have an Enterprise SQL Server licence? If so you are in luck - you can create schema-bound partitioned views and have the queries delegated to the "correct" server. There are issues with this model though - namely joins - but it does provide you with an effective alternative option.
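By way of illustration (linked server, database and table names all invented): each member table carries a CHECK constraint on the partitioning column so the optimiser can prune to the relevant member, and remote members are reached over a linked server:

using System.Data.SqlClient;

public static class PartitionedViewSketch
{
    // Member tables each constrain their slice of the partitioning column, e.g.
    //   CREATE TABLE dbo.Orders2012 (..., OrderYear int NOT NULL CHECK (OrderYear = 2012), ...)
    // REPORTSRV is an invented linked server pointing at the other box.
    private const string Ddl = @"
CREATE VIEW dbo.OrdersAll
AS
SELECT * FROM dbo.Orders2012                     -- local, current year
UNION ALL
SELECT * FROM REPORTSRV.KpiDb.dbo.Orders2011;    -- remote, archived";

    public static void Create(string connectionString)
    {
        using (var cn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(Ddl, cn))
        {
            cn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}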
Replication code
I knew I had it somewhere. Find below some helper functions for RMO that you might find useful in getting started with replication. At some point in the past it was live code, but probably longer ago than I care to think of - so please treat it as pseudocode.
(PS Happy for you to get in touch directly if you wish)
public static class RMOHelper
{
public static void PreparePublicationDb(MyServer Src, MyServer Dist)
{
ReplicationDatabase publicationDb = new ReplicationDatabase(Src.Database, Src.ServerConnection);
if (publicationDb.LoadProperties())
{
if (!publicationDb.EnabledTransPublishing)
{
publicationDb.EnabledTransPublishing = true;
}
// If the Log Reader Agent does not exist, create it.
if (!publicationDb.LogReaderAgentExists)
{
// Specify the Windows account under which the agent job runs.
// This account will be used for the local connection to the
// Distributor and all agent connections that use Windows Authentication.
publicationDb.LogReaderAgentProcessSecurity.Login = Dist.WinUId;
publicationDb.LogReaderAgentProcessSecurity.Password = Dist.WinPwd;
// Explicitly set authentication mode for the Publisher connection
// to the default value of Windows Authentication.
publicationDb.LogReaderAgentPublisherSecurity.WindowsAuthentication = true;
// Create the Log Reader Agent job.
publicationDb.CreateLogReaderAgent();
DeleteJobAgentSchedule(publicationDb.LogReaderAgentName);
}
}
else
{
throw new ApplicationException(String.Format(
"The {0} database does not exist at {1}.",
Src.Database,
Src.ServerName));
}
}
public static TransPublication PrepareTransPublication(MyServer Src, MyServer Dist, string publicationName)
{
// Set the required properties for the transactional publication.
TransPublication publication = new TransPublication();
publication.ConnectionContext = Src.ServerConnection;
publication.Name = publicationName;
publication.DatabaseName = Src.Database;
if (publicationName == "relation")
{
float d = 0;
}
// Specify a transactional publication (the default).
publication.Type = PublicationType.Transactional;
publication.ConflictRetention = 4;
publication.RetentionPeriod = 72;
// Activate the publication so that we can add subscriptions.
publication.Status = State.Active;
// Enable push and pull subscriptions and independent Distribution Agents.
publication.Attributes = PublicationAttributes.AllowPull | PublicationAttributes.AllowPush | PublicationAttributes.IndependentAgent;
//publication.Attributes &= PublicationAttributes.AllowSyncToAlternate;
// Specify the Windows account under which the Snapshot Agent job runs.
// This account will be used for the local connection to the
// Distributor and all agent connections that use Windows Authentication.
publication.SnapshotGenerationAgentProcessSecurity.Login = Dist.WinUId;
publication.SnapshotGenerationAgentProcessSecurity.Password = Dist.WinPwd;
// Explicitly set the security mode for the Publisher connection to
// Windows Authentication (the default).
publication.SnapshotGenerationAgentPublisherSecurity.WindowsAuthentication = true;
// Environment-specific UNC share for the snapshot files.
publication.AltSnapshotFolder = @"\\192.168.35.4\repldata\";
if (!publication.IsExistingObject)
{
// Create the transactional publication.
publication.Create();
// Create a Snapshot Agent job for the publication.
publication.CreateSnapshotAgent();
}
else
{
// The publication already exists - reuse it rather than throwing.
}
return publication;
}
public static TransArticle PrepareTransArticle(TransPublication TransPub, Happy.MI.Replication.Article Article)
{
TransArticle TransArticle = new TransArticle();
TransArticle.ConnectionContext = TransPub.ConnectionContext;
TransArticle.Name = Article.Name;
TransArticle.DatabaseName = TransPub.DatabaseName;
TransArticle.SourceObjectName = Article.Name;
TransArticle.SourceObjectOwner = "dbo";
TransArticle.PublicationName = TransPub.Name;
//article.Type = ArticleOptions.LogBased;
//article.FilterClause = "DiscontinuedDate IS NULL";
if (TransArticle.IsExistingObject)
{
// The article already exists in the publication - nothing more to do.
}
else
{
// Ensure that we create the schema owner at the Subscriber.
TransArticle.SchemaOption |= CreationScriptOptions.Schema;
TransArticle.SchemaOption |= CreationScriptOptions.AttemptToDropNonArticleDependencies;
if (!Article.ObjectType.HasValue)
{
throw new Exception(string.Format("unknown schema object type for trans article {0}", Article.Name));
}
if (Article.ObjectType.Value == DataAccessADO.ObjectType.USER_TABLE)
{
TransArticle.SchemaOption |= CreationScriptOptions.ClusteredIndexes;
TransArticle.SchemaOption |= CreationScriptOptions.DriChecks;
TransArticle.SchemaOption |= CreationScriptOptions.DriDefaults;
TransArticle.SchemaOption |= CreationScriptOptions.DriPrimaryKey;
TransArticle.SchemaOption |= CreationScriptOptions.DriUniqueKeys;
//TransArticle.SchemaOption |= CreationScriptOptions.ExtendedProperties;
//TransArticle.SchemaOption |= CreationScriptOptions.NonClusteredIndexes;
TransArticle.Type = ArticleOptions.LogBased;
TransArticle.AddReplicatedColumns(Article.IncludedColumns.ToArray());
}
else if (Article.ObjectType.Value == DataAccessADO.ObjectType.VIEW)
{
TransArticle.Type = ArticleOptions.ViewSchemaOnly;
}
else if (Article.ObjectType.Value == DataAccessADO.ObjectType.SQL_SCALAR_FUNCTION)
{
TransArticle.Type = ArticleOptions.FunctionSchemaOnly;
}
else if (Article.ObjectType.Value == DataAccessADO.ObjectType.SQL_STORED_PROCEDURE)
{
TransArticle.Type = ArticleOptions.ProcSchemaOnly;
}
else
{
throw new Exception(string.Format("unsupported schema object type {0}", Article.ObjectType.Value));
}
// Create the article.
TransArticle.Create();
}
return TransArticle;
}
public static TransSubscription PrepareSubscription(TransPublication TransPub, MyServer Src, MyServer Dest, MyServer Dist)
{
// Define the push subscription. (A TransPullSubscription configured against
// Dest.ServerConnection would be the pull equivalent.)
TransSubscription subscription = new TransSubscription();
subscription.ConnectionContext = TransPub.ConnectionContext;
subscription.PublicationName = TransPub.Name;
subscription.DatabaseName = TransPub.DatabaseName;
subscription.SubscriptionDBName = Dest.Database;
subscription.SubscriberName = Dest.ServerName;
subscription.LoadProperties(); // populate properties if it already exists
// Specify the Windows login credentials for the Distribution Agent job.
subscription.SynchronizationAgentProcessSecurity.Login = Dist.WinUId;
subscription.SynchronizationAgentProcessSecurity.Password = Dist.WinPwd;
if (!subscription.IsExistingObject)
{
// Create the push subscription. Subscriptions to transactional publications
// are synchronized continuously by default; make that explicit here.
subscription.AgentSchedule.FrequencyType = ScheduleFrequencyType.Continuously;
subscription.Create();
PrepareSnapshot(TransPub, Src, Dist);
}
return subscription;
}
public static void PrepareSnapshot(TransPublication TPub, MyServer Src, MyServer Dist)
{
SnapshotGenerationAgent agent = new SnapshotGenerationAgent();
agent.Distributor = Dist.ServerName;
agent.DistributorSecurityMode = SecurityMode.Standard;
agent.DistributorLogin = Dist.SQLUId;
agent.DistributorPassword = Dist.SQLPwd; // SQL auth (SecurityMode.Standard), so SQL credentials
agent.Publisher = TPub.SqlServerName;
agent.PublisherSecurityMode = SecurityMode.Standard;
agent.PublisherLogin = Src.SQLUId;
agent.PublisherPassword = Src.SQLPwd;
agent.Publication = TPub.Name;
agent.PublisherDatabase = TPub.DatabaseName;
agent.ReplicationType = ReplicationType.Transactional;
// Start the agent synchronously.
agent.GenerateSnapshot();
}
public static void ApplySubscription(Happy.MI.Replication.Subscription _subscription)
{
Happy.MI.Replication.Publication p = _subscription.Publication;
RMOHelper.PreparePublicationDb(_subscription.Publication.Src, _subscription.Publication.Dist);
TransPublication TransPub = RMOHelper.PrepareTransPublication(p.Src, p.Dist, p.PublicationName);
foreach (Happy.MI.Replication.Article a in p.Articles)
{
a.LoadProperties();
TransArticle ta = RMOHelper.PrepareTransArticle(TransPub, a);
ta.ConnectionContext.Disconnect();
}
TransSubscription TransSub = RMOHelper.PrepareSubscription(TransPub, p.Src, _subscription.Dest, p.Dist);
if (TransSub.LoadProperties() && TransSub.AgentJobId == null)
{
// Start the Distribution Agent asynchronously.
TransSub.SynchronizeWithJob();
}
TransSub.ConnectionContext.Disconnect();
}
public static void Create(Happy.MI.Replication.Publication p)
{
RMOHelper.PreparePublicationDb(p.Src, p.Dist);
TransPublication TransPub = RMOHelper.PrepareTransPublication(p.Src, p.Dist, p.PublicationName);
foreach (Happy.MI.Replication.Article a in p.Articles)
{
a.LoadProperties();
RMOHelper.PrepareTransArticle(TransPub, a);
}
foreach (Happy.MI.Replication.Subscription s in p.Subscriptions)
{
TransSubscription TransSub = RMOHelper.PrepareSubscription(TransPub, p.Src, s.Dest, p.Dist);
if (TransSub.LoadProperties() && TransSub.AgentJobId == null)
{
// Start the Distribution Agent asynchronously.
TransSub.SynchronizeWithJob();
}
}
}
private static void DeleteJobAgentSchedule(string s)
{
// Placeholder: the original VB.NET implementation is preserved below as
// pseudocode - it removed the agent job's schedule via msdb..sp_delete_schedule.
// Private Function DeleteSchedule(ByVal scheduleID As Integer) As Boolean
// Dim result As Boolean
// If (scheduleID > 0) Then
// Dim msdbConnectionString As String = Me.PublicationConnectionString.Replace(String.Format("Initial Catalog={0};", Me.PublicationDbName), "Initial Catalog=msdb;")
// Dim db As New SQLDataAccessHelper.DBObject(msdbConnectionString)
// '-- Delete Job Schedule
// Dim parameters As New List(Of System.Data.SqlClient.SqlParameter)
// parameters.Add(New System.Data.SqlClient.SqlParameter("@schedule_id", SqlDbType.Int))
// parameters.Add(New System.Data.SqlClient.SqlParameter("@force_delete", SqlDbType.Bit))
// parameters(0).Value = scheduleID
// parameters(1).Value = True
// Dim rowsAffected As Integer
// result = (db.RunNonQueryProcedure("sp_delete_schedule", parameters, rowsAffected) = 0)
// db.Connection.Close()
// db.Connection.Dispose()
// Else
// Throw New ArgumentException("DeleteSchedule(): ScheduleID must be greater than 0")
// End If
// Return result
//End Function
}
public static int PublicationEstimatedTimeBehind(Happy.MI.Replication.Subscription s)
{
PublicationMonitor mon = new PublicationMonitor();
mon.DistributionDBName = s.Publication.Dist.Database;
mon.PublisherName = s.Publication.Src.ServerName;
mon.PublicationDBName = s.Publication.Src.Database;
mon.Name = s.Publication.PublicationName;
mon.ConnectionContext = s.Publication.Src.ServerConnection;
if (mon.LoadProperties())
{
PendingCommandInfo pci = mon.TransPendingCommandInfo(s.Dest.ServerName, s.Dest.Database, SubscriptionOption.Push);
return pci.EstimatedTimeBehind;
}
else
{
throw new Exception(string.Format("Unable to load properties for subscription [{0}][{1}]",s.Dest.ServerName, s.Publication.PublicationName));
}
}
public static int TraceTokenPost(Happy.MI.Replication.Subscription s)
{
TransPublication TransPub = new TransPublication();
TransPub.ConnectionContext = s.Publication.Src.ServerConnection;
TransPub.Name = s.Publication.PublicationName;
TransPub.DatabaseName = s.Publication.Src.Database;
if (TransPub.LoadProperties())
{
return TransPub.PostTracerToken();
}
return 0;
}
public static bool TraceTokenReceive(Happy.MI.Replication.Subscription s, int TokenId){
PublicationMonitor mon = new PublicationMonitor();
mon.DistributionDBName = s.Publication.Dist.Database;
mon.PublisherName = s.Publication.Src.ServerName;
mon.PublicationDBName = s.Publication.Src.Database;
mon.Name = s.Publication.PublicationName;
mon.ConnectionContext = s.Publication.Src.ServerConnection;
if (mon.LoadProperties())
{
DataSet ds = mon.EnumTracerTokenHistory(TokenId);
// overall_latency is NULL until the token reaches the subscriber, so the
// TryParse doubles as the "has it arrived yet?" check.
int latency;
string str = ds.Tables[0].Rows[0]["overall_latency"].ToString();
return int.TryParse(str, out latency);
}
else
{
throw new Exception(string.Format("Unable to load properties for subscription [{0}][{1}]", s.Dest.ServerName, s.Publication.PublicationName));
}
}
public static void Cmd(string cnct)
{
// Run a one-off post-replication script; the path here is environment-specific.
string script = System.IO.File.ReadAllText(@"C:\tfs\CA\Dev\MI\Happy.MI\PostReplicationScripts\GC1.txt");
SqlConnection connection = new SqlConnection(cnct+";Connection Timeout=5");
Server server = new Server(new ServerConnection(connection));
//server.ConnectionContext.InfoMessage += new System.Data.SqlClient.SqlInfoMessageEventHandler(ConnectionContext_InfoMessage);
server.ConnectionContext.ExecuteNonQuery(script);
server.ConnectionContext.Disconnect();
}
}