8

We have an enterprise application that uses Microsoft SQL Server as the database backend. We have come across a fair few instances where a customer has grown the application into a huge db, and some of the queries being run are causing locks and performance issues for themselves and the other users.

We have tried to apply as many indexes as possible and have perf-tuned all queries to the limit, but we have one application that must suit a lot of different customer types, so it's hard to create one solution that fits all. We don't have the resources to apply customer-specific indexing/performance tuning for every customer.

We know the main queries causing issues are the ones generated to drive reports and KPIs.

My question is: is there a way that we can spread the load of the application so that day-to-day use isn't hampered by the report/KPI generation? I.e. could we somehow mirror/duplicate the db so that day-to-day actions are sent to SQL Entity A and the data-intensive queries are sent to SQL Entity B? That way the data-intensive queries would have no effect on the day-to-day side of things, and we could queue the queries going to SQL Entity B.

In this scenario SQL Entity A and B would need to be kept in sync at all times, but SQL Entity B would always be read-only.

Can anyone suggest any paths we could try to accomplish this? Or is there another method I should be looking at to get us performance wins?

Thanks

marc_s
Matt
  • I'm afraid you're trying to have your cake and eat it too. You could set up a second server and replicate your 'main' server to it, but if you want to have them in sync 100% of the time this means that any processing on the main server will also cause load on the 'report' server. Depending on the type of processing it's likely that the 'replication workload' (which is just the actual changes, not the calculations etc.) is not as 'heavy', but it will still have some impact, and of course it brings its own extra load (log shipping). – deroby Oct 08 '13 at 09:48
  • 1
    Have you tried something like [Snapshot Isolation](http://en.wikipedia.org/wiki/Snapshot_isolation)? It might help, as the (read-only) reporting would not block the actual processing while at the same time still returning a 'consistent' view of the data. – deroby Oct 08 '13 at 09:51
  • 1
    Only a little side note: OLTP and analytics do not work well together in the same db. Maybe the kind of replication that the gurus are suggesting in the answers below could be done by separating the read-write part (OLTP) from the read-only one (analytics). That way you could apply specific optimizations to both. – momobo Oct 10 '13 at 09:07

6 Answers

4

It seems you can go with any replication option there is and be fine. In one of my previous jobs we used log shipping (http://technet.microsoft.com/en-us/library/ms187103.aspx) for that purpose.

Also, you can work through the replication types (http://technet.microsoft.com/en-us/library/bb677158.aspx) and see which one suits you best, as you can do more than just reports on the secondary database.

If I remember my initial experience correctly, log shipping is extremely easy to set up, so you may want to start there.
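To give a flavour of what the read-only secondary looks like (a minimal sketch; the database, share, and undo-file names are placeholders, not part of this answer), log shipping keeps the secondary readable by restoring WITH STANDBY:

    -- One-time initialisation of the secondary in STANDBY (read-only) mode.
    RESTORE DATABASE AppDb_Report
        FROM DISK = N'\\backupshare\AppDb_full.bak'
        WITH STANDBY = N'D:\LogShip\AppDb_undo.tuf';

    -- The log-shipping restore job then repeats this for each log backup,
    -- leaving the database readable between restores.
    RESTORE LOG AppDb_Report
        FROM DISK = N'\\backupshare\AppDb_20131008_0900.trn'
        WITH STANDBY = N'D:\LogShip\AppDb_undo.tuf';

Bear in mind each log restore needs exclusive access, so report sessions are briefly disconnected; the log shipping wizard sets up the backup/copy/restore jobs for you.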

nimdil
3

Ahh... performance tuning SQL Server et al., my favourite thing!

Can anyone suggest any paths we could try to accomplish this?

From the info you've given, I would separate the workloads: maintain one database (Server A) for the actual OLTP (the CRUD transactions) and one for the KPIs (Server B).

For the replication I would use transactional replication - when it is performing correctly, the latency will be sub-second. I can't think of a practical scenario where this is inappropriate. Indeed, most reports are run to the end of the prior day, and "real-time" usually means the last 5 minutes.

To manage the replication process I would start with a simple console app, expecting to extend it in due course to match requirements. The console app should use the following namespaces (on second thought, there are possibly later versions available for SQL 2012):

using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Replication;

Using the console app you can then manage the publication, subscription and any trace tokens in a single interface. It'll be a PAIN to configure (all those permissions, passwords and paths) but once it's up and running you'll be able to optimise the transactional database for data and the report server for ... reports.

I would have a replication topology that was effectively one subscription per table for the large tables and a single subscription for the rest (lookup tables, views, sp's). I would replicate primary keys but not constraints, table references or triggers (relying on the source db for integrity). You should also not replicate indexes - they can be configured/optimised by hand for the reporting server.

You'll also be able to select which articles/columns are appropriate for the KPIs, i.e. there's no need to replicate text, varchar(max) etc.
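For reference, the same publication/article/subscription setup can also be scripted with the T-SQL replication stored procedures rather than RMO. This is only a hedged sketch: every name below (database, publication, table, server) is a placeholder, and the many optional parameters are left at their defaults.

    -- Run at the publisher, in the published database.
    EXEC sp_replicationdboption
         @dbname = N'AppDb', @optname = N'publish', @value = N'true';

    EXEC sp_addpublication
         @publication       = N'AppDb_Reporting',
         @repl_freq         = N'continuous',   -- transactional
         @status            = N'active',
         @independent_agent = N'true';

    EXEC sp_addpublication_snapshot @publication = N'AppDb_Reporting';

    -- One article per large table; keep the schema options minimal so the
    -- subscriber's indexes can be tuned separately for reporting.
    EXEC sp_addarticle
         @publication   = N'AppDb_Reporting',
         @article       = N'Orders',
         @source_owner  = N'dbo',
         @source_object = N'Orders';

    -- Push subscription to the reporting server (the Distribution Agent job
    -- itself is added with sp_addpushsubscription_agent).
    EXEC sp_addsubscription
         @publication       = N'AppDb_Reporting',
         @subscriber        = N'REPORTSRV',
         @destination_db    = N'AppDb_Report',
         @subscription_type = N'Push';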

Some helper functions are posted below to get you going.

Or is there another method I should be looking at to get us performance wins?

In my humble experience there are ALWAYS things that can be done to improve performance. It comes down to time->cost->benefit. Sometimes a little compromise on functionality will get you a lot of performance benefit.

The devil is in the details but with that caveat...

Further random thoughts

You have identified one of your infrastructure issues - mixing OLTP and BI/reporting. I'm unclear on your experience and how bad your performance issues are, so while replication is definitely the right way to go, if you're in "fire-fighting" mode you could try the following:

  1. Server-side caching of KPI results (5min, 1hr, 1day?) in db or RAM
  2. Using schema-bound views you can create indexed views (on Standard and Enterprise editions). Depending on the type of KPIs, this might even be all you need to do! See http://msdn.microsoft.com/en-us/library/ms191432.aspx to find out more. In essence, if your KPIs are summations/group-bys you should take a good look (there's a sketch after this list).
  3. Pre-calculating summations overnight for daily KPIs. You can then optionally add just the current day's data.
  4. Sorting - are the KPIs expensive due to ORDER BY clauses? Ensure your clustered indexes are correct (remember: they do not need to be on the primary key). Try undertaking sorts on the client once you have the data.
  5. Clustered index size. The smaller the better. Start here if you use GUIDs.
  6. Vertically partition the data - for example, if you have a table of 200 columns but the KPIs use only 10 columns, put those 10 into a different table; you'll get more data per I/O page read (this will help if your disk is the bottleneck).
  7. Offer the functionality of "Send reports by email" - take away the real-time nature. You might be able to deliver a percentage of reports overnight when things are quieter and have a lower volume of "real-time" reports during the day. Some customers might actually prefer this feature.
  8. Make your customers pay for reports! "Just enter your credit-card details here..." - a sure way to reduce the number of reports :)
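For point 2, a minimal indexed-view sketch (table and column names are made up for illustration); SUM(ISNULL(...)) and COUNT_BIG(*) are requirements of indexed views with GROUP BY:

    -- Schema-bound view over the base table, pre-aggregating the KPI.
    CREATE VIEW dbo.vw_DailySales
    WITH SCHEMABINDING
    AS
    SELECT  CustomerId,
            OrderDate,
            SUM(ISNULL(LineTotal, 0)) AS DailyTotal,
            COUNT_BIG(*)              AS RowCnt   -- mandatory with GROUP BY
    FROM    dbo.OrderLines
    GROUP BY CustomerId, OrderDate;
    GO

    -- Materialise it. Enterprise Edition can use this index automatically;
    -- on Standard, reference the view WITH (NOEXPAND) in the KPI query.
    CREATE UNIQUE CLUSTERED INDEX IX_vw_DailySales
        ON dbo.vw_DailySales (CustomerId, OrderDate);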

Some more info about your config would be useful - when you say huge, how huge? How big/what type of disks, what's the RAM spec etc - the reason I ask is this.... you might spend the next 40 man days (at $500+/day?) tuning - that would buy you quite a lot of hardware! - more RAM, more disk, faster disks - SSD for tempdb or index partitions. Put another way... you might be asking too much of the hardware (and your boss is asking too much of you)

Next, you describe an enterprise application: is this an Enterprise SQL Server licence? If so you are in luck - you can create schema-bound partitioned views and have the queries delegated to the "correct" server. There are issues with this model though - namely joins - but it does provide you with an effective alternative option.
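To illustrate the partitioned-view idea (again just a sketch; the linked server, databases, and CHECK ranges are hypothetical), each server holds one range of the data and the view unions them; with the partitioning column in the WHERE clause the optimiser only touches the relevant member:

    -- Member tables (one per server), each constrained to its range:
    --   Server A (local):            dbo.Orders_Current  CHECK (OrderYear >= 2013)
    --   Server B (linked REPORTSRV): dbo.Orders_History  CHECK (OrderYear <  2013)

    CREATE VIEW dbo.Orders_All
    AS
    SELECT * FROM dbo.Orders_Current
    UNION ALL
    SELECT * FROM REPORTSRV.AppDb.dbo.Orders_History;

    -- e.g. SELECT ... FROM dbo.Orders_All WHERE OrderYear = 2013
    -- is satisfied entirely from the local member table.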

Replication code

I knew I had it somewhere. Below are some helper functions for RMO that you might find useful in getting started with replication. At some point in the past this was live code, but probably longer ago than I care to think of - please treat it as pseudocode.

(PS Happy for you to get in touch direct if you wish)

 public static class RMOHelper
    {
        public static void PreparePublicationDb(MyServer Src, MyServer Dist)
        {
            ReplicationDatabase publicationDb = new ReplicationDatabase(Src.Database, Src.ServerConnection);
            if (publicationDb.LoadProperties())
            {
                if (!publicationDb.EnabledTransPublishing)
                {
                    publicationDb.EnabledTransPublishing = true;
                }

                // If the Log Reader Agent does not exist, create it.
                if (!publicationDb.LogReaderAgentExists)
                {
                    // Specify the Windows account under which the agent job runs.
                    // This account will be used for the local connection to the 
                    // Distributor and all agent connections that use Windows Authentication.
                    publicationDb.LogReaderAgentProcessSecurity.Login = Dist.WinUId;
                    publicationDb.LogReaderAgentProcessSecurity.Password = Dist.WinPwd;

                    // Explicitly set authentication mode for the Publisher connection
                    // to the default value of Windows Authentication.
                    publicationDb.LogReaderAgentPublisherSecurity.WindowsAuthentication = true;

                    // Create the Log Reader Agent job.
                    publicationDb.CreateLogReaderAgent();
                    DeleteJobAgentSchedule(publicationDb.LogReaderAgentName);
                }
            }
            else
            {
                throw new ApplicationException(String.Format(
                    "The {0} database does not exist at {1}.",
                    publicationDb,
                    Src.ServerName));
            }
        }
        public static TransPublication PrepareTransPublication(MyServer Src, MyServer Dist, string publicationName)
        {
            // Set the required properties for the transactional publication.
            TransPublication publication = new TransPublication();
            publication.ConnectionContext = Src.ServerConnection;
            publication.Name = publicationName;
            publication.DatabaseName = Src.Database;

            // Specify a transactional publication (the default).
            publication.Type = PublicationType.Transactional;
            publication.ConflictRetention = 4;
            publication.RetentionPeriod = 72;
            // Activate the publication so that we can add subscriptions.
            publication.Status = State.Active;
            // Enable push and pull subscriptions and independent Distribution Agents.
            publication.Attributes = PublicationAttributes.AllowPull | PublicationAttributes.AllowPush | PublicationAttributes.IndependentAgent;
            //publication.Attributes &= PublicationAttributes.AllowSyncToAlternate;
            // Specify the Windows account under which the Snapshot Agent job runs.
            // This account will be used for the local connection to the 
            // Distributor and all agent connections that use Windows Authentication.
            publication.SnapshotGenerationAgentProcessSecurity.Login = Dist.WinUId;
            publication.SnapshotGenerationAgentProcessSecurity.Password = Dist.WinPwd;

            // Explicitly set the security mode for the Publisher connection to
            // Windows Authentication (the default).
            publication.SnapshotGenerationAgentPublisherSecurity.WindowsAuthentication = true;

            publication.AltSnapshotFolder = @"\\192.168.35.4\repldata\";
            if (!publication.IsExistingObject)
            {
                // Create the transactional publication.
                publication.Create();

                // Create a Snapshot Agent job for the publication.
                publication.CreateSnapshotAgent();
                // DeleteJobAgentSchedule(ByVal jobID As Guid) As Boolean
            }
            else
            {
                //throw new ApplicationException(String.Format(
                //    "The {0} publication already exists.", publicationName));
            }
            return publication;
        }
        public static TransArticle PrepareTransArticle(TransPublication TransPub, Happy.MI.Replication.Article Article)
        {
            TransArticle TransArticle = new TransArticle();
            TransArticle.ConnectionContext = TransPub.ConnectionContext;
            TransArticle.Name = Article.Name;
            TransArticle.DatabaseName = TransPub.DatabaseName;
            TransArticle.SourceObjectName = Article.Name;
            TransArticle.SourceObjectOwner = "dbo";
            TransArticle.PublicationName = TransPub.Name;
            //article.Type = ArticleOptions.LogBased;
            //article.FilterClause = "DiscontinuedDate IS NULL";

            // Ensure that we create the schema owner at the Subscriber.


            if (TransArticle.IsExistingObject)
            {
                //do somethinbg??
            }
            else
            {

                TransArticle.SchemaOption |= CreationScriptOptions.Schema;
                TransArticle.SchemaOption |= CreationScriptOptions.AttemptToDropNonArticleDependencies;

                if (!Article.ObjectType.HasValue)
                {
                    throw new Exception(string.Format("unknown schema object type for trans article {0}", Article.Name));
                }
                if (Article.ObjectType.Value== DataAccessADO.ObjectType.USER_TABLE)
                {
                    TransArticle.SchemaOption |= CreationScriptOptions.ClusteredIndexes;
                    TransArticle.SchemaOption |= CreationScriptOptions.DriChecks;
                    TransArticle.SchemaOption |= CreationScriptOptions.DriDefaults;
                    TransArticle.SchemaOption |= CreationScriptOptions.DriPrimaryKey;
                    TransArticle.SchemaOption |= CreationScriptOptions.DriUniqueKeys;
                    //TransArticle.SchemaOption |= CreationScriptOptions.ExtendedProperties;
                    //TransArticle.SchemaOption |= CreationScriptOptions.NonClusteredIndexes;
                    TransArticle.Type = ArticleOptions.LogBased;
                    TransArticle.AddReplicatedColumns(Article.IncludedColumns.ToArray());
                }
                else if (Article.ObjectType.Value == DataAccessADO.ObjectType.VIEW)
                {
                    TransArticle.Type= ArticleOptions.ViewSchemaOnly;
                }
                else if (Article.ObjectType.Value == DataAccessADO.ObjectType.SQL_SCALAR_FUNCTION)
                {
                    TransArticle.Type = ArticleOptions.FunctionSchemaOnly;
                }
                else if (Article.ObjectType.Value == DataAccessADO.ObjectType.SQL_STORED_PROCEDURE)
                {
                    TransArticle.Type = ArticleOptions.ProcSchemaOnly;
                }
                else
                {
                    throw new Exception(string.Format("unsupported schema object type {0}", Article.ObjectType.Value));
                }
                // Create the article.
                TransArticle.Create();
            }
            return TransArticle;
        }
        public static TransSubscription PrepareSubscription(TransPublication TransPub, MyServer Src, MyServer Dest, MyServer Dist)
        {
            // Define the push subscription.
            //TransPullSubscription subscription = new TransPullSubscription();
            //subscription.ConnectionContext =  Dest.ServerConnection;
            //subscription.PublisherName = Src.ServerName;
            //subscription.PublicationName = TransPub.Name;
            //subscription.PublicationDBName = Src.Database;
            //subscription.DatabaseName = Dest.Database;
            TransSubscription subscription = new TransSubscription();
            subscription.ConnectionContext = TransPub.ConnectionContext;
            subscription.PublicationName = TransPub.Name;
            subscription.DatabaseName =  TransPub.DatabaseName;
            subscription.SubscriptionDBName = Dest.Database;
            subscription.SubscriberName = Dest.ServerName;
            subscription.LoadProperties();

            //subscription.Remove();

            // Specify the Windows login credentials for the Distribution Agent job.
            subscription.SynchronizationAgentProcessSecurity.Login = Dist.WinUId;
            subscription.SynchronizationAgentProcessSecurity.Password = Dist.WinPwd;


            if(!subscription.IsExistingObject){


                // Create the push subscription.


                // Subscriptions to transactional publications are synchronized
                // continuously by default; make that explicit here.
                subscription.AgentSchedule.FrequencyType = ScheduleFrequencyType.Continuously;

                subscription.Create();

                PrepareSnapshot(TransPub, Src, Dist);
            }
            return subscription;
        }
        public static void PrepareSnapshot(TransPublication TPub, MyServer Src, MyServer Dist)
        {
            SnapshotGenerationAgent agent = new SnapshotGenerationAgent();
            agent.Distributor = Dist.ServerName;
            agent.DistributorSecurityMode = SecurityMode.Standard;
            agent.DistributorLogin = Dist.SQLUId;
            agent.DistributorPassword = Dist.SQLPwd;

            agent.Publisher = TPub.SqlServerName;
            agent.PublisherSecurityMode = SecurityMode.Standard;
            agent.PublisherLogin = Src.SQLUId;
            agent.PublisherPassword = Src.SQLPwd;

            agent.Publication = TPub.Name;
            agent.PublisherDatabase = TPub.DatabaseName;
            agent.ReplicationType = ReplicationType.Transactional;

            // Start the agent synchronously.
            agent.GenerateSnapshot();

        }
        public static void ApplySubscription(Happy.MI.Replication.Subscription _subscription)
        {

            Happy.MI.Replication.Publication p = _subscription.Publication;
            RMOHelper.PreparePublicationDb(_subscription.Publication.Src, _subscription.Publication.Dist);
            TransPublication TransPub = RMOHelper.PrepareTransPublication(p.Src, p.Dist, p.PublicationName);
            foreach (Happy.MI.Replication.Article a in p.Articles)
            {
                a.LoadProperties();
                TransArticle ta = RMOHelper.PrepareTransArticle(TransPub, a);
                ta.ConnectionContext.Disconnect();

            }
            TransSubscription TransSub = RMOHelper.PrepareSubscription(TransPub, p.Src, _subscription.Dest, p.Dist);
            if (TransSub.LoadProperties() && TransSub.AgentJobId == null)
            {
                // Start the Distribution Agent asynchronously.
                TransSub.SynchronizeWithJob();

            }
            TransSub.ConnectionContext.Disconnect();

            //foreach (Happy.MI.Replication.Subscription s in p.Subscriptions)
            //{
            //    TransSubscription TransSub = RMOHelper.PrepareSubscription(TransPub, p.Src, s.Dest, p.Dist);

            //    if (TransSub.LoadProperties() && TransSub.AgentJobId == null)
            //    {
            //        // Start the Distribution Agent asynchronously.
            //        TransSub.SynchronizeWithJob();

            //    }
            //    TransSub.ConnectionContext.Disconnect();
            //}
            //TransPub.ConnectionContext.Disconnect();

        }
        public static void Create(Happy.MI.Replication.Publication p)
        {
            RMOHelper.PreparePublicationDb(p.Src, p.Dist);

            TransPublication TransPub = RMOHelper.PrepareTransPublication(p.Src, p.Dist, p.PublicationName);
            foreach (Happy.MI.Replication.Article a in p.Articles)
            {
                a.LoadProperties();
                RMOHelper.PrepareTransArticle(TransPub, a);
            }
            foreach (Happy.MI.Replication.Subscription s in p.Subscriptions)
            {
                TransSubscription TransSub = RMOHelper.PrepareSubscription(TransPub, p.Src, s.Dest, p.Dist);

                if (TransSub.LoadProperties() && TransSub.AgentJobId == null)
                {
                    // Start the Distribution Agent asynchronously.
                    TransSub.SynchronizeWithJob();

                }
            }
        }
        private static  void DeleteJobAgentSchedule(string s)
        {
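            // Stub: the commented-out VB.NET below is the original implementation, which
            // dropped the agent job's schedule via msdb..sp_delete_schedule.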
            //    Private Function DeleteSchedule(ByVal scheduleID As Integer) As Boolean
            //    Dim result As Boolean

            //    If (scheduleID > 0) Then
            //        Dim msdbConnectionString As String = Me.PublicationConnectionString.Replace(String.Format("Initial Catalog={0};", Me.PublicationDbName), "Initial Catalog=msdb;")

            //        Dim db As New SQLDataAccessHelper.DBObject(msdbConnectionString)

            //        '-- Delete Job Schedule
            //        Dim parameters As New List(Of System.Data.SqlClient.SqlParameter)

            //        parameters.Add(New System.Data.SqlClient.SqlParameter("@schedule_id", SqlDbType.Int))
            //        parameters.Add(New System.Data.SqlClient.SqlParameter("@force_delete", SqlDbType.Bit))

            //        parameters(0).Value = scheduleID
            //        parameters(1).Value = True

            //        Dim rowsAffected As Integer

            //        result = (db.RunNonQueryProcedure("sp_delete_schedule", parameters, rowsAffected) = 0)

            //        db.Connection.Close()
            //        db.Connection.Dispose()
            //    Else
            //        Throw New ArgumentException("DeleteSchedule(): ScheduleID must be greater than 0")
            //    End If

            //    Return result
            //End Function
        }

        public static int PublicationEstimatedTimeBehind(Happy.MI.Replication.Subscription s)
        {

            PublicationMonitor mon = new PublicationMonitor();
            mon.DistributionDBName = s.Publication.Dist.Database;
            mon.PublisherName = s.Publication.Src.ServerName;
            mon.PublicationDBName = s.Publication.Src.Database;
            mon.Name = s.Publication.PublicationName;
            mon.ConnectionContext = s.Publication.Src.ServerConnection;
            DataSet ds1 = mon.EnumSubscriptions2(SubscriptionResultOption.AllSubscriptions);
            ds1.WriteXml(@"c:\desktop\ds1.xml");

            //ds.Tables[0].ToString();


            if (mon.LoadProperties())
            {
                PendingCommandInfo pci = mon.TransPendingCommandInfo(s.Dest.ServerName, s.Dest.Database, SubscriptionOption.Push);
                return pci.EstimatedTimeBehind;
            }
            else
            {
                throw new Exception(string.Format("Unable to load properties for subscription [{0}][{1}]",s.Dest.ServerName, s.Publication.PublicationName));
            }
        }

        public static int TraceTokenPost(Happy.MI.Replication.Subscription s)
        {
            TransPublication TransPub = new TransPublication();
            TransPub.ConnectionContext = s.Publication.Src.ServerConnection;
            TransPub.Name = s.Publication.PublicationName;
            TransPub.DatabaseName = s.Publication.Src.Database;
            if (TransPub.LoadProperties())
            {
                return TransPub.PostTracerToken();
            }
            return 0;
        }
        public static bool TraceTokenReceive(Happy.MI.Replication.Subscription s, int TokenId){
            PublicationMonitor mon = new PublicationMonitor();
            mon.DistributionDBName = s.Publication.Dist.Database;
            mon.PublisherName = s.Publication.Src.ServerName;
            mon.PublicationDBName = s.Publication.Src.Database;
            mon.Name = s.Publication.PublicationName;
            mon.ConnectionContext = s.Publication.Src.ServerConnection;
            if (mon.LoadProperties())
            {
                DataSet ds= mon.EnumTracerTokenHistory(TokenId);
                int latency;
                string str = ds.Tables[0].Rows[0]["overall_latency"].ToString();
                bool res = int.TryParse(str, out latency);

                return res;
            }
            else
            {
                throw new Exception(string.Format("Unable to load properties for subscription [{0}][{1}]", s.Dest.ServerName, s.Publication.PublicationName));
            }
        }

        public static void Cmd(string cnct)
        {
            string script = System.IO.File.ReadAllText(@"C:\tfs\CA\Dev\MI\Happy.MI\PostReplicationScripts\GC1.txt");
            SqlConnection connection = new SqlConnection(cnct+";Connection Timeout=5");
            Server server = new Server(new ServerConnection(connection));
            //server.ConnectionContext.InfoMessage += new System.Data.SqlClient.SqlInfoMessageEventHandler(ConnectionContext_InfoMessage);
            server.ConnectionContext.ExecuteNonQuery(script);
            server.ConnectionContext.Disconnect();
        }
    }
chue x
dunxz
2

You could look into partitioned tables, partitioning the data in such a way that the reporting/BI operations don't affect your day-to-day OLTP performance. It could also save you valuable time when you need to clear out old data.
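A minimal sketch of that idea (table partitioning requires Enterprise Edition on SQL Server versions of this era, and every object name below is hypothetical); the pay-off for purging is that an old partition is switched out as a metadata operation instead of a big DELETE:

    -- Partition function/scheme splitting rows by year (example boundaries).
    CREATE PARTITION FUNCTION pf_OrderYear (int)
        AS RANGE RIGHT FOR VALUES (2011, 2012, 2013);

    CREATE PARTITION SCHEME ps_OrderYear
        AS PARTITION pf_OrderYear ALL TO ([PRIMARY]);

    CREATE TABLE dbo.OrderFacts
    (
        OrderId   bigint        NOT NULL,
        OrderYear int           NOT NULL,
        Amount    decimal(18,2) NOT NULL,
        CONSTRAINT PK_OrderFacts PRIMARY KEY (OrderYear, OrderId)
    ) ON ps_OrderYear (OrderYear);

    -- Purge an old year: switch its partition into an identically structured
    -- staging table on the same filegroup, then drop the staging table.
    CREATE TABLE dbo.OrderFacts_Purge
    (
        OrderId   bigint        NOT NULL,
        OrderYear int           NOT NULL,
        Amount    decimal(18,2) NOT NULL,
        CONSTRAINT PK_OrderFacts_Purge PRIMARY KEY (OrderYear, OrderId)
    ) ON [PRIMARY];

    ALTER TABLE dbo.OrderFacts
        SWITCH PARTITION $PARTITION.pf_OrderYear(2011) TO dbo.OrderFacts_Purge;
    DROP TABLE dbo.OrderFacts_Purge;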

ifx
2

Take a look at ScaleArc. It's a SQL connection manager that load balances by partitioning reads and writes across multiple instances. This does mean you will still have to set up replication.

Tim Child
0

I would say divide the issue into smaller pieces before deciding on any approach to resolve it.

DB mirroring, replication, and the other high-availability/DR features of SQL Server are there when you really need them, but those features do not give 100% real-time sync either. As other experienced DBAs have already mentioned, you have to plan for "planned downtime" and/or "a few minutes of latency", and also set the customer's expectations accordingly if you go for those options.

Again, those features may turn out to shift the issue rather than resolve it unless we look at the underlying cause first. The suggestions below will look like generic statements, but so is the question: your issue is too wide to cover, and a lot of areas need to be uncovered before someone can answer.

Now, what I want to point out is: ask small questions of the problem at hand.

You mentioned:

“some of the queries being run are causing locks and performance issues for themselves and the other users”

    Are these queries blocking other reads and/or writes? If both, let's handle them separately.
        Ideally a read should not be blocked by other reads/writes. What isolation level do you have in the DB?
            If READ COMMITTED, or any stricter level, think about SNAPSHOT ISOLATION (see the sketch further down).
        Do the queries have a lot of table and/or index hints in them?
            Try not to optimize queries with hints as a first option; treat that as a last resort.
        If the issue is write/write blocking, there are a couple of points you can consider.
            Are the write queries written properly so that they acquire appropriate locks (due to the table hints, if any)?
            Have you looked at the server configuration: max memory, threads, DOP, auto-update statistics async?
            Can the large inserts/updates be queued from the app tier?
            Can the large inserts/updates be chunked into smaller batches?
            Can the large inserts/updates be executed as asynchronous operations?
            Can you take advantage of partitioning in the database?

All of the above "can" questions require more input depending on what your first answers are. If the DB or code was not originally designed to accommodate such features, then this is a good time to start thinking about that.
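For the snapshot-isolation point above, a minimal sketch (the database and object names are placeholders; note that row versioning adds load on tempdb's version store):

    -- Allow sessions to request SNAPSHOT isolation explicitly.
    ALTER DATABASE AppDb SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- Optionally make ordinary READ COMMITTED reads use row versions too,
    -- so existing report queries stop blocking writers without code changes.
    ALTER DATABASE AppDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

    -- A reporting session can then opt in:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRAN;
        SELECT CustomerId, SUM(Amount) AS Total
        FROM   dbo.Orders
        GROUP BY CustomerId;   -- consistent view, without blocking OLTP writers
    COMMIT;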

    In terms of performance, are you seeing writes getting slower, or reads?
        What is causing the slowness?
            CPU, memory, disk I/O, DB properties, too many object recompilations?
        Have you extracted the execution plans for the identified main queries?
            If the queries are very long and complex, look instead at how the logic can be simplified/optimized.
        Are the tables overly indexed?
            Write operations can suffer severely if tables are over-optimized for reads by adding a lot of indexes.
            Look at index fragmentation and statistics. You must have a DB maintenance plan in place. How soon do the indexes become fragmented again?
        Are there a lot of aggregations and calculations in the queries that run frequently?
            If a frequently run query has a lot of aggregations/UDFs/views, we can also find out whether we can store semi-aggregated data separately.
        Do the reporting queries retrieve a lot of data?
            Queries serving reports may end up returning thousands of rows. Think about what the user does with that much data in the UI.
            Is it really necessary? If not, can we limit the result set to a certain number of rows based on a user setting?
            If yes, can we implement paging for these queries (which can be controlled by a setting as well; see the sketch after this list)?
            If this much data is feeding another subsystem (SSRS), then any reporting tool will have some latency anyway, depending on how much data we dump in front of the user.
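For the paging point, a minimal sketch (OFFSET/FETCH needs SQL Server 2012 or later; on 2005/2008 the same can be done with ROW_NUMBER(); the table and columns are hypothetical):

    DECLARE @PageSize int = 50, @PageNumber int = 3;

    SELECT  OrderId, CustomerId, OrderDate, Amount
    FROM    dbo.Orders
    ORDER BY OrderDate DESC, OrderId
    OFFSET (@PageNumber - 1) * @PageSize ROWS
    FETCH NEXT @PageSize ROWS ONLY;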

“We have tried to apply as many indexes as possible and perf tuned all queries to the limit but we have one application that must suit a lot of different customer types so it’s hard to create one solution that fits all. We do not have the resources to apply customer specific indexing/performance for every customer.”

    Can we identify the customizations and think about how to implement them separately?
    This is a huge piece by itself, but I can tell you from my own experience that it took us almost a year
    to transform our DB design so that it can accommodate over 300 clients without worrying about how one
    customization will affect another client's custom functionality and/or our core product features.
    If you can manage to get the right plan laid out first, you can surely get the resources to accomplish it.

“We know the main queries that are causing issues are the ones that are generated to drive reports and kpi's.”

    How many tables do these queries cover?
    If the number is less than 30% of the DB, then instead of the whole DB
    we should focus on these tables and queries only.


    If there are any of the above points you haven't visited yet, do so.
    You will find very simple things that can save you a lot.
    It is better to look at the root of the problem instead of covering it up temporarily with alternatives.
    Many of the DBAs and developers in this community will be happy to assist you with an "end-to-end resolution" or "help as needed".
Anup Shah
0

You can set up snapshot replication, using one database for production and the other for reporting. Move the indexes needed for reporting to the reporting database, and keep those necessary for the application in the database used by the application.
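As a small illustration of that split (the index and column names are made up), the heavy covering indexes for reports then exist only on the reporting copy and never slow down writes on the production database:

    -- Created on the reporting database only.
    CREATE NONCLUSTERED INDEX IX_Orders_Reporting
        ON dbo.Orders (CustomerId, OrderDate)
        INCLUDE (Amount, Status, SalesRegion);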

Juan