
Requirement:
A set of dashboards to be shown on an ERP home screen, with data filtered according to the current user's permissions.

As of Now: Home Dashboard

Highcharts is used for data visualization. The backing page is C#/.NET.

Problem :

  • Every time a user changes a filter, the page hits the live DB and fetches the data.
  • Every morning, users log in at almost the same time, so a huge number of requests hit SQL Server simultaneously, which causes performance issues. There are also a lot more charts to come.

We're planning to implement SQL Server Analysis Services (SSAS) cubes for the data. Can somebody please suggest whether that's the right way, or suggest a better way or architecture for this?

Thank you.

  • I am afraid this is not that simple to answer, even with the experience of the entire SO community put together. You have to be more specific about the budget you have, what technologies are already in your shop, the expected result (e.g. does the reporting data need to be live?), your/your team's skills around reporting/DW solutions, and whether there are any business limitations. – Bartosz X Aug 16 '17 at 12:05
  • So many factors. How many users? What's the current impact on load time? What's an acceptable load time? Etc. What are the queries like - is there a lot of processing that is slowing it all down? I can't speak to the effectiveness of SSAS cubes, but some form of pre-processing and/or caching of the data is certainly a direction to go in. – jlbriggs Aug 16 '17 at 12:24
  • @BartoszX No supporting software is available at our shop right now except Highcharts. Real-time data is preferred, but data a couple of hours old is also fine. I'm the owner of this job. I'm really a beginner in .NET and have some 4 years of experience in SQL. I also have a senior DB administrator to help me with performance tuning, and he's good at DW solutions (SSAS). Limitation: this goes to clients with different server configurations; some are really good and some are really bad. – Sandeep Mogaveer Aug 16 '17 at 12:41
  • @jlbriggs We expect at most 50 users might use this page at once. Load testing has not been done; the assumption is that it will slow down the DB, as we're fetching data from the actual transaction tables. Acceptable load time would be around 10 seconds. A few charts involve a lot of processing and a few are slow due to the data itself. We really want to know if there is any technology/methodology for this. Can you please elaborate on what exact pre-processing and/or caching is recommended? – Sandeep Mogaveer Aug 16 '17 at 12:51
  • Have you already analysed your wait stats? What wait types do you have a problem with? (Trying to understand whether it's hardware or row locks.) – Bartosz X Aug 16 '17 at 12:52
  • Frankly, I am probably not the best person to tell you what exact pre-processing and/or caching is recommended, but my bigger point was that it depends on what kind of processing is being done on the fly currently. If your queries are doing calculations, concatenations, or heavy joins every time a user refreshes the page, then do those things ahead of time as much as possible, at whatever frequency is required for the data in question. If your queries are fast and clean already, then a different approach is needed. – jlbriggs Aug 16 '17 at 12:57
  • @BartoszX I'm not an expert on these DBA matters; I'll definitely check with our DB guy. But my basic question is: is it the right approach to read data directly from transaction tables with that many users accessing them concurrently? What's your opinion on implementing SSAS cubes? – Sandeep Mogaveer Aug 16 '17 at 13:12

1 Answer


So first things first: you need to know what the real issue is. If it's resources, just add more as you need them and try to optimise your code. But from my experience this is not the case; I believe you have a classic example of row locks while concurrent transactions try to access the same data. If you have a problem with deadlocks, you might want to try the snapshot transaction isolation level. If this is just concurrent reads, you might want to create a simple replication and copy the most heavily accessed objects into a separate, read-only DB; if you are already taking log backups, using log shipping and reading from the copy where possible sounds like a bargain to me as well.
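For the locking scenario, a minimal sketch of enabling snapshot isolation; the database name [ERP_DB] is a placeholder, not something from the question:

-- Sketch: let readers see committed row versions instead of blocking behind
-- writers. [ERP_DB] is a placeholder database name.
ALTER DATABASE [ERP_DB] SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Optionally make plain READ COMMITTED use row versioning as well, so the
-- existing dashboard queries benefit without any code changes. This adds
-- tempdb version-store load, so test it first.
ALTER DATABASE [ERP_DB] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;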

If you are happy to put some effort into fixing it properly, I would recommend considering a Data Warehouse solution and linking your application/reporting to it.
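As an interim step, and in the spirit of the pre-processing suggested in the comments, here is a sketch of a pre-aggregated summary table that a SQL Agent job could rebuild every hour or so; every object and column name below is hypothetical:

-- Sketch: serve the dashboards from a small summary table rebuilt on a
-- schedule, so the home screen never queries the transaction tables live.
-- dbo.Sales_Daily_Summary and dbo.Sales_Orders are hypothetical names.
IF OBJECT_ID(N'dbo.Sales_Daily_Summary', N'U') IS NULL
    CREATE TABLE dbo.Sales_Daily_Summary
    (
         [Sale_Date]    date          NOT NULL
        ,[Branch_Id]    int           NOT NULL
        ,[Order_Count]  int           NOT NULL
        ,[Total_Amount] decimal(19,4) NOT NULL
        ,PRIMARY KEY CLUSTERED ([Sale_Date], [Branch_Id])
    );

BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.Sales_Daily_Summary;  -- full rebuild each run
    INSERT INTO dbo.Sales_Daily_Summary ([Sale_Date], [Branch_Id], [Order_Count], [Total_Amount])
    SELECT
         CAST(o.[Order_Date] as date)
        ,o.[Branch_Id]
        ,COUNT(*)
        ,SUM(o.[Amount])
    FROM dbo.Sales_Orders o  -- hypothetical transaction table
    GROUP BY CAST(o.[Order_Date] as date), o.[Branch_Id];
COMMIT TRANSACTION;

The dashboard queries then filter this small table by the current user's permissions instead of scanning the live tables, which also fits the "couple of hours old is fine" requirement from the comments.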

About the data cubes and/or an SSAS solution: it is helpful, but once you start you will realise that you need the DW anyway, and to see a real advantage from it your customers would have to do a lot of dimensional aggregation, not just a simple "refresh the report to download today's data".

There will be a lot of work for you, and I recommend analysing the wait stats as a starting point to understand exactly where you are right now and what the real problem is. As a gift, please find the code below to get these stats:

-- Wait types to exclude: benign background/idle waits that would otherwise
-- dominate the results.
DECLARE @Wait_Types_Excluded TABLE([wait_type] nvarchar(60) PRIMARY KEY);

INSERT INTO @Wait_Types_Excluded([wait_type]) VALUES

 (N'BROKER_EVENTHANDLER'), (N'BROKER_RECEIVE_WAITFOR'), (N'BROKER_TASK_STOP'), (N'BROKER_TO_FLUSH'), (N'BROKER_TRANSMITTER')
,(N'CHECKPOINT_QUEUE'), (N'CHKPT'), (N'CLR_AUTO_EVENT'), (N'CLR_MANUAL_EVENT'), (N'CLR_SEMAPHORE') ,(N'DIRTY_PAGE_POLL')
,(N'DISPATCHER_QUEUE_SEMAPHORE'), (N'EXECSYNC'), (N'FSAGENT'), (N'FT_IFTS_SCHEDULER_IDLE_WAIT'), (N'FT_IFTSHC_MUTEX')
,(N'KSOURCE_WAKEUP'), (N'LAZYWRITER_SLEEP'), (N'LOGMGR_QUEUE'), (N'MEMORY_ALLOCATION_EXT'), (N'ONDEMAND_TASK_QUEUE')
,(N'PREEMPTIVE_XE_GETTARGETSTATE'), (N'PWAIT_ALL_COMPONENTS_INITIALIZED'), (N'PWAIT_DIRECTLOGCONSUMER_GETNEXT')
,(N'QDS_PERSIST_TASK_MAIN_LOOP_SLEEP'), (N'QDS_ASYNC_QUEUE'), (N'QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP')
,(N'QDS_SHUTDOWN_QUEUE'), (N'REDO_THREAD_PENDING_WORK'), (N'REQUEST_FOR_DEADLOCK_SEARCH'), (N'RESOURCE_QUEUE')
,(N'SERVER_IDLE_CHECK'), (N'SLEEP_BPOOL_FLUSH'), (N'SLEEP_DBSTARTUP'), (N'SLEEP_DCOMSTARTUP'), (N'SLEEP_MASTERDBREADY')
,(N'SLEEP_MASTERMDREADY'), (N'SLEEP_MASTERUPGRADED'), (N'SLEEP_MSDBSTARTUP'), (N'SLEEP_SYSTEMTASK'), (N'SLEEP_TASK')
,(N'SLEEP_TEMPDBSTARTUP'), (N'SNI_HTTP_ACCEPT'), (N'SP_SERVER_DIAGNOSTICS_SLEEP'), (N'SQLTRACE_BUFFER_FLUSH')
,(N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP'), (N'SQLTRACE_WAIT_ENTRIES'), (N'WAIT_FOR_RESULTS'), (N'WAITFOR')
,(N'WAITFOR_TASKSHUTDOWN'), (N'WAIT_XTP_RECOVERY'), (N'WAIT_XTP_HOST_WAIT'), (N'WAIT_XTP_OFFLINE_CKPT_NEW_LOG')
,(N'WAIT_XTP_CKPT_CLOSE'), (N'XE_DISPATCHER_JOIN'), (N'XE_DISPATCHER_WAIT'), (N'XE_TIMER_EVENT')
,(N'DBMIRROR_DBM_EVENT'), (N'DBMIRROR_EVENTS_QUEUE'), (N'DBMIRROR_WORKER_QUEUE'), (N'DBMIRRORING_CMD')
,(N'HADR_CLUSAPI_CALL'), (N'HADR_FILESTREAM_IOMGR_IOCOMPLETION'), (N'HADR_LOGCAPTURE_WAIT')
,(N'HADR_NOTIFICATION_DEQUEUE'), (N'HADR_TIMER_TASK'), (N'HADR_WORK_QUEUE');

-- Approximate date the wait stats were last cleared (derived from a wait type
-- that accumulates continuously), alongside the last SQL Server restart.
SELECT
 [Approx_Wait_Stats_Restart_Date] = CAST(DATEADD(minute, -CAST((CAST(ws.[wait_time_ms] as decimal(38,18)) / 60000.0) as int), SYSDATETIME()) as smalldatetime)
,[SQL_Server_Last_Restart_Date] = CAST(si.[sqlserver_start_time] as smalldatetime)
FROM sys.dm_os_wait_stats ws
CROSS JOIN sys.dm_os_sys_info si
WHERE ws.[wait_type] = N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP';

-- Top 25 wait types by total wait time, split into resource vs signal waits.
SELECT TOP 25
 ws.[wait_type]
,[Total_Wait_(s)]         = CAST(SUM(ws.[wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) / 1000.0 as decimal(19,3))
,[Resource_(s)]           = CAST(SUM([wait_time_ms] - [signal_wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) / 1000.0 as decimal(19,3))
,[Signal_(s)]             = CAST(SUM(ws.[signal_wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) / 1000.0 as decimal(19,3))
,[Avg_Total_Wait_(ms)]    = CASE WHEN SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) > 0 THEN SUM(ws.[wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) / SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) END
,[Avg_Resource_Wait_(ms)] = CASE WHEN SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) > 0 THEN SUM(ws.[wait_time_ms] - ws.[signal_wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) / SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) END
,[Avg_Signal_Wait_(ms)]   = CASE WHEN SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) > 0 THEN SUM(ws.[signal_wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) / SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) END
,[Waiting_Tasks_QTY]      = SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type])
,[Percent_of_Total_Waits_Time]  = CAST(CAST(SUM(ws.[wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) as decimal) / CAST(SUM(ws.[wait_time_ms]) OVER() as decimal) * 100.0 as decimal(5,2))
,[Percent_of_Total_Waits_QTY]     = CAST(CAST(SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) as decimal)/ CAST(SUM(ws.[waiting_tasks_count]) OVER() as decimal) * 100.0 as decimal(5,2))
FROM sys.dm_os_wait_stats ws
LEFT JOIN @Wait_Types_Excluded wte ON ws.[wait_type] = wte.[wait_type]
WHERE   wte.[wait_type] IS NULL
AND    ws.[waiting_tasks_count] > 0
ORDER BY [Total_Wait_(s)] DESC;  
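As a rough guide to the output: if LCK_M_* wait types dominate, that supports the row-lock theory above; heavy PAGEIOLATCH_* waits point at storage I/O instead; and a high signal-wait share or SOS_SCHEDULER_YIELD suggests CPU pressure.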
Bartosz X