
I have an Azure SQL database in the Standard tier (10 DTU).

How can I "predict" performance on CPU-intensive queries (as that seems to be the reason for the slowness)?

To illustrate the problem I will use a perf_Test table, which can be populated like this (the script could be improved a lot, but that is not the point here):

CREATE TABLE dbo.perf_Test
(
    PolicyDescriptionID INT IDENTITY PRIMARY KEY,
    col1 NVARCHAR(100),
    col2 NVARCHAR(100),
    col3 NVARCHAR(100),
    col4 NVARCHAR(100),
    col5 NVARCHAR(100)
)

GO
SET NOCOUNT ON; 

DECLARE @i INT = 0
WHILE @i < 100000
BEGIN 
    DECLARE @NumberI int = CAST(RAND() * 100000 AS INT);
    DECLARE @NumberC VARCHAR(6);
    SET @NumberC = 
        CASE
            WHEN @NumberI < 10 THEN '00000' + CAST(@NumberI AS VARCHAR(6))
            WHEN @NumberI < 100 THEN '0000' + CAST(@NumberI AS VARCHAR(6))
            WHEN @NumberI < 1000 THEN '000' + CAST(@NumberI AS VARCHAR(6))
            WHEN @NumberI < 10000 THEN '00' + CAST(@NumberI AS VARCHAR(6))
            WHEN @NumberI < 100000 THEN '0' + CAST(@NumberI AS VARCHAR(6))
            ELSE CAST(@NumberI AS VARCHAR(6))
        END;

    INSERT INTO dbo.perf_Test(col1, col2, col3, col4, col5)
            VALUES(
                @NumberC, -- char
                @NumberC + RIGHT(@NumberC, 3) + @NumberC, -- casts as nvarchar
                @NumberC + 'adslk3ājdsfšadjfads',
                @NumberC, 
                @NumberC
                );
    SET @i = @i + 1;
END
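
As an aside, the zero-padding CASE above could be collapsed into a single expression; a minimal equivalent sketch:

-- Same 6-character zero padding in one expression
SET @NumberC = RIGHT('000000' + CAST(@NumberI AS VARCHAR(6)), 6);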

For many queries Azure performs about the same as my local machine, but for this ugly query it performs much worse:

SELECT * 
FROM dbo.perf_Test
WHERE 
       col1 LIKE '%263a%'
    OR col2 LIKE '%263a%'
    OR col3 LIKE '%263a%'
    OR col4 LIKE '%263a%'
    OR col5 LIKE '%263a%'

Azure: Scan count 1, logical reads 1932 (rest 0). SQL Server Execution Times: CPU time = 16 ms, elapsed time = 6718 ms.

On-prem: Scan count 1, logical reads 1932. SQL Server Execution Times: CPU time = 563 ms, elapsed time = 482 ms.
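
The timings above look like SET STATISTICS IO / SET STATISTICS TIME output; a minimal sketch of how such measurements can be gathered (an assumption about the method, not something stated in the question):

SET STATISTICS IO ON;   -- reports logical reads per table
SET STATISTICS TIME ON; -- reports CPU time and elapsed time

-- run the query under test here

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;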

Logical reads are the same as for the 'bad' example, but this query performs approximately the same in Azure:

SELECT * 
FROM dbo.perf_Test
WHERE col2 = '038743743038743'

Azure: Scan count 1, logical reads 1932. SQL Server Execution Times: CPU time = 32 ms, elapsed time = 22 ms.

On-prem: Scan count 1, logical reads 1932. SQL Server Execution Times: CPU time = 16 ms, elapsed time = 7 ms.

~100 rows are returned (the same as for the 'bad' example), but this query performs approximately the same in Azure:

SELECT * 
FROM dbo.perf_Test
WHERE col1 LIKE N'0975%'

Azure: Scan count 1, logical reads 1932. SQL Server Execution Times: CPU time = 16 ms, elapsed time = 26 ms.

On-prem: Scan count 1, logical reads 1932. SQL Server Execution Times: CPU time = 15 ms, elapsed time = 35 ms.

If I run a CPU-intensive query, the difference again is huge (2 seconds on-prem vs. 35 seconds in Azure):

SELECT SUM(CAST(t1.col1 AS BIGINT) + CAST(t2.col1 AS BIGINT)), COUNT(t2.col1)
FROM dbo.perf_Test t1
    CROSS JOIN dbo.perf_Test t2
WHERE t1.col3 LIKE '%263a%'
OPTION (MAXDOP 1)

1 Answer

"If I run a CPU-intensive query, the difference again is huge (2 seconds on-prem vs. 35 seconds in Azure)"

This is because a query can be throttled until resources are available, and you are comparing your on-prem server with Azure SQL Database (Standard tier, 10 DTU); that is not an accurate comparison.

The chart below showed rough read and write rates for each service tier:

[chart: approximate reads/writes per service tier; not reproduced here]

You can assume the Standard tier measurements will be much lower, and when resources are not available for a query, it will wait.

There are some benefits when using Azure, like transparent patching, backups, and high availability, and you always get Enterprise edition features, so there are some trade-offs you have to make when you go to the cloud.

Below are the steps I would try, in order:

1. Run the query below to see if any of the DTU metrics is consistently above 90% for a period of time; if so, I would upgrade to the next service tier.

SELECT TOP 1 WITH TIES
       t.end_time, b.DTUPcnt, b.DTUMetric
FROM sys.dm_db_resource_stats t
CROSS APPLY
    (VALUES
        (avg_cpu_percent,          'avg_cpu_percent'),
        (avg_data_io_percent,      'avg_data_io_percent'),
        (avg_memory_usage_percent, 'avg_memory_usage_percent'),
        (avg_log_write_percent,    'avg_log_write_percent')
    ) b (DTUPcnt, DTUMetric)
-- for each 15-second interval, keep only the metric with the highest percentage
ORDER BY ROW_NUMBER() OVER (PARTITION BY t.end_time ORDER BY b.DTUPcnt DESC);

2. I would also try fine-tuning the queries which use the most DTU (a query to find them is sketched below), or provide more compute power.
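
A minimal sketch of how to find those queries via sys.dm_exec_query_stats (the TOP 5 cutoff is arbitrary):

-- Top statements by average CPU time (total_worker_time is in microseconds)
SELECT TOP 5
    qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time / qs.execution_count DESC;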

Coming to predicting performance for the query with the cross join: you will need to ensure those tables are in the buffer pool, so there will be no I/O, which will in turn reduce CPU usage.
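
One way to sanity-check whether dbo.perf_Test is already cached is to count its pages in the buffer pool; a minimal sketch (requires VIEW DATABASE STATE permission):

-- Count buffer pool pages currently cached for dbo.perf_Test
SELECT COUNT(*) AS cached_pages
FROM sys.dm_os_buffer_descriptors bd
JOIN sys.allocation_units au
    ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions p
    ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)      -- in-row / row-overflow data
    OR (au.type = 2       AND au.container_id = p.partition_id) -- LOB data
WHERE bd.database_id = DB_ID()
  AND p.object_id = OBJECT_ID('dbo.perf_Test');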

You can also try In-Memory OLTP tables in Azure for tables which are critical; a sketch follows.
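
A minimal sketch of a memory-optimized variant of the test table (the table name is illustrative; at the time of writing, In-Memory OLTP in Azure SQL Database requires the Premium tier):

-- Memory-optimized copy of the test table
CREATE TABLE dbo.perf_Test_InMem
(
    PolicyDescriptionID INT IDENTITY PRIMARY KEY NONCLUSTERED,
    col1 NVARCHAR(100),
    col2 NVARCHAR(100),
    col3 NVARCHAR(100),
    col4 NVARCHAR(100),
    col5 NVARCHAR(100)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);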

  • I was trying to find out whether the query is throttled, but from my understanding it is not. Also, DTU usage is not high: DTUPcnt from your query shows 5.9% used (basically nothing else is happening on that database). Also, all that is shown is logical reads for all queries. The data amount is really small (like 50,000 rows in the real case), so justifying the cost of a higher tier is difficult (elastic pools are not in use as of now). – Jānis Jul 14 '17 at 09:53
  • If your reads/writes are more than the limit, they will be throttled. Can you gather wait stats during the runtime of the query? – TheGameiswar Jul 14 '17 at 09:58
  • Checked the delta before/after the query that executes in around 34 seconds: SOS_SCHEDULER_YIELD = 31847 (wait_time_ms). I took it from the sys.dm_db_wait_stats view, and no other wait values increased between executions. Is there some conclusion I can make out of this? – Jānis Jul 14 '17 at 10:57
  • SOS_SCHEDULER_YIELD falls in the CPU wait category; you may have to increase your tier. – TheGameiswar Jul 14 '17 at 10:59
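
For reference, a minimal sketch of the before/after wait-stats comparison described in these comments (the temp table name is illustrative):

-- Snapshot the wait counters, run the slow query, then diff
SELECT wait_type, wait_time_ms
INTO #waits_before
FROM sys.dm_db_wait_stats;

-- ... run the CPU-intensive query here ...

SELECT w.wait_type,
       w.wait_time_ms - ISNULL(b.wait_time_ms, 0) AS wait_time_ms_delta
FROM sys.dm_db_wait_stats w
LEFT JOIN #waits_before b ON b.wait_type = w.wait_type
WHERE w.wait_time_ms - ISNULL(b.wait_time_ms, 0) > 0
ORDER BY wait_time_ms_delta DESC;

DROP TABLE #waits_before;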