
I have a preview version of Azure SQL Data Warehouse running which was working fine until I imported a large table (~80 GB) through BCP. Now all of the tables, including the small ones, fail to respond even to a simple query:

 select * from <MyTable>

Queries against the system tables are still working:

select * from sys.objects

The BCP process was left running over the weekend, so any statistics update should have been done by now. Is there any way to figure out what is making this happen? Or at least see what is currently running, to check whether anything is blocking?
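
For reference, a manual refresh of the statistics on the imported table would look something like this (the table name below is a placeholder, not the real one):

    -- Rebuild every statistics object on the table that was loaded via BCP
    -- (dbo.MyLargeTable is a placeholder name).
    UPDATE STATISTICS dbo.MyLargeTable;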

I'm using SQL Server Management Studio 2014 to connect to the Data Warehouse and execute queries.

Matt Usher
user5285420
  • "Or at lease what is currently running to see if anything is blocking?" Yes, find out using some common tool. If you don't know any search for "Azure SQL blocking script" or something. – usr Aug 31 '15 at 16:44
  • Are you seeing just no response or a particular error message? – Matt Usher Aug 31 '15 at 20:24
  • Just no response. But queries against System tables work. – user5285420 Aug 31 '15 at 21:35
  • Having the same issue as yours, but we import data via external tables. Everything goes fine for 1-2 days, and then there are 2 possible outcomes: any select on any user table hangs, or only selects on external tables hang. The only way to fix it is to pause/unpause. Going to ask support for help. – infideltfo Jan 25 '16 at 11:11

2 Answers


@user5285420 - run the code below to get a good view of what's going on. You should be able to find the query easily by looking at the value in the "command" column. Can you confirm whether the BCP command still shows a status of 'Running' even when all of its query steps are complete?

    -- Progress, status, errors, and resource-class information for the
    -- 50 most recent requests submitted in the last 24 hours.
    select top 50
        (case when requests.status = 'Completed' then 100
              when progress.total_steps = 0 then 0
              else 100 * progress.completed_steps / progress.total_steps end) as progress_percent,
        requests.status,
        requests.request_id,
        sessions.login_name,
        requests.start_time,
        requests.end_time,
        requests.total_elapsed_time,
        requests.command,
        errors.details,
        requests.session_id,
        (case when requests.resource_class is NULL then 'N/A'
              else requests.resource_class end) as resource_class,
        (case when resource_waits.concurrency_slots_used is NULL then 'N/A'
              else cast(resource_waits.concurrency_slots_used as varchar(10)) end) as concurrency_slots_used
    from sys.dm_pdw_exec_requests AS requests
    join sys.dm_pdw_exec_sessions AS sessions
        on (requests.session_id = sessions.session_id)
    left join sys.dm_pdw_errors AS errors
        on (requests.error_id = errors.error_id)
    left join sys.dm_pdw_resource_waits AS resource_waits
        on (requests.resource_class = resource_waits.resource_class)
    -- Per-request step counts used to calculate progress_percent above.
    outer apply (
        select count(steps.request_id) as total_steps,
               sum(case when steps.status = 'Complete' then 1 else 0 end) as completed_steps
        from sys.dm_pdw_request_steps steps
        where steps.request_id = requests.request_id
    ) progress
    where requests.start_time >= DATEADD(hour, -24, GETDATE())
    ORDER BY requests.total_elapsed_time DESC, requests.start_time DESC
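
If the old BCP request turns out to be stuck, its session can be terminated once you have the id (see also the comments below); a minimal sketch, where the session id is a placeholder and would come back from sys.dm_pdw_exec_sessions in the 'SIDnnnn' format:

    -- Find the session that owns the long-running request.
    select sessions.session_id, sessions.login_name, requests.status, requests.command
    from sys.dm_pdw_exec_sessions as sessions
    join sys.dm_pdw_exec_requests as requests
        on requests.session_id = sessions.session_id
    where requests.status = 'Running';

    -- Terminate it; KILL takes the session id as a string ('SID1234' is a placeholder).
    KILL 'SID1234';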
  • Based on your query the bulk copy run is not showing, but it does show in (SELECT * FROM sys.dm_pdw_exec_requests WHERE status = 'Running'). The ones that are running may have to do with a Power BI connection that was set up for auto refreshing. Thanks for the help BTW. – user5285420 Aug 31 '15 at 21:39
  • Wait, the bulk copy is still running (stuck at 40%) from last week when I changed the filter to requests.start_time >= DATEADD(hour, -120, GETDATE()). Is there any way to kill these? Rebooting would be OK if there is an option for that in the Azure portal. – user5285420 Aug 31 '15 at 21:45
    @user5285420 There's some great info [here](https://azure.microsoft.com/en-us/documentation/articles/sql-data-warehouse-manage-monitor/) that explains how to see if the query is waiting on resources. In this case if you'd like to kill the session, you can use the [KILL command](https://msdn.microsoft.com/library/ms173730.aspx) on the session id from sys.dm_pdw_exec_sessions. – Nicolle Anger - MSFT Sep 01 '15 at 02:01
  • Thank you. I knew the kill command but was not able to use it without the proper SPID which you just gave me through sys.dm_pdw_exec_sessions. The running processes all seemed to have cleared up from yesterday and PDW seems to be working fine now. I went through the link you sent for the waiting resources. Is there a full list of PDW system tables somewhere? It seems there are some helpful ones that I'm not aware of. Thank you again. – user5285420 Sep 01 '15 at 15:45
  • Great! I'm glad I could help! :) The full list of views can be found here: https://azure.microsoft.com/en-us/documentation/articles/sql-data-warehouse-reference-tsql-system-views/ – Nicolle Anger - MSFT Sep 02 '15 at 17:21

Check out the resource utilization and possibly other issues from https://portal.azure.com/

You can also run sp_who2 from SSMS to get a snapshot of which threads are active and whether there's some crazy blocking chain that's causing problems.
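
On a regular SQL Server or Azure SQL Database instance that looks like the sketch below (note the comment underneath: Azure SQL Data Warehouse does not support sp_who2, so the sys.dm_pdw_* views from the other answer apply there):

    -- Snapshot of all sessions; the BlkBy column shows which session is blocking which.
    EXEC sp_who2;

    -- Restrict the snapshot to active sessions only.
    EXEC sp_who2 'active';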

SQLmojoe
  • Azure SQL Data Warehouse does not seem to support sp_who2 or sp_who. Checking sys.dm_pdw_exec_requests, sys.dm_pdw_waits, sys.dm_pdw_request_steps, and sys.dm_pdw_dms_workers, there seem to be running requests, but all of their steps have a Complete status. – user5285420 Aug 31 '15 at 20:23
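
For reference, the per-step check mentioned in the comment above looks roughly like this (the request id is a placeholder and would be taken from sys.dm_pdw_exec_requests):

    -- Step-by-step status for one request; 'QID1234' is a placeholder request_id.
    SELECT step_index, operation_type, status, row_count, total_elapsed_time
    FROM sys.dm_pdw_request_steps
    WHERE request_id = 'QID1234'
    ORDER BY step_index;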