
I am facing a problem with the exception “Cannot find table 0”. Initially I had no idea how to find the root cause of this exception. Then I came to know that it is triggered by the error “Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding”, i.e. the stored procedure on SQL Server 2005 takes longer than the command timeout to execute. So I executed the same stored procedure with the same parameters in SSMS (SQL Server Management Studio), but there it took only 1 second.

In the meantime, we ran the same source and SQL code on another server (a backup server), and there was no error there. The whole process (a Do While loop that iterates 40 times, containing 4 For loops of roughly 60 iterations each, and each iteration calls 3 stored procedures) took 30 minutes on my system but only 3 minutes on the backup server, which means there was no timeout there. The source is identical on both, so I have come to the conclusion that there is a technical issue involved.

I tried the following things in my source.
•   In ASP.NET, I set SqlCommand.CommandTimeout = 0. It didn't help.
•   I used SET ARITHABORT ON and WITH RECOMPILE. That didn't help either.
•   Then I tried the parameter-sniffing workaround, i.e. I used local variables in the stored procedures. But that also went nowhere.
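For reference, the local-variable workaround for parameter sniffing mentioned above looks roughly like the sketch below. The procedure, table and column names are made up for illustration; they are not from the original code:

```sql
-- Hypothetical procedure illustrating the local-variable workaround:
-- copying the parameter into a local variable stops SQL Server from
-- compiling the plan against the specific sniffed parameter value.
CREATE PROCEDURE dbo.GetOrders
    @CustomerID INT
AS
BEGIN
    DECLARE @LocalCustomerID INT
    SET @LocalCustomerID = @CustomerID

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @LocalCustomerID
    -- On SQL Server 2005, OPTION (RECOMPILE) on the statement is an
    -- alternative that recompiles with the actual value on each call.
END
```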

The ADO.NET code for calling the stored procedure from ASP.NET is the following:

 Public Function retds1(ByVal SPName As String, ByVal conn As SqlConnection, Optional ByVal ParameterValues() As Object = Nothing) As DataSet
    ' Note: the conn argument is never used; a fresh connection is built here instead.
    Dim ds As New DataSet
    Dim dconn As New SqlConnection(ConfigurationManager.ConnectionStrings("webriskpro").ConnectionString)
    Try
        Dim sqlcmd As New SqlCommand(SPName, dconn)
        sqlcmd.CommandType = CommandType.StoredProcedure
        sqlcmd.CommandTimeout = 0   ' 0 = wait indefinitely
        dconn.Open()
        ' DeriveParameters costs an extra round trip to the server on every call
        SqlCommandBuilder.DeriveParameters(sqlcmd)
        If ParameterValues IsNot Nothing Then
            ' Parameters(0) is @RETURN_VALUE, so the user parameters start at index 1
            For i As Integer = 1 To ParameterValues.Length
                sqlcmd.Parameters(i).Value = ParameterValues(i - 1)
            Next
        End If
        Dim da As New SqlDataAdapter(sqlcmd)
        da.SelectCommand.CommandTimeout = 0
        da.Fill(ds)
    Catch ex As Exception
        send_prj_err2mail(ex, SPName, "")
    Finally
        dconn.Close()
    End Try
    Return ds
End Function

The execution plan of the procedure that raises the error is:

[execution plan screenshot]

I hope you understand my problem. Please get back to me with some ideas!

  • Since you've posted this question [three](http://stackoverflow.com/questions/26778182/code-takes-time-vary-from-system-to-system) [times](http://stackoverflow.com/questions/26756727/code-takes-time-that-varies-system-to-system), I've got to at least ask: have you considered the processing power of the machines you've tested on? Assuming your database is local (I haven't seen you indicate otherwise), and considering that you're looping a rather large number of times, it would seem logical to me that your systems' speeds could be causing your issue. – Justin Ryan Nov 10 '14 at 11:41
  • I thought of that too, but the RAM of my local system and of the backup server (where the code runs successfully) is 2 GB on both. – King of kings Nov 10 '14 at 11:44
  • A computer's speed consists of far more factors than just the amount of memory it has... – Justin Ryan Nov 10 '14 at 11:47
  • I didn't have more than a few browser tabs open, and the only applications running were VS 2005 and SQL Server 2005. Does that make any difference? @JustinRyan – King of kings Nov 10 '14 at 11:51
  • One more thing: the timeout error doesn't occur every time; it's random. For example, over a 61-iteration loop a stored procedure's duration is 16, 0, 1, 99, and sometimes it jumps to 33000, while the CPU figures are 161, 0, 171 or so. – King of kings Nov 10 '14 at 12:01
  • At this point, you know your code 'works', it just takes a really long time under some circumstances. Unfortunately I don't have the solution to your problem, so my advice would be to try to optimize the code. Reduce it to only what is necessary, reusing components where possible, and consider memory usage. Then ask yourself if you really need to be looping over the same code 40×60(×4?) times. – Justin Ryan Nov 10 '14 at 12:06
  • The long loop is necessary for getting an accurate value. But my big doubt is this: the same code with the same loops finishes in 3 minutes on the server, so what makes it slow on my local system? – King of kings Nov 10 '14 at 12:12
  • Have you run your query through the Query Analyzer and checked the execution plan it is using? Have you tried running the DB Tuning Advisor to see if there are any indexes that could be beneficial? – Chris Dunaway Nov 10 '14 at 15:20
  • I often encounter things running faster on a test/backup server. The production activity is likely clearing the RAM cache of the data you are interested in, and instead replacing it with the data the other queries are interested in. But on the test machine, all that data is sitting in RAM ready to be accessed right away. Also, the disks on the backup server are close to idle, whereas in production they are the limiting factor of speed. – jerrylagrou Nov 10 '14 at 20:54
  • @ChrisDunaway Yes sir, I ran it in the Query Analyzer too. The execution plan is divided into 9 parts. I don't know how to read the execution plan, but I posted it above; please have a look and help me. – King of kings Nov 11 '14 at 03:52
  • Looking at the execution plan, I see a lot of table scans there. That means that when running the query, it has to scan through the entire table until it finds what it needs. You could benefit from indexes on the appropriate columns. The DB Tuning Advisor should give you some good suggestions. – Chris Dunaway Nov 11 '14 at 14:32

4 Answers


You will need to debug the application to find out what the root cause is. In another thread you mention:

I have checked the process in SQL Server Profiler. Since the loop has no limit, we don't know how many times it iterates.

You need to write out to a text file how many times the code loops; it may be a small number on Backup compared to a large number on PROD or your local PC. You should also log how long each part of the operation takes. Sample code using a Stopwatch:

Dim sw As New Stopwatch()
sw.Start()

'...long operation...

sw.Stop()
' path is only illustrative
File.AppendAllText("timings.txt", "Operation XYZ took " & sw.ElapsedMilliseconds / 1000 & " seconds" & Environment.NewLine)

I am suspicious of the code here; it doesn't look like it was very well written. For example, checking whether the connection is open right after instantiating it smells of poor resource management. It's as if the author of this code found the connection was being left open somewhere else and never closed:

dconn = New SqlConnection(ConfigurationManager.ConnectionStrings("webriskpro").ConnectionString)
...
If dconn.State = ConnectionState.Open Then dconn.Close() 

I'd recommend that you use the Using statement, which guarantees the connection will be closed, e.g.:

Using cn As New SqlConnection(ConnectionString)
    cn.Open()

    Using cmd As New SqlCommand("GetCustomerByID", cn)
        Try
            With cmd
                .CommandType = CommandType.StoredProcedure
                .Parameters.Add("@CustomerID", SqlDbType.Int, 4)
                .Parameters("@CustomerID").Value = CustomerID
            End With

            Dim da As New SqlDataAdapter(cmd)
            da.Fill(ds, "Customer")
        Catch ex As Exception
            ' log the error here; don't swallow it silently
        End Try
    End Using
End Using

Note also that if the connection is closed before Fill is called, Fill opens it to retrieve the data and then closes it again; if the connection is open before Fill is called, it remains open. So you don't even need to open it explicitly in your case.

Again, you will need to debug the application to find the root cause. Add plenty of logging to help diagnose the problem; it should shed light on where this performance bottleneck actually is.

Good luck!!

Edit: For any of the connection pooling ideas raised here to work, you will need to use connection pooling, so remove pooling=false from your connection string:

<add name="webriskpro"
     connectionString="Data Source=TECH01\SQL2005;Initial Catalog=webriskpro1;User ID=sa;Password=#basix123;connection timeout=600;"/>

Edit 2: Some research to help you work out the problem

  1. Run SQLDiag, which comes as part of the product. You can refer to Books Online for more details. In brief, capture a server-side trace and the blocker script.

  2. Once the trace is captured, look for the "Attention" event. That is the SPID which received the error. If you filter by that SPID, you will see an RPC:Completed event before the "Attention". Check the time there. Is it 30 seconds? If yes, the client waited 30 seconds for a response from SQL Server and got "timed out" [this is a client-side setting, as SQL Server itself would never stop a connection].

  3. Now check whether the query that was running should really take 30 seconds.

  4. If yes, then tune the query or increase the timeout setting on the client.

  5. If no, then this query must be waiting for some resource (blocked).

  6. At this point, go back to the blocker script and check the time frame when the "Attention" came.

If you can execute sp_who2 while the queries are timing out, you can use the BlkBy column to trace back to the session holding the lock that everyone else is waiting on.

sp_who3 (a community-written variant, not shipped with SQL Server) is useful too, as it includes the actual query.

Once you see which queries/transactions are locking/blocking your database until they complete, you may need to rewrite them, or run them at another time, to avoid blocking other processes.
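A minimal blocking-diagnosis session along those lines, run in SSMS while the application is hanging, might look like this (the SPID 57 below is only a placeholder for whatever BlkBy reports):

```sql
-- List active sessions; the BlkBy column shows, for each blocked
-- session, the SPID of the session holding the lock it waits on.
EXEC sp_who2

-- For the blocking SPID reported in BlkBy (57 is a placeholder),
-- show the last batch that session sent to the server.
DBCC INPUTBUFFER(57)
```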

An extra point to dig into is the auto-growth setting of your transaction log and database files. Set them to a fixed size instead of a percentage of the current file. If the files grow large, allocating a percentage of them can take longer than your command timeout, and then your DB will appear to come to a halt.
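A fixed-growth setting can be applied with ALTER DATABASE. The database name below is the one from the question's connection string, but the logical log file name is an assumption; check yours with sp_helpfile first:

```sql
-- Grow the transaction log in fixed 100 MB steps instead of a
-- percentage of the current file size (the logical file name
-- webriskpro1_log is a guess; verify it with sp_helpfile).
ALTER DATABASE webriskpro1
MODIFY FILE (NAME = webriskpro1_log, FILEGROWTH = 100MB)
```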

– Jeremy Thompson
  • Thanks Jeremy, let me try it. And once again, thanks for the time you spent. – King of kings Nov 11 '14 at 05:56
  • No, actually each server has its own database connection. What I meant is that all the databases hold the same data; that's why I said "all are having same database". I will edit the post. – King of kings Nov 11 '14 at 05:58
  • I gave Max Pool Size in the connection string. Will it help? – King of kings Nov 11 '14 at 06:08
  • Given that the environments (local, backup and prod) use different databases, I have edited my answer with some more things for you to try. – Jeremy Thompson Nov 11 '14 at 07:47
  • Thank you very much for the time spent; I really appreciate your helping mind. From the beginning I had been tracing the stored procedure in SQL Profiler, where I could see the sproc taking more than 30 seconds. But after enabling pooling, nothing takes more than 1 millisecond. – King of kings Nov 11 '14 at 09:00
  • So removing Pooling=false from the connection string resolved the issue. That is good news; keep monitoring it for a bit and let us know if the problem happens again. – Jeremy Thompson Nov 11 '14 at 11:23
  • Yeah, sure. You are a kind person. – King of kings Nov 12 '14 at 03:56

A few ideas:

  • Check the primary server for resource starvation issues: memory, disk space, lots of other processes running and using resources, et cetera.
  • Check the primary server for configuration issues: database is configured to use only a small amount of memory, system swap file is too small, et cetera.
  • Check the primary server for hardware issues: memory errors, failing RAID, failing drive, et cetera.

Hope that helps!

– Monica For CEO

I think the exception is caused by the connection opening time.

You need to specify a timeout for the connection, or delete pooling=false from the connection string.

Like this:

<add name="db1" connectionString="Data Source=your server;Initial Catalog=dbname;User ID=sa;Password=*****;connection timeout=600;"/>

The default pool size is 100. If you would like to change it, you can.
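If you do change it, the pool size goes directly in the connection string; Max Pool Size is the relevant keyword (the value 200 below is only an example):

```xml
<add name="db1"
     connectionString="Data Source=your server;Initial Catalog=dbname;User ID=sa;Password=*****;connection timeout=600;Max Pool Size=200;"/>
```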

Since no other applications run on the backup server, the system is faster there and there are fewer interruptions.

But on your local system you might have been working in other applications, so that could be one of the reasons. The next time you run, close all open applications and then check in SQL Profiler.

Since the same code works fine on the backup server, I don't think the loops (60×40 times) are the reason for the slowdown.

Best of luck!!


If the error occurs only under high load, it is likely that connections aren't being returned to the pool and remain associated with the transaction for longer than they should. If transaction speed isn't a problem, try the following call:

SqlConnection.ClearPool(conn)

Note: every SQL operation needs an open connection, so make this call after the connection has been used:

conn.Open()

cmd.ExecuteNonQuery()

SQLConnection.ClearPool(conn)

  • I have tried Max Pool Size in the connection string. Before I set this option, Audit Login/Audit Logout events were appearing in SQL Profiler; now I see only RPC:Completed (reset connection). I guess your suggestion would also help me. Where should I put that line in my code, before opening the connection? – King of kings Nov 11 '14 at 05:08
  • Thanks. So I can call it immediately after Open, right? – King of kings Nov 11 '14 at 05:35
  • *If* your claim that this is due to connections not being released were true, you would be giving the wrong advice. Your claim is false, as shown by the resolution; but even if it were true, the correct answer would be to make sure connections are closed/disposed when done. – Andrew Barber Nov 11 '14 at 16:36