
Note that this question is an offshoot of a current problem that I have yet to resolve:

Need help resolving an error when WCF service returns DataTable: OutOfMemoryException

I have an existing WCF service whose endpoint is configured for TransferMode.Buffered and a client application whose connection asks for the same.

The OperationContracts are established methods that I could not easily modify except in very minor ways. For example, we have a method that accepts an SQL query String and returns a DataTable derived from SQL execution.

We are having issues with very large tables returned by this method, where "large" refers to the in-memory footprint.

I am considering implementing TransferMode.Streamed (or the like), but I can't seem to get a handle on how to do it for just certain methods.

My WCF service starts up and creates the one-and-only endpoint, and that endpoint is Buffered. That makes me want to assume it's all or nothing: that if I change to Streamed, I would have to rework all of my OperationContract methods wholesale.

There are some SO questions that flirt with this subject, but not directly, and not with any answers that really get me where I need to go.

Am I missing something, or can this be made to work?

The method I need to fix (if you want to know about the connection setup, refer to the referenced question; it's a lot of code):

The WCF service code:

[OperationContract]
DataTable readDataTable(out DbError dbError, String queryString, Boolean SchemaOnly);

public DataTable readDataTable(out DbError dbError, String queryString, Boolean SchemaOnly)
{
    DataTable dt = new DataTable("ReturnTable");
    dbError = new DbError();
    if (!String.IsNullOrEmpty(queryString))
    {
        try
        {
            command.CommandText = queryString
                .Replace("{{", "{")
                .Replace("}}", "}")
                .Replace(";;", ";"); // String.Replace has no array overload; chain the calls
            using (SqlDataReader reader = command.ExecuteReader(SchemaOnly ? CommandBehavior.SchemaOnly : CommandBehavior.Default))
            {
                dt.Load(reader);
            }
        }
        catch (Exception ex)
        {
            dbError.HasError = true;
            dbError.Error = ex.Message + "\n\n" + queryString;
        }
    }
    return dt;
}

The client code that uses it:

public DataTable readDataTable(String queryString, Boolean SchemaOnly)
{
    DbError dbError;
    DataTable dt = svcCon.readDataTable(out dbError, queryString, SchemaOnly);
    if (dbError.HasError)
    {
        if (_showErrors)
            ErrorHandler.Show(dbError.Error);
    }
    return dt;
}
DonBoitnott
1 Answer

This is a big question, but I understand the problem. The short answer is that you can't mix streaming and buffering within the same endpoint's operation contracts. You can, of course, run any number of separate endpoints from the same service, each on a different port, for instance, with some using buffering and some using streaming.

And, you can have the host set up to stream its responses while the client (using the same endpoint) can buffer its uploading requests to the host (and vice versa); that's not a problem. But buffering and streaming responses from the same host endpoint is not possible.
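As a sketch of the multiple-endpoint approach (service, contract, and port names here are hypothetical, not from the question), an app.config could expose one buffered and one streamed endpoint side by side; note that the streamed contract would typically need Stream-based operations rather than the existing DataTable-returning ones:

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <!-- Default buffered binding for the existing operations -->
      <binding name="BufferedBinding" transferMode="Buffered" />
      <!-- Streamed binding for large-result operations -->
      <binding name="StreamedBinding" transferMode="Streamed"
               maxReceivedMessageSize="1073741824" />
    </netTcpBinding>
  </bindings>
  <services>
    <service name="MyNamespace.MyService">
      <endpoint address="net.tcp://localhost:8731/MyService"
                binding="netTcpBinding" bindingConfiguration="BufferedBinding"
                contract="MyNamespace.IMyService" />
      <endpoint address="net.tcp://localhost:8732/MyServiceStreamed"
                binding="netTcpBinding" bindingConfiguration="StreamedBinding"
                contract="MyNamespace.IMyServiceStreamed" />
    </service>
  </services>
</system.serviceModel>
```

Because each endpoint has its own binding configuration, the existing buffered contract is untouched and only the new, streamed contract needs Stream-shaped operations.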

We've run into the same issue with buffering downloads to the client from SQL. For a while we went to streaming for downloads, primarily to cure client device timeouts, but found that streaming is memory intensive, too. Streaming as a technique is also tougher to work with, we found. So we've gone away from streaming and back to buffering, but we use an iterative "chunking" approach for downloads over 100 MB. For example, when a 1 GB package needs to be delivered, the host sends the file in 100 MB chunks that the client software assembles into a single file.
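A minimal sketch of that chunking idea (all names here are hypothetical illustrations, not the answerer's actual code): the contract returns one fixed-size chunk per call, and the client loops until the server signals the final chunk:

```csharp
[ServiceContract]
public interface IFileService
{
    // Returns one chunk of the requested file; chunkIndex is 0-based.
    // isLastChunk tells the client when to stop looping.
    [OperationContract]
    byte[] GetFileChunk(string fileName, int chunkIndex, out bool isLastChunk);
}

public static class FileClient
{
    // Client-side loop that reassembles the chunks into a single byte array.
    public static byte[] DownloadFile(IFileService svc, string fileName)
    {
        const int MaxChunks = 100000; // safety cap against a runaway loop
        using (var ms = new System.IO.MemoryStream())
        {
            bool isLast = false;
            for (int i = 0; i < MaxChunks && !isLast; i++)
            {
                byte[] chunk = svc.GetFileChunk(fileName, i, out isLast);
                ms.Write(chunk, 0, chunk.Length);
            }
            return ms.ToArray();
        }
    }
}
```

Each call is an ordinary buffered request/response, so no binding changes are needed; only the chunk (e.g. 100 MB) is ever held in memory per call on the host side.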

I hope this was helpful.

Brian
  • 3,653
  • 1
  • 22
  • 33
  • Brian, thanks for the response. You're validating for me what I thought I already knew. And your scenario seems to mirror my current predicament well. Would you be so kind as to detail for me this chunking approach? What mechanism do you use to deliver the chunks? You can see from my example code that the client wants a `DataTable` and the server delivers. How might I deliver such a thing chunked? Blocks of `List` or something? Looped? Thanks in advance. – DonBoitnott Aug 13 '13 at 13:26
  • BTW, I have been trying to run with a second endpoint on a different port, but with little success. Somewhere internally, my `ServiceHost` keeps getting disposed and my SQLConnection dies. But I think I could make that fly if I could work out the mechanical issues. – DonBoitnott Aug 13 '13 at 13:27
  • Don, based on your comment, I think our requirements and yours are a little different. We're delivering over a basicHttpService files that the client devices use for data replication. We don't deliver objects or complex data structures like a DataTable. In any event, the chunking works like this: a client contacts our WCF host and requests a file, which we store on a SQL server. If the file is < 100 mb, we simply buffer the entire file and send it to the client in the response. – Brian Aug 13 '13 at 13:59
  • But if the binary is >100mb, it gets chunked. Let's say that the client requests a 1gb file. When the query runs on SQL, the first thing we do is get the DataLength() of the binary. Because the size is huge, we'll only retrieve the first 100mb using a SQL substring() function to fetch that part. When we return the 100mb in the first response, we also send a variable called "ChunkCounter" which would be set to 1 since that's the first chunk. Then the client calls the host again, handing over that ChunkCounter value. – Brian Aug 13 '13 at 14:04
  • This time, the web service will fetch the second 100mb chunk of the 1gb binary, returning the bytes and chunkcounter=2, and so on until the last chunk is fetched. At that point, I return 999 as the chunk counter value, which tells the client the chunking is finished. – Brian Aug 13 '13 at 14:06
  • Having said all that, I'm not sure how you'd adapt our chunking method to your requirements. – Brian Aug 13 '13 at 14:07
  • I'm not sure what to suggest on the second endpoint, based on the problem you're having. In our case, we simply change the configuration (not the code) for our second, third and nth endpoints, using the port as the distinguishing variable. In our case, we have alternative endpoints for different encryption modes for our clients - i.e. Android and IPhones can't do everything a laptop does in terms of encryption. Good luck. I'm off. It's quitting time in Europe. – Brian Aug 13 '13 at 14:12
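The SQL side of the chunking described in these comments can be sketched roughly like this (table, column, and parameter names are hypothetical); SUBSTRING works on varbinary columns, so each call retrieves one slice of the stored binary:

```sql
-- @ChunkIndex is 1-based; chunk size here is 100 MB.
-- bigint avoids int overflow on the start offset for multi-GB binaries.
DECLARE @ChunkSize bigint = 104857600;
DECLARE @ChunkIndex bigint = 1;

SELECT
    DATALENGTH(FileData) AS TotalBytes,   -- lets the host compute the chunk count
    SUBSTRING(FileData, (@ChunkIndex - 1) * @ChunkSize + 1, @ChunkSize) AS ChunkBytes
FROM dbo.Files
WHERE FileName = @FileName;
```

The host compares @ChunkIndex * @ChunkSize against TotalBytes to decide whether to flag the response as the final chunk.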