
I'm running into a problem with downloading large JSON objects from an API. Usually these documents are small, but occasionally they are quite large (100 KB or more). That puts the large object heap into play, and there are some performance concerns.

Here are the guts of the downloader method that returns the response bytes:

using (var httpWebResponse = (HttpWebResponse)request.GetResponse())
{
    byte[] responseBytes;
    using (var responseStream = httpWebResponse.GetResponseStream())
    {
        using (var memoryStream = new MemoryStream())
        {
            // Buffer the whole response in memory, then materialize it as a single array.
            if (responseStream != null)
            {
                responseStream.CopyTo(memoryStream);
            }
            responseBytes = memoryStream.ToArray();
        }
    }
    return responseBytes;
}

If the end goal is to get the contents of the web response into a byte array, is this the most efficient way to do it? In the past I would just read the stream in chunks. I've been told that chunked reading is less efficient than CopyTo about 90% of the time (when the JSON response is under 85 KB), but better for the other 10%.
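For reference, here is a minimal sketch of the chunked approach I mean (the method name DownloadChunked and the 8 KB buffer size are just illustrative, not part of my actual code). As I understand it, CopyTo also copies in chunks internally, so the difference is really in how the intermediate buffers grow and how the final array is produced.

private static byte[] DownloadChunked(HttpWebRequest request)
{
    using (var httpWebResponse = (HttpWebResponse)request.GetResponse())
    using (var responseStream = httpWebResponse.GetResponseStream())
    using (var memoryStream = new MemoryStream())
    {
        if (responseStream == null)
        {
            return new byte[0];
        }

        // Copy in fixed-size chunks; the 8 KB buffer is reused for every read.
        var buffer = new byte[8192];
        int bytesRead;
        while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            memoryStream.Write(buffer, 0, bytesRead);
        }

        // The MemoryStream's internal buffer and the final ToArray() result still
        // grow with the response, so a 100 KB+ payload reaches the large object
        // heap either way.
        return memoryStream.ToArray();
    }
}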

I cannot seem to find a general consensus on this. Any input would be appreciated.

  • Why don't you try both and benchmark? Test with various sizes and you can choose a strategy (chunks or not) depending on the `Content-Length` header, so you'll be good with both small and large JSONs. – SimpleVar Nov 25 '15 at 21:27
  • If you don't need the full byte array at once, you can use a StreamReader to read specific parts. This will reduce the memory pressure. – Sievajet Nov 25 '15 at 21:33
  • @Sievajet - I need the full byte array at once as it's passed to another object. – Caca Milis Nov 25 '15 at 21:46
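To make SimpleVar's `Content-Length` idea concrete, here is a minimal sketch of what I'm considering (the method name and the int.MaxValue guard are just illustrative). When the header is present, the MemoryStream is sized up front so its internal buffer is allocated once instead of growing by doubling; when it's absent, ContentLength comes back as -1 and the code falls back to the default behaviour.

private static byte[] DownloadWithPresizedBuffer(HttpWebRequest request)
{
    using (var httpWebResponse = (HttpWebResponse)request.GetResponse())
    using (var responseStream = httpWebResponse.GetResponseStream())
    {
        if (responseStream == null)
        {
            return new byte[0];
        }

        // ContentLength is -1 when the server does not send the header.
        long contentLength = httpWebResponse.ContentLength;

        // Pre-size the MemoryStream when the length is known so its internal
        // buffer is allocated exactly once instead of growing by doubling.
        using (var memoryStream = contentLength > 0 && contentLength <= int.MaxValue
            ? new MemoryStream((int)contentLength)
            : new MemoryStream())
        {
            responseStream.CopyTo(memoryStream);
            return memoryStream.ToArray();
        }
    }
}

That would at least let me benchmark the pre-sized CopyTo path against the chunked loop above for both the small and the large responses.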
