I'm running into a problem with downloading large JSON objects from an API. Usually these documents are small, but occasionally they are quite large (100 KB+). That puts the large object heap into play, and there are some performance concerns.
Here are the guts of the downloader method that returns the response bytes.
using (var httpWebResponse = (HttpWebResponse)request.GetResponse())
{
    byte[] responseBytes;
    using (var responseStream = httpWebResponse.GetResponseStream())
    {
        using (var memoryStream = new MemoryStream())
        {
            if (responseStream != null)
            {
                responseStream.CopyTo(memoryStream);
            }
            responseBytes = memoryStream.ToArray();
        }
    }
    return responseBytes;
}
If the end goal is to get the contents of the web response into a byte array, is this the most efficient way to do it? In the past I would just read the stream in chunks into a fixed-size buffer. I've been told that this is less efficient than CopyTo 90% of the time (when the JSON response is under 85 KB), but better for the other 10%.
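For context, the chunked version I used before looked roughly like this (just a sketch; the method name and the 8 KB buffer size are arbitrary choices for illustration):

private static byte[] ReadAllBytesChunked(Stream responseStream)
{
    // Read the response in fixed-size chunks and accumulate them in a MemoryStream.
    var buffer = new byte[8192];
    using (var memoryStream = new MemoryStream())
    {
        int bytesRead;
        while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            memoryStream.Write(buffer, 0, bytesRead);
        }
        // Note: ToArray still allocates the full result as a single array.
        return memoryStream.ToArray();
    }
}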
I cannot seem to find a general consensus on this. Any input would be appreciated.