
When using the Amazon S3 client in .NET, we get an object of type GetObjectResponse which has a .ResponseStream property, and we can simply wrap that stream in a StreamReader and call ReadToEnd() to read the contents in one go. But we could also call Stream.Read() in a loop to read the contents in chunks into a limited-size buffer. What are the disadvantages/advantages of each approach? Would the chunking method be more advantageous when reading larger files?
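For illustration, a minimal sketch of the "one go" approach against a generic Stream; in the real S3 case the stream would come from response.ResponseStream after a GetObjectAsync call, but a MemoryStream stands in here so the snippet runs without AWS credentials:

```csharp
using System;
using System.IO;
using System.Text;

// A MemoryStream stands in for response.ResponseStream from the S3 client.
using Stream responseStream = new MemoryStream(Encoding.UTF8.GetBytes("hello from S3"));

// Approach 1: wrap the stream in a StreamReader and read everything at once.
using var reader = new StreamReader(responseStream);
string contents = reader.ReadToEnd();
Console.WriteLine(contents);   // prints "hello from S3"
```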

Riz
  • Can you please add some demo code to illustrate what the actual question is? This sounds like more of a general C# question as opposed to something related to AWS but just want to double check. – Ermiya Eskandary Nov 13 '21 at 10:12

1 Answer


StreamReader gives you easy access to text content: StreamReader.ReadToEnd() reads the whole stream to the end and returns it as a single string, so the entire file is held in memory at once.

Stream.Read() instead receives small chunks of data broken off from the larger file. The application can process each chunk as it arrives from the stream, and it never has to hold all of the file's data in memory at the same time.

So the chunking method with Stream.Read() will be more effective for larger files, since memory usage stays bounded by the buffer size regardless of the file size.
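A minimal sketch of the chunked approach described above; a MemoryStream stands in for the S3 ResponseStream so the snippet runs on its own, and the 8 KB buffer size is an arbitrary example:

```csharp
using System;
using System.IO;
using System.Text;

// A MemoryStream stands in for the S3 ResponseStream (100,000 bytes of data).
using Stream responseStream = new MemoryStream(Encoding.UTF8.GetBytes(new string('x', 100_000)));

var buffer = new byte[8192];   // fixed-size buffer: memory use stays bounded
long total = 0;
int bytesRead;
while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
{
    // Process buffer[0..bytesRead) here, e.g. write it to a file or feed a hash.
    total += bytesRead;
}
Console.WriteLine(total);      // prints 100000
```

Only one buffer's worth of data is ever in memory, which is why this scales to files far larger than available RAM.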

Arun_Raja1