
I know that a BufferedWriter or BufferedReader cannot communicate with a file directly. It needs to wrap another Writer (or Reader) object to do it. Like,

BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter("abc.txt"));

Here we are simply wrapping a FileWriter object using a BufferedWriter for IO performance advantages.

But I could also do this,

BufferedWriter bufferedWriter = new BufferedWriter(new BufferedWriter(new FileWriter("abc.txt")));

Here the FileWriter object is wrapped using a BufferedWriter, which in turn is wrapped using another BufferedWriter. Or a more evil idea would be to chain it even further.

Is there any real advantage to a double BufferedWriter, or to chaining it even further? The same question applies to BufferedReader too.

comrade
Aritra Roy
  • I don't see any performance improvement doing this. The current design of the decorator pattern allows you to do it, and you may do it, but there's no benefit. – Luiggi Mendoza Aug 23 '15 at 14:46

2 Answers


There's no benefit, no.

First, you have to understand what the buffering is for. When you write to disk, the hard drive needs to physically move the disk head to the right place, then wait for the disk to spin such that it's in the right place, and then start writing bytes as the disk spins under the head. Those first two steps are much slower than the rest of the operation, relatively speaking. This means that there's a lot of fixed overhead: writing 1000 bytes is much faster than writing 1 byte 1000 times.

So, buffering is just a way of having the application write bytes in a way that's easy for the application's logic — one byte at a time, three bytes, 1000 bytes, whatever — while still getting good disk performance. Most write operations to the buffer don't actually cause any bytes to go to the underlying output stream; only once you hit a certain limit (say, every 1000 bytes) is everything written, all at once.
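To see the mechanics, here's a minimal sketch (a StringWriter stands in for the file, and the buffer is a deliberately tiny 16 characters rather than the default, so the flush point is easy to hit):

    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.io.StringWriter;

    public class BufferDemo {
        public static void main(String[] args) throws IOException {
            StringWriter target = new StringWriter();             // stands in for the file on disk
            BufferedWriter out = new BufferedWriter(target, 16);  // tiny buffer so we can watch it fill up

            out.write("abc");
            System.out.println("after small write: '" + target + "'");  // nothing has reached the target yet

            out.write("defghijklmnopqrs");
            System.out.println("after overflow:    '" + target + "'");  // the full buffer was flushed to the target

            out.flush();
            System.out.println("after flush:       '" + target + "'");  // the leftover characters arrive
            out.close();
        }
    }

Nothing reaches the target until the buffer fills up, or until you flush or close the writer.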

And it's the same idea on input.

So, chaining these wouldn't help. With the chain, assuming they had equal buffer sizes, you would write to the "outer" buffer, and it wouldn't write to the "inner" buffer at all... and then when it hits its limit, it would flush all of those bytes to the inner buffer. That inner buffer instantly hits its own limit (since it's the same limit) and flushes those bytes right to the output. You haven't gained any benefit, but you did have to copy the bytes an extra time in memory (into the extra buffer).
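You can check this with a small experiment (a sketch, not from the original answer; CountingWriter is a made-up helper that counts how many write calls actually reach the bottom Writer):

    import java.io.BufferedWriter;
    import java.io.FilterWriter;
    import java.io.IOException;
    import java.io.StringWriter;
    import java.io.Writer;

    public class DoubleBufferDemo {
        // Counts how many write calls reach the wrapped Writer.
        static class CountingWriter extends FilterWriter {
            int writes = 0;
            CountingWriter(Writer out) { super(out); }
            @Override public void write(char[] cbuf, int off, int len) throws IOException {
                writes++;
                super.write(cbuf, off, len);
            }
        }

        static int run(boolean doubleWrap) throws IOException {
            CountingWriter counter = new CountingWriter(new StringWriter());
            Writer w = new BufferedWriter(counter);
            if (doubleWrap) {
                w = new BufferedWriter(w);   // the extra, "evil" layer
            }
            for (int i = 0; i < 100_000; i++) {
                w.write('x');                // lots of tiny writes
            }
            w.close();                       // closing the outer writer flushes and closes the whole chain
            return counter.writes;
        }

        public static void main(String[] args) throws IOException {
            System.out.println("single BufferedWriter: " + run(false) + " underlying writes");
            System.out.println("double BufferedWriter: " + run(true) + " underlying writes");
        }
    }

With equal (default) buffer sizes, both runs should report the same number of underlying writes; the second layer buys you nothing.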

yshavit
  • I have an SSD, you insensitive clod :-) Just kidding. +1. – JB Nizet Aug 23 '15 at 15:01
  • The physics are different, and the speed is better, but the same is true. Writing 1000 bytes at once is faster than writing 1 byte 1000 times. Note that the physics is not all that matters. Every write call needs to go to a native OS function call, then to the device driver, then to the disk itself. – JB Nizet Aug 23 '15 at 15:19

"Buffered" here is primarily reflecting the semantics of the interface (API). Noting this, composing IO pipelines via chaining of BufferedReader is a possibility. In general, consider that consumption of a single byte at the end of the chain may involve multiple reads at the head and could, in theory and per API, simply be a computation based on data read at the head.

For the general case of block-device buffering (e.g. reading from an IO device with block-sized data transfers, such as file-system or network endpoints), chaining buffers (effectively queues) will certainly increase memory consumption and immediately add latency to processing (due to the increased total buffer size). It typically will significantly increase throughput, with the noted negative impact on latency.

alphazero