While Node.js allows you to work with data streams (i.e. chunked data), that does not necessarily mean that "Node.js never buffers data"; handling buffering is up to the developer.
Typical streaming is done by creating a callback which receives data over a period of time and either does something with that data or hands it off to another stream.
This means that the consumer of the data can get it more readily without having to wait for the entire operation to complete.
This is what TutorialPoint was alluding to, although it was poorly worded and, for the most part, incorrect.
Buffering is actually key to working with streamable data: the actual data resides in a Buffer until your callback pulls it out of the stream to work with it.
This is why Node.js has the Buffer class, and libraries such as stream-buffers, to facilitate access to stream data.
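For example, the chunks a stream emits are Buffer instances, and a common pattern is to accumulate them and stitch them back together with `Buffer.concat` once the stream ends (the chunk contents below are hypothetical stand-ins for stream data):

```javascript
// Chunks arriving from a stream are Buffer instances. Accumulating
// them and joining with Buffer.concat yields the full payload.
const chunks = [];
chunks.push(Buffer.from('hello '));
chunks.push(Buffer.from('world'));

const whole = Buffer.concat(chunks);
console.log(whole.toString()); // prints "hello world"
```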
A common pain point in dealing with streams and buffers is "back pressure".
This occurs when a data stream has a Producer and a Consumer, but under load the Consumer falls behind, consuming data more slowly than the Producer pumps it into the stream.
This causes the so-called "back pressure" effect, which can bring down a system if the Producers are not rate limited.
Back pressure is relevant to the discussion because it is caused by the buffer filling faster than your callback can push data out to the user or to another stream.
Therefore, buffering, at some level or another, is essential to the operation of Node.js and is how it handles continuous streams of data. That is why the Buffer class and streaming buffers exist: to facilitate the movement of data between callbacks.