I need to take a 20 MB JSON file containing an array of objects, pipe it into a stream, break it up into smaller arrays of 33 items each, convert each of those to HTML, and then pipe the result into another stream (for PDF conversion).
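In outline, this is the shape of the pipeline I'm aiming for. The two PassThrough streams are only stand-ins for the splitting and HTML steps I haven't written yet, so this just moves bytes through unchanged:

fs            = require 'fs'
{PassThrough} = require 'stream'

# Placeholders only: these should become "split into 33-item arrays" and
# "render each array to HTML"; for now they just pass the bytes through.
splitIntoPages = new PassThrough()
renderHtml     = new PassThrough()

fs.createReadStream('source.json')
  .pipe(splitIntoPages)
  .pipe(renderHtml)
  .pipe(process.stdout)      # eventually a PDF-conversion stream instead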
The thing is, I haven't quite understood how Node streams work. I'm trying to solve this with a Duplex stream, but I don't know how to pool the incoming chunks from the upstream and send them on in parts to the downstream. In this code:
fs       = require 'fs'
{Duplex} = require 'stream'

jsonReader = fs.createReadStream 'source.json'

class Convert extends Duplex
  constructor: ->
    super readableObjectMode: true
    # Duplex.call @, readableObjectMode: true
    @buffer = []                       # where I expect the incoming chunks to pool up

  _read: (lines) ->
    console.log "buffer #{@buffer.length}"
    if @buffer.length is 0
      @push null                       # nothing pooled, so end the readable side
    else
      console.log "lines: #{lines}"
      page = @buffer.slice 0, 33       # take the next 33 items as one "page"
      console.log page.length
      @buffer.splice 0, 33
      @push page

  _write: (data, encoding, next) ->
    @buffer.push data
    next()

convert = new Convert()
jsonReader.pipe(convert).pipe(process.stdout)
@buffer is always empty. Where does Node store the chunks coming in from the upstream?
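For what it's worth, this is the sort of grouping stage I think I'm reaching for: a Transform in object mode that collects objects and re-emits them 33 at a time. It only sketches the batching part and assumes the upstream already emits individual parsed objects (e.g. via something like JSONStream, which my code above doesn't use yet), so it isn't a fix for the code above:

{Transform, Readable} = require 'stream'

# Object-mode Transform that collects incoming objects and re-emits them
# in arrays of `size` (33 in my case), flushing any remainder at the end.
class Group extends Transform
  constructor: (size = 33) ->
    super objectMode: true
    @size    = size
    @pending = []

  _transform: (obj, encoding, next) ->
    @pending.push obj
    if @pending.length >= @size
      @push @pending.splice(0, @size)
    next()

  _flush: (done) ->
    @push @pending if @pending.length
    done()

# Quick sanity check: 100 dummy objects come out as batches of 33, 33, 33, 1.
rows   = ({id: i} for i in [1..100])
source = Readable.from rows            # Readable.from needs Node >= 12
group  = new Group 33
group.on 'data', (batch) -> console.log batch.length
source.pipe group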