I am aware of how ijson solves the challenges of reading and processing bulky JSON. However, I have not been able to find any article that explains how to speed it up.
I have seen a few suggestions for achieving that:
1. Use the YAJL backend
2. Tune the buf_size parameter (I am not seeing any significant improvement; a sketch of how I am applying both follows this list)
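For context, here is roughly how I am applying both suggestions right now. This is just a minimal sketch: the file name, the `items.item` prefix, the 4 MB buffer, and `process()` are placeholders for my actual data and per-record work.

```python
# Explicitly pick the C-accelerated YAJL backend instead of the pure-Python one.
import ijson.backends.yajl2_c as ijson

def process(record):
    pass  # placeholder for my real per-record logic

with open("data.json", "rb") as f:
    # buf_size controls how many bytes ijson reads per chunk (default is 64 KB);
    # I have been experimenting with larger values without much effect.
    for record in ijson.items(f, "items.item", buf_size=4 * 1024 * 1024):
        process(record)
```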
The question I have in mind (and I guess many others are already working on this):
If we want to utilize the parallel processing power of the machine, does ijson support that?
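To make the question concrete, this is the kind of arrangement I am imagining (a rough, unbenchmarked sketch of my own, not something I found in the ijson documentation): parsing stays a single sequential ijson stream, and only the downstream per-record processing is handed off to a pool of worker processes. The file name, the prefix, the body of `handle()`, and the chunksize are all placeholders or guesses.

```python
import multiprocessing as mp
import ijson

def handle(record):
    # Placeholder for the CPU-heavy per-record work I would like to parallelise.
    return record

def records(path):
    # Parsing itself remains single-threaded: ijson walks the stream sequentially.
    with open(path, "rb") as f:
        yield from ijson.items(f, "items.item")

if __name__ == "__main__":
    with mp.Pool() as pool:
        # Fan out only the processing; records get pickled to the workers,
        # so this only pays off if handle() is expensive enough.
        for result in pool.imap_unordered(handle, records("data.json"), chunksize=64):
            pass  # consume results here
```

Is something along these lines the right direction, or is there a better-supported approach?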
I know that at no point does ijson know the entire stream size, so splitting the input upfront is out of the question. My knowledge in this area is quite limited, so any thread, document, or link would be a nice starting point for understanding this more clearly.