I'd imagine it has something to do with buffering. Images can be very large, and they usually arrive compressed, so your computer has to spend some effort decoding the image before it can display it on screen.
When reading a large file, you usually allocate a buffer, a fixed-size region of memory that the file's data is streamed into. You load one portion of the image into the buffer, decode it, and repeat until the whole file has been consumed. Here, it looks like each part of the image is rendered as soon as it has been fully decoded, whereas other implementations wait until the whole file has been processed before visualising anything.
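To make that concrete, here's a minimal sketch in Python using Pillow's `ImageFile.Parser`, which is designed for exactly this kind of incremental feeding. The 8 KiB buffer size, the file name, and the repaint comment are my assumptions for illustration, not the actual implementation of whatever viewer you're using:

```python
from PIL import ImageFile

BUFFER_SIZE = 8 * 1024  # assumed buffer size; a real viewer would tune this

def load_progressively(path):
    """Stream an image file into the decoder one buffer-sized chunk at a time."""
    parser = ImageFile.Parser()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BUFFER_SIZE)  # read at most BUFFER_SIZE bytes
            if not chunk:                # end of file reached
                break
            parser.feed(chunk)           # decoder consumes whatever it can so far
            # A viewer that renders as it goes would repaint here, drawing
            # whichever rows have been fully decoded up to this point.
    return parser.close()  # finalize and return the complete image

img = load_progressively("photo.jpg")  # hypothetical file name
img.show()
```

The repaint point inside the loop is where the behaviour you're describing would come from: redraw after every chunk and the image appears piece by piece; redraw only after `close()` and it appears all at once.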
If a bigger buffer were allocated (a larger `BUFFER_SIZE` in the sketch above), you'd see larger chunks get rendered at a time, but this places a greater overhead on system memory.
Anyway, this is just my hunch.