Alright, I know my question is not entirely specific, since an optimal fread chunk size is more of a trial-and-error thing. However, I was hoping some of you could shed some light on this.
This also involves server-related stuff, so I'm not sure Stack Overflow is entirely the right place, but it seemed a better fit than ServerFault.
To begin with, I'm going to post two screenshots:
http://screensnapr.com/e/pnF1ik.png
http://screensnapr.com/e/z85FWG.png
Now, I've got a PHP script that streams files to the end user using fopen and fread. Most of these files are larger than 100 MB. My concern is that sometimes my server stats turn into what the screenshots above show. The two screens are from different servers; both are dedicated file-streaming boxes, and nothing runs on them except PHP streaming files to end users.
What confuses me is that even when my servers are only transmitting an aggregate total of about 4 MB/s to the end client(s), disk reads are running at 100 MB/s and over. This level of I/O eventually pegs the CPU in iowait, tasks pile up, and the server becomes completely unresponsive, requiring a reboot.
My current fread chunk size is set to 8 * 1024. My question is: will experimenting with the block size help at all? The client is only downloading at an average of ~4 MB/s, so why is the disk reading at 100 MB/s? I've tried everything I can think of on the server end; I even swapped in new disks to rule out a hardware issue. It looks to me like a script issue; maybe PHP is reading the entire file from disk regardless of how much it actually transfers to the client?
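To make the setup concrete, here's a simplified sketch of the streaming loop. The path, headers, and error handling are placeholders, not the actual script, which resolves the file from the request and does some checks first:

```php
<?php
// Simplified sketch of the streaming loop ($path is a placeholder;
// the real script resolves the file from the request).
$path = '/storage/files/example.bin';

header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));

$fp = fopen($path, 'rb');
if ($fp === false) {
    exit; // the real script sends a proper error response here
}
while (!feof($fp)) {
    echo fread($fp, 8 * 1024); // current chunk size: 8 KB
    flush();                   // push the chunk out to the client
}
fclose($fp);
```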
Any help at all would be appreciated. If this belongs on ServerFault, then my apologies for posting here. And if you need me to post more of the actual script, I can do that too.