
I have a very large file (50 GByte) that I could either split into many single 2 MByte chunk files, or access as one file with file_get_contents using an offset and a length of 2 MByte, where the offsets used are not necessarily contiguous.

So I wonder how much overhead file_get_contents has here, or whether it is even faster because the file handle is already open after the first access?

I'm using PHP 7.3.8 on Windows 10.
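
What I have in mind looks roughly like this; the file name, offset and chunk size are just placeholders, not my real data:

```php
<?php
// Placeholder values - not the real file or offsets.
$file      = 'bigfile.bin';            // the 50 GByte file
$chunkSize = 2 * 1024 * 1024;          // 2 MByte
$offset    = 48 * 1024 * 1024 * 1024;  // e.g. start at the 48 GByte mark
                                       // (a 64-bit PHP build is needed for offsets > 2 GByte)

// Variant A: one big file, read a 2 MByte slice per call.
// file_get_contents() opens the file, seeks to $offset, reads $chunkSize
// bytes and closes the file again - on every single call.
$chunk = file_get_contents($file, false, null, $offset, $chunkSize);

// Variant B: pre-split 2 MByte files, one file per chunk index.
$chunkIndex = intdiv($offset, $chunkSize);
$chunk = file_get_contents("chunks/chunk_{$chunkIndex}.bin");
```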

JSmith
  • `file_get_contents()` will read the entire file into memory. As PHP can handle large files, this should not be a big problem. – Markus Zeller Nov 23 '19 at 13:15
  • I would have thought if you are going to read various parts of a file, using `fopen()` and multiple calls to `fseek()` would allow you to do all of the operations without having to continually access the file over again. – Nigel Ren Nov 23 '19 at 13:31
  • Tags for overhead and overhead minimization should be removed. – Tanveer Badar Nov 23 '19 at 14:13
  • @NigelRen I have read here on Stack Overflow that `file_get_contents` should be much faster than `fopen` and `fseek`. With the memory limits I have on my computer, there is no other way than to access the 2 MByte chunks more than once, and it's not possible to say how often. So how long does `file_get_contents` take to read - let's say - the bytes from the 48.002 GByte position to the 48.004 GByte position of the file, compared to using single 2 MByte files? – JSmith Nov 23 '19 at 16:05
  • You would have to test it in your own situation, unless you can find an exact reference saying that `file_get_contents()` is faster at reading chunks of a large file. The important thing is that using `fseek()` (which I think is what `file_get_contents()` does internally) doesn't read all of the contents sequentially (see the sketch below these comments). – Nigel Ren Nov 23 '19 at 17:58
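
A minimal timing sketch along the lines Nigel Ren suggests, comparing repeated `file_get_contents()` calls against a single `fopen()` handle with `fseek()`/`fread()`. The file name, offsets and chunk size are placeholder assumptions; the actual numbers have to be measured on the machine in question:

```php
<?php
// Placeholder file and non-contiguous offsets - adjust to the real data.
$file      = 'bigfile.bin';
$chunkSize = 2 * 1024 * 1024; // 2 MByte
$offsets   = [0, 10 * $chunkSize, 3 * $chunkSize, 500 * $chunkSize];

// Variant A: file_get_contents() opens and closes the file on every call.
$start = microtime(true);
foreach ($offsets as $offset) {
    $chunk = file_get_contents($file, false, null, $offset, $chunkSize);
}
$timeA = microtime(true) - $start;

// Variant B: open the file once, then seek and read for every chunk.
$start = microtime(true);
$fp = fopen($file, 'rb');
foreach ($offsets as $offset) {
    fseek($fp, $offset);
    $chunk = fread($fp, $chunkSize);
}
fclose($fp);
$timeB = microtime(true) - $start;

printf("file_get_contents: %.4fs, fopen/fseek/fread: %.4fs\n", $timeA, $timeB);
```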

0 Answers