
I have a PHP script that reads a file with fgetcsv through PHP's zip:// stream wrapper. This works fine locally and when run on production via a cron job. When invoked manually via the CLI on production, however, it fails with a segmentation fault, apparently at the end of reading the file when fgetcsv returns false.

The relevant parts of the script:

#!/usr/local/bin/php5
<?php

...

$i = 0;
while (($row = fgetcsv($file)) !== false) {

    // processing ...

    if (++$i % 50000 == 0) {
        echo '.';
    }
}

printf("\nFinished reading %s records.\n\n", number_format($i));
fclose($file);
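
For reference, the elided part above opens the stream through the zip:// wrapper, roughly like this (the archive path and the entry name inside the archive are just illustrative):

// Illustrative only -- the actual archive path and entry name are elided above
$file = fopen('zip:///path/to/file.zip#data.csv', 'r');
if ($file === false) {
    die("Could not open zip entry\n");
}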

And its output:

-bash-3.00$ ./script.php 
Reading file.zip................Segmentation fault

It appears to segfault before the printf but after having read all the records, so I suspect it fails when it reaches the end of the file.

-bash-3.00$ /usr/local/bin/php5 -v
PHP 5.2.8 (cli) (built: Apr 14 2010 16:08:06) 
Copyright (c) 1997-2008 The PHP Group
Zend Engine v2.2.0, Copyright (c) 1998-2008 Zend Technologies
    with the ionCube PHP Loader v3.1.31, Copyright (c) 2002-2007, by ionCube Ltd.

What could be the cause for this and is there a way to fix it?

deceze

3 Answers


strace the process to figure out where it's dying. A segfault is by definition an unhandled error, so the logs are unlikely to tell you much of interest. My speculation: PHP was compiled against a shared library that's not available on the host it has been deployed to.
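
For example (the output path is just illustrative):

$ strace -f -o /tmp/script.trace ./script.php

Here -f follows child processes and -o writes the trace to a file; the tail of that file shows the last system calls made before the crash.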

Alternatively, since you say you're running on shared hosting, why not get the vendor's support channel involved? They can do a lot more with root access to figure out what's going on, and it's the environment they're providing that's broken.

Jeff Albert
  • I'll see if I can achieve anything via support, but I'm skeptical. The weird thing and what's really bothering me is, *it works* via cron, but not via CLI on the same machine. – deceze Mar 31 '11 at 04:14
  • Does your cron job call php_cli, or does it make an HTTP request to invoke the PHP script? php and php_cli are different compiled binaries, and may exhibit different behaviours. – Jeff Albert Mar 31 '11 at 05:50
  • AFAIK the file is just executed directly, hence the need to put the shebang on the first line and make it executable. I'm doing basically the same in my CLI invocation. I only have a minimal web interface to configure cron jobs on this host, so I can't know for sure what exactly is done, but I can say for sure that there's no HTTP involved. – deceze Mar 31 '11 at 06:13
  • Cron may be running the script in a different environment (jailed, maybe?), or conceivably even on a different system across a shared filesystem. I definitely recommend stracing to find out what system call makes php dump core. – Jeff Albert Mar 31 '11 at 06:20
  • Would love to `strace` it, but `strace` is not available on that host. Very little is, in fact. -_-;; Interesting idea with running on a different system though. – deceze Mar 31 '11 at 06:30
  • Well, your next best bet is to analyze the core dump (assuming you have access to where it gets dumped); if nothing else, this will help you assess your level of dedication to determining the nature of the problem. – Jeff Albert Mar 31 '11 at 06:32
  • This helped me out, but I had to put a sleep at the top of my script to be able to strace it since it would die within a second. – Jose' Vargas Jun 22 '18 at 13:30

You need to check your logs to see where in the process it is failing. You may need to enable logging in your php.ini file.
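
For example, the relevant php.ini directives would be something like this (the log path is just an example):

log_errors = On
error_log = /home/youruser/php_errors.log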

Likely you don't have the proper permissions on the file. When you run it via cron, the script is likely running as root (assuming the job is in the system crontab).

So check your logs, and make sure the file has the correct permissions to be run by the user you log in as through the CLI.

Mike
  • This is running on a shared host, most certainly not as root. ;) Also, it's reading the file just fine; it's outputting a `.` every so many records read, and as you can see in the output, it's reading a lot of records. In fact, it appears that it's reading the whole file and segfaults at the *end* of this operation. – deceze Mar 31 '11 at 01:54
  • I don't think it's the fault of incorrect permissions or the script doing anything wrong. Seems to be something more low-level. I added the relevant piece of the script to the question for illustration. Also, unfortunately I don't have any PHP logs on this host. :( – deceze Mar 31 '11 at 01:58
  • Does the user that you are running as have permission to write the zip file in the location that you are trying to save it in? – Mike Mar 31 '11 at 02:05
  • Also if you add a .htaccess file and then in there specify the php.ini file you want to use which has logging enabled you could enable logging to a location of your choosing. Most hosting allows this. – Mike Mar 31 '11 at 02:06
  • I'm not writing any file, just reading one. Also, this host is very restrictive, I don't think I can override this setting. Furthermore, it's not executed in a web server context, only as a cron job. – deceze Mar 31 '11 at 02:10
  • Right.. assumed web, sorry bout that. – Mike Mar 31 '11 at 02:34
  • No problem. PHP just has that image, doesn't it? ^_^;; – deceze Mar 31 '11 at 02:45

I had the same problem when using `#!/usr/bin/php -q`. I was creating a PDF using fpdf.org, and I was reading the whole file (39,000 lines) with PHP's `file()` function, which gave me the same segfault. I replaced `file()` with `fopen()` and `fgets()`, reading line by line, and the problem went away. I suspect the segfault came from bash, because when I ran `php -q mpdf.php` manually instead of `./mpdf` with `#!/usr/bin/php` on the first line, the problem did not occur. By the way, I am running PHP 4.3.1 on SUSE Linux 8.2; I have to stay there and not move to later releases for a number of reasons.
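
In essence, the change was from file() to a line-by-line loop, something like this (a sketch; the filename is just an example):

// Before: $lines = file('input.txt');  // loads all 39,000 lines into memory at once
// After: read one line at a time
$fh = fopen('input.txt', 'r');
if ($fh === false) {
    die("Cannot open file\n");
}
while (($line = fgets($fh)) !== false) {
    // process $line ...
}
fclose($fh);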

I hope that was of some help. Regards from sunny Athens (in Greece).

Christos Michail

Castaglia