
Hey, I am struggling a bit to determine the exact cause of an error that has been popping up in our release environment. There does not seem to be much on Google dealing with this particular error.

This is the error message we are getting:

SQLSTATE[34000]: Invalid cursor name: 7 ERROR: portal "" does not exist

The error only pops up when we are using PDO prepared statements.

This is the setup for our release environment:

  1. pgpool 3.0.1 (the PostgreSQL backends are in streaming replication mode!)
  2. PHP 5.3.5
  3. PostgreSQL 9.0

Edit: The architecture is 64-bit.

The same error does not manifest in our test environment (Edit: forgot to mention, the standard test environment uses PostgreSQL 9.0 without pgpool). Thus, I am led to suspect that pgpool is at least partly responsible.

Does anyone know what the probable causes for this error are?

Edit: OK, here is an example of the kind of code that causes the error.

$sql = 'SELECT * ';
$sql .= 'FROM "myTable" AS "myStuff" ';
$sql .= 'WHERE "myStuff"."status" = 1 ';
$sql .= 'AND "myStuff"."myTableId" = :tableId ';
$sth = $this->_db->prepare($sql);
$sth->bindParam(':tableId', $tableId, PDO::PARAM_INT);
$sth->execute();
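
Edit: On Charles' question below about emulated prepares: I can't yet say whether it changes anything, but it is the obvious knob to isolate, because with emulation turned on PDO interpolates the bound values itself and sends plain SQL, so no server-side PREPARE/EXECUTE (none of the pdo_stmt_XXXXXXXX statements visible in the logs) ever passes through pgpool. A minimal sketch of toggling it; the DSN, credentials, and table are placeholders:

// Placeholder DSN/credentials -- pgpool commonly listens on port 9999.
$db = new PDO('pgsql:host=127.0.0.1;port=9999;dbname=mydb', 'myuser', 'mypassword');

// true  = PDO substitutes the bound values client-side and sends plain SQL,
//         so pgpool never sees a named prepared statement or portal
// false = the pdo_pgsql default: real server-side PREPARE/EXECUTE
$db->setAttribute(PDO::ATTR_EMULATE_PREPARES, true);

$tableId = 42;
$sth = $db->prepare('SELECT * FROM "myTable" WHERE "myTableId" = :tableId');
$sth->bindParam(':tableId', $tableId, PDO::PARAM_INT);
$sth->execute();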

Edit: Some log file output:

PostgreSQL:

postgresql-Sun.log-129- ORDER BY "id" 
postgresql-Sun.log:130:ERROR:  portal "" does not exist
postgresql-Sun.log-131-ERROR:  prepared statement "pdo_stmt_00000011" does not exist
postgresql-Sun.log-132-STATEMENT:  DEALLOCATE pdo_stmt_00000011


postgresql-Mon.log-82-  where "id" = 32024
postgresql-Mon.log:83:ERROR:  portal "" does not exist
postgresql-Mon.log-84-ERROR:  prepared statement "pdo_stmt_00000002" does not exist
postgresql-Mon.log-85-STATEMENT:  DEALLOCATE pdo_stmt_00000002

pgpool:

LOG:   pid 22071: Replication of node:1 is behind 2080 bytes from the primary server (node:0)
LOG:   pid 22071: Replication of node:2 is behind 2080 bytes from the primary server (node:0)
LOG:   pid 13499: pool_send_and_wait: Error or notice message from backend: : DB node id: 0 backend pid: 8470 statement:  message: portal "" does not exist
LOG:   pid 13499: pool_send_and_wait: Error or notice message from backend: : DB node id: 0 backend pid: 8470 statement: DEALLOCATE pdo_stmt_00000003 message: prepared statement "pdo_stmt_00000003" does not exist
elyzion
  • OK, show your SQL; maybe you have an error in there – Ibu May 11 '11 at 06:27
  • If that were the case, I would be able to replicate the error; executing the same SQL twice does not reproduce it. The only common denominator is that it is always a SELECT query executed with a PDO prepared statement. Also, the error occurs seemingly at random (i.e., any query satisfying the common denominator seems equally likely to cause it). – elyzion May 11 '11 at 06:34
  • have you found a solution to this problem? – Ibu May 11 '11 at 16:23
  • does this error happen regardless of whether you're using emulated prepares (via the `PDO::ATTR_EMULATE_PREPARES` attribute) or not? – Charles May 11 '11 at 19:10

3 Answers


If you could duplicate the problem in your test environment, I wouldn't hesitate to recommend running the server with the -d (debug) option.

Since that's not the case, I'll just remind you that it's an option.

PostgreSQL command line options

On that page, there are a couple of "Semi-internal Options" that might help isolate the problem. Might not.

Mike Sherrill 'Cat Recall'

I believe I have a temporary workaround, and am invested in finding a permanent solution. I am working on a high-availability PG cluster on Amazon EC2, and have also run into this exact issue.

It occurs randomly for queries executed using DBI's prepare/execute blocks when DBI is routed through PGPool2 (3.0.3). The portal errors do not occur when PGPool2 is removed and we use Postgres 9 directly.

We run Perl, using DBI and DBD::Pg. The common factor seems to be PGPool2.

One possible solution we have found is to set 'ignore_leading_white_space' = false in pgpool.conf. The errors completely disappear for us with this option set. Unfortunately, this has the downside of potentially routing SELECTs to the master that should have been load-balanced, so I do not consider it a final solution.
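
For reference, the change is a single line in pgpool.conf (per the correction in the comments below, pgpool wants on/off here rather than true/false), followed by a pgpool restart/reload:

# pgpool.conf
# "on" (the default) makes pgpool ignore leading whitespace when deciding
# whether a SELECT can be load-balanced; turning it off is what made the
# portal errors disappear for us, at the cost of sending some of those
# SELECTs to the master
ignore_leading_white_space = off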

Example of code that randomly generates this issue:

$sth = $dbh->prepare("SELECT * FROM TABLEX WHERE ID = ?" ) 
|| die "Can't prepare statement: " . $dbh->errstr;
$sth->execute($self->id) || die "Can't get inventory " . $dbh->errstr;
  • Yes, pgpool is definitely suspect. I built another test cluster with the same setup as the release environment and am trying to recreate the error. Will try out the leading white space option as well. Could this be a pgpool bug? – elyzion May 13 '11 at 02:28
  • Correction: the setting is "ignore_leading_white_space = off"; using 'false' we still see the error. This is odd, as most fields in pgpool.conf accept false. Also odd is that ignore_leading_white_space is "on" by default. – Jonathan Regeimbal May 13 '11 at 19:17
  • Tested the ignore_leading_white_space option here; it doesn't change the end result. Might have to try raising the pgpool version. Loath to run alpha software on a production server, but it is preferable to having random queries dying all over the place. – elyzion May 19 '11 at 03:50

I've found a solution. It turns out the bug we are experiencing does not exist in Pgpool-II 3.1 alpha2; it looks like it was fixed in March, after the 3.0.3 release. The solution is to download the release and build/install it manually. Some of the paths are different; for instance, the config path is /usr/local/etc/.

Pgpool-II 3.1 alpha2 is available here: http://pgfoundry.org/frs/?group_id=1000055
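
The build itself is the usual source-tarball routine; a rough sketch only, and the tarball name, flags, and paths below are assumptions for a default install:

# adjust the file name to match what you actually download from pgfoundry
tar xzf pgpool-II-3.1-alpha2.tar.gz
cd pgpool-II-3.1-alpha2
./configure       # default prefix is /usr/local, hence the config dir /usr/local/etc/
make
sudo make install
# the sample config is installed next to where pgpool.conf is expected
cp /usr/local/etc/pgpool.conf.sample /usr/local/etc/pgpool.conf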

It's possible that the latest 3.0-Stable tree also has a fix for this issue. I am hoping to test an export from the CVS later tonight.

  • Seems very likely that the CVS contains a fix; the following is from Tatsuo Ishii on the pgpool mailing lists: "It is possible that the bug might be fixed in the 3.0.3-STABLE tree. So you might want to try extracting source from the 3.0.3-STABLE CVS tree. If you are not sure what to do with CVS, I can send you the tar ball personally." – elyzion May 20 '11 at 02:44
  • I did manage to test the latest export from the 3.0.3-STABLE tree directly from Tatsuo. The export was free of the bug during my testing, and I haven't noticed any dropped queries while testing other aspects of our application. I recommend exporting the tree from the CVS or requesting the tar ball from Tatsuo; I'm confident it will solve the portal errors, as it did for me! – Jonathan Regeimbal May 24 '11 at 21:20