
So my problem is simple. I have a schema prod with many tables, and another schema log with the exact same tables and structure (only the primary keys differ). When I run UPDATE or DELETE in the prod schema, I want to record the old data in the log schema.

I have the following function, called by a trigger after an UPDATE or DELETE:

CREATE FUNCTION prod.log_data() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
    v RECORD;
    column_names text;
    value_names text;
BEGIN

    -- get column names of the current table and store the list in a text var;
    -- compare against TG_TABLE_NAME directly (quote_ident() is for building
    -- SQL, not for matching values stored in information_schema)
    column_names := '';
    value_names := '';
    FOR v IN SELECT column_name FROM information_schema.columns
             WHERE table_name = TG_TABLE_NAME AND table_schema = TG_TABLE_SCHEMA
             ORDER BY ordinal_position LOOP
        column_names := column_names || ',' || quote_ident(v.column_name);
        value_names := value_names || ',($1).' || quote_ident(v.column_name);
    END LOOP;

    -- remove the leading ','
    column_names := substring(column_names FROM 2);
    value_names := substring(value_names FROM 2);

    -- execute the insert into the log schema; OLD is passed in as $1
    EXECUTE 'INSERT INTO log.' || quote_ident(TG_TABLE_NAME)
         || ' ( ' || column_names || ' ) VALUES ( ' || value_names || ' )'
    USING OLD;

    RETURN NULL; -- the return value is ignored for an AFTER trigger
END;$$;
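For context, the function is attached per table with a trigger along these lines (prod."user" is just an example name taken from the question; "user" needs quoting because it is a reserved word):

    -- fire once per affected row, after UPDATE or DELETE
    CREATE TRIGGER log_user_changes
    AFTER UPDATE OR DELETE ON prod."user"
    FOR EACH ROW EXECUTE PROCEDURE prod.log_data();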

The annoying part is that I have to query information_schema for the column names on every single row. I would rather use this:

    EXECUTE 'INSERT INTO log.' || TG_TABLE_NAME || ' SELECT ' || OLD;

But some values can be NULL, so this will execute:

    INSERT INTO log.user SELECT 2,,,"2015-10-28 13:52:44.785947"

instead of:

    INSERT INTO log.user SELECT 2,NULL,NULL,"2015-10-28 13:52:44.785947"

Any idea to convert ",," to ",NULL,"?

Thanks

-Quentin

1 Answer

First of all, I must say that in my opinion using the PostgreSQL system tables (like information_schema) is the proper way for such a use case, especially since you only have to write it once: you create the function prod.log_data() and you're done. Moreover, relying on OLD expanded as text can be dangerous (just like SELECT *), because the column order is never spelled out explicitly.

But,

to answer your exact question, the only way I know is to do some string operations on OLD. Observe that the concatenation ... ' SELECT ' || OLD casts OLD to text, and that default cast produces the ugly double commas. So you can then play with that text. In the end I propose:
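You can see that default cast directly in psql; a row value renders its NULL fields as nothing between the commas (the values below are made up for illustration):

    -- the text form of a row value leaves NULL fields empty
    SELECT ROW(2, NULL, NULL, '2015-10-28 13:52:44')::text;
    -- result: (2,,,"2015-10-28 13:52:44")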

DECLARE
     tmp TEXT;
...
BEGIN
...
     /* make OLD -> text, e.g. (2,,3,4,,) */
     SELECT '' || OLD INTO tmp; /*step 1*/

     /* take care of the commas at the beginning and end: '(,' ',)' */
     tmp := replace(replace(tmp, '(,', '(NULL,'), ',)', ',NULL)'); /*step 2*/

     /* replace the remaining empty fields between commas with NULL */
     SELECT array_to_string(string_to_array(tmp, ',', ''), ',', 'NULL') INTO tmp; /*step 3*/

     /* now we can do the EXECUTE */
     EXECUTE 'INSERT INTO log.' || TG_TABLE_NAME || ' SELECT ' || tmp;

Of course you can do steps 1-3 in one big step:

SELECT array_to_string(string_to_array(replace(replace('' || OLD, '(,', '(NULL,'), ',)', ',NULL)'), ',', ''), ',', 'NULL') INTO tmp;
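To see what steps 2-3 do, you can run the same expression on a literal row text (the input string here is a made-up example): step 2 fixes the empty fields touching the parentheses, step 3 the ones between commas, because string_to_array with '' as the third argument turns empty fields into NULL array elements, which array_to_string then prints as the string 'NULL':

    SELECT array_to_string(
        string_to_array(
            replace(replace('(,2,,3,)', '(,', '(NULL,'), ',)', ',NULL)'),
            ',', ''),
        ',', 'NULL');
    -- result: (NULL,2,NULL,3,NULL)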

In my opinion this approach isn't any better than using information_schema, but it's your call.

Gabriel's Messanger
  • Thank you very much! Thanks for your opinion, I will run some tests on a big DB to see if `information_schema` is too slow for my use case. – Quentin Silvestre Nov 03 '15 at 08:28
  • In fact this is for performance. But could you tell me when Postgres fires triggers? I wonder if Postgres returns to PHP before firing an `AFTER INSERT` trigger. – Quentin Silvestre Nov 03 '15 at 08:39
  • I'm not 100% sure, but I doubt it. Every trigger (no matter `AFTER` or `BEFORE`) in PostgreSQL is executed in the same transaction as the master query, which means the execution isn't over until the trigger ends. Be aware that an unhandled exception in an `AFTER` trigger makes the whole transaction roll back, so it could cause problems if PHP thought the query was successful while the db server itself doesn't know that yet. – Gabriel's Messanger Nov 03 '15 at 09:09
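The rollback behaviour mentioned in the last comment can be checked with a toy trigger (all names here are hypothetical):

    -- an AFTER trigger that always raises: the INSERT that fired it
    -- is rolled back together with the rest of the transaction
    CREATE FUNCTION boom() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
        RAISE EXCEPTION 'logging failed';
    END;$$;

    CREATE TRIGGER t_boom AFTER INSERT ON some_table
    FOR EACH ROW EXECUTE PROCEDURE boom();

    INSERT INTO some_table VALUES (1);  -- fails, nothing is inserted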