
The need for this comes from the fact that we now have numerous sources updating sensitive bi-temporal tables, so we're a little on alert and want to cover our backs.

I used Richard Snodgrass's "Developing Time-Oriented Database Applications in SQL" to help me with this.

I've been trying to come up with a trigger that asserts the bitemporal contracts hold after each update or insert. More specifically, the contracts ensure the primary key is both valid-time and transaction-time sequenced, and they also include a non-sequenced valid-time continuity assertion. Broken down, the assertions are as follows:

  1. Assert there are no overlaps in "active" VALID_TIME timeline.

  2. Assert there are no overlaps in TRANSACTION_TIME timeline.

  3. Assert there are no gaps in VALID_TIME timeline.

By "active" I mean records whose TRANSACTION_END value is "forever" (9999-12-31).
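For reference, the pairwise overlap predicate that both timeline checks rely on (`T1.START < T2.END AND T2.START < T1.END`) can be sketched outside SQL. This is just my illustration, assuming closed-open `[start, end)` intervals; the helper name is mine:

```python
def overlaps(s1, e1, s2, e2):
    """Two closed-open intervals [s1, e1) and [s2, e2) overlap exactly
    when each one starts before the other one ends."""
    return s1 < e2 and s2 < e1

# A row that ends exactly where the next one starts does NOT overlap:
print(overlaps(1, 5, 5, 9))  # False: the intervals merely meet
print(overlaps(1, 5, 4, 9))  # True: [4, 5) is shared
```

Note that under this predicate, adjacent rows (one ending exactly where the next begins) are allowed, which is what the continuity check requires.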

Here's what I have so far:

CREATE OR REPLACE TRIGGER TRIGGER_NAME
    AFTER INSERT OR UPDATE
    ON SOME_TABLE
DECLARE
    -- Renamed from ROWCOUNT to avoid confusion with the SQL%ROWCOUNT attribute
    V_ROWCOUNT INTEGER;
BEGIN
    -- 1. No overlaps anywhere on the TRANSACTION_TIME timeline.
    SELECT COUNT(*) INTO V_ROWCOUNT
    FROM SOME_TABLE T1, SOME_TABLE T2
    WHERE T1.PK_COLUMN1 = T2.PK_COLUMN1
      AND T1.PK_COLUMN2 = T2.PK_COLUMN2
      AND T1.TRANSACTION_START < T2.TRANSACTION_END
      AND T2.TRANSACTION_START < T1.TRANSACTION_END
      AND T1.ROWID != T2.ROWID;

    IF V_ROWCOUNT > 0 THEN
        RAISE_APPLICATION_ERROR(-20001, 'BITEMPORAL INTEGRITY TRIGGER CHECK : TRANSACTION_TIME OVERLAP');
    END IF;

    -- 2. No overlaps on the "active" VALID_TIME timeline
    --    (rows with TRANSACTION_END = forever).
    SELECT COUNT(*) INTO V_ROWCOUNT
    FROM SOME_TABLE T1, SOME_TABLE T2
    WHERE T1.PK_COLUMN1 = T2.PK_COLUMN1
      AND T1.PK_COLUMN2 = T2.PK_COLUMN2
      AND T1.VALID_START < T2.VALID_END
      AND T2.VALID_START < T1.VALID_END
      AND T1.TRANSACTION_END = DATE '9999-12-31'
      AND T2.TRANSACTION_END = DATE '9999-12-31'
      AND T1.ROWID != T2.ROWID;

    IF V_ROWCOUNT > 0 THEN
        RAISE_APPLICATION_ERROR(-20001, 'BITEMPORAL INTEGRITY TRIGGER CHECK : ACTIVE VALID_TIME OVERLAP');
    END IF;

    -- 3. No gaps on the "active" VALID_TIME timeline: a pair (S, T2) with
    --    S ending before T2 starts is a violation unless some row T3 covers
    --    either endpoint.
    SELECT COUNT(*) INTO V_ROWCOUNT
    FROM SOME_TABLE S, SOME_TABLE T2
    WHERE S.VALID_END < T2.VALID_START
      AND S.PK_COLUMN1 = T2.PK_COLUMN1
      AND S.PK_COLUMN2 = T2.PK_COLUMN2
      AND S.TRANSACTION_END = DATE '9999-12-31'
      AND T2.TRANSACTION_END = DATE '9999-12-31'
      AND NOT EXISTS (
          SELECT *
          FROM SOME_TABLE T3
          WHERE T3.PK_COLUMN1 = S.PK_COLUMN1
            AND T3.PK_COLUMN2 = S.PK_COLUMN2
            AND ((T3.VALID_START <= S.VALID_END AND S.VALID_END < T3.VALID_END)
              OR (T3.VALID_START < T2.VALID_START AND T2.VALID_START <= T3.VALID_END))
            AND T3.TRANSACTION_END = DATE '9999-12-31');

    IF V_ROWCOUNT > 0 THEN
        RAISE_APPLICATION_ERROR(-20001, 'BITEMPORAL TRIGGER CHECK : ACTIVE VALID_TIME CONTINUITY VIOLATED (GAPS)');
    END IF;
END;
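To convince myself the gap check does what I think it does, here is a brute-force translation of the third query's logic into Python, purely as a sanity aid. The function name and the list-of-tuples representation are mine; it mirrors the SQL predicates literally (a pair with `S.VALID_END < T2.VALID_START` is a gap unless some row covers either endpoint):

```python
def has_gap(intervals):
    """Brute-force version of the continuity check over one key's active rows.
    intervals: list of (valid_start, valid_end) tuples."""
    for s_start, s_end in intervals:        # S in the outer query
        for t_start, t_end in intervals:    # T2 in the outer query
            if s_end < t_start:
                # NOT EXISTS (T3 ...): some row covering either endpoint?
                covered = any(
                    (u_start <= s_end and s_end < u_end)
                    or (u_start < t_start and t_start <= u_end)
                    for u_start, u_end in intervals
                )
                if not covered:
                    return True
    return False

print(has_gap([(1, 5), (5, 9)]))          # False: rows meet, timeline is continuous
print(has_gap([(1, 5), (6, 9)]))          # True: nothing covers [5, 6)
print(has_gap([(1, 5), (6, 9), (3, 7)]))  # False: the third row bridges the gap
```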

PK_COLUMN(s) make up the natural key; the rest should be obvious.

The questions are as follows:

Did I cover every possible scenario? Is there an additional contract I forgot to check against?

Bonus question, can you recommend any other solid book/resource on bi-temporal data architectures?

//Added some more tags, for greater reach...

Any comments, suggestions, or constructive critique are welcome.

Thanks in advance,

Matt.

  • Why is this tagged MySQL? Your trigger appears to be PL/SQL so I'm guessing that the Oracle tag is correct. In a multi-user application, you're not going to be able to enforce this sort of thing with a trigger unless you introduce some sort of serialization mechanism to lock some resource that will cause another session trying to insert an overlapping entry to block. – Justin Cave Jul 24 '15 at 21:36
  • Thanks for your response Justin! We already have a locking mechanism in place. We have a mirror table, which we merge into the "live" table, as some of the updates are measured in gigabytes and have to be bulk loaded as flat files... In regards to tags - I included both MySQL and Oracle, as I believe this question applies to either of the scenarios, and suggestions in either dialect are welcome! Matt – Palcente Jul 24 '15 at 21:43
  • OK. So only one session at a time can ever be modifying data in this table? If you're making GB worth of changes at a time then I'd assume that the table is very large. If that's the case, I'd hate to run the validation on the whole table rather than just the rows that changed. My bias would be to have a materialized view that enforced the validity rather than a trigger since that doesn't require the same sort of locking. But that's not an option in MySQL. If you want the best approach in Oracle and the best approach in MySQL you want two questions. – Justin Cave Jul 24 '15 at 21:57
  • Could you elaborate on how such a check could be done with a materialized view, please? I always saw materialized views as an option where multiple users read relatively stale data, which is not the case in this scenario. The huge update is done once a day, more or less at the same time. The rest of the updates come at random times throughout the day, and each consists of an update and an insert, as in expire the current row and insert a new entry. – Palcente Jul 24 '15 at 22:18
