1

I would like to add a CHECK constraint that calls a function when new entries are inserted into a table. I have used the following sample code to implement such functionality:

CREATE TABLE customers(
    id NUMBER NOT NULL,
    PRIMARY KEY(id));

CREATE OR REPLACE FUNCTION totalCustomers
RETURN NUMBER IS 
   total NUMBER := 0; 
BEGIN 
   SELECT count(*) into total 
   FROM customers; 

   RETURN total; 
END; 
/

ALTER TABLE customers
ADD CHECK (totalCustomers() < 10);

When I run this code on livesql.oracle.com, I get the following error:

ORA-00904: "TOTALCUSTOMERS": invalid identifier.

What is the right way of calling this function in the check constraint?

P.S. Please ignore the contents of the function. I will replace it with the desired contents later.

Matt
  • Better idea: implement an `INSERT_CUSTOMER` procedure and put validations such as this into the procedure - then call `INSERT_CUSTOMER` every time you need to insert a new customer. Done this way your code is clear and obvious, and you don't have "magic code" that executes out-of-line where it's hard to figure out what's actually being done. – Bob Jarvis - Слава Україні Mar 01 '20 at 18:12
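
A minimal sketch of the procedure Bob Jarvis describes (the name `INSERT_CUSTOMER` is from the comment; the body and the limit of 10 are illustrative, and the count check shares the multi-session caveat discussed in the answers below):

CREATE OR REPLACE PROCEDURE insert_customer(p_id IN customers.id%TYPE) IS
   total NUMBER;
BEGIN
   -- Every customer insert goes through this one procedure, so the
   -- validation is explicit and easy to find.
   SELECT count(*) INTO total
   FROM customers;

   IF total >= 10 THEN
      RAISE_APPLICATION_ERROR(-20001, 'Total number of customers must not exceed 10');
   END IF;

   INSERT INTO customers(id) VALUES (p_id);
END insert_customer;
/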

3 Answers

4

There isn't one.

Straight from the Oracle documentation:

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/constraint.html#GUID-1055EA97-BA6F-4764-A15F-1024FD5B6DFE

Conditions of check constraints cannot contain the following constructs:

.............

  • Calls to user-defined functions

..............

Now: you said "disregard the actual content of the function". That is not a healthy attitude; the content matters too. For one thing, the function would have to be deterministic anyway (yours is not) - a problem quite apart from it being a user-defined function. Moreover, a constraint's condition can only refer to values in a single row - it can't be a "table" constraint, as yours would have to be.

You may wonder, then - how would one implement a "constraint" like yours? One somewhat common method is to create a materialized view based on "select count(*)....." and put a constraint on the MV. The MV should refresh full on commit. Whenever you modify the base table and commit, the MV is refreshed - and if the count increases above 10, the changes are rolled back.
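
To illustrate (the object names and the REFRESH COMPLETE option are my assumptions, not from the original answer; as the comments below note, COMPLETE is fine for a table of at most ten rows and needs no MV log):

CREATE MATERIALIZED VIEW customers_count_mv
   REFRESH COMPLETE ON COMMIT
   AS SELECT count(*) cnt FROM customers;

-- Enforced when the MV refreshes at commit time; a violation rolls the
-- whole transaction back (ORA-02091 / ORA-02290).
ALTER MATERIALIZED VIEW customers_count_mv
   ADD CONSTRAINT max_10_customers CHECK (cnt < 10);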

Community
  • I had seen answers like this: https://stackoverflow.com/a/37580382/4231538. But I wasn't able to use them in my code. I think the quote from the documentation explains why. The question I'm trying to solve is very similar to the question shared in the link in that I am also trying to make sure the time period of new entries don't overlap with existing ones. – Matt Mar 01 '20 at 08:29
  • "The MV should refresh full on commit." Why do you say "full"? I have used refresh FAST on commit MVs dozens of times to implement cross-row constraints. Is there something specific about this use case that requires FULL over FAST? – Stew Ashton Mar 01 '20 at 09:21
  • Maybe "FULL" is the wrong expression, I assume mathguy actually means "full content". Anyway, a "ON COMMIT REFRESH" should always implemented as "FAST" rather than "COMPLETE". For such a constraint I would recommend a STATEMENT LEVEL Trigger. – Wernfried Domscheit Mar 01 '20 at 11:44
  • @StewAshton - I was referring specifically to the example in the OP's post. I don't think there is any difference between full refresh and incremental refresh, when the view simply takes `COUNT(*)` over the entire table, is there? And for a table with at most ten rows, I doubt it would ever make a difference. On the other hand, incremental refresh is much harder to set up, as no doubt you know. –  Mar 01 '20 at 14:17
  • @WernfriedDomscheit - "always"? Why? For example, in this case the base table will always have fewer than ten rows; why bother to create MV logs, and jump through all the hoops, for such a use case? –  Mar 01 '20 at 14:18
  • Ok, that's a valid point; for only 10 rows the refresh can be COMPLETE. Often materialized views are used to improve performance, which usually implies bigger tables. – Wernfried Domscheit Mar 01 '20 at 14:30
  • @WernfriedDomscheit - Also, again, I don't think there is any difference between "full" and "incremental" when the view is `select count(*) from [entire_table]`, regardless of cardinality, is there? In that case too - why bother with an incremental refresh, when it does the same job as a full one? –  Mar 01 '20 at 14:34
  • Good question; according to the required MATERIALIZED VIEW LOGs it should not make any difference. Perhaps Oracle uses some smart magic to make FAST refresh faster - but I don't know. – Wernfried Domscheit Mar 01 '20 at 15:13
  • @mathguy: thanks for confirming that your "full" comment was related to the specific (and unrealistic) use case of the question. In fact, incremental refreshes can indeed help with MVs (on big tables) that have aggregations: counts are incremented or decremented based on the inserts or deletes recorded in the MV log. The table is not rescanned. This does not mean that incremental refreshes are **always** "faster" though... – Stew Ashton Mar 02 '20 at 07:33
2

In your comment on mathguy's answer, you say "I am also trying to make sure the time period of new entries don't overlap with existing ones." I have done this with "refresh fast on commit" materialized views. Warning: "fast" refreshes can be slow if you are not careful; please refer to this blog http://www.adellera.it/ , especially concerning statistics on the materialized view log.

I am assuming exclusive end dates. If an end date is null, that means the datetime range goes on indefinitely. Many overlaps will be caught immediately by the primary key and unique constraints. The others will be caught at commit time by the constraint on the materialized view. Note that at the end of the transaction the MV will never have any rows.

SQL> create table date_ranges (
  2    key1, start_date,
  3    primary key(key1, start_date),
  4    end_date,
  5    unique(key1, end_date),
  6    check(start_date < end_date)
  7  )
  8  as
  9  with a as (select date '2000-01-01' dte from dual)
 10  select 1, dte, dte+1 from a
 11  union all
 12  select 1, dte+1, dte+2 from a
 13  union all
 14  select 1, dte-1, dte from a
 15  union all
 16  select 2, dte+10, dte+11 from a
 17  union all
 18  select 2, dte+12, dte+13 from a
 19  union all
 20  select 2, dte+8, dte+9 from a
 21  /

Table DATE_RANGES created.

SQL> create materialized view log on date_ranges
  2  with sequence, rowid, primary key, commit scn (end_date) including new values
  3  /

Materialized view log DATE_RANGES created.

SQL> create materialized view overlapping_ranges refresh fast on commit
  2  as
  3  select a.rowid arid, b.rowid brid
  4  from date_ranges a, date_ranges b
  5  where a.key1 = b.key1
  6  and a.rowid != b.rowid
  7  and a.start_date < b.end_date
  8  and a.end_date > b.start_date;

Materialized view OVERLAPPING_RANGES created.

SQL> 
SQL> alter materialized view overlapping_ranges
  2  add constraint overlaps_not_allowed check (1=0) deferrable initially deferred
  3  /

Materialized view OVERLAPPING_RANGES altered.

SQL> insert into date_ranges select 1, date '1999-12-30', date '2000-01-4' from dual;

1 row inserted.

SQL> commit;

Error starting at line : 42 in command -
commit
Error report -
ORA-02091: transaction rolled back
ORA-02290: check constraint (STEW.OVERLAPS_NOT_ALLOWED) violated
Stew Ashton
  • There was a thread on exactly the same question a few weeks (months?) ago on OTN, which prompted me to re-read Toon Koppelaars' excellent "triggers considered harmful, considered harmful" blog, which I just finished... I am trying to understand exactly how to do this with a combination of row and statement level triggers, possibly the most efficient solution. I may open a discussion on OTN about it. A point he makes about the MV solution: it will serialize DML (or, rather, their commits) even when the changes have nothing to do with the constraint –  Mar 01 '20 at 14:58
  • " the MV solution: it will serialize DML (or, rather, their commits) even when the changes have nothing to do with the constraint" Yes. More precisely, at commit time the MV refresh is serialized. There is a special wait event for that. – Stew Ashton Mar 02 '20 at 07:40
  • @mathguy: there was a detailed discussion (including Toon) here: https://asktom.oracle.com/pls/apex/asktom.search?tag=best-way-to-enforce-cross-row-constraints . We compared triggers and MVs. The MV solution included aggregation. Compound triggers (row + statement) were the fastest in my simple tests. – Stew Ashton Mar 02 '20 at 09:03
  • I know about the AskTom thread, I am reading through it right now :-) I understand exactly how the MV strategy works, and Toon's points about excessive serialization, etc. With my lack of formal background, where I am tripping is his use of DBMS_LOCK and related concepts, for which reason I am reading through Tom's book where he explains these things. Not trying to compete with Toon's commercial product; just trying to understand in detail how it works. –  Mar 02 '20 at 15:24
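
For reference, a minimal sketch of the row + statement combination discussed in this comment thread, applied to the overlap check above (the trigger name and the key-collection logic are my illustration; like any trigger-based check, it still needs explicit serialization - see the DBMS_LOCK discussion under the next answer - to be safe across sessions):

CREATE OR REPLACE TRIGGER date_ranges_no_overlap
FOR INSERT OR UPDATE ON date_ranges
COMPOUND TRIGGER
   -- State declared here is reset for every triggering statement.
   TYPE key_tab IS TABLE OF date_ranges.key1%TYPE INDEX BY PLS_INTEGER;
   g_keys key_tab;

   AFTER EACH ROW IS
   BEGIN
      -- Remember only the keys touched by this statement.
      g_keys(g_keys.COUNT + 1) := :new.key1;
   END AFTER EACH ROW;

   AFTER STATEMENT IS
      n NUMBER;
   BEGIN
      -- Statement-level section, so querying date_ranges here cannot
      -- raise ORA-04091. Re-check only the touched keys.
      FOR i IN 1 .. g_keys.COUNT LOOP
         SELECT count(*) INTO n
         FROM date_ranges a, date_ranges b
         WHERE a.key1 = g_keys(i)
           AND b.key1 = g_keys(i)
           AND a.rowid != b.rowid
           AND a.start_date < b.end_date
           AND a.end_date > b.start_date;
         IF n > 0 THEN
            RAISE_APPLICATION_ERROR(-20001,
               'Overlapping ranges for key ' || g_keys(i));
         END IF;
      END LOOP;
   END AFTER STATEMENT;
END date_ranges_no_overlap;
/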
1

I would suggest a trigger for such a requirement.

CREATE OR REPLACE TRIGGER AI_customers 
    AFTER INSERT ON customers 
DECLARE
   total NUMBER;
BEGIN
   SELECT count(*) into total 
   FROM customers; 
   IF total > 10 THEN
      RAISE_APPLICATION_ERROR(-20001, 'Total number of customers must not exceed 10');
   END IF;
END;
/

Note: this is a STATEMENT LEVEL trigger (no FOR EACH ROW clause), so you cannot get the famous "ORA-04091: table is mutating, trigger/function may not see it" error.

However, this trigger has some limitations in a multi-user environment. If user_1 inserts records into the customers table and user_2 also inserts some records (before user_1 has committed), then you may end up with more than 10 records in your customers table.

Wernfried Domscheit
  • I suggest preceding the `SELECT COUNT(*)...` with a call to `DBMS_LOCK.REQUEST`, specifying `LOCKMODE => DBMS_LOCK.X_MODE`, `TIMEOUT => 9999`, and `RELEASE_ON_COMMIT => TRUE`. Done this way no other session will proceed through the trigger until the first transaction commits or rolls back. Note: this must be done carefully or performance will suffer greatly. – Bob Jarvis - Слава Україні Mar 01 '20 at 18:24
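
A minimal sketch of that suggestion applied to the trigger above (the lock id 1001 is an arbitrary illustration - real code would obtain a lock id via DBMS_LOCK.ALLOCATE_UNIQUE - and the trigger owner needs EXECUTE on DBMS_LOCK):

CREATE OR REPLACE TRIGGER AI_customers
    AFTER INSERT ON customers
DECLARE
   lck   INTEGER;
   total NUMBER;
BEGIN
   -- Wait here until any other session running this trigger commits or
   -- rolls back; our lock is then released automatically at commit.
   lck := DBMS_LOCK.REQUEST(id                => 1001,
                            lockmode          => DBMS_LOCK.X_MODE,
                            timeout           => 9999,
                            release_on_commit => TRUE);
   IF lck <> 0 AND lck <> 4 THEN  -- 0 = success, 4 = we already hold the lock
      RAISE_APPLICATION_ERROR(-20002, 'Could not serialize customer check: ' || lck);
   END IF;

   SELECT count(*) INTO total
   FROM customers;

   IF total > 10 THEN
      RAISE_APPLICATION_ERROR(-20001, 'Total number of customers must not exceed 10');
   END IF;
END;
/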