
I have a table where each row represents a key-value pair containing application-specific settings (such as the number of days to retain alerts, etc.). Each of these key-value pairs has a different range of valid values, so no single check constraint will apply equally to all rows. Some rows might need no validation at all and others might have string values needing special consideration. Is there some way I can create a check constraint on a per-row basis and have that constraint enforced when that row is updated?

I have attempted several times to achieve this, but have run into hurdles each time. Each attempt relies on the existence of a [Check] column on the table, wherein the constraint is defined for that row, similar to a normal table-based constraint (such as "CAST([Value] AS INT) <= 60").

My first attempt was to create a normal check constraint that calls a user-defined function. The function reads the contents of the [Check] column for the row (located by its identity value), evaluates the constraint, and returns true or false depending on whether the constraint is satisfied. The problem with this approach is that it requires dynamic SQL, both to get the contents of the [Check] column and to execute the code it contains. But of course, dynamic SQL is not permitted in a function.
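For illustration, the rejected attempt looked roughly like this (hypothetical names; a sketch, not working code). SQL Server refuses it because a function is not allowed to execute dynamic SQL:

```sql
-- Hypothetical sketch of the failed attempt: SQL Server does not permit
-- executing dynamic SQL from inside a function, so this cannot work.
CREATE FUNCTION [dbo].[fn_CheckGenericSetting] (@ID INT)
RETURNS BIT
AS
BEGIN
    DECLARE @Sql NVARCHAR(MAX), @Ok BIT;

    -- Build a statement around the row's own [Check] text.
    SELECT @Sql = N'SELECT @Ok = CASE WHEN ' + [Check] + N' THEN 1 ELSE 0 END'
                + N' FROM [dbo].[GenericSetting] WHERE [ID] = @ID;'
    FROM [dbo].[GenericSetting] WHERE [ID] = @ID;

    -- Not allowed inside a function: SQL Server raises an error here.
    EXEC sp_executesql @Sql, N'@ID INT, @Ok BIT OUTPUT', @ID = @ID, @Ok = @Ok OUTPUT;

    RETURN @Ok;
END
```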

Next, I tried changing the function to a stored procedure, but it does not appear to be possible to call a stored procedure via a check constraint.

Finally, I tried creating a function AND a stored procedure, and calling the stored procedure from the function, but that is not permitted either.

The only way I know that will work is to write a huge, monolithic check constraint, containing checks for each row by identity value, all OR'ed together, like this:

(ID = 1 AND CAST(Value AS INT) <= 100) OR (ID = 2 AND Value IN ('yes', 'no')) OR...

But that's an error-prone maintenance nightmare. Does anyone know of a way to accomplish what I want, without resorting to a monolithic check constraint?

As requested, consider the following table definition and some sample rows:

CREATE TABLE [dbo].[GenericSetting]
(
  [ID] [INT] IDENTITY(1,1) NOT NULL,
  [Name] [NVARCHAR](50) NOT NULL,
  [Value] [NVARCHAR](MAX) NULL,
  [Check] [NVARCHAR](MAX) NULL,
  CONSTRAINT [PK_GenericSetting] PRIMARY KEY CLUSTERED ([ID])
)

INSERT INTO [dbo].[GenericSetting] ([Name],[Value],[Check]) VALUES ('AlertRetentionDays', 60, 'CAST(Value AS INTEGER) <= 60');
INSERT INTO [dbo].[GenericSetting] ([Name],[Value],[Check]) VALUES ('ExampleMode', 2, 'CAST(Value AS INTEGER) IN (1,2,5)');
Dennis Jones

4 Answers


You need to create a trigger on this table to accomplish this task.
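A hedged sketch of what that trigger could look like (untested, hypothetical names): unlike a check constraint, a trigger may execute the row's stored [Check] text via dynamic SQL. Note that anyone who can write the [Check] column can thereby execute arbitrary SQL, so that column needs to be locked down.

```sql
-- Sketch: validate each inserted/updated row against its own [Check] expression.
CREATE TRIGGER [dbo].[TR_GenericSetting_Validate]
ON [dbo].[GenericSetting]
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @ID INT, @Check NVARCHAR(MAX), @Sql NVARCHAR(MAX), @Ok BIT;

    DECLARE RowCur CURSOR LOCAL FAST_FORWARD FOR
        SELECT [ID], [Check] FROM inserted WHERE [Check] IS NOT NULL;

    OPEN RowCur;
    FETCH NEXT FROM RowCur INTO @ID, @Check;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Evaluate the check text against the row itself, so that bare
        -- references to [Value] resolve against the table's columns.
        SET @Sql = N'SELECT @Ok = CASE WHEN ' + @Check + N' THEN 1 ELSE 0 END'
                 + N' FROM [dbo].[GenericSetting] WHERE [ID] = @ID;';
        SET @Ok = NULL;
        EXEC sp_executesql @Sql, N'@ID INT, @Ok BIT OUTPUT', @ID = @ID, @Ok = @Ok OUTPUT;

        IF ISNULL(@Ok, 0) = 0
        BEGIN
            CLOSE RowCur; DEALLOCATE RowCur;
            RAISERROR('GenericSetting check failed for ID %d.', 16, 1, @ID);
            ROLLBACK TRANSACTION;
            RETURN;
        END

        FETCH NEXT FROM RowCur INTO @ID, @Check;
    END

    CLOSE RowCur;
    DEALLOCATE RowCur;
END
```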

Gaurav

You would write such check constraints using conditional logic. For safety, this is actually a case where I would use `case` expressions for the boolean logic:

alter table eav add constraint chk_eav_value
    check ((case when attribute = 'amount'
                 then (case when try_convert(int, value) >= 0 then 'ok' else 'bad' end)
                 when attribute = 'us_zip'
                 then (case when value like '[0-9][0-9][0-9][0-9][0-9]' then 'ok' else 'bad' end)
                 when attribute = 'city'
                 then (case when value not like '%[^a-zA-Z ]%' then 'ok' else 'bad' end)
                 else 'ok'
            end) = 'ok');
Gordon Linoff
  • Technically, this satisfies the question, but it will yield a "huge, monolithic check constraint" which is exactly what I wanted to avoid with a more dynamic approach. The table currently contains about 130 records, and even a 'case' will become nearly unreadable with that many branches. Additionally, the check constraint would have to be modified every time a new record is added. – Dennis Jones Dec 09 '18 at 00:49
  • @DennisJones . . . This is more appropriate for an EAV model. I would suggest that you store the values in separate columns, rather than separate rows. This will also handle the type incompatibilities as well. – Gordon Linoff Dec 09 '18 at 00:51
  • The table contains other columns as well (description, category, auditing columns, etc). The example table was just that, an example for the purposes of the question. Having one column per data value just isn't going to work. – Dennis Jones Dec 09 '18 at 00:57
  • @DennisJones . . . You are stuck with essentially a large `case` expression. You can put it in a user defined function, a trigger, or directly in the `check` constraint. – Gordon Linoff Dec 09 '18 at 01:03
  • Is there really no way around it? It's pretty lame (if you ask me) that SQL does not permit dynamic SQL or stored procedure calls from a function, which would make this a trivial exercise. – Dennis Jones Dec 09 '18 at 01:08
  • @DennisJones . . . I'm not sure what you mean. A stored function should do what you want. I don't see the value of dynamic SQL for this purpose. – Gordon Linoff Dec 09 '18 at 02:44
  • Think about it for a minute, and I think you'll get the picture. Every time the function is called (as a result of the check constraint being invoked), it has to query the table for the record to get the [Check] column. Then it has to execute the SQL that is contained within that [Check] column. I don't know how that can be done without dynamic SQL. – Dennis Jones Dec 09 '18 at 21:25
  • @DennisJones . . . You should be able to pass the id into the function. Functions in `check` constraints can accept arguments. – Gordon Linoff Dec 09 '18 at 21:54
  • I feel like I'm not making myself clear. You are right and I plan to pass the ID. I will even pass the content of the Check column. But in order to use the check code (which is a string), I have to create a SQL statement that uses it, and that means using dynamic SQL. If you know another way, I'm all ears. – Dennis Jones Dec 10 '18 at 04:28

Check constraints aren't really designed to do that... the best you could do would be:

  • a validation trigger on the table, which sucks, or

  • implement all your writes as stored procs themselves, and revoke direct INSERT/UPDATE permissions on the table otherwise. This also sucks.
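The second option could look something like this (a hedged sketch with hypothetical names): one write procedure that evaluates the row's stored [Check] text against the proposed value before updating.

```sql
-- Sketch: funnel all writes through this procedure and revoke direct
-- INSERT/UPDATE on the table from ordinary callers.
CREATE PROCEDURE [dbo].[usp_SetGenericSetting]
    @ID INT,
    @Value NVARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Check NVARCHAR(MAX), @Sql NVARCHAR(MAX), @Ok BIT;

    SELECT @Check = [Check] FROM [dbo].[GenericSetting] WHERE [ID] = @ID;

    IF @Check IS NOT NULL
    BEGIN
        -- The check text references a bare [Value], so expose the proposed
        -- value under that name via a derived table.
        SET @Sql = N'SELECT @Ok = CASE WHEN ' + @Check + N' THEN 1 ELSE 0 END'
                 + N' FROM (SELECT @Value AS [Value]) AS v;';
        EXEC sp_executesql @Sql, N'@Value NVARCHAR(MAX), @Ok BIT OUTPUT',
             @Value = @Value, @Ok = @Ok OUTPUT;

        IF ISNULL(@Ok, 0) = 0
        BEGIN
            RAISERROR('Value fails the configured check for setting ID %d.', 16, 1, @ID);
            RETURN;
        END
    END

    UPDATE [dbo].[GenericSetting] SET [Value] = @Value WHERE [ID] = @ID;
END
```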

At the risk of being an SO stereotype, you seem to be putting business logic in the DB layer... check constraints are great for static checks, but they weren't really intended for much beyond that. I would be tempted to suggest looking upstream (the DA layer or a common layer of your codebase) for solutions as well.

Yes, I went off a little there. Sorry in advance.

Jesse MacNett
  • Right. Putting the checks in the application is certainly a possibility. My goal with this idea was to eliminate the need for changes to the code when new records are added. The user interface is data-driven, supporting newly added records with no code changes, and we'd like to keep it that way. – Dennis Jones Dec 09 '18 at 01:19
  • Understood. Alternatively, if you want to get "creative" (read as: complicated), you could have a prewrite step in your code for each row that pulls back the validation text, inserts the proposed value, and executes it as a select, blowing on failure. Could also probably be done as a sproc to avoid injection attacks, too... Buuuut not great. Really, any time dynamic sql enters the picture, it's usually time for a moment of reflection, but eh....i get it. Dsql prewrite validation proc is the last option i can think of offhand, if you have opted out of trigger validation – Jesse MacNett Dec 09 '18 at 02:07
  • I haven't necessarily dismissed the idea of using a trigger, but we generally avoid them and try to use them only as a last resort. In this case, I think it IS the last resort! – Dennis Jones Dec 09 '18 at 21:33
  • -nod- Triggers should always be a last resort. – Jesse MacNett Dec 09 '18 at 21:59

In theory, you can implement this kind of check via a scalar UDF. However, be aware that scalar UDFs can be quite troublesome in such scenarios.

Considering that you have already chosen an EAV design approach for your system, adding a UDF as a check constraint might degrade the overall performance from bad to worse.
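For completeness, here is a hedged sketch of the scalar-UDF approach (hypothetical names): the per-setting rules live in the function body rather than in the [Check] column, which avoids dynamic SQL entirely; the trade-off is that the function must be edited whenever a rule is added or changed.

```sql
-- Sketch: hard-coded per-setting rules, dispatched by [Name].
CREATE FUNCTION [dbo].[fn_ValidateGenericSetting]
    (@Name NVARCHAR(50), @Value NVARCHAR(MAX))
RETURNS BIT
AS
BEGIN
    RETURN CASE
        WHEN @Name = 'AlertRetentionDays'
            THEN CASE WHEN TRY_CONVERT(INT, @Value) <= 60 THEN 1 ELSE 0 END
        WHEN @Name = 'ExampleMode'
            THEN CASE WHEN TRY_CONVERT(INT, @Value) IN (1, 2, 5) THEN 1 ELSE 0 END
        ELSE 1  -- no rule configured for this setting
    END;
END
GO

ALTER TABLE [dbo].[GenericSetting] WITH CHECK
    ADD CONSTRAINT [CK_GenericSetting_Value]
    CHECK ([dbo].[fn_ValidateGenericSetting]([Name], [Value]) = 1);
```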

Roger Wolf
  • Fortunately, this table is used for configuration data that doesn't change very often, and when it does, it's not expected to be lightning fast, so performance is not a concern in this particular case. – Dennis Jones Dec 09 '18 at 21:36