
I am new to temporal tables. We have the following table in the database, with which we would like to keep track of the CLV values of our customers over time. Here is the CREATE script of the table:

CREATE TABLE [ana].[CLV](
    [CustomerID] [varbinary](400) NOT NULL,
    [Lifetime] [numeric](9, 6) NULL,
    [NumberofOrders] [int] NULL,
    [TotalCLV] [float] NULL,
    [SysStartTime] [datetime2](7) GENERATED ALWAYS AS ROW START NOT NULL,
    [SysEndTime] [datetime2](7) GENERATED ALWAYS AS ROW END NOT NULL,
PRIMARY KEY CLUSTERED 
(
    [CustomerID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
    PERIOD FOR SYSTEM_TIME ([SysStartTime], [SysEndTime])
) ON [PRIMARY]
WITH
(
SYSTEM_VERSIONING = ON ( HISTORY_TABLE = [ana].[CLVHistory] )
)

Using a scheduled MERGE statement, we are going to update this table on a weekly or monthly basis. But rather than looking at individual records, we would like to know how the table looked before and after each MERGE. In other words, we want a point-in-time analysis, or snapshot.

When I look at Microsoft's own documentation for time travel with temporal tables, the scripts there seem to contain other parameters such as MEMORY_OPTIMIZED and DATA_CONSISTENCY_CHECK. I am wondering if it is possible to look at old snapshots without these parameters. Is the current structure of the table enough to go back in time?
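For reference, the scheduled refresh we have in mind looks roughly like this (a sketch only; `[stg].[CLV]` is a hypothetical staging table holding the freshly computed values, and the period columns are deliberately left out because GENERATED ALWAYS columns cannot be written to):

```sql
-- Sketch of the weekly/monthly refresh. [stg].[CLV] is a hypothetical
-- staging table; SysStartTime/SysEndTime are maintained by the engine
-- and must not appear in the UPDATE or INSERT lists.
MERGE [ana].[CLV] AS target
USING [stg].[CLV] AS source
    ON target.[CustomerID] = source.[CustomerID]
WHEN MATCHED THEN
    UPDATE SET target.[Lifetime]       = source.[Lifetime],
               target.[NumberofOrders] = source.[NumberofOrders],
               target.[TotalCLV]       = source.[TotalCLV]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([CustomerID], [Lifetime], [NumberofOrders], [TotalCLV])
    VALUES (source.[CustomerID], source.[Lifetime],
            source.[NumberofOrders], source.[TotalCLV]);
```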

disasterkid
    Versioning is the only thing necessary to get... versioning, so no, you don't need things like memory-optimized tables to do this. The examples are just showing off sexy ways to do this for OLTP scenarios, but the use of memory-optimized tables, columnstores and whatnot is orthogonal to temporal tables. Any temporal table will allow `FOR SYSTEM_TIME` queries. Turning this into a comparison query with before/after needs some more work (the page you linked has an [example of that](https://docs.microsoft.com/sql/relational-databases/tables/temporal-table-usage-scenarios#anomaly-detection)). – Jeroen Mostert Mar 25 '20 at 11:27
  • You are right. But how can I get a snapshot of the table before and after an entire update, not based on individual rows? For example, if the update is done by a script that takes 3 hours to complete, and we then mistakenly pass a value to `FOR SYSTEM_TIME` that falls inside that 3-hour window, we will not get the values from before the script ran, but intermediate values that are not final. – disasterkid Mar 25 '20 at 12:24
    Have your update process record when it started and ended. This is the only reliable way in any case, even if you could pin down before and after states down to the individual transaction (which can be achieved with triggers or Change Data Capture, for example, two possible alternatives to a versioned table), because there's no way otherwise to identify the "logical" before and after states of your data (unless you truly do everything in one single transaction, which isn't advisable for something taking three hours). – Jeroen Mostert Mar 25 '20 at 12:26
  • That's a great idea. So, I guess I save the start and end time of the script, possibly with its input parameters, in a separate table, and then use those timestamps with `FOR SYSTEM_TIME` to get my snapshot? If that is so, feel free to post the answer. Much appreciated! – disasterkid Mar 25 '20 at 12:34
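The suggestion in the comments above can be sketched as follows (the `[ana].[CLVRunLog]` table and its column names are made up for illustration). Note that `FOR SYSTEM_TIME AS OF`, not `BETWEEN`, is what yields a single consistent snapshot: `BETWEEN` returns every row version that was active at any point in the window, so a 3-hour run would give you a mix of old and new versions.

```sql
-- Hypothetical run log for the refresh script; names are illustrative.
-- Temporal period columns are recorded in UTC, hence SYSUTCDATETIME().
CREATE TABLE [ana].[CLVRunLog](
    [RunID]     int IDENTITY(1,1) PRIMARY KEY,
    [StartTime] datetime2(7) NOT NULL,
    [EndTime]   datetime2(7) NULL
);

DECLARE @start datetime2(7) = SYSUTCDATETIME();
INSERT INTO [ana].[CLVRunLog] ([StartTime]) VALUES (@start);

-- ... run the MERGE / refresh script here ...

DECLARE @end datetime2(7) = SYSUTCDATETIME();
UPDATE [ana].[CLVRunLog]
SET [EndTime] = @end
WHERE [StartTime] = @start;

-- Snapshot of the table as it looked just before the run:
SELECT *
FROM [ana].[CLV] FOR SYSTEM_TIME AS OF @start;

-- Snapshot as it looked after the run completed:
SELECT *
FROM [ana].[CLV] FOR SYSTEM_TIME AS OF @end;
```

Later runs can pull `@start`/`@end` back out of the log table, so any historical before/after pair remains queryable for as long as the history table is retained.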
