
I have created an archive SQL Server database for some tables.
I have one production server and one archive server (the archive database only contains the few tables that I archived).

Now I'm looking for a way to automatically run the statements from a new production database deployment (ALTER TABLE statements) on my archive server as well.

I'm trying this with DDL triggers, and I have a linked server to my archive database server, but it doesn't work.
Does anyone have an idea how to fix this? (It can't be done by the deployment tool itself, so I need to fix it this way.)

I have tried this with DDL triggers on the database.

EDIT: I got it working with two triggers: one that logs the statement and one that executes it on the linked server.

Trigger one:

CREATE TRIGGER [LogTrigger]
ON DATABASE
FOR ALTER_TABLE
AS
BEGIN
    SET NOCOUNT ON;

    -- Log the raw EVENTDATA() XML together with when and by whom the change was made
    INSERT INTO TableLog
    (
        EventVal,
        EventDate,
        ChangedBy
    )
    VALUES
    (
        EVENTDATA(),
        GETDATE(),
        USER
    );
END;
GO
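
TableLog is not defined above; here is a minimal sketch of a logging table this trigger could insert into (the column names are taken from the trigger, the key and types are assumed):

CREATE TABLE dbo.TableLog
(
    LogId     INT IDENTITY(1,1) PRIMARY KEY, -- assumed surrogate key
    EventVal  XML NOT NULL,                  -- raw EVENTDATA() payload
    EventDate DATETIME NOT NULL,             -- when the DDL statement ran
    ChangedBy SYSNAME NOT NULL               -- USER at the time of the change
);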

Trigger two:

ALTER TRIGGER [trgTablechanges]
ON DATABASE
FOR ALTER_TABLE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @xEvent XML;
    DECLARE @tests NVARCHAR(MAX);

    -- Extract the original ALTER TABLE text from the EVENTDATA() XML
    SET @xEvent = EVENTDATA();
    SET @tests = CONVERT(NVARCHAR(MAX), @xEvent.query('data(/EVENT_INSTANCE/TSQLCommand/CommandText)'));

    -- Replay the statement on the archive server via the linked server
    EXECUTE server.dbname.dbo.sp_executesql @tests;
END;

GO
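
One prerequisite for this to work: executing sp_executesql through a four-part name requires the RPC Out option to be enabled on the linked server. A minimal sketch, assuming the linked server is named server as in the trigger:

EXEC sp_serveroption @server = N'server', @optname = N'rpc out', @optvalue = N'true';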
  • Are you *really* sure that's what you want? It means that if someone drops a column on your production server, the data goes poof on your archive server as well. That's not usually what people think of when they think "archive". – Jeroen Mostert Mar 08 '23 at 14:02
  • That's a good point! But we only alter tables here, and when we need to drop a column we need to drop it on the archive server as well. But I understand what you are saying. Thanks! – mrdennis86 Mar 08 '23 at 14:06
  • At the very least you need to provide some kind of context here. This is just too vague right now to offer a solid answer. Table definitions and a detailed explanation of what you are trying to do. – Sean Lange Mar 08 '23 at 14:13
  • I really DON'T like when people use the "doesn't work" line. Insta-downvote for that. "Doesn't work" can mean: 1. It doesn't work because the operation breaks the laws of physics or New Jersey. 2. It doesn't work because I made a typo in the command. 3. It doesn't work because I'm doing something that should work, incorrectly. 4. It legitimately doesn't work. I'm pretty sure that in general it "should" work with a DDL trigger, so you ought to demonstrate exactly what you did and why it didn't work. – siggemannen Mar 08 '23 at 14:34
  • Since you don't archive all tables, I wonder how easy it would be to make this work automatically anyway. Why not just create a separate set of deploy scripts for the archival data only? – siggemannen Mar 08 '23 at 14:40
  • If you haven't already taken it into account, I suggest looking into replication. It provides replicas of single tables and manages table structure changes automatically. It's not trivial to use, but it is a native solution. – Roberto Ferraris Mar 08 '23 at 14:58
  • @siggemannen it's being done right now with this trigger: ``` ALTER TRIGGER [trgTablechanges] ON DATABASE FOR ALTER_TABLE AS SET NOCOUNT ON DECLARE @xEvent XML DECLARE @tests nvarchar(MAX) SET @xEvent = eventdata() SET @tests = CONVERT(VARCHAR(MAX), @xEvent.query('data(/EVENT_INSTANCE/TSQLCommand/CommandText)')) EXECUTE server.dbname.dbo.sp_executesql @tests; GO ``` – mrdennis86 Mar 09 '23 at 08:40

1 Answer


You can do it by way of a two-phase commit.

For this purpose, you have to wrap each CREATE/ALTER/DROP statement in a distributed transaction and enable MSDTC. But there is no absolute guarantee that no trouble will occur.
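
A minimal sketch of that pattern, assuming the linked server naming from the question (server.dbname); the table dbo.MyArchivedTable and column NewCol are placeholders:

BEGIN DISTRIBUTED TRANSACTION;

    -- Apply the change locally and replay it on the archive server.
    -- MSDTC coordinates the commit: both succeed or both roll back.
    ALTER TABLE dbo.MyArchivedTable ADD NewCol INT NULL;
    EXECUTE server.dbname.dbo.sp_executesql
        N'ALTER TABLE dbo.MyArchivedTable ADD NewCol INT NULL;';

COMMIT TRANSACTION;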

Another way would be to use Service Broker to propagate the DDL commands from one database to the other. This way is much more stable and performant.
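
A rough sketch of the Service Broker objects involved (all names are made up for illustration; the cross-server endpoints, routes, and security setup are omitted):

-- Shared: a message type and contract for carrying DDL text
CREATE MESSAGE TYPE DdlCommand VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT DdlContract (DdlCommand SENT BY INITIATOR);

-- Archive side: a queue and a service that receive the commands
CREATE QUEUE DdlTargetQueue;
CREATE SERVICE DdlTargetService ON QUEUE DdlTargetQueue (DdlContract);

-- Production side: a queue and a service that send the commands
CREATE QUEUE DdlInitiatorQueue;
CREATE SERVICE DdlInitiatorService ON QUEUE DdlInitiatorQueue;

-- Production side: send a captured statement (e.g. from the DDL trigger)
DECLARE @h UNIQUEIDENTIFIER;
DECLARE @payload XML = N'<ddl>ALTER TABLE dbo.MyArchivedTable ADD NewCol INT NULL;</ddl>';
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE DdlInitiatorService
    TO SERVICE 'DdlTargetService'
    ON CONTRACT DdlContract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE DdlCommand (@payload);

-- Archive side: an activation procedure (or a job) would RECEIVE from
-- DdlTargetQueue, extract the text from the XML, and execute it.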

SQLpro