
I have a requirement where I have to create replication between two tables that have different names and different column names. Is it possible to create such replication?

Server A                                            Server B
--------                                            --------
Table:   Test                                       Table:   SUBS
Columns: A, B, C                                    Columns: D, E, F, G, H

I want to configure replication so that column A's data is replicated to column D, column B's data to column E, and column C's data to column F.

  • possible duplicate of [How to replicate two tables with different structures but the same fields?](http://stackoverflow.com/questions/5699783/how-to-replicate-two-tables-with-different-structures-but-the-same-fields) – Ed Harper Apr 18 '11 at 08:59
  • However, what's the solution? – Raymond Morphy Apr 18 '11 at 09:25

1 Answer


Apparently, the answer is: "When you define the article, you'll have to set the @vertical_partition parameter to true and then add the columns that you want with sp_articlecolumn."
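For reference, a minimal sketch of what that might look like at the publisher, assuming a transactional publication named `TestPub` already exists (the publication name is made up). Note that vertical partitioning only filters which columns are published; mapping them onto differently named subscriber columns would likely still require customizing the subscriber-side insert/update/delete procedures.

-- Sketch only: run at the publisher (server A). 'TestPub' is a hypothetical publication.
EXEC sp_addarticle
   @publication        = N'TestPub',
   @article            = N'Test',
   @source_owner       = N'dbo',
   @source_object      = N'Test',
   @destination_table  = N'SUBS',      -- differently named table at the subscriber
   @vertical_partition = N'true';      -- clear the column list (apart from key columns)

-- Add only the columns you want replicated.
EXEC sp_articlecolumn @publication = N'TestPub', @article = N'Test', @column = N'A', @operation = N'add';
EXEC sp_articlecolumn @publication = N'TestPub', @article = N'Test', @column = N'B', @operation = N'add';
EXEC sp_articlecolumn @publication = N'TestPub', @article = N'Test', @column = N'C', @operation = N'add';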

However, I have to ask why you're doing this. Replication in my mind isn't a general tool for moving data around between unlike databases but for keeping two identical databases in sync.

Other brainstorm ideas:

  • You could create a new source table that matches the destination table, and use a trigger to keep that shadow table synchronized with the real source table (a rough sketch follows this list).
  • Push the source table intact to the destination and do the MERGE in the destination database.
  • Replication may not really be the right solution, here. What are the business rules & requirements that are calling for this to be done? Have you considered using SSIS?
  • If there IS a need for the two tables to be in exact synchronization all the time, then what is the channel of changes to the source table--an application? It almost sounds like your application needs a new level of abstraction, a data write layer that knows how to write to two sources at the same time.
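For the first idea, a rough sketch of what the shadow table and trigger might look like, assuming column A is the key of Test and using made-up names and types:

-- Hypothetical shadow table on server A, shaped like the destination's mapped columns.
CREATE TABLE dbo.Test_Shadow (D int PRIMARY KEY, E varchar(50), F varchar(50));
GO

CREATE TRIGGER dbo.trg_Test_Sync ON dbo.Test
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
   SET NOCOUNT ON;

   -- Remove shadow rows whose source rows were deleted (or whose key changed)
   DELETE s
   FROM dbo.Test_Shadow s
   JOIN deleted d ON s.D = d.A
   WHERE NOT EXISTS (SELECT * FROM inserted i WHERE i.A = d.A);

   -- Refresh shadow rows for keys that still exist
   UPDATE s
   SET s.E = i.B, s.F = i.C
   FROM dbo.Test_Shadow s
   JOIN inserted i ON s.D = i.A;

   -- Add shadow rows for new keys
   INSERT dbo.Test_Shadow (D, E, F)
   SELECT i.A, i.B, i.C
   FROM inserted i
   WHERE NOT EXISTS (SELECT * FROM dbo.Test_Shadow s WHERE s.D = i.A);
END;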

Trying to keep data synchronized between two different databases can be a problem. There can be all sorts of subtle problems with race conditions, lack of distributed transactions (affecting consistency and response to failures), problems with the workarounds created to deal with not having distributed transactions, and so on and so forth. Can you instead create a linked server and some views that actually make the data in one database real-time accessed from the other?
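For example, a view on one server could present the other server's table live under the destination column names (the linked server and database names below are made up):

-- Run on server B; 'ServerA' is a linked server and 'SourceDb' the database that holds Test.
CREATE VIEW dbo.SUBS_Live
AS
SELECT t.A AS D,
       t.B AS E,
       t.C AS F
FROM ServerA.SourceDb.dbo.Test AS t;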

Please tell us more about your requirements and why you need to do this.

Update

If you're going the manual-update route, note that you can't apply a time period's insert, update, and delete operations en masse; you have to apply them one at a time, in order. If you're instead working with current state rather than intermediate data operations, then you can do all rows at once. I'll show the MERGE-style example, not the history-playback one.

BEGIN TRAN;

-- Remove destination rows that no longer exist in the source.
-- 'LinkedServer.RemoteDb.dbo.Dest' is a placeholder four-part name
-- (linked server, database, schema, table).
DELETE D
FROM LinkedServer.RemoteDb.dbo.Dest D WITH (TABLOCKX, HOLDLOCK)
WHERE
   NOT EXISTS (
      SELECT *
      FROM Source S
      WHERE D.[Key] = S.[Key]
   );

-- Update destination rows whose values differ from the source.
UPDATE D
SET
   D.Col1 = S.Col4,
   D.Col2 = S.Col5,
   D.Col3 = S.Col6,
   D.Col4 = S.Col7
FROM
   LinkedServer.RemoteDb.dbo.Dest D
   INNER JOIN Source S ON D.[Key] = S.[Key]
WHERE
   EXISTS (
      SELECT D.Col1, D.Col2, D.Col3, D.Col4
      EXCEPT
      SELECT S.Col4, S.Col5, S.Col6, S.Col7
   ); -- EXCEPT treats NULLs as equal, which handles comparison of nullable columns

-- Insert source rows that don't exist in the destination yet.
INSERT LinkedServer.RemoteDb.dbo.Dest ([Key], Col1, Col2, Col3, Col4)
SELECT S.[Key], S.Col4, S.Col5, S.Col6, S.Col7
FROM Source S WITH (TABLOCK, HOLDLOCK)
WHERE
   NOT EXISTS (
      SELECT *
      FROM LinkedServer.RemoteDb.dbo.Dest D
      WHERE S.[Key] = D.[Key]
   );

COMMIT TRAN;

You may find it better to push the whole table and do the merge operation on the destination server.

The lock hints I put in on the first query are important if you're going to have a consistent point-in-time snapshot. If you don't care about that, then take the locking hints out.

If you find that updates across the linked server are slow, then push the entire table in one piece to a temporary staging table on the remote server, and do the MERGE in a script entirely on the remote server.
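For illustration, once the source rows are in a staging table on the destination server, the remote-side script could be a single MERGE along these lines (the staging table name, and the assumption that A/D is the matching key, are mine):

-- Runs entirely on server B; dbo.Test_Staging is a hypothetical copy of Test pushed from server A.
MERGE dbo.SUBS AS tgt
USING dbo.Test_Staging AS src
   ON tgt.D = src.A
WHEN MATCHED AND EXISTS (SELECT tgt.E, tgt.F EXCEPT SELECT src.B, src.C) THEN
   UPDATE SET tgt.E = src.B, tgt.F = src.C
WHEN NOT MATCHED BY TARGET THEN
   INSERT (D, E, F) VALUES (src.A, src.B, src.C)
WHEN NOT MATCHED BY SOURCE THEN
   DELETE;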

ErikE
  • I just wanted to update two databases in two different locations. As you said, I created a temp table and used triggers to insert any updated or inserted records into it, then used stored procedures, scheduled jobs, and a linked server to insert or update the equivalent records in the remote database. But my problem is about distributed transactions. Will distributed transactions handle my problems with connection loss? Suppose the connection is lost while updating the remote database. Is using a distributed transaction enough? – Raymond Morphy May 22 '11 at 19:07
  • You need to look up distributed transactions. Begin a tran, do the update over the linked server, delete the local row, then commit (see the sketch after these comments). If this works you're golden. If it complains about distributed transactions then you have some work to do. – ErikE May 22 '11 at 19:15
  • I read about distributed transactions, but I don't know whether I should use a `ROLLBACK` statement or whether `COMMIT` is enough for a distributed transaction. Could you give me an example? – Raymond Morphy May 22 '11 at 19:38
  • Thanks @Erik for your answer. I have written a job with three steps and scheduled it to repeat every 5 minutes: the first step for inserting, the second for updating, and the third for deleting. Each step runs only if the previous one completes successfully. Am I doing this right? – Raymond Morphy May 22 '11 at 20:35
  • @Raymond, no, you need to put the three statements inside a single transaction. It doesn't make sense to do them separately because then your remote table will not represent a point in time. While that may not be super serious, there's no reason to not put all the updates in a single script. Also, if something like my above query worked for you, then you know that distributed transactions are working and the DTC service is running on both servers, with permissions set to properly allow them. – ErikE May 23 '11 at 16:19
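To illustrate the comments above, a distributed transaction over a linked server might look like the sketch below. The ChangeQueue table and the linked-server names are made up, and MSDTC must be running and properly configured on both servers. With SET XACT_ABORT ON, a run-time error or a dropped connection aborts and rolls back the whole transaction on both sides, so an explicit ROLLBACK isn't strictly needed in this simple form.

SET XACT_ABORT ON;   -- generally required for data changes over a linked server inside a
                     -- transaction; any run-time error rolls the whole thing back

BEGIN DISTRIBUTED TRANSACTION;

   -- Apply the queued changes to the remote table over the linked server
   UPDATE R
   SET R.E = Q.B,
       R.F = Q.C
   FROM LinkedServer.RemoteDb.dbo.SUBS AS R
   INNER JOIN dbo.ChangeQueue AS Q ON R.D = Q.A;

   -- Delete the local queue rows that were just applied
   DELETE Q
   FROM dbo.ChangeQueue AS Q
   WHERE EXISTS (SELECT * FROM LinkedServer.RemoteDb.dbo.SUBS AS R WHERE R.D = Q.A);

COMMIT TRANSACTION;  -- both servers commit together, or neither does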