
I have two SQL Server databases.

One is being used as the back-end for a Ruby-On-Rails system that we are transitioning from but is still in use because of the Ruby apps we are rewriting in ASP.NET MVC.

The databases have similar tables, but not identical, for the Users, Roles and Roles-Users tables.

I want to create some type of trigger to update the user and roles-users tables on each database when a modification is made on the other database of the same table.

I can't simply share the users table from the original database because Ruby uses a different hash function for the passwords, but I want to ensure that changes on one system are reflected on the other immediately.

I also want to avoid the obvious problem: an update on one database triggers an update on the other, which triggers an update on the first, and the process repeats until the server crashes, a deadlock occurs, or something similarly undesirable happens.

I do not want to use database replication.

Is there a somewhat simple way to do this on a transaction per transaction basis?

EDIT

The trigger would be conceptually something like this:

USE Original;
GO

CREATE TRIGGER dbo.user_update
ON dbo.user WITH EXECUTE AS [cross table user identity]
AFTER UPDATE
AS
BEGIN
   UPDATE u
   SET u.column1 = i.column1   -- etc.
   FROM Another.dbo.users AS u
   INNER JOIN inserted AS i ON u.ID = i.ID;
END

The problem I am trying to avoid is a recursive call.

Another.dbo.users will have a similar trigger in place on it because the two databases have different types of applications, Ruby-On-Rails on the one and ASP.NET MVC on the other that may be working on data that should be the same on the two databases.

Timothy Dooling
  • Have you Googled SQL Triggers? What specific problems did you come across when you started writing the code and found that it didn't work? We appreciate you laying out the requirements, but have you done anything yet? If so, please provide code and specifically what didn't work. – dfundako May 16 '16 at 17:51
  • I have been looking at using an "execute as" clause for the insert, delete or update statement embedded in the dml trigger to perform the operation on the other database and then checking whether the user executing the dml is the cross-database user and not calling the cross-database update if the user indicates that the other database is the origin of the call. – Timothy Dooling May 16 '16 at 17:59
  • Cool. Provide the code you tried and found to not work in your original post. Then tell us why it didn't satisfy your requirements specifically. – dfundako May 16 '16 at 18:02
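The EXECUTE AS idea from the comment above could be sketched roughly like this. This is only a sketch: the principal name cross_table_user, the table, and the column names are placeholders, not the actual schema. The remote trigger checks whether the current execution context is the dedicated cross-database user and returns early if so, which breaks the loop.

```sql
-- Sketch only: cross_table_user, dbo.users and column1 are placeholders.
CREATE TRIGGER dbo.users_update
ON dbo.users
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- If this trigger was fired by the other database's trigger
    -- (which runs WITH EXECUTE AS 'cross_table_user'), do nothing,
    -- so the update does not bounce back and forth between databases.
    IF USER_NAME() = N'cross_table_user'
        RETURN;

    UPDATE u
    SET u.column1 = i.column1   -- etc.
    FROM Another.dbo.users AS u
    INNER JOIN inserted AS i ON u.ID = i.ID;
END
```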

1 Answer


I would add a field to both tables if possible. When an application inserts or updates a row, it sets the 'check' field to 0. The trigger looks at this field: if it is 0, meaning the change came from an application event, the trigger fires the insert/update into the second table, but writes a 1 in the check field instead of 0.

So when the trigger fires on the second table it will skip the insert back into table one.

This will solve the recursive problem.

If for some reason you can not add the check field, you can use a separate table holding the primary key of the main table plus the check field. This needs more coding but would also work.
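The check-field approach could look roughly like this. This is a sketch under assumptions: the column name sync_flag, the users table, and column1 are placeholders for whatever the real schema uses.

```sql
-- Sketch only: sync_flag, dbo.users and column1 are assumed names.
ALTER TABLE dbo.users ADD sync_flag BIT NOT NULL DEFAULT 0;
GO

CREATE TRIGGER dbo.users_sync
ON dbo.users
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Only propagate rows written by an application (sync_flag = 0).
    -- Rows written by the other database's trigger arrive with
    -- sync_flag = 1 and are skipped, which stops the recursion.
    UPDATE u
    SET u.column1 = i.column1,   -- etc.
        u.sync_flag = 1          -- mark as trigger-generated on the far side
    FROM Another.dbo.users AS u
    INNER JOIN inserted AS i ON u.ID = i.ID
    WHERE i.sync_flag = 0;
END
```

The mirror-image trigger on Another.dbo.users uses the same WHERE i.sync_flag = 0 filter, so the propagated write (which carries sync_flag = 1) is never sent back.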

Joe C
  • The intermediate table sounds like a good option. I can write directly to it and use its trigger to write to the second database. I don't think the first table can be modified to add fields because we can't recompile much of the Ruby code because it is a "legacy" version and the current compiler won't work on it. – Timothy Dooling May 16 '16 at 19:54
  • It would be nice if anybody knew how to access the legacy Ruby code that writes password hashes from ASP.NET so I could just run the submitted password through the Ruby hasher to check against the contents of the old database (that is still in use). As yet, I have not seen a viable solution to that one. – Timothy Dooling May 16 '16 at 20:02
  • That would have been my first option personally. Dig into the legacy code so I could reproduce that until the system is retired. I would build an update-able view in the new DB that directly references the legacy DB table. That way you have only one actual data set persisted and the new applications could be built using a totally new model. New fields would be in a stub table in the new DB. Then when the legacy is ready to retire I would change the new systems stub table to have the complete structure minus legacy fields that are no longer needed and port the data from the legacy table – Joe C May 16 '16 at 21:56
  • When I refer to the legacy code, I am referring to the Ruby core code that generates the RSA hash. If a code library could be linked into the database via a CLR/UDF as you can do with C# and .NET then I would be able to do a comparison of the hash to check the password at validation time. Since I can't create a linked-library function that checks the code against the Ruby hash of the password, I can't really do this. – Timothy Dooling May 17 '16 at 15:33
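The updatable-view idea from Joe C's comment could be sketched like this. Again a sketch only: the database names NewDb and LegacyDb and the column list are assumptions. A view over a single cross-database table is updatable, so inserts and updates through the view land in the legacy table and only one data set is persisted.

```sql
-- Sketch only: NewDb, LegacyDb and the column names are assumed.
USE NewDb;
GO

-- Expose the legacy users table in the new database through a view,
-- so the new applications read and write the single persisted data set.
CREATE VIEW dbo.users
AS
SELECT ID, UserName, PasswordHash
FROM LegacyDb.dbo.users;
GO

-- Fields needed only by the new applications would live in a stub
-- table in NewDb, joined to the view on ID, until the legacy system
-- is retired and the data can be ported over.
```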