
My question is the same as this question, but with a little to add to it. The users of my web app are allowed to create new versions of a record. Every new version of a record results in corresponding "new versions" being created in 50 other related tables, to record the individual changes for that particular version. Creating a new version, with about 50 tables involved, runs inside a transaction (to roll back all changes in the event of an error). This procedure is often slow, understandably, because it is a lengthy transaction with many table insertions involved.
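To make the pattern concrete, here is a minimal sketch of what that versioning transaction looks like, under assumptions of my own: a simplified schema where two stand-in tables (`detail_a`, `detail_b`) represent the ~50 related tables, and SQLite stands in for whatever database is actually in use. One transaction covers all the inserts, so any error rolls everything back:

```python
import sqlite3

# Hypothetical simplified schema: two "detail" tables stand in for the
# ~50 related tables mentioned above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE record_version (record_id INTEGER, version INTEGER);
    CREATE TABLE detail_a (record_id INTEGER, version INTEGER, payload TEXT);
    CREATE TABLE detail_b (record_id INTEGER, version INTEGER, payload TEXT);
""")
conn.execute("INSERT INTO record_version VALUES (1, 1)")
conn.execute("INSERT INTO detail_a VALUES (1, 1, 'a-data')")
conn.execute("INSERT INTO detail_b VALUES (1, 1, 'b-data')")

RELATED_TABLES = ("detail_a", "detail_b")  # the real app has ~50 of these

def create_new_version(conn, record_id, new_version):
    """Copy the previous version's rows into the new version, atomically."""
    with conn:  # one transaction: commits on success, rolls back on any error
        conn.execute("INSERT INTO record_version VALUES (?, ?)",
                     (record_id, new_version))
        for table in RELATED_TABLES:
            conn.execute(
                f"INSERT INTO {table} (record_id, version, payload) "
                f"SELECT record_id, ?, payload FROM {table} "
                f"WHERE record_id = ? AND version = ?",
                (new_version, record_id, new_version - 1),
            )

create_new_version(conn, 1, 2)
```

The longer that `for` loop runs (50 tables, possibly many rows each), the longer the transaction holds its locks, which is exactly the blocking problem described below.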

I am looking at a better solution/design to implement such a scenario.

  1. Is there a better way to maintain "versions" of the same record, particularly when doing so creates so many duplicates across multiple tables?
  2. I don't feel the design itself is good, with so many records inserted for every row version, but I would at least like to address the immediate problem: the lengthy transaction, which causes delays at times. There may be no way around it, but I still wanted to ask: if I don't run the versioning inside a transaction, is there a better way to roll back in the event of an error? (The transaction appears to block other OLTP queries, because it inserts new versions into all the primary tables.)

The versioning query currently runs for about 10 seconds, but gets worse at times. Any thoughts are appreciated.

Lalman

1 Answer


Do you have to return the "created" or "failed" message in real time? The following might be overkill for your situation, but it does give you a scalable solution.

The web server could post a message (to a queue of some sort) requesting the action. At this point the user could continue using the site to do other things. A Windows service in the background can process the message (outside the context of the website) and then notify the user, either by a message inside the website (similar to Stack Overflow notifications) or by email, that the task has either run or failed.
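The decoupled flow above can be sketched in a few lines of Python, as an illustration rather than an implementation (the names `requests`, `notifications`, and `worker` are mine, and an in-process `queue.Queue` plus a thread stands in for a real message queue and Windows service):

```python
import queue
import threading

requests = queue.Queue()
notifications = []            # stands in for on-site/email notifications

def worker():
    """Background consumer: plays the role of the Windows service."""
    while True:
        job = requests.get()
        if job is None:       # sentinel value shuts the worker down
            break
        try:
            # ... run the slow 50-table versioning transaction here ...
            notifications.append((job["user"], "version created"))
        except Exception:
            notifications.append((job["user"], "versioning failed"))
        finally:
            requests.task_done()

threading.Thread(target=worker, daemon=True).start()

# The web request handler just enqueues the job and returns immediately;
# the user can keep browsing while the worker does the heavy lifting.
requests.put({"user": "lalman", "record_id": 42})
requests.join()               # only this demo waits; the web tier would not
```

The point is that the web request returns as soon as `put()` completes; the long transaction no longer runs inside the request/response cycle.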

If you can get away with doing the processing decoupled, in near real time, then you can modify your Windows service to scale. You can have a thread pool to manage requests - so maybe you only have 5 threads running at any one time, to limit load. If you run into more performance problems you can scale out and develop a system that has 2 or more processors consuming the queue (though that adds its own problems and complexity).
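The thread-pool idea can be sketched with Python's standard `ThreadPoolExecutor` - again just an illustration, with a made-up `process_versioning` function standing in for the real versioning work:

```python
from concurrent.futures import ThreadPoolExecutor

def process_versioning(record_id):
    # ... run the versioning transaction for one record ...
    return f"record {record_id}: done"

# Cap concurrency at 5 workers so the queue consumer cannot overload the
# database, no matter how many requests pile up.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(process_versioning, range(20)))
```

All 20 jobs complete, but never more than 5 run against the database at once.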

Paul Hadfield
    Thanks Paul. But that is exactly my problem - the general use case is that as soon as the user is allowed to create a "new version" of the record, he goes and updates information related to that particular version. Hence, the new versions must already exist in all those related tables for the user to update them. It has to happen in real time. – Lalman Mar 21 '12 at 09:34