
I need to write a PL/SQL script to migrate 190k subscribers from one telco to another. For each subscriber, the script must update several subscriber and account tables and apply service-agreement changes that involve complex logic. Once a subscriber is migrated, I need to record a status of "subscriber successfully migrated". To achieve this, I opened a cursor over the 190k subscribers and migrated each one inside a loop. During performance testing I found that only 19k subscribers were updated in 1.5 hours. This is far too slow: my deployment window is only 3 hours. Please advise how to proceed technically in such a case.
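For reference, the current approach looks roughly like this (a simplified sketch; the table and column names here are made up):

```sql
DECLARE
  CURSOR c_subs IS
    SELECT subscriber_id, account_id
      FROM subscribers
     WHERE migration_status IS NULL;
BEGIN
  FOR r IN c_subs LOOP
    -- complex per-subscriber updates across several tables
    UPDATE subscriber_details SET telco = 'NEW' WHERE subscriber_id = r.subscriber_id;
    UPDATE accounts           SET telco = 'NEW' WHERE account_id   = r.account_id;
    -- ... service-agreement changes ...
    UPDATE subscribers
       SET migration_status = 'MIGRATED'
     WHERE subscriber_id = r.subscriber_id;
    COMMIT;  -- committing per row adds overhead
  END LOOP;
END;
/
```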

binit
  • Pre-write everything to a separate database and restore it to the target database. This removes your timing constraints and gives you testing options. – dmaij May 09 '15 at 14:37
  • @dmaij you can't pre-write (I think). A snapshot must be taken on one side at a given point in time and transferred to the new site within a 3-hour window – Saic Siquot May 09 '15 at 14:49
  • @Luis, I think so too, but looking at the question the problem is probably not in the script but in the network setup, or a combination – dmaij May 09 '15 at 15:03
  • Thanks. I guess pre-writing might be possible: save all the data in temporary tables in the same database one day before, then merge them into the actual tables during the deployment window. Some subscribers might be created or updated on that day, so we would need to capture those during the actual update. One approach I thought of is to divide the subscriber base into 10 subsets using modulo 10 on the account number, and run 10 parallel scripts with the same logic, one per subset. Please advise on the same. – binit May 09 '15 at 15:39
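The modulo-10 split mentioned in the last comment could be sketched like this (hypothetical procedure and column names; each of the 10 database sessions would be started with a different bucket number from 0 to 9):

```sql
CREATE OR REPLACE PROCEDURE migrate_bucket(p_bucket IN NUMBER) AS
  CURSOR c_subs IS
    SELECT subscriber_id, account_id
      FROM subscribers
     WHERE MOD(account_number, 10) = p_bucket
       AND migration_status IS NULL;
BEGIN
  FOR r IN c_subs LOOP
    -- same per-subscriber migration logic as the single-threaded script
    migrate_one_subscriber(r.subscriber_id, r.account_id);  -- hypothetical helper
  END LOOP;
  COMMIT;
END;
/
-- Session 1: EXEC migrate_bucket(0);
-- Session 2: EXEC migrate_bucket(1);
-- ... up to bucket 9, each in its own session.
```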

1 Answer


190k records really isn't a lot for a modern database, but it's hard to give general advice because what to do depends on the particular situation.

My first guess is that, as your target database grows during the migration, you may be suffering from table stats recorded as 0 rows, resulting in full table scans (FTS) when doing UPDATEs and checking integrity and unique constraints. Try loosening the constraints on the target. Monitor the execution plans on the target, and if any show FTS or high I/O, either periodically gather stats or use the DBMS_STATS package to set the stats before the migration starts.

Divide and conquer is another general approach: parameterise your migration routine so that it operates on a range of records, then run several copies of the migration in parallel, each processing a different range.

Perhaps you can also split the migration and validation into separate processes, i.e. run the migration to copy the accounts, setting the status from "COPYING" to "COPIED", then have a (possibly parallelised) validation process that polls for "COPIED" records, sets them to "CHECKING", and finally marks them "OK" or "FAILED", or whatever.
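For example, stats can be set up front or gathered during the run with DBMS_STATS (a sketch; the schema and table names are placeholders):

```sql
-- Option 1: tell the optimizer roughly what to expect before loading
BEGIN
  DBMS_STATS.SET_TABLE_STATS(
    ownname => 'MIG',
    tabname => 'SUBSCRIBERS',
    numrows => 190000);
END;
/

-- Option 2: gather real stats periodically during the migration
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'MIG',
    tabname => 'SUBSCRIBERS',
    cascade => TRUE);  -- also gather stats on the indexes
END;
/
```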

More specific help requires more details.
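The staged copy-then-validate flow could be driven by something like this (hypothetical table, column, and function names; SKIP LOCKED lets parallel validation workers avoid contending for the same rows):

```sql
-- Validation worker: claim copied rows, check them, record the result
DECLARE
  CURSOR c_copied IS
    SELECT subscriber_id
      FROM subscribers
     WHERE migration_status = 'COPIED'
       FOR UPDATE SKIP LOCKED;
BEGIN
  FOR r IN c_copied LOOP
    UPDATE subscribers SET migration_status = 'CHECKING'
     WHERE CURRENT OF c_copied;

    IF validate_subscriber(r.subscriber_id) THEN  -- hypothetical check
      UPDATE subscribers SET migration_status = 'OK'
       WHERE subscriber_id = r.subscriber_id;
    ELSE
      UPDATE subscribers SET migration_status = 'FAILED'
       WHERE subscriber_id = r.subscriber_id;
    END IF;
  END LOOP;
  COMMIT;
END;
/
```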

TenG