
what's good for Oracle DBMS_Scheduler?

  1. Keep the job scheduled but disabled all the time, then enable it and run it when needed.
  2. Create the job, run it, and drop it.

I have a table x, and whenever a record gets submitted to that table I should have a job to process that record. We may or may not always have record insertions. Keeping this in mind, which is better?

Prashant Mishra
  • Looks like you want to create a Scheduler Job to execute your job only once (or occasionally). That is not the purpose of a Scheduler Job; the intention is to run a job on a regular basis. – Wernfried Domscheit Sep 07 '15 at 12:21
  • This is approximately the 60,000th Oracle question :) – Florin Ghita Sep 07 '15 at 12:28
  • @Wernfried: I regularly use `dbms_scheduler` to run a job only once in the background, so that I can disconnect/close my SQL client and don't need to wait for the outcome of a lengthy operation. –  Sep 07 '15 at 13:31
  • There is a nice operator (at least on Unix/Linux) you can append to your command: `&` – Wernfried Domscheit Sep 07 '15 at 14:04
  • @Wernfried `DBMS_SCHEDULER` <> `crontab`. It is perfectly legitimate to use `DBMS_SCHEDULER` to process one-off jobs in the background, and it makes more sense to use `DBMS_SCHEDULER` to run database jobs than it does to use a background shell process. That way you keep the database activity inside the DB, where it belongs. – Sentinel Sep 07 '15 at 17:10

1 Answer


Processing rows as they appear in a table in an asynchronous process can be done in a number of different ways; choose the one that suits you:

  1. Add a trigger to the table which creates a one-off job to process the row using DBMS_JOB. This is suitable if the volume of data being inserted to the table is quite low, and you don't want your job running all the time. The advantage of DBMS_JOB is that the job will not start until the insert is committed; if it is rolled back, the job is also rolled back so doesn't run. The disadvantage is that if there is a sustained spike of activity, all the jobs created will crowd out any other jobs that are running.

  2. Create a single job using DBMS_SCHEDULER which runs regularly, polls the table for new records and processes them. This method would need a column on the table that it can update to mark each record as "processed". For example, add a VARCHAR2(1) flag which is set to 'Y' on insert and set to NULL by the job after processing. You could add an index to that flag which will only store entries for unprocessed rows (so it will be small and fast). This method is much more efficient, especially for large data volumes, because each run of the job can effectively process large chunks of data in bulk at a time.

  3. Use Oracle Advanced Queuing. http://docs.oracle.com/cd/E11882_01/server.112/e11013/aq_intro.htm#ADQUE0100

For (1), a separate job is created for each record in the table. You don't need to create the jobs. You do need to monitor them, however; if one fails, you would need to investigate and re-run manually.
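A minimal sketch of the trigger approach, assuming table `x` has a numeric `id` column and that `process_record` is a procedure you supply (both names are illustrative):

```sql
CREATE OR REPLACE TRIGGER x_after_insert
  AFTER INSERT ON x
  FOR EACH ROW
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- DBMS_JOB.SUBMIT with no interval creates a one-off job.
  -- The submission is transactional: the job only runs if the
  -- triggering insert commits, and is discarded on rollback.
  DBMS_JOB.SUBMIT(
    job  => l_job,
    what => 'process_record(' || :NEW.id || ');');
END;
/
```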

For (2), you just create one job and let it run regularly. If one record fails, it can be picked up by the next iteration of the job. I would process each record in a separate transaction so that the failure of one record doesn't affect the processing of the other records still in the queue.
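A sketch of the polling approach, again assuming a hypothetical `id` column and `process_record` procedure, plus the `processed_flag` column described above:

```sql
-- 'Y' marks unprocessed rows; NULL values are not stored in a
-- single-column B-tree index, so the index covers only the backlog.
CREATE INDEX x_unprocessed_ix ON x (processed_flag);

CREATE OR REPLACE PROCEDURE process_new_rows IS
BEGIN
  FOR r IN (SELECT id FROM x WHERE processed_flag = 'Y') LOOP
    process_record(r.id);  -- assumed worker procedure
    UPDATE x SET processed_flag = NULL WHERE id = r.id;
    COMMIT;  -- one transaction per record
  END LOOP;
END;
/

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PROCESS_X_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'PROCESS_NEW_ROWS',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
    enabled         => TRUE);
END;
/
```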

For (3), you still create a job like (2) but instead of reading the table it pulls requests off a queue.
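For completeness, a minimal queue-setup sketch for (3); all names here are illustrative, and the payload type and consumer logic depend on your data:

```sql
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'x_queue_tab',
    queue_payload_type => 'RAW');
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'x_queue',
    queue_table => 'x_queue_tab');
  DBMS_AQADM.START_QUEUE(queue_name => 'x_queue');
END;
/
```

Producers then enqueue requests with `DBMS_AQ.ENQUEUE` (for example, from a trigger on `x`), and the job dequeues them with `DBMS_AQ.DEQUEUE`.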

Jeffrey Kemp