
The workitem_routing_stats table has around 1,000,000 records. All records are accessed, which is why we are using the full-scan hint. The query takes around 25 seconds to execute. Is there any way to tune it?

SELECT /*+ full(wrs) */
       wrs.node_id,
       wrs.bb_id,
       SUM(CASE WHEN wrs.start_ts >= (SYSTIMESTAMP - NUMTODSINTERVAL(7, 'day'))
                 AND wrs.end_ts <= SYSTIMESTAMP
                THEN wrs.workitem_count END) outliers_last_sevend,
       SUM(CASE WHEN wrs.start_ts >= (SYSTIMESTAMP - NUMTODSINTERVAL(30, 'day'))
                 AND wrs.end_ts <= SYSTIMESTAMP
                THEN wrs.workitem_count END) outliers_last_thirtyd,
       SUM(CASE WHEN wrs.start_ts >= (SYSTIMESTAMP - NUMTODSINTERVAL(90, 'day'))
                 AND wrs.end_ts <= SYSTIMESTAMP
                THEN wrs.workitem_count END) outliers_last_ninetyd,
       SUM(wrs.workitem_count) outliers_year
  FROM workitem_routing_stats wrs
 WHERE wrs.start_ts BETWEEN (SYSTIMESTAMP - NUMTODSINTERVAL(365, 'day')) AND SYSTIMESTAMP
   AND wrs.end_ts   BETWEEN (SYSTIMESTAMP - NUMTODSINTERVAL(365, 'day')) AND SYSTIMESTAMP
 GROUP BY wrs.node_id, wrs.bb_id;
Bhagwat

3 Answers

1

You may range partition the table monthly on the START_TS column (the scan will then touch only the partitions covering the year you are interested in).

Secondly (not a very intelligent solution), you may add a parallel(wrs 4) hint if your storage is powerful.

You can combine these two things.
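A minimal sketch of the partitioning idea, using the column names from the question (the actual column types, partition boundaries, and tablespaces are assumptions; with explicit monthly range partitions you would also need to add new partitions over time):

```sql
-- Hypothetical DDL sketch: monthly range partitions on START_TS.
CREATE TABLE workitem_routing_stats (
  node_id        NUMBER,
  bb_id          NUMBER,
  start_ts       TIMESTAMP,
  end_ts         TIMESTAMP,
  workitem_count NUMBER
)
PARTITION BY RANGE (start_ts) (
  PARTITION p_2012_01 VALUES LESS THAN (TIMESTAMP '2012-02-01 00:00:00'),
  PARTITION p_2012_02 VALUES LESS THAN (TIMESTAMP '2012-03-01 00:00:00'),
  -- ... one partition per month ...
  PARTITION p_max     VALUES LESS THAN (MAXVALUE)
);
```

With the one-year predicate on START_TS, the optimizer can prune partitions older than a year instead of scanning the whole table, and the full/parallel hints can be combined on top of that, e.g. `/*+ full(wrs) parallel(wrs 4) */`.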

Florin Ghita
0

A full scan is going to be painful in any case...

However, you may avoid some computation if you simply put in plain numbers instead of calling the conversion functions:

(SYSTIMESTAMP-numtodsinterval(365,'day')) 

should just be the same as

(SYSTIMESTAMP-365) 

This should remove the overhead of calling the function and of parsing the parameter string ('day').

Randy
  • I'd expect `numtodsinterval` to be deterministic (though I can't see it explicitly stated in the docs), and for each `systimestamp - x` to only be calculated once, not once per row. Seems to make no difference for a simple test case anyway; are you sure it will here? – Alex Poole Sep 05 '12 at 13:43
0

One other possibility: it seems that this data gains new timestamps as of today, while the rest is just history...

If this is the case, then you could add a summary table to hold the summarized historic information, query the current table only for the recent stuff, and UNION to the summary table for the older stuff.

You will then need to think through the job or other scheduled process that keeps the summaries populated, but it would save you a ton of time in this query.
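A rough sketch of this idea; the summary table name, the choice of one row per node/bb/day, and the use of TRUNC(SYSDATE) as the history cutoff are all assumptions, not something from the question:

```sql
-- Hypothetical summary table: one pre-aggregated row per node/bb/day,
-- populated by a scheduled job for everything before today.
CREATE TABLE workitem_routing_stats_daily AS
SELECT node_id,
       bb_id,
       TRUNC(start_ts)     AS stat_day,
       SUM(workitem_count) AS workitem_count
  FROM workitem_routing_stats
 WHERE start_ts < TRUNC(SYSDATE)
 GROUP BY node_id, bb_id, TRUNC(start_ts);

-- Query only today's rows from the live table and UNION ALL
-- the pre-aggregated history, then aggregate the combined set.
SELECT node_id, bb_id, SUM(workitem_count) AS outliers_year
  FROM (SELECT node_id, bb_id, workitem_count
          FROM workitem_routing_stats
         WHERE start_ts >= TRUNC(SYSDATE)
        UNION ALL
        SELECT node_id, bb_id, workitem_count
          FROM workitem_routing_stats_daily
         WHERE stat_day >= TRUNC(SYSDATE) - 365)
 GROUP BY node_id, bb_id;
```

The 7/30/90-day buckets from the original query could be computed the same way, with the CASE expressions driven by stat_day for the historic part.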

Randy
  • The table I have mentioned here is populated by a job only on a daily basis. – Bhagwat Sep 05 '12 at 15:32
  • Yes - my suggestion would further summarize all 1 million rows down to one per day (365) - that's a lot faster to look through :) – Randy Sep 05 '12 at 17:42