
In this example, I am using emp_location as your sample source table in the queries, and I assume the cache files start with emp_location*.
I created two dummy sessions before your main session.
Create a run table and save the last session's run time in it.
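For reference, here is a minimal sketch of such a table. Only the column names PROCESS_NAME and LAST_READ_TIME come from the output below; the data types, sizes, and seed row are assumptions, so adjust them to your environment.
-- Hypothetical DDL; types and sizes are assumptions.
create table run_history (
    process_name   varchar2(30) primary key,
    last_read_time date
);

-- Seed one row per process, so the pre-session UPDATE below has a row to hit.
insert into run_history (process_name, last_read_time)
values ('EMP_DIM', sysdate);
commit;
You can then check the stored time: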
select *
from run_history
where process_name='EMP_DIM';
+--------------+----------------------+
| PROCESS_NAME | LAST_READ_TIME       |
+--------------+----------------------+
| EMP_DIM      | 15-MAY-2016 12:00:07 |
+--------------+----------------------+
In the pre-session task of your main session, use something like this:
update run_history
set last_read_time = sysdate
where process_name='EMP_DIM';
Now find the time your table was last updated. If the table doesn't have an update-time column, use this to get the latest update time of the table:
select scn_to_timestamp(max(ora_rowscn))
from emp_location;
Now, in the first dummy session, use the query below. It will return 1 row if something was updated in the source table after last_read_time; otherwise it will return 0 rows.
select 1 from dual
where
    (select scn_to_timestamp(max(ora_rowscn))
     from emp_location)
    >
    (select last_read_time
     from run_history
     where process_name='EMP_DIM');
On the link after this session, put the condition $s_check_last_update.TgtSuccessRows=1 (the dummy session loads that one row into a dummy target, which is what TgtSuccessRows counts).
So the next session will run only when there was a real change. In its post-session task, run a command to clear the cache files.
Windows:
del $PMCacheDir\emp_location*
Unix:
rm -rf $PMCacheDir/emp_location*
Draw a second link from the check session directly to the main session, with a condition like IIF($s_check_last_update.TgtSuccessRows=1,0,1), so the main session still runs when nothing changed.
In the main session, open the General tab and set Treat input links as OR.
Now the Integration Service will recreate fresh cache files whenever they have been deleted.
Alternatively, you can achieve the same thing with a shell/bash script that connects to Oracle, checks whether the table was updated after the last read time, and, if so, removes the cache files.
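Here is a rough sketch of that script. The credentials, TNS alias, and cache path are placeholders, and it assumes sqlplus is on the PATH; the query is the same one used in the check session above, except it uses count(*) so the script always gets back a 0 or 1.
#!/bin/bash
# Rough sketch only -- the connection string and cache directory below
# are assumptions; replace them with your real values.
ORA_CONN="scott/tiger@ORCL"
CACHE_DIR="/infa_shared/Cache"   # wherever $PMCacheDir points

# Ask Oracle whether emp_location changed after the stored last_read_time.
CHANGED=$(sqlplus -s "$ORA_CONN" <<'EOF' | tr -d '[:space:]'
set heading off feedback off pagesize 0
select count(*) from dual
where (select scn_to_timestamp(max(ora_rowscn)) from emp_location)
    > (select last_read_time from run_history where process_name='EMP_DIM');
EOF
)

# If a change was detected, remove the cache files so the Integration
# Service rebuilds them on the next run of the main session.
if [ "$CHANGED" = "1" ]; then
    rm -f "$CACHE_DIR"/emp_location*
fi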