Check which of the two operations - deleting or inserting - is slower and start from there. A large DELETE generates a lot of undo and redo and can be slow, especially when issued in a single thread. Also, an UPDATE should in theory be faster than a DELETE followed by an INSERT.
UPDATE speed can be heavily affected by table compression and the PCTFREE physical attribute, so check them. It is affected even more by parallelism: running the update in parallel can drastically improve performance.
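As a quick check, you can read both settings from the data dictionary (a sketch - BIG_TABLE is just a placeholder name):

-- compression and pctfree at the table level
select table_name, compression, compress_for, pct_free
from user_tables
where table_name = 'BIG_TABLE';

-- for a partitioned table the settings live on the partitions
select partition_name, compression, pct_free
from user_tab_partitions
where table_name = 'BIG_TABLE';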
To execute parallel DML operations, in most cases you will need to enable them at the session level by issuing:
alter session enable parallel dml;
I'd suggest that you try the following:
- Check the table's compression and PCTFREE settings (the dictionary query above shows both)
- Enable parallel DML at the session level before running the statements
- Try the UPDATE in parallel - again, issue the 'alter session...' command first and add a PARALLEL hint to the statement (see the first sketch after this list)
- Try a MERGE statement if the table has a primary key or another unique join key - in some cases it can be faster than a plain UPDATE (second sketch below)
- If the data that needs to be replaced fills entire partitions or subpartitions, you can TRUNCATE them instead of running a DELETE. TRUNCATE is a very fast DDL operation that empties a specific partition or subpartition (third sketch below)
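A minimal sketch of the parallel update path (big_table, some_col, load_date, and the degree of 8 are placeholders - adjust to your schema and hardware):

alter session enable parallel dml;

update /*+ parallel(t, 8) */ big_table t
set some_col = some_col * 2
where load_date < to_date('2015-01-01', 'YYYY-MM-DD');

commit;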
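A sketch of the MERGE variant, assuming the fresh rows sit in a staging table (new_data here is hypothetical) and join to the target on its primary key:

alter session enable parallel dml;

merge /*+ parallel(t, 8) */ into big_table t
using new_data s
on (t.id = s.id)            -- join on the primary key
when matched then
  update set t.some_col = s.some_col;

commit;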
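Truncating a partition is a one-liner (the partition and subpartition names are placeholders):

alter table big_table truncate partition p_2015_01 update indexes;
-- or, for a single subpartition:
alter table big_table truncate subpartition p_2015_01_sp1;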
If all else fails, you can create a new table with the same definition, load the remaining data from the main table plus the new data into it (again in parallel and with an APPEND hint), then drop the old table and rename the new one. Be sure to re-create grants and other privileges, as well as indexes and constraints, on the new table. Shifting 60GB of data is a cumbersome operation and needs a lot of temporary space, but at least you avoid the DELETE entirely. On reasonable hardware it should take less than 4 hours. For the main bulk of the data you can use a CTAS (CREATE TABLE ... AS SELECT) statement:
create table <new_table_name> <partitioning_clause> parallel <degree> as select * from <old_table_name> where <filter>;
Beware of library cache locks when issuing long-running CTAS statements.
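Putting the whole swap together, here is a sketch only - the names, the filter date, the interval partitioning, the degree of parallelism, and the grant are all assumptions you would replace with your table's actual definition:

-- 1. copy the rows you want to keep (direct-path, no redo, in parallel)
create table big_table_new
nologging
partition by range (load_date)
interval (numtoyminterval(1, 'MONTH'))
(partition p0 values less than (to_date('2015-01-01', 'YYYY-MM-DD')))
parallel 8
as
select * from big_table
where load_date < to_date('2015-01-01', 'YYYY-MM-DD');

-- 2. load the new data with a direct-path parallel insert
alter session enable parallel dml;
insert /*+ append parallel(t, 8) */ into big_table_new t
select * from new_data;
commit;

-- 3. swap the tables, then re-create grants, indexes, and constraints
drop table big_table;
rename big_table_new to big_table;
grant select on big_table to some_role;   -- some_role is a placeholder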