" I need to delete the transactions using the following if it failed
during my process and i used to load again ."
It seems your actual problem is the load process. You should define the entire load as one Unit of Work, i.e. with a single commit at the end of the 2,000,000 rows. Then you have nothing to delete in the event of a failure.
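For illustration, here is a minimal sketch of that single unit of work, assuming a hypothetical staging table STG_TRANSACTIONS feeding a target table TRANSACTIONS (the table and column names are made up):

    BEGIN
        -- load all 2,000,000 rows in one statement ...
        INSERT INTO transactions (year, month, txn_id, amount)
        SELECT year, month, txn_id, amount
        FROM   stg_transactions;

        -- ... and commit exactly once, at the very end
        COMMIT;
    EXCEPTION
        WHEN OTHERS THEN
            ROLLBACK;   -- a failure persists nothing, so there is nothing to delete
            RAISE;
    END;
    /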
At the moment you obviously have intermediate commits. There may be any one of several reasons why you think you need them, but they are all bogus. You simply need to get your DBA to size the UNDO tablespace appropriately for your data volumes. Alternatively, rewrite your load process so you can identify which rows have already been loaded (i.e. committed); in the event of a failure you can then just load the remainder, as in the sketch below.
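A sketch of that restartable alternative, again with made-up table and column names and assuming TXN_ID identifies a row: rows already committed to the target are skipped, so a failed run can simply be re-run.

    INSERT INTO transactions (year, month, txn_id, amount)
    SELECT s.year, s.month, s.txn_id, s.amount
    FROM   stg_transactions s
    WHERE  NOT EXISTS (SELECT NULL
                       FROM   transactions t
                       WHERE  t.txn_id = s.txn_id);
    COMMIT;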
As far as speeding up the deletion goes, you don't have many options. FORALL processing won't be faster: PL/SQL incurs its own overhead, so it can't be cheaper than a straight DELETE. And unless the table holds a decade's worth of data, an index on (YEAR, MONTH) won't be much use either.
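For reference, the straight DELETE this is comparing against, with an assumed table name and the month to delete passed as bind variables:

    DELETE FROM transactions
    WHERE  year  = :p_year
    AND    month = :p_month;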
If you have Enterprise Edition and lots of CPUs you can try using PARALLEL DML to execute the deletion in a shorter elapsed time.
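A sketch of that, assuming the same table; the degree of parallelism shown (8) is purely illustrative and needs to suit your CPU count:

    ALTER SESSION ENABLE PARALLEL DML;

    DELETE /*+ PARALLEL(transactions, 8) */ FROM transactions
    WHERE  year  = :p_year
    AND    month = :p_month;

    COMMIT;   -- parallel DML must be committed (or rolled back) before the session touches the table again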
But, as with any tuning problem, you need to understand your statement's current execution path and that means starting with an Explain Plan. They're not just for SELECTs, you know.
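For example, assuming the same DELETE as above:

    EXPLAIN PLAN FOR
    DELETE FROM transactions
    WHERE  year  = :p_year
    AND    month = :p_month;

    SELECT * FROM TABLE(dbms_xplan.display);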