i'm not sure there even exists a solution with automatic chained/multi-level transaction handling that works reliably (or doesn't need a lot of resources on the database side)
if the second step fails, then the first step should be rolled back
well you could combine both steps into one:
- read from first table A
- use processor to update table A
- use processor to read from table B
- use writer to update table B
the performance will suffer a lot, because the read on table B becomes a single read per item instead of the cursor-based read used for table A
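the combined variant could be sketched like this (language-agnostic sketch in python/sqlite; the table and column names `table_a`, `table_b`, `status`, `ref_id`, `hits` are invented for illustration, and the chunk logic is a stand-in for whatever batch framework you use):

```python
import sqlite3

def process_chunk(conn: sqlite3.Connection, chunk_size: int = 10) -> None:
    """process one chunk: update table A and table B in a single transaction"""
    cur = conn.cursor()
    # cursor-based read from table A (cheap, streams rows)
    rows = cur.execute(
        "SELECT id, ref_id FROM table_a WHERE status = 'NEW' LIMIT ?",
        (chunk_size,),
    ).fetchall()
    try:
        for a_id, ref_id in rows:
            # "processor" part: update table A
            cur.execute("UPDATE table_a SET status = 'DONE' WHERE id = ?", (a_id,))
            # single read on table B per item -- this is the expensive part
            (b_id,) = cur.execute(
                "SELECT id FROM table_b WHERE id = ?", (ref_id,)
            ).fetchone()
            # "writer" part: update table B
            cur.execute("UPDATE table_b SET hits = hits + 1 WHERE id = ?", (b_id,))
        conn.commit()    # both updates commit together...
    except Exception:
        conn.rollback()  # ...or neither does
        raise
```

the point is that both updates live in one transaction, so a failure while touching table B rolls back the table A change as well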
i would go with a compensating strategy like this:
- (optional) work on temporary tables instead of the real "production" tables; decoupling the datastores from production makes a compensating strategy easier to implement
- a failed step 1 triggers another step or another job/script
- this step/job/script deletes whatever step 1 left behind (individual rows or the complete table)
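the compensating flow could look roughly like this (again a python/sqlite sketch; `staging_a`, `job_id`, `payload` and the helper names are all invented for the example — the idea is just that step 1 tags its rows so the cleanup step can find exactly what it wrote):

```python
import sqlite3

def run_step1(conn: sqlite3.Connection, job_id: int, fail: bool = False) -> None:
    """step 1 writes into a staging table (stand-in for a temporary table);
    the job_id column lets the compensating step find its own rows"""
    conn.execute(
        "INSERT INTO staging_a (job_id, payload) VALUES (?, ?)", (job_id, "data")
    )
    conn.commit()  # already committed by the time a later failure happens
    if fail:
        raise RuntimeError("step 1 failed after committing some rows")

def compensate(conn: sqlite3.Connection, job_id: int) -> None:
    """compensating step/job/script: delete whatever step 1 left behind"""
    conn.execute("DELETE FROM staging_a WHERE job_id = ?", (job_id,))
    conn.commit()

def run_job(conn: sqlite3.Connection, job_id: int, fail: bool = False) -> None:
    try:
        run_step1(conn, job_id, fail=fail)
    except Exception:
        compensate(conn, job_id)  # triggered by the failed step, not a rollback
        raise
```

since the rows are already committed when the failure occurs, there is nothing to roll back — the cleanup is an ordinary delete, which is why it can run as a separate step, job or even an external script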