0

I am working with a Hibernate StatelessSession for a batch job in my Play Framework 1.2.4 project.

Inserting and updating rows works fine, but I don't know what to do when an exception occurs. Here's my code:

try {
    statelessSession.insert(someObject);
} catch (ConstraintViolationException e) {  // it happens from time to time, don't ask me why..
    transaction.rollback();  // <-- THAT'S MY CONCERN
} finally {
    transaction.commit();
}

What I need to know is this: I am committing data every 100 inserts. If a ConstraintViolationException happens on, say, the 56th record and the transaction rolls back, will I lose the other 55 records too?

If yes, what should I do in the ConstraintViolationException handler? Or should I commit after every single record to avoid this?

dreampowder
  • 1,644
  • 3
  • 25
  • 41
  • 1
    I think you should ask this question to your customer, not here.. but if you roll back at the 56th record without committing, **you will lose** the previous changes up to the last commit. – mericano1 Aug 30 '12 at 15:36
  • Thanks, I wasn't sure whether it rolls back just the corresponding record or the whole batch. – dreampowder Aug 31 '12 at 08:45
  • I also have another question: when I get the exception, I think the object is still kept in memory, because my memory usage gets higher and higher until it runs out of memory. How can I clear that data from memory when an exception occurs? – dreampowder Aug 31 '12 at 08:46

3 Answers

1

If you roll back, you will lose all previous records in the transaction as well. If you only want to lose the records that cause constraint exceptions, you can hold the records of each batch in a list, switch to committing one by one when the batch bombs, and keep on with the batches afterwards.
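This fallback pattern can be sketched in plain Java, with `Consumer` callbacks standing in for "insert and commit" (they are hypothetical stand-ins, not the Hibernate StatelessSession API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Try the whole batch in one transaction; if it fails, retry the same
// records one by one so only the offending records are lost.
class BatchFallback {

    /** Returns the records that could not be inserted even individually. */
    static <T> List<T> insertWithFallback(List<T> batch,
                                          Consumer<List<T>> insertAll,
                                          Consumer<T> insertOne) {
        List<T> failed = new ArrayList<>();
        try {
            insertAll.accept(batch);            // fast path: one transaction for the whole batch
        } catch (RuntimeException batchError) { // e.g. a ConstraintViolationException
            for (T record : batch) {            // slow path: one transaction per record
                try {
                    insertOne.accept(record);
                } catch (RuntimeException recordError) {
                    failed.add(record);         // only the offending record is skipped
                }
            }
        }
        return failed;
    }
}
```

In the real job, `insertAll` would insert all records and commit once, and `insertOne` would insert and commit a single record.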

Firo
  • 30,626
  • 4
  • 55
  • 94
1

In this type of use case, you have a master job that cuts all your data into pieces of 100 objects and launches a sub-job for each piece.

The best thing to do in this case, in my opinion, is to let the exception propagate. The master job catches it, and all 100 objects of that batch are rolled back. The master job can then switch to another mode for these objects and relaunch the sub-job once per object. That way, only the object that throws the exception won't be saved.

This is typical batch handling. If everything is OK, your batch is fast because you commit every 100 objects, but in case of an error you fall back to single-object commits, so you only skip the objects that fail.
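The master-job/sub-job structure could look roughly like this toy version, where the `Consumer` callbacks are hypothetical stand-ins for a sub-job that inserts and commits (not Play or Hibernate APIs):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// The master cuts the data into chunks of batchSize and runs a sub-job per
// chunk; a chunk that throws is re-run one object at a time, so only the
// objects that fail individually are rejected.
class MasterJob {

    static <T> List<T> run(List<T> data, int batchSize,
                           Consumer<List<T>> subJob, Consumer<T> singleJob) {
        List<T> rejected = new ArrayList<>();
        for (int i = 0; i < data.size(); i += batchSize) {
            List<T> chunk = data.subList(i, Math.min(i + batchSize, data.size()));
            try {
                subJob.accept(chunk);            // commit every batchSize objects
            } catch (RuntimeException e) {       // whole chunk rolled back
                for (T item : chunk) {
                    try {
                        singleJob.accept(item);  // retry in single-object mode
                    } catch (RuntimeException ex) {
                        rejected.add(item);      // only the failing object is not saved
                    }
                }
            }
        }
        return rejected;
    }
}
```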

But as mericano1 said, the correct behavior in your case is a matter of business rule.

Seb Cesbron
  • 3,823
  • 15
  • 18
  • So there is another problem here. When the exception occurs, the memory usage of java.exe keeps increasing (I guess Java is keeping the object in memory). What is the best way to reclaim that data when an exception occurs? – dreampowder Aug 31 '12 at 08:44
  • 1
    As long as you don't hold a reference to an object, the GC will reclaim it automatically. To check Java memory usage, don't track it with system tools; use specific tools like VisualVM to track the evolution of heap usage. – Seb Cesbron Aug 31 '12 at 14:31
0

If you commit every 100 inserts, then a rollback after the 56th insert also undoes all 55 inserts before it.

You could commit after every insert, but for batches that insert very many rows that is slow and therefore not recommended.

The solution is to use savepoints.

Setting a savepoint is relatively fast, so it can be done after every insert. Setting a savepoint does not write any data into the database - you still have to commit later - but a rollback then only undoes the work done since the last savepoint.

So in your example you commit every 100 (or whatever) rows (and after the last row, for sure), and you set a savepoint after every row. When an error appears and you roll back to the savepoint, only the erroneous insert is undone; the others are not touched.
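The savepoint semantics can be illustrated with a toy model, where a plain list stands in for the uncommitted rows of a transaction: a savepoint just remembers a position, and rolling back to it discards only the rows added after that position. (The real mechanism is `java.sql.Connection.setSavepoint()` and `Connection.rollback(Savepoint)`; this sketch only models the behavior.)

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of savepoint semantics: rollbackTo() undoes only the inserts
// made after the savepoint; everything before it survives until commit().
class SavepointDemo {
    private final List<String> pending = new ArrayList<>();

    int setSavepoint() { return pending.size(); }        // remember the current position

    void insert(String row) { pending.add(row); }

    void rollbackTo(int savepoint) {                     // undo only rows after the savepoint
        pending.subList(savepoint, pending.size()).clear();
    }

    List<String> commit() {                              // everything still pending is kept
        return new ArrayList<>(pending);
    }
}
```

With a real JDBC connection the shape is the same: set a savepoint before each insert, and on a constraint violation roll back to that savepoint instead of rolling back the whole transaction.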

For a description, see for example java.sql.Connection.setSavepoint and java.sql.Savepoint.

Johanna
  • 5,223
  • 1
  • 21
  • 38