I'm developing two applications over DB2 as the DBMS: a backend and a frontend web application. The web application accesses the database through services exposed by the backend, which is the only component that can modify data in the tables.
The backend application also contains a Java batch process that is responsible for updating a registry table, which is read by the web application (through backend services) and by other applications. The batch works as follows:
- Opens a connection with autocommit set to false (via the JDBC driver)
- Erases all the rows with a TRUNCATE statement
- Downloads a source file and performs a massive insert, looping over the file rows
- If everything succeeds, commit is called; otherwise, rollback
This behavior is necessary because, while the update process has not finished, the other applications that read data from this table must still see the old data. (A rough JDBC sketch of this flow is below.)
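To make the flow concrete, this is roughly what the batch does today; the table name REGISTRY, the columns, and the semicolon-separated file format are placeholders, not the real ones.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class RegistryRefreshBatch {

    public void refresh(String jdbcUrl, String user, String pwd, String sourceFile) throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, pwd)) {
            conn.setAutoCommit(false); // autocommit false: commit/rollback handled manually

            try (Statement st = conn.createStatement()) {
                // erase all rows (I also tried a plain "DELETE FROM REGISTRY", see below)
                st.executeUpdate("TRUNCATE TABLE REGISTRY");
            }

            String insertSql = "INSERT INTO REGISTRY (COL1, COL2) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(insertSql);
                 BufferedReader reader = new BufferedReader(new FileReader(sourceFile))) {
                String line;
                while ((line = reader.readLine()) != null) { // loop on the downloaded file rows
                    String[] fields = line.split(";");
                    ps.setString(1, fields[0]);
                    ps.setString(2, fields[1]);
                    ps.executeUpdate(); // one INSERT per row, all in the same transaction
                }
                conn.commit();   // readers should only ever see the new data from here on
            } catch (Exception e) {
                conn.rollback(); // on any failure the old data must stay untouched
                throw e;
            }
        }
    }
}
```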
The problem is that during the batch execution, which takes about 40 minutes on average, every SELECT operation returns a deadlock with this error: SQLCODE: -913, SQLSTATE: 57033, SQLERRMC: 00C9008E.
I'm trying to understand which isolation level I need to use on the DELETE/INSERT statements to avoid row locks, reading this link https://www.ibm.com/support/knowledgecenter/en/SSEPEK_10.0.0/sqlref/src/tpc/db2z_sql_isolationclause.html, but I did not find a solution.
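For clarity, these are the two ways I know of to specify the isolation level from JDBC (REGISTRY and the column names are just placeholders); note that an uncommitted read on the readers' side would expose the half-loaded table, which is exactly what I need to avoid.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class IsolationExamples {

    // Connection-level: JDBC READ_COMMITTED corresponds to DB2 Cursor Stability (CS).
    static void setConnectionIsolation(Connection conn) throws Exception {
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    }

    // Statement-level: the isolation clause from the linked page, e.g. WITH UR
    // (uncommitted read) means the SELECT does not wait on the writer's locks,
    // but it would then see the uncommitted (empty or partial) data.
    static void readWithUr(Connection conn) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                 "SELECT COL1, COL2 FROM REGISTRY WITH UR");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // ... consume the row
            }
        }
    }
}
```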
I supposed that the problem was the TRUNCATE statement (https://www.ibm.com/support/knowledgecenter/en/SSEPGG_9.7.0/com.ibm.db2.luw.sql.ref.doc/doc/r0053474.html), but even when using a DELETE statement the deadlock error occurs on the SELECTs and, in addition, if a SELECT is already running, the DELETE gets the deadlock.
I'm thinking of using JDBC batch inserts, but I have never used them with DB2 and I don't know how reliable they are on DB2!
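In case it matters, this is the kind of batching I mean (addBatch/executeBatch on a PreparedStatement, same placeholder table as above); the 1000-row chunk size is just a guess.

```java
import java.io.BufferedReader;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class BatchInsertSketch {

    // Same transaction as before (autocommit=false), but the inserts are
    // grouped and sent to DB2 in chunks instead of one statement per row.
    static void insertInBatches(Connection conn, BufferedReader reader) throws Exception {
        String insertSql = "INSERT INTO REGISTRY (COL1, COL2) VALUES (?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(insertSql)) {
            int count = 0;
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(";");
                ps.setString(1, fields[0]);
                ps.setString(2, fields[1]);
                ps.addBatch();
                if (++count % 1000 == 0) {
                    ps.executeBatch(); // 1000 inserts in one round trip
                }
            }
            ps.executeBatch(); // flush the remaining rows
        }
        // commit/rollback is still handled by the caller's transaction
    }
}
```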
I also looked at the covering index solution (I cannot post the URL because of the reputation limit), but it is not viable, considering that the SELECT for the web application reads all the columns!