I have a question about the persist and merge strategy of EclipseLink. I would like to know how EclipseLink/JPA inserts and updates records. Does it insert/update them one by one in the database, or does it collect the changes and then flush them to the database in one go? This is important to me because I am going to have a history table with a trigger that fires on insert and update. So if, for example, the update happens per field and 3 fields are updated, will I get 3 records in the history table or just one? I would appreciate an answer, and also a reference link for further information.
1 Answer
The persistence provider is quite free to flush changes whenever it sees fit. So you cannot reliably predict the number of update callbacks or the expected SQL statements.
In general, the provider will flush changes before each query to make the changes in the persistence context available to the query. You can hint the provider to defer flushing until commit time, but the provider may still flush at will.
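To make that difference concrete, here is a toy sketch of the two flush modes. All class and method names below are invented for illustration; this is not the real `javax.persistence` API, just a model of the behavior described above:

```java
import java.util.ArrayList;
import java.util.List;

// Toy persistence context illustrating JPA flush modes. All names here are
// invented for the sketch; this is NOT the real javax.persistence API.
class ToyPersistenceContext {
    enum FlushMode { AUTO, COMMIT }

    private final FlushMode mode;
    private final List<String> pendingSql = new ArrayList<>();
    final List<String> executedSql = new ArrayList<>();

    ToyPersistenceContext(FlushMode mode) { this.mode = mode; }

    // A change is only buffered in the persistence context; nothing is
    // written to the database yet.
    void update(String sql) { pendingSql.add(sql); }

    // In AUTO mode the provider flushes before a query so the query can
    // see the pending changes; in COMMIT mode the flush is deferred.
    void query(String jpql) {
        if (mode == FlushMode.AUTO) flush();
        executedSql.add("SELECT for: " + jpql);
    }

    void commit() {
        flush();                   // remaining changes go out first...
        executedSql.add("COMMIT"); // ...then everything becomes visible atomically
    }

    private void flush() {
        executedSql.addAll(pendingSql);
        pendingSql.clear();
    }
}

class FlushModeDemo {
    public static void main(String[] args) {
        ToyPersistenceContext auto = new ToyPersistenceContext(ToyPersistenceContext.FlushMode.AUTO);
        auto.update("UPDATE person SET name = 'Bob'");
        auto.query("SELECT p FROM Person p"); // triggers a flush first
        auto.commit();
        System.out.println(auto.executedSql); // UPDATE runs before the SELECT

        ToyPersistenceContext deferred = new ToyPersistenceContext(ToyPersistenceContext.FlushMode.COMMIT);
        deferred.update("UPDATE person SET name = 'Bob'");
        deferred.query("SELECT p FROM Person p"); // no flush yet
        deferred.commit();
        System.out.println(deferred.executedSql); // UPDATE only runs at commit
    }
}
```

Keep in mind that even with the real `FlushModeType.COMMIT` hint, a provider is still allowed to flush earlier; the sketch only illustrates the intent of the hint.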
Please see the relevant chapters of the JPA (2.0) spec:
- §3.2.4 Synchronization to the Database
- §3.8.7 Queries and Flush Mode
EDIT: There is an important point regarding flushing and transaction isolation. The changes are flushed to the database and the lifecycle listeners are invoked, but the data is neither committed nor visible to other transactions - read committed is the default isolation level. The commit itself is atomic.
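A toy model of that visibility rule (the names are invented; this is not a real JDBC driver or EclipseLink internals): rows flushed inside a transaction already exist on the database side, but under read committed they stay invisible to other transactions until the single atomic commit publishes them.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of read-committed visibility. The names are invented; this is
// not a real JDBC driver or EclipseLink internals.
class ToyDatabase {
    private final List<String> committedRows = new ArrayList<>();

    class Transaction {
        private final List<String> flushedButUncommitted = new ArrayList<>();

        // Flushed rows exist on the database side, but only inside this
        // transaction's own view.
        void flush(String row) { flushedButUncommitted.add(row); }

        // Under read committed, other transactions see committed data only.
        List<String> readAsOtherTransaction() { return new ArrayList<>(committedRows); }

        // The commit publishes all flushed rows in one atomic step.
        void commit() {
            committedRows.addAll(flushedButUncommitted);
            flushedButUncommitted.clear();
        }
    }
}

class VisibilityDemo {
    public static void main(String[] args) {
        ToyDatabase db = new ToyDatabase();
        ToyDatabase.Transaction tx = db.new Transaction();
        tx.flush("history row");
        System.out.println(tx.readAsOtherTransaction()); // [] - flushed, not yet visible
        tx.commit();
        System.out.println(tx.readAsOtherTransaction()); // [history row]
    }
}
```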
I am not sure what the consequences of a server crash would be, but under normal circumstances, data integrity is ensured.

- "The persistence provider is quite free to flush changes whenever it sees fit." Doesn't this break the atomicity rule of the database? – pms May 01 '14 at 15:38
- For example, if I have 3 fields that changed, the entity manager tries to update them 3 times, and for some reason my server crashes after one update and shuts off. Then I have one change in the DB when I needed all 3 together. Am I missing something? If not, how will this be handled? – pms May 01 '14 at 15:46
- No, atomicity is fine; you just can't rely on the lifecycle listeners firing only once per transaction. Please see the edit. – kostja May 01 '14 at 16:05
- I think I got that. The transaction will commit at the end, so my trigger in the database will fire after the commit. Thanks – pms May 01 '14 at 19:02
- @pms - Oracle and many other DB vendors do not support on-commit triggers, to my knowledge. When using `AFTER UPDATE` or similar triggers, you will probably still end up with multiple log rows per transaction. Please share your experience after you have tried it; I am not sure of this effect. – kostja May 02 '14 at 06:25
- Hi @kostja, I tested the scenario. I created a table with 2 columns and a corresponding entity. I inserted values for both columns, and I had a trigger that runs on every insert/update/delete and inserts the date into a log table. I saw only one row in the log table. I did the same with an update and got the same result. So my understanding is that the entity values get updated but are not sent to the DB immediately; they are all sent together, which makes sense, because the entity manager won't have any control afterwards. – pms May 14 '14 at 16:22