Yet another scenario, based on a previous question. In my opinion, its conclusion will be general enough to be useful to a wide audience. Quoting Peter Lawrey from here:
The synchronized uses a memory barrier which ensures ALL memory is in a consistent state for that thread, whether its referenced inside the block or not.
First of all, my problem deals with data visibility only. That is, atomicity ("operation synchronization") is already guaranteed in my software: every write operation completes before any read operation on the same value, and vice versa. So the question is only about values that threads might have cached.
Consider 2 threads, threadA and threadB, and the following class:
public class SomeClass {

    private final Object mLock = new Object();

    // Note: none of the member variables are volatile.

    public void operationA1() {
        ... // do "ordinary" stuff with the data and methods of SomeClass
        /* "ordinary" stuff means we don't create new threads,
           we don't perform synchronization, create semaphores, etc. */
    }

    public void operationB() {
        synchronized (mLock) {
            ... // do "ordinary" stuff with the data and methods of SomeClass
        }
    }

    // public void dummyA() {
    //     synchronized (mLock) {
    //         dummyOperation();
    //     }
    // }

    public void operationA2() {
        // dummyA(); // this call is commented out
        ... // do "ordinary" stuff with the data and methods of SomeClass
    }
}
Known facts (they follow from my software's architecture):

- operationA1() and operationA2() are called by threadA; operationB() is called by threadB.
- operationB() is the only method called by threadB in this class. Notice that operationB() is in a synchronized block.
- Very important: it is guaranteed that these operations are called in the following logical order: operationA1(), operationB(), operationA2(). It is guaranteed that every operation completes before the next one is called. This is due to a higher-level architectural synchronization (a message queue, but that's irrelevant now). As I've said, my question is purely about data visibility (i.e. whether data copies are up to date or stale, e.g. due to a thread's own cache).
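To make the "known facts" concrete, here is a minimal, self-contained sketch (all names in it are mine, standing in for the real code) of how the logical ordering between the two threads could be enforced. One caveat: CountDownLatch itself also establishes happens-before edges, so this sketch gives stronger memory guarantees than the bare "logical order" my question assumes; it only illustrates the call sequence.

```java
import java.util.concurrent.CountDownLatch;

// Stub standing in for the real SomeClass (the real bodies are elided above).
class SomeClassStub {
    void operationA1() { System.out.println("operationA1"); }
    void operationB()  { System.out.println("operationB"); }
    void operationA2() { System.out.println("operationA2"); }
}

public class CallOrderSketch {
    public static void main(String[] args) throws InterruptedException {
        SomeClassStub obj = new SomeClassStub();
        CountDownLatch a1Done = new CountDownLatch(1);
        CountDownLatch bDone  = new CountDownLatch(1);

        Thread threadA = new Thread(() -> {
            obj.operationA1();
            a1Done.countDown();      // signal: operationA1 has completed
            try {
                bDone.await();       // wait until operationB has completed
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            obj.operationA2();
        });

        Thread threadB = new Thread(() -> {
            try {
                a1Done.await();      // wait until operationA1 has completed
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            obj.operationB();
            bDone.countDown();       // signal: operationB has completed
        });

        threadA.start();
        threadB.start();
        threadA.join();
        threadB.join();
    }
}
```

Running this always prints the three operation names in the guaranteed order: operationA1, operationB, operationA2.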
Based on the Peter Lawrey quote, the memory barrier in operationB() ensures that all memory is in a consistent state for threadB during operationB(). Therefore, if threadA changed some values in operationA1(), those values will have been written from threadA's cache to main memory by the time operationB() starts. Question #1: is this correct?
Question #2: when operationB() leaves the memory barrier, the values changed by operationB() (and possibly cached by threadB) will be written back to main memory. But operationA2() will not be safe, because nobody asked threadA to synchronize with main memory, right? So it doesn't matter that the changes made by operationB() are now in main memory: threadA might still have its cached copies from before operationB() was called.
Question #3: if my suspicion in Question #2 is true, then look at my source code again, uncomment the method dummyA(), and uncomment the dummyA() call in operationA2(). I know this may be bad practice in other respects, but does it make a difference? My (possibly faulty) assumption is as follows: dummyA() will cause threadA to update its cached data from main memory (due to the mLock synchronized block), so it will see all the changes made by operationB(). That is, now everything is safe. As a side note, the logical order of the method calls is then:
operationA1()
operationB()
dummyA()
operationA2()
My conclusion: due to the synchronized block in operationB(), threadB will see the most up-to-date values of data that may have been changed earlier (e.g. in operationA1()). Due to the synchronized block in dummyA(), threadA will see the most up-to-date copies of the data changed in operationB(). Is there any error in this train of thought?
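For reference, here is a minimal runnable sketch of the pattern I assume in Question #3 (the class and field names are mine, standing in for the elided real code): both threads synchronize on the same mLock, so threadA's synchronized block plays the role of dummyA() and should make operationB()'s writes visible to threadA.

```java
public class VisibilityDemo {
    private final Object mLock = new Object();
    private int data = 0;          // deliberately not volatile
    private boolean ready = false; // deliberately not volatile

    void operationB() {            // called by threadB
        synchronized (mLock) {
            data = 42;
            ready = true;
        }
    }

    int operationA2() {            // called by threadA
        while (true) {
            synchronized (mLock) { // plays the role of dummyA()
                if (ready) {
                    return data;   // visibility of data is guaranteed here,
                }                  // because both accesses are guarded by mLock
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        VisibilityDemo demo = new VisibilityDemo();
        Thread threadB = new Thread(demo::operationB);
        threadB.start();
        System.out.println(demo.operationA2()); // prints 42
        threadB.join();
    }
}
```

Here threadA polls under the lock instead of relying on an external ordering guarantee, but the visibility mechanism is the same: an unlock of mLock in one thread happens-before a subsequent lock of the same mLock in another thread.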