
Yet another scenario, based on a previous question. In my opinion, its conclusion will be general enough to be useful to a wide audience. Quoting Peter Lawrey from here:

The synchronized uses a memory barrier which ensures ALL memory is in a consistent state for that thread, whether it's referenced inside the block or not.

First of all, my problem deals with data visibility only. Atomicity ("operation synchronization") is already guaranteed in my software: every write operation completes before any read of the same value begins, and vice versa. So the question is purely about values that threads may have cached.

Consider 2 threads, threadA and threadB, and the following class:

public class SomeClass {

    private final Object mLock = new Object();
    // Note: none of the member variables are volatile.

    public void operationA1() {
        ... // do "ordinary" stuff with the data and methods of SomeClass

        /* "ordinary" stuff means we don't create new threads,
           don't perform synchronization, don't create semaphores, etc.
        */
    }

    public void operationB() {
        synchronized (mLock) {
            ...
            // do "ordinary" stuff with the data and methods of SomeClass
        }
    }

    // public void dummyA() {
    //     synchronized (mLock) {
    //         dummyOperation();
    //     }
    // }

    public void operationA2() {
        // dummyA();  // this call is commented out

        ... // do "ordinary" stuff with the data and methods of SomeClass
    }
}

Known facts (they follow from my software's architecture):

  • operationA1() and operationA2() are called by threadA; operationB() is called by threadB
  • operationB() is the only method called by threadB in this class. Notice that its body is a synchronized block.
  • very important: it is guaranteed that these operations are called in the following logical order: operationA1(), operationB(), operationA2(), and that each operation completes before the next one is called. This is due to a higher-level architectural synchronization (a message queue, but that's irrelevant now). As I've said, my question is purely about data visibility (i.e. whether a thread sees up-to-date data or outdated copies, e.g. in its own cache).

Based on the Peter Lawrey quote, the memory barrier in operationB() ensures that all memory is in a consistent state for threadB during operationB(). Therefore, if threadA changed some values in operationA1(), those values will have been written from threadA's cache to main memory by the time operationB() starts. Question #1: Is this correct?

Question #2: when operationB() leaves the memory barrier, the values changed by operationB() (and possibly cached by threadB) will be written back to main memory. But operationA2() is still not safe, because no one asked threadA to synchronize with main memory, right? So it doesn't matter that operationB()'s changes are now in main memory: threadA might still have its cached copies from before operationB() was called.
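To illustrate the kind of staleness I mean, here is a classic sketch (VisibilityDemo and the stop field are made-up names, not part of my software): without any happens-before relationship, the reader thread may never observe the writer's update.

import java.lang.Thread;

public class VisibilityDemo {
    // deliberately NOT volatile, and no synchronization anywhere
    private static boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // busy-wait: the reader is allowed to keep using a stale
                // cached value of 'stop', so this loop may never terminate
            }
            System.out.println("reader finally saw stop == true");
        });
        reader.start();
        Thread.sleep(100);
        stop = true; // plain write: no guarantee it ever becomes visible
    }
}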

Question #3: if my suspicion in Q#2 is correct, then look at my source code again, uncomment the method dummyA(), and uncomment the dummyA() call in operationA2(). I know this may be bad practice in other respects, but does it make a difference? My (possibly faulty) assumption is as follows: dummyA() will cause threadA to update its cached data from main memory (due to the mLock synchronized block), so it will see all changes made by operationB(). That is, now everything is safe. On a side note, the logical order of method calls then becomes:

  1. operationA1()
  2. operationB()
  3. dummyA()
  4. operationA2()

My conclusion: due to the synchronized block in operationB(), threadB will see the most up-to-date values of data that might have been changed before (e.g. in operationA1()). Due to the synchronized block in dummyA(), threadA will see the most up-to-date copies of data that were changed in operationB(). Is there any error in this train of thought?
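For reference, here is a compilable sketch of the class with dummyA() and its call uncommented; the comments mark where I believe the visibility guarantees arise (dummyOperation() is reduced to a no-op placeholder):

public class SomeClass {

    private final Object mLock = new Object();
    // still no volatile member variables

    public void operationA1() {
        // ... plain writes to shared state by threadA
    }

    public void operationB() {
        synchronized (mLock) {
            // ... threadB's work; the lock release at the end of this
            // block publishes threadB's writes
        }
    }

    public void dummyA() {
        synchronized (mLock) {
            dummyOperation(); // even an empty synchronized block would
                              // create the same happens-before edge
        }
    }

    public void operationA2() {
        dummyA(); // acquiring mLock pairs with operationB()'s release,
                  // so the reads below see operationB()'s writes
        // ... plain reads of shared state by threadA
    }

    private void dummyOperation() {
        // no-op stand-in so this sketch compiles
    }
}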

Thomas Calc

1 Answer


Your own intuition regarding question 2 is, in general, correct. Using synchronized(mLock) at the start of operationA2 emits a memory barrier, which ensures that subsequent reads in operationA2 see the writes performed by operationB; those writes were published by the memory barrier implicit in operationB's own use of synchronized(mLock).
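Concretely, the pairing described above could look like this (a sketch of operationA2 only; the block's body can even be empty):

public void operationA2() {
    synchronized (mLock) {
        // acquiring the lock that operationB released is what creates
        // the visibility guarantee; nothing else needs to happen here
    }
    // ... subsequent reads in threadA now see operationB's writes
}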

However, to answer question 1: note that operationB may not see any writes performed by operationA1 unless you insert a full memory barrier at the end of operationA1 (i.e., nothing tells the system to flush the values cached by threadA during operationA1). So you may want to put a call to dummyA at the end of operationA1.
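In other words, something like this sketch, reusing dummyA from the question:

public void operationA1() {
    // ... plain writes to shared state by threadA
    dummyA(); // releasing mLock inside dummyA publishes the writes above;
              // operationB's later acquire of mLock pairs with that release
}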

To be fully safe and more maintainable, and since you state that these methods' executions do not overlap, you should enclose all manipulation of shared state in synchronized(mLock) blocks; because the executions never overlap, the lock is always uncontended, so this costs essentially nothing in performance.
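A minimal sketch of that arrangement (the elided bodies stand for the question's "ordinary" stuff):

public class SomeClass {

    private final Object mLock = new Object();

    public void operationA1() {
        synchronized (mLock) {
            // ... all shared-state access goes inside the lock
        }
    }

    public void operationB() {
        synchronized (mLock) {
            // ...
        }
    }

    public void operationA2() {
        synchronized (mLock) {
            // ...
        }
    }
}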

Monroe Thomas