2

Can false sharing occur with the following class:

class Foo {
    int x;
    int y;
}

while two threads are concurrently modifying x and y? Or is it impossible to judge, since the compiler might optimise x and y into registers?

Bober02
  • 15,034
  • 31
  • 92
  • 178
  • Yes, it could happen in the case where this *isn't* optimised to registers. – Oliver Charlesworth Nov 27 '17 at 21:15
  • As each write would fill up store buffers of limited space, causing draining of buffers and cache invalidations, correct? – Bober02 Nov 27 '17 at 21:21
  • Yes, really just the same for any false-sharing scenario involved interleaved access. – Oliver Charlesworth Nov 27 '17 at 21:29
  • It’s unlikely to ever be an issue for a declaration like this. It would be odd application logic if these two mutable variables, with the same visibility and no sign of thread synchronization, ended up in a false-sharing scenario anywhere. – Holger Nov 29 '17 at 17:48
  • @holger, why is that? The comments above and answer below seem to differ – Bober02 Dec 01 '17 at 12:39
  • 1
    There is no contradiction. The answer says it *could happen* (technically), I’m just saying it’s *not an issue* (practically). Explain the real application scenario, where false sharing happens for a class like this and I’ll explain what’s wrong with your application logic. As a starting point, just think about this: if one thread is working with `x` without any relationship to `y` while another thread is working with `y` without any relationship to `x`, why are these entirely unrelated variables declared in the same class and why are these threads working on the same instance of that class? – Holger Dec 01 '17 at 12:54
  • I understand your point; this is just a theoretical question about whether this is possible at all, and whether in practice the cache coherency mechanism, e.g. on x86, would eventually cause cache invalidation and false sharing – Bober02 Dec 01 '17 at 22:27
  • @Bober02 `cache invalidation`? What do you mean? – Eugene Dec 01 '17 at 23:15
  • 1
    As far as I understand, the problem is as follows: x and y are on the same cache line (let's assume that scenario). Thread 1 loads the line into its own cache and Thread 2 does the same. Both threads keep modifying x and y respectively, which at some point fills up the store buffers and causes a flush, and that cache line is invalidated for the other thread, hence false sharing – Bober02 Dec 01 '17 at 23:26
  • @Bober02 not entirely: if Thread 1 has variable y in its cache line and updates it, and that variable also sits in Thread 2's cache line, Thread 2's copy has to be updated. That update happens via cache invalidation, meaning the entire cache line is updated, not just the single variable. Thread 2 has not touched that variable, but its line is now outdated, thus the name "false" sharing – Eugene Dec 03 '17 at 14:22
  • @Bober02 btw probably the simplest and best material I have read on this https://mechanical-sympathy.blogspot.md/2011/07/false-sharing.html?m=1 – Eugene Dec 03 '17 at 14:24
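The scenario debated in these comments can be made concrete with a small harness (my own sketch, not from the thread; the class layout mirrors the question, the `run` helper and iteration counts are made up). Two threads hammer the adjacent fields of a single Foo instance; the fields are made volatile so each write actually reaches the coherence protocol, and if x and y land on the same cache line the threads keep invalidating each other's copy:

```java
public class FooFalseSharing {

    // Same shape as the class in the question, but volatile so that
    // writes are not optimised into registers and must hit the cache.
    static class Foo {
        volatile int x;
        volatile int y;
    }

    // Run two threads that write the adjacent fields concurrently and
    // return the elapsed time in milliseconds.
    static long run(Foo foo, int iterations) throws InterruptedException {
        Thread tx = new Thread(() -> {
            for (int i = 0; i < iterations; i++) foo.x = i;
        });
        Thread ty = new Thread(() -> {
            for (int i = 0; i < iterations; i++) foo.y = i;
        });
        long start = System.nanoTime();
        tx.start(); ty.start();
        tx.join(); ty.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        Foo foo = new Foo();
        System.out.println("ms = " + run(foo, 10_000_000));
    }
}
```

Whether the two fields actually share a line depends on the JVM's field layout and the object's address, so the timing effect is probabilistic, which is exactly the "could happen" in the comments above.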

3 Answers

1

Of course it could happen (could being the key word here); you can't tell for sure whether these two variables will end up on the same cache line. Compilers (at least javac) will do nothing to prevent such scenarios: forcing these variables onto different cache lines would probably be very expensive, and it would require a lot of proof that it is actually needed.

And yes, your comment is correct: cache invalidation would happen quite often and might be the cause of a bottleneck. But measuring this is not easy; you can see an example here.

Just note that since JDK 8 there is @Contended, which pads fields so that each sits on its own cache line.
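@Contended lives in sun.misc on JDK 8 (jdk.internal.vm.annotation on JDK 9+) and only takes effect with -XX:-RestrictContended, so outside the JDK the same effect is usually hand-rolled with padding fields. A minimal sketch of that idea (class and method names are my own):

```java
// Manual padding sketch: the seven leading and trailing longs (7 * 8 = 56
// bytes each side, plus the object header) keep `value` on its own 64-byte
// cache line, which is what @Contended automates.
public class PaddedLong {
    long p1, p2, p3, p4, p5, p6, p7; // leading padding
    volatile long value;             // the hot field, isolated from its neighbours
    long q1, q2, q3, q4, q5, q6, q7; // trailing padding

    public void set(long v) { value = v; }

    public long get() { return value; }
}
```

One caveat: HotSpot may reorder fields within a class, so hand-rolled padding like this is a heuristic, not a guarantee; @Contended is the supported way to ask the JVM for isolation.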

Eugene
  • 117,005
  • 15
  • 201
  • 306
0

A test case like this:

public class FalseSharing implements Runnable {
    public final static int NUM_THREADS = 2; // change the number of threads here
    public final static long ITERATIONS = 50L * 1000L * 1000L;
    private static VolatileLong[] longs = new VolatileLong[NUM_THREADS];

    static {
        for (int i = 0; i < longs.length; i++) {
            longs[i] = new VolatileLong();
        }
    }

    private final int arrayIndex;

    public FalseSharing(final int arrayIndex) {
        this.arrayIndex = arrayIndex;
    }

    public static void main(final String[] args) throws Exception {
        final long start = System.currentTimeMillis();
        runTest();
        System.out.println("duration = " + (System.currentTimeMillis() - start));
    }

    private static void runTest() throws InterruptedException {
        Thread[] threads = new Thread[NUM_THREADS];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new FalseSharing(i));
        }
        for (Thread t : threads) {
            t.start();
        }
        for (Thread t : threads) {
            t.join();
        }
    }

    public void run() {
        long i = ITERATIONS + 1;
        while (0 != --i) {
            longs[arrayIndex].value = i;
        }
    }

    public final static class VolatileLong {
        public long p1, p2, p3, p4, p5, p6, p7; // padding
        public long value = 0L; // change to volatile when testing
        public long p1_1, p2_1, p3_1, p4_1, p5_1, p6_1, p7_1; // padding
    }
}

Test results (5 runs, on an Intel Core i5 with 2 cores):

  • volatile without padding (ms): 1360, 1401, 1684, 1323, 1383
  • volatile with padding (ms): 568, 612, 653, 638, 669
  • non-volatile without padding (ms): 41, 35, 40, 35, 44
  • non-volatile with padding (ms): 38, 39, 44, 49, 51

So, in my opinion, false sharing will not occur with a non-volatile field.

  • My test proves the same thing as yours. I'm also using com.lmax.disruptor.Sequence to test. non-volatile long > volatile long using lazySet (Sequence.set) > Sequence.setVolatile == volatile long with padding > volatile long without padding – lich0079 Apr 16 '21 at 13:49
0

My test proves the same thing as yours.

I'm also using com.lmax.disruptor.Sequence to test.

non-volatile long > volatile long using lazySet (Sequence.set) > Sequence.setVolatile == volatile long with padding > volatile long without padding
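For reference, lazySet is the plain java.util.concurrent.atomic API behind the Disruptor's Sequence.set: it performs an ordered store without the full StoreLoad fence that a volatile write pays for, which is why it sits between plain and volatile writes in this ranking. A small sketch (the publish helper and counts are my own):

```java
import java.util.concurrent.atomic.AtomicLong;

public class LazySetDemo {

    // Write a counter n times with lazySet: each write is an ordered store
    // (earlier writes cannot be reordered past it) but does not wait for a
    // full StoreLoad fence, so it is cheaper than a volatile set().
    static long publish(long n) {
        AtomicLong counter = new AtomicLong();
        for (long i = 1; i <= n; i++) {
            counter.lazySet(i);
        }
        // A read on the same thread always sees the last lazySet.
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println(publish(1_000_000)); // prints 1000000
    }
}
```

The trade-off is visibility latency: other threads may observe the new value slightly later than with a volatile write, which is acceptable for single-writer sequences like the Disruptor's.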

lich0079
  • 121
  • 1
  • 3