In this false sharing test on GitHub, an array is defined as `int array[100]`, with `bad_index = 1` and `good_index = 99`. It then creates two threads and does the following:

  1. False sharing: thread_1 updates A[0], thread_2 updates A[bad_index]
  2. No false sharing: thread_1 updates A[0], thread_2 updates A[good_index]

With false sharing, the operation is more than 2x slower. My question is: why is index 1 bad and index 99 good?
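Roughly, the benchmark does something like this (my own minimal sketch of the idea, not the actual code from the repo; the loop count and timing code are made up):

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

volatile int A[100];        // volatile so the increments are not optimized away
const int bad_index  = 1;   // same cache line as A[0]
const int good_index = 99;  // (almost certainly) a different cache line

// Each thread repeatedly increments its own element.
void work(int idx) {
    for (int i = 0; i < 100000000; ++i)
        A[idx] = A[idx] + 1;
}

double run(int other_index) {
    auto start = std::chrono::steady_clock::now();
    std::thread t1(work, 0);
    std::thread t2(work, other_index);
    t1.join();
    t2.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
}

int main() {
    printf("A[0] + A[%d]  (false sharing):   %.2f s\n", bad_index, run(bad_index));
    printf("A[0] + A[%d] (no false sharing): %.2f s\n", good_index, run(good_index));
}
```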

  • Because of false sharing - you said it yourself. In this case good equals fast, bad equals slow. But no index is inherently bad - they are just different! So what exactly are you asking? –  Jun 11 '20 at 06:49
  • This is explained thoroughly in the link you gave https://github.com/MJjainam/falseSharing and from there https://parallelcomputing2017.wordpress.com/2017/03/17/understanding-false-sharing/ – Shlomi Agiv Jun 11 '20 at 06:58
  • @StaceyGirl What I am asking is: "Is the bad index bad because it lies within the 64-byte boundary of `A[0]`, and the good index good because it lies outside the 64-byte boundary of `A[0]`?" – Prabhakar Tayenjam Jun 11 '20 at 12:17

1 Answer


Index 1 is a 'bad' index because the array elements are `int`s without padding, so a typical 64-byte cache line will contain both the element at index 0 and the element at index 1. This leads to false sharing between thread_1 and thread_2: thread_1 has its cache line invalidated every time thread_2 increments the element at index 1, forcing the line to be reloaded even though the computation on element 0 does not depend on it.

Index 99 is a 'good' index because it is far enough away from the first element that it will not be in the same cache line: with 4-byte `int`s, `A[99]` sits 396 bytes past `A[0]`, well beyond a 64-byte line. This depends on the cache line size of the specific CPU running the program, though.
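For example, assuming 4-byte `int`s, a 64-byte cache line, and an array that starts on a line boundary (assumptions, not facts about your machine), you can compute which line each index lands on:

```cpp
#include <cstdio>

int main() {
    const int cache_line_bytes = 64;     // assumed line size
    const int elem_bytes = sizeof(int);  // typically 4

    for (int idx : {0, 1, 99}) {
        int offset = idx * elem_bytes;
        printf("A[%2d]: byte offset %3d -> cache line %d\n",
               idx, offset, offset / cache_line_bytes);
    }
    // With these assumptions: A[0] and A[1] land on line 0,
    // while A[99] (offset 396) lands on line 6.
}
```

In C++17 you can also query a suitable separation at compile time via `std::hardware_destructive_interference_size` instead of hard-coding 64.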
