
If the value in some part of the cache is 4 and we change it to 5, the dirty bit for that data is set to 1. But what if we then set the value back to 4 — will the dirty bit stay 1, or change back to 0?

I am interested in this because it could enable a higher-level optimization of the computer system's read/write traffic between main memory and the cache.
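To make the question concrete, here is a toy model (my own illustration, not real hardware) of a write-back cache line in which any write sets the dirty bit, regardless of the value written — which, as the answer below explains, is how real caches behave:

```python
# Toy model of a write-back cache line: the dirty bit means
# "has been written since fill", not "currently differs from memory".
class CacheLine:
    def __init__(self, value):
        self.value = value   # value as loaded from memory
        self.dirty = False

    def write(self, value):
        self.value = value
        self.dirty = True    # set on every write, even if the value is unchanged

line = CacheLine(4)
line.write(5)
line.write(4)                # restore the original value
print(line.dirty)            # still True: the line will be written back
```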

  • From the perspective of the cache, it doesn't know that it had value 4 previously. Therefore it will still be set to dirty. Obviously you can implement some sort of checkpointing and revert dirty bits later for cases like you've mentioned. – Isuru H Dec 12 '16 at 15:10
  • Related: [What specifically marks an x86 cache line as dirty - any write, or is an explicit change required?](//stackoverflow.com/q/47417481) is about silent stores (optimizing just one store of the current value to not set the dirty bit). Real CPUs don't even do that. – Peter Cordes Oct 04 '19 at 21:19

1 Answer


For a cache to work the way you describe, it would need to reserve half of its data space to store the old values.
Caches are expensive precisely because they have a high cost per bit, and consider also that:

  • That mechanism would only detect a two-level write history (A -> B -> A), and nothing deeper (like A -> B -> C -> A).
  • Every write would require copying the current values into the old-value storage.
  • The smallest taggable unit of data in a cache is the line, and the whole line would need to be changed back to its original value. Given that a line is on the order of 64 bytes in size, that is very unlikely to happen.
  • The hierarchical structure of the caches (L1, L2, L3, ...) exists precisely to mitigate the cost of evictions.

The solution you propose has few benefits compared to its costs, and thus is not implemented.
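The objections above can be sketched as a hypothetical "revert detection" scheme (my own illustration, not anything real hardware does): keep a shadow copy of each line's original contents, and clear the dirty bit only when the entire line matches again. The sketch makes the two costs visible — storage doubles, and a single differing byte anywhere in the 64-byte line keeps it dirty:

```python
# Hypothetical revert-detecting cache line (illustration only).
# Storing `original` alongside `data` doubles the storage per line.
LINE_SIZE = 64

class RevertingCacheLine:
    def __init__(self, data):
        assert len(data) == LINE_SIZE
        self.data = bytearray(data)
        self.original = bytes(data)   # extra 64 bytes per line: 2x storage
        self.dirty = False

    def write_byte(self, offset, value):
        self.data[offset] = value
        # The whole line must match to clear the dirty bit; one
        # differing byte anywhere keeps the line dirty.
        self.dirty = bytes(self.data) != self.original

line = RevertingCacheLine(bytes(LINE_SIZE))
line.write_byte(0, 5)
print(line.dirty)            # True: line differs from original
line.write_byte(0, 0)        # restore the only changed byte
print(line.dirty)            # False: A -> B -> A detected, at 2x cost
```

Note that the full-line comparison on every write is itself extra work, and the scheme still misses A -> B -> C -> A histories only in the sense that it must re-reach the exact original line contents to clear the bit.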

Margaret Bloom
  • Optimizing for direct silent stores (i.e., storing the same value as currently in a memory location) has been considered academically, but even that potential optimization has significant issues and has not been (as far as I know) implemented in an actual product. –  Dec 13 '16 at 03:39
  • @PaulA.Clayton Do you mean [write-through](https://stackoverflow.com/questions/27087912/write-back-vs-write-through)? – Margaret Bloom Dec 13 '16 at 08:34
  • No, see [Google Scholar search results](https://scholar.google.com/scholar?hl=en&q=silent+stores&btnG=&as_sdt=1%2C21). There is actually **one** example of silent store optimization that *has* been implemented: Intel's hardware lock elision. In this case software provides a hint that the lock's value will be restored (minimizing overhead in detecting a possible silent store) and the benefit of eliding this particular type of ABA access is significant in allowing use of transactional memory yet being compatible with older hardware. –  Dec 13 '16 at 13:21
  • @PaulA.Clayton As I understood it, the OP's question was not about silent stores (current cache value vs. current memory value) but about the current cache value vs. a previous cache value. Thank you for the interesting point, though! – Margaret Bloom Dec 13 '16 at 14:37
  • You could consider as a semi-silent store a change-and-revert that occurs within one cache level, which then writes back the original value to the lower level. This is relatively easy to detect and avoids the double storage most of the time. – Leeor Dec 20 '16 at 21:34
  • @Leeor: A plausible mechanism would be coalescing the two stores together in the store buffer, then it becomes "just" a potential silent store. (non-x86 weakly ordered ISAs have much more latitude to coalesce stores without breaking the memory model.) But yeah, even without that flipping from Modified back to Exclusive doesn't have any fundamental problems, other than the massive amount of storage needed to detect this. It's rare enough, and memory bandwidth is cheap *enough*, that spending half your cache, or using a hash of the whole line, would be insane. – Peter Cordes Oct 04 '19 at 21:11