
I have learned that Shared Memory computer architectures can be divided into Uniform Memory Access (UMA) and Non-uniform Memory Access (NUMA), depending on whether the access times to a given memory location are the same for all processors or not. I've also learned that NUMA architectures can be further divided into cache-coherent and non-cache-coherent, based on whether they have a mechanism for propagating (or invalidating) modified data from one processor's (or core's) cache to another's, leading to the term "ccNUMA". (Please correct me if I got anything wrong...)

Based on this question, it is also my understanding that the term NUMA specifically refers to access times to main memory, not cache, so that even though most multiprocessor systems necessarily have distributed caches, these systems are still called UMA if they have uniform access to main memory.

What I don't understand is this: why is the concept of a "ccUMA" architecture rarely mentioned? For example, Wikipedia only has a page for ccNUMA (which redirects to NUMA), not for ccUMA, and the page for Cache Coherence doesn't explicitly refer to either (except that it links to Distributed Shared Memory, which seems to be roughly equivalent to NUMA...) Also, a Google search for ccUMA returns far fewer results than for ccNUMA...

Does the cache coherency problem not apply to UMA architectures? It seems to me that it does, so why is it rarely mentioned?

sp00n
  • Quite possibly because systems that are NUMA but lack hardware support for cache coherency are a bit ... difficult ... to write programs for, and so haven't been particularly prevalent (except in the past, before caches became common and cache coherency was a non-issue). As a result, the "cc" prefix is generally dropped, because it's just assumed that a modern multi-processor system maintains cache coherency. – twalberg Apr 11 '14 at 19:29
  • @twalberg Yes, UMA systems are assumed to be cache coherent (though some embedded systems might not be). Cache coherence is less expensive in a UMA system both because UMA does not scale well and because such systems typically provide a centralized point of communication (e.g., in early systems, a shared bus to a shared memory controller). Scaling cache coherence is difficult, so large-scale NUMA naturally makes limiting cache coherence more attractive. –  Apr 12 '14 at 00:20
  • Thank you both very much. @twalberg, did you mean to say UMA or NUMA? I guess it doesn't even change a lot with respect to programming difficulty, when cache coherency is not implemented... – sp00n Apr 13 '14 at 08:45
  • @sp00n I meant NUMA, but really, cache coherency is a completely different topic than NUMA vs. UMA, although they do interact... non-cache-coherent machines are challenging to write correct software for regardless of the actual time it takes to access memory. However, cache coherency is usually easier to achieve on a UMA machine, because there is typically only one access channel to the memory, as opposed to multiple different nodes that all have memory attached. – twalberg Apr 13 '14 at 14:05
  • @twalberg Thanks a lot for the clear explanation. :) – sp00n Apr 14 '14 at 09:24

0 Answers