
Just for clarity: does "local memory" refer to the memory allocated to a certain program? And does "global memory" refer to the main memory?

I am reading about Uniform Memory Access time and Non-Uniform Memory Access time. They say a multiprocessor computer has a uniform memory access time if the time it takes to access data locally is the same as the time it takes to access data globally.

I thought by "locally" they are referring to a cache, but in the preceding statements they clarify that a local memory is not a cache.

Sibulele
  • In what context? – Burak Serdar Aug 12 '22 at 13:13
  • @BurakSerdar I just updated my post. Sorry for being vague – Sibulele Aug 12 '22 at 13:19
  • In NUMA context, I believe local memory refers to memory that is closer to a processor, and global memory refers to the memory as a whole. Read about NUMA, it is explained there usually. – Burak Serdar Aug 12 '22 at 13:22
  • @BurakSerdar Yeah, I see the diagrams of NUMA and UMA. I guess my question should be: does the local memory act as a cache with a bigger size? I don't understand why we need a local memory if we have a cache, unless it is because caches cannot be made big enough to hold the amount of data a local memory can. – Sibulele Aug 12 '22 at 13:31
  • No, in a NUMA architecture, local memory is not cache. Memory is partitioned so that every processor has potentially quicker access to its local memory, because there is less contention. – Burak Serdar Aug 12 '22 at 13:37
  • @BurakSerdar So you are saying that in a NUMA architecture we do not have a global memory; the memory is partitioned across all processors? We only have a global memory in a UMA architecture, where all processors have the same access time to a shared memory? – Sibulele Aug 12 '22 at 13:41
  • Global memory is simply all the memory combined. In NUMA, processes running on a single processor have faster access to their own memory. Those processes can still access memory assigned to other processors. – Burak Serdar Aug 12 '22 at 13:44
  • Okay, thanks. So is it safe to say that, since caches cannot be as big as we would want them to be, NUMA backs up caches by providing memory closer to them (local memory)? – Sibulele Aug 12 '22 at 13:59
  • Oh, I think I get it now. NUMA architecture is in distributed memory systems, while UMA is in shared memory systems. – Sibulele Aug 12 '22 at 14:16
  • Not so about the cache. NUMA is an architecture in which the OS schedules processes to CPUs, and allocates memory for those processes in the memory block assigned to that CPU, so that the process can use the memory through a dedicated bus, as opposed to sharing the memory bus with other processors accessing the memory. – Burak Serdar Aug 12 '22 at 14:46
  • No, NUMA is not "in distributed memory systems" (at least not only). NUMA is a shared-memory architecture in which every NUMA node can access the memory of the other NUMA nodes (hence the "shared"). Distributed memory requires nodes to make explicit requests, typically over a network interconnect. Distributed shared memory can be used to abstract away this limitation and see, for example, a cluster of interconnected machines as one big NUMA machine, but this is a pretty unusual case (and the memory is not really shared from the OS's point of view). NUMA is just about non-uniform access times to the local memory of other nodes. – Jérôme Richard Aug 12 '22 at 17:00
  • Note that a NUMA node can be, for example, a part of a microprocessor (this is the case for Xeon Phi processors and AMD Zen ones) or a whole microprocessor (as in usual multi-socket servers). NUMA machines are now very widespread. For more information, please read the very famous article https://people.freebsd.org/~lstewart/articles/cpumemory.pdf . It is quite long, but it is very instructive! – Jérôme Richard Aug 12 '22 at 17:06

0 Answers