
What is the difference between DMA and memory-mapped IO? They both look similar to me.

Martin Thoma
brett
  • Also related: [Linux Device Drivers, 2nd Edition: Chapter 13: mmap and DMA](http://www.xml.com/ldd/chapter/book/ch13.html); first reading the answers here helped me a lot, though. – sdaau Jan 22 '14 at 12:29

5 Answers


Memory-mapped I/O allows the CPU to control hardware by reading and writing specific memory addresses. Usually, this would be used for low-bandwidth operations such as changing control bits.

DMA allows hardware to directly read and write memory without involving the CPU. Usually, this would be used for high-bandwidth operations such as disk I/O or camera video input.

For a thorough comparison between MMIO and DMA, see the paper *Design Guidelines for High Performance RDMA Systems*.

Alireza Sanaee
Greg Hewgill
  • So they are basically the same thing but in opposite directions? – Jacquelyn.Marquardt Jan 23 '17 at 21:25
  • Not exactly. DMA is when two devices that aren't the CPU use the memory bus to communicate (with one device usually being main memory, and the process being orchestrated by the CPU). Memory-mapped IO is the CPU talking to a device on the memory bus that is not main memory. – jdizzle Jul 30 '17 at 04:11
  • Why is it required to map the DMA buffer even if the DMA engine is in the device? – ransh Feb 07 '18 at 12:50
  • This is not correct. What you assume for memory-mapped I/O is actually programmed I/O. Memory-mapped I/O is usually contrasted with port-mapped I/O: the way the CPU accesses data in devices. – Han XIAO Mar 16 '20 at 04:58

Since others have already answered the question, I'll just add a little bit of history.

Back in the old days, on x86 (PC) hardware, there was only I/O space and memory space. These were two different address spaces, accessed with different bus protocols and different CPU instructions, but able to talk over the same plug-in card slot.

Most devices used I/O space for both the control interface and the bulk data-transfer interface. The simple way to access data was to execute lots of CPU instructions to transfer data one word at a time from an I/O address to a memory address (sometimes known as "bit-banging").

The ISA bus protocol had no support for devices to initiate transfers, so data could not move from devices to host memory autonomously. A compromise solution was invented: the DMA controller. This was a piece of hardware that sat up by the CPU and initiated transfers to move data from a device's I/O address to memory, or vice versa. Because the I/O address is the same, the DMA controller is doing the exact same operations as the CPU would, but a little more efficiently, and it allows the CPU some freedom to keep running in the background (though possibly not for long, as it can't talk to memory during the transfer).

Fast-forward to the days of PCI, and the bus protocols got a lot smarter: any device can initiate a transfer. So it's possible for, say, a RAID controller card to move any data it likes to or from the host at any time it likes. This is called "bus master" mode, but for no particular reason people continue to refer to this mode as "DMA" even though the old DMA controller is long gone. Unlike old DMA transfers, there is frequently no corresponding I/O address at all, and the bus master mode is frequently the only interface present on the device, with no CPU "bit-banging" mode at all.

Eric Seppanen
  • In the Linux Kernel `DMA` is mentioned in over 5000 C files, which might be a reason why everyone still talks about DMA. – JohnnyFromBF Jun 17 '15 at 16:35
  • Theoretically PCI Bus Master is "Direct Memory Access" for a PCI device; it has become a generalized concept, like a lot of things nowadays. I.e. "memory" is not just RAM/VRAM; it could be virtual memory on disk. For mobile phones, "memory" can even mean "storage", which I was not used to in the beginning. – crazii Sep 08 '22 at 07:04

Memory-mapped IO means that the device registers are mapped into the machine's memory space - when those memory regions are read or written by the CPU, it's reading from or writing to the device, rather than real memory. To transfer data from the device to an actual memory buffer, the CPU has to read the data from the memory-mapped device registers and write it to the buffer (and the converse for transferring data to the device).

With a DMA transfer, the device is able to directly transfer data to or from a real memory buffer itself. The CPU tells the device the location of the buffer, and then can perform other work while the device is directly accessing memory.

caf

Direct Memory Access (DMA) is a technique for transferring data from I/O to memory and from memory to I/O without the intervention of the CPU. For this purpose, a special chip, the DMA controller, is used to control all activities and synchronization of the transfer. As a result, compared with other data-transfer techniques, DMA is much faster.

On the other hand, virtual memory acts as a cache between main memory and secondary memory. Data is fetched in advance from secondary memory (the hard disk) into main memory so that it is already available when needed. It allows us to run more applications on the system than the physical memory alone could support.


Robert Houghton
Usman Gill

The answers omit the fact that DMA can be used by the CPU to read from or write to an I/O device without constantly polling it, and without being interrupted for every single character (see the topic of programmed I/O vs. interrupt-driven I/O vs. DMA).