Memory mapped IO and files

From Helpful

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Note that

memory mapped IO is a hardware-level construction, while
memory mapped files are a software construction -- because files themselves are.

Memory mapped files

Memory mapping of files is a technique (OS feature, system call) that pretends a file is accessible at some address in memory.

When the process accesses those memory locations, the OS fetches the actual contents from disk. The data can also be cached -- whether the OS does that at all depends a little on the OS and details(verify); whether it is in RAM depends on whether it was recently accessed by something.
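As a concrete sketch (using Python's mmap module; the filename is made up for illustration), a mapped file can be read and written through ordinary slice operations, with the OS doing the actual disk IO behind the scenes:

```python
# Sketch: map a file into memory and access it via slicing.
import mmap
import os

path = "example.bin"  # hypothetical filename
with open(path, "wb") as f:
    f.write(b"hello, mapped world")

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)  # length 0 = map the whole file
    print(mm[:5])                  # b'hello' -- reads go through the mapping
    mm[0:5] = b"HELLO"             # writes land in the mapped pages
    mm.flush()                     # ask the OS to write dirty pages back
    mm.close()

with open(path, "rb") as f:
    print(f.read())                # b'HELLO, mapped world'

os.remove(path)
```

Note that even without the explicit flush(), the OS will eventually write dirty pages back on its own schedule.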

For caching: in e.g. linux you get that interaction with the page cache, and the data stays cached as long as the RAM isn't needed by other things.

This can also save memory, compared to the easy choice of manually caching the entire file in your process.

With mmap you may cache only the parts you use, and if multiple processes want this file, you may avoid a little duplication.

The fact that the OS can flush most or all of this data can be seen as a limitation or a feature - it's not always predictable, but it does mean you can deal with large data sets without having to make very large allocations, which aren't nice to other apps.
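The "only the parts you use" point can be sketched by mapping just a window of a larger file (Python's mmap again; the offset must be a multiple of mmap.ALLOCATIONGRANULARITY, and the filename is made up):

```python
# Sketch: map only a 16-byte window of a larger file,
# rather than reading or mapping the whole thing.
import mmap
import os

path = "big.bin"  # hypothetical filename
with open(path, "wb") as f:
    f.write(b"\x00" * mmap.ALLOCATIONGRANULARITY)  # filler before our window
    f.write(b"interesting part")

with open(path, "rb") as f:
    window = mmap.mmap(
        f.fileno(),
        16,                                  # map just 16 bytes...
        access=mmap.ACCESS_READ,
        offset=mmap.ALLOCATIONGRANULARITY,   # ...starting at this offset
    )
    print(window[:])  # b'interesting part' -- only these pages get faulted in
    window.close()

os.remove(path)
```

Only the pages you actually touch need to be read from disk and take up RAM, which is the memory saving mentioned above.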

shared memory via memory mapped files

Most kernel implementations allow multiple processes to mmap the same file -- which effectively shares memory, and is probably one of the simplest ways to do that in a protected-mode system. (Some methods of Inter-Process Communication work via mmapping.)

Not clobbering each other's memory is still something you need to do yourself.

The implementation, limitations, and method of use vary per OS / kernel.

Often relies on demand paging to work.
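A minimal sketch of sharing memory this way (POSIX-only, since it uses os.fork; the filename is made up). The MAP_SHARED mapping is inherited across the fork, so the child's write is visible to the parent -- and the waitpid() stands in for the synchronization you'd need to do yourself:

```python
# Sketch: a file-backed shared mapping, visible to both parent and child.
import mmap
import os

path = "shared.bin"  # hypothetical filename
with open(path, "wb") as f:
    f.write(b"\x00" * mmap.PAGESIZE)  # the file backs the shared region

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), mmap.PAGESIZE)  # MAP_SHARED is the default

pid = os.fork()
if pid == 0:
    mm[0:5] = b"hello"     # child writes through the shared mapping
    os._exit(0)
else:
    os.waitpid(pid, 0)     # crude synchronization: wait for the child
    print(mm[0:5])         # b'hello' -- the child's write, seen by the parent
    mm.close()
    os.remove(path)
```

Without the waitpid() (or a lock, semaphore, or similar), the parent could read before the child writes -- the "not clobbering each other" problem mentioned above.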

Memory mapped IO

Map devices into memory space (statically or dynamically), meaning that memory accesses to those areas are actually backed by IO accesses (...that you can typically also do directly).

This mapping is made and resolved at hardware level, and only works for DMA-capable devices (which many are).

It seems to often be done to have a simple generic interface (verify) - it means drivers and software can avoid many hardware-specific details.

DMA

Direct Memory Access comes down to additional hardware that can be programmed to copy bytes from one memory range to another, ...meaning the CPU doesn't have to dedicate time to do this.

DMA is independent enough at hardware level that its transfers can work at high clock rates, and so at fairly high throughput.

Depending a little on the design, the CPU may be faster at copying if it was otherwise idle; when the CPU is not idle, the extra context switching may slow things down, and DMA may be relatively free. Details vary with specific hardware designs.

DMA transfers tend to work in small chunks, triggered by DRQs (similar in concept to IRQs, but triggering only a copy rather than arbitrary code).

The details look intimidating at first, but mostly because they are low-level. The idea is actually relatively simple.

And yes, you do often have to worry about avoiding race conditions, though there are some standard-ish tricks.

Aside from memory-to-memory use, it also allows memory-to-peripheral copies (if the device in question supports that and is memory mapped(verify)).