On computer memory


On memory fragmentation

Fragmentation in general

Slab allocation

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


The slab allocator manages caches of fixed-size objects.

Slab allocation is often used in kernel modules/drivers that are perfectly happy allocating only uniform-sized, potentially short-lived structures - think task structures, filesystem internals, network buffers.

Fixed sizes, often with a separate cache per specific type, make it easier to write an allocator that guarantees allocation within a very small timeframe: it avoids the general "hey, let me look at RAM and all the allocations currently in there" work - you can track which slots are taken with a simple bitmask, and such a cache cannot fragment internally.

There may also be generic caches, not tied to specific data structures but to fixed sizes like 4K, 8K, 32K, 64K, 128K, etc., used for things that have known bounds but not precise sizes - similarly low-overhead allocation at the cost of some wasted RAM.
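
To make the bitmask idea concrete, here is a minimal userspace sketch in C (names and sizes made up for illustration, and nothing like the kernel's actual code) of one slab of fixed-size slots, with a bitmask tracking which slots are taken:

 /* Minimal illustration of slab-style allocation: one chunk of fixed-size
    slots, with free/taken tracked in a bitmask. */
 #include <stdint.h>
 #include <stdio.h>
 
 #define SLOT_SIZE  64               /* every object in this cache is 64 bytes     */
 #define SLOT_COUNT 64               /* one uint64_t bitmask covers the whole slab */
 
 static unsigned char slab[SLOT_SIZE * SLOT_COUNT];
 static uint64_t used;               /* bit i set = slot i taken */
 
 static void *slab_alloc(void) {
     for (int i = 0; i < SLOT_COUNT; i++)
         if (!(used & (1ULL << i))) {        /* first free slot */
             used |= 1ULL << i;
             return slab + (size_t)i * SLOT_SIZE;
         }
     return NULL;                            /* slab full */
 }
 
 static void slab_free(void *p) {
     size_t i = ((unsigned char *)p - slab) / SLOT_SIZE;
     used &= ~(1ULL << i);
 }
 
 int main(void) {
     void *a = slab_alloc(), *b = slab_alloc();
     slab_free(a);                           /* the hole is exactly one slot...  */
     void *c = slab_alloc();                 /* ...so it is reused immediately   */
     printf("b=%p, c reuses a's slot: %s\n", b, a == c ? "yes" : "no");
     return 0;
 }

Allocation and free are trivial bit operations here (and effectively O(1) with a find-first-zero-bit instruction), which is part of why the real thing can promise small, predictable allocation times.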


Upsides:

  • each such cache is easy to handle
  • it avoids the fragmentation that the otherwise-typical buddy system still has, because all holes are of the same size
  • that also makes slab allocation/free simpler, and thereby a little faster
  • it is easier to fit objects to hardware caches

Limits:

It still deals with the page allocator under the covers, so deallocation patterns can still mean that the pages backing a cache become sparsely filled - which wastes space.


SLAB, SLOB, SLUB:

  • SLOB: K&R allocator (1991-1999), aims to allocate as compactly as possible. But fragments faster than various others.
  • SLAB: Solaris type allocator (1999-2008), as cache-friendly as possible.
  • SLUB: Unqueued allocator (2008-today): Execution-time friendly, not always as cache friendly, does defragmentation (mostly just of pages with few objects)


For some indication of what's happening, look at slabtop and /proc/slabinfo

See also:


There are similar higher-level "I will handle things of the same type" allocators, from custom pool allocators in C, to object allocators in certain language runtimes, to arguably even just the implementation of certain data structures.

Memory mapped IO and files

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Note that

memory mapped IO is a hardware-level construction, while
memory mapped files are a software construction (...because files are).


Memory mapped files

Memory mapping of files is a technique (OS feature, system call) that pretends a file is accessible at some address in memory.

When the process accesses those memory locations, the OS will scramble for the actual contents from disk.

Whether this will then be cached depends a little on the OS and details(verify).
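
For a concrete feel, a minimal POSIX sketch (error handling kept short, the file path is just an example) that maps a file read-only and then reads it through plain pointer accesses:

 /* Sketch: map a file read-only and read it through memory (POSIX mmap). */
 #include <sys/mman.h>
 #include <sys/stat.h>
 #include <fcntl.h>
 #include <stdio.h>
 #include <unistd.h>
 
 int main(int argc, char **argv) {
     const char *path = argc > 1 ? argv[1] : "/etc/hostname";   /* any readable file */
     int fd = open(path, O_RDONLY);
     if (fd < 0) { perror("open"); return 1; }
 
     struct stat st;
     if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }
 
     /* After this, byte i of the file is simply data[i]; pages are faulted in
        from disk (or served from the page cache) as they are touched. */
     char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
     if (data == MAP_FAILED) { perror("mmap"); return 1; }
 
     size_t newlines = 0;
     for (off_t i = 0; i < st.st_size; i++)
         if (data[i] == '\n') newlines++;
     printf("%zu newlines in %s\n", newlines, path);
 
     munmap(data, st.st_size);
     close(fd);
     return 0;
 }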


For caching

In e.g. linux this interacts with the page cache, and the data is and stays cached as long as there is RAM for it.


This can also save memory, compared to the easy choice of manually caching the entire file in your process.

With mmap you may cache only the parts you use, and if multiple processes want this file, you may avoid a little duplication.


The fact that the OS can flush most or all of this data can be seen as a limitation or a feature - it's not always predictable, but it does mean you can deal with large data sets without having to think about very large allocations, and how those aren't nice to other apps.
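
If you want to nudge (not command) that behaviour, POSIX offers advisory calls such as posix_madvise(); a small sketch, given an already-mapped range:

 /* Sketch: advisory hints on an existing mapping (addr, len).
    These are hints only - the OS is free to ignore them. */
 #include <stddef.h>
 #include <sys/mman.h>
 
 void hint_sequential_then_done(void *addr, size_t len) {
     /* we expect to read this range sequentially; please read ahead */
     posix_madvise(addr, len, POSIX_MADV_SEQUENTIAL);
     posix_madvise(addr, len, POSIX_MADV_WILLNEED);
 
     /* ...later, once done with it: we probably won't touch it again,
        so it is a good candidate to drop from RAM */
     posix_madvise(addr, len, POSIX_MADV_DONTNEED);
 }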


shared memory via memory mapped files

Most kernel implementations allow multiple processes to mmap the same file -- which effectively shares memory, and is probably one of the simplest ways to do so in a protected-mode system. (Some methods of inter-process communication work via mmapping.)


Not clobbering each other's memory is still something you need to do yourself.

The implementation, limitations, and method of use varies per OS / kernel.

Often relies on demand paging to work.
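
A minimal sketch of the idea on POSIX systems - here an anonymous MAP_SHARED mapping inherited across fork(); the same works with a regular file or shm_open() between unrelated processes. Locking is deliberately omitted:

 /* Sketch: shared memory between parent and child via a MAP_SHARED mapping.
    No locking shown - real code would add a semaphore, futex, or similar. */
 #define _DEFAULT_SOURCE             /* for MAP_ANONYMOUS on some systems */
 #include <sys/mman.h>
 #include <sys/wait.h>
 #include <stdio.h>
 #include <unistd.h>
 
 int main(void) {
     /* with MAP_SHARED, both processes see the same single copy of this page */
     int *shared = mmap(NULL, sizeof *shared, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
     if (shared == MAP_FAILED) { perror("mmap"); return 1; }
     *shared = 0;
 
     pid_t pid = fork();
     if (pid == 0) {                 /* child writes...                        */
         *shared = 42;
         _exit(0);
     }
     waitpid(pid, NULL, 0);          /* ...parent waits, then sees that write  */
     printf("parent reads %d\n", *shared);
     return 0;
 }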

Memory mapped IO

Memory-mapped IO maps devices into the memory address space (statically or dynamically), meaning that memory accesses to those areas are actually backed by IO accesses to the device (accesses which, on some platforms, you can also do via separate IO instructions).

This mapping is made and resolved at hardware level. (It is distinct from DMA - many memory-mapped devices also do DMA, but MMIO itself does not require it.)

It seems to often be done to have a simple generic interface (verify) - it means drivers and software can avoid many hardware-specific details.
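
On Linux you can get a taste of this from user space: the kernel exposes a PCI device's memory BARs as files under /sys/bus/pci/devices/, and mmap()ing one turns plain loads/stores into device register accesses. A hedged sketch - the device address below is a placeholder (pick a real one from lspci), it needs root, and it only works for memory-type BARs:

 /* Sketch: mapping a PCI device's BAR0 via sysfs (Linux, needs root).
    0000:00:02.0 is a placeholder device address. */
 #include <sys/mman.h>
 #include <fcntl.h>
 #include <stdint.h>
 #include <stdio.h>
 #include <unistd.h>
 
 int main(void) {
     const char *bar = "/sys/bus/pci/devices/0000:00:02.0/resource0";
     int fd = open(bar, O_RDWR | O_SYNC);
     if (fd < 0) { perror("open"); return 1; }
 
     size_t len = 4096;                      /* just the first page of the BAR */
     void *m = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
     if (m == MAP_FAILED) { perror("mmap"); return 1; }
 
     volatile uint32_t *regs = m;            /* volatile: these are device registers */
     /* this read is a bus access to the device, not a read from RAM */
     printf("register 0: 0x%08x\n", (unsigned)regs[0]);
 
     munmap(m, len);
     close(fd);
     return 0;
 }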


See also:

DMA

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Direct Memory Access comes down to additional hardware that can be programmed to copy bytes from one memory address to another, meaning the CPU doesn't have to do this.

DMA is independent enough at hardware level that its transfers can work at high clocks and throughputs (and without interrupting other work), comparable to CPU copies. (The CPU may be faster if it was otherwise idle; when the CPU is busy, the extra context switching may slow things down and DMA may be relatively free. Details vary with specific designs, though.)


Transfers tend to happen in smaller chunks, triggered by DRQ (similar in concept to IRQs, but triggering only a smallish copy rather than arbitrary code), so the work can be coordinated chunk by chunk.

The details look intimidating at first, but mostly because they are low-level. The idea is actually relatively simple.


Aside from memory-to-memory use, it also allows copies between memory and peripherals (given a supporting device that is memory mapped(verify)).




Memory limits on 32-bit and 64-bit machines

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


tl;dr:

  • If you want to use significantly more than 4GB of RAM, you want a 64-bit OS.
  • ...and since that is now typical, most of the details below are irrelevant


TODO: the distinction between (effects from) physical and virtual memory addressing should be made clearer.


Overall factoids

OS-level and hardware-level details:

From the I want my processes to map as much as possible angle:

  • the amount of memory a single process could hope to map is typically limited by its pointer size: ~4GB on a 32-bit OS, vastly more on a 64-bit OS.
Technically this could be entirely up to the OS, but in reality it is tied intimately to what the hardware natively does, because anything else would be slooow.
  • Most OS kernels have a kernel/user split (for their own ease) which means that of the area a program can map, less is allocatable - down to perhaps 3GB, 2GB, sometimes even 1GB
this is partly a pragmatic implementation detail from back when 32 megabytes was a lot of memory, left over ever since


  • Since the OS is in charge of virtual memory, it maps each process to physical memory separately, so in theory you can host multiple 32-bit processes that together use more than 4GB
...even on 32-bit OSes: you can for example compile the 32-bit linux kernel to use up to 64GB this way
a 32-bit OS can only do this through PAE, which has to be supported and enabled in the CPU/motherboard, and supported and enabled in the OS.
Note: both 32-bit and 64-bit PAE-supporting motherboards may have somewhat strange limitations, e.g. in the amount of memory they will actually allow/support (mostly a problem in early PAE motherboards)
and PAE was problematic anyway - it's a nasty hack in nature, and e.g. drivers had to support it. It was eventually disabled in consumer windows (XP) partly for this reason. In the end it was mostly seen in servers, where the hardware and drivers were easier to oversee.


  • device memory maps take away part of the addressable space, which on 32-bit systems often meant that you couldn't use all of an installed 4GB



On 32-bit systems:

Process-level details:

  • No single 32-bit process can ever map more than 4GB as addresses are 32-bit byte-addressing things.
  • A process's address space has reserved parts, to map things like shared libraries, which means a single app can actually allocate less than it can map(verify) (often by at most a few hundred MBs). Usually no more than ~3GB can be allocated, sometimes less (a crude probe of this follows below).
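
A crude way to see this for yourself, assuming a POSIX-ish system with both 32-bit and 64-bit toolchains: build the sketch below with and without -m32 and compare how far it gets. Overcommit settings and ulimits also influence the answer, so treat it as an indication only.

 /* Sketch: roughly how much address space can this process malloc()?
    Compare 'cc -m32 probe.c' with a plain 64-bit build.
    Memory is reserved but never touched, so this mostly probes address space. */
 #include <stdio.h>
 #include <stdlib.h>
 
 int main(void) {
     const size_t chunk_mb = 64;
     size_t total_mb = 0;
     printf("pointer size: %zu bits\n", 8 * sizeof(void *));
     for (int i = 0; i < 1024; i++) {        /* cap at 64GB so 64-bit builds stop */
         if (malloc(chunk_mb * 1024 * 1024) == NULL)
             break;                          /* leaks on purpose; we exit right after */
         total_mb += chunk_mb;
     }
     printf("could reserve about %zu MB\n", total_mb);
     return 0;
 }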


On 64-bit systems:

  • none of the potentially annoying limitations that 32-bit systems have apply
(assuming you are using a 64-bit OS, and not a 32-bit OS on a 64-bit system).
  • The architecture lets you map 64-bit addresses
...in theory, anyway. The instruction set is set up for 64-bit everything, but current x86-64 implementations wire up 48 address bits (for 256TiB), mainly because that can be increased later without breaking compatibility, and right now it saves copper and silicon that 99% of computers would never use
...and because in practice it is still far more than you can currently physically put in most systems. (There are a few supercomputers for which this matters, but arguably even there it's not so important, because horizontal scaling is generally more useful than vertical scaling. There are also a few architectures designed with larger-than-64-bit address spaces.)


On both 32-bit (PAE) and 64-bit systems:

  • Your motherboard may have assumptions/limitations that impose some lower limits than the theoretical one.
  • Some OSes may artificially impose limits (particularly the more basic versions of Vista seem to do this(verify))


Windows-specific limitations:

  • 32-bit Windows XP (since SP2) gives you no PAE memory benefits. You may still be using the PAE version of the kernel if you have DEP enabled (no-execute page protection) since that requires PAE to work(verify), but PAE's memory upsides are disabled (to avoid problems with certain buggy PAE-unaware drivers, possibly for other reasons)
  • 64-bit Windows XP: ?
  • the /3GB switch moves the user/kernel split, but for a single process to map more than 2GB, that executable must be marked large-address aware
  • Vista: different versions have memory limits that seem to be purely artificial (8GB, 16GB, 32GB, etc.) (almost certainly for market segmentation)

Longer story / more background information

A 32-bit machine implies memory addresses are 32-bit, as is the memory address bus to go along with it. It's more complex than that, but the net effect is still that you can address 2^32 bytes at byte resolution, so technically you can access up to 4GB.


The 'but' you hear coming is that 4GB of address space doesn't mean 4GB of memory use.


The device hole (32-bit setup)

One of the reasons the limit actually lies lower is devices. The top of the 4GB memory space (usually directly under the 4GB position) is used to map devices.

If you have close to 4GB of memory, this means part of your memory is not addressable by the CPU, and effectively missing. The size of this hole depends on the actual devices, chipset, BIOS configuration, and more(verify).


The BIOS settles the memory address map(verify), and you can inspect the effective map (Device Manager in windows, /proc/iomem in linux) if you want to know whether it is hardware actively using the space (the hungriest devices tend to be video cards - at the time, two 768MB nVidia 8800s in SLI was one of the worst cases), or whether your motherboard simply doesn't support more than, say, 3GB at all. Either can be the reason some people report seeing as little as 2.5GB of the 4GB they plugged in.


This problem goes away once you run a 64-bit OS on a 64-bit processor -- though there were some earlier motherboards that still had old-style addressing leftovers and hence some issues.


Note that the subset of these issues caused purely by limited address space on 32-bit systems could also be alleviated, using PAE:

PAE

Virtual memory is near-universal now. While the prime upside is probably memory isolation, the fact that a memory map is kept per process also means that on 32-bit, each application has its own 4GB memory map without interfering with anything else (virtual mapping practice allowing).

Which means that while each process can use at most 4GB, if the OS can see more memory it can map distinct physical memory into each process, so that collectively they use more than 4GB (or just your full 4GB even with device holes).


Physical Address Extension is a memory mapping extension (not a hack, as some people think) that does roughly that. PAE needs specific OS support, but doesn't need to break the 32-bit model as applications see it.

It allows mapping 32-bit virtual addresses into a 36-bit physical address space, which allows for 64GB (though most motherboards had a lower limit).


PAE implies some extra work on each memory operation, but because there's hardware support it only kicked a few percent off memory access speed.


All newish linux and windows versions support PAE, at least technically. However:

  • The CPU isn't the only thing that accesses memory. Although many descriptions I've read seem kludgey, I easily believe that any device driver that does DMA and is not PAE-aware may break things -- such drivers are broken in that they do not know that the 64-bit pointers used internally should be limited to 36-bit use.
  • PAE was disabled in WinXP's SP2 to increase stability around such issues, while server windowses are less likely to have problems since they tend to use more standard hardware and thereby drivers.

Kernel/user split

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

The kernel/user split (mostly relevant on 32-bit OSes) refers to an OS-enforced split of each process's mappable address space between the kernel and the process itself.


It looks like windows by default gives 2GB to both, while (modern) linuces apparently split into 1GB kernel, 3GB application by default (which is apparently rather tight on AGP and a few other things).

(Note: '3GB for apps' means that any single process is limited to map 3GB. Multiple processes may sum up to whatever space you have free.)


In practice you may want to shift the split, particularly in Windows, since almost everything that would want >2GB memory runs in user space - mostly databases. The exception is Terminal Services (Remote Desktop), which seems to be kernel space.

It seems that:

  • linuxes tend to allow 1/3, 2/2 and 3/1,
  • BSDs allow the split to be set to whatever you want(verify).
  • It seems(verify) windows can only shift its default 2/2 split to 1GB kernel, 3GB application, using the /3GB boot option (the feature is somewhat confusingly called 4GT) - but windows applications are normally compiled with the 2/2 assumption and will not be helped unless built to take advantage of it. Exceptions primarily include database servers.
  • You may be able to work around it with a 4G/4G split patch, combined with PAE - with some overhead.

See also



Some understanding of memory hardware

"What Every Programmer Should Know About Memory" is a good overview of memory architectures, RAM types, reasons bandwidth and access speeds vary.


RAM types

DRAM - Dynamic RAM

lower component count per cell than most (transistor+capacitor mainly), so high-density and cheaper
yet capacitor leakage means it has to be refreshed regularly, meaning a DRAM controller, more complexity, and higher latency than some other types
(...which can be alleviated and is less of an issue when you have multiple chips)
this or a variant is typical as main RAM, due to low cost per bit


SDRAM - Synchronous DRAM - is mostly a practical design consideration

...that of coordinating the DRAM via an external clock signal (previous DRAM was asynchronous, manipulating state as soon as lines changed)
This allows the interface to that RAM to be a predictable state machine, which allows easier buffering, and easier interleaving of internal banks
...and thereby higher data rates (though not necessarily lower latency)
SDR/DDR:
DDR doubled the bus rate by widening the (minimum) units they read/write (double that of SDR), which they can do from a single DRAM bank(verify)
similarly, DDR2 uses units 4x larger than SDR, and DDR3 8x larger
DDR4 uses the same width as DDR3, instead doubling the bus rate by interleaving accesses from banks
this is largely unrelated to latency; separately, the bus frequency also increased over time.


Graphics RAM refers to varied RAM types specialized for use on video cards

Earlier versions would e.g. allow reads and writes (almost) in parallel, making for lower-latency framebuffers
"GDDR" is a somwhat specialized form of DDR SDRAM



SRAM - Static RAM

Has a higher component count per cell (6 transistors) than e.g. DRAM
Retains state as long as power is applied to the chip, no need for refresh, also making it a little lower-latency
no external controller, so simpler to use
e.g. used in caches, due to speed, and acceptable cost at the smaller sizes involved


PSRAM - PseudoStatic RAM

A tradeoff somewhere between SRAM and DRAM
in that it's DRAM with built-in refresh, so functionally it's as standalone as SRAM; it is slower, but you get a lot more of it for the same price (standalone SRAM tends to stay small and comparatively expensive)
(yes, regular DRAM can also have self-refresh, but there it usually refers to a sleep mode that retains state without requiring an active DRAM controller)




Memory stick types

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


ECC RAM

can detect many (and correct some) hardware errors in RAM
The rate of bit-flips is low, but they will happen. If your computations or data are very important to you, you want ECC.
See also:
http://en.wikipedia.org/wiki/ECC_memory
DRAM Errors in the Wild: A Large-Scale Field Study


Registered RAM (sometimes buffered RAM) basically places a buffer on the DRAM modules (register as in hardware register)

offloads some electrical load from the memory controller onto these buffers, making it easier for a design to stably connect more individual memory sticks/chips
...at a small latency hit
typical in servers, because they can accept more sticks
Must be supported by the memory controller, which means it is a motherboard design choice to go for registered RAM or not
pricier (more electronics, fewer units sold)
because of this correlation with server use, most registered RAM is specifically registered ECC RAM
yet there is also unregistered ECC, and registered non-ECC, which can be good options on specific designs of simpler servers and beefy workstations.
sometimes called RDIMM -- in the same context UDIMM is used to refer to unbuffered
https://en.wikipedia.org/wiki/Registered_memory

FB-DIMM, Fully Buffered DIMM

same intent as registered RAM - more stable sticks on one controller
the buffering goes further: a buffer chip (AMB) on each module sits logically between the memory controller and the DRAM, buffering data as well as commands
physically different pinout/notching


SO-DIMM (Small Outline DIMM)

Physically more compact. Used in laptops, some networking hardware, some Mini-ITX


EPP and XMP (Enhanced Performance Profile, Extreme Memory Profiles)

basically, one-click overclocking for RAM, by storing overclocked timing profiles
so the BIOS can configure faster timings (and Vdimm and such) according to the modules, rather than by your own trial and error
normally, memory timing is configured according to a table in the SPD, which holds JEDEC-approved ratings and is typically conservative.
EPP and XMP basically mean running the modules about as fast as they can go (and typically at higher voltage)



On pin count

SO-DIMM tends to have a different pin count
e.g. DDR3 has 240 pins, DDR3 SO-DIMM has 204
e.g. DDR4 has 288 pins, DDR4 SO-DIMM has 260
Registered RAM has the same pin count
ECC RAM has the same pin count


In any case, the type of memory must be supported by the memory controller

DDR2/3/4 - different generations physically won't fit in each other's sockets (the notches differ)
Note that while some controllers (e.g. those in CPUs) support two generations, a motherboard will typically have just one type of memory socket
registered or not
ECC or not

Historically, the RAM controller was a separate chip on the motherboard near the CPU (part of the northbridge), while in many modern CPUs the controller sits on the CPU itself.

More on DRAM versus SRAM

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


On ECC

Buffered/registered RAM

EPROM, EEPROM, and variants

PROM is Programmable ROM

can be written exactly once

EPROM is Erasable Programmable ROM.

often implies UV-EPROM, erased with UV light shone through a quartz window.

EEPROM's extra E means Electrically Erasable

meaning erasing is now an electrical command (no UV needed).
early EEPROM read, wrote, and erased(verify) a single byte at a time; modern EEPROM can work in larger chunks.
you only get a limited number of erases (much like Flash - which is arguably just an evolution of EEPROM)


Flash memory (intro)

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


PRAM