On computer memory

From Helpful
Revision as of 16:39, 28 December 2010 by Helpful


Memory card types

These are primarily notes.
They won't be complete in any sense.
They exist to contain fragments of useful information.

See also http://en.wikipedia.org/wiki/Comparison_of_memory_cards

Secure Digital (SD, miniSD, microSD)

MiniSD and microSD are pin-compatible but smaller versions of SD.

Adapters to larger sizes exist (which are no more than plastic to hold the smaller card and wires to connect it).

You tend to see microSD more often than miniSD, probably largely because microSD is handier for devices that want to stay small.

MicroSD was previously called TransFlash, and in some areas of the world that is still a common enough name.

MultiMediaCard (MMC)

The predecessor of SD, and largely compatible in that SD hosts tend to also support MMC cards.

CompactFlash (CF)

Type I:

  • 3.3mm thick

Type II:

  • 5mm thick

Memory Stick (Duo, Pro, Micro (M2), etc.)


There are a few types, with maximum sizes (e.g. 512MB, 2GB) varying among them.

Apparently quite similar to SmartMedia

SmartMedia (SM)

  • Very thin (for its length/width)
  • capacity limited to 128MB (...so not seen much anymore)

Main/working memory

See also Computer_hardware_notes#Memory_limits_on_32-bit_and_64-bit_machines


See also:

On the hardware side

On the software side

On virtual memory

On swapping

Swapping / paging


The name paging comes from the nature of many virtual memory systems: they deal with (fixed-size) memory pages, and it is natural to base this augmentation on that.

Swapping to swap files and paging to page files refers to moving things from physical memory to a backing store (often disk), and back. The words paging and swapping have been synonymous for a while (they weren't always).

Swapping is a way to pretend you have more memory than you physically have. This is done mostly so that all actively used areas can sit in RAM, while rarely used (and even never-used) memory doesn't take up space that your fast RAM could put to better use.

The reason for swapping out is generally to free up system memory, and there are various ways to do this cleverly. For example, a program's virtual memory space is often made of some parts that are never used, (often small) bits that are continuously used, and everything in between. Sections that are never used need not be committed at all; sections that are rarely used may be paged out if that means other active programs can fit more comfortably in main memory.

There are further arguments you can have about this. For example, do you want the OS to page things out only on demand, so that everything that may never be used stays in memory in case it actually is used? Or do you want it to page out everything that seems unused, so that a newly starting program can immediately use main memory, without a bunch of churning while things are redistributed?

This is also a reason high swap usage doesn't always imply heavy swapping: there may be programs that allocated memory they may never use (common in systems that do their own memory management, such as the VMs for Java, .NET, Python, and such), which may never even become committed.

It also means that bloatware isn't as bad as it seems. If an executable is 20MB but only 200KB of its code is actually used, chances are the rest will be in swap soon enough and not take up physical memory.
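The allocated-versus-committed distinction can be sketched with an anonymous memory mapping (a Python sketch; the commit-on-touch behaviour described in the comments assumes Linux-style default overcommit):

```python
import mmap

# Reserve 64MB of address space with an anonymous mapping. Under
# Linux-style default overcommit, this does not consume physical RAM
# yet: pages are typically only backed once they are first touched.
size = 64 * 1024 * 1024
m = mmap.mmap(-1, size)

# Writing to a page is what actually commits it to physical memory.
m[0:4096] = b'\x01' * 4096
first_page = m[0:4096]

m.close()
```

Tools like top then report the full reservation as virtual size (VIRT) but only the touched pages as resident (RES).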


Thrashing is what happens when programs actively combine to use more memory than there is physical memory to back it.

When that happens, some of the memory you continually use has to come from and go to disk, and continuously instead of sporadically.

In the worst case, this means that every program's memory access speeds slow down to the speed of your disk rather than that of memory, which is orders of magnitude slower.
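To put those orders of magnitude in numbers, a rough back-of-the-envelope comparison (the latency figures are ballpark assumptions for illustration, not measurements):

```python
# Ballpark latencies (assumptions for illustration):
ram_access_s = 100e-9   # ~100 ns for a main-memory access
disk_seek_s = 10e-3     # ~10 ms for a rotating-disk seek

# Disk is roughly five orders of magnitude slower per access,
# which is why thrashing makes everything crawl.
ratio = disk_seek_s / ram_access_s   # ~100,000
```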

A disk constantly rattling is a good indication that this is happening.

Not-so-bad cases are those in which only one of the programs is really active at a time, or at least, when its memory accesses happen mostly when you actually have it in the foreground and are using it. Whenever you switch between several such apps, the drive will churn a lot and things will be slow, for a number of seconds or more, but it will get better fairly soon (as the now-active application is swapped in and the others swapped out).

(Note this is a description of behaviour of a good number of business machines, as those regularly have barely enough memory for a single serious app)

See also


This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Linux's swappiness is a controllable factor in the aggressiveness of swapping allocated-and-used-but-inactive pages to disk.

Higher swappiness values mean the tendency to swap out is higher. (the VM system uses more information, including the currently mapped ratio, and a measure of how much trouble the kernel has recently had freeing up memory)(verify)

In Linux you can use /proc or sysctl to check and set swappiness:

cat /proc/sys/vm/swappiness
sysctl vm.swappiness

...shows you the current swappiness (a number between 0 and 100), and you can set it with something like:

echo 60 > /proc/sys/vm/swappiness
sysctl -w vm.swappiness=60
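Either of those only holds until reboot; to make a value persistent it is usually put in sysctl configuration (a sketch; the exact file, and whether /etc/sysctl.d is used, varies per distribution):

```shell
# In /etc/sysctl.conf (or a file under /etc/sysctl.d/):
vm.swappiness = 60
```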

Note that the meaning of the value was never very settled, and its meaning has changed with kernels (for example, (particularly later) 2.6 kernels swap out more easily under the same values as before). Some kernels do little swapping for values in the range 0-60 (or 0-80, but 60 seems the more common tipping point). A value of 100 or something near it tends to make for very aggressive swapping.

There are many discussions on the subject because the kernel implementations, implications, and applicability to different use cases varies, and some of the interactions are not trivial.


Arguments that the exact value often matters little:

  • swapping out at all necessarily means you had too little memory to keep everything in RAM anyway
  • If you have a lot of memory, there won't be much swapping anyway, so it doesn't matter so much. Having a lot more RAM than apps will use is arguably a case for low swappiness, since you might as well keep everything in memory, unless you really value the OS cache.
  • swapping will be done relatively cleverly, because inactivity tends to follow 80/20-style patterns(verify)

Arguments for lower swappiness:

  • Avoids swapping until it's necessary for something else,
    • ...also avoiding IO (also e.g. lets laptop drives spin down)
    • (on the flipside, this means more IO and time is spent when the memory is needed)
  • apps are more likely to stay in memory (particularly larger ones). Over-aggressive swapout (e.g. while you go to lunch) is less likely, meaning it is slightly less likely that you have to wait through a few seconds of churning swap-in when you continue working
  • When your computer has more memory than you actively use, there will be less IO caused by swapping inactive pages out and in again (but other tendency factors will probably be lower too)

Arguments for higher swappiness seem to include(verify):

  • keeps memory free
    • valuing potential future use of memory over keeping (possibly never-used) things in memory
    • free memory is usable by the OS cache
  • swapping out rarely used pages means new applications and new allocations are served more immediately by RAM, rather than having to wait for swapping in
  • allocation-greedy apps will not cause swapping so quickly, and are served more quickly themselves

Swappiness applies to processes, not kernel constructs like the OS cache, dentry and inode caches. Because of those, more aggressive swapping out frees up memory, meaning a little more space can be used for disk caching.

When you're looking at this from a perspective of data caching, you can see swappiness as something that indirectly controls where cache data sits - process, OS cache, or swapped out.

Consider for example the case of large databases (often following some 80/20-ish locality patterns), or of a search system with large indices (which often sees relatively wide/random access patterns over a large amount of data).

If you set the database to cache data in process memory, you may want lower swappiness, since that makes it more likely that needed data is still in memory rather than on disk. But for exactly the same setup with the added wish to rely on the OS cache, you may well want higher swappiness so that the OS cache gets more space (although trying to wring memory out of your system for this is often a bad idea).

In some cases, having a lot of memory and relying on the OS cache can take all the bother out of caching, and avoids duplication across multiple processes. However, the OS cache's logic isn't as complex or smart as swapping logic, so when more applications are vying for limited memory, app-level caches can work better, sometimes also because they (or you) may be better at predicting what should stay in memory. Even a fairly basic LRU cache can be cleverer.
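To illustrate that last point, a minimal app-level LRU cache (a Python sketch; the class and its interface are made up for illustration):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache, as an app-level cache sketch."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key in self._data:
            self._data.move_to_end(key)   # mark as recently used
            return self._data[key]
        return default

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used
```

Filling it past capacity evicts whatever was used least recently, which is exactly the "keep the hot 20% around" behaviour the locality argument relies on.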

(On Windows, My Computer -> Properties -> Advanced -> Performance Settings -> Advanced -> Memory Usage, and its choice between Programs and System Cache, is probably much like swappiness)

See also:

Page fault

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

A page fault refers to attempting to get something from memory that isn't currently in physical memory, or some other reason we can't use the memory we want right now.

Perhaps the most common cause is that the page was swapped out, and needs to be swapped in. Note this is not an error, but rather an informational "darn, I wish I hadn't put that on disk, now things'll be a bit slow while I get it" message.

Another reason is memory mapped IO, which deserves special mention because page faults caused by mmap misses are not an indication of swapping - though still of disk IO.
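For illustration, memory-mapped file access in Python; the first read of each page may trigger a (major) page fault that is served from the file on disk rather than from swap (a sketch using a temporary scratch file):

```python
import mmap
import os
import tempfile

# Create a small scratch file to map (temporary; removed afterwards).
fd, path = tempfile.mkstemp()
os.write(fd, b'x' * 8192)
os.close(fd)

with open(path, 'rb') as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # The first access to a page can page-fault, making the OS read it
    # in from the file - disk IO, but not swapping.
    first_byte = m[0:1]
    m.close()

os.remove(path)
```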

There are other reasons for page faults, mostly other aspects of memory / swap management.