Virtual memory

'Virtual memory' ended up doing a number of different things. For the most part, you can explain those things separately.

* http://en.wikipedia.org/wiki/Page_fault
* http://en.wikipedia.org/wiki/Demand_paging
===Overcommitting RAM with disk===
<!--
{{comment|(Note that what windows usually calls paging, unixen usually call swapping. In broad descriptions you can treat them the same. Once you get into the details the workings and terms do vary, and precise use becomes more important.)}}
Swapping/paging is, roughly, the idea that the VMM can have a pool of virtual memory that comes from RAM ''and'' disk.
This means you can allocate more total memory than would fit in RAM at the same time. {{comment|(It can be considered overcommit of RAM, though note this is ''not'' the usual/only meaning of the term overcommit, see below)}}.
The VMM decides which parts go from RAM to disk, when, and how much of such disk-based memory there is.
Using disk for memory seems like a bad idea, as disks are significantly slower than RAM in both bandwidth and latency,
which is why the VMM will always prefer to use RAM.
There are a few reasons it can make sense:
* there is often some percentage of each program's memory that is inactive
: think "was copied in when starting, and then never accessed in days, and possibly never will be"
: if the VMM can adaptively move that to disk (where it will still be available if requested, just slower), that frees up more RAM for ''active'' programs (or caches) to use.
: not doing this means a percentage of RAM would always be entirely inactive
: doing this means slow access whenever you ''did'' need that memory after all
: this means a small amount of swap space is almost always beneficial
: it also doesn't make that much of a difference, because most things that are allocated have a purpose. '''However'''...
* there are programs that blanket-allocate RAM, and will never access part/most of it even once.
: as such, the VMM can choose to not back allocations with anything until first use (a small illustration follows right after this list)
:: this mostly just saves some up-front work
: separately, there is a choice of how to count this not-yet-used memory
:: you could choose to not count that memory at all - but that's risky, and vague
:: usually it counts towards the ''swap/page'' area (often without ''any'' IO there)
: this means a ''bunch'' of swap space can be beneficial, even if just for bookkeeping without ever writing to it
:: just so we don't have to hand out RAM for memory that is never used
:: while still actually having backing if it ever is used.
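
A small illustration of that second point (a sketch; it assumes linux because it reads /proc/self/status, and the exact numbers will vary): allocate a big block, and the resident set only grows once pages are actually written to.

<pre>
/* sketch: allocate a lot, observe that RSS only grows once pages are written to */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void print_rss(const char *label) {
    /* VmRSS in /proc/self/status is the resident set size, in kB */
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    if (!f) return;
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "VmRSS:", 6) == 0)
            printf("%s %s", label, line);
    fclose(f);
}

int main(void) {
    size_t size = (size_t)1 << 30;       /* 1 GiB of address space */
    char *block = malloc(size);
    if (!block) return 1;

    print_rss("after malloc (mostly unbacked):");
    memset(block, 1, size);              /* the first write backs every page */
    print_rss("after touching every page:    ");

    free(block);
    return 0;
}
</pre>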
And yes, in theory neither would be necessary if programs behaved with perfect knowledge of other programs, of the system, and of how their data gets used.
In practice this usually isn't feasible, so it makes sense to do this at OS-level basically with a best-guess implementation.
In most cases it has a mild net positive effect, largely because both above reasons mean there's a little more RAM for active use.
Yes, it is ''partly'' circular reasoning, in that programmers now get lazy doing such bulk allocations knowing this implementation, thereby ''requiring'' such an implementation.
Doing it this way has become the most feasible because we've gotten used to thinking this way about memory.
Note that neither reason should impact the memory that programs actively use.
Moving inactive memory to disk will also rarely slow ''them'' down.
Things that run periodically but very infrequently may need up to a few extra seconds when they do.
There is some tweaking possible in such systems:
* you can usually ask for RAM that is never swapped/paged. This is important if you need to guarantee it is always accessible within a very short time (can e.g. matter for real-time music production based on samples; a sketch follows this list)
* you can often tweak how pre-emptive swapping is
: To avoid having to swap things to disk during the next memory allocation request, it's useful to do so pre-emptively, when the system isn't too busy.
: this is usually somewhat tweakable
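
For the never-swapped case, the usual mechanism on linux is mlock()/mlockall() (windows has the analogous VirtualLock). A minimal sketch - note that locked memory counts against RLIMIT_MEMLOCK, so this may need privileges or a raised limit:

<pre>
/* sketch: pin a buffer in RAM so the VMM will not page it out */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t size = 64 * 1024 * 1024;      /* say, 64 MB of audio samples */
    char *samples = malloc(size);
    if (!samples) return 1;
    memset(samples, 0, size);            /* make sure it is backed at all */

    if (mlock(samples, size) != 0) {     /* pin these pages in RAM */
        perror("mlock");                 /* fails if RLIMIT_MEMLOCK is too low */
        return 1;
    }

    /* ... low-latency use of 'samples' here; access never hits swap ... */

    munlock(samples, size);
    free(samples);
    return 0;
}
</pre>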
'''Using more RAM than you have'''
The above raises the question of what happens when you attempt to actively use more RAM than you have.
This is a problem with ''and'' without swapping, with and without overcommit.
Being out of memory is a pretty large issue. Even the simplest "use what you have, deny once that's gone" system would have to just deny allocations to programs.
Many programs don't check every allocation, and may crash if not actually given what they ask for.  But even if they handled denied allocations perfectly elegantly, in many cases the best possible behaviour still amounts to stopping the program.
Either way, the computer is no longer doing what you asked of it.
And there is an argument that it is preferable to have it continue, however slowly,
in the hope this was some intermittent bad behaviour that will be solved soon.
When you overcommit RAM with disk, this happens somewhat automatically.
And it's slow as molasses, because some of the actively used memory is now going not via microsecond-at-worst RAM but millisecond-at-best disk.
While there are cases that are less bad, the point is that it is doing this ''continuously'' instead of sporadically.
This is called '''thrashing'''. If your computer suddenly started to continuously rattle its disk while being verrry slow, this is what happened.
{{comment|(This is also the number one reason why adding RAM may help a lot for a given use -- or not at all, if this was not your problem.)}}
===Overcommitting (or undercommitting) virtual memory, and other tricks===
<!--
Consider we have a VMM system with swapping, i.e.
* all of the actively used virtual memory pages are in RAM
* infrequently used virtual memory pages are on swap
* never-used pages are counted towards swap {{comment|(does ''not'' affect the amount of allocation you can do in total)}}
Overcommit is a system where the last point can instead be:
* never-used pages are nowhere.
'''More technically'''
More technically, overcommit allows allocation of address space, without allocating memory to back it.
Windows makes you do both of those explicitly,
implying fairly straightforward bookkeeping,
and that you cannot do this type of overcommit.
{{comment|(note: now less true due to compressed memory{{verify}})}}
Linux implicitly  allows that separation,
basically because the kernel backs allocations only on first use {{comment|(which is also why some programs will ensure they are backed by something by storing something to all memory they allocate)}}.
Which is separate from overcommit; if overcommit is disabled, this lazy backing merely saves some initialisation work.
But with overcommit (and similar tricks, like OSX's and Win10's compressed memory, or linux's [[zswap]]) your bookkeeping becomes more flexible.
Which includes the option to give out more than you have.
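
A sketch of what that separation can look like in code (assuming linux): reserve a large range of address space without backing, and only later make a small part of it usable (which is still only backed on first touch). This is roughly the moral equivalent of windows's explicit reserve-then-commit.

<pre>
/* sketch: reserve a big range of address space without backing it,
   then make a small part of it usable later (linux; details vary
   with the overcommit mode discussed below) */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t reserve = (size_t)8 << 30;    /* 8 GiB of address space */
    char *base = mmap(NULL, reserve, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* later: actually use the first 16 MB of it */
    size_t used = 16 * 1024 * 1024;
    if (mprotect(base, used, PROT_READ | PROT_WRITE) != 0) {
        perror("mprotect");
        return 1;
    }
    memset(base, 42, used);              /* only these pages get backed */

    munmap(base, reserve);
    return 0;
}
</pre>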
'''Why it can be useful'''
Basically, when there is good reason that some pages will ''never'' be used.
The difference is that without overcommit this still needs to all count towards something (swap, in practice), while overcommit means the bookkeeping assumes you will always have a little such used-in-theory-but-never-in-practice use.
How wise that is depends on use.
There are two typical examples.
One is that a large process may fork().
In a simple implementation you would need twice the memory,
but in practice the two forks' pages are copy-on-write, meaning they will be shared until written to.
Meaning you still need to do bookkeeping in case that happens, but even if the fork is another worker of the same kind, it probably won't actually need twice the memory.
In the specific case where the fork wasn't for another copy of that program, but instead immediately exec()s a small helper program, the pages will ''never'' be written.
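
A sketch of that fork-then-exec pattern (the helper command is just an example): the parent's pages are shared copy-on-write with the child, and because the child exec()s almost immediately, essentially none of them ever get copied.

<pre>
/* sketch: a large process spawning a small helper.
   After fork(), parent and child share pages copy-on-write;
   the exec() replaces the child's image before it writes to any of them. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* pretend this process has gigabytes of state allocated here */

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {
        /* child: immediately replace ourselves with a small helper */
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(127);                      /* only reached if exec failed */
    }

    waitpid(pid, NULL, 0);               /* parent: wait for the helper */
    return 0;
}
</pre>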
The other I've seen is mentioned in the kernel docs: scientific computing that has very large, very sparse arrays.
This essentially lets such computations avoid writing their own clever allocator, by relying on the linux VMM instead.
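
A sketch of what that sparse-array case leans on (whether such an allocation is even accepted depends on the overcommit settings discussed below): allocate a huge array up front, touch only a few entries, and only the touched pages ever consume RAM.

<pre>
/* sketch: a "large, very sparse array" relying on lazy backing.
   Only the pages containing the few touched elements consume RAM;
   the allocation itself may be refused depending on overcommit settings. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n = (size_t)1 << 30;          /* 1G doubles = 8 GiB virtual */
    double *a = calloc(n, sizeof *a);    /* zeroed, but not yet backed */
    if (!a) { perror("calloc"); return 1; }

    /* touch only a few, widely spread elements */
    for (size_t i = 0; i < n; i += n / 8)
        a[i] = (double)i;

    printf("a[0]=%g  a[n/2]=%g\n", a[0], a[n / 2]);
    free(a);
    return 0;
}
</pre>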
Most other examples arguably fall under "users/admins not thinking enough".
Consider the JVM, which has its own allocator, to which you give an initial and a maximum memory figure at startup.
Since it allocates memory on demand (also initialises it{{verify}}), the user may ''effectively'' overcommit by having the collective -Xmx be more than RAM.
That's not really on the system to solve, that's just bad setup.
'''Critical view'''
Arguably, having enough swap makes this type of overcommit largely unnecessary, and mainly just risky.
The risk isn't too large, because it's paired with heuristics that disallow silly allocations,
and the oom_killer that resolves most runaway processes fast enough.
It's like [https://en.wikipedia.org/wiki/Overselling#Airlines overselling aircraft seats], or [https://en.wikipedia.org/wiki/Fractional-reserve_banking fractional reserve banking].
It's a promise that is ''less'' of a promise. That's mostly fine (roughly for the same reasons that systems that allow swapping are not continuously thrashing), but once your end users count on this, the concept goes funny, and when everyone comes to claim what's theirs you are still screwed.
Note that windows avoids the fork() case by not having fork() at all (there's no such cheap process duplication, and in the end almost nobody cares).
Counterarguments to overcommit include that system stability should not be based on bets,
that it is (ab)use of an optimization that you should not be counting on,
that programs should not be so lazy,
and that we are actively enabling them to be lazy and behave less predictably,
and now sysadmins have to frequently figure out why that [[#oom_kill|oom_kill]] happened.
Yet it is harder to argue that overcommit makes things less stable.
Consider that without overcommit, memory denials are more common (and that typically means apps crashing).
With or without overcommit, we are currently already asking what the system's emergency response should be (and there is no obvious answer to "what do we sacrifice first") because improper app behaviour is ''already a given''.
Arguably oom_kill ''can'' be smarter, usually killing only an actually misbehaving program,
rather than a denial probably taking down whichever program happens to allocate next (which is more random).
But you don't gain much reliability either way.
{{comment|(In practice oom_kill can take some tweaking, because it's still possible that e.g. a mass of smaller
programs lead to the "fix" of your big database getting killed)}}
'''So is it better to disable it?'''
No, it has its beneficial cases, even if they are not central.
Disabling also won't prevent swapping or thrashing,
as the commit limit is typically still > RAM {{comment|(by design, and you want that. Different discussion though)}}.
But apps shouldn't count on overcommit as a feature, unless you ''really'' know what you're doing.
Note that if you want to keep things in RAM, you probably want to lower [[#swappiness|swappiness]] instead.
'''Should I tweak it?'''
Possibly.
Linux has three modes:
* overcommit_memory=2: No overcommit
: userspace commit limit is swap + fraction of ram
: if that's less than RAM, the rest is only usable by the kernel, usually mainly for caches (which can be a useful mechanism to dedicate some RAM to the [[page cache]])
* overcommit_memory=1: Overcommit without checks/limits.
: Appropriate for relatively few cases, e.g. the very-sparse array example.
: in general just more likely to swap and OOM.
* overcommit_memory=0: Overcommit with heuristic checks (default)
: refuses large overcommits, allows the sort that would probably reduce swap usage
These mainly control the maximum allocation limit for userspace programs.
This is still a fixed number, and still ''related'' to the amount of RAM, but the relation can be more interesting.
On windows it's plainly what you have:
: swap space + RAM
and on linux it's:
: swap space + (RAM * (overcommit_ratio/100) )
or, if you instead use overcommit_kbytes,
: swap space + overcommit_kbytes {{verify}}
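
A sketch that reads the current settings and recomputes that limit, to compare against what the kernel itself reports as CommitLimit in /proc/meminfo (linux only; it ignores overcommit_kbytes and hugepage reservations, so treat it as an approximation):

<pre>
/* sketch: recompute the linux commit limit from swap, RAM and overcommit_ratio,
   and compare with the kernel's own CommitLimit (approximate on purpose) */
#include <stdio.h>
#include <string.h>

static long meminfo_kb(const char *key) {    /* e.g. "MemTotal" -> kB */
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long value = -1;
    if (!f) return -1;
    while (fgets(line, sizeof line, f))
        if (strncmp(line, key, strlen(key)) == 0 && line[strlen(key)] == ':')
            sscanf(line + strlen(key) + 1, "%ld", &value);
    fclose(f);
    return value;
}

static long sysctl_long(const char *path) {
    FILE *f = fopen(path, "r");
    long value = -1;
    if (!f) return -1;
    fscanf(f, "%ld", &value);
    fclose(f);
    return value;
}

int main(void) {
    long ram_kb   = meminfo_kb("MemTotal");
    long swap_kb  = meminfo_kb("SwapTotal");
    long limit_kb = meminfo_kb("CommitLimit");
    long mode     = sysctl_long("/proc/sys/vm/overcommit_memory");
    long ratio    = sysctl_long("/proc/sys/vm/overcommit_ratio");

    long computed = swap_kb + (long)(ram_kb * (ratio / 100.0));
    printf("overcommit_memory=%ld  overcommit_ratio=%ld\n", mode, ratio);
    printf("swap + ram*ratio/100 = %ld kB,  kernel CommitLimit = %ld kB\n",
           computed, limit_kb);
    return 0;
}
</pre>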
Also note that 'commit_ratio' might have been a better name,
because it's entirely possible to have that come out as ''less'' than RAM - undercommit, if you will.
This undercommit is also a sort of feature, because while it keeps applications from using that RAM,
it ''effectively'' means that RAM is dedicated to (mainly) kernel caches and buffers.
Note that the commit limit is ''how much'' it can allocate, not where it allocates from (some people assume this based on how it's calculated).
Yet if the commit limit is less than total RAM, applications will never be able to use all RAM.
This may happen when you have a lot of RAM and/or very little swap.
Because when you use overcommit_ratio (default is 50), the value (and sensibility) of the commit limit essentially depends on the ''ratio'' between swap space and RAM.
Say,
: 2GB swap, 4GB RAM, overcommit_ratio=50: commit limit at (2+0.5*4) = 4GB.
: 2GB swap, 16GB RAM, overcommit_ratio=50: (2+0.5*16) = 10GB.
: 2GB swap, 256GB RAM, overcommit_ratio=50: (2+0.5*256) = 130GB.
: 30GB swap, 4GB RAM, overcommit_ratio=50: (30+0.5*4) = 32GB.
: 30GB swap, 16GB RAM, overcommit_ratio=50: (30+0.5*16) = 38GB.
: 30GB swap, 256GB RAM, overcommit_ratio=50: (30+0.5*256) = 158GB.
So
* you may consider setting overcommit_ratio higher than default (more so if you have a lot of RAM)
: possibly close to 100% {{comment|(or use overcommit_kbytes instead, because that's how you may be calculating it anyway)}}
: and/or more swap space.
* if you desire to leave some dedicated to caches (which is a good idea) you have to do some arithmetic (a sketch follows this list).
: For example, with 4GB swap and 48GB RAM,
:: you need ((48-4)/48=) ~91% to cover RAM,
:: and ((48-4-2)/48=) ~87% to leave ~2GB for caches.
* this default is why people suggest your swap area should be roughly as large as your RAM (same order of magnitude, anyway)
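
That arithmetic as a tiny sketch, using the example figures from just above:

<pre>
/* sketch: what overcommit_ratio do I need so the commit limit covers
   RAM minus a reserve I want to keep for kernel caches?
   ratio = (ram - swap - reserve) / ram */
#include <stdio.h>

int main(void) {
    double ram_gb     = 48.0;            /* example values from the text */
    double swap_gb    = 4.0;
    double reserve_gb = 2.0;             /* RAM to leave for caches/buffers */

    double ratio = 100.0 * (ram_gb - swap_gb - reserve_gb) / ram_gb;
    double limit = swap_gb + ram_gb * ratio / 100.0;

    printf("overcommit_ratio ~= %.1f%%  -> commit limit ~= %.0f GB\n",
           ratio, limit);                /* 87.5%, commit limit 46 GB (48 - 2) */
    return 0;
}
</pre>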
'''Should I add more RAM instead?'''
Possibly. It depends on your typical and peak load.
More RAM improves performance noticeably when it avoids swapping under typical load.
Beyond that it helps mainly when it means files you read get cached (see [[page cache]]),
and beyond ''that'' it has little to no effect.
Other notes:
* windows is actually more aggressive about swapping things out - it seems to do so in favour of IO caches
* linux is more tweakable (see [[#swappiness|swappiness]]) and by default is less aggressive.
* overcommit makes sense if you have significant memory you reserve but ''never'' use
: which is, in some views, entirely unnecessary
: it should probably be seen as a minor optimization, and not a feature you should (ab)use
Unsorted notes
* windows puts more importance on the swap file
* you don't really want to go without swap file/space on either windows or linux
: (more so if you turn overcommit off on linux)
* look again at that linux equation. That's ''not'' "swap plus more-than-100%-of-RAM"
: and note that if you have very little swap and/or tons of RAM (think >100GB), it can mean your commit limit is lower than RAM
* swap will not avoid oom_kill altogether - oom_kill is triggered on low speed of freeing pages {{verify}}
-->
<!--
See also:
* https://serverfault.com/questions/362589/effects-of-configuring-vm-overcommit-memory
* https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
* https://www.win.tue.nl/~aeb/linux/lk/lk-9.html
* http://engineering.pivotal.io/post/virtual_memory_settings_in_linux_-_the_problem_with_overcommit/
-->


===Swappiness===

===On memory scarcity===

====oom_kill====

oom_kill is linux kernel code that starts killing processes when there is enough memory scarcity that memory allocations cannot happen within reasonable time - as this is a good indication that it's gotten to the point that we are thrashing.


Killing processes sounds like a poor solution.

But consider that an OS can deal with completely running out of memory in roughly three ways:

* deny all memory allocations until the scarcity stops
: This isn't very useful, because
:: it will affect every program until scarcity stops
:: if the cause is one flaky program - and it usually is just one - then the scarcity may not stop
:: programs that do not actually check every memory allocation will probably crash
:: programs that do such checks well may have no option but to stop completely (or maybe pause)
: So in the best case random applications will stop doing useful things - probably crash - and in the worst case your system will crash.
* delay memory allocations until they can be satisfied
: This isn't very useful, because
:: this pauses all programs that need memory (they cannot be scheduled until we can give them the memory they ask for) until scarcity stops
:: again, there is often no reason for this scarcity to stop
: So this typically means a large-scale system freeze (indistinguishable from a system crash in the practical sense of "it doesn't actually do anything").
* kill the misbehaving application to end the memory scarcity
: This makes a bunch of assumptions that have to be true -- but it lets the system recover
:: it assumes there is a single misbehaving process (not always true, e.g. two programs each allocating most of RAM would be fine individually, and need an admin to configure them better)
:: ...usually the process with the most allocated memory, though oom_kill logic tries to be smarter than that
:: it assumes that the system has had enough memory for normal operation up to now, and that there is probably one haywire process (misbehaving or misconfigured, e.g. (pre-)allocating more memory than you have)
:: this could misfire on badly configured systems (e.g. multiple daemons all configured to use all RAM, or having no swap, leaving nothing to catch incidental variation)


Keep in mind that

* oom_kill is sort of a worst-case fallback
: generally,
:: if you feel the need to rely on the OOM killer, don't
:: if you feel the wish to overcommit, don't
: oom_kill is meant to deal with pathological cases of misbehaviour
:: but even then it might pick some random daemon rather than the real offender, because in some cases the real offender is hard to define
: note that you can isolate likely offenders via cgroups now (also meaning that swapping happens per cgroup)
:: and apparently oom_kill is now cgroups-aware
* oom_kill does not always save you
: It seems that if your system is thrashing heavily already, it may not be able to act fast enough
: (and it may go overboard once things do catch up)
* You may wish to disable oom_kill when you are developing
: ...or at least treat an oom_kill in your logs as a fatal bug in the software that caused it
* If you would rather not rely on oom_kill, you may be able to get a reboot instead, by setting the following sysctls:
: vm.panic_on_oom=1
: and a nonzero kernel.panic (the number of seconds to show the message before rebooting), e.g.
: kernel.panic=10
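
One related knob not mentioned above (a sketch; linux-specific): a process can make itself a less (or more) attractive target for oom_kill by writing to /proc/self/oom_score_adj, which ranges from -1000 ("never pick me") to 1000 ("pick me first"). Useful to protect that one database, or to volunteer a known-expendable worker.

<pre>
/* sketch: make this process less likely to be picked by oom_kill.
   Lowering the value below its current one needs privileges (CAP_SYS_RESOURCE);
   raising it (volunteering to be killed first) is always allowed. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/self/oom_score_adj", "w");
    if (!f) { perror("open oom_score_adj"); return 1; }

    fprintf(f, "%d\n", -500);            /* range -1000..1000 */
    fclose(f);

    /* ... long-running work ... */
    return 0;
}
</pre>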

