Apache MPMs


This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

MPM notes

MPM stands for Multi-Processing Module, roughly 'the module that manages how many requests are handled, and how'. A bunch of different setups exist (you can write your own), but only a few are interesting/relevant.

MPMs can use multiple processes, multiple threads, or both. The best choice depends on your environment (does the OS support forking?), your code (e.g. thread-safety), and other requirements.

To avoid confusion with the worker MPM, the word 'actor' is used below (instead of 'worker') to mean 'thing that can handle a request'. Most actors handle many requests over their lifetime, mostly to lower latency (exceptions include CGI).


Most MPMs can keep a configurable number of fixed/total/spare/idle processes or threads. Starting them while things are idle means that when a request comes in it can be assigned immediately, avoiding the latency of actor startup (...for as long as there are still idle actors; under heavy enough load there's just no slack).

You can configure it to spawn more at busy times, and to reduce the number when many become idle.


Non-threaded MPMs include:

  • prefork [1]
    • multiple processes, each process is a single actor (handles one request at a time)
    • uses forking to make process startup simpler.
    • not available on windows (because fork() is not)
    • starts at most min(MaxClients,ServerLimit)(verify) children
    • for the same number of actors, it will use more memory than threaded MPMs, because each process has some base memory use and its own loaded copy of modules.


  • itk, a derivative of prefork that allows running different vhosts as different users.
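As an illustration, a prefork tuning block might look like the following. Directive names are real Apache directives (MaxClients was renamed MaxRequestWorkers in 2.4); the values are arbitrary examples, not recommendations:

```apache
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    ServerLimit            150
    MaxClients             150   # called MaxRequestWorkers in 2.4
    MaxRequestsPerChild   1000   # recycle each child after this many requests (0 = never)
</IfModule>
```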


Threaded/hybrid MPMs (hybrid means multiple processes, each multithreaded) include:

  • worker
    • hybrid: multi-threaded in multiple processes
    • Regularly seen as the main alternative to prefork when you want threading and more actors.
    • ...typically meaning less memory use (and a little less CPU overhead)
    • Not every module is written for multithreaded use. Perhaps the most significant example is PHP, since mod_php is not guaranteed to be thread-safe


  • event
    • derivative of worker
    • dedicates a thread for a request, rather than a thread for a connection (better behaved around keepalive, fewer threads sitting around waiting on an idle connection)
    • experimental when introduced (in 2.2), considered stable as of 2.4; seems to work well, and can be faster than worker with less configuration worry
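For comparison with the prefork example, a worker/event style block trades processes for threads. Again, these are real directives with arbitrary example values (MaxClients is MaxRequestWorkers in 2.4):

```apache
<IfModule mpm_worker_module>
    StartServers             2
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadsPerChild         25
    ServerLimit             16
    MaxClients             400   # at most ServerLimit * ThreadsPerChild
</IfModule>
```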


Experimental: ((verify) status of each, I wrote this a while ago)

  • threadpool (experimental)
    • variant of worker (Not as interesting as worker)
  • leader (experimental)
    • variant of worker that uses a leader-follower style threadpool, which may be a little faster at handling many jobs when they are very short running(verify)
  • perchild (experimental)
    • not currently functional (you probably want worker)
    • multi-threaded in multiple processes


OS-specific:

  • winnt [2]
    • Default on windows
    • multi-threaded in a single process.
    • ...so like worker, but limited to a single process sharing all threads. This also means that the one process is not killed unless apache is restarted (a persistence that can be useful as well as bite you)



  • beos [4]
    • single control process creating threads for requests




Default MPMs:

  • Most unices: prefork (up to 2.2; in 2.4 the default is event where supported)
  • BeOS, Netware, and OS/2 use their own (beos, mpm_netware, mpmt_os2).
  • Windows: mpm_winnt


Which MPM am I using?

Run:

apache2 -V

(on some platforms the binary is called httpd, so httpd -V)

which will mention something like:

Server MPM:     Worker
  threaded:     yes (fixed thread count)
    forked:     yes (variable process count)

or

Server MPM:     prefork
  threaded:     no
    forked:     yes (variable process count)

etc.

server-status can also tell you.

Some notes

Slow-running actors mean delays

...one way or another. If it's waiting on CPU- or IO-bound work, then the server can't go faster. Allowing more actors to run won't help: sure, you can start the work for more people, but each will finish slower, so no one is served any faster.

If that time is not all CPU, or if you can avoid CPU or IO work entirely (particularly via caching; memory is a relatively cheap resource), then you should: it's good for resources (CPU and IO), for the request rate, and so for general snappiness.


On amount of actors, and implied memory


In theory, just a handful of actors can saturate your resources, but depending on the work they do there may be waiting involved, so up to a few dozen actors ensures your CPU/IO resources are being used pretty well.


Above some point, there is little to no upside in having more, and a number of possible downsides, such as the implied memory use peaking over available RAM and leading to swapping/thrashing.

You usually want to limit the number of processes, to limit their potential memory use.


A rough estimate is no more than a few dozen processes per GB of RAM.

This depends a little on the MPM (the below is biased towards prefork), and is based on the idea that the basic modules you probably have enabled, plus, say, PHP, mean each process has ~10-15MB of base memory use. You can sometimes slim that down. At the same time, it may also peak higher, e.g. if you've got dynamic scripts that allocate/handle lots of data.
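As a sanity check on that rule of thumb (the 15MB figure is the guess from above, not a measured value, and the RAM figure is an arbitrary example):

```shell
# Rough prefork capacity estimate: how many ~15MB processes fit in RAM?
ram_mb=2048        # RAM you're willing to give apache (example value)
per_proc_mb=15     # guessed base memory per prefork child (from above)
echo $(( ram_mb / per_proc_mb ))    # prints 136, i.e. ~68 per GB
```

That ~68/GB is an upper bound assuming every process stays at its base size; the 'few dozen per GB' guideline leaves headroom for processes that peak higher.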


Multithreaded and hybrid MPMs have much of that overhead just once per process, so use much less memory per actor than prefork. For example, you may find a 20-thread process using 35MB, which (uneducated guess) might be ~15MB of overhead and ~1MB per thread.

This, and the fact that dynamic apps are often IO-bound (including network setup) or CPU-bound rather than memory-bound, makes these MPMs an interesting way of handling a lot of requests at once without increasing apache's memory overhead much (as you certainly would under prefork for the same number of actors).

But note that your app's memory allocation may be a lot higher simply because you're handling more requests at once. And all modules and code have to be threadsafe, or you may get strange misbehaviour.
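To put numbers on that comparison, using the guessed ~15MB-per-process and ~1MB-per-thread figures from above (not measurements):

```shell
# prefork: 400 actors = 400 processes, each paying its own ~15MB base
echo $(( 400 * 15 ))             # prints 6000 (MB)

# worker: 400 actors = 20 processes of 20 threads; the base is paid once per process
echo $(( 20 * (15 + 20 * 1) ))   # prints 700 (MB)
```

Same number of actors, roughly an order of magnitude less apache overhead, which is the point being made above.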


A setup like 20 processes with 20 threads each is fairly pointless. If you actually have 400 concurrent page serves, you're still just dividing CPU time between them. Even assuming you have a 12-core CPU, that means ~33 concurrent actors on each core, probably meaning each will take about 33 times as long as it would if alone on that core. Users are not being served any faster, and you may be decimating IO performance.
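The arithmetic behind that example:

```shell
actors=400   # concurrent page serves
cores=12     # CPU cores
echo $(( actors / cores ))   # prints 33: concurrent actors per core,
                             # so each runs roughly 33x slower than if alone
```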


Note that modules and apps may allocate a bunch of RAM but never touch it, which means a large number of actors will need to be able to map a lot of memory that is never actually used. For this reason, it can be useful to configure enough swap space on a server, so that that memory can be mapped even if it will never be backed by anything physical.