Computer hardware notes

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Form factors / motherboard sizes

There are over two dozen of these, but most tower/desktop computers use one of:

  • micro-ATX - when three or four expansion slots are enough (roughly 80% of the size of standard ATX)
  • ATX - when you want ~6 expansion slots
  • mini-ITX - when cramped; fits in some smaller cases. Usually has one expansion slot.

See also:



Processors that are currently relevant


Relevant in either the 'interesting for regular consumers to buy now' sense or the 'maybe not sold anymore, but you'll still see plenty of these around' sense.

AMD - notes on names:

  • Sempron refers to the budget variants from various families (previously the name Duron was used for the same purpose)
  • Athlons are usually the home-use processors
  • Opteron refers to server-market CPUs from various families

AMD - series/families:

  • (K7 series) Athlon, AthlonXP, Duron, plus some Mobile versions, plus some Semprons
    • Most relevant around ~1999 to 2005
  • (K8 series) Athlon64 (plain, X2, FX), (plus some Opterons, plus some Semprons)
    • Relevant since ~2003
  • (K10 series) Phenom, Phenom II, late Athlon (X2, 4-series, 6-series), Athlon II
    • Relevant since ~2006

See also:

Intel - notes on names:

  • Celeron refers to budget-end processor variants, from most any architecture/family
  • Xeon refers to processors aimed at multi-processor setups, mostly targeted at server systems (though, pre-Core, also faster workstations); these also exist in all architectures
  • Itanium (~2000) - co-developed with HP, made fairly specifically for large-scale processing and servers
  • Extreme Edition (EE in the processor name, or something similar) pretty much refers to versions for hardware geeks - some specifications maxed out, more overclockable, and as a result also pricier

Intel - series/families:

  • P6
    • perhaps ~1997-2003: Pentium Pro, Pentium II, Pentium III, various Celeron,
    • perhaps 2003-: Pentium M, Celeron M - targeted for laptops (lower power use) during the early Pentium 4 days(verify)
    • some Xeon
    • perhaps 2006-: a few Core (verify)
  • (NetBurst x86 architecture) Various Pentium 4, Various Pentium 4 EE, some Xeons, some Itaniums
  • (NetBurst x64 architecture) Pentium D, Some Pentium 4, some Pentium 4 EE
    • Pentium D were early dual-core processors, pretty much replaced by Core
  • (Core architecture/brand) most Core, Core 2, plus some Celerons, some Xeons
    • Relevant from ~2006,
  • Core i7, i5, i3 brand
    • New brand series and new set of architectures, replacing the Core/Core2 brand
    • Relevant since 2008-2010 (will replace Core2 brand later)
  • (Atom architecture)
    • Relevant since ~2008: Atom
    • performance is low (maybe 10-30% of the speed of contemporary desktop processors; comparable to pre-Core processors)
    • performance per watt is good, so it is useful for embedded systems, and widely used in netbooks
    • (Note that while the processor itself is very low-power, some Intel chipsets paired with it used several times more power)

See also:

Processor numbering/naming



Core, Core2 (and Pentium)

The letter:

  • E indicates a standard desktop part (e.g. E6600)
  • X indicates extreme edition (e.g. XE6800)
  • Q indicates quad-core (e.g. Q9300, QX9650)

In the Intel Core/Core2 series, numbers currently approximately mean:

  • 2xxx: Budget
  • 4xxx, 5xxx: balanced budget (performance-for-cost tends to peak here)
  • 6xxx, 7xxx: Mid-range (with significant variation)
  • 8xxx, 9xxx: Fast and pricy

(Not all of these are actually called Core/Core2. The name 'Pentium Dual-Core' was introduced in 2006, then renamed to plain 'Pentium' in 2010. It refers to a line of Core-derived CPUs on the budget end, a little better than Celerons - mostly just low-end Core variants. I'm currently mostly seeing these in the E5xxx and E6xxx series.)

For mobile variants there is a letter to indicate the power use

  • U: less than 12W
  • L: 12 to 20W
  • P: 20 to 30W
  • T: 30 to 40W
  • X: more than 40W

The current bulk of mobile processors seems to be T and P, and some L. ULVs are often U(verify).

  • the S (in SP, SL, SU, ...) refers to a small-package processor (no difference in power consumption (verify))

See also:

Core i3 / i5 / i7
Energy efficient choices

See also:





See also

32-bit/64-bit CPUs/OSes/drivers/apps


What the bits technically mean - Hardware

The bit size ascribed to a computer actually refers to the overall hardware architecture, in that most elements are this size, or transfer data of this size at a time.

It primarily describes:

  • the size of the CPU's general purpose registers
  • the (processor-native) memory addressing size
  • the largest logical size of (mostly address and integer) data that common/important/central operations in the CPU deal with
  • the bus size the CPU communicates with other things with (though other-width transfers may well be supported)

We often refer to something-bit processors, since choosing a CPU dictates most of the rest of the system (motherboard hardware design).

So in a 32-bit system most of those are 32-bit, in a 64-bit system most are 64-bit. A good number of systems are actually somewhat mixed.

Implications - OS and software

An operating system is usually specifically made for a single architecture, or sometimes a few similar/compatible ones.

This is largely because of the OS's kernel, which is the primary thing that runs directly on the hardware, and the (hardware) drivers, which are loaded/hooked into the kernel fairly directly and speak fairly directly to all the more peripheral hardware.

This means you need a different kernel and drivers for each significantly different processor (which is why OSes are usually architecture-specific) and bit-width (which is why they're also specifically 32-bit or 64-bit). You don't want to emulate drivers because it may be somewhere between hard and impossible, and most likely it would be slow.

As such, the OS+driver is the part of a system that is pretty specifically 32-bit or 64-bit.

Applications run on / speak to the OS (and not the hardware) and its libraries, so there is more possible leeway there.

An OS comes with a set of libraries that apps hook into, and which run on that OS. Libraries are specific to the OS and its bit-width (it's perhaps not strictly necessary, but it's less messy in a few ways, easier for programmers, faster, or possibly all of those).

Programs themselves can be compiled to be 32-bit or 64-bit, and need the according libraries.

  • A 32-bit app will certainly run on a 32-bit OS (sort of a 'duh')
  • A 64-bit app will run on a 64-bit OS (also sort of a 'duh')
  • A 64-bit app will not run on a 32-bit OS
  • A 32-bit app may well run on a 64-bit OS, if the OS chooses to support this (which usually relies on a set of 32-bit libraries alongside the 64-bit library set)

This leads us into some of the more confusing stuff:

  • 64-bit OSes can choose to let you run 32-bit apps
    • purely emulating a 32-bit environment, or
    • CPU-assisted if applicable (which is more lightweight, but still means overhead). This compatibility mode does not exist in all processors, but currently does in most desktop-market CPUs(verify).
  • 64-bit processors may have a mode that makes it act like the earlier 32-bit hardware in its product line
    • letting you run 32-bit OSes on such 64-bit processors (This was a pretty common case for a while, slowly getting less common)

From the angle of OS choice:

64-bit OS:

  • can only be run on a 64-bit CPU (which is pretty much every new processor now)
  • only practical if all your drivers have 64-bit versions. (For a long time this was the reason you would lose some hardware support if you went 64-bit)
  • you'll probably prefer to run 64-bit programs when they are available
  • 32-bit programs will run, but a little slower as they need some emulation (the amount depends a little on setup and hardware)
  • memory: Physical memory limit usually lies around 64GB, not 3-4GB as in 32-bit (see below).
  • drivers: may well be harder to find in 64-bit than in 32-bit form (newer hardware rarely has this problem; it was at the introduction of 64-bit OSes that this was a much larger problem)

32-bit OS:

  • can be run on 32-bit CPUs, and a good number of 64-bit CPUs (so new and older processors)
  • (still somewhat better driver coverage for hardware)
  • slight overall speed hit on a 64-bit CPU (by running on the CPU's compatibility mode)
  • cannot run 64-bit programs
  • memory limit is 4GB (or 64GB with PAE, but this usually only matters for specialized software, such as database engines; see below)

Running a 32-bit OS on a 64-bit processor means a small speed hit, but not even nearly a factor of two (as some people seem to think). Most of the time it's a 5-10% difference or so. In some specific bulk-calculation cases it's more, but that rarely describes OS work. The biggest difference is probably in heavy floating-point calculation, which can matter for rendering and games.

As such, running a 64-bit OS instead of a 32-bit one means little difference to most everyday work, and only part of server work.

Note that pretty much all software is available in 32-bit form (more and more software is now additionally available in 64-bit form), but since you can run 32-bit software on most 64-bit OSes, this isn't really a major factor either way, unless you know the 64-bit version is actually faster.

Memory limits on 32-bit and 64-bit machines


(the angle is getting more dated as 64-bit is becoming a more sensible default now)

TODO: the distinction between (effects from) physical and virtual memory addressing should be made clearer.

Overall factoids

On 32-bit systems:

Process-level details:

  • No single 32-bit process can ever map more than 4GB as addresses are 32-bit byte-addressing things.
  • A process's address space has reserved parts, to map things like shared libraries, which means a single app can actually allocate less (often by at most a few hundred MBs) than what it can map(verify). It's not unusual to only see that no more than ~3GB can be allocated.

OS-level and hardware-level details:

  • Part of the hardware-level memory map is taken by device memory maps, which for 32-bit OSes can mean the last few hundred megs of 4GB of installed RAM are not accessible (simply because there's only 4GB of address space and more than 4GB of things to address). If you have 4GB of RAM, your 32-bit OS may see 3.7GB, 3.4GB, or some such figure. This memory is only usable through PAE-based allocation(verify), which usually means 'not used in practice'.
  • a 32-bit OS can only use more than 4GB through PAE, which has to be supported by both motherboard and OS, and enabled in the OS. (Note: effectively disabled in Windows XP since SP2; see details below)
    • Note: both 32-bit and 64-bit PAE-supporting motherboards may have somewhat strange limitations, e.g. the amount of memory they will actually allow/support (mostly a problem in old, early PAE motherboards)
  • Depending on the OS/kernel setup, the practical limit of 32-bit processes is often 3GB or 2GB (or sometimes even 1GB).
  • Each process is mapped to memory separately, so you can host multiple 32-bit processes to combine to use more than 4GB (even on 32-bit OSes: you can for example compile the 32-bit linux kernel to use up to 64GB this way).

If you want to use significantly more than 4GB of RAM, you need a 64-bit OS.

If you have exactly 4GB, the least-bother choice is to just not care about the few hundred MBs that will be missing, and use the three point something gigs you get.

On 64-bit systems:

  • none of the potentially annoying limitations that 32-bit systems have apply -- assuming you are using a 64-bit OS, and not a 32-bit OS (which would be using the 64-bit CPU's 32-bit mode).
  • The architecture lets you map 64-bit addresses -- or, in practice, more than you can physically put in a system.

On both 32-bit (PAE) and 64-bit systems:

  • Your motherboard may have assumptions/limitations that impose some lower limit (e.g. 4, 8 or 16GB)
  • Some OSes may artificially impose limits (particularly the more basic versions of Vista seem to do this(verify))

Windows-specific limitations:

  • 32-bit Windows XP (since SP2) gives you no PAE memory benefits. You may still be using the PAE version of the kernel if you have DEP enabled (no-execute page protection) since that requires PAE to work(verify), but PAE's memory upsides are disabled (to avoid problems with certain buggy PAE-unaware drivers, possibly for other reasons)
  • 64-bit Windows XP: ?
  • the /3GB switch moves the user/kernel split, but for a single process to map more than 2GB it must be marked large-address-aware
  • Vista: different versions have memory limits that seem to be purely artificial (8GB, 16GB, 32GB, etc.), almost certainly out of market segmentation

Longer story / more background information

A 32-bit machine implies memory addresses are 32-bit, as is the memory address bus to go along with them. The reality is more complex, but the net effect is still that you can ask for 2^32 bytes of memory at byte resolution, so technically you can access up to 4GB.

The 'but' you hear coming is that 4GB of address space doesn't mean 4GB of memory use.

The device hole (32-bit setup)

One of the reasons the limit actually lies lower is devices. The top of the 4GB memory space (usually directly under the 4GB position) is used by devices. If your physical memory is less than that, this means using addresses you wouldn't use anyway, but if you have 4GB of memory, this means part of your memory is now effectively missing. The size of this hole depends on chipset, BIOS configuration, video card, and more(verify).

Assume for a moment you have a setup with a 512MB device hole - that would mean the 32-bit address range between 3.5GB and 4GB addresses devices instead of memory. If you have 4GB of memory plugged in, that last 512MB remains unassigned, and is effectively useless as it is entirely invisible to the CPU and everything else.

The BIOS settles the memory address map at boot time(verify), and you can inspect the effective map (Device Manager in windows, /proc/iomem in linux) in case you want to know whether it's hardware actively using the space (The hungriest devices tend to be video cards - the worst current case is probably two 768MB nVidia 8800s in SLI) or whether your motherboard just doesn't support more than, say, 3GB at all. Both these things can be the reason some people report seeing as little as 2.5GB out of 4GB you plugged in.

This is not a problem when using a 64-bit OS on a 64-bit processor -- unless, of course, your motherboard makes it one; there are various reported cases of this too.

Problems caused purely by limited address space on 32-bit systems can also be alleviated, using PAE:


In most computers today, memory management refers to cooperation between the motherboard and the operating system.

Applications are isolated from each other via virtual memory mapping. A memory map is kept for each process, meaning each can pretend it is alone on the computer. Each application has its own 4GB memory map without interfering with anything else (virtual mapping practice allowing).

Physical Address Extension, a hardware feature, is a memory mapping extension (not a hack, as some people think) that uses the fact that this memory map is a low-level thing. Since application memory locations are virtualised anyway, the OS can just map various 32-bit application memory spaces into the 36-bit hardware address space that PAE allows, which allows for 64GB (though most motherboards have a lower limit, for somewhat arbitrary reasons). This also solves the device hole problem, since the previously unmappable-and-therefore-unused RAM can now be comfortably mapped again (until the point where you can and actually do place 64GB in your computer).

PAE doesn't need to break the 32-bit model as applications get to see it. Each process can only see 4GB, but a 32-bit OS's processes can collectively use more real memory.

PAE implies some extra work on each memory operation, which in the worst case seems to kick a few percent off memory access speed. (so if without PAE you see 3.7GB out of the 4GB you actually have, it can be worth it to leave PAE off)

In the (relatively rare) cases where a program wants to handle so much memory that the normal allocation methods would deny it, that program can add PAE code (working with the OS).

All newish Linux and Windows versions support PAE, at least technically; Windows in particular may disable it for you. Everything since the last generation or two of 32-bit processors, and all 64-bit processors, can be assumed to support PAE -- to some degree. However:

  • The CPU isn't the only thing that accesses memory. Although many descriptions I've read seem kludgey, I easily believe that any device driver that does DMA and is not PAE-aware may break things -- such drivers are broken in that they do not know that the pointers used internally should be limited to 36-bit use.
  • PAE was disabled in WinXP's SP2 to increase stability related to such issues, while server versions of Windows are less likely to have problems since they tend to use more standard hardware, and thereby drivers.

Kernel/user split


The kernel/user split, mainly relevant on 32-bit OSes, refers to an OS-enforced formalism splitting the mappable process space between the kernel and each process.

It looks like Windows by default gives 2GB to each, while (modern) linuces apparently split into 1GB kernel, 3GB application by default (which is apparently rather tight for AGP and a few other things).

(Note: '3GB for apps' means that any single process is limited to map 3GB. Multiple processes may sum up to whatever space you have free.)

In practice you may want to shift the split, particularly in Windows, since almost everything that would want >2GB of memory runs in user space - mostly databases. The exception is Terminal Services (Remote Desktop), which seems to be kernel space.

It seems that:

  • linuxes tend to allow 1/3, 2/2 and 3/1
  • BSDs allow the split to be set to whatever you want(verify)
  • It seems(verify) Windows can only shift its default 2/2 split to 1GB kernel, 3GB application, using the /3GB boot option (the feature is somewhat confusingly called 4GT). Windows applications are normally compiled with the 2/2 assumption and will not be helped unless coded for it; exceptions seem to primarily include database servers.
  • You may be able to work around the limits with a 4G/4G split patch, combined with PAE - with some overhead.

See also


PCI, PCI-Express

Power Supplies, power consumption

TFT monitors

Computer noises


Hard drives

Fan noise

Coil whine