Color notes - objectively describing color

From Helpful
Revision as of 22:19, 10 May 2011 by Helpful

These are primarily notes.
They won't be complete in any sense; they exist to collect fragments of useful information.

Describing color

The simplest objective way to measure a color is probably to measure the electromagnetic energy spectrum for the relevant wavelength range (often quoted as between 400 and 700nm, give or take a little at either end). You know, the ROYGBIV[1] thing.

As a note, ROYGBIV colors are those that can be made from photons of a single wavelength. There are of course more colors than that, because when we see a mix of wavelengths, we see different colors - pink, white, whatever.

Because of this, a sample of how much of each wavelength is present carries more information. The result is usually called a Spectral Power Distribution (SPD).

Human color vision doesn't even enter into it, which is also why it's a representation with little to no loss. It's also overkill for almost all uses. You could reproduce things for better eyes than our own (as long as you cover the wavelength range they see - cameras regularly go into infrared, for example)
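As a minimal sketch of the idea, an SPD can be stored as samples of wavelength versus power. The values below are invented for illustration, not a real measurement:

```python
# An SPD as sampled data: wavelength (nm) -> relative power.
# A flat spectrum with some extra power in the blue region (made-up numbers).
spd = {wl: 1.0 for wl in range(400, 701, 10)}
spd[450] = 1.5

def band_power(spd, lo, hi):
    """Sum the sampled power between two wavelengths (inclusive)."""
    return sum(p for wl, p in spd.items() if lo <= wl <= hi)

print(band_power(spd, 400, 500))  # blue-ish end of the range
print(band_power(spd, 600, 700))  # red-ish end of the range
```

Real measurements would also record the sampling step and units, but the shape of the data is the same.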


When we use our eyes, it's not the SPD we perceive, but what you could call a flattened transformation of it. Different wavelengths can trigger the same receptors in the eye, so we see less information than there is. The way we have defined color is very much based on this simplification.

As a result, there are many SPDs that are metamers of others: looking like identical colors to us while being non-equal in SPD terms.

Our eyes can be described in stages: physical sensing by rods and cones, conversion to a tri-stimulus signal for transmission, and "whatever the brain does with that". Our color perception is a little more complex, and image perception more complex yet.
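A toy illustration of that flattening, and of metamerism: project two different SPDs through three sensor sensitivity curves and get identical tri-stimulus values. The four "wavelength bins" and the sensitivities here are invented for simplicity; real cone sensitivities are smooth overlapping curves.

```python
# Three made-up sensor sensitivities over four wavelength bins.
SENSORS = [
    [0.0, 0.0, 1.0, 1.0],  # long-wavelength sensor
    [0.0, 1.0, 1.0, 0.0],  # medium
    [1.0, 1.0, 0.0, 0.0],  # short
]

def tristimulus(spd):
    """Project a sampled SPD onto the three sensor sensitivities."""
    return [sum(s * p for s, p in zip(sensor, spd)) for sensor in SENSORS]

spd_a = [1.0, 1.0, 1.0, 1.0]
spd_b = [1.5, 0.5, 1.5, 0.5]   # differs per wavelength...

print(tristimulus(spd_a))
print(tristimulus(spd_b))      # ...yet gives the same response: a metamer pair
```

Any SPD difference that lands in the null space of the sensitivity matrix is invisible to the sensors, which is exactly why metamers exist.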


Most of the time we only care about color specification for human consumption, and sometimes we care about an absolute reference - for example, so we can specify the exact color tone a photo should have, rather than 'sorta yellow like they used to make them'.


You can see color spaces as simplifications of SPD-style representation that were designed with human eyes in mind. Most color spaces are absolutely referenced, usually via CIE's work.

In computers, colors are often communicated in such an absolutely referenced space (e.g. in printer and monitor drivers, so that color management knows how to reproduce colors decently), or in a not very referenced but sorta-good-enough way. RGB values in HTML, for example, aren't absolutely referenced. Images aren't necessarily absolutely referenced either, nor are all digital photos (though it's more likely there).


Spectra and White, Context and Adaptation, Color temperature

Generally speaking, any color we see is (our processed perception of) a mix of electromagnetic energy at various frequencies.


More technically, we see the spectrum that the light source gives off, minus the power absorbed by anything the light bounced off before it got to our eyes. This means the light source is an inseparable part of color perception.
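That subtraction is, per wavelength, a multiplication: the illuminant's SPD times the surface's reflectance (a fraction between 0 and 1). A minimal sketch with invented numbers:

```python
# The spectrum reaching the eye is the illuminant's SPD multiplied,
# wavelength by wavelength, by the surface's reflectance (0..1).
# Both sets of values below are invented for illustration.

illuminant  = {400: 0.6, 500: 0.9, 600: 1.0, 700: 1.1}   # warmish light
reflectance = {400: 0.2, 500: 0.3, 600: 0.8, 700: 0.9}   # a reddish surface

def reflected(illuminant, reflectance):
    return {wl: illuminant[wl] * reflectance[wl] for wl in illuminant}

print(reflected(illuminant, reflectance))
```

Change the illuminant and the reflected spectrum changes with it - the same surface under different light produces a different stimulus.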

Now you may vaguely remember some science experiment with colored light sources, but what isn't always mentioned is that humans are prepared to call 'white' the brightest parts of a fairly equally distributed spectrum. We adapt to it within seconds, or minutes at most (consider e.g. how things look tinted for a moment when you move between sunlight and indoor light). There is a pretty large difference between the sun and artificial light, and between various types of artificial light. We're often intuitively aware that there is a difference, but don't always appreciate just how different the spectrum we call white in each scene is, nor how much that affects our color perception - which is both practical for us and a bit of a pain for color description.

Actually, context is more important than just white. There are a bunch of color-context optical illusions out there to drive the point home, even if the effect is subtler in most real-world cases.

Short story: a color very much depends on the colors around it, and on what counts as white.


Color temperature is a model based on black-body radiation, a radiation that has a single smoothish spectral peak. For roughly 2000K-10000K this peak lies in or near the human-visible range; something as low as roughly 800K can already be seen as a red glow (e.g. red-hot iron), and above that range the visible output falls off only slowly, because the overall intensity grows even though the peak moves further away.

Sunlight is usually quoted as something between 5500 and 6000 Kelvin (the sun's surface is indeed around 5800K), so its peak lies in the visible range - broad enough to cover most of it, though not with equal strength everywhere.
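You can see where the peak lands by evaluating Planck's law over a wavelength grid; a sketch, using SI constants:

```python
import math

# Black-body spectral radiance (Planck's law) and where its peak lies.
h = 6.62607e-34   # Planck constant
c = 2.99792e8     # speed of light
k = 1.38065e-23   # Boltzmann constant

def planck(wavelength_m, T):
    """Spectral radiance of a black body at temperature T (Kelvin)."""
    a = 2.0 * h * c**2 / wavelength_m**5
    return a / math.expm1(h * c / (wavelength_m * k * T))

def peak_nm(T):
    """Find the peak wavelength numerically over a 100-3000 nm grid."""
    return max(range(100, 3001), key=lambda nm: planck(nm * 1e-9, T))

# For the sun's ~5800K surface the peak sits around 500nm, well inside the
# visible range; Wien's displacement law (2.898e-3 / T) predicts the same.
print(peak_nm(5800))
print(peak_nm(2500))   # a ~2500K bulb peaks in the infrared
```

The second call illustrates the note below about incandescent bulbs: most of their output is infrared, not visible light.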


Color temperature is often more of a symbolic estimate than really accurate, since most human-made light sources (and many natural ones) don't have a spectrum described by a single black-body-style peak.

Even so, it is a useful way to describe the color bias in whites in different situations. For example, fire is somewhere around 1700-2000K, light bulbs around 2500K (fire and light bulbs peak in infrared rather than visible light), sunlight 5000-6000K, overcast daylight around 6500K. Most indoor lighting is intentionally biased to warmer light, somewhere between 2000 and 6000K. (Speciality bulbs can do 10000K or 15000K.)

Color temperature estimates are also useful for simple warm/cold corrections - for example, making a photo taken in ~5000K sunlight look roughly the same on a ~9000K CRT screen, a ~6000K LCD, and a 6500K or 7100K TV, rather than particularly red on one and blue on another.


Digital cameras' white balance is for the most part an assumption about the color temperature of the scene, which lets the camera correct its channel response so that the lighting source appears white rather than warmer or colder. We tend to like relatively warm pictures, though, so cameras are regularly biased somewhat that way.
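The simplest form of such a correction is per-channel scaling (a von Kries-style adjustment in the device's RGB space): estimate the illuminant's RGB and scale each channel so that color becomes neutral. A sketch with an assumed warm illuminant:

```python
# Minimal white balance by per-channel gains. The illuminant RGB is an
# assumed value here; real cameras estimate it from the scene.

def white_balance(pixel, illuminant_rgb):
    """Scale channels so illuminant_rgb maps to neutral gray."""
    reference = max(illuminant_rgb)
    gains = [reference / ch for ch in illuminant_rgb]
    return tuple(min(255, round(v * g)) for v, g in zip(pixel, gains))

warm_light = (255, 230, 200)   # assumed warm (low color temperature) light
print(white_balance(warm_light, warm_light))  # the light itself becomes neutral
print(white_balance((120, 110, 95), warm_light))
```

Real cameras do this with more care (and in a better-chosen color space), but the principle - divide out the illuminant - is the same.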





More official color spaces have a white point predefined, so that their definition can be absolute.

In practice, you use one of the common predefined ones, often given as a spectrum, or as a coordinate in one of the more absolute color spaces, like CIE's XYZ.

Perceptually, it is important to realize that light can be reflected, absorbed, and emitted in various ways, and details like diffuse versus specular reflection affect our perception - but in the end it's all technically measurable as spectra.



Most mentioned effects can be fairly closely approximated by fairly simple formulae, which helps in halfway decent color use/advice.


Illuminants and white points

'White point' is the formal side of white. The term is not mentioned that often, but it is important in any conversion that has to stay color-consistent. You may care for various reasons, such as:

  • you may want to light art with lightbulbs that give off a daylight sort of white, or with the same type of fluorescent light something was painted under.
  • you probably want to match your monitor colors to printed colors. This largely involves gamma, but also color temperature, since even a relatively small 'lamp reddish' effect can be annoying. This is one of several aspects that help printed photos look like they do on your monitor.


Materials reflect/absorb light in a specific way that you could call their color. However, we only see objects when actual light (some illuminant) hits them, and all light is tinted in some way. Light bulbs emit warmer light than sunlight, so indoor photos have a habit of looking yellow-orange tinted when contrasted with outdoor photos.

In formal definitions, each illuminant has a brightest point, a white point. The fact that it's not ideal white doesn't matter so much - human color interpretation is quick to interpret colors relative to the brightest color we are seeing.

In reality, the white will be a bright, low-saturation color, often given in x,y chromaticity coordinates, sometimes in X,Y,Z coordinates. White point conversions need to happen via XYZ.
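Going from an x,y chromaticity (plus a luminance Y, usually 1 for a white point) to XYZ is a small formula. A sketch, using the commonly published CIE 2° observer chromaticities for D65 and D50:

```python
# Convert an x,y chromaticity (with luminance Y) to XYZ tristimulus values.

def xy_to_XYZ(x, y, Y=1.0):
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    return (X, Y, Z)

# Widely published CIE 2-degree observer chromaticities:
D65 = xy_to_XYZ(0.3127, 0.3290)   # roughly (0.9505, 1.0, 1.0891)
D50 = xy_to_XYZ(0.3457, 0.3585)
print(D65, D50)
```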


To be fair, it doesn't always matter. For diagrams, colors only really have meaning in direct (adjacent) context. Sparse diagrams on white(ish) paper all look about the same.

Still, in photo work, you may want the ability to accurately consider white points when converting between color spaces. Not so much to reflect reality exactly (photos may have intentional white balance choices - properly desaturated indoor photos can look a little depressing), but to keep a picture's look (mainly its tint) the same across different devices - say, monitor and printer, which use entirely different processes to let you see things - or to calibrate a scanner (which often uses fluorescent light) so that it doesn't change colors.


Some illuminants

Some relatively common illuminants (with estimated color temperature):

CIE's D series illuminants represent daylight;

  • D65 (6504K), used in TV, sRGB, and others, represents average (noon) daylight.
  • D50 (5003K) represents warmer, more horizon-like daylight, and is the standard reference in printing/graphic arts.

Other CIE illuminants (most are considered obsolete):

  • Illuminant A (about 2856K) is tungsten incandescent light; burning candles are warmer still, around 1850K.
  • Illuminant E is a hypothetical energy-equal illuminant, used for CIE RGB.
  • Illuminant B is direct sunlight (4800K)
  • Illuminant C (~6800K), a tad bluer than D65.

These illuminants are defined by CIE XYZ coordinates, making them absolute. The Kelvin temperatures are really just approximations, convenient for description.


Chromatic adaptation

You can do a decent conversion between colors under different illuminants by converting via XYZ and adding a chromatic adaptation transform.

It is common to do this using the Bradford transform, instead of the older von Kries transform or the naive lack of any transform.

The transform uses the two illuminants' white points and one of three common formulae, which differ in which matrix they use:

  • Bradford
  • von Kries
  • simple XYZ scaling (the naive option)
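The mechanics are the same for all three: map XYZ into an approximate cone-response domain, scale by the ratio of the destination and source white points there, and map back. A sketch of the Bradford variant; the matrix is the standard published Bradford matrix, and the white points are common D65/D50 tristimulus values with Y=1:

```python
# Bradford chromatic adaptation: XYZ -> cone-like domain -> scale -> XYZ.

BRADFORD = [[ 0.8951,  0.2664, -0.1614],
            [-0.7502,  1.7135,  0.0367],
            [ 0.0389, -0.0685,  1.0296]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def inverse3(M):
    """Invert a 3x3 matrix via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

def adapt(xyz, white_src, white_dst):
    """Adapt an XYZ color from one white point to another (Bradford)."""
    cone_src = mat_vec(BRADFORD, white_src)
    cone_dst = mat_vec(BRADFORD, white_dst)
    cone = mat_vec(BRADFORD, xyz)
    scaled = [c * d / s for c, s, d in zip(cone, cone_src, cone_dst)]
    return mat_vec(inverse3(BRADFORD), scaled)

D65 = (0.95047, 1.0, 1.08883)
D50 = (0.96422, 1.0, 0.82521)
# Adapting the D65 white itself should land on the D50 white.
print(adapt(D65, D65, D50))
```

The von Kries variant is the same code with a different matrix; XYZ scaling skips the cone-domain step entirely and just scales X, Y, Z directly, which is why it performs worst.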