Color notes - objectively describing color




Describing color

Light can be reflected, absorbed, and emitted in various ways, and our perception is influenced by automatic assumptions: movement, shadow, and details like diffuse versus specular reflection all affect what we perceive.


If we are talking about just one color to put into an image, it's all measurable as abstract spectra.

If we are talking about its perception within that image, context matters to the perception of that one color (and all others in it).


So describing all of our visual interpretation is wildly complex, but a good start is to aim for the most descriptive, objective way to look at a single color.

(Figure: the spectrum of a fluorescent tube within the human-visible range: relatively even, which is why we perceive it as white, with a broad bulk of energy peaking around yellow (5000K is a moderately warm white), plus some narrow peaks related to the mercury inside, which are not perceived as strongly as that bulk.)

The answer from physics would probably be a Spectral Power Distribution[1] (SPD), which you can see as a histogram showing how much energy there is for the range of frequencies that humans happen to call visible (or a larger range, for some applications).


Real-world SPDs often have a wide blob (e.g. black-body radiation), and/or some specific narrow peaks (e.g. gas discharge of specific gases, and most LEDs other than white ones). See e.g. Spectra of light sources.


SPDs would be a good standard to precisely reproduce everything, not only for human eyes but also for eyes better than our own - which exist in the animal world.


At the same time, SPDs don't really reflect how human eyes work at a low level, or how we describe color.

We see a simplification, due both to seeing intensities through a few broad color filters, and to processing of those responses that effectively reduces the information further.


As a result, there are a lot of SPDs we really couldn't tell apart. These are called metamers: different SPDs that look like identical colors to us. The Wikipedia example is that a single pure yellow peak, and a specific combination of mainly a red and a green peak (slightly above and below yellow), produce the same excitation after those mentioned filters in the eye. We couldn't tell them apart if we wanted to.

We rarely care a lot about metamers, but it matters in that, while we could e.g. store images as SPDs per pixel, that would be overkill for reproduction towards human eyes -- and also harder and more expensive to measure, so we don't. (Cameras are actually more like imitations of our eyes' filtering.)
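
As a sketch of that reduction (my illustration, using crude single-Gaussian stand-ins for the real, tabulated CIE color matching functions): integrating a spectrum against three broad filters collapses it to just three numbers, which is exactly why very different SPDs can come out indistinguishable.

 import numpy as np
 
 wl = np.arange(380, 781, 1.0)   # wavelength grid over the visible range, in nm
 
 def gauss(center, width):
     return np.exp(-0.5 * ((wl - center) / width) ** 2)
 
 # Crude single-Gaussian stand-ins for the CIE xbar, ybar, zbar color matching
 # functions. The real ones are tabulated data (and xbar has a second lobe in
 # the blue), so this is illustrative only.
 xbar, ybar, zbar = gauss(600, 40), gauss(555, 45), gauss(450, 25)
 
 def spd_to_xyz(spd):
     # Integrating the spectrum against three broad filters collapses it
     # to just three numbers -- the reduction described above.
     return np.array([np.trapz(spd * xbar, wl),
                      np.trapz(spd * ybar, wl),
                      np.trapz(spd * zbar, wl)])
 
 # A single yellow-ish peak...
 yellow = gauss(580, 10)
 # ...and a red + green pair straddling it, with weights solved so that
 # X and Y match the yellow exactly (Z is near zero for all of these).
 red, green = gauss(620, 10), gauss(545, 10)
 A = np.column_stack([spd_to_xyz(red)[:2], spd_to_xyz(green)[:2]])
 a, b = np.linalg.solve(A, spd_to_xyz(yellow)[:2])
 
 print(spd_to_xyz(yellow))
 print(spd_to_xyz(a * red + b * green))   # (nearly) the same triple from a very different spectrum: a metamer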

(These kinds of workings are also behind the statement that "technically, magenta doesn't exist". We process any SPD as a single color in the end, but magenta has the relatively unusual property of having a distinct break between its peaks of violet (~400nm) and red (~700nm). The point of the statement "magenta doesn't exist" is that there is no single-peak color that gives the same color sensation. This is also roughly the argument for why white isn't a color, and black isn't. But if the point is that removing all but the largest peak of the SPD wouldn't reproduce it, then that's true for most color experiences we see every day. Magenta is one of the worst cases, but this is still a fairly petty semantic argument.)


Human eyes (and the image sensors made to imitate them) are somewhat complex, but can be seen as a broad response to peaks in the spectrum.

Color spaces made for humans are simplified in the same ways.


(And that's ignoring the details of color perception, which is fairly eager to adapt. It's also ignoring image perception, which is more complex yet - the amount of area, context, and more can be relevant)



Most color spaces are absolutely referenced somehow, usually via CIE's work.

In computers, colors may be communicated in such an absolutely referenced space so that color management can know how to reproduce colors decently, with monitor drivers and printer drivers specifying the range of these devices and how to get there.

In theory, anyway - practice is often a little fudged.

Not everything is defined with an absolute reference. Color names and RGB values in HTML were historically not tied to a specific space (these days they are generally taken to be sRGB).

Many images are not particularly referenced, or incorrectly referenced. Even so, the difference is usually just a bit of tint, and not large enough to bother anyone.

Spectra and White, Context and Adaptation, Color temperature

When we see an object, we see the spectrum of the light falling on it minus whatever that object absorbs.

As such, the light source is an inseparable part of color perception.

Objectively, you can describe any color we see as a mix of EM energy at various frequencies in the range we call visible light.


In an image or scene, color also works in context.

This results in Color constancy: we tend to describe the same thing as the same color under differently tinted (and differently strong) lights.

Adapting to white-ish is both practical for us, and a bit of a pain when you want to model color description.


If we see something that is both bright and has a fairly equal spectrum (post-human-processing, but we can ignore its details for now), we are quite prepared to call it white.

Consider e.g. going indoors after being in the sun - aside from things looking dark, they also look a little blue at first. But give us a few minutes to adapt and we use the present light as a reference point for colors, without even realizing it.


This works fairly well even when looking at an image made under an illuminant other than the one you are currently in. In a scene that involves varied color but also one of these fairly-equal-spectrum areas, we are easily prepared to call that white. In fact, automatic illuminant corrections like gray world and retinex are based fairly directly on this.
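
A minimal sketch of the gray-world idea mentioned above (my illustration, assuming roughly linear-light RGB values in [0, 1]): treat the scene average as gray, and blame any channel imbalance in that average on the illuminant.

 import numpy as np
 
 def gray_world(image):
     # image: float RGB array of shape (H, W, 3), values in [0, 1], assumed
     # roughly linear light (a real implementation would undo gamma first).
     # Gray-world assumption: the scene averages out to neutral, so any
     # channel imbalance in the means is blamed on the illuminant.
     means = image.reshape(-1, 3).mean(axis=0)    # per-channel averages
     gain = means.mean() / means                  # push each channel toward the overall mean
     return np.clip(image * gain, 0.0, 1.0)
 
 # Example: a random scene tinted by a warm illuminant (more red, less blue)
 rng = np.random.default_rng(0)
 scene = np.clip(rng.random((32, 32, 3)) * [1.2, 1.0, 0.7], 0, 1)
 balanced = gray_world(scene)
 print(scene.reshape(-1, 3).mean(axis=0))      # unbalanced channel means
 print(balanced.reshape(-1, 3).mean(axis=0))   # roughly equal after correction

Retinex-style methods are in a similar spirit, but lean on local comparisons and the brightest areas rather than the global mean.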

Consider that there is a fairly large difference between the sun and artificial light, and between various types of artificial light. You're probably somewhat aware of this, but not of how large the differences actually are, or of how easily we adapt to them (unless the light is particularly monochromatic, as some arc lighting is).



Color temperature is a model based on black-body radiation, which has a single smoothish peak in the EM spectrum that varies with the temperature.

From 2000 to 10000 Kelvin, enough of this lies in the human-visible range to be a useful descriptor.

Something around 800K and up can be seen as a dim red glow, which is why things like hot metal and lava look red (it has little to do with the material; it's just that many other things catch fire or melt away before getting that hot). Over ~10000K the peak is outside the visible range, though the curve's falloff into it stays visible because the energy involved is higher.


Color temperature is often used to approximately describe the sort of mix of light, primarily whether it's on the mostly-red or mostly-blue side. It's often more of a symbolic estimate than particularly accurate, since most light sources don't have a spectrum that looks like a black-body-style peak.

It's decent for the sun, though. Tables list its color temperature as somewhere between 5000 and 6000 Kelvin (its surface temperature is indeed around 5800K), so its peak lies in the visible range.

Fire is somewhere around 1700-2000K, light bulbs around 2500K (fire and lightbulbs peak in infrared rather than visible light), sunlight 5000-6000K, overcast sunlight 6500K. Most indoor lighting is intentionally biased to warmer light, somewhere between 2000 and 6000K. Speciality bulbs can do 10000K or 15000K.
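
To make those numbers a bit more concrete, here's a small sketch (ordinary physics, not specific to this page) of the black-body math: Planck's law gives the spectrum, Wien's displacement law gives where its peak sits, and that peak only lands in the visible band (~380-750nm) for temperatures of a few thousand Kelvin.

 import numpy as np
 
 H = 6.626e-34      # Planck constant, J*s
 C = 2.998e8        # speed of light, m/s
 K = 1.381e-23      # Boltzmann constant, J/K
 WIEN_B = 2.898e-3  # Wien's displacement constant, m*K
 
 def planck(wavelength_m, temp_k):
     # Spectral radiance of a black body at the given wavelength and temperature
     a = 2.0 * H * C**2 / wavelength_m**5
     return a / (np.exp(H * C / (wavelength_m * K * temp_k)) - 1.0)
 
 def peak_nm(temp_k):
     # Wien's displacement law: wavelength of the spectral peak, in nm
     return WIEN_B / temp_k * 1e9
 
 for t in (1800, 2700, 5800, 10000, 15000):
     print(f"{t:>6} K   peak ~{peak_nm(t):5.0f} nm")
 
 # ~1800 K (flames) and ~2700 K (incandescent bulbs) peak deep in the infrared,
 # ~5800 K (sunlight) peaks around 500 nm, well inside the visible range,
 # and 10000+ K peaks in the ultraviolet, leaving only the curve's tail visible.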


Color temperature estimations are also useful to correct the reproduction of photos, for example to make a photo taken in ~5000K sunlight look roughly the same on a ~9000K CRT screen, a ~6000K LCD, or a 6500K or 7100K TV.

Photo cameras tend to assume and correct for illuminants in the first place. Without this, indoor scenes (lightbulbs, candles) would look very orange. Cameras are often biased to a relatively warm result, while there is quite a range in artificial lighting's color temperature (see also e.g. the middle of this page).


Official color spaces have a white point predefined, so that their definition can be absolute, and their conversions well defined.

In practice, you often use one of the common predefined ones, or use a coordinate in CIE's XYZ or something else absolute.


Illuminants and white points

'White point' is the formal side of white. The term is not mentioned that often, but it is important in any conversion that has to stay color-consistent. You may care for various reasons, such as:

  • you may want to light art with lightbulbs that give off a daylight sort of white, or with the same type of fluorescent light something was painted under.
  • you probably want to match your monitor colors to printed colors. This largely involves gamma, but also color temperature since even a relatively small 'lamp reddish' effect can be annoying. This is one aspect among others that help printed photos look like they do on your monitor.


Materials reflect/absorb light in a specific way that you could call their color. However, we only see objects when actual light (some illuminant) hits them, and all light is tinted in some way. Lightbulbs emit warmer light than sunlight, so indoor photos have a habit of looking like they have a yellow-orange sort of tint when contrasted with outdoor photos.

In formal definitions, each illuminant has a brightest point, a white point. The fact that it's not an ideal white doesn't matter so much -- human color interpretation is quick to interpret colors relative to the brightest color we are seeing.

In practice, the white point will be a bright, low-saturation color, often given in x,y chromaticity coordinates, sometimes as X,Y,Z coordinates. White point conversions need to happen via XYZ.
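
For concreteness, a small sketch (not from this page) of getting from an x,y white point to X,Y,Z, using the commonly published D65 and D50 chromaticities:

 def xy_to_XYZ(x, y, Y=1.0):
     # Convert an x,y chromaticity (plus a chosen luminance Y) to X,Y,Z
     X = x * Y / y
     Z = (1.0 - x - y) * Y / y
     return (X, Y, Z)
 
 D65 = xy_to_XYZ(0.3127, 0.3290)   # ~ (0.9505, 1.0000, 1.0891)
 D50 = xy_to_XYZ(0.3457, 0.3585)   # ~ (0.9643, 1.0000, 0.8251)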


To be fair, it doesn't always matter. For diagrams, colors only really have meaning in direct (adjacent) context. Sparse diagrams on white(ish) paper all look about the same.

Still, in photo work, you may want the ability to accurately consider white points when converting between color spaces. Not so much to reflect reality exactly (photos may have intentional white balance choices - properly desaturated indoor photos can look a little depressing), but to keep a picture's look (mainly its tint) the same across different devices - say, monitor and printer, which use entirely different processes to let you see things - or to calibrate a scanner (which often uses fluorescent light) so that it doesn't change colors.

Some illuminants

Some relatively common illuminants (with estimated color temperature):

CIE's D series illuminants represent phases of daylight:

  • D65 (6504K), used in TV, sRGB, and others, represents average midday daylight.
  • D50 (5003K) represents a warmer, horizon-ish daylight, and is common in print and ICC work.

Other CIE illuminants (some of which are now considered obsolete):

  • Illuminant A (about 2856K) is incandescent (tungsten) light; burning candles are warmer still, roughly the 1700-2000K mentioned above.
  • Illuminant E is a hypothetical equal-energy illuminant, used for CIE RGB.
  • Illuminant B is direct sunlight (~4800K).
  • Illuminant C (~6800K) is average daylight, a tad bluer than D65.

These illuminants are defined by CIE XYZ coordinates. The Kelvin temperatures are just approximations, convenient for description.
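
As an aside on how a Kelvin figure gets attached to a chromaticity at all: McCamy's approximation is a commonly cited shortcut (my sketch; only a rough estimate, and only sensible near the daylight/black-body locus):

 def mccamy_cct(x, y):
     # Approximate correlated color temperature (in K) from an x,y chromaticity
     n = (x - 0.3320) / (0.1858 - y)
     return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33
 
 print(round(mccamy_cct(0.3127, 0.3290)))   # D65 -> ~6505
 print(round(mccamy_cct(0.3457, 0.3585)))   # D50 -> ~5001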


Chromatic adaptation

You can do a decent conversion between colors under different illuminants by converting via XYZ and applying a chromatic adaptation transform.

It is common to do this using the Bradford transform, instead of the older von Kries transform or the naive lack of any transform; a sketch follows below.

The transform uses the details of the two illuminants (source and destination white points) and one of a few cone-response matrices, commonly:

  • Bradford
  • von Kries
  • XYZ scaling (the naive option mentioned above)
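
A sketch of what that looks like in practice - the linearized Bradford variant, with the cone-response matrix as commonly published (e.g. by Bruce Lindbloom); treat the exact coefficients as something to verify against a reference:

 import numpy as np
 
 # Bradford cone-response matrix (XYZ -> rho, gamma, beta), as commonly published
 M_BRADFORD = np.array([
     [ 0.8951,  0.2664, -0.1614],
     [-0.7502,  1.7135,  0.0367],
     [ 0.0389, -0.0685,  1.0296],
 ])
 
 def adapt(xyz, white_src, white_dst, M=M_BRADFORD):
     # Chromatically adapt an XYZ color from one white point to another:
     # scale each cone-like channel by the ratio of the white points,
     # then go back to XYZ.
     cone_src = M @ np.asarray(white_src)
     cone_dst = M @ np.asarray(white_dst)
     M_adapt = np.linalg.inv(M) @ np.diag(cone_dst / cone_src) @ M
     return M_adapt @ np.asarray(xyz)
 
 # Example: move a color from a D50-referenced space to D65
 D50 = (0.9642, 1.0000, 0.8250)
 D65 = (0.9505, 1.0000, 1.0889)
 print(adapt((0.5, 0.4, 0.3), D50, D65))
 
 # Passing M=np.eye(3) instead gives the naive "just scale XYZ" version,
 # and the von Kries matrix can be substituted the same way.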