Color notes

{{notes}}
 
 
==Describing color==
 
The simplest ''objective'' way to measure a color is probably to measure the electromagnetic energy spectrum for the relevant wavelength range (often quoted as between 400 and 700nm, give or take a little at either end). You know, the ROYGBIV[http://en.wikipedia.org/wiki/ROYGBIV] thing.
 
As a note, ROYGBIV colors are those that can be made from photons of a single wavelength. There are of course more colors than that, because when we see a mix of wavelengths, we see different colors - pink, white, whatever.
 
Because of this, a sampling of how much of each wavelength is present carries more information. The result is usually called a Spectral Power Distribution (SPD).
 
Human viewing of color doesn't even enter into it, which is also why it's a representation with little to no loss. It's also overkill for almost all uses. You could reproduce things for better eyes than our own (as long as you cover the wavelength range they see - cameras regularly go into infrared, for example).
 
 
When we use our eyes, it's not the SPD we perceive, but what you could call a flattened transformation of it. Different wavelengths can trigger the same receptors in our eyes, so we see less information than there is. The way we have defined color is very much based on this simplification.
 
As a result, there are many SPDs that are metamers of others: looking like identical colors to us while being non-equal in SPD terms.
 
Our eyes can be described as: physical sensing as rods and cones, conversion to a tri-stimulus signal for transmission, and "whatever the brain does with that". Our color perception is a little more complex, and image perception more complex yet.
 
 
Most of the time we only care about color specification for human consumption, and sometimes we care about an absolute reference so that, for example, we can specify the exact color tone a photo should have, rather than 'sorta yellow like they used to make them'.
 
 
You can see '''color spaces''' as simplifications of SPD-style representation that were designed with consideration for human eyes. Most color spaces are absolutely referenced, usually via CIE's work.
 
In computers, colors are often communicated in such an absolutely referenced space {{comment|(e.g. in printer drivers, monitor drivers, so that color management knows how to reproduce colors decently)}}, or in a not very referenced but sorta-good-enough way. For example, RGB values in HTML aren't. Images aren't ''necessarily'' absolutely referenced, nor are all digital photos (though it's more likely there).
 
 
==Spectra and White, Context and Adaptation, Color temperature==
Generally speaking, any color we see is (our processed perception of) a mix of electromagnetic energy at various frequencies.
 
 
More technically, we see the spectrum that the light source gives off, minus the power absorbed by anything the light bounced off before it got to our eyes.
Meaning that the light source is an inseparable part of color perception.
 
Now you may vaguely remember some science experiment with colored light sources, but what isn't always mentioned is that humans are prepared to call 'white' the brightest part of a fairly evenly distributed spectrum.
We adapt to it within seconds or minutes at most (consider e.g. going indoors after being in the sun - things look a little blue). There is a pretty large difference between the sun and artificial light, and between various types of artificial light. We're often intuitively aware that there is a difference, but don't always appreciate just how different the spectrum we call white in each scene is, nor how much that affects our color perception - which is both practical for us and a bit of a pain for color description.
 
Actually, context is more important than just white. There are a bunch of color-context optical illusions out there to drive the point home, even if the effect is subtler in most real-world cases.
 
Short story: color very much depends on the colors around it and white.
 
 
 
'''[http://en.wikipedia.org/wiki/Color_temperature Color temperature]''' <!--(Note: one of the uses of the word chromaticity also refers to color temperature)--> is a model based on black body radiation, radiation that has a single smoothish peak. For ~2000K-10000K this peak lies somewhere in the human-visible range; something as low as roughly 800K can already be seen as a red glow (e.g. red-hot iron), and for higher temperatures there's a slowish falloff because the overall intensity grows even though the peak moves further away.
 
Sunlight is usually quoted as something between 5500 and 6000 Kelvin {{comment|(the Sun's surface is indeed around 5800K)}}, so its peak lies in the visible range - broad enough to cover most of the visible range, though not really covering it at equal strength everywhere.
 
 
Color temperature is often more of a symbolic estimate than really accurate, since most human-made light sources (and many natural ones) don't have a spectrum described by a single black-body-style peak.
 
Even so, it is a useful way to describe the color bias in whites in different situations. For example, fire is somewhere around 1700-2000K, light bulbs around 2500K {{comment|(fire and lightbulbs peak in infrared rather than visible light)}}, sunlight 5000-6000K, overcast daylight around 6500K. Most indoor lighting is intentionally biased to warmer light, somewhere between 2000 and 6000K. Speciality bulbs can do 10000K or 15000K.
 
Color temperature estimates are also useful for simple warm/cold corrections - for example, making a photo taken in ~5000K sunlight show roughly the same on a ~9000K CRT screen, a ~6000K LCD, and a 6500K or 7100K TV, avoiding it looking particularly red on one and blue on another.
 
 
Digital cameras' white balance is for the most part an assumption of color temperature, which lets the camera correct its channel response so that the light source appears white rather than warmer or colder. We tend to like relatively warm pictures, though, so cameras are regularly somewhat biased toward that.
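
As a minimal sketch of the idea {{comment|(hypothetical code, not any camera's actual pipeline; real cameras use calibrated per-illuminant corrections rather than this naive per-channel scaling)}}:

 import numpy as np
 
 def white_balance(rgb, illuminant_rgb):
     # Scale each channel so the given illuminant color becomes neutral gray.
     # rgb: float array (height, width, 3) with linear values in 0..1
     # illuminant_rgb: the camera's response to the scene's white, e.g.
     # measured from a white patch or assumed from a color temperature
     scale = max(illuminant_rgb) / np.asarray(illuminant_rgb, dtype=float)
     return np.clip(rgb * scale, 0.0, 1.0)
 
 # Example: correcting a warm (incandescent-ish) cast where red reads strongest
 balanced = white_balance(np.random.rand(4, 4, 3), [1.0, 0.8, 0.6])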
 
 
(See also e.g. [http://www.palagems.com/gem_lighting2.htm the middle of this page])
 
 
 
 
More official color spaces have a white point predefined, so that their definition can be absolute.
 
In practice, you use one of the common predefined ones, often given as a spectrum, or as a coordinate in one of the more absolute color spaces, like CIE's XYZ.
 
Perceptually, it is important to realize that light can be reflected, absorbed, and emitted in various ways, and details like diffuse and specular reflection affect our perception, but in the end it's technically all measurable as spectra.
 
<!--(Of course, our perception goes a little beyond spectra, using context, recognition, expectations, and such. For example, gold has a warm orangish yellow color, and with a matte finish would be that as a simple monotone - but we are much more likely to recognize gold when it has the chrome-like reflection, which you only see in an overall scene)-->
 
<!--, and also to let you do some correction for it-->
 
 
Most of the mentioned effects can be fairly closely approximated by simple formulae, which helps in giving halfway decent color use/advice.
 
 
 
===Illuminants and white points===
'White point' is the formal side of white.
The term is not mentioned that often, but it is important in any conversion that has to stay color-consistent. You may care for various reasons, such as:
* you may want to light art with lightbulbs that give off a daylight sort of white, or with the same type of fluorescent light something was painted under.
 
* you probably want to match your monitor colors to printed colors. This largely involves gamma, but also color temperature since even a relatively small 'lamp reddish' effect can be annoying. {{comment|This is one aspect among others that help printed photos look like they do on your monitor.}}
 
 
Materials reflect/absorb light in a specific way that you could call their color.
However, we only see objects when actual light (some illuminant) hits them, and all light is tinted in some way. Lightbulbs emit warmer light than sunlight, so indoor photos have a habit of looking like they have a yellow-orange sort of tint when contrasted with outdoor photos.
 
In formal definitions, each illuminant has a brightest point, a white point. The fact that it's not ideal white doesn't matter so much -- human color interpretation is quick to read colors relative to the brightest color we are seeing.
 
In reality, the white will be a bright, low-saturation color, and often given in x,y coordinates, sometimes in X,Y,Z coordinates. White point conversions need to happen via XYZ.
 
 
To be fair, it doesn't always matter. For diagrams, colors only really have meaning in direct (adjacent) context. Sparse diagrams on white(ish) paper all look about the same.
 
Still, in photo work, you may want the ability to ''accurately'' consider white points when converting between color spaces. Not so much to reflect reality exactly (photos may have intentional white balance choices - properly desaturated indoor photos can look a little depressing), but to be able to keep a picture's look (mainly its tint) the same across different devices - say, monitor and printer, which use entirely different processes to let you see things - or to calibrate a scanner (which often uses fluorescent light) so that it doesn't change colors.
 
 
===Some illuminants===
Some relatively common illuminants (with estimated color temperature):
 
CIE's D series illuminants represent various phases of daylight:
* D65 (6504K), used in TV, sRGB, and others, represents average midday daylight.
* D50 (5003K) represents warmer, more indirect daylight, and is common in print work.

Other CIE illuminants (most are considered obsolete):
* Illuminant A (about 2856K) is incandescent/tungsten light.
* Illuminant E is a hypothetical equal-energy illuminant, used for CIE RGB.
* Illuminant B is direct sunlight (~4870K).
* Illuminant C (~6770K) is a tad bluer than D65.
 
These illuminants are defined by CIE XYZ coordinates, making them absolute. The Kelvin temperatures are really just approximations, convenient for description.
 
See also:
* http://en.wikipedia.org/wiki/White_point
* http://home.wanadoo.nl/paulschils/07.01.html
 
===Chromatic adaptation===
You can do a decent conversion between colors under different illuminants by converting via XYZ and adding a chromatic adaptation transform.

It is common to do this using the Bradford transform, instead of the older von Kries transform or the naive lack of any transform.

Such a transform uses the two illuminants' white points and one of a few matrices:
* Bradford
* von Kries
<!--* XYZ Scaling-->
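
For illustration, a numeric sketch of such a transform in code {{comment|(a sketch; the Bradford matrix and white points below are the commonly published values, but treat the details as an approximation of real implementations)}}:

 import numpy as np
 
 # Bradford cone-response matrix, as commonly published
 M = np.array([[ 0.8951,  0.2664, -0.1614],
               [-0.7502,  1.7135,  0.0367],
               [ 0.0389, -0.0685,  1.0296]])
 
 def bradford_adapt(xyz, white_src, white_dst):
     # Adapt an XYZ color from one illuminant's white point to another's
     rho_s = M @ white_src                  # cone-like response to source white
     rho_d = M @ white_dst                  # ...and to destination white
     scale = np.diag(rho_d / rho_s)         # per-channel (von Kries style) scaling
     adapt = np.linalg.inv(M) @ scale @ M   # back into XYZ terms
     return adapt @ xyz
 
 # Example: adapt a color from D65 to D50 (XYZ white points scaled to Y=1)
 d65 = np.array([0.95047, 1.0, 1.08883])
 d50 = np.array([0.96422, 1.0, 0.82521])
 print(bradford_adapt(np.array([0.5, 0.4, 0.3]), d65, d50))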
 
<!--
http://www.brucelindbloom.com/index.html?Eqn_ChromAdapt.html
 
http://ivrgwww.epfl.ch/research/past_topics/chromatic_adaptation.html
http://www2.cmp.uea.ac.uk/Research/compvis/ChromaticAdaptation/ChromaticAdaptation.htm
 
http://ci.nii.ac.jp/naid/110001147006/en/
-->
 
==The eyes==
The eyes don't see in SPDs at all. SPDs are useful to learn about truly accurate reproduction (not just for human eyes), but it's just as useful to learn how the eyes perceive things, and it helps explain how color blindness works.
 
 
In this context the retina is most interesting - the back of the eye, the bit that's actually sensing the light. Very roughly, the retina has rods to sense light intensity, and cones to sense which colors are there.
 
 
[[Image:eyeresponse-sml.png|thumb|200px|right|Spectrum response of the three cone types]]
The reception of '''cones''' is said to be 'tri-stimulus', meaning that in combination they describe what they see in terms of the intensity of three colors. Each cone has a bit with a pigment - effectively a color filter before light gets to the light-receptive area.
 
The effect is that there are effectively three types of cones, responsive to three different (and overlapping) ranges of wavelengths, with peaks roughly corresponding with red, green and blue.
These are usually referred to as L, M and S, for '''l'''onger, '''m'''edium and '''s'''horter wavelengths (relative to each other).
Graphing their response to intensities at different wavelengths will show curves that peak at approximately 560, 530 and 420nm, respectively {{comment|(Note: these aren't the idealized red, green, and blue. Note also that these response curves are not evenly shaped or sized, and that they overlap)}}.
 
 
 
'''Rods''' have no pigmentation filter so receive only brightness information. Their response peak is around 500nm, between S and M - closest to a slightly bluish green.
 
 
 
Light sensitivity in general is due to rhodopsins (a combination of retinal and opsin proteins), with variations you could call red rhodopsin, green rhodopsin, and blue rhodopsin. Each cone has one of these.
(Rods have a somewhat different structure{{verify}}, but still work with rhodopsin{{verify}})
 
 
 
===Rod, cone, and cone type distribution; sensitivities; night vision===
{{stub}}
 
'''Rods versus cones'''
 
From the center of your retina (the [http://en.wikipedia.org/wiki/Fovea fovea]):
* from zero to about two degrees: many more cones than rods
* about two to 15°: more rods than cones, still a noticeable amount of cones
* beyond 15°: mostly rods
 
 
Cones are present primarily in a ring directly around the fovea, meaning our color sense is best directly ahead.
 
 
 
'''Cones versus cones'''
 
In general, you have more red cones than green cones, and noticeably fewer blue cones (~2%?). Still, we are about equally sensitive to blue intensity, though we see it at less resolution.

The concentration of cones in the middle of our vision is noticeably more red than green or blue.
 
 
 
'''General notes on sensitivity'''
 
Sensitivity and discernibility vary with light levels, because of how cones and rods respond. The technical terms:
* '''Photopic vision''' refers to vision under well-lit conditions, primarily based on cones (about 1 to 1,000,000 cd/m<sup>2</sup>)
* '''Mesopic vision''' is darkish vision in which you have limited color vision (at about 0.02 to 1 cd/m<sup>2</sup>)
* '''Scotopic vision''' refers to the monochromatic vision in low light, when cones do not work (from about 0.02 down to about 0.000001 cd/m<sup>2</sup>)
 
Note that if you were to graph an overall light response curve, it would vary between these.
 
 
We see color best in the center of our vision. Since the cone concentration there favors red, we see red a little faster and a little more accurately.
 
 
Cone presence falls off gradually, meaning there are fewer cones (and more rods) in our peripheral vision. This makes peripheral vision more light-sensitive, and explains the ability to see things slightly better in the dark by not looking at them directly.

However, rods also resolve a little less detail: cones tend to connect to a signal nerve individually, while rods usually have to share one.
 
 
 
 
'''On night vision'''
{{stub}}
 
'''Warning:''' Popular fact is regularly myth here, and I'm not sure all this is right.
 
 
Night vision is mostly due to rods.
 
Since the center of our vision has a negligible amount of rods, and rods are responsible for night vision, we have a night vision blind spot in the center few degrees of our vision.
 
 
 
Whether night vision is usable depends on the amount of rhodopsin in your eyes, and on its regeneration. It depletes at daylight-level intensities; as it regenerates, your eyes can do more with the same amount of light.
 
A few minutes is enough for regeneration of one or two dozen percent (at which point you start noticing night vision), fifteen minutes for the bulk of it, and over half an hour for almost all.
 
 
 
If you want to illuminate something in the dark -- instrumentation, a dark room, or such, the best color to use depends on 'what for'.
 
In the case of the darkroom, the historical choice is red, but that's got as much to do with (early) photosensitive chemicals as with your eyes.
 
 
The main choice here is: do you want to see detail, or do you want the least effect on your night vision? These are at odds almost by definition, because that which makes us see better also depletes rhodopsin.
 
But it's a little more complex than that.
 
When you value night vision above all, you're using rods. Since rods respond most to blue-green (and barely to deep, long-wavelength reds), deep red light gives the slowest depletion of rhodopsin in rods, and so best preserves the night vision they provide.
 
If you want to illuminate something to see detail for any reason, for example for quick-glance readability of instrumentation, you probably want to use the center of your eyes - which primarily has cones. (''Theoretically'' this allows you to lay off using the rods further away, though that's rarely practical)
 
You can argue that any color will do, and red slightly better, as you have more red cones than other cones. However, rods respond somewhat to red too, and (aside from depleting night vision a little) that means your brain will get a signal from both rods and cones. In near-darkness (up to lowish mesopic), both signals are noticeable, and your brain having to interpret the combination is less readable (and perhaps a little more tiring).
 
So if you want readability (and you're implicitly giving up a little night vision), white is actually as good a choice as any.
 
Deep, long-wavelength reds are useful in that you get more response from cones than from rods that way.
 
 
 
See also
* [http://en.wikipedia.org/wiki/Trichromatic_color_vision Wikipedia: Trichromatic color vision]
* [http://hyperphysics.phy-astr.gsu.edu/hbase/vision/rodcone.html the density of rods and cones]
* a good book
 
===Color blindness===
Pretty much all possible malfunctions you can now think of happen, but only a few are common.
 
There are two main ways to categorize the common types: by their causes, and by the effect they have (which colors are harder to distinguish).
Note that not distinguishing colors as much is the only real effect - people don't randomly see different colors, although they may be able to focus more on brightness difference because of it{{verify}}.
 
 
''Unusual'' response from one type of cone is one common type of problem. Because of opponent processing, both protanomaly (prot- referring to L cones) and deuteranomaly (M cones) limit the red-green channel, while tritanomaly (S cones) limits blue-yellow.

Protanopia, deuteranopia and tritanopia refer to ''missing'' response from one type of cone, which means little response on one of the opponent-processing channels.
 
 
Deuteranomaly (red-green) is most common, affecting 5% of men, while the rest affects 1% or less of anyone.
 
The anomalies are hereditary, in mildly complex and differing ways.
 
 
See also:
* http://en.wikipedia.org/wiki/Color_blind
* http://www.w3.org/TR/AERT#color-contrast
 
Related:
* [http://ray.tomes.biz/b2/index.php/a/2007/06/21/p147 Tetrachromat vision]
* http://www.toledo-bend.com/colorblind/Ishihara.html
 
==The brain==
It was observed by various people that we don't see certain colors mixing; 'greenish blue' and 'yellowish red' work, but 'greenish purple' or 'blueish yellow' don't.
 
This is not so much an effect of reception - it's not part of the cones - but is instead caused by the way their signals are processed before they are sent to the brain.
Before sending signals to the brain, our eyes convert to a system named '''opponent processing''', which turns the three types of cones' signals into two color channels (plus a light-dark channel).
 
 
Essentially, one channel is red stimulus minus green stimulus, the other yellow minus blue. Note there are no yellow cones; yellow comes from the combination of L and M (red and green) cones. In reality, effects like overlap and other interactions make this more complex than described here -- get a good book if interested.
 
It also explains the two different types of color blindness - if one of the types of cone pigments is missing, one of the described color signals has little or no range.
 
After processing, little signal on the red-green means you see green, more signal means you see red. Little on yellow-blue means yellow, more means blue. (See e.g. [http://web.archive.org/web/20050209035707/http://driesen.com/opponent_processing.htm this] for a graphic explanation and [http://www.pigeon.psy.tufts.edu/avc/husband/avc4eye.htm this] for a textual one)
 
 
Sensing both colors on a channel at once is neurally impossible, making the eye a bad tool for measuring the spectrum. This signaling is one of the main reasons for '''metamers''': non-identical spectra (SPDs) which we see as the same color (depending also on the environment light - which is a problem, as color differentiation works out differently under different light sources).
The existence of metamers sounds like a problem, but without them we would have needed a ''lot'' more color names and crayons in a kit - pretty much one for every noticeably different SPD.
 
The fact that the neurons and the mind adapt in the short and long term is the reason for effects like:
* after-images in colors that oppose according to the opponent processing model (stare at red and you see greener for a few seconds; see e.g. [http://www.yorku.ca/eye/afterima.htm this example]),
* slowly adapting night vision, as the rods take over from the cones - also why we're mostly colorblind in the dark,
...and a few more complex effects.
 
==Clarifications and the Meaning of Numbers==
===Luminance vs. Brightness===
 
'''Intensity''' refers to the amount of relevant energy that arrives (photons).
 
This does not have a linear relation with the human-perceptual concept of '''brightness'''.
 
You can say that <tt>((delta-Intensity) / Intensity)</tt> is roughly constant, which is the observation that the brighter something is, the larger the intensity change we need to see a noticeable difference.
 
We can approximate that <tt>brightness = log(intensity)</tt>
 
 
To be more accurate than 'roughly proportional to log' you have to put people in front of colors and test them. This usually yields Just Noticeable Differences (JND) (people also use the term 'difference threshold'): the difference in intensity that means a noticeable variation to an observer.
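
A quick numeric illustration of the log approximation {{comment|(a sketch; real JND figures depend on viewing conditions)}}:

 import math
 
 # With brightness ~ log(intensity), equal *ratios* of intensity
 # come out as equal perceived steps:
 for intensity in (1, 10, 100, 1000):
     print(intensity, math.log10(intensity))
 # 1 0.0, 10 1.0, 100 2.0, 1000 3.0 - each tenfold step reads as 'one step' brighter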
 
<!--The term is used to indicate realistic perceptual resolution, and it turns out that [http://en.wikipedia.org/wiki/Weber%27s_law Weber's law] applies {{comment|(like it does, to some extent, to most human stimuli)}}.-->
 
 
====Related physics====
{{stub}}
Going from physics to human perception involves a few separate things.
 
First: our rods and cones have sensitivities that together cover a range of the EM spectrum, which we have named visible light.
If you want to mechanically record brightness or color, there is no point in sampling outside this range.
 
Measuring energy inside that range is possible - a Spectral Power Distribution (SPD) over the visible-light frequencies. This would let you reproduce things without having any human perception enter into it -- which is useful in spectroscopy[http://en.wikipedia.org/wiki/Spectrometry].
 
SPDs are redundant and overkill for our perception, though - they are a lot of information, and aren't really a useful model for cameras either.
 
 
 
Our rods and cones make our sensitivity to energy within the visible range vary quite a bit. You'll find, for example, that while visible light is quoted as 400-700nm wavelengths, about half of that range represents ''most'' of the sensitivity. Near 400nm and 700nm it has already dropped off to be almost insignificant.
 
We can use a luminosity function[http://en.wikipedia.org/wiki/Luminosity_function] {{comment|(use of CIE's '31 luminosity function is common)}} to model/estimate the average sensitivity to light throughout the visible range - and use that as a filter, for example when we want an estimate of how bright we perceive a certain color.
 
 
The most useful concepts when talking about various light-related things include: EM energy (electromagnetic radiation, of which light is a part), energy per area, energy per area per second - and most of the same list but with a filter for human perception modelled into it.
 
There are various relevant concepts and SI units:
 
Energy at all:
* '''Radiant flux''' describes an energy/light source's strength (not its observation), measuring total EM radiation, in Watt (W).
 
* '''Irradiance''' is total EM power per unit area (W/m<sup>2</sup>) arriving at a surface; the same unit, used for how much energy is radiated per area of a source, is called '''radiant exitance'''. (For example, sunlight delivers ~1.3kW/m<sup>2</sup> to earth.)
 
 
 
Eye correction basics:
* '''Luminous flux''' (unit: '''lumen''', abbreviated lm) is a variation of radiant flux corrected for eye-visible light using CIE's ('31) [http://en.wikipedia.org/wiki/Luminosity_function luminosity function]. <!-- Technically, it is convertible to watts when you settle on a color. For example, 1 Watt at 555nm (yellowish green) is 683 lumen. -->
 
* '''Illuminance''' (unit: '''lux''') is lumen per m<sup>2</sup>: delivered perceivable light per area. '''Luminous emittance''', in the same unit, is the preferred term when talking about emitting areas.
 
 
It is often useful to think in terms of spherical emission and how much that delivers per angle. This can be more useful than per received area: figuring out how much emitted energy an area receives depends (nontrivially) on how close you are, and it is fairly easy to calculate the part of all the emitted energy from the distance and the area you are receiving light on.
 
Corrected:
* '''Luminous intensity''' (unit: '''candela''') is lumen per steradian (lm/sr) (a steradian is a solid-angle unit based on a sphere)
* '''Luminance''' is luminous intensity per area, in candela per square meter (cd/m<sup>2</sup>)
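
For example, a point source of 100 candela lights a surface 2 meters away at 100 / 2<sup>2</sup> = 25 lux (inverse-square law), which is exactly the sort of calculation the per-angle units make easy.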
 
There are also uncorrected ones, for more formal physics. For completeness:
* '''Radiant intensity''' is total EM radiation in Watt per steradian (W/sr)
* '''Radiance''' is more of a proportional measure (W/sr/m<sup>2</sup>).
* '''Spectral radiance''' and '''Spectral irradiance'''
 
 
 
'''Brightness''' is a perceptive, subjective term, so not a physical measure. When calculated, it is usually based on luminance{{verify}} (candela per square meter).
 
 
There seems to be a formal CIE 'Lightness', defined as a function of luminance, using a reference white. (note to self: look up. I'm guessing it's their luminosity functions)
 
===Linear and nonlinear coding===
One of the areas a lot of people are confused about is the (non)linearity of signals.

There are two ways of thinking about signals: in '''luminance''', which is an energy-linear intensity, and in '''brightness''', which is perceptive. Brightness is usually calculated from luminance with something like brightness = log(intensity) or similar.
 
 
Which of the two you want depends on application.
Many image operations work easiest or best in linear-energy data, and it is more representative of reality, being energy-based.
 
Storage and reproduction is another matter. If your storage medium has enough resolution, it doesn't matter so much. Digitally, with the 65 thousand levels you get from 16-bit-per-channel encoding, you get enough detail despite the fact that you're spending a somewhat disproportionate amount of values on the dark end of the ''perceptive'' brightness scale.
 
Real-world storage formats tend to restrict the bandwidth or storage space used. If you have, say, 8 bits per channel (or effectively less, as in some compression) and wish to store things meant purely for human viewing, such as photographs, you have to convert data to a perceptive scale to get decent reproduction performance. (This is often glossed over in literature where ''machine vision'' is the focus, with no real-world cases of reproduction.)
 
 
On the other hand, working with energy-linear values means you are spreading your resolution oddly. This generally makes sense, but also causes problems - for example, banding. Consider that on a scale of 0 to 255, the near-black values 25 and 26 differ by a few percent in perceptive brightness, while adjacent values above 200 differ by factors less. Shaded darkness is inherently shown at less resolution this way, which wouldn't necessarily matter, but can in specific cases.
 
For example, in computer games with dark textures (usually stored as low-value integers), when those textures are brightly lit the values are effectively multiplied, and will show a banding effect - each level is much more clearly distinct than it was at the often intended darker lighting.
Applying extra gamma will hide this slightly.
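
A small sketch of that banding effect {{comment|(hypothetical numbers; the point is only that scaling up quantized dark values leaves visible gaps)}}:

 import numpy as np
 
 dark_texture = np.array([10, 11, 12, 13], dtype=np.uint8)  # near-black 8-bit values
 brightly_lit = dark_texture.astype(np.int32) * 8           # 'lighting' multiplies them
 print(brightly_lit)  # [ 80  88  96 104] - steps of 8 levels instead of 1: banding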
 
 
 
 
====Transfer functions====
 
 
There are various transformations to images that you can express fairly easily with simple functions. For example, increasing brightness is simply adding to a brightness channel; increasing and decreasing contrast involves multiplication.
 
''If'' it is a brightness channel, that is - not if it's a luminance channel - because these operations assume perceptive linearity, so that low-intensity details are changed just as much as high-intensity ones. Doing the same to a luminance channel would make adding brightness affect the dark areas much more than the light ones, and superimposing images of different intensities will not show the details as you'd want, since the different intensities impose themselves differently.
 
 
It's not that you can't define the same effect on luminance; it's often just easier to work on brightness, or to work on values while ''thinking'' in terms of brightness.
 
Commonly you would work with a transfer function, which maps from one to the other. In the case of energy to perception, a logarithm (or something more detailed that resembles it) is common.
 
 
====Gamma====
{{stub}}
Gamma commonly refers to the most involuntary form of nonlinearity: CRT phosphor luminance not at all being proportional to the voltage on the video cable.
 
A monitor's reaction is nonlinear. Phosphor excitation is often decently modelled by curves such as those that are part of the sRGB and ITU-R BT.709 standards, but a simple power function (which is technically just a first-order approximation) approximates it fairly well too.
{{comment|(Note that this reaction is roughly the opposite of what the eyes do, but this is fairly coincidental, and doesn't help with possible correction)}}
 
You ''want'' linearity all the way through; the only part you want to not be energy-linear is your eye, in the same way that you view the real world's energy without it being filtered. You want the chain of electronic hardware as a whole to have zero net effect, meaning you have to feed the monitor voltages in a way that makes it show intensities proportional to the image in memory.
For some years now, video cards have automatically applied gamma correction to do exactly that.
 
Still, apply too much correction and you see dark, low-contrast images; apply too little and everything seems washed out. If you work with digital photography or any sort of color referencing, you'll want to be accurate, so that the colors on the screen are representative of the colors described in memory. If they are not, you are likely to adjust colors and contrast to compensate for an ill-adjusted screen, warping the original whose proper, accurate presentation you cannot see.
 
For this reason you want to tell your system about your specific monitor by giving it a profile - most come with one. You can even create your own profile by calibrating against a reference, to do so a little more accurately<!--(profiles tend to be averages, but more importantly, environment light makes a difference to viewing)-->.
 
 
The other thing gamma refers to is the function most commonly used to approximate monitor distortion: a simple power expression, where γ (the Greek lowercase gamma) is the power that values are raised to.
 
Imitating the gamma effect:
newpixel = oldpixel^γ
 
Note that while it is a simple power expression, the inverse is just as simple to express and calculate. The inverse function, gamma correction, is:
newpixel = oldpixel^(1/γ)
 
{{comment|(This assumes that the channel's values are between 0.0 and 1.0, so that there is no change in scale or gamut, just in response, and then mostly in the middle range)}}
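
As a minimal sketch of both directions in code (assuming values normalized to 0.0..1.0, per the note above):

 import numpy as np
 
 gamma = 2.5  # a typical assumed CRT gamma (see below)
 
 def apply_gamma(values, g=gamma):
     # Imitate the monitor's distortion: darkens the midtones
     return np.asarray(values) ** g
 
 def gamma_correct(values, g=gamma):
     # Pre-correct an image so the monitor's response cancels out
     return np.asarray(values) ** (1.0 / g)
 
 x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
 print(apply_gamma(gamma_correct(x)))  # round trip: (approximately) the original values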
 
 
A PC monitor is usually assumed to effectively apply a power function to its electronic input, specifically a gamma of about 2.5.
Historically, Macintosh monitors were categorically different, with a gamma of about 1.8. Connecting a Mac monitor to a PC system or vice versa would give incorrect results.
 
The above theory is not as simple in the real world, because the overall transformation has more variables - such as the brightness and contrast knobs. They really shouldn't exist in hardware anymore, but they do.
 
Particularly having brightness up so that black isn't black will throw off your measure of gamma, as well as gamma correction.
 
See also [http://www.cs.princeton.edu/courses/archive/fall00/cs426/papers/smith95d.pdf]
 
 
 
TFTs physically react much more sensibly than CRTs. In practice, though, TFTs have to imitate the bad behaviour of CRTs to show analog signals correctly: if they didn't, while the OS assumes and corrects for a CRT, their more correct behaviour would look horrible.
 
 
 
Before the days when everything was corrected, there were other solutions. For example, you could precorrect the image for a particular monitor. These days that's a bad idea, but some images like that still exist.
 
In practice, a system could be applying too much or too little gamma, which means there is a residual gamma (or inverse gamma) effect. For example, a correction of 1.8 on a 2.5 monitor would leave a residual gamma distortion, also called the system gamma, of about 1.39 (2.5/1.8).
 
Having some residual gamma, with a particular side to err on, is sometimes done intentionally, for example to compensate for the fact that images are usually bright, and many viewing conditions aren't.
 
 
 
<!--
Doesn't this talk about image-data gamma and not monitor gamma?
 
[http://graphics.stanford.edu/gamma.html here].
 
<!--
For web images, to averagely please both mac and PC users with averagely corrected displays, it's fairly usual to assume a monitor has been corrected for a halfway gamma of 2.2. Most RGB color spaces use this, for example. It'll be a bit off, but not too noticably.
 
This will happen unless you use a file format in which you can specify the image's gamma in metadata, and a viewer which corrects it.
 
which gamma an image should be displayed at (and possibly whether it is partly corrected already). However, this requires application support, and support varies wildly between browsers and image formats, so you can't count on it. I hear mozilla variants do well, and Internet Explorer on Macintosh too, but the currently most common browser - Internet Explorer on PC's - doesn't. See eg. [35].
 
continue with: http://www.teamten.com/lawrence/graphics/gamma/
 
some reality details: http://www.photoscientia.co.uk/Gamma.htm
 
http://broadcastengineering.com/mag/broadcasting_gamma_correction/
 
still read:
 
http://www.cgsd.com/papers/gamma.html
 
http://www.poynton.com/GammaFAQ.html
 
unnumbered gamma section in references
 
As noted, gamma can also be part of a color space definition. This is in part because incorrect gamma will also change the color balance of an image, so should be specified. Possibly since in the end images can't tell the computer to display them correctly, the usual gamma for a color space is 2.2.
 
In fact, this last problem is one of the reasons behind sRGB. For a while, it ahs been wished that there was but one not-enirely-correct colour space, not a dozen. For this reason, sRGB was developed, which is permeating into printers, camera's, and colour management. Essentially, it is a convenient average, but the idea is that if everything supports it, you don't have complicated gamma, gamut, white point, etc trouble, past converting what you have into sRGB.
 
Apparently, PNG stores correction nicely - it stores the reciprocal of the gamma the image should be to the gamma of the system, which is enough information to reproduce it on any other system - assuming that every image viewer knows what the system's gamma is. (and, indeed, that the image writer does, but most authoring applications do)
 
Sadly, as explained, gamma is without real standard, so more support is good, but there are no guarantees yet.
 
On convention: Usually, primes on image band names (sometimes the space name itself, confusingly) are used to indicate nonlinear values, often gamma-corrected ones. For example, R'G'B' is the nonlinear form of RGB.
 
 
-->
 
 
 
 
<!--
Unused shreds:
 
 
Devices such as your monitor often don't quite abide by that, though, which is one of the main things that makes colour and image reproduction hard.
 
 
I've seen the suggestion that GIFs are often precorrected, JPEGs aren't. I have no idea whether that's true.
 
-->
 
 
 
====Luminance and Chrominance versus Luma and Chroma====
Luminance refers to the black and white part of a color image, chrominance to that which has to be added to get the intended colored image.
The terms are used when an image is split this way, or to suggest thinking about it this way.
 
 
Luminance, broadly speaking, does not strongly indicate whether the coding is linear or non-linear.
 
Because many people, documents, and even standards use luminance in the linear sense, the non-linear sense, or even both, without mention of the possible confusion, the terms luma and chroma were introduced to carry the meaning of being the nonlinear counterparts of luminance and chrominance.
 
They are not always used correctly, so always pay attention.
 
 
 
Similarly, a prime generally indicates nonlinearity - e.g. Y indicating linear luminance and Y' luma.
 
However, many people are lazy and omit or forget it, so this is a convention at best and not a consistent way of recognizing what sort of channel you're dealing with. Read the standard to be sure.
 
===Numbers and systems; color management===
Digital color data consists of two things - numbers, and their meaning. A color space exists purely to map numbers to colors.
 
It is logical that different color spaces do so differently, but it is not quite as readily understood that, for example, different RGB spaces also do so differently, and that ignoring this will mean your image/color changes tint.
 
 
Each color space has different ideas of what part of XYZ to cover. More importantly, a file format will usually spend a fixed amount of bits per pixel on color reference, so there is a limited amount of colors we can pinpoint in that area. If the area is (much) larger, the colors that can be referenced lie further apart, and so show less detail.
It's a range/resolution tradeoff in the end.
 
Whatever the reason, the disembodied numbers are fatal to accurate color management.
 
<!--
One of the more misleading things is that numerous related systems are similar, meaning you won't necessarily notice if you're abusing or ignoring color space. For example, you can assume linear RGB data is sRGB or any of the other one or two dozen RGB standards, and they'll look roughly the same from a distance, but not when you actually look. Between monitor and printed photo, for example, but also between monitors.
 
Most trichromatic systems differ mostly in how they choose their primaries, their white point, and their transfer functions. Between RGB systems, these tend to be the only differences.
 
The choice of primaries is also one reason gamuts differ. Device-independently, this choice seems to be mostly a matter of how much colour ''range'' versus how much ''resolution'' you want. Device-dependantly your primaries have been chosen for you.
-->
 
 
 
===Types of color models===
TODO: move this down
 
There are more color spaces than you can shake a stick at, but there are only a few distinct ideas behind them, and most come from already mentioned systems and effects.
 
The idea that any colour can be made from three primaries is the tri-chromatic theory {{comment|(or 'tri-stimulus', a slightly more general term; they are not entirely consistently used, but people (should) tend towards ''tri-stimulus'' when the stimuli aren't (necessarily) monochrome lights)}}.
 
Note that the dimensionality of color perception means you need three parameters to model what the eye does, no matter what you base the model on.
 
The red-green, yellow-blue and light-dark oppositions inherent in the eye are approximately modeled in those systems (they are opposite hues), and more accurately in various systems from CIE, like CIE Lab and CIE Luv.
 
This is quite similar to systems that model the way humans ''describe'' colors, which is mostly by tint (general colour type), often supplemented by dark/brightness and/or the saturation of colours. This is realized in models like HSV/HSB (same idea, different details and/or number ranges) and slightly differently in HSL (also written HLS).
 
 
Video and TV standards like Y'IQ, Y'UV, and Y'CbCr do it a little differently yet, because of historical developments. They split signals into one luminance (brightness) channel and two chrominance channels, so that black and white TVs would use only the luminance, and colour TVs additionally used the chrominance parts of the channels.
 
This stuck around for a few reasons, one being that we are more sensitive to general brightness noise than to color noise. This means we can favour one or the other in analog signals and - more importantly - in lossy image and video compression.
 
 
Humans also commonly ''compare'' colours and intensities. Without context, you can identify no more than a few hundred colors. Given directly adjacent colored areas, however, it turns out we can tell the difference between millions - in a just-noticeable-difference sort of way. Interestingly, you can distinguish noticeably more shades of green than of other colours, though this is sometimes exaggerated.
 
Difference measures and human comparison are the motivation behind CIE colour reference systems such as CIE XYZ, refined in later variations and used in the definition of other color spaces, such as the more practical CIE Lab.
Of course, the eyes are quirky, so CIE spaces are defined with conditions and disclaimers, but eye trickery aside, the approximation is quite accurate for most things.
 
===Color management, Gamut===
The point of color management is to translate device-specific numeric values in a way that keeps produced colors the same from device to device, so that each device can try to display the color as correctly ''as it can manage'' - and particularly that last part varies.
 
Put another way, color-corrected images store absolute color references, and each device just tries its best.
One important thing to note is that a monitor is also a device. If it is miscalibrated, the way a photo looks (and the corrections you make to it to make it look good) will ''not'' be reproduced elsewhere, in the same way that wearing tinted glasses would make you mis-correct the image.
 
 
A related concept is gamut, the name for the range of colors something can produce or receive. The gamut of the human eye is rather larger than the gamut of monitors and printers, for example.
 
Technically, a gamut is only limited {{comment|(...more limited than the eye's...)}} if you define it that way, but you generally need to: in images and most other storage, there's only so much space for each pixel. Consider that defining the same amount of colors over a larger space means the color resolution - the space between each defined color - is coarser. Whether this is actually noticeable, or whether it is more or less important than the size of the gamut, is arguable, though.
 
 
When visualizing the extent of a gamut in some specific color space for reference, the gamut is often shown as a shape on a Lab diagram with just the saturated colors - where most of the variation is. More accurate would be to show Lab at a few intervals of luminance, and show the area covered by a standard / production method at each. This would show more clearly that the saturated Lab image applies most directly to monitors and other light-additive methods, as they have an easier job showing bright, often saturated colors, while the subtractive mixing of inks makes darker colors likelier results, and tends to cover less area.
It would also show the variation between printers.
 
 
Because color resolution is limited, particularly early color spaces were made for specific purposes, with the intended gamut in mind.
 
Color resolution is limited; in image storage, limited bits per pixel mean that if a gamut covers more of XYZ, the space between the individually referenced colors is larger. Pro Photo is only used at 16 bits per channel to avoid this causing effects such as posterization.
 
This also means that when you're going just for monitor viewing and printing, gamut size doesn't matter as much as you may think. While sRGB is a little tight for print (although it covers many cheaper printers), Adobe RGB comfortably contains most any printer's range. Pro Photo is mainly just crazy in this context. Most web pages exclaiming the joys of Pro Photo use insanely pathological test images.
 
Some relatively common spaces:
* sRGB (1995) is geared for monitors, and was made to standardize image delivery on the web. It has a relatively small gamut.
* CMYK is geared for print and is modeled on some inks. It has a larger gamut than sRGB, and one that approximates most printers well.
* ColorMatch RGB seems similar to sRGB, possibly a little wider
* Adobe RGB (1998) is larger than sRGB, covering more saturation and more blues and greens. While Adobe RGB was made to approximate CMYK printability, a decent part of the difference with sRGB is actually unprintable.
* Pro Photo RGB is one of the largest available spaces, covering more saturation, most visible colors, and even colors we can't see, let alone print.
 
Color spaces that are made for print tend to acknowledge there is only so much that inks can reproduce, and may model specific inks, spending more detail on likely colors than on extremely saturated ones. Even paper's gamut can be measured - separately from ink, that is.
 
 
When using a DSLR camera you usually get a choice between sRGB and Adobe RGB. Pretty much every DSLR has a respectable gamut rather larger than sRGB, and often somewhat larger than Adobe RGB, so if you want to preserve as much color as possible (and this applies mostly to quite saturated colors) while shooting JPEGs, select Adobe RGB. You can also use RAW, which stores sensor data - in other words, the data from before this choice - so the camera setting is just a hint that tags along, and the choice of color space is made by the RAW converter when saving to something non-RAW.
 
{{comment|(This is also one of the few cases in practice in which choosing a color space actually ''matters,'' because most everything non-raw already had its color clipped by the gamut of its color space. Note ''also'' that all this is fairly specific to bright saturated colors)}}
 
 
Note that various simpler image viewers do not convert between color spaces, meaning they generally wash out images. Photo editors like Photoshop often do, of course.
 
 
 
 
See also:
* http://www.oaktree-imaging.com/knowledge/gamuts
* http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm
* http://www.camerahobby.com/Digital_GretagMacBeth_Eye-One.htm
<!--
http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm
-->
 
 
 
==CIE color space research==
The Commission Internationale de l'Eclairage (CIE) has produced several standards and recommendations. Probably the most significant is XYZ, and the most functional seem to be the CIE XYZ, CIE Luv and CIE Lab color spaces.
 
There are some other notable CIE happenings, such as the 1924 (pre-XYZ) Brightness Matching Function used to standardize luminance.
 
The next few sections are what I ''think'' I've been able to figure out, but ''might well be incorrect'' in parts:
 
 
 
===(1931) CIE Standard observer functions===
XYZ is based on the '' 'CIE standard observer functions' '' (or 'CIE Color Matching Functions'), released in 1931, based on about two dozen people matching a projected area of color (covering about 2 degrees of their field of vision) with a color they mixed themselves from red, green and blue light sources (with known SPDs).
 
There was another observer function released in 1964 (''CIE 1964 supplementary standard colorimetric observer'') based on new tests with about four dozen people, where the light covered 10 degrees of their field of vision.
 
 
 
 
The size is mentioned because the larger area is more appropriate for reactions to monotone surfaces, and the smaller one more appropriate for detail, such as images.
 
 
The result, for both observer functions, is a transfer function from energy to general human visual response.
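
In code, applying the observer functions to a measured spectrum is just a weighted sum {{comment|(a sketch; the xbar/ybar/zbar arrays stand in for the published CIE tables, which cover roughly 380-780nm at 1nm or 5nm steps)}}:

 import numpy as np
 
 def spd_to_xyz(wavelengths, spd, xbar, ybar, zbar):
     # Integrate a sampled SPD against the color matching functions
     # (all arrays sampled at the same wavelengths) to get XYZ values.
     step = wavelengths[1] - wavelengths[0]  # assumes even sampling
     X = np.sum(spd * xbar) * step
     Y = np.sum(spd * ybar) * step
     Z = np.sum(spd * zbar) * step
     return X, Y, Z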
 
===(1931) CIE XYZ===
XYZ was the first formal observer-based attempt at mapping the spectrum to a human-perceptual tristimulus coordinate space.
It was (and stayed) a common and widely accepted reference space because for a long time it was the ''only'' useful standard.
 
CIE XYZ can be seen as a three-dimensional space, in which the human gamut is a particular shape,  occasionally referred to as the ''color bag''. Coordinates outside the color bag don't really refer to anything.
 
 
XYZ's dimensions are rather abstract, rather imaginary primaries, harder to think about than, say, RGB or CMYK. The coordinates were defined in a way that guarantees the following: ({{verify}} these)
* When the coordinates have about the same value (e.g. x,y,z each 0.3), the SPD is fairly even, which we see as white.
* All coordinates are positive for colors possible in physical reality (a useful detail for computation at the time)
* The Y tristimulus value is directly proportional to the luminance (energy-proportional) of the additive mixture.
* Chromaticity can be expressed in two dimensions, x and y, as functions of X,Y,Z
* X,Y,Z coordinates range from 0.0 to ''approximately'' 100.0.
* x,y,z coordinates are a normalization (x=X/(X+Y+Z), and analogous), with practical values ranging from 0.0 to about 0.8. The definition also implies x+y+z=1, so that specifying two of them is a full chromaticity reference.
 
 
Unless specifically chosen and stated otherwise, CIE XYZ uses the 1931 'CIE 2° Standard Observer' data (and not the 1964 10° data), and so do color spaces which use XYZ as an absolute color reference.
 
 
One thing XYZ does ''not'' ensure is that you can measure perceptive distance in it. This was simply not a goal of the 1931 experiments. It has a disproportionately large volume (or area, in the color-spade view) for green, for example. {{comment|A fairly pathological case for the difference: taking a decent set of color pairs perceived as being equally perceptually distant, the largest of the distances (in XYZ space) is about twenty times as large as the smallest.}}
 
===Chromaticity/UCS diagrams===
The chromaticity diagrams mix bright colours in a roughly opposite way, with white in the middle, and are convenient for white points and primaries, but also for humans picking out (idealized) colors. They do not carry brightness, which makes them incomplete as a total color reference.
 
 
There are three distinctly different colorful CIE diagrams that show clear colors, and have one straight edge and one curved edge, on which the ROYGBIV range is often marked out in wavelengths.
 
These are:
* The 1931 chromaticity diagram looks like a curvy spade leaning to the left, and uses x,y coordinates ([http://images.google.com/images?q=CIE+%22chromacity+diagram%22 google image search])
Two revisions, based on studies in these years (and both referred to as UCS):
* The 1960 CIE-UCS chromaticity diagram mostly scales/warps the 1931 one, and uses u,v coordinates. This one is less common to see than:
* The 1976 CIE-UCS chromaticity diagram, much like the 1960 one but more accurate{{verify}}, which uses u',v' coordinates. This one looks roughly like a triangle ([http://images.google.com/images?q=CIE+%22UCS+diagram%22 google image search]).
 
Yearless references to 'CIE UCS' almost always mean the 1976 version. Descriptions showing primes (u',v') indicate the 1976 one, but since not everyone puts them there, their absence does not necessarily mean the 1960 one.
 
 
See also various other sources, e.g. [http://www.gigahertz-optik.com/database_en/html/applications-tutorials/tutorials/ii.-properties-and-concepts-of-light-and-color/ii.10.-colorimetry.html]
 
 
====The 1931 CIE Chromaticity diagram====
[[Image:CIE_1931_chromacity.png|thumb|250px|right|CIE chromaticity diagram]]
The CIE chromaticity diagram uses x,y coordinates from the XYZ space. For completeness:
x  = X / (X + Y + Z)
y  = Y / (X + Y + Z)
 
There is an xyY space based on this, adding luminance back in. This is a fairly convenient way to refer to an XYZ color without having to convert to and back from another space.
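
Going back from x,y,Y to XYZ follows directly from those definitions:
 X = x * (Y / y)
 Z = (1 - x - y) * (Y / y)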
 
The color distance observation data, when plotted in this space, looks like differently-sized ellipses, which means that using coordinate distances as a measure of color distance will be inaccurate, and noticeably so.
<br style="clear:both"/>
 
====The 1960 CIE-UCS Chromaticity diagram====
[[Image:CIE_1960_UCS.png|thumb|250px|right|CIE 1960 UCS, with u,v coordinates]]
The ''CIE Uniform Chromaticity Scale ('''UCS''')'' originates in the endeavour to create a perceptual color space, one in which coordinate distances are roughly proportional to perceived differences.

Among other things, it resulted in a revision of the original chromaticity diagram. It is mostly a rescaling:
u  = 4X / (X + 15Y + 3Z)
v  = 6Y / (X + 15Y + 3Z)
 
The distance observations, when plotted on this diagram, are roughly circular, making it a little more accurate for (direct) color distance calculation.
 
Most people either consider the 1960 standard obsolete and use the 1976 version, or are unaware there are two UCS diagrams.
 
<br style="clear:both">
 
====The 1976 CIE-UCS Chromaticity diagram====
[[Image:CIE_1976_UCS.png|thumb|250px|right|CIE 1976 UCS, with u',v' coordinates]]
 
There was another study in 1976, leading to the definition of:
u' = 4X / (X + 15Y + 3Z) 
v' = 9Y / (X + 15Y + 3Z)
...although it's regularly written without the primes on the u and v.
 
 
As you can see, this UCS's difference from the 1960 UCS is purely a rescaling of the v coordinate (Y's contribution),
pulling in green a bit more. This allows for better estimation of perceptual color differences based on just these coordinates.
 
<br style="clear:both">
 
===(1976) L*u*v* color space===
L*u*v*, Luv, LUV and CIELUV refer to the same color space.
 
Luv is closely tied to the UCS definitions: UCS is a color reference without brightness -- so primarily ROYGBIV and no colors that come from brightness difference, such as brown, which is mostly dark orange.
 
 
I've seen sources note there was a 1960 release of Luv and a correction in 1976. Possibly this is just because it is based on the UCS which changed, possibly it's more than that.
 
Whatever the details, the '76 release is the important one.
 
 
 
Luv requires a reference white point -- though not everyone lists the choice of white point (leading to some incompleteness and even inconsistency in practice). The definition:
 
Given that:
* the white point is '''(Xn,Yn,Zn)''', with '''(u'n,v'n)''' its coordinates under the '76 UCS
* the to-be-converted color is '''(X,Y,Z)''', for which you also calculate '''(u',v')''' (with the '76 UCS)

 If Y/Yn <= 0.008856:
     L* = 903.3 * Y/Yn
 Else:
     L* = 116 * (Y/Yn)^(1/3) - 16
 u* = 13 * L* * (u' - u'n)
 v* = 13 * L* * (v' - v'n)
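
A direct transcription into code {{comment|(a sketch of the definition above, with the white point's u'n,v'n computed by the same UCS formulas)}}:

 def xyz_to_ucs(X, Y, Z):
     # 1976 UCS u',v' coordinates from XYZ
     d = X + 15*Y + 3*Z
     return 4*X/d, 9*Y/d
 
 def xyz_to_luv(X, Y, Z, Xn, Yn, Zn):
     un, vn = xyz_to_ucs(Xn, Yn, Zn)   # white point in u',v'
     u, v = xyz_to_ucs(X, Y, Z)
     ratio = Y / Yn
     if ratio <= 0.008856:
         L = 903.3 * ratio
     else:
         L = 116.0 * ratio**(1.0/3.0) - 16.0
     return L, 13*L*(u - un), 13*L*(v - vn)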
 
{{comment|To repeat the pathological perceptual-distance example: for a set of color pairs perceived as equally perceptually distant, the largest numerical distance in Luv is about four times as large as the smallest. Not perfect, but a bunch better than plain XYZ.}}
 
 
The UCS is also used to define uvY, which is just UCS's u',v' coordinates with an added luminance channel. It seems rarely used{{verify}}.
 
===(1976) L*a*b* color space===
Created in 1976,  and also referred to as Lab, LAB, CIELAB, CIE L*a*b*, and some other variations.
 
 
Lab is good at perceptive distances, probably because it resembles opponent processing. This makes coordinates in this space less arbitrary than those in many other colour spaces.
Its gamut is larger than that of most other color spaces; it includes fairly extreme saturation.
 
Lab is probably the CIE space most actively used today {{comment|(other than XYZ for reference, and in color correction software, much of which I believe uses one XYZ-derived space or another)}}.
For example, Photoshop supports working in Lab, and some corrections work slightly better/easier in it, for example changing luminance contrast without changing colors.
 
Lab is derived from XYZ and (like Luv) you have to choose a white point for the conversion. As far as I can tell, D50 is preferred and D65 is fairly commonly seen too.
 
 
Given Xn,Yn,Zn as a reference white, conversion from an X,Y,Z coordinate to L*,a*,b* coordinates (the basic definition of the color space):
 
function f(x):
    If x<=0.008856:
        return (7.787 * x) + (16/116)
    Else:
        return x^(1/3)
If Y/Yn>0.008856:
    L* = (116 * f(Y/Yn)) - 16
Else:
    L* = 903.3 * (Y/Yn)
a* = 500 * ( f(X/Xn) - f(Y/Yn) )
b* = 200 * ( f(Y/Yn) - f(Z/Zn) )
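
The same in Python {{comment|(again defaulting to a D65 white, an assumption; note that with f defined this way, 116*f(Y/Yn)-16 works out to 903.3*(Y/Yn) in the dark branch, so one expression covers both cases)}}:

 def xyz_to_lab(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
     def f(t):
         # linear segment near black, cube root elsewhere
         return 7.787*t + 16/116 if t <= 0.008856 else t**(1/3)
     fx, fy, fz = f(X/Xn), f(Y/Yn), f(Z/Zn)
     return 116*fy - 16, 500*(fx - fy), 200*(fy - fz)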
 
L* ranges from 0 to 100. a* and b* are not strictly bounded, but almost everything interesting is inside the -127..128 range {{comment|(and outside that, the gamut ends somewhere, and not in a regular shape; it's something like a deformed color bag)}}{{verify}}. It was designed for easy coding in 8 bits, and doing so without thinking about it too much cuts off some ''extremely'' saturated colors (apparently nigh impossible to (re)produce in reality {{verify}})
 
 
 
 
See also:
* http://www.brucelindbloom.com/index.html?LabGamutDisplayHelp.html
 
 
====In storage form====
Lab itself is described using real/floating-point values. For images, you're likely to use one of the following (rescaled) integer forms:
 
 
CIELab:
                           CIELab 8        CIELab 16
  L*         unsigned      0 to 255        0 to 65535
  a* and b*  signed        -128 to 127     -32768 to 32767


ICCLab:
                           ICCLab 8        ICCLab 16
  L*         unsigned      0 to 255        0 to 65280
  a* and b*  unsigned      0 to 255        0 to 65535  (multiplied by 256, centered at 32768)
 
<!--
(65280 is FF00 in hex - fixed-point reasons?)
-->
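
As an illustrative sketch of the 8-bit CIELab form {{comment|(assuming the common scaling where L*'s 0..100 maps to 0..255, and a*,b* are rounded into signed bytes)}}:

 def lab_to_cielab8(L, a, b):
     # clamp and scale floating-point Lab into the 8-bit CIELab layout
     L8 = min(255, max(0, round(L * 255 / 100)))
     a8 = min(127, max(-128, round(a)))
     b8 = min(127, max(-128, round(b)))
     return L8, a8, b8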
 
Seen in use:
* variations like ITULab (8), and perhaps others.
 
* monochrome images, with only the L* image
 
====Hunter Lab====
Hunter Lab, from 1966, is a non-CIE space similar to CIELab, but with somewhat different calculations.
 
The largest difference is that the blue area is larger, and the yellow area is smaller.
 
===Other CIE spaces===
{{stub}}
 
====(1964) U*V*W*, S&Theta;W*====
The fairly unknown U*V*W* came in 1964, and is based on the 1960 CIE UCS.
It is decent for colour distance measurement, but isn't that useful anymore, given the existence of Lab, Luv, and such.
 
The underlying UVW tristimulus values (the ones behind the 1960 UCS's u and v) are defined from XYZ:
 U = 2X/3
 V = Y
 W = (3Y + Z - X)/2
<!--
Not at all sure about this:
 
The w in uvw is added to the UCS's u and v by saying <tt>u+v+w=1</tt>. That is, <tt>1-u-v</tt>.
 
The lack of quotes on u and v and the standard's timing probably mean it uses the earlier UCS{{verify}}.
-->
 
 
S&Theta;W* {{comment|(S, theta, W*, though also often written as SOW)}} is a polar form of U*V*W*.
 
====LCH, LSH====
LCH (strictly: L*C*H*) uses the color space defined by Lab, but uses polar coordinates instead, and calls its parts Luminance, Chrominance and Hue.
 
It is primarily a transformation on top of Lab that makes it more convenient to use in some contexts. For example, it allows GUI color pickers to present choices in a way that makes more sense to our human eyes.
It can also be useful when expressing color difference.
 
LCH is often calculated from Lab, but can also be calculated from Luv; see e.g. [http://www.aim-dtp.net/aim/technology/color-spaces_faq/color-space.faq.txt this].
 
 
LCH<sub>ab</sub> to Lab {{comment|(assuming you do the proper degree-radian conversions)}}:
L = L
a = C*cos(H)
b = C*sin(H)
The opposite conversion is slightly more complex in that you have to deal with quadrants -- which is exactly what the two-argument arctangent (atan2) handles. Lab to LCH<sub>ab</sub>:
 L = L
 C = sqrt( a^2 + b^2 )
 H = atan2(b, a)    (adding 360&deg; if the result comes out negative)
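
A sketch of both directions in Python, with H in degrees:

 import math
 
 def lab_to_lch(L, a, b):
     C = math.hypot(a, b)
     H = math.degrees(math.atan2(b, a)) % 360   # atan2 picks the right quadrant
     return L, C, H
 
 def lch_to_lab(L, C, H):
     h = math.radians(H)
     return L, C*math.cos(h), C*math.sin(h)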
 
<!--
LCH<sub>uv</sub> to CIELUV
-->
 
 
LSH is similar to LCH, but uses the definition of saturation instead of that for chrominance.
 
See e.g. [http://www.colourphil.co.uk/lab_lch_colour_space.html this] or perhaps [http://www.gnyman.com/IT8FujiProvia.htm this page].
 
==More color spaces, some conversion==
{{stub}}
 
===Note on conversion===
Space-to-space conversion can be a complex process. While the formulae converting coordinates in one type of space to another are fairly standard, these formulae often work on linear data, while many practical files use a non-linear (perceptual) variation of the data. This often calls for two steps.
 
There is also often an issue that the spaces use different illuminants. This often forces the conversion to go via XYZ, in which that particular conversion is easier to do.
 
 
 
For example, to go from sRGB (in D65) to Pro Photo RGB (in D50), you need five steps:
* sRGB's R'G'B' to linear RGB
* linear RGB to XYZ
* Correct for the illuminant in XYZ (Chromatic adaptation)
* XYZ to linear RGB
* RGB to Pro Photo's R'G'B'
{{comment|(where the quotes indicate coordinates on a nonlinear scale. Note that there are inherent inconsistencies between notations, such as which scale (linear vs. gamma-corrected) channels/coordinates are in. Copy-pasting without knowing what things mean is always a good way to let errors in.)}}
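
As a sketch of the first two of those steps for sRGB {{comment|(the matrix here folds in sRGB's primaries and D65 white point; chromatic adaptation and the ProPhoto side are left out)}}:

 def srgb_to_linear(c):
     # undo sRGB's piecewise gamma encoding (c in 0..1)
     return c/12.92 if c <= 0.04045 else ((c + 0.055)/1.055)**2.4
 
 def srgb_to_xyz(r, g, b):
     r, g, b = srgb_to_linear(r), srgb_to_linear(g), srgb_to_linear(b)
     X = 0.4124*r + 0.3576*g + 0.1805*b
     Y = 0.2126*r + 0.7152*g + 0.0722*b
     Z = 0.0193*r + 0.1192*g + 0.9505*b
     return X, Y, Z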
 
 
A system may choose to implement multiple steps (particularly common ones) in a single function, of course, but that function would still be conceptually based on a conversion model with more steps.
 
Various conversion pages will mention conversion to XYZ, and regularly give relevant white points and primaries in XYZ too. XYZ is regularly used as the most central space, not so much because it's the only option or the most efficient, but because it's the easiest intermediate that many things are referenced against. Things like white point adaptation (regularly necessary) are also easier to do in a non-specific space such as this.
 
 
Simple conversion formulae found online often just remap the primaries, and don't care about the white point. In some cases, particularly conversions to and from RGB, they may even omit which RGB standard they use.
 
===Types of color space===
====RGB====
RGB is '''not a color space in and of itself'''. That is, it is primarily the ''concept'' of additively mixing from red, green and blue primaries.
 
To be an absolute color space, you need more, primarily absolutely referenced primaries {{comment|(often in XYZ)}}, and a reference white point {{comment|(commonly one of a few well-known standard ones, which are themselves defined in XYZ)}}.
 
Such real-world models used in '''digital image''' storage include
* sRGB
* Adobe RGB
* Apple RGB
* ISO RGB
* CIE RGB
* ColorMatch RGB
* Wide Gamut RGB
* ProPhoto RGB, also known as ROMM RGB
* scRGB (based on sRGB, allowing wider number ranges for a wider gamut - in fact being 80% imaginary)
* ...and others.
 
 
There are also standards that come from '''video reproduction''' (often parts of much larger standards), such as:
* SMPTE-C RGB
* NTSC RGB
* That defined by ITU-R BT.709
* That defined by ITU-R BT.601
* ...and others. 
(TODO: some of those are probably redundant)
 
 
Standards and image files tend to involve gamma information as well, either as metadata to go along with linear-intensity data, or sometimes pre-applied.
 
 
Various webpages may offer formulas for 'XYZ to RGB,' which doesn't actually mean much unless you know ''which'' RGB (perhaps this is usually sRGB?{{verify}}).
 
 
See also:
 
* SMPTE-C: http://www5.informatik.tu-muenchen.de/lehre/vorlesungen/graphik/info/csc/COL_30.htm
 
 
=====sRGB=====
 
See also:
* http://www.w3.org/Graphics/Color/sRGB
 
 
 
====Hue, Saturation, and something else (HLS, HSV)====
{{stub}}
[[Image:HSL and HSV bases.png|thumb|300px|right|Basis of HLS/HSL and HSV]]
[[Image:HSL and HSV.png|thumb|300px|right|HLS/HSL and HSV color volumes]]
 
 
HSV and HSL are regularly seen as different types of color pickers.
 
Not really corrected for perception, but better than nothing -- or than raw RGB, which of course is not a high bar.
 
 
You'll see:
* Hue, Saturation, Value (HSV)
* Hue, Saturation, Lightness (HLS and HSL)
* Hue, Saturation, Brightness (HSB), which is generally another name for HSV (e.g. Photoshop's use)
 
 
These are similar but not all created equal.
 
They share a cyclic spectrum (or rather, RYGCBM) as Hue,
and the idea that Saturation expresses how much of the color is used.

* ...but the L in HSL/HLS runs from black through the saturated color to white
** meaning whites are mostly found where l&gt;0.8
** and saturated colors are found around l=0.5

* ...whereas the V in HSV is more like how well-defined the color is
** meaning whites are mostly found when both v&gt;0.5 and s&lt;0.2
** and saturated colors are basically those for v&gt;0.6
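
Python's standard-library <tt>colorsys</tt> module implements both, which makes the difference easy to poke at:

 import colorsys
 # fully saturated red in both models:
 print(colorsys.hsv_to_rgb(0.0, 1.0, 1.0))  # (1.0, 0.0, 0.0)
 print(colorsys.hls_to_rgb(0.0, 0.5, 1.0))  # (1.0, 0.0, 0.0) -- note l=0.5, not 1
 # white: in HLS you just push L to 1; in HSV you need high V plus low S
 print(colorsys.hls_to_rgb(0.0, 1.0, 0.0))  # (1.0, 1.0, 1.0)
 print(colorsys.hsv_to_rgb(0.0, 0.0, 1.0))  # (1.0, 1.0, 1.0)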
 
 
 
See also:
* http://en.wikipedia.org/wiki/HSL_and_HSV
 
* http://www.webreference.com/dlab/9704/wheel.html
 
====Video====
 
Most analog video splits information into one luma channel (the black and white signal) and two chrominance channels (the extra data that encodes colour somehow).
 
Mostly it is the definition of the chrominance channels that differs between standards.
 
 
Note that these color spaces are defined based on RGB, via fairly simple transformations from/to RGB, so they can be seen as just a (perceptually clever) way of encoding RGB.
 
They are not absolute color spaces in that they depend on the RGB space they were defined on, and whether ''it'' is doing absolute color reference (that is, whether you chose RGB primaries -- and which).
 
{{comment|(Technically, BT.601 mentions RGB primaries so can be considered to give absolute reference - although it should be noted that most analog TV broadcasts as well as TVs can deviate from their standards, for example to bias towards certain color tones, so even if the method of storing video (which is what BT.601 deals with) is well-referenced, the signal you're capturing may easily not be so well defined.)}}
 
 
 
This split came from the evolution of colour in TV signals - at the time, colour was sent separately enough from the luma that older black-and-white TVs would simply use only the luma.
It makes sense beyond that historical tidbit, though - most perceptive detail comes from contrast information, and most of that lies in luminance, so given limited bandwidth, that's what you would want to spend more of it on.
 
 
The above is not very specific, and is true for various video color systems, including Y'UV, Y'DbDr, Y'IQ, Y'PbPr, Y'CbCr, and more.
 
Notes on confusion:
* The primes/quotes (that denote nonlinearity of the Y channel) are often omitted - but because of the perceptive coding, you pretty much always deal with luma rather than luminance, so it's never particularly ambiguous.
 
* A lot of video-related literature uses luminance both in its linear and non-linear meaning. The possible ambiguity isn't always mentioned, so you need to pay attention - even in standards.
 
* YUV is regularly (and erroneously or at least ambiguously) used as a general term, to refer to this style of system, or something more specific, like YCbCr - probably partly because of the similarities in setup between all these systems - and perhaps because YUV is easier to type.
 
<!--
See also:
* [http://www.poynton.com/papers/YUV_and_luminance_harmful.html this note on term abuse]
-->
 
 
They are used in e.g.
* JPEG images (and some other image formats)
 
* analog TV: ({{verify}} the details below)
** [http://en.wikipedia.org/wiki/YUV YUV], used in [http://en.wikipedia.org/wiki/PAL PAL].
** [http://en.wikipedia.org/wiki/YDbDr YDbDr], used in [http://en.wikipedia.org/wiki/SECAM SECAM]. Very much like YUV, mostly just scaled differently
** [http://en.wikipedia.org/wiki/YIQ YIQ], used in [http://en.wikipedia.org/wiki/NTSC NTSC]
** YPbPr, as used in component video
 
* digital video
** [http://en.wikipedia.org/wiki/MPEG MPEG] video, which includes things like DVD discs
** digital television, digital video capture
** ITU-R BT.601 describes digitally storing analogue video in YUV 4:2:2 (specifically interlaced, 525-line 60 Hz, and 625-line 50 Hz - so geared towards NTSC and PAL TV)
** ITU-R BT.709 is similar, but describes many more options (frame rates, resolutions - can be seen as the HDTV-style modern version)
 
 
 
See also:
* http://en.wikipedia.org/wiki/Rec._601
* http://en.wikipedia.org/wiki/Rec._709
 
* http://en.wikipedia.org/wiki/YUV
* http://en.wikipedia.org/wiki/YIQ
* http://en.wikipedia.org/wiki/YDbDr YDbDr
 
 
=====YC<sub>b</sub>C<sub>r</sub> and YP<sub>b</sub>P<sub>r</sub>=====
{{stub}}
 
These two are exactly the same color space, in analog (YP<sub>b</sub>P<sub>r</sub>) and digital (YC<sub>b</sub>C<sub>r</sub>) form. YCbCr is used in digital video, YPbPr is seen e.g. in [[Common_plugs_and_connectors#Component_video_.28YPbPr.29|component video]].
 
 
In calculations, Y' ranges from 0 to 1 (this luma is calculated the BT.601 way{{verify}}), and both Pb and Pr from -0.5 to 0.5.
In digital form there are a few intermediate forms, but commonly each is stored in a byte. They are often scaled not to the full 0-to-255 range but with headroom and footroom: apparently scaling Y's 0..1 to 16..235, and the chrominance channels' -0.5..0.5 to 16..240, centered at 128 {{verify}}.
This seems rooted in the attempt to avoid color clipping that can happen when video is piped through several systems by analogue means (YPbPr), but the ~10% reduction of color resolution is somewhat pointless if the signal is digital all the way, and it is also one reason YCbCr is not an ideal video editing space.

{{comment|(While TVs are adjusted to display what they get nicely, computer-based viewing of this form may need tweaking to use the full range, particularly to make sure the darkest color is black and not a dark gray.)}}
 
 
When YPbPr is carried the three-RCA-plug way, the green plug carries the Y (luma) signal, the blue the Pb (blue minus luma), and the red the Pr (red minus luma) signal.
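
A sketch of the general form of such a conversion {{comment|(the Kr and Kb defaults here are BT.601's values; other standards plug in different constants)}}:

 def rgb_to_ypbpr(r, g, b, kr=0.299, kb=0.114):
     # luma as a weighted sum of gamma-corrected R'G'B' (all in 0..1)
     y = kr*r + (1 - kr - kb)*g + kb*b
     # scaled color differences, each spanning -0.5..0.5
     pb = 0.5 * (b - y) / (1 - kb)
     pr = 0.5 * (r - y) / (1 - kr)
     return y, pb, pr
 
 def ypbpr_to_bytes(y, pb, pr):
     # the head/footroom scaling mentioned above: Y to 16..235, Pb/Pr to 16..240
     return round(16 + 219*y), round(128 + 224*pb), round(128 + 224*pr)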
 
 
<!--
 
A specific standard that uses this type of space {{comment|(say, the television standard ITU-R BT.601, previously known as CCIR 601)}} will state how to convert to/from it from a specific RGB space, given:
* Kb and Kr, two constants based on the specific RGB space
* (gamma-corrected{{verify}}) R',G',B' values to convert
Y' = Kr*R' + (1-Kr-Kb)*G' + Kb*B'
Pb = 0.5 * (B'-Y') / (1-Kb)
Pr = 0.5 * (R'-Y') / (1-Kr)
Y carries brightness information as calculated from RGB, Pb the difference between blue and brightness (B-Y), Pr the difference between red and brightness (R-Y).
-->
 
 
 
See also:
* http://en.wikipedia.org/wiki/YCbCr
* http://en.wikipedia.org/wiki/YPbPr
 
 
=====Chroma subsampling=====
Video formats (including TV) spend more space describing the Y channel than they spend on each color channel -- this because the eye notices loss of luminance detail more than it notices loss of color detail.
 
The selective removal of detail from the chroma channels is known as chroma subsampling, and the exact type is often denoted with something like 4:2:2. These figures describe well-defined sampling patterns rather than just ratios, though their original units are somewhat meaningless in digital form{{verify}}.
 
<!--4:2:2, or 4:2:0, and 4:1:1 are the most common forms.-->
 
Note that chroma subsampling causes banding. Since many compression formats use chroma subsampling, this matters to video editing, where you want to stay lossless for as long as you can, by storing video with minimal (or no) compression. Compression is often inevitable when you write to the final medium, for practicality, but you would often want to keep a master without chroma subsampling.
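
As a naive sketch of the idea -- averaging each 2x2 block of a chroma plane, roughly what 4:2:0 amounts to {{comment|(illustrative only: real codecs use better filters, and this assumes even dimensions)}}:

 def subsample_chroma_420(plane):
     # plane: list of equal-length rows of chroma samples
     return [[(plane[y][x] + plane[y][x+1] + plane[y+1][x] + plane[y+1][x+1]) / 4
              for x in range(0, len(plane[0]), 2)]
             for y in range(0, len(plane), 2)]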
 
See also:
* [http://en.wikipedia.org/wiki/Chroma_subsampling Wikipedia: Chroma subsampling]
 
 
=====YIQ and YUV=====
{{stub}}
 
<!--
YUV can refer to a specific color space, but is also frequently used to group YUV, Y'UV, YIQ, Y'IQ, YCbCr, and YPbPr, which all use a similar luma/color/color setup.
 
 
YIQ is a color space, the one used by NTSC, while Y'UV is similar and used by PAL.
 
YIQ and YUV have a very similar color range, but code it differently. Both are linear transforms from RGB space{{verify}}, and are based on human color perception, but mostly in that they try to separate the most from least interesting information, in a way that lets it use less bandwidth with little perceptive loss - or rather, squeeze a little more quality out of the fixed amount of bandwidth.
 
Note that the first row in YIQ and YUV's from-RGB transform matrices (the luminance, black-and-white part of the transform) is identical in YIQ and YUV, and is the same as in ITU-601 and some other systems (see [[#Grayscale_conversion|grayscale conversion]]).
 
-->
 
 
====CMY and CMYK====
{{stub}}
 
CMY and CMYK are color spaces based on the subtractive coloring that happens when you mix (specific) Cyan, Magenta, Yellow, and Black inks/pigments. Color specifications are usually the percentages of intended application of each ink {{comment|(which does not necessarily translate perfectly or directly to every printing process)}}.
 
CMYK is most interesting for pre-print use, as it allows closer consideration of the printing process than print-agnostic spaces do. It is arguably more accurate than other spaces, in that exact process/color intent is easier to consider and express. Color conversion from other spaces is of course possible, but color problems (e.g. those related to black) may be harder to express/preserve that way.
 
 
Screen color correction is not always good for CMYK. If it is not, colors you choose based on how they look on your screen will still look a little different when printed.
 
Most CMYK-based processes are bad at creating saturated light colors, so CMYK in general often implies a '''fairly small gamut'''. This makes it less than ideal for certain uses, such as photo printing, though variations such as Hexachrome (CMYKOG) and CcMmYK help.
 
 
 
CMYK is the more common variation of CMY. CMYK adds black ink (K, 'key'), which is more economical and also practical: in CMY, a lot of C, M, and Y ink would be needed to create sufficiently dark colors.
In CMYK, unsaturated colors, dark colors, and (deeper) blacks can be mixed with more black ink and less color ink. This practical focus, on specifying how much of each (specific) ink to apply, also makes CMYK a slightly more complex color space, makes calibration important, and makes the eventual colors you see depend on the pigments, process, and even paper.
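
The textbook illustration of that black generation is the naive RGB-to-CMYK conversion below {{comment|(illustrative only -- real prepress work goes through ICC profiles and smarter black generation / undercolor removal)}}:

 def rgb_to_cmyk_naive(r, g, b):
     # r,g,b in 0..1; K takes over the darkness shared by the C, M, Y inks
     c, m, y = 1 - r, 1 - g, 1 - b
     k = min(c, m, y)
     if k == 1:
         return 0.0, 0.0, 0.0, 1.0   # pure black: no color ink at all
     return (c - k)/(1 - k), (m - k)/(1 - k), (y - k)/(1 - k), k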
 
 
 
One common example is the details involved in black. You may think that any CMYK color with 100% black is black {{comment|(in the way that RGB(0,0,0) is the blackest you can specify in that system)}}.  However, CMYK(0,0,0,100) is usually a dark gray. It's black enough for some needs (take a look at newspaper text, for example), but is less ideal in the context of photos, gradients and such.
 
To produce what is known as rich black, processes often apply all pigments {{comment|(CMY, then K on top)}}. You may guess that CMYK(100,100,100,100) is a sufficiently dark black - and it is, but in most cases it is also a waste of ink, and may lead to paper wetness and ink bleeding problems {{comment|(although it does not always have such drawbacks, so it can have its uses)}}. {{comment|(This black is also known as registration black, because it is usually only used in the registration marks used to see whether the different passes in a process align well.)}}
 
 
It turns out that a more economical way of mixing the darkest black you'll usually discern uses about 90 to 100% of K and 50% to 70% of the color channels (depending on the ink/pigment).
Certain specific combinations into rich black, such as (75,68,67,90) or (63,52,51,100), may be seen mentioned with some regularity {{comment|(those two are Photoshop conversions from RGB(0,0,0) under two different CMYK profiles)}}. Designers may also wish for a cooler (bluer) or warmer (redder) but still mostly rich black {{comment|(often mostly C+K and M+Y+K combinations, respectively)}}.
 
Note that in terms of total ink coverage, this totals to about 300%. Rich blacks rarely go above 300%, to avoid wetness and bleeding problems. {{comment|(Registration black is sometimes referred to as 400% black.)}}
 
 
On computer screens, all these blacks will themselves usually look the same, because most screens are not very good at producing blacks in the first place. This means that carelessly combined blacks may print (in most processes) with ugly black transitions, for example boxes that represent areas you pasted in from different sources.
 
 
Gradients to black are also a potential detail, as fading to a non-rich black is quite noticeably different from a fade to rich black. It is not only lighter -- it will often look banded, and will seem to fade via a gray, or a slightly wrong-colored tone. Fading to a rich black is usually preferable.
 
Note that gradients to 100%K may also display some banding in print processes, which is why a rich black with at most ~90%K may actually be handier.
 
 
Another common special case is text. A separate pass for text can be practical in many processes {{comment|(possibly even with a separate black)}}, as you can create consistently sharp text out of a single pass, and avoid the trapping / registration details that you would get from mixing text black from more than one color, which are more noticeable (sometimes much more) in sharp features such as text.
 
 
Related reading:
* http://en.wikipedia.org/wiki/CMYK
* http://en.wikipedia.org/wiki/Rich_black
* http://en.wikipedia.org/wiki/Registration_black
* http://en.wikipedia.org/wiki/Spot_color
* http://en.wikipedia.org/wiki/Trap_(printing)
* http://en.wikipedia.org/wiki/Hexachrome
* http://en.wikipedia.org/wiki/CcMmYK_color_model
 
 
==&Delta;E* and tolerancing==
The &Delta;E* measure (also often written deltaE*, dE*, or dE, for ease of writing) is a measure of color difference.
The usual reason to calculate delta-E is '''tolerancing''': the practice of measuring a color and deciding whether it is acceptable as a stand-in for another, intended color.
 
It is usually suggested that the threshold at which we can discern differences is about 1 &Delta;E*, so this is sometimes used as a tolerancing threshold: if a system deviates more than that, it starts to become perceptibly inaccurate.
 
 
The precision with which the various standards' &Delta;E* figures correlate with human perceived difference differs.
 
Initially, dE (CIE dE76) was a simple distance. CMC allows a more tweakable model, specifically to weigh luminance and chrominance, which was also adopted in CIE dE94 (the difference is not large, though CMC is targeted at and works a little better on textiles, and dE94 is meant for paint coatings).
 
 
The simplest way to express tolerance is to specify the amount the value in each dimension may differ (you will see dL, da and db for Lab; dL, dC and dH for LCH).
Note that in various cases you may be interested only in the two chrominance dimensions.
 
 
We tend to notice differences in hue first, then chroma, then lightness. Because of this, LCH-based tolerancing can be more useful than Lab-based tolerancing, even if the actual distance measure is the same (though the shape/size of the accepted region around a color is not, if you tolerance in a 'plus or minus 0.5 in each dimension' sort of way).
 
 
 
====CIELAB- versus CIELCH-based &Delta;E====
There is no difference in the resulting &Delta;E whether you calculate it from CIE Lab or CIE LCH coordinates, though of course the actual calculation work is a little different.
 
 
The per-dimension deltas for LCH are somewhat more humanly informative: dL represents a difference in brightness, dC a difference in saturation, and dH a color shift.

This may be handier than the same in Lab, where you would have to know that da means redder (positive) or greener (negative), and that db means yellower (positive) or bluer (negative).
 
====dE76====
The original measure comes from CIE, is based on L*a*b* (and was released in the same year), and is simply the Cartesian distance in that space:
 dE*<sub>ab</sub> = sqrt( dL<sup>2</sup> + da<sup>2</sup> + db<sup>2</sup> )
This is normally just called dE* without the ab.
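
Which makes it a one-liner in Python:

 def delta_e76(lab1, lab2):
     # Euclidean distance between two (L*, a*, b*) triples
     return sum((p - q)**2 for p, q in zip(lab1, lab2)) ** 0.5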
 
 
One criticism is that we are not equally sensitive to luminance and chrominance, so the equal weighing of all dimensions is somewhat limited.
 
====CMC l:c, dEcmc====
Released in 1984 by the Colour Measurement Committee of the Society of Dyes and Colourists of Great Britain.
 
Allows relative weighing of luminance and chrominance, which is useful when tolerancing for specific purposes. The default is 2:1, meaning more lightness variation is allowed than chroma difference - specifically twice as much.
 
Its tolerance ellipsoids also vary in size and orientation across the color space, following perception more closely than a single fixed distance would.{{verify}}
 
http://www.brucelindbloom.com/index.html?Eqn_DeltaE_CMC.html
 
====dE94====
Released by CIE, and rather similar to the CMC tolerancing definitions.
 
http://www.brucelindbloom.com/index.html?Eqn_DeltaE_CIE94.html
 
 
====dE2000====
CIE's latest revision, and one still under some consideration.
<!-- also DE00 -->
 
==Unsorted==
 
http://en.wikipedia.org/wiki/Deep_Color
 
http://en.wikipedia.org/wiki/XvYCC
 
===Grayscale conversion===
{{stub}}
 
====From RGB====
{{stub}}
If you have RGB, the easiest conversion to grayscale is (R+G+B)/3, a simple average, but this is not very perception-accurate.
 
<!--
You could use a simple weighed average, like
( .333*red + .445*blue + 0.222*green )
-->
 
There are various RGB standards that mention grayscale conversion as a weighing of the channels. In some cases this is based on perception, and in some it is mostly a correction for the RGB primaries used in the standard.
These include:
* ITU-601-1 and a few others use nearly identical numbers: 0.298954*R + 0.586434*G + 0.114612*B (apparently originating from CIE-XYZ 1931{{verify}})
* EBU/ITU 3213 (PAL{{verify}}): 0.222*R + 0.707*G + 0.071*B
* BT/ITU-709 (HDTV): 0.213*R + 0.715*G + 0.072*B
* ...more
 
 
Most of these are much the same, and partially just corrected for the primaries they use.
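
A sketch using the rounded 601-style weights (the ones most software defaults to):

 def luma_rec601(r, g, b):
     # weighted sum of (gamma-corrected) R'G'B' channels
     return 0.299*r + 0.587*g + 0.114*b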
 
You may have preferences for specific cases. Consider how photographers sometimes use specific colored lenses (or nearly-equivalent Photoshop filters) for specific effects, such as lessening the visibility of freckles, making skies more contrasty, and such.
 
 
 
===Undetailed so far===
Mostly colour spaces, colour models, and transforms. Again, these are not necessarily accurate.
 
* HSV, HSB, HSL/HLS, HSI - hue, saturation, and brightness/level/luminosity/intensity
* IHS
* NTSC YIQ, NTSC CMY
* YUV, YIQ, YCbCr
* CMY, CMYK
* HMMD
* StW, I1I2I3
 
* Retinal Cone
* Munsell
* Karhunen-Loève
 
And things I've read about them:
* NTSC RGB seems to refer to the defined conversion from YIQ to RGB inside the TV.
* SMPTE RGB apparently does the same but matches modern phosphors better.
* IHS conversions seems inconsistently defined between books.
* I1I2I3 is defined from an (unspecified?) RGB as I1=(R+G+B)/3, I2=(R-B)/2, I3=(2G-R-B)/4, so it seems to be a luminance-and-opponent type of system.
 
* CIE L*a*b* apparently agrees with Munsell's colour system well.
 
 
File format notes:
* GIF stores RGB
* JPG stores RGB, YCbCr, or CMYK
* PNG stores RGB (sRGB, or it stores an ICC profile)
 
 
 
http://color-management-swicki.eurekster.com/device+independent+color+space/
 
 
Web gamma:
http://hsivonen.iki.fi/png-gamma/
 
http://www.libpng.org/pub/png/png-gammatest.html
 
http://www.libpng.org/pub/png/png-colortest.html
 
==References and other links==
===Technical===
====General====
* http://en.wikipedia.org/wiki/Color_space
 
* http://www.colorwiki.com/wiki/ColorWiki_Home
 
* http://www.biyee.net/v/index.htm
 
* http://www.scarse.org/docs/color_faq.html
 
* http://davis.wpi.edu/~matt/courses/color/#CIEXYZ
 
* http://en.wikipedia.org/wiki/SRGB
 
* http://www.efg2.com/Lab/Library/Color/Science.htm
 
* http://www.yorku.ca/eye/toc-sub.htm
 
* http://www.climaxtek.com/Faq/General.htm
 
* http://www.midnightkite.com/color.html
 
* http://www.avivadirectory.com/color/
 
* http://tigger.uic.edu/~hilbert/Glossary.html
 
* http://web.archive.org/web/20050624075342/http://www.cox-internet.com/ast305/color.html
 
* http://www.marktaw.com/design/ColorTheorya.html
 
* http://www.poynton.com/notes/events/20031021_LAX_ETC.html
* http://www.poynton.com/PDFs/ColorFAQ.pdf
* http://www.vision.ee.ethz.ch/~cvcourse/brechbuehler/mirror/color/GammaFAQ.html
 
* http://www.vision.ee.ethz.ch/~cvcourse/brechbuehler/mirror/color/
 
 
====CIE stuff====
* http://en.wikipedia.org/wiki/International_Commission_on_Illumination
 
* http://www.cie.co.at/framepublications.html
 
* http://hyperphysics.phy-astr.gsu.edu/hbase/vision/cie.html
 
* http://hyperphysics.phy-astr.gsu.edu/hbase/vision/cie1976.html
 
* http://www.efg2.com/Lab/Graphics/Colors/Chromaticity.htm
 
* http://www.ledtronics.com/datasheets/Pages/chromaticity/097.htm
 
* http://home.wanadoo.nl/paulschils/10.02.htm
 
* http://www.fhwa.dot.gov/tfhrc/safety/pubs/atis/ch03/body_ch03_11.html
 
* http://semmix.pl/color/models/emo111.htm
 
* http://www.videoessentials.com/res_facts.php
 
* http://www.efg2.com/Lab/Graphics/Colors/Chromaticity.htm
 
* http://www.daicolor.co.jp/english/color_e/color_e01.html
 
<!--
http://www.colourphil.co.uk/lab_lch_colour_space.html
-->
 
====Illuminants====
* http://home.wanadoo.nl/paulschils/07.01.html
 
* http://www.ledtronics.com/datasheets/Pages/chromaticity/097.htm
 
 
====Gamma====
* http://www.sjbrown.co.uk/gamma.html http://scanline.ca/gamma/
* http://www.mvtec.com/halcon/download/documentation/reference/hdevelop/trans_from_rgb.html
* http://scanline.ca/ycbcr/
* http://www.brucelindbloom.com/index.html?Eqn_RGB_to_XYZ.html
 
* http://www.libpng.org/pub/png/book/chapter10.html ***
 
* http://graphics.stanford.edu/gamma.html
 
* http://www.cs.princeton.edu/courses/archive/fall00/cs426/papers/smith95d.pdf
 
* http://www.libpng.org/pub/png/colorcube/
 
* http://hyperphysics.phy-astr.gsu.edu/hbase/atmos/blusky.html
 
* http://www.cg.tuwien.ac.at/research/theses/matkovic/node9.html
 
* http://www.neuro.sfc.keio.ac.jp/~aly/polygon/info/color-space-faq.html
 
* http://www.scarse.org/docs/color_faq.html#yuv
 
* http://www.sapdesignguild.org/resources/glossary_color/index1.html
 
* http://www.buena.com/articles/hsvspace.pdf
 
* http://www.brucelindbloom.com/index.html?UPLab.html
 
 
====Psycho-ish====
* http://www.sapdesignguild.org/resources/glossary_color/index1.html
<!--
http://www.sessions.edu/career_center/design_tools/color_calculator/index.asp
-->
 
 
 
====Formulae====
* http://www.brucelindbloom.com/index.html?Equations.html
 
* http://www.easyrgb.com/math.php?MATH=M3#text3
 
* http://www.aim-dtp.net/aim/technology/color-spaces_faq/color-space.faq.txt
 
* http://www.brucelindbloom.com/index.html?WorkingSpaceInfo.html (RGB spaces)
 
<!--
* http://www.physlink.com/Education/AskExperts/ae409.cfm
 
* http://people.scs.fsu.edu/~burkardt/f_src/colors/colors.html
 
* http://www.cs.rit.edu/~ncs/color/t_convert.html
-->
 
====Other====
* http://www.microsoft.com/taiwan/whdc/device/display/color/sRGB.mspx
 
* http://www.color.org/v4spec.html
 
* http://www.biyee.net/v/color_vision_test/confusion_lines.htm
 
* http://www.mat.univie.ac.at/~kriegl/Skripten/CG/node9.html
 
* http://www.mat.univie.ac.at/~kriegl/Skripten/CG/node2.html
 
* http://www.schorsch.com/kbase/glossary/illuminance.html
 
 
* http://www.ilkeratalay.com/colorspacesfaq.php
 
* http://eidetic.ai.ru.nl/egon/MhCP/maarten/papers/Color_space_selection4.pdf
 
* http://www.cis.rit.edu/mcsl/outreach/faq.php?catnum=3
 
* http://www.ntsc-tv.com/ntsc-index-06.htm
 
<!--
http://developer.r-project.org/sRGB-RFC.html
http://www.cs.rit.edu/~ncs/color/t_convert.html
-->
 
===Websites===
====Online calculators====
* http://www.brucelindbloom.com/index.html?ColorCalculator.html
 
* http://www.easyrgb.com/calculator.php
 
* http://www.colorpro.com/info/tools/convert.htm
 
====Color pickers====
Color pickers / color schemes / color coordination, often for styles
 
* http://www.colorschemer.com/online.html
 
* http://www.ficml.org/jemimap/style/color/wheel.html
 
* http://www.colorjack.com/sphere/
 
 
* http://www.colorsontheweb.com/colorwheel.asp
 
* http://wellstyled.com/tools/colorscheme2/index-en.html
 
 
* http://www.behr.com/behrx/workbook/
 
* http://kuler.adobe.com/
 
* http://www.degraeve.com/color-palette/ bases colors on image
 
 
* http://pourpre.com/colordb/?i=c486A95&l=eng - harmonies and such
 
* http://www.siteprocentral.com/html_color_code.html
 
* http://www.telecable.es/personales/alberto9/color/index.htm - funky flash app
 
 
 
====Social scheming====
* http://www.colorschemer.com/schemes/?sort=rating
 
* http://beta.dailycolorscheme.com/
 
* http://www.colourlovers.com/
 
 
====Reference lists====
* http://fugal.net/vim/rgbtxt.html - color name list
 
* http://www.pitt.edu/~nisg/cis/web/cgi/rgb.html
 
* http://blizzardskies.com/bz/colorchart.html
 
* http://en.wikipedia.org/wiki/List_of_colors
 
 
<!--* http://www.graphviz.org/doc/info/colors.html sets -->
 
* http://html-color-codes.com/
 
* http://www.musemixer.com/HTML-HEX-Color-Chart/
 
* http://www.logoorange.com/color/color-codes-chart.php
 
====Usability====
(considering color blindness and such):
 
* http://gmazzocato.altervista.org/colorwheel/wheel.php
 
* http://www.w3.org/TR/AERT#color-contrast
 
====Other/unsorted====
* http://blog.doloreslabs.com/?p=11
 
* http://tools.dynamicdrive.com/gradient/
 
* http://meyerweb.com/eric/tools/color-blend/
 
* http://en.wikipedia.org/wiki/List_of_monochrome_and_RGB_palettes
 
 
* http://www.student.oulu.fi/~oniemita/dye/dyemixer/
 
* http://www.nendai.nagoya-u.ac.jp/gsd/sicc/
 
 
* http://www.fuelyourcreativity.com/3-deadly-sins-of-print-design/
 
 
More links: http://www.tlbox.com/web_designers/color
 
* http://stat.ethz.ch/R-manual/R-patched/library/grDevices/html/convertColor.html
 
===Software===
* http://www.littlecms.com/faq.htm
 
 
[[Category:Audio, video, images]]
