Photography notes
Revision as of 12:25, 13 July 2023
Flash
Digital
Digital raw formats
On digital ISO
tl;dr:
- higher ISO means more gain during readout of the sensor.
- Which can be seen as more sensitivity,
- though since it's just amplification, it increases the noise as much as the signal
- ...but it's a little more interesting than that due to the pragmatics of photography
- You generally want to set it as low as sensible for a situation, or leave it on auto
ISO in film indicated film speed, tied to grain size (see ISO 5800). Lower ISO meant finer grain and finer detail, but also that more light was required to get just as much image.
ISO in digital photography (see also ISO 12232) is different. It refers to the amplification used during the (at this stage still analog) readout of sensor rows - basically, what gain to use before feeding the signal to the ADC. But what does that do?
Given a sensor with an image already accumulated in it, the only change that gain really makes to the readout is brightness - not signal-to-noise ratio, or quality in any other way.
In that sense, it has no direct effect on the amount of light accumulated in the sensor. However, since it is one of the physical parameters the camera chooses (alongside aperture and shutter time), it can choose to trade off one for another.
For example, in the dark, a camera on full auto is likely to choose a wide open aperture (for the most light), and then choose a higher ISO if that means the shutter time can be lowered to not introduce too much motion blur from shaking hands.
There are other such tradeoffs, e.g. controlled via modes -- portrait mode tries to open the aperture so the background is blurred, sports mode aims for short shutter times so you get minimal motion blur at the cost of noise, and more. Most of these are explained mostly in terms of the aperture/shutter tradeoff; the ISO choice is relatively unrelated, and can be explained as "as low as is sensible for the light level".
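As a concrete illustration of why these settings trade off against each other: exposure is commonly summarized as an exposure value, where halving the shutter time, closing the aperture one stop, or halving the ISO each shift it by 1 EV. A minimal sketch (the `ev100` helper is our own name for illustration, not a camera API):

```python
import math

def ev100(f_number: float, shutter_s: float, iso: float) -> float:
    """Exposure value normalized to ISO 100: log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# Three equivalent exposures of the same scene:
a = ev100(2.8, 1 / 60, 400)  # wide aperture, handholdable shutter, more gain
b = ev100(2.8, 1 / 15, 100)  # same aperture, longer shutter, low gain (tripod)
c = ev100(5.6, 1 / 15, 400)  # smaller aperture, longer shutter, more gain
print(round(a, 2), round(b, 2), round(c, 2))  # all three come out the same
```

Full auto amounts to picking one such combination, given the light level and constraints like handshake blur.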
Still, you can play with it.
Note that the physical parameters are chosen with the sensor in mind - to not saturate its cells (over-exposure), but also within other constraints, like avoiding underexposure (signal falling into the noise floor), and in general trying to use much of the storage range it has (also to avoid unnecessary quantization, though this is less important).
As such, when you force a high ISO (i.e. high gain) but leave everything else auto, you will effectively force the camera to choose a shorter shutter time and/or smaller aperture.
That means less actual light is used to form the image, which implies a lower signal-to-noise ratio (because the noise floor is basically constant). The noise is usually still relatively low, but it can become noticeable, e.g. in lower-light conditions.
Similarly, when you force a low ISO, the camera must plan for more light coming in, often meaning a longer exposure time.
On a tripod this can mean nicely low-noise images, while in your hands it typically means shaky-hand blur.
In a practical sense: when you have a lot of light, somewhat low ISO gives an image with less noise.
When you have little light, high ISO lets you bring out what's there, with inevitable noise.
Leaving it on auto tends to do something sensible.
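The "gain amplifies noise as much as signal" point above can be sketched with a toy pixel model. The numbers here (photon counts, read noise, pixel count) are made-up illustrative values, and shot noise is approximated as Gaussian:

```python
import random

def capture(photons_mean: float, gain: float, read_noise: float, n: int = 20000):
    """Simulate n pixel readouts: shot noise plus read noise, then analog gain.
    Returns (mean output level, signal-to-noise ratio)."""
    vals = []
    for _ in range(n):
        # approximate Poisson shot noise with a Gaussian of sigma = sqrt(mean)
        signal = random.gauss(photons_mean, photons_mean ** 0.5)
        vals.append(gain * (signal + random.gauss(0, read_noise)))
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    return mean, mean / var ** 0.5

# Lots of light at low gain vs. little light at high gain -> same brightness:
bright_low_iso = capture(photons_mean=1600, gain=1, read_noise=5)
dim_high_iso   = capture(photons_mean=100,  gain=16, read_noise=5)
print(bright_low_iso)  # SNR roughly 40
print(dim_high_iso)    # similar mean level, SNR roughly 9
```

Both captures end up equally bright, but the high-gain, little-light one carries several times more relative noise - which is the practical cost of forcing a high ISO.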
More technical details
On a sensor's dynamic range and bits
Infrared
Infrared around optical cameras is mostly about the fact that camera sensors can see some way into near infrared.
Note that with how large the range of infrared is, that's only near-infrared and then only a small part of that.
For reference, our eyes see ~700nm (~red) to ~400nm (~violet), while CCD and CMOS image sensors might see perhaps 1000nm to ~350nm.
The amount and shape of that overall sensitivity varies, but tl;dr: they look slightly into near-IR (also slightly into UV-A on the other end).
Changing that sensitivity
Broadly speaking, there are two main things we might call "infrared filters": IR-cut and IR-pass, which do exactly opposite things.
- These image sensors, when used for photography, will typically have IR-cut filters - a coating or filter that cuts most of that IR response
- because it turns out to be easier to add a separate infrared-cut filter than to design that into the sensor itself (verify).
- usually a filter in front of a camera image sensor, because you'd want it always and built in
- look transparent but bluish from most angles (because they also remove a little visible red)
- cuts a good range above some point, or rather transition, often somewhere around 740...800nm
- Since it's a transition, bright IR might still be visible. For example, IR remote controls (typically in the 840..940nm range) may still be (barely) visible, in part because they're actually quite bright and concentrated
- Where the IR-cut is a physically separate filter, rather than a coating, you can remove that infrared-cut to get back the sensitivity that the sensor itself has
- doing so leaves you with something that looks mostly like regular optical, but stronger infrared sources will look like a pink-ish white.
- White-ish because IR above 800nm or so passes through all of the Bayer filter's colors roughly equally
- pink-ish mostly because near-infrared is next to red, so the red filter passes it more than the others, and red tends to dominate.
- This is why IR photographers may use a color filter to reduce visible red -- basically so that the red pixels pick up mostly IR, not visible-red-and-IR.
- Which is just as false-color as before but looks a little more contrasty.
- These images may then be color corrected to appear more neutral, which tends to come out as white-and-blueish
- Note that such visible-and-IR mixes will look fuzzy, because IR has its own focal point (verify)
- IR-pass, visible-cut filters
- often look near-black
- cut everything below a transition, somewhere around 720..850nm range
- using these on a camera that has an IR-cut will give you very little signal (it's much like an audio highpass and lowpass set to about the same frequency, except these cut only so much so you do have some response left)
- but on a camera with IR sensitivity, this lets you view mostly IR without much of the optical
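That highpass-plus-lowpass analogy can be sketched numerically. This toy model uses logistic transmission curves; the edge wavelengths, transition width, and leak fraction are made-up illustrative values, not measured filter data:

```python
import math

def edge_filter(wl_nm: float, edge_nm: float, passes_above: bool,
                width: float = 15.0, leak: float = 0.01) -> float:
    """Toy transmission curve: a smooth 0..1 transition around edge_nm, with a
    small residual 'leak' in the blocked band (real filters are not perfectly opaque)."""
    t = 1 / (1 + math.exp(-(wl_nm - edge_nm) / width))  # logistic edge
    t = t if passes_above else 1 - t
    return leak + (1 - leak) * t

def stacked(wl_nm: float) -> float:
    ir_cut  = edge_filter(wl_nm, 770, passes_above=False)  # camera's built-in IR-cut
    ir_pass = edge_filter(wl_nm, 850, passes_above=True)   # screw-on IR-pass
    return ir_cut * ir_pass   # filters in series multiply their transmission

for wl in (550, 700, 810, 900, 1000):
    print(wl, round(stacked(wl), 4))
# the product stays tiny everywhere: each filter blocks what the other passes
```

Because filters in series multiply, an IR-cut edge around 770nm stacked with an IR-pass edge around 850nm leaves only the leakage - which is why this combination gives you very little signal.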
Sensors with a Bayer filter (i.e. color sensors) actually cut out more IR than a sensor without such a color filter would. A non-colored sensor would do better, but these are a niche product, at least in handheld form.
There are also NIR cameras, which are functionally much like converted handhelds, but a bunch more efficient than most DIYing.
DIYing regular digital cameras and webcams lets you see only partly into near-infrared, because that's how far these sensors will go.
You can't e.g. make a thermal camera with such DIYing - those sensors are significantly different.
Thermographic cameras are often sensitive to a larger range, like 1000nm to 14000nm, covering most of near-infrared and some of mid-infrared.
In photography, base cameras and lens filters give you options like:
- cutting IR and some red - similar to a regular blue filter
- passing only IR
- passing IR plus blue - happens to be useful for crop analysis(verify)
- passing IR plus all visible
- passing everything (including the little UV)
For some DIY, like FTIR projects,
note that there are ready-made solutions, such as the Raspberry Pi's NoIR camera.
If you want to do this yourself,
you may want to remove a webcam's IR-cut filter - and possibly put in an IR-pass filter.
- In webcams the IR-cut tends to be a glass element in the screw-in lens
- which may just be removable
- For DSLRs the IR-cut is typically a layer on top of the sensor
- which can be much harder to deal with
While in DSLRs the IR-cut is typically one of a few layers mounted on top of the sensor (so that not every lens has to have it), in webcams, the IR-cut filter may well be on the back of the lens assembly
See also: