|This article/section is a stub — probably a pile of half-sorted notes that is not well-checked, so it may have incorrect bits. (Feel free to ignore, fix, or tell me)|
- 1 Feature detection
- 2 (Primarily) supporting transforms
- 3 Other useful processing
Where features tend to include:
- Points - single locations of interest; a relatively zero-dimensional kind of feature
- Blobs - smooth areas that won't (necessarily) be detected by point detection. Their approximate centers may also be considered interest points
- Edges - a relatively one-dimensional feature, though with a direction
- Corners - things like intersections and ends of sharp lines; a relatively two-dimensional kind of feature
In comparisons between similar images, keep in mind that blob centers can become interest points, gradients can become edges, and so on, and that the distinction between detectors can be, and arguably should be, fuzzy.
- Interest point - could be said to group the above and more
- preferably has a clear definition
- has a well-defined position
- preferably quite reproducible, that is, stable under relatively minor image alterations such as scaling, rotation, translation, and brightness changes
- useful in their direct image context - corners, endpoints, intersections
- Region of interest
Interest point / corner detection
- Laplacian of Gaussian (LoG)
- Difference of Gaussians (DoG)
- Determinant of Hessian (DoH)
- Maximally stable extremal regions
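Of the detectors above, the Difference of Gaussians is easy to sketch, and also illustrates why it is used as a cheap approximation of the Laplacian of Gaussian. The following is a minimal numpy sketch, not any library's implementation: one scale only (no scale-space pyramid), with sigma and the k=1.6 ratio picked just for illustration.

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1D Gaussian kernel, normalized to sum to 1
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    # separable blur: convolve rows, then columns
    k = gaussian_kernel(sigma)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, img)
    return img

def dog_response(img, sigma=2.0, k=1.6):
    # Difference of Gaussians, which approximates the (scale-normalized) LoG
    return gaussian_blur(img, sigma * k) - gaussian_blur(img, sigma)

# a bright disc on a dark background
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 25] = 1.0

resp = dog_response(img)
# a bright blob shows up as a DoG minimum; its location is the blob center
peak = np.unravel_index(np.argmin(resp), resp.shape)
```

A real detector would compute this at many sigmas and look for extrema over both position and scale, which is also what makes the result (roughly) scale-invariant.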
Tends to refer to detecting anything more complex than a point, edge, blob, or corner - regularly defined by example.
- SIFT (Scale-Invariant Feature Transform)
- SURF (Speeded Up Robust Features)
- faster than SIFT, performs similarly
- GLOH (Gradient Location and Orientation Histogram)
- MSER (Maximally Stable Extremal Regions)
- (primarily blob detection)
- LESH (Local Energy based Shape Histogram)
- Hough transform
- Structure tensor
- SPIN, RIFT (but SIFT usually works better (verify))
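Of the above, the Hough transform for straight lines is compact enough to sketch directly. A minimal numpy version, assuming edge points have already been found by some earlier step (the point set and parameters here are purely illustrative):

```python
import numpy as np

def hough_lines(points, shape, n_theta=180):
    # accumulator over (theta, rho): each edge point votes for every
    # line x*cos(theta) + y*sin(theta) = rho that passes through it
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, 2 * diag), dtype=int)
    for y, x in points:
        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[np.arange(n_theta), rho] += 1
    return acc, thetas, diag

# edge points lying on the horizontal line y = 10
pts = [(10, x) for x in range(5, 60)]
acc, thetas, diag = hough_lines(pts, (64, 64))

# the strongest accumulator cell is the best-supported line:
# theta ~ pi/2 (a horizontal line), rho ~ 10
ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
theta, rho = thetas[ti], ri - diag
```

The same voting idea generalizes to circles and other parametric shapes, at the cost of a higher-dimensional accumulator.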
(Primarily) supporting transforms
Morphological image processing
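For binary images the basic morphological operations are short enough to show. A rough numpy sketch of dilation, erosion, and opening with a square structuring element (using np.roll for neighborhood shifts, which wraps at the image borders - a simplification a real implementation would avoid):

```python
import numpy as np

def dilate(img, r=1):
    # binary dilation: a pixel is set if anything in its
    # (2r+1)x(2r+1) neighborhood is set
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def erode(img, r=1):
    # binary erosion: a pixel survives only if its whole neighborhood is set
    out = np.ones_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out &= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def opening(img, r=1):
    # erosion then dilation: removes specks smaller than the element
    # while mostly preserving larger shapes
    return dilate(erode(img, r), r)

img = np.zeros((32, 32), dtype=bool)
img[10:20, 10:20] = True   # a solid block
img[3, 3] = True           # a one-pixel speck of noise
cleaned = opening(img)      # speck gone, block intact
```

Closing (dilation then erosion) is the dual operation, filling small holes instead of removing small specks.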
Focusing on details or the overall image: bandpass, blur, and median filtering
For color analysis we often want to focus on the larger blobs and ignore small details (though in some cases those fall away in the statistics anyway).
Local variance: each pixel defined by the variance in a nearby block of pixels
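The last of these, a per-pixel local-variance map (often used as a cheap texture measure), can be sketched as follows. This is an illustrative numpy version using the identity Var[x] = E[x²] − E[x]², with a box mean built from np.roll (which wraps at image borders, a simplification):

```python
import numpy as np

def local_variance(img, r=2):
    # per-pixel variance over a (2r+1)x(2r+1) neighborhood
    def box_mean(a):
        out = np.zeros_like(a, dtype=float)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out / (2 * r + 1) ** 2
    f = img.astype(float)
    m = box_mean(f)
    return box_mean(f ** 2) - m ** 2  # Var[x] = E[x^2] - E[x]^2

# flat regions give ~zero variance; textured regions give high variance
img = np.zeros((32, 32))
yy, xx = np.mgrid[0:32, 0:32]
img[:, 16:] = (yy[:, 16:] + xx[:, 16:]) % 2  # checkerboard on the right half
v = local_variance(img)
```

Thresholding such a map is one simple way to separate textured regions from flat ones.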
Other useful processing
Near-duplicate detection, image similarity, image fingerprinting
(Near-)duplicate detection is generally defined as detecting mild variations coming from one or more of:
- Common image/video editing operations:
- Crops - digital crops (often mostly of the less interesting areas), often up to half of the original
- Resizes - different resolution variations of the same image (includes resampling inaccuracies)
- Aspect ratio changes - particularly on TV material
- Mild color changes - contrast changes, implied changes from color space conversion
And, in some applications:
- Camera angles - different cameras taking images of the same thing (consider TV coverage from various networks). Also images from the same camera a short time apart
- Camera settings - such as color, brightness, exposure.
- Added borders
- Mild noise
It can be a fairly simple task - at least, much simpler than sub-image detection, more arbitrary image comparison, or feature detection.
When you are, say, only interested in removing some near-duplicate wallpapers from your collection, you are dealing with little more than rescales and crops, and perhaps some color changes. These can be covered by relatively simple methods.
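One of the simplest such methods is an average hash: shrink the image to a tiny grid, threshold against the mean, and compare the resulting bit strings by Hamming distance. A minimal numpy sketch (the function names and parameters are illustrative, not any particular library's API):

```python
import numpy as np

def average_hash(img, hash_size=8):
    # shrink to hash_size x hash_size by block-averaging, then threshold
    # against the mean: a 64-bit fingerprint that survives rescales and
    # mild brightness/contrast changes (but not large crops or rotation)
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = img[:bh * hash_size, :bw * hash_size].reshape(
        hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(a, b):
    # number of differing bits; a small distance suggests near-duplicates
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
img = rng.random((128, 128))
resized = img.reshape(64, 2, 64, 2).mean(axis=(1, 3))  # crude 2x downscale

d_resized = hamming(average_hash(img), average_hash(resized))   # ~0 bits
d_random = hamming(average_hash(img),
                   average_hash(rng.random((128, 128))))        # ~32 bits
```

In practice you pick a distance threshold (a few bits) below which two images are treated as duplicates; fancier variants hash DCT coefficients or gradients instead of raw means, which tolerates somewhat larger edits.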