Polylines and splines, curves, interpolation, resampling, easing

 This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)
 These are primarily notes. It won't be complete in any sense. It exists to contain fragments of useful information.

Polylines, polygons

Polylines and polygons typically consist of vertices that describe adjacent line segments.

Line segments can be considered a more restricted case of your basic mathematical infinite line. Two points describe it (while in math we often use slope-intercept or point-slope form), and we say they stop at those endpoints. This view can be helpful because we can steal properties and algorithms from the infinite-line cases as long as we also consider we added the 'stops here' detail.

Polylines

A polyline is also known as polygonal chain, polygonal curve, polygonal path, piecewise linear curve and probably under a few more names.

They consist of a series of vertices that are used to describe a continuous sequence of straight line segments.

If they describe a closed shape, they are usually referred to as polygons.

Polygons

Polygons refer to closed polylines.

Closedness is often either something that is tested for (last vertex is first vertex) or, perhaps more commonly, an additional property that says "treat this as if the last and first vertex are connected".

The latter may be more convenient, and is more robust when you use floating point numbers and transform the values (and so may get slight errors).

polygon area, definition direction (2D)

The most elegant solution I know of is summing signed cross products. That is, take one arbitrary reference point and combine it with each sequential pair of the polygon's actual vertices.

This will also tell you the direction - clockwise or anticlockwise - that the polygon was defined in (because the summed areas are signed, this works regardless of where the arbitrary point lies, even outside the polygon).

It is usually easiest to take one of the polygon's own vertices as the fixed point.

The reason it works is that the cross product calculates the area of the parallelogram described by the two vectors; you divide this by 2 to get the area of the triangle that is half that parallelogram -- i.e. the triangle described by each pair of vectors you consider.

If you want it to make more intuitive sense: draw a convex polygon, take the reference point to be somewhere in the middle, and start the pairwise summing described above. You're drawing a division into triangles that together exactly make up the polygon. You're tessellating it.

It also works for concave polygons, which is slightly harder to prove or explain, but you can probably intuit that it calculates negative areas when the vectors go the other way around: negative for clockwise, positive for counter/anticlockwise. (For simple concave examples, you can convince yourself that the outside areas get counted positively as often as negatively, so count as zero in the eventual sum.)

Note that if the polygon is not closed, or is self-overlapping, the concept of its area is not well-defined (so no algorithm will yield a very useful result unless it deals with this explicitly somehow).
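The signed-sum idea above can be sketched as follows -- a minimal version that uses the origin as the arbitrary reference point (which, as noted, doesn't affect the result):

```python
def signed_area(vertices):
    """Signed area of a closed polygon via summed cross products
    (the shoelace formula). Positive for counterclockwise winding,
    negative for clockwise."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around: treat as closed
        area += x1 * y2 - x2 * y1       # 2D cross product term
    return area / 2.0

square = [(0, 0), (2, 0), (2, 2), (0, 2)]  # counterclockwise
print(signed_area(square))        # 4.0
print(signed_area(square[::-1]))  # -4.0 (same shape, clockwise)
```

Taking abs() of the result gives the plain area; the sign gives you the definition direction.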


point inside/outside test (2D)

To test whether a specific point is inside a particular closed polygon, cast a ray from the point in any direction (e.g. horizontal, for ease) and check whether it crosses polygon edges, reducing the problem to finding whether/where two lines meet.

If there is an even number of crossings the point is outside the polygon, while an odd number means it is inside. This also holds for concave polygons, where a line may cross the border more than once. (For convex polygons, the number of crossings will only ever be zero or one)

Note

• you yourself have to decide whether a point exactly on a boundary should be considered inside or outside.
• you may wish to add a slight threshold to bias for inclusion (e.g. to avoid exclusion caused by rounding error)
• if the polygon is not closed or is self-overlapping, the problem isn't too well defined
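A minimal sketch of that even-odd ray cast (horizontal ray to the right; points exactly on a boundary are left as the edge-case they are):

```python
def point_in_polygon(px, py, vertices):
    """Even-odd rule: cast a horizontal ray rightward from (px, py)
    and toggle inside/outside at each polygon edge it crosses."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # does this edge straddle the ray's y coordinate?
        if (y1 > py) != (y2 > py):
            # x position where the edge crosses that y
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:          # crossing is to the right of the point
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True
print(point_in_polygon(5, 2, square))  # False
```

Note the asymmetric comparisons (`>` on both y tests): this consistently assigns vertices that lie exactly on the ray to one side, avoiding double-counting.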

Cartographic additions

Consider having borders between countries. If you take each country as a distinct polygon, you effectively store most data twice, because each border is part of two polygons.

This can become problematic whenever you start transforming these coordinates (floating point errors), when you simplify the lines, or any other case where what should be the exact same line ends up not being numerically identical anymore. Drawing both now may leave holes and overlaps.

GIS systems typically avoid such double lines.

One way is to store sets of polylines, where multiple what-you-consider-polygons can refer to the same polyline.

This also means that conversion between that and polygons (consider import and export) is more involved: you will need extra logic to deal with data like this (a polyline can only be removed once no polygons refer to it anymore), vertices may connect three or more line segments, and such.

It also gets interesting when you have polylines in different layers. Say you have shorelines in one layer, political borders in another. Drawing a country polygon is an interesting operation that may need to deal with some almost-meeting cases and such.

Line segment simplification

Lines such as shorelines may be created in high resolution, but accuracy to a meter is not useful at country scale. Drawing the line at various zoom levels would be faster if you could dynamically lower the resolution of the line.

Taking every n-th vertex is a quick but very inaccurate solution; it is very likely to destroy shapes, particularly if the vertices are not spaced evenly.

In various applications there are additional considerations, particularly when applying severe reductions. These considerations may include:

• shared vertices/lines, common in geographic data
• vertices that are involved in more than two line segments -- often treated as non-simplifyable (but see mesh simplification)
• treating start and end vertices as non-simplifyable
• avoiding solutions that introduce certain types of intersections, such as
• self-intersections, which small, thin features such as river inlets might develop
• intersections between lines that belong to the same logical feature but are separate entities (the two banks of a river, or the same border used in two separate polygons)
• if you have point locations related to polygons, avoiding solutions that change whether these are inside or outside
• whether simplification may introduce self-intersection (this can happen with most algorithms that omit points, including Douglas-Peucker)
• minimal area change
• minimal (aesthetic) shape change

Douglas-Peucker, Ramer

The Douglas-Peucker algorithm (also Ramer-Douglas-Peucker; Ramer published in 1972, Douglas and Peucker in 1973) is an algorithm commonly employed to simplify polygonal lines. The basic algorithm is O(n²), optimizable to O(n log n).

DP is a recursive algorithm that starts by approximating a polyline with a straight line between its first and last points, then recursively considers which vertices to add, adding the point furthest from the current approximating segment unless it falls under a certain distance threshold.

The threshold effectively controls the accuracy/resolution of the line, and most implementations allow you to supply this threshold, and/or a target amount of vertices.
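A minimal recursive sketch of the algorithm, using perpendicular distance to the infinite line through the endpoints (the most common programmer's reading; see the distance-definition caveat discussed below):

```python
import math

def perpendicular_distance(point, start, end):
    """Distance from point to the infinite line through start and end."""
    (px, py), (x1, y1), (x2, y2) = point, start, end
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    if length == 0:  # degenerate: start and end coincide
        return math.hypot(px - x1, py - y1)
    return abs(dx * (py - y1) - dy * (px - x1)) / length

def douglas_peucker(points, epsilon):
    """Keep the point furthest from the approximating segment if that
    distance exceeds epsilon; recurse on both halves."""
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]   # everything in between is dropped
    left = douglas_peucker(points[:index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right             # avoid duplicating the split point

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6),
        (5, 7), (6, 8.1), (7, 9), (8, 9), (9, 9)]
print(douglas_peucker(line, 1.0))  # endpoints are always kept
```

Larger epsilon means more aggressive simplification; epsilon 0 keeps every vertex that isn't exactly collinear.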

Cartography exposes its limits: it doesn't always make sense when lines relate to other lines, the simplification does not conserve area very well, and such.

As pointed out in (Ebisch 2002), the Douglas-Peucker paper describes the distance in various ways, likely all intended the same way but open to mildly different interpretations:

1. "The perpendicular distance of C from the segment A–B...,"
2. "...distance from the straight line segment...",
3. "...the furthest point from the straight segment...",
4. "...maximum perpendicular distance from the segment...",
5. "...the greatest perpendicular distance between it and the straight line defined by the anchor and the floater”.

While (5) seems to be the most widely used by programmers, (2) and (3) are better choices in certain cases, some of which occur in real-world data and not just in pathological examples.

Perpendicular distance alone isn't enough, because it causes the algorithm to mistakenly omit points - for example, points that lie beyond the ends of the current segment but have a small perpendicular distance to the infinite line through it. For a real-world shoreline on which this leads to bad simplification (and some pathological examples), see (Ebisch 2002).

See also

• U. Ramer (1972), An iterative procedure for the polygonal approximation of plane curves
• D. Douglas and T. Peucker (1973), Algorithms for the reduction of the number of points required to represent a digitized line or its caricature
• J. Hershberger and J. Snoeyink (1992), Speeding Up the Douglas-Peucker Line-Simplification Algorithm
• K. Ebisch (2002), A correction to the Douglas–Peucker line generalization algorithm

• G. Gorman (2007), Shoreline approximation for unstructured mesh generation
• R. McMaster and S. Shea (1992), Generalisation in Digital Cartography
• G. Hunter (1998), Generalisation Techniques for Spatial Data, Lecture Notes, University of Melbourne

Curves, splines

A curve is a collection of points that a moving point passes through, with no discontinuities or gaps. Perhaps a little more intuitively, it is a trace of values. It may be a finite segment or extend without end. (Mathematically, curves include straight lines, line segments, and other things that you may not intuitively consider curves because of the curvy connotation)

In many contexts, a curve is a thing described by a simple formula or method, regularly set up so that you can just plug in some parameters. For example, a cubic Bézier curve is a curvy segment described by four points.

Many real-world shapes are too complicated to be described by a single curve, which is where splines come in. Splines are made of curves (for example, a Bézier spline is made of Bézier curves end-to-end). A spline is a function defined piecewise, usually using polynomials.

In many cases, control points used in curves are shared between spline segments, for example to guarantee continuity where segments join.


Curve/spline methods/formulae

Different methods give different features and guarantees. For example, Bézier focuses more on guaranteeing mathematical continuity, but partly because of that is much less pliable, given the same number of control points, than various other spline types.

• A Bézier curve is one in which each polynomial is in Bézier form
• Often quadratic or cubic (the linear case is straight lines; higher-order than cubic starts taking a lot of calculation)
• B-splines are a variation/generalization of Bézier splines(verify) with handier properties for some modelling purposes (control points in a Bézier spline have only limited control over the spline, which is not handy when you want a sharper shape without adding many more control points)
• A cardinal spline is a class of spline: a curve that passes smoothly through a given set of points, with some degree of tension at each.
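To make the Bézier case concrete: a Bézier curve of any degree can be evaluated with de Casteljau's algorithm, which is just repeated linear interpolation between control points. A minimal sketch:

```python
def de_casteljau(control_points, t):
    """Evaluate a Bézier curve at parameter t in [0, 1] by repeatedly
    lerping between adjacent control points until one point remains."""
    pts = [list(p) for p in control_points]
    while len(pts) > 1:
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return tuple(pts[0])

# cubic Bézier: four control points
ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
print(de_casteljau(ctrl, 0.0))  # (0.0, 0.0) - starts at the first control point
print(de_casteljau(ctrl, 1.0))  # (4.0, 0.0) - ends at the last control point
print(de_casteljau(ctrl, 0.5))  # (2.0, 1.5) - the two middle points only pull
```

Note that the curve passes through the first and last control points but is only attracted toward the middle ones -- the "limited control" mentioned above.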

Interpolation and resampling

Interpolation usually refers to the process of creation of new points within a data set, based on current data. This is often done by fitting a continuous function through the existing data, allowing for any amount of new points.

Uses of interpolation include image resizing, line smoothing, and such. Sound resampling can also be based on interpolation, although there are other methods.

Note that there are many named interpolations that may not be mentioned here because they are analogous enough. For example, trilinear is linear in 3D and works much like bilinear in 2D but with consideration of another dimension.

Particularly in raster images (like photos), specific wishes are part of the equation, and there are various photo printing interpolators that do things you may not want to do with other types of data (such as being adaptive to sharp contrast). I won't get into this much here.

Linear and polynomial interpolation

See also:

Linear interpolation can be seen as the specific case of polynomial interpolation with a degree-one polynomial.
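As a sketch, linear interpolation of a value between two known data points is just a weighted average:

```python
def linear_interpolate(x0, y0, x1, y1, x):
    """Value at x on the straight line through (x0, y0) and (x1, y1)."""
    t = (x - x0) / (x1 - x0)  # how far along the segment x lies, 0..1
    return y0 + t * (y1 - y0)

print(linear_interpolate(0, 10, 4, 20, 1))  # 12.5
```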

Cubic interpolation

Probably the simplest method that gives real, data-based continuity between points.

Uses two extra data points around the two in question. (For calculation of the first and last segments, an extra point is often dreamed up with the same slope as the segment in question)
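One common four-point cubic of this kind is the Catmull-Rom form (an assumption here -- the notes above don't name a specific variant), which interpolates between the middle two points using the outer two as neighbours:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic interpolation between p1 and p2 (Catmull-Rom form),
    using neighbouring points p0 and p3 to estimate slopes. t in [0, 1]."""
    return 0.5 * (
        2 * p1
        + (-p0 + p2) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
        + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t
    )

print(catmull_rom(1, 2, 3, 4, 0.0))  # 2.0 - passes through p1
print(catmull_rom(1, 2, 3, 4, 1.0))  # 3.0 - passes through p2
print(catmull_rom(1, 2, 8, 4, 0.5))  # value midway, pulled by the neighbours
```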

Bicubic interpolation

2D case of cubic (analogous to how bilinear is the 2D case of linear). Uses 16 points.

When resampling (2D) raster images, bicubic is regularly chosen over bilinear interpolation when speed is less of an issue, because it is perceptually smoother (and has less noticeable artifacting)

Video rendering can also use bicubic filtering.

Spline interpolation

Interpolation using piecewise polynomials (as in splines).

Semi-sorted

Cosine interpolation

Cosine interpolation is a simple but fairly effective way to cheat.

That is, you take the suitably oriented half of the cosine curve (0..pi if going down, pi..2pi if going up) and rescale it to span the two data points.

While the curve isn't in any way based on the data, it will look smooth enough, at least in 1D (e.g. it will look continuous, also through each data point, at which it will have a slope of 0).

You can see it as a creative segmented way of creating a new function; it doesn't work for arbitrarily oriented many-segment curves (those that couldn't be expressed as functions).
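A minimal sketch of the cheat: remap the 0..1 position through a half cosine, then lerp with the remapped value:

```python
import math

def cosine_interpolate(y1, y2, t):
    """Interpolate between y1 and y2 for t in [0, 1]. The half-cosine
    remap makes the curve flatten out (slope 0) at both endpoints."""
    t2 = (1 - math.cos(t * math.pi)) / 2  # 0..1, eased at both ends
    return y1 * (1 - t2) + y2 * t2

print(cosine_interpolate(0, 10, 0.0))  # 0.0
print(cosine_interpolate(0, 10, 1.0))  # 10.0
```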

Data fitting

If you do not want to consider every possible segment but mainly their tendency, you can go an entirely different route, e.g. regression, smoothing splines, and such.

Lanczos algorithm/resampling

Mostly applied to 2D raster images, because you can get considerable zoom with less bothersome artifacts than some other methods give. Probably a little better than bicubic.

Lanczos (sometimes misspelled lancoz) is often also known as Lanczos3, which refers to the (pretty sensible) choice of a=3.
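The kernel itself is a windowed sinc; a 1D sketch (2D image resampling applies this separably per axis, which is not shown here):

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos window: sinc(x) * sinc(x/a) for |x| < a, zero outside."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def lanczos_resample(samples, x, a=3):
    """Value at a (possibly fractional) position x in a 1D sample list,
    as a kernel-weighted sum of the 2a nearest samples."""
    i0 = math.floor(x)
    total = 0.0
    for i in range(i0 - a + 1, i0 + a + 1):
        if 0 <= i < len(samples):
            total += samples[i] * lanczos_kernel(x - i, a)
    return total

print(lanczos_resample([0, 0, 1, 0, 0], 2.0))  # 1.0 - exact at sample positions
```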

See also:

Mitchell

Apparently referring to:

• DP Mitchell and AN Netravali (1988), Reconstruction filters in computer graphics, in Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '88)

Easing, (inbe)tweening

Easing, inbetweening, and tweening often refer to interpolation as applied to animation, where you calculate properties for each frame to be shown.

The inbetweened property is usually position, although it can be anything you can create an interpolation function for: rotation, color, shape, or anything else.

It can be easiest to take functions from 'things for interpolation' lists.

Actually, your choice is more general, since there are fewer restrictions on functions used for tweening - the function need only make sense within a single segment and only be defined on the given domain. You can dream up a lot of variations, and there are endless possible functions you can use, though the relatively few informal names we use for easing suggest we perceive relatively few distinct behaviours.

We often want a function that:

• is defined in the domain 0..1 and maps to 0..1
• is monotonic
• has specific nice behaviour in certain ranges (mostly 'near the start' and 'near the end'), e.g. slowing down

There are exceptions to all of these. We might want something that overshoots a little before settling back at the actual end coordinate. We might want something that looks springy, sticky, or otherwise relatively powerpointy. We may only care how it behaves when almost there.

Easing in and easing out refer to doing a certain type of easing at the source and destination ends of a path to be traveled -- often just describing the behaviour in the domain subranges 0..0.5 and 0.5..1.
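A few classic examples of such functions, quadratic versions as a minimal sketch (there are many named variants: cubic, elastic, bounce, and so on):

```python
def ease_in_quad(t):
    """Slow start: accelerates from zero velocity."""
    return t * t

def ease_out_quad(t):
    """Slow end: decelerates to zero velocity."""
    return 1 - (1 - t) * (1 - t)

def ease_in_out_quad(t):
    """Slow start and end, faster through the middle."""
    return 2 * t * t if t < 0.5 else 1 - 2 * (1 - t) * (1 - t)

print(ease_in_out_quad(0.25))  # 0.125 - still getting going
print(ease_in_out_quad(0.5))   # 0.5
print(ease_in_out_quad(0.75))  # 0.875 - already slowing down
```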

Lerp, lerping, is short for Linear interpolation, and refers to a transfer function with a 'how far along' argument (in the 0..1 range).

In various cases this is a linear tween.

This is arguably a convenience thing, for a lot of "I need to rescale this thing" uses.
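A sketch of lerp, plus the rescaling convenience built on top of it:

```python
def lerp(a, b, t):
    """Linear interpolation: t=0 gives a, t=1 gives b."""
    return a + (b - a) * t

def remap(value, in_lo, in_hi, out_lo, out_hi):
    """Rescale value from one range to another via lerp."""
    t = (value - in_lo) / (in_hi - in_lo)  # normalize to 0..1
    return lerp(out_lo, out_hi, t)

print(lerp(10, 20, 0.5))          # 15.0
print(remap(5, 0, 10, 100, 200))  # 150.0
```

Feeding lerp's t argument through an easing function like those above is what turns a linear tween into an eased one.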
