Data modeling, restructuring, analysis, fuzzy cases, learning

This is more of an overview for my own use than for teaching or exercise.


This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)



Concepts & glossary

The curse of dimensionality is, roughly, the idea that every dimension you add to your model requires disproportionately more data for decent training of that model. Since the implied volume increases so fast, you also quickly get a sparsity problem. The growth is essentially exponential, so it becomes a practical limit sooner than you might expect.



Stochastic processes, deterministic processes, random fields

A deterministic process deals only with fully determined cases: no unknowns or random variables.


A stochastic process (a.k.a. random process) allows indeterminacy, typically by working with probability distributions.

A lot of data is stochastically modeled, because you only need partial data and can generally only get partial data.


(Models mixing deterministic and stochastic processes are often called hybrid models)


A random field basically describes the generalization that happens when the index parameter is not necessarily time, or one-dimensional, or real-valued.


See also:


Types of problems

Tasks are often one of:

  • Clustering points out regions or groups of (mutual) similarity, and dissimilarity from other groups.
clustering may not deal well with future data of the same sort unless some care has been taken, so may not be the best choice for a learning/predicting system
  • Vector quantization: Discretely dividing continuous space into various areas/shapes
which itself can be used for decision problems, labeling, etc.
  • Dimensionality reduction: projecting attributes into lower-dimensional data
where the resulting data is (hopefully) comparably predictive/correlative (compared to the original)
The reason is often to eliminate attributes/data that may be irrelevant or too informationally sparse
see also the section on Ordination, Factor Analysis, Multivariate analysis below
  • Feature extraction: discovering (a few nice) attributes from (many) old ones, or just from data in general.
Often has a good amount of overlap with dimensionality reduction
  • others...

Markov property

The Markov property is essentially that there is no memory, only direct response: the response of a process is determined entirely by its current state (and input, if you don't already define that as part of the state).

More formally, "The environment's response (s,r) at time t+1 depends only on the Markov state s and action a at time t" [1]


There are many general concepts that you can make memoryless in this sense, and thereby Markovian:

  • A Markov chain refers to a Markov process with a countable (possibly finite) state space [3]
  • A Markov random field [4]
  • A Markov logic network [5]
  • A Markov Decision Process (MDP) is a decision process that satisfies the Markov property
  • ..etc.
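To make the memoryless idea concrete, a tiny simulation sketch (the states and the transition matrix here are made up for illustration): picking the next state of a Markov chain needs only the current state, never the history.

 import numpy as np

 # hypothetical three-state chain; row = current state, columns = next-state probabilities
 states = ["sunny", "cloudy", "rainy"]
 transition = np.array([[0.7, 0.2, 0.1],
                        [0.3, 0.4, 0.3],
                        [0.2, 0.5, 0.3]])

 rng = np.random.default_rng(0)
 state = 0                                   # start in 'sunny'
 walk = [states[state]]
 for _ in range(10):
     # the next state depends only on the current one -- that is the Markov property
     state = rng.choice(3, p=transition[state])
     walk.append(states[state])
 print(" -> ".join(walk))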


Observability and controllability

Underfitting and overfitting (learners)

Underfitting is when a model is too simple to be good at describing all the patterns in the data.

Underfitted models and learners may still generalize very well, and that can be intentional, e.g. to describe just the most major patterns.

It may be hard to quantify how crude is too crude, though.


Overfitting often means the model is allowed to be so complex that a part of it describes all the patterns there are, meaning the rest ends up describing just noise, or insignificant variance or random errors in the training set.

A little overfitting is not disruptive, but a lot of it often is, distorting or drowning out the parts that are actually modeling the major relationships.

Put another way, overfitting is the (mistaken) assumption that convergence on the training data means convergence on all data.


There are a few useful tests to evaluate overfitting and underfitting.
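One of the simplest, sketched below on made-up data: compare the error on the training data with the error on held-out data. An underfit model does poorly on both; an overfit one does much better on the training data than on the held-out data. Polynomial degree stands in for model complexity here.

 import numpy as np

 rng = np.random.default_rng(0)
 x = np.linspace(0, 1, 40)
 y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

 # hold out part of the data
 train = rng.choice(x.size, size=25, replace=False)
 test = np.setdiff1d(np.arange(x.size), train)

 for degree in (1, 3, 12):                       # too simple, about right, too complex
     coeffs = np.polyfit(x[train], y[train], degree)
     pred = np.polyval(coeffs, x)
     train_mse = np.mean((pred[train] - y[train]) ** 2)
     test_mse = np.mean((pred[test] - y[test]) ** 2)
     print(f"degree {degree:2d}: train MSE {train_mse:.3f}, held-out MSE {test_mse:.3f}")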


supervised versus unsupervised systems (learners)

Supervised usually means the training process is suggested or somehow (dis)approved. Usually it refers to having annotated training data, sometimes to altering it. Examples: classification of documents, basic neural network back-propagation, least-squares fitting, operator cloning.

Unsupervised refers to processes that work without intervention. For example, self-organizing maps, or clustering documents based on similarity, need no annotation.

Semi/lightly supervised usually means there is an iterative process which needs only minimal human intervention, be it to deal with a few unclear cases, for information that may be useful, or such.


Inductive versus deductive systems (learners)

Inductive refers to training purely from data.

Deductive refers to also having a theory about the domain.


Inductive learning can be approached as a search in a hypothesis space.

Model-based versus model-free systems (learners)

Statistical modeling, data fusion (and related estimation)

Data fusion means merging data from various sources, e.g. various sensors. This often implies some sort of modelling.


Regression analysis

Regression glossary:

  • linear regression models a linear predictor function
typically a least squares estimator
  • simple regression indicates a single independent/explanatory variable

Linear regression
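A minimal least-squares sketch on made-up data, fitting a linear predictor function with NumPy:

 import numpy as np

 # made-up data: y is roughly 2*x + 1 plus noise
 rng = np.random.default_rng(0)
 x = rng.uniform(0, 10, size=50)
 y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=50)

 # design matrix with a column of ones for the intercept
 A = np.column_stack([x, np.ones_like(x)])
 (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
 print(slope, intercept)    # should come out near 2 and 1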

Logistic regression

Mean Squared Error (metric)

Bayes estimator

Kalman filter

Data clustering

Clustering groups a few related sub-problems, including

  • cluster formation - organizing items into clusters
  • cluster segmentation - dealing with boundaries (often using cluster centers)
  • labeling - assigning meaningful names, for the (relatively few) cases where this makes sense to estimate
  • deciding how many groups to have in the result
  • evaluation of a solution (possibly feeding back into the previous point)


Formally, the simplest clustering can be described as:

  • you have a set of n data objects, call it D = { d1, ..., dn }
  • in its simplest shape, a clustering result is a disjoint partitioning of D
  • which makes clustering itself a function, mapping each datapoint to a cluster number/label that indicates membership of said cluster


The input data is often either

  • a set of points in a many-dimensional space, usually a vector space, plus a metric to calculate distance between them, OR
  • a set of already-chosen distances
preferably complete, but depending on how it's made it might be sparse, and it may be easier to do some fuzzy statistical estimation than to ask people to complete it


The latter may well be in the form of a distance matrix / similarity matrix.

A bunch of methods that are given datapoints-plus-metric convert to that internally, but starting from data-plus-metric is often a little more flexible up front - both for data massaging, and sometimes for implementation reasons.


That said, the choice of metric takes care, because there are many ways to accidentally put some bias into the metric.


Many methods look at element-to-element similarities/dissimilarities, while a few choose to be more involved with the data those came from (e.g. some maximum-likelihood-based methods common in bioinformatics).


Variations

Hard, soft, and fuzzy clustering

Hard clustering means each item should be assigned to a single group. This is essentially a partitioning.


Soft clustering means something can belong to more than one group.

Regularly used for data known to be too complex to be reduced cleanly with hard clustering, such as when there are close-by, overlapping, or ambiguous groups.

Soft clustering is generally understood as boolean soft clustering: something can belong to one or more clusters, but there are no degrees.


Fuzzy clustering is soft clustering plus degrees of membership.

This means intermediate results are effectively still moderately high-dimensional data, so you often still have to make a decision about exclusion, thresholds, or such (preferably within the algorithm, to have all information available).

If you don't make such a decision, the result more resembles dimensionality reduction.

Agglomerative versus divisive

Agglomerative clustering usually starts with each item in its own cluster and merges them where it seems a good idea.

Divisive clustering usually starts with every item in a single cluster and iteratively splits them as it sees fit.

The difference seems to lie largely in what side they err on in unclear cases.(verify)


Hierarchical clustering

Hierarchical clustering creates a tree of relations, often by a process where we keep track of how things join, rather than just assimilating things into a larger blob.


Hierarchical clusterers can be flexible, in that their results can be partitioned into an arbitrary number of groups (by choosing the depth at which there are that number of groups).

Depending on the data, these results may also be useful as an approximation of fuzzy clustering. They may also be a little more helpful in cluster stability tests.


Some algorithms record and retain the distances of (/stress at) each such joint. These can be interesting to visualize (think dendrograms and such), and effectively allow the amount-of-clusters choice to be made later (think of a threshold in a dendrogram).


http://gepas.bioinfo.cipf.es/cgi-bin/docs/clusterhelp


Notes on....

Group number choice

Some algorithms try to decide on a suitable number of target groups, but many require you to choose an exact number before they get started.

This number is difficult to decide since there usually is no well-defined, implicit, calculable best choice.


This is a problem particularly in hard clustering, because any decision of group membership is very final. The membership of bordercases may not be stable under even the slightest amount of (sample) noise.


Things you can do include:

  • use evaluation to measure the fitness of a solution (or sub-solutions while still clustering), based on some metric of cluster quality (see the sketch after this list)

Note that such a metric in itself is only a relative value in a distribution you don't know - you'll often have to calculate the fitness for many solutions to get a still-vague idea of fitness.

  • In the case of hard clustering, you can intentionally add some noise and see how much the membership of each item varies - and, say, report that as the confidence we have in a choice.
  • use some type of cross-validation
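For example, a minimal sketch of the evaluation route, assuming scikit-learn is available and using the silhouette coefficient (see Evaluation below) as the fitness measure; the data is made up:

 import numpy as np
 from sklearn.cluster import KMeans
 from sklearn.metrics import silhouette_score

 rng = np.random.default_rng(0)
 # made-up data: three blobs
 X = np.vstack([rng.normal(loc=c, scale=0.4, size=(50, 2))
                for c in ((0, 0), (3, 0), (0, 3))])

 for k in range(2, 7):
     labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
     print(k, round(silhouette_score(X, labels), 3))
 # the k with the highest score is a reasonable - not infallible - choice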


Inter-cluster and intra-cluster comparisons; susceptibilities

Depending on the algorithm, you often want to be able to compare

  • items to clusters (agglomerative and divisive decisions)
  • clusters to clusters (e.g. in hierarchical decisions)
  • items to items (e.g. for centroid/medoid decisions)


Cluster-to-cluster distances are most interesting to hierarchical clustering, and can be calculated in a number of ways (usually hardwired into the algorithm), including:

  • Single-link, a.k.a. minimum linkage or nearest linkage
the most similar combination (lowest distance) of possible comparisons
more susceptible to over-chaining than most other methods
  • Complete-link , a.k.a. max/farthest linkage
uses the least similar (largest distance) combination of possible comparisons
often gives non-chained, more equally divided clusters than single-link
outliers may have disproportional influence
  • Average-link (a.k.a. group-average, a.k.a. Group-average (agglomerative) clustering (GAAC)) - average of distances between all inter-cluster pairs
less sensitive to outliers than complete-link, less sensitive to inversion than centroid approaches.
  • Centroid approaches use a calculated average for comparison to a cluster
quite susceptible to inversions (verify)
  • Medoid approaches try to use a representative item for comparison to a cluster
somewhat susceptible to inversions (verify)
  • Density-based methods care more about local density of items and less directly about the exact distances involved
  • Ward's method
based on Ward criterion, a.k.a. the Ward minimum variance criterion



Potential problems:

  • outliers
    • including a single outlier may drastically change comparisons to that group.
    • The timing of their inclusion can have significant effects on the result.
  • the chaining effect refers to algorithms doing a chain/string of assignments to a group. The concept is clearest in single-link clustering.
  • Inversions (sometimes 'reversal' or 'non-monotonicity') - describes when similarity values do not decrease monotonically in a series of iterations
    • easily happens when a process makes decisions based on centers that move in the process of clustering (such as in many centroid-style processes) particularly when combined with cases where there is no clear clustering solution.


See also:

TODO: read

on convergence

Convergence is a nontrivial check in many algorithms.

You could check whether the group assignments have not changed, but this is sensitive to oscillations, resulting in a premature report of convergence and/or a failure to converge (depending somewhat on logic and data size).

A simple threshold is arbitrary since the error values often depend on the scale of the data values (which is not very trivial to correct for(verify)).


This is occasionally solved by error minimization criteria, for example minimization-of-the-sum-of-squares.

There are some details to this. For example, when reallocating a point between clusters, various methods consider only the error decrease in the target cluster - while the solution's total error may increase. It usually still converges, but the total error decreases with a little more oscillation, which a "no significant improvement in the last step" terminating criterion may be sensitive to (though arguably it's always more robust to check whether the error decrease is roughly asymptotic to the minimum you presume it'll get to).

The idea resembles Expectation Maximization (EM) methods in that it tries to maximize the probability of the clusters being correct by minimizing the energy/error.


Purely random initial positioning may cause the local minimum problem. Smartly seeding the initial centroids helps and need not be too computationally expensive - and in fact helps convergence; see e.g. k-means++.

Alternatively, you could run many versions of the analysis, each with random initial placement, and see whether (and/or to which degree) the results are stable, but this can be computationally expensive.

Robustness in hard clustering

Hard clusterers in particular are often not robust against even minor variations in the data. That is, separate data sets that are highly correlated may lead to significantly different results; areas in which membership is borderline flip-flop under the tiniest (sample) noise.


You can evaluate a solution for stress (or correlate the distances it implies to the original data, e.g. in hierarchical clustering), though in itself this is only a relative figure. It mainly gains meaning next to the same measure for other clusterings of the same data, meaning that you *can* roughly compare different solutions for expression of the original data, but only in a roughly converging way.


One trick is to provoke the problem and test how varied the results are over mild variations of the data. You can for example repeat the clustering some number of times with some added noise, and record how often things change membership.

You can repeat the clustering omitting random pieces of data to lessen the effect of outliers - pieces of data that do not agree with the rest.

You can even aggregate the results from these runs and combine them into a sort of fuzzy cluster result that can show you instabilities, and/or converge on a clusters amount choice.
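A rough sketch of that idea, assuming scikit-learn: recluster under mild added noise and compare the labelings with the adjusted Rand index, where 1.0 means identical groupings. The data here is made up.

 import numpy as np
 from sklearn.cluster import KMeans
 from sklearn.metrics import adjusted_rand_score

 rng = np.random.default_rng(0)
 X = np.vstack([rng.normal(loc=c, scale=0.5, size=(60, 2))
                for c in ((0, 0), (4, 0))])

 baseline = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
 agreements = []
 for i in range(20):
     noisy = X + rng.normal(scale=0.1, size=X.shape)      # mild perturbation
     labels = KMeans(n_clusters=2, n_init=10, random_state=i).fit_predict(noisy)
     agreements.append(adjusted_rand_score(baseline, labels))
 print("mean agreement with baseline:", np.mean(agreements))   # near 1.0 suggests stability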

Evaluation
Silhouette coefficients
Davies-Bouldin index
Gap statistic
Unsorted

Implementation notes

k-means

The k-means problem is finding a cluster labelling for a given amount of clusters (k) with minimal error, where the error function is based on the within-group sum of squares.

(For completeness, that means for all elements in a group, calculate the square of the euclidean distance to the centroid, and sum up all these squares, which gives per-cluster error values. Various convergence checks will want to know the sum of these errors)


Most implementations are iterative and look something like:

  • Position k cluster centroids (at random, or sometimes slightly more cleverly)
  • For each element, assign to the nearest centroid
  • Recalculate (affected) centroid means (and often the error/energy at the same time)
  • Check whether the moved centroids change the element assignments.
If so, iterate
If no change, we have converged and can stop
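A minimal NumPy sketch of those steps (naive random seeding, no restarts), not a production implementation:

 import numpy as np

 def kmeans(X, k, iters=100, seed=0):
     X = np.asarray(X, dtype=float)
     rng = np.random.default_rng(seed)
     # 1. position k centroids: here simply k distinct random data points
     centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
     assignment = None
     for _ in range(iters):
         # 2. assign each element to its nearest centroid
         dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
         new_assignment = dists.argmin(axis=1)
         # 4. if no assignment changed, we have converged
         if assignment is not None and np.array_equal(new_assignment, assignment):
             break
         assignment = new_assignment
         # 3. recalculate centroid means (skipping clusters that lost all members)
         for j in range(k):
             members = X[assignment == j]
             if len(members):
                 centroids[j] = members.mean(axis=0)
     # within-group sum of squares, the error many convergence checks look at
     error = sum(((X[assignment == j] - centroids[j]) ** 2).sum() for j in range(k))
     return assignment, centroids, error

Called as, say, labels, centers, error = kmeans(points, k=3); running it with several seeds and keeping the lowest-error result is a cheap guard against poor initial placement.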


Of iterative clustering methods, k-means is the simplest and many others can be said to be based on it.


K-means gives better results if the value for k is a good choice, representative of the data (it's not unusual to try various k and test them).

The common 'nearest cluster' criterion will avoid attraction of multiple clusters, and the whole will converge to a decent solution for k groups.


Limitations:

  • results are sensitive to initial placement, and it is easy to get stuck in a local minimum.
It is not unusual to run the clustering various times with different starting clusters and see how stable the clustering is.
  • If k is not representative of the structure in the data the solution may not be satisfying at all. This is partly caused by, and partly independent of, the fact that there may be various possible stable clusterings.
  • the simple distance metric means the shape for inclusion is always a circle (a hypersphere in higher dimensions)


Average-case runtime is decent because it's a fairly simple algorithm. Worst-case runtime, for fairly pathological datasets, is quite high. There are faster approximations of k-means that you may wish to consider.

You can tweak k-means in various ways. For example, you can assign weights (based on frequency, importance, etc.) to elements to affect the centroid mean recalculation.


See also:


Variations:

  • ISODATA (Iterative Self-Organising Data Analysis Technique Algorithm) builds on k-means and tries to be smart in initial positions and in variations of k(verify).
  • H-means is a variation on k-means that recalculates the centroid only after a complete iteration over all the items, not after each reassignment.
Seen one way, it checks for error decrease less often, which makes it a smidge more sensitive to local minima and perhaps doesn't converge as nicely.
Although the difference in practice tends to be slight, k-means tends to be the slightly safer (and much more common) choice, even if the order it handles elements in is a different kind of bias that h-means avoids.
Canopy clustering

https://en.wikipedia.org/wiki/Canopy_clustering_algorithm

Bisecting k-means


hard c-means


UPGMA, WPGMA

UPGMA = Unweighted Pair Group Method using Arithmetic averaging

WPGMA = Weighted Pair Group Method using Arithmetic averaging

(These specific names/abbreviations come from Sneath and Sokal 1973)


UPGMA assigns equal weight to all distances:

D((u,v),w) = (nu*D(u,w) + nv*D(v,w)) / (nu + nv)

WPGMA uses:

D((u,v),w) = (D(u,w) + D(v,w)) / 2


In the unweighted variant, every leaf weighs in equally (so a larger cluster contributes proportionally more); in the weighted variant, the two clusters being combined weigh in equally, regardless of how many leaves each contains.


Both are bottom-up combiners working from a distance matrix, combining whichever leaf/cluster pair has the minimal distance, then recalculating the distance matrix.


It's not too hard to argue this terminology is a little arbitrary.
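For reference, a small sketch assuming SciPy, whose hierarchical clustering offers these as linkage methods ('average' corresponds to UPGMA, 'weighted' to WPGMA); the data is made up.

 import numpy as np
 from scipy.cluster.hierarchy import linkage, fcluster
 from scipy.spatial.distance import pdist

 points = np.random.default_rng(0).normal(size=(20, 2))
 dists = pdist(points)                            # condensed distance matrix

 upgma_tree = linkage(dists, method='average')    # UPGMA
 wpgma_tree = linkage(dists, method='weighted')   # WPGMA

 # cut the UPGMA tree into, say, 3 groups
 labels = fcluster(upgma_tree, t=3, criterion='maxclust')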

UPGMC, WPGMC

Unweighted Pair Group Method using Centroids, and Weighted Pair Group Method using Centroids

(the specific names/abbreviations come from Sneath and Sokal 1973)

Fuzzy c-means (FCM)
  • Type: fuzzy clustering (not really partitioning anymore)

Method: like k-means, but recenters each centroid using all data points, weighted by their degree of membership

http://www.elet.polimi.it/upload/matteucc/Clustering/tutorial_html/cmeans.html

http://www.scholarpedia.org/article/Fuzzy_C-Means_Cluster_Analysis
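A small NumPy sketch of that update loop (my own toy code using the standard FCM equations with fuzzifier m, not any particular library's implementation):

 import numpy as np

 def fuzzy_c_means(X, c, m=2.0, iters=100, eps=1e-6, seed=0):
     """X: (n, d) data, c: number of clusters, m: fuzzifier (>1)."""
     X = np.asarray(X, dtype=float)
     rng = np.random.default_rng(seed)
     U = rng.random((len(X), c))
     U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
     for _ in range(iters):
         Um = U ** m
         # centroids are membership-weighted means of all points
         centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
         # distances from every point to every centroid
         d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
         # membership update: closer centroids get higher (fuzzy) membership
         U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
         U_new /= U_new.sum(axis=1, keepdims=True)
         if np.abs(U_new - U).max() < eps:
             return centroids, U_new
         U = U_new
     return centroids, U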

Fuzzy k-means

http://www.know-center.tugraz.at/forschung/wissenserschliessung/downloads_demos/fuzzy_k_means_and_k_means_clustering_demo

Shell clustering

Basic fuzzy c-means includes points by a radius - a sphere.

There are various other options, including:

  • fuzzy c-quadric shells algorithm (FCQS) detects ellipsoids
  • fuzzy c-varieties algorithm (FCV) detects infinite lines (linear manifolds) in 2D
  • adaptive fuzzy c-varieties algorithm (AFC): detects line segments in 2D data
  • fuzzy c-shells algorithm (FCS) detects circles
  • fuzzy c-spherical shells algorithm (FCSS) detects circles
  • fuzzy c-rings algorithm (FCR) detects circles
  • fuzzy c-rectangular shells algorithm (FCRS) detects rectangles
  • Gath-Geva algorithm (GG) detects ellipsoids
  • Gustafson-Kessel algorithm (GK) detects ellipsoids of roughly the same size
PAM

Partitional, medoid-based

CLARA

Partitional, medoid-based


AGNES

Hierarchical

AGglomerative NESting

DIANA

Hierarchical

DIvisive ANAlysis

Buckshot

Hybrid

Birch

Hybrid

BIRCH (balanced iterative reducing and clustering using hierarchies)

https://en.wikipedia.org/wiki/BIRCH

Cure

Hybrid

Clustering Using REpresentatives

https://en.wikipedia.org/wiki/CURE_algorithm

Rock

Hybrid


"Rock: A robust clustering algorithm for categorical attributes"

Chameleon

Hybrid, Hierarchical


"CHAMELEON: A Hierarchical Clustering Algorithm Using Dynamic Modeling"

DBSCAN

Density-based.

The assumption that real clusters will always be a dense cloud of points makes it easier to reject lone points as noise/outliers (even if they are relatively close).

It can also deal decently with closeby nonlinear clusters, if separated cleanly.


http://en.wikipedia.org/wiki/DBSCAN

OPTICS

Similar to DBSCAN

https://en.wikipedia.org/wiki/OPTICS_algorithm

FLAME

Fuzzy, density-based

http://en.wikipedia.org/wiki/FLAME_Clustering


Clustering by Committee (CBC)

Hybrid


Based on responsive elements (a committee) voting on specific outcomes.


See also:


Principal Direction Divisive Partitioning (PDDP)
Information Bottleneck

See also:


Agglomerative Information Bottleneck

See also:

Expectation Maximization Clustering

Related fields

See also

Dimensionality reduction

As a wide concept

Dimensionality reduction can be seen in a very wide sense, of creating a simpler variant of the data that focuses on the more interesting and removes the less interesting.


Note that in such a wide sense a lot of learning is useful as dimensionality reduction, just because the output is a useful and smaller thing, be it clustering, neural networks, whatever.

But in most cases it's also specific output, minimal output for that purpose.


Dimensionality reduction in practice is much more mellow version of that, reducing data to a more manageable form. It historically often referred to things like factor analysis and multivariate analysis, i.e. separating out what seem to be structural patterns, but not doing much with them yet, often acknowledging that we usually still have entangled surface effects in the observations given, and our direct output is probably still a poor view of the actual dependent variables that created the observations we look at. (Whether that's an issue or not depends largely on what it's being used for)


Reducing the amount of dimensions in highly dimensional data is often done to alleviate the curse of dimensionality.[7]

Said curse comes mainly from the fact that for every dimension you add, the implied volume increases quicker and quicker.

So anything that wants to be exhaustive is now in trouble. Statistics has an exponentially harder job of proving significance (at least without exponentially more data), machine learning needs that much more data to train well, optimization needs to consider all combinations of dimensions as variables, etc.


Distance metrics get funnier, in that the influence of any one dimension becomes smaller, so differences in the metric become smaller - and more error-prone in things like clustering. (verify)

It's not all bad, but certainly a thing to consider.


Ordination, Factor Analysis, Multivariate analysis


(See also Statistics#Multivariate_statistics)


Ordination widely means (re)ordering objects so that similar objects are near each other, and dissimilar objects are further away.


Ordination methods are often a step in something else, e.g. good data massage before clustering.

It can become more relevant to higher-dimensional data (lots of variables), in that ordination is something you watch (so sort of an implicit side effect) in the process of dimensionality reduction methods.

Of course, different methods have their own goals and focus, different requirements, and their own conventions along the way.

Typical methods include PCA, SVD, and the MDS family discussed below.




Factor Analysis, Principal Component Analysis (PCA), and variants
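A minimal sketch of plain PCA via SVD on centered, made-up data (ignoring the scaling and rotation choices that distinguish the variants):

 import numpy as np

 rng = np.random.default_rng(0)
 X = rng.normal(size=(200, 5))              # made-up observations x variables
 Xc = X - X.mean(axis=0)                    # center each variable

 # SVD of the centered data; rows of Vt are the principal directions
 U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

 explained_variance = s ** 2 / (len(X) - 1) # variance along each component
 scores = Xc @ Vt[:2].T                     # data projected onto the first two components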

Correspondence Analysis (CA)

Conceptually similar to PCA, but uses a chi-square distance, to be more applicable to nominal data (where PCA applies to continuous data).


See also:

Multi-dimensional scaling (MDS)

Refers to a group of similar analyses, with varying properties and methods, but which all focus on ordination

Commonly mentioned types of MDS include:

  • Classical multidimensional scaling, a.k.a. Torgerson Scaling, Torgerson–Gower scaling.
  • Metric multidimensional scaling
  • Non-metric multidimensional scaling


It is regularly used to provide a proximity visualization, so the target dimensions may be two or three simply because this is easier to plot.


Depending on how you count, there are somewhere between three and a dozen different MDS algorithms.

Some MDS methods closely resemble things like PCA, SVD, and others in how they change the data. Some more generic MDS variants are more on the descriptive side, so can be solved with PCA, SVD, and such.


As ordinations, most try not to change the relative distances, but do change the coordinate system in the process.


Input

Input to many methods is a similarity matrix - a square symmetric matrix often based on a similarity metric. Note some similar methods may be based on dissimilarity instead.


At a very pragmatic level, you may get

  • a ready-made similarity matrix
  • items plus a method to compare them
  • a table of items versus features (such as coordinates, preferences), with a method to compare them
  • perceived similarities between each item (e.g in market research)

There is little difference, except in some assumptions, like whether the feature values are Euclidean, independent, orthogonal, and whatnot.


Result evaluation

MDS solutions tend to be fairly optimal, in that for the amount of target dimensions you will get the solution's distances correlating with the original data's distances as well as they can.

There are still a number of things that help or hinder accuracy, primarily:

  • the choice of input values
  • the choice of target dimensions (since too few lead to ill placement choices)
  • (to some degree) the type of MDS
  • methods that have cluster-like implementations may be sensitive to initial state


You probably want to see how good a solution is.

The simplest method is probably calculating the correlation coefficient between input (dis)similarity and resulting data (dis)similarity, to show how much the MDS result fits the variation in the original. By rule of thumb, values below 0.6 mean a bad solution, and values above 0.8 or 0.9 are pretty good solutions (depending on the accuracy you want, but also varying with the type of MDS).

Other methods include Kruskal's stress, split-data tests, data stability tests (i.e. eliminating one item and seeing whether the result is similar), and test-retest reliability [8].


Algorithms

Note that in general, correlation can be complete (e.g. with k points in k-1 dimensions, or when distances between any two items are equal whichever way they are combined, e.g. if distances are those in Euclidean space), but usually is not. The output's choice of axes is generally not particularly meaningful.


The most common example is principal coordinates analysis (PCO, PCoA), also known as Classical multidimensional scaling, Torgerson Scaling and Torgerson-Gower scaling, which is a single calculation step so does not require iteration or convergence testing.

See also (Torgerson, 1952), (Torgerson, 1958), (Gower, 1966), CMDSCALE in R

(Note: PCO (Principal Coordinate analysis) should not be confused with PCA (Principal Component Analysis), as it is not the same method, although apparently equivalent when the PCA kernel function is isotropic, e.g. when working on Euclidean coordinates/distance)(verify)


The first dimension in the result should capture the most variability (first principal coordinate), the second the second most (second principal coordinate), etc. The eigenvectors of the input distance matrix yield the principal coordinates, and the eigenvalues give the proportion of variance accounted for. As such, eigenvalue decomposition (or the more general singular value decomposition) can be used for this MDS method. (The degree to which distances are violated can be estimated by how many small or negative eigenvalues there are. If there are none (...up to a given amount of dimensions...) then the analysis is probably reasonable.) You can use the eigenvalues to calculate how much of the total variance is accounted for(verify) - and you have the option of choosing afterwards how many dimensions you want to use (which you can't do in non-metric MDS).
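A minimal sketch of that calculation (double-centering the squared distances, then an eigendecomposition), assuming a symmetric distance matrix as input:

 import numpy as np

 def classical_mds(D, dims=2):
     """D: symmetric (n, n) distance matrix; returns (n, dims) coordinates."""
     D = np.asarray(D, dtype=float)
     n = D.shape[0]
     J = np.eye(n) - np.ones((n, n)) / n              # centering matrix
     B = -0.5 * J @ (D ** 2) @ J                      # double-centered squared distances
     eigvals, eigvecs = np.linalg.eigh(B)             # ascending order
     idx = np.argsort(eigvals)[::-1][:dims]           # largest eigenvalues first
     scale = np.sqrt(np.clip(eigvals[idx], 0, None))  # negative eigenvalues mean violated distances
     return eigvecs[:, idx] * scale                   # principal coordinates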




Metric (multidimensional) scaling: a class of MDS that assumes dissimilarities are distances (and thereby also that they are symmetric). May be used to indicate PCO, but is often meant to indicate a class based on something of a generalization, in that the stress function is more adjustable. The optimization method used is often Stress Majorization (see also SMACOF [9] (Scaling by Majorizing A COmplicated Function)).




On iterative MDS methods: they minimize stress, and have no unique solution (so the starting position may matter).

  • stress-based MDS methods
  • may be little more than non-parametric version of PCO(verify)



Non-metric multidimensional scaling can be a broad name, but generally find a (non-parametric) monotonic relationship between [the dissimilarities in the item-item matrix and the Euclidean distance between items] and [the location of each item in the low-dimensional space].

The optimization method is usually something like isotonic regression (which is due to monotonicity constraints). Methods regularly have both metric and non-metric parts, and non-metric scaling in the broad sense can describe quite varying methods (see e.g. Sammon's NLM).

Note that the monotonic detail means that ranking of items ends up as more important than the (dis)similarities. This may be a more appropriate way of dealing with certain data, such as psychometric data, e.g. ratings of different items on an arbitrary scale.

Non-metric MDS may give somewhat lower-stress solutions than metric MDS in the same amount of dimensions.(verify)

Also, certain implementations may deal better with non-normality or varying error distributions (often by not making those assumptions).


Examples:

  • Kruskal's non-metric MDS(verify)
  • Shepard-Kruskal Scaling(verify) (and (verify) whether that isn't the same thing as the last)
  • Sammon non-linear mapping [10]


Variations of algorithms can be described as:

  • Replicated MDS: evaluates multiple matrices simultaneously
  • Three-way scaling
  • Multidimensional Unfolding
  • Restricted MDS




Multidimensional scaling (MDS) [11]

  • classical MDS (quite similar to PCA under some conditions, apparently when you use euclidean distance?)
    • (Difference matrix -> n-dimensional coordinates (vectors))
  • Kruskal's non-metric MDS (R: isoMDS)
  • Sammon's non-linear mapping (R: sammon)


See also
  • WS Torgerson (1958) Theory and Methods of Scaling
  • JB Kruskal, and M Wish (1978) Multidimensional Scaling
  • I Borg and P Groenen (2005) Modern Multidimensional Scaling: theory and applications
  • TF Cox and MAA Cox (1994) Multidimensional Scaling


Generalized MDS (GMDS)

A generalization of metric MDS where the target domain is non-Euclidean.

See also:


Sammon’s (non-linear) mapping

Singular value decomposition (SVD)

See also:

Manifold learning

An approach to non-linear dimensionality reduction.

Can be thought of as an extension of PCA-style techniques that is sensitive to non-linear structure in data.


Expectation Maximisation (EM)


A broadly applicable idea/method that iteratively uses the Maximum Likelihood (ML) idea and its estimation (MLE).


Fuzzy coding, decisions, learning



On bias

Methods / algorithms / searchers

Decision trees

ID3
Pruning (ID3, others)
Rule Post-pruning; C4.5

Instance-based learning

Bayesian learning


Bayesian learning is a general probabilistic approach, most commonly used as a probabilistic classifier.

Mathematically it is based on any observable attribute you can think of, and the math requires Bayesian inversion (see below).

Many basic implementations also use the Naive Bayes assumption (see below), because it saves a lot of computation time, and seems to work almost as well in most cases.

Bayesian classifier

Bayes Optimal Classifier
Naive Bayes Classifier
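A toy sketch of the naive variant with Gaussian per-feature likelihoods (my own illustration, not a particular library's API): Bayes inversion gives log P(class|x) proportional to log P(class) + log P(x|class), and the naive assumption factors P(x|class) per feature.

 import numpy as np

 class TinyGaussianNB:
     """Toy naive Bayes: class priors plus per-feature Gaussian likelihoods."""
     def fit(self, X, y):
         X, y = np.asarray(X, dtype=float), np.asarray(y)
         self.classes = np.unique(y)
         self.priors = np.array([(y == c).mean() for c in self.classes])
         self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
         self.vars = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
         return self

     def predict(self, X):
         X = np.asarray(X, dtype=float)
         # log P(x|c) under the naive (per-feature independence) assumption
         log_lik = -0.5 * (np.log(2 * np.pi * self.vars)[None, :, :]
                           + (X[:, None, :] - self.means[None, :, :]) ** 2
                           / self.vars[None, :, :]).sum(axis=2)
         log_post = np.log(self.priors)[None, :] + log_lik   # Bayes inversion (unnormalized)
         return self.classes[np.argmax(log_post, axis=1)]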

Bayesian (Belief) Network

Some classifiers

  • Parzen classifier
  • Backpropagation classifier


Evaluating classifiers

Kernel(-related) methods, the Kernel Trick

Support Vector Machines

Markov Models, Hidden Markov Models


Something like (the simplest possible) Bayesian belief networks, but geared to streams of data. Can be seen as a state machine noting the likelihood of each next step based on a number of preceding steps.

The hidden variant only shows its output (and hides the model that produces it), the non-hidden one shows all of its state.

Simple ones are first-order


Optimization theory, control theory

Glossary

Some controllers

In terms of the (near) future:

  • greedy control doesn't really look ahead.
  • PID can be tuned for some basic tendencies
  • MPC tries to minimize mistakes in predicted future


  • For example, take an HVAC system that actively heats but passively cools. This effectively means you should be very careful about overshooting. You would make the system sluggish -- which also reduces performance, because it lengthens the time of effects and settling


Non-linear:

  • HVAC


Greedy controllers

Doesn't look ahead, just minimizes for the current step.


For example basic proportional adjustment.

Tends not to be stable.

Can be stable enough for certain cases, in particular very slow systems where slow control is fine, and accuracy not so important.


For example, water boilers have such a large volume that even a bang-bang controller (turn the heater element fully on or off according to a temperature threshold) will keep the water within a few degrees of that threshold, simply because the water's heat capacity is large in relation to the heating element you'd probably use.


But in a wider sense, e.g. that same boiler with a small volume, or a powerful heater, such control causes unproductive feedback, e.g. oscillations when actuation is about as fast as or faster than measurement.


Hysteresis (behaviour)

Hysteresis control (type)

Map-based controller (type)

PID controller


PID is a fairly generic control-loop system, still widely used in industrial control systems.

It is useful in systems that have to deal with delays between and/or in actuation and sensing, where they can typically be tuned to work better than greedy controllers (and also be tuned to work worse), because unlike greedy, you can try to tune out overshoots as well as oscillations.

PID is computationally very cheap (a few adds and multiplies per step), compared to some other cleverer methods.


Yet:

  • There are no simple guarantees of optimality or stability,
you have to tune them,
and learn how to tune them.
  • tuning is complex in that it depends on
how fast the actuation works
how fast you sample
how fast the system changes/settles
  • doesn't deal well with long time delays
  • derivative component is sensitive to noise, so filtering may be a good idea
  • has trouble controlling complex systems
more complex systems should probably look to MPC or similar.
  • linear at heart (assumes measurement and actuation are relatively linear)
so doesn't perform so well in non-linear systems
  • symmetric at heart, so not necessarily well-suited to non-symmetric actuation
consider e.g. an HVAC system -- which would oscillate around its target by alternately heating and cooling.
It is much more power efficient to do one passively, e.g. active heating and passive cooling (if it's cold outside), or active cooling and passive heating (if it's warmer outside)
means it's easier to overshoot, and more likely to stick off-setpoint on the passive side, so on average be on one side
You could make the system sluggish -- in this case it reduces the speed at which it reaches the setpoint, but that is probably acceptable to you.
in other words: sluggish system and/or a bias to one side


Some definition

The idea is to adjust the control based on some function of the error, and a Proportional–Integral–Derivative (PID) controller combines the three components it names, each tweaked with their own weight (gain).

The very short version is that

  • P adjusts according to the current (proportional) error
  • I adjusts according to the integrated (accumulated) error
  • D adjusts according to the derivative (rate of change) of the error


It can be summarized as:

output(t) = P·e(t) + I·∫ e(τ) dτ + D·(d/dt)e(t)

where

  • e(t) is the error
  • P, I, and D are scalar weights controlling how much effect each component has


Some intuition
So how do you tune it?

MPC (Model Predictive Control)

FLC (Fuzzy Logic Control)

Notes on

Gradient-based learning

See also

Log-Linear and Maximum Entropy

Reinforcement learning (RL)

See also:

  • LP Kaelbing, ML Littman, AW Moore. (1996) Reinforcement learning: A survey.
  • RS Sutton, AG Barto (1998) Reinforcement Learning: An Introduction


Classifiers

Combining classifiers

Connectionism

Connectionism is the idea that interesting things are emergent effects from interconnected networks of simple units.

The term is used in cognitive psychology, cognitive science, neuroscience, and philosophy of mind, to model mental and behavioral phenomena -- but most recognizably in artificial intelligence in the form of neural networks.

Another angle is computationalism, which works more on higher-level expression, and tends to think connectionism is no more than lower-level association. The two are not necessarily incompatible, but certainly distinct approaches.


Neural networks


Terminology old and new

ANN history

Perceptrons

Feedforward nets

RNNs

CNNs

Deep?

Unsorted

Evolutionary computing

See also


Semi-sorted

Unsorted