Data modeling, restructuring, and massaging

This is more for overview of my own than for teaching or exercise.
=Putting numbers to words=

====Computers and people and numbers====
<!--
Where people are good at words and bad at numbers, computers are good at numbers and bad at words.
So it makes sense to express words ''as'' numbers?

That does go a moderate way, but it really matters ''how''.

If you just make a list enumerating, say, 1=the, 2=banana, 3=is, ..., 150343=ubiquitous
: it makes data ''smaller'' to work with, but equally clunky to do anything with other than counting {{comment|(...which has its uses - e.g. gensim's collocation analysis does such an enumeration to keep its intermediate calculations all-numeric)}}
: but it does absolutely nothing to make words ''comparable''

In a lot of cases it would be really nice if we could encode ''some'' amount of meaning.
But since that asks multiple difficult questions at once,
like 'how would you want to encode different kinds of things in a way that is useful later',
we are initially happy with the ability to say that 'is' is more like 'be' than 'banana',
or at least with a metric of similarity, of what kind of role a word plays.

There are many approaches to this you can think of,
and almost all you can think of have been tried, to moderate success.
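To make that concrete, a minimal Python sketch of such an enumeration (the sentence and the ids are made up):

<syntaxhighlight lang="python">
# Ids make the data compact and countable,
# but say nothing about similarity.
tokens = "the banana is ubiquitous the banana is yellow".split()

word_to_id = {}
for tok in tokens:
    word_to_id.setdefault(tok, len(word_to_id) + 1)

ids = [word_to_id[tok] for tok in tokens]
print(word_to_id)   # {'the': 1, 'banana': 2, 'is': 3, ...}
print(ids)          # [1, 2, 3, 4, 1, 2, 3, 5]

# Counting works fine on ids...
from collections import Counter
print(Counter(ids))
# ...but nothing about 2 ('banana') and 5 ('yellow') says they are related.
</syntaxhighlight>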
It can help to add a knowledge base of sorts.
: Maybe start with an expert-of-sorts making a list of all terms you know, and recognizing them in a text.
: Maybe build that out into an ontology, going as far as modeling that swimming is a type of locomotion, and so is walking.
It can help to collect statistics about what kinds of sentence patterns there are,
and what kinds of words are usually where.

It can help to detect inflections: if a word ends in -ness then it's probably a [[noun]] and maybe [[uncountable]],
and you can tag that even if you don't know the root word/morpheme it's on.

Each of these may take a large amount of work, and each can do a single task with good accuracy,
though it may only be good at what it does because it doesn't even try other things.
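As a sketch of that inflection idea, a toy suffix heuristic - the rules here are invented for illustration, not a real tagset:

<syntaxhighlight lang="python">
# Toy suffix rules: guess part-of-speech features from inflection alone.
# The rules and labels are illustrative, not a real rule set.
SUFFIX_RULES = [
    ("ness", {"pos": "noun", "countable": "probably not"}),
    ("ly",   {"pos": "adverb"}),
    ("ing",  {"pos": "verb or gerund"}),
]

def guess_features(word):
    for suffix, features in SUFFIX_RULES:
        if word.endswith(suffix):
            return features
    return {"pos": "unknown"}

print(guess_features("ubiquitousness"))  # tagged without knowing the root
print(guess_features("apploidness"))     # works even for made-up words
</syntaxhighlight>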
It can help to combine methods, and some of the systems built like this perform quite decently.

It turns out that how precise a system is at specific things
can matter as much as how consistent it is throughout.
Maybe it can extract very specific sentences with perfect accuracy,
but it ignores anything it isn't sure about.

Fuzzier methods are also common, in part because they can be trained to give ''some'' answer for everything,
and even if those answers are not high quality, they can be consistent.
Say, you can get a reasonable answer to basic (dis)similarity between all words without much up-front annotation or knowledge,
because you can find a little context to feed into the system for basically all words.
-->
<!--
This can include
* words that appear in similar context (see also [[word embeddings]], even [[topic modelling]])

: Say, where many methods may put 'car', 'road', and 'driving' in the same area, this may also say roughly ''how'' they are related, in a semantic sense. This can be more precise distance-wise, except that it is also more easily incomplete.
-->
<!--
Not quite central to the area, but helpful to some of the contrasts we want to make,
is '''semantic similarity''', the broad area of [[metric]]s between words, phrases, and/or documents.

Some people use the term semantic similarity to focus on the ontological sort of methods - possibly specifically "is a" and other strongly [[ontology|ontological]] relationships, and ''not'' just "seems to occur in the same place". {{comment|('''Semantic relatedness''' is sometimes added to mean 'is a' plus [[antonyms]] (opposites), [[meronyms]] (part of whole), [[hyponyms]]/[[hypernyms]])}}
Other people use semantic similarity to refer to absolutely ''any'' metric that gives you distances.

https://en.wikipedia.org/wiki/Semantic_similarity
-->
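For the ontological flavour of similarity, a small example via NLTK's WordNet interface (assumes you have run nltk.download('wordnet'); exact scores depend on the WordNet version):

<syntaxhighlight lang="python">
from nltk.corpus import wordnet as wn

# Path similarity walks WordNet's 'is a' hierarchy: 1.0 means the same
# concept, lower values mean more edges between the concepts.
dog = wn.synset("dog.n.01")
cat = wn.synset("cat.n.01")
car = wn.synset("car.n.01")

print(dog.path_similarity(cat))  # fairly close in the taxonomy
print(dog.path_similarity(car))  # much further apart
</syntaxhighlight>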
====vector space representations, word embeddings, and more====
<!--
Around text, '''vector space representations''' are the general idea
that for each word (or similar unit) you can somehow figure out a dense, semantically useful vector.
: ''Dense'' compared to the input: when you see text as 'one of ~100K possible words', then a vector of maybe two hundred dimensions that packs enough meaning for good (dis)similarity comparisons is pretty compact.
: ''Semantically useful'' in that those properties tend to be useful
:: often focusing on such (dis)similarity comparisons - e.g. 'bat' may get a sense of animal, tool, verb, and noun -- or rather, in this space it appears closer to animals and tools and verbs than e.g. 'banana' does.
There is a whole trend of, instead of using well-annotated data (which is costly to come by),
using orders of magnitude more ''non-annotated'' data (much easier to come by),
based on assumptions like the one the [[distributional hypothesis]] makes:
that words appear in similar contexts, so training from nearby words is enough to give a good sense of comparability.

Years ago that was done with things like linear algebra (see e.g. [[Latent Semantic Analysis]]);
now it is done with more complex math, and/or neural nets,
which is a more polished way of training faster - though the way you handle the output is much the same.

Yes, this training is a bunch of up-front work,
but assuming you can learn it well for a target domain (and trying to learn from ''all'' data often gives good ''basic'' coverage of most domains),
then many things built on top have a good basis (and do not have to deal with classical training issues like high dimensionality, sparsity, smoothing, etc.).

'''Word embeddings''' sometimes has a much more specific meaning, referring to a specific way of finding and using text vectors.
(There are varied definitions, and you can argue that under some, terms like 'static word embeddings' make no sense.)
But the terms are mixed so much that you should probably read 'embeddings' as 'text vectors'
and figure out yourself what the implementation actually is.
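As a concrete sketch of reusing vectors trained on lots of unannotated text: gensim's downloader hosts several pretrained sets, 'glove-wiki-gigaword-50' being one of them (a one-time download):

<syntaxhighlight lang="python">
import gensim.downloader as api

# Loads 50-dimensional GloVe vectors trained on Wikipedia+Gigaword.
# The result is a KeyedVectors object: a vocabulary, one vector per word.
wv = api.load("glove-wiki-gigaword-50")

print(wv["banana"].shape)                 # (50,) - dense, versus a ~400K vocab
print(wv.most_similar("banana", topn=5))  # typically fruit-ish neighbours
</syntaxhighlight>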
---
In the context of some of the later developments, the simpler implementations are considered static vectors.

'''Static vectors''' refers to systems where each word always gets the same vector.
That usually means:
* make a vocabulary: each word gets an entry
* learn a vector for each item in the vocabulary
---
We previously mentioned that putting a single number on a word has issues.
We now point out that putting a single vector on a word has some issues too.

Consider, say, "we saw the saw".
With a static vector method, that sentence cannot be expressed in numbers without those two 'saw's ''having'' to be the same thing.
"we saw a bunny" or "I sharpened the saw" will each draw on that one sense - probably carrying both the tool quality and the seeing quality,
and any use will tell you it's slightly about tools
But saw, hm.
And if you table a motion, it ''will'' associate with gestures and woodworking,
because those are the more common senses.
All of this may still apply a single vector to the same word always (sometimes called static word embeddings).
This is great for unambiguous content words, but less so for polysemy and homonymy.
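Continuing the gensim sketch from a few paragraphs up: a static model has exactly one entry for 'saw', so both senses fold into one vector (the actual numbers depend on the model):

<syntaxhighlight lang="python">
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")  # as in the earlier sketch

# One vector regardless of context - 'saw' the tool and 'saw' the verb
# are indistinguishable by construction.
print(wv.similarity("saw", "see"))      # some affinity with the seeing sense
print(wv.similarity("saw", "blade"))    # ...and some with the tool sense
print(wv.most_similar("saw", topn=10))  # typically a mix of both senses
</syntaxhighlight>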
-->

====Contextual word embeddings====

====Subword embeddings====
<!--
A model where a word/token can be characterized by something ''smaller'' than that exact whole word:
a technique that assigns meanings to words
via meanings learned on subwords - which can be arbitrary fragments.

This can also do quite well at things otherwise [[out of vocabulary]].
Say, the probably-out-of-vocabulary 'apploid' may get a decent guess
if we learned a vector for 'appl' from e.g. 'apple'.
Also, it starts dealing with misspellings a lot better.

Understanding the language's morphology would probably do a little better,
but just sharing larger fragments of characters tends to do well enough,
in part because inflection, compositional agglutination (e.g. Turkish),
and such are often ''largely'' regular.
Yes, this is sort of an [[n-gram]] trick,
and the data (which you ''do'' have to load to use)
can quickly explode for that reason.
For this reason it's often combined with [[bloom embeddings]].
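A small gensim FastText sketch of the idea (toy corpus, so the vectors themselves are meaningless; the point is that an out-of-vocabulary word still gets one):

<syntaxhighlight lang="python">
from gensim.models import FastText

sentences = [
    ["the", "apple", "is", "red"],
    ["we", "ate", "an", "apple"],
]

# min_n/max_n are the character n-gram lengths the subwords are built from
model = FastText(sentences, vector_size=32, min_count=1, min_n=3, max_n=5)

print("apploid" in model.wv.key_to_index)  # False: not in the vocabulary...
print(model.wv["apploid"][:5])             # ...but still composed from shared
                                           # fragments like 'appl'
</syntaxhighlight>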
Examples:
fastText, floret
[https://d2l.ai/chapter_natural-language-processing-pretraining/subword-embedding.html]
-->
=====Bloom embeddings, a.k.a. the hash trick=====
<!--
An idea akin to [[bloom filter]]s applied to word embeddings.

Consider a language model that should be assigning vectors.
When it sees [[out-of-vocabulary]] words, what do you do?

Do you treat them as not existing at all?
: ideally, we could do something quick and dirty that is better than nothing.

Do you add just as many entries to the vocabulary?
That can be large, and more importantly, now documents don't share vocabs anymore, or the vectors indexed by those vocabs.

Do you map all of them to a single unknown-word vector?
That's small, but makes them ''by definition'' indistinguishable.
If you wanted to do even a ''little'' extra contextual learning for them,
that learning would probably want to move it in all directions, and end up doing nothing and being pointless.
Another option would be to
* reserve a number of entries for unknown words,
* assign all unknowns into there somehow (in a way that ''will'' still collide)
:: via some hash trickery
* hope that the words that get assigned together aren't ''quite'' as conflicting as ''all at once''.

It's a ''very'' rough trick, but:
* it is definitely better than nothing.
* with a little forethought you could sort of share these vectors between documents
:: in that the same words will map to the same entry every time
Limitations:
* when you smush things together and ''use'' what you previously learned, you mix unrelated things
* when you smush things together and ''learn'', you relate unrelated things
This sort of bloom-like intermediate is also applied to subword embeddings,
because it gives a ''sliding scale'' between
'so large that it probably won't fit in RAM' and 'so smushed together it has become too fuzzy'.
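A minimal numpy sketch of the trick itself - the row count, hash choice, and number of hashes here are arbitrary, and real implementations such as thinc's HashEmbed differ in details:

<syntaxhighlight lang="python">
import numpy as np
import zlib

N_ROWS, DIM, N_HASHES = 1000, 16, 3   # far fewer rows than possible words
table = np.random.default_rng(0).normal(size=(N_ROWS, DIM))

def embed(word):
    # Each word deterministically picks a few rows; their sum is its vector.
    # Distinct words can collide on one row, but rarely on all of them.
    rows = [zlib.crc32(f"{i}:{word}".encode()) % N_ROWS
            for i in range(N_HASHES)]
    return table[rows].sum(axis=0)

v1 = embed("banana")
v2 = embed("banana")   # same word -> same rows -> identical vector
v3 = embed("apploid")  # unseen word still gets a (shared-ish) vector
print(np.allclose(v1, v2), float(v1 @ v3))
</syntaxhighlight>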
Some implementations:
* [https://github.com/explosion/floret floret] (bloom embeddings for fastText)
* thinc's HashEmbed [https://thinc.ai/docs/api-layers#hashembed]
* spaCy's MultiHashEmbed and HashEmbedCNN (which use thinc's HashEmbed)
https://spacy.io/usage/v3-2#vectors
-->
====word2vec====
{{stub}}

word2vec is one of many ways to put semantic vectors to words (in the [[distributional hypothesis]] approach),
and refers to two techniques, using either [[bag-of-words]] or [[skip-gram]] as processing for a specific learner,
as described in T Mikolov et al. (2013), "{{search|Efficient Estimation of Word Representations in Vector Space}}",
probably the paper that kicked the dense-vector idea into broad interest.

Word2vec amounts to building a classifier that predicts what word appears in a context, and/or what context appears around a word,
which happens to do a decent job of characterizing that word.

That paper mentions:
* its continuous bag of words (cbow) variant predicts the current word based on the words directly around it (ignoring order, hence bow{{verify}})
* its continuous skip-gram variant predicts surrounding words given the current word.
:: Uses [[skip-gram]]s as a concept/building block. Some people refer to this technique as just 'skip-gram' without the 'continuous',
but this may come from not really reading the paper you're copy-pasting the image from?
:: It seems to be better at less-common words, but slower.

(A NN implies [[one-hot]] coding, so not small, but it turns out to be moderately efficient{{verify}})
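A quick gensim sketch of training both variants (toy corpus; the sg parameter switches between them):

<syntaxhighlight lang="python">
from gensim.models import Word2Vec

sentences = [
    ["we", "saw", "the", "saw"],
    ["i", "sharpened", "the", "saw"],
    ["we", "saw", "a", "bunny"],
]

# sg=0 selects CBOW (predict word from context),
# sg=1 selects continuous skip-gram (predict context from word)
cbow     = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)
skipgram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(skipgram.wv.most_similar("saw"))  # meaningless on a corpus this small,
                                        # but this is the basic workflow
</syntaxhighlight>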
<!--
https://en.wikipedia.org/wiki/Word2vec
https://www.kdnuggets.com/2018/04/implementing-deep-learning-methods-feature-engineering-text-data-cbow.html
https://pathmind.com/wiki/word2vec
https://www.youtube.com/watch?v=LSS_bos_TPI&vl=en
https://towardsdatascience.com/introduction-to-word-embedding-and-word2vec-652d0c2060fa
-->
====GloVe====
<!--
Global Vectors for Word Representation.

The GloVe paper itself compares GloVe with word2vec,
and concludes it consistently performs a little better.
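If you fetch the distributed glove.*.txt files from the project page, one way to load them is via gensim (the filename here is one of the standard downloads; no_header accounts for GloVe's header-less text format and needs gensim >= 4.0):

<syntaxhighlight lang="python">
from gensim.models import KeyedVectors

# GloVe's text format is word2vec's text format minus the header line;
# no_header=True makes gensim accept it directly.
glove = KeyedVectors.load_word2vec_format(
    "glove.6B.100d.txt", binary=False, no_header=True)

print(glove.most_similar("frog", topn=5))
</syntaxhighlight>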
See also:
* http://nlp.stanford.edu/projects/glove/
* J Pennington et al. (2014), "GloVe: Global Vectors for Word Representation"
-->
[[Category:Math on data]] |
Intro
NLP data massage / putting meanings or numbers to words
Bag of words / bag of features
The bag-of-words model (more broadly bag-of-features model) uses the collection of words in a context, unordered, in a multiset, a.k.a. bag.
In other words, we summarize a document (or part of it) by the appearance or count of its words, ignoring things like adjacency and order - so any grammar.
In text processing
In introductions to Naive Bayes as used for spam filtering, its naivety essentially is this assumption that feature order does not matter.
Though real-world naive bayes spam filtering would take more complex features than single words (and may re-introduce adjacency via n-grams or such), examples often use 1-grams for simplicity - which basically is bag of words.
Other types of classifiers also make this assumption, or make it easy to do so.
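For instance, with scikit-learn's CountVectorizer (the documents are made up):

<syntaxhighlight lang="python">
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "free pills buy now",
    "meeting notes for tuesday",
    "buy cheap pills now now now",
]

vectorizer = CountVectorizer()       # tokenize + count, order discarded
X = vectorizer.fit_transform(docs)   # docs x vocabulary count matrix

print(vectorizer.get_feature_names_out())
print(X.toarray())   # each row is a bag of words: counts, no word order
</syntaxhighlight>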
Bag of features
While the idea is best known from text, hence bag-of-words, you can argue for a bag of features, applying it to anything you can count, which may be useful even when features are considered independently.
For example, you may follow up object detection in an image with logic like "if this photo contains a person, and a dog, and grass", because each task may be easy enough individually, and the combination tends to narrow down what kind of photo it is.
In practice, bag-of-features often refers to models that recognize parts of a whole object (e.g. "we detected a bunch of edges of road signs" might be easier and more robust than detecting signs fully), and is used in a number of image tasks, such as feature extraction, object/image recognition, image search, (more flexible) near-duplicate detection, and such.
The idea is that you can describe an image by the collection of small things we recognize in it, and that their combined presence is typically already a strong indicator (particularly when you add some hypothesis testing). Exact placement can be useful, but is often secondary.
N-gram notes
N-grams are contiguous sequences of length n.
They are most often seen in computational linguistics.
Applied to sequences of characters they can be useful e.g. in language identification,
but the more common application is to words.
As n-gram models only include dependency information when those relations are expressed through direct proximity, they are poor language models, but useful for things working off probabilities of combinations of words, for example statistical parsing, collocation analysis, text classification, sliding-window methods (e.g. a sliding-window POS tagger), (statistical) machine translation, and more.
For example, for the already-tokenized input This is a sentence . the 2-grams would be:
- This is
- is a
- a sentence
- sentence .
...though depending on how specially you do or do not want to treat the edges, people might fake some empty tokens at the edges, or add special start/end tokens.
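Generating those takes a few lines in Python (NLTK's nltk.ngrams does the same):

<syntaxhighlight lang="python">
def ngrams(tokens, n):
    # every contiguous run of n tokens
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(ngrams(["This", "is", "a", "sentence", "."], 2))
# [('This', 'is'), ('is', 'a'), ('a', 'sentence'), ('sentence', '.')]
</syntaxhighlight>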
Skip-grams
Note: skip-grams now seem to refer to two different things.
An extension of n-grams where components need not be consecutive (though typically stay ordered).
A k-skip-n-gram is a length-n sequence where the components occur at distance at most k from each other.
They may be used to ignore stopwords, but perhaps more often they are intended to help reduce data sparsity, under a few assumptions.
They can help discover patterns on a larger scale, simply because skipping makes you look further for the same n. (also useful for things like fuzzy hashing).
Skip-grams apparently come from speech analysis, processing phonemes.
In word-level analysis their purpose is a little different. You could say that we acknowledge the sparsity problem, and decide to get more out of the data we have (focusing on context) rather than trying to smooth.
Actually, if you go looking, skip-grams are now often equated with a fairly specific analysis - word2vec's continuous skip-gram variant (see below).
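Since the definitions vary, here is a sketch that follows the 'total skips at most k' reading (k=0 reduces to plain n-grams):

<syntaxhighlight lang="python">
from itertools import combinations

def skipgrams(tokens, n, k):
    # k-skip-n-grams: ordered picks of n tokens, skipping at most k tokens
    # in total. Papers differ on whether k bounds each gap or the total.
    out = []
    for idxs in combinations(range(len(tokens)), n):
        if (idxs[-1] - idxs[0] + 1) - n <= k:
            out.append(tuple(tokens[i] for i in idxs))
    return out

print(skipgrams(["we", "saw", "the", "saw"], 2, 1))
# [('we', 'saw'), ('we', 'the'), ('saw', 'the'), ('saw', 'saw'), ('the', 'saw')]
</syntaxhighlight>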
Syntactic n-grams
Flexgrams
Words as features - one-hot coding and such
Moderately specific ideas and calculations
Collocations
Collocations are statistically idiosyncratic sequences - the math that is often used asks "do these adjacent words occur together more often than the occurrence of each individually would suggest?".
This doesn't ascribe any meaning, it just tends to signal anything from empty habitual etiquette, jargon, various substituted phrases, and many other things that go beyond purely compositional construction - because why, other than common sentence structure, would they co-occur so often?
...actually, there are varied takes on how useful collocations are, and why.
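One common version of that math is pointwise mutual information over bigrams, e.g. via NLTK (the token list is a stand-in for a real, much larger corpus):

<syntaxhighlight lang="python">
from nltk.collocations import BigramCollocationFinder
from nltk.metrics import BigramAssocMeasures

tokens = "kick the bucket he kicked the bucket again".split()

finder = BigramCollocationFinder.from_words(tokens)
# Rank adjacent pairs by PMI: do they co-occur more often than their
# individual frequencies would suggest?
for pair in finder.nbest(BigramAssocMeasures.pmi, 5):
    print(pair)
</syntaxhighlight>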
latent semantic analysis
Latent Semantic Analysis (LSA) is the application of Singular Value Decomposition to text analysis and search.
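In scikit-learn terms that is roughly TruncatedSVD over a term-document (here tf-idf) matrix - a sketch with made-up documents:

<syntaxhighlight lang="python">
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "the car drove on the road",
    "the truck drove on the highway",
    "the cat sat on the mat",
]

X = TfidfVectorizer().fit_transform(docs)   # docs x terms, sparse
lsa = TruncatedSVD(n_components=2)          # SVD, keeping 2 'concepts'
doc_vecs = lsa.fit_transform(X)             # dense document vectors

print(doc_vecs)   # the car/truck documents should land closer together
</syntaxhighlight>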
random indexing
https://en.wikipedia.org/wiki/Random_indexing
Topic modeling
Roughly the idea that, given documents that are each about a particular topic, one would expect particular words to appear in each more or less frequently.
Assuming such documents share topics, you can probably find groups of words that belong to those topics.
Assuming each document is primarily about one topic, you can expect a larger set of documents to yield multiple topics, and an assignment of one or more of these topics to each document - so it acts like a soft/fuzzy clustering.
This is a relatively weak proposal in that it relies on a number of assumptions, but given that it requires zero training, it works better than you might expect when those assumptions are met (the largest probably being that each document has a singular topic).
https://en.wikipedia.org/wiki/Topic_model
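A small gensim LDA sketch (toy corpus; num_topics is an up-front choice):

<syntaxhighlight lang="python">
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [
    ["car", "road", "driving", "car"],
    ["driving", "road", "truck"],
    ["cat", "dog", "pet"],
    ["dog", "pet", "food"],
]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]   # (word id, count) pairs

lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
print(lda.print_topics())                   # word distributions per topic
print(lda.get_document_topics(corpus[0]))   # soft topic assignment
</syntaxhighlight>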