Data modeling, restructuring, and massaging

{{#addbodyclass:tag_math}}
{{Math notes}}


=Putting numbers to words=


====Computers and people and numbers====
<!--
Where people are good at words and bad at numbers, computers are good at numbers and bad at words.
So it makes sense to express words ''as'' numbers?
That does go a moderate way, but it really matters ''how''.
If you just make a list enumerating, say, 1=the, 2=banana, 3=is, ..., 150343=ubiquitous
: it makes data ''smaller'' to work with, but equally clunky to do anything with other than counting {{comment|(...which has its uses - e.g. gensim's collocation analysis does such an enumeration to keep its intermediate calculations all-numeric)}}.
: but does absolutely nothing to make them ''comparable''
In a lot of cases it would be really nice if we could encode ''some'' amount of meaning.
But since that asks multiple difficult questions at once,
like 'how would you want to encode different kinds of things in a way that is useful later',
we are initially happy with the ability to say that 'is' is like 'be' more than 'banana'
or at least a metric of similarity, what kind of role it plays. 
There are many approaches to this you can think of,
and almost everything you can think of has been tried, to moderate success.
It can help to add a knowledge base of sorts
: Maybe start with an expert-of-sorts making a list of all terms you know, and recognizing them in a text.
: Maybe start making that into an ontology of sorts, going as far as modeling that swimming is a type of locomotion, and so is walking.
It can help to collect statistics about what kind of sentence patterns there are,
and what kinds of words are usually where.
It can help to detect inflections, like if a word ends in -ness then it's probably a [[noun]] and maybe [[uncountable]],
and you can tag that even if you don't know the root word/morpheme it's on.
Each of these may take a large amount of work, and can do a single task with good accuracy,
though it may only be good at what it does because it doesn't even try other things.
It can help to combine methods, and some of the systems built like this perform quite decently.
It turns out that how consistent a system is throughout
can be just as relevant
as how precise it is at specific things.
Maybe it can extract very specific sentences with perfect accuracy,
but ignores anything it isn't sure about.
Fuzzier methods are also common, in part because they can be trained to give ''some'' answer for everything,
and even if they're not high quality they can be consistent.
Say, you can get a reasonable answer to basic (dis)similarity between all words without much up-front annotation or knowledge,
and for basically all words you can find a little context to feed into the system.
-->


====vector space representations, word embeddings, and more====
<!--
{{zzz|For context|
Around text, '''vector space representations''' are the general idea that for each word (or similar unit) you
can calculate something that you can meaningfully compare to others.}}
-->
=====Just count in a big table=====
<!--
'''You could just count.'''[https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html] Given a collection of documents
* decide what words to include
* make a big documents-by-words table {{comment|(we like to say 'matrix' instead of 'table' when we get mathy, but it's the same concept really)}}
* count


One limitation is that including all words makes a humongous table {{comment|(and most cells will contain zero)}}.
Yet not including them means we say they do not exist ''at all''.


Another limitation is that counts ''as such'' are not directly usable, for dumb reasons like that
if more count means more important, longer documents will have higher counts just because they are longer,
and 'the' and 'a' will be most important.


'''You could use something like [[tf-idf]]'''[https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html#sklearn.feature_extraction.text.TfidfVectorizer], an extra step on top of the previous, which e.g.
* will downweigh 'the' by merit of it being absolutely everywhere
* reduces the effect of document length


Another limitation is that the table's other axis being ''all documents'' seems huge for no good reason.
There are plenty of tasks where those original training documents (that happened to be a training task long ago) are just... not useful.

Sure there is information in those documents, but can't we learn from it, then put it in a more compact form?


Another limitation is that these still focus on unique words. Even words that are inflections of others will be treated as entirely independent,
so a document that uses only 'fish' and 'spoon' might even be treated as entirely distinct from one that says only 'fishing' and 'spooning'.
Never quite that bad in practice, but you can intuit why this isn't ideal.


One thing many tried is to use something like matrix methods - factorization, dimensionality reduction, and the like.
You don't need to know how the math works, but the point is that similarly expressed terms can be recognized as such,
and e.g. 'fish' and 'fishing' will smush into each other, automatically, because otherwise-similar documents will mention both
{{comment|(looking from the perspective of the other axis, maybe we can also compare documents better, but unless comparing documents was your goal, that was actually something we were trying to get rid of)}}
-->
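To make the above concrete, here is a minimal sketch of both steps using scikit-learn (the CountVectorizer and TfidfVectorizer linked above); the toy documents are made up for the example.

<syntaxhighlight lang="python">
# Count, then tf-idf, on a few made-up documents.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs and more dogs",
]

count_vec = CountVectorizer()              # builds the documents-by-words table
counts = count_vec.fit_transform(docs)     # sparse matrix of raw counts, mostly zeros
print(count_vec.get_feature_names_out())
print(counts.toarray())

tfidf_vec = TfidfVectorizer()              # same table, but downweighs ubiquitous words
tfidf = tfidf_vec.fit_transform(docs)      # and normalizes away document length
print(tfidf.toarray().round(2))
</syntaxhighlight>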


=====Word embeddings=====
<!--
The above arrived in an area where those vectors no longer represent just the words,
but contain some comparability to similar words.


There are some older distributional similarity approaches - these were a little clunky in that
they made for high-dimensional, sparse vectors, where each one represented a specific context.
They were sometimes more explainable, but somewhat unwieldy.


'''"Word embeddings"''' often refer to the word vector thing ''but more compactly'' somehow:
we try to somehow figure out a dense, semantically useful vector.

: ''Dense'' compared to the input: when you see text as 'one of ~100K possible words', then a vector of maybe two hundred dimensions that does a good job packing in enough meaning for good (dis)similarity comparisons is pretty good

: ''Semantically useful'' in that those properties tend to be useful
:: often focusing on such (dis)similarity comparisons - e.g. 'bat' may get a sense of animal, tool, verb, and noun -- or rather, in this space appear closer to animals and tools and verbs than e.g. 'banana' does.


This amounts to even more up-front work, so why do this dense semantic thing?

One reason is practical - classical methods run into some age-old machine learning problems like high dimensionality and sparsity,
and there happen to be some word embedding methods that sidestep that (by cheating, but by cheating ''pretty well'').

Also, putting all words in a single space lets us compare terms, sentences, and documents.
If the vectors are good, this is a good approximation of ''semantic'' similarity.

If we can agree on a basis between uses (e.g. build a reasonable set of vectors per natural language)
we might even be able to give a basic idea of e.g. what a document is about.

...that one turns out to be optimistic, for a slew of reasons.
You can often get better results out of becoming a little domain-specific.
Which then makes your vectors specific to just your system again.


'''So how do you get these dense semantic vectors?'''

There are varied ways to do this.

You ''could'' start with well-annotated data, and that might be of higher quality in the end,
but it is hard to come by annotation for many aspects over an entire language.
It's a lot of work to even try - and that's still ignoring details like contextual ambiguity,
analyses even people wouldn't quite agree on, and the fact that you have to impose a particular system,
so if it doesn't encode something you wanted, you have to do it ''again'' later.


A recent trend is to put a little more trust in the assumptions of the [[distributional hypothesis]],
e.g. that words in similar context will be comparable,
and focus on words in context.

For which we can use non-annotated data. We need a ''lot'' more of it for comparable quality,
but people have collectively produced a lot of text, internet and other.


'Embeddings' is a bit of a strange term for that concept,
and seems to point out (with most methods we use today) how training these considered their context.

This was done in varied ways over the years (e.g. [[Latent Semantic Analysis]] applies somewhat),
later with more complex math, and/or neural nets.
Which may just train better - the way you handle the output is much the same.


One of the techniques that kicked this off in more recent years is [[word2vec]],
which doesn't do a lot more than looking at what appears in similar contexts.
(Its view is surprisingly narrow - what happens in a small window in a ''lot'' of data will tend to be more consistent.
A larger window is not only more work but often too fuzzy)
Its math is apparently {{search|Levy "Neural Word Embedding as Implicit Matrix Factorization"|fairly like the classical matrix factorization}}.


'''Word embeddings''' often refers to learning vectors from context,
though there are more varied meanings (some conflicting),
so you may wish to read 'embeddings' as 'text vectors' and figure out yourself what the implementation actually is.


And when using the same to figure out what unseen text means, it may well assign the tool-ish sense, or the verb-ish sense, based on similar context.

In fact, some patterns are strong enough that even unseen words will get a decent estimation.
Say, feed "we spoiaued" to an embedding-style parser, and it's going to guess that it's a verb ''while also'' pointing out it's out of its vocabulary.


In ''use'', the assigned vector is typically not dependent on the context of the current text,
but the vector that was learned earlier was dependent on the context in the training data. {{verify}}
-->
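As a rough illustration of what such dense vectors buy you, here is a hedged sketch using gensim's downloader and its hosted 'glove-wiki-gigaword-50' vectors (any pretrained KeyedVectors would behave similarly); treat the exact scores as illustrative only.

<syntaxhighlight lang="python">
# Load a small set of pretrained word vectors and ask basic similarity questions.
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")      # KeyedVectors: word -> dense vector

print(wv.most_similar("walk", topn=5))       # words like 'walking', 'walked', 'run' tend to score high
print(wv.similarity("walk", "run"))          # cosine similarity between two words
print(wv.similarity("walk", "banana"))       # noticeably lower
</syntaxhighlight>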
 
=====Static embeddings=====
<!--
 
In the context of some of the later developments, the simpler variant implementations are considered static vectors.
 
'''Static vectors''' refer to systems where each word always gets the same vector.
 
That usually means:
* make vocabulary: each word gets an entry
* learn vector for each item in the vocabulary
 
 
 
 
 
 
'''Limits of static embeddings'''
 
 
 
We previously mentioned that putting a single number on a word has issues.
 
 
We now point out that putting a single vector on a word has some issues.
 
Yes, a lot of words will get a completely sensible sense.
 
Yet if we made an enumerated vocabulary, and each item gets a vector, then the saw in "I sharpened the saw" and "we saw a bat" ''will'' be assigned the same vector; same for the bat in "I saw a bat" and "I'll bat an eyelash".
 
 
The problem is that ''if'' that vector is treated as the semantic sense, both saws have exactly the same sense.
 
...probably both the tool quality and the seeing quality,
so any use will tell you it's slightly about tools
{{comment|(by count, most 'saw's are the seeing kind, something that even unsupervised learning should tell you)}}.
 
 
You can imagine it's not really an issue for words like antidisestablishmentarianism.
There's not a lot of subtle variation in its use, so you can pretend it has one meaning.
 
But saw, hm.
 
In theory, it may assign
* a vector to the verb saw that is more related to seeing than to cutting
* a vector to the noun bat that is more related to other animals than it is to sports equipment
 
That said, that is not a given.
: There's at least four options and it might land on any of them, particularly for a sentence in isolation
: the meanings we propose to be isolated here may get weirdly blended in training
 
 
But for similar reasons, if you {{example|table a motion}}, it ''will'' pull in associations with gestures and woodworking, because those are the more common things.
 
 
 
'''"Can't you just have multiple saws, encode 'saw as a verb' differently from 'saw as an implement'?"'''
 
Sure, now you can ''store'' that, but how do you decide?
 
There are some further issues from inflections, but ignoring those for now,
a larger problem is that the whole idea is somewhat circular,
depending on already knowing the correct parse to learn this from
{{comment|(also on knowing which words need this separated treatment, but arguably ''that'' you can figure out from whether things end up being approximated in sufficiently distinct ways or not)}}.
 
In reality, finding the best parse of sentence structure just isn't independent from finding its meaning.
You end up having to do both concurrently.
 
A lot of language is pretty entangled, and needs to be resolved by the context of what words do to nearby words - human brevity relies on some ambiguity resolving. And it turns out the most commonly used words are often the weirder ones.
 
This entanglement seems to help things stay compact, with minimal ambiguity, and without requiring very strict rules.
Natural languages seem to end up balancing the amount of rules/exceptions to reasonable levels {{comment|(even [[conlang]]s like Lojban think about this, though they engineer it explicitly)}}, and it has some other uses - like [[double meanings]], and intentionally generalizing meanings.
 
So we're stuck with compact complexity.
We can try to model absolutely everything we do in each language, and that might even give us something more useful in the end.
 
 
 
 
 
'''Yet''' imagine for a moment a system that would just pick up 'saw after a pronoun' and 'saw after a determiner',
and not even because it knows what pronouns or determiners are, but because given a ton of examples,
those are two of the things the word 'saw' happens to often be next to.
 
Such a system ''also'' doesn't have to know that it is modeling a certain verbiness and nouniness as a result.
 
It might, from different types of context, perhaps learn that one of these contexts relates it to a certain tooliness as well.
But, not doing that on purpose, such a system won't and ''can't'' explain such aspects.
 
So why mention such a system? Why do that at all?
 
 
Usually because it learns these things without us telling it anything, and the similarity it ends up with helps with things like:
: "if I search for walk, I also get things that mention running",
: "this sentence probably says something about people moving"
: "walk is like run, and to a lesser degree swim",
: "I am making an ontology style system, and would like some assistance to not forget adding related things"
 
The "without us telling it anything" -- it being an [[unsupervised]] technique -- also matters.
 
You will probably get more precise answers with the same amount of well-annotated data.
You will probably get equally good answers with less annotated data.
 
But the thing is that annotated data is hard and expensive, because it's a lot of work.
 
And you can have endless discussions about annotation ''because'' there is ambiguity in there, so there's probably an upper limit, or even more time spent.
 
 
'''It's just easier to have ''a lot more'' un-annotated data''',
so even if it needs ''so much more'' text, a method that then does comparably well is certainly useful.
 
You just feed it lots of text.
 
 
This is not a solution to all of the underlying issues here. We're not even trying to solve them all,
in fact we're just going to gloss over a lot of them,
to a point where we can maybe encode multiple uses of words (and if they have a single one, great!).
 
 
 
There are limitations, some upsides that are arguably also downsides.
 
It might pick up on more subtleties, but like any unsupervised technique,
it tends to be better at finding things than at describing ''what'' it found.
 
It may pick up relations only softly,
and in ways you can't easily extract or learn from,
but it's pretty good at not completely missing them,
meaning you don't have to annotate half the world's text with precise meaning
for it to work.
 
 
 
 
 




-->
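A toy sketch of what 'static' means in practice - the vectors below are invented three-dimensional stand-ins, not trained ones, but the lookup behaviour is the point: 'saw' maps to one and the same vector no matter the sentence.

<syntaxhighlight lang="python">
# Static lookup: one entry per word, so context cannot change the result.
import numpy as np

static_vectors = {                       # hypothetical, hand-made vectors just for the example
    "saw": np.array([0.7, 0.1, 0.3]),
    "bat": np.array([0.2, 0.8, 0.1]),
    "the": np.array([0.0, 0.0, 0.1]),
}

def embed(tokens):
    # out-of-vocabulary words simply get a zero vector here
    return [static_vectors.get(t, np.zeros(3)) for t in tokens]

v1 = embed("i sharpened the saw".split())[-1]
v2 = embed("we saw a bat".split())[1]
print(np.array_equal(v1, v2))   # True - same word, same vector, regardless of context
</syntaxhighlight>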




=====Contextual word embeddings=====
<!--
The first attempts at word embeddings, and many since,
were static vectors,
meaning that the lookup (even if trained from something complex)
always gives the same vector for the same word.

This was mostly to keep the amount of data manageable, and to keep it a simple and fast lookup.


That gives a reasonable average sense of a word,
and more details where ambiguity is low,
but will do poorly in the specific cases where a word's meaning depends on context.
Consider that "passing out" can mean anything from giving people things to losing consciousness.
Generally, the more commonly used the verb, the trickier it is.
We like the idea of one word, one meaning, but most languages ''really'' messed that one up.


A contextual model instead learns about words ''in a sequence''.

This is still just statistics,
but the model you run will give

This is a bunch more data, a bunch more training,
and not always worth it.

Depending on how much context you pay attention to,
-->
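For contrast, a hedged sketch of the contextual case, assuming the transformers and torch packages and the public bert-base-uncased model: the same surface word comes out with a different vector depending on the sentence around it.

<syntaxhighlight lang="python">
# Contextual vectors: 'saw' gets a different vector in different sentences.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def vector_for(sentence, word):
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]        # (tokens, 768)
    idx = enc.input_ids[0].tolist().index(tokenizer.convert_tokens_to_ids(word))
    return hidden[idx]

v1 = vector_for("I sharpened the saw", "saw")
v2 = vector_for("We saw a bat", "saw")
print(torch.cosine_similarity(v1, v2, dim=0))   # well below 1.0: context changed the vector
</syntaxhighlight>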
=====Subword embeddings=====
<!--
A model where a word/token can be characterized by something ''smaller'' than that exact whole word.

A technique that assigns meanings to words
via meanings learned on subwords - which can be arbitrary fragments.


This can also do quite well at things otherwise [[out of vocabulary]].
Say, the probably-out-of-vocabulary apploid may get a decent guess
if we learned a vector for appl from e.g. apple.

Also, it starts dealing with misspellings a lot better.


Understanding the language's morphology would probably do a little better,
but just sharing larger fragments of characters tends to do well enough,
in part because inflection, compositional agglutination (e.g. Turkish),
and such are often ''largely'' regular
{{comment|(chances are it will pick up on such strong patterns, but that's more a side effect of them indeed being regular)}}.


Yes, this is sort of an [[n-gram]] trick,
and the data (which you ''do'' have to load to use)
can quickly explode for that reason.

For this reason it's often combined with [[bloom embeddings]].


Examples:
fastText, floret

[https://d2l.ai/chapter_natural-language-processing-pretraining/subword-embedding.html]
-->
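A minimal sketch of the fastText-style subword idea: break a word into character n-grams (with boundary markers), so that related or unseen words share fragments. The function and its parameters are illustrative, not any library's exact scheme.

<syntaxhighlight lang="python">
# Character n-grams of a word, with '<' and '>' marking word boundaries.
# A word's vector would then be the sum (or mean) of vectors learned for these fragments.
def char_ngrams(word, nmin=3, nmax=5):
    marked = f"<{word}>"
    grams = set()
    for n in range(nmin, nmax + 1):
        for i in range(len(marked) - n + 1):
            grams.add(marked[i:i + n])
    return grams

print(sorted(char_ngrams("fishing")))
# an unseen word like 'fisher' shares many fragments with 'fishing',
# so it can still get a reasonable vector
print(char_ngrams("fishing") & char_ngrams("fisher"))
</syntaxhighlight>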
=====The hashing trick (also, Bloom embeddings)=====
<!--
The '''hashing trick''' works for everything from basic counting to contextual and sub-word embeddings -- just anywhere where you need to put a fixed bound on size, and are willing to accept degrading performance beyond that.

'''You can use the [[hashing trick]]'''[https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.HashingVectorizer.html#sklearn.feature_extraction.text.HashingVectorizer] if you want to put an upper bound on memory -- but this comes at an immediate cost that may be avoided with a more clever plan.
* squeezes all words into a fixed amount of entries
* ...almost indiscriminately, so the more words you push into fewer entries, the less accurate it is
* it's better than nothing given low amounts of memory, but there are usually better options


The hashing trick applied to word embeddings
is sometimes called '''Bloom embeddings''', because it's an idea ''akin'' to [[bloom filter]]s:
something very compact that still gets you better-than-nothing embeddings.
But probably more because it's shorter than "embeddings with the hashing trick".


Consider a language model that should be assigning vectors.
When it sees [[out-of-vocabulary]] words, what do you do?

Do you treat them as not existing at all?
: ideally, we could do something quick and dirty that is better than nothing.

Do you add just as many entries to the vocabulary?
That can be large and, more importantly, now documents don't share vocabularies anymore, or vectors indexed by those vocabularies.

Do you map them all to a single unknown vector?
That's small, but makes them ''by definition'' indistinguishable.

If you wanted to do even a ''little'' extra contextual learning for them,
that learning would probably want to move that one vector in all directions, and end up doing nothing and being pointless.


Another option would be to
* reserve a number of entries for unknown words
* assign all unknowns into there somehow (in a way that ''will'' still collide)
:: via some hash trickery
* hope that the words that get assigned together aren't ''quite'' as conflicting as ''all at once''.


It's a ''very'' rough approach, but
* it is definitely better than nothing
* with a little forethought you could sort of share these vectors between documents
:: in that the same words will map to the same entry every time


Limitations:
* when you smush things together and use what you previously learned
* when you smush things together and ''learn'', you relate unrelated things


This sort of bloom-like intermediate is also applied to subword embeddings,
because it gives a ''sliding scale'' between
'so large that it probably won't fit in RAM' and 'so smushed together it has become too fuzzy'.


Some implementations:
* [https://github.com/explosion/floret floret] (bloom embeddings for fastText)
* thinc's HashEmbed [https://thinc.ai/docs/api-layers#hashembed]
* spaCy's MultiHashEmbed and HashEmbedCNN (which uses thinc's HashEmbed)


https://explosion.ai/blog/bloom-embeddings

https://spacy.io/usage/v3-2#vectors
-->
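A rough, self-contained sketch of the idea: every word, known or not, is hashed into a fixed number of table rows, and summing a few differently-seeded lookups makes it unlikely that two words collide on all of them. Table size, seeds, and the hash function are arbitrary choices for the example; real implementations (floret, thinc's HashEmbed) differ in detail.

<syntaxhighlight lang="python">
# Hashed ('bloom-style') embedding lookup with a fixed memory bound.
import numpy as np
import zlib

N_ROWS, DIM = 1000, 50
rng = np.random.default_rng(0)
table = rng.normal(size=(N_ROWS, DIM))          # rows would get trained in a real system

def bloom_embed(word, seeds=(1, 2)):
    # each seed gives an independent hash -> row; summing them makes full collisions unlikely
    rows = [zlib.crc32(f"{s}:{word}".encode()) % N_ROWS for s in seeds]
    return table[rows].sum(axis=0)

v_known   = bloom_embed("walk")
v_unknown = bloom_embed("spoiaued")   # an out-of-vocabulary word still gets *some* vector
print(v_known.shape, v_unknown.shape)
</syntaxhighlight>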


=====Now we have nicer numbers, but how do I ''use'' them?=====
<!--
* use the vectors as-is

* adapt the embeddings with your own training
:: starts with a good basis, refines it for your use
:: but: only deals with tokens already in there

* there are also some ways to selectively alter vectors
:: can be useful if you want to keep sharing the underlying embeddings
-->
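A small sketch of the first option, 'use the vectors as-is', again assuming gensim's downloadable GloVe vectors: cosine similarity between words, and a crude document vector made by averaging word vectors.

<syntaxhighlight lang="python">
# Compare words and (crudely) whole documents using pretrained vectors as-is.
import numpy as np
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def doc_vector(tokens):
    vecs = [wv[t] for t in tokens if t in wv]   # skip out-of-vocabulary tokens
    return np.mean(vecs, axis=0)

d1 = doc_vector("the cat sat on the mat".split())
d2 = doc_vector("a kitten slept on the rug".split())
d3 = doc_vector("the stock market fell sharply".split())
print(cosine(d1, d2), cosine(d1, d3))   # the first pair should come out more similar
</syntaxhighlight>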


=====vectors - unsorted=====
<!--
Word embeddings are usually explained as "a vector representation for a word"
{{comment|(you can call this [[distributional similarity]]: a word is characterized by the company it keeps)}}.

Such vectors are useful for things like subject similarity, sentiment analysis, and syntactic parsing.

The fact that it's a vector isn't that important.
It happens to be mathematically handy to work with,
and we happen to want to end up with vector values where similar vectors ''hopefully'' carry similar meanings.


These vectors come from machine learning (of varying type and complexity).
* LSA

* [[word2vec]] -
: technically patented?
: T Mikolov et al. (2013) "Efficient Estimation of Word Representations in Vector Space"

* tok2vec

* FastText
: https://fasttext.cc/

* lda2vec
: https://multithreaded.stitchfix.com/blog/2016/05/27/lda2vec/#topic=38&lambda=1&term=


All of this may still apply a single vector to the same word always (sometimes called static word embeddings).
This is great for unambiguous content words, but less so for polysemy.
-->


<!--
====Semantic similarity====
Semantic similarity is the broad area of [[metric]]s between words, phrases, and/or documents,
where the idea of distance between items is based on likeness.

Some people use this term specifically when it is based on strongly coded meaning/semantics (ontology style),
because this lets you make stronger statements,
contrasted with similarity based only on [[lexicographical]] details or word embeddings.
Others use it for absolutely ''any'' metric that gives you distances.

...the latter is fuzzier, but also tends to give a reasonable answer to a lot of things that the more exact approach does not.

This can include
* words that appear in similar context (see also [[word embeddings]], even [[topic modelling]])

* words that have similar meaning

* Topological / ontological similarity - based on more strongly asserted properties

The last tries to not only know that e.g. 'car' and 'road' and 'driving' are related,
but also roughly ''how'' they are related, in a semantic sense.
This can be more precise distance-wise, except that it is also more easily incomplete.

:: '''Semantic similarity''' may well refer more specifically to ''only'' "is a" and other [[ontology|ontological]] relationships, and ''not'' just "seems to co-occur"
:: '''Semantic relatedness''' then might also include [[antonyms]] (opposites), [[meronyms]] (part of whole), [[hyponyms]]/[[hypernyms]]

https://en.wikipedia.org/wiki/Semantic_similarity
-->

<!--
===(something inbetween)===

====semantic folding====
https://en.wikipedia.org/wiki/Semantic_folding


====Hyperspace Analogue to Language====
-->


===Moderately specific ideas and calculations===

====Collocations====

Collocations are statistically idiosyncratic sequences - the math that is often used asks
"do these adjacent words occur together more often than the occurrence of each individually would suggest?".

This doesn't ascribe any meaning,
it just tends to signal anything from
empty habitual etiquette,
jargon,
various [[substituted phrases]],
and many other things that go beyond purely [[compositional]] construction,
because why other than common sentence structures would they co-occur so often?

...actually, there are varied takes on how useful [[collocations]] are, and why.
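A hedged sketch of that calculation using NLTK's collocation finder, scoring adjacent pairs by pointwise mutual information (PMI); the toy token list is made up.

<syntaxhighlight lang="python">
# Score adjacent word pairs by PMI: "do these co-occur more than chance suggests?"
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

tokens = ("we wish you a merry christmas we wish you a merry christmas "
          "we wish you a merry christmas and a happy new year").split()

finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(2)                         # ignore pairs seen only once
print(finder.nbest(BigramAssocMeasures.pmi, 5))     # e.g. ('merry', 'christmas') scores high
</syntaxhighlight>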


====latent semantic analysis====
Latent Semantic Analysis (LSA) is the application of Singular Value Decomposition on text analysis and search.


====random indexing====
https://en.wikipedia.org/wiki/Random_indexing


====Topic modeling====
Roughly the idea that, given documents that are about a particular topic, one would expect particular words to appear in each more or less frequently.

Assuming such documents sharing topics, you can probably find groups of words that belong to those topics.

Assuming each document is primarily about one topic, you can expect a larger set of documents to yield multiple topics, and an assignment of one or more of these topics, so it acts like a soft/fuzzy clustering.

This is a relatively weak proposal in that it relies on a number of assumptions, but given that it requires zero training, it works better than you might expect when those assumptions are met (the largest probably being your documents having singular topics).

https://en.wikipedia.org/wiki/Topic_model


====word2vec====
{{stub}}

word2vec is one of many ways to put semantic vectors to words (in the [[distributional hypothesis]] approach),
and refers to two techniques, using either [[bag-of-words]] or [[skip-gram]] as processing for a specific learner,
as described in T Mikolov et al. (2013), "{{search|Efficient Estimation of Word Representations in Vector Space}}",
probably the paper that kicked this dense-vector idea into wider interest.


Word2vec could be seen as building a classifier that predicts what word appears in a context, and/or what context appears around a word,
which happens to do a decent job of characterizing that word.


That paper mentions
* its continuous bag of words (cbow) variant predicts the current word based on the words directly around it (ignoring order, hence bow{{verify}})
* its continuous skip-gram variant predicts surrounding words given the current word.
: Uses skip-grams as a concept/building block. Some people refer to this technique as just 'skip-gram' without the 'continuous',
:: but this may come from not really reading the paper you're copy-pasting the image from?
:: seems to be better at less-common words, but slower

The way it builds that happens to make it a decent classifier of that word,
so actually both work out as characterizing the word.


(NN implies [[one-hot]] coding, so not small, but it turns out to be moderately efficient{{verify}})
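A minimal sketch of training such vectors yourself with gensim (4.x naming), on a made-up toy corpus; real use wants vastly more text. sg=1 selects the skip-gram variant, sg=0 the cbow one.

<syntaxhighlight lang="python">
# Train word2vec on a tiny, made-up corpus and inspect the learned vectors.
from gensim.models import Word2Vec

sentences = [
    "we saw a bat".split(),
    "i sharpened the saw".split(),
    "the bat flew out of the cave".split(),
    "walk run and swim are ways of moving".split(),
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=50)
print(model.wv["bat"][:5])                 # the learned dense vector (first few dimensions)
print(model.wv.most_similar("bat", topn=3))
</syntaxhighlight>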






====GloVe====
<!--
Global Vectors for Word Representation

The GloVe paper compares itself with word2vec,
and concludes it consistently performs a little better.


See also:
* http://nlp.stanford.edu/projects/glove/
* J Pennington et al. (2014), "GloVe: Global Vectors for Word Representation"
-->




[[Category:Math on data]]


Intro

NLP data massage / putting meanings or numbers to words

Bag of words / bag of features

The bag-of-words model (more broadly bag-of-features model) uses the collection of words in a context, unordered, in a multiset, a.k.a. bag.

In other words, we summarize a document (or part of it) by the appearance or count of words, and ignore things like adjacency and order - so any grammar.
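A tiny illustration of that multiset idea in Python: order is thrown away, only word-to-count remains.

<syntaxhighlight lang="python">
# A bag of words is just a multiset of tokens.
from collections import Counter

bag = Counter("the cat sat on the mat the end".split())
print(bag)   # Counter({'the': 3, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1, 'end': 1})
print(bag == Counter("the end the cat the mat sat on".split()))   # True: word order never mattered
</syntaxhighlight>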



In text processing

In introductions to Naive Bayes as used for spam filtering, its naivety essentially is this assumption that feature order does not matter.


Though real-world naive bayes spam filtering would take more complex features than single words (and may re-introduce adjacency via n-grams or such), examples often use 1-grams for simplicity - which basically is bag of words.

Other types of classifiers also make this assumption, or make it easy to do so.


Bag of features

While the idea is best known from text, hence bag-of-words, you can argue for bag of features, applying it to anything you can count, and may be useful even when considered independently.

For example, you may follow up object detection in an image with logic like "if this photo contains a person, and a dog, and grass" because each task may be easy enough individually, and the combination tends to narrow down what kind of photo it is.


In practice, the bag-of-features often refers to models that recognize parts of a whole object (e.g. "we detected a bunch of edges of road signs" might be easier and more robust than detecting it fully), and is used in a number of image tasks, such as feature extraction, object/image recognition, image search, (more flexible) near-duplicate detection, and such.

The idea is that you can describe an image by the collection of small things we recognize in it, and that their combined presence is typically already a strong indicator (particularly when you add some hypothesis testing). Exact placement can be useful, but is often secondary.


See also:

N-gram notes

N-grams are contiguous sequences of length n.


They are most often seen in computational linguistics.


Applied to sequences of characters it can be useful e.g. in language identification, but the more common application is to words.

As n-gram models only include dependency information when those relations are expressed through direct proximity, they are poor language models, but useful for things working off probabilities of combinations of words, for example statistical parsing, collocation analysis, text classification, sliding window methods (e.g. a sliding window POS tagger), (statistical) machine translation, and more.


For example, for the already-tokenized input This is a sentence . the 2-grams would be:

This   is
is   a
a   sentence
sentence   .


...though depending on how special you do or do not want to treat the edges, people might fake some empty tokens at the edge, or some special start/end tokens.
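A minimal sketch of extracting word n-grams from an already-tokenized sentence (no special edge handling):

<syntaxhighlight lang="python">
# Contiguous n-grams over a token list.
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(ngrams("This is a sentence .".split(), 2))
# [('This', 'is'), ('is', 'a'), ('a', 'sentence'), ('sentence', '.')]
</syntaxhighlight>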


Skip-grams

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Note: skip-grams now seem to refer to two different things.


An extension of n-grams where components need not be consecutive (though typically stay ordered).


A k-skip-n-gram is a length-n sequence where the components occur at distance at most k from each other.


They may be used to ignore stopwords, but perhaps more often they are intended to help reduce data sparsity, under a few assumptions.

They can help discover patterns on a larger scale, simply because skipping makes you look further for the same n. (also useful for things like fuzzy hashing).
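A small sketch of the bigram case, where pair members may be at most k tokens apart; the function name and exact distance convention are just for illustration:

<syntaxhighlight lang="python">
# k-skip-bigrams: ordered pairs whose members are at most k tokens apart.
def skip_bigrams(tokens, k):
    pairs = []
    for i, left in enumerate(tokens):
        for j in range(i + 1, min(i + 2 + k, len(tokens))):
            pairs.append((left, tokens[j]))
    return pairs

print(skip_bigrams("the quick brown fox".split(), k=1))
# regular bigrams plus pairs that skip one word, e.g. ('the', 'brown')
</syntaxhighlight>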


Skip-grams apparently come from speech analysis, processing phonemes.


In word-level analysis their purpose is a little different. You could say that we acknowledge the sparsity problem, and decide to get more out of the data we have (focusing on context) rather than trying to smooth.

Actually, if you go looking, skip-grams are now often equated with a fairly specific analysis.



Syntactic n-grams

Flexgrams

Words as features - one-hot coding and such
