A framework for learning word vectors (Mikolov et al. 2013)
- Every word in a fixed vocabulary is represented by a vector
- Go through each position t in the text, which has a center word c and context words o (outer)
- Use the similarity of the word vectors for c and o to calculate the probability of o given c (see the sketch after this list)
- Keep adjusting the word vectors to maximize this probability
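A minimal sketch of the probability in the last two bullets, using a toy vocabulary and random vectors (all names and sizes here are illustrative assumptions, not part of the original notes): the skip-gram model scores a context word o against a center word c with a dot product and normalises with a softmax over the vocabulary.

```python
import numpy as np

# Toy setup: hypothetical vocabulary and randomly initialised vectors.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]      # hypothetical vocabulary
dim = 8                                         # toy dimensionality
V = {w: rng.normal(size=dim) for w in vocab}    # center-word vectors v_w
U = {w: rng.normal(size=dim) for w in vocab}    # context-word vectors u_w

def p_context_given_center(o: str, c: str) -> float:
    """P(o | c) = exp(u_o . v_c) / sum_w exp(u_w . v_c)."""
    scores = np.array([U[w] @ V[c] for w in vocab])
    scores -= scores.max()                      # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return float(probs[vocab.index(o)])

print(p_context_given_center("cat", "sat"))
```

Training adjusts V and U so that this probability is high for word pairs that actually co-occur in the text.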
Word2Vec Variants

UK Twitter word embeddings (II)
Word embeddings trained on UK Twitter content. The corpus comprised approximately 1.1 billion tweets covering 2012 through 2016. Settings (see the training sketch below): skip-gram with negative sampling (10 noise words), a window of 9 words, 512 dimensions, and 10 epochs of training. After filtering out words with fewer than 100 occurrences, a vocabulary of 470,194 unigrams was obtained (see embd_voc); the corresponding 512-dimensional embeddings are held in embd_vec.bz2.
https://figshare.com/articles/dataset/UK_Twitter_word_embeddings_II_/5791650
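A sketch of training with the stated settings. The original tooling is not specified, so using gensim here is an assumption, and the two-tweet corpus is purely illustrative (min_count is lowered accordingly; on the full 1.1B-tweet corpus it would be 100).

```python
from gensim.models import Word2Vec

# Hypothetical corpus: each tweet pre-tokenised into a list of unigrams.
tweets = [
    ["uk", "weather", "is", "lovely", "today"],
    ["train", "delays", "again", "this", "morning"],
]

# Skip-gram (sg=1) with 10 negative samples, window of 9, 512 dimensions,
# 10 epochs; words below the frequency cutoff are dropped via min_count.
model = Word2Vec(
    sentences=tweets,
    sg=1,            # skip-gram
    negative=10,     # 10 noise words per positive pair
    window=9,
    vector_size=512,
    epochs=10,
    min_count=1,     # would be 100 on the full corpus
)

print(model.wv["uk"][:5])   # first few dimensions of a learned vector
```

The published vocabulary (embd_voc) and vector file (embd_vec.bz2) correspond to the output of such a run on the full tweet corpus.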

Seonglae Cho