Slide 2: Overview
- My background
- Introduction to vector space models and Latent Semantic Indexing
- A toy example
- Interpretation
- Some applications
- A concrete example and a small experiment
- Improvements of the model
- Various unsolved problems
- Conclusion: things I have to do

Slide 4: Vector Space Models
If we had a way to map any term to a vector in a high-dimensional space, such that the similarity in meaning between terms is reflected in the distance between their vectors… then we could:
- For a given term t, find an ordered list of the terms most similar to t
- For any two terms, find the similarity between them
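A minimal sketch of these two operations, assuming a trained model is already available as a plain dict from terms to NumPy arrays (the names `vectors` and `most_similar` are hypothetical, not from the talk):

```python
import numpy as np

def cosine(u, v):
    """Similarity between two terms: cosine of the angle between their vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def most_similar(term, vectors, top_n=10):
    """Ordered list of the terms most similar to `term`, best first."""
    target = vectors[term]
    scored = [(other, cosine(target, vec))
              for other, vec in vectors.items() if other != term]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]
```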

Slide 5: Vector Space Models, cont.
And if it is possible to add the meanings of terms, and this is also reflected by adding the corresponding vectors, we could do some more things:
- If we assume that it is possible to extract terms from a document, we can map documents to vectors too!
- A set of terms (one or more terms) may be seen as a document as well
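Continuing the sketch above, under the assumption that adding meanings corresponds to adding vectors, a document reduces to the sum of its term vectors:

```python
import numpy as np

def document_vector(terms, vectors):
    """Map a document (a bag of extracted terms) to a single vector."""
    known = [vectors[t] for t in terms if t in vectors]
    return np.sum(known, axis=0) if known else None
```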

Slide 6: Vector Space Models, cont.
- Now it is possible, for any term or document d, to find an ordered list of the terms or documents most similar to d
- Further, for any two terms or documents, we can find the similarity between them
- It is therefore meaningful to look at a term as a special case of a document – a short one

Slide 7: Alternative data sources
A useful data source for this kind of similarity information would be a thesaurus, WordNet, or some other knowledge database. But:
- We don't have them for all languages
- They are not domain specific, so domain-specific terms are not covered
- In such data sources most of the words are missing – especially names, compounds, technical terms and numbers
- My big newspaper corpus contains ~3 000 000 unique words
A vector space model, on the other hand, can be trained from raw unannotated corpus data!

Slide 8: Calculating a vector space
The training process needs a large set of documents – the bigger the better. My data set used for experiments contains roughly 1.5 million newspaper articles and 0.5 billion running words, but I will collect more…
Step 1: Create a word-by-document matrix – each element in the matrix is the frequency of a word type in a specific document (a sketch follows below).
From here there are several ways to find a good vector space.
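A minimal sketch of Step 1, assuming the documents are already tokenized into lists of word types; a sparse matrix type keeps half a billion running words manageable:

```python
from collections import Counter
from scipy.sparse import lil_matrix

def word_by_document_matrix(documents):
    """Rows are word types, columns are documents, cells are frequencies."""
    vocab = sorted({w for doc in documents for w in doc})
    index = {w: i for i, w in enumerate(vocab)}
    matrix = lil_matrix((len(vocab), len(documents)))
    for j, doc in enumerate(documents):
        for word, freq in Counter(doc).items():
            matrix[index[word], j] = freq
    return vocab, matrix.tocsr()
```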

Slide 9: Vector Space Algorithms
- Singular Value Decomposition (SVD)
  - A mathematically complicated way (based on eigenvalues) to find an optimal vector space in a specific number of dimensions
  - Computationally heavy – maybe 20 hours for my test set
  - Often uses the entire document as context
- Random Indexing (RI)
  - Select a number of dimensions randomly
  - Not as heavy to calculate, but it is less clear (to me) why it works
  - Uses a small context, typically 1+1 to 5+5 words
- Neural nets, Hyperspace Analogue to Language, etc.
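A hedged sketch of the SVD route, using SciPy's sparse truncated SVD on the matrix from Step 1; k = 300 is only a placeholder here, since the choice of dimensionality is an open question discussed later:

```python
from scipy.sparse.linalg import svds

def lsa_spaces(matrix, k=300):
    """Project word types and documents into a shared k-dimensional space."""
    u, s, vt = svds(matrix.asfptype(), k=k)
    word_vectors = u * s    # one k-dimensional row per word type
    doc_vectors = vt.T * s  # one k-dimensional row per document
    return word_vectors, doc_vectors
```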

Slide 10: The terminology I use
Some people use these terms in a sloppy way. For me:
- LSI = LSA: Latent Semantic Indexing and Latent Semantic Analysis are used in roughly the same way by most people
- SVD and RI are two ways to obtain the model used in LSA – they both find the latent information

Slide 11: The distance measure
Three easy-to-calculate distance measures:
- Cosine: the cosine of the angle between the vectors
- Euclidean distance: just the distance as we all know it
- Manhattan distance: the distance if you walk only along the orthogonal axes
All are just as easy to calculate in n dimensions, where n >> 3. The most commonly used is the cosine.
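All three measures in a few lines, as a sketch; each works unchanged for any dimensionality:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between the vectors (1.0 means same direction)."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def euclidean_distance(u, v):
    """Straight-line distance."""
    return np.linalg.norm(u - v)

def manhattan_distance(u, v):
    """Distance walking only along the orthogonal axes."""
    return np.sum(np.abs(u - v))
```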

Slide 18: What does the SVD give?
Dumais 1995: "The SVD program takes the ltc transformed term-document matrix as input, and calculates the best 'reduced-dimension' approximation to this matrix."
Michael W. Berry 1992: "This important result indicates that A_k is the best k-rank approximation (in a least squares sense) to the matrix A."
Leif: What Berry says is that SVD gives the best projection from n to k dimensions, that is, the projection that keeps distances in the best possible way – so no problems with local maxima.
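In formulas, this is the Eckart–Young result: truncating the SVD of A to its k largest singular values yields the rank-k matrix closest to A in the least squares (Frobenius) sense.

```latex
A = U \Sigma V^{T}, \qquad A_k = U_k \Sigma_k V_k^{T}, \qquad
\lVert A - A_k \rVert_F = \min_{\operatorname{rank}(B) \le k} \lVert A - B \rVert_F
```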

Slide 19: What does it really mean then?
The fact that a word w is represented by a specific vector v means exactly nothing!
If two words a and b are represented by vectors close to each other (the angle between them is small), then:
- a and b are often found in the same document, and/or
- a is often found together with some word c, and c is often found together with b
- and so on…

Slide 20: A naïve algorithm
It is not trivial to see why SVD and RI work. I will explain a naive but more intuitive algorithm that obtains a result similar to SVD, but is too slow for practical use (a sketch follows below):
1. For each unique word, select a random point in a space with the selected dimensionality
2. For each document D in the set: move the points corresponding to the words in D towards the mass center of those points
3. If any point made a "big" move since the last iteration, go back to step 2
Steps 1–3 could be repeated several times to have a chance of finding the global maximum.
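A sketch of the algorithm as stated; the learning rate and the convergence threshold are my assumptions, not part of the original description:

```python
import numpy as np

def naive_embedding(documents, vocab, dims=100, rate=0.1, threshold=1e-3):
    rng = np.random.default_rng()
    index = {w: i for i, w in enumerate(vocab)}
    points = rng.standard_normal((len(vocab), dims))  # step 1: random points
    while True:
        biggest_move = 0.0
        for doc in documents:                         # step 2: one pass over the set
            rows = [index[w] for w in set(doc) if w in index]
            if not rows:
                continue
            center = points[rows].mean(axis=0)        # mass center of the doc's words
            for r in rows:
                move = rate * (center - points[r])
                points[r] += move
                biggest_move = max(biggest_move, np.linalg.norm(move))
        if biggest_move < threshold:                  # step 3: no "big" move left
            return points
```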

Slide 24: Some applications
- Automatic generation of a domain-specific thesaurus
- Keyword extraction from documents
- Finding sets of similar documents in a collection
- Finding documents related to a given document or a set of terms

Slide 25: Problems and questions
- How can we interpret the similarities as different kinds of relations?
- How can we include document structure and phrases in the model?
- Terms are not really terms, but just words
- Ambiguous terms pollute the vector space
- How could we find the optimal number of dimensions for the vector space?

Slide 28: A small experiment
I want the model to know the difference between Bengt and Bengt (sketched below):
1. Make a frequency list of all n-tuples up to n = 5 with a frequency > 1
2. Keep all words in the bags, but add the tuples, with spaces replaced by _, as words
3. Run the LSI again
Now Bengt_Johansson is a word, and Bengt_Johansson is NOT Bengt + Johansson. The number of terms grows from 34 238 to 104 783.
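A minimal sketch of steps 1–2, assuming tokenized documents; frequent tuples are joined with underscores and added alongside the original words:

```python
from collections import Counter

def add_ngram_terms(documents, max_n=5, min_freq=2):
    """Count all n-tuples up to max_n, then add the frequent ones as words."""
    counts = Counter()
    for doc in documents:
        for n in range(2, max_n + 1):
            for i in range(len(doc) - n + 1):
                counts[tuple(doc[i:i + n])] += 1
    frequent = {t for t, f in counts.items() if f >= min_freq}
    expanded = []
    for doc in documents:
        extra = ["_".join(doc[i:i + n])
                 for n in range(2, max_n + 1)
                 for i in range(len(doc) - n + 1)
                 if tuple(doc[i:i + n]) in frequent]
        expanded.append(doc + extra)  # keep all words, add the tuples
    return expanded
```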

Slide 31: The new vector space model
- It is clear that it is now possible to find terms closely related to Bengt Johansson – the handball coach
- But is the model better for single words or for document comparison? What do you think?
- There are more "words" than before – hopefully this improves the result, just as more data does
- At least there is no reason to expect a worse result… or is there?

Slide 38: Hmm, adding n-grams was maybe too simple…
- If the bad result is due to overtraining, it could help to remove the words the phrases are built from – but maybe not all of them
- Another approach is to use a dependency parser to find more meaningful phrases, not just n-grams

Slide 39: The interpretation of similarities
I haven't tried to solve this problem at all yet, but one idea I have is to:
- Calculate vector spaces for various dimensionalities and context widths
- Check whether the different settings find different kinds of relations
With a data source like WordNet this could be done in a systematic way.

Slide 40: How to select the number of dimensions
Susan T. Dumais 1995: "In previous experiments we found that performance improves as the number of dimensions is increased up to 200 or 300 dimensions, and decreases slowly after that to the level observed for the standard vector method (Dumais, 1991)."
Jason I. Hong 2000: "There does not seem to be a general consensus for an optimal number of dimensions; instead, the size of the concept space must be determined based on the specific collection of documents used."
Thomas K. Landauer 1997: "Near maximum performance of 45-53%, corrected for guessing, was obtained over a fairly broad region around 300 dimensions."
Leif 2003: "We should try to do experiments similar to Dumais's and Landauer's, but relate the optimal dimensionality to measures like the number of documents, terms, nonzero elements, etc., because these could give us a formula not relying on hand-tagged data sets."

Slide 41: Performance of the SVD
Dumais 1995: "The SVD takes only about 2 minutes on a Sparc10 for a 2k x 5k matrix, but this time increases to about 18-20 hours for a 60k x 80k matrix."
Hong 2000: "The SVD algorithm is O(N²k³), where N is the number of terms plus documents, and k is the number of dimensions in the concept space"; "However, if the collection is stable, SVD will only need to be performed once, which may be an acceptable cost."
Leif: So if a good computer today is 100 times faster than Dumais's in 1995, we have 20 times bigger data sets, and we have an optimized SVD function instead of a research prototype, it should still take around 20 hours.
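One way to sanity-check that estimate, assuming (as the N² factor in Hong's bound suggests, for fixed k) that the running time grows with the square of the data size, and assuming a speedup s of roughly 4 for an optimized SVD over the prototype – both assumptions mine:

```latex
t \approx 20\,\mathrm{h} \times \frac{(20N)^2}{N^2} \times \frac{1}{100} \times \frac{1}{s}
  = 20\,\mathrm{h} \times \frac{400}{100\,s}
  \approx 20\,\mathrm{h} \quad \text{for } s \approx 4
```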

Slide 42: What I still have to do something about
- Find a better LSI/SVD package than the one I have (old C code from 1990), or maybe write it myself…
- Get the phrases into the model in some way
When these things are done I could:
- Try to interpret various relations from similarities in a vector space model
- Try to solve the "optimal number of dimensions" problem
- Explore what the length of the vectors means
