Vector Space Model
• Documents and queries are represented in a high-dimensional vector space in which each dimension corresponds to a word (term) in the corpus (document collection).
• The entities represented in the figure are q for the query and d1, d2, and d3 for the three documents.
• The term weights are derived from occurrence counts.

Vector Space Methods
• The classic structure in vector space text mining methods is the term-document matrix, where
  – rows correspond to terms, columns correspond to documents, and
  – entries may be binary or frequency counts.
• A simple and obvious generalization is the bigram- (multigram-) document matrix, where
  – rows correspond to bigrams and columns to documents, and again entries are either binary or frequency counts.
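As a concrete sketch, a frequency-count term-document matrix for a tiny corpus can be built in a few lines of Python; the three-document corpus here is invented purely for illustration:

```python
# Toy corpus (invented for illustration)
docs = [
    "the cat sat on the mat",
    "the dog sat",
    "cat and dog",
]

# Rows = terms, columns = documents; entries are frequency counts.
terms = sorted({w for d in docs for w in d.split()})
tdm = [[d.split().count(t) for d in docs] for t in terms]

row = dict(zip(terms, tdm))
print(row["the"])  # [2, 1, 0]: "the" occurs twice in doc 0, once in doc 1, never in doc 2
```

Replacing the counts with `min(count, 1)` would give the binary variant mentioned above.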

Social Networks
• Social networks can be represented as graphs.
  – A graph G(V, E) is a set of vertices, V, and edges, E.
  – The social network depicts actors (in classic social networks, these are humans) and their connections or ties.
  – Actors are represented by vertices, ties between actors by edges.
• There is a one-to-one correspondence between graphs and so-called adjacency matrices.
• Example: Author-Coauthor Networks
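The graph/adjacency-matrix correspondence can be sketched with a hypothetical four-author coauthor network (the authors and ties are invented for the example):

```python
import numpy as np

# Hypothetical author-coauthor network: four authors, three coauthorship ties
authors = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "C"), ("A", "C")]

idx = {a: i for i, a in enumerate(authors)}
adj = np.zeros((len(authors), len(authors)), dtype=int)
for u, v in edges:
    adj[idx[u], idx[v]] = adj[idx[v], idx[u]] = 1  # undirected ties

# The symmetric matrix encodes exactly the same information as the edge list.
print(adj)
```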

Graphs versus Matrices

Two-Mode Networks
• When there are two types of actors, e.g.:
  – Individuals and Institutions
  – Alcohol Outlets and Zip Codes
  – Paleoclimate Proxies and Papers
  – Authors and Documents
  – Words and Documents
  – Bigrams and Documents
• SNA refers to these as two-mode networks; graph theory calls them bipartite graphs.
  – We can convert from a two-mode to a one-mode network.

Two-Mode Computation
Consider a bipartite individual-by-institution social network. Let A be the m × n individual-by-institution adjacency matrix, with m the number of individuals and n the number of institutions. Then C = A Aᵀ (an m × m matrix) is the individual-by-individual social network adjacency matrix, with c_ii = Σ_j a_ij = the strength of ties to all individuals in i's social network, and c_ij = the tie strength between individual i and individual j.
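The one-mode projection C = A Aᵀ can be sketched with an invented affiliation matrix for three individuals and two institutions:

```python
import numpy as np

# Invented individual-by-institution affiliation matrix A (3 people, 2 institutions)
A = np.array([
    [1, 0],  # person 0 belongs to institution 0
    [1, 1],  # person 1 belongs to both
    [0, 1],  # person 2 belongs to institution 1
])

C = A @ A.T  # individual-by-individual one-mode network
# For binary A: c_ii counts i's affiliations; c_ij counts affiliations i and j share.
print(C)  # [[1 1 0], [1 2 1], [0 1 1]]
```

Person 1's diagonal entry is 2 (two memberships), and the off-diagonal entries record the shared institution linking each pair.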

Two-Mode Computation
Similarly, P = Aᵀ A (an n × n matrix) is the institution-by-institution social network adjacency matrix, with p_jj = Σ_i a_ij = the strength of ties to all individuals in institution j's network, and p_ij = the tie strength between institution i and institution j.
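The institutional projection P = Aᵀ A follows the same pattern, again with an invented 3 × 2 affiliation matrix:

```python
import numpy as np

# Invented affiliation matrix: 3 individuals (rows) by 2 institutions (columns)
A = np.array([[1, 0], [1, 1], [0, 1]])

P = A.T @ A  # institution-by-institution one-mode network
# For binary A: p_jj counts institution j's members; p_jk counts members shared by j and k.
print(P)  # [[2 1], [1 2]]
```

Each institution has two members, and the single person affiliated with both produces the off-diagonal tie strength of 1.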

Two-Mode Computation
• Of course, this exactly resembles the computation for LSI. Viewed as a two-mode social network, this computation allows us:
  – to calculate the strength of ties between terms relative to this document database (corpus), and
  – to calculate the strength of ties between documents relative to this lexicon.
• If we can cluster these terms and these documents, we can discover:
  – similar sets of documents with respect to this lexicon, and
  – sets of words that are used the same way in this corpus.

Example of a Two-Mode Network: Our A Matrix

Example of a Two-Mode Network: Our P Matrix

Block Models
• A partition of a network is a clustering of the vertices so that each vertex is assigned to exactly one class or cluster.
• Partitions may specify some property that depends on attributes of the vertices.
• Partitions divide the vertices of a network into a number of mutually exclusive subsets.
  – That is, a partition splits a network into parts.
• Partitions are also sometimes called blocks or block models.
  – These are essentially a way to cluster actors into groups that behave in a similar way.
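The defining property of a partition — every vertex in exactly one block — can be sketched as follows, with invented vertex labels and block assignments:

```python
# Invented 5-vertex network partitioned into two blocks
vertices = ["v0", "v1", "v2", "v3", "v4"]
partition = {"v0": 0, "v1": 0, "v2": 1, "v3": 1, "v4": 1}

# Group vertices by block label
blocks = {}
for v, b in partition.items():
    blocks.setdefault(b, []).append(v)

# The blocks are mutually exclusive and jointly exhaustive
print(blocks)  # {0: ['v0', 'v1'], 1: ['v2', 'v3', 'v4']}
```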

Example Data
• The text data were collected by the Linguistic Data Consortium in 1997 and were originally used in Martinez (2002).
  – The data consisted of 15,863 news reports collected from Reuters and CNN from July 1, 1994 to June 30, 1995.
  – The full lexicon for the text database included 68,354 distinct words.
• In all, 313 stop words are removed; after denoising and stemming, 45,021 words remain in the lexicon.
  – In the examples reported here, only 503 documents are used.

Example Data
• A simple 503-document corpus we have worked with has 7,143 denoised and stemmed entries in its lexicon and 91,709 bigrams.
  – Thus the TDM is 7,143 by 503 and the BDM is 91,709 by 503.
  – The term vector is 7,143-dimensional and the bigram vector is 91,709-dimensional.
  – The BPM for each document is 91,709 by 91,709 and, of course, very sparse.
• A corpus can easily reach 20,000 documents or more.

Term-Document Matrix Analysis: Zipf's Law
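Zipf's law says that a term's frequency is roughly proportional to 1/rank, so rank × frequency stays in a narrow band. A minimal sketch, using a toy text invented to mimic the effect (a real corpus shows it far more clearly):

```python
from collections import Counter

# Toy text invented to mimic a Zipf-like frequency distribution
text = ("the cat and the dog and the bird " * 10).split()
freqs = sorted(Counter(text).values(), reverse=True)

# Under Zipf's law, rank * frequency is roughly constant
products = [rank * f for rank, f in enumerate(freqs, start=1)]
print(freqs)     # [30, 20, 10, 10, 10]
print(products)  # [30, 40, 30, 40, 50]
```

On a real term-document matrix, plotting log frequency against log rank gives the familiar near-linear Zipf curve.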

Term-Document Matrix Analysis

Mixture Models for Clustering
• Mixture models fit a mixture of (normal) distributions.
• We can use the means as centroids of clusters.
• Assign observations to the "closest" centroid.
• Possible improvement in computational complexity.

Our Proposed Algorithm
• Choose the number of desired clusters.
• Using a normal mixture model, calculate the mean vector for each of the document protoclusters.
• Assign each document (vector) to the protocluster anchored by the closest mean vector.
  – This is a Voronoi tessellation of the 7,143-dimensional term vector space.
• The Voronoi tiles correspond to topics for the documents.
• Or assign documents based on maximum posterior probability.
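The nearest-mean assignment step can be sketched as follows; the 2-D "protocluster" means and document vectors are invented stand-ins for the mixture-model output (the real term vectors are 7,143-dimensional):

```python
import numpy as np

# Invented protocluster mean vectors and document vectors (2-D for illustration)
means = np.array([[0.0, 0.0], [5.0, 5.0]])
docs = np.array([[0.5, 0.2], [4.8, 5.1], [0.1, 1.0]])

# Squared Euclidean distance from each document to each mean
d2 = ((docs[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
labels = d2.argmin(axis=1)  # nearest-mean (Voronoi) assignment
print(labels)  # [0 1 0]
```

Assigning by maximum posterior probability instead would replace `d2.argmin` with an argmax over the components' posterior responsibilities, which coincides with nearest-mean assignment only when the components share a common spherical covariance and equal weights.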