An Alternative Prior Process for Nonparametric Bayesian Clustering


Hanna M. Wallach, Department of Computer Science, University of Massachusetts Amherst
Shane T. Jensen, Department of Statistics, The Wharton School, University of Pennsylvania
Lee Dicker, Department of Biostatistics, Harvard School of Public Health
Katherine A. Heller, Engineering Department, University of Cambridge

Appearing in Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS) 2010, Chia Laguna Resort, Sardinia, Italy. Volume 9 of JMLR: W&CP. Copyright 2010 by the authors.

Abstract

Prior distributions play a crucial role in Bayesian approaches to clustering. Two commonly-used prior distributions are the Dirichlet and Pitman-Yor processes. In this paper, we investigate the predictive probabilities that underlie these processes, and the implicit rich-get-richer characteristic of the resulting partitions. We explore an alternative prior for nonparametric Bayesian clustering, the uniform process, for applications where the rich-get-richer property is undesirable. We also explore the cost of this process: partitions are no longer exchangeable with respect to the ordering of variables. We present new asymptotic and simulation-based results for the clustering characteristics of the uniform process and compare these with known results for the Dirichlet and Pitman-Yor processes. We compare performance on a real document clustering task, demonstrating the practical advantage of the uniform process despite its lack of exchangeability over orderings.

1 Introduction

Nonparametric Bayesian models provide a powerful and popular approach to many difficult statistical problems, including document clustering (Zhang et al., 2005), topic modeling (Teh et al., 2006b), and clustering motifs in DNA sequences (Jensen and Liu, 2008).
The key assumption underlying nonparametric Bayesian models is the existence of a set of random variables drawn from some unknown probability distribution. This unknown probability distribution is itself drawn from some prior distribution. The Dirichlet process is one such prior for unknown probability distributions that has become ubiquitous in Bayesian nonparametric modeling, as reviewed by Müller and Quintana (2004). More recently, Pitman and Yor (1997) introduced the Pitman-Yor process, a two-parameter generalization of the Dirichlet process. These processes can also be nested within a hierarchical structure (Teh et al., 2006a; Teh, 2006). A key property of any model based on Dirichlet or Pitman-Yor processes is that the posterior distribution provides a partition of the data into clusters, without requiring that the number of clusters be pre-specified. However, previous work on nonparametric Bayesian clustering has paid little attention to the implicit a priori rich-get-richer property imposed by both the Dirichlet and Pitman-Yor processes. As we explore in section 2, this property is a fundamental characteristic of partitions generated by these processes, and leads to partitions consisting of a small number of large clusters, with rich-get-richer cluster usage. Although rich-get-richer cluster usage is appropriate for some clustering applications, there are others for which it is undesirable. As pointed out by Welling (2006), there exists a need for alternative priors in clustering models. In this paper, we explore one such alternative prior, the uniform process, which exhibits a very different set of clustering characteristics from either the Dirichlet process or the Pitman-Yor process. The uniform process was originally introduced by Qin et al. (2003) (page 438) as an ad hoc prior for DNA motif clustering.
However, it has received little attention in the subsequent statistics and machine learning literature, and its clustering characteristics have remained largely unexplored. We therefore compare the uniform process to the Dirichlet and Pitman-Yor processes in terms of asymptotic characteristics (section 3) as well as characteristics for sample sizes typical of those found in real clustering applications (section 4). One fundamental difference between the uniform process and the Dirichlet and Pitman-Yor processes is the uniform process's lack of exchangeability over cluster assignments: the probability $P(c)$ of a particular set of cluster assignments $c$ is not invariant under permutations of those assignments. Previous work on the uniform process has not acknowledged this issue with respect to either inference or probability calculations. We demonstrate that this lack of exchangeability is not a significant problem for applications where a more balanced prior assumption about cluster sizes is desired. We present a new Gibbs sampling algorithm for the uniform process that is correct for a fixed ordering of the cluster assignments, and show that while $P(c)$ is not invariant to permuted orderings, it can be highly robust. We also consider the uniform process in the context of a real text processing application: unsupervised clustering of a set of documents into natural, thematic groupings. An extensive and diverse array of models and procedures has been developed for this task, as reviewed by Andrews and Fox (2007). These approaches include nonparametric Bayesian clustering using the Dirichlet process (Zhang et al., 2005) and the hierarchical Dirichlet process (Teh et al., 2006a). Such nonparametric models are popular for document clustering since the number of clusters is rarely known a priori, and these models allow the number of clusters to be inferred along with the assignments of documents to clusters.
However, as we illustrate below, the Dirichlet process still places prior assumptions on the clustering structure: partitions will typically be dominated by a few very large clusters, with overall rich-get-richer cluster usage. For many applications, there is no a priori reason to expect that this kind of partition is preferable to other kinds of partitions, and in these cases the uniform process can be a better representation of prior beliefs than the Dirichlet process. We demonstrate that the uniform process leads to superior document clustering performance (quantified by the probability of unseen held-out documents under the model) over the Dirichlet process using a collection of carbon nanotechnology patents (section 6).

2 Predictive Probabilities for Clustering Priors

Clustering involves partitioning random variables $X = (X_1, \dots, X_N)$ into clusters. This procedure is often performed using a mixture model, which assumes that each variable was generated by one of $K$ mixture components characterized by parameters $\Phi = \{\phi_k\}_{k=1}^{K}$:

$$P(X_n \mid \Phi) = \sum_{k=1}^{K} P(c_n = k)\, P(X_n \mid \phi_k, c_n = k), \quad (1)$$

where $c_n$ is an indicator variable such that $c_n = k$ if and only if data point $X_n$ was generated by component $k$ with parameters $\phi_k$. Clustering can then be characterized as identifying the set of parameters responsible for generating each observation. The observations associated with parameters $\phi_k$ are those $X_n$ for which $c_n = k$. Together, these observations form cluster $k$. Bayesian mixture models assume that the parameters $\Phi$ come from some prior distribution $P(\Phi)$. Nonparametric Bayesian mixture models further assume that the probability that $c_n = k$ is well-defined in the limit as $K \to \infty$. This allows for more flexible mixture modeling, while avoiding costly model comparisons in order to determine the right number of clusters or components $K$.
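As a worked instance of (1), here is a toy sketch (ours, not the paper's) that evaluates the likelihood of an observation under a two-component Gaussian mixture; the Gaussian component form and function names are our own illustrative choices.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a univariate Gaussian at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_likelihood(x, weights, params):
    """Eq. (1): P(X_n | Phi) = sum_k P(c_n = k) P(X_n | phi_k, c_n = k).
    `weights` holds P(c_n = k); `params` holds (mu_k, sigma_k) for each component."""
    return sum(w * normal_pdf(x, mu, s) for w, (mu, s) in zip(weights, params))

# An observation near the first component gets almost all its likelihood from it.
value = mixture_likelihood(0.0, [0.5, 0.5], [(0.0, 1.0), (10.0, 1.0)])
```

Clustering then amounts to asking, for each observation, which component term in the sum plausibly generated it.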
From a generative perspective, in nonparametric Bayesian mixture modeling, each observation is assumed to have been generated by first selecting a set of component parameters $\phi_k$ from the prior and then generating the observation itself from the corresponding component. Clusters are therefore constructed sequentially. The component parameters responsible for generating a new observation are selected using the predictive probabilities: the conditional distribution over component parameters implied by a particular choice of priors over $\Phi$ and $c_n$. We next describe three priors (the Dirichlet, Pitman-Yor, and uniform processes) using their predictive probabilities. For notational convenience we define $\psi_n$ to be the component parameters for the mixture component responsible for observation $X_n$, such that $\psi_n = \phi_k$ when $c_n = k$.

2.1 Dirichlet Process

The Dirichlet process prior has two parameters: a concentration parameter $\theta$, which controls the formation of new clusters, and a base distribution $G_0$. Under a Dirichlet process prior, the conditional probability of the mixture component parameters $\psi_{N+1}$ associated with a new observation $X_{N+1}$, given the component parameters $\psi_1, \dots, \psi_N$ associated with previous observations $X_1, \dots, X_N$, is a mixture of point masses at the locations of $\psi_1, \dots, \psi_N$ and the base distribution $G_0$. Variables $X_n$ and $X_m$ are said to belong to the same cluster if and only if $\psi_n = \psi_m$.¹ This predictive probability formulation therefore sequentially constructs a partition, since observation $X_{N+1}$ belongs to either an existing cluster (if $\psi_{N+1} = \psi_n$ for some $n \le N$) or a new cluster consisting only of $X_{N+1}$ (if $\psi_{N+1}$ is drawn directly from $G_0$). If $\phi_1, \dots, \phi_K$ are the $K$ distinct values in $\psi_1, \dots, \psi_N$ and $N_1, \dots, N_K$ are the corresponding cluster sizes (i.e., $N_k = \sum_{n=1}^{N} \mathbb{I}(\psi_n = \phi_k)$), then

$$P(\psi_{N+1} \mid \psi_1, \dots, \psi_N, \theta, G_0) = \begin{cases} \dfrac{N_k}{N + \theta} & \psi_{N+1} = \phi_k \in \{\phi_1, \dots, \phi_K\} \\[4pt] \dfrac{\theta}{N + \theta} & \psi_{N+1} \sim G_0. \end{cases} \quad (2)$$

New observation $X_{N+1}$ joins existing cluster $k$ with probability proportional to $N_k$ (the number of previous observations in that cluster) and joins a new cluster, consisting of $X_{N+1}$ only, with probability proportional to $\theta$. This predictive probability is evident in the Chinese restaurant process metaphor (Aldous, 1985). The most obvious characteristic of the Dirichlet process predictive probability (2) is the rich-get-richer property: the probability of joining an existing cluster is proportional to the size of that cluster. New observations are therefore more likely to join already-large clusters. The rich-get-richer characteristic is also evident in the stick-breaking construction of the Dirichlet process (Sethuraman, 1994; Ishwaran and James, 2001), where each unique point mass is assigned a random weight. These weights are generated as a product of Beta random variables, which can be visualized as breaks of a unit-length stick. Earlier breaks of the stick will tend to lead to larger weights, which again gives rise to the rich-get-richer property.

2.2 Pitman-Yor Process

The Pitman-Yor process (Pitman and Yor, 1997) has three parameters: a concentration parameter $\theta$, a base distribution $G_0$, and a discount parameter $0 \le \alpha < 1$. Together, $\theta$ and $\alpha$ control the formation of new clusters. The Pitman-Yor predictive probability is

$$P(\psi_{N+1} \mid \psi_1, \dots, \psi_N, \theta, \alpha, G_0) = \begin{cases} \dfrac{N_k - \alpha}{N + \theta} & \psi_{N+1} = \phi_k \in \{\phi_1, \dots, \phi_K\} \\[4pt] \dfrac{\theta + K\alpha}{N + \theta} & \psi_{N+1} \sim G_0. \end{cases} \quad (3)$$

The Pitman-Yor process also exhibits the rich-get-richer property. However, the discount parameter $\alpha$ serves to reduce the probability of adding a new observation to an existing cluster.

¹ Assuming a continuous $G_0$.
This prior is particularly well-suited to natural language processing applications (Teh, 2006; Wallach et al., 2008) because it yields power-law behavior in cluster usage when $0 < \alpha < 1$.

2.3 Uniform Process

Predictive probabilities (2) and (3) result in partitions that are dominated by a few large clusters, since new observations are more likely to be assigned to larger clusters. For many tasks, however, a prior over partitions that induces more uniformly-sized clusters is desirable. The uniform process (Qin et al., 2003; Jensen and Liu, 2008) is one such prior. The predictive probability for the uniform process is given by

$$P(\psi_{N+1} \mid \psi_1, \dots, \psi_N, \theta, G_0) = \begin{cases} \dfrac{1}{K + \theta} & \psi_{N+1} = \phi_k \in \{\phi_1, \dots, \phi_K\} \\[4pt] \dfrac{\theta}{K + \theta} & \psi_{N+1} \sim G_0. \end{cases} \quad (4)$$

The probability that new observation $X_{N+1}$ joins one of the existing $K$ clusters is uniform over these clusters, and is unrelated to the cluster sizes. Although the uniform process has been used previously for clustering DNA motifs (Qin et al., 2003; Jensen and Liu, 2008), its usage has otherwise been extremely limited in the statistics and machine learning literature and its theoretical properties have thus far not been explored. Constructing prior processes using predictive probabilities can imply that the underlying prior results in nonexchangeability. If $c$ denotes a partition or set of cluster assignments for observations $X$, then the partition is exchangeable if the calculation of the full prior density of the partition $P(c)$ via the predictive probabilities is unaffected by the ordering of the cluster assignments. As discussed by Pitman (1996) and Pitman (2002), most sequential processes will fail to produce a partition that is exchangeable. The Dirichlet process and Pitman-Yor process predictive probabilities ((2) and (3)) both lead to exchangeable partitions. In fact, their densities are special cases of the exchangeable partition probability functions given by Ishwaran and James (2003).
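To make the three predictive rules concrete, here is a minimal sketch (ours, not from the paper) of one sequential assignment step under each of (2), (3), and (4); the helper names are our own.

```python
import random

def next_cluster_dp(counts, theta, rng):
    """Dirichlet process step (2): existing cluster k has weight N_k;
    a new cluster has weight theta (rich-get-richer)."""
    weights = counts + [theta]
    return rng.choices(range(len(weights)), weights=weights)[0]

def next_cluster_py(counts, theta, alpha, rng):
    """Pitman-Yor step (3): existing cluster k has weight N_k - alpha;
    a new cluster has weight theta + K * alpha."""
    K = len(counts)
    weights = [c - alpha for c in counts] + [theta + K * alpha]
    return rng.choices(range(K + 1), weights=weights)[0]

def next_cluster_uniform(counts, theta, rng):
    """Uniform process step (4): every existing cluster has weight 1,
    regardless of its size; a new cluster has weight theta."""
    K = len(counts)
    return rng.choices(range(K + 1), weights=[1.0] * K + [theta])[0]

def grow_partition(step, n, rng):
    """Apply a predictive step n times; return the cluster sizes N_k."""
    counts = []
    for _ in range(n):
        k = step(counts, rng)
        if k == len(counts):
            counts.append(0)   # the "new cluster" outcome
        counts[k] += 1
    return counts
```

For example, `grow_partition(lambda c, r: next_cluster_uniform(c, 10.0, r), 1000, random.Random(0))` tends to yield many clusters of comparable size, whereas the Dirichlet-process version concentrates most observations in a few large clusters.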
Green and Richardson (2001) and Welling (2006) discuss the relaxation of exchangeability in order to consider alternative prior processes. The uniform process does not ensure exchangeability: the prior probability $P(c)$ of a particular set of cluster assignments $c$ is not invariant under permutation of those cluster assignments. However, in section 5, we demonstrate that the nonexchangeability implied by the uniform process is not a significant problem for real data by showing that $P(c)$ is robust to permutations of the observations and hence cluster assignments.

3 Asymptotic Behavior

In this section, we compare the three priors implied by predictive probabilities (2), (3) and (4) in terms of the asymptotic behavior of two partition characteristics: the number of clusters $K_N$ and the distribution of cluster sizes $H_N = (H_{1,N}, H_{2,N}, \dots, H_{N,N})$, where $H_{M,N}$ is the number of clusters of size $M$ in a partition of $N$ observations. We begin by reviewing previous results for the Dirichlet and Pitman-Yor processes, and then present new results for the uniform process.

3.1 Dirichlet Process

As the number of observations $N \to \infty$, the expected number of unique clusters $K_N$ in a partition is

$$E(K_N \mid \text{DP}) = \sum_{n=1}^{N} \frac{\theta}{n - 1 + \theta} \approx \theta \log N. \quad (5)$$

The expected number of clusters of size $M$ is

$$\lim_{N \to \infty} E(H_{M,N} \mid \text{DP}) = \frac{\theta}{M}. \quad (6)$$

This well-known result (Arratia et al., 2003) implies that as $N \to \infty$, the expected number of clusters of size $M$ is inversely proportional to $M$, regardless of the value of $\theta$. In other words, in expectation, there will be a small number of large clusters and vice versa.

3.2 Pitman-Yor Process

Pitman (2002) showed that as $N \to \infty$, the expected number of unique clusters $K_N$ in a partition is

$$E(K_N \mid \text{PY}) \approx \frac{\Gamma(1 + \theta)}{\alpha\, \Gamma(\alpha + \theta)} N^{\alpha}. \quad (7)$$

Pitman's result can also be used to derive the expected number of clusters of size $M$ in a partition:

$$E(H_{M,N} \mid \text{PY}) \approx \frac{\Gamma(1 + \theta)}{\Gamma(\alpha + \theta)} \frac{\prod_{m=1}^{M-1} (m - \alpha)}{M!} N^{\alpha}. \quad (8)$$

3.3 Uniform Process

Previous literature on the uniform process does not contain any asymptotic results. We therefore present the following novel result for the expected number of unique clusters $K_N$ in a partition as $N \to \infty$:

$$E(K_N \mid \text{UP}) \approx \sqrt{2\theta}\, N^{\frac{1}{2}}. \quad (9)$$

A complete proof is given in the supplementary materials. In section 4, we also present simulation-based results that suggest the following conjecture for the expected number of clusters of size $M$ in a partition:

$$E(H_{M,N} \mid \text{UP}) \approx \theta. \quad (10)$$

This result corresponds well to the intuition underlying the uniform process: observations are a priori equally likely to join any existing cluster, regardless of size.
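As a quick numerical sanity check of (5), assuming nothing beyond the formula itself, the exact sum is easy to evaluate and can be compared against its $\theta \log N$ approximation; the function name below is our own.

```python
import math

def expected_clusters_dp(n, theta):
    """Exact E[K_N] under the Dirichlet process, i.e. the sum in (5)."""
    return sum(theta / (i - 1 + theta) for i in range(1, n + 1))

# For theta = 1 the sum is the harmonic number H_N, which approaches
# log N + 0.5772 (Euler's constant), matching the theta * log N asymptotic.
for n in (100, 10_000):
    print(n, round(expected_clusters_dp(n, 1.0), 3), round(math.log(n), 3))
```

The gap between the exact sum and $\theta \log N$ stays bounded as $N$ grows, which is why the logarithmic rate dominates comparisons at large sample sizes.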
3.4 Summary of Asymptotic Results

[Figure 1: Expected number of clusters $\hat{K}_N$ versus sample size $N$ for different $\theta$ (Dirichlet, uniform, and Pitman-Yor with $\alpha = 0.25, 0.5, 0.75$). Axes are on a log scale.]

The distribution of cluster sizes for the uniform process is dramatically different from that of either the Pitman-Yor or Dirichlet process, as evidenced by the results above, as well as the simulation-based results in section 4. The uniform process exhibits a uniform distribution of cluster sizes. Although the Pitman-Yor process can be made to behave similarly to the uniform process in terms of the expected number of clusters (by varying $\alpha$, as described below), it cannot be configured to exhibit a uniform distribution over cluster sizes, which is a unique aspect of the uniform process. Under the Dirichlet process, the expected number of clusters grows logarithmically with the number of observations $N$. In contrast, under the uniform process, the expected number of clusters grows with the square root of the number of observations $N$. The Pitman-Yor process implies that the expected number of clusters grows at a rate of $N^{\alpha}$. In other words, the Pitman-Yor process can lead to a slower or faster growth rate than the uniform process, depending on the value of the discount parameter $\alpha$. For $\alpha = 0.5$, the expected number of clusters grows at the same rate for both the Pitman-Yor process and the uniform process.

4 Simulation Comparisons: Finite N

The asymptotic results presented in the previous section are not necessarily applicable to real data, where the finite number of observations $N$ constrains the distribution of cluster sizes, since $\sum_M M H_{M,N} = N$. In this section, we appraise the finite-sample consequences for the Dirichlet, Pitman-Yor, and uniform processes via a simulation study. For each of the three processes, we simulated 1000 independent partitions for various values of sample size $N$ and concentration parameter $\theta$, and calculated the number of clusters $K_N$ and distribution of cluster sizes $H_N$ for each of the partitions.
4.1 Number of Clusters K_N

In figure 1, we examine the relationship between the number of observations $N$ and the average number of clusters $\hat{K}_N$ (averaged over the 1000 simulated partitions). For $\alpha = 0.5$, the Pitman-Yor process exhibits the same rate of growth of $\hat{K}_N$ as the uniform process, confirming the equality suggested by (7) and (9) when $\alpha = 0.5$. As postulated in section 3.2, the Pitman-Yor process can exhibit either slower (e.g., $\alpha = 0.25$) or faster (e.g., $\alpha = 0.75$) rates of growth of $\hat{K}_N$ than the uniform process. The rate of growth of $\hat{K}_N$ for the Dirichlet process is the slowest, as suggested by (5).

4.2 Distribution of Cluster Sizes

[Figure 2: Cluster sizes $H_{M,N}$ as a function of $M$ for different values of $N$ for the Dirichlet, Pitman-Yor ($\alpha = 0.5$), and uniform processes. Data are plotted on a log-log scale and the red lines indicate the asymptotic relationships. Each point is the average number of clusters (across 1000 simulated partitions) of a particular size.]

In this section, we examine the expected distribution of cluster sizes under each process. For brevity, we focus only on concentration parameter $\theta = 10$, though the same trends are observed for other values of $\theta$. Figure 2 is a plot of $\hat{H}_{M,N}$ (the average number of clusters of size $M$) as a function of $M$. For each process, $\hat{H}_{M,N}$ was calculated as the average over the 1000 simulated independent partitions of $H_{M,N}$ under that process. The red lines indicate the asymptotic relationships, i.e., (6) for the Dirichlet process, (8) for the Pitman-Yor process, and (10) for the uniform process. The results in figure 2 demonstrate that the simulated distribution of cluster sizes for the uniform process is quite different from the simulated distributions of cluster sizes for either the Dirichlet or Pitman-Yor processes.
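The simulated growth of $K_N$ under the uniform process can be reproduced in a few lines: only the new-versus-existing decision matters for $K_N$, since joining any existing cluster leaves $K$ unchanged. A sketch (ours, not the paper's simulation code), compared against the $\sqrt{2\theta N}$ rate from (9):

```python
import random

def uniform_process_K(n, theta, rng):
    """Number of clusters after n sequential uniform-process draws (4).
    Only the 'new cluster' event, probability theta/(K + theta), changes K."""
    K = 0
    for _ in range(n):
        if rng.random() < theta / (K + theta):
            K += 1
    return K

rng = random.Random(1)
n, theta = 2000, 10.0
sims = [uniform_process_K(n, theta, rng) for _ in range(200)]
mean_K = sum(sims) / len(sims)
print(mean_K, (2 * theta * n) ** 0.5)   # empirical mean vs. sqrt(2 * theta * N)
```

At finite $N$ the empirical mean sits somewhat below the asymptotic value, consistent with the finite-sample divergence visible in the figures.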
It is also interesting to observe the divergence from the asymptotic relationships due to the finite sample sizes, especially in the case of small $N$ (e.g., $N = 1000$).

5 Exchangeability

As mentioned in section 2, the uniform process does not lead to exchangeable partitions. Although the exchangeability of the Dirichlet and Pitman-Yor processes is desirable, these clustering models also exhibit the rich-get-richer property. Applied researchers are routinely forced to make assumptions when modeling real data. Even though the use of exchangeable priors can provide many practical advantages for clustering tasks, exchangeability itself is one particular modeling assumption, and there are situations in which the rich-get-richer property is disadvantageous. In reality, many data-generating processes are not exchangeable, e.g., news stories are published at different times and therefore have an associated temporal ordering. If one is willing to make an exchangeability assumption, then the Dirichlet process prior is a natural choice. However, it comes with additional assumptions about the size distribution of clusters. These assumptions will be reasonable in certain situations, but less reasonable in others. It should not be necessary to restrict applied researchers to exchangeable models, which can impose other undesired assumptions, when alternatives do exist. The uniform process sacrifices the exchangeability assumption in order to make a more balanced prior assumption about cluster sizes. In this section, we explore the lack of exchangeability of the uniform process by first examining, for real data, the extent to which $P(c)$ is affected by permuting the observations. For any particular ordering of observations $X = (X_1, \dots, X_N)$, the joint probability of the corresponding cluster assignments $c$ is

$$P(c \mid \text{ordering } 1, \dots, N) = \prod_{n=1}^{N} P(c_n \mid c_{<n}), \quad (11)$$

where $c_{<n}$ denotes the cluster assignments for observations $X_1, \dots, X_{n-1}$ and $P(c_n \mid c_{<n})$ is given by (4).
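The product in (11) under predictive (4) is straightforward to evaluate for any one ordering, which is exactly what the between-ordering comparison requires; a small sketch with our own function names:

```python
import math, random

def log_prob_uniform(assignments, theta):
    """log P(c) from (11): multiply uniform-process predictives (4)
    along the given ordering of cluster assignments."""
    seen, logp = set(), 0.0
    for c in assignments:
        K = len(seen)
        if c in seen:
            logp -= math.log(K + theta)                    # existing: 1/(K + theta)
        else:
            logp += math.log(theta) - math.log(K + theta)  # new: theta/(K + theta)
            seen.add(c)
    return logp

# Between-ordering spread for a toy partition with cluster sizes (5, 3, 2):
rng = random.Random(0)
c = [0] * 5 + [1] * 3 + [2] * 2
vals = []
for _ in range(50):
    perm = c[:]
    rng.shuffle(perm)
    vals.append(log_prob_uniform(perm, theta=1.0))
```

Even this toy case shows that $\log P(c)$ depends on where in the sequence each cluster first appears, which is the nonexchangeability being measured.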
Clearly, exhaustive evaluation of $P(c)$ for all possible orderings (permutations of observations) is not possible for realistically-sized data sets. However, we can evaluate the robustness of $P(c)$ to different orderings as follows: for any given partition $c$ (set of cluster assignments), we can compute the standard deviation of $\log P(c)$ over multiple different orderings of the observations. This between-ordering SD gives an estimate of the degree to which the ordering of observations affects $P(c)$ for a particular partition. For any given ordering of observations, we can also compute the standard deviation of $\log P(c)$ over multiple different partitions (realizations of $c$) inferred using the Gibbs sampling algorithm described below. This between-partition SD gives an estimate of the variability of inferred partitions for a fixed ordering.

[Figure 3: Comparison of the between-partition SD and the between-ordering SD (averaged over different inferred partitions) of $\log P(c)$ for the uniform process with six different values of concentration parameter $\theta$.]

Figure 3 shows the between-ordering SD and the between-partition SD for partitions of 1000 carbon nanotechnology patent abstracts (see next section), obtained using five Gibbs sampling chains and 5000 orderings of the data with different values of $\theta$. The variability between orderings is considerably smaller than the variability between partitions, suggesting that uniform process clustering results are not significantly sensitive to different orderings. These results are encouraging for applications where one is willing to sacrifice exchangeability over orderings in favor of a more balanced prior assumption about cluster sizes.

6 Document Clustering Application

In this section, we compare the Dirichlet and uniform processes on the task of clustering real documents, specifically the abstracts of 1200 carbon nanotechnology patents.
Dirichlet processes have been used as the basis of many approaches to document clustering, including those of Zhang et al. (2005), Zhu et al. (2005) and Wallach (2008). In practice, however, there is often little justification for the a priori rich-get-richer property exhibited by the Dirichlet process. We consider a nonparametric word-based mixture model where documents are clustered into groups on the basis of word occurrences. The model assumes the following generative process: the tokens $w_d$ that comprise each document, indexed by $d$, are drawn from a document-specific distribution over words $\phi_d$, which is itself drawn from a document-specific Dirichlet distribution with base distribution $n_{c_d}$ and concentration parameter $\beta$. The document-specific base distribution is obtained by selecting a cluster assignment from the uniform process. If an existing cluster is selected, then $n_{c_d}$ is set to the cluster-specific distribution over words for that cluster. If a new cluster is selected, then a new cluster-specific distribution over words is drawn from $G_0$, and $n_{c_d}$ is set to that distribution:

$$P(c_d \mid c_{<d}, \theta) = \begin{cases} \dfrac{1}{K_{d-1} + \theta} & c_d = k \in \{1, \dots, K_{d-1}\} \\[4pt] \dfrac{\theta}{K_{d-1} + \theta} & c_d = k_{\text{new}} \end{cases} \quad (12)$$

$$n_k \sim G_0 \quad (13)$$

$$\phi_d \sim \text{Dir}(n_{c_d}, \beta) \quad (14)$$

$$w_d \sim \text{Mult}(\phi_d), \quad (15)$$

where $c_d$ is the cluster assignment for the $d$th document and $K_{d-1}$ is the number of clusters among the first $d-1$ documents. Finally, $G_0$ is chosen to be a hierarchical Dirichlet distribution: $G_0 = \text{Dir}(\beta_1 n)$, where $n \sim \text{Dir}(\beta_0 u)$ and $u$ is the uniform distribution over word types. This model captures the fact that documents in different clusters are likely to use different vocabularies, yet allows the word distribution for each document to vary slightly from the word distribution for the cluster to which that document belongs. The key consequence of using either a Dirichlet or uniform process prior is that the latent variables $n_d$ are partitioned into $C$ clusters, where the value of $C$ does not need to be pre-specified and fixed. The vector $c$ denotes the cluster assignments for the documents: $c_d$ is the cluster assignment for document $d$.
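The generative process (12)-(15) can be sketched end to end; the version below is our own simplification (a symmetric Dirichlet stands in for the paper's hierarchical base distribution $G_0$, and all function names are hypothetical):

```python
import random

def dirichlet(alpha, rng):
    """Sample from Dirichlet(alpha) via normalized Gamma draws."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def generate_corpus(D, n_tokens, vocab_size, theta, beta, beta0, rng):
    """Sketch of (12)-(15): each document picks a cluster via the uniform
    process; its word distribution phi_d is a Dirichlet draw centred on the
    cluster distribution n_k; tokens are then drawn from phi_d."""
    clusters, docs, assignments = [], [], []
    for _ in range(D):
        K = len(clusters)
        if rng.random() < theta / (K + theta):
            clusters.append(dirichlet([beta0] * vocab_size, rng))  # new n_k ~ G_0
            k = K
        else:
            k = rng.randrange(K)            # uniform over existing clusters
        assignments.append(k)
        phi = dirichlet([beta * p for p in clusters[k]], rng)      # eq. (14)
        docs.append(rng.choices(range(vocab_size), weights=phi, k=n_tokens))
    return docs, assignments
```

Documents sharing a cluster are drawn from nearby word distributions, so clusters correspond to shared vocabularies, as the model intends.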
Given a set of observed documents $W = \{w_d\}_{d=1}^{D}$, Gibbs sampling (Geman and Geman, 1984) can be used to infer the latent cluster assignments $c$. Specifically, the cluster assignment $c_d$ for document $d$ can be resampled from

$$P(c_d \mid c_{\setminus d}, W, \theta, \beta) \propto P(c_d \mid c_{\setminus d}, \theta)\, P(w_d \mid c_d, c_{\setminus d}, W_{\setminus d}, \beta), \quad (16)$$

where $c_{\setminus d}$ and $W_{\setminus d}$ denote the sets of cluster assignments and documents, respectively, excluding document $d$. The vector $\beta = (\beta, \beta_1, \beta_0)$ represents the concentration parameters in the model, which can be inferred from $W$ using slice sampling (Neal, 2003), as described by Wallach (2008). The likelihood component of (16) is

$$P(w_d \mid c_d, c_{\setminus d}, W_{\setminus d}, \beta) = \prod_{n=1}^{N_d} \frac{N^{<d,n}_{w_n \mid d} + \beta\, \dfrac{N^{<d,n}_{w_n \mid c_d} + \beta_1 \dfrac{N^{<d,n}_{w_n} + \beta_0 W^{-1}}{\sum_w N^{<d,n}_{w} + \beta_0}}{\sum_w N^{<d,n}_{w \mid c_d} + \beta_1}}{n - 1 + \beta}, \quad (17)$$

where the superscript $<d,n$ denotes a quantity computed from documents $1, \dots, d$ but, for document $d$, from positions $1, \dots, n-1$ only. $N_{w \mid d}$ is the number of times word type $w$ occurs in document $d$, $N_{w \mid c_d}$ is the number of times $w$ occurs in cluster $c_d$, and $N_w$ is the number of times $w$ occurs in the entire corpus. The conditional prior probability $P(c_d \mid c_{\setminus d}, \theta)$ can be constructed using any of the predictive probabilities in section 2. For brevity, we focus on the (commonly-used) Dirichlet process and the uniform process. For the Dirichlet process, the conditional prior probability is given by (2). Since the uniform process lacks exchangeability over observations, we condition upon an arbitrary ordering of the documents, e.g., $1, \dots, D$. The conditional prior of $c_d$ given $c_{\setminus d}$ is therefore

$$P(c_d \mid c_{\setminus d}, \theta, \text{ordering } 1, \dots, D) \propto P(c_d \mid c_1, \dots, c_{d-1}, \theta) \prod_{m=d+1}^{D} P(c_m \mid c_1, \dots, c_{m-1}, \theta), \quad (18)$$

where $P(c_d \mid c_1, \dots, c_{d-1}, \theta)$ is given by (4). The latter terms propagate the value of $c_d$ to the cluster assignments $c_{d+1}, \dots, c_D$ for the documents that follow document $d$ in the chosen ordering. With this definition of the conditional prior, the Gibbs sampling algorithm is a correct clustering procedure for $W$, conditioned on the arbitrarily imposed ordering of the documents. We compare the Dirichlet and uniform process priors by using the model (with each prior) to cluster 1200 carbon nanotechnology patent abstracts. For each prior, we use Gibbs sampling and slice sampling to infer cluster assignments $c^{\text{train}}$ and $\beta$ for a subset $W^{\text{train}}$ of 1000 training abstracts. Since the results in section 5 indicate that the variability between partitions is greater than the variability between orderings, we use a single ordering of $W^{\text{train}}$ and perform five runs of the Gibbs sampler. To provide insight into the role of $\theta$, we compare results for several $\theta$ values.
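Concretely, the prior term in (18) can be evaluated by rescoring the full sequential prior with $c_d$ set to each candidate label; since the terms for positions before $d$ are shared, they cancel under normalization. A minimal sketch (our own helper names, not the authors' code):

```python
import math

def log_joint_uniform(c, theta):
    """log P(c) under the uniform process for a fixed ordering, as in (11)."""
    seen, lp = set(), 0.0
    for ci in c:
        K = len(seen)
        lp += math.log((theta if ci not in seen else 1.0) / (K + theta))
        seen.add(ci)
    return lp

def gibbs_log_prior(c, d, theta):
    """Log of the conditional prior (18) for c_d, up to an additive constant:
    set c_d to each existing cluster label or a fresh label and rescore the
    whole sequential prior, so later terms feel the change in c_d."""
    others = c[:d] + c[d + 1:]
    labels = sorted(set(others))
    fresh = max(labels, default=-1) + 1
    return {k: log_joint_uniform(c[:d] + [k] + c[d + 1:], theta)
            for k in labels + [fresh]}
```

In a full sampler these log-prior weights would be added to the log-likelihood terms from (17) before normalizing and sampling a new $c_d$.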
We evaluate predictive performance by computing the probability of a held-out set $W^{\text{test}}$ of 200 abstracts given each run of the trained model. We compute $\log P(W^{\text{test}} \mid D^{\text{train}}, \theta, \beta) = \log \sum_{c^{\text{test}}} P(W^{\text{test}}, c^{\text{test}} \mid D^{\text{train}}, \theta, \beta)$, where $D^{\text{train}} = (W^{\text{train}}, c^{\text{train}})$ and the sum over $c^{\text{test}}$ is approximated using a novel variant of Wallach et al.'s (2009) left-to-right algorithm (see supplementary materials). We average this quantity over runs of the Gibbs sampler for $W^{\text{train}}$, runs of the left-to-right algorithm, and twenty permutations of the held-out data $W^{\text{test}}$. The left-hand plot of figure 4 compares the Dirichlet and uniform processes in terms of $\log P(W^{\text{test}} \mid D^{\text{train}}, \theta, \beta)$. Regardless of the value of concentration parameter $\theta$, the model based on the uniform process leads to systematically higher held-out probabilities than the model based on the Dirichlet process. In other words, the uniform process provides a substantially better fit for the data in this application. The right-hand plot of figure 4 compares the Dirichlet and uniform processes in terms of the average number of clusters in a representative partition obtained using the Gibbs sampler. The uniform process leads to a greater number of clusters than the Dirichlet process for each value of $\theta$. This is not surprising given the theoretical results for the a priori expected cluster sizes (section 3) and the fact that the choice of clustering prior clearly influences the posterior distribution in this application.

7 Discussion

The Dirichlet and Pitman-Yor processes both exhibit a rich-get-richer property that leads to partitions with a small number of relatively large clusters and vice versa. This property is seldom fully acknowledged by practitioners when using either process as part of a nonparametric Bayesian clustering model. We examine the uniform process prior, which does not exhibit this rich-get-richer property.
The uniform process prior has received relatively little attention in the statistics literature to date, and its clustering characteristics have remained largely unexplored. We provide a comprehensive comparison of the uniform process with the Dirichlet and Pitman-Yor processes, and present a new asymptotic result for the square-root growth of the expected number of clusters under the uniform process. We also conduct a simulation study for finite sample sizes that demonstrates a substantial difference in cluster-size distributions between the uniform process and the Pitman-Yor and Dirichlet processes. Previous work on the uniform process has ignored its lack of exchangeability. We present new results demonstrating that although the uniform process is not invariant to permutations of cluster assignments, it is highly robust. Finally, we compare the uniform and Dirichlet processes on a real document clustering task, demonstrating superior predictive performance of the uniform process over the Dirichlet process.

Acknowledgements

This work was supported in part by the Center for Intelligent Information Retrieval, in part by CIA, NSA and NSF under NSF grant #IIS , and in part by subcontract #B from Lawrence Livermore National Security, LLC, prime contractor to DOE/NNSA contract #DE-AC52-07NA. Any opinions, findings and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsor.
