Bayesian mixture modeling using a mixture of finite mixtures with normalized inverse Gaussian weights
Abstract
In Bayesian inference for mixture models with an unknown number of components, a finite mixture model that assumes prior distributions for the mixing weights and the number of components is usually employed. This model is called a mixture of finite mixtures (MFM). As a prior distribution for the weights, a (symmetric) Dirichlet distribution is widely used because of its conjugacy and computational simplicity, although the choice of its concentration parameter strongly influences the estimate of the number of components. In this paper, we focus on estimating the number of components. As a robust alternative to Dirichlet weights, we present a method based on a mixture of finite mixtures with normalized inverse Gaussian weights. The motivation is similar to the use of normalized inverse Gaussian processes instead of Dirichlet processes for infinite mixture modeling. By introducing latent variables, posterior computation is carried out by blocked Gibbs sampling without the reversible jump algorithm. The performance of the proposed method is illustrated through numerical experiments and real data examples, including clustering, density estimation, and community detection.
Keywords: Bayesian nonparametrics; Clustering; Dirichlet distribution; Inverse Gaussian distribution; Markov chain Monte Carlo; Mixture models
1 Introduction
We consider a finite mixture model with an unknown number of components:
p(y \mid K, \boldsymbol{w}, \boldsymbol{\theta}) = \sum_{k=1}^{K} w_k f(y \mid \theta_k),    (1.1)
where $K$ is the number of components, $y$ is a $p$-dimensional observation vector, and $\boldsymbol{w} = (w_1, \ldots, w_K)$ is the mixing weight vector satisfying $w_k \geq 0$ and $\sum_{k=1}^{K} w_k = 1$. For $k = 1, \ldots, K$, $f(\cdot \mid \theta_k)$ is a probability density or probability mass function parameterized by the component-specific parameter $\theta_k$. The function $f$ is also called the kernel of the mixture model. Each observation is assumed to arise from one of the components, and each component is weighted according to its frequency of occurrence. Mixture models are important statistical tools for model-based clustering and density estimation because they can represent complex data-generating distributions by combining multiple components. Finite mixture models have been applied in many fields, for instance, sociology (Handcock et al., 2007), economics (Frühwirth-Schnatter and Kaufmann, 2008), and genetics (McLachlan et al., 2002). In applications, determining the number of components is a very important issue. Correctly estimating the number of components improves model interpretability, the accuracy of kernel parameter estimates and model predictions, and reduces computation time. Although various methods have been proposed for selecting the number of components, such as model selection criteria and hypothesis testing, these criteria make it difficult to quantify the uncertainty about the number of components and introduce the bias associated with carrying out model selection. Comprehensive reviews of (finite) mixture models are provided by McLachlan (2000), Frühwirth-Schnatter (2006), Frühwirth-Schnatter et al. (2019), and McLachlan et al. (2019). In this paper, we focus on the case where the number of components $K$ is finite and make a clear distinction between the number of components $K$ and the number of clusters $K_+$. In general, the number of components is written as $K = K_+ + K_0$, where $K_+$ is the number of components to which data are actually assigned and $K_0$ is the number of empty components (see also Argiento and De Iorio, 2022).
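To fix ideas, the following minimal R sketch generates data from a two-component normal mixture of the form (1.1); the particular weights, means, and standard deviations are illustrative choices, not values from the paper.

# Simulate n observations from a two-component normal mixture (illustrative values)
set.seed(1)
n <- 500
w <- c(0.3, 0.7)                                # mixing weights, summing to one
mu <- c(-2, 3); sigma <- c(1, 0.5)              # component-specific parameters
z <- sample(1:2, n, replace = TRUE, prob = w)   # latent component labels
y <- rnorm(n, mean = mu[z], sd = sigma[z])      # observations from the selected components
# The marginal density of y is w[1]*dnorm(y, mu[1], sigma[1]) + w[2]*dnorm(y, mu[2], sigma[2])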
In Bayesian analysis, a finite mixture model that assumes prior distributions for the mixing weights and the number of components is usually employed (see, e.g., Nobile, 1994; Miller and Harrison, 2018). Such a model is also called a mixture of finite mixtures (MFM), and it is used as often as infinite mixture models such as Dirichlet process mixture models. Although the reversible jump algorithm (Green, 1995; Richardson and Green, 1997) has been used to obtain samples from the posterior distribution of an MFM, it faces significant computational and implementation challenges. Methods based on marginal likelihoods have also been proposed (see, e.g., Nobile and Fearnside, 2007), although, as with reversible jump, computation remains an issue. The sparse MFM (Rousseau and Mengersen, 2011; Malsiner-Walli et al., 2016) has also been proposed; it focuses on estimating the number of clusters by using an overfitted model with a large fixed number of components and choosing the prior distribution of the mixing weights appropriately. Under some conditions, the number of clusters was shown to be consistent for the true number of components. Since this approach allows for many empty components, the number of components itself cannot be estimated, and it is not easy to quantify its uncertainty. Recently, the use of nonparametric Bayesian methods for the MFM to estimate the number of components has attracted much attention. In Miller and Harrison (2018), the exchangeable partition distribution is derived by marginalizing out the number of components, and a restaurant-process sampler is constructed to overcome computational difficulties. In Frühwirth-Schnatter et al. (2021), a generalized MFM in which the parameter of the Dirichlet distribution depends on the number of components is proposed. Miller and Harrison (2018) derived the theoretical result that the marginal posterior probabilities of the number of components and the number of clusters are asymptotically equivalent when the sample size is large, but, as with general nonparametric Bayesian methods, only samples of the number of clusters are obtained. Thus, the number of components cannot be sampled directly from the posterior distribution. Although the method of Frühwirth-Schnatter et al. (2021) can sample the number of components directly, as a sparse MFM it has the disadvantage of producing a large number of empty components and overestimating the number of components. In addition, most MFM studies use a symmetric Dirichlet distribution as the prior for the mixing weight vector, and the choice of its hyperparameter is very important. Although it is possible to place a prior distribution on the hyperparameter and learn it from the data, a Metropolis-Hastings step is then required.
The main purposes of this paper are: 1) to sample the number of components directly and more efficiently than the MFM based on the Dirichlet distribution; 2) to estimate the number of components reasonably by suppressing empty components; and 3) to propose an MFM that is robust to the choice of the hyperparameters of the distribution on the simplex. To this end, we employ a normalized inverse Gaussian distribution as the prior for the mixing weight vector of the finite mixture model. To the best of our knowledge, there are few studies using normalized inverse Gaussian distributions as mixing weights in finite mixture models. In Bayesian nonparametric inference with infinite mixtures, Lijoi et al. (2005) proposed the normalized inverse Gaussian process and derived analytical results for the posterior distribution under this process. Just as the model of Miller and Harrison (2018) is a finite version of the Dirichlet process, the model presented in this paper corresponds to a finite counterpart of Lijoi et al. (2005). Furthermore, leveraging data augmentation with a latent gamma random variable and the results of Argiento and De Iorio (2022), we construct an efficient posterior sampling algorithm.
This paper is organized as follows. In Section 2, we introduce the MFM and its equivalent representation using discrete probability measures. We also discuss the data augmentation and the relevant conditional posterior distribution, which are crucial for constructing the computational algorithm. In Section 3, we present the proposed method as well as the posterior computation algorithm. In Section 4, we present numerical studies comparing the proposed method with existing methods. R code implementing the proposed methods is available at the GitHub repository:
https://github.com/Fumiya-Iwashige/MFM-Inv-Ga
2 Mixture of finite mixtures
We introduce a mixture of finite mixtures and its equivalent representation of the discrete probability measure. The proposed model and the posterior computation algorithm presented in the next section are largely based on the basic model presented in this section.
2.1 Formulation
Let each observation be univariate or multivariate, taking values in a Euclidean space. An MFM model is a statistical model defined by the following hierarchical representation.
(2.1)
where the kernel is a parametric model with a component-specific parameter, the point mass notation denotes a degenerate distribution, and each categorical probability is an element of the weight vector. The prior density function of the parameters in the mixture components and the corresponding parameter space are denoted as above. Each latent allocation variable indicates to which component the corresponding observation is allocated. Given the number of components, the mixing weight vector follows a probability distribution on the unit simplex; that is, this distribution is the prior for the mixing weights. We focus on the case where the mixing weights can be further hierarchically expressed as follows:
S_1, \ldots, S_K \mid K \overset{\text{iid}}{\sim} H, \qquad w_k = \frac{S_k}{\sum_{j=1}^{K} S_j}, \quad k = 1, \ldots, K,    (2.2)
where $S_1, \ldots, S_K$ are the unnormalized weights and $\boldsymbol{w} = (w_1, \ldots, w_K)$ is the (normalized) mixing weight vector. This representation is the most basic way to generate a probability distribution on the unit simplex. One of the best-known distributions on the simplex is the Dirichlet distribution: if the unnormalized weights in (2.2) follow a gamma distribution, the normalized weights follow a Dirichlet distribution, and a common shape parameter yields the symmetric Dirichlet distribution. The symmetric Dirichlet distribution is often used in the MFM framework because of its conjugacy with categorical distributions and its computational simplicity. The symmetric structure is also essential for marginalizing out the weights and deriving the exchangeable partition distribution. However, it is known that the estimation results are sensitive to the choice of the shape parameter. For example, Miller and Harrison (2018) recommended a default choice that works well in many cases, while the use of a small shape parameter was recommended for sparse or generalized MFMs (Rousseau and Mengersen, 2011; Malsiner-Walli et al., 2016; Frühwirth-Schnatter et al., 2021). A small shape parameter implies that many empty components may be created. Although most previous studies allow empty components to appear, this causes several problems. First, the number of components tends to be overestimated, making the interpretation of the model difficult. Second, the existence of empty components decreases the predictive performance of the model, as we will see in a later section through density estimation. Finally, empty components lead to an increase in computation time. When we focus on estimating the number of components, it is desirable to have few empty components; in other words, the discrepancy between the number of components and the number of clusters should be small. To achieve this, we use the normalized inverse Gaussian distribution for the mixing weights instead of the Dirichlet distribution.
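As an illustration of (2.2), the following R sketch generates mixing weights by normalizing independent positive random variables: normalizing gamma draws yields Dirichlet weights, while normalizing inverse Gaussian draws yields the normalized inverse Gaussian weights used later in the paper. The inverse Gaussian sampler below assumes the mean-shape parametrization IG(mu, lambda) and uses the transformation method of Michael, Schucany and Haas; all parameter values are illustrative, not the paper's settings.

# Inverse Gaussian sampler, mean-shape parametrization IG(mu, lambda)
# (Michael-Schucany-Haas transformation method)
r_invgauss <- function(n, mu, lambda) {
  nu <- rnorm(n)^2
  x <- mu + mu^2 * nu / (2 * lambda) -
    mu / (2 * lambda) * sqrt(4 * mu * lambda * nu + mu^2 * nu^2)
  u <- runif(n)
  ifelse(u <= mu / (mu + x), x, mu^2 / x)
}

set.seed(1)
K <- 5  # number of components (illustrative)

# Dirichlet weights: normalize independent gamma draws
S_gamma <- rgamma(K, shape = 0.5, rate = 1)
w_dirichlet <- S_gamma / sum(S_gamma)

# Normalized inverse Gaussian weights: normalize independent IG draws
S_ig <- r_invgauss(K, mu = 1, lambda = 0.5)
w_nig <- S_ig / sum(S_ig)

round(rbind(w_dirichlet, w_nig), 3)  # both rows sum to one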
2.2 Equivalent representations using discrete probability measures
We give an equivalent representation of the MFM. We can construct a discrete measure in the parameter space , almost surely, where , and are realizations from the distributions , and , respectively. Let be random variables according to . Then, the model (2.1) is equivalent to the following hierarchical representation using a random measure .
(2.3)
where the first line denotes the distribution of the observations given the random measure, whose parameters are the weights, the atoms, and the number of components. The representation (2.3) describes the MFM in the same form as nonparametric Bayesian infinite mixture models. If we replace the random measure with the Dirichlet process, (2.3) becomes the well-known Dirichlet process mixture model (e.g., Escobar and West, 1995); if we replace it with the normalized inverse Gaussian process, (2.3) becomes the normalized inverse Gaussian process mixture model (Lijoi et al., 2005). For the MFM, Argiento and De Iorio (2022) proposed the normalized independent finite point process (Norm-IFPP), a flexible class of prior distributions for the mixing measure. We employ the representation (2.3) with the Norm-IFPP. The advantages are as follows. We can directly estimate both the number of components and the number of clusters; this is a major difference from Miller and Harrison (2018), which estimates the number of components indirectly through the number of clusters. Furthermore, an efficient Gibbs sampler can be constructed by incorporating data augmentation with a latent gamma random variable. This data augmentation enables us to overcome the lack of conjugacy with the categorical distribution: instead of the probability density function of the normalized weights, we can work with the density function of the unnormalized weights. This is the key to building an efficient MCMC algorithm for the proposed model.
Our study is in the spirit of nonparametric Bayes in that it accounts for uncertainty in the prior distribution of the parameters by means of a random measure based on the Norm-IFPP. The details of the Norm-IFPP and the independent finite point process (IFPP) are given in Argiento and De Iorio (2022).
2.3 Data augmentation and conditional distribution of
We describe the data augmentation and the conditional posterior distribution of the number of unallocated components. We introduce the data augmentation into models (2.1) and (2.2); this technique is employed in James et al. (2009) and Argiento and De Iorio (2022). Let the allocated and unallocated index sets denote the components with and without assigned observations, respectively. The conditional joint distribution of the unnormalized weights and the number of components, given a label vector, is
where the normalizing constant is the total unnormalized mass. Since this expression consists of categorical distributions, conjugacy with the categorical distribution is required for Gibbs sampling. This restriction is relaxed by using a latent random variable. In fact,
Thus, the conditional distribution of each unnormalized weight is
(2.4)
(2.5)
If it is easy to generate random variables from the distributions in (2.4) and (2.5), an efficient Gibbs sampling algorithm can be constructed. For example, when the prior distribution of the unnormalized weights is an inverse Gaussian distribution, (2.4) and (2.5) are generalized inverse Gaussian distributions. By introducing the latent variable, the update of the total unnormalized mass is replaced by the update of the latent variable. Thus, the choice of the distribution of the unnormalized weights is essential for the MCMC updates.
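A minimal sketch of how (2.4) and (2.5) can be sampled when the unnormalized weights have inverse Gaussian priors. It assumes the mean-shape parametrization IG(mu, lambda), whose density is proportional to s^(-3/2) exp{-lambda (s - mu)^2 / (2 mu^2 s)}; multiplying by s^(n_k) exp(-u s) then gives a generalized inverse Gaussian (GIG) full conditional of order n_k - 1/2. The paper's exact parametrization may differ, and the GIGrvg package is assumed here only for convenience.

# install.packages("GIGrvg")   # assumed available; provides rgig(n, lambda, chi, psi)
library(GIGrvg)

# Full conditional of one unnormalized weight S_k under an IG(mu, lambda) prior,
# given the latent variable u and the number n_k of observations in component k:
#   p(S_k | ...) is proportional to
#   S_k^(n_k - 3/2) * exp{ -(lambda/mu^2 + 2*u) * S_k / 2 - lambda / (2*S_k) },
# i.e. GIG with order n_k - 1/2, chi = lambda, psi = lambda/mu^2 + 2*u.
draw_S_k <- function(n_k, u, mu, lambda) {
  rgig(1, lambda = n_k - 1/2, chi = lambda, psi = lambda / mu^2 + 2 * u)
}

set.seed(1)
draw_S_k(n_k = 10, u = 0.3, mu = 1, lambda = 0.5)  # allocated component, cf. (2.4)
draw_S_k(n_k = 0,  u = 0.3, mu = 1, lambda = 0.5)  # unallocated component, cf. (2.5)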
Under this data augmentation, the conditional distribution of the number of unallocated components is established in Theorem 5.1 of Argiento and De Iorio (2022). This theorem states that if the mixing measure follows a Norm-IFPP, then the posterior distribution of the random measure, given the latent variable, is a superposition of a finite point process with fixed points and an IFPP. The IFPP characterizes the process of unallocated jumps, and the discrete probability distribution that serves as its parameter corresponds to the distribution of the number of unallocated components. The conditional distribution of the number of unallocated components is given by
(2.6) |
where the number of clusters is the number of unique values of the label vector, and the Laplace transform is that of the unnormalized weight distribution. We sample the number of unallocated components from (2.6) in the MCMC algorithm and then straightforwardly obtain the number of components by adding the number of clusters.
Remark 2.1.
Although we use the data augmentation, the distributions (2.4) and (2.5) are generally complex. In addition, the result of Argiento and De Iorio (2022) applies only to unnormalized weight distributions whose Laplace transform is available in closed form. Thus, the inverse Gaussian distribution is one of the few examples that satisfy both the computational and theoretical requirements.
3 Methodology
In this section, we propose a mixture of finite mixtures with normalized inverse Gaussian weights. Moreover, we develop an efficient posterior sampling algorithm based on Argiento and De Iorio (2022).
3.1 Mixture of finite mixtures with normalized inverse Gaussian weights
We propose a mixture of finite mixtures with normalized inverse Gaussian weights (denoted by MFM-Inv-Ga), where the notation makes the shape parameter explicit because it is essential for computation. Our proposed model only requires (2.2) to be
where is the inverse Gaussian distribution with shape parameter and scale parameter , and is the normalized inverse Gaussian distribution with parameters for . The probability density function of the inverse Gaussian distribution is given by
(3.1) |
where is the shape parameter and is the scale parameter. The mean and variance of are given by
(3.2) |
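As a quick numerical check of (3.1) and (3.2), the following base R sketch evaluates the inverse Gaussian density and verifies that it integrates to one and has the stated mean. It assumes the common mean-shape parametrization IG(mu, lambda), whose mean is mu and whose variance is mu^3/lambda; the paper's shape/scale parametrization may differ.

# Inverse Gaussian density, mean-shape parametrization IG(mu, lambda)
dinvgauss <- function(s, mu, lambda) {
  sqrt(lambda / (2 * pi * s^3)) * exp(-lambda * (s - mu)^2 / (2 * mu^2 * s))
}

mu <- 2; lambda <- 0.5
# The density integrates to one and its mean equals mu (the variance would be mu^3/lambda)
integrate(function(s) dinvgauss(s, mu, lambda), 0, Inf)$value      # approximately 1
integrate(function(s) s * dinvgauss(s, mu, lambda), 0, Inf)$value  # approximately mu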
From Lijoi et al. (2005), the probability density function of the normalized inverse Gaussian distribution with parameters is given by
(3.3) |
where the argument lies in the unit simplex and the modified Bessel function of the third kind appears in the normalizing constant (see also Ghosal and van der Vaart, 2017). Because conjugacy with the categorical distribution breaks down, constructing an efficient Gibbs sampler is challenging. Moreover, because of the complexity of (3.3), the calculation in Miller and Harrison (2018) is intractable: constructing an MCMC algorithm based on the restaurant process by marginalizing out the weights is extremely challenging. To overcome this difficulty, we use the data augmentation of Subsection 2.3 and the method proposed in Argiento and De Iorio (2022). As a result, it suffices to use (3.1) and (3.8), rather than (3.3), to construct our MCMC algorithm.
3.2 Posterior computation
We present a fast and efficient posterior computation algorithm for the proposed method. To this end, we adopt the blocked Gibbs sampling scheme of Argiento and De Iorio (2022). In what follows, the hyper-parameter of the prior distribution on the number of components, the joint prior density function of the remaining parameters, the cluster sizes, and the Laplace transform of the unnormalized weight distribution are denoted as in the previous sections.
We summarize the algorithm in Algorithm 1. The key to this algorithm is the data augmentation through the latent variable introduced in Subsection 2.3. It is important to sample the number of empty components from (2.6) in Step 4: this step allows direct sampling of the number of components, obtained by adding the number of clusters, and the label variables are then updated as in a finite mixture model with a given number of components in Step 2. This update is the same as the telescoping sampler proposed by Frühwirth-Schnatter et al. (2021), and it is more efficient than the classical restaurant process. In implementation, however, it is important to choose the unnormalized weight distribution so that the series in Steps 3 and 4 can be written analytically and so that random variables can easily be generated from the full conditional distributions of the weights. In Steps 5 and 7, if the prior of the unnormalized weights is the inverse Gaussian distribution, each full conditional is a generalized inverse Gaussian (GIG) distribution, immediately derived from (2.4) and (2.5); it is easy to generate random numbers from the GIG distribution. In Steps 6 and 7, the allocated unnormalized weights and kernel parameters are updated. When the number of components sampled in Step 4 exceeds the number of allocated components, Steps 8 and 9 are executed; thus, when many empty components are produced, the computation takes longer.
3.3 Prior distributions for the number of mixture components
Assume that the number of components follows a discrete probability distribution supported on the positive integers. In this paper, we consider a Poisson prior shifted by one (that is, the number of components minus one follows a Poisson distribution), because the constraints in Steps 3 and 4 are then satisfied. In fact, from Argiento and De Iorio (2022), we have
(3.4) |
Assuming this shifted Poisson prior, the full conditional distributions of the number of unallocated components and of the number of components are given by
respectively, where the latter is the former distribution shifted by the number of clusters. If we consider a negative binomial distribution as the prior for the number of components, the full conditional distributions can also be obtained, but learning the hyper-parameters becomes slightly more troublesome.
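The following R sketch illustrates how the number of unallocated components can be drawn in Step 4. It assumes, following the posterior characterization in Argiento and De Iorio (2022), that the full conditional probability of m unallocated components is proportional to q(m + k) (m + k)! / m! psi(u)^m, where k is the number of clusters, q is the prior on the number of components, and psi is the Laplace transform of the unnormalized weight distribution; with the shifted Poisson prior this can be evaluated by simple truncation. The inverse Gaussian Laplace transform below assumes the mean-shape parametrization IG(mu, lambda), so the formula is an assumption about the parametrization rather than a restatement of the paper's equation (3.8).

# Laplace transform of IG(mu, lambda) (mean-shape parametrization, assumed)
psi_ig <- function(u, mu, lambda) {
  exp((lambda / mu) * (1 - sqrt(1 + 2 * mu^2 * u / lambda)))
}

# Draw the number of unallocated (empty) components given the latent variable u,
# the number of clusters k, and a Poisson(Lambda) prior on (number of components - 1).
# pmf(m) proportional to q(m + k) * (m + k)! / m! * psi(u)^m, for m = 0, ..., m_max.
draw_M_na <- function(u, k, Lambda, mu, lambda, m_max = 200) {
  m <- 0:m_max
  logw <- dpois(m + k - 1, Lambda, log = TRUE) +  # shifted Poisson prior q(m + k)
    lgamma(m + k + 1) - lgamma(m + 1) +           # combinatorial factor (m + k)! / m!
    m * log(psi_ig(u, mu, lambda))                # psi(u)^m
  sample(m, 1, prob = exp(logw - max(logw)))
}

set.seed(1)
M_na <- draw_M_na(u = 0.5, k = 3, Lambda = 2, mu = 1, lambda = 0.5)
M <- M_na + 3   # number of components = unallocated components + clusters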
3.4 Specification of kernels
The choice of kernel is important, and an appropriate kernel must be selected for the purpose at hand. Although we do not discuss kernel selection in detail, we present some well-known kernels for the sake of completeness.
3.4.1 Cluster analysis and density estimation
One of the most famous and useful kernels is the (multivariate) normal kernel. In the univariate case, we often use the normal-inverse gamma model as the prior distribution of the kernel parameters. One of its hyper-parameters is called a smoothing parameter and plays an important role in density estimation; it is possible to place a hierarchical prior on it. In the multivariate case, we often employ the normal-inverse Wishart model as the prior distribution of the kernel parameters, with a positive definite scale matrix. We will use normal kernels in the numerical experiments in Sections 4.1 and 4.2 for clustering and density estimation. In the context of finite mixture models, various other kernels have been proposed. If we have prior information that the data are skewed, a skew-normal or skew-t kernel is also useful (see, e.g., Frühwirth-Schnatter and Pyne, 2010). The skew-normal and skew-t kernels are easy to handle because they admit scale mixture of normals representations (apart from the degrees of freedom of the skew-t distribution).
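As an illustration of the univariate normal kernel with a normal-inverse gamma prior, the sketch below draws the component-specific mean and variance from the conjugate posterior given the observations currently assigned to a component. The hyper-parameter names (m0, k0, a0, b0) and their values are illustrative assumptions, not the paper's settings.

# Conjugate update for a normal kernel with a normal-inverse gamma prior:
#   sigma2 ~ Inv-Gamma(a0, b0),  mu | sigma2 ~ N(m0, sigma2 / k0)
draw_normal_nig <- function(y, m0 = 0, k0 = 0.01, a0 = 2, b0 = 1) {
  n <- length(y); ybar <- mean(y)
  kn <- k0 + n
  mn <- (k0 * m0 + n * ybar) / kn
  an <- a0 + n / 2
  bn <- b0 + 0.5 * sum((y - ybar)^2) + k0 * n * (ybar - m0)^2 / (2 * kn)
  sigma2 <- 1 / rgamma(1, shape = an, rate = bn)     # inverse gamma draw
  mu <- rnorm(1, mean = mn, sd = sqrt(sigma2 / kn))  # normal draw given sigma2
  c(mu = mu, sigma2 = sigma2)
}

set.seed(1)
y_k <- rnorm(30, mean = 5, sd = 2)   # observations assigned to one component (illustrative)
draw_normal_nig(y_k)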
3.4.2 Network analysis
As an application of the proposed method, we perform community detection on network data. Community detection is the task of identifying densely connected subgroups in a network; it corresponds to clustering together with estimating the number of components, which is called the number of communities in network analysis. Note that the number of components of the finite mixture model is equivalent to the number of communities in the network, and both are denoted in the same way. Estimating the number of communities is an important problem, and various methods have been proposed (Shi and Malik, 2000; Newman, 2004; White and Smyth, 2005). From a model-based perspective, it is essentially the same as estimating the number of components in a finite mixture model, and we can apply the MFM. The stochastic block model is a well-known statistical model for network data (Henze, 1986; Nowicki and Snijders, 2001; Geng et al., 2019); it assumes a stochastic block structure behind the network and specifies the community structure by estimating the probability of edges being drawn between each pair of groups. We estimate the number of communities with the MFM within the framework of the stochastic block model. Geng et al. (2019) proposed a stochastic block model based on the MFM with Dirichlet weights and constructed an algorithm similar to that of Miller and Harrison (2018).
The MFM can easily be applied to community detection by modifying the kernel. The data are replaced by the adjacency matrix, whose dimension equals the number of nodes. An entry equal to one indicates that an edge is drawn between the corresponding pair of nodes, and an entry equal to zero indicates that it is not. For simplicity, we assume that the network is undirected and has no self-loops, so that the adjacency matrix is symmetric with zero diagonal. The stochastic block model is formulated as follows,
(3.5) |
where the block probability matrix is symmetric and defines the stochastic block structure of the network, and the number of components equals the number of communities. Each element of the block matrix represents the probability that an edge is drawn between a node belonging to one community and a node belonging to another (or the same) community. To perform community detection with the proposed model, we simply set the Bernoulli likelihood as the kernel.
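The following R sketch generates an undirected network from a stochastic block model of the form (3.5) and evaluates the Bernoulli log-likelihood of the adjacency matrix given the community labels and the block probability matrix. The community sizes and edge probabilities are illustrative assumptions.

set.seed(1)
n_nodes <- 60
z <- rep(1:3, each = 20)                 # true community labels (illustrative)
Q <- matrix(0.05, 3, 3); diag(Q) <- 0.4  # block probabilities: dense within, sparse between

# Generate a symmetric adjacency matrix with zero diagonal
A <- matrix(0, n_nodes, n_nodes)
for (i in 1:(n_nodes - 1)) {
  for (j in (i + 1):n_nodes) {
    A[i, j] <- rbinom(1, 1, Q[z[i], z[j]])
    A[j, i] <- A[i, j]
  }
}

# Bernoulli log-likelihood of the upper triangle given labels z and block matrix Q
sbm_loglik <- function(A, z, Q) {
  ll <- 0
  n <- nrow(A)
  for (i in 1:(n - 1)) for (j in (i + 1):n) {
    p <- Q[z[i], z[j]]
    ll <- ll + A[i, j] * log(p) + (1 - A[i, j]) * log(1 - p)
  }
  ll
}
sbm_loglik(A, z, Q)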
3.5 Evaluation of the number of empty components
From the viewpoint of model interpretability, generalization performance, and computational efficiency, it is desirable that the number of empty components be small. For the full conditional distribution of the number of empty components, the following bounds hold.
Proposition 3.1.
For the full conditional distribution of the number of empty components, the inequalities
(3.6)
(3.7)
hold, where the Laplace transform is that of the unnormalized weight distribution.
Proof.
When ,
Thus, the conditional expectation of is
Furthermore,
Finally, the result follows from Markov’s inequality. ∎
Proposition 3.1 shows that the Laplace transform plays an important role in the generation of empty components. The critical difference between the inverse Gaussian and gamma distributions in estimating the number of components in the MFM lies in their Laplace transforms. Let the two Laplace transforms correspond to the inverse Gaussian and gamma distributions, respectively. Then we have
(3.8)
(3.9)
Both Laplace transforms are decreasing functions of their argument; the former decays exponentially, while the latter decays only polynomially. Figure 1 shows the Laplace transforms of the inverse Gaussian and gamma distributions for matched shape and scale parameters. It can be seen that the inverse Gaussian Laplace transform decreases much faster than that of the gamma distribution. The speed of this decrease has a significant impact on the appearance of empty components when estimating the number of components.
[Figure 1: Laplace transforms of the inverse Gaussian and gamma distributions.]
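The decay behavior discussed above can be reproduced with the short R sketch below, which compares the two Laplace transforms on a grid. The formulas assume the mean-shape parametrization IG(mu, lambda) and the shape-rate parametrization Gamma(a, b); they are standard expressions that stand in for (3.8) and (3.9), whose exact parametrization in the paper may differ. The parameters are chosen so that the two distributions have matched means and variances.

# Laplace transforms of the unnormalized weight distributions
psi_ig <- function(u, mu, lambda) exp((lambda / mu) * (1 - sqrt(1 + 2 * mu^2 * u / lambda)))
psi_ga <- function(u, a, b) (b / (b + u))^a   # gamma with shape a and rate b

u <- seq(0, 50, length.out = 200)
plot(u, psi_ga(u, a = 0.5, b = 1), type = "l", lty = 2, ylim = c(0, 1),
     xlab = "u", ylab = "Laplace transform")
lines(u, psi_ig(u, mu = 0.5, lambda = 0.25), lty = 1)   # same mean (0.5) and variance (0.5)
legend("topright", legend = c("gamma", "inverse Gaussian"), lty = c(2, 1))
# The inverse Gaussian transform decays much faster, so fewer empty components are generated.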
4 Empirical demonstrations
We evaluate the performance of the MFM-Inv-Ga and MFM-Ga methods through some numerical experiments. Recall that is the shape parameter of the prior distribution of unnormalized weights in the proposed method (MFM-Inv-Ga), while is that of MFM-Ga.
4.1 Inference for the number of mixture components and clustering
In this subsection, we illustrate the performance of clustering and inference for the number of components using artificial and real data.
4.1.1 Artificial data
In this simulation, we assume that . We generate data from the following multivariate normal distribution:
where the component mean vectors and mixing weights are specified and the common covariance matrix is the identity matrix. We set the sample size and generate the datasets. The number of MCMC iterations is fixed, and the first half of the samples is discarded as a burn-in period. The prior on the number of components is specified as in Section 3.3. We employ the multivariate normal kernel and the normal-inverse Wishart prior for the kernel parameters, where the prior mean is the sample mean vector and the scale matrix is the sample covariance matrix. The shape parameters of the two methods are chosen so that the means and variances of the inverse Gaussian and gamma priors coincide; hence, the first and second moments of the inverse Gaussian and gamma distributions are matched for each shape parameter (see also Lijoi et al., 2005). To measure performance, we consider the following three criteria.
• The posterior mean of the number of components.
• The posterior mean of the Rand index. The Rand index is a measure of clustering fit taking values between 0 and 1; when it is close to 1, the estimated assignment is reasonable. (An R sketch computing the Rand index is given after this list.)
• The posterior probability that the number of empty components is equal to zero.
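A minimal base R implementation of the Rand index between two label vectors, counting the proportion of pairs of observations that are grouped consistently; the example labels are illustrative.

# Rand index between two clusterings z1 and z2 (proportion of concordant pairs)
rand_index <- function(z1, z2) {
  n <- length(z1)
  same1 <- outer(z1, z1, "==")
  same2 <- outer(z2, z2, "==")
  agree <- (same1 == same2)
  # count each unordered pair once (exclude the diagonal)
  sum(agree[upper.tri(agree)]) / choose(n, 2)
}

z_true <- c(1, 1, 1, 2, 2, 3)
z_est  <- c(1, 1, 2, 2, 2, 3)
rand_index(z_true, z_est)   # 1 means perfect agreement between the partitions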
The respective averages over repetitions are denoted by and .
We report the posterior mean of the number of components in Figure 2. It is observed that the results of the MFM-Inv-Ga method remain almost the same even if the shape parameter is varied. In contrast, the MFM-Ga method tends to overestimate the number of mixture components as the shape parameter decreases.
[Figure 2: Posterior means of the number of components for MFM-Inv-Ga and MFM-Ga across shape parameters.]
Table 1 shows the clustering performance and the posterior probability that the number of empty components is equal to zero. For clustering, the MFM-Inv-Ga and MFM-Ga methods are comparable: both have reasonable clustering accuracy, with MFM-Ga only slightly better than MFM-Inv-Ga. As the shape parameters decrease, the Rand index increases slightly and its standard deviation decreases. This is natural because, for data whose components have relatively large cluster sizes, a suitable prior on the simplex is one that places large mass near the edges or vertices of the simplex. It is important to note that Figure 2 and Table 1 show that the MFM-Inv-Ga method produces few empty components in all scenarios and provides reasonable estimates of the number of components, whereas MFM-Ga does not.
[Table 1: Clustering performance (Rand index) and posterior probability of no empty components for MFM-Inv-Ga and MFM-Ga under each shape parameter.]
This can also be seen in Table 2, which shows that MFM-Inv-Ga assigns a higher posterior probability to the true number of components than MFM-Ga and, together with Table 1, that the posterior distributions of the number of components and the number of clusters behave similarly.
[Table 2: Posterior distribution of the number of components for MFM-Inv-Ga and MFM-Ga.]
Table 3 shows the CPU times of MFM-Inv-Ga and MFM-Ga over the repetitions of MCMC. For one setting of the shape parameters, MFM-Inv-Ga is faster than MFM-Ga on average, although it shows more variability. For the other setting, MFM-Ga is very time-consuming because many empty components are created. As a result, MFM-Inv-Ga attains the same clustering accuracy as MFM-Ga and outperforms it in terms of estimating the number of components and CPU time.
[Table 3: CPU times of MFM-Inv-Ga and MFM-Ga.]
4.1.2 Thyroid Data
We apply the proposed method to the well-known thyroid data, which are available from the R package mclust and widely used as a benchmark for clustering. Each patient's thyroid condition is recorded and classified into three categories: normal, hypo, and hyper. Using these labels as the true labels, the main interest is the accuracy of the clustering and whether the number of components is estimated to be three.
We use the same model and shape parameters as in Section 4.1.1. We independently run MCMC chains with different initial values and combine the resulting samples. We compare MFM-Inv-Ga and MFM-Ga through the posterior mean of the number of components, the posterior probability defined in Section 4.1.1, and the posterior mean of the Rand index (RI). Table 4 shows that the results of this real data analysis are similar to those of Section 4.1.1: MFM-Inv-Ga provides a reasonable estimate of the number of components for all shape parameters by not producing empty components, whereas MFM-Ga overestimates it. Furthermore, MFM-Inv-Ga is slightly more accurate than MFM-Ga.
[Table 4: Posterior means of the number of components, posterior probabilities, and Rand index for the thyroid data.]
4.2 Density estimation
As seen in the previous subsection, the main difference between the MFM-Inv-Ga and MFM-Ga methods is the frequency with which empty components occur. The MFM-Ga method can achieve very high clustering accuracy by setting a small shape parameter and allowing empty components. However, a small shape parameter not only makes the model difficult to interpret but also degrades its predictive accuracy. In this subsection, we examine this phenomenon through density estimation using predictive distributions.
We use the well-known galaxy dataset, a small dataset consisting of the velocities (km/sec) of galaxies. The data are widely used in nonparametric Bayesian statistics as a benchmark for density estimation and cluster analysis; details are found in Roeder (1990). We employ the univariate normal kernel with the normal-inverse gamma conjugate prior for the kernel parameters, with hyper-parameters set as in Richardson and Green (1997). One of the hyper-parameters controls the smoothness of the estimated density function; we place a prior on this smoothing parameter with the settings of Escobar and West (1995). These parameters should be learned carefully from the data, since density estimation is sensitive to their choice. The first half of the MCMC samples is discarded as a burn-in period, and the shape parameters are the same as in Section 4.1. We evaluate the estimated densities and the posterior probabilities of the numbers of components, clusters, and empty components for each shape parameter.
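For reference, the following R sketch shows how a posterior predictive density estimate is typically formed from MCMC output for a normal-kernel mixture: for each retained draw, the mixture density is evaluated on a grid and the results are averaged. The draws below are randomly generated stand-ins for actual MCMC output and are purely illustrative.

# Average the mixture density over (hypothetical) posterior draws of (w, mu, sigma2)
predictive_density <- function(grid, draws) {
  dens <- sapply(draws, function(d) {
    rowSums(sapply(seq_along(d$w),
                   function(k) d$w[k] * dnorm(grid, d$mu[k], sqrt(d$sigma2[k]))))
  })
  rowMeans(dens)
}

set.seed(1)
# Stand-in for MCMC output: a list of draws, each with weights, means, and variances
draws <- replicate(100, {
  K <- sample(2:4, 1)
  w <- rgamma(K, 1); w <- w / sum(w)
  list(w = w, mu = rnorm(K, 20, 5), sigma2 = 1 / rgamma(K, 2, 2))
}, simplify = FALSE)

grid <- seq(5, 40, length.out = 200)
plot(grid, predictive_density(grid, draws), type = "l",
     xlab = "velocity", ylab = "estimated density")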
[Figure 3: Estimated density functions for the galaxy data using MFM-Inv-Ga and MFM-Ga under different shape parameters.]
We show the estimated density functions based on the posterior means in Figure 3. The shapes of the densities estimated by MFM-Inv-Ga hardly depend on the choice of the shape parameter. In contrast, the results of the MFM-Ga method are strongly influenced by this choice: for small shape parameters, MFM-Ga cannot capture the two large peaks in the middle.
[Table 5: Posterior probabilities for the galaxy data under MFM-Inv-Ga and MFM-Ga.]
The number of clusters in the galaxy data has been reported to take a small number of typical values in existing studies. From Table 5, the posterior distributions of the number of clusters under MFM-Inv-Ga place large probability on these values for all shape parameters, and the posterior distributions induced by MFM-Inv-Ga are more similar to one another than those induced by MFM-Ga. For the number of components, both MFM-Inv-Ga and MFM-Ga overestimate when the shape parameters are small, as can be seen in Table 6. The reason is that the sample size is small, so empty components are easily produced. Tables 5 and 6 show that the degree of overestimation is much smaller for MFM-Inv-Ga than for MFM-Ga.
[Table 6: Posterior probabilities of the number of components for the galaxy data under MFM-Inv-Ga and MFM-Ga.]
Focusing on the number of clusters, it is interesting that MFM-Inv-Ga is more robust to the choice of the shape parameter than MFM-Ga. This suggests that the prior distribution of the number of clusters induced by MFM-Inv-Ga is less informative than that induced by MFM-Ga; that is, the same relationship holds between MFM-Inv-Ga and MFM-Ga as between the normalized inverse Gaussian process and the Dirichlet process. In Lijoi et al. (2005), the prior on the number of clusters induced by the normalized inverse Gaussian process is less sensitive to the precision parameter than that induced by the Dirichlet process. However, the induced prior can be written in closed form for the normalized inverse Gaussian process, while that for MFM-Inv-Ga is only available in a complicated integral form.
The shape parameter of MFM-Ga should be chosen carefully because it has a significant impact on clustering, density estimation, and the appearance of empty components. In contrast, MFM-Inv-Ga is much more robust to the choice of its shape parameter and is therefore much easier to use.
4.3 Community detection
We apply the proposed method to community detection for network data, making comparisons similar to those in Section 4.1 for both artificial and real data. Since the number of components of the finite mixture model is equivalent to the number of communities of the network, we use the same notation to denote the number of communities.
4.3.1 Artificial data
First, we illustrate the performance of the proposed method using simulated data. We fix the true number of communities and the number of nodes in the network, and consider a balanced network in which the true allocation consists of communities with equal numbers of nodes. For the true block probability matrix, we assume that the within-community edge probabilities are larger than the between-community probabilities, so that edges are more easily drawn within the same community than between different communities. In this setup, we generate the datasets.
We employ the Bernoulli likelihood (3.5) as the kernel with a beta prior on each block probability. The values of the shape parameters, the evaluation criteria, and the MCMC settings are the same as in Section 4.1.1.
[Figure 4: Posterior means of the number of communities for MFM-Inv-Ga and MFM-Ga.]
We report the posterior mean of the number of communities in Figure 4. It is observed that the results are almost the same as those of Figure 2.
[Table 7: Clustering accuracy and posterior probability of no empty components for the simulated networks.]
Table 7 also shows that the results for the posterior probability of no empty components are the same as in Table 1. In terms of clustering accuracy for network data, however, MFM-Inv-Ga is more accurate than MFM-Ga.
[Table 8: Posterior distribution of the number of communities for the simulated networks.]
From Table 8, MFM-Inv-Ga places a much higher posterior probability on the true number of communities for all shape parameters than MFM-Ga. As a result, in the context of community detection, MFM-Inv-Ga also achieves better clustering and community-number estimation than MFM-Ga.
We also compared the computation time with the method of Geng et al. (2019), denoted MFM-Geng, for selected shape parameters. The results in Table 9 indicate that MFM-Inv-Ga is on average more than twice as fast as MFM-Geng. The reason is that our algorithm does not update the label variables through the restaurant process and MFM-Inv-Ga has a structure that is unlikely to produce empty components.
MFM-Geng can estimate the number of clusters but cannot directly estimate the number of communities. Furthermore, it is difficult in MFM-Geng to estimate the hyperparameter of the Dirichlet weights, because complex series calculations involving this parameter are required. In summary, MFM-Inv-Ga (and MFM-Ga) are superior to MFM-Geng in terms of computation time, direct estimation of the number of communities, and estimation of an essential hyperparameter.
[Table 9: Computation times of MFM-Inv-Ga and MFM-Geng.]
4.3.2 Dolphins social network data
The dolphin social network data are often used as a benchmark and can be obtained at http://www-personal.umich.edu/mejn/netdata/. The data form an undirected graph representing a small-scale animal social network of bottlenose dolphins off Doubtful Sound, New Zealand. Each node represents a dolphin, and an edge is drawn if two dolphins appear to be closely associated with each other. It is well known from previous studies that the network has two communities. Details of the data are found in Lusseau et al. (2003).
As before, we compare the posterior distribution of the number of communities between MFM-Inv-Ga and MFM-Ga, and we also report the co-clustering matrix of MFM-Inv-Ga to quantify clustering uncertainty. In this analysis, the remaining settings are the same as in the previous subsection.
[Table 10: Posterior distribution of the number of communities for the dolphin network data.]
From Table 10, MFM-Inv-Ga successfully estimates the number of communities regardless of the value of the shape parameter, whereas MFM-Ga does not. We report the co-clustering matrices of MFM-Inv-Ga and MFM-Geng in Figure 5; the result of MFM-Inv-Ga is almost identical to that reported in Geng et al. (2019). Furthermore, we confirmed that changing the value of the shape parameter changes neither the co-clustering matrices nor the clustering solution based on MAP estimation. This shows the robustness of the proposed method.
[Figure 5: Co-clustering matrices of MFM-Inv-Ga and MFM-Geng for the dolphin network data.]
5 Concluding remarks
We proposed a mixture of finite mixtures (MFM) model based on the normalized inverse Gaussian distribution and constructed an efficient posterior sampling algorithm based on Argiento and De Iorio (2022). The proposed method is a finite analogue of the normalized inverse Gaussian process proposed by Lijoi et al. (2005). We illustrated the performance of the proposed method for clustering, density estimation, and community detection, in comparison with existing MFM models based on the Dirichlet distribution (e.g., Miller and Harrison, 2018; Geng et al., 2019; Argiento and De Iorio, 2022). The proposed method is robust to the choice of the shape hyper-parameter and provides reasonable estimates of the number of components and communities compared with MFMs based on the Dirichlet prior. Moreover, by suppressing the appearance of empty components, the proposed method also has reasonable predictive performance in the sense of density estimation.
The drawbacks of the proposed method are as follows. Some quantities in the model do not have closed-form marginal distributions because it is not easy to marginalize out the weights. For example, when we focus on clustering, obtaining an interpretable prior distribution for the number of clusters is very important for incorporating subjective prior information (see, e.g., Zito et al., 2023); however, the proposed model does not lead to a tractable marginal prior distribution for the number of clusters. Moreover, in the mixture of finite mixtures, the distribution over the simplex is based on the normalization of independent random variables, so the correlation between categories cannot be modeled flexibly. The normalized inverse Gaussian distribution has negative covariances, as does the Dirichlet distribution, which may be inappropriate for data with positive correlations between categories, such as the proportions of co-occurring symbiotic organisms, disease complication data, or gene expression data. To address this problem, it may be necessary to construct the MFM in a more general framework that removes the independence assumption in the Norm-IFPP of Argiento and De Iorio (2022). The construction of MFM models using a more flexible distribution over the simplex that can also express positive correlations, such as the generalized Dirichlet distribution (Wong, 1998), is an interesting future topic. Furthermore, the proposed model can be applied to spatial data (e.g., Geng et al., 2021) and functional data (e.g., Hu et al., 2023). For network data, it would be worthwhile to extend the MFM to networks with weighted edges, degree-corrected stochastic block models, and mixed membership stochastic block models.
Acknowledgement
This work was supported by the Japan Society for the Promotion of Science through the establishment of university fellowships towards the creation of science and technology innovation (Grant Number JPMJFS2129), and partially supported by JSPS (grant number 21K13835).
References
- Argiento, R. and M. De Iorio (2022). Is infinity that far? A Bayesian nonparametric perspective of finite mixture models. The Annals of Statistics 50(5), 2641–2663.
- Escobar, M. D. and M. West (1995). Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association 90(430), 577–588.
- Frühwirth-Schnatter, S. (2006). Finite Mixture and Markov Switching Models. Springer.
- Frühwirth-Schnatter, S., G. Celeux, and C. P. Robert (2019). Handbook of Mixture Analysis. CRC Press.
- Frühwirth-Schnatter, S. and S. Kaufmann (2008). Model-based clustering of multiple time series. Journal of Business & Economic Statistics 26(1), 78–89.
- Frühwirth-Schnatter, S., G. Malsiner-Walli, and B. Grün (2021). Generalized mixtures of finite mixtures and telescoping sampling. Bayesian Analysis 16(4), 1279–1307.
- Frühwirth-Schnatter, S. and S. Pyne (2010). Bayesian inference for finite mixtures of univariate and multivariate skew-normal and skew-t distributions. Biostatistics 11(2), 317–336.
- Geng, J., A. Bhattacharya, and D. Pati (2019). Probabilistic community detection with unknown number of communities. Journal of the American Statistical Association 114(526), 893–905.
- Geng, J., W. Shi, and G. Hu (2021). Bayesian nonparametric nonhomogeneous Poisson process with applications to USGS earthquake data. Spatial Statistics 41, 100495.
- Ghosal, S. and A. W. van der Vaart (2017). Fundamentals of Nonparametric Bayesian Inference, Volume 44. Cambridge University Press.
- Green, P. J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 82(4), 711–732.
- Handcock, M. S., A. E. Raftery, and J. M. Tantrum (2007). Model-based clustering for social networks. Journal of the Royal Statistical Society Series A: Statistics in Society 170(2), 301–354.
- Henze, N. (1986). A probabilistic representation of the 'skew-normal' distribution. Scandinavian Journal of Statistics 13(4), 271–275.
- Hu, G., J. Geng, Y. Xue, and H. Sang (2023). Bayesian spatial homogeneity pursuit of functional data: an application to the US income distribution. Bayesian Analysis 18(2), 579–605.
- James, L. F., A. Lijoi, and I. Prünster (2009). Posterior analysis for normalized random measures with independent increments. Scandinavian Journal of Statistics 36(1), 76–97.
- Lijoi, A., R. H. Mena, and I. Prünster (2005). Hierarchical mixture modeling with normalized inverse-Gaussian priors. Journal of the American Statistical Association 100(472), 1278–1291.
- Lusseau, D., K. Schneider, O. J. Boisseau, P. Haase, E. Slooten, and S. M. Dawson (2003). The bottlenose dolphin community of Doubtful Sound features a large proportion of long-lasting associations: can geographic isolation explain this unique trait? Behavioral Ecology and Sociobiology 54, 396–405.
- Malsiner-Walli, G., S. Frühwirth-Schnatter, and B. Grün (2016). Model-based clustering based on sparse finite Gaussian mixtures. Statistics and Computing 26(1), 303–324.
- McLachlan, G. (2000). Finite Mixture Models. A Wiley-Interscience publication.
- McLachlan, G. J., R. W. Bean, and D. Peel (2002). A mixture model-based approach to the clustering of microarray expression data. Bioinformatics 18(3), 413–422.
- McLachlan, G. J., S. X. Lee, and S. I. Rathnayake (2019). Finite mixture models. Annual Review of Statistics and Its Application 6(1), 355–378.
- Miller, J. W. and M. T. Harrison (2018). Mixture models with a prior on the number of components. Journal of the American Statistical Association 113(521), 340–356.
- Newman, M. E. (2004). Detecting community structure in networks. The European Physical Journal B 38, 321–330.
- Nobile, A. (1994). Bayesian Analysis of Finite Mixture Distributions. Carnegie Mellon University.
- Nobile, A. and A. T. Fearnside (2007). Bayesian finite mixtures with an unknown number of components: the allocation sampler. Statistics and Computing 17, 147–162.
- Nowicki, K. and T. A. B. Snijders (2001). Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association 96(455), 1077–1087.
- Richardson, S. and P. J. Green (1997). On Bayesian analysis of mixtures with an unknown number of components (with discussion). Journal of the Royal Statistical Society Series B: Statistical Methodology 59(4), 731–792.
- Roeder, K. (1990). Density estimation with confidence sets exemplified by superclusters and voids in the galaxies. Journal of the American Statistical Association 85(411), 617–624.
- Rousseau, J. and K. Mengersen (2011). Asymptotic behaviour of the posterior distribution in overfitted mixture models. Journal of the Royal Statistical Society Series B: Statistical Methodology 73(5), 689–710.
- Shi, J. and J. Malik (2000). Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(8), 888–905.
- White, S. and P. Smyth (2005). A spectral clustering approach to finding communities in graphs. In Proceedings of the 2005 SIAM International Conference on Data Mining, pp. 274–285. SIAM.
- Wong, T.-T. (1998). Generalized Dirichlet distribution in Bayesian analysis. Applied Mathematics and Computation 97(2-3), 165–181.
- Zito, A., T. Rigon, and D. B. Dunson (2023). Bayesian nonparametric modeling of latent partitions via Stirling-gamma priors. arXiv preprint arXiv:2306.02360.