In this article, we describe a speaker adaptation method based on the probabilistic 2-mode analysis of training models. Probabilistic 2-mode analysis is a probabilistic extension of multilinear analysis. We apply probabilistic 2-mode analysis to speaker adaptation by representing each of the hidden Markov model mean vectors of training speakers as a matrix, and derive the speaker adaptation equation in the maximum a posteriori (MAP) framework. The adaptation equation becomes similar to the speaker adaptation equation using the MAP linear regression adaptation. In the experiments, the adapted models based on probabilistic 2-mode analysis showed performance improvement over the adapted models based on Tucker decomposition, which is a representative multilinear decomposition technique, for small amounts of adaptation data while maintaining good performance for large amounts of adaptation data.

In automatic speech recognition (ASR) systems using hidden Markov models (HMMs) [1], mismatches between the training and testing conditions lead to performance degradation. One such mismatch results from speaker variation. Thus, speaker adaptation techniques [2] are employed to transform a well-trained canonical model (e.g., a speaker-independent (SI) HMM) toward the target speaker. Speaker adaptation requires less adaptation data than is needed to build a speaker-dependent (SD) model. Among speaker adaptation techniques, eigenvoice (EV) [3] expresses the model of a new speaker as a linear combination of basis vectors, which are built from the principal component analysis (PCA) of the HMM mean vectors of training speakers.

In a similar approach, speaker adaptation based on tensor analysis using Tucker decomposition [4] was investigated in [5], where bases were constructed from the multilinear decomposition of a tensor consisting of the HMM mean vectors of training speakers. In that approach, all the training models were collectively arranged in a third-order tensor (3-D array):

$\mathcal{M} \in \mathbb{R}^{R \times D \times S} \quad (1)$

where the first, second, and third modes (dimensions) correspond to the mixture component, the dimension of the mean vector, and the training speaker, respectively. In [5], Tucker decomposition was used to build bases, and in the experiments, speaker adaptation using Tucker decomposition showed better performance than eigenvoice and maximum likelihood linear regression (MLLR) adaptation [6]. The improvement seemed to be attributable to the increased number of adaptation parameters and the compact bases. It was also noticed in [5] that an increased number of adaptation parameters did not guarantee good performance when the amount of adaptation data was small (determining the proper number of adaptation parameters for given adaptation data is a model-order selection problem). Extending the tensor-based approach, [7] added a fourth mode for noise (so $\mathcal{M}$ became a 4-D array) so that the training models of various speakers and noise conditions were decomposed.

In this article, we describe a speaker adaptation method using probabilistic 2-mode analysis, which is an application of probabilistic tensor analysis (PTA) [8] to the second-order tensor (i.e., matrix); PTA is an application of probabilistic PCA (PPCA) [9] to tensor objects. Using probabilistic 2-mode analysis, we derive bases from training models in a probabilistic framework and formulate the speaker adaptation equation in the maximum a posteriori (MAP) framework [10]. The speaker adaptation equation based on the probabilistic approach becomes similar to that of MAP linear regression (MAPLR) adaptation [11], as shown below. The experiments showed that the proposed method further improved on the performance of speaker adaptation based on Tucker decomposition for small amounts of adaptation data.

The rest of this article is organized as follows. Section 2.1 explains some tensor algebra and tensor decomposition. Section 2.2 describes speaker adaptation using Tucker decomposition, which is compared with the probabilistic 2-mode analysis-based method. Section 2.3 explains the probabilistic 2-mode analysis of a set of mean vectors of training HMMs. In Section 2.5, the estimation of the prior distribution of the adaptation parameter is described. Section 2.6 describes speaker adaptation in the MAP framework using the bases and the prior. We explain the experiments in Section 3 and conclude the article in Section 4. Some of the notations used in this article are summarized in Table 1.

A tensor is a multidimensional array, and an N-dimensional array is called an Nth-order tensor (or N-way array). The order of a tensor is the number of indices needed to address its elements; so the order of $\mathcal{M} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is N. A scalar, a vector, and a matrix are zeroth-, first-, and second-order tensors, respectively. A third-order tensor requires three indices for addressing the array, as depicted in Figure 1.
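As a quick illustration, the order of a tensor corresponds exactly to NumPy's `ndim` (the array shapes below are arbitrary illustration values, not from the article):

```python
import numpy as np

# The order of a tensor equals the number of indices needed to address an element.
scalar = np.array(3.0)          # zeroth-order tensor
vector = np.zeros(4)            # first-order tensor
matrix = np.zeros((4, 5))       # second-order tensor
tensor3 = np.zeros((4, 5, 6))   # third-order tensor: three indices, as in Figure 1
print(scalar.ndim, vector.ndim, matrix.ndim, tensor3.ndim)  # 0 1 2 3
```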

Figure 1

A third-order tensor.

Tensor algebra is performed in terms of matrix and vector representations of tensors; the mode-n flattening (matricization) of a tensor $\mathcal{M}$, denoted $\mathbf{M}_{(n)}$, is obtained by reordering the elements as follows:

$\mathbf{M}_{(n)} \in \mathbb{R}^{I_n \times (I_1 \cdots I_{n-1} I_{n+1} \cdots I_N)}. \quad (2)$

That is, all the column vectors along mode n are arranged into a matrix. For example, a third-order tensor $\mathcal{M} \in \mathbb{R}^{I \times J \times K}$ can be flattened into an $I \times (JK)$, $J \times (KI)$, or $K \times (IJ)$ matrix, as depicted in Figure 2.

The mode-n flattening operation will be denoted $\mathrm{mat}_n(\cdot)$, i.e., $\mathrm{mat}_n(\mathcal{M}) = \mathbf{M}_{(n)}$.
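In NumPy terms, mode-n flattening can be sketched as follows. Note that different column-ordering conventions exist in the literature; this sketch uses NumPy's C-order over the remaining modes, and `mat_n` is our own helper name, not notation from the article:

```python
import numpy as np

def mat_n(M, n):
    """Mode-n flattening: bring mode n to the front, then unfold the remaining
    modes into columns, giving an I_n x (I_1...I_{n-1} I_{n+1}...I_N) matrix."""
    return np.moveaxis(M, n, 0).reshape(M.shape[n], -1)

M = np.arange(24).reshape(2, 3, 4)   # a 2 x 3 x 4 third-order tensor
print(mat_n(M, 0).shape)             # (2, 12)
print(mat_n(M, 1).shape)             # (3, 8)
print(mat_n(M, 2).shape)             # (4, 6)
```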

Multiplication of a tensor and a matrix is performed by the n-mode product; the n-mode product of a tensor $\mathcal{W}$ with a matrix $\mathbf{U}$ is denoted as

$\mathcal{M} = \mathcal{W} \times_n \mathbf{U} \quad (5)$

and is carried out by matrix multiplication in terms of flattened matrices:

$\mathbf{M}_{(n)} = \mathbf{U} \mathbf{W}_{(n)} \quad (6)$

or elementwise

$(\mathcal{W} \times_n \mathbf{U})_{i_1 \ldots i_{n-1}\, j\, i_{n+1} \ldots i_N} = \sum_{i_n=1}^{I_n} w_{i_1 i_2 \ldots i_N}\, u_{j i_n} \quad (7)$

where $w$ and $u$ denote the elements of $\mathcal{W}$ and $\mathbf{U}$, respectively. If $\mathcal{W} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ and $\mathbf{U}^T \in \mathbb{R}^{K_n \times I_n}$, then the dimension of $\mathcal{W} \times_n \mathbf{U}^T$ becomes $I_1 \times I_2 \times \cdots \times I_{n-1} \times K_n \times I_{n+1} \times \cdots \times I_N$.
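The n-mode product can be sketched via the flattened form of Equation (6); `mat_n`, `fold_n`, and `mode_n_product` are our own helper names:

```python
import numpy as np

def mat_n(W, n):
    """Mode-n flattening."""
    return np.moveaxis(W, n, 0).reshape(W.shape[n], -1)

def fold_n(X, shape, n):
    """Inverse of mat_n: undo the flattening back to tensor form."""
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(X.reshape([shape[n]] + rest), 0, n)

def mode_n_product(W, U, n):
    """n-mode product W x_n U, computed as M_(n) = U W_(n) (Eq. (6))."""
    shape = list(W.shape)
    shape[n] = U.shape[0]
    return fold_n(U @ mat_n(W, n), shape, n)

W = np.random.default_rng(0).standard_normal((3, 4, 5))
U = np.random.default_rng(1).standard_normal((2, 4))   # maps mode 1: size 4 -> 2
M = mode_n_product(W, U, 1)
print(M.shape)  # (3, 2, 5)
# Agrees with the elementwise definition in Eq. (7):
assert np.allclose(M, np.einsum('ijk,lj->ilk', W, U))
```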

As an extension of singular value decomposition (SVD) to tensor objects, Tucker decomposition decomposes a tensor as follows[4]:

$\mathcal{M}_{I_1 \times I_2 \times \cdots \times I_N} \simeq \mathcal{W}_{K_1 \times K_2 \times \cdots \times K_N} \prod_{n=1}^{N} \times_n \mathbf{U}_n \quad (8)$

where $\mathbf{U}_n \in \mathbb{R}^{I_n \times K_n}$ with $K_n \leq I_n$ ($n = 1, \ldots, N$). The core tensor $\mathcal{W}$ and the mode matrices $\mathbf{U}_n$ correspond to the matrix of singular values and the orthonormal basis vectors in matrix SVD, respectively. An example of the Tucker decomposition of a third-order tensor is illustrated in Figure 3.

Figure 3

Tucker decomposition of a third-order tensor.

The core tensor $\mathcal{W}$ and mode matrices $\mathbf{U}_n$ in Tucker decomposition can be computed such that they minimize

$\mathrm{Error} = \left\| \mathcal{M} - \mathcal{W} \prod_{n=1}^{N} \times_n \mathbf{U}_n \right\|^2 \quad (9)$

where the norm of a tensor is defined as $\|\mathcal{X}\| = \sqrt{\sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} x_{i_1 i_2 \ldots i_N}^2}$. A representative technique for computing the Tucker decomposition is alternating least squares (ALS) [12]; the basic idea is to compute each mode matrix $\mathbf{U}_n$ in turn with the other mode matrices fixed. For more details on Tucker decomposition, refer to [4]. In the following sections, we explain probabilistic 2-mode analysis in the context of speaker adaptation.
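As a concrete sketch, the truncated higher-order SVD (HOSVD), a common way to initialize the ALS iterations, takes each $\mathbf{U}_n$ from the leading left singular vectors of the mode-n flattening; the helper names and toy sizes below are ours, not the article's:

```python
import numpy as np

def mat_n(M, n):
    """Mode-n flattening of a tensor."""
    return np.moveaxis(M, n, 0).reshape(M.shape[n], -1)

def mode_n_product(W, U, n):
    """n-mode product W x_n U via the flattened form M_(n) = U W_(n)."""
    shape = list(W.shape)
    shape[n] = U.shape[0]
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis((U @ mat_n(W, n)).reshape([shape[n]] + rest), 0, n)

def tucker_hosvd(M, ranks):
    """Truncated HOSVD: each U_n from the K_n leading left singular vectors of
    the mode-n flattening; core W = M x_1 U1^T x_2 U2^T x_3 U3^T."""
    Us = []
    for n, K in enumerate(ranks):
        U_full, _, _ = np.linalg.svd(mat_n(M, n), full_matrices=False)
        Us.append(U_full[:, :K])
    W = M
    for n, U in enumerate(Us):
        W = mode_n_product(W, U.T, n)
    return W, Us

rng = np.random.default_rng(0)
core = rng.standard_normal((2, 2, 2))            # exact multilinear rank (2, 2, 2)
M = core
for n, I in enumerate((5, 6, 7)):
    M = mode_n_product(M, rng.standard_normal((I, 2)), n)

W, Us = tucker_hosvd(M, (2, 2, 2))
M_hat = W
for n, U in enumerate(Us):
    M_hat = mode_n_product(M_hat, U, n)
err = np.linalg.norm(M - M_hat) / np.linalg.norm(M)
print(err)  # ~ 0: HOSVD is exact when the multilinear rank is not truncated
```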

2.2 Speaker adaptation using Tucker decomposition

The probabilistic 2-mode analysis-based method is a probabilistic extension of the Tucker decomposition-based method; thus, we compare the two in the experiments. In this section, we review the speaker adaptation based on the Tucker decomposition of training models in [5]. In this article, speaker adaptation is performed by updating the mean vectors of the output distributions of an HMM. The HMM mean vectors of each training speaker are arranged in an $R \times D$ matrix:

$\mathbf{M}_s = [\boldsymbol{\mu}_{s;1}\ \boldsymbol{\mu}_{s;2}\ \cdots\ \boldsymbol{\mu}_{s;R}]^T. \quad (10)$

All the centered HMM mean vectors of the training speakers, $\{\mathbf{M}_s - \bar{\mathbf{M}}\}_{s=1}^{S}$ where $\bar{\mathbf{M}} = (1/S) \sum_s \mathbf{M}_s$, are collectively expressed as a third-order tensor $\tilde{\mathcal{M}}$, which we decompose by Tucker decomposition as follows:

$\tilde{\mathcal{M}} \simeq \mathcal{G} \times_1 \mathbf{U}_{mixture} \times_2 \mathbf{U}_{dim} \times_3 \mathbf{U}_{speaker}. \quad (11)$

In the above equation, $\mathbf{U}_{mixture} \in \mathbb{R}^{R \times K_R}$, $\mathbf{U}_{dim} \in \mathbb{R}^{D \times K_D}$, and $\mathbf{U}_{speaker} \in \mathbb{R}^{S \times K_S}$ are basis matrices for the mixture component, the dimension of the mean vector, and the training speaker, respectively ($K_R \leq R-1$, $K_D \leq D-1$, and $K_S \leq S-1$); the core tensor $\mathcal{G}$ is common across the mixture components, mean-vector dimensions, and training speakers. In Equation (11), the sth row vector of $\mathbf{U}_{speaker}$, denoted $\mathbf{u}_{speaker;s}$, corresponds to the speaker weight of the sth speaker; thus the low-rank approximation of the sth speaker model is given by

$\mathbf{M}_s \simeq \mathcal{G}_{K_R \times K_D \times K_S} \times_3 \mathbf{u}_{speaker;s} \times_1 \mathbf{U}_{mixture} \times_2 \mathbf{U}_{dim} + \bar{\mathbf{M}}. \quad (12)$

If we define the augmented speaker weight $\mathbf{W}_s \in \mathbb{R}^{K_R \times K_D}$, $\mathbf{W}_s \equiv \mathcal{G}_{K_R \times K_D \times K_S} \times_3 \mathbf{u}_{speaker;s}$, Equation (12) becomes

$\mathbf{M}_s \simeq \mathbf{W}_s \times_1 \mathbf{U}_{mixture} \times_2 \mathbf{U}_{dim} + \bar{\mathbf{M}} = \mathbf{U}_{mixture} \mathbf{W}_s \mathbf{U}_{dim}^T + \bar{\mathbf{M}}. \quad (13)$

Thus, we express the model of a new speaker as

$\mathbf{M}_{new} = \mathbf{U}_{mixture} \mathbf{W}_{new} \mathbf{U}_{dim}^T + \bar{\mathbf{M}}. \quad (14)$

For given adaptation data $O = \{\mathbf{o}_1, \ldots, \mathbf{o}_T\}$, the equation for finding the speaker weight is derived under a maximum likelihood (ML) criterion:

where $\gamma_r(t)$ denotes the occupation probability of being at mixture $r$ at time $t$ given $O$, and $\mathbf{C}_r$ the covariance matrix of the $r$th Gaussian component of the SI HMM (in this article, a diagonal covariance matrix is used); $\mathbf{u}_{mixture;r}$ and $\bar{\mathbf{m}}_r$ denote the $r$th row vectors of $\mathbf{U}_{mixture}$ and $\bar{\mathbf{M}}$, respectively. In the above equation, $\mathbf{W}_{new,aug}$ can be computed using a technique similar to MLLR adaptation, and the weight of the new speaker is obtained by

$\hat{\mathbf{W}}_{new} = \mathbf{W}_{new,aug} \mathbf{U}_{dim} \quad (16)$

which is plugged into Equation (14) to produce the model updated for the new speaker.
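The relationship among Equations (13), (14), and (16) can be checked numerically: with orthonormal basis columns, multiplying the augmented weight by $\mathbf{U}_{dim}$ recovers the weight exactly. The toy dimensions and random bases below are our own stand-ins, not the article's trained quantities:

```python
import numpy as np

rng = np.random.default_rng(1)
R, D, KR, KD = 8, 5, 3, 2                            # toy sizes
U1, _ = np.linalg.qr(rng.standard_normal((R, KR)))   # stand-in for U_mixture
U2, _ = np.linalg.qr(rng.standard_normal((D, KD)))   # stand-in for U_dim
Mbar = rng.standard_normal((R, D))                   # mean model

W = rng.standard_normal((KR, KD))              # speaker weight
M = U1 @ W @ U2.T + Mbar                       # Eq. (13): model from weight
W_aug = W @ U2.T                               # augmented weight W U_dim^T
W_hat = W_aug @ U2                             # Eq. (16): U2 has orthonormal columns
assert np.allclose(W_hat, W)
assert np.allclose(U1 @ W_hat @ U2.T + Mbar, M)   # Eq. (14) reproduces the model
```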

2.3 Probabilistic 2-mode analysis

The advantage of probabilistic 2-mode analysis over Tucker decomposition is similar to that of PPCA over standard PCA: probabilistic 2-mode analysis can deal with missing entries in the data tensor (although this is not the case in our experiments). From a modeling perspective, probabilistic 2-mode analysis assumes a distribution over latent variables, which makes it suitable for a MAP framework.

In this section, the ensemble of training models is expressed as

$\mathcal{M} = \{\mathbf{M}_s\}_{s=1}^{S}. \quad (17)$

Assuming that the HMM mean vectors of training speakers are drawn from a matrix-variate normal distribution [13], we derive the adaptation equation based on the probabilistic 2-mode analysis of training models. We use probabilistic 2-mode analysis, the second-order case of PTA [8], to decompose the training models expressed in matrix form. The latent tensor model is expressed as

$\mathcal{M} = \mathcal{W} \prod_{n=1}^{N} \times_n \mathbf{U}_n + \mathcal{M}_{mean} + \mathcal{E} \quad (18)$

where $\mathcal{W}$ denotes the latent tensor, $\mathbf{U}_n$ the factor loading matrices, $\mathcal{M}_{mean}$ the mean, and $\mathcal{E}$ the error/noise process. The 2-mode case of the latent tensor model is given by

$\mathbf{M} = \mathbf{W} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 + \mathbf{M}_{mean} + \mathbf{E} \quad (19)$

which becomes, for the training models {M1,…,MS},

$\mathbf{M}_s = \mathbf{W}_s \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 + \mathbf{M}_{mean} + \mathbf{E}_s = \mathbf{U}_{mixture} \mathbf{W}_s \mathbf{U}_{dim}^T + \mathbf{M}_{mean} + \mathbf{E}_s \quad (20)$

where $\mathbf{W}_s \in \mathbb{R}^{K_R \times K_D}$ denotes the latent matrix, $\mathbf{U}_{mixture} \in \mathbb{R}^{R \times K_R}$ and $\mathbf{U}_{dim} \in \mathbb{R}^{D \times K_D}$ the factor loading matrices ($K_R \leq R-1$ and $K_D \leq D-1$), $\mathbf{M}_{mean}$ the mean, and $\mathbf{E}_s$ the error/noise process. (The mode matrices and dimensions are identified as $\mathbf{U}_1 = \mathbf{U}_{mixture}$, $\mathbf{U}_2 = \mathbf{U}_{dim}$, $I_1 = R$, $I_2 = D$, $K_1 = K_R$, and $K_2 = K_D$.) The distribution of $\mathbf{W}_s$ is assumed to be matrix-variate normal, i.e., $\mathbf{W}_s \sim \mathcal{N}(\mathbf{0}_{K_R \times K_D}, \mathbf{I}_{K_R} \otimes \mathbf{I}_{K_D})$ where $\otimes$ denotes the Kronecker product, and independent of $\mathbf{E}_s$, whose elements follow $\mathcal{N}(0, \sigma^2)$. Figure 4 shows the graphical model representing the probabilistic 2-mode model.

Figure 4

Graphical model representation of the probabilistic 2-mode model.

In Equation (20), it is computationally intractable to calculate the $\mathbf{U}_n$ simultaneously, so the following decoupled predictive density is defined:

$p(\mathcal{M} \mid \mathcal{M}_{mean}, \{\mathbf{U}_n\}_{n=1}^{N}, \sigma^2) \propto \prod_{n=1}^{N} p(\mathcal{M} \bar{\times}_n \mathbf{U}_n^T \mid \bar{\mathbf{t}}_n, \sigma_n^2) \quad (21)$

where $\bar{\mathbf{t}}_n \in \mathbb{R}^{I_n \times 1}$ and $\sigma_n^2$ denote the mean vector and noise variance for mode $n$, respectively; $\mathcal{M} \bar{\times}_n \mathbf{U}_n^T \equiv \mathcal{M} \times_1 \mathbf{U}_1^T \cdots \times_{n-1} \mathbf{U}_{n-1}^T \times_{n+1} \mathbf{U}_{n+1}^T \cdots \times_N \mathbf{U}_N^T$, i.e., the product of $\mathcal{M}$ with all the mode matrices except mode $n$, which is called the contracted $n$-mode product [14]. That is, the $n$th probabilistic function is defined on $\mathcal{M}$ projected by all the $\mathbf{U}_j$ except $\mathbf{U}_n$. Given observed data $\mathcal{M}$, the decoupled posterior probabilistic function is defined as

$p(\mathcal{M}_{mean}, \{\mathbf{U}_n\}_{n=1}^{N}, \sigma^2 \mid \mathcal{M}) \propto \prod_{n=1}^{N} p(\bar{\mathbf{t}}_n, \mathbf{U}_n, \sigma_n^2 \mid \mathcal{M}, \{\mathbf{U}_j\}_{j=1, j \neq n}^{N}). \quad (22)$

By Bayes’ theorem, the n th posterior distribution can be expressed in terms of the decoupled likelihood function and the decoupled prior distribution:

Now, Un’s are obtained by maximizing the following posterior distribution:

$p(\{\mathbf{U}_n\}_{n=1}^{N} \mid \mathcal{M}) \approx \prod_{n=1}^{N} p(\mathbf{U}_n \mid \mathcal{M} \bar{\times}_n \mathbf{U}_n^T) \quad (26)$

where $p(\mathbf{U}_n \mid \mathcal{M} \bar{\times}_n \mathbf{U}_n^T) \equiv \prod_{s=1}^{S} p(\mathbf{U}_n \mid \mathbf{M}_s \bar{\times}_n \mathbf{U}_n^T)$. The expectation-maximization (EM) algorithm [15] is applied to compute the $\mathbf{U}_n$. The application of the EM algorithm to constructing the probabilistic 2-mode model is explained in the next section.

2.4 Construction of probabilistic 2-mode model for speaker adaptation

In Equation (20), for the given training models, the maximum likelihood (ML) estimate of $\mathbf{M}_{mean}$ is given as $\bar{\mathbf{M}} = (1/S) \sum_s \mathbf{M}_s$, and $\{\mathbf{U}_n, \sigma_n^2\}$ can be estimated as follows. First, let $\mathbf{t}_{n;j} \in \mathbb{R}^{I_n \times 1}$ be the $j$th column vector of

$\mathbf{T}_{(n)} = \mathrm{mat}_n(\mathcal{M} \bar{\times}_n \mathbf{U}_n^T) \quad (27)$

for $1 \leq j \leq \bar{I}_n S$ ($\bar{I}_n = \prod_{j=1, j \neq n}^{N} I_j$), and let $\mathbf{x}_{n;j} \in \mathbb{R}^{K_n \times 1}$ be the $j$th column vector of

where $\mathbf{S}_n = \frac{1}{\bar{I}_n S - 1} \sum_{j=1}^{\bar{I}_n S} (\mathbf{t}_{n;j} - \bar{\mathbf{t}}_n)(\mathbf{t}_{n;j} - \bar{\mathbf{t}}_n)^T$ and $\mathrm{tr}[\cdot]$ denotes the trace of a matrix. Summing over all the modes, we obtain the following log-likelihood function of the posterior distribution:

$L = \sum_n \log p(\mathbf{U}_n \mid \mathcal{M} \bar{\times}_n \mathbf{U}_n^T) \propto -\sum_n \frac{\bar{I}_n S}{2} \left( \log |\mathbf{G}_n| + \mathrm{tr}(\mathbf{G}_n^{-1} \mathbf{S}_n) \right). \quad (31)$

The graphical model representation of the decoupled probabilistic model is shown in Figure5.

Figure 5

Graphical model representation of the decoupled probabilistic model.

We seek the $\mathbf{U}_n$ that maximize the log-likelihood function in an alternating fashion. The mode matrices $\mathbf{U}_1$ and $\mathbf{U}_2$ are initialized with the results of the Tucker decomposition that minimizes the reconstruction error:

$\mathrm{Error} = \sum_s \left\| \mathbf{M}_s - (\mathbf{W}_s \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 + \bar{\mathbf{M}}) \right\|^2. \quad (32)$

With the initial U1 and U2, the following procedure is performed for each mode (n=1,2).

Each training model is projected onto the mode matrices except mode $n$ and expressed as a mode-n matrix:

$\mathbf{T}_{s,(n)} = \mathrm{mat}_n(\mathbf{M}_s \bar{\times}_n \mathbf{U}_n^T). \quad (33)$

All the column vectors of $\{\mathbf{T}_{s,(n)}\}_{s=1}^{S}$ constitute the training data set:

$\{\mathbf{t}_{n;j}\}, \quad 1 \leq j \leq \bar{I}_n S. \quad (34)$

Then, with an initial estimate of $\sigma_n^2$ (e.g., 0.005 was used in the experiments), the EM algorithm is iterated as follows until $\mathbf{U}_n$ and $\sigma_n^2$ converge.

E-step: From Equation (31), the expectation of the log-likelihood function of the complete data w.r.t. $p(\mathbf{x}_{n;j} \mid \mathbf{t}_{n;j}, \bar{\mathbf{t}}_n, \mathbf{U}_n, \sigma_n^2)$ is given as

Essentially, the procedure applies PPCA to the data set {tn;j} for each mode.
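Since the per-mode procedure is PPCA on the data set $\{\mathbf{t}_{n;j}\}$, it can be sketched with the standard Tipping-Bishop PPCA EM recursions. This is a generic PPCA sketch on synthetic data, not the article's exact per-mode update equations (some of which were omitted above); the initial value $\sigma^2 = 0.005$ mirrors the text:

```python
import numpy as np

def ppca_em(T, K, sigma2=0.005, n_iter=200):
    """Standard PPCA EM (Tipping & Bishop) on the columns of T (d x N)."""
    d, N = T.shape
    tbar = T.mean(axis=1, keepdims=True)
    S = (T - tbar) @ (T - tbar).T / N          # sample covariance of the data
    U = np.linalg.qr(np.random.default_rng(0).standard_normal((d, K)))[0]
    for _ in range(n_iter):
        Minv = np.linalg.inv(U.T @ U + sigma2 * np.eye(K))        # E-step stats
        SU = S @ U
        U_new = SU @ np.linalg.inv(sigma2 * np.eye(K) + Minv @ U.T @ SU)  # M-step
        sigma2 = np.trace(S - SU @ Minv @ U_new.T) / d
        U = U_new
    return U, sigma2, tbar

rng = np.random.default_rng(2)
# Rank-2 data in 10 dimensions plus noise of variance 0.01
T = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 500)) \
    + 0.1 * rng.standard_normal((10, 500))
U, s2, _ = ppca_em(T, K=2)
print(s2)  # near the injected noise variance 0.01
```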

2.5 Estimation of prior distribution

Given the model parameters $\{\bar{\mathbf{M}}, \mathbf{U}_n, \sigma_n^2\}$, the weight matrix for the training speaker model $\mathbf{M}_s$ is obtained by

$\mathbf{W}_s = (\mathbf{M}_s - \bar{\mathbf{M}}) \prod_{n=1}^{2} \times_n \mathbf{H}_n^{-1} \mathbf{U}_n^T = \mathbf{H}_1^{-1} \mathbf{U}_{mixture}^T (\mathbf{M}_s - \bar{\mathbf{M}}) (\mathbf{H}_2^{-1} \mathbf{U}_{dim}^T)^T. \quad (41)$

From the set of weight matrices $\{\mathbf{W}_s\}_{s=1}^{S}$, the distribution of the weight is estimated. In deriving the adaptation equation in the MAP framework, the parameters of the prior distribution can be obtained in closed form if $p(\mathbf{W})$ follows a conjugate distribution. Hence, we assume the prior distribution of the weight to be matrix-variate normal:

$p(\mathbf{W}) \propto \frac{1}{|\boldsymbol{\Sigma}|^{K_D/2} |\boldsymbol{\Psi}|^{K_R/2}} \exp\left\{ -\frac{1}{2} \mathrm{tr}\left[ (\mathbf{W} - \mathbf{W}_{mean})^T \boldsymbol{\Sigma}^{-1} (\mathbf{W} - \mathbf{W}_{mean}) \boldsymbol{\Psi}^{-1} \right] \right\}. \quad (42)$

Furthermore, the hyperparameters of $p(\mathbf{W})$ can easily be estimated under an ML criterion if $\boldsymbol{\Psi}$ is known [16]. So, $\boldsymbol{\Psi}$ is assumed to be the identity matrix [17], and the hyperparameters are estimated as:
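A sketch of such ML estimation with $\boldsymbol{\Psi} = \mathbf{I}$: the closed form below is the standard matrix-normal ML estimator under that assumption, and the synthetic weights are our own stand-ins for the training speaker weights:

```python
import numpy as np

rng = np.random.default_rng(5)
S, KR, KD = 2000, 4, 3
Ws = rng.standard_normal((S, KR, KD))    # stand-in training speaker weights W_s

W_mean = Ws.mean(axis=0)                 # ML estimate of W_mean
Dev = Ws - W_mean
# With Psi = I, the ML estimate of the row covariance Sigma averages the
# outer products of the deviation columns over speakers and columns:
Sigma = np.einsum('sij,skj->ik', Dev, Dev) / (S * KD)
print(Sigma.shape)  # (4, 4)
```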

where $\Lambda$ and $\hat{\Lambda}$ denote the current and updated model parameters, respectively, and $\mathbf{m}_{new;r} = \mathbf{u}_{mixture;r} \mathbf{W}_{new} \mathbf{U}_{dim}^T + \bar{\mathbf{m}}_r$. In finding the speaker weight, we compute $\mathbf{W}_{new,aug} \equiv \mathbf{W}_{new} \mathbf{U}_{dim}^T$, from which $\mathbf{W}_{new}$ is obtained. Solving in this way, we can use the row-by-row technique of MLLR adaptation [6]. Setting $\partial Q(\Lambda, \hat{\Lambda}) / \partial \mathbf{W}_{new} = 0$ yields the following equation:

where $v_r(i,i)$ denotes the $(i,i)$ element of $\mathbf{V}_r$ and $\mathbf{w}_{s;i}$ the $i$th column vector of $\mathbf{W}_{s,aug} \equiv \mathbf{W}_s \mathbf{U}_{dim}^T$. Then, the speaker weight can be computed:

$\mathbf{w}_{new,aug,(i)}^T = \left( \mathbf{G}_{(i)} + \boldsymbol{\Sigma}_{(i)}^{-1} \right)^{-1} \mathbf{z}_{(i)}^T, \quad i = 1, \ldots, D \quad (49)$

where $\mathbf{w}_{new,aug,(i)}$ denotes the $i$th row of $\mathbf{W}_{new,aug}$ and $\mathbf{z}_{(i)}$ the $i$th row vector of $\mathbf{Z}$. The method is thus similar to MAPLR adaptation in [11]. Finally, the speaker weight is obtained as

$\hat{\mathbf{W}}_{new} = \mathbf{W}_{new,aug} \left[ \mathbf{U}_{dim}^T \right]^{+} \quad (50)$

where $[\cdot]^{+}$ denotes the pseudoinverse of a matrix. The weight is plugged into Equation (44) to produce the model adapted to the new speaker.

2.7 Speaker adaptation techniques compared in the experiments

In this section, we briefly review the speaker adaptation techniques compared with the probabilistic 2-mode analysis-based method: eigenvoice adaptation [3], MLLR adaptation [6], and MAPLR adaptation [11].

In eigenvoice adaptation, the collection of HMM mean vectors of speaker s is arranged in an (RD)×1 vector:

$\boldsymbol{\mu}_s = \begin{bmatrix} \boldsymbol{\mu}_{s;1} \\ \boldsymbol{\mu}_{s;2} \\ \vdots \\ \boldsymbol{\mu}_{s;R} \end{bmatrix}. \quad (51)$

Then, the set of S supervectors, {μ1,…,μS}, is decomposed by PCA to produce the adaptation model

$\boldsymbol{\mu}_{new} = \boldsymbol{\Phi} \mathbf{w}_{new} + \bar{\boldsymbol{\mu}} \quad (52)$

where $\boldsymbol{\Phi} = [\boldsymbol{\phi}_1 \cdots \boldsymbol{\phi}_K]$ is the basis matrix consisting of the $K$ dominant eigenvectors from PCA, and $\bar{\boldsymbol{\mu}} = (1/S) \sum_s \boldsymbol{\mu}_s$. The $K \times 1$ weight vector is obtained by maximizing the likelihood of the adaptation data, which gives

$\hat{\mathbf{w}}_{new} = \left[ \sum_t \sum_r \gamma_r(t)\, \boldsymbol{\Phi}_r^T \mathbf{C}_r^{-1} \boldsymbol{\Phi}_r \right]^{-1} \left[ \sum_t \sum_r \gamma_r(t)\, \boldsymbol{\Phi}_r^T \mathbf{C}_r^{-1} (\mathbf{o}_t - \bar{\boldsymbol{\mu}}_r) \right] \quad (53)$

where $\boldsymbol{\Phi}_r$ and $\bar{\boldsymbol{\mu}}_r$ denote the $D \times K$ submatrix and the $D \times 1$ subvector of $\boldsymbol{\Phi}$ and $\bar{\boldsymbol{\mu}}$ corresponding to the $r$th mixture, respectively.
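The accumulation in Equation (53) can be sketched as follows. For noiseless synthetic data lying exactly in the span of $\boldsymbol{\Phi}$ with hard occupation probabilities, the ML weight is recovered exactly; all names and toy sizes below are our own:

```python
import numpy as np

rng = np.random.default_rng(3)
R, D, K, T = 4, 3, 2, 50                       # mixtures, feature dim, bases, frames
Phi = rng.standard_normal((R * D, K))          # basis matrix (K dominant eigenvectors)
mu_bar = rng.standard_normal(R * D)            # mean supervector
Cinv = [np.diag(1.0 / rng.uniform(0.5, 2.0, D)) for _ in range(R)]  # diagonal C_r^{-1}

w_true = rng.standard_normal(K)
mu = (Phi @ w_true + mu_bar).reshape(R, D)     # speaker supervector, one row per mixture

rs = rng.integers(R, size=T)                   # hard occupation: gamma_r(t)=1 for r=rs[t]
obs = mu[rs]                                   # noiseless observations o_t

G = np.zeros((K, K)); z = np.zeros(K)
for t in range(T):
    r = rs[t]
    Phir = Phi[r * D:(r + 1) * D]              # D x K submatrix Phi_r
    mur = mu_bar[r * D:(r + 1) * D]            # D x 1 subvector of mu_bar
    G += Phir.T @ Cinv[r] @ Phir
    z += Phir.T @ Cinv[r] @ (obs[t] - mur)
w_hat = np.linalg.solve(G, z)                  # Eq. (53)
assert np.allclose(w_hat, w_true)
```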

In MLLR adaptation, the updated model for a new speaker is obtained by linearly transforming the SI model (assuming a global regression matrix):

$\boldsymbol{\mu}_{new,r} = \mathbf{W}_{new} \boldsymbol{\xi}_r, \quad \boldsymbol{\xi}_r = \left[ \omega\ \ \boldsymbol{\mu}_{SI,r}^T \right]^T \quad (54)$

where $\boldsymbol{\mu}_{SI,r}$ denotes the mean vector of the SI HMM for mixture $r$, and $\omega$ is the bias offset term: $\omega = 1$ to include the bias and $\omega = 0$ otherwise ($\omega = 1$ in our experiments). The $D \times (D+1)$ transformation matrix can be obtained under an ML criterion, which yields the following equation:

$\sum_t \sum_r \gamma_r(t)\, \mathbf{C}_r^{-1} \mathbf{o}_t \boldsymbol{\xi}_r^T = \sum_t \sum_r \gamma_r(t)\, \mathbf{C}_r^{-1} \mathbf{W}_{new} \boldsymbol{\xi}_r \boldsymbol{\xi}_r^T. \quad (55)$

The above equation can be solved for Wnew:

$\hat{\mathbf{w}}_{new,(i)}^T = \mathbf{G}_{(i)}^{-1} \mathbf{z}_{(i)}^T, \quad i = 1, \ldots, D \quad (56)$

where $\hat{\mathbf{w}}_{new,(i)}$ and $\mathbf{z}_{(i)}$ denote the $i$th row vectors of $\hat{\mathbf{W}}_{new}$ and $\mathbf{Z}$, respectively; $\mathbf{G}_{(i)}$ and $\mathbf{Z}$ are defined as:

$\mathbf{V}_r = \sum_t \gamma_r(t)\, \mathbf{C}_r^{-1}, \quad \mathbf{D}_r = \boldsymbol{\xi}_r \boldsymbol{\xi}_r^T, \quad \mathbf{G}_{(i)} = \sum_r v_r(i,i)\, \mathbf{D}_r, \quad \mathbf{Z} = \sum_t \sum_r \gamma_r(t)\, \mathbf{C}_r^{-1} \mathbf{o}_t \boldsymbol{\xi}_r^T \quad (57)$

where vr(i,i) denotes the (i,i) element of Vr.
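The row-by-row solve of Equations (56)-(57) can be sketched on synthetic noiseless data, where it recovers the true transformation exactly (the synthetic setup and all names are ours; the diagonal-covariance assumption matches the experiments):

```python
import numpy as np

rng = np.random.default_rng(4)
D, R, T = 3, 5, 40                                  # toy sizes
mu_si = rng.standard_normal((R, D))                 # SI mean vectors mu_SI,r
xi = np.hstack([np.ones((R, 1)), mu_si])            # extended vectors [1, mu_SI,r^T]^T
Cinv_diag = rng.uniform(0.5, 2.0, (R, D))           # diagonals of C_r^{-1}
W_true = rng.standard_normal((D, D + 1))            # transformation to recover

# Hard occupation (gamma_r(t) = 1 for r = rs[t]); every mixture visited at least once
rs = np.concatenate([np.arange(R), rng.integers(R, size=T - R)])
obs = (W_true @ xi[rs].T).T                         # noiseless observations o_t

V = np.zeros((R, D))                                # diagonals of V_r (Eq. (57))
Z = np.zeros((D, D + 1))
for t in range(T):
    r = rs[t]
    V[r] += Cinv_diag[r]
    Z += np.outer(Cinv_diag[r] * obs[t], xi[r])
Dr = np.einsum('ri,rj->rij', xi, xi)                # D_r = xi_r xi_r^T

W_hat = np.zeros_like(W_true)
for i in range(D):                                  # row-by-row solve of Eq. (56)
    Gi = np.einsum('r,rij->ij', V[:, i], Dr)        # G_(i) = sum_r v_r(i,i) D_r
    W_hat[i] = np.linalg.solve(Gi, Z[i])
assert np.allclose(W_hat, W_true)
```

With diagonal covariances, each row of the transformation decouples, which is why the $D \times (D+1)$ system splits into $D$ small $(D+1) \times (D+1)$ solves.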

In MAPLR adaptation, a prior on the transformation matrix is used within the MLLR framework. The parameters of the prior are obtained from the MLLR transformation matrices of the training speakers $\{\mathbf{W}_1, \ldots, \mathbf{W}_S\}$:

$\bar{\mathbf{w}}_{(i)} = \frac{1}{S} \sum_s \mathbf{w}_{s,(i)}, \quad \boldsymbol{\Sigma}_{(i)} = \frac{1}{S-1} \sum_s (\mathbf{w}_{s,(i)} - \bar{\mathbf{w}}_{(i)})^T (\mathbf{w}_{s,(i)} - \bar{\mathbf{w}}_{(i)}) \quad (58)$

where $\mathbf{w}_{s,(i)}$ denotes the $i$th row vector of $\mathbf{W}_s$. Then, the transformation matrix for a new speaker is obtained under a MAP criterion. Deriving the equation in the same way as above, we obtain the following:

We carried out large-vocabulary continuous-speech recognition (LVCSR) experiments using the Wall Street Journal corpus WSJ0 [18]. In building the SI model, we used 12,754 utterances from 101 speakers in the corpus. As the acoustic feature vector, we used the 39-dimensional vector consisting of 13-dimensional mel-frequency cepstral coefficients (MFCCs) including the 0th cepstral coefficient, their derivative coefficients, and their acceleration coefficients. The feature vector was extracted with a 20-ms Hamming window and a 10-ms frame shift. Using the HMM toolkit (HTK) [19], we built a tied-state triphone model (word-internal triphones) with 3472 tied states and 8 Gaussian mixture components per state.

To build training models for constructing bases, we transformed the SI model by MLLR adaptation [6] using 32 regression classes, followed by maximum a posteriori (MAP) adaptation [10]. We used the 101 adapted models to build the Tucker decomposition and probabilistic tensor based models, as well as the eigenvoice model.

For the adaptation and recognition tests, we used the Nov'92 5K non-verbalized adaptation and test sets. There were 8 test speakers; the adaptation set was used for adaptation, and the test set of 330 sentences was used for the recognition test (about 40 test utterances per speaker). The length of an adaptation sentence was about 6 s, and adaptation was performed in supervised mode. For the recognition test, we used the WSJ 5K non-verbalized closed-vocabulary set and the WSJ standard 5K non-verbalized closed bigram language model.

The word recognition accuracy of the SI model is 91.54%. Table 2 shows the results of the Tucker decomposition and probabilistic 2-mode based methods ($K_S = 100$ in the Tucker decomposition based model). In the table, the probabilistic 2-mode based method shows improved performance over the Tucker decomposition based method for small amounts of adaptation data, which can be seen clearly in Figure 6 for the Tucker decomposition and probabilistic 2-mode based models with ($K_R = 20$, $K_D = 35$). The results of MAPLR [11] are also shown in the figure. The use of the MAP framework contributes to improved performance for small amounts of adaptation data. The number of free parameters of each method is as follows: 20·35 for the Tucker 3-mode and probabilistic 2-mode based models, and 39·40 for MAPLR adaptation.

In Figure 7, the Tucker decomposition based method is compared with MLLR and eigenvoice adaptation. The figure shows that the Tucker decomposition based method outperforms MLLR and eigenvoice adaptation when more than one adaptation sentence is available. It can be inferred from the figure that eigenvoice adaptation will outperform the Tucker decomposition based method and MLLR for sparse adaptation data. The p-values from the matched-pair t-test are shown in Table 3; although the values are not always small, the performance improvement of the probabilistic 2-mode based method appears meaningful.

Additionally, Figure 8 shows the performance of the probabilistic 2-mode based model with ($K_R = 20$, $K_D = 35$), MLLR adaptation with a full regression matrix, and MAPLR adaptation for adaptation data of about 6-240 s; for 10 or more adaptation sentences (about 60 s), the probabilistic 2-mode based model shows performance comparable to MLLR and MAPLR adaptation. In Figure 8, the p-values are: p<0.01 for 1-5 adaptation sentences between the probabilistic 2-mode based model and MLLR adaptation, and p<0.05 for 2-5 adaptation sentences between the probabilistic 2-mode based model and MAPLR adaptation. The number of free parameters of each method is summarized in Table 4.

Figure 6

Word recognition accuracy of the probabilistic 2-mode based model, Tucker 3-mode based model, and MAPLR adaptation.

Figure 7

Word recognition accuracy of the Tucker 3-mode based model, MLLR, and eigenvoice adaptation.

Table 2

Word recognition accuracy (%) of the Tucker 3-mode and probabilistic 2-mode based methods

                                                    Number of adaptation sentences
Method                 (KR, KD)   Free params       1      2      3      4      5
Tucker 3-mode          (20, 35)      700        91.84  92.98  93.07  92.99  93.11
                       (20, 38)      760        91.82  92.83  93.11  93.01  93.01
                       (30, 35)     1050        90.77  92.99  93.18  93.09  92.94
                       (30, 38)     1140        90.77  92.86  93.18  93.01  92.86
                       (40, 35)     1400        89.39  92.85  93.11  93.24  93.03
                       (40, 38)     1520        89.16  92.77  93.20  93.14  92.98
                       (50, 35)     1750        87.95  92.34  93.24  93.26  93.13
                       (50, 38)     1900        87.75  92.47  93.27  93.31  93.16
Probabilistic 2-mode   (20, 35)      700        93.07  93.18  93.26  93.27  93.16
                       (20, 38)      760        92.96  93.07  93.03  93.24  93.13
                       (30, 35)     1050        92.98  93.20  93.24  93.24  93.31
                       (30, 38)     1140        92.94  93.33  93.33  93.27  93.31
                       (40, 35)     1400        93.13  93.20  93.39  93.24  93.01
                       (40, 38)     1520        93.14  93.22  93.33  93.20  93.24
                       (50, 35)     1750        93.26  93.35  93.37  93.29  93.29
                       (50, 38)     1900        93.37  93.44  93.42  93.31  93.39

The number of mixture components R = 3472·8 and the dimension of the acoustic feature vector D = 39. The number of free parameters is KR × KD.

Table 3

p-values from the matched-pair t-test

                                            Number of adaptation sentences
Methods                                        1      2      3      4      5
Prob. 2-mode and Tucker 3-mode             <0.01   0.22   0.08   0.03   0.34
Prob. 2-mode and MAPLR                      0.10  <0.01  <0.01   0.01   0.02
Prob. 2-mode and MLLR, block-diagonal      <0.01   0.01   0.04  <0.01   0.05
Prob. 2-mode and EV                        <0.01
Tucker 3-mode and MLLR, block-diagonal      0.43   0.18   0.63   0.17   0.22
Tucker 3-mode and EV                        0.94  <0.01  <0.01  <0.01  <0.01

For the probabilistic 2-mode and Tucker 3-mode based models, KR = 20 and KD = 35.

Figure 8

Word recognition accuracy of the probabilistic 2-mode based model, MLLR, and MAPLR adaptation.

Table 4

Number of free parameters of adaptation techniques

Method                                        Number of free parameters
Probabilistic 2-mode based model              20·35 (KR·KD)
Tucker 3-mode based model                     20·35 (KR·KD)
MLLR, 3-block-diagonal regression matrix      13·40
MLLR, full regression matrix                  39·40
MAPLR adaptation                              39·40
Eigenvoice                                    50

We attribute the performance improvement of the proposed method over MLLR and MAPLR adaptation to the use of basis vectors and a speaker weight of large dimension. We attribute the improvement of the probabilistic 2-mode based method (MAP framework) over the Tucker decomposition based method (ML framework) for small amounts of adaptation data (e.g., one adaptation sentence) to the constraint imposed on the weight. When the amount of adaptation data is small, the weight cannot be reliably estimated in the ML framework, where it is estimated from the adaptation data alone without any constraint, as in the Tucker decomposition based method. The results confirm that constraining the weight in the MAP framework can produce a better model when the amount of adaptation data is small.

The selection of appropriate model-parameter dimensions (e.g., $K_R$ and $K_D$) in probabilistic 2-mode analysis depends on the training models and on the available adaptation data. This selection affects system performance, but how to choose the optimal values is not obvious and requires further study.

In this article, we applied probabilistic tensor analysis to the adaptation of HMM mean vectors to a new speaker. The training models consisted of the mean vectors of HMMs expressed in matrix form and the training set was decomposed by probabilistic 2-mode analysis. The prior distribution of the adaptation parameter was estimated from the training models. Then, the speaker adaptation equation was derived in the MAP framework. Compared with the speaker adaptation method based on Tucker 3-mode decomposition in the ML framework, the proposed method further improved the performance for small amounts of adaptation data.

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.