Abstract

Blind source separation (BSS) and sound activity detection (SAD) from a sound source mixture with minimum prior information are two major requirements for computational auditory scene analysis that recognizes auditory events in many environments. In daily environments, BSS suffers from many problems such as reverberation, the permutation problem in frequency-domain processing, and uncertainty about the number of sources in the observed mixture. While many conventional BSS methods resort to a cascaded combination of subprocesses, e.g., frequency-wise separation and permutation resolution, to overcome these problems, their outcomes may be degraded by the worst subprocess. Our aim is to develop a unified framework to cope with these problems. Our method, called permutation-free infinite sparse factor analysis (PF-ISFA), is based on a nonparametric Bayesian framework that enables inference without a pre-determined number of sources. It solves BSS, SAD and the permutation problem at the same time. Our method has two key ideas: unified source activities for all the frequency bins and the activation probabilities of all the frequency bins of all the sources. Experiments were carried out to evaluate the separation performance and the SAD performance under four reverberant conditions. In separation performance measured with the BSS_EVAL criteria, our method outperformed conventional complex ISFA under all conditions. In SAD performance, our method outperformed the conventional method by 5.9% to 0.5% in F-measure as RT20 increased from 30 to 600 [ms].

Introduction

Computational auditory scene analysis (CASA) aims to find auditory events and extract valuable information from captured sound signals [1, 2]. An overview of a CASA system is depicted in Figure 1. First, the CASA system captures sound signals by using a microphone array. Then, it detects the sound activity of each source and separates the mixture into individual sources. Finally, it visualizes the auditory events or recognizes the separated sound sources. This article focuses on source activity detection (SAD) and sound source separation. SAD is useful for CASA systems because it helps them discover audio sources, especially when a huge amount of archived audio signals is analyzed. SAD also benefits automatic speech recognition: for accurate recognition, it is necessary to extract the voiced parts, a task referred to as voice activity detection [3, 4]. Sound source separation is essential for CASA systems because we often observe a mixture of multiple sound sources in our daily environments. Our goal is to develop a system that performs sound activity detection and sound source separation simultaneously for CASA.

Figure 1

Overview of CASA system and location of our method.

The combination of sound source separation and source activity detection should overcome the following difficulties for real-world applications:

1.

unknown mixing processes,

2.

source number uncertainty,

3.

reverberation, and

4.

performance degradation caused by mutually dependent functions.

The first difficulty indicates that the CASA system should work without information specific to a certain environment or situation, such as the environment's impulse responses or the sound source locations. The second expresses that the CASA system should achieve robust estimation even when the number of sources is unknown. The third means that a mixture of audio signals captured in a room may contain reverberations that affect microphone-array processing. The last means that cascaded processing to cope with the above-mentioned difficulties may be severely affected by the worst subprocess of the CASA system. When source separation is performed in the frequency domain, the output signals are affected by the permutation problem, which is an ambiguity in the output order across frequency bins. Conventional methods take a cascade approach: the mixed signals are separated for each frequency bin first, and then the permutation problem is solved. As mentioned above, the overall performance is then limited by the performance of the worst subprocess.

Our solution for overcoming these difficulties is as follows. The mixing process is modeled stochastically and inferred on the basis of this model. To handle source number uncertainty, we introduce a nonparametric Bayesian approach. The reverberation is absorbed by using frequency-domain processing. Unified analysis of the source separation and permutation resolution is used to optimize these mutually dependent functions.

This article presents a permutation-free infinite sparse factor analysis (PF-ISFA): a joint estimation method that simultaneously achieves frequency-domain source separation and SAD using a minimum amount of prior information. PF-ISFA achieves robust estimation without using prior information about the number of sources. PF-ISFA extends the frequency-domain ISFA [5], which is a nonparametric Bayesian frequency-domain source separation method. We build a generative process that explains the observed sound source mixture and derive a Bayesian inference to retrieve respective sound sources and sound activities. The key idea of PF-ISFA is that all the frequency bins of signals are processed at the same time to avoid the permutation problem. In particular, a unified source activity for all frequency bins is introduced into its generative model.

The rest of this article is organized as follows. Section 2 summarizes the main problem treated, and introduces study related to our method. Section 3 explains conventional ISFA in the time and frequency domains and then introduces our new method PF-ISFA. Section 4 gives detailed posterior inferences of PF-ISFA. Section 5 presents experimental results, and Section 6 concludes this article.

Problem statement and related study

This section starts by summarizing the problem that is solved in this article and the assumptions needed to solve it. After that, studies related to this problem, especially concerning source separation, the permutation problem, and sound detection methods, are introduced.

Problem statement

The problem statement is briefly summarized below.

Input:

Sound mixtures of K sources captured by D microphones.

Output:

Estimated K source signals,

Detected source activities of source signals.

Assumptions:

1.

The number of sources K is not more than the number of microphones D.

2.

The locations of the sources do not change.

The sound activity represents whether or not a sound is active in each time frame, and estimating it enables sound detection. The system estimates the source activities of the K source signals and separates the D mixed signals captured by the microphones into K sources without prior information such as the source locations, the microphone locations, or the impulse responses between the sound sources and the microphones. The first assumption means that this system deals with a determined or over-determined problem. The second assumption means that the mixing process from the sources to the microphones is unchanged.

Requirements

This system should fulfill some requirements in order to work in daily environments. These requirements are summarized as follows.

1.

Blind source separation,

2.

Frequency domain processing,

3.

Permutation resolution,

4.

Robust estimation without source knowledge, and

5.

Unified approach.

These requirements are described in detail below.

Blind source separation

One of the system’s major requirements is to work with the minimum amount of prior information. This is because getting prior information, such as the direction of arrival of sound or the reverberation level of the room, in advance is a troublesome task for the system. In addition, even if the prior information can be obtained, the separation performance is severely affected by the quality of the information. The system should not be dependent on such information. The source separation method that uses the minimum prior information is called blind source separation (BSS).

Frequency domain processing

There are two reasons why frequency domain processing is inevitable for CASA. One is to deal with reverberation and the other is to model source signals using the sparseness of sound energy.

The mixing process of speech signals in our daily surroundings is modeled as a convoluted mixture [6]. The signals captured by the microphones consist of a mixture of signals from various sources, contaminated by reflections, reverberations, and arrival-time lags at the microphones. A convoluted mixture is often used to model these time-delayed signals.

Attempts to solve a BSS problem involving convoluted mixtures of signals mainly use frequency-domain processing. This is because a convoluted mixture in the time domain can be expressed in a simpler form in the frequency domain. Specifically, the short-time Fourier transform (STFT) converts a convoluted mixture in the time domain into instantaneous mixtures for all frequency bins. In other words, the STFT can absorb the reverberation of the source signals within the window length. Thus, frequency-domain processing is effective when BSS is applied to audio signals in practical situations.
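As a quick numerical illustration of this point, the following Python sketch (with arbitrary frame and filter lengths) checks that an interior STFT frame of a convolved signal is approximately the per-bin product of the filter's frequency response and the corresponding source frame:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frame = 1024                      # STFT frame length (assumed, >> filter length)
h = rng.standard_normal(8)          # short room impulse response
s = rng.standard_normal(8 * n_frame)
x = np.convolve(s, h)               # convoluted mixture (single channel)

# Compare the two sides of the approximation on one interior frame:
# FFT of the convolved frame  vs  frequency response times FFT of the source frame.
start = 3 * n_frame
X = np.fft.rfft(x[start:start + n_frame])
S = np.fft.rfft(s[start:start + n_frame])
H = np.fft.rfft(h, n_frame)         # per-bin frequency response of h

rel_err = np.linalg.norm(X - H * S) / np.linalg.norm(X)
print(rel_err)   # small, because the frame is much longer than the impulse response
```

The residual comes from the frame edges; it shrinks as the frame length grows relative to the impulse response, which is exactly why reverberation longer than the STFT window degrades performance (see Section 5).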

Permutation resolution

As mentioned above, the convoluted mixture in the time domain is converted into instantaneous mixtures for individual frequency bins. Many frequency-domain BSS methods independently separate the mixed signals for all the frequency bins; thus, an ambiguity arises in the output order. The system must arrange the separated signals in the correct order for the frequency bins. This is called the “permutation problem”. The permutation problem should be solved in order to achieve frequency-domain BSS.

Robust estimation without source knowledge

Many CASA systems and many source separation methods use prior knowledge about the source signals to improve the robustness of the estimation. For instance, HARK [7] localizes the sound sources before separation by using the number of sources. When independent component analysis (ICA), a well-known BSS method, is applied to the input signals, principal component analysis (PCA) is commonly used as preprocessing for ICA [8] because it reduces the dimensionality of ICA's input signals. However, obtaining prior knowledge about the sources is difficult for the system, so robust estimation without source knowledge is desirable. A nonparametric Bayesian framework is helpful for robust inference without knowing the number of sources.

Unified approach

A unified estimation method enables effective processing because it makes the most of the information available from the observed signals. Many source separation frameworks use a cascaded approach. For instance, HARK [7] localizes the sources first and then separates the observed signals into individual sources; conventional frequency-domain ICA separates the observations and then resolves the permutation problem. One critical weak point of these cascaded approaches is that the separation performance is limited by the performance of the worst subprocess.

Related study

Source separation method of speech signals

Source separation is being actively studied for signal processing. Some methods use the source and microphone locations. Delay-and-sum beamforming and null beamforming are methods that emphasize or suppress the signal from a specific direction. These methods can be implemented with less computational complexity. HARK uses geometric higher-order decorrelation-based source separation (GHDSS) [9]. GHDSS separates mixed signals by using a higher-order decorrelation between the sound source signals and geometric constraints derived from the positional relationships among the microphones. The weak point of these methods is that they require the source and microphone locations. This prior information cannot easily be obtained in advance.

Many BSS methods have already been introduced. One well-known BSS method is ICA, which separates mixed signals on the basis of the statistical independence of different source signals. Many algorithms are used for ICA, such as minimization of mutual information [10], FastICA [11], and JADE [12]. For BSS of speech signals, frequency-domain ICA is commonly used [13]. While ICA does achieve BSS, it does not detect the activities of individual sources; moreover, frequency-domain ICA is plagued by the permutation problem.

ISFA [14] is a BSS method based on the nonparametric Bayesian approach. It achieves SAD and BSS simultaneously, but it is modeled in the time domain, so it is vulnerable to the reverberation that often appears in our daily surroundings.

Frequency-domain ISFA (FD-ISFA), which we proposed in our previous study [5], can handle a convoluted mixture that contains room reverberation. One problem for FD-ISFA is the permutation problem. Conventional FD-ISFA independently separates the signals for all the frequency bins, so it cannot avoid permutation ambiguity.

Permutation problem

Some methods solve the permutation problem by post processing. One method is based on estimation of the direction of arrival and inter-frequency correlation of the signal envelopes [15]; another uses the power ratio of the signals as a dominance measure [16].

Other methods avoid this problem by using a unified criterion from among all frequency bins. Independent vector analysis (IVA) [17] and permutation-free ICA [18] are BSS methods that avoid the permutation problem. These methods are based on ICA and cannot simultaneously achieve sound source detection.

BSS framework achieving SAD

Some BSS frameworks obtain SAD information simultaneously. Switching ICA [19] is a BSS method that can achieve SAD: it employs a hidden Markov model (HMM) to represent whether each source is active or not, and the SAD information is obtained from the estimated hidden variables of the HMM. Non-stationary Bayesian ICA [20] achieves dynamic source separation by estimating the sources and the mixing matrices for each time frame on the basis of variational Bayesian inference. Its SAD information is obtained from automatic relevance determination (ARD) parameters, which are the precision parameters of the probabilistic density of the mixing matrix. Since these methods are time-domain approaches, they are not appropriate for speech separation of convoluted mixtures.

The combination of a maximum signal-to-noise-ratio beamformer, a voice activity detector, and online clustering achieves BSS and SAD [21]. This method is a cascade approach: it performs SAD and time-difference-of-arrival estimation first and then separates the signals using the results. As mentioned above, the weak point of cascaded approaches is that the separation performance is limited by the performance of the worst subprocess.

ISFA

This section first summarizes conventional methods for ISFA: Section 3.1 shows the model of ISFA in the time domain [14], and Section 3.2 explains its expansion into the frequency domain (FD-ISFA) [5] and its problems. Then, Section 3.3 describes a model of FD-ISFA without permutation ambiguity (PF-ISFA).

ISFA in time domain

ISFA [14] achieves BSS of instantaneous mixtures of time-domain signals without knowing the number of sources. It is based on the following instantaneous mixture model, which expresses that D × T observed data X is composed of a linear combination of K × T source signals S.

X=A(Z⊙S)+E,

(1)

where A is a D × K mixing matrix, E is a D × T Gaussian noise term, and Z is a K × T binary mask on S. ⊙ denotes element-wise multiplication. Let xdt, adk, zkt, skt, and εdt be the elements of X, A, Z, S, and E, respectively. The generative model of ISFA is shown in Figure 2. σA² and σε² are the variance parameters of the elements of A and E.
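As an illustration, the generative model of Equation (1) can be simulated in a few lines; the sizes and parameter values below are arbitrary toy choices, not those used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
D, K, T = 4, 3, 100                 # mics, sources, time frames (toy sizes)
sigma_A, sigma_e = 1.0, 0.1         # std. dev. of mixing weights and noise

A = sigma_A * rng.standard_normal((D, K))        # mixing matrix
S = rng.standard_normal((K, T))                  # source signals, N(0, 1)
Z = (rng.random((K, T)) < 0.5).astype(float)     # binary activity mask on S
E = sigma_e * rng.standard_normal((D, T))        # Gaussian noise term

X = A @ (Z * S) + E                              # Equation (1)
print(X.shape)   # (4, 100)
```

Inference then runs this process in reverse: given only X, the sampler recovers A, S, and Z.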

Figure 2

Graphical model of conventional ISFA.

The priors of these parameters are as follows:

εt∼N(0,σε2I),σε2∼IG(pε,qε),

(2)

skt∼N(0,1),

(3)

ak∼N(0,σA2I),σA2∼IG(pA,qA),and

(4)

Z∼IBP(α),α∼G(pα,qα).

(5)

Here, ak is the k-th column of A, and pε, qε, pA, qA, pα, and qα are hyperparameters. IBP(α) is the Indian buffet process (IBP) [22] with concentration parameter α, a stochastic process that can deal with a potentially infinite number of signals. It is used in order to achieve separation without prior knowledge of the number of sources.
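For concreteness, the standard "restaurant-style" sampler for IBP(α) can be sketched as follows; the mapping of rows to sources and columns to time frames mirrors the use of Z in this article, but the routine itself is a generic IBP draw:

```python
import numpy as np

def sample_ibp(alpha, T, rng):
    """Draw a binary activity matrix from IBP(alpha) over T time frames.
    Rows = sources ("dishes"), columns = frames ("customers")."""
    Z = np.zeros((0, T), dtype=int)
    counts = np.zeros(0, dtype=int)      # how many frames use each source so far
    for t in range(T):
        if len(counts) > 0:
            # existing source k is active in frame t with prob counts[k] / (t + 1)
            on = rng.random(len(counts)) < counts / (t + 1)
            Z[on, t] = 1
            counts += on
        # brand-new sources introduced by this frame: Poisson(alpha / (t + 1))
        k_new = rng.poisson(alpha / (t + 1))
        if k_new > 0:
            new_rows = np.zeros((k_new, T), dtype=int)
            new_rows[:, t] = 1
            Z = np.vstack([Z, new_rows])
            counts = np.concatenate([counts, np.ones(k_new, dtype=int)])
    return Z

rng = np.random.default_rng(0)
Z = sample_ibp(alpha=2.0, T=50, rng=rng)
print(Z.shape[0])   # number of sources: drawn by the process, not fixed in advance
```

The expected number of sources grows only logarithmically with T (roughly α times the T-th harmonic number), which is what keeps the effective model size modest.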

In the time domain, each element of X, A, S, and E is a real-valued variable. Each of these variables has a normal distribution as a prior. N(μ,σ2) is a normal distribution with mean μ and variance σ2. The probability density function of this normal distribution is

N(x; μ, σ²) = (1/√(2πσ²)) exp(−(x − μ)²/(2σ²)).

(6)

The IBP concentration parameter has a gamma prior, and the variance parameters of A and E have inverse gamma priors. G(b,θ) and IG(b,θ) are gamma distribution and the inverse gamma distribution with shape parameter b and scale parameter θ, respectively. The probability density functions of these distributions are

G(x; b, θ) = (x^(b−1)/(Γ(b)θ^b)) exp(−x/θ), and

(7)

IG(x; b, θ) = (x^(−(b+1))/(Γ(b)θ^b)) exp(−1/(θx)).

(8)
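These two densities use a scale parameterization in which y ∼ G(b, θ) implies 1/y ∼ IG(b, θ). A short numerical check (simple Riemann sums over a wide grid) confirms that both densities, as written, integrate to one:

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(x, b, theta):
    """G(x; b, theta) = x^(b-1) / (Gamma(b) theta^b) * exp(-x / theta)."""
    return x**(b - 1) / (gamma_fn(b) * theta**b) * np.exp(-x / theta)

def inv_gamma_pdf(x, b, theta):
    """IG(x; b, theta) = x^(-(b+1)) / (Gamma(b) theta^b) * exp(-1 / (theta x))."""
    return x**(-(b + 1)) / (gamma_fn(b) * theta**b) * np.exp(-1.0 / (theta * x))

# Riemann sums over a grid wide enough to capture essentially all the mass.
x = np.linspace(1e-4, 200.0, 400_000)
dx = x[1] - x[0]
g_total = float(np.sum(gamma_pdf(x, 2.0, 3.0)) * dx)
ig_total = float(np.sum(inv_gamma_pdf(x, 2.0, 3.0)) * dx)
print(g_total, ig_total)   # both ≈ 1.0
```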

A Bayesian hierarchical model aims to explain the uncertainty in the model from the observed data by treating latent variables as probabilistic variables rather than fixed values. In our model, we place a gamma prior on the concentration parameter of the IBP so that the emergence of sources in Z can be controlled by the observed data.

ISFA in frequency domain

Since the convoluted mixture is converted into complex spectra by using STFT, the elements of X, S, A, and E become complex-valued variables. FD-ISFA is a model for complex values that arises in frequency-domain processing. It can deal with an instantaneous mixture of complex spectra.

The generative model is the same as for time-domain ISFA. However, the priors of these complex-valued elements are different from those of time-domain ISFA.

εt∼NC(0,σε2I),

(9)

skt∼NC(0,1),and

(10)

ak∼NC(0,σA2I)

(11)

Here, instead of the normal distribution, a univariate complex normal distribution NC is used for the complex-valued parameters. The probability density function of this distribution is

NC(x; μ, σ²) = (1/(πσ²)) exp(−|x − μ|²/σ²).

(12)
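A circularly symmetric complex normal variable can be drawn by giving the real and imaginary parts variance σ²/2 each; the sketch below (arbitrary parameter values) checks that E|x − μ|² ≈ σ²:

```python
import numpy as np

def complex_normal(mu, sigma2, size, rng):
    """Circularly symmetric complex normal N_C(mu, sigma2): independent
    real and imaginary parts, each with variance sigma2 / 2."""
    std = np.sqrt(sigma2 / 2.0)
    return mu + std * (rng.standard_normal(size) + 1j * rng.standard_normal(size))

rng = np.random.default_rng(0)
x = complex_normal(0.0, 2.0, 100_000, rng)
power = float(np.mean(np.abs(x)**2))
print(power)   # ≈ 2.0, since E|x - mu|^2 = sigma2
```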

Conjugacy is one of the helpful properties in Bayesian inference: if we choose a conjugate prior, the posterior has a closed-form expression. The variances σε² and σA² have conjugate inverse gamma priors, and the Gaussian conjugate prior can be used for the mixing matrix A. For simplicity, the univariate complex normal distribution is introduced as a conjugate prior for the source signals S. Note that a super-Gaussian prior, such as a Student-t or Laplace distribution, would be more appropriate for speech signals; however, the complex extension of these distributions is non-trivial. We do not deal with complex super-Gaussian priors in this article and leave them for future study.

The processing flow of FD-ISFA is as follows. After the STFT, the complex spectra are whitened in each frequency bin, and FD-ISFA is applied to each frequency bin independently. FD-ISFA is plagued by two well-known ambiguities of frequency-domain BSS: the scaling ambiguity and the permutation ambiguity. The scaling ambiguity means that the amplitude of the output signals may not equal that of the original sources. Post-processing methods are needed to resolve these two ambiguities: the projection back method [23] is an effective solution for the scaling ambiguity, and the permutation ambiguity is solved by using the methods mentioned above [15, 16]. After these problems have been solved, the estimated complex spectra are assembled into source signals by using the inverse STFT.

New method: PF-ISFA

Our new method, permutation-free ISFA (PF-ISFA), achieves both BSS and SAD without being affected by the permutation problem. Its key idea for avoiding the permutation problem is a unified activity for each time frame. Conventional ISFA is applied independently to each frequency bin; that is, it does not consider any relations across frequency bins, and this is the main cause of the permutation problem. By contrast, in the PF-ISFA model, all frequency bins are tied together by the activity matrix. Since this unified activity controls the output order of the source signals, PF-ISFA is not affected by the permutation problem.

The flow of PF-ISFA is depicted in Figure 3, and the generative process of PF-ISFA is described in Figure 4. Let F be the number of frequency bins. PF-ISFA is also based on an instantaneous mixture for each frequency bin, and it deals with all F frequency bins at the same time. The elements of X, A, Z, S, and E are defined as xfdt, afdk, zfkt, sfkt, and εfdt, respectively.

Figure 3

Schematic overview for our method.

Figure 4

Graphical model of PF-ISFA.

The following model is introduced to unify the activities of all frequency bins.

zfkt=bktϕ,ϕ∼Bernoulli(ψkf),

(13)

where Bernoulli(x) is the Bernoulli distribution with parameter x, bkt is the unified source activity of source k at time t, and ψkf is the probability that source k becomes active (the activation probability) in the f-th frequency bin. B denotes the K × T matrix of bkt, and Ψ denotes the K × F matrix of ψkf. Let β be a hyperparameter. The prior distributions of the newly introduced variables are assumed to be as follows:

B∼IBP(α),α∼G(pα,qα),and

(14)

Ψ ∼ Beta(β/K, β(K − 1)/K).

(15)
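The following sketch draws Z from B and Ψ according to Equation (13). For brevity, B is a simple Bernoulli stand-in rather than an IBP draw; the point is only the masking structure, i.e., that a time-frequency cell can be active only within an active frame:

```python
import numpy as np

rng = np.random.default_rng(0)
K, T, F, beta = 3, 200, 64, 0.5    # toy sizes and hyperparameter

B = (rng.random((K, T)) < 0.3).astype(int)            # stand-in for an IBP draw
Psi = rng.beta(beta / K, beta * (K - 1) / K, (K, F))  # activation probabilities

# z_{fkt} = b_{kt} * phi,  phi ~ Bernoulli(psi_{kf}):
# frequency-bin activity is gated by the unified per-frame activity.
Z = np.array([(rng.random((K, T)) < Psi[:, [f]]).astype(int) * B
              for f in range(F)])

print(Z.shape)                       # (F, K, T)
print(int(np.any(Z[:, B == 0])))     # 0: no activity outside active frames
```

Because every frequency bin shares the same B, the output order of the sources is tied across bins, which is exactly what removes the permutation ambiguity.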

PF-ISFA estimates the source signals S, their time-frequency activities Z, the mixing matrix A, unified activities B, activation probabilities Ψ, and other parameters by using only the observed signal X.

One of the main differences between the PF-ISFA model and the conventional ISFA model is the unified activity matrix B for each time frame and the activation probability matrix Ψ for each frequency bin. A graphical model of conventional ISFA is shown in Figure 2. Whereas each frequency bin is estimated independently in the conventional ISFA model, all frequency bins are bundled together by the unified activity matrix in the PF-ISFA model.

Here, all data points are assumed to be independent and identically distributed. The smaller the sum of the noise terms is, the higher the likelihood of PF-ISFA is.

Inference of PF-ISFA

The model parameters of PF-ISFA are estimated by using an iterative algorithm based on the nonparametric Bayesian model. Sound source separation and SAD are achieved by estimating sfkt and bkt, respectively. The parameter update algorithm is given as follows.

This method is based on the Metropolis-Hastings algorithm [24]. The posterior distributions of the latent variables are derived from Bayes’ theorem by multiplying the priors by the likelihood function.

Sound sources

When zfkt is active, sfkt is sampled from its posterior distribution.

Source activity of each time-frequency frame

If bkt = 1, zfkt is sampled from its posterior distribution. The posterior of zfkt is calculated as follows.

P(zfkt | bkt, ψkf, z−fkt, xft, sft, Af) ∝ Pp Pl,

(19)

where Pl = P(xft | Af, sft, zft, σε²) is the likelihood term and Pp = P(zfkt | bkt, ψkf) is the prior term.

Then, the following posterior distribution is derived.

P(zfkt | bkt, ψkf, z−fkt, xft, sft, Af) = Bernoulli(p1/(p0 + p1)),

(20)

where

log(p1) = log(ψkf) + (2Re(sfkt∗ afkH ε−fkt) − |sfkt|² afkH afk)/σε²

(21)

log(p0)=log(1-ψkf).

(22)
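A minimal sampler for this step might look as follows; the helper name and the test values are illustrative, and the likelihood term assumes the standard complex-Gaussian form in which ε−fkt denotes the residual of frame t with source k's contribution removed:

```python
import numpy as np

def sample_z(psi_kf, s_fkt, a_fk, eps_minus, sigma_e2, rng):
    """Sample z_{fkt} from a Bernoulli posterior of the form of Eqs. (20)-(22).
    eps_minus: residual of frame t with source k's contribution removed."""
    log_p1 = (np.log(psi_kf)
              + (2.0 * np.real(np.conj(s_fkt) * (np.conj(a_fk) @ eps_minus))
                 - abs(s_fkt)**2 * np.real(np.conj(a_fk) @ a_fk)) / sigma_e2)
    log_p0 = np.log(1.0 - psi_kf)
    # numerically stable Bernoulli(p1 / (p0 + p1))
    p_active = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
    return int(rng.random() < p_active)

rng = np.random.default_rng(0)
a = np.array([1.0 + 0j, 1j])
# a residual that matches a * s very well -> activation is overwhelmingly likely
z = sample_z(psi_kf=0.5, s_fkt=2.0 + 0j, a_fk=a,
             eps_minus=(2.0 + 0j) * a, sigma_e2=0.1, rng=rng)
print(z)   # 1
```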

Unified activity for each time frame

To calculate the ratio of the probability that bkt becomes active to the probability that bkt becomes inactive, we use Equation (23). This ratio r is divided into two parts: the ratio of the priors rp and the ratio of the likelihoods of the f-th frequency bin rl,f.

To decide whether or not bkt is active, we sample u from Uniform(0,1) and compare it with r / (1 + r). If u ≤ r / (1 + r), then bkt becomes active; otherwise, it remains dormant.
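This decision rule can be sketched as follows; working in log space keeps the product of the per-bin likelihood ratios from underflowing (the function name and the ratio values are illustrative):

```python
import numpy as np

def sample_b(r_prior, r_lik_per_bin, rng):
    """Decide b_{kt} from the prior ratio r_p and per-bin likelihood ratios
    r_{l,f}: r = r_p * prod_f r_{l,f}, and 'active' wins with prob r / (1 + r)."""
    log_r = np.log(r_prior) + np.sum(np.log(r_lik_per_bin))
    p_active = 1.0 / (1.0 + np.exp(-log_r))   # = r / (1 + r)
    return int(rng.random() < p_active)

rng = np.random.default_rng(0)
# moderate evidence for activity in each of 64 bins compounds to near-certainty
b = sample_b(r_prior=0.5, r_lik_per_bin=np.full(64, 1.5), rng=rng)
print(b)   # 1
```

Note how evidence accumulates across all F bins before the single per-frame decision is made; this is the mechanism that keeps the frame-level activity unified.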

Number of new sources

Some source signals that were not active at the beginning are active at time t for the first time. Let κt be the number of these sources. This κt is sampled with the Metropolis-Hastings algorithm.

First, the prior distribution of κt is P(κt | α) = Poisson(α/T). After sampling κt, we initialize the new sources and their activities. Next, we decide whether this update is acceptable or not. Let ξ and ξ∗ be the current state (i.e., the condition before the transition) and the candidate next state (the condition after the transition), respectively. The acceptance probability of the transition is min(1, rξ→ξ∗). According to Meeds [25] and Knowles [14], rξ→ξ∗ is the ratio of the likelihood of the candidate state to that of the current state. This ratio can be calculated as follows.

rξ→ξ∗ = ∏f=1F (det Λξ,f)−1 exp(μξ,fH Λξ,f μξ,f),

(27)

where

Λξ,f = I + Af∗H Af∗/σε², Λξ,f μξ,f = (1/σε²) Af∗H εft.

Here, Af∗ is the D × κt matrix of the additional columns of Af. When κt new sources appear, the mixing matrix is expanded from D × K to D × (K + κt); Af∗ is the mixing matrix for these new sources.
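Under these definitions, the log of the acceptance ratio in Equation (27) can be computed as follows (a sketch with random illustrative inputs; Af∗ and εft are passed per frequency bin):

```python
import numpy as np

def log_accept_ratio(A_new_per_f, eps_per_f, sigma_e2):
    """log r_{xi -> xi*} for kappa_t proposed new sources, per Equation (27).
    A_new_per_f: list of D x kappa_t matrices; eps_per_f: list of D residuals."""
    log_r = 0.0
    for A_new, eps in zip(A_new_per_f, eps_per_f):
        k = A_new.shape[1]
        Lam = np.eye(k) + (A_new.conj().T @ A_new) / sigma_e2
        mu = np.linalg.solve(Lam, (A_new.conj().T @ eps) / sigma_e2)
        sign, logdet = np.linalg.slogdet(Lam)
        log_r += -logdet + np.real(mu.conj() @ Lam @ mu)
    return log_r

rng = np.random.default_rng(0)
F, D, kappa = 4, 3, 1
A_new = [rng.standard_normal((D, kappa)) + 1j * rng.standard_normal((D, kappa))
         for _ in range(F)]
eps = [rng.standard_normal(D) + 1j * rng.standard_normal(D) for _ in range(F)]

log_r = log_accept_ratio(A_new, eps, sigma_e2=1.0)
accept_prob = min(1.0, float(np.exp(log_r)))
print(0.0 <= accept_prob <= 1.0)   # True
```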

Activation probability for each frequency bin

Here, ψkf is sampled from its conjugate posterior Beta(β/K + nkf, β(K − 1)/K + mk − nkf), where nkf = ∑t=1T zfkt is the number of active time-frequency frames of source k in the f-th frequency bin, and mk = ∑t=1T bkt is the number of active time frames of source k.

Mixing matrix

The mixing matrix is estimated in each column. The posterior distribution is

P(afk | Af,−k, Sf, Xf, Zf) ∝ P(Xf | Af, Sf, Zf, σε²) P(afk | σA²) = NC(afk; μA, ΛA−1),

(29)

where

ΛA = (sfkH sfk/σε² + 1/σA²) ID, μA = (σA²/(σA² sfkH sfk + σε²)) Ef|afk=0 sfk.
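A sketch of this sampling step is given below; the conjugation convention for the residual-times-source term is an assumption on our part, and the toy test simply checks that the posterior concentrates near the true column when the residual is dominated by a single source:

```python
import numpy as np

def sample_a_fk(s_fk, E_without_k, sigma_e2, sigma_A2, rng):
    """Sample column a_{fk} from its complex-normal posterior (cf. Eq. (29)).
    E_without_k: D x T residual with source k's contribution removed."""
    ss = float(np.real(s_fk.conj() @ s_fk))          # s^H s (real scalar)
    lam = ss / sigma_e2 + 1.0 / sigma_A2             # posterior precision per dim
    mu = (sigma_A2 / (sigma_A2 * ss + sigma_e2)) * (E_without_k @ s_fk.conj())
    D = E_without_k.shape[0]
    std = np.sqrt(1.0 / (2.0 * lam))                 # complex normal: var/2 per part
    return mu + std * (rng.standard_normal(D) + 1j * rng.standard_normal(D))

rng = np.random.default_rng(0)
T, D = 500, 2
a_true = np.array([1.0 + 0.5j, -0.3 + 1.0j])
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
E = np.outer(a_true, s) + 0.01 * rng.standard_normal((D, T))

a_hat = sample_a_fk(s, E, sigma_e2=0.01, sigma_A2=1.0, rng=rng)
print(np.round(a_hat, 2))   # close to a_true
```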

Variance of noise and mixing matrix

The variance of noise corresponds to the noise level of the estimated signals, and the variance of the mixing matrix affects the scale of the estimated signals. Their posteriors are as follows.

P(σε²|E) ∝ P(E|σε²) P(σε²|pε, qε) = IG(σε²; pε + FTD, qε/(1 + qε ∑f=1F tr(EfH Ef))).

(30)

P(σA²|A) ∝ P(A|σA²) P(σA²|pA, qA) = IG(σA²; pA + FDK, qA/(1 + qA ∑f=1F tr(AfH Af)))

(31)
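Using the IG parameterization defined earlier (1/σ² ∼ G(b, θ)), Equation (30) can be sampled by drawing a gamma variate and inverting it, as in the sketch below (toy residuals with a known variance):

```python
import numpy as np

def sample_sigma_e2(E_per_f, p_e, q_e, rng):
    """Sample sigma_e^2 from IG(p_e + FTD, q_e / (1 + q_e sum_f tr(E_f^H E_f))),
    using the parameterization in which 1/sigma^2 ~ Gamma(b, theta)."""
    F = len(E_per_f)
    D, T = E_per_f[0].shape
    total = sum(float(np.real(np.trace(Ef.conj().T @ Ef))) for Ef in E_per_f)
    b = p_e + F * T * D
    theta = q_e / (1.0 + q_e * total)
    return 1.0 / rng.gamma(shape=b, scale=theta)   # inverse of a gamma draw

rng = np.random.default_rng(0)
true_var = 0.25
E = [np.sqrt(true_var / 2.0) * (rng.standard_normal((2, 400))
     + 1j * rng.standard_normal((2, 400))) for _ in range(8)]

var_sample = sample_sigma_e2(E, p_e=1.0, q_e=1.0, rng=rng)
print(var_sample)   # close to the true 0.25
```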

Concentration parameter of IBP

The posterior distribution of concentration parameter α is

p(α|B) ∝ P(B|α) P(α|pα, qα) = G(α; K+ + pα, qα/(1 + qα HT)),

(32)

where K+ is the number of active sources, and Hn = ∑j=1n (1/j) is the n-th harmonic number.
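This update can be sketched directly (the stand-in B below is an arbitrary binary matrix, not an IBP draw):

```python
import numpy as np

def sample_alpha(B, p_alpha, q_alpha, rng):
    """Sample the IBP concentration alpha from
    G(K+ + p_alpha, q_alpha / (1 + q_alpha * H_T)) given activity matrix B."""
    K_plus = int(np.sum(B.sum(axis=1) > 0))        # number of active sources
    T = B.shape[1]
    H_T = float(np.sum(1.0 / np.arange(1, T + 1))) # T-th harmonic number
    theta = q_alpha / (1.0 + q_alpha * H_T)
    return rng.gamma(shape=K_plus + p_alpha, scale=theta)

rng = np.random.default_rng(0)
B = (rng.random((5, 100)) < 0.2).astype(int)
alpha = sample_alpha(B, p_alpha=3.2, q_alpha=0.21, rng=rng)
print(alpha > 0)   # True
```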

Experimental results

In this section, we evaluate the separation performance and the SAD accuracy. Section 5.1 presents the separation and SAD performance compared with FD-ISFA [5]. Section 5.2 shows the separation results compared with PF-ICA [18] using two or four microphones (D = 2, 4) and various source locations.

Compared with FD-ISFA

The experiments used simulated mixtures in four rooms with reverberation times of 20, 150, 400, and 600 [ms]. The simulated mixtures were generated by convolving the impulse responses measured in the rooms. These impulse responses were recorded by using the microphone array depicted in Figure 5. We used two microphones in these experiments (D = 2). The microphone and source locations are shown in Figure 6, and the experimental conditions are listed in Table 1. For each condition, 200 mixtures using JNAS phoneme-balanced sentences were tested.

Figure 5

Microphone array for measuring impulse response.

Figure 6

Locations of microphones and sources of Section 5.1.

Table 1

Experimental conditions

No. of sources K: 2
Sampling rate: 16 [kHz]
STFT window length: 64 [ms]
STFT shift length: 32 [ms]
Iterations: 300
Hyperparameters: (pε, qε) = (10000, 1.0), (pA, qA) = (1.1, 0.1), (pα, qα) = (3.2, 0.21), β = 0.5

The values of these hyperparameters were selected empirically. A small σε² forces the noise term to be small, so pε and qε were set to 10000 and 1.0 to obtain a small variance. In contrast, σA² should be kept moderate because it affects the amplitudes of the output signals: if σA² is too large, the power of the estimated signals becomes small, and these signals are then judged to be inactive.

Separation performance

First, an example of the experimental results obtained from the separation experiment using mixed signals (D = 2) in a room with reverberation time of 20 [ms] is shown. Spectrograms of a source signal, a signal separated using PF-ISFA, a signal separated using conventional FD-ISFA, and a permutation-aligned signal separated using FD-ISFA are shown in Figures 7, 8, 9, and 10, respectively.

Figure 7

Spectrogram of source signal.

Figure 8

Spectrogram of PF-ISFA separated signal.

Figure 9

Spectrogram of FD-ISFA separated signal.

Figure 10

Spectrogram of permutation-aligned FD-ISFA separated signal.

When FD-ISFA is used, the result shown in Figure 9 contains many horizontal lines; there are fewer of these lines in Figure 10. These lines belong to the spectrogram of the other separated signal, which means that the output orders of the FD-ISFA results are not aligned across frequency bins. By contrast, there are no horizontal lines in the spectrogram of PF-ISFA (Figure 8). This shows that the output order is aligned; in other words, the permutation problem has been solved by PF-ISFA.

The spectrogram shown in Figure 8 has a vivid temporal structure. This indicates that the constraint imposed by the unified activity is too strong, so the activation probability of each frequency bin becomes almost one. To mitigate this, we could introduce a hyperparameter that controls the activation probability appropriately for the observed signals.

We also evaluated our method in terms of the signal-to-distortion ratio (SDR), the image-to-spatial distortion ratio (ISR), the source-to-interference ratio (SIR), and the source-to-artifacts ratio (SAR) [26]. SDR is an overall measure of the separation performance; ISR is a measure of the correctness of the inter-channel information; SIR is a measure of the suppression of the interference signals; and SAR is a measure of the naturalness of the separated signals.

The results are summarized in Table 2. Larger values mean better separation. "Non-Perm" was calculated from the output signals themselves; in other words, their permutations were not aligned. "Solver" means that the permutations were aligned by using the inter-frequency correlation of the signal envelopes. "Perm" means that the permutations of the output signals were aligned by using the correlation between the outputs and the original sources; in other words, the permutations were aligned by using the original source signals as a reference.

One reason for the poor performance of FD-ISFA is its cascade approach. The results show that FD-ISFA achieves better performance if the permutation problem is perfectly solved; therefore, the poor performance comes from the permutation solver. This indicates that the overall performance of a cascade approach is severely affected by the performance of the worst subprocess.

These results show that the performance in rooms with reverberation times of 150, 400, and 600 [ms] is worse than that for the RT20 = 30 [ms] condition. This is because the reverberation times of these rooms are longer than the STFT window length (64 [ms]). If the reverberation time is longer than the STFT window length, the reverberation affects multiple time frames, which degrades the performance.

The results of PF-ISFA (Perm) and PF-ISFA (Non-Perm) differ. If the source activity results are poor, the activities of the two separated signals become similar. In this case, the permutation ambiguity is likely to arise because the unified activity matrix becomes meaningless. In other words, PF-ISFA gives better results when the source signals have different activities.

SAD performance

Next, we evaluated our method in terms of SAD accuracy. The SAD result of PF-ISFA was obtained from the estimated unified source activities, that is, the parameter bkt in Section 3.3. Since FD-ISFA estimates the sound activity of each frequency bin independently, we counted the number of active bins in each time frame and determined the source activity of each time frame by thresholding this count.

Our method achieved a better precision rate and a lower recall rate than FD-ISFA, and the results show that PF-ISFA achieved robust SAD performance under reverberant conditions. This is because PF-ISFA estimates the source activities using a unified parameter for all frequency bins: it is less likely to decide that a time frame is active merely because some frequency bins have a certain power level.
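For reference, the frame-wise precision, recall, and F-measure used in this evaluation can be computed as follows (toy reference and estimated activity vectors):

```python
import numpy as np

def sad_scores(b_est, b_ref):
    """Frame-wise precision, recall, and F-measure for binary activity vectors."""
    tp = int(np.sum((b_est == 1) & (b_ref == 1)))   # correctly detected frames
    fp = int(np.sum((b_est == 1) & (b_ref == 0)))   # false alarms
    fn = int(np.sum((b_est == 0) & (b_ref == 1)))   # missed frames
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

b_ref = np.array([0, 1, 1, 1, 0, 0, 1, 0])   # reference activity
b_est = np.array([0, 1, 1, 0, 0, 1, 1, 0])   # estimated activity
scores = sad_scores(b_est, b_ref)
print(scores)   # (0.75, 0.75, 0.75)
```

Higher precision with lower recall, as observed for PF-ISFA, corresponds to fewer false alarms at the cost of some missed frames.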

Compared with PF-ICA

In the second experiment, we used two or four microphones (D = 2, 4) to observe a mixture of two sound sources with intervals θ = 60, 120, and 180 [deg]. For each interval, 20 mixtures were tested using JNAS phoneme-balanced sentences. The microphone and source locations are shown in Figure 11; the red microphones were used when D = 2. To calculate SDR, ISR, SIR, and SAR when using four microphones, the two estimated signals that maximize the SDR score were chosen.

Figure 11

Locations of microphones and sources of Section 5.2.

The average SDR and SIR of separated signals are shown in Figures 12 and 13 for each interval when D = 2 and 4, respectively. Table 4 summarizes average SDR, ISR, SIR, and SAR of all intervals.

Table 4 indicates that PF-ISFA achieves a better average SIR except under the RT20 = 150 [ms] condition. This means that PF-ISFA can suppress the interference signals better than PF-ICA. PF-ISFA and PF-ICA give similar average SDR results when D = 2, and the SDR score of PF-ISFA is lower than that of PF-ICA when D = 4. This is because the SDR scores are affected by the SAR scores. The output signals of PF-ICA are created by multiplying the separation matrix by the observed signals, so artificial noise is unlikely to emerge. In contrast, PF-ISFA estimates the source signals by sampling, and its output is based on the best single sample among all the samples created during estimation.

Conclusion and future study

This article presented a joint estimation method of BSS and SAD in the frequency domain that also solves the permutation problem. It was designed by using a nonparametric Bayesian approach. Unified source activity was introduced to automatically align the permutations of the output order for all frequency bins.

In the future, we will evaluate the separation performance of a mixture of signals from three or more talkers. We will attempt to develop a method that can separate mixtures with longer reverberations (i.e., longer than the STFT window length) robustly. Last but not least, the method should be sped up to achieve real-time processing so that it can be applied to robot applications.

Declarations

Acknowledgements

This study was partially supported by KAKENHI and Honda Research Institute Japan Inc., Ltd.


Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.