
System and apparatus for speech communication and speech recognition

US 20040193411 A1

Abstract

A headset system is proposed including a headset unit to be worn by a user and having two or more microphones, and a base unit in wireless communication with the headset. Signals received from the microphones are processed using a first adaptive filter to enhance a target signal, and then divided and supplied to a second adaptive filter arranged to reduce interference signals and a third filter arranged to reduce noise. The outputs of the second and third filters are combined, and may be subject to further processing in the frequency domain. The results are transmitted to a speech recognition engine.

Images (20)

Claims (29)

1. A headset system including a base unit and a headset unit to be worn by a user and having a plurality of microphones, the headset unit and base unit being in mutual wireless communication, and at least one of the base unit and the headset unit having digital signal processing means arranged to perform signal processing in the time domain on audio signals generated by the microphones, the digital signal processing means including at least one adaptive filter to enhance a wanted signal in the audio signals and at least one adaptive filter to reduce an unwanted signal in the audio signals.

2. A headset system according to claim 1 in which the base unit includes a cradle for holding the headset unit.

3. A headset system according to claim 1 in which the headset unit is associated with a loudspeaker operable by the headset unit for generating audio signals to the user.

4. A headset system according to claim 1 in which the digital signal processing means includes:

a first adaptive filter arranged to enhance a target signal in the digital signals, and

a second adaptive filter and a third adaptive filter, each receiving the output of the first adaptive filter,

the second filter being arranged to suppress unwanted interference signals, and the third filter being arranged to suppress noise signals.

5. A headset system according to claim 4 in which the digital signal processing means is adapted to combine the outputs of the second and third adaptive filters, convert to the frequency domain and perform further processing in the frequency domain.

6. A headset system according to claim 5 in which an output Si(t) of the second filter and an output Sn(t) of the third filter are linearly combined using weighting factors to derive two interference signals, a first of the interference signals Ic being subtracted from the output of the first filter, and a second of the interference signals Is being converted into the frequency domain.

7. A headset system according to claim 4 in which the second and third filters are not adapted if it is determined that a target signal is present.

8. A headset system according to claim 4 in which the second filter is not updated if it is determined that an interference signal is not present.

9. A headset system according to claim 7 in which signal energy is determined at intervals, and at least one noise threshold is derived from a plurality of values of the signal energy, said determination including determining whether a further signal energy is above the noise threshold.

10. A headset system according to claim 9 in which the derivation of said noise threshold includes using the plurality of signal energy values to derive a histogram representing the statistical frequencies of signal energy values in each of a number of bands, and deriving the noise threshold from a signal energy value Emax associated with the band having the highest histogram value.

11. A headset system according to any one of claims 4 to 10 in which the digital signal processing means comprises a fourth adaptive filter for determining the direction of arrival of the target signal.

12. A headset system according to claim 11 in which the weights of the fourth adaptive filter are updated including repeatedly performing an update process which attenuates each existing weight value by a forgetting factor α.

13. A headset system according to claim 11, in which the digital signal processing means is adapted to determine a ratio Pk indicating the ratio of the highest central weight value A of the fourth adaptive filter to the sum of A and the highest peripheral weight value B, the digital signal processing means only adapting the first filter if the ratio Pk is above a given value TPk1.

14. A headset system according to claim 13 in which, following an adaptation of the first filter, the digital signal processing means calculates a new value Pk2 of the ratio, determines whether the value of Pk2 is below the previous maximum value of Pk2 and below a threshold TPk, and if so restores at least one of the first, second and third filters to its previous state.

15. A headset system according to claim 13 when dependent on claim 8 in which the determination that an interference signal is not present includes a determination that the value of said ratio is below a threshold TPk2.

16. A headset system according to claim 4 in which the weights of the second filter are adapted by a weight updating factor μ which varies inversely with an error output ec1 of the second filter.

17. A headset system according to claim 5 in which the combined signals are transformed into two frequency domain signals, a desired signal Sf and an interference signal If, Sf and If are transformed into respective modified spectra Ps and Pi, and the modified spectra are warped into respective Bark spectra Bs and Bi.

18. A headset system according to claim 17 in which, prior to said warping, frequency scanning is applied to the modified spectra Ps and Pi, and peaks which are found to be common to both are attenuated in Pi.

19. A headset system according to claim 17 in which a ratio is derived of the sum of the values of Bs over the Bark critical bands up to the voice band upper cutoff, and the sum of the values of Bs over the Bark critical bands at and above the unvoiced band lower cutoff.

20. A headset system according to claim 19 in which, if the ratio is above a given threshold, the values of Bs above the unvoiced band lower cutoff are amplified.

21. A headset system according to claim 1 further including

a speech recognition engine receiving the output of the digital signal processing means.

22. A headset system according to claim 21 in which the speech recognition engine receives from the digital signal processing means information indicating any one or more of:

23. A headset system according to claim 1 in which the headset unit comprises two arms for location proximate the mouth of the user and for positioning to either side of the user's head.

24. A headset system according to claim 23 in which the headset is suitable for being worn supported on the user's shoulders with the arms embracing the user's neck.

25. A headset system according to claim 23 in which at least one microphone is provided on a free end of each of the arms.

26. A headset unit for use in the headset system of claim 1.

27. A method of processing signals received from an array of sensors comprising the steps of sampling and digitising the received signals and processing the digitally converted signals, the processing including:

filtering the digital signals using a first adaptive filter arranged to enhance a target signal in the digital signals,

transmitting the output of the first adaptive filter to a second adaptive filter and to a third adaptive filter, the second filter being arranged to suppress unwanted interference signals, and the third filter being arranged to suppress noise signals; and

combining the outputs of the second and third filters.

28. Signal processing apparatus arranged to carry out a method according to claim 27.

29. A microphone headset comprising first and second microphones disposed at respective ends of a support, the support being adapted to be worn around the neck or head of a user.

Description

FIELD OF THE INVENTION

[0001] The present invention relates to a system and apparatus for speech communication and speech recognition. It further relates to signal processing methods which can be implemented in the system.

BACKGROUND OF THE INVENTION

[0002] The present applicant's PCT application PCT/SG99/00119, the disclosure of which is incorporated herein by reference in its entirety, proposes a method of processing signals in which signals received from an array of sensors are subject to a first adaptive filter arranged to enhance a target signal, followed by a second adaptive filter arranged to suppress unwanted signals. The output of the second filter is converted into the frequency domain, and further digital processing is performed in that domain.

[0003] The present invention seeks to provide a headset system performing improved signal processing of audio signals and suitable for speech communication.

[0004] The present invention further seeks to provide signal processing methods and apparatus suitable for use in a speech communication and/or speech recognition system.

SUMMARY OF THE INVENTION

[0005] In general terms, a first aspect of the present invention proposes a headset system including a base unit and a headset unit to be worn by a user (e.g. resting on the user's head or around the user's shoulders) and having a plurality of microphones, the headset unit and base unit being in mutual wireless communication, and at least one of the base unit and the headset unit having digital signal processing means arranged to perform signal processing in the time domain on audio signals generated by the microphones, the signal processing means including at least one adaptive filter to enhance a wanted signal in the audio signals and at least one adaptive filter to reduce an unwanted signal in the audio signals.

[0006] Preferably the digital signal processing means are part of the headset unit.

[0007] The headset can be used for communication with the base unit, and optionally with other individuals, especially via the base unit. The headset system may comprise, or be in communication with, a speech recognition engine for recognizing speech of the user wearing the headset unit.

[0008] Although the signal processing may be as described in PCT/SG99/00119, more preferably, the signal processing is modified to distinguish between the noise and interference signals. Signals received from the microphones (array of sensors) are processed using a first adaptive filter to enhance a target signal, and then divided and supplied to a second adaptive filter arranged to reduce interference signals and a third filter arranged to reduce noise. The outputs of the second and third filters are combined, and may be subject to further processing in the frequency domain.

[0009] In fact, this concept provides a second, independent aspect of the invention which is a method of processing signals received from an array of sensors comprising the steps of sampling and digitising the received signals and processing the digitally converted signals, the processing including:

[0010] filtering the digital signals using a first adaptive filter arranged to enhance a target signal in the digital signals,

[0011] transmitting the output of the first adaptive filter to a second adaptive filter and to a third adaptive filter, the second filter being arranged to suppress unwanted interference signals, and the third filter being arranged to suppress noise signals; and

[0012] combining the outputs of the second and third filters.

[0013] The invention further provides signal processing apparatus for performing such a method.

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] An embodiment of the invention will now be described by way of example with reference to the accompanying drawings in which:

[0015] FIG. 1 illustrates a general scenario in which an embodiment of the invention may operate.

[0016] FIG. 2 is a schematic illustration of a general digital signal processing system which is an embodiment of the present invention.

[0017] FIG. 3 is a system level block diagram of the described embodiment of FIG. 2.

[0018] FIGS. 4a-d are a flow chart illustrating the operation of the embodiment of FIG. 3.

[0019] FIG. 5 illustrates a typical plot of non-linear energy of a channel and the established thresholds.

[0032] FIG. 16 illustrates a specific embodiment of the invention schematically.

[0033] FIG. 17 illustrates a headset unit which is a component of the embodiment of FIG. 16.

[0034] FIG. 18, which is composed of FIGS. 18(a) and 18(b), shows two ways of wearing the headset unit of FIG. 17.

DETAILED DESCRIPTION OF THE EMBODIMENT OF THE INVENTION

[0035] Below, with reference to FIGS. 16 and 17, we describe a specific embodiment of the invention. Before that, we describe in detail a digital signal processing technique which may be employed by the invention.

[0036]FIG. 1 illustrates schematically the operating environment of a signal processing apparatus 5 of the described embodiment of the invention, shown in a simplified example of a room. A target sound signal “s” emitted from a source s′ in a known direction impinging on a sensor array, such as a microphone array 10 of the apparatus 5, is coupled with other unwanted signals namely interference signals u1, u2 from other sources A, B, reflections of these signals u1r, u2r and the target signal's own reflected signal sr. These unwanted signals cause interference and degrade the quality of the target signal “s” as received by the sensor array. The actual number of unwanted signals depends on the number of sources and room geometry but only three reflected (echo) paths and three direct paths are illustrated for simplicity of explanation. The sensor array 10 is connected to processing circuitry 20-60 and there will be a noise input q associated with the circuitry which further degrades the target signal.

[0037] An embodiment of the signal processing apparatus 5 is shown in FIG. 2. The apparatus observes the environment with an array of four sensors, such as microphones 10a-10d. Target and noise/interference sound signals are coupled when impinging on each of the sensors. The signal received by each of the sensors is amplified by an amplifier 20a-d and converted to a digital bitstream using an analogue to digital converter 30a-d. The bit streams are fed in parallel to the digital signal processor 40 to be processed digitally. The processor provides an output signal to a digital to analogue converter 50, which is fed to a line amplifier 60 to provide the final analogue output.

[0038] FIG. 3 shows the major functional blocks of the digital processor in more detail. The multiple input coupled signals are received by the four-channel microphone array 10a-10d, each of which forms a signal channel, with channel 10a being the reference channel. The received signals are passed to a receiver front end which provides the functions of the amplifiers 20 and analogue to digital converters 30 in a single custom chip. The four-channel digitized output signals are fed in parallel to the digital signal processor 40. The digital signal processor 40 comprises five sub-processors: (a) a Preliminary Signal Parameters Estimator and Decision Processor 42, (b) a Signal Adaptive Filter 44, (c) an Adaptive Interference Filter 46, (d) an Adaptive Noise Estimation Filter 48, and (e) an Adaptive Interference and Noise Cancellation and Suppression Processor 50. The basic signal flow is from processor 42, to processor 44, to processors 46 and 48, to processor 50. The output of processor 42 is referred to as "stage 1" in this process, the output of processor 44 as "stage 2", and the outputs of processors 46, 48 as "stage 3". These connections are represented by thick arrows in FIG. 3. The filtered signal S is output from processor 50. Decisions necessary for the operation of the processor 40 are generally made by processor 42, which receives information from processors 44-50, makes decisions on the basis of that information, and sends instructions to processors 44-50 through connections represented by thin arrows in FIG. 3. The outputs I, S of the processor 40 are transmitted to a speech recognition engine 52.

[0039] It will be appreciated that the splitting of the processor 40 into the five component parts 42, 44, 46, 48 and 50 is essentially notional and is made to assist understanding of the operation of the processor. The processor 40 would in reality be embodied as a single multi-function digital processor performing the functions described under control of a program with suitable memory and other peripherals. Furthermore, the operation of the speech recognition engine 52 also could in principle be incorporated into the operation of the processor 40.

[0040] A flowchart illustrating the operation of the processors is shown in FIG. 4a-d and this will firstly be described generally. A more detailed explanation of aspects of the processor operation will then follow.

[0041] The front end 20,30 processes samples of the signals received from array 10 at a predetermined sampling frequency, for example 16 kHz. The processor 42 includes an input buffer 43 that can hold N such samples for each of the four channels. Upon initialization, the apparatus collects a block of N/2 new signal samples for all the channels at step 500, so that the buffer holds a block of N/2 new samples and a block of N/2 previous samples. The processor 42 then removes any DC from the new samples and pre-emphasizes or whitens the samples at step 502.
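As a concrete illustration, the per-block DC removal and pre-emphasis of step 502 might be sketched as below. The pre-emphasis coefficient of 0.95 is an assumed typical value and is not stated in the text:

```c
#include <stddef.h>

/* Remove the block mean (DC) from n samples in place. */
static void remove_dc(float *x, size_t n)
{
    float mean = 0.0f;
    for (size_t i = 0; i < n; i++)
        mean += x[i];
    mean /= (float)n;
    for (size_t i = 0; i < n; i++)
        x[i] -= mean;
}

/* First-order pre-emphasis (whitening): y(i) = x(i) - a*x(i-1).
 * 'prev' carries the last sample of the previous block so the
 * filter runs continuously across block boundaries.
 * The coefficient a = 0.95 is an assumed typical value. */
static float pre_emphasize(float *x, size_t n, float a, float prev)
{
    for (size_t i = 0; i < n; i++) {
        float cur = x[i];
        x[i] = cur - a * prev;
        prev = cur;
    }
    return prev; /* feed into the next block */
}
```

Running these back-to-back on each new half-block of N/2 samples reproduces the continuous whitening behaviour implied by the buffering scheme.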

[0042] Following this, the total non-linear energy of a stage 1 signal sample, Er1, and of a stage 3 signal sample, Er3, is calculated at step 504. The samples from the reference channel 10a are used for this purpose, although any other channel could be used.

[0043] There then follows a short initialization period at step 506 in which the first 20 blocks of N/2 samples of signal after start-up are used to estimate a Bark Scale system noise Bn at step 516 and a histogram Pb at step 518. During this short period, an assumption is made that no target signals are present. The updated Pb is then used with updated Pbs to estimate the environment noise energy En and two detection thresholds, a noise threshold Tn1 and a larger signal threshold Tn2, are calculated by processor 42 from En using scaling factors. The routine then moves to point B and point F.

[0044] After this initialization period, Pbs and Bn are updated when an update condition is fulfilled.

[0045] At step 508, it is determined whether the stage 3 signal energy Er3 is greater than the noise threshold Tn1. If not, the Bark Scale system noise Bn is updated at step 510 and the routine proceeds to step 512; if so, the routine skips step 510 and proceeds directly to step 512. A test is made at step 512 to see if the signal energy Er1 is greater than the noise threshold Tn1. If so, Pb and Pbs are estimated at step 518 for computing En, Tn1 and Tn2, and the routine then moves to point B and point F. If not, only Pbs is updated, and it is used with the previous Pb to compute En, Tn1 and Tn2 at step 514. Tn1 and Tn2 thus follow the environment noise level closely. The histogram is used to determine whether the signal energy level shows a steady-state increase, which would indicate an increase in noise, since the speech target signal shows considerable variation over time and can thus be distinguished. This is illustrated in FIG. 15, in which a signal noise level rises from an initial level to a new level which exceeds both thresholds.

[0046] A test is made at step 520 to see if the estimated energy Er1 in the reference channel 10a exceeds the second threshold Tn2. If so, a counter CL is reset and a candidate target signal is deemed to be present. The apparatus only wishes to process candidate target signals that impinge on the array 10 from a known direction normal to the array, hereinafter referred to as the boresight direction, or from a limited angular departure therefrom, in this embodiment plus or minus 15 degrees. Therefore, the next stage is to check for any signal arriving from this direction.

[0047] At step 528, three coefficients are established, namely a correlation coefficient Cx, a correlation time delay Td and a filter coefficient peak ratio Pk which together provide an indication of the direction from which the target signal arrived.

[0048] At step 530, three tests are conducted to determine if the candidate target signal is an actual target signal. First, the cross correlation coefficient Cx must exceed a predetermined threshold Tc, second, the size of the delay coefficient must be less than a value θ indicating that the signal has impinged on the array within the predetermined angular range and lastly the filter coefficient peak ratio Pk must exceed a predetermined threshold TPk1. If these conditions are not met, the signal is not regarded as a target signal and the routine passes to step 534 (non-target signal filtering). If the conditions are met, the confirmed target signal is fed to step 532 (target signal filtering) of Signal Adaptive Spatial Filter 44.
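The three-condition test of step 530 can be sketched as a simple predicate; the function name and the concrete threshold values used in the test below are illustrative assumptions:

```c
#include <math.h>
#include <stdbool.h>

/* Step 530 sketch: a candidate is accepted as a target only if
 * (1) the cross-correlation coefficient Cx exceeds Tc,
 * (2) the correlation time delay |Td| is within the angular limit theta, and
 * (3) the filter coefficient peak ratio Pk exceeds TPk1.
 * All thresholds are passed in; their values are implementation choices. */
static bool is_target(double Cx, double Td, double Pk,
                      double Tc, double theta, double TPk1)
{
    return (Cx > Tc) && (fabs(Td) < theta) && (Pk > TPk1);
}
```

If any of the three conditions fails, the signal is routed to the non-target filtering path (step 534).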

[0049] If, at step 520, the estimated energy Er1 in the reference channel 10a is found not to exceed the second threshold Tn2, the target signal is considered not to be present and the routine passes to step 534 via steps 522-526, in which the counter CL is incremented. At step 524, CL is checked against a threshold TCL. If the threshold is reached, block leak compensation is performed on the filter coefficients Wtd and the counter CL is reset at step 526. This block leak compensation step improves the adaptation speed of the filter coefficients Wtd toward the direction of fast-changing target sources and environments. If the threshold is not reached, the program moves to step 534 described below.

[0050] Following step 530, the confirmed target signal is fed to step 532 at the Signal Adaptive Spatial Filter 44. The filter is instructed to perform adaptive filtering at steps 532 and 536, in which the filter coefficients Wsu are adapted to provide a "target signal plus noise" signal in the reference channel and "noise only" signals in the remaining channels using the Least Mean Square (LMS) algorithm. In order to prevent the filter coefficients from being updated wrongly, a running energy ratio Rsd is computed at every sample at step 532. This running energy ratio Rsd is used as a condition to test whether the filter coefficient corresponding to that particular sample should be updated or not. The filter 44 output channel equivalent to the reference channel is for convenience referred to as the Sum Channel, and the filter 44 outputs from the other channels as the Difference Channels. The signals so processed will, for convenience, be referred to as A′.

[0051] If the signal is considered to be a noise signal, the routine passes to step 534 in which the signals are passed through filter 44 without the filter coefficients being adapted, to form the Sum and Difference channel signals. The signals so processed will be referred to for convenience as B′.

[0052] The effect of the filter 44 is to enhance the signal if this is identified as a target signal but not otherwise.
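The LMS adaptation referred to above can be sketched for a single channel as follows. The boolean gate stands in for the Rsd update condition, and all names and the step size are illustrative assumptions:

```c
#include <stddef.h>

/* One LMS step for an FIR filter of length L:
 * y = w'x; e = d - y; and, if adaptation is enabled, w += mu*e*x.
 * In filter 44 this runs per channel; 'adapt' stands in for the
 * running-energy-ratio (Rsd) gating that blocks wrong updates. */
static float lms_step(float *w, const float *x, size_t L,
                      float d, float mu, int adapt)
{
    float y = 0.0f;
    for (size_t i = 0; i < L; i++)
        y += w[i] * x[i];
    float e = d - y;
    if (adapt)
        for (size_t i = 0; i < L; i++)
            w[i] += mu * e * x[i];
    return e;
}
```

With the gate closed, the signal passes through the filter without coefficient adaptation, matching the non-target path of step 534.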

[0053] At step 538, a new filter coefficient peak ratio Pk2 is calculated based on the filter coefficients Wsu. At step 539, if the signals are not A′ signals from step 532, the routine passes to step 548. Otherwise, the peak ratio calculated at step 538 is compared with a best peak ratio BPk at step 540. If it is larger than the best peak ratio, the value of the best peak ratio is replaced by this new peak ratio Pk2 and all the filter coefficients Wsu are stored as the best filter coefficients at step 542. If it is not, the peak ratio Pk2 is compared with a threshold TPk at step 544. If the peak ratio is below the threshold, a wrong update of the filter coefficients is deemed to have occurred and the filter coefficients are restored to the previously stored best filter coefficients at step 546. If it is above the threshold, the routine passes to step 548.
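The checkpoint-and-restore logic of steps 538-546 might be sketched as below; the structure, array bound, and names are assumptions made for illustration:

```c
#include <string.h>
#include <stddef.h>

/* Steps 538-546 sketch: after each adaptation, compare the new peak
 * ratio Pk2 with the best seen so far (BPk). If it improves, checkpoint
 * the weights; if it instead falls below the threshold TPk, roll the
 * weights back to the checkpoint, since a wrong update is deemed to
 * have occurred. The fixed bound of 8 taps is illustrative only. */
typedef struct {
    float best_pk;
    float best_w[8];
} checkpoint_t;

static void peak_ratio_guard(float *w, size_t L, float pk2,
                             float TPk, checkpoint_t *cp)
{
    if (pk2 > cp->best_pk) {
        cp->best_pk = pk2;
        memcpy(cp->best_w, w, L * sizeof(float));
    } else if (pk2 < TPk) {
        memcpy(w, cp->best_w, L * sizeof(float));
    }
}
```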

[0054] At step 548, an energy ratio Rsd and power ratio Prsd between the Sum Channel and the Difference Channels are estimated by processor 42. Besides these, two other coefficients are also established, namely an energy ratio factor Rsdf and a second stage non-linear signal energy Er2. Following this, the adaptive noise power threshold TPrsd is updated based on the calculated power ratio Prsd.

[0055] At this point, the signal is divided into two parallel paths, namely point C and point D. Following point C, the signal is subject to a further test at step 552 to determine whether noise or interference is present. First, if the signals are A′ signals from step 532, the routine passes to step 556. Second, if the estimated energy Er2 is found not to exceed the second threshold Tn2, a signal is considered not to be present and the routine passes to step 556. Third, the filter coefficient peak ratio Pk2 is compared to a threshold TPk2. If it is higher than the threshold, this may indicate that there is a target signal and the routine passes to step 556. Lastly, Rsd and Prsd are compared to thresholds Trsd and TPrsd respectively. If the ratios are both lower than their thresholds, this indicates probable noise, but if higher, this may indicate that there has been some leakage of the target signal into the Difference channels, indicating the presence of a target signal after all. For such target signals, the routine also passes to step 556. For all other non-target signals, the routine passes to step 554.

[0056] At steps 554-558, the signals are processed by the Adaptive Interference Filter 46, the purpose of which is to reduce the unwanted signals. The filter 46, at step 554, is instructed to perform adaptive filtering on the non-target signals with the intention of adapting the filter coefficients to reduce the unwanted signal in the Sum channel to some small error value ec1. This computed ec1 is also fed back to step 554 to prevent signal cancellation caused by wrong updating of the filter coefficients.
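One way the error ec1 might moderate the update step, consistent with a weight updating factor that varies inversely with the error output (as recited in claim 16), is the normalized form below. The exact mapping is not given in the text, so this is purely an assumption:

```c
/* Sketch: step size mu shrinks as the error ec1 grows, so a large
 * error (possible target-signal leakage) produces a cautious update.
 * mu0 is a base step size and eps avoids division by zero; both the
 * function and the quadratic form are illustrative assumptions. */
static float variable_mu(float mu0, float ec1, float eps)
{
    return mu0 / (eps + ec1 * ec1);
}
```

The returned value would then be used as the mu argument of the LMS update for filter 46.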

[0057] In the alternative, at step 556, the target signals are fed to the filter 46 but this time, no adaptive filtering takes place, so the Sum and Difference signals pass through the filter.

[0059] Following point D, the signals pass through a few test conditions at step 560. First, if the signals are A′ signals from step 532, the routine passes to step 564. Second, if the signals are classified as non-target signals by step 552 (C′ signals), the routine passes to step 564. Third, Rsdf and Prsd are compared to thresholds Trsdf and TPrsd respectively. If the ratios are both lower than their thresholds, this indicates a probable ambient noise signal, but if higher, this may indicate that there has been some leakage of the target signal into the Difference channels, indicating the presence of a target signal after all. Lastly, if the estimated energy Er2 is found to exceed the first threshold Tn1, signals are considered to be present. For such signals, the routine also passes to step 564. For all other ambient noise signals, the routine passes to step 562.

[0060] At steps 562-566, the signals are processed by the Adaptive Ambient Noise Estimation Filter 48, the purpose of which is to reduce the unwanted ambient noise. The filter 48, at step 562, is instructed to perform adaptive filtering on the ambient noise signals with the intention of adapting the filter coefficients to reduce the unwanted ambient noise in the Sum channel to some small error value ec2.

[0061] In the alternative, at step 564, the signals are fed to the filter 48 but this time, no adaptive filtering takes place, so the Sum and Difference signals pass through the filter.

[0063] At step 568, output signals Sc1 and Si from processor 46 and output signals Sc2 and Sn from processor 48 are processed by an adaptive signal multiplexer. Here, those signals are multiplexed, and a weighted average error signal es(t), a sum signal Sc(t) and a weighted average interference signal Is(t) are produced. These signals are then collected for the new N/2 samples and the last N/2 samples from the previous block, and a Hanning window Hn is applied to the collected samples as shown in FIG. 13 to form vectors Sh, Ih and Eh. This is an overlapping technique, with overlapping vectors Sh, Ih and Eh being formed continuously from past and present blocks of N/2 samples, as illustrated in FIG. 14. A Fast Fourier Transform is then performed on the vectors Sh, Ih and Eh to transform them into frequency domain equivalents Sf, If and Ef at step 570.

[0064] At step 572, a modified spectrum is calculated for the transformed signals to provide “pseudo” spectrum values Ps and Pi.

[0065] In order to reduce signal distortion due to wrong estimation of the noise spectra, frequency scanning is performed between Ps and Pi to look for peaks in the same frequency components at step 574. Attenuation is then performed on those peaks in Pi to reduce the signal cancellation effect. Ps and Pi are then warped onto the same Bark frequency scale to provide Bark-frequency-scaled values Bs and Bi at step 576. At step 578, voiced/unvoiced detection is performed on Bs and Bi to reduce the signal cancellation of the unvoiced signal.

[0066] A weighted combination By of Bn (through path F) and Bi is then made at step 580 and this is combined with Bs to compute the Bark Scale non-linear gain Gb at step 582.

[0067] Gb is then unwarped to the normal frequency domain to provide a gain value G at step 584, and this is then used at step 586 to compute an output spectrum Sout using the signal spectra Sf and Ef from step 570. This gain-adjusted spectrum suppresses the interference signals, the ambient noise and the system noise.

[0068] An inverse FFT is then performed on the spectrum Sout at step 588 and the output signal is then reconstructed from the overlapping signals using the overlap add procedure at step 590.
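The overlap-add reconstruction of step 590 can be sketched as follows; the state layout (a persistent half-frame tail) is an illustrative assumption:

```c
#include <stddef.h>

/* Step 590 sketch: overlap-add reconstruction. Each inverse-FFT frame
 * of N samples overlaps the previous frame by N/2; the emitted output
 * half-block is the sum of the saved second half of the previous frame
 * and the first half of the current frame. 'tail' persists across calls. */
static void overlap_add(const float *frame, size_t N,
                        float *tail, float *out)
{
    size_t half = N / 2;
    for (size_t i = 0; i < half; i++) {
        out[i] = tail[i] + frame[i];   /* emit one half-block */
        tail[i] = frame[half + i];     /* save for the next call */
    }
}
```

With the Hanning analysis window applied at 50% overlap, successive windows sum to a constant, so this reconstruction introduces no amplitude modulation.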

[0069] Hence, besides providing the Speech Recognition Engine 52 with a processed signal S, the system also provides a set of useful information, indicated as I in FIG. 3. This set of information may include any one or more of:

[0078] Major steps in the above described flowchart will now be described in more detail.

[0079] Non-Linear Energy Estimation (Steps 504, 548)

[0080] At each adaptive filter stage, the reference signal is taken at a delay of half the tap-size. Thus, at the end of the two adaptive filter stages, the signal is delayed by Lsu/2 and Luq/2 respectively. In order for the decision-making mechanism at the different stages to accurately follow these delays, the signal energy is calculated at three junctions, resulting in three signal energies.

[0081] The first signal energy is calculated at no delay and is used by the time delay estimation and stage1 Adaptive Spatial Filter.
Er1 = \frac{1}{J-2} \sum_{i=1}^{J-2} \left[ x(i)^2 - x(i+1)\,x(i-1) \right] \quad (A.1)

[0082] The second signal energy is calculated at a delay of half of Adaptive Spatial Filter tap-size, Lsu/2.
Er2 = \frac{1}{J-2} \sum_{i=-L_{su}/2}^{J-L_{su}/2-2} \left[ x(i)^2 - x(i+1)\,x(i-1) \right] \quad (A.2)

[0083] The last signal energy is calculated at a delay of Lsu/2+Luq/2 and is used by noise updating.
Er3 = \frac{1}{J-2} \sum_{i=-(L_{su}/2+L_{uq}/2)}^{J-(L_{su}/2+L_{uq}/2)-2} \left[ x(i)^2 - x(i+1)\,x(i-1) \right] \quad (A.3)

[0084] These delays are implemented by means of buffering.
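Equation A.1 can be implemented directly; the sketch below computes it over a zero-based buffer of J samples (Er2 and Er3 apply the same formula to delayed, buffered samples):

```c
#include <stddef.h>

/* Non-linear (Teager-like) block energy per Equation A.1:
 * Er = (1/(J-2)) * sum over interior samples of
 *      [ x(i)^2 - x(i+1)*x(i-1) ].
 * The first and last samples are excluded because the term needs
 * both neighbours. */
static float nonlinear_energy(const float *x, size_t J)
{
    float e = 0.0f;
    for (size_t i = 1; i + 1 < J; i++)
        e += x[i] * x[i] - x[i + 1] * x[i - 1];
    return e / (float)(J - 2);
}
```

Note that a constant (DC) signal yields zero energy, which is why this measure tracks signal activity rather than level.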

[0085] Threshold Estimation and Updating (Steps 514, 518)

[0086] The processor 42 estimates the two thresholds Tn1 and Tn2 based on a statistical approach. Two histograms, referred to as Pb and Pbs, are computed in the same way, except that Pbs is computed on every block of N/2 samples while Pb is computed only on the first 20 blocks of N/2 samples, or when Er1<Tn1, which means that neither a target signal nor an interference signal is present. Er1 is used as the input sample of the histograms, and the length of the histograms is a number M (which may for example be 24). Each histogram is found from the following equation:

Hi = αHi + (1−α)·δ(i−D)·I24×1   (B.1)

[0087] Where Hi stands for either of Pb and Pbs, and has the form:

Hi = [h(1) h(2) … h(i) … h(24)]T   (B.2)

I24×1 = [1 1 … 1]T   (B.3)

δ(i−D) = 1 if i=D, 0 otherwise   (B.4)

[0088] Here α is a forgetting factor. For Pb, α is chosen empirically to be 0.9988, and for Pbs, α is equal to 0.9688.

[0089] The value of D used in Equation B.1 is determined using Table 1 below: specifically, we find the value of Emax in Table 1 which is lowest but which is above the input sample Er1, and the corresponding D is used in Equation B.1. Thus, each D labels a corresponding band of values of Er1. For example, if Er1 is 412, this falls in the band up to Emax=424, i.e. the range corresponding to D=13, and accordingly D=13 is used in Equation B.1. Thus, if Er1 continues to stay at a certain level, say in the band up to Emax(D), the weight of the corresponding D value in the histogram will build up to become the maximum. This indicates that the current running average noise level is approximately Emax(D).

TABLE 1

D     Emax (D)
1     10
2     11
3     15
4     21
5     29
6     40
7     56
8     79
9     110
10    115
11    216
12    303
13    424
14    593
15    829
16    1161
17    1624
18    2274
19    3181
20    4452
21    6232
22    8724
23    12199
24    17686

[0090] After computing Pb and Pbs, the positions of the peaks of Pb and Pbs are labelled pp and pps respectively. pp is reset to be equal to (pps−5) if (pps−pp)>5.
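As an illustrative sketch (the names and the treatment of an Er1 exactly equal to a band limit are assumptions), the band lookup of Table 1 and the histogram update of Equation B.1 might be implemented as:

```c
/* Band lookup from Table 1 and histogram update of Equation B.1. */
#define M_BANDS 24

static const double Emax[M_BANDS] = {
    10, 11, 15, 21, 29, 40, 56, 79, 110, 115, 216, 303,
    424, 593, 829, 1161, 1624, 2274, 3181, 4452, 6232, 8724, 12199, 17686
};

/* Return the 1-based D whose Emax is the lowest value not below er1. */
int band_of(double er1)
{
    for (int d = 0; d < M_BANDS; d++)
        if (er1 <= Emax[d])
            return d + 1;
    return M_BANDS;  /* saturate at the top band */
}

/* H(i) = alpha*H(i) + (1-alpha)*delta(i-D): decay every bin, bump bin D. */
void histogram_update(double *H, double alpha, int D)
{
    for (int i = 0; i < M_BANDS; i++)
        H[i] *= alpha;
    H[D - 1] += 1.0 - alpha;
}
```

With a steady Er1 level, repeated calls concentrate the histogram mass in one band, whose Emax then approximates the running noise level as described above.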

[0091] Below is the pseudo-C which uses pp to estimate Tn1 and Tn2 (sfun is the S-shaped transfer function with the stated lower and upper limits):

Np = Emax[pp];                    /* running-average noise level */
Rpp = Er1 / (Er1 + Np);
gamma = sfun(Rpp, 0, 0.8);
Ep = gamma*Ep + (1 - gamma)*Er1;  /* smoothed energy estimate */
if (En >= Ep)
    En = 0.7*En + 0.3*Ep;         /* noise estimate falls quickly */
else if (Er1 <= Er_old)
{
    En = 0.9995*En + 0.0005*Ep;   /* rises very slowly while energy is falling */
    Er_old = Er1;
}
else
    En = 0.995*En + 0.005*Ep;     /* rises slowly otherwise */

[0092] The Emax values in Table 1 were chosen experimentally based on a statistical method. Samples (in this case, Er1) were collected under various environments (office, car, supermarket, etc.) and a histogram was generated based on the collected samples. From the histogram, a probability density function was computed, and from there the Emax values were decided.

[0093] Similarly, all the factors in the first-order recursive filters and the lower and upper limits of the s-function above are chosen empirically. Once the noise energy En is obtained, the two signal detection thresholds Tn1 and Tn2 are established as follows:

Tn1=δ1En B.5

Tn2=δ2En B.6

[0094] δ1 and δ2 are scalar values that are used to select the thresholds so as to optimize signal detection and minimize false signal detection. As shown in FIG. 5, Tn1 should be above the system noise level, with Tn2 sufficient to be generally breached by the potential target signal. These factors may be found by trial and error. In this embodiment, δ1=1.375 and δ2=1.675 have been found to give good results.

[0095] In comparison to the algorithms for setting Tn1 and Tn2 in PCT/SG99/00119, the noise level can be tracked more robustly yet faster. A further motivation for the above algorithm for finding the thresholds is to distinguish between signal and noise in all environments, especially noisy environments (car, supermarket, etc.). This means that the user can use the embodiment anywhere.

[0096] Time Delay Estimation (Td) (STEP 528)

[0097]FIG. 6A illustrates a single wave front impinging on the sensor array. The wave front impinges on sensor 10d first (A as shown) and at a later time impinges on sensor 10a (A′ as shown), after a time delay td. This is because the signal originates at an angle of 40 degrees from the boresight direction. If the signal originated from the boresight direction, the time delay td would ideally be zero.

[0098] Time delay estimation is performed using a tapped delay line time delay estimator included in the processor 42, which is shown in FIG. 6B. The filter has a delay element 600, having a delay Z−L/2, connected to the reference channel 10a, and a tapped delay line filter 610 having filter coefficients Wtd connected to channel 10d. Delay element 600 provides a delay equal to half of that of the tapped delay line filter 610. The output from the delay element is d(k) and from filter 610 is d′(k). The difference of these outputs is taken at element 620, providing an error signal e(k) (where k is a time index used for ease of illustration). The error is fed back to the filter 610. The Least Mean Squares (LMS) algorithm is used to adapt the filter coefficients Wtd as follows:

Wtd(k+1) = Wtd(k) + 2·μtd·S10d(k)·e(k)   (B.1)

[0099] Where:

Wtd(k+1) = [Wtd0(k+1) Wtd1(k+1) … WtdLo(k+1)]T   (B.2)

S10d(k) = [S10d0(k) S10d1(k) … S10dLo(k)]T   (B.3)

e(k) = d(k) − d′(k)   (B.4)

d′(k) = Wtd(k)T·S10d(k)   (B.5)

μtd = βtd / ‖S10d(k)‖   (B.6)

[0100] where βtd is a user-selected convergence factor 0<βtd≦2, ‖ ‖ denotes the norm of a vector, k is a time index and Lo is the filter length.

[0101] The impulse response of the tapped delay line filter 610 at the end of the adaptation is shown in FIG. 6C. The impulse response is measured, and the position of the peak (the maximum value of the impulse response) relative to the origin O gives the time delay Td between the two sensors, which corresponds to the angle of arrival of the signal. In the case shown, the peak lies at the centre, indicating that the signal comes from the boresight direction (Td=0). The threshold θ at step 506 is selected depending upon the assumed possible degree of departure from the boresight direction from which the target signal might come. In this embodiment, θ is equivalent to ±15°.

[0102] Normalized Cross Correlation Estimation Cx (STEP 528)

[0103] The normalized cross-correlation between the reference channel 10a and the most distant channel 10d is calculated as follows:

[0104] Samples of the signals from the reference channel 10a and channel 10d are buffered into shift registers X and Y where X is of length J samples and Y is of length K samples, where J>K, to form two independent vectors Xr and Yr:
Xr=[xrxr(2)⋮xr(J)]C.1Yr=[yryr(2)⋮yr(K)]C.2

[0105] A time delay between the signals is assumed, and to capture this difference, J is made greater than K. The difference is selected based on the angle of interest. The normalized cross-correlation is then calculated as follows:
Cx(l)=YrT*XrlYr*XrlC.3Where…Xrl=[XrXr(l+1)⋮xr(K+l-1)]C.4

[0106] Where T represents the transpose of the vector, ‖ ‖ represents the norm of the vector and l is the correlation lag. l is selected to span the delay of interest. For a sampling frequency of 16 kHz and a spacing between sensors 10a, 10d of 18 cm, the lag l is selected to be five samples for an angle of interest of 15°.

[0107] The threshold Tc is determined empirically. Tc=0.65 is used in this embodiment.

[0109] In the time delay estimation LMS algorithm, a modified leak compensation form is used. This is simply implemented by:

Wtd = αWtd (where α = forgetting factor ≈ 0.98)

[0110] This leak compensation form has the property of adapting faster to the direction of fast-changing sources and environments.

[0111] Filter Coefficient Peak Ratio, Pk (STEP 528)

[0112] The impulse response of the tapped delay line filter with filter coefficients Wtd at the end of the adaptation, with both signal and interference sources present, is shown in FIG. 7. The filter coefficient vector Wtd is as follows:

Wtd(k) = [Wtd0(k) Wtd1(k) … WtdL0(k)]T

[0113] With both signal and interference sources present, there will be more than one peak in the tapped delay line filter coefficients. The Pk ratio is calculated as follows:

A = max |Wtdn| where L0/2−Δ ≤ n ≤ L0/2+Δ

B = max |Wtdn| where 0 ≤ n < L0/2−Δ or L0/2+Δ < n

Pk = A/(A+B)

[0114] Δ is calculated based on the threshold θ at step 530. In this embodiment, with θ equal to ±15°, Δ is equivalent to 2. A low Pk ratio indicates the presence of strong interference signals over the target signal, and a high Pk ratio shows a high target signal to interference ratio.
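The peak-ratio computation of paragraph [0113] might be sketched as follows (the guard against an all-zero filter is an added assumption):

```c
#include <math.h>

/* Pk = A/(A+B), where A is the largest |Wtd| tap within +/-delta of the
 * centre tap L0/2 and B the largest tap outside that window.  w holds
 * the L0+1 taps Wtd0 .. WtdL0. */
double peak_ratio(const double *w, int L0, int delta)
{
    double A = 0.0, B = 0.0;
    for (int n = 0; n <= L0; n++) {
        double m = fabs(w[n]);
        if (n >= L0 / 2 - delta && n <= L0 / 2 + delta) {
            if (m > A) A = m;
        } else {
            if (m > B) B = m;
        }
    }
    return (A + B > 0.0) ? A / (A + B) : 0.0;  /* guard is an assumption */
}
```

A single dominant centre tap yields Pk near 1; a strong off-centre peak (an interferer) pulls Pk down towards 0.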

[0115] Adaptive Spatial Filter 44 (STEPS 532-536)

[0116]FIG. 8 shows a block diagram of the Adaptive Linear Spatial Filter 44. The function of the filter is to separate the coupled target, interference and noise signals into two types. The first, in a single output channel termed the Sum Channel, is an enhanced target signal with weakened interference and noise, i.e. signals not from the target signal direction. The second, in the remaining channels termed the Difference Channels (which in the four-channel case comprise three separate outputs), aims to comprise interference and noise signals alone.

[0117] The objective is to adapt the filter coefficients of filter 44 in such a way as to enhance the target signal and output it in the Sum Channel, and at the same time eliminate the target signal from the coupled signals and output them into the Difference Channels.

[0118] The adaptive filter elements in filter 44 act as linear spatial prediction filters that predict the signal in the reference channel whenever the target signal is present. The filter stops adapting when the signal is deemed to be absent.

[0119] The filter coefficients are updated whenever the conditions of steps are met, namely:

[0123] As illustrated in FIG. 8, the digitized coupled signal X0 from sensor 10a is fed through a digital delay element 710 of delay Z−Lsu/2. Digitized coupled signals X1, X2, X3 from sensors 10b, 10c, 10d are fed to respective filter elements 712, 714, 716. The outputs from elements 710, 712, 714, 716 are summed at summing element 718, the output from the summing element 718 being divided by four at the divider element 719 to form the Sum Channel output signal. The output from delay element 710 is also subtracted from the outputs of the filters 712, 714, 716 at respective difference elements 720, 722, 724, the output from each difference element forming a respective Difference Channel output signal, which is also fed back to the respective filter 712, 714, 716. The function of the delay element 710 is to time-align the signal from the reference channel 10a with the outputs from the filters 712, 714, 716.

[0124] The filter elements 712, 714, 716 adapt in parallel using the normalized LMS algorithm given by Equations E.1 . . . E.8 below, the output of the Sum Channel being given by Equation E.1 and the output from each Difference Channel being given by Equation E.6:
S^c(k)=S_(k)+X_0(k)4E.1Where:S_(k)=∑m=1M-1S_m(k)E.2S_m(k)=(Wsum(k))TXm(k)E.3

[0125] Where m is 0, 1, 2 . . . M−1, the number of channels, in this case 0 . . . 3 and T denotes the transpose of a vector;
Xm(k)=[X1m(k)X2m(k)⋮XLSUm(k)]E.4Wsum(k)=[Wsu1m(k)Wsu2m(k)⋮WsuLSUm(k)]E.5

[0126] Where Xm(k) and Wsum(k) are column vectors of dimension (Lsu×1).

[0127] The weight vector Wsum(k) is updated using the normalized LMS algorithm as follows:

[0128] and where βsu is a user-selected convergence factor 0<βsu≦2, ∥ ∥ denotes the norm of a vector and k is a time index.

[0129] Running Rsd within Adaptive Spatial Filter (STEP 532)

[0130] To prevent the filter coefficients being updated wrongly, the updating conditions evaluated once per block of N/2 samples are insufficient on their own. A running Rsd is therefore computed every N/2 samples and is used together with the other conditions to test whether a particular sample should cause an update or not.

[0133] In the event of wrong updating, the coefficients of the filter could adapt to the wrong direction or sources. To reduce this effect, a set of 'best coefficients' is kept and copied back to the beam-former coefficients when, after an update, the beam-former is detected to be pointing in a wrong direction.

[0134] Two mechanisms are used for this:

[0135] The set of 'best weights' includes all three filter coefficient vectors (Wsu1-Wsu3). They are saved based on the following condition:

[0136] When there is an update of the filter coefficients Wsu, the calculated Pk2 ratio is compared with the previously stored BPk; if it is above BPk, this new set of filter coefficients becomes the new set of 'best weights' and the current Pk2 ratio is saved as the new BPk.

[0137] A second mechanism is used to decide when the filter coefficients should be restored from the saved set of 'best weights'. This is done when the filter coefficients are updated and the calculated Pk2 ratio is below both BPk and the threshold TPk. In this embodiment, the value of TPk is equal to 0.65.

[0138] Calculation of Energy Ratio Rsd (STEP 548)

[0139] This is performed as follows:
S^c=[S^c(0)S^c(1)⋮S^c(J-1)]F.1D^c=[d^c(0)d^c(1)⋮d^c(J-1)]=[d^c1(0)d^c1(1)⋮d^c1(J-1)]+[d^c2(0)d^c2(1)⋮d^c2(J-1)]+[d^c3(0)d^c3(1)⋮d^c3(J-1)]F.2

[0140] J=N/2, the number of samples, in this embodiment 256.

[0141] Where ESUM is the Sum Channel energy and EDIF is the Difference Channel energy:

ESUM = (1/(J−2)) Σ_{j=1}^{J−2} [Ŝc(j)² − Ŝc(j+1)·Ŝc(j−1)]   (F.3)

EDIF = (1/(3(J−2))) Σ_{j=1}^{J−2} [d̂c(j)² − d̂c(j+1)·d̂c(j−1)]   (F.4)

Rsd = ESUM / EDIF   (F.5)

[0142] The energy ratio between the Sum Channel and the Difference Channels (Rsd) must not exceed a predetermined threshold. In the four-channel case illustrated here, the threshold is determined to be about 1.5.

[0143] Calculation of Power Ratio Prsd (STEP 548)

[0144] This is performed as follows:
S^c=[S^c(0)S^c(1)⋮S^c(J-1)]∂^c=[∂^c(0)∂^c(1)⋮∂^c(J-1)]=[∂^c1(0)∂^c1(1)⋮∂^c1(J-1)]+[∂^c2(0)∂^c2(1)⋮∂^c2(J-1)]+[∂^c3(0)∂^c3(1)⋮∂^c3(J-1)]

[0145] J=N/2, the number of samples, in this embodiment 128.

[0146] Where PSUM is the sum channel power and PDIF is the difference channel power.
PSUM = (1/J) Σ_{j=0}^{J−1} Ŝc(j)²

PDIF = (1/(3J)) Σ_{j=0}^{J−1} ∂̂c(j)²

Prsd = PSUM / PDIF

[0147] The power ratio between the Sum Channel and Difference Channel must not exceed a dynamic threshold, TPrsd.

[0148] Calculation of Energy Ratio Factor Rsdf (STEP 548)

[0149] The Energy Ratio Factor Rsdf is obtained by passing Rsd through a non-linear S-shaped transfer function as shown in FIG. 9. Certain ranges of the Rsd value can be boosted or suppressed by changing the shape of the transfer function using different sets of threshold levels, SL and SH.

[0150] Dynamic Noise Power Threshold Updating TPrsd (STEP 550)

[0151] The dynamic noise power threshold TPrsd is updated based on the following conditions:

[0152] If the reference channel signal energy is more than 700 and the power ratio is less than 0.45 for 64 consecutive processing blocks, then

TPrsd=α1*TPrsd+(1−α1)*Prsd

[0153] Else if the reference channel signal energy is less than 700, then

TPrsd=α2*TPrsd+(1−α2)*Max_Prsd

[0154] In this embodiment, α1=0.67, α2=0.98 and Max_Prsd=1.3 have been found to give good results.
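The threshold recursion of paragraphs [0152]-[0154] might be sketched as follows; the consecutive-block counter and its reset behaviour are assumptions, since the patent states only the two conditions and the two recursions:

```c
/* Sketch of the TPrsd update.  low_cnt counts consecutive blocks with
 * high energy and low power ratio (assumed bookkeeping). */
#define ALPHA1 0.67
#define ALPHA2 0.98
#define MAX_PRSD 1.3

double update_tprsd(double tprsd, double er, double prsd, int *low_cnt)
{
    if (er > 700.0 && prsd < 0.45) {
        if (++(*low_cnt) >= 64)       /* 64 consecutive processing blocks */
            tprsd = ALPHA1 * tprsd + (1.0 - ALPHA1) * prsd;
    } else {
        *low_cnt = 0;                 /* streak broken (assumption) */
        if (er < 700.0)
            tprsd = ALPHA2 * tprsd + (1.0 - ALPHA2) * MAX_PRSD;
    }
    return tprsd;
}
```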

[0155] Adaptive Interference Filter 46 (STEPS 554-558)

[0156]FIG. 10 shows a schematic block diagram of the Adaptive Interference Filter 46. This filter adapts to the interference signal and subtracts it from the Sum Channel so as to derive an output with reduced interference.

[0157] The filter 46 takes the outputs from the Sum and Difference Channels of the filter 44, feeds the Difference Channel signals in parallel to another set of adaptive filter elements 750, 752, 754, and feeds the Sum Channel signal to a corresponding delay element 756. The outputs from the three filter elements 750, 752, 754 are subtracted from the output of delay element 756 at difference element 758 to form an error output ec1, which is fed back to the filter elements 750, 752, 754. The output from filter 46 is also passed to an Adaptive Signal Multiplexer, to be mixed with the output from filter 48 and subtracted from the Sum Channel.

[0158] Again, the Least Mean Square algorithm (LMS) is used to adapt the filter coefficients Wuq as follows:

ec1(k)=Ŝc(k)−Si(k) (I.1)

[0159] Where
S1(k)=∑m=1M-1∂cm(k)and∂cm(k)=Wuqm(k)T·Ym(k(1.2)Ym(k)=[∂c1m(k)∂c2m(k)⋮∂cLuqm(k)](1.3)Wuqm(k+1)=Wuqm(k)+2μuqmYm(k)ec1(k)(1.4)μnom=βnoYm+ec1(1.5)

[0160] and where βuq is a user-selected factor 0<βuq≦2 and where m is 0, 1, 2 . . . M−1, indexing the channels, in this case 0 . . . 3.

[0161] When only the target signal is present and the Interference Filter is updated wrongly, the error signal in Equation I.1 will be very large and the norm of Ym will be very small. Hence, by including the norm of the error signal ∥ec1∥ in the step-size μ calculation (Equation I.5), μ becomes very small whenever a wrong update of the Interference Filter occurs. This helps to prevent wrong updating of the weight coefficients of the Interference Filter and hence reduces the effect of signal cancellation.

[0162] Adaptive Ambient Noise Estimation Filter 48 (STEPS 562-566)

[0163]FIG. 11 shows a schematic block diagram of the Adaptive Ambient Noise Estimation Filter 48. This filter adapts to the environment noise and subtracts it from the Sum Channel so as to derive an output with reduced noise.

[0164] The filter 48 takes the outputs from the Sum and Difference Channels of the filter 44, feeds the Difference Channel signals in parallel to another set of adaptive filter elements 760, 762, 764, and feeds the Sum Channel signal to a corresponding delay element 766. The outputs from the three filter elements 760, 762, 764 are subtracted from the output of delay element 766 at difference element 768 to form an error output ec2, which is fed back to the filter elements 760, 762, 764. The output from filter 48 is also passed to the Adaptive Signal Multiplexer, to be mixed with the output from filter 46 and subtracted from the Sum Channel.

[0165] Again, the Least Mean Square algorithm (LMS) is used to adapt the filter coefficients Wno as follows:

[0166] and where βno is a user-selected factor 0<βno≦2 and where m is 0, 1, 2 . . . M−1, indexing the channels, in this case 0 . . . 3.

[0167] Adaptive Signal Multiplexer (STEP 568)

[0168]FIG. 12 shows a schematic block diagram of the Adaptive Signal Multiplexer. This multiplexer adaptively multiplexes the output Si from the interference filter 46 and the output Sn from the ambient noise filter 48 to produce two interference signals Ic and Is as follows:

Ic(t)=We1Si(t)+We2Sn(t)

Is(t)=Wn1Si(t)+Wn2Sn(t)

[0169] The weights (We1, We2) and (Wn1, Wn2) can be changed based on different input signal environment conditions to minimize signal cancellation or improve unwanted signal suppression. In this embodiment, the weights are determined based on the following conditions:

[0170] If a target signal is detected and the updating conditions for filter 46 (552) and filter 48 (560) are false, then We1=0, We2=1.0, Wn1=0.8 and Wn2=1.0.

[0171] Else if no target signal is detected and the updating condition for filter 46 (552) is true, then We1=1.0, We2=1.0, Wn1=1.0 and Wn2=1.0.

[0172] Else if no target signal is detected, the updating condition for filter 46 (552) is false and the updating condition for filter 48 (560) is true, then We1=0, We2=1.0, Wn1=1.0 and Wn2=1.0.
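The weight selection of paragraphs [0170]-[0172] and the mixing of paragraph [0168] can be sketched as below; the default weights used for input combinations the patent does not list are an assumption:

```c
/* Adaptive Signal Multiplexer sketch: pick weights, then mix. */
typedef struct { double we1, we2, wn1, wn2; } MuxWeights;

MuxWeights mux_weights(int target_detected, int upd46, int upd48)
{
    MuxWeights w = {1.0, 1.0, 1.0, 1.0};   /* covers [0171] and the default */
    if (target_detected && !upd46 && !upd48) {
        w.we1 = 0.0; w.wn1 = 0.8;          /* condition of [0170] */
    } else if (!target_detected && !upd46 && upd48) {
        w.we1 = 0.0;                       /* condition of [0172] */
    }
    return w;
}

/* Ic = We1*Si + We2*Sn,  Is = Wn1*Si + Wn2*Sn */
void mux(MuxWeights w, double si, double sn, double *ic, double *is)
{
    *ic = w.we1 * si + w.we2 * sn;
    *is = w.wn1 * si + w.wn2 * sn;
}
```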

[0173] Ic is subtracted from the Sum Channel Ŝc so as to derive an output es with reduced noise and interference. This output es is almost interference and noise free in an ideal situation. In a realistic situation, however, this cannot be achieved: either signal cancellation will degrade the target signal quality, or noise and interference will feed through, degrading the output signal to noise and interference ratio. The signal cancellation problem is reduced in the described embodiment by use of the Adaptive Spatial Filter 44, which reduces the target signal leakage into the Difference Channels. However, in cases where the signal to noise and interference ratio is very high, some target signal may still leak into these channels.

[0174] To further reduce the target signal cancellation problem and unwanted signal feed-through to the output, the other output signal Is from the Adaptive Signal Multiplexer is fed into the Adaptive Non-Linear Interference and Noise Suppression Processor 50.

[0177] Sc(t), es(t) and Is(t) are buffered into a memory as illustrated in FIG. 13. The buffer consists of N/2 new samples and N/2 old samples from the previous block.

[0178] A Hanning Window is then applied to the N buffered samples as illustrated in FIG. 14, expressed mathematically as follows:
Sh=[Sc(t+1)Sc(t+2)⋮Sc(t+N)]·Hn(H.3)Eh=[es(t+1)es(t+2)⋮es(t+N)]·Hn(H.4)Ih=[Is(t+1)Is(t+2)⋮Is(t+N)]·Hn(H.5)

[0179] Where (Hn) is a Hanning Window of dimension N, N being the dimension of the buffer. The "dot" denotes point-by-point multiplication of the vectors. t is a time index.

[0180] The resultant vectors [Sh], [Eh] and [Ih] are transformed into the frequency domain using Fast Fourier Transform algorithm as illustrated in equation H.6, H.7 and H.8 below:

Sf=FFT(Sh) (H.6)

Ef=FFT(Eh) (H.7)

If=FFT(Ih) (H.8)

[0181] A modified spectrum is then calculated, which is illustrated in Equations H.9 and H.10:

Ps=|Re(Sf)|+|Im(Sf)|+F(Sf)*rs (H.9)

Pi=|Re(If)|+|Im(If)|+F(If)*ri (H.10)

[0182] Where “Re” and “Im” refer to taking the absolute values of the real and imaginary parts, rs and ri are scalars and F(Sf) and F(If) denotes a function of Sf and If respectively.

[0183] One preferred function F using a power function is shown below in equation H.11 and H.12 where “Conj” denotes the complex conjugate:

Ps=|Re(Sf)|+|Im(Sf)|+(Sf*conj(Sf))*rs (H.11)

Pi=|Re(If)|+|Im(If)|+(If*conj(If))*ri (H.12)

[0184] A second preferred function F using a multiplication function is shown below in equations H.13 and H.14:

Ps=|Re(Sf)|+|Im(Sf)|+|Re(Sf)|*|Im(Sf)|*rs (H.13)

Pi=|Re(If)|+|Im(If)|+|Re(If)|*|Im(If)|*ri (H.14)

[0185] The values of the scalars rs and ri control the tradeoff between unwanted signal suppression and signal distortion and may be determined empirically. rs and ri are calculated as 1/2^vs and 1/2^vi, where vs and vi are scalars. In this embodiment, vs=vi is chosen as 8, giving rs=ri=1/256. As vs and vi reduce, the amount of suppression increases.

[0186] Frequency Scan for Similar Peak Between Ps and Pi.

[0187] Pi may contain some of the frequency components of Ps due to wrong estimation of Pi. Therefore, frequency scanning is applied to both Ps and Pi to look for peaks at the same frequency components. Those peaks in Pi are then multiplied by an attenuation factor, which is chosen to be 0.1 in this case.

[0188] The spectra (Ps) and (Pi) are warped into (Nb) critical bands using the Bark Frequency Scale [See Lawrence Rabiner and Biing-Hwang Juang, Fundamentals of Speech Recognition, Prentice Hall 1993]. The number of Bark critical bands depends on the sampling frequency used. For a sampling frequency of 16 kHz, there will be Nb=22 critical bands. The warped Bark Spectra of (Ps) and (Pi) are denoted as (Bs) and (Bi).

[0189] Voiced/Unvoiced Detection and Amplification

[0190] This is used to detect voiced or unvoiced signals from the Bark critical bands of the sum signal and hence reduce the effect of signal cancellation on the unvoiced signal. It is performed as follows:
Bs=[Bs(0)Bs(1)⋮Bs(Nb)]Vsum=∑n=0kBs(n)wherekisthevoicebanduppercutoffUsum=∑n=lNbBs(n)wherelistheunvoicedbandlowercutoffUnvoice_Ratio=UsumVsum

If Unvoice_Ratio>Unvoice_Th

Bs(n)=Bs(n)×A

[0191] where l≦n≦Nb

[0192] In this embodiment, the values of the voiced band upper cutoff k, the unvoiced band lower cutoff l, the unvoiced threshold Unvoice_Th and the amplification factor A are equal to 16, 18, 10 and 8 respectively.
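The voiced/unvoiced test above can be sketched as follows, using 0-based indexing over the 22 Bark bands (an assumption; the patent writes Bs(0)..Bs(Nb)):

```c
/* Voiced/unvoiced detection and amplification sketch. */
#define NB 22   /* Bark bands at 16 kHz sampling */

void unvoiced_boost(double *bs, int k, int l, double th, double amp)
{
    double vsum = 0.0, usum = 0.0;
    for (int n = 0; n <= k && n < NB; n++)
        vsum += bs[n];                 /* voiced-band energy, bands 0..k */
    for (int n = l; n < NB; n++)
        usum += bs[n];                 /* unvoiced-band energy, bands l.. */
    if (vsum > 0.0 && usum / vsum > th)
        for (int n = l; n < NB; n++)
            bs[n] *= amp;              /* amplify the unvoiced bands */
}
```

With the embodiment's values this would be called as unvoiced_boost(bs, 16, 18, 10.0, 8.0).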

[0193] A Bark Spectrum of the system noise and environment noise is similarly computed and is denoted as (Bn). Bn is first established during system initialization as Bn=Bs and continues to be updated when no target signal is detected (step) by the system, i.e. during any silence period. Bn is updated as follows:

if ((Er3 < Tn1) || (loop_cnt < 20))
{
    if (Er3 < nl1)
        α = 0.98;
    else
        α = 0.90;
    nl1 = α*nl1 + (1 − α)*Er1;
    Bn = α*Bn + (1 − α)*Bs;
}

[0194] Using (Bs, Bi and Bn) a non-linear technique is used to estimate a gain (Gb) as follows:

[0195] First, the unwanted signal Bark Spectrum is combined with the system noise Bark Spectrum using an appropriate weighting function, as illustrated in Equation J.1.

By=Ω1Bi+Ω2Bn (J.1)

[0196] Ω1 and Ω2 are weights which can be chosen empirically so as to maximize suppression of unwanted signals and noise while minimizing signal distortion. In this embodiment, Ω1=1.0 and Ω2=0.25.

[0197] Following that a post signal to noise ratio is calculated using Equation J.2 and J.3 below:
Rpo = Bs / By   (J.2)

Rpp=Rpo−INb×1 (J.3)

[0198] The division in equation J.2 means element-by-element division and not vector division. Rpo and Rpp are column vectors of dimension (Nb×1), Nb being the dimension of the Bark Scale Critical Frequency Band and INb×1 is a column unity vector of dimension (Nb×1) as shown below:
Rpo=[rpo(1)rpo(2)⋮rpo(Nb)](J.4)Rpp=[rpp(1)rpp(2)⋮rpp(Nb)](J.5)INbx1=[11⋮1](J.6)

[0199] If any of the rpp elements of Rpp are less than zero, they are set equal to zero.

[0201] The division in Equation J.7 means element-by-element division. Bo is a column vector of dimension (Nb×1) and denotes the output signal Bark Spectrum from the previous block, Bo=Gb×Bs (see Equation J.15); Bo is initially zero. Rpr is also a column vector of dimension (Nb×1). The value of βi is given in Table 2 below:

TABLE 2

i     1        2       3      4     5
βi    0.01625  0.1225  0.245  0.49  0.98

[0202] The value of i is set equal to 1 at the onset of a signal, so the βi value is initially 0.01625. The value of i then counts from 1 to 5 over each new block of N/2 samples processed and stays at 5 until the signal is off. i starts from 1 again at the next signal onset and βi is taken accordingly.

[0203] Instead of βi being constant, in this embodiment βi is made variable and starts at a small value at the onset of the signal to prevent suppression of the target signal and increases, preferably exponentially, to smooth Rpr.

[0204] From this, Rrr is calculated as follows:
Rrr = Rpr / (INb×1 + Rpr)   (J.8)

[0205] The division in Equation J.8 is again element-by-element. Rrr is a column vector of dimension (Nb×1).

[0206] From this, Lx is calculated:

Lx=Rrr·Rpo (J.9)

[0207] The value of Lx is limited to π (≈3.14). The multiplication in Equation J.9 means element-by-element multiplication. Lx is a column vector of dimension (Nb×1) as shown below:

Lx = [lx(1) lx(2) … lx(nb) … lx(Nb)]T   (J.10)

[0208] A vector Ly of dimension (Nb×1) is then defined as:
Ly=[ly(1)ly(2)⋮ly(nb)⋮ly(Nb)](J.11)

[0209] Where nb=1,2 . . . Nb. Then Ly is given as:
ly(nb)=exp(E(nb)2)and(J.12)E(nb)=-0.57722-log(lx(nb))+lx(nb)-(lx(nb))24+(lx(nb))38-(lx(nb))496…(J.13)

[0210] E(nb) is truncated to the desired accuracy. Ly can be obtained using a look-up table approach to reduce computational load.

[0211] Finally, the Gain Gb is calculated as follows:

Gb=Rrr·Ly (J.14)

[0212] The “dot” again implies element-by-element multiplication. Gb is a column vector of dimension (Nb×1) as shown:
Gb=[g(1)g(2)⋮g(nb)⋮g(Nb)](J.15)

[0213] As Gb is still in the Bark Frequency Scale, it is then unwarped back to the normal linear frequency scale of N dimensions. The unwarped Gb is denoted as G.

[0214] The output spectrum with unwanted signal suppression is given as:

S̄f = (1−Rsdf)·G·Sf + Rsdf·Ef   (J.16)

[0215] The “·” again implies element-by-element multiplication. In eqn J.16 if Rsdf is high (implying high signal energy to interference energy) the output signal spectrum is weighted more from Ef than the Noise suppression part (G·Sf) to prevent signal cancellation caused by the noise suppression part.

[0216] The recovered time domain signal is given by:

S̄t = Re(IFFT(S̄f))   (J.17)

[0217] IFFT denotes an Inverse Fast Fourier Transform, with only the Real part of the inverse transform being taken.

[0218] Finally, the output time domain signal is obtained by overlap add with the previous block of output signal:
S^t=[S_t(1)S_t(2)⋮S_t(N/2)]+[Zt(1)Zt(2)⋮Zt(N/2)](J.18)Where:Zt=[S_t-1(1+N/2)S_t-1(2+N/2)⋮S_t-1(N)](J.19)

[0219] The embodiment described is not to be construed as limitative. For example, there can be any number of channels from two upwards. Furthermore, as will be apparent to one skilled in the art, many steps of the method employed are essentially discrete and may be employed independently of the other steps or in combination with some but not all of the other steps. For example, the adaptive filtering and the frequency domain processing may be performed independently of each other and the frequency domain processing steps such as the use of the modified spectrum, warping into the Bark scale and use of the scaling factor pi can be viewed as a series of independent tools which need not all be used together.

[0220] Turning now to FIGS. 16 and 17, an embodiment of the invention is shown which is a headset system. As shown schematically in FIG. 16, the system has two units, namely a base station 71 and a mobile unit 72.

[0221] The base unit provides connection to any host system 73 (such as a PC) through a USB (universal serial bus). It acts as a router for streaming audio information between the host system and the mobile unit 72. It is formed with a cradle (not shown) for receiving and holding the mobile unit 72. The cradle is preferably provided with a charging unit co-operating with a rechargeable power source which is part of the mobile unit 72. The charging unit charges the power source while the mobile unit 72 is held by the cradle.

[0222] The base unit 71 includes at least one aerial 74 for two-way wireless communication with at least one aerial 75 of the mobile unit 72. The mobile unit includes a loudspeaker 76 (shown physically connected to the mobile unit 72 by a wire, though as explained below, this is not necessary), and at least two microphones (audio sensors) 77. The wireless link between the mobile unit 72 and base station 71 is a highly secure RF Bluetooth link.

[0223]FIG. 17 shows the mobile unit 72 in more detail. It has a structure defining an open loop 78 to be placed around the head or neck of a user, for example so as to be supported on the user's shoulders. At the two ends of the loop are multiple microphones 77 (normally 2 or 4 in total), to be placed in proximity to the user's mouth for receiving voice input. One or more batteries 79 may be provided near the microphones 77. In this case there are two antennas 75 embedded in the structure. Away from the antennas, the loop 78 is covered with RF-absorbing material. A rear portion 80 of the loop is a flex-circuit containing the digital signal processing and RF circuitry.

[0224] The system further includes an ear speaker (not shown) magnetically coupled to the mobile unit 72 by components (not shown) provided on the mobile unit 72. The user wears the ear speaker in one of his ears, and it allows audio output from the host system 73. This enables two-way communication applications, such as internet telephony and other speech and audio applications.

[0225] Preferably, the system includes digital circuitry carrying out a method according to the invention on the audio signals received by the multiple microphones 77. Some or all of the circuitry can be within the circuitry 80 and/or within the base unit 71.

[0226] FIGS. 18(a) and 18(b) show two ways in which a user can wear the mobile unit 72 having the shape illustrated in FIG. 17. In FIG. 18(a) the user wears the mobile unit 72 resting on the top of his head with the microphones close to his mouth. In FIG. 18(b) the user has chosen to wear the mobile unit 72 supported by his shoulders and with the two arms of the loop embracing his neck, again with the microphones close to his mouth.

[0227] Use of first, second, etc. in the claims should only be construed as a means of identification of the integers of the claims, not of process step order. Any novel feature or combination of features disclosed is to be taken as forming an independent invention whether or not specifically claimed in the appended claims of this application as initially filed.