Various speech enhancement techniques (e.g., noise suppression, dereverberation) rely on knowledge of the statistics of the clean signal and the noise process. In practice, however, these statistics are not explicitly available, and the overall enhancement accuracy critically depends on how well the unknown statistics are estimated. Estimating the noise (and speech) statistics is a particularly critical and challenging problem under non-stationary noise conditions.
In this respect, subspace-based approaches have been shown to provide a good tradeoff between tracking speed and final misadjustment. In this talk, we investigate noise floor estimation using subspace decomposition. We examine both rank-limited and spherical assumptions on the speech and noise DFT matrices, as well as the robustness of the bias compensation factor inherent to the majority of noise estimation schemes. An experimental investigation of the subspace tracking performance and a comparison with the state of the art are also presented.
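To illustrate the general idea (not the specific scheme presented in the talk), the following sketch applies a subspace decomposition to a synthetic noisy spectral matrix: under a rank-limited assumption, the speech occupies a low-dimensional subspace, and under a spherical (white) noise assumption, the trailing singular values of the noisy matrix reflect the noise floor. All matrix sizes, ranks, and variable names here are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noisy spectral matrix: low-rank "speech" plus white noise.
F, T = 64, 200          # frequency bins x time frames (hypothetical sizes)
r = 4                   # assumed speech rank (rank-limited assumption)
speech = rng.standard_normal((F, r)) @ rng.standard_normal((r, T))
noise_std = 0.5
noisy = speech + noise_std * rng.standard_normal((F, T))

# Subspace decomposition: singular values beyond the assumed speech rank
# are attributed to noise (spherical / white-noise assumption), so the
# noise variance is estimated from the trailing eigenvalues of the
# sample covariance, i.e. s_i^2 / T for i > r.
s = np.linalg.svd(noisy, compute_uv=False)
noise_var_est = np.mean(s[r:] ** 2) / T

print(f"true noise variance:      {noise_std**2:.3f}")
print(f"estimated noise variance: {noise_var_est:.3f}")
```

In a practical tracker the SVD would be replaced by a recursive subspace update over incoming frames, and the raw estimate would be scaled by a bias compensation factor, since the trailing eigenvalues systematically underestimate the noise power when part of the noise energy leaks into the speech subspace.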