Many signals can be labeled with a small set of events such that each event is categorized according to its surrounding signal shapes. In this thesis, we provide a general approach based on linear state space models to learn sparse signal decompositions from single-channel and multi-channel discrete-time measurements. The proposed approach provides a sparse multi-channel representation of a given signal, which can be interpreted as a signal labeling. This thesis is organized into three parts.
In the first part, several important properties of linear state space models (LSSMs) are revisited. In particular, signals generated by an autonomous LSSM are thoroughly investigated and fully characterized: we show that the set of such signals forms a ring, and that the correlation between any autonomous LSSM signal and any discrete-time signal can be computed recursively and efficiently. These two properties, along with the vast modeling capabilities of LSSM signals, are at the heart of this thesis.
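To make the recursive-correlation property concrete, the following sketch generates an autonomous LSSM signal y[k] = c^T A^k x0 and correlates it with an arbitrary discrete-time signal using a single state update per sample, instead of a full convolution sum. The function names, the two-dimensional damped-rotation transition matrix, and all parameter values are illustrative choices, not taken from the thesis.

```python
import numpy as np

def lssm_signal(A, c, x0, K):
    """Output y[k] = c^T A^k x0 of an autonomous LSSM, for k = 0..K-1."""
    x = x0.copy()
    y = np.empty(K)
    for k in range(K):
        y[k] = c @ x
        x = A @ x  # autonomous state update: no input
    return y

def recursive_correlation(A, c, x0, u):
    """Correlation s[k] = sum_{i<=k} u[i] * y[k-i] with the LSSM signal
    y[j] = c^T A^j x0, computed with one state recursion per sample:
    z[k] = A z[k-1] + x0 * u[k]  implies  s[k] = c^T z[k]."""
    z = np.zeros_like(x0)
    s = np.empty(len(u))
    for k, uk in enumerate(u):
        z = A @ z + x0 * uk
        s[k] = c @ z
    return s
```

The recursion costs O(n^2) per sample for state dimension n, independent of the signal length, which is what makes the sliding correlations in the later parts tractable.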
In the second part, we develop a general approach to detect events in (single-channel or multi-channel) discrete-time signals and to estimate the parameters of such events. Since the number of events is assumed to be substantially smaller than the number of samples, the set of detected events is interpreted as a sparse representation of the given signal. An event locally creates characteristic signals, which are modeled with a two-sided autonomous LSSM; the right-sided model accounts for the signals observed after the event, while the left-sided model accounts for the signals before it. The problem of event detection and estimation is thus recast as fitting, at any given time, an LSSM signal to the observations. For this purpose, new cost functions are defined: an LSSM-weighted squared error cost and an LSSM-weighted polynomial cost. These cost functions have the attractive property of being recursively computable, and closed-form solutions are available for several of the associated minimization problems. For event detection itself, several hypothesis tests based on a suitable notion of local likelihood are proposed. Remarkably, event detection in various conditions, such as in the presence of an unknown additive or multiplicative interference signal, can be handled naturally within this framework. Finally, various important practical applications are addressed in detail in order to exemplify the potential of the proposed approach for event detection and estimation.
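As an illustration of a recursively computable cost, the following sketch treats a simple special case: at every time k, the amplitude a of a one-pole decaying shape s[j] = rho^j is fitted to the most recent samples under an exponential window gamma^j (both the shape and the window are elementary autonomous LSSM signals; the thesis allows general LSSM shapes and windows). The two sufficient statistics of the weighted least-squares fit obey one-tap recursions; all names and parameter values here are hypothetical.

```python
import numpy as np

def recursive_fit(y, rho=0.9, gamma=0.8):
    """At every k, minimize over a the windowed cost
        J_k(a) = sum_{j=0..k} gamma**j * (y[k-j] - a * rho**j)**2.
    The minimizer is a_hat[k] = num[k] / den[k] with
        num[k] = sum_j (gamma*rho)**j    * y[k-j]
        den[k] = sum_j (gamma*rho**2)**j
    and both sums satisfy one-tap recursions."""
    num, den = 0.0, 0.0
    a_hat = np.empty(len(y))
    for k, yk in enumerate(y):
        num = gamma * rho * num + yk          # new sample enters with weight 1
        den = gamma * rho**2 * den + 1.0      # window/shape energy term
        a_hat[k] = num / den
    return a_hat
```

Thresholding the resulting cost reduction num**2 / den over time would give a rudimentary detector in the spirit of the local-likelihood tests mentioned above; the point of the sketch is only that the per-sample work is constant.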
In the third and last part, we propose a general approach to learn sparse signal decompositions. We assume that each signal component can be sparsely represented in the input domain of some unknown LSSM. As in the sparse Bayesian learning framework, the sparse inputs are modeled as zero-mean Gaussian random variables with unknown variances. All unknown parameters are then estimated by maximum likelihood with an expectation-maximization (EM) algorithm, in which all parameters are jointly updated with closed-form expressions and all expectation quantities are efficiently computed with a Gaussian message passing algorithm. This general approach can handle a large variety of sparse signal decomposition problems; among them, we address learning repetitive signal shapes, learning classes of signal shapes, and decomposing a signal into scaled, time-shifted, and time-dilated versions of a signal shape. All these concepts and methods are illustrated with practical examples.
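The EM iteration for sparse inputs can be sketched for a generic linear observation model y = Hu + white noise, with each input u_i ~ N(0, s2_i) and the variances s2_i unknown. In this sketch the Gaussian posterior of the E-step is computed with plain matrix algebra rather than the Gaussian message passing used in the thesis, and the function name and parameter values are illustrative.

```python
import numpy as np

def sbl_em(H, y, noise_var=1e-2, iters=60):
    """EM for sparse Bayesian learning on y = H u + noise, u_i ~ N(0, s2_i).
    E-step: Gaussian posterior mean m and covariance V of u.
    M-step: closed-form variance update s2_i = m_i**2 + V_ii;
    variances of unused inputs shrink toward zero, yielding sparsity."""
    n = H.shape[1]
    s2 = np.ones(n)                            # initial input variances
    for _ in range(iters):
        # E-step: posterior of u given y and current variances
        V = np.linalg.inv(H.T @ H / noise_var + np.diag(1.0 / s2))
        m = V @ H.T @ y / noise_var
        # M-step: joint closed-form update of all variances
        s2 = np.maximum(m**2 + np.diag(V), 1e-12)  # numerical floor
    return m, s2
```

In the thesis the columns of H would be impulse responses of the learned LSSMs, and the E-step quantities m and diag(V) are obtained in linear time by forward-backward Gaussian message passing instead of the explicit matrix inverse above.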