This study proposes a new, fully specified model for automated seizure onset detection and seizure onset prediction based on electroencephalography (EEG) measurements. We processed two archetypal EEG databases, Freiburg (intracranial EEG) and CHB-MIT (scalp EEG), to determine whether our model could outperform state-of-the-art models. Four key components define our model: (1) multiscale principal component analysis for EEG de-noising, (2) EEG signal decomposition using either empirical mode decomposition, discrete wavelet transform, or wavelet packet decomposition, (3) statistical measures to extract relevant features, and (4) classification using machine learning algorithms. Our model achieved an overall accuracy of 100% in discriminating ictal from inter-ictal EEG for both databases. In seizure onset prediction, it discriminated between inter-ictal, pre-ictal, and ictal EEG with an accuracy of 99.77%, and between inter-ictal and pre-ictal EEG states with an accuracy of 99.70%. The proposed model is general and should prove applicable to other detection and prediction tasks involving bio-signals such as EMG and ECG. (C) 2017 Elsevier Ltd. All rights reserved.
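As a rough illustration of steps (2) and (3) of such a pipeline, the sketch below decomposes a signal segment with a hand-rolled Haar wavelet transform (standing in for the DWT; the study's actual de-noising, decomposition, and classifier choices are not reproduced here) and collects simple statistical measures per sub-band into a feature vector:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass (detail)
    return a, d

def decompose(x, levels=3):
    """Multi-level DWT: list of detail bands plus the final approximation."""
    bands = []
    a = x
    for _ in range(levels):
        a, d = haar_dwt(a)
        bands.append(d)
    bands.append(a)
    return bands

def band_features(band):
    """Simple statistical measures per sub-band, as in step (3)."""
    b = np.asarray(band)
    return [b.mean(), b.std(), np.abs(b).mean(), b.max() - b.min()]

rng = np.random.default_rng(0)
x = rng.standard_normal(256)                 # stand-in for one EEG segment
feats = np.concatenate([band_features(b) for b in decompose(x)])
print(feats.shape)                           # one fixed-length feature vector
```

A classifier from step (4) would then be trained on one such feature vector per labeled EEG segment.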

State-space smoothing has found many applications in science and engineering. Under linear and Gaussian assumptions, smoothed estimates can be obtained using efficient recursions, for example the Rauch-Tung-Striebel and Mayne-Fraser algorithms. Such schemes are equivalent to linear algebraic techniques that minimize a convex quadratic objective function with structure induced by the dynamic model. These classical formulations fall short in many important circumstances. For instance, smoothers obtained using quadratic penalties can fail when outliers are present in the data, and cannot track impulsive inputs and abrupt state changes. Motivated by these shortcomings, generalized Kalman smoothing formulations have been proposed in the last few years, replacing quadratic models with more suitable, often nonsmooth, convex functions. In contrast to classical models, these general estimators require the use of iterated algorithms, which have received increased attention from the control, signal processing, machine learning, and optimization communities. In this survey we show that the optimization viewpoint provides the control and signal processing communities great freedom in the development of novel modeling and inference frameworks for dynamical systems. We discuss general statistical models for dynamic systems, making full use of nonsmooth convex penalties and constraints, and providing links to important models in signal processing and machine learning. We also survey optimization techniques for these formulations, paying close attention to dynamic problem structure. Modeling concepts and algorithms are illustrated with numerical examples. (C) 2017 Elsevier Ltd. All rights reserved.
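To make the contrast between quadratic and nonsmooth penalties concrete, the following sketch smooths a random-walk signal once with a quadratic data penalty and once with a Huber data penalty handled by iteratively reweighted least squares; the model, penalty, and parameter choices are illustrative assumptions, not the survey's formulations:

```python
import numpy as np

def smooth(y, lam=10.0, huber_delta=None, iters=30):
    """Batch smoother for x_{k+1} = x_k + w_k, y_k = x_k + e_k:
    minimize sum_k rho(y_k - x_k) + lam * sum_k (x_{k+1} - x_k)^2.
    rho is quadratic by default, or a Huber loss (handled by iteratively
    reweighted least squares) when huber_delta is given."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)           # first-difference operator
    x = y.copy()
    for _ in range(iters):
        r = y - x
        if huber_delta is None:
            w = np.ones(n)                   # plain least squares weights
        else:                                # IRLS weights for the Huber loss
            w = np.where(np.abs(r) <= huber_delta, 1.0,
                         huber_delta / np.maximum(np.abs(r), 1e-12))
        x = np.linalg.solve(np.diag(w) + lam * D.T @ D, w * y)
        if huber_delta is None:
            break                            # quadratic case needs one solve
    return x

rng = np.random.default_rng(1)
truth = np.cumsum(0.1 * rng.standard_normal(100))
y = truth + 0.1 * rng.standard_normal(100)
y[50] += 10.0                                # one large outlier
err_l2 = abs(smooth(y)[50] - truth[50])
err_huber = abs(smooth(y, huber_delta=0.3)[50] - truth[50])
```

The Huber smoother down-weights the outlier residual, so its error at the corrupted sample is much smaller than that of the quadratic smoother.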

In Model Predictive Control (MPC), the control input is computed by solving a constrained finite-time optimal control (CFTOC) problem at each sample in the control loop. The main computational effort when solving the CFTOC problem using an active-set (AS) method is often spent on computing the search directions, which in MPC corresponds to solving unconstrained finite-time optimal control (UFTOC) problems. This is commonly performed using Riccati recursions or generic sparsity-exploiting algorithms. In this work the focus is on efficient search-direction computations for AS type methods. The system of equations to be solved at each AS iteration differs from the previous one only by a low-rank modification, and exploiting this structured change is important for the performance of AS type solvers. In this paper, theory is presented for exploiting these low-rank changes by modifying the Riccati factorization between AS iterations in a structured way. A numerical evaluation of the proposed algorithm shows that the computation time can be significantly reduced by modifying, instead of re-computing, the Riccati factorization. This speed-up can be important for AS type solvers used for linear, nonlinear and hybrid MPC.
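For reference, a UFTOC problem of the kind mentioned above can be solved with a standard backward Riccati recursion; the minimal sketch below (a generic finite-horizon LQR, not the paper's modified factorization) shows the recursion whose factorization the paper updates between AS iterations:

```python
import numpy as np

def riccati_lqr(A, B, Q, R, N):
    """Backward Riccati recursion for the unconstrained finite-time
    optimal control (UFTOC) problem
      min sum_{k=0}^{N-1} x_k' Q x_k + u_k' R u_k  +  x_N' Q x_N
      s.t. x_{k+1} = A x_k + B u_k.
    Returns feedback gains K_k such that u_k = -K_k x_k."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        G = R + B.T @ P @ B
        K = np.linalg.solve(G, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)        # Riccati update
        gains.append(K)
    return gains[::-1]                       # ordered k = 0 .. N-1

A = np.array([[1.0, 0.1], [0.0, 1.0]])      # double integrator, dt = 0.1
B = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
K = riccati_lqr(A, B, Q, R, N=50)

x = np.array([1.0, 0.0])                    # simulate the closed loop
for k in range(50):
    x = (A - B @ K[k]) @ x
print(np.linalg.norm(x))                    # state driven toward the origin
```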

The local approach to linear parameter varying (LPV) system identification consists in interpolating individually estimated local linear time invariant (LTI) models corresponding to fixed values of the scheduling variable. It is shown in this paper that, without any global structural assumption of the considered LPV system, individually estimated local state-space LTI models do not contain sufficient information for determining similarity transformations making them coherent. It is possible to estimate these similarity transformations from input-output data under appropriate excitation conditions. (C) 2017 Published by Elsevier Ltd.

Predicting the sign of press perturbation responses in ecological networks is challenging, due to the poor knowledge of the strength of the direct interactions among the species, and to the entangled coexistence of direct and indirect effects. We show in this paper that, for a class of networks that includes mutualistic and monotone networks, the sign of press perturbation responses can be qualitatively determined based only on the sign pattern of the community matrix, without any knowledge of parameter values. For other classes of networks, we show that a semi-qualitative approach yields sufficient conditions for community matrices with a given sign pattern to exhibit mutualistic responses to press perturbations; quantitative conditions can be provided as well for community matrices that are eventually nonnegative. We also present a computational test that can be applied to any class of networks so as to check whether the sign of the responses to press perturbations is constant in spite of parameter variations.

The ensemble Kalman filter (EnKF) is a Monte Carlo-based implementation of the Kalman filter (KF) for extremely high-dimensional, possibly nonlinear, and non-Gaussian state estimation problems. Its ability to handle state dimensions on the order of millions has made the EnKF a popular algorithm in different geoscientific disciplines. Despite a similarly vital need for scalable algorithms in signal processing, e.g., to make sense of the ever increasing amount of sensor data, the EnKF is hardly discussed in our field. This self-contained review is aimed at signal processing researchers and provides all the knowledge required to get started with the EnKF. The algorithm is derived in a KF framework, without the often encountered geoscientific terminology. Algorithmic challenges and required extensions of the EnKF are discussed, as well as relations to sigma point KFs and particle filters. The relevant EnKF literature is summarized in an extensive survey, and unique simulation examples, including popular benchmark problems, complement the theory with practical insights. The signal processing perspective highlights new directions of research and facilitates the exchange of potentially beneficial ideas, both for the EnKF and for high-dimensional nonlinear and non-Gaussian filtering in general.
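A minimal version of the stochastic EnKF measurement update with perturbed observations can be sketched as follows (a linear measurement model and toy dimensions are assumed purely for illustration):

```python
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """Stochastic EnKF measurement update.
    ensemble: (n, Ne) matrix of state members, y: (m,) measurement,
    H: (m, n) observation matrix, R: (m, m) measurement noise covariance."""
    n, Ne = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (Ne - 1)                     # ensemble sample covariance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # ensemble Kalman gain
    # perturbed observations: each member gets its own noisy copy of y
    Y = y[:, None] + np.linalg.cholesky(R) @ rng.standard_normal((len(y), Ne))
    return ensemble + K @ (Y - H @ ensemble)

rng = np.random.default_rng(2)
n, Ne = 4, 500
truth = np.ones(n)
ens = rng.standard_normal((n, Ne)) + 3.0       # biased prior ensemble
H = np.eye(n)
R = 0.01 * np.eye(n)
y = truth + 0.1 * rng.standard_normal(n)
post = enkf_update(ens, y, H, R, rng)
print(post.mean(axis=1))   # pulled from the prior mean (~3) toward the truth
```

In geoscientific applications the same update is applied with n in the millions; there the gain is never formed explicitly, which is one of the algorithmic points such a review covers.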

Passive magnetic sensors measure the magnetic flux density in three axes and are often integrated on a single chip. These low-cost sensors are widely used in car navigation as well as in battery-powered navigation equipment such as smartphones, as part of an electronic compass. We focus on a train localization application with multiple, exclusively onboard sensors and a track map. This approach is considered a base technology for future railway applications such as collision avoidance systems or autonomous train driving. In this paper, we address the following question: how beneficial are passive magnetic measurements for train localization? We present and analyze measurements of two different magnetometers recorded on a regional train in regular passenger service. We show promising correlations of the measurements with the track positions and the switch way traveled. The processed data reveal that the railway environment has repeatable, location-dependent magnetic signatures. This is considered a novel approach to train localization, as the use of these magnetic signals is not obvious at first view. The proposed methods based on passive magnetic measurements show a high potential to be integrated into new and existing train localization approaches.

In this paper, we study the problem of controlling complex networks with unilateral controls, i.e., controls which can assume only positive or only negative values, not both. Given a complex network represented by the adjacency matrix A, an algorithm is developed that constructs an input matrix B such that the resulting system (A, B) is controllable with a near-minimal number of unilateral control inputs. This is made possible by a reformulation of classical conditions for controllability that casts the minimal unilateral input selection problem into well-known optimization problems. We identify network properties that make unilateral controllability relatively easy to achieve compared to unrestricted controllability. For instance, the analysis of the network topology allows us to establish theoretical lower bounds on the minimal number of controls required. For various categories of random networks, as well as for a number of real-world networks, these lower bounds are often achieved by our heuristics.

Devising planar routes of minimal length that pass through predefined neighborhoods of target points plays an important role in reducing a mission's operating cost. Two versions of the problem are considered. The first assumes that the ordering of the targets is fixed a priori. In this case, the optimal route is devised by solving a convex optimization problem formulated either as a second-order cone program or as a sum-of-squares optimization problem. Additional route properties, such as continuity and minimal curvature, are considered as well. The second version allows the ordering of the targets to be optimized to further reduce the route length. We show that this problem can be solved by introducing additional binary variables, which allows the route to be designed using off-the-shelf mixed-integer solvers. A case study showing that the proposed strategy is computationally tractable is presented.

We propose a novel class of Sequential Monte Carlo (SMC) algorithms, appropriate for inference in probabilistic graphical models. This class of algorithms adopts a divide-and-conquer approach based upon an auxiliary tree-structured decomposition of the model of interest, turning the overall inferential task into a collection of recursively solved subproblems. The proposed method is applicable to a broad class of probabilistic graphical models, including models with loops. Unlike a standard SMC sampler, the proposed divide-and-conquer SMC employs multiple independent populations of weighted particles, which are resampled, merged, and propagated as the method progresses. We illustrate empirically that this approach can outperform standard methods in terms of the accuracy of the posterior expectation and marginal likelihood approximations. Divide-and-conquer SMC also opens up novel parallel implementation options and the possibility of concentrating the computational effort on the most challenging subproblems. We demonstrate its performance on a Markov random field and on a hierarchical logistic regression problem. Supplementary materials including proofs and additional numerical results are available online.

This paper proposes a decentralized control strategy for the voltage regulation of islanded inverter-interfaced microgrids. We show that an inverter-interfaced microgrid under plug-and-play (PnP) functionality of distributed generations (DGs) can be cast as a linear time-invariant system subject to polytopic-type uncertainty. Then, by virtue of this novel description and the use of results from robust control theory, the microgrid control system guarantees stability and a desired performance even under PnP operation of DGs. The robust controller is the solution of a convex optimization problem. The main properties of the proposed controller are that: 1) it is fully decentralized, and the local controllers of the DGs use only local measurements; 2) it guarantees the stability of the overall system; 3) it allows PnP functionality of DGs in microgrids; and 4) it is robust against microgrid topology changes. Various case studies, based on time-domain simulations in the MATLAB/SimPowerSystems Toolbox, are carried out to evaluate the performance of the proposed control strategy in terms of voltage tracking, microgrid topology change, PnP capability, and load changes.

A platform for sensor fusion consisting of a standard smartphone equipped with the specially developed Sensor Fusion app is presented. The platform enables real-time streaming of data over WiFi to a computer where signal processing algorithms, e.g., the Kalman filter, can be developed and executed in a Matlab framework. The platform is an excellent tool for educational purposes and enables learning activities where methods based on advanced theory can be implemented and evaluated at low cost. The article describes the app and a laboratory exercise developed around these new technological possibilities. The laboratory session is part of a course in sensor fusion, a signal processing continuation course focused on multiple sensor signal applications, where the goal is to give the students hands-on experience of the subject. This is done by estimating the orientation of the smartphone, which can be easily visualized and also compared to the built-in filters in the smartphone. The filter can accept any combination of sensor data from accelerometers, gyroscopes, and magnetometers to exemplify their importance. This way, different tunings and tricks of important methods are easily demonstrated and evaluated online. The presented framework facilitates this in a way previously impossible.

A common issue in many system identification problems is that the true input to the system is unknown. This paper extends a previously presented indirect modelling framework that deals with identification of systems where the input is partially or fully unknown. In this framework, unknown inputs are eliminated by using additional measurements that directly or indirectly contain information about the unknown inputs. The resulting indirect predictor model depends only on known and measured signals and can be used to estimate the desired dynamics or properties. Since the input of the indirect model contains both known inputs and measurements that could all be correlated with the same disturbances as the output, estimation of the indirect model faces challenges similar to those of closed-loop estimation. In fact, due to the generality of the indirect modelling framework, it unifies a number of existing system identification problems that are contained as special cases. For completeness, the paper concludes with one method that can be used to estimate the indirect model, as well as an experimental verification to show the applicability of the framework.

Bayesian nonparametric approaches have recently been introduced in system identification, where the impulse response is modeled as the realization of a zero-mean Gaussian process whose covariance (kernel) has to be estimated from data. In this scheme, the quality of the estimates crucially depends on the parametrization of the covariance of the Gaussian process. A family of kernels that has been shown to be particularly effective in the system identification framework is the family of Diagonal/Correlated (DC) kernels. Maximum entropy properties of a related family of kernels, the Tuned/Correlated (TC) kernels, have recently been pointed out in the literature. In this technical note, we show that these maximum entropy properties indeed extend to the whole family of DC kernels. The maximum entropy interpretation can be exploited in conjunction with results on matrix completion problems from the graphical models literature to shed light on the structure of the DC kernel. In particular, we prove that the DC kernel admits a closed-form factorization, inverse, and determinant. These results can be exploited both to improve numerical stability and to reduce the computational complexity associated with the computation of the DC estimator.
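The kernel structure discussed above is easy to explore numerically. The sketch below builds a DC kernel under one common parameterization (the note's exact convention may differ), checks that a TC kernel is recovered as a special case, and verifies numerically that, for this parameterization, the inverse is tridiagonal, consistent with a closed-form factorization:

```python
import numpy as np

def dc_kernel(n, c=1.0, lam=0.8, rho=0.9):
    """Diagonal/Correlated (DC) kernel, one common parameterization:
    K[i,j] = c * lam^((i+j)/2) * rho^|i-j|, with indices i, j = 1..n."""
    i = np.arange(1, n + 1)
    return (c * lam ** ((i[:, None] + i[None, :]) / 2.0)
              * rho ** np.abs(i[:, None] - i[None, :]))

n = 8
i = np.arange(1, n + 1)

# TC is the special case rho = sqrt(lam): K[i,j] = c * lam^max(i,j)
Ktc = dc_kernel(n, lam=0.8, rho=np.sqrt(0.8))
print(np.allclose(Ktc, 0.8 ** np.maximum(i[:, None], i[None, :])))  # True

# The inverse of the DC kernel is tridiagonal: it is a diagonal scaling
# of an AR(1)-type kernel rho^|i-j|, whose inverse is tridiagonal.
Kinv = np.linalg.inv(dc_kernel(n))
mask = np.abs(i[:, None] - i[None, :]) > 1
print(np.allclose(Kinv[mask], 0.0, atol=1e-8))  # True
```

The tridiagonal inverse is precisely what makes an O(n) evaluation of quadratic forms and determinants plausible for the DC estimator.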

Most navigation systems today, including those in cars, rely on global navigation satellite systems (GNSS). With support from odometry and inertial sensors, this is a sufficiently accurate and robust solution, but there are future demands. Autonomous cars require higher accuracy and integrity. Using the car as a sensor probe for road conditions in cloud-based services sets other kinds of requirements. The concept of the Internet of Things requires stand-alone solutions without access to vehicle data. Our vision is a future with both in-vehicle localization algorithms and after-market products, where the position is computed with high accuracy in GNSS-denied environments. We present a localization approach based on the prior that vehicles spend most of their time on the road, with the odometer as the primary input. When wheel speeds are not available, we present an approach based solely on inertial sensors, which can also be used as a speedometer. The map information is included in a Bayesian setting using the particle filter (PF) rather than standard map matching. In extensive experiments, the performance without GNSS is shown to be of essentially the same quality as that obtained using a GNSS sensor. Several topics are treated: virtual measurements, dead reckoning, inertial sensor information, indoor positioning, off-road driving, and multilevel positioning.
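The map-aided PF idea can be caricatured in one dimension: particles are dead-reckoned with an odometry increment and then weighted by a position-related measurement. Here a generic Gaussian position likelihood stands in for the map-induced likelihood, and all models and noise levels are illustrative assumptions:

```python
import numpy as np

def pf_step(particles, weights, u, y, sigma_u, sigma_y, rng):
    """One bootstrap particle filter step for 1-D along-track position:
    propagate with odometry increment u, weight with a position-related
    measurement y, then resample."""
    N = len(particles)
    particles = particles + u + sigma_u * rng.standard_normal(N)  # dead reckoning
    w = weights * np.exp(-0.5 * ((y - particles) / sigma_y) ** 2) # likelihood
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                              # resample
    return particles[idx], np.full(N, 1.0 / N)

rng = np.random.default_rng(3)
N = 2000
particles = rng.uniform(0.0, 100.0, N)     # unknown start along the track
weights = np.full(N, 1.0 / N)
true_pos = 20.0
for _ in range(30):
    true_pos += 1.0                        # the vehicle advances 1 m per step
    y = true_pos + 0.5 * rng.standard_normal()
    particles, weights = pf_step(particles, weights, 1.0, y, 0.2, 0.5, rng)
print(abs(float(particles.mean()) - true_pos) < 1.0)
```

In the paper's setting the likelihood instead comes from the road prior and virtual measurements, but the propagate-weight-resample cycle is the same.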

The main goal of this paper is to compare the instrumental variables and least squares methods for parameter estimation in continuous-time systems, avoiding any preliminary discretization of the process, and to analyse which method is more suitable for continuous-time estimation under stochastic perturbations. A numerical example illustrates the effectiveness of the algorithms.

There is a growing trend toward using event-triggered proximity reports for indoor positioning. This paper presents a generic received-signal-strength (RSS) threshold optimization framework for generating informative proximity reports. The proposed framework contains five main building blocks, namely deployment information, the RSS model, positioning metric selection, the optimization process, and management. Among these, we focus on Gaussian process regression (GPR)-based RSS models and positioning metric computation. The optimal RSS threshold is found by minimizing the best achievable localization root-mean-square error, formulated with the aid of fundamental lower bound analysis. Computational complexity is compared for different RSS models and different fundamental lower bounds. The resulting optimal RSS threshold enables enhanced performance of emerging low-cost, low-complexity proximity-report-based positioning algorithms. The proposed framework is validated with real measurements collected in an office area where Bluetooth Low Energy (BLE) beacons are deployed.

The general Simultaneous Localisation and Mapping (SLAM) problem aims at estimating the state of a moving platform while simultaneously building a map of the local environment. There are essentially three classes of algorithms: EKF-SLAM and FastSLAM solve the problem on-line, while Nonlinear Least Squares (NLS) is a batch method. All of them scale badly with either the state dimension, the map dimension or the batch length. We investigate the EM algorithm for solving a generalized version of the NLS problem. This EM-SLAM algorithm solves two simpler problems iteratively, and hence scales much better with these dimensions. The iterations alternate between state estimation, where we propose an extended Rauch-Tung-Striebel smoother, and map estimation, where a quasi-Newton method is suggested. The proposed method is evaluated in real experiments, and also in simulations, on a platform with a monocular camera attached to an inertial measurement unit. It is demonstrated to produce lower RMSE than a standard Levenberg-Marquardt solver applied to the NLS problem, at a computational cost that grows considerably more slowly.

In this paper, we propose a distributed algorithm, based on primal-dual interior-point methods, for solving coupled problems with chordal sparsity or an inherent tree structure. We achieve this by distributing the computations at each iteration using message-passing. In comparison to existing distributed algorithms for solving such problems, this algorithm requires far fewer iterations to converge to a solution with high accuracy. Furthermore, it is possible to compute an upper bound on the number of required iterations which, unlike existing methods, depends only on the coupling structure of the problem. We illustrate the performance of our proposed method on a set of numerical examples.

This paper presents a novel multi-sensor framework to efficiently identify, track, localise and map every piece of fruit in a commercial mango orchard. A multiple viewpoint approach is used to solve the problem of occlusion, thus avoiding the need for labour-intensive field calibration to estimate actual yield. Fruit are detected in images using a state-of-the-art faster R-CNN detector, and pair-wise correspondences are established between images using trajectory data provided by a navigation system. A novel LiDAR component automatically generates image masks for each canopy, allowing each fruit to be associated with the corresponding tree. The tracked fruit are triangulated to locate them in 3D, enabling a number of spatial statistics per tree, row or orchard block. A total of 522 trees and 71,609 mangoes were scanned on a Calypso mango orchard near Bundaberg, Queensland, Australia, with 16 trees counted by hand for validation, both on the tree and after harvest. The results show that single, dual and multi-view methods can all provide precise yield estimates, but only the proposed multi-view approach can do so without calibration, with an error rate of only 1.36% for individual trees.

1 Migratory songbirds carry an inherited capacity to migrate several thousand kilometers each year, crossing continental landmasses and barriers between distant breeding sites and wintering areas. How individual songbirds manage to find their way with extreme precision is still largely unknown. The functional characteristics of biological compasses used by songbird migrants have mainly been investigated by recording the birds' directed migratory activity in circular cages, so-called Emlen funnels. This method is 50 years old and has not received major updates over the past decades. The aim of this work was to compare the results from newly developed digital methods with the established manual methods for evaluating songbird migratory activity and orientation in circular cages. 2 We performed orientation experiments on the European robin (Erithacus rubecula) using modified Emlen funnels equipped with thermal paper, and simultaneously recorded the songbird movements from above. We evaluated and compared the results obtained with five different methods. Two methods have been commonly used in songbird orientation experiments; the other three methods were developed for this study and were based either on evaluation of the thermal paper using automated image analysis, or on analysis of videos recorded during the experiment. 3 The methods used to evaluate the scratches produced by the claws of the birds on the thermal papers showed some differences compared with the video analyses. These differences were caused mainly by differences in scatter, as any movement of the bird along the sloping walls of the funnel was recorded on the thermal paper, whereas video evaluation allowed us to detect single take-off attempts by the birds and to consider only this behavior in the orientation analyses. Using computer vision, we were also able to identify and separately evaluate different behaviors that were impossible to record with the thermal paper.
4 The traditional Emlen funnel is still the most widely used method to investigate compass orientation in songbirds under controlled conditions. However, new numerical image analysis techniques provide a much higher level of detail of songbirds' migratory behavior and will offer an increasing number of possibilities to evaluate and quantify specific behaviors as new algorithms are developed.

Today's 4G LTE systems bring unprecedented mobile broadband performance to over a billion users across the globe. Recently, work on a 5G mobile communication system has begun, and next to a new 5G air interface, LTE will be an essential component. The evolution of LTE will therefore strive to meet 5G requirements and to address 5G use cases. In this article, we provide an overview of foreseen key technology areas and components for LTE Release 14, including latency reductions, enhancements for machine-type communication, operation in unlicensed spectrum, massive multi-antenna systems, broadcasting, positioning, and support for intelligent transportation systems.

Geometric phases describe how in a continuous-time dynamical system the displacement of a variable (called phase variable) can be related to other variables (shape variables) undergoing a cyclic motion, according to an area rule. The aim of this paper is to show that geometric phases can exist also for discrete-time systems, and even when the cycles in shape space have zero area. A context in which this principle can be applied is stock trading. A zero-area cycle in shape space represents the type of trading operations normally carried out by high-frequency traders (entering and exiting a position on a fast time-scale), while the phase variable represents the cash balance of a trader. Under the assumption that trading impacts stock prices, even zero-area cyclic trading operations can induce geometric phases, i.e., profits or losses, without affecting the stock quote.

The identification of multivariable state-space models in innovation form is solved in a subspace identification framework using convex nuclear norm optimization. The convex optimization approach makes it possible to include constraints on the unknown matrices in the data equation characterizing subspace identification methods, such as the lower triangular block-Toeplitz structure of the weighting matrices constructed from the Markov parameters of the unknown observer. The classical use of instrumental variables to remove the influence of the innovation term on the data equation in subspace identification is thereby avoided. Avoiding the instrumental-variable projection step has the potential to improve the accuracy of the estimated model predictions, especially for short data sequences. (C) 2016 Elsevier Ltd. All rights reserved.

Background: The mode of action of a drug on its targets can often be classified as positive (activator, potentiator, agonist, etc.) or negative (inhibitor, blocker, antagonist, etc.). The signed edges of a drug-target network can be used to investigate the combined mechanisms of action of multiple drugs on the ensemble of common targets. Results: In this paper it is shown that for the signed human drug-target network the majority of drug pairs tend to have synergistic effects on their common targets, i.e., drug pairs tend to have modes of action with the same sign on most of the shared targets, especially for the principal pharmacological targets of a drug. Methods are proposed to compute this synergism, as well as to estimate the influence of one drug on the side effects of another. Conclusions: Enriching a drug-target network with functional information such as the sign of the interactions makes it possible to explore in a systematic way a series of network properties of key importance in the context of computational drug combinatorics.

This paper presents a data-driven receding horizon fault estimation method for additive actuator and sensor faults in unknown linear time-invariant systems, with enhanced robustness to stochastic identification errors. State-of-the-art methods construct fault estimators with identified state-space models or Markov parameters, without compensating for identification errors. Motivated by this limitation, we first propose a receding horizon fault estimator parameterized by predictor Markov parameters. This estimator provides (asymptotically) unbiased fault estimates as long as the subsystem from faults to outputs has no unstable transmission zeros. When the identified Markov parameters are used to construct the above fault estimator, stochastic identification errors appear as model uncertainty multiplied with unknown fault signals and online system inputs/outputs (I/O). Based on this fault estimation error analysis, we formulate a mixed-norm problem for the offline robust design that regards online I/O data as unknown. An alternative online mixed-norm problem is also proposed that can further reduce estimation errors at the cost of increased computational burden. Based on a geometrical interpretation of the two proposed mixed-norm problems, systematic methods to tune the user-defined parameters therein are given to achieve desired performance trade-offs. Simulation examples illustrate the benefits of our proposed methods compared to recent literature. (C) 2016 Elsevier Ltd. All rights reserved.

Inspired by ideas from the machine learning literature, new regularization techniques have recently been introduced in linear system identification. In particular, all the adopted estimators solve a regularized least squares problem, differing in the nature of the penalty term assigned to the impulse response. Popular choices include atomic and nuclear norms (applied to Hankel matrices) as well as norms induced by the so-called stable spline kernels. In this paper, a comparative study of estimators based on these different types of regularizers is reported. Our findings reveal that stable spline kernels outperform approaches based on atomic and nuclear norms, since they suitably embed information on impulse response stability and smoothness. This point is illustrated using the Bayesian interpretation of regularization. We also design a new class of regularizers defined by "integral" versions of stable spline/TC kernels. Under quite realistic experimental conditions, the new estimators outperform classical prediction error methods even when the latter are equipped with an oracle for model order selection. (C) 2016 Elsevier Ltd. All rights reserved.

Nonlinear Kalman filters are algorithms that approximately solve the Bayesian filtering problem by employing the measurement update of the linear Kalman filter (KF). Numerous variants have been developed over the past decades, perhaps most importantly the popular sampling-based sigma point Kalman filters. In order to make the vast literature accessible, we present nonlinear KF variants in a common framework that highlights the computation of mean values and covariance matrices as the main challenge. The way in which these moment integrals are approximated distinguishes, for example, the unscented KF from the divided difference KF. With the KF framework in mind, a moment computation problem is defined and analyzed. It is shown how structural properties can be exploited to simplify its solution. Established moment computation methods, their basics and their extensions, are discussed in an extensive survey. The focus is on the sampling-based rules that are used in sigma point KFs. More specifically, we present three categories of methods that use sigma points: 1) to represent a distribution (as in the UKF); 2) for numerical integration (as in Gauss-Hermite quadrature); and 3) to approximate nonlinear functions (as in interpolation). Prospective benefits and downsides are listed for each of the categories and methods, including accuracy statements. Furthermore, the related KF publications are listed. The theoretical discussion is complemented with a comparative simulation study on instructive examples.
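The first category, sigma points that represent a distribution, can be illustrated with the basic symmetric sigma-point set of the unscented transform; the sketch below uses one common weight convention and checks the textbook property that the transform is exact for linear functions:

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Sigma-point approximation of the mean and covariance of f(x) for
    x ~ N(mean, cov), using the basic symmetric sigma-point set."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)       # scaled square root
    sigmas = [mean] + [mean + L[:, i] for i in range(n)] \
                    + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    Y = np.array([f(s) for s in sigmas])            # propagated sigma points
    m = w @ Y                                       # approximated mean
    C = (w[:, None] * (Y - m)).T @ (Y - m)          # approximated covariance
    return m, C

# For a linear map the transform is exact: mean A m0, covariance A P0 A'.
A = np.array([[1.0, 0.5], [0.0, 1.0]])
m0 = np.array([1.0, 2.0])
P0 = np.array([[2.0, 0.3], [0.3, 1.0]])
m, P = unscented_transform(m0, P0, lambda x: A @ x, kappa=1.0)
print(np.allclose(m, A @ m0), np.allclose(P, A @ P0 @ A.T))  # True True
```

For nonlinear f the same call gives the approximations that distinguish, e.g., the UKF from quadrature-based sigma point filters.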

It is a well-known fact that externally positive linear systems may fail to have a minimal positive realization. In order to investigate these cases, we introduce the notion of minimal eventually positive realization, for which the state update matrix becomes positive after a certain power. Eventually positive realizations capture the idea that in the impulse response of an externally positive system the state of a minimal realization may fail to be positive, but only transiently. As a consequence, we show that in discrete time it is possible to use downsampling to obtain minimal positive realizations matching decimated sequences of Markov coefficients of the impulse response. In continuous time, by contrast, if the sampling time is chosen sufficiently long, a minimal eventually positive realization always leads to a sampled realization which is minimal and positive.

In this study, a Random Forests (RF) classifier is proposed for ECG heartbeat signal classification in the diagnosis of heart arrhythmia. Discrete wavelet transform (DWT) is used to decompose ECG signals into successive frequency bands. A set of statistical features was extracted from the obtained frequency bands to describe the distribution of wavelet coefficients. This study shows that the RF classifier achieves superior performance compared to other decision tree methods using 10-fold cross-validation for the ECG datasets, and the obtained results suggest that further significant improvements in classification accuracy can be accomplished by the proposed classification system. Accurate ECG signal classification is the major requirement for detection of all arrhythmia types. The performance of the proposed system has been evaluated on two different databases, namely the MIT-BIH database and the St. Petersburg Institute of Cardiological Technics 12-lead Arrhythmia Database. For the MIT-BIH database, the RF classifier yielded an overall accuracy of 99.33% against 98.44% and 98.67% for the C4.5 and CART classifiers, respectively. For the St. Petersburg Institute of Cardiological Technics 12-lead Arrhythmia Database, the RF classifier yielded an overall accuracy of 99.95% against 99.80% for both the C4.5 and CART classifiers. The combined model with multiscale principal component analysis (MSPCA) de-noising, DWT, and the RF classifier also achieves better performance, with the area under the receiver operating characteristic (ROC) curve (AUC) and F-measure equal to 0.999 and 0.993 for the MIT-BIH database and 1 and 0.999 for the St. Petersburg Institute of Cardiological Technics 12-lead Arrhythmia Database, respectively. The obtained results demonstrate that the proposed system is capable of reliable classification of ECG signals and can assist clinicians in making an accurate diagnosis of cardiovascular disorders (CVDs).
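The decomposition-plus-statistics pipeline described above can be sketched with a plain Haar DWT (the study's actual wavelet choice, feature list, and classifier settings are not specified here; this is an illustrative reduction):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients. Length must be even."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
    return a, d

def wavelet_features(x, levels=3):
    """Per-subband statistics of the kind used to summarize the distribution
    of wavelet coefficients (mean |c|, power, std); len(x) must be divisible
    by 2**levels."""
    feats = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats += [np.mean(np.abs(d)), np.mean(d**2), np.std(d)]
    feats += [np.mean(np.abs(a)), np.mean(a**2), np.std(a)]
    return np.array(feats)
```

The resulting feature vector (three statistics per subband) would then be fed to the classifier of choice.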

In this letter, numerical algorithms for computing the marginal version of the Bayesian Cramér–Rao bound (M-BCRB) for jump Markov nonlinear systems and jump Markov linear Gaussian systems are proposed. Benchmark examples for both systems illustrate that the M-BCRB is tighter than three other recently proposed BCRBs.

The first order stable spline (SS-1) kernel (also known as the tuned/correlated kernel) is used extensively in regularized system identification, where the impulse response is modeled as a zero-mean Gaussian process whose covariance function is given by well designed and tuned kernels. In this paper, we discuss the maximum entropy properties of this kernel. In particular, we formulate the exact maximum entropy problem solved by the SS-1 kernel without Gaussian and uniform sampling assumptions. Under a general sampling assumption, we also derive the special structure of the SS-1 kernel (e.g. its tridiagonal inverse and factorization have closed-form expressions), and give it a maximum entropy covariance completion interpretation.
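The tridiagonal-inverse property mentioned above is easy to verify numerically; the sketch below assumes the common discrete-time form K[i, j] = alpha**max(i, j) of the SS-1/TC kernel (alpha and n are illustrative values):

```python
import numpy as np

# Numerical check of the tridiagonal-inverse property of the SS-1 / TC kernel,
# assuming the discrete-time form K[i, j] = alpha**max(i, j).
alpha, n = 0.7, 6
idx = np.arange(1, n + 1)
K = alpha ** np.maximum.outer(idx, idx)   # TC kernel Gram matrix
Kinv = np.linalg.inv(K)

# Entries of K^{-1} more than one step off the diagonal vanish.
off = np.triu(np.abs(Kinv), k=2)
print(off.max())   # numerically zero: the inverse is tridiagonal
```

The same structure is what makes closed-form factorizations of the SS-1 kernel possible.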

In this work we present a practical calibration algorithm that calibrates a magnetometer using inertial sensors. The calibration corrects for magnetometer sensor errors, for the presence of magnetic disturbances and for misalignment between the magnetometer and the inertial sensor axes. It is based on a maximum likelihood formulation and is formulated as an offline method. It is shown to give good results using data from two different commercially available sensor units. Using the calibrated magnetometer measurements in combination with the inertial sensors to determine orientation is shown to lead to significantly improved heading estimates.

This note presents an efficient approach for the evaluation of multi-parametric mixed integer quadratic programming (mp-MIQP) solutions, occurring for instance in control problems involving discrete time hybrid systems with quadratic cost. Traditionally, the online evaluation requires a sequential comparison of piecewise quadratic value functions. We introduce a lifted parameter space in which the piecewise quadratic value functions become piecewise affine and can be merged to a single value function defined over a single polyhedral partition without any overlaps. This enables efficient point location approaches using a single binary search tree. Numerical experiments with a power electronics application demonstrate an online speedup of up to an order of magnitude. We also show how the achievable online evaluation time can be traded off against the offline computational time.
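The lifting idea can be illustrated in a few lines: a quadratic value function becomes affine in the lifted parameter (x, vec(xx')), so candidate pieces can be compared with affine evaluations only (the data below are made up):

```python
import numpy as np

def lift(x):
    """Lifted parameter z = (x, vec(x x'))."""
    return np.concatenate([x, np.outer(x, x).ravel()])

def affine_coeffs(Q, f, c):
    """J(x) = x'Qx + f'x + c  ->  J = a'z + b, affine in z = (x, vec(xx'))."""
    return np.concatenate([f, Q.ravel()]), c

rng = np.random.default_rng(0)
Q1, f1, c1 = np.eye(2), np.array([1.0, 0.0]), 0.5   # one illustrative piece
x = rng.standard_normal(2)
a, b = affine_coeffs(Q1, f1, c1)
quad = x @ Q1 @ x + f1 @ x + c1
lifted = a @ lift(x) + b
print(abs(quad - lifted))   # identical up to round-off
```

Once every piece is affine in z, standard point-location structures such as a single binary search tree apply directly.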

Keywords

Control of constrained systems; control of discrete time hybrid systems; explicit MPC; Engineering and Technology

This study presents a new approach for optimal placement of synchronised phasor measurement units (PMUs) to ensure complete power system observability in the presence of non-synchronous conventional measurements and zero injections. Currently, financial or technical restrictions prohibit the deployment of PMUs on every bus, which in turn motivates their strategic placement across the power system. PMU allocation is optimised here based on measurement observability criteria for achieving solvability of the power system state estimation. Most previous work has proposed topological observability based methods for optimal PMU placement (OPP), which may not always ensure the numerical observability required for successful execution of state estimation. The proposed OPP method determines the minimum number and the optimal locations of PMUs required to make the power system numerically observable. The problem is formulated as a binary semi-definite programming (BSDP) model, with binary decision variables, minimising a linear objective function subject to linear matrix inequality observability constraints. The BSDP problem is solved using an outer approximation scheme based on binary integer linear programming. The developed method is evaluated on IEEE standard test systems. A large-scale system with 3120 buses is also analysed to exhibit the applicability of the proposed model to practical power system cases.

Safety and security applications benefit from better situational awareness. Radar micro-Doppler signatures from an observed target carry information about the target's activity, and have potential to improve situational awareness. This article describes, compares, and discusses two methods to classify human activity based on radar micro-Doppler data. The first method extracts physically interpretable features from the time-velocity domain, such as the main cycle time and properties of the envelope of the micro-Doppler spectra, and uses these in the classification. The second method derives its features based on the components with the most energy in the cadence-velocity domain (obtained as the Fourier transform of the time-velocity domain). Measurements from a field trial show that the two methods have similar activity classification performance. It is suggested that target base velocity and main limb cadence frequency are indirect features of both methods, and that they often suffice on their own to discriminate between the studied activities. This is corroborated by experiments with a reduced feature set. This opens up the design of new, more compact feature sets. Moreover, weaknesses of the methods and the impact of non-radial motion are discussed.

In simple organisms like E. coli, the metabolic response to an external perturbation passes through a transient phase in which the activation of a number of latent pathways can guarantee survival at the expense of growth. Growth is gradually recovered as the organism adapts to the new condition. This adaptation can be modeled as a process of repeated metabolic adjustments obtained through the silencing of non-essential metabolic reactions, using growth rate as the selection probability for the phenotypes obtained. The resulting metabolic adaptation process tends naturally to steer the metabolic fluxes towards high-growth phenotypes. Quite remarkably, when applied to the central carbon metabolism of E. coli, it follows that nearly all flux distributions converge to the flux vector representing optimal growth, i.e., the solution of the biomass optimization problem turns out to be the dominant attractor of the metabolic adaptation process.

Tracking human body motions using inertial sensors has become a well-accepted method in ambulatory applications since the subject is not confined to a lab-bounded volume. However, a major drawback is the inability to estimate relative body positions over time because inertial sensor information only allows position tracking through strapdown integration, but does not provide any information about relative positions. In addition, strapdown integration inherently results in drift of the estimated position over time. We propose a novel method in which a permanent magnet combined with 3-D magnetometers and 3-D inertial sensors is used to estimate the global trunk orientation and relative pose of the hand with respect to the trunk. An Extended Kalman Filter is presented to fuse estimates obtained from inertial sensors with magnetic updates such that the position and orientation between the human hand and trunk as well as the global trunk orientation can be estimated robustly. This has been demonstrated in multiple experiments in which various hand tasks were performed. The most complex task, in which simultaneous movements of both trunk and hand were performed, resulted in an average rms position difference with an optical reference system of 19.7 +/- 2.2 mm, whereas the relative trunk-hand and global trunk orientation errors were 2.3 +/- 0.9 and 8.6 +/- 8.7 deg, respectively.

We present an adaptive smoother for linear state-space models with unknown process and measurement noise covariances. The proposed method utilizes the variational Bayes technique to perform approximate inference. The resulting smoother is computationally efficient, easy to implement, and can be applied to high dimensional linear systems. The performance of the algorithm is illustrated on a target tracking example.

The particle Gibbs sampler is a systematic way of using a particle filter within Markov chain Monte Carlo. This results in an off-the-shelf Markov kernel on the space of state trajectories, which can be used to simulate from the full joint smoothing distribution for a state space model in a Markov chain Monte Carlo scheme. We show that the particle Gibbs Markov kernel is uniformly ergodic under rather general assumptions, which we will carefully review and discuss. In particular, we provide an explicit rate of convergence, which reveals that (i) for fixed number of data points, the convergence rate can be made arbitrarily good by increasing the number of particles and (ii) under general mixing assumptions, the convergence rate can be kept constant by increasing the number of particles superlinearly with the number of observations. We illustrate the applicability of our result by studying in detail a common stochastic volatility model with a non-compact state space.
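A bootstrap particle filter for the standard stochastic volatility model, the building block reused inside particle Gibbs and particle MCMC generally, can be sketched as follows (parameter values are illustrative):

```python
import numpy as np

def bootstrap_pf(y, N=500, phi=0.98, sigma=0.16, beta=0.70, seed=1):
    """Bootstrap particle filter for the stochastic volatility model
        x_t = phi * x_{t-1} + sigma * v_t,   y_t = beta * exp(x_t / 2) * e_t,
    returning the log-likelihood estimate used inside particle MCMC."""
    rng = np.random.default_rng(seed)
    x = sigma / np.sqrt(1 - phi**2) * rng.standard_normal(N)   # stationary init
    loglik = 0.0
    for yt in y:
        x = phi * x + sigma * rng.standard_normal(N)           # propagate
        s = beta * np.exp(x / 2)                               # obs std dev
        logw = -0.5 * np.log(2 * np.pi * s**2) - 0.5 * (yt / s) ** 2
        m = logw.max()                                         # log-sum-exp trick
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                         # likelihood update
        x = rng.choice(x, size=N, p=w / w.sum())               # multinomial resample
    return loglik
```

In particle Gibbs, a conditional variant of this filter (retaining one reference trajectory) defines the Markov kernel whose ergodicity the paper analyzes.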

Rod photoreceptors consist of an outer segment (OS) and an inner segment. Inside the OS a biochemical machinery transforms the rhodopsin photoisomerization into an electrical signal. This machinery has been treated as, and is thought to be, homogeneous with only marginal inhomogeneities. To verify this assumption, we developed a methodology based on special tapered optical fibers (TOFs) to deliver highly localized light stimulations. By using these TOFs, specific regions of the rod OS could be stimulated with spots of light highly confined in space. As the TOF is moved from the OS base toward its tip, the amplitude of saturating and single photon responses decreases, demonstrating that the efficacy of the transduction machinery is not uniform and is 5–10 times higher at the base than at the tip. This gradient of efficacy of the transduction machinery is attributed to a progressive depletion of the phosphodiesterase along the rod OS. Moreover, we demonstrate that, using restricted spots of light, the duration of the photoresponse along the OS does not increase linearly with the light intensity as with diffuse light.

Keywords

Natural Sciences

BIBTEX

@article{diva2:849577,
author = {Mazzolini, Monica and Facchetti, Giuseppe and Andolfi, Laura and Proietti Zaccaria, Remo and Tuccio, Salvatore and Treu, Johannes and Altafini, Claudio and Di Fabrizio, Enzo and Torre, Vincent},
title = {{The phototransduction machinery in the rod outer segment has a strong efficacy gradient}},
journal = {Proceedings of the National Academy of Sciences of the United States of America},
year = {2015},
volume = {112},
number = {20},
pages = {E2715--E272},
}

Micro-Doppler radar signatures have great potential for classifying pedestrians and animals, as well as their motion pattern, in a variety of surveillance applications. Due to the many degrees of freedom involved, real data need to be complemented with accurate simulated radar data to be able to successfully design and test radar signal processing algorithms. In many cases, the ability to collect real data is limited by monetary and practical considerations, whereas in a simulated environment, any desired scenario may be generated. Motion capture (MOCAP) has been used in several works to simulate the human micro-Doppler signature measured by radar; however, validation of the approach has only been done based on visual comparisons of micro-Doppler signatures. This work validates and, more importantly, extends the exploitation of MOCAP data not just to simulate micro-Doppler signatures but also to use the simulated signatures as a source of a priori knowledge to improve the classification performance of real radar data, particularly in the case when the total amount of data is small.

In this paper, we propose using Gaussian processes to track an extended object or group of objects that generates multiple measurements at each scan. The shape and the kinematics of the object are simultaneously estimated, and the shape is learned online via a Gaussian process. The proposed algorithm is capable of tracking different objects with different shapes within the same surveillance region. The shape of the object is expressed analytically, with well-defined confidence intervals, which can be used for gating and association. Furthermore, we use an efficient recursive implementation of the algorithm by deriving a state space model in which the Gaussian process regression problem is cast into a state estimation problem.
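The underlying Gaussian process regression machinery (here in its plain batch form, not the recursive state-space formulation the paper derives) can be sketched as:

```python
import numpy as np

def gp_regression(X, y, Xs, ell=1.0, sf=1.0, sn=0.1):
    """Vanilla 1-D Gaussian process regression with a squared-exponential
    kernel; returns the posterior mean and marginal variance at test points Xs.
    Hyperparameters ell (length scale), sf (signal std), sn (noise std) are
    illustrative defaults."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return sf**2 * np.exp(-0.5 * (d / ell) ** 2)
    K = k(X, X) + sn**2 * np.eye(len(X))       # noisy Gram matrix
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Xs, X)
    mean = Ks @ alpha                          # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = sf**2 - np.sum(v**2, axis=0)         # posterior marginal variance
    return mean, var
```

The paper's contribution is, in effect, to recast this regression over the object contour as a state estimation problem so it can be run recursively.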

Synthetic aperture radar (SAR) equipment is a radar imaging system that can be used to create high-resolution images of a scene by utilizing the movement of a flying platform. Knowledge of the platform's trajectory is essential to obtain good, focused images. An emerging application field is real-time SAR imaging using small and cheap platforms, where estimation errors in navigation systems imply unfocused images. This contribution investigates joint estimation of the trajectory and the SAR image. Starting with a nominal trajectory, we successively improve the image by optimizing a focus measure and updating the trajectory accordingly. The method is illustrated in simulations using typical navigation performance of an unmanned aerial vehicle. One real data set is used to show feasibility, where the result indicates that, in particular, the azimuth position error is decreased as the image focus is iteratively improved.

Filtering and smoothing algorithms for linear discrete-time state-space models with skewed and heavy-tailed measurement noise are presented. The algorithms use a variational Bayes approximation of the posterior distribution of models that have normal prior and skew-t-distributed measurement noise. The proposed filter and smoother are compared with conventional low-complexity alternatives in a simulated pseudorange positioning scenario. In the simulations the proposed methods achieve better accuracy than the alternative methods, the computational complexity of the filter being roughly 5 to 10 times that of the Kalman filter.

An established method for grey-box identification is to use maximum-likelihood estimation for the nonlinear case implemented via extended Kalman filtering. In applications of (nonlinear) model predictive control a more and more common approach for the state estimation is to use moving horizon estimation, which employs (nonlinear) optimization directly on a model for a whole batch of data. This paper shows that, in the linear case, horizon estimation may also be used for joint parameter estimation and state estimation, as long as a bias correction based on the Kalman filter is included. For the nonlinear case two special cases are presented where the bias correction can be determined without approximation. A procedure how to approximate the bias correction for general nonlinear systems is also outlined. (C) 2015 Elsevier Ltd. All rights reserved.

Today, the workflows that are involved in industrial assembly and production activities are becoming increasingly complex. To efficiently and safely perform these workflows is demanding on the workers, in particular when it comes to infrequent or repetitive tasks. This burden on the workers can be eased by introducing smart assistance systems. This article presents a scalable concept and an integrated system demonstrator designed for this purpose. The basic idea is to learn workflows from observing multiple expert operators and then transfer the learnt workflow models to novice users. Being entirely learning-based, the proposed system can be applied to various tasks and domains. The above idea has been realized in a prototype, which combines components pushing the state of the art of hardware and software designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user’s pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene; and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network and that no external instrumentation is needed. The feasibility of the chosen approach for the complete action-perception-feedback loop is demonstrated on three increasingly complex datasets representing manual industrial tasks. 
These limited size datasets indicate and highlight the potential of the chosen technology as a combined entity as well as point out limitations of the system.

In this paper, we present an approach to combine measurements from inertial sensors (accelerometers and gyroscopes) with time-of-arrival measurements from an ultrawideband (UWB) system for indoor positioning. Our algorithm uses a tightly coupled sensor fusion approach, where we formulate the problem as a maximum a posteriori (MAP) problem that is solved using an optimization approach. It is shown to lead to accurate 6-D position and orientation estimates when compared to reference data from an independent optical tracking system. To be able to obtain position information from the UWB measurements, it is imperative that accurate estimates of the UWB receivers' positions and their clock offsets are available. Hence, we also present an easy-to-use algorithm to calibrate the UWB system using a maximum-likelihood (ML) formulation. Throughout this work, the UWB measurements are modeled by a tailored heavy-tailed asymmetric distribution to account for measurement outliers. The heavy-tailed asymmetric distribution works well on experimental data, as shown by analyzing the position estimates obtained using the UWB measurements via a novel multilateration approach.
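A minimal multilateration sketch under a plain Gaussian error model (the paper instead uses a tailored heavy-tailed asymmetric distribution; the anchor layout and names below are made up) illustrates the Gauss-Newton machinery involved:

```python
import numpy as np

def multilaterate(anchors, ranges, x0, iters=20):
    """Gauss-Newton solver for the outlier-free multilateration problem:
    find x minimizing sum_i (||x - a_i|| - r_i)**2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        r = d - ranges                        # range residuals
        J = (x - anchors) / d[:, None]        # Jacobian of ||x - a_i|| w.r.t. x
        x = x - np.linalg.solve(J.T @ J, J.T @ r)   # Gauss-Newton step
    return x
```

Replacing the implicit Gaussian loss with a heavy-tailed asymmetric one, as in the paper, changes the residual weighting but not the overall structure of the iteration.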

We study cooperative sensor network localization in a realistic scenario where 1) the underlying measurement errors are likely to follow a non-Gaussian distribution; 2) the measurement error distribution is unknown without conducting massive offline calibrations; and 3) non-line-of-sight identification is not performed due to complexity constraints and/or storage limitations. The underlying measurement error distribution is approximated parametrically by a Gaussian mixture with a finite number of components, and the expectation-conditional maximization (ECM) criterion is adopted to approximate the maximum-likelihood estimator of the unknown sensor positions and an extra set of Gaussian mixture model parameters. The resulting centralized ECM algorithms lead to easier inference tasks and meanwhile retain several convergence properties, with a proof of the "space filling" condition. To meet the scalability requirement, we further develop two distributed ECM algorithms in which an average consensus algorithm plays an important role in updating the Gaussian mixture model parameters locally. The proposed algorithms are analyzed systematically in terms of computational complexity and communication overhead. Various computer-based tests are also conducted with both simulation and experimental data. The results show that the proposed distributed algorithms provide overall good performance for the assumed scenario even under model mismatch, while the existing competing algorithms either cannot work without prior knowledge of the measurement error statistics or merely provide degraded localization performance when the measurement error is clearly non-Gaussian.
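The Gaussian mixture fitting at the core of the method can be illustrated with plain EM on a 1-D error sample (the paper uses an ECM variant jointly with the unknown positions; this standalone sketch only shows the mixture part, with illustrative initialization):

```python
import numpy as np

def em_gmm_1d(e, K=2, iters=100):
    """Plain EM for a K-component 1-D Gaussian mixture fitted to error
    samples e, returning (weights, means, variances)."""
    w = np.full(K, 1.0 / K)
    mu = np.quantile(e, np.linspace(0.1, 0.9, K))   # spread-out initial means
    var = np.full(K, np.var(e))
    for _ in range(iters):
        # E-step: component responsibilities for every sample
        p = w * np.exp(-0.5 * (e[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        Nk = r.sum(axis=0)
        w = Nk / len(e)
        mu = (r * e[:, None]).sum(axis=0) / Nk
        var = (r * (e[:, None] - mu) ** 2).sum(axis=0) / Nk
    return w, mu, var
```

In the distributed setting of the paper, the sums appearing in the M-step are what the average consensus algorithm computes across nodes.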

We present an online method for joint state and parameter estimation in jump Markov non-linear systems (JMNLS). State inference is enabled via the use of particle filters, which makes the method applicable to a wide range of non-linear models. To exploit the inherent structure of JMNLS, we design a Rao-Blackwellized particle filter (RBPF) where the discrete mode is marginalized out analytically. This results in an efficient implementation of the algorithm and reduces the estimation error variance. The proposed RBPF is then used to compute, recursively in time, smoothed estimates of complete data sufficient statistics. Together with the online expectation maximization algorithm, this enables recursive identification of unknown model parameters including the transition probability matrix. The method is also applicable to online identification of jump Markov linear systems (JMLS). The performance of the method is illustrated in simulations and on a localization problem in wireless networks using real data.

For communities of agents which are not necessarily cooperating, distributed processes of opinion forming are naturally represented by signed graphs, with positive edges representing friendly and cooperative interactions and negative edges the corresponding antagonistic counterpart. Unlike for nonnegative graphs, the outcome of a dynamical system evolving on a signed graph is not obvious and it is in general difficult to characterize, even when the dynamics are linear. In this paper, we identify a significant class of signed graphs for which the linear dynamics are however predictable and show many analogies with positive dynamical systems. These cases correspond to adjacency matrices that are eventually positive, for which the Perron-Frobenius property still holds and implies the existence of an invariant cone contained inside the positive orthant. As examples of applications, we determine cases in which it is possible to anticipate or impose unanimity of opinion in decision/voting processes even in presence of stubborn agents, and show how it is possible to extend the PageRank algorithm to include negative links.

It is well-known that the motion of an acoustic source can be estimated from Doppler shift observations. It is however not obvious how to design a sensor network to efficiently deliver the localization service. In this work a rather simplistic motion model is proposed that is aimed at sensor networks with realistic numbers of sensor nodes. It is also described how to efficiently solve the associated least squares optimization problem by Gauss-Newton variable projection techniques, and how to initiate the numerical search from simple features extracted from the observed frequency series. The methods are evaluated by Monte Carlo simulations and demonstrated on real data by localizing an all-terrain vehicle. It is concluded that the processing components included are fairly mature for practical implementations in sensor networks.
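The Doppler shift measurement model underlying such localization can be sketched as follows (the tone frequency, geometry, and no-propagation-delay simplification are illustrative assumptions, not the paper's exact model):

```python
import numpy as np

def doppler_freq(t, f0, p0, v, sensor, c=343.0):
    """Observed frequency at a stationary sensor from a constant-velocity
    source at position p0 + v*t emitting a tone f0 (propagation delay
    ignored): f_obs = f0 * (1 - rdot / c), with rdot the range rate."""
    pos = p0 + np.outer(t, v)                 # source trajectory
    rel = sensor - pos                        # sensor-to-source offset
    d = np.linalg.norm(rel, axis=1)
    rdot = -(rel * v).sum(axis=1) / d         # range rate (> 0 when receding)
    return f0 * (1 - rdot / c)
```

Fitting the motion parameters (p0, v, f0) to an observed frequency series is exactly the nonlinear least squares problem the paper solves by Gauss-Newton with variable projection.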

In optimization algorithms used for on-line Model Predictive Control (MPC), linear systems of equations are often solved in each iteration. This is true both for Active Set methods and for Interior Point methods, and for linear MPC as well as for nonlinear MPC and hybrid MPC. The main computational effort is spent while solving these linear systems of equations, and hence, it is of greatest interest to solve them efficiently. Classically, the optimization problem has been formulated in either of two ways: one leading to a sparse linear system of equations involving relatively many variables to compute in each iteration, and another leading to a dense linear system of equations involving relatively few variables. In this work, it is shown that it is possible not only to consider these two distinct choices of formulations. Instead, it is shown that it is possible to create an entire family of formulations with different levels of sparsity and numbers of variables, and that this extra degree of freedom can be exploited to obtain even better performance with the software and hardware at hand. This result also provides a better answer to a recurring question in MPC: should the sparse or the dense formulation be used?

In this letter, we propose a general framework for greedy reduction of mixture densities of the exponential family. The performance of the generalized algorithms is illustrated both on an artificial example, where randomly generated mixture densities are reduced, and on a target tracking scenario, where the reduction is carried out in the recursion of a Gaussian inverse Wishart probability hypothesis density (PHD) filter.
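A greedy reduction in the Gaussian case can be sketched with moment-preserving pairwise merging (the pairwise cost below is a simple illustrative choice, not necessarily the one used in the letter):

```python
import numpy as np

def merge(w1, m1, P1, w2, m2, P2):
    """Moment-preserving merge of two weighted Gaussian components."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    P = (w1 * (P1 + np.outer(m1 - m, m1 - m))
         + w2 * (P2 + np.outer(m2 - m, m2 - m))) / w
    return w, m, P

def reduce_mixture(comps, target):
    """Greedy reduction: repeatedly merge the cheapest pair, scored here by
    the simple Mahalanobis-type cost w1*w2/(w1+w2) * d' P_merged^{-1} d."""
    comps = list(comps)
    while len(comps) > target:
        best = None
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                w1, m1, _ = comps[i]
                w2, m2, _ = comps[j]
                d = m1 - m2
                _, _, P = merge(*comps[i], *comps[j])
                cost = w1 * w2 / (w1 + w2) * (d @ np.linalg.solve(P, d))
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        merged = merge(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps
```

The merge preserves the mixture's overall mean and covariance, which is what "moment-preserving" means here; the generalization in the letter replaces these Gaussian-specific formulas with exponential-family counterparts.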

This paper addresses one of the main challenges in physical activity monitoring, as indicated by recent benchmark results: The difficulty of the complex classification problems exceeds the potential of existing classifiers. Therefore, this paper proposes the ConfAdaBoost.M1 algorithm. This algorithm is a variant of the AdaBoost.M1 that incorporates well-established ideas for confidence-based boosting. ConfAdaBoost.M1 is compared to the most commonly used boosting methods using benchmark datasets from the UCI machine learning repository. Moreover, it is evaluated on an activity recognition and an intensity estimation problem, including a large number of physical activities from the recently released PAMAP2 dataset. The presented results indicate that the proposed ConfAdaBoost.M1 algorithm significantly improves the classification performance on most of the evaluated datasets, especially for larger and more complex classification tasks. Finally, two empirical studies are designed and carried out to investigate the feasibility of ConfAdaBoost.M1 for physical activity monitoring applications in mobile systems.

In this paper, we consider convex feasibility problems (CFPs) where the underlying sets are loosely coupled, and we propose several algorithms to solve such problems in a distributed manner. These algorithms are obtained by applying proximal splitting methods to convex minimization reformulations of CFPs. We also put forth distributed convergence tests which enable us to establish feasibility or infeasibility of the problem distributedly, and we provide convergence rate results. Under the assumption that the problem is feasible and boundedly linearly regular, these convergence results are given in terms of the distance of the iterates to the feasible set, which are similar to those of classical projection methods. In case the feasibility problem is infeasible, we provide convergence rate results that concern the convergence of certain error bounds.
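A basic cyclic-projection relative of these CFP solvers, for halfspace constraint sets, can be sketched as follows (the paper's actual algorithms are distributed proximal splitting methods; this is only the simplest serial analogue):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Euclidean projection onto the halfspace {x : a'x <= b}."""
    v = a @ x - b
    return x if v <= 0 else x - v / (a @ a) * a

def alternating_projections(x, projections, iters=200):
    """Cyclic projections onto a list of convex sets: if the intersection is
    nonempty, the iterates approach a point in it."""
    for _ in range(iters):
        for proj in projections:
            x = proj(x)
    return x
```

Convergence-rate statements of the kind given in the paper are typically phrased in terms of the distance of such iterates to the feasible set.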

Particle Metropolis-Hastings (PMH) allows for Bayesian parameter inference in nonlinear state space models by combining MCMC and particle filtering. The latter is used to estimate the intractable likelihood. In its original formulation, PMH makes use of a marginal MCMC proposal for the parameters, typically a Gaussian random walk. However, this can lead to a poor exploration of the parameter space and an inefficient use of the generated particles.

We propose two alternative versions of PMH that incorporate gradient and Hessian information about the posterior into the proposal. This information is largely obtained as a byproduct of the likelihood estimation. Indeed, we show how to estimate the required information using a fixed-lag particle smoother, with a computational cost growing linearly in the number of particles. We conclude that the proposed methods can: (i) decrease the length of the burn-in phase, (ii) increase the mixing of the Markov chain at the stationary phase, and (iii) make the proposal distribution scale invariant, which simplifies tuning.

Prediction and filtering of continuous-time stochastic processes often require a solver of a continuous-time differential Lyapunov equation (CDLE), for example the time update in the Kalman filter. Even though this can be recast into an ordinary differential equation (ODE), where standard solvers can be applied, the dominating approach in Kalman filter applications is to discretize the system and then apply the discrete-time difference Lyapunov equation (DDLE). To avoid problems with stability and poor accuracy, oversampling is often used. This contribution analyzes oversampling strategies, and proposes a novel low-complexity analytical solution that does not involve oversampling. The results are illustrated on Kalman filtering problems in both linear and nonlinear systems.
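For reference, the stationary form of the Lyapunov equation, A P + P A' + Q = 0, can be solved directly by vectorization with Kronecker products, a simple (if cubic-in-n^2) baseline against which low-complexity solutions can be checked:

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve the continuous-time Lyapunov equation A P + P A' + Q = 0 by
    vectorization: (I (x) A + A (x) I) vec(P) = -vec(Q)."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A) + np.kron(A, I)
    return np.linalg.solve(M, -Q.ravel()).reshape(n, n)
```

For symmetric Q the symmetry of M's action makes the row-major ravel equivalent to the usual column-stacking vec, so the residual A P + P A' + Q vanishes either way.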

A method for fusing Synthetic Aperture Radar (SAR) images with optical aerial images is presented. This is done in a navigation framework, where the absolute position and orientation of the flying platform, as computed from the inertial navigation system, is corrected based on the aerial image coordinates taken as ground truth. The method is suitable for new low-price SAR systems for small unmanned vehicles. The primary application is remote sensing, where the SAR image provides one further "colour" channel revealing reflectivity to radio waves. The method is based on first applying an edge detection algorithm to the images and then optimising the most important navigation states by matching the two binary images. To get a measure of the estimation uncertainty, we embed the optimisation in a least squares framework, where an explicit method to estimate the (relative) size of the errors is presented. The performance is demonstrated on real SAR and aerial images, leading to an error of only a few pixels.

BACKGROUND: Procrastination is a prevalent self-regulatory failure associated with stress and anxiety, decreased well-being, and poorer performance in school as well as work. One-fifth of the adult population and half of the student population describe themselves as chronic and severe procrastinators. However, despite the fact that it can become a debilitating condition, valid and reliable self-report measures for assessing the occurrence and severity of procrastination are lacking, particularly for use in a clinical context. The current study explored the usefulness of the Swedish version of three Internet-administered self-report measures for evaluating procrastination: the Pure Procrastination Scale, the Irrational Procrastination Scale, and the Susceptibility to Temptation Scale, all having good psychometric properties in English.

METHODS: In total, 710 participants were recruited for a clinical trial of Internet-based cognitive behavior therapy for procrastination. All of the participants completed the scales as well as self-report measures of depression, anxiety, and quality of life. Principal Component Analysis was performed to assess the factor validity of the scales, and internal consistency and correlations between the scales were also determined. Intraclass Correlation Coefficient, Minimal Detectable Change, and Standard Error of Measurement were calculated for the Irrational Procrastination Scale.

RESULTS: The Swedish versions of the scales have a factor structure similar to that of the English versions, generated good internal consistencies, with Cronbach's α ranging from .76 to .87, and were moderately to highly intercorrelated. The Irrational Procrastination Scale had an Intraclass Correlation Coefficient of .83, indicating excellent reliability. Furthermore, the Standard Error of Measurement was 1.61, and the Minimal Detectable Change was 4.47, suggesting that a change of almost five points on the scale is necessary to determine a reliable change in self-reported procrastination severity.
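The reported quantities follow from the standard psychometric formulas SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM; a minimal check (the small discrepancy to the reported 4.47 presumably comes from the unrounded SEM):

```python
import math

def sem(sd, icc):
    """Standard Error of Measurement from the sample SD and the ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimal Detectable Change at the 95% confidence level."""
    return 1.96 * math.sqrt(2.0) * sem_value

# plugging in the reported SEM of 1.61 essentially reproduces the
# reported MDC of 4.47 (here ~4.46 due to rounding of the SEM):
change = mdc95(1.61)
```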

CONCLUSIONS: The current study revealed that the Pure Procrastination Scale, the Irrational Procrastination Scale, and the Susceptibility to Temptation Scale are both valid and reliable from a psychometric perspective, and that they might be used for assessing the occurrence and severity of procrastination via the Internet.

TRIAL REGISTRATION: The current study is part of a clinical trial assessing the efficacy of Internet-based cognitive behavior therapy for procrastination, and was registered 04/22/2013 on ClinicalTrials.gov (NCT01842945).

BIBTEX

@article{diva2:805833,
author = {Rozental, Alexander and Forsell, Erik and Svensson, Andreas and Forsström, David and Andersson, Gerhard and Carlbring, Per},
title = {{Psychometric evaluation of the Swedish version of the pure procrastination scale, the irrational procrastination scale, and the susceptibility to temptation scale in a clinical population.}},
journal = {BMC Psychology},
year = {2014},
volume = {2},
number = {1},
pages = {54--},
}

Estimation-based iterative learning control (ILC) is applied to a parallel kinematic manipulator known as the Gantry-Tau parallel robot. The system represents a control problem where measurements of the controlled variables are not available. The main idea is to use estimates of the controlled variables in the ILC algorithm, and in the paper this approach is evaluated experimentally on the Gantry-Tau robot. The experimental results show that an ILC algorithm using estimates of the tool position gives a considerable improvement of the control performance. The tool position estimate is obtained by fusing measurements of the actuator angular positions with measurements of the tool path acceleration using a complementary filter.
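A first-order complementary filter of the kind described, trusting one position signal at low frequencies and twice-integrated acceleration at high frequencies, can be sketched as follows; the crossover frequency and the signals are hypothetical, not those of the Gantry-Tau experiments:

```python
import numpy as np

def complementary_fuse(p_low, acc, dt, fc):
    """First-order complementary filter: integrate the accelerometer twice
    for the high-frequency path and pull the estimate slowly towards the
    low-frequency position signal p_low. fc is the crossover frequency [Hz]."""
    a = 1.0 / (1.0 + 2.0 * np.pi * fc * dt)        # blending coefficient
    p_hat, v_hat, out = p_low[0], 0.0, []
    for k in range(len(p_low)):
        v_hat += acc[k] * dt                       # high-frequency path
        p_hat += v_hat * dt
        p_hat = a * p_hat + (1.0 - a) * p_low[k]   # low-frequency correction
        out.append(p_hat)
    return np.array(out)

# sanity check: constant position and zero acceleration stay constant
est = complementary_fuse(np.ones(100), np.zeros(100), dt=0.01, fc=1.0)
```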

Model estimation and structure detection with short data records are two issues that receive increasing interest in System Identification. In this paper, a multiple kernel-based regularization method is proposed to handle those issues. Multiple kernels are conic combinations of fixed kernels suitable for impulse response estimation, and equip the kernel-based regularization method with three features. First, multiple kernels can better capture complicated dynamics than single kernels. Second, the estimation of their weights by maximizing the marginal likelihood favors sparse optimal weights, which enables this method to tackle various structure detection problems, e.g., sparse dynamic network identification and segmentation of linear systems. Third, the marginal likelihood maximization problem is a difference of convex programming problem. It is thus possible to find a locally optimal solution efficiently by using a majorization minimization algorithm and an interior point method where the cost of a single interior-point iteration grows linearly in the number of fixed kernels. Monte Carlo simulations show that the locally optimal solutions lead to good performance for randomly generated starting points.

Particle Markov chain Monte Carlo (PMCMC) is a systematic way of combining the two main tools used for Monte Carlo statistical inference: sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). We present a new PMCMC algorithm that we refer to as particle Gibbs with ancestor sampling (PGAS). PGAS provides the data analyst with an off-the-shelf class of Markov kernels that can be used to simulate, for instance, the typically high-dimensional and highly autocorrelated state trajectory in a state-space model. The ancestor sampling procedure enables fast mixing of the PGAS kernel even when using seemingly few particles in the underlying SMC sampler. This is important as it can significantly reduce the computational burden that is typically associated with using SMC. PGAS is conceptually similar to the existing PG with backward simulation (PGBS) procedure. Instead of using separate forward and backward sweeps as in PGBS, however, we achieve the same effect in a single forward sweep. This makes PGAS well suited for addressing inference problems not only in state-space models, but also in models with more complex dependencies, such as non-Markovian, Bayesian nonparametric, and general probabilistic graphical models.

This paper presents a new prediction update for extended targets whose extensions are modeled as random matrices. The prediction is based on several minimizations of the Kullback-Leibler divergence (KL-div) and allows for a kinematic state dependent transformation of the target extension. The results show that the extension prediction is a significant improvement over the previous work carried out on the topic.

Among the many different formulations of Model Predictive Control (MPC) with guaranteed stability, one that has attracted significant attention is the formulation with a terminal cost and terminal constraint set, the so-called dual-mode formulation. In this technical note our goal is to make minimal changes to the dual-mode framework, for the linear polytopic case, in order to develop a flexible reference tracking algorithm with guaranteed stability and low complexity, which is intuitive and easily understood. The main idea is to introduce a scaling variable that dynamically scales the terminal constraint set and therefore allows it to be centered around an arbitrary setpoint without violating the stability conditions. The main benefit of the algorithm is the reduced complexity of the resulting QP compared to other state-of-the-art methods, without losing performance.

In this paper we describe an approach to maximum likelihood estimation of linear single input single output (SISO) models when both input and output data are missing. The criterion minimised in the algorithms is the Euclidean norm of the prediction error vector scaled by a particular function of the covariance matrix of the observed output data. We also provide insight into when simpler and in general sub-optimal schemes are indeed optimal. The algorithm has been prototyped in MATLAB, and we report numerical results that support the theory.

The electroencephalogram (EEG) signal is very important in the diagnosis of epilepsy. Long-term EEG recordings of an epileptic patient contain a huge amount of EEG data. The detection of epileptic activity is, therefore, a very demanding process that requires a detailed analysis of the entire length of the EEG data, usually performed by an expert. This paper describes an automated classification of EEG signals for the detection of epileptic seizures using wavelet transform and statistical pattern recognition. The decision making process is comprised of three main stages: (a) feature extraction based on wavelet transform, (b) feature space dimension reduction using scatter matrices and (c) classification by quadratic classifiers. The proposed methodology was applied on EEG data sets that belong to three subject groups: (a) healthy subjects, (b) epileptic subjects during a seizure-free interval and (c) epileptic subjects during a seizure. An overall classification accuracy of 99% was achieved. The results confirmed that the proposed algorithm has a potential in the classification of EEG signals and detection of epileptic seizures, and could thus further improve the diagnosis of epilepsy.

In this paper, we consider robust stability analysis of large-scale sparsely interconnected uncertain systems. By modeling the interconnections among the subsystems with integral quadratic constraints, we show that robust stability analysis of such systems can be performed by solving a set of sparse linear matrix inequalities. We also show that a sparse formulation of the analysis problem is equivalent to the classical formulation of the robustness analysis problem and hence does not introduce any additional conservativeness. The sparse formulation of the analysis problem allows us to apply methods that rely on efficient sparse factorization techniques, and our numerical results illustrate the effectiveness of this approach compared to methods that are based on the standard formulation of the analysis problem.

This paper presents a data-driven approach to diagnostics of systems that operate in a repetitive manner. Considering that data batches collected from a repetitive operation will be similar unless in the presence of an abnormality, a condition change is inferred by comparing the monitored data against an available nominal batch. The method proposed considers the comparison of data in the distribution domain, which reveals information of the data amplitude. This is achieved with the use of kernel density estimates and the Kullback–Leibler distance. To decrease sensitivity to disturbances while increasing sensitivity to faults, the use of a weighting vector is suggested which is chosen based on a labeled dataset. The framework is simple to implement and can be used without process interruption, in a batch manner. The approach is demonstrated with successful experimental and simulation applications to wear diagnostics in an industrial robot gearbox and for diagnostics of gear faults in a rotating machine.
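The core comparison step, kernel density estimates compared through a Kullback-Leibler distance, can be sketched as below; the batch sizes, grid, and fault magnitude are illustrative assumptions, and the weighting vector is omitted:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_kl(batch, nominal, grid):
    """KL distance between kernel density estimates of a monitored batch
    and a nominal batch, integrated numerically on a fixed grid."""
    dx = grid[1] - grid[0]
    p = gaussian_kde(batch)(grid);   p /= p.sum() * dx
    q = gaussian_kde(nominal)(grid); q /= q.sum() * dx
    eps = 1e-12                      # guard against log(0)
    return np.sum(p * np.log((p + eps) / (q + eps))) * dx

rng = np.random.default_rng(0)
nominal = rng.normal(0.0, 1.0, 500)   # reference batch, healthy condition
healthy = rng.normal(0.0, 1.0, 500)   # new batch, same condition
faulty  = rng.normal(0.8, 1.3, 500)   # amplitude change, as after a fault
grid = np.linspace(-6.0, 6.0, 600)

d_ok = kde_kl(healthy, nominal, grid)
d_fault = kde_kl(faulty, nominal, grid)
```

A faulty batch yields a clearly larger distance to the nominal batch than a healthy one, which is the condition-change indicator the abstract describes.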

The effects of wear on friction are studied based on constant-speed friction data collected from dedicated experiments during accelerated wear tests. It is shown how the effects of temperature and load uncertainties produce larger changes in friction than those caused by wear, motivating the consideration of these effects. Based on empirical observations, an extended friction model is proposed to describe the effects of speed, load, temperature, and wear. Assuming the availability of such a model and constant-speed friction data, a maximum likelihood wear estimator is proposed. The performance of the wear estimator under load and temperature uncertainties is found by means of simulations and verified under three case studies based on real data. Practical issues related to experiment length are considered based on an optimal selection of speed points to collect friction data, improving the achievable performance bound for any unbiased wear estimator. As is shown, reliable wear estimates can be achieved even under load and temperature uncertainties, making condition-based maintenance of industrial robots possible.

We propose a new method for generating semidefinite relaxations of optimal power flow problems. The method is based on chordal conversion techniques: by dropping some equality constraints in the conversion, we obtain semidefinite relaxations that are computationally cheaper, but potentially weaker, than the standard semidefinite relaxation. Our numerical results show that the new relaxations often produce the same results as the standard semidefinite relaxation, but at a lower computational cost.

Anomaly detection in large populations is a challenging but highly relevant problem. It is essentially a multi-hypothesis problem, with a hypothesis for every division of the systems into normal and anomalous systems. The number of hypotheses grows rapidly with the number of systems, and approximate solutions become a necessity for any problem of practical interest. In this paper we take an optimization approach to this multi-hypothesis problem. It is first shown to be equivalent to a non-convex combinatorial optimization problem and then relaxed to a convex optimization problem that can be solved distributively on the systems and that stays computationally tractable as the number of systems increases. An interesting property of the proposed method is that it can, under certain conditions, be shown to give exactly the same result as the combinatorial multi-hypothesis problem; the relaxation is hence tight.

A method to identify linear parameter varying models through minimisation of an -norm objective is presented. The method uses a direct nonlinear programming approach to a non-convex problem. The reason to use -norm is twofold. To begin with, it is a well-known and widely used system norm, and second, the cost functions described in this paper become differentiable when using the -norm. This enables us to have a measure of first-order optimality and to use standard quasi-Newton solvers to solve the problem. The specific structure of the problem is utilised in great detail to compute cost functions and gradients efficiently. Additionally, a regularised version of the method, which also has a nice computational structure, is presented. The regularised version is shown to have an interesting interpretation with connections to worst-case approaches.

In this paper, we address the problem of multi-target detection and tracking over a network of separately located Doppler-shift measuring sensors. For this challenging problem, we propose to use the probability hypothesis density (PHD) filter and present two implementations of the PHD filter, namely the sequential Monte Carlo PHD (SMC-PHD) and the Gaussian mixture PHD (GM-PHD) filters. The performance of both filters is carefully studied and compared for the considered challenging tracking problem. Simulation results show that both PHD filter implementations successfully track multiple targets using only Doppler shift measurements. Moreover, as a proof-of-concept, an experimental setup consisting of a network of microphones and a loudspeaker was prepared. Experimental results reveal that it is possible to track multiple ground targets using acoustic Doppler shift measurements in a passive multi-static scenario. We observed that the GM-PHD filter is more effective and efficient, and easier to implement, than the SMC-PHD filter.

In this paper, the measurements of individual wheel speeds and the absolute position from a global positioning system are used for high-precision estimation of vehicle tire radii. The deviations of the radii from their nominal values are modeled as Gaussian random variables and included as noise components in a simple vehicle motion model. The novelty lies in a Bayesian approach to estimate online both the state vector and the parameters representing the process noise statistics using a marginalized particle filter (MPF). Field tests show that the absolute radius can be estimated with submillimeter accuracy. The approach is tested in accordance with regulation 64 of the United Nations Economic Commission for Europe on a large data set (22 tests, using two vehicles and 12 different tire sets), where tire deflations are successfully detected with high robustness, i.e., no false alarms. The proposed MPF approach outperforms common Kalman-filter-based methods used for joint state and parameter estimation in terms of accuracy and robustness.

Most of the currently used techniques for linear system identification are based on classical estimation paradigms coming from mathematical statistics. In particular, maximum likelihood and prediction error methods represent the mainstream approaches to identification of linear dynamic systems, with a long history of theoretical and algorithmic contributions. Parallel to this, in the machine learning community alternative techniques have been developed. Until recently, there has been little contact between these two worlds. The first aim of this survey is to make accessible to the control community the key mathematical tools and concepts as well as the computational aspects underpinning these learning techniques. In particular, we focus on kernel-based regularization and its connections with reproducing kernel Hilbert spaces and Bayesian estimation of Gaussian processes. The second aim is to demonstrate that learning techniques tailored to the specific features of dynamic systems may outperform conventional parametric approaches for identification of stable linear systems.
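A minimal sketch of the kernel-based regularization idea for impulse response estimation, using the first-order stable-spline ("TC") kernel K(i,j) = c·λ^max(i,j); the plant, noise level, and hyperparameters below are illustrative assumptions (in practice the hyperparameters are tuned, e.g., by marginal likelihood):

```python
import numpy as np

def tc_kernel(n, lam=0.7, c=1.0):
    """First-order stable-spline ('TC') kernel: K[i,j] = c * lam**max(i,j)."""
    idx = np.arange(n)
    return c * lam ** np.maximum.outer(idx, idx)

def regularized_fir(u, y, n, sigma2, K):
    """Kernel-regularized FIR estimate (ridge regression with prior cov K):
    ghat = (Phi^T Phi + sigma2 * K^-1)^-1 Phi^T y."""
    N = len(y)
    Phi = np.zeros((N, n))
    for i in range(n):
        Phi[i:, i] = u[:N - i]          # regressors of past inputs
    return np.linalg.solve(Phi.T @ Phi + sigma2 * np.linalg.inv(K), Phi.T @ y)

rng = np.random.default_rng(1)
g_true = 0.5 * 0.7 ** np.arange(30)     # a stable true impulse response
u = rng.standard_normal(200)
y = np.convolve(u, g_true)[:200] + 0.05 * rng.standard_normal(200)
ghat = regularized_fir(u, y, 30, 0.05 ** 2, tc_kernel(30))
```

The kernel encodes exactly the smoothness and exponential stability priors that, per the survey, let these estimators outperform conventional parametric approaches on stable linear systems.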

A marginal version of the enumeration Bayesian Cramer-Rao Bound (EBCRB) for jump Markov systems is proposed. It is shown that the proposed bound is at least as tight as the EBCRB, and the improvement stems from better handling of the nonlinearities. The new bound is shown to yield tighter results than the BCRB and EBCRB on a benchmark example.

Random set based methods have provided a rigorous Bayesian framework and have been used extensively in the last decade for point object estimation. In this paper, we emphasize that the same methodology offers an equally powerful approach to estimation of so called extended objects, i.e., objects that result in multiple detections on the sensor side. Building upon the analogy between Bayesian state estimation of a single object and random finite set estimation for multiple objects, we give a tutorial on random set methods with an emphasis on multiple extended object estimation. The capabilities are illustrated on a simple yet insightful real life example with laser range data containing several occlusions.

In this article we present a parametric branch and bound algorithm for computation of optimal and suboptimal solutions to parametric mixed-integer quadratic programs and parametric mixed-integer linear programs. The algorithm returns an optimal or suboptimal parametric solution with the level of suboptimality requested by the user. An interesting application of the proposed parametric branch and bound procedure is suboptimal explicit MPC for hybrid systems, where the introduced user-defined suboptimality tolerance reduces the storage requirements and the online computational effort, or even enables the computation of a suboptimal MPC controller in cases where the computation of the optimal MPC controller would be intractable. Moreover, stability of the system in closed loop with the suboptimal controller can be guaranteed a priori.

We consider robust geolocation in mixed line-of-sight (LOS)/non-LOS (NLOS) environments in cellular radio networks. Instead of assuming known propagation channel states (LOS or NLOS), we model the measurement error with a general two-mode mixture distribution, even though it may deviate from the underlying error statistics. To avoid offline calibration, we propose to jointly estimate the geographical coordinates and the mixture model parameters. Two iterative algorithms are developed based on the well-known expectation-maximization (EM) criterion and the joint maximum a posteriori-maximum likelihood (JMAP-ML) criterion to approximate the ideal maximum-likelihood estimator (MLE) of the unknown parameters with low computational complexity. Along with concrete examples, we elaborate on the convergence and complexity analyses of the proposed algorithms. Moreover, we numerically compute the Cramer-Rao lower bound (CRLB) for our joint estimation problem and present the best achievable localization accuracy in terms of the CRLB. Various simulations have been conducted based on a real-world experimental setup, and the results have shown that the ideal MLE can be well approximated by the JMAP-ML algorithm. The EM estimator is inferior to the JMAP-ML estimator but outperforms other competitors by far.
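The EM part of the approach can be illustrated on a one-dimensional two-mode Gaussian mixture of ranging errors (a LOS mode plus a biased, wider NLOS mode); the data and initialization below are hypothetical, and the joint estimation of geographical coordinates is omitted from this sketch:

```python
import numpy as np

def em_two_mode(r, iters=100):
    """EM for a two-component Gaussian mixture of ranging errors:
    one LOS mode and one (biased, wider) NLOS mode, in one dimension."""
    w = np.array([0.5, 0.5])
    mu = np.array([np.percentile(r, 25), np.percentile(r, 75)])
    var = np.array([r.var(), r.var()])
    for _ in range(iters):
        # E-step: posterior mode probabilities (responsibilities)
        pdf = np.exp(-0.5 * (r[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        g = w * pdf
        g /= g.sum(axis=1, keepdims=True)
        # M-step: reweighted moment matching
        nk = g.sum(axis=0)
        w = nk / len(r)
        mu = (g * r[:, None]).sum(axis=0) / nk
        var = (g * (r[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(2)
r = np.concatenate([rng.normal(0.0, 1.0, 700),    # LOS errors
                    rng.normal(5.0, 3.0, 300)])   # NLOS errors (bias + spread)
w, mu, var = em_two_mode(r)
```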

The norm-optimal iterative learning control (ilc) algorithm for linear systems is extended to an estimation-based norm-optimal ilc algorithm where the controlled variables are not directly available as measurements. A separation lemma is presented, stating that if a stationary Kalman filter is used for linear time-invariant systems then the ilc design is independent of the dynamics in the Kalman filter. Furthermore, the objective function in the optimisation problem is modified to incorporate the full probability density function of the error. Utilising the Kullback–Leibler divergence leads to an automatic and intuitive way of tuning the ilc algorithm. Finally, the concept is extended to non-linear state space models using linearisation techniques, where it is assumed that the full state vector is estimated and used in the ilc algorithm. Stability and convergence properties for the proposed scheme are also derived.
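The baseline norm-optimal ILC update that the abstract extends minimizes a weighted sum of tracking error and input change over the lifted finite-time system; a sketch with scalar weights and a hypothetical first-order plant (the estimation-based and Kullback-Leibler extensions are not included):

```python
import numpy as np

def norm_optimal_ilc(G, r, rho=0.1, iters=20):
    """Norm-optimal ILC over a lifted finite-time system G: each iteration
    solves min_u ||r - G u||^2 + rho * ||u - u_k||^2 in closed form."""
    N = len(r)
    u = np.zeros(N)
    M = np.linalg.inv(G.T @ G + rho * np.eye(N))
    err = []
    for _ in range(iters):
        err.append(np.linalg.norm(r - G @ u))
        u = M @ (G.T @ r + rho * u)     # closed-form minimizer
    return u, err

# lifted (lower-triangular Toeplitz) system matrix of a hypothetical plant
N = 50
h = 0.8 ** np.arange(N)                 # impulse response
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])
r = np.sin(2 * np.pi * np.arange(N) / N)   # reference trajectory
u, err = norm_optimal_ilc(G, r)
```

The tracking error contracts from one iteration to the next, which is the convergence property the estimation-based variant must preserve when measured outputs are replaced by Kalman-filter estimates.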

An important class of optimisation problems in control and signal processing involves the constraint that a Popov function is non-negative on the unit circle or the imaginary axis. Such a constraint is convex in the coefficients of the Popov function. It can be converted to a finite-dimensional linear matrix inequality via the Kalman-Yakubovich-Popov lemma. However, the linear matrix inequality reformulation requires an auxiliary matrix variable and often results in a very large semidefinite programming problem. Several recently published methods exploit problem structure in these semidefinite programmes to alleviate the computational cost associated with the large matrix variable. These algorithms are capable of solving much larger problems than general-purpose semidefinite programming packages. In this paper, we address the same problem by presenting an alternative to the linear matrix inequality formulation of the non-negative Popov function constraint. We sample the constraint to obtain an equivalent set of inequalities of low dimension, thus avoiding the large matrix variable in the linear matrix inequality formulation. Moreover, the resulting semidefinite programme has constraints with low-rank structure, which allows the problems to be solved efficiently by existing semidefinite programming packages. The sampling formulation is obtained by first expressing the Popov function inequality as a sum-of-squares condition imposed on a polynomial matrix and then converting the constraint into an equivalent finite set of interpolation constraints. A complexity analysis and numerical examples are provided to demonstrate the performance improvement over existing techniques.

The dependence of radio signal propagation on the environment is well known, and both statistical and deterministic methods have been presented in the literature. Such methods are based on either randomised or actual reflectors of radio signals. In this work, we instead aim at estimating the location of the reflectors based on geo-localised radio channel impulse response measurements, using methods from synthetic aperture radar (SAR). Radio channel data measurements from 3GPP E-UTRAN have been used to verify the usefulness of the proposed approach. The obtained images show that the estimated reflectors are well correlated with the aerial map of the environment. Which parts of the trajectory contributed to the different reflectors has also been estimated, with promising results.

We present an approach for computing the driving direction of a vehicle by processing measurements from one 2-axis magnetometer. The proposed method relies on a non-linear transformation of the measurement data comprising only two inner products. Deterministic analysis of the signal model reveals how the driving direction affects the measurement signal and the proposed classifier is analyzed in terms of its statistical properties. The method is compared with a model based likelihood test using both simulated and experimental data. The experimental verification indicates that good performance is achieved under the presence of saturation, measurement noise, and near field effects.

With the electromagnetic theory as basis, we present a sensor model for three-axis magnetometers suitable for localization and tracking as required in intelligent transportation systems and security applications. The model depends on a physical magnetic dipole model of the target and its relative position to the sensor. Both point target and extended target models are provided as well as a target orientation dependent model. The suitability of magnetometers for tracking is analyzed in terms of local observability and the Cramér Rao lower bound as a function of the sensor positions in a two sensor scenario. The models are validated with real field test data taken from various road vehicles which indicate excellent localization as well as identification of the magnetic target model suitable for target classification. These sensor models can be combined with a standard motion model and a standard nonlinear filter to track metallic objects in a magnetometer network.
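The point-target part of such a model builds on the magnetic dipole field B(r) = μ0/(4π)·(3(m·r̂)r̂ − m)/|r|³; a minimal sketch (the dipole moment and geometry are illustrative, and the extended-target and orientation-dependent models are omitted):

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability [T m/A]

def dipole_field(m, r):
    """Flux density at offset r [m] from a point dipole with moment m [A m^2]:
    B = mu0/(4 pi) * (3 (m . rhat) rhat - m) / |r|^3."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4.0 * np.pi) * (3.0 * np.dot(m, rhat) * rhat - m) / rn ** 3

# on-axis check: the field 1 m along the moment axis is 2e-7 T per unit moment
B = dipole_field(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
```

Evaluating this field at each magnetometer position gives the measurement function that a nonlinear filter would use to track the target, as outlined in the abstract.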

This paper considers the problem of dynamic modeling and identification of robot manipulators with respect to their elasticities. The so-called flexible joint model, modeling only the torsional gearbox elasticity, is shown to be insufficient for modeling a modern industrial manipulator accurately. The extended flexible joint model, where non-actuated joints are added to model the elasticity of the links and bearings, is used to improve the model accuracy. The unknown elasticity parameters are estimated using a frequency domain gray-box identification method. The conclusion is that the obtained model describes the movements of the motors and the tool mounted on the robot with significantly higher accuracy. Similar elasticity model parameters are obtained when using two different output variables for the identification, the motor position and the tool acceleration.

A computational algorithm is presented for the Bayesian Cramer-Rao lower bound (BCRB) in filtering applications with measurement noise from mixture distributions with a jump Markov switching structure. Such mixture distributions are common for radio propagation in mixed line- and non-line-of-sight environments. The newly derived BCRB is tighter than earlier, more general bounds proposed in the literature, and thus gives a more realistic bound on actual estimation performance. The resulting BCRB can be used to compute a lower bound on the root mean square error of position estimates in a large class of radio localization applications. We illustrate this on an archetypical tracking application using a nearly constant velocity model and time-of-arrival observations.

The classical shift retrieval problem considers two signals in vector form that are related by a shift. This problem is of great importance in many applications and is typically solved by maximizing the cross-correlation between the two signals. Inspired by compressive sensing, in this paper, we seek to estimate the shift directly from compressed signals. We show that under certain conditions, the shift can be recovered using fewer samples and less computation compared to the classical setup. We also illustrate the concept of superresolution for shift retrieval. Of particular interest is shift estimation from Fourier coefficients. We show that under rather mild conditions only one Fourier coefficient suffices to recover the true shift.
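The single-coefficient result can be sketched directly: a circular shift s multiplies the k-th DFT coefficient by exp(−2πiks/N), so the phase of the ratio of one coefficient pair reveals s (uniquely for k = 1 when X[1] ≠ 0); the signal and shift below are illustrative, and the noiseless case is assumed:

```python
import numpy as np

def shift_from_one_coeff(x1, x2, k=1):
    """Estimate the circular shift s in x2[n] = x1[n - s] from the single
    DFT coefficient k, using X2[k] = X1[k] * exp(-2j*pi*k*s/N)."""
    N = len(x1)
    e = np.exp(-2j * np.pi * k * np.arange(N) / N)   # one DFT row
    s = -N * np.angle((e @ x2) / (e @ x1)) / (2.0 * np.pi * k)
    return int(round(s)) % N

rng = np.random.default_rng(3)
x = rng.standard_normal(64)
y = np.roll(x, 17)              # shift x circularly by 17 samples
s_hat = shift_from_one_coeff(x, y)
```

Only two inner products are computed, instead of the full cross-correlation of the classical approach.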

Regular and moderate physical activity provides many physiological benefits. It reduces the risk of disease outcomes and is the basis for proper rehabilitation after a severe disease. Aerobic activity and strength exercises are strongly recommended in order to maintain autonomy with ageing. Balanced activity of both types is important, especially for the elderly population. Several methods have been proposed to monitor aerobic activities. However, no appropriate method is available for controlling the more complex parameters of strength exercises. Within this context, the present article introduces a personalized, home-based strength exercise trainer designed for the elderly. The system guides a user at home through a personalized exercise program. Using a network of wearable sensors, the user's motions are captured. These are evaluated by comparing them to prescribed exercises, taking both exercise load and technique into account. Moreover, the evaluation results are immediately translated into appropriate feedback to the user in order to assist correct exercise execution. Besides the direct feedback, a major novelty of the system is its generic personalization by means of a supervised teach-in phase, where the program is performed once under supervision of a physical activity specialist. This teach-in phase allows the system to record and learn the correct execution of exercises for the individual user and to provide personalized monitoring. The user-driven design process, the system development, and its underlying activity monitoring methodology are described. Moreover, technical evaluation results as well as results concerning the usability of the system for ageing people are presented. The latter was assessed in a clinical study with thirty participants aged 60 years or older, some of them showing diseases or functional limitations commonly observed in the elderly population.

Monte Carlo methods, in particular those based on Markov chains and on interacting particle systems, are by now tools that are routinely used in machine learning. These methods have had a profound impact on statistical inference in a wide range of application areas where probabilistic models are used. Moreover, there are many algorithms in machine learning which are based on the idea of processing the data sequentially, first in the forward direction and then in the backward direction. In this tutorial we will review a branch of Monte Carlo methods based on the forward-backward idea, referred to as backward simulators. These methods are useful for learning and inference in probabilistic models containing latent stochastic processes. The theory and practice of backward simulation algorithms have undergone a significant development in recent years and the algorithms keep finding new applications. The foundation for these methods is sequential Monte Carlo (SMC). SMC-based backward simulators are capable of addressing smoothing problems in sequential latent variable models, such as general, nonlinear/non-Gaussian state-space models (SSMs). However, we will also clearly show that the underlying backward simulation idea is by no means restricted to SSMs. Furthermore, backward simulation plays an important role in recent developments of Markov chain Monte Carlo (MCMC) methods. Particle MCMC is a systematic way of using SMC within MCMC. In this framework, backward simulation gives us a way to significantly improve the performance of the samplers. We review and discuss several related backward-simulation-based methods for state inference as well as learning of static parameters, using both frequentist and Bayesian approaches.

Direct torque control (DTC) is considered one of the most efficient techniques for speed and/or position tracking control of induction motor drives. However, this control scheme has several drawbacks: the switching frequency may exceed the maximum allowable switching frequency of the inverters, and the ripples in current and torque, especially at low-speed tracking, may be too large. In this brief, we propose a new approach that overcomes these problems. The suggested controller is a model predictive controller, which directly controls the inverter switches. It is easy to implement in real time and it outperforms all previous approaches. Simulation results show that the new approach has as good tracking properties as any other scheme, and that it reduces the average inverter switching frequency by about 95% as compared to classical DTC.

Boundary effects in iterative learning control (ILC) algorithms are considered in this article. ILC algorithms involve filtering of input and error signals over finite-time intervals, often using non-causal filters, and it is important that the boundary effects of the filtering operations are handled in an appropriate way. The topic is studied using both a proposed theoretical framework and simulations, and it is shown that the method for handling the boundary effects has an impact on the stability and convergence properties of the ILC algorithm.

Courses at the Master’s level in automatic control and signal processing cover mathematical theories and algorithms for control, estimation, and filtering. However, giving students practical experience in how to use these algorithms is also an important part of these courses. A goal is that the students should not only be able to understand and derive these algorithms, but also be able to apply them to real-life technical problems. The latter is achieved by assigning more time to the laboratory tutorials and designing them in such a way that the exercises are open for interpretation; an example of this would be giving the students more freedom to decide how to acquire the data needed to solve the given exercises. The students are asked to hand in a laboratory report in which they describe how they solved the exercises. This paper presents a double-blind peer-review process for laboratory reports, introduced at the Department of Electrical Engineering, Linköping University, Sweden. A survey was administered to students, and the results are summarized in this paper. Also discussed are the teachers’ experiences of peer review and of how students perform later in their education in writing their Master’s theses.

This paper gives an overview of the identification of linear systems. It covers the classical approach of parametric methods by means of maximum likelihood and prediction error methods, as well as classical non-parametric methods through spectral analysis. It also covers very recent techniques dealing with convex formulations by regularization of FIR and ARX models, as well as new alternatives to spectral analysis through local linear models. An example of identification of aircraft dynamics illustrates the approaches.
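As a hedged illustration of the regularized FIR approach mentioned above, the following sketch estimates an impulse response by regularized least squares with a TC (tuned/correlated) kernel prior. The system, data sizes, and hyper-parameters are invented for the example, and the hyper-parameters are fixed rather than estimated from data:

```python
import numpy as np

rng = np.random.default_rng(1)

# True FIR system: exponentially decaying impulse response of length n
n, N = 30, 300
g_true = 0.8 ** np.arange(n)

# Input-output data with white input and measurement noise
u = rng.normal(size=N)
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)]
                for t in range(N)])            # regression matrix
y = Phi @ g_true + 0.1 * rng.normal(size=N)

# TC kernel: K[i, j] = c * lam ** max(i, j) encodes decay and smoothness
c, lam, sigma2 = 1.0, 0.8, 0.01
i = np.arange(n)
K = c * lam ** np.maximum.outer(i, i)

# Regularized LS estimate g = (Phi'Phi + sigma2 K^{-1})^{-1} Phi'y,
# computed in the equivalent form K Phi' (Phi K Phi' + sigma2 I)^{-1} y
S = Phi @ K @ Phi.T + sigma2 * np.eye(N)
g_hat = K @ Phi.T @ np.linalg.solve(S, y)

print(np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```

In the full approach surveyed by the paper, c and lam would be tuned by maximizing the marginal likelihood rather than fixed as here.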

We present a system identification method for problems with partially missing inputs and outputs. The method is based on a subspace formulation and uses the nuclear norm heuristic for structured low-rank matrix approximation, with the missing input and output values as the optimization variables. We also present a fast implementation of the alternating direction method of multipliers (ADMM) to solve regularized or non-regularized nuclear norm optimization problems with Hankel structure. This makes it possible to solve quite large system identification problems. Experimental results show that the nuclear norm optimization approach to subspace identification is comparable to the standard subspace methods when no inputs and outputs are missing, and that the performance degrades gracefully as the percentage of missing inputs and outputs increases.
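A minimal sketch of a key building block of this kind of method: low-rank approximation of a Hankel matrix via singular value thresholding, the proximal operator of the nuclear norm used inside ADMM iterations. The second-order system and the threshold value are illustrative assumptions; the paper's full method (missing data as variables, structured ADMM) is not reproduced here:

```python
import numpy as np

# Impulse response of a second-order system: the Hankel matrix has rank 2
t = np.arange(40)
h = 0.9 ** t + 0.5 ** t

H = np.array([[h[i + j] for j in range(20)] for i in range(20)])

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Add small noise, then threshold: the low-rank structure is recovered
H_noisy = H + 0.01 * np.random.default_rng(2).normal(size=H.shape)
H_lr = svt(H_noisy, 0.5)

print(np.linalg.matrix_rank(H_lr, tol=1e-6))   # close to the true order 2
```

The rank of the thresholded Hankel matrix reveals the model order, which is why the nuclear norm serves as a convex surrogate for rank in subspace identification.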

We present a novel method for Wiener system identification. The method relies on a semiparametric, i.e. a mixed parametric/nonparametric, model of a Wiener system. We use a state-space model for the linear dynamical system and a nonparametric Gaussian process model for the static nonlinearity. We avoid making strong assumptions, such as monotonicity, on the nonlinear mapping. Stochastic disturbances, entering both as measurement noise and as process noise, are handled in a systematic manner. The nonparametric nature of the Gaussian process allows us to handle a wide range of nonlinearities without making problem-specific parameterizations. We also consider sparsity-promoting priors, based on generalized hyperbolic distributions, to automatically infer the order of the underlying dynamical system. We derive an inference algorithm based on an efficient particle Markov chain Monte Carlo method, referred to as particle Gibbs with ancestor sampling. The method is profiled on two challenging identification problems with good results. Blind Wiener system identification is handled as a special case.

There has recently been a trend to study linear system identification with high-order finite impulse response (FIR) models using the regularized least-squares approach. A key step of this approach is to solve the hyper-parameter estimation problem, which is usually nonconvex. Our goal here is to investigate implementations of algorithms for solving the hyper-parameter estimation problem that can deal with both large data sets and possibly ill-conditioned computations. In particular, a QR-factorization-based matrix-inversion-free algorithm is proposed to evaluate the cost function in an efficient and accurate way. It is also shown that the gradient and Hessian of the cost function can be computed based on the same QR factorization. Finally, the proposed algorithm and ideas are verified by Monte Carlo simulations on a large data bank of test systems and data sets.

This paper presents a cardinalized probability hypothesis density (CPHD) filter for extended targets that can result in multiple measurements at each scan. The probability hypothesis density (PHD) filter for such targets has been derived by Mahler, and different implementations have been proposed recently. To achieve better estimation performance this work relaxes the Poisson assumptions of the extended target PHD filter in target and measurement numbers. A gamma Gaussian inverse Wishart mixture implementation, which is capable of estimating the target extents and measurement rates as well as the kinematic state of the target, is proposed, and it is compared to its PHD counterpart in a simulation study. The results clearly show that the CPHD filter has a more robust cardinality estimate leading to smaller OSPA errors, which confirms that the extended target CPHD filter inherits the properties of its point target counterpart.

Knowledge of the noise distribution is typically crucial for state estimation in general state-space models. However, the properties of the noise process are unknown in the majority of practical applications. The distribution of the noise may also be non-stationary or state dependent, which prevents the use of off-line tuning methods. For linear Gaussian models, adaptive Kalman filters (AKF) estimate unknown parameters in the noise distributions jointly with the state. For nonlinear models, we provide a Bayesian solution for the estimation of noise distributions in the exponential family, leading to a marginalized adaptive particle filter (MAPF) where the noise parameters are updated using finite-dimensional sufficient statistics for each particle. The time evolution model for the noise parameters is defined implicitly as a Kullback-Leibler norm constraint on the time variability, leading to an exponential forgetting mechanism operating on the sufficient statistics. Many existing methods are based on the standard approach of augmenting the state with the unknown variables and attempting to solve the resulting filtering problem. The MAPF is significantly more computationally efficient than a comparable particle filter that runs on the full augmented state. Further, the MAPF can handle sensor and actuator offsets as unknown means in the noise distributions, avoiding the standard approach of augmenting the state with such offsets. We illustrate the MAPF first on a standard example and then on a tire-radius estimation problem with real data.
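The exponential forgetting mechanism on sufficient statistics can be sketched for the simplest case: tracking an unknown measurement-noise variance with conjugate inverse-gamma statistics. The forgetting factor and the variance change are invented for the example; this is not the MAPF itself, only the kind of per-particle noise-parameter update it builds on:

```python
import numpy as np

rng = np.random.default_rng(3)

# Residuals whose variance jumps from 1.0 to 4.0 halfway through
e = np.concatenate([rng.normal(0, 1.0, 500), rng.normal(0, 2.0, 500)])

# Inverse-gamma sufficient statistics (alpha, beta) for the variance,
# with exponential forgetting lam discounting old information
lam, alpha, beta = 0.98, 1.0, 1.0
est = []
for ek in e:
    alpha = lam * alpha + 0.5          # discount, then absorb one sample
    beta = lam * beta + 0.5 * ek ** 2
    est.append(beta / (alpha - 1))     # posterior mean of the variance

print(est[499], est[-1])               # near 1.0, then near 4.0
```

Because the statistics are finite dimensional, the same update costs O(1) per particle and per time step, which is the source of the MAPF's efficiency advantage over state augmentation.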

In many applications, design or analysis is performed over a finite frequency range of interest, which makes it necessary to compute the H2 norm over that range rather than over all frequencies. This paper provides different methods for computing upper bounds of the robust finite-frequency H2 norm for systems with structured uncertainties. An application of the robust finite-frequency H2 norm to a comfort analysis problem for an aero-elastic model of an aircraft is also presented.

We investigate the unsupervised K-means clustering and the semi-supervised hidden Markov model (HMM) to automatically detect anomalous motion patterns in groups of people (crowds). Anomalous motion patterns are typically people merging into a dense group, followed by disturbances or threatening situations within the group. The application of K-means clustering and HMM is illustrated with datasets from four surveillance scenarios. The results indicate that by systematically investigating the group of people with different K values and analyzing cluster density, cluster quality, and changes in cluster shape, we can automatically detect anomalous motion patterns. The results correspond well with the events in the datasets. The results also indicate that very accurate detections of the people in the dense group are not necessary: the clustering and HMM results remain largely the same under increased uncertainty in the detections.

This paper proposes a general convex framework for the identification of switched linear systems. The proposed framework uses over-parameterization to avoid solving the otherwise combinatorially forbidding identification problem, and takes the form of a least-squares problem with a sum-of-norms regularization, a generalization of the ℓ1-regularization. The regularization constant regulates the complexity and is used to trade off the fit and the number of submodels.
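A minimal sketch of sum-of-norms regularization in its separable (group-lasso) form, solved by proximal gradient with block soft thresholding. The regression problem, groups, and regularization constant are invented for the illustration; the paper's switched-system formulation instead penalizes differences of over-parameterized models over time, but the same proximal operator is the computational core:

```python
import numpy as np

rng = np.random.default_rng(4)

def prox_group(theta, groups, tau):
    """Proximal operator of tau * sum of group 2-norms (block soft threshold)."""
    out = theta.copy()
    for g in groups:
        nrm = np.linalg.norm(theta[g])
        out[g] = 0.0 if nrm <= tau else (1 - tau / nrm) * theta[g]
    return out

# Sparse-group regression: only the first of three parameter blocks is active
n, p = 100, 6
X = rng.normal(size=(n, p))
theta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0, 0.0])
y = X @ theta_true + 0.1 * rng.normal(size=n)
groups = [slice(0, 2), slice(2, 4), slice(4, 6)]

# Proximal gradient on 0.5 ||y - X theta||^2 + lam * sum_g ||theta_g||
lam, step = 5.0, 1.0 / np.linalg.norm(X, 2) ** 2
theta = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ theta - y)
    theta = prox_group(theta - step * grad, groups, step * lam)

print(theta.round(2))   # inactive blocks driven exactly to zero
```

The key property shown is that whole groups are set exactly to zero, which in the switched-system setting translates into a small number of distinct submodels, with lam trading off fit against complexity.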

We consider time-of-arrival based robust geolocation in harsh line-of-sight/non-line-of-sight environments. Herein, we assume the probability density function (PDF) of the measurement error to be completely unknown and develop an iterative algorithm for robust position estimation. The iterative algorithm alternates between a PDF estimation step, which approximates the exact measurement error PDF (albeit unknown) under the current parameter estimate via adaptive kernel density estimation, and a parameter estimation step, which resolves a position estimate from the approximate log-likelihood function via a quasi-Newton method. Unless the convergence condition is satisfied, the resolved position estimate is then used to refine the PDF estimation in the next iteration. We also present the best achievable geolocation accuracy in terms of the Cramér-Rao lower bound. Various simulations have been conducted in both real-world and simulated scenarios. When the number of received range measurements is large, the newly proposed position estimator attains the performance of the maximum likelihood estimator (MLE). When the number of range measurements is small, it deviates from the MLE, but still outperforms several salient robust estimators in terms of geolocation accuracy, which comes at the cost of higher computational complexity.

The Quantization Theorem I (QT I) implies that the likelihood function can be reconstructed from quantized sensor observations, given that appropriate dithering noise is added before quantization. We present constructive algorithms to generate such dithering noise. The application to maximum likelihood estimation (mle) is studied in particular. In short, dithering has the same role for amplitude quantization as an anti-alias filter has for sampling, in that it enables perfect reconstruction of the dithered but unquantized signal’s likelihood function. Without dithering, the likelihood function suffers from a kind of aliasing expressed as a counterpart to Poisson’s summation formula, which makes the exact mle intractable to compute. With dithering, it is demonstrated that standard mle algorithms can be re-used on a smoothed likelihood function of the original signal, and statistical efficiency is obtained. The implication of dithering for the Cramér–Rao Lower Bound (CRLB) is studied, and illustrative examples are provided.
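The basic mechanism can be demonstrated in a few lines: adding uniform dither of one quantization step before rounding removes the bias of the quantizer, so amplitudes below one LSB become recoverable by averaging. This shows only the elementary dithering effect, not the paper's likelihood reconstruction or mle algorithms; the signal and step size are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

Delta = 1.0       # quantization step (one LSB)
s = 0.3           # constant signal amplitude, smaller than the step
N = 100000

def quantize(x):
    return Delta * np.round(x / Delta)

# Without dither: the quantizer output is stuck at 0, so the mean is biased
plain = quantize(np.full(N, s))

# With uniform dither on (-Delta/2, Delta/2) added before quantization,
# the average of the quantized samples recovers the true amplitude
d = rng.uniform(-Delta / 2, Delta / 2, N)
dithered = quantize(s + d)

print(plain.mean(), dithered.mean())   # 0.0 vs approximately 0.3
```

The dithered quantizer behaves, on average, like the identity plus additive noise, which is the property that makes likelihood-based inference on quantized data tractable.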

The performance of an optimal filter is lower bounded by the Bayesian Cramer-Rao Bound (BCRB). In some cases, this bound is tight (achieved by the optimal filter) asymptotically in information, i.e., high signal-to-noise ratio (SNR). However, for jump Markov linear Gaussian systems (JMLGS) the BCRB is not necessarily achieved for any SNR. In this paper, we derive a new bound which is tight for all SNRs. The bound evaluates the expected covariance of the optimal filter which is represented by one deterministic term and one stochastic term that is computed with Monte Carlo methods. The bound relates to and improves on a recently presented BCRB and an enumeration BCRB for JMLGS. We analyze their relations theoretically and illustrate them on a couple of examples.

This paper develops and illustrates a new maximum-likelihood based method for the identification of Hammerstein-Wiener model structures. A central aspect is that a very general situation is considered wherein multivariable data, non-invertible Hammerstein and Wiener nonlinearities, and colored stochastic disturbances both before and after the Wiener nonlinearity are all catered for. The method developed here addresses the blind Wiener estimation problem as a special case.

In extended/group target tracking, where the extensions of the targets are estimated, target spawning and combination events might have significant implications on the extensions. This paper investigates target spawning and combination events for the case that the target extensions are modeled in a random matrix framework. The paper proposes functions that should be provided by the tracking filter in such a scenario. The results, which are obtained by a gamma Gaussian inverse Wishart implementation of an extended target probability hypothesis density filter, confirm that the proposed functions improve the performance of the tracking filter for spawning and combination events.

This paper presents a model-based phase-only predistortion method suitable for outphasing radio frequency (RF) power amplifiers (PA). The predistortion method is based on a model of the amplifier with a constant gain factor and phase rotation for each outphasing signal, and a predistorter with phase rotation only. Exploiting the structure of the outphasing PA, the problem can be reformulated from a nonconvex problem into a convex least-squares problem, and the predistorter can be calculated analytically. The method has been evaluated for 5MHz Wideband Code-Division Multiple Access (WCDMA) and Long Term Evolution (LTE) uplink signals with Peak-to-Average Power Ratio (PAPR) of 3.5 dB and 6.2 dB, respectively, applied to a fully integrated Class-D outphasing RF PA in 65nm CMOS. At 1.95 GHz for a 5.5V supply voltage, the measured output power of the PA was +29.7dBm with a power-added efficiency (PAE) of 26.6 %. For the WCDMA signal with +26.0dBm of channel power, the measured Adjacent Channel Leakage Ratio (ACLR) at 5MHz and 10MHz offsets were -46.3 dBc and -55.6 dBc with predistortion, compared to -35.5 dBc and -48.1 dBc without predistortion. For the LTE signal with +23.3dBm of channel power, the measured ACLR at 5MHz offset was -43.5 dBc with predistortion, compared to -34.1 dBc without predistortion.

Localization is an enabling technology in many applications and services, today and in the future. Satellite navigation often works fine for navigation, infotainment, and location-based services, and it is today the dominating solution in commercial products. A nice exception is the localization in Google Maps, where radio signal strength from WiFi and cellular networks is used as complementary information to increase accuracy and integrity. With the on-going trend of more autonomous functions being introduced in our vehicles and with all our connected devices, most of them operated in indoor environments where satellite signals are not available, there is an acute need for new solutions.

At the same time, our smartphones are getting more sophisticated in their sensor configuration. Therefore, in this chapter we present a freely available Sensor Fusion app developed in-house, how it works, how it has been used, and how it can be used based on a variety of applications in our research and student projects.

The discrete-time general state-space model is a flexible framework for dealing with nonlinear and/or non-Gaussian time series problems. However, the associated (Bayesian) inference problems are often intractable. Additionally, for many applications of interest, the inference solutions are required to be recursive over time. The particle filter (PF) is a popular class of Monte Carlo based numerical methods to deal with such problems in real time. However, the PF is known to be computationally expensive and does not scale well with the problem dimensions. If a part of the state space is analytically tractable conditioned on the remaining part, the Monte Carlo based estimation can be confined to a space of lower dimension, resulting in an estimation method known as the Rao-Blackwellized particle filter (RBPF).

In this chapter, we present a brief review of Rao-Blackwellized particle filtering. In particular, we outline a set of popular conditionally tractable structures admitting such Rao-Blackwellization in practice. For some special and/or relatively new cases, we also provide reasonably detailed descriptions. We confine our presentation mostly to the practitioners’ point of view.

It is a well-known fact that exercising helps people improve their overall well-being, both physiological and psychological. Regular moderate physical activity reduces the risk of disease progression, improves the chances for successful rehabilitation, and lowers the levels of stress hormones. Physical fitness can be categorized into cardiovascular fitness, and muscular strength and endurance. A proper balance between aerobic activities and strength exercises is important to maximize the positive effects. This balance is not always easily obtained, so assistance tools are important. Hence, ambient assisted living (AAL) systems that support and motivate balanced training are desirable. This chapter presents methods to provide this, focusing on the methodologies and concepts implemented by the authors in the physical activity monitoring for aging people (PAMAP) platform. The chapter sets the stage for an architecture to provide personalized activity monitoring using a network of wearable sensors, mainly inertial measurement units (IMU). The main focus is then to describe how to do this in a personalizable way: (1) monitoring to provide an estimate of aerobic activities performed, for which a boosting based method to determine activity type, intensity, frequency, and duration is given; (2) supervising and coaching strength activities. Here, methodologies are described for obtaining the parameters needed to provide real-time useful feedback to the user about how to exercise safely using the right technique.

Scene reconstruction, i.e. the process of creating a 3D representation (mesh) of some real world scene, has recently become easier with the advent of cheap RGB-D sensors (e.g. the Microsoft Kinect).

Many such sensors use rolling shutter cameras, which produce geometrically distorted images when they are moving. To mitigate these rolling shutter distortions we propose a method that uses an attached gyroscope to rectify the depth scans. We also present a simple scheme to calibrate the relative pose and time synchronization between the gyro and a rolling shutter RGB-D sensor.

For scene reconstruction we use the Kinect Fusion algorithm to produce meshes. We create meshes from both raw and rectified depth scans, and these are then compared to a ground truth mesh. The types of motion we investigate are: pan, tilt and wobble (shaking) motions.

As our method relies on gyroscope readings, the amount of computations required is negligible compared to the cost of running Kinect Fusion.

This chapter is an extension of a paper at the IEEE Workshop on Robot Vision [10]. Compared to that paper, we have improved the rectification to also correct for lens distortion, and use a coarse-to-fine search to find the time shift more quickly. We have extended our experiments to also investigate the effects of lens distortion, and to use more accurate ground truth. The experiments demonstrate that correction of rolling shutter effects yields a larger improvement of the 3D model than correction for lens distortion.

In this chapter, parallel implementations of hybrid MPC will be discussed. Different methods for achieving parallelism at different levels of the algorithms will be surveyed. It will be seen that there are many possible ways of obtaining parallelism for hybrid MPC, and it is by no means clear which possibilities should be utilized to achieve the best possible performance. Answering this question is a challenge for future research.

The Handbook of Intelligent Vehicles provides complete coverage of the fundamentals, new technologies, and sub-areas essential to the development of intelligent vehicles; it also includes advances made to date, challenges, and future trends. Significant strides in the field have been made to date; however, there has so far been no single book or volume which captures these advances in a comprehensive format, addressing all essential components and subspecialties of intelligent vehicles, as this book does. Since the intended users are engineering practitioners, as well as researchers and graduate students, the book chapters not only cover fundamentals, methods, and algorithms but also describe how software and hardware are implemented, and demonstrate the advances along with their present challenges. Research at both the component and systems levels is required to advance the functionality of intelligent vehicles. This volume covers both of these aspects in addition to the fundamentals listed above.

Narrowband Internet of Things (NB-IoT) is an emerging cellular technology designed to target low-cost devices, high coverage, long device battery life (more than ten years), and massive capacity. We investigate opportunities for device tracking in NB-IoT systems using Observed Time Difference of Arrival (OTDOA) measurements. Reference Signal Time Difference (RSTD) reports are simulated to be sent to the mobile location center periodically or on an on-demand basis. We investigate how a budget on the number of reports per minute can be optimized for horizontal positioning accuracy using an on-demand reporting method based on the Signal to Noise Ratio (SNR) of the measured cells received by the User Equipment (UE). Wireless channels are modeled considering multipath fading propagation conditions. Extended Pedestrian A (EPA) and Extended Typical Urban (ETU) delay profiles, corresponding to low and high delay spread environments, respectively, are simulated for this purpose. To increase the robustness of the filtering method, measurement noise outliers are detected using confidence bounds estimated from filter innovations.
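The innovation-based outlier detection mentioned in the last sentence can be sketched with a scalar Kalman filter, where a measurement is rejected when its innovation falls outside a confidence bound derived from the innovation covariance. The random-walk model, noise levels, and 3-sigma gate are illustrative assumptions, not the NB-IoT/OTDOA setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# Scalar random-walk tracking with occasional large measurement outliers
q, r, T = 0.01, 0.25, 200
x = np.cumsum(rng.normal(0, np.sqrt(q), T))       # true state
y = x + rng.normal(0, np.sqrt(r), T)              # nominal measurements
outliers = rng.random(T) < 0.05
y[outliers] += rng.normal(0, 10.0, outliers.sum())

xf, P = 0.0, 1.0
est = []
for yt in y:
    P = P + q                    # time update (random-walk model)
    S = P + r                    # innovation covariance
    nu = yt - xf                 # innovation
    if nu ** 2 <= 9.0 * S:       # 3-sigma confidence bound on the innovation
        K = P / S
        xf = xf + K * nu
        P = (1 - K) * P
    # else: reject the measurement as an outlier and keep the prediction
    est.append(xf)

print(np.mean((np.array(est) - x) ** 2))
```

Without the gate, a single 10-sigma outlier would pull the state estimate far off; with it, the filter simply coasts on its prediction for that sample.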

Many positioning systems rely on accurate time of arrival measurements. In this paper, we address not only the accuracy but also the relevance of Time of Arrival (TOA) measurement error modeling. We discuss how better knowledge of these errors can improve relative distance estimation, and compare the impact of differently detailed measurement error information. These models are compared in simulations based on models derived from an Ultra Wideband (UWB) measurement campaign. The conclusion is that significant improvements can be made without providing detailed received signal information but with a generic and relevant measurement error model.

Assessing the fundamental performance limitations in Bayesian filtering can be carried out using the parametric Cramer-Rao bound (CRB). The parametric CRB puts a lower bound on mean square error (MSE) matrix conditioned on a specific state trajectory realization. In this work, we derive the parametric CRB for state-space models, where the measurement equation is modeled by a Gaussian process regression. These models appear, for instance in proximity report-based positioning, where proximity reports are obtained by hard thresholding of received signal strength (RSS) measurements, that are modeled through Gaussian process regression. The proposed parametric CRB is evaluated on selected state trajectories and further compared with the positioning performance obtained by the particle filter. The results corroborate that the positioning accuracy achieved in this framework is close to the parametric CRB.

This paper deals with state inference and parameter identification in Jump Markov Non-Linear Systems. The state inference problem is solved efficiently using a recently proposed Rao-Blackwellized Particle Filter (RBPF), where the discrete state is integrated out analytically. Within the RBPF framework, Recursive Maximum Likelihood parameter identification is performed using gradient ascent algorithms. The proposed learning method has the advantage over (online) Expectation Maximization methods that it can easily be applied to cases where the probability density functions defining the Jump Markov Non-Linear System are not members of the exponential family. Two benchmark problems illustrate the parameter identification performance.

Advances in sensor systems have resulted in the availability of high resolution sensors, capable of generating massive amounts of data. For complex systems to run online, computationally efficient filters are required for the estimation of latent states related to the data. In this paper a novel method for efficient state estimation with the unscented Kalman filter is proposed. The focus is on applications consisting of a massive amount of data. From a modelling perspective, this amounts to a measurement vector with dimensionality significantly greater than the dimensionality of the state vector. The efficiency of the filter is derived from a parallel filter structure which is enabled by the expectation propagation algorithm. A novel parallel measurement processing expectation propagation unscented Kalman filter is developed. The primary advantage of the novel algorithm is the ability to achieve computational improvements with negligible losses in filter accuracy. An example of robot localization with a high resolution laser rangefinder sensor is presented. A 47.53% decrease in computational time was observed for a scenario with a processing platform consisting of 4 processors, with a negligible loss in accuracy.
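The structural idea, splitting a massive measurement vector into chunks whose contributions are fused, can be sketched for the linear Gaussian special case using the information form, where chunk contributions simply add (and could therefore be computed in parallel). This is only the linear analogue of the parallel structure; the paper's expectation propagation and unscented transform machinery are not reproduced, and all dimensions and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Prior on a 2D state, and a 1000-dimensional measurement of it
n, m = 2, 1000
x0, P0 = np.zeros(n), np.eye(n)
H = rng.normal(size=(m, n))
R_var = 0.5                                     # i.i.d. measurement noise
x_true = np.array([1.0, -2.0])
y = H @ x_true + rng.normal(0, np.sqrt(R_var), m)

# Information-form update: Lambda = P0^-1 + sum_k H_k' R_k^-1 H_k,
# accumulated over independent chunks (parallelizable across processors)
Lam = np.linalg.inv(P0)
eta = Lam @ x0
for k in range(0, m, 250):                      # four chunks of 250 rows
    Hk, yk = H[k:k + 250], y[k:k + 250]
    Lam = Lam + Hk.T @ Hk / R_var
    eta = eta + Hk.T @ yk / R_var

x_post = np.linalg.solve(Lam, eta)
print(x_post.round(2))                          # close to the true state
```

Because each chunk's information contribution is independent of the others, the loop body can run on separate processors with a single cheap reduction at the end, which is the computational pattern the paper exploits in the nonlinear case.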

In polar region operations, drift sea ice positioning and tracking is useful for both scientific and safety reasons. Modeling ice movements has proven difficult, not least due to the lack of information on currents and winds of high enough resolution. Thus, observations of drift ice are essential to an up-to-date ice-tracking estimate.

As an inverse problem, it is possible to extract current and wind estimates from the tracked objects of a Multi-Target Tracking (MTT) filter. By inserting the track estimates into a Gaussian field, we obtain a two-dimensional current estimate over a region of interest.

The algorithm is applied to a Terrestrial Radar Interferometer (TRI) dataset from Kongsfjorden, Svalbard, to show the practical application of the current estimation.

In polar region operations, drift sea ice positioning and tracking is useful for both scientific and safety reasons. Modeling ice movements has proven difficult, not least due to the lack of information on currents and winds of high enough resolution. Thus, observations of drift ice are essential to an up-to-date ice-tracking estimate.

Recent years have seen the rise of Unmanned Aerial Systems (UAS) as a platform for geo-observation, and so too for the tracking of sea ice. As the UAS is a mobile platform, the research on UAS path-planning is extensive and usually involves an objective function to minimize. For the purpose of observation, however, the objective function typically changes as observations are made along the path.

In this paper we discuss the architectural outline of a system capable of fusing data from multiple sources, UASs and others, as well as incorporating that data for path-planning, sea ice movement prediction, and target initialization. The system contains tracking of sea ice objects and situation map logic, and is expandable, as discussed, with path-planning capabilities for closing the loop of optimizing paths for information acquisition.

In this work, we estimate a model of the vertical dynamics of a quadcopter and explain how this model can be used for mass estimation and diagnosis of system changes. First, a standard thrust model, commonly used in the literature, describing the relation between the calculated control signals of the rotors and the thrust is estimated. The estimation results are compared to those using a refined thrust model, and it turns out that the refined model gives a significant improvement. The combination of a nonlinear model and closed-loop data poses some challenges, and it is shown that an instrumental variables approach can be used to obtain accurate estimates. Furthermore, we show that the refined model opens up possibilities for fault detection of the quadcopter. More specifically, this model can be used for mass estimation and also for diagnosis of other parameters that might vary between and during missions.

Classification of motion mode (walking, running, standing still) and device mode (hand-held, in pocket, in backpack) is an enabler in personal navigation systems, both for the purposes of saving energy and tuning design parameters, and for its own sake. Our main contribution is to publish one of the most extensive datasets for this problem, including inertial data from eight users, each one performing three pre-defined trajectories carrying four smartphones and seventeen inertial measurement units on the body. All kinds of metadata are available, such as the ground truth of all modes and positions. A second contribution is the first study of a joint classifier of motion and device mode, where preliminary but promising results are presented.


The Military Institute of Engineering (IME) is a Brazilian Army higher education institution located in Rio de Janeiro, Brazil. Every three years, the Brazilian government evaluates the engineering bachelor's degree programs, and IME is always among the best Brazilian engineering schools. Despite the excellent results, the CDIO framework was chosen, at the end of 2014, as a reference framework to improve the education process, reorganize the programs, and promote the development of intra- and interpersonal skills. The objective of this work is to show the initial steps of the CDIO implementation. At the beginning of the process, the idea was to start only with the Mechanical Engineering program. However, after the initial promotion seminars presented to faculty, staff, and students, a sense of commitment was created, and four of the ten undergraduate programs also decided to participate. One of the first steps was the formation of the team responsible for coordinating the implementation of this set of best practices. The Kotter framework has been used to drive this transformation process. The CDIO syllabus was translated into Portuguese, compared with some of the Brazilian regulatory requirements for engineering education, and has been used as a reference for future curriculum changes. This paper also highlights the process for change management, the remodeling of the first year, and the introduction of an entrepreneurship course in partnership with an outstanding Brazilian business school.

In this work an Inertial Measurement Unit is used to improve tool position estimates for an ABB IRB 4600 industrial robot, starting from estimates based on motor angle forward kinematics. A Complementary Filter and an Extended Kalman Filter are investigated. The Complementary Filter is found to perform on par with the Extended Kalman Filter while having lower complexity both in the tuning process and the filtering computations.
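
The complementary filter investigated here blends an integrated gyro rate (accurate short-term, but drifting) with an accelerometer-derived angle (noisy, but drift-free). A minimal single-angle sketch, with an illustrative blend coefficient and signal values rather than the paper's tuning:

```python
# Minimal complementary-filter sketch (illustrative, single angle):
# the integrated gyro rate is the high-frequency path, the
# accelerometer-derived angle is the low-frequency correction.

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step; alpha close to 1 trusts the gyro short-term."""
    gyro_angle = angle_prev + gyro_rate * dt
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# Toy run: true angle 1.0 rad, gyro reporting only a small bias,
# clean accelerometer angle.
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.002, accel_angle=1.0, dt=0.01)
```

Despite the gyro bias, the accelerometer path pulls the estimate toward the true angle; the EKF compared in the paper achieves a similar effect with a model-based, time-varying gain at the cost of higher complexity.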

In this paper, an approach to estimate the mass of a quadcopter using only inertial measurements and pilot commands is presented. For this purpose, a lateral dynamic model describing the relation between the roll rate and the lateral acceleration is formulated. Due to the quadcopter’s inherent instability, a controller is used to stabilize the system and the data is collected in closed loop. Under the effect of feedback and disturbances, the inertial measurements used as input and output are correlated with the disturbances, which complicates the parameter estimation. The parameters of the model are estimated using several methods. The simulation and experimental results show that the instrumental-variable method has the best potential to estimate the mass of the quadcopter in this setup.
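
The instrumental-variable idea can be illustrated on a scalar toy model: in closed loop the measured input is correlated with the disturbance, which biases least squares, while correlating with an instrument that is independent of the disturbance removes the bias. Everything below (the static model, the instrument, the numbers) is an illustrative assumption, not the paper's quadcopter model:

```python
# IV sketch for a scalar static model y = theta*u + e, where the measured
# input u is correlated with the disturbance e (as happens under feedback).
# Ordinary least squares is then biased; projecting onto an instrument z
# (correlated with u, uncorrelated with e) removes the bias.
import random

random.seed(0)
theta_true = 2.0
N = 20000
z = [random.gauss(0, 1) for _ in range(N)]    # instrument (e.g. a reference signal)
e = [random.gauss(0, 1) for _ in range(N)]    # disturbance
u = [zi + 0.8 * ei for zi, ei in zip(z, e)]   # input correlated with disturbance
y = [theta_true * ui + ei for ui, ei in zip(u, e)]

theta_ls = sum(ui * yi for ui, yi in zip(u, y)) / sum(ui * ui for ui in u)
theta_iv = sum(zi * yi for zi, yi in zip(z, y)) / sum(zi * ui for zi, ui in zip(z, u))
# theta_ls is biased away from 2.0; theta_iv is (asymptotically) unbiased.
```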

Acoustic frequency tracking of a harmonic signalwith continuously varying frequency is considered. The Rao-Blackwellized point mass filter (RBPMF), previously proposed bythe authors for mechanical vibration tracking, is applied to the problem. The RBPMF is compared with two periodogram-based methods, and the similarities and differences between them are explained. Both experimental and simulation results in a Doppler frequency tracking scenario are presented, and the results show that the RBPMF can have significantly less estimation error than the competing methods.

This paper presents three iterative methods for orientation estimation. The first two are based on iterated Extended Kalman filter (IEKF) formulations with different state representations. The first uses the well-known unit quaternion as state (q-IEKF) while the other uses the orientation deviation, which we call IMEKF. The third method is based on nonlinear least squares (NLS) estimation of the angular velocity, which is used to parametrise the orientation. The results are obtained using Monte Carlo simulations, and the comparison is done with the non-iterative EKF and multiplicative EKF (MEKF) as baseline. The results clearly show that the IMEKF and the NLS-based method are superior to the q-IEKF and that all three outperform the non-iterative methods.

In polar region operations, drift ice positioning and tracking is useful for both scientific and safety reasons. At its core is a Multi-Target Tracking (MTT) problem in which currents and winds make motion modeling difficult. One recent algorithm in the MTT field, employed in this paper, is the Labeled Multi-Bernoulli (LMB) filter. In particular, a proposed reformulation of the LMB equations exposes a structure which is exploited to propose a compact algorithm for the generation of the filter's posterior distribution. Further, spatial indexing is applied to the clustering process of the filter, allowing efficient separation of the filter into smaller, independent parts with lower total complexity than that of an unclustered filter. Many types of sensors can be employed to generate detections of sea ice, and in this paper a recorded dataset from a Terrestrial Radar Interferometer (TRI) is used to demonstrate the application of the Spatially Indexed Labeled Multi-Bernoulli filter to estimate the currents of an observed area in Kongsfjorden, Svalbard.

The CDIO framework for development of engineering education is presented, including the overall ideas, the fundamental documents, and some development tools. The automatic control subject and its role in engineering education is studied using the CDIO Standards as reference. Some examples from the engineering education at Linköping University are presented with special focus on the control education.

The interest for system identification in dynamic networks has increased recently with a wide variety of applications. In many cases, it is intractable or undesirable to observe all nodes in a network and thus, to estimate the complete dynamics. Furthermore, it might even be challenging to estimate a subset of the network if key nodes are unobservable due to correlation between the nodes. In this contribution, we will discuss an approach to treat this problem. The approach relies on additional measurements that are dependent on the unobservable nodes and thus indirectly contain information about them. These measurements are used to form an alternative indirect model that is only dependent on observed nodes. The purpose of estimating this indirect model can be either to recover information about modules in the original network or to make accurate predictions of variables in the network. Examples are provided for both recovery of the original modules and prediction of nodes.

Chronic kidney disease (CKD) is a global public health problem, affecting approximately 10% of the population worldwide. Yet, there is little direct evidence on how CKD can be diagnosed in a systematic and automatic manner. This paper investigates how CKD can be diagnosed by using machine learning (ML) techniques. ML algorithms have been a driving force in detection of abnormalities in different physiological data, and are, with great success, employed in different classification tasks. In the present study, a number of different ML classifiers are experimentally validated on a real data set, taken from the UCI Machine Learning Repository, and our findings are compared with the findings reported in the recent literature. The results are quantitatively and qualitatively discussed, and our findings reveal that the random forest (RF) classifier achieves near-optimal performance in the identification of CKD subjects. Hence, we show that ML algorithms serve an important function in the diagnosis of CKD, with satisfactory robustness, and our findings suggest that RF can also be utilized for the diagnosis of similar diseases.

Modern fighter aircraft require maximum control performance in order to have the upper hand in a dogfight or when they have to outmaneuver an enemy missile. Therefore pilots must be able to maneuver the aircraft very close to the limit of what it is capable of while at the same time focus on the tactical tasks of the mission. To enable this, modern flight control systems have automatic systems for angle of attack and load factor limiting.

One such design technique is reference or command governors. In this paper we investigate different ways of designing command governors for angle of attack and load factor limiting in fighter aircraft flight control systems. We discuss different design approaches and their properties and implement one selected design in Saab's JAS 39 Gripen fighter aircraft simulation environment.

In order to meet the requirements for autonomous systems in real world applications, reliable path following controllers have to be designed to execute planned paths despite the existence of disturbances and model errors. In this paper we propose a Linear Quadratic controller for stabilizing a 2-trailer system with possible off-axle hitching around preplanned paths in backward motion. The controller design is based on a kinematic model of a general 2-trailer system including the possibility for off-axle hitching. Closed-loop stability is proved around a set of paths, typically chosen to represent the possible output from the path planner, using theory from linear differential inclusions. Using convex optimization tools a single quadratic Lyapunov function is computed for the entire set of paths.
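
The LQ design step can be sketched generically: iterate the discrete-time Riccati recursion to (near) convergence and read off the stabilising state-feedback gain. The double-integrator system and weights below are illustrative stand-ins for the paper's 2-trailer model:

```python
# Generic discrete-time LQ sketch (illustrative system, not the trailer model):
# iterate P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA, then the gain
# K = (R + B'PB)^{-1} B'PA makes x+ = (A - B K) x stable.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_T(X):
    return [list(r) for r in zip(*X)]

A = [[1.0, 0.1], [0.0, 1.0]]   # double integrator, dt = 0.1
B = [[0.0], [0.1]]
Q = [[1.0, 0.0], [0.0, 1.0]]   # state weight
R = 0.1                        # (scalar) input weight

P = [row[:] for row in Q]
for _ in range(500):           # value iteration on the Riccati equation
    BtP = mat_mul(mat_T(B), P)               # 1x2
    S = R + mat_mul(BtP, B)[0][0]            # scalar R + B'PB
    K = [[v / S for v in mat_mul(BtP, A)[0]]]  # 1x2 gain
    APA = mat_mul(mat_T(A), mat_mul(P, A))
    corr = mat_mul(mat_T(mat_mul(BtP, A)), K)  # A'PB S^{-1} B'PA
    P = [[Q[i][j] + APA[i][j] - corr[i][j] for j in range(2)] for i in range(2)]

BK = mat_mul(B, K)
Acl = [[A[i][j] - BK[i][j] for j in range(2)] for i in range(2)]  # A - B K
```

The closed-loop matrix `Acl` has both eigenvalues strictly inside the unit circle; the paper goes further and certifies stability over a whole set of paths with a single quadratic Lyapunov function.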

The stable spline kernel and the diagonal correlated kernel are two kernels that have been tested extensively in kernel-based regularization methods for LTI system identification. As shown in our recent works, although these two kernels are introduced in different ways, they share some common features, e.g., they both belong to the class of exponentially convex locally stationary kernels and the class of state-space model induced kernels. In this work, we further show that, similar to the derivation of the stable spline kernel, the continuous-time diagonal correlated kernel can be derived by applying the same "stable" coordinate change to a "generalized" first order spline kernel, and it can thus be interpreted as a stable generalized first order spline kernel. This interpretation provides new facets for understanding the properties of the diagonal correlated kernel. Based on this interpretation, new eigendecompositions, an explicit expression of the norm, and a new maximum entropy interpretation of the diagonal correlated kernel are derived.

The identification of continuous-time models of dynamical systems based on sampled measurements of input and output signals is a research topic that has received much attention during the past decades. However, a framework for the correct assessment of the performance of various estimation methods, as well as their numerical reliability, is still missing due to a number of benchmarking difficulties, equally applicable to both discrete- and continuous-time identification problems. This paper revisits this topic, reports new numerical results, highlights several fundamental aspects regarding the definition of an appropriate benchmark for the evaluation of continuous-time linear model identification algorithms and discusses several means of addressing the related existing problems.

Contractive interference functions are a subclass of the standard interference functions used in the design and analysis of distributed power control algorithms for wireless networks. Their peculiarity is that for the resulting positive system the existence and global asymptotic stability of a unique positive equilibrium point is guaranteed. In this paper we give an infinitesimal characterization of nonlinear contractive interference functions in terms of the spectral radius of the Jacobian linearization at any point in the positive orthant. The condition we obtain, that the spectral radius is always less than 1, extends to the nonlinear case an equivalent property of linear interference functions, and leads to a Jacobian characterization similar to the one commonly used in contraction analysis of nonlinear systems.
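
The condition can be checked numerically at any point: evaluate the Jacobian there and verify that its spectral radius is below 1. A sketch for a linear interference function, where the Jacobian is simply the (nonnegative) gain matrix, using power iteration; the matrix is illustrative, not from the paper:

```python
# For a linear interference function I(p) = A p + b with A >= 0, the Jacobian
# is A everywhere, and contractivity corresponds to spectral_radius(A) < 1.
# Power iteration approximates the spectral radius of a positive matrix.

def spectral_radius(A, iters=200):
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y)       # inf-norm normalisation
        x = [v / lam for v in y]
    return lam

A = [[0.2, 0.3],
     [0.1, 0.4]]          # illustrative nonnegative gain matrix
rho = spectral_radius(A)  # < 1 here, so a unique positive equilibrium is GAS
```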

In a continuous-time nonlinear driftless control system, a geometric phase is a consequence of nonintegrability of the vector fields, and it describes how cyclic trajectories in shape space induce non-periodic motion in phase space, according to an area rule. The aim of this paper is to show that geometric phases also exist for discrete-time driftless nonlinear control systems, but that, unlike their continuous-time counterpart, they need not obey any area rule, i.e., even zero-area cycles in shape space can lead to nontrivial geometric phases. When the discrete-time system is obtained through Euler discretization of a continuous-time system, it is shown that the zero-area geometric phase corresponds to the gap between the Euler discretization and an exact discretization of the continuous-time system.

It is a well known fact that finite time optimal controllers, such as MPC, do not necessarily result in closed loop stable systems. Within the MPC community it is common practice to add a final state constraint and/or a final state penalty in order to obtain guaranteed stability. However, for more advanced controller structures it can be difficult to show stability using these techniques. Additionally, in some cases the final state constraint set consists of so many inequalities that the complexity of the MPC problem becomes too large for use in certain fast and time critical applications. In this paper we instead focus on deriving a tool for a posteriori analysis of the closed loop stability for linear systems controlled with MPC controllers. We formulate an optimisation problem that gives a sufficient condition for stability of the closed loop system and we show that the problem can be written as a Mixed Integer Linear Programming (MILP) problem.

When an aircraft is flying and burning fuel, the center of gravity (c.g.) of the aircraft shifts slowly. The c.g. can also be shifted abruptly, e.g., when a fighter aircraft releases a weapon. The shift in c.g. is difficult to measure or estimate, so the flight control system needs to be robustly designed to cope with this variation. However, for fighter aircraft with high manoeuvrability there is room for improvement. In this paper we investigate whether adaptive control law augmentation can be used to better cope with the change in c.g. We augment a baseline controller with a robust Model Reference Adaptive Control (MRAC) design and analyse its benefits and possible issues.

Reversing with a dolly steered trailer configuration is a hard task for any driver without extensive training. In this work we present a motion planning and control framework that can be used to automatically plan and execute complicated manoeuvres. The unstable dynamics of the reversing general 2-trailer configuration with off-axle hitching is first stabilised by an LQ-controller, and then a pure pursuit path tracker is used on a higher level, giving a cascaded controller that can track piecewise linear reference paths. This controller, together with a kinematic model of the trailer configuration, is then used for forward simulations within a Closed-Loop Rapidly Exploring Random Tree framework to generate motion plans that are not only kinematically feasible but also include the limitations of the controller's tracking performance when reversing. The approach is evaluated over a series of Monte Carlo simulations on three different scenarios and impressive success rates are achieved. Finally, the approach is successfully tested on a small scale test platform where the motion plan is calculated and then sent to the platform for execution.

In many traffic situations there are times where interaction with other drivers is necessary and unavoidable in order to safely progress towards an intended destination. This is especially true for merge manoeuvres into dense traffic, where drivers sometimes must be somewhat aggressive and show the intention of merging in order to interact with the other driver and make the driver open the gap needed to execute the manoeuvre safely. Many motion planning frameworks for autonomous vehicles adopt a reactive approach where simple models of other traffic participants are used and therefore need to adhere to large margins in order to behave safely. However, the large margins needed can sometimes get the system stuck in congested traffic where time gaps between vehicles are too small. In other situations, such as a highway merge, it can be significantly more dangerous to stop on the entrance ramp if the gaps are found to be too small than to make a slightly more aggressive manoeuvre and let the driver behind open the gap needed. To remedy this problem, this work uses the Intelligent Driver Model (IDM) to explicitly model the interaction of other drivers and evaluates the risk by their required deceleration, in a similar manner as the Minimizing Overall Braking Induced by Lane changes (MOBIL) model that has been used in large scale traffic simulations before. This allows the algorithm to evaluate the effect on other drivers depending on our own trajectory plans by simulating the nearby traffic situation. Finding a globally optimal solution is often intractable in these situations, so instead a large set of candidate trajectories are generated and evaluated against the traffic scene by forward simulations of other traffic participants. By discretization and using an efficient trajectory generator together with efficient modelling of the traffic scene, real-time demands can be met.
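
The IDM acceleration law used for the forward simulations has a simple closed form; a sketch with typical textbook parameter values, not the ones used in this paper:

```python
# Intelligent Driver Model (IDM) acceleration, as used for forward-simulating
# surrounding traffic.  Parameter values are common textbook choices.

def idm_acceleration(v, v_lead, gap,
                     v0=30.0,    # desired speed [m/s]
                     T=1.5,      # desired time headway [s]
                     a_max=1.4,  # maximum acceleration [m/s^2]
                     b=2.0,      # comfortable deceleration [m/s^2]
                     s0=2.0,     # minimum gap [m]
                     delta=4.0):
    dv = v - v_lead                                   # closing speed
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * (a_max * b) ** 0.5))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# A car at 25 m/s closing on a 15 m/s leader with a 20 m gap must brake hard:
acc = idm_acceleration(v=25.0, v_lead=15.0, gap=20.0)
```

Evaluating this required deceleration for the driver behind a candidate merge gap is exactly the kind of MOBIL-style risk measure the abstract describes.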

The aim of this paper is to suggest a modification to the usual bounded confidence model of opinion dynamics, so that “changes of opinion” (intended as changes of the sign of the initial state) of an agent are never induced by the dynamics. In order to do so, a bipartite consensus model is used, endowing it with a confidence range. The resulting signed bounded confidence model has a state-dependent connectivity and a behavior similar to its standard counterpart, but in addition it preserves the sign of the opinions by “repelling away” opinions localized near the origin but on different sides with respect to 0.
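
A simplified illustration of the model class (not the paper's exact update law): agents interact only within a confidence range `eps`, and an opposite-signed neighbour's opinion enters with flipped sign, so every interaction target shares the sign of the agent's own opinion and signs of opinions are preserved:

```python
# Signed bounded-confidence sketch: x_i moves toward s_ij * x_j for neighbours
# within confidence range eps, with s_ij = sign(x_i * x_j).  An opposite-signed
# neighbour near 0 therefore pushes the agent away from the origin rather than
# across it.  Parameters and initial opinions are illustrative.
import math

def step(x, eps=0.5, gamma=0.1):
    n = len(x)
    out = []
    for i in range(n):
        upd = sum(math.copysign(1.0, x[i] * x[j]) * x[j] - x[i]
                  for j in range(n)
                  if j != i and abs(x[i] - x[j]) <= eps)
        out.append(x[i] + gamma * upd)
    return out

x = [-0.3, -0.1, 0.1, 0.4]
for _ in range(200):
    x = step(x)
# No opinion has changed sign during the evolution.
```

With a small step size every target `s_ij * x_j` has the same sign as `x_i`, so the update is a positive combination of same-signed terms, which is the sign-preservation property the abstract emphasises.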

In multiparametric programming an optimization problem which is dependent on a parameter vector is solved parametrically. In control, multiparametric quadratic programming (mp-QP) problems have become increasingly important since the optimization problem arising in Model Predictive Control (MPC) can be cast as an mp-QP problem, which is referred to as explicit MPC. One of the main limitations with mp-QP and explicit MPC is the amount of memory required to store the parametric solution and the critical regions. In this paper, a method for exploiting low rank structure in the parametric solution of an mp-QP problem in order to reduce the required memory is introduced. The method is based on ideas similar to what is done to exploit low rank modifications in generic QP solvers, but is here applied to mp-QP problems to save memory. The proposed method has been evaluated experimentally, and for some examples of relevant problems the relative memory reduction is an order of magnitude compared to storing the full parametric solution and critical regions.

In many practical applications of system identification, it is not feasible to measure both the inputs applied to the system as well as the output. In such situations, it is desirable to estimate both the inputs and the dynamics of the system simultaneously; this is known as the blind identification problem. In this paper, we provide a novel extension of subspace methods to the blind identification of multiple-input multiple-output linear systems. We assume that our inputs lie in a known subspace, and we are able to formulate the identification problem as rank constrained optimization, which admits a convex relaxation. We show the efficacy of this formulation with a numerical example.

The safe fusion algorithm is benchmarked against three other methods in distributed target tracking scenarios. Safe fusion is a fairly unknown method that, similarly to, e.g., covariance intersection, can be used to fuse potentially dependent estimates without double counting data. This makes it suitable for distributed target tracking, where dependencies are often unknown or difficult to derive. The results show that safe fusion is a very competitive alternative in five evaluated scenarios, while at the same time being easy to implement and compute compared to the other evaluated methods. Hence, safe fusion is an attractive alternative in track-to-track fusion systems.

In this paper a cascaded approach for stabilization and path tracking of a general 2-trailer vehicle configuration with an off-axle hitching is presented. A low level Linear Quadratic controller is used for stabilization of the internal angles while a pure pursuit path tracking controller is used on a higher level to handle the path tracking. Piecewise linearity is the only requirement on the control reference which makes the design of reference paths very general. A Graphical User Interface is designed to make it easy for a user to design control references for complex manoeuvres given some representation of the surroundings. The approach is demonstrated with challenging path following scenarios both in simulation and on a small scale test platform.

This paper is concerned with cooperative Terrain Aided Navigation of a network of aircraft using fusion of Radar Altimeter and inter-node range measurements. State inference is performed using a Rao-Blackwellized Particle Filter with online measurement noise statistics estimation. For terrain coverage measurement noise parameter identification, an online Expectation Maximization algorithm is proposed, where local sufficient statistics at each node are calculated in the E-step, which are then distributed to neighboring nodes using a random gossip algorithm to perform the M-step at each node. Simulation results show that improvement on positioning and calibration performance can be achieved compared to a non-cooperative approach.

In this paper, recent results on the evaluation of the Bayesian Cramér-Rao bound for jump Markov systems are presented. In particular, previous work is extended to jump Markov systems where the discrete mode variable enters into both the process and measurement equations, as well as where it enters exclusively into the measurement equation. Recursive approximations with finite memory requirements are derived, and algorithms for checking the validity of these approximations are established. The tightness of the bound and the validity of its approximation are investigated on a couple of examples.

In this paper, we study the problem of controlling large scale networks with controls which can assume only positive values. Given an adjacency matrix A, an algorithm is developed that constructs an input matrix B with a minimal number of columns such that the resulting system (A, B) is positively controllable. The algorithm combines the graphical methods used for structural controllability analysis with the theory of positive linear dependence. The number of control inputs guaranteeing positive controllability is near optimal.

We still have very little knowledge about how our brains decouple different sound sources, which is known as solving the cocktail party problem. Several approaches, including ERP, time-frequency analysis and, more recently, regression and stimulus reconstruction approaches, have been suggested for solving this problem. In this work, we study the problem of correlating EEG signals to different sets of sound sources with the goal of identifying the single source to which the listener is attending. Here, we propose a method for finding the number of parameters needed in a regression model to avoid overlearning, which is necessary for determining the attended sound source with high confidence in order to solve the cocktail party problem.

In this paper, a calibration method for a triaxial accelerometer using a triaxial gyroscope is presented. The method uses a sensor fusion approach, combining the information from the accelerometers and gyroscopes to find an optimal calibration using maximum likelihood. The method has been tested by using real sensors in smartphones to perform orientation estimation and verified through Monte Carlo simulations. In both cases, the method is shown to provide a proper calibration, reducing the effect of sensor errors and improving orientation estimates.
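
For intuition about what such a calibration estimates, a far simpler static six-position scheme (not the paper's maximum-likelihood sensor-fusion method) recovers a per-axis scale factor and bias from readings taken with the axis pointing up and down against gravity:

```python
# Static two-orientation calibration sketch for one accelerometer axis with
# error model m = k * a + b.  Pointing the axis up gives m_up = k*g + b and
# pointing it down gives m_down = -k*g + b, which solves for k and b.
# The paper's method instead fuses gyro data and avoids precise alignment.
G = 9.81  # gravity [m/s^2]

def calibrate_axis(m_up, m_down):
    k = (m_up - m_down) / (2 * G)   # scale factor
    b = (m_up + m_down) / 2         # bias
    return k, b

# Synthetic readings from a sensor with scale 1.02 and bias 0.15 m/s^2:
k, b = calibrate_axis(m_up=1.02 * G + 0.15, m_down=-1.02 * G + 0.15)
corrected = lambda m: (m - b) / k   # apply the calibration to a raw reading
```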

In inertial motion capture, a multitude of body segments are equipped with inertial sensors, consisting of 3D accelerometers and 3D gyroscopes. Using an optimization-based approach to solve the motion capture problem allows for natural inclusion of biomechanical constraints and for modeling the connection of the body segments at the joint locations. The computational complexity of solving this problem grows both with the length of the data set and with the number of sensors and body segments considered. In this work, we present a scalable and distributed solution to this problem using tailored message passing, capable of exploiting the structure that is inherent in the problem. As a proof-of-concept we apply our algorithm to data from a lower body configuration.

This paper presents a batch estimation method for Simultaneous Localization and Mapping (SLAM) using the Prediction Error Method (PEM). The estimation problem considers landmarks as parameters while treating the dynamics using state space models. The gradient needed for parameter estimation is computed recursively using an Extended Kalman Filter (EKF). Results using simulations with a monocular camera and inertial sensors are presented and compared to a Nonlinear Least-Squares (NLS) estimator. The presented method produces both lower RMSEs and scales better with the batch length.

Radar and sonar provide information of both range and radial velocity to unknown objects. This is accomplished by emitting a signal waveform and computing the round trip time and Doppler shift. Estimation of the round trip time and Doppler shift is usually done separately without considering the couplings between these two object related quantities. The purpose of this contribution is to first model the amplitude, time shift and time scale of the returned signal in terms of the object related states range and velocity, and analyse the Cramér-Rao lower bound (CRLB) for the joint range and velocity estimation problem. One of the conclusions is that there is negative correlation between range and velocity. The maximum likelihood (ML) cost function also confirms this strong negative correlation. For target tracking applications, the use of the correct covariance matrix for the measurement vector gives a significant gain in information, compared to using the variance of range and velocity assuming independence. In other words, the knowledge of the correlation tells the tracking filter that a too large range measurement comes most likely with a too small velocity measurement, and vice versa. Experiments with sound pulses reflected in a wall indoors confirm the main conclusion of negative correlation.

We consider a target tracking problem where, in addition to the usual sensor measurements, accurate observations with uncertain timestamps are available. Such observations could, e.g., come from traces left by a target or from witnesses of an event, and have the potential in some scenarios to improve the accuracy of an estimate significantly. The Bayesian solution to the smoothing problem for one observation with uncertain timestamp is derived for a linear Gaussian state space model. The joint and marginal distributions of the states and uncertain time are derived, as well as the minimum mean squared error (MMSE) and maximum a posteriori (MAP) estimators. To build intuition for the problem under consideration, a simple first-order example is presented and its posterior distributions and point estimators are compared and examined in some depth.
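
The difference between the MMSE and MAP estimators becomes tangible on the kind of multimodal posterior that an uncertain timestamp induces. A sketch on an illustrative two-component Gaussian mixture posterior (all numbers are assumptions for the example):

```python
# MMSE vs MAP on a two-component Gaussian-mixture posterior, e.g. when an
# accurate observation could belong to one of two candidate timestamps.
import math

def mixture_pdf(x, comps):
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, m, s in comps)

comps = [(0.6, 0.0, 1.0), (0.4, 5.0, 1.0)]   # (weight, mean, std)

# MMSE estimate = posterior mean (closed form for a mixture).
x_mmse = sum(w * m for w, m, _ in comps)

# MAP estimate = argmax of the density (grid search here).
grid = [i * 0.001 for i in range(-3000, 8000)]
x_map = max(grid, key=lambda x: mixture_pdf(x, comps))
# The estimators disagree: MMSE sits between the modes, MAP on the taller one.
```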

GPS is widely used for localization and tracking; however, traditional GPS receivers consume too much energy for many applications. This paper implements and evaluates the performance of a low-energy GPS prototype. Whereas a traditional GPS receiver needs to sample signals transmitted by satellites for 30 seconds to estimate its position, our prototype reduces this time by three orders of magnitude and can compute positions from only 2 milliseconds of data. We present a new algorithm that increases robustness by filtering on estimated residuals instead of using an altitude database. In addition, we show that our new algorithm works with both fixed and moving targets. The solution consists of (1) a portable device that samples the GPS signal and (2) a server that utilizes Doppler navigation and Coarse Time Navigation to estimate positions. We performed tests in a wide variety of environments and situations. These tests show that our prototype provides a median positioning error of roughly 40 meters even when the GPS receiver is moving at 80 kilometres per hour.

This paper presents a feasibility study on smartphone localization of missing persons in Search And Rescue (SAR) operations using widely available Commercial-Off-The-Shelf (COTS) products. We assume (1) that the missing person wears an enabled smartphone and (2) that messages transmitted by this smartphone can be intercepted by mobile agents at known positions. We present a proof-of-concept that consists of several mobile agents carrying smartphones that measure the Received Signal Strength (RSS) of Wi-Fi messages transmitted by the smartphone of the missing person. The positions of the mobile agents are determined using the GPS unit on the smartphones. The mobile agents send the collected RSS and GPS data to a central processing unit, which processes the data in real-time and guides mobile agents in SAR operations by estimating the position of the missing person. Our central processing unit runs a localization algorithm that requires no calibration, because rescue operations usually take place in unknown environments with unknown hardware. Our experiments in a 250x130 m2 outdoor field show that our localization system provides an average localization performance of roughly 15 meters, which is sufficient for most SAR operations of interest. In addition, we performed several successful tests with a quadcopter to show the feasibility of using unmanned vehicles in SAR operations.

Positioning in cellular networks is often based on mobile-assisted measurements of serving and neighboring base stations. Traditionally, positioning is considered to be enabled when the mobile provides measurements of three different base stations. In this paper, we additionally investigate positioning based on time series of Time Of Flight (TOF) and Time Difference Of Arrival (TDOA) measurements gathered from two base stations with known positions, where the specific pair of base stations involved changes along the trajectory of the mobile station. Each report contains the TOF for the serving base station and one TDOA measurement for the most favorable neighboring base station relative to the serving base station. We derive an explicit analytical solution related to the intersection of the absolute distance circle (from TOF) and the relative distance hyperbola (from TDOA). We consider both the geometric noise-free problem and the more realistic problem with additive noise, as delivered in the 3rd Generation Partnership Project (3GPP) Long-Term Evolution (LTE). Positioning performance is evaluated using the Cramér-Rao lower bound.
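
In the noise-free geometric problem, TOF fixes the distance r1 to the serving base station and TDOA fixes the range difference to the neighbour; since r1 is known, r2 = r1 + TDOA is known too, and the hyperbola constraint collapses to a second circle, so the position lies on a two-circle intersection. A numeric sketch with illustrative base station and target positions (the paper derives the general closed-form solution):

```python
# Noise-free TOF + TDOA positioning sketch: with r1 known from TOF and
# r2 = r1 + TDOA, the mobile lies on the intersection of two circles.
import math

def circle_intersection(p1, r1, p2, r2):
    d = math.dist(p1, p2)
    a = (r1**2 - r2**2 + d**2) / (2 * d)      # foot point along the baseline
    h = math.sqrt(max(0.0, r1**2 - a**2))     # offset from the baseline
    ex = ((p2[0] - p1[0]) / d, (p2[1] - p1[1]) / d)
    mx, my = p1[0] + a * ex[0], p1[1] + a * ex[1]
    return [(mx - h * ex[1], my + h * ex[0]),
            (mx + h * ex[1], my - h * ex[0])]

bs1, bs2 = (0.0, 0.0), (1000.0, 0.0)          # serving and neighbour BS [m]
target = (400.0, 300.0)
r1 = math.dist(bs1, target)                   # TOF measurement (noise-free)
tdoa = math.dist(bs2, target) - r1            # TDOA measurement (noise-free)
candidates = circle_intersection(bs1, r1, bs2, r1 + tdoa)
# The true position is one of the two candidates (mirror ambiguity across
# the baseline), which is why the time series along the trajectory matters.
```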

We consider personal navigation systems in devices equipped with inertial sensors and GPS, where we propose an improved Pedestrian Dead Reckoning (PDR) algorithm that learns gait parameters in time intervals when position estimates are available, for instance from GPS or an indoor positioning system (IPS). A novel filtering approach is proposed that is able to learn internal gait parameters in the PDR algorithm, such as the step length and the step detection threshold. Our approach is based on a multi-rate Kalman filter bank that estimates the gait parameters when position measurements are available, which improves PDR in time intervals when the position is not available, for instance when passing from outdoor to indoor environments where IPS is not available. The effectiveness of the new approach is illustrated on several real world experiments.
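
The gait-parameter learning can be sketched with a one-state Kalman filter: while position fixes are available, each detected step provides a noisy measurement of the true step length, which sharpens the estimate used later for dead reckoning. All numbers below are illustrative, and the single-state filter is a simplification of the paper's multi-rate filter bank:

```python
# One-state Kalman-filter sketch of step-length learning: while GPS/IPS fixes
# are available, each step yields a noisy step-length measurement z, and the
# filter refines the estimate L used by PDR once fixes disappear.
import random

random.seed(1)
true_step = 0.74           # metres (unknown to the filter)
L, P = 0.60, 0.05          # initial step-length estimate and its variance
R = 0.02                   # measurement noise variance from the position fix

for _ in range(100):       # 100 steps while position fixes are available
    z = random.gauss(true_step, R ** 0.5)  # step length implied by the fixes
    K = P / (P + R)        # Kalman gain (no process noise in this sketch)
    L = L + K * (z - L)
    P = (1 - K) * P
# L is now close to true_step and can drive dead reckoning indoors.
```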

This paper proposes a diagonal covariance matrix approximation for Wide-Sense Stationary (WSS) signals with correlated Gaussian noise. Existing signal models that incorporate correlations often require regularization of the covariance matrix so that the covariance matrix can be inverted. The disadvantage of this approach is that matrix inversion is computationally intensive and regularization decreases precision. We use Bienaymé's theorem to approximate the covariance matrix by a diagonal one, so that matrix inversion becomes trivial, even with nonuniform sampling rather than only the uniform sampling that was considered in earlier work. This approximation reduces the computational complexity of the estimator and the estimation bound significantly. We numerically validate this approximation and compare our approach with the Maximum Likelihood Estimator (MLE) and Cramér-Rao Lower Bound (CRLB) for multivariate Gaussian distributions. Simulations show that our approach differs by less than 0.1% from the MLE and CRLB when the observation time is large compared to the correlation time. Additionally, simulations show that in the case of non-uniform sampling, we increase the performance by an order of magnitude in comparison to earlier work. We limit this study to correlated signals in the time domain, but the results are also applicable in the space domain.

We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method based on an interacting pool of standard and conditional sequential Monte Carlo samplers. Like related methods, iPMCMC is a Markov chain Monte Carlo sampler on an extended space. We present empirical results that show significant improvements in mixing rates relative to both non-interacting PMCMC samplers and a single PMCMC sampler with an equivalent memory and computational budget. An additional advantage of the iPMCMC method is that it is suitable for distributed and multi-core architectures.

In Moving Horizon Estimation (MHE) the computed estimate is found by solving a constrained finite-time optimal estimation problem in real-time at each sample in a receding horizon fashion. The constrained estimation problem can be solved by, e.g., interior-point (IP) or active-set (AS) methods, where the main computational effort in both methods is known to be the computation of the search direction, i.e., the Newton step. This is often done using generic sparsity exploiting algorithms or serial Riccati recursions, but as parallel hardware is becoming more commonly available the need for parallel algorithms for computing the Newton step is increasing. In this paper a newly developed tailored, non-iterative parallel algorithm for computing the Newton step using the Riccati recursion for Model Predictive Control (MPC) is extended to MHE problems. The algorithm exploits the special structure of the Karush-Kuhn-Tucker system for the optimal estimation problem. As a result it is possible to obtain logarithmic complexity growth in the estimation horizon length, which can be used to reduce the computation time for IP and AS methods when applied to what are today considered challenging estimation problems. Furthermore, promising numerical results have been obtained using an ANSI-C implementation of the proposed algorithm, which uses Message Passing Interface (MPI) together with InfiniBand and is executed on true parallel hardware. Beyond MHE, due to similarities in the problem structure, the algorithm can be applied to various forms of on-line and off-line smoothing problems.

Kernel-based machine learning methods have gained increasing interest in flow modeling and prediction in recent years. The Gaussian process (GP) is one example of such kernel-based methods, which can provide very good performance for nonlinear problems. In this work, we apply GP regression to flow modeling and prediction of athletes in ski races, but the proposed framework can be generally applied to other use cases with device trajectories of positioned data. Some specific aspects can be addressed when the data is periodic, as in sports where the event is split up over multiple laps along a specific track. Flow models of both the individual skier and a cluster of skiers are derived and analyzed. Performance has been evaluated using data from the Falun Nordic World Ski Championships 2015, in particular the Men’s cross country 4 × 10 km relay. The results show that the flow models vary spatially for different skiers and clusters. We further demonstrate that GP regression provides powerful and accurate models for flow prediction.
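
As background (the standard GP regression equations, not anything specific to this paper), the posterior mean and variance at a test location $x_*$, given training inputs $X$, targets $\mathbf{y}$, kernel matrix $K = k(X,X)$ and noise variance $\sigma^2$, are

```latex
\hat{f}(x_*) = k(x_*, X)\,\bigl(K + \sigma^2 I\bigr)^{-1}\mathbf{y},
\qquad
\operatorname{var}\bigl[f(x_*)\bigr]
  = k(x_*, x_*) - k(x_*, X)\,\bigl(K + \sigma^2 I\bigr)^{-1} k(X, x_*).
```

In a flow-prediction setting, $X$ would hold positions along the track and $\mathbf{y}$ the observed quantity (e.g. speed) at those positions.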

The speed of a wheeled vehicle is usually estimated using wheel speed sensors (WSS) or GPS. If these signals are unavailable, other methods must be used. We propose a novel approach exploiting the fact that vibrations from rotating axles, with fundamental frequency proportional to vehicle speed, are transmitted via the vehicle chassis. Using an accelerometer, these vibrations can be tracked to estimate vehicle speed while other sources of vibrations act as disturbances. A state-space model for the dynamics of the harmonics is presented and formulated such that there is a conditional linear-Gaussian substructure, enabling efficient Rao-Blackwellized methods. A variant of the Rao-Blackwellized point-mass filter is derived, significantly reducing computational complexity and reducing the memory requirements from quadratic to linear in the number of grid points. It is applied to experimental data from the sensor cluster of a car and validated using the rotational frequency from WSS data. The proposed method shows improved performance and robustness in comparison to a Rao-Blackwellized particle filter implementation and a frequency spectrum maximization method.

This paper describes the motivation, the current state and the further actions of an improvement process of the engineering education at the Military Institute of Engineering (IME) in Brazil. Based on the reasons for why and how to change, the CDIO framework has been chosen as the kernel of this improvement process. The activities realized, the plan for further actions, and the open questions are presented in this paper. The paper is a condensed presentation of the report (Cerqueira et al., 2016), where a thorough background and more details can be found.

Svante Gunnarsson, Ylva Jung, Clas Veibäck, Torkel Glad, "IO (Implement and Operate) First in an Automatic Control Context", Proceedings of the 12th International CDIO Conference, Turku University of Applied Sciences,Turku, Finland, June 12-16, 2016, Research Reports from Turku University of Applied Sciences, Vol. 45, 238-249, 2016.

A first course in Automatic control is presented. A main objective of the course is to put most of the emphasis on the Implement and Operate phases in the process of developing a control system for a process. The course is built around a large amount of student active learning based on three extensive laboratory exercises, where each laboratory exercise can have a duration of up to two weeks. For each of the laboratory exercises there is a sequence of learning activities supporting the students’ learning: introductory lecture, problem solving session, preparation work, help-desk session, independent work in the laboratory, and a final demonstration of the control system. In addition, there is a small project where the task is to write a manual for a process operator. The laboratory tasks involve implementation of a control system in an industrial PLC (Programmable Logic Controller) and development of an operator interface.

We study severely quantized received signal strength (RSS)-based cooperative localization in wireless sensor networks. We adopt the well-known sum-product algorithm over a wireless network (SPAWN) framework in our study. To address the challenge brought by severely quantized measurements, we adopt the principle of importance sampling and design appropriate proposal distributions. Moreover, we propose a parametric SPAWN in order to reduce both the communication overhead and the computational complexity. Experiments with real data corroborate that the proposed algorithms can achieve satisfactory localization accuracy for severely quantized RSS measurements. In particular, the proposed parametric SPAWN outperforms its competitors by far in terms of communication cost. We further demonstrate that knowledge about non-connected sensors can further improve the localization accuracy of the proposed algorithms.

Parametric Cramér-Rao lower bounds (CRLBs) are given for discrete-time systems with non-zero process noise. Recursive expressions for the conditional bias and mean-square error (MSE) (given a specific state sequence) are obtained for the Kalman filter estimating the states of a linear Gaussian system. It is discussed that the Kalman filter is conditionally biased for a non-zero process noise realization in the given state sequence. Recursive parametric CRLBs are obtained for biased linear state estimators of linear Gaussian systems. Simulation studies are conducted in which it is shown that the Kalman filter is not an efficient estimator in a conditional sense.

The commercial interest in proximity services is increasing. Application examples include location-based information and advertisements, logistics, social networking, file sharing, etc. In this paper, we consider network-based positioning based on time series of proximity reports from a mobile device, either only a proximity indicator, or a vector of RSS from observed nodes. Such positioning corresponds to a latent and nonlinear observation model. To address these problems, we combine two powerful tools, namely particle filtering and Gaussian process regression (GPR) for radio signal propagation modeling. The latter also provides some insights into the spatial correlation of the radio propagation in the considered area. Radio propagation modeling and positioning performance are evaluated in a typical office area with Bluetooth Low Energy (BLE) beacons deployed for proximity detection and reports. Results show that the positioning accuracy can be improved by using GPR.

This paper investigates the usefulness of multi-frequency received signal strength (RSS) for indoor localization. A collected set of data from four sites, containing 7 frequencies from dual receivers and a high-accuracy reference positioning system, is presented. The collected data is also made publicly available through ResearchGate. The data is analyzed with respect to spatial variations using Gaussian processes (GP). The results show that there are more rapid signal variations across corridors than along them. The uniqueness of RSS fingerprints is analyzed, suggesting that sequences of measurements in smoothing, or smoothing-like, algorithms that can handle temporary position ambiguities are likely the best choice for localization applications.

Drive cycle following is important for concept comparisons when evaluating vehicle concepts, but it can be time-consuming to develop good driver models that can achieve accurate following of a specific velocity profile. Here, a new approach is proposed where a simple driver model based on a PID controller is extended with an Iterative Learning Control (ILC) algorithm. Simulation results using a nonlinear vehicle and control system model show that it is possible to achieve very good cycle following in a few iterations with little tuning effort. It is also possible to utilize the repetitive behavior in the drive cycle to accelerate the convergence of the ILC algorithm even further.
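
The core ILC mechanism can be sketched in a few lines. The plant, gain, and reference below are made-up stand-ins, not the paper's nonlinear vehicle model; the point is only that repeating the same cycle and feeding back the tracking error drives the error towards zero:

```python
def simulate(u, a=0.8, b=0.5):
    """Crude first-order plant y[t] = a*y[t-1] + b*u[t], standing in for the vehicle."""
    y, prev = [], 0.0
    for ut in u:
        prev = a * prev + b * ut
        y.append(prev)
    return y

N = 50
ref = [1.0] * N            # illustrative velocity profile (drive cycle) to follow
u = [0.0] * N              # feedforward signal, refined over iterations

for _ in range(50):        # one iteration = one full run through the cycle
    y = simulate(u)
    e = [r - yi for r, yi in zip(ref, y)]
    # ILC update law: u_{k+1}[t] = u_k[t] + gamma * e_k[t]
    u = [ui + 0.3 * ei for ui, ei in zip(u, e)]

rms = (sum(ei * ei for ei in e) / N) ** 0.5
print(rms)
```

After a few dozen iterations the RMS tracking error is negligible, even though no model of the plant was used in the update law.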

In this paper, we propose a new method to identify biochemical reaction networks (i.e. both reactions and kinetic parameters) from heterogeneous datasets. Such datasets can contain (a) data from several replicates of an experiment performed on a biological system; (b) data measured from a biochemical network subjected to different experimental conditions, for example, changes/perturbations in biological inductions, temperature, gene knock-out, gene over-expression, etc. Simultaneous integration of various datasets to perform system identification has the potential to avoid non-identifiability issues typically arising when only single datasets are used.

System identification with regularization methods has attracted increasing attention recently and is a complement to the current standard maximum likelihood/prediction error method. In this paper, we focus on the kernel-based regularization method and give a spectral analysis of the so-called diagonal correlated (DC) kernel, one family of kernel structures that has been proven useful for linear time-invariant system identification. In particular, using the theory of Bessel functions, we derive the eigenvalues and corresponding eigenfunctions of the DC kernel. Accordingly, we derive the Karhunen-Loève expansion of the stochastic process whose covariance function is the DC kernel.
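
For reference, one common parameterization of the DC kernel in this literature (under the usual hyperparameter constraints) is

```latex
k_{\mathrm{DC}}(s,t) = c\,\lambda^{(s+t)/2}\,\rho^{|s-t|},
\qquad c \ge 0,\; 0 \le \lambda < 1,\; |\rho| \le 1,
```

and the choice $\rho = \sqrt{\lambda}$ recovers the first-order stable spline (TC) kernel $k(s,t) = c\,\lambda^{\max(s,t)}$.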

A skew-t variational Bayes filter (STVBF) is applied to indoor positioning with time-of-arrival (TOA) based distance measurements and pedestrian dead reckoning (PDR). The proposed filter accommodates large positive outliers caused by occasional non-line-of-sight (NLOS) conditions by using a skew-t model of measurement errors. Real-data tests using the fusion of inertial sensors based PDR and ultra-wideband based TOA ranging show that the STVBF clearly outperforms the extended Kalman filter (EKF) in positioning accuracy, at a computational complexity about three times that of the EKF.

In this paper, the maximum entropy property of the discrete-time first-order stable spline kernel is studied. The advantages of studying this property in the discrete-time domain instead of the continuous-time domain are outlined. One such advantage is that the differential entropy rate is well-defined for discrete-time stochastic processes. By formulating the maximum entropy problem for discrete-time stochastic processes we provide a simple and self-contained proof to show what maximum entropy property the discrete-time first-order stable spline kernel has.

We present algorithm evaluations for ATR of small sea vessels. The targets are at kilometre distance from the sensors, which means that the algorithms have to deal with images affected by turbulence and mirage phenomena. We evaluate previously developed algorithms for registration of 3D-generating laser radar data. The evaluations indicate that some robustness to turbulence- and mirage-induced uncertainties can be achieved by our probabilistic registration method.

We also assess methods for target classification and target recognition on these new 3D data. An algorithm for detecting moving vessels in infrared image sequences is presented; it is based on optical flow estimation. Detection of a moving target with an unknown spectral signature in a maritime environment is a challenging problem due to camera motion, background clutter, turbulence and the presence of mirage. First, the optical flow caused by the camera motion is eliminated by estimating the global flow in the image. Second, connected regions containing significant motions that differ from the camera motion are extracted. It is assumed that motion caused by a moving vessel is more temporally stable than motion caused by mirage or turbulence. Furthermore, it is assumed that the motion caused by the vessel is more homogeneous with respect to both magnitude and orientation than motion caused by mirage and turbulence. Sufficiently large connected regions with a flow of acceptable magnitude and orientation are considered target regions. The method is evaluated on newly collected sequences of SWIR and MWIR images, with varying targets, target ranges and background clutter.

Finally we discuss a concept for combining passive and active imaging in an ATR process. The main steps are passive imaging for target detection, active imaging for target/background segmentation and a fusion of passive and active imaging for target recognition.

Gaussian process regression is a popular method for non-parametric probabilistic modeling of functions. The Gaussian process prior is characterized by so-called hyperparameters, which often have a large influence on the posterior model and can be difficult to tune. This work provides a method for numerical marginalization of the hyperparameters, relying on the rigorous framework of sequential Monte Carlo. Our method is well suited for online problems, and we demonstrate its ability to handle real-world problems with several dimensions and compare it to other marginalization methods. We also conclude that our proposed method is a competitive alternative to the commonly used point estimates maximizing the likelihood, both in terms of computational load and its ability to handle multimodal posteriors.

We provide conditions that guarantee existence, uniqueness and stability of strictly positive equilibria for nonlinear cooperative systems associated to vector fields that are concave or subhomogeneous. This class of positive systems describes well interconnected dynamics that are of key interest for communication, biological, economical and neural network applications. These conditions can be formulated directly in terms of the spectral radius of the Jacobian of the system, and do not require the use of constant inputs to move the equilibrium point from the origin to the interior of the positive orthant.

In order to investigate the cases in which an externally positive discrete-time system fails to have a minimal positive realization, in this paper we introduce the notion of minimal eventually positive realization, for which the state update matrix becomes positive after a certain power. This property captures the idea that in the impulse response of an externally positive system the state of a minimal realization may fail to be positive, but only transiently. It is shown in the paper that whenever a minimal eventually positive realization exists, then the sequence of Markov parameters of the impulse response admits decimated subsequences for which minimal positive realizations exist and can be obtained by downsampling the eventually positive realization.

To estimate the smoothing distribution in a nonlinear state space model, we apply the conditional particle filter with ancestor sampling. This gives an iterative algorithm in a Markov chain Monte Carlo fashion, with asymptotic convergence results. The computational complexity is analyzed, and our proposed algorithm is successfully applied to the challenging problem of sensor fusion between ultrawideband and accelerometer/gyroscope measurements for indoor positioning. It appears to be a competitive alternative to existing nonlinear smoothing algorithms, in particular the forward filtering-backward simulation smoother.
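
The conditional particle filter with ancestor sampling builds on the standard bootstrap particle filter. A minimal sketch of that underlying building block, on a made-up scalar linear model (chosen only so the result is easy to sanity-check), is:

```python
import math
import random

def bootstrap_pf(ys, N=500, q=0.1, r=0.5):
    """Bootstrap particle filter for the illustrative scalar model
       x[t] = 0.9*x[t-1] + v,  y[t] = x[t] + e,  v ~ N(0, q), e ~ N(0, r)."""
    xs = [random.gauss(0, 1) for _ in range(N)]
    means = []
    for y in ys:
        # Propagate particles through the dynamics (the proposal).
        xs = [0.9 * x + random.gauss(0, math.sqrt(q)) for x in xs]
        # Weight by the measurement likelihood.
        ws = [math.exp(-(y - x) ** 2 / (2 * r)) for x in xs]
        s = sum(ws)
        ws = [w / s for w in ws]
        means.append(sum(w * x for w, x in zip(ws, xs)))
        # Multinomial resampling.
        xs = random.choices(xs, weights=ws, k=N)
    return means

random.seed(0)
# Simulate data from the same model.
x, ys, truth = 0.0, [], []
for _ in range(100):
    x = 0.9 * x + random.gauss(0, 0.1 ** 0.5)
    truth.append(x)
    ys.append(x + random.gauss(0, 0.5 ** 0.5))

est = bootstrap_pf(ys)
rmse = (sum((a - b) ** 2 for a, b in zip(est, truth)) / len(truth)) ** 0.5
print(round(rmse, 3))
```

The conditional variant used in the abstract additionally clamps one prespecified trajectory through the resampling steps (with ancestor sampling for that reference trajectory), turning the filter into an MCMC kernel on the smoothing distribution.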

In Model Predictive Control (MPC) the control signal is computed by solving a constrained finite-time optimal control (CFTOC) problem at each sample in the control loop. The CFTOC problem can be solved by, e.g., interior-point or active-set methods, where the main computational effort in both methods is known to be the computation of the search direction, i.e., the Newton step. This is often done using generic sparsity exploiting algorithms or serial Riccati recursions, but as parallel hardware is becoming more commonly available the need for parallel algorithms for computing the Newton step is increasing. In this paper a tailored, non-iterative parallel algorithm for computing the Newton step using the Riccati recursion is presented. The algorithm exploits the special structure of the Karush-Kuhn-Tucker system for a CFTOC problem. As a result it is possible to obtain logarithmic complexity growth in the prediction horizon length, which can be used to reduce the computation time for popular state-of-the-art MPC algorithms when applied to what are today considered challenging control problems.
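
For context (this is the textbook serial recursion, not the paper's parallel variant), the backward Riccati recursion for an unconstrained LQ subproblem with dynamics $x_{t+1} = Ax_t + Bu_t$ and stage costs weighted by $Q$ and $R$ reads

```latex
P_N = Q_N, \qquad
P_t = Q + A^\top P_{t+1} A
    - A^\top P_{t+1} B\,\bigl(R + B^\top P_{t+1} B\bigr)^{-1} B^\top P_{t+1} A,
\quad t = N-1, \dots, 0,
```

whose inherently sequential dependence of $P_t$ on $P_{t+1}$ is exactly what makes parallelization over the horizon nontrivial.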

We study maximum likelihood (ML) position estimation using quantized received signal strength measurements. In order to mitigate the undesired quantization effect in the observations, the dithering technique is adopted. Various dither noise distributions are considered and the corresponding likelihood functions are derived. Simulation results show that the proposed ML estimator with dithering is able to generate a significantly reduced bias but a modestly increased mean-square error as compared to the conventional ML estimator without dithering.
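
The bias-reducing effect of dithering on a coarse quantizer can be illustrated with a toy Monte Carlo experiment. The quantizer, the value and the dither distribution below are illustrative assumptions, not the paper's RSS model:

```python
import random

random.seed(0)

def quantize(x):
    """Mid-tread uniform quantizer with unit step, a stand-in for coarse RSS reports."""
    return float(round(x))

true_rss = 0.3          # hypothetical value lying between quantization levels
n = 20000

# Without dithering every report is identical, so averaging cannot remove the bias.
plain = sum(quantize(true_rss) for _ in range(n)) / n

# With uniform dither over one quantization step, the quantizer output is
# unbiased on average, at the price of extra variance in each sample.
dithered = sum(quantize(true_rss + random.uniform(-0.5, 0.5)) for _ in range(n)) / n

print(plain, round(dithered, 2))
```

The undithered average stays stuck at a quantization level (bias of 0.3 here), while the dithered average recovers the true value; the per-sample variance increases, matching the bias/MSE trade-off reported in the abstract.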

One of the key challenges in identifying nonlinear and possibly non-Gaussian state space models (SSMs) is the intractability of estimating the system state. Sequential Monte Carlo (SMC) methods, such as the particle filter (introduced more than two decades ago), provide numerical solutions to the nonlinear state estimation problems arising in SSMs. When combined with additional identification techniques, these algorithms provide solid solutions to the nonlinear system identification problem. We describe two general strategies for creating such combinations and discuss why SMC is a natural tool for implementing these strategies.

Particle Metropolis-Hastings enables Bayesian parameter inference in general nonlinear state space models (SSMs). However, many implementations use a random walk proposal, which can result in poor mixing if not tuned correctly using tedious pilot runs. Therefore, we consider a new proposal inspired by quasi-Newton algorithms that may achieve similar (or better) mixing with less tuning. An advantage compared to other Hessian-based proposals is that it only requires estimates of the gradient of the log-posterior. A possible application is parameter inference in the challenging class of SSMs with intractable likelihoods. We exemplify this application and the benefits of the new proposal by modelling log-returns of futures contracts on coffee by a stochastic volatility model with alpha-stable observations.

Simultaneous localization and mapping (SLAM) is a well-known positioning approach in GPS-denied environments such as urban canyons and inside buildings. Autonomous/aided target detection and recognition (ATR) is commonly used in military applications to detect threats and targets in outdoor environments. This paper presents approaches to combine SLAM with ATR in ways that compensate for the drawbacks of each method. The methods use physical objects that are recognizable by ATR as unambiguous features in SLAM, while SLAM provides the ATR with better position estimates. Landmarks in the form of 3D point features based on normal aligned radial features (NARF) are used in conjunction with identified objects and 3D object models that replace landmarks when possible. This leads to a more compact map representation with fewer landmarks, which partly compensates for the introduced cost of the ATR. We analyze three approaches to combine SLAM and 3D data: point-point matching ignoring NARF features, point-point matching using the set of points selected by NARF feature analysis, and matching of NARF features using nearest neighbor analysis. The first two approaches are similar to the common iterative closest point (ICP) method. We propose an algorithm that combines EKF-SLAM and ATR based on rectangle estimation. The intended application is to improve the positioning of a first responder moving through an indoor environment, where the map offers localization and simultaneously helps locate people, furniture and potentially dangerous objects such as gas canisters.

Maximum likelihood (ML) estimation using Newton’s method in nonlinear state space models (SSMs) is a challenging problem due to the analytical intractability of the log-likelihood and its gradient and Hessian. We estimate the gradient and Hessian using Fisher’s identity in combination with a smoothing algorithm. We explore two approximations of the log-likelihood and of the solution of the smoothing problem. The first is a linearization approximation which is computationally cheap, but whose accuracy typically varies between models. The second is a sampling approximation which is asymptotically valid for any SSM but is more computationally costly. We demonstrate our approach for ML parameter estimation on simulated data from two different SSMs with encouraging results.

We propose nested sequential Monte Carlo (NSMC), a methodology to sample from sequences of probability distributions, even where the random variables are high-dimensional. NSMC generalises the SMC framework by requiring only approximate, properly weighted, samples from the SMC proposal distribution, while still resulting in a correct SMC algorithm. Furthermore, NSMC can in itself be used to produce such properly weighted samples. Consequently, one NSMC sampler can be used to construct an efficient high-dimensional proposal distribution for another NSMC sampler, and this nesting of the algorithm can be done to an arbitrary degree. This allows us to consider complex and high-dimensional models using SMC. We show results that motivate the efficacy of our approach on several filtering problems with dimensions in the order of 100 to 1 000.

Data-efficient learning in continuous state-action spaces using very high-dimensional observations remains a key challenge in developing fully autonomous systems. In this paper, we consider one instance of this challenge, the pixels to torques problem, where an agent must learn a closed-loop control policy from pixel information only. We introduce a data-efficient, model-based reinforcement learning algorithm that learns such a closed-loop policy directly from pixel information. The key ingredient is a deep dynamical model that uses deep auto-encoders to learn a low-dimensional embedding of images jointly with a predictive model in this low-dimensional feature space. Joint learning ensures that not only static but also dynamic properties of the data are accounted for. This is crucial for long-term predictions, which lie at the core of the adaptive model predictive control strategy that we use for closed-loop control. Compared to state-of-the-art reinforcement learning methods for continuous states and actions, our approach learns quickly, scales to high-dimensional state spaces and is an important step toward fully autonomous learning from pixels to torques.

Modeling dynamical systems is important in many disciplines, such as control, robotics, or neurotechnology. Commonly the state of these systems is not directly observed, but only available through noisy and potentially high-dimensional observations. In these cases, system identification, i.e., finding the measurement mapping and the transition mapping (system dynamics) in latent space, can be challenging. For linear system dynamics and measurement mappings, efficient solutions for system identification are available. However, in practical applications, the linearity assumption does not hold, requiring nonlinear system identification techniques. If additionally the observations are high-dimensional (e.g., images), nonlinear system identification is inherently hard. To address the problem of nonlinear system identification from high-dimensional observations, we combine recent advances in deep learning and system identification. In particular, we jointly learn a low-dimensional embedding of the observation by means of deep auto-encoders and a predictive transition model in this low-dimensional space. We demonstrate that our model enables learning good predictive models of dynamical systems from pixel information only.

This paper concerns the estimation of approximate (linear) inverse models of block-oriented systems and the presented results give an improved understanding of these approximations. The estimated inverse is intended to be used as a pre- or postdistorter of the original system and a good inverse model would thus be one that, when used in series with the original system, produces a signal that resembles the original input. An inverse model of a nonlinear system can either be estimated in the standard way (from input to output) and then inverted, or directly (from output to input). This choice will affect the model. In the general case, the two modeling approaches will lead to different models, which will be shown for Hammerstein and Wiener systems. However, for a noise-free Hammerstein system with a white Gaussian input, the two approaches will result in the same model, up to a constant. When the two models are not equal, and the goal is to use the inverse as described above, it can be beneficial to estimate an approximate inverse directly. It will also be illustrated in an example how the inverse estimate can be used to get a nonparametric estimate of the nonlinearity in a block-oriented system.

A common issue with many system identification problems is that the true input to the system is unknown. In this paper, a framework, based on indirect input measurements, is proposed to solve the problem when the input is partially or fully unknown, and cannot be measured directly. The approach relies on measurements that indirectly contain information about the unknown input. The resulting indirect model formulation, with both direct and indirect input measurements as inputs, can be used to estimate the desired model of the original system. Due to the similarities with closed-loop system identification, an iterative instrumental variable method is proposed to estimate the indirect model. To show the applicability of the proposed method, it is applied to data from an inverted pendulum experiment with good results.

This paper presents a method for global pose estimation using inertial sensors, monocular vision, and ultra wide band (UWB) sensors. It is demonstrated that the complementary characteristics of these sensors can be exploited to provide improved global pose estimates, without requiring the introduction of any visible infrastructure, such as fiducial markers. Instead, natural landmarks are jointly estimated with the pose of the platform using a simultaneous localization and mapping framework, supported by a small number of easy-to-hide UWB beacons with known positions. The method is evaluated with data from a controlled indoor experiment with high precision ground truth. The results show the benefit of the suggested sensor combination and suggest directions for further work.

This paper presents a method for matching spotlight Synthetic Aperture Radar (SAR) images with a georeferenced 3D-map as a means of navigational aid. A hypothesis of the flying platform's absolute position, velocity and direction - which later can be used to correct the inertial navigation system - is attained by image matching and optimization. A projective model with 6 DoF is used to create a simulated SAR image from a 3D map. The parameters of the projective model represent the most important parts of the platform's navigation state, and these are adjusted by Chamfer matching the captured SAR image to simulated ones. The performance is demonstrated on real spotlight SAR images and a 3D-map, and the error is shown to be only a few pixels on average, which in our case is about 3 meters.

The well-known cooperative localization algorithm, ‘sum-product algorithm over a wireless network’ (SPAWN), has two major shortcomings: a relatively high computational complexity and a large communication load. Using the Gaussian mixture model with a model selection criterion and the sigma-point (SP) methods, we propose the SPAWN-SP to overcome these problems. The SPAWN-SP easily accommodates different localization scenarios due to its high flexibility in message representation. Furthermore, harsh LOS/NLOS environments are considered for the evaluation of cooperative localization algorithms. Our simulation results indicate that the proposed SPAWN-SP demonstrates high localization accuracy in different localization scenarios, thanks to its high flexibility in message representation.

This paper presents a distributed online method for joint state and parameter estimation in a Jump Markov Nonlinear System, based on a distributed recursive Expectation Maximization algorithm. State inference is enabled via the use of a Rao-Blackwellized particle filter and, for the parameter estimation, the E-step is performed independently at each sensor with the calculation of local sufficient statistics. An average consensus algorithm is used to diffuse local sufficient statistics to neighbors and approximate the global sufficient statistics throughout the network. The evaluation of the proposed algorithm is carried out on a terrain-based navigation problem where the unknown parameters of the observation noise model contain relevant information about the terrain properties.

A marginal version of the Weiss-Weinstein bound (WWB) is proposed for discrete-time nonlinear filtering. The proposed bound is calculated analytically for linear Gaussian systems and approximately for nonlinear systems using a particle filtering scheme. Via simulation studies, it is shown that the marginal bounds are tighter than their joint counterparts.

Cramér-Rao lower bounds (CRLBs) are proposed for deterministic parameter estimation under model mismatch conditions where the assumed data model used in the design of the estimators differs from the true data model. The proposed CRLBs are defined for the family of estimators that may have a specified bias (gradient) with respect to the assumed model. The resulting CRLBs are calculated for a linear Gaussian measurement model and compared to the performance of the maximum likelihood estimator for the corresponding estimation problem.

Positioning in radio networks is a well-established research area. The dominating approach has been to implement positioning algorithms in the higher layers of the communication system, based on position-related information derived in the lowest (physical) layer. Example measurements include received signal strength (RSS), time of arrival (TOA), and angle of arrival (AOA), for which fusion and filtering is a straightforward task. The technical driver for positioning has been E911, while the commercial drivers come from location-based services and logistics management. These demands are fundamental in the development of positioning in future radio network standards. There is today a trend towards accuracy demands beyond what can be achieved with today's measurements. Another trend is that measurements and positioning algorithms are approaching each other: some parts of the positioning are performed on the chip-sets (lowest layer), while low-level measurements are made available to the operating system (highest layer). The purpose of this survey is to describe this trend in more detail, with examples of developments in cellular networks as well as WiFi and Bluetooth.

The Ensemble Kalman filter (EnKF) is a standard algorithm in oceanography and meteorology, where it has gathered thousands of citations. It is appreciated in these communities since it scales much better with the state dimension n than the standard Kalman filter (KF). In short, the EnKF propagates ensembles with N state realizations instead of mean values and covariance matrices, and thereby avoids the computational and storage burden of working with n×n matrices. Perhaps surprisingly, very little attention has been devoted to the EnKF in the signal processing community. In an attempt to change this, we present the EnKF in a Kalman filtering context. Furthermore, its application to nonlinear problems is compared to sigma-point Kalman filters and the particle filter, so as to reveal new insights and improvements for high-dimensional filtering algorithms in general. A simulation example shows the EnKF performance in a space debris tracking application.
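A minimal scalar sketch of the ensemble idea (the model, gain, and noise levels below are illustrative assumptions, not from the paper): every ensemble member is propagated through the dynamics, and the Kalman gain is formed from sample statistics instead of a covariance recursion.

```python
import random

random.seed(0)

def enkf_step(ens, y, a=0.95, q=0.1, r=0.5):
    """One predict/update cycle of a scalar perturbed-observation EnKF.
    Assumed toy model: x_k = a*x_{k-1} + w, w ~ N(0, q); y_k = x_k + v, v ~ N(0, r)."""
    # Predict: propagate every ensemble member through the dynamics.
    ens = [a * x + random.gauss(0.0, q ** 0.5) for x in ens]
    # Sample statistics replace the covariance recursion of the KF.
    m = sum(ens) / len(ens)
    p = sum((x - m) ** 2 for x in ens) / (len(ens) - 1)
    # Update: each member is nudged toward a perturbed observation.
    k = p / (p + r)  # Kalman gain from the sample covariance
    return [x + k * (y + random.gauss(0.0, r ** 0.5) - x) for x in ens]

ens = [random.gauss(0.0, 1.0) for _ in range(200)]
for y in [1.0, 0.9, 1.1, 1.0]:
    ens = enkf_step(ens, y)
estimate = sum(ens) / len(ens)
```

For an n-dimensional state the same two steps apply with N state vectors, so only n×N arrays are stored, which is the scaling advantage the abstract refers to.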

A ship's roll dynamics is sensitive to the mass and mass distribution. Changes in these physical properties might introduce unpredictable behavior of the ship and a worst-case scenario is that the ship will capsize. In this paper, a recently proposed approach for online estimation of mass and center of mass is validated using experimental data. The experiments were performed using a scale model of a ship in a wave basin. The data were collected in free run experiments where the rudder angle was recorded and the ship's motion was measured using an inertial measurement unit. The motion measurements are used in conjunction with a model of the roll dynamics to estimate the desired properties. The estimator uses the rudder angle measurements together with an instrumental variable method to mitigate the influence of disturbances. The experimental study shows that the properties can be estimated with quite good accuracy but that variance and robustness properties can be improved further.

A ship's roll dynamics is very sensitive to changes in the loading conditions and a worst-case scenario is the loss of stability. This paper proposes an approach for online estimation of a ship's mass and center of mass. Instead of focusing on a sensor-rich environment where all possible signals on a ship can be measured and a complete model of the ship can be estimated, a minimal approach is adopted. A model of the roll dynamics is derived from a well-established model in the literature and it is assumed that only motion measurements from an inertial measurement unit together with measurements of the rudder angle are available. Furthermore, identifiability properties and disturbance characteristics of the model are presented. Due to the properties of the model, the parameters are estimated with an iterative instrumental variable approach to mitigate the influence of the disturbances, and multiple datasets are used simultaneously to overcome identifiability issues. Finally, a simulation study is presented to investigate the sensitivity to the initial conditions and it is shown that the sensitivity is low for the desired parameters.
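The instrumental variable idea can be sketched on an assumed first-order ARX model (a toy stand-in, not the paper's roll-dynamics model): delayed inputs serve as instruments because they are correlated with the regressors but uncorrelated with the disturbance.

```python
import random

random.seed(1)

# Assumed toy system: y_t = a*y_{t-1} + b*u_{t-1} + e_t, e_t white.
a_true, b_true = 0.7, 1.0
u = [random.gauss(0, 1) for _ in range(2000)]
y = [0.0]
for t in range(1, len(u)):
    y.append(a_true * y[t - 1] + b_true * u[t - 1] + random.gauss(0, 0.3))

# IV estimate: solve (Z'Phi) theta = Z'y with instruments z_t = (u_{t-1}, u_{t-2}).
# With colored disturbances this mitigates the bias of least squares.
ZP = [[0.0, 0.0], [0.0, 0.0]]
Zy = [0.0, 0.0]
for t in range(2, len(u)):
    phi = (y[t - 1], u[t - 1])          # regressor vector
    z = (u[t - 1], u[t - 2])            # instrument vector
    for i in range(2):
        for j in range(2):
            ZP[i][j] += z[i] * phi[j]
        Zy[i] += z[i] * y[t]

# 2x2 solve by Cramer's rule.
det = ZP[0][0] * ZP[1][1] - ZP[0][1] * ZP[1][0]
a_hat = (Zy[0] * ZP[1][1] - ZP[0][1] * Zy[1]) / det
b_hat = (ZP[0][0] * Zy[1] - Zy[0] * ZP[1][0]) / det
```

Here the noise is white, so the IV estimate simply reproduces the parameters; the point of the construction is that it stays consistent when the disturbance is not white.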

In this paper, a semi-parametric model for RSS measurements is introduced that can be used to predict coverage in cellular radio networks. The model is composed of an empirical log-distance model and a deterministic antenna gain model that accounts for possible non-uniform base station antenna radiation. A least-squares estimator is proposed to jointly estimate the path loss and antenna gain model parameters. Simulation as well as experimental results verify the efficacy of this approach. The method can provide improved accuracy compared to conventional path loss based estimation methods.

In this paper an extension to the sampling based motion planning framework CL-RRT is presented. The framework uses a system model and a stabilizing controller to sample the perceived environment and build a tree of possible trajectories that are evaluated for execution. Complex system models and constraints are easily handled by a forward simulation making the framework widely applicable. To increase operational safety we propose a sampling recovery scheme that performs a deterministic brake profile regeneration using collision information from the forward simulation. This greatly increases the number of safe trajectories and also reduces the number of samples that produce infeasible results. We apply the framework to a Scania G480 mining truck and evaluate the algorithm in a simple yet challenging obstacle course and show that our approach greatly increases the number of feasible paths available for execution.

The extended Kalman filter (EKF) has been an important tool for state estimation of nonlinear systems since its introduction. However, the EKF does not possess the same optimality properties as the Kalman filter and may perform poorly. By viewing the EKF as an optimization problem it is possible, in many cases, to improve its performance and robustness. The paper derives three variations of the EKF by applying different optimization algorithms to the EKF cost function and relates these to the iterated EKF. The derived filters are evaluated in two simulation studies which exemplify the presented filters.
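To illustrate the optimization view, the measurement update can be written as Gauss-Newton iterations on the MAP cost; one iteration recovers the standard EKF update and further iterations give the iterated EKF. The scalar measurement model h(x) = x² and all numbers below are assumed toy choices.

```python
def iterated_ekf_update(xp, P, y, h, dh, R, iters=10):
    """Measurement update as Gauss-Newton minimization of the scalar MAP cost
        J(x) = (x - xp)^2 / P + (y - h(x))^2 / R.
    iters=1 gives the standard EKF update; more iterations give the iterated EKF."""
    x = xp
    for _ in range(iters):
        H = dh(x)                        # relinearize at the current iterate
        S = H * P * H + R
        K = P * H / S
        # Gauss-Newton step written in the familiar iterated-EKF form.
        x = xp + K * (y - h(x) - H * (xp - x))
    P_upd = (1.0 - K * dh(x)) * P
    return x, P_upd

# Assumed toy problem: prior xp = 1.2, measurement y = 2.0 of h(x) = x**2.
x_hat, P_hat = iterated_ekf_update(1.2, 0.5, 2.0,
                                   lambda x: x * x, lambda x: 2 * x, 0.1)
```

Each iteration relinearizes h at the latest iterate, so the fixed point is a stationary point of J rather than the one-step EKF approximation.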

A local series expansion of a received signal is pro-posed for computing direction of arrival (DOA) in sensor arrays. The advantages compared to classical DOA estimation methods include general sensor configurations, ultra-slow sampling, smalldimension of the arrays, and that it applies for both narrowbandand wideband signals without prior knowledge of the signals. This makes the method well suited for DOA estimation in sensor networks where size and energy consumption have to be small. We generalize the common far-field assumption of the target toalso include the near-field, which enables target tracking usinga network of sensor arrays in one framework.

@inproceedings{diva2:844051,
author = {Gustafsson, Fredrik and Hendeby, Gustaf and Lindgren, David and Mathai, George and Habberstad, Hans},
title = {{Direction of Arrival Estimation in Sensor Arrays Using Local Series Expansion of the Received Signal}},
booktitle = {18th International Conference of Information Fusion},
year = {2015},
publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
}

Visual animal tracking is a challenging problem generally requiring extended target models, group tracking and handling of clutter and missed detections. Furthermore, the dolphin tracking problem we consider includes basin constraints, shadows, limited field of view and rapidly changing light conditions. We describe the whole pipeline of a solution based on a ceiling-mounted fisheye camera that includes foreground segmentation and observation extraction in each image, followed by a target tracking framework. A novel contribution is a potential field model of the basin edges as a part of the motion model, that provides a robust prediction of the dolphin trajectories in phases with long segments of missed detections. The overall performance on real data is quite promising.

Micro unmanned aerial vehicles are becoming increasingly interesting for aiding and collaborating with human agents in a myriad of applications, but they are particularly useful for monitoring inaccessible or dangerous areas. In order to interact with and monitor humans, these systems need robust and real-time computer vision subsystems that allow them to detect and follow persons.

In this work, we propose a low-level active vision framework to accomplish these challenging tasks. Based on the LinkQuad platform, we present a system study that implements the detection and tracking of people under fully autonomous flight conditions, keeping the vehicle within a certain distance of a person. The framework integrates state-of-the-art methods from visual detection and tracking, Bayesian filtering, and AI-based control. The results from our experiments clearly suggest that the proposed framework performs real-time detection and tracking of persons in complex scenarios.

Subspace identification is revisited in the scope of nuclear norm minimization methods. It is shown that essential structural knowledge about the unknown data matrices in the data equation that relates Hankel matrices constructed from input and output data can be used in the first step of the numerical solution presented. The structural knowledge comprises the low rank property of a matrix that is the product of the extended observability matrix and the state sequence, and the Toeplitz structure of the matrix of Markov parameters (of the system in innovation form). The new subspace identification method is referred to as the N2SID (twice the N of Nuclear Norm and SID for Subspace IDentification) method. In addition to including key structural knowledge in the solution, it integrates the subspace calculation with the minimization of a classical prediction error cost function. The nuclear norm relaxation enables us to perform such integration while preserving convexity. The advantages of N2SID are demonstrated in a numerical open- and closed-loop simulation study. Here a comparison is made with another widely used SID method, i.e. N4SID. The comparison focuses on identification with short data batches, i.e. where the number of measurements is a small multiple of the system order.

This paper presents identification of both network connected systems and distributed systems governed by PDEs in the framework of distributed optimization via the Alternating Direction Method of Multipliers. This approach opens up, first, the possibility to identify distributed models in a global manner using all available data sequences and, second, the possibility of a distributed implementation. The latter makes application to large-scale complex systems possible. In addition to outlining a new large scale identification method, illustrations are shown for identifying both network connected systems and discretized PDEs.

We are concerned with the problem of detecting an overtaking vehicle using a single camera mounted behind the ego-vehicle windscreen. The proposed solution makes use of 1D optical flow evaluated along lines parallel to the motion of the overtaking vehicles. The 1D optical flow is computed by tracking features along these lines. Based on these features, the position of the overtaking vehicle can also be estimated. The proposed solution has been implemented and tested in real time with promising results. The video data was recorded during test drives in normal traffic conditions in Sweden and Germany.

A class of continuous-time dynamical systems able to sort a list of real numbers is introduced in this paper. The dynamical sorting is achieved in a completely distributed manner by modifying a consensus problem, namely by right-multiplying a Laplacian matrix by a diagonal matrix of weights that represents the desired order. The sorting obtained is relative, i.e., a conservation law is imposed on the dynamics. It is shown that sorting can be achieved in finite time, even in a globally smooth way.

In vehicle load detection and estimation, having an accurate estimate of the height of the Center of Gravity (CG) and the Roll Center (RC) is crucial. In this paper, taking advantage of the availability of Inertial Measurement Units (IMUs) in automotive vehicles, the nominal CG and RC heights are estimated and used for the main purpose of roof load detection. RC estimation is formulated as an Errors-in-Variables regression model. The Total and Corrected Least Squares (TLS and CLS) approaches are applied and a comparison is performed. Using the knowledge of the covariance matrix, CLS provides consistent estimates of the RC height. The estimated RC and CG heights are used to estimate the roof load using a greybox model and the method is applied to measurements from normal driving conditions.
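For intuition about the errors-in-variables setting, a total least squares line fit treats noise in both variables by taking the slope from the principal axis of the centered scatter matrix, which has a closed form in 2D. The data below are an assumed toy example, not the RC-height regression itself.

```python
import math
import random

random.seed(2)

def tls_line(xs, ys):
    """Orthogonal (total least squares) line fit y = m*x + c for data with
    noise in BOTH coordinates. The slope direction is the principal axis of
    the centered 2x2 scatter matrix, via tan(2*theta) = 2*Sxy/(Sxx - Syy)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    m = math.tan(theta)
    return m, my - m * mx

# Assumed true line y = 2x + 1, with noise added to both coordinates.
xs = [i / 10 + random.gauss(0, 0.05) for i in range(100)]
ys = [2 * (i / 10) + 1 + random.gauss(0, 0.05) for i in range(100)]
m, c = tls_line(xs, ys)
```

Ordinary least squares would attenuate the slope here, since the regressor itself is noisy; the orthogonal fit avoids that bias, which is the motivation for TLS/CLS in the abstract.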

In this paper, an instrumental variable (IV) method for estimating the mass and center of mass (CM) of a ship using IMU data has been further investigated. Here, this IV method, which was proposed in an earlier paper, has been analyzed from a closed-loop point of view. This new perspective reveals the properties of the system and dependencies of the signals used in the estimation procedure. Due to similarities with closed-loop identification, previous results in the closed-loop identification field have been used as an inspiration to improve the IV estimator. Since the roll dynamics of a ship is well described by a pendulum model, a pendulum experiment has been carried out to validate the performance both of the original and the improved IV estimators. The experiments gave good results for the improved IV estimator with significantly lower variances and relative errors than the previous IV estimator.

Common data preprocessing routines often introduce considerable flaws in laser-based tracking of extended objects. As an alternative, extended target tracking methods, such as the Gamma-Gaussian-Inverse Wishart (GGIW) probability hypothesis density (PHD) filter, work directly on raw data. In this paper, the GGIW-PHD filter is applied to real world traffic scenarios. To cope with the large amount of data, a mixture clustering approach which reduces the combinatorial complexity and computation time is proposed. The effective segmentation of raw measurements with respect to spatial distribution and motion is demonstrated and evaluated on two different applications: pedestrian tracking from a vehicle and intersection surveillance.

There is strong interest in positioning in wireless networks, partly to support end-user service needs, but also to support network management with network-based information. The focus in this paper is on the latter, using measurements that are readily available in wireless networks. We show how the signal direction of departure and the inter-distance between the base station and the mobile terminal can be estimated, and how particle filters and smoothers can be used to post-process the measurements. The methods are evaluated in a live 3GPP LTE network with promising results, including position error medians of less than 100 m.

Enumerative nonlinear model predictive control for the speed tracking problem of linear induction motors was presented in [1], where the authors showed that this control scheme outperforms direct torque control. In this paper, the authors show that the performance can be further improved by using a load observer for integral action. Specifically, simulation results show that a load observer yields better tracking properties and offers more robust control.

In this paper, we aim to relate the different Bayesian Cramér-Rao bounds which appear in the discrete-time nonlinear filtering literature within a single framework. A comparative theoretical analysis of the bounds is provided in order to relate their tightness. The results can be used to provide a lower bound on the mean square error in nonlinear filtering. The findings are illustrated and verified by numerical experiments where the tightness of the bounds is compared.
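For the scalar linear Gaussian case the standard recursive Bayesian CRB can be computed in a few lines; there the bound is tight and equals the Kalman filter posterior variance. The parameter values below are illustrative assumptions.

```python
def bcrb_trajectory(a, c, q, r, j0, n):
    """Recursive Bayesian CRB for the scalar linear Gaussian model
        x_{k+1} = a*x_k + w,  w ~ N(0, q)
        y_k     = c*x_k + v,  v ~ N(0, r).
    J_k is the Bayesian information; 1/J_k lower-bounds the filtering MSE."""
    bounds = []
    j = j0
    for _ in range(n):
        # Information recursion: predict (q + a^2/J)^-1, then add measurement info.
        j = c * c / r + 1.0 / (q + a * a / j)
        bounds.append(1.0 / j)
    return bounds

b = bcrb_trajectory(a=0.9, c=1.0, q=0.1, r=0.5, j0=1.0, n=50)
```

The recursion converges to a steady-state value that coincides with the stationary Riccati solution of the corresponding Kalman filter, which is one way to check an implementation.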

We propose an improved proposal distribution in the Particle Metropolis-Hastings (PMH) algorithm for Bayesian parameter inference in nonlinear state space models. This proposal incorporates second-order information about the parameter posterior distribution, which can be extracted from the particle filter already used within the PMH algorithm. The added information makes the proposal scale-invariant, simpler to tune, and can possibly also shorten the burn-in phase. The proposed algorithm has a computational cost which is proportional to the number of particles, i.e. the same as the original marginal PMH algorithm. Finally, we provide two numerical examples that illustrate some of the possible benefits of adding the second-order information.

We propose a new framework for how to use sequential Monte Carlo (SMC) algorithms for inference in probabilistic graphical models (PGM). Via a sequential decomposition of the PGM we find a sequence of auxiliary distributions defined on a monotonically increasing sequence of probability spaces. By targeting these auxiliary distributions using SMC we are able to approximate the full joint distribution defined by the PGM. One of the key merits of the SMC sampler is that it provides an unbiased estimate of the partition function of the model. We also show how it can be used within a particle Markov chain Monte Carlo framework in order to construct high-dimensional block-sampling algorithms for general PGMs.

We derive a new Sequential-Monte-Carlo-based algorithm to estimate the capacity of two-dimensional channel models. The focus is on computing the noiseless capacity of the 2-D (1, ∞) run-length limited constrained channel, but the underlying idea is generally applicable. The proposed algorithm is profiled against a state-of-the-art method, yielding more than an order of magnitude improvement in estimation accuracy for a given computation time.

This paper presents a system which combines a zero-velocity-update-(ZUPT-)aided inertial navigation system (INS), using a foot-mounted inertial measurement unit (IMU), with opportunistic use of multi-frequency received signal strength (RSS) measurements. The system does not rely on maps or pre-collected data from surveys of the radio-frequency (RF) environment. Instead it builds its own database of collected RSS measurements during the course of the operation. New RSS measurements are continuously compared with the stored values in the database, and when the user returns to a previously visited area this can thus be detected. This enables loop-closures to be detected online and used for error drift correction. The system utilises a distributed particle simultaneous localization and mapping (DP-SLAM) algorithm which provides a flexible 2D navigation platform that can be extended with more sensors. The experimental results presented in this paper indicate that the developed RSS SLAM algorithm can, in many cases, significantly improve the positioning performance of a foot-mounted INS.

Stochastic dynamical systems are fundamental in state estimation, system identification and control. System models are often provided in continuous time, while a major part of the applied theory is developed for discrete-time systems. Discretization of continuous-time models is hence fundamental. We present a novel algorithm using a combination of Lyapunov equations and analytical solutions, enabling efficient implementation in software. The proposed method circumvents numerical problems exhibited by standard algorithms in the literature. Both theoretical and simulation results are provided.
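As a point of reference for what a discretization routine must produce, the classical double-integrator model has closed-form discretized matrices, since its A matrix is nilpotent. This textbook case is an illustration, not the paper's Lyapunov-based algorithm.

```python
def discretize_double_integrator(T, q):
    """Exact discretization of the continuous-time double integrator
        dx/dt = [[0, 1], [0, 0]] x + [0, 1]' w,  w white with intensity q.
    Because A is nilpotent, expm(A*T) = I + A*T, and the integral defining
    the discrete process noise covariance evaluates in closed form."""
    Ad = [[1.0, T],
          [0.0, 1.0]]
    Qd = [[q * T ** 3 / 3, q * T ** 2 / 2],
          [q * T ** 2 / 2, q * T]]
    return Ad, Qd

Ad, Qd = discretize_double_integrator(T=0.5, q=2.0)
```

A general-purpose discretization method should reproduce these matrices for this model, which makes it a convenient unit test.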

In inertial human motion capture, a multitude of body segments are equipped with inertial measurement units, consisting of 3D accelerometers, 3D gyroscopes and 3D magnetometers. Relative position and orientation estimates can be obtained using the inertial data together with a biomechanical model. In this work we present an optimization-based solution to magnetometer-free inertial motion capture. It allows for natural inclusion of biomechanical constraints, for handling of nonlinearities and for using all data in obtaining an estimate. As a proof-of-concept we apply our algorithm to a lower body configuration, illustrating that the estimates are drift-free and match the joint angles from an optical reference system.

Magnetometers and inertial sensors (accelerometers and gyroscopes) are widely used to estimate 3D orientation. For the orientation estimates to be accurate, the sensor axes need to be aligned and the magnetometer needs to be calibrated for sensor errors and for the presence of magnetic disturbances. In this work we use a grey-box system identification approach to compute maximum likelihood estimates of the calibration parameters. An experiment where the magnetometer data is highly disturbed shows that the algorithm works well on real data, providing good calibration results and improved heading estimates. We also provide an identifiability analysis to understand how much rotation is needed to be able to solve the calibration problem.

In this paper, multiobjective optimization (MOO) is applied to an optimal control problem for a grab-shift unloader crane. The crane is modeled as a cart-pendulum system with varying rope length and the trajectory of the grab is limited by the ship, the quay, and the crane structure. The objectives to minimize are chosen as time, energy and maximal instantaneous power. The optimal control problem is solved using a direct simultaneous optimal control method. The study shows that MOO can be an efficient tool when choosing a good compromise between conflicting objectives such as time and energy. Furthermore, navigation among the Pareto optimal solutions has proven to be very useful when a user wants to learn how the control variables interact with the process.

In this paper, a method for estimating physical parameters using limited sensors is investigated. As a case study, measurements from an IMU are used for estimating the change in mass and the change in center of mass of a ship. The roll motion is studied and an instrumental variable method estimating the parameters of a transfer function from the tangential acceleration to the angular velocity is presented. It is shown that only a subset of the unknown parameters are identifiable simultaneously. A multi-stage identification approach is presented as a remedy for this. A limited simulation study is also presented to show the properties of the estimator. This shows that the method is indeed promising but that more work is needed to reduce the variance of the estimator.

The use of Model Predictive Control is steadily increasing in industry as more complicated problems can be addressed. Since online optimization is usually performed, the main bottleneck of Model Predictive Control is its relatively high computational complexity. Hence, much research has been performed to find efficient algorithms that solve the optimization problem. As parallel hardware is becoming more commonly available, the demand for efficient parallel solvers for Model Predictive Control has increased. In this paper, a tailored parallel algorithm that can adopt different levels of parallelism for solving the Newton step is presented. With sufficiently many processing units, it is capable of reducing the computational growth to logarithmic in the prediction horizon. Since the Newton step computation is where most computational effort is spent in both interior-point and active-set solvers, this new algorithm can significantly reduce the computational complexity of highly relevant solvers for Model Predictive Control.

There is today no established automated method for testing vehicles or tyres, and the most common option is to use professional drivers for this purpose. However, the tests are supposed to be fair and repeatable, which is difficult to guarantee with human drivers; a steering robot modelled to drive as a human is therefore preferable. The approach described in this paper shows how a driver model can be created using a control algorithm based on data gathered from human drivers performing double lane change (DLC) manoeuvres in a simulator. The implemented controller shows how human drivers' behaviour can be captured using control theory.

@inproceedings{diva2:753600,
author = {Jansson, Andreas and Olsson, Erik and Linder, Jonas and Hjort, Mattias},
title = {{Developing of a Driver Model for Vehicle Testing}},
booktitle = {Proceedings of the 14th International Symposium on Advanced Vehicle Control (AVEC), Tokyo, September 2014},
year = {2014},
}

Being able to predict the outcome of an opinion forming process is an important problem in social network theory. However, even for linear dynamics, this becomes a difficult task as soon as non-cooperative interactions are taken into account. Such interactions are naturally modeled as negative weights on the adjacency matrix of the social network. In this paper we show how the Perron-Frobenius theorem can be used for this task also beyond its standard formulation for cooperative systems. In particular, we show how it is possible to associate the achievement of unanimous opinions with the existence of invariant cones properly contained in the positive orthant. These cases correspond to signed adjacency matrices having the eventual positivity property, i.e., such that in sufficiently high powers all negative entries have disappeared. More generally, we show how, for social networks, the achievement of a possibly non-unanimous opinion can be associated with the existence of an invariant cone fully contained in one of the orthants of ℝⁿ.

Large-scale interconnected uncertain systems commonly have large state and uncertainty dimensions. Aside from the heavy computational cost of solving centralized robust stability analysis techniques, privacy requirements in the network can also introduce further issues. In this paper, we utilize IQC analysis for analyzing large-scale interconnected uncertain systems and we evade these issues by describing a decomposition scheme that is based on the interconnection structure of the system. This scheme is based on the so-called chordal decomposition and does not add any conservativeness to the analysis approach. The decomposed problem can be solved using distributed computational algorithms without the need for a centralized computational unit. We further discuss the merits of the proposed analysis approach using a numerical experiment.

In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first order methods. These algorithms are commonly very slow and require many iterations to converge. In order to alleviate this issue, we propose algorithms that combine the Newton and interior-point methods with proximal splitting methods for solving such problems. Particularly, the algorithm for solving unconstrained loosely coupled problems is based on Newton's method and utilizes proximal splitting to distribute the computations for calculating the Newton step at each iteration. A combination of this algorithm and the interior-point method is then used to introduce a distributed algorithm for solving constrained loosely coupled problems. We also provide guidelines on how to implement the proposed methods efficiently and briefly discuss the properties of the resulting solutions.

Gaussian process state-space models (GP-SSMs) are a very flexible family of models of nonlinear dynamical systems. They comprise a Bayesian nonparametric representation of the dynamics of the system and additional (hyper-)parameters governing the properties of this nonparametric representation. The Bayesian formalism enables systematic reasoning about the uncertainty in the system dynamics. We present an approach to maximum likelihood identification of the parameters in GP-SSMs, while retaining the full nonparametric description of the dynamics. The method is based on a stochastic approximation version of the EM algorithm that employs recent developments in particle Markov chain Monte Carlo for efficient identification.

The Kalman filter has been the workhorse in model-based filtering for five decades, and basic knowledge and understanding of it is an important part of the curriculum in many Master of Science programs. It is therefore important to combine theoretical studies with practical experience to allow the students to deepen their understanding of the filter. We have developed a lab where the students implement a Kalman filter in a real-time Matlab framework, to which data are streamed from a smartphone over WiFi. The goal of the lab is to estimate the orientation of the smartphone, which can be nicely visualized graphically and also compared to the built-in filters in the smartphone. The filter can accept any combination of sensor data from accelerometers, gyroscopes, and magnetometers, with different performance. Different tunings and tricks in the Kalman filter are easily evaluated online. The smartphone app is also a stand-alone tool to visualize the sensor data graphically. So far the lab seems to have been successful in reaching the pedagogic goals and in engaging the students.
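A stripped-down version of the lab's filter can be reduced to a single tilt angle: the gyro rate drives the prediction and the accelerometer-derived angle is the measurement. The sampling time and noise tuning below are assumptions for illustration.

```python
import math

def tilt_kf(gyro, acc, T=0.01, q=0.01, r=0.1):
    """Scalar Kalman filter for one tilt angle.
    gyro: angular rates [rad/s]; acc: (ay, az) pairs giving the gravity
    direction; returns the filtered angle trajectory."""
    x, p = 0.0, 1.0
    out = []
    for w, (ay, az) in zip(gyro, acc):
        # Time update: integrate the gyro rate.
        x, p = x + T * w, p + q
        # Measurement update: angle measured from the gravity direction.
        y = math.atan2(ay, az)
        k = p / (p + r)
        x, p = x + k * (y - x), (1 - k) * p
        out.append(x)
    return out

# Assumed scenario: stationary phone tilted 0.3 rad, gyro reading zero.
n = 200
angles = tilt_kf([0.0] * n, [(math.sin(0.3), math.cos(0.3))] * n)
```

The same predict/update structure carries over to the full 3D orientation case in the lab, where the state and measurement models become quaternion- or Euler-angle-based.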

This paper proposes batch and sequential data-driven approaches to anomaly detection based on generalized likelihood ratio tests for a bias change. The procedure is divided into two steps. Assuming availability of a nominal dataset, a nonparametric density estimate is obtained in the first step, prior to the test. Second, the unknown bias change is estimated from test data. Based on the expectation maximization (EM) algorithm, batch and sequential maximum likelihood estimators of the bias change are derived for the case where the density estimate is given by a Gaussian mixture. Approximate asymptotic expressions for the probabilities of error are suggested based on available results. Simulations and real world experiments illustrate the approach.
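A sketch of a batch EM estimator for the bias change when the nominal density is a known Gaussian mixture (the mixture parameters and data below are assumed for illustration): the E-step computes component responsibilities for each sample and the M-step is a precision-weighted bias update.

```python
import math
import random

def em_bias(data, weights, mus, sigmas, iters=50):
    """ML estimate of a common bias b in y_i = b + e_i, where e_i follows a
    known Gaussian mixture with the given weights, means and std devs."""
    b = 0.0
    for _ in range(iters):
        num = den = 0.0
        for y in data:
            # E-step: responsibilities r_k proportional to w_k * N(y-b; mu_k, s_k^2).
            rs = [w * math.exp(-0.5 * ((y - b - m) / s) ** 2) / s
                  for w, m, s in zip(weights, mus, sigmas)]
            tot = sum(rs)
            # M-step accumulators: precision-weighted residuals.
            for r, m, s in zip(rs, mus, sigmas):
                num += (r / tot) * (y - m) / s ** 2
                den += (r / tot) / s ** 2
        b = num / den
    return b

# Assumed nominal noise: zero-mean two-component mixture; true bias 1.5.
random.seed(3)
data = [1.5 + (random.gauss(0, 0.2) if random.random() < 0.8
               else random.gauss(0, 1.0)) for _ in range(500)]
b = em_bias(data, [0.8, 0.2], [0.0, 0.0], [0.2, 1.0])
```

Because the heavy-tailed component down-weights outliers automatically, the estimate is more robust than a plain sample mean under the same contamination.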

Fault detection algorithms (FDAs) process data to generate a test quantity. Test quantities are used to determine the presence of a fault in a monitored system, despite disturbances. Because only limited knowledge of the system can be embedded in an FDA, it is important to evaluate it in scenarios relevant in practice. In this paper, simulation based approaches are proposed in an attempt to determine: i) which disturbances affect the output of an FDA the most; ii) how to compare the performance of different FDAs; and iii) which combinations of fault change size and disturbance variations are allowed to achieve satisfactory performance. The ideas presented are inspired by the literature on design of experiments, surrogate models, sensitivity analysis and change detection. The approaches are illustrated for the problem of wear diagnosis in manipulators, where three FDAs are considered. The application study reveals that disturbances caused by variations in temperature and payload mass error affect the FDAs the most. It is also shown how the size of these disturbances delimits the capacity of an FDA to relate to wear changes. Further comparison of the FDAs reveals which performs best on average.

Before a sensor network can be used for target localization, the locations of the sensors need to be determined. We approach this calibration step by moving a source to distinct positions around the network. At each position, the range to each sensor is measured, and from these range measurements the sensor locations can be estimated by solving a nonlinear least squares (NLS) problem. Here we formulate the NLS problem and describe how to robustly initialize it by the use of multidimensional scaling. The method is evaluated on both simulations and real data from an acoustic sensor network. With as few as six source positions, a robust calibration is demonstrated that gives a position error about the same size as the range error. In the acoustic example this RMSE is less than 40 cm.
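A simplified single-sensor version of the NLS step can be sketched as Gauss-Newton trilateration against known source positions; the multidimensional scaling initialization of the paper is replaced here by a fixed starting point, and the geometry is an assumed 2D toy setup.

```python
import math

def locate_sensor(sources, ranges, x0=(0.0, 0.0), iters=20):
    """Estimate one 2D sensor position from ranges to known source
    positions by Gauss-Newton on the nonlinear least squares cost."""
    x, y = x0
    for _ in range(iters):
        # Assemble the 2x2 normal equations (J'J) d = J'r entry by entry.
        a11 = a12 = a22 = g1 = g2 = 0.0
        for (sx, sy), rng in zip(sources, ranges):
            dx, dy = x - sx, y - sy
            d = math.hypot(dx, dy) or 1e-9
            res = rng - d                  # range residual
            j1, j2 = -dx / d, -dy / d      # d(residual)/d(x, y)
            a11 += j1 * j1; a12 += j1 * j2; a22 += j2 * j2
            g1 += j1 * res; g2 += j2 * res
        det = a11 * a22 - a12 * a12
        x -= (a22 * g1 - a12 * g2) / det   # Gauss-Newton step (Cramer's rule)
        y -= (a11 * g2 - a12 * g1) / det
    return x, y

sources = [(0, 0), (10, 0), (0, 10), (10, 10), (5, -3), (-3, 5)]
truth = (3.0, 4.0)
ranges = [math.hypot(truth[0] - sx, truth[1] - sy) for sx, sy in sources]
est = locate_sensor(sources, ranges, x0=(1.0, 1.0))
```

With noise-free ranges and a well-spread geometry the iteration converges to the true position; in the full calibration problem the sensor positions of the whole network are stacked into one larger NLS problem of the same form.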

Nonlinear Kalman filter adaptations such as extended Kalman filters (EKF) or unscented Kalman filters (UKF) provide approximate solutions to state estimation problems in nonlinear models. The algorithms utilize mean values and covariance matrices to represent the probability densities in the otherwise intractable Bayesian filtering equations. As a consequence, their estimation performance can show significant dependence on the choice of state coordinates. The problem considered here, tracking maneuvering targets using coordinated turn (CT) models, is one practically relevant example: the velocity in the target state can be formulated in either Cartesian or polar coordinates. We extend a previous study to a broader range of CT models that allow for changes in target speed and turn rate, and investigate UKF as well as EKF variants in terms of their performance and sensitivity to noise parameters. The results advocate the use of polar CT models.
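For reference, a commonly used discrete-time CT model with sampling time $T$ reads, in polar-velocity coordinates (position $(x,y)$, speed $v$, heading $h$, turn rate $\omega$); this is the standard textbook form, not necessarily the exact variant used in the paper:

```latex
\begin{aligned}
x_{k+1} &= x_k + \frac{2 v_k}{\omega_k}\sin\!\Big(\frac{\omega_k T}{2}\Big)\cos\!\Big(h_k + \frac{\omega_k T}{2}\Big), \\
y_{k+1} &= y_k + \frac{2 v_k}{\omega_k}\sin\!\Big(\frac{\omega_k T}{2}\Big)\sin\!\Big(h_k + \frac{\omega_k T}{2}\Big), \\
v_{k+1} &= v_k, \qquad h_{k+1} = h_k + \omega_k T, \qquad \omega_{k+1} = \omega_k,
\end{aligned}
```

with process noise on $v_k$ and $\omega_k$ allowing for speed and turn-rate changes. The Cartesian variant instead carries $(\dot x, \dot y)$ in the state and propagates the velocity through a rotation by $\omega_k T$.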

The probability hypothesis density (PHD) filter has grown in popularity during the last decade as a way to address the multi-target tracking problem. Several algorithms exist; for instance, under linear-Gaussian assumptions, the Gaussian mixture PHD (GM-PHD) filter. This paper extends the GM-PHD filter to the common case of a variable probability of detection throughout the tracking volume. This allows for more efficient utilization, e.g., in situations with distance dependent probability of detection or occluded regions. The proposed method avoids previous algorithmic pitfalls that can result in a PHD that is not well defined. The method is illustrated and compared to the standard GM-PHD filter in a simplified multi-target tracking example as well as in a realistic nonlinear underwater sonar simulation application, both demonstrating the effectiveness of the proposed method.

The recent introduction of HDR video cameras has enabled the development of image based lighting techniques for rendering virtual objects illuminated with temporally varying real world illumination. A key challenge in this context is that rendering realistic objects illuminated with video environment maps is computationally demanding. In this work, we present a GPU based rendering system built on the NVIDIA OptiX framework, enabling real time ray tracing of scenes illuminated with video environment maps. For this purpose, we explore and compare several Monte Carlo sampling approaches, including bidirectional importance sampling, multiple importance sampling and sequential Monte Carlo samplers. While previous work has focused on synthetic data and overly simple environment map sequences, we have collected a set of real world dynamic environment map sequences using a state-of-the-art HDR video camera for evaluation and comparisons.

The Markov modulated (switching) state-space model is an important paradigm in statistical signal processing. In this article, we specifically consider Markov modulated nonlinear state-space models and address the online Bayesian inference problem for such models. In particular, we propose a new Rao-Blackwellized particle filter for the inference task, which is our main contribution here. A detailed description of the problem and an algorithm are presented.

We consider an application of Bayesian signal processing to the energy trading problem. In particular, we address the problem of calibrating the Schwartz-Smith model using observed electricity futures prices traded on the markets. Compared with other financial markets, basic electricity derivatives such as futures are more complicated, as these products are based not on the spot prices themselves but on the arithmetic averages of the spot prices during the delivery period. As a result, the (log) futures prices are no longer an affine function of the model factors, and an approach based on Kalman filtering to estimate the latent model factors and the parameters is therefore not meaningful. Here, we envisage a Bayesian approach using the particle marginal Metropolis-Hastings (PMMH) algorithm for this challenging estimation task. We demonstrate the efficacy of our approach on simulated data.

In this paper we propose a new type of particle smoother with linear computational complexity. The smoother is based on running a sequential Monte Carlo sampler backward in time after an initial forward filtering pass. While this introduces dependencies among the backward trajectories we show through simulation studies that the new smoother can outperform existing forward-backward particle smoothers when targeting the marginal smoothing densities.

We propose an extended method for experiment design in nonlinear state space models. The proposed input design technique optimizes a scalar cost function of the information matrix, by computing the optimal stationary probability mass function (pmf) from which an input sequence is sampled. The feasible set of the stationary pmf is a polytope, allowing it to be expressed as a convex combination of its extreme points. The extreme points in the feasible set of pmf’s can be computed using graph theory. Therefore, the final information matrix can be approximated as a convex combination of the information matrices associated with each extreme point. For nonlinear systems, the information matrices for each extreme point can be computed by using particle methods. Numerical examples show that the proposed technique can be successfully employed for experiment design in nonlinear systems.

We propose a novel method for maximum-likelihood-based parameter inference in nonlinear and/or non-Gaussian state space models. The method is an iterative procedure with three steps. At each iteration a particle filter is used to estimate the value of the log-likelihood function at the current parameter iterate. Using these log-likelihood estimates, a surrogate objective function is created by utilizing a Gaussian process model. Finally, we use a heuristic procedure to obtain a revised parameter iterate, providing an automatic trade-off between exploration and exploitation of the surrogate model. The method is profiled on two state space models, showing good performance in terms of both accuracy and computational cost.

An H∞ synthesis method for control of a flexible joint, with non-linear spring characteristic, is proposed. The first step of the synthesis method is to extend the joint model with an uncertainty description of the stiffness parameter. In the second step, a non-linear optimisation problem, based on nominal performance and robust stability requirements, has to be solved. Using the Lyapunov shaping paradigm and a change of variables, the non-linear optimisation problem can be rewritten as a convex, yet conservative, LMI problem. The method is motivated by the assumption that the joint operates in a specific stiffness region of the non-linear spring most of the time, hence the performance requirements are only valid in that region. However, the controller must stabilise the system in all stiffness regions. The method is validated in simulations on a non-linear flexible joint model originating from an industrial robot.

Control of a flexible joint of an industrial manipulator using H∞ design methods is presented. The considered design methods are i) mixed-H∞ design, and ii) H∞ loop shaping design. Two different controller configurations are examined: one uses only the actuator position, while the other uses the actuator position and the acceleration of the end-effector. The four resulting controllers are compared to a standard PID controller where only the actuator position is measured. The choices of the weighting functions are discussed in detail. For the loop shaping design method, the acceleration measurement is required to improve the performance compared to the PID controller. For the mixed-H∞ method it is enough to have only the actuator position to get an improved performance. Model order reduction of the controllers is briefly discussed, which is important for implementation of the controllers in the robot control system.

The detection and classification of small surface targets at long ranges is a growing need for naval security. This paper will discuss simulations of a laser radar at 1.5 μm aimed at search, detection and recognition of small maritime targets.

The data for the laser radar system will be based on present and realistic future technology. The simulations will incorporate typical target movements at different sea states, vessel courses, atmospheric effects and, for given laser system parameters, different levels of beam jitter. The laser pulse energy, repetition rate, as well as the receiver and detector parameters, have not been changed during the simulations.

A discussion of the classification potential based on information in 1D, 2D and 3D data, separately and in combination, will be given for different environmental conditions and system parameters. System issues when combining the laser radar with IR/TV and a range-Doppler radar will also be commented on.

We consider received-signal-strength-based robust geolocation in mixed line-of-sight/non-line-of-sight propagation environments. Herein, we assume a mode-dependent propagation model with unknown parameters. We propose to jointly estimate the geographical coordinates and propagation model parameters. In order to approximate the maximum-likelihood estimator (MLE), we develop an iterative algorithm based on the well-known expectation-maximization criterion. As compared to the standard ML implementation, the proposed algorithm is simpler to implement and capable of reproducing the MLE. Simulation results show that the proposed algorithm attains the best geolocation accuracy as the number of measurements increases.

We study received signal strength-based cooperative localization in wireless sensor networks. We assume that the measurement noise fits a contaminated Gaussian model so as to take into account some outlier conditions. In addition, some environment-dependent parameters are assumed to be unknown. We propose an expectation-maximization based algorithm for robust centralized network localization without offline training. As a benchmark for comparison, we express the best achievable localization accuracy in terms of the Cramér-Rao bound. Experimental results demonstrate the advantages of the proposed algorithm as compared to some representative algorithms.

We investigate robust cooperative localization in LOS/NLOS environments in wireless sensor networks. A round-trip time-of-arrival signal metric is considered so that time synchronization among sensors can be avoided. Owing to the non-line-of-sight effect, we model the measurement error by a two-mode Gaussian mixture distribution. However, its parameters are assumed completely unknown. We propose a centralized localization algorithm, which jointly estimates the unknown geographical coordinates and the nuisance mixture model parameters. The expectation-maximization criterion is adopted here to implement the maximum likelihood estimator. In addition, we compute the Cramér-Rao lower bound (CRLB) for our estimation problem and present the best achievable positioning accuracy in terms of the CRLB.
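The mixture-parameter part of such an EM scheme reduces to per-measurement responsibilities for the LOS and NLOS modes (E-step) followed by weighted re-estimation of the mixture parameters (M-step). A minimal sketch on synthetic ranging residuals, with hypothetical numbers; the actual algorithm iterates this jointly with the coordinate estimates:

```python
import numpy as np

def em_two_mode(e, n_iter=200):
    """EM for a two-mode Gaussian mixture fitted to ranging residuals e:
    mode 0 ~ LOS (small error), mode 1 ~ NLOS (positive bias, larger spread).
    All mixture parameters are treated as unknown."""
    mu = np.quantile(e, [0.1, 0.9])              # crude initialization
    var = np.array([np.var(e), np.var(e)])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities (posterior mode probabilities)
        logp = -0.5 * (e[:, None] - mu)**2 / var - 0.5 * np.log(2*np.pi*var) + np.log(pi)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation of weights, means and variances
        nk = r.sum(axis=0)
        pi, mu = nk / len(e), (r * e[:, None]).sum(axis=0) / nk
        var = (r * (e[:, None] - mu)**2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(0)
nlos = rng.random(3000) < 0.2                    # 20 % NLOS measurements (hypothetical)
e = np.where(nlos, rng.normal(2.0, 0.5, 3000), rng.normal(0.0, 0.1, 3000))
pi, mu, var = em_two_mode(e)
```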

The problem of estimating heading is central in indoor positioning based on measurements from inertial measurement and magnetic units. Integrating the gyroscope's rate-of-turn measurements gives the heading up to an unknown initial condition and a linear drift over time, while the magnetometer gives absolute heading, but long segments of magnetometer data are useless in practice because of magnetic disturbances. A basic Kalman filter approach with outlier rejection has turned out to be difficult to use with high integrity. Here, we propose an approach based on convex optimization, where segments of good magnetometer data are separated from disturbed data and jointly fused with the yaw rate measurements. The optimization framework is flexible, with many degrees of freedom in the modeling phase, and we outline one design. A recursive solution to the optimization is derived, which has a computational complexity comparable to the simplest possible Kalman filter. The performance is evaluated using data from a handheld smartphone over a large number of indoor trajectories, and the results demonstrate that the method effectively resolves the magnetic disturbances.

The Swedish nuclear waste will be stored in copper canisters and kept isolated deep underground for more than 100,000 years. To ensure reliable sealing of the canisters, friction stir welding is used. To repeatedly produce high quality welds, it is vital to use automatic control of the process. This paper introduces a nonlinear model predictive controller for regulating both plunge depth and stir zone temperature, which has not been presented in the literature before. Further, a nonlinear process model has been developed and used to evaluate the controller in simulations of the closed loop system. The controller is compared to a decentralized solution, and simulation results indicate that it is possible to achieve higher control performance using the nonlinear model predictive controller.

A Wiener model is a fairly simple, well known, and often used nonlinear block-oriented black-box model. A possible generalization of the class of Wiener models lies in the parallel Wiener model class. This paper presents a method to estimate the linear time-invariant blocks of such parallel Wiener models from input/output data only. The proposed estimation method combines the knowledge obtained by estimating the best linear approximation of a nonlinear system with the MAVE dimension reduction method to estimate the linear time-invariant blocks present in the model. The estimation of the static nonlinearity boils down to a standard static nonlinearity estimation problem starting from input-output data once the linear blocks are known.

The performance and design of lateral stability systems in cars depend on the ratio between the height of the center of gravity and the wheel base. This ratio is car specific, but a roof load can affect this and decrease the stability margins. We investigate the use of vehicle roll dynamics to detect and estimate changes in the overall sprung mass as well as the load positioned on the roof. It is assumed that the vehicle is equipped with a lateral accelerometer and a roll gyro, and a second order physical model is derived. The parameters in this model are partly unknown, and here estimated with a greybox and an ARMAX approach. The changes in load distribution can be detected and the approach is supported by experimental data in a lab environment.

This paper considers the problem of how to estimate a model of the inverse of a system. The use of inverse systems can be found in many applications, such as feedforward control and power amplifier predistortion. The inverse model is here estimated with the purpose of using it in cascade with the system itself, as an inverter. A good inverse model in this setting would be one that, when used in series with the original system, reconstructs the original input. The goal here is to select suitable inputs, experimental conditions and loss functions to obtain a good input estimate. Both linear and nonlinear systems will be discussed.

For nonlinear systems, one way to obtain a linearizing prefilter is by Hirschorn’s algorithm. It is here shown how to extend this to the postdistortion case, and some formulations of how the pre- or postinverter could be estimated are also presented.

We study the sequential identification problem for the Bates stochastic volatility model, which is widely used as a stock model in finance. By using the exact simulation method, a particle filter for estimating the stochastic volatility is constructed. The system parameters are sequentially estimated with the aid of a parallel filtering algorithm. To improve the estimation performance for unknown parameters, a new resampling procedure is proposed. Simulation studies demonstrate the feasibility of the developed scheme.

The Bayesian Cramér-Rao Bound (BCRB) is derived for nonlinear state space models with dependent process and measurement noise. It generalizes the previous BCRB to the case of dependent noise. Two different dependence structures appearing in the literature are considered, leading to two different recursions for the BCRB. The special cases of Gaussian noise and linear models are presented separately. Simulations demonstrate that correct treatment of the dependencies is important for both filtering algorithms and the BCRB.
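For reference, the classical recursion for the independent-noise case, which the paper generalizes, propagates the Bayesian information matrix $J_k$ as (this is the standard form due to Tichavský et al.; the dependent-noise recursions modify the $D$ terms):

```latex
J_{k+1} = D_k^{22} - D_k^{21}\left(J_k + D_k^{11}\right)^{-1} D_k^{12},
\qquad \text{where} \qquad
\begin{aligned}
D_k^{11} &= \mathbb{E}\!\left[-\Delta_{x_k}^{x_k}\log p(x_{k+1}\mid x_k)\right], \\
D_k^{12} &= \big(D_k^{21}\big)^{\mathsf T}
          = \mathbb{E}\!\left[-\Delta_{x_k}^{x_{k+1}}\log p(x_{k+1}\mid x_k)\right], \\
D_k^{22} &= \mathbb{E}\!\left[-\Delta_{x_{k+1}}^{x_{k+1}}\log p(x_{k+1}\mid x_k)\right]
          + \mathbb{E}\!\left[-\Delta_{x_{k+1}}^{x_{k+1}}\log p(y_{k+1}\mid x_{k+1})\right],
\end{aligned}
```

with $\Delta_a^b$ denoting the second-derivative (Laplacian) operator with respect to $a$ and $b$. The BCRB on the filtering error covariance is then $J_{k}^{-1}$.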

State-space models are successfully used in many areas of science, engineering and economics to model time series and dynamical systems. We present a fully Bayesian approach to inference and learning in nonlinear nonparametric state-space models. We place a Gaussian process prior over the transition dynamics, resulting in a flexible model able to capture complex dynamical phenomena. However, to enable efficient inference, we marginalize over the dynamics of the model and instead infer directly the joint smoothing distribution through the use of specially tailored Particle Markov Chain Monte Carlo samplers. Once an approximation of the smoothing distribution is computed, the state transition predictive distribution can be formulated analytically. We make use of sparse Gaussian process models to greatly reduce the computational complexity of the approach.

A GM-PHD filter is used for pedestrian tracking in a crowd surveillance application. The purpose is to keep track of the different groups over time as well as to represent the shape of the groups and the number of people within the groups. Input data to the GM-PHD filter are detections using a state of the art algorithm applied to video frames from the PETS 2012 benchmark data. In a first step, the detections in the frames are converted from image coordinates to world coordinates. This implies that groups can be defined in physical units in terms of distance in meters and speed differences in meters per second. The GM-PHD filter is a Bayesian framework that does not form tracks of individuals. Its output is well suited for clustering of individuals into groups. The results demonstrate that the GM-PHD filter has the capability of estimating the correct number of groups with an accurate representation of their sizes and shapes.

We consider a class of convex feasibility problems where the constraints that describe the feasible set are loosely coupled. These problems arise in robust stability analysis of large, weakly interconnected uncertain systems. To facilitate distributed implementation of robust stability analysis of such systems, we describe two algorithms based on decomposition and simultaneous projections. The first algorithm is a nonlinear variant of Cimmino's mean projection algorithm, but by taking the structure of the constraints into account, we can obtain a faster rate of convergence. The second algorithm is devised by applying the alternating direction method of multipliers to a convex minimization reformulation of the convex feasibility problem. We use numerical results to show that both algorithms require far fewer iterations than the accelerated nonlinear Cimmino algorithm.
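The baseline Cimmino iteration is simple to state: project the current iterate onto every constraint set simultaneously and average the projections. A sketch for halfspace constraints, with hypothetical constraint data:

```python
import numpy as np

def cimmino(A, b, x0, n_iter=500):
    """Cimmino's mean projection method for the convex feasibility problem
    {x : A x <= b}: project the iterate onto each halfspace simultaneously
    and take the average of the projections as the next iterate."""
    x = np.array(x0, float)
    for _ in range(n_iter):
        viol = np.maximum(A @ x - b, 0.0)                      # per-constraint violation
        projs = x - (viol / (A**2).sum(axis=1))[:, None] * A   # halfspace projections
        x = projs.mean(axis=0)                                 # simultaneous averaging
    return x

# hypothetical loosely coupled constraints in R^2
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, -1.0]])
b = np.array([1.0, 1.0, 0.5])
x_feas = cimmino(A, b, [5.0, 5.0])
```

The structured variants in the paper exploit the loose coupling between constraint blocks to weight and parallelize these projections per subsystem.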

In optimization algorithms used for on-line Model Predictive Control (MPC), the main computational effort is spent solving linear systems of equations to obtain search directions. Hence, it is of great interest to solve them efficiently, which is commonly performed using Riccati recursions or generic sparsity exploiting algorithms. The focus in this work is efficient search direction computation for active-set methods. In these methods, the system of equations to be solved in each iteration differs from the previous one only by a low-rank modification. This highly structured change of the system of equations from one iteration to the next is an important ingredient in the performance of active-set solvers. It is therefore appealing to make a correspondingly structured update of the Riccati factorization, which has not been presented in the literature so far. The main objective of this paper is to present such an algorithm for updating the Riccati factorization in a structured way in an active-set solver. The result of the work is that the computational complexity of the step direction computation can be significantly reduced for problems with bound constraints on the control signal. This in turn has important implications for the computational performance of active-set solvers used for linear, nonlinear as well as hybrid MPC.

An estimation-based iterative learning control (ILC) algorithm is applied to a realistic industrial manipulator model. By measuring the acceleration of the end-effector, the arm angular position accuracy is improved when the measurements are fused with motor angle observations. The estimation problem is formulated in a Bayesian estimation framework where three solutions are proposed: one using the extended Kalman filter (EKF), one using the unscented Kalman filter (UKF), and one using the particle filter (PF). The estimates are used in an ILC method to improve the accuracy for following a given reference trajectory. Since the ILC algorithm is repetitive no computational restrictions on the methods apply explicitly. In an extensive Monte Carlo simulation study it is shown that the PF method outperforms the other methods and that the ILC control law is substantially improved using the PF estimate.

This paper presents an approach for 6D pose estimation where MEMS inertial measurements are complemented with magnetometer measurements assuming that a model (map) of the magnetic field is known. The resulting estimation problem is solved using a Rao-Blackwellized particle filter. In our experimental study the magnetic field is generated by a magnetic coil giving rise to a magnetic field that we can model using analytical expressions. The experimental results show that accurate position estimates can be obtained in the vicinity of the coil, where the magnetic field is strong.

In multi-target tracking, the discrepancy between the nominal and the true values of the model parameters might result in poor performance. In this paper, an adaptive Probability Hypothesis Density (PHD) filter is proposed which accounts for sensor parameter uncertainty. A variational Bayes technique is used for approximate inference, which provides analytic expressions for the PHD recursions analogous to the Gaussian mixture implementation of the PHD filter. The proposed method is evaluated in a multi-target tracking scenario. The improvement in performance is shown in simulations.

Radar micro-Doppler signatures (MDS) of humans are created by movements of body parts, such as legs and arms. MDSs can be used in security applications to detect humans and classify their type and activity. Target association and tracking, which can facilitate the classification, become easier if it is possible to distinguish between human individuals by their MDSs. By this we mean to recognize the same individual in a short time frame but not to establish the identity of the individual. In this paper we perform a statistical experiment in which six test persons are able to distinguish between walking human individuals from their MDSs. From this we conclude that there is information in the MDSs of the humans to distinguish between different individuals, which also can be used by a machine. Based on the results of the best test persons we also discuss features in the MDSs that could be utilized to make this processing possible.

Physical activity monitoring has recently become an important topic in wearable computing, motivated by e.g. healthcare applications. However, new benchmark results show that the difficulty of the complex classification problems exceeds the potential of existing classifiers. Therefore, this paper proposes the ConfAdaBoost.M1 algorithm. The proposed algorithm is a variant of the AdaBoost.M1 that incorporates well established ideas for confidence based boosting. The method is compared to the most commonly used boosting methods using benchmark datasets from the UCI machine learning repository and it is also evaluated on an activity recognition and an intensity estimation problem, including a large number of physical activities from the recently released PAMAP2 dataset. The presented results indicate that the proposed ConfAdaBoost.M1 algorithm significantly improves the classification performance on most of the evaluated datasets, especially for larger and more complex classification tasks.

Calibration of ground sensor networks is a complex task in practice. To tackle the problem, we propose an approach based on simultaneous tracking of targets of opportunity and sparse estimation of the bias parameters. The evidence approximation method is used to get a sparse estimate of the bias parameters, and the method is here extended with a novel marginalization step where a state smoother is invoked. A simulation study shows that the non-zero bias parameters are detected and well estimated using only one target of opportunity passing by the network.

It is well known that the motion of an acoustic source can be estimated from Doppler shift observations. It is, however, not obvious how to design a sensor network to efficiently deliver the localization service. In this work a rather simplistic motion model is proposed that is aimed at sensor networks with realistic numbers of sensor nodes. It is also described how to efficiently solve the associated least squares optimization problem by Gauss-Newton variable projection techniques, and how to initiate the numerical search from simple features extracted from the observed frequency series. The methods are demonstrated on real data by determining the distance to a passing propeller-driven aircraft and by localizing an all-terrain vehicle. It is concluded that the processing components included are fairly mature for practical implementations in sensor networks.
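The simplest version of this problem fits the Doppler curve of a source on a straight constant-speed pass. A sketch with plain damped Gauss-Newton and a numerical Jacobian (the paper instead uses variable projection to eliminate the linear parameter; all scenario numbers below are hypothetical):

```python
import numpy as np

C = 343.0   # speed of sound [m/s]

def doppler_freq(theta, t):
    """Received frequency for a source on a straight constant-speed pass.
    theta = (f0, v, d, t0): emitted frequency, speed, closest-approach
    distance and time of closest approach."""
    f0, v, d, t0 = theta
    vr = v**2 * (t - t0) / np.sqrt(d**2 + v**2 * (t - t0)**2)  # radial speed
    return f0 / (1.0 + vr / C)

def gauss_newton(theta0, t, f_obs, n_iter=50, damp=1e-3):
    """Damped Gauss-Newton with a central-difference Jacobian."""
    theta = np.array(theta0, float)
    for _ in range(n_iter):
        r = f_obs - doppler_freq(theta, t)
        J = np.empty((t.size, theta.size))
        for j in range(theta.size):
            e = np.zeros_like(theta)
            e[j] = 1e-6 * max(1.0, abs(theta[j]))
            J[:, j] = (doppler_freq(theta + e, t)
                       - doppler_freq(theta - e, t)) / (2 * e[j])
        H = J.T @ J + damp * np.eye(theta.size)   # mild damping for robustness
        theta += np.linalg.solve(H, J.T @ r)
    return theta

t = np.arange(0.0, 20.0, 0.1)
truth = np.array([100.0, 50.0, 300.0, 10.0])      # f0 [Hz], v [m/s], d [m], t0 [s]
f_obs = doppler_freq(truth, t)                    # noise-free for the sketch
theta_hat = gauss_newton([98.0, 45.0, 250.0, 9.5], t, f_obs)
```

In the paper's setting the initial guess would come from the simple frequency-series features mentioned above rather than from a perturbation of the truth.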

The monitoring of physical activities under realistic, everyday life conditions - that is, while an individual follows his or her regular daily routine - is usually neglected or even completely ignored. Therefore, this paper investigates the development and evaluation of robust methods for everyday life scenarios, with focus on the task of aerobic activity recognition. Two important aspects of robustness are investigated: dealing with various (unknown) other activities, and subject independence. Methods to handle these issues are proposed and compared, and a thorough evaluation simulates typical everyday scenarios of activity recognition applications. Moreover, a new evaluation technique (leave-one-other-activity-out) is introduced to simulate an activity recognition system being used while performing a previously unknown activity. By applying the proposed methods it is possible to design a robust physical activity recognition system with the desired generalization characteristic.

This paper describes a competitive approach developed for an activity recognition challenge. The competition was defined on a new and publicly available dataset of human activities, recorded with smartphone sensors. This work investigates different feature sets for the activity recognition task of the competition. Moreover, the focus is also on the introduction of a new, confidence-based boosting algorithm called ConfAdaBoost.M1. Results show that the new classification method outperforms commonly used classifiers, such as decision trees or AdaBoost.M1.

This paper considers extended targets that have constant extension shapes, but generate measurements whose appearance can change abruptly. The problem is approached using multiple measurement models, where each model corresponds to a measurement appearance mode. Mode transitions are modeled as dependent on the extended target kinematic state, and a multiple model extended target PHD filter is used to handle multiple targets with multiple appearance modes. The extended target tracking is evaluated using real world data where a laser range sensor is used to track multiple bicycles.

Unattended Ground Sensor Networks (UGSN) are becoming increasingly popular for surveillance and situational awareness applications. Acoustic sensors can be used in UGSN to detect and to classify targets, and these sensors are cost efficient, easy to deploy, and above all, non-jammable since they are passive. An array of acoustic sensors can detect multiple sound sources and determine the direction of arrival (DOA), and the network can deal with the multi-sensor multi-target tracking. This contribution focuses on DOA estimation of wideband sources, such as vehicles. We develop a coherent DOA estimation method by taking advantage of the spatial sparsity of the wideband acoustic sources as prior information, as an extension of the recently proposed SPICE method for narrowband sources. The method has been tested on both simulated data and field test data with different vehicles, with very good performance compared to other state-of-the-art methods.

The mainstream approach to identification of linear discrete-time models is given by parametric Prediction Error Methods (PEM). As a rule, the model complexity is unknown and model order selection (MOS) is a key ingredient of the estimation process. A different approach to linear system identification has been recently proposed where impulse responses are described in a Bayesian framework as zero-mean Gaussian processes. Their covariances are given by the so-called stable spline, TC or DC kernels that encode information on regularity and BIBO stability. In this paper, we show that these new kernel-based techniques also lead to a new effective MOS method for PEM. Furthermore, this paves the way to the design of a new impulse response estimator that combines the regularized approaches and the classical parametric PEM. Numerical experiments show that the performance of this technique is very similar to that of PEM equipped with an oracle which selects the best model order by knowing the true impulse response.
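The TC kernel mentioned above has the closed form P[i, j] = c * lambda**max(i, j), so the regularized impulse response estimate is a single ridge-type solve. A sketch of kernel-based FIR estimation, with hypothetical system, data sizes and hyperparameters (in practice c, lambda and the noise variance are tuned, e.g. by marginal likelihood):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 50, 200
g_true = 0.8**np.arange(n) * np.sin(0.5 * np.arange(n))  # hypothetical impulse response
u = rng.standard_normal(N)
Phi = np.zeros((N, n))                                   # FIR regression matrix
for k in range(n):
    Phi[k:, k] = u[:N - k]
y = Phi @ g_true + 0.1 * rng.standard_normal(N)

# TC ("tuned/correlated") kernel: P[i, j] = c * lam**max(i, j),
# encoding exponential decay (BIBO stability) and correlation between lags
c, lam, sig2 = 1.0, 0.8, 0.01
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
P = c * lam**np.maximum(i, j)

# regularized (kernel) estimate vs. unregularized least squares
g_reg = np.linalg.solve(Phi.T @ Phi + sig2 * np.linalg.inv(P), Phi.T @ y)
g_ls = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
```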

Estimation of unknown dynamics is what system identification is about and a core problem in adaptive control and adaptive signal processing. It has long been known that regularization can be quite beneficial for general inverse problems, of which system identification is an example. But only recently, partly under the influence of machine learning, has the use of well tuned regularization for estimating linear dynamical systems been investigated more seriously. In this presentation we review these new results and discuss what they may mean for the theory and practice of dynamical model estimation in general.

System Identification is about estimating models of dynamical systems from measured input-output data. Its traditional foundation is basic statistical techniques, such as maximum likelihood estimation and asymptotic analysis of bias and variance and the like. Maximum likelihood estimation relies on minimization of criterion functions that typically are non-convex, and may cause numerical search problems. Recent interest in identification algorithms has focused on techniques that are centered around convex formulations. This is partly the result of developments in machine learning and statistical learning theory. The development concerns issues of regularization for sparsity and for better tuned bias/variance trade-offs. It also involves the use of subspace methods as well as nuclear norms as proxies to rank constraints. A quite different route to convexity is to use algebraic techniques to manipulate the model parameterizations. This article will illustrate all this recent development.

We consider an indoor tracking system consisting of an inertial measurement unit (IMU) and a camera that detects markers in the environment. There are many camera based tracking systems described in the literature and available commercially, and a few of them also have IMU support. These are based on the best-effort principle, where the performance varies depending on the situation. In contrast to this, we start with a specification of the system performance, and the design is based on an information theoretic approach, where specific user scenarios are defined. Precise models for the camera and IMU are derived for a fusion filter, and the theoretical Cramér-Rao lower bound and the Kalman filter performance are evaluated. In this study, we focus on examining the camera quality versus the marker density needed to achieve at least one millimetre and one degree accuracy in tracking performance.

Model predictive control (MPC) is one of the most popular advanced control techniques and is used widely in industry. The main drawback with MPC is that it is fairly computationally expensive and this has so far limited its practical use for nonlinear systems.

To reduce the computational burden of nonlinear MPC, Feedback Linearization together with linear MPC has been used successfully to control nonlinear systems. The main drawback is that this results in an optimization problem with nonlinear constraints on the control signal.

In this paper we propose a method to handle the nonlinear constraints that arise, using a set of dynamically generated local inner polytopic approximations. The main benefits of the proposed method are guaranteed recursive feasibility and convergence.

We consider the filtering problem in linear state-space models with heavy-tailed process and measurement noise. Our work is based on Student's t distribution, for which we give a number of useful results. The derived filtering algorithm is a generalization of the ubiquitous Kalman filter and reduces to it as a special case. Both the Kalman filter and the new algorithm are compared on a challenging tracking example where a maneuvering target is observed in clutter.
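For reference, the standard Kalman filter recursion that the t-based algorithm generalizes (and reduces to under Gaussian noise) can be sketched as follows; the model matrices and noise levels here are illustrative:

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of the standard Kalman filter."""
    # Time update (predict).
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Measurement update (correct).
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = P_pred - K @ S @ K.T
    return x_new, P_new

# Scalar random-walk example: the state is a position observed in noise.
A = np.array([[1.0]]); C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
x, P = np.array([0.0]), np.array([[10.0]])
for y in [1.2, 0.9, 1.1, 1.0]:
    x, P = kalman_step(x, P, np.array([y]), A, C, Q, R)
```

A single outlier in the observations pulls this estimate strongly, which is precisely the failure mode the heavy-tailed variant addresses.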

Particle Markov Chain Monte Carlo (PMCMC) samplers allow for routine inference of parameters and states in challenging nonlinear problems. A common choice for the parameter proposal is a simple random walk sampler, which can scale poorly with the number of parameters.

In this paper, we propose to use log-likelihood gradients, i.e. the score, in the construction of the proposal, akin to the Langevin Monte Carlo method, but adapted to the PMCMC framework. This can be thought of as a way to guide a random walk proposal by using drift terms that are proportional to the score function. The method is successfully applied to a stochastic volatility model and the drift term exhibits intuitive behaviour.
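The score-guided drift can be sketched as a Langevin-style proposal. The example below is a toy illustration with an analytic score; in the PMCMC setting the score would itself be estimated by the particle filter, and a Metropolis-Hastings acceptance step would correct the proposal:

```python
import numpy as np

def langevin_proposal(theta, score, eps, rng):
    """Propose theta' = theta + (eps^2 / 2) * score(theta) + eps * noise.

    The drift term, proportional to the score, guides the random walk
    toward regions of higher likelihood, as in Langevin Monte Carlo.
    """
    drift = 0.5 * eps ** 2 * score(theta)
    return theta + drift + eps * rng.standard_normal(theta.shape)

# Toy example: Gaussian log-likelihood centered at 2, so the score at
# theta is (2 - theta); proposals from theta = 0 drift toward 2.
rng = np.random.default_rng(1)
score = lambda th: 2.0 - th
props = np.array([langevin_proposal(np.zeros(1), score, 0.5, rng)
                  for _ in range(2000)]).ravel()
```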

Particle smoothing is useful for offline state inference and parameter learning in nonlinear/non-Gaussian state-space models. However, many particle smoothers, such as the popular forward filter/backward simulator (FFBS), are plagued by a quadratic computational complexity in the number of particles. One approach to tackle this issue is to use rejection-sampling-based FFBS (RS-FFBS), which asymptotically reaches linear complexity. In practice, however, the constants can be quite large and the actual gain in computational time limited. In this contribution, we develop a hybrid method, governed by an adaptive stopping rule, in order to exploit the benefits, but avoid the drawbacks, of RS-FFBS. The resulting particle smoother is shown in a simulation study to be considerably more computationally efficient than both FFBS and RS-FFBS.
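The two building blocks that such a hybrid method alternates between can be sketched as follows. The transition kernel is a toy Gaussian random walk (whose log-density, up to a constant, is bounded above by zero, giving a valid rejection bound), and the adaptive stopping rule itself is omitted:

```python
import numpy as np

def backward_sample(rng, particles, weights, x_next, trans_logpdf):
    """One FFBS backward draw: evaluates the smoothing weight
    w_t^i * f(x_{t+1} | x_t^i) for every particle i, which is the
    source of the overall quadratic cost."""
    logw = np.log(weights) + trans_logpdf(x_next, particles)
    w = np.exp(logw - logw.max())
    return particles[rng.choice(len(particles), p=w / w.sum())]

def backward_sample_rs(rng, particles, weights, x_next, trans_logpdf, log_c):
    """RS-FFBS draw: propose i ~ weights, accept with probability
    f(x_{t+1} | x_t^i) / c, where log_c upper-bounds the log-density."""
    while True:
        i = rng.choice(len(particles), p=weights)
        if np.log(rng.uniform()) < trans_logpdf(x_next, particles[i:i + 1])[0] - log_c:
            return particles[i]

# Toy setup: standard-normal particles, uniform weights.
rng = np.random.default_rng(0)
particles = rng.standard_normal(100)
weights = np.full(100, 1.0 / 100)
logf = lambda xn, xs: -0.5 * (xn - xs) ** 2   # Gaussian RW, up to a constant
x = backward_sample(rng, particles, weights, 0.3, logf)
x_rs = backward_sample_rs(rng, particles, weights, 0.3, logf, log_c=0.0)
```

The rejection variant avoids touching all N particles per draw, but its runtime depends on the acceptance rate, which is what motivates an adaptive fallback.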

We consider the smoothing problem for a class of conditionally linear Gaussian state-space (CLGSS) models, referred to as mixed linear/nonlinear models. In contrast to the better studied hierarchical CLGSS models, these allow for an intricate cross dependence between the linear and the nonlinear parts of the state vector. We derive a Rao-Blackwellized particle smoother (RBPS) for this model class by exploiting its tractable substructure. The smoother is of the forward filtering/backward simulation type. A key feature of the proposed method is that, unlike existing RBPS for this model class, the linear part of the state vector is marginalized out in both the forward direction and in the backward direction.

I present a novel method for maximum likelihood parameter estimation in nonlinear/non-Gaussian state-space models. It is an expectation maximization (EM) like method, which uses sequential Monte Carlo (SMC) for the intermediate state inference problem. Contrary to existing SMC-based EM algorithms, however, it makes efficient use of the simulated particles through the use of particle Markov chain Monte Carlo (PMCMC) theory. More precisely, the proposed method combines the efficient conditional particle filter with ancestor sampling (CPF-AS) with the stochastic approximation EM (SAEM) algorithm. This results in a procedure which does not rely on asymptotics in the number of particles for convergence, meaning that the method is very computationally competitive. Indeed, the method is evaluated in a simulation study, using a small number of particles with promising results.

Optimal estimation problems for general state space models do not typically admit a closed form solution. However, modern Monte Carlo methods have paved the way to solve such complex inference problems. Particle filters (PF) are a popular class of such Monte Carlo based Bayesian algorithms, which solve the estimation problems numerically in a sequential manner.

PFs in general assume prior knowledge of the (process and observation) noise distributions in the state-space model, whereas in many practical problems the properties of the noise processes are unknown. Furthermore, the unknown noise distributions may be state dependent or even non-stationary, which prevents offline noise calibration.

In this article, the unknown noises are assumed to be slowly varying in time. The article then proposes a hierarchical noise-adaptive PF in which a two-tier PF is run: the top-tier PF estimates the latent states from the streaming observations, and the bottom-tier PF estimates the noise statistics conditioned on the top-tier output together with the observations. The estimates are statistically fused for inference. In essence, this is an implementation of an approximate Rao-Blackwellized PF, where the Rao-Blackwellization is achieved through local Monte Carlo integration. The approach is generic across different noise classes and, importantly, enhances the level of parallelism in PF implementations.
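The single-tier building block of such a scheme is the bootstrap PF, sketched below for a toy linear-Gaussian model with hand-picked noise levels; the actual two-tier construction runs one such filter per tier:

```python
import numpy as np

def bootstrap_pf(y_seq, n_particles, propagate, loglik, rng):
    """Minimal bootstrap particle filter: predict, weight, resample.

    propagate(x, rng) samples the transition; loglik(y, x) evaluates
    the observation log-density per particle. Returns filtered means.
    """
    x = rng.standard_normal(n_particles)   # illustrative prior draw
    means = []
    for y in y_seq:
        x = propagate(x, rng)              # predict
        logw = loglik(y, x)                # weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))        # weighted filtered mean
        x = x[rng.choice(n_particles, size=n_particles, p=w)]  # resample
    return np.array(means)

# Toy linear-Gaussian model: x' = 0.9 x + v, y = x + e.
rng = np.random.default_rng(0)
propagate = lambda x, r: 0.9 * x + 0.1 * r.standard_normal(x.shape)
loglik = lambda y, x: -0.5 * (y - x) ** 2 / 0.1 ** 2
means = bootstrap_pf([1.0, 0.95, 0.9, 0.85], 500, propagate, loglik, rng)
```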

Starting from the electromagnetic theory, we derive a Bayesian nonparametric model allowing for joint estimation of the magnetic field and the magnetic sources in complex environments. The model is a Gaussian process which exploits the divergence- and curl-free properties of the magnetic field by combining well-known model components in a novel manner. The model is estimated using magnetometer measurements and spatial information implicitly provided by the sensor. The model and the associated estimator are validated on both simulated and real world experimental data producing Bayesian nonparametric maps of magnetized objects.
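The estimation machinery here is standard GP regression; a minimal sketch with a generic squared-exponential kernel is below. The actual model replaces this kernel with curl- and divergence-free kernels derived from the electromagnetic theory, but the posterior algebra is the same:

```python
import numpy as np

def gp_predict(X, y, Xs, ell=1.0, sf=1.0, sn=0.1):
    """GP regression posterior mean with a squared-exponential kernel.

    ell is the length scale, sf the signal amplitude, sn the noise
    standard deviation; all hyperparameter values are illustrative.
    """
    def k(A, B):
        d = A[:, None] - B[None, :]
        return sf ** 2 * np.exp(-0.5 * d ** 2 / ell ** 2)
    K = k(X, X) + sn ** 2 * np.eye(len(X))      # noisy Gram matrix
    return k(Xs, X) @ np.linalg.solve(K, y)     # posterior mean at Xs

X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(X)                      # stand-in "field" measurements
mean = gp_predict(X, y, np.array([1.5]))
```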

Many RGB-D sensors, e.g. the Microsoft Kinect, use rolling-shutter cameras. Such cameras produce geometrically distorted images when the sensor is moving. To mitigate these rolling-shutter distortions we propose a method that uses an attached gyroscope to rectify the depth scans. We also present a simple scheme to calibrate the relative pose and time synchronization between the gyro and a rolling-shutter RGB-D sensor. We examine the effectiveness of our rectification scheme by coupling it with the Kinect Fusion algorithm. By comparing Kinect Fusion models obtained from raw sensor scans and from rectified scans, we demonstrate improvement for three classes of sensor motion: panning motions cause slant distortions, tilt motions cause vertically elongated or compressed objects, and wobble motions cause a loss of detail compared to the reconstruction using rectified depth scans. As our method relies on gyroscope readings, the amount of computation required is negligible compared to the cost of running Kinect Fusion.

The availability and reliability of mobile positioning algorithms depend on both the quality of measurements and the environmental characteristics. Positioning systems based on global navigation satellite systems (GNSS), for example, typically have an accuracy of a few meters but are unavailable in signal-denied conditions and unreliable in multipath environments. Other radio-network-based positioning algorithms have the same drawbacks. This thesis considers a couple of cases where these drawbacks can be mitigated by model-based sensor fusion techniques.

The received signal strength (RSS) is commonly used in cellular radio networks for positioning due to its high availability, but its reliability depends heavily on the environment. We have studied how the directional dependence in the antenna gain in the base stations can be compensated for. We propose a semiempirical model for RSS measurements, composed of an empirical log-distance model of the RSS decay rate, and a deterministic antenna gain model that accounts for non-uniform base station antenna radiation. Evaluations and comparisons presented in this study demonstrate an improvement in estimation performance of the joint model compared to the propagation model alone.
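The semiempirical model can be sketched as a log-distance path-loss term plus a deterministic angle-dependent antenna gain. The parabolic (3GPP-style) pattern and all parameter values below are illustrative assumptions, not those estimated in the study:

```python
import math

def predicted_rss(d, theta, p0=-30.0, n=3.0, d0=1.0,
                  g_max=15.0, theta_3db=math.radians(65)):
    """Predicted RSS in dBm at distance d (m) and bearing theta (rad)
    off the antenna boresight.

    Path loss follows the empirical log-distance model with decay
    exponent n; the gain term uses a simple parabolic antenna pattern
    capped at 20 dB attenuation (illustrative choices throughout).
    """
    path_loss = p0 - 10.0 * n * math.log10(d / d0)
    gain = g_max - min(12.0 * (theta / theta_3db) ** 2, 20.0)
    return path_loss + gain

rss_boresight = predicted_rss(100.0, 0.0)         # on boresight
rss_off = predicted_rss(100.0, math.radians(60))  # 60 degrees off
```

Ignoring the gain term amounts to assuming an omnidirectional antenna, which is the mismatch the joint model corrects.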

Inertial navigation systems (INS) rely on integrating inertial sensor measurements. An INS as a standalone system is known to have a cubic drift in the position error, and it needs supporting sensor information, for instance position fixes from GNSS whenever available. For pedestrians, special techniques such as parametric gait models and step detection can be used to limit the drift. In general, the more accurate the gait parameters, the better the position estimation accuracy. An improved pedestrian dead reckoning (PDR) algorithm is developed that learns gait parameters in time intervals when direct position measurements (such as GNSS positions) are available. We present a multi-rate filtering solution that leads to improved estimates of both gait parameters and position. To further extend the algorithm to more realistic scenarios, a joint classifier of the user's motion and the device's carrying mode is developed. Classification of motion mode (walking, running, standing still) and device mode (hand-held, in pocket, in backpack) provides information that can assist in the gait learning process and hence improve the position estimation. The algorithms are applied to collected data and promising results are reported. Furthermore, one of the most extensive datasets for personal navigation systems using both rigid-body motion trackers and smartphones is presented, and this dataset has also been made publicly available.
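The dead-reckoning core of PDR reduces to advancing the position by one step length along the heading per detected step. A minimal sketch (the learned gait parameters, step detection and multi-rate filter are omitted):

```python
import math

def pdr_update(pos, heading, step_length):
    """Advance a 2-D position by one detected step along the heading.

    Errors in step_length or heading accumulate step by step, which is
    why learning the gait parameters online improves the estimate.
    """
    x, y = pos
    return (x + step_length * math.cos(heading),
            y + step_length * math.sin(heading))

# Walk four steps north-east with a 0.7 m step length (illustrative).
pos = (0.0, 0.0)
for _ in range(4):
    pos = pdr_update(pos, math.radians(45), 0.7)
```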

Flight control design for modern fighter aircraft is a challenging task. Aircraft are dynamical systems that naturally contain a variety of constraints and nonlinearities, such as the maximum permissible load factor, angle of attack and control surface deflections. Taking these limitations into account in the design of control systems is becoming increasingly important as the performance and complexity of aircraft constantly increase.

The aeronautical industry has traditionally applied feedforward, anti-windup or similar techniques and various ad hoc engineering solutions to handle constraints on the aircraft. However, these approaches often rely on engineering experience and insight rather than a theoretical foundation, and can require a tremendous amount of time to tune.

In this thesis we investigate model predictive control as an alternative design tool to handle the constraints that arise in flight control design.

We derive a simple reference-tracking MPC algorithm for linear systems that builds on the dual-mode formulation, with guaranteed stability and a complexity low enough for implementation in real-time safety-critical systems.

To reduce the computational burden of nonlinear model predictive control we propose a method to handle the nonlinear constraints using a set of dynamically generated local inner polytopic approximations. The main benefit of the proposed method is that, while computationally cheap, it still guarantees recursive feasibility and convergence.

An alternative to deriving MPC algorithms with guaranteed stability properties is to analyze the closed loop stability, post design. Here we focus on deriving a tool based on Mixed Integer Linear Programming for analysis of the closed loop stability and robust stability of linear systems controlled with MPC controllers.

To test the performance of model predictive control for a real world example we design and implement a standard MPC controller in the development simulator for the JAS 39 Gripen aircraft at Saab Aeronautics. This part of the thesis focuses on practical and tuning aspects of designing MPC controllers for fighter aircraft. Finally we have compared the MPC design with an alternative approach to maneuver limiting using a command governor.

Numerical algorithms for efficiently solving optimal control problems are important for commonly used advanced control strategies, such as model predictive control (MPC), but can also be useful for advanced estimation techniques, such as moving horizon estimation (MHE). In MPC, the control input is computed by solving a constrained finite-time optimal control (CFTOC) problem on-line, and in MHE the estimated states are obtained by solving an optimization problem that often can be formulated as a CFTOC problem. Common types of optimization methods for solving CFTOC problems are interior-point (IP) methods, sequential quadratic programming (SQP) methods and active-set (AS) methods. In these types of methods, the main computational effort is often the computation of the second-order search directions. This boils down to solving a sequence of systems of equations that correspond to unconstrained finite-time optimal control (UFTOC) problems. Hence, high-performing second-order methods for CFTOC problems rely on efficient numerical algorithms for solving UFTOC problems. Developing such algorithms is one of the main focuses in this thesis. When the solution to a CFTOC problem is computed using an AS type method, the aforementioned system of equations is only changed by a low-rank modification between two AS iterations. In this thesis, it is shown how to exploit these structured modifications while still exploiting structure in the UFTOC problem using the Riccati recursion. Furthermore, direct (non-iterative) parallel algorithms for computing the search directions in IP, SQP and AS methods are proposed in the thesis. These algorithms exploit, and retain, the sparse structure of the UFTOC problem such that no dense system of equations needs to be solved serially as in many other algorithms. The proposed algorithms can be applied recursively to obtain logarithmic computational complexity growth in the prediction horizon length. 
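The backward Riccati recursion that such structure-exploiting solvers build on can be sketched for a linear-quadratic (UFTOC-type) problem as follows; the system matrices below are illustrative:

```python
import numpy as np

def riccati_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for a finite-horizon LQR problem,
    returning the feedback gains K_t with u_t = -K_t x_t.

    Running backwards from the terminal cost Qf, each step solves a
    small system instead of one large KKT system, giving O(N) cost
    in the horizon length.
    """
    P = Qf
    gains = []
    for _ in range(N):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]   # gains ordered t = 0 .. N-1

# Double integrator with sample time 0.1 s, horizon 20 (illustrative).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2); R = np.array([[1.0]]); Qf = np.eye(2)
gains = riccati_lqr(A, B, Q, R, Qf, 20)
```

The parallel algorithms in the thesis go further by splitting this serial recursion across the horizon; the sketch shows only the classical serial form.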
For the case with linear MPC problems, an alternative approach to solving the CFTOC problem on-line is to use multiparametric quadratic programming (mp-QP), where the corresponding CFTOC problem can be solved explicitly off-line. This is referred to as explicit MPC. One of the main limitations with mp-QP is the amount of memory that is required to store the parametric solution. In this thesis, an algorithm for decreasing the required amount of memory is proposed. The aim is to make mp-QP and explicit MPC more useful in practical applications, such as embedded systems with limited memory resources. The proposed algorithm exploits the structure from the QP problem in the parametric solution in order to reduce the memory footprint of general mp-QP solutions, and in particular, of explicit MPC solutions. The algorithm can be used directly in mp-QP solvers, or as a post-processing step to an existing solution.

Bayesian state estimation is a flexible framework to address relevant problems at the heart of existing and upcoming technologies. Application examples are obstacle tracking for driverless cars and indoor navigation using smartphone sensor data. Unfortunately, the mathematical solutions of the underlying theory cannot be translated to computer code in general. Therefore, this thesis discusses algorithms and approximations that are related to the Kalman filter (KF).

Four scientific articles and an introduction with the relevant background on Bayesian state estimation theory and algorithms are included. Two articles discuss nonlinear Kalman filters, which employ the KF measurement update in nonlinear models. The numerous variants are presented in a common framework and the employed moment approximations are analyzed. Furthermore, their application to target tracking problems is discussed. A third article analyzes the ensemble Kalman filter (EnKF), a Monte Carlo implementation of the KF that has been developed for high-dimensional geoscientific filtering problems. The EnKF is presented in a simple KF framework, including its challenges, important extensions, and relations to other filters. Whereas the aforementioned articles contribute to the understanding of existing algorithms, a fourth article devises novel filters and smoothers to address heavy-tailed noise. The development is based on Student’s t distribution and provides simple recursions in the spirit of the KF. The introduction and articles are accompanied by extensive simulation experiments.

Inferring hidden states from noisy observations and making predictions based on a set of input states and output observations are two challenging problems in many research areas. Examples of applications include position estimation from various measurable radio signals in indoor environments, self-navigation for autonomous cars, modeling and prediction of traffic flows, and flow pattern analysis for crowds of people. In this thesis, we mainly use the Bayesian inference framework for position estimation in an indoor environment, where the radio propagation is uncertain. In the Bayesian inference framework, it is usually hard to obtain analytical solutions. In such cases, we resort to Monte Carlo methods to solve the problem numerically. In addition, we apply Bayesian nonparametric modeling for trajectory learning in sports analytics.

The main contribution of this thesis is to propose sequential Monte Carlo methods, namely particle filtering and smoothing, for a novel indoor positioning framework based on proximity reports. The experimental results have been compared with theoretical bounds derived for this proximity-based positioning system. To improve the performance, Bayesian nonparametric modeling, namely the Gaussian process, has been applied to better characterize the radio propagation conditions. The position estimates obtained sequentially using filtering and smoothing are then compared with a static solution known as fingerprinting.

Moreover, we propose a trajectory learning framework for flow estimation in sports analytics based on Gaussian processes. To mitigate the computational cost of Gaussian processes, a grid-based on-line algorithm has been adopted for real-time applications. The resulting trajectory model for an individual athlete can be used for many purposes, such as performance prediction and analysis, or health condition monitoring. Furthermore, we aim at modeling the flow of groups of athletes, which could potentially be used for flow pattern recognition, strategy planning, and similar tasks.

System identification is used in engineering sciences to build mathematical models from data. A common issue in system identification problems is that the true inputs to the system are not fully known. In this thesis, existing approaches to unknown input problems are classified and some of their properties are analyzed.

A new indirect framework is proposed to treat system identification problems with unknown inputs. The effects of the unknown inputs are assumed to be measured through possibly unknown dynamics. Furthermore, the measurements may also be dependent on other known or measured inputs and can in these cases be called indirect input measurements. Typically, these indirect input measurements can arise when a subsystem of a larger system is of interest and only a limited set of sensors is available. Two examples are when it is desired to estimate parts of a mechanical system or parts of a dynamic network without full knowledge of the signals in the system. The input measurements can be used to eliminate the unknown inputs from a mathematical model of the system through algebraic manipulations. The resulting indirect model structure only depends on known and measured signals and can be used to estimate the desired dynamics or properties. The effects of using the input measurements are analyzed in terms of identifiability, consistency and variance properties. It is shown that cancelation of shared dynamics can occur and that the resulting estimation problem is similar to errors-in-variables and closed-loop estimation problems because of the noisy inputs used in the model. In fact, the indirect framework unifies a number of already existing system identification problems that are contained as special cases.

For completeness, an instrumental variable method is proposed as one possibility for estimating the indirect model. It is shown that multiple datasets can be used to overcome certain identifiability issues and two approaches, the multi-stage and the joint identification approach, are suggested to utilize multiple datasets for estimation of models. Furthermore, the benefits of using the indirect model in filtering and for control synthesis are briefly discussed.

To show the applicability, the framework is applied to the roll dynamics of a ship for tracking of the loading conditions. The roll dynamics is very sensitive to changes in these conditions and a worst-case scenario is that the ship will capsize. It is assumed that only motion measurements from an inertial measurement unit (IMU) together with measurements of the rudder angle are available. The true inputs are thus not available, but the measurements from the IMU can be used to form an indirect model from a well-established ship model. It is shown that only a subset of the unknown parameters can be estimated simultaneously. Data was collected in experiments with a scale ship model in a basin and the joint identification approach was selected for this application due to the properties of the model. The approach was applied to the collected data and gave promising results.

In recent years, inertial sensors have undergone major developments. The quality of their measurements has improved while their cost has decreased, leading to an increase in availability. They can be found in stand-alone sensor units, so-called inertial measurement units, but are nowadays also present in for instance any modern smartphone, in Wii controllers and in virtual reality headsets.

The term inertial sensor refers to the combination of accelerometers and gyroscopes. These measure the external specific force and the angular velocity, respectively. Integration of their measurements provides information about the sensor's position and orientation. However, the position and orientation estimates obtained by simple integration suffer from drift and are therefore only accurate on a short time scale. In order to improve these estimates, we combine the inertial sensors with additional sensors and models. To combine these different sources of information, also called sensor fusion, we make use of probabilistic models to take the uncertainty of the different sources of information into account. The first contribution of this thesis is a tutorial paper that describes the signal processing foundations underlying position and orientation estimation using inertial sensors.
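The drift mentioned above is easy to reproduce: double-integrating a biased accelerometer yields a position error that grows quadratically in time. A 1-D sketch with an illustrative bias value:

```python
def strapdown_1d(acc_meas, dt, bias=0.0):
    """Integrate accelerometer samples twice to get a 1-D position.

    A constant sensor bias (injected explicitly here) integrates into
    a linearly growing velocity error and a quadratically growing
    position error, i.e. the drift that extra sensors must suppress.
    """
    v = p = 0.0
    positions = []
    for a in acc_meas:
        v += (a + bias) * dt   # velocity from acceleration
        p += v * dt            # position from velocity
        positions.append(p)
    return positions

# Stationary sensor: true acceleration is zero, but a small bias of
# 0.01 m/s^2 produces roughly 50 m of drift after 100 s.
n, dt = 1000, 0.1
drift_free = strapdown_1d([0.0] * n, dt, bias=0.0)
drifting = strapdown_1d([0.0] * n, dt, bias=0.01)
```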

In a second contribution, we use data from multiple inertial sensors placed on the human body to estimate the body's pose. A biomechanical model encodes the knowledge about how the different body segments are connected to each other. We also show how the structure inherent to this problem can be exploited. This opens up the possibility of processing long data sets and of solving the problem in a distributed manner.

Inertial sensors can also be combined with time of arrival measurements from an ultrawideband (UWB) system. We focus both on calibration of the UWB setup and on sensor fusion of the inertial and UWB measurements. The UWB measurements are modeled by a tailored heavy-tailed asymmetric distribution. This distribution naturally handles the possibility of measurement delays due to multipath and non-line-of-sight conditions while not allowing for the possibility of measurements arriving early, i.e. traveling faster than the speed of light.

Finally, inertial sensors can be combined with magnetometers. We derive an algorithm that can calibrate a magnetometer for the presence of metallic objects attached to the sensor. Furthermore, the presence of metallic objects in the environment can be exploited by using them as a source of position information. We present a method to build maps of the indoor magnetic field and experimentally show that if a map of the magnetic field is available, accurate position estimates can be obtained by combining inertial and magnetometer measurements.

Keywords

Engineering and Technology

The automotive industry is undergoing a revolution in which the more traditional mechanical values are replaced by an ever increasing number of Advanced Driver Assistance Systems (ADAS), and advanced algorithms and software development are taking a bigger role. Increased safety, reduced emissions and the possibility of completely new business models are driving the development, and most automotive companies have started projects that aim towards fully autonomous vehicles. For industrial applications that provide a closed environment, such as mining facilities, harbors, agriculture and airports, full implementation of the technology is already available, with increased productivity, reliability and reduced wear on equipment as a result. It also provides the opportunity to create a safer working environment, since human drivers can be removed from dangerous working conditions. Regardless of the application, an important part of any mobile autonomous system is the motion planning layer. In this thesis, sampling-based motion planning algorithms are used to solve several non-holonomic and kinodynamic planning problems for car-like robotic vehicles in different application areas that all present different challenges.

First we present an extension to the probabilistic sampling-based Closed-Loop Rapidly exploring Random Tree (CL-RRT) framework that significantly increases the probability of drawing a valid sample for platforms with second-order differential constraints. When a tree extension is found infeasible, a new acceleration profile that tries to bring the vehicle to a full stop before the collision occurs is calculated. A re-simulation of the tree extension with the new acceleration profile is then performed. The framework is tested on a heavy-duty Scania G480 mining truck in a simple constructed scenario.

Furthermore, we present two different driver assistance systems for the complicated task of reversing with a truck with a dolly-steered trailer. The first is a manual system where the user can easily construct a kinematically feasible path through a graphical user interface. The second is a fully automatic planner, based on the CL-RRT algorithm where only a start and goal position need to be provided. For both approaches, the internal angles of the trailer configuration are stabilized using a Linear Quadratic (LQ) controller and path following is achieved through a pure-pursuit control law. The systems are demonstrated on a small-scale test vehicle with good results.

Finally, we look at the planning problem for an autonomous vehicle in an urban setting with dense traffic for two different time-critical maneuvers, namely, intersection merging and highway merging. In these situations, a social interplay between drivers is often necessary in order to perform a safe merge. To model this interaction a prediction engine is developed and used to predict the future evolution of the complete traffic scene given our own intended trajectory. Real-time capabilities are demonstrated through a series of simulations with varying traffic densities. It is shown, in simulation, that the proposed method is capable of safe merging in much denser traffic compared to a base-line method where a constant velocity model is used for predictions.

Pose (position and orientation) tracking in room-scaled environments is an enabling technique for many applications. Today, virtual reality (vr) and augmented reality (ar) are two examples of such applications, receiving high interest from both the public and the research community. Accurate pose tracking of the vr or ar equipment, often a camera or a headset, or of different body parts, is crucial to trick the human brain and make the virtual experience realistic. Pose tracking in room-scaled environments is also needed for reference tracking and metrology. This thesis focuses on an application to metrology. In this application, photometric models of a photo studio are needed to perform realistic scene reconstruction and image synthesis. Pose tracking of a dedicated sensor enables the creation of these photometric models. The demands on the tracking system used in this application are high. It must be able to provide sub-centimeter and sub-degree accuracy and at the same time be easy to move and install in new photo studios.

The focus of this thesis is to investigate and develop methods for a pose tracking system that satisfies the requirements of the intended metrology application. The Bayesian filtering framework is suggested because of its firm theoretical foundation in informatics and because it enables straightforward fusion of measurements from several sensors. Sensor fusion is in this thesis seen as a way to exploit complementary characteristics of different sensors to increase tracking accuracy and robustness. Four different types of measurements are considered: inertial measurements, images from a camera, range (time-of-flight) measurements from ultra-wideband (uwb) radio signals, and range and velocity measurements from echoes of transmitted acoustic signals.

A simulation study and a study of the Cramér-Rao lower filtering bound (crlb) show that an inertial-camera system has the potential to reach the required tracking accuracy. It is, however, assumed that known fiducial markers, which can be detected and recognized in images, are deployed in the environment. The study shows that many markers are required. This makes the solution more of a stationary one, and the mobility requirement is not fulfilled. A simultaneous localization and mapping (slam) solution, where naturally occurring features are used instead of known markers, is suggested to solve this problem. Evaluation using real data shows that the provided inertial-camera slam filter suffers from drift, but that support from uwb range measurements eliminates this drift. The slam solution then only depends on knowing the positions of a few stationary uwb transmitters, instead of a large number of known fiducial markers. As a last step, to increase the accuracy of the slam filter, it is investigated if and how range measurements can be complemented with velocity measurements obtained as a result of the Doppler effect. In particular, focus is put on analyzing the correlation between the range and velocity measurements and the implications this correlation has for filtering. The investigation is done in a theoretical study of reflected known signals (compare with radar and sonar), where the crlb is used as an analysis tool. The theory is validated on real data from acoustic echoes in an indoor environment.
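The way the crlb is used as an analysis tool can be illustrated on the simplest possible case: estimating a constant from N Gaussian measurements, where the sample mean attains the bound. All values below are illustrative; the thesis applies the same idea to full sensor-fusion models:

```python
import numpy as np

# CRLB sanity check: for N Gaussian measurements with variance sigma^2,
# the Fisher information is N / sigma^2, so the CRLB is sigma^2 / N.
# The sample mean is efficient and attains the bound.
rng = np.random.default_rng(0)
sigma, N, trials = 2.0, 50, 4000
crlb = sigma ** 2 / N

# Monte Carlo: empirical variance of the sample mean over many trials.
est = np.array([np.mean(3.0 + sigma * rng.standard_normal(N))
                for _ in range(trials)])
emp_var = est.var()
```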

The various elements of a modern target tracking framework are covered. Background theory on pre-processing, modelling and estimation is presented as well as some novel ideas on the topic by the author. In addition, a few applications are posed as target tracking problems for which solutions are gradually constructed as relevant theory is covered.

Among the problems considered are how to constrain targets to a region, how to use state-independent measurements to improve estimation in jump Markov models, and how to incorporate observations sampled at an uncertain time into a state-space model.

A framework is developed for tracking dolphins confined to a basin using an overhead camera that suffers from occlusions. In this scenario, conventional motion models would produce infeasible predictions outside the basin. A motion model is therefore developed in which the dolphins avoid collisions with nearby walls by turning. The basin is modelled as a polygon where each point along the edge influences the turn rate of the dolphin. The proposed model results in predictions inside the basin, increasing robustness against occlusions. An extension to a Gaussian mixture background model, providing a degree of confidence for detections, is used to improve tracking in the presence of shadows. A probabilistic data association filter is also modified to estimate the dolphin's extent as an ellipse. The proposed framework is able to maintain tracks through occlusions and poor lighting conditions.

A framework is developed for estimating takeoff times and directions of birds in circular cages using an overhead camera. A jump Markov model is used to model the stationary and flight behaviours of the birds. A proposed extension also incorporates state-independent measurements, such as blurriness, to improve mode estimation. Takeoff times and directions are estimated from mode transitions and results are compared to manually annotated data.

The cameras are inaccessible in both applications, preventing proper calibration. As an alternative, a method is proposed to estimate stationary camera models from the available data and known features in the scene. A map of the basin and the funnel dimensions are used, respectively. The method estimates a homography and distortion parameters in an invertible mapping function.

An extension to the linear Gaussian state-space models is proposed, incorporating an additional observation with an uncertain timestamp. The posterior distribution of the states is derived for the model, which is shown to be a mixture of Gaussians, as well as some estimators for the distribution. The effects of incorporating the observation with an uncertain timestamp into the model are analysed for a one-dimensional scenario. The model is also applied to improve the GPS position of an orienteering sprinter by using the control position as an observation with an uncertain timestamp.

Making decisions and predictions from noisy observations are two important and challenging problems in many areas of society. Some examples of applications are recommendation systems for online shopping and streaming services, connecting genes with certain diseases and modelling climate change. In this thesis, we make use of Bayesian statistics to construct probabilistic models given prior information and historical data, which can be used for decision support and predictions. The main obstacle with this approach is that it often results in mathematical problems lacking analytical solutions. To cope with this, we make use of statistical simulation algorithms known as Monte Carlo methods to approximate the intractable solution. These methods enjoy well-understood statistical properties but are often computationally prohibitive to employ.

The main contribution of this thesis is the exploration of different strategies for accelerating inference methods based on sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). That is, strategies for reducing the computational effort while keeping or improving the accuracy. A major part of the thesis is devoted to proposing such strategies for the MCMC method known as the particle Metropolis-Hastings (PMH) algorithm. We investigate two strategies: (i) introducing estimates of the gradient and Hessian of the target to better tailor the algorithm to the problem and (ii) introducing a positive correlation between the point-wise estimates of the target.
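Strategy (ii) can be illustrated with a stylized sketch (not the thesis's algorithm): a pseudo-marginal Metropolis-Hastings chain in which the log-target estimate is the exact value plus Gaussian noise, and the noise at the proposed point is positively correlated with the noise at the current point. All model choices below are hypothetical toy assumptions.

```python
import numpy as np

def pm_mh(n_iters, rho, sigma_u, rng):
    """Stylized pseudo-marginal MH on a N(0,1) target with a noisy log-density.

    The log-density estimate is the exact value plus N(0, sigma_u^2) noise;
    `rho` correlates the noise at the proposed point with the noise at the
    current point (rho = 0 recovers independent estimates).
    """
    def logpi(x):                          # exact log-density (up to a constant)
        return -0.5 * x * x

    x, u = 0.0, rng.normal(0.0, sigma_u)   # chain state and its noise variable
    accepts = 0
    for _ in range(n_iters):
        x_prop = x + rng.normal(0.0, 1.0)  # random-walk proposal
        # correlate the new noise with the current one (autoregressive move)
        u_prop = rho * u + np.sqrt(1.0 - rho**2) * rng.normal(0.0, sigma_u)
        log_alpha = (logpi(x_prop) + u_prop) - (logpi(x) + u)
        if np.log(rng.uniform()) < log_alpha:
            x, u = x_prop, u_prop
            accepts += 1
    return accepts / n_iters

rng = np.random.default_rng(0)
acc_indep = pm_mh(5000, rho=0.0, sigma_u=2.0, rng=rng)
acc_corr = pm_mh(5000, rho=0.99, sigma_u=2.0, rng=rng)
```

With strongly correlated estimates (rho close to 1) the noise largely cancels in the acceptance ratio, which is why the correlated chain accepts proposals far more often than the independent one.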

Furthermore, we propose an algorithm based on the combination of SMC and Gaussian process optimisation, which can provide reasonable estimates of the posterior but with a significant decrease in computational effort compared with PMH. Moreover, we explore the use of sparseness priors for approximate inference in over-parametrised mixed effects models and autoregressive processes. This can potentially be a practical strategy for inference in the big data era. Finally, we propose a general method for increasing the accuracy of the parameter estimates in non-linear state space models by applying a designed input signal.

The level of automation in our society is ever increasing. Technologies like self-driving cars, virtual reality, and fully autonomous robots, all of which were unimaginable a few decades ago, are realizable today and will become standard consumer products in the future. These technologies depend upon autonomous localization and situation awareness, where careful processing of sensory data is required. To increase efficiency, robustness and reliability, appropriate models for these data are needed. In this thesis, such models are analyzed within three different application areas, namely (1) magnetic localization, (2) extended target tracking, and (3) autonomous learning from raw pixel information.

Magnetic localization is based on one or more magnetometers measuring the induced magnetic field from magnetic objects. In this thesis we present a model for determining the position and the orientation of small magnets with an accuracy of a few millimeters. This enables three-dimensional interaction with computer programs that cannot be handled with other localization techniques. Further, an additional model is proposed for detecting wrong-way drivers on highways based on sensor data from magnetometers deployed in the vicinity of traffic lanes. Models for mapping complex magnetic environments are also analyzed. Such magnetic maps can be used for indoor localization where other systems, such as GPS, do not work.
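The magnetic models above build on the point-dipole field. As a minimal sketch (not the thesis's estimation algorithm), the dipole field and its characteristic 1/r³ decay can be written as:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def dipole_field(r, m):
    """Magnetic flux density of a point dipole with moment m at offset r."""
    r = np.asarray(r, float)
    m = np.asarray(m, float)
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0 / (4.0 * np.pi * d**3) * (3.0 * rhat * (m @ rhat) - m)

# the field decays as 1/r^3: doubling the distance scales it by 1/8
b1 = dipole_field([0.1, 0.0, 0.0], [0.0, 0.0, 1.0])
b2 = dipole_field([0.2, 0.0, 0.0], [0.0, 0.0, 1.0])
```

Localization then amounts to fitting the magnet's position and moment to magnetometer readings of this field, e.g. by nonlinear least squares.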

In the second application area, models for tracking objects from laser range sensor data are analyzed. The target shape is modeled with a Gaussian process and is estimated jointly with target position and orientation. The resulting algorithm is capable of tracking various objects with different shapes within the same surveillance region.

In the third application area, autonomous learning from high-dimensional sensor data is considered. We study one instance of this challenge, the so-called pixels-to-torques problem, where an agent must learn a closed-loop control policy from pixel information only. To solve this problem, high-dimensional time series are described using a low-dimensional dynamical model. Techniques from machine learning, together with standard tools from control theory, are used to autonomously design a controller for the system without any prior knowledge.

System models used in the applications above are often provided in continuous time. However, a major part of the applied theory is developed for discrete-time systems. Discretization of continuous-time models is hence fundamental. Therefore, this thesis ends with a method for performing such discretization using Lyapunov equations together with analytical solutions, enabling efficient implementation in software.
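The thesis performs the discretization via Lyapunov equations with analytical solutions. For orientation only, a common equivalent sketch is Van Loan's matrix-exponential construction of the discrete-time pair (Ad, Qd) from a continuous-time model dx = Ax dt + dw with incremental covariance Q dt (the small expm helper avoids external dependencies):

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via scaling-and-squaring of a truncated Taylor series."""
    n = int(max(0, np.ceil(np.log2(max(np.linalg.norm(M, 1), 1e-16))))) + 1
    S = M / 2.0**n
    E = np.eye(M.shape[0])
    T = np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ S / k
        E = E + T
    for _ in range(n):      # undo the scaling by repeated squaring
        E = E @ E
    return E

def discretize(A, Q, dt):
    """Van Loan's method: exact (Ad, Qd) for dx = A x dt + dw, cov(dw) = Q dt."""
    nx = A.shape[0]
    F = np.zeros((2 * nx, 2 * nx))
    F[:nx, :nx] = -A
    F[:nx, nx:] = Q
    F[nx:, nx:] = A.T
    G = expm(F * dt)
    Ad = G[nx:, nx:].T
    Qd = Ad @ G[:nx, nx:]
    return Ad, Qd

# scalar check against the closed-form Ornstein-Uhlenbeck solution
a, q, dt = 1.5, 0.4, 0.1
Ad, Qd = discretize(np.array([[-a]]), np.array([[q]]), dt)
```

For the scalar case the result matches the closed form Ad = exp(-a dt) and Qd = q(1 - exp(-2a dt))/(2a).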

Bayesian inference is a statistical inference technique in which Bayes' theorem is used to update the probability distribution of a random variable using observations. Except for a few simple cases, expressing such probability distributions in compact analytical form is infeasible. Approximation methods are required to express the a priori knowledge about a random variable in the form of a prior distribution. Further approximations are needed to compute posterior distributions of the random variables given the observations. When the computational complexity of representing such posteriors increases over time, as in mixture models, approximations are required to reduce the complexity of these representations.

This thesis extends existing approximation methods for Bayesian inference and generalizes them in three aspects: prior selection, posterior evaluation given the observations, and maintenance of computational complexity.

In particular, the maximum entropy properties of the first-order stable spline kernel for identification of linear time-invariant, stable and causal systems are shown. Analytical approximations are used to express the prior knowledge about the properties of the impulse response of such a system.

The variational Bayes (VB) method is used to compute an approximate posterior in two inference problems. In the first problem, an approximate posterior for the state smoothing problem for linear state-space models with unknown and time-varying noise covariances is proposed. In the second, the VB method is used for approximate inference in state-space models with skewed measurement noise.

Moreover, a novel approximation method for Bayesian inference is proposed. The proposed Bayesian inference technique is based on Taylor series approximation of the logarithm of the likelihood function. The proposed approximation is devised for the case where the prior distribution belongs to the exponential family of distributions.

Finally, two contributions are dedicated to the mixture reduction (MR) problem. The first generalizes the existing MR algorithms for Gaussian mixtures to the exponential family of distributions and compares them in an extended target tracking scenario. The second proposes a new Gaussian mixture reduction algorithm which minimizes the reverse Kullback-Leibler divergence and has specific peak-preserving properties.
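Most MR algorithms repeatedly merge pairs of components while preserving moments. A minimal one-dimensional sketch of such a moment-matching merge is shown below; note that this is the standard forward-KL merge, not the reverse-KL algorithm proposed in the thesis.

```python
def merge(w1, mu1, var1, w2, mu2, var2):
    """Moment-preserving merge of two weighted 1-D Gaussian components."""
    w = w1 + w2
    mu = (w1 * mu1 + w2 * mu2) / w
    # total variance = within-component variance + between-component spread
    var = (w1 * (var1 + (mu1 - mu) ** 2) + w2 * (var2 + (mu2 - mu) ** 2)) / w
    return w, mu, var

w, mu, var = merge(0.3, -1.0, 0.5, 0.7, 2.0, 1.0)
```

The merged component has exactly the weight, mean and variance of the two-component mixture, which is what makes repeated pairwise merging a sound reduction step.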

One of the most common advanced control strategies used in industry today is Model Predictive Control (MPC), and some reasons for its success are that it can handle multivariable systems and constraints on states and control inputs in a structured way. At each time-step in the MPC control loop the control input is computed by solving a constrained finite-time optimal control (CFTOC) problem on-line. There exist several optimization methods to solve the CFTOC problem, where two common types are interior-point (IP) methods and active-set (AS) methods. In both these types of methods, the main computational effort is known to be the computation of the search directions, which boils down to solving a sequence of Newton-system-like equations. These systems of equations correspond to unconstrained finite-time optimal control (UFTOC) problems. Hence, high-performance IP and AS methods for CFTOC problems rely on efficient algorithms for solving the UFTOC problems.

The solution to a UFTOC problem is computed by solving the corresponding Karush-Kuhn-Tucker (KKT) system, which is often done using generic sparsity exploiting algorithms or Riccati recursions. When an AS method is used to compute the solution to the CFTOC problem, the system of equations that is solved to obtain the solution to a UFTOC problem is only changed by a low-rank modification of the system of equations in the previous iteration. This structured change is often exploited in AS methods to improve performance in terms of computation time. Traditionally, this has not been possible to exploit when Riccati recursions are used to solve the UFTOC problems, but in this thesis, an algorithm for performing low-rank modifications of the Riccati recursion is presented.
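As a plain sketch of the building block being modified (without the low-rank update machinery), the backward Riccati recursion for a finite-horizon LQ problem can be written as follows; the system matrices are hypothetical double-integrator values chosen for illustration:

```python
import numpy as np

def riccati(A, B, Q, R, N):
    """Backward Riccati recursion for a finite-horizon LQ problem.

    Returns the final cost-to-go matrix P and the feedback gains K_t
    such that the optimal control is u_t = -K_t x_t.
    """
    P = Q.copy()                    # terminal cost P_N = Q (one common choice)
    gains = []
    for _ in range(N):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return P, gains[::-1]

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # hypothetical double integrator
B = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
P, gains = riccati(A, B, Q, R, N=500)

# over a long horizon P converges to the solution of the algebraic Riccati equation
K_inf = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
dare_residual = np.max(np.abs(P - (Q + A.T @ P @ (A - B @ K_inf))))
```

The recursion costs O(N) operations in the horizon length, which is what the low-rank modification techniques in the thesis preserve across active-set iterations.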

In recent years, parallel hardware has become more commonly available, and the use of parallel algorithms for solving the CFTOC problem and the underlying UFTOC problem has increased. Some existing parallel algorithms for computing the solution to this type of problems obtain the solution iteratively, and these methods may require many iterations to converge. Some other parallel algorithms compute the solution directly (non-iteratively) by solving parts of the system of equations in parallel, followed by a serial solution of a dense system of equations without the sparse structure of the MPC problem. In this thesis, two parallel algorithms that compute the solution directly (non-iteratively) in parallel are presented. These algorithms can be used in both IP and AS methods, and they exploit the sparse structure of the MPC problem such that no dense system of equations needs to be solved serially. Furthermore, one of the proposed parallel algorithms exploits the special structure of the MPC problem even in the parallel computations, which improves performance in terms of computation time even more. By using these algorithms, it is possible to obtain logarithmic complexity growth in the prediction horizon length.

Direction of Arrival (DOA) estimation has been an active research area for many decades because of its usefulness in many fields, such as wireless communication, navigation, astronomy, surveillance and medicine. In DOA estimation, emitted wavefields from unknown sources are measured with the aid of an array of sensors, and this information is then processed to determine various parameters of the sources, including the angle between each source and the array. The emitted wavefields can be electromagnetic, seismic or acoustic.

Many of the existing methods for DOA estimation were developed in the communications area for dealing with electromagnetic radiation. In this work, we investigate the problem of DOA estimation for wideband acoustic sources using a microphone array. Since microphones are becoming popular in wireless ground sensor networks, this research direction could potentially offer a cost-efficient and passive way of detecting, tracking and classifying moving targets. In this thesis, several existing spectral-based wideband DOA methods suitable for this purpose are surveyed. Further, a new method based on local polynomial expansion (LPE) is proposed. The performance of these methods is compared by testing them on both simulated and real data, where the real data come from practical field tests with different motorized vehicles. The LPE method is found to be quite competitive for this kind of wideband acoustic source in real environments.
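For orientation, a conventional (Bartlett) beamformer scan on a uniform linear array illustrates the spectral-based family of DOA methods surveyed; the simulated single narrowband source below is a toy assumption (the LPE method itself is more elaborate and handles wideband signals):

```python
import numpy as np

def steering(theta, n_mics, spacing_wl):
    """ULA steering vector; spacing_wl is element spacing in wavelengths."""
    k = np.arange(n_mics)
    return np.exp(-2j * np.pi * spacing_wl * k * np.sin(theta))

rng = np.random.default_rng(1)
theta_true = np.deg2rad(25.0)
n_mics, n_snap = 8, 200

# simulated narrowband source plus sensor noise
s = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)
X = np.outer(steering(theta_true, n_mics, 0.5), s)
X += 0.1 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))
Rxx = X @ X.conj().T / n_snap          # sample spatial covariance

# scan beamformer output power over a grid of candidate angles
grid = np.deg2rad(np.linspace(-90, 90, 361))
power = [np.real(steering(t, n_mics, 0.5).conj() @ Rxx @ steering(t, n_mics, 0.5))
         for t in grid]
theta_hat = grid[int(np.argmax(power))]
```

The estimated angle is the grid point maximizing the array output power; wideband methods essentially repeat or combine such scans across frequency bins.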

As control of large-scale complex systems has become more and more prevalent, so has the need for analyzing such controlled systems. This is particularly because many control design approaches tend to neglect intricacies in such systems, e.g., uncertainties, time delays and nonlinearities, so as to simplify the design procedure.

Robustness analysis techniques allow us to assess the effect of such neglected intricacies on performance and stability. Performing robustness analysis commonly requires solving an optimization problem. However, the number of variables of this optimization problem, and hence the computational time, scales badly with the dimension of the system. This limits our ability to analyze large-scale complex systems in a centralized manner. In addition, certain structural constraints, such as privacy requirements or geographical separation, can prevent us from even forming the analysis problem in a centralized manner.

In this thesis, we address these issues by exploiting structures that are common in large-scale systems and/or their corresponding analysis problems. This enables us to reduce the computational cost of solving these problems both in a centralized and distributed manner. In order to facilitate distributed solutions, we employ or design tailored distributed optimization techniques. Particularly, we propose three distributed optimization algorithms for solving the analysis problem, which provide superior convergence and/or computational properties over existing algorithms. Furthermore, these algorithms can also be used for solving general loosely coupled optimization problems that appear in a variety of fields ranging from control, estimation and communication systems to supply chain management and economics.

A ship's roll dynamics is very sensitive to changes in the loading conditions, and a worst-case scenario is that the ship capsizes. In fact, the mass and the center of mass are two of the most influential parameters in most mechanical systems. However, it is difficult to uniquely estimate these parameters for a ship under normal operational conditions without special experiments or equipment.

Instead of focusing on a sensor-rich environment where all possible signals on a ship can be measured and a complete model of the ship can be estimated, this thesis presents an approach where a model of a subsystem of the ship's dynamics is estimated using only a limited set of sensors. More specifically, the roll dynamics is studied and it is assumed that only motion measurements from an inertial measurement unit (IMU) together with measurements of the rudder angle are available. Hence, direct measurements of the true inputs to the subsystem are not available, but the measurements indirectly contain information about the inputs and these indirect input measurements can be used as a substitute.

To understand the properties of the proposed method, it is applied to an approximate model of the ship's roll dynamics. The analyses show that only a subset of the unknown parameters can be estimated simultaneously and that the estimation problem is similar to closed-loop system identification.

A multi-stage method that uses several datasets is introduced to circumvent the restrictions shown in the identifiability analysis. An iterative closed-loop instrumental variable approach is used to estimate subsets of the parameters in each step. The approach is verified on experimental data with good results.
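The role of the instrumental variables can be seen in a toy sketch with hypothetical numbers, far simpler than the ship model: when the regressor is correlated with the noise (as in closed loop), least squares is biased while the IV estimate remains consistent.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
theta = 2.0                      # true parameter
e = rng.normal(size=n)           # disturbance
z = rng.normal(size=n)           # instrument: correlated with x, not with e
x = z + 0.8 * e                  # regressor correlated with the disturbance
y = theta * x + e

theta_ls = (x @ y) / (x @ x)     # least squares: biased, since E[x e] != 0
theta_iv = (z @ y) / (z @ x)     # instrumental variable: consistent, E[z e] = 0
```

The least-squares estimate converges to roughly theta + 0.8/1.64 here, while the IV estimate converges to the true value, which is why the multi-stage method builds on instruments.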

It is shown that a well-established and more complete ship model can be used to derive a generalization of the approximate model, with more input measurements and a few extra parameters. The generalized model has the same basic properties as the approximate model. The added complexity is due to the ship's interaction with water. Because of this extra complexity, an iterative joint closed-loop instrumental variable approach, based on a grey-box formulation and using multiple datasets simultaneously, is introduced to estimate the parameters.

Finally, experiments with a scale ship model are described. The joint identification method is applied to the collected data and gives promising results.

Navigation and mapping in unknown environments is an important building block for increased autonomy of unmanned vehicles, since external positioning systems can be susceptible to interference or simply being inaccessible. Navigation and mapping require signal processing of vehicle sensor data to estimate motion relative to the surrounding environment and to simultaneously estimate various properties of the surrounding environment. Physical models of sensors, vehicle motion and external influences are used in conjunction with statistically motivated methods to solve these problems. This thesis mainly addresses three navigation and mapping problems which are described below.

We study how a vessel with known magnetic signature and a sensor network with magnetometers can be used to determine the sensor positions and simultaneously determine the vessel's route in an extended Kalman filter (EKF). This is a so-called simultaneous localisation and mapping (SLAM) problem with a reversed measurement relationship.

Previously determined hydrodynamic models for a remotely operated vehicle (ROV) are used together with the vessel's sensors to improve the navigation performance using an EKF. Data from sea trials is used to evaluate the system and the results show that especially the linear velocity relative to the water can be accurately determined.

The third problem addressed is SLAM with inertial sensors, accelerometers and gyroscopes, and an optical camera contained in a single sensor unit. This problem spans three publications.

We study how a SLAM estimate, consisting of a point cloud map, the sensor unit's three-dimensional trajectory and speed, as well as its orientation, can be improved by solving a nonlinear least-squares (NLS) problem. NLS minimisation of the predicted motion error and the predicted point cloud coordinates, given all camera measurements, is initialised using EKF-SLAM.

We show how NLS-SLAM can be initialised as a sequence of almost uncoupled problems with simple and often linear solutions. It also scales much better to larger data sets than EKF-SLAM. The results obtained using NLS-SLAM are significantly better using the proposed initialisation method than if started from arbitrary points. A SLAM formulation using the expectation maximisation (EM) algorithm is proposed. EM splits the original problem into two simpler problems and solves them iteratively. Here the platform motion is one problem and the landmark map is the other. The first problem is solved using an extended Rauch-Tung-Striebel smoother while the second problem is solved with a quasi-Newton method. The results using EM-SLAM are better than NLS-SLAM both in terms of accuracy and complexity.
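The extended Rauch-Tung-Striebel smoother used in EM-SLAM reduces, in the linear scalar case, to the following sketch (model parameters are hypothetical); note how the smoothed covariances can never exceed the filtered ones:

```python
import numpy as np

def kf_rts(y, A, C, Qn, Rn, x0, P0):
    """Scalar Kalman filter followed by a Rauch-Tung-Striebel smoothing pass."""
    n = len(y)
    xf, Pf, xp, Pp = [], [], [], []
    x, P = x0, P0
    for t in range(n):
        x, P = A * x, A * P * A + Qn               # time update
        xp.append(x); Pp.append(P)
        K = P * C / (C * P * C + Rn)               # measurement update
        x, P = x + K * (y[t] - C * x), (1 - K * C) * P
        xf.append(x); Pf.append(P)
    xs, Ps = [0.0] * n, [0.0] * n
    xs[-1], Ps[-1] = xf[-1], Pf[-1]
    for t in range(n - 2, -1, -1):                 # backward smoothing recursion
        G = Pf[t] * A / Pp[t + 1]
        xs[t] = xf[t] + G * (xs[t + 1] - xp[t + 1])
        Ps[t] = Pf[t] + G * (Ps[t + 1] - Pp[t + 1]) * G
    return xf, Pf, xs, Ps

rng = np.random.default_rng(3)
A, C, Qn, Rn = 0.9, 1.0, 0.1, 0.5
x, y = 0.0, []
for _ in range(100):
    x = A * x + rng.normal(0, np.sqrt(Qn))
    y.append(C * x + rng.normal(0, np.sqrt(Rn)))
xf, Pf, xs, Ps = kf_rts(y, A, C, Qn, Rn, 0.0, 1.0)
```

In EM-SLAM, the nonlinear version of this smoother handles the motion trajectory, while a quasi-Newton step handles the landmark map.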

High availability and low operational costs are critical for industrial systems. While industrial equipment is designed to endure several years of uninterrupted operation, its behavior and performance will eventually deteriorate over time. To support service and operation decisions, it is important to devise methods that infer the condition of equipment from available data.

The monitoring of industrial robots is an important problem considered in this thesis. The main focus is on the design of methods for the detection of excessive degradation due to wear in a robot joint. Since wear is related to friction, an important idea behind the proposed solutions is to analyze the behavior of friction in the joint to draw inferences about wear. Based on a proposed friction model and friction data collected from dedicated experiments, a method is suggested to estimate wear-related effects on friction. As is shown, the achieved estimates allow for a clear distinction of the wear effects even in the presence of large variations in friction associated with other variables, such as temperature and load.

In automated manufacturing, continuous and repeatable operation of equipment is important to achieve production requirements. This repetitive behavior is exploited to define a data-driven approach to diagnosis. Considering data collected from a repetitive operation, an abnormality is inferred by comparing nominal against monitored data in the distribution domain. The approach is demonstrated with successful applications to the diagnosis of wear in industrial robots and gear faults in a rotating machine.

Because only limited knowledge can be embedded in a fault detection method, it is important to evaluate solutions in scenarios of practical relevance. A simulation-based framework is proposed that allows for determining which variables affect a fault detection method the most and how these variables limit the effectiveness of the solution. Based on an average performance criterion, an approach is also suggested for a direct comparison of different methods. The ideas are illustrated for the robotics application, revealing properties of the problem and of different fault detection solutions.

An important task in fault diagnosis is correctly determining whether a condition change is present. An early and reliable detection of an abnormality is important to support service, giving enough time to perform maintenance and avoid downtime. Data-driven methods are proposed for anomaly detection that only require the availability of nominal data and minimal, meaningful specification parameters from the user. Estimates of the detection uncertainties are also possible, supporting higher-level service decisions. The approach is illustrated with simulations and real-data examples, including the robotics application.

In this thesis, we consider the problem of estimating position and orientation (6D pose) using inertial sensors (accelerometers and gyroscopes). Inertial sensors provide information about the change in position and orientation at high sampling rates. However, they suffer from integration drift and hence need to be supplemented with additional sensors. To combine information from the inertial sensors with information from other sensors we use probabilistic models, both for sensor fusion and for sensor calibration.

Inertial sensors can be supplemented with magnetometers, which are typically used to provide heading information. This relies on the assumption that the measured magnetic field is equal to a constant local magnetic field and that the magnetometer is properly calibrated. However, the presence of metallic objects in the vicinity of the sensor will make the first assumption invalid. If the metallic object is rigidly attached to the sensor, the magnetometer can be calibrated for the presence of this magnetic disturbance. Afterwards, the measurements can be used for heading estimation as if the disturbance was not present. We present a practical magnetometer calibration algorithm that is experimentally shown to lead to improved heading estimates. An alternative approach is to exploit the presence of magnetic disturbances in indoor environments by using them as a source of position information. We show that in the vicinity of a magnetic coil it is possible to obtain accurate position estimates using inertial sensors, magnetometers and knowledge of the magnetic field induced by the coil.
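A minimal sketch of the simplest part of such a calibration is fitting only a hard-iron offset b and field magnitude r by linear least squares (the thesis's algorithm also handles soft-iron effects); the synthetic sphere data below are a hypothetical stand-in for real magnetometer logs:

```python
import numpy as np

def calibrate_bias(M):
    """Hard-iron calibration: fit ||m - b|| = r to magnetometer samples M.

    Linearized as 2 m.b + (r^2 - |b|^2) = |m|^2 and solved by least squares.
    """
    A = np.hstack([2 * M, np.ones((M.shape[0], 1))])
    rhs = np.sum(M**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    b = sol[:3]
    r = np.sqrt(sol[3] + b @ b)
    return b, r

rng = np.random.default_rng(4)
b_true, r_true = np.array([0.3, -0.2, 0.5]), 1.0
u = rng.normal(size=(500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # directions on the unit sphere
M = r_true * u + b_true + 0.01 * rng.normal(size=(500, 3))
b_hat, r_hat = calibrate_bias(M)
```

Once the offset is removed, the measurements again lie on a sphere and can be used for heading estimation.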

We also consider the problem of estimating a human body’s 6D pose. For this, multiple inertial sensors are placed on the body. Information from the inertial sensors is combined using a biomechanical model which represents the human body as consisting of connected body segments. We solve this problem using an optimization-based approach and show that accurate 6D pose estimates are obtained. These estimates accurately represent the relative position and orientation of the human body, i.e. the shape of the body is accurately represented but the absolute position can not be determined.

To estimate the absolute position of the body, we consider the problem of indoor positioning using time-of-arrival measurements from an ultra-wideband (UWB) system in combination with inertial measurements. Our algorithm uses a tightly-coupled sensor fusion approach and is shown to lead to accurate position and orientation estimates. To be able to obtain position information from the UWB measurements, it is imperative that accurate estimates of the receivers' positions and clock offsets are available. Hence, we also present an easy-to-use algorithm to calibrate the UWB system. It is based on a maximum likelihood formulation and models the UWB measurements with a heavy-tailed asymmetric noise distribution to account for measurement outliers.
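Stripped of the clock offsets and the heavy-tailed noise model, the positioning core is multilateration from time-of-arrival ranges. A Gauss-Newton sketch on noise-free synthetic data (the anchor geometry is hypothetical) is:

```python
import numpy as np

def multilaterate(anchors, ranges, x0, iters=20):
    """Gauss-Newton solution of min sum_i (||x - a_i|| - r_i)^2."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        J = (x - anchors) / d[:, None]        # Jacobian of ||x - a_i|| w.r.t. x
        res = d - ranges
        x = x - np.linalg.lstsq(J, res, rcond=None)[0]
    return x

# hypothetical ceiling-mounted anchors and a true position below them
anchors = np.array([[0.0, 0.0, 2.5], [5.0, 0.0, 2.5],
                    [5.0, 4.0, 2.5], [0.0, 4.0, 2.5]])
p_true = np.array([2.0, 1.5, 1.0])
ranges = np.linalg.norm(anchors - p_true, axis=1)   # noise-free TOA ranges
p_hat = multilaterate(anchors, ranges, x0=[2.5, 2.0, 1.2])
```

In a tightly-coupled filter, each range enters the estimator directly instead of first being converted to a position fix like this.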

Nonlinear state space models (SSMs) are a useful class of models for describing many different kinds of systems. Some examples of applications are modelling the volatility in financial markets, the number of infected persons during an influenza epidemic, and the annual number of major earthquakes around the world. In this thesis, we are concerned with state inference, parameter inference and input design for nonlinear SSMs based on sequential Monte Carlo (SMC) methods.

The state inference problem consists of estimating some latent variable that is not directly observable in the output from the system. The parameter inference problem is concerned with fitting a pre-specified model structure to the observed output from the system. In input design, we are interested in constructing an input to the system, which maximises the information that is available about the parameters in the system output. All of these problems are analytically intractable for nonlinear SSMs. Instead, we make use of SMC to approximate the solution to the state inference problem and to solve the input design problem. Furthermore, we make use of Markov chain Monte Carlo (MCMC) and Bayesian optimisation (BO) to solve the parameter inference problem.
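The SMC workhorse referred to above is the bootstrap particle filter. A minimal sketch on a scalar linear-Gaussian model (chosen only because it is easy to sanity-check against the raw measurements) is:

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical scalar linear-Gaussian SSM: x_t = a x_{t-1} + v_t, y_t = x_t + e_t
a, q, r, T, N = 0.8, 0.1, 1.0, 200, 1000
x_true, ys = [], []
x = 0.0
for _ in range(T):
    x = a * x + rng.normal(0, np.sqrt(q))
    x_true.append(x)
    ys.append(x + rng.normal(0, np.sqrt(r)))

# bootstrap particle filter: propagate, weight, estimate, resample
particles = rng.normal(0, 1, N)
est = []
for y in ys:
    particles = a * particles + rng.normal(0, np.sqrt(q), N)
    logw = -0.5 * (y - particles) ** 2 / r
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est.append(w @ particles)                   # weighted posterior mean
    idx = rng.choice(N, size=N, p=w)            # multinomial resampling
    particles = particles[idx]

err_pf = np.mean((np.array(est) - np.array(x_true)) ** 2)
err_y = np.mean((np.array(ys) - np.array(x_true)) ** 2)
```

The filtered estimates have markedly lower mean-squared error than the raw measurements, illustrating the state inference problem; nonlinear models simply replace the propagation and weighting steps.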

In this thesis, we propose new methods for parameter inference in SSMs using both Bayesian and maximum likelihood inference. More specifically, we propose a new proposal for the particle Metropolis-Hastings algorithm, which includes gradient and Hessian information about the target distribution. We demonstrate that the use of this proposal can reduce the length of the burn-in phase and improve the mixing of the Markov chain.

Furthermore, we develop a novel parameter inference method based on the combination of BO and SMC. We demonstrate that this method requires a relatively small number of samples from the analytically intractable likelihood, which are computationally costly to obtain. Therefore, it could be a good alternative to other optimisation-based parameter inference methods. The proposed BO and SMC combination is also extended for parameter inference in nonlinear SSMs with intractable likelihoods using approximate Bayesian computations. This method is used for parameter inference in a stochastic volatility model with α-stable returns using real-world financial data.

Finally, we develop a novel method for input design in nonlinear SSMs which makes use of SMC methods to estimate the expected information matrix. This information is used in combination with graph theory and convex optimisation to estimate optimal inputs with amplitude constraints. We also consider parameter estimation in ARX models with Student-t innovations and unknown model orders. Two different algorithms are used for this inference: reversible Jump Markov chain Monte Carlo and Gibbs sampling with sparseness priors. These methods are used to model real-world EEG data with promising results.

One of the main tasks for an industrial robot is to move the end-effector along a predefined path with a specified velocity and acceleration. Different applications place different requirements on performance. For some applications it is essential that the tracking error is extremely small, whereas other applications require time-optimal tracking. Independent of the application, the controller is a crucial part of the robot system. The most common controller configuration uses only measurements of the motor angular positions and velocities, instead of the position and velocity of the end-effector. The development of new cost-optimised robots has introduced unwanted flexibilities in the joints and the links. The consequence is that it is no longer possible to achieve the desired performance and robustness by measuring only the motor angular positions.

This thesis investigates if it is possible to estimate the end-effector position using Bayesian estimation methods for state estimation, here represented by the extended Kalman filter and the particle filter. The arm-side information is provided by an accelerometer mounted at the end-effector. The measurements consist of the motor angular positions and the acceleration of the end-effector. In a simulation study on a realistic flexible industrial robot, the angular position performance is shown to be close to the fundamental Cramér-Rao lower bound. The methods are also verified in experiments on an ABB IRB4600 robot, where the dynamic performance of the position for the end-effector is significantly improved. There is no significant difference in performance between the different methods. Instead, execution time, model complexities and implementation issues have to be considered when choosing the method. The estimation performance depends strongly on the tuning of the filters and the accuracy of the models that are used. Therefore, a method for estimating the process noise covariance matrix is proposed. Moreover, sampling methods are analysed and a low-complexity analytical solution for the continuous-time update in the Kalman filter, that does not involve oversampling, is proposed.

The thesis also investigates two types of control problems. First, the norm-optimal iterative learning control (ILC) algorithm for linear systems is extended to an estimation-based norm-optimal ILC algorithm where the controlled variables are not directly available as measurements. The algorithm can also be applied to non-linear systems. The objective function in the optimisation problem is modified to incorporate not only the mean value of the estimated variable, but also information about the uncertainty of the estimate. Second, H∞ controllers are designed and analysed on a linear four-mass flexible joint model. It is shown that the control performance can be increased, without adding new measurements, compared to previous controllers. Measuring the end-effector acceleration increases the control performance even more. A non-linear model has to be used to describe the behaviour of a real flexible joint. An H∞-synthesis method for control of a flexible joint, with non-linear spring characteristic, is therefore proposed.

Aircraft are dynamic systems that naturally contain a variety of constraints and nonlinearities, such as the maximum permissible load factor, angle of attack and control surface deflections. Taking these limitations into account in the design of control systems is becoming increasingly important as the performance and complexity of the controlled systems constantly increase. It is especially important in the design of control systems for fighter aircraft, which require maximum control performance in order to have the upper hand in a dogfight or to outmaneuver an enemy missile. Pilots therefore often maneuver the aircraft very close to the limit of what it is capable of, and an automatic system that guards against violating the restrictions, a so-called flight envelope protection system, is a necessity.

In other application areas, nonlinear optimal control methods have been used successfully to solve this problem, but in the aeronautical industry these methods have not yet been established. One of the more popular methods that is well suited to handling constraints is Model Predictive Control (MPC), which is used extensively in areas such as the process and refinery industries. In practice, model predictive control means that the control system repeatedly solves an optimization problem, based on a prediction of the aircraft's future motion, in order to compute the optimal control signal. The aircraft's operating limitations then appear as constraints in the optimization problem.

In this thesis, we explore model predictive control and derive two fast, low-complexity algorithms: one for guaranteed stability and feasibility of nonlinear systems, and one for reference tracking for linear systems. In reference tracking model predictive control for linear systems, we build on the dual-mode formulation of MPC; our goal is to make minimal changes to this framework in order to develop a reference-tracking algorithm with guaranteed stability and low complexity, suitable for implementation in real-time safety-critical systems.
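The optimization at the core of linear MPC can be illustrated in its simplest, unconstrained form. The sketch below builds the batch output prediction over a finite horizon and applies only the first computed input, the receding-horizon principle. The function name and regularization weight are illustrative, and the inequality constraints that motivate MPC are omitted for brevity.

```python
import numpy as np

def mpc_input(A, B, C, x0, r, N, lam=1e-3):
    """First move of an unconstrained receding-horizon tracking controller.
    Builds the batch prediction y = Phi x0 + Gamma u over horizon N and
    minimizes ||y - r||^2 + lam ||u||^2 in closed form."""
    n, m = B.shape
    p = C.shape[0]
    # Free response: Phi maps the initial state to the predicted outputs.
    Phi = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
    # Forced response: block-Toeplitz matrix of impulse-response terms.
    Gamma = np.zeros((N * p, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*p:(i+1)*p, j*m:(j+1)*m] = \
                C @ np.linalg.matrix_power(A, i - j) @ B
    H = Gamma.T @ Gamma + lam * np.eye(N * m)
    g = Gamma.T @ (r - Phi @ x0)
    u = np.linalg.solve(H, g)
    return u[:m]  # receding horizon: apply only the first input
```

With input and state constraints added, the same objective becomes a quadratic program that is re-solved at every sample.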

To reduce the computational burden of nonlinear model predictive control, several methods to approximate the nonlinear constraints have been proposed in the literature. Many work in an ad hoc fashion, resulting in conservatism or, worse, an inability to guarantee recursive feasibility. Several methods also work in an iterative manner, which can be quite time-consuming, making them inappropriate for fast real-time applications. In this thesis we propose a method to handle the nonlinear constraints using a set of dynamically generated local inner polytopic approximations. The main benefit of the proposed method is that, while computationally cheap, it can still guarantee recursive feasibility and convergence.

Sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods provide computational tools for systematic inference and learning in complex dynamical systems, such as nonlinear and non-Gaussian state-space models. This thesis builds upon several methodological advances within these classes of Monte Carlo methods.

Particular emphasis is placed on the combination of SMC and MCMC in so-called particle MCMC algorithms. These algorithms rely on SMC for generating samples from the often highly autocorrelated state trajectory. A specific particle MCMC algorithm, referred to as particle Gibbs with ancestor sampling (PGAS), is suggested. By making use of backward sampling ideas, albeit implemented in a forward-only fashion, PGAS enjoys good mixing even when using seemingly few particles in the underlying SMC sampler. This results in a computationally competitive particle MCMC algorithm. As illustrated in this thesis, PGAS is a useful tool for both Bayesian and frequentist parameter inference as well as for state smoothing. The PGAS sampler is successfully applied to the classical problem of Wiener system identification, and it is also used for inference in the challenging class of non-Markovian latent variable models.

Many nonlinear models encountered in practice contain some tractable substructure. As a second problem considered in this thesis, we develop Monte Carlo methods capable of exploiting such substructures to obtain more accurate estimators than what is otherwise provided. For the filtering problem, this can be done by using the well-known Rao-Blackwellized particle filter (RBPF). The RBPF is analysed in terms of asymptotic variance, resulting in an expression for the performance gain offered by Rao-Blackwellization. Furthermore, a Rao-Blackwellized particle smoother is derived, capable of addressing the smoothing problem in so-called mixed linear/nonlinear state-space models. The idea of Rao-Blackwellization is also used to develop an online algorithm for Bayesian parameter inference in nonlinear state-space models with affine parameter dependencies.
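The SMC building block underlying particle MCMC methods such as PGAS is the particle filter. The following is a minimal bootstrap particle filter for a scalar state, not the PGAS algorithm itself; the function arguments are hypothetical placeholders for the model of interest.

```python
import numpy as np

def bootstrap_pf(y, f, loglik, x0_sampler, n_particles, rng):
    """Minimal bootstrap particle filter (scalar state): propagate,
    weight, resample. f(x, rng) samples the state transition and
    loglik(yt, x) is the log measurement density. Returns the
    filtered mean at each time step."""
    x = x0_sampler(n_particles, rng)
    means = []
    for yt in y:
        # Propagate each particle through the dynamics.
        x = f(x, rng)
        # Weight by the measurement likelihood (normalized in log domain).
        logw = loglik(yt, x)
        logw -= logw.max()
        w = np.exp(logw)
        w /= w.sum()
        means.append(np.sum(w * x))
        # Multinomial resampling to combat weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x = x[idx]
    return np.array(means)
```

PGAS adds a retained reference trajectory and ancestor sampling on top of such a sampler, which is what gives the good mixing discussed above.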

Small and medium sized Unmanned Aerial Vehicles (UAVs) are today used in military missions, and will in the future find many new application areas such as surveillance for exploration and security. To enable all these foreseen applications, the UAVs have to be cheap and lightweight, which restricts the sensors that can be used for navigation and surveillance. This thesis investigates several aspects of how fusion of navigation and imaging sensors can improve both tasks to a level that, with the traditional approach of separating the navigation system from the applications, would require much more expensive sensors. The core idea is that vision sensors can support the navigation system by providing odometric information about the motion, while the navigation system can make the vision algorithms used to map the surrounding environment more efficient. The unified framework for this kind of approach is called Simultaneous Localisation and Mapping (SLAM), and it is applied here to inertial sensors, radar and an optical camera.

Synthetic Aperture Radar (SAR) uses a radar and the motion of the UAV to provide an image of the microwave reflectivity of the ground. SAR images are a good complement to optical images, giving an all-weather surveillance capability, but they require an accurate navigation system to be focused, which typical UAV sensors cannot provide. However, by using the inertial sensors, which measure the UAV's motion, together with information from the SAR images, which reveals how image quality depends on that motion, both higher navigation accuracy and, consequently, better-focused images can be obtained. The fusion of these sensors can be performed in both batch and sequential form. For the first approach, we propose an optimisation formulation of the navigation and focusing problem, while the second results in a filtering approach. For the optimisation method, the focus of processed SAR images is measured with the image entropy and with an image matching approach, where SAR images are matched to a map of the area. In the proposed filtering method, the motion information is estimated from the raw radar data; it corresponds to the time derivative of the range between the UAV and the imaged scene, which can be related to the motion of the UAV.

Another imaging sensor that has been exploited in this framework is an ordinary optical camera. Similar to the SAR case, camera images and inertial sensors can be used to support the navigation estimate and simultaneously build a three-dimensional map of the observed environment, so-called inertial/visual SLAM. Also here, the problem is posed in an optimisation framework, leading to a batch Maximum Likelihood (ML) estimate of the navigation parameters and the map. The ML problem is solved both in the straightforward way, resulting in a nonlinear least-squares problem where the map and the navigation parameters are all treated as parameters, and with the Expectation-Maximisation (EM) approach. In the EM approach, the unknown variables are split into two sets, hidden variables and actual parameters; here, the map is treated as parameters and the navigation states are seen as hidden variables. This split makes the total problem computationally cheaper to solve than the original ML formulation. Both optimisation problems mentioned above are nonlinear and non-convex, requiring a good initial solution in order to obtain good parameter estimates. For this purpose, a method for initialisation of inertial/visual SLAM is devised, where the conditionally linear structure of the problem is used to obtain an initial estimate of the parameters. The benefits and performance improvements of the methods are illustrated on both simulated and real data.

This thesis is on filtering in state space models. First, we examine approximate Kalman filters for nonlinear systems, where the optimal Bayesian filtering recursions cannot be solved exactly. These algorithms rely on the computation of certain expected values. Second, the problem of filtering in linear systems that are subject to heavy-tailed process and measurement noise is addressed.

Expected values of nonlinearly transformed random vectors are an essential ingredient in any Kalman filter for nonlinear systems, because of the required joint mean vector and joint covariance of the predicted state and measurement. The problem of computing expected values, however, goes beyond the filtering context. Insights into the underlying integrals and useful simplification schemes are given for elliptically contoured distributions, which include the Gaussian and Student’s t distribution. Furthermore, a number of computation schemes are discussed. The focus is on methods that allow for simple implementation and that have an assessable computational cost. Covered are basic Monte Carlo integration, deterministic integration rules and the unscented transformation, and schemes that rely on approximation of involved nonlinearities via Taylor polynomials or interpolation. All methods come with realistic accuracy statements, and are compared on two instructive examples.
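Of the computation schemes discussed, the unscented transformation is among the simplest to sketch. The version below is the basic 2n+1 sigma-point rule with a single tuning parameter `kappa`; other weight parametrizations exist, so treat this as one illustrative variant.

```python
import numpy as np

def unscented_transform(mean, cov, g, kappa=0.0):
    """Approximate the mean and covariance of g(x), x ~ N(mean, cov),
    from 2n+1 sigma points (basic unscented transform)."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    # Sigma points: the mean, plus/minus the scaled Cholesky columns.
    sigma = np.vstack([mean, mean + L.T, mean - L.T])
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    gy = np.array([np.atleast_1d(g(s)) for s in sigma])
    m = w @ gy                      # weighted mean of transformed points
    d = gy - m
    return m, (w[:, None] * d).T @ d  # weighted covariance
```

For linear g the rule is exact, which is a useful check; for nonlinear g the accuracy depends on how well the sigma points capture the relevant moments.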

Heavy-tailed process and measurement noise in state space models can be accounted for by utilizing Student’s t distribution. Based on the expressions for conditioning and marginalization of t random variables, a compact filtering algorithm for linear systems is derived. The algorithm exhibits some similarities with the Kalman filter, but involves nonlinear processing of the measurements in the form of a squared residual in one update equation. The derived filter is compared to state-of-the-art filtering algorithms on a challenging target tracking example, and outperforms all but one optimal filter that knows the exact instances at which outliers occur.
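The measurement update of such a Student's t filter can be sketched as below, following the standard conditioning formula for jointly t-distributed variables with a common degrees-of-freedom parameter; the exact parametrization in the thesis may differ.

```python
import numpy as np

def t_update(x, P, nu, y, H, R):
    """Measurement update under a joint Student's t model (scale matrices
    P and R, common dof nu). Looks like a Kalman update, except that the
    posterior scale matrix is rescaled by the squared innovation -- this
    is the nonlinear processing of the measurement."""
    S = H @ P @ H.T + R
    Sinv = np.linalg.inv(S)
    K = P @ H.T @ Sinv
    e = y - H @ x                # innovation
    d2 = float(e @ Sinv @ e)     # squared (Mahalanobis) residual
    d = len(y)
    x_new = x + K @ e
    P_new = (nu + d2) / (nu + d) * (P - K @ S @ K.T)
    return x_new, P_new, nu + d
```

As nu grows the rescaling factor tends to one and the ordinary Kalman update is recovered; a large residual inflates the posterior scale, which is what gives the filter its robustness to outliers.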

The presented material is embedded into a coherent thesis, with a concise introduction to the Bayesian filtering and state estimation problems; an extensive survey of available filtering algorithms that includes the Kalman filter, Kalman filters for nonlinear systems, and the particle filter; and an appendix that provides the required probability theory basis.

Mathematical models of physical systems are pervasive in engineering. These models can be used to analyze properties of the system, to simulate the system, or to synthesize controllers. However, many of these models are too complex or too large for standard analysis and synthesis methods to be applicable. Hence, there is a need to reduce the complexity of models. In this thesis, techniques for reducing the complexity of large linear time-invariant (lti) state-space models and linear parameter-varying (lpv) models are presented. Additionally, a method for synthesizing controllers is also presented.

The methods in this thesis all revolve around a system theoretical measure called the H2-norm, and the minimization of this norm using nonlinear optimization. Since the optimization problems rapidly grow large, significant effort is spent on understanding and exploiting the inherent structures available in the problems to reduce the computational complexity when performing the optimization.
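As a small concrete example of the central quantity, the H2 norm of a stable, strictly proper discrete-time state-space model can be computed from the controllability Gramian, here obtained by vectorizing the Lyapunov equation. This is a sketch for small models; the thesis works with much larger, structured problems where such a dense solve would be impractical.

```python
import numpy as np

def h2_norm_discrete(A, B, C):
    """H2 norm of x+ = Ax + Bu, y = Cx (stable A, strictly proper):
    solve the Lyapunov equation P = A P A' + B B' by vectorization,
    then return sqrt(trace(C P C'))."""
    n = A.shape[0]
    Q = B @ B.T
    # vec(A P A') = (A kron A) vec(P) with column-stacked vec.
    vecP = np.linalg.solve(np.eye(n * n) - np.kron(A, A),
                           Q.flatten(order='F'))
    P = vecP.reshape(n, n, order='F')
    return float(np.sqrt(np.trace(C @ P @ C.T)))
```

Minimizing this quantity over reduced-order model parameters is a nonlinear optimization problem, which is where the structure exploitation discussed above comes in.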

The first part of the thesis addresses the classical model-reduction problem of lti state-space models. Various H2 problems are formulated and solved using the proposed structure-exploiting nonlinear optimization technique. The standard problem formulation is extended to incorporate also frequency-weighted problems and norms defined on finite frequency intervals, both for continuous and discrete-time models. Additionally, a regularization-based method to account for uncertainty in data is explored. Several examples reveal that the method is highly competitive with alternative approaches.

Techniques for finding lpv models from data, and for reducing the complexity of lpv models, are presented. The basic ideas introduced in the first part of the thesis are extended to the lpv case, once again covering a range of different setups. lpv models are commonly used for analysis and synthesis of controllers, but the efficiency of these methods depends heavily on a particular algebraic structure in the lpv models. A method to account for this and derive models suitable for controller synthesis is proposed. Many of the methods are thoroughly tested on a realistic modeling problem arising in the design and flight clearance of an Airbus aircraft model.

Finally, output-feedback H2 controller synthesis for lpv models is addressed by generalizing the ideas and methods used for modeling. One of the ideas here is to skip the lpv modeling phase before creating the controller, and instead synthesize the controller directly from the data, which classically would have been used to generate a model to be used in the controller synthesis problem. The method specializes to standard output-feedback H2 controller synthesis in the lti case, and favorable comparisons with alternative state-of-the-art implementations are presented.

Mathematical models are commonly used in technical applications to describe the behavior of a system. These models can be estimated from data, which is known as system identification. Usually the models are used to calculate the output for a given input, but in this thesis, the estimation of inverse models is investigated. That is, we want to find a model that can be used to calculate the input for a given output. In this setup, the goal is to minimize the difference between the input and the output from the cascaded systems (system and inverse). A good model would be one that reconstructs the original input when used in series with the original system.

Different methods for estimating a system inverse exist. The inverse model can be based on a forward model, or it can be estimated directly by reversing the use of input and output in the identification procedure. The models obtained using the different approaches capture different aspects of the system, and the choice of method can have a large impact. Here, it is shown in a small linear example that a direct estimation of the inverse can be advantageous, when the inverse is supposed to be used in cascade with the system to reconstruct the input.
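The small linear example can be reproduced in spirit with an FIR least-squares fit: estimating the inverse directly amounts to swapping the roles of input and output in the regression. The helper below is an illustrative sketch, not the thesis's estimator.

```python
import numpy as np

def fit_fir(y_in, y_out, order):
    """Least-squares FIR fit: y_out[t] ~ sum_{k=0}^{order-1} b[k] y_in[t-k].
    Calling fit_fir(y, u, order), i.e. regressing the system input u on
    the measured output y, gives a direct estimate of the inverse."""
    T = len(y_in)
    # Build the regressor matrix of delayed versions of y_in.
    X = np.column_stack([np.r_[np.zeros(k), y_in[:T - k]]
                         for k in range(order)])
    b, *_ = np.linalg.lstsq(X, y_out, rcond=None)
    return b
```

For the system y[t] = u[t] + 0.5 u[t-1], the exact inverse has the impulse response (-0.5)^k, and the direct fit recovers a truncated version of it from input-output data.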

Inverse systems turn up in many different applications, such as sensor calibration and power amplifier (PA) predistortion. PAs used in communication devices can be nonlinear, which causes interference in adjacent transmission channels that appears as noise to anyone transmitting there. Therefore, linearization of the amplifier is needed, and a prefilter called a predistorter is used. In this thesis, the predistortion problem has been investigated for a type of PA called the outphasing power amplifier, where the input signal is decomposed into two branches that are amplified separately by highly efficient nonlinear amplifiers and then recombined. If the decomposition and summation of the two parts are not perfect, nonlinear terms will be introduced in the output, and predistortion is needed.

Here, a predistorter has been constructed based on a model of the PA. In a first method, the structure of the outphasing amplifier has been used to model the distortion, and from this model a predistorter can be estimated. However, this involves solving two nonconvex optimization problems, with the risk of obtaining a suboptimal solution. By exploiting the structure of the PA, the problem can be reformulated such that the PA modeling essentially reduces to solving two least-squares (LS) problems, which are convex. In a second step, an analytical description of an ideal predistorter can be used to obtain a predistorter estimate. Another approach is to compute the predistorter without a PA model by estimating the inverse directly. The methods have been evaluated in simulations and in measurements, and it is shown that the predistortion improves the linearity of the overall power amplifier system.

With the demand for more advanced fighter aircraft, relying on relaxed stability or even unstable flight mechanical characteristics to gain flight performance, more focus has been put on model-based systems engineering to help with the design work. The flight control system design is one important part that relies on this modeling. It has therefore become more important to develop flight mechanical models that are highly accurate in the whole flight envelope. For today’s newly developed fighters, the basic aircraft characteristics change between linear and nonlinear as well as stable and unstable, as an effect of the desired capability of advanced maneuvering at subsonic, transonic and supersonic speeds.

This thesis combines the subject of system identification, which is the art of building mathematical models of dynamical systems based on measurements, with aeronautics in order to find methods to identify flight mechanical characteristics from flight tests. Here, a challenging aeronautical identification problem combining instability and nonlinearity is treated.

Two aspects are considered. The first is identification during a flight test with the intent to ensure that enough information is available in the resulting test data. Here, a frequency domain method is used. This idea has been taken from an existing method to which some improvements have been made. One of these improvements is to use an Instrumental Variable approach to take care of disturbances coming from atmospheric turbulence. The method treats linear systems that can be both stable and unstable. The improved method shows promising results, but needs further work to become robust against outliers and missing data.

The other aspect is post-flight identification. Here, five different direct identification methods, which treat unstable and nonlinear systems, have been compared. Three of the methods are variations of the prediction-error method. The fourth is a parameter and state estimation method and the fifth method is a state estimation method based on an augmented system approach. The simplest of the prediction-error methods, based on a parametrized observer approach, is least sensitive to noise and initial offsets of the model parameters for the studied cases. This approach is attractive since it does not have any parameters that the user has to tune in order to get the best performance.

All methods in this thesis have been validated on simulated data where the system is known, and have also been tested on real flight test data.

Over the last 20 years, navigation has almost become synonymous with satellite positioning, e.g. the Global Positioning System (GPS). On land, at sea or in the air, on the road or in a city, knowing one's position is a question of getting a clear line of sight to enough satellites. Unfortunately, since the signals are extremely weak, there are environments the GPS signals cannot reach but where positioning is still highly sought after, such as indoors and underwater. Also, because the signals are so weak, GPS is vulnerable to jamming. This thesis is about alternative means of positioning for three scenarios where GPS cannot be used.

Indoors, there is a desire to accurately position first responders, police officers and soldiers. This could make their work both safer and more efficient. In this thesis, an inertial navigation system using a foot-mounted inertial magnetic measurement unit is studied. For such systems, zero velocity updates can be used to significantly reduce the drift in distance travelled. Unfortunately, the estimated direction of motion is also subject to drift, causing large positioning errors. We have therefore chosen to thoroughly study the key problem of robustly estimating heading indoors.

To measure heading, magnetic field measurements can be used as a compass. Unfortunately, they are often disturbed indoors, making them unreliable. For estimation support, the turn rate of the sensor can be measured by a gyro, but such sensors often suffer from bias problems. In this work, we present two different approaches to estimate heading despite these shortcomings. Our first system uses a Kalman filter bank that recursively estimates whether the magnetic readings are disturbed or undisturbed. Our second approach estimates the entire history of headings at once, by matching integrated gyro measurements to a vector of magnetic heading measurements. Large-scale experiments are used to evaluate both methods. When the heading estimation is incorporated into our positioning system, experiments show that positioning errors are reduced significantly. We also present a probabilistic stand-still detection framework based on accelerometer and gyro measurements.
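The first approach can be caricatured with a single-state heading filter that integrates the gyro and accepts a magnetometer heading only when it passes an innovation gate. The full filter bank in the thesis is more elaborate, and all tuning values below are illustrative.

```python
import numpy as np

def heading_filter(gyro, mag, dt, q=1e-4, r=0.05, gate=9.0):
    """1-state heading Kalman filter: predict with the integrated gyro
    rate, update with the magnetic heading only when the normalized
    innovation squared passes a chi-square-style gate, so that disturbed
    magnetometer readings are skipped."""
    theta, P = mag[0], r
    est = []
    for w, m in zip(gyro, mag):
        theta += w * dt          # gyro propagation
        P += q
        # Innovation, wrapped to (-pi, pi] to handle angle discontinuities.
        e = np.arctan2(np.sin(m - theta), np.cos(m - theta))
        S = P + r
        if e * e / S < gate:     # accept only plausible magnetic headings
            K = P / S
            theta += K * e
            P *= (1 - K)
        est.append(theta)
    return np.array(est)
```

With a biased gyro, the gated magnetometer updates bound the heading drift; when the field is disturbed, the filter coasts on the gyro alone.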

The second and third problems studied are both maritime. Naval navigation systems are today heavily dependent on GPS. Since GPS is easily jammed, vessels are vulnerable in critical situations. In this work, we describe a radar-based backup positioning system to be used in case of GPS failure. Radar scans are matched using visual features to detect how the surroundings have changed, thereby describing how the vessel has moved. Finally, we study the problem of underwater positioning, an environment GPS signals cannot reach. A sensor network can track vessels using acoustics and the magnetic disturbances they induce, but in order to do so, the sensors themselves first have to be accurately positioned. We present a system that positions the sensors using a friendly vessel with a known magnetic signature and trajectory. Simulations show that by studying the magnetic disturbances that the vessel produces, the location of each sensor can be accurately estimated.

Localization is essential in a variety of applications such as navigation systems, aerospace and surface surveillance, robotics and animal migration studies to mention a few. There are many standard techniques available, where the most common are based on information from satellite or terrestrial radio beacons, radar networks or vision systems.

In this thesis, two alternative techniques are investigated. The first localization technique is based on one or more magnetometers measuring the magnetic field induced by a magnetic object. These measurements depend on the position and the magnetic signature of the object and can be described with models derived from electromagnetic theory. For this technology, two applications have been analyzed. The first application is traffic surveillance, which has a high need for robust localization systems. By deploying one or more magnetometers in the vicinity of the traffic lane, vehicles can be detected and classified. Such systems can be used for safety purposes, such as detecting wrong-way drivers on highways, as well as for statistical purposes by monitoring the traffic flow.
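A standard model for such measurements, stated here as background rather than as the thesis's exact model, is the magnetic point dipole: the object is summarized by a dipole moment, and the field at a magnetometer follows from electromagnetic theory.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def dipole_field(r, m):
    """Magnetic flux density at offset r [m] from a point dipole with
    moment m [A*m^2]: B = mu0/(4 pi) * (3 rhat (m.rhat) - m) / |r|^3."""
    r = np.asarray(r, float)
    m = np.asarray(m, float)
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0 / (4 * np.pi) * (3 * rhat * (m @ rhat) - m) / d ** 3
```

Since the field decays as the inverse cube of the distance, the measured magnitude carries range information, which is what makes localization and classification from a few roadside magnetometers possible.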

The second application is indoor localization, where a mobile magnetometer measures the stationary magnetic field induced by magnetic structures in indoor environments. In this work, models for such magnetic environments are proposed and evaluated. The second localization technique uses light sensors measuring light intensity during day and night. After registering the times of sunrise and sunset from this data, basic formulas from astronomy can be used to locate the sensor. The main application is localization of small migrating animals. In this work, a framework for localizing migrating birds using light sensors is proposed. The framework has been evaluated on data from a common swift, which was equipped with a light sensor during a period of ten months.
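The astronomical formulas involved can be sketched as follows, ignoring the equation of time and atmospheric refraction, so this is a simplified illustration rather than the proposed framework: longitude follows from the time of solar noon, and latitude from the day length via the sunrise hour-angle relation cos w0 = -tan(lat) tan(declination).

```python
import numpy as np

def locate(t_rise, t_set, declination_deg):
    """Rough latitude/longitude [deg] from sunrise/sunset times [UTC hours],
    ignoring the equation of time and refraction. Requires a nonzero solar
    declination for the latitude estimate."""
    # Solar noon fixes longitude: 15 degrees of longitude per hour.
    noon = 0.5 * (t_rise + t_set)
    lon = 15.0 * (12.0 - noon)
    # Day length fixes latitude via cos(w0) = -tan(lat) tan(declination).
    omega0 = np.radians(15.0 * 0.5 * (t_set - t_rise))
    delta = np.radians(declination_deg)
    lat = np.degrees(np.arctan(-np.cos(omega0) / np.tan(delta)))
    return lat, lon
```

Near the equinoxes the day length is close to 12 hours everywhere, so the latitude estimate degrades, which is one of the practical difficulties such a framework has to handle.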

Industrial robots are designed to endure several years of uninterrupted operation and are therefore very reliable. However, no amount of design effort can prevent deterioration over time, and equipment will eventually fail. The impact can, nevertheless, be considerably reduced if good maintenance/service practices are followed. The current practice for service of industrial robots is based on preventive and corrective policies, with little consideration of the actual condition of the system. In this scenario, the serviceability of industrial robots can be greatly improved with the use of condition monitoring/diagnosis methods, allowing for condition-based maintenance (cbm).

This thesis addresses the design of condition monitoring methods for industrial robots. The main focus is on the monitoring and diagnosis of excessive degradations caused by wear of the mechanical parts. The wear processes may take several years to be of significance, but can evolve rapidly once they start to appear. An early detection of excessive wear levels can therefore allow for cbm, increasing maintainability and availability. Since wear is related to friction, the basic idea pursued is to analyze the friction behavior to infer about wear.

To allow this, an extensive study of friction in robot joints is undertaken in this work. The effects of joint temperature, load and wear changes on static friction in a robot joint are modeled based on empirical observations. It is found that the effects of load and temperature on friction are comparable to those caused by wear. Joint temperature and load are typically not measured, but will always be present in applications. Therefore, diagnosis solutions must be able to cope with them.

Different methods are proposed which allow for robust wear monitoring. First, a wear estimator is suggested, where wear estimates are made possible with the use of a test cycle and a friction model. Second, a method is defined which exploits the repetitive behavior found in many applications of industrial robots: the results of executing the same task at different instances in time are compared to provide an estimate of how the system has changed over the period. Methods are also suggested that consider changes in the distribution of data logged from the robot. It is shown through simulations and experiments that robust wear monitoring is made possible with the proposed methods.

This paper presents an overview of the extended target tracking research undertaken at the division of Automatic Control at Linköping University. The PHD and CPHD filters for multiple extended target tracking under clutter and unknown association are summarized, with focus on the Gaussian mixture and Gaussian inverse Wishart implementations. The paper elaborates on measurement set partitioning, the measurement generating Poisson rates, the probability of detection, and practical examples of measurement models.

In this report, the derivation of the Bayesian Bhattacharyya bound for discrete-time filtering as proposed by Reece and Nicholson [1] is revisited. It turns out that the general results presented in [1] are incorrect, as some expectations appearing in the information matrix recursions are missing. This report presents the corrected results and it is argued that the missing expectations are only zero in a number of special cases. A nonlinear toy example is used to illustrate when this is not the case.

The interest for system identification in dynamic networks has increased recently with a wide variety of applications. In many cases, it is intractable or undesirable to observe all nodes in a network and thus, to estimate the complete dynamics. If the complete dynamics is not desired, it might even be challenging to estimate a subset of the network if key nodes are unobservable due to correlation between the nodes. In this contribution, we will discuss an approach to treat this problem. The approach relies on additional measurements that are dependent on the unobservable nodes and thus indirectly contain information about them. These measurements are used to form an alternative indirect model that is only dependent on observed nodes. The purpose of estimating this indirect model can be either to recover information about modules in the original network or to make accurate predictions of variables in the network. Examples are provided for both recovery of the original modules and prediction of nodes.

In recent years, micro-machined electromechanical system (MEMS) inertial sensors (3D accelerometers and 3D gyroscopes) have become widely available due to their small size and low cost. Inertial sensor measurements are obtained at high sampling rates and can be integrated to obtain position and orientation information. These estimates are accurate on a short time scale, but suffer from integration drift over longer time scales. To overcome this issue, inertial sensors are typically combined with additional sensors and models. In this tutorial we focus on the signal processing aspects of position and orientation estimation using inertial sensors, discussing different modeling choices and a selected number of important algorithms. The algorithms include optimization-based smoothing and filtering as well as computationally cheaper extended Kalman filter implementations.

Interaction measures quantify the input-output relations in MIMO processes and can support control structure selection (CSS). Interaction measures are typically computed from an existing process model. The study of input-output interactions based on data can complement missing information in a model, e.g., revealing unknown relations in a complex system or adjusting for time-dependent behavior. This paper presents a unified approach for data-driven estimation of Gramian-based interaction measures from input-output data. Given open- or closed-loop data, a high-order Vector ARX (VARX) model is identified and its parameters are used to calculate predictor Markov parameters, together with a covariance estimate. Three interaction measures, based on the Hankel, Hilbert-Schmidt-Hankel and H2 norms, are calculated from the estimated predictor Markov parameters, and uncertainty estimates are provided for the last two, allowing for robust CSS. A solution which is recursive in the data points is presented, making the approach practical for large datasets. The approach is verified through simulations, and several possible extensions are discussed. As the method is suitable for open- and closed-loop data and for large datasets, it opens up for data-driven control structure selection based on operational data from entire plants.
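The pipeline can be illustrated in a simplified scalar form: estimate Markov parameters by a least-squares fit (here a plain FIR fit rather than the VARX-predictor route of the paper), stack them into a Hankel matrix, and read interaction information off its singular values. Names and sizes below are illustrative.

```python
import numpy as np

def markov_hankel(u, y, n_par, n_hankel):
    """Estimate Markov parameters of a SISO channel by an FIR
    least-squares fit, then stack them into a Hankel matrix whose
    singular values feed Gramian-based interaction measures (the
    largest one approximates the Hankel norm)."""
    T = len(u)
    # Regressors: u delayed by 1..n_par samples (strictly proper channel).
    X = np.column_stack([np.r_[np.zeros(k + 1), u[:T - k - 1]]
                         for k in range(n_par)])
    g, *_ = np.linalg.lstsq(X, y, rcond=None)   # g[k] ~ C A^k B
    H = np.array([[g[i + j] for j in range(n_hankel)]
                  for i in range(n_hankel)])
    return np.linalg.svd(H, compute_uv=False)
```

For a MIMO process, running this per input-output channel and comparing the resulting norms is the basic idea behind Gramian-based interaction measures; the paper's VARX route additionally handles closed-loop data and provides uncertainty estimates.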

We are interested in Bayesian modelling of panel data using a mixed effects model with heterogeneity in the individual random effects. We compare two different approaches for modelling the heterogeneity using a mixture of Gaussians. In the first model, we assume an infinite mixture model with a Dirichlet process prior, which is a non-parametric Bayesian model. In the second model, we assume an over-parametrised finite mixture model with a sparseness prior. Recent work indicates that the second model can be seen as an approximation of the former. In this paper, we investigate this claim and compare the estimates of the posteriors and the mixing obtained by Gibbs sampling in these two models. The results from using both synthetic and real-world data support the claim that the estimates of the posterior from both models agree even when the data record is finite.
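The relation between the two priors can be illustrated at the level of the mixture weights: a truncated stick-breaking draw from a Dirichlet process with concentration alpha, versus a symmetric Dirichlet(alpha/K) draw for an over-parametrised finite mixture. This is a prior-only sketch (the Gibbs samplers and the panel-data model are not reproduced), and the "effective number of components" summary is an assumed diagnostic, not one taken from the paper.

```python
import numpy as np

def stick_breaking(alpha, k_max, rng):
    """Truncated stick-breaking draw of Dirichlet-process weights."""
    v = rng.beta(1.0, alpha, size=k_max)
    return v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))

def sparse_finite(alpha, k, rng):
    """Over-parametrised finite mixture: symmetric Dirichlet(alpha/k),
    which approximates the Dirichlet process as k grows."""
    return rng.dirichlet(np.full(k, alpha / k))

rng = np.random.default_rng(2)
alpha, K, draws = 1.0, 50, 2000
n_eff = lambda w: 1.0 / np.sum(w**2)   # effective number of components
dp = np.mean([n_eff(stick_breaking(alpha, K, rng)) for _ in range(draws)])
sf = np.mean([n_eff(sparse_finite(alpha, K, rng)) for _ in range(draws)])
```

Both priors concentrate mass on a handful of components for small alpha, which is the behaviour the sparse finite mixture borrows from the nonparametric model.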

This report contains supplementary material for the paper, and gives detailed proofs of all lemmas and propositions that could not be included in the paper due to space limitations. The notation is adapted from the paper.

This report describes the motivation, the current state and the future actions of an improvement process in engineering education at the Brazilian higher education institution called the Military Institute of Engineering. Based on the reasons for why and how to change, the CDIO framework was chosen, at the end of 2014, as the kernel of this improvement process. The activities realized, the plan for the future actions and the open questions are presented in this report.

The first challenge in robustness analysis of large-scale interconnected uncertain systems is to provide a model of such systems in a standard form that is required within different analysis frameworks. This becomes particularly important for large-scale systems, as analysis tools that can handle such systems heavily rely on the special structure within such model descriptions. Here we propose an automated framework for providing such models of large-scale interconnected uncertain systems for use in Integral Quadratic Constraint (IQC) analysis. Specifically, in this paper we put forth a methodological way to obtain such models from a block-diagram and nested description of interconnected uncertain systems. We describe the details of this automated framework using an example.

In this technical report, some derivations for the smoother proposed in [1] are presented. More specifically, the derivations for the cyclic iteration needed to solve the variational Bayes smoother for linear state-space models with unknown process and measurement noise covariances in [1] are presented. Further, the variational iterations are compared with iterations of the Expectation Maximization (EM) algorithm for smoothing linear state-space models with unknown noise covariances.
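The cyclic structure being compared can be illustrated with the EM side of the comparison in a scalar toy setting: an E-step consisting of a Kalman filter plus RTS smoothing pass, followed by closed-form M-step updates of the two noise variances. This is a sketch of plain EM under an assumed scalar model, not the variational Bayes smoother of [1], which additionally places priors on the covariances.

```python
import numpy as np

def em_smoother(y, a, q0, r0, iters=100, x0=0.0, p0=1.0):
    """EM for the scalar linear state-space model
        x[t+1] = a x[t] + w,  w ~ N(0, q)
        y[t]   =   x[t] + v,  v ~ N(0, r)
    alternating RTS smoothing (E-step) with variance updates (M-step)."""
    n = len(y)
    q, r = q0, r0
    for _ in range(iters):
        # E-step, forward pass: Kalman filter.
        xp, pp = np.zeros(n), np.zeros(n)   # predicted mean/variance
        xf, pf = np.zeros(n), np.zeros(n)   # filtered mean/variance
        xp[0], pp[0] = x0, p0
        for t in range(n):
            if t > 0:
                xp[t] = a * xf[t - 1]
                pp[t] = a * a * pf[t - 1] + q
            k = pp[t] / (pp[t] + r)
            xf[t] = xp[t] + k * (y[t] - xp[t])
            pf[t] = (1.0 - k) * pp[t]
        # E-step, backward pass: RTS smoother with lag-one covariances.
        xs, ps = xf.copy(), pf.copy()
        m = np.zeros(n - 1)                 # Cov(x[t+1], x[t] | y)
        for t in range(n - 2, -1, -1):
            g = pf[t] * a / pp[t + 1]
            xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
            ps[t] = pf[t] + g * g * (ps[t + 1] - pp[t + 1])
            m[t] = g * ps[t + 1]
        # M-step: E[(y[t]-x[t])^2 | y] and E[(x[t+1]-a x[t])^2 | y]
        # expand into smoothed moments and the lag-one covariance m.
        r = np.mean((y - xs) ** 2 + ps)
        q = np.mean(
            xs[1:] ** 2 + ps[1:] + a * a * (xs[:-1] ** 2 + ps[:-1])
            - 2.0 * a * (xs[1:] * xs[:-1] + m)
        )
    return q, r

# Simulate data with known variances and check that EM recovers them.
rng = np.random.default_rng(3)
a, q_true, r_true, n = 0.9, 0.1, 0.5, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + np.sqrt(q_true) * rng.standard_normal()
y = x + np.sqrt(r_true) * rng.standard_normal(n)
q_hat, r_hat = em_smoother(y, a, q0=1.0, r0=1.0)
```

The variational iterations in [1] have the same alternating flavour, but replace the point updates of q and r with updates of full posterior distributions over the noise covariances.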