By injecting false data through compromised sensors, an adversary can drive the probability of detection in a sensor network-based spatial field surveillance system to arbitrarily low values. As a countermeasure, a small subset of sensors may be secured. Leveraging the theory of Matched Subspace Detection, we propose and evaluate several detectors that add robustness to attacks when such trusted nodes are available. Our results reveal the performance-security tradeoff of these schemes and can be used to determine the number of trusted nodes required for a given performance target.

Data-injection attacks on spatial field detection corrupt a subset of measurements to cause erroneous decisions. We consider a centralized decision scheme exploiting spatial field smoothness to overcome the lack of knowledge of system parameters such as the noise variance. We obtain closed-form expressions for system performance and investigate strategies for an intruder injecting false data in a fraction of the sensors in order to reduce the probability of detection. The problem of determining the most vulnerable subset of sensors is also analyzed.

This paper focuses on the problem of positioning a source using angle-of-arrival measurements taken by a wireless sensor network in which some of the nodes experience non-line-of-sight (NLOS) propagation conditions. In order to mitigate the errors induced by the nodes in NLOS, we derive an algorithm that combines the expectation-maximization algorithm with a weighted least-squares estimation of the source position so that the nodes in NLOS are eventually identified and discarded. Moreover, a distributed version of this algorithm based on a diffusion strategy that iteratively refines the position estimate while driving the network to a consensus is presented.
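As a minimal sketch of the weighted least-squares step described above, the snippet below solves for a 2-D source position from bearing measurements. It assumes bearings measured from known node positions and takes the per-node weights as given (in the actual algorithm they would come from the EM identification of NLOS nodes); all names are illustrative, not the paper's implementation.

```python
import math

def wls_position(nodes, bearings, weights):
    """Weighted least-squares source position from angle-of-arrival lines.

    Each node at (xi, yi) measuring bearing ti constrains the source to the
    line sin(ti)*(x - xi) - cos(ti)*(y - yi) = 0. Stacking these constraints
    gives an overdetermined linear system, solved here via the 2x2 weighted
    normal equations in closed form.
    """
    S = [[0.0, 0.0], [0.0, 0.0]]   # A^T W A
    r = [0.0, 0.0]                 # A^T W b
    for (xi, yi), ti, wi in zip(nodes, bearings, weights):
        a = (math.sin(ti), -math.cos(ti))
        b = math.sin(ti) * xi - math.cos(ti) * yi
        S[0][0] += wi * a[0] * a[0]
        S[0][1] += wi * a[0] * a[1]
        S[1][1] += wi * a[1] * a[1]
        r[0] += wi * a[0] * b
        r[1] += wi * a[1] * b
    S[1][0] = S[0][1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    x = (S[1][1] * r[0] - S[0][1] * r[1]) / det
    y = (S[0][0] * r[1] - S[1][0] * r[0]) / det
    return x, y
```

With noise-free bearings from non-collinear nodes, the three lines intersect at the true source, which the solver recovers exactly.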

This paper addresses the problem of distributed estimation of a parameter vector in the presence of noisy input and output data as well as data faults, performed by a wireless sensor network in which only local interactions among the nodes are allowed. In the presence of unreliable observations, standard estimators become biased and perform poorly at low signal-to-noise ratios. We propose two different distributed approaches based on the Expectation-Maximization algorithm: in the first one the regressors are estimated at each iteration, whereas the second one does not require explicit regressor estimation. Numerical results show that the proposed methods approach the performance of a clairvoyant scheme with knowledge of the random data faults.

Current multibeam satellite systems consist of a very large number of spot beams. In this paper, we analyze such systems from the large-scale MIMO perspective and establish a comparison with massive MIMO systems. It is shown that the large number of beams has important operational implications and simplifies the analysis, since it allows the use of asymptotic results. However, it is also shown that multibeam satellite systems cannot be considered massive MIMO systems.

We consider Total Least Squares (TLS) estimation in a network in which each node has access to a subset of equations of an overdetermined linear system. Previous distributed approaches require that the number of equations at each node be larger than the dimension L of the unknown parameter. We present novel distributed TLS estimators which can handle as few as a single equation per node. In the first scheme, the network computes an extended correlation matrix via standard iterative average consensus techniques, and the TLS estimate is extracted afterwards by means of an eigenvalue decomposition (EVD). The second scheme is EVD-free, but requires that a linear system of size L be solved at each iteration by each node. Replacing this step by a single Gauss-Seidel subiteration is shown to be an effective means to reduce computational cost without sacrificing performance.
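For the scalar case (L = 1), the EVD-based scheme above admits a compact closed form, sketched below under simplifying assumptions: the extended correlation matrix of the stacked data [a b] is only 2x2, so its smallest eigenpair can be written out directly. In the distributed setting this matrix would be the output of average consensus; here it is simply accumulated over all equations.

```python
import math

def tls_scalar(pairs):
    """Total least squares for a*x ~= b with scalar unknown x (L = 1).

    Builds the 2x2 extended correlation matrix of the stacked data [a b]
    and extracts x from the eigenvector associated with the smallest
    eigenvalue: x = -v_a / v_b.
    """
    saa = sum(a * a for a, _ in pairs)
    sab = sum(a * b for a, b in pairs)
    sbb = sum(b * b for _, b in pairs)
    # smallest eigenvalue of [[saa, sab], [sab, sbb]]
    lam = 0.5 * (saa + sbb - math.hypot(saa - sbb, 2.0 * sab))
    # unnormalized eigenvector (v_a, v_b) = (sab, lam - saa)
    return -sab / (lam - saa)
```

On noise-free data the smallest eigenvalue is zero and the TLS estimate coincides with the exact solution; with noisy a and b it corrects the bias that ordinary least squares would incur.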

We address the problem of distributed estimation of a parameter from a set of noisy observations collected by a sensor network, assuming that some sensors may be subject to data failures and report only noise. In such a scenario, simple schemes such as the Best Linear Unbiased Estimator exhibit an error floor at moderate-to-high signal-to-noise ratios (SNR), whereas previously proposed methods based on hard decisions on data failure events degrade as the SNR decreases. Aiming at optimal performance over the whole range of SNRs, we adopt a Maximum Likelihood framework based on the Expectation-Maximization (EM) algorithm. The statistical model and the iterative nature of the EM method allow for a diffusion-based distributed implementation, whereby the information propagation is embedded in the iterative update of the parameters. Numerical examples show that the proposed algorithm practically attains the Cramér–Rao Lower Bound at all SNR values and compares favorably with other approaches.
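The EM machinery underlying this line of work can be illustrated with a deliberately simplified centralized toy version: a scalar parameter, Gaussian noise of known variance, and a known failure probability (the paper's algorithm is distributed and estimates more than this sketch does). Each E-step computes the posterior probability that a sample is failure-free; each M-step re-estimates the parameter as a weighted mean.

```python
import math

def em_estimate(y, sigma2, eps, iters=50):
    """EM estimate of a scalar theta from samples that are theta + noise
    with probability 1 - eps, or pure noise (data failure) with
    probability eps. Noise is N(0, sigma2); eps and sigma2 are assumed
    known in this sketch.
    """
    theta = sum(y) / len(y)  # initialize with the plain sample mean
    for _ in range(iters):
        # E-step: posterior probability that each sample is failure-free
        g = []
        for yi in y:
            p1 = (1.0 - eps) * math.exp(-(yi - theta) ** 2 / (2.0 * sigma2))
            p0 = eps * math.exp(-yi ** 2 / (2.0 * sigma2))
            g.append(p1 / (p0 + p1))
        # M-step: weighted mean, down-weighting likely failures
        theta = sum(gi * yi for gi, yi in zip(g, y)) / sum(g)
    return theta
```

Unlike a hard-decision scheme, the soft weights degrade gracefully at low SNR, where failure-free and failed samples are hard to separate.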

Many problems in digital communications involve wideband radio signals. As the most recent example, the impressive advances in Cognitive Radio systems make the development of sampling schemes for wideband radio signals with spectral holes all the more necessary. This is equivalent to considering a sparse multiband signal in the framework of Compressive Sampling theory. Starting from previous results on multicoset sampling and recent advances in compressive sampling, we analyze the matrix involved in the corresponding reconstruction equation and define a new method for the design of universal multicoset codes, that is, codes guaranteeing perfect reconstruction of the sparse multiband signal.

Detecting the presence of a white Gaussian signal distorted by a noisy time-varying channel is addressed by means of three different detectors. First, the generalized likelihood ratio test (GLRT) is found for the case where the channel has no temporal structure, resulting in the well-known Bartlett’s test. Then it is shown that, under the transformation group given by scaling factors, a locally most powerful invariant test (LMPIT) does not exist. Two alternative approaches are explored in the low signal-to-noise ratio (SNR) regime: the first assigns a prior probability density function (pdf) to the channel (hence modeled as random), whereas the second assumes an underlying basis expansion model (BEM) for the (now deterministic) channel and obtains the maximum likelihood (ML) estimates of the parameters relevant for the detection problem. The performance of these detectors is evaluated via Monte Carlo simulation.
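For reference, Bartlett's test mentioned above checks whether the power of zero-mean data is constant across time segments. A minimal sketch of the uncorrected GLR form of the statistic (the classical small-sample bias correction is omitted here) is:

```python
import math

def bartlett_glr(segments):
    """Uncorrected Bartlett (GLR) statistic for equality of variances across
    equal-length segments of zero-mean data. The statistic is nonnegative,
    equals zero when all segment variances coincide, and grows when the
    power profile varies over time (e.g., a signal through a time-varying
    channel is present).
    """
    k = len(segments)
    n = len(segments[0])
    s2 = [sum(x * x for x in seg) / n for seg in segments]
    pooled = sum(s2) / k
    return n * (k * math.log(pooled) - sum(math.log(v) for v in s2))
```

By the arithmetic-geometric mean inequality the statistic is always nonnegative, and a threshold on it yields the detector.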

Spectrum sensing constitutes a key ingredient in many cognitive radio paradigms in order to detect and protect primary transmissions. Most sensing schemes in the literature assume a time-invariant channel. However, when operating in low Signal-to-Noise Ratio (SNR) conditions, observation times are necessarily long and may become larger than the coherence time of the channel. In this paper we consider the problem of detecting an unknown constant-magnitude waveform in frequency-flat time-varying channels with a noise background of unknown variance. The channel is modeled using a basis expansion model (BEM) with random coefficients. Adopting a generalized likelihood ratio (GLR) approach to deal with the nuisance parameters leads to a non-convex optimization problem. We discuss different ways to circumvent this problem, including several low-complexity approximations to the GLR test as well as an efficient fixed-point iterative method to obtain the true GLR statistic. The approximations exhibit a performance ceiling in terms of probability of detection as the SNR increases, whereas the true GLR test does not. Thus, the proposed fixed-point iteration constitutes the preferred choice in applications requiring a high probability of detection.

Video forensics is an emerging discipline that aims to infer, in a blind fashion, the processing history undergone by a digital video. In this work we introduce a new forensic footprint and, based on it, propose a method for detecting whether a video has been encoded twice; if so, we also estimate the size of the Group Of Pictures (GOP) employed during the first encoding. As shown in the experiments, the footprint proves very robust even in realistic settings (i.e., when encoding is carried out at typical compression rates), which are rarely addressed by existing techniques.

In the context of spectrum sensing, we investigate the performance of detectors equipped with M antennas (co-located or distributed) under Rayleigh fading, in terms of detection diversity. Rather than the high-SNR concept of diversity order common in the communications literature, we adopt the notion recently advocated by Daher and Adve in the radar community: the slope of the average probability of detection (\bar{P}_D) vs. SNR curve at \bar{P}_D = 0.5. This definition is well suited to spectrum sensing, which invariably deals with low SNR levels. It is shown that the diversity order grows as M for an optimal centralized detector having access to all observations, whereas for the two distributed schemes considered (the multiantenna energy detector and the OR detector) it grows no faster than √M.

Spectrum sensing is a key component of the Cognitive Radio paradigm. Typically, primary signals have to be detected with uncalibrated receivers at signal-to-noise ratios (SNRs) well below decodability levels. Multiantenna detectors exploit spatial independence of receiver thermal noise to boost detection performance and robustness. We study the problem of detecting a Gaussian signal with rank-P unknown spatial covariance matrix in spatially uncorrelated Gaussian noise with unknown covariance using multiple antennas. The generalized likelihood ratio test (GLRT) is derived for two scenarios. In the first one, the noises at all antennas are assumed to have the same (unknown) variance, whereas in the second, a generic diagonal noise covariance matrix is allowed in order to accommodate calibration uncertainties in the different antenna frontends. In the latter case, the GLRT statistic must be obtained numerically, for which an efficient method is presented. Furthermore, for asymptotically low SNR, it is shown that the GLRT does admit a closed form, and the resulting detector performs well in practice. Extensions are presented in order to account for unknown temporal correlation in both signal and noise, as well as frequency-selective channels.

Spectrum sensing design for Cognitive Radio systems is challenged by the nature of the wireless medium, which makes the detection requirements difficult to achieve by standalone sensors. To combat shadowing and fading, distributed strategies are usually proposed. However, most distributed approaches are based on the energy detector, which is not robust to noise uncertainty. This phenomenon can be overcome by multi-antenna sensors exploiting spatial independence of the noise process. We combine both ideas to develop distributed detectors for multiantenna sensors. Fusion rules are provided for sensors based on the Generalized Likelihood Ratio as well as for ad hoc detectors derived from geometric considerations. Simulation results are provided comparing the performance of the different strategies under lognormal shadowing and Ricean fading.
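As a simple baseline for the kind of decision fusion discussed above, the OR rule on hard local decisions from independent sensors has closed-form network-level operating characteristics (the paper's fusion rules operate on GLR statistics; this sketch only illustrates the baseline):

```python
def or_fusion(p_local, num_sensors):
    """Probability that the OR rule declares a detection when each of
    num_sensors independent sensors fires with probability p_local.
    Plug in the local false-alarm rate to get the network P_FA, or the
    local detection probability to get the network P_D."""
    return 1.0 - (1.0 - p_local) ** num_sensors

def local_pfa_for(network_pfa, num_sensors):
    """Per-sensor false-alarm rate meeting a network target under OR fusion."""
    return 1.0 - (1.0 - network_pfa) ** (1.0 / num_sensors)
```

This makes the tradeoff explicit: adding sensors raises the network detection probability, but each sensor must then run at a tighter local false-alarm rate to keep the network-level false-alarm rate fixed.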