This work addresses the uneven traffic demand scenario in multi-beam satellite systems, in which a hot-spot beam is surrounded by cold beams. After partitioning the hot-spot beam into different sectors, resource pulling from cold neighbouring beams is allowed following an aggressive frequency-reuse scheme. As a consequence, the level of co-channel interference within the hot-spot beam increases. A scheme known as Non-Coherent Rate-Splitting (NCRS) is employed to cope with this interference, based on the exclusive use of magnitude channel state information at the transmitter (CSIT). The receiver complexity is increased with respect to full CSIT precoding schemes, which are considered for benchmarking purposes. Different NCRS strategies are analyzed and compared with several partial and full CSIT schemes. The proposed solution not only improves upon the partial CSIT benchmarks, but also performs competitively against full CSIT precoders.

We consider the forward link of a multibeam satellite system with high spectral reuse and the novel low-complexity transmission and detection strategies from ["Exploratory analysis of superposition coding and rate splitting for multibeam satellite systems", ISWCS 2018]. More specifically, we study the impact of a time offset between the antenna beams that cooperate to simultaneously serve a given user. Assuming Gaussian signaling, we provide closed-form expressions for the achievable rate region. It is demonstrated that, in the absence of timing information at the gateway, this region is not affected by a time offset. Our numerical results further show that, when timing is known at the gateway, an offset of half a symbol period at both user terminals is optimal in terms of spectral efficiency.

By injecting false data through compromised sensors, an adversary can drive the probability of detection in a sensor network-based spatial field surveillance system to arbitrarily low values. As a countermeasure, a small subset of sensors may be secured. Leveraging the theory of Matched Subspace Detection, we propose and evaluate several detectors that add robustness to attacks when such trusted nodes are available. Our results reveal the performance-security tradeoff of these schemes and can be used to determine the number of trusted nodes required for a given performance target.

We consider the problem of estimating the frame error rate (FER) of a given memoryless binary symmetric channel by observing the success or failure of transmitted packets. Whereas FER estimation is relatively straightforward if all observations correspond to packets with equal length, the problem becomes considerably more complex when this is not the case. We develop FER estimators when transmissions of different lengths are observed, together with the Cramér-Rao Lower Bound (CRLB). Although the main focus is on Maximum Likelihood (ML) estimation, we also obtain low-complexity schemes that perform close to optimally in some scenarios. In a second stage, we consider the case in which FER estimation is performed at a node different from the receiver, and incorporate the impairment of unreliable observations by considering noisy ACK/NAK feedback links. The impact of unreliable feedback is analyzed by means of the corresponding CRLB. In this setting, the ML estimator is obtained by applying the Expectation-Maximization algorithm to jointly estimate the error probabilities of the data and feedback links. Simulation results illustrate the benefits of the proposed estimators.
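The ML formulation under the binary symmetric channel model can be sketched numerically: a frame of n bits is received in error with probability 1 - (1 - p)^n, and the bit error probability p is obtained by maximizing the resulting likelihood over the observed frame outcomes. The following Python snippet is an illustrative sketch of this idea under those assumptions, not the paper's exact estimator; all function names are ours.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ml_bit_error_prob(lengths, errors):
    """ML estimate of the bit error probability p of a memoryless BSC from
    frame-level success/failure observations of varying lengths. A frame of
    n bits is received in error with probability 1 - (1 - p)**n."""
    lengths = np.asarray(lengths, dtype=float)
    errors = np.asarray(errors, dtype=float)

    def neg_log_lik(p):
        q = 1.0 - p  # per-bit success probability
        return -np.sum(errors * np.log1p(-q ** lengths)
                       + (1.0 - errors) * lengths * np.log(q))

    res = minimize_scalar(neg_log_lik, bounds=(1e-9, 0.5), method='bounded')
    return res.x

# synthetic variable-length frames over a BSC with p = 2e-3
rng = np.random.default_rng(0)
p_true = 2e-3
lengths = rng.integers(50, 500, size=400)
errors = rng.random(400) < 1.0 - (1.0 - p_true) ** lengths
p_hat = ml_bit_error_prob(lengths, errors)
# the FER prediction for any frame length n is then 1 - (1 - p_hat)**n
```

Note that a single scalar optimization suffices because the per-bit error probability is the only unknown; the FER of any frame length follows from it.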

Data-injection attacks on spatial field detection corrupt a subset of measurements to cause erroneous decisions. We consider a centralized decision scheme exploiting spatial field smoothness to overcome lack of knowledge on system parameters such as noise variance. We obtain closed-form expressions for system performance and investigate strategies for an intruder injecting false data in a fraction of the sensors in order to reduce the probability of detection. The problem of determining the most vulnerable subset of sensors is also analyzed.

Millimeter wave communication systems use large antenna arrays to provide good average received power and to take advantage of multi-stream MIMO communication. Unfortunately, due to power consumption in the analog front-end, it is impractical to perform fully digital precoding at baseband. Hybrid precoding/combining architectures have been proposed to overcome this limitation. The hybrid structure splits the MIMO processing between the digital and analog domains, while keeping the performance close to that of the fully digital solution. In this paper, we introduce and analyze several algorithms that efficiently design hybrid precoders and combiners starting from the known optimum digital precoder/combiner, which can be computed when perfect channel state information is available. We propose several low-complexity solutions which provide different trade-offs between performance and complexity. We show that the proposed iterative solutions perform better in terms of spectral efficiency and/or are faster than previous methods in the literature. All of them provide designs which perform close to the known optimal digital solution. Finally, we study the effects of quantizing the analog component of the hybrid design and show that even with coarse quantization, the average rate performance is good.
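The factorization at the heart of such designs can be illustrated with a simple alternating scheme: given the known optimal digital precoder F_opt, alternate a least-squares baseband update with a phase-extraction update of the unit-modulus RF precoder. This is a generic sketch in the spirit of alternating-minimization hybrid designs, not one of the paper's specific algorithms; the dimensions and function names are ours.

```python
import numpy as np

def hybrid_factorize(F_opt, n_rf, n_iter=100, seed=0):
    """Approximate a digital precoder F_opt (n_t x n_s) as F_rf @ F_bb, with
    F_rf (n_t x n_rf) constrained to unit-modulus (phase-only) entries and
    F_bb (n_rf x n_s) unconstrained, by alternating a least-squares baseband
    update with a phase-extraction RF update."""
    n_t, n_s = F_opt.shape
    rng = np.random.default_rng(seed)
    F_rf = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (n_t, n_rf)))
    for _ in range(n_iter):
        F_bb = np.linalg.pinv(F_rf) @ F_opt                   # LS baseband step
        F_rf = np.exp(1j * np.angle(F_opt @ F_bb.conj().T))   # keep phases only
    F_bb = np.linalg.pinv(F_rf) @ F_opt
    return F_rf, F_bb

# toy target: a random semi-unitary precoder for 32 antennas, 2 streams, 4 RF chains
rng = np.random.default_rng(1)
G = rng.standard_normal((32, 2)) + 1j * rng.standard_normal((32, 2))
F_opt = np.linalg.qr(G)[0]
F_rf, F_bb = hybrid_factorize(F_opt, n_rf=4)
rel_err = np.linalg.norm(F_opt - F_rf @ F_bb) / np.linalg.norm(F_opt)
```

The phase-only constraint models the analog network of phase shifters; the unconstrained baseband matrix absorbs the amplitude information the RF stage cannot represent.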

Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.

In this paper we analyze the mean squared error (MSE) for one-bit compressed sensing schemes based on measurement matrices that correspond to unit norm tight frames. We show that, as in the unquantized case, sensing with unit norm tight frames improves the MSE in the reconstruction of sparse vectors from one-bit measurements using l1-based and thresholding algorithms. From our analytical and experimental results we conclude that, when implementing one-bit compressed sensing schemes with fixed measurement matrices, unit norm tight frames are the measurements of choice.

Among many security threats to sensor networks, compromised sensing is particularly challenging because it cannot be addressed by standard authentication approaches. We consider a clustered scenario for data aggregation in which an attacker injects a disturbance in sensor readings. Casting the problem in an estimation framework, we systematically apply the Generalized Likelihood Ratio approach to derive attack detectors. The analysis under different attacks reveals that detectors based on the similarity of means across clusters are suboptimal, with Bartlett's test for homoscedasticity constituting a good candidate when a priori knowledge of the variance of the underlying distribution is lacking.
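As an illustration of why a variance-based test suits this setting, consider a mean-preserving attack that injects a zero-mean disturbance into one cluster: detectors comparing cluster means see nothing, while Bartlett's test (available as scipy.stats.bartlett) flags the inflated variance. The synthetic data below are ours, a minimal sketch rather than the paper's evaluation setup.

```python
import numpy as np
from scipy.stats import bartlett

rng = np.random.default_rng(0)
# four clusters of sensor readings sharing a common mean; the attacker adds a
# zero-mean disturbance to one cluster, so tests on means see nothing while
# the variance of the compromised cluster is inflated
clean = [rng.normal(5.0, 1.0, 200) for _ in range(3)]
attacked = rng.normal(5.0, 1.0, 200) + rng.normal(0.0, 2.0, 200)
stat, pval = bartlett(*clean, attacked)   # homoscedasticity rejected => raise alarm
```

A small p-value rejects the hypothesis of equal variances across clusters and triggers the attack alarm.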

Antenna subset modulation (ASM) is a physical layer security technique that is well suited for millimeter wave communication systems. The key idea is to vary the radiation pattern at the symbol rate by selecting one from a subset of patterns with a similar main lobe and different side lobes. This paper shows that ASM is not robust to an eavesdropper that makes multiple simultaneous measurements at multiple angles. The measurements are combined and used to formulate an estimation problem to undo the effects of the side lobe randomization. Simulations show the performance of the estimation algorithms and how the eavesdropper can effectively recover the information if the signal-to-noise ratio exceeds a certain threshold. Using fewer active radio frequency chains makes it harder for the attacker to recover the transmit symbol, at the expense of more grating lobes.

This work proposes a spectrum cartography algorithm used for learning the power spectrum distribution over a wide frequency band across a given geographic area. Motivated by low-complexity sensing hardware and stringent communication constraints, compressed and quantized measurements are considered. Setting out from a nonparametric regression framework, it is shown that a sensible approach leads to a support vector machine formulation. The simulated tests verify that accurate spectrum maps can be constructed using a simple sensing architecture with significant savings in the feedback.

We propose a method for maximizing the signal-to-interference-plus-noise ratio (SINR) in a wideband full-duplex MIMO regenerative relay that accounts for the limited dynamic range of the receiver and for transmitter impairments. Transmit and receive filters are designed at the relay, by means of an alternating minimization algorithm, to minimize the interference at the decoder input in the destination node. We impose channel-shortening and subspace constraints to ensure that the received signal at the destination is not compromised. Simulation results show that the presented method significantly outperforms other constrained approaches.

Active interference cancellation (AIC) is a multicarrier spectrum sculpting technique which reduces the power of undesired out-of-band emissions by adequately modulating a subset of reserved cancellation subcarriers. In most schemes, online complexity is a concern, and thus cancellation subcarriers have traditionally been constrained to linear combinations of the data subcarriers. Recent AIC designs truly minimizing out-of-band emission shift complexity to the offline design stage, motivating the consideration of more general mappings to improve performance. We show that there is no loss in optimality incurred by constraining these mappings to the set of linear functions.

Frame error rate (FER) prediction in wireless communication systems is an important tool with applications to system-level simulations and link adaptation, among others. Although realistic communication scenarios are expected to involve codewords of different lengths, previous work on FER prediction has only marginally treated the dependency of the FER on the codeword length. In this paper, we present a method to estimate the FER using codewords of different lengths. We derive a low-complexity FER estimator for frames of different lengths transmitted over a binary symmetric channel of unknown error probability. We extend this technique to coded systems by the use of effective SNR FER predictors. The proposed estimation scheme is shown to outperform other, simpler estimation methods.

The design of broadcasting networks operating as a single frequency network is challenging due to the difficulty of predicting the performance in a frequency-selective channel, caused by natural multipath and by echoes coming from different transmitters. In this paper we resort to frame error rate prediction metrics (also known as effective SNR metrics) to characterize the performance gain (or loss) under different multipath and SNR regimes in a simplified scenario with two transmitters. The analysis clearly shows that receivers with a dominant line-of-sight reception and high SNR are more sensitive to the presence of echoes, while users in low SNR or strong multipath conditions benefit from the insertion of a second transmitter.

Simultaneous reception and transmission in the same frequency, so-called full-duplex operation, causes an infinite feedback loop in an amplify-and-forward relay. The unwanted echoes may result in oscillation at the relay, making it unstable, and distorting the spectrum. This paper presents an adaptive MIMO filtering method for full-duplex amplify-and-forward relays that aims at solving the joint problem of self-interference mitigation and equalization of the source-relay channel. The scheme exploits knowledge of the autocorrelation of the transmitted signal as the only side information, while allowing the relay, in the best case, to implement precoding as if there were no self-interference or frequency selectivity in the source-relay channel. Finally, the proposed adaptation algorithm is investigated by determining its stationary points and by performing simulations in a MIMO-OFDM framework.

Link adaptation in multiuser multiple-input multiple-output orthogonal frequency division multiplexing communication systems is challenging because of the coupling between user selection, mode selection, precoding, and equalization. In this paper, we present a methodology to perform link adaptation under this multiuser setting, focusing on the capabilities of IEEE 802.11ac. We propose to use a machine learning classifier to solve the problem of selecting a proper modulation and coding scheme, combined with a greedy algorithm that performs user and spatial mode selection. We observe that our solution offers good performance in the case of perfect channel state information or high feedback rate, while scenarios with less feedback suffer some degradation due to inter-user interference.

Link adaptation in mobile satellite channels is difficult because of the large propagation delays, the frequent signal blockage and the variation of the channel statistics. In this paper, we propose a cross-layer link adaptation strategy that exploits statistical, long-term CSI to increase the throughput when some retransmissions are available; this increase is obtained while meeting a target outage probability constraint. Our strategy takes as inputs the estimated packet error rates of the available modulation and coding schemes (MCS) and their rates; as an output, it returns the optimum sequence of MCS to be used. Results also show that simple information acquisition strategies can still provide very good results.

Detecting the presence of a white Gaussian signal distorted by a noisy time-varying channel is addressed by means of three different detectors. First, the generalized likelihood ratio test (GLRT) is found for the case where the channel has no temporal structure, resulting in the well-known Bartlett’s test. Then it is shown that, under the transformation group given by scaling factors, a locally most powerful invariant test (LMPIT) does not exist. Two alternative approaches are explored in the low signal-to-noise ratio (SNR) regime: the first assigns a prior probability density function (pdf) to the channel (hence modeled as random), whereas the second assumes an underlying basis expansion model (BEM) for the (now deterministic) channel and obtains the maximum likelihood (ML) estimates of the parameters relevant for the detection problem. The performance of these detectors is evaluated via Monte Carlo simulation.

Spectrum sensing constitutes a key ingredient in many cognitive radio paradigms in order to detect and protect primary transmissions. Most sensing schemes in the literature assume a time-invariant channel. However, when operating in low Signal-to-Noise Ratio (SNR) conditions, observation times are necessarily long and may become larger than the coherence time of the channel. In this paper the problem of detecting an unknown constant-magnitude waveform in frequency-flat time-varying channels, in a noise background of unknown variance, is considered. The channel is modeled using a basis expansion model (BEM) with random coefficients. Adopting a generalized likelihood ratio (GLR) approach to deal with the nuisance parameters leads to a non-convex optimization problem. We discuss different possibilities to circumvent this problem, including several low-complexity approximations to the GLR test as well as an efficient fixed-point iterative method to obtain the true GLR statistic. The approximations exhibit a performance ceiling in terms of probability of detection as the SNR increases, whereas the true GLR test does not. Thus, the proposed fixed-point iteration constitutes the preferred choice in applications requiring a high probability of detection.

Hybrid Terrestrial Satellite Single Frequency Networks achieve large spectral efficiencies due to a higher frequency reuse, which is attained by transmitting the same waveform in the same frequency band from satellite and terrestrial transmitters. However, the presence of multiple transmitters gives rise to so-called SFN echoes, which can degrade the system performance even if they arrive within the OFDM guard interval. In this paper we characterize this effect by resorting to PER prediction metrics (or effective SNR metrics), and analyze two simple preprocessing schemes that mitigate this degradation: the use of Alamouti space-time codes, and a convenient prefiltering at the terrestrial transmitter.

We consider the problem of detecting a known signal with constant magnitude immersed in noise of unknown variance, when the propagation channel is frequency-flat and randomly time-varying within the observation window. A Basis Expansion Model with random coefficients is used for the channel, and a Generalized Likelihood Ratio approach is adopted in order to cope with deterministic nuisance parameters. The resulting scheme can be seen as a generalization of the well-known Matched Filter detector, to which it reduces for time-invariant channels. Closed-form analytical expressions are provided for the distribution of the test statistic under both hypotheses, which make it possible to assess the detection performance.

A secondary user that tries to reuse the spectrum allocated to a primary user can exploit the knowledge of the primary message to perform this task. In particular, the overlay cognitive radio paradigm postulates the use of a fraction of the available power at the secondary transmitter to convey the primary message. This increases the spectral efficiency of the primary system, so some transmission resources (time slots or frequency bands) can be released to the secondary transmission while the primary user rate is kept constant. The fraction of released resources can be increased if some channel state information is available at the secondary transmitter. In this paper, we present a scenario where the secondary transmitter maximizes the primary link quality (measured in terms of effective SNR) and obtains its channel state information by exploiting the primary user's SNR-based feedback.

A secondary cognitive user overlaying its message on a broadcast multicarrier network is studied. The secondary user exploits knowledge of the primary message to convey its own information while preserving the primary user coverage area, determined by a bound on the BER, and taking into account the degradation due to the insertion of an echo in a dominant line-of-sight environment. The results are compared with those obtained when the coverage area is defined in capacity terms and the degradation caused by the secondary replica of the primary message is disregarded.

The overlay cognitive radio paradigm presents a framework where a secondary user exploits the knowledge of the primary user's message to improve spectrum utilization. A multicarrier broadcast network is one of the scenarios where this knowledge is available: the secondary user can join a single frequency network and, therefore, gain access to the primary message. However, if the primary signal is received with a strong line-of-sight component, its relaying from the secondary transmitter does not suffice to ensure the primary user quality of service. In this paper we study the scenario where a secondary transmitter maximizes its own transmission rate, keeping the quality of a primary receiver over a given threshold. The analytical results, based on bit error rate bounds, are verified by means of software simulations and hardware tests.

Spectrum sensing is a key component of the Cognitive Radio paradigm. Typically, primary signals have to be detected with uncalibrated receivers at signal-to-noise ratios (SNRs) well below decodability levels. Multiantenna detectors exploit spatial independence of receiver thermal noise to boost detection performance and robustness. We study the problem of detecting a Gaussian signal with rank-P unknown spatial covariance matrix in spatially uncorrelated Gaussian noise with unknown covariance using multiple antennas. The generalized likelihood ratio test (GLRT) is derived for two scenarios. In the first one, the noises at all antennas are assumed to have the same (unknown) variance, whereas in the second, a generic diagonal noise covariance matrix is allowed in order to accommodate calibration uncertainties in the different antenna frontends. In the latter case, the GLRT statistic must be obtained numerically, for which an efficient method is presented. Furthermore, for asymptotically low SNR, it is shown that the GLRT does admit a closed form, and the resulting detector performs well in practice. Extensions are presented in order to account for unknown temporal correlation in both signal and noise, as well as frequency-selective channels.
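Detectors of this kind are closely related to sphericity-type eigenvalue tests on the sample spatial covariance. As a hedged illustration (not the paper's exact GLRT statistic), the arithmetic-to-geometric-mean ratio of the covariance eigenvalues stays near 1 under white noise and grows when a low-rank signal spreads the eigenvalues apart; all simulation parameters below are ours.

```python
import numpy as np

def am_gm_statistic(X):
    """Arithmetic-to-geometric-mean ratio of the eigenvalues of the sample
    spatial covariance: close to 1 for white noise, larger when a low-rank
    signal spreads the eigenvalues apart (equal-noise-variance case)."""
    R = X @ X.conj().T / X.shape[1]       # L x L sample covariance
    ev = np.linalg.eigvalsh(R)
    return ev.mean() / np.exp(np.log(ev).mean())

rng = np.random.default_rng(0)
L, N = 4, 500                             # antennas, snapshots
noise = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2.0)
h = rng.standard_normal((L, 1)) + 1j * rng.standard_normal((L, 1))   # rank-1 channel
symbols = (rng.standard_normal((1, N)) + 1j * rng.standard_normal((1, N))) / np.sqrt(2.0)
t_noise = am_gm_statistic(noise)          # noise only
t_signal = am_gm_statistic(h @ symbols + noise)
```

Because the statistic depends only on eigenvalue ratios, it is invariant to the unknown common noise level, which is what makes this family of detectors robust to uncalibrated receivers.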

Spectrum sensing design for Cognitive Radio systems is challenged by the nature of the wireless medium, which makes the detection requirements difficult to achieve by standalone sensors. To combat shadowing and fading, distributed strategies are usually proposed. However, most distributed approaches are based on the energy detector, which is not robust to noise uncertainty. This limitation can be overcome by multi-antenna sensors exploiting spatial independence of the noise process. We combine both ideas to develop distributed detectors for multiantenna sensors. Fusion rules are provided for sensors based on the Generalized Likelihood Ratio as well as for ad hoc detectors derived from geometric considerations. Simulation results are provided comparing the performance of the different strategies under lognormal shadowing and Ricean fading.

Detection of unknown signals with constant modulus (CM) using multiple antennas in additive white Gaussian noise of unknown variance is considered. The channels from the source to each antenna are assumed frequency-flat and unknown. This problem is of interest for spectrum sensing in cognitive radio systems in which primary signals are known to have the CM property. Examples include analog frequency modulated signals such as those transmitted by wireless microphones in the TV bands and Gaussian Minimum Shift Keying modulated signals as in the GSM cellular standard. The proposed detector, derived from a Generalized Likelihood Ratio (GLR) approach, exploits both the CM property and the spatial independence of noise, outperforming the GLR test for Gaussian signals as shown by simulation.