Recommendation systems are becoming increasingly important, as evidenced by the popularity of the Netflix prize and the sophistication of various online shopping systems. With this increase in interest, a new problem of nefarious or false rankings that compromise a recommendation system’s integrity has surfaced. We consider such purposefully erroneous rankings to be a form of “toxic waste,” corrupting the performance of the underlying algorithm. In this paper, we propose an adaptive reweighted algorithm as a possible approach towards correcting this problem. Our algorithm relies on finding a low-rank-plus-sparse decomposition of the recommendation matrix, where the adaptation of the weights aids in rejecting the malicious contributions. Simulations suggest that our algorithm converges fairly rapidly and produces accurate results.
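As a rough illustration of the low-rank-plus-sparse idea (not the authors' adaptive reweighted algorithm), here is a minimal sketch that alternates a truncated SVD with hard thresholding of the residual; the dimensions, the threshold value, and the toy data are illustrative assumptions:

```python
import numpy as np

def low_rank_plus_sparse(M, rank, thresh, n_iter=50):
    """Split M into L (low-rank) + S (sparse) by alternating a
    truncated SVD with entrywise hard thresholding of the residual."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank step: best rank-r approximation of M - S
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * sig[:rank]) @ Vt[:rank]
        # Sparse step: keep only the gross residual entries (the "toxic" ones)
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)
    return L, S

# Toy ratings matrix: rank-1 "honest" part plus a few malicious spikes
rng = np.random.default_rng(0)
L_true = np.outer(rng.standard_normal(30), rng.standard_normal(20))
S_true = np.zeros_like(L_true)
S_true[rng.integers(0, 30, 5), rng.integers(0, 20, 5)] = 10.0
L_hat, S_hat = low_rank_plus_sparse(L_true + S_true, rank=1, thresh=3.0)
print(np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))  # small
```

The spikes are absorbed by the sparse term, so the refit low-rank part tracks the honest ratings rather than the corrupted matrix.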

We present a general architecture for the acquisition of ensembles of correlated signals. The signals are multiplexed onto a single line by mixing each one against a different code and then adding them together, and the resulting signal is sampled at a high rate. We show that if the $M$ signals, each bandlimited to $W/2$ Hz, can be approximated by a superposition of $R < M$ underlying signals, then the ensemble can be recovered by sampling at a rate within a logarithmic factor of $RW$ (as compared to the Nyquist rate of $MW$). This sampling theorem shows that the correlation structure of the signal ensemble can be exploited in the acquisition process even though it is unknown a priori.
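The acquisition side of this architecture can be sketched in a few lines. This toy model (with assumed dimensions, and random ±1 codes standing in for the mixing forms) shows only the multiplexing step, not the recovery:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, R = 8, 256, 2                    # channels, samples, latent signals

# Correlated ensemble: the M signals are superpositions of R latent ones
X = rng.standard_normal((M, R)) @ rng.standard_normal((R, N))   # rank R

# Modulate each channel by its own random +/-1 code, then sum onto one line
codes = rng.choice([-1.0, 1.0], size=(M, N))
y = (codes * X).sum(axis=0)            # the single stream that gets sampled

print(y.shape, np.linalg.matrix_rank(X))  # (256,) 2
```

The point of the sampling theorem is that `y` can be acquired at a rate near $RW$ (up to a log factor) and the rank-$R$ ensemble `X` still recovered, even though $R$ and the correlation structure are unknown in advance.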

The reconstruction of the ensemble is recast as a low-rank matrix recovery problem from linear measurements. The architectures we are considering impose a certain type of structure on the linear operators. Although our results depend on the mixing forms being random, this imposed structure results in a very different type of random projection than those analyzed in the low-rank recovery literature to date.

With the popularity of Social Networking Services (SNS), more and more sensitive information is stored online and associated with SNS accounts. The obvious value of SNS accounts motivates the usage stealing problem -- unauthorized, stealthy use of SNS accounts on the devices owned/used by account owners, without any technological hacks. For example, anxious parents may use their kids' SNS accounts to inspect the kids' social status; husbands/wives may use their spouses' SNS accounts to spot possible affairs. Usage stealing can happen anywhere in any form, and it seriously invades the privacy of account owners. However, there is currently no known defense against such usage stealing. To an SNS operator (e.g., Facebook Inc.), usage stealing is hard to detect using traditional methods because such attackers come from the same IP addresses/devices, use the same credentials, and share the same accounts as the owners do.

In this paper, we propose a novel continuous authentication approach that analyzes user browsing behavior to detect SNS usage stealing incidents. We use Facebook as a case study and show that it is possible to detect such incidents by analyzing SNS browsing behavior. Our experimental results show that our proposal can achieve higher than 80% detection accuracy within 2 minutes, and higher than 90% detection accuracy after 7 minutes of observation time.

We investigate a compressive sensing framework in which the sensors introduce a distortion to the measurements in the form of unknown gains. We focus on blind calibration, using measurements of multiple unknown (but sparse) signals, and formulate the joint recovery of the gains and the sparse signals as a convex optimization problem. The first proposed approach is an extension of basis pursuit optimization that can estimate the unknown gains along with the unknown sparse signals. Having demonstrated that this approach succeeds for a sufficient number of input signals, except in cases where the phase shifts among the unknown gains vary significantly, we propose a second approach that makes use of quadratic basis pursuit optimization to calibrate for constant-amplitude gains with maximum variance in the phases. An alternative form of this approach is also formulated to reduce the complexity and memory requirements and to provide scalability with respect to the number of input signals. Finally, a third approach combines the first two for calibration of systems with any variation in the gains. The performance of the proposed algorithms is investigated extensively through numerical simulations, which demonstrate that simultaneous signal recovery and calibration is possible when sufficiently many (unknown, but sparse) calibrating signals are provided.

This work studies the problem of blind sensor calibration (BSC) in linear inverse problems, such as compressive sensing. It aims to estimate the unknown complex gain on each sensor, given a set of measurements of some unknown training signals. We assume that the unknown training signals are all sparse. Instead of solving the problem by convex optimization, we propose a cost function on a suitable manifold, namely, the set of complex diagonal matrices with determinant one. Such a construction can enhance the numerical stability of the proposed algorithm. By exploiting a global parameterization of the manifold, we tackle the BSC problem with a conjugate gradient method. Several numerical experiments are provided to compare our approach with the solutions given by convex optimization and to demonstrate its performance.

Sunday, August 25, 2013

In Louisiana and Texas, there is an interest in imaging salt domes, for two reasons: an industrial one, as oil and gas deposits tend to be found next to these structures, and an environmental one, briefly summarized in these two videos of the 1980 Lake Peigneur sinkhole disaster and the recent appearance of a sinkhole in Napoleonville, LA.

What is a salt dome, and why does its collapse matter? In that region, a collapse may be a sign of a more structural connection to the Gulf of Mexico since, as soon as water hits salt, the structural dome becomes liquid and unstable.

There are currently two major means of performing this imaging: acoustic/seismic imaging and gravity. Both involve drilling most of the time, since performing the survey from the surface alone makes it hard to image the near-vertical structure of the dome. But as we know and see from the sinkhole examples, in some cases drilling may not be appropriate. Here is another idea that ought to be investigated and that could provide a richer set of elements: muon tomography [2].

High frame-rate video (HFV) is an important investigational tool in science, engineering, and the military. In ultra-high-speed imaging, the obtainable temporal, spatial, and spectral resolutions are limited by the sustainable throughput of in-camera mass memory, the lower bound on exposure time, and illumination conditions. In order to break these bottlenecks, we propose a new coded video acquisition framework that employs $K \geq 2$ conventional cameras, each of which makes random measurements of the 3D video signal in both the temporal and spatial domains. For each of the K cameras, this multi-camera strategy greatly relaxes the stringent requirements on memory speed, shutter speed, and illumination strength. The recovery of HFV from these random measurements is posed and solved as a large-scale l1 minimization problem by exploiting the joint temporal and spatial sparsities of the 3D signal. Three coded video acquisition techniques of varied trade-offs between performance and hardware complexity are developed: frame-wise coded acquisition, pixel-wise coded acquisition, and column-row-wise coded acquisition. The performance of these techniques is analyzed in relation to the sparsity of the underlying video signal. Simulations of these new HFV capture techniques are carried out and experimental results are reported.

In this article, we propose a new paradigm of control, called a maximum-hands-off control. A hands-off control is defined as a control that has a much shorter support than the horizon length. The maximum-hands-off control is the minimum-support (or sparsest) control among all admissible controls. We first prove that a solution to an L1-optimal control problem gives a maximum-hands-off control, and vice versa. This result rationalizes the use of L1 optimality in computing a maximum-hands-off control. The solution has in general the "bang-off-bang" property, and hence the control may be discontinuous. We then propose an L1/L2-optimal control to obtain a continuous hands-off control. Examples are shown to illustrate the effectiveness of the proposed control method.

In this article, we consider control theoretic splines with L1 optimization for rejecting outliers in data. Control theoretic splines are either interpolating or smoothing splines, depending on a cost function with a constraint defined by linear differential equations. Control theoretic splines are effective for Gaussian noise in the data, since the estimation is based on L2 optimization. In practice, however, the data may contain outliers, which occur with vanishingly small probability under the Gaussian noise assumption, and to which L2-optimized spline regression may be very sensitive. To achieve robustness against outliers, we propose to use L1 optimality, which is also used in support vector regression. A numerical example shows the effectiveness of the proposed method.
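To see why L1 optimality rejects outliers where L2 does not, here is a small self-contained demonstration using iteratively reweighted least squares as a stand-in for the paper's spline machinery; the line-fitting setup, data, and weighting scheme are illustrative assumptions:

```python
import numpy as np

def lad_fit(A, b, n_iter=50, eps=1e-6):
    """Least-absolute-deviations (L1) fit via iteratively reweighted
    least squares; weights ~ 1/|residual| mute the outliers."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # L2 warm start
    for _ in range(n_iter):
        w = 1.0 / np.sqrt(np.maximum(np.abs(b - A @ x), eps))
        x = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)[0]
    return x

# Line data with Gaussian noise plus a few gross outliers
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
b = 2.0 * t + 1.0 + 0.05 * rng.standard_normal(100)
b[::25] += 5.0                                        # 4 outliers
A = np.column_stack([t, np.ones_like(t)])

x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]
x_l1 = lad_fit(A, b)
print(x_l2)   # pulled away from the true line by the outliers
print(x_l1)   # close to the true (slope, intercept) = (2.0, 1.0)
```

The L2 fit is dragged by the four corrupted points, while the L1 fit essentially ignores them, which is the robustness property the article exploits.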

We study a networked control architecture for linear time-invariant plants in which an unreliable, data-rate-limited network is placed between the controller and the plant input. To achieve robustness with respect to dropouts, the controller transmits data packets containing plant input predictions, which minimize a finite-horizon cost function. In our formulation, we design sparse packets for rate-limited networks by adopting an L0 optimization, which can be effectively solved by an orthogonal matching pursuit method. Our formulation ensures asymptotic stability of the control loop in the presence of bounded packet dropouts. Simulation results indicate that the proposed controller provides sparse control packets, thereby giving bit-rate reductions for the case of memoryless scalar coding schemes when compared to the use of more common quadratic cost functions, as in linear quadratic (LQ) control.
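Since the packets here are designed via an L0 optimization solved with orthogonal matching pursuit, a minimal OMP sketch may help; the dimensions and the toy "packet" are assumptions for illustration, not the paper's setup:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then least-squares refit on the support."""
    support, r = [], y.copy()
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Recover a 3-sparse "control packet" from 50 random measurements
rng = np.random.default_rng(3)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
x_hat = omp(A, A @ x_true, sparsity=3)
print(np.max(np.abs(x_hat - x_true)))   # ~0: exact recovery on this toy case
```

The greedy support selection is what makes OMP an effective surrogate for the combinatorial L0 problem in the packet-design step.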

We study feedback control over erasure channels with packet-dropouts. To achieve robustness with respect to packet-dropouts, the controller transmits data packets containing plant input predictions, which minimize a finite horizon cost function. To reduce the data size of packets, we propose to adopt sparsity-promoting optimizations, namely, L1 and L2-constrained L1 optimizations, for which efficient algorithms exist. We derive sufficient conditions on design parameters, which guarantee (practical) stability of the resulting feedback control systems when the number of consecutive packet-dropouts is bounded.

In this article, we consider remote-controlled systems, where the command generator and the controlled object are connected by a bandwidth-limited communication link. In such remote-controlled systems, efficient representation of control commands is one of the crucial issues because of the bandwidth limitations of the link. We propose a new representation method for control commands based on compressed sensing. In the proposed method, compressed sensing reduces the number of bits in each control signal by representing it as a sparse vector. The compressed sensing problem is solved by an L1-L2 optimization, which can be effectively implemented with an iterative shrinkage algorithm. A design example shows the effectiveness of the proposed method.

In remote control, efficient compression or representation of control signals is essential to send them through rate-limited channels. For this purpose, we propose an approach of sparse control signal representation using the compressive sampling technique. The problem of obtaining a sparse representation is formulated as cardinality-constrained L2 optimization of the control performance, which is reducible to L1-L2 optimization. The low-rate random sampling employed in the proposed method, together with the fact that the L1-L2 optimization can be solved effectively by a fast iterative method, enables us to generate the sparse control signal with reduced computational complexity; this is preferable in remote control systems, where computation delays seriously degrade the performance. We give a theoretical result for control performance analysis based on the notion of the restricted isometry property (RIP). An example is shown to illustrate the effectiveness of the proposed approach via numerical experiments.

We investigate the use of compressive sampling for networked feedback control systems. The method proposed serves to compress the control vectors which are transmitted through rate-limited channels without much deterioration of control performance. The control vectors are obtained by an L1-L2 optimization, which can be solved very efficiently by FISTA (Fast Iterative Shrinkage-Thresholding Algorithm). Simulation results show that the proposed sparsity-promoting control scheme gives a better control performance than a conventional energy-limiting L2-optimal control.
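For readers unfamiliar with FISTA, here is a compact sketch of the algorithm applied to a toy L1-L2 problem; the problem sizes and regularization weight are illustrative assumptions, not the networked-control formulation of the paper:

```python
import numpy as np

def fista(A, y, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1:
    a gradient step plus soft thresholding, with Nesterov momentum."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        g = z - (A.T @ (A @ z - y)) / L    # gradient step at the momentum point
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum
        x, t = x_new, t_new
    return x

# Recover a sparse control vector from compressed measurements
rng = np.random.default_rng(4)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[[10, 90, 150]] = [3.0, -2.0, 4.0]
x_hat = fista(A, A @ x_true, lam=0.01)
print(np.linalg.norm(x_hat - x_true))   # small (slight bias from the L1 penalty)
```

The momentum term is what gives FISTA its fast $O(1/k^2)$ convergence over plain iterative shrinkage, which is why it suits rate-limited control loops.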

Quantum sensors based on single Nitrogen-Vacancy (NV) defects in diamond are state-of-the-art tools for nano-scale magnetometry, with precision scaling inversely with total measurement time, $\sigma_{B} \propto 1/T$ (Heisenberg scaling), rather than as the inverse of the square root of $T$, $\sigma_{B} \propto 1/\sqrt{T}$, the shot-noise limit. This scaling can be achieved by means of phase estimation algorithms (PEAs) using adaptive or non-adaptive feedback, in combination with single-shot readout techniques. Despite their accuracy, the range of applicability of PEAs is limited to periodic signals involving single frequencies with negligible temporal fluctuations. In this Letter, we propose an alternative method for precision magnetometry in frequency-multiplexed signals via compressive sensing (CS) techniques. We show that CS can provide precision scaling approximately as $\sigma_{B} \approx 1/T$, both in the case of single-frequency and frequency-multiplexed signals, as well as a 5-fold increase in sensitivity over the dynamic-range gain, in addition to reducing the total number of resources required.

We present methods that can provide an exponential savings in the resources required to perform dynamic parameter estimation using quantum systems. The key idea is to merge classical compressive sensing techniques with quantum control methods to efficiently estimate time-dependent parameters in the system Hamiltonian. We show that incoherent measurement bases and, more generally, suitable random measurement matrices can be created by performing simple control sequences on the quantum system. Since random measurement matrices satisfying the restricted isometry property can be used to reconstruct any sparse signal in an efficient manner, and many physical processes are approximately sparse in some basis, these methods can potentially be useful in a variety of applications such as quantum sensing and magnetometry. We illustrate the theoretical results throughout the presentation with various practically relevant numerical examples.

Wednesday, August 21, 2013

One of the most important aspects of the ideas surrounding the themes covered by Nuit Blanche is that they are, to a large extent, different from traditional approaches. While large companies can inspect these new ideas, it is very likely that some of them will initially target niche markets. The reason for this new series of startup news is to cover exactly these new and disruptive technologies and how they go from the academic ideas covered here to actual products.

W00083940.jpg was taken on August 18, 2013 and received on Earth August 19, 2013. The camera was pointing toward SUN at approximately 914,414,892 miles (1,471,608,120 kilometers) away, and the image was taken using the IR2 and IRP90 filters. This image has not been validated or calibrated. A validated/calibrated image will be archived with the NASA Planetary Data System in 2014.

Sunday, August 18, 2013

If you have been reading a few entries on the subject here on Nuit Blanche, you know that genomic sequencing is a revolutionary technology that is capable of drastically changing how medicine works. In particular, there is one technology that has been very promising for the past fifteen years and yet still has not delivered more rapid genome decoding: nanopore sequencing.

If you read [1,2,3], you'll note that one of the ideas of nanopore sequencing is that one needs to use biological processes to slow down the translocation (movement) of the DNA through the nanopore. This slowdown (or "rate control") of about three orders of magnitude according to [1] allows the sampling to be performed "accurately", and thereby provides a way to decide distinctly which of the bases (G, T, A, C) goes through the nanopore (via its attendant voltage readings).

Coupling an exonuclease to the biological pore would slow the translocation of the DNA through the pore, and increase the accuracy of data acquisition.

Or from [1]

For bandwidth and noise levels common to nanopore experiments, the specification for rate reduction is that the DNA should be slowed at least three orders of magnitude, from the un-impeded 1–3 µs/nt [12] to 1 ms/nt or slower [4].

In other words, in the past ten years, much of the technological improvement has focused on slowing down the DNA movement through the pores in order to be able to cleanly sample the voltage recording and map it to a particular base. Let us also note that, even then, researchers are considering several parallel runs of the same DNA strand through several pores [2] in order to allow redundancy and eventually reduce the overall voltage reading errors. Current results of the technology show a still too low accuracy.

It turns out that in compressive sensing several folks have taken a stab at this exact problem: if a very rapid phenomenon cannot be sampled with current technology, one can find a solution if one has a modulating technology that goes as fast as the phenomenon at play. If you have this modulating capability, then there is probably a way to use these new randomized Analog-to-Information samplers. Some of these efforts are summarized in the A2I webpage set up by Emmanuel Candes at Stanford. In the case of nanopore technology, if one runs several batteries of DNA through several pores [2], with a switching technology based on, say, a different voltage across the different pores at different times, then one might be able to forgo slowing down the DNA translocation through the pores and use the randomized readings of several pores directly to get data that can then be deconvolved. What about sparsity? Well, for one, there is already a generic known map of the human genome. Any particular human genome must not be more than 2% different from that reference. The difference between the two is sparse. Easier said than done, I know, but it's important.
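The sparsity claim in that last step can be made concrete with a toy calculation (using the 2% figure from the text; the encoding of bases as integers 0–3 is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
reference = rng.integers(0, 4, n)        # known reference genome (0..3 = G,T,A,C)
individual = reference.copy()

# Mutate ~2% of the sites: add 1..3 mod 4 so the base always changes
mutated = rng.choice(n, size=n // 50, replace=False)
individual[mutated] = (individual[mutated] + rng.integers(1, 4, mutated.size)) % 4

diff = individual - reference            # nonzero only at the mutated sites
print(np.count_nonzero(diff) / n)        # 0.02
```

Because the difference vector is 2%-sparse, it is exactly the kind of object that randomized A2I-style measurements followed by sparse recovery are designed to reconstruct.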

The prospect of nanopores as a next-generation sequencing platform has been a topic of growing interest and considerable government-sponsored research for more than a decade. Oxford Nanopore Technologies recently announced the first commercial nanopore sequencing devices, to be made available by the end of 2012, while other companies (Life, Roche, and IBM) are also pursuing nanopore sequencing approaches. In this paper, the state of the art in nanopore sequencing is reviewed, focusing on the most recent contributions that have, or promise to have, next-generation sequencing commercial potential. We also consider the scalability of the circuitry to support multichannel arrays of nanopores in future sequencing devices, which is critical to commercial viability.

This numerical study provides an error analysis of an idealized nanopore sequencing method in which ionic current measurements are used to sequence intact single-stranded DNA in the pore, while an enzyme controls DNA motion. Examples of systematic channel errors when more than one nucleotide affects the current amplitude are detailed, which if present will persist regardless of coverage. Absent such errors, random errors associated with tracking through homopolymer regions are shown to necessitate reading known sequences (Escherichia coli K-12) at least 140 times to achieve 99.99% accuracy (Q40). By exploiting the ability to reread each strand at each pore in an array, arbitrary positioning on an error rate versus throughput tradeoff curve is possible if systematic errors are absent, with throughput governed by the number of pores in the array and the enzyme turnover rate.
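The reread-to-accuracy trade-off for random (non-systematic) errors is essentially a majority-vote calculation; the sketch below uses an illustrative 10% per-read error rate, which is an assumption and not the paper's measured value, and it ignores the systematic channel errors that, as the study notes, persist regardless of coverage:

```python
from math import comb

def majority_error(p, n):
    """Probability that a majority vote over n independent reads is
    wrong, given a per-read error probability p (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

# Rereading each strand drives the random-error rate down quickly
for n in (1, 15, 141):
    print(n, majority_error(0.10, n))
```

This is the mechanism behind the tradeoff curve: more rereads per pore buy exponentially fewer random errors, at the cost of throughput, while systematic miscalls are untouched by any amount of rereading.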

ABSTRACT: Complexes formed between the bacteriophage phi29 DNA polymerase (DNAP) and DNA fluctuate between the pre-translocation and post-translocation states on the millisecond time scale. These fluctuations can be directly observed with single-nucleotide precision in real-time ionic current traces when individual complexes are captured atop the α-hemolysin nanopore in an applied electric field. We recently quantified the equilibrium across the translocation step as a function of applied force (voltage), active-site proximal DNA sequences, and the binding of complementary dNTP. To gain insight into the mechanism of this step in the DNAP catalytic cycle, in this study, we have examined the stochastic dynamics of the translocation step. The survival probability of complexes in each of the two states decayed at a single exponential rate, indicating that the observed fluctuations are between two discrete states. We used a robust mathematical formulation based on the autocorrelation function to extract the forward and reverse rates of the transitions between the pre-translocation state and the post-translocation state from ionic current traces of captured phi29 DNAP−DNA binary complexes. We evaluated each transition rate as a function of applied voltage to examine the energy landscape of the phi29 DNAP translocation step. The analysis reveals that active-site proximal DNA sequences influence the depth of the pre-translocation and post-translocation state energy wells and affect the location of the transition state along the direction of the translocation.