SPCOM 2012 : the talks

Well, SPCOM 2012 is over now — it was a lot of fun and a really nice-sized conference. I missed the first day of tutorials, which I heard were fantastic. Qing Zhao couldn’t make it due to visa issues but gave her tutorial over Skype. Hooray for technology!

The conference had one track of invited talks and two tracks of submitted papers. I am embarrassed to say that I mostly hung out in the invited track, largely because those talks were of the most interest to me. The conference is split between signal processing and communication, rather broadly construed. So the topics ran the gamut of things you might see at ICC, ICASSP, and ISIT. I’ll just touch on a few of the talks here (I missed a few due to meetings and coffee), but the full proceedings will be available on IEEE Xplore eventually.

I attended all but one plenary — Rob Calderbank talked about compressed sensing ideas in the random access MAC, trying to get non-asymptotic results with realistic codes and asynchronous communication. Ingrid Daubechies talked about her work on forgery detection in art, specifically the Van Gogh forgery detection problem and analyzing underdrawings from the workshop of Goossen van der Weyden to detect whether van der Weyden had actually worked on those paintings. Prakash Narayan talked about generating secret keys (there was overlap with his ISIT plenary earlier in the month) and secure computing. The plenaries could not have been more different from each other in topic, so I think the students in the audience must have gotten quite a wide perspective on the current breadth of work in signal processing and communication.

The invited talks also ran the gamut of topics. Li-Chun Wang and TJ Lim talked about energy-saving networks. Prof. Wang talked about things from a standards perspective (5G cellular) and Prof. Lim talked about energy harvesting devices. Robert Heath talked about some work in modeling obstructions (e.g. buildings) for cellular networks using a random point process to place random shapes. He showed results for modeling buildings as lines and analyzed the impact on things like connectivity. Rahul Vaze discussed the problem of localization in sensor networks and made a connection to a percolation model called bootstrap percolation, in which nodes have one of two colors (red or blue) and a node becomes red (= localized) if a sufficient number of its neighbors become red. If the node placements are random, the question is what fraction of initial nodes need to be red in order for all nodes to eventually become red.
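The bootstrap percolation dynamics are simple enough to simulate directly. Here is a minimal sketch on random node placements in the unit square — the radius, threshold, and seed fraction are values I picked for illustration, not parameters from the talk:

```python
import random

def bootstrap_percolation(n, radius, threshold, seed_fraction, seed=0):
    """Toy bootstrap percolation on n random nodes in the unit square.

    Nodes are 'red' (localized) or 'blue'; a blue node turns red once at
    least `threshold` of its neighbors (within `radius`) are red.
    Returns the final fraction of red nodes.
    """
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    red = [rng.random() < seed_fraction for _ in range(n)]
    # Precompute neighborhoods by Euclidean distance.
    nbrs = [
        [j for j in range(n) if j != i
         and (pts[i][0] - pts[j][0]) ** 2
           + (pts[i][1] - pts[j][1]) ** 2 <= radius ** 2]
        for i in range(n)
    ]
    # Iterate the coloring rule until no node changes color.
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not red[i] and sum(red[j] for j in nbrs[i]) >= threshold:
                red[i] = True
                changed = True
    return sum(red) / n

frac = bootstrap_percolation(n=300, radius=0.15, threshold=2, seed_fraction=0.2)
```

Sweeping `seed_fraction` in a loop is one way to eyeball the critical fraction at which the whole network localizes.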

On a more information theoretic front, Ravi Adve discussed a model of communication which might apply when the transmitter and receiver communicate via chemical signals. This could happen if they lie in two positions in a pipe (blood vessel, oil pipe). In essence they get a timing channel, but by looking at the flow PDEs, they get a model in which the noise has an inverse Gaussian distribution. It’s a preliminary setup but an interesting model (even though it involves icky icky PDEs). Prasad Santhanam talked about his work with Venkat Anantharam on insurance. The problem of someone providing insurance is similar to that of someone trying to do compression — both involve being able to predict something about future behavior of the process (insurance claims or the data signal) based on finite observations. In a related talk, Wenyi Zhang discussed an information theoretic model in which a source provides a resource (say energy) to an encoder and then the encoder can only transmit when it has enough energy — how do we code in such a scenario? Vinod Prabhakaran talked about how multiuser information theory proofs that use indirect decoding (e.g. auxiliary random variables which are not actually decoded) can be transformed into those using direct decoding without loss in rate. So in a sense, the indirect decoding is not providing the extra boost in the rate region.
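To make the timing-channel model concrete, here is a toy sketch in which the delay added to each emission time is drawn from an inverse Gaussian (Wald) distribution; the parameter values are illustrative placeholders, not numbers from the talk:

```python
import numpy as np

def molecular_timing_channel(emission_times, mu=1.0, lam=4.0, seed=0):
    """Toy additive inverse-Gaussian timing channel.

    A molecule released at time t is observed at t + N, where the
    first-passage time N has an inverse Gaussian (Wald) distribution
    with mean mu and shape lam (illustrative values, not from the talk).
    """
    rng = np.random.default_rng(seed)
    noise = rng.wald(mu, lam, size=len(emission_times))
    return np.asarray(emission_times) + noise

# Three molecules released at t = 0, 2, 5 arrive randomly delayed.
arrivals = molecular_timing_channel([0.0, 2.0, 5.0])
```

The receiver's decoding problem is then to recover the emission times (the message) from the noisy arrival times.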

There were a few talks on networks as well — Pramod Vishwanath talked about packet erasure networks and designing broadcast protocols for them, and Sid Jaggi talked about polynomial time codes for Gaussian relay networks and then switched to discussing the SHO-FA protocol, which is actually efficient in a real sense (and not in a polytime sense).

On the networked inference front, José Moura talked about models for consensus with continuous observations using a mix of filtering and consensus operations (and stochastic approximation). The formulas were a bit hairy, but that seemed hard to get around. Olgica Milenkovic talked about consensus protocols for rankings, where you want a group of agents to learn a consensus ranking defined by, say, the ranking which minimizes some average distance to all of the initial values. The choice of metric is important here, and she talked about weighted distances between permutations where the weights correspond to ranking (e.g. it’s more important for the top guys to be equal).
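As a toy illustration of a position-weighted distance between permutations — the specific weighting scheme below is my own illustrative choice, not the one from the talk:

```python
def weighted_kendall(pi, sigma, weights):
    """Position-weighted Kendall-type distance between two rankings.

    Counts pairwise disagreements between rankings pi and sigma, charging
    each disagreement the weight of the higher position it involves, so
    swaps near the top of the list cost more (an illustrative scheme).
    pi, sigma: lists where pi[i] is the item ranked in position i.
    weights: weights[i] is the cost of a disagreement at position i.
    """
    pos_sigma = {item: i for i, item in enumerate(sigma)}
    n = len(pi)
    dist = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            # pi ranks pi[i] above pi[j]; charge if sigma disagrees.
            if pos_sigma[pi[i]] > pos_sigma[pi[j]]:
                dist += weights[i]
    return dist

# Decreasing weights: the same adjacent swap costs more at the top.
w = [4, 3, 2, 1]
top_swap = weighted_kendall(['b', 'a', 'c', 'd'], ['a', 'b', 'c', 'd'], w)
bot_swap = weighted_kendall(['a', 'b', 'd', 'c'], ['a', 'b', 'c', 'd'], w)
```

With uniform weights this reduces to the ordinary Kendall tau distance; the weighted version is what makes "the top guys" matter more.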

There were also talks on learning and inference — Preeti Rao talked about extracting metadata from music, in particular Hindustani classical music. R. Aravind talked about an empirical study of trying to determine if stripe patterns in tigers exhibit bilateral symmetry. This is important for things like tracking tiger populations via triggered cameras in the forest. Rowr. Aarti Singh discussed matrix completion for matrices which are highly structured but not low-rank — these are ultrametric matrices which have strong block structure and decay as you move off-diagonal. Prakash Ishwar talked about large p, small n settings for statistical inference where there is a nonzero Bayes error — in this setting, what can we say about different inference procedures?

Finally, I talked about communication with interference generated by an eavesdropper. In the middle of my talk the laptop decided to install Windows updates and rebooted. Apparently it was eavesdropping on my talk and decided to jam it. I now know who the adversary is — it’s Microsoft.