
I got an email recently saying that the Signal Processing Society's Publications Board has decided to "no longer allow any changes to papers once the papers are accepted… the accepted version of the papers will be the version posted on Xplore." Associate editors are supposed to enforce this policy.

I can only imagine that this is the result of abuse by some (or many) authors who make substantive changes to their manuscripts post-acceptance. That is clearly bad and should probably be stopped. However, I think this hard-line policy may not be good for a couple of reasons:

Even after reviewers sign off on a manuscript from a technical standpoint, there are often several small issues like grammar, typos, and so on. The only solution then would be to enter an endless cycle of revise and resubmit, unless SPS is ok with typos and the like.

I have had galley proofs come back with several technically substantive errors and have had to go back and forth with IEEE about fixing these. This can only get worse with this policy.

Due to the fast pace of research and the slow pace of reviewing, many times the references for a paper need updating even after acceptance: a journal version of a conference paper may have come out, an arXiv preprint may have been updated, or any of a host of other changes. This hard requirement is bad for scholarship, since it makes finding the "correct" reference more onerous.

Overall, this shifts the burden of fine-level verification of the manuscript to the AE. For some journals this is not so bad since they don't have long papers and AEs may handle only a few papers at the same time. For something like the Transactions on Information Theory, it would be a disaster! Thankfully (?) this is only for the Signal Processing Society. However, my prediction is that overall paper quality will decrease with this policy, driving more papers to arXiv for their "canonical version." Is this bad? Depends on your point of view.

The other big change we made to the standard workshop schedule was to put in time for “breakout groups” to have smaller discussions focused on identifying the key fundamental problems that need to be addressed when thinking about privacy and biomedical data. Because of the diversity of viewpoints among participants, it seems a tall order to generate new research collaborations out of attending talks and going to lunch. But if we can, as a group, identify what the mathematical problems are (and maybe even why they are hard), this can help identify the areas of common interest.

I think of these as falling into a few different categories.

Questions about demarcation. Can we formalize (mathematically) the privacy objective in different types of data sets/computations? Can we use these to categorize different types of problems?

Metrics. How do we formulate the privacy-utility tradeoffs for different problems? What is the right measure of performance? What (if anything) do we lose in guaranteeing privacy?

Possibility/impossibility. Algorithms which can guarantee privacy and utility are great, but on the flip side we should try to identify when privacy might be impossible to guarantee. This would have implications for higher-level questions about system architectures and policy.

Domain-specific questions. In some cases all of the setup is established: we want to compute function F on dataset D under differential privacy and the question is to find algorithms with optimal utility for fixed privacy loss or vice versa. Still, identifying those questions and writing them down would be a great outcome.
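To make the "compute function F on dataset D under differential privacy" setup concrete, here is a minimal sketch of the standard Laplace mechanism for releasing a dataset mean. This is a generic textbook illustration, not a construction from any particular paper discussed at the workshop; the function and variable names are mine.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value plus Laplace noise with scale sensitivity/epsilon.

    This gives epsilon-differential privacy when `sensitivity` bounds how
    much true_value can change if one record in the dataset changes.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release the mean of n records, each in [0, 1].
# Changing one record moves the mean by at most 1/n, so sensitivity = 1/n.
data = np.array([0.2, 0.5, 0.9, 0.4])
n = len(data)
private_mean = laplace_mechanism(data.mean(), sensitivity=1.0 / n, epsilon=0.5)
```

The "privacy-utility tradeoff" is visible directly in the noise scale: smaller epsilon (more privacy) means larger noise and worse utility, and the domain-specific question is whether this scaling is optimal for the function at hand.
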

In addition to all of this, there is a student poster session, a welcome reception, and lunches. It’s going to be a packed 3 days, and although I will miss the very end of it, I am excited to learn a lot from the participants.

We (really Mohsen and Zahra) had a paper nominated for a student paper award at CAMSAP last year, but since both student authors are from Iran, their single-entry student visas prevented them from going to the conference. The award terms require that the student author present the work (in a poster session) and the conference organizers were kind enough to allow Mohsen to present his poster via Skype. It’s hardly an ideal communication channel, given how loud poster sessions are. Although the award went to a different paper, the experience brought up two questions that are not new but don’t get a lot of discussion.

How should paper awards deal with visa issues? This is not an issue specific to students from Iran, although the US State Department’s visa issuance for Iranian students is stupidly restrictive. Students from Iran are essentially precluded from attending any non-US conference unless they want to roll the dice again and wait for another visa at home. Other countries may also deny visas to students for various reasons. Requiring students to be present at the conference is discriminatory, since the award should be based on the work. Disqualifying a student for an award because of bullshit political/bureaucratic nonsense that is totally out of their control just reinforces that bullshit.

Why are best papers judged by their presentation? I have never been a judge for a paper award and I am sure that judges try to be as fair as they can. However, the award is for the paper and not its performance. I agree that scholarly communication through oral presentation is a valuable skill, but if the award is going to be determined by who gives the best show at the conference, they should retitle these to "best student paper and presentation award" or something like that. Maybe it should instead be based on video presentations to allow remote participation. If you are going to call it a paper award, then it should be based on the written work.

I don’t want this to seem like a case of sour grapes. Not all student paper awards work this way, but it seems to be the trend in IEEE-ish venues. The visa issue has hurt a lot of researchers I know; they miss out on opportunities to get their name/face known, chances to meet and network with people, and the experience of being exposed to a ton of ideas in a short amount of time. Back when I had time to do conference blogging, it was a way for me to process the wide array of new things that I saw. For newer researchers (i.e. students) this is really important. Making paper awards based on presentations hits these students doubly: they can neither attend the conference nor receive recognition for their work.

Kamalika and I gave a tutorial at NIPS last week on differential privacy and machine learning. We’ve posted the slides and references (updates still being made). It was a bit stressful to get everything put together in time, especially given how this semester went, but it was a good experience and now we have something to build on. It’s amazing how much research activity there has been in the last few years.

One thing that I struggled with a bit was the difference between a class lecture, a tutorial, and a survey. Tutorials sit between lectures and surveys: the goal is to be clear and cover the basics with simple examples, but also lay out something about what is going on in the field and where important future directions lie. It's impossible to be comprehensive; we had to pick and choose different topics and papers to cover, and ended up barely mentioning large bodies of work. At the same time, it didn't really make sense to put up a slide saying "here are references for all the things we're not going to talk about." If the intended audience is a person who has heard of differential privacy but hasn't really studied it, or someone who has read this recent series of articles, then a list without much context is not much help. It seems impossible to even make a real survey now, unless you narrow the scope.

As for NIPS itself… I have to say that the rapid increase in size (8000 participants this year) made the conference feel a lot different. I had a hard time hearing/understanding the talks during the short time I was there. Thankfully the talks were streamed/recorded so I can go back to catch what I missed.

I got an email from Venkat Guruswami encouraging those in the TCS community to submit work to the upcoming ISIT 2018 deadline. In particular, since ISIT papers are short (5 pages), it's an ideal venue to publish more technical results or general tools (relevant to information theory) that get used in longer STOC/FOCS/SODA/etc. papers. There was a lively discussion about what the "rules" were for ISIT, but basically:

the proceedings are archival so it counts as a real publication (no submitting the same result elsewhere)

ideal works would be things like coding theory problems of interest to both communities, TCS takes on IT problems, or general standalone results that could be applicable to information theory (or related) problems

The deadline is January 12, 2018. I guess I know what I’ll be doing for my winter vacation…

Hiring areas for this search are: (i) Electronics, including sensors, devices, bioelectronics, as well as integrated circuits and systems for RF and millimeter wave applications, (ii) Information processing and machine learning for autonomous systems and robots, especially learning and control in autonomous systems such as vehicles or drones as well as in assistive technologies, (iii) E-health, especially wearable electronics and sensors, medical informatics, quantified self and personalized medicine, as well as (iv) Cyber-physical systems, including signal processing and machine learning techniques, embedded systems, device and software security, IoT security, and applications to smart cities. Exceptional candidates in the university's strategic areas are also welcome to apply.

I had a horrible "network dropping causes web forms to clear" experience when filing an NSF report a few years back, so I switched to filling things in via the NSF's Word template. However, the extraneous formatting in that made the cut-and-paste into the web form tedious. So this time around I created a Markdown (.md) template with all of the questions you need to answer. This makes it easier to edit and lightly format your report text offline (e.g. on a plane) for much faster cut-and-paste later.
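For illustration, here is a minimal sketch of what such a template might look like. The section and question headings below are paraphrased from the kinds of prompts in the NSF annual report form, not the exact wording, so you would want to copy the actual questions from Research.gov into your own copy.

```markdown
# NSF Annual Report — [award number]

## Accomplishments

### What are the major goals of the project?

(text here)

### What was accomplished under these goals?

(text here)

## Products

(publications, software, datasets)

## Impact

(text here)
```

Plain Markdown like this survives cut-and-paste into the web form cleanly, since there is no hidden Word formatting to strip out.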