Abstract: The large-scale gathering and storage of personal data is raising new questions about the regulation of privacy. On the technology side, there has been a flurry of recent work on new models for privacy risk and protection. One such model is differential privacy, which quantifies the privacy risk to an individual whose data is included in a database. Differentially private algorithms introduce noise into their computations to limit this risk, allowing the output to be released publicly. I will describe new algorithms for differentially private machine learning tasks such as learning a classifier and principal components analysis (PCA). I will describe how guaranteeing privacy affects the performance of these algorithms, present results on real data sets, and discuss some exciting future directions.
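To make the noise-addition idea concrete, here is a minimal sketch (not from the talk itself) of the classic Laplace mechanism applied to a mean query. The function name and parameters are illustrative; the key point is that the noise scale is the query's sensitivity divided by the privacy parameter epsilon.

```python
import numpy as np

def private_mean(data, epsilon, lower=0.0, upper=1.0, rng=None):
    """Release the mean of `data` with epsilon-differential privacy
    via the Laplace mechanism (illustrative sketch).

    Values are clipped to [lower, upper], so changing any single
    record changes the mean by at most (upper - lower) / n -- the
    query's sensitivity. Laplace noise with scale sensitivity/epsilon
    then masks any one individual's contribution."""
    rng = np.random.default_rng() if rng is None else rng
    data = np.clip(np.asarray(data, dtype=float), lower, upper)
    n = len(data)
    sensitivity = (upper - lower) / n
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return data.mean() + noise
```

Smaller epsilon means stronger privacy but more noise, which is exactly the privacy/performance trade-off the abstract refers to.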

Parts of this work are joint with Kamalika Chaudhuri, Claire Monteleoni, Kaushik Sinha, Staal Vinterbo, and Aziz Boxwala.

Biography: Anand Sarwate is a Research Assistant Professor at the Toyota Technological Institute at Chicago, a philanthropically endowed academic institute located on the University of Chicago campus. Prior to that he was a postdoctoral researcher at the Information Theory and Applications Center (ITA) at UC San Diego. He received his PhD from UC Berkeley in 2008, and undergraduate degrees in Mathematics and Electrical Engineering from MIT in 2002. He is broadly interested in statistical algorithms applied to problems in distributed systems, signal processing, communications, and privacy and security.